Response | Instruction | Prompt
---|---|---|
Fix this Last-Modified: Tue, 12 Oct 2010 18:11:57 GMT. Send the current time in the Last-Modified header.
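A minimal sketch of that advice in PHP (the exact header set is illustrative; the key point is sending the current time as Last-Modified alongside the anti-cache headers):

```php
<?php
// Send a fresh Last-Modified plus the usual anti-cache headers
// before emitting the captcha image.
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
header("Cache-Control: no-cache, no-store, must-revalidate");
header("Pragma: no-cache");
header("Expires: 0");
// ... generate and output the captcha image here ...
```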
|
So I have this form here: http://www.piataterenuri.info/register.php
The problem is that Firefox caches the captcha image, and after an incorrect input or a page refresh the captcha shows the old image instead of the current one.
I've already placed:
header("Cache-Control: no-cache");
header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");
And also changed the captcha img src to
<?php $rand=microtime() * mktime(); echo "src=\"captcha.php?time=$rand\""; ?>
What else can I do to stop Firefox from caching the image?
|
Stop FIREFOX from caching captcha
|
The browser won't know if ErrorMessages.xml has been updated on the server. It has to issue a request to check if the file has been modified.
You may want to set the ifModified option to true in your jQuery $.ajax() request, since this is set to false by default:
$.ajax({
    type: "GET",
    ifModified: true,
    async: false,
    url: "../../../ErrorMessages.xml",
    dataType: "xml",
    success: function (xml) {
        // ..
    }
});
Quoting from the jQuery.ajax() documentation:
ifModified (Boolean)
Default: false
Allow the request to be successful only if the response has changed since the last request. This is done by checking the Last-Modified header. Default value is false, ignoring the header. In jQuery 1.4 this technique also checks the 'etag' specified by the server to catch unmodified data.
As long as your web server supports the Last-Modified header, then the first request to the XML would look something like this:
GET /ErrorMessages.xml HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Last-Modified: Wed, 06 Oct 2010 08:20:58 GMT
Content-Length: 1234
However, subsequent requests to the same resource will look like this:
GET /ErrorMessages.xml HTTP/1.1
Host: www.example.com
If-Modified-Since: Wed, 06 Oct 2010 08:20:58 GMT

HTTP/1.1 304 Not Modified
If the web server finds that the file has been modified since the If-Modified-Since header date, it will serve the file normally.
|
I am reading an XML file using an $.ajax() request.
$.ajax({
    type: "GET",
    async: false,
    url: "../../../ErrorMessages.xml",
    dataType: "xml",
    success: function (xml) {
        $(xml).find("*[Name='" + Field + "']").each(function () {
            message = $(this).find(Rule).text();
        });
    }
});
I want to make the call only when the resource ErrorMessages.xml has been updated; otherwise, use the browser cache.
|
$.ajax() call only on update
|
Thanks for the responses.
I am all for the alternative below:
Load the keystore once, extract the SecretKey, assign it to an instance or class variable of the class you are using, and then use that SecretKey whenever you need to encrypt or decrypt.
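A rough sketch of that alternative (the class, field, and alias names are illustrative, not from the original post), assuming a JCEKS keystore holding the AES key:

```java
import java.io.InputStream;
import java.security.KeyStore;
import javax.crypto.SecretKey;

public class KeyCache {
    private static SecretKey cachedKey;

    // Load the keystore a single time; afterwards only the SecretKey is
    // kept in memory, so no InputStream needs to be re-read.
    public static synchronized SecretKey getKey(InputStream keystoreStream,
                                                char[] storePass,
                                                String alias) throws Exception {
        if (cachedKey == null) {
            KeyStore ks = KeyStore.getInstance("JCEKS"); // JCEKS can hold SecretKeys
            ks.load(keystoreStream, storePass);
            cachedKey = (SecretKey) ks.getKey(alias, storePass);
        }
        return cachedKey;
    }
}
```

This also explains the "Invalid keystore format" error: a keystore is binary data, so round-tripping it through a UTF-8 String corrupts it. Caching the SecretKey object sidesteps the problem entirely.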
|
I am using AES to accomplish symmetric-key encryption.
I store the Key in a password protected KeyStore.
the api exposes the following for loading the key from keystore
keyStore.load(inputStream, keyStorePassword.toCharArray());
So every time I want to encrypt or decrypt, I have to pass the InputStream, which is, at least in my opinion, a performance hit, as it has to read the content afresh every time.
Could anyone please help me out with a strategy for storing it in memory and, from then on, accessing it and converting it to an InputStream?
Note :
I did try to read the contents of the keystore into a String (UTF-8), convert it to an InputStream, and pass it to the API, but it spat out the following exception:
java.io.IOException: Invalid keystore format
|
Cache the contents of a KeyStore and convert it to an InputStream
|
We have seen a similar problem appear when we entered Daylight Savings Time (British Summer Time in the UK).
Check that the date/time on the server is not out of sync with the real world (usually by an hour). This may not be the cause of your problem, but best to check it anyway.
|
I've just uploaded my Silverlight web page to my production web server. The issue I'm having is that every time I open the website in the browser it downloads a new copy of the XAP.
What could be causing this issue?
|
silverlight XAP gets downloaded every time
|
By the time of the subsequent runs of the report, the process has already expanded its memory and various caches have been filled.
Never having seen your app, my guess would be the biggest effect is that your database server caches the data you query for. It loads the data off disk and into memory, and having nothing better to do with that memory, it leaves it there. Next time the query comes along, the database doesn't have to go to disk for the data, it's still there in memory.
The obvious and simplest way to exploit this is to run one "fake" query before your users are let loose on the system; that would mean you suck up the 1800 ms wait and your users get the sweet 400. Unfortunately, this will only work if all queries are the same, i.e. if everybody requests the same report. If there are different reports and different data, the caches will be flushed for different data and it will take more time to load the new results.
In short: If you always had the same query, you could give really quick answers, but then you'd never be presenting anything new.
|
I just profiled my reporting application when I tried to generate the same report four times in a row. The first one took 1859 ms, whereas the following ones only took between 400 and 600 ms.
What is the explanation for that? Could I somehow use it to make my application faster?
The reporting module runs on the server and waits for the user to click on "print report".
|
speeding up jasperreports
|
You can read about page output caching on MSDN: http://msdn.microsoft.com/en-us/library/ms178597.aspx
The directive allows you to specify settings such as whether a page should have its output cached, for how long, and other configuration options.
Output caching can be a performance benefit to a page that is expensive to run. Once it is cached, future requests will be served from the cache without the page having to run again.
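For reference, the page-level form of the directive looks like this (the values are illustrative):

```
<%@ OutputCache Duration="60" VaryByParam="none" %>
```

Duration is the number of seconds the rendered output is kept, and VaryByParam controls whether separate cache entries are stored per query-string or form parameter.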
|
I think it is related to the caching concept, but I don't know how it is used.
|
What is the use of the @OutputCache directive in ASP.NET?
|
This code updates the cached object regardless of the hit count. The important line is here:
var hit = (HitInfo)(context.Cache[key] ?? new HitInfo());
It's grabbing a reference to the HitInfo object inside the cache, unless it doesn't exist, in which case it creates a new one. So both the ASP.Net Cache and the local variable hit have a reference to the same object - updating it in this code is updating it in the cache.
In the case where it creates a new one, it then adds it to the cache, so the next time the code executes, the line above will be returning that object. There's no need to remove the object and then re-cache it.
|
I found some code on the web and it threw me off. Look at the code below. You will notice the object only gets added to the cache when Hits == 1. After that, the cache entry isn't explicitly updated. It begs the question: when the object is updated, does the cache update automatically as well? The answer would let me remove some code from some of my classes.
public static bool IsValid(ActionTypeEnum actionType)
{
    HttpContext context = HttpContext.Current;
    if (context.Request.Browser.Crawler) return false;
    string key = actionType.ToString() + context.Request.UserHostAddress;
    var hit = (HitInfo)(context.Cache[key] ?? new HitInfo());
    if (hit.Hits > (int)actionType) return false;
    else hit.Hits++;
    if (hit.Hits == 1)
        context.Cache.Add(key, hit, null, DateTime.Now.AddMinutes(DURATION),
            System.Web.Caching.Cache.NoSlidingExpiration,
            System.Web.Caching.CacheItemPriority.Normal, null);
    return true;
}
I would only guess that I would need to add the lines after the if statement:
if (hit.Hits == 1)
    context.Cache.Add(key, hit, null, DateTime.Now.AddMinutes(10),
        System.Web.Caching.Cache.NoSlidingExpiration,
        System.Web.Caching.CacheItemPriority.Normal, null);
else if (hit.Hits > 1)
{
    context.Cache.Remove(key);
    context.Cache.Add(key, hit, null, DateTime.Now.AddMinutes(10),
        System.Web.Caching.Cache.NoSlidingExpiration,
        System.Web.Caching.CacheItemPriority.Normal, null);
}
Found the code at the bottom of the page here: http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx?msg=2809164
|
Does ASP.NET Cached Objects update automatically with the object updating?
|
Why not use a cron job that cleans up every hour?
Any check you make in every request is bound to be expensive.
If that isn't possible, keeping a central text file to store the modification times might be the best way, but you're going to get locking problems.
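A sketch of such a cron cleanup (the path and the 60-minute threshold are illustrative; this uses -mmin, i.e. modification time, since atime updates are often disabled on servers):

```shell
# Point CACHE_DIR at the real cache folder; run hourly from cron, e.g.
#   0 * * * * CACHE_DIR=/var/www/cache /usr/local/bin/cache-cleanup.sh
CACHE_DIR="${CACHE_DIR:-$(mktemp -d)}"
# Delete cache files not modified for more than 60 minutes.
find "$CACHE_DIR" -type f -mmin +60 -delete
```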
answered Jul 2, 2010 at 9:56 by Pekka
The cron job method works very nice, and you can make in such way that if the webserver is really busy then you may "skip" a cleanup and wait until the cronjob starts again and the server is less busy... basically you have more control and more speed
– Quamis
Jul 2, 2010 at 10:40
|
|
Hi there. I need to implement a cache for my PHP web application. I've implemented the cache file control (saving and getting files from the cache dir), but now I need to enforce a maximum size on the cache folder.
I had the idea of limiting the size by deleting the least recently used files when space is needed. Now, I've read that calling the fileatime function on all the files in the cache dir would slow down my application.
Is there any other method that springs to mind?
(Using a DB (MySQL) to store the last access time of the cache files is, unfortunately, not an option.)
|
Implementing PHP cache with maximum size of cache folder
|
In simple terms data caching is storing data in memory for quick access. Typically information that is costly to obtain (in terms of performance) is stored in the cache. One of the more common items stored in a cache in a Web application environment is commonly displayed database values; by caching such information, rather than relying on repeated database calls, the demand on the Web server and database server's system resources are decreased and the Web application's scalability increased. As Microsoft eloquently puts it, "Caching is a technique widely used in computing to increase performance by keeping frequently accessed or expensive data in memory. In the context of a Web application, caching is used to retain pages or data across HTTP requests and reuse them without the expense of recreating them."
Read more : .NET Data Caching
|
I am a learner. I am learning caching in ASP.NET. There are three types of caching in ASP.NET:
1. Page output caching.
2. Partial output caching.
3. Data caching.
In page output caching, the entire rendered content of the page is saved in the cache, so the page doesn't have to re-execute every time.
In partial output caching, we can apply caching rules to different parts of pages.
But data caching I didn't understand.
Could anyone please explain data caching to me?
Thanks in advance.
|
Data Caching in ASP.NET
|
I don't think so. One trick would be to do this work in an AppDomain. You could create a new AppDomain, do all of your work, report your results, then unload the AppDomain. Not a trivial task and fairly slow but it is the only way to effectively unload assemblies or reflection related caches (that I know of).
|
Per MSDN, calling Type.GetMethods() stores reflected method information in a MemberInfo cache so the expensive operation doesn't have to be performed again.
I have an application that scans assemblies/types, looking for methods that match a given specification. The problem is that memory consumption increases significantly (especially with large numbers of referenced assemblies) since .NET hangs onto the method metadata.
Is there any way to clear or disable this MemberInfo cache?
|
Can the .NET MethodInfo cache be cleared or disabled?
|
I have a fairly large project where we are doing Example 1 - Cache static class with retrieval delegate used from controllers. Actually, in our case we have a service class layer that handles caching and the controllers reference the service layer. The service layer deals with data retrieval, caching, permission checking, etc, while the controllers deal mainly with assembling data from the services into models.
Per your question, you don't necessarily need a static cache helper. You could use DI to inject an instance of your cache helper, so then you could mock it out for testing, etc.
|
I realise there have been a few posts regarding where to add a cache check/update and the separation of concerns between the controller, the model and the caching code.
There are two great examples that I have tried to work with but being new to MVC I wonder which one is the cleanest and suits the MVC methodology the best? I know you need to take into account DI and unit testing.
Example 1 (Helper method with delegate)
...in controller
var myObject = CacheDataHelper.Get(thisID, () =>
WebServiceServiceWrapper.GetMyObjectBythisID(thisID));
Example 2 (check for cache in model class), in controller:
var myObject = WebServiceServiceWrapper.GetMyObjectBythisID(thisID);
then in model class...
if (!CacheDataHelper.Get(cachekey, out myObject))
{
    // do some repository processing
    // add object to cache
    CacheDataHelper.Add(myObject, cachekey);
}
Both use a static cache helper class but the first example uses a method signature with a delegate method passed in that has the name of the repository method being called. If the data is not in cache the method is called and the cache helper class handles the adding or updating to the current cache.
In the second example the cache check is part of the repository method with an extra line to call the cache helper add method to update the current cache.
Due to my lack of experience and knowledge I am not sure which one is best suited to MVC. I like the idea of calling the cache helper with the delegate method name in order to remove any cache code in the repository but I am not sure if using the static method in the controller is ideal?
The second example deals with the above but now there is no separation between the caching check and the repository lookup. Perhaps that is not a problem as you know it requires caching anyway?
|
Can I use a static cache helper method in a .NET MVC controller?
|
Have you looked at Guava's MapMaker class? I think it will do everything you need - although instead of providing a Future, you give the class a Function<? super K, ? extends V> which is used to compute the value.
Looking back over your post, if you really need to put values in there rather than computing them, it won't work as well - but I'll leave the suggestion here in case a computing map is okay for you.
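If a computing map doesn't fit, the ConcurrentHashMap-of-Futures idea from the question can be sketched directly; this is the classic "Memoizer" pattern (after Goetz's Java Concurrency in Practice, names here illustrative), which gives exactly-once creation even under concurrent requests for the same key:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class Memoizer<K, V> {
    private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();

    public V get(K key, Callable<V> creator)
            throws InterruptedException, ExecutionException {
        Future<V> f = cache.get(key);
        if (f == null) {
            FutureTask<V> ft = new FutureTask<>(creator);
            f = cache.putIfAbsent(key, ft); // only one thread wins the race
            if (f == null) {
                f = ft;
                ft.run();                   // the winner computes the value
            }
        }
        return f.get();                     // everyone else blocks on the Future
    }
}
```

Because losers of the putIfAbsent race block on the winner's Future rather than computing again, the heavy creation runs exactly once per key.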
|
I need to cache some objects with fairly heavy creation times, and I need exactly-once creation semantics. It should be possible to create objects for different CacheKeys concurrently. I think I need something that (under the hood) does something like this:
ConcurrentHashMap<CacheKey, Future<HeavyObject>>
Are there any existing open-source implementations of this that I can re-use ?
|
Multithreaded java cache for objects that are heavy to create?
|
We are using the built-in HttpRuntime Caching in our application, and it works very well.
It is very easy to put in place, as there is nothing to install on your servers. Moreover, moving to AppFabric Caching should not be such a big deal later on.
That said, it also comes with some limitations especially if your services are not hosted inside the same IIS application as the cached objects will be duplicated for each of them. If you don't plan to cache much data and/or if you don't plan to cache them for a long time, you should be fine as you won't end up consuming too much RAM.
You don't appear to have load-balanced servers, but in such a scenario, using the HttpRuntime caching also means duplicating the cache on each server. This is the kind of thing you can avoid with memcached or AppFabric Caching...
|
The company I work for is looking to implement a caching solution. We have several WCF Web Services hosted and we need to cache certain values that can be persisted and fetched regardless of a client's session to a service. I am looking at the following technologies:
Caching Application Block 4.1
WCF TCP Service using HttpRuntime Caching
Memcached Win32 and Client
Microsoft AppFabric Caching Beta 2
Our test server is a Windows Server 2003 with IIS6, but our production server is Windows Server 2008, so any of the above options would work (except for AppFabric Caching on our test server).
Does anyone have any experience with any of these? This caching solution will not be used to store a lot of data, but it will need to be fetched from frequently and fast.
Thanks in advance.
|
WCF Caching Solution - Need Advice
|
No, the @Cache annotation goes on entities and on collections. Your composite key will be used as key for the cache entry.
|
My entity class is annotated with @Cache. My primary key is composed of a few table fields and is embedded into the class as an embedded class. Do I need to put @Cache on the embedded class as well?
|
Does a JPA embedded class need to be cacheable?
|
You should only do this if you've profiled and determined that latency caused by the browser queuing up the downloads is a major factor in the performance of your site. If the static resources load in a few milliseconds, but the HTML itself takes 3 seconds to download after the server has to hit the database, then improving those few ms won't do much.
answered Feb 19, 2010 at 22:44 by Joel Martinez
I have already identified that downloading my static resources is a bottleneck. That said, what solution do you advice and why ?
– fabien7474
Feb 19, 2010 at 23:04
@OMG Ponies : This is really not premature optimization ! First, if I do optimization, this is because I have already identified a need for that. Second, take a look at this article (yuiblog.com/blog/2006/11/28/performance-research-part-1) and you will learn that the first step for optimizing is to reduce HTTP requests because 40% of your users are first-users and do not use browser cache. Therefore, you should deal first with your static content that can be optimized and bundled
– fabien7474
Feb 19, 2010 at 23:08
|
|
I am currently in the process of improving my grails website performance and following many best practices I found on the internet, I currently need to make a decision between two solutions before refactoring my code
Solution 1 : Export all of my static resources (js, css, images) to a separate domain and server (as already done by SO team - see here).
Solution 2 : Just keep my resources into my WAR file and configure apache to act as a reverse/caching proxy so that incoming requests to /images, /css, /js etc are all cached by apache.
What do you recommend and what are the pros and cons?
PS: Concerning solution 1, is there any web hosting providers specialized for static content out there?
Thank you.
|
Is it worth having static resources in a separate domain/server?
|
I was able to replicate your issue on my test server. I then changed from jQuery 1.3.2 to 1.4.1. With jQuery 1.4.1 it doesn't add the cache-breaking string.
<script type="text/javascript" src="jquery-1.4.1.min.js"></script>
Of course, using 1.4.1 might not be an option for you.
|
I'm loading a view page via an $.ajax() call with jQuery. I'm explicitly setting the "cache" option to true. Nowhere in the application are we using $.ajaxSetup() to specify otherwise.
Here's the ajax request setup:
$(".viewDialogLink").click(function() {
$.ajax({
url: $(this).attr("href"),
dataType: "html",
type: "GET",
cache: true,
success: function(data) { $("#dlgViews").html(data).dialog("open"); }
});
return false;
});
The response comes back successfully. The dialog opens, and some content displays.
HOWEVER
There are script tags in the returned html. For example:
<script type="text/javascript" src="http://../jsapi/arcgis/?v=1.4"></script>
Now - in the response text, these look normal. But the actual browser requests for these scripts, as seen from FireBug, include a cache-breaker parameter in the query string. They look like:
http://serverapi.arcgisonline.com/jsapi/arcgis/?v=1.4&_=1264703589546.
None of the other resources in the loaded html - css or images - include the cache breaker in their request.
What is going on? How do I turn this cache breaker off?
|
jQuery (or maybe the browser) is cache-breaking ajax loaded scripts
|
In C (or C++) you have fine-grained control over the exact size of each data structure. You also have the possibility of fine-grained control over storage allocation. You can, after all, extend the new method, use malloc directly and otherwise structure memory to create spatial locality.
In most dynamic languages (like Python) you have no control at all over the exact size of anything, much less its location.
In Python you may have some temporal locality, but you have little or no spatial locality.
Temporal locality might be enhanced through simple memoization of results. This is such a common speed-up that people often include a memoization decorator to uncouple the memoization (temporal locality) from the core algorithm.
I don't think that C or C++ cache-oblivious implementations translate to dynamic languages because I don't think you have enough control. Instead, just exploit memoization for speed-ups.
http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize
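The memoization decorator described above can be sketched in a few lines (functools.lru_cache in the standard library provides the same idea ready-made):

```python
import functools

def memoize(func):
    """Cache results keyed by the positional arguments (temporal locality)."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    # Without memoization this recursion is exponential; with it, linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```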
|
I've been reading recently about cache-oblivious data structures like auxiliary buffer heaps. These data structures work by keeping their most-recently-accessed elements in cache memory so any subsequent access is also quicker.
Most of these data structures are implemented with a low-level language like C/C++. Is it worth it to try to port these data structures over to a dynamic language like Python, or does the overhead of running on a virtual machine ruin all the performance benefits of these data structures? It seems like the latter, but I thought I would ask to see if someone actually has some experience with it.
|
Cache-oblivious data structures and dynamic languages - effective?
|
I think what you have is probably the cleanest option.
Another option, which I haven't tested, may be to set the VaryByCustom parameter and override GetVaryByCustomString in Global.asax.
public override string GetVaryByCustomString(HttpContext context, string arg)
{
    if (arg.ToLower() == "id")
    {
        // Extract and return value of id from query string, if present.
    }
    return base.GetVaryByCustomString(context, arg);
}
See here for more information: http://codebetter.com/blogs/darrell.norton/archive/2004/05/04/12724.aspx
|
I have the following action:
public class HomeController : Controller
{
public ActionResult Index(int? id) { /* ... */ }
}
I'd like to [OutputCache] that action, but I'd like that either:
it doesn't use the cache if id == null; or
it uses the cache if id == null but with a different duration.
I think I can achieve this by:
public class HomeController : Controller
{
    [OutputCache(VaryByParam = "none", Duration = 3600)]
    public ActionResult Index() { /* ... */ }

    [OutputCache(VaryByParam = "id", Duration = 60)]
    public ActionResult Index(int id) { /* ... */ }
}
However this solution implies 2 actions, when the id is actually optional, so this might create some code repetition. Of course I could do something like
public class HomeController : Controller
{
    [OutputCache(VaryByParam = "none", Duration = 3600)]
    public ActionResult Index() { return IndexHelper(null); }

    [OutputCache(VaryByParam = "id", Duration = 60)]
    public ActionResult Index(int id) { return IndexHelper(id); }

    private ActionResult IndexHelper(int? id) { /* ... */ }
}
but this seems ugly.
How would you implement this?
|
ASP.NET MVC OutputCacheAttribute: do not cache if a parameter is set?
|
Actually, I would look here for an analysis of Judy trees:

As illustrated in this data, Judy's smaller size does not give it an enormous speed advantage over a traditional "trade size for speed" data structure. Judy has received countless man-hours developing and debugging 20,000 lines of code; I spent an hour or three writing a fairly standard 200-line hash table.

If your data is strictly sequential, you should use a regular array. If your data is often sequential, or approximately sequential (e.g. an arithmetic sequence stepping by 64), Judy might be the best data structure to use. If you need to keep space to a minimum (you have a huge number of associative arrays, or you're only storing very small values), Judy is probably a good idea. If you need a sorted iterator, go with Judy. Otherwise, a hash table may be just as effective, possibly faster, and much simpler.
answered Dec 24, 2009 at 20:37 by Joe Soul-bringer
|
|
Is there someplace where I can get a Big-O style analysis / comparison of traditional data structures such as linked lists, various trees, hashes, etc vs. cache aware data structures such as Judy trees and others?
|
Big-O and Cache Aware Data Structures & Algorithms
|
Unlike user controls, you can't OutputCache a Master page by itself--only as part of a Page.
Also, OutputCaching won't help the performance of a toolbar with lots of images anyway.
The kind of things that would help include image sprites, client-side caching, using a CDN, using multiple domains for static files, etc.
In case it's helpful, I cover those strategies in my book: Ultra-Fast ASP.NET.
|
How can I cache the master page in ASP.NET?
|
Cache Master Page in ASP.NET
|
When the PHP interpreter exits it calls fclose() on all open files, thus releasing any locks. Maybe you haven't found the right problem yet.
However, when you need to clean up something before the script terminates - use register_shutdown_function(). The registered function will be called even when the script terminates with an error.
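A sketch of that cleanup for the lock-file case (the filenames are illustrative, not from the original post); shutdown functions run even after a fatal error such as a max_execution_time timeout:

```php
<?php
$lockFile = '/path/to/cache/cache.lock';
$handle = fopen($lockFile, 'x'); // 'x' mode fails if the lock already exists

register_shutdown_function(function () use ($handle, $lockFile) {
    // Runs on normal exit, fatal error, or timeout: release and remove the lock.
    if (is_resource($handle)) {
        fclose($handle);
    }
    @unlink($lockFile);
});
```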
|
I have a caching system built in PHP that stores the results of a MySQL query in XML. I lock the cache while building by creating a lock file with an exclusive write handle, and then remove it once the cache file is completed.
However, there are times when the script either times out or halts mid-execution, leaving the lock file in place and making any further executions of the script think that the cache is always updating.
I have tried checking whether the file is a number of minutes old and attempting to get exclusive write access to the file to refresh the lock and begin execution, but it appears that the file is still open with the previous handle, and I cannot open a new handle to ensure the current process is the only one with access and re-lock the file.
Is there a way to ensure that if the script is halted mid-execution, any open file handles get closed and the files involved are available for future processes to access?
Thanks
|
How to ensure that a file handle is closed if a php script times out?
|
The nsICacheVisitor is an interface that you implement and pass to the visitEntries method on nsICacheService. See this test file for example code.
|
Referring to nsICacheService (https://developer.mozilla.org/en/NsICacheService) and nsICacheVisitor (https://developer.mozilla.org/en/nsICacheVisitor):
Where do I get an instance of nsICacheVisitor?
Where do I get a list of devices so that I can call visitEntry() and visitDevice()?
|
Firefox (Gecko) code - interrogating the cache - how do I get a list of devices?
|
Do you mean you want to cache the results of hibernate queries, in addition to the entities? If so, then you need to look at query caching.
|
I have a website that allows searches for lists of content in various ways, for instance "show stuff created by user 523 ordered by date" or "show a list of the most recent 10 posts."
I use Hibernate for my ORM, and Hibernate provides a cache for objects. For lists of objects, though, like the front page list of most recent content, I'm at a loss as how best to cache that content. Right now, I have my Spring controllers just return a standard JSP page, and then I use oscache at the JSP level wrapped around a call to another class.
This seems inelegant, though. What I really want is for my controller to have access to a cached result if one's available so that the JSP can just be concerned with displaying results.
What are my options here?
|
caching spring/hibernate webapp
|
I haven't been able to test it yet, but I think that one of the techniques explained in the "Circumventing Class Loader Isolation" chapter of the Application Development Guide for the version of Glassfish you are using may solve your problem.
Short version, at least valid for versions 2-3-4 : use the Common Classloader (what exactly this common classloader does and its relation to the other classloaders is explained in the same manual). There are several ways to do this:
copy the jar to domain-dir/lib
or copy the jar to as-install/lib
or run asadmin add-library --type common /path/to/your.jar (will only work in version 4 iirc)
There are several questions here on SO that are related to "Circumventing Class Loader Isolation" (just use that search term), look there for examples and more discussion.
|
I am using JCS to store LDAP search results which should be shared by multiple EJBs. I have created a singleton class to initialize JCS only once, but due to the EJB classloader it gets initialized multiple times, each with its own copy, so the search resources are not shared.
How are you resolving the issue where you need to share a cache across multiple beans?
I am looking for a cache within the JVM (not a remote one, e.g. memcached).
Glassfish is used as an application server.
|
How do you share Java Caching System (JCS) resource across multiple EJB
|
It's a long shot but try deleting the all.js file and see if the app rebuilds it correctly. Once the file is in place Rails seems to never try to rebuild it, so if it was badly formed because of some bug or whatnot it may have been left there.
answered Nov 24, 2010 at 17:12 by Matt Schwartz
It's been too long since I've had this problem, but this is a great recommendation for debugging if I come across is again.
– James A. Rosen
May 25, 2011 at 16:34
Worked for me! I had an issue where it was serving up an old file after deploying multiple times and deleting the cached file seems to have fixed it.
– Daniel X Moore
Jun 1, 2011 at 6:26
|
|
I have my CSS and JS set to cache in RAILS_ROOT/app/views/layouts/application.html.erb:
<%= stylesheet_link_tag 'reset', ...
'layout', 'colors', :cache => 'cache/all' %>
<%= javascript_include_tag 'jquery-1.3.2.min', ...
'application', :cache => 'cache/all' %>
If I turn on caching in my development environment, everything works as planned:
# in RAILS_ROOT/config/environments/development.rb:
config.action_controller.perform_caching = true
When I put the same line in staging, though, /stylesheets/cache/all.css is generated properly, but /javascripts/cache/all.js isn't. The line is generated in the HTML as though it were, though:
<script src="/javascripts/cache/all.js?1253556008" type="text/javascript"></script>
Going to that URL yields an empty JS file (though not a 404, oddly). There is no file on the file-system (under RAILS_ROOT/public/javascripts/cache/all.js).
Any thoughts?
|
Why is Rails' asset cache not working for JS in the staging environment?
|
It looks like the second level cache (the one associated with the session factory) should be disabled so the only other thing I can suggest is to explicitly clear the cache with the call:
sessionFactory.evict(Alert.class)
Note: read the comments for the full answer.
|
I have a small system consisting of a .net client and a java web service.
The .net client inserts an object into the database, and then calls the web service. The web service tries to retrieve this object using hibernate. First time it works fine, but every other time it says that there is no object with the given identifier.
I have checked the database manually and the row is indeed there! (I debugged the web service and checked for the row even before a session was opened).
SOLUTION
Added this to the hibernate config file
<property name="connection.isolation">1</property>
Here's what I've tried so far:
The second level cache is disabled
Added .setCacheMode(CacheMode.REFRESH)
Here's the failing code:
Session session = Program.HibernateUtil.getSessionFactory().openSession();
try
{
return (Alert)session.load(Alert.class, id);
} ...
|
Hibernate doesn't notice database updates made from other source
|
Even if each page you cache is only 5 KB, that still adds up over time - cache 200 pages and you've used an additional 1 MB in your DB; cache 20,000 pages and you've used 100 MB - and many pages (when you consider the markup+content) are going to be larger than 5 KB.
One alternative option would be to save pages to disk as (potentially compressed) files in a directory and then simply reference the saved filename in your database - if you don't need to search through the contents of the page code via query after your initial datamining, then this approach could reduce the size of your database and query results while still storing the full pages.
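That disk-backed approach can be sketched in Python; the cache directory name, the hashing scheme, and the choice of gzip here are illustrative assumptions, not part of the original answer:

```python
import gzip
import hashlib
import os

CACHE_DIR = "page_cache"  # hypothetical directory for cached pages

def save_page(url: str, html: str) -> str:
    """Compress the page body to disk; return the filename to store in the DB."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = hashlib.sha1(url.encode("utf-8")).hexdigest() + ".html.gz"
    path = os.path.join(CACHE_DIR, name)
    with gzip.open(path, "wt", encoding="utf-8") as f:
        f.write(html)
    return name  # store this short string in the database instead of the page body

def load_page(name: str) -> str:
    """Read a previously cached page back from disk."""
    with gzip.open(os.path.join(CACHE_DIR, name), "rt", encoding="utf-8") as f:
        return f.read()
```

The database row then only holds the filename, keeping queries small while the full page remains retrievable.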
|
I'm working on a project involving datamining from various sites; a good analogy is gathering statistical data on eBay auctions. However, as well as storing the key data, I really need to ensure access to the original page, and on some sites the original pages may not be permanent - like if eBay removed an auction's page after completion. I'd ideally like a system similar to how Google caches pages, e.g. storing a copy of the page on my own server. However, I've been advised there may be complications as well as a big impact on the resources needed for my database.
|
Best way to cache pages in a database?
|
In this example, the LinkedHashMap is being extended with an "anonymous inner class".
The removeEldestEntry method is overriding the super-class's version, which always returns false (indicating the eldest entry should not be removed). The overriding version returns true if the size of the map exceeds the limit, indicating that the oldest entry should be removed.
|
The standard example for implementing LRU Cache in Java points to the example depot url
http://www.exampledepot.com/egs/java.util/coll_Cache.html
How is removeEldestEntry called by default after just adding a new entry in the code snippet below?
final int MAX_ENTRIES = 100;
Map cache = new LinkedHashMap(MAX_ENTRIES + 1, .75F, true) {
    // This method is called just after a new entry has been added
    public boolean removeEldestEntry(Map.Entry eldest) {
        return size() > MAX_ENTRIES;
    }
};

// Add to cache
Object key = "key";
cache.put(key, object);

// Get object
Object o = cache.get(key);
if (o == null && !cache.containsKey(key)) {
    // Object not in cache. If null is not a possible value in the cache,
    // the call to cache.contains(key) is not needed
}

// If the cache is to be used by multiple threads,
// the cache must be wrapped with code to synchronize the methods
cache = (Map)Collections.synchronizedMap(cache);
|
Question about LRU Cache implementation in Java
|
A couple of thoughts:
You could store them using a dbms like Derby, which is built into many versions of java
You could store them in a compressed output stream that writes to bytes - this would work especially well if the data is easily compressed, i.e. regularly repeating numbers, text, etc
You could upload portions of the arrays at a time, i.e. as you generate them, begin uploading pieces of the data up to the servers in chunks
|
I'm currently developing a system where the user will end up having large arrays (on Android). However, the JVM memory is at risk of running out, so to prevent this I was thinking of creating a temporary database and storing the data there. One concern that comes to mind is that the SD card is limited in read and write performance; another is the overhead of such an operation. Can anyone clear up my concerns, and also suggest a good alternative for handling large arrays? (In the end these arrays will be uploaded to a website by writing a CSV file and uploading it.)
Thanks,
Faisal
|
Caching large Arrays to SQLite - Java/ Android
|
Use clearCache(), possibly automatically in a model afterSave callback:
// model file:
function afterSave($created) {
clearCache();
}
(Please also document other available solutions, this is the only one I could find.)
|
Even though it's documented that CakePHP will automatically clear the view caches when a model is updated, it doesn't.
It is important to remember that Cake will clear a cached view if a model used in the cached view is modified. For example, if a cached view uses data from the Post model, and there has been an INSERT, UPDATE, or DELETE query made to a Post, the cache for that view is cleared, and new content is generated on the next request.
Even calling the suggested Cache::clear() method manually does nothing. How do I clear the view cache in Cake?
(As of version 1.2.2.8120. Looking at the repository commits, even .8256 should have this problem.)
|
Why doesn't Cache::clear() clear my (view) cache? (CakePHP)
|
Phil Haack has written a good article about that, called donut hole caching: http://haacked.com/archive/2009/05/12/donut-hole-caching.aspx
|
How do I cache an individual user control with ASP.NET MVC? I also need the VaryByParam etc support that usually comes with ASPX Output Caching. I don't want to cache the entire action though, only one of my user controls in the view.
An example would be nice :) Thank you!
|
ASP.NET MVC, Cache individual User Control
|
You should set appropriate HTTP cache headers on that generated XML file, with an expiry matching the daemon's next update time, so any client request past that time will fetch the new version and cache it locally, like any other static content.
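As a rough sketch of how such headers could be computed server-side (framework-agnostic; the next-update timestamp is assumed to come from the daemon's schedule):

```python
from email.utils import formatdate
import time

def cache_headers(next_update_epoch: float) -> dict:
    """Build HTTP headers letting clients cache the XML until the daemon's next run."""
    max_age = max(0, int(next_update_epoch - time.time()))
    return {
        "Cache-Control": f"public, max-age={max_age}",
        "Expires": formatdate(next_update_epoch, usegmt=True),
    }
```

With these headers the browser serves the XML from its local cache until the known update time, then refetches automatically.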
|
I have an ajax application where the client might look up a bunch of data frequently (say, on each keystroke); the data gets updated on the server side once or twice a day at fixed times by a daemon process. To avoid hitting the server frequently, I store the data in an XML file, so the client downloads it once when the page first loads, then looks the data up in the local data file via JavaScript.
But the user might load the page shortly before the changes, then start using it without ever refreshing the page, so the data file never gets updated, hence keep telling user the new data is not available.
How do I solve this issue?
|
cache or not to cache
|
There's Django's thread-safe in-memory cache back-end, see here. It's cPickle-based, and although it's designed for use with Django, it has minimal dependencies on the rest of Django and you could easily refactor it to remove these. Obviously each process would get its own cache, shared between its threads; If you want a cache shared by all processes on the same machine, you could just use this cache in its own process with an IPC interface of your choice (domain sockets, say) or use memcached locally, or, if you might ever want persistence across restarts, something like Tokyo Cabinet with a Python interface like this.
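A minimal sketch of such a per-process, thread-safe cache (an illustration of the idea, not Django's actual backend):

```python
import threading
import time

class LocalCache:
    """Per-process cache shared by all threads; entries expire after a TTL."""
    def __init__(self, default_ttl=300):
        self._data = {}
        self._lock = threading.Lock()
        self._default_ttl = default_ttl

    def set(self, key, value, ttl=None):
        expires = time.monotonic() + (ttl if ttl is not None else self._default_ttl)
        with self._lock:
            self._data[key] = (value, expires)

    def get(self, key, default=None):
        with self._lock:
            item = self._data.get(key)
            if item is None:
                return default
            value, expires = item
            if time.monotonic() >= expires:
                del self._data[key]  # lazily drop expired entries on read
                return default
            return value
```

A module-level instance of this survives for the life of the worker process, which is exactly the "available until the server is restarted" behavior asked about.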
|
I have been looking into different systems for creating a fast cache in a web-farm running Python/mod_wsgi. Memcache and others are options ... But I was wondering:
Because I don't need to share data across machines, wanting each machine to maintain a local cache ...
Does Python or WSGI provide a mechanism for Python native shared data in Apache such that the data persists and is available to all threads/processes until the server is restarted? This way I could just keep a cache of objects with concurrency control in the memory space of all running application instances?
If not, it sure would be useful
Thanks!
|
Python/mod_wsgi server global data
|
For HTML pages it's difficult. I turned off client caching for that same reason, and tried to make the server caching as efficient as possible. I now use OutputCache with VaryByCustom set to the login status.
We ran some load tests on that system and the only bottleneck is the bandwidth that this generates.
And on a side note: I used donut-caching for the login status. But I was not able to get it to work with dynamic compression (to reduce the bandwidth bottleneck mentioned above)
See also this question
|
In my ASP.net MVC application, I've got several views that I'd like to set to save in the browser's cache. I've got the methods built to do it, but here's my issue.
The menu in my site is different between logged in and logged off visitors. If the logged in page is cached, then even when the user logs off the menu remains in the logged in mode. It's actually not, but on that visitor's browser it is.
How can I go about clearing/expiring that cache so the visitor's browser updates when I need it to, yet still be able to make use of browser cache?
Thanks in advance!
|
How to clear/expire browser cache on log off?
|
OK, I fixed it.
Here is what I did for anyone else and for my own future reference:
// Check for repeated request for the same image from a browser
if (HttpContext.Current.Request.Headers.Get("If-None-Match") == imgRepGetCache.DateCached.Value.ToString())
{
    // Return 304 - Not Modified
    HttpContext.Current.Response.Status = "304 Not Modified";
}
else
{
    if (imgRepGetCache.DateCached.HasValue)
        HttpContext.Current.Response.Headers.Set("Etag", imgRepGetCache.DateCached.Value.ToString());

    // ... do my other stuff here
}
Works a charm!
If anyone spots any potential problems here, let me know so I can update this.
To pre-empt one obvious one - I can 100% rely on the date string for identifying whether an image is new or not (in my particular scenario).
|
I have a custom handler that is returning an image to the browser.
The images are fetched from a database.
For some reason the images are not being cached by the browser, and I was wondering if someone might be able to spot what I am missing from the below code:
HttpContext.Current.Response.BinaryWrite(imageBytes);
HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.Public);
HttpContext.Current.Response.Cache.SetAllowResponseInBrowserHistory(true);
if(imgRepGetCache.DateCached.HasValue)
HttpContext.Current.Response.Cache.SetLastModified(imgRepGetCache.DateCached.Value);
HttpContext.Current.Response.Cache.SetExpires(DateTime.Now.AddDays(2));
HttpContext.Current.Response.ContentType = "image/jpeg";
Or alternatively if I'm completely missing the point somehow and there's somewhere else I need to look.
Edit: As per request for more info:
The URL is always the same
I am testing loading the same file via standard IIS pipe and my pipe in the same browser on the same PC. The one that loads through IIS normally is cached, my file isn't.
Edit 2: After inspecting the HTTP requests/responses on the normal IIS route I think it has something to do with the ETag. The ETag (which I'm new to as of just now) seems to be a sort of checksum for the document. On subsequent requests by a browser the ETag is sent and if the server finds the ETag hasn't changed then it returns a 304 - Not Modified. All good! But I'm now setting the ETag using:
HttpContext.Current.Response.Cache.SetETag(imgRepGetCache.DateCached.ToString());
But it doesn't appear in the response. Closer...
Edit 3: I fixed it in the end after taking advantage of Firebug for some HTTP inspecting fun. I posted my solution below.
|
Why isn't my custom delivered image caching in the browser?
|
An application pool may specify the maximum virtual memory size that a worker process can allocate. This setting will affect the maximum size of data that the Application object can hold.
If this setting is not specified (or is larger than 2GB), then another factor is whether the process is running in 32-bit mode. If so, you could only expect to get a maximum of 1.5GB (if that) in the Application object, regardless of how much memory is present on the server.
On a 64-bit server running the worker process as a 64-bit process, it would be able to consume as much RAM and pagefile as it can get.
|
I am creating an ASP script that uses the application object to store the pages. The question in my mind is whether there is a size limit to this object. Anyone know?
|
What is the size limit of the application object in classic asp?
|
Out of your selection I've only ever attempted to use memcached, and even then it wasn't the C#/.NET libraries.
However memcached technology is fairly well proven, just look at the sites that use it:
...The system is used by several very large, well-known sites including YouTube, LiveJournal, Slashdot, Wikipedia, SourceForge, ShowClix, GameFAQs, Facebook, Digg, Twitter, Fotolog, BoardGameGeek, NYTimes.com, deviantART, Jamendo, Kayak, VxV, ThePirateBay and Netlog.
I don't really see a reason to look at the other solutions.
Good Luck,
Brian G.
|
I am currently looking at a distributed cache solution.
If money was not an issue, which would you recommend?
www.scaleoutsoftware.com
ncache
memcacheddotnet
MS Velocity
|
Distributed Cache/Session where should I turn? [closed]
|
Being straight to the point: no, in that case it would not be useful.
Transformations have lazy evaluation in Spark. I.e., they are recorded but the execution needs to be triggered by an Action (such as your count).
So, when you execute df3.count() it will evaluate all the transformations up to that point.
If you do not perform another action, then it is certain that adding .cache() anywhere will not provide any performance improvement.
However, even if you do more than one action, .cache() [or .checkpoint(), depending on your problem] sometimes does not provide any performance increase. It highly depends on your problem and on the cost of your transformations - e.g., a join can be very costly.
Also, if you are running Spark in its interactive shell, .checkpoint() can sometimes be better suited after costly transformations.
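The underlying point - caching only pays off when a result is actually reused - can be shown outside Spark with a plain-Python sketch (the hypothetical expensive_transform stands in for a costly transformation):

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def expensive_transform(x):
    calls["count"] += 1   # counts how often the real work actually runs
    return x * x

expensive_transform(4)    # first "action": computes; calls["count"] is now 1
expensive_transform(4)    # second "action": served from cache; count stays 1
```

With a single action there is no second call, so the cached value is never read back and caching buys nothing, which is exactly the situation in the question's df3.count() example.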
|
I've been reading about pyspark caching and how execution works. It is clear to me how using .cache() helps when multiple actions trigger the same computation:
df = sc.sql("select * from table")
df.count()
df = df.where({something})
df.count()
can be improved by doing:
df = sc.sql("select * from table").cache()
df.count()
df = df.where({something})
df.count()
However, it is not clear to me if and why it would be advantageous without intermediate actions:
df = sc.sql("select * from table")
df2 = sc.sql("select * from table2")
df = df.where({something})
df2 = df2.where({something})
df3 = df.join(df2).where({something})
df3.count()
In this type of code (where we have only one final action) is cache() useful?
|
When to cache in pyspark?
|
No it does not; it is actually quite a common approach in microservice architecture for a service to store a copy of related data from other services and use some mechanism to keep it in sync (usually async communication via a message broker).
Storing a copy of the data does not transfer ownership of that data away from the service which manages it.
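The sync mechanism can be sketched as an event handler that keeps the local copy fresh (the event shape and names here are made up for illustration):

```python
# Local read-only copy of another service's data, kept fresh by events
# consumed from a message broker.
local_stores = {}

def handle_store_event(event: dict) -> None:
    """Apply a store-changed event published by the store service."""
    if event["type"] in ("store_created", "store_updated"):
        local_stores[event["store_id"]] = event["payload"]
    elif event["type"] == "store_deleted":
        local_stores.pop(event["store_id"], None)
```

The sales service only ever reads local_stores; all writes flow from the owning service's events, so ownership stays where it belongs.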
|
Say I have a service that manages warehouses (and is not very frequently updated). I have a sales service that requires the list of stores (to search through and use as necessary). If I get the list of stores from the store service and save it (let's say in Redis) inside my sales service, but ensure that Redis is updated if the list of stores changes, would it violate the single responsibility principle of microservice architecture?
|
Does storing another service's data violate the Single Responsibility Principle of Microservice
|
Next.js will always call getServerSideProps if the URL changes. As mentioned in the comments, shallow routing does not work if the URL actually changes.
I think there are a couple of ways around this:
set a cache-control: max-age on the response with the time of caching that you want. That way, the request will be made, but it will not hit your server, but come from the browser cache instead. As an advantage, those fetches will also succeed while you're offline.
instruct next to not make the query if the request comes from a client transition. There is an open discussion about this:
Add option to disable getServerSideProps on client-side navigation
shallow routing doesn't really solve it, so it seems you have to make the request and then check whether it comes from SSR or not. This comment has the best workaround, I guess.
use incremental static site regeneration. It basically makes your page static, but it revalidates after a certain time if a request comes in.
|
I'm using NextJS with TanStack Query (formerly ReactQuery). TanStack Query acts as a cache between my NextJS app and the data stored in the backend.
I was previously doing SSR only, but I'm complementing it with TanStack Query for optimistic updates. I previously needed to fetch data on getServerSideProps for every "detail" page, but now I'm thinking that I could skip some of those fetches since I already have the data in the cache and it's still fresh.
For example, let's say we have a TODO app. When I visit /todo/id_1 for the first time, it's nice to have SSR send the page already rendered to the client. If I go somewhere else and come back to /todo/id_1, I know for a fact that the contents of that TODO haven't changed, but I still need to go through SSR.
Would there be a way to skip SSR in that case?
I was hoping I could to something like the following:
<Link href={`/todo/${id}`} skipSsr={cachedTodo[id].notStale} />
|
Skip SSR in NextJS when I already have the data cached and not stale in the client
|
You can set the Cache-Control header inside getServerSideProps to cache its response.
Example from the Caching with Server-Side Rendering documentation:
// This value is considered fresh for ten seconds (s-maxage=10).
// If a request is repeated within the next 10 seconds, the previously cached value will still be fresh.
// If the request is repeated before 59 seconds, the cached value will be
// stale but still render (stale-while-revalidate=59).
// In the background, a revalidation request will be made to populate the
// cache with a fresh value. If you refresh the page, you will see the new value.
export async function getServerSideProps({ req, res }) {
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=10, stale-while-revalidate=59'
  )
  return {
    props: {}
  }
}
You can configure the Cache-Control header value as you see fit. Note that setting a Cache-Control value only works in production mode, as the header will be overwritten in development mode.
|
This question already has an answer here:
How to enable cache for getServerSideProps?
(1 answer)
Closed 1 year ago.
I am building a Next.js app it is supposed to be SSR.
So I have some common data, e.g. navigation and a few other things, which I need to be server-rendered and present on every page. But loading it from my REST API in getServerSideProps on every page transition doesn't seem right.
So I am looking for a way to save this information in some local cache or state and I don't load it the second time when I move to another page in the browser.
Is there a way to do so?
|
How to save data in state or cache in getServerSideProps? [duplicate]
|
With ASP.NET Core, you may use the TagHelper attribute: asp-append-version to perform cache busting.
<script src="/Scripts/A.js" asp-append-version="true"></script>
|
I have a JS module, let's call it A. It uses versioning by appending ?v=xxxxxxxxxxxx to its URL (like <script src="/Scripts/A.js?v=637082108844148373"></script>). v changes every time we make changes to the file.
Here is the code:
public static class UrlHelperExtensions
{
    public static string Content(this UrlHelper helper, string filename, bool versioned)
    {
        var result = filename;
        if (versioned)
        {
            var lastWriteTimeToken = CalculateToken(helper.RequestContext.HttpContext, filename);
            result = filename + "?v=" + lastWriteTimeToken;
        }
        return helper.Content(result);
    }
}
And then we can use it in Razor views as this:
// Sample.cshtml
// ... code omitted for the sake of brevity ...
@section scripts {
<script type="module" src="@Url.Content("~/Scripts/A.js", true)"></script>
}
// ... code omitted for the sake of brevity ...
The module imports modules B.js and C.js:
// A.js
import {Foo} from "./B.js";
import {Bar} from "./C.js";
If I change something in module A.js, the client browser's cache is busted, since the ?v=xxxxxxxxxxxx parameter changes every time we make any changes in A.js. But if I change something in module B.js or C.js, the version remains the same and I have to clear the cache manually to see the changes.
In other words, we can't use the ?v=xxxxxxxxxxxx parameter for the lines:
import {Foo} from "./B.js";
import {Bar} from "./C.js";
How to solve this problem of cache busting for imported files in MVC 5?
|
How to force JS script update in ASP.NET?
|
Here's the behavior I want:
As long as the network is available, the browser must check if there's a new version
If there isn't a new version, the browser can use the cached version
This is a common use case, and can be accomplished by using Cache-Control: no-cache (or max-age=0, must-revalidate) and providing an ETag or Last-Modified header.
The cached version expires after X days
This is not possible. It's not part of the design of HTTP caching because there's no use case for it.
I'd like the browser to refetch after X days because in case there's a bug, I don't want users stuck with a broken cached version.
If the browser is checking for a new version each time, how can the user ever get stuck with a "broken" cached version?
If I do Cache-Control: max-age=86400 (1 day) with an ETag, would it:
Make a server request every time, but the server will just return 304 if the ETag didn't change. After 1 day, discard the cached version, and refetch from the server (which should be the same as the discarded version).
Doesn't make any server requests for a day. Then, after 1 day, the server can still return 304. The cached version can stay indefinitely.
Number 2. The max-age tells the browser how long it can consider the resource to be fresh, meaning that the cached version can be used without checking with the server. When that time has expired, the resource is considered stale, and a new request has to be made. If the cached resource has an ETag or Last Modified header that request can be a conditional one to allow the server to avoid sending the entire resource in the response.
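The browser's decision under behavior 2 can be sketched as follows (a simplification of HTTP freshness rules, ignoring other directives and heuristic freshness):

```python
def browser_decision(age_seconds: float, max_age: float, has_validator: bool) -> str:
    """What a browser does for a cached response with Cache-Control: max-age."""
    if age_seconds < max_age:
        # Resource is still fresh: use the cache without contacting the server.
        return "serve from cache (no request)"
    if has_validator:  # an ETag or Last-Modified header was stored
        # Stale but revalidatable: server can answer 304 Not Modified.
        return "conditional request"
    return "full request"
```

With max-age=86400 and an ETag, the first day is served entirely from cache; afterwards every request is a cheap conditional revalidation.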
|
There are several similar questions, but none of the ones I've found are clear or definitive.
Here's the behavior I want:
As long as the network is available, the browser must check if there's a new version
If there isn't a new version, the browser can use the cached version
The cached version expires after X days
I think I can do this with Cache-Control: max-age and ETags. However, I can't find whether the max-age should be 0 or how long the content should be cached for.
E.g. if I do Cache-Control: max-age=86400 (1 day) with an ETag, would it:
Make a server request every time, but the server will just return 304 if the ETag didn't change. After 1 day, discard the cached version, and refetch from the server (which should be the same as the discarded version).
Doesn't make any server requests for a day. Then, after 1 day, the server can still return 304. The cached version can stay indefinitely.
I'd like the browser to refetch after X days because in case there's a bug, I don't want users stuck with a broken cached version.
|
What happens when you use Cache-Control: max-age with ETags?
|
In this case, I would work with WordPress transients:
https://developer.wordpress.org/reference/functions/set_transient/
https://developer.wordpress.org/reference/functions/get_transient/
To set a transient, you can use this code:
$transient = 'your_token_name';
$value = 'your_token';
$expiration = 3600;
set_transient( $transient, $value, $expiration );
To receive the value of your transient again, you can use:
$token = get_transient( $transient );
Using these methods is better than update_option or get_option since WordPress manages the expiration (deletion) of transients completely, so you don't have to implement your own logic for this.
Before you pass the value to the transient method, you can encrypt and decrypt it by storing a salt/key in your wp-config.php. You can find more infos about this topic here:
http://php.net/manual/es/function.openssl-encrypt.php
http://php.net/manual/es/function.openssl-decrypt.php
To define a constant in WordPress you need to go to your wp-config.php file and add it between the "you can edit" words:
define( 'YOUR_SALT', '12345678' );
You can read it again as a normal constant in WordPress:
$salt = YOUR_SALT;
|
I am looking for some help concerning WordPress plugin development. My plugin queries an API which requires an authentication token; that token is fetched via a token-delivery API. The token expires every 3600 seconds.
I would like to store the Token, in a persistent way (for all sessions , like server side caching) and update it only when needed. Multiple Api call could be done with the same token. The problem is, if I store the token in a global variable, it gets reset each time a user reload a page which uses my plugin.
https://wordpress.stackexchange.com/questions/89263/how-to-set-and-use-global-variables-or-why-not-to-use-them-at-all
After looking for answer I found:
-WP_CACHE, but it is not persistent.
-I know I can store the token in the database, but a token in the database is not a solution I find elegant.
-Tools such as Redis Object Cache for PHP, but I found that really complicated (installing etc.).
Is there any good practice or easy way of doing this? I only need to keep a string for an hour and access it within the plugin in PHP.
Thank you.
|
WordPress Store Authentication token in memory
|
Since size() is a method of ArrayList (not a property), call it with parentheses:
@Cacheable(value = "saveCache", key = "{#a, #b, #c}", unless="#result.result.size() > 0")
For ref:
SpringBoot Cacheable unless result with List
|
I want to cache the result in Spring Boot with a condition on the result.
@Cacheable(value = "saveCache", key = "{#a, #b, #c}")
public Response save(String a, String b, String c) {
    // block of code
    List<Map<String, Object>> result = new ArrayList<>();
    return new Response(result);
}
In the above code, I want to cache only if response.result is not empty. I have tried the below method, but it doesn't work for me
@Cacheable(value = "saveCache", key = "{#a, #b, #c}", unless="#result.result == null")
@Cacheable(value = "saveCache", key = "{#a, #b, #c}", unless="#result.result.size > 0")
[Error] EL1008E: Property or field 'size' cannot be found on object of
type 'java.util.ArrayList' - maybe not public or not valid?
@Cacheable(value = "saveCache", key = "{#a, #b, #c}", unless="#result.result.length > 0")
[Error] EL1008E: Property or field 'length' cannot be found on object
of type 'java.util.ArrayList' - maybe not public or not valid?
|
How to cache with condition on result object in spring boot
|
Thanks to fluffy.
To implement the same, simply use the removalListener.
For my use case:
Cache<String, File> cache = Caffeine.newBuilder()
.removalListener((String key, File file, RemovalCause cause) -> {
file.delete();
})
.expireAfterWrite(3, TimeUnit.SECONDS)
.maximumSize(100)
.build();
solves the problem.
|
I want to create a cache like so
Cache<String, File> cache = Caffeine.newBuilder()
.expireAfterWrite(1, TimeUnit.MINUTES)
.maximumSize(100)
.build();
That I will populate with a temporary file like so,
File f = File.createTempFile("jobid_", ".json");
FileWriter fileWriter = new FileWriter(f);
fileWriter.write("text values 123");
fileWriter.close();
cache.put("jobid", f);
Now after 1 minute I understand that cache.getIfPresent("jobid") will return null, my question is that is there some way in which I can trigger another task when this entry expires - deleting the temporary file itself.
Any alternative solution works as well.
|
Perform action on expiry with Caffeine on Java
|
I have a solution:
import inspect
import functools

list_of_cached_properties_names = [
    name for name, value in inspect.getmembers(DataSet)
    if isinstance(value, functools.cached_property)
]
|
Here is an example:
import statistics
from functools import cached_property
class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)

    @cached_property
    def variance(self):
        return statistics.variance(self._data)
What is the easiest way to list cached properties?
|
How to list all class cached_properties in Python?
|
Yes, it makes sense.
With the configuration mentioned in that comment, your users' browsers will treat responses as stale immediately, so they'll have to revalidate on the next request. And the CDN will cache a valid response for 604800 seconds. So repeated requests will mostly be served by the CDN instead of the origin server.
But what if you update your app? What happens to the stale cache on the CDN?
After a new deployment, you need to make sure all of your stale cache from the CDN will be purged / cleared.
For example, see Purging cached resources from Cloudflare: it gives you numerous options on how to do that.
Purge by single-file (by URL)
Purging by single-file through your Cloudflare dashboard
Purge everything
Purge cached resources through the API
etc
Firebase Hosting, for example, will clear all CDN cache after a new deployment:
Any requested static content is automatically cached on the CDN. If you redeploy your site's content, Firebase Hosting automatically clears all your cached static content across the CDN until the next request.
As for the setting suggested in the comment, I think Cache-Control: no-cache would do a better job.
From MDN - Cache Control:
no-cache
The response may be stored by any cache, even if the response is normally non-cacheable. However, the stored response MUST always go through validation with the origin server first before using it, therefore, you cannot use no-cache in-conjunction with immutable. If you mean to not store the response in any cache, use no-store instead. This directive is not effective in preventing caches from storing your response.
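To make the max-age / s-maxage split concrete, here is a toy sketch (not a full RFC 9111 parser; function name and behavior are illustrative) of how a shared cache such as a CDN, versus a private browser cache, would pick its freshness lifetime from that header:

```python
def freshness_lifetime(cache_control, shared_cache):
    """Toy freshness calculation: shared caches honor s-maxage over
    max-age; private caches only look at max-age (illustrative only)."""
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().partition("=")
        directives[name.lower()] = int(value) if value else None

    if shared_cache and directives.get("s-maxage") is not None:
        return directives["s-maxage"]
    return directives.get("max-age") or 0

header = "max-age=0, s-maxage=604800"
print(freshness_lifetime(header, shared_cache=True))   # → 604800: the CDN may cache for a week
print(freshness_lifetime(header, shared_cache=False))  # → 0: the browser must revalidate
```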
|
Somebody commented on this question about caching:
...using a Cache-Control value of: max-age=0, s-maxage=604800 seems to get my desired behavior of instant client updates on new page contents, but still caching at the CDN level
Will I really get caching at CDN level and instant updates for my users?
Does it make sense? How does that combination work?
|
Does it make sense to set Cache-Control max-age=0 and s-maxage= not zero?
|
A year and a half later, this question was a top result when I was looking for the same answer.
Looking in the code, I found the jit results are cached by default under __pycache__ on a function-by-function basis. I removed these and saw them repopulate when running the code again.
AFAIK the "automatic" caching you are referring to is just in memory, while the disk cache is of course on disk. The disk cache persists and is loaded next time you run your program, skipping the compilation as you said.
After delving into the sources, I did end up finding the answer in the docs, too. Looks like you can override the cache location using the NUMBA_CACHE_DIR environment variable.
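As a small sketch of that override (the directory name here is just an example; note that NUMBA_CACHE_DIR must be set before numba is first imported):

```python
import os
import tempfile

# Hypothetical cache location; NUMBA_CACHE_DIR is read when numba is imported.
cache_dir = os.path.join(tempfile.gettempdir(), "numba_cache")
os.environ["NUMBA_CACHE_DIR"] = cache_dir

# With numba installed, a function compiled like this would then persist
# its machine code under cache_dir between runs:
#
#   from numba import njit
#
#   @njit(cache=True)
#   def add(x, y):
#       return x + y
```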
answered Jun 15, 2022 at 15:10, edited Jun 15, 2022 at 18:43, by Kirk M
Came here in same context, and indeed disk caching seems to be off by default and can be enabled by including cache=True inside the @jit() definition. The jit annotation output being a compiled version of the function being annotated, it can take up to one second in my case for a not-that-sophisticated python function doing numpy and some numerics to be computed and returned by numba's jit annotation, so caching to disk @jit(cache=True) can make a substantial difference in some use cases such as in tests code.
– matanox
Jan 12 at 18:53
|
|
I've been looking for an answer in the numba docs, but I haven't been able to find anything.
The numba.jit decorator caches compiled functions automatically. Additionally, you can pass cache=True argument to it to create an on-disk cache.
What's the difference between both caching methods? Does on-disk cache persist, so that next time I execute my code, even on a "fresh" Ipython kernel, I can skip the compilation?
Thanks in advance!
|
Numba Jit auto cache vs on disk caching
|
The aiohttp-client-cache library works with aiohttp and can be used to store the cache on a local filesystem.
answered Aug 29, 2021 at 8:19 by Jonathan Feenstra
|
|
I have a synchronous app using CacheControl + requests which works well with a local filesystem cache. I'm looking to migrate this to an async project using aiohttp; however, it looks like there aren't any client-side caching libraries that work with it?
Are there any async HTTP clients in Python that I can use a local cache with?
|
Maintain a client-side http cache with aiohttp
|
1) Hash Angular files while building
Angular has a built-in hashing mechanism to ensure updated files are not cached. To activate this functionality you have to add the property "outputHashing": "all" to the build configurations in your angular.json file. Alternatively you can build your project with the command ng build --output-hashing=all. This will add a hash to each file name.
2) Add server-side Cache-Control headers
However, Angular does not hash the index.html file. Server-side response headers should ensure that this file isn't cached - as they override your meta tags. Cache-Control is such a header that you can configure on your web server to add to all outgoing requests, which will tell the browser and CDNs how to cache your content. This answer explains how to set these no-cache headers on your web server.
You can verify if these cache control headers are set correctly by going to the Inspect > Network tab in your browser or by using the curl -I www.yourURL.com command.
3) Handle previously cached files
All versions of your index.html file that were cached in your clients' browsers before you added the new cache control headers will still be cached. To overcome this issue you should use different URLs. This can be done by using a new domain name (as long as you do not care about SEO), changing the routes, or adding a URL parameter (without touching SEO).
After building your Angular project as described above and adding the configuration on your web server, users will always get the newest version of your page, even after a future release.
answered Aug 4, 2022 at 11:28, edited Aug 8, 2022 at 6:10, by Nico
|
|
I am facing this annoying issue with Angular cache in Chrome. I get this issue every time I do a release.
I have added cache control settings in HTML.
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="0">
What is weird is, on one of the most frequent routes, Chrome uses old main.xxxx.js file.
<script type="text/javascript" src="main.e782af08b3f281507dba.js"></script>
But as soon as I switch to another less frequent route, it loads latest main.xxxxx.js file.
<script type="text/javascript" src="main.075b8caa48c74ed93f64.js"></script>
I am facing this after every release, which is very annoying not just for me, but for my clients as well. I can't ask them to clear their cache every time I do a new release.
Also, in last release I had put a check for version change, and if version is changed, use window.reload() to reload the browser, which it does. But as soon as it routes to frequent path, chrome gets old main.js file.
|
Angular cache - Chrome is loading old main.js file on frequent used routes
|
I got this message because I forgot to initialize the app
cache.init_app(app)
I would try to debug if this method call is reached before cache.get('XXX') is called.
answered Oct 8, 2020 at 15:45 by maxim1500
|
|
I am using application factory pattern in which I have initialized my cache
from xyz.caching import cache # this is the cache object
def create_app():
    app = Flask(__name__)
    cache.init_app(app)
    # other relevant variables.
    return app
My caching.py
from flask_caching import Cache
cache = Cache(config={....})
When I import this in any file (from xyz.caching import cache), it works totally fine. However, my application has an entry point script, run.py.
run.py
from xyz.caching import cache
def run_this():
    cache.get('XXX')

if __name__ == "__main__":
    run_this()
After running python run.py, I get the following error
'AttributeError: 'Cache' object has no attribute 'app''
Please guide me: what is wrong here, why am I getting this error, and how can I solve it?
|
Flask cache gives me 'AttributeError: 'Cache' object has no attribute 'app''
|
Some browsers will optimize things by detecting when there isn't a fetch handler in a service worker and not block navigations on service worker startup. (Chrome is pretty aggressive about this.) Other browsers don't. You don't mention which browser you're testing this on, but I'm not particularly surprised that there's some impact.
There's some more background on this topic in this talk.
|
A very simple setup.
Importing service worker file within <script> tag in index.html like so:
if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('/app/static/js/service-worker.js', { scope: '/' });
  });
}
Service worker itself is empty (1 line):
console.log('Successfully Installed Service Worker.');
After running page load speed tests I get interesting results:
                      DOM Interactive   DOM Complete   Load Event End   Number of page loads
no-service-worker     0.232             2.443          2.464            30
with-service-worker   0.343             2.484          2.502            30
What gives? How does an empty service worker slow down page load by whopping 120+ milliseconds?
|
Why does an empty service worker significantly slow down page load speed?
|
The response is cached on the client via response headers, so the server can't "clear" it. As a workaround, you can first set a suitable max age for the client-side response cache, then use VaryByHeader or VaryByQueryKeys; each time you want to refresh the cache, provide a different value for your header/query string:
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/middleware?view=aspnetcore-3.1
answered Jan 13, 2020 at 6:30 by Nan Yu
|
|
I have a controller action which renders a partial view which fetches some data from the database asynchronously. Let's say these are menu items.
[Route("SomeData")]
[ResponseCache(Duration = 1000 * 60 * 60)]
public IActionResult SomeData()
{
//returns a partial view for my ajax call
}
The data does not change often, but a user might do something and know that it should result in a change in that partial view, i.e. new menu items should appear.
However, with a cached response, the data is not loaded from DB. I would like to add a 'refresh' button on the page so that a user can explicitly clear all cache.
I tried javascript to do window.reload(true); as well as this reply https://stackoverflow.com/a/55327928/2892378, but in both cases it does not work.
I need the behaviour identical to clicking Ctrl + Refresh button in Chrome.
Cheers
|
ASP NET Core - clear ResponseCache programmatically
|
For sure you can share your Redis database with as many users as you want, provided you open up the network to your endpoint and port.
Note there is no Access Control List in Redis before version 6. Redis 6 just came out as release candidate 1 a few days ago; if you require ACLs, you may consider it if you are OK working with an RC1.
You can configure a Redis password, but it is one password for all users - shared. You can ask clients to also identify themselves by providing a name, but it is an honor system. Again, there is no ACL before Redis 6.
You can also use the firewall (network security) to limit what machines can connect to your instance.
Take a look at https://redis.io/topics/security for more on security.
To learn about Redis 6 ACL see https://redis.io/topics/acl
|
If I store some data in my Redis cache on my machine, is that data accessible to other people on their machines, or is the Redis DB limited to one user only?
|
Share Redis Db among many users
|
By default, the path to the cache database (generated by knitr) is dependent on the R Markdown output format. That is why the cache has to be regenerated for different output formats like HTML and Word. To use the same copy of the cache database for all output formats, you can manually specify a path that does not depend on the output format, e.g.,
```{r, setup, include=FALSE}
knitr::opts_chunk$set(cache.path = 'a/fixed/directory/')
```
However, please note that there is certainly a reason for why each output format uses its own cache path: the output from an R code chunk may be dependent on the output format. For example, a plot may be written out with the Markdown syntax  for Word output, but could become <img src="..." /> for HTML output. If you are sure that your code chunk doesn't have any side-effects (e.g., generating plots and tables), you are safe to use a fixed path for the cache database. Usually I would not recommend that you turn on cache = TRUE for whole documents (because caching is hard), but only cache the specific code chunks that are time-consuming.
|
I'm performing some computationally intensive operations that I would like to generate reports from. I'm experimenting with bookdown or straight rmarkdown. Essentially I'd like an html_document report and a word_document report.
My .Rmd file looks like this:
---
title: "My analysis"
author: "me"
date: '2019-12-17'
output:
  bookdown::word_document2:
    highlight: tango
    df_print: kable
    reference_docx: Word_template.docx
    toc: yes
    toc_depth: 2
    fig_caption: yes
  bookdown::html_document2:
    theme: yeti
    highlight: tango
    df_print: paged
    toc: yes
    toc_depth: 2
    fig_caption: yes
    keep_md: yes
---
***
```{r child = 'index.Rmd', cache=TRUE}
```
```{r child = '01-Read_in_raw_data.Rmd', cache=TRUE}
```
```{r child = '02-Add_analysis.Rmd', cache=TRUE}
```
What happens is that the html and word documents get cached separately, which is a) time-consuming because they are run twice and b) annoying due to some exported files creating problems when caching (they are generated during the first knit operation but already exist for the second and subsequent ones and generate errors).
I've tried generating just the .md file but it doesn't change problem (a) and I just get really ugly reports from .md inputs with pandoc.
Does anyone have a more elegant way of doing this?
|
Is there a way to generate a cached version of an rmarkdown document and then generate multiple outputs directly from the cache?
|
You are using performance mode with a 1-day maxAge, which always serves the value from the cache if it is available. You'll see the data change after one day.
Instead you can use freshness mode, or reduce maxAge in performance mode.
Your manually issued request doesn't help, because the URL https://my-api.com/v1/languages is cached by the service worker. Cache manipulation headers in the request won't work either, because the HTTP request cache and the service worker cache are distinct cache layers.
From Angular docs:
The Angular service worker can use either of two caching strategies for data resources.
performance, the default, optimizes for responses that are as fast as possible. If a resource exists in the cache, the cached version is used, and no network request is made. This allows for some staleness, depending on the maxAge, in exchange for better performance. This is suitable for resources that don't change often; for example, user avatar images.
freshness optimizes for currency of data, preferentially fetching requested data from the network. Only if the network times out, according to timeout, does the request fall back to the cache. This is useful for resources that change frequently; for example, account balances.
|
In an Angular application, I have an URL endpoint that is being cached like so:
// ngsw-config.json
"dataGroups": [{
    "name": "api-performance",
    "urls": [
      "https://my-api.com/v1/languages"
    ],
    "cacheConfig": {
      "strategy": "performance",
      "maxSize": 300,
      "maxAge": "1d"
    }
  }
]
It works perfectly in offline scenarios when a client goes through a survey process. But in admin panel, when I try to update the language information, indeed - it does update the record in database, but when I try to refresh the data, it doesn't send the request to our endpoint, but to stored cache in browser.
This is what I tried:
getLanguages(shouldCache: boolean): Promise<any> {
  if (shouldCache) {
    return this.httpClient.get('https://my-api.com/v1/languages').toPromise();
  } else {
    const headers = new HttpHeaders({
      'Cache-Control': 'no-cache, no-store, must-revalidate, post-check=0, pre-check=0',
      'Pragma': 'no-cache',
      'Expires': '0'
    });
    return this.httpClient.get('https://my-api.com/v1/languages', { headers: headers }).toPromise();
  }
}
Unfortunately, it doesn't work. I thought about updating the cache also, but I don't know how to do it.
Does anyone have an idea how to solve this problem?
|
PWA: Exclude caching in some part of application
|
Yes, the resource will be cached.
This follows from the semantics of HTTP and URLs. A URL is a Uniform Resource Locator: it provides the location of the resource, in a form that can be used anywhere, and which always indicates the same resource: in <a> elements of different web sites, on business cards, on advertising posters. An HTTP client (a web browser) knows that a URL one web site uses refers to the same resource if used on a different web site, and so it is safe to reuse a cached copy.
The exception to this is when a URL is a relative URL (your example uses absolute URLs). To make use of a relative URL the client must resolve the URL, using some context, to produce an absolute URL. Different web sites have different contexts and thus resolve to different absolute URLs. It is the absolute URL that the client must use to fetch resources and which is used as the key in its cache.
|
Let's say you have a resource, could be an image, could be jQuery from a cdn. This resource is hosted at some 3rd party url, like https://example-cdn.com/resource.ext. Let's also assume it is cacheable (whatever that means--let me know if that is a non-trivial detail).
When https://website-a.com requests the resource (let's assume it was included in the html directly), it takes some time to load, but then the browser caches it for faster load next time.
Now, https://website-b.com is also including that resource in its html, using the exact same url (https://example-cdn.com/resource.ext).
My question is this: will the browser reach for the cached resource (because it was already fetched while loading https://website-a.com), or is there some reason that it would not be able to find it in the cache and have to load it over the network all over again?
Edit: This stackexchange answer seems to contain some related information. Can anyone verify that this answer is correct in all its assertions about caching? https://webmasters.stackexchange.com/a/84685
|
Will browser pull from cache if the same resource is being requested by a different origin?
|
Most browsers by default cache the images that are loaded in your application once so when you make the same request again, it doesn't necessarily need to load it from the server again.
To check this, you can run your application in Chrome, then run developer tools and go to the Network tab and in the filter section click on the Img label so it will filter by images only.
Once you do this try hitting refresh on your page and you will see that some images are loaded from either disk cache/memory cache thus eliminating the need to load from the server again and again.
|
We display an item image on the item list page; clicking on an item goes to the item detail page.
The problem: we don't want to load the image again, just reuse the one already loaded for the item list.
How can we solve this?
|
Angular cache images when already loaded?
|
You can take it from Retrofit instance by the following way:
val cacheSize = (retrofit.callFactory() as OkHttpClient).cache()?.size()
answered Sep 7, 2019 at 11:50 by Andrei Tanana
|
|
I'm using Retrofit 2 and OkHttp in my Android project and cache response data from the server side.
Is there any way to find out how many bytes are stored in the cache and log it?
Thanks in advance!
|
How to get okhttp stored cache size as bytes?
|
From HTML5 ★ Boilerplate Docs:
What is "?v=1"? "?v=1" is the JavaScript/CSS Version Control with Cachebusting.
Why do you need to cache JavaScript CSS? Web page designs are getting
richer and richer, which means more scripts and stylesheets in the
page. A first-time visitor to your page may have to make several HTTP
requests, but by using the Expires header you make those components
cacheable. This avoids unnecessary HTTP requests on subsequent page
views. Expires headers are most often used with images, but they
should be used on all components including scripts, stylesheets etc.
How does HTML5 Boilerplate handle JavaScript CSS cache? HTML5
Boilerplate comes with server configuration files: .htacess,
web.config and nginx.conf. These files tell the server to add
JavaScript CSS cache control.
When do you need to use version control with cachebusting?
Traditionally, if you use a far future Expires header you have to
change the component's filename whenever the component changes.
How to use cachebusting? If you update your JavaScript or CSS, just
update the "?v=1" to "?v=2", "?v=3" ... This will trick the browser
think you are trying to load a new file, therefore, solve the cache
problem.
That being said, you can use various things as the version for the files. Using filemtime is a fine way to do it; I believe it is one of the most mainstream approaches I've seen. You can leave it in and know it will always work correctly, with no collisions anytime soon. I am not aware of any difference between ?v and ?ver, but ?v is the one used in 90% of the cases I've seen, maybe more. Hope this helps.
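The same idea sketched in Python for comparison (function and file names are illustrative): the file's modification time becomes the version parameter, so the URL changes exactly when the file does.

```python
import os
import tempfile

def busted_url(url, path):
    """Append the file's mtime as a cache-busting version parameter."""
    version = int(os.path.getmtime(path))
    return f"{url}?v={version}"

# demo with a throwaway stylesheet
with tempfile.NamedTemporaryFile(suffix=".css", delete=False) as f:
    f.write(b"body { color: black; }")
    path = f.name

url = busted_url("css/custom.css", path)
print(url)  # e.g. css/custom.css?v=1564681659
```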
|
I'm using PHP's filemtime to add a version to files. Is it a good idea?
<link rel="stylesheet" href="css/custom.css?v=<?=filemtime("./css/custom.css")?>"/>
<script src="js/custom.js?v=<?=filemtime("js/custom.js")?>"></script>
This shows up in source like
<link rel="stylesheet" href="css/custom.css?v=1564681659"/>
<script src="js/custom.js?v=1564599819"></script>
Is putting a ?v or a ?ver the same?
P.S: I'm doing the same to .js & css files, bootstrap, jquery etc. (all local ones only though).
|
Is php's `filemtime` a good idea for Browser Caching or Cache Busting?
|
as it has less space
That is not true for most Android devices created in the last 8 years.
the platform might automatically delete files when it wants space for other operations
That also holds true for getExternalCacheDir().
please suggest other possible ways to cache the files securely, so that user cannot view/access using other external application
Use getFilesDir().
|
My requirement is to cache the files securely in android internal/external storage, where apps other than my app should not view/access the documents I store.
Current implementation:
Currently, the app uses context.getExternalCacheDir() as a base directory and followed by respective folder structure to cache files. The problem here is, any user can view the files stored by just navigating through the path using some File Explorer apps.
We can use context.getCacheDir() or file directory,
There are limitations in using it, as it has less space and the platform might automatically delete files when it wants space for other operations.
Required Implementation:
Encryption/decryption would be one way; still, please suggest other possible ways to cache the files securely, so that users cannot view/access them using other external applications.
|
How to restrict user from viewing/accessing the cached files from file explorer in an Android Device?
|
Since I went from JUnit 4 -> JUnit 5 I had to remove the @RunWith annotation from all my test classes. Which was great, it all ran fine.
Turns out I needed to add the @ExtendWith annotation to the top of the test classes using caching.
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = CacheConfig.class)
@EnableConfigurationProperties
public class CacheConfigTest { ...
I'm not exactly sure why it was needed for just those cases, but it must have to do with the way the cache loads up; for some reason Spring doesn't know how to load the right context for it without that.
answered Jul 23, 2019 at 19:09 by canpan14
The reason you need to @ExtendWith Spring is to make injection work, else Spring is not in control and all of your injections will be null.
– Josh M.
Feb 21 at 14:08
|
|
I had this working in with Spring Boot 1.x and JUnit 4 but after swapping to Spring Boot 2.x and JUnit 5 it no longer works.
Using Caffeine for our cache manager.
The test was to ensure that our cache was preloading itself with some constants, nothing complicated.
@ContextConfiguration(classes = CacheConfig.class)
@EnableConfigurationProperties
public class CacheConfigTest {

    @Autowired
    private CacheManager myCacheManager;

    @Test
    public void verifyCacheManagerIsInitializedWithCaches() {
        CacheConstants.CACHES.forEach(cacheName ->
            assertTrue(myCacheManager.getCacheNames().contains(cacheName)));
    }
}
When I run it now the myCacheManager is null, causing the rest of the code to throw a null pointer exception unsurprisingly.
Here is the CacheConfig class for reference.
@EnableCaching
@Configuration
public class CacheConfig extends CachingConfigurerSupport {

    private static final String CACHE_EXPIRE_ONE_HOUR = "expireAfterAccess=3600s, expireAfterWrite=3600s";

    @Bean
    @Override
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager(CacheConstants.SOME_CONSTANT);
        cacheManager.setCacheSpecification(CACHE_EXPIRE_ONE_HOUR);
        return cacheManager;
    }

    @Bean
    public CacheManager myCacheManager() {
        return new CaffeineCacheManager(Arrays.stream(CacheConstants.CACHES.toArray()).toArray(String[]::new));
    }
}
|
JUnit 5 Cache Manager Init Data Test - Null Cache Manager
|
I may not be understanding your question properly, but your reply to Sabee's answer was helpful. It sounds like you're looking to either merge multiple geometries into a single mesh or implement a form of model instancing, with the goal of reducing draw calls.
There is more than one way to accomplish this, depending on your requirements. You can merge multiple geometries into a single geometry object, and provide either one material or an array of materials (where each index corresponds to one of the merged geometries). You can also use GPU-accelerated instancing to achieve a similar effect with only a single copy of the geometry.
I'll refer to Dusan Bosnjak's excellent Medium series on instancing, which starts here: https://medium.com/@pailhead011/instancing-with-three-js-36b4b62bc127
As well, here are the three.js examples regarding instancing: https://threejs.org/examples/?q=instanc#webgl_buffergeometry_instancing_dynamic
|
PIXI.js has Container#cacheAsBitmap which causes the container to "render" itself to an image, save that, render the image instead of its children and when a child is added or removed or updated, the cache is updated.
What's the alternative for Three.js (but instead of an image it would be a mesh)?
|
Cache scene in Three.js
|
I don't think you're using it in quite the correct way; Rails handles caching of collections automatically quite well.
This is the standard setup for caching a collection
posts = Rails.cache.fetch(Post.by_latest) do
  Post.by_latest
end
render json: posts
|
I've just gotten into caching for my application and am a little stuck. I have a query that relies on a scope. The scope gathers the last (x) posts created on the page.
Rails.cache.fetch('homepage/posts') do
  posts = Post.by_latest
  render json: posts
end
Here's what I'm having a hard time understanding. Since I want to make sure the site displays the latest posts, I don't want to manually set an expiration time or date. Instead I'd like to expire the cache when a new post is created or destroyed. Here's the other dilemma: I thought of making a callback in my Post model, but things get a little jumbled when I want to expire the cache of other keys that all accomplish the same thing.
def flush_cache
  Rails.cache.delete('homepage/posts')
  Rails.cache.delete('posts')
  ...
end
I wanted to flush the cache in a more dynamic way, as you would with a single object, where the cache key is composed of the object's updated_at attribute and expires whenever it is modified. I'm finding this difficult to do with a collection, though. Generating a cache key from the query would itself run a query, thus defeating the overall purpose of caching:
Rails.cache.fetch(Post.by_latest.cache_key) do
  posts = Post.by_latest
  render json: posts
end
Any help would be appreciated.
|
Setting a collection cache key without multiple subsequent queries
|
Closures in PHP cannot be serialized out of the box, and hence neither can an array containing closures. You can only use third-party ways/libraries to achieve this.
Discussed here: Serializing anonymous functions in php
|
I want to cache an array which also contains closures.
I've tried with:
serialize() - can't serialize closures
json_encode() - replaces closure with empty value
base64_encode() - doesn't accept arrays
What else can I do to cache an array that also contains closures?
|
How to cache closures in php
|
You can use @Scheduled. For example, to clear the whole cache every day at midnight (Spring cron fields are second, minute, hour, day, month, weekday; allEntries = true evicts every entry rather than a single key):
@Scheduled(cron = "0 0 0 * * *")
@CacheEvict(value = "cacheName", allEntries = true)
public void clearCache() {
}
|
I would like to refresh the cache every day at 12am, i.e. have the cache expire at 12am. I have checked the available methods in net.sf.ehcache.config.CacheConfiguration, but methods such as timeToIdleSeconds and timeToLiveSeconds do not seem to be what I want. May I know how to achieve this?
Edit 1:
Here is how I use @Cacheable.
@Override
@Cacheable(value = "cacheName")
public Object retrieveConfigurations() {
...
}
|
Spring @cacheable how to refresh cache 12am?
|
That passage is about memory that is changed, not about memory that remains constant. Sharing constants between threads is fine.
When you have multiple CPUs each updating the same place, they have to be sending their changes back and forth to each other all the time. This results in contention for 'owning' a particular piece of memory.
Often the ownership isn't explicit. But when one CPU tells all the others that a particular cache line needs to be invalidated because it just changed something there, then all the other CPUs have to evict the value from their caches. This has the effect of the CPU to last modify a piece of memory effectively 'owning' the cache line it was in.
And, again, this is only an issue for things that are changed.
Also, the view of memory and cache that I gave you is rather simplistic. Please don't use it when reasoning about the thread safety of a particular piece of code. It's sufficient to understand why multiple CPUs updating the same piece of memory is bad for your cache, but it's not sufficient for understanding which CPU's version of a particular memory location ends up being used by the others.
A memory location that doesn't change during the lifetime of a thread being used by multiple threads will result in that memory location appearing in multiple CPU caches. But this isn't a problem. Nor is it a problem for a particular memory location that doesn't change to be stored in the L2 and L3 caches that are shared between CPUs.
answered Apr 30, 2019 at 13:50 by Omnifarious
|
|
I'm currently making an application with multiple worker threads running in parallel. The main part of the program executes before the workers, and each worker is put to sleep when it has finished its tasks:
MainLoop()
{
    // ...
    SoundManager::PlaySound("sound1.mp3"); // Add a sound to be played; stores the sound in a list in SoundManager
    SoundManager::PlaySound("sound2.mp3");
    SoundManager::PlaySound("sound3.mp3");
    // ...
    SoundThreadWorker.RunJob();        // Wake up thread and play every sound pushed in SoundManager
    // Running other threads
    SoundThreadWorker.WaitForFinish(); // Wait until the thread has finished its tasks; thread is put to sleep (but not closed)
    // Waiting other threads
    // ...
}

// In SoundThreadWorker class, running in a different thread from the main loop
RunJob()
{
    SoundManager::PlayAllSound(); // Play all sound stored in SoundManager
}
In this case, the static variable storing all sounds should be safe because no sounds are added while the thread is running.
Is this cache efficient?
I have read here that: https://www.agner.org/optimize/optimizing_cpp.pdf
"The different threads need separate storage. No function or class
that is used by multiple threads should rely on static or global
variables. (See thread-local storage p. 28) The threads have each
their stack. This can cause cache contentions if the threads share
the same cache."
I have a hard time understanding how static variables are stored in cache, and how they are used by each thread. Do I have two instances of SoundManager in cache, since threads do not share their stacks? Do I need to create a shared memory to avoid this problem?
Cache efficiency with static member in thread
I believe you're currently using an origin-request Lambda function, and it doesn't add the rewritten path to the cache key. A viewer-request function would help achieve this, but unfortunately you'd then need both: a viewer-request function to change the path and an origin-request function to choose the origin.
answered Mar 29, 2019 at 18:04 by James Dean
I have some problem on invalidating the cache of my CloudFront distribution.
I mapped a wildcard domain name to my CloudFront distribution; then I created a Lamba@Edge that modify the request origin redirecting each subdomain to its subfolder.
It works in this way:
aaa.mydomain.com => mydomain.com/aaa
bbb.mydomain.com => mydomain.com/bbb
ccc.mydomain.com => mydomain.com/ccc
...
I'm not able to invalidate the cache:
if I invalidate the path /bbb/* it doesn't work. Instead with the path /* works, but in this way I invalidate all the S3 Bucket and I would like to avoid it.
Any help?
Thanks!
Problem on invalidating the cache of a Cloudfront distribution
If you just want to invalidate the whole cache at will, you might create a trivial view which does that:
file views.py:
from django.core.cache import cache
from django.core.exceptions import PermissionDenied
from django.http import HttpResponse
from django.views.decorators.cache import never_cache
@never_cache
def clear_cache(request):
if not request.user.is_superuser:
raise PermissionDenied
cache.clear()
return HttpResponse('Cache has been cleared')
file urls.py:
from django.urls import path
from . import views
urlpatterns = [
...
path('clear_cache/', views.clear_cache),
]
then invoke it with your browser:
http://HOST/clear_cache
[additional information]
I asked for a way to disable caching site-wide. This is probably overkill, because all I need is a way to be able to see the most recent version of a page, when either the database or the program to generate it has been modified.
There is a strong consensus that modifying settings at runtime is a very bad idea.
So, some ideas: clearing the cache could work, as would sending a flag to specify that I don't want to see a cached version, or specifying that requests from my IP address shouldn't see cached pages.
[original question]
I have a Django-based website at ozake.com, and I frequently rewrite parts of the programming or change page content.
Each time I work on it, I modify settings.py to disable caching so I can see my modifications in real time.
When I'm done, I re-enable caching.
I am using file-based caching. Here is the relevant part of settings.py:
CACHES = {
'default': {'BACKEND':
#'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
'LOCATION': '/var/www/mysite.com/cache',
When I work on the site I comment out the last two lines and uncomment the dummy cache line.
This means SSH'ing into the site, modifying settings.py, working on the site, then re-modifying it.
Is there any way I can make this into a check box somewhere in /admin with admin.py?
Is it possible to disable caching from Django admin pages?
That would mean caching gigabytes of data on the client's computer. I can't see how that would work out long-term (users clearing the cache, changing browsers, computers, etc.), so this sounds like a poor solution to the problem.
With that being said, you should look into IndexedDB:
IndexedDB is a low-level API for client-side storage of significant amounts of structured data, including files/blobs. This API uses indexes to enable high-performance searches of this data. While Web Storage is useful for storing smaller amounts of data, it is less useful for storing larger amounts of structured data. IndexedDB provides a solution.
Notes:
This feature is available in Web Workers.
IndexedDB API is powerful, but may seem too complicated for simple cases. If you'd prefer a simple API, try libraries such as localForage, dexie.js, ZangoDB, PouchDB, and JsStore that make IndexedDB more programmer-friendly.
I have a web app, that serves images of scanned old documents with their transcript, it's a lot of images about 5 gigs of images, and the client needs to browse them on regular basis.
I need a solution to cache the images in the client side, since the images are immutable, they should be fetched from the server only once.
Is there a way to cache files in client computer permanently?
LRU Cache policy : Evict the least recently used.
How do we achieve this? Well this depends on the actual algorithm but the bottomline is this.
Every node/key has an age bit.
When you access a key x, either get or put, you reset its age to 0.
Why? Because x was the most recently used and we denote this by setting its age as 0 indicating that it is like a new born key. More recent than others.
But, we need to do one more step here.
Increment everyone's age except the one you recently accessed.
This is done to signify that all others except x are now older than their last age.
All that remains to evict (if size is breached) is to evict the key whose age is the highest. In your case, it will be the 1025th key.
In summary, it is the increment-all op that's really costly to implement.
Try increasing the cache size and you'd notice a better runtime. However, it'll always be slower than the plain dict: dict in Python is implemented as a hash table.
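For illustration, the same eviction policy can also be realized without a per-access increment-all step, by keeping keys in recency order. A minimal sketch using only the standard library:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: most recently used keys live at the end."""
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)   # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used
```

Here both get and put are O(1); the recency order replaces the explicit age bits described above.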
answered May 17, 2019 at 18:11 by isopropylcyanide
Below are two recursive functions that use memoization. cache_fibonacci uses a cache dictionary while lru_cache_fibonacci uses Python's lru_cache decorator. Why is the latter so slow?
from functools import lru_cache
cache=dict()
def cache_fibonacci(n):
return helper_fibonacci(n)
def helper_fibonacci(n):
if n in cache:
#Cache already exists
return cache[n]
if n==1:
value=0
elif n==2:
value=1
else:
#Cache not set
a=helper_fibonacci(n-1)
b=helper_fibonacci(n-2)
value=a+b
cache[n]=value
return value
@lru_cache(maxsize=1024)
def lru_cache_fibonacci(n):
if n==1:
return 0
if n==2:
return 1
else:
a=rec_fibonacci(n-1)
b=rec_fibonacci(n-2)
return a+b
The runtime outputs are:
Cached-recursive time= 1.4781951904296875e-05
LRU Cached-recursive time= 0.14490509033203125
Why is the lru_cache slower than cache implemented as a dictionary for the following fibonacci calculator?
If this is about .css and .js changes, one way is to to "cache busting" is by appending something like "_versionNo" to the file name for each release. For example:
script_1.0.css // This is the URL for release 1.0
script_1.1.css // This is the URL for release 1.1
script_1.2.css // etc.
Or alternatively do it after the file name:
script.css?v=1.0 // This is the URL for release 1.0
script.css?v=1.1 // This is the URL for release 1.1
script.css?v=1.2 // etc.
And for the .html files themselves you can also have a look at cache-related meta tags.
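If you'd rather not bump version numbers by hand, one common approach is to derive the version from a hash of the file contents, so the URL changes exactly when the file does. A sketch (the file name here is hypothetical):

```python
import hashlib
from pathlib import Path

def busted_url(path):
    """Append a short content hash, e.g. style.css?v=5eb63bbb."""
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()[:8]
    return f"{Path(path).name}?v={digest}"
```

You would run this at deploy time and write the resulting URLs into the generated HTML; an unchanged file keeps its old URL and stays cached.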
I've recently made changes to a clients website but the browser is showing a cached version. It's a static website with .html files. If I delete the cache on my browser it works but what i really want is to force every visitors browser to show the latest version without them having to manually delete their cache.
I understand that you can set a version on .css & .js file to show the latest version but how do you do this with static .html files?
How to force a browser to clear the cache on my website .html files
You may want to read 3.3.1 Associativity in What Every Programmer Should Know About Memory from Ulrich Drepper.
https://people.freebsd.org/~lstewart/articles/cpumemory.pdf#subsubsection.3.3.1
The title is a little bit catchy, but it explains everything you ask in detail.
In short:
the problem with caches is the number of comparisons. If your cache holds 100 blocks, you need to perform 100 tag comparisons in one cycle. You can reduce this number with the introduction of sets: if a specific memory region can only be placed in slots 1-10, you reduce the number of comparisons to 10.
The sets are addressed by an additional bit field inside the memory address called the index.
So for instance your 16 bits (from your example) could be split into:
[15:6] block-address; stored in the `cache` as the `tag` to identify the block
[5:4] index-bits; 2 bits --> 4 sets
[3:0] block-offset; byte position inside the block
So the choice of the method depends on the available hardware resources and the access time you want to achieve. It's pretty much hardwired, since you want to reduce the comparison logic.
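To make the split concrete, here is a small numeric sketch using the field widths from the example above (10-bit tag, 2 index bits, 4 offset bits):

```python
TAG_BITS, INDEX_BITS, OFFSET_BITS = 10, 2, 4

def split_address(addr):
    """Split a 16-bit address into (tag, set index, block offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)            # bits [3:0]
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # bits [5:4]
    tag = addr >> (OFFSET_BITS + INDEX_BITS)            # bits [15:6]
    return tag, index, offset
```

Two addresses whose index bits agree land in the same set, and the cache only has to compare their tags against the (here 2^2 = 4) ways of that one set.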
answered Dec 5, 2018 at 9:55 by Domso
Im trying to understand hardware Caches. I have a slight idea, but i would like to ask on here whether my understanding is correct or not.
So i understand that there are 3 types of cache mapping, direct, full associative and set associative.
I would like to know is the type of mapping implemented with logic gates in hardware and specific to say some computer system and in order to change the mapping, one would be required to changed the electrical connections?
My current understanding is that in RAM, there exists a memory address to refer to each block of memory. Within a block contains words, each words contain a number of bytes. We can represent the number of options with number of bits.
So for example, 4096 memory locations, each memory location contains 16 bytes. If we were to refer to each byte then 2^12*2^4 = 2^16
16 bit memory address would be required to refer to each byte.
The cache also has a memory address, valid bit, tag, and some data capable of storing a block of main memory of n words and thus m bytes. Where m = n*i (bytes per word)
For an example, direct mapping
1 block of main memory can only be at one particular memory location in cache. When the CPU requests for some data using a 16bit memory location of RAM, it checks for cache first.
How does it know that this particular 16bit memory address can only be in a few places?
My thoughts are, there could be some electrical connection between every RAM address to a cache address. The 16bit address could then be split into parts, for example only compare the left 8bits with every cache memory address, then if match compare the byte bits, then tag bits then valid bit
Is my understanding correct? Thank you!
Really do appreciate if someone read this long post
Cache mapping techniques
Strings in Redis are binary safe, which means you could store binary files without any problem (https://redis.io/topics/data-types#strings).
The way you will do this depends on the language and frameworks you are using, but, generally speaking, one way to accomplish this is just storing in Redis the file content as base64.
Hope it helps.
I would like to temporary cache uploaded files in Redis. I know it is utilizing a lot of memory, but I think it is the best way to have a really low latency for a temporary amount of time.
How do I store files in Redis? Do I somehow convert them into binary and store them and decode them when I need them?
Using Redis to temporary caching files
The expression style.css?v=1, to the browser, means a request like "fetch me the file style.css with a parameter named v set to 1". As long as you change the value of the v parameter, the file will be fetched as a new unique URL.
The expression style.css?=v1 has no meaning.
The expression style.css?1 would also work.
answered Nov 22, 2018 at 11:34 (edited Nov 22, 2018 at 14:54) by Kostas Krevatas
is the query string only based on naming? Or is there any technical difference between style.css?v=1 and style-v1.css?
– Malte
Nov 22, 2018 at 11:37
The difference between these two is that if you are working on a file style.css and you want to decache it, then the ?v={version_number} at the end is enough. The file style-v1.css is a different file from style-v2.css on your server, meaning that you must create a new file each time you want to decache it in your browser.
– Kostas Krevatas
Nov 22, 2018 at 11:47
css-file is named: style.css?v=1
the link inside of html: <link rel="stylesheet" href="style.css?v=1">
Contrary to the classic style.css the style.css?v=1 doesn't get recognised:
Failed to load resource: the server responded with a status of 404 (Not Found)
Did I miss something? What's important when versioning files, and where does the syntax ?v=1 come from?
Link a CSS file with version number for cache busting
Here you have: react-native-offline-cache-webview
It can cache your WebView for offline viewing, and it works on Android and iOS.
Example app here
answered Oct 30, 2018 at 12:06 by Andriy Klitsuk
not working and not maintained for current configurations
– Sagar
Mar 22, 2021 at 6:38
I'm currently using WebView component to render a website inside my app but I also want to cache that website so that it can be opened even when there is no internet and also how to remove that cache whenever needed.
Please suggest me how to do this.
How to cache webview in react-native
I guess you are asking this question about the production environment, not for development purposes.
If that's the case, then you can check the following:
the index.html file is not getting updated.
check what cache expiration is being set in the browser. It must be no-cache.
how are you building the code? If you're using angular-cli, then ng build will create chunk names with a hash, which is different each time the content changes.
check the cache header for lazy-loaded modules.
answered Oct 9, 2018 at 18:16 by Vivek Kumar
Ideally would like for both Dev and Prod to be refreshed (from end user point of view) soon as new feature is added (and index file or other files are modified).
– Joe
Oct 9, 2018 at 18:31
What are you getting in the Response Header for the cache-contro and expires? It should be cache-control: no-cache and expires: -1
– Vivek Kumar
Oct 9, 2018 at 18:46
I guess the apache server might be setting something different and that's why the browser is loading from cache.
– Vivek Kumar
Oct 9, 2018 at 18:47
If expires: -1 for your index file then the browser will never cache it and always download from server.
– Vivek Kumar
Oct 9, 2018 at 18:49
This is all under Header -> Headers -> Response Headers (for the main page url - there are other parts of the page in the list as well): HTTP/1.1 304 Not Modified Date: Tue, 09 Oct 2018 19:10:31 GMT Server: Apache/2.4.6 (Red Hat Enterprise Linux) Last-Modified: Tue, 09 Oct 2018 16:35:36 GMT ETag: "946-577ce53c9a03f" Accept-Ranges: bytes Content-Type: text/html; charset=UTF-8 Proxy-Connection: Keep-Alive Connection: Keep-Alive Age: 0
– Joe
Oct 9, 2018 at 19:14
I created Angular 6 app and hosting it using Apache on a remote server.
Created a build with ng build --prod.
I noticed that when making changes and updating an html file - the page is being loaded from a cache and not from a new version of a file that is placed in Apache folder (using default configuration in Apache and nothing in Meta tags in HTML pages yet).
How to force reloading page on a client browser but only when there is a new version of the same page? (new changes to an existing site)
What are the best practices?
Best approach to reload page (not from cache) using Apache and Angular
The result of your map is always the original value of $arg_latitude, because that is the value that you have inserted in the right hand column.
You need to add a capture to your regular expression and use that as the new value.
For example:
map $arg_latitude $rounded_latitude {
default $arg_latitude;
~^(?<rounded>\d+\.\d\d) $rounded;
}
Use of a named capture is recommended, as a numeric capture may not be in-scope at the point where $rounded_latitude is evaluated.
See this document for more.
I am trying to use nginx caching features, however we have and endpoint that uses latitude and longitude, so for that, to increase the cache hit ratio, we have to truncate lat and long.
I created a map to ignore last two latitude digits. The problem is the map isn't working, it always returns the original latitude (45.45452).
Consider $arg_latitude being 45.45452, the expected result is 45.45.
map $arg_latitude $rounded_latitude {
default $arg_latitude;
~\d+\.\d\d $arg_latitude;
}
Any idea why it isn't working?
Regex to truncate a string on nginx
I was able to resolve this by deleting the contents of the .cache directory. Using rimraf I added a new script to my package.json to simplify the process in the future.
"cleancache": "rimraf .cache/*"
answered Jul 21, 2018 at 15:38 by Justin Fiedler
I had no issues while using gatsby develop but after trying to gatsby build, I had this error when navigating to the index. Deleting the .cache folder solved it.
– timiscoding
Oct 8, 2018 at 4:25
I am getting errors running 'gatsby develop' on a Gatsby v2 site after deleting a few old pages.
error UNHANDLED EXCEPTION Error: ENOENT: no such file or directory,
open 'D:\dev\my-gatsby-blog\public\static\d\573\path---
my-blog-post-d-5-f-ef1-Y9bdv2wHaTrcrlb7d2XeeQc6MYw.json'
websocket-manager.js:21 readCachedResults [my-blog]/[gatsby]/dist/utils/websocket-manager.js:21:24
websocket-manager.js:44 getCachedPageData [tutorial-part-four]/[gatsby]/dist/utils/websocket-manager.js:44:13
websocket-manager.js:140 Socket.s.on.path [tutorial-part-four]/[gatsby]/dist/utils/websocket-manager.js:140:26
socket.js:528 [tutorial-part-four]/[socket.io]/lib/socket.js:528:12
next_tick.js:131 _combinedTickCallback internal/process/next_tick.js:131:7
next_tick.js:180 process._tickCallback internal/process/next_tick.js:180:9
My particular site is a blog that shows a list of posts on the index. Not sure if it's relevant, but the deleted pages are Markdown files resolved with the plugin gatsby-transformer-remark.
Caching issues in Gatsby v2 project
I would suggest making your cached class thread-safe via the double-checked locking pattern, instead of using the additional decorator for thread safety:
public final class CachedText implements Text{
    private final Text origin;
    // volatile is required for double-checked locking to be safe
    // under the Java memory model
    private volatile String result;
    public CachedText(final Text orgn) {
        this.origin = orgn;
    }
    @Override
    public String asText() throws IOException {
        if(this.result == null){
            synchronized(this) {
                if(this.result == null){
                    this.result = this.origin.asText();
                }
            }
        }
        return this.result;
    }
}
There might be concerns about using DCL in general, but with result declared volatile the pattern is safe, and modern JVMs handle it well.
This should be good for your needs.
I have a simple interface
public interface Text {
String asText() throws IOException;
}
And one implementation
public final class TextFromFile implements Text{
private final String path;
public TextFromFile(final String pth) {
this.path = pth;
}
@Override
public String asText() throws IOException {
final String text = Files.readAllLines(Paths.get(this.path))
.stream()
.collect(Collectors.joining(""));
return text;
}
}
This class is very simple, it reads text from a file then returns it as a string. In order to avoid reading from the file multiple times I want to create a second class that will decorate the original one
public final class CachedText implements Text{
private final Text origin;
private String result;
public CachedText(final Text orgn) {
this.origin = orgn;
}
@Override
public String asText() throws IOException {
if(this.result == null){
this.result = this.origin.asText();
}
return this.result;
}
}
And now it works; however the result is mutable, and in order to work correctly with multiple threads I have created another decorator
public final class ThreadSafeText implements Text{
private final Text origin;
public ThreadSafeText(final Text orgn) {
this.origin = orgn;
}
@Override
public String asText() throws IOException {
synchronized(this.origin){
return this.origin.asText();
}
}
}
But now my program will spend resources on synchronization each time I call asText() .
What is the best implementation of a caching mechanism in my situation?
Simple Cache mechanism using decorators
I am not a Flask user, but perhaps this is the decorator you want:
import time

def timed_cache(cache_time:int, nullable:bool=False):
    result = ''
    timeout = 0
    def decorator(function):
        def wrapper(*args, **kwargs):
            nonlocal result
            nonlocal timeout
            if timeout <= time.time() or not (nullable or result):
                result = function(*args, **kwargs)
                timeout = time.time() + cache_time
            return result
        return wrapper
    return decorator
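Outside Flask, you can check the decorator's behavior with a quick sketch (the decorator is repeated so the snippet runs standalone; fetch is a made-up stand-in for the real request):

```python
import time

def timed_cache(cache_time: int, nullable: bool = False):
    result = ''
    timeout = 0
    def decorator(function):
        def wrapper(*args, **kwargs):
            nonlocal result, timeout
            # recompute when expired, or when the cached result is
            # empty and empty results should not be kept
            if timeout <= time.time() or not (nullable or result):
                result = function(*args, **kwargs)
                timeout = time.time() + cache_time
            return result
        return wrapper
    return decorator

calls = 0

@timed_cache(1)  # cache results for 1 second
def fetch():
    global calls
    calls += 1
    return f"payload-{calls}"
```

Within the window, repeated calls return the cached value; after cache_time seconds the wrapped function runs again.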
answered Mar 11, 2018 at 13:47 by sahama
My Flask app gets data from a URL, but only during a certain time window. Outside that range the URL returns no data, so I want to reuse the last data queried from the URL and saved in the cache.
from flask_app import app
from flask import jsonify,abort,make_response,request
from flask.ext.sqlalchemy import SQLAlchemy
from flask.ext.cache import Cache
from datetime import datetime, time
app.config['CACHE_TYPE'] = 'simple'
app.cache = Cache(app)
@app.route('/top', methods=['GET'])
@app.cache.cached(timeout=60)
def top():
now = datetime.now()
now_time = now.time()
if now_time >= time(10,30) and now_time <= time(16,30):
print "within time, use data save in cache"
# function that use last data query from url, save in cache
else:
page = requests.get('http://www.abvfc.com')
data = re.findall(r'items:(.*)',page.content)
return jsonify(data)
The problem is I can't get the last Cache data. If there is no access to the api /top in the last 60 seconds, there will be no data.
Cache the data one minutes before the url return no data at 16.30
User can use the cache data outside range of time
I am not familiar with cache, so may be my current idea is not the best way.
Python: Flask Cache for a range of time
One way to trigger the schema reload across servers is to have your post-migration script run NOTIFY postgrest_reload; in the database to which PostgREST is attached. Then on the same server as PostgREST, run a tool like pg_listen to catch that event and send the sighup. For instance: pg_listen <db-uri> postgrest_reload "killall -HUP postgrest".
You can even make PostgREST automatically reload its schema cache whenever the schema changes (using a DDL trigger), as explained in https://postgrest.com/en/v4.4/admin.html#schema-reloading
answered Mar 11, 2018 at 19:21 by Joe Nelson
However on Heroku it would require modifying the buildpack to include pg_listen. We've been looking into adding LISTEN support right inside the postgrest binary, but it requires a feature in our postgresql access library that is not yet available.
– Joe Nelson
Mar 12, 2018 at 5:56
I have a PostgREST instance deployed on Heroku using the buildpack.
The Postgres schemas are created by a Node.js program that uses node-pg-migrate.
After the migrations have run, the schema is changed and PostgREST needs to reload the schema to update its schema cache.
To refresh the cache without restarting the PostgREST server there's the option to send the server process a SIGHUP signal: killall -HUP postgrest
Since I have the migrations running from a Node.js program (npm run migrate:up) it seems to make sense to send that signal with a post-migration script. I'm not even sure if I can send such a signal from another server to the PostgREST instance.
Basically, what I'm asking is how to send a SIGHUP signal to PostgREST on Heroku from a Node.js program on another server.
How to send a SIGHUP signal to PostgREST on Heroku from another Node.js program?
I found 2 possible solutions.
First of all I want to say that it's better to cache (using the PWA Cache API) also sw.js, because when you're offline it will be requested by sw_main.js.
FIRST SOLUTION:
Use a the service worker's cache as a fallback and always attempt to go network-first via a fetch().
This only for sw.js and maybe sw_main.js.
You lose some performance gains that a cache-first strategy offers, but the js file size is very light so I don't think it's a big problem.
SECOND SOLUTION:
If your cached sw.js file has changed?
We can hook into "onupdatefound" function on the registered Service Worker.
Even though you can cache tons of files, the Service Worker only checks the hash of your registered service-worker.js.
If that file has only 1 little change in it, it will be treated as a new version.
So this confirm my previous question! I'll try it!
If it works, the second solution is the best
I have a doubt about the service worker update process.
In my project there are 2 files related to sw:
"sw.js", placed in website root, will be NOT cached (by Cache API and Web Browser).
My service worker manages the cache of all statics files and all dynamic url pages.
Sometimes I need to update it and the client must detect that there's an update and do that immediatelly.
"sw_main.js" is the script that installs my sw. This file is cached by Cache API because my app must work offline.
Inside we can find:
var SW_VERSION = '1.2';
navigator.serviceWorker.register("sw.js?v=" + SW_VERSION, { scope: "/" }).then(....
The problem is: because sw_main.js is cached, if I change SW_VERSION and then deploy the webapp online, no clients will update, because they cannot see the changes in that file.
Which is the best way to manage the SW update process?
As I now, there are 3 ways to trigger sw update:
push and sync events (but I'm not implementing these)
calling .register() only if the service worker URL has changed (but
in my case it's not possible because the sw_main.js is cached so I'm
not able to change the SW url)
navigation to an in-scope page (I think we've the same cache problem
of point 2)
I read also this: "Your service worker is considered updated if it's byte-different to the one the browser already has".
That means that if I change the content of sw.js (that is not cached), the service worker will automatically detect the update?
Thank you
Service worker automatic update process
Yes, I found the answer.
In Redis you can use a hash: a key whose value is itself a set of field/value pairs.
So we save the data under one key, and inside it store further field/value pairs (or keep a second hash acting as an index), which lets you reach the same entity through either key.
As a result, you will have something like this:
HSET myhash field1 "Hello"
More information on official site (https://redis.io/commands/hset)
If we speak about performance: HSET is O(1) for each field/value pair added, and HGET is O(1) as well, so looking the entity up through a second key stored this way stays constant-time.
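To sketch the pattern in code: below is a minimal in-memory stand-in exposing the same hset/hget calls as redis-py, used to keep a primary hash plus a name-to-id index. All key names here are made up for illustration; with a real server you would pass a redis.Redis() connection instead of the stub.

```python
import json

class FakeRedis:
    """In-memory stand-in exposing the two hash commands used below."""
    def __init__(self):
        self._hashes = {}
    def hset(self, key, field, value):
        self._hashes.setdefault(key, {})[field] = value
    def hget(self, key, field):
        return self._hashes.get(key, {}).get(field)

def save_book(r, book):
    payload = json.dumps(book)
    r.hset("books:by_id", str(book["id"]), payload)        # primary lookup
    r.hset("books:by_name", book["name"], str(book["id"]))  # secondary index

def find_by_id(r, book_id):
    payload = r.hget("books:by_id", str(book_id))
    return json.loads(payload) if payload else None

def find_by_name(r, name):
    book_id = r.hget("books:by_name", name)
    return find_by_id(r, book_id) if book_id else None
```

The name lookup costs two O(1) hash reads: one to resolve the name to an id, and one to fetch the entity itself.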
I've just got stuck.
I have an issue when working with an entity: I want to save it to Redis, but when I later want to get it back, I don't know in advance which kind of query it will be searched by.
So, I need to save several keys into Redis. And I will be able to search my Entity by several queries.
For example,
I have an entity:
public class Book
{
int Id,
string Name
}
In one time, I want to search this Entity by Id, in another case by Name.
Have you any propositions or solving how I can do it?
Maybe, I can use the tags or something like that.
Thanks a lot!!!
Redis multiple key for value (multiplying search by keys)
If you provide SpringTransactionManager to Spring, it will create an Ignite transaction around the method annotated with @Transactional. First of all, I believe DB transaction will not be even started in this case. And even if it does, it will be independent from Ignite's one.
I see two options to solve this:
Configure Ignite with JTA [1] and use JtaTransactionManager that will be also aware of DB transactions.
Instead of @Cacheable, integrate Ignite with the DB via a CacheStore [2][3] and use write-through. In this case Ignite will take care of transactional consistency.
[1] https://apacheignite.readme.io/docs/transactions#integration-with-jta
[2] https://apacheignite.readme.io/docs/3rd-party-store
[3] https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/spring/CacheSpringStoreExample.java
answered Oct 26, 2017 at 22:25 by Valentin Kulichenko
I am exploring Ignite transactional cache. I already have a piece of code which uses Spring transaction management for JDBC. I wanted to integrate ignite transactional cache in the code using Spring cache abstraction.
I came across SpringTransactionManager (provided by Ignite) but I am unable to find the proper way to use it. Essentially, I want to do something like:
@Transactional
@Cacheable(cacheNames="personcache", key="#person.id", unless="#result == null")
public Person create(Person person) {
String queryPerson = "insert into Person (id, name) values (?,?)";
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.update(queryPerson, new Object[] { person.getId(), person.getName() });
System.out.println("Inserted into Person Table Successfully");
return person;
}
When the transaction commits, database and cache should get committed together. For this, Ignite docs mention the use of SpringTransactionManager https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/transactions/spring/SpringTransactionManager.html.
I am not sure how to plug this transaction manager along with Spring's DriverManagerDataSource https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jdbc/datasource/DriverManagerDataSource.html
Can someone please guide me? An example would help a lot.
Thanks!
Ignite transactional cache with Spring transaction management
You're probably running into this open bug:
Since view decorators run on the outgoing response first, before response middleware, the cache_page decorator caches the response before any of the mentioned response middlewares have a chance to add their Vary headers. This means two things: 1) the cache key used won't include the headers the response ought to vary on, and Django may later serve that response to users who really shouldn't get it, and 2) when that cached response is later served to a user, it still won't include the Vary header that it should have, and thus may also be cached wrongly by an upstream HTTP cache.
In other words, at the time that the response is cached the SessionMiddleware hasn't yet had a chance to set the Vary: Cookie header, so all sessions will share the same cache key.
You can probably work around this by specifying the Vary header explicitly. For example:
from django.views.decorators.cache import cache_page
from django.views.decorators.vary import vary_on_cookie
@cache_page(60 * 15)  # cache_page requires a timeout argument (in seconds)
@vary_on_cookie       # vary_on_cookie is applied without parentheses
def my_view(request):
    pass
answered Aug 21, 2017 at 17:29
Kevin Christopher Henry
This could be it! Thank you for insight.
– Brachacz
Aug 22, 2017 at 8:41
|
|
I've got a pretty complex webapp based on Django 1.11.
Some time ago users started reporting that they are getting 'someone else's views' - memcached provided them with html cached by decorator @cache_page(xx) without distinguishing between sessions within the cache grace period.
Upon further investigation, I discovered that in some cases the Vary: Cookie header was missing and the wrong 'session' was served. What's strange is that it only showed when querying the backend with curl (which has no session or user, so the backend served a cached logged-in view).
Unfortunately, this issue is really hard to reproduce; sometimes it occurs, sometimes it doesn't. I even built a simple Django app from scratch to try to pin down the cause.
What I observed is that the issue does not occur when @cache_page is removed or login_required is added.
I ended up removing all @cache_page decorators from the views, and the issue has not been observed in production since, but that's a workaround and I would like to know the cause.
If anyone has any hint what could be the cause, it would be greatly appreciated!
|
Django missing Vary:Cookie header for cached views
|
For Google's Gboard app, the path is /data/data/com.google.android.inputmethod.latin/files/cache. Files in .dict and .acc formats exist there, containing the typing data for each language one uses with the app.
You will need root access to navigate to that path and open the files.
|
I'm trying to do vulnerability assessment on Android devices.
I'd like to know if , with root permission, there's a way to view all the words saved in the cache of the Android keyboard (all the word that are suggested while typing).
|
View Android keyboard cache
|
IMHO, you don't have to cache all the entries in memory, only a part of them. Maybe:
Maybe just use a ring buffer, or
More complicated, and make more sense, to implement a LFU Cache, that keeps the N top most frequently accessed item only. See this question for a hint of how to implement such a cache.
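As a concrete starting point, a bounded LRU cache (a simpler cousin of the LFU suggested above) can be sketched in a few lines with an access-ordered LinkedHashMap; the capacity value here is an arbitrary assumption:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Keeps only the N most recently accessed entries; the eldest entry is
// evicted automatically whenever the size exceeds the capacity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

A true LFU would track access counts rather than recency, as the linked question describes, but this one-class version is often good enough to keep the hot names in memory.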
answered Jul 21, 2017 at 4:45
Duong Nguyen
The problem is I don't know which are the most frequently used names. They didn't record that type of activities.
– Rony
Jul 21, 2017 at 5:56
Based on run-time activity (the 'get' action), the LFU algorithm will help you keep the most frequently used items and evict those less used. You don't need to identify them yourself.
– Duong Nguyen
Jul 24, 2017 at 6:17
|
|
I have to access a database with 380,000 entries. I don't have write access to the DB, I can just read it. I've made a search function using a map to search for users by firstname. Here is my process:
1 - Load everything from the DB
2 - Store everything into a Map<Character, ArrayList<User>>, using the letters of the alphabet to bucket users by the first letter of their first name.
<A> {Alba, jessica, Alliah jane, etc ...}
<B> {Birsmben bani, etc ...}
When someone searches for a user, I take the first letter of the typed first name, call map.get(firstLetter), then iterate over the ArrayList to find the matching users.
The Map takes a huge amount of memory, I guess (380,000 User objects); I had to increase the heap size.
I want to make it faster, perhaps by using the first name itself as the Map key (though many people share the same first name).
I have two solutions in mind:
1 - Still use a map with firstname as key (increasing the heap size again?)
2 - Use files on the disk instead of Map (Alba.dat will contain all Alba for example) and open the right file for each search. No need to incease the heap size, but are there any side effects?
Which one is better? (pros and cons)
Update with more info
It's a database of customers who call our customer service line. The person who takes the call has to search by the customer's name (usually first name, then last name). Querying the DB directly is too slow. The solution I've implemented is already much faster (1/2 seconds vs. 26 seconds using the DB), but I want to improve it.
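As one possible refinement (a sketch, not part of the original setup): keying a TreeMap by the lower-cased first name gives both exact and prefix lookups without scanning a whole per-letter bucket. The class and method names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Index keyed by lower-cased first name; prefix search exploits the sorted order.
class NameIndex {
    private final TreeMap<String, List<String>> byFirstName = new TreeMap<>();

    void add(String firstName) {
        byFirstName.computeIfAbsent(firstName.toLowerCase(), k -> new ArrayList<>())
                   .add(firstName);
    }

    // All entries whose first name starts with the given prefix.
    List<String> searchPrefix(String prefix) {
        String lo = prefix.toLowerCase();
        String hi = lo + Character.MAX_VALUE; // exclusive upper bound for the prefix range
        List<String> result = new ArrayList<>();
        for (List<String> bucket : byFirstName.subMap(lo, true, hi, false).values()) {
            result.addAll(bucket);
        }
        return result;
    }
}
```

Memory use is comparable to the per-letter Map, but each lookup touches only the names that actually match, which matters when one letter holds tens of thousands of entries.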
|
Speed a search cache without using too much memory
|
I used the buildDataSourceFactory below, and it stores the cache:
DataSource.Factory buildDataSourceFactory(boolean cache) {
if (!cache) {
return new DefaultDataSourceFactory(context, BANDWIDTH_METER,
buildHttpDataSourceFactory(BANDWIDTH_METER));
}else{
return new DataSource.Factory() {
@Override
public DataSource createDataSource() {
LeastRecentlyUsedCacheEvictor evictor = new LeastRecentlyUsedCacheEvictor(100 * 1024 * 1024);
SimpleCache simpleCache = new SimpleCache(new File(context.getCacheDir(), "media_cache"), evictor);
return new CacheDataSource(simpleCache, buildCachedHttpDataSourceFactory(BANDWIDTH_METER).createDataSource(),
new FileDataSource(), new CacheDataSink(simpleCache, 10 * 1024 * 1024),
CacheDataSource.FLAG_BLOCK_ON_CACHE | CacheDataSource.FLAG_IGNORE_CACHE_ON_ERROR, null);
}
};
}
}
private DefaultDataSource.Factory buildCachedHttpDataSourceFactory(DefaultBandwidthMeter bandwidthMeter) {
return new DefaultDataSourceFactory(context, bandwidthMeter, buildHttpDataSourceFactory(bandwidthMeter));
}
|
I'm trying to cache HLS and DASH streaming video.
I have tried many solutions, but none of them work with ExoPlayer v2.2.
Many issues redirect to the links below, but I haven't found a proper solution:
https://github.com/google/ExoPlayer/issues/420 and Using cache in ExoPlayer.
In one solution, the ExtractorSampleSource class is not found in Google ExoPlayer 2.2:
OkHttpClient okHttpClient = new OkHttpClient.Builder().cache(new okhttp3.Cache(context.getCacheDir(), 1024000)).build();
OkHttpDataSource okHttpDataSource = new OkHttpDataSource(okHttpClient, "android", null);
OkHttpDataSource ok2 = new OkHttpDataSource(okHttpClient, "android", null);
HttpDataSource dataSource = new CacheDataSource(context, okHttpDataSource, ok2);
ExtractorSampleSource sampleSource = new ExtractorSampleSource(
uri,
dataSource,
allocator,
buffer_segment_count * buffer_segment_size,
new Mp4Extractor(), new Mp3Extractor());
In another solution I got a similar error: the DefaultUriDataSource class is not found in v2.2.
DataSource dataSource = new DefaultUriDataSource(context, null, new OkHttpDataSource(getClient(context), userAgent, null, null/*, CacheControl.FORCE_CACHE*/));
All the solutions are one to two years old and are not supported by the latest version of Google ExoPlayer v2.2.
Does anyone have an idea, a sample, or any solution for caching HLS and DASH streams?
|
Android Google Exoplayer 2.2 HLS and DASH streaming cache
|
Sorry to answer my own question, but I wanted to make sure that people see this answer in case they have the same problem...
I tried to delete the entire resource group multiple times over the course of several hours, but it didn't work. As David suggested, I ended up submitting a support ticket, and the MS support folks cleared up the problem overnight. They said it was nothing that I did, but rather that "the Redis product group identified and mitigated an issue".
answered Mar 2, 2017 at 19:26
user3027881
|
|
I created an Azure Redis Cache in Azure Government. The deployment failed after almost a half hour - I have no idea why. However, the ProvisioningState is stuck at Creating, which means I can't delete it. Any ideas on how to delete it?
(I've done this in regular Azure several times, and everything works fine - I've only seen this in Azure Government.)
|
Azure Redis Cache stuck in Create
|
Turns out there is a configuration for this, in an unexpected place.
In my WebMvcConfigurerAdapter-extending config class, in the addResourceHandlers method:
registry.addResourceHandler("**/pluginresource/**")
.setCacheControl(CacheControl.noStore() )
.resourceChain(false)
.addResolver(pluginResourceResolver);
resourceChain(false) actually sets whether or not Spring will create the chain with a default handler that makes use of caching.
|
I have a Spring Boot MVC application where I serve content packaged in OSGi bundles. The goal behind using OSGi is to make these content bundles fully self-contained and hot-swappable.
Requests are mapped to bundle resources via the url.
The problem I'm running into happens when I replace a bundle with a new one that would be pointed to by the same URL and a request comes in for a resource that had been served from the old bundle.
Spring sees that it has already returned a resource for that URL and so attempts to open the stream for the cached bundle URL it has to determine when the resource was last modified, which is now associated with the now-unavailable bundle and throws an IOException.
Everything I have found so far involves modifying the client-side caching policy, not the Spring internal cache. Additionally, trying to disable the cache via spring.resource.chain.caching or spring.resources.cache-period do not work, as Spring still tries to figure out the last modified timestamp regardless.
|
Prevent Spring from attempting to read old resource stream
|
So my Question is, is there a way to Cache my Data so i can get it even if my Application restarted?
Yes. If you use CacheItemPriority.NotRemovable, the cache will survive even if the application restarts.
ObjectCache cache = System.Runtime.Caching.MemoryCache.Default;
CacheItemPolicy policy = new CacheItemPolicy
{
    Priority = CacheItemPriority.NotRemovable,
    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) // no semicolon inside the initializer
};
cache.Add(item, policy);
MSDN NOTE: Adding an entry to the cache with a priority level of NotRemovable has the potential to overflow the cache with entries that can never be removed. Cache implementations should only set the NotRemovable priority for a cache entry if they provide ways to evict such entries from the cache and to manage the number of cache entries.
answered Feb 17, 2017 at 16:15
NightOwl888
If "app restart" means new process / app domain, there is no way the MemoryCache still has your previously cached items, no matter how hard you cache them ;)
– MichaC
Feb 18, 2017 at 10:46
This works in ASP.NET-based applications. Of course, if you are talking about a thick client, restarting the application is a different story and all memory allocated will be lost, including the cache. But I am using this in ASP.NET and I can restart the application without losing cached data.
– NightOwl888
Feb 18, 2017 at 12:53
|
|
I'm trying to cache data that I get from a SQL service. I tried using the MemoryCache class from System.Runtime.Caching, but it seems the cache is emptied whenever I exit the application.
So my question is: is there a way to cache my data so I can get it even after my application has restarted?
I'm working with a service from BizTalk, so the application gets started whenever it's needed, but the SQL polling takes too long.
Here is the code I'm testing with:
const string KEY = "key";
static void Main(string[] args)
{
MemoryCache cache = MemoryCache.Default;
Cache(cache);
Get(cache);
Console.Read();
}
private static void Get(MemoryCache cache)
{
string result = cache.Get(KEY) as string;
Console.WriteLine(result);
}
private static void Cache(MemoryCache cache)
{
CacheItemPolicy policy = new CacheItemPolicy();
policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10);
bool res = cache.Add(KEY, "This was Cached", policy);
if (res)
Console.WriteLine("Added Succesfully");
else
Console.WriteLine("Adding Failed");
}
|
Cache data outside of Application
|
No, a Spark RDD cannot be used in another application or in another run.
You can connect Spark with, for example, Hazelcast or Apache Ignite to keep the RDD's data in memory. Another application will then be able to read the data saved by the first application.
|
Is there a possibility in Spark to re-use a cached RDD in another application (or in another run of the same application)?
JavaRDD<ExampleClass> toCache = ... // transformations on the RDD
toCache.cache(); // can this be reused somehow in another application or further runs?
|
Reuse a cached Spark RDD
|
You can use an alternative to the cache: localStorage. Each website is allowed to store up to 5 MB of data on the user's disk.
So use this to save data:
// browser supports localStorage
if (typeof(Storage) !== "undefined") {
    localStorage.setItem("mydataname", data);
}
And to retrieve or download a new one:
// browser supports localStorage
if (typeof(Storage) !== "undefined") {
    var data = localStorage.getItem("mydataname");
    if (data) { // data exists in localStorage
        // Use data; no need to download a new version.
    }
    else { // data doesn't exist: not saved yet, or it has been removed
        // Download a new version of the data and save it using the code above.
    }
}
else {
    // Browser doesn't support localStorage; re-download the data.
}
More about localStorage here.
answered Jan 14, 2017 at 10:29
ibrahim mahrir
|
|
So I'm using this code to change the content of my website and loading specific plugins for each "page":
$.ajax({
url: urlPath,
type: 'GET',
success: loadContent //content and plugins are loaded through this
});
Now I noticed it doesn't cache the plugins loaded through loadContent, downloading them again and again each time; therefore the page using AJAX requests is 0.5s to 1.5s slower than simple HTTP requests (obviously after the plugins have already been cached from the first load).
Using cache: true/false doesn't make any difference.
I've read this can't be done because JavaScript can't write to disk, but maybe I missed something and there is a way to cache the plugins and avoid losing additional time on each load?
|
Save plugin loaded with Ajax to cache
|
Adding in a no-op .catch() to the end of your fetch() promise chain should prevent the Uncaught (in promise) TypeError: Failed to fetch message from being logged:
var fetchPromise = fetch(event.request).then(function(networkResponse) {
// if we got a response from the cache, update the cache
if (response) {
console.log("cached page: " + event.request.url);
cache.put(event.request, networkResponse.clone());
}
return networkResponse;
}).catch(function() {
// Do nothing.
});
I know you mentioned that you tried to add in a .catch(), but perhaps it wasn't located on the correct part of the chain?
I'm not aware of any way to prevent the Failed to load resource: net::ERR_CONNECTION_REFUSED message from being logged, as that comes directly from Chrome's native networking code. There's nothing you could "handle" from JavaScript.
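If you would rather hand the page something useful than silently swallow the failure, the catch handler could return a synthetic fallback response instead (a sketch; the status code and body text here are assumptions):

```javascript
// Minimal offline fallback: a fetch catch handler can return this instead of
// nothing, so the page receives a well-formed 503 rather than a rejected fetch.
function offlineFallback() {
  return new Response("You appear to be offline.", {
    status: 503,
    statusText: "Service Unavailable",
    headers: { "Content-Type": "text/plain" },
  });
}
```

You would then write `.catch(() => offlineFallback())` in place of the empty catch.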
answered Dec 19, 2016 at 16:20
Jeff Posnick
Yep, it seems to be the consensus that you cannot hide the network state alerts :( The catch still didn't work but I did get it working. See my final code. Thanks for your help.
– Nigel Johnson
Dec 22, 2016 at 5:56
|
|
I am using the caching facility of service workers, however, after an update to the site, the cache still serves the old data when I refresh the page.
So as per the answer in this post I implemented the stale-while-revalidate:
self.addEventListener('fetch', function(event) {
event.respondWith(caches.open(CACHE_NAME).then(function(cache) {
return cache.match(event.request).then(function(response) {
var fetchPromise = fetch(event.request).then(function(networkResponse) {
// if we got a response from the cache, update the cache
if (response) {
console.log("cached page: " + event.request.url);
cache.put(event.request, networkResponse.clone());
}
return networkResponse;
});
// respond from the cache, or the network
return response || fetchPromise;
});
}));
});
While connected, all seems well, and I can see the console log message.
When I stop the server and refresh the page I get a load of exceptions.
Uncaught (in promise) TypeError: Failed to fetch
Failed to load resource: net::ERR_CONNECTION_REFUSED
I added a catch on the fetch() to try and handle the exceptions but it still fails (and the catch is not called). I added a catch on the caches.open() and respondWith() but same thing.
I know I can ignore these errors, but I'd rather handle them and do nothing (including not outputing them to the console) so I can see the meaningful stuff in the console I am outputting.
How can I stop the error messages?
The error when the service worker installs is less of a problem, but it would also be nice to catch and ignore it.
|
Service Worker throws uncaught errors when the server is down
|
Great question! The Extensible Service Proxy (ESP) does not perform request caching. Its function is to intercept incoming requests, validate auth tokens, and then forward the request to Google Service Control where additional API Management rules are applied as defined in your Open API spec. Endpoints uses a distributed proxy model for better performance, to avoid the extra network hop that's typically incurred with a traditional multi-tenant API proxy. This is in fact the same model used internally within Google to power our own APIs.
Please let us know if you have any more questions!
|
Will requests to Cloud Endpoints get cached?
The official docs are a little light on this matter. The docs read:
Cloud Endpoints uses the distributed Extensible Service Proxy to
provide low latency and high performance for serving even the most
demanding APIs. [...] and can be used with Google App Engine, Google
Container Engine, Google Compute Engine or Kubernetes.
A 'distributed extensible service proxy' makes me think the Endpoint is distributed to the edge nodes for faster responses, but the docs don't specifically state this.
We can use Cloud CDN to cache requests from GAE, Compute and Container Engine. Endpoints can be used with all those. This makes me wonder if there's some magic in the background with CDN+compute to cache the Endpoints responses. Again, the docs are a little light on this.
Has anyone figured this out? Thanks!
|
Caching of Google Cloud Endpoints?
|
The caching mechanism is implemented in the ConfigListener class of the ModuleManager (source of write config & read config).
As you can see there, the only supported caching method is writing the cached configuration to a file.
It is instantiated as a default in the DefaultListenerAggregate (source), which again is hard coded in the ModuleManagerFactory of the MVC module (source).
In order to replace this with your own logic, you would have to:
Replace the ConfigListener with your own (or at least extend the respective parts)
Change the ModuleManagerFactory to explicitly set your own ConfigListener on the DefaultListenerAggregate before it gets lazy-created.
While feasible, I don't think it is actually worth the effort. Since the merged config file is a PHP file, it should get cached by OPcache anyway, and OPcache is ultimately a PHP-optimized in-memory cache, so I'd expect it to be even faster than any general-purpose in-memory store.
answered Nov 13, 2016 at 11:12
Fge
|
|
One of the must-haves for the performance optimization of a Zend Framework 2 application is the caching of the configurations. The idea is to merge them to one big config file (or actually two files, e.g. module-classmap-cache.php and module-config-cache.php), so that the config files don't need to be opened and merged on every request. (See the info in the official documentation and a how-to in the article of Rob Allen "Caching your ZF2 merged configuration"):
application.config.php
return [
'modules' => [
...
],
'module_listener_options' => [
...
'config_cache_enabled' => true,
'config_cache_key' => 'app_config',
'module_map_cache_enabled' => true,
'module_map_cache_key' => 'module_map',
'cache_dir' => './data/cache',
],
];
I'd like to optimize it a bit more and load the configs from the in-memory cache (e.g. APCu). Is it provided by the framework? Or do I have to write a this functionality myself?
|
How to cache the application configs in memory in Zend Framework 2?
|
Assuming you are using a Docker runner, you need to update the config.toml file and make /root/.ivy2 persistent with a volume.
Here is mine:
concurrent = 1
check_interval = 0

[[runners]]
  name = "xxx"
  url = "yyy"
  token = "zzz"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = ["/cache", "/srv/home:/root/"]
  [runners.cache]
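As an alternative worth noting (an untested sketch): GitLab's cache: mechanism can only archive paths inside the project directory, which is why ~/.ivy2 in the question's .gitlab-ci.yml is ignored. Redirecting Ivy into the project tree avoids touching config.toml at all; the sbt.ivy.home property is standard, but the exact layout below is an assumption:

```yaml
variables:
  # Point sbt/Ivy at a project-local directory the runner is allowed to cache.
  SBT_OPTS: "-Dsbt.ivy.home=${CI_PROJECT_DIR}/.ivy2"

cache:
  key: "$CI_JOB_NAME"
  paths:
    - .ivy2/
```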
answered Nov 12, 2016 at 16:38
Sebastian Piu
|
|
When using a GitLab CI runner with sbt, I want to avoid downloading all the sbt jar files on every build. Is there any way I can cache this? Here's my .gitlab-ci.yml file, which does not successfully cache the .ivy2 files.
image: openjdk:8-jre-alpine
services:
- docker:dind
variables:
SBT_VERSION: "0.13.13"
SBT_HOME: "/usr/local/sbt"
SBT_JAR: "http://dl.bintray.com/sbt/native-packages/sbt/${SBT_VERSION}/sbt-${SBT_VERSION}.tgz"
cache:
paths:
- ~/.ivy2
stages:
- setup
setup:
stage: setup
script:
- export PATH="${SBT_HOME}/bin:$PATH"
- apk --update add bash wget curl tar git
- wget ${SBT_JAR}
- mkdir /usr/local/sbt
- tar -xf sbt-${SBT_VERSION}.tgz -C /usr/local/sbt --strip-components=1
- echo -ne "- with sbt sbt-${SBT_VERSION}\n" >> /root/.built
- rm sbt-${SBT_VERSION}.tgz
- echo "$PATH"
- cat /root/.built
- ls -als /usr/local/sbt
- sbt sbt-version
- ls -als ~/.ivy2
|
How to enable .ivy2 caching for sbt when using a gitlab ci runner between builds
|
The problem is not the way you call the read() method. Your numHits() method always returns 0 because you return the value of a local variable, which is always initialized to 0.
numHits() would only make sense if HitCounter becomes an instance variable :
private int HitCounter = 0;
public int numHits(){
if(read(addr)){ // you should replace addr with some variable that you actually declare
return ++HitCounter;
}
return 0;
}
I also changed return HitCounter++ to return ++HitCounter, since the post-increment operator (return HitCounter++) would return the old value of HitCounter instead of the incremented value.
EDIT: Another issue is that you pass read() a variable that isn't declared anywhere. You should decide what you actually want to pass to that method.
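Putting the fixes together, here is a self-contained sketch (an illustration only, using the question's 9/4/3-bit split of a 16-character address; the Map stands in for the real cache array and its valid bits):

```java
import java.util.HashMap;
import java.util.Map;

// Direct-mapped cache sketch: read() records hits in an instance field,
// so numHits() can report the running total at any time.
class SimpleDirectMappedCache {
    private final Map<String, String> slots = new HashMap<>(); // slot bits -> tag bits
    private int hits = 0;

    boolean read(String addr) {
        String tag = addr.substring(0, 9);
        String slot = addr.substring(9, 13);
        if (tag.equals(slots.get(slot))) {
            hits++;           // hit: tag currently resident in this slot matches
            return true;
        }
        slots.put(slot, tag); // miss: install the new tag in this slot
        return false;
    }

    int numHits() {
        return hits;
    }
}
```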
edited Oct 25, 2016 at 5:53
answered Oct 25, 2016 at 5:47
Eran
Really? I think it's somewhere in read(), because it keeps giving me this error.
– Computer Geek
Oct 25, 2016 at 5:49
Multiple markers at this line - String cannot be resolved to a variable - Syntax error on token "addr", delete this token
– Computer Geek
Oct 25, 2016 at 5:49
It keeps giving me this error: "addr cannot be resolved to a variable."
– Computer Geek
Oct 25, 2016 at 5:53
|
|
class DirectMappedCache extends Cache {
private int logLineSize; //log_2 of the number of bytes per cache line
private int logNumLines; //log_2 of the number of cache lines
private TreeMap<String, CacheSlot> tmap;
public DirectMappedCache(int logLineSize, int logNumLines) {
//constructor that takes lengths of different fields
this.logLineSize = logLineSize;
this.logNumLines = logNumLines;
tmap = new TreeMap<String, CacheSlot>();
}
public boolean read(String addr) {
System.out.println("Read from address " + addr + ": ");
String tag = addr.substring(0,9);
String slotnumber = addr.substring(9,13);
String offset = addr.substring(13,16);
if(tmap.containsKey(slotnumber)){
CacheSlot temp = tmap.get(slotnumber);
if(temp.valid){
if(tag.equals(temp.tag)){
return true;
}
}
}
CacheSlot put = new CacheSlot();
put.valid = true;
put.tag = tag;
tmap.put(slotnumber, put);
return false;
}
public int numHits(){
int HitCounter = 0;
if(read(addr)){
return HitCounter++;
}
return 0;
}
}
I am making a Cache Simulator but I don't think my numHits() is working because the way I'm calling the value is wrong.
|
How can I call boolean return value into other method?
|