Put this code in onDestroy() to clear the app cache:
@Override
protected void onDestroy() {
super.onDestroy();
try {
trimCache(this);
// Toast.makeText(this,"onDestroy " ,Toast.LENGTH_LONG).show();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
public static void trimCache(Context context) {
try {
File dir = context.getCacheDir();
if (dir != null && dir.isDirectory()) {
deleteDir(dir);
}
} catch (Exception e) {
// TODO: handle exception
}
}
public static boolean deleteDir(File dir) {
if (dir != null && dir.isDirectory()) {
String[] children = dir.list();
for (int i = 0; i < children.length; i++) {
boolean success = deleteDir(new File(dir, children[i]));
if (!success) {
return false;
}
}
}
// The directory is now empty so delete it
return dir.delete();
}
|
Hi I've got an app with a code size of approximately 1/2mb. The app includes a webview for showing several pages. My cache therefore ends up at 2,5mb. Not much but enough. How can I clear my cache onDestroy?
Thx!
|
Clearing android cache
|
I don't usually recommend using the MySQL query cache. It sounds great in theory, but unfortunately isn't a great win for caching efficiently, because access to it from queries is governed by a mutex. That means many concurrent queries queue up to get access to the query cache, and this harms more than it helps if you have a lot of concurrent clients.
It even harms INSERT/UPDATE/DELETE, even though these queries don't have result sets, because they purge query results from the query cache if they update the same table(s). And this purging is subject to the same queueing on the mutex.
A better strategy is to use memcached for scalable caching of specific query results, but this requires you to think about what you want to cache and to write application code to access memcached and fail back to MySQL if the data isn't present in the cache. That's more work, but if you do it right it gives better results.
See TANSTAAFL.
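The cache-aside strategy described here can be sketched as follows; this is a Python sketch with a plain dict standing in for the memcached client, and the query helper and key scheme are illustrative, not a real API:

```python
# Cache-aside: try the cache first, fall back to MySQL on a miss.
# A dict stands in for the memcached client here.
memcached = {}

def query_mysql(sql):
    # Hypothetical stand-in for the real database call.
    return [("row-for", sql)]

def cached_query(sql):
    key = "q:" + sql
    result = memcached.get(key)
    if result is None:              # cache miss: hit MySQL, then populate
        result = query_mysql(sql)
        memcached[key] = result     # a real client would also set a TTL
    return result
```

A real memcached client exposes the same get/set shape, so the only extra work is choosing cache keys and deciding when to invalidate them.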
|
I want to cache data on MySQL
SET GLOBAL query_cache_size = SOME_SIZE;
Is that all that's required for caching data [efficiently] in MySQL?
Do I need to add something extra to use the cache efficiently?
I don't have good knowledge of data caching but still need to use it for performance reasons, so if I've missed giving some vital info, answer this question assuming the system is in its default state.
|
Caching data using MySQL
|
If the images are not loaded when you call them, jQuery will return an empty object. Move your assignment inside your document.ready function:
$(document).ready(function() {
var $one = $('#image1');
var $two = $('#image2');
animate1();
animate2();
});
// ... etc.
If you need to cache them for later use outside of your initialization script then add them to a storage object:
var my_storage_object = {};
$(document).ready(function() {
var $one, $two;
my_storage_object.$one = $one = $('#image1');
my_storage_object.$two = $two = $('#image2');
animate1();
animate2();
});
// ... etc.
Then later on, outside of document.ready you can call:
my_storage_object.$one //still has a reference to the jQuery object.
|
I need to cache about 100 different selections for animating. The following is sample code. Is there a syntax problem in the second sample? If this isn't the way to cache selections, it's certainly the most popular on the interwebs. So, what am I missing?
note: p in the $.path.bezier(p) below is a correctly declared object passed to jQuery.path.bezier (awesome animation library, by the way)
This works
$(document).ready(function() {
animate1();
animate2();
})
function animate1() {
$('#image1').animate({ path: new $.path.bezier(p) }, 3000);
setTimeout("animate1()", 3000);
}
function animate2() {
$('#image2').animate({ path: new $.path.bezier(p) }, 3000);
setTimeout("animate2()", 3000);
}
This doesn't work
var $one = $('#image1'); //problem with syntax here??
var $two = $('#image2');
$(document).ready(function() {
animate1();
animate2();
})
function animate1() {
$one.animate({ path: new $.path.bezier(p) }, 3000);
setTimeout("animate1()", 3000);
}
function animate2() {
$two.animate({ path: new $.path.bezier(p) }, 3000);
setTimeout("animate2()", 3000);
}
|
How do I cache jQuery selections?
|
is it possible to Clear a certain page
from the Cache?
Yes:
HttpResponse.RemoveOutputCacheItem("/pages/default.aspx");
You can also use Cache dependencies to remove pages from the Cache:
this.Response.AddCacheItemDependency("key");
After making that call, if you modify Cache["key"], it will cause the page to be removed from the Cache.
In case it might help, I cover caching in detail in my book: Ultra-Fast ASP.NET.
|
If you cache pages in a HttpHandler with
_context.Response.Cache.SetCacheability(HttpCacheability.Public);
_context.Response.Cache.SetExpires(DateTime.Now.AddSeconds(180));
is it possible to clear a certain page from the cache?
|
ASP.NET Clear Cache
|
See here.
You can set the size of the disk cache using the InternalCacheDiskCacheFactory.
builder.setDiskCache(new InternalCacheDiskCacheFactory(context, yourSizeInBytes));
You can apply this in your project like below:
import android.annotation.TargetApi;
import android.content.Context;
import android.os.Build;
import android.os.Environment;
import android.os.StatFs;
import android.util.Log;
import com.bumptech.glide.Glide;
import com.bumptech.glide.GlideBuilder;
import com.bumptech.glide.load.engine.cache.InternalCacheDiskCacheFactory;
import com.bumptech.glide.module.GlideModule;
import com.example.MyApplication;
import java.util.Locale;
public class LimitCacheSizeGlideModule implements GlideModule {
@Override
public void applyOptions(Context context, GlideBuilder builder) {
if (MyApplication.from(context).isTest())
return; // NOTE: StatFs will crash on robolectric.
builder.setDiskCache(new InternalCacheDiskCacheFactory(context, yourSizeInBytes));
}
@Override
public void registerComponents(Context context, Glide glide) {
}
}
and then in your manifest add it like this
<manifest
...
<application>
<meta-data
android:name="YourPackageNameHere.LimitCacheSizeGlideModule"
android:value="GlideModule" />
...
</application>
</manifest>
|
According to the documentation,
The internal cache factory places the disk cache in your application's internal cache directory and sets a maximum size of 250MB.
As I am trying to implement some offline features in my apps, they will possibly require a cache size larger than 250MB. So does Glide allow modifying the cache size, or do I need to find an alternative way of doing this? If so, what mechanism should I follow?
I have seen in the documentation an approach to increase that.
builder.setDiskCache(
new InternalCacheDiskCacheFactory(context, yourSizeInBytes));
How do I implement that in my code?
|
How to increase the cache size in Glide android?
|
RequestOptions provides type-independent options to customize loads with Glide in the latest versions of Glide.
Make a RequestOptions object and use it when loading the image.
RequestOptions requestOptions = new RequestOptions()
.diskCacheStrategy(DiskCacheStrategy.NONE) // because file name is always same
.skipMemoryCache(true);
Glide.with(this)
.load(photoUrl)
.apply(requestOptions)
.into(profile_image);
|
I know it is a very basic question. I have tried numerous solutions but am not able to understand them.
What I want
I upload an image to the server and in return I get a URL, but the problem is that when setting the image using this URL, the old image is shown. This happens because Glide is serving the old cache and not updating it.
How do I solve this?
Glide.clear(profilePic);
Glide.with(getApplicationContext())
.load(url)
.diskCacheStrategy(DiskCacheStrategy.ALL)
.skipMemoryCache(true)
.transform(new CircleTransform(MainProfile.this))
.into(profilePic);
Currently the pic is changed, but when I press the back button and come back to this activity it loads the old image. I'm loading the image from cache like this:
//setting up the profile pic
Glide.with(getApplicationContext())
.load(userProfilePicUrl)
.asBitmap()
.centerCrop()
.into(new BitmapImageViewTarget(profilePic) {
@Override
protected void setResource(Bitmap resource) {
RoundedBitmapDrawable circularBitmapDrawable =
RoundedBitmapDrawableFactory.create(MainProfile.this.getResources(), resource);
circularBitmapDrawable.setCircular(true);
profilePic.setImageDrawable(circularBitmapDrawable);
}
});
The problem is when I come back to this activity it shows old pic instead of new one.
|
Delete cache while using glide
|
It uses the docker centos7 image downloaded from Docker Hub.
That will always be the case, cache or no cache.
The --no-cache would apply to the directive/step after FROM.
|
I build an image:
Dockerfile:
FROM centos:7
build command:
$ docker build -t my-image:1.0 .
Now I make a second image (which is based on the original dockerfile)
Dockerfile:
FROM centos:7
RUN yum install -y mysql
I build with the --no-cache option on true
$ docker build --no-cache=true -t my-image:1.1 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM centos:7
---> 970633036444
Step 2 : xx
It seems to use the cache. Also, when I try to delete my-image:1.0:
docker rmi 970633036444
Error response from daemon: conflict: unable to delete 970633036444 (cannot be forced) - image has dependent child images
What am I doing wrong?
|
docker build --no-cache=true still builds with cache?
|
Constant variables accessed by immediate constants (constants in the opcode) or indexed constants (accessed via ldc instruction) are accessed by (bank, offset) pair, not by address. These reads go through the immediate constant and index constant caches. On some chips these are the same cache. Examples of constant accesses are:
// immediate constant
ADD r0, r1, c[bank][offset]
// r1 has packed version of bank, offset
LDC r0, r1
Arguments for cc2.0 and above are passed such that you will see immediate constant accesses.
Constant accesses go through the constant memory hierarchy which in end results in a global address which can be in system memory or device memory.
If you set a constant variable to a pointer to global then the data will be read through the data hierarchy.
If you define a const variable the compiler can choose to put the read only data in either a bank/offset or an address.
If you review the SASS (nvdisasm or tools) you will see LD instructions. Depending on the chip this data may be cached in the L1/Tex cache then L2 cache.
SHARED
LDS/STS/ATOMS -> shared memory
GENERIC
LD/ST (generic to shared) -> shared memory
LD/ST (generic to global) -> L1/TEX -> L2
LD/ST (generic to local) -> L1/TEX -> L2
LOCAL
LDL/STL (local) -> L1/TEX -> L2
GLOBAL
LDG/STG (global) -> TEX -> L2
INDEXED CONSTANT
LDC -> indexed constant cache -> ...-> L2
L2 misses can go to device memory or pinned system memory.
In the case you mention the constant variable will very likely be accessed via an immediate constant (best performance assuming reasonable size of constants) and the de-referenced pointer will result in a global memory access.
On GK110 LDG instructions are cached in the texture cache.
On Maxwell LDG.CI instructions are cached in the texture cache. LDG.CA operations are cached in the texture cache (GM20x). All other LDG accesses go through the texture cache but are not cached beyond the lifetime of the warp instruction.
|
Instead of passing lots of arguments to a kernel, I use a __constant__ variable. This variable is an array of structures which contains many pointers to data in global memory (these pointers would be the argument list); an array for the multiple different datasets a kernel is called on. Then the kernel accesses this array and dereferences to global memory the appropriate data. My question is: does this data get cached through L2 or the constant cache? Moreover, if the latter, and if loaded via __ldg(), does it go through L1 or still the constant cache?
To be more specific the data itself sits in global, however the kernel dereferences a __constant__ variable to get to it. Does this adversely affect caching?
|
CUDA __constant__ deference to global memory. Which cache?
|
Here's a simple test that shows the answer:
System.out.println("Thread.currentThread() = " + Thread.currentThread());
LoadingCache<String, String> cache = CacheBuilder
.newBuilder()
.refreshAfterWrite(2, TimeUnit.SECONDS)
.build(new CacheLoader<String, String>() {
@Override
public String load(String s) throws Exception {
System.out.println("Thread.currentThread() = " + Thread.currentThread());
return "world";
}
});
cache.get("hello");
Output:
Thread.currentThread() = Thread[main,5,main]
Thread.currentThread() = Thread[main,5,main]
Of course, as the documentation indicates, if another thread has already started loading the value for the key, the current thread won't reload it: it will wait for the value to be loaded by the other one:
If another call to get(K) or getUnchecked(K) is currently loading the value for key, simply waits for that thread to finish and returns its loaded value.
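The same point can be illustrated with a minimal Python analog (a sketch, not Guava's API): the loader runs synchronously on whichever thread calls get.

```python
import threading

class LoadingCache:
    """Minimal analog of a loading cache: load() runs on the calling thread."""
    def __init__(self, loader):
        self._loader = loader
        self._store = {}

    def get(self, key):
        if key not in self._store:
            # Synchronous: no worker pool is involved.
            self._store[key] = self._loader(key)
        return self._store[key]

loaded_on = []

def load(key):
    # Record which thread the loader actually ran on.
    loaded_on.append(threading.current_thread().name)
    return "world"

cache = LoadingCache(load)
value = cache.get("hello")
```

Calling get from the main thread leaves "MainThread" in loaded_on, mirroring the output shown above.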
|
Does Google Guava Cache load the cache on the same thread by default?
Code:
cache = CacheBuilder
.newBuilder()
.refreshAfterWrite(2, TimeUnit.SECONDS)
.build(new CacheLoader<String,String>() {
@Override
public String load(String s) throws Exception {
return addCache(s);
}
});
Will the call to addCache be made on a different thread? As far as I know, it is a synchronous call but I am not sure.
|
Does Google Guava Cache load on same thread?
|
In Rails 5:
$ rails dev:cache
Development mode is now being cached.
Activates caching in development environment.
|
I'm trying to fragment cache a static portion of my site, but it doesn't seem to be working at all. I've set up config/application.rb with the following:
config.action_controller.perform_caching = true
config.cache_store = :dalli_store
In my view, I have this:
<% cache 'cache_key' do %>
<!-- cached markup -->
<% end %>
I don't see anything in my logs about saving the fragment to cache or retrieving it on subsequent page loads. I've also tried using the default Rails :file_store caching. I know that the cache store is working because using Rails.cache.fetch works properly.
How can I get this to work?
|
Rails 4.2 fragment caching isn't working
|
Your cache.appcache looks invalid; it should look like this:
CACHE MANIFEST
#v0.0.1 change this to force update
CACHE:
./my.js
If it's still not working after this, can you provide the full HTML so I can test it further?
|
ApplicationCache won't cache anything
index.html
<!DOCTYPE html>
<html manifest="cache.appcache">
<head>
<title>Zipcode Database</title>
File cache.appcache
CACHE MANIFEST
my.js
.htaccess file
AddType text/cache-manifest .appcache
When in console i run
applicationCache.update();
then error :
InvalidStateError: Failed to execute 'update' on 'ApplicationCache': there is no application cache to update.
code: 11
message: "Failed to execute 'update' on 'ApplicationCache': there is no application cache to update."
name: "InvalidStateError"
stack: "Error: Failed to execute 'update' on 'ApplicationCache': there is no application cache to update.↵
And applicationCache.cache =0
|
Failed to execute 'update' on 'ApplicationCache': there is no application cache to update
|
I think you would be better off using memcached or Redis to cache the query results. MongoDB is more of a full database than a cache, while both memcached and Redis are optimized for caching.
However, you could implement your cache as a two-level cache. Memcached, for example, does not guarantee that data will stay in the cache (it might expire data when the storage is full). This makes it hard to implement a system for tags (so that, for example, you add a tag for a MySQL table and can then trigger expiration for all query results associated with that table). A common solution for this is to use memcached for caching, plus a second, slower but more reliable cache, which should still be faster than MySQL. MongoDB could be a good candidate for that (as long as you can keep the queries to MongoDB simple).
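The two-level arrangement can be sketched like this; it is a Python sketch with dicts standing in for memcached and MongoDB, and all names are illustrative:

```python
# A fast but lossy cache (memcached-like) in front of a slower, more
# reliable one (MongoDB-like), with MySQL as the last resort.
fast_cache, slow_cache = {}, {}
db_hits = []

def query_mysql(key):
    db_hits.append(key)                 # track how often we fall through
    return "value-for-" + key

def get(key):
    if key in fast_cache:
        return fast_cache[key]
    if key in slow_cache:               # fast cache evicted it: repopulate
        fast_cache[key] = slow_cache[key]
        return fast_cache[key]
    value = query_mysql(key)
    fast_cache[key] = slow_cache[key] = value
    return value
```

Even if the fast cache drops an entry, the lookup is served from the second level without touching MySQL.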
|
I just had this idea and think it's a good solution for this problem, but I'm asking whether there are downsides to this method. I have a webpage that often queries the database, as much as 3-5 queries per page load. Each query makes a dozen (literally) joins, and then each of these query results is used for further queries to construct PHP objects. Needless to say the load times are ridiculous even on the cloud, but it's the way it works now.
I thought about storing the already constructed objects as JSON, or in MongoDB - BSON format. Will it be a good solution to use MongoDB as a cache engine of this type? Here is the example of how I think it will work:
When the user opens the page, if there is no data in Mongo with the proper ID, the queries to MySQL fire, each returning data that is being converted to a properly constructed object. The object is sent to the views and is converted to JSON and saved in Mongo.
If there was data in Mongo with the corresponding ID, it is being sent to PHP and converted.
When some of the data changes in MySQL (administrator edits/deletes content) a delete function is fired that will delete the edited/deleted object in MongoDB as well.
Is it a good way to use MongoDB? What are the downsides of this method? Is it better to use Redis for this task? I also need NoSQL for other elements of the project, that's why I'm considering to use one of these two instead of memcache.
MongoDB as a cache for frequent joins and queries from MySQL has some information, but it's totally irrelevant.
|
MongoDB as MySQL cache
|
The cache is global, so every user will access the same resources across the site. You can look at the Session if you need session-persistent user data.
|
Using ASP.NET I am putting data into cache which is user-specific. The Site uses Windows-Authentication:
HttpContext.Current.Cache.Insert(....)
Is this cache available to the user only, or will any user who requests the cache with the same key get the same data?
|
ASP.NET: Cache per user
|
Your HTML is incorrect. Replace
<div id="#tour_name">
by
<div id="tour_name">
|
I use codeigniter but my JS code isn't working! Can anyone guide me?
Example
<select name="tour_name" style="display: none; ">
<option disabled="disabled" value="">select</option>
<option>hello</option>
<option>hi</option>
<option>what</option>
<option>how</option>
</select>
<div id="#tour_name">
JS:
$(document).ready(function () {
$('select[name="tour_name"]').change(function () {
$('#tour_name').empty();
var $val = $(this).val();
$(this).hide();
$('#tour_name').hide().fadeIn('slow').append('<b>' + $val + '</b>')
$('#tour_name b').click(function () {
$('#tour_name').empty();
$('select[name="tour_name"]').fadeIn('slow')();
})
})
});
|
Not work append(jQuery) in the code
|
It doesn't really tell the browser not to cache it -- the browser caches each query string individually, so if the next request is to rails.js?9283482934, that is a new URL that needs to be requested from the server.
That lets you tell the browser to cache the file, but by updating the HTML file with a new number, you can force all client browsers to download the new version without actually changing the name of the js file.
The reason to use the number is to allow clients to cache it but also allow you to force an update -- so it should not negatively affect the performance. However, if you are programmatically generating a random number for each request, you will be forcing all clients to always request the file, effectively disabling caching for that file.
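One way to get the update-only-when-needed behaviour described above is to derive the number from the file's contents rather than generating it per request. A Python sketch (the paths here are throwaway temp files, not a real asset pipeline):

```python
import hashlib
import os
import tempfile

def asset_url(path):
    """Build a cache-busting URL from a hash of the file's contents."""
    with open(path, "rb") as f:
        version = hashlib.md5(f.read()).hexdigest()[:10]
    return "/javascripts/%s?%s" % (os.path.basename(path), version)

# Demo with a throwaway file standing in for rails.js.
with tempfile.NamedTemporaryFile(suffix=".js", delete=False) as tmp:
    tmp.write(b"alert('v1');")
url_v1 = asset_url(tmp.name)

with open(tmp.name, "wb") as f:     # "deploy" a new version of the file
    f.write(b"alert('v2');")
url_v2 = asset_url(tmp.name)
```

The URL stays stable (so clients keep caching) until the file actually changes, at which point every client is forced to fetch the new version.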
|
I think I remember reading that that has to do with telling the browser not to cache it or something. How does this impact browser caching and application performance in production? When does that number change?
<script src="/javascripts/rails.js?1271798101" type="text/javascript"></script>
|
What does this random number do: /javascripts/rails.js?1271798101
|
Are you able to employ memcache?
It helps to lessen db load.
|
I am facing quite a specific optimization problem.
I currently have 4 normalized tables of data.
Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX.
The thing is that I can predict relatively easily which subset of data they need... The most recent 100 or so entries in those 4 normalized tables.
I have been researching de-normalization... but feel that perhaps there is an easier solution.
I was thinking that I could somehow every second run one sql query to condense the needed info, store it in a temp cached table and then have all of the user queries just draw from this. This will allow the complex join of 4 tables to only be run once, and then from there the users just need to do a simple lookup from the cached table.
I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated.
Thanks!
|
De-normalization alternative to specific MYSQL problem?
|
You need to create a class that holds Thing1 and Thing2, e.g:
class Things {
public final Thing1 thing1;
public final Thing2 thing2;
public Things(Thing1 thing1, Thing2 thing2) {
this.thing1 = thing1;
this.thing2 = thing2;
}
@Override
public boolean equals(Object obj) { ... }
@Override
public int hashCode() { ... };
}
Then to use it:
Things key = new Things(thing1, thing2);
if (!cache.contains(key)) {
Integer result = computeResult();
cache.put(key, result);
return result;
} else {
return cache.getValue(key);
}
Note you have to implement equals and hashCode to make this code work properly. If you need this code to be thread-safe then have a look at ConcurrentHashMap.
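For comparison, in a language where tuples already implement value equality and hashing (Python here), the wrapper class disappears entirely; the compute function is a stand-in for the expensive work:

```python
cache = {}

def compute_result(thing1, thing2):
    # Stand-in for the expensive computation.
    return len(thing1) + len(thing2)

def cached_compute(thing1, thing2):
    key = (thing1, thing2)          # tuples hash and compare by value
    if key not in cache:
        cache[key] = compute_result(thing1, thing2)
    return cache[key]
```

This is the same (Thing1, Thing2) -> result map; the tuple plays the role of the Things class above.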
|
I have an expensive computation, the result of which I'd like to cache. Is there some way to make a map with two keys? I'm thinking of something like Map<(Thing1, Thing2), Integer>.
Then I could check:
if (! cache.contains(thing1, thing2)) {
return computeResult();
}
else {
return cache.getValue(thing1, thing2);
}
pseudocode. But something along those lines.
|
Java: Data structure for caching computation result?
|
You can use this function:
<?php
function to_netmask($ip, $prefix) {
$mask = $prefix == 0 ? 0 : 0xffffffff << (32 - $prefix);
return long2ip(ip2long($ip) & $mask);
}
?>
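For reference, the same computation in Python using the stdlib ipaddress module (a sketch of the idea, not a translation of the PHP):

```python
import ipaddress

def to_network(ip, prefix):
    """Return the network address of ip under a /prefix mask."""
    net = ipaddress.ip_network("%s/%d" % (ip, prefix), strict=False)
    return str(net.network_address)

print(to_network("193.95.221.54", 22))  # -> 193.95.220.0
```

strict=False tells ip_network to accept a host address and mask off the host bits, which is exactly the bitwise-AND the PHP version performs.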
|
How could I get a /22 network address from an IP address?
Like:
/24 of 193.95.221.54 is 193.95.221.0
/16 of 193.95.221.54 is 193.95.0.0
/8 of 193.95.221.54 is 193.0.0.0
/22 of 193.95.221.54 is 193.95.X.0
I'd like to cache GeoIP data for 1024 IP addresses with a single cache entry to conserve cache space and number of lookups (GeoIP database is 25 MB).
|
PHP IP to network translation
|
In a POSIX system like Linux or Solaris, try using posix_fadvise.
On the streaming file, do something like this:
posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
ssize_t bytes = 1;
while (bytes > 0) {
    bytes = pread(fd, buffer, 64 * 1024, current_pos);
    current_pos += 64 * 1024;
    /* Drop what we've already read from the page cache. */
    posix_fadvise(fd, 0, current_pos, POSIX_FADV_DONTNEED);
}
And you can apply POSIX_FADV_WILLNEED to your other file, which should raise its memory priority.
Now, I know that Windows Vista and Server 2008 can also do nifty tricks with memory priorities. Probably older versions like XP can do more basic tricks as well. But I don't know the functions off the top of my head and don't have time to look them up.
|
I need to keep as much as I can of a large file in the operating system block cache, even though it's bigger than I can fit in RAM, while I continuously read another very large file. At the moment, large chunks of the important file get evicted from the system cache whenever I stream-read from the other file.
|
Keeping a file in the OS block buffer
|
In the default configuration, Rails does not have caching enabled/configured in development.
Starting with Rails 5 you can touch tmp/caching-dev.txt or rm tmp/caching-dev.txt and restart the server to toggle it (for earlier versions you can backport this to your app; see config/development.rb of 5.2.1).
Note that you also have to configure the production environment and have a cache backend - it may be wasteful to have a separate cache in each worker, so Redis/memcached comes in handy.
As a rule of thumb, it's better to use the same cache store and a similar configuration in development: a cache store has non-zero latency, sometimes it can be faster not to cache something, and you want your development environment to be close to production.
|
I'm using render partial: 'fragment', locals: {obj: item} for every row in a table.
It takes a long time to process the whole page.
Is there a way to save all the fragments for each item on first load, so the server doesn't render them again each time?
UPDATED
card/index.html.haml
%table
=render partial: 'card/card', collection: @cards, cached: true
card/_card.html.haml
-cache card do
%tr=card.title
card_controller.rb
def index
@cards = Card.order(:name)
end
SOLVED
Cached started to work after I add to development.rb:
config.action_controller.perform_caching = true
config.cache_store = :memory_store, { size: 64.megabytes }
|
Rails how to cache partial?
|
Hash the params; then you can include as many as you need:
$params = [
'page' => 1,
'price_from' => '',
'price_to' => '',
'param0' => '',
...
];
foreach (array_keys($params) as $param) {
if (request()->has($param))
$params[$param] = request()->input($param);
}
$prefix = 'category_';
$hashed = md5(json_encode($params));
$cache_key = $prefix . $hashed;
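The same idea in Python, with one refinement worth noting: serializing with sorted keys makes the hash independent of the order the parameters arrive in (the prefix and parameter names are illustrative):

```python
import hashlib
import json

def cache_key(prefix, params):
    """Stable cache key: sort keys so argument order can't change the hash."""
    payload = json.dumps(params, sort_keys=True)
    return prefix + hashlib.md5(payload.encode("utf-8")).hexdigest()

k1 = cache_key("category_", {"page": 1, "price_from": 100, "price_to": 500})
k2 = cache_key("category_", {"price_to": 500, "page": 1, "price_from": 100})
```

Without sorting, two requests with identical filters in a different order would produce two cache entries for the same result set.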
|
Laravel 5.5 + Redis.
Got the following code in controller:
$products = Cache::remember('category_'.$category->alias.'_page_'.$page, 1440, function() use ($childrenCategoriesIndexes){
return Product::whereIn('category_id', $childrenCategoriesIndexes)
->userFilter()
->paginate(15);
});
It caches each page. But what if there are too many custom filters? This is scopeUserFilter() from Product model:
public function scopeUserFilter($query) {
if (request('price_from')) {
$query->where('price', '>', request('price_from'));
}
if (request('price_to')) {
$query->where('price', '<', request('price_to'));
}
return $query;
}
And there're only 2 variables. But what if there will be 10 and more variables, how to cache this data? I think keys like this are not good:
'category_'.$category->alias.'_page_'.$page.'_'.request('price_from').'_'.request('price_to')
|
Cache data with user filters in Laravel (and not only)
|
While it's not ideal to depend on implementation details, we can take a look at the code of IsSubclassOf using dnSpy:
public virtual bool IsSubclassOf(Type c)
{
Type type = this;
if (type == c)
{
return false;
}
while (type != null)
{
if (type == c)
{
return true;
}
type = type.BaseType;
}
return false;
}
So the short answer is that in this version of the framework (4.6) the call is not cached, and it implies walking up the inheritance hierarchy.
The question of how expensive the call is depends on your use case. You should measure whether your code spends a significant amount of time in this method and whether a cache helps.
Performance
The question of whether it is worth caching the result, is one of measuring the amount of time the call takes vs a cache lookup. I tested 5 scenarios:
Direct invocation
Cache using: Dictionary<Tuple<Type, Type>, bool>
Cache using: Dictionary<(Type, Type), bool>(value tuple)
Cache using: ConcurrentDictionary<Tuple<Type, Type>, bool>
Cache using: ConcurrentDictionary<(Type, Type), bool> (value tuple)
Results
Direct invocation - 0.15s/ call
Cache using: Dictionary<Tuple<Type, Type>, bool> - 0.12s / call
Cache using: Dictionary<(Type, Type), bool> - 0.06s / call
Cache using: ConcurrentDictionary<Tuple<Type, Type>, bool> - 0.13s/call
Cache using: ConcurrentDictionary<(Type, Type), bool> (value tuple) - 0.07s/call
ConcurrentDictionary with value tuples offers the best thread-safe performance; if you don't plan to use the code from multiple threads, a simple Dictionary with value tuples also works very well.
Generally the cache only halves the call time, and the test was not performed with a large amount of data in the cache, so performance may degrade with more classes. I don't think it's worth caching the result.
|
simple Question:
is type.isSubclassOf(Type otherType) somewhere cached as a let's say Dictionary<Tuple<Type, Type>, bool>?
If not, how expensive is such a call?
I'm quite often checking for that to keep my code extendable and am converting methods I use the most into dictionaries...
|
Is type.isSubclassOf(Type otherType) cached or do I have to do that myself?
|
Laravel's cache doesn't allow storing null values, but you can store false values.
cache(['key' => null],120)
var_dump(Cache::has('key')); //prints false
cache(['key' => false],120)
var_dump(Cache::has('key')); //prints true
So I would suggest you to try something like:
return Cache::remember("cacheKey_{$id}", 120, function () use ($id) {
$your_check = FooModel::where(...)->first();
return is_null($your_check) ? false : $your_check;
});
Alternatively you can assume that when there isn't the key, it would be null (check with Cache::has() or isset())
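The false workaround is a sentinel in disguise; the same idea in Python looks like this (a sketch with a dict cache, and the lookup function is a hypothetical stand-in for the query):

```python
# Cache "no result" without colliding with a genuine cache miss by
# storing a dedicated sentinel object instead of None.
CACHED_NONE = object()
_cache = {}
producer_calls = []

def remember(key, producer):
    if key in _cache:
        value = _cache[key]
        return None if value is CACHED_NONE else value
    result = producer()                 # true miss: run the query
    _cache[key] = CACHED_NONE if result is None else result
    return result

def lookup():
    producer_calls.append(1)
    return None                         # e.g. no matching row found

remember("cacheKey_1", lookup)
remember("cacheKey_1", lookup)          # served from cache, no second call
```

The sentinel lets the cache distinguish "we looked and found nothing" from "we never looked", which is exactly what storing false achieves in Laravel.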
|
I'm using Laravel 5.2 and wants to cache eloquent results, but it does not store empty results (= null) on cache. Is there any way to do it?
return Cache::remember("cacheKey_{$id}", 120, function () use ($id) {
return FooModel::where(...)->first();
});
When the result is not empty, the cache is working properly.
|
Laravel cache does not store null value
|
As @mpf82 proposed, you can simply add a version or something similar as a query-string argument to the file which you want to reload.
If the filename changes, the browser won't cache the old file anymore.
In Flask, variables that are unknown to url_for are rendered as query strings, so you simply choose an unknown variable, e.g. version, and give it a version number, e.g. 12052017:
<script type=text/javascript src="{{ url_for('static', filename='js/main.js', version='12052017') }}"></script>
And that's it - the result:
<script type=text/javascript src="/static/js/main.js?version=12052017"></script>
|
I am using following code to cache my static files on my flask app which is hosted on heroku:
# cache control
@app.after_request
def add_header(response):
# rule so it will only affect static files
rule = request.path
if "static" in rule:
response.cache_control.max_age = 1000000
return response
else:
return response
It works fine.
But now I made some changes and I need that the site loads the new files.
If I open the site in regular browser where I already opened it, it loads the old files (because they are cached).
In incognito mode or hitting ctrl+f5, it loads the new files. The problem is a regular user won't hit ctrl+f5 or use incognito mode.
|
Python Flask force reload cached JS files
|
Found the answer here:
https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1087
so for your code:
const gcloud = require('google-cloud');
const gcs = gcloud.storage({credentials: myCredentials});
const storageBucket = gcs.bucket(myConfig);
storageBucket.upload('path/to/image', {
public: true,
destination: storageBucket.file('storageBucketName/imageName.png'),
metadata: {
cacheControl: 'public, max-age=14400'
}
} , (err, file, apiResponse) => {
});
or how I do it (this runs inside a Promise executor; fs, path, and mime are required modules, and bucket, topDir, and fn are defined elsewhere):
const cf = bucket.file(fn);
fs.createReadStream(path.join(topDir, fn))
.pipe(cf.createWriteStream({
predefinedAcl: 'publicRead',
metadata: {
contentType: mime.lookup(fn) || 'application/octet-stream',
cacheControl: 'public, max-age=315360000',
},
gzip: true
}))
.on('error', err => {
logErr(err);
resolve();
})
.on('finish', () => {
resolve();
});
|
I'm having trouble setting the cache-control max-age header for images in my storage bucket. The images are actually stored in a firebase storage bucket, if that makes any difference.
I can successfully upload an image receive the file object in the response. I then set the cache-control max-age header for the file to 31536000 like:
const gcloud = require('google-cloud');
const gcs = gcloud.storage({credentials: myCredentials});
const storageBucket = gcs.bucket(myConfig);
storageBucket.upload('path/to/image', {
public: true,
destination: storageBucket.file('storageBucketName/imageName.png')
} , (err, file, apiResponse) => {
file.setMetadata({
cacheControl: 'public, max-age=31536000'
});
});
When I visit the image at the public url (https://storage.googleapis.com/my-bucket-name.appspot.com/storageBucketName/imageName.png) the cache-control max-age header is set to 3600, which is the default.
Oddly enough if I visit the public url over http (http://storage.googleapis.com/my-bucket-name.appspot.com/storageBucketName/imageName.png) the cache-control max-age header is set to 31536000, as expected.
How can I set this header for the public url available over https? Thanks!
|
Can't set Cache-Control max-age header using google-cloud-node
|
This command will force clean the entire NPM cache:
npm cache clean --force
Check this NPM documentation link for more information on clearing the NPM cache: https://docs.npmjs.com/troubleshooting/try-clearing-the-npm-cache
|
I start from my root directory or my project directory and enter npm cache clean/clear (I have tried both and nothing seems to happen.)
Anyone know how to completely clear the cache for npm? Basically I need to run webpack (used as a react npm package) and it used to work and now does not. Does anyone know how to properly run npm cache clean, and in which directory to do it?
Additionally, I also read something about the environment PATH variable needing to be correct for cleaning the cache, and I somehow screwed that up. Under PATH it now seems to have combined two paths, one for MongoDB and one for npm, chained together with the following value:
PATH C:\Program Files\MongoDB\Server\3.2\binC:\Users\test\AppData\Roaming\npm
Would invalid PATH be a problem (npm itself runs ok) for cleaning cache? (I'll fix PATH with Adding directory to PATH Environment Variable in Windows).
|
How does one properly do npm cache clean?
|
In short, it's a caching profile that you can set in the web.config instead of having to apply the settings to every Action or Controller you want to cache:
In web.config you specify options for the cache profile:
<system.web>
<compilation debug="true" targetFramework="4.5.1" />
<httpRuntime targetFramework="4.5.1" />
<caching>
<outputCacheSettings>
<outputCacheProfiles>
<add name="cp1" duration="30" location="Any" varyByHeader="user-agent" />
</outputCacheProfiles>
</outputCacheSettings>
</caching>
</system.web>
Then anywhere you want to use the profile you can just use e.g.
[OutputCache(CacheProfile="cp1")]
public ActionResult Index() {
//...
}
Above example taken from Apress Pro ASP.NET MVC 5 Platform by Adam Freeman.
I recommend it as a good read.
|
I encountered it while reading this article from the MSDN documentation. I am a newbie at caching; in fact, this is one of the first articles I have read about it. Can you explain it to me simply?
|
What is cache profile?
|
Let's assume, for the sake of this answer, that the uniqueness of a Point is determined by two int coordinates, x and y (you can change that easily to fit the actual parameters that determine your Point's uniqueness).
You don't want to create a Point instance in order to determine if that Point already exists in some HashSet or HashMap. That defeats the purpose of avoiding creation of multiple instances (though using a HashMap or HashSet would prevent you from keeping all those duplicate instances, and the GC will release them soon, so it may be enough to solve the memory consumption issue).
I'm suggesting that you have a static Point getPoint(int x, int y) method in your Point class. That method would check inside a static internal map (such as the HashMap<Point, Point> cache you already created) whether those x,y coordinates already have a corresponding Point instance, and return that instance. If an instance doesn't exist, it will be created and added to the map.
This is similar to what Integer.valueOf does for small integers - it returns a cached Integer instance instead of creating a new one.
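A minimal sketch of such a factory (hypothetical: javafx.geometry.Point2D itself can't be modified, so this assumes a point class or wrapper you control):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class Point {
    private final int x;
    private final int y;

    // Interning cache: one shared instance per distinct (x, y) pair.
    private static final Map<Long, Point> CACHE = new ConcurrentHashMap<>();

    private Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Factory method: returns the cached instance for (x, y), creating it
    // only on the first request, similar to Integer.valueOf for small ints.
    static Point getPoint(int x, int y) {
        // Pack both coordinates into one long key so no probe object is
        // allocated just to do the lookup.
        long key = (((long) x) << 32) | (y & 0xFFFFFFFFL);
        return CACHE.computeIfAbsent(key, k -> new Point(x, y));
    }

    int getX() { return x; }
    int getY() { return y; }
}
```

With this, Point.getPoint(1, 2) == Point.getPoint(1, 2) holds by identity, so duplicates are never retained.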
|
I have an application that uses a data structure Point. Let's say that overall there are 50 distinct instances of Point (meaning p1.equals(p2) == false). However during calculation loads of new instances are created, that are actually the same as already instantiated objects.
As these instances are stored this has a heavy impact on memory consumption:
50 distinct Points are represented by 500'000 instances of Point. In the data structure there is nothing that would prevent the reuse of already present instances. For that reason I created a cache:
HashMap<Point, Point> pointCache = new HashMap<>();
So I can check if the point is present and add it if it is not. This kind of cache however seems like a bit of overkill, as the key and the value are essentially the same.
Furthermore I already have a map present:
HashMap<Point, Boolean> flag = new HashMap<>();
What I am curious about is: Is there a map like data structure that I could use for flag that would allow the retrieval of the key? If not is there any other data structure that I could use for the cache that would be more like a set and would allow easy checking and retrieval?
EDIT: For completeness, the Point class I am using is javafx.geometry.Point2D and therefore nothing that I can change.
|
Reusing cached instances
|
You cannot set a password for the cache driver, as it's not supported.
If you want an alternative, consider using SncRedisBundle
Example config
snc_redis:
clients:
cache:
type: predis
alias: default
dsn: redis://secret@localhost/1
logging: %kernel.debug%
doctrine:
metadata_cache:
client: cache
entity_manager: default # the name of your entity_manager connection
document_manager: default # the name of your document_manager connection
result_cache:
client: cache
entity_manager: [default, read] # you may specify multiple entity_managers
query_cache:
client: cache
entity_manager: default
|
Sounds simple, but it's had me going for hours on end.
How can I connect to a Redis cache server using a password? It connects and caches Doctrine queries successfully without a password, but when I put a password in, it throws an exception
InvalidConfigurationException in ArrayNode.php line 309:
Unrecognized option "password" under "doctrine.orm.entity_managers.default.query_cache_driver"
I have tried a combination but my current code is
config.yml
.....
entity_managers:
default:
metadata_cache_driver: apc
query_cache_driver:
type: redis
host: localhost
port: 6379
# password: myStr0nG!passw0rd - adding this causes exception
instance_class: Redis
result_cache_driver:
type: redis
host: localhost
port: 6379
# password: myStr0nG!passw0rd - adding this causes exception
instance_class: Redis
|
Doctrine connect to redis WITH password
|
NSURLCache does not write to disk by default, so we need to declare a shared NSURLCache in the app delegate:
NSURLCache *sharedCache = [[NSURLCache alloc] initWithMemoryCapacity:2 * 1024 * 1024
diskCapacity:100 * 1024 * 1024
diskPath:nil];
[NSURLCache setSharedURLCache:sharedCache];
Reference
|
The cache mechanism of AFNetworking is not working for me. I have a 500 KB image with large dimensions and I want to cache it. But every time I close the application and open it again, I have to wait 20 seconds for the image to load, so the cache system is not working. Here is the code:
I tried :
[imageView setImageWithURL:[NSURL URLWithString:imageURL]
placeholderImage:[UIImage imageNamed:@"placeholder"]];
and then tried :
NSURLRequest *imageRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:imageURL]
cachePolicy:NSURLRequestReturnCacheDataElseLoad
timeoutInterval:60];
[imageView setImageWithURLRequest:imageRequest
placeholderImage:[UIImage imageNamed:@"placeholder"]
success:nil
failure:nil];
|
Image Caching With AFNetworking
|
It reminds me of one question where someone was implementing a similar thing:
Coalescing items in channel
I gave an answer with an example of implementing such a middle layer. I think this is in line with your ideas: have a routine keeping track of requests for the same resource and prevent them from being recalculated in parallel.
If you have a separate routine responsible for taking requests and managing access to the cache, you don't need an explicit lock (there is one buried in a channel, though). Anyhow, I don't know the specifics of your application, but considering you need to check the cache (probably locked) and (occasionally) perform an expensive calculation of a missing entry – a lock on map lookups doesn't seem like a massive problem to me. You can also always spawn more such middle-layer routines if you think this would help, but you would need a deterministic way of routing the requests (so each cache entry is managed by a single routine).
Sorry for not bringing you a silver bullet solution, but it sounds like you're on a good way of solving your problem anyway.
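To make the idea concrete, here is a minimal sketch of that coalescing pattern (illustrative only; the singleflight package inside groupcache, now also golang.org/x/sync/singleflight, implements the same thing more completely):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// call tracks one in-flight fetch; waiters block on wg and reuse val.
type call struct {
	wg  sync.WaitGroup
	val string
}

// Group coalesces concurrent requests for the same key.
type Group struct {
	mu    sync.Mutex
	calls map[string]*call
}

// Do runs fetch for key, letting concurrent duplicates share one result.
func (g *Group) Do(key string, fetch func() string) string {
	g.mu.Lock()
	if g.calls == nil {
		g.calls = make(map[string]*call)
	}
	if c, ok := g.calls[key]; ok {
		// A request for this key is already in flight: wait and reuse it.
		g.mu.Unlock()
		c.wg.Wait()
		return c.val
	}
	c := new(call)
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fetch() // only the first caller pays the cost
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key) // later requests recompute (or hit the real cache)
	g.mu.Unlock()
	return c.val
}

func main() {
	var g Group
	var mu sync.Mutex
	fetches := 0
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.Do("resource-a", func() string {
				mu.Lock()
				fetches++
				mu.Unlock()
				time.Sleep(50 * time.Millisecond) // simulate expensive work
				return "payload"
			})
		}()
	}
	wg.Wait()
	fmt.Println("fetches:", fetches) // typically 1, never more than 5
}
```

Note the per-key WaitGroup replaces the broadcast channel: every waiter wakes when the first fetch completes, without a dedicated broadcaster goroutine.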
seems pretty much the same, in my case since the request is quite heavy the locks prob won't be the bottleneck, i was just wondering if there is a good lock free way of doing it
– Feras
Jun 27, 2015 at 19:57
|
I was wondering if there is already a library to do that or maybe a suggestion which way to go for the following problem:
Client A makes a request for resource A; this is a long-running request, since resource A is expensive, and it results in a cache miss. In the meantime, client B makes a request for resource A; it's still a cache miss, since client A's request hasn't returned and populated the cache yet. So instead of making a new request to generate resource A, client B should block and be notified when client A's request is complete and has populated the cache.
I think the groupcache library has something along those lines, but I haven't been able to browse through the code to figure out how they do it; I also don't want to tie the implementation to it and use it as a dependency.
The only solution I have so far is a pub-sub type of thing, where we have a global map of the current in-flight requests with the reqID as a key. When req1 comes, it sets its ID in the map; when req2 comes, it checks whether its ID is in the map (since it's requesting the same resource, it is), so we block on a notifier channel. When req1 finishes it does 3 things:
evicts its ID from the map
saves the entry in the cache
sends a broadcast with its ID to the notifier channel
req2 receives the notification, unblocks and fetches from the cache.
Since Go doesn't have built-in support for broadcasts, there's probably one goroutine listening on the broadcast channel and keeping a list of subscribers to broadcast to for each request, or maybe we change the map to reqId => list(broadcastChannelSubscribers). Something along those lines.
If you think there is a better way to do it with Go's primitives, any input would be appreciated. The only piece of this solution that bothers me is this global map surrounded by locks; I assume it will quickly become a bottleneck. If you have some non-locking ideas, even if they are probabilistic, I'm happy to hear them.
|
Golang detect in-flight requests
|
You can unbind the event first before binding using die() if you're using jQuery < v1.7.2.
$('#tstButton').die('click').live('click', function() {
alert();
});
If you're using jQuery v > 1.7.2
You can use on and off:
$('#tstButton').off('click').on('click', function() {
alert();
});
|
I have page which contains jQuery code:
$('#tstButton').live('click',function(){
alert();
});
And I load this page from ajax call.
When I load the page multiple times, each time it loads the script and stores to cache.
Say I make the ajax call three times and click $('#tstButton') once; then it will alert 3 times.
I have used:
cache:false
in the ajax call. But still it's not clearing the cache.
How can I clear these javascript codes from cache?
|
Clearing javascript code from cache
|
You could store the processed content on Azure Blob Storage and serve the content from there.
|
I want to cache some cropped images and serve them without recalculating them in an Azure WebSite. When I used an Azure VM, I just stored them on the D drive (the temporary drive), but I don't know where to store them now.
I could use Path.GetTempPath, but I am not sure if this is the best approach.
Can you suggest where I should store my temporary files when serving from an Azure WebSite?
|
Temporary storage for Azure WebSites
|
The Django template engine has basically three steps to perform:
load the template file from the filesystem
compile the template code into python
execute the code to output plain text (usually HTML markup).
The cached.Loader caches only the first two steps: your templates won't be loaded and compiled every time, but they will still be executed. This is faster and usually safe, as long as you are using thread-safe template tags.
The fragment-caching mechanism caches the final output: the (static) HTML markup ready to be rendered to users.
So if you need to render an already cached template fragment, no calculation will be made other than retrieving the final output from your cache engine.
As you are now serving static, pre-computed content, it's up to you to ensure that the right data are served to the right user: each fragment may be cached per user, language, etc.
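For example, fragment caching per user uses the `{% cache %}` tag from the Django docs (the fragment name and timeout here are illustrative):

```django
{% load cache %}
{% cache 500 sidebar request.user.username %}
    ... expensive sidebar markup, cached for 500 seconds per username ...
{% endcache %}
```

Every extra argument after the fragment name becomes part of the cache key, which is how you vary the cached copy per user, language, and so on.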
| ERROR: type should be string, got "\nhttps://docs.djangoproject.com/en/dev/ref/templates/api/#django.template.loaders.cached.Loader\nHere we have \"cached.Loader\" to cache template\nhttps://docs.djangoproject.com/en/dev/topics/cache/#template-fragment-caching\nAnd we also have \"Template fragment caching\"\nI know the latter allows a finer control over which parts to cache. But if I enable both, will it consume double amount of memory for same fragments?\n" |
Django : Two ways of caching template : What is the difference?
|
You have to fetch the HTML with an HTTP request; then you can store it in the template cache. For example:
$http.get('templ.html', {
cache: $templateCache
}).then(function(result) {
console.log(result);
});
Updated plunker code here
|
I need to cache some HTML files when my Angular controller initializes.
According to Angular $templateCache documentation I can add HTML templates to Angular with:
$templateCache.get('templateId.html')
But I can't get this to work. I've tried to get the template file within the controller and within the module run() function (Plunker). But I can see in the network console that the template is not fetched.
app.run(function($templateCache) {
$templateCache.get('templ.html');
});
What am I doing wrong?
|
How to use AngularJS $templateCache.get()?
|
If the initial value is 0, you want alloc_nr(0) to give a strictly positive number (24 here). Without the 16 it would be 0. You want alloc_nr(x) to be greater than x (and not too close to x, to avoid too-frequent reallocations).
The particular numbers 16 and 3 and 2 are not very important (the ratio 3/2 is more significant).
|
I've seen the below macro used in a lot of cache.h files:
#define alloc_nr(x) (((x)+16)*3/2)
Here is one example.
I know it's used to increase the allocated buffer size when the buffer is almost full. The buffer would grow to roughly 1.5 times its current size; that's why *3/2 is used. But why is an extra 16 added? The macro becomes x*1.5+24 when expanded. Is there any particular reason for this macro? Why does everyone like to use it?
|
Why is '#define alloc_nr(x) (((x)+16)*3/2)' macro used in many cache.h files?
|
In general, a cache manager SHOULD keep your entries for as long as possible, and MAY delete them if/when necessary.
The Time-To-Live (TTL) mechanism exists to flag entries as "expired", but expired entries are not automatically deleted, nor should they be, because APC is configured with a fixed memory size (using apc.shm_size configuration item) and there is no advantage in deleting an entry when you don't have to. There is a blurb below in the APC documentation:
If APC is working, the Cache full count number (on the left) will
display the number of times the cache has reached maximum capacity and
has had to forcefully clean any entries that haven't been accessed in
the last apc.ttl seconds.
I take this to mean that if the cache never "reached maximum capacity", no garbage collection will take place at all, and it is the right thing to do.
More specifically, I'm assuming you are using the apc_add/apc_store function to add your entries, this has a similar effect to the apc.user_ttl, for which the documentation explains as:
The number of seconds a cache entry is allowed to idle in a slot in
case this cache entry slot is needed by another entry
Note the "in case" statement. Again I take this to mean that the cache manager does not guarantee a precise time to delete your entry, but instead try to guarantee that your entries stays valid before it is expired. In other words, the cache manager puts more effort on KEEPING the entries instead of DELETING them.
|
I'm running APC mainly to cache objects and query data as user cache entries. Each item is set up with a specific timeout relevant to the amount of time it's required in the cache; some items are 48 hours, but most are 2-5 minutes.
It's my understanding that when the timeout is reached and the current time passes the created-at time, the item should be automatically removed from the user cache entries?
This doesn't seem to be happening, though, and the items are instead staying in memory. I thought maybe the garbage collector would remove these items, but it doesn't seem to have done so, even though it's running once an hour at the moment.
The only other thing I can think of is that the default apc.user_ttl = 0 overrides the individual timeout values and sets the items never to be removed, even after their individual timeouts?
|
APC User Cache Entries not being removed after timeout
|
With the code you've provided, you're telling the end user's browser to cache the results for 30 minutes, so you aren't doing any server-side caching.
If you want to cache the results server side you're probably looking for HttpRuntime.Cache. This would allow you to insert an item into a cache that is globally available. Then on the page load you would want to check the existence of the cached item, then if the item doesn't exist or is expired in the cache, go to the database and retrieve the objects.
EDIT
With your updated code sample, I found https://stackoverflow.com/a/6234787/254973 which worked in my tests. So in your case you could do:
public class autocomp : IHttpHandler
{
public void ProcessRequest(HttpContext context)
{
OutputCachedPage page = new OutputCachedPage(new OutputCacheParameters
{
Duration = 120,
Location = OutputCacheLocation.Server,
VaryByParam = "name_startsWith"
});
page.ProcessRequest(HttpContext.Current);
context.Response.ContentType = "application/json";
context.Response.BufferOutput = true;
var searchTerm = (context.Request.QueryString["name_startsWith"] + "").Trim();
context.Response.Write(searchTerm);
context.Response.Write(DateTime.Now.ToString("s"));
}
public bool IsReusable
{
get
{
return false;
}
}
private sealed class OutputCachedPage : Page
{
private OutputCacheParameters _cacheSettings;
public OutputCachedPage(OutputCacheParameters cacheSettings)
{
// Tracing requires Page IDs to be unique.
ID = Guid.NewGuid().ToString();
_cacheSettings = cacheSettings;
}
protected override void FrameworkInitialize()
{
base.FrameworkInitialize();
InitOutputCache(_cacheSettings);
}
}
}
|
Given the generic handler:
<%@ WebHandler Language="C#" Class="autocomp" %>
using System;
using System.Text;
using System.Text.RegularExpressions;
using System.Web;
using System.Web.UI;
public class autocomp : IHttpHandler {
public void ProcessRequest (HttpContext context) {
context.Response.ContentType = "application/json";
context.Response.BufferOutput = true;
var searchTerm = (context.Request.QueryString["name_startsWith"] + "").Trim();
context.Response.Write(searchTerm);
context.Response.Write(DateTime.Now.ToString("s"));
context.Response.Flush();
}
public bool IsReusable {
get {
return false;
}
}
}
How would I server side cache this file for 1 hour based on the name_startsWith query string parameter? With web user controls it's easy:
<%@ OutputCache Duration="120" VaryByParam="paramName" %>
But I've been looking around for a while to do the same with a generic handler (ashx) file and can't find any solutions.
|
ASP.net cache ASHX file server-side
|
This is frequently handled by adding a random string or timestamp to the query.
i.e. <img src="/images/image.jpg?timestamp=1357571065" />
|
Consider this minimal example: using PHP, I have a form where you enter text and it produces an image of the text. When I then change the text and update, I don't see the new image because, I assume, it is being cached. Is there some way to automatically remove this one image file from the cache when I update it?
|
clear cache of one image
|
Vast question, depending on your needs. To cache domain objects you can use the Hibernate cache like this:
class Book {
…
static mapping = {
cache true
}
}
And configure Hibernate second level cache in grails-app/conf/DataSource.groovy:
hibernate {
cache.use_second_level_cache=true
cache.use_query_cache=true
cache.provider_class='org.hibernate.cache.EhCacheProvider'
}
Grails Documentation and caching guide.
You can also cache your controllers and services using Grails cache plugin based on Spring cache:
@Cacheable('message')
Message getMessage(String title) {
println 'Fetching message'
Message.findByTitle(title)
}
You'll find the excellent documentation here.
If you want to cache rendered page you can also have a look at the gsp template rendering cache plugin.
|
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 11 years ago.
Is there something like a best practice, i.e. a good methodology, for implementing caching with Grails? What plugins should be used, and which parts of the page should be cached, and how?
|
Caching in Grails? [closed]
|
You are right that looking something up in a shared cache (like memcached) is slower than looking it up in a local cache (which is what I think you mean by "replication").
However, the advantage of a shared cache is that it is shared, which means each user of the cache has access to more cache than if the memory was used for a local cache.
Consider an application with a 50 GB database, with ten app servers, each dedicating 1 GB of memory to caching. If you used local caches, then each machine would have 1 GB of cache, equal to 2% of the total database size. If you used a shared cache, then you have 10 GB of cache, equal to 20% of the total database size. Cache hits would be somewhat faster with the local caches, but the cache hit rate would be much higher with the shared cache. Since cache misses are astronomically more expensive than either kind of cache hit, slightly slower hits are a price worth paying to reduce the number of misses.
Now, the exact tradeoff does depend on the exact ratio of the costs of a local hit, a shared hit, and a miss, and also on the distribution of accesses over the database. For example, if all the accesses were to a set of 'hot' records that were under 1 GB in size, then the local caches would give a 100% hit rate, and would be just as good as a shared cache. Less extreme distributions could still tilt the balance.
In practice, the optimum configuration will usually (IMHO!) be to have a small but very fast local cache for the hottest data, then a larger and slower cache for the long tail. You will probably recognise that as the shape of other cache hierarchies: consider the way that processors have small, fast L1 caches for each core, then slower L2/L3 caches shared between all the cores on a single die, then perhaps yet slower off-chip caches shared by all the dies in a system (do any current processors actually use off-chip caches?).
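To make the tradeoff concrete, here is a back-of-the-envelope sketch; all costs and hit rates below are illustrative assumptions, not measurements:

```python
def expected_cost_ms(hit_rate, hit_cost_ms, miss_cost_ms):
    """Average cost of one lookup given a hit rate and per-path costs."""
    return hit_rate * hit_cost_ms + (1.0 - hit_rate) * miss_cost_ms

# Local cache: very fast hits, but each server only sees its own small slice.
local = expected_cost_ms(hit_rate=0.30, hit_cost_ms=0.01, miss_cost_ms=20.0)

# Shared cache: slower hits (serialization + network), much higher hit rate.
shared = expected_cost_ms(hit_rate=0.80, hit_cost_ms=0.5, miss_cost_ms=20.0)

print(f"local: {local:.2f} ms, shared: {shared:.2f} ms")
# local: 14.00 ms, shared: 4.40 ms
```

Even though each shared-cache hit is 50x slower here, the higher hit rate makes the average lookup several times cheaper, because misses dwarf either kind of hit.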
|
I might be asking a very basic question, but I could not find a clear answer by googling, so I am putting it here.
Memcached caches information in a separate process. Thus, getting the cached information requires inter-process communication (which generally means serialization in Java). That means that, generally, to fetch a cached object we need to get a serialized object and transport it over the network.
Both serialization and network communication are costly operations. If memcached needs to use both of these (generally speaking, there might be cases when network communication is not required), then how is memcached fast? Is replication not a better solution?
Or is this a tradeoff of distribution/platform independence/scalability vs. performance?
|
memcached and performance
|
You're right, this is cache problem.
You can use cache_control decorator to force no cache on views[1]:
from django.views.decorators.cache import cache_control
@cache_control(no_cache=True, must_revalidate=True, no_store=True)
def func(request):
    # some code
    return
You should also write your own decorator that replaces @login_required so that you don't need to use both on every page.
[1] Disable browser 'Back' button after logout?
|
::Edit::
@cache_control(no_cache=True, must_revalidate=True, no_store=True) FTW!!!!!
Cache-Control: no-cache, no-store, must-revalidate did the trick. It took going to a few IRC chans and looking around but finally I got it to work.
::EDIT::
I have a view where I'm setting @login_required, and it's secure for the most part, but if you have looked at the view, then log out and just hit the back button in your browser, you can view the content again without being asked to log in. Though if you refresh the page, the server will redirect you.
My suspicion is that it's a cache issue, where maybe I need to tell Chrome not to store it in the history.
If you view an invoice, for example, then log out, you can view the invoice again by selecting that page in your back history.
I have tried this on Firefox with no problem; Firefox asks you to log back in, so it must be a browser issue.
|
Django @login_required views still show when users are logged out by going back in history on Chrome
|
Nginx has two methods to cache content:
proxy_store is when Nginx builds a mirror. That is, it will store the file preserving the same path, while proxying from the upstream. After that Nginx will serve the mirrored file for all the subsequent requests to the same URI. The downside is that Nginx does not control expiration, however you are able to remove (and add) files at your will.
proxy_cache is when Nginx manages a cache, checking expiration, cache size, etc.
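To make the proxy_store variant concrete for the configuration in the question, a minimal mirror setup could look like this (paths are illustrative; check the details against the nginx proxy module docs):

```nginx
location /stu/ {
    root /var/www/mirror;            # serve previously mirrored plain files
    error_page 404 = @fetch;         # on a miss, go fetch from upstream
}

location @fetch {
    internal;
    proxy_pass http://sente.cc;
    proxy_store on;                  # save the response body as a plain file
    proxy_store_access user:rw group:rw all:r;
    root /var/www/mirror;            # stored under the same URI path
}
```

Because proxy_store writes only the response body, the mirrored files contain no HTTP headers, which is about as "human friendly" as nginx-side caching gets.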
|
I'm curious if nginx can be configured so the cache is saved out in some manner that would make the data user-friendly? While all my options might fall short of anything anyone would consider "human friendly" I'm in general interested in how people configure it to meet their specific needs. The documentation may be complete but I am very much a learn by example type guy.
My current configuration is from an example I ran across; if it were to be used, it would be little more than proof to me that nginx correctly proxy-caches the data.
http {
# unrelated stuff...
proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
proxy_temp_path /var/www/cache/tmp;
server {
server_name g.sente.cc;
location /stu/ {
proxy_pass http://sente.cc;
proxy_cache my-cache;
proxy_cache_valid 200 302 60m;
proxy_cache_valid 404 1m;
}
}
|
nginx - can proxy caching be configured so files are saved without HTTP headers or otherwise in a more "human friendly" format?
|
Depending on the type the solution may simply be to clone the object/item on retrieval from the cache.
XmlNode Clone method
|
I am writing a .net c# application.
I retrieve some data from an xml file, cache the data to the .net cache and return it from my method. I perform some processing on the data and return it another part of my application.
Next call, I read from cache, process it and return it etc.
The problem I have is that the processing performed on the cached data seems to modify the cache and not the local variable, which means that the next time I read from the cache, it's the processed data from the previous run that is returned.
So it seems that the data returned from cache is returned by ref and not value.
Any idea how I can prevent the cache being modified?
|
cache being modified instead of local variable (pass by ref)
|
How about:
<configuration>
<location path="path/to/the/file.js">
<system.webServer>
<staticContent>
<clientCache cacheControlMode="DisableCache" />
</staticContent>
</system.webServer>
</location>
</configuration>
(Note that the path is relative to the web.config file)
IE's cache is pretty annoying during development. I put the DisableCache in Web.Debug.config and it works fine. Thx!
– Sebastian Weber
Oct 8, 2014 at 9:22
It definitely /can/ work - what caching headers are you seeing going across the network?
– gdt
Mar 13, 2017 at 11:52
It can be added without '<location>' tags, so it is applied to whole project
– Maris B.
Jun 20, 2017 at 14:02
|
I currently cache everything possible on my site (images, JS, CSS). There is only one JS file that I need to be loaded fresh every single time. How do I omit just one file from caching, using web.config, whilst leaving everything else cached?
Note that I tried another link here, and it didn't seem to stop the caching of my file:
How do I disable caching of an individual file in IIS 7 using weserver config settings
|
Disable Caching For A Single JS File Using Web.config
|
This is a recipe for lost data. I have a hard time believing that a guest book is causing enough write activity to be an issue. Also, the bookkeeping involved in this would be tricky, since memcache isn't searchable.
|
I am trying to modify the guestbook example webapp to reduce the amount of database writes.
What I am trying to achieve is to load all the guestbook entries into memcache which I have done.
However I want to be able to directly update the memcache with new guestbook entries and then write all changes to the database as a batch put() every 30 seconds.
Has anyone got an example of how I could achieve the above? it would really help me!
Thanks :)
|
Limit amount of writes to database using memcache
|
If you really don't want to create a custom module, there is handbook page on creating a page to clear your cache that includes a snippet to add to a page using the PHP Input format and a refinement in the comments. Keep in mind, using the PHP Input Format is usually discouraged.
answered Feb 24, 2011 at 22:31 by Matt V.
|
|
I have a site-editor user role with custom permissions. Currently they can access some actions in the admin menu, but they cannot access clear-cache.
I want to expose just that option to the non-administrator (site-editor) user role. I can't find an option that granular in the permissions.
I've found some alternative options, but they involve coding, custom pages, etc. I want a pure drupal GUI option (if any exists). Not: http://drupal.org/node/152983
The reason is that site-editors enter content, but I'm caching panels and views. I need them to be able to clear the cache so they can see the changes they've made.
|
Allow a non-site administrator access to clear-cache through administrator menu, Drupal 6
|
Maybe johnny-cache?
Also, please look at this article, it covers Django QuerySet caching, and it’s really detailed.
|
I have a friend and he's developing a prototype website off his laptop. It's a cool site, very data driven and does some cool stuff on the fly. Problem is, it takes like 10 seconds to load on his local machine.
Is there a way he can cache the results of the queries? There are probably only a few thousand potential queries that need to be made and the resultant data won't change.
Google gave me nothing, so I turn to humans. Any ideas?
|
SQL Caching on Django
|
If you want to keep your investment in the controller_action_predispatch events and stay in the Magento framework for your code, you could do a couple of things.
Add the URL param into the cache key. That way the FPC will serve the different versions of the page that the param can trigger while still giving the benefit of caching. Refer to this blog post for an example of creating your own PageCache container and overriding the cache key.
Exclude those controllers from FPC. Obviously this is only valid if the controllers affected are a subset of the overall site, otherwise you won't get any caching benefit.
Cheers,
JD
|
Based on my previous question, in Magento Enterprise Edition it doesn't appear to be a good idea to use any of the available controller events if you are planning on having full page caching enabled. It seems you should only use these events if you are doing something with the actual page.
However, we have built some extensions for Magento that on controller_action_predispatch, we have an observer and from there we grab a parameter in the URL and if it is set correctly we do some additional functionality from there. For example, we have made it so a client can put promo codes in urls for email campaigns and when they click that link it is attached to the customer's quote. We have to look for the parameter before the page is loaded and do our thing.
But now that there is this full page caching it isn't working correctly. So is it better not to do this with an observer and just extend the code? Or are there better observers to do this with? We tried the http_response_send_before event and got mixed results.
|
What is the best way to to execute code before a page is loaded
|
Here is a simple solution:
Check the last time your local feed.xml file was modified. If the difference between the current timestamp and the filemtime timestamp is greater than 3600 seconds, update the file:
$feed_updated = file_exists('feed.xml') ? filemtime('feed.xml') : 0;
$current_time = time();
if ($current_time - $feed_updated >= 3600) {
    // Stale (or missing): fetch the remote feed and re-cache it
    $feed = simplexml_load_file('http://remoteserviceurlhere');
    if ($feed) {
        $feed->asXML('feed.xml');
    }
} else {
    // Fresh enough: use the cached copy
    $feed = simplexml_load_file('feed.xml');
}
|
Im using a remote xml feed, and I don't want to hit it every time. This is the code I have so far:
$feed = simplexml_load_file('http://remoteserviceurlhere');
if ($feed){
$feed->asXML('feed.xml');
}
elseif (file_exists('feed.xml')){
$feed = simplexml_load_file('feed.xml');
}else{
die('No available feed');
}
What I want to do is have my script hit the remote service every hour and cache that data into the feed.xml file.
|
Cache an XML feed from a remote URL
|
That's correct. It's controlled by the HTTP Cache-Control and Expires headers.
The first one basically tells the client the cache strategy. The second one basically tells the client the expiration time of the cache strategy (i.e. for how long to adhere the cache strategy before obtaining the new response and/or throwing the cached response away).
The webserver usually sends a default set of those headers. You can set/override those headers permanently in the server configuration or on a request basis in PHP using header() function. The following example instructs the client to never cache the response.
header('Cache-Control: no-cache, no-store, must-revalidate');
header('Pragma: no-cache');
header('Expires: 0');
The Pragma header is there to ensure compatibilty with old HTTP 1.0 clients which doesn't support Cache-Control yet (which was introduced in HTTP 1.1).
When the cache has expired and the cached response contains a Last-Modified and/or ETag header as well, then the client can fire a conditional GET request with If-Modified-Since and/or If-None-Match headers. Whenever the If-Modified-Since and/or If-None-Match conditions are positive, then the server will send a 304 "Not Modified" response back without any content. If this happens, then the client is allowed to keep the currently cached content in the cache and update the headers.
|
I used to think that caching is browser driven and browsers don't request the same file again if they think the data is being repeated, but after reading some text on the web, I realize that it is the website that tells the browser which files should not be requested twice.
Can anyone clarify me on this?
|
Whats the mechanism of data caching in web?
|
It should be submitted to /search/term1+term2
Nope. Plus symbols only represent spaces in application/x-www-form-urlencoded content, such as when the query-string part of the URL is used to submit a form. In the path-part of a URL, + simply means plus; space should be encoded to %20 instead.
That's what the JS escape function should do I think.
Yes it does, and that's the problem. escape encodes spaces to +, which is only suitable for form submissions; used in a path, you will get an unexpected and unwanted plus sign. It also mangles non-ASCII characters into an arbitrary format specific to the escape function that no URL-decoder will be able to read.
As Tomalak said, escape()/unescape() is almost always the wrong thing, and in general should not be used. encodeURIComponent() is usually what you really want, and will produce %20 for spaces, which is safe as it is equally valid in the path part or the query string.
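The path-vs-form distinction isn't specific to JavaScript. For instance, Python's standard library exposes the two encodings as two separate functions, which makes the difference easy to see (a side note for comparison, not part of the Rails/JS fix above):

```python
from urllib.parse import quote, quote_plus

# Path-segment encoding: space becomes %20, safe in the path part of a URL
print(quote("term1 term2"))       # term1%20term2

# Form encoding (application/x-www-form-urlencoded): space becomes +
print(quote_plus("term1 term2"))  # term1+term2
```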
|
I'm running a Rails Application which gets a lot of traffic at the moment so I started using Page Caching to increase the performance. So far everything works like a charm. But when I tried to also cache search results I run into a strange problem.
My Approach:
Use meaningful URLs for searching and pagination (/search?query=term&page=3 becomes /search/term/3)
Use Javascript to submit the form - if JS is disabled it falls back to the old form (which also works with my routes, but without caching)
My Code:
// Javascript
function set_search_action() {
window.location = '/search/' + escape(document.getElementById('query').value);
return false;
}
// HTML
<form action="/search" id="search_form" method="get" onSubmit="return set_search_action();">
<input id="query" name="query" title="Search" type="text" />
<input class="submit" name="commit" type="submit" value="Search" />
</form>
The Problem
Everything works for single words like "term". But when I search for "term1 term2" the form is submitted to /search/term1 term2/ or /search/term1 term2/1 . It should be submitted to /search/term1+term2 That's what the JS escape function should do I think.
So far it works also with spaces in development mode. But I guess it will become a problem in production mode with caching enabled (URLs shouldn't contain any whitespaces).
Any ideas on what I did wrong? Thanks!
|
URL Encoding in JS for meaningful URLs and Rails Page Caching
|
In browser HTTP debuggers are probably the easiest to use in your situation. Try HTTPFox for Firefox or Opera which has dragonfly built-in. Both of these indicate when the local browser cache has been used.
If you appear to be getting conflicting information, then wireshark/tcpdump will show you if the objects are being downloaded or not as it is monitoring the actual network packets being transmitted and received. If you haven't looked at network traces before, this might be a little confusing at first.
|
I want to verify that the images, css, and javascript files that are part of my page are being cached by my browser. I've used Fiddler and Google Page Speed and it's unclear whether either is giving me the information I need. Fiddler shows the HTTP 304 response for images, css, and javascript, which should tell the browser to use the cached copy. Google Page Speed shows the 304 response but doesn't show a Transfer Size of Zero; instead it shows the full file size of the resource. Note also, I have seen Google Page Speed report a 200 response but then put the word (cache) next to the 200 (so Status is 200 (cache)), which doesn't make a lot of sense.
Any other suggestions as to how I can verify whether the server is sending back images, css, javascript after they've been retrieved and cached by a previous page hit?
|
How can I verify that javascript and images are being cached?
|
This is a matter of finding a balance between low-latency updates, and overall system/network load (aka, performance vs. cost).
If you have capacity to spare, the simplest solution is to keep your votes in a database, and always look them up during a page load. Of course, there's no caching here.
Another low-latency (but high-cost) solution is to have a pub-sub type system that publishes votes to all other caches on the fly. In addition to the high cost, there are various synchronization issues you'll need to deal with here.
The next alternative is to have a shared cache (e.g., memcached, but shared across different machines). Updates to the database will always update the cache. This reduces the load on the database and would get you lower latency responses (since cache lookups are usually cheaper than queries to a relational database). But if you do this, you'll need to size the cache carefully, and have enough redundancy such that the shared cache isn't a single point of failure.
Another, more commonly used, alternative is to have some kind of background vote aggregation, where votes are only stored as transactions on each of the front-end servers, and you have a background process that continuously (e.g., every five seconds) aggregates the votes and populates all the caches.
AFAIK, reddit does not do live low-latency vote propagation. If you vote something up, it isn't immediately reflected across other clients. My guess is that they're doing some kind of aggregation (as in #4), but that's just me speculating.
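The background-aggregation option (#4) can be sketched in a few lines. A minimal Python sketch, where the class, callback, and names are illustrative assumptions rather than any particular framework's API:

```python
import threading
import time
from collections import Counter

class VoteAggregator:
    """Buffer votes in memory and flush them as one aggregated write."""
    def __init__(self, flush_cb, interval=5.0):
        self.pending = Counter()      # story_id -> net votes since last flush
        self.lock = threading.Lock()
        self.flush_cb = flush_cb      # e.g. batch-update the DB, then the caches
        self.interval = interval

    def vote(self, story_id, delta):
        with self.lock:               # cheap in-memory bookkeeping per request
            self.pending[story_id] += delta

    def flush(self):
        with self.lock:               # swap out the pending batch atomically
            batch, self.pending = self.pending, Counter()
        if batch:
            self.flush_cb(dict(batch))

    def run_forever(self):
        while True:                   # background thread: aggregate periodically
            time.sleep(self.interval)
            self.flush()
```

Each front-end process then pays only a dict update per vote, and the database sees one aggregated write every few seconds, at the cost of votes being up to `interval` seconds stale for other visitors.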
|
We have a PHP website like reddit, users can vote for the stories.
We tried to use APC, memcached etc. for the website but we gave up. The problem is we want to use a caching mechanism, but users can vote anytime on the site and the cached data may be old and confusing for other visitors.
Let me explain with an example. We have an array of 100 stories stored in cache for 5 minutes; a user voted for some stories, so the ratings of those stories changed. When another user enters the website, he/she will see the cached, therefore old, data. (It is the same if the voting user refreshes the page: he'll also see the old vote counts for the stories.)
We cannot figure it out, any help will be highly appreciated
|
Best way to do caching on a reddit-like website
|
Try putting this in your head tags
<meta http-equiv="cache-control" content="no-cache">
Edit Just a note, this doesn't force the browser not to cache it, but most browsers will listen
|
I have a website that is updated regularly and I am having a problem where old content is showing up on the page. It can be fixed by refreshing a few times or clearing the cache. I am looking for a solution so no data is stored on the PC and the site is forced to refresh each time. Perhaps an auto cache clear plugin or something of the sort? Any ideas?
|
Old content appearing on site. Auto Clear Cache?
|
Even if it were stored in Isolated Storage, there is no Silverlight library that can read a SQL Server Compact Edition database. Perhaps in a future version. I have heard of a couple of open source projects that are trying to do this but none of them have releases yet. I tried to wrap the Google Gears DB in Beta 2 with no success.
|
Does the silverlight clr support access to a sql compact database placed in the silverlight application's isolated storage?
If so, any pointers to code samples.
I would like to cache information retrieved from the server in previous sessions.
|
Can a silverlight client access a local sql compact database that is stored in isolated storage
|
You can benefit from the memory cache if you access addresses within the same cache line in a short amount of time. The explanation below assumes your arrays contain 4-byte integers.
In your first loop, your two memory accesses in the loop are 50*4 bytes apart, and the next iteration jumps forward 400 bytes. Every memory access here is a cache miss.
In the second loop, you still have two memory accesses that are 50*40000 bytes apart, but on the next loop iteration you access addresses that are right next to the previously fetched values. Assuming the common 64-byte cache line size, you only have two cache misses every 16 iterations of the loop; the rest can be served from the two cache lines loaded at the start of such a cycle.
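A back-of-the-envelope check of those figures, assuming 4-byte ints and 64-byte cache lines (both are assumptions about the hardware, not stated in the question):

```python
INT_BYTES = 4        # assumed size of an int
LINE_BYTES = 64      # assumed cache line size

ints_per_line = LINE_BYTES // INT_BYTES      # 16 adjacent elements per line

# First loop (columns of a1): each iteration advances 100 ints = 400 bytes,
# past the end of the current cache line, so both accesses miss every iteration.
misses_per_iter_cols = 2

# Second loop (rows of a2): the next element is adjacent, so one pair of
# cache lines serves 16 iterations -> 2 misses per 16 iterations.
misses_per_iter_rows = 2 / ints_per_line

print(misses_per_iter_cols / misses_per_iter_rows)   # 16.0 (x fewer misses)
```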
|
I have a 2D array a1[10000][100] with 10000 rows and 100 columns, and also a 2D array a2[100][10000] which is the transposed matrix of a1.
Now I need to access 2 columns (e.g. the 21st and the 71st columns) of a1 in the order of a1[0][20], a1[0][70], a1[1][20], a1[1][70], ..., a1[9999][20], a1[9999][70]. Or I can also access a2 to achieve the same goal (the order: a2[20][0], a2[70][0], a2[20][1], a2[70][1], ..., a2[20][9999], a2[70][9999]). But the latter is much faster than the former. The related code is simplified as follows (the loop bound is 10000):
(code omitted)
Accessing more rows (I have also tried 3, 4 rows rather than 2 rows in the example above) of a2 is also faster than accessing the same number of columns of a1. Of course I can also replace Lines 3-5 (or Lines 11-14) with a loop (by using an extra array to store the column/row indexes to be accessed); it also gets the same result that the latter is faster than the former.
Why is the latter much faster than the former? I know something about cache lines but I have no idea of the reason for this case. Thanks.
|
Access efficiency of C++ 2D array
|
Yes you have to set a fixed key (as they said in the Symfony Doc).
You can also use the environment name (dev, staging, prod... - SYMFONY_ENV or APP_ENV) together with the application name, for example if you want to use the same Redis cluster for staging and prod.
framework:
cache:
...
prefix_seed: '%kernel.environment%_myapp'
...
|
I'm using Redis to manage some caching within my Symfony 3.4 app, configured like this:
config.yml
framework:
cache:
default_redis_provider: 'redis://127.0.0.1:6379'
pools:
cache.catalog:
adapter: cache.adapter.redis
provider: iwid.custom_redis_provider
default_lifetime: 86400
public: true
cache.language:
adapter: cache.adapter.redis
provider: iwid.custom_redis_provider
default_lifetime: 86400
public: true
services.yml
services:
iwid.custom_redis_provider:
class: Redis
factory: ['Symfony\Component\Cache\Adapter\RedisAdapter', 'createConnection']
arguments:
- 'redis://127.0.0.1:6379'
- { retry_interval: 0, timeout: 30 }
Now, this is working like a charm in dev and prod environments, except for one thing in production: when I deploy a new release, my deployer system creates a new folder and git pull inside it, then target this folder as the current rootdir with a symlink.
Then, when I deploy any release, the prefix of my Redis keys is changed, as the path to my app is different. Then, this obviously invalidate any previously cached keys... which is not what I want!
So, how can I change this, probably by having a sort of "fixed" cache key (one for each pool, obviously)?
Any help greatly appreciated!
|
Set Redis cache prefix key on Symfony
|
I was able to solve this problem by storing values as a string key and object value -- which works wonderfully with Spring @Cacheable annotations. Objects are casted into the return types by Spring if they are found within the cache.
|
I am trying to load my cache off of a cold start, prior to application startup. This would be done so values are available as soon as a user accesses a server, rather than having to hit my database.
@Cacheable functionality from Spring all works great, the problem is how I manually store objects in the Cache so that they can be read when the function is executed.
Spring is storing these objects in bytes somehow -- and I need to mimic this while I manually load the cache. I'm just trying to figure out how Spring processes the objects returned from the function to store them in the cache as key/value pairs.
|
Using @Cacheable Spring annotation and manually add to Infinispan Cache
|
You never reuse Suppliers.memoizeWithExpiration so it's always a new call
You are creating a new memoizing supplier at each call, so you basically make a new call each time because the new memoizing supplier is empty and therefore propagates the call to fill itself. You should create the memoizing supplier only once, and call it repeatedly like this :
private final Supplier<List<String>> getClientsSupplier = Suppliers.memoizeWithExpiration(ClientsDAO::getClients, Server.CACHE_REFRESH_DURATION, Server.CACHE_REFRESH_TIMEUNIT);

public List<String> getClients() {
    return getClientsSupplier.get();
}
|
We have the below method in a HeartBeat thread which runs every 30 seconds. We have introduced a Guava cache with a 5-minute refresh, as below, for ClientsDAO.getClients() so that we don't hit the database every 30 seconds.
private List<String> getClients() {
final Supplier<List<String>> supplier = () -> ClientsDAO.getClients();
if(Server.CACHE_REFRESH_DURATION <=0)
return supplier.get();
else{
LOG.debug("Fetching Clients from cache, duration = "+Server.CACHE_REFRESH_DURATION+". timeunit = "+Server.CACHE_REFRESH_TIMEUNIT);
return Suppliers.memoizeWithExpiration(supplier, Server.CACHE_REFRESH_DURATION, Server.CACHE_REFRESH_TIMEUNIT).get();
}
}
As you can see in the below log, every time the HeartBeat thread runs it hits the database instead of fetching from the cache. Can someone please help me fix it?
[Jun 10 10:16:05] pool-16-thread-1 | DEBUG | com.server.Heartbeat | Fetching Clients from cache, duration = 5. timeunit = MINUTES
[Jun 10 10:16:05] pool-16-thread-1 | DEBUG | com.server.ClientsDAO | Getting DB connection
[Jun 10 10:16:05] pool-16-thread-1 | DEBUG | com.server.ClientsDAO | Queried for Clients
[Jun 10 10:16:35] pool-16-thread-1 | DEBUG | com.server.Heartbeat | Fetching Clients from cache, duration = 5. timeunit = MINUTES
[Jun 10 10:16:35] pool-16-thread-1 | DEBUG | com.server.ClientsDAO | Getting DB connection
[Jun 10 10:16:35] pool-16-thread-1 | DEBUG | com.server.ClientsDAO | Queried for Clients
[Jun 10 10:17:05] pool-16-thread-1 | DEBUG | com.server.Heartbeat | Fetching Clients from cache, duration = 5. timeunit = MINUTES
[Jun 10 10:17:05] pool-16-thread-1 | DEBUG | com.server.ClientsDAO | Getting DB connection
[Jun 10 10:17:05] pool-16-thread-1 | DEBUG | com.server.ClientsDAO | Queried for Clients
|
Guava 11: Cache not working with 5 minute refresh
|
This is a virtual-platform related issue. A simple workaround is to remove the cache manually:
$ sudo rm -rf var/cache/*
Read more about this issue: app/console cache:clear problem
|
I'm working on a vagrant machine (Homestead). In my Homestead.yml I have:
sites:
- map: myproject.local
to: /home/vagrant/projects/myproject/web
type: symfony
I'm working with Symfony version 3.3 on PHP 7.1.2.
The problem is when I try to execute the command php bin/console cache:clear I'm getting the following error:
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove directory "/home/vagrant/projects/vkfilestore-code/var/cache/de~/pools": .
In my AppKernel.php I have:
public function getCacheDir()
{
return dirname(__DIR__).'/var/cache/'.$this->getEnvironment();
}
When I dump $this->getEnvironment() it says dev.
What could be the problem here?
|
Symfony - can't clear cache through command
|
I had the same problem and part of your solution worked for me:
UIImageView.af_sharedImageDownloader.imageCache?.removeAllImages()
UIImageView.af_sharedImageDownloader.sessionManager.session.configuration.urlCache?.removeAllCachedResponses()
Let me know if it worked for you too.
In pods, I just put
pod 'AlamofireImage'
answered Mar 16, 2018 at 12:48 by Rafaela Lourenço
swift 5 update: UIImageView.af.sharedImageDownloader.imageCache?.removeAllImages() UIImageView.af.sharedImageDownloader.session.sessionConfiguration.urlCache?.removeAllCachedResponses()
– Ruslan Sabirov
May 2, 2022 at 13:27
|
|
How to clear all images from the cache using AlamofireImage? I've gone through the following answers (posted on Stack Overflow) but none of them worked.
How to clear AlamofireImage setImageWithURL cache
AlamofireImage cache?
AlamofireImage how to clean memory and cache
Code I’ve tried to clear cache:
UIImageView.af_sharedImageDownloader.imageCache?.removeAllImages()
I tried this also:
let sharedCache = URLCache(memoryCapacity: 0, diskCapacity: 0, diskPath: nil)
URLCache.shared = sharedCache
//Clear all cookies
if let sharedCookies = HTTPCookieStorage.shared.cookies {
for cookie in sharedCookies {
HTTPCookieStorage.shared.deleteCookie(cookie)
}
}
//-----------------------------------
URLCache.shared.removeAllCachedResponses()
But it can’t remove all images from cache.
Note: I’ve used AlamofireImage 3.1 to load images.
|
AlamofireImage: how to clear all image cache?
|
Umbraco uses the ClientDependency framework (CDF) to cache the backoffice assets. CDF works by caching based on the version number in the ~/Config/ClientDependency.config file. As soon as you change the version number (just make it 1 higher or lower), the caches will be regenerated and the querystrings that automatically get added to all the backoffice assets change.
This should bust the browser cache as well, but some browsers (Chrome especially) are very aggressive in caching assets, so on rare occasions it will also be necessary to clear your browser cache.
|
Am creating a custom property editor for Umbraco 7. Had a typo in the controller.js and despite what I do to clear the cache the broken code keeps showing up in the cached Dependency Handler. So far I've tried:
Restarting application in IIS
Republishing Umbraco site
Clearing Browser cache
Change Debug="false" to Debug="true" in web.config - This worked while in debug but went back to broken cached version when I put it back to false.
Modify ClientDependency.config to exclude .js from fileDependencyExtensions - Again this worked while .js was excluded but went back to broken code when I added it back again.
Remove the reference to the controller from the property editor's manifest. - This allowed the page to load again, but obviously the property editor then had no controller.
Have removed datatype and all references, restarted application and recreated it.
There has to be an easy way to do this. Any suggestions?
|
Umbraco won't clear broken code from cache
|
Memory cache:
Faster to access
Takes up your application's memory, so avoid it for storing huge data
Gets destroyed once the app goes into the background and is killed by the system to free up resources
Disk cache:
Slower than the memory cache
Use it for large cached data
Data is present even after the app goes into the background
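To make the trade-off concrete, here is a language-neutral sketch in Python (the class name and the JSON file format are illustrative assumptions, not an Android API): the memory level is a plain dict that vanishes with the process, while the disk level survives a restart.

```python
import json
import os

class TwoLevelCache:
    """Tiny sketch: fast in-memory dict backed by a slower on-disk JSON file."""
    def __init__(self, path):
        self.path = path
        self.mem = {}                      # memory cache: fast, lost on restart
        if os.path.exists(path):           # disk cache: slower, survives restart
            with open(path) as f:
                self.disk = json.load(f)
        else:
            self.disk = {}

    def get(self, key):
        if key in self.mem:                # 1. memory hit (fastest)
            return self.mem[key]
        if key in self.disk:               # 2. disk hit: promote to memory
            self.mem[key] = self.disk[key]
            return self.mem[key]
        return None                        # 3. miss

    def put(self, key, value):
        self.mem[key] = value
        self.disk[key] = value
        with open(self.path, "w") as f:    # persist so the value survives a kill
            json.dump(self.disk, f)
```

Using both together, as above, is the usual pattern: small hot data in memory, large or long-lived data on disk.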
answered Sep 12, 2018 at 11:29 by Manohar (edited Aug 27, 2019)
|
|
I didn't fully understand when I should use a memory cache (LruCache) and when to pick disk caching. Or should I use them both together?
I looked here
|
android disk cache vs memory cache
|
We are not supposed to do your homework.
AMAT: 0.66 + 0.08*(5.62+0.95*70) = 6.4296
CPI: 0.36 * 6.4296/0.66 + 0.64 = 4.15
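Spelling the arithmetic out (a Python sketch mirroring the two lines above; the variable names are just labels for the question's numbers):

```python
# AMAT = L1_hit + L1_miss_rate * (L2_hit + L2_miss_rate * mem_access)
l1_hit, l1_miss_rate = 0.66, 0.08
l2_hit, l2_miss_rate = 5.62, 0.95
mem_access = 70.0

amat = l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_access)
print(round(amat, 4))    # 6.4296

# CPI: the 36% of instructions that access memory pay AMAT instead of one
# L1 hit; expressed in cycles of the 0.66 ns L1 hit time, as above.
cpi = 0.36 * amat / l1_hit + 0.64
print(round(cpi, 2))     # 4.15
```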
answered Nov 24, 2015 at 18:10 by aminfar
That's true. So why did we do it?
– VAndrei
Dec 6, 2015 at 10:10
|
|
Can anyone help me with this question? It is my computer architecture homework. I have no idea how to solve this. I know only this formula. AMAT = Hit time + (miss rate * miss penalty). However, I know that this formula cannot apply to this problem. I also don't know how to find CPI.
Assume that main memory accesses take 70 ns and that memory accesses are 36% of all instructions. L1 and L2 caches are attached to a processor P. The specification of the two caches can be listed as follows:
L1: size 2KB, miss rate = 8%, and hit time (time needed if a word is found in L1 ) is 0.66ns
L2: size = 1 MB. miss rate = 95%, and hit time is 5.62ns
What is the AMAT (Average Memory Access Time) for P? Assume the base CPI (Cycle per Instruction) of 1.0 without any memory stalls (a word is found in L1), what is the total CPI for P?
|
Find AMAT and CPI of Multi-level cache for a processor
|
The javadoc of CacheLoader#load(String) states
Parameters:
key the non-null key whose value should be loaded
Returns:
the value associated with key; must not be null
Throws:
Exception - if unable to load the result
You've implemented it as returning null, which breaks the CacheLoader contract.
|
I'm trying to use the Google Guava cache in a program but am not quite getting how it works.
I'm loading up the cache and then at a later stage I'm trying to check if an item exists in the cache; my code below doesn't quite work.
getIfPresent returns null if the item doesn't exist, but the load which calls it then bombs out with the error
Exception in thread "main" com.google.common.cache.CacheLoader$InvalidCacheLoadException: CacheLoader returned null for key
private static LoadingCache<String, Image> imageCache
= CacheBuilder.newBuilder()
.build(new CacheLoader<String, Image>() {
@Override
public Image load(String key) throws Exception {
if (getImage(key) != null) {
return getImage(key);
}
return null;
}
});
public static Image getImage(String key) throws ExecutionException {
return imageCache.getIfPresent(key);
}
This means I can't check for the presence of the item in the cache like so:
try {
readImage = imageCache.get(fileName);
} catch (ExecutionException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
if (readImage != null) {
}
can someone explain to me what i'm doing wrong here?
|
google guava checking for item in cache
|
They can be used for similar purposes but there are a few distinct differences between the hash and the buffer components.
They both work by storing the result set in memory but the hash components allow you to store multiple hash objects and retrieve specific hash sets. This can be useful if you need to temporarily store multiple result sets and then join them back in some way for example transforming multiple data sources and then writing the data out in a single entry to your target. You can also append the output of one hash to another to write to the same data set.
The buffer components only have a single append only option where multiple buffer outputs will write into the same, shared buffer. This makes it less flexible than the hash components but can still be useful for many tasks.
What the buffer components offer extra over the hash components is that the buffer can be read by parent jobs to send data back up to the calling parent job. This same mechanism is also used if you want to deploy your Talend job as a web service and return data from it as shown in this tutorial.
Other options in a similar space, more for when you start dealing with amounts of data that can't easily be processed in memory (but need to be fully contained in memory for some reason rather than being iterated on), are the tCache family of components, which I know a few other posters here quite like (although I have yet to need them). These work like the hash components but will also spill to disk if needed.
An embedded H2 database could also be run in memory to provide a similar effect and quite a lot more options, but at the added cost of complexity in your job.
|
I don't clearly understand the difference between using tHash and tBuffer components in Talend.
I am looking at storing the result of a tMap in Impala table and also another copy in memory (cache) and perform other transformations on this to finally write to a table.
|
difference between thashoutput/input and tbufferoutput/input in Talend
|
How do you initialize your RequestQueue? I suspect that you are creating a new RequestQueue in each activity. Instead, initialize it in your Application class:
public class ApplicationController extends Application {
private static ApplicationController sInstance;
private RequestQueue mRequestQueue;
@Override
public void onCreate() {
super.onCreate();
// initialize the singleton
sInstance = this;
}
public static synchronized ApplicationController getInstance() {
return sInstance;
}
public RequestQueue getRequestQueue() {
// lazy initialize the request queue, the queue instance will be
// created when it is accessed for the first time
if (mRequestQueue == null) {
mRequestQueue = Volley.newRequestQueue(getApplicationContext());
}
return mRequestQueue;
}
//your code
}
And getRequest queue from your activity as
mrq = ApplicationController.getInstance().getRequestQueue();
|
I am using Volley to set an image url. My code is crashing at:
mrq=Volley.newRequestQueue(this);
Logcat says that the exception is at: com.android.volley.toolbox.DiskBasedCache.streamToBytes
If I comment the code out the program does not crash.
I have tried restarting my phone a couple times because in my research I found that has worked for some with this problem.
Why is creating a RequestQueue using so much memory?
How can I prevent the OutOfMemoryError from happening?
Do I need to empty the cache?
Thank you for your help and taking the time to read this.
|
Volley.newRequestQueue is causing OutOfMemoryError
|
I've just solved exactly this scenario. There are two things I did.
I put the node-replace and node-insert type calls (that is, any calls that modify the XML structure) into a separate module and then called that module using xdmp:invoke, passing in any parameters required, like this
let $update := xdmp:invoke("/app/lib/update-attribute-node.xqy",
                           (xs:QName("newValue"), $new),
                           <options xmlns="xdmp:eval"><database>{xdmp:modules-database()}</database></options>)
The reason why this works is that the call to xdmp:invoke happens in its own transaction and, once it completes, the memory is cleared up. If you don't do this, then each time you call the update or insert function it will not actually do the write until the end, in a single transaction, meaning your memory will fill up pretty quickly.
Any time I needed to loop over paths in MarkLogic (or documents or whatever they are called - I've only been using MarkLogic for a few days) and there are a large number of them I processed them only a few at a time like below. I came up with an elaborate way of skipping and taking only a batch of documents at a time, but you can do it in any number of ways.
let $whatever:= xdmp:directory("/whatever/")[$start to $end]
I also put this into a separate module so that it is processed immediately and not in a single transaction.
Putting all expensive calls into separate modules and taking only a subset of large data sets at a time helped me solve my expanded tree cache full errors.
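The batching idea in that second step is language-agnostic. A minimal Python sketch of the same pattern follows; the document list and the upper-casing "work" are hypothetical stand-ins for MarkLogic URIs and per-batch processing, not real MarkLogic calls:

```python
def batches(items, size):
    """Yield successive slices of at most `size` elements."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical stand-ins for document URIs and per-document work.
docs = ["/whatever/doc-%d.xml" % i for i in range(10)]
processed = []
for batch in batches(docs, 4):
    # in MarkLogic each batch would go to a separately invoked module,
    # so a single transaction only ever holds a few documents in memory
    processed.extend(p.upper() for p in batch)

print(len(processed))  # → 10
```

The point is simply that the working set per transaction is bounded by the batch size, not by the total number of documents.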
|
I am new to XQuery and MarkLogic.
I am trying to update documents in MarkLogic and get the expanded tree cache full error.
Just to get the work done I have increased the expanded tree cache but that is not recommended.
I would like to tune this query so that it does not need to simultaneously cache as much XML.
Here is my query
I have uploaded my query as an image because it was not so pretty when I pasted it on the editor. If any one knows a better way please suggest.
Thanks in advance.
|
Need help rewriting XQuery to avoid expanded tree cache full error in MarkLogic
|
From the Play 2.2 Migration guide:
Play cache is now split out into its own module. If you are using the Play cache, you will need to add this as a dependency. For example, in Build.scala:
val addDependencies = Seq(
jdbc,
cache,
...
)
|
I'm new to play (v2.2.0), and I'm modifying the hello-play-java template. I'd like to add caching, however, the JavaCache documentation makes what appears to be conflicting statements:
The default implementation of the cache API uses EHCache and it’s enabled by default.
and
The cache API is provided by the play.cache.Cache object. This requires a cache plugin to be registered.
Indeed, when I import Cache, the compiler barfs; and older documentation that discusses plugins seems outdated as play install ... is no longer valid.
Thus: how can I enable the default caching module?
|
Enabling ehcache in Play 2.2.x
|
You can explicitly disable caching in .htaccess for chosen file extensions:
<FilesMatch "\.(png|jpe?g|gif|js|css)$">
FileETag None
<ifModule mod_headers.c>
Header unset ETag
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "Tue, 14 Jan 1975 01:00:00 GMT"
</ifModule>
</FilesMatch>
|
I've tried the ?ver=478459 stuff to help prevent browsers from caching files like CSS, images and JavaScript, and I've come to realize it's only so helpful; it doesn't appear to work every time. For example, I have to VPN into work on occasion, and for whatever reason, even when changing the parameters at the end of a file as mentioned above, files will remain in the cache until I clear it. I figure the parameter might not be sticking because I am running through a proxy, but that doesn't always seem to be the case either.
Anyway. I am trying to figure out, if there is a way I can through htaccess provide a rewrite_rule for my js, css, img files and if the URL provided is I dont know lets say
/scripts/jquery/__ver<version number>__/filename.js
/scripts/__ver<version number>__/filename2.js
/scripts/3.0.x/__ver<version number>__/filename2.js
/styles/jquery-ui/__ver<version number>__/filename.css
/styles/__ver<version number>__/filename2.css
/img/__ver<version number>__/something.png
/img/dir/dir/dir/__ver<version number>__/something-else.png
essentially where the rewrite rule is looking specifically for this __ver<version number>__ part, and <version number> is either a three-dot version number or an md5 hash of some sort; basically it is looking for __ver*__.
When this is found, the rewrite rule would remove it and use the path without that part.
|
.htaccess rewrite rule to prevent caching of css, js, image files.
|
1) Try using setAppCacheEnabled and setAppCacheMaxSize to limit the cache size to a very small value; a lower cache size will result in faster cleanup.
Ex: wv.getSettings().setAppCacheMaxSize(1);
OR
2) If you don't need the cached data then simply set setCacheMode(WebSettings.LOAD_NO_CACHE);, which means
"Don't use the cache, load from the network", even though data is cached.
In short, simply ignore the cached data; Android will take care of it.
OR
3) you can also try the below code for no-caching,
Note: this is only available for Android API 8+
Map<String, String> noCacheHeaders = new HashMap<String, String>(2);
noCacheHeaders.put("Pragma", "no-cache");
noCacheHeaders.put("Cache-Control", "no-cache");
view.loadUrl(url, noCacheHeaders);
OR
4) Clear the cache every-time whenever page load finishes.
Something like this in the WebViewClient.
@Override
public void onPageFinished(WebView view, String url) {
super.onPageFinished(view, url);
view.clearCache(true);
}
OR
5) You can try deleting whole cached database at once.
context.deleteDatabase("webview.db");
context.deleteDatabase("webviewCache.db");
This might give a bit faster result, hope so.
|
I am using WebViews in an Android app, and I need to prevent the WebViews from caching.
Unfortunately it seems like this seemingly simple goal is nearly impossible to achieve. The solution I have resorted to is executing webview.clearCache(true) in the onPageFinished event so that the cache is cleared each time a page is loaded. There are some issues...
I have noticed that as the cache grows it becomes very time consuming for the clearCache method to execute. Sometimes if you execute clearCache and then switch to a different Activity that contains different webview, that webview will not load for a few seconds because it is still waiting on the previous clearCache operation to finish.
What's worse is that execution time of subsequent calls to clearCache does not seem to decrease after the cache has been already cleared. If the call to clearCache takes 3 seconds to complete and then I immediately call clearCache a second time, then I would expect the second call to clearCache to complete almost immediately. But that is not what I'm experiencing; I'm experiencing that the second call to clearCache still take approximately 3 seconds.
Has anyone else experienced this? Is there any way to improve performance? Waiting 2-3 seconds for a webview to load (from the local filesystem) is horrible.
EDIT:
Here is my best alternative to actually clearing the cache. It more or less works but it's sort of flaky and I'm not 100% happy with it (written in Mono c#):
public class NoCacheWebClient : WebViewClient
{
string previous;
public override void OnPageStarted(WebView view, string url, Android.Graphics.Bitmap favicon)
{
base.OnPageStarted(view, url, favicon);
if (!string.Equals(previous, url))
{
previous = url;
view.Reload(); //re-load once to ignore cache
}
else
{
previous = null;
}
}
}
|
Android Webview's ClearCache is very slow
|
One copy of read-only data per thread will not help you with caching; quite the opposite, it can hurt instead when threads execute on the same multicore (and possibly hyperthreaded) CPU and so share its cache, as in this case per-thread copies of the data may compete for limited cache space.
However, in case of a multi-CPU system, virtually all of which are NUMA nowadays, typically having per-CPU memory banks with access cost somewhat different between the "local" and "remote" memory, it can be beneficial to have a per-CPU copies of read-only data, placed in its local memory bank.
The memory mapping is controlled by OS, so if you take this road it makes sense to study NUMA-related behavior of your OS. For example, Linux uses first-touch memory allocation policy, which means memory mapping happens not at malloc but when the program accesses a memory page for the first time, and OS tries to allocate physical memory from the local bank.
And the usual performance motto applies: measure, don't guess.
|
Regarding performance: assume we have a block of data that will be frequently accessed by each thread, and these data are read-only, meaning threads won't do anything besides reading the data.
Is it beneficial to create one copy of these data for each thread or not?
If the frequently accessed data are shared by all threads (instead of one copy per thread), wouldn't this increase the chance that these data get properly cached?
|
shared memory multi-threading and data accessing?
|
Oracle databases support Java triggers, so in theory you could implement something like this yourself, see this guide. In theory, your Java trigger could invoke the client library of whichever distributed caching solution you are using, to update or evict stale entries.
Oracle also has a caching solution of its own, known as Coherence. It might have integration like this built in, or at least it might be worth checking out. Search for "java distributed cache" for some alternatives.
As far as I know Hibernate does not support queries on objects stored in its cache.
However if you cache an entire collection of objects separately, then there are some libraries which will allow you to perform SQL-like queries on those collections:
LambdaJ - supports advanced queries, not as fast
CQEngine - supports typical queries, extremely fast
BTW I am the author of CQEngine. I like both of those libraries. But please excuse my slight bias for my own one :)
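To make the "SQL-like queries on cached collections" idea concrete, here is a language-agnostic sketch in Python of what such libraries do under the hood: keep the whole collection cached, maintain an index on a queried attribute, and answer lookups from the index instead of scanning every object. The class and field names are illustrative, not any library's API:

```python
from collections import defaultdict

class IndexedCache:
    """Cache a collection plus an index on one attribute for fast lookups."""

    def __init__(self, objects, index_attr):
        self.objects = list(objects)
        self.index = defaultdict(list)
        for obj in self.objects:
            # group objects by the indexed attribute's value
            self.index[obj[index_attr]].append(obj)

    def where_equals(self, value):
        # O(1) dictionary lookup instead of an O(n) scan
        return self.index.get(value, [])

rows = [
    {"id": 1, "status": "OPEN"},
    {"id": 2, "status": "CLOSED"},
    {"id": 3, "status": "OPEN"},
]
cached = IndexedCache(rows, "status")
print([r["id"] for r in cached.where_equals("OPEN")])  # → [1, 3]
```

Real libraries generalize this to multiple index types and richer predicates, but the trade-off is the same: extra memory for the index in exchange for fast queries.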
|
I am looking for the best solution for caching a large number of simple transactional POJOs in memory. Transactions happen in an Oracle database on 3-4 tables, performed by an external application. Another application is a Business Intelligence type app which, based on the transactions in the database, evaluates updated POJOs (mapped to tables) and applies various business rules.
The Hibernate solution relies on transactions happening on the same server, whereas in our case transactions happen somewhere else, and I'm not sure cached objects can be queried.
Question:
Is there an Oracle JDBC API that would trigger an update event on the Java side?
Which caching solution would support #1?
Can cached objects be queried?
|
Java Large number of transaction object caching
|
Yes, you can use ajaxSetup.
$.ajaxSetup({
cache: false
});
$(document).ready(...
|
I am using jQuery and the jsTree plugin to load some images when a jsTree node is clicked. It looks like when I click a jsTree node, it first checks the cache, and if the image is not in the cache, nothing happens. I have to click the node again to load it from the server. (If the image is in the cache, the first click on the node works.)
So the behavior is not consistent for the end user. On the first click on a jsTree node, I should go to the server to retrieve the image and put it in a div. I've been looking at this for a while now and couldn't come up with any solution.
I am reaching out this community, maybe someone has seen this before and can help.
$(document).ready(function() {
$("#div_tree").jstree({
"xml_data": {
"ajax": {
"url": "tree.xml"
},
"xsl": "nest"
},
"plugins": ["themes", "xml_data", "ui", "types"]
}).bind("select_node.jstree", function(event, data) {
var node_id = data.rslt.obj.attr("id");
        if (node_id === "tree_a") {
            $("#mydiv").html(myPic1);
        }
    });
});
Is there a quick way to disable the jQuery cache, so that every time I click on a jsTree node I get the images from the server?
|
disabling cache on jquery
|
A cache works by having previously loaded and remembered the requested piece of data.
When a cache is initialized, it is empty. So the first access for any given piece of data will result in a cache-miss and take more time than one would wish.
Warming the cache means executing code right after startup that loads stuff into the cache before the program needs it "for real" (so that is already there when an end-user uses the application).
This is similar to athletes doing stretching exercises to warm up their muscles before they need them for the competition.
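A minimal sketch of the idea in Python (the data source, keys, and helpers are hypothetical stand-ins): warming simply means running the same cache-filling path at startup, before any real request arrives, so those requests become hits.

```python
# Hypothetical backing store; in reality this would be a database or disk.
DATA_SOURCE = {"user:1": "alice", "user:2": "bob", "config": {"theme": "dark"}}

cache = {}

def fetch(key):
    """Slow path: read from the backing store (simulated here)."""
    return DATA_SOURCE[key]

def get(key):
    if key not in cache:      # cache miss: pay the full load cost
        cache[key] = fetch(key)
    return cache[key]

def warm_cache(hot_keys):
    """Run at startup, before user traffic, so hot keys are pre-loaded."""
    for key in hot_keys:
        get(key)

warm_cache(["user:1", "config"])
print(sorted(cache.keys()))   # the warmed entries are already present
```

After warm_cache runs, the first real lookup of "user:1" or "config" is already a hit; only keys that were not warmed still pay the miss cost.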
|
What is meant by running the program in warm cache?
Can someone be kind enough to explain.
Does warm cache imply..the cache which when queried gives a cache hit?
|
What is meant warm cache?
|
Unfortunately, there are very few options that lets you manipulate the MySQL query cache.
The query_cache_limit option instructs MySQL to not cache query results larger than a set limit. Reasons why you would want to lower this value:
some relatively rare queries return large result sets
most slower queries typically return small result sets
The SQL_NO_CACHE keyword, immediately placed after the SELECT statement, instructs MySQL to not cache this result.
Conversely, you could set the query_cache_type server option to 2 so that only queries using the SQL_CACHE keyword are cached.
I would also advise to make sure your query cache is actually fully used. SHOW STATUS LIKE 'Qcache_free_memory'; gives you this information. If a high proportion of your query cache is free, then it probably means that your data changes too frequently for the results in cache to be reused.
It could also mean that most of your queries return result sets larger than query_cache_limit (which in turn probably suggests badly designed queries).
There are a few other tips here.
However, you are correctly wondering whether this is worth the hassle. In my opinion the query cache is, at best, a secondary factor for a fast database. Appropriate indexing is the first factor. Moreover, in many cases, your memory would be better used for caching indexes (the most important parameters being innodb_buffer_pool_size for InnoDB tables, or key_buffer_size for MyISAM tables).
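No live server is needed to see the shape of these knobs. The sketch below just collects the two SQL statements discussed above and mirrors the query_cache_limit rule in a tiny helper; the table, column names, and byte sizes are illustrative only:

```python
# The SQL_NO_CACHE hint excludes one query's result from the query cache.
NO_CACHE_QUERY = "SELECT SQL_NO_CACHE id, name FROM users WHERE id = %s"

# Server-side check of how much of the query cache is actually unused.
CACHE_STATUS_QUERY = "SHOW STATUS LIKE 'Qcache_free_memory'"

def cacheable(result_bytes, query_cache_limit=1_048_576):
    """Mirror MySQL's rule: result sets larger than query_cache_limit
    (default 1 MB) are never stored in the query cache."""
    return result_bytes <= query_cache_limit

print(cacheable(512 * 1024))       # small result set: eligible
print(cacheable(4 * 1024 * 1024))  # large result set: skipped
```

Lowering query_cache_limit therefore trades away caching of the rare large results in favor of keeping room for many small ones.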
|
I have one MySQL server running on Linux.
It has a few critical apps, and we have set the query cache as high as we could afford.
I also have a few non critical databases, (wordpress / etc).
Questions:
Is it possible to fine tune query cache on a per database level?
Is it possible to fine tune query cache on a per table basis?
Is it even worth doing? Having a fine tuned query cache on critical tables, will the db still stutter when accessing non important data?
Thanks in advance
|
MySQL Query Cache Per Database / Table
|
The problem you describe could occur in Internet Explorer, but it does not exist in jqGrid if you use the default options.
If you look at the full URL which will be used you will see parameters like
nd=1339350870256
It has the same meaning as the cache: false option of jQuery.ajax. jqGrid adds the current timestamp to the URL to make it unique.
I personally like to use HTTP GET in jqGrid, but I don't like the usage of nd parameter. The reason I described in the old answer. It would be better to use prmNames: {nd:null} option of jqGrid which remove the usage of nd parameter in the URL. Instead of that one can control the caching on the server side. For example the setting of
Cache-Control: private, max-age=0
is my standard setting. To set the HTTP header you need just include the following line in the code of ASP.NET MVC action
HttpContext.Current.Response.Cache.SetMaxAge (new TimeSpan (0));
You can find more details in the answer.
It's important to understand that the header Cache-Control: private, max-age=0 doesn't prevent the caching of data, but the data will never be used without re-validation on the server. Using another HTTP header, ETag, you can make the revalidation really work. The main idea is that the value of ETag always changes when the data change on the server. If the previous data are already in the web browser cache, the web browser automatically sends an If-None-Match part in the HTTP request with the ETag value from the cached data. So if the server sees that the data have not changed, it can answer with an HTTP response having 304 Not Modified status and an empty body. That allows the web browser to reuse the previously cached data.
In the answer and in this one you will find a code example of how to use the ETag approach.
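The ETag handshake described above can be sketched with the HTTP layer faked out. The handler shape and header dictionaries below are hypothetical, not any particular framework's API; only the ETag/If-None-Match/304 logic is the point:

```python
import hashlib

def make_etag(body: bytes) -> str:
    # ETag values are quoted strings; hash the body so it changes with the data
    return '"%s"' % hashlib.md5(body).hexdigest()

def handle(request_headers, body: bytes):
    """Return (status, response_headers, response_body)."""
    etag = make_etag(body)
    if request_headers.get("If-None-Match") == etag:
        # data unchanged: empty 304 tells the browser to reuse its cached copy
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag, "Cache-Control": "private, max-age=0"}, body

status1, headers, _ = handle({}, b"grid rows v1")
status2, _, body2 = handle({"If-None-Match": headers["ETag"]}, b"grid rows v1")
print(status1, status2)  # → 200 304
```

If the server data change, the computed ETag no longer matches the If-None-Match value, so the handler falls through to a full 200 response with the new body.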
|
I am using jqgrid in my ASP.NET MVC application. Currently I have mTYpe: 'POST' like this:
jQuery("#myGrid").jqGrid({
mtype: 'POST',
toppager: true,
footerrow: haveFooter,
userDataOnFooter: haveFooter,
But I was reading this article, and I see this paragraph:
Browsers can cache images, JavaScript, CSS files on a user's hard drive, and it can also cache XML HTTP calls if the call is a HTTP GET. The cache is based on the URL. If it's the same URL, and it's cached on the computer, then the response is loaded from the cache, not from the server when it is requested again. Basically, the browser can cache any HTTP GET call and return cached data based on the URL. If you make an XML HTTP call as HTTP GET and the server returns some special header which informs the browser to cache the response, on future calls, the response will be immediately returned from the cache and thus saves the delay of network roundtrip and download time.
Given this is the case, should I switch all my jqGrid mtype settings from "POST" to "GET"? (It says XML; it doesn't mention JSON.) If the answer is yes, then what would actually be a situation where I would ever want to use POST for the jqGrid mtype, as it seems to do the same thing without this caching benefit?
|
Should I be using POST or GET when retrieving JSON data into jqGrid in my ASP.NET MVC application?
|
Use an ActiveRecord::Observer to watch for model changes. It can expire the cache.
This looks like a good approach. Even though, I am not very keen about expiring a cache in /app/models/foo_observer.rb. I would think that foo_observer.rb shouldn't know about caches. Destroys, the MVC concept
– Christian Fazzini
May 10, 2012 at 3:21
Putting the code in an observer makes it separate from the model for the reason you give--model should not know of this side-effect. An observer is not a model. You don't need to name it after the model, use the higher-order issue in the name. Saying that using an observer "destroys" MVC is a bit much. A good app is engineering, not science. If you want absolute purity, don't cache.
– Larry K
May 11, 2012 at 15:02
|
I've taken the quote below, which I can see some sense in:
"Cached pages and fragments usually depend on model states. The cache doesn't care about which actions create, change or destroy the relevant model(s). So using a normal observer seems to be the best choice to me for expiring caches."
For example. I've got a resque worker that updates a model. I need a fragment cache to expire when a model is updated / created. This can't be done with a sweeper.
However, using an observer will mean I would need something like, either in the model or in the Resque job:
ActionController::Base.new.expire_fragment('foobar')
The model itself should not know about caching, and doing it there also breaks MVC principles, which will lead to ugly results down the road.
|
How would you expire fragment caches in the model or in a Resque worker?
|
You can extend the default dict and use __missing__ method to call a loading function if the key is missing:
class ImageDict(dict):
def __missing__(self, key):
self[key] = img = self.load(key)
return img
def load(self, key):
# create a queue if not exist (could be moved to __init__)
if not hasattr(self, '_queue'):
self._queue = []
# pop the oldest entry in the list and the dict
if len(self._queue) >= 100:
self.pop(self._queue.pop(0))
# append this key as a newest entry in the queue
self._queue.append(key)
# implement image loading here and return the image instance
        print('loading', key)
return 'Image for %s' % key
And the output (the loading happen only when the key doesn't exist yet.)
>>> d = ImageDict()
>>> d[3]
loading 3
'Image for 3'
>>> d[3]
'Image for 3'
>>> d['bleh']
loading bleh
'Image for bleh'
>>> d['bleh']
'Image for bleh'
One evolution would be to store only the N most recently used elements in the dict and purge the oldest entries. You can implement it by keeping a list of keys for ordering.
|
I have a directory of images in order. Typically my code will be using data from a sequential subset of images (e.g. images 5-10), and the naive options for accessing these are:
Create a wrapper object with a method that loads the image when needed and reads my data (e.g. a pixel value). This has little memory overhead but will be slow as it will need to load each image every time.
Store all the images in memory. This will be fast but obviously there's a limit to how many images we can store.
I would like to find:
Some method by which I can define how to read the image corresponding to an index or a path, and then allows me to access, say magic_image_collection[index] without me having to worry about whether it's going to return the object in memory or read it afresh. This would ideally keep the appropriate images or the n most recently accessed images in memory.
|
Smart caching of expensive objects in Python
|
To add to what @Srikar has said, only save data in Caches if you can regenerate it. It has been found that when the device is running short on memory it clears the app's cache directory. This has been confirmed in various blog posts and by developers.
|
I heard that apple changed some things, concerning the memory management in iOS 5. So where is the best place, to save app data and files in my new iOS 5 application - without losing data?
|
Where to save files in iOS 5 applications?
|
The answer you got on your other question also applies here: Provide a unique url for your applet each time. It's not humor, as lots of people are using this technique and it would solve your problem.
|
Is there a way to delete an older version of an applet from the browser's cache? The things I have already tried to prevent the cache problem in the first place are:
1- To set "no-cache" in HTTP response header, I placed following script on the top of my jsp:
<%
if (request.getProtocol().compareTo("HTTP/1.0") == 0) {
response.setHeader("Pragma", "no-cache");
} else if (request.getProtocol().compareTo("HTTP/1.1") == 0) {
response.setHeader("Cache-Control", "no-cache");
}
response.setDateHeader("Expires", 0);
%>
2- While deploying applet 'cache_option' is set to 'no'
But of no use. I was now wondering if there is a way to programatically delete this applet jar file from cache?
[UPDATE]
Providing a unique url for the applet each time doesn't look like a good idea in my case, since the applet reloads (refreshes) itself after a time (say at midnight, using a Timer) by hitting a url:
applet.getAppletContext().showDocument(url);
It would be difficult to communicate a new url to the applet.
|
how to remove old applet from browser's cache programmatically?
|
It must be mentioned that Varnish includes the ability to uppercase and lowercase strings in the std vmod ( https://www.varnish-cache.org/docs/trunk/reference/vmod_std.generated.html#func-tolower )
This is much cleaner than the embedded C route (which is disabled by default in Varnish 4). Here's an example I use to normalize the request Host and url;
import std;
sub vcl_recv {
# normalize Host header
set req.http.Host = std.tolower(regsub(req.http.Host, ":[0-9]+", ""));
....
}
sub vcl_hash {
# set cache key to lowercased req.url
hash_data(std.tolower(req.url));
....
}
|
In Varnish (3.0), urls are treated in a case sensitive way. By that I mean http://test.com/user/a4556 is treated differently from http://test.com/user/A4556. On my web server they're treated as the same url. What I'd like to do is have varnish lowercase all request urls as they come in.
I managed to find this discussion but the creator of Varnish indicates that I will have to use inline C to do it. I could achieve this in a simplistic way using multiple regexes but that just seems like it's bound to fail.
Ideally, what I'd like is a VCL configuration to do this (an example of this can be found here) but I'd settle for a C function that takes in a const char * and returns const char * (I'm not a C programmer so forgive me if I get the syntax wrong).
|
Lowercase urls in Varnish (inline C)
|
Answering my own question. As it turns out, setting the following response headers (as opposed to META tags) worked for me:
Cache-Control: private, no-store, max-age=0, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
If you're working in Rails like I am, you can do this easily by putting the following in an ApplicationController before_filter callback:
response.headers["Cache-Control"] = "private, no-store, max-age=0, no-cache, must-revalidate, post-check=0, pre-check=0"
response.headers["Pragma"] = "no-cache"
response.headers["Expires"] = "Fri, 01 Jan 1990 00:00:00 GMT"
|
I know this question has been asked multiple times before, but the solutions posted are not working for me.
I have put the following in the <head> tag, to no avail:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">
I also saw some mention about the onunload attribute and the bfcache in browsers, so I added that to the <body> tag, also to no avail.
The response headers from my server contain:
Cache-Control: max-age=0, private, must-revalidate
Would appreciate it if someone could point me in the right direction here - what am I doing wrong?
|
How do I prevent a cached page from being served when the user clicks the back button?
|
Clean them up in your applicationWillTerminate method.
- (void)applicationWillTerminate:(UIApplication *)application
{
NSFileManager *fileManager = [NSFileManager defaultManager];
NSError *error;
NSArray *imagesFiles = [fileManager contentsOfDirectoryAtPath:saveDirectory error:&error];
for (NSString *file in imagesFiles) {
error = nil;
[fileManager removeItemAtPath:[saveDirectory stringByAppendingPathComponent:file] error:&error];
/* do error handling here */
}
}
|
I have created an application where I'll be downloading images from a server and storing them locally on the iPhone's file system. That's working fine. Now the problem is, I want to clear the locally cached images whenever I quit my application.
How do I delete those cached images on the iPhone? They're using secondary memory on the iPhone; how do I access it programmatically?
Thank You in advance.
Suse.
|
programmatically clearing secondary cache on iPhone
|
Premature optimization is the root of all evil
-Donald Knuth
Optimize when you see issues, don't jump to conclusions and waste time optimizing what you think might be the issue.
Besides, I think you have more important things to work out on the site (like being able to cast multiple votes on the same question) before worrying about a caching layer.
Its done in PHP, MySQL and jQuery's AJAX, at the moment there is only a dozen or so submissions and already i can feel it lagging slightly when it goes to a new page (therefore running a new mysql query)
"Can feel it lagging slightly" – Don't feel it, know it. Run benchmarks and time your queries. Are you running queries effectively? Is the database setup with the right indexes and keys?
That being said...
CDN's
A CDN works great for serving static content. CSS, JavaScript, images, etc. This can speed up the loading of the page by minimizing the time it takes to request all the resources. It will not fix bad query practice.
Content Caching
The easiest way to implement content caching is with something like Varnish. Basically sits in front of your site and re-serves content that hasn't been updated. Minimally intrusive and easy to setup while being amazingly effective.
Database
Is it most important for me to try and optimise my MySQL queries (by prepared statements)
Why the hell aren't you already using prepared statements? If you're doing raw SQL queries always use prepared statements unless you absolutely trust the content in the queries. Given a user content based site I don't think you can safely say that. If you notice query times running high then take a look at the database schema, the queries you are running per-page, and the amount of content you have. With a few dozen entries you should not be noticing any issue even with the worst queries.
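For contrast, here is the parameterized-statement idea in a self-contained Python sketch, using the standard library's sqlite3 (the same DB-API placeholder concept applies to MySQL drivers, which use %s instead of ?). The table and the hostile input string are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (user_id INTEGER, question_id INTEGER)")
conn.execute("INSERT INTO votes (user_id, question_id) VALUES (?, ?)", (42, 7))

user_input = "1; DROP TABLE votes"   # hostile input a user might submit

# The ? placeholder keeps the input as data, never as SQL text,
# so the injection attempt is just a string that matches nothing.
rows = conn.execute(
    "SELECT COUNT(*) FROM votes WHERE user_id = ?", (user_input,)
).fetchall()

print(rows[0][0])  # → 0, and the votes table survives intact
```

Beyond safety, most drivers can also reuse the prepared statement's parse plan across executions, which is where the modest performance benefit comes from.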
|
I've just made a user-content orientated website.
It is done in PHP, MySQL and jQuery's AJAX. At the moment there is only a dozen or so submissions and already I can feel it lagging slightly when it goes to a new page (therefore running a new MySQL query)
Is it most important for me to try and optimise my MySQL queries (by using prepared statements), or is it worth looking at CDNs (Amazon S3) and caching static HTML files (much like the WordPress plugin WP Super Cache) when no new content has been submitted?
Which route is the most beneficial, for me as a developer, to take, ie. where am I better off concentrating my efforts to speed up the site?
|
How important is caching for a site's speed with PHP?
|
There's no sense in caching both image sizes; it takes too much memory.
Best practices would be (to my humble opinion):
Make sure your cache uses SoftReferences, this way you can make sure that you don't run out of memory, and can always load new bitmaps on the "expense" of losing old ones.
Use the Canvas' drawBitmap methods to draw the large-scale bitmaps smaller.
Make sure you guard against OutOfMemoryError, and notice that it's a subclass of Throwable, and not a subclass of Exception, so a catch(Exception e) clause will not catch it.
|
I have an app which operates a large quantity (~100) of bitmaps - i.e. music cover art. Bitmaps are used in two ways - as a large background and a small (50dip) icon.
Does it make sense to preload and cache two sizes as separate bitmaps?
I've implemented both approaches (use large bitmap as the icon | cache both sizes), but I can't see actual performance difference.
What is the best practice for such situation?
|
Android bitmap caching
|
Either the grails-melody plugin or the app-info plugin will allow you to see what's in the 2nd level EHCache, as well as lots of other interesting details about the internals of your app.
The hibernate 1st level cache is more transient and as far as I know, there isn't any way to examine it.
|
By default, Grails uses Hibernate with EHCache as the second level cache. I'm still learning about how Hibernate works internally and would love to be able to introspect the caches (both EHCache and anything Hibernate does itself at 'level 1') while the application is running and executing my queries. Are there any Grails plugins or similar that will facilitate this?
|
How to monitor Hibernate statistics (cache hits and misses) in a grails app?
|
From Books Online:
If a view is not created with the SCHEMABINDING clause, sp_refreshview should be run when changes are made to the objects underlying the view that affect the definition of the view. Otherwise, the view might produce unexpected results when it is queried.
|
I had an old view that was giving out some odd data when I would query it. Two of its columns, C and D, had a copy of data from columns A and B respectively. So C had a copy of A's data and D had a copy of B's data. When I extracted the query used by the view and ran it standalone everything was fine. Columns A, B, C, and D had the data that I expected to see. When I looked at the view definition I noticed that it had some wildcards (*) for column selection like so:
SELECT
TableX.*,
TableY.*
FROM
X AS TableX INNER JOIN
Y AS TableY ON TableX.PK = TableY.FK
I was told never to use wildcards in views for various other reasons, but I was wondering why it has this effect. I noticed that when I re-create the view and run a select query against it, everything is fine. One of the senior developers told me the problem occurs because of some caching that SQL Server does, but I was hoping for a more detailed answer.
|
What is the problem with using wildcards for selecting columns?
|
Session and Cache are different concepts.
If you want to share the object between all users of your site you should use Cache while Session is for storing user specific data.
So your question should not be:
Is there any reason to use
Microsoft.Practices.EnterpriseLibrary.Caching
over standard ASP.NET Session?
but:
Is there any reason to use
Microsoft.Practices.EnterpriseLibrary.Caching
over standard ASP.NET Cache?
and the answer to this question as always is: it depends on your scenario. This article provides an overview of where can the Caching Application Block be used and what issues does it try to resolve.
|
Working on a legacy ASP.NET application we've found that ASP.NET session gets used for caching some objects, but for some objects Microsoft.Practices.EnterpriseLibrary.Caching gets used.
Is there any reason to use Microsoft.Practices.EnterpriseLibrary.Caching over standard ASP.NET Session?
Edit
In my scenario, the Enterprise Library caching is actually being used to cache per-user data by appending the ASP.NET Session ID to the cached item's key.
|
Is it appropriate to use Microsoft.Practices.EnterpriseLibrary.Caching with ASP.NET?
|
As you're implementing a cache, I'd suggest only exposing the methods you need to the outside world, to prevent any unexpected side effects that would result if another user fiddles with the dictionary.
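For illustration, here is a minimal sketch of that idea in Python (the language differs, but the design point is the same): the underlying dictionary stays private, and only get/add/clear are exposed, so callers cannot mutate the cache's internal state.

```python
import threading

class SimpleCache:
    """Cache wrapper exposing only get/add/clear, hiding the dict itself."""
    def __init__(self):
        self._lock = threading.Lock()
        self._items = {}

    def get(self, key, default=None):
        with self._lock:
            return self._items.get(key, default)

    def add(self, key, value):
        with self._lock:
            self._items[key] = value

    def clear(self):
        with self._lock:
            self._items.clear()

# module-level singleton instance shared by all callers
cache = SimpleCache()
```

Because the dictionary is never handed out, changing the internal storage (or the locking strategy) later cannot break callers.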
|
I'm thinking about using a class (singleton) with collections implemented using ConcurrentDictionary. This class will be used as a cache implementation (ASP.NET / WCF).
What do you think about exposing these collections explicitly from such a class vs. exposing just, e.g., 3 methods (get, add, clear) for each of them (using the safe methods from ConcurrentDictionary)?
|
ConcurrentDictionary as static cache
|
The method you have works just fine; an alternative would be to use DateTime.UtcNow.Ticks. You can of course use cache-control headers, but if you want to be absolutely sure, the method you have is the way to go.
Though, if you're generating the file dynamically anyway, and need to include it in the page...any reason for not just sticking the JavaScript in the page? Since you want it to never cache, this will save a round-trip for the client.
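A minimal sketch of the ticks-style alternative, shown in Python for illustration (the URL is the one from the question):

```python
import time

def cache_busted_url(base_url: str) -> str:
    # Append the current time in nanoseconds; the URL differs on every
    # page render, so the browser never serves the script from cache.
    return f"{base_url}?rand={time.time_ns()}"

script_src = cache_busted_url("/js/dynamic.js")
```

A timestamp avoids generating and storing GUIDs, and still guarantees a unique URL per render.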
|
I have a dynamically-generated javascript file that I want to ensure is NEVER cached by the browser. My current method is to simply append a (new) guid to the url in the script tag on each page view.
For example:
<script type="text/javascript" src="/js/dynamic.js?rand=979F861A-C487-4AA8-8BD6-84E7988BD460"></script>
My question is...is this the best way to accomplish my goal?
For reference, I am using ASP.NET MVC 2 and the javascript file is being generated as a result of a controller action.
|
How can I ensure a dynamically-generated javascript file is never cached?
|
Because you invoke a stored procedure, not directly a query, the only thing that changes is the actual batch you send to SQL Server: EXEC GetFooData '2010-05-01', '2010-05-05' vs. EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27'. This is a trivial batch that will generate a trivial plan. While it is true that, from a strictly technical point of view, you are losing some performance, it will be all but unmeasurable. The details of why this happens are explained in this response: Dynamically created SQL vs Parameters in SQL Server
The good news is that by a minor change in your SqlClient invocation code, you'll benefit from even that minor performance improvement mentioned there. Change your SqlCommand code to be an explicit stored procedure invocation:
SqlCommand cmd = new SqlCommand("GetFooData", connection);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@StartDate", dateFrom);
cmd.Parameters.AddWithValue("@EndDate", DateTime.Now);
As a side note, storing localized times in the database is not a very good idea, due to clients being in different time zones than the server and the complications of daylight saving time changes. A much better solution is to always store UTC and simply format it to the user's local time in the application.
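The DateTime.Today.AddDays(1) idea amounts to snapping the end date to the start of tomorrow, so every call made on the same day passes the identical parameter value. A sketch of that rounding, in Python for illustration:

```python
from datetime import datetime, timedelta, time

def stable_end_date(now: datetime) -> datetime:
    # Midnight at the start of the next day: stable for the whole day,
    # yet still an upper bound that includes all of today's rows.
    return datetime.combine(now.date() + timedelta(days=1), time.min)
```

Note this pairs with the proc's exclusive comparison (LogDate < @EndDate), so no rows from tomorrow leak in.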
|
Question:
Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, then is the web app missing out on huge performance gains?
Possible Solution:
I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end-date to the sql proc (per day). And the user would still get the latest data. Please speak to this as well.
Given Example:
Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the sql proc.
Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010. But the web app passes DateTime.Now to the sql proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range.
Assume the number of records in the table and number of users are large. So any performance gains matter. Hence the importance of the question.
Example proc and execution (if that helps to understand):
CREATE PROCEDURE GetFooData
@StartDate datetime,
@EndDate datetime
AS
SELECT *
FROM Foo
WHERE LogDate >= @StartDate
AND LogDate < @EndDate
Here's a sample execution using DateTime.Now:
EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now
Here's a sample execution using DateTime.Today.AddDays(1):
EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)
The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.
|
How does DateTime.Now affect query plan caching in SQL Server?
|
It's easier to snoop data over HTTP than HTTPS, so you should consider transmitting over HTTP only things that do not contain sensitive info.
Another way of thinking about it: will someone benefit from snooping the image of my corporation's logo? Probably not.
However, let's say you have (for whatever reason) an image containing a customer's bank account details. Should you transmit it over HTTP? Probably not.
EDIT:
Plus, when you mix HTTP and HTTPS requests, some browsers will show your customers nasty popup messages informing them that some content is unencrypted.
|
We are creating typical web applications secured by HTTPS. In order to be able to cache static resources, I would like to expose images, JavaScript files, etc. over HTTP; otherwise they don't get cached. Is this advisable from a security point of view? What are the risks involved?
EDIT: I would like to have static content cached by proxies and by browsers. Actually, the most important issue here is having this content cached by reverse proxy, so I don't have to distribute static content manually to http server (reverse proxy).
|
What are the dangers in exposing static resources of your secure web application unsecured?
|
Does the page by any chance require authentication? The runtime will force Cache-Control: private on pages that require authentication to prevent the accidental caching of private content on public proxies.
Are you using Cassini? If so, it always forces Cache-Control: private. If so, you might try switching to IIS instead.
|
I have an HttpHandler (have also done this as an ASPX page) that retrieves an image stored in the db and writes it out to the response. I have added the following lines to the code to try and get the images to cache in the browser, but whenever I look at the response in Firebug, it always has a cache-control header value of "private".
Response.Cache.SetCacheability(HttpCacheability.Public)
I've tried all sorts of things, like using the Response.ClearHeaders & Response.AddHeader() to manually add the "Cache-Control" header value, but to no avail. Any ideas?
Edit:
More info: This is running in an HTTP Handler (.ashx), and I have tested it both on my local IIS 5.1, and on the hosting site which I think is IIS 6.
|
ASP.NET Cache.SetCacheability(HttpCacheability.Public) not setting header
|
You need to add something like:
$expiry = 3600*24*7; // A week
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $expiry) . ' GMT');
header('Cache-control: private, max-age=' . $expiry);
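The same pair of headers built with Python's standard library, for comparison (a sketch; formatdate produces the required RFC 1123 GMT date):

```python
import time
from email.utils import formatdate

def cache_headers(expiry_seconds: int = 3600 * 24 * 7) -> dict:
    # Expires: absolute GMT timestamp a week from now;
    # Cache-Control: relative max-age for HTTP/1.1 caches.
    return {
        "Expires": formatdate(time.time() + expiry_seconds, usegmt=True),
        "Cache-Control": f"private, max-age={expiry_seconds}",
    }
```

Sending both covers old HTTP/1.0 caches (Expires) and modern ones (Cache-Control, which takes precedence when both are present).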
|
I store all of my images behind the webroot (before /var/www/), which means that the web server is unable to send cache headers back for my pictures. What do I need to add to this to make the user's web cache work? Currently, this is getting hit every time by the same browser.
My <img> path on my pages look something like this:
<img src="pic.php?u=1134&i=13513&s=0">
Edit: Could it be that it is because "pic.php?u=1134&i=13513&s=0" is not a valid file name or something?
// pic.php
<?php
// open the file in a binary mode
$user = $_GET['u'];
$id = $_GET['i'];
$s = $_GET['s'];
if (!isset($user) || !isset($id) || !isset($s))
{
// display a lock!
exit(0);
}
require_once("bootstrap_minimal.php"); //setup db connection, etc
// does this image_id belong to the user?
$stmt = $db->query('SELECT image_id, user_id, file_name, private FROM images WHERE image_id = ?', $id);
$obj = $stmt->fetchObject();
if (is_object($obj))
{
// is the picture is the users?
if ($obj->user_id != $_SESSION['user_id'])
{
// is this a private picture?
if ($obj->private == 1)
{
// check permissions...
// display a lock in needed!
}
}
}
else
{
// display a error pic?!
exit(0);
}
if ($s == 0)
{
$picture = $common->getImagePathThumb($obj->file_name);
}
else
{
$picture = $common->getImagePath($obj->file_name);
}
// send the right headers
header("Content-Type: image/png");
header("Content-Length: " . filesize($picture));
$fp = fopen($picture, 'rb');
// dump the picture and stop the script
fpassthru($fp);
exit;
?>
|
Why do images served from my web server not cache on the client?
|
You can run rm -rf ~/flutter/bin/cache in the terminal to clear the Flutter bin cache.
answered Aug 3, 2022 at 15:40 by Omatt
does it affect flutter performance? – amit.flutter, Aug 4, 2022 at 9:27
Only in the first run. Not in the performance of the Flutter app, but in its build time. – Omatt, Aug 4, 2022 at 15:46
|
|
Is there any way to clean up the flutter/bin cache folder, or remove older release APKs, without opening each individual app and running the flutter clean command?
I just want to free up space from the flutter folder.
|
How to clean the Flutter/bin folder cache
|
Apparently there is no solution at the moment. See here: https://github.com/Baseflow/flutter_cached_network_image/issues/77
I use the version of the image in the address to get around this. Something like: https://example.com/image.jpg?version=14
|
I'm using the package flutter_cached_network_image to load images from firebase. When a user updates his profile picture (filename is still the same) the image that is loaded is still the same as the one before the update because it is loaded from cache (because it still has the same url as before).
Is it possible to clear the picture from the cache so the new image is loaded?
|
Flutter cached image is it possible to reload image instead of getting the cached image?
|
Access the cache like a regular dictionary, without using the decorator:
item = cache.get(key, None)
if item is not None:
...
else:
...
# get item the slow way
cache[key] = item
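A runnable sketch of that pattern. TTLCache behaves like a mapping, so a plain dict stands in below to keep the example self-contained; swap in TTLCache(maxsize=100, ttl=3600) in real code (slow_fetch is a hypothetical stand-in for the request):

```python
cache = {}  # stand-in for TTLCache(maxsize=100, ttl=3600)

def slow_fetch(key):
    # hypothetical slow path, e.g. the HTTP request in the question
    return f"data-for-{key}"

def load_data(key):
    item = cache.get(key)
    if item is None:
        # key not in cache (or expired): take the slow path, then populate
        item = slow_fetch(key)
        cache[key] = item
    return item
```

On a miss you could first consult the second cache mentioned in the question before falling back to the request, writing the result into this cache either way.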
|
I have the following Python function that uses caching:
from cachetools import cached, TTLCache
cache = TTLCache(maxsize=100, ttl=3600)
@cached(cache)
def load_data():
# run slow data to get all user data
load_response = requests.request(
'GET',
url=my_url
)
return load_response
Is there a way to check if the key exists in the cache first, so that I can implement an else branch?
I am trying to implement a second cache to fetch data from when the key does not exist in this one.
|
Python check if cache key exists
|
You can register the CacheModule with a default ttl value for your module.
Then, in the controller, use the CacheInterceptor class and the CacheTTL decorator from the '@nestjs/common' package, like below:
@Get()
@CacheTTL(100)
@UseInterceptors(CacheInterceptor)
findAll(): string[] {
return service.longRunningOperation();
}
The value passed to the CacheTTL decorator overrides the cache's default expiration time for that endpoint.
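Framework details aside, per-endpoint TTL is just each handler carrying its own expiry. A minimal, framework-agnostic sketch of the idea, in Python for illustration (all names hypothetical; NestJS's CacheTTL works analogously by attaching metadata that the interceptor reads):

```python
import time
import functools

def cached(ttl_seconds):
    """Decorator factory: each decorated handler gets its own TTL."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # still-fresh cached value
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

@cached(ttl_seconds=600)  # long-running endpoint cached for 10 minutes
def find_all():
    return ["slow", "result"]
```

Endpoints that change rarely get a long TTL at the decoration site, while the module-level default still applies everywhere else.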
|
I'm currently using the NestJS caching mechanism as described in the docs: https://docs.nestjs.com/techniques/caching
Using this I can customise the caching of an entire module with the following:
CacheModule.register({
ttl: 5, // seconds
max: 10, // maximum number of items in cache
});
However, there are certain endpoints that I want to cache for a longer period of time than the rest. (e.g. Long running operations that don't change as often as the others)
Something similar was described here: https://github.com/nestjs/nest/issues/695 but it looks like it was closed without truly solving the whole problem.
I'm imagining something like:
@Cache({ ttl: 600 })
@Get()
findAll(): string[] {
return service.longRunningOperation();
}
Any thoughts?
|
Can you configure the NestJS cache TTL per-endpoint?
|