Response | Instruction | Prompt
---|---|---
When apc.use_request_time is set to true, which is the default, this is what happens -- the SAPI request start time is used for TTL calculations, not the time each function is called.
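A toy Python model of this behaviour (illustrative only, not APCu itself): when expiry is measured against a fixed request start time rather than the current time, an entry can never look expired within a single request, which is exactly why the CLI script below keeps getting hits despite the 1-second TTL.

```python
import time

class RequestTimeTTLCache:
    """Toy model of APCu with apc.use_request_time=1 (illustrative only)."""
    def __init__(self, request_start):
        self.request_start = request_start  # fixed SAPI request start time
        self.store = {}

    def add(self, key, value, ttl):
        # expiry is computed from the request start, not from "now"
        self.store.setdefault(key, (value, self.request_start + ttl))

    def get(self, key):
        if key not in self.store:
            return None
        value, expires_at = self.store[key]
        # age is also measured against the request start time, so within
        # one request the entry can never look expired
        if self.request_start >= expires_at:
            del self.store[key]
            return None
        return value

start = time.time()
cache = RequestTimeTTLCache(start)
cache.add("KEY", "Hello Big Daddy", ttl=1)
time.sleep(2)
# still a hit: the "clock" used for expiry never advanced during the request
assert cache.get("KEY") == "Hello Big Daddy"
```

Setting apc.use_request_time=0 corresponds to measuring the age against the real clock instead, which is what the question expected.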
answered Aug 1, 2017 at 22:41 by David Sickmiller
|
|
By specifying a TTL, the item should be aged out of the cache, but that is not happening. This is a very simple case: the TTL is set to 1 second. Have I made a mistake?
My version:
PHP 7.0.12-1+deb.sury.org~xenial+1 (cli) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.12-1+deb.sury.org~xenial+1, Copyright (c) 1999-2016, by Zend Technologies
My script:
cat apcu.php
<?php
$key = "KEY";

function xxx($key) {
    if (apcu_exists($key)) {
        print("In Store\n");
        $value = apcu_fetch($key);
        var_dump($value);
    } else {
        $value = "Hello Big Daddy";
        apcu_add($key, $value, 1);
        print("Not in store, adding\n");
    }
}

xxx($key);
sleep(2);
xxx($key);
sleep(3);
xxx($key);
Output:
php apcu.php
Not in store, adding
In Store
string(15) "Hello Big Daddy"
In Store
string(15) "Hello Big Daddy"
I do not think the item should still be in the cache on the second call. But even if it were, it certainly should not be in the cache on the third call.
|
APCu TTL not working php 7.0
|
I suspect that you might have a middleware problem.
Your code above does produce the correct output.
$app->get('/test', function ($req, $res, $args) {
    header_remove("Cache-Control"); // Edit <--
    $newResponse = $res->withHeader('Cache-Control', 'public, max-stale=13910400')->withJson(["message" => "Test"]);
    return $newResponse;
});
CURL Output
C:\Users\Glenn>curl -X GET -v http://localhost/vms2/public/test
HTTP/1.1 200 OK
Date: Tue, 13 Sep 2016 19:04:42 GMT
* Server Apache/2.4.10 (Win32) OpenSSL/1.0.1i PHP/5.6.3 is not blacklisted
Server: Apache/2.4.10 (Win32) OpenSSL/1.0.1i PHP/5.6.3
X-Powered-By: PHP/5.6.3
Set-Cookie: VMS2=2qf14qr1c0eplgfvibi8t2hcd2; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Pragma: no-cache
Cache-Control: public, max-stale=13910400
Content-Length: 18
Content-Type: application/json;charset=utf-8
{"message":"Test"}
Connection #0 to host localhost left intact
|
I have to return a specific Cache-Control header (Cache-Control: public, max-stale=13910400), but when I run this, the Cache-Control header comes back duplicated; I only need my custom value.
$newResponse = $response->withHeader('Cache-Control', 'public, max-stale=13910400')->withJson($appInfo);
return $newResponse;
I tried this but it doesn't work (just for testing):
$newResponse = $response->withoutHeader('Cache-Control')->withHeader('Cache-Control', 'public, max-stale=13910400')->withJson($appInfo);
return $newResponse;
How can I set the header correctly?
Thank you
|
Slim v3 duplicates cache-control header
|
The trigonometric functions are usually implemented as Taylor expansions. They are fast. You can write your own and compare.
public class Main {
    private static double factorial(double n) {
        if (n <= 1) // base case
            return 1;
        else
            return n * factorial(n - 1);
    }

    private static double sin(int n) {
        int PRECISION = 10;
        double rad = n * 1. / 180. * Math.PI;
        double sum = rad;
        for (int i = 1; i <= PRECISION; i++) {
            if (i % 2 == 0)
                sum += Math.pow(rad, 2 * i + 1) / factorial(2 * i + 1);
            else
                sum -= Math.pow(rad, 2 * i + 1) / factorial(2 * i + 1);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sin(180));
        System.out.println(Math.sin(Math.PI));
        System.out.println(sin(90));
        System.out.println(Math.sin(Math.PI / 2));
        System.out.println(sin(200));
        System.out.println(Math.sin(200 * 2 * Math.PI / 360));
    }
}
Surely you can cache the values but these methods are likely to be already optimized.
answered Sep 11, 2016 at 18:45 by Niklas Rosencrantz
You can keep a running variable for the factorial element, which avoids an O(n^2) blow-up (factorial itself is O(n) as written, so recomputing it each iteration makes the loop O(n^2)). With that done, sin and cos can be implemented in O(n), where n is the precision.
– abligh
Sep 11, 2016 at 19:25
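The commenter's suggestion can be sketched like this (in Python rather than Java, purely for illustration): instead of calling pow and factorial on every iteration, keep a running term that is updated in O(1), making the whole series O(precision).

```python
import math

def sin_taylor(deg, precision=10):
    """Taylor series for sin, updating the term incrementally (O(precision))."""
    rad = deg / 180.0 * math.pi
    term = rad      # current term: rad^(2i+1) / (2i+1)!
    total = rad
    for i in range(1, precision + 1):
        # go from rad^(2i-1)/(2i-1)! to rad^(2i+1)/(2i+1)! in O(1)
        term *= rad * rad / ((2 * i) * (2 * i + 1))
        total += -term if i % 2 == 1 else term  # alternating signs
    return total

assert abs(sin_taylor(90) - 1.0) < 1e-8
assert abs(sin_taylor(200) - math.sin(math.radians(200))) < 1e-8
```

The multiplicative update replaces both the Math.pow call and the recursive factorial of the original Java version without changing the result.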
|
|
How expensive are functions like Math.sin(), Math.cos(), etc.?
Does the compiler optimise the code if you call the method with the same arguments multiple times in a row? If not, at how many calls of these methods should you start caching the result in a variable?
|
Should trigonometric functions be cached?
|
Try this (see "Get full cached path of product relative images Magento"):
Mage::helper('catalog/image')->init($product->getProduct(), 'image', $_image->getFile())->resize(2000);
|
I have the following situation: the product pictures are nearly 10 megabytes each, but for the routine I need to perform they can be at most 4 megabytes.
I can get the images as follows:
$imgsProdMagento = $product->getMediaGalleryImages();
foreach ($imgsProdMagento as $imgProdMagento) {
    var_dump($imgProdMagento);
}
This returns the full 10-megabyte picture, but looking at the frontend I can see that the cached version of the image is much smaller. How do I obtain that one?
I've tried the following, but it returns only the image marked as "image" and does not return the URL:
Mage::helper('catalog/image')->init($product, 'image');
I need to get the URLs of all of the product's images from the cache.
|
Get images from Cache MAGENTO
|
Controllers aren't cached in Symfony. There is no time-consuming logic involved in reading controllers (they're just PHP code anyway).
When deploying to a production server, always make sure to clear the cache using the cache:clear command. Also make sure to remove the app_dev.php file and any other unused PHP files in the web/ directory. You shouldn't run the dev environment on the production server.
|
I use Symfony 2.8.
I modified a controller file and loaded the changes onto the production server; I can see the changes without needing to run php app/console cache:clear and without using app_dev.php.
But if I modify a Twig file, nothing happens and I have to clear the cache to get the changes from app_dev.php into the production environment.
Why?
|
Symfony does not cache changes on controllers
|
You can pass to default_doctrine_provider either a Redis connection DSN (for example "redis://127.0.0.1:6379") or the ID of a service which implements Symfony\Component\Cache\Adapter\AdapterInterface.
You can have a look at the already implemented adapters here.
answered Aug 16, 2016 at 15:59 by nikita2206
|
|
Config.yml:
cache:
    app: cache.adapter.doctrine
    system: cache.adapter.doctrine
    default_doctrine_provider: ~
    default_psr6_provider: ~
    default_redis_provider: "redis://localhost:6379"
Symfony 3.1 supports the Doctrine cache, but there is not much documentation on it.
Cache Component: http://symfony.com/doc/current/components/cache.html
Supported drives: http://symfony.com/doc/current/components/cache/cache_pools.html
Symfony Integration: http://symfony.com/blog/new-in-symfony-3-1-cache-component
default_doctrine_provider: ? What do I enter as the provider?
|
Symfony 3.1 PSR-6 Caching Settings
|
For those using discord.js v8 or lower.
If you want to pull all the message objects from a channel, I recommend ignoring the cache and instead using getChannelLogs(channel, limit, options, callback), which lets you fetch up to 100 messages at once; those messages do not have to be cached within discord.js. You can quite easily create a recursive function that, in the callback, fetches more messages until you have as many as you want.
That said, server.channel.messages contains all of the messages that discord.js has cached. If it appears empty, chances are no one has sent a message since the bot was activated.
source: http://discordjs.readthedocs.io/en/latest/docs_client.html#getchannellogs-channel-limit-options-callback
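The "fetch more in the callback" pattern can be sketched generically (in Python, since discord.js itself is JavaScript; fetch_page and fake_fetch below are hypothetical names, not real discord.js calls): keep requesting pages of up to 100 messages, using the last message's id as the cursor, until you have enough.

```python
def fetch_all_messages(fetch_page, channel, want, page_size=100):
    """Accumulate messages by repeatedly fetching pages of up to page_size."""
    messages = []
    before = None
    while len(messages) < want:
        page = fetch_page(channel, min(page_size, want - len(messages)), before)
        if not page:
            break  # channel exhausted
        messages.extend(page)
        before = page[-1]["id"]  # paginate from the oldest message we saw
    return messages

# Fake backend standing in for getChannelLogs: 250 messages, newest first.
def fake_fetch(channel, limit, before):
    newest = 250 if before is None else before - 1
    ids = range(newest, max(newest - limit, 0), -1)
    return [{"id": i, "content": f"msg {i}"} for i in ids]

msgs = fetch_all_messages(fake_fetch, "general", want=230)
assert len(msgs) == 230
assert msgs[0]["id"] == 250 and msgs[-1]["id"] == 21
```

The real implementation would pass the oldest fetched message as the `before` option of the next getChannelLogs call.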
|
Discord.js is an API wrapper for Discord that lets developers make plugins for the program Discord. Here's the link to the API code (it's in JS): https://github.com/hydrabolt/discord.js/
Discord is set up like a server where you connect and chat on channels; my problem is how to pull the message data from the channels.
All the channels are set up in a JSON cache, and within each channel object there is another cache with the message objects (according to the documentation). But when I get to the message cache, all I see is messages: Cache { limit: 1000 } }. How do I pull all the message objects from the channel?
|
how to pull message data from discord.js?
|
Using .Single, .First, .Where etc. will not cache the results unless you are using second-level caching.
If you need to cache the results, you need to implement second-level caching in EF. The EntityFramework.Cache package enables caching of query results for EF 6.1 applications.
We need to tell EF to use caching by configuring the caching provider and the transaction handler:
public class Configuration : DbConfiguration
{
    public Configuration()
    {
        var transactionHandler = new CacheTransactionHandler(new InMemoryCache());
        AddInterceptor(transactionHandler);
        var cachingPolicy = new CachingPolicy();
        Loaded += (sender, args) => args.ReplaceService<DbProviderServices>(
            (s, _) => new CachingProviderServices(s, transactionHandler,
                cachingPolicy));
    }
}
|
I have some data tables that almost never change so I don't want to call database every time that I run a query on db-context. In NHibernate, there is an option to do so on the mapper:
Cache.ReadOnly();
And it will read the whole table to your cache on the start up and every time you want to load the object like with lazy loading, it will fetch data from the cached memory instead.
How can I do the same with Entity-Framework?
|
Cache table with entity framework
|
The explanation in the other answer was not clear to me, so I tested it; here is the summary:
await output.GetChildContentAsync(); ⇒ gets the original content inside the tag as hard-coded in the Razor file. Note that it is cached on the first call and never changes on subsequent calls, so it does not reflect changes made by other TagHelpers at runtime!
output.Content.GetContent(); ⇒ should be used only to get content modified by some TagHelper; otherwise it returns empty!
Usage samples:
Getting the latest content (whether initial razor or content modified by other tag helpers):
var curContent = output.IsContentModified ? output.Content : await output.GetChildContentAsync();
string strContent = curContent.GetContent();
answered Mar 9, 2019 at 10:00 by S.Serpooshan (edited Aug 9, 2019 by Uwe Keim)
@jerrythomas It works for .net core 2.2, I have tested it!
– S.Serpooshan
Jul 21, 2019 at 8:27
|
|
According to this article, if we use several tag helpers (targeted at the same tag) and in each of them use await output.GetChildContentAsync() to receive the HTML content, we run into a problem with cached output:
The problem is that the tag helper output is cached, and when the WWW tag helper is run, it overwrites the cached output from the HTTP tag helper.
The problem is fixed by using statement like:
var childContent = output.Content.IsModified ? output.Content.GetContent() :
(await output.GetChildContentAsync()).GetContent();
Description of this behaviour:
The code above checks to see if the content has been modified, and if it has, it gets the content from the output buffer.
The questions are:
1) What is the difference between TagHelperOutput.GetChildContentAsync() and TagHelperOutput.Content.GetContent() under the hood?
2) Which method writes its result to the buffer?
3) What does "cached output" mean: does ASP.NET Core MVC cache the initial Razor markup, or the HTML markup that results from calling the TagHelper?
Thanks in advance!
|
TagHelper cached output by calling GetChildContentAsync() and Content.GetContent()
|
I'm also new at iOS dev, and I'm using Kingfisher for storing images in a cache.
To get to the point: when you prefetch the images, there is an optionsInfo parameter which you leave nil. If I guess right, you prefetch images from URLs, so in this optionsInfo you can set a cache identifier for each image; it would be best to put the URL of the image there.
The Kingfisher documentation is pretty clear about optionsInfo, so you could change your function to this:
func downloadImages() {
    if self.albumImagePathArray.isEmpty == false {
        let myCache = ImageCache(name: "the url of the images")
        //dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
        let urls = self.albumImagePathArray.map { NSURL(string: $0)! }
        let prefetcher = ImagePrefetcher(urls: urls, optionsInfo: [.TargetCache(myCache)], progressBlock: nil, completionHandler: {
            (skippedResources, failedResources, completedResources) -> () in
            print("These resources are prefetched: \(completedResources)")
        })
        prefetcher.start()
    }
}
With this practice you can store them in the cache. To continue, you have to show us what you have tried after the prefetch.
answered Aug 12, 2016 at 9:26 by Konstantinos Natsios
|
|
I'm using Kingfisher framework to prefetch images. The link to the Kingfisher framework is: https://github.com/onevcat/Kingfisher
Here's my code that I've written:
func downloadImages() {
    if self.albumImagePathArray.isEmpty == false {
        //dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
        let urls = self.albumImagePathArray.map { NSURL(string: $0)! }
        let prefetcher = ImagePrefetcher(urls: urls, optionsInfo: nil, progressBlock: nil, completionHandler: {
            (skippedResources, failedResources, completedResources) -> () in
            print("These resources are prefetched: \(completedResources)")
        })
        prefetcher.start()
    }
}
I'm not sure how to use this framework as I'm quite new to app development. Once I have prefetched the images, how do I get them from the cache? I want to pick these images from the cache and put them into an array of UIImage. Previously I was not picking from the cache: I was loading the URLs every time, putting them in an array of strings called albumImagePathArray, and via contentsOfURL converting each into a UIImage so it could be put into albumImages, as shown in the code below:
var albumImages = [UIImage]()
for i in 0 ..< self.albumImagePathArray.count {
    let image = UIImage(data: NSData(contentsOfURL: NSURL(string: self.albumImagePathArray[i])!)!)
    self.albumImages.append(image!)
}
Now I want to change all that: I want to read directly from the cache after prefetching, so I don't have to download the images every time, which takes a lot of time. Can anyone please help me with this? Thanks a lot!
|
Picking images from cache after prefetching
|
With MongoDB >3.0 you can use the inMemory storage engine, which means you can have a mongo instance where the seeded collection stays in memory (changes aren't persisted).
On the other hand, if your collection is static, you could implement a cache layer such as Redis, or even a TTL-indexed collection storing the query and its response.
Seeding could be done by backing up the current collection and restoring it into the in-memory instance.
When a collection is queried frequently, it stays resident in memory as long as mongo doesn't need to load other collections (on a busy system).
Any comments welcome!
answered Jun 14, 2016 at 13:13 (edited 13:23) by profesor79
So, can I use inMemory storage just for this collection? How do I "seed" the collection?
– XCS
Jun 14, 2016 at 13:14
I would expect that after running the queries once the relevant disk blocks are present in the OS's disk buffers (provided that there is enough memory available).
– robertklep
Jun 14, 2016 at 13:15
|
|
I am using mongoDB to store a collection of polygons and use $geoIntersects queries to find in which polygon a specific point is.
My mongoose Schema looks like this:
var LocationShema = mongoose.Schema({
    name: String,
    geo: {
        type: {
            type: String
        },
        coordinates: []
    }
});
LocationShema.index({geo: '2dsphere'});
module.exports = mongoose.model('Location', LocationShema);
So, each element is a polygon. I added the 2dsphere index hoping that the queries would be faster and the entire collection would be kept in memory. Unfortunately it takes about 600 ms for ~20 queries, which is way too much for my use case.
My queries look like this:
Location.find({
    geo: {
        $geoIntersects: {
            $geometry: {
                type: 'Point',
                coordinates: [pos.lng, pos.lat]
            }
        }
    }
}, ...)
Is there any way I can make this run faster? Can I force MongoDB to cache the entire collection in memory (as the collection never changes)? Is there any way I can check whether the collection is actually held in an in-memory cache?
Also, are there any alternatives I can use (eg: a library or something) that allows for fast geo-spatial queries?
|
MongoDB cache collection in memory?
|
Assuming that you use Elasticsearch 2.x, there is a way to get HAVING semantics in Elasticsearch. (I'm not aware of a possibility prior to 2.0.)
You can use the new Bucket Selector pipeline aggregation, which only selects the buckets that meet a certain criterion:
POST test/test/_search
{
  "size": 0,
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "must": [
            {"term": {"fc": "33"}},
            {"term": {"year": 2016}},
            {"terms": {"type": ["a", "b", "c"]}}
          ]
        }
      }
    }
  },
  "aggs": {
    "group_by_csgg": {
      "terms": {
        "field": "csgg",
        "size": 100
      },
      "aggs": {
        "sum_amount": {
          "sum": {
            "field": "amount"
          }
        },
        "no_amount_filter": {
          "bucket_selector": {
            "buckets_path": {"sumAmount": "sum_amount"},
            "script": "sumAmount == 0"
          }
        }
      }
    }
  }
}
However, there are two caveats. Depending on your configuration, it might be necessary to enable scripting like this:
script.aggs: true
script.groovy: true
Moreover, as it works on the parent buckets, it is not guaranteed that you get all buckets with amount = 0: if the terms aggregation happens to return only terms whose sum amount != 0, you will have no results.
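If scripting cannot be enabled, the same HAVING filter can also be applied client-side over the returned buckets. A hedged Python sketch, assuming the standard terms-aggregation response shape (the response data below is made up for illustration):

```python
def having_sum_zero(agg_response):
    """Keep only bucket keys whose sum_amount equals 0 (client-side HAVING)."""
    buckets = agg_response["aggregations"]["group_by_csgg"]["buckets"]
    return [b["key"] for b in buckets if b["sum_amount"]["value"] == 0]

# Example response fragment in the shape Elasticsearch returns for the
# group_by_csgg terms aggregation with a sum_amount sub-aggregation.
resp = {"aggregations": {"group_by_csgg": {"buckets": [
    {"key": "A", "doc_count": 3, "sum_amount": {"value": 0.0}},
    {"key": "B", "doc_count": 2, "sum_amount": {"value": 41.5}},
    {"key": "C", "doc_count": 1, "sum_amount": {"value": 0.0}},
]}}}
assert having_sum_zero(resp) == ["A", "C"]
```

The same caveat as above applies: only groups that made it into the top-N terms buckets can be filtered this way.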
|
I want to convert the following SQL query to an Elasticsearch one. Can anyone help with this?
select csgg, sum(amount) from table1
where type in ('a','b','c') and year=2016 and fc="33" group by csgg having sum(amount)=0
I tried the following:
{
  "size": 500,
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "must": [
            {"term": {"fc": "33"}},
            {"term": {"year": 2016}}
          ],
          "should": [
            {"terms": {"type": ["a", "b", "c"]}}
          ]
        }
      }
    }
  },
  "aggs": {
    "group_by_csgg": {
      "terms": {
        "field": "csgg"
      },
      "aggs": {
        "sum_amount": {
          "sum": {
            "field": "amount"
          }
        }
      }
    }
  }
}
But I am not sure I am doing it right, as the results don't validate. It seems the having condition needs to be added inside the aggregation.
|
Converting SQL query to ElasticSearch Query
|
std::mutex conforms to the Mutex requirements (http://en.cppreference.com/w/cpp/concept/Mutex):
Prior m.unlock() operations on the same mutex synchronize-with this lock operation (equivalent to release-acquire std::memory_order)
release-acquire is explained here (http://en.cppreference.com/w/cpp/atomic/memory_order)
Release-Acquire ordering
If an atomic store in thread A is tagged memory_order_release and an atomic load in thread B from the same variable is tagged memory_order_acquire, all memory writes (non-atomic and relaxed atomic) that happened-before the atomic store from the point of view of thread A, become visible side-effects in thread B, that is, once the atomic load is completed, thread B is guaranteed to see everything thread A wrote to memory.
The synchronization is established only between the threads releasing and acquiring the same atomic variable. Other threads can see different order of memory accesses than either or both of the synchronized threads.
The code example in that section is very similar to yours. So it is guaranteed that all writes in thread 1 happen before the mutex unlock in push().
Of course, that assumes ct->foo = 3 has no tricky special meaning where the actual assignment happens in another thread :)
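The same guarantee can be sketched in Python, where threading.Lock gives the analogous acquire/release semantics (a sketch of the idea, not a C++ memory-model proof): the consumer that pops the pointer is guaranteed to see the field the producer wrote before pushing.

```python
import threading
from collections import deque

class SyncQueue:
    """Lock-protected queue; the lock's release/acquire pairs order the writes."""
    def __init__(self):
        self._mtx = threading.Lock()
        self._q = deque()

    def push(self, item):
        with self._mtx:   # releasing at block exit is the "release"
            self._q.append(item)

    def pop(self):
        with self._mtx:   # taking the lock here is the "acquire"
            return self._q.popleft()

class ComplexType:
    def __init__(self):
        self.foo = 0

q = SyncQueue()
result = []

def producer():
    ct = ComplexType()
    ct.foo = 3                      # happens-before the unlock inside push()
    q.push(ct)

def consumer():
    result.append(q.pop().foo)      # the acquire makes the write to foo visible

t1 = threading.Thread(target=producer)
t1.start(); t1.join()               # join only to keep the sketch deterministic
t2 = threading.Thread(target=consumer)
t2.start(); t2.join()
assert result == [3]
```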
Regarding cache invalidation, from cppreference:
On strongly-ordered systems (x86, SPARC TSO, IBM mainframe), release-acquire ordering is automatic for the majority of operations. No additional CPU instructions are issued for this synchronization mode, only certain compiler optimizations are affected (e.g. the compiler is prohibited from moving non-atomic stores past the atomic store-release or perform non-atomic loads earlier than the atomic load-acquire). On weakly-ordered systems (ARM, Itanium, PowerPC), special CPU load or memory fence instructions have to be used.
So it really depends on the architecture.
|
Let's assume we have a SyncQueue class with the following implementation:
class SyncQueue {
    std::mutex mtx;
    std::queue<std::shared_ptr<ComplexType> > m_q;
public:
    void push(const std::shared_ptr<ComplexType> & ptr) {
        std::lock_guard<std::mutex> lck(mtx);
        m_q.push(ptr);
    }
    std::shared_ptr<ComplexType> pop() {
        std::lock_guard<std::mutex> lck(mtx);
        std::shared_ptr<ComplexType> rv(m_q.front());
        m_q.pop();
        return rv;
    }
};
then we have this code that uses it:
SyncQueue q;
// Thread 1, Producer:
std::shared_ptr<ComplexType> ct(new ComplexType);
ct->foo = 3;
q.push(ct);
// Thread 2, Consumer:
std::shared_ptr<ComplexType> ct(q.pop());
std::cout << ct->foo << std::endl;
Am I guaranteed to get 3 when ct->foo is printed? mtx provides happens-before semantics for the pointer itself, but I'm not sure that says anything for the memory of ComplexType. If it is guaranteed, does it mean that every mutex lock (std::lock_guard<std::mutex> lck(mtx);) forces full cache-invalidation for any modified memory locations up-till the place where memory hierarchies of independent cores merge?
|
Ordering of read/write operations in a C++ queue
|
The solution was rather easy: just explicitly set the cache control to no-store.
function nocache(req, res, next) {
    res.header('Cache-Control', 'private, no-cache, no-store, must-revalidate');
    res.header('Expires', '-1');
    res.header('Pragma', 'no-cache');
    next();
}
Above is a quick middleware function which will change a given request to a no-cache status.
|
I am having some issues with my eCommerce site when using Microsoft edge. The main issue I am having is during an angular $http.get request.
Generally the application flow is whenever the user visits the /cart page the browser makes a request to /api/cart which returns JSON with the contents of the cart along with pricing information.
The issue I am having is that when a product is added (on other pages) the session correctly updates with the information, yet Edge loads /api/cart with old information. However, when you directly request /api/cart in another tab of Edge, the JSON loads correctly.
To clarify, this incorrect behavior only occurs in Edge; Chrome and Firefox work as expected.
I'm using the MEAN stack for my development.
TLDR: How do I prevent caching on an api JSON request in Edge
|
Microsoft Edge Cache Api Requests
|
Although I have not found any direct text about it, it seems locking (or some other synchronization) is applied on the server end, which makes sure data is not corrupted by multiple threads/processes.
As for why it is important to make client libraries thread-safe: because they read/write on a TCP connection (via a network stream, I guess), and if the same client is used by multiple threads it should still work correctly (when the client is thread-safe); otherwise it will be documented that the client should not be shared among multiple threads.
I am not marking this as the correct answer. If people upvote this and agree, then I will do that.
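To see how plain get/set from multiple threads can corrupt cached data without synchronization, here is a Python sketch of the classic lost-update race on a read-modify-write (the names are illustrative, not any real client library):

```python
import threading

cache = {"hits": 0}
lock = threading.Lock()

def bump_unsafe(n):
    """Read-modify-write with no lock: two threads can read the same value
    and both write value+1, losing one of the updates."""
    for _ in range(n):
        v = cache["hits"]       # read
        cache["hits"] = v + 1   # write; another thread may interleave here

def bump_safe(n):
    """Holding the lock makes the read-modify-write atomic."""
    for _ in range(n):
        with lock:
            cache["hits"] += 1

cache["hits"] = 0
threads = [threading.Thread(target=bump_safe, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert cache["hits"] == 40000   # no lost updates with the lock held
```

With bump_unsafe the final count may come out below 40000, which is exactly the kind of corruption the thread-safety of a cache client guards against; across processes (as with Redis) the equivalent protection has to live on the server or use atomic server-side commands.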
|
I know what thread safety is, and in some scenarios it makes perfect sense. For instance, I understand that a logger needs to be thread-safe, since otherwise it might try to open the same file and access it from multiple threads.
But I cannot see why thread safety is important while accessing a cache. How can get/set from multiple threads corrupt a cache?
And most important: if thread safety is required while accessing a cache, how do we handle a cache accessed from multiple processes? It would be nice if someone could answer in the context of Redis.
Thanks in advance.
|
Do we need thread safety when Cache is accessed from multiple processes (Redis)
|
Should the page ever be cached by the browser or not?
As you suggest, a browser should ignore s-maxage. From the spec:
The "s-maxage" response directive indicates that, in shared caches,
the maximum age specified by this directive overrides the maximum age
specified by either the max-age directive or the Expires header
field.
Likewise, the browser ignores Cache-Control: private:
The "private" response directive indicates that the response message
is intended for a single user and MUST NOT be stored by a shared
cache.
A browser with a private cache should ignore both of those directives; they only apply to shared caches.
As such, the header is essentially ignored by the browser; it should be cached heuristically like a response with no Cache-Control header at all.
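That decision logic can be sketched in a few lines of Python (heavily simplified; a real cache implements far more of RFC 7234, including no-cache, max-age arithmetic, and heuristic freshness):

```python
def may_store(cache_control: str, shared: bool) -> bool:
    """Very simplified: may this cache store a response with this header?"""
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().partition("=")
        directives[name.lower()] = value
    if "no-store" in directives:
        return False            # nobody may store the response
    if shared and "private" in directives:
        return False            # private responses never enter a shared cache
    # a private (browser) cache ignores both "private" and "s-maxage"
    return True

assert may_store("private, s-maxage=0", shared=False) is True   # browser may cache
assert may_store("private, s-maxage=0", shared=True) is False   # proxy must not
assert may_store("no-store", shared=False) is False
```

The first two assertions are the question's exact header: the browser treats it like a response with no freshness information at all, while a shared cache must not store it.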
answered Oct 15, 2016 at 9:05 by Joe
You mean Likewise, the shared cache ignores Cache-Control: private: ?
– Kim
Dec 21, 2018 at 13:56
@Kim no, the browser has (usually) a private cache, and ignores any requirements that only apply to shared caches.
– Joe
Dec 23, 2018 at 12:23
I feel the wording is wrong then? A browser caches Cache-Control:private, shared caches do not (so they ignore) Cache-Control:private
– Kim
Dec 28, 2018 at 19:32
@kim 'ignores Cache-Control: private' as in 'ignores the directive when considering whether to cache the response'. You're perhaps interpreting 'ignores' as 'does not cache responses that include that directive'? That's not how I intended it.
– Joe
Dec 29, 2018 at 12:09
|
|
If a page answers with header
Cache-Control:private, s-maxage=0
Should the page ever be cached by the browser or not? What's the specification behavior in this case?
RFC 2616 defines s-maxage as:
If a response includes an s-maxage directive, then for a shared cache (but not for a private cache), the maximum age specified by this directive overrides the maximum age specified by either the max-age directive or the Expires header. The s-maxage directive also implies the semantics of the proxy-revalidate directive (see section 14.9.4), i.e., that the shared cache must not use the entry after it becomes stale to respond to a subsequent request without first revalidating it with the origin server. The s- maxage directive is always ignored by a private cache.
This confuses me a bit. I understand that if max-age and s-maxage is defined, s-maxage is used for a shared cache but what happens to a private (browser) cache? Is s-maxage still used by the private cache or not?
My tests indicate that Chrome 49 and Firefox 44 would not cache this page request while IE 11 effectively does browser caching of this page (tests on win7 64bit). This page request is done via AJAX in case that matters.
So you can see different browsers have different behaviors. Reading the spec it seems IE is in the wrong here. What's the root cause for this? Maybe different default values?
Edit: Further testing points me that my header works the same way as Cache-Control:private.
In this case, Chrome never uses browser cache for both a 'normal' page request and an AJAX GET request while IE 11 doesn't cache the normal page request but caches the AJAX GET request, for no apparent good reason.
|
How Cache-Control:private, s-maxage=0 should behave on AJAX requests?
|
I found the solution myself. It took a while until I came to the conclusion that, if the data is cached, no individual code should run at all; after all, that is the main purpose of the cache: don't run any code when the data is cached.
That led me to the conclusion that the code causing the problem must run before the cache, and so the "bad boy" was easy to find. Another attribute (in this case an AuthorizeAttribute) that appears before the OutputCache attribute in the code is still run when caching applies, but cannot access the Session:
[Route("{id}")]
[UserAuth(Roles = Directory.GroupUser)]
[JsonException]
[OutputCache(Duration = 600)]
public ActionResult Select()
{
    return DoSelect();
}
Putting the UserAuth attribute BELOW the OutputCache attribute solved the problem.
answered Mar 23, 2016 at 10:52 (edited Apr 27, 2016) by Ole Albers
|
|
I have a (working) MVC-application that uses Session properties on multiple parts:
return httpContext.Session[SPContextKey] as SharePointAcsContext;
(ignore that this is SharePoint; the problem isn't SP-specific)
This works fine until I try to enable Outputcaching:
[OutputCache(Duration = 600)]
public ActionResult Select()
{
    return DoSelect();
}
When the content is cached, httpContext.Session becomes NULL.
Is there a way to keep the Session data and also use caching?
|
Session gets lost when enabling OutputCache
|
In the applicable use cases, e.g. accessing a remote disk-based RDBMS or performing an expensive computation, the network latency is orders of magnitude lower than the alternative. Furthermore, while it is true that networks are generally unreliable, during normal operation you still get sub-millisecond latency.
That said, usually a local cache beats a remote cache in terms of latency but on the other hand it could prove problematic to scale.
Edit: answering the OP's comment.
You can essentially think of a disk-based DB as a memory cache over the data on disk, but the DB server's RAM is limited (like any other server's). An external cache is therefore used to offload some of that stress, reduce contention on the DB server's resources, and free it for other tasks.
As for latency, yes - I was referring to AWS' network. While I'm less familiar with Memcachier's offer, we (Redis Labs) make sure that our Memcached Cloud and Redis Cloud instances are co-located in the same data region as Heroku's dynos are to ensure minimal possible latency. In addition, we also have an Availability Zone Mapping utility that makes it possible to have the application and cache instances reside within the same zone for the same purpose.
|
As far as I understand, memcached is mainly used to cache key value objects in local memory to speed up access.
But on platform like heroku, to use memcached you have to choose add-on like Memcachier, which is cloud based. I don't understand why is that useful? The network latency is orders of magnitude higher than accessing local memory and completely unpredictable.
So what am I missing?
|
What's the point of remote/cloud memcached service?
|
The only way to refresh the cache is to restart the website: with Azure Web Apps, WEBSITE_LOCAL_CACHE_OPTION=Always requires a stop and start of the site.
answered Feb 29, 2016 at 20:30 by Peter
|
|
We have an Azure WebApp with WEBSITE_LOCAL_CACHE_OPTION = Always set in order to reduce static file latency.
We have made a file change there and would like to force the instances to reload the cache.
How do I force a file cache refresh?
|
Refresh WebApp Instance Cache
|
This snippet actually works fine: https://djangosnippets.org/snippets/2396/
As I understand it, the only problem with using global variables for caching is thread safety, and this no-pickle version is thread-safe.
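For reference, a minimal thread-safe version of the global-variable cache looks something like this (a sketch along the lines of that snippet, not the snippet itself): because it stores objects directly in process memory, there is no pickling step, which is what makes it orders of magnitude faster than LocMemCache or DatabaseCache for large values.

```python
import threading

class ProcessCache:
    """Per-process, thread-safe cache: no serialization, so set() is O(1)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

    def set(self, key, value):
        with self._lock:
            self._data[key] = value   # stores the object itself, no pickling

_stats_cache = ProcessCache()

def stats_service(compute):
    # compute stands in for the expensive database query from the question
    value = _stats_cache.get("key")
    if value is None:
        value = compute()
        _stats_cache.set("key", value)
    return value

assert stats_service(lambda: [1, 2, 3]) == [1, 2, 3]
assert stats_service(lambda: "never called") == [1, 2, 3]  # served from cache
```

The trade-off versus Django's cache backends is that the data is neither shared between worker processes nor persisted across restarts.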
|
I used to cache a database query in a global variable to speed up my application. Since this is strongly advised against (and it did generate problems), I want to use some kind of Django cache instead. I tried LocMemCache and DatabaseCache, but both take about 15 seconds to set my variable (twice as long as it takes to generate the data, which is 7MB in size).
Is that expected ? Am I doing something wrong ?
(Memcached is limited to 1MB, and I cannot split my data, which consists in arbitrarily big binary masks).
Edit: FileBasedCache takes 30s to set as well.
Settings.py:
CACHES = {
'default': {...},
'stats': {
'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
# or 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'stats',
},
}
Service.py:
from django.core.cache import caches
def stats_service():
stats_cache = caches['stats']
if stats_cache.get('key') is None:
stats_cache.set('key', data) # 15s with DatabaseCache, 30s with LocMemCache
return stats_cache.get('key')
Global variable (super fast) version:
_cache = {}
def stats_service():
if _cache.get('key') is None:
_cache['key'] = data
return _cache['key']
|
Very slow to write to Django cache
|
You could try appending a time token to the URL of your video in your HTML page, so the browser treats the updated file as a new resource.
$filepath = $destination_path.$video_name.".mov";
$videoUrl = $wwwpath . $filepath . "?" . filemtime($filepath);
|
About what I'm doing:
I'm doing a website where you can search videos.
-I show rows of content with a mini-clip embebed for each row that shows a video.
I use jQuery.AJAX for retrieve this videos so I don't need to reload the page.
Everything works fine. I show the videos and I can play them from that rows.
For administrators of the page I have a button that overwrites those videos. So this puts a new video into the location where the old one used to be, replacing it. I do this server-side with PHP using the following code:
unlink($destination_path.$video_name.".mov");
move_uploaded_file($_FILES['newVideo']['tmp_name'], $destination_path.$video_name.".mov");
Then I use a header("Location: index.php"); to go to the main page.
Problems:
Then when I play the new video in the website I see the old one instead of the new one.
I clear cache with F5: Not working
I clear cache with Ctrl+F5: Not working
I set this at the top of my page:
Not working
I close the browser and open again (or opening a new one): Working
Some extra info:
The preload attr from the video is set to none. So I retrieve my videos with no cache. They start loading when I press play on them.
The files are overwritten correctly, and I also checked where I am pointing (same location and same name as before).
If I go to the location of the videos the new is in there, the old is gone.
Happened to all browsers.
It looks like something is still cached in the browser, or my server is just ignoring me. Are there any other ways to empty the cache? Or what is going on here?
|
Clearing cache doesn't work for videos
|
1.) There are 2 layers of cache in Volley, one is the in-memory cache (in RAM) and the other one is a disk cache. Once a cache is full, the oldest image (meaning the image that hasn't been accessed the longest) in that cache will be evicted when a new image is about to be cached to make room for the new items. When something is evicted from the in-memory cache, it is still present in the disk cache and can be loaded very quickly from disk if it is needed again. If an image is evicted from the disk cache, it would have to be redownloaded if it's needed again.
2.) This doesn't sound reasonable once you understood the answer to question 1. The cache automatically makes room for newer content and there is no reason to evict content manually. Manual eviction will in fact lower your cache's efficiency.
3.) Broadly speaking, this is not possible (without hacks), because it should not be needed. If an image resource (almost) always expires after a certain time, the server should announce this using HTTP headers when sending the resource to the client. For example using the max-age property of the cache-control header. There are lots of websites explaining this in detail, for example: http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/. If an image resource almost never expires, you can consider changing its filename upon change and store that filename as a property. For example a user can have an avatar property containing the URL to the avatar. The avatar image can be cached indefinitely and you change the URL of the image if a new avatar gets uploaded.
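The last paragraph's "change the filename upon change" idea can be sketched like this (Python for brevity; the hash scheme and URL layout are my own assumptions, not Volley's):

```python
import hashlib

def versioned_avatar_url(username, image_bytes):
    # A short content hash makes the URL change whenever the image changes,
    # so each URL can be cached indefinitely without ever going stale.
    digest = hashlib.sha1(image_bytes).hexdigest()[:8]
    return f"/avatars/{username}-{digest}.png"

url_v1 = versioned_avatar_url("alice", b"old avatar bytes")
url_v2 = versioned_avatar_url("alice", b"new avatar bytes")  # a different URL
```

The server stores the current URL as the user's avatar property; clients cache each URL forever and simply fetch the new URL after an upload.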
|
in the google's own volley image cache tutorial
// Returns a cache size equal to approximately three screens worth of images.
public static int getCacheSize(Context ctx) {
final DisplayMetrics displayMetrics = ctx.getResources().
getDisplayMetrics();
final int screenWidth = displayMetrics.widthPixels;
final int screenHeight = displayMetrics.heightPixels;
// 4 bytes per pixel
final int screenBytes = screenWidth * screenHeight * 4;
return screenBytes * 3;
}
the recommended cache size is three screens' worth of images, which comes to about 7 MB. I have a social media app and there is a newsfeed inside it.
1-) My first question is what will happen after the cache is full?
2-) I am thinking about removing the cache every hour so that it will include the newer content. Is that reasonable? What is the image-caching logic behind apps that include something like a newsfeed (for example, Instagram)?
3-) How can i remove the old cache of specific item and force it to download it again? I tried this solution but it did not work:
VolleySingleton.getInstance().getRequestQueue().getCache().remove(IMAGE_URL);
mNetworkImageView = (NetworkImageView) getView().findViewById(R.id.networkImageView);
mImageLoader = VolleySingleton.getInstance().getImageLoader();
mNetworkImageView.setImageUrl(IMAGE_URL, mImageLoader);
There are a lots of clone question of my third question but none of them has been answered.
Thanks for your helps. :)
|
Android volley image caching questions
|
Should I clear the contents of this folder on each app exit ?
Why would you want to do that? AdMob does this to provide better and speedier ad serving. And I'm sure the amount it is caching is within tolerable limits. Clearing that much space on the user's storage will not make them much happier. But you'll lose revenue because of AdMob re-downloading all the stuff again. It will lower your fill rate and thereby your revenue.
There is no harm in doing it but you'll lose more (revenue) than you gain (user satisfaction). The user might not even notice it.
|
I have integrated AdMob in my Android app. I noticed the app taking up more and more storage space in a folder called app_webview generated by AdMob inside the app data folder. Should I clear the contents of this folder on each app exit ?
|
Is it good practice to clear the cache files generated by AdMob in Android?
|
Risks/mistakes: Of course one major thing is data consistency. When caching data from a database I'll usually make sure I make use of transactions when updating. Usually I use a pattern like this:
begin transaction
invalidate cache entries in the transaction
update database
commit transaction
If a cache miss happens during the update, the read needs to wait until the transaction is committed.
For your use case the typical choice is a clustered or distributed cache, like HazelCast, Infinispan, or Apache Ignite. However, this seems really too heavyweight for your use case.
An alternative is to implement your own mechanism to publish invalidation events to all nodes. Still, this is no easy task, since you want to make sure that every node receives the message, but also be fault tolerant if a node goes down at the same time. So you probably want to use a proper library for that, e.g. JGroups or the various MQ products.
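The transaction pattern above can be sketched in plain Python, with locks standing in for the database transaction (all names are illustrative):

```python
import threading

cache, db = {}, {"user:1": "old"}
cache_lock = threading.Lock()
db_lock = threading.Lock()  # stands in for a real DB transaction

def update(key, value):
    with db_lock:                    # "begin transaction"
        with cache_lock:
            cache.pop(key, None)     # invalidate inside the transaction
        db[key] = value              # update the database
    # releasing db_lock = "commit"

def read(key):
    with cache_lock:
        if key in cache:
            return cache[key]
    with db_lock:                    # a miss waits for any open "transaction"
        value = db[key]
    with cache_lock:
        cache[key] = value           # repopulate after commit
    return value
```

Because invalidation happens before the database write completes, a concurrent reader can never repopulate the cache with the pre-update value.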
|
The application I'm developing uses simple HashMaps as cache for certain objects that come from the DB. It's far from ideal, but the amount of data for these chached lists is really small (less than 100) and does not change often. This solution provides minimal overhead. When an item in one of these cached lists changes, its value is replaced in the HashMap.
We're nearing the launch date on production for this application. To provide a reasonably scalable solution, we've come with a load-balancing solution. The balancer switches between several Wildfly-nodes, which each hold the entire application, except for the DB.
The issue now is that when an cached item changes, it's only updated in one of the nodes. The change is not applied to the cache in other nodes. Possible solutions are:
Disable the caching. Not an option.
Use a cache server like Ehcache Server. In this way there would be one cache for all nodes. The problem however would be too much overhead due to REST calls.
A additional web service in every node. This web service would keep track of all load-balanced nodes. When a cached value changes in a node, the node would signal other nodes to evict their caches.
An off-the-shelf solution like Ehcache with signalling features. Does this exist?
My question is: Are there products that offer the last solution (free and with open license, commercially usable)? If not, I would implement the third solution. Are there any risks/mistakes I would have to look out for?
|
Cache invalidation in load balanced application
|
This is a broad subject. There are two ways of storing complex objects in Redis: serialization, and hashes. Serialization is opaque blobs - only (usually) interpreted by the calling application. I discussed this in this github issue that I suspect is also you. Hashes are name/value pairs inside a single key (kinda like dynamic database columns, ... -ish) - this allows fetching a subset of properties, etc.
Note that you can't have hashes inside lists.
Next we have the issue of lookup by an id. If you use a Redis list, you can fetch by position only: not by some property. I suspect you're also thinking of Redis with RDBMS goggles, but Redis simply doesn't work like that.
Personally, I would have a key per item, named by the primary key. For example keys like /user/12345. Then fetching (or updating) user 12345 is a case of reading (or writing) to the key by name. Redis does not natively support additional indexing, but you can implement indexes manually using additional storage. For example, a hash in /users/ssid that maps whatever572618 to the user that has that id.
Josiah Carlson's "Redis in Action" book may be of use to you in understanding how to work with Redis.
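A sketch of that layout, with plain dicts standing in for the Redis keyspace (with redis-py these would be HSET/HGETALL calls; key names follow the example above):

```python
store = {}  # stands in for the Redis keyspace

def save_user(user_id, fields):
    store[f"/user/{user_id}"] = dict(fields)        # like HSET /user/<id> ...
    ssid = fields.get("ssid")
    if ssid is not None:                            # manual secondary index
        store.setdefault("/users/ssid", {})[ssid] = user_id

def get_user(user_id):
    return store.get(f"/user/{user_id}")            # like HGETALL /user/<id>

def find_by_ssid(ssid):
    user_id = store.get("/users/ssid", {}).get(ssid)
    return None if user_id is None else get_user(user_id)

save_user(12345, {"name": "Ann", "ssid": "whatever572618"})
```

The index hash must be maintained by the application on every write, exactly because Redis has no native secondary indexing.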
|
I am using the StackExchange.Redis API to store a simple list of strings in Redis. Now I need to Add/Update/Delete/Get a list in Redis,
then access the objects like lst.Find(h => h.Id == "1"), etc.
Basically, I need functionality to manipulate reference-type objects.
I can't find it build in there. Anybody know how can i do this?
|
Api for Save/Select List of Reference type into redis
|
Put this in your child theme functions.php
// Scheduled Action Hook
function w3_flush_cache() {
global $w3_plugin_totalcache; // the plugin instance lives in a global, not a local
if ( $w3_plugin_totalcache ) {
$w3_plugin_totalcache->flush_all();
}
}
// Bind the callback to the scheduled hook so WP-Cron can actually run it
add_action( 'w3_flush_cache', 'w3_flush_cache' );
// Schedule Cron Job Event
function w3tc_cache_flush() {
if ( ! wp_next_scheduled( 'w3_flush_cache' ) ) {
wp_schedule_event( current_time( 'timestamp' ), 'hourly', 'w3_flush_cache' );
}
}
add_action( 'wp', 'w3tc_cache_flush' );
answered Nov 24, 2016 by Mhluzi Bhaka
|
|
Is it possible to clear the page cache every hour in W3 total cache? I have a dynamic website (plugin) with data that updates maybe every couple minutes so I want to clear the cache every hour so the data is something like up-to-date.
Right now I don't use the page cache, otherwise the data is not up-to-date, but that really slows down my site's response time and I really need to improve it!
Is this possible with W3 Total Cache's settings or something?
Regards
Joep
|
W3 Total Cache - Clear page cache every hour automatic
|
We use Jade and to prevent caching, we have a variable based on the time that gets appended to the end of our JS/CSS includes (style.css?v=2012881). Since we already have an 'appVersion' via this variable, I chose to expose that variable using an angular module and constant:
script.
angular.module('appVersion',[]).constant('appVersion',#{curDate});
In my main Angular module I have:
.config(['$httpProvider','appVersion',function($httpProvider,appVersion){
$httpProvider.interceptors.push(function() {
return {
'request': function(config) {
if(!config.cached && config.url.indexOf('.html') > -1){
if(config.url.indexOf("?") > -1){
config.url = config.url.replace("?","?v="+appVersion+"&");
}
else{
config.url += "?v="+appVersion;
}
}
return config;
}
};
});
}])
Since the templates are loaded using $http.get, I added an interceptor that detects if a request is a request for a template and appends the appVersion to the request if it is. That way we have the same versioning for the CSS, JS, and HTML.
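The interceptor's URL rewrite boils down to this plain function (Python for illustration; behavior inferred from the snippet above):

```python
def add_version(url, app_version):
    # Only template requests (.html) get the version token.
    if ".html" not in url:
        return url
    if "?" in url:
        # The interceptor inserts v= right after '?'; the resulting query
        # string is equivalent to appending it.
        return url.replace("?", f"?v={app_version}&", 1)
    return f"{url}?v={app_version}"
```

Because the token only changes on deploy, templates are still cached between deploys but re-fetched as soon as the version bumps.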
answered Nov 16, 2015 by William Neely
|
|
We have an Angular project where the templates have changed numerous times thanks to our "Agile" environment. Browsers seem to strongly cache the templates because of the html file type. This means that when business goes to our dev site after an update, they occasionally see the old templates. How can we make sure that when changes are made to the templates, the user downloads the new template instead of loading from the cache?
|
How can I de-cache AngularJS templates when they change on the server?
|
I believe this is because you can have the same partition cached in multiple locations. See SPARK-4049 for more details.
EDIT:
I'm wondering if maybe you have speculative execution (see spark.speculation) set? If you have straggling tasks, they will be relaunched, which I believe will duplicate a partition. Also, another useful thing to do might be to call rdd.toDebugString, which will provide lots of info on an RDD, including its transformation history and number of cached partitions.
answered Oct 16, 2015 (edited) by Rohan Aletty
I didn't call .persist twice on any RDD. What other operation might cause this? Is there a way I can find out which RDD is cached twice? Thanks!
– Edamame
Oct 16, 2015 at 3:39
Looking at your UI, the RDD's that are cached twice have 500 partitions (RDD 19) and 50 partitions (RDD 30). You can programmatically call rdd.partitions.size (or rdd.getNumPartitions() in pyspark) on each RDD to figure out which RDD is exceeding 100%.
– Rohan Aletty
Oct 16, 2015 at 3:58
|
|
I have the following Spark job, where some RDDs show a cached fraction of more than 100%. How can this be possible? What did I miss? Thanks!
|
Fraction cached larger than 100%
|
I think you've misunderstood what the DatabaseCache is. It is not a cache of your database, it's a cache in your database; that is, when you explicitly cache something, it'll be stored in a table in your db. It's still up to you to actually do any caching, and similarly it's up to you to do any cache invalidation.
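To make that concrete, here is a framework-free sketch of the invalidation you would implement yourself - in Django you would typically attach it to a post_save signal (all names here are illustrative):

```python
cache = {}
post_save_hooks = []   # stands in for django.db.models.signals.post_save

def save_row(db, table, pk, value):
    db[(table, pk)] = value
    for hook in post_save_hooks:   # fire "post_save" after the write
        hook(table, pk)

def invalidate(table, pk):
    cache.pop(f"{table}:{pk}", None)   # drop the now-stale entry

def get_row(db, table, pk):
    key = f"{table}:{pk}"
    if key not in cache:
        cache[key] = db[(table, pk)]   # repopulate on miss
    return cache[key]

post_save_hooks.append(invalidate)
```

The point of the hook is that reads after a write always repopulate from the database instead of returning the stale cached value.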
|
I have enabled basic Django query caching by adding the following to my settings.py :-
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
'LOCATION': 'trialrun_cache_table'
}
}
Does Django automatically invalidate the query cache for a particular table if data is inserted or updated? If not, how should I go about implementing this behavior?
|
Does Django invalidate query cache on update?
|
You should not manage a Spring bean yourself; let Spring manage it. Otherwise the @Cacheable proxy is never applied and the annotation has no effect.
@EnableCaching
@Configuration
public class myConfiguration {
@Bean(name = "CacheManager")
public CacheManager cacheManager() {
return new GuavaCacheManager("MyCache");
}
@Bean
public MyClass myClass(){
return new MyClass();
}
}
After that you should use MyClass in a managed manner.
public static void main(String[] args) throws Exception {
final ApplicationContext applicationContext = new AnnotationConfigApplicationContext(myConfiguration.class);
final MyClass myclass = applicationContext.getBean("myClass", MyClass.class);
myclass.get("testKey");
myclass.get("testKey");
myclass.get("testKey");
}
|
I'm trying to use Google Guava cache in my Spring app, but results are never cached.
This are my steps:
in conf file:
@EnableCaching
@Configuration
public class myConfiguration {
@Bean(name = "CacheManager")
public CacheManager cacheManager() {
return new GuavaCacheManager("MyCache");
}
}
In class I want to use caching:
public class MyClass extends MyBaseClass {
@Cacheable(value = "MyCache")
public Integer get(String key) {
System.out.println("cache not working");
return 1;
}
}
Then when I'm calling:
MyClass m = new MyClass();
m.get("testKey");
m.get("testKey");
m.get("testKey");
It's entering function each time and not using cache:
console:
cache not working
cache not working
cache not working
Does someone have an idea what am I missing or how can I debug that?
|
spring - using google guava cache
|
Well HashMap.values() directly returns a Collection. There is no computation or data copying done. It's as fast as it can be.
So caching in form of HashMap is the thing to do in your case.
But consider that if you ever need a List<CachedObject> in your code:
ArrayList<CachedObject> list = new ArrayList<CachedObject>(hashMap.values());
the ArrayList constructor calls the collection's toArray() method, which essentially does a for loop over all N (size) elements of the collection.
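The trade-off is easy to demonstrate in sketch form (Python instead of Java for brevity; the shapes mirror CachedObject):

```python
objects = [{"id": i, "id_type": i % 3} for i in range(1000)]
by_id = {o["id"]: o for o in objects}   # one-time index build

def get_by_id_list(oid):
    # O(n): scans the list, like the loop in the question
    return next((o for o in objects if o["id"] == oid), None)

def get_by_id_map(oid):
    return by_id.get(oid)               # O(1) average dict lookup
```

For lookups by a non-key field you still scan, so secondary maps (e.g. keyed by id_type) are worth building only if those lookups are frequent.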
|
Currently in my legacy code I have cached some DB info in the form of a List<CachedObject>.
CachedObject looks something like this
public class CachedObject
{
private int id;
private int id_type;
private int id_other_type;
// getters and setters
}
Most of the time i am fetching one specific object via function:
public CachedObject getById( Integer id )
{
if( id != null )
for ( CachedObject cachedObject : this.cachedObjectList )
if( cachedObject.getId().equals( id ) )
return cachedObject;
return null;
}
My question is: would it be better to cache the objects in a Map<Integer, CachedObject> with id as key? My concern in that scenario is that my other getters from that list would have to look like:
public CachedObject getByIdType( Integer id )
{
if( id != null )
for ( CachedObject cachedObject : cachedObjectList.values() )
if( cachedObject.getId().equals( id ) )
return cachedObject;
return null;
}
I haven't done this before, because I really don't know drawbacks of this map cache, and it seems silly not to do this in the first place.
|
Java cache in map vs list
|
Answer: no, they do not do that.
I should listen to the QuestionResponse objects.
Thanks @ozgur for this answer.
|
This seems like a pretty simple question but I'm having trouble finding the answer to it:
Do Django models with a foreign key ever call the save() method of the model they're pointing to when they are saved/changed?
I'm working on a model for SAT exams being taken, graded and scored--the last of which involves caching and cache invalidation--and trying to figure out just when I have to delete a cached Score object and recalculate it.
I have three models: ExamResponse, QuestionResponse, and ExamScore, which for concreteness we can say look like this:
class ExamResponse(models.Model):
user = models.ForeignKey(User)
exam = models.ForeignKey(Exam)
class QuestionResponse(models.Model):
exam_response = models.ForeignKey(ExamResponse)
answer = models.TextField()
score = models.SmallIntegerField(default=0)
class ExamScore(models.Model):
exam_response = models.ForeignKey(ExamResponse)
score = models.SmallIntegerField(default=0)
Whenever a teacher grades a QuestionResponse (by changing the score field), I want to delete any ExamScore associated with the QuestionResponse's ExamResponse. Can I listen for a signal from a change to an ExamResponse object?
@receiver(post_save, sender=ExamResponse)
def invalidate_exam_response_stats(sender, **kwargs):
"""
Delete the ExamScore associated with this ExamResponse
since it's become invalid.
"""
Or do I have to listen for the actual QuestionResponses to be saved?
@receiver(post_save, sender=QuestionResponse)
def invalidate_exam_response_stats(sender, **kwargs):
"""
Look up the QuestionResponse's ExamResponse, then delete
the associated ExamScore.
"""
|
Using Django model's post_save for manual cache invalidation: do foreign keys trigger save()?
|
As you mentioned, M state is pretty obviously useless since you never keep modified data in your cache.
As for the Exclusive state: keep in mind that in some sense it's "stronger" than the Shared state, since in WB caches it guarantees that a write to that line doesn't need to obtain ownership and invalidate other copies first, and instead can write directly to that line without having to go out of the local cache. In other words, the transition from E to M is simple, while S to M is more complicated and requires invalidating all other copies first.
On the other hand, in a WT cache you already have the guarantee that no one else is holding a modified version of the line, and more importantly - you don't have the benefit of doing the simple transition in your local cache (since you have to write the data outside anyway), so there's really no need for an Exclusive state - you gain no benefit from having it. In fact, you may actually lose from it, because having an E state forces you to send snoops on any other core reading the same line (E -> S transition).
Of course, when writing something outside, you'll still need to invalidate all other copies, but you don't need the distinction between E and S to tell you if they exist, usually there's a snoop filter or some other list to tell you which cores to snoop.
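A toy sketch of why S and I suffice under write-through (my own simplification; it ignores bus and snoop-filter details):

```python
def read(states, core):
    # A read fills the line from memory; the copy is clean, so Shared suffices.
    states[core] = "S"

def write(states, core):
    # Write-through: memory is updated on every write, so the writer's copy
    # stays clean (S) while every other copy must be invalidated (I).
    for other in states:
        if other != core:
            states[other] = "I"
    states[core] = "S"

states = {0: "I", 1: "I"}
read(states, 0)
read(states, 1)   # both cores hold the line in S
write(states, 0)  # core 1 is invalidated; no M or E state was ever needed
```

Since a line is never dirty and never needs a silent S-to-M upgrade, the M and E states have nothing left to encode.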
|
I came across following question, while reading the slides of a lecture about cache coherency protocols:
Which MESI states are relevant, if cache with write-through policy is used?
The answer was also given: I (Invalid) and S (Shared Unmodified).
I understand that the state M (Modified Exclusive) is not relevant, since cache with write-through policy propagates the changes to main memory anyway.
The state E (Exclusive Unmodified) is not relevant, since it's only issued when exclusive read misses with replacement occur (and is kept with further read hits).
Can someone explain the given answer?
|
Which MESI protocol states are relevant if cache with write-through policy is used?
|
Although we have released version 0.1.2, we haven't completed redis cache support. So, redis cache isn't available at the moment. We're sorry for any inconvenience.
We have been implementing the client. Please check out our repository to find more details: https://github.com/Cetsoft/imcache/tree/master/imcache-redis/src/main/java/com/cetsoft/imcache/cache/redis
|
I've specified dependency as follows but I can't build a new redis cache.
<dependency>
<groupId>com.cetsoft</groupId>
<artifactId>imcache</artifactId>
<version>0.1.2</version><!--Can be updated for later versions-->
</dependency>
My code is simply as follows but I get build failure.
Cache<String, User> cache = CacheBuilder.redisCache().build();
|
Can't build redis cache with the latest imcache release(0.1.2)
|
Bootstrap 4
Although this question is 2 years old and asked for BS 3.2 I decided to share my solution because the 'problem' still exists in Bootstrap 4.
$('#audio-menu, #style-menu').on('hide.bs.popover', function (e) {
var id = $(this).data('bs.popover').tip.id;
if(id) {
var content = $('#'+id).find('.popover-body');
var checkboxes = content.find("input[type=checkbox]");
checkboxes.each(function(){
var checkbox = $(this);
var checked = checkbox.prop('checked');
checkbox.attr('checked', checked);
});
$(this).attr('data-content', content.html());
}
});
As you can see, I've implemented two popovers (audio-menu and style-menu). Every time the hide event is fired, you get the ID of the hiding popover by extracting it from the data attribute. (In my case I have to check that the ID is not empty, because my popover('hide') is fired by other functions too, even when the popover isn't shown. So maybe you can leave that out.)
Keep in mind that "this" isn't your popover as it is represented in the DOM. In my case it's a button filled with the data-content attribute. To get the content of 'the real' popover we can use the ID and search the DOM. In the next step you can do the checkbox handling as described by Dominik Vogt below. Afterwards you can save your modified content in the data-content attribute again.
answered May 25, 2018 by Alexander
|
|
So I have a problem with the Bootstrap popover in v3.2. I create a popover with changeable content (checkboxes).
$(elem).popover({
container: 'body',
trigger: 'manual',
placement: 'auto top',
selector: false,
title: 'Feedback',
html: true,
content: htmlOptions,
template: '<div class="popover popover-feedback" role="tooltip"><div class="arrow"></div><h3 class="popover-title"></h3><small class="popover-subtitle">Was gefällt dir an dem Foto?</small><div class="popover-content"></div></div>'
});
htmlOptions contains the html with the checkboxes.
<div class="checkbox right"><label><input type="checkbox" checkbox-id="0" checked>Inspiration</label></div>
<div class="checkbox right"><label><input type="checkbox" checkbox-id="1">Kreativität</label></div>
<div class="checkbox right"><label><input type="checkbox" checkbox-id="2">Komposition</label></div>
<div class="checkbox right"><label><input type="checkbox" checkbox-id="3">Qualität</label></div>
When I hide the popover with $(..).popover('hide'); the popover is removed from the DOM. When I reopen the popover with $(..).popover('show'); the changed content (e.g. a checked checkbox) is not shown, because the popover had been removed from the DOM.
How do I stop the popover being removed from the DOM?
|
bootstrap popover is removed from dom on close
|
The db->cache_on is only designed to use file caching. It isn't technically a "file cache" such as OP/APC and is purely handled by some code in the Ci library.
Essentially, when a controller is accessed, the system checks for a version of the cache file that matches the controller and function. If it finds a file, it pulls the result from that instead of calling the DB for the result. If no file is found, it will query the DB and write the file for future queries that match that same call.
If you want to make use of memory / system caching such as APC / OP, you need to use the caching library.
Once loaded, it is accessed through $this->cache and not $this->db
Docs on CI are found at:
http://www.codeigniter.com/user_guide/libraries/caching.html
Happy caching!
|
I have to create an exam app that has to load questions that will never change.
According to Documentation
This will cache the query
$this->db->cache_on();
$query = $this->db->query("SELECT * FROM mytable");
1. But this is the file driver by default, right? How do I make it use APCu instead?
What would be a good mix: Codeigniter - Opcache/file or Opcache/APC
Thanks I hope you can point me in the right direction.
|
Codeigniter cache Opcache and APCu
|
I would not serve the static files from my Python application but delegate that to the web server (nginx, Apache, ...).
Then you can set the expiry through headers, controlling how long the browser should cache them.
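As a sketch, the header policy could look like this (illustrative values only; in Flask you could apply it from an after_request hook, or configure it in the web server itself):

```python
WEEK = 7 * 24 * 3600        # static assets: cache for a week
FEW_MINUTES = 5 * 60        # rendered views: cache for a few minutes

def cache_control_for(path):
    # Decide the Cache-Control value by resource type.
    if path.endswith((".css", ".js", ".png", ".jpg")):
        return f"public, max-age={WEEK}"
    return f"public, max-age={FEW_MINUTES}"
```

This keeps the two cache lifetimes independent, which is exactly what a single @cache.cached timeout cannot express.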
answered Jun 5, 2015 by chaos
OK, let's say Flask tries to set the cache to 500 for static objects (well, it caches all kinds of things or none), and Apache tries to set 1 week. Which would take precedence? Would the static files be cached for 500 seconds or 1 week?
– Emmet B
Jun 5, 2015 at 7:06
If your http server is serving the static files, Flask doesn't. That means Flask won't cache them at all.
– dirn
Jun 5, 2015 at 12:50
I agree to the comment above. Plus you do not need to necessarily cache them because they will not overload your application server. The files will be served directly from the disk by the web server.
– chaos
Jun 5, 2015 at 14:04
|
|
I really could not find any resource on this. So how can I separate caching of views/functions from static files (i.e. .css, .js)?
I want to cache my static objects for a week, on the other hand I need to cache functions/views for only a few minutes.
When I do following
from flask.ext.cache import Cache
cache = Cache(config={'CACHE_TYPE': 'simple'})
cache.init_app(app)
@cache.cached(timeout=500)
def index():
return render_template('index.html')
then the cache time for all views and objects is set to the same value, 500. How do I go about it?
|
Flask: Caching static files (.js, .css)
|
By default, CircleCI will cache vendor/bundle and ~/.bundle so if you let it run bundler for you everything should be cached automatically.
answered Jun 25, 2015 by Conor McDermottroe
Technically this is true. I've found instances where CircleCI ignores cache data and spends time loading the cache only to run bundle install again, spending a lot more time than if cache was disabled. Not sure if this is specific to my build or not, but I think it's a CircleCI config issue.
– techalicious
Jun 25, 2015 at 17:53
If you spot that happening, use the in-app help and send us a build URL and we'll be able to track down the root cause.
– Conor McDermottroe
Jun 25, 2015 at 20:55
I found the following advise from another question: bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3 stackoverflow.com/questions/30666176/circleci-gems-caching
– stmllr
Jul 1, 2015 at 8:29
|
|
Is there a way to cache my dependencies that I get from bundler (using bundle install)? I know there's the cache_dependencies command that I can use in circle.yml, but I'm not sure what path to pass to it.
For reference, in TravisCI, you can cache bundler by using
cache: bundler
|
CircleCI cache bundle install
|
You can try to force caching
$.ajax({
url: "/yourpage",
cache: true,
dataType: "html",
success: function(data) {
$("#content").html(data);
}
});
Or globally:
$.ajaxSetup({
cache: true // Enable cache as jQuery won't let the script be cached by default
});
answered Jun 2, 2015 by intika
ajaxSetup is global so I didn't want to use that. I might try your first suggestion though.
– user648931
Jun 2, 2015 at 1:31
|
|
When loading HTML content through $.load, and the HTML content contains <script> tags referencing JavaScript files, the linked JavaScript files are appended with a cache-busting parameter, which prevents the files from being cached by the browser.
So, instead of requesting something like <script src="/js/foo.js">, it requests <script src="/js/foo.js?_=123123">, causing the script to be loaded every time.
Is there a way to disable this behavior?
|
Jquery disable cache buster for external scripts downloaded after calling the jquery load function
|
Sometimes the MIME types can be a bit of a pain when dealing with IIS caching. The code below gives you a basic example of how to ensure that your mappings are correct. You shouldn't need to do this for every MIME type, only certain ones...
<staticContent>
<remove fileExtension=".js" />
<mimeMap fileExtension=".js" mimeType="text/javascript; charset=UTF-8" />
<!-- Caching-->
<clientCache cacheControlMode="UseExpires"
httpExpires="Tue, 22 Aug 2016 03:14:07 GMT" />
</staticContent>
Does the above code cache your JavaScript files correctly?
Also, have you tried recycling your App pool? Sometimes the server might need to be recycled before the changes kick in.
I tried that literally as you posted, but I see no change in the response and request headers for the any js file. I made sure to recycle the App Pool on my server. For good measure, I also checked this on both server and local. No difference.
– user4864716
May 18, 2015 at 15:37
|
|
What got me started on this line of thought was when I played around with PageSpeed Insights and saw this message:
I have looked on the available forum posts and have tried this:
<system.webServer>
<staticContent>
<clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
</staticContent>
</system.webServer>
Looks simple, but when I verified both on my local server and my remote server, I saw no evidence that the cache setting stuck by means of Chrome Developer Tools.
After more research, I saw that the better way to do this was from within IIS. However, when I talked with GoDaddy technical support, they said that IIS was locked down to people like me because IIS changes affect everyone. So the IIS option is not an option for me.
However, if it was only a matter of the server, then wouldn't I see the web.config approach work on the local Visual Studio server? Clearly I'm missing something.
Am I doing this completely wrong or is it a matter of adding an extra line?
I'm starting to think that because web.config isn't doing the job and GoDaddy won't make any IIS changes, could something in Global.asax do the job? [And no, I can't tell my client to change servers.]
|
How can I set *.js and *.css cache expiration by age? <clientCache> setting in web.config not working
|
You don't pay from S3 to Cloudfront:
If you are using an AWS origin, effective December 1, 2014, data transferred from origin to edge locations (Amazon CloudFront "origin fetches") will be free of charge.
You do pay outbound transfer. If we stay with the lowest-priced Cloudfront tier, you'll pay $85 for data plus 75 cents for the HTTP requests.
In comparison, using S3, ignoring the free transfer tier, you'll pay $90 for data plus 40 cents for HTTP requests.
So, why use Cloudfront? First, users will get significantly lower latency to download your 1mb file. Second, the price remains lower as you go up the bandwidth tiers- if you are shipping 300tb of data per month, the final tier is 20% cheaper than the S3 price.
There are certainly places that offer better bandwidth pricing than AWS, in server/file/cache variations. If you use Cloudflare you'll pay for S3 to Cloudflare. You may only need to pay it once (1mb), but in reality it'll be more than once.
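As a sanity check on the figures above, here is a small back-of-the-envelope script; the rates are the circa-2015 first-tier US prices quoted in this answer, not current AWS pricing:

```python
# 1 MB object downloaded 1,000,000 times ~= 1000 GB of outbound transfer.
requests = 1_000_000
gb = 1000  # approximate total GB served

# circa-2015 first-tier US prices quoted in the answer above
cloudfront = 0.085 * gb + 0.0075 * requests / 10_000   # data + HTTP requests
s3_direct  = 0.090 * gb + 0.0040 * requests / 10_000

print(f"CloudFront: ${cloudfront:.2f}")
print(f"S3 direct:  ${s3_direct:.2f}")
```

At this volume CloudFront already comes out slightly cheaper, and the gap widens at higher tiers.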
for millions of users downloading millions of cached files , cloudfront would cost over $100,000 while cloudflare would cost less than $10,000 . so is the 100 thousand difference is just the cost of a lower latency or is there other issues with cloudflare ?
– eddy
May 14, 2015 at 19:01
different pricing structure. Like I said, AWS's bandwidth pricing isn't impressive. Look at how Youtube supposedly gets 'free bandwidth', for instance.
– 300D7309EF17
May 14, 2015 at 19:42
Please also note that you pay more for HTTPS requests
– Jón Trausti Arason
Feb 13, 2018 at 13:18
|
|
Let's say I am using AWS's S3 connected with CloudFront,
and S3 sends one image file of size 1 MB to be cached on all of CloudFront's edge nodes.
Then the cached image file is downloaded 1 million times from CloudFront's edges (the image is not grabbed again from S3).
Do I pay for:
1 - the bandwidth for transferring the file from S3 to CloudFront edges, or
2 - the bandwidth for transferring the file from S3 to CloudFront edges + the bandwidth for 1 million downloads from the CloudFront edges?
Also, if the answer is 2, does that mean caching doesn't save money at all and only improves performance?
And if that's the case, why use CloudFront instead of Cloudflare, since Cloudflare provides free bandwidth? (Is there a catch there?)
|
cloudfront cache hits only pricing
|
You could pass a dummy parameter in the URL so that the URL becomes unique each time. Passing a dummy parameter in the params array will not harm the $http GET call.
$http({
method: 'GET',
url: 'myurl',
params: { 'dummy': new Date().getTime() }
})
This will ensure that caching is not done for your URL.
The best option would be to disable caching on the server side (link here).
OK, thanks for your reply. I already tried it that way, but is there any alternative way to handle this issue without passing the time as a param?
– Priya Jacob
May 14, 2015 at 13:29
@AnilaJacob I'm sure there is no other way to do this on the client side. What is the problem with this way? Otherwise you need to set caching on the server side.
– Pankaj Parkar
May 14, 2015 at 13:32
Hi pankajparkar, please check this link: coderwall.com/p/40axlq/… When I searched this issue, all answers were the same: set the cache property to false or remove the entry from the cache before requesting. But none of them are working for me. Why is this not working? Is it a version problem or something else?
– Priya Jacob
May 15, 2015 at 4:52
@AnilaJacob you need to look at github.com/angular/angular.js/issues/1586. Doing server-side caching is best practice.
– Pankaj Parkar
May 15, 2015 at 15:20
OK, thanks pankajparkar. Now I'll continue with server-side caching using the following code: ServletResponse response; response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); response.setHeader("Pragma", "no-cache"); response.setDateHeader("Expires", 0);
– Priya Jacob
May 16, 2015 at 5:35
|
|
I want to prevent caching in Angular. To do that, I set the cache property to
false. After doing this I request the same URL again, but the request is not
sent to my server.
Code used to prevent caching:
$http({
cache : false,
method: "GET",
url :"myurl";
}).success(function(data) {
}).error(function(data) {
});
And code used for remove cache,
var $httpDefaultCache = $cacheFactory.get('$http');
$httpDefaultCache.remove('myurl');
$http({
cache : false,
method: "GET",
url :"myurl";
}).success(function(data) {
}).error(function(data) {
});
Can you help me, please?
|
How can i Prevent caching in angular?
|
You should make use of WordPress Transients API to store temporary, cacheable data.
This allows you to save a value, along with an amount of time that the value should be cached for. Here is an example of how this might work with your function:
function popular_uploads() {
// Try to retrieve saved data from the cache
$json_data = get_transient('my_unique_identifier');
// If no saved data exists in the cache
if ($json_data === false) {
// Fetch new data from remote URL
$url = 'https://www.googleapis.com/youtube/v3/search?order=viewCount&part=snippet&channelId='. channel_id(). '&maxResults=5&key={key}&type=video';
$json = file_get_contents($url);
$json_data = json_decode($json, false);
// Save the data in the cache, let it live for up to 1 hour (3600 seconds)
set_transient('my_unique_identifier', $json_data, 3600);
}
foreach ( $json_data->items as $item ) {
$id = $item->id->videoId;
echo '<iframe id="ytplayer" type="text/html" width="auto" height="auto"
src="//www.youtube.com/embed/' . $id . '?rel=0&showinfo=1"
frameborder="0" allowfullscreen></iframe>';
}
}
By using the Transients API, you let WordPress decide the best way to store the values (either in the database, or in a cache of some kind). From the documentation:
...Transients are inherently sped up by caching
plugins, where normal Options are not. A memcached plugin, for
example, would make WordPress store transient values in fast memory
instead of in the database. For this reason, transients should be used
to store any data that is expected to expire, or which can expire at
any time. Transients should also never be assumed to be in the
database, since they may not be stored there at all.
|
Is there any way to cache data for a WordPress plugin? I have one ready that uses third-party API access to the YouTube API v3, and I need to apply caching both for optimization and to keep the hits under the quota.
Supposing I have this function:
function popular_uploads() {
$url = 'https://www.googleapis.com/youtube/v3/search?order=viewCount&part=snippet&channelId='. channel_id(). '&maxResults=5&key={key}&type=video';
$json = file_get_contents($url);
$json_data = json_decode($json, false);
foreach ( $json_data->items as $item ) {
$id = $item->id->videoId;
echo '<iframe id="ytplayer" type="text/html" width="auto" height="auto"
src="//www.youtube.com/embed/' . $id . '?rel=0&showinfo=1"
frameborder="0" allowfullscreen></iframe>';
}
}
How am I going to cache the data for some time in a database? I am really new to this process; I have looked it up and couldn't find a fix.
Thank you so much for assisting a beginner!
Regards.
Edit:
I have followed all steps found in Simple PHP caching article, the only difficulty I have is implementing the code below :
<?php
// Some query
$q = mysql_query("SELECT * FROM articles ORDER BY id");
while ($r = mysql_fetch_array($q)) {
echo '<li><a href="view_article.php?id='.$r['id'].'">'.$r['title'].'</a></li>';
}
?>
|
How to cache third party API access in SQL database for a WordPress plugin
|
You can't specify docker build's --no-cache because eb doesn't allow you to.
A workaround is to build the image locally (using --no-cache). Then use docker push to publish your image to Docker Hub's public registry.
Your Dockerfile could be simplified (untested) down to:
FROM custom_java_server_build:latest
MAINTAINER xy
EXPOSE 8080
CMD ["/run.sh"]
It does sound like you're creating a large image, you might be able to mitigate this by turning the entire install sequence into a single RUN statement. Don't forget to delete all your temporary files too.
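Putting the workaround together as a sketch, where myuser/java-server is a placeholder image name, and it is assumed you have Docker installed locally and are logged in to a registry:

```shell
# Build locally with caching disabled, then publish; EB then pulls
# the prebuilt image instead of building (and caching) it itself.
docker build --no-cache -t myuser/java-server:latest .
docker push myuser/java-server:latest
```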
|
I am using EB on AWS to deploy a dockerfile.
Currently I deploy to scripts:
The dockerfile and a run.sh file which starts a server.
The dockerfile roughly looks like this
FROM ubuntu:14.04
MAINTAINER xy
[...install a java server...]
ADD run.sh /run.sh
RUN chmod +x /*.sh
EXPOSE 8080
CMD ["/run.sh"]
run.sh starts the java server.
I would like to set the --no-cache flag for the docker build. Where can I set that?
|
AWS docker set --no-cache flag
|
If you don't need real-time information, the easiest way to find the actual memory usage is to take a heap dump and look at the cache objects' retained sizes (e.g. with Eclipse MAT).
Your method is going to ignore the overhead of Infinispan's internal structures. Normally, the per-entry overhead should be somewhere around 150 bytes, but sometimes it can be quite big - e.g. when you enable eviction and Infinispan allocates structures based on the configured size (https://issues.jboss.org/browse/ISPN-4126).
|
|
I need to get a rough estimation of the memory usage of my Infinispan cache (implemented using version 5.3.0), for learning purposes.
Since there is no easy way to do this, I came up with the following procedure:
Add a cache listener to listen for cache put/remove events and log the size of each inserted entry using the jamm library, which uses java.lang.instrument.Instrumentation.getObjectSize. But I'm a little skeptical about whether this returns the right memory usage for the cache. Am I doing this measurement correctly? Am I missing something here, or do I need to consider more factors?
|
Calculating Infinispan cache memory size
|
You should not use skipMemoryCache now, because it is deprecated. Here is some discussion about this method. Instead of using skipMemoryCache you should use memoryPolicy(MemoryPolicy policy, MemoryPolicy... additional).
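For reference, the question's snippet rewritten with the replacement API would look roughly like this (Picasso 2.5+). Adding MemoryPolicy.NO_STORE as well also stops the result from being written back to the memory cache; whether you want that depends on your use case:

```java
// Equivalent of the deprecated skipMemoryCache(): skip the memory-cache
// lookup and, with NO_STORE, skip writing the bitmap back to it.
Picasso.with(this)
       .load(imageUrl)
       .memoryPolicy(MemoryPolicy.NO_CACHE, MemoryPolicy.NO_STORE)
       .into(imageButton);
```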
I'm aware of this now, since I posted this in Feb., could you tell me what performance issues / improvements, user experience issues / improvements that might have?
– AndyRoid
Aug 7, 2015 at 16:43
nice one, this help us a lot.
– Harvi Sirja
Mar 31, 2016 at 11:10
|
|
I have a main DashboardActivity that uses a REST client to grab photos from a staging server. I found that when I would load the Bitmap from the server into an ImageButton it would throw a OutOfMemoryError. I solved this issue with using .skipMemoryCache() like so:
Picasso.with(this)
.load(imageUrl)
.skipMemoryCache()
.into(imageButton);
What would be some potential problems I would run into with this approach?
|
Potential problems with skipMemoryCache() in Picasso?
|
Whether caches need to be invalidated or flushed is architecture dependent.
You should always use the Linux DMA functions to handle these issues correctly.
Read DMA-API-HOWTO.txt and DMA-API.txt.
|
|
My code has a user mode mapping (set up via mmap()) which I need to flush after writing to it from the CPU but before I dispatch the data by DMA’ing the underlying physical memory. Also I need to invalidate the cache after data has arrived via a DMA to the underlying physical memory but before I attempt to read from it with the CPU.
In my mind “cache flushing” and “cache invalidating” mean two different things. Roughly “cache flushing” means writing what’s in the cache out to memory (or simply cache data goes to memory) whereas “cache invalidating” means subsequently assuming all cache contents are stale so that any attempts to read from that range will provoke a fresh read from memory (or simply memory data goes to cache).
However in the kernel I do not find two calls but instead just one: flush_cache_range().
This is the API I use for both tasks and it “seems to work”… at least it has up until the present issue I'm trying to debug.
This is possibly because the behavior of flush_cache_range() just might be to:
1) First write any dirty cache entries to memory- THEN
2) Invalidate all cache entries
If this is what this API really does, then my use of it in this role is justified. After all, it's how I myself might implement it. The precise question for which I seek a confident answer is:
IS that in fact how flush_cache_range() actually works?
|
Linux flush_cache_range() behavior
|
You can't really do that.
Your CSS and JS files are being served by your web server (whichever one you're using). Assets and bundles are a mechanism that takes files from a folder not accessible by web server (e.g. /assets/), and places them into a folder accessible by web server, like /web/assets/xxxxxxx, which is then visible via http://<your_domain>/assets/xxxxxxx.
The files are served directly without any involvement from Yii. So if you need specific headers (for cache control or any other reason), your web server config is where it should be done.
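For example, if the web server is Apache with mod_headers enabled, a sketch of such config might look like this; the file-match pattern and max-age are assumptions, so adjust them for your published assets directory:

```apache
# In the vhost config or an .htaccess under web/assets/
<IfModule mod_headers.c>
    <FilesMatch "\.(js|css)$">
        Header set Cache-Control "public, max-age=3600"
    </FilesMatch>
</IfModule>
```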
|
|
I am using Asset Bundles in Yii2, but I cannot find a way to influence the HTTP headers of each file (CSS and JS). For example, I want to set caching headers.
For the controllers I do this:
'class' => 'yii\filters\HttpCache',
'only' => ['index', 'view'],
'cacheControlHeader' => 'public, max-age=3600',
'lastModified' => function ($action, $params) {
$q = new \yii\db\Query();
return $q->from('user')->max('updated_at');
},
But how to do this for the Assets / Asset Bundles?
|
Yii2 Asset Bundle: How to set headers?
|
I'm not sure that information is exposed - the documentation mentions that even if the resource is cached, the request (using the cache) will still be asynchronous, so it'll look like any other request. The only thing I can think of is rolling your own cache and using that to know when a successful lookup has occurred. From the documentation:
You can change the default cache to a new object (built with
$cacheFactory) by updating the $http.defaults.cache property. All
requests who set their cache property to true will now use this cache
object.
|
is there any indication from $http if the response brought from the cache?
For example lets say I have made this two calls:
$http.get(url, {cache: true}).then(function(response, ..., moreData) { alert(moreData.fromCache) }) //alerts false
$http.get(url, {cache: true}).then(function(response, ..., moreData) { alert(moreData.fromCache) }) //alerts true
as you can see, the first call really executes an AJAX call, so it is not from the cache; the second one brings the data from the cache, so it alerts true
|
How can I know if the $http response is brought from cache - Angular
|
You can simply write:
Person.findByName("some-name", [cache: true])
Yes, I am aware of that. But what I want is to change 'Person.findByName("some-name")' so that it is cacheable query by default. Don't want a coder to remember to add ', [cache: true]' wherever the dynamic finder is used.
– Champ
Feb 5, 2015 at 5:53
|
|
Say I have a class Person with a name field. I can do a Person.findByName but what if I want to override this method to ensure that the query is cached? How do I override this dynamic finder method?
|
How to override a grails dynamic finder?
|
I'm fairly new to cache but I LOVE it!!
Since you're using SSIS 2012, are you deploying in the project model? If so, you can create a new cache project connection (although it MIGHT work as a package connection too). Then you can intialize the cache connection in one of the first steps of the package. And then any child package can reference the cache data source. It's really slick..
Right-Click Connection Managers
Choose "Cache"
Name the new Cache Connection
On the columns table, add the columns in the lookup
Click ok
In the parent package, initialize the cache dataset:
Create a new dataflow task
Source: Can be anything. SQL Query
Destination: Cache Transform
Voila!
Now any child package can use the cache as a data source.
|
I am developing an ETL process and I will be using the same lookup in several packages. Instead of creating a new cache for each package I would like to create the cache once and reference it for each package. I am planning on saving the cache to file so it can be shared among multiple packages but I am not sure where I should put that file. Also, what is the best way of having one location for the file being used in development and another location in production? I thought of using a parameter but it doesn't seem like that is possible.
|
Where should I store the CAW file for a Cache Connection Manager when using the cache in several packages?
|
I'm using the "cordova-plugin-cache-clear" plugin
https://github.com/anrip/cordova-plugin-cache-clear
To use the plugin, simply call window.CacheClear(success, error);
and it cleans the webView cache.
|
|
I am trying to clear the cache stored in an Android application which uses a Cordova WebView.
I tried cordovaWebView.clearCache(true); and also tried:
public void deleteCache(Context context) {
Log.i("Utility", "deleting Cache");
try {
File dir = context.getCacheDir();
if (dir != null && dir.isDirectory()) {
deleteDir(dir);
}
} catch (Exception e) {
Log.e("Utility", " exception in deleting cache");
}
}
public static boolean deleteDir(File dir) {
if (dir != null && dir.isDirectory()) {
String[] children = dir.list();
for (int i = 0; i < children.length; i++) {
boolean success = deleteDir(new File(dir, children[i]));
if (!success) {
return false;
}
}
} else if (dir != null && dir.isFile()) {
dir.delete(); // delete the file INSIDE the directory
}
Log.i("Utility", "deleting Cache " + dir.delete());
return true;
}
But neither worked.
Could I get a solution for this? In the web view the user logs in, so we need to clear the cache when the app loads a second time.
|
Cordova Web view cache clear in android
|
I think you have a DNS resolution problem.
There are two ways to use your website's localhost instead of the external domain name.
1. If you have your own server / VPS / dedicated server:
Add an entry in /etc/hosts (e.g. via vim /etc/hosts) for 127.0.0.1 website.net, or try to fetch the content via localhost (127.0.0.1).
2. If you are using shared hosting, then try to use the below URL(s) in your code:
http://localhost.mywebsite.net/~username/home.php
OR
Try to call http://localhost.mywebsite.net/home.php
Try this to fetch the URL content:
$dynamic = TRY_ABOVE_URL(S)
$out = "home.html" ;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,$dynamic);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$file = curl_exec($ch);
if($file === false) {
echo 'Curl error: ' . curl_error($ch);
}
file_put_contents($out, $file);
|
I have some code to convert a PHP page to HTML:
$dynamic = "http://website.net/home.php";
$out = "home.html" ;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,"$dynamic");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$file = curl_exec($ch);
file_put_contents($out, $file);
This works perfectly on localhost, but it takes too much time / doesn't work on the live site.
I've tried file_get_contents, but that also doesn't work.
Note:
http://website.net/home.php page is in the same site where the code is hosted.
curl is enabled and allow_url_fopen is on as per phpinfo() in both localhost and at server.
EDIT:
It works fine when using other website's page instead of my website.
The site's target page loads perfectly in my browser.
The web page of my website loads fast as usual, but when I use curl or file_get_contents it's too slow and I can't even get output.
|
How to fetch the url content using php curl or file_get_content on the same server?
|
I was just running into this same issue and finally after hours of messing around with different things I came up with a way to modify it with jQuery.
$this->registerJs("$(document).ready(function(){
$('input[name=_csrf]').val('".$this->renderDynamic('return Yii::$app->request->csrfToken;')."');
});", View::POS_END);
And then get rid of all the extra stuff you have in there
<?php $form = ActiveForm::begin(); ?>
<?php echo $form->field($model, 'name'); ?>
<?= Html::submitButton('Save') ?>
<?php ActiveForm::end() ?>
|
|
I have a cached fragment of HTML including a form in my view.
<?php $form = ActiveForm::begin(); ?>
<?php echo $form->field($model, 'name'); ?>
<?= Html::submitButton('Save') ?>
<?php ActiveForm::end() ?>
The problem is with the CSRF validation token - it should be dynamic (not static). Is there any other/better way to render it without disabling and enabling validation again?
<?php Yii::$app->request->enableCsrfValidation = false; ?>
<?php $form = ActiveForm::begin(); ?>
<?php Yii::$app->request->enableCsrfValidation = true; ?>
<input type="hidden" name="_csrf" value="<?php echo $this->renderDynamic('return Yii::$app->request->csrfToken;'); ?>">
<?php echo $form->field($model, 'name'); ?>
<?= Html::submitButton('Save') ?>
<?php ActiveForm::end() ?>
If I don't disable & enable CSRF validation, I get two tokens in the HTML - the first is from the cache and the second one is dynamic.
|
render dynamic csrf hidden input while using cache in yii2
|
Just go through these links: it is not possible to clear the browser's cache from script, but you can prevent caching by using a "no-cache" meta tag or HTTP headers. Alternatively, you can work around it by using query-string parameters, forcing refreshes, etc., or by versioning JavaScript file names so the browser always downloads them fresh.
http://www.sitepoint.com/forums/showthread.php?866717-How-to-force-the-user-s-browser-to-clear-its-cache
http://www.sitepoint.com/caching-php-performance/
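To illustrate the two workarounds mentioned above (the ?v=2 value is an arbitrary version token; bump it whenever the file changes so browsers fetch a fresh copy, and the file names are placeholders):

```html
<!-- meta tag approach: ask the browser not to cache the page itself -->
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<!-- versioning approach: a changed query string forces a re-download -->
<script src="js/gallery.js?v=2"></script>
<img src="uploads/photo.jpg?v=2" alt="cropped image">
```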
|
In my mini project I'm trying to crop an already uploaded image and then show the cropped image in place of the original image on the gallery page. But after I'm done cropping, I'm shown the original image when I'm redirected to the gallery page, and even refreshing the page doesn't work. I have to clear the browser's cache in order to see the cropped image. Can I clear the browser's cache using jQuery?
|
Clear browser cache using jQuery
|
It would be wonderful if you would convert your embarrassment into a community service for all of us by testing something, since you are seeing a condition few people admit to or want to reproduce themselves. :)
Google is known to be relatively (or very, depending on who you ask) good at ignoring hidden content. They implemented this way back when people used to keyword-stuff content by using blocks that were either set to "display: none" or had a tiny/unreadable/white-on-white font.
What you're seeing is a problem for more than just SEO, so I'm suggesting this because things like prerender.io are great but they only solve that one piece. What about users who simply have slow browsers? Well, it turns out Angular has a great solution for this called ngCloak. It hides things like template content fields (ngModel mappings and bindings, expressions, etc) until Angular is ready to fill them in itself.
It's very easy to implement this; you just need a small block of CSS (in an early-loaded file, or embedded directly into your HTML page):
[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
display: none !important;
}
and then you apply the ngCloak directive to items that are affected, or to a high-level parent (like your ng-view DIV):
<div id="wrapper" ng-cloak>
<!-- Page content here -->
</div>
This works kind of like a no-js target, but specifically for AngularJS.
You'll still want something specific to give Google that is actually good material, but at least with the above you can filter out everything else.
|
|
We just updated portions of our website with AngularJS - particularly the login page. When I look at the search results in Google, the cached page and the small snippet it displays automatically is the raw Angular markup. So, lots of {{model.username}}, {{model.errorMessage}}, etc.
I won't link the results since it's too embarrassing.
Any idea of how do get Google to actually store the page as rendered vs. the raw HTML?
|
Raw AngularJs markup appearing in Google search results page
|
The browser will respect the cache time set by the response for that particular asset. Any subsequent GET should look to the cache until the timeout is reached.
It's possible you have devtools ignoring this.
|
We're developing an app with AngularJS and RESTful services. The data returned by services is changed infrequently and I very much would like to cache responses for a period of time. I'm setting Cache-Control: no-transform, max-age=604800 in the response.
Is there a way to have AngularJS JSON requests ($http/$resource) respect browser cache instead of using completely parallel built-in AngularJS cache (http://www.metaltoad.com/blog/angularjs-vs-browser-http-cache) or angular-cache library (http://angular-data.pseudobry.com/documentation/api/angular-cache)? From what I can see watching the network, by default $http requests are ignoring Cache-Control headers.
|
Is there a way to use browser cache for AngularJS JSON requests ($http/$resource)?
|
You can check ttl in Rails cache using command below.
Rails.cache.data.ttl("yourkey")
|
|
So I have some items stored in Rails.cache and I always write them with :expires_in => 5.minutes
My question is, is there a way to see what the ttl is on a cache entry?
I know the entry class in activesupport has a method but I can't seem to get an entry object out of Rails.cache methods.
I'm implementing rate limiting by the way.
|
checking ttl for Rails.cache object
|
I don't think you can achieve what you need using just OutputCache.
Basically you need a data store and a worker. For storage you can use anything from a static variable to an external database.
The same goes for the worker: it might be just a simple long-running task or an external service. Here's a basic sample so you can get the idea of what I am talking about:
public class TestController : Controller
{
private static int _result = 0;
static TestController()
{
Task.Factory.StartNew(async () =>
{
while (true)
{
await Task.Delay(new TimeSpan(0, 0, 5));
_result++;
}
}, TaskCreationOptions.LongRunning);
}
public ActionResult Index()
{
return Json(_result, JsonRequestBehavior.AllowGet);
}
}
This is an interesting way of solving my requirement. I will probably consider this if there is no other elegant way of doing it. Thanks.
– Rosdi Kasim
Jun 14, 2014 at 5:17
|
|
I have a few expensive pages that I cache using ASP.NET output cache like so,
[OutputCache(Duration=3600, VaryByParam = "none")]
Obviously, the cache will expire after 3600 seconds (1 hour), and the next poor guy who happens to load that page will have to wait for the cache to be refreshed from the database.
My question is, how do I make the cache to be refreshed immediately on expiry? So that the next guy who happens to visit the page when the cache had just expired will not have to wait for the cache to be refreshed and instead is served with a new cache?
Update: I need the cache to be updated fairly frequently (every 1 to 3 hours), as I do not want the data to go stale for too long either.
|
Automatically refresh ASP.NET Output Cache on expiry
|
You're right, Memcached doesn't support tags.
You can use an additional key-value entry per tag to emulate tags on top of Memcached.
For example:
$this->objCache->save($arrResults, $strKey, array($strMyTag), $intCacheTime); // note: array($strMyTag) doesn't work for the Memcached backend
MemcacheTag::setTag($strKey, $strMyTag); // our workaround
About the setTag method & MemcacheTag:
class MemcacheTag
{
    private static $objCache; // the shared Zend_Cache backend

    public static function setTag($strKey, $strTag)
    {
        $arrKey = self::$objCache->load($strTag) ?: array();
        $arrKey[] = $strKey;
        self::$objCache->save($arrKey, $strTag); // persist the key list for this tag
    }

    public static function deleteCacheWithTag($strTag)
    {
        $arrKey = self::$objCache->load($strTag) ?: array();
        foreach ($arrKey as $strKey) {
            self::$objCache->remove($strKey);
        }
        self::$objCache->remove($strTag); // drop the tag index itself
    }
}
This workaround is quite simple and it works for my projects.
|
|
We use Memcached and Zend Framework in our web project. Now, we need to clean cache selectively using tags as specified in Zend_Cache API.
Unfortunately, memcached doesn't support tags.
I have found these workarounds:
Memcached-tag project. Has anybody tested it? How to implement it with Zend?
Use wildchards like in this question, but it seems a bit confusing, less transparent and harder to implement with Zend.
Use this implementation or this one, for supporting tags in Memcached, beeing aware of the drawbacks.
Any other option?
Thanks in advance
|
How to selectively clear cache (using tags or other option) with Memchached backend and Zend Framework
|
It depends on how you do your caching.
If you mean the default Django middlewares (UpdateCacheMiddleware, FetchFromCacheMiddleware), the requests never reach Django-Rest-Framework and so are never counted against any throttle. So this is what is really happening.
What you could do is to cache the responses in your views. Since your view-methods (classes) get called by DRF the throttling will be used. drf-extensions has an example on this. This would cache your data before it gets encoded into your output format (json, yaml, xml, ..)
In general you should only cache when you know something is slow. Django's cache middleware caches only based on a timeout. Real cache invalidation can be hard.
|
I'm currently working on an API made with Django-rest-framework. I have to set the throttling rates on a per User-Group basis.
We are currently using memcached with default configuration as the cache backend, which is per-site cache.
While making some simple tests with AnonRateThrottle and UserRateThrottle, I notice that if the requests that the user is making is already cache it doesn't count for the throttle rates.
The documentation says that throttling is determined before running the main body of the view, i guess because the requests is being serve from the cache the view is not being executed so throttle is not take into account.
Basically i wanna ask:
Is this what's really happening?
Could there be a way to count the cache requests for throttling purposes? (pros and cons if you could)
One thing I thought of was caching only the database/Orm lookups, so that every request executes the corresponding view body.
Probably the number of requests that exceeds the throttling rate is not that great, and because they're cached they don't impact the performance of the service, so basically i just want to know the behavior of the service in this case.
|
Django Rest Framework throttling rates with cached requests
|
This was my fault. I was using css/compiled.css?123 in my header. The ?123 was an ever-changing number to bust the cache during development. It seems the application cache treats this as a separate file, so I either have to remove the unique ID or sync it with the application cache for this to work.
|
If I load the page while online chrome console logs:
Document was loaded from Application Cache with manifest http://app.x.com/cache.manifest app.x.com/
Application Cache Checking event app.x.com/
Application Cache Downloading event app.x.com/
Application Cache Progress event (0 of 7) http://app.x.com/public/img/logos/logo.png app.x.com/
Application Cache Progress event (1 of 7) http://app.x.com/public/img/backgrounds/crowd.png app.x.com/
Application Cache Progress event (2 of 7) http://app.x.com/public/img/icons/[email protected] app.x.com/
Application Cache Progress event (3 of 7) http://app.x.com/public/js/third_party/zepto.js app.x.com/
Application Cache Progress event (4 of 7) http://app.x.com/public/css/compiled.css app.x.com/
Application Cache Progress event (5 of 7) http://app.x.com/public/js/compiled.js app.x.com/
Application Cache Progress event (6 of 7) http://app.x.com/ (index)
Application Cache Progress event (7 of 7) (index)
Application Cache UpdateReady event
To me this looks fine?
chrome://appcache-internals shows:
However, as soon as I pull the plug on my Ethernet and refresh I get an un-styled page with the following in my console:
For reference my cache.manifest looks like:
CACHE MANIFEST
# v52
CACHE:
public/js/third_party/zepto.js
public/js/compiled.js
public/css/compiled.css
public/img/logos/logo.png
public/img/backgrounds/crowd.png
public/img/icons/[email protected]
NETWORK:
*
And I have AddType text/cache-manifest .manifest in my .htaccess
|
App cache not working, despite Chrome correctly caching assets
|
When I need to bring something into cache (and there are use-cases for that, for example to shorten the time an offline index build takes), I use something like this:
SELECT COUNT_BIG(*)
FROM T WITH (NOLOCK, INDEX(IndexNameHere))
OPTION (MAXDOP 1)
And run that for each index. It doesn't get more efficient than this. The NOLOCK is there to get an IAM scan instead of a b-tree-order scan.
Still, I'd like to find out why you want this. The DB will be brought gradually into cache when it is being used. Basically on the first access to a page that page is cached. Isn't that enough?
|
I've read a fair amount on this. but can't find a workable solution. I have a server with plenty of RAM and a 10GB DB. I want to load the entire DB (including indexes) into RAM/cache.
This solution doesn't seem to work: http://sqlsmurf.wordpress.com/2011/05/23/sql-warm-up-script/
Is there an way to load everything into RAM storage? I could do SELECT * FROM blah, however that (I believe) wouldn't work as it wouldn't load the indexes properly, it would also be somewhat slow.
|
Warm up database (put the whole db into cache)
|
As far as I know:
CacheStoreMode.USE should be used if a given EntityManagerFactory has exclusive write access to the underlying database, which implies that there is no chance for an entity instance stored in the shared cache to be stale.
CacheStoreMode.REFRESH should be enabled if the underlying database might be accessed by multiple committers (i.e. other EntityManagerFactory instances, applications in different JVMs, external JDBC sources), in which case an entity instance stored in the shared cache may become stale.
In short: CacheStoreMode.USE does not force a refresh of already cached entities when reading from the database, whereas CacheStoreMode.REFRESH does.
answered May 5, 2014 at 10:54 by wypieprz
|
|
The javadocs for CacheStoreMode differentiate in a point I cannot really grasp:
The javadocs for the USE mode:
Insert/update entity data into cache when read from database and when
committed into database: this is the default behavior. Does not force
refresh of already cached items when reading from database.
The javadocs for the REFRESH mode differ in the last sentence:
Forces refresh of cache for items read from database.
When an existing cached entity instance is updated when reading from database, this would typically involve overwriting the existing data. So what is the difference between forcing and not forcing a refresh in this context?
Thank you.
|
What is the difference between CacheStoreMode USE and REFRESH
|
When using the following XML configuration (which is almost the same as the above!) I can find my entries in the store:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
>
<!-- Using the cluster mode with grouping API-->
<global>
<globalJmxStatistics enabled="false" />
</global>
<default>
<!-- Enbaling eviction/expiration -->
<eviction strategy="LRU" maxEntries="2000" />
<expiration lifespan="1000" maxIdle="500" />
<jmxStatistics enabled="false" />
<clustering>
<hash>
<groups enabled="true" />
</hash>
</clustering>
</default>
<namedCache name="CacheStore">
<persistence passivation="false">
<singleFile fetchPersistentState="true"
ignoreModifications="false"
purgeOnStartup="false" location="${java.io.tmpdir}">
<async
enabled="true"
flushLockTimeout="15000"
threadPoolSize="5" />
</singleFile>
</persistence>
</namedCache>
</infinispan>
|
I'm trying to persist cached data from Infinispan 6.0.2 to a file. I'm using the embedded mode, and this is the cache configuration:
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.eviction().strategy(EvictionStrategy.LRU).maxEntries(1)
.persistence()
.passivation(false) // save evicted entries to cache store
.addSingleFileStore()
.preload(true)
.shared(false)
.fetchPersistentState(true)
.ignoreModifications(false)
.purgeOnStartup(false)
.location(System.getProperty("java.io.tmpdir")+"infinispan")
//.async().enabled(true).threadPoolSize(5)
.singleton()
.enabled(true)
.pushStateWhenCoordinator(true)
.pushStateTimeout(20000);
Configuration configuration = builder.build();
It does not work for me (and I get no errors): the file store is created in the file system but contains only "FCS1", and if it already exists nothing happens (i.e. no update).
Here is the code (nothing fancy) for adding key/value pairs to the cache:
// Avoid JMX problems related to org.infinispan already registered domain
GlobalConfiguration globalConf = new GlobalConfigurationBuilder()
//.clusteredDefault()
.globalJmxStatistics()
.mBeanServerLookup(DummyMBeanServer.lookup)
.build();
EmbeddedCacheManager manager1 = new DefaultCacheManager(globalConf, configuration);
manager1.start();
Cache<String, String> cache1 = manager1.getCache(); // default cache
cache1.put("key11", "val11");
cache1.put("key12", "val12");
cache1.put("key13", "val13");
cache1.evict("key11"); // a desperate attempt to move this key to the store
cache1.stop();
// when I restart the cache all data is lost
cache1.start();
|
Persisting data in infinispan cache to file
|
This can be done by checking Content-Length in the backend response: if it is larger than some size, tag the request with a marker and restart the transaction.
For example, files with Content-Length >= 10,000,000 bytes (eight or more digits) should be piped:
sub vcl_fetch {
..
if ( beresp.http.Content-Length ~ "[0-9]{8,}" ) {
set req.http.x-pipe-mark = "1";
return(restart);
}
..
}
The restarted request is then received and parsed again.
Back in vcl_recv we can check our marker and perform the pipe:
sub vcl_recv {
..
if (req.http.x-pipe-mark && req.restarts > 0) {
return(pipe);
}
..
}
|
I have Varnish installed with the default setting on my Apache web server. Apache listing to port 8080 and Varnish listing to 80.
I have a few downloadable files on the website, with sizes of 100MB, 500MB and 1GB.
The 1GB one is not working: when you click on it, it says the page is unavailable or the connection was closed by the server. The other two are working fine, but I'm not sure if this is the correct way to download them.
How do I make varnish bypass these files and get them directly from the web server?
Thank you.
|
Varnish bypass a large file
|
CakePHP's function does
$this->header(array(
'Expires' => 'Mon, 26 Jul 1997 05:00:00 GMT',
'Last-Modified' => gmdate("D, d M Y H:i:s") . " GMT",
'Cache-Control' => 'no-store, no-cache, must-revalidate, post-check=0, pre-check=0'
));
so maybe you can make a function with:
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
header("Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0");
and call it when needed.
|
How can I disable the http cache on Yii? The browser can't update the view, until I refresh the browser manually.
Is there some like cakephp disableCache()?
|
Yii disable http cache
|
If the clients are on different machines (you run Infinispan as server(s)), you should use HotRod clients to access the cache. There, see the getVersioned and replaceWithVersion methods on
RemoteCache.
answered Feb 3, 2014 at 10:12 by Radim Vansa
Ah, I haven't noticed that you know about these operations (according to your previous question).
– Radim Vansa
Feb 3, 2014 at 10:27
I asked this question before I got information about the versioned API. But as discussed in the question stackoverflow.com/questions/21522203/… I have shared the log file which you asked for. Can you please check the issue?
– Dinoop paloli
Feb 4, 2014 at 11:17
I've updated the other answer. Then, please mark this answer as correct.
– Radim Vansa
Feb 8, 2014 at 21:39
|
|
We are planning to use Infinispan in client-server mode. The architecture has many clients (client 1, client 2, and so on) and a distributed Infinispan network.
We need to update the data in the cache periodically, say every 5 hours. All clients should be able to update the data. If one of them (say client 1) is updating, we need to prevent the others from doing the same job. Once the update is complete, all clients wait another 5 hours, and any of them may do the updating again.
How to achieve this in infinispan 6?
Thanks in advance.
|
Infinispan , How to assure read write lock on a key/cache
|
That image doesn't show up when I run PageSpeed Insights: https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.megatravel.com.mx%2F
Instead, my run says that you need to leverage browser caching for:
http://d2aj9ttnhtlit4.cloudfront.net/logo-mega-travel.png (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/elige-tu-destino.jpg (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/viajes-a-europa.jpg (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/viajes-a-medio-oriente.jpg (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/viajes-a-sudamerica.jpg (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/viajes-africa.jpg (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/viajes-al-pacifico.jpg (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/viajes-asia.jpg (expiration not specified)
http://static.mtmedia.com.mx/mt/carrusel/viajes-canada.jpg (expiration not specified)
Looking at logo-mega-travel.png it looks like it has no Cache-Control headers, so that seems accurate ... but it does look like the others have valid Cache-Control headers, so it's not clear why PageSpeed is complaining about them.
Note that PageSpeed Insights online tool will cache results from testing your website for a short time, so if you just added Cache-Control headers to the other resources, it can take some time to update.
answered Feb 1, 2014 at 4:30 by sligocki
|
|
I've uploaded images and set expiration headers to my S3 Amazon account.
Example: http://d2aj9ttnhtlit4.cloudfront.net/mt/carrusel/elige-tu-destino.jpg
When I check the images headers, it shows:
HTTP/1.1 200 OK =>
Content-Type => image/jpeg
Content-Length => 5389
Connection => close
Date => Wed, 29 Jan 2014 15:53:12 GMT
Cache-Control => max-age=2628000
Expires => Sun, 15 Feb 2015 12:00:00 GMT
Last-Modified => Wed, 29 Jan 2014 15:44:31 GMT
ETag => "16d47fedbba7aedc3e3d454baf1d6f8f"
Accept-Ranges => bytes
Server => AmazonS3
Age => 101140
X-Cache => Hit from cloudfront
Via => 1.1 a7659acb73506d9cdaa5e4d5e6f0ba0b.cloudfront.net (CloudFront)
X-Amz-Cf-Id => 1e7GVW-p4nj88gUBDzVfJnUPzyODHV2pBo1_xFTK67PIqFNuzXRriQ==
...if I run the Page speed test I get the error:
Leverage browser caching for the following cacheable resources:
http://d2aj9ttnhtlit4.cloudfront.net/mt/carrusel/elige-tu-destino.jpg (expiration not specified)
For all the static images in my S3
How is that possible if expiration is set in headers?
Can you help me to understand? Thank you.
|
Leverage browser caching Using S3 Amzon
|
Typically, the DBMS will put the requested changes in a kind of "queue" in memory, but also write these changes to permanent storage (disk) in the background. At the same time, it's keeping track of what has changed, in case a ROLLBACK is needed.
Having the in-memory queue minimizes blocking on each individual SQL command, yet not waiting too long before writing changes to permanent storage minimizes blocking on COMMIT1.
That's the reason why one larger transaction tends to be faster than a bunch of smaller ones - it gives the DBMS more chance to do things in the background, before being forced to block the client on COMMIT.
1 A "D" in "ACID transaction" stands for "durable", which essentially means that when transaction COMMITs, its effects are guaranteed to already be in permanent storage (and not just in volatile memory that can be lost in case of power failure or other problem).
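As a toy illustration of that queue-then-apply idea (my own sketch, not modeled on any particular DBMS, and omitting the write-ahead log, buffer pool, and checkpoints a real system uses), uncommitted changes can live in an in-memory list that COMMIT applies and ROLLBACK simply discards:

```c
#include <assert.h>
#include <string.h>

#define MAX_PENDING 16

typedef struct { int key; int value; } change_t;

typedef struct {
    int table[8];                  /* the "database" itself */
    change_t pending[MAX_PENDING]; /* in-memory queue of uncommitted writes */
    int npending;
} db_t;

static void db_write(db_t *db, int key, int value) {
    /* queued in memory only: not yet visible as committed data */
    db->pending[db->npending++] = (change_t){key, value};
}

static void db_commit(db_t *db) {
    /* apply all queued changes; a real DBMS would first ensure the
       log records are on disk to guarantee durability */
    for (int i = 0; i < db->npending; i++)
        db->table[db->pending[i].key] = db->pending[i].value;
    db->npending = 0;
}

static void db_rollback(db_t *db) {
    db->npending = 0; /* forget the queued changes; nothing was applied */
}
```

Until COMMIT runs, the only record of the change is the pending queue (plus, in a real system, log records), which is why rollback is cheap here.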
|
When a change is "not yet committed" to the DB, where exactly is that info stored? Is it in some temporary table? Written to a file? Or directly in RAM?
|
DB Caching: Where exactly is it happening?
|
You can allocate one large chunk of memory and then arrange the elements of the linked list within that block on any boundary you want.
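To make that concrete, here is a sketch (the function names are mine) of how lat_mem_rd-style code can build such a ring inside a single allocation and then chase it:

```c
#include <assert.h>
#include <stdlib.h>

/* Build a ring of pointers inside one allocated block: the pointer stored
   at byte offset i*stride points at offset (i+1)*stride, and the last
   node points back to the start. Because the block is contiguous, we
   control the spacing exactly; no per-node malloc is needed. */
static char **build_ring(char *block, size_t size, size_t stride) {
    size_t n = size / stride;                       /* nodes in the ring */
    for (size_t i = 0; i + 1 < n; i++)
        *(char **)(block + i * stride) = block + (i + 1) * stride;
    *(char **)(block + (n - 1) * stride) = block;   /* close the ring */
    return (char **)block;
}

/* Chase the ring for `loads` dependent loads: the p = (char **)*p loop
   from lat_mem_rd. Timing this loop gives the average load latency. */
static char **chase(char **p, long loads) {
    for (long i = 0; i < loads; i++)
        p = (char **)*p;
    return p;
}
```

p is initialized to the start of the block. Each load depends on the result of the previous one, so the CPU cannot overlap them, which is why timing the chase loop measures latency rather than bandwidth (and why dereferencing an uninitialized p in isolation segfaults).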
|
I am trying to understand how lmbench measures latency for L1, L2 and main memory.
The man page for lat_mem_rd mentions the method, but it's not clear to me:
The benchmark runs as two nested loops. The outer loop is the stride
size. The inner loop is the array size. For each array size, the
benchmark creates a ring of pointers that point forward one stride.
Traversing the array is done by
p = (char **)*p;
in a for loop (the over head of the for loop is not significant; the
loop is an unrolled loop 1000 loads long). The loop stops after doing
a million loads.
How do you "create a ring of pointers that point forward one stride"? Wouldn't this mean that if the stride size was 128 bytes, you would need to make a linked list with each node separated by exactly 128 bytes from its previous one? malloc just returns some arbitrary free piece of memory, so I don't see how that's possible in C.
And in the piece of code, I would always get a segmentation fault. (I tested it; and what is p supposed to be initialized with?)
There is a similar thread on SO(link) and the first answer discusses this, but it does not talk about how strided approach can be used with linked lists. I also looked at the source code itself (lat_mem_rd.c) but couldn't understand this from that either.
Any help is appreciated.
|
How does lmbench measure L1 and L2 cache latencies using C? (cannot understand explanation in manual)
|
Wouldn't the cache content for processors in these states be the same
as the content in main memory?
This depends on the write policy.
If the write-through policy is used and all changes are directly written back to main memory, you are right.
But that's not true for write-back. There could be a state transition from M to S through a snoop read (or relating to your diagram: BusRd, i.e. read request from the bus without intent to modify), at this point the cache is not consistent with the main memory, so the content has to be written back to main memory, if the cache is invalidated.
Also, what exactly is the difference between Flush and Flush'
I assume you have the diagram from wikipedia; there is an updated version, I linked to it above, that doesn't make that distinction.
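To pin down the transitions being discussed, here is a toy snoop-side transition function. The supply/writeback flags are my own labels for the Flush/Flush' actions, and real protocols differ in detail; this is only a sketch of the state machine, not a faithful model of any particular CPU:

```c
#include <assert.h>

/* MESI states and the two bus requests from the diagram. */
typedef enum { M, E, S, I } mesi_t;
typedef enum { BUS_RD, BUS_RDX } bus_t;

/* Next state for a line that snoops a bus request. `supply` is set when
   this cache puts the line on the bus (Flush / Flush'); `writeback` is
   set only when the line is dirty, so main memory must also be updated. */
static mesi_t snoop(mesi_t st, bus_t ev, int *supply, int *writeback) {
    *supply = 0;
    *writeback = 0;
    switch (st) {
    case M: /* dirty: memory is stale, so flush implies a memory write */
        *supply = 1;
        *writeback = 1;
        return ev == BUS_RD ? S : I;
    case E: /* clean: can supply the line, but memory is already current */
        *supply = 1;
        return ev == BUS_RD ? S : I;
    case S: /* Flush': at most one of the sharers answers on the bus */
        *supply = 1;
        return ev == BUS_RD ? S : I;
    case I:
    default:
        return I;
    }
}
```

The point the answer makes shows up in the flags: leaving M requires a memory writeback, while leaving E or S does not, because those copies are consistent with memory.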
answered Aug 1, 2015 at 12:55 by mike
|
|
In the state transition diagram shown below for the Illinois MESI protocol, why is there a Flush' signal when transitioning from state S to state I and a Flush signal when going from state E to state I upon observing a BusRdX signal. Wouldn't the cache content for processors in these states be the same as the content in main memory? If so, what would be the point of these caches going to state I to flush their data? Also, what exactly is the difference between Flush and Flush'. Is it just that in Flush', data is exchanged transferred by one cache?
Transition diagram:
|
MESI protocol understanding state transitions
|
Spotlight has an index of the files and documents, as well as their contents. Wikipedia has a quite a good description.
You can select which files or directories you want it to index in the System Preferences. It wouldn't be able to read encrypted files but they would show up as metadata (ie. the files themselves, with filename) instead if they were in the directory you saw.
answered Nov 7, 2013 at 2:08 by Leigh
|
|
Obviously Spotlight is a very useful resource in finding documents because it has the ability to search for text within a document as well as text in the title. This process, however, must require a large amount of processing power and time. Furthermore, a certain document, lets say a pdf is generally encrypted, and if spotlight had to decrypt every .doc or .pdf in order to search through it, this would be incredibly slow. This implies that somewhere, macs store a cached version of files such as our pdf. Does anyone know where? Or can anyone disprove this idea?
Thanks
|
Mac spotlight: How does it work so quickly?
|
A blog post explaining memory leak issue with LinkedHashMap in multi-thread context
https://hoangx281283.wordpress.com/2012/11/18/wrong-use-of-linkedhashmap-causes-memory-leak/
answered Apr 8, 2014 at 3:35 by Hoang TO
|
|
I'm trying to use LinkedHashMap as a local FIFO cache solution, overriding its removeEldestEntry method to keep the size fixed:
Map lhm = new LinkedHashMap(MAX_CACHE_SIZE + 1, .75F, false) {
protected boolean removeEldestEntry(Map.Entry eldest) {
return size() > MAX_CACHE_SIZE;
}
};
but when I constantly add new entries to the map monitoring process memory I see it keeps growing until max virtual machine memory is used though map size doesn't increase.
Is it by design? Why does it need more memory if old values are discarded and size of the map is limited?
UPDATE:
as requested I'm attaching full code:
@Test
public void mapMemory() {
final int MAX_CACHE_SIZE = (int) 1E3;
Map lhm = new LinkedHashMap(MAX_CACHE_SIZE + 1, 1F, false) {
protected boolean removeEldestEntry(Map.Entry eldest) {
return size() > MAX_CACHE_SIZE;
}
};
for (long i = 0; i < 1E10; i++) {
lhm.put("key_" + i, "VALUE");
}
}
|
Fixed size LinkedHashMap memory leak?
|
I'm guessing that the 128B peak is most likely due to spatial prefetching. You can see in Intels' Optimization guide, under section 2.1.5.4
This prefetcher strives to complete every cache line fetched to the L2 cache with the pair line that completes it to a 128-byte aligned chunk
It wouldn't be a clean jump since this prefetches is not always firing, and even when it does, it only prefetches into the L2, but it's much better than fetching from memory. To make sure this is the case, you can disable prefetches (through BIOS or other means, although some systems may not support that), and check again.
As for the L3 size: you didn't specify your exact model, but I'm guessing you have more than 4M of L3. Just keep the curve going and see if it jumps.
EDIT
Just noticed another thing - your k*i expression is probably overflowing int at the max range, which means your access pattern might not be cyclic as you expect.
edited Jun 20, 2020 at 9:12
answered Oct 27, 2013 at 9:25 by Leeor
|
|
It might be a very common and simple question but I need some explanation about the curve that I just obtained from a cache benchmark code. The goal here is to find the cache line size. I used the code from here:
(h**ps://github.com/jiewmeng/cs3210-assign1/blob/master/cache-l1-line.cpp)
This is the curve that I have obtained from running the code on my machine (Macbook Pro with core i7 - cache line size is 64byte - L1 data cache is 32KB).
The Time vs different stride size curve
I think the peak happens on 128 bytes and not on the 64 bytes. if it is true I want to know why?
Why the time is reduced at 512 bytes?
Update:
I also ran a code to determine the size of the L1 and L2 caches. Here is the figure just to document the data. As you can see there is two peak in 32KB (L1 Cache size) and 256KB (L2 Cache size).
Question:
I am wondering if there is any way to find the size of L3 shared cache.
Cache size figure.
Thanks
|
Cache line size
|
It depends on how the HTTP server is configured. More specifically, in the response header, there's a key Cache-Control: that sets this behaviour.
If you set it in your getfoo action response as Cache-Control: no-cache, well, the first option will returns "fresh new" data every time is called.
|
Can someone please clarify how caching works ? Or does it entirely depend on the browser ?
<script type="text/script" src="/controller/getfoo"> </script>
is a dynamic way of serving js file where request is set as a js file from controller .
<script type="text/script" src="/somewhere/foo.js"> </script>
is the most normal way of doing it .
How does the caching work in both cases?
In case 1, is it going to send an HTTP request every time? However, that happens only during page load. In case 2, does the browser understand that the file name is mentioned explicitly and check whether foo.js is already available locally, only sending an HTTP request if it is not?
How does caching in the case of dynamic files/urls work?
|
Live caching defines the delay of the entire stream, not the idle time between individual frames. That is, the stream is sent at the full frame rate, just delayed by 300 ms.
Delays like this let the sending application compensate for irregular frame acquisition and capture/read delays, while still delivering output at a steady rate.
To redefine the effective output frame rate you typically transcode the feed, e.g. Stream Output, Destinations, Add, Activate Transcoding, Edit Selected Profile, Video Codec, Frame Rate.
|
I'm streaming a video captured on a webcam to a remote computer using vlc media player. In 'Show more options' under 'Stream', there is an option for setting 'Caching time' which is set to 300ms by default. In the vlc streaming manual, it is given that 'Caching time' refers to the time vlc has to wait before transmitting a frame. So 300ms means in a second, it can transmit 3-4 frames. But the video at the receiver looks fairly continuous, for which a frame rate of 30 per second is needed. So how is this maintained?
|
Caching time and frame rate in video streaming using vlc media player
|
The first two cases you are exposing in your question are about the same. Things would really change in the following two cases:
CASE 1:
for(int j = 0; j < 1000; j++)
{
    for(int i = 0; i < 10; i++)
    {
        b[i] += a[i][j];
    }
}
Here you are accessing the matrix "a" as follows: a[0][0], a[1][0], a[2][0], ... (walking down a column). In most architectures, matrix structures are stored in memory row by row: a[0][0], a[0][1], a[0][2], ... (the first row, followed by the second row, and so on). Imagine your cache could only store 5 elements and your matrix is 6x6. The first "pack" of elements brought into cache would be a[0][0] to a[0][4]. Your first access loads a[0][0] into cache, but the second access, a[1][0], is not in cache! So the pack a[1][0] to a[1][4] is brought into cache. Then you do the third access, a[2][0], which is out of cache again. Another cache miss!
As you can conclude, case 1 is not a good solution for the problem. It causes lots of cache misses that we can avoid by changing the code to the following:
CASE 2:
for(int i = 0; i < 10; i++)
{
    for(int j = 0; j < 1000; j++)
    {
        b[i] += a[i][j];
    }
}
Here, as you can see, we are accessing the matrix in the same order it is stored in memory. Consequently it's much better (faster) than case 1.
About the third code you posted, on loop tiling: loop tiling and also loop unrolling are optimizations that in most cases the compiler does automatically. Here's a very interesting post on Stack Overflow explaining these two techniques.
Hope it helps! (sorry about my English, I'm not a native speaker)
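The whole argument rests on row-major storage: elements that differ only in the second index are adjacent in memory, while elements that differ in the first index are a full row apart. A quick sketch to check this (the sizes match the question's loop bounds; the helper names are mine):

```c
#include <assert.h>
#include <stddef.h>

enum { ROWS = 10, COLS = 1000 };

/* Distance, in elements, between two accesses that differ in one index.
   In row-major C arrays, stepping j moves 1 element; stepping i moves
   a whole row (COLS elements), which is what defeats the cache. */
static ptrdiff_t step_in_j(int a[ROWS][COLS]) { return &a[0][1] - &a[0][0]; }
static ptrdiff_t step_in_i(int a[ROWS][COLS]) { return &a[1][0] - &a[0][0]; }
```

So the cache-friendly loop nest is the one whose innermost loop varies j, since consecutive iterations then touch consecutive addresses and stay within the same cache lines.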
|
Which of these optimizations is better and in what situation? Why?
Intuitively, I am getting the feeling that loop tiling will in general
be a better optimization.
What about for the below example?
Assume a cache which can only store about 20 elements in it at any time.
Original Loop:
for(int i = 0; i < 10; i++)
{
for(int j = 0; j < 1000; j++)
{
a[i] += a[i]*b[j];
}
}
Loop Interchange:
for(int i = 0; i < 1000; i++)
{
for(int j = 0; j < 10; j++)
{
a[j] += a[j]*b[i];
}
}
Loop Tiling:
for(int k = 0; k < 1000; k += 20)
{
for(int i = 0; i < 10; i++)
{
for(int j = k; j < min(1000, k+20); j++)
{
a[i] += a[i]*b[j];
}
}
}
|
Loop interchange versus Loop tiling
|
DOM elements have properties and attributes.
If you change an attribute e.g. the value="" then the DOM is changed.
But the current value of a form element is stored in the property value and this is the one that is changed when the user types something e.g. into an input field.
If the attribute changes the css rules needs to be rechecked if some don't apply anymore or some others will apply now.
Here a little example:
CSS
[value='foo'] {
color: red;
}
[value='bar'] {
color: green;
}
HTML
<input id="text-element" type="text" value="foo"><br>
<a href="#" id="prop-change">prop-change</a>
<a href="#" id="attr-change">attr-change</a>
JS
document.getElementById("attr-change").onclick = function() {
document.getElementById("text-element").setAttribute("value","bar");
return false;
};
document.getElementById("prop-change").onclick = function(e) {
document.getElementById("text-element").value = "bar";
return false;
};
Live Demo (JSFiddle)
If you try this in a current Firefox or Chrome and type bar into the field or click on prop-change, the color does not change to green.
But if you click on attr-change it turns green, because the attribute changes.
Additionally, if you reload and type e.g. test into the field and then press attr-change, you will see that it turns green but test is still the current value.
edited Apr 30, 2014 at 19:59
answered Sep 20, 2013 at 13:56 by t.niese
|
|
I've read somewhere in some documentation that most browsers don't update the DOM as form values change, because frequent DOM manipulation needs heavy computing performance. Instead, they create a cache of form values to register form manipulation, and only update the DOM when the form is submitted.
|
How does form value caching work in browsers?
|
LiipDoctrineCacheBundle provides a service wrapper around Doctrine's common Cache (documentation) that allows you to use several cache drivers like filesystem, apc, memcache, ...
I would recommend loading your generic container-parameters/settings (like maintainance mode,...) from database in a bundle-extension or a compiler-pass.
route-specific settings (like page title, ...) could be loaded in a kernel event listener. You can find a list of kernel events here.
update/invalidate their cache using a doctrine postUpdate/postPersist/postRemove listener.
edited Sep 13, 2013 at 11:57
answered Sep 13, 2013 at 11:48 by Nicolai Fröhlich
|
|
I have a db table (doctrine entity) that I use to store some editable settings for my app, like page title, maintenance mode (on/off), and some other things..
I can load the settings normally using the entity manager and repositories, but I think that's not the best solution...
My questions are:
- can I load the settings only once at some kernel event and then access them the same way I access any other setting saved in yml config files?
how can I cache the database settings, so I would only do one DB query, and then in future page requests, it would use the cached values, instead of doing a DB query for each page request? (of course, everytime I change something in the settings, I would need to purge that cache, so the new settings could take effect)
|
Symfony2 load settings from database
|
Here is what I could do using the StaticPublishQueue module. In your NewsDataObject.php:
function onAfterWrite() {
parent::onAfterWrite();
$urls = array();
$pages = $this->Pages(); //has_many link to pages that include this DataObject
foreach($pages as $page) {
$pagesAffected = $page->pagesAffected();
if ($pagesAffected && count($pagesAffected) > 0) {
$urls = array_merge((array)$urls, (array)$pagesAffected);
}
}
URLArrayObject::add_urls($urls);
}
This takes each of the pages that references your DataObject, asks it for its URLs and the URLs of any related pages (e.g. virtual pages that reference that page), compiles all the URLs into a big array, then adds that array to the static publishing queue. The queue is gradually processed until all the affected pages are republished.
The event system allows you to add a layer of abstraction between the republishing and the triggers for republishing, but for something simple you don't necessarily need to use it. Instead, you can add pages to the queue directly. (You might also like to read this blog post describing the StaticPublishQueue module)
|
is there a possibility to trigger an update of the cache if a DataObject is edited?
for example updating a News DataObject should update the cache of pages, that are displaying these NewsObjects.
many thanx,
Florian
|
silverstripe static publisher - pages affected by DataObject Changes
|
The cache store delegates to redis-rb, and since redis-rb hasn't implemented maxmemory, there's no reason passing it would work.
In particular, redis's maxmemory is configured in the redis server's config, so I don't think you can set it through a connecting client (i.e. redis-rb).
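For reference, these directives belong in the server-side redis.conf; the values below simply mirror the ones the question tried to pass through the Rails cache-store options:

```conf
# redis.conf (server side); these cannot be set through
# redis-rb's cache-store options
maxmemory 25gb
maxmemory-policy volatile-ttl
```

They can also be changed at runtime with redis-cli, e.g. `CONFIG SET maxmemory 25gb`.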
|
I'm trying to set maxmemory and maxmemory-policy in my cache_store configuration of my Rails app.
I did the following in my production.rb file:
redis_url = "redis://localhost:6379/0"
config.cache_store = :redis_store, redis_url, { :expires_in => 4.weeks ,
:namespace => 'rails-cache',
:maxmemory => '25gb',
'maxmemory-policy' => 'volatile-ttl'}
But the maxmemory doesn't seam to be working. When I do Rails.cache.methods I don't get any methods about memory or max.
I dont' see any examples on the web for Rails, the closest thing was handling redis maxmemory situations with rails when using rails caching but it doesn't give any examples.
I also cloned and grepped for maxmemory in the the redis-rb gem (https://github.com/redis/redis-rb), but nothing comes up. So it seem like it has not been implemented.
|
Rails Redis setting maxmemory and maxmemory-policy
|
In your layout, you have two footer blocks which use the same page/html_footer type. As this block type is not intended to be used more than once on the same page, the content it displays the first time is cached and returned on later calls (see Mage_Page_Block_Html_Footer::getCacheKeyInfo()). So, for one of your footer blocks, you should use another block type (this should be footer_block, as it's the one not existing in base Magento).
On a side note, your footer_block block is defined twice, once in page.xml and once in catalog.xml, and both of your footer blocks contain a child named bottom.container, so you could try to remove it from the definition of footer_block.
|
I have Magento CE 1.7.0.2 site with a custom created theme.
Problem is: Only when I turn on cache - some content on page "doubles". So footer showed on page again in the end of the page.
Screen: http://img37.imageshack.us/img37/3038/eqv7.jpg
(Shop By block and footer doubled, as you see in the bottom)
Any suggestions how to fix? Or where to start looking at?
Thanks for any help,
Stanislav.
P.S.
Code of "1-column.phtml" (this page template PHTML)
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<?php echo $this->getChildHtml('head') ?>
</head>
<body<?php echo $this->getBodyClass()?' class="'.$this->getBodyClass().'"':'' ?>>
<?php echo $this->getChildHtml('after_body_start') ?>
<div class="wrapper">
<?php echo $this->getChildHtml('global_notices') ?>
<?php echo $this->getChildHtml('header') ?>
<div class="category-page">
<?php echo $this->getChildHtml('breadcrumbs') ?>
<div class="bread" style="margin-top:40px"></div>
<?php echo $this->getChildHtml('global_messages') ?>
<div class="product-page" style="min-height:auto;">
<div class="content_main">
<?php echo $this->getChildHtml('content') ?>
</div>
</div>
</div>
<div class="bread2"></div>
<?php echo $this->getChildHtml('footer_block') ?>
</div>
<?php echo $this->getChildHtml('footer') ?>
<?php echo $this->getChildHtml('before_body_end') ?>
</body>
</html>
|
Magento cache adds content twice
|
The @Cached annotation doesn't work for every method call. It only works for Actions and moreover, you can't use parameters as a cache key (it's only a static String). If you want to know how it works, look at the play.cache.CachedAction source code.
Instead, you will have to either use Cache.get(), check if the result is null, and then Cache.set(), or use Cache.getOrElse() with a Callable, with code like:
public static User getUser(final int userId, final String otherParam){
return Cache.getOrElse("user-" + userId + "-" + otherParam, new Callable<User>() {
@Override
public User call() throws Exception {
return getUserFromDatabase(userId, otherParam);
}
}, DURATION);
}
Be careful when you construct your cache keys to avoid naming-collision as they are shared across the whole application.
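The compute-on-miss contract of getOrElse can be illustrated with a plain ConcurrentHashMap; the CacheSketch class below is a hypothetical stand-in for illustration, not part of the Play API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal sketch of Cache.getOrElse semantics: compute on miss, reuse on hit.
// Class and method names here are illustrative only.
public class CacheSketch {
    private static final Map<String, Object> store = new ConcurrentHashMap<>();
    static int loads = 0; // counts actual computations, for demonstration

    static Object getOrElse(String key, Supplier<Object> loader) {
        // computeIfAbsent only runs the loader when the key is missing
        return store.computeIfAbsent(key, k -> loader.get());
    }

    static String getUser(int userId, String otherParam) {
        // Compose the key from every parameter to avoid collisions,
        // as the answer recommends ("user-" + userId + "-" + otherParam)
        String key = "user-" + userId + "-" + otherParam;
        return (String) getOrElse(key, () -> { loads++; return "user:" + userId; });
    }

    public static void main(String[] args) {
        getUser(1, "a");
        getUser(1, "a"); // second call is a hit; the loader is not re-run
        getUser(2, "a"); // different key, new computation
        System.out.println(loads); // prints 2
    }
}
```

computeIfAbsent runs the loader at most once per key, which mirrors the compute-on-miss behaviour of Cache.getOrElse.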
answered Jun 6, 2013 by mguillermin
|
Can somebody explain, with a sample, how the cache annotation works in Play Framework 2 in Java?
I would like to cache the result of the method with its parameters; something like this:
@Cache(userId, otherParam)
public static User getUser(int userId, String otherParam){
return a User from dataBase if it isn't in cache.
}
Maybe a tutorial is available?
Thanks for your help.
|
play framework cache annotation
|
Right, currently there are only two http cache storage backends available built-in (docs):
DBM storage backend
Filesystem storage backend
There is also a mongodb cache storage implemented.
If I were you I'd implement the memcached cache storage myself. It should be pretty straightforward: it's just a class with several methods required by the contract (see the dbm storage class or the mongodb storage class, for example).
Hope that helps.
|
I'm using the Scrapy framework for crawling, and I wonder if there is a way to use memcached as the storage backend for the HTTP cache (the HTTPCACHE_STORAGE option). The backends available out of the box are file-based and DBM, which don't suit my situation. Is there a possibility to use memcached?
|
Scrapy http cache storage in memcached
|
Depends on Varnish version,
From Varnish 3.0.2 you can stream uncached content while it caches the full object.
https://www.varnish-software.com/blog/http-streaming-varnish
"Basically, his code lifts the limitations of the 3.0 release and allows Varnish to deliver the objects, while they are being fetched, to multiple clients."
The feature will be available on beresp.do_stream
https://www.varnish-software.com/blog/streaming-varnish-30
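In VCL terms (Varnish 3.0.2+), enabling it is a one-liner; a minimal sketch, assuming the default fetch flow:

```vcl
sub vcl_fetch {
    # deliver the object to clients while it is still being fetched
    set beresp.do_stream = true;
}
```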
|
I'm trying to configure Varnish to cache range requests. I notice the http_range_support option, but everything I've read says that this will attempt to cache the entire file before satisfying the request. Is it possible to do so without requiring the entire file already be cached?
|
How can I configure Varnish to cache range requests?
|
This is the normal behavior of any system that uses a cache:
at first-time execution the results are loaded into the cache, so a small overhead appears
the following executions take much less time because the results are already in the cache, BUT if any input is changed (like in your case: changing some parameters of the stored procedure) then the results already in the cache are no longer viable, so the new results (using the changed inputs) must be put in the cache; that is why it takes longer
You can read more here
answered May 10, 2013 by Stephan
|
Are stored procedures cached in MySQL? If yes, how long do they stay in the cache?
In my case, when I call a stored procedure for the first time it gives me the result in 1 sec; after that it gives me the result in 400 ms. And when I change some parameters passed to the stored procedure and call it for the first time, the same behavior occurs. So I cannot understand what is happening. Can someone guide me?
Thanks.
|
Should stored procedure cached in Mysql
|
I have a use case for Rabbit MQ that holds the last valid status of a system. When a new client of that system connects it receives the current status.
It is so simple to do!
You must use the Last Value Cache custom exchange https://github.com/simonmacmullen/rabbitmq-lvc-plugin
Once installed you send all your status messages to that exchange. Each client that needs the status information will create a queue that will have the most recent status delivered to that queue on instantiation. After that it will continue to receive status updates.
|
I am wondering if MQ can be used as a state cache for monitoring? And is this a good idea or not?
In theory you can have many sources (monitoring agents) that detect problem states and distribute them to subscribers via an MQ system such as RabbitMQ. But has anyone heard of using MQ systems to cache the state, so when clients initialize, they read from the state queue before subscribing to new state messages? Is that a bad way to use MQ?
So to recap, a monitor would read current state from a state queue then setup a subscription queue to receive any new updates. And the state queue would be maintained by removing any alerts that are no longer valid by the monitoring agents that put the alert there to begin with.
The advantage would be decentralized notification, and it is theoretically very scalable by adding more MQ systems to relay events.
|
MQ Cache? good or bad idea?
|
I had the same problem.
Adding an ehcache.xml file to the config directory seemed to fix it.
See http://ehcache.org/documentation/integrations/grails for details.
Btw, I had to replace the attributes 'maxEntriesLocalHeap' with 'maxElementsInMemory'.
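For reference, a minimal ehcache.xml matching the cache declared in Config.groovy might look like this (the maxElementsInMemory value of 1000 is an arbitrary example):

```xml
<ehcache>
    <defaultCache maxElementsInMemory="1000" eternal="false"
                  timeToLiveSeconds="120" overflowToDisk="false"/>
    <cache name="winners"
           maxElementsInMemory="1000"
           eternal="false"
           timeToLiveSeconds="10"
           overflowToDisk="false"/>
</ehcache>
```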
answered Aug 30, 2013 by Jaanus (edited by Diosney)
I've honestly moved on from my initial question at this point and I am not in a position to test your answer. At some point I may revisit Grails caching, but until then accept an upvote for being the only one to answer this question.
– James McMahon, Aug 30, 2013
|
With the Grails ehcache plugin I'm trying to cache a response and occasionally refresh that response.
This is what I have in my Config.groovy,
grails.cache.config = {
cache {
name 'winners'
eternal false
timeToLiveSeconds 10
}
}
And I am annotating the winners endpoint with @Cacheable('winners').
I am seeing the response get cached, but I am never seeing the cache expire. Anyone have any clue what might be going on here?
|
Grails ehcache plugin cache not expiring
|
You can also consider to use HazelCast In Memory Data Grid solution. I have used it, I would say its one of the best solution I came across.
answered Apr 23, 2013 by Alpesh Gediya
|
This question already has answers here:
Is there a open-source off-heap cache solution for Java?
(7 answers)
Closed 10 years ago.
I am looking for In memory Cache for Java that does not use JVM Heap space.
I have looked into EhCache and JCS but both of them uses the Heap.
I want it to be scalable.
|
In memory Cache for Java that does not use JVM Heap space [duplicate]
|
You set the Cache-Control on a per route basis:
get '/stream/:service/:stream_id.png' do
# Building image_url omitted
response['Cache-Control'] = "public, max-age=0, must-revalidate"
redirect image_url
end
|
I have a Sinatra route for displaying a status image. While this simple solution works, I run into caching issues:
get '/stream/:service/:stream_id.png' do
# Building image_url omitted
redirect image_url
end
What is a proper way to handle the caching here, to set a max TTL? These images will be embedded on other sites, otherwise I could simply link straight to the images I redirect to.
The problem is that it generates an URL like site.com/image.png which in turn redirects elsewhere -- but it's site.com/image.png that is considered cached by the browser, so it won't check if it's updated.
I've experimented a bit with Cache-Control headers, but I have yet to find a solution.
I'm open for other solutions if this method is completely boneheaded.
|
Caching a redirect to a static image in Sinatra
|
I don't see any reason for you to make this a singleton. I would just write a wrapper for each property that gets/sets the values. That way you can inject it much more easily with an IoC container and increase testability.
As it stands now, your code isn't thread safe which could cause problems too.
answered Mar 6, 2013 by Daniel A. White
How often is more than one thread working with the same instance of HttpContext.Current though?
– Yuck
Mar 6, 2013 at 13:37
I'm not concerned about the current context - thats thread local. Its this class. _instance isn't guarded.
– Daniel A. White
Mar 6, 2013 at 13:38
@Yuck, why do you think its unlikely there will be more than one thread accessing the Cache object? it is shared by the app domain, isnt it?
– YavgenyP
Mar 6, 2013 at 13:41
@YavgenyP Cache is thread safe by design. it's really up to the provider to do that. i'm not sure how asp.net handles it.
– Daniel A. White
Mar 6, 2013 at 13:42
Cache might be thread safe (i think it is), but not if you wrap the access to it with an if..else statement, so it depends on the the way he handles the keys here. anyway, to me his code doesnt make any sense (i dont understand what _instance does, as it seems it gets overriden everytime).
– YavgenyP
Mar 6, 2013 at 13:46
|
I am using the given singleton pattern to cache a class object in ASP.NET. Can anyone highlight the disadvantages of this approach?
public class CacheManager
{
private static CacheManager _instance;
protected CacheManager(string key) { id = key; }
public static CacheManager getInstance(string key)
{
if (HttpContext.Current.Cache[key] == null)
HttpContext.Current.Cache.Insert(key, _instance = new CacheManger(key), null, System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromSeconds(120),CacheItemPriority.Default,new CacheItemRemovedCallback(ReportRemovedCallback));
return (CacheManager)HttpContext.Current.Cache[key];
}
private static void ReportRemovedCallback(string CacheManager, object value, CacheItemRemovedReason reason)
{
_instance = null;
}
public string id { get; private set; }
public string field1 { get; set; }
public string field2 { get; set; }
public string field3 { get; set; }
}
|
Disadvanage of given Singleton cache patterm
|
Delphi doesn't "cache DCU files" except in memory when it's compiling.
If you have new properties that you can't access in the new components, you haven't properly uninstalled the old ones, and the IDE is getting its information from the old designtime/runtime packages.
You need to properly remove the old packages and dcus before installing the new ones, and then properly install the new packages into the IDE. The DCUs by default are installed in the SecureBlackBox\Sources folder after installation (they're created there when the packages are compiled and installed).
You may also need to remove the compiled package files (.dcp) from your computer before installing the new ones. Search your computer for *.dcp files; you should find several of them related to SecureBlackBox in your My Documents folder. They should be removed before installing the new versions as well.
|
I had an older version of Eldos SecureBlackBox installed. Now I have uninstalled it and installed the latest version. Unfortunately Delphi caches the old DCU file, so I cannot use the new property from the new DCU file.
Does anyone know where Delphi 7 caches the DCU files?
What do I have to clear so that the new DCU file is loaded?
I have tried clean up with CCLeaner, but without success.
Thanks
Walter
|
Does anyone knows, where Delphi 7 caches the DCU files?
|
The request is being rejected by the CachedResponseSuitabilityChecker in its canCachedResponseBeUsed method. If you need different behavior, that's the class to implement your own version of, and then use the long constructor of CachingHttpClient:
CachingHttpClient(HttpClient backend,
CacheValidityPolicy validityPolicy,
ResponseCachingPolicy responseCachingPolicy,
HttpCache responseCache,
CachedHttpResponseGenerator responseGenerator,
CacheableRequestPolicy cacheableRequestPolicy,
CachedResponseSuitabilityChecker suitabilityChecker,
ConditionalRequestBuilder conditionalRequestBuilder,
ResponseProtocolCompliance responseCompliance,
RequestProtocolCompliance requestCompliance)
answered Feb 12, 2013 by evil otto
The reject point should come from CacheableRequestPolicy. The long constructor could not be inherited by subclass. So, you have to copy the whole source code into your package. This will make bug fix patch difficult.
– SXC, Feb 13, 2013
|
I am using Apache's CachingHttpClient to query a REST API from Java code.
I want to cache some HTTP responses despite receiving a "Cache-Control: no-cache" header, which causes the CachingHttpClient not to cache the file.
With a standalone HTTP proxy such as Squid or mod_cache, I could tweak the configuration to ignore those headers and override the default behaviour.
I'd rather not go for a standalone HTTP proxy but rather stay 100% in Java code.
is there another http client that would offer more control on caching ?
can I implement an intermediate layer/proxy that would rewrite the headers ?
can I patch cachingHttpClient through inheritance ?
|
cachingHttpclient cannot ignore header "Cache-Control: no-cache"
|
So you have one database and many applications on various servers. Each application has its own cache and all the applications are reading and writing to the database.
Look at a distributed cache instead of caching locally. Check out memcached or AppFabric. I've had success using AppFabric to cache things in a Microsoft stack. You can simply add new nodes to AppFabric and it will automatically distribute the objects for high availability.
If you move to a shared cache, then you can put expiration times on objects in the cache. Try to resist the temptation to proactively evict items when things change. It becomes a very difficult problem.
I would recommend isolating your critical items and only cache them once. As an example, when working on an auction site, we cached very aggressively. We only cached an auction listing's price once. That way when someone else bid on it, we only had to do one eviction. We didn't have to go through the entire cache and ask "Where does the price appear? Change it!"
For 95% of your data, the reads will expire on their own and writes won't affect them immediately. 5% of your data needs to be evicted when a new write comes in. This is what I called your "critical items". Things that always need to be up to date.
Hope that gives you ideas!
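To make the expire-on-their-own idea concrete, here is a minimal sketch of a per-entry TTL map; the TtlCache class is illustrative only, not the API of AppFabric or memcached:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of expiration-based caching: each entry carries its own deadline,
// and stale entries are lazily evicted on read rather than proactively.
public class TtlCache<V> {
    private static final class Entry<T> {
        final T value;
        final long expiresAtMillis;
        Entry(T value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(String key, V value) {
        store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    /** Returns null when the key is absent or its TTL has elapsed. */
    public V get(String key) {
        Entry<V> e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAtMillis) {
            store.remove(key, e); // lazily evict the stale entry
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        TtlCache<String> prices = new TtlCache<>(100);
        prices.put("auction-42", "10.00");
        System.out.println(prices.get("auction-42")); // prints 10.00
        Thread.sleep(150);
        System.out.println(prices.get("auction-42")); // prints null (expired)
    }
}
```

A real distributed cache adds network transport and replication on top of exactly this kind of deadline bookkeeping; evicting lazily on read keeps writes cheap, which matches the advice above to resist proactive eviction for all but the critical items.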
|
I have a high-performance application I'm considering making distributed (using rabbitMQ as the MQ). The application uses a database (currently SQLServer, but I can still switch to something else) and caches most of it in the RAM to increase performance.
This causes a problem because when one of the applications writes to the database, the others' cached database becomes out-of-date.
I figured it is something that happens a lot in the High-Availability community, however I couldn't find anything useful. I guess I'm not searching for the right thing.
Is there an out-of-the-box solution?
PS: I'm sorry if this belongs to serverfault - Since this a development issue I figured it belongs here
EDIT:
The application reads and writes to the database. Since I'm changing the application to be distributed - Now more than one application reads and writes to the database. The caching is done in each of the distributed applications, which are not aware to DB changes from another application.
I mean - How can one know if the DB was updated, if he wasn't the one to update it?
|
Caching database table on a high-performance application
|
You could use an SqlDependency Object.
Subscribe to its onChange event and you'll be able to respond when your data changes.
Enable service broker on your database
Grant subscribe query notifications to your asp.net account
|
I would like to find out the best practice for the following scenario: we have a database table created using Entity Framework (code-first). There's a DBContext and a local collection used as cache, corresponding to the data in the table.
We need to find out if and when someone updates the database manually (any CRUD ops) in order to keep the cache in sync with the database at all times. Ops will be accessing the DB and there's no way around it - so it has to be a technical solution and not a BI one. How can this be done?
Thanks.
|
SQL Server : find out if a db table has changed (using entity framework)
|
You probably missed the query_cache_limit option, which prevents resultsets larger than this from being cached.
You may also have a non-standard setting for query_cache_type.
|
On Centos 6.3 I'm trying to enable query caching on Mysql.
I have enabled query caching
SHOW VARIABLES LIKE 'query_cache_size';
query_cache_size 52428800
SHOW VARIABLES LIKE 'query_cache_type';
query_cache_type ON
When running a few simple select queries (select * from titles), the Qcache_hits always stays 0.
(I'm using these sample mysql database: https://launchpad.net/test-db/+download)
show status like "Qcache%";
Qcache_free_blocks 1
Qcache_free_memory 52419904
Qcache_hits 0
Qcache_inserts 0
Qcache_lowmem_prunes 0
Qcache_not_cached 50
Qcache_queries_in_cache 0
Qcache_total_blocks 1
I'm out of options figuring out what's wrong here.
Does anyone have an idea what can be wrong?
|
Qcache_hits always 0
|
I'll expand on @Guffa's answer and share my chosen solution.
When calling the Server.Transfer method, the .NET engine treats the target like an .aspx page, so it doesn't add the appropriate HTTP headers (e.g. for caching) that it would when serving a static file.
There are three options
Using Response.Redirect, so the browser makes the appropriate request
Setting the headers needed and using Request.BinaryWrite to serve the content
Setting the headers needed and calling Server.Transfer
I choose the third option, here is my code:
try
{
DateTime fileLastModified = File.GetLastWriteTimeUtc(MapPath(fileVirtualPath));
fileLastModified = new DateTime(fileLastModified.Year, fileLastModified.Month, fileLastModified.Day, fileLastModified.Hour, fileLastModified.Minute, fileLastModified.Second);
if (Request.Headers["If-Modified-Since"] != null)
{
DateTime modifiedSince = DateTime.Parse(Request.Headers["If-Modified-Since"]);
if (modifiedSince.ToUniversalTime() >= fileLastModified)
{
Response.StatusCode = 304;
Response.StatusDescription = "Not Modified";
return;
}
}
Response.AddHeader("Last-Modified", fileLastModified.ToString("R"));
}
catch
{
Response.StatusCode = 404;
Response.StatusDescription = "Not found";
return;
}
Server.Transfer(fileVirtualPath);
|
I'm using an .aspx page to serve an image file from the file system according to the given parameters.
Server.Transfer(imageFilePath);
When this code runs, the image is served, but no Last-Modified HTTP header is created, as opposed to the same file being requested directly by URL on the same server.
Therefore the browser doesn't issue an If-Modified-Since and doesn't cache the response.
Is there a way to make the server create the HTTP headers like it normally does with a direct request for a file (an image in this case), or do I have to create the headers manually?
|
Why does HTTP headers doesn't get created when I use Server.Transfer()?
|
If each application uses its own cache manager, they will have separate caches.
You can retrieve the cache container managed by the application server via Spring's JNDI support (the JNDI name is java:jboss/infinispan/my-container-name). So Spring will be responsible for making sure every part is using the same container.
I am not 100% sure you will get the same cache; it may return an application-specific cache (the two applications' data objects in fact come from different class loaders).
An embedded cache is probably not meant for inter-application communication. You probably need to use the client/server paradigm.
|
I'm trying to use JBoss 7 Infinispan cache as a communication form (something more later) of two war-deployed spring-based apps. I'm having a problem with accessing the JBoss managed cache managers.
When I use
DefaultCacheManager cacheManager = new DefaultCacheManager();
cache = cacheManager.getCache();
on each of two applications, I get two separate caches. Is there any way to access the cache created by JBoss server without using the @ManagedBean annotation and Java EE standard at all ?
It's done. Thanks to Kazaag, I used JNDI.
JndiTemplate jndiTemplate = new JndiTemplate();
jndiTemplate.lookup("java:jboss/infinispan/container/cluster");
I had the well known problem with a DefaultEmbeddedCacheManager Class Cast Exception. I used reflections.
Map<Object, Object> cache;
JndiTemplate jndiTemplate = new JndiTemplate();
Object cacheManager;
try {
cacheManager = (Object) jndiTemplate.lookup("java:jboss/infinispan/container/cluster");
Method method = cacheManager.getClass().getMethod("getCache");
cache = (Map) method.invoke(cacheManager);
} catch (Exception e) {
e.printStackTrace();
return;
}
Moreover I had to mark container as started eagerly.
<cache-container name="cluster" aliases="ha-partition" default-cache="default">
<transport lock-timeout="60000"/>
<replicated-cache name="default" mode="SYNC" start="EAGER" batching="true">
<locking isolation="REPEATABLE_READ"/>
</replicated-cache>
</cache-container>
The cache is replicated despite the different class loaders.
|
Spring, Infinispan and JBoss 7 integration
|
On Linux it is possible to collect such information via OProfile. Each CPU has performance event counters. See here for the list of the AMD K15 family events: http://oprofile.sourceforge.net/docs/amd-family15h-events.php
OProfile regularly samples the event counter(s) together with the program counter. After a program run you can analyze how many events happened and at (statistically) which program positions.
OProfile has built-in Java support. It interacts with the Java JIT and creates a synthetic symbol table to look up the Java method name for a piece of generated JIT code.
The initial setup is not quite easy. If interested, I can guide you through or write a little more about it.
|
I'm writing a program in Java
In this program I'm reading and changing an array of data. This is an example of the code:
public double computation() {
char c = 0;
char target = 'a';
int x = 0, y = 1;
for (int i = 0; i < data.length; i++) {
// Read Data
c = data[index[i]];
if (c == target)
x++;
else
y++;
//Change Value (toLowerCase/toUpperCase return the converted char; the result must be assigned)
if (Character.isUpperCase(c))
c = Character.toLowerCase(c);
else
c = Character.toUpperCase(c);
//Write Data
data[index[i]] = c;
}
return (double) x / (double) y;
}
BTW, the INDEX array contains DATA array's indexes in random order to prevent prefetching. I'm forcing all of my cache accesses to be missed by using random indexes in INDEX array.
Now I want to check what is the behavior of the CPU cache by collecting information about its hit ratio.
Is there any developed tool for this purpose? If not is there any technique?
|
How to collect AMD CPU Cache Hit Ratio with Java?
|