Response | Instruction | Prompt
---|---|---
The docker image rm command deletes the image that was built during the pipeline. Because of that, once you delete the image, it will be built again from scratch the next time the pipeline reaches the docker build command.
So in this case, yes: this command is the reason the Docker image is not cached for the next build.
If you want to use the cache for the Docker steps, use the docker build command with the --cache-from option, and remove this command (i.e. docker image rm).
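For illustration, a minimal sketch of what the cached build steps could look like in the pipeline (the registry and image names are hypothetical, and --cache-from needs an existing image to pull first):
sh "docker pull registry.example.com/myapp:latest || true"  // hypothetical image name; ignore the failure on the very first build
sh "docker build --cache-from registry.example.com/myapp:latest -t registry.example.com/myapp:latest ."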
I assume you are deleting the image at the end of the pipeline just to free space and resources. I would suggest not deleting images after every build; instead, run a cron job on your build agent at a stipulated time (e.g. every night at 12 AM) that runs the docker image prune command, which will delete all unused/dangling images.
|
At the end of a Jenkins pipeline there is a
sh "docker image rm #{IMAGE_ID_HERE}"
I noticed that on each build the cache is not utilized, and all the steps are executed again. Could this be the reason why the cache is not utilized?
|
Does removing docker image affect the caching
|
You are modifying the same array that you are iterating over. When you splice out the element at index i, the next element shifts down into index i, but the loop still increments i, so that element is never checked; that is why consecutive duplicates survive. Build a new array instead:
const uniqSort = arr => {
  const breadcrumbs = {};
  const newArray = [];
  for (let i = 0; i < arr.length; i++) {
    if (!breadcrumbs[arr[i]]) {
      breadcrumbs[arr[i]] = true;
      newArray.push(arr[i]);
    }
  }
  return newArray.sort((a, b) => a - b);
};
If you are looking for a quick way to filter duplicates out of an array, you can convert it to a Set and then back to an array:
const uniqSort = arr => {
  return Array.from(new Set(arr)).sort((a, b) => a - b);
};
Perfect answer! You can make the shorter version even shorter by removing the block {} and the return statement, using the implicit return of arrow functions: const uniqSort = arr => Array.from(...).sort(...);
– ibrahim mahrir
Oct 27, 2020 at 1:49
What's wrong with modifying the same array? I know the Set solution; I recently started to deep-dive into algorithms, so I'm trying different things.
– Ahmed Anwar
Oct 27, 2020 at 1:50
You can find more here. stackoverflow.com/questions/9882284/…
– Jan Cizmar
Oct 27, 2020 at 21:13
|
This question already has answers here:
Looping through array and removing items, without breaking for loop
(18 answers)
Closed 3 years ago.
I'm trying to transform this sort into a unique sort:
const sort = arr => arr.sort((a, b) => a - b);
I tried this:
const uniqSort = arr => {
  const breadcrumbs = {};
  for (let i = 0; i < arr.length; i++) {
    if (breadcrumbs[arr[i]]) {
      arr.splice(i, 1)
    } else {
      breadcrumbs[arr[i]] = true;
    }
  }
  return arr.sort((a, b) => a - b);
};
For some reason it doesn't work properly. Does anyone know why?
when I input uniqSort([4,2,2,3,2,2,2]);
the output is [2,2,3,4] instead of [2,3,4]
|
transform a sort into a unique sort [duplicate]
|
Yes, there are a few different ways to achieve this type of caching:
Use a different cache validator
In addition to configuring your cache expiration (as you've done above), you can also choose to configure a cache validator. In your case, you might use either an input or parameter validator.
Use a cache key
You can "share" a cache amongst tasks (both within a single Flow and across Flows) by specifying a cache_key on your tasks:
@task(cache_for=datetime.timedelta(hours=1), cache_key="my-key")
def some_task():
    ...
This will then look up your candidate Cached states by key instead of by task ID.
Use a file-based target
Lastly, and increasingly the most popular setup, is to use a file-based target for your task. You can then template this target string with things like flow_run_id and the inputs provided to your task. Whenever the task runs, it first checks for the existence of data at the specified target location and, if found, does not rerun. For example:
@task(target="{flow_run_id}/{scheduled_start_time:%Y-%d-%m}/results.bytes")
def some_task():
    ...
This template has the effect of re-using the data at the target if both of the following are true:
the task is rerun within the same day
the task is rerun as a part of the same flow run
You can then share this template across multiple tasks (or in your case, across all the mapped children).
Note that you can also provide inputs and parameters to your target template if you desire.
|
I have a flow in which I use a .map(); as such, I "loop" over multiple inputs. However, some of the inputs I need to generate only once, and I notice that my flow keeps re-generating them.
Is it possible to cache/checkpoint the result of a task (which is used in other tasks) for the duration of the run?
My understanding is that it's possible to cache for a specific amount of time like so:
import datetime
from prefect import task
@task(cache_for=datetime.timedelta(hours=1))
def some_task():
    ...
However, if the run takes less than the cache_for time, would the cache still hold for the next run? (If not, I guess caching with a long duration will work.)
|
In Prefect, can a task value be cached for the duration of the flow run?
|
It might be taking the .js file from the cache.
Keep the developer console open so the cached file is not used: press F12 to open the developer console, then enable the "Disable cache" option in the Network tab (it applies while the console is open).
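A common deployment-side alternative is to append a version query string to the script URL, so an updated file is fetched as a new resource (bump the number whenever the file changes):
<script src="js/file.js?v=2" type="text/javascript" charset="utf-8"></script>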
|
I am using something simple like this to load my js files:
<script src="js/file.js" type="text/javascript" charset="utf-8"></script>
It's running OK, but when testing on my localhost it's taking the js file from cache, I guess. My updates are not loading instantly.
How can I make it load my updates immediately?
|
How to load script js without using stale file from cache?
|
No, it's not possible. There are two reasons for that:
Spring does not support setting a maxSize for a Redis cache (refer to RedisCacheConfiguration).
Even if Spring supported this, it would be difficult to track the active cache entries.
To support a max size, we would need details about the currently non-expired/non-evicted keys, and finding those would require scanning all the cache keys. One simple approach could be to track all the cache keys in a separate Redis SET data structure. Once you have the active cache keys, you need some background policy to delete one or more of them. Deleting these keys is not easy either: you have to decide which one should be deleted (FIFO, LRU, or something else?).
I would suggest implementing your own algorithm with the help of RedisCacheWriter: while adding entries to or removing them from the cache, you can update your record of cache keys. You also need to run a background job at certain intervals to cap the number of active cache entries.
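For illustration only, here is a rough sketch of that background job (all names, the limit, and the schedule are assumptions, not an existing Spring API): cache keys are tracked in a Redis sorted set with the insertion time as score, and a scheduled task evicts the oldest (FIFO) entries over the limit. It assumes @EnableScheduling is active.
import java.util.Set;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class CacheSizeCapper {

    private static final String KEY_SET = "books:keys"; // hypothetical tracking set
    private static final long MAX_SIZE = 1000;          // hypothetical per-cache limit

    @Autowired
    private StringRedisTemplate redisTemplate;

    // Call this from wherever you write to the cache (e.g. a custom RedisCacheWriter).
    public void trackKey(String cacheKey) {
        redisTemplate.opsForZSet().add(KEY_SET, cacheKey, System.currentTimeMillis());
    }

    // Periodically cap the number of tracked entries (FIFO: oldest score first).
    @Scheduled(fixedDelay = 60_000)
    public void capCache() {
        Long size = redisTemplate.opsForZSet().zCard(KEY_SET);
        if (size == null || size <= MAX_SIZE) {
            return;
        }
        Set<String> oldest = redisTemplate.opsForZSet().range(KEY_SET, 0, size - MAX_SIZE - 1);
        if (oldest != null && !oldest.isEmpty()) {
            redisTemplate.delete(oldest);                                  // drop the cached values
            redisTemplate.opsForZSet().remove(KEY_SET, oldest.toArray());  // and their tracking entries
        }
    }
}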
|
In a Spring Boot app I'm doing the migration from a local cache implemented with Caffeine to a Redis distributed cache.
I see in Caffeine cache that we can set the maximum number of entries
Cache cache = new CaffeineCache(cacheName, Caffeine.newBuilder()
    .recordStats()
    .expireAfterWrite(expireIn, TimeUnit.SECONDS)
    .maximumSize(maxSize)
    .build());
Can the same be achieved in code for Redis? I need to set different values for different cache names.
|
How to set max number of entries for different caches in the Redis cache manager in spring boot application?
|
A different take on what Sadra suggested: instead of overriding the trackBy() method, you can override the isRequestCacheable() method of the CacheInterceptor class. It is a little easier.
@Injectable()
export class CustomCacheInterceptor extends CacheInterceptor {
  // routes that should never be served from the cache
  excludePaths = ["/my/custom/route"];

  isRequestCacheable(context: ExecutionContext): boolean {
    const req = context.switchToHttp().getRequest();
    return (
      this.allowedMethods.includes(req.method) &&
      !this.excludePaths.includes(req.url)
    );
  }
}
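To apply it globally (matching the APP_INTERCEPTOR setup from the question), registration could look roughly like this; the file and module names are assumptions, and in newer Nest versions CacheModule lives in @nestjs/cache-manager rather than @nestjs/common:
import { CacheModule, Module } from '@nestjs/common';
import { APP_INTERCEPTOR } from '@nestjs/core';
import { CustomCacheInterceptor } from './custom-cache.interceptor';

@Module({
  imports: [CacheModule.register()],
  providers: [
    // every route is cached except those filtered out by isRequestCacheable()
    { provide: APP_INTERCEPTOR, useClass: CustomCacheInterceptor },
  ],
})
export class AppModule {}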
|
I activated the global cache with APP_INTERCEPTOR in my NestJS app.
But now, I need to ignore it on some routes.
How can I do that?
|
How to ignore global cache on some routes in NestJs
|
If you're talking about package:cached_network_image, then CachedNetworkImage is a widget, and CachedNetworkImageProvider is an ImageProvider, which identifies the image resource to show.
The CachedNetworkImage widget exists for convenience and creates a CachedNetworkImageProvider for you from its construction arguments; you alternatively could use a normal Image widget with a CachedNetworkImageProvider.
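To make the relationship concrete, a minimal sketch of the two equivalent usages (the URL is a placeholder):
import 'package:cached_network_image/cached_network_image.dart';
import 'package:flutter/material.dart';

const url = 'https://example.com/photo.jpg';

// Convenience widget: wires up the provider plus placeholder/error handling.
final Widget viaWidget = CachedNetworkImage(imageUrl: url);

// Plain Image widget with the caching ImageProvider supplied explicitly.
final Widget viaProvider = Image(image: CachedNetworkImageProvider(url));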
|
This is in context of Flutter, the Dart based framework for making mobile apps.
I'm looking for sources that can explain the underlying fundamentals and principles, not just a rule of thumb.
|
What is the difference between CachedNetworkImage and CachedNetworkImageProvider? Which case should each be used in?
|
I finally figured this out. I had to determine which block types and which node types might reference the refcode query parameter and update the cache settings for them:
function mymodule_preprocess_block( &$variables )
{
    if ( $variables['base_plugin_id'] == 'system_menu_block' )
        $variables['#cache']['contexts'][] = 'url.query_args:refcode';
}

function mymodule_preprocess_node( &$variables )
{
    if ( $variables['node']->bundle( ) == 'main' )
        $variables['#cache']['contexts'][] = 'url.query_args:refcode';
}
|
I would like to configure my Drupal 8 website so that any time query parameter "refcode" is used to visit my site, that value is replicated on all menu links on that page. For example, using https://www.example.com?refcode=joe would add "?refcode=joe" to all menu links on that page. Once someone enters the site using a particular refcode, then using menu links to navigate around the site would preserve that refcode and using menu links to navigate away from the site would also preserve that refcode on external menu links.
When the cache is empty, this code works:
function mymodule_link_alter( &$variables )
{
    if ( $refcode = \Drupal::request( )->query->get( 'refcode' ) )
        $variables['options']['query']['refcode'] = $refcode;
}
When the page is cached, it doesn't. I have tried adding this:
$variables['#cache']['contexts'][] = 'url.query_args:refcode';
but that does not work. I think I have to add this caching directive somewhere else, but I don't know where. Is there a place where I can instruct Drupal 8 to take "refcode" into account when retrieving any cached page?
|
Where to specify cache context to operate on all pages?
|
To answer that, I have to explain something about PostgreSQL internals.
In PostgreSQL, rows are never updated in place. Rather, each UPDATE creates a new version of the row. Similarly, a DELETE does not remove the row, but marks it as invalid.
Each row version carries the ID of the transaction that created it and of the transaction that marked it as invalid. Together, these transaction IDs determine the visibility of a row version.
COMMIT and ROLLBACK do not touch the table at all, they only mark the transaction committed or aborted in the commit log.
Now a query that reads a row version has to consult the commit log to determine whether it can see that row version. For example, if the transaction that created a row version was rolled back, the row version is invisible.
You can imagine that this would create a lot of traffic on the commit log, which would hurt performance, if there were no optimization in place: if a statement accessing a row finds that the creating or deleting transaction has ended, it sets a so-called hint bit on the row version. Subsequent readers then don't have to consult the commit log any more.
Your query was the first to read some of the rows in the table, so it set those hint bits. This modification makes the block containing the row version "dirty", that is, it has to be written to storage.
This explains how reading queries can end up writing data in PostgreSQL. For that reason, it is often a good idea to VACUUM a table after a bulk data modification: it takes the onus of setting hint bits off the first reader.
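For the events table from this question, that is simply:
VACUUM events;  -- also sets hint bits, so the first SELECT doesn't have to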
|
While trying to optimize some queries in Postgres 11, I stumbled upon this behavior I can't understand.
> explain (analyze, buffers) select count(*) from events;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=2533599.69..2533599.70 rows=1 width=8) (actual time=113869.828..113869.828 rows=1 loops=1)
Buffers: shared hit=204077 read=2205033 dirtied=16112
I/O Timings: read=319766.985
-> Gather (cost=2533599.48..2533599.69 rows=2 width=8) (actual time=113869.814..113871.340 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=204077 read=2205033 dirtied=16112
I/O Timings: read=319766.985
-> Partial Aggregate (cost=2532599.48..2532599.49 rows=1 width=8) (actual time=113866.031..113866.032 rows=1 loops=3)
Buffers: shared hit=204077 read=2205033 dirtied=16112
I/O Timings: read=319766.985
-> Parallel Seq Scan on events (cost=0.00..2507901.58 rows=9879158 width=0) (actual time=0.048..111664.011 rows=8055167 loops=3)
Buffers: shared hit=204077 read=2205033 dirtied=16112
I/O Timings: read=319766.985
Planning Time: 0.142 ms
Execution Time: 113871.415 ms
My understanding here is that my select count(*) dirtied 16112 blocks.
Two questions:
Am I understanding this right?
How can a read-only operation make blocks dirty?
|
Postgres: how can a "select count(*)" cause dirty blocks?
|
If you write your handler as a plain function (i.e. not an arrow function), the this object will be bound to the fastify server:
exports.getBooks = async function (request, reply) {
  console.log(this)
  let data = {
    book: 'Book 1',
    author: 'Author 1'
  }
  return this.redis.get('key1')
}
|
How can I use the fastify-redis plugin from other controllers or other .js files, while declaring the Redis connection in server.js?
server.js
const fastify = require('fastify')({ logger: false })
const routes = require('./routes')
fastify.register(require('fastify-redis'), { host: '127.0.0.1' })
routes.forEach((route, index) => {
  fastify.route(route)
})

const start = async () => {
  try {
    await fastify.listen(3000)
    fastify.log.info(`server listening on ${fastify.server.address().port}`)
    //const { redis } = fastify
    //console.log(redis)
  } catch (err) {
    fastify.log.error(err)
    process.exit(1)
  }
}
start()
Controller -> books.js
exports.getBooks = async (request, reply) => {
  //console.log(redis)
  let data = {
    book: 'Book 1',
    author: 'Author 1'
  }
  //return data
  return redis.get('key1') // Not Defined
  //return redis.get('key1')
}
So, simply: how can I use the Redis instance in other files to set some values in Redis, in order to implement caching of database data?
|
Use fastify-redis from controllers in node.js
|
I don't see how you are establishing the order of your cache from the description. But to answer your question: it's possible to reduce the LRU store method to O(1) time complexity.
The classical way to do it is to have these two data structures:
Doubly linked list: for the order in the cache. Each node stores a data element (it plays the role of your content store).
HashMap that associates each key with a pointer to its node in the linked list (it plays the role of your index table).
So when you access data already stored in your cache, it must move to the top of the list: you delete the corresponding node from the linked list (in O(1) time, because you have access to its previous and next nodes) and re-insert it at the head.
For new data it is simpler: store it at the head of the list and put the (key, value) pair in the hashmap.
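A compact sketch of that scheme in Python (illustrative names, not tied to any particular content-store API):
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                      # key -> node (the "index table")
        self.head = Node(None, None)       # sentinel: most recently used side
        self.tail = Node(None, None)       # sentinel: least recently used side
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)                 # O(1): we hold prev/next pointers
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity: # evict the least recently used entry
            lru = self.tail.prev
            self._unlink(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._push_front(node)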
|
I know, maybe the title is a little confusing; however, my actual question is basic, I think.
I'm working on a brand new LRU implementation, for which I use an index table that maps the name of an incoming packet to the index where the packet's content is stored in the CS (content store).
As illustrated below, each incoming packet is stored in the CS and can be addressed via the index table.
Now suppose a new packet arrives. Regarding LRU, its index must be set to the top of the CS (zero), and the other indexes need to be updated; they all need to be incremented as a result.
One obvious solution is to loop over all entries in the index table and increment them.
Is there any better solution, or a structure that is used for such a problem?
|
How to implement dynamic indexes?
|
These knobs are tucked away under cache.policy(), as they are specific to how the cache was created. This way many messy methods can be provided without complicating the core APIs.
cache.policy().eviction().ifPresent(eviction -> {
eviction.setMaximum(newCacheSize);
});
|
I am currently migrating from ConcurrentLinkedHashMap to Caffeine, and I am stuck trying to find an equivalent of the setCapacity feature:
_myCache.setCapacity(newCacheSize);
Is there a way to do the same in Caffeine?
Should I copy my current cache into a newly created one with the new size? That doesn't seem very efficient, but I don't see another way to do it at the moment.
|
Dynamically resize a Caffeine Cache
|
You can create a callUntilSuccess method in your Service class or any other suitable place (here I'm assuming it is in your service). You could also define a maximum number of tries in this method, after which it returns null, so you avoid calling your service indefinitely (this suggestion isn't implemented in the code supplied below, but it is very easy to add). Since the Guava method expects a Supplier, you can even wrap this logic in a lambda and pass it directly to the memoizeWithExpiration method.
public MyResponse callUntilSuccess() {
    MyResponse response = myService.call();
    while (!response.isSuccessful()) {
        response = myService.call();
    }
    return response;
}
Then do the memoization in this way:
private void createCache() {
    this.cache = Suppliers
        .memoizeWithExpiration(myService::callUntilSuccess, timeout,
            TimeUnit.MINUTES);
}
|
I know how to memoize a single object. However, I'd like to memoize only if some condition is met. I'm calling a service that sometimes returns an unsuccessful response, and I'd like to memoize only if the service's response is successful.
MyResponse myResponse = myService.call()
boolean success = myResponse.isSuccessful();
And my cache is created like so:
private Supplier<MyResponse> cache;
private void createCache() {
this.cache = Suppliers
.memoizeWithExpiration(myService::call, timeout,
TimeUnit.MINUTES);
}
Question: Is it possible to somehow cache the response only if the response is successful using the Supplier passed to the memoizeWithExpiration method?
The only workaround I found is, when retrieving the value, to call cache.get() first, check whether the object stored in the cache is successful, and if it's not, call createCache() again to clear it and then get the value again. This way, if the subsequent service call returns a valid object it gets stored, and if not, every subsequent call clears the cache and calls the service again.
public MyResponse getResponse() {
    MyResponse myResponse = cache.get();
    if (myResponse.isSuccess()) {
        return myResponse;
    } else {
        createCache();
        return cache.get();
    }
}
However, with this solution, if the cache is empty and the service returns an unsuccessful response, the service gets called again immediately.
|
Conditional memoization in Guava
|
I'm not aware of "add to home screen" being used as a signal when determining whether to clear an origin's storage when a device is running out of space, so I wouldn't rely on it one way or another.
Instead, there's a web platform feature known as "Persistent Storage" that you can use to explicitly request that your origin's storage not be purged due to space constraints. That's something you could rely on with greater certainty. From that article:
Beginning with Chrome 55, Chrome will automatically grant the
persistence permission if any of the following are true:
The site is bookmarked (and the user has 5 or less bookmarks)
The site has high site engagement
The site has been added to home screen
The site has push notifications enabled
The permission is automatically denied in all other cases. The goal is
to ensure that users can rely on their favorite web apps and not find
they have suddenly been cleared.
You'd use it like:
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then((granted) => {
    // Optionally update your UI based on the granted state.
  });
}
|
When a browser runs low on storage space, I understand that it will start purging the caches of sites (PWAs) starting with the least-recently visited. Per, Using the Cache API in the service worker:
You are responsible for implementing how your script (service worker) handles updates to the cache. All updates to items in the cache must be explicitly requested; items will not expire and must be deleted. However, if the amount of cached data exceeds the browser's storage limit, the browser will begin evicting all data associated with an origin, one origin at a time, until the storage amount goes under the limit again. See Browser storage limits and eviction criteria for more information.
The linked eviction criteria includes a section on LRU policy:
When the available disk space is filled up, the quota manager will start clearing out data based on an LRU policy — the least recently used origin will be deleted first, then the next one, until the browser is no longer over the limit.
We track the "last access time" for each origin using temporary storage. Once the global limit for temporary storage is reached (more on the limit later), we try to find all currently unused origins (i.e., ones with no tabs/apps open that are keeping open datastores). These are then sorted according to "last access time." The least recently used origins are then deleted until there's enough space to fulfill the request that triggered this origin eviction.
Now the question: do browsers use whether a site has been added to the homescreen (A2H) as a signal to defer clearing the cache storage for a given site? To me it would seem logical for a browser to prioritize purging the caches of least-visited non-A2H site caches before starting to purge the caches of sites that a user has explicitly given the special A2H designation.
Question originally asked in the context of the PWA feature plugin for WordPress.
|
Is homescreen addition (A2H) ever a signal for browsers to defer PWA cache purging when storage runs low?
|
It's not as easy as just "caching" a file in Perl.
vlc, or whatever program you use, needs to interpret the content of the data (in your case the .wav file).
Either you stick with calling an external program and just give it a file to execute, or you need to implement the whole stack in Perl (probably with Perl XS modules). By "whole stack" I mean:
1. Keeping the data (your .wav file) in memory (inside the Perl runtime).
2. Interpreting the data inside Perl.
The second part is where it gets tricky: you would probably need to write a lot of code and/or use third-party modules to get where you want.
So if you just want to make it work fast, stick with system calls. You could also look into Nama, which might give you what you need.
From your question it looks like you are mostly after the runtime of a .wav file. If it's just about getting information about the file, and not about playing the sound, then Audio::Wav could be the module for you.
|
I would like to manually cache files in Perl, so when playing a sound there is little to no delay.
I wrote a program in Perl, which plays an audio file by doing a system call to VLC. When executing it, I noticed a delay before the audio started playing. The delay is usually between about 1.0 and 1.5 seconds. However, when I create a loop which does the same VLC call multiple times in a row, the delay is only about 0.2 - 0.3 seconds. I assume this is because the sound file was cached by Linux. I found Cache::Cache on CPAN, but I don't understand how it works. I'm interested in a solution without using a module. If that's not possible, I'd like to know how to use Cache::Cache properly.
(I know it's a bad idea to use a system call to VLC regarding execution speed)
use Time::HiRes;
use warnings;
use strict;

while (1) {
    my $start = Time::HiRes::time();
    system('vlc -Irc ./media/audio/noise.wav vlc://quit');
    my $end = Time::HiRes::time();
    my $duration = $end - $start;
    print "duration = $duration\n";
    <STDIN>;
}
|
How to cache files with Perl while playing sound files using vlc?
|
If you were having problems with the cache you could tell the browser not to cache anything:
cache_control :no_cache
You might also add Pragma and Expires headers:
headers \
"Pragma" => "no-cache",
"Expires" => "0"
and put it all in a before filter:
before do
  cache_control :no_cache
  headers \
    "Pragma" => "no-cache",
    "Expires" => "0"
end
Or, since you're doing demonstrations, open the browser's inspector and turn off caching. Both Chrome and Firefox have this option.
(OP adding this:) A minimalist version for a single call might be to have just
headers "Expires" => "0"
within the get block in question.
|
I'm trying to create a fairly minimal example for templates and teaching.
I created my app.rb file:
require 'sinatra'
get '/' do
"Minimal!__ !_!"
end
My Gemfile just has
source 'https://rubygems.org'
gem 'rspec'
gem 'thin'
I started up Sinatra
$ ruby app.rb
== Sinatra (v2.0.5) has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.7.2 codename Bachmanity)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
and I can visit the page, but when I then change the code, the page is cached and the new content doesn't show unless I stop and start the server.
I've read the Sinatra documentation but still can't figure it out.
I've tried adding
set :sessions, false
and
cache_control :off
to no avail
|
How to create Sinatra app with page that reloads with new content not cache
|
Network calls within a data center are fast (less than 100 microseconds round trip), an order of magnitude faster than disk look-ups. Look-ups from memory are also fairly fast (10-20 microseconds per read). On the other hand, databases often have to read from disk, and they maintain extra transaction metadata and locks.
So caches provide higher throughput as well as better latency. The final design depends on the type of database and the data-access scenarios.
|
I want to understand the benefit of running an in-memory cache instance on a separate server to look up data in distributed caching. The application server will have to make a network call to get the data from the cache. Isn't the network call adding to the read latency? Wouldn't it make more sense to get the data directly from the database instance?
|
As distributed caching requires network call, isn't it beneficial to read directly from the DB in some cases?
|
If you have set up Google Search Console for your website, you can do the following to speed up updating cached content:
Request removal of outdated content here. In your case you should paste the URL of the image.
In the sitemap section of Search Console, submit a sitemap which contains that specific URL.
In URL INSPECTION, use REQUEST INDEXING to queue the inspection of that URL.
Note: none of the above guarantees a fast update of the cached content. In all cases you have to wait for Google's response.
|
Google's image search caches the thumbnails of its image results. In one case, I've updated the image that Google links to, but it continues to show the previous version as the thumbnail. Interestingly, if I click on the image or hover over it, the new one is displayed. Is there a way to get Google to refresh this cached image, or do I have to just wait it out?
|
How can I refresh Google Image Search cache?
|
I think there is a little confusion between the caching done by the service worker and what Lighthouse is reporting.
The cache settings in ngsw-config.json refer to how long the service worker should keep a file cached (for offline use) before asking the server for it again.
Lighthouse is referring to the HTTP cache header, which it is saying is too low. So, from the Lighthouse docs, you should be setting the header to:
Cache-Control: max-age=31536000
Now this is outside the realm of Angular: it is your HTTP server's responsibility. How to set that up depends on what you are using: IIS, Apache, etc.
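As an illustration only (assuming nginx here; IIS and Apache have equivalent settings), the header for the hashed bundles could be set like this:
location ~* \.(js|css)$ {
    # a long max-age is safe because the Angular CLI content-hashes the bundle file names
    add_header Cache-Control "public, max-age=31536000";
}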
One option to consider with SPAs (single-page applications) like Angular is to put them on a CDN; then you get great caching (and a great Lighthouse score!).
Look at Azure CDN or Google CDN; there is probably one from AWS too.
|
My Lighthouse Report suggests that I "Serve assets with an efficient cache policy". My problem is that my runtime.xxx.js, polyfills.xxx.js, main.xxx.js, and styles.xxx.js files need a longer cache lifetime than 1hr. Can a longer cache lifetime for these files be achieved using a Service Worker? If so, how?
I am using a ServiceWorker, which uses an ngsw-config file.
{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": [
          "/favicon.ico",
          "/index.html"
        ],
        "versionedFiles": [
          "/*.bundle.css",
          "/*.bundle.js",
          "*.js",
          "/*.chunk.js"
        ]
      },
      "cacheConfig": {
        "maxSize": 100,
        "maxAge": "86400",
        "strategy": "performance"
      }
    },
    {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": [
          "/assets/**"
        ]
      }
    }
  ]
}
When I use Google Chrome DevTools and go offline, my application still shows. The Network tab shows that my styles.xxx.css is "(from Service Worker)", but my runtime.xxx.js, polyfills.xxx.js, and main.xxx.js are "(from memory cache)". When I click on runtime.xxx.js the Headers tab cache-control is max-age=3600. (Perhaps a default header setting is overriding my Service Worker.)
I've included the information from the Chrome DevTools Network tab when the app is offline. My files are in dist/maldonado-a/, for example dist/maldonado-a/polyfills.xxx.js.
Please write to me if you need more information. Thank you.
|
Angular: Setting maxAge on static content files
|
How can one go about "refreshing" the instance variable?
Simply set it to nil. Then, upon the next call, it will be reloaded from the DB.
Or you could proactively replace the value yourself. First, separate memoization from computation (note the compute method must be a class method too, and the lookup by client_id still has to happen):
def self.get_lastname(client_id)
  @client_by_lastname ||= compute_client_by_last_name
  @client_by_lastname[client_id]
end

def self.compute_client_by_last_name
  Client.select(:id, :lastname)
    .map{|e| e.attributes.values}
    .inject({}){|memo, client| memo[client[0]] = client[1]; memo}
end
Then, when you want to refresh the state (in an after_save callback or wherever)
@client_by_lastname = compute_client_by_last_name
|
I have a model instance variable in which I keep descriptions that will be "mostly" static throughout the application.
The code looks like this
def self.get_lastname(client_id)
  @client_by_lastname ||= Client.select(:id, :lastname)
    .map{|e| e.attributes.values}
    .inject({}){|memo, client| memo[client[0]] = client[1]; memo}
  return @client_by_lastname[client_id] if @client_by_lastname[client_id]
  result = Client.select('lastname').where('id = ?',client_id)
  return @client_by_lastname[client_id] = result[0].lastname
end
So essentially, on first load it stores the clients and their last names in a hash. However, a client may change last name every once in a while, when the client is bought by another entity. The client ID remains the same, but the first name and last name will change. When that happens, the client's last name ends up being wrong in the app, and we essentially have to restart the app for the instance variable to be reset.
There has to be a way to reset these instance variables so that they are reloaded the next time they are queried. I would then place an after_save callback on the client model for when the first name or last name is modified, and get that instance variable reloaded.
How can one go about "refreshing" the instance variable?
|
Refresh class instance variable that is set with ||=
|
You can add these meta tags, and they will force the page to be reloaded from the server on every visit. However, you should realize that this could result in a lot more data usage for both you and your users.
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">
<meta http-equiv="CACHE-CONTROL" content="NO-CACHE">
If you have a reload link or button on your HTML page, you can add this attribute to it to force a reload from the server; the true parameter is what requests a full reload instead of a cache reload:
onClick="location.reload(true);"
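For the images, the same version-number trick the question already uses for the stylesheet applies: the browser treats a changed query string as a new resource, so bumping the number busts the cache. For example:
<img src="images/logo.png?v=2" alt="logo">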
|
I run a website for a small company. The website mostly contains text and images. Whenever I update the site by replacing an image or updating the CSS, other people's browsers don't pick up the change because it has been cached. I've found a way around this for stylesheets by adding a version number where I link the stylesheet; however, this doesn't apply to images. What is the simplest way to get around this?
I've done a lot of research on the web and on Stack Overflow, but the solutions are all complicated. There must be a better way.
Just to be clear, I know I can clear the cache in my browser. I'm looking for a solution that works for everyone who accesses my website.
|
HTML and CSS doesn't update online due to browser caching
|
Use Vary: *. This magically solved my problem.
This answer helped me: https://stackoverflow.com/a/2068353/1364158
Alternatively, you can force the browser to load a fresh response by including a meaningless random query argument in your URL, e.g. /api/user?ts=18284.
|
I have set all the headers I know of to disable caching (even disabling the ETag) on my server, yet Safari still occasionally (about 50% of the time) serves my requests from its cache.
Workflow
I am implementing oauth 1, so:
Browser makes GET /api/user request
Server returns 405
Browser redirects to 3rd party website to authenticate
Browser is redirected to api/callback which stores some info into cookie.
Browser is redirected back to original route.
Browser makes a GET /api/user request, which should be successful; however, it gets the 405 served from the disk cache instead.
Request summary from Safari Network Inspector
Summary
URL: http://localhost:3000/api/user
Status: 405 Method Not Allowed
Source: Disk Cache
Request
No request, served from the disk cache.
Response
Transfer-Encoding: Identity
Content-Type: application/json; charset=utf-8
Pragma: no-cache
Cache-Control: private, no-cache, no-store, must-revalidate, max-age=0
Vary: …
Date: …
Content-Encoding: …
Expires: …
Connection: …
x-powered-by: …
Conclusion
I have no idea what's wrong and I will greatly appreciate any help. My Safari version is …. I wasn't able to replicate this issue with Chrome.
|
Safari Caching GET request even with disabled cache
|
You can use unless to specify that the result should not be cached if it is null, but it does not work together with sync=true:
@Repository
public interface StudentRepository extends JpaRepository<Student, Long> {
    @Cacheable(cacheNames="StudentCache", cacheManager="DepartmentCacheManager", unless="#result == null")
    Student findByStudentId(String StudentId);
}
If you must use sync, consider using @CachePut to update the cache after saving the student:
public class StudentRepository {
    @CachePut(cacheNames="StudentCache", cacheManager="DepartmentCacheManager", key="#student.studentId")
    public Student save(Student student) {
        // persist the student here and return it; the returned value is what gets put into the cache
        return student;
    }
}
|
We have a Student entity and have implemented caching. We insert a record only when it is not found in the database.
It works fine the first time, as the record doesn't exist in the DB. But if we re-submit the same request, it fetches the Student entity from the cache, which returns null (even though the record exists in the DB), and hence tries to re-insert the data and fails with a unique-constraint violation.
Can someone help with how to clear/update the cache in this case?
Repository Code Snippet:
@Repository
public interface StudentRepository extends JpaRepository<Student, Long> {
    @Cacheable(cacheNames="StudentCache", cacheManager="DepartmentCacheManager", sync=true)
    Student findByStudentId(String StudentId);
}
Service Code Snippet
Student student = studentRepository.findByStudentId(studentId);
if (student == null) {
    student = createStudentObject(studentId);
    studentRepository.save(student);
}
|
Spring Data JPA Repository to refresh cache after Save()
|
1) No, your scheme is not reliable
You should not call
cache.asMap().putIfAbsent(givenKey, givenObj);
According to the Guava documentation, the method cache.get(K key, Callable loader) is preferable to using the asMap methods.
2) Yes, it can be optimised
You should call this method instead:
cache.get(K key, Callable<? extends V> loader)
This method returns the value if it is already in the cache; otherwise it adds the value from the loader into the cache and returns it.
So, for example:
MyObject objInCache = cache.get(givenKey, () -> givenObj);
if (!objInCache.equals(givenObj)) {
    // the object was already in the cache:
    // update it
}
3) You don't need volatile if the cache is thread-safe
|
I have a Runnable which has a cache (of type Cache), which we assume offers thread-safe operations. This Runnable object is used by multiple threads.
Our threads get objects from an outer source and then:
check whether the object's key exists in the cache
if not, put it
if it's already in the cache, update it
I'm looking for the right scheme (i.e. minimal synchronized code) to work with the cache reliably.
I came up with the following scheme:
MyObject current = cache.getIfPresent(givenKey);
if (current == null) {
    MyObject prev = cache.asMap().putIfAbsent(givenKey, givenObj);
    if (prev == null) {
        // successful put in cache
        return givenObj;
    }
}
// current != null or another thread update
synchronized (current) {
    return update(current, givenObj); // in place change of current
}
The key ideas behind my scheme + "proof" of reliability:
If threads work on different keys then no need to block
If current is null, then since the cache is thread-safe, exactly one thread will be able to put the object in the cache whilst the others will see prev != null
The other threads must update serially. Notice I'm syncing on current, the object to be updated.
Questions
Is my scheme reliable?
Can be optimised?
In some cases, volatile must be used to make the memory synchronization reliable. Do I need it here?
Thanks!
|
How to update Cache from multiple threads
|
Use @CachePut
@CachePut(value = "users", key = "#result.id")
@RequestMapping(value = "/user", method = RequestMethod.POST)
public User createUser(@RequestBody User user);
|
I'm using the item ID to identify cached data, like this:
@CachePut(value = "users", key = "#p0.id")
@RequestMapping(value = "/user", method = RequestMethod.PUT)
public User updateUser(@RequestBody User user);
The create function returns the item ID after creation. How can I save the item into the cache after creating a new item?
@Cacheable(value = "users", key = "???")
@RequestMapping(value = "/user", method = RequestMethod.POST)
public User createUser(@RequestBody User user);
|
Spring cache on create new item
|
I have found info in the source:
FILENAME_MAX_SIZE = 228 # max filename size on file system is 255, minus room for timestamp and random characters appended by Tempfile (used by atomic write)
|
I am using Rails model caching. When generating the cache file, I combine several values into the key; because of that, the file names end up 140-180 characters long, for example:
1000011_2000014_2000004_2000013_1000006_1000010_2000005_2000001_1000012_2000013_2000012_2000015_2000006_1000006_1000006_1000000_1000008_brand_list
May I know what the maximum length is for an ActiveSupport::Cache::FileStore file name?
|
What is the `ActiveSupport::Cache::FileStore` key limit?
|
Everything you have in ngOnInit() will be executed when you "come back" to the component. You can put your data in a variable in a service and then, in ngOnInit(), check whether the service already holds data; if it does, do not ask the server for it again. A sketch:
ngOnInit() {
  if (!this.myService.data) {
    this.myService.data = this.apiService.fetchData();
  }
}
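Another common pattern (a sketch reusing the question's Setting and ApiService types; shareReplay is from RxJS and is not part of the answer above) is to cache the observable itself in the service, so that navigating back replays the last response instead of firing a new request:
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { shareReplay } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class SettingsService {
  private settings$: Observable<Setting[]>;

  constructor(private apiService: ApiService) {}

  getSettings(): Observable<Setting[]> {
    if (!this.settings$) {
      // late subscribers replay the cached emission instead of re-triggering the HTTP call
      this.settings$ = this.apiService.getSettings().pipe(shareReplay(1));
    }
    return this.settings$;
  }
}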
|
I am using a BehaviorSubject in my service to get all data from the backend, which I subscribe to in my MainComponent using the async pipe.
If I now route to another SubComponent and then click, for example, a back button (which uses the Location.back() function) to get back to my MainComponent, it always fetches all the data again with a new request to the backend.
I guess this comes from using the async pipe, as it unsubscribes from the BehaviorSubject when leaving the MainComponent.
Do I need to implement some caching strategy for this, or can this be solved using a ReplaySubject with the size of all the data I fetch in MainComponent?
Below my Code:
Service:
private subject$: BehaviorSubject<Setting[]> = new BehaviorSubject<Setting[]>([]);

fetchData() {
  const fetch$: Observable<Setting[]> = this.getSettings().pipe(share());
  fetch$.pipe(
    map(allSettings => this.subject$.next(allSettings))
  );
  return fetch$;
}
MainComponent:
data: Observable<Setting[]>;
// Load Setting while starting
ngOnInit() {
  this.data = this.apiService.fetchData();
}
MainComponent.html:
<tr *ngFor="let s of data | async">
<!--Do Something...-->
</tr>
SubComponent:
goBack(): void {
  this.location.back();
}
Many thanks in advance :)
|
Don't request data again after using location.back() in Angular 6 - ReplaySubject necessary?
|
I think the closest you can get is LINQPad + the .NET thin client.
The Ignite NuGet package actually includes a LINQPad sample that gets the first 5 items from every cache in the cluster and displays them; you can modify it to your needs.
This approach requires some coding, but it is quite flexible, with LINQ capabilities and a rich API at your disposal, plus LINQPad's data-display features.
Sample code:
var cfg = new IgniteClientConfiguration { Host = "127.0.0.1" };

using (var client = Ignition.StartClient(cfg))
{
    // Create cache for demo purpose.
    var fooCache = client.GetOrCreateCache<int, object>("thin-client-test").WithKeepBinary<int, IBinaryObject>();
    fooCache[1] = client.GetBinary().GetBuilder("foo")
        .SetStringField("Name", "John")
        .SetTimestampField("Birthday", new DateTime(2001, 5, 15).ToUniversalTime())
        .Build();

    var cacheNames = client.GetCacheNames();
    "Displaying first 5 items from each cache:".Dump();

    foreach (var name in cacheNames)
    {
        var cache = client.GetCache<object, object>(name).WithKeepBinary<object, object>();
        var items = cache.Query(new ScanQuery<object, object>()).Take(5)
            .ToDictionary(x => x.Key.ToString(), x => x.Value.ToString());
        items.Dump(name);
    }
}
|
I am loving Apache Ignite, particularly as a distributed cache. However, I have realised that the tooling is not as good.
I am looking for a simple desktop tool to view and search the cache values, something similar to Redis Desktop Manager.
I am in a Windows environment. My Google searches returned "DBeaver", which I have downloaded and configured, but it doesn't show my cache key values. The other result was "Web Console", though this is web-based and I'd prefer a desktop tool; I'm not sure if I can install it locally.
Anything else around?
Much appreciated.
|
Apache ignite cache viewer like Redis Desktop Manager
|
The web cache container is used for storing HTTP session information. WildFly's High Availability Guide contains information on all clustered services.
|
I'm using WildFly 12.
The default configuration of the Infinispan subsystem defines a cache container named "web". I tried to find out why this container is defined and who uses it, but could not find any explanation in the documentation or anywhere on Google so far.
standalone-full-ha-custom.xml:
<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <distributed-cache name="dist">
        <locking isolation="REPEATABLE_READ"/>
        <transaction mode="BATCH"/>
        <file-store/>
    </distributed-cache>
</cache-container>
What kind of data is stored in this cache and which components need it?
|
What is the Wildfly infinispan web cache container for?
|
I'm unaware of any library that does this. On the other hand, it's not a lot of work to get something working. Here's something I threw together in ten minutes:
struct Dict(K, V, E)
    if (isExpiry!E)
{
    import std.typecons : Tuple;

private:
    Tuple!(V, "value", E, "expiry")[K] _payload;

public:
    V opIndex(K key)
    {
        return *(key in this);
    }

    V* opBinaryRight(string op : "in")(K key)
    {
        auto p = key in _payload;
        if (!p || p.expiry.expired) return null;
        return &p.value;
    }

    void insert(K key, V value, E expiry)
    {
        expiry.initialize();
        _payload[key] = typeof(_payload[key])(value, expiry);
    }

    void remove(K key)
    {
        _payload.remove(key);
    }
}

enum isExpiry(T) = is(typeof((T t){
    t.initialize();
    if (t.expired) {}
}));
static assert(!isExpiry!int);

struct Timeout
{
    import core.time;

    Duration duration;
    MonoTime start;

    void initialize() {
        start = MonoTime.currTime;
    }

    @property
    bool expired()
    {
        auto elapsed = MonoTime.currTime - start;
        return elapsed > duration;
    }
}
static assert(isExpiry!Timeout);

unittest
{
    import core.time;
    import core.thread;

    Dict!(int, string, Timeout) a;
    assert(3 !in a);
    a.insert(3, "a", Timeout(100.dur!"msecs"));
    a.insert(4, "b", Timeout(10.dur!"days"));
    assert(3 in a);
    assert(4 in a);
    Thread.sleep(200.dur!"msecs");
    assert(3 !in a);
    assert(4 in a);
    a.remove(4);
    assert(4 !in a);
}
|
I want a library that offers an in-memory data structure such that I can write, for example:
cache.insert(key, value, expiry)
and retrieve the value with something like cache[key], unless expiry seconds have passed.
Can this be done? What library should I use?
I'd prefer libraries for D, if possible.
|
Auto expiring dictionary
|
The property is called pageNumber, not number:
@Cacheable(key = "#pageable.pageNumber")
public Person getPersons(Pageable pageable)
The general sense of using the cache this way can be questioned, though. But that's maybe out of scope for this question.
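If the page size should distinguish cache entries too (the question passes both page and size), a combined SpEL key is possible; a sketch, not from the original answer:
@Cacheable(key = "#pageable.pageNumber + '-' + #pageable.pageSize")
public Person getPersons(Pageable pageable)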
|
I have a method which accepts org.springframework.data.domain.Pageable as a parameter.
When the user requests page = 0 and size = 20 (and so on), I want to make these passed values the cache key.
What I did is:
@Cacheable(key = "#pageable.number")
public Person getPersons(Pageable pageable)
It's giving an exception:
EL1008E: Property or field 'number' cannot be found on object of type
'org.springframework.data.domain.PageRequest' - maybe not public?
How can I make a cache key out of it?
|
How to make cache key if method parameter is Interface
|
It's important that key names in dask graphs are unique (as you found above). Additionally, we'd like identical computations to have the same key so we can avoid computing them multiple times - this isn't necessary for dask to work though, it just provides some opportunities for optimization.
In dask's internals we make use of dask.base.tokenize to compute a "hash" of the inputs, resulting in deterministic key names. You are free to make use of this function as well. In the issue you linked above we say the function is public, just that the implementation might change (not the signature).
Also note that for many use cases, we recommend using dask.delayed now instead of custom graphs for generating custom computations. This will do the deterministic hashing for you behind the scenes.
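For instance, applied to the compute function from the question, that could look like this sketch:
import dask
from dask.base import tokenize

def compute(x):
    # the key now depends on the input, so different inputs
    # no longer collide in the opportunistic cache
    key = 'step1-' + tokenize(x)
    graph = {key: (sum, [x, 1])}
    return dask.get(graph, key)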
|
Dask supports defining custom computational graphs as well as opportinistic caching. The question is how can they be used together.
For instance, let's define a very simple computational graph, that computes x+1 operation,
import dask

def compute(x):
    graph = {'step1': (sum, [x, 1])}
    return dask.get(graph, 'step1')
print('Cache disabled:', compute(1), compute(2))
this yields 2 and 3 as expected.
Now we enable opportunistic caching,
from dask.cache import Cache
cc = Cache(1e9)
cc.register()
print('Cache enabled: ', compute(1), compute(2))
print(cc.cache.data)
we incorrectly get a result of 2 in both cases, because cc.cache.data is {'step1': 2} irrespective of the input.
I imagine this means that the input needs to be hashed (e.g. with dask.base.tokenize) and appended to all the keys in the graph. Is there a simpler way of doing it, particularly since the tokenize function is not part of the public API?
The issue is that in complex graphs, a random step name needs to account for the hash of all the inputs provided to its children steps, which means that full graph resolution is necessary.
|
Opportunistic caching with reusable custom graphs in Dask
|
This was part of the original discussion of Guava's Map.get, but was likely a poor idea, miscommunicated, and eventually lost. A rationale was that users do not expect side effects from most Map operations, which MapMaker changed with computing maps, thereby breaking the equals method.
In retrospect, treating any Map methods as different breaks the principle of least astonishment and is not very useful. This was likely realized during the implementation but, due to the disjoint team and abundance of details, is an aspect that I forgot. We had also decided on the principle that users shouldn't need to know how the policies work or have configuration knobs to influence their implementation, which a quiet get would have exposed.
However, one aspect did remain, for better or worse. Unlike Cache.getIfPresent, Map.get will not record hit-rate statistics. Similarly, all other Map operations may be opted out of updating CacheStats. Guava states this in its documentation, and Caffeine slightly modifies the wording for the additional Java 8 methods.
Likely this opt-out of statistics should not have occurred; it is the remnant of that original discussion. It is a subtle enough detail that it may not be honored in full, as I believe Guava's computing Map methods do not honor it. Thankfully it is a minor enough detail to have not caused many issues, and it could be changed if deemed worthwhile.
|
The Javadoc says:
Returns a view of the entries stored in this cache as a thread-safe map. Modifications made to the map directly affect the cache.
What I'm missing is the information about whether access to the view influences the admission and eviction policies. According to this old related issue it does not:
In Guava's CacheBuilder we added the asMap() view specifically to allow
bypassing the cache management routines. There a cache.asMap().get(key) is a peek operation.
This surely makes sense. OTOH the view provides many operations unavailable directly and users may be tempted to use them hoping that they update the access statistics just like direct operations.
For example, I found myself using cache.asMap().putIfAbsent as my values are functions of the keys, so replacing them is pointless. I'd like it to work exactly as cache.put in case the entry was absent.
|
Behaviour of Caffeine Cache.asMap views
|
You could use Camel EhCache. There's a "getting started" section in the docs, but you may also take a look at the unit tests for this component here.
That way you'll get a more detailed picture of how to use it. For example, the cache manager builds directly on the EhCache API:
CacheManagerBuilder.newCacheManagerBuilder()
    .withCache(
        "myCache",
        CacheConfigurationBuilder.newCacheConfigurationBuilder(
            String.class,
            String.class,
            ResourcePoolsBuilder.newResourcePoolsBuilder()
                .heap(100, EntryUnit.ENTRIES)
                .offheap(1, MemoryUnit.MB))
    ).build(true)
Cheers!
|
I have implemented some routes in JBoss Fuse which are exposed as a REST web service. I want to implement caching for the web services. Let's say that if a request for the same username for a specific resource arrives within a specific time span, the cached response should be returned. Doing some research, I learned about the Camel cache component. I tried to read about it to check whether the Camel component will help me reach my objective, but found nothing on which I can decide.
Can anyone suggest an approach for caching responses on the basis of the request, or say whether the Camel cache component can be used? If yes, please suggest a startup tutorial for this.
|
Camel cache usage
|
Try ./gradlew clean and then ./gradlew assembleRelease. It should work.
|
While doing a release build for Android in React Native:
$ ./gradlew assembleRelease
the following error is thrown at the end.
Loading dependency graph, done.
ENOTEMPTY: directory not empty, rmdir '/tmp/react-native-packager-cache-02b2ace9b81fa119b172fdfd36d3a3b4e4156b70/cache'
:app:bundleReleaseJsAndAssets FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:bundleReleaseJsAndAssets'.
> Process 'command 'node'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
I am using
"react": "16.0.0-beta.5",
"react-native": "0.49.3",
Please guide
|
ENOTEMPTY: directory not empty, rmdir '/tmp/react-native-packager-cache-02b2ace9b81fa119b172fdfd36d3a3b4e4156b70/cache'
|
Try adding
proxy_cache_valid 200 1d;
to keep all files matching your location that return an HTTP status of 200 in the cache for one day.
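That is, inside the location block from your question:
location ~* (^/.*(css|woff|otf|js|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp3|mp4|ogg|ogv)$|^/web/image.*) {
    proxy_cache test;
    proxy_cache_valid 200 1d;   # keep 200 responses cached for one day
    # ... rest of the directives unchanged
}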
|
I am trying to force caching of these types of files: css, woff, otf, js, jpg, jpeg, gif, png, ico, cur, gz, svg, svgz, mp3, mp4, ogg, ogv, and files without an extension.
I can cache some images and mp3 files, but the rest always end up in the state X-Cache: MISS.
proxy_cache_path /tmp/test keys_zone=test:10m loader_files=300 max_size=4g;
location ~* (^/.*(css|woff|otf|js|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp3|mp4|ogg|ogv)$|^/web/image.*) {
proxy_cache test;
proxy_cache_methods GET HEAD;
proxy_cache_lock on;
proxy_ignore_headers Set-Cookie;
proxy_ignore_headers X-Accel-Expires;
proxy_ignore_headers Expires;
proxy_ignore_headers Cache-Control;
add_header Pragma "public";
add_header Cache-Control "public";
add_header X-Cache $upstream_cache_status;
access_log off;
expires 12h;
proxy_pass http://test;
}
GET:
Accept: text/css,/;q=0.1
Accept-Encoding: gzip, deflate, br
Accept-Language: es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3
Cache-Control: no-cache
Connection: keep-alive
DNT: 1
Pragma: no-cache
Nginx:
Cache-Control: max-age=43200
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/css
Date: Fri, 15 Dec 2017 10:20:22 GMT
ETag: W/"XXXXXXXXXX"
Expires: Fri, 15 Dec 2017 22:20:22 GMT
Server: nginx
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Cache-Status: MISS
X-Content-Type-Options: nosniff
THX
|
Force cache on files with Nginx
|
Because there is no serialization and deserialization overhead, it is a low-cost operation, and cached data can be loaded without additional memory. SerDe is expensive and significantly increases the overall cost, and keeping both serialized and deserialized objects (particularly with standard Java serialization) can double memory usage in the worst case.
|
I am learning Apache Spark and trying to clarify the concepts related to caching and persistence of RDDs in Spark.
According to the documentation on persistence in the book "Learning Spark":
To avoid computing an RDD multiple times, we can ask Spark to persist the data.
When we ask Spark to persist an RDD, the nodes that compute the RDD store their partitions.
Spark has many levels of persistence to choose from based on what our goals are.
In Scala and Java, the default persist() will
store the data in the JVM heap as unserialized objects. In Python, we always serialize
the data that persist stores, so the default is instead stored in the JVM heap as pickled
objects. When we write data out to disk or off-heap storage, that data is also always
serialized.
But why will the default persist() store the data in the JVM heap as unserialized objects?
|
Why does the default persist() will store the data in the JVM heap as unserialized objects?
|
I had the same problem, and as far as I can see, the Symfony team has fixed it.
Try this:
In composer.json, change the version from 3.4.* to 3.3.* and run composer update symfony/symfony.
Then change 3.3.* back to 3.4.* and run composer update symfony/symfony again.
That fixed the problem for me. Try it.
|
I've created a very simple Symfony (3.4) project with only one API endpoint that prints out static JSON.
When I run that API for the first time everything works fine, but from the second time on I get this error:
(1/1) FatalErrorException
Compile Error: Cannot redeclare class Symfony\Bundle\FrameworkBundle\Controller\ControllerNameParser
in ControllerNameParser.php (line 24)
This happens because that class gets redefined inside cache/{env}/classes.php. In fact, clearing the cache makes the API work again, but only for the first request.
How can I solve the issue? I think it could be something related to the composer.json autoload; here is the snippet:
"autoload": {
"psr-4": {
"AppBundle\\": "src/AppBundle",
"MyCompany\\TypeBundle\\": "src/MyCompany/TypeBundle"
},
"classmap": [
"app/AppKernel.php",
"app/AppCache.php"
]
},
Thanks in advance
|
Symfony - Duplicate class definition in cache
|
Does redis provide me with any such functionality where I can do this LRU style eviction of hashmap entries not touching rest of the stored keys?
No, it doesn't.
Or can one build it on top of what redis provides in any way?
Yes, one can.
There are 3 ways one could go about it:
Client-side logic: you can manage the Hash's fields eviction logic in your application. This will require storing additional (meta) data in the Hash's values (i.e. delimit/structure the meta and real data in the value), at the Hash's level (you can use "special" field names, like "_eviction_heap_"), and/or with additional data structures (looks like a Sorted Set per Hash would be useful).
Server-side Lua: for optimizing the above, you can package the logic in Lua and execute it with the EVAL command.
Redis modules: this is the advanced stuff, but if you're up to it you can pretty much do anything - including implementing a new "hashmap with size limit and LRU eviction functionality" data structure.
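For instance, here is a rough sketch of option 1 using the node-redis (v4) client, tracking field recency in a companion sorted set. The function names, the size limit, and the ':lru' key suffix are made up, and the check-then-evict cycle is not atomic (option 2, Lua, fixes that):
const { createClient } = require('redis');
const client = createClient(); // call client.connect() once at startup
const MAX_FIELDS = 1000; // illustrative size limit

async function boundedHashSet(hash, field, value) {
  await client.hSet(hash, field, value);
  await client.zAdd(hash + ':lru', { score: Date.now(), value: field });
  // Evict least-recently-used fields once the limit is exceeded
  while ((await client.hLen(hash)) > MAX_FIELDS) {
    const [oldest] = await client.zRange(hash + ':lru', 0, 0);
    await client.hDel(hash, oldest);
    await client.zRem(hash + ':lru', oldest);
  }
}

async function boundedHashGet(hash, field) {
  const value = await client.hGet(hash, field);
  if (value !== null) {
    // Touch the field so it counts as recently used
    await client.zAdd(hash + ':lru', { score: Date.now(), value: field });
  }
  return value;
}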
|
Let's say I have some keys in a Redis store. I want to keep some key-value pairs in a new hashmap structure. I also want to keep a limit on the size of this hashmap and evict the least recently used key-value pair of the hashmap when its size grows beyond a limit, while not touching the rest of the already present Redis data structures. Does Redis provide me with any such functionality, where I can do this LRU-style eviction of hashmap entries without touching the rest of the stored keys? Or can one build it on top of what Redis provides in any way? Thanks for the help!
|
Redis: hashmap with size limit and LRU eviction functionality
|
Ideally I'd like to do something like this = Assets.cached[id] inside the constructor
The magic keyword here is return. You can just return an arbitrary object from the constructor and it will be used instead of this.
constructor(id, some_property) {
  if (id in Assets) {
    // Use the cached instance instead of creating a new one;
    // returning an object from a constructor overrides `this`
    return Assets[id];
  } else {
    this.id = id;
    this.some_property = some_property;
    // Cache this object
    Assets[id] = this;
  }
}
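A quick check of the resulting behavior (the id and values are made up):
const a = new Asset(42, 'first');
const b = new Asset(42, 'ignored'); // constructor returns the cached instance
console.log(a === b);          // true
console.log(b.getProperty());  // 'first'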
|
I've tried to search for instance caching and singletons on Google and StackOverflow without success, seeing only posts about module.exports, if you know a post that answers this question, feel free to reference it. Thank you!
I have an application that needs to work on a set of objects that rarely change, and hence need to be cached for performance optimisation.
Here is a toy example where a single property is set directly.
When I call the application, I export an object that will contain the set of cached objects in assets_cached.js:
const Assets = {};
module.exports.Assets = Assets;
In another module of the application I have an ES6 class:
const _ = require('lodash')
const { Assets } = require('./assets_cached')
class Asset {
constructor(id, some_property) {
if (id in Assets) {
// Update instance data with cached properties
_.assign(this, Assets[id]);
} else {
// If it's not cached, create a new object
this.id = id;
this.some_property = some_property;
// Cache this object
Assets[id] = this;
}
}
getProperty() {
return this.some_property;
}
setProperty(value) {
this.some_property = value;
// Is there a way of avoiding having to do this double assignment?
Assets[this.id].some_property = value;
}
}
module.exports = Asset;
How may I avoid having to set some_property twice (in the current instance and in the cache), while ensuring that other instances are updated in parallel?
Ideally I'd like to do something like:
if (id in Assets) {
this = Assets.cached[id]
}
inside the constructor, but this is not possible.
What's the most elegant and correct way of making this work?
|
Node + ES6 classes: Setting up a set of cached objects
|
1) For the delete part:
You can also use this lib
const revDelete = require('gulp-rev-delete-original');
and call it like ".pipe(revDelete())" after calling .pipe(rev())
2) For the reference part (update new hash references):
I use gulp-rev-replace lib
const revReplace = require('gulp-rev-replace');
and call it like ".pipe(revReplace())" after ".pipe(revDelete())"
I see that you found a solution, but I'm showing an alternative that may be useful in the future; a combined pipeline is sketched below.
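Putting the pieces together, a sketch of the combined pipeline (untested; paths follow the question's public/ layout):
const gulp = require('gulp');
const rev = require('gulp-rev');
const revDelete = require('gulp-rev-delete-original');
const revReplace = require('gulp-rev-replace');

gulp.task('revision', function () {
  return gulp.src(['public/*.css'])
    .pipe(rev())        // rename with a content hash
    .pipe(revDelete())  // delete the original, un-hashed files
    .pipe(gulp.dest('public'))
    .pipe(rev.manifest())
    .pipe(gulp.dest('public'));
});

gulp.task('revreplace', ['revision'], function () {
  const manifest = gulp.src('public/rev-manifest.json');
  return gulp.src('public/index.html')
    .pipe(revReplace({ manifest: manifest })) // rewrite references in index.html
    .pipe(gulp.dest('public'));
});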
Could never get rev-del to do anything. gulp-rev-delete-original simply worked OOB: .pipe(rev()).pipe(revDel()).... Thanks for the suggestion.
– Metro Smurf
Oct 22, 2021 at 15:49
|
I am building an angular2 project using systemJS. I use Gulp to deploy.
I want to avoid making my users click on ( Cntl + R ) to get rid of the cached css files.
To do that, I wanted to make file revision using Gulp.
For now, all I need is to change a CSS file, so I am using:
gulp-rev , to rename my files and to create my manifest.json
gulp-rev-collector , to replace these files name from where we call them using our manifest
rev-del, to delete the old files.
Note that my CSS file is under a folder named public, and my index.html is also under public/.
gulp.task("revision:rename", function () {
return gulp.src(["public/*.css"])
.pipe(rev())
// .pipe(revDel())
.pipe(gulp.dest('public'))
.pipe(rev.manifest())
.pipe(gulp.dest("public"))
});
gulp.task("revision:updateReferences", ["revision:rename"], function () {
gulp.src(["public/rev-manifest.json","public/*.css"])
.pipe(collect({
replaceReved: true,
dirReplacements: {
'css': 'public'
}
}))
.pipe(gulp.dest("public"))
});
Calling these tasks, the CSS files get renamed correctly, and a correct manifest.json is produced.
But I have two problems:
The collector doesn't update my HTML file, and I don't know how to deal with that.
rev-del throws an error when I use it (I commented it out for the moment).
Any help will be appreciated!
Thank you
|
How to correctly solve browser Cache With Gulp-Rev and Gulp-Rev-Collector so that my index.html file is updated
|
IIS Cache settings have no affect on service worker caching. Remember the server code and the client code are completely decoupled.
What you are setting in IIS is the Cache-Control header value. This value is used by the browser cache, not service worker cache. You are 100% in control of what gets cached and how long it is cached in the service worker cache.
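As a rough illustration, a fetch handler like the following caches whatever it decides to, regardless of the Cache-Control value the server sent (the cache name is made up):
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('app-cache-v1').then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        if (cached) return cached; // served from the service worker cache
        return fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone()); // cached despite "no-cache"
          return response;
        });
      });
    })
  );
});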
|
Specifically, the Cache-Control property:
<?xml version="1.0"?>
<configuration>
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Cache-Control" value="no-cache" />
</customHeaders>
</httpProtocol>
<staticContent>
<remove fileExtension=".json" />
<mimeMap fileExtension=".json" mimeType="application/json; charset=utf-8"/>
</staticContent>
</system.webServer>
</configuration>
I'm developing locally with a Node server and everything works fine, but on our deployment server the app runs in an IIS instance and the ServiceWorker isn't caching the requested assets. It's not throwing errors either, so I'm wondering if it's just this "no-cache" declaration getting in the way.
I'm super new to ServiceWorkers and not at all a devops guy. Not hunting for the exact solution, just trying to narrow down the diagnosis so I have a clearer idea what to ask my back-end developer.
Thank you!
|
Do the web.config settings for IIS interfere with a ServiceWorker caching?
|
setIndexedTypes takes an even number of parameters: those at odd positions (1st, 3rd, ...) are key types, and those at even positions (2nd, 4th, ...) are the corresponding value types. In your case you should probably use the id parameter as a key, so you should call it this way:
cacheConfig.setIndexedTypes(Long.class, Person.class);
Javadoc for setIndexedTypes method contains a pretty good explanation of this method: https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setIndexedTypes(java.lang.Class...)
UPD:
A SQL table will be registered for each pair of parameters that you provide to the setIndexedTypes method.
Your SQL entities will map to cache records, and they will have _key and _val columns in addition to the ones that you configured as QuerySqlField-s. So you should specify the types of keys and values that will be used in the cache for each table.
You can refer to this page for more information: https://apacheignite.readme.io/docs/dml#basic-configuration
So in case, lets say I have two indexes on the class Person @QuerySqlField(index = true) private long id; @QuerySqlField(index = true) private String orgId; My indexTypes should be -- cacheConfig.setIndexedTypes(Long.class, Person.class, String.class, Person.class); is this correct?
– frewper
Oct 6, 2017 at 6:40
@frewper no, actually key type doesn't depend on indexes. Even if you make other fields indexed, then setIndexedTypes call will stay the same. I added some info about it to my answer.
– Denis Mekhanikov
Oct 6, 2017 at 8:49
|
I am following the code for running SQL queries against the Ignite cache, but I am not able to fully understand the use of the CacheConfiguration.setIndexedTypes API.
I am following the only help that I could find at the ignite site.
The documentation here says to use
CacheConfiguration.setIndexedTypes(MyKey.class, MyValue.class).
Now let's say in the Person class:
@QuerySqlField(index = true)
private long id;
@QuerySqlField
private String firstName;
Which are the parameters that I should be passing in the setIndexedTypes method?
|
How to use CacheConfiguration.setIndexedTypes for Ignite Cache
|
I pursued this on github where, long story short, I ended up caching the json string and JObject.Parse it on retrieval.
The pertinent question to ask was "are you using a distributed cache?" I am, and I'm also using a local cache.
If I was only using a local cache, I could put the JObjects straight in the cache as no serialization is involved.
When using a distributed cache however, you actually cannot place a JObject into it since that type isn't serializable (no SerializableAttribute).
When using both, you're constrained by the requirements of both, meaning you're left with caching the json strings and parsing them on retrieval.
It is possible that one could use the CacheManager.Serialization.Json package to swap out the serialization mechanism. But I'd rather keep the binary serializer in my scenario as I'm mostly caching POCO's and a binary serializer should be more efficient in general. I don't think using this would net me any performance gains anyway, since the built-in serializer would have to transform the JObjects to json internally anyway.
In conclusion: By keeping the binary serializer and caching the json as strings I don't lose out on performance, but I do have to add a few JObject.Parse(..) here and there when reading from the cache. With decent encapsulation, that's not an issue.
|
I have some JObjects which I need to cache, and I wonder what best practice is when caching such data in CacheManager?
I'm concerned with
Using a reasonably small amount of memory in the cache.
Not serializing unnecessarily, to avoid useless processing.
If I cache the json string I need to parse it every time I read the cache.
If I cache the JObject I don't know how it will be serialized to the cache. Probably as a non-compact binary array. But I won't have to do anything to it after retrieving it.
That's why I'm considering that perhaps it will serialize Bson better, or maybe that's going to simply add another layer of serialization? After all, I'll have to convert the Bson to JObject when reading the cache, much like if I were to cache the json string.
|
Should I cache json, Bson or JObject in the ICacheManager?
|
You can set the 'Expires' header by configuring Liferay's HeaderFilter:
com.liferay.portal.servlet.filters.header.HeaderFilter
Liferay filters are configured in WEB-INF/liferay-web.xml; the default configuration is:
https://github.com/liferay/liferay-portal/blob/master/portal-web/docroot/WEB-INF/liferay-web.xml
Out of the box, Liferay only adds "Expires" and "Cache-control" headers to some URL patterns, see:
filter definitions: WEB-INF/liferay-web.xml lines 144-207
filter-mapping definitions: WEB-INF/liferay-web.xml lines 570-617
So you have to create:
new filter entry with the headers you want to add
new filter-mapping with the urls you want to modify
|
Is it possible to set cache expired time for whole pages in liferay?
I found next solution but it's disabling caching in browser at all. Is it possible to set expiring time for whole pages?
#
# Set this to true if you want the portal to force the browser cache to be
# disabled. It will only disable the cache for the rendered HTML response.
# It will not have an impact on static content or other resources.
#
browser.cache.disabled=true
#
# Set this true if you want to disable the cache for authenticated users.
# This property is not read when the property
# "browser.cache.signed.in.disabled" is true. This is useful to ensure that
# authenticated users cannot go to the sign in page by clicking on the back
# button in their browsers.
#
browser.cache.signed.in.disabled=true
|
Cache expired time for all pages in liferay
|
It's not possible to return two responses from the SW to the page as far as I know.
What you can do, for instance, is to use the PostMessage or Broadcast Channel APIs to send a message from the SW to the page with the fresh data. Basically you would request something from the page using the Fetch API, then return cached data from the SW and at the same time initiate a network request, then refresh the cache with whatever the SW received from the network, and lastly if the response was something new, notify the page via PostMessage that "hey, there's a new version available, take it here: { data }".
This, however, might not be any better than what you saw in the Offline Cookbook, and could actually make you write more code. You would somehow need to manage the messages (with fresh data) from the SW and keep track of what data to update on the page etc.
What I would do if I understand you correctly: generalize the Offline Cookbook idea in a function that can be used instead of simple Fetch request and make it return a Promise. It should be fairly simple to go through the code and replace fetch() calls with customFetch() calls and keep all the Caches API/SW logic somewhere else.
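For instance, a rough sketch of such a helper (not a drop-in: the 'data-v1' cache name is made up, and the onData callback intentionally fires twice, first with cached data and then with fresh data):
function customFetch(url, onData) {
  // 1. Serve cached data immediately, if present
  caches.match(url).then(function (cached) {
    if (cached) cached.json().then(onData);
  });
  // 2. Always fetch a fresh copy, update the cache, and call back again
  return fetch(url).then(function (response) {
    return caches.open('data-v1').then(function (cache) {
      return cache.put(url, response.clone()).then(function () {
        return response.json().then(onData);
      });
    });
  });
}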
|
I'm wanting to cache volatile data with my service worker. My thought is to return first with the possibly stale cache match, and simultaneously fire off the fetch request and return that data if/when the fetch is successful.
This is the code I'm running for non-volatile data that I want to cache:
return e.respondWith(
caches.match(request.url).then(function(cacheResponse) {
// Found it in the cache
if (cacheResponse) {
console.log('[ServiceWorker] Found it in Cache')
return cacheResponse
// Not in the cache
} else {
console.log('[ServiceWorker] Not in Cache')
return fetch(request).then(function(fetchResponse) {
console.log('[ServiceWorker] Fetched from API', fetchResponse)
// don't cache errors
if (!(fetchResponse && fetchResponse.status === 200)) {
console.log('[ServiceWorker] Fetch error, no cache')
return fetchResponse
}
return caches.open(cacheName).then(function(cache) {
cache.put(request.url, fetchResponse.clone())
console.log('[ServiceWorker] Fetched and Cached Data', request.url)
return fetchResponse
})
})
}
})
)
How would I return both the cachedResponse and the fetchResponse asynchronously? Is this possible?
UPDATE:
I've read Jake Archibald's offline cookbook where he talks about cache then network, which is exactly what I'm wanting to do, but it appears you need to add code everywhere you make a call where you want this 'cache then network' service.
I was hoping that there would be a way to do this directly from the service worker, though I'm not sure it's possible, since once you return with cached data how could you also return fetch data without making the exact same request again from the page?
Thanks for the help!
|
ServiceWorkers: Return cached response first and fetch response async
|
There are two headers to look at when determining whether CloudFront is caching:
X-Cache: Hit means the object was served from the cache; Miss means the request went to the origin.
Age: how many seconds the object has been in the cache.
Based on those two headers, yes, CloudFront is caching your content, as you asked it to by setting Cache-Control headers.
|
I am serving static images from an S3 bucket and adding a header of Cache-Control: public,max-age=31557600 to all assets in the bucket. The assets are then distributed via cloudfront. Here are the headers in the browser:
Request URL: cloudfront url here
Request Method:GET
Status Code:304 Not Modified
Remote Address: remote address
Referrer Policy:no-referrer-when-downgrade
Response Headers
HTTP/1.1 304 Not Modified
Connection: keep-alive
Date: Fri, 25 Aug 2017 14:00:27 GMT
ETag: "871e4a2d65f891b79a30b1fdf7622650"
Server: AmazonS3
Age: 52182
X-Cache: Hit from cloudfront
Via: 1.1 f348970492a18bf5c630c5acc86c1ee3.cloudfront.net (CloudFront)
X-Amz-Cf-Id: u35A-l_zhEAMsJSmtLmf4VFIPfBfDLdBqIjdjwfAJSDBcJhxLC7OdA==
I am unsure about what to make out of this. I believe hit from cloudfront means that the edge servers are caching my assets. Does this mean that CloudFront has obeyed the Cache-Control headers being sent from S3, and these images are being cached in CloudFront edge servers? Or are the images being cached in the broswer? I appreciate any help with elucidating my confusion. Thanks!
|
Serving images from s3 with Cache-Control header distributed with CloudFront not caching?
|
InnoDB doesn't have any option to direct certain tables to stay in memory and other tables to stay out of memory. But it's kind of unnecessary.
InnoDB reads tables by loading them page-by-page into the buffer pool. Your usage of the tables guides InnoDB to keep pages in memory.
Reading a page once in a while is unlikely to kick out pages that you need to stay in memory. InnoDB keeps an area of the buffer pool reserved for recently-accessed pages. There's an algorithm for "promoting" pages into this reserved area, and pages that aren't promoted tend to get kicked out first.
Read details here: https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool.html
If you really need to ensure that certain tables are not cached in the InnoDB buffer pool, the only certain way is to alter the storage engine for those tables. Non-InnoDB tables (e.g. MyISAM) are never cached in the InnoDB buffer pool. But this is probably not a good enough reason to switch storage engine.
|
Let's say I have several InnoDB tables:
1. table_a 20Gb
2. table_b 10Gb
3. table_c 1Gb
4. table_d 0.5Gb
And a server with limited memory (8Gb)
I want fast access to table_c and table_d, and can allow slower access to table_a and table_b.
Is there a way to direct MySQL to cache c,d in memory, and NOT a,b?
(I'd move a,b to a different servers, but sometimes I require a join on a,c)
|
How to direct MySQL not to cache a table in memory?
|
If the server does not provide explicit expiration times, a cache MAY assign a heuristic expiration time.
Defined in RFC 7234 Section 4 and Section 4.2.2
One heuristic algorithm is
('date header value' - 'last-modified header value') * 10%
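As a rough illustration, applying that heuristic to the Date and Last-Modified values from the headers quoted in the question below:
const date = new Date('Fri, 14 Jul 2017 03:23:27 GMT');
const lastModified = new Date('Thu, 13 Jul 2017 11:03:20 GMT');
const freshnessSeconds = (date - lastModified) / 1000 * 0.10;
console.log(freshnessSeconds); // about 5881 seconds, i.e. roughly 98 minutes of heuristic freshness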
|
The Chrome browser returns HTTP 200 from disk cache, but I don't find "Expires" or "Cache-Control" in the response headers. As I understand it, there should be an Expires or Cache-Control header in the response for the resource to be served from cache.
Access-Control-Allow-Credentials:true
Access-Control-Allow-Headers:DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type
Access-Control-Allow-Methods:GET, POST, OPTIONS
Access-Control-Allow-Origin:*
Content-Encoding:gzip
Content-Security-Policy-Report:default-src 'self' 'unsafe-eval'; img-src *; child-src 'self' *; connect-src 'self' * wss:; script-src 'self' 'unsafe-eval' 'unsafe-inline' *.modules.yaas.io js.stripe.com *.sapjam.com *.hanatrial.ondemand.com; style-src 'self' 'unsafe-inline' sapui5.hana.ondemand.com data: *.yaas.io api.eu.yaas.io api.us.yaas.io s3.amazonaws.com accounts.sap.com
Content-Type:application/x-javascript
Date:Fri, 14 Jul 2017 03:23:27 GMT
Etag:W/"59675378-8db28"
Last-Modified:Thu, 13 Jul 2017 11:03:20 GMT
Server:nginx/1.11.13
Vary:Accept-Encoding
X-Frame-Options:SAMEORIGIN
X-Vcap-Request-Id:34e06156-0a53-49d8-6e1e-f0ad50ac46bb
X-Xss-Protection:1; mode=block
Please see the HTTP response header screenshot.
When I investigate with Firefox's Firebug, there is a cache section indicating an expiration date, but there is no expiration date in the response header.
|
Why return http 200 code from disk cache, neither expire nor cache-control in response header?
|
You could delete your cache every time before you update, or before you close your application.
Code to clear the cache:
public static void trimCache(Context context) {
try {
File dir = context.getCacheDir();
if (dir != null && dir.isDirectory()) {
deleteDir(dir);
}
} catch (Exception e) {
// Ignore: failing to clear the cache is not fatal
}
}
public static boolean deleteDir(File dir) {
if (dir != null && dir.isDirectory()) {
String[] children = dir.list();
for (int i = 0; i < children.length; i++) {
boolean success = deleteDir(new File(dir, children[i]));
if (!success) {
return false;
}
}
}
// The directory is now empty so delete it
return dir.delete();
}
Call trimCache when you want to clear the cache (before you update, perhaps), or override the onStop() method to clear the cache when the application is about to close.
|
After some research I found that Android likes to cache some parts of an app while installing, to improve performance at runtime.
Is there a way to prevent Android from caching things from my app?
I am sharing my app via my website and users install and update the app manually. As soon as I update my app some Activities and Code-parts seems to be cached on their devices.
|
How to prevent app from being cached in android
|
You only need to add the query-string appendage to the reference in your code; there is no need to alter the actual file name.
|
I would like to know the best process for cache busting using query strings, I've read up on a few different sources but I still don't quite understand how to implement it..
If I reference a new file in my HTML header, e.g. "style.css?v=1.1", does that file have to be renamed to reflect the new appendage?
Or should I just leave the filename as "style.css" and let the server figure it out?
Thanks in advance.
|
Cache Busting with Query Strings
|
It sounds like you have $http caching turned on. Try disabling it for this request
$http.get('data/all.json', {cache: false})
https://docs.angularjs.org/api/ng/service/$http#caching
If that doesn't work (it is still cached), then it sounds like server-side caching. You can bust this by sending a unique query string.
$http.get('data/all.json?_=' + Date.now(), {cache: false})
This will make each request a unique request and should prevent the server side caching.
One caveat is that since you are ignoring the caching, you lose all the performance benefits of caching.
|
I'm building an app where I add and delete items from a JSON file.
I've encountered the following problem: When I delete an item, it gets reflected in the frontend (the item disappears), but it takes a couple of hard page reloads for the app to read the new JSON file produced by my PHP file instead of the cached one.
If I just reload once, it will just read the JSON file in cache, which doesn't reflect the changes made.
Is there any way to deal with this issue directly in AngularJS?
Here's my Angular code:
$scope.remove = function(array, index){
if($scope.totsselected){
array.splice(index, 1);
$http.post("deleteall.php", {
data : array
})
.then(function(data, status, headers, config) {
$http.get('data/all.json')
.then(function (response) {
$scope.productesgenerals = response.data;
console.log($scope.productesgenerals);
}).catch(function (error) {
});
});
}
};
And my PHP code:
<?php
$contentType = explode(';', $_SERVER['CONTENT_TYPE']);
$rawBody = file_get_contents("php://input"); // Read body
$data = json_decode($rawBody); // Then decode it
$all = $data->data;
$jsonData = json_encode($all);
file_put_contents('data/all.json', $jsonData);
?>
|
Dealing with cache and updated JSON files in AngularJS
|
You should explore distributed caching, which will allow you to access the cache across multiple applications. Redis is one such caching provider. There are multiple .NET clients available to access a Redis cache; StackExchange.Redis is one such client. You can read more about it here: StackExchange .NET Redis client, and here you can find details on how to install Redis on Windows.
Agree, although I'd still consider a TCP connection as an option.
– Vladimir
Feb 8, 2017 at 10:22
Using a distributed cache is the correct answer ;) Don't try to build that yourself with custom TCP connections. You can also use abstractions so you don't deal with the Redis client directly, like cachemanager.net
– MichaC
Feb 9, 2017 at 11:11
|
Basically, I have two separate applications; let's call them AppA and AppB. I have implemented the cache feature available in
"Microsoft.Practices.EnterpriseLibrary.Caching.dll" and "Microsoft.Practices.EnterpriseLibrary.Common.dll".
I want a cache entry to be added using AppA:
var cacheManager = CacheFactory.GetCacheManager();
cacheManager.Add("SharedData","Data from AppA");
Now, I want the next application, "AppB", to retrieve this value:
var cacheManager = CacheFactory.GetCacheManager();
var data = cacheManager.GetData("SharedData");
Is this possible, given that these applications run in different application domains? If not, can anybody suggest an alternative to achieve this behavior?
|
How to share a cached data between the application running in separate application domain?
|
The "Response" column in Chrome's cache storage viewer is meant to reflect the value of the statusText field of the associated Response object. The traditional status line for a successful response is '200 OK', so 'OK' is the statusText portion of that. You can see this in action by storing a Response with a different statusText, e.g.:
Sometimes when there is no 'OK' listed, it's because the Response is opaque, and the statusText isn't accessible. That's the case for those font resources in your second screenshot.
That being said, I have seen situations in which responses that are not opaque, like those sw-precache-managed entries in your first screenshot, don't have the statusText reflected properly. It might be that in those situations your web server is just responding with a '200' status line rather than '200 OK', which would translate to a status of 200 and a statusText of '' (empty). That's a perfectly valid thing for a web server to use for a status line, and as long as the entries are being used successfully, I don't think you need to worry about it.
|
I'm getting my preactjs website set up with service worker.
Using the Sw-precache-plugin
I'm just looking at the cache items it stores in the Cache Storage, and the entries I expected are all there but they have an empty response.
I checked some other websites with service workers and noticed their entries have a response of "OK".
Just wondering what is the trigger to make a cache response say "OK" rather than blank.
Another example I found where some return OK and others Blank
|
Application Cache Storage items have empty response?
|
You need to use the Ionic modal's remove() instead of hide(). Refer to the Ionic documentation:
http://ionicframework.com/docs/api/service/$ionicModal/
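A minimal sketch (the template URL and handler name are made up):
$ionicModal.fromTemplateUrl('templates/marking.html', {
  scope: $scope
}).then(function (modal) {
  $scope.modal = modal;
});

$scope.closeMarking = function () {
  // remove() destroys the modal and its backdrop instead of just
  // hiding them, so hidden backdrops don't pile up in the DOM
  $scope.modal.remove();
};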
|
I'm encountering a problem where modal-backdrop hide elements keep getting appended in my Ionic view.
I need to call a modal from my code. In my case, when students click the button to check the answer, a modal with the marking status comes out.
However, if I repeatedly answer a lot of questions, another modal-backdrop hide element is appended every time I click the button.
Is there any way to clear the cache for the modal, like with <ion-view cache-view="false"></ion-view>?
|
How to avoid `modal-backdrop hide` insert to ionic cache?
|
I think that Safari doesn't cache audio files even if they are listed in the manifest file. Have you tried encoding the audio as a BASE64 string and decoding/converting it back to audio at startup? Have a look at the Web Audio API; it can probably help you with that process.
|
I am building an HTML5 web game, a spin on the classic asteroids game to be precise. The problem I'm running into is that Safari, unlike Chrome or Firefox, will NOT automatically cache and use .wav sound files. This is causing substantial lag if there are an abundance of sounds occurring. On each instance of needing a "shooting" sound, the browser will perform a new GET request to obtain the audio file, as seen below:
you will see that the 151013__bubaproducer__laser-classic-shot-2.wav is the shooting sound that is being requested through the network over and over again and is not being cached. In an attempt to force Safari to cache this file, I've created an Asteroids.manifest file for the appcache to utilize:
Yes, that is the correct directory location relative to both the index.html as well as the Asteroids.manifest. And the manifest file does appear to be consumed, as it is visible in Safari's debugger storage:
Please let me know if you have any suggestions, as I've tried a variety of different setups of the .manifest file, including naming it .appcache, ensuring that it is served up with a MIME type of text/cache-manifest, and removing the NETWORK/CACHE/FALLBACK fields
Edit note: the window.applicationCache.status() is 1 (Idle)
|
Safari not utilizing application cache manifest
|
Referring to a background image URL in CSS will form a HTTP GET Request within the browser which will first check the local cache for that URL.
So yes - if the file is cached locally, and the URL is exactly the same, it will pull it from the local cache.
|
If I am using background: url in my CSS files like:
background: url("/app/assets/imgs/someimage.png") no-repeat !important;
Is the image will load from my cache(assuming it is there), or it will send a new request for it?
I have checked chrome://cache/ and the image exists.
|
CSS- loading images from cache
|
Alluxio is a memory-centric distributed storage system. Alluxio can be used to cache Spark RDDs in memory, for multiple and future Spark applications and jobs to access.
Spark can store RDDs in Alluxio memory, and future Spark jobs can read them from Alluxio memory. That blog post has more details on how that works. Here is information on how to setup and configure Alluxio with Spark.
|
I have a case where I want to download some data from a remote store every one hour and store that as Key-Value pairs in a RDD on an executor/worker. I want to cache this RDD so that all future jobs/tasks/batches running on this executor/worker can use the cached RDD to do a lookup. Is this possible in Spark Streaming?
Some relevant code or pointers to relevant code will be helpful.
|
Can we use cached RDD across batches on an executor
|
You could use the app delegate methods and implement a check every time the app enters the foreground. You could keep a variable that holds the last time the user opened the app and, if the current time is 3 hours past that, clear the cache.
As stated in the comments, most users will not have 3-hour sessions.
On a side note: most users understand images not updating immediately; for instance, I know Twitter takes a while to update. In my app, I leave the cache alone and re-download all images any time the app is completely quit. If you compress the images before putting them on the server, this won't hurt data usage too badly.
|
I'm struggling to think of a good solution here. So, in my app, I have a UITableView that loads user data, and each cell has a profile image UIImageView. So, every single time it's looping through the cells, it's individually downloading the profile pic for each user. For THIS reason, I started using NSCache to store the profile pics. This worked tremendously, and now the lag is gone.
However, what if someone changes their profile pic? The profile pics are uploaded in backend one per user. I use User ID to reference these profile pics. If someone changes their profile pic, it will just load the image I have in the cache, and not their new image. So, I want to have the entire cache on the IOS device cleared like once every 3 hours, or so. That will cure everything. How do you clear the entire NSCache at intervals?
TL;DR:
How would I clear the whole NSCache for an app at time intervals? (example: have the app cache cleared every 3 hours)
|
How to periodically clear NSCache in swift?
|
Building on the comment regarding local storage. You can build a check for local storage into the process of loading the data, if the check comes up empty then run the ajax request.
componentDidMount() {
  const source = 'http://api.my.com/activities';
  // Check for the data you need and either get it from the API
  // or use the data in local storage.
  const data = localStorage.getItem('data');
  if (!data) {
    $.get(source, (result) => {
      this.setState({
        activities: result,
      });
      // localStorage only stores strings, so serialize the result
      localStorage.setItem('data', JSON.stringify(result));
    });
  } else {
    this.setState({
      activities: JSON.parse(data),
    });
  }
}
You could break this logic out into its own function, and put it in its own module, or use a different life cycle method. However, this follows your current pattern.
Note that you can use an npm library called localForage if you are storing anything other than a string. Or you can convert the return result to a string to store in localStorage. See here for a recent q&a on this topic.
|
Many people ask how to disable jquery ajax cache in React, while my question is different. I want it to be cached, or maybe more precisely, save the property I got from the first time call with ajax in browser memory or whatever, then it will not call the REST api again.
Below is my code:
import React from 'react';
import $ from 'jquery';
export default class ActivityIndex extends React.Component {
constructor(props) {
super(props);
this.state = {
activities: [],
};
}
componentDidMount() {
const source = 'http://api.my.com/activities';
this.serverRequest = $.get(source, (result) => {
this.setState({
activities: result,
});
});
}
componentWillUnmount() {
this.serverRequest.abort();
}
render() {
let rows = [];
this.state.activities.forEach((element) => {
rows.push(
<div key={element.act_id}>
<div>{element.act_id}</div>
<div>{element.act_title}</div>
</div>
);
});
return <div>{rows}</div>;
}
}
Every time I click back to this page, it calls the ajax again. I believe it's because I call the ajax in the componentDidMount method. Maybe I should put it somewhere else? Or how can I cache the ajax result so that the next time I enter this page, the ajax call is not made?
|
How to cache jQuery ajax result in React?
|
Cast the value to double precision or, if the whole precision is necessary, to numeric:
select sum(value::float)
from myclass
where date between ? and ? and account = ?
float with no precision specified means double precision
Numeric types
|
What is the best way to get the sum of my value based on the dates from the ignite cache
I'm getting all the data I need from my Postgres DB based on the selected dates.
value is of type String in MyClass.class
date | value | account
01-01-2015 | 363947.5636999999987892806529998779296875 | 110589
23-08-2016 | 56985.5636999999987892806529998779296875 | 110589
30-11-2016 | 875347.5636999999987892806529998779296875 | 110589
23-11-2016 | 756247.5636999999987892806529998779296875 | 225863
Then I want to sum the returned value using my cache query. Whats the best way to do this?
IgniteConfiguration ignitionConfig = new IgniteConfiguration();
ignitionConfig.setCacheConfiguration(cfg);
Ignite ignite = Ignition.getOrStart(ignitionConfig);
IgniteCache<Integer, MyClass> cache = ignite.getOrCreateCache(cfg);
StringBuilder builder = new StringBuilder();
builder.append(" SELECT SUM(value) FROM MyClass WHERE");
builder.append(" date BETWEEN ? AND ? AND ");
builder.append(" account = ? ");
SqlFieldsQuery qry = new SqlFieldsQuery(builder.toString());
qry.setArgs(startDate, endDate, account); // setArgs takes varargs and replaces previous args, so pass them all at once
try (QueryCursor<List<?>> cursor = cache.query(qry)) {
for (List<?> row : cursor){
System.out.println("test=" + row.get(0));
}
}
Alternatively, I can loop through each results but I wanted to use the SQL way first.
|
How to get the sum of my value based on the dates from apache ignite cache
|
To answer your first and second questions:
In-memory tables in SQL Server 2016 can be created in two ways. One is with SCHEMA_ONLY durability, which means the data is volatile and stays only in memory:
CREATE TABLE TestInMem (
i int,...<columns>
)
WITH (MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_ONLY);
The other way is SCHEMA_AND_DATA: here both schema and data are persisted to secondary storage on a periodic basis, and your operations will be faster because you access the data directly in primary memory:
CREATE TABLE TestInMem (
i int,...<columns>
)WITH (MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA);
https://msdn.microsoft.com/en-us/library/dn133186.aspx
|
Starting from Sql Server 2014 Microsoft implemented In-Memory OLTP that IMO is very interesting feature!
I never tried it but I'm interested in because it can really speed up my "READ" actions (I mean select queries).
My idea is to run "WRITE" actions (insert, update, delete) directly to the disk (not in memory) to be sure that data are written persistently.
Instead "READ" actions (in particular queries on big tables, counters at Application_Startup, ecc) will be done in memory.
Now I have some questions:
1) Is the in-memory table synchronized in some way with the data saved on disk?
2) Is it possible to implement what I wrote above, or did I misunderstand?
3) Because the Enterprise edition of SQL Server 2016 costs too much for a startup or a small company, is it possible to implement all of this using Redis?
I'm also new to Redis and I'm not sure that it has an "in-memory table" feature.
Searching on Google, I found that it provides a cache (it's not clear of what, queries?) and that it's an in-memory data structure store (key-value pairs).
Thank you in advance guys
|
Sql Server In-memory table implemented with Redis
|
Your location ~ \.(css|js)$ block inherits root /usr/share/nginx/html from the server block.
Also, regular expression location blocks take precedence over prefix location blocks - see this document for details.
You can force your location /site/admin/ block to override regular expression location blocks at the same level by using the ^~ modifier:
location ^~ /site/admin/ {
alias /usr/share/nginx/html/site/admin/src/;
}
The above location block is a prefix location (and not a regular expression location block). See this document for details.
Of course, this also means that URIs beginning with /site/admin/ and ending with .css or .js will no longer have their caching parameters changed. This can be fixed by adding a nested location block, as follows:
location ^~ /site/admin/ {
alias /usr/share/nginx/html/site/admin/src/;
location ~ \.(css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
}
}
|
I'm using nginx host by Ubuntu 14.04
My config file :
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.php index.html index.htm;
# Make site accessible from http://localhost/
server_name testing.com;
location /site/admin/ {
alias /usr/share/nginx/html/site/admin/src/;
}
location ~ \.(css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
}
I've got some errors
[error] 29224#0: *10047 open() "/usr/share/nginx/html/site/admin/assets/js/jquery.nestable.js" failed (2: No such file or directory)
Actually that file is located :
/usr/share/nginx/html/site/admin/src/assets/js/jquery.nestable.js
How should I set up my config file?
|
Config caching css/js files at Nginx
|
What I understand from reading various resources is that the following headers may help in caching authorized resources:
Cache-Control: public, max-age=0
max-age=0: requires the cache to revalidate with the server using a
conditional GET request. While revalidating with the server, the
Authorization headers will be sent to the server.
max-age=0 differs from must-revalidate in that max-age=0 also allows
caching of responses that contain Authorization headers.
Also refer
Rest in Practice - REST+caching+authorize
Web Caching - Authentication
|
I have Web API method as listed below, for a REST service. This is for getting all users information for InventoryAuditors. Only authorized InventoryAuditor users can access this resource.
[RoutePrefix("api/users")]
public class UsersController : ApiController
{
[Authorize(Roles="InventoryAuditor")]
[Route("")]
[HttpGet]
public List<User> GetAllUsers()
{
//Return list of users
}
}
public class User
{
public int UserID { get; set; }
public string FirstName { get; set; }
}
Questions
Is this resource cacheable for shared caches (like Forward Proxies and other intermediary caches)?
If yes, how does the shared cache perform authorization check – how does the cache know that the resource must be served only for InventoryAuditors?
How the headers should look like to make this authorized representation cacheable?
Or is HTTP Caching not all to be used in case of authorized resources?
Note: The article "Caching Tutorial for Web Authors and Webmasters" says:
By default, pages protected with HTTP authentication are considered private; they will not be kept by shared caches. However, you can make authenticated pages public with a Cache-Control: public header; HTTP 1.1-compliant caches will then allow them to be cached.
REFERENCES
https://www.rfc-editor.org/rfc/rfc7235#section-4.2
https://www.rfc-editor.org/rfc/rfc7234#section-3.2
https://www.rfc-editor.org/rfc/rfc7234#section-5.2.2
Hypertext Transfer Protocol (HTTP/1.1): Caching
Feature: Bearer Authentication- Squid
Stupid Web Caching Tricks
|
Authorization check for HTTP Caches
|
If your timestamp has enough precision that you can guarantee it will change any time the resource changes, then you can use an encoding of the timestamp (the header value needs to be ASCII).
But bear in mind that ETags may not save you much. An ETag is just a cache revalidation header, so you will still get as many requests from clients, just some of them conditional; you may then be able to avoid sending the payload back if the ETag didn't change, but you will still incur some work figuring that out (possibly much less work, so it can be worth it).
In fact several versions of IIS used the file timestamp to generate an ETag. We tripped over that when building WinGate's cache module, when a whole bunch of files with the same timestamp ended up with the same ETag, and we learned that an ETag is only valid in the context of the request URI.
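As a rough Node.js sketch of a timestamp-based ETag that sidesteps the identical-timestamp pitfall by mixing in the file size (the function name is made up):
const fs = require('fs');

function etagFor(path) {
  const { size, mtimeMs } = fs.statSync(path);
  // Combine size and mtime so files sharing a timestamp still differ
  return '"' + size.toString(16) + '-' + Math.trunc(mtimeMs).toString(16) + '"';
}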
|
So in one of my projects I have to create an HTTP cache to handle multiple API calls to the server. I read about the ETag header, which can be used with a conditional GET to minimize server load and enable caching. However, I have a problem with generating the ETag. I can use the LAST_UPDATED_TIMESTAMP of the resource as the ETag, or hash it using some hashing algorithm like MD5, but what would be the best way to do this? Are there any cons to using a raw timestamp as the ETag?
any supportive answer is highly appreciated .. Thanks in advance....Cheers!!
|
What is the best way to generate a ETag based on the timestamp of the resource
|
An objectify session is short lived and defines its own scope. It prevents multiple datastore gets for the same entity in disparate parts of your code by using memory, on the assumption that extra calls are typically wasteful.
If you require read/write consistency then you must use an idempotent transaction (which won't share the session cache and also does dirty checking and automatically retries).
By reading, mutating, then writing within the transaction, you always avoid the issue of what could be happening in other requests (on other instances or otherwise).
Objectify sessions are not by default shared, and definitely shouldn't be. They are not synchronized across requests or instances. You can use memcache by putting @Cache on an entity to enable the write through cache, but this is distinct from the ofy session.
|
From the caching documentation: https://github.com/objectify/objectify/wiki/Caching
The session cache is local to the Objectify instance. If you start a new session (via ObjectifyFactory.begin()), it will have a separate cache. If you use the thread-local ObjectifyService.ofy() method, the session cache will "just work" appropriately.
and
A get-by-key operation (single or batch) for a cached entity will return the entity instance without a call to the datastore or even to the memcache
My question is:
Request 1 is served by Instance A: an object is updated and persisted. The session cache will be updated because the object has been modified.
Request 2 is served by Instance B (which already has the object in its session cache due to a previous request): the session cache will be different because it is another instance.
How can the request get the updated entity instead of the previous one?
The App Engine memcache is shared across instances, but the session cache is single per instance (or even Objectify instance).
Is the session-cache synchronized among all the instances in some way?
Is it possible that different requests served by different instances can have different versions of the same object?
|
Objectify: how "session cache" works across instances
|
As Kevin said, it's called cache stampede.
One of the best documents to do with this problem I have read is Using memcached: How to scale your website easily (comes from Josef Finsel):
What we need in this instance is some way to tell our program that
another program is working on fetching the data. The best way to
handle that is by using another memcached entry as a lock.
When our program queries memcached and fails to find data, the first
thing it attempts to do is to write a value to a specific key. In our
example where we are using the actual SQL request for the key
name we can just append ":lock" to the SQL to create our new key.
What we do next depends on whether the client supports returning
success messages on memcached storage commands. If it does,
then we attempt to ADD the value. If we are the first one to attempt
this then we’ll get a success message back. If the value exists then
we get a failure indication and we know that another process is trying
to update the data and we wait for some predetermined time
before we try to get the data again.
When the process that’s updating
the cache is done, it deletes the lock key.
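As a rough sketch of that pattern with the node-redis (v4) client (key names, TTLs, and the retry delay are made up):
const { createClient } = require('redis');
const client = createClient(); // call client.connect() once at startup

async function getWithLock(key, fetchFromDb) {
  const cached = await client.get(key);
  if (cached !== null) return JSON.parse(cached);

  // NX: only set if absent; EX: the lock expires even if its holder dies
  const gotLock = await client.set(key + ':lock', '1', { NX: true, EX: 30 });
  if (gotLock === 'OK') {
    const value = await fetchFromDb();
    await client.set(key, JSON.stringify(value), { EX: 300 });
    await client.del(key + ':lock');
    return value;
  }
  // Another client is filling the cache: wait briefly, then try again
  await new Promise(function (resolve) { setTimeout(resolve, 100); });
  return getWithLock(key, fetchFromDb);
}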
|
I have a question concerning redis in a distributed architecture.
Assume I have n clients, either windows desktop applications or asp.net web/web api servers.
One of the clients, lets say client A, hits the cache for a data and has a miss (the data is not in the cache). The client then starts to get the real data (from lets say a database) and then sets it in the cache when it's done.
Client B comes along and wants the same data, does a fetch to the cache and since it's a miss, does the same processing.
Is there a way for Client B to ...(N) not to do the processing (i.e go to the database) until the data is in the cache and fetch the data from the cache instead when it's available?
I understand that on a single app (or web server), using threads it's easy to check that, but in a distributed architecture?
Is this also a correct way of thinking for the wait process?
If so, could Client A put a flag somewhere stating that it's loading Data X, and that all other clients should wait until it's done?
Otherwise, the idea then would be something along the lines of :
Client A requests Data X
Miss in cache
Processes Data X
Looks if Data X is now in cache
If not, add Data X to cache, otherwise, use it and don't store it in cache
Thanks!
|
Multiple clients using redis cache
|
Looking at the Cache class, it doesn't pre-allocate the files, so your 1000 MB will pass without any exception. BUT that doesn't mean it will work if the device doesn't even have that much free space around, which means...
When it runs out of space while saving a file, it will purge files according to LRU (see note a below).
No one can tell you the 'best' cache size since it depends on your use case. And you said "without having issues": what issues are you afraid of? Once a video file has been cached, is it likely to change? How often will it change? On average, how large is a video file? For segmented files (e.g. HLS), do you want to keep using the default LRU?
example: let's say you're doing HLS, each chunk is 500k, so on average, you can save 2000 chunks.
a: DiskLruCache.java from okhttp3 (Note it may change overtime, just search for it if link is broken)
|
Deps:
Kotlin 1.0.3
Exoplayer r1.5.9
Retrofit 2.1.0
okhttp 3.4.1
I'm trying to set up a cache for my video playback project (a list of videos, 20 seconds per video) and I'm wondering if there's any problem if I set the cache for OkHttp like this:
val cacheSize: Long = 1000 * 1024 * 1024 // 1000 MB <-------- HERE
val cookieManager = CookieManager()
cookieManager.setCookiePolicy(java.net.CookiePolicy.ACCEPT_ORIGINAL_SERVER)
return OkHttpClient.Builder()
.cache(Cache(File(cacheDir, "responses"), cacheSize))
.cookieJar(JavaNetCookieJar(cookieManager))
Might I get an exception if I try to use 1000 MB for the cache?
How can I find the best cache size for my app without having issues?
|
Whats the optimal/less-error-prone cache size for OkHttp client (used with exoplayer)
|
You are probably wondering why the numbers don't line up with the addresses, as in the direct-mapped case. What is going on in this diagram is that the items are placed into the sets left to right, that is all, because the sets are initially empty. The values 2, 0, 10 and 8 map to the leftmost set. The 2 appears first, so it is in the leftmost column. Then 0 is placed in the next available position. 2 occurs again, and that is a "hit", indicated by the parentheses. Then 10 occurs and goes into the third spot. 8 goes to the fourth spot, and the cache set is now full. 0 recurs, and there is a hit, since it is still in the cache, in the second spot. Now 4 occurs. The cache set is full: something has to be kicked out. The 2 is kicked out, since it is the oldest entry and the question specifies a FIFO replacement policy (note that the earlier hit on 2 does not refresh its insertion order under FIFO), and it is replaced by 4. That is why the 4 is in the leftmost column; it has replaced the 2. Now 2 occurs again and is no longer in the cache, since it was just kicked out. Now the oldest remaining block is 0, so it is kicked out and 2 now lives in the second spot.
Note that real four-way set-associative caches don't always use a full block-wide LRU replacement policy due to some further simplifications to speed them up.
And, by the way, the blocks are distributed into the sets by block number modulo the number of sets, which is 2 here, so even-numbered blocks go to set 0 (the left set) and odd-numbered blocks to set 1 (the right set):
set 0 (even)        | set 1 (odd)
0 2 4 6 8 10 ...    | 1 3 5 7 13 ...
As you can see, this is consistent with the diagram: 2, 0, 10, 8 and 4 land in set 0, while 5 and 13 land in set 1. Note that a block's column within its set does not encode its address: each block is given an arbitrary spot in its set based on the replacement policy.
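To double-check the hit count, here is a small simulation of the access sequence under that mapping (set = block mod 2), with 4-way sets and FIFO replacement; it reproduces the hits described above:
const accesses = [2, 5, 0, 13, 2, 5, 10, 8, 0, 4, 5, 2];
const sets = [[], []]; // two sets, each holding up to 4 block numbers, oldest first
let hits = 0;
for (const block of accesses) {
  const set = sets[block % 2];
  if (set.includes(block)) {
    hits++; // a hit does not change FIFO order
  } else {
    if (set.length === 4) set.shift(); // evict the oldest block
    set.push(block);
  }
}
console.log(hits + ' hits out of ' + accesses.length); // 4 hits out of 12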
|
The title may not be very good but I couldn't find a better one.
We had homework to do and I didn't hand it in because I didn't understand it. Now because it's over, we got the solutions... And now I'm trying to understand the task using the solutions because trying to understand the complicated script of our professor is a waste of time for me.
The task:
We have a direct mapped cache with following access frequency on main
memory blocks:
2 5 0 13 2 5 10 8 0 4 5 2
What's the hitting quote (aka hit rate) if the cache is a
set-associative cache with set size 4 and FIFO?
From my last question about direct-mapped caches, I learned how to compute the hit rate, and I want to say thank you very much for that, by the way.
My only problem for this is that I don't understand how the numbers are placed in the table like that.
I thought like programming maybe: 0-3 is array1 and other 0-3 is array2.
We take first number of the cache, 2 and put it in array1 so it is in array1[0]. Then we do same for the next number, take 5 and put it in array2[0]. Now take next number 0 and put in array[1].
But as it seems the pattern is wrong, it's correct till line 4 of table but then it's wrong...
Why are the numbers placed like that in the table?
Solution:
|
Cache hit rate for a set-associative cache: I don't understand this diagram
|
Ernest is right - MBean servers are per JVM not per ClassLoader, so you need to ignore duplicated domains. But what's more interesting - Wildfly uses Infinispan for session clustering, so the default cache manager might be already running. I strongly recommend using your own cache manager name:
new GlobalConfigurationBuilder().globalJmxStatistics()
.cacheManagerName(CACHE_NAME).build();
Ernest also suggested using a HotRod Server cluster and connecting to it using a HotRod client (which is by far faster than using REST interface). This sounds reasonable in scenario you described.
Is there the possibility to get cache manager belonging to other web application using jmx infinispan?
– Alex
Jul 2, 2016 at 17:08
I'm not sure why would you want to use JMX for this? I deeply believe that client/server model is what you need here.
– Sebastian Łaskawiec
Jul 4, 2016 at 13:58
It's a matter of performance. I think that a client/server call will take too much time. With Infinispan JMX being aware of duplicated MBeans, it should give me the possibility to inspect a cache manager... or not?
– Alex
Jul 5, 2016 at 6:29
|
|
I'm working with Infinispan 8.1 and WildFly 10.
I initialize my CacheManager programmatically using these code lines:
public class SessionManager {

    private static DefaultCacheManager cacheManager;

    public void initializeCache() {
        if (cacheManager == null) {
            GlobalConfigurationBuilder gcbLocal = new GlobalConfigurationBuilder();
            ConfigurationBuilder builderLocal = new ConfigurationBuilder();
            builderLocal.clustering().cacheMode(CacheMode.LOCAL);
            cacheManager = new DefaultCacheManager(gcbLocal.build(), builderLocal.build());
            cacheManager.getCache();
        }
    }
}
These code lines belong to a jar imported as dependency in multiple web applications deployed on my server.
So every time i deploy a new application, the initialize method is invoked and infinispan tries to create a new DefaultCacheManager, giving me this exception:
ISPN000034: There's already a JMX MBean instance type=CacheManager,name="DefaultCacheManager" already registered under 'org.infinispan' JMX domain. If you want to allow multiple instances configured with same JMX domain enable 'allowDuplicateDomains' attribute in 'globalJmxStatistics' config element
This issue can be resolved simply adding this code line:
gcbLocal.globalJmxStatistics().allowDuplicateDomains(true);
But now the effect is that Infinispan will create a new domain separated CacheManager. This means that every application will have its own.
My target is to have just one DefaultCacheManager serving all the web applications deployed inside the server, so that if WebApplicationA stores some value inside the Infinispan cache, WebApplicationB can get it.
Is it possible? How can i obtain a global Cache Manager?
|
Infinispan Unique Cache Manager for deployed Web Applications
|
I made some changes and now it's working quite alright for me so thought I share it here to everyone else:
Go to Run->Edit Configurations->Android Application->Application->Miscellaneous
Uncheck "Skip installtion if APK has not changed" and restart your Android Studio.
Now it should build your APK all the time you build to your phone!
If this did not do the trick for you, try doing the same process by going to:
Go to Run->Edit Configurations->Android Application->Miscellaneous
Go to Run->Edit Configurations->Android Native->Miscellaneous
Go to Run->Edit Configurations->Android Tests->Miscellaneous
Then Sync your project with Gradle files and restart your Android Studio.
|
I'm working on an app for myself to do the simple act of connecting to bluetooth devices!
When I try to build the app to my Nexus 5 phone from Android Studio, it seems not to include the most recent changes I made. It somehow looks like it cached the first build. The issue is fixed when I go to File > Invalidate Cache, but when I build again, it seems to have cached that build too. Is there a way to prevent caching programmatically?
I had another app and it didn't happen to that one!
P.S. New to Android and Android Studio! :)
Thanks
|
Android build does not update the apk. Is there a way to prevent caching programmatically?
|
Simple workaround. Change your urls.py like this:
url(r'^example/example-url/(?P<special_id>\d+)/$',
views.example_view,
name="example"),
Then modify your example_view like this:
from django.core.cache import cache
from django.http import HttpResponseForbidden
from django.shortcuts import render

def example_view(request, special_id):
    if request.user.is_authenticated():
        key = 'exmpv{0}'.format(special_id)
        resp = cache.get(key)
        if not resp:
            # your complicated queries go here
            resp = render(request, 'yourtemplate.html', your_context)
            cache.set(key, resp)
        return resp
    else:
        # handle unauthorized situations
        return HttpResponseForbidden()
Can I also interest you in switching to memcached instead of file based caching?
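For reference, switching backends is just a settings change; a minimal sketch, assuming memcached is running locally on its default port and the python-memcached binding is installed:
# settings.py
CACHES = {
    'site_cache': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
The per-view caching in your urls.py keeps working unchanged, since it refers to the cache only by its alias.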
|
I have created a view in my Django app that requires authentication to load. If the credentials are incorrect, then an error 403 page is sent back. Since I declared the view to be cached in the urls.py file, like this...
url(r'^example/example-url/(?P<special_id>\d+)/$',
cache_page(60 * 60 * 24 * 29, cache='site_cache')(views.example_view),
name="example"),
... then even the error pages are being cached. Since the cache is for 29 days, I can't have this happen. Furthermore, if the page is successfully cached, it skips the authentication steps I take in my view, leaving the data vulnerable.
I only want django to cache the page when the result is a success, not when an error is thrown. Also, the cached page should only be presented after authentication in the view. How can I do this?
My cache settings in setting.py:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'unique-snowflake',
},
'site_cache': {
'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
'LOCATION': '/var/tmp/django_cache',
}
}
Thanks in advance
|
How to optionally cache a view in Django
|
You can configure any number of caches. Just set them in the config file:
'components' => [
    'cache' => [
        'class' => 'yii\caching\FileCache',
    ],
    'memCache' => [
        'class' => 'yii\caching\MemCache',
    ],
    // ... any other cache component you want ...
],
answered Apr 14, 2016 at 7:34
Xiaosong Guo
I see. I will try to set that up. Thanks
– Jake
Apr 14, 2016 at 7:42
|
|
Is it possible to use two or more caching storages in the Yii2 framework? I have already set up Memcache for my web app, but I also want to use FileCache since I will be dealing with large chunks of data.
hope someone can help. Thanks!
|
Yii2 Multiple caching storage
|
If you have many files that are short enough, caching looks reasonable:
// Simplest, not thread safe
private static Dictionary<String, String[]> s_Files =
new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase);
private static IEnumerable<String> ReadLines(String path) {
String[] lines;
if (s_Files.TryGetValue(path, out lines))
return lines;
else {
lines = File.ReadAllLines(path);
s_Files.Add(path, lines);
return lines;
}
}
...
foreach (var myfile in allfiles) {
...
// Note "ReadLines" insread of "File.ReadLines"
foreach (var line in ReadLines(myfile.path + "\index.txt")) {
}
}
Compare both implementations (your current one and this cached routine) and then decide whether or not you want to cache.
|
I am using File.ReadLines() on the same few files often and don't know the overhead associated with reading a file this way.
I am searching for each file id (hash) within a txt file.
At the moment I am using this code, but I wonder if I should cache these index files. My hesitation is that the files will be edited so often that reloading the file into the cache each time will cause just as much of a performance hit. It is much more likely that I will be adding a line to the text file on each iteration (when there is no match).
foreach (var myfile in allfiles) // roughly 5 thousand
{
...
foreach (var line in File.ReadLines(Path.Combine(myfile.path, "index.txt")))
{
// compare the line to the current record's hash
if (myfile.hash.Equals(line))
...
return x;
}
...
// otherwise add a new line (a hash) to index.txt
}
...
There are about 5-10 index.txt files at different paths that need to be checked depending on the file... so each one would need to be cached.
Is caching the index.txt file a better idea? Does File.ReadLines() have a lot of overhead?
Thanks for any pointers.
|
C# caching a txt file or using File.ReadLines
|
Yes, that is precisely what is happening. The local memory cache is exactly that: local to the process.
It is not really suitable for use in production, and definitely not in a multi-process environment. Use a proper cache backend; Redis for example is very simple to get up and running.
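For illustration, a minimal sketch of a Redis-backed configuration, assuming the django-redis package is installed and a Redis server is listening locally (the location and database number are assumptions, not from the question):
# settings.py
CACHES = {
    'stats': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
    },
}
Because all Apache processes then talk to the same Redis instance, a value set by one worker is visible to the others, which removes the "randomly missing" behaviour.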
|
I set up a simple local memory cache that I use like this:
from django.core.cache import caches
def stats_service(db):
stats_cache = caches['stats']
if stats_cache.get(db) is None:
stats_cache.set(db, GlobalStatsService(db))
return stats_cache.get(db)
After the server is running, I call this function through a view, with a curl on the command-line, to initialize the cache.
The problem is that if I call it several times, sometimes it will find the item and return the value immediately, as expected, and sometimes will not find it and will recalculate the value. The keys (here db) are the strings I expect them to be. I cannot understand why items are dropped from the cache, apparently randomly, and how to make them stay.
Interestingly, the behavior was the same when I was using global variables instead of Django's cache framework (and I tried them all except memcached because of the 1MB limitation).
I have set no TIMEOUT value (and obviously the global variables version had none either):
CACHES = {
...
'stats': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'stats',
},
My app runs with Apache and mod_wsgi, 2 processes and 4 threads. Maybe it is related. Could it be that a different process accesses its own version of the cache ?
What am I doing wrong ?
|
Cache randomly removing items
|
Carefully notice lines 476 to 491 in the file jquery.pjax.js, here in this link: jQuery PJAX by Chris Wanstrath (defunkt). The code under the if branch loads the cached contents, and the code under the else branch is the part we actually want.
if (contents) {
container.trigger('pjax:start', [null, options])
pjax.state = state
if (state.title) document.title = state.title
var beforeReplaceEvent = $.Event('pjax:beforeReplace', {
state: state,
previousState: previousState
})
container.trigger(beforeReplaceEvent, [contents, options])
container.html(contents)
container.trigger('pjax:end', [null, options])
} else {
pjax(options)
}
So, now you know what we are going to do! Yes, just remove everything except this line:
pjax(options)
and you're good to go! It's strange that nobody gave me this solution; however, I finally found one by myself, and I think it will help a lot of people.
Thanks, anyways.
|
While using popstate with pjax, I want the content to be loaded from the server again instead of being replaced by the browser's cached content. When the cached content is applied, the <script> inside it does not work at all. So, is there any way to force a normal PJAX call to the server, instead of using the cached data, when the user simply hits the browser's Back/Forward buttons?
Please help.
Thanks.
|
How to make pjax:popstate work like a simple PJAX call
|
Caching headers work following way:
If max-age or Expires is set, then resource will be cached for provided time with one exception, if Cache-Control contains must-revalidate then following will happen.
Because a cache MAY be configured to ignore a server's specified expiration time, and because a client request MAY include a max- stale directive (which has a similar effect), the protocol also includes a mechanism for the origin server to require revalidation of a cache entry on any subsequent use. When the must-revalidate directive is present in a response received by a cache, that cache MUST NOT use the entry after it becomes stale to respond to a
subsequent request without first revalidating it with the origin server.
or if Cache-Control contains no-cache then following will happen.
If the no-cache directive does not specify a field-name, then a cache MUST NOT use the response to satisfy a subsequent request without successful revalidation with the origin server.
In addition, you can combine ETag and max-age/Expires headers to make caching more precise. When the time expires, the browser will send an ETag-based revalidation request.
Note that max-age and Expires are equivalent, but max-age has higher priority.
One more thing: if you didn't provide any of the previous headers, then the browser (Chrome, for example) can cache your resource for 10% of the time passed since the Last-Modified header value, but it will still send revalidation requests based on the Last-Modified value of the cached resource.
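That heuristic is easy to put into numbers; a toy calculation (illustrative only):
from datetime import datetime

# heuristic freshness: ~10% of the time since Last-Modified
last_modified = datetime(2016, 1, 1)
now = datetime(2016, 1, 31)
freshness = (now - last_modified) / 10
print(freshness)  # 3 days, 0:00:00 -> may be served from cache that long
So a resource last modified 30 days ago may be considered fresh for about 3 days before the browser revalidates.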
|
I'm testing how the Cache-Control header works in Chrome.
My setup in nginx is quite easy:
server {
listen 80;
server_name localhost;
etag on;
root /usr/share/nginx/html;
location / {
add_header Cache-Control "must-revalidate, private, max-age=10";
}
}
The behaviour I was expecting for the setup would be:
max-age: Use cache for the specified time in seconds
etag: Use for validating freshness of data
private: avoid proxy caches to keep the data
With hard reloads I'm getting the expected behaviour:
1- The first download hits the server and returns data with 200
2- A second refresh (Ctrl + R) hits the server for validating freshness, returning 304 if not modified or 200 if data was modified. (It seems that it's never served from the web cache. Why?)
The previous behaviour is mostly expected, but using the back and forward buttons always retrieves data from the web cache. Why is this? Why, when I have modified data but use the back/forward buttons, am I still getting stale data from the cache? Even if I wait more than 10 seconds the result is the same. Only a hard reload will get new data.
|
Cache-Control headers, max-age defined but back button always deliver web cache data
|
Realised elements in a lazy sequence are able to be garbage collected like any other object in Clojure, with one important caveat. You should not hold a reference to the head of the sequence. This is known as "holding the head".
In concrete terms, using doall to evaluate the whole sequence, or storing a reference (say, in an atom or a def) to the lazy sequence while traversing it with map, both count as holding the head.
|
In Practical Clojure, the authors mention that once a lazy seq value is calculated, it is cached.
If we get a very large number of values from a lazy-seq, might we see an out-of-memory error? Or is there a mechanism to prevent that (e.g. older cached values are removed to make room for new ones)?
|
Clojure's lazy-seq caching
|
You could use the Reflectivity framework to add pre and post meta links to your methods. A link could check a cache before execution transparently.
link := MetaLink new
metaObject: self;
selector: #cachedExecute:;
arguments: #(selector);
control: #before.
(MyClass>>#myMethodSelector) ast link: link.
This code will install a meta link that sends #cachedExecute: to a MyClass object with the argument #myMethodSelector. The link is installed on the first AST node of the compiled method (of that same method selector, but could be on another method). The #control: message ensures that the link will be executed before the AST node is executed.
You can of course install multiple meta links that influence each other.
Note that in the above example you must not send the same message (#myMethodSelector) again inside of the #cachedExecute: method since you'd end up in a loop.
Update
There's actually an error in the code above (now fixed). The #arguments: message takes a list of symbols that define the parameters of the method specified via #selector:. Those arguments will be reified from the context. To pass the method selector you'd use the #selector reification, for the method context the #context reification, and for method arguments #arguments. To see which reifications are available, look at the #key method on the class side of the subclasses of RFReification.
|
I have a class which is essentially a collection of methods for some data transformations. In another words I have some data in my files and I use a few different code snippets to transform the textual data into something that I can easily query.
Now the methods often reuse each-other and as the core data is changing I'd like to simply cache the results of each method, for the speed reasons.
I don't want to change each method by adding:
^ methodsCache ifNil: [ methodsCache := "compute" ]
I want to use the power of Pharo reflection to accomplish my task without touching much of code.
One idea I had is that if I can run some code before each method, then I can either return a cached value or continue the execution of the method and cache its result.
|
Caching the result of all methods
|
Response.Cache.SetOmitVaryStar(true);
answered May 24, 2016 at 22:25
Matt Hinze
or globally stackoverflow.com/a/13509708/1507124
– CervEd
Nov 5, 2021 at 13:00
|
|
I am trying to use OutputCache for both server and client caching on an MVC view but setting Location to ServerAndClient forces the HTTP header Vary: * to be set, which largely defeats the purpose of the client caching (it tells the browser to check for freshness EVERY time the resource is used).
I want the browser to cache the file for 24 hours and only make another request if I change the LastModified parameter that I add to the query string. This works if I have only Client as the location, but I also want the server to cache the file so that it doesn't need to regenerate it when another user requests the same resource.
The resource is based on a database that will rarely get updated (possibly once or twice a month; I know when it has changed) and the resource could get hit very hard, so I don't want to be generating it all the time or even handling modified-since conditional requests.
My OutputCache attribute is:
[OutputCache(Location = OutputCacheLocation.ServerAndClient, Duration = 86400, VaryByParam = "LastModified")]
I have tried extending the OutputCacheAttribute class and overriding all of the On* methods and removing the Vary http header but the Vary header doesn't seem to be added until AFTER all of these methods have been called.
|
How to use OutputCacheLocation.ServerAndClient without Vary: * HTTP header
|
Intel® 64 and IA-32 Architectures Software Developer's Manual
Process-Context Identifiers (PCIDs)
Process-context identifiers (PCIDs) are a facility by which a logical processor may cache information for multiple linear-address spaces. The processor may retain cached information when software switches to a different linear-address space with a different PCID (e.g., by loading CR3; see Section 4.10.4.1 for details). A PCID is a 12-bit identifier.
...
When a logical processor creates entries in the TLBs (Section 4.10.2) and paging structure caches (Section 4.10.3), it associates those entries with the current PCID. When using entries in the TLBs and paging-structure caches to translate a linear address, a logical processor uses only those entries associated with the current PCID
Related: Does Linux use x86 CPU's PCID feature for TLB? If not, why?
|
I've found that the TLB stores a process identifier with each entry for performance reasons: each process's VA-to-PA mappings can be left in the TLB across context switches, saving the cost of flushing them. So, my question is: can the kernel manipulate those PID entries in the TLB?
I am really curious, because I have heard that the TLB is a cache maintained inside the MMU. Please give me an answer :)
*I assume x86 :)
|
Can kernel manages Process id written on TLB entry?
|
\Slim\HttpCache\Cache() is HTTP caching middleware for client-side (browser) caching.
It expects up to 3 Parameters:
* @param string $type The cache type: "public" or "private"
* @param int $maxAge The maximum age of client-side cache
* @param bool $mustRevalidate must-revalidate
and generates corresponding HTTP-response headers.
It has nothing to do with server-side caching.
|
What is HTTP Cache for? How do I use it in Slim 3?
But I am not quite sure how this is done in Slim 3:
use Slim\Http\Request;
use Slim\Http\Response;
require_once __DIR__ . '/../vendor/autoload.php';
// Register service provider with the container
$container = new \Slim\Container;
$container['cache'] = function () {
return new \Slim\HttpCache\CacheProvider();
};
$app = new \Slim\App($container);
// Add middleware to the application.
$app->add(new \Slim\HttpCache\Cache('cache', 86400));
// Routes:
$app->get('/', function (Request $request, Response $response, array $args) {
$response->getBody()->write('Hello, World!');
return $response->withHeader('Content-type', 'application/json');
});
$app->get('/foo', function ($req, $res, $args) {
$resWithEtag = $this->cache
->withEtag($res, 'abc')
// ->withExpires($res, time() + 60)
;
return $resWithEtag;
});
$app->run();
Any ideas?
|
Slim 3: what is HTTP Cache for?
|
Smarty compiles all the .tpl files into PHP on first use, and places the result in a configured cache directory. These PHP files are then included just like any other PHP file, so there is nothing special APC/OpCache would need to do to be invoked for them.
On subsequent requests, Smarty will check if the timestamp of the underlying .tpl file has changed, and re-compile if it has; otherwise, it will just leave the existing PHP file in place. This behaviour can be turned off, e.g. on a production server where files should not be edited (this setting was available in Smarty 2 as well, it's nothing new).
I'm not sure what the manual compilation process you describe was trying to achieve; from your description it sounds like it just replicates what Smarty already does, with a small boost for the first hit on each template by "warming the cache". It certainly has nothing to do with the presence or absence of APC/OpCache - that won't change how often Smarty compiles things into PHP, only how often PHP compiles PHP into opcodes.
There may be some other trick being used which you haven't spotted / described, or it may be that the previous programmer of the system didn't know what they were doing and over-complicated things.
|
I am working on an old legacy PHP application using Smarty. I am not familiar with Smarty. Hence my questions.
I understand that Smarty templates are compiled into PHP. Then, they are invoked with some data to generate an output. The generated PHP is compiled as part of this process.
APC (and other cache solutions) avoids the recompilation of PHP between user requests.
i) If I call Smarty with a raw template, it will compile it into PHP first, then into opcode, right?
ii) If a cache system like APC is enabled in my PHP application, and if the template has already been invoked (i.e., compiled) in the past:
a) Will Smarty be smart enough to not recompile the template into PHP at each user request?
b) Will the opcode of the compiled template's PHP be re-used via APC?
Why am I asking these questions? This legacy application was implemented a long time ago (some parts before 2010). Back then, they implemented a precompilation system for all their Smarty templates and copied the results into a directory of the application so the generated PHP code could be invoked directly.
I believe it could have made sense for performance at the time, but now that opcode cache solutions are readily available, does it still make sense? Could we get rid of this precompilation process?
|
Does Smarty use APC (or other cache solutions)?
|
Silly me. I forgot to add the fetch handler to my service worker. I thought it worked like AppCache and automatically returned the cached data when the request matches the cached URL. I underestimated the power of Service Worker. The following code did the trick for me:
this.addEventListener('fetch', function(event) {
console.log(event.request.url);
event.respondWith(
caches.match(event.request).then(function(response) {
return response || fetch(event.request);
})
);
});
|
I am trying to pre-cache some of my static app-shell files using a service worker. I can't use 'sw-appcache' as I am not using any build system. However, I tried using 'sw-toolbox' but I have not been able to use it for pre-caching.
Here is what I am doing in my service worker JS:
self.addEventListener('install', function(event) {
event.waitUntil(
caches.open('gdvs-static').then(function(cache) {
var precache_urls = [
'/',
'/?webapp=true',
'/css/gv.min.css',
'/js/gv.min.js'
];
return cache.addAll(precache_urls);
});
);
});
I've also tried this:
importScripts('/path/to/sw-toolbox.js');
self.addEventListener('install', function(event) {
var precache_urls = [
'/',
'/?webapp=true',
'/css/gv.min.css',
'/js/gv.min.js'
];
toolbox.precache(precache_urls);
});
Here is the URL of my app: https://guidedverses-webapp-staging.herokuapp.com/
Here is the URL of my service worker file: https://guidedverses-webapp-staging.herokuapp.com/gdvs-sw.js
What am I missing?
|
Pre-cache statics files using Service Worker
|
You can use the query builder's with method:
$books = App\Book::with(['author','publisher'])->get();
Or simply do the additional load inside the cache callable:
$books = Cache::remember('allbooks', 60, function() {
return App\Book::all()->load('author', 'publisher');
});
Update: to keep the caches separated you need two variables, like this:
$books = Cache::remember('allbooks', 60, function()
{
return App\Book::all();
});
$booksAP = Cache::remember('allbooks_ap', 60, function() use ($books)
{
return $books->load('author', 'publisher');
});
|
How do I cache lazy eager loading queries based on the model relationships? For example:
$books = App\Book::all();
$books->load('author', 'publisher');
I can cache the first query with something like this
$books = Cache::remember('allbooks', 60, function() {
return App\Book::all();
});
How do I cache the second query?
If there is no direct way, please suggest any workaround, possibly with a sample code.
Update: I need the second query to be executed separately so I can clear these two cache keys separately.
|
Caching Lazy Eager Loading Queries in Laravel 5.1
|
I discovered this method for clearing the cache:
[[[PINRemoteImageManager sharedImageManager] cache] removeObjectForKey:
[[PINRemoteImageManager sharedImageManager]cacheKeyForURL:your_URL processorKey:nil]];
So, in your - (void)viewWillAppear:(BOOL)animated you can set your UIImageView again with your_URL.
That did the trick at my side ;)
|
I'm using PINRemoteImage in my iOS app for setting an image on a UIImageView. I always have the same link for the image, but in the meantime the image can change (I can upload a different image). Whenever I call pin_setImageFromURL on the UIImageView it always sets the old image (though not if I delete the app and reinstall it). I found out that calling [[[PINRemoteImageManager sharedImageManager] defaultImageCache] removeAllObjects] will delete the image from the cache, but only when I close and reopen the app. So does anyone know how to force the app to update the cache immediately after calling the above method?
|
PINRemoteImage delete cache
|
From the config sample you provided I'm guessing you want to cache the Doctrine results rather than the full HTTP responses (although the latter is possible, see below).
If so, the easiest way to do this is that whenever you create a Doctrine query, set it to use the result cache which you've set up above to use redis.
$qb = $em->createQueryBuilder();
// do query things
$query = $qb->getQuery();
$query->useResultCache(true, 3600, 'my_cache_id');
This will cache the results for that query for an hour with your cache ID. Clearing the cache is a bit of a faff:
$cache = $em->getConfiguration()->getResultCacheImpl();
$cache->delete('my_cache_id');
If you want to cache full responses - i.e. you do some processing in-app which takes a long time - then there are numerous ways of doing that. Serializing and popping it into redis is possible:
$myResults = $service->getLongRunningResults();
$serialized = serialize($myResults);
$redisClient = $container->get('snc_redis.default');
$redisClient->setex('my_id', 3600, $serialized);
Alternatively look into dedicated HTTP caching solutions like varnish or see the Symfony documentation on HTTP caching.
Edit: The SncRedisBundle provides its own version of Doctrine's CacheProvider. So whereas in your answer you create your own class, you could also do:
my_cache_service:
    class: Snc\RedisBundle\Doctrine\Cache\RedisCache
calls:
- [ setRedis, [ @snc_redis.default ] ]
This will do almost exactly what your class is doing. So instead of $app_cache->get('id') you do $app_cache->fetch('id'). This way you can switch out the backend for your cache without changing your app class, just the service description.
|
This is my current setup:
snc_redis:
clients:
default:
type: predis
alias: cache
dsn: "redis://127.0.0.1"
doctrine:
metadata_cache:
client: cache
entity_manager: default
document_manager: default
result_cache:
client: cache
entity_manager: [bo, aff, fs]
query_cache:
client: cache
entity_manager: default
I have an API which gets multiple duplicate requests (usually in quick succession), can I use this setup to send back a cached response on duplicate request? Also is it possible to set cache expiry?
|
Use redis to cache duplicate requests in symfony2
|
The module configuration values, like apiUrl in your example, are not touched by RequireJS unless you call require.toUrl() on them explicitly. I think this is what is happening in your case. To avoid this problem, you should always do the concatenation first and only then call require.toUrl() on the full resulting URL.
So, instead of doing:
var fullUrl = require.toUrl(config.apiUrl) + '/my/resource';
Do this:
var fullUrl = require.toUrl(config.apiUrl + '/my/resource');
By the way, instead of setting the version directly in the RequireJS configuration, you can simply add the version of your application to the data-w20-app-version attribute on the <html> element of the master page:
<html data-w20-app data-w20-app-version="2.0.0">
This will provide the same behavior but will work correctly in the case of Angular templates in $templateCache. If your master page is automatically generated by the backend, this is done automatically. Check this page for the details.
|
using SeedStack 14.7 we are facing a cache issue when uploading a new version on servers: every user have to clear their cache to get the last version of files.
I tried to use "urlArgs": "version=2" in the requireConfig part of the fragment JSON file. It do the job by adding argument on every files and so we can use it when changing version, but it also affect the urls in the config of each modules !
As we are using this config to pass the REST base url to each module, it breaks all REST requests by adding the argument to the base url.
My fragment JSON file :
{
"id": "mac2-portail",
"modules": {
"gestionImage": {
"path": "{mac2-portail}/modules/gestionImage",
"autoload": true,
"config": {
"apiUrl": "muserver/rest"
}
}
},
"i18n": {...},
"routes": {...},
"requireConfig": {
"urlArgs": "version=2",
"shim": {...}
}
}
Any idea to solve the cache issue without breaking REST requests ?
EDIT: it is not a duplicate of Prevent RequireJS from Caching Required Scripts. Yes, SeedStack uses RequireJS and this configuration solves the cache issue, but it also affects other modules defined in the fragment, so I need to find another solution to prevent the browser from caching files.
|
Prevent browser cache issue on Javascript files with RequireJS in SeedStack
|
This is a basic network connection error. You need to ensure that you have access to connect to TCP port 6379 on mydemo.redis.cache.windows.net (11.22.216.225:6379) and open up any firewall rules as necessary.
You can test TCP connections with telnet or by running redis-cli.exe (from redis-windows) on the same server where you're trying to use ServiceStack.Redis, e.g.:
redis-cli -h 11.22.216.225 -p 6379
SSL Redis Connections to Azure Redis
The connection string if you're trying to connect to a redis-server on Azure is typically in the format:
{AzureRedisKey}@servicestackdemo.redis.cache.windows.net?ssl=true
The ?ssl=true option says to use SSL on Azure's default SSL port, 6380.
|
I have installed the package (PM> Install-Package ServiceStack.Redis) and used the following code to connect to the Azure Redis cache.
I think I missed something in the connection string, as I have not given the PRIMARY KEY in the host:
string host = "mydemo.redis.cache.windows.net";
var redisManager = new PooledRedisClientManager(host);
using (var redisClient = redisManager.GetClient())
{
IRedisTypedClient<Customer> redis = redisClient.As<Customer>();
Getting this error:
{"could not connect to redis Instance at
mydemo.redis.cache.windows.net:6379"}
A connection attempt failed because the connected party did not
properly respond after a period of time, or established connection
failed because connected host has failed to respond 11.22.216.225:6379
|
Unable to connect Redis Cache server using ServiceStack.Redis library
|
Here is a breakdown of what those things are:
localCulls - The number of times items were displaced out of the item cache as a result of new items being loaded, but the cache was full.
localItemsCulled - The number of items that were culled out of the cache as a result of a local cull (see above).
localMaxCulled - The maximum number of items that were pushed out of the cache at once
weakCulls - The number of times the weak item cache is cleared. This will increase by 1 when you manually invoke the clearWeakTables() method on the repository via the component browser.
weakItemsCulled - The number of items that were culled out of the weak item cache. This will occur when the weak item cache is cleared and this number is the count of how many items have been GCed and therefore removed from the weak item cache.
weakMaxCulled - Like localMaxCulled, it is the maximum number of weak item entries that were cleared at once.
|
Can anyone provide definitions for entryInvalidations vs localItemCulls in respect to ATG Repository cache usage statistics? The documentation for caches does not appear to have been updated with an explanation on what these items are.
These can be viewed through dyn/admin on any of ootb repositories in the cache usage statistics section e.g. atg/userprofiling/ProfileAdapterRepository/
I suspect this relates to entries which have expired due to a cache timeout vs entries which have been removed as a result of a high cache churn rate.
Please note this question is NOT about local vs external caches.
Thanks in advance.
|
ATG Cache cull vs invalidate
|
For the problem itself it does not matter whether you cache IDs in a collection or object contents in a collection.
If an object is updated, it may not fulfill the query criteria any more.
So what we speak about is caching a query result, correct?
Conceptually there are many approaches:
Invalidate / clear the whole cache when an update happens
Invalidate the query results whenever a table is updated
When a value is updated: Evaluate the query against the old and new object value and update the cached result
Don't tackle the problem at all and simply expire the data after, let's say, 5 minutes
The easiest options are clear the whole cache or work with an expiry. This works pretty well most of the time. Always start with the easy thing and then go for more complex solutions if this is really needed.
BTW: Within elastic search they implemented exactly the functionality you described, this is called "percolator". See: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-percolate.html
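To make the trade-off concrete, here is a minimal Python sketch (hypothetical names, an illustration only) of a query-result cache combining the two easy options above, clearing on every write and expiring entries:
import time

class QueryResultCache:
    """Caches a collection of IDs per query; cleared on any write."""
    def __init__(self, ttl_seconds=300):
        self._ttl = ttl_seconds
        self._results = {}  # query string -> (expiry timestamp, list of ids)

    def get(self, query):
        entry = self._results.get(query)
        if entry and entry[0] > time.time():
            return entry[1]   # still fresh
        return None           # missing or expired

    def put(self, query, ids):
        self._results[query] = (time.time() + self._ttl, ids)

    def on_write(self):
        # simplest correct strategy: any insert/update/delete clears everything
        self._results.clear()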
|
Let's suppose we develop an application in Java and have a big table. In order to increase performance we want to cache data, and here we have two ways of caching:
object cache - by id.
collection cache - caching collection of ids(!)
Example of a collection cache: we have the SQL query SELECT * FROM person WHERE birthdate=A AND age<B ORDER BY firstName,lastName, and for this query we cache a collection of IDs. Now, for the same query, we can use the cache. However, the problem with such caching is that on any update/create/delete all collection caches become stale and can't be used any more.
These are questions -
is a collection cache used in practice?
are there any patterns/solutions/libraries/algorithms for Java to work with collection caches?
|
Java: caching collections
|
The short answer is: Yes, cache values can be evicted or purged even if you use InfiniteAbsoluteExpiration.
If your program can't handle missing cache values (which it probably should), then you need to handle the case where an item is removed for eviction regardless of OutOfMemoryExceptions. The CacheItemPolicy used with MemoryCache (as part of ObjectCache) has an UpdateCallback delegate that can be set like this:
private void PopulateCache(Widget value)
{
var policy = new CacheItemPolicy();
policy.UpdateCallback = CacheUpdate;
_cache.Set(GetCacheItemKey(value), value, policy);
}
private void CacheUpdate(CacheEntryUpdateArguments args)
{
// if expired or evicted, put it back in!
if (args.RemovedReason == CacheEntryRemovedReason.Expired || args.RemovedReason == CacheEntryRemovedReason.Evicted)
{
        // re-insert with a fresh policy instance (CacheItemPolicy is a type, not a value)
        _cache.Set(args.Key, _cache[args.Key], new CacheItemPolicy { UpdateCallback = CacheUpdate });
}
// if removed or ChangeMonitorChanged, do nothing
}
For more information check out: https://msdn.microsoft.com/en-us/library/dd988702(v=vs.110).aspx
As a best practice, the typical pattern, however, is that when you go to retrieve a value from cache, if the item is expired/evicted, you should get the item from your data store and repopulate the cache.
|
I'd like to cache certain values forever and would like to be sure they're there when I access them.
Currently I use the following code:
ObjectCache objectCache = MemoryCache.Default;
CacheItemPolicy policy = new CacheItemPolicy() { AbsoluteExpiration = ObjectCache.InfiniteAbsoluteExpiration };
objectCache.Set(new CacheItem("anykey", anyobject), policy);
In the documentation on MSDN the following is written about the setting ObjectCache.InfiniteAbsoluteExpiration:
However, a cache entry with this setting can be evicted from the cache
for other reasons that are determined by a particular cache
implementation, such as a change-monitor event eviction caused by
memory pressure.
Source: https://msdn.microsoft.com/en-us/library/system.runtime.caching.objectcache.infiniteabsoluteexpiration(v=vs.110).aspx
Does it mean that any cache values will be purged if my Windows service would otherwise crash with an OutOfMemoryException?
|
Can values be evicted from ObjectCache even if InfiniteAbsoluteExpiration is used?
|
I posted an issue on the AMS github page and went back and forth with @joaomdmoura and @groyoh until we came up with this temporary solution. It works on my end and it'll do for now until AMS makes an official decision on the best solution.
module ActiveModel
class Serializer
class Adapter
def cache_key
key = @klass._cache_key
key = @cached_serializer.instance_exec &key if key.is_a?(Proc)
key ? "#{key}/#{@cached_serializer.object.id}-#{@cached_serializer.object.updated_at}" : @cached_serializer.object.cache_key
end
end
end
end
class AppLabelSerializer < ActiveModel::Serializer
cache key: ->(){ "#{scope.app_language.name}/app_labels" }, expires_in: 3.hours
attributes :id, :label, :label_plural
end
It looks funny, but yes you just paste in that extension of the ActiveModel module right into your already existing serializer file.
NOTE: This only works with v0.10.0.rc1
|
I am using a database-driven solution for labels and translations that I would like to cache at the serializer level. Here is my serializer.
class AppLabelSerializer < ActiveModel::Serializer
cache key: 'app_label', expires_in: 3.hours
attributes :id, :key, :label, :label_plural
def key
object.app_label_dictionary.key
end
end
The problem is that I need to cache the labels for each language, so I need to specify the language somewhere in the key. I tried this solution:
cache key: "#{scope.app_language.name}/app_label", expires_in: 3.hours
But the value of scope isn't available there for some reason.
|
How to use a dynamic value for cache key with ActiveModel::Serializers (v0.10.0.rc1)
|
Expiration and refresh are closely related but different mechanisms. An expired entry is considered stale and cannot be used, so it must be discarded and refetched. An entry eligible for being refreshed means that the content is still valid to use, but the data should be refetched as it may be out of date. Guava provides these TTL policies under the names expireAfterWrite and refreshAfterWrite, which may be used together if the refresh time is smaller than the expiration time.
Most cache designs prefer discarding unused content. An active refresh would require a dedicated thread that reloads entries regardless of whether they have been used. Therefore most caching libraries do not provide active refresh themselves, but make it easy for applications to add that customization on top.
When a read in Guava detects that the entry is eligible for refresh, that caller will perform the operation. All subsequent reads while the refresh is in progress will obtain the current value. This means that the refresh is performed synchronously on the user's thread that triggered it, and asynchronously from the perspective of other threads reading that value. A refresh may be fully asynchronous if CacheLoader.reload is overridden to perform the work on an executor.
Caffeine is a rewrite of Guava's cache and differs slightly by always performing the refresh asynchronously to a user's thread. The cache delegates the operation to an executor, by default ForkJoinPool.commonPool which is a JVM-wide executor. The Policy api provides means of inspecting the runtime state of the cache, such as the age of an entry, for adding application-specific custom behavior.
For other ScalaCache backends support is mixed. Ehcache has a RefreshAheadCache decorator that refreshes lazily using its own threadpool. Redis and memcached do not refresh as they are not aware of the system of record. LruMap has expiration support grafted on and does not have any refresh capabilities.
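To make the distinction concrete, here is a hypothetical Python sketch of refresh-after-write semantics (an illustration, not ScalaCache's or any library's implementation): a read past the refresh threshold still returns the current value immediately, while a background reload updates the entry:
import time
import threading

class RefreshAfterWrite:
    def __init__(self, loader, refresh_after_seconds):
        self._loader = loader
        self._refresh_after = refresh_after_seconds
        self._entries = {}   # key -> (written_at, value)
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            entry = self._entries.get(key)
        if entry is None:
            return self._load(key)   # first read: load synchronously
        written_at, value = entry
        if time.time() - written_at > self._refresh_after:
            # stale-but-valid: reload in the background (a real cache
            # would also deduplicate in-flight refreshes)
            threading.Thread(target=self._load, args=(key,)).start()
        return value

    def _load(self, key):
        value = self._loader(key)
        with self._lock:
            self._entries[key] = (time.time(), value)
        return value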
|
I'd like to do TTL-based memoization with active asynchronous refresh in Scala.
ScalaCache example in the documentation allows for TTL based memoization as follows:
import scalacache._
import memoization._
implicit val scalaCache = ScalaCache(new MyCache())
def getUser(id: Int): User = memoize(60 seconds) {
// Do DB lookup here...
User(id, s"user${id}")
}
Curious whether the DB lookup gets triggered after the TTL expires for an existing value (synchronously and lazily, during the next getUser invocation), or whether the refresh happens eagerly and asynchronously, even before the next getUser call.
If the ScalaCache implementation is synchronous, is there an alternative library that provides the ability to refresh the cache actively and asynchronously?
|
scalacache memoization asynchronous refresh
|
If the Expires header is set to 0, the browser interprets it as 1 January 1970, the start of Unix time (aka POSIX time). Because this date lies in the past, the response is not cached.
The Expires header is defined within RFC 7234, which includes this paragraph related to the statement above:
A cache recipient MUST interpret invalid date formats, especially the value "0", as representing a time in the past (i.e., "already expired").
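You can confirm the epoch interpretation quickly:
from datetime import datetime, timezone

# Unix time 0 is the epoch, which is what an invalid Expires value decays to
print(datetime.fromtimestamp(0, tz=timezone.utc))  # 1970-01-01 00:00:00+00:00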
|
I am working on caching some pages and I noticed this in Firebug:
So it says that the cache expired 45 years ago. Is this a bug or some bad data?
I have another page that is caching correctly
I just do not understand why it's saying Expires ... 1970. This page won't cache at all on my site, even though I'm using the Boost module with Drupal.
|
Why does the expiration date of the request cache lie in the past?
|
In Flash I would keep a 'long' magic number that increments perpetually. Subsequent calls to the URL then increment the magic number and append it to the URL, such as:
url="http://example.com?property=948&angle=street-corner&magicnumber=837493"
|
I'm in a quite annoying situation.
One of my application's users is having problems getting updated data from my database, as follows:
The user issues an insert statement into my database.
When the user lists these inserts, he only gets the data that was there when he logged into my application; only after some minutes can he see the data that he sent a moment ago.
I'm executing simple and plain insert and select statements, and only this user has this problem; the rest of my users are fine.
Is it possible that somehow the value that I retrieve from the database is being cached?
|
php - mysql storing old data
|
You can put your ajax click handler outside of that function:
$(document).on('click', '.email-resend, .email-send, .show-doc, .show-acc, .more-acc, .no, .yes, .termin', function(){
$.fn.doAction(classname, comEntry);
});
You mentioned that you placed this inside the function; instead, you should put it outside of it:
$(document).on('click', '.bt-ok', function(){
$.ajax({
....
});
});
Edit by @pandora: Or just use a second function (in my case) like this:
$(document).on('click', '.bt-ok', function(){
$.fn.sendRequest($('#modal').attr('data-1'), $('#modal').attr('data-2'));
});
Then I can call the function like this and execute the AJAX:
$.fn.sendRequest = function(data1, data2) {
$.ajax({
url: 'target.php',
cache: false,
data: { gimme1 : data1, gimme2 : data2 },
success: function(response) {
// Show Error
if(response.length > 0){
alert(response);
}
// Reload Content
$.fn.Reload('','');
// Close Modal
$('#modal').css({'display' : 'none'});
}
});
};
|
I'm using AJAX to execute some server-side actions and refresh a table inside the page without reloading the page itself. All works fine at first. But I wrote a function in PHP to send an email and executed it via AJAX. When I start this action a second time, old responses get triggered.
What exactly happens:
I'm clicking a button to execute the "Sendmail-Action"
a modal asks me if I want to execute the action and I'm clicking yes
a PHP-Script gets executed and sends the email
the Modal gets closed after the PHP-Script finishes (~2s)
the table (with emails and so on) gets refreshed and the status updated
What happens next:
I´m clicking a Button to execute the "Sendmail-Action" for the same
or another entry in my table
a Modal asks me if I want to execute the action and I´m clicking yes
the event triggers twice: 4 AJAX requests instead of 2 (I can see it in my Chrome console)
the modal closes and 2 mails are sent
What I've tried to get rid of this behaviour:
checked my JS with JSHint
checked my PHP-Code (no errors in apache error.log)
redesigned the PHP-Script (Sendmail), now the AJAX executes a function
tried different browsers
deactivated AJAX-Caching
completely deactivated Caching with Apache (correct Headers)
deactivated Session-Caching in my php.ini (I don't use sessions)
unset all variables in PHP
cleared the cache for all of my browsers
checked all headers
searched several hours for solutions
More information about my setup:
jQuery 2.1.4
php 5.4
Apache 2.2
I can't find a solution to my problem; maybe it is caused by the fact that I'm adding the content dynamically to the table. I had the same problem in the past, and I think the modal (which is also dynamic) triggers the click on the "Yes-Button" twice.
|
AJAX response caching / onclick-event inside of a function
|
Caching is relative to the object, so if you move the object on the x/y, you don't have to update the cache. Additionally, when you adjust the alignment, the bounds will have an x and y property, which will be the offset of the top left from the registration point.
Here is an updated fiddle:
https://jsfiddle.net/xnqcjsg8/1/
This is the new cache function. If you sub out the x and y with [0,0], you can see how it crops based on the alignment.
this.label.cache(rec.x, rec.y, rec.width, rec.height);
I also simplified your fiddle a little.
|
I would like some help regarding caching Text objects with the EaselJS library.
I never fully understood how caching works, and I must be missing something really fundamental, because I cannot seem to make it work.
Take the following simple example:
this.label.cache(this.label.x, this.label.y, rec.width, rec.height);
https://jsfiddle.net/xnqcjsg8/
If you comment out the line that caches the Text object, then it is displayed correctly. Otherwise nothing is visible on the stage.
I know that I can and should cache Text objects, because they are expensive to render, but I cannot figure out how.
Any help appreciated, thanks in advance!
|
EaselJS cache Text object
|
Opcache and Memcached store data in memory. In the vast majority of cases, retrieving data from memory is faster than retrieving data from the file system. The drawback? Running Memcached and using an opcache will obviously use up some of your server's memory.
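To get a rough feel for the gap, here is a toy Python benchmark (illustrative only; a dict stands in for an in-memory cache, and the file read benefits from a warm OS page cache, so real-world differences are usually larger):
import os
import tempfile
import time

payload = "x" * 4096
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".cache") as f:
    f.write(payload)
    path = f.name

memory_cache = {"key": payload}

t0 = time.perf_counter()
for _ in range(10000):
    _ = memory_cache["key"]        # in-memory fetch
t1 = time.perf_counter()
for _ in range(10000):
    with open(path) as fh:         # file-based fetch
        _ = fh.read()
t2 = time.perf_counter()

print("memory: %.4fs, file: %.4fs" % (t1 - t0, t2 - t1))
os.remove(path)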
answered May 21, 2015 at 14:34
Wayne Whitty
So you are saying that with OpCache the database results are also stored in memory, and there is no more database query execution?
– Raja
May 21, 2015 at 15:10
@Yadheendran - OpCache stores purely script bytecode, not data of any kind, so database results will never be stored in OpCache
– Mark Baker
May 21, 2015 at 15:14
|
|
I just recently got an update about OpCache in PHP, and I am a little familiar with file-based caching in CodeIgniter.
I thought file-based caching was the faster technique, since there is no database access and the generated HTML file is loaded directly, so it should be faster than the alternatives.
So I searched Google, and some websites compared the speed of caching by benchmarking it; they mentioned that file caching is slow on retrieval compared to other caching techniques such as Memcache and PHP's OpCache, and I am confused by those reports.
I know every caching technique has its own pros and cons. Please advise me for my situation: my page does not need real-time data, and currently I am using file-based caching. So is it OK to go with OpCache or Memcache?
|
Which one is faster php File based caching or Opcache
|
All you need to do is reestablish the database connection. Try:
system("bundle exec rake db:reset")
puts User.count
User.destroy_all("username = 'user3'")
puts User.count
system("bundle exec rake db:reset")
ActiveRecord::Base.clear_all_connections!
puts User.count
User.destroy_all("username = 'user3'")
puts User.count
The main question is: why do you need to do this? I am pretty certain there is a better way to achieve what you want.
|
I am running rails db:reset db:migrate between tests from within my testing script (which imports and interfaces with the model directly), but the changes are not reflected between the first test and the second test. More specifically, the changes caused by the first test are not reversed as they should be.
When I connect to the database externally (from the shell), I observe that the command has taken effect.
I have already looked at this question but the solution had no effect (quite literally, there was no error but also no discernible effect).
How can I force my test script to clear its in-memory cache of the sqlite state?
Full steps for reproducing the problem.
Create a new rails app.
rails new MWE
Put the following in db/schema.rb
ActiveRecord::Schema.define(version: 20140408213603) do
create_table "users", force: true do |t|
t.string "username"
end
end
Put the following in db/seeds.rb.
User.create(username: 'user1')
User.create(username: 'user2')
User.create(username: 'user3')
Put the following in the Gemfile.
source 'https://rubygems.org'
gem 'rails', '4.0.0'
gem 'sqlite3'
gem 'protected_attributes'
Put the following in a file called app/models/user.rb.
class User < ActiveRecord::Base
attr_accessible :username
end
Run the following commands. [the command listing did not survive extraction]
Place the following contents in a test script file. [the script did not survive extraction; judging from the answer above, it resets the database, prints User.count, destroys user3, and prints the count again]
Run the file and observe actual output. [the output listing did not survive extraction; the counts do not change after the reset]
Desired Output: [not recoverable; the counts should reflect the freshly reseeded database after each reset]
How can I force the DB reset to be reflected in the model?
|
How can I force the rails SQL cache to clear?
|