Simply use proxy_no_cache and proxy_cache_bypass instead of if, testing the value of $arg_geoloc (not $args_geoloc) with a map directive.
map $arg_geoloc $bypass {
    default 1;
    1       0;
}
server {
    ...
    location /ajax/airport.php {
        ...
        proxy_no_cache $bypass;
        proxy_cache_bypass $bypass;
        ...
        # No need to add /ajax/airport.php in proxy_pass
        proxy_pass http://127.0.0.1:8080;
    }
    ...
}
Nginx also allows testing several parameters with proxy_no_cache and proxy_cache_bypass. If you need something like that, just list the parameters one after another:
proxy_no_cache $cookie_PHPSESSID $bypass_cache;
proxy_cache_bypass $cookie_PHPSESSID $bypass_cache;
|
I'd like to use the nginx cache for a specific URL only.
The URL is /ajax/airport and must contain the parameter ?geoloc=1.
Caching is working fine; the only issue I'm facing is getting it to work for a URL containing the given parameter.
Here is my nginx site.conf file:
server {
    listen 80;
    server_name _;
    server_tokens off;
    location /ajax/airport.php {
        if ($args_geoloc = 1) {
            proxy_pass http://127.0.0.1:8080/ajax/airport.php;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache my-cache;
            proxy_cache_valid 300s;
            #proxy_no_cache $cookie_PHPSESSID;
            #proxy_cache_bypass $cookie_PHPSESSID;
            proxy_cache_key "$scheme$host$request_uri";
            add_header X-Cache $upstream_cache_status;
            add_header LEM airport;
        }
    }
    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header LEM all;
    }
}
server {
    listen 8080;
    .. usual location handling ...
And the error I get:
nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/sites-enabled/site.com.conf:8
Thank you for your help!
|
Nginx location with cache for a specific URL with params
|
MongoDB default storage engine maps the files in memory. It provides an efficient way to access the data, while avoiding double caching (i.e. MongoDB cache is actually the page cache of the OS).
Does this mean as long as my data is smaller than the available ram it will be like having an in-memory database?
For read traffic, yes. For write traffic, it is different, since MongoDB may have to journalize the write operation (depending on the configuration), and maintain the oplog.
Is it better to run MongoDB from memory only (leveraging tmpfs)?
For read traffic, it should not be better. Putting the files on tmpfs will also avoid double caching (which is good), but the data can still be paged out. Using a regular filesystem instead will be as fast once the data have been paged in.
For write traffic, it is faster, provided the journal and oplog are also put on tmpfs. Note that in that case, a system crash will result in total data loss. Usually, the performance gain is not worth the risk.
|
I will be creating a 5-node MongoDB cluster. It will be more read-heavy than write-heavy, and I had a question about which design would bring better performance. These nodes will be dedicated to MongoDB only. For the sake of an example, say each node will have 64GB of RAM.
From the mongodb docs it states:
MongoDB automatically uses all free memory on the machine as its cache
Does this mean as long as my data is smaller than the available ram it will be like having an in-memory database?
I also read that it is possible to implement mongodb purely in memory
http://edgystuff.tumblr.com/post/49304254688/how-to-use-mongodb-as-a-pure-in-memory-db-redis
If my data were quite dynamic (ranging from 50GB to 75GB every few hours), would it theoretically perform better to let MongoDB manage itself with its cache (the default setup), or to put MongoDB into memory initially and, if the data grows over the size of RAM, use swap space (SSD)?
|
MongoDB - make in-memory or use cache
|
Take a look at Infinispan. I think it covers all of your requirements.
|
Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations.
Closed 8 years ago.
We have an application that processes image data, running on a single machine. The processing is very expensive (10 to 30 seconds), so we cache the resulting files on disk. Those files are also big, so we have to prune the cache when it hits the configurable boundaries. The cached image files themselves are created by a different non-Java process. And there are user actions that may lead to certain cache entries becoming invalid.
Current implementation:
Currently we're using a custom cache manager for that and store some meta data separately in a file system structure.
Requirements for a caching provider:
I've looked into a couple of Java cache solutions, but none seem to match our requirements.
1. overflow to disk (we cannot keep the entire cache in memory because RAM is very limited)
2. persistent on shutdown, read on startup (need not be failsafe, but at least best effort)
3. LRU eviction strategy
4. size limitation of disk and memory cache by (different) max number of elements
5. custom cache eviction listeners (to notify a second system)
Here's why the common frameworks don't qualify:
ehcache fails on point (1) and (2) since one cannot have both at the same time
JCS fails on point (5) as it's not possible to react to disk cache eviction events
Guava fails on point (1) as there's no overflow to disk option
Any advice is appreciated.
|
Looking for a restart-persistent LRU disk caching solution in Java - ehcache, guava, JCS do not qualify [closed]
|
You can write the file by doing something like this, assuming that the file contains only the value 1 or 0 depending on whether the checkbox should be set (call this script when submitting the form):
if(isset($_POST["mycheckboxname"])){
file_put_contents('file.txt', '1');
}
else{
file_put_contents('file.txt', '0');
}
This renders the checkbox in the right state on screen. Use it when displaying the checkbox form:
$checked = file_get_contents('file.txt');
echo '<input type="checkbox" name="mycheckboxname" ';
if($checked=='1') echo 'checked ';
echo '/>';
Be sure to set the right permissions when creating the file so that the PHP processor has write access to it.
answered Apr 24, 2015 at 15:55
pBuch
Thanks, I tried your code but the problem persists. Writing the file is no problem, but when I refresh the page the checkbox is automatically set to unchecked, because when the page is loaded, isset($_POST["name"]) is not true.
– Jim
Apr 24, 2015 at 16:18
As I stated, you have to divide the code into two scripts. One that is executed when displaying the site and one that handles submitting of the file. The second one can e.g. forward the user back to the first one. Then it should work as intended.
– pBuch
Apr 24, 2015 at 16:20
Ah! That was exactly the solution I needed, didn't even think about using a seperate script. Working like a charm now :) Thanks!
– Jim
Apr 24, 2015 at 16:34
|
|
For my home automation project (using a Raspberry Pi with an Apache server and a configuration webpage) I'm trying to save a checkbox's setting to a file on the server side, but I can't get it working in my situation.
Using PHP with fopen() and fwrite() I can store any string in a text file; that's no problem. The issue is that the form uses POST and I can't seem to figure out how to write my code in such a way that:
1) the checkbox itself is set to the value that is currently present in the text file ('remembering and retrieving' the setting);
2) the setting that was just set by the user is written to the file, which happens when the page loads (POST).
These actions seem to get in each other's way because PHP is server-side. If the page is refreshed or visited for the first time there's no problem; the problem arises when reloading the page after the form is submitted.
It doesn't really matter which method or language I use to save the checkbox's setting on the server side.
Which method could do the trick?
|
Storing checkbox value on server side
|
Your best bet is probably to replace the view and remove the cache block:
https://github.com/spree/spree/blob/v3.0.0/frontend/app/views/spree/products/show.html.erb#L3
There are other solutions for disabling caching site wide, but nothing else that I know of for disabling specific caches that aren't terrible hacks.
|
I am using Spree in my application and it's working pretty well, except for one issue that has got me thinking. On the product page, I have made products available according to the time of day. But some of the products take values from the cache, and based on the time selected it puts in products that aren't supposed to be there. How can I stop caching on this single page of Spree (I want it to keep working on other pages)? I have created an action for that page and put a before filter on it, but it deletes the cache from all the pages. Any inputs on the same would be highly welcome.
|
Stopping Fragment caching on a single page - Spree
|
The documentation for the <Files> directive clearly states where you can use it:
Context: server config, virtual host, directory, .htaccess
In most cases, you'd probably want to add it to your application's virtual host (if you want the caching rules to apply only to that application) or the server's config (outside any directive, usually in httpd.conf) - if you want to apply those rules globally (useful if you have more than one GWT application on the server).
If you want to use it in a virtual host directive:
<VirtualHost *:80>
ServerName host.example.com
#...
<Files *.nocache.*>
ExpiresActive on
ExpiresDefault "now"
Header merge Cache-Control "public, max-age=0, must-revalidate"
</Files>
<Files *.cache.*>
ExpiresActive on
ExpiresDefault "now plus 1 year"
</Files>
</VirtualHost>
If you want to use them globally, just put them in httpd.conf, outside of any directives.
answered Apr 6, 2015 at 19:43
Igor Klimer
|
|
The GWT perfect caching documentation (http://www.gwtproject.org/doc/latest/DevGuideCompilingAndDebugging.html#perfect_caching) suggests adding the following lines to my .htaccess file:
<Files *.nocache.*>
ExpiresActive on
ExpiresDefault "now"
Header merge Cache-Control "public, max-age=0, must-revalidate"
</Files>
<Files *.cache.*>
ExpiresActive on
ExpiresDefault "now plus 1 year"
</Files>
As I'm not using .htaccess files but have access to the Apache 2.2 httpd.conf file I would prefer to add those lines there.
But where / how?
Thanks for any advice.
|
GWT cache config via Apache 2.2 httpd.conf?
|
Try to enable the WebView app cache:
// Set the cache size to 8 MB by default; should be more than enough
mWebView.getSettings().setAppCacheMaxSize(1024 * 1024 * 8);
mWebView.getSettings().setAppCachePath("/data/data/" + getPackageName() + "/cache");
mWebView.getSettings().setAppCacheEnabled(true);
ConnectivityManager cm = (ConnectivityManager) this.getSystemService(Activity.CONNECTIVITY_SERVICE);
if (cm != null && cm.getActiveNetworkInfo() != null && cm.getActiveNetworkInfo().isConnected()) {
    mWebView.getSettings().setCacheMode(WebSettings.LOAD_DEFAULT);
} else {
    mWebView.getSettings().setCacheMode(WebSettings.LOAD_CACHE_ELSE_NETWORK);
}
You will need to add the ACCESS_NETWORK_STATE and ACCESS_WIFI_STATE permissions.
Here you can find more info about all the WebSettings.
|
I'm trying to use a WebView to show my blog posts in Android.
I can save the HTML text in a SQLite database for later use when the user is offline, but the problem is saving the images to a directory while the user is online.
On the next visit, if an image is found in the specified directory, use it; otherwise, download it and cache it for future use.
|
Android WebView download image in a path as Cache
|
[InvalidateCacheOutput("Get", typeof(BeamsController))]
seems to work, instead of 'manual invalidation' (RemoveStartsWith), above.
In fact, after reading the source code of the attribute, it appears that the documentation is wrong and it should be:
cache.RemoveStartsWith(Configuration.CacheOutputConfiguration().MakeBaseCachekey("Beams", "Get"));
which works as expected, calling the method.
edited Feb 5, 2015 at 0:57
answered Feb 4, 2015 at 23:13
Tom
|
|
I have the following caching attribute on my controller method:
[CacheOutput(ClientTimeSpan = 14400, ServerTimeSpan = 14400)]
I am attempting to clear the cache. However, after running this line of code:
//clear cache
cache.RemoveStartsWith(Configuration.CacheOutputConfiguration().MakeBaseCachekey("BeamsController", "Get"));
I am still getting a 304 not-modified response without the controller method being invoked.
I am using this library https://github.com/filipw/AspNetWebApi-OutputCache
|
WebAPI OutputCache cache invalidation
|
Yii 2 now requires a closure to wrap the query. Active Record does a query eventually, so you can put that in the closure. In an AR class, get the DB connection and wrap the query you want to cache. The closure has the signature function($db); you usually need to access more variables, so add use($variable) to make them visible within the closure.
$db = self::getDb();
$object = $db->cache(function ($db) use($id) {
return self::findOne($id);
});
If you write to the db, the cache above won't know about it until the cache duration expires. So dependency should be added to the cache function to tell it when to invalidate the cache. Dependency gets complicated fast...
http://www.yiiframework.com/doc-2.0/yii-caching-dependency.html
|
I am quoting the guide:
Query caching is a special caching feature built on top of data caching. It is provided to cache the result of database queries.
Query caching requires a DB connection and a valid cache application component. The basic usage of query caching is as follows, assuming $db is a yii\db\Connection instance:
$result = $db->cache(function ($db) {
    // the result of the SQL query will be served from the cache
    // if query caching is enabled and the query result is found in the cache
    return $db->createCommand('SELECT * FROM customer WHERE id=1')->queryOne();
});
I do not think that I should manually create a DB connection in AR classes. So how can I do this in my AR models?
I have asked the same question on yii2 forum but I got no answer. It seems that people do not know how to do query caching in Active Record.
|
How to use query caching in yii2 ActiveRecord
|
WillPaginate doesn't include Array pagination by default, but you can include it...
require 'will_paginate/array'
...in an initializer. Pagination on an array should work almost exactly like pagination on an ActiveRecord::Relation.
|
I have a scope in my model (scope :short_listed, -> ....). To list all the items on the index page I use this code:
@posts = Post.cached_short_listed.paginate(:page => params[:page])
post.rb
def self.cached_short_listed
Rails.cache.fetch([name, "short_listed"], expires_in: 5.minutes) do
short_listed.to_a
end
end
and I have this error
undefined method `paginate' for #<Array:
if I remove .paginate(:page => params[:page]) and <%= will_paginate....%> everything works fine.
How can I make will_paginate work with model caching?
|
How to use will_paginate with model caching in rails?
|
The gem command line tool automatically caches gems. From the documentation:
Gem::Installer does the work of putting files in all the right places on the filesystem including unpacking the gem into its gem dir, installing the gemspec in the specifications dir, storing the cached gem in the cache dir, and installing either wrappers or symlinks for executables.
|
When using Python with pip we can specify an environment variable which says to also download the packages to a cache location: "How do I install from a local cache with pip?".
export PIP_DOWNLOAD_CACHE=$HOME/.pip_download_cache
pip install numpy
How can we do the same for bundler as well?
|
How to install Ruby Gems, also caching them along the way?
|
Sure. CacheBuilder.newBuilder().maximumSize(0) will do the job.
|
Is it possible to build/configure a Cache, which does not Cache at all?
|
Guava Cache that does not cache
|
Instead of relying on the CacheDuration property, you can do the caching yourself in the WebMethod. In that case you will have full control.
You can create a unique cache key based on the input parameters and store the result in the cache (for a specific duration) only if there is no error.
So when there is an error, nothing is cached, and the next call will check again whether the account is valid; the result is put in the cache only once the account is valid.
|
How can you avoid caching an error result with ASP/ASMX WebMethod with a CacheDuration property?
Let's say you have a WebMethod that adds two numbers (so it has two number parameters) and also has an account ID parameter. If say the account ID is an expired account, the answer to those 3 parameters would be "Error: Expired Account". If you have a cache duration of 10 minutes, if the account gets fixed in 1 minute, and that method is called again within 10 minute expiration period (with same 3 parameters), doesn't it return the error message that is cached?
Is there a way to avoid caching a result if it's an error message?
|
WebMethod CacheDuration - Avoid Caching Bad Results
|
You can use version as a query parameter, e.g. /resources/foo.js?_version=1.0.0. If you are using Maven, it is not that hard to get version information from /META-INF/maven/{groupId}/{artifactId}/pom.properties. Of course this will force reload all scripts with every new version... but new versions are probably not deployed that often.
Then it is always good practice to properly set HTTP caching headers. <mvc:resources> should correctly handle the Last-Modified header for you. And you can set cache-period to make the browser check for resource modifications more often.
|
We're trying to force the client's browser to reload static files when they change. Currently, we have our build scripts automatically creating a new directory with the build timestamp and replace the content in the code to point to that directory.
However, I feel this is hardly an optimal solution. It forces the client browser to reload every single file whenever a new build exists, even if only one file changed, and build time increases considerably from scanning every file and replacing every static file reference.
I know we can also set the version when we declare files (something like < link src="blahblah.css?version=1.1" />), but this forces us to change all our code to include a version placeholder and still have our build scripts replacing it.
Is there a smarter way to do this? We're using Spring MVC. Is there any option in mvc:resources that I'm not aware of to do this without changing code? Or something on web.xml?
We're using tomcat. Is there a way to do this at server level? Would it help to use a cache like Varnish or something? Or these caches only allow to set expiry times and not check that the file changed? Bear in mind I'm not comfortable at all in server and cache configuration tasks.
I found out about this project https://code.google.com/p/modpagespeed/, but since it's far from my comfort zone, I'm struggling to understand capabilities and if this helps with what I want.
Anyone has any ideas?
Thanks
|
Java web app: Force browser to load static content (js, css, html) if deployed file changed
|
I found a related question with the correct answer here: https://stackoverflow.com/a/12081608/2290153
The relevant spec is http://docs.jboss.org/cdi/spec/1.0/html/interceptors.html and the important part is section 9.4.
Specify CacheResultInterceptor (full qualified name) in beans.xml and that should work.
edited Feb 24, 2018 at 4:57
Juan Pablo Perata
answered Feb 28, 2015 at 20:45
Matthias Wiedemann
|
|
I am building a Java EE application and want to use JSR107's @CacheResult annotation to "transparently" add some caching to my service layer. This is my first "full-featured" Java EE app, I usually work in Spring were annotation processing seems a lot easier ;)
So, here is my software stack:
Wildfly 8.1
EHCache 2.8
EHCache-JCache
JSR 107 Reference Implementation + annotation processing (https://github.com/jsr107/RI/tree/master/cache-annotations-ri)
... and this is the layout of my EAR:
the root contains a few EJB/CDI bean jars
/lib contains all required libraries
one of the beans inside one of the root-level-jars contains a few methods annotated with @CacheResult, the parameters to the method are a String
My problem: no caching happens ;)
Concrete questions:
has anyone here ever got the software stack I am using to successfully work together to perform caching?
is there a way for me to get more debugging information on what is going on internally during interceptor processing? I tried various logger configurations and dug through the sources of the frameworks I am using, but seem to be missing the essential spot.
Thanks in advance
Sven
Update
It works if I explicitly add @Interceptors(CacheResultInterceptor.class) to the service bean. However, my own interceptors (within the same jar file) don't need to be declared that way, the respective interceptor binding type suffices. Is there a difference if I try to use interceptors that reside in an external jar?
|
Wildfly 8.1 + EHCache + Annotations not working
|
Everything you're experiencing is just how jQuery Mobile is intended to work. You have read the documentation regarding caching and prefetching, but at the same time you are missing the bigger picture, mostly because you didn't read everything.
When working with jQuery Mobile, caching makes sense only if you are using a multi-HTML template. Let's take a look at your current state. You are using a multi-page template where every page is part of a single HTML file. In this case, the initial HTML file is fully loaded into the DOM and it will stay there until the page is refreshed or until you open some subsequent HTML file using rel="external" (which is equal to a full page restart).
In any other case the initial HTML page will stay in the DOM forever and you can't do anything to prevent that. Basically you can't remove pages loaded into the DOM if they were part of the initial HTML file. Of course you can forcefully remove them, but the application will then suffer from history navigation problems, so I don't advise that.
You have two solutions:
Move that specific page to some other HTML file. In this case when you transition to some other page, from this specific page, it will be removed from the DOM.
Clean the previous form data during the pagebeforechange page event
|
I have an app built with jQuery Mobile where all the pages are in a single HTML file. When I navigate to a page, fill out a form and then navigate away from it, I want the form data that I filled in not to be there the next time I go to that page. My question is: is this a caching issue? And if so, how do I prevent it? I tried things like:
pageContainerElement.page({ domCache: false });
$(document).bind("mobileinit", function(){
$.mobile.page.prototype.options.domCache = false;
});
but the data is still there whenever I go back onto the page
|
jQuery Mobile multi-page template caching
|
There is no difference; the former uses the current page instance and its Cache property, the latter uses the static approach via HttpContext.Current.Cache, which also works in a static method without a page instance.
Both are referring to the same application cache.
So you can get the Cache via Page, for example in Page_Load:
protected void Page_Load(Object sender, EventArgs e)
{
System.Web.Caching.Cache cache = this.Cache;
}
or in a static method (which is used in a HttpContext) via HttpContext.Current:
static void Foo()
{
var context = HttpContext.Current;
if (context != null)
{
System.Web.Caching.Cache cache = context.Cache;
}
}
|
I have read the source code of a web application and seen that it uses the Cache object (System.Web.Caching.Cache) for caching data. In code-behind files (.aspx.cs files), it uses Page.Cache to get the Cache, while in other class files it uses HttpContext.Current.Cache. I wonder why it does not use the same option everywhere. Can someone explain the differences between Page.Cache and HttpContext.Current.Cache? Why use each one in each context above? Can I use either Page.Cache or HttpContext.Current.Cache in both contexts?
Thanks in advance.
|
Page.Cache vs. HttpContext.Current.Cache
|
This can happen for caches of small size. Let's compare caches of size 2.
In my example, the directly-mapped "DM" cache will use row A for odd addresses, and row B for even addresses.
The LRU cache will use the least recently used row to store values on a miss.
The access pattern I suggest is 13243142 (repeated as many times as one wants).
Here's a breakdown of how both caching algorithms will behave:
H - hits
M - misses
----- time ------>>>>>
Accessed: 1 | 3 | 2 | 4 | 3 | 1 | 4 | 2
\ \ \ \ \ \ \ \
LRU A ? | ? | 3 | 3 | 4 | 4 | 1 | 1 | 2 |
B ? | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
M M M M M M M M
DM A ? | 1 | 3 | 3 | 3 | 3 | 1 | 1 | 1 |
B ? | ? | ? | 2 | 4 | 4 | 4 | 4 | 2 |
M M M M H M H M
That gives 8 misses for the LRU, and 6 for directly-mapped. Let's see what happens if this pattern gets repeated forever:
----- time ------>>>>>
Accessed: 1 | 3 | 2 | 4 | 3 | 1 | 4 | 2
\ \ \ \ \ \ \ \
LRU A | 2 | 3 | 3 | 4 | 4 | 1 | 1 | 2 |
B | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
M M M M M M M M
DM A | 1 | 3 | 3 | 3 | 3 | 1 | 1 | 1 |
B | 2 | 2 | 2 | 4 | 4 | 4 | 4 | 2 |
H M H M H M H M
So the directly-mapped cache will have 50% hits, which outperforms the 0% hit rate of the LRU.
This works because:
Any address repeated in this pattern has not been accessed in the previous 2 accesses (and both of those were to different addresses), so the LRU cache will always miss.
The DM cache will only sometimes miss, as the pattern is designed to reuse what was stored the last time the corresponding row was used.
Therefore one can build similar patterns for larger cache sizes, but the larger the cache, the longer such a pattern would need to be. This corresponds to the intuition that larger caches are harder to exploit this way.
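The tables above can be checked with a short simulation. Below is a minimal Python sketch (an illustration only, not tied to any real hardware) modelling a 2-entry fully associative LRU cache next to a 2-row direct-mapped cache on the access pattern 13243142; the direct-mapped row is chosen by address % 2, matching the odd/even split described above:

```python
from collections import OrderedDict

def simulate(pattern, rounds=1):
    lru = OrderedDict()          # fully associative, 2 entries, LRU order
    dm = {0: None, 1: None}      # direct-mapped: row = address % 2
    lru_hits = dm_hits = 0
    for _ in range(rounds):
        for addr in pattern:
            # LRU cache lookup
            if addr in lru:
                lru.move_to_end(addr)        # mark as most recently used
                lru_hits += 1
            else:
                if len(lru) == 2:
                    lru.popitem(last=False)  # evict least recently used
                lru[addr] = True
            # Direct-mapped cache lookup
            if dm[addr % 2] == addr:
                dm_hits += 1
            else:
                dm[addr % 2] = addr
    return lru_hits, dm_hits

print(simulate([1, 3, 2, 4, 3, 1, 4, 2], rounds=1))    # (0, 2): LRU 8 misses, DM 6
print(simulate([1, 3, 2, 4, 3, 1, 4, 2], rounds=100))  # (0, 398): DM approaches 50% hits
```

One round reproduces the 8 misses vs 6 misses from the first table, and repeating the pattern shows the LRU cache never hitting while the direct-mapped cache settles at 4 hits per 8 accesses.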
|
For caches of small size, a direct-mapped instruction cache can sometimes outperform a fully associative instruction cache using LRU replacement.
Could anyone explain how this would be possible with an example access pattern?
|
Direct-mapped instruction cache VS fully associative instruction cache using LRU replacement
|
Parse released an update to their SDK just 3 days after this question, in which they added a local datastore feature. This can be used for caching any ParseObjects that are returned from Cloud Code. Not the most elegant solution, since it does not automatically cache network requests and you need to have all your data in the form of ParseObjects, but it definitely makes life easier.
Check out their press release for details - http://blog.parse.com/2014/04/30/take-your-app-offline-with-parse-local-datastore/
answered May 3, 2014 at 21:34
VarunT
|
|
I am using Parse with Android. I was using ParseQueryAdapter's API for querying and getting back results from the Parse database. However, the default queries cannot filter the content in the way I want, and I am implementing a Parse Cloud Code function to get me the ParseObjects instead. The problem is that by using Cloud Code I have to give up all the optimisations built into ParseQueryAdapter, especially caching of results and pagination.
Can someone show me a way to implement caching for ParseObjects returned through Cloud Code?
Thanks in advance!
|
How to implement a local cache on Android device for Parse Cloud Code function results?
|
You are caching your query results, but not your entity (those are separate caches)
Caching a query's results just stores the IDs; if you are not caching your entities too, a query is issued to load each returned entity (this is usually bad)
The default table name for the MyDTO class is MyDTO, so that's where it's looking
This looks like a query by ID, for which you shouldn't be using a loose named query, but a proper loader (see 17.4. Custom SQL for loading).
Once you set up the loader and entity caching, you'll just be able to retrieve your objects using just session.Get(id), which will use the second level cache, as long as you do all of your work inside transactions, which is a recommended practice.
|
I have an asp.net-mvc site that uses NHibernate and SQL Server; there are a few pages that are quite slow because they require views that need queries joining about 25 different tables. If I do a large join it takes a while, and if I do a multi-query it still seems to take a while.
It's a pretty read-heavy (light-write) DB, so I wanted to see if there is a good way to basically load the entire object graph of my database (my server has plenty of memory) into the 2nd level cache, so I am confident it rarely hits the db. I am using
NHibernate.Caches.SysCache.SysCacheProvider
as the second level cache (not a distributed cache). Is there any flaw in this idea and is there a recommended way of doing this?
|
How can I load everything up in nhibernate 2nd level cache?
|
Your test code isn't measuring the time it takes simply to invoke ResolveRequestCache. Instead, it's measuring the time it takes to invoke everything between ResolveRequestCache and AcquireRequestState.
If you're making parallel requests to the server and if those requests contain a Session cookie, the ASP.NET runtime will serialize the requests. The time spent holding these requests in the pipeline would show up between the ResolveRequestCache and AcquireRequestState method invocations.
See Are AJAX calls processed in serial fashion on server if you use ASP.Net session state? for more information on AJAX + Session.
Edit: I should mention that a few other events (such as Routing) occur between ResolveRequestCache and AcquireRequestState. If you have many routes defined or if the route matching logic is particularly complicated, this would also contribute to increased time spent.
If you're going for accurate performance measurements in your web application, take a look at http://www.iis.net/configreference/system.webserver/tracing.
|
I have an ASP.NET application with a trace (log) tool.
It uses AJAX for retrieving data (controls).
I have those two trace entries:
protected void Application_ResolveRequestCache(Object sender, EventArgs e)
{
Log("Application_ResolveRequestCache", null);
}
protected void Application_AcquireRequestState(Object sender, EventArgs e)
{
Log("Application_AcquireRequestState");
}
The time difference between these two entries is the duration of Application_ResolveRequestCache (I think). The Log method depends on a singleton class, which writes to HttpContext.Current.Trace.
Example: the user does something on the page, for which the site sends 5 AJAX requests (updating 5 controls) to the server.
The duration of Application_ResolveRequestCache for each request is (in order by time): 0.001, 1.518, 4.556, 5.057, and 5.575 (in seconds).
In my Global.asax I have disabled the output cache via filters:
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
filters.Add(new OutputCacheAttribute
{
VaryByParam = "*",
Duration = 0,
NoStore = true,
});
}
So why does Application_ResolveRequestCache take time even though output caching is disabled?
Note: this issue only occurs for AJAX requests.
additional information:
AJAX query:
jQuery.ajax({
url: "/mvcget/ajax/LoadControl/",
data: properties,
type: "POST",
contentType: "application/x-www-form-urlencoded",
async: true
});
LoadControl action:
[HttpPost]
public virtual ActionResult LoadControl()
{
var control = CreateControlFromRequest<ITemplatedControl>();
control.SaveUserValues();
var content = control.Render();
return Content(String.IsNullOrEmpty(content) ? " " : content);
}
|
Why does Application_ResolveRequestCache have a long duration?
|
The reason you can't detach the listener is that you're already in the route event. By that point all the listeners have been marshalled together and queued for execution.
Why not instead take the listener out of the equation beforehand?
public function onBootstrap(MvcEvent $e)
{
$app = $e->getApplication();
$events = $app->getEventManager();
$shared = $events->getSharedManager();
$services = $app->getServiceManager();
$routeListener = $services->get('RouteListener');
$routeListener->detach($events);
$shared->attach('Zend\Mvc\Application', MvcEvent::EVENT_ROUTE, array($this, 'onRoute'), 100);
}
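The reason detaching from inside the event is too late can be shown with a minimal event-manager sketch (Python, illustrative only; Zend's EventManager is far more elaborate):

```python
# Minimal event manager: listeners are snapshotted (marshalled) before
# dispatch, so detaching during a dispatch cannot stop a queued listener.
class EventManager:
    def __init__(self):
        self.listeners = []

    def attach(self, listener):
        self.listeners.append(listener)

    def detach(self, listener):
        if listener in self.listeners:
            self.listeners.remove(listener)

    def trigger(self):
        # Iterate over a copy taken up front, like the marshalled queue.
        for listener in list(self.listeners):
            listener(self)

calls = []
em = EventManager()

def route_listener(mgr):
    calls.append("route")

def early_listener(mgr):
    calls.append("early")
    mgr.detach(route_listener)   # too late for this dispatch

em.attach(early_listener)
em.attach(route_listener)
em.trigger()   # route_listener still runs despite the detach
em.trigger()   # now the detach has taken effect: only "early" runs
```

This is why the answer detaches the RouteListener in onBootstrap, before the route event ever fires.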
|
In my application I have a route (Literal or Segment) for every action. I am not using one global route for everything, so the number of routes has grown hugely with 44+ modules (and more in the future).
It is my understanding (from what I have seen in the code) that on every page request Zend goes through all these routes in an array and searches for a match, which could be a bottleneck for the application (am I right?).
So I was thinking: why not cache the matched routes in a DB table with an index to speed up the search?
FIRST QUESTION: would this make the system's performance better?
So my first problem is skipping the system's route-matching mechanism. This is what I tried, but it did not work:
public function onBootstrap(MvcEvent $e)
{
$em = StaticEventManager::getInstance();
$em->attach('Zend\Mvc\Application', MvcEvent::EVENT_ROUTE, array($this, 'onRoute'), 100);
}
public function onRoute(MvcEvent $e)
{
//var_dump($e->getRouteMatch());//->null routing has not been done yet
/* @var $router \Zend\Mvc\Router\Http\TreeRouteStack */
$router = $e->getRouter();
//-------------------------------------created a dummy route
$routeMatch = new \Zend\Mvc\Router\RouteMatch(array(
'controller' => 'Links\Controller\Items',
'action' => 'view',
'catId' => 0
));
$routeMatch->setMatchedRouteName('app/links');
$e->setRouteMatch($routeMatch);//set the dummy route
//--------------------------------------------PROBLEM HERE
//detach the onRoute event from routeListener
$e->getApplication()
->getServiceManager()
->get('RouteListener')
->detach($e->getApplication()->getEventManager());
}
The detach method is executed, but the onRoute event still runs and matches the URL to the correct route. So how do I bypass (skip/detach) route matching?
|
ZF2 caching matched routes
|
Create a new South datamigration (just a blank migration):
python manage.py datamigration <app> create_cache_table
Edit the generated migration. I called my cache table simply cache.
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
from django.core.management import call_command # Add this import
class Migration(DataMigration):
def forwards(self, orm):
call_command('createcachetable', 'cache')
def backwards(self, orm):
db.delete_table('cache')
...
If you are using multiple databases and need to define which to use. Note the second import statement for dbs instead of db. You also will need to set up routing instructions: https://docs.djangoproject.com/en/dev/topics/cache/#multiple-databases.
import datetime
from south.db import dbs # Import dbs instead of db
from south.v2 import DataMigration
from django.db import models
from django.core.management import call_command # Add this import
class Migration(DataMigration):
def forwards(self, orm):
call_command('createcachetable', 'cache', database='other_database')
def backwards(self, orm):
dbs['other_database'].delete_table('cache')
...
|
We use South for our schema migrations and data migrations. Now I need to enable the cache in Django, which is quite simple to do. That required me to run manage.py createcachetable cache_table in my terminal, though I would like to automate this process with South. Is there a way I can create a cache table using South?
|
Creating Django Cache table using South?
|
Each vector needs 8 MB of memory (1024 * 1024 * 8 B, assuming 8 bytes per long). So if these vectors are contiguously allocated, then a(i), b(i), c(i), d(i) and e(i) will map to the same cache set (though not always the same cache line). With the 4-way associativity from your question, only four of them can be in the cache set at a time. So when the cache line containing e(i) is brought into the cache, a cache line holding one of the other vectors' elements will be evicted.
If you are sure that these vectors are contiguously allocated, you can just pad them by one cache-line size, i.e. 32 bytes. That will do the trick: a(i), b(i), c(i), d(i) and e(i) will then fall on consecutive cache sets. And after accessing 4 elements of a vector, each cache line will be evicted anyway, because each cache line holds 4 long variables (a(0), a(1), a(2), a(3) will be on the same cache line, as will a(4), a(5), a(6), a(7)).
So you declare your vectors like
long a(max),pad1(32),b(max),pad2(32),c(max),pad3(32),d(max),pad4(32),e(max)
For related discussion, you can follow this link
why-is-one-loop-so-much-slower-than-two-loops
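The set-index arithmetic behind this can be checked with a small sketch (Python, illustrative; the 32 KB / 4-way / 32-byte-line geometry from the question is assumed):

```python
# Compute which cache set element 0 of each vector maps to, with and
# without padding, for an assumed 32 KB, 4-way, 32-byte-line cache.
LINE_BYTES = 32
NUM_SETS = 32 * 1024 // (LINE_BYTES * 4)    # 32 KB, 4-way => 256 sets
ELEM_BYTES = 8                              # one long
N = 1024 * 1024                             # elements per vector

def set_index(base_addr, i):
    addr = base_addr + i * ELEM_BYTES
    return (addr // LINE_BYTES) % NUM_SETS

# Contiguous allocation: vector sizes are a multiple of the way size, so
# element i of every vector lands in the same set -> thrashing.
bases = [k * N * ELEM_BYTES for k in range(5)]
same = {set_index(b, 0) for b in bases}      # collapses to one set

# With one cache line of padding between vectors, the sets are staggered.
padded = [k * (N * ELEM_BYTES + LINE_BYTES) for k in range(5)]
spread = {set_index(b, 0) for b in padded}   # five distinct sets
```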
I understand that the equation to find out to which set a memory address goes to is Set Number = (memory address) mod (cache size) So that would mean: a[0] =(1024*4) mod 32Kb = 0. Set 0 b[0] =(2048*4)+(32*4) mod 32Kb = 0. Set 0 And so on.. I guess I need to force them onto a different set? Otherwise there are not enough ways to keep all the data in cache?
– user1876942
Feb 17, 2014 at 6:44
|
I am trying to understand cache thrashing, is the following text correct?
Taking the code below to be an example.
long max = 1024*1024;
long a(max), b(max), c(max), d(max), e(max);
for(i = 1; i < max; i++)
a(i) = b(i)*c(i) + d(i)*e(i);
The ARM Cortex A9 is four way set associative and each cache line is 32 bytes, total cache is 32kb. In total there are 1024 cache lines. In order to carry out the above calculation one cache line must be displaced. When a(i) is to be calculated, b(i) will be thrown out. Then as the loop iterates, b(i) is needed and so another vector is displaced. In the example above, there is no cache reuse.
To solve this problem, you can introduce padding between the vectors in order to space out their beginning address. Ideally, each padding should be at least the size of a full cache line.
The above problem can be solved as such
long a(max), pad1(256), b(max), pad2(256), c(max), pad3(256), d(max), pad4(256), e(max)
For multidimensional arrays, it is enough to make the leading dimension an odd number.
Any help confirming whether the above is true, or pointing out where I have made an error, would be appreciated.
Thanks.
|
Cache thrashing, general help in understanding
|
RFC 2616, 9.4 HEAD:
The response to a HEAD request MAY be cacheable in the sense that the
information contained in the response MAY be used to update a
previously cached entity from that resource.
It doesn't really make sense to cache the response to a HEAD request itself, as it contains no entity.
|
Lets say I have a file on a server with cache-headers indicating it should be cached. Will the response of a HEAD request to that file be cached as well?
|
Will an HTTP HEAD request ever be cached by phone / browser
|
It turns out that the response coming back from Shopify was not actually JSON as I initially suspected; it was apparently an array. By JSON-encoding the response, the cache works as expected.
Route::get('product/{id}', function($id)
{
$value = Cache::remember("product_$id", 10, function() use ($id)
{
echo('Getting this from Shopify');
$shopify = new ShopifyLib;
return json_encode($shopify->getShopifyProduct($id));
});
var_dump(json_decode($value));
});
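A rough sketch of the underlying issue (Python, illustrative; the dict stands in for Laravel's cache backend, and the string-only behavior is an assumption made to mirror the symptom): a backend that only persists strings silently skips structured values, so encoding first makes the value cacheable.

```python
import json

class StringOnlyCache:
    """Toy cache that, like the failing setup, only persists strings."""
    def __init__(self):
        self.data = {}

    def remember(self, key, producer):
        if key not in self.data:
            value = producer()
            if not isinstance(value, str):
                return value        # backend rejects it: nothing is cached
            self.data[key] = value
        return self.data[key]

cache = StringOnlyCache()
calls = []

def fetch():
    calls.append(1)                 # counts trips to "Shopify"
    return json.dumps({"id": 98})   # encoding makes the value cacheable

cache.remember("product_98", fetch)
cache.remember("product_98", fetch)
# Only one trip was made; the second call was served from the cache.
```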
|
I am using Laravel 4 in conjunction with a PHP Shopify connector. I am trying to limit the calls made to retrieve Shopify products by caching the particular product pages when they are first needed.
I made the following simple route in Laravel to test this flow:
Route::get('product/{id}', function($id)
{
$value = Cache::remember($id, 10, function() use ($id)
{
echo('Getting this from Shopify');
$shopify = new ShopifyLib;
return $shopify->getShopifyProduct($id);
});
var_dump($value);
});
ShopifyLib is a PHP library I wrote to communicate with the Shopify connector. The call returns a product page in JSON format correctly every time. The problem is that the call is always being made externally to Shopify and never retrieved from the cache.
I am using database caching with the following entry being saved:
key : laravel172989477
value : eyJpdiI6Imw4aUwzNHN4eExwdElPWFZKXC9hdFpVRjc4ZG5pK1dYMTlJVm44a1doKzlvPSIsInZhbHVlIjoieVJ6N2J6Q1A3SGNsWG1xWFJKTUdVak5FSEtHWDZKQkd2Y2x0ZEI2dHlvcz0iLCJtYWMiOiJhNWU0OGUxOTkyNWE2NTRhNTY1ZTNhMjRlOWNhNzRjNGI1ZDIyY2YzNGM3NTVjOThhMDUyYjllZmI1OTJiZmE1In0=
expiration : 1386616552
The $id never changes so I assume that this entry should be returned every time.
I tried a simpler example using:
Route::get('product/{id}', function($id)
{
$value = Cache::remember('test', 5, function()
{
echo('Not getting this from the cache!');
return 'example';
});
var_dump($value);
});
This worked as expected, executing the closure only once, with all future calls going to the cache.
|
Laravel 4 Cache::remember is never returning a response with a given ID
|
First and foremost, I would suggest using Cache.getAs, which takes a type parameter. That way you won't be stuck with Option[Any]. There are a few ways you can do this. In my example, I'll use String, but it will work the same with any other class. My preferred way is by pattern matching:
import play.api.cache.Cache
Cache.set("mykey", "cached string", 0)
val myString:String = Cache.getAs[String]("mykey") match {
case Some(string) => string
case None => SomeOtherClass.getNewString() // or other code to handle an expired key
}
This example is a bit over-simplified for pattern matching, but I think it's a nicer method when you need to branch code based on the existence of a key. You could also use Cache.getOrElse:
val myString:String = Cache.getOrElse[String]("mykey") {
SomeOtherClass.getNewString()
}
In your specific case, replace String with Product, then change the code to handle what will happen if the key does not exist (such as setting a default key).
|
How to get object from Play cache (scala)
Code to set:
play.api.cache.Cache.set("mykey98", new Product(98), 0)
Code to get:
val product1: Option[Any] = play.api.cache.Cache.get("mykey98")
I get Option object. How to get actual Product object I stored in first step.
|
How to get object from Play cache (scala)
|
POST requests usually travel unaltered and are not cached, but there's
a drawback to that when you need to investigate server logs and cannot
see query string params in the log. Another popular cache busting
technique is to append a random query string param to each request,
like ?ts=${timestamp}, which forces proxy servers to fetch content
from the origin servers.
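The timestamp-parameter technique mentioned above can be sketched as follows (Python, illustrative; the parameter name ts is just a convention):

```python
# Append a ts=<timestamp> query parameter so every request URL is unique,
# forcing intermediaries to fetch from the origin server.
import time
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def bust_cache(url, ts=None):
    ts = int(time.time() * 1000) if ts is None else ts
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("ts", str(ts)))
    return urlunparse(parts._replace(query=urlencode(query)))

# bust_cache("http://example.com/api?x=1", ts=123)
# -> "http://example.com/api?x=1&ts=123"
```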
In my opinion the best solution for that problems is to use SSL
whenever possible. This makes it impossible for ISPs to tamper with
requests and it is safe to assume that the communication is happening
directly between client and server (and it is possible to detect when
someone tries to hijack the encrypted connection).
Credit to Filip Wasilewski for bringing this to my attention.
|
Some (rogue) ISPs may implement caching on their mobile network in order to reduce traffic on their connections. Some don't even tell their users.
Is there any standard way to defeat all caching mechanism in such cases and get sure to get fresh data when issuing a request on a web server ?
Thanks in advance.
|
How to invalidate any ISP cache when querying a web service from a mobile?
|
I hope this helps. I use SDWebImage in all my projects.
using :
Add to your view controller: #import "UIImageView+WebCache.h"
[yourImageView setImageWithURL:[NSURL URLWithString:@"ImageUrl"]];
You need to add MapKit and ImageIO to the project if you haven't already.
To do that:
Click on the project at the top of the project navigator in Xcode
Select the 'Build Phases' tab.
Open up the 'Link Binary with Libraries' box.
Click the '+'.
Add MapKit and ImageIO frameworks.
|
I have implemented an application using ARC, and I want to cache some image URLs in the Library caches folder.
Any idea?
Thanks in advance.
|
best way to cache image url from my webservice?
|
You can use Future.wait to wait for onLoad on every image:
Future.wait(myImageList.map((e) => e.onLoad.single)).then((_){
myContext.drawImage(myImageList[i], x, y);
// or call function that assumes that images are loaded
});
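The same wait-for-all pattern exists in other languages; here is an analogous sketch in Python with asyncio.gather (illustrative only, not Dart):

```python
# Wait for all "loads" to finish before drawing, mirroring Future.wait.
import asyncio

async def load_image(name):
    await asyncio.sleep(0)          # stand-in for network I/O
    return f"<image {name}>"

async def main():
    names = ["a.png", "b.png", "c.png"]
    # Like Future.wait: completes only once every load has completed.
    images = await asyncio.gather(*(load_image(n) for n in names))
    return images                   # safe to "draw" any of them now

images = asyncio.run(main())
```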
|
So, I've figured out how to draw images on a canvas using Dart, which goes something like this:
///create the image element with given source
ImageElement myImage = new ImageElement(src:"myImage.png");
///load the image
myImage.onLoad.listen((event){
///when the image is loaded, draw it
myContext.drawImage(myImage, 0, 0);
});
But, how do you draw an image at a later date?
As in, say I have a list of images:
List<ImageElement> myImageList;
I want to then load all my image elements one by one given their source. Then when that's done, whenever I feel like it, I can just go:
myContext.drawImage(myImageList[i], x, y);
without this code being inside the onLoad.listen method.
|
How do you cache images to be drawn later in canvas2d in dart?
|
Both of your ideas for caching are good. Lookup lists, data that does not change often, or predictable per-user data (a consistent list by user ID, for example) are all candidates for caching.
By client-side caching, I can't tell whether you mean output caching or something else, but I would look at output caching where appropriate:
[OutputCache(Duration = int.MaxValue, VaryByParam = "id")]
public ActionResult Details(int id)
Two small points about output caching:
Try to use VaryByParam; this ensures that if the data changes, the cached output doesn't become stale.
Try to pick a reasonable duration; otherwise the cache can only be refreshed if the app pool is recycled.
More information about output caching can be found here http://www.asp.net/mvc/tutorials/older-versions/controllers-and-routing/improving-performance-with-output-caching-cs
|
Our site has a large number of lists of constants for fields such as address type descriptions, states, generic comments, boolean type values, gender, etc...
We would like to be able to have this data available on all of our pages or at least the relevant lists given the page being loaded.
Our architecture consists of client-side HTTP and AJAX requests hitting our MVC4 controller actions, which then query the Web API and retrieve data from the database, etc.
We have decided to go with http://www.asp.net/web-forms/tutorials/data-access/caching-data/caching-data-at-application-startup-cs as well as some client side caching.
Is there anything else we can do so we can have these values on the client with even less overhead? Should we cache on controller methods (essentially caching what's inside HttpRuntime.Cache)?
thanks
|
How can I improve the performance of static data caching in ASP.NET?
|
So you're doing this downloading in a view controller (as evidenced by putting it in a viewDidLoad method).
What is likely happening is that when you move to another view, or when the view controller is deallocated, the "manager" object that you are using in that dispatch queue is also released along with the view controller.
That's why the download stops.
To prove that I am right, leave the view that is doing all this GCD work on screen. I bet you'll get all your images, not just 30 or 40.
What you need to do is move this code somewhere it can live until it's done downloading (perhaps not a view controller, which is only on screen for as long as the user wants to see it).
|
I have to download a large number of web images in my app. Currently I am trying to download the images in my initial class using Grand Central Dispatch.
code:
- (void)viewDidLoad
{
[super viewDidLoad];
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0ul);
dispatch_async(queue, ^{
// Perform non main thread operation
for (NSString *imageURL in tempoaryArr1) {
NSURL *url = [NSURL URLWithString:imageURL];
SDWebImageManager *manager = [SDWebImageManager sharedManager];
[manager downloadWithURL:url
delegate:self
options:0
success:^(UIImage *image, BOOL cached)
{
}
failure:nil];
}
dispatch_sync(dispatch_get_main_queue(), ^{
// perform main thread operation
});
});
}
But the above code only downloads 30 to 40 images on the first launch of the app; after that, it stops downloading images into the cache until I stop the app and run it again. The second time it downloads some more images and then stops again, and so on.
So my question is: why does it stop after downloading some images? I want it to keep downloading images into the cache until all of them are downloaded.
|
How to download web images in Cache before displaying using SDWebimage?
|
For deletion we can use the File API, like the following:
function removeAllCache(){
window.resolveLocalFileSystemURL(cordova.file.externalCacheDirectory, gotDirRemove, function(error){});
}
function gotDirRemove(entry){
var directoryReader = entry.createReader();
directoryReader.readEntries(function(entries) {
var i;
for (i = 0; i < entries.length; i++) {
entries[i].remove(function(file){
},function(error){
});
}
},function(){});
}
Hi, just add removeAllCache(); also in order to call the function.
– pancy1
Feb 6, 2021 at 0:46
|
I'm currently developing an app for Android using PhoneGap. For your information, my app concept is as below:
Capture an image (by default PhoneGap will store the cached image locally, i.e. the image path is file://androidappnames/cache/21323213.jpg)
Retrieve the image
Do some work with the image.
The question is: how do I delete the cached image?
|
Delete cache image in phonegap
|
Sorry if this is naive/obvious, but just have a facade-type class which does:
if(Cache["Products"] == null)
{
Cache.Insert("Products", Products(), null, DateTime.Now.AddMinutes(15), TimeSpan.Zero);
}
return Cache["Products"];
There is also a CacheItemRemovedCallback delegate which you could use to repopulate an expired cache.
Also, use the cache object rather than static objects. It is apparently more efficient (Asp.net - Caching vs Static Variable for storing a Dictionary) and you get all the cache-management methods (sliding expiration and so on).
EDIT
If there is a concern about update times then consider two cache objects plus a controller e.g.
Active Cache
Backup Cache - this is the one that will be updated
Cache controller (another cache object?) this will indicate which object is active
So the process to update will be
Update backup cache
Completes. Check is valid
Backup becomes active and vice versa. The controller now flags the backup cache as active.
There needs to be a method which will fire when the products cache object expires. I would probably use the CacheItemRemovedCallback delegate to initiate the cache repopulation, or do an async call in the facade-type class - you wouldn't want it blocking the current thread.
I'm sure there are many other variants of this
EDIT 2
Actually thinking about this I would make the controller class something like this
public class CacheController
{
public StateEnum Cache1State {get;set;}
public StateEnum Cache2State {get;set;}
public bool IsUpdating {get;set;}
}
The state would be active, backup, updating and perhaps inactive and error. You would set the IsUpdating flag when the update is occurring and then back to false once complete to stop multiple threads trying to update at once - i.e. a race condition. The class is just a general principle and could/should be amended as required
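The active/backup idea can be sketched as follows (Python, illustrative; the class and method names are mine, not an ASP.NET API):

```python
# Double-buffered cache: readers always hit the active copy while the
# backup is rebuilt, then the roles swap, so reads never block on refresh.
class DoubleBufferedCache:
    def __init__(self, loader):
        self.loader = loader
        self.active = loader()      # readers use this
        self.backup = None
        self.is_updating = False

    def get(self):
        return self.active          # never waits for a refresh

    def refresh(self):
        if self.is_updating:        # crude guard against concurrent updates
            return
        self.is_updating = True
        try:
            self.backup = self.loader()                          # rebuild aside
            self.active, self.backup = self.backup, self.active  # swap roles
        finally:
            self.is_updating = False

versions = iter([["p1"], ["p1", "p2"]])
cache = DoubleBufferedCache(lambda: next(versions))
before = cache.get()      # old product list, served during the rebuild
cache.refresh()
after = cache.get()       # new product list after the swap
```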
|
I have a list of products that I store in the ASP.NET cache, but I have a problem refreshing it. As per our requirements I want to refresh the cache every 15 minutes, but I want to know: if a user asks for the product list while the cache is being refreshed, will they get an error, get the old list, or have to wait until the cache is refreshed?
the sample code is below
public class Product
{
public int Id{get;set;}
public string Name{get;set;}
}
We have a function in the BLL which gives us the list of products:
public List<Product> Products()
{
//////some code
}
Cache.Insert("Products", Products(), null, DateTime.Now.AddMinutes(15), TimeSpan.Zero);
I want to add one more situation here: let's say I use a static object instead of the cache object. What will happen then, and which approach is best if we are on a standalone server and not on a cluster?
|
manage cache objects in asp.net
|
But Redis have only Key/Value methods the Key/Value must be String format
Redis actually has very good support for storing values as lists, if that's what you want to do. Do that, if you want to do any sort of list operations on the value.
If you just want to store the list and retrieve it as a whole, then you want to just serialize it into a string prior to storing in Redis. In that case, encode the list as a JSON string (or any other serializing format) and store that in Redis. Then just GET it and deserialize when you want it back.
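A sketch of the serialize-then-SET approach (Python, with a plain dict standing in for the Redis connection so the example is self-contained; with a real client you would use the commented redis.set/redis.get calls instead):

```python
# Encode the list as JSON before storing, decode after retrieving.
import json

store = {}  # stands in for a Redis connection

def cache_list(key, values):
    store[key] = json.dumps(values)        # redis.set(key, json.dumps(values))

def fetch_list(key):
    raw = store.get(key)                   # redis.get(key)
    return None if raw is None else json.loads(raw)

cache_list("user:1:items", [1, 2, 3])
items = fetch_list("user:1:items")         # back to a real list
```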
|
I am trying to store ArrayList values coming from the DB in Redis. But Redis only has key/value methods, and the key/value must be in String format. How can I store the key as a String and the value as an ArrayList?
|
How to store ArrayList in Redis caching?
|
It is actually a trade-off. If myPseudoCache is going to contain only 'Users', you could go with the second approach - i.e. myPseudoCache.Put("Users_" + userId, singleUserEntity);
If there are many disparate items (say Groups, Roles), especially with different access frequencies, an exploded cache structure may not be optimal. In this case, it makes sense to follow a tree-like structure or a dictionary within a dictionary, i.e. myPseudoCache.Put("Users", myDictionaryWithGuidsAsKeys);
For example, if the number of roles is small, a role lookup need not suffer because of an extraordinarily high number of users.
|
When storing a dictionary in the cache for fast lookup, I can think of two options:
either storing the whole dictionary as a single cache item, or prefixing cache keys with the identifiers.
Example:
myPseudoCache.Put("Users", myDictionaryWithGuidsAsKeys);
vs
myPseudoCache.Put("Users_" + userId, singleUserEntity);
The environment is C# with HttpRuntime.Cache in an ASP.NET application, if that is of any interest, but I am also interested in general thoughts (maintainability aspects, performance, etc.).
I am trying to make a reasonable decision here, but really am not sure about concrete advantages and disadvantages. Maybe someone with more experience could give me some advice on what to consider? Thanks a lot!
|
Caching - IDictionary vs top level key prefixes
|
If you are currently just using pickle, might I recommend cPickle which is purported to be up to 1000 times faster.
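A round-trip sketch (Python; note that cPickle is a Python 2 module, and in Python 3 the C implementation is used automatically by pickle itself):

```python
# Serialize with the highest protocol for the fastest binary round-trip.
import pickle

rows = [{"id": i, "name": f"item{i}"} for i in range(3)]
blob = pickle.dumps(rows, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)   # equal to rows, but a fresh object
```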
In that case, you might consider Marshall as it is much faster, but, as the docs say, not secure against erroneous or malicious data.
– Tadgh
May 18, 2013 at 20:30
@GillBates: If you use cPickle and still observe these 1-2 second unpicklings, it seems that those querysets must be quite huge. How big are they? Do you really need to cache all that data?
– liori
May 18, 2013 at 20:51
@Tadgh According to the docs on Marshall, atleast in python 3.8, that is not true. Taken directly from the docs ...use the pickle module instead – the performance is comparable...
– Taylor
Dec 6, 2019 at 16:54
|
While optimizing a website, I found that pickling of QuerySets eventually becomes the bottleneck in caching, and no matter how clever your code is, unpickling a relatively large QuerySet in 1-2 seconds will kill all your effort.
Has anybody encountered this?
|
Django caching - Pickle is slow
|
If you're talking about https://pear.php.net/package/Cache_Lite then I could tell you a story. We used it once, but it proved to be unreliable for websites with lots of requests.
We then switched to Zend_Cache (ZF1) in combination with memcached. It can be used as a standalone component.
However, you have to tune it a bit in order to use tags. There are a few implementations out there to get the job done: https://github.com/bigwhoop/taggable-zend-memcached-backend
|
I'm using Cache_Lite for HTML and array caching in my project. I found that Cache_Lite may lead to high system IO load, maybe because its performance is not good.
I'm asking: is there any stable PHP HTML/page cache to use?
I already have APC installed for opcode cache, Memcached installed for common data/array cache.
|
PHP Cache_Lite Alternative
|
I've never had to do this, but a quick search shows a few options, including automating the server restart:
nodemon
node-supervisor
You may also be able to do something similarly creative to this:
delete require.cache['/home/shimin/test2.js']
You could almost definitely clear the version RequireJS has in its cache, forcing it to reload, although I suspect Node would just serve up the old file again.
Finally, maybe take a look at hot-reloading, which seems to work around the caching without needing to restart the server (example).
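For comparison, Python has the same kind of module cache, and the analogous eviction trick looks like this (illustrative; "json" stands in for one of your own modules):

```python
# Like `delete require.cache[path]` in Node: evict the cached module so
# the next import re-reads it from disk.
import sys
import importlib

def reload_fresh(module_name):
    sys.modules.pop(module_name, None)
    return importlib.import_module(module_name)

mod = reload_fresh("json")
```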
Thanks for hint delete require.cache[] I needed to have always fresh content from some postcss required file during webpack watch or webpack dev server.
– mikep
Oct 26, 2020 at 8:24
|
I am using something like Airbnb's rendr, i.e. my own implementation of sharing Backbone code between client and server, to build full HTML on the server using the same Backbone models, views, collections and templates I am using on the client. My files/modules are defined as RequireJS modules; that way I can share them easily with the client.
In development mode, I want RequireJS to refetch/reload any modules from disk when I refresh my browser (without restarting the server), so that my server rendering uses the newest templates and JavaScript to serve me the newest HTML.
When using RequireJS on the server with Node.js, the trick of appending bust parameters to urlArgs like the following doesn't work, i.e. the server doesn't reload/refetch any modules from disk:
urlArgs: "bust=v2"
I wonder if reloading/refetching RequireJS modules from disk without restarting the server is possible in Node. Specifically, that would be very useful for the require-text plugin for templates. Additionally, it would be nice to apply reloading only to a restricted set of modules.
|
Prevent RequireJS from Caching Required Scripts on Nodejs
|
The best/easiest tool I could find is perf. For example, the following command:
perf stat -e LLC-load-misses,LLC-store-misses /bin/ls
Will output the number of last level cache misses for running ls.
see perf --help
Other good tools are VTune and cachegrind, which was mentioned before.
For a programmatic approach you can also check the PAPI API.
|
I'm searching for C++ code / functions that allow me to monitor read/write operations on the CPU cache(s) of multicore CPUs, in order to detect performance bottlenecks caused by multiple cores competing for the same memory locations.
Anything that comes even close is appreciated. Can anyone help? Thanks in advance.
Thank you for all the answers so far. After going through them, I think I should be a little more specific about the actual problem:
The desired outcome is software for Windows systems written in Visual C++.
The software should work with all CPUs, not just those of specific manufacturers.
Tools are handy when it comes to double-checking results, but as long as there's no fully documented source code available I won't get much from them.
At this point it would be of great help to get some VC++ code snippets showing, for example, how to detect the kind of CPU, the kind of cache it has, and when it reads/writes data from/to which addresses in that cache.
It doesn't have to be overly complex; it just has to work in a simple way.
|
Scanning for CPU cache operations in multicore systems with C++
|
That is not a problem with the cache, but a problem with LyX, or rather a feature that has not been implemented. At the moment, LyX child documents are treated as independent files, meaning that they are compiled in separate R sessions, so variables cannot be shared across documents. You may file a feature request with the LyX developers. The key point is that when a LyX document contains the knitr or Sweave module and is included as a child of another document, it should not be compiled separately (that job should be handed over to knitr or Sweave).
Anyway, personally I do not find this a big problem -- I always put everything in one LyX document.
Thanks. Very clear, and along the lines of what I expected. There are a number of obvious workarounds, but it would be nice in a large knitr-lyx document with many R graphics to limit test compilations to one section. Thanks for the prompt response.
– user2174495
Mar 17, 2013 at 15:01
|
Lyx file F
knitr chunk caches a value for x
then text A contains several Sexpr{} calls, including Sexpr{x}
Compiling F to pdf works fine
Now I move text A into a separate LyX file C, make C a child file with F the master file
Rewrite F -- should produce "text A" twice
knitr chunk caches a value for x
text A
\include(C)
Everything works fine; compilation produces "text A" twice, EXCEPT that \Sexpr{x} in the included portion cannot find the cached value. I've reviewed knitr and knitr/LyX documentation and numerous help sites, but cannot figure out how caching works (or fails to work) in this situation.
|
LyX child document that contains a knitr Sexpr{} cannot find cached value
|
I developed my own extension to fix this.
In short, the GET parameters are used in the cache ID. So in order to bypass this, I created an extension that changes the following:
Two functions where changed
protected function _getQueryParams()
AND
public function getPageIdWithoutApp(Enterprise_PageCache_Model_Processor $processor)
/app/code/core/Enterprise/PageCache/Model/Processor/Default.php
One function was changed
public function getPageIdWithoutApp(Enterprise_PageCache_Model_Processor $processor)
Once changed, it no longer created the cache ID with my specified tracking parameters.
example:
public function getPageIdWithoutApp(Enterprise_PageCache_Model_Processor $processor)
{
    $queryParams = $_GET;
    ksort($queryParams);
    /**
     * unset known tracking codes
     */
    unset($queryParams["trk_msg"]);
    unset($queryParams["trk_contact"]);
    unset($queryParams["utm_source"]);
    unset($queryParams["utm_medium"]);
    unset($queryParams["utm_term"]);
    unset($queryParams["utm_campaign"]);
    unset($queryParams["utm_content"]);
    /** End Edit */
    $queryParamsHash = md5(serialize($queryParams));
    return $processor->getRequestId() . '_' . $queryParamsHash;
}
|
It's a simple question with no answer found in search (Google/Bing/Stack Overflow). The answer, of course, could be complicated.
I've read a couple articles on FPC within Magento, and have yet to really nail down where I need to add or create code so that when certain URL parameters are sent it serves up cached version of the page and not try and re-cache with the URL parameters.
http://www.kingletas.com/2012/09/how-does-magento-full-page-cache-works.html
So for example, when you go to http://www.example.com/shoes it loads the correct cached version. however, with google analytics and any other type of 3rd party reporting, especially with unique identifiers, it will reload the page as if it wasn't cached. So http://www.example.com/shoes?utm_key=A1537BD94EF07 would create a new cached version of that page and so on.
I would like to be able to exclude certain URL parameters and not all. Mainly any parameter I am using for tracking of customers.
As far as code, I have not come up with anything, due to the fact of the complexity of FPC and not having a dev site currently setup to test on.
Any leads as to where I can add this exception would be helpful, thanks!
EDIT: I would like to add that I am working with the Enterprise Edition. And using the Redis for cache.
|
How do I get Magento to serve up the cached version of the page when I have unique URL parameters?
|
You are right: SortedMap sorts entries according to its keys on insert. The best-known implementation of SortedMap is TreeMap, which stores entries as a balanced binary tree.
From their javadoc:
A {@link Map} that further provides a total ordering on its
keys. The map is ordered according to the {@linkplain Comparable
natural ordering} of its keys, or by a {@link Comparator} typically
provided at sorted map creation time. This order is reflected when
iterating over the sorted map's collection views (returned by the
entrySet, keySet and values methods).
Several additional operations are provided to take advantage of the
ordering.
Even though entries and keys are returned in sorted ascending order from a SortedMap, it also has the methods firstKey() and lastKey(), which might be helpful as well.
Particularly for LRU: the most common approach in Java is to use LinkedHashMap, which has the method removeEldestEntry(Map.Entry). By default it does nothing, but you can easily extend the class and implement that method. You may find more information here.
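As a sketch (the class name and capacity below are made up for illustration), an access-ordered LinkedHashMap with removeEldestEntry overridden gives a working LRU cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: an access-order LinkedHashMap evicts the
// least-recently-used entry once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder=true makes iteration order follow access order (LRU first)
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);       // touch key 1, so key 2 becomes the eldest
        cache.put(3, "c");  // exceeds capacity, evicts key 2
        System.out.println(cache.containsKey(2)); // false
        System.out.println(cache.containsKey(1)); // true
    }
}
```

For MRU you would instead evict the most-recently-used entry yourself before inserting, since removeEldestEntry only ever offers the eldest.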
|
I am working on a simple data structure that will implement a cache eviction policy. The two possible scenarios I want to implement are LRU and MRU
I am looking for a map-like data structure where keys are possibly time (or maybe just auto incremented integer) to know which cache block is most recently used or least recently used. And values are the IDs of the blocks.
Is there an existent Data structure that sorts the data by the keys on insert, and retrieves the value of a certain key in O(1) time?
For instance Java's HashMap has the constant time lookup, but I will need to get all keys, sort them and pick the last or the first depending on the algorithm I am implementing. Is SortedMap what I should go for? Do you suggest any other data structures that work well with LRU and MRU implementations?
Thanks
|
Sorted Keys data structure for LRU and MRU cache eviction policies
|
You named your cache configuration 'long' inside your bootstrap.php, but you are using the configuration 'longterm' inside your action.
Also, if you have debugging enabled (e.g. debug set to 1 or 2 in your core.php), the cache duration may be set to 10 seconds automatically. I'm not sure whether this applies to your own cache definitions as well, though.
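A minimal sketch against the question's own code (the config name defined in bootstrap.php is 'long', so the read/write calls must match it):

```php
// The config defined in bootstrap.php is named 'long',
// so read and write with 'long', not 'longterm'.
$product_general = Cache::read('product_general_query', 'long');
if ($product_general === false) {
    $product_general = $this->Product->query('SELECT DISTINCT * FROM products');
    Cache::write('product_general_query', $product_general, 'long');
}
```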
|
I have a site developed in cakephp. I want to cache a query. I have read the documentation and I have in my bootstrap.php this:
Cache::config('default', array('engine' => 'File'));
Cache::config('short', array(
    'engine' => 'File',
    'duration' => '+1 hours',
    'path' => CACHE,
    'prefix' => 'cake_short_'
));
// long
Cache::config('long', array(
    'engine' => 'File',
    'duration' => '+1 week',
    'probability' => 100,
    'path' => CACHE . 'long' . DS,
));
My controller to store the query is this:
public function test_view () {
    $product_general = Cache::read('product_general_query', 'longterm');
    if (!$product_general) {
        echo("test");
        $product_general = $this->Product->query('SELECT DISTINCT * FROM products');
        Cache::write('product_general_query', $product_general, 'longterm');
    }
    $this->set('product_general', $product_general);
}
Every time I enter the page it prints "test", because it doesn't find the query in the cache. Where is the problem? Have I missed something?
|
Cache query cakephp
|
Regardless of 3G or WiFi, you can use NSURLRequestReturnCacheDataElseLoad with your NSURLRequest, which returns the cached page if available and otherwise loads it. You could add a check for your 3G status.
Here is the usage of NSURLRequestReturnCacheDataElseLoad:
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:urlString] cachePolicy:NSURLRequestReturnCacheDataElseLoad timeoutInterval: 10.0];
Then load the data returned from the web page using loadHTMLString in your UIWebView.
answered Feb 26, 2013 at 15:01 (edited Feb 26, 2013 at 15:09) by nsgulliver
Will this follow links through as well? Or just the initial page?
– Chris Byatt
Feb 26, 2013 at 15:12
I think if you have loaded your pages once in the UIWebView and then try to load them again, it will show the cached pages, reducing the request time for loading the page from scratch. You could follow this question; there could be more elaboration on the solution. You could also save the page in your docs to check for the connection.
– nsgulliver
Feb 26, 2013 at 15:24
|
|
I'm trying to find a way to get a UIWebView to cache an entire web page while one wifi and view it from the cache while connected to 3G, but then reload and recache while on WiFi again.
Are the any APIs or anything to do this?
Cheers
|
iOS Cache Content from a UIWebView
|
What your VCL is currently doing is removing Cookie from the request header and caching all requests. This causes the exact behavior you describe: the first page load is cached and all subsequent users get the cached content - no matter who makes the request. Generally you only want to cache content for users that have not logged in.
You cannot do authorization or access control using Varnish. This needs to be handled by the backend. In order to do this you need to identify the cookie that contains relevant session information and keep the cookie if a session is defined, and drop the cookie in other cases.
For example:
sub vcl_recv {
    if (req.http.Cookie) {
        # Care only about SESSION_COOKIE
        if (req.http.Cookie !~ "SESSION_COOKIE") {
            remove req.http.Cookie;
        }
    }
}
That way all requests that contain a "SESSION_COOKIE" cookie will be passed through to the backend while users that have not logged in receive a cached copy from Varnish.
If you wish to use Varnish for caching with logged in users as well, I'd suggest you take a look at Varnish's ESI features.
answered Jan 16, 2013 at 11:43 by Ketola
Can this be done without any modifications on the web pages itself i.e. esi tags, and just do it through the VCL?
– Frankline
Jan 18, 2013 at 9:05
For ESI you will need to change your pages considerably. As far as cookies are concerned, if your web pages only set the cookie for users that have logged in and clears it when they log out (or the session expires), you can use the VCL in my answer (modified to your needs) to remove redundant cookies.
– Ketola
Jan 18, 2013 at 12:09
|
|
I installed varnish and everything works OK.
However, I have a need to cache logged-in users. This is what I have in my VCL:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
sub vcl_recv {
    unset req.http.Cookie;
    if (req.http.Authorization || req.http.Cookie) {
        return (lookup);
    }
    return (lookup);
}
sub vcl_fetch {
    unset beresp.http.Set-Cookie;
    set beresp.ttl = 24h;
    return (deliver);
}
The above works, but users can view other users' data. E.g. say I'm logged in as Sam and access page A. When another user, say Angie, logs in and opens page A, she sees the same content as Sam.
Is there a way I can restrict pages to logged-in users who actually are authorized to view that page?
My request header is as follows:
Request headers:
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-US,en;q=0.5
Authorization Basic YWRtaW46YWRtaW4=
Connection keep-alive
Cookie tree-s="eJzTyCkw5NLIKTDiClZ3hANXW3WuAmOuRKCECUjWFEnWIyIdJGvGlQgEegAD3hAj"; __ac="0zVm7PLtpaeQUXtm9BeYrC5%2BzobmBLPQkjRtxjyRGHs1MGY1MTgzZGFkbWluIQ%3D%3D"; 'wc.cookiecredentials'="0241d135445f7ca0b1cb7aced45da4e750f5414dadmin!"
Can I use the Authorization entry on the request header to enforce this restriction?
|
Varnish - How to cache logged-in users
|
It seems to me that you are hardcoding the array in <script> tags. If the array is really big, the browser has to load more bytes.
Consider using AJAX in conjunction with JSON. Use, for example, jQuery to load the data from another script, e.g. api.php?req=getBigArray, with jQuery's "success" callback to run logic once the array is loaded. This means two HTTP requests will be made, but your page will load at once.
Serverside:
<?php // api.php
switch ($_GET['req']) {
    case "getBigArray":
        $arrayFromDatabase = array( /* Load from db ... */ );
        echo json_encode($arrayFromDatabase);
        break;
}
Client:
$(document).ready(function(){
    $.getJSON('api.php?req=getBigArray', function(data) {
        console.log(data); // Use data in some way.
    });
});
This also decouples server-side logic from the front end.
You can also look at memcache/APC if you want to cache results on the backend. They have really simple APIs, but require extra software installed on the server side.
|
I have a PHP-generated JavaScript page that is creating an array from a database and is absolutely murdering my page load time. The page has a .php extension and currently uses a header to mark its content type as application/javascript.
I found a possible solution online, but it doesn't seem to be doing much to speed up my page's load time. The header code of my file right now is this:
header("Cache-Control: must-revalidate");
$offset = 60 * 60 * 24 * 3;
$ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
header($ExpStr);
header("Content-type: application/javascript");
Is there anything special I need to be doing to cache the file so I don't have it constantly trying to load these database calls? I'm using IIS 7.5 and PHP 5.3.13.
|
Caching a php generated javascript page
|
Calling mysqli_stmt::close will:
Close a prepared statement. mysqli_stmt_close() also deallocates the statement handle.
Therefore you will not be able to use the cached version of the statement for further executions.
I wouldn't worry about freeing resources or closing statements, since PHP will do it for you at the end of the script anyway.
Also if you are working with loops (as you described) take a look at mysqli_stmt::reset which will reset the prepared statement to its original state (after the prepare call).
|
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Are prepared statements cached server-side across multiple page loads with PHP?
I'm working on a new project and using parameterized queries for the first time (PHP with a MySQL DB). I read that they parameterized queries are cached, but I'm wondering how long they are cached for. For example, let's say I have a function 'getAllUsers()' that gets a list of all active user ID's from the user table and for each ID, a User object is created and a call to function 'getUser($user)' is made to set the other properties of the object. The 'getUser()' function has it's own prepared query with a stmt->close() at the end of the function.
If I do it this way, does my parameterized query in 'getUser()' take advantage of caching at all or is the query destroyed from cache after each stmt->close()?
Note: I also use the getUser() function if a page only requires data for a single user object so I wanted to do it this way to ensure that if the user table changes I only ever need to update one query.
Is this the right way of doing something like this or is there a better way?
Update: Interesting, just saw this on php.net's manual for prepared statements (http://php.net/manual/en/mysqli.quickstart.prepared-statements.php)
Using a prepared statement is not always the most efficient way of executing a statement. A prepared statement executed only once causes more client-server round-trips than a non-prepared statement.
So I guess the main benefit for parameterized queries is to protect against SQL injection and not necessarily to speed things up unless it's a query that will repeated at one time.
|
MySQL Parameterized Queries - Cache Duration [duplicate]
|
Cache-Control was introduced in HTTP 1.1 to replace Expires. If both headers are present, Cache-Control is preferred over Expires:
If a response includes both an Expires header and a max-age
directive, the max-age directive overrides the Expires header, even
if the Expires header is more restrictive. This rule allows an origin
server to provide, for a given response, a longer expiration time to
an HTTP/1.1 (or later) cache than to an HTTP/1.0 cache. This might be
useful if certain HTTP/1.0 caches improperly calculate ages or
expiration times, perhaps due to desynchronized clocks.
But there are still clients out there that only speak HTTP 1.0. So for HTTP 1.0 requests/responses, you should still use Expires.
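As a sketch (plain Python standard library; the helper name cache_headers is made up for illustration), serving both header styles side by side could look like:

```python
import time
from email.utils import formatdate

def cache_headers(max_age):
    """Build Cache-Control for HTTP/1.1 clients plus an equivalent
    Expires header as a fallback for HTTP/1.0-only clients."""
    return {
        "Cache-Control": "max-age=%d" % max_age,
        # usegmt=True yields the RFC 1123 date format HTTP requires,
        # e.g. "Sun, 25 Nov 2012 16:46:43 GMT"
        "Expires": formatdate(time.time() + max_age, usegmt=True),
    }

headers = cache_headers(1209600)  # 14 days, as in the Akamai example
print(headers["Cache-Control"])   # max-age=1209600
```

HTTP/1.1 caches will prefer max-age; an old HTTP/1.0 cache that ignores Cache-Control still sees the absolute Expires date.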
|
I see big player (i.e. akamai) started to drop the Expires header all together and only use Cache-Control, e.g.
curl -I https://fbcdn-sphotos-e-a.akamaihd.net/hphotos-ak-snc7/395029_379645875452936_1719075242_n.jpg
HTTP/1.1 200 OK
Last-Modified: Fri, 01 Jan 2010 00:00:00 GMT
Date: Sun, 25 Nov 2012 16:46:43 GMT
Connection: keep-alive
Cache-Control: max-age=1209600
So still any reason to keep using Expires?
|
Is Expires header not needed now?
|
You are probably using the python-twitter project; it creates a temporary directory with the name python.cache_<username>, with that exact structure (based on a md5 hash).
On Raspberry you are running your code as root usually, so that fits.
Another python library that uses the exact same structure is python-lastfm; the code was obviously copied between projects. Both projects have sprouted a few forks, so the method may be more widespread still.
|
Running Python on a Raspberry Pi, I found that my /tmp folder was becoming full over time. On investigation, I found that it was becoming full of files of the form /tmp/python.cache_root/<1>/<2>/, where <1> and <2> are octal digits. These files were created when I ran a scheduled (self-written) Python script.
Googling "Python Caching" and related terms only turned up results from frameworks or products like Maya or Django, which were no good to me. Apologies if I've missed an obvious result!
Is this a sign of sloppy coding (e.g. unclosed resources), or simply something that Python does as a by-product of running scripts? Is there an accepted way to deal with this? Running rm -rf /tmp/* "solves" the problem, as does rebooting the Pi, but obviously these aren't desirable solutions.
EDIT: It's been suggested that the python-twitter library might be to blame, as per this bug
|
Multiple files in /tmp/python.cache_root
|
I'm not familiar with Amplify, but its API says
Requests made through amplify.request will always be resolved asynchronously
So, you'll have to pass a callback into getDropdownJobs, to be executed after jobsString is filled, and any code that relies on the value goes in it.
Alternatively you could use Amplify's pub/sub system to subscribe an event for when jobsString is filled and publish to it during getDropdownJobs.
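A sketch of the callback shape (the store and request functions below are stubs standing in for amplify.store / amplify.request, not amplify's real API):

```javascript
// Stand-ins for amplify.store / amplify.request, so the shape is runnable here.
var store = {};
function request(onSuccess) {
    // Pretend this is the async AJAX call; here it completes immediately.
    onSuccess({ jobs: [{ JobNo_: 1, Description: "Fit" }] });
}

function getDropdownJobs(done) {
    var jobsString = store["JobsList"];
    if (jobsString !== undefined) {
        done(jobsString);         // cache hit: call back right away
        return;
    }
    request(function (data) {     // cache miss: build, store, then call back
        var built = '<option value=""></option>';
        data.jobs.forEach(function (job) {
            built += '<option value="' + job.JobNo_ + '">' + job.JobNo_ + '</option>';
        });
        store["JobsList"] = built;
        done(built);
    });
}

getDropdownJobs(function (jobsString) {
    console.log(jobsString);  // use the value only inside the callback
});
```

The point is that nothing outside the callback ever reads jobsString; any code that needs the value moves into (or is called from) the callback.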
|
In one of my files, I make a call as follows:
var jobsString = getDropdownJobs();
It calls this function :
function getDropdownJobs() {
    var jobsString = amplify.store("JobsList");
    if (typeof jobsString === 'undefined') {
        // Amplify sends a request for getJobs; if it exists in cache, it will return that value.
        // If it does not exist in cache, it will make the AJAX request.
        amplify.request("getJobs", function (data) {
            // Leave a blank option for the default of no selection.
            var jobsString = '<option value=""></option>';
            // Append each of the options to the jobsString.
            $.each(data.jobs, function () {
                jobsString += "<option " + "value=" + this.JobNo_ + ">" + this.JobNo_ + " : " + this.Description + this.Description2 + "</option>";
            });
            // Store the jobsString to be used later.
            amplify.store("JobsList", jobsString);
        });
    }
    return jobsString;
}
Where the amplify definition of "GetJobs" is :
amplify.request.define("getJobs", "ajax", {
    url: "../api/Job/Jobs",
    dataType: "json",
    type: "GET",
    cache: "persist"
});
Whenever it returns, it's undefined. I put "async: false" in the AJAX definition and it didn't change anything.
How can I make sure that the value is there before returning?
|
How to ensure that my amplify request is completed before returning a value?
|
You can also use the GigaSpaces XAP in-memory data grid for caching, and even for hosting your web application. You can choose just the caching option, or combine the power of the two and gain single management of your environment, among other things.
Unlike the key-value pair approach you suggested, with GigaSpaces XAP you'll be able to run complex queries such as SQL, object-based templates and much more. For your caching scenario you should check out the local cache related features in particular.
Local Cache
Web Container
Disclaimer, I am a developer in GigaSpaces.
Eitan
answered Nov 11, 2012 at 9:12 by Eitan
|
|
What is a good tool for applying a layer of caching between a webserver and an application server.
Basic Requirements:
The application server needs a way to remove items from the cache and put items in the cache with an expiration date.
The webserver needs a way to pull items out of the cache in a very light-weight, fast manner without requiring thread allocation on the application server.
It does not neccessarily need to be a distributed cache (accessible from multiple machines), but it wouldn't hurt.
Strategies I have considered:
Static file caching. A request comes in and gets hashed; if a file exists we serve it, if not we route the request to the app server. Is high I/O a problem, or are there file-locking problems due to concurrency? Is it accurate that the file system is actually very fast due to kernel-level caching in memory?
Using a key-value DB like mongodb, or redis. This would store the finished HTML/JSON fragments in db. The webserver would be equipped to read from the DB and route to the app server if needed. The app server would be equipped to insert/remove from the DB.
A memory cache like memcached or Varnish (don't know much about Varnish). My only concern with memcached is that I'm going to want to cache 3 - 10 gigabytes of data at any given time, which is more than I can safely allocate in memory. Does memcached have a method to spill to the filesystem?
Any thoughts on some techniques and pitfalls when trying this type of caching layer?
|
Caching strategy to reduce load on web application server
|
Check this post: CACHING TUTORIAL.
Use <META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE"> tag in your <head/> section.
Use headers in your PHP code:
<?php
Header("Cache-Control: must-revalidate");
$offset = 60 * 60 * 24 * 3;
$ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
Header($ExpStr);
?>
|
I've heard of stylesheets being cached, but are regular pages (like the one we're on) cached? I noticed on the more recent websites Google has made, they are not even using stylesheets, just a <style> tag with a single, compressed line of CSS. This led me to believe that the entire page (including the <style>) is cached, and not just stylesheets. Am I correct on this? Why would Google not use stylesheets and want their CSS to be cached when their sites are viewed billions of times a month.
From my understanding, everything that appears in the "Network Panel" of Chrome's inspect element is cached?
I researched "PHP Caching" but that seems to be for include's, so I'm guessing pages are already cached automatically.
|
Are entire websites automatically cached (Google using <style>)?
|
There are a few options for this, some more convoluted than others.
The most cross-browser way is to make sure you set your cache headers to long cache times, and keep the URL consistent (similar to @Petah's suggestion).
Some browsers support the HTML5 File API. You could use JavaScript to store images here if it's supported.
You could also use the HTML5 AppCache (also with limited support), but this means you have to be careful about how you structure your application - because the HTML page will be cached forever until the cache manifest changes.
You can also serialise your image as a Data URL (fairly well supported) and store that string locally somehow (localstorage, cookies), but since cookies have small size-limits, and you wouldn't want to store 6MB in the HTML 'cos the page would take forever to load, this is probably not a great option.
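For the AppCache route mentioned above, a minimal manifest sketch (the filename map.appcache and the image path are assumptions; the page references it via <html manifest="map.appcache">):

```
CACHE MANIFEST
# v1 - bump this comment to force clients to re-download everything
images/map.jpg
```

Any change to the manifest file (even this comment) invalidates the whole cache, which is exactly the behavior to be careful about.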
|
On my main page there is a big image (~6 MB). It's a kind of map.
I would like to store that image in the user's browser even if they close the browser or restart their PC. Is that possible somehow?
That image is used in a CSS file, if it matters.
My hosting has small bandwidth, so I would like to store that image as long as I can in users' browsers.
Thanks for any tips.
|
Is it possible to store image on user's browser
|
You have to store the Cache on the server instead of the local machine:
[OutputCache(Location = OutputCacheLocation.Server, Duration = 120)]
Here's the dependency:
System.Web.UI
|
I have an actionResult that I've added caching to.
[OutputCache(Duration = 120)]
When I change some elements on the page that is cached, I want to remove the cached output. Here is what I did:
public void RemoveCachingForProfile(string guid)
{
    HttpResponse.RemoveOutputCacheItem("/Profile/" + guid);
}
When I change something on my page, I go through the RemoveCachingForProfile function. Then, when I come back to my Profile page, it still shows what's in the cache, even though I disabled it.
If I hit F5 it shows the correct output. It seems to be the browser that is caching the page.
|
Disable outputcaching
|
The FindView and the FindPartialView methods are internally called by the ASP.NET MVC framework with useCache=true first, then useCache=false if nothing is found. You could try doing the same.
Confusing me further, if we simply ask the ViewEngine.Engines
collection to find the view, we're not given a useCache parameter:
That's because this method does what I explained previously.
|
I need to render a partial view to string, so I can render a Razor syntax partial view into an Asp.net view. Part of the process involves getting the view engine to find the view:
var engine =
    ViewEngines.Engines
        .OfType<RazorViewEngine>()
        .SingleOrDefault();
if (engine == null) throw new Exception("no razor viewengine");
// useCache is the last parameter in the invocation below
var viewResult = engine
    .FindPartialView(htmlHelper.ViewContext, viewName, false);
If I set the useCache parameter to true (i.e. I don't want the virtual path provider to search for it every time I ask for it), the view is no longer found. I would have expected that if it's not found via cache, we'd fall back to non-cached method.
Are my expectations wrong? Is this something to do with mixing ViewEngines?
Confusing me further, if we simply ask the ViewEngine.Engines collection to find the view, we're not given a useCache parameter:
var viewResult =
    ViewEngines
        .Engines
        .FindPartialView(htmlHelper.ViewContext, viewName);
I think I'll go with this FTTB, but it begs the question... why the discrepancy?
|
RazorViewEngine.FindPartialView useCache. What does it do?
|
sure, from http://wiki.nginx.org/HttpCoreModule#Variables
$sent_http_HEADER
The value of the HTTP response header HEADER when converted to lowercase and
with 'dashes' converted to 'underscores', e.g. $sent_http_cache_control,
$sent_http_content_type...;
so you could match on $sent_http_<header> in an if statement.
There's a gotcha, though, since http://nginx.org/en/docs/http/ngx_http_headers_module.html#expires doesn't list if as an allowed context for the expires directive.
You can work around that by setting a variable in the if block, and then referring to it later, like so:
set $expires_time 1m;
if ($sent_http_response ~* "200") {
    set $expires_time 5m;
}
expires $expires_time;
|
When using nginx fastcgi_cache, I cache HTTP 200 responses longer than I do any other HTTP code. I want to be able to conditionally set the expires header based on this code.
For example:
fastcgi_cache_valid 200 302 5m;
fastcgi_cache_valid any 1m;
if( $HTTP_CODE = 200 ) {
expires 5m;
}
else {
expires 1m;
}
Is something like the above possible (inside a location container)?
|
nginx: add conditional expires header to fastcgi_cache response
|
Basically, all you have to do is return the appropriate cache headers when you render your Flask views.
For instance, here is a simple view which renders a robots.txt file and specifies that it should be cached for 30 days:
from flask import Flask, make_response, render_template

app = Flask(__name__)

@app.route('/robots.txt')
def robots():
    response = make_response(render_template('robots.txt'))
    response.headers['Cache-Control'] = 'max-age=%d' % (60 * 60 * 24 * 30)
    return response
answered Aug 16, 2012 at 1:51 by rdegges
Warning: you need to use parentheses, 'max-age=%d' % (60 * 60 * 24 * 30), if you don't want "max-age=60" repeated 60 * 24 * 30 times, which gives a really big header! ;)
– Mickaël
Jan 5, 2015 at 15:41
|
|
I'm writing a simple web application with Flask and will run it using Gunicorn. I'd like to know how to cache the pages returned by this application using Varnish.
I've been able to use Varnish with a Django application, also running on Gunicorn, by following this article. The instructions included using one extra application and some middleware, but I'm not sure how to do it with Flask.
Thanks for your suggestions!
|
How to run Flask with Gunicorn and Varnish?
|
Could you get the complete list on the first query, and cache this (presuming it is not too enormous)? Then for any subsequent queries get the data from the cache and query it to filter out just the year you want.
Maybe you could store the data in the cache in the form of
IDictionary<int, IEnumerable<NewsItem>>
(assuming there are more than one NewsItem for a year) where the key is the year so you can just retrieve a single dictionary value pair to get a year's data.
Alternatively cache the data by year, as you are doing in your sample, and implement a mechanism to get data from the cache for each year if it exists. When getting all, you can either just get all from the data store and cache this separately or implement some mechanism to determine what years are missing from the cache and just get these. Personally, I think I would go with caching all and filtering data from the cache so as to not clog up the memory with too much cached data.
A useful TryGetValue pattern to get from cache is something like:
private bool TryGetValue<U>(string key, out U value)
{
    object cachedValue = HttpContext.Current.Cache.Get(key);
    if (cachedValue == null)
    {
        value = default(U);
        return false;
    }
    else
    {
        try
        {
            value = (U)cachedValue;
            return true;
        }
        catch
        {
            value = default(U);
            return false;
        }
    }
}
|
I am retrieving a big list of news items.
I want to utilise caching of this list for obvious reasons (running a query any more than necessary won't do).
I bind this list to a repeater on which I have extended to enable paging (flicking between pages causes page load and so a retrieval of the list again).
The complication is the fact that these news items can also be queried by date from a query string 'year' that is present only when querying the news items by date.
Below is pseudo code for what I have so far (pasting the real code here would take too much time trimming out all of the bits that just add confusion):
if (complete news items list is not cached OR querystring["year"] != null)
{
    Int year = queryString["year"] (could be null)
    Cache.Add(GetNewsItems(year));
}
else
{
    return cached newsItems;
}
The problem is that when the news items page loads (because of a paging control's postback) the querystring's [year] parameter will still be populated and so will re-do the GetNewsItems. Also - even if the home URL (i.e. no query string) is then navigated to - in effect there is still a cached version of the news items so it will not bother to try and retrieve them - but they MAY be there for a particular year and so aren't relevant to an 'all years' kind of search.
Do I add a new cache entry to flag what most recent search was done?
What considerations do I have to make here regarding cache timeout?
I can add a new query string if needs be (preferably not) - would this solve the problem?
|
Best caching strategy in this particular scenario
|
There's no built-in solution, but I recommend giving APC, Redis or Memcache a try (they're all in-memory datastores).
answered Jun 12, 2012 at 8:27 by Samy Dindane
|
|
Is there any built-in possibility (or an external bundle) to cache data in Symfony2?
I don't want to cache the page itself, but data inside the application, using, for example, a simple key -> value store on the file system.
|
How to cache data in symfony2
|
The solution will depend on how often you need to work on the objects as a collection.
Reasons for storing as a collection:
Storing each object individually, if all 250 objects are always populated, takes up more space, as each item in the cache would have an associated CacheItemPolicy. This case is probably unlikely, however.
You would not have the strongly typed extension methods made available by LINQ on collections. (The extension methods are available, but MemoryCache items are exposed as KeyValuePair<string, object>.)
Reasons for storing individually:
You are only or mostly going to be working one a single object at a time.
You want each object to be created and removed from cache based on its own frequency of usage, rather than that of a whole collection.
So, compare your likely usage scenario, and choose accordingly. Chances are, unless you are writing lots of .Where, .Select, etc, calls or have reason to pass around the whole collection, then storing individually is going to be the better choice.
|
I am implementing caching using MemoryCache (.net 4.0) for caching global data which will be used by all users of my website.
My initial approach:
Store a KeyedCollection which would hold a collection of the Customer objects with a key to get a single object. This collection could have up to 250 such objects. Whenever the cache expires, I would rebuild the KeyedCollection and add it to the cache.
New approach
Now, I am thinking: why not store each Customer object directly in the cache with the customerid as the look-up key? Therefore, MemoryCache.Default would have up to 250 such Customer objects versus a single KeyedCollection.
Benefits:
More efficient since I will get the Customer object directly from the cache without having to perform another look-up on the KeyedCollection.
I would add a new Customer object to the cache only when it is requested for the first time. Sort of lazy addition, as opposed to pre-building the entire cache.
Any thoughts on using one versus the other in terms of performance and other factors?
|
.Net System.Runtime.Caching.MemoryCache - Store a single collection or individual custom objects?
|
You can't unload assemblies in .NET; you can, however, shut down an app domain (which is where assemblies are loaded).
|
If I use reflection to retrieve an assembly, it will seek out the assembly and then cache it. I can tell this because, if it fails to find the assembly, it won't find it even after I add it to the correct location. If it finds the assembly and I remove it, it still uses the assembly it found and cached.
I know I'm not the only one to have this issue: Clear .Net Reflection cache
The code I'm using to get the assembly is the following - and I make this call every time (not caching myself):
// Try to get the assembly referenced.
Assembly reflectedAssembly;
try
{
reflectedAssembly = Assembly.Load(assembly);
}
catch (FileNotFoundException)
{
errorMessage = string.Format("Could not find assembly '{0}'", assembly);
result = null;
return false;
}
What I want to know is if there's a way to clear the reflection cache without restarting the process. Any help would be greatly appreciated.
Thanks in advance!
|
How can I clear .NET's reflection cache?
|
I almost think this might be a bug in Rails. Check out this block of code around line 143 of actionpack/lib/action_controller/caching/actions.rb:
body = controller.read_fragment(cache_path.path, @store_options)
unless body
controller.action_has_layout = false unless @cache_layout
yield
controller.action_has_layout = true
body = controller._save_fragment(cache_path.path, @store_options)
end
body = controller.render_to_string(:text => body, :layout => true) unless @cache_layout
controller.response_body = body
It looks like it's correctly rendering the body without a layout in the first unless block, but then it's forcing the template to render with a layout as part of the response body. And if you look at the stack trace, that's the line that leads to the exception.
I manually edited the file to :layout => @cache_layout (which always evaluates to :layout => false since it's guarded by an unless) and the view rendered as expected.
I'm not sure what you could do about this other than temporarily patch that file yourself and open a bug report. I might also be wrong about the behavior of that line, but it certainly looks like the culprit.
|
I have an action on a controller where I'm using action caching. However, I'm using the layout: false flag on that caching call, since my layout has user-dependent information like login status. This works perfectly.
Then I was adding the pjax-rails gem, which basically adds this code to the controller:
layout ->(c) { pjax_request? ? false : 'application' }
That is, on some requests the layout isn't rendered. Now I (kind of logically) want to combine these two approaches.
However when pjax_request? == true I get this error:
There was no default layout for MyController
What am I doing wrong and how can I solve this problem?
PS: This is most easily reproducible in this case:
class MyController < ApplicationController
layout false
caches_action :index, :layout => false
def index
end
end
|
Conditional layout and action caching without a layout
|
I tried to use HTTP headers but it didn't work
Well, maybe you didn't implement those headers correctly. Here's a nice tutorial that you may take a look at to better understand caching in HTTP.
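To make the linked advice concrete, here is a hedged sketch (not from the original answer) of the response headers that permit a browser to cache an AJAX GET response. The helper name is invented for illustration; the header names and values follow HTTP/1.1.

```python
# Illustrative sketch: headers that allow a browser (including IE) to
# cache an AJAX GET response. The helper name is made up for this
# example; the header names/values follow HTTP/1.1 (RFC 2616).
import time
from email.utils import formatdate

def cacheable_headers(max_age_seconds=300):
    """Build response headers permitting client-side caching."""
    return {
        # Any cache may store the response for this long.
        "Cache-Control": "public, max-age=%d" % max_age_seconds,
        # HTTP/1.0 fallback: an absolute expiry date in GMT.
        "Expires": formatdate(time.time() + max_age_seconds, usegmt=True),
    }

headers = cacheable_headers(300)
```

The opposite direction (the IE-style cache busting the question mentions) is simply `Cache-Control: no-cache, no-store` plus `Pragma: no-cache`.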
|
Many people ask how to prevent AJAX caching in IE, but I want to implement this technique in other browsers.
I tried to use HTTP headers but it didn't work, and I'm confused about this. Please help me.
|
How to implement caching of AJAX responses? Like in Internet Explorer
|
Issuing two requests in the same test case would be the most realistic way.
You would need to enable caching in your test environment, by default it's turned on only in production. Ideally not every test suite needs to have caching enabled so you'd need to flip this on/off before/after a suite. See this question: Optionally testing caching in Rails 3 functional tests
It depends on your storage layer for cache fragments, but Rails.cache.clear should be smart enough. I don't think there's any case where you would want the cache to persist across specs since pollution will make things confusing.
If your fragment makes a database call, stub the database connection in your tests, returning a fake result the first time, and raising an exception the second time. This can be generalized to other slow calls (such as a network request).
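The stubbing idea in the last point is language-agnostic; here is a minimal Python sketch of it (all names are invented for illustration): the slow call raises on its second invocation, so the test only passes if the second render is served from the cache.

```python
# Sketch: verify the cache was actually served by making the slow call
# fail if it is ever hit a second time. Names are hypothetical.
calls = {"n": 0}
cache = {}

def slow_db_query():
    calls["n"] += 1
    if calls["n"] > 1:
        raise AssertionError("cache was not used on the second render")
    return "fragment body"

def render_fragment(key="sidebar"):
    # First render computes and stores; later renders must hit the cache.
    if key not in cache:
        cache[key] = slow_db_query()
    return cache[key]

first = render_fragment()
second = render_fragment()   # served from cache, or the stub raises
```

In RSpec you would do the same with a stub that returns once and raises afterwards.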
|
I'm using fragment caches pretty extensively and have run into a few gotchas where unexpected objects got caught in the cache and/or the fragment didn't expire as originally planned.
I think this would be a prime candidate for request specs but am not sure of:
What would need to be done to simulate a scenario where a cache is triggered (efficiently)?
What adjustments would need to be made in the test environment to allow for caching?
Would the cache persist across multiple specs or would rspec automatically discard the cache between each spec?
Most importantly, is there a method to determine if the cache was actually served?
|
How can I test Rails fragment caching with RSpec?
|
The general idea here is to use stateless session EJBs to cache and manage infrequently changed data. Update the EJB occasionally if data, against all expectations, changes.
Java EE 6 provides a slightly different technique, singleton beans: http://java.sun.com/developer/technicalArticles/JavaEE/JavaEE6Overview_Part3.html.
|
I am querying the database from my EJB bean, which is a DAO; my query looks like:
public List findDirectories()
{
    allDirectories = getHibernateTemplate().find("from " + Directory.class.getName() +
        " d order by upper(d.name)");
    return allDirectories;
}
I want to cache these results. How can I do that? Is there an example I can refer to?
All I want to do is, in my EJB bean, cache the result set of the above query, so that the next time the page refreshes I get the results from the cache rather than from the database.
Update: I am using an older version of EJB, so I can't use the cool features of EJB3.
|
How to set caching in EJB bean where am querying database using hibernate template?
|
That basically forces the groups of objects to stay on the same node, but you can't control which node it is. In order to force location to a specific address you can use the KeyAffinityService. Be aware though that objects might be moved around if the topology changes.
|
We want to use Infinispan as a compute grid. We found the documentation on the Distributed Execution Framework in Infinispan 5.0.
What we want to do is to dedicate some nodes of the cache as dedicated nodes for executing particular tasks, since only these nodes have the necessary hardware.
My idea was to create a distributed cache mapping HardwareDriverKey to HardwareDriver, and execute the task using
DistributedExecutorService.submit(task, hardwareDriverKey).
For this to work, we need to figure out a way to ensure that the hardwareDriverKey is always located on the particular node of the distributed cache containing the actual hardware.
Do we need to write a custom ConsistentHash that can extract the node address from the hardwareDriverKey? Have you got an example for this? Or is there another way?
Thanks in advance,
Geert.
|
How to dedicate nodes in an infinispan compute grid
|
Yes, you need to enable ASP.NET compatibility mode for the WCF service. This involves setting the aspNetCompatibilityEnabled attribute on the serviceHostingEnvironment element in config, as well as adding the AspNetCompatibilityRequirementsAttribute attribute to your service class to indicate that you support it.
More on this subject can be found here on MSDN.
|
I've got two different, but closely related ASP.Net web applications that use the same data on some pages. In both applications I am using the ObjectDataSource control, have EnableCaching="true", and use the same CacheKeyDependency value in both applications.
I would like to make it so that when a new record is inserted or deleted in one application, it clears the cache in both applications. I began by simply clearing cache by using Page.Cache, but soon realized that it does not clear the cache in the other application. Then I added a WCF service to each application; each service clears the cache object in the application it is hosted in. Except that it doesn't...
First, I discovered that System.Web.HttpContext is always null in WCF. Then I tried instantiating a System.Web.Routing.RequestContext object, but its HttpContext object is always null as well.
It all boils down to this: If I set a Page.Cache object, can a WCF service access that same cache object, if the service is hosted in the same application as the page?
|
WCF Cache vs. Page.Cache
|
For direct-mapped caches, each address maps to exactly one location in the cache, so the number of sets equals the number of lines in the cache.
There would be 0 bits for the tag, and you don't provide enough information to determine the index or displacement bits.
Assuming you are using word addressing and you meant there are 9 or 10 bits for the index + tag:
9 bits -> 2^9 sets
10 bits -> 2^10 sets
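The general formula implied above can be sketched as follows (assuming, as the answer does, that all the given bits are index bits): a direct-mapped cache has one line per set, so i index bits give 2^i sets.

```python
# Direct-mapped cache: one line per set, so the number of sets is
# determined entirely by the number of index bits.
def num_sets_direct_mapped(index_bits):
    return 2 ** index_bits

# 9 index bits -> 512 sets, 10 index bits -> 1024 sets
```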
|
If I have 9 address bits, how many sets are there in a direct mapped cache?
If I have 10 address bits, how many sets are there in a direct mapped cache?
Is there a general formula for this question?
|
# of sets in direct mapped cache
|
From your question:
I want to avoid getting the browser popup that asks the user to
'resend.' I want the browser to just use the copy of the page it has
in it's cache.
If the browser asks you to resend data, it means the content was the response to a POST request.
According to RFC 2616 - Hypertext Transfer Protocol -- HTTP/1.1:
Some HTTP methods MUST cause a cache to invalidate an entity. This is
either the entity referred to by the Request-URI, or by the Location
or Content-Location headers (if present). These methods are:
PUT
DELETE
POST
So, to make caching work, you need to convert your POST to a GET.
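The RFC rule quoted above can be sketched as a tiny predicate (the helper name is hypothetical): responses to the invalidating methods cannot be served from cache, which is why converting the request to GET helps.

```python
# Sketch of the RFC 2616 invalidation rule quoted above: PUT, DELETE
# and POST invalidate the cached entity, so only other methods (like
# GET) can be answered from cache.
INVALIDATING_METHODS = {"PUT", "DELETE", "POST"}

def may_serve_from_cache(method):
    return method.upper() not in INVALIDATING_METHODS
```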
|
I added an outputcache directive to my asp.net page (asp.net 4.0) as such:
<%@ OutputCache Duration="3600" Location="Client" VaryByParam="None" %>
However, this does not appear to be working. When I check the http header information I see this:
HTTP/1.1 200 OK =>
Cache-Control => no-cache, no-store
Pragma => no-cache
Content-Type => text/html; charset=utf-8
Expires => -1
Server => Microsoft-IIS/7.0
X-AspNet-Version => 4.0.30319
Set-Cookie => ASP.NET_SessionId=0txhgxrykz5atrc3a42lurn1; path=/; HttpOnly
X-Powered-By => ASP.NET
Date => Tue, 15 Nov 2011 20:47:28 GMT
Connection => close
Content-Length => 17428
The above shows that the OutputCache directive was not applied. I even tried this from codebehind:
this.Response.Cache.SetExpires(DateTime.Now.AddHours(1.0));
TimeSpan ds = new TimeSpan(0, 1, 0, 0);
this.Response.Cache.SetMaxAge(ds);
The above code should have the same results as the OutputCache directive, but when I check the http header information, I can see it's still not being applied.
Basically, the purpose here is to make sure that when a user clicks the back button and lands on my page, the page will not be retrieved from the server. I want to avoid getting the browser popup that asks the user to 'resend.' I want the browser to just use the copy of the page it has in its cache.
Thanks in advance for any help.
|
OutputCache directive not working in Asp.Net
|
Yes, using shouldInterceptRequest, which you can use to detect whether you have a cached version of the requested URL and return an input stream to the cache, which the WebView will use instead of loading the resource from the web.
For versions before Honeycomb, it might be possible to use shouldOverrideUrlLoading, calling webview.loadData(datafromcache, "text/html", "UTF-8"); and returning true.
answered Nov 15, 2011 at 10:56 by FunkTheMonk (edited May 27, 2018)
is there any code snippet which relates to cache ?
– prat
May 27, 2023 at 17:51
things have changed in the last 12 years; now it's probably better to have Retrofit pre-load the URLs, and then it'll have them cached (if configured) for later
– FunkTheMonk
May 29, 2023 at 9:46
thanks, my project is 10 years old so need to adapt accordingly. If any leads, let me know
– prat
May 30, 2023 at 6:20
|
|
I am writing an app where the user is present with a list of URLs. To make it appear more faster, I want to detect the Wi-Fi state and load the URLs in the background, so when the user picks a URL, they are quickly presented the data specially when they are connected to Wi-Fi. Is there a way I can do this?
|
Android WebView caching
|
You have cached the entire parent view for 10 seconds. This means that during those 10 seconds the child action wouldn't ever be hit and the entire view will be served from cache. Even if the cache of the child action expires after 5 seconds it still won't be hit.
In ASP.NET MVC 3 only donut hole caching is supported (cache a portion of the page by using the OutputCache attribute on a child action). Donut caching is not supported (exclude portions of a cached page from this cache).
|
When attempting to set a different OutputCache duration on a partial view, I find that the partial view's cache is using the parent's output cache duration. With the following code I would hope that the RenderPartial would result in a shorter OutputCache duration, but I find that it is the same as the parent view's (10 seconds):
public class HomeController : Controller
{
[OutputCache(Duration=10, VaryByParam="none")]
public ActionResult Index()
{
ViewBag.Message = "Time now: "+ DateTime.Now.ToString();
return View();
}
[ChildActionOnly]
[OutputCache(Duration=5, VaryByParam="none")]
public PartialViewResult LogonPartial()
{
return PartialView("_LogOnPartial");
}
}
With this simple example showing DateTime.Now in the partial view, I find that the partial view does not clear its cache until the parent view flushes its own, where I would hope that the partial view clears its cache every 5 seconds (not every 10 as the parent view does). In the examples I have seen using OutputCache on a partial view, the cache is implemented on the partial view, not the containing view. Does anyone know if this is a limitation of caching in MVC3, or if there is another way to handle different caching mechanisms on the same page? Thanks in advance!
|
MVC3 Partial View OutputCache overridden by parent view
|
If you have a solid schema that you don't think will change, you might want to use a relational database. You will need to parse the JSON, make objects out of the JSON response, and persist them to the database using your framework.
If you think your schema will change, use NoSQL.
It also depends on what you will do with this data. Are you going to search the nodes within the JSON?
You can also do an object-to-Mongo mapping: you can either parse the JSON and store it as an object, or you can store the JSON the way it is.
A nice thing about NoSQL stores is that they support JSON well, using BSON (Binary JSON) internally.
In terms of a cache, IME, it should be used only for lookups; you can't really search the cache. It's just for getting objects faster than going to the database.
Take a look at this:
http://www.mongodb.org/display/DOCS/Inserting#Inserting-JSON
|
I was wondering what would be the best way to store JSON/XML responses. I'm currently building an application that relies heavily on the SoundCloud API to fetch songs/playlists etc.
Here are some ideas I've come up with.
Storing the results in a Relational Database and then using PHP to convert them to classes to make easy use of them throughout my application.
Doing the above, only this time using my framework's built-in ORM.
Using a Document-Oriented Database. (ie. MongoDB, couchDB, ...)
Storing the JSON responses in a cache. (using my framework's cache classes)
Can anyone shed some light on the advantages/disadvantages of using any of these methods?
Which one do you prefer?
|
Best way to store JSON response?
|
It sounds like a very specific situation, but in order to avoid each per-process in-memory cache (i.e. your class variables) having to warm up naturally, I'd investigate the feasibility of scripting the warm-up process and running it from inside an initializer... your app may take longer to start up, but your users would not have to wait.
EDIT | Note that if you were using something like Unicorn, which supports pre-loading application code before forking worker processes, you could minimize the impact of such initialization.
answered Oct 8, 2011 at 16:14 by d11wtq
Thanks for the hint about Unicorn. I'll give it a try. (One more reason for Unicorn: I had some issues with nginx and Passenger.)
– Deradon
Oct 8, 2011 at 16:32
|
|
First, let me explain the situation, I've got following:
A "Node" Class with following attributes:
node_id (unique)
node_name (unique)
And a "NodeConnection" Class with following attributes:
node_from
node_to
We'll have around 1 to 3 million nodes and something around 3 to 10 million NodeConnections.
After the nodes and connections are imported once, they won't change.
On each request to the Rails-Application, we'll have to look up around 10 to 100 node_ids by possible node_names. And we have to lookup a few hundred to a few thousands node_connections.
We currently prototyped this without any caching (so, a LOT of database queries) and response times were horrible (like 2 minutes).
So we switched over to cache the nodes and connections via memcached.
We got a performance boost, but performance was still lacking. (Because we're calling Cache.read for every NodeConnection, that's a few thousand calls per request.)
Now, we tried caching via class variables and got a huge performance boost. (Response times within a few hundred ms.)
# Pseudocode below
class Node
  def self.nodes
    @@nodes ||= get_nodes
  end

  def self.node_connections
    @@node_connections ||= get_node_connections
  end
end
So, I'd like to ask about Pros and Cons of this solution.
Cons I've identified so far:
Every Rails instance has to build up its own cache (its own class variables) -> higher total memory usage
Initializing the cache is time-consuming (1-3 minutes), so we can't do this within a request
Any other solutions out there to cache large (>100MB) and static (data won't change during applications lifetime) data efficiently, so all rails instances within the same machine can access this cache very fast (!)?
|
Cache (large and static) data with class variables
|
This actually depends on what the underlying cache provider is. If it is the in-memory provider, then yes: you are manipulating a single reference to an object. This could be very bad, since DataTable is not thread-safe - plus you will confuse other code using it.
If, however, you are using something like sql-server as a backend for cache, it will serialize and deserialize.
Personally, my advice is therefore: only cache immutable data, or, if you must cache mutable data, don't mutate it when getting it from cache.
In your specific case, yes: copying the DataTable after fetch would avoid the issue.
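The same aliasing behaviour can be sketched in Python (an analogy, not the .NET API): fetching a mutable object from an in-memory cache hands back a reference, so mutations leak into the cached copy unless you copy on the way out.

```python
# Sketch of the aliasing issue: an in-memory cache returns a reference,
# so mutating the fetched object mutates the cached item too. A deep
# copy on retrieval avoids that. All names are illustrative.
import copy

cache = {}
cache["SearchData"] = {"columns": ["Id", "Name"]}

aliased = cache["SearchData"]
aliased["columns"].append("NewColumn")      # mutates the cached object!

safe = copy.deepcopy(cache["SearchData"])
safe["columns"].append("AnotherColumn")     # leaves the cache untouched
```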
|
I'm seeing what seems like strange behaviour when dealing with a DataTable I'm storing in the cache. Although it may be by design.
The scenario is simple. I'm storing the results of a DB query in a DataTable and dropping that into that cache. After doing the DB query or grabbing the results from the cache I need to do some post manipulation on it (such as adding a column or ordering). It seems like the manipulation is happening to the item in the cache. So when I grab the item from the cache it must be grabbing a reference to it?
The code
string CacheKey = "SearchData";
DataTable CachedResults = null;
object CachedObject = Cache[CacheKey];
if (CachedObject is DataTable) { CachedResults = (DataTable)CachedObject; }
if (CachedResults == null)
{
CachedResults = new DataTable();
// Query the database and fill CachedResults
if (Cache[CacheKey] == null)
{
Cache.Add(CacheKey, CachedResults, null, DateTime.Now.AddMinutes(30), Cache.NoSlidingExpiration, CacheItemPriority.Low, null);
}
}
CachedResults.Columns.Add("NewColumn");
I guess my question is, is CachedResults simply a reference to the cached item? And if I want to avoid that I'd need to do a copy on the DataTable?
Thanks so much,
Maleks
|
.NET reference to cached item (DataTable)
|
You are right that the AppFabric cache is stored out of process.
When a request comes in for an AppFabric cache item, there is first a lookup to find where the item is, then a WCF net.tcp call to get the item. Therefore, it will be slower than ASP.NET caching. But there are times when AppFabric caching is better:
You do not lose the cache when the application pool is recycled.
If you have 100 web servers, then you need to get the data from the database once, not 100 times.
If you are running Enterprise Edition of Windows, you do not lose the cache if a machine goes down.
answered Sep 28, 2011 at 22:00 by Shiraz Bhaiji
Thanks Shiraz, good info. A couple more questions then.. Since the request for cache data is out of process, and likely on another machine, it sounds like it's not likely to be any performance improvement over going to the database directly? Possibly even worse performance that database direct? If that's the case, the only advantage it seems is that it's easier to scale the AppFabric tier by adding more machines to the cluster that it is to scale the database via partitioning, etc. Is that a correct assumption?
– user969996
Sep 29, 2011 at 17:57
Appfabric is faster than the database, since the database may need to go to disk. You can configure appfabric to cache locally, but then it will be in process.
– Shiraz Bhaiji
Sep 30, 2011 at 6:42
|
|
I'm working on a plan to increase performance and scalability of a web app by caching a user database for a WCF web service. Goals are to increase performance by accessing this data inProc vs a round trip the database server, as well as increase scalability of the service by reducing the load on the database server, thus allowing more web servers to be added to increase scale.
In researching AppFabric, I really don't see the value in my situation because it seems like, for the most part, I'm just replacing a round trip to the database with a round trip to a cache cluster (which seems like it might even have more overhead than the db to keep nodes in sync).
For the performance question, it seems like using the asp.net cache (in process) would be much faster than a round trip to the cache cluster, even though the data is in memory on those servers, and even if some of it is cached locally (I believe that would still be out of process from the web app).
For the scalability issue, it also seems easier to be able to add identical web servers to a web farm (each caching the user data in process), rather than manage a cache cluster separately, which adds complexity.
With that said, could someone explain why I would choose one approach over the other, given my stated goals? If you recommend the AppFabric approach, can you explain how the performance would be better than storing data in the asp.net cache in process.
Thanks!
|
AppFabric vs asp.net cache with sqldependency performance
|
Here is one way, though not necessarily concerned with memory-based caching.
|
Assuming live chat clients (Skype, Windows Live Messenger) use sockets to stay connected to their relative services, what are some strategies that the developers implement to scale out their servers? Even a system like Xbox LIVE where users are able to chat and send out game invites to their online friends.
The main problem is that each of these connections have to share state; some of this state needs to be queried by other clients (who could be connected to a different server behind a load balancer on the other side of the world). The most obvious one is online status.
Do these services use giant RAM based caches (maybe something like memcached) or NoSQL databases (like Cassandra) which all servers around the world connect to and update and retrieve the required state information.
I was wondering if this sort of solution would be fast (or reasonable) enough for real time services like the ones I described above.
My main problem is with memory. Distributing load is fairly straightforward (I hope) with a combination of load balancers and round-robin DNS balancing.
|
Scaling Out Socket Servers
|
As I read this section, get_multi issues multiple requests that run in parallel, the idea being that for large requests, get_multi allows the total amount of time to get all results to be reduced. I don't see any guarantee or mention that the independent requests, done together, are collectively atomic. The same rule likely applies to set_multi (i.e. the individual requests are atomic, but the collection of them is not).
There also appears to be no mention of transactions.
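Why per-key atomicity is not enough can be shown with a minimal sketch (a toy simulation, not the memcached protocol): if set_multi is just a loop of individual sets, a reader interleaved mid-loop observes the write half-applied, which is exactly the questioner's "impossible" case 1.

```python
# Toy illustration: per-key-atomic writes applied one at a time let a
# concurrent reader see a torn multi-key state. Names are illustrative.
cache = {"a": 0, "b": 0}

def set_multi(mapping, interleave=None):
    for i, (k, v) in enumerate(mapping.items()):
        cache[k] = v                 # each individual set is "atomic"
        if interleave and i == 0:
            interleave()             # another client reads mid-write

observed = {}
set_multi({"a": 1, "b": 1},
          interleave=lambda: observed.update(cache))
# observed is {"a": 1, "b": 0}: the half-applied state
```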
|
In the official FAQ of Memcached I read:
"All individual commands sent to memcached are absolutely atomic."
However this is still unclear to me when it comes to get_multi and set_multi. I'd like to know whether get_multi and set_multi are atomic in the following sense:
All writes performed by set_multi will be performed together atomically.
All reads performed by get_multi will be performed together atomically.
For example these situations should be impossible:
1)
Initially contents of the cache is {'a': 0, 'b': 0}
machine A calls set_multi({'a': 1, 'b': 1})
machine B calls get_multi(['a', 'b']) and receives {'a': 1, 'b': 0}
2)
Initially contents of the cache is {'a': 0, 'b': 0}
machine A calls set({'a': 1})
machine A calls set({'b': 2})
machine B calls get_multi(['a', 'b']) and receives {'a': 0, 'b': 2}
This question is just so important for my design, that I thought I'd better ask for confirmation.
|
get_multi / set_multi atomic?
|
Ideally, my preference would be to use the stock ticker as the ID. This seems like a case where you have a perfectly good natural key, and so no need for a surrogate.
However, given your object model, I'd go for the query cache option. Fortunately, it seems that Hibernate lets you control the query cache in a reasonably fine-grained way:
As mentioned above, most queries do not benefit from caching of their results. So by default, individual queries are not cached even after enabling query caching. To enable results caching for a particular query, call org.hibernate.Query.setCacheable(true). This call allows the query to look for existing cache results or add its results to the cache when it is executed.
You're very sensibly using the JPA interface to Hibernate, but with that, I think you can get the same effect using a query hint:
org.hibernate.cacheable: Whether or not a query is cacheable ( eg. new Boolean(true) ), defaults to false
I am by no means a Hibernate expert, but it doesn't seem that you would need to move your entity into solitary confinement in its own persistence unit to enable query caching for it.
answered Sep 15, 2011 at 13:42 by Tom Anderson
|
|
I have an entity with a dataless primary key (@Id) and an attribute that is unique, but meaningful (stockTicker). Clients of this entity sometimes request results by the @Id criteria and sometimes by the stockTicker criteria. I'd like the cache to be able to hit on either criteria. The @Id criteria is no problem. I can think of 2 solutions for cache hits on stockTicker. I can create a separate entity and set the @Id to the stockTicker which will allow the second level cache to be used. Alternatively I can turn on query cache. I don't really want to turn on query cache because there are other entities in the same EntityManager that I don't really want cached. Thus I'd have to break this query out into a separate persistence unit. Please suggest if one of these approaches is correct, or if there is a better option.
@Entity
@Immutable
@Cache(usage= CacheConcurrencyStrategy.READ_ONLY)
@Table(name = "Security")
public class SecurityEntity {
@Id
private Integer id;
private String stockTicker;
...
|
Hibernate entity caching on a field other than the id
|
You might want to have a look at AppFabric. One of its components is Velocity (which was a research in-memory distributed cache). It's only supported on server editions of Windows.
answered Sep 5, 2011 at 21:43 by Jonathan Dickinson
Didn't know that, but it seems a good alternative. I'll take a look at the samples to see how it works.
– svrx
Sep 6, 2011 at 8:52
|
|
Has anyone come across an open-source project or library in .NET to act as a caching layer between the database and the application that automatically, or on request, synchronizes the data, so that performance could be improved?
The .NET stack has some features that can be used, like SqlDependency and the Cache, but both have problems.
Tested alternatives:
SqlDependency is table-based, so when one record in a table is updated, the whole table is invalidated.
The Cache object works well but lacks the object-management features to manage changes in objects.
DataTables in the Cache may be a solution, but I would like to deal with the cache as objects, not DataRows.
Any suggestions on a system specialized in this task? Any good ORM that can do that?
|
What is the best way to cache a dataset with memory like performance, and having it tied to database changes?
|
This isn't really specific to Struts 2 at all. You definitely do not want to try storing this information in the ActionContext; that's a per-request object.
You should look into a caching framework like EHCache or something similar. If you use Hibernate for your persistence, Hibernate has options for caching data so that it does not need to hit the database on every request. (Hibernate can also use EHCache for its second-level cache).
|
I am trying to find the usual design/approach for "static/global" data access/storage in a web app; I'm using Struts 2. Background: I have a number of tables I want to display in my web app.
Problem 1.
The tables will only change and be updated once a day on the server; I don't want to access a database or load a file for every request to view a table.
I would prefer to load the tables to some global memory/cache once (a day), and each request get the table from there, rather than access a database.
I imagine this is a common scenario and there is an established approach? But I cant find it at the moment.
For Struts 2, is the ActionContext the right place for this data?
If so, any link to a tutorial would be really appreciated.
Problem 2.
The tables were stored in a XML file I unmarshalled with JAXB to get the table objects, and so the lists for the tables.
For a small application this was OK, but I think for the web app it's hacky to store the XML as resources and read the file in via the servlet context and parse it. Or is it?
I realise I may be told to store the tables to a database accessing with a dao, and use hibernate to get the objects.
I am just curious what the usual approach is for data already stored in an XML file, given that I will have new XML files daily.
Apologies if the questions are basic; I have a large amount of books/reference material, but it's taking me time to get to the higher-level design answers.
|
Struts2 static data storage / access
|
How to stop the app from being updated?
Once an application is offline it remains cached until one of the following happens:
The user clears their browser's data storage for your site.
The manifest file is modified. Note: updating a file listed in the manifest doesn't mean the browser will re-cache that resource. The manifest file itself must be altered.
The app cache is programmatically updated.
http://www.html5rocks.com/tutorials/appcache/beginner/#toc-updating-cache
In short: do not modify the manifest file.
How to update the manifest file for each user individually?
When a user visits the website for the first time, the browser loads the current manifest, so we'd use a dynamic URL and generate the manifest file dynamically:
<html manifest="manifest.php?version=2">
The browser remembers the URL manifest.php?version=2, and every time the generated manifest file stays the same, so the browser won't update (the manifest file is unmodified).
The script file would look like:
<?php
header("Content-Type: text/cache-manifest");
echo "CACHE MANIFEST\n\n";
echo "# version " . $_GET["version"] . "\n";
echo "index.php\n";
echo "styles.css\n";
echo "scripts.js\n";
?>
Now, how do we force the browser to load the manifest from another URL, for example manifest.php?version=5?
I tried changing the manifest attribute's content and calling window.applicationCache.update(), but the browser requests the manifest file from the old URL.
Another way might be:
ask the user if he/she wants to update;
if yes, save a cookie ("wish_to_update=1");
in manifest.php, read the cookie and check whether the user wishes to update;
in manifest.php:
if ($_COOKIE["wish_to_update"] == "1")
{
    // generate a modified version
    echo "# version different from the one in your URL";
    setcookie("wish_to_update", "0");
}
else
{
    // generate the unmodified version
    echo "# version " . $_GET["version"] . "\n";
}
The modified manifest file will force the browser to download all resources again.
|
I'm working on an ipad webapp that will receive monthly changes.
However, I cannot figure out how to let the user decide whether to update the cache or not. The iPad tends to just go ahead and update when it notices a change to the manifest file. I would like to prevent this so users who haven't finished reading this month's issue can update when they feel like it. I've searched for a solution to this question but fail to find any usable information.
The way my app is set up, I've got a content page that fetches data from the database, and all the other files (apart from media that is added to the content page) are static.
I have a cache.manifest with every file in it and a version number, automatically changed on update, at the top in a comment.
So an update to the content means a new manifest, and that means the updateReady event is fired. If anyone could give me any pointers on how to catch this and prevent it from automatically switching to the new version, that would be nice.
Thanks!
|
html 5 appcache user controlled updating
|
If the information only lasts for one hour and then changes, there is little reason to cache that section, because the next time people visit they will get different information and the cached copy goes to waste.
And I don't think there is much difference between including a file and inlining that file's content in the current page, since they are executed the same way. The use of include() just makes your code cleaner, and easier to control and maintain.
Turning now to the question of why your homepage loads so slowly: I think it's not a problem with your include()s, but could be a problem with the way you process data. As somebody commented on your post, use Xdebug to find out what makes your homepage slow.
Good luck.
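The hourly-refresh scenario in the question can be sketched as a simple file-based fragment cache with a TTL. This is a minimal language-agnostic illustration in Python (the file name, TTL, and render_fragment() are invented stand-ins for the asker's PHP sections):

```python
import os
import time

CACHE_FILE = "homepage_fragment.cache"  # hypothetical cache path
TTL_SECONDS = 3600  # one hour, matching the hourly refresh described above

def render_fragment():
    # Stand-in for the expensive work (DB queries, includes, templating).
    return "<h3>Top stories</h3><ul><li>...</li></ul>"

def cached_fragment():
    # Serve the cached copy while it is fresh; rebuild it otherwise.
    if os.path.exists(CACHE_FILE):
        age = time.time() - os.path.getmtime(CACHE_FILE)
        if age < TTL_SECONDS:
            with open(CACHE_FILE) as f:
                return f.read()
    html = render_fragment()
    with open(CACHE_FILE, "w") as f:
        f.write(html)
    return html
```

The same shape works in PHP with file_put_contents()/filemtime(); only the sections that actually change hourly need to bypass the cache.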
|
I am using many includes to show small sections of my site. Is it fine to use many includes, or should I reduce them as much as possible? How much time does an include() call cost?
My home page loads very slowly. What is the way to make it load faster? (My homepage shows almost the same content for an hour daily, with different data in only some sections. Can I cache it? What is the best caching solution, or some other way to make things faster?)
|
PHP number of includes to use
|
PEAR also has two libs: Cache and Cache_Lite. Unfortunately both are not very current and don't offer memcached backends.
answered May 23, 2011 at 17:39 by cweiske
|
|
Closed. This question is seeking recommendations for software libraries and does not meet Stack Overflow guidelines. It is not currently accepting answers. (Closed 8 years ago.)
I am looking for a PHP caching library that has multiple back-end storage adapters. For example, something that can save the cache in a file or in Memcache.
Here are some of the libraries that I have found:
Extensible PHP Caching Library
Stash
RayCache
Zend_Cache
SabreCache
|
Looking for a PHP Caching Library with multiple back-end storage adapters [closed]
|
Check: NSWorkspace Class Reference
Refresh the Finder like this:
[[NSWorkspace sharedWorkspace] noteFileSystemChanged:path];
|
I have a process that modifies icons on files with an overlay. The Finder, though, seems to be caching the images it generates, which sometimes happens after I generate the overlay (in the case of the icon being a preview of the file instead of a plain icon).
Is there a programatic way I can tell the Finder to dump its cache for an icon image, and recreate it, without changing the file information, specifically the modification date?
|
How to programatically tell the Mac Finder to redraw a folder or file icon?
|
In .Net 4, Microsoft introduced the System.Runtime.Caching namespace. The most obvious choice in your scenario would likely be the MemoryCache:
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.aspx
|
I wrote a generic class that encapsulates a static hashtable with some put,get methods.
But I thought would be better if it has features where I can define some expiry time and some sync in place. All I need to do is cache some objects on the client. My application is a winform client and just need to cache some frequently used data items in memory.
I searched and found NCache, memcache and other server cache products and frameworks. But is there anything simple for client side caching in thick clients ?
Many Thanks,
Mani
|
client side caching library/framework for thick clients
|
Google "simple php cache tutorial" or look at this one.
Instead of echo-ing or print-ing text to the screen as you probably are now, build up a variable using string concatenation, like so:
$html = '<h3>Weather View</h3>';
foreach ($feed->element as $element) { // loop over each element in your XML feed
    $html .= 'Some more information';
}
// then when done
file_put_contents('weather_cache.txt', $html);
Essentially you ought to cache a segment of HTML, which you then include at the correct place in your webpages with PHP, probably using file_get_contents().
The logic in the tutorial will show you how to check the date of the cache and then decide to a) go and refresh the XML and recreate the cache, or b) load and display the cached file.
|
I just started using SimpleXML to get a feed and display data from that XML feed on one of my webpages. See my first post https://stackoverflow.com/questions/5925368/how-to-use-the-weather-gov-xml-feed-on-a-website I have a basic knowledge of PHP so I may be missing something basic.
If I understand what is happening correctly, every time someone looks at my page, before it displays the PHP output, the script first has to go and get the feed. Then it does what I have asked it to do with the feed and then displays the page.
I would think everything would be faster if I were to cache either the raw feed or the formatted feed. Which is better: caching the raw feed, or formatting the feed and caching the result? How do I go about caching either?
I am hoping that someone can point me in the direction of a tutorial that will teach me how to cache things with PHP, or maybe someone has some example code that I could learn from and/or adapt for my project?
Thanks.
|
cache a feed or a finished page?
|
So in our scenario, it appears that monolithic cache objects are going to be preferred. With big fat pipes in the datacenter, it takes virtually no perceptible time for ~30 MB of serialized product data to cross the wire. Using a Dictionary<TKey, TValue> we are able to quickly find products in the collection in order to return, or update, the individual item.
With thousands of individual entities, all well under 1 MB, in the cache, bulk operations simply take too long; there is too much overhead and latency in the network operations.
Edit: we're now considering maintaining both the individual entities and the monolithic collection of entities, because with the monolith alone, retrieving individual entities becomes a fairly expensive process with a production dataset.
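The trade-off described above can be illustrated with a small sketch (Python here as a neutral stand-in; FakeCache and its per-call counter are invented to model a distributed cache's fixed round-trip cost, not AppFabric's actual API):

```python
# Simulated distributed cache where every call pays one network round trip.
class FakeCache:
    def __init__(self):
        self.store = {}
        self.round_trips = 0

    def put(self, key, value):
        self.round_trips += 1
        self.store[key] = value

    def get(self, key):
        self.round_trips += 1
        return self.store.get(key)

products = {i: {"id": i, "name": f"product-{i}"} for i in range(1000)}

# Fine-grained: one round trip per entity when priming the cache.
fine = FakeCache()
for pid, p in products.items():
    fine.put(f"product:{pid}", p)

# Monolithic: the whole catalog as one entry; one round trip to store,
# one to fetch, then cheap in-memory dictionary lookups thereafter.
mono = FakeCache()
mono.put("catalog", products)
catalog = mono.get("catalog")
hit = catalog[42]
```

With fat datacenter pipes the monolith's few large transfers beat thousands of small ones, which matches the conclusion above; the edit's hybrid approach trades memory for cheap individual lookups as well.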
edited Apr 23, 2011; originally answered Apr 22, 2011 by andrewbadera
|
|
In a distributed caching scenario, is it generally advised to use or avoid monolithic objects stored in cache?
I'm working with a service backed by an EAV schema, so we're putting caching in place to minimize the perceived performance deficit imposed by EAV when retrieving all primary records and respective attribute collections from the database. We will prime the cache on service startup.
We don't have particularly frequent calls for all products -- clients call for differentials after they first populate their local cache with the object map. In order to perform that differential, the distributed cache will need to reflect changes to individual records in the database that are made on an arbitrary basis, and be processed for changes as differentials are requested by clients.
First thought was to use a List or Dictionary to store the records in the distributed cache -- get the whole collection, manipulate or search it in-memory locally, put the whole collection back into the cache. Later thinking however led to the idea of populating the cache with individual records, each keyed in a way to make them individually retrievable from/updatable to the cache. This led to wondering which method would be more performant when it comes to updating all data.
We're using Windows Server AppFabric, so we have a BulkGet operation available to us. I don't believe there's any notion of a bulk update, however.
Is there prevailing thinking as to distributed cache object size? If we had more requests for all items, I would have concerns about network bandwidth, but, for now at least, demand for all items should be fairly minimal.
And yes, we're going to test and profile each method, but I'm wondering if there's anything outside the current scope of thinking to consider here.
|
Best-practice caching: monolithic vs. fine-grained cache data
|
How would I do this? With the helper I would replace all curly-braced words with links, and when the user hovers over a linked word I would make an Ajax call to get the word's description or link or whatever you need. This way you request the description only when it's needed.
If you still insist on using the helper: it's just a PHP class, so you can include it in your model, create an object of the class, and use its functions.
The third option is to create your own class and use it in both the model and the helper.
|
Now before you burn me at the stake hear me out!
I want some keywords of a product description field to link to other products (kinda like mediawiki links), however at some point I need to make these associations and link the keywords up, so I'll need to do a search on each curly-braced word I find in the description and produce a formatted version of the description to cut down on processing these keyword links every time the description is displayed.
For ease/consistency I am creating all product links with a custom helper, and all I need to do is pass the product row in and the helper produces a link for me with any options I specify. The only thing is that I need to do this in beforeSave() so I can populate description_formatted.
At the minute, beforeSave() checks for the original description row, then calls a private method in the model which matches each keyword and queries the db for a matching row... that is as far as I've got.
|
How to use helper inside model function in CakePHP
|
You're basically paying a price for not wanting to store the balance in each transaction. You could optimize your database with indices and use caches etc., but fundamentally you'll run into the problem that calculating a balance will take a long time if you have lots of transactions.
Keep in mind that you'll continue to get new transactions, and your problem will thus get worse over time.
You could consider several design alternatives. First, like Douglas Lise mentioned, you could store the balance in each transaction. If an earlier-dated transaction comes in, it means you may have to update several transactions since that date. However, this has an upper bound (depending on how "old" a transaction you want to allow), so it has reasonable worst-case behavior.
Alternatively, you can do a reconciliation step. Every month you "close the books" on transactions older than X weeks. After reconciliation you store the Balance you calculated. In def balance you now use your existing logic, but also refer to "balance as of the previous reconciliation". This again, provides a reasonable and predictable worst-case scenario.
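The reconciliation idea can be sketched like this (Python used as a neutral sketch language; the transactions and cutoff date are invented sample data, not the asker's schema):

```python
from datetime import date

# (date, amount) transactions; negative = charge, positive = payment.
transactions = [
    (date(2011, 2, 20), -2.00),
    (date(2011, 2, 23), -1.50),
    (date(2011, 2, 25),  8.00),
    (date(2011, 3, 2),  -4.00),
]

# Reconciliation snapshot: everything up to this date is "closed" and its
# balance computed once and stored, so each request only sums the
# transactions newer than the snapshot.
closed_through = date(2011, 2, 28)
closed_balance = sum(a for d, a in transactions if d <= closed_through)

def balance(as_of):
    recent = sum(a for d, a in transactions if closed_through < d <= as_of)
    return closed_balance + recent
```

In Rails terms, closed_balance would live in a reconciliation table and the balance method would only walk line items dated after the last close, keeping the per-request cost bounded no matter how old the account gets.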
|
I'm new to Ruby/Rails, so this is possibly (hopefully) a simple question that I just dont know the answer to.
I am implementing an accounting/billing system in Rails, and I'm trying to keep track of the running balance after each transaction in order to display it in the view as below:
Date Description Charges($) Credits($) Balance($)
Mar 2 Activity C $4.00 -$7.50
Feb 25 Payment for Jan $8.00 -$3.50
Feb 23 Activity B $1.50 -$11.50
Feb 20 Activity A $2.00 -$10.00
Each transaction (also known as a line item) is stored in the database with all the values above (Date, Description, Amount) except for the Balance. I can't store the balance for each transaction in the database because it may change if something happens to an earlier transaction (for example, a payment that was posted subsequently fails later). So I need to calculate it on the fly for each line item, and the Balance for a line item depends on the one before it (Balance = Balance of previous line item + Amount of this line item).
So here's my question. My current (inept) way of doing it is that in my LineItem model I have a balance method which looks like:
def balance
  prev_balance = 0
  # get the previous line item's balance if it exists
  last_line_item = Billing::LineItem.get_last_line_item_for_a_ledger(self.issue_date, self.ledger_item_id)
  if last_line_item
    prev_balance = last_line_item.balance
    # .. some other stuff ...
  end
  prev_balance + (-1 * net_amount) # net_amount is the amount for the current line item
end
This is super costly, and my view takes forever to load since I'm calculating the previous line item's balance again and again. What's a better way to do this?
|
Cache a complex calculation in Rails 3 model
|
Yes. The browser will cache each unique url, provided there are no headers that tell it not to.
Your file should have one entry in the browser cache even if it is requested from a number of referring pages. Once cached from one site, the browser will use the cached version for all the others, thereby speeding up the page load.
This is the idea behind loading JavaScript libraries from a CDN (content delivery network). If you load jquery from http://ajax.googleapis.com/ajax/libs/jquery/1.5/jquery.min.js there's a good chance the user already has it in their browser cache so it will load instantly.
|
I have several different sites in different hosts and I use the same JS file in all of them which is loaded from one and only remote host. For instance,
One single JS filename my.js is stored at someotherhost.net.
This filename is loaded in several different pages (sites):
somedomain1.net/home.html
somedomain2.net/home.html
somedomain3.net/home.html
Browsing through these sites, the browser caches my.js. But will it use the same cache entry for all the different sites?
Or maybe it doesn't matter whether the requested file has the same name, is stored on a single remote host, and is loaded in different pages; will the browser keep separate caches?
How does browser caching work?
|
browser caching: same remote filename in different sites
|
You can load it asynchronously as I described in the update to the question. You get an app ID from "create application" on Facebook. (Google it...)
|
I have added the Facebook like button to my website. There's a delay while the JavaScript below is loaded, and the rest of the page doesn't load until it's done. Also, if I block Facebook with my firewall there's a very long delay before it gives up, and only THEN is the rest of my page displayed. How do I get my page displayed first, and then the like button? (I blocked it to see what would happen if Facebook was having a problem.) Also, when I load different pages, the Facebook like button keeps reloading (it's visibly absent until reloaded, about 1/3 second). The button is in the same place on every page, in a banner at the top. The other images don't reload; they are cached and appear static when selecting a different page. Is there a way I can make that script be cached?
Here's the code:
<div style="position: absolute; top:20px; right:85px; width:70px; height:25px">
<script type="text/javascript" src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<fb:like href="http://www.facebook.com/pages/somepage/somenum" layout="box_count" show_faces="false" width="50"></fb:like>
</div>
Thanks
UPDATE:
I found this code for loading all.js asynchronously:
"Another way is to load asynchronously so it does not block loading other elements of your page: "
<div id="fb-root"></div>
<script>
  window.fbAsyncInit = function() {
    FB.init({appId: 'your app id', status: true, cookie: true, xfbml: true});
  };
  (function() {
    var e = document.createElement('script'); e.async = true;
    e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
    document.getElementById('fb-root').appendChild(e);
  }());
</script>
But what is "your app id?" That code is from here http://www.mohammedarif.com/?p=180
|
How do I keep the Facebook like button from delaying the loading on my website?
|
Hi Jamey,
WebClient and HttpWebRequest URLs are cached, which causes problems when fetching the same URL but wanting fresh results. One workaround is to make something in the query string unique each time.
Images are cached as well, prompting people to develop solutions to this:
One-time Cached Images in Windows Phone 7 « Ben.geek.nz
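The unique-query-string workaround mentioned above can be sketched like this (Python here rather than Silverlight/C#; the nocache parameter name is an arbitrary choice, any unused name works):

```python
import time
from urllib.parse import urlencode, urlparse

def cache_busted(url, stamp=None):
    # Append a unique query parameter so the HTTP stack treats each
    # request as a distinct URL and cannot serve a stale cached response.
    stamp = stamp if stamp is not None else int(time.time() * 1000)
    sep = "&" if urlparse(url).query else "?"
    return f"{url}{sep}{urlencode({'nocache': stamp})}"
```

In C# the equivalent is building the URI string the same way before handing it to WebClient or HttpWebRequest.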
|
In Windows Phone 7, do calls to WebClient and HttpWebRequest use a caching system, or do they ALWAYS pull from the web?
Also, if I use <Image Source="http://www.images.com/someimage.jpg"/>, does the image cache, or does it pull from the web every time the app loads?
|
Does Windows Phone 7 cache web request?
|
You should send the cache-control headers as part of the HTTP response. The proxy you are using may not parse HTML for meta tags.
Cache-Control: no-cache
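A sketch of sending those headers from the application side, using Python's WSGI interface as a neutral example (the exact mechanism depends on your server stack; in PHP it would be header() calls, in ASP.NET Response.Cache):

```python
# Minimal WSGI app that attaches the standard cache-suppression headers
# to every response, so both HTTP/1.1 caches and older HTTP/1.0 proxies
# are told not to reuse the page.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html"),
        ("Cache-Control", "no-cache, no-store, must-revalidate"),
        ("Pragma", "no-cache"),  # understood by HTTP/1.0 proxies
        ("Expires", "0"),        # belt and braces for legacy caches
    ]
    start_response("200 OK", headers)
    return [b"<html>fresh every time</html>"]
```

Because these travel in the response itself, intermediaries see them even when they never parse the HTML body, which is why they work where meta tags fail.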
|
I know this is a duplicate question, but I've tried:
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<META HTTP-EQUIV="EXPIRES" CONTENT="-1">
<meta http-equiv="pragma" content="no-cache" />
And yet I still sometimes get cached pages (old data, etc.).
And this is the solution I've always seen when searching Stack Overflow for a solution.
|
HTML: force proxy server to fetch new copy of page
|
I think it's to do with Marshal.load. Basically, Rails magically loads all the classes for you, but when you call Rails.cache.fetch it will at some point call Marshal.load, which doesn't know anything about Rails' dependency loading and can sometimes silently fail (undefined class/module).
My solution is to simply add
require_dependency 'post'
to your controller, which should load the class so the Marshal library can see it.
answered Nov 18, 2010 by Richard
|
I'm having a problem using memcached in Rails 3
The following is in my controller
@last_post = Rails.cache.fetch('last') {Post.last}
From the view I call @last_post.title
The first time the view is loaded, the title of the last post is displayed. Once the view is refreshed, I get the error undefined method 'title' for #<String:0x8007ae0>
It seems like the object isn't being deserialized the second time around.
Am I doing something wrong? What can I do to fix this? (Ruby 1.8.7, Rails 3.0.1)
Thanks!
|
Memcached problem with Rails 3. Object isn't being deserialized the second time around
|
I assume by caching you mean output caching (caching just the output HTML returned after processing the view result of a controller). What you are looking for is called cache substitution or "donut caching". As far as I know it is not supported in ASP.NET MVC 1 & 2. In the RC of MVC 3 it is supported, as you can read here: http://weblogs.asp.net/scottgu/archive/2010/11/09/announcing-the-asp-net-mvc-3-release-candidate.aspx.
|
I added caching to my application. I have a page which contains several user controls; my problem is that I just want to cache the data returned from the controller, not all of the page content. Since one of my user controls is a login control, if I cache the whole result it behaves incorrectly.
My problem is:
1. Is it possible to cache just the data returned from the controller?
2. If a page is cached, can I force a control on the page to be uncached?
|
How to avoid to cache the user control in asp.net mvc?
|
You could use the built-in cache. As far as application startup is concerned, you could use the Application_Start method in Global.asax.
|
I have an ASP.NET MVC application where I keep all of the drop-down values in a table, so on average every page accesses this table 2-3 times. I want to cache (load into memory) this table at application startup.
Is there a way to do so? I have googled, but found no helpful topics.
|
Caching table at startup ASP.NET MVC Application
|
It seems to me that it could empower the server admin to centrally optimize content created by a large set of developers. Also, it could be a good way of baking in some well-tested (by Google) best practices that might be costly to develop on your own.
|
My title pretty much says it all. I have been looking at mod_pagespeed and it somehow impresses me as being very little more than a way to offload the work of optimization to the server instead of the developer.
There may be some advantages to doing this such as reducing developer time etc so I'm not suggesting that it is all bad. But it also does strike me as a bit of a script kiddie way to do things. Rather than learn about all those performance techniques, hey! just let the server do it!
Is mod_pagespeed something that would be good to implement on my production web application or would I be better off doing the optimization "by hand"?
Here is the original announcement.
|
Is Google's new mod_pagespeed just another tool for webmasters who don't know how to do things right?
|
Depending on what you are doing, by default I think the L2 cache only caches items fetched by ID, e.g. session.Get or session.Load. To cache queries using ICriteria etc., you need to specifically say you want that query cached, e.g.:
ICriteria criteria = Session.CreateCriteria<Person>().SetCacheable(true).SetCacheRegion("SomeNameHere");
The "SomeNameHere" part is your cache region. In short, this groups cache items together. Keeping this really brief, I usually just put the name of the class/entity, such as "Person" or "Company".
When setting up your class maps you might also want to play with the Cache property from the base class. It's something like:
Cache.ReadWrite().IncludeAll();
I personally found that without this, when a query was executed it cached the IDs of each item in the result set but not the items themselves. This makes a heavy query fast, but the session then has to hit the database for each item, so if you have a really simple query returning 100 items, your database could get hit 100 times. Adding the above to my mapping class solved that problem.
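The IDs-only pitfall described above can be simulated with a small sketch (Python here; DB, query_cache, and entity_cache are invented stand-ins that model the behavior, not NHibernate's actual internals):

```python
# Simulation of "the query cache holds IDs only": the query result (a list
# of IDs) is cached, but unless the entities themselves are cached too,
# every replay of the query still costs one database hit per row.
db_hits = 0
DB = {i: {"id": i, "name": f"person-{i}"} for i in range(100)}

query_cache = {}
entity_cache = {}

def load_entity(pk, use_entity_cache):
    global db_hits
    if use_entity_cache and pk in entity_cache:
        return entity_cache[pk]
    db_hits += 1
    row = DB[pk]
    if use_entity_cache:
        entity_cache[pk] = row
    return row

def run_query(use_entity_cache):
    ids = query_cache.setdefault("all_people", list(DB))  # cached ID list
    return [load_entity(pk, use_entity_cache) for pk in ids]

run_query(use_entity_cache=False)
run_query(use_entity_cache=False)   # IDs were cached, yet 100 fresh DB hits
without = db_hits                    # two runs, 100 hits each

db_hits = 0
run_query(use_entity_cache=True)     # primes the entity cache
run_query(use_entity_cache=True)     # second run served entirely from cache
with_cache = db_hits
```

This mirrors why enabling entity-level caching (the Cache.ReadWrite().IncludeAll() mapping above) is what actually eliminates the per-row database trips.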
Hopefully that helps.
answered Oct 20, 2010 by cdmdotnet
|
|
Using this, I can tell Fluent NHibernate to use SysCache as a 2nd Level Cache Provider:
MsSqlConfiguration.MsSql2008.ShowSql().ConnectionString(x =>
{
x.Server(@"localhost\ANDREPC");
x.Database("mydb");
x.TrustedConnection();
}).Cache(c => c.ProviderClass<SysCacheProvider>().UseQueryCache())
Besides, SysCache's configurations must be placed on Web.Config:
<configuration>
<configSections>
<section name="syscache" type="NHibernate.Caches.SysCache.SysCacheSectionHandler,NHibernate.Caches.SysCache" />
</configSections>
<syscache>
<cache region="foo" expiration="500" priority="4" />
<cache region="bar" expiration="300" priority="3" />
</syscache>
</configuration>
Now what? What do these regions mean? How do I associate a region with a type? How do I make this work? My jMeter tests suggest that after the configuration above my application got 7% slower than before. I need to understand SysCache and learn how to continue with the configuration.
Thanks.
PS: The official SysCache documentation is here, and it's not very explanatory.
|
How to set up SysCache on Fluent NHibernate?
|
Keeping a clean clone locally is very common and a good idea in general. I've always stuck with the two step process you describe, but you could do this with hooks if you wanted.
In your cache repos you'd put soemthing like this in the .hg/hgrc file:
[hooks]
preoutgoing = hg pull
which tells that repo to do an hg pull before it bundles up changes in response to a pull or clone request made on it.
Note that even if the downstream (real clone) repo requests a subset of the changesets using pull -r or clone -r, this cache repo will pull down everything. That's likely what you want since your goal is a mirror, but as the commenter notes, it's worth pointing out.
|
I have lots of different clones which I work on separately. When I want to update those clones, it can be quite slow to update them from the server. So instead I have a "clean" clone, which I update from the server regularly, and all the other clones clone from the clean clone (hope that makes sense).
But now what I have is a two-step approach. First go to the clean clone and pull, and then go to the real clone i'm working on and pull from the clean clone. Can this be made into 1 step?
Ideally, the "clean" clone would be transparent: when it's pulled from, it would do a pull itself. That way we'd have caching and one step. Is there a way to do this?
|
Creating a local transparent cache of a mercurial repository
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.