Response | Instruction | Prompt
---|---|---
Using version 1.6.1.1
Another possibility would be to add your custom entries below the following comment:
# ~~end~~ Do not remove this comment, Prestashop will keep automatically the code outside this comment when .htaccess will be generated again
# Your entries go here, e.g.:
php_value max_input_vars 3000
As the comment says, PrestaShop will automatically keep any code outside this comment when the .htaccess file is generated again.
|
I need to add a custom entry to the .htaccess file. Unfortunately, each cache rebuild or cache refresh in the admin panel deletes my entry. Can I declare these entries in the panel or in code to prevent them from being deleted?
|
PrestaShop 1.6 - custom entries in htaccess file
|
As I suggested in the comment, Igor confirmed that there was code in Global.asax for disabling caching:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetExpires(DateTime.UtcNow.AddHours(-1));
Response.Cache.SetNoStore();
Igor, just to inform you: these lines are one of the suggested ways to fix the 'browser back button' scenario (but, as you can see, with some cons). Simple scenario steps:
Login to the application - the logged-in user is redirected to the home page
Logout - the user is redirected to the login page
Click the browser back button - the user should not be redirected to the home page, but with caching enabled this could be an issue.
Please check the browser back button functionality. If the scenario I described is a problem for you, just use the attribute
[OutputCache]
with proper parameters.
Regards Piotr
|
In an MVC5 project we encounter a problem with caching of the bundled resources (js/css).
According to the MVC docs, the bundles should be cached by default, and it works in other projects. However, here, no matter what we configure, the response headers for our resources are:
Cache-Control: no-cache, no-store
Connection: Keep-Alive
Content-Encoding: gzip
Content-Type: text/javascript; charset=utf-8
Date: Wed, 01 Jul 2015 11:22:11 GMT
Expires: -1
Keep-Alive: timeout=5, max=100
Pragma: no-cache
Server: Microsoft-IIS/8.5
Transfer-Encoding: chunked
Vary: Accept-Encoding
I can't figure out where this is coming from as we are not disabling cache anywhere. Any ideas?
|
ASP.NET/MVC Bundler Cache does not work
|
Have you tried using the OutputCache attribute? (The link is to older docs, but it's still valid).
using System.Web.Mvc;
namespace MvcApplication1.Controllers
{
[HandleError]
public class HomeController : Controller
{
[OutputCache(Duration=600, VaryByParam="none")]
public ActionResult Index()
{
return View();
}
}
}
The output of that action will be cached for the number of seconds specified, in this case, 10 minutes.
|
I'm building a KPI monitoring web app. It used to work perfectly fine with a few users, but lately the load increased, causing problems with our backend systems. Some of them are very expensive to license, others are hard to scale mainframes.
It is very important for the business to never show data that is more than 10 minutes old. Ideally I'd like to call my backend once every 10 minutes, but only if there are active users.
My biggest problem currently is that most users start work at the same time, so my app receives 30-40 requests in a matter of a few seconds.
How can I reliably solve this issue?
I've tried creating a static object in Global.asax.cs. That got very complex very fast, as I needed locking and a way to signal to pages that there is a backend info retrieval in progress.
I've also used CacheCow.Server in the past, but it's WebAPI only and it felt like a black box I don't really understand.
Can you give me some advice on how to proceed, please?
|
Safest way to cache information in an ASP.NET MVC 5 web site
|
In Symfony2 there are a few things you need to do in order to activate ESI caching.
1) In app/config/config.yml, make sure you have enabled ESI, with a fragments path.
framework:
esi: { enabled: true }
fragments: { path: /_proxy }
2) Wrap the kernel with the AppCache object
// web/app.php
$kernel = new AppCache($kernel);
3) Set up the AppCache configuration
// app/AppCache.php
use Symfony\Bundle\FrameworkBundle\HttpCache\HttpCache;
class AppCache extends HttpCache
{
protected function getOptions()
{
return array(
'debug' => false,
'default_ttl' => 0,
'private_headers' => array('Authorization', 'Cookie'),
'allow_reload' => false,
'allow_revalidate' => false,
'stale_while_revalidate' => 2,
'stale_if_error' => 60,
);
}
}
About your issue: if it is caching your response and the only problem is that it reloads every time you refresh the page, make sure the allow_reload property is set to false in the configuration.
You can read more about it here:
http://symfony.com/doc/current/book/http_cache.html
|
I have a Controller whose Action is rendered in twig with
{{ render_esi(controller('MyWebsiteBundle:Element:header')) }}
The Action itself looks like this:
/**
* @return Response
*/
public function headerAction()
{
$currentLocale = $this->getCurrentLocale();
$response = $this->render('MyWebsiteBundle:Element:header.html.twig', array(
'currentLocale' => $currentLocale,
'myTime' => time()
));
$response->setPublic();
$response->setSharedMaxAge(3600);
return $response;
}
When I reload my browser, the "myTime" changes every time.
How can I use setSharedMaxAge() so that the Twig template is only rendered again after the max age has expired?
|
Symfony2: ESI setMaxAge Cache
|
Caching is enabled by default for services built using liferay service builder.
I believe none of the steps mentioned above are required, as the cache is enabled by default.
The below properties are set to true in the default portal.properties and apply to all entities, not just custom entities.
value.object.entity.cache.enabled=true
value.object.finder.cache.enabled=true
You can open the *PersistenceImpl.java class for your custom entities to observe the caching code. Debugging this class could give you details on why it's not hitting the cache.
For example, calling the API with the cache-off argument won't hit the cache.
|
I am using Liferay version 6.1.
I have created custom entities for a portlet using Service Builder. I want to cache those custom entities.
I have set following properties in my portal-ext.properties to enable cache.
ehcache.statistics.enabled=true
value.object.entity.cache.enabled=true
value.object.finder.cache.enabled=true
velocity.engine.resource.manager.cache.enabled=true
layout.template.cache.enabled=true
net.sf.ehcache.configurationResourceName=/custom_cache/hibernate-clustered.xml
log4j.logger.net.sf.ehcache=DEBUG
log4j.logger.net.sf.ehcache.config=DEBUG
log4j.logger.net.sf.ehcache.distribution=DEBUG
log4j.logger.net.sf.ehcache.code=DEBUG
I created an ehcache.xml file to override ehcache-failsafe.xml, configuring my custom entities so that they can be cached.
My ehcache.xml file is in my classpath [classpath:liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/classes].
<diskStore path="java.io.tmpdir/ehcache"/>
<defaultCache
maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="120"
timeToLiveSeconds="120"
overflowToDisk="true"
maxElementsOnDisk="10000000"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="120"
memoryStoreEvictionPolicy="LRU"
/>
<cache
eternal="false"
maxElementsInMemory="10000"
name="com.pr.test.model.impl.StudentImpl"
overflowToDisk="false"
timeToIdleSeconds="600"
timeToLiveSeconds="300"
statistics="true"
copyOnRead="true"
copyOnWrite="true"
clearOnFlush="true"
transactionalMode="off"
/>
I also created a hibernate-clustered.xml file under the src path [/docroot/WEB-INF/src], which is the same as my ehcache.xml file.
Since I am using Service Builder, is cache-enabled="true" enough to cache the entities?
I use JConsole to monitor the cache hits, but the problem is that the percentage of cache misses is higher than that of cache hits. Below are my caching statistics:
Any help will be appreciated.
|
Liferay custom entity caching
|
Are you asking how you can inspect the cache in your running server from the shell?
Well, there's a reason why it's called the local memory cache. That's because it's local, in other words it's not shared between processes. There's absolutely no way to get access to it from a different process.
If you want a cache that you can access from a different process, you should use one of the other cache backends. To be honest, you should be doing that anyway; locmem is really only for development.
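A minimal sketch of such a shareable backend (assuming Django's built-in file-based cache and a hypothetical /tmp location; the database or memcached backends work the same way for this purpose):

```python
# settings.py -- a file-based cache lives on disk, so the dev server
# and a `manage.py shell` process see the same entries.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/tmp/question_cache',  # hypothetical path, adjust it
        'TIMEOUT': 60 * 60,                 # 60 secs * 60 mins
        'OPTIONS': {'MAX_ENTRIES': 100},
    }
}
```

With a configuration like this in place, `caches['default'].get(key)` from the shell returns what the server process stored.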
answered Apr 3, 2015 at 9:17 by Daniel Roseman
Good point :) I actually did read about the cache not being shared by other processes but it just didn't click. Thanks for pointing it out! – Cheng, Apr 3, 2015 at 9:41
|
|
I would like to know if there is a way to inspect the content stored in the local memory cache from Django shell.
I found this answer: Contents of locmem cache in django?
But it did not work. This is what I have tried so far:
python manage.py shell
>>> from django.core.cache import caches
>>> caches.all()
[]
I found the wonderful plugin Django Debug Toolbar. I can verify from its debug panel that the cache I created does exist and has content in it.
I just want to know how to look at the cached content from Django shell.
Thanks!
Here is how I defined my local memory cache:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'question_cache',
'TIMEOUT': 60 * 60, # 60 secs * 60 mins
'OPTIONS': {
'MAX_ENTRIES': 100
},
}
}
|
Django inspect local memory cache from shell
|
A simple solution is to invert the approach: include baseline files within your application's executable/bundle. Upon first startup, copy them to the application's data directory. Then, whenever you have access to your server, you can update the data directory.
All modifications of the data directory should be atomic - they must either completely succeed, or completely fail, without leaving the data directory in an unusable state.
Typically, you'd create a new, temporary data folder, and copy/hardlink the files there, and download what's needed, and only once everything checks out you'd swap the old data directory with the new one.
Letting your application access QML and similar resources directly online is pretty much impossible to get right, unless you insist on explicitly versioning all the resources and having the version numbers in the url.
Suppose your application was started and has loaded some resources. There is no guarantee that the user has visited all the QML screens - thus only some resources will be loaded. QML also makes no guarantees as to how often and when the resources will be reloaded: it maintains its own caches, after all. Suppose you then update the contents on the server. The user proceeds to explore more of the application after you've made the changes, but now the application they experience is a Frankenstein of older and newer pieces, with no guarantee that these pieces are still meant to work together. It's a bad idea.
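The staged-swap idea above can be sketched in a few lines (Python here purely for illustration, not Qt API; `atomic_update` and the flat data-directory layout are assumptions for brevity):

```python
import os
import shutil
import tempfile

def atomic_update(data_dir, new_files):
    """Replace data_dir with an updated copy, all-or-nothing.

    new_files maps file names to their new contents (a flat directory
    is assumed for brevity). The update is staged in a sibling
    directory and swapped in with renames, so a reader sees either the
    complete old tree or the complete new one, never a half-written mix.
    """
    data_dir = os.path.abspath(data_dir)
    staging = tempfile.mkdtemp(dir=os.path.dirname(data_dir), prefix='data-new-')
    try:
        # 1. Start from the current contents, if any.
        if os.path.isdir(data_dir):
            for name in os.listdir(data_dir):
                shutil.copy2(os.path.join(data_dir, name), staging)
        # 2. Apply the downloaded/updated files on top.
        for name, content in new_files.items():
            with open(os.path.join(staging, name), 'w') as f:
                f.write(content)
        # 3. Swap: move the old tree aside, move the staged tree into place.
        backup = data_dir + '.old'
        shutil.rmtree(backup, ignore_errors=True)  # leftover from a past crash
        if os.path.isdir(data_dir):
            os.rename(data_dir, backup)
        os.rename(staging, data_dir)
        shutil.rmtree(backup, ignore_errors=True)
    except Exception:
        # On failure, discard the staging area instead of leaving debris.
        shutil.rmtree(staging, ignore_errors=True)
        raise
```

A real deployment would also fsync and guard the (tiny) window between the two renames, but the stage-then-rename shape is the core of the approach.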
answered Mar 29, 2015 at 17:06 by Kuba hasn't forgotten Monica
|
|
I'm trying to make one of my QML apps "offline capable" - that means I want users to be able to use the application when not connected to the internet.
The main problem I'm seeing is the fact that I'm pretty much pulling a QML file with the UI from one of my HTTP servers, allowing me to keep the bulk of the code within reach and easily updatable.
My "main QML file" obviously has external dependencies, such as fonts (using FontLoader), images (using Image) and other QML components (using Loader).
AFAIK all those resources are loaded through the Qt networking stack, so I'm wondering what I'll have to do to make all resources available when offline without having to download them all manually to the device.
Is it possible to do this by tweaking existing/implementing my own cache at Qt/C++ level or am I totally on the wrong track?
Thanks!
|
Possible to make QML application "offline capable" using caches?
|
You could clone the already existing element. That way all data and image data are preserved and don't need to be fetched anew. Also, this method is a gazillion times faster than adding the new HTML via a string.
function startthefun() {
var imgElem = document.createElement('img');
imgElem.setAttribute('src','http://cdn.sstatic.net/stackoverflow/img/sprites.svg?v=1bc6a0c03b68');
document.getElementById('content').appendChild(imgElem);
window.timer = window.setInterval(function() {document.body.appendChild(imgElem.cloneNode());},1000);
}
<input type="button" value="start" onclick="startthefun()"><input type="button" onclick="window.clearInterval(window.timer)" value="stahp"><div id="content"></div>
You can simply feed the HTMLElement into $(selector).append(HTMLElement); jQuery will append the HTML node to your selected item.
|
I am developing a chat application and it appends data like below:
$(selector).append("<img src='image.php?id=username'><div id ='msg'>hi</div>");
So each time the same image is appended, it requests the image URL again and again.
How do I stop the image from being re-fetched on append? If the image URL has already been loaded, I want it to use the cached image.
|
Stop Image load on append 2nd time
|
Rails automatically caches SQL queries when it encounters the same query again; see the reference on SQL Caching. If you still want to cache the results yourself, you can use low-level caching. Key-based caching would do the work.
|
I have three tables - Countries, USRegions and Regions. For each country, Regions holds a particular administrative division of that country's territory. For the US these are the 50 states and Washington, D.C. So again:
Countries - USA, Germany, Japan etc.
USRegions - Pacific, Middle Atlantic, South Atlantic (a total of 9 rows)
Regions - Virginia, Texas etc., but also similar divisions for other countries, like regional cities for Poland etc.
In one of the views I have a form where the user can select certain US regions and certain states. In order to display their names, I make two calls to the database: one for all rows of the USRegions table, and one for all rows of the Regions table with country_id = USA.id. The thing is, the result of these calls will always be the same (unless there are new states, which is very unlikely, although possible). So is there a way to cache these calls, and if yes - how and where?
|
How to cache sql queries, the result of which never change?
|
Despite the fact that the question is off-topic, I'd use Redis, as it is fast and simple. You only need a key-value store for keeping session information, not a fully grown directory server. LDAP might be fast at getting information out, but you would need to put the information in first and update it on a regular basis, which isn't what LDAP was designed for.
answered Feb 6, 2015 at 4:58 by heiglandreas
|
|
I am using OpenLDAP to store users. Where should I keep user session details? We need to track a few user details for a session. What is the ideal place to store user sessions?
Thank you...
|
What is ideal place for storing user sessions openldap or redis
|
Since you want to cache templates and resolve them immediately if they are already available, you need to make sure the API is consistent in both cases. That means your function should implement some sort of deferred or promise interface. For example:
var getTemplate = (function(jQuery) {
var templates = {
test: {
url: 'templates/test.html'
}
};
return function(templateId) {
if (templates[templateId].html) {
return jQuery.when(templates[templateId].html);
}
return jQuery.ajax({
url: templates[templateId].url
})
.then(function(response) {
templates[templateId].html = response;
return response;
});
}
})(jQuery);
Then usage would be:
getTemplate('test').done(function(html) {
console.log(html);
});
Demo: http://plnkr.co/edit/jutnHSQDAA4HLnmS0DqR?p=preview
|
Let's say I have a function to get templates. If a template has never been used before, the function should request it via AJAX and then return it; if the template has been used, it should be returned from the cache.
Sample code:
var getTemplate = (function(jQuery){
//full template obj = {url: '', html: ''}
var templates = {test: {url: '/templates/test.html'}};
function getTemplate(templateId){
if(templates[templateId].html){
return templates[templateId];
}
jQuery.ajax({
method: "get",
url: templates[templateId].url
}).success(function(respond){
templates[templateId].html = respond;
});
//no idea what next...
return getTemplate
}
})(jQuery);
Sample use:
var template = getTemplate('test') should always return the /templates/test.html content
I don't want to use async: false or any framework. I want to learn how to do that ;)
|
Memoizing Javascript function that gets its data using asynchronous Ajax
|
Take a look at Spring cache http://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html
You can annotate your repository methods:
@Cacheable(value="customer", key="#prsonalNum")
public Customer findCustomer(String prsonalNum) {
...
}
The application will enter the method body only the first time. After that the value will be taken from the cache.
You can also evict the cache when you update some customer for example.
@CacheEvict(value="customer", allEntries=true)
public void addCustomer(Customer cust)
|
In my application I have a list of customers and users. I would like to fetch these lists only on start, and then use the locally stored data rather than hitting the DB each time.
Can you advise me on some methods, with examples?
I am thinking about the HttpSession object, but I am not sure whether it is OK,
because this data should be available only for the logged-in user who accesses it on start.
The list of customers will be available on each page of the application!
|
Spring framework. How to store some data to prevent each time access db
|
<</MaxPatternCache 0>> setsystemparams
Assuming that your interpreter doesn't have a password protecting the system parameters, and that it honours this system parameter.
See appendix C of the 3rd edition PLRM, especially section "C.3.3 Other Caches". You will need to consider Forms as well.
answered Jan 11, 2015 at 14:26 by KenS
Do you know of an interpreter that will respect that parameter? I tried it with Ghostscript, but it keeps caching them anyway, even between separate fill operations. – AJMansfield, Jan 11, 2015 at 18:17
Well, I presume Adobe's CPSI does; Global Graphics' JAWS RIP also does. I have no real idea about others. If you mean a 'free open source' PostScript interpreter, then I don't think there are any except Ghostscript these days (Luser Droog has one, though I'm not sure it's complete enough to deal with patterns). You cannot disable the pattern cache in Ghostscript; it's fundamental to the operation of the graphics library. – KenS, Jan 12, 2015 at 10:32
You may want to consider writing a Ghostscript device, instead of hacking the language, which would give you access to the objects instead of trying to extract them in this rather slow and painful way. Finally, you could redefine makepattern and setpattern to store the pattern definition yourself, and then redefine setcolorspace to tile the pattern over the area yourself. How are you planning to deal with images and shadings? – KenS, Jan 12, 2015 at 10:34
|
|
I am writing a program that will convert a postscript file to a simpler sequence of points that I can send to a plotter I am building. I am doing this by running a bit of header code that replaces the painting operations with operations that print points to stdout, which my main control program will then use:
/stroke { gsave
matrix defaultmatrix setmatrix
flattenpath
/str 20 string def
{(m ) print 2 copy exch str cvs print ( ) print =}
{(l ) print exch str cvs print ( ) print =}
{6 {pop} repeat (error) =} % should never happen...
{(l ) print exch str cvs print ( ) print =}
pathforall
grestore
stroke
} bind def
/fill {
gsave stroke grestore fill
} bind def
As a side note, I really wish postscript had a printf command, like 1 1 add (1+1=%d) printf.
In order for this to work with fonts, I disabled font caching by setting the cache limit to 0, with 0 setcachelimit. Otherwise, the PostScript interpreter will not invoke the painting operations for subsequent uses of cached objects. I would rather have disabled font caching by redefining setcachedevice and setcachedevice2, but those operators also have to handle some character-metric stuff, not just the caching.
User paths can also be cached, and I was able to disable that caching by redefining ucache via /ucache {} def and by setting the cache limit to 0.
However, there does not seem to be a command for configuring the pattern cache parameters (patterns do not need to explicitly request caching), and even if there were, I would need to force it to invoke the painting operations for each and every pattern cell, even within the same fill operation. How can I disable pattern caching?
|
Disable pattern caching
|
You cannot use this decorator as-is in this case, but you can write your own that wraps it, like:
from functools import wraps
from django.utils.decorators import available_attrs
def my_super_cached_page(func):
@wraps(func, assigned=available_attrs(func))
def wrapper(request, *args, **kwargs):
cached = cache_page(60 * 15, key_prefix=request.session['test'])(func)
return cached(request, *args, **kwargs)
return wrapper
I didn't test it; I just wrote it to show you the idea. Hope this helps.
|
I have a middleware with process_request that decides which view version should be calculated for A/B testing a page:
request.session[test] = bool(getrandbits(1)) (randomly choose True/False)
I want to be able to cache two versions of the view, dependent on the the request.session[test] session variable.
something like this (which doesn't work):
@cache_page(60 * 15, key_prefix=request.session[test])
def view(request):
...
Is it possible to make the cache decorator session-dependent?
(P.S. In the real code I'm using a mix of 4 A/B tests on 4 different views, so it's actually 16 different caching keys and 64 versions of views, not only 2 - and that's the motivation to solve this at the view level.)
|
In django, can per-view cache decorator be session dependent? (for A/B testings)
|
config.Cache(x => x.Not.UseSecondLevelCache());
solved my problem. It removed all the log messages and unnecessary CPU cycles. This is via Fluent; if you are using XML configuration, the following may be needed:
<property name="cache.use_second_level_cache">
false
</property>
|
var config =
Fluently
.Configure()
.Database(MsSqlConfiguration.MsSql2008
.IsolationLevel(IsolationLevel.ReadCommitted)
.ConnectionString(connectionString)
.DefaultSchema(defaultSchema)
.FormatSql())
.ExposeConfiguration
(
c => c.SetProperty("current_session_context_class", sessionContext)
);
if (secondLevelCacheSettings.UseSecondLevelCache)
{
if (secondLevelCacheSettings.CacheType == SecondLevelCacheSettings.SecondLevelCacheType.Memcached)
{
config.Cache(c => c.ProviderClass<MemCacheProvider>().UseQueryCache())
.ExposeConfiguration(c => c.SetProperty("expiration",
secondLevelCacheSettings.CacheExpirationMinutes.ToString()));
}
if (secondLevelCacheSettings.CacheType == SecondLevelCacheSettings.SecondLevelCacheType.HashTable)
{
config.Cache(c => c.ProviderClass<HashtableCacheProvider>().UseQueryCache());
}
}
When I don't want to use the second-level cache, I want to completely disable it. It seems the default configuration is using FakeCache. How do I disable FakeCache as well?
Also in the logs I see,
09-04 14:14:02,088 WARN Second-level cache is enabled in a class, but no cache provider was selected. Fake cache used. - [4] NHibernate.Cache.NoCacheProvider [(null)]
It seems the second-level cache is enabled by default even though we did not configure it. What is the cleaner way to disable it?
|
Nhibernate Disable Second Level Cache with Fluent - Disable Fake Cache also
|
The easiest way to do this is to monkey-patch the cache_fragment_name method so that skip_digest is true by default. To use the MD5 digest when you do need it, you would just set skip_digest to false:
module ActionView
module Helpers
module CacheHelper
def cache_fragment_name(name = {}, options = nil)
skip_digest = options && !options[:skip_digest].nil? ? options[:skip_digest] : true
if skip_digest
name
else
fragment_name_with_digest(name)
end
end
end
end
end
|
I'm in the process of migrating a Rails 3 app to Rails 4. The migration was mostly fairly smooth, but one big problem I'm encountering is that my old Rails 3 code to expire my caches isn't working. I'm getting logs like:
Expire fragment views/localhost:3000/cardsets/36?action_suffix=edityes (0.0ms)
...
Read fragment views/localhost:3000/cardsets/36?action_suffix=edityes/d8034b6e68ba30b5916a2ebb73b68ffe (0.0ms)
This turns out to be because Rails 4 brings a new funky kind of caching, cache digests. That long string of hex on the end is the md5 digest of some view that Rails wants to associate with this cache fragment.
I believe I have no need for cache digests. My application gets updated pretty infrequently, and generally I can clear the cache when it's updated, so the concept of a cache fragment that refers to a previous deployment's version of a piece of my view code is irrelevant.
I see that I can modify any given call to cache with the :skip_digest => true flag. This blog post refers to modifying a large number of their cache calls to add :skip_digest. But I believe I want to apply this flag to every single call to cache in my application. Surely there must be some way to just disable cache digests universally?
|
Disable cache digests in Rails 4
|
By default, while loading templates, Angular looks inside $templateCache and fetches templates over XHR when it cannot find them locally in its $templateCache.
When the XHR request is slow or our template is large enough, it can seriously degrade the end user's experience.
Instead of fetching templates over XHR, we can prime the template cache by wrapping the templates in a JavaScript file and shipping that file along with the rest of the application.
Sample:
angular.module('myApp')
.run(['$templateCache', function($templateCache) {
$templateCache.put('xyz.html', ...);
}]);
So, unless you are very sure you want to remove the cached templates from $templateCache, don't do that.
|
While profiling a large AngularJS app, I started tracking $templateCache.
https://docs.angularjs.org/api/ng/service/$templateCache
What is this object? Does it store the partials as they are stored on disk or the final rendered HTML from the route?
I've added some code like this to my controller to make sure it's not persisting anything:
$scope.$on('$destroy', function cleanup() {
$templateCache.removeAll();
});
EDIT: the reason I am looking into this is that a single (not every) controller is loading a large partial with several loops inside of it, depending on User input it could be about 100 input fields with large <select> statements and many custom directives. The client is requesting it be this way and I want to make sure that all of this is getting disposed during routing.
To clarify the question: $templateCache is only storing the partials as they appear on disk, no routing or rendering information is being stored?
|
AngularJS: what is $templateCache?
|
You need to copy all files to the shared hosting, except the app/cache and app/logs dirs. Then, in app/config/parameters.yml, change the database connection to the new one.
Then set the permissions on the app/cache and app/logs dirs to 777.
That's all.
P.S. And don't forget to check the PHP version on the hosting; better to use 5.4 or 5.5.
|
I'm trying to deploy a Symfony2 application to shared hosting, and everyone knows that nobody can run any command on that type of hosting, so this is what I did in order to get things done, after reading the docs on the Symfony.com site.
All of this was done on my development server:
Run the command php composer.phar install --no-dev --optimize-autoloader
Run the command php app/console assetic:dump --env=prod --no-debug
Then, after all was done, I copied the entire folder to my shared hosting, but now I'm getting this error:
Fatal error: Uncaught exception 'UnexpectedValueException' with
message 'The stream or file
"/var/www/html/tanane/app/../var/logs/prod.log" could not be opened:
failed to open stream: No such file or directory' in
/home/tanane72/public_html/tanane/var/cache/prod/classes.php:5014
Stack trace: #0
/home/tanane72/public_html/tanane/var/cache/prod/classes.php(4958):
Monolog\Handler\StreamHandler->write(Array) #1
/home/tanane72/public_html/tanane/var/cache/prod/classes.php(4883):
Monolog\Handler\AbstractProcessingHandler->handle(Array) #2
/home/tanane72/public_html/tanane/var/cache/prod/classes.php(5083):
Monolog\Handler\AbstractHandler->handleBatch(Array) #3
/home/tanane72/public_html/tanane/var/cache/prod/classes.php(5388):
Monolog\Handler\FingersCrossedHandler->handle(Array) #4
/home/tanane72/public_html/tanane/var/cache/prod/classes.php(5488):
Monolog\Logger->addRecord(500, 'Uncaught PHP Ex...', Array) #5
/home/tanane72/public_html/tanane/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/EventListener/ExceptionListene
in /home/tanane72/public_html/tanane/var/cache/prod/classes.php on
line 5014
The file prod.log exists on the shared hosting, but Symfony is looking at my development server path, as this line shows: /var/www/html/tanane/app/../var/logs/prod.log. Where can I change that behavior? How does the process work, so I can change this path to the current one on the shared hosting?
|
Deploy Symfony2 app in shared hosting without run any command
|
Some controllers in the Nop.Web project use MemoryCacheManager. These controllers with static cache (MemoryCacheManager) are defined in the \Nop.Web\Infrastructure\DependencyRegistrar.cs file. In this file we define which one of ICacheManager implementations should be injected.
But I would recommend you use the same approach that is used in nopCommerce: use events. Subscribe to the product insert/delete/update events in the \Nop.Web\Infrastructure\Cache\ModelCacheEventConsumer.cs file and reset the cache there. Just see how it's already done in that file.
answered Jul 16, 2014 at 9:58 by Andrei M
Thanks @and.maz. I have changed the DependencyRegistrar for the ImportManager service with this statement: builder.RegisterType<ImportManager>().As<IImportManager>().WithParameter(ResolvedParameter.ForNamed<ICacheManager>("nop_cache_static")).InstancePerHttpRequest(); as I am writing a method to clear the cache in ImportManager. Would that be OK? – Nitin Varpe, Jul 16, 2014 at 10:33
The current MemoryCache is set for 60 minutes; can I keep it forever until I remove it manually? Of course, I am going to remove it every day. – Nitin Varpe, Jul 16, 2014 at 11:13
|
|
I am working on NopCommerce 2.40. I want to set a cache on the home page using CacheManager.
var cacheModel = _cacheManager.Get(cacheKey, () =>
{
var model = new HomePageProductsModel()
{
....
....
}
return model;
});
When I debug this code, it hits the Get method in MemoryCacheManager. Now, on the admin side, when I want to remove this cache by key after I update any product, the Remove method called below hits Remove of PerRequestCacheManager.
_cacheManager.Remove(string.Format("product.hometemplate-{0}-{1}", storeid, true));
So this cache on the home page is not removed. Is there any solution to this?
|
Nopcommerce PerRequestCacheManager vs MemoryCacheManager
|
With passivation, an entry is EITHER in memory (activated) OR in the cache store (passivated). Therefore, no, it won't.
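If restart survival is the goal, the usual alternative is to disable passivation so that every entry is written through to the file store. A sketch against the same Infinispan 6.0 schema used in the question (treat it as an assumption to verify, not a drop-in fix):

```xml
<namedCache name="MyCache">
    <persistence passivation="false">
        <!-- Write-through: entries live in memory AND in the store,
             so they are still on disk after a JVM restart, as long as
             the store is not purged on startup. -->
        <singleFile fetchPersistentState="true"
                    ignoreModifications="false"
                    purgeOnStartup="false"/>
    </persistence>
</namedCache>
```

Note that maxEntries was dropped here on purpose: a bounded store can evict older entries from disk, which would defeat the goal of recovering all {n} entries.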
answered Jun 20, 2014 at 6:26 by Radim Vansa
So it's not at all possible to recover after JVM restarts? Can we utilize a shutdown hook?
– Dhananjay
Jun 20, 2014 at 7:44
If you talk about shutdown hooks, you probably mean graceful shutdowns - after calling cacheManager.stop() etc. In that case, yes, entries are passivated to disk correctly. I rather had in mind the non-graceful ones, such as a JVM crash or the machine losing power.
– Radim Vansa
Jun 24, 2014 at 7:21
|
|
I am using infinispan 6.0.1 release, I have configured it to use SingleFileStore as loader
Configuration is as below
<namedCache name="MyCache">
    <persistence passivation="true">
        <singleFile fetchPersistentState="true"
                    ignoreModifications="false"
                    purgeOnStartup="false" maxEntries="5000">
        </singleFile>
    </persistence>
</namedCache>
My question is, will this cache survive JVM restarts?
I mean lets say my cache is holding {n} entries and my jvm goes down.
When JVM starts again will my cache initialized with {n} entries?
Thanks in advance!!
|
Infinispan singleFileStore cache restartability
|
Starting with SQL Server 2005 (and therefore applicable to SQL Server 2008 R2), SqlCacheDependency works by using the query change notification mechanism. It uses a notification infrastructure and messaging system that's built into the database, called the Service Broker.
SQL Server 2000 and earlier versions employed the polling mechanism.
You may be interested in the further reading suggested below:
Jesse Liberty, author of Programming ASP.NET (O'Reilly Media), says:
There is NO need to configure the database with aspnet_regsql.exe and there is NO need to add a <sqlCacheDependency> element in your web.config in case you use SQL Server 2005 or later with the query notification mechanism.
MSDN also says: this configuration setting, i.e. <sqlCacheDependency>, has no effect when you use the sqlCacheDependency element in conjunction with query notifications on SQL Server 2005. This means that setting pollTime will have NO effect when using query notifications.
answered Jun 16, 2014 at 7:27 by R.C
Thanks! Is there a special flag or setting which I should set to make sqlCacheDependency work in conjunction with query notifications?
– Ramesh
Jun 16, 2014 at 7:41
Yes, there are some steps to be done for enabling query notifications. check this URL: srikanthtechnologies.com/blog/dotnet/sqlcachedependency.aspx
– R.C
Jun 16, 2014 at 7:52
|
|
My need is to build a simple configuration framework on top of a key-value table. As this is frequently read and rarely changed, I would prefer to cache the table values. One requirement is that if a value is changed in the DB, it should be reflected immediately in the app. So I planned to implement SqlCacheDependency. The doc says:
The query notification mechanism of SQL Server 2005 detects changes
to data that invalidate the results of an SQL query and removes any
cached items associated with the SQL query from the
System.Web.Caching.Cache
From the sample I noticed there is a property in the config called PollTime. Doc says
Gets or sets the frequency with which the SqlCacheDependency polls the
database table for changes.
I am confused here on whether it uses the Query Notification technique or it uses Polling mechanism.
My stack is .NET 4.0 and SQL Server 2008 R2.
|
Does SqlCacheDependency use polling or query notification?
|
This worked for me. Open the list.phtml file and locate this code (around line 133):
<img id="product-collection-image-<?php echo $_product->getId(); ?>"
src="<?php echo $this->helper('catalog/image')->init($_product, 'small_image')->resize(236,193); ?>"
alt="<?php echo $this->stripTags($this->getImageLabel($_product, 'small_image'), null, true) ?>" />
Replace it with this code:
<img width="236" height="193" id="product-collection-image-<?php echo $_product->getId(); ?>"
src="<?php echo Mage::getModel('catalog/product_media_config')->getMediaUrl( $_product->getSmallImage()); ?>"
alt="<?php echo $this->stripTags($this->getImageLabel($_product, 'small_image'), null, true) ?>" />
This is only for grid mode; you can make the same change for list mode. On the product details page (media.phtml), change only the src attribute: src="<?php echo Mage::getModel('catalog/product_media_config')->getMediaUrl( $_product->getSmallImage()); ?>"
answered Jun 10, 2015 at 9:40 by Randhir Yadav
|
|
All my Magento product images are served from the cache URL. How do I disable that and make my product images use the original URL?
I have tried the code below in my /public_html/dirname/app/code/core/Mage/Catalog/Helper/image.php file, but it doesn't work.
Mage::getModel('catalog/product_media_config')->getMediaUrl($_product->getImage());
Where exactly do I need to use the code? Or please suggest some other solution to overcome this issue.
|
How to remove the cache url for product images in magento
|
This did the trick... it's not perfect according to my own question, though, as it ignores ALL query params, not just the utm ones. When I actually need a non-utm value that changes the content, I will have to revisit this regex:
sub vcl_recv {
set req.url = regsub(req.url, "\?.*", "");
}
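Should that day come, the fix is to strip only the utm_* parameters instead of everything after the `?`. The following Python sketch shows the regex logic (plain Python here so it can be tested; the same substitutions translate to Varnish's regsuball), assuming utm keys consist of lowercase letters:

```python
import re

def normalize_for_cache(url):
    # Drop only utm_* query parameters; any other parameter
    # (e.g. variation=5) survives and still varies the cache key.
    url = re.sub(r"(?:(?<=\?)|(?<=&))utm_[a-z]+=[^&]*&?", "", url)
    # Tidy a dangling "?" or "&" left behind after the removal.
    return re.sub(r"[?&]$", "", url)
```

With this, `?utm_source=google&variation=5` normalizes to `?variation=5`, which is exactly the MISS-on-variation, HIT-on-utm behaviour the question asks for.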
|
Can I 'ignore' query string variables before pulling matching objects from the cache, but not actually remove them from the URL to the end-user?
For example, all the marketing utm_source, utm_campaign, utm_* values don't change the content of the page, they just vary a lot from campaign to campaign and are used by all of our client-side tracking.
So this also means that the URL can't change on the client side, but it should somehow be 'normalized' in the cache.
Essentially I want all of these...
http://example.com/page/?utm_source=google
http://example.com/page/?utm_source=facebook&utm_content=123
http://example.com/page/?utm_campaign=usa
... to all access HIT the cache for http://example.com/page/
However, this URL would cause a MISS (because the param is not a utm_* param)
http://example.com/page/?utm_source=google&variation=5
Would trigger the cache for
http://example.com/page/?variation=5
Also, keeping in mind that the URL the user sees must remain the same, I can't redirect to something without params or any kind of solution like that.
|
Ignore utm_* values with varnish?
|
GAE should work fine with the 10m value.
Most probably this happened because you were signed in with your Google admin account; GAE returns no-cache for such accounts.
Try opening the same page in an incognito window and it will return proper cache expiry times.
By default GAE sets the cache to 10 minutes, so even if you didn't set any expiry you should see 10 minutes instead of no-cache.
answered Nov 3, 2016 at 15:56 by bumbu (edited Jan 11, 2017)
|
|
In my GAE app I am serving static content as follows (those are my entries in my app.yaml file):
handlers:
- url: /css
static_dir: static/css
expiration: "10m"
- url: /js
static_dir: static/js
expiration: "10m"
Despite the information available here: https://developers.google.com/appengine/docs/python/config/appconfig#expiration the content is never cached in the browser, regardless of whether I use the dev server or upload my app.
I am using Chrome and the request header is:
cache-control:max-age=0
and the response headers are:
cache-control:no-cache, must-revalidate
pragma:no-cache
server:Google Frontend
status:304 Not Modified
As per some answers I was able to find, I tested this both with being logged in and out into my google admin account and nothing changes.
Any help on this would be greatly appreciated. Many thanks!
Response headers I get when logged out of my admin account:
date:Fri, 25 Apr 2014 09:54:44 GMT
etag:"lhoIow"
server:Google Frontend
status:304 Not Modified
version:HTTP/1.1
|
unable to set cache expiration on in app.yaml for a python app
|
I use this library, which is just perfect:
SDWebImage
You just need to #import <SDWebImage/UIImageView+WebCache.h> into your project. You can also define a placeholder to show while the image is being downloaded, with just this code:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
static NSString *MyIdentifier = @"MyIdentifier";
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier];
if (cell == nil)
{
cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
reuseIdentifier:MyIdentifier] autorelease];
}
// Here we use the new provided setImageWithURL: method to load the web image
[cell.imageView setImageWithURL:[NSURL URLWithString:@"http://www.domain.com/path/to/image.jpg"]
placeholderImage:[UIImage imageNamed:@"placeholder.png"]];
cell.textLabel.text = @"My Text";
return cell;
}
It also caches downloaded images and gives you great performance.
Hope it will help you!
|
I have a UITableView with a custom cell containing two UIImageViews. The logo images are taken from an online website, which is why the images need to be cached. Loading the images is currently done like this:
NSURL * imageURL = [NSURL URLWithString:[arra1 objectAtIndex:indexPath.row / 2]];
NSData * imageData = [NSData dataWithContentsOfURL:imageURL];
NSURL * imageURL2 = [NSURL URLWithString:[arra2 objectAtIndex:indexPath.row / 2]];
NSData * imageData2 = [NSData dataWithContentsOfURL:imageURL2];
cell.ima1.image = [UIImage imageWithData:imageData];
cell.ima2.image2 = [UIImage imageWithData:imageData2];
What I learned from searching is that dataWithContentsOfURL is not asynchronous, so scrolling will take a lot of time. I tried several methods but I can't seem to get to the right one. This is my first time caching UIImages; I would highly appreciate a detailed explanation with an implementation, so I can learn aside from getting the job done.
Many Thanks
|
Cache UIImage using uitableview from url
|
You're on the right track. Services in AngularJS are singletons, so using one to cache your $http request is fine. If you want to expose several functions in your service, I would do something like this. I used the $q promise/deferred service in AngularJS to handle the asynchronous HTTP request.
appServices.service("eventListService", function($http, $q) {
    var eventListCache;

    var get = function (callback) {
        $http({method: "GET", url: "/events.json"})
            .success(function(data, status) {
                eventListCache = data;
                callback(eventListCache);
            });
    };

    return {
        getEventList : function(callback) {
            if (eventListCache && eventListCache.length > 0) {
                return callback(eventListCache);
            } else {
                var deferred = $q.defer();
                get(function(data) {
                    deferred.resolve(data);
                });
                deferred.promise.then(function(res) {
                    return callback(res);
                });
            }
        },
        getSpecificEvent: function(id, callback) {
            // Same as in getEventList(), but with a filter or sorting of the array
            // ...
            // return callback(....);
        }
    };
});
Now, in your controller, all you have to do is this:
appControllers.controller("EventListCtrl", ["$scope", "eventListService",
function ($scope, eventListService) {
// First time your controller runs, it will send http-request, second time it
// will use the cached variable
eventListService.getEventList(function(eventlist) {
$scope.myEventList = eventlist;
});
eventListService.getSpecificEvent($scope.someEventID, function(event) {
// This one is cached, and fetched from local variable in service
$scope.mySpecificEvent = event;
});
}
]);
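Stripped of the AngularJS plumbing, the pattern above is just memoizing the first fetch. A tiny language-neutral sketch in Python (the `fetch` argument is a stand-in for the HTTP GET of /events.json) shows that the remote call happens once and the per-event lookup filters the cached list:

```python
_event_cache = None

def get_event_list(fetch):
    # fetch() stands in for the HTTP GET of /events.json.
    global _event_cache
    if _event_cache is None:
        _event_cache = fetch()
    return _event_cache

def get_specific_event(event_id, fetch):
    # Served from the same cached list; no extra request is made.
    return next((e for e in get_event_list(fetch) if e["id"] == event_id), None)
```

Whatever the framework, the key design point is the same: callers never touch the cache directly, so the "fetch once, filter locally" policy lives in one place.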
|
I'm new to AngularJS and am still trying to wrap my head around using services to pull data into my application.
I am looking for a way to cache the result of a $http.get() which will be a JSON array. In this case, it is a static list of events:
[{ id: 1, name: "First Event"}, { id: 2, name: "Second Event"},...]
I have a service that I am trying to use to cache these results:
appServices.service("eventListService", function($http) {
var eventListCache;
this.get = function (ignoreCache) {
if (ignoreCache || !eventListCache) {
eventListCache = $http.get("/events.json", {cache: true});
}
return eventListCache;
}
});
Now from what I can understand I am returning a "promise" from the $http.get function, which in my controller I add in a success callback:
appControllers.controller("EventListCtrl", ["$scope", "eventListService",
function ($scope, eventListService) {
eventListService.get().success(function (data) { $scope.events = data; });
}
]);
This is working fine for me. What I'd like to do is add an event to the eventListService to pull out a specific event object from eventListCache.
appServices.service("eventListService", function($http) {
var eventListCache;
this.get = function (ignoreCache) { ... }
//added
this.getEvent = function (id) {
//TODO: add some sort of call to this.get() in order to make sure the
//eventListCache is there... stumped
}
});
I do not know if this is the best way to approach caching or if this is a stupid thing to do, but I am trying to get a single object from an array that may or may not be cached. OR maybe I'm supposed to call the original event and pull the object out of the resulting array in the controller.
|
Caching an array in an angularjs Service
|
Well, as you say you want to cache the entire content of your underlying target resource, you have to cache its byte[] taken from the inputStream.
Since <mvc:resources> is backed by ResourceHttpRequestHandler, nothing stops you from writing your own subclass and using it directly instead of that custom tag.
Implement your caching logic within the overridden writeContent method:
public class CacheableResourceHttpRequestHandler extends ResourceHttpRequestHandler {

    // ConcurrentHashMap: this handler instance is shared across request threads
    private final Map<URL, byte[]> cache = new ConcurrentHashMap<URL, byte[]>();

    @Override
    protected void writeContent(HttpServletResponse response, Resource resource) throws IOException {
        byte[] content = this.cache.get(resource.getURL());
        if (content == null) {
            content = StreamUtils.copyToByteArray(resource.getInputStream());
            this.cache.put(resource.getURL(), content);
        }
        StreamUtils.copy(content, response.getOutputStream());
    }
}
And use it from spring config as generic bean:
<bean id="staticResources" class="com.my.proj.web.CacheableResourceHttpRequestHandler">
<property name="locations" value="/public-resources/"/>
</bean>
<bean id="urlMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
<property name="mappings">
<value>/resources/**=staticResources</value>
</property>
</bean>
|
Suppose I have bound all urls to Spring dispatcher servlet, and set up some .css and .js directories with <mvc:resources> in mvc Spring namespace.
Can I cache those static Spring resources in memory to avoid hitting disk on user request?
(Note that I am not asking about HTTP caching like Not Modified responses, and I also don't mean Tomcat static file caching or setting up another web server in front of the Java web server; I'm asking about a pure Spring solution)
|
Is there a way to cache Spring <mvc:resources> files in memory?
|
You could create your own command to do that, i.e. write a program to do it. For example, you could do something like:
while (1) {
    if ((fp = fopen("/proc/sys/vm/drop_caches", "w")) == NULL) { /* open for writing, not reading */
        /* error handler */
    }
    fprintf(fp, "3\n");
    fclose(fp);
    nanosleep(...); /* See nanosleep(2) */
}
in your program, and make it a daemon.
|
I'm working on a system measurement project which requires me to drop the cache periodically to get accurate numbers (since a warm cache would alter the results). Currently I can drop the cache manually using:
echo 3 > /proc/sys/vm/drop_caches
However, I want it to automatically drop every microsecond. What command should I use?
|
How to drop memory cache periodically in Linux?
|
The correct enum for the cache policy on iOS 7 is as described below:
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:downloadURL
cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData timeoutInterval:60];
If you are on 3G, some providers use caching even if you disable it in your NSMutableURLRequest, so if the cache policy doesn't work, set the Cache-Control HTTP header field to no-cache:
[request setValue:@"no-cache" forHTTPHeaderField:@"cache-control"];
Here is the enum list; check your NSURLRequest.h header for the correct, up-to-date enum :)
enum
{
NSURLRequestUseProtocolCachePolicy = 0,
NSURLRequestReloadIgnoringLocalCacheData = 1,
NSURLRequestReloadIgnoringLocalAndRemoteCacheData = 4, // Unimplemented
NSURLRequestReloadIgnoringCacheData = NSURLRequestReloadIgnoringLocalCacheData,
NSURLRequestReturnCacheDataElseLoad = 2,
NSURLRequestReturnCacheDataDontLoad = 3,
NSURLRequestReloadRevalidatingCacheData = 5, // Unimplemented
};
typedef NSUInteger NSURLRequestCachePolicy;
answered Jul 13, 2014 at 7:44 by Benpaper (edited Apr 24, 2015 by Esha)
|
|
I am working on an iPhone app project which implements some connections using NSURLRequest with cachePolicy:NSURLCacheStorageNotAllowed (I am using iOS 7).
But it seems the responses are still cached and I get old responses for the same URL call, in spite of having the cache policy set to NSURLCacheStorageNotAllowed.
Why is it still caching the responses? Is the issue still present in the latest release?
|
NSURLCacheStorageNotAllowed still caching for NSURLRequest
|
Well, according to this EHCache page, distributed EHCache is a commercial product. So if you want to use a free distributed cache, you need something different, like Memcached or Redis.
answered Feb 22, 2014 at 16:44 by Marius Soutier
|
|
I want to cluster EHCache across several nodes in a Play Framework 2.x web application. Why does everyone recommend avoiding EHCache as a distributed cache in a clustered Play 2.x web application?
I use an nginx proxy to distribute requests across the Play nodes, and I want the default EHCache of each node to share its content.
|
Why everyone recommend to avoid use EHCache as a distributed cache in Play 2.x?
|
The P-flag in your RewriteRule causes the request to be proxied to the internal server using mod_proxy. mod_proxy by itself does not cache content. The caching is probably a result of mod_cache being enabled as well on the server. The settings you need to disable caching for your internal server can unfortunately only be done in server or virtual-host config. The solution would be to add what you tried to the configuration of the internal server thus telling mod_cache that it should not cache any response from your internal server:
Using .htaccess
Header set Cache-Control "max-age=0, private, no-store, no-cache, must-revalidate"
or PHP
header('Cache-Control: no-cache, no-store, must-revalidate'); // HTTP 1.1.
header('Pragma: no-cache'); // HTTP 1.0.
header('Expires: 0'); // Proxies.
|
I'm using an htaccess rule to proxy to an internal server, using the answer recommended on this question, "Can ProxyPass and ProxyPassReverse Work in htaccess". I'm using htaccess as that is all I have access to. The method suggested works, but when I make a change on one of the internal pages and reload (from the external server) I don't even see it hitting the internal server, even after clearing the cache on the browser. In fact, if I try to load the page from another browser which never has tried to load the page before, it too gets the old copy.
This suggests something is being cached on the server, but how to change this? The apparent caching is rather annoying as I am trying to fix some issues that only occur on the proxied page.
If I hit the internal server directly and reload after a change, I always get the latest page.
I have tried a <filesMatch ...> rule for the affected pattern (using the same pattern as used in the RewriteRule in the following manner:
<filesMatch "^/?somedir/(.*)$">
Header set Cache-Control "max-age=0, private, no-store, no-cache, must-revalidate"
</filesMatch>
My rewrite rule looks like this, and comes after the filesMatch directive:
RewriteEngine On
RewriteRule ^/?somedir/(.*)$ https://internal.local.net:8000/$1 [L,P]
But this has not had any effect. I have also tried "NoCache *" but this directive causes an error as it is not allowed in an .htaccess file.
|
How to disable caching of a rewrite rule which proxies an internal server?
|
Please follow the steps below:
Step 1: Change the debug level in app/config/core.php:
Configure::write('debug', 3);
Step 2: Run your code/script again.
This should solve your problem: at higher debug levels CakePHP rebuilds the cached model schema instead of reusing the stale copy.
Note: make sure you have added the required field to the db table.
|
I have added one more field to a db table and added a validation rule for it in the model class. But on saving a record, all data gets saved except that new field (note: validation for the new field works). That field is not present in the model object's schema attribute. I cleared the cache directory, and on saving a new record a new cache was created with the new field, but I got an 'internal server error' instead of a successful insertion. May I kindly know where exactly the issue lies?
Thanks
|
Cakephp - How to make model accept changes made in phpmyadmin?
|
Your cache key is the problem (the first argument to #cache). You're telling Rails to identify the cached fragments using the string "#{job}_big_2". To interpolate a Job instance into that string, Ruby will call #to_s on it, resulting in the #<Job:0xbbb61b4> segments you see. But this contains the object_id, which will be different each time the record is loaded - this is why you're not getting any cache hits.
If Job is a descendant of ActiveRecord::Base, you can use the #cache_key method, which will produce a string with the record's record ID and updated_at timestamp to produce a key that is unique to that record, but which will be regenerated if the record is updated:
<% jobs.each_with_index do |job, i| %>
<% cache("#{job.cache_key}_big_2", :expires => 30.minutes) do %>
...
Read fragment views/jobs/1234-20130614084704_big_2 (0.8ms)
...
If Job is not an ActiveRecord::Base model then you need to implement a similar method to generate a cache key that will uniquely identify your Job records.
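To see the shape of the key that makes this work, here is a small Python sketch of what a Rails-style cache key looks like ("<table>/<id>-<updated_at timestamp>"); it mirrors the jobs/1234-20130614084704 portion of the log line quoted in the answer, minus the views/ prefix and _big_2 suffix that the view adds:

```python
from datetime import datetime

def rails_style_cache_key(table, record_id, updated_at):
    # Sketch of the "<table>/<id>-<timestamp>" shape: the key changes
    # whenever updated_at changes, so an updated record naturally
    # misses the old fragment, while reloads of an unchanged record hit.
    return "%s/%d-%s" % (table, record_id, updated_at.strftime("%Y%m%d%H%M%S"))

key = rails_style_cache_key("jobs", 1234, datetime(2013, 6, 14, 8, 47, 4))
```

The crucial property, unlike the object_id in `#<Job:0xbbb6970>`, is that the key is a pure function of persistent record state, so every process and every reload computes the same string.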
|
So I implemented some caching on my page, which runs on Rails 3.3, but when I look into the production log I see this:
Read fragment views/#<Job:0xbbb6970>_big_2 (0.8ms)
Write fragment views/#<Job:0xbbb6970>_big_2 (0.5ms)
Read fragment views/#<Job:0xbbb65d8>_big_2 (0.5ms)
Write fragment views/#<Job:0xbbb65d8>_big_2 (0.6ms)
Read fragment views/#<Job:0xbbb61b4>_big_2 (0.6ms)
Write fragment views/#<Job:0xbbb61b4>_big_2 (0.4ms)
Rendered shared/_jobslist.html.erb (88.5ms)
I am not sure it is supposed to be like this :) As far as I understand it, it should read from the cache every time once the fragment is saved. I specified an expiration of 30 minutes.
<% jobs.each_with_index do |job, i| %>
<% cache("#{job}_big_2", :expires => 30.minutes) do %>
...
|
Ruby on Rails caching with memcache and Dalli doesn't seem to work
|
Cache invalidation is a hard problem to solve. There are at least 3 ways I've seen to solve it. They vary in complexity and in how long an expired record is still considered valid.
The simplest answer is that all front-end servers must call the database to verify the etag before returning "304 Not Modified". This might be best if there are many updates or the cost of downloading a record from the database is high.
If it sometimes okay to send back an old value, then you can set an expiration time on your cached items.
The other option is that when 1 front-end server sees an update, it needs to tell the other front-end servers to expire that cached item (maybe by calling a webservice?). This allows for long cache durations but might be too chatty if there are lots of updates.
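A sketch of the first option in Python (hypothetical helper and stubs): the front-end always asks the database for the current version id before confirming a 304, so a stale per-server cache can never validate outdated data:

```python
def handle_conditional_get(if_none_match, fetch_version, fetch_record):
    # fetch_version() is a cheap query returning only the record's current
    # version id (the ETag source); fetch_record() loads the full record.
    current_etag = fetch_version()
    if if_none_match == current_etag:
        return 304, None, current_etag      # client's copy is still valid
    return 200, fetch_record(), current_etag

# Hypothetical stubs standing in for the customer database:
status, body, etag = handle_conditional_get(
    "v1", lambda: "v2", lambda: {"id": 42, "name": "Alice"})
```

The trade-off is visible in the code: every conditional GET costs one version lookup, but that lookup is far cheaper than transferring the record, and correctness no longer depends on per-server cache freshness.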
answered Nov 12, 2013 at 23:44 by David
you may comment on my answer. :))
– Supun Wijerathne
Nov 30, 2017 at 6:50
|
|
I am trying to understand which would be the best cache strategy in case of a REST API layer that allows to query and update a customer registry database. We currently have 3 fronted servers all speaking with a central database server.
The idea would be to return an etag to the calling client with the etag matching the customer record version id (an hash value that is updated at any change on the account) with update calls accepted only if the received etag matches the version id stored on the database.
Let's suppose that a client performs a GET for a customer record being routed to Server 1 by the load balancer. Server 1 does not have the customer record cached so will query the database, cache the record locally and return the record as response of the call including the etag header.
If a second client arrives and perform the same GET for the same customer record being routed to Server 2, the Server 2 will also cache the entry locally and return the same etag header back.
Let's assume that now the first client has performed an update call against the same record via Server 1. Server 1 cache gets updated with the latest record details and the first client gets back a new etag.
After this, the second client performs a conditional get call providing the "If-None-Match" header set with the received etag. The request will hit again the Server 2. My assumption is that the Server 2 will still have cached the old etag and will return a 304 Not Modified response to the client. Is this a correct assumption?
With this situation, a client would get stale data easily and would impact the overall consistency of the data seen and used on client side.
What would be needed to solve this and ensure that no stale customer record data are returned to clients at any time?
Thanks a lot!
|
Consistency with etags and multi server setup?
|
It's so that the client's browser doesn't use a cached stylesheet when the page is loaded. That number is probably generated programmatically on the server side. It will be different every time the page is loaded, so the browser will identify that stylesheet as different from the one in the cache and hence re-download it.
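A common refinement of the same trick, sketched below with a hypothetical helper in Python, hashes the file contents instead of using a fresh number on every load, so the browser re-downloads only when the stylesheet actually changes:

```python
import hashlib

def busted_href(path, css_bytes):
    # Hypothetical helper: append a short content hash as the query string.
    # The URL (and thus the browser cache entry) changes only when the
    # stylesheet bytes change, unlike a per-request random number.
    digest = hashlib.md5(css_bytes).hexdigest()[:10]
    return "%s?%s" % (path, digest)

href = busted_href("css/contact.css", b"body { color: #333; }")
```

The server-side template would then emit `<link ... href="..." />` with this value, keeping long cache lifetimes safe.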
|
I just encountered a line like this in the head of a (program generated) web page, any ideas what purpose the number and ID serve?
<link rel="stylesheet" type="text/css" href="css/contact.css?4131768016" id="pagesheet"/>
|
unusual link to external style sheet
|
Only one generally correct approach: code, profile, measure, and compare.
For example: do you need to actually transpose the array? Or could it suffice to read it transposed (in which case an iterator will do the trick)? Often when I interact with my favorite enemy (Fortran) I have to "read transposed" because it is column-major.
Play with Eigen, which lets you specify the storage order.
But, again, test and see. It may very well be the case that you are pursuing a red herring, and the difference in performance won't make it worth your while to complicate the code.
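If measurement does show the transpose matters, the classic middle ground between the two access patterns is loop blocking (tiling): copy the matrix tile by tile, so both the reads and the writes of each tile stay within a few cache lines. A minimal Python sketch of the idea (illustrative only; in practice you would write this in C or use a library such as Eigen):

```python
def transpose_blocked(a, rows, cols, block=32):
    # a is a flat row-major list of rows*cols elements; the result is the
    # flat row-major transpose (cols x rows). Processing fixed-size tiles
    # bounds the cache footprint of both the source and the destination.
    out = [0] * (rows * cols)
    for ii in range(0, rows, block):
        for jj in range(0, cols, block):
            for i in range(ii, min(ii + block, rows)):
                for j in range(jj, min(jj + block, cols)):
                    out[j * rows + i] = a[i * cols + j]
    return out
```

Tuning `block` so that two tiles fit comfortably in L1 is exactly the kind of parameter that profiling, not reasoning alone, should settle.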
|
If I have a 1D array that represents the contents of an MxN matrix (where the least significant dimension is contiguous in memory), how do I make the best use of caching when transposing it (to place the contents of the most significant dimension in contiguous memory). This question could be rephrased as follows;
If I have a choice between reading contiguous memory but writing to random access locations or reading from random access locations and writing to contiguous memory, all things being equal, which should I choose?
|
How do I use the cache efficiently when transposing an array?
|
If you're using a machine with coherent caches (most mainstream machines), then cache-line flushes generally aren't required, and the flush directive is unlikely to do anything explicit regarding the cache. In a coherent system, anything written to one core's cache is immediately visible to all other cores.
However the FLUSH directive may act as a memory barrier, or fence, and it also forces the compiler to generate store instructions for values which it might have been storing in registers.
There's a good description of the directive here, including this note:
Q17: Is the !$omp flush directive necessary on a cache coherent system?
A17: Yes the flush directive is necessary. Look in the OpenMP specifications for examples of its uses. The directive is necessary to instruct the compiler that the variable must be written to/read from the memory system, i.e. that the variable can not be kept in a local CPU register over the flush "statement" in your code.
Cache coherency makes certain that if one CPU executes a read or write instruction from/to memory, then all other CPUs in the system will get the same value from that memory address when they access it. All caches will show a coherent value. However, in the OpenMP standard there must be a way to instruct the compiler to actually insert the read/write machine instruction and not postpone it. Keeping a variable in a register in a loop is very common when producing efficient machine language code for a loop.
If you're using a machine with noncoherent caches, you're probably working at a supercomputing facility and should consult local experts familiar with your architecture and toolset.
|
I have been trying to understand the openmp pragmas, does an openmp flush (#pragma omp flush) lead to a cache line flush?
How does that change for an implicit flush?
|
Does an explicit openmp flush lead to a cache line flush?
|
I am using a web service, not a WebView, and finally I found the solution:
HttpGet httpget = new HttpGet("http://example.com/dir/page.php?"+r.nextInt(1000));
Putting a random number as a parameter (here r is a java.util.Random instance) makes each request different, so the response cannot be served from the cache.
Thanks all
|
I'm making a game and made a website to store the scoreboard. When I call the web service, the old scoreboard appears. I tried to open the web service from the Android browser: the browser shows the old scoreboard, and when I refresh, the new one appears.
How can I solve this problem?
This is my app link on the Play Store; I need to update it soon.
My Application
|
Android caches my web service results
|
Although you can use the Cache-Control headers to specify a very high expiration time (maximum is in the year 2038), it will not guarantee that CloudFront will actually cache the files for that long.
As any caching proxy or CDN, each CloudFront location has a limited amount of disk space to use for caching and it would not be feasible to store files that have a very low number of hits for a long time. The amount of time that CloudFront will actually cache a file is completely up to them and you should not rely on this (e.g. remove the source files in the hopes that the edge location will still have a copy).
|
I would like to cache images stored in S3 in Tokyo region to an edge in Korea.
So here's my question.
Say I have many images in S3 and I would like all of them to be cached in the Korea edge. Also, I would like all those cached images to live forever(TTL=forever) unless I update or delete one of them. (You could say replicate S3 in Korea edge)
I would really like to do this because my service is deployed only in Korea for the time being.
In short, is it possible to cache a lot of contents(like 4~500,000 images) with TTL=forever?
|
AWS CloudFront caching contents forever?
|
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
answered Aug 13, 2013 at 20:47 by Konstantin
Ah, I forgot about those meta tags. Are these heavily reliable and safe?
– Chad Johnson
Aug 13, 2013 at 21:46
It looks like this will prevent caching by browsers but not proxies, since proxies rely on HTTP headers, not HTML contents. I may use .php files with header() calls, and then make the rest of the file completely static.
– Chad Johnson
Aug 13, 2013 at 22:07
|
|
I am considering generating .html files for my entire web site as I want my site to be as fast as possible. The files would be generated with dynamic content via a backend service as data updates occur.
How do I ensure users always see the latest content? Say I publish a change to my home page, index.html. How do I prevent these files from always coming from the user's cache and ensure new content, if available, is always retrieved and displayed?
Remember, I am using pure HTML.
If there is absolutely no way, I would not be adverse to using .php files containing HTTP cache-related header() calls prior to the content; e.g.:
<?php header(...) ?>
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
etc.
|
Prevent caching and stale content with a static web site
|
To implement this in iOS app you might want to download this Apple sample or (I prefer) this handy github project
If you choose second way, you just need to create and configure Reachability class instance at startup like this
// disable automatic sending of statistics
[GAI sharedInstance].dispatchInterval = 0;
Reachability* reach = [Reachability reachabilityForLocalWiFi];
reach.reachableBlock = ^(Reachability *reach){
[GAI sharedInstance].dispatchInterval = 20;
// you also may send cached statistics right on WiFi appear
[[GAI sharedInstance] dispatch];
};
reach.unreachableBlock = ^(Reachability *reach) {
// disable automatic upload, this will force GAI to cache all hits
// until you call dispatch or set dispatchInterval
[GAI sharedInstance].dispatchInterval = 0;
};
|
I searched for similar posts here but found no answer, so mark this duplicate if there is one I didn't find.
So my question is: I'm using Google Analytics on both Android and iOS, and both face the same problem. I wanted GA to send tracking data only when WiFi is available, which can easily be done by checking the network status. But now I want GA to cache the events/screens when no WiFi is available and send them later when WiFi becomes available.
The documentation says the cache is only written when no internet connection is available, so what I actually want to ask is: is there a way to force GA to cache the tracking data even when 3G is available? I didn't find such a method in the SDK.
Please help.
thanks.
|
Google Analytics for iOS send tracking only when WiFi is available, otherwise to Cache
|
Because by slicing the queryset in the class definition, you've evaluated it then and there - at class definition time, ie when the server starts up. So the cache is being refreshed, but only with an old set of items. Don't do that slice at class level: do it when returning the results from get_queryset.
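A minimal Python sketch of the corrected pattern. The dict-based cache and the `DB` list here are stand-ins for Django's cache API and the `Advert` queryset; the point is simply that the slice happens inside the function, at request time:

```python
import time

class FakeCache:
    """Stand-in for django.core.cache.cache: get/set with a timeout."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        value, expires = self._store.get(key, (None, 0.0))
        return value if time.time() < expires else None
    def set(self, key, value, timeout):
        self._store[key] = (value, time.time() + timeout)

cache = FakeCache()
INDEX_LIST_CACHE_KEY = "index_list_cache_key"
DB = list(range(100))  # stands in for Advert.objects.all()

def get_queryset():
    queryset = cache.get(INDEX_LIST_CACHE_KEY)
    if queryset is None:
        # Slice here, at request time, so every cache refresh re-evaluates
        # the query -- not once at class-definition (server startup) time.
        queryset = DB[:25]
        cache.set(INDEX_LIST_CACHE_KEY, queryset, 2 * 60)
    return queryset
```

Once the slice moves inside `get_queryset`, each cache miss sees the current data instead of the snapshot taken at startup.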
|
I've added simple caching to my web application, and when I delete or add a new object the cache does not get refreshed after the period of time (2 minutes) that I've set.
It looks like it froze. When I restart my application, it gets refreshed.
I tried it with memcached and locmemcache.
INDEX_LIST_CACHE_KEY = "index_list_cache_key"
class IndexView(BaseView):
queryset = Advert.objects.all().select_related('category', 'location')[:25]
template_name = "adverts/category_view.html"
def get_queryset(self):
queryset = cache.get(INDEX_LIST_CACHE_KEY)
if queryset is None:
queryset = self.queryset
cache.set(INDEX_LIST_CACHE_KEY, queryset, 2 * 60)
return queryset
Why caching behaves like that in this project?
Edit - settings.py:
for locmemcache
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'oglos-cache'
}
}
for memcached
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
|
Django cache does not get refreshed
|
From best to worst:
APC is in-memory and very fast; it's serialized and unserialized automatically for you.
memcached is in-memory too, and a bit slower than APC. This is more than compensated by the fact that it allows to use the same cache across servers.
unserialize(file_get_contents()) involves hitting the disk, but is faster than parsing php. It's an OK option if you don't have APC, memcached, or equivalent in-memory caching.
var_export() to create a php file that you then include is slower than unserializing a string because the file needs to be parsed -- in addition to hitting the disk. The plus side is that it allows to easily edit the array if you ever need to.
serialize() into a variable held in a php file offers the worst of each: a disk hit, parsing of php and unserializing the data.
(There might also be something to be said about having proper indexes in your database. Fetching 200 rows to build an array shouldn't be slow.)
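The same ranking translated to Python purely for illustration: an in-process dict plays the role of APC, and pickling to a file plays the role of `unserialize(file_get_contents())`. Data and paths are made up:

```python
import os
import pickle
import tempfile

data = {"key%d" % i: i for i in range(200)}  # the ~200-entry site-data array

# In-memory cache (~apc_store/apc_fetch): no disk hit, no parsing on fetch.
memory_cache = {}
memory_cache["site_data"] = data

# File cache (~unserialize(file_get_contents())): one disk hit to read back.
path = os.path.join(tempfile.mkdtemp(), "site_data.cache")
with open(path, "wb") as f:
    pickle.dump(data, f)
with open(path, "rb") as f:
    from_disk = pickle.load(f)
```

Note that the memory cache hands back the very same object, while the file round-trip produces an equal but distinct copy, which is part of why it costs more.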
|
Let's say we have a PHP array with ~ 200 keys containing site data, globally shared for all users.
This array is constructed from an SQL database, which takes too long. We want to store this array.
What's the difference (mainly in speed) between storing the array with apc_store() or serializing it and saving to a .php file on a disk, then retrieving by either apc_fetch() or file_get_contents() and unserialize?
Which would be faster? Why not use the file? Why use the cache?
EDIT One reason to use a file instead of a cache (for me) is that I can access the file from CLI/shell/root with CRON.
|
Difference between using apc_cache and storing to a file?
|
Regardless of the application stack you are using, I don't think a caching approach scales in your situation. Twitter-like functionality is often handled by denormalization.
In your situation, this could mean implementing a feed model for each user and appending new posts from the people they follow, so that a user's timeline loads quickly from their own feed instead of joining all of their (possibly thousands of) friends.
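A sketch of that fan-out-on-write idea, using the A–E example from the question (A subscribes to B and C, B to C and D, and so on); Python is used only as illustration:

```python
from collections import defaultdict

# Follower lists derived from the question's subscriptions:
# followers[author] = users whose feeds receive author's posts.
followers = {"B": ["A"], "C": ["A", "B"], "D": ["B", "C"], "E": ["C", "D"]}

feeds = defaultdict(list)  # user -> post ids, oldest first

def publish(author, post_id):
    """Fan out: append the new post to every follower's materialized feed."""
    for follower in followers.get(author, []):
        feeds[follower].append(post_id)

publish("C", 1)  # C posts: shows up for A and B
publish("D", 2)  # D posts: shows up for B and C
```

Reading a timeline is then a single lookup (`feeds[user]`), at the cost of doing the work at write time.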
|
In a scenario where there are users that have posts, and each user has a view representing a news feed (much like with a logged in Tumblr account), and each post overview has a link to the comments with a comment counter per post, what is the best caching strategy here (On a Rails 4 stack)?
Assuming 5 users, A B C D E, with each being subscribed to the 2 users on their right (A is subscribed to B and C, B is subscribed to C and D etc.) and only having the users they've subscribed to showing up on their news feed view.
Edit:
Assume a fan-out-on-write approach is taken, where each user has a unique set (of post ids) in Redis, and on every post create, the id of the new post is appended to every of the post creator's friends' sets. The redis sets act as an index and a user's feed is fetched via a single SQL query.
Bearing this in mind, caching each feed should be a matter of this approach:
Check set in redis (first hit)
write @feed_array to memcached
fetch posts with single SQL command and save to @feed
write @feed to memcached
Check set in redis (second hit)
If set values match @feed_array then return @feed from memcached. Otherwise new SQL query and override @feed in memcached
This approach would mean easy cache use for the views when iterating through the @post divs, but how would one handle the comment counts?
|
Rails caching techniques for a personalized news feed
|
Benefits of caching are related to the number of times you need the cached item and the cost of getting the cached item. Your status table, even though only 10 rows long, can be "costly" to get if you have to run a query every time: establish connection, if needed, execute a query, pass data over the network, etc. If used frequently enough, the benefits could add up and be significant. Say, you need to check some status 1000 times a second or every website request, you have saved 1000 queries and your database can do something more useful and your network is not loaded with chatter. For your web server, the cost of retrieving the item from cache is usually minimal (unless you're caching tens of thousands or hundreds of thousands of items). So pulling something from the cache will be quicker than querying a database almost every time. If your database is the bottleneck of your system (which is the case in a lot of systems) then caching definitely is useful.
But bottom line is, it is hard to say yes or no without running benchmarks or knowing the details of how you're using the data. I just highlighted some of the things to consider.
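As an illustration of that trade-off, here is a tiny TTL cache for a rarely-changing status table; the loader function is a stand-in for the SQL query in the question:

```python
import time

def load_statuses():
    """Stand-in for: SELECT StatusId, StatusName FROM Status WHERE Active = 1."""
    return {1: "Open", 2: "Closed"}

_cache = {"value": None, "expires": 0.0}

def get_statuses(ttl=60.0):
    """Pay the 'database' cost at most once per ttl seconds; every other
    call within the window is a cheap in-memory lookup."""
    now = time.time()
    if _cache["value"] is None or now >= _cache["expires"]:
        _cache["value"] = load_statuses()
        _cache["expires"] = now + ttl
    return _cache["value"]
```

With a short TTL, the multi-server synchronization problem largely disappears: each server's copy is at most `ttl` seconds stale, which is often acceptable for a static lookup table.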
– Tombala, Jun 18, 2013
|
|
Closed. This question is opinion-based. It is not currently accepting answers. Closed 10 years ago.
I've inherited a system where data from a SQL RDBMS that is unlikely to change is cached on a web server.
Is this a good idea? I understand the logic of it - I don't need to query the database for this data with every request because it doesn't change, so just keep it in memory and save a database call. But, I can't help but think this doesn't really give me anything. The SQL is basic. For example:
SELECT StatusId, StatusName FROM Status WHERE Active = 1
This gives me fewer than 10 records. My database is located in the same data center as my web server. Modern databases are designed to store and recall data. Is my application cache really that much more efficient than the database call?
The problem comes when I have a server farm and have to come up with a way to keep the caches in sync between the servers. Maybe I'm underestimating the cost of a database call. Is the performance benefit gained from keeping data in memory worth the complexity of keeping each server's cache synchronized when the data does change?
|
Is application caching in a server farm worth the complexity? [closed]
|
Perhaps you may want to send the request as POST instead of GET, which won't use Joomla caching.
ajax.js
var userToken = document.getElementsByTagName("input")[0].name;
var request = new Request.JSON({
url: 'index.php?task=ajaxFunction&'+ userToken +'=1',
onException: function(headerName, value) {
// etc.
},
onComplete: function(res) {
// etc.
}
}).post({});
controller
JRequest::checkToken('get') or die( 'Invalid Token!' );
Stick this at the top of your template file (before all other input tags); it will create a hidden input field containing a token, which will be replaced with the non-cached one on render:
tmpl/default.php
<?= JHtml::_('form.token'); ?>
|
I'm using Joomla for a project, and there's some Ajax requests happening to populate data. I generate a Joomla session token in the PHP view, and tack this onto the URL of the Ajax request endpoint, which is also a PHP page, and validates the token before returning data.
Something like this:
// view.html.php
$script = "var ajaxurl = 'index.php?task=ajaxFunction&".JFactory::getSession()->getFormToken()."=1';";
$document->addScriptDeclaration($script);
// ajax.js
var request = new Request.JSON({
url: ajaxurl,
onException: function(headerName, value) {
// etc.
}
});
// controller
public function ajaxfunction()
{
JRequest::checkToken('get') or die( 'Invalid Token!' );
// do other stuff
}
This works just fine until caching is enabled.
The problem is that the view.html.php file, when Joomla uses its internal caching, is cached with the token already set-- so anytime a browser requests the page, it pulls the cached token along with it, meaning the controller will return an invalid Token error.
I know in earlier Joomla builds caching flat out didn't work. Is there a way to make this work in Joomla 2.5+, short of just disabling the Joomla cache? I can't find any way to exclude a single view from caching.
|
Session tokens in PHP being cached by Joomla
|
File and session caches are very similar, since sessions are also written to files, but sessions are more practical and easier to use. My preference is a memory cache, like the MySQL MEMORY engine or APCu; just try it once.
|
Session or file cache: which one is better to use?
For example, when a user logs in, I want to save some data like username, password, id, details, etc. until they log out.
I can save this data serialized in some file, and I can also save it in the session.
What should I do?
|
FileCache vs Session in php
|
You can use the already available Universal Image Loader library.
More info here : https://github.com/nostra13/Android-Universal-Image-Loader
|
Closed as not a good fit for the Q&A format. Closed 10 years ago.
I am developing an application which has a list view with images. I planned to use LruCache to cache the images. Before implementing it, I just want to know whether there is a more efficient way to do this (the list view is more or less like a Facebook news feed screen showing images, comments, titles, etc.).
I need some suggestions on implementing this efficiently.
|
how to cache the image for a android list view [closed]
|
By using the store, the client has a local cache that they can use. This cache gives them a performance boost and decreases the load on your own server.
In your case, I think it makes sense to have sensitive pages sent with no caching.
I believe another technical problem with no-store (and this is more of a weird side effect) is that older versions of IE have problems with the Content-Disposition header with caching turned off. The behavior is such that the download prompt will indefinitely have 0% progress.
One misconception about no-caching policies is that the browser will actually honor it and not save it to disk. This is not true - many modern browsers actually cache all responses to disk (See this SO). However, this cache is encrypted in those cases.
Overall, I think its safe to do so. Make sure you're not relying on this mechanism as @Robert Harvy says, once you send it over, you're at the mercy of the browser of how it wants to save it.
– badunk, May 15, 2013
Thanks. Your first paragraph is what I figured, but its tough to measure. In our app, there is a LOT of sensitive information and caching is very useful. We are hoping we can keep the caching and avoid the disk storage in a balanced way. And yes, I am clear this is not a complete security fix. If a bad person has access to the users account, they could replace the browser with something that would do with the data whatever they please, for example. I am trying to make it more complicated to find this information in the circumstance. The security of the machine cannot be guaranteed.
– John Y.
May 15, 2013 at 18:02
|
|
We want to "prevent the inadvertent release or retention of sensitive information (for example, on backup tapes :) )" and plan to use the HTTP header Cache-control: no-store. What are the down-sides of doing so? From the spec, it appears caching will continue to operate - it just cannot use non-volatile storage. In order to choose which responses to specify no-store on, we have some measure of "sensitivity." What is the counterbalancing measure we we should use - in other words, why not mark all pages no-store?
|
What are the drawbacks of using cache-control: no-store?
|
My thought is that if you write ENOUGH data, there simply won't be enough memory to cache it all, and thus SOME data must be written to disk.
You can also, if you want to make sure that small writes to your file works, try writing ANOTHER large file (either from the same process or a different one - for example, you could start a process like dd if=/dev/zero of=myfile.dat bs=4k count=some_large_number) to force other data to fill the cache.
Another "trick" may be to "chew up" some (more like most) of the RAM in the system - just allocate a large lump of memory, then write to some small part of it at a time - for example, an array of integers, where you write to every 256th entry of the array in a loop, moving to one step forward each time - that way, you walk through ALL of the memory quickly, and since you are writing continuously to all of it, the memory will have to be resident. [I used this technique to simulate a "busy" virtual machine when running VM tests].
The other option is of course to nobble the caching system itself in OS/filesystem driver, but I would be very worried about doing that - it will almost certainly slow the system down to a slow crawl, and unless there is an existing option to disable it, you may find it hard to do accurately/correctly/reliably.
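A rough rendering of the "chew up memory" trick (in Python only for illustration; sizes are kept tiny here, and on a real system you would scale `size_bytes` toward the machine's RAM):

```python
def touch_pages(size_bytes, stride=4096, passes=2):
    """Allocate a buffer and write one byte per page-sized stride so every
    page stays resident and dirty, squeezing the OS page cache."""
    buf = bytearray(size_bytes)
    for _ in range(passes):
        for i in range(0, size_bytes, stride):
            buf[i] = (buf[i] + 1) & 0xFF  # a write keeps the page resident
    return buf

buf = touch_pages(1 << 20)  # 1 MiB, purely for demonstration
```

The stride of 4096 matches a common page size; writing one byte per page is enough to force the whole page into physical memory without wasting time on the other 4095 bytes.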
|
I have a program that is used to exercise several disk units in a raid configuration. 1 process synchronously (O_SYNC) writes random data to a file using write(). It then puts the name of the directory into a shared-memory queue, where a 2nd process is waiting for the queue to have entries to read the data back into memory using read().
The problem that I can't seem to overcome is that when the 2nd process attempts to read the data back into memory, none of the disk units show read accesses. The program has code to check whether or not the data read back in is equal to the code that is written to disk, and the data always matches.
My question is, how can I make the OS (IBM i) not buffer the data when it is written to disk so that the read() system call accesses the data on the disk rather than in cache? I am doing simple throughput calculations and the read() operations are always 10+ times faster than the write operations.
I have tried using the O_DIRECT flag, but cannot seem to get the data to write to the file. It could have to do with setting up the correct aligned buffers. I have also tried the posix_fadvise(fd, offset,len, POSIX_FADV_DONTNEED) system call.
I have read through this similar question but haven't found a solution. I can provide code if it would be helpful.
|
How to prevent C read() from reading from cache
|
http://localhost:8080/?1234445 and http://localhost:8080?1234445 are exactly the same, according to the BNF for URIs. There's nothing to worry about.
Edit
You really should be using something more ...effective... for cache busting. Highlights of that post:
The URL to solve a cache problem is not the way to go. The URL should represent a method to access the content, and nothing more.
[...] with some sensical headers, you will be just fine.
<?php
header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past
?>
The never-ending redirect
What I think is happening, and have been able to reproduce with EasyPHP 5.4.6 (PHP 5.4.6) on Win7, is that the header call responsible for the redirect is unguarded, so the redirect chain proceeds as follows:
http://localhost:8080
http://localhost:8080?1367490108
http://localhost:8080?1367490108
http://localhost:8080?1367490109
http://localhost:8080?1367490109
http://localhost:8080?1367490109
http://localhost:8080?1367490110
http://localhost:8080?1367490110
http://localhost:8080?1367490110
http://localhost:8080?1367490111
http://localhost:8080?1367490111
http://localhost:8080?1367490111
http://localhost:8080?1367490112
and so on in a never-ending tail-recursion of redirects [until Firefox smells a rat]. [Info: the times in the querystrings of the above chain of redirects are more-or-less hypothetical. Your average client+connection+server combination can redirect at more than three per second, and Firefox aborts after one-and-a-half seconds of redirect loop.]
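The loop disappears once the redirect is guarded: redirect only when the cache-buster query string is missing. A minimal sketch of that decision (the timestamp is fixed here for illustration):

```python
from urllib.parse import urlparse

def redirect_target(url, now=1367490108):
    """Return a cache-busted redirect URL, or None if the URL already
    carries a query string (already busted: just serve the page)."""
    if urlparse(url).query:
        return None
    return "%s?%d" % (url, now)
```

In the PHP version this corresponds to checking `$_SERVER['QUERY_STRING']` before calling `header('Location: ...')`.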
|
I am trying to make a simple PHP cache buster, but I have a little problem: if I do
header('Location: http://localhost:8080?'.time()); then my url will be http://localhost:8080/?1234445, however what I need is to have it like http://localhost:8080?1234445 without the trailing slash.
Please help me with this, I'm quite new to PHP.
|
Cache buster in PHP
|
If you have multiple servers running, then an AppFabric-style caching system will help a lot. The built-in ASP.NET caching system is also good if you are going to use a single server. One of the most important factors will be the amount of RAM you have: the more RAM you add, the faster your website will be. Even memcached can be used (Is there a port of memcache to .Net?). Sites like Stack Overflow have around 380 GB of RAM on the database server, which makes them lightning fast. SQL Server is likewise designed to keep data in RAM for fast access.
– Devesh, Apr 16, 2013
|
|
Our site, developed in ASP.NET, is expecting 1,000 concurrent users, and its performance degrades as the number of listings in the DB increases. What kind of caching (like NCache or AppFabric) would be best to reduce the load on the DB and increase performance for more concurrent users? Please give me some suggestions.
|
ASP .NET Application Caching
|
maxAge is a value in milliseconds, which in your case seems quite low (1800, which is 1.8s). The resources might be expiring from the cache before you even get the chance of reloading them, so they would seem to never get cached.
|
I've spent over an hour trying to get Express to cache static files in production. Is there something I'm doing wrong? All of the headers come back 200 on the first request and 304 on subsequent requests. I've even tried pasting the code into the main app.configure and pasting code straight from the Express documentation.
Request URL:http://localhost:3000/javascripts/jquery.min.js
Request Method:GET
Status Code:304 Not Modified
Request Headersview source
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Cache-Control:max-age=0
// Generated by CoffeeScript 1.3.3
(function() {
var app, express, fs, http, path;
express = require('express');
http = require('http');
path = require('path');
fs = require('fs');
app = express();
app.configure(function() {
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.use(express.favicon());
app.use(express.logger('dev'));
app.use(express.bodyParser());
app.use(express.methodOverride());
app.use(express.compress());
return app.use(require('less-middleware')({
src: __dirname + '/public'
}));
});
app.configure('development', function() {
app.use(express["static"](__dirname + '/public'));
app.use(app.router);
app.use(express.errorHandler());
return console.log("Hello from dev");
});
app.configure('production', function() {
app.use(express["static"](__dirname + '/public', {maxAge: 1800}));
app.use(app.router);
return console.log("Hello from prod");
});
app.get('/', function(req, res) {
.......
|
Express isn't setting max-age headers
|
It depends what you mean by robust clustering.
If you need a solution robust enough to support storage (and not just caching), or if you consider that you cannot afford to lose your cached data, then twemproxy (or any other proxy solution) is not enough. This is true for both Memcached and Redis. For this kind of clustering requirement, you will be better served by things like Couchbase, Infinispan, or even Riak and Cassandra.
Now if you need a caching solution with automatic sharding, consistent hashing, and proper management of server eviction (or black-listing), then twemproxy works fine with both Redis and Memcached. Performance of twemproxy is excellent.
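For intuition, the consistent hashing such a proxy performs can be sketched in a few lines (Python purely for illustration; real deployments add virtual nodes per server and configurable hash functions):

```python
import bisect
import hashlib

class Ring:
    """Map keys to cache nodes on a hash ring; removing one node only
    remaps the keys that hashed to that node, not the whole keyspace."""
    def __init__(self, nodes):
        self._points = sorted((self._hash(n), n) for n in nodes)
        self._hashes = [h for h, _ in self._points]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First node clockwise from the key's position on the ring.
        i = bisect.bisect(self._hashes, self._hash(key)) % len(self._points)
        return self._points[i][1]

ring = Ring(["cache-a:11211", "cache-b:11211", "cache-c:11211"])
```

The node names here are hypothetical; the point is only that key placement is deterministic and mostly stable under membership changes.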
|
So referencing this question, the top post states that the biggest concern with Redis is its scalability in terms of robust clustering. I was wondering if using Redis with Twemproxy, an open-source proxy developed by Twitter for memcached and Redis, would alleviate that problem to the point where my main cache can just be Redis.
|
Redis with Twemproxy vs memcached
|
As the documentation page you provided, you can render one controller within another using:
{{ render_esi(controller('YourBundle:Default:news', { 'max': 5 })) }}
You can also use a route name instead of the controller reference:
{{ render_esi(url('latest_news', { 'max': 5 })) }}
However, you will need to set up a gateway cache for ESI to work.
|
From the documentation, there is no example of how to render a template inside another template using ESI. Is it possible to do that?
For example, I have a template index.html.php and I want to render the form.html.php template with ESI. How do I do that?
|
Symfony2.2 render ESI template
|
You sound like you're describing Message Queues
There's MSMQ, ZeroMQ, RabbitMQ, and a couple others.
Here's a bunch of links on MSMQ:
http://www.techrepublic.com/article/use-microsoft-message-queuing-in-c-for-inter-process-communication/6170794
http://support.microsoft.com/KB/815811
http://www.csharphelp.com/2007/06/msmq-your-reliable-asynchronous-message-processing/
http://msdn.microsoft.com/en-us/library/ms973816.aspx
http://www.c-sharpcorner.com/UploadFile/rajkpt/101262007012217AM/1.aspx
Here's ZeroMQ (or 0MQ)
And here's RabbitMQ
– JerKimball, Mar 2, 2013
I had a read about MSMQ, but it seems rather complicated for the simple application I'm developing. MSMQ requires you to run a service, which I don't think any of my clients would have.
– user1034912
Mar 2, 2013 at 9:51
Big data, high throughput, near zero response time != simple. What exactly are you making?
– JerKimball
Mar 2, 2013 at 15:48
|
|
I am developing an application in C# which handles a large stream of incoming and outgoing data through a queue-like buffer. The buffer needs to be some sort of file on disk. Data will be written to the buffer very often (I'm talking once every 10 ms!). Each write will make one record/line in the buffer.
The same program will also read the buffer (line by line) and process the buffered data. After a line/record has been processed, it must immediately be deleted from the buffer file to prevent it from being reprocessed in the event of a system reboot. This read-and-delete will also happen at a very fast rate (every 10 ms or so).
So it's a system which writes, reads, and purges what has been read. It gets even harder, as this buffer may grow up to 5 GB (GIGABYTES) in size if the program decides not to process the buffered data.
**So my question is: what kind of method should I use to handle this buffering mechanism? I had a look at using SQLite and a simple text file, but they may be inefficient at handling large sizes, or not so good at handling concurrent inserts, reads, and deletes.
Anyway, I'd really appreciate any advice. Thank you in advance for any answers!**
|
What is the most efficient and reliable way to operate a critical streaming buffer in C#?
|
You can cache the bitmaps from a remote service using a Disk Cache, there is more information about how to do this on the Google developer site http://developer.android.com/training/displaying-bitmaps/cache-bitmap.html
This will allow you to store the images and display immediately if there is no connection, or you want to load the images whilst loading the remote images.
Depending on what text you need to store you can associate the text with the images in the cache or alternatively set up an ArrayList with the data and store to disk. Some more details here Best Way to Cache Data in Android
Also there are tools around to ensure you are making the most of your network connections, such as the AT&T ARO tool, running this will help you to optimize your app by reducing your network calls to a minimum. See http://developer.att.com/aro
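The read-through idea behind such a disk cache, sketched language-neutrally in Python (the fetch function and URLs are hypothetical; on Android the same shape applies when backed by DiskLruCache):

```python
import hashlib
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()

def cached_fetch(url, fetch):
    """fetch(url) returns bytes when online and raises OSError when
    offline; fall back to the last stored copy in that case."""
    path = os.path.join(CACHE_DIR, hashlib.sha1(url.encode()).hexdigest())
    try:
        data = fetch(url)
        with open(path, "wb") as f:  # refresh the local copy on success
            f.write(data)
        return data
    except OSError:
        with open(path, "rb") as f:  # offline: serve the cached copy
            return f.read()
```

The "synchronize" button from the question simply becomes: call `cached_fetch` for every product URL while online, refreshing all the local copies at once.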
– Rod Burns, Mar 6, 2013
|
|
I'm developing a part of an app where the application is supposed to read product images and prices from online storage (a website to be built for this purpose only) and keep local copies of the product images and prices so that it can show them when offline. There will be a button; once pressed, its job is to synchronize the local cache. How could I implement this? Any help would be highly appreciated.
Thanks in advance.
|
How to cache web content for offline mode?
|
I suggest you use PHP Speedy, or this may help:
<?php
function getUrl () {
if (isset($_SERVER['REQUEST_URI'])) {
$url = $_SERVER['REQUEST_URI'];
} else {
$url = $_SERVER['SCRIPT_NAME'];
$url .= (!empty($_SERVER['QUERY_STRING'])) ? '?' . $_SERVER['QUERY_STRING'] : '';
}
return $url;
}
//getUrl gets the queried page with query string
function cache ($buffer) { //page's content is $buffer
$url = getUrl();
$filename = md5($url) . '.cache';
$data = time() . '¦' . $buffer;
$filew = fopen("cache/" . $filename, 'w');
fwrite($filew, $data);
fclose($filew);
return $buffer;
}
function display () {
$url = getUrl();
$filename = md5($url) . '.cache';
if (!file_exists("cache/" . $filename)) {
return false;
}
$filer = fopen("cache/" . $filename, 'r');
$data = fread($filer, filesize("cache/" . $filename));
fclose($filer);
$content = explode('¦', $data, 2);
if (count($content)!= 2 OR !is_numeric($content['0'])) {
return false;
}
if (time()-(100) > $content['0']) { // 100 is the cache time here!!!
return false;
}
echo $content['1'];
die();
}
// Display cache (if any)
display(); // if it is displayed, die function will end the program here.
// if no cache, callback cache
ob_start ('cache');
?>
Just include this script anywhere you need caching and set a cron job for running it automated.
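The cron-driven warming boils down to requesting each slow page on a schedule so the first real visitor never pays the cost. A sketch (the fetch function stands in for an HTTP GET; the 17:00 cron entry would just run a script like this daily, or simply `curl` the pages):

```python
def warm(urls, fetch):
    """Request each page so its cache file is rebuilt before visitors
    arrive; fetch(url) returns an HTTP status code."""
    warmed = []
    for url in urls:
        try:
            if fetch(url) == 200:
                warmed.append(url)
        except OSError:
            pass  # leave it cold; the next scheduled run retries
    return warmed
```

Because the caching layer in the question writes the cache file as a side effect of rendering the page, warming requires nothing more than issuing the request.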
|
I have been using a basic caching system on my site based on this link.
It has so far worked well for everything I want to do.
$cachefile = 'cache/'. basename($_SERVER['QUERY_STRING']) . '.html';
$cachetime = 1440 * 60;
if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
include($cachefile);
echo "<!-- Cached ".date('jS F Y H:i', filemtime($cachefile))." -->";
exit;
}
ob_start();
// My html/php code here
$fp = fopen($cachefile, 'w'); // open the cache file for writing
fwrite($fp, ob_get_contents()); // save the contents of output buffer to the file
fclose($fp); // close
ob_end_flush(); // Send to browser
However, I have a couple of pages with more detailed MySQL queries. I have spent a fair bit of time optimising them; still, they take about 10 seconds to run when I query MySQL directly, and even longer on the website. Sometimes they even seem to time out, as I get the message below.
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET http://www.example.com
Reason: Error reading from remote server
This isn't a huge issue as because I am using the caching system above only the first person to click on it for the day gets the delay and the rest of the time the users get the cached page so it is actually quite fast for them.
I want to save myself from having to be the first person each day to go to the page and automate this process so at 17:00 (on the server) each day the file gets written to the cache.
How would I best achieve this?
|
Automatically create cache file with php
|
Before starting the operation GetDocSummary() you could get the current value from the cache and store that value (or a Hash from this value, if memory might be an issue) in a local variable.
After the operation is finished, before adding the data to the cache, compare the local variable with the current cached value.
Adding some sample, based on the comments, using a Tuple:
var cacheEntry = new Tuple<List<Doc>, DateTime>(ds, DateTime.Now);
After executing GetDocSummary(), get your entry from the cache and see if the date falls between starting and ending this operation.
|
I am inserting a List into data cache in ASP.Net as shown in sample code below. I am using C# as the programming language.
I am not able to find a solution to following problem:
A long database operation which could last from .1 s to 20 s is started.
At the end of this operation, I need to determine if the Cache by the name of 'dSummary' has been re-inserted or refreshed from database since the long operation began (i.e. while the operation was in progress).
The code I use to insert cache item is as below:
List<Doc> ds = GetDocSummary();
System.Web.HttpContext.Current.Cache.Insert("dSummary", ds, null,
DateTime.Now.AddMinutes(15), System.Web.Caching.Cache.NoSlidingExpiration);
UPDATE 1:
After getting so many replies, I think a possible foolproof approach would be to store a guid string along with the List object in Cache. Then I could simply compare the guid string property of the object stored in Cache before and after the long operation. If they are the same then Cache["dSummary"] has not been re-inserted, else it has been re-inserted.
So I would need a small wrapper class holding the list of 'Doc' objects together with the identifier:
public class DocX
{
    public List<Doc> Docs { get; set; }
    public string UniqueIdentifier { get; set; }

    public DocX(List<Doc> docs, string uniqueIdentifier)
    {
        this.Docs = docs;
        this.UniqueIdentifier = uniqueIdentifier;
    }
}
I would then insert into cache using following code:
List<Doc> ds = GetDocSummary();
System.Web.HttpContext.Current.Cache.Insert("dSummary", new DocX(ds,
    Guid.NewGuid().ToString()), null,
    DateTime.Now.AddMinutes(15), System.Web.Caching.Cache.NoSlidingExpiration);
|
Challenging to determine if Cache for data has been changed in ASP.Net since a long operation began
|
You can only do this if the device is rooted and your application has super user rights.
|
I've been reading around here about clearing another application's cache memory, and I've also tried coding my own app. The result I got is that, with Android's current security layer, it's not possible.
But there are currently many cache-cleaner applications out there on the Market (Google Play)?
When I started my application which I gave the android.permission.DELETE_CACHE_FILES permission, the LogCat printed
Not granting permission android.permission.DELETE_CACHE_FILES to package <MY_PACKAGE_NAME> (protectionLevel=3 flags=0x8be46)
After some research I found out that 3rd-party apps are not granted permissions with protectionLevel=3, so I encounter a java.lang.SecurityException whenever I try to delete another application's cache (logically).
My question is therefore: "How are these applications on Google Play permitted and able to delete other applications' caches?"
Sorry for my bad English, not a native speaker
|
Clear another applications cache
|
As far as I know, you can force a browser to reload the data by means of these meta tags:
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Cache-control" content="no-cache">
<meta http-equiv="Expires" content="0">
but you cannot force it to read from cache. The browser itself will do that for you if you don't explicitly specify to ignore the cache, and the page data are in fact cached and not expired.
This does not depend on CodeIgniter because it's client-side, but you might want to use the meta() function included in CI's html helper, which will simply output the corresponding meta tag, e.g.:
echo meta('Cache-control', 'no-cache', 'http-equiv');
would generate the second code line above.
Note:
The 1st meta tag is specified for HTTP/1.0 while the 2nd one is for HTTP/1.1, but both are used to allow backwards compatibility.
If you're using xhtml instead of html remember to close the meta tags with />
|
Every time I do a search on this, I get information about how to disable the browser cache.
Never anything about enabling it.
How do I get the back button to use the cache and not regenerate the page?
|
How do I correctly ENABLE browser caching using codeigniter?
|
Thanks for the question! You can set any HTTP header in an HTTP response.
For instance:
onRequest(HttpRequest request, HttpResponse response) {
...
response.headers.add("Cache-Control", "max-age=3600");
...
}
If you want more sophisticated handling, such as respecting ETags or If-Modified-Since, you'll probably have to add it yourself. In general, it makes sense to proxy the Dart HTTP server behind a server such as Nginx or Apache, and then have that server take care of serving all of your static files.
|
So, I've noticed that using Dart's built-in HttpServer class tends to make the client request every file every time.
On Apache, it is possible to tell the client to cache a file for a maximum length of time -- does Dart support this feature to lighten the load on HttpServer?
|
Dart's HttpServer and client-side caching
|
Caching is just about always worth it. Pulling results from APC's in-memory user cache vs. establishing a DB connection and running queries makes a massive difference--especially if you're doing 25 queries per page!
The benefits will compound:
Pulling from memory, you'll serve up requests faster by requiring less overhead
You'll free up DB connections
You'll free up Apache processes faster
All of which will help serve requests faster...
|
I have a PHP script that runs very simple queries on a MySQL database (up to a maximum of 25 times per page). Is it going to be worth caching the results (e.g. using APC), or is the performance difference likely to be negligible?
|
Is it worth caching simple MySQL results
|
Alright so, here's the deal. Without a cache group supplied, it defaults to file. So if you -dare- change that, be my guest. Just set the static default in bootstrap.php; the answer is at the bottom.
-- This is from the base cache class. --
public static $default = 'file';
public static function instance($group = NULL)
{
// If there is no group supplied
if ($group === NULL)
{
// Use the default setting
$group = Cache::$default;
}
So in your bootstrap.php set this, though I would name it 'apc' in your config:
Cache::$default = 'default';
|
After upgrading the Kohana framework from 3.2 to 3.3, Cache asks me to supply a default group.
config/cache.php
return array(
'default' => array( // Driver group
'driver' => 'apc', // using APC driver
'default_expire' => 3600, // life time
),
);
Before, I used to do it like this, without the group name:
Cache::instance()->set('key', 'val');
Now, that throws an exception: Failed to load Kohana Cache group: file.
But when I set the group name, all works perfectly.
Cache::instance('default')->set('key', 'val');
How can I now set a default group in 3.3 without typing it every time I want to use the cache? Maybe this changed in the upgrade, but I checked the new features of Kohana 3.3 and I don't see anything about it.
Hope you can help me.
|
Set default group to Cache in kohana 3.3
|
You can connect to AppFabric Cache without authentication by:
Set-CacheClusterSecurity -SecurityMode None -ProtectionLevel None
Configuring your client like this:
<dataCacheClients>
<dataCacheClient name="CacheName" maxConnectionsToServer="20">
<hosts>
<host name="hostName" cachePort="22233" />
</hosts>
<securityProperties mode="None" protectionLevel="None" />
</dataCacheClient>
</dataCacheClients>
But I wouldn't recommend that.
You should be able to connect from one domain to another but configuring that should be a job for your administration team.
|
I have set up an AppFabric CacheServer on a webserver, in a different domain.
When I try to access it I get the exception:
The server has rejected the client credentials.
InnerException: The logon attempt failed.
I have tried
Grant-CacheAllowedClientAccount Everyone
But it didnt help, I have tried
Grant-CacheAllowedClientAccount MYDOMAIN\MyIISusr
But that only gives me an error: Windows account MYDOMAIN\MyIISusr is not valid.
Probably because we are not in the same domain?
I have tried to set the app pool account to NetworkService and use
Grant-CacheAllowedClientAccount Networkservice
But this didnt help either.
Is there some way I can skip authorization and not authenticate users?
Everything is behind firewalls and not reachable from the public internet, so authorization is not needed in this application.
Or does someone have any solution to my problem?
|
AppFabric cache client rejection
|
Your array size of 4M is not big enough. The entire array fits in the cache (and is in the cache after the first k loop) so the timing is dominated by instruction execution. If you make arr much bigger than the cache size you will start to see the expected effect.
(You will see an additional effect when you make arr bigger than the cache: Runtime should increase linearly with arr size until you exceed the cache, when you will see a knee in performance and it will suddenly get worse and runtime will increase on a new linear scale)
Edit: I tried your second version with the following changes:
Change to volatile int *buffer to ensure buffer[i] = buffer[i] is not optimized away.
Compile with -O2 to ensure the loop is optimized sufficiently to prevent loop overhead from dominating.
When I try that I get almost identical times:
kronos /tmp $ time ./dos 2
./dos 2 1.65s user 0.29s system 99% cpu 1.947 total
kronos /tmp $ time ./dos 1
./dos 1 1.68s user 0.25s system 99% cpu 1.926 total
Here you can see the effects of making the stride two full cachelines:
kronos /tmp $ time ./dos 16
./dos 16 1.65s user 0.28s system 99% cpu 1.926 total
kronos /tmp $ time ./dos 32
./dos 32 1.06s user 0.30s system 99% cpu 1.356 total
|
I have just read a blog post here and tried to do a similar thing; here is my code to check what is in examples 1 and 2:
int doSomething(long numLoop,int cacheSize){
long k;
int arr[1000000];
for(k=0;k<numLoop;k++){
int i;
for (i = 0; i < 1000000; i+=cacheSize) arr[i] = arr[i];
}
}
As stated in the blog post, the execution times for doSomething(1000,2) and doSomething(1000,1) should be almost the same, but I got 2.1s and 4.3s respectively. Can anyone help me explain this?
Thank you.
Update 1:
I have just increased the size of my array to 100 times larger
int doSomething(long numLoop,int cacheSize){
long k;
int * buffer;
buffer = (int*) malloc (100000000 * sizeof(int));
for(k=0;k<numLoop;k++){
int i;
for (i = 0; i < 100000000; i+=cacheSize) buffer[i] = buffer[i];
}
}
Unfortunately, the execution time of doSomething(10,2) and doSomething(10,1) are still much different: 3.02s and 5.65s. Can anyone test this on your machine?
|
Measuring the effect of cache size and cache line size in C
|
Unfortunately, we can't use an ACL for the HTTP Host header; ACLs match client addresses only.
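A common workaround (a sketch only; the domain names are illustrative) is to collapse the whitelist into a single regular expression instead of an ACL:

```
sub vcl_recv {
    # Hosts not matching the whitelist regex are passed straight through.
    if (req.http.host !~ "^(www\.)?(domain\.net|otherdomain\.com)$") {
        return (pass);
    }
}
```

This keeps the list in one place while staying within what VCL's `~` operator supports.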
|
So here is what I am trying to accomplish. I'm trying to get Varnish working in a shared environment, and I would like to set it up so that only domains within a VCL include get cached and the rest are simply passed. Here is what I am looking at:
include "/etc/varnish/whitelist.vcl";
if (req.http.host !~ vhosts) {
return(pass);
}
acl vhosts {
"domain.net";
"www.domain.net";
"...";
}
...
Now Varnish tells me that this isn't possible:
Message from VCC-compiler:
Expected CSTR got 'vhosts'
(program line 940), at
('input' Line 11 Pos 30)
if (req.http.host !~ vhosts) {
-----------------------------######---
Running VCC-compiler failed, exit 1
VCL compilation failed
Now I know I can just do the following:
sub vcl_recv {
if (req.http.host == "domain1.com" ||
req.http.host == "domain2.com") {
return(pass);
}
}
But I really like the clean look of the first. Any ideas?
|
How to return(pass) all hosts not in acl - Varnish
|
You shouldn't place all of the 20K files in a single directory.
Divide them into directories (by letter, for example), so you access:
a/apple-pie-recipe
j/john-doe-for-presidency
etc.
That would allow you to store more files with fewer constraints on the file system, which will increase speed: instead of locating your file among 20k entries in one directory, the filesystem only needs to look through about a hundred.
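A minimal sketch of that bucketing in PHP (the function name is hypothetical):

```php
<?php
// Map a page slug to a bucketed cache path,
// e.g. "apple-pie-recipe" -> "cache/a/apple-pie-recipe.html".
function cachePath(string $slug): string {
    $bucket = strtolower($slug[0]); // first letter picks the subdirectory
    return "cache/$bucket/$slug.html";
}
```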
|
I made a dynamic site that has over 20,000 pages, and once a page is created there is no need to update it for at least a month or even a year. So I'm caching every page when it is first created and then delivering it from a static HTML file.
I'm running a PHP script (the whole CMS is PHP) with if (file_exists($filename)) to first look for the filename from the URL in the cache-files directory; if it matches, I deliver the cached file, otherwise I generate the page and cache it for later use. Though the site is dynamic, my URLs contain no ?&=; I achieve this with hyphens, breaking the URL into an array.
What I want to know is: will it create any problem to search for a file in that huge directory?
I saw a few Q&As like this one saying there should be no problem with the number of files I can store in a directory on an ext2 or ext3 file system (I guess my server has ext3), but the speed of creating new files decreases rapidly once there are more than 20-30,000 of them.
Currently I'm on a shared host and I must cache files. My host has a soft limit of 100,000 files in my whole box, which is good enough so far.
Can someone please give me any better idea about how to cache the site.
|
Caching a fully dynamic website
|
The theme is not used as part of the FPC URI, and therefore there is only one cache per package.
I wrote a little extension to fix the issue and you can grab it on Github.
https://github.com/benjy14/MobileFpcFix
|
I've gone through many forums, but I was unable to solve the FPC problem in Magento EE version 1.11. When I browse with the mobile theme, it serves the web theme instead because of FPC. If I disable FPC, performance goes down.
Can somebody help me to solve this problem?
|
Magento EE 1.11.1, Full Page Cache issue with mobile theme?
|
I'm not sure that I can address all of your concerns above, but I can address one of them --
I don't see any reason why a function couldn't be garbage collected. However, since your function is a key in a dictionary, as long as that dictionary is around, the reference count for your function will never reach zero and it won't be subject to garbage collection.
I don't know about reloading of a module, however, that seems like a corner case that you shouldn't really need to worry about. Modules aren't really meant to be reloaded ... the fact that you can do it in some circumstances is mostly for debugging purposes in an interactive terminal and not meant to be used in any real code ... (as far as I know anyway ...)
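To illustrate the point, here is a small sketch (a simplified re-creation of the question's decorator, not the asker's exact code): because the cache dictionary holds a reference to the function inside each key, the function's reference count never drops to zero while cached entries exist, so its identity and hash cannot be recycled:

```python
calculation_cache = {}

def cached(func):
    # The closure (and every cache key) keeps a reference to `func`,
    # so `func` cannot be garbage collected while entries exist.
    def wrapper(arg):
        key = (func, arg)
        if key not in calculation_cache:
            calculation_cache[key] = func(arg)
        return calculation_cache[key]
    return wrapper

calls = []

@cached
def slow_double(x):
    calls.append(x)  # record real invocations
    return x * 2

assert slow_double(21) == 42
assert slow_double(21) == 42
assert calls == [21]  # the second call was served from the cache
```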
|
I am using function objects as dictionary keys. I am doing that because I need to cache the result of these functions. This is roughly the code I am using:
# module cache.py:
calculation_cache = {}
def cached(func):
# func takes input as argument
def new_function(input):
try:
result = calculation_cache[(func, input)]
except KeyError:
result = func(input)
calculation_cache[(func, input)] = result
return result
return new_function
# module stuff.py
@cached
def func(s):
# do something time-consuming to s
# ...
return result
I could use func.__module__ + func.__name__ instead of func, but if func works fine, I'd rather use it since I'm afraid of possible name conflicts (e.g., for lambda functions or nested functions or functions that were replaced by another with the same name, etc.)
This seems to work fine, but I suspect that this may cause problems in some hard-to-test situations.
For example, I am concerned about a function being somehow deleted and another function reusing its memory space. In this case, my cache would be invalid, but it would not know about it. Is this a valid concern? If so, is there any way to address it?
Can a function be deleted? Does reloading a module move a function to a new address (thus changing its hash value, and releasing the old memory address for new functions)? Can someone (for some strange reason) simply delete a function defined in a module (again making the memory available for a new function)?
If it is only safe to do this with functions explicitly defined with def, then I can prohibit the use of cached except as a decorator (I don't know how to enforce it, but I can just document it in the cached docstring).
|
Using function objects as a dictionary keys
|
I believe LinkedHashMap is exactly what you need. You just need to override removeEldestEntry(...) method, and it will automatically remove old entries for you if the maximum capacity is reached. Something like:
import java.util.*;
class CacheMap<K,V> extends LinkedHashMap<K,V> {
protected final int maxCapacity;
public CacheMap(int maxCapacity) {
this.maxCapacity = maxCapacity;
}
@Override
protected boolean removeEldestEntry(Map.Entry eldest) {
return size() > maxCapacity;
}
}
You could implement a more sophisticated logic, for example remove very old entries even if the max. capacity is not reached.
If synchronizing atomic map operations is enough for you, you can just wrap the map into Collections.synchronizedMap(...):
Map<K,V> map = Collections.synchronizedMap(new CacheMap<K,V>(capacity));
If you need more accurate synchronization, for example read the map and update it in one synchronized block, you need to synchronize (all) code blocks that work with the map yourself.
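As a compact variant of the class above (class and key names are illustrative), the same bounded cache can be built inline with an anonymous LinkedHashMap subclass and tried out like this:

```java
import java.util.*;

public class CacheDemo {
    static final int MAX = 3; // maximum number of cached entries

    // Insertion-ordered map that evicts the oldest entry once MAX is exceeded.
    static Map<String, String> newCache() {
        return Collections.synchronizedMap(new LinkedHashMap<String, String>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > MAX;
            }
        });
    }

    public static void main(String[] args) {
        Map<String, String> cache = newCache();
        cache.put("blood", "request-1");
        cache.put("kidney", "request-2");
        cache.put("plasma", "request-3");
        cache.put("liver", "request-4"); // evicts "blood", the oldest entry
        System.out.println(cache.keySet()); // [kidney, plasma, liver]
    }
}
```

Note that `Collections.synchronizedMap` only makes individual operations atomic; compound read-then-update sequences still need explicit synchronization, as described above.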
|
I am working on a web-based medical application and need to create a small in-memory object cache. Here is my use-case.
We need to show a list of requests submitted by people who need certain things (blood, kidney, etc.). It's not going to be a huge list, as on a given day the requests for blood or anything else will be limited. Please take into account that we do not want to use any caching API, as that would be overkill. The idea is to create a Map and place it in the ApplicationContext.
The moment a new request is placed, we will update that Map in the application context, and the moment the request expires, we will remove it from the Map. We additionally need to look into the following points.
Need to set a max element limit.
If the max limit is reached, we should remove the entry that was added first.
Take care of any synchronization issues.
Please suggest which data structure should be used and what to take care of while implementing this.
|
Develop in Memory Object Cache
|
Yes, a known issue... In our production workflow we ended up with this block in the bin/vendors script:
if (in_array('--env=dev', $argv)) {
system(sprintf('%s %s assets:install --symlink %s', $interpreter, escapeshellarg($rootDir . '/app/console'), escapeshellarg($rootDir . '/web/')));
system(sprintf('%s %s assetic:dump --env=dev', $interpreter, escapeshellarg($rootDir . '/app/console')));
system(sprintf('%s %s myVendor:assets:install --symlink ', $interpreter, escapeshellarg($rootDir . '/app/console')));
} else {
system(sprintf('%s %s assets:install %s', $interpreter, escapeshellarg($rootDir . '/app/console'), escapeshellarg($rootDir . '/web/')));
system(sprintf('%s %s assetic:dump --env=prod --no-debug', $interpreter, escapeshellarg($rootDir . '/app/console')));
system(sprintf('%s %s myVendor:assets:install ', $interpreter, escapeshellarg($rootDir . '/app/console')));
}
As you can see, we defined our own console command, which installs assets into the web folder after installing and dumping Symfony's assets. In the MyVendorCommand script we do something like this:
$version = $this->getContainer()->getParameter('your_version_parameter');
$assetsInstallCommand = $this->getApplication()->find('assets:install');
$commandOptions = $input->getOptions();
$assetsInstallArguments = array(
'command' => 'assets:install',
'target' => 'web/version-' . $version,
'--symlink' => $commandOptions['symlink']
);
$assetsInstallInput = new ArrayInput($assetsInstallArguments);
$returnCode = $assetsInstallCommand->run($assetsInstallInput, $output);
|
I want assetic to output compressed js and css to something like this:
v2.3.1/css/whatever.css
Currently this is how I dump my css and js for production: $ php app/console assetic:dump --env=prod --no-debug. But they get dumped into css/ and js/, without the version.
I have read this but it seems to refer to images only, not css/js.
An important reason for doing this is for cache busting/invalidation.
|
How to prepend assets version to css and js?
|
Two steps to the solution:
The controller performing the changes on the model needs to have the sweeper reference, not the destination controller as shown above. In this case it is active_admin, so I added this to my admin/articles.rb file (source) instead of the home controller.
controller do
cache_sweeper :article_sweeper
end
And the controller name needs a slash
expire_action(:controller => '/home', :action => 'index')
|
I'm attempting to use a sweeper to expire the home page index action cache when a new article is published.
The home page cache works fine in the development environment and expires after 1 minute. However, when an article is saved, the sweeper action is not triggered.
class HomeController < ApplicationController
caches_action :index, :expires_in => 1.minute
cache_sweeper :article_sweeper
def index
@articles = Article.published.limit(5)
end
end
class ArticleSweeper < ActionController::Caching::Sweeper
observe Article
def after_update(article)
expire_action(:controller => 'home', :action => 'index')
end
end
Either I've gone wrong somewhere or a different approach is needed to expire the home page cache.
My app uses ActiveAdmin to update articles, and Dalli for Memcache (as I'll be using Heroku).
|
How do I expire home page cache when an article is updated?
|
There are several forms of caching from within PHP.
If you have access to memcached or APC on your webhost (some shared plans disable this functionality), look them up as they are considered fairly high-performance forms of caching as it utilizes the system memory directly (memcached is more suited for distributed systems).
http://php.net/manual/en/book.memcache.php
http://php.net/manual/en/book.apc.php
If not, look into file caching. PHP comes with a handy file library (documented within the PHP documentation) which will allow you to read/write to files.
http://www.php.net/manual/en/ref.filesystem.php
Lastly, you may look into SQL caching. Although this is not typically recommended in comparison to the other options, storing the data through a database may be an option as well (if you need to link it to other data within your tables).
http://php.net/manual/en/ref.pdo-mysql.php
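As an illustration of the file-caching option, here is a minimal sketch (the function names, temp-dir location and TTL are all arbitrary choices, not a standard API):

```php
<?php
// Return a cached value if it exists and is still fresh, otherwise null.
function cache_get(string $key, int $ttl) {
    $file = sys_get_temp_dir() . '/cache_' . md5($key);
    if (file_exists($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }
    return null;
}

// Store a value under a key.
function cache_set(string $key, $value): void {
    $file = sys_get_temp_dir() . '/cache_' . md5($key);
    file_put_contents($file, serialize($value));
}

$output = cache_get('numbers', 3600);
if ($output === null) {
    $output = range(2, 500);      // the "expensive" work goes here
    cache_set('numbers', $output);
}
```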
Good luck!
|
I am confused about caching in PHP. I created a PHP file that displays the numbers 2 to 500. Now I want to use a cache file to store the output and display it. My code is below.
<?php
for ($i = 2; $i <= 500; $i++)
echo "The number is:".$i."<br />";
?>
Now, how can I use the cache file to save the output and display it in the browser on subsequent requests? If there is some other way to use a cache file in PHP, please help me. My goal is to save processing time by storing the output in a cache file and serving it from there.
|
Using cache in php script
|
Some approaches:
Maintain an array of allowed resolutions, and check whether the requested resolution is in that array. Downside: you can't quickly add a resolution without editing the array.
If this is in a CMS context: allow the creation of new images (that are not in the cache yet) only by authenticated users; refuse the request otherwise. When an authenticated user adds an image in the CMS, they preview it, and doing that generates the resized image. Downside: not entirely easy to implement.
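The first approach could be sketched like this (the allowed list and filename pattern are illustrative):

```php
<?php
// Only generate images whose WxH prefix appears in the whitelist;
// anything else is refused before it reaches the resizer.
function isAllowedSize(string $file, array $allowed): bool {
    // Filenames look like "200x150-picture_001.jpg".
    if (!preg_match('/^(\d+x\d+)-/', $file, $m)) {
        return false;
    }
    return in_array($m[1], $allowed, true);
}

$allowed = ['200x150', '100x100', '640x480'];
// In image.php one would send a 404 when isAllowedSize($_GET['f'], $allowed) is false.
```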
|
Okay so I have an idea on how I want to serve and cache my images.
I don't know if this is the right way to do it but if it is, I'd like to know how to go about preventing abuse.
Situation:
index.php
<img src="images/cache/200x150-picture_001.jpg" />
images/cache/.htaccess
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ images/image.php?f=$1 [L]
The above checks if the image exists; if not, the request is rewritten to image.php.
image.php
PSEUDO code
get height and width from filename
resize and save image to cache folder
serve the image with content-type and readfile.
This way I'm trying to reduce HTTP requests and PHP load by serving existing files with readfile.
The browser gets the image/jpeg if it exists; if not, it will be generated.
Eventually all images will be cached and served at the right height and width, so browsers won't be downloading oversized images.
The only catch: if you change the image URL to different dimensions, you can fill up the server.
Am I doing it right? What's the right way to do it? Is there any way of stopping this from happening?
I know this whole concept of caching has been refined a million times; please enlighten me.
|
Create resized image cache but prevent abuse
|
They are the same and both call the HttpRuntime.Cache - From the source code:
public static Cache System.Web.Hosting.HostingEnvironment.Cache
{
get
{
return HttpRuntime.Cache;
}
}
and
public Cache System.Web.HttpContext.Cache
{
get
{
return HttpRuntime.Cache;
}
}
This is also stated on MSDN:
HostingEnvironment.Cache
Gets the Cache instance for the current application.
Namespace: System.Web.Hosting
Assembly: System.Web (in system.web.dll)
ref: http://msdn.microsoft.com/en-us/library/system.web.hosting.hostingenvironment.cache(VS.85).aspx
HttpContext.Cache Property
Gets the Cache object for the current application domain.
Namespace: System.Web
Assembly: System.Web (in System.Web.dll)
ref: http://msdn.microsoft.com/en-us/library/system.web.httpcontext.cache.aspx
|
I found this article that explains how to use cache item expiration to run scheduled jobs in ASP.NET applications without using any scheduler or Windows service.
It is really interesting for me!
In the article, the author is using HttpContext.Current.Cache to add an item, so that when the item expires in the cache, some processing can be done.
But in his article he's making a 'false' request from the server to itself in order to have access to an HttpContext, so he can reach the Cache and re-add the item when the previous one expires.
I tried to use System.Web.Hosting.HostingEnvironment to access the Cache without any HttpContext (so no need for a 'DummyRequest'), and it seems to work.
Is there something I don't understand or know about this cache? Are the HttpContext and HostingEnvironment caches different? I think they are the same thing; IntelliSense describes both as the 'application cache' without any differences.
|
Is System.Web.Hosting.HostingEnvironment.Cache equals to HttpContext.Current.Cache?
|
I have fixed this. I only had to increase the connection timeouts and check the number of restarts. Here are the lines I added:
.host = "app01.site.com";
.port = "80";
.connect_timeout = 1.5s;
.first_byte_timeout = 45s;
.between_bytes_timeout = 30s;
if (req.restarts > 3) {
set beresp.saintmode = 5m;
}
|
I'm using Varnish to cache the content of my websites. It is working as it's supposed to, but there is a problem: randomly it returns a 503 error. This is really strange, since the app servers are fine and the load is under 0.8, and the database server is fine too. Here is part of my configuration:
backend app05 {
.host = "app05.site.com";
.port = "80";
.connect_timeout = 0.7s;
.first_byte_timeout = 30s;
.between_bytes_timeout = 30s;
.probe = {
.url = "/";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3;
}
}
director app_director round-robin {
{ .backend = app01; }
{ .backend = app02; }
{ .backend = app03; }
{ .backend = app04; }
{ .backend = app05; }
}
sub vcl_fetch {
# remove all cookies
# unset beresp.http.set-cookie;
# cache for 12 hours
# set beresp.ttl = 2h;
# Don't allow static files to set cookies.
if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm|mp4|flv)(\?[a-z0-9]+)?$") {
unset beresp.http.set-cookie;
set beresp.ttl = 12h;
} else {
set beresp.ttl = 30m;
}
# If the backend server doesn't return properly, don't send another connection to it
# for 60s and try another backend via restart.
#
# https://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html
# --
if(beresp.status == 500) {
set beresp.saintmode = 5m;
if (req.request != "POST") {
return(restart);
} else {
error 500 "Failed";
}
}
# Allow items to be stale if needed.
set beresp.grace = 1h;
}
Do I also have to add beresp.status == 503 to that if?
|
Varnish 3.0 returns 503 error
|
Depending on your SQL Server version, you may be able to use SqlCacheDependency.
Very briefly in your web.config
<caching>
<sqlCacheDependency pollTime="10000" enabled="true" >
<databases>
<add connectionStringName="ConnectionString" name="Coverage"/>
</databases>
</sqlCacheDependency>
</caching>
Then in the code
private void BindData()
{
// if null then fetch from the database
if (Cache["CoverageDataTable"] == null)
{
// Create the cache dependency
SqlCacheDependency dep = new SqlCacheDependency("Coverage", "CoverageDataTable");
string connectionString = ConfigurationManager.ConnectionStrings[
"ConnectionString"].ConnectionString;
SqlConnection myConnection = new SqlConnection(connectionString);
SqlDataAdapter ad = new SqlDataAdapter("SELECT ColA, ColB, ColC " +
"FROM CoverageDataTable", myConnection);
DataSet ds = new DataSet();
ad.Fill(ds);
// put in the cache object
Cache.Insert("CoverageDataTable", ds, dep);
}
gvCoverageDataTable.DataSource = Cache["CoverageDataTable"] as DataSet;
gvCoverageDataTable.DataBind();
}
Some background can be found here: Caching in ASP.NET with the SqlCacheDependency Class
By the way, don't forget a lock implementation in order to avoid subsequent requests repopulating the cache at the same time.
– Adam Right
May 29, 2012 at 8:01
|
I have a data set storing all continents with their respective countries. I am caching the data table:
DataSet dset = new DataSet();
string cacheKey = "CoverageDataTable";
object cacheItem = Cache[cacheKey] as DataTable;
if (cacheItem == null)
{
dset = (DataSet)_obj.GetAllContinent();
cacheItem = dset.Tables[0];
Cache.Insert(cacheKey, cacheItem, null, System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromHours(5), CacheItemPriority.High, null);
}
Now I want fresh data to be fetched from the database whenever the underlying table changes. How can I do this?
|
Update the cache when dataset is updated
|
The Resque GitHub repository contains a little-known script: a god task that will do exactly this, watching your workers and killing stale ones.
https://github.com/resque/resque/blob/master/examples/god/stale.god
# This will ride alongside god and kill any rogue stale worker
# processes. Their sacrifice is for the greater good.
WORKER_TIMEOUT = 60 * 10 # 10 minutes
Thread.new do
loop do
begin
`ps -e -o pid,command | grep [r]esque`.split("\n").each do |line|
parts = line.split(' ')
next if parts[-2] != "at"
started = parts[-1].to_i
elapsed = Time.now - Time.at(started)
if elapsed >= WORKER_TIMEOUT
::Process.kill('USR1', parts[0].to_i)
end
end
rescue
# don't die because of stupid exceptions
nil
end
sleep 30
end
end
|
I have an application that uses Resque to run some long-running jobs. Sometimes they take 8 hours or more to complete.
In situations where the job fails, is there a way to monitor resque itself to see if the job is running? I know I can update the job's status in a database table (or in redis itself), but I want to know if the job is still running so I can kill it if necessary.
The specific things I need to do are:
Determine if the job is still running
Determine if the job has stopped
Kill jobs that are stuck
|
Find out if a resque job is still running and kill it if it's stuck
|
The best way will be simply:
var file = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory,filename);
if ( file.exists() ) {
file.deleteFile();
}
For complete details Titanium.Filesystem.File.
|
In Titanium, what is the best way to delete specific files in the applicationDataDirectory?
|
Delete files in applicationDataDirectory
|
You can't always guarantee that the end browser is going to cache the page. Especially on mobile devices, memory is very limited and so caches get filled up and invalidate quickly.
If you want fast page load times, you will want to make the smaller images since they will download faster. The only problem is request times. People browsing through things (such as a gallery of images) are going to skim through and only click on a few of the images. If you start out with all small thumbnails, the page would load pretty fast and then as they ask to increase the size of selected ones, you would dynamically load those images.
One thing you could do to reduce requests is use a server side script to put all the thumbnails of each size into a tileset of sorts (sorted by size...you could also do this manually if it were for a static page) and use CSS to adjust the view boundaries. This is how JQuery does their buttons. This would make it so that you would only have to request one image to load all the thumbnails for a given set. As your end user asks to increase the size of the thumbnails, your page could then dynamically request the larger sizes. The downside to this is that if you have many images on each page, you would have one big image to download and I believe that with many browsers, the users wouldn't see the image until it was fully loaded. Using individual thumbnails, they could see them progressing.
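The tileset (sprite) idea from the last paragraph can be sketched in CSS; the file and class names are illustrative, assuming a vertical strip of 32x32 thumbnails:

```css
.thumb32 {
  width: 32px;
  height: 32px;
  background-image: url(thumbs-32.png); /* all 32px thumbnails in one file */
}
.thumb32.item1 { background-position: 0 0; }
.thumb32.item2 { background-position: 0 -32px; } /* second tile in the strip */
```

Each element then shows only its own tile, while the browser downloads (and caches) a single image for the whole set.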
|
I'm working on a site that currently stores thumbnails of size 100x100. A designer is working with us and has created a design which requires thumbnails of 100x100, 32x32, 22x22 and 16x16.
It is likely any given page on the site will be displaying the same thumbnail many times at different sizes.
My question is: Should we create and store thumbnails of each size? Or is it enough to store the thumbnail as 100x100 and just use CSS to achieve the smaller thumbnail sizes?
My theory is that using the 100x100 thumbnail across the board and using CSS to get the desired size will perform better (ie: faster page loads) than storing each individual size.
Why do I think this?
Because the first time the 100x100 is downloaded it will be cached and the CSS will 'resize' it throughout the page where the same image is used. If we store and reference differently image files explicitly we'll need to grab 4 files instead of just one and won't be able to leverage the browser's cache as well.
Thoughts?
|
storing multiple thumbnails vs using css
|
You should try '#sessionStorageService.getUser()'. In Spring EL cache expressions, method parameters are referenced with a leading #; a bare name is resolved against the cache expression root object, which is why you get that error.
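With the parameters prefixed, the annotation from the question would look like this (a sketch; Spring also needs to resolve the parameter names, so the classes must be compiled with debug information):

```java
@Cacheable(value = "dashboardCache",
           key = "#sessionStorageService.getUser()",
           condition = "#sessionStorageService.getUser() != null")
public List<BusinessDashboard> getUserDashboards(String serverName,
        SessionStorageService sessionStorageService) { ... }
```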
|
We're planning to use the Spring 3.1 cache abstraction instead of the Grails Spring cache plugin. I've experimented with it locally, but an issue occurred when using Spring EL expressions like
@Cacheable(value = 'dashboardCache', key = 'sessionStorageService.getUser()', condition = 'sessionStorageService.getUser() != null')
public List<BusinessDashboard> getUserDashboards(String serverName, SessionStorageService sessionStorageService) { ... }
the following error occurs when executing the integration test case
EL1008E:(pos 0): Field or property 'sessionStorageService' cannot be found on object of type 'org.springframework.cache.interceptor.CacheExpressionRootObject'
org.springframework.expression.spel.SpelEvaluationException: EL1008E:(pos 0): Field or property 'sessionStorageService' cannot be found on object of type 'org.springframework.cache.interceptor.CacheExpressionRootObject'
I assume this behavior is caused by missing debug information - thus my question:
Can Spring EL expressions be enabled in Grails apps, or is there any parameter to the compilation process to tell Grails to keep debug symbols in the class files?
(we're running on Grails 2.0.1)
|
Grails - using Spring el expressions in Spring 3.1's @Cacheable
|
Look at PHP's ob_start(); it buffers all output so you can capture and save it.
http://php.net/manual/en/function.ob-start.php
Addition:
Look at http://www.php.net/manual/en/function.ob-start.php#106275 for the function you want :)
Edit:
Here is an even simpler version: http://www.php.net/manual/en/function.ob-start.php#88212 :)
Here is a simple but effective solution:
template.php
<?php
echo '<p>Now is: <?php echo date("l, j F Y, H:i:s"); ?> and the weather is <strong><?php echo $weather; ?></strong></p>';
echo "<p>Template is: " . date("l, j F Y, H:i:s") . "</p>";
sleep(2); // wait for 2 seconds, as you can tell the difference then :-)
?>
actualpage.php
<?php
function get_include_contents($filename) {
if (is_file($filename)) {
ob_start();
include $filename;
return ob_get_clean();
}
return false;
}
// Variables
$weather = "fine";
// Evaluate the template (do NOT use user input in the template, look at php manual why)
eval("?>" . get_include_contents("template.php"));
?>
You could save the contents of template.php or actualpage.php with http://php.net/manual/en/function.file-put-contents.php to some file, like cached.php. Then you can let the actualpage.php check the date of cached.php and if too old, let it make a new one or if young enough simply echo actualpage.php or re-evaluate template.php without rebuilding the template.
After comments, here to cache the template:
<?php
function get_include_contents($filename) {
if (is_file($filename)) {
ob_start();
include $filename;
return ob_get_clean();
}
return false;
}
file_put_contents("cachedir/cache.php", get_include_contents("template.php"));
?>
To run this you can run the cached file directly, or you can include this on an other page. Like:
<?php
// Variables
$weather = "fine";
include("cachedir/cache.php");
?>
|
I want to cache some PHP files partially. For example:
<?
echo "<h1>",$anyPerdefinedVarible,"</h1>";
echo "time at linux is: ";
// start of the section that should not be cached
echo date();
// end of partial cache
echo "<div>goodbye $footerVar</div>";
?>
So the cached page should look like this:
(cached.php)
<h1>This section is fixed today</h1>
<? echo date(); ?>
<div>goodbye please visit todays suggested website</div>
It could be done with templating, but I want it directly, because I want an alternative solution.
|
Php Partial Caching
|
There's no Powershell commandlet out of the box for creating/managing regions.
The solution - write one!
As Daniel Richnak says in the comments, Powershell is .NET under the covers, and this means you can write extra Powershell commandlets to fill in the gaps.
A commandlet is a regular class that inherits from System.Management.Automation.Cmdlet, and it's decorated with the System.Management.Automation.Cmdlet attribute as well. Making it work is then a matter of overriding the ProcessRecord method. Command-line parameters are implemented as properties on the class, decorated with the System.Management.Automation.Parameter attribute. So a commandlet for creating regions would look something like:
using System.Management.Automation;
using Microsoft.ApplicationServer.Caching;
[Cmdlet(VerbsCommon.New, "CacheRegion")]
public class NewCacheRegion : Cmdlet
{
[Parameter(Mandatory = true, Position = 1)]
public string Cache { get; set; }
[Parameter(Mandatory = true, Position = 2)]
public string Region { get; set; }
protected override void ProcessRecord()
{
base.ProcessRecord();
DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetCache(Cache);
try
{
cache.CreateRegion(Region);
}
catch (DataCacheException ex)
{
if (ex.ErrorCode == DataCacheErrorCode.RegionAlreadyExists)
{
Console.WriteLine(string.Format("There is already a region named {0} in the cache {1}.", Region, Cache));
}
}
}
}
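Once compiled into an assembly, the cmdlet can be loaded and invoked from PowerShell; a sketch (the DLL path and the cache/region names are illustrative, and registration details vary by PowerShell version):

```powershell
Import-Module .\CacheRegionCmdlets.dll
New-CacheRegion -Cache "default" -Region "customers"
```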
|
I think the title is clear. I've been surfing the web for about an hour, but every single page talks about creating regions dynamically using .NET. I'm sure there is a command we can execute in PowerShell; do you know it?
Thanks in advance,
|
How to create regions on App Fabric via Powershell
|
Unfortunately I can't make comments (I don't have 50 reputation yet), so I will write my comment into this answer.
Can you explain more about the code? What tool do you use to prepare it, and how do you deploy it? My assumption is that if you "only" restart, clean and then deploy, it will not work; do it the other way around: clean, deploy, restart. If you give more information, I can give you a better answer.
What you can try is deleting the application cache, or turning caching off if you believe it is a caching issue, by setting
cachingAllowed="false" in conf/context.xml
answered Mar 3, 2012 at 15:07 by Oliver
|
|
I use code like:
Thread currentThread=Thread.currentThread();
ClassLoader classLoader=currentThread.getContextClassLoader();
InputStream configFile=classLoader.getResourceAsStream("config.xml");
But this code only starts to work after 2-3 hours. I did nothing special, only restart, clean, deploy, etc.
I suppose that the old jar/class that cannot find config.xml was in some cache, maybe Tomcat's cache or the OS/VM cache. Is that possible?
Thanks.
|
How to clean tomcat cache?
|
The only solutions I've found were:
1) Adding Response.CacheControl = "no-cache" to each handler.
I don't like this because this requires all of the handlers to change and for all developers to be aware of it.
2) Setting HTTP Header override on a folder where the handlers live
I don't like this one because it requires the handlers to be in their own directory. While this may be good practice in general, unfortunately our application is not structured that way, and I cannot just move them because it would break client-facing links.
If nobody provides a better answer I'll have to accept that these are the only two choices.
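A sketch of option 1, assuming a generic ASP.NET handler (the class name and the image-generation call are illustrative):

```csharp
using System;
using System.Web;

public class DynamicImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Tell browsers and proxies not to cache or store this response.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.Cache.SetNoStore();
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));

        context.Response.ContentType = "image/jpeg";
        // context.Response.BinaryWrite(GenerateImageBytes()); // illustrative
    }

    public bool IsReusable { get { return false; } }
}
```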
|
My company uses ASHX files to serve some dynamic images. Since the content type is image/jpeg, IIS sends headers with them as would be appropriate for static images.
Depending on settings (I don't know all of the settings involved, hence the question) the headers may be any of:
LastModified, ETag, Expires
Causing the browser to treat them as cacheable, which leads to all sorts of bugs with the user seeing stale images.
Is there a setting that I can set somewhere that will cause ASHX files to behave the same way as other dynamic pages, like ASPX files? Short of that, is there a setting that will allow me to, across the board, remove LastModified, Etag, Expires, etc and add a no-cache header instead?
|
How to prevent IIS from sending cache headers with ASHX files
|
You can check whether an image is already cached by checking its complete and readyState properties.
var zoomImg = new Image();
zoomImg.src = zoomlist[currentFrame];
if (zoomImg.complete || zoomImg.readyState === 4) {
    image.attr('src', zoomlist[currentFrame]);
}
else {
    showLoader();
    zoomImg.onload = function() {
        hideLoader();
        image.attr('src', zoomlist[currentFrame]);
    };
}
|
I have some preview images that can be zoomed.
If user clicks "zoom", the image is cached:
var zoomImg = new Image();
zoomImg.onload = function() {
    image.attr('src', zoomlist[currentFrame]);
};
What I need to know, is how to check if the image is cached or not, to know if I should show the loader.
|
How to check with Jquery if image element exists in DOM based on src attribute?
|
Sessions are per user.
Cache is not: a single cached item is shared by every user and request.
|
Can anyone list the major differences between session and caching?
Because they seem the same to me: sessions are stored on the server, and so is the cache. A session is used to store data for reuse, and so is the cache. What exactly is the major difference that led Microsoft to create these two components?
A real world scenario would be more helpful.
|
Difference between session and caching
|
Only the "master" output page needs the headers, as you've shown. The server-side include happens internally on the server, so the browser never sees it.
You're doing it right.
|
I have a classic ASP page that calls in some other ASP files using Server Side Includes.
I want neither the main file nor the included files to be cached by any browser.
At the moment my main looks something like this:
<%@ Language="VBSCRIPT" %><% Option Explicit %>
<%
Response.CacheControl = "no-cache"
Response.AddHeader "Pragma", "no-cache"
Response.Expires=-1
%>
<!--#include file="scripts1.asp"-->
<!--#include file="scripts2.asp"-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>myTitle</title>
<!--#include file="head.asp"-->
</head>
<body>
<!--#include file="body.asp"-->
</body>
</html>
I have only placed the Response.CacheControl, Response.AddHeader, Response.Expires code on the main page and not on the included files.
My questions are:
Do all server side included ASP pages need the Response.CacheControl, Response.AddHeader and Response.Expires code that I have used, or just the main file?
Is the code I have used sufficient to prevent caching on all browsers?
|
Preventing caching of server side include files
|
I've played around a bit and come to the conclusion that client-side caching using headers does not seem to work in my case. Maybe I'm doing it wrong; maybe it is the way my application works or the web server config.
Anyway my solution was to use APC:
http://www.php.net/manual/en/book.apc.php
I'm using windows, the appropriate binaries can be found here:
http://downloads.php.net/pierre/
Which one you need depends on your PHP version and how it was compiled (with vc6 or vc9).
The php_apc.dll will need to be put in your php extension directory and you will need to add the line
extension=php_apc.dll
to the php.ini
Then you basically do:
if (apc_exists($key)){
return apc_fetch($key);
}
// get data from database because it was not in the cache
//...
//add data to cache
apc_add($key, $result);
If the data on my page is uncached it takes around 1-2s to load. That's not bad, but it feels very laggy. If the data is in the cache it is more like 20-30ms. Of course this difference is very noticeable.
|
I'm trying to cache JSON content generated by a php script from database. However the dataset is very stable, there are very few changes or additions. Meaning data could go unchanged for weeks.
The issue is that it contains a LOB column, which takes a noticeable time to load compared to supplying the JSON from a text file, meaning it is the actual database call that makes it slow.
I'm displaying the data in a table with pagination(datatables jquery plugin) and for each page change the data is fetched from database again, also when going back to the previous page.
I've tried following:
"beforeSend": function (request)
{
request.setRequestHeader("cache-control", "max-age=86400");
},
Does not work.
I tried mod_expires:
ExpiresActive On
ExpiresDefault "access plus 4 hours"
ExpiresByType application/javascript "access plus 1 day"
ExpiresByType application/json "access plus 1 day"
Does not work.
Therefore I assume all these settings apply only to real files on the file system and not to dynamically generated content?
I would prefer a configurable approach meaning using Apache/PHP since I will not have full control over the server.
EDIT BEFORE FIRST ANSWER:
Note that the JSON contains multiple records, so a key/value store would be kind of difficult to achieve. The key would have to contain a lot of stuff: the query/filter expression and the requested page for paging.
EDIT 2:
Development and prod. are Windows... so memcached is not really an option...
EDIT 3:
I've tried kristovaher's solution but it does not work. The cache headers are not in the response all the time, and after some playing around I believe I determined the issue:
I'm required to use NTLM authentication, and when doing two requests shortly after each other it works fine. However, if you wait a bit, it seems the user is re-authenticated and then the cache control header is "lost".
|
caching JSON: Apache, PHP, jQuery
|
There are multiple ways in which caching can happen (and yes there is some redundancy).
Starting with (I think) Rails 3.1, Rack::Cache is set up for you. This is an HTTP-level cache that understands all about expiry times, etags, etc. and can store the data in a variety of cache stores. This is what is reporting a cache miss, probably because you're not emitting the cache-control headers that would allow it to cache the page (see the expires_in or fresh_when helpers).
Page caching of the sort you have configured is way older and operates entirely differently. It dumps the rendered HTML into the directory of your choice, and Rails then serves those as static assets (in production you would configure this to be served straight from the web server without touching Ruby-level code). This caching is less smart and knows nothing about HTTP cache-control headers and so on (but is, on the other hand, very fast).
So in summary, you've got two caching schemes in place that are unaware of each other, which is why you get a miss from one of them and a hit from the other.
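If you want the Rack::Cache layer to report hits instead of misses, one sketch is to emit freshness headers from the action (the 10-minute duration is illustrative, not from the question):

```ruby
class PagesController < ActionController::Base
  def index
    @pages = Page.all
    # Sets Cache-Control so Rack::Cache (and browsers) may cache the response.
    expires_in 10.minutes, public: true
  end
end
```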
|
Using Rails 3.2 in development mode, I'm trying to test out some simple page caching.
pages_controller.rb
class PagesController < ActionController::Base
caches_page :index, :show
def index
@pages = Page.all
end
def show
@page = Page.find(params[:id])
end
end
development.rb
config.action_controller.perform_caching = true
application.rb
config.action_controller.page_cache_directory = File.join(Rails.root, 'public')
When I test this setup out, it seems to process these actions like normal, and the page cache gets written as expected. However, subsequent requests report the following two things that have me confused:
It seems to miss the cache, but...
Requests afterwards don't seem to load the controller, run any queries, etc., leading me to believe that it DID load from the cache.
Here's what the log outputs on first request, and then five reloads afterwards:
Started GET "/pages" for 127.0.0.1 at 2012-02-12 21:01:24 -1000
Processing by PagesController#index as HTML
Page Load (0.2ms) SELECT `pages`.* FROM `pages`
Rendered pages/index.html.erb (0.8ms)
Write page /Users/ckihe/Sites/experiment/public/pages.html (0.3ms)
Completed 200 OK in 3ms (Views: 1.9ms | ActiveRecord: 0.2ms)
cache: [GET /pages] miss
cache: [GET /pages] miss
cache: [GET /pages] miss
cache: [GET /pages] miss
cache: [GET /pages] miss
Anyone have any ideas why the cache says it's missing?
|
Cache miss with Rails 3.2 and page caching in development - anyone else?
|
None that I know of... the only options are global or device-specific:
using -sync option with mount
using drop_caches
Another point:
Even if you could do what you ask for, there is no guarantee that other processes (C, D, E, etc.) won't behave in a way that causes "the file cached by process A in memory" to get replaced anyway...
UPDATE - after comments from the OP regarding performance:
Linux (like most modern OSes) offers something called "memory-mapped files". Basically this is a way to access a file's contents in memory: the OS assigns part of the address space to the file (depending on the given params) and loads the content of the file into that address space (again, exact behaviour depends on the given params).
You would do this in process A to achieve what you want...
Check out the mmap API calls for details.
|
Is there a way to disable file cache for a particular process ?
I have two process running A and B.
I want the file opened by A to remain in cache,
and
I don't want to enable the file cache for B, so it doesn't replace the file cached by process A in memory.
Is there a way to disable file cache for a particular process?
|
Linux : Disabling File cache for a process?
|
I solved this one myself but am leaving it out there so that it may be of use to someone else.
The problem was that a custom ActionFilterAttribute was manually setting the cache information, and therefore the caching settings I was applying on the action were being ignored.
The Attribute in question trimmed for brevity:
public class CustomAttributeName: ActionFilterAttribute
{
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
var cache = filterContext.HttpContext.Response.Cache;
cache.SetExpires(DateTime.UtcNow.AddDays(-1));
cache.SetValidUntilExpires(false);
cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
cache.SetCacheability(HttpCacheability.NoCache);
cache.SetNoStore();
base.OnActionExecuting(filterContext);
}
}
|
So I am having an issue with IE 7 being able to download a file from an SSL site built in MVC 3. For IE 7 to be able to save a file from an SSL site, the response must be cacheable.
The code for the method is:
[OutputCache(Location = OutputCacheLocation.ServerAndClient, Duration = 20, VaryByParam = "none", NoStore = true )]
public override FileContentResult Export(int? id, string extra)
{
...
return new FileContentResult(byte[], mimetype);
}
This works in IE9, Chrome, Safari, and Firefox.
I have tried various settings for VaryByParam, Duration and NoStore. Whenever I change any of those settings, the response headers never seem to change.
Cache-Control:no-cache, no-store, must-revalidate
Content-Disposition:attachment; filename=PersonalInfo-02092012.xlsx
Content-Length:11933
Content-Type:application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
Date:Thu, 09 Feb 2012 18:16:35 GMT
Expires:-1
Pragma:no-cache
Server:Microsoft-IIS/7.5
Any help would be appreciated.
|
OutputCache attribute being ignored in MVC 3
|
Less.js will already recompile if the source has changed -- I have never had an issue with it being stale during development. I did end up switching to a compile-on-save workflow using TextMate's LESS bundle, though, since switching out link tags before deployment and testing was getting annoying.
That being said I'm sure you could wire up something to watch the file on disk and invoke the node.js lessc compiler.
UPDATE TO CLARIFY DEV CYCLE:
During development I include the less.js file in the page and link to my styles.less file via <link rel="stylesheet/less" type="text/css" href="styles.less">
When I push out to production I change that to:
<link rel="stylesheet" type="text/css" href="styles.css">
But, during development, every time I save the styles.less file, I use a TextMate bundle to also compile down a styles.css file, so the change out is a matter of commenting/uncommenting in my source file.
I have the bundle set to use node.js lessc compiler with --compress set so it gives you a nice compact stylesheet.
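Wiring up such a watcher can be as small as a polling shell script. Here `lessc` is the node.js LESS compiler; the script falls back to a plain copy so the loop itself can be demonstrated without it, and the file names are illustrative:

```shell
#!/bin/sh
# Minimal mtime-poller: recompile styles.less whenever it changes.
SRC="styles.less"
OUT="styles.css"

compile() {
  if command -v lessc >/dev/null 2>&1; then
    lessc --compress "$SRC" > "$OUT"
  else
    cp "$SRC" "$OUT"   # stand-in for lessc in this sketch
  fi
}

# Demo input so the script is self-contained.
printf '@c: #333;\nbody { color: @c; }\n' > "$SRC"

last=""
for _ in 1 2 3; do            # bounded for the demo; use `while :` in practice
  now=$(stat -c %Y "$SRC" 2>/dev/null || stat -f %m "$SRC")
  if [ "$now" != "$last" ]; then
    compile
    last="$now"
  fi
  sleep 1
done
echo "compiled $OUT"
```

In practice you would run the loop unbounded in a background terminal, or use an event-based watcher (inotifywait, fswatch) instead of polling.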
|
I'm interested in using a CSS preprocessor and am leaning towards LESS. I'm proficient at writing CSS myself, but want to take advantage of a few of the dynamic features. I'm not positive that adding a constant extra step in production (like compiling) is worth the benefit, though.
I like LESS because I can use less.js to compile client-side (during development only), is there a tool out there though that would automatically detect the timestamp on my less file and compile it to css and overwrite the current css file? I'd love to achieve this level of transparency so I could worry about the code and less about compiling it and refreshing... Something similar to http://cssrefresh.frebsite.nl/ but combining a compiler with it? If not Is anyone interested in helping build it?
|
LESS & automatic CSS cacheing
|
It doesn't matter what order they're downloaded in. Appcache operations are atomic. Nothing is available from the appcache for a particular manifest file until everything is available from the appcache for that manifest.
If you want to break up the download then, as @PaulGrime suggests, have multiple manifest files. You will need to have the user visit the host page for each manifest but you should be able to manage that with a hidden iframe or something.
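The hidden-iframe idea might be sketched like this (file and manifest names are illustrative):

```html
<!-- home.html: its own manifest lists only the critical homepage assets -->
<html manifest="home.appcache">
  <body>
    <!-- ... homepage content ... -->
    <!-- Quietly visit a second page whose manifest lists the sub-page media -->
    <iframe src="preload-subpages.html" style="display:none"></iframe>
  </body>
</html>
```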
|
I'm creating an app with a number of videos/large images that will use appcache to handle the preloading of them.
Is there a way for me to control what order those assets get loaded? For example, I would like to load the big image/video on the homepage before the ones on the sub pages. I have tried listing the files in the order I want them to load in the appcache manifest but that didn't seem to make a difference.
|
html5 appcache load order
|
It can be done with .htaccess but it is kind of a pain. Env vars are set with RewriteRule (not RewriteCond) and read back in Header with the e suffix:
RewriteCond %{TIME_WDAY} ^0$
RewriteRule ^yourfile\.php$ - [E=daystring:SUN]
#etc (7x, one pair per weekday)
RewriteCond %{TIME_MON} ^01$
RewriteRule ^yourfile\.php$ - [E=monthstring:JAN]
#etc (12x, one pair per month)
Header set "Expires" "%{daystring}e, %{TIME_DAY} %{monthstring}e %{TIME_YEAR} %{TIME_HOUR}:59:59 GMT"
Better to just do this in the PHP itself (after session_start()).
<?php
$nexthour = mktime (date("H"), 59, 59);
header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T', $nexthour));
?>
|
I want to cache a number of PHP pages that display different data at the beginning of every hour (xx:00:01).
So far, I've found a way of caching a page +1 hour from the time of accessing (or modifying the file), but if the user accesses the page at xx:59:00, then at xx+1:00:01 he will see the cached page data, not the newly displayed data.
What do I need to write to get a regular, "top-of-the-hour" cache expiry, preferably using .htaccess?
Final code (non htaccess):
$nexthour = mktime(date("H")+1, 00, 20) - mktime();
header("Cache-Control: public, must-revalidate, max-age=".$nexthour.", s-maxage=".$nexthour);
At the top of each page.
|
I need .htaccess to cache php pages, and expire them at the beginning of every hour (xx:00:00)
|
If you have configured AppFabric as the session provider, you can now use Session State in the same way as when you had InProc session. Other features that use Session State (for example TempData) also work.
|
I have configured the Azure AppFabric Cache as session provider in my ASP.NET MVC application.
How do I store session data in the Azure cache?
Is it the same way as with InProc session?
Like by using ViewBag, TempData and stuff?
Thanks.
|
Using the Azure AppFabric Cache in ASP.NET MVC
|
Trust Boundary Violation is often not a simple thing to fix. To really understand this, you need to confer with your security auditor and your architect and determine what the trust boundary is. To do this, draw a logical architecture of your application, including the cache, the end user and all the other systems the application needs to interface with.
Then, draw a dotted line around the part of the application that needs to be protected. Everything inside this line is stuff that you do not have to check... it's all data that, presumably was created by you the developer, or else it was scrubbed by your input validation function and you are sure it is only the kind of data you expect. (See https://www.owasp.org/index.php/Data_Validation)
Now, where is the cache?
If it's inside the trust boundary, then this Trust Boundary Violation is a false positive and you can create a filter so that if the source comes from that file or package, the issue will be hidden. Your filter would look something like this:
category:"trust boundary violation" package:com.example.mycachepackage
or
category:"trust boundary violation" file:MyCacheObject.java
If the cache is outside the trust boundary, then the assumption is that the attacker may use the cache as a mechanism to attack your program or users. Then you have to check all the data every time you put data into the cache or take anything out of the cache.
Once you've defined the validation function(s) for the cache mechanism, your security auditor or Fortify consultant will write a custom validation rule that will make all the fixed issues disappear.
|
I have a requirement to get an application cache object into a session object, modify it, and use it. While everything works fine, I am receiving the Trust Boundary Violation threat from Fortify (for more info: https://www.fortify.com/vulncat/en/vulncat/sql/trust_boundary_violation.html).
Any ideas on how to fix this?
|
Trust Boundary Violation while combining Application cache with session data
|
You can specify cache_key for your block:
protected function _construct()
{
$this->addData(array(
'cache_key' => 'some_static_or_dynamic_key', // can be static or dynamic
'cache_lifetime' => 120,
'cache_tags' => array(
Mage_Core_Model_Store::CACHE_TAG,
Mage_Cms_Model_Block::CACHE_TAG),
)
);
}
And then you can ensure that block is cached by calling:
Mage::app()->loadCache('your_cache_key');
Here is good article about blocks caching.
|
I cached my custom block, which inherits from Mage_Core_Block_Template, with the following constructor:
protected function _construct()
{
$this->addData(array(
'cache_lifetime' => 120,
'cache_tags' => array(Mage_Core_Model_Store::CACHE_TAG, Mage_Cms_Model_Block::CACHE_TAG),
));
}
Right, I want to verify that this block is cached. How can I list all cached blocks in my Magento installation?
I want a similar instruction to:
var_dump($this->getLayout()->getUpdate()->getHandles());exit;
which shows all layout handles, but for cached blocks.
Thanks.
|
Magento: How to know many blocks are cached
|
An indirect solution:
.htaccess:
RewriteCond %{HTTP:if-modified-since} .
RewriteRule . /not_modified.php [L]
not_modified.php:
header($_SERVER['SERVER_PROTOCOL'].' 304 Not Modified');
|
If a user has a file cached in their browser and they send an HTTP request with an If-Modified-Since header, is there a way to automatically serve them a 304 Not Modified response using .htaccess?
|
Is it possible to return a 304 Not Modified with .htaccess
|