Response | Instruction | Prompt
---|---|---|
You realize you aren't using your HttpURLConnection, right? If you want to get the InputStream using the HttpURLConnection, you need to call
InputStream is = new BufferedInputStream(urlConn.getInputStream());
Also, I believe that it's standard to use Apache HttpClient for this sort of thing with Android, since it's built in and a much better API than the standard Java stuff.
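For illustration, a minimal self-contained sketch of reading through the connection object (the URL is a placeholder, not from the question):
import java.io.BufferedInputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class FetchExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/data.xml"); // placeholder URL
        HttpURLConnection urlConn = (HttpURLConnection) url.openConnection();
        urlConn.setUseCaches(false); // only takes effect if you read via urlConn
        // Read through the connection object, not url.openStream(),
        // so the settings above actually apply to this request.
        InputStream is = new BufferedInputStream(urlConn.getInputStream());
        int b;
        while ((b = is.read()) != -1) {
            System.out.write(b);
        }
        System.out.flush();
        is.close();
        urlConn.disconnect();
    }
}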
– ColinD, Feb 22, 2011
You are right, I forgot. The strange thing is it's working on the emulator, but not on the phone (an HTC Desire)!
– pino zulpo
Feb 22, 2011 at 14:47
And of course, I forgot to add the code you reminded me about to the question.
– pino zulpo
Feb 22, 2011 at 14:49
A more recent Android blog post says you should use Apache HttpClient only for Froyo and below, but HttpURLConnection for everything after Froyo.
– Gordon Glas
Jul 7, 2012 at 13:18
|
I'm trying to get an XML file, but it seems to be cached.
Here's my code:
URL url = new URL("http://delibere.asl3.liguria.it/SVILUPPO/elenco_xml.asp?rand=" + new Random().nextInt()+"&Oggetto=" + text +"&TipoDocumento="+tipoDocumento);
HttpURLConnection urlConn = (HttpURLConnection) url.openConnection();
urlConn.setDefaultUseCaches(false);
urlConn.setAllowUserInteraction(true);
urlConn.setDoInput(true);
urlConn.setDoOutput(true);
urlConn.setUseCaches(false);
urlConn.setRequestMethod("GET");
urlConn.setRequestProperty("Pragma", "no-cache");
urlConn.setRequestProperty("Cache-Control", "no-cache");
urlConn.setRequestProperty("Expires", "-1");
urlConn.setRequestProperty("Content-type", "text/xml");
urlConn.setRequestProperty("Connection","Keep-Alive");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
InputStream is = new BufferedInputStream(url.openStream());
Document doc = db.parse(is);
doc.getDocumentElement().normalize();
thanks!
|
Why does Android HttpURLConnection cache the InputStream results?
|
What sort of volumes of data are we talking about here? A few KB? A MB? A hundred GB?
If anything but the latter, rather than reinvent the wheel, use one of the existing cache services. Based on the C# tag and the mention of IIS, I assume you are running on Windows. As such, I would suggest that you have a look at Memcached.
From the Memcached page:
Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.
I have used Memcached myself in environments similar to the one you describe (using MS SQL rather than NoSQL for data storage). After an initial round of cache tuning we have not encountered any problems whatsoever.
Without knowing the specifics of your requirements it is difficult to say if Memcached is the right solution, but it is certainly something worth looking into. For reference, their FAQ wiki can be found here:
http://code.google.com/p/memcached/wiki/FAQ
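To make the usage pattern concrete, here is a minimal sketch using the Java spymemcached client (the key, TTL, and localhost:11211 node are assumptions; .NET clients such as Enyim expose a very similar get/set surface):
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class MemcachedExample {
    public static void main(String[] args) throws Exception {
        // Connect to a single memcached node (host and port are assumptions).
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        // Store a database result for an hour (3600 seconds).
        client.set("customer:42", 3600, "serialized-customer-record");
        // Later reads hit the cache instead of the database.
        Object cached = client.get("customer:42");
        System.out.println(cached);
        client.shutdown();
    }
}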
|
I have an application that runs on 100 servers. This application communicates with a NoSQL database, which I don't like: 100 servers creating sessions, locking, committing, etc.
I'd like to build a cache farm or dirty-read pools from which the IIS servers will read data and objects. Caches will expire every once in a while, etc.
The whole point is to avoid database access from these 100 servers.
What would you use for this cache farm?
WCF? REST? The Patterns & Practices Caching Block? Or Windows AppFabric?
I already have a layer of Distributed Cache so not thinking about that.
What architecture would you go for ? any recommendation or case studies?
|
Cache Farming, Read Pools
|
Serializing only works for data objects (or part of the data of an object). Network connections can't be serialized, since they are very specific to a client, server, and time (by default a TCP connection times out after 5 minutes). You could use a global IMAP connection that you keep in memory within your application, but you might run into issues there with several users. The most standard way is to have a pool of connections from which you borrow one.
By the way, are you running into issues when creating a connection every time? I would not cache the connection, but rather the data that you retrieved via IMAP, and work from there.
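As a language-neutral sketch of that borrow/return pool idea (Java here purely for illustration; a string stands in for the real IMAP connection):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class ConnectionPool<T> {
    private final BlockingQueue<T> idle;

    public ConnectionPool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());
        }
    }

    // Blocks until a connection is free, so concurrent users can't exhaust the pool.
    public T borrow() throws InterruptedException { return idle.take(); }
    public void giveBack(T conn) { idle.add(conn); }

    public static void main(String[] args) throws Exception {
        ConnectionPool<String> pool = new ConnectionPool<>(2, () -> "imap-connection");
        String conn = pool.borrow();
        // ... use the connection for one request ...
        pool.giveBack(conn);
    }
}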
|
I am creating a web-based IMAP client in Rails, and want to cache the IMAP object between requests. I was planning to serialize the object and save it in Redis (since it caches strings only), however none of the popular serialization methods seems to be working.
Marshal and ActiveSupport::Cache::MemoryStore both give the following error
Marshal.dump(imap)
TypeError: no marshal_dump is defined for class Mutex
YAML serialization works, but deserializing fails.
s = YAML::dump(imap) # works, i.e. loads up a string with the imap data
imap2 = YAML::load(s)
TypeError: allocator undefined for Thread
Is there any other alternate caching mechanism that works for arbitrary ruby objects, especially ones that might be using threads internally? Does an alternate key-value store (I have been using Redis) support such caching?
Even better, is there any way for Rails to remember certain objects, rather than me deserializing them?
PS> I am using Ruby 1.9.2 with Rails 3.0.3 on a Macbook, if that helps in any way.
|
How do I cache/serialize an Net::IMAP object in Ruby/Rails?
|
Not really a Hibernate problem, from what I can see. I would say that this is more about caching itself. So, I would recommend looking at some distributed caches, especially Infinispan. That way, both applications can share the same cache and manipulate it. If you just use Hibernate, but the caches are still on different machines with different states, then you'll face the same problem.
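A minimal sketch of that setup, assuming both applications load the same clustered configuration file (the file name and cache name here are illustrative):
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class SharedCacheExample {
    public static void main(String[] args) throws Exception {
        // Both applications load the same cluster configuration,
        // so they join the same cache and see each other's updates.
        DefaultCacheManager manager = new DefaultCacheManager("infinispan.xml");
        Cache<String, String> cache = manager.getCache("reference-data");
        cache.put("country:US", "United States");
        System.out.println(cache.get("country:US"));
        manager.stop();
    }
}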
– jpkroehling, Jan 8, 2011
|
I have two applications. The first is a web application through which we provision reference data. The second is an ESB-based application where the reference data is used. The reference data changes, but not very frequently, so we needed to cache it. The web application (I am not the owner) uses Hibernate, but my ESB-based application does not; we only use EHCache.
When reference data is changed by the independent web application, that change needs to be reflected in the ESB application. We implemented this using a message queue: when reference data changes, the web application sends a message to the queue. Our ESB application listens for that message, clears the cache, and caches the data once again. This works, but it is time-intensive. How can I use Hibernate to improve the situation?
Regards,
Subhendu
|
Caching using Hibernate
|
Found the answer...
As we move towards commercial launch, we'll look to add many of the features that make Windows Server AppFabric Caching extremely popular, such as High Availability, the ability to emit notifications to clients when they need to refresh their local cache, and more.
http://blogs.msdn.com/b/windowsazureappfabric/archive/2010/10/28/introduction-to-windows-azure-appfabric-caching-ctp.aspx
So no go for the time being, but looks like it is planned. Good enough for me, since it is not yet released anyway. But any ETA on release date (or quarter) would be very helpful for configuring my roadmap...
|
Back when AppFabric Caching was "Velocity", High Availability was an "out of the box" option, and one of the few major features that made it stand out over other caching systems, namely memcached. However, along the way "someone" decided it best to charge customers extra for HA by making it available only on Windows Server 2008 Enterprise Edition (or higher)... I am hopeful that Windows Azure AppFabric Caching does provide HA. I'm betting it must, if for no other reason than because there is no "upgrade" option. Anyone know for certain?
As a secondary question, if anyone knows of the ETA for the production release of Windows Azure AppFabric Caching other than "2011", much appreciated. Roadmaps would be mighty handy... Instead most of us rely on Google to determine guesswork roadmaps :). Always been a big fan of MS products/services, but they could really use some additional "focus" in the area of Azure.
I understand AppFabric Caching is currently only in "alpha", and is only available via the Azure Labs portal.
|
Does Windows Azure AppFabric Caching support High Availability?
|
Instead of forcing the browser to cache, you should send a must-revalidate header and control the caching from within your programming language (for example, PHP) by sending Expires and Last-Modified headers. The browser will then ask your site for the latest version on each request; just make sure to answer with an empty 304 Not Modified response if nothing has changed.
This may take some time to implement, but it definitely works.
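A sketch of that request/response dance (Java's built-in HttpServer purely for illustration; in PHP you would emit the same headers with header() and exit early on a match):
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class RevalidateExample {
    // Pretend the content last changed when the server started.
    static final ZonedDateTime LAST_CHANGE = Instant.now().atZone(ZoneOffset.UTC).withNano(0);

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            // Force the browser to ask again on every request...
            exchange.getResponseHeaders().set("Cache-Control", "must-revalidate, max-age=0");
            String since = exchange.getRequestHeaders().getFirst("If-Modified-Since");
            if (since != null && !ZonedDateTime.parse(since, DateTimeFormatter.RFC_1123_DATE_TIME)
                    .isBefore(LAST_CHANGE)) {
                // ...but answer with an empty 304 when nothing has changed.
                exchange.sendResponseHeaders(304, -1);
            } else {
                byte[] body = "fresh content".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Last-Modified",
                        DateTimeFormatter.RFC_1123_DATE_TIME.format(LAST_CHANGE));
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
            }
            exchange.close();
        });
        server.start();
    }
}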
– Tom van der Woerdt, Dec 16, 2010
|
I currently use this setup in my vhost:
<Location />
SetOutputFilter DEFLATE
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png)$ no-gzip dont-vary
Header append Vary User-Agent env=!dont-vary
</Location>
<Directory />
ExpiresActive On
ExpiresByType text/html "access plus 5 minutes"
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/x-javascript "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType text/javascript "access plus 1 month"
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/png "access plus 1 month"
ExpiresByType image/jpg "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/x-icon "access plus 1 month"
ExpiresDefault "access plus 1 day"
<FilesMatch "\.(ico|jpeg|pdf|flv|jpg|png|gif|js|css|swf)$">
Header set Cache-Control "max-age=2592000, public"
Header unset Last-Modified
Header unset ETag
FileETag None
</FilesMatch>
<FilesMatch "\.(html|php)$">
Header set Cache-Control "max-age=900, public, must-revalidate"
</FilesMatch>
</Directory>
While it works great for speeding things up, sometimes users don't see the changes they themselves made to content (mainly while using Firefox) :( Any suggestions / optimization hints?
|
Apache Vhost Directives Optimized for Pagespeed
|
SOLUTION
The problem is with the location of the memorize_headers parameter.
I was trying this:
$frontendOptions = array(
'lifetime' => 3600,
'default_options' => array(
'cache' => $cache_flag,
'cache_with_cookie_variables' => true,
'memorize_headers' => array('Content-Type', 'Content-Encoding'),
'make_id_with_cookie_variables' => false),
'regexps' => array(
'^(/.+)?/admin/?' => array('cache' => false),
'^(/.+)?/admin/pictures/view-image/?' => array('cache' => true),
'^(/.+)?/authentication/?' => array('cache' => false),
'^(/.+)?/fan-profile/?' => array('cache' => false),
'^(/.+)?/fan-registration/?' => array('cache' => false))
);
The right location for this is outside the default_options key:
$frontendOptions = array(
'lifetime' => 3600,
'memorize_headers' => array('Content-Type', 'Content-Encoding'),
'default_options' => array(
'cache' => $cache_flag,
'cache_with_cookie_variables' => true,
//'cache_with_session_variables' => true,
'make_id_with_cookie_variables' => false),
'regexps' => array(
'^(/.+)?/admin/?' => array('cache' => false),
'^(/.+)?/admin/pictures/view-image/?' => array('cache' => true),
'^(/.+)?/authentication/?' => array('cache' => false),
'^(/.+)?/fan-profile/?' => array('cache' => false),
'^(/.+)?/fan-registration/?' => array('cache' => false))
);
Now it works.
|
I have the following action:
public function viewImageAction()
{
$this->_helper->layout()->disableLayout();
$this->_helper->viewRenderer->setNoRender();
$filename = sanitize_filename($this->_request->getParam('file'), 'jpg');
$data = file_get_contents(APPLICATION_PATH . '/../private-files/fans-pictures/' . $filename);
$this->getResponse()
->setHeader('Content-type', 'image/jpeg')
->setBody($data);
}
And in my index.php, before the application starts, I have:
/** Zend Cache to avoid unecessary application load **/
require_once 'Zend/Cache.php';
$frontendOptions = array(
'lifetime' => 3600,
'default_options' => array(
'cache' => $cache_flag,
'cache_with_cookie_variables' => true,
'make_id_with_cookie_variables' => false),
'regexps' => array(
'^(/.+)?/admin/?' => array('cache' => false),
'^(/.+)?/pictures/view-image/?' => array('cache' => true),
'^(/.+)?/authentication/?' => array('cache' => false),
'^(/.+)?/fan-profile/?' => array('cache' => false),
'^(/.+)?/fan-registration/?' => array('cache' => false))
);
$backendOptions = array(
'cache_dir' => APPLICATION_PATH . '/cache/pages/');
$cache = Zend_Cache::factory(
'Page', 'File', $frontendOptions, $backendOptions
);
$cache->start();
The cache works fine, except that if I try to access the url, like public/admin/pictures/view-image/file/63.jpg the headers come with text/html not image/jpeg.
Am I doing something wrong?
EDITED
I've tried:
'memorize_headers' => array('Content-type')
But nothing...
Also, I've noticed that this type of caching (before the application starts) can't be done on admin areas, because the application needs to run and check the session. So I need to start the cache as soon as possible to avoid loading all the components involved.
Any tips?
|
When I try to cache private images (an action with modified headers), the headers are omitted
|
What I wound up doing was setting a global variable in my background process before the Rails environment was loaded:
$background = true
then in environments/production.rb:
config.action_view.cache_template_loading = !$background
Not thrilled, but it works. I get template reloading for email templates in my background job but cached view templates for the online application.
– SteveD, Nov 12, 2010
|
I am sending emails in a background job using ActionMailer. Users can create new email templates, but they aren't recognized until the background job is restarted. I used to use
ActionView::TemplateFinder.reload!
which forced reloading of templates (it is now deprecated as of 2.3.4). I have tried
ActionView::Base.cache_template_loading = false
but that does not work.
|
How do I selectively turn off template caching in Rails?
|
I have the same problem. The other applicationCache events (updateready, cached, downloading) do trigger, and you can change the HTML using jQuery or whatever to indicate changes, but not the progress event. I can understand that the iPad might choose not to support this event, as it could get triggered many times and may affect the performance of the iPad's slower processor. Regarding the download, you need to make sure that every resource listed in your manifest is available.
– Kehan, Dec 1, 2010
The "progress" events are thrown all in a burst after all files have been downloaded, so I can't show a nice progress bar. Any update on why it does not fire the "progress" event just after each download? Desktop Chromium.
– Alfonso Nishikawa
Feb 12, 2018 at 17:13
|
I have an html5 application that listens to all window.applicationCache events. When it needs to download, I write a pretty dialog to the screen, and during the progress event I calculate the percentage of files done by doing the right math on the event.loaded and event.total properties, in order to update the percentage with it.
The result is a fine dialog that says "Installing n%" up to 100. Everything works out as I expect it, and the application caches offline nicely and starts on all browsers.
However, on iPad, this doesn't seem to work. The only thing I can achieve during applicationCache events is write to the console. There I do see these events actually being listened to by my handlers.
I've tried everything up to function timeouts.
My questions:
Is there a way to update any HTML and visualize this during these events on iPad?
Why is the iPad not downloading everything in one go like all other browsers do? It seems to go into idle state for a reason unknown to me.
PS: I need to cache over 600 files. The total size is under 1 MB.
|
The iPad applicationCache events can't update the screen
|
I'm currently developing an app that will deal with this kind of stuff. And I found many possible solutions.
If you still have issues with this, have a look here: http://cocoawithlove.com/2010/09/substituting-local-data-for-remote.html
If your app has to be simple, this could be perfect.
Another way could be to subclass NSURLProtocol, but I'm still investigating this solution.
– TroX, Jul 25, 2011
|
I'm new to iPhone development and I still have some gaps that need to be filled in the first application I'm developing. This app will consume data from a site managed by WordPress through the WordPress JSON plugin, which allows retrieving the posts in the form of a JSON string.
I want my application to store the posts in some form of cache, so that users only need to download new content after the first time. Think of the Twitter app, which keeps all your previously loaded tweets and only loads the new ones.
What's the best way to do that? Should I save the JSON as a file, or is there another, more efficient method to keep it in cache?
|
How to cache JSON data for offline reading in an iPhone app?
|
See here:
http://www.symfony-project.org/book/1_2/12-Caching#chapter_12_sub_configuring_the_cache_dynamically
You can use that approach, modifying the filter class to use whatever conditions you want to enable/disable cache.
The example in the link above disables/enables cache per module/action. You can use:
sfConfig::set('sf_cache', false);
to enable/disable it globally if preferred.
|
If a certain condition is met, is it possible to tell symfony not to cache the current request? I'm aware of the contextual configuration flag, but this is an action, and not a partial or component, so I don't think it applies.
Context: I'm running a multi-site CMS. Each site can have multiple domain names associated with it, but one is set as the primary. If a page request is made with an alternate domain name, it gets forwarded to the primary. But the action gets cached, so when the same URL is accessed, it serves the cache file instead of redirecting.
|
Is it possible to disable symfony's cache per request
|
After trying both, the more successful option seems to be a combination of two factors. Management of the cache cluster (host information) is primarily an operations concern and is best managed by the operations team (i.e. those guys that read Server Fault). Since this information is stored in the configuration as well (and would require an XML file obtained from Export-CacheClusterConfig for each environment), it's best left to the operations team to decide how they want to manage it. Importing the wrong file (with the incorrect host information) has led to a number of issues.
So, we're left with PowerShell scripts. Here's a sample that I have. It could be cleaned up (check for Cache existence first) but you get the general idea. It's also much easier to store in source control (as it's just one file).
New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
New-Cache -CacheName ConsultantCache -Eviction Lru -Expirable true -TimeToLive 60
New-Cache -CacheName WorkitemCache -Eviction None -Expirable true -TimeToLive 60
|
I'm working on building out a standard set of configurations for our cache clusters within App Fabric. My goal is to have a repeatable cache settings configuration when we load up a new environment (so server names are different, number of hosts, and other environmental factors).
My initial pass was to utilize the XML available from Export-CacheClusterConfig and simply change server names and size attributes in the <hosts> section, but I'm not sure what else is automatically registered with those values (the hostId parameter, for example).
My next approach that I've considered is a PowerShell script to simply build up the various caches with the correct parameters passed in that would simply run as a post-deploy step.
Anyone else have experience with repeatable AppFabric cache cluster deployments?
|
SCM management of AppFabric Cache Cluster
|
Try instantiating an explicit transaction within the session's using block. This article looks relevant: http://nhprof.com/Learn/Alerts/DoNotUseImplicitTransactions
|
I have enabled 2nd level cache in FluentNHibernate:
Fluently.Configure()
.Database(MsSqlConfiguration.MsSql2005
.ConnectionString(connectionString)
.Cache(c => c.ProviderClass<SysCacheProvider>().UseQueryCache())
)
.Mappings(m => m.FluentMappings.AddFromAssemblyOf<PersonMap>());
My mapping is as follows:
public PersonMap()
{
Id(x => x.Id);
Map(x => x.Name);
Cache.ReadWrite();
}
When I call Persons from my repository, I run:
var query = session.GetSession().CreateCriteria<Person>("p")
.Add(Expression.Eq("p.Org.Id", orgRep.GetOrg().Id));
query.SetCacheable(true);
return query.List<Person>().AsQueryable<Person>();
When I start the application, everything (including the cache) works fine. My first query will hit the database, but the following ones don't. The problem arises when I save a Person. A Person is saved like this:
public virtual void Save(Person p)
{
if (p.Id > 0 && session.GetSession().Get<Person>(p.Id).Org != orgRep.GetOrg())
throw new SecurityException("Organization mismatch");
session.GetSession().Merge(p);
session.GetSession().Flush();
}
Saving works, but after that the cache doesn't. Queries will always hit the database. Looking through the NHibernate log shows:
DEBUG - Checking query spaces for up-to-dateness [[Person]]
DEBUG - Fetching object 'NHibernate-Cache:UpdateTimestampsCache:[Person]@1639794674' from the cache.
DEBUG - cached query results were not up to date for: sql: SELECT this_.Id as Id0_0_, this_.Name as Name0_0_, this_.Org_id as Org5_0_0_ FROM [Person] this_ WHERE this_.Org_id = ?; parameters: ['1']; first row: 0
What am I doing wrong?
|
NHibernate query cache not working after update to the database
|
It's possible to do this in one command, but two is more obvious. Note that expanding the file list into rm with backticks breaks on a large cache (the argument list gets too long), so let find do the deleting:
find /path_to_cache_folder/ -type f ! -name 'index.php' -delete
find /path_to_cache_folder/source -type f ! -name 'index.php' -delete
or in one cron job:
find /path_to_cache_folder/ -type f ! -name 'index.php' -delete && find /path_to_cache_folder/source -type f ! -name 'index.php' -delete
– san4o, Mar 13, 2013
|
I am using phpThumb on a client website, and as it is a very image-heavy application the cache gets huge quickly. Today the thumbs stopped working and I had to rename the cache folder, as it was too big to delete via FTP. I renamed it cache_old and am trying to delete it now via SSH. I recreated the cache folder and everything worked fine again.
Since it seems it stops working when the cache folder is too full, plus just to keep the server tidy, I would like to set up a daily cron job to clear files from the cache folder. I have no idea how to do this though and haven't been able to find an answer yet...
The cache folder has a file in it called index.php which I assume needs to stay, plus a subfolder called source, which again has a file called index.php, which I again assume needs to be there. So I need a command that will delete everything BUT those files.
Any guidance on how to set this up would be appreciated!
Thanks,
Christine
P.S. The site is hosted on DreamHost, and I have set other jobs up via their cron job panel, and I do have SSH access if setting it up that way is easier. Cheers!!
|
clear phpThumb cache regularly with cron job
|
The answer is "sort of", but you will need to use Aspect-Oriented Programming (AOP) techniques in order to achieve this. Castle Windsor and PostSharp are two excellent .NET frameworks that can help get you there, but this is not a light subject by any means.
Basically AOP involves wrapping up common boilerplate functionality into aspects that can be used to decorate code, leaving only the essence of what you want. This can be done dynamically at runtime as Castle does, or at compile time by IL weaving (dynamically compiling the new code into your assembly).
Each approach has its strengths and weaknesses, but I urge you to research the topic as it spans further than just caching. Many common tasks such as exception handling and logging can easily be refactored into aspects to make your code extra clean.
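To make the interception idea concrete, here is a minimal runtime-proxy sketch (plain Java dynamic proxies standing in for what Castle's runtime proxies do in .NET; the interface and cache-key scheme are made up for illustration):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface TypeService {
    List<String> getMyTypeList();
}

public class CachingAspectExample {
    // Wrap any TypeService so each method result is computed once, then served from a map.
    static TypeService withCaching(TypeService target) {
        Map<String, Object> cache = new ConcurrentHashMap<>();
        InvocationHandler handler = (proxy, method, args) ->
                // Keyed by method name only: fine for no-arg getters like this one.
                cache.computeIfAbsent(method.getName(), key -> {
                    try {
                        return method.invoke(target, args);
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                });
        return (TypeService) Proxy.newProxyInstance(
                TypeService.class.getClassLoader(),
                new Class<?>[] { TypeService.class },
                handler);
    }

    public static void main(String[] args) {
        TypeService service = withCaching(() -> {
            System.out.println("hitting the database...");
            return Arrays.asList("A", "B");
        });
        System.out.println(service.getMyTypeList()); // hits the "database"
        System.out.println(service.getMyTypeList()); // served from the cache
    }
}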
|
Using VB.NET on this one, in ASP.NET Web Forms.
I have a property that reads a list from a database over the entity framework, as follows:
Public ReadOnly Property MyProperty As List(Of MyType)
Get
Using service As New MyDatabaseService
Return service.GetMyTypeList()
End Using
End Get
End Property
Because this service requires a round-trip to the database, I'd like to cache the results for future accesses during the same page lifecycle (no ViewState needed). I know this can be done as follows:
Private Property _myProperty As List(Of MyType)
Public ReadOnly Property MyProperty As List(Of MyType)
Get
If _myProperty Is Nothing Then
Using service As New MyDatabaseService
_myProperty = service.GetMyTypeList()
End Using
End If
Return _myProperty
End Get
End Property
Now for the question: is there any way to utilize caching via an attribute, to "automatically" cache the output of this property, without having to explicitly declare, test for, and set the cached object as shown?
|
Caching for a single property or method in .NET
|
When you update the file on the web server, its last modification time should change. You can verify this by sending a GET or HEAD request (use a command-line tool like wget or curl, or something like Firebug, to verify).
When the date changes, the browsers should update their caches (unless your browser has a bug).
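A quick sketch of that check (Java for illustration; the URL is a placeholder):
import java.net.HttpURLConnection;
import java.net.URL;

public class HeadCheckExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/styles.css"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD"); // headers only, no body
        // If this date doesn't move after you deploy, browsers have no
        // reason to refetch the stylesheet.
        System.out.println("Last-Modified: " + conn.getHeaderField("Last-Modified"));
        conn.disconnect();
    }
}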
– Aaron Digulla, Jul 20, 2010
First make sure that the date changes. If it doesn't then the browser doesn't matter.
– Aaron Digulla
Jul 20, 2010 at 9:16
|
I have an updated stylesheet that I cannot rename or add any ?nocache=blah sort of thing to - but browsers are still using a cached version.
Can I put a meta tag in (or something like that) that effectively cancels any cache from a certain date?
|
Is there a way to tell browsers to ignore cache from before a certain date?
|
NCache Enterprise is the only .NET cache provider I can find that supports streaming:
// Insert
using (var cacheStream = cache.GetCacheStream("mykey", StreamMode.Write))
cacheStream.Write(...);
// Retrieve
using (var cacheStream = cache.GetCacheStream("mykey", StreamMode.Read))
cacheStream.Read(...);
|
Does anyone know any cache providers for .NET that support streaming data directly in/out by key, rather than serializing/deserializing objects?
I was rather hoping AppFabric (Velocity) would, but it only appears to deal with objects like the built-in ASP.NET cache.
|
.NET cache providers that support streaming?
|
Use Windows Scheduled Tasks to hit this page every 12 minutes, because ASP.NET works with triggers only. The trigger can be either an Ajax call that requests the page every 12 minutes, or the next user that comes to your webpage.
– eugeneK, Jul 6, 2010
I thought about it, but I'm not sure that it's the only possible / best solution in this case.
– Adam
Jul 6, 2010 at 9:07
@Adam, what does the control do? Maybe you can move some parts of it out of the control and cache only the needed part?
– eugeneK
Jul 6, 2010 at 9:21
Unfortunately the control creates a lot of controls (labels, divs) and adds them into other controls within the control. And I cannot cache controls outside a control, because adding cached HTML using the Controls.Add() method ends with an exception.
– Adam
Jul 6, 2010 at 9:25
Then I see no other way than either to use my advice or to redesign the control itself.
– eugeneK
Jul 6, 2010 at 10:09
I think it's the only option for now.
– Adam
Jul 6, 2010 at 10:30
|
I solved most of the issues I had with caching. But still there is one thing. I have a UserControl for which I use output caching. Just like this:
<%@ OutputCache Duration="1200" VaryByParam="none" %>
However, as you can see, the control is recreated every 12 minutes, because it takes from 5 to 10 seconds to generate it.
Now, the default behaviour for ASP.NET is to create the control when a user enters the page and keep it in cache for 12 minutes. Then, when a user enters the page after another 5 minutes, the control is created again.
Is there a way to force ASP.NET to recreate the control after the 12-minute cache expires, regardless of the next user visit?
Or, even better, a perfect solution: recreate the control in the background after, let's say, 11 minutes 50 seconds, and then just replace the current one with the new one after 12 minutes?
Thanks for help!
|
ASP.NET recreating cached control
|
One approach I use for caching data that isn't likely to change (e.g. configuration data) is to use memoization, via the excellent Memoize module. I wrap the SQL query in a function where I pass in the bind parameters and the table name, and memoize that function.
use Memoize;
sub get_config_for_foo
{
    my ($table, $field1, $field2) = @_;
    # generate my sql query here, using table, field1 and field2...
    return $result;
}
memoize('get_config_for_foo');
You could also use a caching strategy in memcache or something similar; check out Tie::Cache::LRU for a good implementation for this.
– Ether, Jun 22, 2010
|
In a Perl program I cache SQL request results to speed up the program.
I see two common ways to do that:
Create a hash using the query as the index to cache results, as suggested here
Create a hash with two indexes: the first is the list of used tables, the second is the WHERE clause
Today I use the 2nd option, because it's easier to clean the cache for a given set of tables when you know they have changed.
My problem is handling the cache cleaning. Until now, most SELECT queries I ran were against tables with very few changes, so when I run an update/delete/... I just clean up the part of the hash that caches results for the affected table.
This has little impact on performance, as I rarely have to clean the part of the hash that is often used.
But now, for a program with more frequent updates/deletes on most tables, this makes my cache much less efficient, as I often have to clean it.
How do I deal with that? My current cache system is quite simple; Cache::Memcached::Fast is quite complex. Do you have a solution that would be more efficient than mine but still quite simple?
|
How to handle SQL query result caching
|
Google gives this link, and an up-to-date link.
The article gives the pros and cons of each type of caching system and states that both can be implemented at the same time. From a programmer's point of view, any time I need not worry about virtual addressing and its associated costs, it's a win, but programming for cache hits/misses is going to affect performance much more than slight latency, I believe. This area is not my forte, though, coming from small embedded systems programming, where caching is just now starting to become relevant to what I do.
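As a toy illustration of programming for cache hits (my own sketch, not from the article): the two loops below do the same work, but the first walks memory sequentially while the second strides across cache lines, and the timing difference is usually dramatic:
public class LoopOrderExample {
    public static void main(String[] args) {
        int n = 4096;
        int[][] m = new int[n][n];
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {        // row-major: sequential, cache-friendly
            for (int j = 0; j < n; j++) {
                m[i][j]++;
            }
        }
        long t1 = System.nanoTime();
        for (int j = 0; j < n; j++) {        // column-major: strided, cache-hostile
            for (int i = 0; i < n; i++) {
                m[i][j]++;
            }
        }
        long t2 = System.nanoTime();
        System.out.println("row-major:    " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("column-major: " + (t2 - t1) / 1_000_000 + " ms");
    }
}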
|
What are the pros and cons of:
- Physical cache (between MMU and memory)
- Logical cache (between CPU and MMU)
from a programmer's view? How do you get the best out of each of them?
Thanks
|
Physical cache vs Logical cache
|
You certainly need a good caching strategy to avoid problems with stale data. With dynamic data and using memcached, you would have to delete cache entries on certain data updates; you can't just rely on cache entries to time out. With memcached you can cache just parts of your dynamic content for a specific page generation. If you want to cache complete HTML documents, I would recommend using a reverse proxy like Varnish (http://varnish-cache.org/).
|
I have a personal caching class, which can be seen here ( based off WordPress' ):
http://pastie.org/988427
I recently learned about memcache and it said to memcache EVERYTHING:
http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html
My first thought was just to keep my class with the current functions and make it use memcache instead -- is there any downside to doing this?
The main difference I see is that memcache stays on with the server from page to page, while mine is for one page load. The problem I see arising, and this is with any system, is that they're dynamic: they change all the time. Whether it's search results, visible products, etc., if it's all cached, won't that create a problem?
Is there a way to handle this? Obviously if something brings back the same results every time it should be cached, but that's why I was doing it on a per-page-load basis. I'm sure there is a way to handle this, or is the cache time usually set between 5 minutes and an hour?
|
Personal Cache vs Memcache?
|
Caching is not enabled by default in NHibernate.
One thing you need to consider is how to handle concurrent updates. Suggested read: http://nhibernate.info/doc/nh/en/index.html#transactions-optimistic
|
I know this is not a good idea, and the best would be to let the applications talk via web services. But I have a situation where a legacy application is accessing a database with an ORM, and I need to access the same database from a new .NET application using Fluent NHibernate.
So the question is: what problems will this cause, and how do I solve them?
I guess the main issue is caching. I need to disable caching in one of the applications (which would be the new app).
So how can I disable caching in NHibernate?
Is there anything else I should be careful about?
|
Access one database from multiple ORMs :: Caching issue
|
I'd use a startup stored proc that invokes sp_updatestats:
It will benefit queries anyway
It already loops through everything anyway (you have indexes, right?)
– gbn, May 4, 2010
do you have it configured so that it does a full scan and not just a sampling?
– Dave Markle
May 4, 2010 at 19:42
I would use normal sampling. Data is loaded into cache in 64 KB extents (8 pages), so you need to sample one row per extent (unless you have very wide rows, but even then you'd get read-ahead IO with Enterprise edition too). It could miss stuff, but it's easy to do, and even 5% sampling with 3 rows per page would load 100%.
– gbn
May 4, 2010 at 19:47
|
For a particular apps I have a set of queries that I run each time the database has been restarted for any reason (server reboot usually). These "prime" SQL Server's page cache with the common core working set of the data so that the app is not unusually slow the first time a user logs in afterwards.
One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4Gb in the machine, the DB is under 1.5Gb currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM?
It could be done the hard way by having a script scan sysobjects & sysindexes and running SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so such a process slowing down users until it completes is less of an issue than the working set not being primed at all.
|
"Priming" a whole database in SQL Server for first-hit speed
|
VTune will give you pretty detailed cache and pipeline analysis. It's not cheap though. I believe some level/edition of VS (I remember it was "team edition" on XP) had a decent profiler.
|
As the title says I'd like to somehow get the cache behavior of my code. I'm running Windows 7 64-bit edition, compiling on Visual Studio 2008 Professional Edition, compiling C++ code.
I understand that there's Valgrind under Linux, but are there any free alternatives I could use, or methods otherwise?
|
Any way to profile code for cache behavior?
|
You are going to want to cache multiple versions of your page: one for the logged-in view and one for the guest view. You can set up the two different views either by VaryByParams or VaryByHeaders.
http://msdn.microsoft.com/en-us/library/aa719665%28v=VS.71%29.aspx
– Glennular, Apr 8, 2010
|
I have an ASP.NET page where I am trying to do some output caching, but ran into a problem.
My ASPX page has
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="MYProject._Default" %>
<%@ OutputCache Duration="600" VaryByParam="None" %>
<%@ Register TagPrefix="MYProjectUC" TagName="PageHeader" Src="~/Lib/UserControls/PageHeader.ascx" %>
<%@ Register TagPrefix="MYProjectUC" TagName="PageFooter" Src="~/Lib/UserControls/PageFooter.ascx" %>
I have a user control called "PageHeader" in the ASPX page. In PageHeader.ascx, I have an ASP.NET Substitution control, where I want to show some links based on the logged in user.
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="PageHeader.ascx.cs" Inherits="MyProject.Lib.UserControls.PageHeader1" %>
<div class="headerRow">
<div class="headerLogo">
<a href="Default.aspx"><img src="Lib/Images/header.gif" alt=""></a>
</div>
<div id="divHeaderMenu" runat="server">
<asp:Substitution ID="subLinks" runat="server" MethodName="GetUserProfileHeaderLinks" />
</div>
</div><!--headerRow-->
In my user control's code-behind I have a static method which will return a string based on whether the user is logged in or not, using session:
public static string GetUserProfileHeaderLinks(HttpContext context)
{
string strHeaderLinks = string.Empty;
// check session and return string
return strHeaderLinks;
}
But the page still shows the same content for both logged-in and guest users.
My objective is to have the page cached except for the content inside the Substitution control. How do I do this?
|
ASP.NET 'Donut Caching' not working
|
Just create the CacheManager using the overloaded constructor which takes the path to your ehcache config file as an argument:
http://ehcache.org/apidocs/net/sf/ehcache/CacheManager.html#CacheManager%28java.lang.String%29
This will create a non-singleton CacheManager which will play nicely with CF9.
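A minimal sketch of that call (the config path and cache name are illustrative):
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class OwnCacheManagerExample {
    public static void main(String[] args) {
        // A non-singleton CacheManager built from our own config file,
        // separate from the instance ColdFusion 9 creates internally.
        CacheManager manager = new CacheManager("/path/to/our-ehcache.xml");
        Cache cache = manager.getCache("myCache"); // a cache defined in that XML
        cache.put(new Element("key", "value"));
        System.out.println(cache.get("key").getObjectValue());
        manager.shutdown();
    }
}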
|
We have been using EHCache with CF8 for a while now with no issues.
We are now moving to CF 9 and it seems that the baked-in version of EHCache with CF 9 is actually conflicting with our EHCache setup.
So is there:
Any way to disable the baked-in version of EHCache? This would be a temporary solution.
If we use the CF9 baked-in caching, is there any way to specify more than one cache in ehcache.xml and most importantly, to put into that specific cache via the tag?
Many thanks in advance.
|
EHCache on Coldfusion 9 - can I create multiple caches or disable it?
|
No, it doesn't. SysCache is an abstraction over the ASP.NET cache. You have to use a different cache for the service.
EDIT:
I remember this blog post: http://www.hanselman.com/blog/UsingTheASPNETCacheOutsideOfASPNET.aspx
They say it should be usable outside a web environment. It's just not recommended, because Microsoft maintains it for use in a web environment. That means that you can use it now, but you might have trouble when .NET 4 (or 5, 6, 7, ...) is released.
|
I know SysCache uses ASP.NET caching under the hood, but since I'm not aware of the implementation of the ASP.NET cache (and whether it depends on anything in IIS), I was wondering if SysCache would work in a non-web application (like a Windows service)?
Activating it and using NHProfiler seems to show it does not.
|
Does NHibernate SysCache work in a non-web app?
|
Your problem is that Drupal caches pages, which is a good thing, since you would probably take a huge performance hit otherwise.
A solution, though not so pretty, is to let JS post the data with Ajax. Since Drupal doesn't cache POST requests, this could work. It is, however, JS-dependent and makes an extra request, which requires extra system resources. But at least it is better than disabling the page cache.
Edit:
After giving it some thought, I think a better solution would be to create a custom block. Since Drupal blocks have their own caching system, you should be able to get the functionality you want. Custom blocks are created with the use of hook_block.
|
I need to run a PHP code snippet in a Drupal template and prevent it from being cached.
The PHP snippet sniffs for a cookie and if found, returns a message based on the cookie value.
ie:
if (isset($_GET['key'])) {
    $cookievalue = $_GET['key'];
}
if (isset($_COOKIE['cookname'])) {
    $cookievalue = $_COOKIE['cookname'];
}
switch ($cookievalue) {
    case 'hmm01': echo "abc";
        break;
    case 'hmm02': echo "def";
        break;
    case 'hmm03': echo "ghi";
        break;
    default: echo "hello";
}
Right now, Drupal renders the messages randomly according to when the page was first cached and based on the cookie used at the time.
I can't see a great deal of info on how I might go about this - it seems that you have to turn the cache off for the page rather than run php code uncached.
Can the community guide me on how to use the correct technique with regards to Drupal templating and PHP based on the information I posted or direct me to a guide or tutorial that may be useful for my situation?
|
Running uncached PHP in a Drupal template?
|
I believe the cells in the table view are recycled.
Cache the images in memory and assign them from your cache rather than loading the images directly into the table view.
I don't know if this is best practice or not, but I think you could use an NSArray or NSDictionary of UIImage, load the images into that first, and just assign references to the objects in the array.
Update
There is some code here which uses an NSMutableDictionary for the cache
|
I have a UITableViewController. Each cell loads an image from my web server.
If I scroll the table view so that a particular cell scrolls out of view and then scroll back, the image for that cell has vanished and I have to wait for it to reload.
I'm guessing this is some performance/memory management thing built into the iPhone?
How can I stop this behaviour?
|
Stop Images Disappearing when scrolling UITableView
|
When you say cache only the master page, do you mean output caching, or caching only the ViewData related to the master page? You can use session or cache mechanisms to cache master page data, but you would have to programmatically pick out what belongs to the master and what is there for the view.
If you are talking output caching, I don't believe output caching is available for master pages. For partial output caching in ASP.NET MVC, Steve Sanderson has some excellent points on this: http://blog.codeville.net/2008/10/15/partial-output-caching-in-aspnet-mvc/
So which type of caching are we talking about?
|
I have a MasterPage that has ViewData passed to it. I would really like to cache only the MasterPage for performance reasons. But I do not want to cache the actual page that is loading with the MasterPage.
I believe this can be done in Web Forms by adding code into the Page_Load event. Does anyone know of a similar technique using ASP.NET MVC to achieve the same goal?
Thanks in advance.
|
Only cache MasterPage in ASP.NET MVC
|
This is because DataCacheFactory is an expensive object to create - you don't want to be creating an instance of it every time you want to access the cache.
What they're showing you in the lab is how to create an instance of DataCacheFactory once to get hold of a DataCache instance, and then storing that DataCache instance in Session state so you can go back to that one each time you access the cache.
Of course, this still means you're creating an instance of DataCacheFactory per user; I think storing it in Application state would be an even better design.
– PhilPursglove, Jan 6, 2010
|
I had my first go at AppFabric caching (aka MS Velocity) today and checked out the MSDN virtual labs.
https://cmg.vlabcenter.com/default.aspx?moduleid=4d352091-dd7d-4f6c-815c-db2eafe608c7
There is this code sample in it that I don't get. It creates a cache object and stores it in session state. The documentation just says:
We need to store the cache object in Session state and retrieve the same instance of that object each time we need to use it.
That's not the way I used to use the cache in ASP.NET. What is the reason for this pattern, and do I have to use it?
private DataCache GetCache()
{
DataCache dCache;
if (Session["dCache"] != null)
{
dCache = (DataCache)Session["dCache"];
if (dCache == null)
throw new InvalidOperationException("Unable to get or create distributed cache");
}
else
{
var factory = new DataCacheFactory();
dCache = factory.GetCache("default");
Session["dCache"] = dCache;
}
return dCache;
}
|
Why is cache object stored in session when using AppFabric?
|
Sorry if I misunderstood something crucial, but wouldn't the obvious solution be a Dictionary<DateTime, object>()?
Edit: If in-memory storage is out of the question, ignore this, of course. A local database to store the information seems the most viable alternative if you are memory-restricted.
– SirViver, May 15, 2011
|
I'm coding a data-intensive app in C#. Currently, the app loads loads and loads of time series from a distant SQL Server, does a lot of calculation to create other time series, and I'd like to access these time series fast.
Each time series has a unique identifier, and should map from DateTime to anything (mostly floats, but sometimes strings, string arrays, etc.).
Do you know any library I could use for that, giving me:
fast and parallel access to these time series?
access to the "tree" version of these time series, to look up the latest date, the last previous date, etc.?
I've had a look at massively parallel caches, such as memcached, Tokyo Tyrant, or Redis, but I'd have to store a somehow-serialized version of each time series to solve my problem.
Cheers!
|
Very fast cache / access to time-indexed data in C#
|
There is a third-party "Cache Manager" which provides tools and stats for the HttpRuntime cache. You can get memory info there manually, or you can use Reflector to peek inside the assembly and see how it collects the stats, and do it yourself in your app
|
Is there any way to find the number of bytes of memory that are currently in the HttpContext.Cache?
I've found where you can get the physical memory limit using EffectivePrivateBytesLimit or EffectivePercentagePhysicalMemoryLimit, but I'm having difficulties finding the current physical memory usage.
Any ideas ?
---UPDATE---
After some more searching, and using the first response mentioning http://aspalliance.com/cachemanager/, I went to that page; at the bottom there is a link to http://www.codeproject.com/aspnet/exploresessionandcache.asp that describes a method to calculate the size of an object that I think will be good enough to use.
Basically it serializes each object in the cache and then finds the length of the serialized stream. Summing these values gives some information I can use.
|
HttpContext.Cache Physical Memory Usage
|
If you expect to have lots of clients accessing this database, written in lots of different languages, perhaps you want to write a thin server layer on top of the database that your clients can connect to. This server could handle the caching, and maybe gzip the data it is sending over the wire. Then your client could just send a message asking for the latest data since time X, and the server could return either just the needed data, or a message saying "no changes since time X"
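A tiny sketch of that protocol over HTTP (everything here — the /changes endpoint, the since parameter, the stub methods — is invented for illustration; Java's built-in HttpServer keeps it self-contained, and the clients can be in any language):
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ChangesServerExample {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/changes", exchange -> {
            // e.g. GET /changes?since=2010-06-01T00:00:00Z
            String query = exchange.getRequestURI().getQuery();
            String since = query == null ? "" : query.replace("since=", "");
            String body = changedSince(since)
                    ? loadChangesSince(since)        // only the rows the client is missing
                    : "no changes since " + since;   // tiny response, client skips the DB work
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }

    // Stubs standing in for real version checks and queries against the database.
    static boolean changedSince(String since) { return false; }
    static String loadChangesSince(String since) { return "{}"; }
}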
|
I would like to design a database that is accessed through a very slow network link, and luckily the database itself is pretty static. So I'm going to use aggressive caching of the results. From time to time, other insertions and updates may happen on tables while the client is running, so I would like to design a low-bandwidth system where the client knows exactly when something has been updated, to avoid even bothering to check the remote database.
My idea was to create a special table with two columns, one the name of the table, and another, a version number. This table would never be modified directly by application queries. It would be updated with a stored procedure. Whenever any table is modified, the stored procedure should increment the number of this special table.
The client can then store the results of the queries in a local database (say sqlite) along with the version number of the table. Then, next time runs a select on the special table and checks if the tables have changed.
How does this sound? Are there other strategies to minimize redundant database bandwidth and aggressively cache the database? The thing is going to be not only cross-platform, but clients in different programming languages will access it (C, C++, Obj-C, Python, etc.), so I'm trying to find the simplest thing that works in all cases.
Ideally I would like to design the tables to be incremental (deletes are actual inserts), so I could just query the highest ID of the table and compare to the local one. However, I don't know where to look for examples of this. Reading CouchDB's technical page makes my head explode.
|
How to design database tables that are cache friendly?
|
I was about to give a less detailed account of the answer you refer to in your question until I read that. I would refer you to it; it seems spot on to me. There's no better way than seeing the physical size on the server; anything else might not be accurate.
You might want to set up some monitoring, for which a PowerShell script might be handy, to record the size and send it on to yourself in a report. That way you could run various tests overnight, say, and summarise the results.
On a side note, they sound like very large documents to be putting in a memory cache. Have you considered a disk-based cache for these larger items, leaving the memory for the smaller items it is more suited to? If your disks are reasonably fast, this should be fairly performant.
|
I'm in active development of an ASP.NET web application that is using server side caching and I'm trying to understand how I can monitor the size this cache during some scale testing. The cache stores XML documents of various sizes, some of which are multi-megabyte.
On the System.Web.Caching.Cache object of System.Web.Caching namespace I see various properties, including Count, which gets "the number of items stored in the cache" and EffectivePrivateBytesLimit, which gets "the number of bytes available for the cache." Nothing tells me the size in bytes of the cache.
In the Understanding Caching Technologies section of the "Caching Architecture Guide for .NET Framework Applications" guide, there is a "Managing the Cache Object" section with a table (Table 2.2: Application performance counters for monitoring a cache) listing a set of application performance counters, but I don't see any that give me the current size of the cache.
What is a good way to find the size this cache? Do I have to set a byte limit on the cache and look at one of the turnover rates? Am I thinking of this problem in the wrong way? Is the answer to How to determine total size of ASP.Net cache really the best way to go?
|
How to determine the size in bytes of the ASP.NET Cache?
|
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
"its a tool to help track down the lazy copiers who just copy the source code as-is. this is not preventative, nor is it a deterrent." – Ian
Going by your above comment, you are happy with users copying your content, just not without the formatting etc. So what you could do is provide the users with an embed type of source code for that particular content, just like YouTube does with videos. Into that embed source code you could add your own links back to your site, utilize your own CSS, etc.
That way you can still allow the members to use the content but it will always come out the way you intended it with links back to your site.
Thanks
– mlevit, Aug 2, 2009
No, we aren't happy with people copying our content either with or without formatting. This is simply a tool to help us track down those who have already copied it.
– Ian
Aug 3, 2009 at 17:08
|
We have members-only paid content that is frequently copied and republished without our permission.
We are trying to 'watermark' our content by including each customer's user id in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), that would help us track the source of the copying, and then we place that class somewhere in the article body.
The problem is, by including user-specific information in an article, the article content becomes ineligible for caching because it is now unique to each user.
This bumps the page load time from ~0.8 ms to ~2.5 sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha ha, that's just a tiny topic, I'm sure...)
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
|
Content Watermarking
|
See comments above for the solution.
EDIT
This comment was deemed the answer:
"Have you tried publishing your site to IIS and looking at the headers there? Is it the same as on the dev server?"
|
I have just started using OutputCache on some of my controller actions and I am not quite getting the response I would expect.
Basically I have set Location = OutputCacheLocation.Any and the HTTP headers are as follows:
Server ASP.NET Development Server/9.0.0.0
Date Wed, 15 Jul 2009 02:14:21 GMT
X-AspNet-Version 2.0.50727
X-AspNetMvc-Version 1.0
Content-Encoding gzip
Cache-Control private, max-age=3600
Expires Wed, 15 Jul 2009 02:14:21 GMT
Last-Modified Wed, 15 Jul 2009 02:14:20 GMT
Vary *
Content-Type text/html; charset=utf-8
Content-Length 640
Connection Close
Now, if my interpretation is correct, the Cache-Control header being set to private means that the response will only be cached on the client. I need it to also be cached on any proxy.
I would have expected that by setting OutputCacheLocation.Any the Cache-Control would have been something like "public, max-age=3600". As far as I know the private means it will only be cached on the client and not by "Any" (i.e. proxies - see http://msdn.microsoft.com/en-us/library/system.web.httpcacheability.aspx).
Any ideas?
Cheers
Anthony
|
ASP.NET MVC: OutputCache and http headers - Cache-Control
|
Info on invalidating the proxy cache: http://linux-sysadmin.org/2010/08/nginx-invalidation-purging-content/
|
I heard recently that Nginx has added caching to its reverse proxy feature. I looked around but couldn't find much info about it.
I want to set up Nginx as a caching reverse proxy in front of Apache/Django: to have Nginx proxy requests for some (but not all) dynamic pages to Apache, then cache the generated pages and serve subsequent requests for those pages from cache.
Ideally I'd want to invalidate cache in 2 ways:
Set an expiration date on the cached item
To explicitly invalidate the cached item. E.g. if my Django backend has updated certain data, I'd want to tell Nginx to invalidate the cache of the affected pages
Is it possible to set Nginx to do that? How?
|
How to set up Nginx as a caching reverse proxy?
|
1
I think that the most important caching best practice is to not worry about it until you need to. Implementing caching before your server load demands it is a waste of time that you could be using to improve other areas of your codebase, add features, etc.
Share
Improve this answer
Follow
answered Jun 5, 2009 at 17:43
SamSam
1,01466 silver badges88 bronze badges
1
Yes, there is no need for caching right now, but I just want to play around with it.
– Dan Sosedoff
Jun 5, 2009 at 18:02
Add a comment
|
|
I'm a beginner in Merb, so I want to know: what are the best practices for caching data?
For example, I have a page that shows a list of books that doesn't change very often, so I'm looking for a way to cache the data. There are 2 choices: cache the data from the db, or cache the whole page (html).
So, are there any tools to make this simple and fast?
Thanks
|
Caching best practices in Merb
|
Maybe you could consider taking the ad code out of your templates/source code and putting it into your database. I think of ads as more "content" for the site which can be managed by your code in the same way as other web site content. You can still set up the ads on a staging site, then copy/paste the relevant bits to your live site with this approach.
|
For a large project I've worked on (~310k uniques/day, large site, lots of templates, lots of content), we have to deal with the client selling several sections of the site (each with different layouts) for ad revenue. Sometimes it's the top of the page for a 900x250, sometimes it's a 952x200 under the nav, sometimes it requires a new div with custom styles. The ads are served through Google's ad manager, and the ad buyers rarely (if ever) agree to customize their implementation code for our site.
All of the code for this site is in an svn repo that we try to keep very tidy. Our options:
edit the templates "online" (on the production server) (such a bad idea)
make changes to a local copy and push live (and risk later reverting back to old ad code and missing it/having to deal with it before going live; people miss things, don't pretend like you don't and say 'check harder')
Neither of those options is particularly attractive. How do you guys do it?
|
how to handle constantly changing ad template code for a site in version control
|
I'm reluctant to call this an answer, as it makes a number of assumptions that should probably be clarified with additional questions first, but here goes.
Assuming there's a piece of code somewhere responsible for adding something to the cache with the key "Animal_Vacinations", and that code knows which other cached items the "Animal_Vacinations" item depends on, then that code should create each of the necessary cache key dependencies, including adding null objects to the cache, if necessary, for any dependent items not already found there.
So, for instance, in the example you give, where there is already "Cat" in the cache prior to adding "Animal_Vacinations", then the logic responsible for adding "Animal_Vacinations" to the cache should check the cache for the existence of each dependent item, i.e., "Cat", "Dog", "Bird", "Jon Skeet"; when one isn't found, a placeholder object or boxed value (maybe an empty string) should be added to the cache for that key (in this case, for "Dog", "Bird", and "Jon Skeet"); once all of the dependent items exist in the cache, then create your cache dependencies and add "Animal_Vacinations" to the cache. (Alternatively, call Cache.Add with a placeholder object for each of the required dependent keys, without first checking if they exist with a Get, and use an exception handler to swallow the exception thrown if it already exists.)
Continuing your example, when subsequent to this activity a real something is added to the cache with the "Dog" key, using Insert instead of Add in order to account for the possibility that the key already exists (as it does in this example), then the replacement of the "Dog" cache item, which was simply a Null value, will trigger the invalidation of the "Animal_Vacinations" cache item per its cache dependency.
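A minimal C# sketch of that placeholder approach (the helper name is made up; it assumes the classic System.Web cache from the question):
using System.Web;
using System.Web.Caching;

static void AddWithKeyDependencies(string key, object value, string[] dependentKeys)
{
    var cache = HttpRuntime.Cache;
    foreach (var depKey in dependentKeys)
    {
        // Insert a placeholder for any dependent key that doesn't exist yet,
        // so that the CacheDependency below can be created successfully.
        if (cache.Get(depKey) == null)
            cache.Add(depKey, string.Empty, null,
                      Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                      CacheItemPriority.Normal, null);
    }
    // Key-based dependency: replacing any dependent item invalidates `key`.
    cache.Insert(key, value, new CacheDependency(null, dependentKeys));
}
Replacing the "Dog" placeholder later via Insert then invalidates "Animal_Vacinations" as described.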
|
I've got two items in my cache
Key: Cat
Key: Animal_Vacinations
Now, Animal_Vacinations has a key-based cache dependency on Cat. So, if anything changes for the cache item Cat, then the cache item Animal_Vacinations gets invalidated. PERFECT :)
Ok.. now to the problem.
After I create the 2nd cache item (i.e. Animal_Vacinations), I add a 3rd cache object:
Key: Dog
Problem is, the 2nd object needs to have a dependency on Dog also. At the time of creating the 2nd object, it knows what items it should depend on. So in this example the Animal_Vacinations object knows it should be dependent on ...
Cat
Dog
Bird
Jon Skeet
Problem is, if I try to insert the Animal_Vacinations object into the cache with all those 4 dependencies, it fails. No error, it just fails (i.e. Cache["Animal_Vacinations"] == null).
The reason for this is that when I insert the cache object with those 4 dependencies, but 1 or more of those dependencies do not _exist_, it fails gracefully.
Bummer.
Because in my example above, one of those three missing objects is added right after the 2nd object is added.
So ... is there any way to add an object to the cache, with key-based cache dependencies, where 1 or more dependencies are not yet created but MIGHT be created later on?
|
Problem with ASP.NET Caching
|
If you're changing the name when you update, then you have the luxury of caching forever. This is a big plus if you're okay with name changing.
It's also a good idea when it comes to scripts and CSS, as some client machines will keep using the cache even if you had ruled otherwise.
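With unique names, each response can then carry a far-future header, for example (the exact value is just an illustration):
Cache-Control: public, max-age=31536000
That is one year; since a changed resource gets a new name, clients never see stale data.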
|
A website of mine will host the usual images, javascript and CSS stylesheets in the database. Since these are unlikely to change each day, I am going to use some client caching on these to reduce the server load.
How long do you cache these? A few days? More?
I'm probably not going to reuse the same name twice if I update the resource, so I shouldn't have outdated data concerns.
|
How long do you cache resources client side?
|
0
You could try cleaning the cache for package like
go clean -cache package/path/
and then go for
go build
Share
Improve this answer
Follow
answered Sep 27, 2023 at 10:53
Adharsh MAdharsh M
3,41433 gold badges1717 silver badges2525 bronze badges
Add a comment
|
|
I have a Go project which uses cgo to use native libraries. I need to test that package to make sure it works with different combinations of native libraries. To do so, I set PKG_CONFIG_PATH environment variables to different paths and run go build multiple times.
However, it seems that go build caches intermediate build artifacts, so the test results are not correct. go build has an -a option to ignore the cache, but it seems to ignore all of the cache; I want to ignore only some of it. Is there any way to do so? Or is there a way to set PKG_CONFIG_PATH as a hash key for the go build cache?
|
go build caches intermediate build artifacts, how can I ignore some of them?
|
0
I think jest cache can be shared between all pipelines, all nodes. The key for the jest cache can just be test-cache-$CI_PROJECT_NAME. So this cache can be used everywhere in the same project. The policy should be 'pull-push'.
cache:
  - key: test-cache-$CI_PROJECT_NAME
    paths:
      - .jestcache
    policy: pull-push
Share
Improve this answer
Follow
answered Dec 9, 2023 at 11:31
Duc Anh TaoDuc Anh Tao
8455 bronze badges
Add a comment
|
|
Running our repo's Jest tests takes up to 20 minutes, which can be considered slow.
Now I want to improve the performance of running the tests.
Currently, we run the tests on GitLab by running:
stage: test
parallel: 4
script:
- yarn test:ci --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
After this optimization, the job still takes up to 20 minutes to finish. So I am considering using the Jest cache.
But how can I define the cache key? If I use
cache:
key: '.jest-cache-${CI_COMMIT_REF_SLUG}-${CI_NODE_INDEX}'
There will be 4 Jest caches for every branch, and many of the entries in the caches are the same, because they all contain the transformed npm packages in their cache, and the cache can take up a lot of space.
Do you have any good ideas?
|
How can I use jest cache on gitlab when the test job runs in parallel
|
0
Media CDN is still a controlled-access service, meaning your Google Cloud project must be allowlisted in order to use the service. I suspect you are getting the error because your project hasn't been authorized yet. Contact your GCP sales representative and they should be able to assist you.
Share
Improve this answer
Follow
edited Apr 18, 2023 at 15:14
answered Apr 17, 2023 at 13:18
DaveDave
49411 silver badge66 bronze badges
3
Thank you. It's strange, Google sent me a tutorial to try the Media CDN service...
– juniper
Apr 17, 2023 at 13:31
Hi @juniper, did you solve this ? How did you do it ?
– Lutaaya Huzaifah Idris
Jul 23, 2023 at 21:02
Media CDN documentation is publicly available, but access to the service is restricted. Your project and portal email address need to be explicitly allowlisted by Google Cloud in order to use the new CDN service.
– Dave
Jul 25, 2023 at 11:43
Add a comment
|
|
I'm trying to use Google Media CDN and I started with the example at:
https://codelabs.developers.google.com/mediacdn-ls-codelab#10
I've created a Google bucket with public access, and after running:
gcloud edge-cache origins create cme-origin --origin-address="gs://bucket2975"
ERROR: (gcloud.edge-cache.origins.create) NOT_FOUND: Method not found.
All the needed APIs are enabled...
What's wrong?
Thanks in advance
I'm expecting the same output as the example:
Create request issued for: cme-origin
Waiting for operation [projects/my-project/locations/global/operations/operation-1612121774168-5ba3759af1919-
3fdcd7b1-99f59223] to complete...done
Created origin cme-origin
|
How can I create a Media CDN origin? I get an error
|
0
Today I ran into the same problem. The issue on GitHub is closed, but for me it is not a "clean" solution; the cache support should be implemented in the library itself.
After some hours, I found this package which works very well: https://docs.expo.dev/versions/latest/sdk/image
If you're installing this in a bare React Native app, you should also follow these additional installation instructions.
By default, it includes a cache and stores it on disk; you can change this if you want with: https://docs.expo.dev/versions/latest/sdk/image/#cachepolicy
A simple change will optimize your code and application, like:
<SvgUri width="50" height="50" uri="url here" />
to
<Image
style={{
width: 50,
height: 50,
}}
source="url here"
cachePolicy="memory-disk"
/>
And of course
npm remove react-native-svg
Share
Improve this answer
Follow
edited Feb 18 at 14:41
answered Feb 18 at 14:36
Ah HuAh Hu
6111 silver badge77 bronze badges
Add a comment
|
|
I'm using SvgUri component from the react-native-svg package. Is there any way to cache these SVG's?
I found this issue on github, but the mentioned solution there didn't work for me.
|
How can we cache Svg's using SvgUri in react-native-svg?
|
0
Use preload_page_view combined with https://pub.dev/packages/visibility_detector; when a page is not being shown, set the video's play state to false.
Share
Improve this answer
Follow
answered Jun 7, 2022 at 9:11
rvngrvng
17611 silver badge77 bronze badges
Add a comment
|
|
I have a Flutter app where I am loading video URLs from Firebase Firestore and displaying them in a vertical PageView, like TikTok, using the video_player plugin. But there's a delay between loading the next and the previous videos, and I want the next video to preload so that the videos are displayed sequentially with no delay.
I have already tried the Flutter preload_page_view package, and it is not working as it loads the next video while the current video is still playing, and it has problems.
I tried allowImplicitScrolling: true, as suggested by someone on Stack Overflow, but I think it works for images and doesn't work for videos in my case.
|
flutter tiktok like video preload
|
0
I got the symfony cache working like this:
composer require symfony/cache
Code
use Symfony\Component\Cache\Adapter\FilesystemAdapter;
use Symfony\Component\Cache\Psr16Cache;
$cache = new FilesystemAdapter();
$psr16Cache = new Psr16Cache($cache);
Settings::setCache($psr16Cache);
Anyway, it did not solve my memory issue.
Share
Improve this answer
Follow
edited Feb 22 at 10:51
tobias47n9e
2,22333 gold badges2929 silver badges5555 bronze badges
answered Apr 19, 2022 at 14:18
PiTheNumberPiTheNumber
23.1k1717 gold badges111111 silver badges183183 bronze badges
Add a comment
|
|
I am using multiple CSV files to populate the tables in a database.
As I am on Symfony, I created a command which reads the files from a given directory in a defined order.
A CSV file corresponds to a table in my DB.
File sizes differ from file to file, and files may contain more than 65 thousand lines.
My script has been running locally for 3 days; it's progressing, but it's heavy.
On a staging server my script stops after just a few minutes, showing this error:
CRITICAL: Error thrown while running command "symfony-command-name". Message: "Failed to store cell BB18118 in cache" {"exception": "[object] (PhpOffice \ PhpSpreadsheet \ Exception (code: 0): Failed to store cell BB18118 in cache at project / vendor / phpoffice / phpspreadsheet / src / PhpSpreadsheet / Collection / Cells.php: 393)
I'm using Symfony FilesystemAdapter and SimpleCacheBridge (1.1).
In my Symfony command I do:
$pool = new FilesystemAdapter();
$simpleCache = new SimpleCacheBridge($pool);
Settings::setCache($simpleCache); // PhpOffice\PhpSpreadsheet\Settings
// loop directories and call Symfony service ...
In Symfony Service:
$spreadSheet = IOFactory::load($csvPath);
$sheet = $spreadSheet->getActiveSheet()->toArray();
// loop sheet and databases operations ...
Stack
PHP : 7.4
SF : 5.3.9
phpspreadsheet : 1.18
SimpleCache : 1.1*
Any Help please
|
Failed to store cell in cache
|
0
I might be wrong, but this sounds like a use case where a dynamic route with getStaticPaths and getStaticProps could work for you.
getStaticPaths would check the existing files and make routes for them, and getStaticProps would fetch the required post when needed. On export, all are fetched and generated.
https://nextjs.org/docs/advanced-features/static-html-export
https://nextjs.org/docs/basic-features/data-fetching#getstaticpaths-static-generation
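As an illustration, a dynamic route for the blog could look roughly like this (the file path and the two helper functions are hypothetical):
// pages/posts/[slug].js
import { getAllPostSlugs, getPostBySlug } from '../../lib/posts' // hypothetical helpers

export async function getStaticPaths() {
  // e.g. scan content/blog for .md files and turn them into routes
  const slugs = await getAllPostSlugs()
  return {
    paths: slugs.map((slug) => ({ params: { slug } })),
    fallback: false,
  }
}

export async function getStaticProps({ params }) {
  // only the requested post is parsed, instead of generating all posts up front
  const post = await getPostBySlug(params.slug)
  return { props: { post } }
}

export default function Post({ post }) {
  return <article dangerouslySetInnerHTML={{ __html: post.html }} />
}
In dev mode each page is then built on demand, and next export still pre-renders every path.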
Share
Improve this answer
Follow
answered Oct 4, 2021 at 19:09
tperamakitperamaki
1,02811 gold badge66 silver badges1212 bronze badges
2
I'm using it, indeed
– Nicolas Zozol
Oct 5, 2021 at 9:24
1
If you are using it, then you should be able to generate the post based on the route only on demand, not all at once? Or are you already, and it's slow even generating that one when opening the post page?
– tperamaki
Oct 5, 2021 at 16:59
Add a comment
|
|
I want to make a simple static blog site, and when I export my pages, everything is good and fast.
But in dev mode, my post generation is slow and I want to cache it, so that it is done only once. This is done in a post.ts file and my cache is const posts: Post[] = [] .
What I expect is that after typing yarn dev, the post.ts file is loaded once and I can fill and reuse my cached post array, but strangely this ts module is loaded many times.
import fs from 'fs'
import path from 'path'
// ...
console.log('????? reloading page')
const posts: Post[] = [] // cached posts, hope to generate once
let postsGenerated = false
export async function getSortedPostsData(): Promise<Post[]> {
// Get file names under /posts
const pendings: Promise<any>[] = []
if (postsGenerated) {
console.log('+++++ Using cache ')
return sortPostsByDate(posts)
} else {
console.log('==== Calculating all')
postsGenerated = true
}
let i = 0
traverseDir('content/blog', (path) => {
    if (path.includes('.md')) { /* ... */ }
})
await Promise.all(pendings)
return sortPostsByDate(posts)
}
The result is that sometimes the cache is used, sometimes not. Even when reloading the same page, the cache is not always used. Why? And how can I improve that?
|
NextJS Using a memory cache in dev does not work
|
0
You need to consider the cache concurrency strategy (CacheConcurrencyStrategy).
If your cache data is updated frequently and you want it to be updated on all instances, then use CacheConcurrencyStrategy.READ_WRITE.
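For illustration, with JCache as the region factory the strategy is declared per entity; a minimal sketch (the Job entity is hypothetical, the region name mirrors the jobsCache from the question's hazelcast.xml):
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "jobsCache")
public class Job {
    @Id
    private Long id;
    // remaining fields, getters and setters omitted
}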
Share
Improve this answer
Follow
answered Jan 4, 2022 at 10:43
kdureidykdureidy
96099 silver badges2727 bronze badges
Add a comment
|
|
I'm trying to implement a shared second-level Hibernate cache using JCache and Hazelcast.
The goal is to have multiple servers joined in a Hazelcast cluster sharing the same Hibernate second-level cache,
so when Hibernate on one of the servers (nodes) updates the cache, all other servers (nodes) have their second-level cache updated as well.
I have managed to establish a Hazelcast cluster with two nodes, where each one "sees" the second-level cache of another.
The problem is that each of the nodes is still using its own cache so when one of them updates the cache,
another continues to fetch old (unchanged) entries from its "outdated" cache.
In other words - I have two second-level caches distributed between two nodes with each node using a different cache.
I'm using Hazelcast 4.2, Hibernate 5.4, Spring Boot 2.4.8
These are my spring-boot properties:
spring.jpa.properties.hibernate.generate_statistics = true
spring.jpa.properties.hibernate.cache.use_second_level_cache = true
spring.jpa.properties.hibernate.cache.use_query_cache = true
spring.jpa.properties.javax.persistence.sharedCache.mode = ENABLE_SELECTIVE
spring.jpa.properties.hibernate.cache.region.factory_class = jcache
spring.jpa.properties.hibernate.javax.cache.provider = com.hazelcast.cache.impl.HazelcastServerCachingProvider
spring.jpa.properties.hibernate.javax.cache.uri = classpath:hazelcast.xml
Sample cache configuration in hazelcast.xml:
<cache name="jobsCache">
<statistics-enabled>true</statistics-enabled>
<management-enabled>true</management-enabled>
<eviction size="200" max-size-policy="ENTRY_COUNT" eviction-policy="LRU" />
<expiry-policy-factory>
<timed-expiry-policy-factory expiry-policy-type="CREATED" duration-amount="10" time-unit="MINUTES"/>
</expiry-policy-factory>
</cache>
Am I missing some configuration or have done something wrong?
Thank you!
|
Shared Hibernate 5 second-level cache with Hazelcast
|
0
I just ran into the same issue. It seems the "octane" cache does not work if you try to use it via the console. When I use it inside my controller (or anywhere the process starts from a web request) it works fine.
I think if you run any command of your application, the OS runs it as a separate process. In this case you can't see the Swoole cache (because that is under a different process) and also can't use SwooleTable (Exception: Tables may only be accessed when using the Swoole server.)
Share
Improve this answer
Follow
answered Jan 21, 2023 at 14:58
szalmagszalmag
1
Add a comment
|
|
Octane Version: 1.0.8
Laravel Version: 8.50.0
PHP Version: 8.0
Server & Version: Swoole 4.6.7
Database Driver & Version: MySQL 8.0.25
Everything works as expected when using Redis for example.
cache()->store('redis')->remember("test_key", now()->addMinutes(), fn() => 'test_value');
Cache::remember() method does not store the value when using Laravel Octane Cache. (returns null)
cache()->store('octane')->remember("test_key", now()->addMinutes(), fn() => 'test_value');
I did other tests and it seems that the Octane store is not persistent. If I use put and then get immediately, I will receive the value; if I use put and then refresh the page, the value will be null. This is only for the Octane driver. The Redis store works fine.
cache()->store('octane')->put("test_key", 'test_value', now()->addMinutes());
cache()->store('octane')->get("test_key"); => returns null
Redis works as expected.
cache()->store('redis')->put("test_key", 'test_value', now()->addMinutes());
cache()->store('redis')->get("test_key"); => returns test_value
|
Laravel Octane Cache not persistent
|
0
You can set the default behavior of client.query by configuring ApolloClient's default fetchPolicy to cache-first in defaultOptions, as shown here.
Something like this:
import { InMemoryCache, ApolloClient } from '@apollo/client';
const client = new ApolloClient({
cache: new InMemoryCache(),
defaultOptions: {
query: {
fetchPolicy: "cache-first"
}
}
});
async fetchRecipients(userIds: string[]) {
const result = await client?.query({
query: MembersBySFIDs,
variables: {sfids: userIds},
});
// do whatever you want with the result here
}
Hope you found this helpful.
Share
Improve this answer
Follow
answered Jun 10, 2021 at 9:53
hackhanhackhan
52233 silver badges77 bronze badges
Add a comment
|
|
I want to refactor the current code to use the ApolloClient constructor to cache information and only query for new data if the requested data isn't already cached.
Currently I'm using fetchPolicy to cache user IDs, but from what I've seen there is a way to cache using Apollo.
async fetchRecipients(userIds: string[]) {
//TODO: How to refactor and use apollo cache?
const result = await client?.query({
query: MembersBySFIDs,
variables: {sfids: userIds},
fetchPolicy: 'cache-first',
});
if (result?.data?.membersBySFIDs) {
await dispatch.newChatMessage.setRecipients(result.data.membersBySFIDs);
} else {
throw new Error('Members not found');
}
}
Here is what I tried so far,
I don't think I am using it correctly, any help is appreciated:
import { InMemoryCache, ApolloClient } from '@apollo/client';
const result = new ApolloClient({
cache: new InMemoryCache()
});
async fetchRecipients(userIds: string[]) {
const result = await client?.query({
query: MembersBySFIDs,
variables: {sfids: userIds},
fetchPolicy: 'cache-and-network'
});
if (result?.data?.membersBySFIDs) {
await dispatch.newChatMessage.setRecipients(result.data.membersBySFIDs);
} else {
throw new Error('Members not found');
}
}
|
How to best use ApolloClient / InMemoryCache and enable Cache on demand for API's?
|
0
C doesn't know anything about your processor, its type, its cache, or anything else related to that.
If you want to access this data, you would need OS-level functionality that supports it (which could potentially be callable from C). You need to look at your OS, and potential other choices, to find out if they support it.
Share
Improve this answer
Follow
answered Jan 23, 2021 at 6:28
AganjuAganju
6,30511 gold badge1313 silver badges2323 bronze badges
1
The OS will only be able to support such a functionality if it is also supported by the CPU architecture. I am unaware of any CPU architecture which would support this. Such a feature is probably only supported by CPU emulators.
– Andreas Wenzel
Jan 23, 2021 at 6:33
Add a comment
|
|
For some reason, I'd like to know the contents of the processor caches at an arbitrary point in my C program. Is there any way I can write the contents of the cache to a file?
|
Can I explicitly dump processor cache content to a file?
|
0
A map can solve this issue, but you also have to make sure to evict your cache when necessary.
If you have more than one instance of your application, you should maybe think about a shared cache, i.e. a Redis database.
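A minimal sketch of a once-a-day refresh for this case (class and method names are made up; the S3 fetch is only stubbed):
import java.time.Duration;
import java.time.Instant;
import java.util.Map;

public class DailyCachedMap {
    private Map<String, String> cached;
    private Instant loadedAt = Instant.MIN;

    // synchronized so that only one caller refreshes at a time; at thousands
    // of calls per hour the lock cost is negligible.
    public synchronized Map<String, String> methodA() {
        if (Duration.between(loadedAt, Instant.now()).toHours() >= 24) {
            cached = fetchFromS3(); // hypothetical S3 call
            loadedAt = Instant.now();
        }
        return cached;
    }

    private Map<String, String> fetchFromS3() {
        // the AWS SDK call that reads and parses the S3 object goes here
        return Map.of();
    }
}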
Share
Improve this answer
Follow
answered Sep 22, 2020 at 19:14
MrBrightsideMrBrightside
11911 silver badge99 bronze badges
Add a comment
|
|
I am writing a methodA() which returns a Map<String, String> by accessing an object in Amazon S3 and this method will be called several times (in the order of thousands) every hour. I want to get the object from S3 every day (and not for every method call) and maintain this map in a local cache for all the method calls. What should my approach be ?
|
Caching a Map in Java
|
let cache = {};
let awaits = {};
const optimizedFetch = (url) => {
return new Promise((resolve, reject) => {
if (url in cache) {
// return cached result if available
console.log("cache hit");
return resolve(cache[url]);
}
if (url in awaits) {
console.log("awaiting");
awaits[url].push({
resolve,
reject
});
return;
}
console.log("first fetch");
awaits[url] = [{
resolve,
reject
}];
fetch(url)
.then(response => response.json())
.then(result => awaits[url].forEach(({
resolve
}) => resolve(result)))
.catch(error => awaits[url].forEach(({
reject
}) => reject(error)))
.finally(() => {
delete awaits[url];
});
});
};
optimizedFetch("https://jsonplaceholder.typicode.com/todos")
.then(({
length
}) => console.log(length));
optimizedFetch("https://jsonplaceholder.typicode.com/todos")
.then(({
length
}) => console.log(length));
|
I've got the following code that retrieves data asynchronously and caches it for performance optimization purposes:
let cache = {}
const optimizedFetch = async (url) => {
if (url in cache) {
// return cached result if available
console.log("cache hit")
return cache[url]
}
try {
const response = await fetch (url)
const json = response.json();
// cache response keyed to url
cache[url] = json
return json
}
catch (error) {
console.log(error)
}
}
optimizedFetch("https://jsonplaceholder.typicode.com/todos").then(console.log)
The approach above works fine but if a second request to the same url comes while the first one is still awaited then a second fetch will be fired.
Could you please advice me on the ways to improve that scenario?
Thanks in advance.
|
How to improve async data retrieval and caching?
|
0
If we assume the L1 cache is 64KB and one cache line is 64 bytes, then there are about 1000 cache lines in total. So, in step 4 the write to the result out[0].m_Foo will not discard the data cached in step 2, as they are in different memory locations. This is the reason why he is using a separate structure for updating m_Foo instead of directly mutating it in place like in his first implementation. He is only talking about the cost up to the point of calculating the value; updating/writing the value will have the same cost as in his first implementation. Also, the processor can optimize loops quite well, as it can do multiple calculations in parallel (not sequentially, since the results of the first loop and the second loop are not dependent). I hope this helps.
Share
Improve this answer
Follow
answered Jul 1, 2022 at 22:29
AbhishekAbhishek
1
1
*1024 cache lines
– Abhishek
Jul 2, 2022 at 16:13
Add a comment
|
|
I've watched Mike Acton's talks about DOD a few times now to better understand it (it is not an easy subject to me). I'm referring to CppCon 2014: Mike Acton "Data-Oriented Design and C++"
and GDC 2015: How to Write Code the Compiler Can Actually Optimize.
But in both talks he presents some calculations that I'm confused with:
This shows that FooUpdateIn takes 12 bytes, but if you stack 32 of them you will get 6 fully packed cache lines. Same goes for FooUpdateOut, it takes 4 bytes and 32 of them gives you 2 fully packed cache lines.
In the UpdateFoos function, you can do ~5.33 loops per each cache line (assuming that count is indeed 32), then he proceeds by assuming that all the math done takes about 40 cycles which means that each cache line would take about 213.33 cycles.
Now here's where I'm confused, isn't he forgetting about reads and writes? Even though he has 2 fully packed data structures they are in different memory spaces.
In my head this is what's happening:
Read in[0].m_Velocity[0] (which would take about 200 cycles based on his previous slides)
Since in[0].m_Velocity[1] and in[0].m_Foo are in the same cache line as in[0].m_Velocity[0] their access is free
Do all the calculation
Write the result to out[0].m_Foo - Here is what I don't know what happens, I assume that it would discard the previous cache line (fetched in 1.) and load the new one to write the result
Read in[1].m_Velocity[0] which would discard again another cache line (fetched in 4.) (which would take again about 200 cycles)
...
So jumping from in and out the calculations goes from ~5.33 loops/cache line to 0.5 loops/cache line which would do 20 cycles per cache line.
Could someone explain why wasn't he concerned about reads/writes? Or what is wrong in my thinking?
Thank you.
|
Data Oriented Design with Mike Acton - Are 'loops per cache line' calculations right?
|
0
To use max-age, you need to URL encode it.
response-cache-control=max-age%3D100
Share
Improve this answer
Follow
answered Jul 29, 2020 at 1:08
jellycscjellycsc
11.5k22 gold badges1616 silver badges3535 bronze badges
1
This unfortunately does not work. It result in 403 errors from Amazon
– raphisama
Jul 29, 2020 at 19:16
Add a comment
|
|
According to the Amazon S3 documentation, there is a query parameter called ResponseCacheControl that can be added to an S3 URL so that the response includes a Cache-Control header, which I need so that my browser will cache the response i.e I need it to return with
Cache-Control: max-age=100
However, the notoriously terrible Amazon S3 docs don't give any information on how to use this parameter!
Does anyone know, what value do I give it to get back the response with the desired header?
|
How do you use the ResponseCacheControl URL parameter in Amazon?
|
0
When issuing a request to the server after the data has been updated, can you please add the below 2 headers to your request:
'Cache-Control': 'no-cache' and 'Pragma': 'no-cache'
Give it a try and let us know whether the issue is resolved.
Share
Improve this answer
Follow
edited Aug 4, 2020 at 15:05
answered Aug 2, 2020 at 16:24
Aditya SinghAditya Singh
13766 bronze badges
Add a comment
|
|
I am facing the worst and most awkward issue of my life.
I am using HttpActionExecutedContext for caching my Web API endpoints.
My web API caching works properly, but when I updated the data and wanted to reset the cache, the problems started.
Problem 1:
When I deleted the bin folder from the server, the API was still sending data to me.
(I have consumed the API in an Android phone. I tested on 2 phones after deleting the bin folder; on the 1st phone the API was giving data even after bin deletion, and on the 2nd phone the API was giving partial data, i.e. one endpoint was working but another was not.)
How can this be possible?
Problem 2:
Where is the data saved when we use HttpActionExecutedContext? Is the data saved in the application pool or something?
Problem 3:
How to clear the cache of the Web API.
Here is the code of WEB API.
public class CacheFilter : ActionFilterAttribute
{
public int TimeDuration { get; set; }
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
actionExecutedContext.Response.Headers.CacheControl = new System.Net.Http.Headers.CacheControlHeaderValue
{
MaxAge = TimeSpan.FromMinutes(1440),
MustRevalidate = true,
Public = true
};
}
}
Controller Code
[HttpGet]
[Route("SubCategory")]
[CacheFilter()]
public string SubCategory()
{
BAL_CAT_ALL obj = new BAL_CAT_ALL();
var data = obj.GetAllSubCategory();
return data;
}
|
Caching problem while using HttpActionExecutedContext in web API c# MVC
|
0
I think you might want something like Apollo's Data Sources, assuming you are getting data from other REST APIs. They have a section specifically about using Redis/memcached as a cache instead of an in-memory one.
So the gist of this answer is: if you're using Apollo Server and want to cache responses from REST APIs, you can use Data Sources with apollo-server-cache-redis.
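For illustration, a rough sketch of that combination (assuming Apollo Server 2.x; typeDefs and resolvers are defined elsewhere, and GithubAPI mirrors the question's use case):
const { ApolloServer } = require('apollo-server');
const { RESTDataSource } = require('apollo-datasource-rest');
const { RedisCache } = require('apollo-server-cache-redis');

class GithubAPI extends RESTDataSource {
  constructor() {
    super();
    this.baseURL = 'https://api.github.com/';
  }
  getUser(username) {
    // ttl (in seconds) controls how long the response is kept in Redis
    return this.get(`users/${username}`, undefined, { cacheOptions: { ttl: 300 } });
  }
}

const server = new ApolloServer({
  typeDefs,
  resolvers, // resolvers call dataSources.githubAPI.getUser(username)
  cache: new RedisCache({ host: 'localhost', port: 6379 }),
  dataSources: () => ({ githubAPI: new GithubAPI() }),
});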
Share
Improve this answer
Follow
answered May 27, 2020 at 22:21
TLaddTLadd
6,64422 gold badges3333 silver badges4040 bronze badges
2
Thanks for the reply here I am using graphQL and caching the API's on the basis of Key which is the user. { getPerson(username: "getify"){ login id name node_id avatar_url email url followers following followers_url public_gists } }
– Dev7867
May 31, 2020 at 19:30
Can someone help me on this ?
– Dev7867
Jun 2, 2020 at 8:29
Add a comment
|
|
Any idea how we can write a GraphQL resolver so that I can cache the API response in Redis, and on the next call it takes the data from Redis instead of hitting the backend API?
Here the user name is unique in the API, i.e. 1. getify and 2. bradtraversy
/// Middleware Function to Check Cache
checkCache = (username) => {
redis.get(username, (err, data) => {
if (err) {
console.log("====111111111111==========");
console.log(err);
}
if (data !== null) {
personInfo = data;
// console.log(data);
console.log("============222222222222=========");
return personInfo;
}
});
};
// Running Code
const resolvers = {
Query: {
getPerson: async (_, { username }) => {
await checkCache(username);
console.log(username);
if(null != personInfo) {
console.log("=======333333333=======")
console.log(personInfo);
return JSON.parse(personInfo);
}
else {
console.log("Fetching Data from API")
console.log(username);
const response = await fetch(`https://api.github.com/users/${username}`).then(response => response.json());
redis.SETEX(username, 300, JSON.stringify(response));
// console.log(response);
return response;
}
}
}
|
How to add redis client for caching in GraphQL resolver
|
0
How about encrypting cached data with session id (from cookie)?
If the user logs out or if the session (cookie) expires, the cookie is deleted and the decryption key is lost.
However, according to this comment:
Yeah, some firefox versions seem to conserve expired cookies forever.
I just deleted some cookies from my Firefox which expired in April
2012. (There were newer cookies with same domain and name, though.)
I know, the comment is old (2013) but still a bit worrying
Share
Improve this answer
Follow
answered May 14, 2020 at 2:04
cimakcimak
1,85433 gold badges1616 silver badges2222 bronze badges
1
Won't work either, you can't access cookie from a service worker: stackoverflow.com/questions/35447567/…
– cimak
May 20, 2020 at 1:58
Add a comment
|
|
Correct me if I'm wrong but I feel like every single article about service workers I have read, covers only one use-case: very simple, "static" website. Like "Here, you can cache your images using SW and now your app can work offline. The end.".
Well... I have an existing app, let's say it's a "TODO-list": user logs in and can view/modify the list. Server communication is based on POST requests (JSON).
Now, I want to make it work offline: user should be able to still view the list (this time from cache).
It's pretty easy to cache POST requests using IndexedDB, but how do I do it securely? Lists can contain sensitive data, and if they are not encrypted, everyone could just open DevTools and browse them.
Any tips, ideas? I need at least some level of security.
My first (not a bright) idea was to encrypt the cached data using the user's credentials, but that wouldn't work: credentials are known only when logging in and are lost after a page refresh.
|
Service worker: caching POST requests with IndexedDB - security concerns
|
0
I think this has to do with caching being enabled or not enabled in your tests.
You can set expectations like this with the current implementation in your example method:
expect(Rails).to receive_message_chain(:cache, :fetch).and_return(expected_value)
You can also inject the ProcedureService instance into the method and set expectations on it like this:
procedure_service_instance = instance_double('ProcedureService', generate: some_value_you_want_to_be_returned)
expect(procedure_service_instance).to receive(:generate)
If you make your example method like this:
def method_example
data = Constant.fetch_from_cache(cache_key)
procedure_service.generate
data
end
then you could get rid of the receive_message_chain expectation and use:
expect(Constant).to receive(:fetch_from_cache).with(cache_key).and_return(expected_value)
expect_any_instance_of(ProcedureService).to receive(:generate) { some_fake_return_value }
Also, you can enable caching in your tests; check these links: link1, link2, link3
I do not know exactly where and how your original is written, but based on the example method you provided, I think setting expectations on the methods that get sent would do the trick. And note that your goal is not to test Rails caching but to test that your code uses it as you want.
Share
Improve this answer
Follow
answered May 7, 2020 at 3:45
Abdullah FadhelAbdullah Fadhel
29422 silver badges99 bronze badges
Add a comment
|
|
I'm having a weird issue where I'm testing a controller that has a procedure that uses caching. The test is failing, but if I do a binding.pry inside the method that does the caching, the test passes.
example of the method containing the caching and the binding.pry:
def method_example
data = Rails.cache.fetch(cache_key) do
ProcedureService.new(params).generate
end
binding.pry
data
end
Example of the test:
it "reverts record amount" do
expect(record.amount).to eq((original_amount + other_amount).to_d)
end
The caching is done via redis_store.
When run in the development environment, it works fine. What I don't understand is why the test fails, but passes when adding a stopper. It seems it could be something about the time it takes to fetch the cache.
UPDATE
Using sleep instead of binding.pry also makes the test pass, so I can assume this is a timing issue. What is the problem exactly? And how can I manage it?
|
Rspec tests failing when using Rails.cache, but pass if I do a binding.pry
|
0
Basically it's possible to add persistence via CacheLoader and CacheWriter. We use that in several ways to use the file system or a database as storage. When adding persistence this way the cache operates in the so-called "cache through" mode. Some operations of the cache, especially get and put, operate transparently and read or write the data via the loader and writer to the storage. Other operations, like CAS operations, just interact with the in-memory cache.
The persistence feature as it was planned was meant to be transparent for all cache operations. Although it's feasible and the basic work is done in the internal infrastructure, we don't have a big need for it. Other features and tasks seem more important. However, I am happy to hear about potential use cases.
Share
Improve this answer
Follow
answered Mar 5, 2020 at 1:24
cruftexcruftex
5,61522 gold badges2020 silver badges3636 bronze badges
2
Here is one use case for you :) We have machines that simply get their power plug pulled. As i need a cache for something right now, i find that there really is no free / open source cache library right now which persists to disk.
– wlfbck
Jun 28, 2021 at 8:11
Forgot to add: I could use a DB, but that comes with all the hassle of using one, and i also need expiry, which caches just always 'have'.
– wlfbck
Jun 28, 2021 at 8:32
Add a comment
|
|
I have used cache2k in my Java project and it was so simple (key-value pairs) and easy to use. Now I want to know whether cache2k is a persistent or non-persistent cache.
I found the answer here,
https://stackoverflow.com/a/23709996/12605243, which in 2014 stated that it was going to be updated to a persistent cache.
So my question is: 'Am I using a persistent or non-persistent cache?'. I have read their docs but was unable to find it.
|
Is cache2k a persistent cache?
|
0
The only safe solution is to never call Cache from a user request context. A cron job which hits a local URL to refresh the cache data is perfectly safe from such race conditions and the associated memory churn.
http://notmysock.org/blog/php/user-cache-timebomb.html
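If the rebuild has to stay in the request path, the atomic locks the question mentions can serialize it. A rough sketch, assuming Laravel 5.8+ and a cache driver that supports locks (e.g. Redis); the function and key names are made up:
use Illuminate\Support\Facades\Cache;

function rememberWithLock(string $cacheKey, int $ttlSeconds, callable $compute)
{
    if (Cache::has($cacheKey)) {
        return Cache::get($cacheKey);
    }

    // the lock auto-expires after 10s in case its holder dies mid-computation
    $lock = Cache::lock($cacheKey . ':lock', 10);

    if ($lock->get()) {
        // this request won the race: compute once, cache, release
        try {
            $value = $compute();
            Cache::put($cacheKey, $value, $ttlSeconds);
            return $value;
        } finally {
            $lock->release();
        }
    }

    // another request is computing: wait up to 5s for it, then read its result
    $waiter = Cache::lock($cacheKey . ':lock', 10);
    $waiter->block(5); // throws LockTimeoutException if still busy
    $waiter->release();

    return Cache::get($cacheKey);
}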
Share
Improve this answer
Follow
answered Dec 1, 2019 at 22:35
George MylGeorge Myl
30122 silver badges55 bronze badges
3
Yeah, I've read that post, but I need it to be fresh; once a minute isn't enough. I need it once every 5-10 seconds.
– Vladyslav Startsev
Dec 1, 2019 at 22:37
1
Have a queued job that runs and re-queues itself with a 5-10 second delay?
– Dwight
Dec 2, 2019 at 1:33
@Dwight yeah, might be a thing
– Vladyslav Startsev
Dec 2, 2019 at 8:42
Add a comment
|
|
How do I avoid cache slams (https://www.doctrine-project.org/projects/doctrine-orm/en/2.6/reference/caching.html#cache-slams)? This question is not about Doctrine but about caching in general.
I need something like this
//pseudo code
// $cacheKey = 'randomCacheKey'.
if(Cache::has($cacheKey)) {
return Cache::get($cacheKey);
}
//do some work
$valueToCache = $this->someComplexTask();
Cache::set($cacheKey, $valueToCache);
return $valueToCache;
The question is: how do I need to do it to avoid cache slams?
For example, if I have 200 parallel requests and all of them notice that there is no cache, they will all try to write to the same key, which will lead to a spike in CPU/memory/DB queries etc.
So I need only one of them to write to this cache, and all the others should wait for it. How do I do it?
This probably has something to do with atomic locks, but it's not clear to me how to use them; the docs don't help me (it's too much of a "hello world"-like example).
|
How to avoid cache slams in laravel?
|
0
I found some ways you can clear the URLSession cache:
1) Replacing URLSession.shared with URLSession(configuration: URLSessionConfiguration.ephemeral)
2) Adding this method before reloading the data: URLCache.shared.removeAllCachedResponses()
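Concretely, the two options look like this (url is assumed to be a valid URL):
// 1) a session that keeps no persistent cache at all
let session = URLSession(configuration: .ephemeral)

// 2) clear the shared cache, then bypass it for a single request
URLCache.shared.removeAllCachedResponses()
var request = URLRequest(url: url)
request.cachePolicy = .reloadIgnoringLocalCacheData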
Share
Improve this answer
Follow
answered Oct 10, 2019 at 18:28
alxlivesalxlives
5,14244 gold badges2929 silver badges5050 bronze badges
0
Add a comment
|
|
I want the image at a URL to update immediately when I upload a new image,
but it always displays the previously uploaded image.
func changeUserIMG(imgURL:String){
if let url = URL(string: imgURL) {
URLSession.shared.dataTask(with: url, completionHandler: {(data,responds,error) in
if error != nil{
print(error!.localizedDescription)
}
else if let imageData = data{
DispatchQueue.main.async {
self.userImage.image = UIImage(data: imageData)
}
}
}).resume()
}
}
Is there any way to overwrite or ignore the UIImage cache?
edit:
func changeUserIMG(imgURL:String){
if let url = URL(string: imgURL) {
let request = URLRequest.init(url: url, cachePolicy: .reloadIgnoringLocalCacheData, timeoutInterval: 60)
URLSession.shared.dataTask(with: request,completionHandler: {(data,responds,error) in
if error != nil{
print(error!.localizedDescription)
}else{
DispatchQueue.main.async {
self.userImage.image = UIImage(data: data!)
}
}
}).resume()
}
}
Even though I use .reloadIgnoringLocalCacheData, it still displays the previously uploaded image.
Where's the problem?
|
How to ignore or overwrite UIImage cache in swift when I uploaded new image to server?
|
0
I don't know of any implementation that queries the database in the way you specify. What does exist are solutions where changes in local caches are distributed among the members in a group. JBossCache is an example where you also have the option to only distribute invalidation of objects. This might be the closest to what you are after.
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/4.3/html/cache_frequently_asked_questions/tree_cache#a19
JBossCache is not a Spring component as such, but creating and using a cache as a Spring bean should not be a problem.
Share
Improve this answer
Follow
answered Jan 16, 2020 at 11:57
Mårten SvantessonMårten Svantesson
1
Add a comment
|
|
One downside to distributed caching is that every cache query (hit or miss) is a network request which will obviously never be as fast as a local in memory cache. Often this is a worthy tradeoff to avoid cache duplication, data inconsistencies, and cache size constraints. In my particular case, it's only data inconsistency that I'm concerned about. The size of the cached data is fairly small and the number of application servers is small enough that the additional load on the database to populate the duplicated caches wouldn't be a big deal. I'd really like to have the speed (and lower complexity) of a local cache, but my data set does get updated by the same application around 50 times per day. That means that each of these application servers would need to know to invalidate their local caches when these writes occurred.
A simple approach would be to have a database table with a column for a cache key and a timestamp for when the data was last updated. The application could query this table to determine if it needs to expire it's local cache. Yes, this is a network request as well, but it would be much faster than transporting the entire cached data set over the network. Before I go and build something custom, is there an existing caching product for Java/Spring that can be configured to work this way? Is there a gotcha I'm not thinking about? Note that this isn't data that has to be transactionally consistent. If the application servers were out of sync by a few seconds, it wouldn't be a problem.
|
Local Cache with Distributed Invalidation (Java/Spring)
|
Meanwhile we accept that behaviour as a feature and see near caches as a separate cache layer.
From this perspective it makes sense to design it like this. So the cluster has some rules for TTL or idle time, but the client can have different requirements for the freshness of items.
|
I have a cache cluster with multiple nodes containing a cache map config which is only valid for 10 minutes (TTL = 600s). Additionally I have some client nodes with near caches configured for that cache.
While debugging I see the following behaviour:
If I explicitly evict an entry in that cache on the cluster node, the corresponding near cache entry is evicted as well. (internally a DeleteOperation is performed).
If the entry is timed-out, the entry in the cluster node is removed but the entry in the near cache is still valid. So the client receives an outdated entry.
When I explicitly set a TTL for the near cache as well the cache is evicted correctly.
My expectation is that a TTL-Expiration is also propagated through the cluster and to all the near caches. Am I doing something wrong or is this behaviour by design?
|
TTL-Expiration on a cluster node does not update my clients NearCache
|
0
Unfortunately, this is not something you can do out of the box.
Looking at the implementation of VideoCaching, it expects a URI identifier in order to store the respective data coming from it in a temporary location. The only way I can think of is adding extra functionality on top of the original implementation!
Good luck in any case :)
Share
Improve this answer
Follow
answered Jul 16, 2019 at 15:34
wizofewizofe
49477 silver badges2020 bronze badges
Add a comment
|
|
In my React Native application I am using React Native Video.
Currently, the library offers caching (using SPTPersistentCache and DVAssetLoaderDelegate). The caching currently implemented is by the asset's URL. Or in other words, if I watch a video from https://video.net/video.mp4 next time I pass the same link to React Native Video, the cached version of the file will be loaded.
However, in my application the same video file can be stored in different places (it will have different download links). Thus, caching would not work properly for me and it might result in an already cached file being re-downloaded once again if its download link is different.
Is there a way I could cache files by a unique ID rather than their download link. All my video files have unique IDs and I would like to cache them by their ID.
Any help would be appreciated.
|
SPTPersistentCache - Video caching by unique ID
|
0
If you're returning a product object, maybe you could try something like:
@GetMapping("{id}")
@Caching(cacheable = {
@Cacheable(value = "product-cache", key = "#id"),
@Cacheable(value = "price-cache", key = "#result.data.product.price.id")
})
public Product getProductById(@PathVariable Long id) {
// some implementation goes here.
}
Share
Improve this answer
Follow
answered Mar 18, 2019 at 9:01
Muhammad InshalMuhammad Inshal
6255 bronze badges
2
But wouldn't the above store the Product object into price-cache with the key as priceId, similar to product-cache where the key is productId and the value is Product?
– dkb
Mar 18, 2019 at 9:53
Oh you're right. I think you'll have to implement it yourself if you need to cache the nested object separately. In redis that would look something like: redisTemplate.opsForValue().set(priceKey,price);
– Muhammad Inshal
Mar 18, 2019 at 10:30
Add a comment
|
|
I have 2 domain Objects Price and Product defined as below.
public class Price {
private Long id;
private Double basePrice;
private Double tax;
private Double maxRetailPrice;
}
public class Product {
private Long id;
private String title;
private Price price;
}
I have defined the Controller method as below:
@GetMapping("{id}")
@Cacheable(value = "product-cache", key = "#id")
public Product getProductById(@PathVariable Long id) {
// some implementation goes here.
}
Is there a possibility to cache both the product and the price separately, with their respective id fields as the keys?
Something like :
@GetMapping("{id}")
@Caching(cacheable = {
@Cacheable(value = "product-cache", key = "#id"),
@Cacheable(value = "price-cache", key = "???")
})
public Product getProductById(@PathVariable Long id) {
// some implementation goes here.
}
How do I store the Price part of the returned Product object into the price-cache, with the key being the id of the Price object?
I have tried multiple ways and combinations using SpEL, but could not get it working.
If anyone has tried something like this please help me.
Thank you.
|
Caching nested objects using Spring Cache?
|
0
This question is answered here: https://stackoverflow.com/a/53367609/1882946
For Python <3.8 this isn't possible, but it works since 3.8.
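For reference, the 3.8+ mechanism looks like this (the cache path and script name are placeholders):
# redirect all bytecode caches to a separate tree
export PYTHONPYCACHEPREFIX=/tmp/pycache
python3 myscript.py

# or per invocation, without the environment variable
python3 -X pycache_prefix=/tmp/pycache myscript.py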
Share
Improve this answer
Follow
answered Feb 20, 2022 at 10:11
SuperlexxSuperlexx
18111 silver badge33 bronze badges
Add a comment
|
|
Python 2 and 3 both generate bytecode in the same directory as, or __pycache__ subdirectory of scripts that you run. One reason it sucks is because it dirties source trees that I would like to keep clean for various reasons that I don't need to explain for this question (please answer this question - not questions that you imagine I have!)
I know you can disable cache generation, but that is inefficient.
Is there a way to run Python (2 or 3) but tell it to store its cache in a completely separate cache directory? Either using an environment variable or a command line flag.
This is not a duplicate of this question or this question.
|
Generate python bytecode cache in a user controlled directory
|
Use this to delete outdated caches:
self.addEventListener('activate', function(event) {
event.waitUntil(
caches.keys().then(function(cacheNames) {
return Promise.all(
cacheNames.filter(function(cacheName) {
// Return true if you want to remove this cache,
// but remember that caches are shared across
// the whole origin
}).map(function(cacheName) {
return caches.delete(cacheName);
})
);
})
);
});
|
So, I have an HTML page with a service worker;
the service worker caches index.html and my JS files.
The problem is when I change the JS, the change doesn't show up directly in the client browser. Of course in Chrome dev tools I can disable the cache. But in Chrome mobile, how do I do that?
I tried to access the site settings and hit the CLEAR & RESET button.
But it still loads the old page/loads from cache.
I tried to use another browser or Chrome incognito and it loads the new page.
Then, I tried to clear my browsing data (just cache) and it works.
I guess that's not how it should work, right? My users won't know the page is updated without clearing the Chrome browser cache.
|
How can I force service worker to clear cache? [duplicate]
|
0
First of all, it is currently not possible to invalidate by domain; CloudFront invalidation is by path only.
If you plan to automate the invalidation once a page is saved, you can use the AWS SDK for CloudFront and call the createInvalidation method from your code.
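For illustration, a minimal sketch with the AWS SDK for JavaScript (v2); the distribution id and path are placeholders:
const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

cloudfront.createInvalidation({
  DistributionId: 'EDFDVBD6EXAMPLE',
  InvalidationBatch: {
    CallerReference: String(Date.now()), // must be unique per request
    Paths: {
      Quantity: 1,
      Items: ['/pages/edited-page.html'], // paths only, no domain prefix
    },
  },
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Invalidation id:', data.Invalidation.Id);
});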
Share
Improve this answer
Follow
answered Sep 18, 2018 at 17:55
AshanAshan
19.3k44 gold badges4949 silver badges6868 bronze badges
1
Hi Ashan. Thanks for the response. I tried using the SDK, but when I set a URL in the path list I got an error about an invalid invalidation path.
– ashedrin
Sep 20, 2018 at 9:19
Add a comment
|
|
I have a CloudFront distribution with multiple domains. For example, all domains like *.mydomain.com are cached in my distribution. And now I have a problem with invalidation. I can't set an invalidation path like one.mydomain.com or http://one.mydomain.com. Every time I need to invalidate the cache I must invalidate it for every domain I have, with the path /.
But my site provides a page editor, and I want to invalidate a page after each save in the editor. How can I invalidate the cache for an individual domain?
|
Individual domain cache invalidation AWS CloudFront
|
0
With WKWebView you can use URLRequests to load your website data and cache it. The URLRequest can be configured with a cache policy during initialization.
init(url URL: URL,
cachePolicy: NSURLRequest.CachePolicy,
timeoutInterval: TimeInterval)
You could try to go forward with returnCacheDataElseLoad. However, this probably would not cover your particular case even if the cache is used primarily. Have a look at this answer regarding application cache and WKWebView.
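For illustration, loading through such a request could look like this (the URL is a placeholder; webView is a WKWebView created elsewhere):
import WebKit

let url = URL(string: "https://example.com/heavy-page")!
let request = URLRequest(url: url,
                         cachePolicy: .returnCacheDataElseLoad,
                         timeoutInterval: 30)
webView.load(request)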
Share
Improve this answer
Follow
answered Aug 22, 2018 at 14:40
ff10ff10
3,10711 gold badge3333 silver badges5858 bronze badges
1
Thank you @ff10 I tried this approach. It does cache the images and resources, but for some reason, the audio files are not cached.
– Ahmed
Aug 22, 2018 at 15:04
Add a comment
|
|
In my app users can open a website. The website is heavy with content (20MB average). A lot of my users don't have access to the internet at all times, so it's essential that we save any content they open (at the same time they view it). And then the next time they open it, it's completely loaded from the disk.
Here's what I'm thinking:
Use UIWebView and cache everything... that's working, but UIWebView is deprecated.
Use WKWebView, but I cannot cache everything :( I'm not even sure I can intercept all the requests and responses.
Use a proxy server inside the app and save all data as it's being loaded from the remote server; I have no idea how to start this.
Basically, I want something similar to https://github.com/evermeer/EVURLCache
but for WKWebView
The question:
Which approach do you recommend? And how can I get around the downsides mentioned?
|
Caching Entire Website - Swift
|
0
Linux will always free the cache as needed; you should never have to do this explicitly.
the application will crash by throwing the exception Java Heap Memory Out of Space Exception
This means there isn't enough swap space to allocate memory to the JVM's heap.
I would either:
increase the swap space,
decrease the heap size, or
pretouch all the spaces to ensure they are allocated eagerly.
Share
Improve this answer
Follow
answered Aug 21, 2018 at 14:47
Peter LawreyPeter Lawrey
529k8181 gold badges762762 silver badges1.1k1.1k bronze badges
1
Swap space is around 11gb, I was able to fix it by increasing heap memory using Java command line arguments. Swap is almost free.
– Bidyut
Aug 21, 2018 at 15:04
Add a comment
|
|
I am facing an issue while deploying a particular application onto the Linux server running Ubuntu 16.04.
The application is written in Java and performs a lot of I/O operations. Over time, while the application runs, the cache consumption increases. Although the output of free -h shows a sufficient amount of available memory, the application crashes by throwing the exception Java Heap Memory Out of Space Exception.
To work around the problem, I execute the clear cache command to free up the cache.
I need some guidance on whether the issue is caused by the cache, or whether something is wrong with how the application runs, as clearing the cache keeps the exception from happening. Does the cache take away JVM memory?
|
Does cache in Linux can cause heap memory out of space exception?
|
I was able to achieve what I wanted to.
1. Created a class LruSpannableCache extending LruCache and overrode the sizeOf method:
@Override
protected int sizeOf(Object key, Object value)
{
//The cache size will be measured in kilobytes rather than number of items.
return ((CacheSpannable)value).size() / 1024;
}
2. Created an object CacheSpannable:
public class CacheSpannable
{
private int key;
private ISpannable text;
// Getters and Setters for key and text...
public int size()
{
return text.length(); //Can also add the byte length for `key`
}
}
3. Finally created a class with static methods to add and get cached spannables:
private static LruCache spannableCache;
private static final int cacheSize = (int) (Runtime.getRuntime().maxMemory() / 1024 / 6);
public static void addSpannable(int key, CacheSpannable value)
{
if (spannableCache == null) spannableCache = new LruSpannableCache(cacheSize);
if (getSpannable(key) == null)
{
spannableCache.put(key, value);
}
}
public static CacheSpannable getSpannable(int key)
{
if (spannableCache == null) spannableCache = new LruSpannableCache(cacheSize);
return (CacheSpannable)spannableCache.get(key);
}
Then to use it, I simply check if the Spannable is cached and, if so, set it to the TextView; if not, I build it, add it to the cache, and then set it to the TextView.
Next step would be to implement a Disk Cache to persist the spannables indefinitely
|
I have a Recyclerview which holds a lot of messages, similar to a chat screen.
In each list item, I have a TextView in which I set Spannable strings. The strings contain images and text, and sometimes just text. The method I have to replace some chars in the strings with Drawables can get quite costly when a ton of characters need to be replaced by images; a noticeable jitter/lag starts to appear on scroll. I have also placed the costly method in an AsyncTask, which improved the speed of the RecyclerView a lot.
Question:
Is it possible to cache a Spannable object with all of its metadata? If the user scrolls up a screen or two and then back down, I want to try and avoid the Spannables being built every time the row is rendered on screen.
If Caching is possible, I'll even go a bit further to implement a "Pre-Cache" method which builds and caches the Spannables on a separate thread so if the user scrolls up and down, it will simply be loaded from the cache rather than building it if it exists resulting in a "faster" or "smoother" user experience.
Drawables
The drawables are loaded/created from FilePaths, Inserted as an ImageSpan into the spannable string after which is set to the TextView in the AsyncTask.
|
Cache Spannable Strings for RecyclerView
|
0
This should help with Retrofit 2:
OkHttpClient okHttpClient = new OkHttpClient()
.newBuilder()
.cache(new Cache(WaterGate.getAppContext().getCacheDir(), 10 * 1024 *1024))
.addInterceptor(chain -> {
Request request = chain.request();
if (NetworkUtil.isDeviceConnectedToInternet(WaterGate.getAppContext())) {
request = request.newBuilder().header("Cache-Control", "public, max-age=" + 60).build();
} else {
request = request.newBuilder().header("Cache-Control", "public, only-if-cached, max-stale=" + 60 * 60 * 24 * 7).build();
}
return chain.proceed(request);
})
.build();
new Retrofit.Builder()
        .baseUrl("https://example.com/") // required by Retrofit; replace with your API's base URL
        .client(okHttpClient)
        .build();
Share
Improve this answer
Follow
answered Jul 6, 2018 at 2:06
Mbuodile ObiosioMbuodile Obiosio
1,50311 gold badge1515 silver badges2020 bronze badges
4
Thanks for your answer, how could I add that interceptor in the code that I just added?
– lulu666
Jul 6, 2018 at 8:10
ok, I think I know how to do it, but I dont really understand the meaning of max-age and max-stale
– lulu666
Jul 6, 2018 at 11:03
ok, I think I know how to do it. I guess max-age 60 means that all calls within 1 minute are going to bring the response from cache. What max-stale means? Is it the time that we are going to have it in cache? Why would you want it in cache for 1 week if you always refresh the cache every minute? To be able to work offline?
– lulu666
Jul 6, 2018 at 11:14
Yes. This is for offline support.
– Mbuodile Obiosio
Jul 6, 2018 at 12:21
Add a comment
|
|
I am following the Android guide for using LiveData: https://developer.android.com/jetpack/docs/guide. I can make calls and get a list of objects back, but I don't understand how I could cache that list of objects. In the example I am not really sure how the class UserCache is defined, and I also don't know how I can add a caching time.
Could you point me to how to do it, please?
This is the class:
@Singleton
public class UserRepository {
private Webservice webservice;
private UserCache userCache;
public LiveData<User> getUser(String userId) {
LiveData<User> cached = userCache.get(userId);
if (cached != null) {
return cached;
}
final MutableLiveData<User> data = new MutableLiveData<>();
userCache.put(userId, data);
// this is still suboptimal but better than before.
// a complete implementation must also handle the error cases.
webservice.getUser(userId).enqueue(new Callback<User>() {
@Override
public void onResponse(Call<User> call, Response<User> response) {
data.setValue(response.body());
}
});
return data;
}
}
Thank you
|
How to cache data for 30 minutes using LiveData + Retrofit?
|
0
I changed the link from https://storage.googleapis.com/subdomain.mysite.com/... to https://subdomain.mysite.com/... (simply removing "storage.googleapis.com") and it works!
Hope it helps others who are stuck as well.
Share
Improve this answer
Follow
answered May 25, 2018 at 16:53
Jo MoranJo Moran
3944 bronze badges
Add a comment
|
|
How can I cache assets stored on Google Cloud Storage (GCS)? I've been trying to make it work for the past 2 days with no luck. My website has a backend & frontend, and the assets are stored on GCS. I tried the following guides:
a. https://support.cloudflare.com/hc/en-us/articles/200168926-How-do-I-use-Cloudflare-with-Amazon-s-S3-Service-
b. https://cloud.google.com/storage/docs/hosting-static-website
c. https://cloud.google.com/storage/docs/static-website#tip-dynamic
Let say my website is example.com, here's what I did:
I created a bucket on GCS "img.example.com"
On Cloudflare I set CNAME with the following:
Name: img.example.com
Value: c.storage.googleapis.com
I set all object in GCS bucket 'readable by public' (https://cloud.google.com/storage/docs/access-control/making-data-public#buckets)
The image is still not cached by Cloudflare, and the response headers still do not show CF-Status. Am I missing something? Any help would be greatly appreciated.
Thank you.
|
How to cache google cloud storage (GCS) with cloudflare?
|
0
Actually, this is not a caching problem.
It looks like the variable name STATUS is reserved.
It doesn't matter what value you give to $status in the controller method.
The $status always contains the actual status of the request, and you can't change it manually. Even if the method is empty, it will return status 200 because the request was successful.
The solution is to use another variable name for your own data.
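A minimal sketch (assuming a Laravel controller method): rename the variable and, if you need a non-200 code, set the HTTP status explicitly on the response:
public function validateIt()
{
    $result = 422; // renamed from $status
    // The second argument to response()->json() sets the actual HTTP status code.
    return response()->json(['result' => $result], 422);
}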
Share
Improve this answer
Follow
edited May 2, 2018 at 12:41
answered May 2, 2018 at 12:27
Виталий КомаровВиталий Комаров
28944 silver badges1616 bronze badges
Add a comment
|
|
I create Laravel+Vue simple REST API web-app.
In Vue component I have a method with an api request.
I simplified this to see the core of the problem:
phpValidate() {
axios
.post("api/validate", self.programmer)
.then(function(response) {
console.log(response.status);
});
}
In the controller I have a method validateIt(), which handles this "api/validate" request.
It returns:
return array('status' => $status, 'data' => $data);
The $status can be equal to 200 or 422, depends on the input data.
The problem is that at some point it began to always return a $status of 200.
Even if I delete all the code from the method validateIt() and just leave two lines:
$status = 422;
return array('status' => $status);
I still receive 200.
If I delete the whole method in the controller, it gives an Internal Server Error 500.
So, the route and the function name are correct.
When I put it back, I can write there whatever I like, it doesn't have to make any sense - it still returns 200!
If I use the debugger, I can see that at the end of the validateIt() method it returns 422.
But when I get the response in phpValidate(), I see 200 again.
Unbelievable!
I tried clearing the caches in a couple of ways, but it doesn't help!
I also tried restarting the server and using different browsers; that doesn't help either.
|
Laravel+Vue. Caching problems (actually, just reserved variable name)
|
0
Set up the cache as per the cache-persist example here:
Then, pass it as a custom cache in the boost configuration, as shown in the cache configuration section here:
For example:
import ApolloClient from 'apollo-boost';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { persistCache } from 'apollo-cache-persist';

const cache = new InMemoryCache({...});

persistCache({
  cache,
  storage: window.localStorage,
});

const client = new ApolloClient({
  uri: 'https://48p1r2roz4.sse.codesandbox.io',
  cache: cache,
});
Share
Improve this answer
Follow
edited Feb 20, 2019 at 15:28
IAmAliYousefi
1,15433 gold badges2323 silver badges3434 bronze badges
answered Feb 20, 2019 at 14:54
Craig FletcherCraig Fletcher
1
Add a comment
|
|
I've got a quick question.
How can we persist the cache using apollo-boost lib?
I am not sure how to implement apollo-cache-persist with the following config.
const client = new ApolloClient({
uri: 'http://localhost:8080/_/service/com.suppliers/graphql',
clientState: {
defaults: {
networkStatus: {
__typename: 'NetworkStatus',
isConnected: false,
},
},
resolvers: {
Query: {},
Mutation: {
updateNetworkStatus: (_, { isConnected }, { cache }) => {
cache.writeData({
data: {
networkStatus: {
__typename: 'NetworkStatus',
isConnected,
},
},
})
return null
},
},
},
},
})
Thx in advance!
|
Persist cache with apollo-boost
|
0
The correct max-age syntax should be
Cache-Control: max-age=0, must-revalidate
You can also try
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
which basically covers all don't-cache directives, in case some proxy in between does not support the newer syntax.
Share
Improve this answer
Follow
answered Mar 22, 2019 at 9:21
William ChongWilliam Chong
2,14711 gold badge1313 silver badges2525 bronze badges
Add a comment
|
|
We are having an ongoing issue with a web application that we have developed: when we deploy a new version of the application, Chrome seems to ignore all the Cache-Control headers and loads the application (index.html) out of the disk cache, causing all kinds of errors for our users.
I've tried several different variations of the Cache-Control header, but it never seems to obey, including Cache-Control: max-age: 0, must-revalidate and Cache-Control: no-cache.
My questions are two-fold: Is there something that I'm missing that might be causing this, and are there techniques that others are using to avoid this sort of problem?
|
Chrome seems to ignore 'Cache-Control: must-revalidate' header
|
The comment of @yegdom is actually the right answer. When adding the @Cacheable annotation, Spring generates a proxy which implements ISiteService. And somewhere in your code, you have a bean requiring SiteService, the implementation.
There are three solutions (in preference order):
Remove the useless interface... A single implementation is just adding complexity for no direct benefit. Removing it will force Spring to use a class proxy
Fix your dependency to use ISiteService
Add proxy-target-class="true" to cache:annotation-driven to tell Spring to create a class proxy (sketched below)
I really do not recommend the last one since you should always depend on the interface or always depend on the class (and delete the interface). Not both at the same time.
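For reference, the third option is a one-attribute change to the cache:annotation-driven line in your app.context.xml (a sketch; everything else stays as in the question):
<cache:annotation-driven cache-manager="EhCacheManagerBean" key-generator="customKeyGenerator" proxy-target-class="true" />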
|
I'm integrating caching into my web application, but for some reason the ApplicationContext fails to load when I add the @Cacheable annotation.
I have been trying to solve the issue for two days now; your help is really appreciated!
app.context.xml
<cache:annotation-driven cache-manager="EhCacheManagerBean" key-generator="customKeyGenerator" />
<bean id="EhCacheManagerBean" class="org.springframework.cache.ehcache.EhCacheCacheManager" p:cache-manager-ref="ehcacheBean" />
<bean id="ehcacheBean" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:configLocation="classpath:EhCache.xml" p:shared="true" />
<bean id ="customKeyGenerator" class="com.app.site.v2.cache.customKeyGenerator"/>
<bean id="siteService" class="com.app.site.v2.SiteService" primary="true"/>
EhCache.xml
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
updateCheck="true"
monitoring="autodetect"
dynamicConfig="true">
<diskStore path="java.io.tmpdir" />
<cache name="cacheSite"
maxEntriesLocalHeap="100"
maxEntriesLocalDisk="1000"
eternal="false"
timeToIdleSeconds="300"
timeToLiveSeconds="600"
memoryStoreEvictionPolicy="LFU"
transactionalMode="off">
<persistence strategy="localTempSwap" />
</cache>
Method that is being cached
public class SiteService implements ISiteService {
@Cacheable("cacheSite")
public JsonObject getSiteJson(String siteId, boolean istTranslated) { ... }
}
Exception that is being thrown
org.springframework.beans.factory.BeanNotOfRequiredTypeException: Bean named 'siteService' is expected to be of type 'com.app.site.v2.SiteService' but was actually of type 'com.sun.proxy.$Proxy57'
|
Failed to load ApplicationContext when using @Cacheable annotation
|
0
I don't know about that problem, but if you need to clear the cache you can do that manually by deleting the contents of the var/cache/dev folder. It might help until you find where the problem is.
Share
Improve this answer
Follow
answered Nov 4, 2017 at 18:41
some_guysome_guy
39022 silver badges1414 bronze badges
Add a comment
|
|
Whenever the cache is cleared in my Symfony project, either manually or e.g. via composer update, the public assets folder web/bundles/app is deleted.
I've got no idea why this is; it never happened on any of my previous Symfony projects (only a couple, but still). Google returns absolutely nothing, which I find really strange. I've asked a friend of mine that's been working with Symfony for a few years, and he told me he's never seen that happen before.
I'm sorry I don't provide any code for this question, but I have literally no idea where the problem might come from. I'll update this with code requested in the comments if needed.
|
Clearing the cache deletes the web/bundles/app folder
|
Ok, so after doing research, I see the problem. The key is not being saved because the Rails.cache.fetch method is utilizing a null_store, which doesn't actually store any data. In my config/environments/development file there's the following code:
if Rails.root.join('tmp/caching-dev.txt').exist?
config.action_controller.perform_caching = true
config.cache_store = :memory_store
config.public_file_server.headers = {
'Cache-Control' => 'public, max-age=172800'
}
else
config.action_controller.perform_caching = false
config.cache_store = :null_store
end
and since I don't have a tmp/caching-dev.txt file, the null_store is being used. The second-to-last line can be updated to read:
config.cache_store = :memory_store # or whatever kind of store you like
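Alternatively, Rails 5.1+ ships a task that creates/removes that tmp/caching-dev.txt toggle file for you:
bin/rails dev:cache   # toggles caching in the development environment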
|
I'm using auth0 for authentication and using one of their provided methods (copied below) to confirm the jwt token. The solution they provide hits their service on every request to my server, which is making the requests take up to 1 second to complete.
I'm trying to cache the public key that is generated by the method they provide, but having no luck. At first I thought I could store it in Rails.cache, but then realized the method creates an OpenSSL object, not a string; when I try Rails.cache.write('KEY', openSslObject) and access it with Rails.cache.fetch('KEY'), I get nil returned.
I also tried to use a block with the rails cache fetch:
cached_jwks_hash = Rails.cache.fetch("JWKS_HASH", expires_in: 10.hours) do
get_jwks_hash
end
but still get nil
The get_jwks_hash method below returns the following: {"key"=>#<OpenSSL::PKey::RSA:0x007fe29c545ef8>}
What would be the best way to cache this data? Is it possible to store this in a variable in memory?
def verify(token, jwks_hash)
JWT.decode(token, nil,
true, # Verify the signature of this token
algorithm: 'RS256',
iss: "https://#{ENV["AUTH0_DOMAIN"]}/",
verify_iss: true,
aud: ENV["AUTH0_API_AUDIENCE"],
verify_aud: true) do |header|
jwks_hash[header['kid']]
end
end
def get_jwks_hash
jwks_raw = Net::HTTP.get URI("https://#{ENV["AUTH0_DOMAIN"]}/.well-known/jwks.json")
jwks_keys = Array(JSON.parse(jwks_raw)['keys'])
Hash[
jwks_keys
.map do |k|
[
k['kid'],
OpenSSL::X509::Certificate.new(
Base64.decode64(k['x5c'].first)
).public_key
]
end
]
end
|
cache auth0 public key
|
0
The IBM object storage currently does not have all the options that AWS S3 has; the valid API operations are listed here: https://ibm-public-cos.github.io/crs-docs/api-reference
As you can see, there is no support for cache control.
Share
Improve this answer
Follow
answered Oct 12, 2017 at 18:15
Nelson Raul Cabero MendozaNelson Raul Cabero Mendoza
4,38611 gold badge1414 silver badges1919 bronze badges
3
thanks, I appreciate your ref, my last doubt is about the existence of some mechanism to set expiration header to my files in object storage and thus gain better performance, do you know how to do that ?
– user8322093
Oct 13, 2017 at 23:19
1
That only can be done for swift object storage, you need to set the header "X-Delete-At" and the time you want. see community.runabove.com/kb/en/object-storage/…
– Nelson Raul Cabero Mendoza
Oct 14, 2017 at 12:07
X-Delete-At deletes the file from object storage. What they mean by "Expires" and "Cache-Control" is to set HTTP headers, maybe via setHttpExpiresDate() and setCacheControl() so clients can cache the response until the given time or for a given period, so they don't keep hitting your bucket, using up requests and transfer. It seems this may be supported in the CLI (see: --cache-control), but unlike Oracle Cloud it isn't exposed on the web-based console; nor in Aspera Connect: cloud.ibm.com/docs/…
– GreenReaper
Jan 22, 2020 at 0:46
Add a comment
|
|
I uploaded an object with Cache-Control as a parameter, and it does not take effect in the object storage bucket, although it does in an AWS S3 bucket with the same code:
$s3Client->putObject([
'ACL' => 'public-read',
'Bucket' => config('filesystems.disks.object-storage.bucket_name'),
'CacheControl' => 'public, max-age=86400',
'Key' => $path,
'SourceFile' => $path,
]);
I don't really understand why the same code does not have the same effect in both cloud buckets, since both use the S3 API.
The uploaded file has the Cache-Control header in AWS S3, but the same file in IBM object storage doesn't get the same result.
How can I correctly set the Cache-Control header on an object storage file?
|
Storing an object with a Cache-Control header in object storage is unachievable
|
When you build using ng build --prod, which implies AOT as well, the generated bundles have new names.
A common approach is to configure IIS to never allow caching of index.html. That way, if clients see the same bundle name in the always-current index.html, it's served from cache, but if you generate a new bundle, it's downloaded.
Related SO answer here.
Also, you can google 'Cache Busting' to find a lot of resources.
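As a sketch, the IIS side could be a web.config fragment like this (the path and header values are illustrative):
<location path="index.html">
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Cache-Control" value="no-cache, no-store, must-revalidate" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</location>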
|
When I am ready to publish my code changes to our server, I build and compile using Angular AOT and copy my files over. Sometimes end users aren't getting these changes and have to do a hard refresh, or go into their browser history and clear cached images and files. What is the recommended method of forcing a hard refresh when I make changes to code on the server?
I've read about appending a version number to .css and .js files so the browser re-downloads the newly named files, but with angular AOT creating ngfactory.js files and ngfactory.js.map files, etc, I want to make sure I am doing this properly.
Also, I am hosting the site using IIS, so if there is a way to achieve the refresh through IIS I am open to that as well.
|
(Angular 4, AOT, cache refresh) How to force end users browsers to do a hard refresh when code changes occur on server?
|
0
The HttpClient already has (good) caching itself. If the default caching behavior is not enough, you can further control the cache through the HttpCacheControl class, which separates read and write behavior.
More important is knowing how to use your HttpClient. Even though it implements IDisposable, you should NOT dispose it; keep a single HttpClient object alive through your whole application, as it's designed for re-use.
What the HttpClient doesn't do is return the cached result while you are disconnected. For that, there are other libraries like Akavache, which creates an offline key-value store.
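A minimal C# sketch of that cache control, using the Windows.Web.Http filter (OnlyFromCache is one of several read behaviors; Default and NoCache also exist):
using Windows.Web.Http;
using Windows.Web.Http.Filters;

var filter = new HttpBaseProtocolFilter();
// Answer requests from the local HTTP cache only - useful for the offline case.
filter.CacheControl.ReadBehavior = HttpCacheReadBehavior.OnlyFromCache;
var client = new HttpClient(filter);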
Share
Improve this answer
Follow
answered Aug 14, 2017 at 11:43
BartBart
9,97577 gold badges4848 silver badges6565 bronze badges
2
I do not dispose HttpClient, I use singleton, but it still makes the same request to web server (observed trought Fiddler). Why?
– Liero
Aug 14, 2017 at 12:20
BTW: you can set the cache flags on HttpClient's HttpBaseProtocolFilter to get data only from the cache. It's specifically designed for handling the "no network" case. (But always keep in mind that http caches aren't a database! the system might well decide to remove stuff from the cache or never add it to the cache. You know, for Reasons).
– PESMITH_MSFT
Jun 26, 2018 at 0:33
Add a comment
|
|
I own both WebApi server (asp.net core app) and the client (UWP app).
I call the WebApi services using HttpClient from the UWP app.
Some resources are readonly and therefore can be cached:
[ResponseCache(Duration = 60*60*12, VaryByQueryKeys = new[] { "id" }, Location = ResponseCacheLocation.Client)]
[HttpGet("{id}")]
public IActionResult Get(string id) { ... }
Is it possible to enable caching in HttpClient in UWP app or do I have to do it on my own?
|
HttpClient caching in UWP apps
|
A server restart fixed my issue. I'm not sure why this was required with hot reloading; the code was correct.
|
I have the following component that mutates data. Apollo provides functionality to update the store automatically. I would like to control the way the data is added to the store using the update function. The documentation is straightforward enough, but I can't get it working. What is wrong in the code below that would prevent the console.log from printing?
import React from 'react'
import { connect } from 'react-redux';
import { graphql, gql, compose } from 'react-apollo';
import { personCodeSelector } from '../../selectors/auth';
import UploadBankStatement from '../../components/eftFileUploads/UploadBankStatement.jsx';
const createEftFileUpload = gql`mutation createEftFileUpload(
$bankAccountCode: String!,
$uploadInput: UploadInput!,
$uploadedByPersonCode: String!) {
createEftFileUpload(
bankAccountCode: $bankAccountCode,
uploadInput: $uploadInput,
uploadedByPersonCode: $uploadedByPersonCode) {
id
bankAccountCode
fileName
numberOfProcessedItems
numberOfUnallocatedItems
createdAt
status
}
}`;
const mutationConfig = {
props: ({ ownProps, mutate }) => ({
createEftFileUpload: (bankAccountCode, uploadInput) => {
return mutate({
variables: {
bankAccountCode,
uploadInput,
uploadedByPersonCode: ownProps.personCode
},
update: (store, something) => {
console.log("ping");
console.log(store, something);
},
});
}
})
};
const mapStateToProps = state => {
return {
personCode: personCodeSelector(state)
};
};
export default compose(
connect(mapStateToProps),
graphql(createEftFileUpload, mutationConfig)
)(UploadBankStatement);
Note: I have found a couple of similar issues, but they don't seem to shed any light on my situation.
|
Update method in mutation not running
|
0
For now, our work-around is to use ehcache and Spring to cache method calls instead.
Once we can move to JPA, we will likely use ehcache anyway, so this isn't that bad, I guess.
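A minimal sketch of that workaround with Spring's cache abstraction (the service name, cache region, and repository call are illustrative):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class SiteLookupService {

    // The return value is cached per id in the "sites" cache region,
    // so the database is hit only on a cache miss.
    @Cacheable("sites")
    public Site findSite(long id) {
        return repository.load(id);
    }
}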
Share
Improve this answer
Follow
answered Jul 21, 2017 at 14:18
PucePuce
37.7k1313 gold badges8282 silver badges155155 bronze badges
Add a comment
|
|
In our project, the "old" native (non-JPA) version of EclipseLink is still in use. The mappings are configured using the Workbench Application (which generates XML configuration files and some Java code).
What we see in the workbench tool: it looks like for all entities the cache is enabled with isolation level Shared (default).
What we see in the application: no entity gets cached
What we want: Enable second level caching for some few entities only
EDIT
Config in EclipseLink Workbench:
Generated XML config:
[...]
<refresh-cache-policy/>
<caching-policy/>
[...]
Generated project code:
// ClassDescriptor Properties.
descriptor.useSoftCacheWeakIdentityMap();
descriptor.setIdentityMapSize(100);
descriptor.useRemoteSoftCacheWeakIdentityMap();
descriptor.setRemoteIdentityMapSize(100);
descriptor.setReadOnly();
descriptor.setAlias("SomeAlias");
// Cache Invalidation Policy
TimeToLiveCacheInvalidationPolicy policy = new TimeToLiveCacheInvalidationPolicy(1000);
policy.setShouldUpdateReadTimeOnUpdate(false);
descriptor.setCacheInvalidationPolicy(policy);
// Query Manager.
descriptor.getQueryManager().checkCacheForDoesExist();
|
Native EclipseLink: enable second level cache for some entities
|
0
Check the upload_max_filesize variable in your php.ini file.
It might be set to 128M; try increasing it to the desired value.
Share
Improve this answer
Follow
answered May 16, 2017 at 10:11
VarunVarun
1
2
What if i don't have access to the file in the prod env ?
– Tikroz
May 16, 2017 at 10:43
If your server allows PHP config changes through .htaccess, then you can create .htaccess file in the same folder as your php file and in that file write: php_value upload_max_filesize 256M php_value post_max_size 256M
– Varun
May 16, 2017 at 12:31
Add a comment
|
|
I'm currently developing a PHP website and I need to store files in my database.
I'm using a LONGBLOB to store files such as PDF, PPTX, ...
The file upload was working fine until I got this error:
Fatal error: Allowed memory size of 134217728 bytes exhausted
Here is my function :
public function uploadFile() {
// We upload only pdf for now
if (isset($_FILES['fichier']['name']) && $_FILES['fichier']['type']=="application/pdf"){
$tmp_name = $_FILES['fichier']['tmp_name'];
// Avoid problem with space
$nom = str_replace(' ','_',$_FILES['fichier']['name']);
$taille = $_FILES['fichier']['size'];
$type = $_FILES['fichier']['type'];
$fp = fopen($tmp_name, 'rb');
$content = fread($fp, filesize($tmp_name));
$statement = $this->db->prepare("INSERT INTO document(nomfichier,fichier,typefichier,taillefichier) VALUES (?,?,?,?)");
$statement->bindParam(1, $nom);
$statement->bindParam(2, $content, PDO::PARAM_LOB, $taille);
$statement->bindParam(3, $type);
$statement->bindParam(4, $taille);
$statement->execute();
$statement->closeCursor();
// Redirect
header('Location: documentation');
die('redirect');
} // end if (closing brace missing in the original snippet)
} // end uploadFile()
Edit: the problem comes from the database, which chose a BLOB instead of a LONGBLOB when it was regenerated.
|
I got Fatal error: Allowed memory size of 134217728 bytes exhausted when uploading a file
|
To my understanding, SQLDependency works by using DependencyListener, which is an implementation of RepositoryListener and relies on Service Broker; as you stated, Azure SQL does not support Service Broker. But you could use the PollingListener implementation of RepositoryListener to verify a change.
"The PollingListener will run until cancelled and will simply compare the result of the query against until change is detected. Once it is detected, a callback method will be called"
(Source1)
(Source 2)
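A rough sketch of the polling idea in C# (this is not the actual PollingListener API; the query runner, interval, and callback are illustrative):
using System;
using System.Threading;
using System.Threading.Tasks;

static async Task PollForChangeAsync(Func<string> runQuery, Action onChanged, CancellationToken ct)
{
    var last = runQuery(); // e.g. a row version or checksum computed from the table
    while (!ct.IsCancellationRequested)
    {
        await Task.Delay(TimeSpan.FromSeconds(30), ct);
        var current = runQuery();
        if (current != last)
        {
            onChanged(); // invalidate the cache here
            last = current;
        }
    }
}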
|
I have two apps. One inserts into an Azure SQL DB and the other reads. I want the second app to cache query results and invalidate the cache only when something has changed in the table/query results. In standalone SQL Server this was possible via the SQLDependency (or SQLCacheDependency) mechanism. As far as I understood, in Azure SQL this mechanism is unavailable: it requires the Service Broker component to be enabled, and there's no such component in Azure SQL.
I apologize if I repeat already-asked questions, but all the answers come from 2012 or so. Were there any changes? It's 2017.
And the question is: what is the mechanism to inform an application (say, ASP.NET) about changes in Azure SQL?
PS: I know there's a related feature, Change Tracking, but that is about inserting records about changes into a special table. That is "within" the database. I need to inform an app outside of the DB.
|
Is there equivalent of SQLdependency in AzureSQL?
|
0
Bypassing the cache based on a response header cannot be done, because the bypass decision is made before the request is sent upstream; Nginx would have to proxy the request to the backend and inspect its response first, defeating the purpose of the proxy cache.
Share
Improve this answer
Follow
answered Sep 26, 2017 at 9:56
sudokaisudokai
53722 gold badges88 silver badges1616 bronze badges
Add a comment
|
|
I want to implement a custom nginx cache control method from my scripts, by using a custom header: "Do-Cache".
I used in http block of nginx:
map $sent_http_do_cache $nocache {
public 0;
default 1;
}
And in the server block of nginx:
fastcgi_cache_bypass $nocache;
fastcgi_no_cache $nocache;
So, for Do-Cache: public, nginx should cache the response. Otherwise not.
But this configuration is not working. Judging by the debug logs, the values of $sent_http_do_cache and $nocache are the right ones until they are used in the server block of nginx. When using them in the server block (fastcgi_cache_bypass $nocache, or a simple set $a $nocache), the $nocache variable gets the value "1", and $sent_http_do_cache is "-".
Is the any other way of managing the cache of nginx based on custom header in response?
|
Nginx cache bypass by custom response header
|
The path which will be generated in your case contains a /, so PHP can't create the file because it would be in a folder that doesn't exist.
We can see that this issue is in the Files.php of the Apix/Cache library we are using to allow different caching options for the silex boilerplate. Nevertheless, we've found a way to fix this issue for now - and we will create a new pull request for Apix/Cache so it will check for / before saving.
For you this means:
composer update
to install our new php-client version v1.1.11.
|
Currently I am using the silex boilerplate from the storyblok github repository, where I load the stories via the getStories function.
My code looks like this:
{%
set reference = getStories(global('references_path'), 1, 0, options('{"filter_by[customer_name]":"' ~ item.customer_name ~ '"}'))
%}
This code is called from another twig component in a loop.
For one "reference" I do get this error message:
file_put_contents(../cache//c3RvcnlibG9rOnN0b3JpZXMvYTo0OntpOjA7czoxMDoiRXJkZ2FzIE/DliI7aToxO3M6MTE6ImRlL3Byb2pla3RlIjtpOjI7aToxO2k6MztzOjM6ImZzcCI7fQ==):
failed to open stream: No such file or directory in
/webapp/vendor/apix/cache/src/Files.php
It seems to be an issue with the cache.
Thanks in advance.
|
failed to open stream php error when loading stories from storyblok in twig
|
0
Definitely choice B is better.
Let's first take a look at how these patterns are classified: decorator is a structural pattern and strategy is a behavioral pattern.
Decorator is a structural design pattern that lets you attach new behaviors to objects by placing these objects inside special wrapper objects that contain the behaviors.
Based on your question, you want to add new behavior to an existing object, and the definition of decorator describes your problem almost word for word. The cache is the new behavior you want to add. I have used the pattern for a similar problem and it usually works very well. It works well with a class that you can't change, or with new features that may someday need caching.
Strategy is a behavioral design pattern that lets you define a family of algorithms, put each of them into a separate class, and make their objects interchangeable.
I haven't used strategy to cache data; according to the definition, it has a different purpose, and it can be harder to retrofit onto already-released classes. Strategy has many use cases - for example, it can be better when you perform calculations that may change.
I have used the strategy pattern when working with a database, but that was a case where I wanted to support many different complex queries. That way, I could implement the queries with the strategy pattern and pass the chosen strategy to the object that handles the database connection.
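For illustration, a bare-bones decorator over the question's IEmp (the interface members are assumed to be void methods; the caching bodies are placeholders):
interface IEmp { void Add(); void Update(); void Remove(); }

class CachedEmp : IEmp
{
    private readonly IEmp inner; // typically the table-backed Emp

    public CachedEmp(IEmp inner) { this.inner = inner; }

    public void Add()    { inner.Add();    /* refresh the cache entry */ }
    public void Update() { inner.Update(); /* refresh the cache entry */ }
    public void Remove() { inner.Remove(); /* evict the cache entry */ }
}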
Share
Improve this answer
Follow
answered Feb 4, 2019 at 18:11
T.NylundT.Nylund
69766 silver badges1111 bronze badges
Add a comment
|
|
I need to implement CRUD operations on a data source which could be a physical table or logic (an in-memory cache holding data aggregated from multiple tables). The ideal choice for the data source is a table in the DB, but for various reasons there is an alternate implementation: an in-memory cache class mimicking the ideal implementation.
interface IEmp
{
Add();
Update();
Remove();
}
There are 2 implementation :
Class Emp
Operates on Physical table in sqlite
Class EmpCache
Operates in-memory cache - aggregate data from multiple other tables
Based on performance or other non-functional needs, the consumer of the class may choose to switch to either of the 2 options.
I am thinking of applying a design pattern here so as not to cause much rework.
I see 2 design patterns applicable here:
a. Strategy pattern -
There will be 2 separate implementations of an interface IEmp (as above).
e.g.
Class EmpTable
{
IEmp table;
bool isInMemory;
}
based on isInMemory (true/false), table will switch the underlying instance to one of the above implementations {Emp or EmpCache}
b. Decorator pattern - another interface extends and encapsulates the IEmp interface, and based on a property change it will act/delegate as appropriate
e.g.
IEmpCache : IEmp
{
IEmp instance;
bool useCache;
}
EmpCache : IEmpCache
{
Add()
{
if(!useCache)
{
instance.Add();
}
//cache logic
}
... // same for all other methods
}
I see approach (b) as better, but it is mostly used when you need to add/enhance already-released functionality (a class/interface), isn't it?
Which is better? Is there any other, better pattern?
|
Design Pattern for dual purpose class (in-memory cache or table)
|
0
Try setting
@ini_set("memory_limit",-1);
at the top of your script. Run
php -i|grep php.ini
to find where php.ini is located, and echo phpinfo() after setting it to verify the change took effect.
Unset your variables after using them and optimize your code.
Share
Improve this answer
Follow
answered Feb 8, 2017 at 9:56
leon leon
1033 bronze badges
Add a comment
|
|
I have a Symfony3 project on my localhost (Windows 7 Entreprise) and I use Wamp Server 3.0.0 with PHP 5.6.16 and Apache 2.4.17.
When I do : php bin/console cache:clear
I have this error :
I also have a memory limit error every time I use Composer (for installing bundles for example).
I modified my php.ini : memory_limit = 1G and restarted all services of Wamp.
I still have this problem. My project is a big one so maybe it comes from it.
The only solution that I found is to increase memory limit in every command line (-1 = unlimited) :/
php -d memory_limit=-1 C:\Path\of\composer.phar require ...
In production, my project is on a Windows Server 2008 R2.
Do you have a better way to increase the memory limit for my entire project? Thanks for your help.
|
How to increase PHP memory limit for Symfony project under Windows?
|
0
It is likely due to a known product bug in Sitecore 8.0/8.1 which incorrectly handles in-line cache settings.
See this blog post for more info - https://vladimirhil.com/2016/02/28/html-sitecore-renderingpathorid-does-not-apply-varyby-settings/
Share
Improve this answer
Follow
answered Feb 1, 2017 at 16:57
Paul GeorgePaul George
1,79711 gold badge1616 silver badges3333 bronze badges
Add a comment
|
|
I'm using Sitecore 8.0 and I have a dynamic breadcrumb, with the cache settings applied when rendering the view:
@Html.Sitecore().Rendering(RenderIds.Breadcrumbs, new {Cacheable = true, Cache_VaryByData = true, Cache_VaryByUrl = true, Cache_VaryByParameters = true, Cache_VaryByQueryString = true })
Also the cache settings are setup in Sitecore on the rendering.
The issue is that when I access the same item coming from different paths, the breadcrumb is not updated and shows a cached path depending on what got there first. After a while it gets updated, but it will fail again when I go to the item through another path.
I have removed the caching settings from the view and it looks to work correctly.
Any idea why this is happening or if I should not use caching on dynamically generated content?
|
Sitecore dynamic content does not update when caching is set up
|
0
Try making your response header:
add_header Cache-Control "public, max-age=2592000, s-maxage=2592000";
Share
Improve this answer
Follow
answered Dec 9, 2023 at 16:21
horasjeyhorasjey
1
Add a comment
|
|
I'm trying to load images from Amazon S3 through the Cloudflare CDN. When adding the src to an <img /> tag via JavaScript, the response header cf-cache-status returns MISS, but when I open the picture in a new tab it's a HIT, as it should be (served through the CDN cache).
The difference in request headers is
when getting the image via JavaScript:
accept:image/webp,image/*,*/*;q=0.8
referer:https://example.com/
when opening in a new tab:
accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
cache-control:max-age=0
if-modified-since:Sun, 15 Jan 2017 15:57:40 GMT
if-none-match:"62d2f51d08bd490d403206a37bec9c15"
upgrade-insecure-requests:1
What should I do to make it a HIT from my webpage?
|
Cloudflare CDN header cf-cache-status:MISS when loading image from page
|
0
First you'll want to get the HTML from your "internal Google Sheet scripts". The following code will get the HTML (in string format), given that you have a file in your script project called "template.html".
var template = HtmlService.createHtmlOutputFromFile('template').getContent();
I run a check to see if the data is already in the cache...
function getObjectsFromCache(key, objects, flush)
{
Logger.log("Running: getObjectsFromCache(" + key + ", objects)");
var cache = CacheService.getScriptCache();
var cached = cache.get(key);
flush = false; // 1st run 45.33, 2nd run 46.475
//flush = true;// 65.818 seconds
if (cached != null && flush != true)
{
Logger.log("\tEXISTING DATA -> ");
cached = cache.get(key);
}
else
{
Logger.log("\tNEW DATA -> ");
//Logger.log("\tJSON.stringify(objects): " + JSON.stringify(objects) + ", length: " + objects.length);
//If you're working with spreadsheet objects, or array data, you'll want to put the data in the cache as a string, and then reformat the data to it's original format, when it is returned.
//cache.put(key, JSON.stringify(objects), 1500); // cache for 25 minutes
//In your case, the HTML does not need to be stringified.
cache.put(key, objects, 1500); // cache for 25 minutes
cached = objects;
}
return cached;
}
I commented out the JSON.stringify(objects), because in my original code I use another function called formatCachedData(cache, key) to return the different types of data - a multi-dimensional array (spreadsheet data), or Google user data from AdminDirectory.Users.list({...}).
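Tying the two pieces together, usage would look something like this (the cache key is arbitrary):
var template = HtmlService.createHtmlOutputFromFile('template').getContent();
// The first call stores the HTML in the script cache; later calls within 25 minutes return the cached copy.
var html = getObjectsFromCache('CalForm', template, false);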
Share
Improve this answer
Follow
answered Jun 8, 2017 at 17:00
Nick LalandeNick Lalande
133 bronze badges
Add a comment
|
|
I'm kinda stuck with something here.
I have an HTML template that I'm loading, and within this page there's a lot of JavaScript going on.
I'm trying to speed up the operation by caching the template in the onOpen() of my Google Sheet. I can't figure out how to cache my HTML page CalForm.html (from my internal Google Sheet scripts).
Here's what I have for now:
Creating the cache
function CacheCreate() {
CacheService.getScriptCache().put('CalCache', 'CalForm');
Browser.msgBox("done");
}
Get the cache
var evalSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Evaluation');
var row = evalSheet.getActiveCell().getRow();
var CalCache2 = CacheService.getScriptCache().get('CalCache');
Browser.msgBox(CacheService.getScriptCache().get('CalCache'))
initialize(row);
//var cache = CacheService.getScriptCache();
//var cache2 = cache.get('rss-feed-contents');
//Browser.msgBox(cache.get('rss-feed-contents'));
var html = HtmlService
.createTemplateFromFile(CalCache2)
.evaluate()
.setWidth(1200)
.setHeight(560)
.setSandboxMode(HtmlService.SandboxMode.NATIVE);
SpreadsheetApp.getUi().showModalDialog(html, 'Calculatrice');
Thanks for your help!
|
GSCRIPT - How to cache HTML page content?
|
0
You can add a cache context to your render array. There is an IP Cache Context. The ID to use is just ip.
Try this:
<?php
namespace Drupal\tester\Plugin\Block;
use Drupal\Core\Block\BlockBase;
use Symfony\Component\HttpFoundation\Request;
class newid extends BlockBase {
/**
* {@inheritdoc}
*/
public function build() {
$request = \Drupal::request()->getClientIp();
return [
'#markup' => $request,
'#cache' => [
'contexts' => [
'ip',
],
],
];
}
}
Share
Improve this answer
Follow
answered Mar 18, 2021 at 2:44
sagannotcarlsagannotcarl
35422 silver badges1313 bronze badges
Add a comment
|
|
I've created a simple custom block module that displays the IP Address of the user visiting a website. When the site gets a visitor, that visitor sees their own IP Address displayed in the block. I don't need to store this information or use it for any other purpose, this is only for aesthetic purposes, thus only the block plugin is necessary. The module works fine and displays the IP, however, it's cached and the IP doesn't update unless I manually visit the performance page and clear the site cache. How do I override this so the visitor sees their current IP Address?
Here's my custom block module file:
<?php
namespace Drupal\tester\Plugin\Block;
use Drupal\Core\Block\BlockBase;
use Symfony\Component\HttpFoundation\Request;
class newid extends BlockBase {
/**
* {@inheritdoc}
*/
public function build() {
$request = \Drupal::request()->getClientIp();
return array('#markup' => $request);
}
}
|
Caching issue on Drupal 8 custom block module displaying IP Address
|
0
Your attempt to break the cache is short-circuited by the fact that you are getting the cached version of the content page with the old timestamp, meaning you never request the JavaScript page with the new timestamp until something else forces the content page to reload.
You can check this by using the same cache breaking scheme on the content page itself in addition to the JavaScript page.
This is not a recommended solution as you will bypass cache altogether for the content page.
Every answer I've seen for this problem has worked sometimes and other times not. If I get a solution, I'll post it.
Share
Improve this answer
Follow
answered Jul 9, 2021 at 18:14
John ScottJohn Scott
1
Add a comment
|
|
I have a weird problem: JavaScript files are cached in IIS.
What I have done till now:
Disabled caching on my website in IIS
Disabled cache in Chrome dev tools
Added a timestamp to the URL in the page to prevent any caching
Double-checked that the files on disk are updated, and they were
But my scripts and CSS are not updating from the latest version on disk until I do an IISReset.
|
How to prevent JavaScript files from being cached in IIS
|
0
There is a solution, but it's quite hacky.
The problem is that IE caches the favicons separately, and they will only be loaded if the entire page is. The pushState just checks the existing cache and will only show favicons of pages that have been loaded before. If pushState has not been used you can change the favicon with JavaScript; afterwards, you can't.
The favicon cache however is updated live, so you can do the following:
Set up a hidden iframe on your page.
Before calling pushState, load the page you need with an additional hash (e.g. #onlyFavicon) in the iframe (make sure to use replace, otherwise the iframe will interfere with your history).
Have your server - or a javaScript - check for your hash on pageload and serve an empty page with just the favicon.
Once the iframe has loaded, your favicon cache is updated with the new favicon (add an onload event listener to the iframe to know when). No need to wait beyond that; the favicon gets updated automatically once the iframe has loaded.
If you call pushState now, the favicon will be displayed correctly
I know it's hacky and late, but I hope it helps anyone who stumbles over this problem as well.
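Roughly, the iframe part could look like this (the URLs and hash are illustrative):
var frame = document.createElement('iframe');
frame.style.display = 'none';
document.body.appendChild(frame);
frame.onload = function () {
  // The favicon cache has been refreshed; it is now safe to change the state.
  history.pushState(null, '', '/next-page');
};
// Use replace() so the hidden load does not pollute the session history.
frame.contentWindow.location.replace('/next-page#onlyFavicon');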
Share
Improve this answer
Follow
edited Dec 9, 2017 at 13:20
answered Dec 8, 2017 at 23:30
Felix PriceFelix Price
1122 bronze badges
Add a comment
|
|
Here's the scenario:
The site has a favicon already cached (/favicon.ico file) and I want to change it. It can be easily achieved (by adding a querystring to favicon's path: favicon.ico?v2). But, any state change (by calling window.history.pushState() or window.history.replaceState()) changes the favicon to the old one.
It looks like IE11 is taking favicon.ico from BASE directory which is unfortunately cached (favicon.ico?v2 is treated as a different entity in IE11's cache).
It's worth mentioning that the problem is not occurring when manipulating location.hash.
So changing state from:
http://x.com
to:
http://x.com/#id=5
has no effect on the favicon. But changing state to:
http://x.com/?id=5
switches the favicon to the old one (if it's cached) or default from IE11.
As soon as I do a browser refresh the new favicon is shown.
This was already reported as a bug on IE11, but Microsoft decided to leave it. I found two related questions:
using history.pushState in Firefox make my favicon disappear &
history.pushState in Chrome make favicon request
but none provides a solution to my problem.
I've also tried to update the favicon's reference after changing the state (by updating the href attribute of the favicon link element) - it helped on Edge but not on IE11: the browser actually sends a new favicon request, but it still shows the old one.
I'm looking for a way of keeping the favicon after changing the state.
The alternative solution could be to programmatically force IE11 to reload the favicon.ico file (let's assume that no caching headers were set).
|
Manipulating window.history affects favicon on IE11
|
0
You best bet is to try using App Cache, which is now supported in WKWebView in iOS 10: https://stackoverflow.com/a/44333359/233602
Share
Improve this answer
Follow
answered Jun 2, 2017 at 16:22
Andrew EblingAndrew Ebling
10.2k1111 gold badges5959 silver badges7575 bronze badges
Add a comment
|
|
I am trying to cache a web page URL beforehand (when the application starts, before the WKWebView loads), in order to load the already-cached response data into the WKWebView and to have cached data to show users even when the application starts in offline mode (no internet connection).
I tried NSURLSession in order to get the URL response and data from an NSURLRequest, save this data as an HTML string in memory, and then load the HTML string into the WKWebView. But the problem is that the web view shows only text (not images etc.). There are a bunch of images which need to load into the web view as well. Can anyone help me solve this kind of caching? Thanks in advance!
|
Cache URL response for offline viewing with WKWebView
|