Response | Instruction | Prompt |
---|---|---|
I think they record all page views, which increment the counter, but the output (the HTML you receive) is cached - or at least that portion of it is.
It makes sense - YouTube is a very popular website serving many concurrent users, and performance is very important.
|
If you notice, the number of views of a YouTube video doesn't change if you refresh the video page multiple times.
Also, if you open the same URL in a different browser on the same computer, it still shows the old count.
Any guess what their logic for maintaining this view count might be?
Do they have 2 count fields that sync up nightly, with the page always showing the synced value rather than the count that gets updated on page refresh?
Thanks
|
Youtube Video View Count
|
You can and should query this using the OpenCL clGetDeviceInfo API with the parameter CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE. The OpenCL 1.1 spec says that a conforming implementation has to provide at least 64 KB, which is probably what your device is implementing.
If you exceed this limit, then OpenCL should either give you an error or transparently move your constant array into a global memory array for you.
If it's not returning an error but giving you bad results, that's a bug in your OpenCL implementation. That's not too surprising; none of them are very mature yet. You should definitely report the bug to the vendor (which I assume is NVIDIA because of your references to CUDA), after making sure you've got the latest version installed, of course.
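For reference, a minimal sketch of the query described above, assuming you already have a cl_device_id in hand (error handling kept to the return code only):

#include <stdio.h>
#include <CL/cl.h>

/* Print how much constant buffer space the device guarantees. */
static void print_constant_buffer_size(cl_device_id device)
{
    cl_ulong max_const_bytes = 0;
    cl_int err = clGetDeviceInfo(device,
                                 CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE,
                                 sizeof(max_const_bytes),
                                 &max_const_bytes,
                                 NULL);
    if (err == CL_SUCCESS)
        printf("Max constant buffer size: %llu bytes\n",
               (unsigned long long)max_const_bytes);
    else
        printf("clGetDeviceInfo failed: %d\n", err);
}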
|
I ran some tests on my kernel, which uses constant cache. If I use 16,000 floats (16,000 * 4 bytes = 64 KB) then everything runs smoothly. If I use 16,200 it still runs smoothly. I get errors in my results (not from OpenCL) if I use 16,400 floats. Could it just be that technically there is 64.x KB of constant cache available? Should I even trust my code if I am using exactly 16,000 floats? Usually I expect code to break when you use stuff right up to the stated limit.
|
Why doesn't my kernel fail when I use a little more than 64kb of constant cache? (OpenCL/CUDA)
|
How to set L1 cache size-limitation
You can't. The only option is to clear the persistence context manually at regular intervals if you want to "control" its size (and clear is very aggressive: it removes all entities).
How to set L2 cache size-limitation
This depends on the underlying cache provider. In other words, this is done by configuring the L2 cache implementation. For example, EHCache has a maxElementsInMemory parameter.
what happens in L1? How many entities will be in memory as time passes, without any constraints?
As much as you put in it, until an eventual OutOfMemoryError, hence the need to clear explicitly:
on large batch jobs (even if they occur in a single transaction)
if a long-lived EntityManager is used
But the usual pattern is to use a short-lived EntityManager and most use cases are not batch jobs so this is not a concern.
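For the batch-job case, a minimal sketch of the usual flush-and-clear pattern (the batch size of 50, the entityManagerFactory and the items collection are illustrative assumptions):

// Periodically flush and clear the persistence context so the L1 cache
// (the persistence context itself) does not grow without bound during a batch.
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
for (int i = 0; i < items.size(); i++) {
    em.persist(items.get(i));
    if (i % 50 == 0) {
        em.flush();  // push pending changes to the database
        em.clear();  // detach all managed entities, freeing the L1 cache
    }
}
em.getTransaction().commit();
em.close();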
See also
JPA wiki book
2.5 Clear
|
How do I set an L1 or L2 cache size limitation? I am concerned about the cache size growing.
One way is defining a timeout for the cache, but I want to know whether it is possible to put a constraint on the cache size or not.
RGDS
Navid
|
how to set limitation in EntityManager (JPA) L1 or L2 cache size
|
This is called donut caching.
The ASP.NET MVC framework doesn't currently support it, but it's planned for version 3.
|
I have a heavy page that is cached. This is okay for anonymous users. They all see the same page.
The problem is for logged in users. They should have minor parts of the page re-rendered on every request (like personal notes on content in the page, etc.)
But still, all the rest of the page should be cached (it does tons of SQL and calculations when rendered).
As a workaround I put placeholders in the page templates (like #var1#, #var2#, ...).
Then I have a controller method that renders the view into a string, where I string.Replace #var1# and the others with real values.
Any cleaner way to do such kind of partial "non-caching"?
|
ASP.NET MVC: cache with non-cachable portions
|
I have found it:
I can access the MyDataSource.Select() method directly and I will get a list of my objects.
protected void MyGrid_RowCommand(object sender, GridViewCommandEventArgs e)
{
List<student> lst =(List<student>)MyDataSource.Select();
}
|
I have a grid view that is bound to an object data source that selects data of type student:
public class student
{
int id;
string name;
int age;
public List<student> GetAllStudents()
{
// Here I'm retrieving a list of student from Database
}
}
In the UI Control ascx
<asp:GridView ID="MyGrid" runat="server"
DataSourceID="MyDataSource"
OnRowCommand="MyGrid_RowCommand">
</asp:GridView>
<asp:ObjectDataSource ID="MyDataSource" runat="server"
TypeName="student"
SelectMethod="GetAllStudents">
In the UI Control code behind
protected void MyGrid_RowCommand(object sender, GridViewCommandEventArgs e)
{
// Here I want to get the list of students from my gridview
}
I want to retrieve the list of data that is shown in the grid, to be able to check the age value of the last student in the list.
Please help me as soon as you can.
Thanks in Advance
|
GridView and objectDataSource
|
Usually when they talk about ETags varying across servers it's in relation to static content served up by Apache. By default Apache includes the file's inode in the ETag. If the files are not on a shared resource (like an NFS-exported NAS), then the file's inode would be different on each server. Typically, the recommendation is to configure Apache like:
FileETag MTime Size
but even that has the possibility of differences if the modification time varies across the servers.
However, for non-static content, you are generating the Etag in your code, so it would be the same across multiple servers.
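So, for example, a conditional-GET sketch in the controller (fresh_when is the Rails 2.2+ helper; the ETag string mirrors the one in the question) produces the same validator on every server in the farm:

def index
  @posts = Post.all
  # Derived purely from application data, so every app server computes
  # the same ETag for the same state of the posts table.
  fresh_when :etag => "all_posts_#{Post.count}", :public => true
end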
|
I gathered from the much-famed scaling Rails screencasts that at some point, when your site gets bigger and bigger, proxy caching is the way to go. Proxy caching uses ETags, among other things, and since ETags can be more specific and are a strong validator, they are perhaps the way to go. However, I also hear that in server farm scenarios the ETag is not the right solution because it can vary across servers (how?).
This seems contradictory, i.e. most likely one is implementing ETag-based proxy caching if one is running a large load-balanced server farm. So if the ETag fails in this situation, how do they do it? :last_modified isn't really a great option.
In a rails app let's say if my etags in a post index action is
:etag => "all_posts_#{Post.count}".
will this vary from server to server if it's a load balanced server farm?
|
etags and server farm
|
MSDN's page on the as keyword states that:
The as operator is like a cast
except that it yields null on
conversion failure instead of raising
an exception.
Looks like this is what's happening here -- the cast to type List<Sale> is failing, and returning null. Are you sure this is the type of the object in your cache?
EDIT:
In response to your edit, it seems like some sort of assembly-related serialization/deserialization issue, possibly related to binding contexts, that honestly is a little over my head. I checked around and found the following two questions here on SO that may be able to point you in the right direction:
Question 1
Question 2
Hopefully those links prove helpful.
|
Here's my bit of code:
List<Sale> sales = new List<Sale>();
if (Cache["Sales"] != null)
{
sales = (List<Sale>)Cache["Sales"];
}
else
{
...
Cache.Add("Sales", sales, null, DateTime.Now.AddMinutes(20),
Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
}
When I try to pull the data from the cache, my "sales" object is null. Wondering why that code is hit at all, I ran the debugger in VS to see what was in the Cache object.
The Cache contains the data I need, but when it gets the data from cache, "sales" still comes out as null.
Is there something I'm doing wrong here?
EDIT:
I'm getting this error on casting:
[A]System.Collections.Generic.List`1[controls_mySales+Sale] cannot be cast to [B]System.Collections.Generic.List`1[controls_mySales+Sale]. Type A originates from 'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' in the context 'LoadNeither' at location 'C:\WINDOWS\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll'. Type B originates from 'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' in the context 'LoadNeither' at location 'C:\WINDOWS\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll'
|
Cache contains data, but can't retrieve data
|
In short - no. That type of caching is very application-specific, so it's not built into the protocol for you. I would say that the solution you sketched out yourself is a good way to go. A side effect of such a queue is that you get a level of decoupling between your main application and the external service. This can be useful for a lot of things once you get past the initial development phase (debugging, service windows, logging etc.).
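If you do end up rolling it yourself on the client side, a rough sketch could look like the class below (file-based, keyed on the serialized call; the one-hour TTL and the cache directory are arbitrary assumptions, and the directory must already exist):

class CachingSoapClient extends SoapClient
{
    private $ttl = 3600;               // assumed TTL in seconds
    private $dir = '/tmp/soap-cache';  // assumed, pre-created cache directory

    public function __call($method, $args)
    {
        $key = $this->dir . '/' . md5($method . serialize($args));
        if (file_exists($key) && (time() - filemtime($key)) < $this->ttl) {
            // Fresh cached response: skip the remote call entirely.
            return unserialize(file_get_contents($key));
        }
        $result = parent::__soapCall($method, $args);
        file_put_contents($key, serialize($result), LOCK_EX);
        return $result;
    }
}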
|
I know that you can cache the WSDL, but is there a way to cache the SOAP responses through configuration of the PHP SoapClient?
Obviously, we could "cache" ourselves by constructing some tables in a database and running a cron. That would take much more effort, and I am wondering if there is a way to specify caching of the explicit SOAP data being returned from the SOAP server to the client.
Similar to how a browser can cache various data based on headers?
Do I need to have the SOAP server configured properly, or is this something I can do strictly on the SoapClient side?
Our SOAP server is a 3rd party vendor's which we have little control over, so I am hoping to keep the solution on the SoapClient side if possible.
Open to all suggestions/alternatives (aside from the one I mentioned) if this does not exist.
|
How to cache php soapclient responses?
|
How about the approach shown below, which will retrieve the garden instance once when the Category class is loaded:
class Category < SystemTable
GARDEN = self.find_by_name('garden')
end
Now whenever you need the garden category you can use Category::GARDEN.
|
Say I have a system table 'categories' with 2 fixed records. The user will not be allowed to delete these 2 but may wish to add their own to extend the list. I need to be able to pull out the 'garden' category at certain times, e.g. when creating a garden project.
A class attribute reader that returns the garden instance would do the job, but I would like to know how this can be improved with caching.
I believe memoization would only work per process, which is almost pointless here. I would like it to be set once (perhaps the first time it's accessed or on app start-up) and just remain in cache for future use.
Example setup:
class Project < ActiveRecord::Base
belongs_to :category
end
class Category < SystemTable
cattr_reader :garden
def self.garden
@@garden ||= self.find_by_name('garden')
end
end
|
Rails: How can I cache system table data
|
Use the EGOCache API.
Just import the EGOCache.h file in your class:
- (void)setObject:(id<NSCoding>)anObject forKey:(NSString*)key;
- (void)setObject:(id)anObject forKey:(NSString*)key withTimeoutInterval:(NSTimeInterval)timeoutInterval;
- (id)objectForKey:(NSString*)key;
Use the first method to store your data under a key.
If you want to specify the cache time, use the second method.
Edit
EGOCache will save your data permanently (to disk). See here for usage of NSCache, which incorporates various auto-removal policies.
|
I loaded XML data (including images, text, ...) from a server and display that data on the iPhone screen.
How can I cache the data so that the screen can be re-loaded from the cache the next time I visit it? It will be faster (no need to re-load the XML data again).
Thank you.
|
How to cache data in ios
|
If you're sure that your data has a long time-to-live, you can certainly cache data by saving it temporarily to a text file.
if (!file_exists($cachefile)) {
    // Save to cache
    $query = mysql_query('SELECT * FROM ...');
    while ($row = mysql_fetch_array($query)) {
        $result[] = $row;
    }
    file_put_contents($cachefile, serialize($result), LOCK_EX);
} else {
    // Retrieve from cache
    $result = unserialize(file_get_contents($cachefile));
}

foreach ($result as $row) {
    echo $row['name'];
}
Although using APC, MemCache, or XCache would be a better alternative if you consider performance.
|
Is there a way to cache results of a mysql query manually to a txt file?
Ex:
$a=1;
$b=9;
$c=0;
$cache_filename = 'cached_results/'.md5("$a,$b,$c").'.txt';
if(!file_exists($cache_filename)){
$result = mysql_query("SELECT * FROM abc,def WHERE a=$a AND b=$b AND c=$c");
while($row = mysql_fetch_array($result)){
echo $row['name'];
}
// Write results on $row to the txt file for re-use
}else{
// Load results just like $row = mysql_fetch_array($result); from the txt file
}
The original query contains more WHEREs and joins that uses multiple tables.
So, is this possible? If so, please explain.
Thank you,
pnm123
|
Cache results of a mysql query manually to a txt file
|
No, you don't.
But since you guys helped point me in the right direction, it will help with performance.
So I will definitely run some instances of memcached and investigate concurrency control further.
Thanks.
|
I am developing an AMF Flash gateway on a FlourineFx application for deployment on Windows Azure, and I want to use Azure SQL.
I use NHibernate 2.1 + NHibernate.Linq 1.0 + FluentNHibernate 1.1
There will be two or more instances of this FlourineFx gateway and only 1 database.
I am planning on implementing memcached as 2nd level cache later (as Windows Azure WorkerRole), but is it necessary?
(I don't mind performance, but I do mind consistency)
|
Do I need external 2nd level cache for multiple NHibernate instances in Windows Azure?
|
I finally found the documentation : http://confluence.atlassian.com/display/CONFDEV/Confluence+Caching+Architecture
It is pretty simple to get an instance of a Cache Manager from Confluence, and use it to create your own cache.
|
I am developing a plugin for Confluence (version 2.10). My plugin has some expensive process that would benefit a lot from caching. I have implemented a proof of concept with a simple HashMap as a cache. Now I need to put a real cache in place.
I'd like to integrate my caches with the standard Confluence caches, so they could be managed and monitored with the "Cache Statistics" page in the admin.
I tried looking through the Confluence documentation, but could not find any information on using Confluence caches. Are they just not exposed to plugins?
|
Using standard Confluence caches when developing a plugin
|
I am not aware of any processor that does the optimization you describe - eliminating writes to clean cache lines that would not change the value - but it's a good question, a good idea, great minds think alike and all that.
I wrote a great big reply, and then I remembered: this is called "Silent Stores" in the literature. See "Silent Stores for Free", K. Lepak and M. Lipasti, UWisc, MICRO-33, 2000.
Anyway, in my reply I described some of the implementation issues.
By the way, topics like this are often discussed in the USEnet newsgroup comp.arch.
I also write about them on my wiki, http://comp-arch.net
Hey, here's a fun aside: most modern processors have writeback caches, but a few have writethrough caches, possibly in combination with writeback. E.g. the AMD Bulldozer L1 cache is WT, as are several IBM z-series, and many GPUs. Most such WT caches are accompanied by write combining or write coalescing. Such a write coalescing cache or buffer naturally eliminates "silent stores" - they get done to the L1$, but not written through to the L2$.
– Krazy Glew
May 8, 2012 at 4:17
Skylake-client and Ice Lake had silent-store optimization for write-back of all-zero lines to L3, but it seems it was disabled by a microcode update in 2021, perhaps because of paranoia over timing attacks or something. We can't have nice things. What specifically marks an x86 cache line as dirty - any write, or is an explicit change required? / travisdowns.github.io/blog/2020/05/13/intel-zero-opt.html / travisdowns.github.io/blog/2020/05/18/icelake-zero-opt.html
– Peter Cordes
Feb 12, 2023 at 20:53
|
Edit - I guess the question I asked was too long so I'm making it very specific.
Question: If a memory location is in the L1 cache and not marked dirty. Suppose it has a value X. What happens if you try to write X to the same location? Is there any CPU that would see that such a write is redundant and skip it?
For example is there an optimization which compares the two values and discards a redundant write back to the main memory? Specifically how do mainstream processors handle this? What about when the value is a special value like 0? If there's no such optimization even for a special value like 0, is there a reason?
Motivation: We have a buffer that can easily fit in the cache. Multiple threads could potentially use it by recycling amongst themselves. Each use involves writing to n locations (not necessarily contiguous) in the buffer. Recycling simply implies setting all values to 0. Each time we recycle, size-n locations are already 0. To me it seems (intuitively) that avoiding so many redundant write backs would make the recycling process faster and hence the question.
Doing this in code wouldn't make sense, since branch instruction itself might cause an unnecessary cache miss (if (buf[i]) {...} )
|
cache behaviour on redundant writes
|
What about using a hybrid approach? You could store it into the SQL database in addition to putting it in your cache. Then when they hit a different instance it is put into the cache with the time that was stored in the database. Minimizes round trips to the database, and makes the information available to all of the servers.
|
I'm currently developing a WCF REST Web Service that will be running on Microsoft Azure. To limit the number of requests per IP address to prevent abuse, I currently store the IP and timeout using the ASP.NET Cache.
This method works great but since muliple VM instances with Azure don't share a single cache, the requests could be split between different VMs and be cleared if a VM is reset.
I don't think this is a major problem but since I already store user info in a SQL Azure database and authenticate users using the WCF service, would I be better off using the database instead of the ASP.NET cache?
Any advice would be really helpful.
|
WCF Service: ASP.NET Cache or SQL
|
Well, give SQL Server enough memory and you'll likely find it's caching stuff for you anyway. Other than that, a basic caching idea will work for you - create a caching entity for your table (or business object, preferably) and simply use something like a dictionary to provide key-value associations.
Then all you need to do is work in some cache invalidation or lifespan and you're sorted. The caching layer usually hovers around the business layer, as the business logic can decide if what's in memory is suitable for you, or stale.
Don't re-invent anything; there are lots of caching solutions around that provide caching infrastructure: ASP.NET Cache, memcached, AppFabric...
Caching is a little gem when it comes to improving performance, because all it consumes is memory - which keeps getting cheaper. However, like anything performance-related, don't assume you need it until you need it - i.e., database accesses are slow, the network is slow, you have millions of users accessing the same data, etc.
Profile your code first!
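Purely as an illustration of the dictionary idea above (not tied to any particular framework), a primary-key-keyed cache with a lifespan might be sketched like this; the one-minute TTL and the loader delegate are assumptions:

public class SuperAwesomeDataCache
{
    private readonly Dictionary<int, Tuple<SuperAwesomeData, DateTime>> _items =
        new Dictionary<int, Tuple<SuperAwesomeData, DateTime>>();
    private readonly TimeSpan _lifespan = TimeSpan.FromMinutes(1); // assumed
    private readonly object _sync = new object();

    public SuperAwesomeData Get(int id, Func<int, SuperAwesomeData> loadFromDb)
    {
        lock (_sync)
        {
            Tuple<SuperAwesomeData, DateTime> entry;
            if (_items.TryGetValue(id, out entry) &&
                DateTime.UtcNow - entry.Item2 < _lifespan)
            {
                return entry.Item1; // still fresh, no database hit
            }

            var loaded = loadFromDb(id); // the business layer decides how to load
            _items[id] = Tuple.Create(loaded, DateTime.UtcNow);
            return loaded;
        }
    }

    public void Invalidate(int id)
    {
        lock (_sync) { _items.Remove(id); }
    }
}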
|
Consider the following data model:
Suppose I have a table called "SuperAwesomeData" where each record maps to an instance of an object called "SuperAwesomeData" which is retrieved by using the primary key for table "SuperAwesomeData". My question is what caching strategy would best work for managing individual records? I need to still be able to request the "SuperAwesomeData" record via it's primary key.
|
Caching approaches for singular database records
|
<add name="defaultCache" duration="900" varyByParam="*" varyByCustom="membership" location="Any"/>
Global.asax:
public override string GetVaryByCustomString(HttpContext context, string custom)
{
if (custom == "membership")
{
string membership = "";//Get membership.
return membership;
}
return string.Empty;
}
|
We have a bunch of pages that get really high traffic, and as such we have the following in web.config:
<caching>
<outputCacheSettings>
<outputCacheProfiles>
<add name="defaultCache" duration="900" varyByParam="*" location="Any"/>
</outputCacheProfiles>
</outputCacheSettings>
</caching>
and the following attribute on the necessary controller methods:
[OutputCache(CacheProfile = "defaultCache")]
This has served us well because there is no intersection between cached pages served to normal users and those in an admin role. However now we have implemented a CMS where the interface to the CMS is rendered into most pages if a user is logged in under an admin role. However, we have found that the current caching strategy is not working for us now as admin content is getting cached and served to normal users.
So, is there a way to cache by role? Is this even possible where the url of a page remains the same but the content changes according to the logged in role? Would it be better to alter the URL by adding something like ?admin=true to all relevant pages such that the varyByParam="*" attribute on our cache profile can do its job?
Thanks.
|
Is it possible to cache by membership role in asp.net mvc?
|
I'm not 100% sure for NHibernate, but the Hibernate 2nd level cache does NOT offer write-behind caching; Hibernate just writes directly to the database. I think the same applies to NHibernate. In other words, what you'd like to do is IMO not possible, at least not without modifying NHibernate to write to the 2nd level cache and a persistent async database queue. But that would be a really non-trivial change and is not going to happen short term.
The NHibernate 2nd level cache behaves the same way as the Hibernate one.
– Sean Carpenter
Jun 26, 2010 at 23:16
|
How can I read/write to the cache for a period of time, i.e. 10 seconds, and then commit the changes to the database?
|
help with second level cache using NHibernate and memcached
|
Use article_posts_url(:article_id => post.article).
The resource_name_url helpers generate a URL with the host set.
|
I want to expire a cached action and wondered how to generate the correct reference.
#controller
caches_action :index, :layout => false
#generates this fragment which works fine
views/0.0.0.0:3000/article/someid/posts
#sweeper
...
expire_action article_posts_path(:article_id => post.article)
# results in this
Expired fragment: views//en/article/someid/posts (0.0ms)
So this is almost ok, except the host is missing. What do I do that supplies this to the expire_action method?
Thanks in advance.
|
rails - caches_action expire_action
|
You could use filemtime() to re-make weather.txt when it is more than 60 minutes old, and otherwise send the existing file.
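A rough sketch of that idea, building on the snippet in the question (the URL and the path to weather.txt are the question's placeholders):

$cachefile = 'weather.txt';
$maxage    = 60 * 60; // 60 minutes

if (!file_exists($cachefile) || (time() - filemtime($cachefile)) > $maxage) {
    // Stale or missing: fetch and decode once, then store the rendered HTML.
    $file = file_get_contents('http://sample.com/weather');
    $out  = json_decode($file);
    $html = '<p>' . htmlspecialchars($out->mainText) . '</p>';
    file_put_contents($cachefile, $html, LOCK_EX);
} else {
    // Fresh enough: serve the cached copy, no remote request made.
    $html = file_get_contents($cachefile);
}

echo $html;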
|
How do I do this on a PHP webpage?
I want to get and decode a JSON string and display the results as HTML on my page; however, I don't want it hotlinking back to the source.
I'd like to write the decoded string to a text file, say weather.txt, on the server, keep the HTML formatting, and do it so that the page won't fetch the JSON feed until 60 minutes have passed since the last time it was fetched, regardless of how many times the page is opened during that 60-minute period; in the meantime the weather.txt content is shown.
All I can come up with is a simple script that hotlinks; everything else I have tried has simply failed.
$file = file_get_contents('http://sample.com/weather');
$out = (json_decode($file));
echo $out->mainText;
Will appreciate any help with this.
|
Decoding and caching json every 60 minutes
|
The issue is you are reading through the PersistenceContext/EM which maintains an Object Transactional view of the data and will never update unless refreshed.
Add the query refresh property "eclipselink.refresh" to the find call (JPA 2.0) or simply call em.refresh after the initial find.
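A sketch of both options against the code in the question (the named-query name is taken from the question; the hint name is the one mentioned above):

// Option 1: ask EclipseLink to refresh from the database for this query.
MyLocation one = (MyLocation) em.createNamedQuery("MyLocation.findMyLoc")
                                .setHint("eclipselink.refresh", "true")
                                .getResultList().get(0);

// Option 2: refresh the managed instance after the initial find/query.
em.refresh(one);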
|
I am using eclipselink JPA with a database which is also being updated externally to my application. For that reason there are tables I want to query every few seconds. I can't get this to work even when I try to disable the cache and query cache. For example:
EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("default");
EntityManager em = entityManagerFactory.createEntityManager();
MyLocation one = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
Thread.sleep(10000);
MyLocation two = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
System.out.println(one.getCapacity() + " - " + two.getCapacity());
Even though the capacity changes while my application is sleeping the println always prints the same value for one and two.
I have added the following to the persistence.xml
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.query-results-cache" value="false"/>
I must be missing something but am running out of ideas.
James
|
Disable eclipselink caching and query caching - not working?
|
I know you didn't want to hear about memcached, but it is one of the best solutions for what you're trying to do. Depending on your site usage, there can be massive improvements in performance. By simply switching from my database session handler to memcached's session handler, I was able to cut the load in half and cut request serving times by over 30%.
Realistically, memcached is a simple solution. It's already integrated with PHP (if you have the extension loaded), and it requires virtually no configuration (I simply had to add memcached as a service on my Linux box, which is done in one or two shell commands).
I would suggest storing session data (and anything else that lends itself to caching) in memcache. For dynamic pages (such as the Stack Overflow homepage), I would recommend caching output for a couple of seconds to prevent flooding.
|
I'm running a php/mysql-driven website with a lot of visits and I'm considering the possibility of caching result-sets in shared memory in order to reduce database load.
However, right now MySQL's query cache is enabled and it seems to be doing a pretty good job since if I disable query caching, the use of CPU jumps to 100% immediately.
Given that situation, I don't know if caching result-sets (or even the generated HTML code) locally in shared memory with PHP will result in any noticeable performance improvement.
Does anyone out there have any experience on this matter?
PS: Please avoid suggesting heavy-artillery solutions like memcached. Right now I'm looking for simple solutions that don't require too much time to implement, deploy and maintain.
Edit:
I see my comment about memcached deviated answers from the actual point, which is whether caching DB queries in the application layer would result in a noticeable performance impact, considering that the results of those queries are already being cached at the DB level.
|
MySQL query cache vs caching result-sets in the application layer
|
Seems like a very reasonable approach. Personally I'd go with SQL CE for storage, make sure you index the column holding the datetime of the record, then use TableDirect on the index for getting and inserting data so it's blazing fast. Since your data is already chronological there's no need to get any slow SQL query processor involved, just seek to the date (or the end) and roll forward with a SqlCeResultSet. You'll end up being speed limited only by I/O. I profiled doing really, really similar stuff on a project and found TableDirect with SQLCE was just as fast as a flat binary file.
|
Greetings,
I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks.
What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached.
Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching.
Thanks for any feedback!
|
Approach for caching data from data logger
|
The Remove method should work fine for this.
Are you sure that you're calling Remove correctly and that there isn't some other code re-inserting the item into the cache?
Cache.Remove Method
How to: Delete Items from the Cache in ASP.NET
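A quick sanity check you can run (same key, data and expiration as in the question) to confirm that Remove takes effect immediately even with an absolute expiration set:

HttpRuntime.Cache.Insert("Members", AllMembersList, null,
    DateTime.Now.AddHours(1), TimeSpan.Zero);

HttpRuntime.Cache.Remove("Members");

// Should print "True": the entry is gone right away, not after the hour.
Response.Write(HttpRuntime.Cache["Members"] == null);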
|
I've been pulling my hair out over this for the past few hours.
I have a cache object:
HttpRuntime.Cache.Insert("Members", AllMembersList, null, DateTime.Now.AddHours(1), TimeSpan.Zero);
When I try to clear the cache object:
HttpRuntime.Cache.Remove("Members");
its value doesn't change until the hour is up or I reset the server.
My question: for a cache object that is set with an absolute expiration, can I manually clear it, or will it exist for the full hour?
What I would like is for this object to last for an hour but, depending on program execution, to be able to clear it so it will contain fresh data.
Any help is most appreciated.
truegilly
|
absolute expiration cache object - can it be manually removed?
|
Try adding
<meta http-equiv="cache-control" content="no-cache"/>
<meta http-equiv="pragma" content="no-cache"/>
to the header.html file
|
I have an HTML page which contains a nested frameset (don't ask why, I'm only servicing the app ;) ). What bothers me is why FF caches the Header.htm file constantly. I had to clear the cache to force the browser to download it again; Ctrl+F5 didn't help.
<frameset rows="68,*" border="0" frameborder="no" framespacing="0">
<frame name="header" src="/Header.htm" scrolling="no" noresize>
<frame name="footer" src="/Login.aspx?w=<% =company %>&loc=<% =ccdom %>">
</frameset>
Any ideas what is responsible for that behaviour? On the other hand, IE downloads the file without a hassle.
Thanks, Pawel
|
Frameset frame cached by Firefox
|
You can use the HttpRuntime to access the Cache.
|
I want to avoid an item being removed from the cache when it expires, in SOME OCCASIONS.
If I understand how CacheItemUpdateCallback works, I need to assign the new object to expensiveObject - in my case, the old one.
But I can't access the item with HttpContext.Current.Cache[key].
My question: how do I access the old item? Or, in other words, how do I prevent this item being removed in the first place?
|
How can I get the item being removed when CacheItemUpdateCallback is called
|
I don't have a systematic answer for you, but in my experience, the file cache is the fastest. I should clarify that I haven't done any serious performance tests, but in all the time I've used Smarty, I have found the file cache to work best.
One thing that definitely improves performance is to disable checking whether the template files have changed. This avoids having to stat the .tpl files.
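For example (Smarty 2-style properties; adjust the names if your Smarty version differs, and the lifetime value is just an assumption):

$smarty = new Smarty();
$smarty->caching = true;           // enable the (file-based) output cache
$smarty->cache_lifetime = 3600;    // assumed TTL in seconds
$smarty->compile_check = false;    // skip stat()ing the .tpl files on every request
$smarty->display('index.tpl');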
|
Does anyone know if there is an overview of the performance of different cache handlers for smarty?
I compared smarty file cache with a memcache handler, but it seemed memcache has a negative impact on performance.
I figured there would be a faster way to cache than through the filesystem... am I wrong?
|
THE FASTEST Smarty Cache Handler
|
The recommended approach is to stub HttpContextBase. Its documentation states
When you perform unit testing, you
typically use a derived class to
implement members with customized
behavior that fulfills the scenario
you are testing.
This is mostly covered for TypeMock here.
var httpContext = MockRepository.GenerateStub<HttpContextBase>();
httpContext.Stub(x=>x.Cache).Return(yourFakeCacheHere);
var controllerContext = new ControllerContext(httpContext, ....);
var controller = new HomeController();
controller.ControllerContext = controllerContext;
|
I’m relatively new to testing and MVC and came across a sticking point today. I’m attempting to test an action method that has a dependency on HttpContext.Current.Cache and wanted to know the best practice for achieving the “low coupling” to allow for easy testing. Here's what I've got so far...
public class CacheHandler : ICacheHandler
{
public IList<Section3ListItem> StateList
{
get { return (List<Section3ListItem>)HttpContext.Current.Cache["StateList"]; }
set { HttpContext.Current.Cache["StateList"] = value; }
}
...
I then access it like such... I'm using Castle for my IoC.
public class ProfileController : ControllerBase
{
private readonly ISection3Repository _repository;
private readonly ICacheHandler _cache;
public ProfileController(ISection3Repository repository, ICacheHandler cacheHandler)
{
_repository = repository;
_cache = cacheHandler;
}
[UserIdFilter]
public ActionResult PersonalInfo(Guid userId)
{
if (_cache.StateList == null)
_cache.StateList = _repository.GetLookupValues((int)ELookupKey.States).ToList();
...
Then in my unit tests I am able to mock up ICacheHandler.
Would this be considered a 'best practice' and does anyone have any suggestions for other approaches?
|
Unit testing an MVC action method with a Cache dependency?
|
The OutputCache and the Page.Cache are in no way related. The OutputCache caches the html that the page generates and returns that to the browser without running your code again (for 10 seconds as by your current configuration). The Page.Cache provides a mechanism for storing application wide data. Once something is added to that cache it will be there until the next time you restart your website (unless explicitly removed).
|
In my application I am trying to implement output caching, but it is not working correctly - it gets the data from the cache every time. This is my code:
<%@ OutputCache VaryByParam ="none"
Location="Client" Duration="10" %>.
Code:
protected void btn_Click(object sender, EventArgs e)
{
DataView dtv;
dtv = (DataView)Cache["mycache"];
if(dtv ==null )
{
string sqry="select * from scrap";
da=new SqlDataAdapter (sqry,con);
ds=new DataSet();
da.Fill (ds);
dtv=new DataView (ds.Tables[0]);
Cache["mycache"]=dtv ;
Response.Write ("<script> alert ('from code')</script>");
}
else
{
Response.Write ("<script> alert ('from cache')</script>");
}
    grd1.DataSource = dtv;
    grd1.DataBind();
}
|
cache in asp.net (output)?
|
I encountered a situation with Chrome where a page had no cache-control or expiration the first time Chrome loaded it (which was many months ago as I write this) and even though meta tags were later added to the page explicitly setting the expiration policy, Chrome never honored them.
What had to be done to force a reload was to supply a dummy query-string argument -- just once, merely to get the page to reload with its newly added meta tags-- and thenceforward Chrome began to honor the expiration settings in the meta tags.
BTW, I could not reproduce the problem with documents without meta tags that I created this week; the problem only affected documents that had been first loaded by Chrome many months ago. Perhaps the way Chrome caches documents lacking an expiration policy had changed in the interim, so that only documents loaded before the change were being cached in perpetuity.
P.S. I know that this question was asked months ago and the OP may have solved his problem by now -- but you never know when someone's going to come along a few months from now looking for help on the same problem. So the "dead question police" can stand down.
has got it with supply a query string (?) at the end of your css file (/styles/css?). That should do it every time. Rock on.
– hsatterwhite
May 10, 2010 at 11:57
|
The Chrome cache doesn't seem to update when people access my website.
Only when they clear it or press Ctrl+F5 can they see the new content.
I'm running it on a Wordpress CMS.
Does anyone have any idea why is this happening?
|
Chrome cache won't update in my site
|
Okay, I think I have a partial answer for you.
From here:
Output cache module populates the
IHttpCachePolicy intrinsic in
BeginRequest stage if a matching
profile is found. Other modules can
still change cache policy for the
current request which might change
user-mode or kernel mode caching
behavior. Output cache caches 200
responses to GET requests only. If
some module already flushed the
response by the time request reaches
UpdateRequestCache stage or if headers
are suppressed, response is not cached
in output cache module.
That article is IIS7 specific, so not sure how this translates across to other versions, but it's probably similar. UpdateRequestCache is one of the HttpApplication pipeline events, and it occurs after an IHttpHandler (e.g. your Page object) has finished handling the request.
So... it doesn't look good for performing a flush inside your Page.
|
I have some code that is used to replace certain page output with other text. The way I accomplish this is by setting the Response.Filter to a Stream, Flushing the Response, and then reading that Stream back into a string. From there I can manipulate the string and output the resulting code. You can see the basic code for this over at Render a view as a string .
However, I noticed that Page Caching no longer works after the first Response.Flush call.
I put together a simple ASP.NET WebApp as an example. I have a Default.aspx with an @OutputCache set for 30 seconds. All this does is output DateTime.Now.ToLongTimeString(). I override Render. If I do a Response.Flush (even after the base.Render) the page does not get cached. This is regardless of any programmatic cacheability that I set.
So it seems that Response.Flush completely undermines any page caching in use. Why is this?
extra credit: is there a way to accomplish what I want (render output to a string) that will not result in Page Cache getting bypassed?
ASPX Page:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="TestCacheVsFlush._Default" %>
<%@ OutputCache Duration="30" VaryByParam="none" %>
<%= DateTime.Now.ToLongTimeString() %>
Code-behind (Page is Cached):
protected override void Render(HtmlTextWriter writer)
{
base.Render(writer);
}
Code-behind (Page is not cached):
protected override void Render(HtmlTextWriter writer)
{
base.Render(writer);
Response.Flush();
}
Code-behind (Page still is not cached):
protected override void Render(HtmlTextWriter writer)
{
base.Render(writer);
Response.Cache.SetCacheability(HttpCacheability.Server);
Response.Cache.SetExpires(DateTime.Now.AddSeconds(30));
Response.Flush();
}
|
Response.Flush breaking Page Caching
|
In your application helper you could try:
def optional_cache(key, &block)
  # Render the content directly when caching is disabled for this session;
  # otherwise go through the normal fragment cache.
  if session[:disable_caching]
    yield
  else
    cache(key, &block)
  end
end
Then replace your calls to cache() with optional_cache().
|
I'm using fragment caching a lot and it is essential to me for good performance. However, due to the complexity of the caching I'm using, I need to offer my testers, a way to disable/enable caching as a session variable. (On a user basis only)
I was thinking about implementing a cache_disabled? method, and I now check its value everywhere I use cache. Now, I'm stuck with the following piece of caching, and I can't figure out how to nicely integrate this check:
<% cache(@cache_key_for_consultContent) do %>
<div id="consult">
<%= render :partial => 'LOTS_OF_CONTENT' %>
</div>
<% end %>
I need the content to be called when caching is disabled or content isn't cached yet.
thanks for your creativity! (Please keep it DRY)
|
Rails - Best way to implement Optionnal Fragment Caching for testing purposes
|
The UIImageView+Cached code found here implements a simple cache of UIImages combined with background loading in a thread.
I use this in my own projects. It works well. I recommend it. It should be easily adaptable to do what you want and a quick look says that pulling out the caching bits should be easy.
|
I have a lot of images in my app so I decided to do some loading in a background thread, and since UIKit isn't thread-safe, I filled arrays with CGImageRefs. However, they are not cached and I need to be able to access them fast so my question is:
How to cache CGImageRef, or cache the UIImage derived from it later on in the main thread?
|
Cocoa Touch - How to cache a CGImageRef or UIImage (not using imageNamed:)?
|
While serving pages over HTTP may be faster (though I doubt HTTPS is monstrously slow for small files), a good lot of browsers will complain if included resources such as images and JS are not on https:// URLs. This will give your customers annoying popup warnings.
There are high-performance servers for static file serving, but unless your SSL certificate works for multiple subdomains, there are a variety of complications. Putting the high-performance server in front of your dynamic content server and reverse proxying might be an option, if that server can do the SSL negotiation. For unix platforms, Nginx is pretty highly liked for its reverse proxying and static file serving. Proxy-cache setups like Squid may be an option too.
Serving static content on a cloud like amazon is an option, and some of the cloud providers let you use https as well, as long as you are fine with using a subdomain of their domain name (due to technical limitations in SSL)
|
Ours is an e-commerce site with lots of images and Flash (the same heavy Flash is rendered across all pages). All the static content is stored and served up from the web server (IHS, clustered - 2 nodes). We still notice that the image delivery is slow. Is this approach correct at all? What are the alternative ways of doing this, like maybe serving up images using a third-party vendor or implementing some kind of caching?
P.S. All our pages are https. Could this be a reason?
Edit 1: The images are served up from https too, so the alerts are a non-issue?
Edit 2: The loading is slower on IE and most of our users are on IE. I am not sure if browser-specific styling could be causing the slower IE loading? (We have some browser-specific styling for IE.)
|
What is the fastest way(/make loading faster) to serve images and other static content on a site?
|
You should have a look at Terracotta. They have technology that makes multiple JVMs (can be on different servers) appear unified. If you update an object on one JVM, Terracotta will update the instance transparently on all JVMs in the cluster in a safe way.
Let me ask you this. What would be the memory foot print? Would terracotta maintain multiple copies of the object in multiple JVM's? and update them synchronously? or Would it maintain one copy and give the illusion to the different processes that they have their own copy?
– Random Dude
Mar 16, 2010 at 19:59
@Random Dude: I don't know the answer to that for sure. My experience with Terracotta is limited to seeing a 2 hour presentation on it from one of the Terracotta guys at the San Diego Java User's Group. I suspect that Terracotta probably keeps the entire object graph in memory on each JVM in the cluster and proxies changes to the other JVMs. But again, I'm not 100% sure about that. Perhaps check in with the Terracotta people directly.
– Asaph
Mar 17, 2010 at 2:52
|
A couple of relational DB tables are managed by a single object cache that resides in a process. When the cache is committed, the tables are updated. The DB relational tables are updated by regular SQL queries and not by anything fancier like Hibernate.
Eventually, other processes got into the business of modifying this object without communicating with one another, i.e., each process would initialize this object (read from DB) and update it (commit to DB), and the other processes would not know about it, holding on to a stale cache.
I have to fix this workflow. I have thought of a couple of methods.
One is to make this object an MBean. So the object would reside in one process, and every process would eventually modify the object in that process through MBean method invocations.
However, this approach has a couple of problems.
1) Every object returned by this cache has to be an MBean, which could make the method invocations quite chatty.
2) Also there is a requirement that every process should see a consistent data model (cache) of the DB, and it should merge its contents to the DB if possible (like a transaction). If the DB was updated significantly by some other process, it is OK for the merge to fail.
What technologies in Java will help to solve this problem?
|
Update a single object across multiple process in java
|
According to Microsoft (http://support.microsoft.com/kb/234067), you need to set the Expires header for -1 for this to work properly in Internet Explorer.
From the page:
In many cases, Web servers have one or
more volatile pages on a server that
contain information, which is subject
to change immediately. These pages
should be so marked by the server with
a value of "-1" for the Expires
header. On future requests by the
user, Internet Explorer usually
contacts the Web server for updates to
that page via a conditional
If-Modified-Since request.
I think the point is that if IE has an expiration date, it sees no reason to ask you if the resource has been modified, since its cached copy should be "good enough".
The page does also say that IE supports Cache-control: no-cache, though it isn't recommended. So it sounds like it should work, but try the Expires thing anyway.
Also, other googling tells me that browsers are expected to send If-Modified-Since in general, so maybe that's why Firefox works. Try removing Cache-Control: no-cache to see if Firefox still behaves correctly.
|
I have a resource which is user generated and therefore changes at an unpredictable time (example, a user uploads a new version of a word document). I would like browsers to cache this resource and validate its cache with the server on each request (i.e. always send the If-Modified-Since header).
From testing, I've found that Firefox handles this appropriately when I use "Cache-Control: no-cache" in the response header. However, Internet Explorer 7 is not sending "If-Modified-Since" in its request header.
Does "Cache-Control: no-cache" achieve what I described at the beginning? If not, is there anything I can do differently to achieve what I've described across browsers?
Thanks.
|
HTTP Cache - check with the server, always sending If-Modified-Since
|
MySql caching basically just caches resultsets against SQL issued to the database: if the SQL statement/query is in the cache, then the resultset gets returned without any work being done by the database engine. There is thus a certain amount of overhead in maintaining accuracy (i.e. the DB must track changes and flush cache entries accordingly).
Compare this to other DBs such as Oracle, where the caching mechanism can take into account placeholders (bound variables) and omits a "hard" parse (i.e. checking if the SQL is valid etc.) if the SQL plan is found in the SQL common cache.
If you find yourself repeatedly submitting identical SQL to the database, then caching may make a substantial difference. If this is not case, you may even find that the additional overhead cancels out any benefit. But you won't know for sure until you have some metrics from your system (i.e. profiling your SQL, analysing the query logs etc.)
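If you want to confirm whether the query cache is on and how it is behaving before adding an application-layer cache, the server exposes this directly (MySQL 5.x; note the query cache was removed in MySQL 8.0):

-- Is the query cache enabled, and how big is it?
SHOW VARIABLES LIKE 'query_cache%';

-- Hit/miss statistics: comparing Qcache_hits with Com_select gives a rough hit rate.
SHOW STATUS LIKE 'Qcache%';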
|
I am currently working on a PHP/MySQL project with the AbleDating system. My customer is worried about server load, so he asked me to use "caching" as much as I could; he asked me to cache MySQL queries and some HTML zones.
Is it possible to cache only some HTML zones with PHP? If yes, how can I do this?
For the MySQL caching, is it just an option to check, or must I change something in the code?
Thanks!
|
Php and mysql caching
|
Response.Cache.SetCacheability(HttpCacheability.NoCache) ;
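In context, a minimal handler sketch (the handler name is hypothetical); note that the HttpCacheability enum also has values such as ServerAndNoCache if you still want server-side output caching while blocking the client:

public class ReportHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Tell the client (and proxies) not to cache this response.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);

        context.Response.ContentType = "text/plain";
        context.Response.Write(DateTime.Now.ToString());
    }

    public bool IsReusable
    {
        get { return true; }
    }
}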
|
Can you disable caching on the client side in an HttpHandler but enable it on the server side?
|
Disable caching on the client side in ASP.NET
|
For the following call:
_context.Response.Cache.SetCacheability(HttpCacheability.Public);
it turns out that in addition to setting the Cache-Control: public HTTP header, it also enables server-side output caching.
|
If you set caching (as below) in an HTTP handler, will it be cached on the server or client or both?
_context.Response.Cache.SetCacheability(HttpCacheability.Public);
_context.Response.Cache.SetExpires(DateTime.Now.AddSeconds(180));
|
ASP.NET cache on client or server
|
Are you using some kind of ORM like Hibernate or any other JPA implementation?
If not, perhaps you should - because it does exactly what you want in terms of managing 1st / 2nd level cache and data synchronization with database. Hibernate in particular can use a number of cache providers including the very same EHCache; said cache can be disk-backed or distributed.
|
What are the best ways to keep data synchronized between in memory cache and the database when used in a web application? Specifically, I store large amounts of database data in an in-memory cache using EHcache on my web application. For many interactions, I would like to simply modify the value in cache, and only synchronize it with the database during low-traffic times. What is a good way to do this as well to minimize the loss of data should an unexpected shutdown occur?
|
Cached data Synchronization to Database
|
You need to use the lock statement whenever you access or update your static cache. The lock statement will block other threads from executing until it is finished. If you don't do this, you might have one thread attempting to loop through the collection at the same time as another thread is removing a row. Depending on your exact scenario, you might want to use double-check locking.
static readonly object lockObj = new object();
private static List<myObject> _myObject;

public List<myObject> FillMyObject()
{
    lock (lockObj)
    {
        if (_myObject == null || myTimer)
            _myObject = getfromDataBase();
        return _myObject;
    }
}

public List<myObject> UpdateMyObject(string somevalue) // adjust the parameter type to match myObject.somevalue
{
    lock (lockObj)
    {
        _myObject.RemoveAll(delegate(myObject o)
        {
            return o.somevalue == somevalue;
        });
        return _myObject;
    }
}
Further Reading
|
I have a static Cache that at a set time updates a generic list of Objects from a database.
It is just a simple static List:
private static List<myObject> _myObject;
public List<myObject> FillMyObject()
{
if(_myObject == null || myTimer)
_myObject = getfromDataBase();
}
I have 2 methods to update my object called UpdateMyObject and RemoveAnEntryFromMyObject.
Everything seems to run fine, but every once in a while I get a mass of errors. Then it goes away and seems fine again. Does anyone know what is going on?
|
Static Cache Error
|
Never mind - found it. There's a trigger under each table that appears to be left over. It needs to be deleted to prevent the referencing.
|
I'm getting an exception in my data tier when I try to disable cache dependency in SQL server:
System.Exception: TblSettings::Insert::Error occured. --->
System.Data.SqlClient.SqlException: Could not find stored procedure
'dbo.AspNet_SqlCacheUpdateChangeIdStoredProcedure'.
The statement has been terminated.
Enabling cache dependency, everything is fine. Disabling it, the above exception gets thrown. How do I turn this off conclusively? I've checked the code and can't seem to find where it is referenced, apart from the web.config, from which I've removed the cache block. From what I can see, this seems to be caused by SQL Server itself. Does anyone have any ideas of things to check?
The following is console output I'm trying to turn on and off:
C:\Windows\Microsoft.NET\Framework\v2.0.50727>
aspnet_regsql -S JDAWG\SQLEXPRESS -U sa -P password -d DB -dd
Disabling the database for SQL cache dependency.
.
Finished.
C:\Windows\Microsoft.NET\Framework\v2.0.50727>
aspnet_regsql -S JDAWG\SQLEXPRESS -U sa -P password -d DB -ed
Enabling the database for SQL cache dependency.
.
Finished.
|
Cache Dependency Off causing exception
|
The limit case for a rope-like string would be built on top of a std::list<char>. That obviously isn't very efficient. When iterating, you are likely to have one cache miss per "leaf"/char. As the number of characters per leaf goes up, the average number of misses goes down, with a discontinuity as soon as your leaf allocation exceeds a single cache line.
It might still be a good idea to have larger leaves; memory transfers in cache hierarchies might have different granularities at different levels. Also, when targeting a mixed set of CPUs (i.e. consumer PCs), a leaf size which is a higher power of two will be an integral multiple of the cache line size on more machines. E.g. if you're addressing CPUs with 16- and 32-byte cache lines, 32 bytes would be the better choice, as it's always an integral number of cache lines. Wasting half a cache line is a shame.
|
From Wikipedia:
The main disadvantages are greater
overall space usage and slower
indexing, both of which become more
severe as the tree structure becomes
larger and deeper. However, many
practical applications of indexing
involve only iteration over the
string, which remains fast as long as
the leaf nodes are large enough to
benefit from cache effects.
I'm implementing a sort of compromise between ropes and strings. Basically it's just ropes, except that I'm flattening concatenation objects into strings when the concatenated strings are short. There are a few reasons for this:
The benefits of concatenation objects are minimal when the concatenated strings are short (it doesn't take too long to concatenate two strings in their normal form).
Doing this reduces the largeness/depth of the tree (reducing the downsides of ropes).
Doing this increases the size of the leaf nodes (to take better advantage of cache).
However, as length gets longer, the advantages of the ropes also decrease, so I'd like to find some compromise. The "sweet spot" logically seems to be around where "the leaf nodes are large enough to benefit from cache effects". The problem is, I don't know how large that is.
EDIT: While I was writing this, it occurred to me that the ideal size would be the size of a cache page, because then the rope only causes cache misses when they would happen anyway in a string. So my second question is, is this reasoning correct? And is there a cross-platform way to detect the size of a cache page?
My target language is C++.
|
Ropes: what's "large enough to benefit from cache effects"?
|
Instead of caching the XML response, you might be better off using EHCache to cache whatever objects you're creating as a result of the web service call.
If it's a matter of performance (i.e., your web service takes seconds to reply), then caching is a good idea. Nearly all of the Axis web services I've created ran sub-second, so caching may not be desirable or necessary in that instance.
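A client-side sketch of that idea, independent of Axis itself (the SearchResult type, the searchServiceStub/searchDetail names and the 60-second TTL are all assumptions standing in for your generated stub):

// Cache the deserialized result objects, keyed by the search criteria,
// so repeated identical searches skip the remote call entirely.
private final Map<String, CachedResult> cache = new ConcurrentHashMap<String, CachedResult>();
private static final long TTL_MS = 60 * 1000; // assumed time-to-live

public SearchResult search(String criteria) throws RemoteException {
    CachedResult hit = cache.get(criteria);
    if (hit != null && System.currentTimeMillis() - hit.loadedAt < TTL_MS) {
        return hit.result;                          // served from the client-side cache
    }
    SearchResult fresh = searchServiceStub.searchDetail(criteria); // the Axis call
    cache.put(criteria, new CachedResult(fresh, System.currentTimeMillis()));
    return fresh;
}

private static class CachedResult {
    final SearchResult result;
    final long loadedAt;
    CachedResult(SearchResult result, long loadedAt) {
        this.result = result;
        this.loadedAt = loadedAt;
    }
}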
|
I want to cache web service response XML on the client side, so we don't have to wait for the server's response every time.
Is there any mechanism available in Axis 1.4 that helps cache the server's response XML on the client side?
Scenario:
I am consuming one search detail service method, which hits the supplier's servers every minute for different search criteria, and most of the time the search criteria are the same. Even so, we hit their servers again, and they take time to return a response. I am wondering whether there is any mechanism in Axis 1.4 that will help me store/cache the response XML on the client side, so we don't have to hit the servers for the same search criteria and can serve the XML/data from the client-side cache instead.
Is there any configuration / setting required in Axis 1.4?
I have started googling this and am not finding any useful details about client-side caching.
Please point me to any tutorials on this.
|
Caching using Axis 1.4 + web service response caching at client side
|
Here's a quick edit to web.config for caching images/css/js in .NET
<staticContent>
<clientCache httpExpires="Sun, 29 Mar 2020 00:00:00 GMT" cacheControlMode="UseExpires" />
</staticContent>
see this post: http://madskristensen.net/post/Add-expires-header-for-images.aspx
|
I have implemented caching of images on my website. It works fine under IIS 6 but not under IIS 7: the images are not shown on the site when it is hosted on the IIS 7 server, although they are shown on the IIS 6 server.
I implemented the caching using this article:
http://www.codeproject.com/KB/aspnet/CachingImagesInASPNET.aspx
Does anyone have any idea what's going wrong, or a good suggestion regarding image caching? Any help will be really appreciated.
Thanks
|
Problem in Caching Images in ASP.Net in IIS 7
|
The following headers should do that. Whatever page you're trying protect, add them there.
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Last-Modified: "now"
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Obviously, the now needs to be dynamic.
|
I have an issue with my Rails application and the browser's cache: When a user logs out of the authenticated section of the site, they are still able to use the back button on the browser to see the authenticated page. I do not want to allow this.
How can I expire the cache and force it to reload.
Thank you
|
With Rails, How can I expire the Browser's cache?
|
I have implemented a workaround; see the code below. I judged handling the compression easier than handling the caching, so I implemented the compression part myself. It was quite easy thanks to a blog post: HttpWebRequest and GZip Http Responses; I still think this is a bug in .NET.
public static string GET(string URL)
{
    string JSON;

    // Create the web request
    HttpWebRequest request = WebRequest.Create(URL) as HttpWebRequest;
    HttpRequestCachePolicy cPolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Revalidate);
    request.Accept = "application/json";
    request.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
    request.CachePolicy = cPolicy;
    request.Pipelined = false;

    // Get response
    using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
    {
        // From http://www.west-wind.com/WebLog/posts/102969.aspx
        Stream responseStream = response.GetResponseStream();

        if (response.ContentEncoding.ToLower().Contains("gzip"))
            responseStream = new GZipStream(responseStream, CompressionMode.Decompress);
        else if (response.ContentEncoding.ToLower().Contains("deflate"))
            responseStream = new DeflateStream(responseStream, CompressionMode.Decompress);

        // Get the response stream
        StreamReader readerF = new StreamReader(responseStream);
        JSON = readerF.ReadToEnd();
    }

    return JSON;
}
|
I have a c# client talking to a cherrypy(http/rest) webservice.
The problem is I can't turn on both compression and caching at the same time.
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
Leaving out the above line gets me the correct caching headers (If-None-Match, If-Modified-Since), while including it gets me the compression headers (Accept-Encoding: gzip) but not the caching headers. It seems to me like a bug, but maybe I'm doing something wrong.
[full code]
public static string GET(string URL)
{
    string JSON;

    // Create the web request
    HttpWebRequest request = WebRequest.Create(URL) as HttpWebRequest;
    HttpRequestCachePolicy cPolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Revalidate);
    request.Accept = "application/json";
    request.CachePolicy = cPolicy;
    request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
    request.Pipelined = false;

    // Get response
    using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
    {
        // Get the response stream
        StreamReader readerF = new StreamReader(response.GetResponseStream());
        JSON = readerF.ReadToEnd();

        // Console application output
        //Console.WriteLine(JSON);

        if (!response.IsFromCache)
            Console.WriteLine("Request not from cache");
    }

    return JSON;
}
|
How to mix compression and caching in HttpWebRequest?
|
I would recommend that you cache them somewhere inside the DOM itself, either at their natural place or in a "cache" div, just hide them or their container (visibility: hidden). Move them around the DOM (e.g. final_container.appendChild(cache.removeChild(cached_item))) and show them as required. This should give you the best bang for the buck in terms of memory efficiency, speed and simplicity when dealing with moderate amounts of cached information.
With the proper cache directives inside your AJAX replies' headers, the browser might also perform caching for your AJAX replies just like for regular pages.
Check out this Browser-Side Cache article for ideas, too.
|
I am building a simple blog which is viewed single-page (do not worry, it is progressively built) and thus I am sending AJAX requests which return HTML to be inserted into the page.
What is the most efficient way to store/cache information (HTML) to be added at a later time into the DOM?
How much information (old entries which the user may return to) dare I save client side using JavaScript considering that they contain HTML for an entire article?
Maybe I do not have to save them with JavaScript if I somehow make sure the clients browser cache the AJAX application's state (e.g. getHTML.php?article=4) so that it returns the HTML without really sending an AJAX request (after it has already been requested once)?
Thanks in advance,
Willem
|
AJAX and Javascript cache efficiency question
|
IE Developer Toolbar has "Clear Browser Cache ..." and "Clear Browser Cache for This Domain ..." menu items.
|
I'm trying to debug some IE-only issues for a site I'm developing. I'm running WDE because there's no Firebug for IE. I want to see whether some changes fix a bug, but no matter what I do, IE never picks up my changes. I've tried all of the following:
stopping and restarting the debug environment
closing and re-opening WDE
closing and re-opening IE
clearing IE's "temporary internet files"
swearing at Microsoft for building such awful software
Any help? Are there some cached files somewhere on the drive I could clear out?
|
How do I clear IE's cache when running Web Developer Express?
|
It's actually really simple, just point your 404 error page in IIS to your HTTPHandler.
Request.RawUrl will look like:
http://yourdomain.com/yourhandler.ashx;originally/requested/url
|
I have a system which sits on a web server and generates files on the fly in response to HTTP requests. This is currently implemented as an HTTPHandler.
Once the files are generated, they don't change very often, so I'd like to implement a cache.
Ideally, I'd like the web server to look at the cache folder and serve the files directly from there without any of my code having to execute (web servers are designed to be good at serving files after all, so if I can keep out of the way of that so much the better!).
What I'd like to then do is hook into the server's "file not found" event as an opportunity to create the file, drop a copy in the cache folder for the next time it's requested and also return it to the user instead of the "file not found" message.
This way, repeat requests for files will be lightning fast and my code will only get called in 'exceptional' cases.
So - the question is - how do I wire my code into the "file not found" event in as unobtrusive and lightweight way as possible?
Thanks
|
How can I implement a Read-Through File Cache in ASP.NET?
|
Has anyone upgraded the version of php on the server since apc.so was created? It may be that apc.so was compiled against a different version of php.
If possible, try re-compiling apc.so against the current version of php. Or if you are using a package manager, try removing the apc package entirely and reinstall it.
|
I decided to install APC to speed up the site that I work for. Sadly, I found out that it was already installed and enabled(The developer who first worked on the servers has moved on).
Then I decided to check the usage of it to see if it needs more memory allocated to it or not. This is when I discovered something weird. A simple file with this code:
<?php
print_r(apc_cache_info());
?>
It would not work when served from apache. I get Error 320 (net::ERR_INVALID_RESPONSE): Unknown error. And there is nothing in the error log. From the cli on the server, it works fine. But it only says that my check_apc.php file is cached(name of the script that I was running).
So it looks like APC has not been fully/correctly set up. Does anyone know what the problem could be?
Contents of /etc/php.d/apc.ini:
; Enable apc extension module
extension = apc.so
; Options for the apc module
apc.enabled=1
apc.shm_segments=1
apc.optimization=0
apc.shm_size=32
apc.ttl=7200
apc.user_ttl=7200
apc.num_files_hint=1024
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.enable_cli=1
apc.cache_by_default=1
The server is running CentOS
|
Incorrectly set up APC for PHP?
|
I had the exact same problem while developing on andLinux.
My andLinux's clock was about three hours ahead of the host Windows, and setting the correct time (actually, a minute or so behind) has solved the problem.
|
I have the following setup:
Code on my local machine (OS X) shared as a Samba share
A Ubuntu VM running within Parallels, mounts the share
Running Rails 2.1 (via Mongrel, WEBrick or Passenger) in development mode, if I make changes to my views they don't update unless I kick the server. I've tried switching to an NFS share instead, but I get the same problem. I would assume it was some sort of Samba cache issue, but autotest picks up the changes to files instantly.
Note:
This is not render caching or template caching and config.action_view.cache_template_loading is not defined in the development config.
Checking out the codebase direct to the VM doesn't display the same issue (but I'd prefer not to do this)
Editing the view file direct on the VM does not resolve this issue.
Touching the view file after alterations does cause the changes to appear in the browser.
I also noticed that the clock in the VM was an hour fast, changing that to the correct time made no difference.
|
Why does Rails cache view files when hosted on VM and codebase on Samba share
|
This question covers making sure a webpage is not cached. It seems you have to set several properties to ensure a web page is not cached across all browsers.
|
Will the code below work if the clock on the server is ahead of the clock on the client?
Response.Cache.SetExpires(DateTime.Now.AddSeconds(-1))
EDIT: the reason I ask is on one of our web apps some users are claiming they are seeing the pages ( account numbers, etc ) from a user that previously used that machine. Yet we use the line above and others to 'prevent' this from happening.
|
web page cache setexpires
|
You can limit the sharing setting of a cache mount for situations like this:
RUN --mount=type=cache,sharing=locked,target=/var/cache/apt apt update && apt -y --no-install-recommends install build-essential
sharing: One of shared, private, or locked. Defaults to shared. A shared cache mount can be used concurrently by multiple writers. private creates a new mount if there are multiple writers. locked pauses the second writer until the first one releases the mount.
https://docs.docker.com/reference/dockerfile/#run---mount
|
I am using multi-stage build where both stages install APT packages (MWE Dockerfile below). I use --mount=type=cache on /var/cache/apt to avoid re-downloading of packages, in both stages. The cache directory is shared between the stages, which in itself is good, but: as they run in parallel, one or the other will fail with:
43.24 E: Could not get lock /var/cache/apt/archives/lock. It is held by process 0
43.24 E: Unable to lock directory /var/cache/apt/archives/
(the -o DPkg::Lock::Timeout=180 option would not help, that is for dpkg, not APT)
Now, I see three possible ways to solve this, but don't know how to realize either of them (without hacks, that is):
make APT wait for the lock to be released; is there an option for that? I could not duckduck anything out.
make the cache separate for each stage; I would not mind, as the packages installed are typically different (build vs. runtime), but don't see an option to --mount which would let me specify that.
make the build run serially; I really would not mind, that is not the issue. But again, could not find any option for buildkit enforcing that.
Advice or other suggestions would be appreciated.
This is a minimal Dockerfile which you can try yourself; it does not do anything useful but should trigger the issue.
FROM debian:bookworm-20230919 as builder
RUN rm -f /etc/apt/apt.conf.d/docker-clean
RUN --mount=type=cache,target=/var/cache/apt apt update && apt -y --no-install-recommends install build-essential
# build stuff, result in /tmp/build
FROM debian:bookworm-20230919 as production
RUN rm -f /etc/apt/apt.conf.d/docker-clean
RUN --mount=type=cache,target=/var/cache/apt apt update && apt -y --no-install-recommends install vim
# install stuff via --mount=target=/build,from=builder,source=/tmp/build
|
APT lock contention on multi-stage (parallel) builds with cache
|
From the Speedb Hive:
there is a parameter in table_options called prepopulate_block_cache; the default is disabled, but you can set it to flush-only.
You can find the Speedb hive here and (once you've registered) the link to the thread with your question here, if you have more questions or need additional info
|
I have a few column families which have references to each other -- to construct a "full object" I need to join data across them. The upstream providing me with data often provides updates across cross-referenced items in multiple column families around the same time, but there's no guarantee about ordering. When I get an update to one of them, I need to do a lookup of the referenced values to construct the full object for my application to use. Ideally, I want these reads-of-recently-written-data to hit a cache so that I don't end up doing 1 or more read IOs per item that I write.
I know RocksDB keeps writes in RAM in a MemTable before flushing data into an SST file on disk, but I couldn't find an answer in the documentation about whether writes which have been flushed ever enter the LRU cache. Is allowing the MemTables to get really large the best / only way for me to tune the write caching behavior?
|
how does RocksDB cache writes?
|
So, does cache.set() actually use version=1 by default instead of version=None in Django?
It uses the version specified in the cache. Indeed, if we look at the source code [GitHub], we see:
def make_key(self, key, version=None):
    """
    Construct the key used by all other methods. By default, use the
    key_func to generate a key (which, by default, prepends the
    `key_prefix' and 'version'). A different key function can be provided
    at the time of cache construction; alternatively, you can subclass the
    cache backend to provide custom key making behavior.
    """
    if version is None:
        version = self.version
    return self.key_func(key, self.key_prefix, version)
and the version can be set as parameter, or will default to one [GitHub]:
def __init__(self, params):
    # …
    self.version = params.get("VERSION", 1)
    # …
so you can set this in the cache settings with:
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "unique-snowflake",
        "VERSION": 1425,
    }
}
if not, it will indeed default to one. This is also (briefly) explained in the options for the cache in the documentation on settings [Django-doc]:
VERSION
Default: 1
The default version number for cache keys generated by the Django server.
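A quick way to confirm this from a Django shell (a minimal sketch; it assumes you have not set VERSION in your CACHES settings, so the default of 1 applies, and the key name is just an example):
from django.core.cache import cache
cache.set("greeting", "hello")               # no version passed
print(cache.get("greeting", version=1))      # hello -> stored under version 1 by default
print(cache.get("greeting", version=2))      # None  -> other versions are separate entries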
|
If I only set and get David with version=0, then I can get John and David in order as shown below. *I use LocMemCache which is the default cache in Django and I'm learning Django Cache:
from django.core.cache import cache
cache.set("name", "John")
cache.set("name", "David", version=0)
print(cache.get("name")) # John
print(cache.get("name", version=0)) # David
And, if I only set and get David with version=2, then I can get John and David in order as well as shown below:
from django.core.cache import cache
cache.set("name", "John")
cache.set("name", "David", version=2)
print(cache.get("name")) # John
print(cache.get("name", version=2)) # David
But, if I only set and get David with version=1, then I get David and David in order as shown below, so is this because John is set with version=1 by default?:
from django.core.cache import cache
cache.set("name", "John")
cache.set("name", "David", version=1)
print(cache.get("name")) # David
print(cache.get("name", version=1)) # David
In addition, if I set and get John and David without version, then I get David and David in order as well, as shown below:
from django.core.cache import cache
cache.set("name", "John")
cache.set("name", "David")
print(cache.get("name")) # David
print(cache.get("name")) # David
I know that the doc shows version=None for cache.set() as shown below:
cache.set(key, value, timeout=DEFAULT_TIMEOUT, version=None)
So, does cache.set() actually use version=1 by default instead of version=None in Django?
|
Does `cache.set()` use `version=1` by default instead of `version=None` in Django?
|
Add implements Serializable to the NoteResponse class declaration, for example:
public class NoteResponse implements Serializable
|
I want to cache the List in redis. Below is the service method for the same:-
@Autowired
private RedisTemplate<String, Object> redisTemplate;

@Cacheable(value = "notes", key = "#userId")
public ResponseEntity<?> getAllNotes(Integer userId) {
    HashOperations<String, String, Object> hashOperations = redisTemplate.opsForHash();
    // Check if the cacheKey contains this userId
    if (hashOperations.hasKey(cacheKey, userIdAsKey)) {
        List<NoteResponse> noteResponses = (List<NoteResponse>) hashOperations.get(cacheKey, userIdAsKey);
        return ResponseEntity.ok(noteResponses);
    }
    List<Note> notes = noteRepository.findAllByUserId(userId);
    List<NoteResponse> noteResponses = Helper.getNoteResponse(notes);
    hashOperations.put(cacheKey, userIdAsKey, noteResponses);
    return ResponseEntity.ok(noteResponses);
}
but I am getting this exception: java.lang.IllegalArgumentException: DefaultSerializer requires a Serializable payload but received an object of type [org.springframework.http.ResponseEntity]
|
DefaultSerializer requires a Serializable payload but received an object of type [org.springframework.http.ResponseEntity]
|
Workflows can only restore caches created on the base branch, the current branch, or the default branch. It's hard to tell from the screenshot, but if those caches are from different tags, that is why there is more than one cache.
|
I'm trying to maintain separate caches for each build environment (qa, staging, prod). So that whenever I try to build for the QA environment, a particular cache is used to speed up the build process. I'm triggering this build workflow by creating a release.
My problem is that while retrieving the cache in the cache step, the cache is always missed, and a new cache is created every time with the same key. My cache key is ${{ runner.os }}-buildx-${{ env.deploy_env }}. When I look at the GitHub cache page, I see multiple caches with the same key.
What am I missing here? Any help is appreciated
|
Github Actions - multiple cache with same key
|
The cleanest way to do this with Guava is to have two caches, one to look up by deviceId and one to look up by userId.
I don't know if you consider that clean enough, but I'd consider that the best possible type-safe solution.
|
Whenever we need to retrieve data by multiple keys, we tend to lean on databases. I have a Cart class as shown:
@Data
public class Cart {
    private String cartId;
    private String restaurantId;
    private String userId;
    @NotNull
    private String deviceId;
    private Map<String, Integer> items;
}
I want to retrieve my cart by userId or deviceId. In an RDBMS, the way would have been declaring deviceId (primary key) and userId (unique key), which would have been sufficient for me to retrieve the cart efficiently with either of these.
Is there a clean way to achieve this with Guava or some other cache? By clean, I expect it to be a first-class construct provided by the cache, so that I don't have to write a wrapper around it to use it.
|
How to use Guava Cache with secondary index key?
|
You needn't call refetch. It calls the API regardless of the cache.
Comment/Remove this code
// useEffect(() => {
// if (debounceInputValue.length > 1) {
// refetch();
// }
// }, [debounceInputValue, refetch]);
And you should enable the useQuery
enabled: true,
And use debounceInputValue instead of inputValue for useQueryData
https://codesandbox.io/s/react-query-autocomplete-forked-d84rf4?file=/src/components/FormFields/CacheAutocompleteField.tsx:1255-1263
|
I built a form using react,react-query,
link to the code
I built custom fields:
CacheAutocompleteField - cache field using react-query
queryAsyncFunc props - get async function and cache the data using react-query
I have 3 fields:
Type - Select field
Country - CacheAutocompleteField
City - CacheAutocompleteField
My scenario:
I select any type from my hardcoded list (Type A, Type B , Type C),
I search any country, then I search any city
What I'm trying to do:
Every time I select a new type (from the type options), I want the country and city fields to be reset.
Every time I search a key I have already searched (the queryKey is composed of cacheKey + inputValue), it should not call the API; it should get the results from the cache (that's what I chose react-query for).
What I'm getting when I run my code:
When I select type A and enter the country "Island", it fetches data from the API and gets the data.
Then when I select type B and enter the country "Island", it fetches data from the API and gets the data.
But when I select type A and the same country "Island" again, I don't want it to fetch data from the API - I want it to get the data from the cache (that's the reason I chose to work with react-query), because it has already searched for this data with the same type. The queryKey depends on the type field.
Also, when I search for something in the autocomplete and it isn't found, and I then try to reset it by selecting any other type, it only partially resets: the visible value of the input is cleared, but the text still exists in the country's inputValue.
For example, I select type C, then enter "lakd" in country, then select any other type - it is not reset. The reset only works when something is found in the autocomplete and I select it. I guess it's because the Autocomplete component doesn't have an inputValue prop, but when I use that prop it causes other issues.
|
react-query refetch api call with cache issue
|
The app will make requests wherever necessary; caching does not stop that, it just speeds things up by providing locally stored data.
There is no caching to be obtained from Redux or Redux Toolkit by themselves; Redux is just a predictable state manager.
Using a dedicated data-fetching tool such as RTK Query or React Query will help, as they provide powerful cache management.
Manual caches are hard to handle. A session-storage cache would have to be minified and encrypted, and would require heavy cache busting and updating.
|
At my current work I came across an issue where it seems that the application is making some extra requests that I believe can be avoided and the app optimized for better performance.
Our tech stack is: Typescript + React + Redux (regular one, not Redux-Toolkit)
I would like to have the following outcomes:
The same dependency value should not cause a re-render of the page and a new request to the backend.
When the user switches between pages of the application, coming back to a previously opened page currently triggers the same full set of requests each time that page is opened.
For the (1) issue as far as I know I can do some optimization using useMemo and useCallback, however trying to find a suitable solution for (2) issue I came across a wide variety of options: from setting up some logic manually or using some wrappers to create a cache with Session Storage or IndexedDB in the browser to such solutions as Redux-Toolkit + RTK Query, React Query, SWR, Redux-Persist, etc.
What approach would you recommend me to pursue? I would say I understand that it requires quite significant changes in code and refactoring, so even more complicated solutions would fit greatly as long as they are more sustainable long term and easier to reuse.
As I understood Redux-Toolkit + RTK Query and React Query are exactly can be used for the purpose of caching and highly sustainable, would it be correct? Which one out of those 2 would be a recommended option?
Would using something like Redux-Persist suffice? As I understand it allows caching to session storage and in my case is most likely what I need. A state should be preserved as long as the page was opened in the browser and no dependencies have been changed.
|
React: What would be a recommended way to cache state of the app in the browser to minimize the amount of requests to the backend?
|
I was facing the same issue. My pipeline was downloading the Sonar plugins every time and taking about 40~60 seconds.
I was able to cache the plugins through the .sonar/cache folder and decrease the download time to around 8~12 seconds.
Example:
variables:
  SONAR_PLUGINS: /home/vsts/.sonar/cache

steps:
- task: Cache@2
  inputs:
    key: sonar | "$(Agent.OS)" | $(Build.Repository.Name)
    path: $(SONAR_PLUGINS)
  displayName: cache sonar plugins
|
I use the Sonar tasks in my Azure DevOps pipeline to run static analysis on my code, but I would like to avoid having the pipeline download the plugins everytime the pipeline runs.
I think I could use the Cache task but I'm not sure how should I configure it.
|
How can I cache Sonar plugins in Azure DevOps Pipelines so that they don't download everytime the pipeline runs?
|
If you want to group cache entries and remove only a specific group of cached items, you can use tags.
Please read this; it will be helpful.
Cache::tags(['tag_'.$tenant])->remember('dep_count:'.$tenant, 60 * 60 * 24, function () use ($tenant) {
    return Department::where('is_active', 'yes')->where('tenantID', $tenant)->count();
});
For remove
Cache::tags(['tag_'.$tenant])->flush();
|
I am using using laravel and Redis driver to store my cache and I keep my keys scoped to the current tenant. Below is how I set my cache; where $tenant is the current tenant.
$department_count = Cache::remember('dep_count:'.$tenant, 60 * 60 * 24, function () use ($tenant) {
    return Department::where('is_active', 'yes')->where('tenantID', $tenant)->count();
});
So when switching schools (tenants), I want to clear all the cache for the current tenant using Cache::forget(), but I don't know how to do that.
I've tried Cache::forget('*:'.$tenant) but it doesn't seem to work. Any help on how to go about it is much appreciated.
|
Forgetting Cache using Regular expression laravel
|
AWS CLI
See the "AWS CLI Command Reference" for more information.
AWS recently released their Command Line Tools, which work much like boto and can be installed using
sudo easy_install awscli
or
sudo pip install awscli
Once installed, you can then simply run:
aws s3 sync s3://<source_bucket> <local_destination>
For example:
aws s3 sync s3://mybucket .
will download all the objects in mybucket to the current directory.
And will output:
download: s3://mybucket/test.txt to test.txt
download: s3://mybucket/test2.txt to test2.txt
This will download all of your files using a one-way sync. It will not delete any existing files in your current directory unless you specify --delete, and it won't change or delete any files on S3.
You can also do S3 bucket to S3 bucket, or local to S3 bucket sync.
Check out the documentation and other examples.
Whereas the above example is how to download a full bucket, you can also download a folder recursively by performing
aws s3 cp s3://BUCKETNAME/PATH/TO/FOLDER LocalFolderName --recursive
This will instruct the CLI to download all files and folder keys recursively within the PATH/TO/FOLDER directory within the BUCKETNAME bucket.
|
I noticed that there does not seem to be an option to download an entire s3 bucket from the AWS Management Console.
Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again but I don't know if there's an easier way.
|
Downloading an entire S3 bucket?
|
The problem is wrong set of permissions on the file.
Easily solved by executing -
chmod 400 mykey.pem
Taken from AWS instructions -
Your key file must not be publicly viewable for SSH to work. Use this
command if needed: chmod 400 mykey.pem
400 protects it by making it read only and only for the owner.
|
I've created a new linux instance on Amazon EC2, and as part of that downloaded the .pem file to allow me to SSH in.
When I tried to ssh with:
ssh -i myfile.pem <public dns>
I got:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'amazonec2.pem' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: amazonec2.pem
Permission denied (publickey).
Following this post I tried to chmod +600 the .pem file, but now when I ssh I just get
Permission denied (publickey).
What school-boy error am I making here?
The .pem file is in my home folder (in macOS). Its permissions look like this:
-rw-------@ 1 mattroberts staff 1696 19 Nov 11:20 amazonec2.pem
|
"UNPROTECTED PRIVATE KEY FILE!" Error using SSH into Amazon EC2 Instance (AWS) [closed]
|
As of September 2017, you no longer have to configure mappings to access the request body.
All you need to do is check, "Use Lambda Proxy integration", under Integration Request, under the resource.
You'll then be able to access query parameters, path parameters and headers like so
event['pathParameters']['param1']
event["queryStringParameters"]['queryparam1']
event['requestContext']['identity']['userAgent']
event['requestContext']['identity']['sourceIp']
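For reference, a minimal Python handler sketch reading those fields (the handler shape and the path parameter name "name" are illustrative, assuming a route like /user/{name}; with proxy integration the response body must be a JSON string):
import json

def lambda_handler(event, context):
    # API Gateway (Lambda proxy integration) passes the whole request in `event`.
    path_params = event.get("pathParameters") or {}          # e.g. GET /user/bob
    query_params = event.get("queryStringParameters") or {}  # e.g. GET /user?name=bob
    name = path_params.get("name") or query_params.get("name")
    return {
        "statusCode": 200,
        "body": json.dumps({"name": name}),
    }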
|
for instance if we want to use
GET /user?name=bob
or
GET /user/bob
How would you pass both of these examples as a parameter to the Lambda function?
I saw something about setting a "mapped from" in the documentation, but I can't find that setting in the API Gateway console.
method.request.path.parameter-name for a path parameter named parameter-name as defined in the Method Request page.
method.request.querystring.parameter-name for a query string parameter named parameter-name as defined in the Method Request page.
I don't see either of these options even though I defined a query string.
|
How to pass a querystring or route parameter to AWS Lambda from Amazon API Gateway
|
I figured it out. I had the arguments in the wrong order. This works:
scp -i mykey.pem somefile.txt [email protected]:/
|
I have an EC2 instance running (FreeBSD 9 AMI ami-8cce3fe5), and I can ssh into it using my amazon-created key file without password prompt, no problem.
However, when I want to copy a file to the instance using scp I am asked to enter a password:
scp somefile.txt -i mykey.pem [email protected]:/
Password:
Any ideas why this is happening/how it can be prevented?
|
scp (secure copy) to ec2 instance without password
|
See the EC2 documentation on the subject.
Run:
wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
If you need programmatic access to the instance ID from within a script,
die() { status=$1; shift; echo "FATAL: $*"; exit $status; }
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
Here is an example of a more advanced use (retrieve instance ID as well as availability zone and region, etc.):
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
You may also use curl instead of wget, depending on what is installed on your platform.
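If you need the same thing from Python rather than a shell script, here is a minimal sketch hitting the same metadata endpoint (note that instances enforcing IMDSv2 require fetching a session token first, which this sketch omits):
import urllib.request

def get_instance_id(timeout=2):
    # Same endpoint as the wget/curl examples above.
    url = "http://169.254.169.254/latest/meta-data/instance-id"
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read().decode()

print(get_instance_id())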
|
How can I find out the instance id of an ec2 instance from within the ec2 instance?
|
How to get an AWS EC2 instance ID from within that EC2 instance?
|
I found a way using the aws-sdk.
var aws = require('aws-sdk');

var lambda = new aws.Lambda({
    region: 'us-west-2' // change to your region
});

lambda.invoke({
    FunctionName: 'name_of_your_lambda_function',
    Payload: JSON.stringify(event, null, 2) // pass params
}, function(error, data) {
    if (error) {
        context.done('error', error);
    }
    if (data.Payload) {
        context.succeed(data.Payload);
    }
});
You can find the doc here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
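If your functions are written in Python instead, the equivalent call through boto3 looks roughly like this (the function name and region are placeholders):
import json
import boto3

lambda_client = boto3.client("lambda", region_name="us-west-2")  # change to your region

def handler(event, context):
    # Synchronous call; use InvocationType="Event" for fire-and-forget.
    response = lambda_client.invoke(
        FunctionName="name_of_your_lambda_function",
        InvocationType="RequestResponse",
        Payload=json.dumps(event),
    )
    return json.loads(response["Payload"].read())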
|
I have 2 Lambda functions - one that produces a quote and one that turns a quote into an order.
I'd like the Order lambda function to call the Quote function to regenerate the quote, rather than just receive it from an untrusted client.
I've looked everywhere I can think of - but can't see how I'd go about chaining or calling the functions...surely this exists!
|
Can an AWS Lambda function call another
|
Edit: This answer is deprecated and is incorrect. There are several ways to list AWS resources (the AWS Tag Editor, etc.). Check
the other answers for more details.
No.
Each AWS Service (eg Amazon EC2, Amazon S3) have their own set of API calls. Also, each Region is independent.
To obtain a list of all resources, you would have to make API calls to every service in every region.
You might want to activate AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
However, AWS Config only collects information about EC2/VPC-related resources, not everything in your AWS account.
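As a rough starting point, the Resource Groups Tagging API can enumerate the ARNs of taggable resources one region at a time (a sketch using boto3; untagged or non-taggable resources will not show up, which is why AWS Config or the Tag Editor mentioned above may still be needed):
import boto3

def list_resource_arns(region):
    # Returns ARNs of resources that support tagging in one region.
    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    paginator = client.get_paginator("get_resources")
    for page in paginator.paginate():
        for mapping in page["ResourceTagMappingList"]:
            yield mapping["ResourceARN"]

for arn in list_resource_arns("us-east-1"):
    print(arn)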
|
Is there a way to list all resources in AWS? For all regions, all resources.. Such as list all EC2 instances, all VPCs, all APIs in API Gateway, etc... I would like to list all resources for my account, since it's hard for me to find which resources I can relinquish now.
|
Is there a way to list all resources in AWS
|
First off, EC2 and Elastic Compute Cloud are the same thing.
Next, AWS encompasses the range of Web Services that includes EC2 and Elastic Beanstalk. It also includes many others such as S3, RDS, DynamoDB, and all the others.
EC2
EC2 is Amazon's service that allows you to create a server (AWS calls these instances) in the AWS cloud. You pay by the hour and only what you use. You can do whatever you want with this instance as well as launch n number of instances.
Elastic Beanstalk
Elastic Beanstalk is one layer of abstraction away from the EC2 layer. Elastic Beanstalk will setup an "environment" for you that can contain a number of EC2 instances, an optional database, as well as a few other AWS components such as a Elastic Load Balancer, Auto-Scaling Group, Security Group. Then Elastic Beanstalk will manage these items for you whenever you want to update your software running in AWS. Elastic Beanstalk doesn't add any cost on top of these resources that it creates for you. If you have 10 hours of EC2 usage, then all you pay is 10 compute hours.
Running Wordpress
For running Wordpress, it is whatever you are most comfortable with. You could run it straight on a single EC2 instance, you could use a solution from the AWS Marketplace, or you could use Elastic Beanstalk.
What to pick?
In the case that you want to reduce system operations and just focus on the website, then Elastic Beanstalk would be the best choice for that. Elastic Beanstalk supports a PHP stack (as well as others). You can keep your site in version control and easily deploy to your environment whenever you make changes. It will also setup an Autoscaling group which can spawn up more EC2 instances if traffic is growing.
Here's the first result off of Google when searching for "elastic beanstalk wordpress": https://www.otreva.com/blog/deploying-wordpress-amazon-web-services-aws-ec2-rds-via-elasticbeanstalk/
|
What is the difference between EC2 and Elastic Beanstalk? I want to understand this in terms of SaaS, PaaS and IaaS.
To deploy a web application in WordPress I need a scalable hosting service. If there is anything better for my purpose, please let me know as well.
FYI - I want to host and deploy multiple WordPress and Drupal sites.
I do not want to spend more time on the server; I want to focus on development. But the cloud hosting needs to be auto-scalable.
|
Difference between Amazon EC2 and AWS Elastic Beanstalk [closed]
|
I've created a video tutorial for this. Just check:
Connect to Amazon EC2 file directory using FileZilla and SFTP, Video Tutorial
Summary of above video tutorial:
Edit (Preferences) > Settings > Connection > SFTP, Click "Add key file”
Browse to the location of your .pem file and select it.
A message box will appear asking your permission to convert the file into ppk format. Click Yes, then give the file a name and store it somewhere.
If the new file is shown in the list of Keyfiles, then continue to the next step. If not, then click "Add keyfile..." and select the converted file.
File > Site Manager Add a new site with the following parameters:
Host: Your public DNS name of your EC2 instance, or the public IP address of the server.
Protocol: SFTP
Logon Type: Normal
User: From the docs: "For Amazon Linux, the default user name is ec2-user. For RHEL5, the user name is often root but might be ec2-user. For Ubuntu, the user name is ubuntu. For SUSE Linux, the user name is root. For Debian, the user name is admin. Otherwise, check with your AMI provider."
Press Connect Button - If saving of passwords has been disabled, you will be prompted that the logon type will be changed to 'Ask for password'. Say 'OK' and when connecting, at the password prompt push 'OK' without entering a password to proceed past the dialog.
Note: FileZilla automatically figures out which key to use. You do not need to specify the key after importing it as described above.
If you use Cyberduck follow this.
Check this post if you have any permission issues.
|
I have created an AWS EC2 Instance and I want to be able to upload files to the server directory using FileZilla in the simplest and most straightforward fashion possible.
|
Connect to Amazon EC2 file directory using Filezilla and SFTP
|
One way to see the contents would be:
for my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object)
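To print just the keys, or to limit the listing to a prefix, a small sketch (note that Bucket() takes only the bucket name, not a path; the prefix goes into filter(), and the names below are placeholders):
import boto3

s3 = boto3.resource('s3')
my_bucket = s3.Bucket('my-bucket-name')  # bucket name only

for obj in my_bucket.objects.filter(Prefix='some/path/'):
    print(obj.key, obj.size)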
|
How can I see what's inside a bucket in S3 with boto3? (i.e. do an "ls")?
Doing the following:
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')
returns:
s3.Bucket(name='some/path/')
How do I see its contents?
|
Listing contents of a bucket with boto3
|
I had the same problem an solved it. Have a look at the step-by-step instructions:
Go to console.aws.amazon.com
Go To Services -> VPC
Open Your VPCs
select your VPC connected to your EC2 and
select Actions => Edit DNS Hostnames
---> Change DNS hostnames: to YES
|
A guy I work with gave me the EC2 credentials to log onto his EC2 console. I was not the one who set it up. Some of the instances show a public dns name and others have a blank public DNS. I want to be able to connect to the instances that have a blank public DNS. I have not been able to figure out why these show up as blank.
|
EC2 instance has no public DNS [closed]
|
From my experience, the way I do it is create a snapshot of your current image, then once its done you'll see it as an option when launching new instances. Simply launch it as a large instance at that point.
This is my approach if I do not want any downtime(i.e. production server) because this solution only takes a server offline only after the new one is up and running(I also use it to add new machines to my clusters by using this approach to only add new machines). If Downtime is acceptable then see Marcel Castilho's answer.
|
I have an Amazon EC2 micro instance (t1.micro). I want to upgrade this instance to large.
This is our production environment, so what is the safest way to do this?
Is there any step by step guide to do this?
|
How to safely upgrade an Amazon EC2 instance from t1.micro to large? [closed]
|
Boto 2's boto.s3.key.Key object used to have an exists method that checked if the key existed on S3 by doing a HEAD request and looking at the result, but it seems that that no longer exists. You have to do it yourself:
import boto3
import botocore

s3 = boto3.resource('s3')

try:
    s3.Object('my-bucket', 'dootdoot.jpg').load()
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        # The object does not exist.
        ...
    else:
        # Something else has gone wrong.
        raise
else:
    # The object does exist.
    ...
load() does a HEAD request for a single key, which is fast, even if the object in question is large or you have many objects in your bucket.
Of course, you might be checking if the object exists because you're planning on using it. If that is the case, you can just forget about the load() and do a get() or download_file() directly, then handle the error case there.
|
I would like to know if a key exists in boto3. I can loop the bucket contents and check the key if it matches.
But that seems longer and an overkill. Boto3 official docs explicitly state how to do this.
May be I am missing the obvious. Can anybody point me how I can achieve this.
|
check if a key exists in a bucket in s3 using boto3
|
This error message means you failed to authenticate.
These are common reasons that can cause that:
Trying to connect with the wrong key. Are you sure this instance is using this keypair?
Trying to connect with the wrong username. ubuntu is the username for the ubuntu based AWS distribution, but on some others it's ec2-user (or admin on some Debians, according to Bogdan Kulbida's answer)(can also be root, fedora, see below)
Trying to connect the wrong host. Is that the right host you are trying to log in to?
Note that 1. will also happen if you have messed up the /home/<username>/.ssh/authorized_keys file on your EC2 instance.
About 2., the information about which username you should use is often lacking from the AMI Image description. But you can find some in AWS EC2 documentation, bullet point 4. :
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
Use the ssh command to connect to the instance. You'll specify the private key (.pem) file and user_name@public_dns_name. For Amazon Linux, the user name is ec2-user. For RHEL5, the user name is either root or ec2-user. For Ubuntu, the user name is ubuntu. For Fedora, the user name is either fedora or ec2-user. For SUSE Linux, the user name is root. Otherwise, if ec2-user and root don't work, check with your AMI provider.
Finally, be aware that there are many other reasons why authentication would fail. SSH is usually pretty explicit about what went wrong if you care to add the -v option to your SSH command and read the output, as explained in many other answers to this question.
|
I want to use my Amazon ec2 instance but faced the following error:
Permission denied (publickey).
I have created my key pair and downloaded .pem file.
Given:
chmod 600 pem file.
Then, this command
ssh -i /home/kashif/serverkey.pem [email protected]
But have this error:
Permission denied (publickey)
Also, how can I connect with filezilla to upload/download files?
|
Permission denied (publickey) when SSH Access to Amazon EC2 instance [closed]
|
A fix for this problem is to add swap (i.e. paging) space to the instance.
Paging works by creating an area on your hard drive and using it as extra memory. This memory is much slower than normal memory, but much more of it is available.
To add this extra space to your instance you type:
sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
sudo chmod 600 /var/swap.1
sudo /sbin/mkswap /var/swap.1
sudo /sbin/swapon /var/swap.1
If you need more than 1024 then change that to something higher.
To enable it by default after reboot, add this line to /etc/fstab:
/var/swap.1 swap swap defaults 0 0
|
I'm currently running an ec2 micro instance and i've been finding that the instance occasionally runs out of memory.
Other than using a larger instance size, what else can be done?
|
How do you add swap to an EC2 instance?
|
Add a new EC2 security group inbound rule:
Type: Custom ICMP rule
Protocol: Echo Request
Port: N/A
Source: your choice (I would select Anywhere to be able to ping from any machine)
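If you prefer to script it, here is a sketch of the same rule via boto3 (the group ID, region and CIDR are placeholders; for ICMP the "ports" actually carry the ICMP type and code):
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": 8,   # ICMP type 8 = echo request
        "ToPort": -1,    # -1 = any ICMP code
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)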
|
I have an EC2 instance running in AWS. When I try to ping from my local box it is not available.
How can I make the instance pingable?
|
Cannot ping AWS EC2 instance
|
One way or another you must tell boto3 in which region you wish the kms client to be created. This could be done explicitly using the region_name parameter as in:
kms = boto3.client('kms', region_name='us-west-2')
or you can have a default region associated with your profile in your ~/.aws/config file as in:
[default]
region=us-west-2
or you can use an environment variable as in:
export AWS_DEFAULT_REGION=us-west-2
but you do need to tell boto3 which region to use.
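One more option, if it helps: a minimal sketch that sets the region once on the default session at startup, so clients created afterwards inherit it (the region value is just an example):
import boto3

# Configure the default session once, e.g. at application startup.
boto3.setup_default_session(region_name="us-west-2")

kms = boto3.client("kms")  # now inherits us-west-2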
|
I have a boto3 client :
boto3.client('kms')
But this error happens on new machines, which open and close dynamically.
if endpoint is None:
    if region_name is None:
        # Raise a more specific error message that will give
        # better guidance to the user what needs to happen.
        raise NoRegionError()
Why is this happening? and why only part of the time?
|
boto3 client NoRegionError: You must specify a region error only sometimes
|
There is no direct method to rename a file in S3. What you have to do is copy the existing file with a new name (just set the target key) and delete the old one:
@Autowired
private AmazonS3 s3Client;

public void rename(String fileKey, String newFileKey) {
    s3Client.copyObject(bucketName, fileKey, bucketName, newFileKey);
    s3Client.deleteObject(bucketName, fileKey);
}
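The same copy-then-delete approach in Python, in case you are not on the Java SDK (a sketch using boto3; the bucket and key names are placeholders):
import boto3

s3 = boto3.client("s3")

def rename_object(bucket, old_key, new_key):
    # S3 has no rename: copy to the new key, then delete the old one.
    s3.copy_object(Bucket=bucket, CopySource={"Bucket": bucket, "Key": old_key}, Key=new_key)
    s3.delete_object(Bucket=bucket, Key=old_key)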
|
Is there any function to rename files and folders in Amazon S3? Any related suggestions are also welcome.
|
How to rename files and folder in Amazon S3?
|
Stumbled onto this, was strangely hard to find again later. Putting here for posterity:
sudo yum install nodejs npm --enablerepo=epel
EDIT 3: As of July 2016, EDIT 1 no longer works for nodejs 4 (and EDIT 2 neither). This answer (https://stackoverflow.com/a/35165401/78935) gives a true one-liner.
EDIT 1: If you're looking for nodejs 4, please try the EPEL testing repo:
sudo yum install nodejs --enablerepo=epel-testing
EDIT 2: To upgrade from nodejs 0.12 installed through the EPEL repo using the command above, to nodejs 4 from the EPEL testing repo, please follow these steps:
sudo yum remove nodejs
sudo rm -f /usr/local/bin/node
sudo yum install nodejs --enablerepo=epel-testing
The newer packages put the node binaries in /usr/bin, instead of /usr/local/bin.
And some background:
The option --enablerepo=epel causes yum to search for the packages in the EPEL repository.
EPEL (Extra Packages for Enterprise Linux) is open source and free community based repository project from Fedora team which provides 100% high quality add-on software packages for Linux distribution including RHEL (Red Hat Enterprise Linux), CentOS, and Scientific Linux. Epel project is not a part of RHEL/Cent OS but it is designed for major Linux distributions by providing lots of open source packages like networking, sys admin, programming, monitoring and so on. Most of the epel packages are maintained by Fedora repo.
Via http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
|
I've seen the writeup on using yum to install the dependencies, and then installing Node.JS & NPM from source. While this does work, I feel like Node.JS and NPM should both be in a public repo somewhere.
How can I install Node.JS and NPM in one command on AWS Amazon Linux?
|
How to yum install Node.JS on Amazon Linux
|
You can set a bucket policy as detailed in this blog post:
http://ariejan.net/2010/12/24/public-readable-amazon-s3-bucket-policy/
As per @robbyt's suggestion, create a bucket policy with the following JSON:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
Important: replace bucket in the Resource line with the name of your bucket.
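If you want to apply the policy from code instead of the console, a sketch with boto3 (the bucket name is a placeholder and the policy JSON is the one above; on newer accounts you may also need to relax S3 Block Public Access for the bucket):
import json
import boto3

policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "AllowPublicRead",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::bucket/*"],
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="bucket", Policy=json.dumps(policy))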
|
How can I set a bucket in Amazon S3 so all the files are publicly read-only by default?
|
Make a bucket public in Amazon S3
|
It seems likely that this bucket was created in a different region, i.e. not us-west-2. That's the only time I've seen "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
US Standard is us-east-1
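One way to check where the bucket actually lives is the get_bucket_location call, which the SDKs expose (a sketch in Python with boto3; the Ruby SDK used in the question has an equivalent client method, and the bucket name is a placeholder):
import boto3

s3 = boto3.client("s3")
location = s3.get_bucket_location(Bucket="your-bucket-name")["LocationConstraint"]
# An empty/None LocationConstraint means the classic us-east-1 (US Standard) region.
print(location or "us-east-1")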
|
I am trying to delete uploaded image files with the AWS-SDK-Core Ruby Gem.
I have the following code:
require 'aws-sdk-core'

def pull_picture(picture)
  Aws.config = {
    :access_key_id => ENV["AWS_ACCESS_KEY_ID"],
    :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"],
    :region => 'us-west-2'
  }

  s3 = Aws::S3::Client.new

  test = s3.get_object(
    :bucket => ENV["AWS_S3_BUCKET"],
    :key => picture.image_url.split('/')[-2],
  )
end
However, I am getting the following error:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I know the region is correct because if I change it to us-east-1, the following error shows up:
The specified key does not exist.
What am I doing wrong here?
|
AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint
|
Usually, all you need to do is to "Add CORS Configuration" in your bucket properties.
The <CORSConfiguration> comes with some default values. That's all I needed to solve your problem. Just click "Save" and try again to see if it worked. If it doesn't, you could also try the code below (from alxrb answer) which seems to have worked for most of the people.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
For further info, you can read this article on Editing Bucket Permission.
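If you'd rather script this than use the console, roughly the same rules can be applied with the AWS CLI (the bucket name below is a placeholder):
# apply CORS rules equivalent to the XML above
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration '{
  "CORSRules": [{
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["Authorization"],
    "MaxAgeSeconds": 3000
  }]
}'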
|
Did anyone manage to add Access-Control-Allow-Origin to the response headers?
What I need is something like this:
<img src="http://360assets.s3.amazonaws.com/tours/8b16734d-336c-48c7-95c4-3a93fa023a57/1_AU_COM_180212_Areitbahn_Hahnkoplift_Bergstation.tiles/l2_f_0101.jpg" />
The response to this GET request should contain the header Access-Control-Allow-Origin: *
My CORS settings for the bucket looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
As you might expect there is no Origin response header.
|
S3 - Access-Control-Allow-Origin Header
|
Amazon S3 is designed for large-capacity, low-cost file storage in one specific geographical region.* The storage and bandwidth costs are quite low.
Amazon CloudFront is a Content Delivery Network (CDN) which proxies and caches web data at edge locations as close to users as possible.
When end users request an object using this domain name, they are automatically routed to the nearest edge location for high performance delivery of your content. (Amazon)
The data served by CloudFront may or may not come from S3. Since it is more optimized for delivery speed, the bandwidth costs a little more.
If your user base is localized, you won't see too much difference working with S3 or CloudFront (but you have to choose the right location for your S3 bucket: US, EU, APAC). If your user base is spread globally and speed is important, CloudFront may be a better option.
Both S3 and CloudFront allow domain aliases; however, CloudFront allows multiple aliases, so that d1.mystatics.com, d2.mystatics.com and d3.mystatics.com could all point to the same location, increasing the capacity for parallel downloads (this used to be recommended by Google, but with the introduction of SPDY and HTTP/2 it is of lesser importance).
CloudFront also supports CORS as of 2014 (thanks sergiopantoja).
* Note: S3 can now automatically replicate to additional regions as of 2015.
|
Are there use cases that lend themselves better to Amazon CloudFront over S3, or the other way around? I'm trying to understand the difference between the two through examples.
|
When to use Amazon Cloudfront or S3
|
This is not (yet?) available through the web interface.
As GitHub has added file editing and then file creation features, it may make sense to propose such a feature. The recommended channel to do so is to send an email to [email protected].
Update
Deletion of files through the web interface is now available.
|
The title pretty much is the question. I am aware of git rm file but this isn't what I am looking for. I am looking for a way to delete files and folders in a github repo using ONLY a browser.
|
How to delete files in github using the web interface
|
You can use the command below:
gh repo list {organization-name}
Before that, log in with:
gh auth login
gh is the GitHub CLI: github.com/cli/cli
Note that by default gh repo list shows only 30 repositories (e.g. "Showing 30 of 65 repositories"); if you need more, use the -L flag:
gh repo list {organization-name} -L 100
|
Here's my query to the GitHub API
curl -i -u {user} https://api.github.com/orgs/{org}/repos?type=all
But this does not list all repos for this organization that I have access to. Specifically, it does not list repos in the organization that are part of a team that I am a member of.
If I were to query
curl -i -u {user} https://api.github.com/teams/{teamid}/repos
I would see the missing repos. If I were to navigate to github.com, I would see both private organization repos and my team repos next to each other on the same page. Is there a way to get all of these repos in the same API query?
|
List all GitHub repos for an organization - INCLUDING those in teams
|
It does not.
But you can use the sidebar to make a custom 'table of contents' where you can put them in any order you want, with headings and everything (see, for example, https://github.com/BrechtDeMan/WebAudioEvaluationTool/wiki).
You may want to have a look at the answers to this question.
|
I don't seem to be able to give a sort order to the Wiki pages of a project on GitHub. Does this option even exist?
|
GitHub Wiki pages sort order?
|
To get the example to work (i.e. to have one workflow wait for another to complete) you need two files. Both files live in the .github/workflows folder of a repository.
The first file would be set up as usual. This file will be triggered by whatever event(s) are set in the on section:
---
name: Preflight
on:
- pull_request
- push
jobs:
preflight-job:
name: Preflight Step
runs-on: ubuntu-latest
steps:
- run: env
The second file states that it should only trigger on the workflow_run event for any workflows with the name Preflight:
---
name: Test
on:
workflow_run:
workflows:
- Preflight
types:
- completed
jobs:
test-job:
name: Test Step
runs-on: ubuntu-latest
steps:
- run: env
This is more-or-less the same as the example from the GitHub Actions manual.
As you can see on the actions page of my example repo, the Preflight workflow runs first; after it has completed, the Test workflow is triggered.
As you can also see, the branch does not appear for the "Test" workflow.
This is because, (quoting from the manual):
This event will only trigger a workflow run if the workflow file is on the default branch.
This means that the "Test" workflow will run on/with the code from the default branch (usually main or master).
There is a workaround for this...
Every action is run with a set of contexts. The github context holds information about the event that triggered the workflow. This includes the branch that the event was originally triggered from/for: github.event.workflow_run.head_branch.
This can be used to check out the originating branch in the action, using the actions/checkout action provided by GitHub.
To do this, the second workflow would look something like:
---
name: Test
on:
  workflow_run:
    workflows:
      - Preflight
    types:
      - completed
jobs:
  test-job:
    name: Test Step
    runs-on: ubuntu-latest
    steps:
      # check out the branch the triggering workflow ran on, instead of the default branch
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.workflow_run.head_branch }}
      - run: env
|
Another frequently-requested feature for Actions is a way to trigger one workflow based on the completion of another workflow. For example, you may want to take the results of a CI workflow and run some further analysis.
The new workflow_run event enables you to trigger a new workflow when one or more workflows are requested or completed. Runs triggered by the workflow_run event always use the default branch for the repository, and have access to a read/write token as well as secrets. As an example, as a maintainer you could set up a workflow that takes the artifacts generated by the pull request workflow, does some analysis, and posts comments back to the pull request. This event is also available as a webhook and works in all repos.
This is quoted from Github's blog.
Could anybody tell me how to implement the example proposed using the new workflow_run event? The documentation only provides a very simple example:
on:
workflow_run:
workflows: ["Run Tests"]
branches: [main]
types:
- completed
- requested
I would be very glad if someone could show me how to implement this example.
|
How to use the GitHub Actions `workflow_run` event?
|
After some research I've found this solution:
[the real relative root of any fork](/../../)
It always points to the default branch. For me that's OK, so it's up to you.
PS
With such a trick you can also access the following abilities:
[test](/../../tree/test) - link to another branch
[doc/readme.md](/../../edit/master/doc/readme.md) - open in editor
[doc/readme.md](/../../delete/master/doc/readme.md) - ask to delete file
[doc/readme.md](/../../commits/master/doc/readme.md) - history
[doc/readme.md](/../../blame/master/doc/readme.md) - blame mode
[doc/readme.md](/../../raw/master/doc/readme.md) - raw mode (will redirect)
[doc/](/../../new/master/doc/) - ask to create new file
[doc/](/../../upload/master/doc/) - ask to upload file
[find](/../../find/test) - find file
|
I need a relative link to the root of my repo from a markdown file
(I need it to work for any fork).
So it looks like the only way is to provide a link to some file in the root:
the [Root](/README.md)
or
the [Root](../README.md)
(if it's located at /doc/README.md for instance)
At the same time I can refer to any folder without referring to a file
the [Doc](/doc)
But if I try to put a link to the root folder:
the [real root](/)
the [real root](../)
I'll have a link such
https://github.com/UserName/RepoName/blob/master
which unlike the
https://github.com/UserName/RepoName/blob/master/doc
refers to 404
So, given that I don't want to refer to README.md in the root (I might not have it at all),
is there any way to have such a link?
|
Relative Link to Repo's Root from Markdown file
|
You probably want to use GitHub's post-receive hooks.
In summary, GitHub will POST to a supplied URL when someone pushes to the repo. Just write a short PHP script to run on your linode VPS and pull from GitHub when it receives said POST.
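The script itself mostly just needs to run a git pull. As a rough sketch (the path and branch here are assumptions), the PHP endpoint could shell out to something like:
#!/bin/sh
# deploy.sh - executed on the VPS by the webhook endpoint after GitHub POSTs
cd /var/www/myapp || exit 1   # path to the checked-out repository on the server
git pull origin master        # fetch and merge the branch you deploy from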
|
We have a VPS on Linode, and code hosted on GitHub. How do we set things up so that when we push to GitHub, the code is also deployed automatically to our Linode server? We are using PHP on the Linode server.
|
Automatically Deploy From GitHub To Server On Push
|
With private repositories, how do I give someone read access versus write access?
This kind of permission is not available for simple accounts. When you add a user as a collaborator, they gain read/write permissions.
The story changes if you own an Organization. Organizations contain teams, and each team can have a different level of access, including read-only.
You can assign users to a specific read-only group, and they will only have pull access to the repositories.
|
One thing I noticed: Using the GitHub UI, I added a collaborator to a repository. I saw that they committed changes without any authority / approval from me. It was a private repository.
With private repositories, how do I give someone read access versus write access?
|
Do collaborators have commit access on GitHub?
|
Not quite - but close enough. (You'll get notified for every commit, not push.)
For GitHub Enterprise as of mid 2014:
Go into your repository's Settings
Open the "Webhooks and Services" tab
Click "Add Service" button
Select "Email" from the long list of services
Put in an e-mail address. This can be an e-mail address that forwards to multiple e-mail addresses, or just your own if only one person/account needs e-mail notifications.
Check "Send From Author" (probably) and "Active" (definitely).
For older versions of GitHub Enterprise:
Go into your repository's Settings
Open the "Service Hooks" tab
Select "Email" from the long list of services
Put in an e-mail address. This can be an e-mail address that forwards to multiple e-mail addresses, or just your own if only one person/account needs e-mail notifications.
Check "Send From Author" (probably) and "Active" (definitely).
Done!
Update: GitHub plans to shut down GitHub Services before the end of the year; refer to GitHub's deprecation announcement for details.
|
We are using GitHub Enterprise in our company. We have a "develop" branch where every programmer must push their work. Is there a way to get notified when someone pushes to the develop branch, along with a link to a diff view like the one you get for a pull request?
|
How to get notified when someone pushes into a GitHub branch?
|
Push approach: Within the GitHub API documentation, you can find documentation about setting up service hooks, which can be triggered for one or more events. The gollum event, in particular, is raised any time a wiki page is updated.
JSON-based pull approach: You can also leverage the Events HTTP API to retrieve a JSON formated output of what happens on GitHub, then apply some filtering in order to isolate the events of type GollumEvent.
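If you just want to try this quickly from the command line, you can filter the raw events for Gollum entries, for example with curl and jq:
# list the wiki pages touched by recent GollumEvents for a repository
curl -s https://api.github.com/repos/holman/spark/events |
  jq -r '.[] | select(.type == "GollumEvent") | .payload.pages[].html_url'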
Below is a quick jQuery-based sample:
<html>
<head>
<title>Gollum events</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script type="text/javascript">
$(function() {
$.getJSON('https://api.github.com/repos/holman/spark/events?callback=?', function(data) {
var list = $('#gollum-events');
$.each(data.data, function(key, val) {
if (val.type == "GollumEvent") {
$.each(val.payload.pages, function(key2, val2) {
list.append('<li id="' + key + '.' + key2 + '"><a href="' + val2.html_url + '">' + val2.page_name + '</a> [' + val.actor.login + ' @ ' + val.created_at + ']</li>');
});
}
});
});
});
</script>
</head>
<body>
<ul id="gollum-events"/>
</body>
</html>
Atom based pull approach: Last but not least, you can subscribe to the wiki changes atom feed. Go to the GitHub Wiki section of the repository, select the Pages sub tab, hover onto the orange icon, copy the link and paste into your favorite RSS reader.
Update:
It looks like the RSS feed icon is no longer displayed for a wiki.
However, you can still build the URL by yourself
Syntax: https://github.com/:user/:repository/wiki.atom
Example: https://github.com/holman/spark/wiki.atom
|
Are there service hooks for GitHub wiki repositories? Is there some other mechanism that GitHub provides for me to track wiki edits?
|
How can you track or be notified of changes to GitHub wikis?
|
2011:
1/ Yes, that seems the safest approach, as any modification you end up back-porting in nicstrong/projectA will be in a project with the same structure as original-author/projectA.
That means pull requests will be easier to organize, since you will be in a project mirroring the original author's project.
2/ If you have massive refactoring going on in nicstrong/projectA-android, I would make a backport branch, carefully merge or cherry-pick what you need from the numerous changes to the backport branch, and then push that branch to nicstrong/projectA.
(which means you have added nicstrong/projectA as a remote of nicstrong/projectA-android)
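In practice, that setup could look roughly like this (repository names are taken from the question; the commit hash is a placeholder):
# inside the local clone of projectA-android
git remote add upstream https://github.com/original-author/projectA.git
git remote add mirror https://github.com/nicstrong/projectA.git
git fetch upstream
# build a backport branch on top of the original project and pick only the relevant fixes
git checkout -b backport-fixes upstream/master
git cherry-pick <sha-of-bugfix>
# push it to the unrenamed fork and open the pull request from there
git push mirror backport-fixes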
2022: Note that you can create a fork directly with a different name.
|
I am trying to figure out the best workflow for working with a fork of an existing opensource project in Github. I want to take an existing project and make significant changes to it, in this case to port it to android and add specific android only functionality. I would like to satisfy the following:
Be able to pull changes from their public repo to the new Android port as the original code is updated.
Be able to submit changes (via pull requests) to the original project when I fix bugs that aren't just applicable to the Android port.
Have a separate, renamed version of the project to make it clear that it is an Android port. I looked at renaming a fork and GitHub gave me huge warnings about doing this.
My initial thoughts are I would fork the original project then fork and rename my fork to give me the following repos:
original-author/projectA
nicstrong/projectA
nicstrong/projectA-android
This would allow me to work on my local repo local/projectA-android and push changes to nicstrong/projectA-android. Then, to update from the original project, I could rebase nicstrong/projectA onto the latest from original-author/projectA, then fetch/merge from nicstrong/projectA into local/projectA-android.
My questions are:
I am quite new to the whole Git thing. Does this seem like a good approach, or is there a better workflow for handling this scenario?
How would I handle pushing from projectA-android back to nicstrong/projectA so I can set up a pull request for the original project?
|
Best workflow when forking and renaming a GitHub project [closed]
|
It's currently not possible to remove a GitHub Issue reference after it has been created. (Judging by the comments, this was still the case as of 2023.)
|
I have issue #1 and issue #2 on GitHub.
I comment on issue #1 with something like: "I think that issue #2 is associated with this issue".
Now, in the comments of issue #2, a message appears saying that issue #2 was referenced from issue #1.
I delete my comment on issue #1.
The message about the reference from issue #1 still exists in the comments of issue #2.
How can I remove that referencing message from the comments of issue #2?
|
How can I remove issue reference
|
I emailed the GitHub support team and was told that after this option disappears, there is no other way to undo the change. They may implement this feature at a future time.
|
Is there any way to undo discard changes in GitHub Desktop, after the Undo button has disappeared?
I am talking about GitHub Desktop's undo feature, not git in general.
|
How can I undo discard changes in GitHub Desktop?
|
That's correct, you need to add the Pods directory to your .gitignore
1) Remove your files from your github repository:
git rm -r Pods/
and don't forget to commit and push
2) Create a gitignore file:
Open terminal and go through your project folder where the .git folder is located
Type touch .gitignore
Type echo "Pods/" > .gitignore
3) (Added from Gabriel's comment) Last step, remove them from the index so they are also dropped from the remote on your next push:
git rm -r --cached Pods/
then commit and push.
More information: here
|
This is a two part question:
1) I have added, committed, and pushed all Pod files to GitHub. Is there a way I can remove them and not push them again to GitHub?
2) I know .gitignore can help, but I don't know how to use it. Can anyone please walk me through the process of using .gitignore?
So I think what I can do is: get the project from GitHub, add .gitignore, and then push again. Is that correct?
Please help; I'm new to GitHub and Xcode.
|
Added pod files and pushed. How to undo? how to use gitignore in Xcode & github?
|
To get it working, I ended up going to Tools -> Options -> SSH Client and changing it to OpenSSH. I had also generated and uploaded several different types of keys while trying to get it to work, but I think this is what finally did it.
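If you want to rule out the key itself, you can also test it against GitHub from a shell once OpenSSH is selected:
# a working setup replies with "Hi <username>! You've successfully authenticated..."
ssh -T git@github.com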
|
I was able to create a key and connect to github following these instructions via the command prompt successfully:
https://help.github.com/articles/generating-ssh-keys
However, when I try to connect via Sourcetree and putty I cannot. I've tried:
generating a new key with the putty key generator (SSH-2 RSA)
entering a passphrase
saving the private key
saving the public key and adding a .pub extension
copying/pasting the key from the putty key generator window into github
attempting to refresh branches on a pull from my private GitHub repository into my local repo, using the SSH clone URL I got from GitHub
I also tried opening the key generated from the GitHub command-line instructions; it wanted me to convert it to a PuTTY-type key, which I did and saved off, then tried with that one. Also no luck.
What am I doing wrong?
|
unable to get SSH keys working between sourcetree and github
|