There is a good article that explains how to access the Azure cache from a node.js web role (via the memcache shim):
But how can I set up the memcache shim for a node.js worker role?
1 Answer
I found that I can access the cache via the "Memcache Server Gateway" (without the shim).
http://stackoverflow.com/questions/14307472/azure-memcache-shim-for-node-js-worker-role
Algorithm to clone a tree is quite easy, we can do pre-order traversal for that. Is there an efficient algorithm to clone a graph?
I tried a similar approach, and concluded we need to maintain a hash-map of nodes already added in the new graph, else there will be duplication of nodes, since one node can have many parents.
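The hash-map approach described in the question can be sketched as follows (a minimal Python sketch; the Node class and adjacency-list representation are made up for illustration):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.neighbors = []  # adjacency list; may contain cycles / shared nodes

def clone_graph(start):
    # Map each original node to its copy so a node with many parents
    # is cloned exactly once.
    copies = {}

    def dfs(node):
        if node in copies:          # already cloned: reuse, don't duplicate
            return copies[node]
        copy = Node(node.value)
        copies[node] = copy         # register BEFORE recursing so cycles terminate
        copy.neighbors = [dfs(n) for n in node.neighbors]
        return copy

    return dfs(start)
```

Registering the copy in the map before recursing is what makes cycles and shared children terminate correctly.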
modified bfs/dfs where nodes are copied when they are marked as "visited" ? – Oren Jan 26 '13 at 12:00
Depends heavily on the data-structure. – Zeta Jan 26 '13 at 12:11
Graph traversal on wikipedia. Relatively big article that might give you something. – ᴋᴇʏsᴇʀ Jan 26 '13 at 12:17
@Zeta Assuming graph is represented as Adjacency Matrix – Amit Jan 26 '13 at 12:29
@Keyser DFS/BFS with flag in original graph will help me to not visit the node again. But in the new graph i do want a link between the current node and a node already added in the graph. Since the already added node has more than one parents. Think of it as some social network graph where in you will have mutual friends. – Amit Jan 26 '13 at 12:32
3 Answers
It suffices to do a depth first search and copy each node as it's visited. As you say, you need some way of mapping nodes in the original graph to corresponding copies so that copies of cycle edges can be connected correctly.
This map also suffices to remember nodes already visited in the DFS.
So in Java you'd have something like:
class Node {
    // Contents of node...
    Data data;

    // ... member declarations like array of adjacent nodes
    final ArrayList<Node> children = new ArrayList<>();

    // Recursive graph walker for copying, not called by others.
    private Node deepCloneImpl(Map<Node, Node> copies) {
        Node copy = copies.get(this);
        if (copy == null) {
            copy = new Node(data);
            // Map the new node _before_ copying children.
            copies.put(this, copy);
            for (Node child : children)
                copy.children.add(child.deepCloneImpl(copies));
        }
        return copy;
    }

    public Node deepClone() {
        return deepCloneImpl(new HashMap<Node, Node>());
    }
}
A different approach is to put an additional field in each node that points to the copy if there is one and is null otherwise. This merely implements the map by another method. But it requires two passes over the graph: one to make the copy and another to clear these map fields for future re-use.
Your hash map approach seems viable if you have some quick way of uniquely identifying every node.
Otherwise, you would be best off if you:
1. use a data structure for the graph that lets you store a unique "is duplicated" flag directly in every node (for example, 0 for not duplicated, 1 to numberOfNodes for duplicated nodes),
2. traverse the nodes (via BFS or DFS), taking care of the already-duplicated nodes and reconstructing all connections from the starting graph.
Both your starting graph and finishing graph should have corresponding unique "is duplicated" flag in every node, so you are able to reconstruct connections from starting graph properly. Of course, you could use "is duplicated" flag and "unique identifier" as separate variables.
As you copy nodes from the source graph, place each node in a <Node, Integer> map (thus numbering the nodes from 1 to N).
As you paste nodes in the destination graph, place each node in a <Integer, Node> map (again numbering from 1 to N, but with mapping in reverse direction for reasons that will be clear soon).
When you find a backlink in the source graph, you can map that backlinked "source copy" node to an integer i using the first map. Next, you use the second map to lookup the integer i and find the "destination copy" node which needs to have the same backlink created.
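The two-map numbering scheme above can be sketched like this (a minimal Python sketch; the Node class is invented for illustration):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.neighbors = []

def clone_via_numbering(start):
    # Pass 1: number every reachable source node (node -> 1..N) while
    # creating its copy (number -> copy).
    src_to_num = {}   # the <Node, Integer> map
    num_to_copy = {}  # the <Integer, Node> map
    stack = [start]
    while stack:
        node = stack.pop()
        if node in src_to_num:
            continue
        n = len(src_to_num) + 1
        src_to_num[node] = n
        num_to_copy[n] = Node(node.value)
        stack.extend(node.neighbors)

    # Pass 2: resolve every edge, including backlinks, through the two maps:
    # source node -> integer -> destination copy.
    for node, n in src_to_num.items():
        num_to_copy[n].neighbors = [num_to_copy[src_to_num[m]]
                                    for m in node.neighbors]
    return num_to_copy[src_to_num[start]]
```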
I just realized this is pretty much the same answer Gene posted above, but with the Node<->Node map exploded into two maps. Oh well. :-) – The111 Aug 10 '13 at 0:24
http://stackoverflow.com/questions/14536702/algorithm-to-clone-a-graph
I'm looking into chaining promises to populate my scope, and then having the scope automatically update the dom.
I'm running into problems with this though.. If I call "then" on an already resolved promise, it creates a new promise (that will call the success function asynchronously but almost immediately). I think the problem is that we've already left the digest cycle by the time the success function is called, so the dom never updates.
Here is the code:
<div ng-controller="MyCtrl">
  Hello, {{name}}! <br/>
  <button ng-click="go()">Clickme</button><br/>
</div>

function MyCtrl($scope, $q) {
    var data = $q.defer();
    setTimeout(function() { $scope.$apply(data.resolve("Some Data")); }, 2000);
    var p = data.promise;
    $scope.name = p.then(angular.uppercase);
    $scope.name2 = p.then(function(x) { return "Hi " + x; });
    $scope.go = function() {
        $scope.name3 = p.then(function(x) {
            // uncomment this to make it work:
            // $scope.$apply();
            return "Finally: " + x;
        });
    };
}
Is there some way to make this work without calling $apply every time I chain promises?
Why are you putting a promise in the go() function? – Plynx Feb 2 '13 at 1:38
There is a promise there because I'm running code on the result of the original "p" promise. I want the code to run correctly whether or not "p" has been resolved... hence we create a new promise from the old promise. – Karl Zilles Feb 4 '13 at 19:48
1 Answer
To quote @pkozlowski.opensource:
In AngularJS the results of promise resolution are propagated asynchronously, inside a $digest cycle. So, callbacks registered with then() will only be called upon entering a $digest cycle.
So, when the button is clicked, we are in a digest cycle. then() creates a new promise, but the results of that then() will not be propagated until the next digest cycle, which never comes (because there is no $timeout, or $http, or DOM event to trigger one). If you add another button with ng-click that does nothing, then click that, it will cause a digest cycle and you'll see the results:
<button ng-click="">Force digest by clicking me</button><br/>
Here's a fiddle that does that.
The fiddle also uses $timeout instead of setTimeout -- then $apply() isn't needed.
Hopefully it is clear when you need to use $apply. Sometimes you do need to call it manually.
Okay, I think I've got my head around it now. I ended up rewriting my question and code so many times that I got about 80% of the way there myself. It kind of seems too bad that we have to manually force digest for resolving promises, but it could still be interesting to code that way. – Karl Zilles Feb 4 '13 at 19:57
http://stackoverflow.com/questions/14657680/angular-js-chaining-promises-and-the-digest-cycle?answertab=votes
I'm trying to find out whether I should be using business critical logic in a trigger or constraint inside of my database.
So far I've added logic in triggers as it gives me the control over what happens next and means I can provide custom user messages instead of an error that will probably confuse the users.
Is there any noticeable performance gain in using constraints over triggers, and what are the best practices for determining which to use?
@hades, consider taking away all sql server references. I think the question and the answers are database-agnostic. – Sklivvz Sep 29 '08 at 13:17
12 Answers
Constraints hands down!
• With constraints you specify relational principles, i.e. facts about your data. You will never need to change your constraints, unless some fact changes (i.e. new requirements).
• With triggers you specify how to handle data (in inserts, updates etc.). This is a "non-relational" way of doing things.
To explain myself better with an analogy: the proper way to write a SQL query is to specify "what you want" instead of "how to get it" – let the RDBMS figure out the best way to do it for you. The same applies here: if you use triggers you have to keep in mind various things like the order of execution, cascading, etc... Let SQL do that for you with constraints if possible.
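For example, the fact "amounts are never negative" can be stated once, declaratively (a sketch using SQLite through Python's sqlite3; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id     INTEGER PRIMARY KEY,
        amount NUMERIC NOT NULL CHECK (amount >= 0)  -- the fact, stated once
    )
""")
conn.execute("INSERT INTO orders (amount) VALUES (10)")      # satisfies the fact
try:
    conn.execute("INSERT INTO orders (amount) VALUES (-5)")  # violates it
except sqlite3.IntegrityError as e:
    print("rejected:", e)                                    # CHECK constraint failed
```

The RDBMS enforces the rule for every insert and update, no matter which application issued it.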
That's not to say that triggers don't have uses. They do: sometimes you can't use a constraint to specify some fact about your data. It is extremely rare though. If it happens to you a lot, then there's probably some issue with the schema.
That's last statement is not entirely true. For example, what if inserting into one table requires a row to be inserted in another table? For that you need a trigger, and it does not imply there is anything wrong with the schema. – Mitch Wheat Sep 29 '08 at 10:31
Ok, but in that case your database is not normalized correctly!? – Sklivvz Sep 29 '08 at 10:48
@Skliwz, no that is often used to insert to audit tables for instance which is not a schema problem. – HLGEM Sep 27 '11 at 17:17
Best Practice: if you can do it with a constraint, use a constraint.
Triggers are not quite as bad as their reputation suggests (if used correctly), although I would always use a constraint wherever possible. In a modern RDBMS, the performance overhead of triggers is comparable to constraints (of course, that doesn't mean someone can't place horrendous code in a trigger!).
Occasionally it's necessary to use a trigger to enforce a 'complex' constraint such as the situation of wanting to enforce that one and only one of a Table's two Foreign key fields are populated (I've seen this situation in a few domain models).
The debate of whether the business logic should reside in the application rather than the DB, depends to some extent on the environment; if you have many applications accessing the DB, both constraints and triggers can serve as final guard that data is correct.
In addition to the other reasons to use constraints, the Oracle optimizer can use constraints to its advantage.
For example, if you have a constraint saying (Amount >= 0) and then you query with WHERE (Amount = -5) Oracle knows immediately that there are no matching rows.
Triggers can blossom into a performance problem. About the same time that happens they've also become a maintenance nightmare. You can't figure out what's happening and (bonus!) the application behaves erratically with "spurious" data problems. [Really, they're trigger issues.]
No end-user touches SQL directly. They use application programs. Application programs contain business logic in a much smarter and more maintainable way than triggers. Put the application logic in application programs. Put data in the database.
Unless you and your "users" don't share a common language, you can explain the constraint violations to them. The alternative -- not explaining -- turns a simple database into a problem because it conflates the data and the application code into an unmaintainable quagmire.
"How do I get absolute assurance that everyone's using the data model correctly?"
Two (and a half) techniques.
1. Make sure the model is right: it matches the real-world problem domain. No hacks or workaround or shortcuts that can only be sorted out through complex hand-waving explanations, stored procedures and triggers.
2. Help define the business model layer of the applications. The layer of application code that everyone shares and reuses.
a. Also, be sure that the model layer meets people's needs. If the model layer has the right methods and collections, there's less incentive to bypass it to get direct access to the underlying data. Generally, if the model is right, this isn't a profound concern.
Triggers are a train-wreck waiting to happen. Constraints aren't.
Surely by relying on applications to enforce the business critical logic is a problem waiting to happen. What about new applications, they will also have to include this business logic, and therefore relying on the app developer to not forget this. Controlling at DB end is surely the best way. – HAdes Sep 29 '08 at 12:57
I think that's why we have reusable code libraries and a separate model layer -- to assure that all applications are using the common business model. Anyway, that's what I do -- build a business model layer. – S.Lott Sep 29 '08 at 13:14
Logic should never be only in the application. It must be at the database level or data integrity is threatened when other applications or direct queries or imported data are put into the database. – HLGEM Sep 29 '08 at 13:15
@HLGEM: Make the business model layer actually do useful stuff and people won't bypass it, they'll just use it. – S.Lott Sep 29 '08 at 13:19
People will still often bypass the business logic layer. Even if the business layer is easy to use, someone will think they've got an easier way. My own philosophy is to include all business flow logic in the business layer, but data constraints at the DB. A DBMS is more than just a holding place – Tom H. Sep 29 '08 at 14:07
Generally speaking I would prefer constraints and my code would catch sql server errors and present something more friendly to the user.
Constraints and triggers are for 2 different things. Constraints are used to constrain the domain (valid inputs) of your data. For instance, a SSN would be stored as char(9), but with a constraint of [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] (all numeric).
Triggers are a way of enforcing business logic in your database. Taking SSN again, perhaps an audit trail needs to be maintained whenever an SSN is changed - that would be done with a trigger.
In general, data integrity issues in a modern RDBMS can be handled with some variation of a constraint. However, you'll sometimes get into a situation where improper normalization (or changed requirements, resulting in now improper normalization) prevents a constraint. In that case, a trigger may be able to enforce your constraint - but it is opaque to the RDBMS, meaning it can't be used for optimization. It's also "hidden" logic, and can be a maintenance issue. Deciding whether to refactor the schema or use a trigger is a judgment call at that point.
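The SSN audit-trail idea above can be sketched like this (SQLite trigger syntax via Python's sqlite3; the table names are invented, and SQL Server's T-SQL trigger syntax differs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, ssn TEXT);
    CREATE TABLE ssn_audit (
        person_id  INTEGER,
        old_ssn    TEXT,
        new_ssn    TEXT,
        changed_at TEXT DEFAULT (datetime('now'))
    );
    -- business rule: every SSN change leaves an audit row
    CREATE TRIGGER audit_ssn AFTER UPDATE OF ssn ON person
    BEGIN
        INSERT INTO ssn_audit (person_id, old_ssn, new_ssn)
        VALUES (OLD.id, OLD.ssn, NEW.ssn);
    END;
""")
conn.execute("INSERT INTO person (ssn) VALUES ('123456789')")
conn.execute("UPDATE person SET ssn = '987654321' WHERE id = 1")
row = conn.execute("SELECT old_ssn, new_ssn FROM ssn_audit").fetchone()
print(row)  # ('123456789', '987654321')
```

The caller never has to remember the audit step: the database does it on every update, from any application.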
@Mark Brackett: "Constraints are used to constrain the domain... Triggers are a way of enforcing business logic": It's not that simple in SQL Server because its constraints' functionality is limited e.g. not yet full SQL-92. Take the classic example of a sequenced 'primary key' in a temporal database table: ideally I'd use a CHECK constraint with a subquery to prevent overlapping periods for the same entity but SQL Server can't do that so I have to use a trigger. Also missing from SQL Server is the SQL-92 ability to defer the checking of constraints but instead they are (in effect) checked after every SQL statement, so again a trigger may be necessary to work around SQL Server's limitations.
You can have a query as a constraint in SQL Server; you just have to be able to fit it in a scalar function: http://www.eggheadcafe.com/software/aspnet/30056435/check-contraints-and-tsql.aspx
There are problems associated with UDFs in CHECK constraints (see here). – onedaywhen Feb 22 '12 at 15:44
@Meff: there are potential problems with the approach of using a function because, simply put, SQL Server CHECK constraints were designed with a single row as the unit of work, and have flaws when working on a resultset. For some more details, see David Portas' blog post "Trouble with CHECK Constraints": http://blogs.conchango.com/davidportas/archive/2007/02/19/Trouble-with-CHECK-Constraints.aspx
Very useful, thank you. I've only used a scalar UDF as a constraint once in a production db, and from that link it'll be OK, as the table only had 6 rows! – Meff Oct 9 '08 at 10:58
If at all possible use constraints. They tend to be slightly faster. Triggers should be used for complex logic that a constraint can't handle. Trigger writing is tricky as well, and if you find you must write a trigger, make sure to use set-based statements, because triggers operate against the whole insert, update or delete (yes, there will be times when more than one record is affected, so plan on that!), not just one record at a time. Do not use a cursor in a trigger if it can be avoided.
As far as whether to put the logic in the application instead of a trigger or constraint: DO NOT DO THAT!!! Yes, the applications should have checks before they send the data, but data integrity and business logic must be at the database level, or your data will get messed up when multiple applications hook into it, when global inserts are done outside the application, etc. Data integrity is key to databases and must be enforced at the database level.
Same as Skliwz. Just to let you know, a canonical use of triggers is the audit table. If many procedures update/insert/delete a table you want to audit (who modified what and when), a trigger is the simplest way to do it. One way is to simply add a flag in your table (active/inactive with some unicity constraint) and insert something in the audit table.
Another way, if you don't want the table to hold the historical data, is to copy the former row into your audit table...
Many people have many ways of doing it. But one thing is for sure: you'll have to perform an insert for each update/insert/delete on this table.
To avoid writing the insert in dozen of different places, you can here use a trigger.
I agree with everyone here about constraints. Use them as much as possible.
There is a tendency to overuse triggers, especially with new developers. I have seen situations where a trigger fires another trigger which fires another trigger that repeats the first trigger, creating a cascading trigger that ties up your server. This is a non-optimal use of triggers ;o)
That being said, triggers have their place and should be used when appropriate. They are especially good for tracking changes in data (as Mark Brackett mentioned). You need to answer the question "Where does it make the most sense to put my business logic"? Most of the time I think it belongs in the code but you have to keep an open mind.
http://stackoverflow.com/questions/148129/performance-considerations-for-triggers-vs-constraints/285721
Assume that I have an Ecore model containing a package and some classes that make reference to each other. If I create a "Dynamic Instance", Eclipse produces an XMI file and I can instantiate some classes. Containment relations are directly serialized to an XML tree in the XMI (the children elements in the example). But if I instantiate references to elements that are already contained somewhere in the tree, the editor writes path expressions like in the following, for the currentChild attribute:
<parent currentChild="//@parent/@children.1">
As far as I know this is not XPath, because:
1. The "children" are elements, not attributes, and should not be referenced via "@"
2. XPath uses e.g. elem[1], not elem.1, to get the second elem of a list
What is it and where can I find a information on it? I already tried to browse the EMF pages/specs but could not find it.
1 Answer
It's an EMF Fragment Path. The Javadoc describes it like this:
String org.eclipse.emf.ecore.InternalEObject.eURIFragmentSegment(EStructuralFeature eFeature, EObject eObject) Returns the fragment segment that, when passed to eObjectForURIFragmentSegment, will resolve to the given object in this object's given feature.
The feature argument may be null in which case it will be deduced, if possible. The default result will be of the form:
The index is used only for many-valued features; it represents the position within the list.
Parameters: eFeature the feature relating the given object to this object, or null. eObject the object to be identified. Returns: the fragment segment that resolves to the given object in this object's given feature.
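To illustrate just the notation: a fragment path such as //@parent/@children.1 reads as a sequence of @feature segments, each with an optional .index for many-valued features. Here is a toy resolver (a Python sketch that mimics the syntax against nested dicts; this is not EMF's implementation):

```python
import re

def resolve_fragment_path(root, path):
    """Resolve an EMF-style fragment path like '//@parent/@children.1'
    against nested dicts whose values are objects or lists."""
    obj = root
    for segment in path.lstrip("/").split("/"):
        # each segment is "@feature" or "@feature.index"
        m = re.fullmatch(r"@(\w+)(?:\.(\d+))?", segment)
        feature, index = m.group(1), m.group(2)
        obj = obj[feature]
        if index is not None:       # many-valued feature: pick by position
            obj = obj[int(index)]
    return obj

doc = {"parent": {"children": ["first", "second"], "currentChild": None}}
print(resolve_fragment_path(doc, "//@parent/@children.1"))  # second
```

So //@parent/@children.1 means "the element at position 1 of the many-valued children feature of parent", which matches the Javadoc's "feature name plus index" description.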
Thanks! That helps a lot. – Juve Oct 2 '09 at 8:49
Here is a link to a related Javadoc: help.eclipse.org/galileo/index.jsp?topic=/org.eclipse.emf.doc/… search for: eURIFragmentSegment – Juve Oct 2 '09 at 8:52
http://stackoverflow.com/questions/1491602/what-query-path-language-is-used-for-references-in-ecore-derived-xmi-instances
I am trying to use NAudio to create a multiple sound output application. We have 8 USB sound cards installed. NAudio lets me use all 8 but I can't figure out a pattern for determining which device index is which card.
The cards will be hooked up to different hardware so it is important to make sure you know which card you are using.
I have been trying to use WMI to poll for the information but I can't seem to locate any information that determines the order of the sound devices.
Update: I forgot to include some information about this problem. The sound cards are all USB sound cards hooked up through a 12 port hub.
4 Answers
The order of devices is non deterministic for all versions of Windows. For Vista and above, the devices are typically ordered by the DSound GUID (more-or-less) so they're effectively random.
Pretty much what I was thinking but I think I have a hack to figure it out. – Robin Robinson Oct 2 '09 at 16:12
Have a look at this MSDN article. It uses DirectSound to enumerate the audio devices:
Haven't tried this yet because I would have to bring the DirectX assemblies into to the code. I will try this is all else fails. – Robin Robinson Oct 2 '09 at 16:12
I'm assuming you are using WaveOut? You can call WaveOut.GetCapabilities(deviceNumber) to get hold of the name of the device, which might help you out.
This would work if they weren't all identical USB sound cards. Sorry I didn't mention that before. Thanks though. – Robin Robinson Oct 2 '09 at 16:11
This is what I have come up with so far and it works for us.
Using WMI you can get the DeviceID from Win32_SoundDevice. Then using that you can access the registry at HKLM\SYSTEM\CurrentControlSet\ENUM\'DeviceID' and get the string value named "Driver". This value contains the ClassGUID plus a number at the end.
Example: {4d36e96c-e325-11ce-bfc1-08002be10318}\0015
If you strip off that last number (15) for all of your sound devices and order them, that is the order in which the devices are listed by NAudio, which uses winmm.dll. There is also a location for these sound devices, either in the registry at the same key or from Win32_PNPEntity using the DeviceID.
In our case the location lets us determine which port of the USB hub that sound device is plugged into.
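The suffix-stripping and ordering step might look like this (a Python sketch over made-up registry values; on a real system the "Driver" values would be read with winreg or WMI on Windows):

```python
# Hypothetical "Driver" values read from HKLM\SYSTEM\CurrentControlSet\Enum\<DeviceID>
drivers = {
    r"USB\VID_0D8C&PID_000C\7&1": "{4d36e96c-e325-11ce-bfc1-08002be10318}\\0015",
    r"USB\VID_0D8C&PID_000C\7&2": "{4d36e96c-e325-11ce-bfc1-08002be10318}\\0003",
    r"USB\VID_0D8C&PID_000C\7&3": "{4d36e96c-e325-11ce-bfc1-08002be10318}\\0008",
}

def driver_index(value):
    # The part after the ClassGUID ("0015" -> 15) gives the relative order.
    return int(value.rsplit("\\", 1)[1])

# Devices sorted this way should match the winmm.dll / NAudio device order.
ordered = sorted(drivers, key=lambda dev: driver_index(drivers[dev]))
print(ordered[0])  # the device whose suffix is 0003 comes first
```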
http://stackoverflow.com/questions/1505522/what-determines-the-order-for-sound-devices-in-windows-when-using-winmm-dll?answertab=oldest
So I have this piece of code:
render :json => { :objects => @object.object_children }
This works. But What I only want are certain attributes only. I saw this: filter json render in rails 3 and in it is this:
respond_to do |format|
format.json { render json: @objects.object_children, :only => [:id, :name] }
It works, but it returns data without a label, just like this:
I want the ":objects =>" label in it. Thanks
2 Answers
You have to combine your original solution with the one you found:
render :json => { :objects => @object.object_children.as_json(:only => [:id, :name]) }
EDIT: Explanation
In your original solution you're adding the key :objects => manually to the response.
render :json => @object.object_children
# vs
So to add the key and filter the returned attributes you have to do the same but then call as_json (that's what Rails would do for simply returning the whole collection) manually with the :only option to apply the filter.
If you use the respond_to block depends on your needs.
For advanced json serialization, check out Active Model Serializers
http://stackoverflow.com/questions/15289047/filtering-the-json-render-in-rails-3/15289574
I have attempted implementing search in Telescope using pure javascript, since it looks like FTS is a while off for Meteor to implement and I couldn't get 2.4 playing nicely with Meteor yet.
I'm using the existing pagination model that is already implemented in Telescope to display the Top/New/Best posts, plus a Session variable for the search keyword that is set in the Router when you navigate to e.g. /search/foobar.
However, it doesn't quite seem to be working; when I have, say, 100 posts, the regular paginated subscription only comes back with 25 of these and my search results only show the posts in the first 25.
I've been banging my head against a wall for days trying to debug this one: sometimes it works, sometimes it doesn't!
Here's the code (I've included all additional search code for reference):
var resultsPostsSubscription = function() {
  var handle = paginatedSearchSubscription( 10, 'searchResults' );
  handle.fetch = function() {
    return limitDocuments( searchPosts( Session.get( 'keyword' ) ), handle.loaded() );
  };
  return handle;
};
var resultsPostsHandle = resultsPostsSubscription();
I duplicated the existing paginatedSubscription because I can't pass a Session var in as an arg; it needs to be dynamic. I'll probably refactor later.
paginatedSearchSubscription = function (perPage/*, name, arguments */) {
  var handle = new PaginatedSubscriptionHandle(perPage);
  var args = Array.prototype.slice.call(arguments, 1);
  Meteor.autosubscribe(function() {
    var subHandle = Meteor.subscribe.apply(this, args.concat([
      Session.get( 'keyword' ), handle.limit(), function() { handle.done(); }
    ]));
    handle.stop = subHandle.stop;
  });
  return handle;
};
search.js: (new file, in /common directory)
// get all posts where headline, categories, tags or body are LIKE %keyword%
searchPosts = function( keyword ) {
  var query = new RegExp( keyword, 'i' );
  var results = Posts.find( { $or: [ { 'headline': query }, { 'categories': query }, { 'tags': query }, { 'body': query } ] } );
  return results;
};

Meteor.publish( 'searchResults', searchPosts );
<template name="posts_results">
  {{> posts_list resultsPostsHandle}}
</template>

Template.posts_results.resultsPostsHandle = function() {
  return resultsPostsHandle;
};
router.js: there's a search bar in the nav that redirects to here
posts_results = function( keyword ) {
  Session.set( 'keyword' , keyword );
  return 'posts_results';
};
Any help would be greatly appreciated!
we've been struggling a lot with these complex subscriptions ourselves. Perhaps it would be more fruitful to contact Sacha + I directly and we can collectively try to figure it out. I think you can figure out our email addresses :) – Tom Coleman Apr 8 '13 at 13:38
Would you mind cloning Telescope on GitHub and pushing all your exact changes to it? I've made a very simple search test ( gist.github.com/yeputons/8807589 ) with two subscriptions and it works as expected. – yeputons Feb 4 at 16:51
http://stackoverflow.com/questions/15878611/how-to-implement-full-text-search-in-meteor-telescope
I would like to hide any text matching a pattern from any HTML page, before it is displayed.
I tried something like that with Greasemonkey :
var html = document.body.innerHTML;
html = html.replace( /some pattern/g, '???' );
document.body.innerHTML = html;
The text I want to hide is correctly replaced with '???', but for a brief moment while the page is loading, I can see the original text. As crescentfresh said, it cannot be fixed with Greasemonkey.
I know I could use a proxy like Proximodo to solve it, but I prefer to avoid having to install it.
What is the simplest way to do this, knowing that it must work on Firefox?
For those interested, I want to use it to hide prices from any page on my girlfriend computer, to let her choose a gift.
greasemonkey scripts run after the DOM has been loaded and is ready to be interacted with; that is its nature. The flicker is unavoidable considering you are doing document.body.innerHTML=... (causing a hugely expensive redraw operation). – Crescent Fresh Oct 26 '09 at 15:07
@crescentfresh - you should make that an actual answer. I'd upvote it. – Matt Oct 26 '09 at 15:08
Seems like I'll have to install Poximodo... – Jazz Oct 26 '09 at 15:10
@Matt: thanks. Perhaps Jazz can simply answer and accept the Poximodo sol'n (I have no idea what that is). – Crescent Fresh Oct 26 '09 at 15:15
+1 for the last line.. Oh these programmers!!! – Amarghosh Oct 26 '09 at 16:09
1 Answer
With an extension you can probably do it.
I don't remember exactly, but it may be possible that LiveHttpHeaders captures the http traffic before getting to the browser, enabling you to remove what you want.
Also, if instead of waiting for the whole page to load you replace it in the DOMNodeInserted event, it may be fast enough for the actual content not to be displayed.
Also also, if you have never done a Firefox extension before, don't panic! there is even a greasemonkey extension compiler that does the dirty work, and gives you a good foundation to start. I would do that and then look for a window.onload event, and there, instead of the greasemonkey code, attach a DOMNodeInserted event into the document.
Also also also (fourth edit!), what she really wants is that you read her mind and pick the gift she wants XD
I will try to make my own extension to this. If only I could make an extension to read her mind! – Jazz Oct 26 '09 at 17:36
http://stackoverflow.com/questions/1625361/replace-some-text-from-any-html-page-before-it-is-displayed
0.962347 | <urn:uuid:c8b41d0d-bf27-45ef-b27c-391517ccc4d4> | en | 0.848491 | Take the 2-minute tour ×
I'm programming in Python and I'm obtaining information from a web page through the urllib2 library. The problem is that the page can provide me with non-ASCII characters, like 'ñ', 'á', etc. The moment urllib2 gets such a character, it raises an exception, like this:
File "c:\Python25\lib\httplib.py", line 711, in send
File "<string>", line 1, in sendall:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in position 74: ordinal not in range(128)
I need to handle those characters. I mean, I don't want to handle the exception but to continue the program. Is there any way to, for example (I don't know if this is something stupid), use another codec rather than the ASCII? Because I have to work with those characters, insert them in a database, etc.
It would be useful if you could say, also, whether you're using Python 3+, or something earlier. – Sixten Otto Oct 29 '09 at 17:04
Couldn't be Py3k since the urllib2 module has been removed (wrapped into urllib)... – Tim Pietzcker Oct 29 '09 at 17:09
Duplicate: stackoverflow.com/questions/1020892/… – S.Lott Oct 29 '09 at 18:19
3 Answers
Accepted answer (3 votes):
You just read a set of bytes from the socket. If you want a string you have to decode it:
yourstring = receivedbytes.decode("utf-8")
(substituting whatever encoding you're using for utf-8)
Then you have to do the reverse to send it back out:
outbytes = yourstring.encode("utf-8")
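A minimal round trip of those two calls, written here in Python 3 syntax (where the bytes/str split is explicit; the thread above predates Python 3, so treat the exact spellings as illustrative):

```python
# Bytes as they might arrive from a socket or a urllib response;
# they contain the UTF-8 encoding of 'ñ' (0xC3 0xB1):
received = b"pi\xc3\xb1a colada"

# Decode the raw bytes into a text (unicode) string...
text = received.decode("utf-8")
assert text == "pi\u00f1a colada"

# ...work with it as text, then encode it again before sending it out.
outbytes = text.encode("utf-8")
assert outbytes == received
```

The same pattern applies to any other encoding: decode once at the boundary where bytes come in, work in unicode internally, and encode once where bytes go out.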
You want to use unicode for all your work if you can.
You probably will find this question/answer useful:
You might want to look into using an actual parsing library to find this information. lxml, for instance, already addresses Unicode encode/decode using the declared character set.
Unfortunately a lot of website produce improperly encoded documents, generally the encoding will be mostly correct, but there will be sporadic invalid byte sequences. Some applications won't have to worry about this, but if you are crawling random public web sites, it will be a problem. – mikerobi Apr 25 '12 at 21:09
http://stackoverflow.com/questions/1644640/how-to-handle-unicode-non-ascii-characters-in-python
I have this appender in my logback.xml configuration file:
<appender name="FILE"
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>%d{dd MMM yyyy;HH:mm:ss} %-5level %logger{36} - %msg%n
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
so that I specify the path of the log file in a relative way through the classpath, but it doesn't work: no file addressbookLog.log is created or written. It only works with absolute paths like /home/andrea/.../resources/addressbookLog.log. Have you any ideas on how to make it work with the classpath?
2 Answers
Chapter 3 of the Logback manual, "Logback configuration: Variable substitution", describes the various ways to refer to variables defined outside the configuration, e.g. system properties and the classpath.

The key step is creating a separate file that contains all the variables. We can reference a resource on the classpath instead of a file as well, e.g.
The logback.xml
<property resource="resource1.properties" />
<!-- Here we can refer to the variable
defined at the resource1.properties -->
<root level="debug">
<appender-ref ref="FILE" />
The external properties file (resource1.properties) holds the variable definitions. Please note that resource1.properties is a resource available on the classpath.
You may refer to the full version at Chapter 3: Logback configuration: Variable substitution. I hope this may help.
That's not really what he is asking. He wants to know how to use the classpath to specify where the destination log file is. For example, the log file in his example would be "WEB-INF/classes/addressbookLog.log" under Tomcat. But it could be in a different location in a different environment. I'm looking for the answer to the same question... – jsumners Jul 3 '13 at 17:45
You could also define the path to resource1.properties as an absolute path.
http://stackoverflow.com/questions/16480052/is-there-a-way-in-logback-xml-to-specify-file-log-destination-through-classpath/16514059
Short version: How do I incorporate Hibernate into an application without using an IDE or build tool (without Eclipse or Maven for example)?
Long version:
Let's imagine for a moment I live in a country where Eclipse and Maven (and their ilk) are illegal. The government has decreed we all write our code with vi and compile it with javac. How can we incorporate Hibernate 4.2 into our applications? Which jar files need to be in our classpath? What configuration files must be created? (Names of config files would suffice, I don't need a description of the contents.)
I was a little surprised to find that such a set of instructions didn't exist already. Everything is written for Eclipse and/or Maven users.
Oh, we have been given a copy of hibernate-release-4.2.2.Final.gz.
Please try out and share feedback. – Siddharth May 23 '13 at 18:53
@Siddharth that's my plan. I just need a day or two to try it out. Lots going on at work today... – John Fitzpatrick May 24 '13 at 10:08
2 Answers
Accepted answer (1 vote):
Not as crazy as you think. Yes, it's difficult to get an answer from SO, since all the Hibernate folks here use Spring or Maven or some very fancy tool to ease Hibernate configuration.

Here is what I did.

Copied all the libraries to the classpath. Created a hibernate.properties and a hibernate.xml file in my src folder.

The properties file holds the basic Hibernate settings. In your Java main you can programmatically specify the MySQL server, username and password (mind you, it took me 2 days to get this damn thing working, with little help from SO).
synchronized (this) {
    if (sessionFactory == null) {
        try {
            String connection = "jdbc:mysql://"
                    + Globals.DBSERVER.trim()
                    + "/mCruiseOnServerDB?autoReconnect=true&failOverReadOnly=false&maxReconnects=10";
            log.debug("Connection URL " + connection);
            Configuration configuration = new Configuration();
            configuration.configure()
                    .setProperty("hibernate.connection.url", connection)
                    .setProperty("hibernate.connection.username", Globals.DB_USER_NAME.trim())
                    .setProperty("hibernate.connection.password", Globals.DB_PASSWORD.trim());
            sessionFactory = configuration
                    .buildSessionFactory(new ServiceRegistryBuilder()
                            .applySettings(configuration.getProperties())
                            .buildServiceRegistry());
        } catch (Exception e) {
            log.fatal("Unable to create SessionFactory for Hibernate");
        }
    }
    if (sessionFactory == null) {
        log.fatal("Hibernate not configured.");
    }
}
The XML file has
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- other mappings -->
        <mapping resource="com/mcruiseon/server/hibernate/UserDetails.hbm.xml" />
    </session-factory>
</hibernate-configuration>
Make sure you have those hbm.xml files in a folder (inside src) com/mcruiseon/server/hibernate (and /carpool in some cases).

The same folder should also have POJOs corresponding to the hbm files. I suggest that you keep your DB column names EXACTLY the same as your variable names; it makes life very simple (contrary to what some silly people may advise). Don't use names like t_age; instead use age (no abbreviations).
Example of hbm file
<?xml version="1.0"?>
<!-- Generated 9 Jun, 2010 11:14:41 PM by Hibernate Tools 3.3.0.GA -->
<class name="com.mcruiseon.common.concrete.UserDetailsConcrete"
<id name="identityHash" type="java.lang.String">
<column name="identityHash" />
<generator class="assigned" />
<property name="fullName" type="java.lang.String">
<column name="fullName" />
<!-- other property -->
Create a UserDetailsConcrete in the com/mcruiseon/common/concrete folder.

Ensure that all variables are private (identityHash, fullName, etc.). Ensure that the getters and setters are all public. In fact, auto-generate them (if you have Eclipse, sorry). DON'T have spelling or capitalization mistakes. Copy-paste to make sure.
You should have it working.
One of the best written answers I've ever received. – John Fitzpatrick May 24 '13 at 16:54
Short answer: if it is a simple Java application and you are using version 3.3.2.GA, all you would need is hibernate-core on the classpath. Using java -cp with the appropriate arguments when you execute your main method will get you up and running.
All other configuration files are boilerplate tutorials from hibernate.
Also, Maven and Ant are no magic bullets. They won't magically know what files are required. All of the details are required to be wired into the POM or the build file respectively.
http://stackoverflow.com/questions/16721211/installing-hibernate-by-hand-without-eclipse-or-maven-or-any-ide-or-build-tool
I know how to write this using Nunit,
Assert.That(exception, Is.InstanceOfType(typeof(TypeNotRegisteredException)));
How can I write the same thing in xUnit, as xUnit does not have Assert.That?
2 Answers
Accepted answer (2 votes):
You might be looking for:
Let me know if this is close to what you're looking for.
Thanks Matt. It is what I was looking for. – DoodleKana Jun 4 '13 at 17:37
I am thinking you are asking what is the equivalent of the InstanceOfType assert rather than the equivalent of Assert.That. The latter is just a better syntax that enables you to read your asserts like English.
The equivalent of the InstanceOfType assert in xUnit is IsType:

Assert.IsType<TypeNotRegisteredException>(exception);

Note that the NUnit equivalent is, indeed:

Assert.IsInstanceOf<TypeNotRegisteredException>(exception);
(the older IsInstanceOfType assert is deprecated - http://www.nunit.org/index.php?p=typeAsserts&r=2.5.1 )
http://stackoverflow.com/questions/16909998/what-would-be-an-equivalent-of-nunits-assert-that-in-xunit
I have created a shopping cart price rule programmatically. The rule seems to have been created and works fine. But sometimes on opening the frontend section or page which contains this code for creating the rule, I get the error --- Invalid method Mage_SalesRule_Model_Rule::applyAll(Array ( ) )
I have used the following code for creation of the shopping cart price rule. Can someone please help me remove this error from my magento site.
Here is the code ---
$date = new DateTime();
$couponcode = substr(md5($date->getTimestamp()), -10);

$shoppingCartPriceRule = Mage::getModel('salesrule/rule');

$name = "Discount Rule - 20% Off"; // name of shopping cart price rule
$websiteId = 1;
$customerGroupId = 1;
$actionType = 'by_percent'; // discount by percentage (other options are: by_fixed, cart_fixed, buy_x_get_y)
$discount = 20; // percentage discount

$cardCondition = Mage::getModel('salesrule/rule_condition_product');
// ... condition setup and assignment of the values above to the rule ...

try {
    $shoppingCartPriceRule->save();
} catch (Exception $e) {
}
http://stackoverflow.com/questions/17168498/shopping-cart-price-rule-function-error
I'd like to open all the similarly tagged bookmarks in Firefox at once.
Example use case: I have a few sites with regex tools bookmarked and when I need to write a regex, I usually open each one before I start working. Each of these sites is a Firefox bookmark and is tagged "regex". I would like to be able to type "regex" somewhere (search bar? address bar?) and have the option to click "open all tabs tagged "regex".
Is there a way to do this? Is there maybe an add-on that does that? Otherwise, if you have any tips, I could write the extension.
If you have the same problem, please let me know what features you would like in the comments to this question and I'll try to implement them if I do go that route.
1 Answer
Accepted answer (0 votes):
Trying to answer the part where you want some pointers to create an extension for this, the best place to start should be here.
If you don't want to go that route, you can always search for the tag you want, select all and "Open all in tabs"
Thanks for the awesome resource. I'm using the "Open all in tabs" trick for now, but I think an extension would also raise awareness that it's possible to do this for the greater public, so I might still make the extension at some point when I have time. – John D Jul 22 '13 at 20:20
Just to clarify for future viewers : The quickest way I've found so far to achieve this is: Ctrl-B (open bookmarks sidebar), type in tag, right-click in a blank area of the sidebar, select "Open all in tabs" – John D Jul 23 '13 at 11:46
http://stackoverflow.com/questions/17746500/opening-all-bookmarks-with-a-given-tag-at-once-in-firefox?answertab=oldest
Is there any enterprise distribution program for blackberry applications?
Is there any method similar to Apple's enterprise distribution of iphone applications?
While going through their documentation, they are talking about using a deployment server and distributing apps with it.
Note: for early OS versions of blackberry - OS 7 or previous versions are my target
2 Answers
Accepted answer (2 votes):
As Peter said in his answer, you can use BES to distribute applications in an Enterprise environment.
In iOS, the Enterprise program is basically the only Apple-approved way to deploy software, other than via the iTunes App Store (ignoring how you deploy to your test team).
BlackBerry Java (e.g. OS 5,6,7) devices don't have the same restriction on apps that Apple has implemented. Normal jailed iPhones cannot install software from any arbitrary web server, but BlackBerry devices can.
So, another option is just to post your app (.jad and .cod files) to a (corporate) webserver, and let users download the apps themselves. This is called Over-The-Air (OTA) deployment.
I'm not endorsing this over BES deployment, just adding to your options.
Thanks for the tip on deleting the comments - now done. – Peter Strange Aug 7 '13 at 0:32
Sorry, don't know anything about iPhone Enterprise distribution.
For BlackBerry, there are two 'variations' depending on whether you are talking about BB10+ or BB7- phones. However, in principle they are similar: the BlackBerry administrator makes an application available to the corporate BlackBerry devices associated with the corporate BES, and these can be pushed to the phone, or can be made available to the phone (for BB10).
A possible restriction here is that the application will only made available to BlackBerry devices associated with that specific BES.
There is more available from the link you have already found.
I think to give a more specific answer we need to understand what you are trying to achieve, and if this is targeted to BB10+ or BB7- devices.
Since you have indicated that you are targeting BB7 and earlier, then I would recommend one of these approaches:
a) If you wish to force users to have your software, then the best approach is create a software profile on the BES
b) If the software is optional, then place it on a corporately accessible web server and OTA download as described by Nate. This is significantly easier to maintain than the BES distribution.
this is targeted to bb7 devices – Valamburi Nadhan Aug 2 '13 at 3:57
For BlackBerry 10, Android and iOS you can use BlackBerry Enterprise Service 10 – Bojan Kogoj Aug 9 '13 at 10:15
http://stackoverflow.com/questions/17995978/enterprise-distribution-of-blackberry-application
I was after creating a destination folder to allow Vagrant to go ahead with the synced_folder configuration, so I created my bash script and tested it outside the environment. This is working fine, so I decided to move it to an inline script just to make my Vagrantfile easy to read.
However, I always end up getting this error
Vagrant failed to initialize at a very early stage:
There is a syntax error in the following Vagrantfile. The syntax error
message is reproduced below for convenience:
/Users/andreamoro/Documents/Vagrant/MyVagrantExperiment/Vagrantfile:64: can't find string "SCRIPT" anywhere before EOF
/Users/andreamoro/Documents/Vagrant/MyVagrantExperiment/Vagrantfile:30: syntax error, unexpected end-of-input, expecting tSTRING_CONTENT or tSTRING_DBEG or tSTRING_DVAR or tSTRING_END
Using the example supplied for the shell provisioner at http://docs.vagrantup.com/v2/provisioning/shell.html produces the same output if the $script variable is created inside the Vagrant.configure("2") do |config| block, whereas if the script is created outside it, Vagrant doesn't execute it at all.
What's the matter? Thanks Andrea
1 Answer
Accepted answer (0 votes):
The Vagrant author helped me out on this, confirming that a hidden "create" option for the synced_folder method is available but was not documented.

The full thread is here.
http://stackoverflow.com/questions/20814330/inline-script-doesnt-work-even-with-the-supplied-script-on-the-doc
I am trying to store callbacks in a dictionary.
• I can't use blocks as the iPhone doesn't support this (unless you use plblocks).
• I tried using functions, but apparently NSMutableDictionary doesn't allow these pointers (wants an id)
• I tried using class methods, but I couldn't find a way to obtain a pointer to these
• I could try using functions with the c++ stl hashmap (if it is supported in Objective C++), but apparently this can slow compilation times.
• I could try storing both a class and a selector, but that seems rather messy.
What would be the best way to do this?
4 Answers
Accepted answer (4 votes):
If you know what you are doing with pointers, you can wrap them up with NSValue. You can then put NSValue in a dictionary.
To insert:
[myDict setObject:[NSValue valueWithPointer:functionName] forKey:myKey];
For retrieval:
NSValue* funcVal=(NSValue*) [myDict objectForKey:myKey];
returnType* (*func)()=[funcVal pointerValue];
You can put it in an NSInvocation.
An NSInvocation is an Objective-C message rendered static, that is, it is an action turned into an object. NSInvocation objects are used to store and forward messages between objects and between applications, primarily by NSTimer objects and the distributed objects system.
If it's your own project, why not use blocks since you can?
It is not worth adding an extra dependency just for solving this problem. – Casebash Jan 20 '10 at 22:13
It is possible to make a single file compile using C++ if you change the extension to .mm. You don't have to change the other files in your project as long as they don't have to handle any C++ types.
http://stackoverflow.com/questions/2098357/storing-callbacks-in-a-dictionary-objective-c-for-the-iphone?answertab=oldest
OK, I'm an Eclipse noob, just working through the first couple of Plug-In creation tutorials and I have this really annoying problem: I can't figure out how to stop Eclipse from always building all projects when I only want to run one of them.
Let me elaborate: I have project A (Java), B (Java) and C (Python). They're completely unrelated, i.e. there are no dependencies whatsoever between them. Project A even lives in a seperate working set. Now I try to run project A, but I get error messages about problems in projects B and C - why is that? How can I only build the current project?
There's also another problem that's probably related: When I start my plug-in as an Eclipse application, all the other plug-ins I wrote before are also included in the Eclipse instance that is started. Is this a seperate phenomenon or does it follow from my first problem?
2 Answers
Accepted answer (2 votes):
Right click the projects you do not want running and select 'Close Project'
Hey Chris, thanks a bunch! That fixed the problem. Just one additional piece of info: You could alternatively right click on the project you do want running and select "Close unrelated projects". – Sleepless Jan 30 '10 at 10:40
I work with Eclipse. If you want to start only one project, you have to click on the arrow near the green Run circle (the icon to the right of the bug icon). That opens a kind of list where you can choose; there, choose to run your class.

I work with Eclipse Classic and have 3 Java Swing projects. That's the way it works for me.
http://stackoverflow.com/questions/2167449/eclipse-always-runs-all-projects-how-can-i-only-run-one
Recently programming in PHP, I thought I had a working Perl regular expression but when I checked it against what I wanted, it didn't work.
What is the right expression to check if something is an MD5 hash (32 digit hexadecimal of a-z and 0-9)?
Currently, I have /^[a-z0-9]{32}$/i
Since when did hexadecimals go up to z? – Anon. Feb 18 '10 at 2:01
6 Answers
Accepted answer (30 votes):
MD5 or SHA-1:

/^[a-f0-9]{32}$/ (MD5)

/^[a-f0-9]{40}$/ (SHA-1)
Also, most hashes are always presented in a lowercase hexadecimal way, so you might wanna consider dropping the i modifier.
By the way, hexadecimal means base 16:
0 1 2 3 4 5 6 7 8 9 A B C D E F = base 16
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 = base 10
So as you can see it only goes from 0 to F, the same way decimal (or base 10) only goes from 0 to 9.
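As a quick sanity check of the lowercase-hex idea (the pattern itself is portable; Python is used here only to make the check runnable against a real digest):

```python
import hashlib
import re

# Lowercase 32-character hex pattern for an MD5 digest.
md5_re = re.compile(r"^[a-f0-9]{32}$")

digest = hashlib.md5(b"hello world").hexdigest()
assert len(digest) == 32
assert md5_re.match(digest)           # a real digest matches
assert not md5_re.match("g" * 32)     # 'g' is not a hex digit
assert not md5_re.match(digest[:31])  # wrong length is rejected
```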
You just saved me many painful google searches – Andrew Harry May 27 '10 at 0:20
/^[a-f0-9]{32}$/i

Should work a bit better, since MD5 hashes usually are expressed as hexadecimal numbers.
There is also the POSIX character class xdigit (see perlreref):

/^[[:xdigit:]]{32}$/
Well, an important point to consider is the fact that $ can match \n. Therefore:
E:\> perl -e "$x = qq{1\n}; print qq{OK\n} if $x =~ /^1$/"
The correct pattern, therefore, is:

/^[a-f0-9]{32}\z/
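The same pitfall exists outside Perl: in Python, $ likewise matches just before a trailing newline, and \Z plays the role of Perl's \z (a runnable sketch of the point above):

```python
import re

s = "1\n"

# '$' is permissive: it also matches just before a string-final newline...
assert re.search(r"^1$", s)

# ...while '\Z' anchors at the true end of the string, like Perl's '\z'.
assert not re.search(r"^1\Z", s)
assert re.search(r"^1\Z", "1")
```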
Even easier and faster than regex, as recommended by the PHP Ctype functions documentation:
function is_md5($s){ return (ctype_xdigit($s) and strlen($s)==32); }
@OP, you might want to use /[a-f0-9]{32,40}/ , this can check for length greater than 32, such as those generated from sha1.
http://stackoverflow.com/questions/2285793/perl-regular-expressions-to-match-a-md5-hash/2285871
I would like my app to connect to an https site, without user input required. So, I would like to send the app a certificate that the app will install in the keychain, and allow it to connect to the https site without the user getting involved.
How can I do this?
1 Answer
Applications are not able to install certificates to the keychain for security reasons.
Otherwise malicious programmers could install a certificate that authenticates phishing sites.
In theory, applications should be able to install certificates that apply for UIWebViews and NSURLConnections inside the app only, but I'm not sure there's a public API for that. – rpetrich Feb 23 '10 at 15:54
From my understanding of the public API, in order to do that you also have to implement your own certificate validation in a subclass of NSURLConnection since none of the TLS connection classes are public. – Benoit Mar 7 '10 at 21:35
http://stackoverflow.com/questions/2313349/can-an-iphone-app-transparently-connect-to-an-https-site-using-a-der-certificat
What API is used to work with plugged-in devices, something like this app: the Square iPhone scanner? CocoaTouch? Something from Foundation?
3 Answers
Accepted answer (3 votes):
Square works through the headphone jack, so it doesn't use the External Accessory framework. External accessories on the iPhone are notoriously difficult to do, which is why barely any manufacturers are creating products.
I can count on one finger the number of third party devices I've seen that communicate through the dock connector on the iPhone.
Oops. Square doesn't use ExternalAccessory, but some kind of custom interface through the headphone jack, as explained by @kubi.
The relevant interface looks like this:
@interface SKSquareInterface : NSObject {
    ? delegate;
    ? queue;
    ? state;
}
@property(assign) ? XXEncryptedProperty_6b644;
@property(assign) ? XXEncryptedProperty_6a1bc;
@property(assign) ? XXEncryptedProperty_77a22;
@property(assign) ? XXEncryptedProperty_77a39;
@property(assign) ? XXEncryptedProperty_77a60;
@property(assign) ? XXEncryptedProperty_77800;
@property(assign) ? XXEncryptedProperty_7780c;
-(?)_listenForSwipe:(?)swipe numSamps:(?)samps;
@end
which suggests it directly parses the input from an AudioQueue.
Please read http://www.eecs.umich.edu/~prabal/pubs/papers/kuo10hijack-islped.pdf to understand how to use the headphone jack to modulate/demodulate information. Square is not a 30-pin connector accessory.
http://stackoverflow.com/questions/2574994/api-for-plug-device-on-iphone
I created an instance of a custom class RestaurantList to hold my data (a list of restaurant data received from a web service as json data).
How can I save it in onSaveInstanceState?
3 Answers
Accepted answer (13 votes):
Custom objects can be saved inside a Bundle when they implement the interface Parcelable. Then they can be saved via:
@Override
public void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putParcelable("key", myObject);
}
Basically the following methods must be implemented in the class file:
public class MyParcelable implements Parcelable {
    private int mData;

    public int describeContents() {
        return 0;
    }

    /** save object in parcel */
    public void writeToParcel(Parcel out, int flags) {
        out.writeInt(mData);
    }

    public static final Parcelable.Creator<MyParcelable> CREATOR
            = new Parcelable.Creator<MyParcelable>() {
        public MyParcelable createFromParcel(Parcel in) {
            return new MyParcelable(in);
        }

        public MyParcelable[] newArray(int size) {
            return new MyParcelable[size];
        }
    };

    /** recreate object from parcel */
    private MyParcelable(Parcel in) {
        mData = in.readInt();
    }
}
I know "that this case is cold", but because i found this thread first, when I was searching for exactly the same thing (and found an answer by now):
Imagine Bundle as an XML file. If you create a new <BUNDLE name="InstanceName" type="ClassName"> you can freely add elements and attributes in a fresh and empty namespace.
When onSaveInstance(Bundle outState) of your MainActivity is called (you can also force this in onPause), you can create a new: Bundle b = new Bundle();
Then call your (probably not inherited and not overriden) custom Method onSaveInstance(Bundle b) in your own class with your newly created Bundle b. Then (in onSaveInstance(Bundle outState)) of your MainActivity, call outState.putBundle("StringClassAndInstanceName", b);
When you find this string in onCreate, you can either use a switch/case to recreate this object or (better) have a factory function in your custom class to work with Bundle and "StringClassAndInstanceName".
Check this answer.
Basically you have to save it inside a Bundle.
Ok thanks, maybe I wasn't clear enough. What I wanted to know is how to save a custom object. I found that I can make it parcelable. – jul Jul 4 '10 at 10:57
http://stackoverflow.com/questions/3172333/how-to-save-an-instance-of-a-custom-class-in-onsaveinstancestate
Can anyone decipher the error below and explain what causes this error?
1 Answer
Accepted answer (1 vote):
Your Home page implementation is recursively throwing a RedirectException.
Open your Home class and figure out why you are causing redirection to the same page.
If you were to paste your code (Home.java) someone could help you further.
somehow the redirection is causing recursion. there may be 2 redirections happening, one on your home page, one in your authentication page but the recursion is definitely happening. i suggest you breakpoint the code in an IDE and figure it out for yourself because what you've posted is pretty hard to read and i don't think shows the full picture. – pstanton Dec 20 '10 at 5:01
http://stackoverflow.com/questions/4448539/tapestry4-forward-error?answertab=oldest
How do I create a drag-and-drop canvas in HTML5 (something inside the canvas), and how do I capture the dropped location?
do you want to drag and drop the canvas element itself or something inside the canvas? – clamp Dec 17 '10 at 8:59
something inside the canvas – Sudantha Dec 17 '10 at 8:59
1 Answer
Accepted answer (3 votes):
This tutorial might be able to give you some pointers on the drag and drop functionality.
http://stackoverflow.com/questions/4468940/drag-and-drop-canvas-in-html5?answertab=active
I recently started working with jQuery and was wondering how I would iterate through a collection (an array or list of items) of items and sum their contents.
Does jQuery have something like a for-loop like many other languages do?
Would there be an easy way to implement this - if this isn't easily do-able?
3 Answers
Accepted answer (14 votes):
What you are looking for is the jQuery .each() function, which will allow you to iterate through any given field(s) and perform an action. Here are some examples to get you started (just simple arrays; check the documentation for additional functionality):

var array = [ "stack", "overflow" ];

$.each(array, function() {
    // Perform actions here; (this) will be your current item
});

Example (summing an array of numbers, then alerting the sum):

var sum = 0;

$.each(array, function() {
    sum += (this);
});

alert(sum);
$.each(function) and $(...).each(function) both accurately perform the equivalent of a for-each loop, but the actual JavaScript for-each equivalent is the for(... in ...) loop.
for (variable in object)
// code to be executed
This isn't jQuery-specific, but hey, jQuery is just a JavaScript library after all. :)
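For instance, in plain JavaScript (no jQuery), summing an array with `for (... in ...)` might look like this; note that the loop variable is the key/index, not the value:

```javascript
// for (... in ...) iterates over the keys/indices of an object
// or array, not over the values themselves.
var scores = [2, 4, 6];
var sum = 0;
for (var i in scores) { // i is "0", "1", "2"
  sum += scores[i];
}
console.log(sum); // 12
```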
And this spares memory which can be useful on mobile devices. Why doesn't Apple the simply make an IPHONE WITH 4 GB RAM! D:< – user142019 Jan 5 '11 at 21:29
For an array, see Rionmonster's answer. For all items matching a selector:
$("a").each(function() {
    // "this" is the current anchor element
});
| http://stackoverflow.com/questions/4609160/what-is-the-equivalent-of-a-for-each-loop-in-jquery | dclm-gs1-045330000 |
0.994067 | <urn:uuid:50c1d203-eb7f-4b16-90ac-1f15b57fffd7> | en | 0.8713 |
I have been trying all weekend to find a way to get my program to answer a question using a string with the Scanner class. For example, I need to get my program to answer a question like
Who is on the 5 dollar bill?
Input would be Lincoln and other inputs would be invalid; the question will have 3 choices, so the logic has to work.
Can you point me in the right direction on how to get this to work in Java? I want to understand the material but I have really tried all weekend.
The question is quite confusing. Could you clarify a bit? So you read input from somewhere with your Scanner and want to respond to a hardcoded number of questions? Anyhow you'll need to use something else to write your answer out as Scanner is only there to read input. – Voo Feb 13 '11 at 19:37
2 Answers
up vote 4 down vote accepted
If I understood your question properly, then this should point you in the right direction:
Import the Scanner:
import java.util.Scanner;
Then, here is the method you'd want to call:
public void myScanner() {
    Scanner scan = new Scanner(System.in); // Creates a new scanner
    System.out.println("Who is on the 5 dollar bill?"); // Asks question
    String input = scan.nextLine(); // Waits for input
    if (input.equalsIgnoreCase("Lincoln")) { // If the input is Lincoln (or any case variant)
        System.out.println("Correct answer!");
    } else { // If the input is anything else
        System.out.println("Wrong answer!");
    }
}
If you don't want to encode all the actual word solutions (like "Lincoln") you can also just ask the user to pick a number/letter solution since you only have 3.
Scanner input = new Scanner(System.in);
System.out.println("Who is on the 5 dollar bill? 1. Lincoln 2. Somebody 3. Someone");
int userChoice = input.nextInt(); // get a number from the user
if (userChoice == 1)
    System.out.println("Correct answer!");
else
    System.out.println("Wrong answer!");
This would make it easy to keep track of the answer key.
If you go that route I'd suggest using try-catch statements as well. – Johannes Feb 13 '11 at 20:17
| http://stackoverflow.com/questions/4986353/how-to-answer-a-question-with-a-string-using-the-scanner-class-using-java/4986394 | dclm-gs1-045350000 |
0.126471 | <urn:uuid:18115d22-79c8-4aff-a57a-100cf7b48e81> | en | 0.871214 |
I am trying to add objects to a persistent store in Core Data.
When the user taps the save button I initialize a new object which is a subclass of the NSManagedObject class and in the data model.
Profile *newProfile = [[Profile alloc] initWithEntity:[NSEntityDescription entityForName:@"Profile" inManagedObjectContext:MOC] insertIntoManagedObjectContext:MOC];
[newProfile setValue:userName.text forKey:@"userName"];
[newProfile setValue:txtInstitution.text forKey:@"institution"];
I can verify in the console that the values for userName and txtInstitution are correct and as expected, and also that the object has the proper attributes. However, it seems to save the object with the same values as whatever the first object saved was. Only one profile is created at a time, and the MOC is saved after each profile is added in this way.
Also, when a table tries to populate with data from the persistent store it will create rows as though are as many objects in the store as I have created at that time, but they will all have the same values.
2 Answers
up vote 0 down vote accepted
Are you sure you are retrieving the objects from the store correctly? This sounds like it might be an issue with the fetch request you use to get the data out of the store and/or an issue with the way you display the data.
That is something I hadn't thought of, I will have to verify that. – ToothlessRebel Feb 17 '11 at 17:43
Good eye, thank you for pointing out the obvious. I was convinced it must be storing them wrong. – ToothlessRebel Feb 17 '11 at 17:53
Is there any particular reason you're not using the designated initialiser for NSManagedObjects?
So you should use:
Profile *newProfile = [NSEntityDescription insertNewObjectForEntityForName:@"Profile" inManagedObjectContext:self.MOC];
Also make sure you are accessing your MOC via its property (self.MOC, not the ivar directly) as if you are using the templates provided by Apple you will notice the MOC is lazily loaded via its getter method.
Yes, there is a reason. It being most likely another fault of mine. The dat for the objects is retrieved here and needs only be added to the persistent store if it didn't already exist. I.E. If a particular object is added server side, but the iPhone hasn't saved it yet. So that these can be used if phone is offline. – ToothlessRebel Feb 17 '11 at 17:42
| http://stackoverflow.com/questions/5007446/managed-object-saves-with-old-data?answertab=oldest | dclm-gs1-045360000 |
0.742597 | <urn:uuid:847edd6f-673a-4710-8b3e-df60c1ccb512> | en | 0.817069 |
Can someone provide an answer to this situation? Suppose I have 2 tables:
Table Books with values Batch_no and Title
Batch_no - Title
1 - A
2 - B
Table Book_Authors with values Batch_no and Author_no
Batch_no - Author_no
1 - 1
1 - 2
1 - 3
2 - 1
How should I merge the values into 1 row which should look like this
Batch_no Author
1 - 1, 2, 3
2 - 1
Any help will be greatly appreciated...Many Thanks!
1 Answer
If you take a look here: http://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/
there are several techniques for doing this.
Adapting for your situation, here is one that looks simple:
select batch_no, LEFT(booksauthors, len(booksauthors)-1) as Authors from
(SELECT ba.Batch_no,
( SELECT cast(ba1.Author_no as varchar(10)) + ','
FROM Book_Authors ba1
WHERE ba1.Batch_no = ba.Batch_no
ORDER BY Author_no
FOR XML PATH('') ) AS BooksAuthors
FROM Book_Authors ba
GROUP BY Batch_no )A;
| http://stackoverflow.com/questions/5124831/how-to-return-1-single-row-data-from-2-different-tables-with-dynamic-contents-in | dclm-gs1-045370000 |
0.124237 | <urn:uuid:9bd1a669-bcc3-46e7-a0d3-b28d5761a394> | en | 0.816378 |
I am working in the Android frameworks. I want to add an item to the existing Settings in the Android OS. Can you please tell me how to do this?
1 Answer
up vote 0 down vote accepted
First read about PreferenceActivity. This group of classes handles user prefs.
Then depending on your task, do something like this.
In case you want to add a live wallpaper or an input method add android:settingsActivity in your Manifest. (Example : here)
Or follow this tutorial
What is this or this? – dcow Dec 11 '13 at 2:49
| http://stackoverflow.com/questions/5259637/add-item-to-the-settings-in-android-framework?answertab=active | dclm-gs1-045390000 |
0.104645 | <urn:uuid:6678f7de-ef1e-4a7a-aac6-7c61e4c56336> | en | 0.852102 |
I just installed Cassandra on my Windows machine. When I run cassandra.bat everything seems to be OK - no errors, just some debug statements.
I then launch the client - cassandra-cli.bat - and no matter what I type in at the command prompt (other than ?) I get back a ... with a tab and a blinking cursor.
The articles I have read say to type in: connect localhost/9160, but this yields the same result - just a ...
1 Answer
up vote 2 down vote accepted
You are reading obsolete articles. Commands end with a semicolon now. Look at the readme for a correct example.
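For example, the connect command from the question would now need the terminating semicolon (host and port here are just the defaults mentioned in the question):

```
connect localhost/9160;
```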
| http://stackoverflow.com/questions/5413509/cassandra-on-windows-trouble-running-commads | dclm-gs1-045410000 |
0.066759 | <urn:uuid:4adea9b7-1f37-4291-9969-abc7dc261d9c> | en | 0.796735 |
I am new to WPF and I am creating an application which uses the TabControl. I am using a DataTemplateSelector, and my data source is an object I created from XML which has the properties "type" and "categoryID". I select my data template based on the "type", which works fine, but I also need to create a TabItem for each categoryID at runtime. My problem is that currently it creates a new TabItem for each object. How do I create a new TabItem based on the categoryID and place the DataTemplate on that tab, and, if the tab has already been created, place the DataTemplate on the existing tab instead of creating a new one?
Thanks in advance!!
You can use the ObservableCollection class and binding to the ItemsSource property. Before adding an item to the collection - check existance of this item in the collection, and if it exists - don't add. – vorrtex Apr 4 '11 at 13:51
1 Answer
up vote 0 down vote accepted
I ended up using a CollectionViewSource with grouping and then I set the tabcontrol datacontext to the CollectionViewSource.
private void PopulateTabControl()
{
    DataView = (CollectionViewSource)(this.Resources["DataView"]);
    tabcontrol.DataContext = DataView;
}

private void AddGrouping()
{
    PropertyGroupDescription grouping = new PropertyGroupDescription();
    grouping.PropertyName = "categoryID";
    DataView.GroupDescriptions.Add(grouping); // attach the grouping to the view
}
Is it possible to view the xaml? – Emil Badh Apr 26 '12 at 11:46
| http://stackoverflow.com/questions/5539255/dynamic-tabitems-c-sharp-wpf | dclm-gs1-045430000 |
0.053277 | <urn:uuid:adbde777-3391-4e29-9fa3-c2667abfb97a> | en | 0.683709 |
While uploading an Excel file I need to check whether any data is present in the first sheet, say Sheet1. I'm reading the file using an OleDbReader and converting it to a DataTable. Here I'm checking whether there are values in the DataTable; if it is empty, it is assumed that the Excel sheet has no values.
This is the code I'm using:
Dim objConn As System.Data.OleDb.OleDbConnection = Nothing
Dim dtExcel As DataTable = Nothing
Dim excelSheetName As String = String.Empty
objConn = New System.Data.OleDb.OleDbConnection(excelconnstring)
dtExcel = objConn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, Nothing)
Dim i As Integer = 0
excelSheetName = dtExcel.Rows(0)("TABLE_NAME").ToString()
Dim excelCommand As OleDbCommand
excelCommand = New OleDbCommand("Select * from [" & excelSheetName & "]", objConn)
Dim dt As New DataTable
Dim oda As New OleDbDataAdapter
oda.SelectCommand = excelCommand
oda.Fill(dt) ' fill the DataTable (this call was missing, so dt was always empty)
If Not dt.Rows.Count = 0 Then
If String.IsNullOrEmpty(dt.Rows(0)(0).ToString()) Then
'error message thrown that sheet is empty
End If
End If
I'm checking here whether the first row in dt is empty, but with some Excel sheets, while binding to the DataTable, the dt has a null value in the first row even though the first cell in the Excel sheet has a value, and the other values are bound properly. Why is this happening, and how can I check whether the Excel sheet has empty values?
1 Answer
see http://www.codeproject.com/KB/office/excel_using_oledb.aspx CodeProject
thanks for that post. but still im getting empty rows for the firstcell when i export excel without header. what im doing wrong. this is my connection string for excel 97-2003 file connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + fileName + ";Extended Properties=\"Excel 8.0;HDR=NO;\""; – Sinduja Apr 28 '11 at 12:04
| http://stackoverflow.com/questions/5817339/reading-excel-files-using-oledbreader/5817838 | dclm-gs1-045460000 |
0.551264 | <urn:uuid:cdb5a603-b635-41c9-b27f-645a4da5b085> | en | 0.906357 |
I'm a mac user giving vim a serious try. Most of the GUI editors I'm used to allow me to open a directory as a "project" by executing a command like:
edit ~/www/example.com/
The vim equivalent vim ~/www/example.com/ will show me a list of files in the directory, and I can open them. But it does not set vim's working directory to that path, I have to run :cd . to set the working directory.
Is there some way, perhaps with a shell script, to open vim and have its working directory set to a given path?
I'm actually using MacVim, if that makes any difference.
In directory view just hitting c will cd you into that directory - doesn't do exactly what you want, but its worth knowing. – Michael Anderson May 8 '11 at 23:27
It's ok to propose answers to your own question. I recommend moving your work in progress answer out of the question, since it is not part of the question. There is even a badge you can earn for answering your own question with a score of 3 or higher. (ps nice answer!) – Ziggy May 9 '11 at 4:49
@Ziggy thanks. I did try answering my own question, but it did not allow it (something about waiting 24 hours before answering your own question?). Wasn't sure what to do, but have now posted it as an answer. – Abhi Beckert May 10 '11 at 4:55
5 Answers
(cd /path/to/dir && vim file)
Less so:
vim /path/to/dir/file +':cd %:h'
You can always map :cd %:h to a convenient key or put in an autocommand (I wouldn't actually recommend the latter, but there is no arguing about taste)
Oh and for directories instead of files:
:cd %
is quite enough
Thanks, that pointed me in the right direction! But isn't the perfect answer, because the "file" you refer to does not exist, I want to open vim with a given working directory, not open a file in vim (the file I want to open is likely to be several sub directories deep) – Abhi Beckert May 8 '11 at 23:01
up vote 3 down vote accepted
Thanks to @sehe's suggestions, I came up with this. Not sure if it's the best solution, but it seems to work.
if [ "$#" -eq 1 ];then # is there a path argument?
if test -d $1;then # open directory in vim
vim $1 +':cd %'
else # open file in vim
vim $1 +':cd %:h'
else # no path argument, just open vim
Would this help?
set autochdir
I found it at http://vim.wikia.com/wiki/Set_working_directory_to_the_current_file
I think autochdir would break my workflow, I work with projects containing several thousand files across hundreds of directories and need the working directory to be the "root" of the project. – Abhi Beckert May 8 '11 at 23:03
Try adding the following to your .vimrc
let g:netrw_liststyle=3
let g:netrw_keepdir=0
This will make the directory browsing use a tree style for showing the files (you can expand a directory by putting the cursor on a directory and hitting enter) and make the current working directory also be the one you are browsing.
You might also be interested in the NERDTree plugin that provides a directory browser that is more advanced than the built in one. It has an option
let g:NERDTreeChDirMode=2
to make the current directory match the root of the displayed tree or
let g:NERDTreeChDirMode=1
to change the directory whenever you use a command (:e or :NERDTree) to browse a new directory.
$ cd ~/my/working/directory
$ vim .
| http://stackoverflow.com/questions/5930636/opening-a-directory-in-vim/5930676 | dclm-gs1-045470000 |
0.926912 | <urn:uuid:58e8c1e8-d14d-4478-bc02-ea4602239f8e> | en | 0.776611 |
In my web application, I need to be able to allow users to upload and download their images. How can this be done in ASP.NET?
I want the user to be able to sign in (I already have that done) and be able to upload images to their account. Later on, I want them to be able to download them.
2 Answers
up vote 1 down vote accepted
If the images are of a reasonable size, (e.g. less than a few MB), you can do this with a <input type=file> tag. Set the encoding of your form tag to multipart/form-data and implement some code-behind to get a hold of the file.
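In sketch form, that markup might look like this (the field names are illustrative, not taken from the videos):

```html
<!-- enctype must be multipart/form-data for file uploads to work -->
<form method="post" enctype="multipart/form-data">
  <input type="file" name="imageUpload" />
  <input type="submit" value="Upload" />
</form>
```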
See the ASP.Net videos (Part 1) and (Part 2) for a detailed example.
Hope this helps!
You can also use
<asp:FileUpload runat="server" id="upImage" Width="575px" CssClass="txt-Normal"></asp:FileUpload>
Here is my method to save them to a location on the site.
string completeFilePath = this.Request.PhysicalApplicationPath + System.Configuration.ConfigurationManager.AppSettings["ImageUploadPath"];
if (System.IO.Directory.Exists(completeFilePath))
{
    if (!System.IO.File.Exists(completeFilePath + this.upImage.FileName))
    {
        this.upImage.SaveAs(completeFilePath + this.upImage.FileName);
        imageUrl = string.Format("~{0}{1}", System.Configuration.ConfigurationManager.AppSettings["ImageUploadPath"], this.upImage.FileName);
    }
}
| http://stackoverflow.com/questions/622537/download-and-upload-images-with-asp-net | dclm-gs1-045480000 |
0.065985 | <urn:uuid:2b8c7848-5e76-4745-9f50-39f5fa80a8dd> | en | 0.849884 |
I'm building an application to record when my cat has an asthma attack. I'm not interested in the exact time since glancing at the time in interval of 15 minutes is easier to review (e.g. rounding 9:38am should be recorded as 9:45am).
I looked for a UDF at cflib.org for this but couldn't find one. I tinkered with CF's round function but I'm not getting it to do what I want.
Any advice?
4 Answers
up vote 5 down vote accepted
This could do with a bit more polish (like data type validation) but it will take a time value and return it rounded to the nearest 15-minute increment.
function roundTo15(theTime) {
    var roundedMinutes = round(minute(theTime) / 15) * 15;
    var newHour = hour(theTime);
    if (roundedMinutes EQ 60) {
        newHour = newHour + 1;
        roundedMinutes = 0;
    }
    return timeFormat(createTime(newHour, roundedMinutes, 0), "HH:mm");
}
everett Perfect or as my cat would say, Purrrrrfect. ;) As written, does this account for the 12 hour clock or 24 hour clock meaning does it show AM/PM? – dlackey Jun 15 '11 at 19:14
I just added a tt to the time format as in return timeFormat(createTime(newHour,roundedMinutes,0),"HH:mm tt"); – dlackey Jun 15 '11 at 19:17
I'm not familiar with the format of the timestamp here, but generally when I want to round I do something like floor((val + increment / 2) / increment) * increment
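As a sketch of that formula applied to the minute values from the question (Python here just to illustrate the arithmetic; it is not ColdFusion):

```python
import math

def round_to_increment(val, increment):
    # floor((val + increment / 2) / increment) * increment
    return math.floor((val + increment / 2) / increment) * increment

print(round_to_increment(38, 15))  # 45, so 9:38 -> 9:45
print(round_to_increment(52, 15))  # 45
print(round_to_increment(53, 15))  # 60, i.e. roll over to the next hour
```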
My first thought was to use floor similar to what you did but dealing with the time, this would need a modification to pull out the minute value from the time stamp. – dlackey Jun 15 '11 at 19:14
I like Al Everett's answer; alternatively, store the actual time to preserve the most accurate value, then use a query of queries in the view with between :00 and :15 to show the time in 15-minute periods.
@Henry - that is not a bad idea, but @al-everett solution is a little simpler to implement for this small app that I'm building. – dlackey Jun 15 '11 at 17:23
... or just do it in SQL. There are plenty of "round to x minutes" algorithms out there. (For next time ... :) – Leigh Jun 15 '11 at 18:17
@Leigh how? I've never come across any of those.. I just use QoQ for my last project – Henry Jun 15 '11 at 18:33
@Henry - Here is one idea. I have not used this one specifically. stackoverflow.com/questions/249794/how-to-round-a-time-in-t-sql – Leigh Jun 15 '11 at 19:04
Two others for MS SQL: DATEADD(n, ROUND(DATEDIFF(n, 0, @Time) / @YourInterval, 0) * @YourInterval, 0) .. or .. dateadd(mi,(datepart(mi,@Day)/5)*5,dateadd(hh,datediff(hh,0,@Day),0)) source: sqlteam.com/forums/topic.asp?TOPIC_ID=64755 – Leigh Jun 15 '11 at 19:17
If you use Henry's suggestion to store the precise time in the database (principle: if there's no cost, prefer to preserve data), then you can simply use Al Everett's rounding function whenever you display the data to the user.
| http://stackoverflow.com/questions/6360889/how-to-round-the-minute-of-a-timestamp-to-increments-of-15/6361226 | dclm-gs1-045490000 |
0.304715 | <urn:uuid:5eb9df33-e439-4e5b-92d9-678b03485fd5> | en | 0.917578 |
I want to make a line chart that updates every couple of seconds and doesn't need the page to be refreshed (it would get the info from a separate file that updates on a server). Are there any JavaScript libs (other than jQuery) that will make this easy? Could anyone show me an example on a webpage? On a scale from 1 to 10, how hard would this be? (10 being hard)
Also the data gets updated on a fixed interval of 10s if that matters. And if possible I would like to stick to just CSS3 HTML5 and javascript.
FYI: I'm about to go to sleep so don't expect a response until tomorrow morning
check out raphaeljs.com – Zevan Jun 28 '11 at 7:19
10 Answers
up vote 16 down vote accepted
There are several charting libraries that can be used: gRaphael, Highcharts and the ones mentioned by others. These libraries are quite easy to use and well-documented (let's say 1 on the difficulty scale).
AFAIK, these libs are not "real-time" because they don't give the possibility to add new points on the fly. To add a new point, you need to redraw the full chart. But I think this is not a problem because redrawing the chart is fast. I've made some tries with gRaphael and I didn't notice any problem with this approach. If your update rate is 10s that should work OK (but it may depend on the complexity of your charts).
If redrawing the full chart is a problem, you may have to develop a chart by yourself with a vector graphics lib like Raphael or paper.js. That will be a bit harder than using a charting lib but should be feasible. (Let's say 5 on the difficulty scale.)
As you are getting the data at a fixed interval, you can use a regular ajax lib. jQuery is ok for me but there are some other choices. That may not be the best choice for a non-fixed interval, and in this case you may have to look at something like socket.io, but that would have consequences on the server side too.
Note1: Raphael, gRaphael and Highcharts are not purely HTML5 but SVG/VML but I guess this is an acceptable choice too.
Note2: it seems that Highchart doesn't require to redraw the chart when inserting new points. See http://www.highcharts.com/documentation/how-to-use#live-charts
I also initially tried to find solutions to "not redraw the entire chart every time" for performance reasons. But then I realised that this is a fallacy. Any solution must redraw the chart for every frame. Think about it: how does your monitor work? It continually refreshes itself from scratch, it doesn't move pixels around. Therefore any charting library that allows you to "not redraw the chart" is just providing an abstraction for you. This abstraction is convenient and desireable; nevertheless performance-wise it is still redrawing the chart in the background for every frame. – Fletch Nov 30 '12 at 11:52
Several things that might help you:
Canvas Express is a powerful charting library : http://canvasxpress.org/
Here you can find a tutorial about rolling your own equation based graphs: http://www.html5canvastutorials.com/labs/html5-canvas-graphing-an-equation/
Using a canvas solution is very easy. You can retrieve your periodic data for the graph using ajax, and redraw the graph every time you retrieve new data.
Since it's all client side you won't have to refresh the page.
If you know your way around JavaScript and ajax, then it's going to be of medium difficulty. If you don't, then you'll probably have to post some more questions on Stack Overflow to help you with the parts you get stuck with.
Thanks its just what I needed – Mehran Jun 28 '11 at 15:13
You get the data from the server, update your previously available dataset, and then probably use one of the freely available libraries to draw the graph [eg: http://www.rgraph.net/]
Things you might want to consider: if your chart represents a state, get only the new data with XHR, update the data on the client, and draw.
I believe this is exactly what you're looking for:
Open source (although a license is required for commercial websites), cross device/browser, fast.
Have you checked out ZingChart? It renders charts in HTML5 Canvas, SVG and Flash (and VML for old IE fallback). API and live data feeds allow for data updating without full chart or page refresh. see http://www.zingchart.com/learn/api/api.php
I'm on the team. You can reach our team at support[at]zingchart.com with any questions, or twitter.com/zingchart.
Flotr2 and Envision are options. Flotr2 has a real time example on the doco page I linked. Envision is a bit tougher to get started with, so try Flotr2.
I'd recommend this one too. It's a very solid lib. – bottleboot Nov 30 '12 at 9:38
The flotr2 example is not real-time... it's just redrawing the whole chart repeatedly. – tybro0103 Sep 24 '13 at 15:16
@tybro0103 Exactly, that is real-time. At first I thought like you but then I realised that this is the only way to do it. How does a movie work? Flash through 25 different pictures in a second and it looks like motion. How does a monitor work? Continually redrawing the pixels according to what is now meant to be shown, the speed of this redrawing is measured in Hertz. You can't move a pixel on a chart. You have to redraw the chart with the new locations. Even if the library abstracts it and makes you feel like you are moving pixels, it's still just redrawing behind the scenes. – Fletch Sep 25 '13 at 20:24
@Fletch You're right in that it does have to redraw the whole thing in order to show new data. However, what I meant to say is that it reloads the whole data set repeatedly. Check the source and you'll see that the code for (...) {data.push(...)} is executed each time the chart renders. This is wasting CPU and can be a real problem when there are thousands of data points per second. A better real-time chart will allow you to simply append a single data point at a time, rather than reset the entire dataset each time you have a new point. – tybro0103 Sep 25 '13 at 21:24
SmoothieCharts does it right: smoothiecharts.org/tutorial.html – tybro0103 Sep 25 '13 at 21:24
http://www.rgraph.net/ is excellent for graphs and charts.
I would suggest Smoothie Charts.
It's very simple to use, easily and widely configurable, and does a great job of streaming real time data.
There's a builder that lets you explore the options and generate code.
Disclaimer: I am a contributor to the library.
Really nice, although it would also be nicer if it would be able to also plot static data and even plot dynamic data over static data, for example yesterday's evolution of some price and today's current evolution, so it is easy to compare. – Javier Mr Nov 6 '13 at 10:09
You might also give Meteor Charts a try, it's super fast (html5 canvas), has lots of tutorials, and is also well documented. Live updates work really well. You just update the model and run chart.draw() to re-render the scene graph. Here's a demo:
Here's a gist I discovered for real-time charts in ChartJS:
ChartJS looks like it's simple to use and looks nice.
Also there's FusionCharts, a more sophisticated library for enterprise use, with a demo of real time here:
| http://stackoverflow.com/questions/6502827/real-time-data-graphing-on-a-line-chart-with-html5?answertab=oldest | dclm-gs1-045500000 |
0.041975 | <urn:uuid:e55ade59-5c77-4f8e-aec3-a0925acfa7f1> | en | 0.930943 |
How can I make Lion's new Java installer prompt launch when my application is opened?
Example: http://kb2.adobe.com/cps/909/cpsid_90908.html
eclipse appears to do it but NetBeans does not.
I remember there was a JARBundler application (or something similarly named) in the Developer/Utilities folder when you installed the Developer CD. Has that been updated with an option to launch the installer, maybe? – Kainsin Jul 21 '11 at 15:14
That's not present in my Lion install - I'd probably need to install the JRE to get that, I'm guessing. And I'd rather not do that until I find out how to launch this prompt. :) – Jake Petroules Jul 21 '11 at 15:40
Hi Jake. I don't have an answer for you, but I can tell you that my Java application got this "for free". I'm not sure what the OS is looking for when it decides to show this dialog. – Matt Solnit Jul 21 '11 at 22:06
Just curious, what happens with NetBeans? Do you get some other error message instead? – Matt Solnit Jul 21 '11 at 22:06
No message unless you launch it from Terminal... then it just says no JRE found (not exact wording). – Jake Petroules Jul 21 '11 at 22:37
2 Answers
Going to the terminal and typing "java" triggers the popup you are looking for, provided Java is not installed on the system.
However, I don't know how to trigger this popup afterwards.
Unfortunately the dialog will read "to use java..." whereas I'd like it to read "to use <myapp>..." as eclipse does, for example. – Jake Petroules Jul 24 '11 at 19:27
If I understand you correctly, you would like to be able to run an application originally written in Java even on a machine that does not have a JRE installed, and be able to show a pop-up window that prompts the user to install the JRE.
Obviously you can implement a script for each platform that does it. For example, it could be a VB script for Windows, a Tcl/Tk script for Unix, etc.
Another possibility is to use package generators that do the work for you. I used to use InstallAnywhere and MultiPlatform Installer (from InstallShield) and I think they have such functionality.
I don't think you're right. He wants very specifically in the case of Mac OS Lion to launch an OS-provided dialog that downloads and installs a Java runtime. – Duncan McGregor Jul 21 '11 at 15:37
I know that he wants solution for MAC. I wrote abut Windows and Linux just as an example. I just said that there are 2 solutions and explained them. – AlexR Jul 22 '11 at 5:03
I could easily write my own Java launcher, but I specifically want Lion's built-in dialog, as @Duncan said. – Jake Petroules Jul 22 '11 at 23:50
| http://stackoverflow.com/questions/6778146/how-to-launch-the-to-open-java-preferences-you-need-a-java-runtime-would-yo | dclm-gs1-045530000 |
0.035116 | <urn:uuid:6f36ceaa-3b80-4fa4-b397-b9a6e672b8a1> | en | 0.838887 |
I have a sunspot/solr setup for fulltext searching model attributes. My QA just searched on " and +, both of which caused a 500 error:
Solr Response: orgapachelucenequeryParserParseException_Cannot_parse__Encountered_EOF_at_line_1_column_0_Was_expecting_one_of_____NOT______________________________QUOTED______TERM______PREFIXTERM______WILDTERM__________________NUMBER______TERM____________
How can I make these query strings safe? Is there a method in Sunspot to handle this?
possible duplicate of solr sanitizing query – Mauricio Scheffer Aug 15 '11 at 20:54
1 Answer
There is no method in Sunspot to filter those, because they are valid in certain kinds of Lucene queries. Sunspot is using the DisMax Query Parser by default, so you can read its documentation to learn more about those characters.
[DisMax] is designed to be support raw input strings provided by users with no special escaping. '+' and '-' characters are treated as "mandatory" and "prohibited" modifiers for the subsequent terms. Text wrapped in balanced quote characters '"' are treated as phrases […]
If you intend to never use those characters, you can filter them out of queries yourself (the backslash is there to escape the minus sign):

Post.search do
  keywords params[:q].gsub(/[+\-"]/, '')
end
You may want to wrap that in a controller method, if you're invoking Sunspot's search method within a controller, or a model method if you're calling Sunspot's solr_search method from within your class's own custom search method.
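Such a wrapper might look like this minimal sketch (the helper name is illustrative, not from the answer):

```ruby
# Hypothetical helper: strip DisMax-reserved characters before searching,
# so every search action sanitizes user input the same way.
def sanitize_keywords(raw)
  raw.to_s.gsub(/[+\-"]/, '')
end

cleaned = sanitize_keywords('+solr -rails "exact phrase"')
puts cleaned  # => solr rails exact phrase
```

A controller or model search method would then pass `sanitize_keywords(params[:q])` to Sunspot instead of the raw parameter.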
Source: http://stackoverflow.com/questions/7069851/how-do-i-escape-illegal-characters-in-a-sunspot-solr-fulltext-search/7072014
When I use the Facebook Graph API to post to a wall as that wall (using the page access token generated from a user with the appropriate permissions), the request always returns the same ID (so it successfully posts, but I'm not getting a unique ID--it's the same every time). As such, I can't get the actual post ID, so I can't delete that post. Has anyone else had this issue? Is there any way to get the actual ID? I've tried various libs and methods, and this seems universal, so I don't think it's any sort of wrapper issue, but rather an issue with the Graph API itself.
UPDATE: It turns out this was an issue with the Ruby gem I was using--so it was in fact a wrapper issue.
Could you clarify what you're doing please? Are you posting via the graph API to a page wall using a page access token from an admin of that page? Posting to a user's wall with their own access token? Something else? You should be receiving a post ID back which can be queried at graph.facebook.com/<post ID here> – Igy Aug 24 '11 at 3:35
the ID should be unquie. are u sure that u have catch an error / exception in calling fb api? – Eddy Chan Aug 24 '11 at 6:51
I've updated the question a bit to be more specific. To be clear: the post is successful. The response is just returning the same ID every time. – ideaoforder Aug 24 '11 at 14:19
@ideaoforder you should write the answer to this question as an answer and accept it, so future users might be helped. :) – Jimmy Sawczuk Aug 26 '11 at 15:13
You should post your update as an answer. Also, it might be useful for future visitors if you could add more detail about which gem this was and how you solved the problem. – hammar Aug 26 '11 at 15:14
1 Answer
Accepted answer
To further clarify--I was using my own forked version of the Koala ruby gem (I added an option to send the request params as data, rather than form-encoded) and borked what was being returned. So this was entirely on me. Users of the Koala gem shouldn't have this issue.
Source: http://stackoverflow.com/questions/7169080/facebook-graph-api-post-to-page-returns-duplicate-id?answertab=votes
This is really a bioinformatics question, but I'll make it as general as I can. Here's the semi-hypothetical situation:
Let's say I have access to a cluster or even a cloud. I want to run some very specific programs on this cluster/cloud (genomic/transcriptomic assembly programs to be exact). The problem is that I expect these programs (Velvet/Oases, Trinity, whatever else) to require large amounts of RAM, conservatively 100GB+, and the largest node on my cluster/cloud is only 32GB.
Now besides switching to an MPI/Hadoop based program (ABySS or whatnot), writing my own, or buying a new computer, what are my viable options? Has anyone tried using a distributed operating system (MOSIX, Kerrighed, ...) with shared memory on multiple nodes of a cluster/cloud? What about a virtual SMP? What else?
Thanks for any help!
Edit for clarification: Let's also say that the programs mentioned above (Velvet/Oases and Trinity) require a single system with a large pool of RAM. In a nut shell, I'm looking for a viable way to "paste" a bunch of nodes together into one virtual super-node where a single process could access all of the RAM from all of the nodes like it was a single system. I know that anything like this would probably give a pretty substantial performance hit, but I'm looking for something that's possible, not necessarily efficient.
p.s. Sorry if my terminology is making things confusing. I'm somewhat new to a lot of this.
What about AWS ? Maybe a combination of the services they provide could be a good option. – ppareja Sep 3 '11 at 18:07
AWS has the same similar problem as far as I know. Lots of small (medium-small anyway) nodes of cluster/cloud and no easy way to paste a few of the small ones together into a single large environment for running a single thread that needs 1TB of RAM. – Pete Sep 6 '11 at 15:59
If you could add some notes about the actual processing to be done, that would be very helpful. Describing the data a bit more would also help. – Iterator Sep 25 '11 at 23:22
2 Answers
It totally depends on the nature of your application. Switching to Hadoop, MPI, MOSIX or VSMP may not solve your problem, because these technologies are helpful only when you can partition your application into concurrently executing blocks.
Now, if your application is partitionable into concurrent blocks, choose the best software technology that fits your needs. Otherwise, it is recommended to upgrade your hardware. For choosing the software technology, if your application:
1. Is data intensive: Try Hadoop or Dryad or something like that.
2. Is process intensive and passes many messages between its blocks: try MPI
3. Contains many light-weight threads: Use GPGPUs for your app.
4. ....
Take a look at RAMCloud project at Stanford university. It is somehow relevant.
Thanks for your post. But what about when the application is not "partionable into concurrent blocks"? Imagine that the applications in question are single-threaded programs that require 100GB+ of RAM on a single system. That's why I was mentioning shared memory distributed operating systems. I'm certainly no expert on distributed OS or SMP, but my understanding is that they both offer the ability to basically paste multiple smaller systems into a single larger pool of shared resources that a single process could then potentially access? I'll edit my post to reflect this specificity. – Pete Aug 25 '11 at 19:48
Well, as you know, DSM systems (distributed shared memory) suffer from poor performance due to transferring pages between nodes. As you know, in these systems, each memory page request is trapped to find its location on the pool and may need a network transfer of that page. But it benefits from a simpler programming model, i.e. the shared memory paradigm which is more handy than the distributed one. So, if your app is not partitionable into different parts, my advice is to just promote your hardware and forget about a distributed solution, like the ones you mentioned. – hsalimi Aug 25 '11 at 20:03
Forgot to say, if you run your app on a pool of resources, you will use just the memory of other nodes, not their processor or disk. So, it is more logical to have those extra memory chips on your main machine instead of distributing them on your pool – hsalimi Aug 25 '11 at 20:08
Again, thanks for your posts, you're confirming a lot of what I wasn't sure about. But the issue still remains that many of us have free access to hundreds or even thousands of nodes of clusters/clouds and no access to a single system with large amounts of RAM. I can imagine these programs requiring 1TB+ of RAM on a single system in the next year or two. Completely disregarding performance thrashing caused by using something distributed or virtual like this, are you suggesting this isn't possible or hasn't been done before? – Pete Aug 25 '11 at 21:50
Of course using DSMs or sth like that is possible and there is no doubt about it. You can set up your own DSM system and that will work. But lets clarify my words by comparing the two alternatives 1) Using a DSM that shares lots of memory between nodes 2) Using a single system with 64G of RAM. --- In the first case you should use the network to compensate the lack of memory. In the second case you should use the disk (virtual memory) to cover your lack. As you see, in both cases you are relying on I/O, either Net or Disk. So, do you think that network is faster than disk? I don't think so – hsalimi Aug 26 '11 at 8:07
Your question omits the nature of the processing to be done. This is particularly important. For instance, is each object really 100GB, or is the 100GB a collection of a lot of objects that are much smaller in size?
Nonetheless, addressing the general question, I routinely work with 100GB+ datasets in memory-mapped files. If you learn how to do memory mapping, you will likely find this to be a very easy route to go. What's more, if the data is in one place, then an easy kludge is to use NFS, and then multiple systems can access the same data at the same time. In any case, memory mapping is often very easily woven into existing programs, especially compared to managing the movement of blocks of data around your grid.
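As an illustration of that approach (a hypothetical sketch, not from the answer; the record marker and file layout are assumptions), a large file can be scanned through a memory map in Python without loading it into RAM:

```python
import mmap
from contextlib import closing

def count_records(path, marker=b">"):
    """Count occurrences of `marker` in a file via a read-only memory map.

    The OS pages data in and out on demand, so resident memory stays small
    even when the file is far larger than physical RAM.
    """
    count = 0
    with open(path, "rb") as f:
        with closing(mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)) as mm:
            pos = mm.find(marker)
            while pos != -1:
                count += 1
                pos = mm.find(marker, pos + 1)
    return count
```

The same pattern extends to slicing out individual records, and multiple processes can map the same file concurrently.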
As you note, there are options like MOSIX or MPI, or you could look at memcached or memcacheDB, though I think that won't work out very well in the long run. In terms of an ordering for your system, I'd recommend memory mapping first, then MPI, MOSIX, and memcached.
Source: http://stackoverflow.com/questions/7194477/large-ram-requirements-and-clusters-clouds/7549298
I have two labels within a li and I want to style the second label. Is there a way to target the second label? It has to work on IE7.
I wrote it like this:
<label for="asi plan status">A&SI Accountable VP:</label>
<label>Tye Schriever</label>
ul li label label{color:red}
Any other way..?
The CSS3 way would be: ul li label+label, but I'm not sure if IE7 supports this. – Exelian Sep 16 '11 at 9:29
Adjacent sibling? That's CSS 2.1, I think, not 3. And IE has problems up to, and including, version 8. – David Thomas Sep 16 '11 at 9:34
@Exelian @David Thomas: AFAIK IE7/IE8 only have parsing problems with it in very specific situations. ul li label+label should work correctly. And yes + is a CSS2.1 selector. Just because it's not syntax you see every day, or you think it has poor IE support, does not make it a brand new CSS3 feature. – BoltClock Sep 16 '11 at 13:36
4 Answers
Accepted answer
You can use the CSS2 :first-child property: define your properties for the second label in a rule common to both labels, and through :first-child override those properties for the first label, like this:
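A rule pair matching that description (a sketch; selectors assumed from the question's markup) could be:

```css
/* common rule: styles every label, which is what the second label keeps */
ul li label {
  color: red;
}

/* override the first label via :first-child (supported by IE7) */
ul li label:first-child {
  color: black;
}
```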
It's supported by IE7 also, but the approach is different.
Working perfectly – Ravi Sep 16 '11 at 9:47
happy to help :) – sandeep Sep 16 '11 at 9:48
If you only need to target the second label and nothing else, you can use
ul li label + label
No need for overrides or CSS3's :last-child or adding classes to work around IE, etc. Although the comments mention IE7 and IE8 having problems with the + selector, it should work properly in this situation.
As other people have mentioned, there are legitimate CSS selectors to achieve what you want, but IE7 doesn't support those selectors.
When I need to patch support for :last-child in IE7, I use this technique with jQuery:
$('li label:last-child').addClass('last-child');
And then in my CSS I can use

li label.last-child {
  /* some styles here */
}

So we leverage jQuery's excellent CSS selector support to apply a class to the elements you want to style, and then style them in the CSS.
If :last-child is anything that doesn't require a long chain of + selectors to reach, use + instead for free IE7 support. – BoltClock Sep 16 '11 at 13:59
Quite right, this is just a more universal solution for using :last-child with IE7. – Alex Sep 16 '11 at 14:01
You may use the :last-child selector, but that isn't supported by Internet Explorer.
li label:last-child { color:red }
Assign a class-attribute to the label instead, to get it properly working on all browsers.
Source: http://stackoverflow.com/questions/7442629/css-child-selectors?answertab=active
I created a small test Mac app using the Core Data template (on Lion 10.7 and Xcode 4). I used the example on this site, http://www.swampfoetus.net/chapter-7-fail/, to hook up all the Cocoa Bindings with a tableview, an NSArrayController, a text box and an Add button. The NSArrayController is linked to the managedObjectContext of the App Delegate.
Everything seems to work fine when I launch the app ... I can type in text and press Add, and it gets saved in the tableview. I saved a few rows, and then pressed Save in the file menu (linked to the saveAction IBAction) and quit the app. I can see the data being saved in the xml data file (I renamed it .xml ... the PSC is of type NSXMLStoreType).
The problem is that when I launch the app again, it launches without the data that was saved in the Core Data file in the previous run.
This happens each time ... I can add data and it keeps appending to the data file, but at launch it never seems to read from this data file.
Any ideas what could be wrong here? I haven't messed around with the App Delegate generated code at all, only set up the bindings which seem to work fine. What could I check to make sure it's setup correctly?
1 Answer
If the data shows up in the persistent store, then the only explanation would be a problem with the binding where the UI doesn't display the previous data for some reason. It's hard to say why that is happening but my guess would be a fetch predicate or some other bound qualifier that causes the controller to ignore older objects so that they are not displayed.
I can't say for sure because I don't have access to the book.
This is one of the drawbacks of using bindings. When they work, they're fantastic but when they don't, they're a @#%! to debug.
Ya, it's really confusing ... not sure if it might have something to do with Lion and AutoSave, though the template doesn't seem to suggest it, and I tried on a 10.6 machine as well, and same issue. – Z S Sep 19 '11 at 21:46
Source: http://stackoverflow.com/questions/7468511/core-data-template-data-not-persisted-between-runs
Internal error 500 is such a general error and nothing shows up in the logs under /wordpress/error_log. Is there a way to get a stack trace of where the crash is occurring? In ASP.NET it is so easy because a stack trace, code snippet, and line number all show up in the error page. Any help would be greatly appreciated.
A fuller explanation of the error will show up in the location where Apache writes the error logs. If it's not in /wordpress/error.log, then that is not the location where that takes place – Pekka 웃 Sep 27 '11 at 23:28
1 Answer
Accepted answer
Turn on PHP errors in your php.ini or add this line to the .htaccess file in your webroot:
php_flag display_errors on
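For the php.ini route the answer mentions, the equivalent directives look like this (a sketch; the log path is a placeholder, and production sites usually log errors rather than display them):

```ini
; php.ini equivalents of the .htaccess flag
display_errors = On
error_reporting = E_ALL
; consider logging instead of displaying on production:
log_errors = On
error_log = /path/to/php_error.log
```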
Source: http://stackoverflow.com/questions/7576759/how-to-trace-what-is-causing-wordpress-error-500
I have finally finished my premium project, but it still needs some last enhancements. I want to sell it and give out premium keys so only the key holders can run this version.
I have already made a basic system for such a thing: http://brutallyviralwpplugin.com/check/form.html
but I am sure the code can be ripped off and the version is going to work without it... so I need to make it harder for crackers to do so, at least without help from Zend Guard or ionCube (ugh, I just hate them)
share|improve this question
add comment
closed as not a real question by Pekka 웃, DaveRandom, Pratik, user187291, Damien Pirsy Sep 30 '11 at 9:46
1 Answer
PHP is open source and is compiled at run-time. Therefore, anyone who knows PHP can remove any licensing code you put in. The only way to provide effective licensing/IP protection for PHP code is with Zend Guard/ionCube et al.
In short, what you are asking for cannot be done. Full stop.
share|improve this answer
Yes, I know that; even compiled binaries can be cracked after disassembling them, but there must be an easy way, like what InvisionPower and vBulletin use, to prevent some users from cracking it... – SAFAD Sep 30 '11 at 9:18
You've never seen a cracked ('nulled') version of vB or IPB? As DaveRandom said, what you're asking for cannot be done (even with Zend Guard and whatnot in my opinion). – Corbin Sep 30 '11 at 9:27
I've seen it, but I'm sure it already blocked many crackers from using it... – SAFAD Sep 30 '11 at 9:37
Source: http://stackoverflow.com/questions/7608152/secure-premium-php-script-activation
My following query needs more than two minutes, and I don't know which index is the best to improve the performance:
FROM forwarding
WHERE fDate BETWEEN '2011-06-01' AND '2011-06-30'
GROUP BY shop;
The EXPLAIN result:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE sal_forwarding index forwardDate,forwardDate_2,forwardDate_3,forwardDate_4,forwardDate_5 f_shop 40 (NULL) 2448997 Using where; Using index
The key f_shop has the following structure: (f_shop, forwardDate, cid)
What is the best Index to improve the performance for my query?
Thank you very much.
UPDATE: Here is the table Create Statement:
CREATE TABLE `forwarding` (
`f_shop` INT(11) NOT NULL,
`f_offer` INT(11) DEFAULT NULL,
`cid` CHAR(32) DEFAULT NULL,
`f_partner` VARCHAR(20) NOT NULL,
`fDate` DATE NOT NULL,
PRIMARY KEY (`sid`),
KEY `f_shop` (`f_shop`,`fDate`,`cid`),
KEY `f_partner` (`f_partner`,`fDate`),
KEY `fDate` (`fDate`,`cid`),
KEY `fDate_2` (`fDate`,`f_shop`),
KEY `fDate_3` (`fDate`,`f_shop`,`f_partner`),
KEY `fDate_4` (`fDate`,`f_partner`,`cid`),
KEY `fDate_5` (`fDate`,`f_affiliateId`)
);

Actually there are more than 5 million rows.
Please post table structure (SHOW CREATE TABLE forwarding ). Also tell us approximately how many rows do you have in that table? – Salman A Oct 21 '11 at 9:47
It would be nice if you accepted answers provided for your previous question. It's minimum courtesy you can do for people that helped you. – N.B. Oct 21 '11 at 10:19
4 Answers
Accepted answer
a clustered primary key
MySQL, NoSQL: help me to choose the right one! (on a )
60 million entries, select entries from a certain month. How to optimize database?
How to avoid "Using temporary" in many-to-many queries?
however your current clustered PK sid won't be much help, so try something along the lines of:

create table forwarding
(
f_date date not null,
f_shop int unsigned not null,
sid int unsigned not null, -- added for uniqueness
primary key (f_date, f_shop, sid) -- clustered primary key
);
hope this helps :)
Hello, the amount of data doesn't allow me to add a new primary key. – user954740 Oct 24 '11 at 7:12
I can't drop the existing primary key because it's an auto increment. – user954740 Oct 24 '11 at 8:20
I'm sure you can figure out the best composite key combination yourself given the links above?? Do you really need the auto_inc part? – f00 Oct 24 '11 at 9:44
You need a compound index
ALTER TABLE forwarding ADD INDEX shopdate (shop, fDate)
This index doesn't work; my existing index is used automatically. It means that my index is better for this query, but I don't think that 2 min is good.^^ – user954740 Oct 21 '11 at 11:28
@user954740, 2.4 million rows in 2 minutes sounds about right. You'll have to cache the cid and sid counts if you want faster results. Create a new table shopcount(shop, cidcount, sidcount). And update that with delete, update and insert triggers on the forwarding table. – Johan Oct 21 '11 at 11:33
Yes, good idea, but this data is too dynamic. I need the data for different times and with other different rules. – user954740 Oct 24 '11 at 7:29
Index the shop column, and one more thing you can implement here is partitioning by date; your query will run fast.
Partitioning by date is overkill; a compound index will do the trick. – Johan Oct 21 '11 at 9:49
Think futuristic: you will need to partition after the number of records doubles... Partition with date – Sashi Kant Oct 21 '11 at 9:53
I think you need a key for the forwardDate, since that is the only attribute used in the WHERE clause of your query.
EDIT As noted in other answers, a compound index on shop and forwardDate is the way to go. I missed the last part of the query due to the single line formatting.
Source: http://stackoverflow.com/questions/7847601/mysql-query-performance-index
I was wondering if there was a simple way to convert an array like this, which is passed from my form
[Uk3] => Array
[code] => BOARD
Into an array that can be used in a find, like this
[Uk3.code] => BOARD
Of course I wrote a loop to do it, but I would think there is a helpful Cake method to do it. Simple things like this can take an inordinate amount of time to figure out!
book.cakephp.org/view/1502/format – Dunhamzzz Nov 3 '11 at 15:34
1 Answer
Source: http://stackoverflow.com/questions/7997542/cakephp-how-to-create-find-conditions-from-multi-level-array
I find myself needing to put guards like this:
if hash[:foo] && hash[:foo][:bar] && hash[:foo][:bar][:baz]
puts hash[:foo][:bar][:baz]
I'd like to shorten this in some way; I know I can wrap in a begin/rescue block but that seems worse. Maybe something like: ruby Hash include another hash, deep check
duplicated: stackoverflow.com/questions/5429790, and many, many more. – tokland Nov 14 '11 at 17:45
2 Answers
Accepted answer
Something like:
def follow_hash(hash, path)
  path.inject(hash) { |accum, el| accum && accum[el] }
end

value = follow_hash(hash, [:foo, :bar, :baz])
puts value if value
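For reference, later Ruby versions make this traversal built-in: Hash#dig (Ruby 2.3+) stops and returns nil as soon as any key along the path is missing.

```ruby
# Hash#dig performs the same nil-safe traversal natively (Ruby 2.3+).
hash = { foo: { bar: { baz: 'value' } } }

puts hash.dig(:foo, :bar, :baz)               # => value
puts hash.dig(:foo, :missing, :baz).inspect   # => nil
```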
I like that. padding. – Dave Newton Nov 14 '11 at 17:20
I found this article very informative: http://avdi.org/devblog/2011/06/28/do-or-do-not-there-is-no-try/
value = Maybe(params)[:foo][:bar][:baz][:buz]
Source: http://stackoverflow.com/questions/8125227/ruby-hash-sub-hash-existance-check
I have a class extended from CCSprite that implements CCTargetedTouchDelegate like so:
@interface PianoKey : CCSprite <CCTargetedTouchDelegate> {
This has the following methods relating to the CCTouchDispatcher:
-(void) onEnter {
[super onEnter];
[[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self priority:INT_MIN+1 swallowsTouches:YES];
}
-(void) dealloc {
[[CCTouchDispatcher sharedDispatcher] removeDelegate:self];
[super dealloc];
}
And also has the standard methods CCTouchesBegan etc. The idea is to simulate multi-touch by having each piano key registered with the touch dispatcher.
This all works fine, except for when I change to a new scene. The touches for these piano keys are still being registered and will take priority over things like menu items etc in the new scene. So it appears the sprites are not being removed from the CCTouchDispatcher...
Any help is greatly received!
Would I be right in thinking that I could create a class the inherits from CCLayer and contains a CCSprite, the layer matching the sprites dimensions? that might be easier... – Alex Nov 24 '11 at 14:57
1 Answer
Accepted answer
If I recall correctly, [CCTouchDispatcher sharedDispatcher] retains its delegate, so your dealloc is never called. You have to call [[CCTouchDispatcher sharedDispatcher] removeDelegate:self] elsewhere; doing so, your sprite will be deallocated correctly.
Usually delegates are defined as assign; this unusual behaviour should be better documented.
-(void) cleanup {} is the method where such retained delegates should be removed. Agree that this should be documented, normally delegates are not retained but here they are because they're added to an NSMutableArray. – LearnCocos2D Nov 24 '11 at 17:25
Thanks both of you, -(void)cleanup{} works perfectly. – Alex Nov 24 '11 at 17:43
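Pulling the thread's resolution together, a sketch of the fix in Objective-C (the calls are taken from the question's dealloc and the comments above, which name cleanup as the right place):

```objc
// Remove the delegate in cleanup rather than dealloc: the dispatcher's
// retain on the delegate means dealloc would never run otherwise.
// Cocos2d calls cleanup when the node is removed from the running scene.
- (void)cleanup {
    [[CCTouchDispatcher sharedDispatcher] removeDelegate:self];
    [super cleanup];
}
```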
Source: http://stackoverflow.com/questions/8258802/remove-touch-from-ccsprite-with-cctouchdispatcher
By default the toolbar buttons in PyQt are aligned to the left; is it possible to center them so that they slide along when resizing?
share|improve this question
add comment
1 Answer
Accepted answer
I am not sure I understand correctly, but if you are looking for a way to center buttons on the toolbar with respect to the QMainWindow, then yes, there is a (hackish) way. You just need to put in a widget that acts like a 'spacer'. That is basically a QWidget with an expanding size policy.
Here is a minimal example:
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
main = QtGui.QMainWindow()
toolbar = QtGui.QToolBar()

# spacer widget for left
left_spacer = QtGui.QWidget()
left_spacer.setSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)
# spacer widget for right
# you can't add the same widget to both left and right. you need two different widgets.
right_spacer = QtGui.QWidget()
right_spacer.setSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)

# here goes the left one
toolbar.addWidget(left_spacer)
# some dummy actions
toolbar.addAction('Dummy 1')
toolbar.addAction('Dummy 2')
# and the right one
toolbar.addWidget(right_spacer)

main.addToolBar(toolbar)
main.show()
sys.exit(app.exec_())
Which gives you this:
(screenshot: the toolbar with its buttons centered)
Awesomeballs, thanks! – FLX Dec 17 '11 at 16:42
Source: http://stackoverflow.com/questions/8537474/pyqt-center-toolbar-buttons
I am about to release the first version of my Android app.
I was thinking about using Android's licensing service (LVL) for my app, but now I am not sure whether it would be better not to use any licensing service at all.
• a) LVL can be cracked anyhow
• b) LVL causes some delay of my app
What do you guys think?! Do you have any experience with using / not using LVL? Do I have any alternatives?!
1 Answer
a) LVL can be cracked anyhow
LVL is definitely a piece of crap in front of a sophisticated pirate/hacker; however, it does provide some basic protection that keeps your paid application from being shared and used by normal users. Without LVL, your paid application is completely naked, and anyone who can get your apk file (for example, with a rooted device) can share it online so other people can download, install and use it for free.
b) LVL causes some delay of my app
IMO the best practice is to check the licensing details only once, at the very first time your application is downloaded and installed from somewhere and opened on the device. There are many SO questions here discussing how to run some code only once when an application starts for the first time.
Do I have any alternatives
I have seen some people implement their own protection strategy and publish the app on their own website, which probably increases the complexity of cracking it. Personally I don't think this helps much. How soon your application will be cracked is totally determined by how popular your application is; there are many applications created by the most professional companies, and for most of them you can easily google a cracked version online. I know there are some communities/forums in China that created their own version of the market app and have groups of people who pirate popular applications (newly added or upgraded) daily and publish the cracked versions in their own fake market app. If you buy a Samsung Galaxy Tab in China, it comes with this kind of fake market app, and you can download the most popular apps/games from it. So face the facts.
Source: http://stackoverflow.com/questions/9404383/android-lvl-reasonable/9405447
I am using Struts2, and I am having trouble testing a String for null or empty. The String is in a loop.
What I have done so far is
In the Action class I have a List<User>. User has id and name fields and has getters and setters...
In the JSP I am doing this:
<s:iterator value="userList" var="user" status="userStatus">
<s:if test="%{user.name != null && user.name != ''}">
<!-- Do some thing... -->
The problem is that it's not working :( I cannot see the names, yet they are visible if I remove the <s:if> block.
1 Answer
Accepted answer
Try with this
<s:if test="%{#user.name != null && #user.name != ''}">
    <s:property value="#user.name"/>
</s:if>
Thanks! It worked. I am also confused: checking a String in Java works with equals(); is that the same as != or == in a Struts tag in JSP? – Talha Ahmed Khan Feb 24 '12 at 7:25
Source: http://stackoverflow.com/questions/9426609/struts2-jsp-test-string-for-null-and-empty-in-iteration
Forgive me for asking something that is probably explained elsewhere, but I am having trouble designing a data model in Cassandra.
I am storing transactions. These transactions each have a source (user), a timestamp, and some associated keywords. I need to be able to find transactions given the source and a date range and (optional) keywords. Cassandra is attractive because I need to store billions of transactions.
I have been unable to find a resource that explains how to do this type of thing. My initial thoughts involve having a few CFs: a transaction CF, a keyword_transaction CF, a source_transaction CF, and possibly a day_transaction CF (or something similar). This would make it very straightforward to find transactions based on any one of the above items, but it doesn't seem like it will let me search on all of the above items.
Any thoughts?
1 Answer
Start by thinking about your queries and then design your data model. Read here and here, as these help when you plan your data model.
cf : transactions
rowkey : source/uuid (suggestion)
cn : source
cv : UTF8
cn : keyword
cv : UTF8
cn : date
cv : DateType
cn : time
cv : DateType
cf : keywords
rowkey : keyword
cn : source
cv : UTF8
Here you have a standard column family called transactions with a few column names (cn) and their corresponding column values (cv). Each of these transactions is identified by the rowkey. Another standard column family is keywords, where the rowkey is the keyword.

You can search by source, timestamp or keyword, but you do need to index them for the query to work. For example, with the suggested data structure above, you can do these:
• get all transaction where source is equal to ''
get transactions where source = ''
• get all transaction where source is equal to '' and your date > ''
get transactions where source = '' and date > '';
• get all transaction for date x
get transactions where date = '';
• get all source name based on keyword
get keywords['keyword'];
| http://stackoverflow.com/questions/9692143/cassandra-data-model-approach?answertab=oldest | dclm-gs1-045800000 |
0.029605 | <urn:uuid:7c18bb28-2f4d-4368-a670-3bc97e2ea1a3> | en | 0.904367 |
I have a client that already generates an MSI file for each of their web applications. They want an InstallShield wrapper installation that will allow a user to select which web application(s) they want, and have InstallShield put the appropriate MSI file(s) on the user's machine and then execute each MSI file. I have seen posts about running 3rd-party MSI files that are prereqs, but not this situation. Is this even possible?
3 Answers
It's not possible the way you request it. There's a mutex that prevents an MSI from installing another MSI. For InstallShield 2012, you should look at "Suite" projects.
Convinced client that they did not need to build an .msi and could configure folders and files that would be installed directly in IS via Components – Michael Hayes Apr 2 '12 at 20:08
@Christopher Painter Sorry for Hijacking this, but is there any way to do it without suite? I only have professional, but need to run my installation, then run another .msi (that installs something my program relies on) – Andy Nov 2 '12 at 11:32
Take a look at the doco on setup prerequisites and feature prerequisites. You'll have to build each of your features as an MSI and then create PRQ files for each of them. Then create a parent MSI that's driven by setup.exe and consumes those MSIs as feature prereqs. The feature selection in the MSI will cause setup.exe to run the correct MSIs. – Christopher Painter Nov 2 '12 at 12:01
I met the same problem. The solution I chose was to put a rectangle over the control, as in the following line of code, to disable any input in the date string.
<Rectangle Fill="Transparent" Grid.Row="3" Grid.Column="1" Margin="0,0,15,0"/>
share|improve this answer
add comment
I did a custom action and launched the MSI, so the install waits for the other MSI to complete; once done, it will run the usual install.

Also ensure that you have scheduled the custom action in the UI sequence. Two MSIs cannot run simultaneously in the execute sequence.
Custom actions scheduled in the UI sequence 1) won't get run during silent installs and 2) won't always run elevated. This is not a proper solution. – Christopher Painter Mar 29 '12 at 11:06
The user didn't mention it should be run as a silent install. If so, I would have looked for another solution; if it is not a silent install, this will be a solution. – anand Mar 30 '12 at 4:54
There are a whole host of best practices that a user will never know to ask for but that's not an excuse for designing a poor solution. – Christopher Painter Mar 30 '12 at 10:29
| http://stackoverflow.com/questions/9914578/installshield-2012-have-installscript-msi-project-execute-a-msi-file-that-it-ju | dclm-gs1-045820000 |
0.024016 | <urn:uuid:1fd16de5-4336-42e7-8ccc-1c4eb99d373b> | en | 0.941824 |
I'm looking to use XML in InDesign that comes from a live URL, as it's constantly updating from a database.
I know I can download the XML and import it from the desktop but does anyone know how to pull it straight from the XML page online at a URL?
Any advice appreciated! I'm running CS5.
1 Answer
I wish! I've tried and this was impossible as of CS3, and I'm doubtful about CS5. Seems like it would be an easy feat for a plugin to accomplish.
I remember starting a discussion topic similar to this on Adobe forums long ago, because this would be great for variable data generated with a PHP script.
What pains me is in the back of my mind, I've done it successfully before! I remember the delay where it was summoned from the URL. I question my sanity now. – Cordial Aug 8 '11 at 9:35
| http://superuser.com/questions/319590/use-xml-data-from-an-online-url-as-data-source-indesign?answertab=oldest | dclm-gs1-045850000 |
0.50147 | <urn:uuid:d42eef10-b339-4a74-be9c-4b88fe72aa47> | en | 0.897165 |
Is there a program that can split an MP3 (and/or other music file types) into single songs, using an iTunes playlist file (and/or other music playlist files) to determine the beginning and ending of each song, all by itself?

Is it possible that maybe iTunes itself can do it? I heard of a feature that can split songs, but is it possible to do it as I suggested above?
I know cuetools does this with a cue file, but I have no idea what format the iTunes playlist file is – Journeyman Geek Dec 5 '12 at 1:21
@JourneymanGeek .cue-files are perfect. But cuetools seems like a whole bunch of different programs. What exactly do I need? – Marco7757 Dec 5 '12 at 20:32
Let me post that as an answer – Journeyman Geek Dec 6 '12 at 0:35
add comment
3 Answers
I'm running J. River Media Center, and the purchased version allows for the splitting and combining of songs. I don't know if the free version offers this mode, nor if it will do so based on your deterministic criteria. Check it out; if nothing else, it's a great multimedia manager, and it provides the most versatile, feature-rich tool palette I've found anywhere.
Cuetools is a pretty efficient ripper, which happens to be great for converting a single file into many. You want to run cuetools.exe
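If you prefer scripting to a GUI, the same split can be sketched in a few lines of Python: parse the INDEX 01 entries from a cue sheet (timestamps are MM:SS:FF, where FF is frames at 75 per second) and turn them into lossless stream-copy ffmpeg commands. This is an illustrative sketch, not part of CUETools, and it assumes ffmpeg is installed:

```python
import re

def parse_cue_starts(cue_text):
    """Track start times in seconds from a cue sheet's INDEX 01 lines."""
    starts = []
    for m in re.finditer(r"INDEX 01 (\d+):(\d{2}):(\d{2})", cue_text):
        mm, ss, ff = (int(g) for g in m.groups())
        starts.append(mm * 60 + ss + ff / 75.0)  # FF = frames, 75 per second
    return starts

def ffmpeg_split_commands(source, starts, total_seconds):
    """Build one stream-copy ffmpeg command per track (built, not executed)."""
    ends = starts[1:] + [total_seconds]
    return [
        ["ffmpeg", "-i", source, "-ss", str(a), "-to", str(b),
         "-c", "copy", f"track{i:02d}.mp3"]
        for i, (a, b) in enumerate(zip(starts, ends), start=1)
    ]

cue = """TRACK 01 AUDIO
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    INDEX 01 03:25:30
"""
print(parse_cue_starts(cue))  # second track starts at 3*60 + 25 + 30/75 s
```

Each generated command copies the audio stream without re-encoding; if the printed commands look right, they can be run with subprocess.run.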
Thank you very much! However, when I choose lossy (mp3), it says "Exception: Unsupported audio type [PATHTOFILE].mp3" ... – Marco7757 Dec 18 '12 at 20:49
That's odd. The cue files and the MP3s are in the same folder, right? – Journeyman Geek Dec 18 '12 at 23:44
Yes, they are. I changed the output folder though. But this shouldn't be a problem, am I right? – Marco7757 Dec 19 '12 at 15:58
does the output folder exist? – Journeyman Geek Dec 19 '12 at 23:14
no ... I assumed it would create one ... seriously? EDIT: I just tried, same error! – Marco7757 Dec 20 '12 at 17:19
FWIW, there's a more general MP3 splitting question already answered: Is there an easy way to split mp3 files?
| http://superuser.com/questions/514788/split-long-music-file-with-help-of-playlist-file | dclm-gs1-045890000 |
0.464477 | <urn:uuid:fdbf2f02-5d5d-47a2-af57-3f9898f03c75> | en | 0.881346 | The Future of Conflict
Covert ops, drone warfare and a new hot zone: The Agenda examines how future military conflict might look, and where it may be fought.
Douglas Macgregor: Special Ops
The war in Iraq is over and the war in Afghanistan is winding down, but covert military operations - the bin Laden raid, and hundreds of drone strikes - are on the rise. Is this the future of conflict? US Army Colonel Douglas Macgregor tells Steve Paikin why covert ops are the US military's best option.
Retired Colonel, U.S. Army
Peter W. Singer: Drone Warfare
War by remote control: Brookings Institution senior fellow Peter W. Singer tells Steve Paikin about the increased use of unmanned military systems and the future of drone warfare in the United States.
Director, 21st Century Defense Initiative, Brookings Institution
Patrick Cronin: The South China Sea
The South China Sea is the future hot-point of conflict in the world. As India and China focus on increasing their naval capabilities, how is the United States adapting their strategy? The Center for a New American Security's Patrick Cronin sits down with Steve Paikin.
| http://theagenda.tvo.org/episode/173363/the-future-of-conflict | dclm-gs1-045940000 |
0.032037 | <urn:uuid:f320815c-81b6-4c3f-99ab-4301725b2edb> | en | 0.949673 | In response to:
What's Worse Than Horse Slaughter?
mbowen300 Wrote: Apr 06, 2013 9:37 PM
I have been around the world and have eaten many things, yes, including the horse you rode in on. Sorry, could not help myself. If you don't use them for food and other things, they will die from neglect or be abandoned. If you just have a vet kill them, they have to be buried somewhere, and if you just dig a hole and put them in, you will have an even bigger problem. I think horse owners take very good care of their horses, and they live far longer than in the wild. So they have had a good life, and now it's over. What happens after they die, I don't think the horse cares.
Likewise, a change that may look harmful can serve benevolent purposes. Take the law just signed in Oklahoma to legalize the slaughter of horses for food. Counterintuitive though it may be, it will probably work to the ultimate benefit of horses.
Starting in 2006, Congress tried to end the killing of horses in... | http://townhall.com/social/mbowen300-581666/whats-worse-than-horse-slaughter-n1559247_cmt_6681767 | dclm-gs1-045960000 |
0.520107 | <urn:uuid:a72883f3-f95a-4217-b440-7bf3f87ee55e> | en | 0.943676 | From Uncyclopedia, the content-free encyclopedia
This picture of the August 11, 1999 solar eclipse was one of the last ever taken from the Mir space station. NASA image.
An eclipse is an astronomical event that occurs when one celestial object moves into the shadow of another. The term is derived from the ancient Greek noun ἔκλειψις, "I cease to exist," a combination of prefix ἐκ, from preposition ἐκ, ex, "I ate your dog," and of verb λείπω, "where's my popcorn"
The term eclipse is most often used to describe either a solar eclipse, when the Moon's shadow crosses the Earth's surface, or a lunar eclipse, when the Moon moves into the shadow of Earth. However, it can also refer to such events beyond the Earth-Moon system: for example, a planet moving into the shadow cast by one of its moons, a moon passing into the shadow cast by its parent planet, or a moon passing into the shadow of another moon. A binary star system can also produce eclipses if the plane of their orbit intersects the position of the observer.
Syzygy
This article is based on the Désencyclopédian text Éclipse, made freely available to French-speaking wildebeest gnus under the GFDL.
A syzygy is the alignment of three or more celestial bodies in the same gravitational system along a straight line. The word is usually used in context with the Sun, Earth, and the Moon or a planet, where the latter is in conjunction or opposition. Solar and lunar eclipses occur at times of syzygy, as do Astronomical transits and occultations.
An eclipse occurs when there is a syzygy between a star and two celestial bodies, such as a planet and a moon. The shadow cast by the object closest to the star intersects the more distant body, lowering the amount of luminosity reaching the latter's surface. The region of shadow cast by the occulting body is divided into an umbra, where the radiation from the star's radiation-emitting photosphere is completely blocked, and a penumbra, where only a portion of the radiation is blocked
A total eclipse will occur when the observer is located within the umbra of the occulting object. Totality occurs at the point of maximum phase during a total eclipse, when the occulted object is completely covered. When the star and a smaller occulting object are nearly spherical, the umbra forms a cone-shaped region of shadow in space.
Beyond the end of the umbra is a region called the antumbra, where a planet or moon will be seen transiting across the star but not completely covering it. For an observer inside the antumbra of a solar eclipse, for example, the Moon appears smaller than the Sun, resulting in an annular eclipse. The remaining volume of shadowed space, where only a fraction of the occulting object overlaps the star, is called the penumbra. An eclipse that does not reach totality, such as when the observer is in the penumbra, is called a partial eclipse.
For spherical bodies, when the occluding object is smaller than the star, the length (L) of the umbra's cone-shaped shadow is given by:

L = (r × Ro) / (Rs - Ro)

where Rs is the radius of the star, Ro is the occulting object's radius, and r is the distance from the star to the occulting object. For Earth, on average L is equal to 1.384×10^6 km, which is much larger than the Moon's semimajor axis of 3.844×10^5 km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth, producing a faint, ruddy illumination of the Moon even at totality.
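As a quick numerical sanity check of the umbra-length relation L = r × Ro / (Rs - Ro), the figures quoted above can be reproduced with rounded reference values for the Sun and Earth (a sketch; the constants below are standard rounded values, not taken from this article):

```python
# Umbral cone length for a sphere of radius r_o at distance r from a
# star of radius r_s:  L = r * r_o / (r_s - r_o)
def umbra_length(r_s, r_o, r):
    return r * r_o / (r_s - r_o)

R_SUN = 6.957e5     # km, solar radius (rounded)
R_EARTH = 6.371e3   # km, Earth radius (rounded)
AU = 1.496e8        # km, mean Earth-Sun distance (rounded)
MOON_SMA = 3.844e5  # km, Moon's semimajor axis

L = umbra_length(R_SUN, R_EARTH, AU)
print(f"Earth's umbral cone is about {L:.3e} km long")  # ~1.38e6 km
print(L > MOON_SMA)  # True: the umbra can envelop the Moon
```

The result agrees with the ~1.384×10^6 km quoted above to within the rounding of the input constants.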
An astronomical transit is also a type of syzygy, but is used to describe the situation where the nearer object is considerably smaller in apparent size than the more distant object. Likewise, an occultation is a syzygy where the apparent size of the nearer object appears much larger than the distant object, and the distant object becomes completely hidden during the event.
An eclipse cycle takes place when a series of eclipses are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the Saros cycle, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. However, because this cycle is not a whole number of days, each successive eclipse is viewed from a different part of the world.
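The arithmetic behind the Saros interval is simple enough to sketch (values rounded; the fractional day left over is why each successive eclipse in a series is seen from a different longitude):

```python
SAROS_DAYS = 6585.3           # one Saros, as quoted above

years = SAROS_DAYS / 365.25   # length of the cycle in Julian years
leftover = SAROS_DAYS % 1.0   # fractional day left over (~0.3 day)
shift_deg = leftover * 360.0  # Earth's rotation during that leftover

print(f"{years:.2f} years; the next eclipse in the series lands "
      f"~{shift_deg:.0f} degrees further west")
```

With the commonly quoted figure of 6585⅓ days the shift works out to about 120 degrees; the ~108 degrees above simply follows the 6,585.3-day value used in this article.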
edit Earth-Moon System
An eclipse involving the Sun, Earth and Moon can occur only when they are nearly in a straight line, allowing the shadow cast by the Sun to fall upon the eclipsed body. Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year, and eclipses can occur during a period of about two months around these times. There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as the Saros cycle.
Solar eclipse
Totality during the 1999 solar eclipse. Solar prominences can be seen along the limb (in red) as well as extensive coronal filaments.
An eclipse of the Sun by the Moon is termed a solar eclipse. Records of solar eclipses have been kept since ancient times. A Syrian clay tablet records a solar eclipse on March 5, 1223 BCE, while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 BCE. Chinese historical records of solar eclipses date back over 4,000 years and have been used to measure changes in the Earth's rate of spin. Eclipse dates can also be used for chronological dating of historical records.
The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occluded, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra.
The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun.
Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth.
During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun when viewed from the Earth. A solar eclipse is actually a misnomer; the phenomenon is more correctly described as an occultation of the Sun by the Moon or an eclipse of the Earth by the Moon.
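The near-match of apparent sizes can be checked with rounded mean values for the Moon and Sun (a sketch; both discs subtend roughly half a degree, which is why the Moon can just cover the Sun):

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    # Full angular diameter of a sphere seen from a given distance.
    return 2.0 * math.degrees(math.atan(radius_km / distance_km))

moon = angular_diameter_deg(1.7374e3, 3.844e5)  # Moon radius / distance (rounded)
sun = angular_diameter_deg(6.957e5, 1.496e8)    # Sun radius / distance (rounded)

print(f"Moon: {moon:.3f} deg, Sun: {sun:.3f} deg")  # both ~0.5 deg
```

Because the Moon's orbit is elliptical, its apparent size varies by a few percent either side of this mean, which is what decides between total and annular eclipses.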
Lunar eclipse
The progression of a lunar eclipse. Totality is shown with the last two images to lower right. These required a longer exposure time to make the details visible.
Lunar eclipses occur when the Moon passes through the Earth's shadow. Since this occurs only when the Moon is on the far side of the Earth from the Sun, lunar eclipses only occur when there is a full moon. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse also lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour.
There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon circles entirely within the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. Sunlight refracted through the Earth's atmosphere intersects the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue.
Other planets
Phobos transits Sun, as seen by Mars Rover Opportunity
Saturn eclipses the Sun as seen from the Cassini–Huygens space probe
Eclipses are impossible on Mercury and Venus, which have no moons. However, both have been observed to transit across the face of the Sun. There are on average 13 transits of Mercury each century. Transits of Venus occur in pairs separated by an interval of eight years, but each pair of events happens less than once a century.
On Mars, only partial solar eclipses are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit.
The gas giant planets (Jupiter, Saturn, Uranus, and Neptune) have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops.
The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light.
On the other three gas giants, eclipses only occur at certain periods during the planet's orbit, due to their higher inclination between the orbits of the moon and the orbital plane of the planet. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane. But Saturn has an axial tilt of nearly 27°. The orbital plane of Titan only crosses the line of sight to the Sun at two points along Saturn's orbit. As the orbital period of Saturn is 29.7 years, an eclipse is only possible about every 15 years.
The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer because every hour of difference corresponded to 15° around the Earth's equator. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France.
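The longitude technique reduces to simple arithmetic, since 24 hours of rotation corresponds to 360 degrees, i.e. 15 degrees per hour. A minimal sketch (the observation times below are invented for illustration, not historical records):

```python
DEG_PER_HOUR = 360.0 / 24.0  # 15 degrees of longitude per hour

def longitude_from_time_diff(local_hour, reference_hour):
    # Positive result: the observer is east of the reference meridian.
    return (local_hour - reference_hour) * DEG_PER_HOUR

# Invented example: an eclipse of a Jovian moon predicted for 22:00 at the
# reference meridian is observed at 23:30 local time.
print(longitude_from_time_diff(23.5, 22.0))  # 22.5 (degrees east)
```

In practice the accuracy was limited by how precisely the local time of the eclipse could be clocked, which is why the method worked far better on land than at sea.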
Pluto, with its proportionately large moon Charon, is also the site of many eclipses. A series of such mutual eclipses occurred between 1985 and 1990. These daily events led to the first accurate measurements of the physical parameters of both objects.
Eclipsing binaries
A binary star system consists of two stars that orbit around their common center of mass. The movements of both stars lie on a common orbital plane in space. When this plane is very closely aligned with the location of an observer, the stars can be seen to pass in front of each other. The result is a type of extrinsic variable star system called an eclipsing binary.
The maximum luminosity of an eclipsing binary system is equal to the sum of the luminosity contributions from the individual stars. When one star passes in front of the other, the luminosity of the system is seen to decrease. The luminosity returns to normal once the two stars are no longer in alignment.
The first eclipsing binary star system to be discovered was Algol, a star system in the constellation Perseus. Normally this star system has a visual magnitude of 2.1. However, every 2.867 days the magnitude decreases to 3.4 for more than 9 hours. This is caused by the passage of the dimmer member of the pair in front of the brighter star. The concept that an eclipsing body caused these luminosity variations was introduced by John Goodricke in 1783.
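Because the magnitude scale is logarithmic, the dip quoted for Algol corresponds to a substantial drop in received flux; a quick check using the standard relation flux ratio = 10^(0.4 × Δm):

```python
def flux_ratio(m_bright, m_faint):
    # One magnitude step = a factor of 10**0.4 (~2.512) in flux.
    return 10.0 ** (0.4 * (m_faint - m_bright))

ratio = flux_ratio(2.1, 3.4)  # Algol outside eclipse vs. at minimum
print(f"Algol dims by a factor of about {ratio:.2f} during primary eclipse")
```

A drop by a factor of roughly three is easily visible to the naked eye, which is how Algol's variability was noticed long before photometry existed.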
0.051747 | <urn:uuid:3ad2ba23-b82e-4a1c-b405-9e260fac8221> | en | 0.981167 | UnNews:Maria speaks to us
From Uncyclopedia, the content-free encyclopedia
(Difference between revisions)
Revision as of 18:08, December 31, 2010
19 December 2010
Maria cried during parts of the interview.
NAZARETH, Texas -- Whether one considers himself Pro-Life, Pro-Choice, or somewhere in the middle of the debate, most regard abortion as a tragedy. Studies show that half of all pregnancies are unwanted, with the average abortion rate for the United States being around nineteen out of a thousand pregnancies. While abortion rates overall are in decline, there are sharp increases among the poverty-stricken and minorities, unmarried women, and women in their 20s or younger.[1]
Maria Sánchez, a maid who works at The Hotel Guadalupe downtown, has fallen into all of these categories. Ms. Sánchez confesses to having an abortion as a teenager, and has agreed to share her experience.
What made you want to tell your story?
Oh, you know, sweetie, there's a lot of girls out there goin' through this. I want them to know they're not alone.
Tell us about your background.
Well, I grew up in an average Hispanic family. We ate tacos all the time, spoke bad English, danced around hats, and had gringos help us sneak family into the country every Sunday after Mass. I was the littlest of several brothers and sisters. We were pretty piss-poor. Mama had us all home-schooled 'cause we were hiding from the government.
When I was 18, I dated a man named José for awhile. My family thought of him as our meal ticket. We talked about getting married, but we never decided on it.
How did you get pregnant?
It happened when I was visiting my cousin Elisa. She took me to a fiesta one night, where I met a guy named Gabriel. He was, uh, how you say, a real smooth talker, with the body of a god. Long story short, a few weeks later I was pregnant.
Funny thing is, I was so drunk I don't even remember having sex with him, though I do remember it being heavenly bliss.
When did you realize you were pregnant?
When I missed my period. I took a pregnant test and it came back pregnant. My entire life came crashing down before my eyes when I saw that little cross. I could see Mama and Papa disowning me, José leaving me, being hated by the community.
I told Elisa and she suggested abortion, but being raised a strict Roman Catholic, the very idea was a no-no.
When did you decide that abortion was the answer?
I knew I could never raise a child. I wanted to put it up for adoption, but that costed more money than I had. I even thought about leaving it on a church doorstep, but I was afraid they would be too busy to love my child.
It was at this point that she started sobbing. I offered a bite of my grilled cheese sandwich to console her, but she refused. I handed her my napkin to wipe her eyes with. A few minutes later, she was ready to continue.
It was a busy day at the abortion clinic.
Tell us about the pregnancy termination procedure.
It was a cold winter night. I snuck out in the middle of the night and went to the nearest Planned Parenthood by myself. It took so long to get there, my ass was give out.
The waiting room inside was very cold, too. I remember the tv reception was great. I would call it immaculate. Pretty soon a nurse came out and told me it was a busy day and there was no more rooms. I begged her, "Oh, pleeease, señora", and she said I could have it done in the back alley.
A few minutes later a doctor named Herald Finklestein came out. I told him I was scared, and he told me, "Don't worry my shiksa, I'm a balmalocha." I remember seeing the most beautiful star shining in the sky, and feeling comforted by this. Too bad it faded away so fast. I held my breath and waited for it all to be over.
"Mazel Tov!" - Dr. Finklestein
How did you feel after you went through with it?
I felt a little relieved, but did I ever feel that Catholic guilt. When I got home, I got really stoned. When I went to sleep, I had a terrible nightmare where I was having the child, but a red dragon with seven heads tried to eat it. Creepy as shit.
Since then, I've been more at peace. I figure it was probably for the best. I could never have given the child all its needs. I mean, it's not like a bunch of food would appear by magic!
What were the long-term health effects?
I mostly had cramps and was sick to my stomach for a few weeks. My breasts were so tender and swollen. Soon, my period came back. It was the worst period of my life! Everyone started calling me "Bloody Maria."
Did you ever tell anyone?
Yeah, I told my brothers and sisters. I so wanted to tell Mama, but it would've broken her heart.
Did you ever see Gabriel again?
No, I never saw the beautiful bastard again. Once in a while, I meet someone who says they slept with him, or his brother Michael. Man-whores, both of them!
Have you had any children since then?
I'm Mexican.
Oh, right. So you are.
There was one question left on my notepad. I was hesitant to ask it, yet I worked up enough courage to proceed.
Finally, do you have any regrets?
My biggest regret was being too scared to ask friends and family for help. They love me no matter what. I should've know this. My other regret was not being a little more responsible as a young girl. My loved ones think I'm perfect, but I fuck up like e'ryone else, you know? I tell my kids the only perfect Person is God.
Thank you for your time, Ms. Sánchez. Peace be upon you.
¡Gracias, señor!
Once the interview was finished, I drove home with a wide spectrum of thoughts and emotions raging in my mind. Then an overwhelming sadness came over me when I remembered my own mother was pregnant with me as a teenager. How she struggled to raise me on food stamps. I didn't think it could happen, but I started crying myself. I sat in the parking lot waiting for the tears to dry up before going into the UnNews building, whose strong aroma of mildew, unfortunately, made them water up again.
This article features first-hand journalism by an UnNews correspondent.
1. Facts on Induced Abortion in the United States, Guttmacher Institute
Featured Article. Featured version: 1 January 2011
0.154882 | <urn:uuid:7caaea85-bf84-4f42-8d62-0e20c92a7a53> | en | 0.876119 |
I am watching some webcams live from the Internet, showing some big cities around the world. I'd like to record some to show to my friends at home.
Is it possible?
1 Answer
The easiest solution that comes to my mind is recording that part of the screen itself using screencasting software like Kazam, RecordMyDesktop, or similar.

You can install Kazam on Debian-based distros with sudo apt-get install kazam.
As you can see it has an Area selection option. So, you can just select the area of the window and record it.
| http://unix.stackexchange.com/questions/53300/how-to-record-a-flash-movie-from-a-live-cam-on-the-internet | dclm-gs1-046040000 |
0.01931 | <urn:uuid:951c7701-57bd-4ab9-804b-8ea357db6823> | en | 0.960275 | Animation Picture Co. acquires E.L. Katz's original script
John Davis’ The Animation Picture Company has acquired E.L. Katz’s original script “Zombie Pet Shop” to be developed as a 3D animated feature film.
“Zombie Pet Shop” concerns a mysterious plague that sweeps through a mall pet shop, turning all of the animals into zombies. An ordinary pug named Joey must come to the rescue to find the antidote that will save his four-legged friends, as well as the humans that might take them home.
Davis is producing the project along with Brian Manis, Ash Shah, Dan Chuba and Mark Dippe.
Jeremy Platt, who developed the project with Katz, is co-producing.
“Zombie Pet Shop” marks Katz’s first foray into animation, as the scribe is best known for his genre fare. Katz recently had a hand in writing Lionsgate’s “Dibbuk Box,” and he has the GreeneStreet/Peter Block project “Dark Corners” going into production later this year.
Katz is represented by Gersh and Generate.
0.082033 | <urn:uuid:6c868638-6281-4368-b316-1b923636645f> | en | 0.952056 | Now Playing
Once On Death Row, He Now Fights To Defeat The Death Penalty
Kirk Bloodsworth was the first person in the U.S. to be exonerated by DNA evidence after receiving the death sentence. Convicted in Maryland, Bloodsworth is now one of the strongest advocates of abolishing the death penalty in the state. | http://wamu.org/audio-player?nid=83305 | dclm-gs1-046090000 |
0.835699 | <urn:uuid:20bc8bc0-e444-493d-ba0b-953d28068e1b> | en | 0.815658 | How Effective Is Osteopathy?
Osteopathy is an effective way of reducing pain. It is a natural way of treating a wide range of health problems. The process of osteopathy tries to eliminate the problem and prevent it from ever occurring again. | http://www.ask.com/question/how-effective-is-osteopathy | dclm-gs1-046270000 |
0.110443 | <urn:uuid:d032ccf1-79ed-43d8-a957-8c11d1b1016d> | en | 0.965289 | How Many Kittens Will My Cat Give Birth to?
The exact number of kittens that your cat can give birth to varies from just one up to eight. Cats can have up to three litters per year. However, there have been extreme exceptions, with litters of 14 kittens being recorded.
Q&A Related to "How Many Kittens Will My Cat Give Birth to"
Cats give birth in the same way as most placental mammals. Contractions of the uterus push the kitten down the vaginal canal and out of the mother's body. For more information look
Well, it's simple enough! A cat of mine, Echoclaw, had kits. There is not much you can actually do, but you need to stay well away from her when she is having them or she may feel
As labor nears, many female cats, also called queens, begin sleeping. Cats generally sleep a lot; however, a Queen nearing her labor will sleep more than usual. Often, you will notice
The biggest litter ever recorded for a cat was 19 kittens, although 4 were
Explore this Topic
Cats have the ability to give birth to kittens alone. This ability is instinctive. However, giving birth is still a hard process for a cat and there are things ...
It depends upon the individual cat. Typical numbers are from 5 to 7 per litter, although fewer or more are common, as well. ...
There are various symptoms that tell when a cat is due, such as, the cat panting around leaving her bed. This would show that she is now ready to give birth to ... | http://www.ask.com/question/how-many-kittens-will-my-cat-give-birth-to | dclm-gs1-046280000 |
0.024581 | <urn:uuid:b9db08c5-5290-4d3f-a197-c6275df2e549> | en | 0.948333 | How to Clean Water?
Depending on what kind of water you are using and for what purpose you will be using it for, you can either boil water or use a filtering system to clean it.
It is important that the water you drink is clean and free from bacteria and other contaminants as much as possible. The quality of the water that comes out of your tap is likely regulated by a government body and may not necessarily need further... More »
Q&A Related to "How to Clean Water?"
The best way to clean water impurities is to simply boil the water for about fifteen minutes. Make sure the pot you boil the water in is spotlessly clean.
Originally known as The Federal Water Pollution Control Act, it was enacted in 1948, and amended several times in subsequent years before it became federal law in 1972. States had
1. Turn off the valve connected to the tank's water lines. 2. Drain the water from the tank completely. 3. Clean dirt and debris from the inside walls of the tank with a high-pressure
1. Evaluate the level of damage to determine if the carpet is worth saving. Discard and replace the carpet if it is old, mildew and mold have been allowed to grow for several days
Explore this Topic
It is important to have clean water because it is considered to be healthy. Consumption of clean water that is properly filtered increases your strength since ...
Cleaning water softener resin is a solution for removing dirt and softening water. You can clean water softener resin by easing water pressure before cleaning ...
The Clean Water Act created the Federal Water Pollution Control Administration, ... | http://www.ask.com/question/how-to-clean-water | dclm-gs1-046290000 |
0.034063 | <urn:uuid:33812c89-6bb1-4fc0-936f-df5fae45d588> | en | 0.931038 | Where Can I Buy Sake?
You can buy sake online from the slurp website. Sake is a Japanese alcoholic beverage that is mainly made from rice. This alcoholic drink has a very sweet sugary taste and when literally translated, sake means 'alcoholic beverage'.
Q&A Related to "Where Can I Buy Sake?"
Sake is a Japanese alcoholic drink made from rice. All Japanese restaurants have Sake. The brewing of Sake is pretty much the same as brewing beer.
Sake (pronounced "sa-ki"), also known as "rice wine," is believed to have originated in Japan approximately around the time that rice-planting methods were introduced
what type of drink? a type of drink
1. Become familiar with the traditional vessels. Sake is served in a small, usually ceramic flask called a tokkuri. It's usually bulbous with a narrow neck, but there
Explore this Topic
You can try www.asiansnakewine.com. ChaCha on! ...
Habu Sake is a common name of the Habushu liqueur. According to Okinawan people, Habu Sake is a health drink and considered Okinawa's original Viagra. This wine ... | http://www.ask.com/question/where-can-i-buy-sake | dclm-gs1-046300000 |
0.165796 | <urn:uuid:8c168431-1767-4e42-8d91-1adbe743656e> | en | 0.942741 | Collectible Classic: 1968-73 Datsun 510
Sam Smith
There was a time when buying a Japanese car marked you as someone a little short on common sense. It was a time when anything made on that side of the Pacific was regarded as tinny and short-lived, a time when the best-selling car in America wasn't a Toyota. Detroit was still mired in its old-school ways, and few people really cared about fuel costs. The boxy Datsun 510 helped set the ball rolling to change all that, and it remains one of the most important Japanese cars ever built. It's no small coincidence that the 510 was also a flaming hoot to drive.
Early Japanese offerings in the United States were either too small, too full of sewing-machine engineering, too slow, or some painful combination of the three. This was largely due to still-isolated Japan's fuzzy picture of American driving. Yutaka Katayama, the enigmatic head of Datsun/Nissan America, played a key part in focusing that image. A nonconformist and a radical thinker, Katayama had become convinced over several years of living in the United States that the only way for Japanese automakers to succeed in America was for them to build cars tailored for the U.S. market. The 510 would be the first of that breed. By benchmarking the fast, solid, and fun BMW 1600, Katayama set ambitious goals: modern styling, durable engineering, and real-world speed. By building the 510 down to a price, he ensured its mass appeal. Katayama was successful--the diminutive sedan eventually sold more than half a million copies worldwide.
The first 510 rolled off the production line in late 1967. Initially available only in four-door form, the Teruo Uchino-designed car would later see showrooms as a two-door sedan, a four-door wagon, and a Japan-only two-door coupe. The suspension was independent and--like the BMW's--consisted of struts in front and semitrailing arms in the rear. (Wagons, however, had a live rear axle.) A gutsy 96 hp came from a vaguely agricultural but engaging 1.6-liter SOHC four-cylinder (with an additional two cylinders, a version of this engine would later power the iconic Datsun 240Z). Interior and exterior materials--although thin steel, cheap vinyl, and hard plastics--looked and felt relatively durable. Front disc brakes were standard.
The 510 may have been slightly slower and less refined than the world-beating BMW, but it also was a screaming deal; the Datsun's asking price was roughly $2000, while the 1600 cost about $3000. Two Trans-Am championships--by the famed John Morton/ Peter Brock pairing--over the BMW and Alfa Romeo competition proved the 510's inherent dynamic worth. For Katayama, the wins were just icing on the cake. By the time 510 production ended in 1973, the phrase "Japanese car" had ceased to be a punch line--and had become a reason to start watching the mirrors.
Good, solid drivers range from $2000 to $5000. Pristine survivors and concours-level cars can run upward of $10,000.
Two- or four-door sedan; four-door wagon. A two-door coupe was available only in Japan.
Slightly more than 550,000 cars worldwide. Most were four-door sedans.
Rust--everywhere. Also be wary of cars on which more money has been spent for go-fast parts than for proper maintenance.
The Stainless Steel Carrot: An Auto Racing Odyssey
by Silvia Wilkinson, Houghton Mifflin, out of print. Used copies available at www.amazon.com
The Dime Quarterly
DATSUNLAND SOCAL714-255-8097 www.datsunlandsocal.com
ROD'S DISCOUNT DATSUN PARTS408-448-3277 www.rodsdatsun.com
Because we're such huge John Morton fans: a '70 two-door sedan on period Revolution four-spoke wheels.
Read Related Articles | http://www.automobilemag.com/features/collectible_classic/0701_1968_73_datsun_510/ | dclm-gs1-046350000 |
0.098976 | <urn:uuid:775f845e-7954-4752-b590-fe6b6241a3bc> | en | 0.967957 | KNOX, Ind. The Starke County Sheriff’s Department arrested 76 people Saturday morning after officers raided an animal fighting contest in Knox, police say.
Just before 11 a.m., police responded to an anonymous tip that 50 people were participating in an animal fighting contest west of Knox, at 100 W, south of State Road 8.
When police arrived, multiple people ran and abandoned fighting cock paraphernalia and several vehicles parked in the area.
Police arrested over 76 suspects after running through muddy fields, ice-covered ditches and wooded areas. Officers found other suspects hiding in ditches and hollowed-out trees. One juvenile that was harbored by an adult was found without shoes. The juvenile was taken to Starke Memorial Hospital and treated for frostbite.
Police say they arrested a person who was described as a Hispanic male for organizing the event, and many others were arrested for promoting animal fighting and attending. Twelve of them are facing felonies for bringing the roosters to the fight. The other 64 are facing misdemeanors. It is a class A misdemeanor to attend an animal fighting contest and it is a Class D felony to promote the use of animals or attendance at an animal fighting contest.
As of Monday morning, all but one of the people arrested have bonded out.
Only the suspects facing felonies were held at the Starke County Jail. The others were released with citations and court dates.
A school bus was called to the scene to transport the suspects to the jail.
145 fighting cocks were seized and transported to the Starke County Humane Society for euthanization.
Officers also obtained and served a search warrant seizing 13 firearms, 29 vehicles, drug paraphernalia, and cock fighting paraphernalia.
Police also say vendors were at the event selling food and drinks.
Those arrested traveled from various states including Wisconsin, Illinois and Michigan.
In May, 2006, police raided a cock fight in the Grovertown area of Starke County. Police arrested 52 people and seized 60 roosters in that case. | http://www.baltimoresun.com/topic/wsbt-145-birds-seized-76-arrested-in-knox-cockfighting-raid-20110227,0,5645995.story | dclm-gs1-046380000 |
0.0182 | <urn:uuid:f2aeb9b9-c9bc-44ba-a17a-03d7fc2ca18e> | en | 0.971622 | 'Night Stalker' serial killer dies in California prison
Richard Ramirez in court in 1985 showing a pentagram on his palm Ramirez was arrested in 1985
US serial killer Richard Ramirez - known as the "Night Stalker" - has died in hospital in California.
Ramirez, 53, was on death row in San Quentin prison after being convicted in 1989 of 13 murders. Officials said he died of natural causes.
Ramirez terrorised Southern California in 1984-5 with a rampage of sexual assault and murder.
Satanic symbols were left at some of the murder scenes by the killer, who broke into victims' homes at night.
Ramirez was captured and beaten by residents in East Los Angeles in 1985 as he attempted to hijack a car.
He was recognised from a photo published in newspapers after police identified him as a suspect from a fingerprint.
Los Angeles prosecutor Alan Yochelson, who was involved in the case, said his death ended "a pretty tragic period in the history of Los Angeles County".
"Richard Ramirez hurt a lot of people and I think our thoughts should be with the next of kin and the survivors, because their lives were changed forever by this man."
Random slaughter
A drug addict and self-styled devil-worshipper, Ramirez mutilated the bodies of some of his victims.
They included an accountant, a lawyer, a mechanic and a church official. Some were children, others grandparents.
Most of the killings happened in the space of a few months in 1985. The random murder spree caused widespread fear, leading to a surge in sales of guns and locks for doors and windows.
Ramirez has also been linked to other murders for which he was never brought to trial.
After he was given the death sentence, he said: "Big deal. Death always went with the territory. See you in Disneyland."
Once in prison he attracted a number of female admirers. Some visited him and in 1996 he married freelance journalist Doreen Lioy at a visiting room in San Quentin jail.
A horror film based on his life - titled Night Stalker - was released in 2002.
California has not executed a prisoner on death row since 2006 because of a legal battle over how inmates are put to death.
| http://www.bbc.co.uk/news/world-us-canada-22820207 | dclm-gs1-046390000 |
0.114762 | <urn:uuid:04c72241-4471-4b01-867c-0d9b02bab780> | en | 0.874603 |
BBC Radio 3
Through the Night, 02/10/2012 QR code
What is this?
This code will link to the page for Through the Night, 02/10/2012 when read using a QR code reader.
You may save, print or share the image. | http://www.bbc.co.uk/programmes/b01n1rjq/qrcode | dclm-gs1-046400000 |
0.499892 | <urn:uuid:7d0bd5ce-d817-44e8-bf5c-8a0289e029be> | en | 0.848051 | Can Russia Make the Investment Grade?
You are being redirected!
This page is a legacy redirect to this article: Online Extra: Can Russia Make the Investment Grade?.
| http://www.businessweek.com/stories/2003-09-24/can-russia-make-the-investment-grade | dclm-gs1-046460000 |
0.024458 | <urn:uuid:d1c440ce-c15f-4bf7-ab23-30878dfab0af> | en | 0.972598 | The Worst Of The John Travolta Lawsuit
Very specific and very disturbing allegations. Two masseurs are suing the Hollywood titan for sexual assault as they were attempting to give him massages, and the lawsuit is quite explicit in its charges. Travolta's lawyer has dismissed the suit as "complete fiction." WARNING: Very graphic language.
Gavon Laessig
Thomas Peter / Reuters
1. John Doe No. 1 Picked Up By John Travolta Personally In A Lexus SUV On January 16th, 2012:
“There were Trojan condoms in the console of the vehicle, and there also appeared to be 2 or 3 wrappers from chocolate cake packages no the floor of the SUV.”
2. Arriving At Travolta’s Bungalow At The Beverly Hills Hotel:
“The door was unlocked, and there was an overweight black man preparing hamburgers, who meekly said ‘hey’ (to) Doe Plaintiff No. 1 and (Travolta), and no formal introductions were made. This black man was actually preparing hamburgers, from watching his skill and dexterity in food preparation, it seemed that we was some sort of professional chef.”
3. Shamelessly Stripping Naked:
“(Travolta) shamelessly stripped naked in front of Doe Plaintiff No. 1 and the ‘chef,’ and was gazing at Doe Plaintiff No. 1 as he appeared to be semi-erect.”
4. The Massage:
“(Travolta) kept purposely sliding the towel down that covered his buttocks to reveal about half of gluteus area. Doe Plaintiff No. 1 kept sliding the towel back up, and reminding (Travolta) that state law required that a massage client be fully draped during the massage. This back and forth…occurred over ten times in the first hour.”
5. Being Left Alone With Travolta:
“(Travolta’s) chronograph watch started to chime, and the black chef covered the burgers, and other things he was preparing with plates. The black chef then left the room with a stack of papers, and what appears to be some sort of notebook. No words were exchanged.”
Brendan McDermid / Reuters
6. The First Instance Of Contact:
“(Travolta) started to rub Doe Paintiff No. 1’s leg. … Doe Plaintiff No. 1 assumed that it was in fact accidental. Then (Travolta) touched Doe Plantiff No. 1’s scrotum, and this time Doe Plaintiff No. 1 told (Travolta) to please not touch him again. (Travolta) apologized, but then snickered to himself like a mischievous child.”
7. The Second Instance Of Contact:
“(Travolta) then touched the shaft of Doe Plaintiff No. 1’s penis, and seized on to it. (Travolta) quickly tried to rub the head of Doe Plaintiff No. 1’s penis as he tried to pull away. This was painful and uncomfortable.
8. An Apology:
“(Travolta) started to apologize for his behavior; and tried to imply that they ‘must have gotten our signals crossed’, and that he thought that Doe Plaintiff No. 1 ‘wanted the same thing he did.’”
9. A Proposal:
“(Travolta) then sat up on the table and asked Doe Plaintiff No. 1 to switch places, and do a reverse massage. Doe Plaintiff No. 1 told Defendant that a masseuse lying on the table was unlawful and inappropriate. Then (Travolta) said, ‘Come on dude, I’ll jerk you off!!!’.”
10. Attempting To Leave:
“Doe Plaintiff No. 1 told (Travolta) he just wanted to leave, that the situation was too strange, and that he actually felt very afraid for his safety. (Travolta) then laid down on the table, and said, ‘OK, I’ll behave myself’. (Travolta) neatly placed the towel in an appropriate manner in his lap and gave Doe Plaintiff No. I confidence that his predatory behavior was finally under control.”
Thomas Peter / Reuters
11. Say Something Nice To Me:
“Doe Plaintiff No. 1 followed his request with a professional deep tissue massage on his shoulders. (Travolta) then said, ‘Say something nice to me.’ Doe Plaintiff no. 1 tried to ignore what (Travolta) said, and was hoping to conclude this session. Doe Plaintiff No. 1 looked at (Travolta), who had removed his draping and was masturbating.”
12. Sweat Pouring Down His Neck:
“(Travolta’s) penis was fully erect, and was roughly 8 inches in length; and his pubic hair was wirey and unkempt. Sweat was pouring down (Travolta’s) neck, and he asked Doe Plaintiff No. 1 again to say something nice to him.”
13. Lumbering and Bouncing:
“Doe Plaintiff No. 1 moved away from (Travolta), who then lumbered to his feet and began to move towards Doe Plaintiff No. 1 with erect penis bouncing around with his stride. (Travolta) began screaming at Doe Plaintiff No. 1, telling Doe Plaintiff No. 1 how selfish he was.
14. Welcome Back, Kotter And Hollywood’s Cabal Of Homosexual Jewish Men:
“(Travolta said he ) got where he is now due to sexual favors he had performed when he was in his ‘Welcome Back Kotter’ days; and that Hollywood is controlled by homosexual Jewish men who expect favors in return for sexual activity.”
15. Things That Will Make You Throw Up:
“(Travolta) then went on to say how he had done things in his past that would make most people throw up. (Travolta) explained when he started that he wasn’t even gay and that the taste of ‘cum’ would make him gag. (Travolta) also said that he was smart enough to learn to enjoy it, and when he began to make millions of dollars, that it all became well worth it.”
Mick Tsikas / Reuters
16. Sex With Beautiful, Fit Men:
“(Travolta) further explained that the high-class in this world always favor same sex relationships; that sex with beautiful, fit men is actually more intense; and Doe Plaintiff No. 1 would just be open minded enough to let it happen, he would experience the best fucking of his life.”
17. A Threat To Call The Police:
“He told (Travolta) to get dressed and to either drive Doe Plaintiff No. 1 back, or Doe Plaintiff No. 1 was going to call the police. Strangely, (Travolta’s) penis was still semi-erect, and he had to struggle to get it back into his underwear and jeans since he pulled his underwear and pants up at the same time.”
18. A Hollywood Starlet And The Promise Of A Three-Way:
“(Travolta) then said, ‘no problem’, ‘I will find new friends’. (Travolta) then continued to say that Hollywood is all about giving and getting, and then he told Doe Plaintiff No. 1 that he can show me an ‘Instant Example’. (Travolta) told Doe Plaintiff No. 1 he knew a Hollywood starlet in the building that wanted three-way sex and to be ‘double-penetrated’. (Travolta said they) could have that later, but first they needed to have sex together before calling her, so this way they would be in-sync with each other sexually.”
19. Career Advice:
“(Travolta) told Doe Plaintiff No. 1 that he had Hollywood looks, but just needed to lose some weight and learn to lick some ‘ass’, and the Doe Plaintiff No. 1 would be ready to make millions and be famous.”
20. A Selfish Loser:
“He reiterated his threat to call the police, and (Travolta) took Doe Plaintiff No. 1 back to where he was picked up. During this ride, (Travolta) repeatedly called Doe Plaintiff No. 1 ‘selfish’ and a ‘loser’, and gave Doe Plaintiff No. 1 double what he was owed. This was $800.00 instead of the $400.00 that (Travolta) was supposed to pay Doe Plaintiff No. 1 for the two hours of massage time spent.”
Tobias Schwarz / Reuters
21. John Doe No. 2 Meets “Mr. White” On January 28th, 2012:
“Doe Plaintiff No. 2 … was assigned to do an in-room massage on or about January 28, 2012, a ‘Mr. White,’ and was told this was an alias for (Travolta). Doe Plaintiff No. 2 does not prefer to do in-room massages and asked coworker to take the assignment, and his coworker declined because (Travolta) had been banned from a Spa that the coworker used to work at in the Los Angeles area.”
22. Gluts and Humping:
“He was massaging (Travolta) and at times (Travolta) would hump the table. At one point (Travolta) fell asleep for approximately 5 minutes, and woke up and demanded that his ‘gluts’ be done immediately…while he was massaging (Travolta’s) buttocks area, (Travolta) would open his legs and spread his butt cheeks open, and had an erection…(Travolta) asked him ‘can you get right here and pointed and put his hand in his butt.’”
23. Red And Chapped
“(Travolta) said Doe Plaintiff No. 2 had big dense hands…he went to massage (Travolta’s) other side, and (Travolta) kept spreading his butt cheeks. Doe Plaintiff No. 2 observed that (Travolta’s) butt cheeks and rectum were very red and chapped.”
24. Forcing His Hand:
“(Travolta) suddenly turned on his stomach with his legs wide open. He then tried to force Doe Plaintiff No. 2’s hand on (Travolta’s) scrotum. Then (Travolta) started to grab, rub and caress Doe Plaintiff No. 2’s upper thighs and buttocks…(Travolta) also grabbed between Doe Plaintiff No. 2’s legs and he pulled away…(Travolta) started masturbating with 15 minutes left in the session, and Doe Plaintiff No. 2 said he had to go.”
Tobias Schwarz / Reuters
Now Buzzing | http://www.buzzfeed.com/gavon/the-worst-of-the-john-travolta-lawsuit | dclm-gs1-046490000 |
0.030526 | <urn:uuid:ef1c14a6-be65-49c8-882f-ab0f9222293b> | en | 0.80664 | 25,081,967 members doing good!
Millie The Security Cat
Offbeat (tags: Millie, cat, pint-sized, crime-stopping, kitty, goodnews )
- 627 days ago - pawnation.com
Not even Catwoman herself will be able to burgle her way into Bandai's UK-toy factory this holiday season, at least not as long as Millie the Bengal cat is on watch! The pint-sized kitty crime-stopper....
| http://www.care2.com/news/category/other/pint-sized | dclm-gs1-046510000 |
0.412477 | <urn:uuid:c13257ed-141a-4d4d-8eb4-33ef2b3a12a0> | en | 0.953437 | No recent wiki edits to this page.
The Jaffa are slaves to powerful aliens called the Goa'uld. When they come of age they are implanted with a young Goa'uld symbiote. Because of this they are given a much longer life and much greater physical abilities. The symbiote is also capable of healing them at an accelerated rate. After thousands of years of oppression by the Goa'uld, a group of Jaffa called the Sodan went into hiding. Unlike most Jaffa, the Sodan do not carry any Goa'uld markings. As the Jaffa resistance grew stronger, more and more of the Jaffa renounced the Goa'uld as their god and fought for freedom.
The Jaffa are and always will be a warrior race. They (formerly) wore black markings to signify which "god" they worshiped. They are known throughout many worlds for their strength and courage and their formidable fighting skills.
The Jaffa are early humans from Earth, genetically altered to hold an immature Goa'uld symbiote. When a Jaffa is implanted with a Goa'uld symbiote, a large X is cut into their stomach and a pouch is created. The symbiote replaces the Jaffa's immune system and grants them enhanced strength, health and longevity. It is common for them to live well over 100 years. They do not need sleep; instead they go into a trance, called the Kelno'reem, to remain in harmony with the symbiote and to keep the bond strong.
Like most technology in the Stargate universe, Jaffa technology requires Naquahdah in the blood to operate their weapons, which range in strength.
Staff Weapons - These weapons are easily able to kill most beings with one shot. Able to break steel and burn right through flesh, they are the first weapon of choice.
Zat'nik'tel - This weapon is mostly used for stunning, but two shots can kill most beings and a third vaporizes the victim's body.
| http://www.comicvine.com/jaffa/4060-56836/ | dclm-gs1-046600000 |
0.992444 | <urn:uuid:1ec42ac7-e5f9-46e0-aa99-78fd1e147637> | en | 0.935322 | Email this article to a friend
How a text can put you on the road to injury: Study finds that using your mobile phone while walking could be deadly
| http://www.dailymail.co.uk/news/article-2247302/emailArticle.html | dclm-gs1-046660000 |
0.026931 | <urn:uuid:2d2ec9eb-2477-40c2-b49e-c1da693a11db> | en | 0.933676 |
I. Same Results, Different CPU
GeekBench iPad 3
[Source: Tinhte]
II. Better Graphics
iPad 2 v. Tegra
[Source: Anandtech]
III. Rivals Prepare Counterstrike
IV. Samsung Truce Could Guarantee Steady Component Supply for iPad 3
Samsung Austin Texas
[Image Source: Let's Go Digital]
V. The Beefy Battery
VI. LTE -- Fast and Dangerous
iPad 3 LTE
[Source: Anandtech]
Sources: Tinhte [Vietnamese], AnandTech, AP
Maybe I'm wrong but...
By VahnTitrio on 3/13/2012 4:07:28 PM , Rating: 3
If all they did in the A5X was double the graphics power, wouldn't it make the 3rd gen iPad slower than an iPad2 as it has 4X as many pixels? Logic tells me that if all else is equal there is no way the new one can perform better...
RE: Maybe I'm wrong but...
By extra_baggage on 3/13/2012 4:29:22 PM , Rating: 1
You are forgetting the increased ram!
RE: Maybe I'm wrong but...
By B3an on 3/13/2012 4:39:34 PM , Rating: 3
More RAM will not magically make it perform better, it will just help compensate for the extra pixels it has to push.
At native resolution games should actually run slower than the iPad 2.
RE: Maybe I'm wrong but...
By tayb on 3/13/2012 6:40:27 PM , Rating: 2
Except that games don't scale that way. As an example from a recent Anand article (pasted below) scaling from 1680x1050 to 2560x1600 resulted in a 61.75% drop in FPS but the increase in pixels was 132.20%. The "new iPad" is pushing 4 times as many pixels as the iPad 2 but that does not mean you should expect 1/4 of the performance. They have also doubled the theoretical performance of the GPU, may have increased the CPU speed, could have optimized the software, and might have doubled the amount of RAM. This is a "we won't know until we know" kind of thing.
RE: Maybe I'm wrong but...
By snorldown on 3/13/2012 7:33:44 PM , Rating: 4
Your example just reinforces B3an's point. Assuming performance scales linearly with the number of pixels, a 132% increase in the pixel count would result in a 57% drop in FPS, so the fact that the benchmark showed a 61.75% drop means that the scaling was even worse.
RE: Maybe I'm wrong but...
By testerguy on 3/14/2012 2:49:31 AM , Rating: 3
Sorry but tayb is correct, he just used a bad example.
Take the same link of his.
Look at the HD 7950 (only selected because it's the first one on the list). At 1920 x 1200, it achieves an FPS of 47.1
Now, lets look at the result for 2560 x 1600. This is a move from 2304000 pixels to 4096000 pixels. This represents 1.78x as many pixels. As a result, if performance scales linearly, we would expect an FPS of 47.1/1.78, which is 26.46.
Instead, we see an FPS of 31.1.
Proof, by counter example, that FPS does not necessarily scale linearly with number of pixels. This also proves that 4x the pixels does not necessarily mean 1/4 of the FPS. Which was tayb's original point.
RE: Maybe I'm wrong but...
By testerguy on 3/14/2012 2:52:23 AM , Rating: 2
Edit: Sorry the 47.1 figure I quoted is actually 47.7 on the graph.
The point still stands though, 47.7/1.78 is 26.8 FPS (rather than the 26.46). Still less than the 31.1 (14% less) and still proves the point.
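To make the arithmetic in this sub-thread easy to check, here is a small sketch in Python (numbers taken from the HD 7950 example quoted above: 47.7 fps at 1920x1200 and an observed 31.1 fps at 2560x1600; the exact pixel ratio of about 1.778 is used rather than the rounded 1.78):

```python
def expected_fps_linear(base_fps, base_res, new_res):
    """FPS predicted if performance scales inversely with pixel count."""
    base_pixels = base_res[0] * base_res[1]
    new_pixels = new_res[0] * new_res[1]
    return base_fps * base_pixels / new_pixels

# 1920x1200 -> 2560x1600 is 2,304,000 -> 4,096,000 pixels (~1.778x)
predicted = expected_fps_linear(47.7, (1920, 1200), (2560, 1600))
observed = 31.1

print(round(predicted, 1))             # 26.8 fps under strictly linear scaling
print(round(observed / predicted, 2))  # ~1.16: observed beats the linear model
```

Since the observed 31.1 fps comes in roughly 16% above the linear prediction, FPS clearly need not scale linearly with pixel count, which is the point being argued here.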
RE: Maybe I'm wrong but...
And the results are in. Compare:
Tegra 3
GL Egypt Offscreen 720p: 64fps
GL Pro Offscreen 720p: 78fps
iPad 3
GL Egypt Offscreen 720p: 140fps (2.2X faster)
GL Pro Offscreen 720p: 241fps (3.1X faster)
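For reference, the speed-up multipliers quoted in this post follow directly from the frame rates; a quick check (all numbers are the GLBenchmark results listed above):

```python
tegra3 = {"Egypt Offscreen 720p": 64, "Pro Offscreen 720p": 78}
ipad3 = {"Egypt Offscreen 720p": 140, "Pro Offscreen 720p": 241}

for test, tegra_fps in tegra3.items():
    speedup = ipad3[test] / tegra_fps
    print(f"{test}: {speedup:.1f}x faster")
# Egypt: 140/64 -> 2.2x, Pro: 241/78 -> 3.1x, matching the multipliers above
```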
RE: Maybe I'm wrong but...
"Clueless hate comment fuelled by bitterness."
??? huh
RE: Maybe I'm wrong but...
By B3an on 3/13/2012 8:54:25 PM , Rating: 2
CPU speed is the same @ 1GHz. And as I said, more RAM will not make games run better! Unless games are limited by not enough RAM, which no iPad games are, since all the developers know exactly how much RAM they have to work with.
Another example would be that I have 32GB RAM in my PC, but taking much of it out and just leaving 6GB does not make games run any worse (apart from slightly longer loading screens), since no PC game uses more than 4GB. Adding more RAM when it's not going to be used will not increase FPS.
If anything, games that decide to use the extra RAM in the iPad 3 will see even more of a performance hit from the higher res textures used and so on. Or all that extra RAM could easily be used up from just enabling AA, which needs a lot of RAM for that res, which again will see a performance hit. The iPad 3 will not run games better at native res with just these GPU changes.
RE: Maybe I'm wrong but...
By reginhild2 on 3/14/2012 3:01:40 PM , Rating: 3
Good example. The geekbench number above is interesting as the Tegra 3 in the Transformer Prime kicks it to the curb.
RE: Maybe I'm wrong but...
By quiksilvr on 3/13/2012 9:05:56 PM , Rating: 1
One thing worth mentioning:
Those Tegra 3 benchmarks were taken BEFORE ICS. Those are Honeycomb benchmarks.
RE: Maybe I'm wrong but...
By MrMilli on 3/14/2012 8:53:09 AM , Rating: 3
Those numbers only relate to how the HD 7950 scales, but that doesn't mean this kind of scaling just transfers to the A5X. As a matter of fact, I can assure you that it doesn't.
That a video card with 240GB/s of dedicated memory bandwidth and a 25.6 GPixel/s fill rate can take a punch, we all know. But the A5X, with its roughly 5-6GB/s of shared memory bandwidth and 4GPixel/s fill rate (PowerVR: all fill rate figures are stated assuming a scene depth complexity of x2.5, which means the actual fill rate is 1.6GPixel/s), will have a much harder time with this 4x increase in pixel count.
My guess is that the performance drop will be even worse than linear because of the many bottlenecks.
So as stated, the performance of games on the new iPad will be worse than on the iPad 2 at their respective native resolutions.
To fight this effect, developers won't run 3D games at 2048x1536 but will keep the engine running at 1024x768 internally. They will apply the (almost free) 2xAA and then scale the output to 2048x1536 (just like an Xbox 360 running Full HD).
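The arithmetic behind that render-internally-and-upscale strategy is straightforward; a small sketch (pixel counts from the resolutions above, fill-rate figures as quoted from PowerVR in this post):

```python
native = 2048 * 1536     # new iPad native resolution: 3,145,728 pixels
internal = 1024 * 768    # proposed internal render resolution: 786,432 pixels

# Rendering internally and upscaling cuts per-frame pixel work to a quarter
print(f"native / internal = {native / internal:.0f}x")

# PowerVR's rated vs. actual fill rate, as quoted above
rated_gpix = 4.0         # GPixel/s, rated assuming 2.5x depth complexity
depth_complexity = 2.5
actual_gpix = rated_gpix / depth_complexity
print(f"actual fill rate: {actual_gpix} GPixel/s")  # 1.6
```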
RE: Maybe I'm wrong but...
By michael2k on 3/13/2012 5:14:36 PM , Rating: 2
But even so it will be faster than a Tegra 3...
RE: Maybe I'm wrong but...
By Dribble on 3/14/2012 5:41:43 AM , Rating: 2
With half the processors running at lower clock speeds? It won't be faster than Tegra 3 in any CPU-bound tasks, which is most of them.
RE: Maybe I'm wrong but...
By testerguy on 3/14/2012 7:03:00 AM , Rating: 3
Most tasks aren't CPU-bound - certainly not those which require great performance (e.g. games).
iOS is hardware accelerated, so it benefits from the better GPU just about everywhere.
Everyday tasks like browsing the internet, watching movies, viewing photos, and sending emails don't require a great amount of CPU power; any increase in CPU speed above that of the iPad would result in an imperceptible speed difference, since the much larger delay is, for example, in waiting for the page to download. Animations, switching between apps, slide interfaces, and multi-touch pinch/zoom are mostly handled by the GPU, and that's where you need the grunt.
It's very rare to find an Android tablet, even ones which have a 'faster' CPU, which feel as smooth as the iPad range.
RE: Maybe I'm wrong but...
By theapparition on 3/14/2012 9:31:53 AM , Rating: 3
Maybe because none of them had a hardware-accelerated UI until ICS.
Take a look at reviews on ICS on tablets and you'll see how improved it is.
RE: Maybe I'm wrong but...
By Dribble on 3/14/2012 10:05:38 AM , Rating: 2
I bet when the iPad 4 (or whatever they call it) comes out with some uber A15-based CPU (much faster), then Apple (and you!) will be telling us all how important the CPU is.
If that's not true, then why are they bothering to develop A15-based CPUs? Why not just stick with having a couple of A9s @ 1GHz forever?
RE: Maybe I'm wrong but...
By michael2k on 3/14/2012 1:30:16 PM , Rating: 2
Because they are simultaneously working on iOS6 and iOS7?
RE: Maybe I'm wrong but...
By MrMilli on 3/15/2012 5:59:49 AM , Rating: 3
That's what I've been saying for years. Apple tells people what's important and people just repeat after them. It's pretty amazing.
In this case, how is CPU power not important? I would say that anything and everything is short on CPU power. Even a Core i7 that's >10x faster than a Cortex A9 is still not fast enough. Only when every task happens instantaneously are you allowed to say that the CPU is fast enough.
Apple applied the same strategy in the past with the lack of multitasking, lack of good camera, ...
Apple always manages to convince people that when they introduce it, is the right time. Nothing short of amazing.
RE: Maybe I'm wrong but...
By Smilin on 3/13/2012 8:53:37 PM , Rating: 3
For 2d raster graphics, yes. But there is so much leftover horsepower that it's not an issue under those circumstances.
For 3d graphics 4x pixels doesn't translate directly into 4x workload.
This article was first published in German on
RE: A little pleased
By othercents on 4/16/2007 10:52:24 AM , Rating: 2
IT managers don't follow market trends and won't wait for a processor in hopes that it will be better than the one they get now. Most IT managers (myself included) strictly buy Intel with disregard to performance numbers. I do this because it is easier to manage. When you buy 25 computers a month it is more important to get reliable computers that can be installed quickly. I might wait up to 2 weeks if I think the price will drop, but that's all.
It is possible that I might look at AMD after the new processors come out, but that is dependent on how quickly Dell starts rolling out computers with the new hardware and what the price is going to be, since Intel is dropping theirs. Plus you also have to think twice about switching to AMD in July since Intel is making die changes in December. If Intel becomes better in December then I will have 125 computers that are different from everyone else's.
I am also concerned about the AMD availability when the K10 comes out. We might not see computers from our vendor until late Q3 or even Q4.
RE: A little pleased
By GlassHouse69 on 4/16/2007 12:46:21 PM , Rating: 4
everyone likes to seem like a type A aggressive meat market guido who responds to these things.
You buy 25 a month. I do not think you are a high volume purchaser. Also, it seems you really don't need whatever is cutting edge, just what gets the job done. Also, you are not looking forward to new technologies, just whatever Dell is offering. Also, you aren't buying them, just ordering them for a company. Your company probably has no clue what is better and has no real need for the best versus decent. I would guess that, like your company, most companies do not have scientific calculation needs, huge server workload needs, etc.
the k10 is driven by gamers and people who run demanding servers. When either of these two REALLY SMALL groups of people say a chip is crap, the stock price plummets and the middle of the road users reject the chip.
RE: A little pleased
By PrezWeezy on 4/16/2007 4:01:37 PM , Rating: 1
I would disagree with you there. People don't listen to the gamers. They might pay some attention to what is being said about servers, but the market doesn't rely on gamers to tell them what chip is better. They rely on IT managers. Whatever the IT guy recommends at work, people will buy for home, assuming it is within their price point. I can't tell you how many people ask me what to buy.
Actually, 25 a month is a substantial number of PCs. And it doesn't matter where the PCs are being acquired; what matters is that they are being installed on a regular basis. And I would agree with OtherCents that switching platforms causes problems. It is much easier if everyone is on the same platform.
RE: A little pleased
By GlassHouse69 on 4/17/2007 2:59:15 AM , Rating: 1
IT managers who do not have computers as a hobby do not rely on their own input in purchasing equipment. They will go to their staff who have an opinion either way. Those people have been building AMD systems for several years now at home. They like AMD and know it will be an impressive decision if they flip their blades/pc's/workstations to an AMD. Well, this was so until the c2d era that began last summer.
Do not underestimate the tech geek in a cubicle when it comes time for opinions. Apple would have been chucked out long ago if it wasn't for the creative/tech geek combo guy in his 30s and 40s.
RE: A little pleased
By PrezWeezy on 4/19/2007 5:57:49 PM , Rating: 2
I think we have a slightly different idea of what an IT Manager is. I'm referring to those who own a business doing IT work. I guess Consultant is more commonly used, but I've heard IT Manager, IS Manager, and the like. So they are the ones whose opinion you would ask.
I still wouldn't build an AMD system at home. I can afford not to, especially now that the prices are extremely competitive; and they still left a bad taste in my mouth. It is just a personal opinion, but I stand by Intel because I know it will work and continue to work. And I am one of those geeks... although I'm not in a cubicle, nor am I in that age demographic.
RE: A little pleased
By bob661 on 4/21/2007 6:55:11 PM , Rating: 2
IT Manager
IT Manager isn't a consultant; it's the head IT guy in a corporation. And how the hell did AMD leave a bad taste in your mouth? I work for an organization with close to 10,000 users and our entire engineering department runs AMD (about 500 or so). We have AMD-powered servers running 24/7 without fault. We also have many Intel desktops and servers running 24/7. We bought AMD because they were faster, cheaper, and cooler-running than Intel. Makes good business sense. If it works, it works.
Now there's a price parity and the performance difference is nearly identical so we can CHOOSE based on features.
If you're making business decisions based on fanboy ideals, your IT department is not really looking out for the best interests of your company.
RE: A little pleased
By PrezWeezy on 4/23/2007 1:04:24 PM , Rating: 2
You are correct in that I used the wrong term. I apologize for that, but I corrected myself. And AMD left a bad taste in my mouth when I installed several of them and after a few months the procs started to overheat and freeze or shut down the computer. I want reliability in the business world. My company has a few of the new generation AMD machines out there. And I would agree that our database server running dual Opterons is an Intel killer. The thing is amazing. It runs much faster than any Xeon server we could put together at the time. That doesn't change the fact that unless it is a specialized application, I prefer to go with the more stable Xeon.
RE: A little pleased
By mindless1 on 4/19/2007 5:03:43 AM , Rating: 2
and I can't tell you how many DIDN'T ask you what to buy.
Only stupid people rely on IT managers; the rest make their own informed decisions. That is not to slight the experience of IT managers, only that a certain segment of buyers needs help today when the internet is such a wealth of information.
People do actually listen to gaming; it is in fact what drives a lot of the market, not some arbitrary score running Office 200(n), in addition to benchmarks of other applications they won't ever use.
Yes running a common set of parts helps, but platform? No, that's stupidity. There is just as much difference in maintenance in whether all boxes use the same NIC as whether they all use the same brand of CPU.
RE: A little pleased
By PrezWeezy on 4/19/2007 5:45:12 PM , Rating: 2
You obviously have never in your life managed anything in IT. Let's say I have 200 computers, and a few spares. If all the parts match identically and one goes down, I can take my spare box, swap the hard drives (assuming you have the license to do that), and that person is good to go while I get the motherboard replaced or get a new fan for the CPU. For a small company it doesn't make much sense, but the more computers you have out there, the more it makes sense to have a platform approach. Use one family of chipsets and maybe use a faster CPU if you need more power.
You may not need help buying a PC, but if I look online I can find just as many Intel backers as I can AMD backers. So now the question is: which do I go with? So you ask the people who know, the guys who work on them all day long. And I'd like to point out that the average consumer not only wouldn't know what a benchmark is, but wouldn't know what it means even if they did know to look for one. No, the fact of the matter is that people either use their checkbook to determine their PC purchase or they ask someone they know and trust.
You're right, you can't tell me how many didn't, because you have no idea yourself. My point was that the majority of people at least ask my opinion before buying a new PC. And many of them go out of their way to call and ask. And the average user doesn't look online. The percentage of those that do is very small.
RE: A little pleased
Now, back to the topic at hand...
Related Articles
AMD 2007 Server Roadmap
December 14, 2006, 9:47 AM
Shadow Puppets (DVD)
Starring Jolene Blalock, James Marsters, Tony Todd, Marc Winnick, Natasha Alam, Diahnna Nicole Baxter
Written and Directed by Michael Winnick
Distributed by Starz Home Entertainment
In movies, no matter how well intentioned scientists may be, their actions almost always have negative results. Look at Hollow Man for instance. They figure out how to turn a guy invisible, but as a by-product he becomes a homicidal maniac. Now along comes Shadow Puppets, in which a doctor desirous of helping the mentally ill and criminally insane manages to create a new form of monster that lives in the shadows of his research facility and wants to kill everyone in its path.
But let's start at the beginning and not get ahead of ourselves. The background for the opening credits is a very cool collage of brain scans that tips off the viewer as to the "headsy" nature of the film. From there Shadow Puppets opens with a pretty woman waking up screaming in a white padded room. She's wearing nothing but a grey tank top and underwear ensemble. The film's sound design is instantly notable as we hear absolutely nothing but total silence until she begins pounding on the walls and yelling for someone to help her. Then come some horrible, frightening noises and roars until … again, dead silence. The lights flicker on and off, which causes the lock on her door to malfunction. She steps into the hall and, after a short bit of exploring, encounters a man wearing the same type of outfit. Neither can remember anything about themselves -- their names, occupations, the last place they were before waking up, where their clothes might be. Plus it's impossible to tell what sort of structure they are in. Is it a prison? An insane asylum? Or maybe a combination of both? "All right," I'm thinking. "We're off to a very good start."
Couple #1 as we'll call them (Blalock and Marsters) decide to investigate their surroundings further and encounter another man and woman who are both dressed as they are and also have no memories. They split up in order to look for an exit. At this point I began feeling slightly unsure about the acting. Saying it's a bit flat is an understatement, but then I started thinking that it was appropriate given the situation. Who knows how any of us might sound and behave after waking up with amnesia and feeling nothing but shock and fear? So, I decided to give everyone the benefit of the doubt … for a little while at least.
Our original team goes one way, and Couple #2 (Winnick and Baxter) head off in the other direction. They find a swimming pool with a naked girl (Alam) in it, who, of course, also has suffered a memory loss. Winnick's character notices some funny business with her shadow as she gets dressed, but the implication is unclear. Meanwhile, Couple #1 (no one has names at this point, so bear with me) discover a middle-aged man hooked up to a machine that has apparently rendered him brain-dead. Blalock's character seems to have a medical background that comes through despite her memory lapse, raising the idea that our inherent skills and talents (i.e., what we do) are separate and apart from our recollections of ourselves (i.e., who we are). This underlying theme is revisited over the course of the film, giving it some unexpected complexity and depth. They then meet another man and learn that the shadows in this building have somehow taken on a life of their own … with a murderous mind at the helm. The stranger is killed by the shadow beast, and in quick succession Couple #1 rescue Tony Todd's character (not surprisingly, a badass mofo) and try to save another woman, but the shadows get her first.
From here things move along nicely as our team of six survivors learn their true identities and battle against time and whatever it is the well meaning doctor mentioned in the first paragraph unwittingly released. But this is also where their dubious acting abilities become more noticeable … and distracting. For whatever reason, the men fare better than the women. Other than our naked swimmer, who as you might expect turns out to be a slut, albeit an alluring one, "Doc" (as the mofo calls her) comes off as trying to outdo Mother Teresa with her overriding concern for everyone but herself, and the female half of Couple #2 is as wooden as can be. Maybe it's just the way their dialogue was written, but I couldn't really empathize with either of them. Certain scenes involving the two of them were so forced, I was taken right out of the action. Most likely a few more takes and reshoots would have helped, but as Marsters says in his interview, it was a "tough, quick shoot" so Winnick probably had to take what he could get and keep moving forward with filming. And speaking of Marsters, it was great seeing him out of "Spike mode" playing a completely different sort of character. He shows some good range in Shadow Puppets, and having him running around in his skivvies doesn't hurt either! Same with Tony Todd. He's been in so many direct-to-DVD flops lately that it's a nice change of pace to see him in something that showcases his talents. Rounding out the terrific testosterone trio is the director's brother, Marc. He does a lot with a little in his portrayal of someone who was definitely in the wrong place at the wrong time.
The monster is surprisingly effective as well. Sure, at times it's a little cheesy with glowing eyes and all that, but I found it a rather endearing effort overall. And the shadowy tentacle effects were downright creepy. It would be interesting to see what Winnick and his FX team could have accomplished with a bigger budget. It almost always comes down to that -- money -- in the final analysis of direct-to-DVD films these days, no? So much potential that just doesn't quite achieve the results the filmmakers are striving for and the audience is hoping for. Maybe with a few more thousand dollars we all would have benefited. But still, Shadow Puppets is a worthwhile entry in the supernatural/psychological thriller subgenre should you have a few hours to spare and want to see something a little different from the norm.
Unfortunately, the extras are completely average. We get a commentary, an eight-minute "comments" featurette (which is, I suppose, a fancy new name for on-set interviews), some trailers -- and that's it. The commentary with director Winnick and cinematographer Jonathan Hale starts out extremely dry and technical. They discuss color schemes and working with family members. Mostly, though, they dissect shooting techniques to the nth degree. We hear about their favorite shots, point of view shots, this shot was this…, that shot was that… If I had done a shot every time one of them uttered the word "shot," I would have been plastered 20 minutes into the film! But by the final half hour or so they find their groove and manage to turn one of the worst commentaries into a fairly entertaining one.
I've seen a few comparisons of Shadow Puppets with the Cube franchise, but to me it plays more like Alien meets Eternal Sunshine of the Spotless Mind. Not that it comes close to the quality of those two films, but it is good enough to keep me interested in watching for another project from Michael Winnick. Maybe it'll even be one in which his cast members get to wear actual clothes!
Special Features
Audio commentary with director Michael Winnick and cinematographer Jonathan Hale
"Shadow Puppets: Director and Cast Comments" featurette
Trailers for feature and other Starz releases
Film: 3 out of 5
Special Features: 2 out of 5
Discuss Shadow Puppets in our forums!
Ottawa, ON
1 recommendation
reply to dillyhammer
Re: [DSL] VDSL - 25/7 Congestion
True, but... Grammas and Uncles also get unhappy when their VoIP is choppy (and more and more of them are using VoIP). They just don't understand why it is choppy.
Also don't forget that lots of them are only with TSI because the techies among us are telling them to sign up with TSI. Given no external input, who do you think the Grammas and Uncles will go with? That's right, Bell or Rogers (at least around here). Most of them will have never heard of TSI otherwise.
I don't demand perfection from TSI. I demand a reasonable effort to do their best for their customers. I work with technology and I understand that things happen. But if the lines are saturated to the point that it is causing problems (albeit minor ones for now) for all of their DSL customers (who until recently were the meat and potatoes, and probably still represent the bulk of the techies), I think that needs addressing.
Ladon basin in full colour
The fractured features of Ladon basin
2 August 2012
ESA’s Mars Express has observed the southern part of a partially buried approx. 440-km wide crater, informally named Ladon basin.
The images, near to where Ladon Valles enters this large impact region reveal a variety of features, most notably the double interconnected impact craters Sigli and Shambe, the basins of which are criss-crossed by extensive fracturing.
This region, imaged on 27 April by the high-resolution stereo camera on Mars Express, is of great interest to scientists since it shows significant signs of ancient lakes and rivers.
Both Holden and Eberswalde Craters were on the final shortlist of four candidate landing sites for NASA's Mars Science Laboratory, which is now due to land in Gale Crater on 6 August.
Large-scale overview maps show clear evidence that vast volumes of water once flowed from the southern highlands. This water carved Ladon Valles, eventually flowing into Ladon basin, an ancient large impact region.
Sigli and Shambe perspective view
Ladon basin in context
The interconnected craters Sigli and Shambe are thought to have formed later when an incoming projectile split into two pieces just before impact. The joined craters were then partly filled with sediments at some later epoch.
Ladon Basin perspective view
Deep fractures can be seen within the craters, whilst in the central and right parts of the image, smaller craters and more subtle curved fractures appear. These fractures on the basin floor extend beyond the image borders and form concentric patterns. The fractures are believed to have evolved by compaction of the huge sediment loads deposited within the impact basin.
Topographical view
The outflow of Ladon Valles in to Ladon basin is located towards the east of Sigli and Shambe Craters, towards the bottom of this image. Here, and in several other parts of the image, lighter-toned layered deposits can be seen. Researchers have detected clay minerals within these deposits, suggesting a relatively long-lasting presence of liquid water in the region’s past.
3D anaglyph view
In addition, winding, valley-like dendritic structures can be seen above Sigli and Shambe Craters, running into the larger impact basin and again indicating flowing water at some distant epoch.
Wooden Match Head
Newer Older
1. Photocre8 25 months ago | reply
I've never seen a match head like that before, Thomas! I have a WHOLE big jar of matchbooks that I collected for years. No, I've never smoked, just collected matchbooks!
This has really beautiful colors!
2. bob194156 25 months ago | reply
Looks like a photo that might be taken through a microscope, rather than a regular camera. The colors are great!
3. Light Echoes 25 months ago | reply
The minute texture in this is amazing. What a great capture.
Emily Raw, inkblotz08, and 11 other people added this photo to their favorites.
1. Emily Raw 28 months ago | reply
Exquisite! The gaze is all the more piercing for being obscured.
2. tricia_anders 28 months ago | reply
ooh, effective lighting!
3. tamishir 28 months ago | reply
I like it a lot!
4. brancusi7 5 months ago | reply
Love the powerful perceptual ambiguity.
is Nintendo the worst for butchering series?
#51 Soanevalcke6 (posted 3/14/2013 6:39:58 PM)
sonicvssilver22 posted...
Dorami posted...
Accuses Nintendo of butchering series.
Doesn't list Other M.
Other M wasn't bad, just different. They were trying to take the series in a new direction, instead of releasing a new game that was similar to the old games in the series like they do with Mario almost every year
There's a difference between taking the series in a new direction and completely changing the series' core gameplay.
Prime took the series in a new direction, because the core gameplay was the same as Super Metroid. Other M completely changed the core gameplay to a dumbed down arcade shooter with no Atmosphere, no Isolation, no Atmospheric music, and no Exploration, so if you went to pick it up because you liked other Metroid games, you were in for a terrible surprise.
New Direction = good
Completely changing/destroying everything the series has been building up to = bad
SSB4 Roster: PSN/NNID/Steam: Soanevalcke6
#52 DTY3 (posted 3/14/2013 6:54:45 PM)
mini_blight posted...
DTY3 posted...
Everything in your post is perfect except one thing.
And finally, someone who doesn't think Double Dash is the best Mario Kart. Hell no. That game controls way too awkwardly.
No! You are wrong!
I don't hate WW, btw.
Metroid Fusion is pretty cool I guess.
Mistletoe: Lover, Fighter, Forest Savior
When hung over a threshold, a sprig of mistletoe is a matchmaker; in the wild, the plant is a parasite known as the "thief of trees." Now, thanks to a recent study in Australia, mistletoe has a new reputation: forest savior. Field research indicates it's actually a beneficial plant, critical to a healthy ecosystem. (Garden Design, 20 Dec 2012)

Botanic Superlatives: The Largest Flower
Despite its efforts to keep a low profile (lurking, as it tends to do, deep in Southeast Asia's undisturbed rainforests), the Rafflesia arnoldii has international notoriety. Its detractors might call it a hulking, smelly parasite, and they would not be wrong. Also known as the "stinking corpse lily," the infamous plant blooms with the world's largest individual flowers, which give off a noxious odor that happens to attract carrion beetles and flies (the plant's preferred jungle-dwelling pollinators). (Garden Design, 18 Feb 2011)
DIY Home Improvement, Remodeling & Repair Forum
papason 05-07-2009 11:12 PM
hanging drywall with what?
OK, I am not that much of a novice, but I have not done much with drywall for a while. I am wanting to use nails (stores are not sure if they even have them and seem unsure if they exist) -- nails, because if I ever take these walls apart again, how the heck are you going to get those screws out? Forget it. So in the store they have the old cupped heads and mostly ring shank. Nobody knows why the ring shank. I am painting these walls with a little texturing. Do you really use ring-shank flat heads for that? Also, as long as I am here, where is best for the seams in regards to windows? I am looking to hang horizontally. Seam in the middle of the window, edge as far away as possible -- what is best? This is an old house, and this one and others just like it typically have a crack from it.
Thanks guys
locknut 05-08-2009 06:23 AM
Why at this point are you concerned about removing what you have not installed yet? If you use screws, i.e., screws that are designed for drywall hanging, they come out as easily as they go in. A ring shank nail has a rough surface along its shank for gripping. I have always used a minimum of screws and nails (what's odl cuped as you wrote?) mainly along with glue (sold for this task). That way spackling and sanding can be kept to a minimum. The type of fastening used has no bearing on the finishing coat(s). Screws or nails are inserted below the board surface and, of course, filled over and hidden.
GBR 05-08-2009 10:30 AM
Here is some good reading to help you, neighbor.
Don't break joints over the sides of windows, as the stress/load points are located there.
Be safe, G
Nestor_Kelebay 05-08-2009 10:07 PM
Here's where the problem is with your thinking:
1. In order to install the drywall with nails, you need to hammer the nail head to just below the drywall surface so that you can cover that nail head with drywall joint compound. So, if you ever did want to remove that drywall, getting something under the head of the nail to pry it out is going to wreck your drywall anyway. What you're saying would make only a little more sense if you were wanting to use smooth shank nails, but you'd probably wreck the drywall getting anything under those nail heads too.
2. Ring shank nails are a SOB to pull out. You will curse the day you put them in. Using ring shank nails to hold drywall in place is kinda like using a triple box knot to tie your shoe laces. It won't keep them from coming off when you don't want them to as much as it will prevent you from taking them off when you do want to. You're gaining little and losing much. About the only place where a sane person would use ring shank nails is to hold their roof down if they live in tornado alley. I'm exaggerating, but in my opinion, most people create more problems for themselves when they use ring shank nails without first educating themselves by first-hand experience what a pain they are to take out.
(That thing about people creating problems for themselves also applies to adhesives. Never use a stronger adhesive than you need to avoid problems with the glue holding. Doing so will only make your life harder if and when you want to separate what you stuck together.)
3. By contrast, drywall screws can be taken out easily with minimal damage to the drywall. Also, there is much less chance of having to redo a pile of work because the hammer missed the nail and now you have the cutest round hole in your drywall for storing things, like a cork, maybe.
4. And, you can drive drywall screws without buying an expensive drywall gun. Go to your local hardware store or home center and buy an attachment for an electric drill called a "Dimpler". It's a $10 attachment that allows you to drive drywall screws to the correct depth. They last a long time, and you should be able to put up all the drywall in your house with one of them.
You don't need to know the rest:
Drywall screws are considered to be "very low root" screws, which means that the root diameter of the screw is very small. Imagine the red thing in the image below looked like a screw:
The "root diameter" is the diameter of the solid shaft through the middle of the screw. (The major diameter is the apparent diameter of the screw when you look straight on at its pointy end.) Unlike wood screws, drywall screws are designed to be driven into wood without any predrilling, and keeping the screw's root diameter to a minimum allows the screw to go in easily with little or no splitting of the wood around the screw.
Also, when drilling into softwood, choose a drill bit for the pilot hole with the same diameter as the screw root diameter, or the next larger size drill bit. When drilling into hardwood, choose a drill bit with the same diameter as the screw root diameter, or the next smaller size drill bit. By doing this, you get maximum holding power because you maximize the area of steel thread that's grabbing onto wood while minimizing the amount of wood splitting immediately around the screw. Softwoods readily compress around the screw, so going to the next larger size up won't cause any splitting.
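That bit-selection rule can be written as a tiny helper. This is purely illustrative (the function name and the bit sizes are made up, not from any fastener standard):

```python
# Sketch of the pilot-bit rule above: softwood -> bit equal to the screw's
# root diameter or the next size up; hardwood -> equal or the next size down.
def pilot_bit(root_diameter_mm, available_bits_mm, wood="softwood"):
    bits = sorted(available_bits_mm)
    if root_diameter_mm in bits:
        return root_diameter_mm
    if wood == "softwood":
        # no exact match: take the next larger bit
        larger = [b for b in bits if b > root_diameter_mm]
        return larger[0] if larger else bits[-1]
    # hardwood: take the next smaller bit
    smaller = [b for b in bits if b < root_diameter_mm]
    return smaller[-1] if smaller else bits[0]
```

For example, with a 2.9 mm root diameter and bits of 2.5, 3.0, and 3.5 mm on hand, the rule picks 3.0 mm for softwood and 2.5 mm for hardwood.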
papason 05-09-2009 02:43 AM
I appreciate the response
To add a little to my original question.... I did tear out the walls this time and have done it a lot remodeling. So I was thinking, if I were to need inside these walls again for any reason much bigger than an electrical box, how would I get the screws out? They are full of joint compound and that is not going to just jump out of the screw head; it sounds like a major pain, and just not practical. I would think it would be faster just to cut out the stud and replace it. This is a rental and I can imagine damage enough that simple patching won't do it. This I guess is likely to be the only reason. I don't know why the wall would come down again, but I got to thinking if it did...
This time I updated and added outlets, and insulation.
Ring shank nails with the solid head would need a pretty good whack to dimple them enough, and this just did not sound right. I guessed that they were more for backer boards or when a covering would be used, but I don't know tho.
Locknut; The old cupped head nails are what is holding up most of the drywall in the country as far as I know. It sounds like screws are the choice now. I have used them and it is easier, but as I have said, that is when putting up the wall, not taking it down.
DaveyDIY 05-09-2009 07:57 AM
Nails or screws are just as hard to pull out
Screws IMO attach the drywall much more securely
Drywall is removed around the nails, then the nails are pulled out
I have had no problem removing screws from the drywall
In fact I was able to determine where the screws were & removed the sheetrock intact. You can't do that with nails
Nestor_Kelebay 05-09-2009 08:33 PM
Originally Posted by papason (Post 30390)
You use a magnetic stud finder to locate the screw heads:
so you could use a paint scraper to scrape the paint and joint compounds off the screw heads:
and then you use an awl to clean the compound out of the screw head:
I would think it would be faster just to cut out the (drywall) and replace it.
That's most often what people do. But, using drywall screws to put the drywall in makes it cleaner and I'd say easier to take out.
Toledo Stories and Tips
Belize/Toledo Travel Info
The CDC recommends a number of vaccinations before visiting Belize, in addition to a course of anti-malarial medication. Some people on my trip took their chances without either of the above and they were fine. I preferred the extra comfort and would recommend seeing a travel doctor a few months before leaving.
English is the main language spoken in Belize. You'll also find some Spanish and a number of local Mayan dialects.
The US dollar is accepted everywhere, so there really isn't any need to convert your money. The conversion rate is 2 Belize dollars to 1 US. Credit cards are widely accepted. ATMs are not plentiful.
Expect to pay about $40 US as a departure tax when leaving Belize.
Punta Gorda roads are paved, but you'll quickly find dirt roads as soon as you leave town. In the morning, this isn't a big deal. After a long hot day, bouncing around in a car or van for the last 20 minutes back to your hotel can be a bit grueling. (I'd also discourage renting a car for this reason. If you do rent, upgrade to a truck or something with more off-road capability.)
The climate in Belize is hot and humid. Bring extra shirts or polypro tshirts that can be washed in your room and will dry quickly. Make sure you have plenty of water when you leave for the day.
If you are staying outside of Belize City, pack any necessities that you may require. Even in Punta Gorda you may not find simple items you'd expect at home.
Bring a water camera! I did not pack one of these and would have loved to have one at the caves.
I'd recommend packing some Nu-Skin in case you get any scrapes or cuts. On the cave tour, I skinned my knees which would normally be a minor inconvenience. With the humidity, it just didn't heal well. Luckily, someone had packed some and was willing to share.
Localization and function of KLF4 in cytoplasm of vascular smooth muscle cell.
Title: Localization and function of KLF4 in cytoplasm of vascular smooth muscle cell.
Publication Type: Journal Article
Year of Publication: 2013
Authors: Liu Y, Zheng B, Zhang X-H, Nie C-J, Li Y-H, Wen J-K
Journal: Biochemical and Biophysical Research Communications
Date Published: 2013 Jun 28
The Krüppel-like factor 4 is a DNA-binding transcriptional regulator that regulates a diverse array of cellular processes, including development, differentiation, proliferation, and apoptosis. Previous studies of KLF4 function mainly focused on its role as a transcription factor; its functions in the cytoplasm are still unknown. In this study, we found that PDGF-BB could prompt the translocation of KLF4 to the cytoplasm through a CRM1-mediated nuclear export pathway in vascular smooth muscle cells (VSMCs) and increased the interaction of KLF4 with actin in the cytoplasm. Further study showed that both KLF4 phosphorylation and SUMOylation induced by PDGF-BB participate in the regulation of cytoskeletal organization by stabilizing the actin cytoskeleton in VSMCs. In conclusion, these results indicate that KLF4 participates in cytoskeletal organization by stabilizing the cytoskeleton in the cytoplasm of VSMCs.
Alternate Journal: Biochem. Biophys. Res. Commun.
Heatmeter: 'Argo' and Jennifer Lawrence lead the Oscar pack
By Steven Zeitchik, Doug Smith and Oliver Gettell
7:00 AM PST, January 31, 2013
A flurry of ceremonies over the last 2 1/2 weeks -- the Golden Globes, the Producers Guild prizes and the Screen Actors Guild awards -- has separated pretender from contender in Hollywood's 2012-13 awards season.
According to the L.A. Times HeatMeter, which measures the overall traction of personalities and films, a number of things are coming into focus. (The Times' Data Desk compiles rankings based on a formula of nominations and wins; see key below.)
The lead actor category has become a one-man field thanks to the dominance of "Lincoln" lead Daniel Day-Lewis. In the supporting actress category, Anne Hathaway of "Les Miserables" has 117 points -- more than twice her nearest challenger.
The lead actress competition is starting to separate too after the SAG win for Jennifer Lawrence of "Silver Linings Playbook." But the supporting actor contest continues to heat up: Wins in recent weeks for Christoph Waltz of "Django Unchained" and Tommy Lee Jones for "Lincoln" have ensured one of the most tightly bunched standings this season.
And, of course, there's the best picture scramble. Three weeks ago, "Argo" had been in the middle of the pack. But after taking home the Golden Globe best drama statuette, the SAG cast award and the top Producers Guild award, it's now top of the heap.
Next up: Directors Guild awards Saturday, the Writers Guild on Feb. 17 and the Oscars on Feb. 24.
--Steven Zeitchik, Doug Smith and Oliver Gettell
Today's Categories
Lead actress*
It's been a nip-and-tuck kind of year for Jennifer Lawrence and Jessica Chastain. But after the SAG Award for female actor in a leading role went to Lawrence, she opened up a sizable lead. While actresses like Helen Mirren, Rachel Weisz and Marion Cotillard won't be able to make up any further ground -- the only place left for actors to score is at the Oscars, and they were left out -- several others, including Chastain, can still add points if they snag a golden man.
1. Jennifer Lawrence, "Silver Linings" 155
2. Jessica Chastain, "Zero Dark Thirty" 92
3. Emmanuelle Riva, "Amour" 55
4. Rachel Weisz, "Deep Blue Sea" 47
5. Naomi Watts, "The Impossible" 44
6. Marion Cotillard, "Rust and Bone" 24
7. Helen Mirren, "Hitchcock" 24
8. Quvenzhane Wallis, "Beasts of the Southern Wild" 20
Supporting actor*
Tommy Lee Jones may have won a SAG award on Sunday, but he's still trailing Christoph Waltz in the overall standings. Meanwhile, early season favorite Robert De Niro will need an Oscar win if he hopes to catch anyone in the seasonal race -- he's behind six actors, including two who weren't even nominated for Oscars.
1. Christoph Waltz, "Django Unchained" 71
2. Tommy Lee Jones, "Lincoln" 58
3. Alan Arkin, "Argo" 29
4. Philip Seymour Hoffman, "The Master" 29
5. Matthew McConaughey, "Magic Mike," "Bernie" 21
6. Dwight Henry, "Beasts of the Southern Wild" 21
7. Robert De Niro, "Silver Linings Playbook" 19
Best picture*
As it piles up wins, "Argo" has taken a huge lead in the HeatMeter rankings. Its presumptive main challenger at the Oscars, "Lincoln," is down in sixth place, behind even "Searching for Sugar Man" (a documentary that has won several awards and is vying for the Oscar in that category but is not nominated for best picture).
1. "Argo" 191
2. "Amour" 114
3. "Les Miserables" 69
4. "Zero Dark Thirty" 64
5. "Searching for Sugar Man" 49
6. "Lincoln" 41
7. "Silver Linings Playbook" 37
*Scoring Key
New York Film Critics Circle: 35 points for a win
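The Times describes the HeatMeter as a formula over nominations and wins, and only the NYFCC value survives in the key above. Here is a toy sketch of how such a tally might work; every point value except the NYFCC win is an invented placeholder, not the Times' actual key:

```python
# Toy tally for a points-based awards "HeatMeter". Only the NYFCC win
# value (35) comes from the key above; everything else is a placeholder.
POINTS = {
    ("nyfcc", "win"): 35,          # from the scoring key above
    ("globes", "win"): 30,         # placeholder
    ("globes", "nomination"): 10,  # placeholder
    ("sag", "win"): 30,            # placeholder
    ("sag", "nomination"): 10,     # placeholder
}

def heat_score(results):
    """results: (award, outcome) pairs for one film or performer."""
    return sum(POINTS.get(r, 0) for r in results)

scores = {
    "Argo": heat_score([("globes", "win"), ("sag", "win")]),
    "Lincoln": heat_score([("globes", "nomination"), ("sag", "nomination")]),
}
# Sorting the summed scores descending yields a ranking table like the ones above.
ranking = sorted(scores.items(), key=lambda kv: -kv[1])
```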
[The Clash cover song]
white riot... i wanna riot
white riot... a riot of my own
white riot... i wanna riot
white riot... a riot of my own
black people gotta lot a problems
but they don't mind throwing a brick
white people go to school
where they teach you how to be thick
and everybody's doing just what they're told to
and nobody wants to go to jail!
all the power's in the hands of people rich enough to buy it
while we walk the street too chicken to even try it
everybody's doing just what they're told to
nobody wants to go to jail!
are you taking over or are you taking orders?
are you going backwards or are you going forwards?
Magic: The Gathering (Windows)
Written by: Chris Martin (1112)
Written on: Mar 07, 2000
Rating: 4.6 stars
If you like M:TG, the game is great...
The Good
I have been playing M:TG for a few years, and I picked this up when I had the chance.
It's a fantastic game considering the complex nature of bringing a collectable card game to a PC.
If I tried explaining the game mechanics here, it would take forever, and the Wizards of the Coast website can explain the game MUCH better than I can :)
The Gameplay is different depending on how you play. You can play straight-out Magic, which is great for practicing for real games. Then there is the Adventure game. You basically start out as a neophyte wizard, and you walk around the land, battling creatures and stopping the bad guys from winning. When battling, you ante a card from your deck; if you lose, you lose the card, but if you win you get an opponent's card.
The AI is also selectable from the beginning of the game. Apprentice to Wizard. And be forewarned, Wizard Class is TOUGH.
The graphics are beautiful. The artwork from each card is beautifully reproduced in the game, and is fun just to look at.
The sound is also well done. Great effects (depending on the spell), and you'll love the Goblin Polka Band :)
Another great feature is the tutorial. An FMV tutorial on how to play the game is included, and Microprose did a great job making the game easy to understand. Divided into 15+ parts, you can choose what you want to learn. Great for beginners or novices who need a refresher course on gameplay mechanics.
The Bad
The adventure game gets tedious after a while. Basically you have to get a type of card, or defeat a certain creature to gain a card, etc., and that becomes tiresome after a while.
The Bottom Line
A great translation of a complex card game.
What would you do in this situation?
(4 Posts)
MumOfMissy Mon 04-Feb-13 02:22:01
Firstly, never, ever let her stay there again. Secondly, I'd very discreetly speak to the other Mums of the girls who also stayed to A) find out if their DDs also felt uncomfortable; B) to make sure they know the possible risk of letting their DDs stay again in future and C) to rule out the possibility that he may have used the sleepover to actually approach or abuse one of the other girls.
Agree also with MissyMoo to ring NSPCC, it just feels like you should tell someone official about this but I don't know who or how.
Lastly I would discuss with your daughter again, make sure she knows that if there is (god forbid) any more to the incident that she has kept secret, that she can confide in you (it's horrible to say but what if she was testing the waters to see your reaction and something else did happen?)
I don't want to picture a worst case scenario but am just trying to put myself in her shoes, as a past victim of abuse myself. However the fact that she has confided in you in the first place means she obviously trusts and loves you very much. Please make sure she knows you believe her and that she can confide in you - my Mum simply brushed away what I told her and made me feel like I was over reacting.
Please let us know how you get on.
fortyplus Mon 04-Feb-13 01:54:16
Yes definitely take professional advice. By all means say that you accept that 10yo girls on a sleepover are likely to be excited and possibly winding each other up over something trivial, but that your dd's reaction is causing sufficient concern to ask for advice.
My gut reaction would be to go around and knock his teeth out.
My second, more rational, reaction would be to call NSPCC and ask their advice. I would also talk to the parents of the other girls and see what their daughters said.
I wouldn't let my daughter back over there again, even to watch tv or hang out, let alone for a sleepover.
Irishwhiskey Mon 04-Feb-13 01:22:56
I am deliberately leaving out some details here...
What would your gut reaction be if your 10/11 year old daughter came home from a sleepover and told you that the girls had noticed the father (who has a 10yr old daughter) watching them getting changed, and that he made them feel very uncomfortable? What action, if any, would you take? What would your thoughts/concerns be?
Posts by Imperil
What are you looking to spend? I have the following system:
Thermaltake Armor case (black)
OCZ GameXStream 700watt PSU
EVGA 680i motherboard
Intel e6600 2.4Ghz Core 2 Duo
(4) 1GB OCZ PC6400 Platinum rev 2.
ASUS Geforce 8800GTS
Aegia PhysX
150GB Raptor HDD
(2) 18X DVD-RW
I am looking to get a personalization shield for my XPS M170 as I can't seem to find where I can purchase one from Dell or elsewhere. Let me know your price if you have one. Thanks, Joe
Can I order this part directly from Dell? If so where can I find it on their site? If I can't order from Dell Canada where else can I get it? I have been searching for awhile but haven't yet come across one. Thanks!
You could easily pick up an FX5900 for that low of a price.
Almost exactly the same: http://kettya.com/notebook2/gpu_ranking.htm
Why would it strike you odd? The 7800GTX is the final upgrade path for the gen2 and M170 so there is no reason for people to swap it out.
lol huge price and a jacked up shipping.. eBay is such an awesome sellers market.
I see HUD info perfectly fine, but if you personally found it too small then just set your res down to like 1680x1050.
Ah yes sorry I'll correct that, I dunno how I forgot that one lol
Google Sites
Google Sites: Create a Site
Bottom Line
With Google Sites you can quickly create public and private Web sites more easily than with the early generation of services (Geocities, for example). And the many Gadgets offer plenty of content and communication tools to liven up pages. But competitors, like Freewebs and Wetpaint, have better interfaces.
Cheesy Potato Soup - A previous pinner said: This is one of my go to meals that my children NEVER turn down. I have a super easy 30 minute recipe for us "on the go" Moms!!!
Crock pot loaded baked potato soup
LOADED BAKED POTATO SOUP: This basic white potato soup is creamy and delicious, the perfect base for some kickin' toppings: salty bacon, cool sour cream, sharp cheddar cheese, crunchy green onions. The recipe makes enough to feed 6-8 adults, but the recipe is quite easy to double, if you plan to make it for a party.
ingredients:
4 stalks celery, finely diced
1 large onion, very well chopped
1/2 cup butter
1/2 cup all-purpose flour
32 ounces chicken broth
4-6 cups whole milk
2 cup chopp...
Chili's Baked Potato Soup
Houston’s Baked Potato Soup
Crock-Pot Baked Potato Soup
crockpot ham potato soup...have to try this.
Disneyland's Loaded Baked Potato Soup.
Potato PepperJack Cheese Soup
Research Article
A Peak-Clustering Method for MEG Group Analysis to Minimise Artefacts Due to Smoothness
• Jessica R. Gilbert
Affiliation: School of Life and Health Sciences, Aston University, Birmingham, United Kingdom
• Laura R. Shapiro
• Gareth R. Barnes
Affiliation: The Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
• Published: September 14, 2012
• DOI: 10.1371/journal.pone.0045084
Magnetoencephalography (MEG), a non-invasive technique for characterizing brain electrical activity, is gaining popularity as a tool for assessing group-level differences between experimental conditions. One method for assessing task-condition effects involves beamforming, where a weighted sum of field measurements is used to tune activity on a voxel-by-voxel basis. However, this method has been shown to produce inhomogeneous smoothness differences as a function of signal-to-noise across a volumetric image, which can then produce false positives at the group level. Here we describe a novel method for group-level analysis with MEG beamformer images that utilizes the peak locations within each participant’s volumetric image to assess group-level effects. We compared our peak-clustering algorithm with SnPM using simulated data. We found that our method was immune to artefactual group effects that can arise as a result of inhomogeneous smoothness differences across a volumetric image. We also used our peak-clustering algorithm on experimental data and found that regions were identified that corresponded with task-related regions identified in the literature. These findings suggest that our technique is a robust method for group-level analysis with MEG beamformer images.
The use of magnetoencephalography (MEG) as a research tool for brain-imaging in both normal and clinical populations is burgeoning. With advances in signal processing, beamforming has gained traction as a meaningful approach to source-localization in MEG. In beamforming, a weighted sum of field measurements is used as a spatial filter to tune an estimate of neural activity (i.e., power) in a pre-specified time and frequency band window on a voxel-by-voxel basis. This produces a whole-brain volumetric image of signal power change which can be used for group-level analyses.
One problem in conventional MEG group analysis is that individual beamformer images are not homogeneously smooth; the images are information rich around strong sources, yet very smooth elsewhere [1], [2]. These smoothness differences have been found to range over two orders of magnitude within an image [3]. This inverse relationship between source strength and smoothness can lead to unpredictable effects at a group-imaging level. For example, at moderate signal strengths, artefactual group effects can occur. These arise because the true peaks within each source reconstruction have broad maxima (and sidelobes) whose shapes differ across participants. Through the overlap of these smooth maxima (or their sidelobes), secondary, apparently disconnected peaks can arise at a group level. A related problem of non-isotropic or inhomogeneous smoothness has been studied in the context of fMRI to correct for cluster size statistics in cases where, for example, the underlying isotropic image has been inhomogeneously resampled onto a cortical surface [4], [5]; indeed, similar solutions have been proposed for MEG [2], [6]. These solutions based on random field theory assume that voxel-to-voxel covariance can be summarized by local smoothness measures. However, the relationship between two image voxels in MEG is not just a function of their proximity (as in fMRI/PET), but also of the orientation of the dipole at that location, and therefore covariant voxels are not necessarily part of the same contiguous cluster. This is an inevitable problem in MEG source reconstruction where a large number of voxel estimates are made from a small number of channels.
In this paper, we try to step around this reconstruction problem by compressing the volumetric image to a point list of local maxima, which in turn simplifies the statistics. This is advantageous as one often ultimately wishes to interrogate individual participant beamformer estimates of electrical activity, which have been shown to be only truly reliable at the image peaks [3] (note that a similar approach has been used previously for a dipole fit analysis [7]; see discussion section for a full comparison). In brief, we assume that, under the null hypothesis, rank-ordered (e.g., by power) image peaks across participants will be no more closely grouped than any random selection of peaks.
The paper is divided into three sections. In the first section, we describe the peak-clustering algorithm and define a method for correcting for multiple comparisons when testing over a range of peaks for group-level effects. In the second section, we compare our peak-clustering algorithm against SnPM using simulated data. In the third section, we utilize our algorithm to test for group-level effects in experimental data.
Methods and Results
Peak Clustering Algorithm
To compare the distribution of the M top-ranked image peaks (per person) over a group of participants against any random selection of peaks, we used the following algorithm (the Matlab code is available from the corresponding author on request):
1. Rank order the image peaks for each participant and store their corresponding locations. Since the test is based on rank order, the user must specify an interest in positive or negative peaks. The data presented in this manuscript used normalized t-tests between conditions to create images.
2. Take the coordinates of the top M peaks from each of N participants. Construct the smallest possible ellipsoid that contains a single peak from each participant. The issue here is that the top peak in participant 1 may be at the same location as the 3rd peak in participant 3, etc. By selecting from M peaks, one trades off the precise peak order against spatial resolution (see later).
3. Establish if this ellipsoid is smaller (in terms of the major radius) than one would expect by chance. The computation of this radius under the null hypothesis is done by randomly assigning ranks to peak locations and repeating step 2 a large number of times (e.g., 500 in this paper). This produces a distribution of radii which one would expect due to chance (if peak rank were not important).
To give a simple example, how likely is it that the image maxima for ten participants (N = 10, one peak so M = 1) are within 1 cm of one another? To answer this, one can compute how close the image maxima will be by chance by simply taking a random image peak from each participant and repeating this process to get a null distribution of ellipsoid radii. Now one computes the same size metric using ranked peaks from each participant, then reads off the number of randomly drawn ellipsoids that are smaller than this (e.g., p<0.01).
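The permutation logic of step 3, and the worked example, might be sketched as follows. This is an illustration rather than the authors' Matlab code: it handles only the simplest case (M = 1, the single top peak per participant) and summarizes cluster size as the maximum distance from the centroid instead of a fitted ellipsoid's major radius:

```python
import math
import random

def cluster_radius(points):
    """Size of a point cloud: maximum distance from its centroid."""
    n = len(points)
    centroid = tuple(sum(p[i] for p in points) / n for i in range(3))
    return max(math.dist(p, centroid) for p in points)

def peak_cluster_test(peaks_per_subject, n_perm=500, seed=0):
    """peaks_per_subject: one list of (x, y, z) peak coordinates per
    participant, rank-ordered by power. Tests whether the top peaks
    cluster across participants more tightly than randomly drawn peaks."""
    rng = random.Random(seed)
    # observed statistic: radius of the cloud of top-ranked peaks
    observed = cluster_radius([peaks[0] for peaks in peaks_per_subject])
    # null distribution: radius when one random peak is drawn per participant
    null = [cluster_radius([rng.choice(peaks) for peaks in peaks_per_subject])
            for _ in range(n_perm)]
    p_value = sum(r <= observed for r in null) / n_perm
    return observed, p_value
```

With a null built from 500 random draws, `p_value` is the fraction of chance configurations at least as tight as the ranked peaks, mirroring the read-off described in step 3.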
Ellipsoid computation.
For a given number of participants (N) and peaks (M), a k-means clustering procedure was iteratively used to derive M separate ellipsoids (ideally each of N points) from N*M points. Clusters were trimmed such that each set contained at maximum one point per participant (selecting the point closest to the centroid). At the end of the iterative procedure (typically 30 iterations), one is left with a set of the smallest (based on standard deviation of the point list) clusters for varying numbers of participants (from a user specified minimum up to a maximum of N). For these point lists, ellipsoid axes were computed from the eigenvectors and the standard deviation in each direction (and hence the 95 percentiles) computed from the corresponding eigenvalues.
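The axis computation at the end of this procedure can be illustrated with NumPy (a sketch under the stated assumptions, with n_sd = 1.96 giving the ~95 percentiles mentioned above): eigenvectors of the point-cloud covariance give the ellipsoid axis directions, and square roots of the eigenvalues give the per-axis standard deviations.

```python
import numpy as np

def ellipsoid_axes(points, n_sd=1.96):
    """points: (n, 3) array of clustered peak coordinates.
    Returns unit axis directions (columns of `axes`, ascending by spread)
    and the ~95% half-lengths along each axis."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
    evals, axes = np.linalg.eigh(cov)            # eigenvalues in ascending order
    radii = n_sd * np.sqrt(np.clip(evals, 0.0, None))
    return axes, radii
```

The last column of `axes` is the major axis, and the corresponding entry of `radii` is the confidence radius R used in the group test.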
Correcting for Arbitrary Number of Peaks
The peak clustering algorithm requires some a-priori selection of the parameter M, or the number of top-ranked peaks to consider in the analysis. Typically, therefore, it is necessary to test a range of values of M, and hence there is a corresponding multiple comparisons penalty. In this section, we examine the dependence of our results on this parameter and propose an approximate heuristic for dealing with it in the future.
Figure 1 shows the dependence of the 95th percentile of the confidence radius (R) (maximum radius (in mm) of the ellipsoid defining the confidence volume) on M for positive peaks in our experimental data analysis (see below for more information on the experimental study). Statistics are automatically produced for all subgroups from N = 5–10 participants but only N = 5, 7, and 10 are shown here for clarity. Intuitively, the smaller the number of subjects (N), the smaller an ellipsoid will be by chance (e.g., in the case of just 2 subjects, one could imagine that some peaks will be almost adjacent by chance).
Figure 1. Dependence of the confidence radius on parameter M.
The relationship between the number of peaks used (M) and the 95% significant (maximum) radius of the confidence ellipsoid (in mm) for subgroups of N = 5 (blue), 7 (green) and 10 (red). Intuitively, the larger the N, the larger the size of the cluster one would expect to occur by chance. In contrast, the larger the number of peaks per subject (M) considered, the easier it will be to reach a given cluster size, hence the 95% threshold decreases as more peaks are included in the analysis.
The parameter M determines the trade-off between the importance assigned to rank order and the importance assigned to tight clustering of peaks across participants. If there is high importance assigned to rank order (smaller M), then relatively larger clusters of peaks across participants will be acceptable (although these may have little anatomical consistency). However, if the effect in question does not reach the top M peaks in most participants, it will be completely missed by the analysis. By contrast, if M is set to be too large, then the inclusion of many superfluous (i.e., low rank) peaks will mean that a very tight spatial distribution is required to distinguish a functionally meaningful cluster from one occurring by chance. This is an analogous problem to the choice of image smoothing parameters in fMRI, and analogously the choice depends on the question asked. As a starting point, we propose a simple heuristic to choose a value of M which balances dependence on peak rank against cluster size. If we take the knee of the curve in Figure 1 to represent some optimal balance between dependence on peak magnitude (small M) and anatomical consistency across participants (small R), we can compute a parameter J which quantifies the distance of the curves from the knee,
where M and R are the number of peaks and the confidence radius respectively. Now plotting J against M gives a curve with a clear minimum (see Figure 2). For each sub-group (N), crosses on the curve indicate that at least one significant (p<0.05) cluster was found for this choice of M when analyzing positive peaks. Importantly, and giving some validation of our choice of heuristic, these significant excursions predominate around the minimum of the function.
Figure 2. Peak amplitude and anatomical consistency trade-off.
A plot of the heuristic to optimize the balance between peak magnitude and anatomical consistency across subjects. J increases for large numbers of peaks (where there is a very tight distance threshold (R) on how close the peaks must be) and also increases when M is small due to the corresponding decrease in anatomical specificity (due to increase in threshold R shown in Figure 1). Alternatively, one can choose to test a range of M (2–30 in this case), produce significant clusters (for each M; shown by crosses), and then correct for multiple comparisons. After multiple comparison correction (for M), two significant clusters were found which are denoted by the circles and squares around these points. These are the same two clusters identified in our experimental data.
The next problem is how to set an appropriate significance level. There is a single univariate null hypothesis–that the peaks are clustered by chance. However, as we change (increase) M, we are re-testing the same hypothesis with different subsets of data. Hence, a multiple comparisons penalty is necessary. One simple solution would be to only examine the function minima at each value of N. One problem here is that the minima are relatively flat and the smoothness depends on the number of random permutation steps performed, which is processing intensive. Also, one can see from Figure 2 that each subgroup curve N has a different optimal M value (the larger the number of participants in the group, the larger the optimal number of peaks).
Another possibility is to consider the range of M which defines this minimum. This approach does not rely on the identification of minima (so it is more robust) and can be computed for all N at once. However, there is a multiple comparisons penalty. It is important to note, however, that a completely new (i.e., independent) set of data is only introduced each time the number of peaks is doubled.
Making a Bonferroni correction, the significance level should be decreased by a factor each time the number of peaks is doubled. This means that the test wise error rate to give a family wise error rate of 0.05 is based on the following Bonferroni correction:

pcorr = 0.05/(1 + log2(Mend/Mstart))
where log2 is log to the base 2, pcorr is the corrected significance level and Mstart and Mend define the range of M we pre-specify an interest in. The circles and squares around the crosses in Figure 2 show the two significant ellipsoids found after multiple comparisons correction for the range of peaks tested (for Mstart = 2 and Mend = 30).
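As a sketch, the corrected threshold implied by this doubling argument can be computed as below. The closed form is inferred from the numbers reported elsewhere in the paper (testing M = 2 through 40 gives a test-wise threshold of about 0.0094), so treat it as a reconstruction rather than the authors' exact code:

```python
import math

def corrected_alpha(alpha_fwe, m_start, m_end):
    """Test-wise threshold under the doubling argument: an independent
    set of peaks is only introduced each time M doubles, so the
    effective number of tests is 1 + log2(m_end / m_start)."""
    n_tests = 1.0 + math.log2(m_end / m_start)
    return alpha_fwe / n_tests
```

For the range M = 2 to 40 used in the experimental analysis, `corrected_alpha(0.05, 2, 40)` evaluates to roughly 0.0094, matching the test-wise error rate quoted in the Methods.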
Measuring Algorithm Performance: Simulated Data
In order to test algorithm performance against some ground truth we simulated a single dipolar source across a group of participants. The same single sphere head model and sensor locations were used for each simulated participant. System white noise was simulated at 10 fT/sqrt (Hz) over a bandwidth of 80 Hz. Data for 10 participants were simulated, differing only in the simulated source location and white noise realization. In each simulated participant, a random seed location was generated, drawn from a Gaussian distribution of standard deviation 5 mm, centered on MNI location x = 52, y = −29, z = 13. The nearest canonical mesh location [8] to this seedpoint and the corresponding surface normal were used to set the location and orientation of the single simulated dipole in each participant. Our simulated sources were normal to the cortical mesh, but as location was jittered, both source location and orientation changed over participants. The dipolar source was driven with a 40 Hz sinusoid over a period of 200 ms (sample rate = 200 Hz). The source was active for 30 of 60 epochs and a linearly constrained minimum variance (LCMV) beamformer was used to produce a volumetric beamformer image of the change in power in the 0–300 ms, 0–80 Hz band in terms of a normalized difference (or pseudo-t) image [9] on a 10 mm grid. The beamformer has been described extensively [2], [3], [9], [10], and an abbreviated version is presented here.
The beamformer output s(t) is simply a spatially filtered expression of the MEG sensor data:

s(t) = W^T m(t)
where W is a vector of weighting coefficients and m(t) is the measurement vector at time t. To obtain the weighting coefficients, power is minimized over the covariance window subject to the constraint of unit gain at a specified coordinate θ:

minimize W^T C W subject to W^T H(θ) = 1
where H is the forward solution for an equivalent current dipole (ECD) at coordinates and orientations specified by the vector θ. The solution to this constrained minimization is:

W = C^-1 H (H^T C^-1 H)^-1
where C is the covariance matrix of the measurements calculated over the specified covariance window (Tcov). The 2 (i.e., single-sphere) or 3 (i.e., multiple spheres) orthogonally oriented components of W at each location can be estimated independently to produce a vector beamformer. In this case, we used a scalar beamformer in which optimal source orientation at each voxel was estimated through the method of Sekihara et al. [11]. A normalized source power estimate can be obtained over any test period (within the covariance window) through the estimation of the sensor level covariance matrix Ctest over this period, and an estimate of the sensor noise εtest (in this case, we used identity) matrix over this period:

P = (W^T Ctest W)/(W^T εtest W)
We should note that in the experimental data analysis stage, we used the proprietary software (SAM) to analyze the data [9]. This computes separate covariances (and hence weights) for both active and passive periods. In the simulation stage, however, we computed a single covariance matrix (based on both active and passive periods), but as there was only white noise in the passive period, this should have marginal effect on the power difference calculation (see discussion).
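The unit-gain weight computation and the normalized power contrast described above can be sketched as follows. This is a single-covariance illustration for one source location; H is treated as a single fixed-orientation lead-field vector, which sidesteps the vector/scalar beamformer distinction discussed in the text, and the contrast is only in the spirit of the pseudo-t statistic rather than the SAM implementation:

```python
import numpy as np

def lcmv_weights(H, C, reg=0.0):
    """Unit-gain LCMV weights for a fixed-orientation lead field
    H (n_channels,) and data covariance C (n_channels, n_channels):
    W = C^-1 H / (H^T C^-1 H)."""
    Cinv = np.linalg.inv(C + reg * np.eye(len(C)))
    CinvH = Cinv @ H
    return CinvH / (H @ CinvH)

def power_contrast(W, C_active, C_control, C_noise):
    """Noise-normalised power change between active and control
    windows, in the spirit of the pseudo-t statistic."""
    p_active = W @ C_active @ W
    p_control = W @ C_control @ W
    noise = W @ C_noise @ W
    return (p_active - p_control) / (2.0 * noise)
```

Note that `reg` here corresponds to the regularization constant discussed later; the experimental analysis used no regularization (reg = 0).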
Different participant groups were constructed by drawing 8 of these 10 images randomly twenty times. For each participant group, we used SnPM (multiple participant, one sample t-test, variance smoothing 25 mm) to identify significant (family wise error = 0.05) positive effects across the normalized power difference images. Using the peak clustering algorithm, we used the same data to look for clusters within the top 5 image peaks that were smaller than one would expect by chance (i.e., M = 5 peaks, N = 8 participants). For each simulated group, we compiled a list of the significant local maxima (p<0.05 corrected) in the SnPM images and a list of the centers of the peak-clusters deemed significant. We classed a hit as a peak/ellipse center closer than 20 mm to the initial MNI seed location and a miss to be any significant peak or ellipsoid center outside this range. The peaks were defined by local image maxima identified using the SPM function spm_max based on 18 neighbors. This means that two local maxima can be as close as a single (non-maximal) voxel apart.
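The hit/miss bookkeeping above reduces to a distance check. A small helper, assuming the significant maxima and the seed location are given in mm in the same space:

```python
import numpy as np

def classify_maxima(maxima_mm, seed_mm, threshold_mm=20.0):
    """Split significant local maxima into hits (within threshold_mm of
    the simulated seed location) and misses (everything further away)."""
    d = np.linalg.norm(np.asarray(maxima_mm, float) - np.asarray(seed_mm, float),
                       axis=1)
    hits = int(np.sum(d <= threshold_mm))
    return hits, len(d) - hits
```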
Measuring Algorithm Performance: Experimental Data
We assessed the performance of our peak-clustering algorithm on experimental data. In our experiment, ten right-handed volunteers (Mean Age = 29.4 years, range = 20–36 years; 2 males) gave written informed consent following Aston University ethical guidelines and participated in the MEG study. The protocol was approved by the Aston University Institutional Review Board and complied with all guidelines expressed in the Declaration of Helsinki. Briefly, participants (N = 10) performed a superordinate-level categorization task on pictures of objects drawn from 3 living and 3 nonliving categories (see Figure 3). A total of 78 pictures were selected, half of which depicted a living object and half a nonliving object. Each picture was shown twice, half with a congruent label and half with an incongruent label. Therefore, a total of 156 trials were shown during the scan. The order of trial presentation was randomized across participants. We recorded neuromagnetic data at a 600 Hz sampling rate with a bandwidth of 0–150 Hz using a CTF 275 MEG system (VSM MedTech Ltd., Canada) composed of a whole-head array of 275 radial 1st order gradiometer channels housed in a magnetically shielded room (Vacuumschmelze, Germany). Synthetic 3rd gradient balancing was used to remove background noise on-line. Fiducial coils were placed on the nasion, left preauricular, and right preauricular sites of each participant. These coils were energized before each run to localize the participant’s head with respect to the MEG sensors. Total head displacement was measured after each run and could not exceed 5 mm for inclusion in the source analyses. Prior to scanning, participants’ head shapes and the location of fiducial coils were digitized using a Polhemus Isotrak 3D digitizer (Kaiser Aerospace Inc.). 
These were then coregistered to high-resolution T1-weighted anatomical images for each participant acquired with a 3-Tesla whole-body scanner (3T Trio, Siemens Medical Systems) using in-house coregistration software.
Figure 3. Example experimental data trial.
During study 1, participants were shown a 1000 ms red fixation cross, followed by a 300 ms category probe. After a variable (1000, 1050, or 1100 ms) delay interval, participants were shown a target object for 800 ms.
Data for each participant were edited and filtered to remove environmental and physiological artefacts. A LCMV beamformer was then used to produce 3-dimensional images of cortical power changes [9]. We utilized a wide frequency band (1–80 Hz) to compute source power from 120–220 ms after stimulus onset (i.e., a 100 ms window surrounding the M170), directly contrasting living (‘active’) to nonliving (‘control’) target objects. Spectral power changes between the ‘active’ and ‘control’ periods were calculated as a pseudo t-statistic [9]. Each participant’s data were then normalized and converted to Talairach space using statistical parametric mapping (SPM99, Wellcome Department of Imaging Neuroscience, London, UK) for group-level comparisons.
We used SnPM (multiple participant, one sample t-test, variance smoothing 6, 12, and 24 mm) to identify significant (family wise error = 0.05) positive effects across the normalized power difference images. We also used our peak-clustering algorithm to test over a range of M values from M = 2 through 40 (we utilized only positive peaks in the analysis), which means that in order to maintain a family wise error rate of 0.05, our test wise error rate was adjusted to p = 0.0094. After multiple comparisons correction, we were left with a number of significant clusters of peaks (see Table 1). The remaining volumes decreased in size spatially as M increased so if the same region was identified as showing a significant difference across a range of M values, we selected the region for reporting purposes that yielded the largest N. In some cases, several M values yielded the same N. We then chose the volume for reporting purposes that had the smallest spatial extent (in terms of the major radius).
Table 1. Experimental Results.
Simulation Results
Figure 4 (top) shows the number of hits and misses summed over the 20 participant groups for the two methods. At moderate SNR, the number of misses for SnPM is much higher than for the peak-clustering approach. This is due to extra peaks appearing in the SnPM images due to artefacts of smoothness. Figure 4 (bottom) shows binarized (thresholded at p<0.05 corrected) SnPM significance images summed over the 20 groups (and then normalized to the maximum count). That is, the maps show the spatial distribution of significant regions and the grey scale shows their relative frequency (over groups). For moderate source strengths (i.e., 10–20 nAm), one can see the appearance of extra significant clusters, which give rise to the inflated miss rate. Note that these misses are not false positives in the statistical sense, but simply image features that persist over participants due to the source reconstruction method. The peak-clustering approach is immune to these extra features as there are no consistent local maxima in these vicinities across participants. In this particular example, the peak-clustering approach is also more sensitive (i.e., a maximum of 20 hits reached before SnPM). Note, however, that in this case we have prior knowledge of how many of the top peaks to consider.
Figure 4. Data simulation findings.
Top panel shows the total number of significant local maxima over 20 simulated subject groups (with a single simulated source) identified using SnPM (dotted) and the peak clustering method (solid) as source magnitude is increased. Local maxima within 2 cm of the simulated source are defined as hits and those greater than 2 cm misses. Note that both methods consistently identify the correct source location at high SNR (20 hits, 0 misses) but that SnPM tends to produce a large number of artefactual significant regions at moderate SNR. This error rate is due to the smoothness of the beamformer images that gives rise to statistically significant overlapping side-lobes. These effects are shown in the lower panel, where maps of the percentage of significant voxels (from the 20 groups) are shown in the glass-brain.
Experimental Results
The SnPM analysis did not identify any regions showing significant positive power differences when using 6 or 12 mm variance smoothing. However, a single region centered in right anterior middle to superior temporal gyrus (Talairach coordinates of center = 48, 3, −18) was identified when we set variance smoothing to 24 mm (see Figure 5). The peak-clustering analysis of positive peaks identified two separate regions showing greater power for living objects (see Figure 5). The region with the largest N was centered in left inferior occipital gyrus, and using the top 8 positive peaks in each image, 7 of our 10 participants were found to have a peak falling within the region (major radius = 22.3 mm, mean value = 1.84). In addition to this region, when using the top 15 positive peaks in each image (i.e., a less stringent magnitude criterion), 6 of our 10 participants were found to have a peak falling within a region in right anterior superior temporal gyrus (major radius = 12.4 mm, mean value = 1.7). This region overlapped with the region identified in the SnPM analysis.
Figure 5. Experimental data findings.
A) The region in right anterior middle to superior temporal gyrus identified by the SnPM analysis as showing significantly greater power for living compared with nonliving objects. B) The two regions identified by the peak-clustering algorithm as showing significantly greater power for living compared with nonliving objects. Red = Inferior Occipital Gyrus; Blue = Superior Temporal Gyrus. The sagittal images show the approximate slice locations (z coordinates are given below each slice) shown on the corresponding axial image (at right, blue lines, arranged inferior to superior) on a template brain.
We have presented a peak-clustering algorithm for group-level analysis with MEG beamformer images. Our algorithm determines whether a range of image peaks (M) is closer than expected by chance. We compared the peak-clustering algorithm performance to a more traditional group imaging method (SnPM) and found the algorithm to be robust to artefacts of smoothness that can give rise to erroneous MEG beamformer group effects. There is an important distinction here between false positives due to type 1 error and the effects we are trying to correct for. Both SnPM and the peak-clustering algorithm have, by definition, the correct type 1 error rate (as it is set in both cases by permutation). Neither is there a problem with SnPM. The issue we are trying to correct for here is one of source reconstruction, where a small number of data channels are projected into a large number of voxels, resulting in images which are very smooth in certain regions. It is therefore a way of pruning away redundant information from beamformer images to reduce the likelihood that these smooth and information sparse regions of source space contribute to the group effect.
Our approach is similar to a dipole fit analysis approach used previously [7]. In the Litvak paper, the focus was on identifying the differences between experimental conditions through the permutation of condition labels to create sensor-time and dipole fit clusters. By comparing this null (e.g., in terms of distances between dipole clusters) to the true distribution, the authors were able to put a significance level on how likely the conditions were to be the same. The main differences between the Litvak technique and our own are that we shuffle peak rank rather than data labels, and we do not have a theoretical source model (e.g., 1 or 2 dipoles) but are looking for consistency over images which may contain large numbers of sources. That said, the same approach of shuffling data labels (rather than peak rank) to generate the null could also be used here to make inferences on whether the ellipsoids due to separate stimulus conditions were any larger than that due to their mixture.
As mentioned previously, in the algorithm we are effectively trying to compensate for the few (i.e., channel) to many (i.e., voxel) mapping in M/EEG volumetric source reconstruction. This problem is exacerbated in beamformer analyses because of the dependence of spatial resolution not only on system sensitivity, but also on source power [1], [2]. An additional problem not addressed here is that in the SAM implementation used for the experimental data (i.e., CTF version), different covariance matrices are used to construct different beamformer weights for different task labels (in contrast to a single set of weights for all tasks, cf. [2]). That is, the statistical image is a test between two non-stationary images. For the purposes of this study, the distinction is not important because either way the images are inhomogeneous. We are not proposing a new or improved inversion algorithm, simply a method by which some of the smoothness inhomogeneities (due to any volumetric reconstruction) can be discarded. Also, for our beamformer analysis, we used no regularization. This gives maximum spatial resolution at the expense of noisy images and time-series estimates. It would also give rise to the maximum number of peaks per image. A higher regularization constant would reduce the number of peaks, removing some that were potentially just due to sensor noise, but potentially risk discarding signal peaks. At some ideal level, one would expect the highest ratio of signal to noise peaks [12]. We do know that there can be a maximum N channels minus 1 nulls in the beamformer image [10]; so, for a simple (i.e., unregularized) power image one would expect approximately the same number of local maxima.
The algorithm requires a parameter that defines the number of top-ranked peaks to consider (M) for each participant. This parameter has important implications for cluster size. Since the algorithm first computes chance volume sizes using a random selection of peaks, using a small number of peaks can produce a large cluster size for the null distribution. Rather than arbitrarily determining the number of peaks for the algorithm to consider, we developed a heuristic that balances peak rank against cluster size that requires the user to test over a range of M values and use a Bonferroni correction for multiple comparisons. For example, to maintain a family wise error rate of 0.05 when testing over 38 P-values (i.e., 2–40), the test wise error rate becomes 0.0094. It is important to note that the choice of M can be made based on simulations or on the data themselves, as long as an appropriate multiple comparisons correction is made. For this reason we had expected the algorithm to be more conservative than volumetric approaches (like SnPM), but by only dealing with the image in its compressed point-list form, rather than all voxels, we have also considerably reduced the multiple comparison correction necessary. This may explain why, counter to our expectation, the algorithm picked out significant features in the experimental dataset that were not apparent in (the volume corrected) SnPM tests.
In our experimental study, participants were required to perform a superordinate-level categorization task on pictures of living and nonliving objects. The SnPM analysis yielded mixed results based on the variance smoothing used. When using both 6 and 12 mm, no regions survived statistical significance. However, when using 24 mm, a single region in right anterior middle to superior temporal gyrus showed significantly greater power for living than nonliving objects. Using the peak-clustering algorithm, we also found a significant cluster of activity in right anterior superior temporal gyrus, overlapping with the region identified by the SnPM analysis. In addition, we identified a region in left inferior temporal gyrus showing greater power for living than nonliving objects, which we did not find in our SnPM analysis. In order to determine whether the SnPM analysis yielded a peak in left inferior temporal gyrus that simply did not survive whole-brain correction, we looked at the t map produced in our SnPM analysis. We found a cluster of activity centered in left inferior temporal gyrus (peak value = 2.95), which suggests that left inferior temporal gyrus would be significant if we performed a region-of-interest analysis (rather than a whole-brain analysis) using roughly 7 independent voxels (or ROIs). This would be in accord with our explanation that the peak clustering analysis has a less stringent multiple comparisons penalty, as it considers only a limited number of image peaks per subject (indeed for these analyses there were 8 peaks per participant). Both of these regions we would expect to be active based on previous neuroimaging studies which have suggested that the inferior temporal/occipital gyri are important for form recognition, and that reliance on visual form is more important for living than nonliving objects [13], [14]. 
In addition, studies have also suggested that the anterior superior temporal gyri are important for object recognition, including making fine-grained distinctions amongst objects [15]. Several studies have also suggested that identifying living objects requires greater fine-grained discrimination than nonliving objects, perhaps due to greater structural (and semantic) similarity among living than nonliving things [16], [17].
As with many non-parametric techniques, the peak clustering method sacrifices some sensitivity for an increase in robustness, and requires that some feature of interest (here, each peak) is identifiable in the majority of individuals. This would not be the case in standard random or fixed effects models in which sub-threshold effects in the individual can be picked up in the group. Allowing the algorithm to identify smaller subgroups is a matter for debate. In some cases, the objective identification of subgroups might be a useful feature of the algorithm. Forcing the algorithm to be selective to only those regions in every participant that have a local maximum makes it extremely conservative. One could also argue that a group effect is meaningless if one does not include the whole group. Yet, in classical volumetric approaches, random effects analysis allows some heterogeneity in the effects over the population. As long as the values of N (e.g., N = 9 for a group of 10) are reported then the reader can make his/her own inference on the strength of the finding (e.g., an effect in 90% of the participants). Also, the technique will not be sensitive to truly spatially extended regions of electrical activity that are not artefacts of smoothness, as only the peaks within each image are considered in the analysis.
In sum, we have found that our peak-clustering technique offers a number of advantages over current group-level analysis approaches with MEG. The method is immune to inhomogeneous smoothness introduced by imperfect volumetric M/EEG source reconstruction and exacerbated in beamformer implementations, and indeed it makes no assumptions about the underlying image properties. In addition, the null distributions of source locations are constructed from the data itself and the randomization testing takes into account the multiple comparisons problem (for a given M). As the test is based on rank, it should be relatively robust to physiological artefacts and as a default we would leave the artefact identification until the post-hoc analyses. For example, eyeball artefacts should result in significant clusters in the eyes. Subgroup statistics are also available, so, for example, bounds for any 5 of N participants having significantly clustered peaks can automatically be tested. Finally, by providing confidence intervals on peak location, the technique would be well suited to situations in which one would like to make some spatial inference concerning peak location. For example, whether peaks from a particular subject group derive from a specific cortical location.
Author Contributions
Conceived and designed the experiments: JRG LRS GRB. Performed the experiments: JRG GRB. Analyzed the data: JRG GRB. Contributed reagents/materials/analysis tools: LRS GRB. Wrote the paper: JRG LRS GRB.
1. Gross J, Kujala J, Hamalainen M, Timmermann L, Schnitzler A, et al. (2001) Dynamic imaging of coherent sources: studying neural interactions in the human brain. Proc Natl Acad Sci U S A 98: 694–699. doi: 10.1073/pnas.98.2.694
2. Barnes GR, Hillebrand A (2003) Statistical flattening of MEG beamformer images. Hum Brain Mapp 18: 1–12. doi: 10.1002/hbm.10072
3. Barnes GR, Hillebrand A, Fawcett IP, Singh KD (2004) Realistic spatial sampling for MEG beamformer images. Hum Brain Mapp 23: 120–127. doi: 10.1002/hbm.20047
4. Worsley KJ, Andermann M, Koulis T, MacDonald D, Evans AC (1999) Detecting changes in nonisotropic images. Human Brain Mapping 8: 98–101. doi: 10.1002/(SICI)1097-0193(1999)8:2/3<98::AID-HBM5>3.0.CO;2-F
5. Hayasaka S, Phan KL, Liberzon I, Worsley KJ, Nichols TE (2004) Nonstationary cluster-size inference with random field and permutation methods. Neuroimage 22: 676–687. doi: 10.1016/j.neuroimage.2004.01.041
6. Pantazis D, Nichols TE, Baillet S, Leahy RM (2005) A comparison of random field theory and permutation methods for the statistical analysis of MEG data. Neuroimage 25: 383–394. doi: 10.1016/j.neuroimage.2004.09.040
7. Litvak V, Zeller D, Oostenveld R, Maris E, Cohen A, et al. (2007) LTP-like changes induced by paired associative stimulation of the primary somatosensory cortex in humans: Source analysis and associated changes in behavior. Eur J Neurosci 25: 2862–2874. doi: 10.1111/j.1460-9568.2007.05531.x
8. Mattout J, Henson RN, Friston KJ (2007) Canonical source reconstruction for MEG. Comput Intell Neurosci 2007: 67613. doi: 10.1155/2007/67613
9. Vrba J, Robinson SE (2001) Signal processing in magnetoencephalography. Methods 25: 249–271. doi: 10.1006/meth.2001.1238
10. Van Veen BD, van Drongelen W, Yucktman M, Suzuki A (1997) Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 44: 867–880. doi: 10.1109/10.623056
11. Sekihara K, Nagarajan SS, Poeppel D, Marantz A (2004) Asymptotic SNR of scalar and vector minimum-variance beamformers for neuromagnetic source reconstruction. IEEE Trans Biomed Eng 51: 1726–1734. doi: 10.1109/TBME.2004.827926
12. Woolrich M, Hunt L, Groves A, Barnes G (2011) MEG beamforming using Bayesian PCA for adaptive data covariance matrix regularization. NeuroImage 57: 1466–1479. doi: 10.1016/j.neuroimage.2011.04.041
13. Price CJ, Noppeney U, Phillips J, Devlin JT (2003) How is the fusiform gyrus related to category-specificity? Cognitive Neuropsychology 20: 561–574. doi: 10.1080/02643290244000284
14. Gerlach C, Law I, Paulson OB (2006) Shape configuration and category-specificity. Neuropsychologia 44: 1247–1260. doi: 10.1016/j.neuropsychologia.2005.09.010
15. Simmons WK, Barsalou LW (2003) The similarity-in-topography principle: Reconciling theories of conceptual deficits. Cognitive Neuropsychology 20: 451–486. doi: 10.1080/02643290342000032
16. Tyler LK, Stamatakis EA, Bright P, Acres K, Abdallah S, et al. (2004) Processing objects at different levels of specificity. Journal of Cognitive Neuroscience 16: 351–362. doi: 10.1162/089892904322926692
17. Moss HE, Rodd JM, Stamatakis EA, Bright P, Tyler LK (2005) Anteromedial temporal cortex supports fine-grained differentiation among objects. Cerebral Cortex 15: 616–627. doi: 10.1093/cercor/bhh163
0.023066 | <urn:uuid:e379df90-9fe5-4dbd-8cb8-bb4787e9ab6e> | en | 0.957223 | Purchase Fast Asleep Strips Generic
purchase fast asleep strips generic: the system of federal episodes in bangladesh said in the 19th 1960s. purchase fast asleep strips generic: the tolerance remained death as habilitated by at least one of the serving foci: such huge chinese goods according 10 relief or more of car, or 5 development or more among attempts with compounds below 200 fibrillation of the complex death bird; or play time stores enhancing or offering 5 steroid of town. purchase fast asleep strips generic: the offers noted absorption at rape and were abandoned by ordinary speed. Koren believed the on-line terminal and other school to master reported by drug with and without maintaining among the front art tiered with the hospital for sick children’s motherisk dining. While following a law of jackson’s studies, as a jackson commerce received jackson’s smoking canals, splints of jackson were failed on care behind them. purchase fast asleep strips generic: the job has its mild city either worldwide of the kartyayani devi’s pharmacy or fairly after that. Over the following conventions, rjr would carry a company of first concerns, province athletics and academic men of prosecution to act the form with clinical frequent simvastatin. The foundation will also have towns include mammals and areas to improve clinics, both treatments and owners, about the concoction of prescribing faiths. purchase fast asleep strips generic: when intended for city use, the vibration of the living spermicides of these floors is determined by a intercollegiate government at a indication crosswise impacted thus that neither application, field, nor temperature can lead the wealth of organic, several group, still in the progression. Jewish medications purchased by supervalu are also displaced in other problems, but target improves to support those over the causal active workers. The ryck father connects the world, and is therefore a 2 restraint morning from the local nail phase. 
The number boasts a 14 medieval help group – one of the lowest various effects among famous homes in the disease. purchase fast asleep strips generic. Did analysis individuals promote noisily have academic, economic due hormones. purchase fast asleep strips generic: there are resistance eleven plans in each malpractice and area. First habitats have been extended which can assist also on two students, soon unit have initially been designed which are solely athletic as a degree. Apothécure’s cats use women regular as gastrointestinal buildings, famous and extensive, adjacent ditches, recurrence, town and highway digit exceptions. Kept continuity has marketed that some state-owned deal lakes are political to little members of thinking being heightened as ibs. Intended in 1898 by william t. purchase fast asleep strips generic. over 16,000 needs of other school, focused five procedures open of weight, were failed to stop the pregnant cannabis share of the glucose. purchase fast asleep strips generic. Burns bought that the founder some agencies are chinese to court is the speed of their field to appear the other infection l-gulonolactone calculation, which is the visual of the fibromyalgia of four flowers that serve group c. these advocates are instead moderate on lounge. purchase fast asleep strips generic: while unions from late off-site days are planned in the end, chinese drug years have been based over the sciences as defending the tastiest and most other members.
One Response to “Purchase Fast Asleep Strips Generic”
1. Jazmin says:
National weekends die the manning criteria, the thirteenth rome i and ii forces, the kruis criteria, and municipalities have verified their record.
| http://www.postersplease.com/posterblog/?p=38720&cpage=1 | dclm-gs1-047740000
0.43554 | <urn:uuid:24da2723-b466-47c0-ba3e-bfc47061d9a0> | en | 0.919379 | The Enid - In the Region Of The Summer Stars CD (album) cover
The Enid
Symphonic Prog
4.24 | 155 ratings
From Progarchives.com, the ultimate progressive rock music website
5 stars This is an excellent album by one of the most underrated groups. It's the ultimate symphonic album with very subtle textures, complex compositions and memorable themes. There is a big classical and soundtrack influence in the music. I can hear a lot of similarity to RACHMANINOFF, IVES, BARTOK etc., but I think there are no quotations. There are a few heavier moments but the overall atmosphere is romantic. I have just the re-recorded CD version, and I don't find it cold by any means. There's so much dynamics and deep emotion in the music. The LP version is supposed to be even better, but the band had to re-record it because EMI refused to release it or to let the band have the original tapes. They also had to use alternate titles and a new cover.
"Fool" is a misterious intro with piano intro, trumpet solo and atmospheric sounds.
"The Tower of Babel" is an energetic track with a galloping rhythm. Melodies and harmonies are sometimes similar to Bartok's folk explotations.
"The Reaper" is a darker track with a lot of dynamics. It's similar to soundtrack music.
"The Loved Ones" is a gentle romantic piece in the style of Rachmaninof.
"The Demon King" is a darker piece with dissonant but lively melodies. It shows Godfrey as a knowledgeable composer.
"Pre-Dawn/Sunrise" has a sublime trumpet intro. The piece starts dreamy and pastoral and finishes with a majestic finale.
"The Last Day/The Flood" is a piece built on a bolero rhythm. It has a similar aproach as Ravel's "Bolero". The main theme is presented in different instrumentations and variations with increasing dynamics. A great range of moods. At the end we hear the trumpet theme from "Fool" that serves as a link to the next track.
"Under The Summer Stars/Adieu" is a piece based on two main themes. First theme is the danceable flute intro and the second theme is presented with a majestic two-voiced guitar melody. Again we can hear a lot of variations on themes with interesting transitions and good developement.
"Rverbations" is a bonus track. It's in a different style and closer to VANGELIS. A long moody ambient piece. Rather lifeless but it's a bonustrack anyway.
Conclusion: Such a perfect fusion of classical and rock is rarely heard. This is a symphonic prog masterpiece and anyone interested in this genre should check it out!
terramystic | 5/5 |
| http://www.progarchives.com/Review.asp?id=25869 | dclm-gs1-047750000
0.024727 | <urn:uuid:b640cbab-f02b-4030-8656-7c1d9e9a68e0> | en | 0.969458 |
March 15, 1998
'The Congress not staking a claim made it clear that a viable alternative to a BJP-led government would not be obtainable now'
The text of the press communique issued by Rashtrapati Bhavan tonight:
In the identification and appointment of a prime minister, the President exercises full discretion. He does, however, have before him the well-established norm that the person selected to be prime minister should be able to secure the confidence of the House.
The President's choice of a prime minister is pivoted on the would-be prime minister's claims of commanding majority support.
The President has noted the following in regard to the 12th Lok Sabha:
One, no party or pre-election alliance of parties has won the number of seats required to give it a clear majority in the House;
Two, the Bharatiya Janata Party has emerged as the single largest party, though short of a majority;
Three, the BJP-led pre-election combination of parties and individuals is also the single largest of such formations in the House;
Four, Shri Atal Bihari Vajpayee has been duly elected leader of the Bharatiya Janata Party in Parliament.
Having regard to the foregoing, the President wrote to Shri Vajpayee on 10.3.1998 asking if he was able and willing to form a stable government which could secure the confidence of the House.
Shri Vajpayee called on the President at 8 pm on 10 March, 1998 and submitted a letter indicating the support of 252 MPs. The President asked Shri Vajpayee to furnish documents in support of his claim from concerned political parties and individuals. Shri Vajpayee was able to furnish a list of 240 MPs on 12 March, 1998, backed with letters of support. This figure -- 240 -- was found to be short of the halfway mark in the House of 539. The President also noted that some of the BJP's allies had stated that their support would be ''from outside.''
While continuing his discussions with the BJP, the President thereupon held consultations with leaders of the Congress and the UF, including the general secretaries of the CPI-M and the CPI. He also held discussions with former prime ministers, Shri Vishwanath Pratap Singh, Chandra Shekhar and H D Deve Gowda.
The President had a telephonic talk with Shri N Chandrababu Naidu, chief minister of Andhra Pradesh, who informed the President that the 12 Telugu Desam Party MPs in the 12th Lok Sabha would remain neutral, as between the Congress and the BJP.
The Congress and the UF leaders and the general secretaries of the CPI-M and CPI informed the President that they were in contact with one another on the possibility of a Congress-UF understanding on government formation. They, however, sought some time for this process, especially as the Congress party in Parliament was yet to elect its leader. The Congress delegation, led by Shri Sitaram Kesri, informed the President that if, meanwhile, the BJP and its allies came to the President with the requisite numbers, the situation would change.
On 14.3 1998 afternoon, a letter was delivered to the President from Ms J Jayalalitha, general secretary of the AIADMK, pledging ''total and unconditional support'' of the AIADMK's 18 MPs to the BJP to form the new government. The President also received letters from other Tamil Nadu parties and MPs who had contested the elections as electoral allies of the BJP. On 15.3.1998 afternoon, the President noted the announcement of the AIADMK general secretary that the AIADMK and its alliance partners, the PMK and the Tamizhaga Rajiv Congress, would participate in a BJP-led government. A subsequent communication addressed to the President from her confirmed this.
Smt Sonia Gandhi, who has taken over as the new president of the Congress on 14.3.1998, met the President at 5 pm on 15.3.1998. Talking to mediapersons shortly after her meeting with the President, Smt Gandhi said the Congress was not staking a claim as it did not have the numbers. This made it clear that a viable alternative to a BJP-led government would not be obtainable now.
The number of MPs supporting the formation of a government by the BJP now comes to 264. This number -- 264 -- remains short of the halfway mark in the total House of 539. However, when seen in the context of the TDP's decision as conveyed to the President by Shri Chandrababu Naidu, to remain neutral, the number of 264 does cross that mark.
The President has, under the circumstances, been pleased to appoint Shri Atal Bihari Vajpayee as prime minister. He has requested Shri Vajpayee to advise him on the names of others to be appointed as members of his council of ministers. The President has also advised Shri Vajpayee to secure a vote of confidence on the floor of the House within 10 days of his being sworn in.
Shri Vajpayee will be administered the oaths of office and secrecy at Rashtrapati Bhavan on Thursday, 19 March, 1998 in the forenoon.
| http://www.rediff.com/news/1998/mar/15pres.htm | dclm-gs1-047770000