Q: When does the rigidity matrix of a graph have full row rank?

Intuitive description: In the 2D plane, there are $m$ bars connected by $n$ joints. The length of each bar is fixed. These joints and bars can be viewed as a graph (see the figures below). Denote by $s_i$ the static stress of bar $i$. For some graphs (see figure 1), it is clear that all $s_i$ must be zero; otherwise the stress applied at some joint is non-zero and the graph cannot be balanced in the plane. For other graphs (see figure 2), some $s_i$ can be non-zero and the graph is still well balanced in the plane.

My question: does the latter kind of graph have a name? Are there any theoretical discussions of them in the literature? Web link to the figures

Mathematical description: I am studying graph rigidity. Denote by
$$R=\mathrm{blkdiag}(e_1^T,\cdots,e_m^T)(H\otimes I_2)$$
the rigidity matrix of the graph, where $e_i\in\mathbb{R}^2$ denotes edge $i$ of the graph, $H\in\mathbb{R}^{m\times n}$ is the incidence matrix, and $I_2$ is the $2\times 2$ identity matrix. The left null space of $R$ is actually the space of all stresses. So mathematically my question can be rephrased as: when is the rigidity matrix of full row rank?

A: The term used by Connelly and Whiteley is self-stressed, and they discuss this notion extensively in their paper "Second-Order Rigidity and Prestress Stability for Tensegrity Frameworks."
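As a quick numerical illustration of the question (my own sketch, not part of the answer), one can build the 2D rigidity matrix edge by edge with NumPy and check its row rank. The coordinates and edge list below are made up, and the construction is the usual per-edge form, which is equivalent to the blkdiag expression above:

```python
import numpy as np

def rigidity_matrix(points, edges):
    # One row per bar; two columns (x, y) per joint.
    m, n = len(edges), len(points)
    R = np.zeros((m, 2 * n))
    for k, (i, j) in enumerate(edges):
        d = points[i] - points[j]
        R[k, 2 * i:2 * i + 2] = d
        R[k, 2 * j:2 * j + 2] = -d
    return R

# A triangle: 3 joints, 3 bars. Full row rank means the left null
# space is trivial, i.e. the only self-stress is the zero stress.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
edges = [(0, 1), (1, 2), (2, 0)]
R = rigidity_matrix(pts, edges)
print(np.linalg.matrix_rank(R) == len(edges))  # True
```

A graph like figure 2 would show a rank deficit here, and `scipy.linalg.null_space(R.T)` would then return a basis of self-stresses.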
{ "pile_set_name": "StackExchange" }
Q: Setting jQuery selected does nothing

I have the following layout:

    <div id="tabs">
        <ul>
            <li><a href="#tabs-1">Create Username</a></li>
            <li><a href="#tabs-2">View Search Logs</a></li>
            <li><a href="#tabs-3">Assign Values Table</a></li>
            <li><a href="#tabs-4">Edit Values Table</a></li>
            <li><a href="#tabs-5">Create Values Table</a></li>
        </ul>
        <div id="tabs-1"><h2>Create Username</h2></div>
        <div id="tabs-2"><h2>View Search Logs</h2></div>
        <div id="tabs-3"><h2>Assign Values Table</h2></div>
        <div id="tabs-4"><h2>Edit Values Table</h2></div>
        <div id="tabs-5"><h2>Create Values Table</h2></div>
    </div>

Each tab has a form in it. When the form is submitted and the page reloads, I want the visible tab to still be the tab the user was last on (instead of it going back to the first). Each form has a hidden field that contains the index of its tab. I then use the below:

    <script type="text/javascript">
        $(function() {
            $("#tabs").tabs({
                selected: <?php echo (isset($_POST['selected_tab']) ? $_POST['selected_tab'] : 1)?>
            });
        });
    </script>

This results in the right thing being rendered, e.g.:

    <script type="text/javascript">
        $(function() {
            $("#tabs").tabs({
                selected: 2
            });
        });
    </script>

However, the first tab is still the tab that is shown.

A: 

    $("#tabs").tabs({ selected: 2 });

should be:

    $("#tabs").tabs({ active: 2 });

Tabs uses the active property, not the selected property. Also note that the index is 0-based, so if you want to select the 2nd tab, your value needs to be 1.
{ "pile_set_name": "StackExchange" }
Q: react-pdf-js - Uncaught (in promise) e {name: "InvalidPDFException", message: "Invalid PDF structure"}

I have intentionally (to minimize any errors at first) only copied the demo for react-pdf-js, added page and pages to the state to resolve an error, added the closing ); at the end of the render/return to resolve another error, and edited the component name. Still, I am getting the error below and am unsure of what it means. It points to the first line of viewingPDF (which is my HTML extension myapp.firebaseapp.com/viewingPDF), so it isn't directly pointing to anything wrong in my component, and I'm unsure how to fix it.

Error:

    Uncaught (in promise) e
        message: "Invalid PDF structure"
        name: "InvalidPDFException"
        __proto__: Error
        at https://myapp.firebaseapp.com/static/js/main.9108db74.js:34:3985
        at Object.<anonymous> (https://myapp.firebaseapp.com/static/js/main.9108db74.js:34:4013)
        at Object.<anonymous> (https://myapp.firebaseapp.com/static/js/main.9108db74.js:34:12954)
        at t (https://myapp.firebaseapp.com/static/js/main.9108db74.js:33:27623)
        at Object.<anonymous> (https://myapp.firebaseapp.com/static/js/main.9108db74.js:56:12712)
        at t (https://myapp.firebaseapp.com/static/js/main.9108db74.js:33:27623)
        at re (https://myapp.firebaseapp.com/static/js/main.9108db74.js:33:28006)
        at https://myapp.firebaseapp.com/static/js/main.9108db74.js:33:28016
        at n.(anonymous function).i (https://myapp.firebaseapp.com/static/js/main.9108db74.js:33:27496)
        at Object.<anonymous> (https://myapp.firebaseapp.com/static/js/main.9108db74.js:33:27500)

This is my component implementation (kept close to the demo above):

    import React from 'react';
    import PDF from 'react-pdf-js';

    class ViewPDF extends React.Component {
      constructor(props) {
        super(props);
        this.state = { page: 0, pages: 3 };
        this.onDocumentComplete = this.onDocumentComplete.bind(this);
        this.onPageComplete = this.onPageComplete.bind(this);
        this.handlePrevious = this.handlePrevious.bind(this);
        this.handleNext = this.handleNext.bind(this);
      }

      onDocumentComplete(pages) {
        this.setState({ page: 1, pages });
      }

      onPageComplete(page) {
        this.setState({ page });
      }

      handlePrevious() {
        this.setState({ page: this.state.page - 1 });
      }

      handleNext() {
        this.setState({ page: this.state.page + 1 });
      }

      renderPagination(page, pages) {
        let previousButton = <li className="previous" onClick={this.handlePrevious}><a href="#"><i className="fa fa-arrow-left"></i> Previous</a></li>;
        if (page === 1) {
          previousButton = <li className="previous disabled"><a href="#"><i className="fa fa-arrow-left"></i> Previous</a></li>;
        }
        let nextButton = <li className="next" onClick={this.handleNext}><a href="#">Next <i className="fa fa-arrow-right"></i></a></li>;
        if (page === pages) {
          nextButton = <li className="next disabled"><a href="#">Next <i className="fa fa-arrow-right"></i></a></li>;
        }
        return (
          <nav>
            <ul className="pager">
              {previousButton}
              {nextButton}
            </ul>
          </nav>
        );
      }

      render() {
        let pagination = null;
        if (this.state.pages) {
          pagination = this.renderPagination(this.state.page, this.state.pages);
        }
        return (
          <div>
            <PDF file="somefile.pdf" onDocumentComplete={this.onDocumentComplete} onPageComplete={this.onPageComplete} page={this.state.page} />
            {pagination}
          </div>
        );
      }
    }

    module.exports = ViewPDF;

In an effort to strip the code down to the bare basics and just get a PDF rendering, I alternatively reduced my component to the following. Still, I get the same error.

Alternative (reduced to basics):

    import React from 'react';
    import PDF from 'react-pdf-js';

    export default class ViewPDF extends React.Component {
      constructor(props) {
        super(props);
        this.state = { page: 0, pages: 3 };
        this.onDocumentComplete = this.onDocumentComplete.bind(this);
        this.onPageComplete = this.onPageComplete.bind(this);
      }

      onDocumentComplete(pages) {
        this.setState({ page: 1, pages });
      }

      onPageComplete(page) {
        this.setState({ page });
      }

      render() {
        return (
          <div>
            <PDF file="somePdfToView.pdf"/>
          </div>
        );
      }
    }

Any suggestions?

A: "Invalid PDF structure"... I'm pretty sure the demo has taken some liberties with that file reference (<PDF file="somePdfToView.pdf"/>). See react-pdf#props for what it should be.
{ "pile_set_name": "StackExchange" }
Q: When to sanitize PHP & MySQL data - before it is stored in the database, or when it is displayed?

I was wondering when I should sanitize my data: when I store it in the database, when I display it on my web page, or both? I ask because I sanitize my data before it gets stored in the database, but I never sanitize it when it's displayed to the user. Here is an example of how I sanitize my data before it's stored in the database:

    $title = mysqli_real_escape_string($mysqli, $purifier->purify(strip_tags($_POST['title'])));
    $content = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['content']));

A: There are distinct threats you are (probably) talking about here:

You need to sanitize data that's being inserted into the database to avoid SQL injections.

You also need to be careful with the data that's being displayed to the user, as it might contain malicious scripts (if it's been submitted by other users). See Wikipedia's entry for cross-site scripting (aka XSS).

What's harmful to your database is not necessarily harmful to the users (and vice versa). You have to take care of both threats accordingly. In your example:

Use mysqli::real_escape_string() on the data being inserted into your db (sanitizing).

You probably want to use the purifier prior to data insertion - just ensure it's "purified" by the time the user gets it.

You might need to use stripslashes() on data retrieved from the db to display it correctly to the user if magic_quotes are on.
{ "pile_set_name": "StackExchange" }
Q: ZF2 - mocking the authentication service

I have developed simple authentication using this tutorial: http://samsonasik.wordpress.com/2012/10/23/zend-framework-2-create-login-authentication-using-authenticationservice-with-rememberme/. Everything works fine, but now I have unit testing issues. To check whether the user is authenticated I am using:

    public function onBootstrap(MvcEvent $e)
    {
        $auth = $e->getApplication()->getServiceManager()->get('AuthService');
        $e->getTarget()->getEventManager()->getSharedManager()
            ->attach('Admin', \Zend\Mvc\MvcEvent::EVENT_DISPATCH, function($e) use ($auth) {
                $currentRouteName = $e->getRouteMatch()->getMatchedRouteName();
                $allowed = array(
                    'admin/login',
                    'admin/',
                );
                if (in_array($currentRouteName, $allowed)) {
                    return;
                }
                if (!$auth->hasIdentity()) {
                    $url = $e->getRouter()->assemble(array(), array('name' => 'admin/login'));
                    $response = $e->getResponse();
                    $response->getHeaders()->addHeaderLine('Location', $url);
                    $response->setStatusCode(302);
                    $response->sendHeaders();
                }
            });
    }

And my mock code:

    $authMock = $this->getMock('Zend\Authentication\AuthenticationService');
    $authMock->expects($this->once())
        ->method('hasIdentity')
        ->will($this->returnValue(true));

    $serviceManager = $this->getApplicationServiceLocator();
    $serviceManager->setAllowOverride(true);
    $serviceManager->setService('AuthService', $authMock);

My issue is that the mock's hasIdentity is not being called during the unit test. What have I done wrong?

A: The problem was in the bootstrap. onBootstrap is called before the mocking happens, so get('AuthService') needs to be called inside the event handler. Here is a working bootstrap example:

    public function onBootstrap(MvcEvent $e)
    {
        $sm = $e->getApplication()->getServiceManager();
        $e->getTarget()->getEventManager()->getSharedManager()
            ->attach('Admin', \Zend\Mvc\MvcEvent::EVENT_DISPATCH, function($e) use ($sm) {
                $auth = $sm->get('AuthService');
                $currentRouteName = $e->getRouteMatch()->getMatchedRouteName();
                $allowed = array(
                    'admin/login',
                    'admin/',
                );
                if (in_array($currentRouteName, $allowed)) {
                    return;
                }
                if (!$auth->hasIdentity()) {
                    $url = $e->getRouter()->assemble(array(), array('name' => 'admin/login'));
                    $response = $e->getResponse();
                    $response->getHeaders()->addHeaderLine('Location', $url);
                    $response->setStatusCode(302);
                    $response->sendHeaders();
                }
            });
    }
{ "pile_set_name": "StackExchange" }
Q: Conditional date period formula needed for summing

I am looking for a formula that will calculate the time I spent during a billing period. In column A I have dates, and in column F I have the total time spent. My billing period runs from the 21st of one month to the 20th of the next month. I would like a formula that looks in column A for the dates of each billing period, groups them together, and then calculates the total time spent during that billing period. Can Excel do this? Here is a sample spreadsheet:

         Col A       Col F
    2.   12/5/2015   1.0
    3.   13/5/2015   0.5
    4.   16/5/2015   0.7
    5.   21/5/2015   3.2
    6.   29/5/2015   0.9

There are two billing periods above: May (21 Apr to 20 May) and June (21 May to 20 June). I would like a formula that calculates the total time for May (2.2 hrs) and June (4.1 hrs).

A: Try something using the SUMIFS function together with EDATE and DATE, like this:

    =sumifs(f:f, a:a, ">="&date(2015, 4, 21), a:a, "<"&edate(date(2015, 4, 21), 1))
    =sumifs(f:f, a:a, ">="&date(2015, 5, 21), a:a, "<"&edate(date(2015, 5, 21), 1))

With no mention as to how you are storing the dates governing the time periods, I've simply hard-coded the start dates in and used EDATE to add a month.
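For anyone doing the same grouping outside Excel, here is a small Python sketch (my own illustration, with the sample data above hard-coded). The trick is that shifting each date back 20 days maps every 21st-to-20th billing window onto a single calendar month, which then serves as the grouping key:

```python
from datetime import date, timedelta
from collections import defaultdict

entries = [(date(2015, 5, 12), 1.0), (date(2015, 5, 13), 0.5),
           (date(2015, 5, 16), 0.7), (date(2015, 5, 21), 3.2),
           (date(2015, 5, 29), 0.9)]

# Shift each date back 20 days so the 21st..20th billing window
# collapses onto one calendar month; group totals by that month.
totals = defaultdict(float)
for d, hours in entries:
    shifted = d - timedelta(days=20)
    totals[(shifted.year, shifted.month)] += hours

print(dict(totals))  # totals keyed by the month the period starts in
```

With the sample data this yields 2.2 hours for the period starting 21 April and 4.1 hours for the period starting 21 May, matching the expected SUMIFS results.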
{ "pile_set_name": "StackExchange" }
Q: How to calculate ranges in Oracle

I have a table defining ranges, e.g.:

    START | END | MAP
    1     | 10  | A
    11    | 15  | B
    ...

How do I query that table so the result will be:

    ID | MAP
    1  | A
    2  | A
    3  | A
    4  | A
    5  | A
    6  | A
    7  | A
    8  | A
    9  | A
    10 | A
    11 | B
    12 | B
    13 | B
    14 | B
    15 | B
    ...

I bet it's an easy one... Thanks for the help. f.

A: 

    select *
      from Table rr,
           (select level as Id
              from dual
            connect by level <= (select max(End) from Table)) t
     where t.Id between rr.Start and rr.End
     order by Map, Start, Id
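The same range-expansion idea, sketched in Python for illustration (using the two sample rows from the question). Like the connect by subquery, it generates one output row per id within each range:

```python
ranges = [(1, 10, "A"), (11, 15, "B")]

# Expand each (start, end, map) row into one row per id --
# the same effect as joining against a generated number list.
expanded = [(i, m)
            for start, end, m in ranges
            for i in range(start, end + 1)]

print(expanded[:3])   # [(1, 'A'), (2, 'A'), (3, 'A')]
print(len(expanded))  # 15
```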
{ "pile_set_name": "StackExchange" }
Q: Error importing - referencing another .java

I have two files, one for administrators and another for clients. The admin one is called AutocompleteJComboBox; the client one is called AutocompleteJComboBox_client. Now when it comes to calling the constructor:

    StringSearchable searchable_client = new StringSearchable(myWords);
    combo_client = new AutocompleteJComboBox(searchable_client);

I get this error message:

    incompatible types: Practica1.modules.users.client.model.utils.pager.autocomplete.AutocompleteJComboBox_client cannot be converted to Practica1.modules.users.admin.model.utils.pager.autocomplete.AutocompleteJComboBox

Why is it giving me the wrong reference? So far I've tried changing variable names, .java file names, imports, and the variables related to that process.

A: If combo_client is declared as an AutocompleteJComboBox_client, then it will have to be instantiated as such, e.g.:

    StringSearchable searchable_client = new StringSearchable(myWords);
    combo_client = new AutocompleteJComboBox_client(searchable_client);

Or it may be that it has been wrongly defined as an AutocompleteJComboBox_client and perhaps the code you've posted is correct. Essentially, you have declared the variable as a particular type and you're trying to create an instance of a different type, which is causing this conflict.
{ "pile_set_name": "StackExchange" }
Q: lein repl without network connection

Can I open a lein repl connection, or cider-jack-in in Emacs, without a network connection? The computer which needs lein repl is behind a network that blocks some IPs, so it cannot connect to the (lein?) server, and it cannot use a VPN to bypass this problem either. So is there a way to start lein repl without a network connection? Thanks

A: You can tell lein not to try to do things requiring an internet connection with the -o flag:

    lein -o repl

You need to, of course, make sure the dependencies are available before you do this. And you should most definitely always run your production stuff in this mode if you run it from lein, because fetching dependencies as your service starts in production is crazy (and I've been burned by this twice too many times). Lein will, by default, try to go online to do things like checking for new snapshot dependencies (you should not use these).
{ "pile_set_name": "StackExchange" }
Q: PostgreSQL and Point-in-Time Recovery

I was reading through this: http://www.postgresql.org/about/

And I saw this: "An enterprise class database, PostgreSQL boasts sophisticated features such as ... point in time recovery". I need some light shed on this subject and its features, and examples of it in action - or share your own performance experiences.

A: You might want to consult the documentation. I think you will mostly want to read the part "24.4. Warm Standby Servers for High Availability" - but what is written there is based on information from "24.3. Continuous Archiving and Point-In-Time Recovery (PITR)", so you might want to read that first.

To summarize:

PITR lets you do continuous backup with very low performance impact.

This backup is incremental.

It can be restored to any given point in time (hence the name).

Using PITR you can also set up a warm standby server.
{ "pile_set_name": "StackExchange" }
Q: jQuery to set td width in a nested table

I want to set the width of a td in a nested table; please help. This is not working:

    $('#WebPartWPQ7 table:eq(0) tbody tr td table tbody tr:eq(1) td:eq(0)')
        .css('width','30px');

    <div id="webpartwpq7">
      <table>
        <tbody>
          <tr>
            <td>
              <table>
                <tbody>
                  <tr>
                    <td></td>
                    <td>
                      this is the place I need to set width of the td tag
                    </td>
                    <td></td>
                    <td></td>
                  </tr>
                  <tr>
                    <td>....</td>
                  </tr>
                  <tr></tr>
                </tbody>
              </table>
            </td>
          </tr>
        </tbody>
      </table>
      <table>....
      </table>
    </div>

A: IDs are case-sensitive, so WebPartWPQ7 needs to be webpartwpq7. Also, your eq numbers are wrong. The selector should be:

    $('#webpartwpq7 table:eq(0) tbody tr td table tbody tr:eq(0) td:eq(1)')

http://jsfiddle.net/76hhU/2/
{ "pile_set_name": "StackExchange" }
Q: How does one make a decision between different cognitive tests for the same construct?

I would like to measure peak cognitive performance in healthy individuals: situation awareness, reaction time, visuo-spatial processing, etc. I am interested in both within-subject and between-subject changes for pre-post test interventions. There are a number of "standardised" and "validated" commercial test batteries that have been used in both clinical and healthy populations. Could anyone explain the difference between using the "visuo-spatial processing" test module in ANAM as compared to the same module in CANTAB, PenScreen, etc.? What is a principled way to choose which test battery module to use? Many thanks.

A: There are systematic ways to compare similar tests and determine which test best fits your needs.

Construct Definition

Identify the constructs (in this case, traits or abilities) you are interested in and clearly define the meaning of each construct. Cognitive performance can encompass many domains, including memory, processing speed, psychomotor functioning, and various aspects of intelligence (including social intelligence). Once you've identified which components you consider to be key to "peak cognitive processing," use literature in your field to develop accepted or theory-driven operationalized definitions for each construct so you have a standard to which you can compare different tests.

Test Identification

Identify potential tests; generally their title or brief description will allow you to identify tests that are assessing constructs of interest. When comparing tests, however, there are several things to consider in order to 1) ensure the test is actually measuring your construct of interest, 2) compare similar types of tests, and 3) select the test which best suits your needs.

Validity

Assess various aspects of test validity for each test. Construct validity assesses whether the test measures the construct of interest; a quick (though not thorough) way to assess construct validity is to compare the definition or theory the test's developers use to the definition you have created. If you've defined "depression" in cognitive terms (difficulty thinking, distraction, ruminative thoughts) and the test you're assessing has defined it in emotional or somatic terms (sad, hopeless, problems sleeping, loss of appetite), then that test would not match your definition of depression and may not capture the construct you are interested in. I advise using these definitions as a starting place, so that you feel confident the test is appropriate for your purposes. However, you should consider other types of validity as well. Content validity assesses whether it captures part of a construct (like factual knowledge) or the whole construct (like factual knowledge and the application of that knowledge). Criterion/predictive validity is whether the test has been shown to predict certain outcomes measured at a later time; an example is seeing if a reading test predicts year-end grades in reading. Criterion validity is most often used in achievement or employment testing and may not be relevant to your work.

Other Psychometric Properties

Assess additional psychometric properties of each measure. Is there evidence of convergent (is it positively correlated with similar variables) and divergent (is there a negative or null correlation with dissimilar variables) validity? Has test-retest or inter-rater reliability been established? Do the scales have good internal consistency (Cronbach's alpha)? Are the sensitivity (number of true positives identified) and specificity (number of true negatives) known?

Norms and Development

Think about what groups the test has been used with and developed in. Was it developed only based on samples of college students? Has it been validated in diverse groups? Some of the more well-known batteries may be "normed" tests, which allow you to compare scores to a well-defined reference sample. However, if that sample is drastically different from the population you work with, it may not be helpful to make that comparison.

Test Burden and Assessing Secondary Traits

You will also need to decide if there are certain factors that might complicate the use of a test, such as excessive length or a high reading level or difficulty. Finally, when you've settled on a valid, reliable assessment of appropriate length that matches your construct of interest, look over the measure and be aware of any construct contamination issues. An example is a working memory test (perhaps making someone solve oral story problems in his head) that is also timed. In this case, if the person is penalized for not working quickly enough, you might be capturing deficits in processing speed rather than working memory. Similarly, if you're using the test to assess math skills, you might be capturing working memory problems rather than an accurate assessment of math ability; the person may be able to correctly complete the math problems with paper and pencil if they are not required to remember the problem while solving it. You may be able to counter the effects of contamination by measuring the same skill (math) multiple ways (both orally and on paper, both timed and untimed).

Summary for identifying and comparing assessments:

Ensure the test is assessing the construct of interest using the operationalized definition you have in mind.

Assess reliability and validity (including norms, sensitivity, and specificity).

Ensure length, language, etc. are practical for your purposes.

Be aware of possible construct contamination.

References

Groth-Marnat, G. (2009). Handbook of psychological assessment (5th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Furr, R. M., & Bacharach, V. R. (2008). Psychometrics: An introduction. Thousand Oaks, CA: Sage Publications, Inc.
{ "pile_set_name": "StackExchange" }
Q: Quantifier order for prenex normal form

$\exists x R(x) \land \forall y S(y)$ - is this equivalent to $\exists x \forall y (R(x) \land S(y))$ as well as $\forall y \exists x (R(x) \land S(y))$? Is there a rule for the order of pulling out quantifiers?

A: The rule is that you can pull out a quantifier when the term that gets included into its new scope does not have the variable that that quantifier quantifies as a free variable. Formally:

Prenex Laws

Where $\varphi$ is any formula and where $x$ is not a free variable in $\psi$:

$\forall x \ \varphi \land \psi \Leftrightarrow \forall x (\varphi \land \psi)$

$\exists x \ \varphi \land \psi \Leftrightarrow \exists x (\varphi \land \psi)$

As such, you can pull out the existential first and then the universal, as well as in the other order. That is:

$\exists x \ R(x) \land \forall y \ S(y) \Leftrightarrow$
$\exists x (R(x) \land \forall y \ S(y)) \Leftrightarrow$
$\exists x \forall y (R(x) \land S(y))$

But also:

$\exists x \ R(x) \land \forall y \ S(y) \Leftrightarrow$
$\forall y (\exists x (R(x) \land S(y))) \Leftrightarrow$
$\forall y \exists x (R(x) \land S(y))$

All equivalences used here follow the general law. So yes, these are all equivalent.
{ "pile_set_name": "StackExchange" }
Q: Filter a List without iteration

Is it possible in Java to filter a List based on some criteria without iteration? I have a List of size 10000 full of beans, where those beans have a property of type boolean. If I want to filter that list based on that boolean property, is it necessary to iterate over the whole List, or is there some other way?

A: If you mean without using an Iterator, then the answer is yes. For example:

    for (int i = 0; i < list.size(); ) {
        MyElement element = list.get(i);
        if (element.getMyProperty()) {
            list.remove(i);
        } else {
            i++;
        }
    }

Depending on the List implementation, this could be a more expensive way of implementing filtering than using an Iterator and Iterator.remove(). (For instance, if list is a LinkedList then list.get(i) and list.remove(i) are both O(N), and hence the list filtering is O(N^2). By contrast, filtering the same list with an Iterator would be O(N).)

However, if you are asking whether you can filter your list without checking each element in the list, then the answer is no. You'd need some secondary data structure, or the list to be ordered on the property, to achieve better than O(N) filtering.
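For comparison, the filter-by-copy alternative - building a new list in a single O(N) pass instead of removing in place - sketched here in Python with a made-up bean-like structure:

```python
# Hypothetical "beans": dicts with an id and a boolean flag.
items = [{"id": i, "flag": i % 2 == 0} for i in range(6)]

# Build a new filtered list in one O(N) pass rather than
# mutating the original list while walking it.
kept = [it for it in items if not it["flag"]]

print([it["id"] for it in kept])  # [1, 3, 5]
```

The same copy-based approach works in Java with a second ArrayList (or, in Java 8+, a stream filter), and avoids the O(N^2) pitfall of index-based removal on a LinkedList.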
{ "pile_set_name": "StackExchange" }
Q: Unsupervised clustering in $10$ dimensions

I have a set of $\sim1000$ feature vectors in $\sim10$ dimensions and would like to cluster them in an unsupervised manner. I am expecting some of the vectors to bunch together in groups, but quite a lot to be outliers that are nowhere near each other (so $\sim5$ meaningful clusters and $1$ cluster which is just a uniform distribution in all dimensions). I'm thinking of using a Gaussian mixture model; does that sound reasonable? Is learning a GMM suitable for this dimension of data, or is there perhaps a more suitable technique? Do $1000$ vectors sound like enough for $10$-dimensional clustering? I am quite new to this, so I am trying to get a feel for it. Thanks very much for any insight you might be able to provide! :)

A: Your data are not "high dimensional" (1000x10 is small), but the question you are asking doesn't have a "right" answer. Depending on what you need, I would suggest 2 different approaches:

The k-means algorithm, http://en.wikipedia.org/wiki/K-means_clustering (eventually kernel k-means, but it's much more involved).

Principal component analysis (http://en.wikipedia.org/wiki/Principal_component_analysis) or generalized PCA (http://arxiv.org/ftp/arxiv/papers/1202/1202.4002.pdf), and more generally subspace clustering methods.

K-means is probably the easiest out-of-the-box algorithm in your case. The answer depends a lot on what you are trying to achieve. By the way, your last cluster, uniform along all dimensions, will be hard to find in an unsupervised manner, I think.
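To get a feel for how easy the well-separated case is at this scale, here is a minimal k-means sketch in plain NumPy (my own toy example with two synthetic blobs, not the questioner's data; in practice a library implementation such as scikit-learn's KMeans or GaussianMixture would be preferred):

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic init: spread initial centers over the data order.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Two well-separated 10-dimensional blobs, 100 points each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 10)),
               rng.normal(8, 1, (100, 10))])
labels, _ = kmeans(X, 2)
print(len(set(labels[:100])), len(set(labels[100:])))  # 1 1
```

Each blob ends up in its own cluster; the hard part in the question is the sixth, uniform "cluster", which k-means has no notion of and will split across the real clusters.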
{ "pile_set_name": "StackExchange" }
Q: Reducing memory of similar objects

I'm looking at reducing the memory consumption of a table-like collection object, given a class structure like:

    class Cell {
        public property int Data;
        public property string Format;
    }

    class Table {
        public property Dictionary<Position, Cell> Cells;
    }

When there are a large number of cells, the Data property of the Cell class may be variable, but the Format property may be repeated many times; e.g. the header cells may have an empty format string for titles and the data cells may all be "0.00". One idea is something like the following:

    class Cell {
        public property int Data;
        public property int FormatId;
    }

    class Table {
        public property Dictionary<Position, Cell> Cells;
        private property Dictionary<Position, string> Formats;
        public string GetCellFormat(Position);
    }

This would save memory on strings; however, the FormatId integer value would still be repeated many times. Is there a better implementation than this? I've looked at the flyweight pattern but am unsure if it matches this. A more complex implementation I am considering is removing the Format property from the Cell class altogether and instead storing the formats in a dictionary that groups adjacent cells together, e.g. there may be 2 entries like this:

    <item rowFrom=1 rowTo=1 format="" />
    <item rowFrom=2 rowTo=1000 format="0.00" />

A: For strings, you could perhaps look at interning; either with the inbuilt interner, or (preferably) a custom interner - basically a Dictionary<string,string>. What this means is that each identical string uses the same reference - and the duplicates can be collected. Don't do anything with the int; that is already optimal. For example:

    using System;
    using System.Collections.Generic;

    class StringInterner
    {
        private readonly Dictionary<string, string> lookup = new Dictionary<string, string>();

        public string this[string value]
        {
            get
            {
                if (value == null) return null;
                if (value == "") return string.Empty;
                string result;
                lock (lookup) // remove if not needed to be thread-safe
                {
                    if (!lookup.TryGetValue(value, out result))
                    {
                        lookup.Add(value, value);
                        result = value;
                    }
                }
                return result;
            }
        }

        public void Clear()
        {
            lock (lookup) { lookup.Clear(); }
        }
    }

    static class Program
    {
        static void Main()
        {
            // this line is to defeat the inbuilt compiler interner
            char[] test = { 'h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd' };
            string a = new string(test), b = new string(test);
            Console.WriteLine(ReferenceEquals(a, b)); // false

            StringInterner cache = new StringInterner();
            string c = cache[a], d = cache[b];
            Console.WriteLine(ReferenceEquals(c, d)); // true
        }
    }

You could take this further with WeakReference if desired. Note importantly that you don't need to change your design - you just change the code that populates the object to use the interner/cache.

A: Have you actually determined whether or not this is actually a problem? The CLR does a lot of string interning on your behalf, so it is possible (depending on CLR version and how your code was compiled) that you are not using as much memory as you think you are. I would highly recommend that you validate your suspicions about memory utilization before you change your design.
{ "pile_set_name": "StackExchange" }
Q: Prove that $\int\int_D f(x,y)\,dA \leq \int\int_D g(x,y)\,dA$ if $f(x,y) \leq g(x,y)$ for all $(x,y) \in D$

Suppose that $f,g$ are integrable functions on a Jordan set $D$ such that $f(x,y) \leq g(x,y)$ for all $(x,y) \in D$. Prove that $\int\int_D f(x,y)\,dA \leq \int\int_D g(x,y)\,dA$.

Here is what I have: Assume that $f,g$ are integrable functions on a Jordan set $D$ such that $f(x,y) \leq g(x,y)$ for all $(x,y) \in D$. Since $f(x,y) \leq g(x,y)$ for all $(x,y) \in D$, we have $g(x,y)-f(x,y) \geq 0$ for all $(x,y) \in D$. Since $f,g$ are integrable on $D$, they are bounded and
$$\int\int_D [g(x,y)-f(x,y)]\,dA \geq 0.$$
Then
$$\int\int_D [g(x,y)-f(x,y)]\,dA=\int\int_D g(x,y)\,dA- \int\int_D f(x,y)\,dA \geq 0,$$
so
$$\int\int_D g(x,y)\,dA \geq \int\int_D f(x,y)\,dA.$$
Please feel free to correct me if I'm wrong.

A: Two problems pop out right away: first, the integral of a function $f$ is not given by the formula
$$\int\int_D f(x,y)\,dA = \sum_{i=1}^n\sum_{j=1}^m(M_{f_i}-m_{f_i})(\Delta x_i \Delta y_j).$$
Second, $f(x,y) \le g(x,y)$ does not imply $M_{f_i}-m_{f_i} \le M_{g_i}-m_{g_i}$.
{ "pile_set_name": "StackExchange" }
Q: java.lang.OutOfMemoryError while deploying the war file in Heroku While trying to deploy my Spring MVC application war file to Heroku, I am getting a java.lang.OutOfMemoryError exception. What may be the reason for this exception? What am I doing wrong?

    C:\Users\sai\Desktop>heroku deploy:war --war crud-1.0.0-BUILD-SNAPSHOT.war --app springmvc-crud-jpa
    Uploading crud-1.0.0-BUILD-SNAPSHOT.war....
    ---> Packaging application...
         - app: springmvc-crud-jpa
         - including: webapp-runner.jar
         - including: crud-1.0.0-BUILD-SNAPSHOT.war
         - installing: OpenJDK 1.8
    ---> Creating slug...
         - file: slug.tgz
         - size: 68MB
    ---> Uploading slug...
    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Unknown Source)
            at java.io.ByteArrayOutputStream.grow(Unknown Source)
            at java.io.ByteArrayOutputStream.ensureCapacity(Unknown Source)
            at java.io.ByteArrayOutputStream.write(Unknown Source)
            at sun.net.www.http.PosterOutputStream.write(Unknown Source)
            at com.heroku.sdk.deploy.Curl.put(Curl.java:75)
            at com.heroku.sdk.deploy.Slug.upload(Slug.java:81)
            at com.heroku.sdk.deploy.App.uploadSlug(App.java:196)
            at com.heroku.sdk.deploy.App.deploySlug(App.java:186)
            at com.heroku.sdk.deploy.App.createAndReleaseSlug(App.java:169)
            at com.heroku.sdk.deploy.App.deploy(App.java:83)
            at com.heroku.sdk.deploy.App.deploy(App.java:87)
            at com.heroku.sdk.deploy.WarApp.deploy(WarApp.java:32)
            at com.heroku.sdk.deploy.DeployWar.main(DeployWar.java:51)
    ---> Done

A: I'm the maintainer of the heroku-deploy plugin. I've increased the default heap size for the tool, so if you reinstall it by running

    heroku plugins:install https://github.com/heroku/heroku-deploy

and then deploy again, it will hopefully work. If others continue to encounter this, you can manually increase the heap size even further by setting the JAVA_TOOL_OPTIONS environment variable with something like -Xmx2g.
{ "pile_set_name": "StackExchange" }
Q: How to negate code in "if" statement For example if I want to do something if parent element for used element hasn't got ul as next element what should I add to this code? Somehow I try some combination of .not() and/or .is() but they fail for me. So someone maybe knows what is the best method for negate code after if? if ($(this).parent().next().is('ul')){ // code... } A: You can use the Logical NOT ! operator: if (!$(this).parent().next().is('ul')){ Or equivalently (see comments below): if (! ($(this).parent().next().is('ul'))){ For more information, see the Logical Operators section of the MDN docs. A: Try negation operator ! before $(this): if (!$(this).parent().next().is('ul')){
{ "pile_set_name": "StackExchange" }
Q: Ruby hash use key value in default value I have the following code to create an array-to-object hash:

    tp = TupleProfile.new(98, 99)
    keyDict = Hash[Array[98,99] => tp]
    keyDict[[98,99]].addLatency(0.45)
    puts keyDict[[98,99]].getAvg()

This works, but I'd like to be able to call addLatency without checking for an existing hash value:

    keyDict[[100,98]].addLatency(0.45) # throws error right now

So I want to create a default value that varies based on the key, something like:

    keyDict = Hash.new(TupleProfile.new(theKey[0], theKey[1]))

where theKey is some sort of special directive. Is there any reasonably clean way to do this, or am I better off checking each time or making a wrapper class for the hash?

A: Try the Hash.new block notation:

    keyDict = Hash.new { |hash, key| hash[key] = TupleProfile.new(*key) }

Using the standard parameter notation (Hash.new(xyz)) will only instantiate a single TupleProfile object for the whole hash; this way there will be one for each individual key.
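For comparison, the same key-dependent default can be sketched in Python, where dict's `__missing__` hook plays the role of the Hash.new block (the `TupleProfile` here is a minimal stand-in for the class in the question):

```python
class TupleProfile:
    """Minimal stand-in for the TupleProfile in the question."""

    def __init__(self, a, b):
        self.a, self.b = a, b
        self.latencies = []

    def add_latency(self, value):
        self.latencies.append(value)

    def get_avg(self):
        return sum(self.latencies) / len(self.latencies)


class KeyDict(dict):
    # __missing__ is called on lookup misses; like Ruby's Hash.new block,
    # it can build the default from the key itself and store it
    def __missing__(self, key):
        value = self[key] = TupleProfile(*key)
        return value


key_dict = KeyDict()
key_dict[(100, 98)].add_latency(0.45)   # no pre-check needed
print(key_dict[(100, 98)].get_avg())    # 0.45
```

As in the Ruby version, each distinct key gets its own freshly constructed object on first access.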
{ "pile_set_name": "StackExchange" }
Q: Backbone.js Collections do not invoke "Reset" event after fetch operation When requesting the data.json file to populate a collection with the data below,

    [{
        "Id": "BVwi1",
        "Name": "Bag It",
        "AverageRating": 4.6,
        "ReleaseYear": 2010,
        "Url": "http://www.netflix.com/Movie/Bag_It/70153545",
        "Rating": "NR"
    }, {
        "Id": "BW1Ss",
        "Name": "Lost Boy: The Next Chapter",
        "AverageRating": 4.6,
        "ReleaseYear": 2009,
        "Url": "http://www.netflix.com/Movie/Lost_Boy_The_Next_Chapter/70171826",
        "Rating": "NR"
    }]

the collection does not invoke the "Reset" event as the documentation says it should. I can see that the request and response are correct after the fetch call, but nothing happens. Below is the code for my app.

Router that starts everything:

    Theater.Router = Backbone.Router.extend({
        routes: {
            "": "defaultRoute"
        },
        defaultRoute: function () {
            Theater.movies = new Theater.Collections.Movies()
            new Theater.Views.Movies({ collection: Theater.movies });
            Theater.movies.fetch();
        }
    })

    var appRouter = new Theater.Router();
    Backbone.history.start();

the Collection:

    Theater.Collections.Movies = Backbone.Collection.extend({
        model: Theater.Models.Movie,
        url: "scripts/data/data.json",
        initialize: function () {}
    });

View that subscribes to the reset event:

    Theater.Views.Movies = Backbone.View.extend({
        initialize: function () {
            _.bindAll(this, "render", "addOne");
            this.collection.bind("reset", this.render);
            this.collection.bind("add", this.addOne);
        },
        render: function () {
            console.log("render")
            console.log(this.collection.length);
        },
        addOne: function (model) {
            console.log("addOne")
        }
    })

Reference Site http://bardevblog.wordpress.com/2012/01/16/understanding-backbone-js-simple-example/

A: You should tell Backbone to fire the reset on fetch by passing {reset: true} when fetching, as of Backbone 1.0. Replace:

    Theater.movies.fetch()

with:

    Theater.movies.fetch({reset: true})

A: I had a similar issue; I hope my reply will be of some use to others. At first my data.json file was not valid.
Then it turned out that I had overlooked the following line of code:

    Theater.Models.Movie = Backbone.Model.extend({});

Adding this line of code resolved the issue for me.
{ "pile_set_name": "StackExchange" }
Q: Unable to install same apps on 1 device I have to install two copies of the same app on one device for testing. What should I change in the plist in order to install both copies on one device? Thanks.

A: In the Info.plist file, change the Bundle identifier.
{ "pile_set_name": "StackExchange" }
Q: Matrix that are not upper triangular I saw in the book "Linear Algebra Done Right" (by S. Axler) that every complex operator has a Jordan form. The proof is based on the fact that every complex operator is upper triangularizable (i.e. there is a basis s.t. the matrix is upper triangular). My questions are the following:

1) Could you give me an example of an operator that is not upper triangularizable? (I guess such an operator has no eigenvalue... maybe an operator with $x^2+1$ as characteristic polynomial?)

2) If an operator has at least one eigenvalue, does it have a Jordan form? (Axler's proof is indeed based on the fact that the operator has an eigenvalue, but also on the fact that the characteristic polynomial is of the form $(x-\lambda _1)^{m_1}\cdots(x-\lambda _n)^{m_n}$. Btw, what do you call such a form? In French we say "scindé", but I didn't find an English equivalent on Wikipedia; I believe the English term is that the polynomial "splits" into linear factors.) So I guess that an operator with a characteristic polynomial of the form $x^2+x+1$ will have no Jordan form, nor even an upper triangular form... but these are just suppositions, and if anyone can confirm or deny them, I would be very happy :)

A: Consider the map $T\colon\mathbb{R}^2\longrightarrow\mathbb{R}^2$ defined by $T(x,y)=(-y,x)$. There is no basis of $\mathbb{R}^2$ such that the matrix of $T$ with respect to that basis is upper triangular. Note that, as you suspected, $T$ has no (real) eigenvalues. And the map $U\colon\mathbb{R}^3\longrightarrow\mathbb{R}^3$ defined by $U(x,y,z)=(-y,x,0)$ has one eigenvalue ($0$), but no Jordan form.
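A quick numeric sanity check of the first example (a sketch: the matrix of $T(x,y)=(-y,x)$ in the standard basis has characteristic polynomial $\lambda^2+1$, so its eigenvalues are $\pm i$ and none are real):

```python
import cmath

# matrix of T(x, y) = (-y, x) in the standard basis:
# [[a, b],   [[0, -1],
#  [c, d]] =  [1,  0]]
a, b = 0.0, -1.0
c, d = 1.0, 0.0

# characteristic polynomial: lambda^2 - (a + d)*lambda + (a*d - b*c)
trace = a + d            # 0
det = a * d - b * c      # 1

disc = trace**2 - 4 * det                           # -4 < 0: no real roots
roots = [(trace + s * cmath.sqrt(disc)) / 2 for s in (1, -1)]
print(disc)     # -4.0
print(roots)    # the eigenvalues +/- i (no real eigenvalues)
```

A negative discriminant is exactly the 2x2 criterion for the "no real eigenvalues, hence not upper triangularizable over $\mathbb{R}$" situation in the question.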
{ "pile_set_name": "StackExchange" }
Q: What's needed to get 100 mA from a USB port? I'm trying to build a stereo speaker system for use on a laptop. I want to keep it as simple as possible, so I'm thinking of using laptop's audio output to for the audio signal and a USB port for power. As far as I know, each port should be able to provide 100 mA for devices. Is there any need to signal to the computer that I'm going to try to draw 100 mA, or is it acceptable to just connect the device? Also, how stabilized and filtered is USB power? I'm thinking of using TDA7053A to drive the speakers and its minimum voltage is 4.5 V. If that doesn't work, I'd use two TDA7052 amplifiers, but I'd like to keep number of parts as low as possible. As for power consumption, I already have a small radio which uses one 50 Ω speaker and a TDA7052 and it uses at most 25 mA, so even with two of those speakers, I should have lots of power to spare with a maximum supply current of 100 mA. A: The USB specification requires that a host port be able to source 100mA at a nominal 5V on the VBUS pin. That much power is barely enough to allow some devices to enumerate on the bus. (Early versions of the popular Cypress EZ-USB FX2 required a waiver because they drew slightly more than 100mA during enumeration.) Of course, there is also an elaborate power management scheme that permits the host to shed loads by turning off ports individually. (I've never personally seen power management implemented on individual ports: on systems I've examined carefully either all host ports are powered, or none are. Your mileage will certainly vary.) In particular, whether your ports are powered when the laptop is sleeping is more than a little OS, platform, and configuration specific. A device is permitted to draw up to 100mA without asking permission if VBUS is present. For a device to consume more than 100mA, it is supposed to have permission, and to be able to gracefully handle being denied. 
Similar rules apply to hubs, with complications for bus-powered hubs, which are permitted to restrict downstream devices to only 100mA, while never consuming more than 500mA from the upstream port. One reason for an external device to include a second USB cable for power is that it effectively allows the device to double its power budget.

Edit: I weakened the implication that PCs don't manage power per port. Just because I haven't actually seen it happen has little bearing on whether it is found in the wild. The white-box PC with an MSI MB that I was last actively developing a USB device driver on had fairly limited power management capabilities. The brand new Dell on my desk seems to turn off individual PCIe cards under some conditions, so the world of power management in PCs has been advancing (or at least getting more complicated) steadily while I wasn't looking.

A: I haven't experimented with USB power, but this analysis seems to point to it working just fine. At 100mA all tested devices remain above 4.5V. I believe you can just connect to the port... you can test this simply by plugging in a USB cable and checking the pins on the other end with a multimeter. Here's the full USB spec, and here's all other docs from usb.org.

A: The USB spec allows any device to draw 100mA from a port. No communication with the host is required. However, 500mA is available by communicating with the host, unless you're plugged into an unpowered hub. Many computers allow you to draw this 500mA without properly requesting it. If this is a personal project, put a 10 ohm power resistor across the terminals of your USB port and see how much the voltage drops. If it works, you're golden. Just remember that it might not work if you plug it into a different computer. If this is something you want to distribute, you'll have to tell the host that you want 500mA.
If you don't have a micro on the project that can handle this task, the easiest way to do this is to put the cheapest USB hub controller IC you can find on the board, and configure it to do the communication. The TI TUSB2036 is about $3, and just requires you to pull a pin high (or low, I can't remember) to get the 500mA. I think you'll want the 500mA to get a decent audio volume. I don't know about the 50 ohm speaker you have, but in general, your power is limited to Vrms^2/R. A pair of 50 ohm speakers operating from 0V to 5V will draw 0.125W (assuming 100% efficiency). That's hardly better than the stock speakers. Four 8-ohm speakers will bring you up to a more respectable 1.5W of power, which is well under your 2.5W allowable power from the USB port. If you're operating on a device which has a FireWire (IEEE 1394) port, you might consider using that, as it has much more power available. (Up to tens of amps, but possibly less. Apple products guarantee that there's a minimum of 7W available, but the spec allows for as much as 45W of power to be delivered.)
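The speaker-power arithmetic above is easy to double-check (a sketch; it assumes an ideal full-swing sine between 0 V and the 5 V rail and 100% efficiency, as the answer does):

```python
def max_sine_power(v_supply, r_load, n_speakers):
    """Max average power for a full-swing sine between 0 V and v_supply.

    For a sine of amplitude A, Vrms^2 = A^2 / 2, so P = A^2 / (2R) per speaker.
    """
    amplitude = v_supply / 2              # a 0..5 V swing has 2.5 V amplitude
    return n_speakers * amplitude**2 / (2 * r_load)

print(max_sine_power(5.0, 50, 2))   # 0.125  W: two 50-ohm speakers
print(max_sine_power(5.0, 8, 4))    # 1.5625 W: four 8-ohm speakers
```

Both figures match the answer (the second rounds to the quoted 1.5 W), and both stay under the 2.5 W that 500 mA at 5 V can supply.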
{ "pile_set_name": "StackExchange" }
Q: Make HTML5 details tag to take maximum available height I have an HTML page with header/content/footer that uses the flexbox model and contains a <details> tag. I need to make the details content use the maximum available height, meaning that in the opened state its content should occupy all space in its container (except for the summary, of course). Here is my HTML/CSS code (http://jsfiddle.net/rtojycvk/2/):

HTML:

    <div class="wrapper">
        <div class="header">Header</div>
        <div class="main">
            Some text before details
            <details class="details" open>
                <summary>Details summary</summary>
                <div class="content">Details content</div>
            </details>
        </div>
        <div class="footer">Footer</div>
    </div>

CSS:

    html, body {
        height: 100%;
        margin: 0;
        padding: 0;
    }
    .wrapper {
        display: flex;
        flex-direction: column;
        width: 100%;
        min-height: 100%;
    }
    .header {
        height: 50px;
        background-color: yellow;
        flex: 0 0 auto;
    }
    .main {
        background-color: cyan;
        flex: 1;
        display: flex;
        flex-direction: column;
    }
    .footer {
        height: 50px;
        background-color: green;
        flex: 0 0 auto;
    }
    .content {
        background-color: blue;
        color: white;
        flex: 1;
    }
    .details {
        background-color: red;
        flex: 1;
        display: flex;
        flex-direction: column;
    }

As you can see, the details tag itself takes all the available space, but not its content. P.S. I need this to work only in Chrome.

A: http://jsfiddle.net/rtojycvk/16/ use position absolute on content, position relative on details, and calc() css (to offset the summary height):

    .content {
        background-color: lightgray;
        color: black;
        flex: 1;
        display: flex;
        position: absolute;
        height: calc(100% - 18px);
        width: 100%;
    }
    .details {
        background-color: gray;
        flex: 1;
        display: flex;
        flex-direction: column;
        position: relative;
    }

hope this helps! (I changed the colors cause they were a bit bright for me :p)
{ "pile_set_name": "StackExchange" }
Q: Change mac's terminal theme using python How exactly can I change the theme of a Mac terminal using Python? I have a command-line program and I want to have a specific theme for the terminal (other than the basic theme) as my command-line program starts.

A: You can use the Python subprocess module to call an AppleScript:

    #!/usr/bin/python

    import subprocess

    def asrun(ascript):
        osasc = subprocess.Popen(['osascript', '-'],
                                 stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE)
        return osasc.communicate(ascript)[0]

    def asquote(astr):
        ascrpt = astr.replace('"', '" & quote & "')
        return '"{}"'.format(ascrpt)

    ascript = '''
    tell application "Terminal"
        activate
        set current settings of tabs of windows to settings set "Pro"
    end tell
    '''

    asrun(ascript)

This will change all of the windows and tabs you currently have open. If you want it to change just one and not the others, or change the window when you launch Terminal, that's fairly easy to do. It's just a matter of determining which window or tab you want to change and how you are calling the script in the first place. This should give you an idea, though, of the basic means of how it works, so I've left this example fairly minimal so you can understand the basics of it. To change the profile, substitute "Pro" with any profile name (even custom versions you've created) that is listed in Terminal.app.
{ "pile_set_name": "StackExchange" }
Q: How can I figure out how a search engine is finding hidden pages? We have a system hosting many websites for our customers, and inside that system there is a method by which non-live customers can view their sites before we turn them on. Say the link is something like this:

    ourbigcompany.com/customer/domain=thisisanewsiteurl

Those links are not linked to anywhere outside a secure login - they are only sent to the customer via email. They are publicly viewable, as they have to be, but that's not the real problem. The real problem is that somehow Bing is getting hold of them and trying to crawl the sites. I know how to stop the crawling, but that would be like treating the symptoms without fixing the problem. We log the traffic and there is no referrer - so that is not helpful. If I change the querystring value for a site, Bing has it within hours. I need to figure out where Bing is getting the links from so that I can close what is obviously a security hole, but I am not sure how. Any ideas on how to figure that out?

A: You won't be able to know for sure how search engines got the URL. They don't tell you that information. There are several possible ways that it could have happened:

- The user shares or publishes the link themselves.
- The site has a link to another site. When that link is clicked, the secret URL is sent as a referrer. Some sites publish referrer URLs in places that search engines can find them.
- Some browsers send information about every page you visit directly to the companies that run search engines. Google at least says they do not rely on any sent data to feed their crawler. Some browser features that rely on this are:
  - Safe browsing features that flag malware pages as you surf
  - Pagerank indicator toolbars
  - Usage of social buttons on the page such as Google +1 buttons
  - Usage of analytics software
  - Inclusion of advertisements on the site
  - Any 3rd party JavaScript, CSS, or image usage
- The email you send with a link traverses an email server owned by the search engine (Gmail, Hotmail). Links in such an email could be harvested for crawling.

As Google says:

    It's almost impossible to keep a web server secret by not publishing links to it. As soon as someone follows a link from your "secret" server to another web server, your "secret" URL may appear in the referrer tag and can be stored and published by the other web server in its referrer log... If you want to prevent Googlebot from crawling content on your site, you have a number of options, including using robots.txt to block access to files and directories on your server.
{ "pile_set_name": "StackExchange" }
Q: Is this true?: converging sequence question Let X be a metric space, and let A ⊂ X. Suppose that {pn} is a sequence in A which converges to some point p ∈ X. True or false: i) p ∈ A′ (limit points of A) (ii) p ∈ closure(A) These are both true, right? And if (i) is true isn't (ii) always true (because closure(A)=A union A') A: Suppose $p_n = p$ for all $n$. Then $p_n$ converges to $p$, but $p$ might not be a limit point of $A$. It might be an isolated point of $A$. To give a concrete example, suppose that $A = \{0\} \cup [1,2]$ and $p_n = 0$ for all $n$. Then $p_n \rightarrow 0$ but $0$ is not a limit point of $A$. What you can say for sure is that if $p_n \in A$ and $p_n \rightarrow p$, then $p$ is either an isolated point of $A$ or a limit point of $A$. In either case, $p \in \text{closure}(A)$ since $\text{closure}(A) = A \cup A'$ and every isolated point of $A$ is contained in $A$. Referring to your problem statement, (i) implies (ii) but (ii) does not imply (i). And $p_n \rightarrow p$ implies (ii) but not necssarily (i). Note that a point $x \in X$ is a limit point of $A$ if and only if every neighborhood of $x$ contains a point of $A$ distinct from $x$. This in fact is equivalent to the condition that every neighborhood of $x$ contains infinitely many elements of $A$. Another equivalent condition is that there exists a sequence $x_n \in A$ of distinct values (no repeats) which converges to $x$. Given any set $A$, every element of $A$ is either an isolated point of $A$ or a limit point of $A$. Also, $A$ contains all of its isolated points but it need not contain all of its limit points.
{ "pile_set_name": "StackExchange" }
Q: discord.py: How to get channel id of a mentioned channel? I'm coding a discord bot right now, but I have the problem that I don't know how to get the channel id of a mentioned channel. How can I get the ID? Example:

    def check(messagehchannelid):
        return messagehchannelid.channel.id == ctx.message.channel.id and messagehchannelid.author == ctx.message.author and messagehchannelid.content == (the channel id of the mentioned channel in the message)

    messagechannelidcheck = await client.wait_for('message', check=check, timeout=None)

A: An example using command decorators:

    @client.command()
    async def cmd(ctx, channel: discord.TextChannel):
        await ctx.send(f"Here's your mentioned channel ID: {channel.id}")

Post-edit: You can use the channel_mentions attribute of a message to see what channels have been mentioned. If you're only expecting one, you can do:

    # making sure they've mentioned a channel, and replying in the same channel
    # the command was executed in, and by the same author
    def check(msg):
        return len(msg.channel_mentions) != 0 and msg.channel == ctx.channel and ctx.author == msg.author

    msg = await client.wait_for("message", check=check)  # timeout is None by default
    channel_id = msg.channel_mentions[0].id

References: discord.TextChannel, TextChannel.id, Message.channel_mentions, Client.wait_for()
{ "pile_set_name": "StackExchange" }
Q: TLS 1.2 Server certificate and signature_algorithms In the specification for TLS 1.2, it says the following: If the client provided a "signature_algorithms" extension, then all certificates provided by the server MUST be signed by a hash/signature algorithm pair that appears in that extension. However, when I tried the following command in OpenSSL (As a server) it runs without any issue: openssl s_server -accept 443 -cert ecdsa-cert.crt -key ecdsa-key.key -debug -msg -state -HTTP -cipher ECDHE-RSA-AES128-SHA Then, I run another command for client: openssl s_client -connect 127.0.0.1:443 -debug -cipher ECDHE-RSA-AES128-SHA It can't proceeed any further on the server console, it says "no shared cipher". However, when I used wireshark and inspect the ClientHello.signature_algorithms, it did indeed has ECDSA pair in it. So I'm wondering is it me that misinterpret the specification? A: The ECDHE-RSA-AES128-SHA cipher suite means that the key exchange will use a dynamically generate ECDH key pair, that the server will sign with its own RSA private key. The server's certificate will thus contain a RSA public key, regardless of how that certificate was signed by its CA. I suppose that the certificate you use contains an EC key pair, thus not compatible with the ECDHE-RSA-AES128-SHA cipher suite. It is kinda dumb from the server to start up at all with the provided options, since, in effect, it supports no cipher suite at all. But software is known to be dumb (at times). So while the client is perfectly able to understand ECDSA signatures (and states as such in its signature_algorithms extension), the list of supported cipher suites that you configured prevents it from accepting any ServerKeyExchange message which contains anything else than an ECDH public key signed with RSA. Also, the signature_algorithms extension is only partially related to this. 
With this extension, the client may announce the signature algorithms that it supports, and this is meant to help the server select an appropriate certificate chain, and algorithms for messages that need to be signed. If the client says "RSA only", then the server should strive to use only RSA signatures, both for what it signs itself (e.g. ServerKeyExchange message) and for the certificate chains is sends (all CA certificates should be RSA based). That's, in practice, wishful thinking. Most servers have only one certificate, and that's the one they will send, regardless of the signature_algorithms extension. And most clients will adapt: if the client really supports RSA signatures, then it will process the RSA signatures on certificates and on TLS messages. This is the normal behaviour in the absence of the signature_algorithms extension. The real usage of this extension is not to limit possible signature algorithms -- these are constrained by both the cipher suite and the kind of certificate that the server actually owns -- but to help the server choose hash functions to be used with signature algorithms. When the client says: "I support RSA with SHA-256", it is really telling to the server "if you must use RSA signatures, then you can do it with SHA-256 as support hash function, I know how to handle it".
{ "pile_set_name": "StackExchange" }
Q: nginx location redirect too many redirects I am trying to redirect all URLs from my site /search to /search/grid using:

    location /search {
        return 301 /search/grid;
    }

This works to redirect /search, but then /search/grid gives the HTTP error "too many redirects". How can I redirect only if the path is exactly /search?

A: Use either an exact match location block:

    location = /search {
        return 301 /search/grid;
    }

Or a rewrite:

    rewrite ^/search$ /search/grid permanent;

See this and this for more.
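The behavior difference is easy to see in a toy simulation (a sketch, not nginx itself: `prefix_rule` plays the role of `location /search`, which matches any path starting with /search, while `exact_rule` plays the role of `location = /search`):

```python
def follow_redirects(path, rule, max_hops=10):
    """Apply a redirect rule repeatedly, the way a browser would."""
    hops = 0
    target = rule(path)
    while target is not None:
        path = target
        hops += 1
        if hops >= max_hops:
            break                     # a real browser reports "too many redirects"
        target = rule(path)
    return path, hops

# prefix match (location /search): also fires for /search/grid -> loop
def prefix_rule(p):
    return "/search/grid" if p.startswith("/search") else None

# exact match (location = /search): fires only for /search itself
def exact_rule(p):
    return "/search/grid" if p == "/search" else None

print(follow_redirects("/search", exact_rule))   # ('/search/grid', 1)
print(follow_redirects("/search", prefix_rule))  # hits the hop limit: a loop
```

Because the prefix rule matches its own redirect target, every request to /search/grid redirects to itself; the exact-match rule stops after one hop.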
{ "pile_set_name": "StackExchange" }
Q: When I try to run "rails s" or "rails server" command I get an error and it does not let me start the server Once I installed Rails, I went into a folder and typed rails new app. Then I went into the folder and ran bundle install. Once I had installed all the necessary gems, I went into the application's folder and typed either rails s or rails server, and I got the following error:

    C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/nokogiri-1.6.6.2-x64-mingw32/lib/nokogiri.rb:29:in `require': cannot load such file -- nokogiri/nokogiri (LoadError)
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/nokogiri-1.6.6.2-x64-mingw32/lib/nokogiri.rb:29:in `rescue in <top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/nokogiri-1.6.6.2-x64-mingw32/lib/nokogiri.rb:25:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/loofah-2.0.2/lib/loofah.rb:3:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/loofah-2.0.2/lib/loofah.rb:3:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rails-html-sanitizer-1.0.2/lib/rails-html-sanitizer.rb:2:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rails-html-sanitizer-1.0.2/lib/rails-html-sanitizer.rb:2:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/sanitize_helper.rb:3:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/sanitize_helper.rb:3:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/text_helper.rb:32:in `<module:TextHelper>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/text_helper.rb:29:in `<module:Helpers>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/text_helper.rb:6:in `<module:ActionView>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/text_helper.rb:4:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/form_tag_helper.rb:18:in `<module:FormTagHelper>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/form_tag_helper.rb:14:in `<module:Helpers>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/form_tag_helper.rb:8:in `<module:ActionView>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/form_tag_helper.rb:6:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/form_helper.rb:4:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers/form_helper.rb:4:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers.rb:50:in `<module:Helpers>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers.rb:4:in `<module:ActionView>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/actionview-4.2.1/lib/action_view/helpers.rb:3:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/legacy_asset_tag_helper.rb:7:in `<module:LegacyAssetTagHelper>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/legacy_asset_tag_helper.rb:6:in `<module:Rails>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/legacy_asset_tag_helper.rb:4:in `<module:Sprockets>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/legacy_asset_tag_helper.rb:3:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/helper.rb:54:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/helper.rb:54:in `<module:Helper>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/helper.rb:7:in `<module:Rails>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/helper.rb:6:in `<module:Sprockets>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/rails/helper.rb:5:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/railtie.rb:6:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sprockets-rails-2.3.0/lib/sprockets/railtie.rb:6:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sass-rails-5.0.3/lib/sass/rails/railtie.rb:3:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sass-rails-5.0.3/lib/sass/rails/railtie.rb:3:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sass-rails-5.0.3/lib/sass/rails.rb:11:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sass-rails-5.0.3/lib/sass/rails.rb:11:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sass-rails-5.0.3/lib/sass-rails.rb:1:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/sass-rails-5.0.3/lib/sass-rails.rb:1:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/bundler-1.9.6/lib/bundler/runtime.rb:76:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/bundler-1.9.6/lib/bundler/runtime.rb:76:in `block (2 levels) in require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/bundler-1.9.6/lib/bundler/runtime.rb:72:in `each'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/bundler-1.9.6/lib/bundler/runtime.rb:72:in `block in require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/bundler-1.9.6/lib/bundler/runtime.rb:61:in `each'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/bundler-1.9.6/lib/bundler/runtime.rb:61:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/bundler-1.9.6/lib/bundler.rb:134:in `require'
        from C:/RubyProjects/app/config/application.rb:7:in `<top (required)>'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/railties-4.2.1/lib/rails/commands/commands_tasks.rb:78:in `require'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/railties-4.2.1/lib/rails/commands/commands_tasks.rb:78:in `block in server'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/railties-4.2.1/lib/rails/commands/commands_tasks.rb:75:in `tap'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/railties-4.2.1/lib/rails/commands/commands_tasks.rb:75:in `server'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/railties-4.2.1/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
        from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/railties-4.2.1/lib/rails/commands.rb:17:in `<top (required)>'
        from bin/rails:4:in `require'
        from bin/rails:4:in `<main>'

Does anyone have any idea why this might be happening and how can I fix that? Thank you!

A: Here is what I can suggest for now, until Nokogiri releases a new version with Windows support.

Solution 1: Change your development platform to Linux or one of its flavours, like Ubuntu.

Solution 2: Downgrade your Ruby version to 2.1 or lower. Once Nokogiri releases a new version with Windows support, you can upgrade Ruby back to 2.2. To downgrade Ruby versions, use RVM or a similar tool to manage your Ruby versions. But I am not sure RVM works on the Windows platform; you can refer to this stackoverflow link.
{ "pile_set_name": "StackExchange" }
Q: Dynamically add .js file, how to know when it's ready? In order to only load the jQuery swipe script on touch devices, I am detecting it with Modernizr and appending the script if it is touch, like this:

if ( Modernizr.touch ) {
    var js = jQuery("<script>");
    js.attr({
        type: "text/javascript",
        src: base_url + "/js/jquery-touch-swipe.js?p1"
    });
    $("head").append(js);
    $this.swipe( function( e, dx, dy ){
        if( dx < 0 ){
            $this.find('.next').click();
        } else {
            $this.find('.prev').click();
        }
    });
}

The problem is that right after appending, the swipe function isn't ready yet. I could use a setTimeout, but I couldn't be sure either. How can I make sure the appended script is ready? I tried like this:

if ( Modernizr.touch ) {
    var js = jQuery("<script>");
    js.attr({
        type: "text/javascript",
        src: base_url + "/js/jquery-touch-swipe.js?p1",
        id: 'swipe'
    });
    $("head").append(js);
    $('#swipe').load(function() {
        $this.swipe( function( e, dx, dy ){
            if( dx < 0 ){
                $this.find('.next').click();
            } else {
                $this.find('.prev').click();
            }
        });
    });
}

But with no success.

A: You can use $.getScript and then use its success callback function, which will execute once the script is loaded but not necessarily executed.

if (Modernizr.touch) {
    $.getScript(base_url + "/js/jquery-touch-swipe.js?p1", function () {
        //Operation which you want to perform
    });
}
{ "pile_set_name": "StackExchange" }
Q: How to export data from Matlab to excel for a loop? I have the following "for loop":

for i=1:4
    statement...
    y=sim(net, I);
end

Now I need to export the value of y to an Excel sheet. For that I used:

xlswrite('output_data.xls', y, 'output_data', 'A1')

But my problem is that the Excel cell ID, i.e. "A1", should change on each iteration: in my case, for iteration 1 -> A1, iteration 2 -> A2, and so on. Anybody please help me out. Thanks in advance for any assistance or suggestion.

A: You can store the sim outputs in a vector (y(i)) and save them to the sheet with a single write. This is also more efficient, since you perform a single bulk write instead of many small writes. Specify the first cell and y will be written starting from there.

last = someNumber;
for i=1:last
    statement...
    y(i)=sim(net, I);
end
xlswrite('output_data.xls', y', 'output_data', 'A1');

If you prefer to specify the range, write ['A1:A',num2str(last)] instead of A1. If you really want to write within the loop, try:

for ii=1:last
    ...
    y=sim(net, I);
    xlswrite('output_data.xls', y, 'output_data', sprintf('A%d',ii));
end
{ "pile_set_name": "StackExchange" }
Q: What is the right approach to initialize the Model's data inside of the mvc layout page and bind it to the dropdown list? Currently, I'm doing the following logic: I have a Layout page where I need to display a Kendo.DropDown list. I have created a Model:

public class CultureModel
{
    public string Culture { get; set; }
    public List<string> AvailableCultures { get; set; }

    public CultureModel()
    {
        PopulateCulture();
    }

    private void PopulateCulture()
    {
        AvailableCultures = new List<string>();
        AvailableCultures.Add("en-US");
        AvailableCultures.Add("de-DE");
        AvailableCultures.Add("es-ES");
    }
}

And in my Layout I define the model: @model CultureModel Then, I'm trying to render a DisplayTemplate to show the dropdown: @Html.DisplayFor(x => x.AvailableCultures, "_CultureSelector") And my template is:

@model List<string>
<label for="culture">Choose culture:</label>
@(Html.Kendo().DropDownList()
      .Name("culture")
)

Is that the correct approach?

A: Thinking about your use case of having a dropdown in the layout file, it would be fine to create your Kendo dropdown with the following code directly in the layout file:

@{
    @(Html.Kendo().DropDownList()
          .Name("Cultures")
          .DataTextField("Text")
          .DataValueField("Value")
          .BindTo(new List<SelectListItem>() {
              new SelectListItem() { Text = "en-US", Value = "1" },
              new SelectListItem() { Text = "de-DE", Value = "2" },
              new SelectListItem() { Text = "es-ES", Value = "3" }
          })
    )
}

Perhaps use a partial view to render the code in the layout for better code organization and readability: @Html.Partial("_CultureSelector") I found the code on the Telerik site: https://demos.telerik.com/aspnet-mvc/dropdownlist
{ "pile_set_name": "StackExchange" }
Q: How to make vertical frame label read top down In Mathematica, default frame label will put labels at the bottom and the left. The label on the left reads from the bottom to the top. I would like to also put label on the right of the frame, but this label also reads from bottom to top. How can I make the right label read from top down? A: You can simply Rotate the label by 180 degrees: Plot[, {x, 0, 1}, Frame -> True, FrameLabel -> {"foo", "bar", "baz", Rotate["qux", 180 Degree]}]
{ "pile_set_name": "StackExchange" }
Q: Rendering Large Bitmaps in PDFSharp and Minimizing Memory Footprint So, I have a need to export large-format paper sizes (33 x 44 inches) at up to 300 DPI from my web app as PDF. I'm currently handling 8.5 x 11 and 11 x 17 sheets just fine using PDFSharp. These pages contain mostly image data, minus some margins and a small amount of text; i.e. not much is vector inside this PDF page. The problem I'm encountering for the large formats is this: a 33 x 44 inch sheet at 300 DPI with a bit depth of 32 bits per pixel is 522,720,000 bytes - almost half a gigabyte. I can't have this kind of memory consumption in my web app. Is there any possible way that I can render the PDF in tiles or chunks to avoid needing to have this entire block in memory at one time? Is there any functionality in PDFSharp that can help me here? A: PDFsharp was designed to keep everything in memory, so the PDF file can finally be written very fast. Currently (and in the foreseeable future) there is no way to handle image data that does not fit into memory.
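The memory figure in the question is easy to sanity-check with a quick calculation (sheet dimensions, DPI, and bit depth are taken straight from the question):

```python
# Uncompressed bitmap size for a 33 x 44 inch page at 300 DPI,
# with 32 bits (4 bytes) per pixel.
width_in, height_in = 33, 44
dpi = 300
bytes_per_pixel = 4  # 32 bpp

pixels = (width_in * dpi) * (height_in * dpi)
total_bytes = pixels * bytes_per_pixel
print(total_bytes)             # 522720000 bytes, matching the question
print(total_bytes / 1024**3)   # roughly 0.49 GiB
```

This is why tiling or streaming would help: only a fraction of those half-billion bytes would need to be resident at once.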
{ "pile_set_name": "StackExchange" }
Q: why is numpy.linalg.norm slow when called many times for small size data?

import numpy as np
from datetime import datetime
import math

def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)

def foo(a, b, f):
    l = range(a)
    s = datetime.now()
    for i in range(b):
        f(l)
    e = datetime.now()
    return e-s

foo(10**4, 10**5, norm)
foo(10**4, 10**5, np.linalg.norm)
foo(10**2, 10**7, norm)
foo(10**2, 10**7, np.linalg.norm)

I got the following output:

0:00:43.156278
0:00:23.923239
0:00:44.184835
0:01:00.343875

It seems like when np.linalg.norm is called many times for small-sized data, it runs slower than my norm function. What is the cause of that?

A: First of all: datetime.now() isn't appropriate to measure performance. It measures wall-clock time, so you may just pick a bad moment (for your computer) when a high-priority process runs or Python's GC kicks in, ... There are dedicated timing functions/modules available in Python: the built-in timeit module, or %timeit in IPython/Jupyter, and several other external modules (like perf, ...) Let's see what happens if I use these on your data:

import numpy as np
import math

def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)

r1 = range(10**4)
r2 = range(10**2)

%timeit norm(r1)
3.34 ms ± 150 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit np.linalg.norm(r1)
1.05 ms ± 3.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit norm(r2)
30.8 µs ± 1.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit np.linalg.norm(r2)
14.2 µs ± 313 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

It isn't slower for short iterables; it's still faster. However, note that the real advantage of NumPy functions comes if you already have NumPy arrays:

a1 = np.arange(10**4)
a2 = np.arange(10**2)

%timeit np.linalg.norm(a1)
18.7 µs ± 539 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

%timeit np.linalg.norm(a2)
4.03 µs ± 157 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Yeah, it's quite a lot faster now. 18.7 µs vs. 1 ms - over 50 times faster for 10000 elements. That means most of the time of np.linalg.norm in your examples was spent in converting the range to an np.array.
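To make the comparison concrete: both implementations compute the same quantity, so the hand-rolled version only loses on speed, never on correctness. A small sketch (the norm function from the question is re-declared here so the snippet is self-contained):

```python
import math

import numpy as np

def norm(l):
    # The pure-Python loop from the question.
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)

a = np.arange(10**4)
# Identical results, very different cost profiles: np.linalg.norm on an
# existing ndarray skips both the Python-level loop and the list-to-array
# conversion that dominates when it is handed a plain range.
print(math.isclose(norm(range(10**4)), float(np.linalg.norm(a))))  # True
```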
{ "pile_set_name": "StackExchange" }
Q: What exactly does Data Locality mean in Hadoop? Data locality as defined by many Hadoop tutorial sites (e.g. https://techvidvan.com/tutorials/data-locality-in-hadoop-mapreduce/) states that: "Data locality in Hadoop is the process of moving the computation close to where the actual data resides instead of moving large data to computation. This minimizes overall network congestion." I can understand that having the node where the data resides process the computation for those data, instead of moving data around, would be efficient. However, what is meant by "moving the computation close to where the actual data resides"? Does this mean that if the data sits in a server in Germany, it is better to use the server in France to do the computation on those data instead of using the server in Singapore, since France is closer to Germany than Singapore?

A: Typically people talk about this on a quite different scale, especially within a Hadoop context. Suppose you have a cluster of 5 nodes; you store a file there and need to do a calculation on it. With data locality you try to make the calculation happen on the node(s) where the data is stored (rather than, for example, the first node that has compute resources available). This reduces network load. It is good to realize that in many new infrastructures the network is not the bottleneck, so you will keep hearing more about the decoupling of compute and storage.

A: I +1 Dennis Jaheruddin's answer, and just wanted to add -- you can actually see different locality levels in MR when you check job counters, in the Job History UI for example.
HDFS and YARN are rack-aware, so it's not just a binary same-or-other node distinction: in those counters (screenshot omitted here), Data-local means the task ran local to the machine that contained the actual data; Rack-local -- that the data wasn't local to the node running the task and needed to be copied, but was still on the same rack; and finally the Other local case -- where the data wasn't available locally, nor on the same rack, so it had to be copied over two switches to the node that ran the computation.
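The preference order behind those counters can be sketched as a toy scheduling rule (a simplified model for illustration, not actual YARN code; the node and rack names are made up):

```python
# Toy model of locality-aware task placement: prefer a node that holds a
# replica of the split, then any node on the same rack, then anything else.
def locality(replica_nodes, candidate_node, node_rack):
    if candidate_node in replica_nodes:
        return "Data-local"
    if node_rack[candidate_node] in {node_rack[n] for n in replica_nodes}:
        return "Rack-local"
    return "Other local"

node_rack = {"n1": "rackA", "n2": "rackA", "n3": "rackB"}
replicas = ["n1"]  # nodes holding the block replicas for this task's split

print(locality(replicas, "n1", node_rack))  # Data-local
print(locality(replicas, "n2", node_rack))  # Rack-local (same rack as n1)
print(locality(replicas, "n3", node_rack))  # Other local (different rack)
```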
{ "pile_set_name": "StackExchange" }
Q: Taking list's tail in a Pythonic way?

from random import randrange
data = [(randrange(8), randrange(8)) for x in range(8)]

And we have to test whether the first item equals any item in the tail. I am curious how we would do this in the simplest way without copying the tail items to a new list. Please take into account that this piece of code gets executed many times in, say, an update() method, and therefore it has to be as quick as possible. Using an additional list (unnecessary memory waste, I guess):

head = data[0]
result = head in data[1:]

Okay, here's another way (too lengthy):

i = 1
while i < len(data):
    result = head == data[i]
    if result:
        break
    i+=1

What is the most Pythonic way to solve this? Thanks.

A: Nick D's answer is better: use islice. It doesn't make a copy of the list and essentially embeds your second (elegant but verbose) solution in a C module.

import itertools
head = data[0]
result = head in itertools.islice(data, 1, None)

For a demo:

>>> a = [1, 2, 3, 1]
>>> head = a[0]
>>> tail = itertools.islice(a, 1, None)
>>> head in tail
True

Note that you can only traverse it once, but if all you want to do is check that the head is or is not in the tail and you're worried about memory, then I think that this is the best bet.

A: Alternative ways,

# 1
result = data.count(data[0]) > 1

# 2
it = iter(data)
result = it.next() in it
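The one-shot nature of the islice approach is worth spelling out: the slice is a lazy iterator, so re-create it for each membership test. A small demonstration:

```python
from itertools import islice

data = [(3, 5), (0, 1), (3, 5), (7, 2)]
head = data[0]

tail = islice(data, 1, None)   # lazy view over data[1:], no copy made
print(head in tail)            # True: (3, 5) appears again at index 2

# The membership test consumed the iterator up to the match; a second
# test on the same object only sees the leftovers and then exhausts it.
print(head in tail)            # False

# So just build a fresh islice (cheap) whenever the test is repeated.
print(head in islice(data, 1, None))  # True
```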
{ "pile_set_name": "StackExchange" }
Q: Boolean value in when section using Drools 7.21 I am trying to migrate Drools from version 5.2 to 7.21. I rebuilt the code against the KIE API and all looked fine, but now I get a problem in DRL files. In the "when" section of a Drools file I need to use the statement "finished != true". In v5.2 it worked fine, but in v7.21 it does not... My code:

rule "..."
when
    element : Operation( person.id == $person.getId(), finished != true )
then
    (...)
end

I've done some tests, and the results are weird:

finished != true -> it doesn't work, and all objects with "finished == true" are in the results too
finished == false -> like above
finished -> it works fine, and only the objects with "finished == true" are in the results
finished == true -> like above

I need to use finished != true or something similar. How can I fix it? Is it an error in the new Drools version?

A: I found a workaround which is working for me. If someone has a better way to do it, please share it here.

rule "..."
when
    $booleanTrue : Boolean(booleanValue == true) from 1 == 1
    element : Operation( person.id == $person.getId(), finished != $booleanTrue )
then
    (...)
end
{ "pile_set_name": "StackExchange" }
Q: Is every byte of a file a char from some charset? I am writing a program (in Java) that has to remove half of the bytes of a file, but sequentially, i.e. remove every even (or odd) byte. I am using the following method to retrieve all the bytes:

byte[] fileContent = Files.readAllBytes(file.toPath());

From a text file, using System.out.println(fileContent[i]); it will output the corresponding ASCII code of the targeted byte. Do I always get an ASCII code? I don't know how the structure of a file works. In the end, I didn't manage to make a successful loop to write to a new file by looping through byte[] fileContent and skipping every other element. Instead I created char[] fileContentChar out of byte[] fileContent and wrote from that one.

A: The short answer to "Do I always get an ASCII code?" is: No. You cannot make any assumptions about the character encoding of a text file. There are so many formats (ASCII, UTF8, UTF16, ISO-8859-1, Unicode, etc.; see https://en.wikipedia.org/wiki/Character_encoding ) that you need to sample the file to make an informed guess whether the text is 7-bit (ASCII) or UTF8, etc. char and byte are not the same (in terms of bit length, depending on platform). In Java, char is 2 bytes (16 bit) and byte is 1 byte (8 bit). There are tricks to guessing what encoding a text file uses. For example, if you sample 100 bytes and the high bit is never set, it might be 7-bit ASCII ( b & 0x80 ). If the file starts with a 3-byte preamble/signature (0xEF,0xBB,0xBF) it is likely UTF8. (UTF8 is 1 to 4 bytes per character, looking at the high bits of the first byte.) Java by default uses UTF16 (2 bytes). Check this resource for more details (http://unicode.org/faq/utf_bom.html). Good luck!
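The sampling tricks mentioned in the answer can be sketched in a few lines. This is a rough heuristic for illustration, not a substitute for a real charset detector:

```python
def sniff(raw: bytes) -> str:
    # BOM check: the UTF-8 signature is the 3-byte preamble EF BB BF.
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8 (BOM)"
    # UTF-16 byte-order marks (little- and big-endian).
    if raw.startswith(b"\xff\xfe") or raw.startswith(b"\xfe\xff"):
        return "utf-16 (BOM)"
    sample = raw[:100]
    # If the high bit (b & 0x80) is never set in the sample,
    # 7-bit ASCII is plausible.
    if all(b & 0x80 == 0 for b in sample):
        return "ascii (probably)"
    # High bytes present: try a strict UTF-8 decode before giving up.
    try:
        sample.decode("utf-8")
        return "utf-8 (probably)"
    except UnicodeDecodeError:
        return "unknown 8-bit encoding"

print(sniff(b"plain text"))            # ascii (probably)
print(sniff("héllo".encode("utf-8")))  # utf-8 (probably)
print(sniff(b"\xef\xbb\xbfhello"))     # utf-8 (BOM)
```

Cutting the sample at an arbitrary byte count can split a multi-byte UTF-8 sequence, which is one of the ways such simple heuristics mis-guess.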
{ "pile_set_name": "StackExchange" }
Q: Global test initialize method for MSTest Quick question: how do I create a method that is run only once before all tests in the solution are run?

A: Create a public static method, decorated with the AssemblyInitialize attribute. The test framework will call this Setup method once per test run:

[AssemblyInitialize()]
public static void MyTestInitialize(TestContext testContext) {}

For TearDown it's:

[AssemblyCleanup]
public static void TearDown() {}

EDIT: Another very important detail: the class to which this method belongs must be decorated with [TestClass]. Otherwise, the initialization method will not run.

A: Just to underscore what @driis and @Malice said in the accepted answer, here's what your global test initializer class should look like:

namespace ThanksDriis
{
    [TestClass]
    class GlobalTestInitializer
    {
        [AssemblyInitialize()]
        public static void MyTestInitialize(TestContext testContext)
        {
            // The test framework will call this method once -BEFORE- each test run.
        }

        [AssemblyCleanup]
        public static void TearDown()
        {
            // The test framework will call this method once -AFTER- each test run.
        }
    }
}
{ "pile_set_name": "StackExchange" }
Q: how to decide on api signature Many times while writing functions that accept enumerable types I face this confusion. Which exposed API will be better from the following options?

public void Resolve(Func<bool>[] howtos)
public void Resolve(IEnumerable<Func<bool>> howtos)
public void Resolve(List<Func<bool>> howtos)

I usually decide based on the following: if the input needs to be modified by adding or deleting items, then use List; else use IEnumerable. Not sure about the array option. Are there other points that need to be considered while deciding on the API to be exposed? Is there any rule of thumb that clearly identifies situations in which one should be preferred over the other? Thanks

A: You should always accept the least restrictive parameter types. That means IEnumerable<T>, ICollection<T>, or IList<T>. This way, the client is free to pass any kind of implementation, such as an array, HashSet<T>, or ReadOnlyCollection<T>. Specifically, you should take an IEnumerable<T> if you only need to iterate the data, ICollection<T> if you also want to add or remove items, or if you need to know the size, and IList<T> if you need random access (the indexer).
{ "pile_set_name": "StackExchange" }
Q: Flexbox aligning input type=file boxes Puzzled how to neatly align the name of the input field, with the "Choose file" dialog & the previous uploads. Any tips? Looking for a simple, elegant solution. .inputs { display: flex; flex-direction: column; margin: 3em; align-items: left; justify-content: left; } label { padding: 1em; margin: 0.3em; border: thin solid black; border-bottom-right-radius: 1em; } <div class=inputs> <label>Form 1 <input type=file name=form24><a href=#>Previous upload 1</a> </label> <label>Form something else <input type=file name=form24><a href=#>Named upload</a> </label> <label>Form blah <input type=file name=form24><a href=#>Previous upload</a> </label> </div> A: .inputs { display: flex; flex-direction: column; margin: 3em; /* align-items: left; <-- "left" is not a valid value */ /* justify-content: left; <-- "left" is not a valid value */ } label { display: flex; /* establish nested flex container */ padding: 1em; margin: 0.3em; border: thin solid black; border-bottom-right-radius: 1em; } label > * { flex: 1; /* distribute container space evenly among flex items */ } <div class=inputs> <label> <span>Form 1</span><!-- wrap text in a span so it can be targeted by CSS --> <input type=file name=form24> <a href=#>Previous upload 1</a> </label> <label> <span>Form something else</span> <input type=file name=form24> <a href=#>Named upload</a> </label> <label> <span>Form blah</span> <input type=file name=form24> <a href=#>Previous upload</a> </label> </div>
{ "pile_set_name": "StackExchange" }
Q: How to get a certain number of elements in django template I wonder how I can get a specific number of items when I put an if statement inside a for loop. I know we can do {% for i in items|slice:":5" %} to get a limited number of items, but when I do

{% for post in posts %}
  {% for img in post_imgs %}
    {% if img.link == post.link %}
      <img class="class" src="{{img.img.url}}" style="width:100%">
    {% endif %}
  {% endfor %}
{% endfor %}

there's no way of doing that inside the if tag. Any solution?

A: From this answer: Changing the state of an object in a Django template is discouraged. You should probably bite the bullet, calculate the condition beforehand and pass extra state to the template so you can simplify the template logic. So just do your comparisons in Python in your view, something like:

post_imgs_filtered = [img for img in post_imgs if img.link == post.link]

And then in your template:

{% for img in post_imgs_filtered|slice:":5" %}
  <img class="class" src="{{img.img.url}}" style="width:100%">
{% endfor %}
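The same view-side pattern also covers the original goal of capping the count: filter and slice together in Python, so the template only loops. A small sketch with plain classes standing in for the ORM objects (the class and attribute names are placeholders):

```python
# Hypothetical stand-ins for the model instances in the question; in the
# real view these would come from the ORM.
class Obj:
    def __init__(self, link):
        self.link = link

post = Obj("a")
post_imgs = [Obj("a"), Obj("b"), Obj("a"), Obj("a"), Obj("a")]

# Filter on the condition, then cap at 5 items - the combined effect of
# the {% if %} and |slice:":5" logic, done in the view where it belongs.
post_imgs_filtered = [img for img in post_imgs if img.link == post.link][:5]
print(len(post_imgs_filtered))  # 4
```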
{ "pile_set_name": "StackExchange" }
Q: Displaying a changing loop of pictures in a dialog? I'm not having much success doing this the way I thought would work, so I'll ask the experts. I have an ArrayList of ten URLs linking to images. I want to display the first URL for 2 seconds, then get the second URL and do the same until the end. Here is what I have so far; I think perhaps I'm not going about it in the best way with a dialog in postExecute?

private class LiveView extends AsyncTask<String, Integer, ArrayList<String>> {
    ProgressDialog dialog;
    private volatile boolean running = true;

    @Override
    protected void onPreExecute() {
        dialog = ProgressDialog.show(
            myView.this, "Working", "Info message . . .", true, true,
            new DialogInterface.OnCancelListener(){
                public void onCancel(DialogInterface dialog) {
                    cancel(true);
                }
            }
        );
    }

    @Override
    protected void onCancelled() {
        running = false;
    }

    @Override
    protected ArrayList<String> doInBackground(String... passed) {
        while (running) {
            //removed the code here that sends the request to the server, to make this shorter, but it works fine
            return responseFromServer.arrayListofURLs; //list of URLs
        }
        return null;
    }

    @Override
    protected void onPostExecute(ArrayList<String> listURLs) {
        dialog.cancel();
        Dialog liveView = new Dialog(myView.this, R.style.Dialog);
        liveView.setContentView(R.layout.liveview_dialogue);
        TextView title = (TextView)liveView.findViewById(R.id.liveViewTitle);
        Button button = (Button) liveView.findViewById(R.id.liveViewButton);
        ImageView trackImage = (ImageView)liveView.findViewById(R.id.liveViewImage);
        //I want to loop through the ten images here?
        button.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
            }
        });
        liveView.show();
    }
}

A: To finish the answer with some code, here is how I did it with a handler. I didn't need to pass any variables, as I'd created the ArrayList of bitmaps globally for the AsyncTask. I also used a boolean value to end the handler if the dialog was closed.
liveView = new Dialog(myView.this, R.style.Dialog); liveView.setContentView(R.layout.liveview_dialogue); TextView title = (TextView)liveView.findViewById(R.id.liveViewTitle); Button button = (Button) liveView.findViewById(R.id.liveViewButton); trackImage = (ImageView)liveView.findViewById(R.id.liveViewImage); button.setOnClickListener(new OnClickListener() { public void onClick(View v) { run = false; liveView.dismiss(); } }); liveView.show(); final Handler handler = new Handler(); final Runnable r = new Runnable() { Iterator<Bitmap> it = images.iterator(); public void run() { if(run){ trackImage.setImageBitmap(it.next()); if(it.hasNext()) handler.postDelayed(this, 5000); } } }; handler.postDelayed(r, 5000);
{ "pile_set_name": "StackExchange" }
Q: Can't send request from applet to servlet I have the following code:

URL urlServlet = new URL(WEB_SERVER_URL);
URLConnection connection = urlServlet.openConnection();
connection.setDoInput(true);
connection.setDoOutput(true);
connection.setUseCaches(false);
connection.setDefaultUseCaches(true);
connection.setRequestProperty("Content-Type", "application/octet-stream");
connection.setRequestProperty("event", "blah");
OutputStream outputStream = connection.getOutputStream();
outputStream.flush();
outputStream.close();

The server is not responding to this program. But if I get an inputStream from the connection, I hit a breakpoint in the doGet servlet method. What am I doing wrong?

A: But if I get an inputStream from the connection, I hit a breakpoint in the doGet servlet method. What am I doing wrong?

Your mistake was not asking for the response. The URLConnection is lazily executed. The request will only be sent when you ask for the response. Calling getInputStream() will actually fire the HTTP request because you're asking for the response. The connection will not be made when you just open and close the request body. See also: Using java.net.URLConnection to fire and handle HTTP requests
{ "pile_set_name": "StackExchange" }
Q: How to compute the average power for this signal The signal is $x(t)=A\cos(\omega_o t+\theta)$ and the average power formula is $$ P_\infty = \lim\limits_{T\rightarrow \infty} \frac{1}{2T+1} \int_{-T}^{T} |x(t)|^2 dt $$ My approach is [handwritten working omitted]. The answer in the book is $P_{\infty}= \dfrac{A^2}{2}$, but I'm not able to reach this result. As far as I can see from my approach, the following equation must hold, but I can't prove it: $$ \frac{\sin(2\omega_o T)\cos(2\theta)}{\omega_o} = 1 $$

A: Here's the intuition to understand the result. First of all, $$\int_{-\pi}^{\pi}\cos^2t\,dt = \pi, \quad\text{so, substituting } u=ct,\quad \int_{-\pi/c}^{\pi/c} \cos^2(ct)\,dt = \frac{\pi}{c} \quad\text{for any } c>0.$$ It follows that $$\int_{-n\pi/c}^{n\pi/c}\cos^2(ct)\,dt = \frac{n\pi}{c} \quad\text{for any positive integer }n,$$ and so $\lim\limits_{n\to\infty} \dfrac{c}{n\pi}\displaystyle\int_{-n\pi/c}^{n\pi/c}\cos^2(ct)\,dt = 1$. If $n\pi/c\le T<(n+1)\pi/c$, since the integrand $\cos^2(ct)\ge 0$, it follows that $$\frac{n\pi}{c}\le\int_{-T}^T \cos^2(ct)\,dt < \frac{(n+1)\pi}{c}$$ and so $$\frac{n\pi}{cT} \le \frac1T\int_{-T}^T \cos^2(ct)\,dt < \frac{(n+1)\pi}{cT}.$$ As $T$ varies, let $n=n_T=[cT/\pi]$. Then $n_T\le cT/\pi<n_T+1$, so $$1-\frac{\pi}{cT} < \frac{n_T\pi}{cT} \le 1 \quad\text{and}\quad \frac{(n_T+1)\pi}{cT} \le 1+\frac{\pi}{cT};$$ both bounds tend to $1$, and so, letting $T\to\infty$, by the squeeze theorem, $$\lim_{T\to\infty}\frac1T\int_{-T}^T \cos^2(ct)\,dt = 1.$$ **** Alternatively, to finish the derivation given in the OP (ignoring the factor of $A^2/2$ until the end), we again apply the squeeze theorem. Note that $$\left|\frac{\sin(2\omega_0T)\cos(2\theta)}{\omega_0}\right|\le \frac1{|\omega_0|},$$ and so $$\left|\frac1T\frac{\sin(2\omega_0T)\cos(2\theta)}{\omega_0}\right|\le \frac1{|\omega_0|}\frac1T \to 0 \quad\text{as } T\to\infty,$$ and so $\lim\limits_{T\to\infty}\dfrac1T\dfrac{\sin(2\omega_0T)\cos(2\theta)}{\omega_0} = 0$. Likewise, $\lim\limits_{T\to\infty}\dfrac1{2T+1}\cdot\dfrac{\sin(2\omega_0T)\cos(2\theta)}{\omega_0} = 0$.
Thus, \begin{align*} \lim_{T\to\infty} &\frac{A^2}{2(2T+1)}\left(2T+\frac{\sin(2\omega_0T)\cos(2\theta)}{\omega_0}\right) \\&= \frac{A^2}2 \left(\lim_{T\to\infty} \frac{2T}{2T+1} + \lim_{T\to\infty}\frac1{2T+1}\frac{\sin(2\omega_0T)\cos(2\theta)}{\omega_0}\right) \\ &= \frac{A^2}2(1+0)= \frac{A^2}2. \end{align*}
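A quick numerical check of the limit is easy to run. The parameter values below are arbitrary; with $A = 3$ the average power should come out near $A^2/2 = 4.5$:

```python
import math

def avg_power(A, w0, theta, T, n=200_000):
    # Riemann-sum approximation of (1/(2T)) * integral of x(t)^2 over [-T, T],
    # where x(t) = A*cos(w0*t + theta). For large T this approaches A**2 / 2.
    dt = 2 * T / n
    total = sum((A * math.cos(w0 * (-T + k * dt) + theta)) ** 2
                for k in range(n)) * dt
    return total / (2 * T)

print(avg_power(A=3.0, w0=2.0, theta=0.7, T=500.0))  # close to 4.5
```

The leftover oscillatory term $\cos(2\theta)\sin(2\omega_0 T)/(2\omega_0)$ is bounded, so dividing by $2T$ drives it to zero, which is exactly why the numerical value settles at $A^2/2$.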
{ "pile_set_name": "StackExchange" }
Q: Django: How to save data to ManyToManyField Your help will be nice for me. Here are that codes: models.py: from django.db import models class TagModel(models.Model): tag = models.CharField(max_length=50) def __str__(self): return self.tag class MyModel(models.Model): title = models.CharField(max_length=50) tag = models.ManyToManyField(TagModel) forms.py: from django import forms from .models import * class MyForm(forms.ModelForm): class Meta: model = MyModel fields = '__all__' views.py: from django.shortcuts import render, get_object_or_404, redirect from .models import * from .forms import * def MyWriteView(request): if request.method == "POST": mywriteform = MyForm(request.POST) if mywriteform.is_valid(): confirmform = mywriteform.save(commit=False) confirmform.save() return redirect('MyDetail', pk=confirmform.pk) else: mywriteform = MyForm() return render(request, 'form.html', {'mywriteform': mywriteform}) form.html(1st trial): <form method="post"> {% csrf_token %} {{ mywriteform }} <button type="submit">Save</button> </form> form.html(2nd trial): <form method="post"> {% csrf_token %} {{ mywriteform.title }} <select name="tags" required="" id="id_tags" multiple=""> {% for taglist in mywriteform.tags %} <option value="{{taglist.id}}">{{taglist}}</option> {% endfor %} </select> <button type="submit">Save</button> </form> I am trying to add tags on my post. I made a simple manytomany tagging blog but it does not work. I submitted a post by clicking the save button, and the title was saved, but the tag was not. In the admin, it worked well. Thank you in advance. A: update the code like this if mywriteform.is_valid(): confirmform = mywriteform.save(commit=False) confirmform.save() mywriteform.save_m2m() return redirect('MyDetail', pk=confirmform.pk) for more details Refer here
{ "pile_set_name": "StackExchange" }
Q: How to connect to my cluster on aws which has private topology and internal loadbalancer? I am creating a cluster using the following command:

kops create cluster --zones us-west-1c --master-size=m4.large --node-size=m5.large ${NAME} --associate-public-ip=false --topology private --api-loadbalancer-type internal --networking calico --vpc vpc-xxxxxxxx --cloud-labels="Creator=revor,Description=YM k8 cluster,ENV=int,Name=SMV_INT_YMK8,Requestor=Rey Reymond,code=5483"

The cluster gets created in aws. So far so good. But the problem is, when I run kops validate cluster I get:

Validating cluster xxx.xxx.xx
unexpected error during validation: error listing nodes: Get https://api.xxx.xxx.xx/api/v1/nodes: dial tcp 172.30.xx.xx:443: getsockopt: connection refused

and when I run kubectl get nodes I get:

Unable to connect to the server: dial tcp 172.30.xx.xx:443: i/o timeout

Also when I run ssh -i ~/.ssh/id_rsa [email protected] I get:

sh: connect to host api.xxx.xxx.xx port 22: Connection refused

My question is why I cannot connect to my cluster and why I'm getting the above errors. As the above command shows, my cluster is defined to have a private topology, no public IP addresses, and an internal loadbalancer. I'm wondering if that means I should not be able to connect to my cluster and the above errors are expected?

A: If all your instances are private, that is expected. I bet your xxx.xxx.xx is in some private IP range like 172.x.x.x. The usual approach to this is to create an EC2 instance with a public IP address in a public network, connect to this instance and then connect to your private instances from this public instance. Such an instance is generally referred to as a bastion host. You will, of course, have to modify VPC security groups to allow access from your public subnet to your private subnet. Take a look at https://docs.aws.amazon.com/quickstart/latest/linux-bastion/welcome.html for AWS-provided guides.
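A common way to wire this up on the client side is an SSH jump-host entry, so connections to the private nodes are transparently proxied through the bastion. The hostnames, addresses, and usernames below are made-up placeholders:

```
# ~/.ssh/config (illustrative values only)
Host bastion
    HostName bastion.example.com      # public IP / DNS name of the bastion
    User ec2-user
    IdentityFile ~/.ssh/id_rsa

Host k8s-master
    HostName 172.30.10.5              # private IP inside the VPC
    User admin
    ProxyJump bastion
```

With that in place, `ssh k8s-master` first connects to the bastion and tunnels on to the private instance, which is why the security groups must allow SSH from the bastion's subnet to the private one.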
{ "pile_set_name": "StackExchange" }
Q: Google "GoogleWebAuthorizationBroker" showing error redirect uri mismatch I'm doing it in MVC (C#). I want to access a user's Google Calendar, so I have added a button labeled "Access Calendar". When a user clicks the button, the code below is called to get (and save) tokens for accessing calendar data.

UserCredential credential;
credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
    new ClientSecrets
    {
        ClientId = "xxxxxx.apps.googleusercontent.com",
        ClientSecret = "FxxxxxxxxxZ"
    },
    Scopes,
    "user",
    CancellationToken.None,
    new FileDataStore(credPath)).Result;

When this method is executed we should be redirected to the consent screen; instead, I'm getting a redirect_uri_mismatch error (screenshot omitted), but the redirect URI it shows is one I have never specified in the console. These are the redirect URIs I have specified in the Google project console (screenshot omitted). Am I doing anything wrong? How do I get properly redirected to the permissions screen?

A: Redirect uri issue
The redirect uri in your request is http://127.0.0.1:59693/authorize; you have not added that under Authorized redirect URIs. You can't just add any redirect uri. The client library builds this url itself; it's always host:port/authorize.

Application type
There are several types of clients that you can create; these clients are designed to work with different types of applications. The code used to connect with these clients is also different.

installed application - application installed on a user's machine
web application - application hosted on a web server, connected to a user via a web browser.

Installed Application
You are using GoogleWebAuthorizationBroker.AuthorizeAsync; it is designed for use with installed applications. The browser window will open on the machine itself. If you try to host this on a web server, the browser will attempt to open on the server and not be displayed to the user.
Web applications you should be following Web applications and using GoogleAuthorizationCodeFlow using System; using System.Web.Mvc; using Google.Apis.Auth.OAuth2; using Google.Apis.Auth.OAuth2.Flows; using Google.Apis.Auth.OAuth2.Mvc; using Google.Apis.Drive.v2; using Google.Apis.Util.Store; namespace Google.Apis.Sample.MVC4 { public class AppFlowMetadata : FlowMetadata { private static readonly IAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow(new GoogleAuthorizationCodeFlow.Initializer { ClientSecrets = new ClientSecrets { ClientId = "PUT_CLIENT_ID_HERE", ClientSecret = "PUT_CLIENT_SECRET_HERE" }, Scopes = new[] { DriveService.Scope.Drive }, DataStore = new FileDataStore("Drive.Api.Auth.Store") }); public override string GetUserId(Controller controller) { // In this sample we use the session to store the user identifiers. // That's not the best practice, because you should have a logic to identify // a user. You might want to use "OpenID Connect". // You can read more about the protocol in the following link: // https://developers.google.com/accounts/docs/OAuth2Login. var user = controller.Session["user"]; if (user == null) { user = Guid.NewGuid(); controller.Session["user"] = user; } return user.ToString(); } public override IAuthorizationCodeFlow Flow { get { return flow; } } } }
{ "pile_set_name": "StackExchange" }
Q: How do I use a Maven assembly in multiple projects? I've got a Maven assembly that I want to use in multiple projects. How do I reuse it without hardcoding the path? A: With the latest version of the assembly plugin (2.2-beta-2), you can use a shared descriptor. Define the descriptor in the src/main/resources/assemblies folder of a separate project and install or deploy it. In the projects that want to use the descriptor, define a dependency on the descriptor project in the assembly plugin configuration, then reference the assembly. Update: There is a special rule that checks the assemblies directory. So either /assemblies/myassembly.xml or just /myassembly.xml works as long as you're using the magic assemblies directory name. For other directory names the full path relative to the resources directory is needed. I'd wrongly warned that there is an error in the referenced documentation and that the reference path needs to match the relative path below src/main/resources, i.e. assemblies/myassembly.xml not assembly.xml. The project using the shared descriptor should have this configuration:

<build>
  ...
  <plugins>
    ...
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <version>2.2-beta-2</version>
      <!--declare plugin has a dependency on the descriptor project -->
      <dependencies>
        <dependency>
          <groupId>your.group.id</groupId>
          <artifactId>my-assembly-descriptor</artifactId>
          <version>1.0-SNAPSHOT</version>
        </dependency>
      </dependencies>
      <executions>
        <execution>
          <id>make-assembly</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
          <configuration>
            <!-- This is where we use our shared assembly descriptor -->
            <descriptors>
              <descriptor>assemblies/myassembly.xml</descriptor>
            </descriptors>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Q: When to use @Pointcut & @Before, @After AOP Annotations Am learning AOP concepts in Spring. I am now pretty aware of the usages of @Before and @After Annotations and started using it for Time capturing purpose. This is pretty much satisfying all my AOP related needs. Wondering what is that @pointcut annotation that every spring guide talks about ? Is that a redundant functionality ? or does it has separate needs ? A: In simple words whatever you specify inside @Before or @After is a pointcut expression. This can be extracted out into a separate method using @Pointcut annotation for better understanding, modularity and better control. For example @Pointcut("@annotation(org.springframework.web.bind.annotation.RequestMapping)") public void requestMapping() {} @Pointcut("within(blah.blah.controller.*) || within(blah.blah.aspect.*)") public void myController() {} @Around("requestMapping() && myController()") public Object logAround(ProceedingJoinPoint joinPoint) throws Throwable { ............... } As you can see instead of specifying the pointcut expressions inside @Around, you can separate it to two methods using @Pointcut.
Q: How to convert a string to list using python? I am working with the RC-522 RFID Reader for my project. I want to use it for paying a transportation fee. I am using Python and used the code in: https://github.com/mxgxw/MFRC522-python.git In the python script Read.py, sector 8 was read with the use of this code:

# Check if authenticated
if status == MIFAREReader.MI_OK:
    MIFAREReader.MFRC522_Read(8)   # <---- prints sector 8
    MIFAREReader.MFRC522_StopCrypto1()
else:
    print "Authentication error"

The output of this was: Sector 8 [100, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] So that last part (Sector 8 [100, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) is what I convert to a string. I want it to be a list, but I can't manage it. I tried to put it in a variable x and use x.split(), but the output when I execute print(x) is "None".

x = str(MIFAREReader.MFRC22_READ(8))
x = x.split()
print x  # PRINTS ['NONE']

I want it to be like this: DATA = [100, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] so that I can use sum(DATA) to check the balance, and I can access it using indexes like DATA[0]. Thanks a lot!! A: Follow these steps: Open MFRC522.py (the library file for the RFID reader): vi MFRC522.py Look for the function def MFRC522_Read(self, blockAddr) and add this line at the end of the function: return backData Save it. In the read() program, call it like:

DATA = (MIFAREReader.MFRC522_Read(8))
print 'DATA :', DATA

I hope this solves the problem.
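If changing the library is not an option, the printed dump itself can be parsed back into a list. A minimal Python 3 sketch; the parse_sector helper and the sample string below are illustrative, not part of the MFRC522 library:

```python
def parse_sector(line):
    """Parse a printed sector dump like 'Sector 8 [100, 0, ...]' into a list of ints."""
    start = line.index("[") + 1
    end = line.index("]")
    return [int(tok) for tok in line[start:end].split(",")]

printed = "Sector 8 [100, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]"
DATA = parse_sector(printed)
print(DATA[0])    # first byte of the block -> 100
print(sum(DATA))  # balance check -> 100
```

With DATA as a real list of ints, sum(DATA) and indexing like DATA[0] work as the question intends.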
Q: What's the best way to use CoffeeScript and Node.js together when writing a javascript library? I'm writing a javascript library and want to use the awesomeness that is CoffeeScript to keep the code clean while writing it, but I'd also like to use something like Node, mainly for its require feature. The idea is to namespace my sub-objects under a global object, with each of the sub-objects defined in its own file for ease of development. Maybe I am going about this the wrong way; I just need a clean way to write a client side javascript library with CoffeeScript. Thanks! Example file structure below...

./twtmore.coffee

twtmore =
  a: require('./twtmore/a.coffee').a
  b: require('./twtmore/b.coffee').b
  c: require('./twtmore/c.coffee').c

./twtmore/a.coffee

class a
  ...

exports.a = a

A: Yes, you are going about this the wrong way. Node.js is a server side technology. What you are looking for is using something like RequireJS or CommonJS modules (which is what Node uses) in CoffeeScript on the browser. There is a plugin for using CoffeeScript with RequireJS which seems to do what you want, but I have not used it and cannot vouch for it.
Q: gulp & bourbon @include font-face issue - Custom fonts not importing? I am using gulp for the first time. I have everything working how I would like it, but am stuck on one issue. I have a custom font family in a fonts folder, something like "assets/fonts/font-family/...." The issue I am having is that in a static project I would normally just use bourbon's: @include font-face("source-sans-pro", "/fonts/source-sans-pro/source-sans-pro-regular", $file-formats: eot woff2 woff); This would then allow me to use the family in a regular font-family declaration, easy peasy. However this does not work in my current gulp project. Here is the gulpfile I currently have: var gulp = require('gulp'); var browserSync = require('browser-sync'); var sass = require('gulp-sass'); var prefix = require('gulp-autoprefixer'); var cp = require('child_process'); var messages = { jekyllBuild: '<span style="color: grey">Running:</span> $ jekyll build' }; /** * Build the Jekyll Site */ gulp.task('jekyll-build', function (done) { browserSync.notify(messages.jekyllBuild); return cp.spawn('jekyll', ['build'], {stdio: 'inherit'}) .on('close', done); }); /** * Rebuild Jekyll & do page reload */ gulp.task('jekyll-rebuild', ['jekyll-build'], function () { browserSync.reload(); }); /** * Wait for jekyll-build, then launch the Server */ gulp.task('browser-sync', ['sass', 'jekyll-build'], function() { browserSync({ server: { baseDir: '_site' }, notify: false }); }); /** * Compile files from _scss into both _site/css (for live injecting) and site (for future jekyll builds) */ gulp.task('sass', function () { return gulp.src('assets/css/main.scss') .pipe(sass({ includePaths: ['css'], onError: browserSync.notify })) .pipe(prefix(['last 15 versions', '> 1%', 'ie 8', 'ie 7'], { cascade: true })) .pipe(gulp.dest('_site/assets/css')) .pipe(browserSync.reload({stream:true})) .pipe(gulp.dest('assets/css')); }); /** * Watch scss files for changes & recompile * Watch html/md files, run jekyll & reload 
BrowserSync */ gulp.task('watch', function () { gulp.watch('assets/css/**', ['sass']); gulp.watch(['index.html', '_layouts/*.html', '_includes/*.html'], ['jekyll-rebuild']); }); /** * Default task, running just `gulp` will compile the sass, * compile the jekyll site, launch BrowserSync & watch files. */ gulp.task('default', ['browser-sync', 'watch']); I am severely confused as to what I need to do next to get custom assets working for the bourbon include. Maybe this is because I installed bourbon in a normal fashion and gulp isn't handling bourbon? Any direction or critiques are much appreciated. A: What version of gulp-sass are you using? I had the same error on version 0.7.3, but upgrading to the current (2.0.4) version fixed this problem.
Q: Sarcastic empathy What is a single word that means sarcastic empathy? For example, when one says to you "I'm sorry. That's devastating." where devastating is obviously overly-dramatic or excessive to the point of removing the value of the preceding "I'm sorry." In a sentence, it would look like this: "You stubbed your toe? I'm sorry. That's devastating", she said with __________. The language itself is empathetic (the feeling that you understand and share another person's experiences and emotions), but its context makes it sarcastic (marked by or given to using irony in order to mock or convey contempt.) Maybe there are better ways to describe it, but that is partly why I am asking the question in the first place. The tone in the example is mocking, executed by the duplicitous (or ironic) nature of language having a meaning other than its face value of empathy. Hopefully, that helps to clarify the question and my example. Since @Chappo added the helpful example above, which includes "... she said with _________," I too will add a bit more context. The phrase "I'm sorry, that's devastating" was actually said to me and I wanted to reply with something to the effect of "Don't berate me," or "I don't need your belittlement," but neither seemed to capture the sentiment I was wanting to convey of disdain for their fake pity, or, as stated in the original question, their sarcastic empathy. I searched for synonyms of these and other words, without discovering a satisfying solution, which ultimately led me to post my question here. How would you reply? A: Are you intending to imply that the speaker is obviously joking in somewhat poor taste? If it is intended to be transparent, what about facetious? 
I had a hard time finding a single definition that encompassed the nuances, so here are a few: Excerpt from Collins dictionary: If you say that someone is being facetious, you are criticizing them because they are making humorous remarks or saying things that they do not mean in a situation where they ought to be serious. Dictionary.com definition: Not meant to be taken seriously or literally. Merriam-Webster: joking or jesting often inappropriately. Basically, it can mean broadly "an inappropriate joke", and is sometimes a form of sarcasm. Edit: if you are intending to reply to such a speaker, "don't be facetious" is a potentially appropriate response.
Q: rvest handling hidden text I don't see the data/text I am looking for when scraping a web page. I tried googling the issue without having any luck. I also tried using the xpath but I get {xml_nodeset (0)}

require(rvest)
url <- "https://www.nasdaq.com/market-activity/ipos"
IPOS <- read_html(url)
IPOS %>% xml_nodes("tbody") %>% xml_text()

Output: [1] "\n \n \n \n \n \n " I do not see any of the IPO data. The expected output should contain the table for the "Priced" IPOs: Symbol, Company Name, etc... A: It seems that the table data are loaded by scripts. You can use the RSelenium package to get them.

library(rvest)
library(RSelenium)

rD <- rsDriver(port = 1210L, browser = "firefox", check = FALSE)
remDr <- rD$client

url <- "https://www.nasdaq.com/market-activity/ipos"
remDr$navigate(url)

IPOS <- remDr$getPageSource()[[1]] %>%
  read_html() %>%
  html_table(fill = TRUE)
str(IPOS)
PRICED <- IPOS[[3]]
Q: What is meant by Scala's path-dependent types? I've heard that Scala has path-dependent types. It's something to do with inner classes, but what does this actually mean and why do I care? A: My favorite example:

case class Board(length: Int, height: Int) {
  case class Coordinate(x: Int, y: Int) {
    require(0 <= x && x < length && 0 <= y && y < height)
  }
  val occupied = scala.collection.mutable.Set[Coordinate]()
}

val b1 = Board(20, 20)
val b2 = Board(30, 30)
val c1 = b1.Coordinate(15, 15)
val c2 = b2.Coordinate(25, 25)
b1.occupied += c1
b2.occupied += c2
// Next line doesn't compile
b1.occupied += c2

So, the type of Coordinate is dependent on the instance of Board from which it was instantiated. There are all sorts of things that can be accomplished with this, giving a sort of type safety that is dependent on values and not types alone. This might sound like dependent types, but it is more limited. For example, the type of occupied is dependent on the value of Board. Above, the last line doesn't work because the type of c2 is b2.Coordinate, while occupied's type is Set[b1.Coordinate]. Note that one can use another identifier with the same type as b1, so it is not the identifier b1 that is associated with the type. For example, the following works:

val b3: b1.type = b1
val c3 = b3.Coordinate(10, 10)
b1.occupied += c3
Q: Weird while() conditions Can anyone help me understand the way the condition in the while() loop below is evaluated? Please find the code below:

int fun () {
    static int x = 5;
    x--;
    printf("x = %d\n", x);
    return x;
}

int main () {
    do {
        printf("Inside while\n");
    } while (1==1, fun());
    printf("Main ended\n");
    return 0;
}

Output:

Inside while
x = 4
Inside while
x = 3
Inside while
x = 2
Inside while
x = 1
Inside while
x = 0
Main ended

Also I have the below code, and the output surprises me:

int fun () {
    static int x = 5;
    x--;
    printf("x = %d\n", x);
    return x;
}

int main () {
    do {
        printf("Inside while\n");
    } while (fun(),1==1);
    printf("Main ended\n");
    return 0;
}

Output:

Inside while
x = 4
Inside while
x = 3
Inside while
x = 2
Inside while
x = 1
Inside while
x = 0
Inside while
x = -1
Inside while
x = -2
Inside while
x = -3
.
.
.
.
Inside while
x = -2890
Inside while
x = -2891
Inside while
x = -2892
Inside while
x = -2893
Inside wh
Timeout

In my understanding, the condition is checked from right to left. If 1==1 comes on the right, the condition is always true and the while never breaks. A: , is an operator that takes two operands and returns the second one. In the first case 1==1, fun() is equivalent to fun(), so the loop continues while fun() returns a non-zero number. In the second case, fun(),1==1 is always true, so the loop runs forever (hence the timeout).
Q: while loop never exiting even though variable updated I have created a nested class that extends AsyncTask to extract data from a database. When the extraction is completed, it sets a global variable ("notificationsExtracted") to true, to say that the process is complete. In the main calling code, I set up a while loop that waits for that global variable to become true before using the data that was extracted, i.e. while(!notificationsExtracted) {} ...now continue running other code... On all phones except one, this works perfectly, but the one phone (Conexis x2 - Android 7.0) refuses to reflect the global variable being set to true. When I do logging, it shows the flow of code instantiating the class, pulling the data works, setting the global variable to true, and then nothing. On other phones, it does the above, but then continues running further code in the main program. Briefly, I have the following in the calling program: public class FragmentMain extends Fragment { static private Boolean notificationsExtracted; ... @Override public View onCreateView(LayoutInflater inflater, ViewGroup parent, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.fragment_main_layout, parent, false); ... Log.i("notif", "1"); notificationsExtracted = false; GetNotificationsNewMySQL getNotificationsNewMySQL = new GetNotificationsNewMySQL(); getNotificationsNewMySQL.execute(""); while (!notificationsExtracted) { }; Log.i("notif", "2"); ... and then the nested class private class GetNotificationsNewMySQL extends AsyncTask<String, Void, String> { @Override protected void onPreExecute() { super.onPreExecute(); } @Override protected String doInBackground(String... 
params) { try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager.getConnection(url, user, pass); Statement st = con.createStatement(); String q = "SELECT COUNT(*) AS cnt FROM training.app_notifications"; Log.i("notif", "a"); ResultSet rs = st.executeQuery(q); Log.i("notif", "b"); ResultSetMetaData rsmd = rs.getMetaData(); Log.i("notif", "c"); while (rs.next()) { notificationCount = Integer.parseInt(rs.getString(1).toString()); } Log.i("notif", "d"); notificationsExtracted = true; } catch (Exception e) { Log.i("notif", "error extracting"); e.printStackTrace(); } if (notificationsExtracted) Log.i("notif", "true"); else Log.i("notif", "false"); return ""; } @Override protected void onPostExecute(String result) { } } Now on all phones except the one phone, it logs the following sequence of program flow 2019-11-15 11:18:40.338 1951-1951/com.example.pdcapp I/Notif: 1 2019-11-15 11:18:40.339 1951-1951/com.example.pdcapp I/Notif: 2 2019-11-15 11:18:40.438 1951-1951/com.example.pdcapp I/notif: 1 2019-11-15 11:18:40.512 1951-2025/com.example.pdcapp I/notif: a 2019-11-15 11:18:40.521 1951-2025/com.example.pdcapp I/notif: b 2019-11-15 11:18:40.521 1951-2025/com.example.pdcapp I/notif: c 2019-11-15 11:18:40.522 1951-2025/com.example.pdcapp I/notif: d 2019-11-15 11:18:40.522 1951-2025/com.example.pdcapp I/notif: true 2019-11-15 11:18:40.522 1951-1951/com.example.pdcapp I/notif: 2 except the one phone I get the following, with the result that the phone hangs, i.e. 
it never gets past the while loop: 2019-11-15 11:20:57.153 20089-20089/com.example.pdcapp I/Notif: 1 2019-11-15 11:20:57.154 20089-20089/com.example.pdcapp I/Notif: 2 2019-11-15 11:20:57.196 20089-20089/com.example.pdcapp I/notif: 1 2019-11-15 11:20:57.267 20089-20348/com.example.pdcapp I/notif: a 2019-11-15 11:20:57.274 20089-20348/com.example.pdcapp I/notif: b 2019-11-15 11:20:57.274 20089-20348/com.example.pdcapp I/notif: c 2019-11-15 11:20:57.274 20089-20348/com.example.pdcapp I/notif: d 2019-11-15 11:20:57.274 20089-20348/com.example.pdcapp I/notif: true I would appreciate help on this please. As reference, I previously looked for help at the following link: https://androidforums.com/threads/while-loop-not-exiting.1315976/ A: What you are doing goes against the principle of asynchronous execution. If you start an AsyncTask but then wait until it's finished, it's not really async. This is OK though; sometimes you simply need things to be done in a certain order but you can't do it on the main thread. I'm not exactly sure why your program doesn't work on a technical level, but I think I had a similar problem once. I believe that in Java, updating a variable from within one thread does not necessarily update the variable for all other threads too. Does Android Studio give you a warning on the while loop? I hope someone who has more experience with multi-threading in Java can say something about this. However, the solution to your problem is easy: you want whatever code comes after the while loop to execute after the AsyncTask has finished executing, so simply execute that code in the onPostExecute method.
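As a language-agnostic illustration of that advice: rather than spinning on a shared flag, let the background worker signal completion and run the follow-up code only after that signal. This Python sketch uses threading.Event as the signal; it is an illustration of the idea, not Android API code (in Java terms, done.set() plays the role of finishing doInBackground, and everything after done.wait() is what belongs in onPostExecute):

```python
import threading

done = threading.Event()
result = {}

def background_work():
    # Stand-in for doInBackground: produce the result, then signal completion.
    result["count"] = 42
    done.set()  # the safe equivalent of "notificationsExtracted = true"

worker = threading.Thread(target=background_work)
worker.start()

done.wait()  # blocks without busy-waiting until the worker signals
print(result["count"])  # -> 42
```

Because done.wait() returns only after done.set() has run, the result is guaranteed to be visible here; there is no loop to spin on and no flag-visibility problem.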
Q: Is it possible or how to extract message string from binarized list? Given message copied from Silvia's profile page, a binarized list could be found bellow. lst = Uncompress@ "1:eJztk8ENwzAIRUkHyA5ZqdfeMkC6/605Vl8Gg8EykUD6imMb8/hxjvP7vl5EtN/\ 63NpKpVIpqSgBg4XzKbwzvP6PJ/Baa2BvxMxzea1caV2al3zQnId7pHqS1yPsEkPPa8/\ Yk4c8MzmwjuSNtydp3DsHg/PMwov7o74/ntni4f6nSAbuXcPK8YzyYqy+\ 16u8tnrmYYnq07LOee3pKcJrzX3XrHPnjdaz/\ if47NVqsWl4tBrJmalsPJF9ZegNYzVPKZ9+c/YGIg=="; or output is lst // TableForm // TeXForm $\tiny{ \begin{array}{ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 
& 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \end{array}} $ at first, I tried appy Total and FromCharacterCode to pick up words. FromCharacterCode/@(Total/@lst) but got sth meaningless chars. 
{"[", "[", "[", "X", "V", "(", "9", "2", ";", "<", "2", "Y", "U", "["} So I gazed fixedly at the output list and found the message hidden in the layout of those 0s, and then an image was made for viewing by eye. lst /. {1 -> ""} // TableForm // Rasterize[#, ImageSize -> Large] & However, I cannot read the right chars from the image directly by eye, so I wonder: are there methods to solve it automatically from the binarized list? TextRecognize does not work in this case. If any mail address is extracted, please replace @ with AT to avoid disturbing the owner. Thanks! @Silvia, I feel lucky to have read your posts on snowflakes, boot logos and BSpline demos, all impressive. Thanks for leaving a compressed message on your profile; I'd like to ask this question on how to extract the message with Mathematica code. If it brings you trouble, please let me know and I shall revise it ASAP. A: One method might be to use ArrayPlot to construct an image, and then use character recognition to decode the message. (It's still not bot-readable, so I did not disguise it.) Edit: I think our 10^15 synapses are better at text recognition than my laptop: TextRecognize[image2] "(* wyemgm Em" *)
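For readers without Mathematica, the same render-the-zeros trick can be sketched in a few lines of Python. The helper below mirrors lst /. {1 -> ""}: it blanks out the 1s so only the 0s form glyphs. The sample grid is made up for illustration and is not the data from the question:

```python
def render_grid(grid, ink="#", blank=" "):
    """Render a 2D 0/1 list as text: 0 -> ink, 1 -> blank."""
    return "\n".join(
        "".join(ink if cell == 0 else blank for cell in row)
        for row in grid
    )

sample = [
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
]
print(render_grid(sample))  # the zeros form a letter "H"
```

Printing the real list this way produces the same human-readable picture the Rasterize call does, ready for a human eye or an OCR pass.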
Q: getting top post from Facebook for iPhone I have a very different requirement in my app. Using the app I need to get the top 25 most liked and most commented posts from Facebook. First let me ask the experts: does this seem to be possible today? As far as I have thought about and researched this, I am able to get the likes and comments for a page (of an organization or celebrity, using the Graph API). My approach was to make an admin panel and subscribe to most of the famous people, organizations, products etc., but this won't help. A: Not possible to do, I'm afraid. From what you are asking, you want to find the most popular posts across all of Facebook? These are most likely going to come from Pages, as these are public and can have millions of people liking / commenting on posts. Either way, this would be a complex and data-intensive task. You would have to scrape posts from all the top Facebook pages and see which ones have the most likes / comments.
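If you did have post data in hand (for example, fetched via the Graph API for Pages you administer, or scraped as the answer describes), ranking the top N by engagement is the easy part. A hedged Python sketch; the post dicts below are made-up stand-ins for whatever fields the API actually returns:

```python
def top_posts(posts, n=25):
    """Return the n posts with the highest combined like + comment counts."""
    return sorted(posts, key=lambda p: p["likes"] + p["comments"], reverse=True)[:n]

posts = [
    {"id": "a", "likes": 120, "comments": 30},
    {"id": "b", "likes": 500, "comments": 10},
    {"id": "c", "likes": 80, "comments": 400},
]
print([p["id"] for p in top_posts(posts, n=2)])  # -> ['b', 'c']
```

The hard problem the answer points out remains unchanged: collecting post data across all of Facebook in the first place, which the platform does not offer.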
Q: SAML nameID impersonation We are using the nameId from the SAML response (in email format) to identify and authorize the incoming user on our system. Could a different authenticated user not alter the SAML response during their redirection to contain a different known nameId, authorizing themselves as a different user on our system? A: Generally, no. The SAML response and/or the assertions contain a signature that would become invalid if the underlying XML (such as the value of the NameId attribute) were altered. The relying party verifies this signature prior to trusting the contents of the assertion. Of course, software is software and there could be bugs at either end, e.g. the system that generates SAML (the Identity Provider) could have a bug where it allows users to specify arbitrary usernames to be placed into the assertions before the signature is added, or the receiving system (the Relying Party) could have a bug where it fails to verify the signature. It should be noted that the signature can be found in multiple places in the SAML, and there's a category of bugs that allow XML signature wrapping attacks to defeat the intended validation. It is not recommended to "roll your own" signature validation. Rather, depend on one of the well-tested implementations (commercial or open source) that exist for most platforms. But these would be rather fundamental errors. The SAML specification itself is designed to protect against exactly the problem you describe. This is why you need to obtain the public key of your IdP (either via metadata or out of band) prior to receiving any SAML assertions. They sign with their private key and you verify with the associated public key.
Q: A bash riddle concerning stream redirection: `cat x > y <` I was given the following bash riddle with no additional information on the meaning of the variables used. cat x > y < I assumed x and y are files. In my bash this does not execute though (unexpected newline), so I tried something like this: ls *.txt >0; cat file1.txt > file2.txt <0; To my understanding this should put file1.txt into file2.txt and then the result of ls *.txt. It doesn't. It puts only file1.txt. And it's not a case of being overwritten, since the result of the following is the same: ls *.txt >0; cat file1.txt >> file2.txt <0; My question is: why is the redirection of standard input ignored? Why was < at the end incorrect, so that I had to place <0 explicitly? Shouldn't zero be assumed by default? Update: As pointed out, I've mistaken >0 for >&0. The question remains valid though. A: There isn't really any mystery here. cat x > y < You said there was "no additional information on the meaning of the variables used"; that's because there are no variables used. x and y are simply file names. And the above command is an error because < requires a file name argument. ls *.txt >0; cat file1.txt > file2.txt <0; The cat command reads input from stdin if it has no filename arguments (or if it's given - as an argument). Otherwise, it reads from the files named on its command line -- and ignores stdin. If you type cat file1.txt it's not going to try to read input from the keyboard. For exactly the same reason, if you type cat file1.txt < 0 it's not going to try to read from the file 0. Why was < at the end incorrect, so that I had to place <0 explicitly? Shouldn't zero be assumed by default? 0 in this context is simply a file name. There is no reason for 0 to be a default file name. If you want to redirect input from a file, you need to name the file; there is no default file name.
You may be confusing < 0, which redirects standard input from a file named 0 (there is nothing special about that file name) with <&0, which redirects standard input from file descriptor 0, which is standard input (so <&0 would have no effect).
Q: TRY...CATCH doesn't seem to work I have the following piece of code that's just to make sure that the temporary table doesn't exist. If the table exists I want to truncate it.

CREATE TABLE #LookupLinks(
    [SyncID] uniqueidentifier,
    [Name] nvarchar(50),
    [SQLTable] nvarchar(50)
) --I create this just to test my try-catch

BEGIN TRY
    CREATE TABLE #LookupLinks(
        [SyncID] uniqueidentifier,
        [Name] nvarchar(50),
        [SQLTable] nvarchar(50)
    )
END TRY
BEGIN CATCH
    PRINT N'#LookupLinks already existed and was truncated.';
    TRUNCATE TABLE #LookupLinks
END CATCH

What I want this to do: the temp table is created; an attempt to create it again throws an error that sends us into the catch; the table is truncated; and everything continues as normal. What happens: ERROR: There is already an object named '#LookupLinks' in the database. What am I doing wrong here? A: This is because SQL Server parses and validates the whole batch. So when parsing the second CREATE TABLE statement, it errors out saying: There is already an object named '#LookupLinks' in the database. See this example:

IF 1 = 1
BEGIN
    CREATE TABLE #temp(col INT)
END
ELSE
BEGIN
    CREATE TABLE #temp(col INT)
END

It produces an error saying: There is already an object named '#temp' in the database. The workaround is to use Dynamic SQL.

-- CREATE the table for testing
IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL
    DROP TABLE #LookupLinks

CREATE TABLE #LookupLinks(
    [SyncID] uniqueidentifier,
    [Name] nvarchar(50),
    [SQLTable] nvarchar(50)
)

-- Final query
IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL
BEGIN
    TRUNCATE TABLE #LookupLinks
    PRINT N'#LookupLinks already existed and was truncated.'
END
ELSE
BEGIN
    DECLARE @sql NVARCHAR(MAX) = ''
    SELECT @sql = '
    CREATE TABLE #LookupLinks(
        [SyncID] uniqueidentifier,
        [Name] nvarchar(50),
        [SQLTable] nvarchar(50)
    )'
    EXEC sp_executesql @sql
    PRINT N'#LookupLinks was created.'
END

If you do not have the first CREATE TABLE statement, your query will work just fine. Or if you put a GO before the BEGIN TRY.
IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL DROP TABLE #LookupLinks -- DROP FIRST CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) --I create this just to test my try-catch GO BEGIN TRY CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) END TRY BEGIN CATCH PRINT N'#LookupLinks already existed and was truncated.'; TRUNCATE TABLE #LookupLinks END CATCH Still, it's because SQL server parses and validates the whole batch. The GO statement will put the statements into their own batches, thus the error is now not happening. Even CeOnSql's answer will work fine. A: I think what you really want to achieve is this: IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL --Table already exists BEGIN TRUNCATE TABLE #LookupLinks PRINT N'#LookupLinks already existed and was truncated.'; END ELSE BEGIN CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) END
Q: UIButton color change I want to change a UIButton's color from brown to dark brown. How can I do that? myButton.backgroundColor = [UIColor brownColor]; Any ideas on how to change this brown to dark brown? Thanks for the help. A: If you just want a darker brown specifically, you can manually specify it: myButton.backgroundColor = [UIColor colorWithHue:1.0/12 saturation:2.0/3 brightness:4.0/10 alpha:1]; (The default brown color has brightness 6.0/10.) If you want to be able to darken a color in general, you can do it like this: UIColor *color = UIColor.brownColor; CGFloat hue, saturation, brightness, alpha; [color getHue:&hue saturation:&saturation brightness:&brightness alpha:&alpha]; brightness *= .8; UIColor *darkerColor = [UIColor colorWithHue:hue saturation:saturation brightness:brightness alpha:alpha];
Q: What is the difference between <s> and <del> in HTML, and do they affect website rankings? What is the difference between <s> and <del>? I have read here that there are semantic differences between <b> and <strong>, does this apply to <s> and <del>? Additionally, how are such semantic differences, if any, interpreted by search engines and what effect would they have on rankings? Are there any other tags that affect search rankings? A: <s> and <del> both still exist in the HTML specification. The <del> element represents a removal from the document. The <s> represents contents that are no longer accurate or no longer relevant. That is to say that they actually represent different things semantically. Specifically <del> would be used if you had an existing document and wanted to indicate text that was in the document, but has been removed. This would be different than text in a document that is no longer accurate, but that has not been removed (you could use <s> for that). You should not use or depend on either for styling even though most browsers do have them strike-through by default. You should only rely on CSS for presentation. Due to the mercurial nature of how search engines work, it's very difficult to say whether one tag or another will make a difference on how keywords are created and your content is indexed. You should focus on creating good content that is semantically correct, and your website rank will follow. A: After reading the existing answer, I was still left confused about which one to use in my app. We agree that presentationally they are the same, and the difference mostly comes down to semantics. Mozilla's MDN docs helped clarify the semantics for me. I think <s> is correct for most use cases, and here's why along with the four relevant options: <strike> It's clear that <strike> is deprecated, and therefore not correct to use anymore. <s> From <s> on MDN: The HTML Strikethrough Element (<s>) renders text with a strikethrough, or a line through it. 
Use the <s> element to represent things that are no longer relevant or no longer accurate. However, <s> is not appropriate when indicating document edits; for that, use the <del> and <ins> elements, as appropriate. <del> From <del> on MDN: The HTML Deleted Text Element (<del>) represents a range of text that has been deleted from a document. This element is often (but need not be) rendered with strike-through text. The biggest key to me on this page is that <del>, like <ins>, offers two additional (optional) attributes: cite and datetime which make sense in referring to a change in the context of a document. As a counterexample, if I were updating a restaurant menu, citing a source for a menu item being sold out doesn't seem relevant. No tag / use CSS If none of the above seem correct for your use case, remember that you can still define a custom class. .purchased { text-decoration: line-through; } <p>Wish list</p> <ul> <li class="purchased">New MacBook</li> <li>Cookies</li> </ul> A: There is no practical difference between del and s, except that the tag names are different. They have the same default rendering (overstruck text), and there is no evidence of any difference in processing by browsers or search engines, and no reason to expect such differences, since the “semantic” definitions are vague and authors don’t care much about them. There is no evidence of any action in search engines on these elements – they operate on text and usually ignore text-level markup, except possibly for some elements that might be regarded as giving their content greater relative weight within a page. The default, or “expected” default rendering is explicitly specified in the Rendering section of HTML5 CR: del, s, strike { text-decoration: line-through; } The theoretical difference is what HTML specifications and drafts say about their “meaning”, which varies from one HTML version to another. So you can use either element, or neither. 
Overstriking text is not such a good idea, since it easily makes some letters difficult to read. But if you need to overstrike (e.g., because an ad needs to contain an old price overstruck), it is perhaps safest to use strike, which is an honest presentational element. So you avoid even the theoretical possibility that some software could interpret s or del in some special way, based on someone’s reading of the HTML5 CR perhaps, possibly differing from your intentions, and thus possibly causing some rendering or processing that is not consistent with your reason for overstriking. (Historically, s and strike have been synonymous, but HTML5 CR makes an arbitrary distinction between them, making s “semantic” and strike presentational.)
Q: Increment content of pointer I am trying to increment a value in C and return the old value, and I am doing that using a pointer. The problem is that the new value is still 0 even though I am using a pointer. #include <stdio.h> #include <stdlib.h> int increment (int *mem) { int tmp; tmp = *mem; *mem++; return tmp; } int main () { int a = 0; printf("The old value of a is \t %d", increment(&a)); printf("The new value of a is \t %d", a); } Now when I run this method I get the same value for a, that is 0; I was expecting 1 in the second printf. I don't know what I am doing wrong here. A: Change this *mem++; to this (*mem)++; The problem lies in operator precedence. You may want to read about C operator precedence. So, what does your code do? *mem++ parses as *(mem++): the postfix ++ binds more tightly than the unary *, so it increments the pointer itself, then dereferences the old location and discards the result. The pointed-to value is never modified, which is why a stays 0. A: Maybe you missed some parentheses? #include <stdio.h> #include <stdlib.h> int increment (int *mem) { int tmp; tmp = *mem; (*mem)++; // problem was here. return tmp; } int main (){ int a = 0; printf("The old value of a is \t %d", increment(&a)); printf("The new value of a is \t %d", a); }
Q: How to handle jpa entity I have a table client, and to retrieve results I use this: public ClientParent getClient(Long clientId,Long parentId){ String queryString="SELECT cp FROM Client cp where cp.cid.id=:clientId " + "and cp.pid.id=:parentId "; Query query=entityManagerUtil.getQuery(queryString); query.setParameter("clientId", clientId); query.setParameter("parentId", parentId); return (ClientParent)query.getSingleResult(); } This is the DAO method. For getting a client, control first goes to the controller class, then to the service, and then to the DAO class. Now let's say that the client table is empty; in this case return (ClientParent)query.getSingleResult(); will throw an error. I can handle this by writing a try-catch block in the service class as well as in the controller class, but I wanted to know if I can do it without throwing any exception. I mean, do I have to change the query, or what should I return so that it will never throw an exception even if the table is empty? A: You can use the getResultList() method public ClientParent getClient(Long clientId,Long parentId){ String queryString="SELECT cp FROM Client cp where cp.cid.id=:clientId " + "and cp.pid.id=:parentId "; Query query=entityManagerUtil.getQuery(queryString); query.setParameter("clientId", clientId); query.setParameter("parentId", parentId); List<ClientParent> result = query.getResultList(); if (result != null && result.size() >0){ return result.get(0); } else { return null; } }
Q: Can modern compilers optimize constant expressions where the expression is derived from a function? It is my understanding that modern C++ compilers take shortcuts on things like: if(true) {do stuff} But how about something like: bool foo(){return true;} ... if(foo()) {do stuff} Or: class Functor { public: bool operator() () { return true;} }; ... Functor f; if(f()){do stuff} A: It depends on whether the compiler can see foo() in the same compilation unit. With optimization enabled, if foo() is in the same compilation unit as the callers, it will probably inline the call to foo() and then optimization is simplified to the same if (true) check as before. If you move foo() to a separate compilation unit, the inlining can no longer happen, so most compilers will no longer be able to optimize this code. (Link-time optimization can optimize across compilation units, but it's a lot less common--not all compilers support it and in general it's less effective.) A: I've just tried g++ 4.7.2 with -O3, and in both examples it optimizes out the call. Without -O, it doesn't.
Q: Android Sms Broadcast Receiver not firing despite priority I have read many similar questions that usually results in an answer about priority. The reason I don't think this is true in my case is that other SMS readers on my phone (like automation apps) receive the broadcasts just fine. I would like to post the process of what I'm doing currently and make triple sure that I'm not doing something wrong in my code that would cause this to fail. Thanks for any tips you can give! Note: I've tried with the highest integer priority, priority 99, 100, 0, or none set at all. They all don't work. AndroidManifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.hennessylabs.appname" > <uses-permission android:name="android.permission.CALL_PHONE" /> <uses-permission android:name="android.permission.READ_CONTACTS" /> <uses-permission android:name="android.permission.SEND_SMS" /> <uses-permission android:name="android.permission.READ_SMS" /> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:supportsRtl="true" android:theme="@style/AppTheme" > <activity android:name=".MainActivity" android:label="@string/app_name" android:theme="@style/AppTheme.NoActionBar" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <receiver android:name="com.hennessylabs.drivercompanion.ProcessTextMessage" android:exported="true"> <intent-filter android:priority="999" android:permission="android.permission.BROADCAST_SMS"> <action android:name="android.provider.Telephony.SMS_RECEIVED"></action> </intent-filter> </receiver> </application> </manifest> BroadcastReceiver: package com.hennessylabs.appname; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.os.Bundle; import 
android.provider.Telephony; import android.telephony.SmsManager; import android.telephony.SmsMessage; import android.util.Log; import android.widget.Toast; /** * Created by kyleC on 11/15/2015. */ public class ProcessTextMessage extends BroadcastReceiver { private static final String SMS_RECEIVED = "android.provider.Telephony.SMS_RECEIVED"; final SmsManager sms = SmsManager.getDefault(); @Override public void onReceive(Context context, Intent intent) { Toast.makeText(context, "Entered onReceive", Toast.LENGTH_LONG).show(); // Retrieves a map of extended data from the intent. final Bundle bundle = intent.getExtras(); try { if (bundle != null) { final Object[] pdusObj = (Object[]) bundle.get("pdus"); for (int i = 0; i < pdusObj.length; i++) { SmsMessage currentMessage = SmsMessage.createFromPdu((byte[]) pdusObj[i]); String phoneNumber = currentMessage.getDisplayOriginatingAddress(); String senderNum = phoneNumber; String message = currentMessage.getDisplayMessageBody(); Log.i("SmsReceiver", "senderNum: "+ senderNum + "; message: " + message); // Show alert int duration = Toast.LENGTH_LONG; Toast toast = Toast.makeText(context, "senderNum: " + senderNum + ", message: " + message, duration); toast.show(); } // end for loop } // bundle is null } catch (Exception e) { Log.e("SmsReceiver", "Exception smsReceiver" +e); } } } Expected Result: The expected result is that when an SMS arrives, the screen would first show a Toast that it entered the OnReceive method. Then it would log and Toast the contents of the SMS. Actual Result: Nothing from the expected result happens. In fact even while connected to USB and in debug mode it never seems to enter that class at all. So maybe I have my manifest set up wrong? A: Provided everything else is correct, it looks like you're just missing the RECEIVE_SMS permission. <uses-permission android:name="android.permission.RECEIVE_SMS" />
Q: Ant target calling I would like to call target backup.yes only if the condition is true. <condition property="directory.found.yes"> <equals arg1="${directory.found}" arg2="true"/> </condition> <antcall target="update.backup"/> Is there any way to do this? A: Instead of <antcall/>, do the following: Imagine you're calling target foo, and you want to do a backup before, but only if that condition exists: <target name="foo" depends="update.backup"> <..../> </target> <target name="update.backup.test"> <condition property="directory.found.yes"> <equals arg1="${directory.found}" arg2="true"/> </condition> </target> <target name="update.backup" depends="update.backup.test" if="directory.found.yes"> <.../> </target> The problem with <antcall/> is that it is used when the dependency matrix Ant uses is broken, and it's used to force a task to be done before another task is complete. When really abused, you'll end up calling the same task multiple times. I had a project here that literally called each target between 10 to 14 times, and there were over two dozen targets. I rewrote the entire build sans <antcall/> and by using true dependency setup, cut the build time by 75%. From my experience 90% of <antcall/> is due to poor target dependency management. Let's say you want to execute target foo (the target the user really wants to execute), and before foo is called, you want to do your backup, but only if the directory actually exists. In the above, foo is called. It depends upon update.backup. The target update.backup is called, but it depends upon update.backup.test which will test whether or not the directory actually exists. If the directory exists, the if clause on the update.backup task is true, and the task will actually execute. Otherwise, if the directory isn't there, it won't execute. Note that update.backup first runs its dependencies before it evaluates the property named in its if or unless attribute. 
This allows the target to call a test before it attempts to execute. This is not a mere side effect, but built into the design of Ant. In fact, the Ant Manual on Targets (http://ant.apache.org/manual/targets.html) specifically gives a very similar example: <target name="myTarget" depends="myTarget.check" if="myTarget.run"> <echo>Files foo.txt and bar.txt are present.</echo> </target> <target name="myTarget.check"> <condition property="myTarget.run"> <and> <available file="foo.txt"/> <available file="bar.txt"/> </and> </condition> </target> And states: Important: the if and unless attributes only enable or disable the target to which they are attached. They do not control whether or not targets that a conditional target depends upon get executed. In fact, they do not even get evaluated until the target is about to be executed, and all its predecessors have already run. A: You can do the following. In the other target: <antcall target="update.backup"> <param name="ok" value="${directory.found.yes}"/> </antcall> And in the update.backup target: <target name="update.backup" if="ok"> But I think you can also do the following using the if statement from ant-contrib: <if> <equals arg1="${directory.found.yes}" arg2="true" /> <then> <antcall target="update.backup" /> </then> </if>
Q: How can I fill a vertex with black bars? Suppose I have a simple graph drawing like the following. \documentclass[12pt,a4paper]{article} \usepackage{tkz-graph} \begin{document} \begin{figure} \begin{tikzpicture} \SetUpEdge[lw = 1.5pt, color = black, labelcolor = white] \GraphInit[vstyle=Classic] \tikzset{VertexStyle/.append style = {minimum size = 8pt, inner sep = 0pt}} \Vertices[unit=2]{circle}{a,b,c,d,e,f} % It's easy to change the color of a vertex! \AddVertexColor{white}{a,d} \Edges(a,b,c,d,e,f,a) \end{tikzpicture} \end{figure} \end{document} From the documentation, I could figure out how to change the color of a vertex, as is done in the code. However, I'd like to use only black and white to represent several colors. For example, could I have a vertex that is filled with black bars? If so, how? This could represent "blue", while a vertex filled with black is "green", and a vertex filled with white is "red". A: Load the patterns library, and add pattern=<style> to the definition of VertexStyle, where <style> can be, for example, horizontal lines, vertical lines, north east lines, north west lines, etc. Code \documentclass[12pt,a4paper]{article} \usepackage{tkz-graph} \usetikzlibrary{patterns} \begin{document} \begin{figure} \begin{tikzpicture} \SetUpEdge[lw = 1.5pt, color = black, labelcolor = white] \GraphInit[vstyle=Classic] \tikzset{VertexStyle/.append style = {minimum size = 8pt, inner sep = 0pt, pattern=north east lines}} \Vertices[unit=2]{circle}{a,b,c,d,e,f} % It's easy to change the color of a vertex! \AddVertexColor{white}{a,d} \Edges(a,b,c,d,e,f,a) \end{tikzpicture} \end{figure} \end{document} Output
Q: Using Flutter Firestore plug in, do something for each sub collection in a document To help me better understand Flutter and Firebase, I'm making a list sharing app. I'm working on the list home screen that will show a reorderable list tile view with a tile for each of the users' lists; I have not begun to work on what's inside of these lists. I have Firestore set up so that each list is a sub collection, and now I want to create a tile for each sub collection in that user's document (a list of that user's lists). I'm having a tough time telling Flutter to do something for each sub collection without using a specific sub collection's name. I wanted to pass in a list title to each tile by making a collection reference for each sub collection (list) and calling .id on each one, using the collection ID as the title. I'm not yet sure if I can do this or if I'll have to make the list title a field inside of each list. Either way, I need to find out how to do something for each subcollection inside a particular document. .forEach seems to only work on document fields, not subcollections? What am I doing wrong? I'm sure there is a better way to go about this. I have not included any code as this is a big picture kind of question. A: There is no method in the Firestore client-side SDKs to get all collections (or subcollections under a specific document). Such an API does exist in the server-side SDKs, but not in the client-side SDKs. See: Fetching all collections in Firestore How to get all of the collection ids from document on Firestore? So you'll need to know the collections already, typically by changing your data model. For example by creating a document for each list's metadata, and then storing the list items in a subcollection with a known name under that document. That way you can get all lists by querying for the documents, which is possible within the API.
Q: Time count based on condition I've been trying hard to figure this one out; see if anyone could give me some directions, please. I have a worksheet in which I put the activities performed through the day (column W), so I have a report for every day. Each activity has a type defined by a letter listed in a drop down menu (column U), for example: 'R' for 'Reporting', 'M' for 'Meeting' etc, and a duration in hours (HH:mm) (column S). What I want my worksheet to do is to sum all the durations of the activities based on their type and store the results in a different cell. In the image example attached, I'd have to have 3:30 h for Meeting, 4:30 for Reporting and 2 h for Drawing. What the following SumIf code does is to sum all the hours, but it is not accounting for the duration, since the duration is the difference between the time in the next row and the time of the row in which the conditional argument is. Sub test_count_hours() Range("E35").Value = WorksheetFunction.SumIf(Range("U8:U37"), "R", Range("S8:S37")) End Sub So my struggle is to make the code account for the duration of the events. Example of a filled Worksheet A: How about storing your activity types in an array? 
You can loop through each activity and then loop through each row to check the activity type, adding the time between that and the next row if it's a match. Sub sumHours() Dim activityList As Variant Dim activity As Variant Dim rowCount As Integer Dim countHours As Date Dim i As Long activityList = Array("M", "R", "Dv") Range("U5").Select Range(Selection, Selection.End(xlDown)).Select rowCount = Selection.Rows.Count For Each activity In activityList countHours = 0 For i = 5 To rowCount + 5 If Range("U" & i) = activity Then countHours = countHours + Range("S" & i + 1) - Range("S" & i) End If Next i Select Case activity Case Is = "M" Range("E35") = countHours Case Is = "R" Range("E36") = countHours Case Is = "Dv" Range("E37") = countHours End Select Next activity End Sub (Make sure the cells that will hold the time values are formatted with the "Time" data type.)
Q: Is it possible to forward mail in phpBB? I want to forward all mails sent to the administrator account to another e-mail address. Is this possible? A: phpBB does not handle incoming mail - your MTA does (postfix, sendmail, exim, etc.). You'll need to configure the forwarding in whatever MTA you have running.
Q: How to transform an array of objects into another array of objects? I have an array of objects like this: const arr = [ { date: "12-09-2018", text: "something", type: "free", id: "dsadsadada" }, { date: "12-09-2018", text: "something1", type: "premium", id: "fdss4a4654" } ] and I would like to transform this array into this one: const arr2 = [ { date: "12-09-2018", data: [ { type: "free", id: "dsadsadada", text: "something" }, { type: "premium", id: "fdss4a4654", text: "something1" } ] } ] So in this case for each day I will have an array of data. What is the best approach? Thank you :) A: Use reduce. const arr = [ { date: "12-09-2018", text: "something", type: "free", id: "dsadsadada" }, { date: "12-09-2018", text: "something1", type: "premium", id: "fdss4a4654" } ]; const output = Object.values(arr.reduce((accu, {date, ...rest}) => { if(!accu[date]) { accu[date] = {date, data: []}; } accu[date].data.push(rest); return accu; }, {})); console.log(output);
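For readers less comfortable with reduce, the same grouping can be written as an explicit loop over a lookup object; this sketch (with an invented helper name) produces the arr2 shape asked for:

```javascript
const arr = [
  { date: "12-09-2018", text: "something",  type: "free",    id: "dsadsadada" },
  { date: "12-09-2018", text: "something1", type: "premium", id: "fdss4a4654" }
];

function groupByDate(items) {
  const byDate = {};                 // date -> { date, data: [] }
  for (const { date, ...rest } of items) {
    if (!byDate[date]) {
      byDate[date] = { date, data: [] };
    }
    byDate[date].data.push(rest);    // everything except the date key
  }
  return Object.values(byDate);      // drop the lookup keys, keep the groups
}

const arr2 = groupByDate(arr);
console.log(JSON.stringify(arr2, null, 2));
```

Both versions are O(n); the reduce form is just the same loop expressed as a fold.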
Q: How to get Query Plan Reuse in MS SQL Server I inherited a database application that has a table with about 450 queries. There's a calling procedure that takes the @QueryId and @TheId as input parameters. The only way these queries are executed is via this procedure. The queries are like this: @sql = replace('insert into #temp select col1, col2, col3, col4 from SomeTable st join OtherTable ot on matching_column where st.TheID = ##TheId##', '##TheId##', @TheId); exec sp_executesql @sql; I want to get plan reuse, so I replace ##TheId## with @TheId and then execute the query like this: exec sp_executesql @sql, N'@TheId int', @TheId; However, I'm still seeing the same behavior where each plan is a unique plan, even though the @sql string is already compiled and in the procedure cache. Now the string is like this: ...where st.TheID = @TheId Question: how can I get plan reuse as desired on a parameterized query? A: Well, if you modify it to the following, you should get plan reuse, as this will make it a parameterized query: @sql = replace('insert into #temp select col1, col2, col3, col4 from SomeTable st join OtherTable ot on matching_column where st.TheID = ##TheId##', '##TheId##', '@TheId'); exec sp_executesql @sql, N'@TheID INT', @TheID; https://technet.microsoft.com/en-us/library/ms175580(v=sql.105).aspx
Q: Javascript prototype example - changing the prototype of standard JavaScript objects <!DOCTYPE html> <html> <body> <p id="demo"></p> <script> Date.prototype.name="not good"; var d=new Date(); document.getElementById("demo").innerHTML =d.name; </script> </body> </html> Result: not good In the above example the name field is being added to the Date object using the prototype functionality. What could be the drawbacks of doing this? Will the changes made here be reflected permanently? According to w3schools I got this note: Only modify your own prototypes. Never modify the prototypes of standard JavaScript objects. But sometimes isn't it convenient?
For example, jQuery chooses to create a container wrapper (its own object) that contains references to DOM objects and is the container object for its own methods.
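To illustrate the Object.defineProperty() point above on one's own prototype rather than a built-in (the Book type here is invented for the example), a non-enumerable addition stays invisible to code that iterates properties:

```javascript
function Book(title) {
  this.title = title;
}

// Added with defineProperty and enumerable: false, so the
// method does not show up when existing code iterates keys.
Object.defineProperty(Book.prototype, "describe", {
  value: function () { return "Book: " + this.title; },
  enumerable: false,
  writable: true,
  configurable: true
});

// A naive assignment, by contrast, creates an enumerable property.
Book.prototype.naiveTag = "added";

const b = new Book("JS");
const seen = [];
for (const key in b) {
  seen.push(key);           // collects "title" and "naiveTag", not "describe"
}
console.log(seen);          // ["title", "naiveTag"]
console.log(b.describe());  // "Book: JS"
```

The same mechanism is why adding enumerable properties to built-ins can break unrelated for...in loops elsewhere in a codebase.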
Q: Show/Hide div in DataList with jQuery and ASP I made a project with ASP, but something is not working. I am trying to show/hide a div which is inside a DataList, but unfortunately it works only for the first element; for the other elements, the div that I want to hide appears. Here is my code: <script type="text/javascript"> $(function () { $("#hiden").hide(); $("#showddiv").on("click", function () { $("#hiden").toggle(); }); }); </script> <div id="mainReferences"> <asp:DataList ID="DataList1" runat="server" CellPadding="4" ForeColor="#333333"> <AlternatingItemStyle BackColor="#2E2E2E" /> <FooterStyle BackColor="#507CD1" Font-Bold="True" ForeColor="White" /> <HeaderStyle BackColor="#507CD1" Font-Bold="True" ForeColor="White" /> <ItemStyle BackColor="#151515" /> <ItemTemplate> <table cellspacing="20"> <tr> <td><a href="#" id="showddiv" class="fontText" title="drop the div down"><img src='<%# Eval("Mainfoto") %>' width="320px" height="290px" /> </a></td> <td width="400px"> <asp:Label ID="Label1" class="FontText" Font-Bold="true" runat="server" Text="Përshkrimi:"></asp:Label><br /> <asp:Label ID="Label2" width="400px" class="FontText" Font-Size="Large" runat="server" Text='<%# Eval("pershkrimi") %>' ></asp:Label></td> </tr> </table> <div id="hiden" class="categorry"> </div> </ItemTemplate> <SelectedItemStyle BackColor="#D1DDF1" Font-Bold="True" ForeColor="#333333" /> </asp:DataList> A: You're re-using id values in your HTML. This is invalid markup and will likely lead to undefined behavior (probably different by browser as well). Notice this element: <div id="hiden" class="categorry"> Since this is essentially inside a loop (repeater, datalist, etc.) it's going to render multiple times to the page. Instead of an id, use a class: <div class="hiden categorry"> Then just change your jQuery selector: $('.hiden') Of course, now you also need to specifically identify which element you want to toggle. 
You can do this by traversing the DOM a little bit from the clicked element. Something like this: $(this).closest('div').find('.hiden').toggle(); This is an example, since I don't know the rendered markup resulting from your server-side code. Essentially the selector in .closest() should refer to whatever parent element wraps that particular datalist item in the markup. This basically looks for: The element which was clicked -> a common parent between it and the element you want to toggle -> the element you want to toggle. (Naturally, this same fix will need to be applied anywhere else you're duplicating id values, which you do a couple of times in your code.) ids have to be unique in the DOM. classes can be re-used.
Q: Self Joining between 2 same tables

For instance, I have a table called employees which consists of "Employee ID", "First Name", "Last Name", "Manager ID". To count the subordinates of each manager, I tried to self-join the table:

SELECT e1.first_name, e1.last_name, COUNT(e1.employee_id)
FROM employee e1
INNER JOIN e2 ON e1.employee_id = e2.manager_id
GROUP BY e1.first_name, e1.last_name

Am I right? Also, if I want to join with other tables after self-joining, is the joining statement right?

FROM ((self-joining) INNER JOIN other tables ON "common column")

Combining the first and last name:

SELECT CONCAT(e1.first_name,' ',e1.last_name) "Full Name", COUNT(e1.employee_id)
FROM employee e1
INNER JOIN e2 ON e1.employee_id = e2.manager_id
GROUP BY "Full Name"

I can't compile this. What is wrong?

A: An answer for your first question is just a minor tweak to your query:

SELECT e1.firstname AS managerFirstName,
       e1.lastname AS managerLastName,
       COUNT(e1.employeeid)
FROM employees e1
INNER JOIN employees e2 ON e1.employeeId = e2.managerId
GROUP BY e1.firstname, e1.lastname;

I really haven't changed much here - and you can see that it works at: http://sqlfiddle.com/#!9/187477/1

The essential change is sticking with the names coming from the same table reference (e1) and GROUPing BY the same fields. You also need to (as commented) indicate the table name before aliasing with "e2". (Note that the aliasing of the names is just to help indicate that these are managers; it's not an essential part of the query. Also, I used slightly different field names, but the logic is the same.)

As to your second question, I'd do it using the self-join query as a sub-query, more or less as you suggest. Try something out - you're essentially at a solution.

EDIT IN RESPONSE TO QUESTION EDIT:

Adding the concatenation in (note that Oracle's CONCAT only accepts two arguments, so nesting the calls is one possible workaround - there's more info at this answer: Oracle SQL, concatenate multiple columns + add text):

SELECT CONCAT(CONCAT(e1.firstname, ' '), e1.lastname) AS managerName,
       COUNT(e1.employeeid)
FROM employees e1
INNER JOIN employees e2 ON e1.employeeId = e2.managerId
GROUP BY e1.firstname, e1.lastname;

still works: http://sqlfiddle.com/#!4/b0cbcd/4
{ "pile_set_name": "StackExchange" }
Q: How to pass a type into func

Here is the situation:

interface A{}                    //interface for entities
class B implements A{}           //entity B
class C implements A{}           //entity C
abstract class AR<E extends A>{} //generates a list of E via a file; this is done after the constructor
class BR extends AR<B>{}
class CR extends AR<C>{}

Now I want to maintain:

Map<String,List<A>> data; //<filepath+filename, list of entities in file>

The function below should return the list based on the fileAddr:

<E extends A,R extends AR<E>> List<A> getList(String fileAddr) {
    if(data.containsKey(fileAddr))
        return data.get(fileAddr);
    else {
        AR<E> reader=new R(fileAddr); //generate a list of E via this file
        List<E> values=reader.getValues();
        data.put(fileAddr,values);
        return values;
    }
}

But this doesn't work, and new R(fileAddr) is not supported. So how do I implement the getList() function based on the spec above?

A: You cannot use new to generate an instance of a type parameter. The technical reason is that the type information has been erased by the time that getList is executed. Anyway, the Java language does not permit it.

One way around the problem is to pass in a factory object that can create the required instance. Another way around it is to pass the parameter type's Class object as an argument, and then create the instance reflectively; e.g. something like this:

<E extends A,R extends AR<E>> List<A> getList(String fileAddr, Class<R> rClass) {
    if(data.containsKey(fileAddr))
        return data.get(fileAddr);
    else {
        Constructor<R> ctor = rClass.getConstructor(String.class);
        R reader = ctor.newInstance(fileAddr);
        List<E> values = reader.getValues();
        data.put(fileAddr, values);
        return values;
    }
}

(There will be a bunch of exceptions to deal with ...)
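Here is a self-contained, compilable sketch of the Class<R> approach. The stub entity and reader classes are invented for illustration (a real getValues() would parse the file at fileAddr), and the cache is typed with a wildcard so the put/get lines type-check:

```java
import java.lang.reflect.Constructor;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Main {
    interface A { String name(); }

    static class B implements A {
        public String name() { return "B"; }
    }

    // A reader that would parse a file address into entities (stubbed out here).
    static abstract class AR<E extends A> {
        final String fileAddr;
        AR(String fileAddr) { this.fileAddr = fileAddr; }
        abstract List<E> getValues();
    }

    public static class BR extends AR<B> {
        public BR(String fileAddr) { super(fileAddr); }
        List<B> getValues() {
            List<B> out = new ArrayList<>();
            out.add(new B());   // pretend this was read from the file
            return out;
        }
    }

    static Map<String, List<? extends A>> data = new HashMap<>();

    // Instead of "new R(fileAddr)" (illegal: type parameters are erased),
    // pass the Class object and instantiate reflectively.
    static <E extends A, R extends AR<E>> List<E> getList(String fileAddr, Class<R> rClass)
            throws Exception {
        if (data.containsKey(fileAddr)) {
            @SuppressWarnings("unchecked")
            List<E> cached = (List<E>) data.get(fileAddr);
            return cached;
        }
        Constructor<R> ctor = rClass.getConstructor(String.class);
        R reader = ctor.newInstance(fileAddr);
        List<E> values = reader.getValues();
        data.put(fileAddr, values);
        return values;
    }

    public static boolean run() {
        try {
            List<B> first  = Main.<B, BR>getList("b.txt", BR.class);
            List<B> second = Main.<B, BR>getList("b.txt", BR.class); // served from the cache
            return first.size() == 1 && first == second;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // true
    }
}
```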
{ "pile_set_name": "StackExchange" }
Q: Get current directory and run a file in VBScript?

I am trying to see if IIS is installed, and to display a message and run a downloaded exe to install IIS if it isn't. However, I am having a hard time running a file without specifying the full path in the VBScript. The path will be dynamic, so it's impossible to specify any directory other than "%cd%". My code:

If WScript.Arguments.length =0 Then
    Set objShell = CreateObject("Shell.Application")
    objShell.ShellExecute "wscript.exe", Chr(34) & _
    WScript.ScriptFullName & Chr(34) & " uac", "", "runas", 1
Else
    Dim intCounter, strSubkey
    Const HKEY_LOCAL_MACHINE = &H80000002
    strComputer = "."
    Set objReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\" _
    & strComputer & "\root\default:StdRegProv")
    strKeyPath = "SOFTWARE\Microsoft"
    objReg.EnumKey HKEY_LOCAL_MACHINE, strKeyPath, arrSubKeys
    intCounter=0
    For Each subkey In arrSubKeys
        If subkey="InetStp" Then
            intCounter=1 or strSubkey=subkey
        End If
    Next
    currentDirectory = left(WScript.ScriptFullName, Len(WScript.ScriptFullName))-(len(WScript.ScriptName)))
    if intCounter=0 then
        Set WSHShell = CreateObject("Wscript.Shell")
        WSHShell.Run ("\currentDirectory\noiisinstalled.exe")
    Elseif intCounter=1 then
        Wscript.Echo "IIS is Already installed - " & strSubkey
    End If
End if

My problem is running the noiisinstalled.exe file. Whatever I try, the script cannot find the file.

A: You can get the current directory using the Scripting.FileSystemObject, i.e.

dim fso: set fso = CreateObject("Scripting.FileSystemObject")
' directory in which this script is currently running
CurrentDirectory = fso.GetAbsolutePathName(".")

To use this to build a new path, you can use the BuildPath() function:

NewPath = fso.BuildPath(CurrentDirectory, "noiisinstalled.exe")
{ "pile_set_name": "StackExchange" }
Q: Should arguments.slice() work in ES5?

I'm watching Crockford on Javascript - Act III: Function the Ultimate at around 41 mins 26 seconds. The code on his screen uses arguments.slice() in a way that causes an error for me.

function curry(func){
    var args = arguments.slice(1);
    ...
}

He explains it like this:

I'll first get an array of arguments, except the first one, because the first one is a function and I don't need that one. In this case I'm assuming I'm on ES5, so I'm not doing the awful Array.prototype.apply() trick.

The problem is that running arguments.slice() results in this error:

Uncaught TypeError: arguments.slice is not a function

I'm testing on modern browsers that definitely have ES5! The only way I can get the code to work is if I use some "awful" tricks (as he calls them), e.g. Array.prototype.slice.apply(arguments, [1]) or [].slice.call(arguments, 1);.

Is he just mistaken? Does his slide have a typo in it? Why doesn't arguments.slice() work in my ES5 browsers?

A: Quoting TC39 member Allen Wirfs-Brock:

Until very late in the development of ECMAScript 5, argument object were going to inherit all of the Array.prototype methods. But the "final draft" of ES5 approved by TC39 in Sept. 2009 did not have this feature.

Making the arguments object inherit from the Array prototype was actually planned, but when put in practice it broke the web. Hence it was removed from the final revision before official publication.

Nowadays, with ECMAScript 2015 (a.k.a. ES6) standardized, the best approach is to use rest parameters:

function curry(func, ...args) {
    // ...
}

Which is equivalent to ES5:

function curry(func) {
    var args = [].slice.call(arguments, 1);
    // ...
}

This feature is already natively available in Firefox and Edge, and available everywhere if you use a JavaScript compiler such as Babel.
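For completeness, here is a runnable sketch of both spellings - the ES2015 rest-parameter version and the ES5 slice.call() version. The curry body is a generic illustration, not Crockford's exact slide code:

```javascript
// ES2015: rest parameters collect the trailing arguments into a real array,
// so no slice tricks are needed at all.
function curry(func, ...args) {
  return function (...rest) {
    return func.apply(null, args.concat(rest));
  };
}

// ES5 equivalent, using the "awful trick" the question mentions.
function curryES5(func) {
  var args = Array.prototype.slice.call(arguments, 1);
  return function () {
    return func.apply(null, args.concat(Array.prototype.slice.call(arguments)));
  };
}

function add3(a, b, c) { return a + b + c; }

var add1and2 = curry(add3, 1, 2);
console.log(add1and2(3));             // 6
console.log(curryES5(add3, 1)(2, 3)); // 6
```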
{ "pile_set_name": "StackExchange" }
Q: Animation of files moving from one folder to the other is what?

Consider the animation that appears when you drag and drop a bunch of files from one folder to another and you can see the files moving from the first folder to the second. Is that an example of any of these: visibility, affordance, mapping, consistency? Why, why not? Thank you in advance.

A: I'll discuss each term separately:

Visibility

I haven't heard this term before; I'm guessing it comes from David Hogue? This is his definition:

Good visibility, according to Hogue's principles, means that obvious prompts and cues are present, which:
- Lead the user through an interaction.
- Guide them through a series of tasks.
- Indicate what possible actions are available to them.
- Communicate the context of the situation.

Out of all of these, the animation meets the last point, as it communicates that the files are being moved, where the files came from and where they're going.

Affordance

Depends on your definition. Don Norman defines them as "perceivable action possibilities". The animation doesn't meet this definition, as it doesn't communicate what actions you may take.

Mapping

Depends on what definition you're using. Mapping is often used in the context of making diagrams, which isn't the case here. There's also the case of mental maps, where you try to come close to how a user understands concepts. You could argue that you're mapping the move action to the physical movement of objects, which is illustrated by this move animation.

Consistency

This really depends on the context you're designing in. If the move animation is commonly used on the platform, in other projects, or at least within your own project, then it's an example of consistency.
{ "pile_set_name": "StackExchange" }
Q: Stop flash content from loading - AS2

I have a flash file that I want to embed on a webpage; however, I want it to load only when the user clicks on it (which will run the preloader) - like clicking play on a YouTube video. It's one flash file that loads in XML data and is graphically heavy. I'm not sure if the only way to do it is to load the flash file through another swf, i.e., flash container -> click flash container to load flash file with preloader. Any suggestions?

A: My first impulse would be to use an image or explicitly-sized DOM element as a placeholder and some Javascript to swap in the Flash applet on-click. That'd be guaranteed to solve your loading issue as well as deferring the overhead of starting the flash player. (Which would make users who think like me but don't know about FlashBlock happy.)

With jQuery, it'd just be something like:

$('#flash_placeholder').click(function() {
    $(this).replaceWith(flash_applet_markup);
});

...and on-hover behaviour with something like opacity animation is also trivial.
{ "pile_set_name": "StackExchange" }
Q: How can I cumulatively add or subtract values based on another column values using Pandas?

I have a dataframe below that shows voltage output based on seconds. The v_out value is based on a displacement of either +/- 0.05 centimeters. So when v_out gets more positive, there is positive displacement compared to the last v_out value. When v_out gets more negative, the displacement is going in the - direction. I have the initial df and I want to add a sign column that tells whether the v_out is positive or negative based on the previous v_out value. And I want a cumulative column that keeps a running total of the sign column.

Initial df

   secs     v_out
0   0.0 -1.179100
1  15.0 -1.179100
2  18.0 -1.179200
3  33.0 -1.181800
4  48.0  0.029461

What I want

   secs     v_out  sign  cumul
0   0.0 -1.179100  0.00   0.00
1  15.0 -1.179100  0.00   0.00
2  18.0 -1.179200 -0.05  -0.05
3  33.0 -1.181800 -0.05  -0.10
4  48.0  0.029461  0.05  -0.05

A: The method to look at a lagged value is named shift, and then we inspect whether the difference is positive, negative, or zero with an if-else construct. So, first we construct the column sign. The logic can be packed into one line:

df['sign'] = (df.v_out - df.v_out.shift()).apply(lambda x: 0.05 if x > 0 else -0.05 if x < 0 else 0)

(By the way, I'd often write this code interactively, but would recommend splitting it over a few lines so the logic is easier to follow if it's going to be saved in a file.)

Then, the cumul column can be made through the cumsum method:

df['cumul'] = df.sign.cumsum()

The final df looks like this:

   secs     v_out  sign  cumul
0   0.0 -1.179100  0.00   0.00
1  15.0 -1.179100  0.00   0.00
2  18.0 -1.179200 -0.05  -0.05
3  33.0 -1.181800 -0.05  -0.10
4  48.0  0.029461  0.05  -0.05
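Putting the answer's two lines together with the question's data gives a runnable check (the frame is reconstructed from the values shown in the question):

```python
import pandas as pd

# Reconstruction of the question's frame.
df = pd.DataFrame({
    "secs":  [0.0, 15.0, 18.0, 33.0, 48.0],
    "v_out": [-1.179100, -1.179100, -1.179200, -1.181800, 0.029461],
})

# shift() lags v_out by one row, so the first difference is NaN;
# NaN compares False to both > 0 and < 0, which lands in the "else 0" branch.
diff = df["v_out"] - df["v_out"].shift()
df["sign"] = diff.apply(lambda x: 0.05 if x > 0 else -0.05 if x < 0 else 0)
df["cumul"] = df["sign"].cumsum()

print(df["sign"].tolist())   # [0.0, 0.0, -0.05, -0.05, 0.05]
print(df["cumul"].tolist())  # [0.0, 0.0, -0.05, -0.1, -0.05]
```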
{ "pile_set_name": "StackExchange" }
Q: How to create two different values in the same row

Below is the data set I am working on. I am trying to create new columns based on certain conditions. Below is the code:

SELECT *,
    CASE WHEN [Partner Number] LIKE ('SAL%') THEN [Partner Name] ELSE '' END AS [SAL Name],
    CASE WHEN [Partner Number] LIKE ('SAL%') THEN [Partner Email Address] ELSE '' END AS [SAL Email],
    CASE WHEN [Partner Number] LIKE ('CEO%') THEN [Partner Name] ELSE '' END AS [CEO Name],
    CASE WHEN [Partner Number] LIKE ('CEO%') THEN [Partner Email Address] ELSE '' END AS [CEO Email],
    CASE WHEN [Partner Number] LIKE ('VPS%') THEN [Partner Name] ELSE '' END AS [VPS Name],
    CASE WHEN [Partner Number] LIKE ('VPS%') THEN [Partner Email Address] ELSE '' END AS [VPS Email]
INTO #Partner
FROM (
    SELECT DISTINCT [Vendor Number], [Purchasing Org], [Partner Number], [Partner Name], [Partner Email Address]
    FROM #Temp
) k

But it generates two different rows if the set has both SAL and CEO. Is it possible to generate both of them in the same row? When I do a join, the data explodes. Ideally both the SAL Name and CEO Name would be in the same row.

Result

A: I think this is more something you should do in the presentation layer.

But something like this should do it:

WITH cte_cat as (
    SELECT *,
        CASE WHEN partnerNumber like 'CEO%' THEN 'CEO'
             WHEN partnerNumber like 'SAL%' THEN 'SAL'
             ELSE 'OTHER'
        END as Category
    FROM cte_cat
),
cte_rn as (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY vendorNbr, purchasingOrg, Category ORDER BY Commodity) as RN
    FROM cte_cat
)
SELECT oth.vendorNbr, oth.purchasingOrg, oth.commodity,
       oth.partnerNumber, oth.PartnerName, oth.ParterEmail,
       CEO.partnerNumber, CEO.PartnerName, CEO.ParterEmail,
       SAL.partnerNumber, SAL.PartnerName, SAL.ParterEmail
FROM cte_rn oth
LEFT JOIN cte_rn CEO
    ON CEO.Category = 'CEO' AND oth.RN = CEO.RN
    AND oth.vendorNbr = CEO.vendorNbr AND oth.purchasingOrg = CEO.purchasingOrg
LEFT JOIN cte_rn SAL
    ON SAL.Category = 'SAL' AND oth.RN = SAL.RN
    AND oth.vendorNbr = SAL.vendorNbr AND oth.purchasingOrg = SAL.purchasingOrg
WHERE oth.Category = 'OTHER';
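If each (vendor, org) group has at most one partner per role, a simpler alternative to the ROW_NUMBER CTEs is conditional aggregation - a swapped-in technique, not the answer's exact query. A runnable sketch using Python's sqlite3 with made-up rows:

```python
import sqlite3

# Hypothetical flattened partner data, loosely following the question's shape.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE partner (vendor TEXT, org TEXT, number TEXT, name TEXT, email TEXT);
INSERT INTO partner VALUES
    ('V1', 'P1', 'SAL01', 'Sam Seller',  'sam@example.com'),
    ('V1', 'P1', 'CEO01', 'Carol Chief', 'carol@example.com'),
    ('V2', 'P1', 'SAL02', 'Sue Seller',  'sue@example.com');
""")

# CASE without ELSE yields NULL, and MAX() ignores NULLs, so each role's
# single name "folds" into its own column on one row per (vendor, org).
rows = conn.execute("""
    SELECT vendor, org,
           MAX(CASE WHEN number LIKE 'SAL%' THEN name END) AS sal_name,
           MAX(CASE WHEN number LIKE 'CEO%' THEN name END) AS ceo_name
    FROM partner
    GROUP BY vendor, org
    ORDER BY vendor
""").fetchall()

print(rows)
# [('V1', 'P1', 'Sam Seller', 'Carol Chief'), ('V2', 'P1', 'Sue Seller', None)]
```

Unlike the ROW_NUMBER approach above, this silently collapses multiple partners in the same role per group, so it only fits the one-per-role case.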
{ "pile_set_name": "StackExchange" }
Q: Some Issues About Cygwin [Linux in Windows] (socket, thread, other programming and shell issues)

I have some questions about Cygwin:

1. Can I use Cygwin to develop socket-based code?
2. Does Cygwin have read() and write() functions that work with file descriptors?
3. Can I use the Pthread library in Cygwin?
4. Does code that compiles in Cygwin also compile in Linux without any change, or with little change?
5. Will an executable file built by Cygwin run in Linux?
6. Why does Cygwin not need the linker option -lpthread when I use the pthread library?
7. Why, in #include <iostream>, don't I need to use using namespace std;?
8. Can I work with QT in Cygwin? If so, how?
9. Can I boot my Linux in another partition with Cygwin and use it?
10. Can I access another partition that is EXT3 in Cygwin?

A: On 1: Yes. Socket libraries are shipped with Cygwin - many socket-based apps such as web servers are included in the base distribution.

On 2: Yes. I think all of the 'section 2 and 3' system calls in the GNU C runtime and library are implemented by the Cygwin runtime. You can check this in the man pages that come with Cygwin. A list of system calls and std lib calls implemented by Cygwin can be found here.

On 3: Yes. Pthread is included in Cygwin. The list referred to in the link above mentions pthreads as well.

On 4: Anything built against GNU libraries should work with little or no change between Cygwin and Linux (assuming there are no dependencies missing on Cygwin). Depending on CPU architecture you may have to worry about word alignment, endianness and other architecture-specific porting issues, but if you're targeting Windows and Linux on Intel your code would have few if any porting issues arising from CPU architecture.

On 5: Cygwin will build a program against its own shared libraries by default, but GCC can cross-compile to target other platforms. You could (in theory) set GCC up to cross-compile to any target supported by the compiler. There are plenty of resources on the web about cross-compiling with GCC, and I don't think the process will be materially different on Cygwin. Note that Cygwin binaries will not run on Linux - or vice versa. You will still need separate builds for both.

On 6: Not sure - at a guess it's included in the standard runtime, perhaps because it was necessary to wrap the Win32 threading API for some reason.

On 7: Don't know - it's probably the same with g++ on all platforms. Apparently a compiler bug. Dan Moulding's answer covers this in more detail.

On 8: Yes. IIRC QT is available in the standard builds and it will certainly compile on Cygwin. As with Linux/Unix, QT on Cygwin uses an X11 backend, so you will need to have an X server such as XMing running. In order to avoid the dependency on an X server you may want to build QT apps against the Win32 API. It is possible to do this with MinGW, which is a set of header files and libraries to build native Win32 apps with GCC. MinGW can be used from within a Cygwin environment (an example of GCC on Cygwin cross-compiling to a non-Cygwin target) and the installer from cygwin.com gives you the option of installing it.

MinGW is quite mature; it has all of the 'usual suspects' - libraries and header files you would expect to find on a Unix/Linux GCC development environment - and is very stable. It is often the tool of choice for building Win32 ports of open-source software because it is (a) free, (b) supports the libraries used by the software and (c) uses GCC, so it is not affected by dialectic variations between MSVC and GCC. However, these dialectic variations in the language and available libraries (for example MSVC doesn't come with an implementation of getopt) mean that porting programs between MinGW and MSVC can be quite fiddly. My experience - admittedly not terribly extensive as I've only done this a few times - is that porting applications between MinGW32 and Linux is easier than porting between MinGW and MSVC. Obviously apps with non-portable dependencies such as Win32-specific API usage would require the dependent components to be re-written for the new platform, but you'll have far fewer problems with differences in the standard libs, header files and language dialect.

QT does a fairly good job of providing a platform abstraction layer. It provides APIs for database access, threading, I/O and many other services as well as the GUI. Using the QT APIs where possible should help with portability, and the Unix/Linux-flavoured libraries that come with MinGW mean that it might give you a good platform for making applications that will port between Win32 and Linux with relatively little platform-dependent code.

EDIT: The qt development packages in Cygwin are:

qt4: Qt application framework (source)
qt4-devel-tools: Qt4 Assistant, Designer, and Linguist
qt4-doc: Qt4 API documentation
qt4-qtconfig: Qt4 desktop configuration app
qt4-qtdemo: Qt4 demos and examples

You'll probably also need gcc4-g++ and some other bits and pieces. This listing on the cygwin web site has a list of the packages.

A: "Yes" to all of those except 5. You'll have to build your executables separately for Linux, but that should be straightforward since the answer to 4 is "yes". Make sure you install all the development headers you need on both platforms.
{ "pile_set_name": "StackExchange" }
Q: IPTables port whitelist, problem with outgoing connections

I'm trying to set up iptables so it will only allow incoming connections on ports I specify. I managed to get it working; however, it seems I somehow managed to break outgoing connections in the process. Both NS lookups and pings seem to fail, and probably everything else too. Current rules:

Chain INPUT (policy DROP)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:ssh

Chain FORWARD (policy ACCEPT)
target     prot opt source      destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source      destination

So how can I make this work, while still allowing outgoing connections?

A: Try:

iptables -A INPUT -p TCP -m state --state ESTABLISHED,RELATED -j ACCEPT
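For reference, a fuller whitelist setup might look like the sketch below. The port list is illustrative only, and newer iptables versions favour -m conntrack --ctstate over -m state --state (both exist and do the same thing here):

```shell
# Default-deny inbound, allow everything outbound (a sketch -- adapt to taste)
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# Allow loopback traffic and replies to connections this host initiated
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Whitelisted inbound services (SSH shown as an example)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```

The conntrack rule is what fixes the question's symptom: outbound DNS and ping leave fine under the ACCEPT OUTPUT policy, but their replies arrive on INPUT and were being dropped.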
{ "pile_set_name": "StackExchange" }
Q: What's the difference between abandoned memory and a memory leak?

Both are exactly the same thing, except that "abandoned memory" refers to a whole object graph leaked rather than just a single object. Right?

A: First, you need to understand the notion of a "memory object graph" or "application object graph" (or, simply, "object graph" as it applies to allocated buffers). In this case, "object" refers to any allocation in your application, be it an object or a simple malloc()ed buffer. The "graph" part of it is that any object can contain a reference - a pointer - to other objects.

The "live object graph" of an application comprises all of the allocations that can be reached, directly or indirectly, from the various "roots" in the application. A "root" is something that, on its own, represents a live reference to an object, regardless of whether or not anything else explicitly references the root. For example, global variables are roots; by referring to an object, a global variable, by definition, makes that object part of the app's live object graph. And, by implication, any objects referred to by that object are also considered to be live, not leaked. The same goes for the stack; any object referred to by any thread's live stack is, itself, considered live.

With this in mind, a leak and abandoned memory actually do have two distinct meanings.

Leak

A leak is a piece of memory for which there are no references to the allocation from any live object in the application's live object graph. I.e. the memory is unreachable and, thus, there is no way that it can ever be referred to again (barring bugs). It is dead memory. Note that if object A points to object B and object B points to A, but nothing in the live object graph points to either A or B, it is still a leak. If the B->A and A->B references are both retained references, you've got yourself a retain cycle and a leak.

Abandoned Memory

Allocations that are in the app's live object graph but are no longer reachable due to application logic issues are considered abandoned, but not leaked. For example, say you have a cache whose entries are instances of NSData that were downloaded from some URL containing a session ID (a common pattern), and that session ID + URL are used as the key to look up stuff in the cache. Now, say the user logs out, causing the session ID to be destroyed. If the cache isn't also pruned of all entries specific to that session ID, then all of those NSData objects will be abandoned, but not leaked, as they can still be reached via the cache.

In reality, there is little use in making this strong of a distinction between the two, save that fixing either requires very different strategies. Fixing a leak is a matter of figuring out where the extra retain came from (or where a missing call to free() might need to be inserted, in the case of a malloc() based leak). Since a detected leak cannot be reached from the live object graph, fixing a leak is really this straightforward.

Fixing abandoned memory can be considerably trickier, for a couple of reasons. First, the memory is still reachable from the live object graph. Thus, by definition, there is an algorithmic problem in your application that is keeping the memory alive. Finding and fixing that can often be much more difficult and potentially disruptive than fixing a mere leak. Secondly, there might be non-zeroing non-retained weak references to the abandoned allocation. That is, if you figure out where to prune the strong references and make the allocation actually go away, that doesn't mean your work is done; if there are any remaining non-zeroing weak references, they will now be dangling pointers and..... BOOM.

As Amit indicated, Heapshot Analysis is quite adept at finding leaks, abandoned memory and - quite importantly - overall "undesirable memory growth".
A: Not sure if there's a standard terminology, but there's also the possibility of having memory around which does have a reference, but will never be used. (The Heap Shot feature of the Leaks instrument can help track this down.) I call this "bloat" to distinguish it from a true leak. Both are wasted memory.
{ "pile_set_name": "StackExchange" }
Q: Questions about instrument care and repair Should questions regarding the care and repair of our instruments be considered on topic here? Why? Why not? Could that be extended to questions about making musical instruments? A: Care and repair should definitely be on-topic. It's closely related to Musical Performance and drawing a line between them would not be useful. I'm not as sure about instrument construction. While there's certainly a large overlap, I would assume the majority of relevant questions are irrelevant to the average musician. That said, I would have no problem with this site being host to them unless the Musical Instrument Construction, Maintenance, and Repair proposal takes off. If it does we would presumably migrate our own care & repair questions there as well.
{ "pile_set_name": "StackExchange" }
Q: UserControl: Can I set my own DependencyProperty in XAML?

I'd like to be able to do something like this:

.xaml.cs:

public partial class MyControl : UserControl
{
    public MyControl() => InitializeComponent();

    public static readonly DependencyProperty MyTemplateProperty = DependencyProperty.Register(
        "MyTemplate", typeof(DataTemplate), typeof(MyControl),
        new PropertyMetadata(default(DataTemplate)));

    public DataTemplate MyTemplate
    {
        get => (DataTemplate) GetValue(MyTemplateProperty);
        set => SetValue(MyTemplateProperty, value);
    }
}

.xaml:

<UserControl x:Class="MyControl">
    <!-- etc. -->
    <Grid />
    <!-- does not compile -->
    <UserControl.MyTemplate>
        <DataTemplate />
    </UserControl.MyTemplate>
</UserControl>

But it doesn't work. Not-so-surprisingly, when you start an element name with UserControl, the compiler only looks for properties defined on UserControl itself. But changing the element name to <MyControl.MyTemplate> (with the proper namespace prefix) doesn't work either; the compiler tries to interpret MyTemplate as an attached property in this case.

Is there any way to achieve this aside from defining the value in a resource and then assigning it to the property from codebehind?

A: You can set the property by a Style:

<UserControl ...>
    <UserControl.Style>
        <Style>
            <Setter Property="local:MyControl.MyTemplate">
                <Setter.Value>
                    <DataTemplate />
                </Setter.Value>
            </Setter>
        </Style>
    </UserControl.Style>
    ...
</UserControl>
{ "pile_set_name": "StackExchange" }
Q: Method to know the number of isomers of metal complexes

Whenever I try to find the number of geometrical isomers (including optical isomers) of a coordination compound, I get confused, and I usually miss a few isomers. Is there any standard method to know the number of isomers of compounds such as $\ce{Ma2b2c2}$ or $\ce{[M(A-A)a2b2]}$, where $\ce{M}$ is the metal and $\ce{a,b}$ are monodentate ligands while $\ce{A-A}$ is a bidentate ligand?

A: Do it many times over, so as not to get confused. Alternatively, there is a thing called Polya's formula, but you won't be able to use it anyway. In trivial cases like this, the said formula is about 100 times more complicated than counting isomers by hand. It is not before polysubstituted fullerenes that its use in chemistry starts making any sense.

A: For an easier method, without going much into mathematics, you can use simple logic to find the number of geometrical isomers of [Ma2b2c2]. Such logic is very much situation-based, though, and I believe there's no generic one.

Here you can start by choosing the number of pairs of same ligands to be trans (i.e. at 180 degrees to each other). After some thought, it's clear that one can choose 0, 1 or 3 such pairs, because choosing 2 will automatically force the 3rd pair to be trans as well. For choosing 1 such pair there are 3 ways, namely (a,a), (b,b), (c,c); and there is only 1 way each for choosing none (0) or all (3) such pairs. This accounts for all 5 isomers.

Take the following example of $\ce{[Co(NH3)2Cl2(NO2)2]-}$. For further references you can visit this site.
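The "5 isomers" count can be verified by brute force: place the six ligands a,a,b,b,c,c on the vertices of an octahedron and count distinct arrangements up to rotation (stereoisomers) and up to rotation plus reflection (geometrical isomers). A short Python sketch:

```python
from itertools import permutations

# Octahedron vertices: the six ligand sites around the metal M.
COORDS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def perm_of(f):
    """Turn a coordinate map f into a permutation of the 6 vertex indices."""
    return tuple(COORDS.index(f(v)) for v in COORDS)

rot_z  = perm_of(lambda v: (-v[1], v[0], v[2]))   # 90 degrees about z
rot_x  = perm_of(lambda v: (v[0], -v[2], v[1]))   # 90 degrees about x
mirror = perm_of(lambda v: (-v[0], v[1], v[2]))   # reflection x -> -x

def closure(generators):
    """Generate the whole symmetry group by composing generators (BFS)."""
    group = {tuple(range(6))}
    frontier = list(group)
    while frontier:
        p = frontier.pop()
        for g in generators:
            q = tuple(g[p[i]] for i in range(6))
            if q not in group:
                group.add(q)
                frontier.append(q)
    return group

rotations = closure([rot_z, rot_x])           # 24 proper rotations
full_sym  = closure([rot_z, rot_x, mirror])   # 48 incl. reflections

def count_isomers(group):
    colorings = set(permutations("aabbcc"))   # 6!/(2!2!2!) = 90 labelings
    canonical = {min(tuple(c[p[i]] for i in range(6)) for p in group)
                 for c in colorings}
    return len(canonical)

print(len(rotations), len(full_sym))  # 24 48
print(count_isomers(rotations))       # 6 stereoisomers
print(count_isomers(full_sym))        # 5 geometrical isomers
```

The 6-versus-5 difference is the all-cis arrangement, which is chiral: it counts as two stereoisomers (a pair of enantiomers) but only one geometrical isomer, matching the answer's count of 5.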
{ "pile_set_name": "StackExchange" }
Q: Middle clicking "Tell me more" button is redirecting on the parent page in addition to opening it in new tab

I hadn't visited the Stack Overflow main page without being logged in for a long while, thus I'm not sure when this section was added:

Expecting this to be an ordinary link like all the others, I middle-clicked it, but it caused a redirection (to the About page) in the parent page in addition to opening the new tab. Browser: Chrome 26.0.1410.43

Edit: since others can't reproduce, here is a video showing the issue: http://www.youtube.com/watch?v=eFpkmLLJCJ4

A: Hands up, I screwed this one up. Fix in place, to be with you in the next build.
{ "pile_set_name": "StackExchange" }
Q: ASP.NET web service using IUSR, not Application Pool Identity This question seems to be similar to this one: IIS site not using identity specified in app pool IIS 7 + However, there are no answers there. There's a tl;dr at the bottom. A thing to keep in mind is that I'm not the one who set up the server so they may have changed some settings I don't know about. We have an ASP.NET web service running on IIS 7. The web service is set to use DefaultAppPool, and the app pool's Identity is set to a domain user (let's say it's "localdomain\user1"). The web service was unable to save to a certain network folder, so we gave localdomain\user1 read/write permissions to that folder. It still can't save there, however. I can't remote debug, and it works fine on my own computer (probably because it's running in Visual Studio's IIS express and my user does have access), so I tried to change the web service so that the error message contains the user name it's running under. If I use Environment.UserName to get it, the result is "IUSR". If I use System.Security.Principal.WindowsIdentity.GetCurrent().Name, it returns "NT AUTHORITY\IUSR". Unless the above methods are not reliable, the web service seems to be running under the default user (IUSR) and not the one set in its application pool. I can't figure out why, can anyone explain? EDIT: The Task Manager on the server, if I log in using RDP, shows that the w3wp.exe process IS being run by user1. I'm not sure which one to believe. Thank you. tl;dr: The web service's application pool is set to a domain user, but it seems to be running under IUSR anyway. How do I prevent that? A: Impersonation was the issue. I didn't know this was a setting in the web service's web.config. Changing <identity impersonate="true"/> to <identity impersonate="false"/> allows it to run as localdomain\user1.
{ "pile_set_name": "StackExchange" }
Q: How to get IP address of the client in Play! framework 2.0?

Possible Duplicate: How to get the client IP?

How to get the IP of the client in Play! framework 2.0? Is something implemented in Play? Any help, advice? I'm writing apps in Java.

A: In Play 2.0's actions you can get a lot of data from Http.RequestHeader; it can be fetched like this:

public static Result index() {
    String remote = request().remoteAddress();
    return ok(remote);
}
{ "pile_set_name": "StackExchange" }
Q: intersection points of two circles

I am trying to find the points at which two circles intersect. The circles I am working with are:

$$(1) \qquad x^2+y^2 = \frac{9}{4}$$
$$(2) \qquad (x-2)^2+y^2=\frac{9}{4}$$

I am following this answer: https://math.stackexchange.com/a/418932/136870.

When I subtract (1) from (2), I get $x=2$. Substituting $x \leftarrow 2$ in (1) gives me an imaginary number, $\sqrt{-\frac{7}{4}}$, for $y$, but I know these circles intersect. When I graphed these circles, I saw that the intersection points are both on the line $x=1$. I also set the left side of (1) equal to the left side of (2), since both are equal to $9/4$, but I got the same answer. Not sure what is going wrong here.

A: Instead of following a recipe, look at the geometry: in this problem it's exceptionally simple. We can tell immediately from the equations that the centre $C_1$ of the first circle is the origin, $\langle 0,0\rangle$, and the centre $C_2$ of the second is $\langle 2,0\rangle$. Moreover, each circle has radius $\sqrt{\frac94}=\frac32$. Since the radii are the same, the points of intersection will lie on the perpendicular bisector of the line segment $\overline{C_1C_2}$, which is the line $x=1$. Thus, you need only find the two points on the line $x=1$ that are $\frac32$ units away from $C_1$ and $C_2$; that's a straightforward application of the Pythagorean theorem.
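A quick numeric check confirms the geometric answer - and shows that the subtraction actually gives $x=1$, not $x=2$: subtracting (1) from (2) leaves $(x-2)^2 - x^2 = -4x + 4 = 0$.

```python
from math import sqrt, isclose

# Both circles have radius 3/2; centres at (0, 0) and (2, 0).
r2 = 9 / 4

# -4x + 4 = 0  =>  x = 1 (the perpendicular bisector of the two centres)
x = 1.0
y = sqrt(r2 - x**2)   # sqrt(9/4 - 1) = sqrt(5)/2

# Both (1, +-sqrt(5)/2) satisfy each circle equation.
for py in (y, -y):
    assert isclose(x**2 + py**2, r2)            # on circle (1)
    assert isclose((x - 2)**2 + py**2, r2)      # on circle (2)

print(x, y)  # 1.0 1.118033988749895
```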
{ "pile_set_name": "StackExchange" }