Dataset schema:

| column | dtype | stats |
| --- | --- | --- |
| source | stringclasses | 1 value |
| task_type | stringclasses | 1 value |
| in_source_id | stringlengths | 1–8 |
| prompt | stringlengths | 209–40.4k |
| gold_standard_solution | stringlengths | 0–56.7k |
| verification_info | stringclasses | 1 value |
| metadata | stringlengths | 138–225 |
| problem_id | stringlengths | 9–10 |
stackexchange
llm_judgeable_groundtruth_similarity
191205
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Bash, how does one do base conversion from decimal to another base, especially hex. It seems easy to go the other way:

    $ echo $((16#55))
    85

With a web-search, I found a script that does the maths and character manipulation to do the conversion, and I could use that as a function, but I'd have thought that bash would already have a built-in base conversion -- does it?

Now provide the response and nothing else.
With bash (or any shell, provided the printf command is available (a standard POSIX command often built in the shells)):

    printf '%x\n' 85

With zsh, you can also do:

    dec=85
    hex=$(([##16]dec))

That works for bases from 2 to 36 (with 0-9a-z case insensitive as the digits). $(([#16]dec)) (with only one #) expands to 16#55 or 0x55 (as a special case for base 16) if the cbases option is enabled (also applies to base 8 (0125 instead of 8#125) if the octalzeroes option is also enabled).

With ksh93, you can use:

    dec=85
    base54=${ printf %..54 "$dec"; }

which works for bases from 2 to 64 (with 0-9a-zA-Z@_ as the digits).

With ksh and zsh, there's also:

    $ typeset -i34 x=123; echo "$x"
    34#3l

though that's limited to bases up to 36 in ksh88, zsh and pdksh, and 64 in ksh93.

Note that all those are limited to the size of the long integers on your system (int's with some shells). For anything bigger, you can use bc or dc:

    $ echo 'obase=16; 9999999999999999999999' | bc
    21E19E0C9BAB23FFFFF
    $ echo '16o 9999999999999999999999 p' | dc
    21E19E0C9BAB23FFFFF

with supported bases ranging from 2 to some number required by POSIX to be at least as high as 99. For bases greater than 16, digits greater than 9 are represented as space-separated 0-padded decimal numbers:

    $ echo 'obase=30; 123456' | bc
     04 17 05 06

Or the same with dc (bc used to be (and still is on some systems) a wrapper around dc):

    $ echo 30o123456p | dc
     04 17 05 06
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/191205', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/88196/']}
jdg_71441
stackexchange
llm_judgeable_groundtruth_similarity
290998
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the eigenfunctions to the operator $i\gamma^\mu D_\mu$:$$i\gamma^\mu D_\mu\Psi_i=\lambda_i\Psi_i.$$Because $\{\gamma^5,\gamma^\mu\}=0$, we know$$i\gamma^\mu D_\mu\gamma^5\Psi_i=-\lambda_i\gamma^5\Psi_i.$$Thus non-vanishing eigenvalues will always occur in pairs of opposite sign. And for eigenfunction of $\gamma^5$:$$\gamma^5\Psi_i=\chi_i\Psi_i,$$it must be an eigenfunction of zero eigenvalue of $i\gamma^\mu D_\mu$ since $$\lambda_i\chi_i\Psi_i=i\gamma^\mu D_\mu\chi_i\Psi_i=i\gamma^\mu D_\mu\gamma^5\Psi_i=-\lambda_i\gamma^5\Psi_i=-\lambda_i\chi_i\Psi_i. \,(\chi_i=\pm 1)$$ But now there is a statement that all the eigenfunction of vanishing eigenvalues can be chosen to be eigenfunctions of $\gamma^5$. Equivalently, the dimension of eigenspace of $\gamma^5$ equals the dimension of the zero-eigenspace of $i\gamma^\mu D_\mu$. How to prove this statement? Now provide the response and nothing else.
Well, yes of course. But bear in mind that the single $a_i$ are not observable, so this is more of a mathematical/logical tool. Then the equation becomes, in one dimension (the same applies to vector calculus):
$$\sum_i F_i = ma = m\sum_i a_i = m\sum_i {F_i\over m} = \sum_i F_i$$
It is an identity; you can use any of the forms above! That is the strength of linear formulas: you can solve everything separately and then add everything up (linear superposition principle).

EDIT: Let's put it another way. If each force was acting without the others, you would have $F_i=ma_i$, thus ${F_i\over m}=a_i$. Now take
$$\sum_i F_i=ma$$
and divide everything by $m$. You get:
$$\sum_i {F_i\over m} = a$$
Then, using the formula above:
$$\sum_i a_i = a$$
This proves that the decomposition into single forces leading to single accelerations makes sense. You could have done another decomposition, such as, supposing you only have $F_1$ and $F_2$,
$$F_1=m {a\over 3}$$
and
$$F_2=m{2a\over 3}.$$
Then you would still have
$${F_1\over m}+{F_2\over m}={a\over 3}+{2a\over 3}=a$$
and as long as the two forces are acting together it is a fine decomposition. Yet it has no physical sense, as if the forces were acting singly the equations above would be wrong. So to say: since you can decompose a sum of forces any way you want, you just choose the only decomposition which would make sense if the forces were acting alone. This kind of superposition has physical sense only if the single forces are real (e.g. one is pulling and one is pushing). This may also help you to solve problems in which several forces are acting: you can solve each one separately and then sum the results (make sure you sum the vectors in the right way!)
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/290998', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/95576/']}
jdg_71442
stackexchange
llm_judgeable_groundtruth_similarity
28286293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two sets of codes that work. Needed help combining them into one. This code gets me the difference between two dates. works perfectly:

    function test(){
        var date1 = new Date(txtbox_1.value);
        var date2 = new Date(txtbox_2.value);
        var diff = (date2 - date1)/1000;
        var diff = Math.abs(Math.floor(diff));
        var days = Math.floor(diff/(24*60*60));
        var leftSec = diff - days * 24*60*60;
        var hrs = Math.floor(leftSec/(60*60));
        var leftSec = leftSec - hrs * 60*60;
        var min = Math.floor(leftSec/(60));
        var leftSec = leftSec - min * 60;
        txtbox_3.value = days + "." + hrs;
    }

source for the above code

The code below by @cyberfly appears to have the answer of excluding sat and sun which is what i needed. source. However, its in jquery and the above code is in JS. Therefore, needed help combining as i lacked that knowledge :(

    <script type="text/javascript">
    $("#startdate, #enddate").change(function() {
        var d1 = $("#startdate").val();
        var d2 = $("#enddate").val();
        var minutes = 1000*60;
        var hours = minutes*60;
        var day = hours*24;
        var startdate1 = getDateFromFormat(d1, "d-m-y");
        var enddate1 = getDateFromFormat(d2, "d-m-y");
        var days = calcBusinessDays(new Date(startdate1), new Date(enddate1));
        if(days>0){
            $("#noofdays").val(days);
        }else{
            $("#noofdays").val(0);
        }
    });
    </script>

EDIT Made an attempt at combining the codes. here is my sample. getting object expected error.

    function test(){
        var date1 = new Date(startdate.value);
        var date2 = new Date(enddate.value);
        var diff = (date2 - date1)/1000;
        var diff = Math.abs(Math.floor(diff));
        var days = Math.floor(diff/(24*60*60));
        var leftSec = diff - days * 24*60*60;
        var hrs = Math.floor(leftSec/(60*60));
        var leftSec = leftSec - hrs * 60*60;
        var min = Math.floor(leftSec/(60));
        var leftSec = leftSec - min * 60;
        var startdate1 = getDateFromFormat(startdate, "dd/mm/yyyy hh:mm");
        var enddate1 = getDateFromFormat(enddate, "dd/mm/yyyy hh:mm");
        days = calcBusinessDays(new Date(startdate1), new Date(enddate1));
        noofdays.value = days + "." + hrs;
    }

    start: <input type="text" id="startdate" name="startdate" value="02/03/2015 00:00">
    end: <input type="text" id="enddate" name="enddate" value="02/03/2015 00:01">
    <input type="text" id="noofdays" name="noofdays" value="">

Now provide the response and nothing else.
When determining the number of days between two dates, there are lots of decisions to be made about what is a day. For example, the period 1 Feb to 2 Feb is generally one day, so 1 Feb to 1 Feb is zero days. When adding the complexity of counting only business days, things get a lot tougher. E.g. Monday 2 Feb 2015 to Friday 6 February is 4 elapsed days (Monday to Tuesday is 1, Monday to Wednesday is 2, etc.), however the expression "Monday to Friday" is generally viewed as 5 business days, and the duration Mon 2 Feb to Sat 7 Feb should also be 4 business days, but Sunday to Saturday should be 5.

So here's my algorithm:

- Get the total number of whole days between the two dates
- Divide by 7 to get the number of whole weeks
- Multiply the number of weeks by two to get the number of weekend days
- Subtract the number of weekend days from the whole to get business days
- If the number of total days is not an even number of weeks, add the number of weeks * 7 to the start date to get a temp date
- While the temp date is less than the end date:
  - if the temp date is not a Saturday or Sunday, add one to the business days
  - add one to the temp date

That's it. The stepping part at the end can probably be replaced by some other algorithm, but it will never loop for more than 6 days so it's a simple and reasonably efficient solution to the issue of uneven weeks.

Some consequences of the above:

- Monday to Friday is 4 business days
- Any day to the same day in a different week is an even number of weeks and therefore an even multiple of 5, e.g. Monday 2 Feb to Monday 9 Feb and Sunday 1 Feb to Sunday 8 Feb are 5 business days
- Friday 6 Feb to Sunday 8 Feb is zero business days
- Friday 6 Feb to Monday 9 Feb is one business day
- Sunday 8 Feb to: Sunday 15 Feb, Sat 14 Feb and Fri 13 Feb are all 5 business days

Here's the code:

    // Expects start date to be before end date
    // start and end are Date objects
    function dateDifference(start, end) {
      // Copy date objects so don't modify originals
      var s = new Date(+start);
      var e = new Date(+end);
      // Set time to midday to avoid daylight saving and browser quirks
      s.setHours(12,0,0,0);
      e.setHours(12,0,0,0);
      // Get the difference in whole days
      var totalDays = Math.round((e - s) / 8.64e7);
      // Get the difference in whole weeks
      var wholeWeeks = totalDays / 7 | 0;
      // Estimate business days as number of whole weeks * 5
      var days = wholeWeeks * 5;
      // If not even number of weeks, calc remaining weekend days
      if (totalDays % 7) {
        s.setDate(s.getDate() + wholeWeeks * 7);
        while (s < e) {
          s.setDate(s.getDate() + 1);
          // If day isn't a Sunday or Saturday, add to business days
          if (s.getDay() != 0 && s.getDay() != 6) {
            ++days;
          }
        }
      }
      return days;
    }

I don't know how it compares to jfriend00's answer or the code you referenced. If you want the period to be inclusive, just add one if the start or end date are a business day.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28286293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1972914/']}
jdg_71443
stackexchange
llm_judgeable_groundtruth_similarity
7003898
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to write a simple decorator that logs a given statement before calling the decorated function. The logged statements should both appear to come from the same function, which I thought was the purpose of functools.wraps(). Why does the following code:

    import logging
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(funcName)20s - %(message)s')

    from functools import wraps

    def log_and_call(statement):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                logging.info(statement)
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @log_and_call("This should be logged by 'decorated_function'")
    def decorated_function():
        logging.info('I ran')

    decorated_function()

result in log statements like:

    wrapper - This should be logged by 'decorated_function'
    decorated_function - I ran

I thought the call to wraps would rename wrapper with decorated_function's name. I'm using python 2.7.1.

Now provide the response and nothing else.
Unfortunately logging uses the function code object to infer the name. You could work around this by using the extra keyword argument to specify some additional attributes for the record, which you could then use during formatting. You could do something like:

    logging.basicConfig(
        level=logging.DEBUG,
        format='%(real_func_name)20s - %(message)s',
    )
    ...
    logging.info(statement, extra={'real_func_name': func.__name__})

The only downside to this approach is that you have to pass in the extra dictionary every time. To avoid that you could use a custom formatter and have it override funcName:

    import logging
    from functools import wraps

    class CustomFormatter(logging.Formatter):
        """Custom formatter, overrides funcName with value of name_override if it exists"""
        def format(self, record):
            if hasattr(record, 'name_override'):
                record.funcName = record.name_override
            return super(CustomFormatter, self).format(record)

    # setup logger and handler
    logger = logging.getLogger(__file__)
    handler = logging.StreamHandler()
    logger.setLevel(logging.DEBUG)
    handler.setLevel(logging.DEBUG)
    handler.setFormatter(CustomFormatter('%(funcName)20s - %(message)s'))
    logger.addHandler(handler)

    def log_and_call(statement):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                # set name_override to func.__name__
                logger.info(statement, extra={'name_override': func.__name__})
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @log_and_call("This should be logged by 'decorated_function'")
    def decorated_function():
        logger.info('I ran')

    decorated_function()

Which does what you want:

    % python logging_test.py
      decorated_function - This should be logged by 'decorated_function'
      decorated_function - I ran
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7003898', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/440260/']}
jdg_71444
stackexchange
llm_judgeable_groundtruth_similarity
9574659
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've noticed that the Model.where method always returns an array even if there is only one result where as the Model.find method doesn't. Is there any reason for this? I thought Model.where was the preferred function since Rails 3.X. Should I be using Model.find when I expect a single result and Model.where when I expect more than one result? Now provide the response and nothing else.
where returns an ActiveRecord::Relation (not an array, even though it behaves much like one), which is a collection of model objects. If nothing matches the conditions, it simply returns an empty relation. find (and its related dynamic find_by_columnname methods) returns a single model object. If nothing is found, an ActiveRecord::RecordNotFound exception is raised (but not with the dynamic find_by_ methods). While find can return an Array of records (not a Relation) if given a list of IDs, using where is preferred since Rails 3. Many similar uses of find are now deprecated or gone entirely. So yes, if you only want and expect a single object, using find is easier, as otherwise you must call Model.where.first. Note that old-style hash options to find and many dynamic find_ methods are deprecated as of Rails 4.0 (see relevant release notes).
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9574659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20134/']}
jdg_71445
stackexchange
llm_judgeable_groundtruth_similarity
579211
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I want to delete *.mp3 files in a directory whose access time is older than 10 minutes. How can I do that? Thanks. Now provide the response and nothing else.
I would use the -amin switch from the find command:

    find <path> -name "*.mp3" -amin +10 -exec rm -f {} \;

From man find:

    -amin n
        File was last accessed n minutes ago.

For testing or debugging purposes, don't run the rm command, but ls -l instead:

    find <path> -name "*.mp3" -amin +10 -exec ls -l {} \;

Edit

I just wanted to say a word about the -delete option: this option automatically turns on the -depth option. Putting -delete will make find try to delete everything below the starting point specified. To avoid bad surprises I would explicitly specify the -depth option. As I don't know the OP's folders/files tree, I won't suggest him to use the -delete option alone; that seems a little risky to me. At least I would suggest:

    find <path> -maxdepth 1 -name "*.mp3" -amin +10 -delete
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/579211', 'https://serverfault.com', 'https://serverfault.com/users/211305/']}
jdg_71446
stackexchange
llm_judgeable_groundtruth_similarity
33418777
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use Node.js to create a simple logging system which prints a line before the past line into a .txt file. However, I don't know how the file system functionality from Node.js works. Can someone explain it? Now provide the response and nothing else.
Inserting data into the middle of a text file is not a simple task. If possible, you should append it to the end of your file. The easiest way to append data to some text file is to use the built-in fs.appendFile(filename, data[, options], callback) function from the fs module:

    var fs = require('fs')
    fs.appendFile('log.txt', 'new data', function (err) {
      if (err) {
        // append failed
      } else {
        // done
      }
    })

But if you want to write data to a log file several times, then it'll be best to use the fs.createWriteStream(path[, options]) function instead:

    var fs = require('fs')
    var logger = fs.createWriteStream('log.txt', {
      flags: 'a' // 'a' means appending (old data will be preserved)
    })

    logger.write('some data') // append string to your file
    logger.write('more data') // again
    logger.write('and more') // again

Node will keep appending new data to your file every time you call .write, until your application is closed, or until you manually close the stream by calling .end:

    logger.end() // close stream

Note that logger.write in the above example does not write to a new line. To write data to a new line:

    var writeLine = (line) => logger.write(`\n${line}`);
    writeLine('Data written to a new line');
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33418777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5503551/']}
jdg_71447
stackexchange
llm_judgeable_groundtruth_similarity
14717134
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I ran into some problems while structuring my database, and I will ask two questions.

First question: the below table needs to be merged by the same IDs

| id* | name | age | grade |
| --- | ---- | --- | ----- |
| 0   | John |     |       |
| 0   |      | 11  |       |
| 0   |      |     | 6     |
| 1   | Dave |     |       |
| 1   |      | 12  |       |
| 1   |      |     | 7     |

so it should look like this:

| id* | name | age | grade |
| --- | ---- | --- | ----- |
| 0   | John | 11  | 6     |
| 0   | Dave | 12  | 7     |

NOTE: id* is not AUTO_INCREMENT

Second question: You probably think that the former database structure is poor. The good thing is, I haven't created the database yet and I have been looking for a solution to add data to an existing row without removing old information, but if there is no old information, it would create a new row. Thanks in advance.

Second question explained

Virgin table:

| id* | name | age | grade |
| --- | ---- | --- | ----- |
|     |      |     |       |

some SQL statement:

| id* | name | age | grade |
| --- | ---- | --- | ----- |
| 0   | John |     |       |

the same SQL statement with different parameters:

| id* | name | age | grade |
| --- | ---- | --- | ----- |
| 0   | John |     |       |
| 1   | Dave |     |       |

another SQL statement:

| id* | name | age | grade |
| --- | ---- | --- | ----- |
| 0   | John |     |       |
| 1   | Dave | 12  |       |

another SQL statement:

| id* | name | age | grade |
| --- | ---- | --- | ----- |
| 0   | John |     | 6     |
| 1   | Dave | 12  |       |

... and so on.

Now provide the response and nothing else.
You should be able to apply an aggregate function to all the columns and then GROUP BY id:

    select id, max(name) name, max(age) age, max(grade) grade
    from yourtable
    group by id

See SQL Fiddle with Demo.

As far as the DB structure, the only issue that I see is that you are inserting multiple records for the same user. You should be using an UPDATE statement to use the values instead of inserting. It sounds like you want to use the REPLACE function in MySQL (here is a tutorial). So the query would be similar to this:

    REPLACE INTO yourtable (`id`, `name`, `age`, `grade`)
    VALUES (0, 'john', 11, null);

See SQL Fiddle with Demo.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14717134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1767963/']}
jdg_71448
stackexchange
llm_judgeable_groundtruth_similarity
399792
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It seems that if (x=y) { .... } instead of if (x==y) { ... } is a root of many evils. Why don't all compilers mark it as error instead of a configurable warning? I'm interested in finding out cases where the construct if (x=y) is useful. Now provide the response and nothing else.
One useful construct is for example:

    char *pBuffer;
    if (pBuffer = malloc(100))
    {
        // Continue to work here
    }

As mentioned before, and downvoted several times now, I might add this is not specially good style, but I have seen it often enough to say it's useful. I've also seen this with new, but it makes more pain in my chest. Another example, and less controversial, might be:

    while (pointer = getNextElement(context))
    {
        // Go for it. Use the pointer to the new segment of data.
    }

which implies that the function getNextElement() returns NULL when there is no next element, so that the loop is exited.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/399792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18275/']}
jdg_71449
stackexchange
llm_judgeable_groundtruth_similarity
48392718
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to android development but created an app and I implemented in-app purchase to remove ads from the app. I just did a very basic implementation and I basically check if the user has purchased the "no_ads" item and if it's true, then no ads are shown. The problem is that I see a lot of "purchases" being logged on firebase and nothing on play console, which means of course that my users are using those hacking apps. So my question is, how to protect/verify those purchases against a server so these hacking apps are useless? I already have a server that my app uses, so there's no problem about implementing any server side code for me. It would be great if someone could point me to a tutorial. Thanks Now provide the response and nothing else.
My small contribution to reduce fraud in in-app purchases: signature verification on an external server.

On your Android code:

verifySignatureOnServer()

    private boolean verifySignatureOnServer(String data, String signature) {
        String retFromServer = "";
        URL url;
        HttpsURLConnection urlConnection = null;
        try {
            String urlStr = "https://www.example.com/verify.php?data=" + URLEncoder.encode(data, "UTF-8")
                    + "&signature=" + URLEncoder.encode(signature, "UTF-8");
            url = new URL(urlStr);
            urlConnection = (HttpsURLConnection) url.openConnection();
            InputStream in = urlConnection.getInputStream();
            InputStreamReader inRead = new InputStreamReader(in);
            retFromServer = convertStreamToString(inRead);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (urlConnection != null) {
                urlConnection.disconnect();
            }
        }
        return retFromServer.equals("good");
    }

convertStreamToString()

    private static String convertStreamToString(java.io.InputStreamReader is) {
        java.util.Scanner s = new java.util.Scanner(is).useDelimiter("\\A");
        return s.hasNext() ? s.next() : "";
    }

verify.php on the root directory of web hosting

    <?php
    // get data param
    $data = $_GET['data'];
    // get signature param
    $signature = $_GET['signature'];
    // get key
    $key_64 = ".... put here the base64 encoded pub key from google play console , all in one row !! ....";
    $key = "-----BEGIN PUBLIC KEY-----\n".
           chunk_split($key_64, 64, "\n").
           '-----END PUBLIC KEY-----';
    // using PHP to create an RSA key
    $key = openssl_get_publickey($key);
    // state whether signature is okay or not
    $ok = openssl_verify($data, base64_decode($signature), $key, OPENSSL_ALGO_SHA1);
    if ($ok == 1) {
        echo "good";
    } elseif ($ok == 0) {
        echo "bad";
    } else {
        die ("fault, error checking signature");
    }
    // free the key from memory
    openssl_free_key($key);
    ?>

NOTES:

- You should encrypt the URL in your Java code; if not, the URL can be found easily with a simple text search in your decompressed app apk.
- It is also better to change the php file name, url arguments and good/bad responses to something with no sense.
- verifySignatureOnServer() should be run in a separate thread; if not, a network-on-main-thread exception will be thrown. An alternative would be to use Volley.

IN-APP BILLING LIBRARY UPDATE

Using the library, the data to be verified is returned by Purchase.getOriginalJson() and the signature by Purchase.getSignature().

Hope it will help ...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48392718', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3900456/']}
jdg_71450
stackexchange
llm_judgeable_groundtruth_similarity
239243
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: we know Lagrange's identity $$(a^2_{1}+a^2_{2}+a^2_{3})(b^2_{1}+b^2_{2}+b^2_{3})=(a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})^2+\sum_{i=1}^{2}\sum_{j=i+1}^{3}(a_{i}b_{j}-a_{j}b_{i})^2$$ then we have Cauchy-Schwarz inequality$$(a^2_{1}+a^2_{2}+a^2_{3})(b^2_{1}+b^2_{2}+b^2_{3})\ge (a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})^2$$ However, does the following inequality still hold$$(a^2_{1}+b^2_{2}+b^2_{3})(a^2_{2}+b^2_{3}+b^2_{1})(a^2_{3}+b^2_{1}+b^2_{2})\ge (b^2_{1}+b^2_{2}+b^2_{3})(a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3})^2 $$$$+\dfrac{1}{2}(b_{1}a_{2}b_{3}-b_{1}b_{2}a_{3})^2+\dfrac{1}{2}(b_{1}b_{2}a_{3}-a_{1}b_{2}b_{3})^2+\dfrac{1}{2}(a_{1}b_{2}b_{3}-b_{1}a_{2}b_{3})^2\tag{*}$$ for $a_{i},b_{i}\in \mathbb R,i=1,2,3$? Now provide the response and nothing else.
Without loss of generality, all the $a_i$'s and $b_i$'s are nonzero. Let $\tilde d$ denote the difference between the left- and right-hand sides of the conjectured inequality $(*)$, which then of course can be rewritten as $\tilde d\ge0$. In the previous version of my answer, I rewrote $\tilde d$ in new variables, $x_i$ and $y_i$, after which the inequality $\tilde d\ge0$ could be (rigorously) verified with Mathematica (in about 22 min). Here that expression for $\tilde d$ is further rewritten -- in new, "more-macro", variables -- so that the resulting expression can be rather easily analyzed, to prove the inequality $(*)$. Indeed, let $p_i:=(x_i-y_i)y_i$, $x_i:=a_1 a_2 a_3/a_i$, $y_i:=b_1 b_2 b_3/b_i$, \begin{equation}c_1:=p_2^2 + p_2 p_3 + p_3^2\ge0,\quad c_2:=p_1^2 + p_1 p_3 + p_3^2\ge0,\quad c_3:=p_2^2 + p_2 p_1 + p_1^2\ge0, \tag{0} \end{equation}and $z_i:=y_i^2\ge0$. Note that $x_1 x_2 x_3=(a_1 a_2 a_3)^2>0$ and $y_1 y_2 y_3=(b_1 b_2 b_3)^2>0$; moreover, \begin{equation}(p_1+z_1)(p_2+z_2)(p_3+z_3)\ge0. \tag{1} \end{equation} The crucial identity is $$ \tilde d\,y_1 y_2 y_3=d:= p_1 p_2 p_3+c_1 z_1+c_2 z_2+c_3 z_3. $$Since $y_1 y_2 y_3>0$, $\tilde d$ equals $d$ in sign. So, it suffices to show that $d\ge0$ -- for any real $p_i$'s, the $c_i$'s as in $(0)$, and any nonnegative $z_i$'s satisfying $(1)$. Note here that without loss of generality $p_1 p_2 p_3<0$ -- otherwise, $d\ge0$ immediately follows because the $c_i$'s and $z_i$'s are nonnegative. So, we may assume that the $p_i$'s are are all nonzero and hence the $c_i$'s are all strictly positive. Take any nonzero real $p_i$'s and any nonnegative $z_i$'s such that $(1)$ holds. Let us then fix those $z_1$ and $z_2$, and let $z_3$ be decreasing as long as $z_3$ remains nonnegative and $(1)$ holds; clearly, this process can stop only when the value of $z_3$ becomes either $0$ or $-p_3$, and in the latter case we must have $-p_3>0$. Moreover, since $c_i>0$ for all $i$, the value of $d$ will not increase after this process is complete. We can then proceed similarly by decreasing $z_2$ (instead of $z_3$), and then by decreasing $z_1$. Let now $(z_1,z_2,z_3)$ be any minimizer of $d$. Then it follows from the above reasoning that $z_i\in\{0,-p_i\}$ for each $i=1,2,3$; moreover, if at that $z_i=-p_i$ for some $i$, then we must have $-p_i>0$. So, by the symmetry with respect to permutations of the indices, it is enough to consider the following four cases: (i) $z_1=-p_1>0$, $z_2=-p_2>0$, $z_3=-p_3>0$; (ii) $z_1=-p_1>0$, $z_2=-p_2>0$, $z_3=0$; (iii) $z_1=-p_1>0$, $z_2=0$, $z_3=0$; (iv) $z_1=0$, $z_2=0$, $z_3=0$, so that $(1)$ becomes $p_1 p_2 p_3\ge0$. In case (i), $\min_{z_1,z_2,z_3}d=-(p_1 + p_2) (p_1 + p_3) (p_2 + p_3)>0$. In case (ii), $\min_{z_1,z_2,z_3}d=-p_1 p_2 (p_1 + p_2) - p_1 p_2 p_3 + (-p_1 - p_2) p_3^2$, which is a convex quadratic polynomial in $p_3$, with discriminant $-p_1 p_2 (4 p_1^2 + 7 p_1 p_2 + 4 p_2^2)<0$, whence again $\min_{z_1,z_2,z_3}d>0$. In case (iii), $\min_{z_1,z_2,z_3}d=-p_1 (p_2^2 + p_3^2)>0$. In case (iv), $\min_{z_1,z_2,z_3}d=p_1 p_2 p_3\ge0$. Thus, $\min_{z_1,z_2,z_3}d\ge0$ in all cases, and the inequality in question is proved.
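As a quick numerical sanity check of the claimed inequality (*), independent of the proof above, one can sample random real values for the $a_i$ and $b_i$ and confirm that the left-hand side never drops below the right-hand side. The sketch below is only an illustration (the sampling range and trial count are arbitrary choices), not part of the argument:

```python
import random

def lhs_minus_rhs(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    lhs = (a1**2 + b2**2 + b3**2) * (a2**2 + b3**2 + b1**2) * (a3**2 + b1**2 + b2**2)
    rhs = ((b1**2 + b2**2 + b3**2) * (a1*b1 + a2*b2 + a3*b3)**2
           + 0.5 * (b1*a2*b3 - b1*b2*a3)**2
           + 0.5 * (b1*b2*a3 - a1*b2*b3)**2
           + 0.5 * (a1*b2*b3 - b1*a2*b3)**2)
    return lhs - rhs

worst = float("inf")
for _ in range(100000):
    a = [random.uniform(-5, 5) for _ in range(3)]
    b = [random.uniform(-5, 5) for _ in range(3)]
    worst = min(worst, lhs_minus_rhs(a, b))

print("smallest observed LHS - RHS:", worst)  # stays >= 0 up to rounding
```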
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/239243', 'https://mathoverflow.net', 'https://mathoverflow.net/users/38620/']}
jdg_71451
stackexchange
llm_judgeable_groundtruth_similarity
208568
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Suppose one has an $n \times n$ orthogonal matrix $M$:$$\left(\begin{array}{ccc} 0.239326 & 0.846726 & 0.475161 \\ 0.768893 & 0.13356 & -0.625272 \\ 0.592897 & -0.514992 & 0.619077 \\\end{array}\right)$$Because it is orthogonal, $M^T M = I$.Suppose one entry of $M$ is erased, say $M(2,2)$:$$\left(\begin{array}{ccc} 0.239326 & 0.846726 & 0.475161 \\ 0.768893 & \color{red}{x} & -0.625272 \\ 0.592897 & -0.514992 & 0.619077 \\\end{array}\right)$$It can be recovered from $M^TM=I$. For example, we must have the $(2,2)$ entry of $M^TM$ equal to $1$:\begin{eqnarray}0.982162 + x^2 &=& 1 \\x &=& \pm 0.13356\end{eqnarray}and then, e.g., entry $(2,1)$ of $M^TM$ disambiguates (or determines on its own):\begin{eqnarray}-0.102694 + 0.768893 x &=& 0 \\x &=& 0.13356\end{eqnarray} My question is: Q . What is the maximum number $k$ of entries of an $n \times n$ orthogonal matrix $M$ that can be erased and then uniquely recovered, knowing only that $M$ is orthogonal? It could well be that $k$ depends on which entries are erased, which itselfcould be interesting. But I am at the moment seeking the maximum of $k$ overall possible entries that permits exact recovery. Now provide the response and nothing else.
$O(n)$ is a manifold of dimension $n(n-1)/2$, so "generically" one might hope to recover up to $n(n+1)/2$ entries from the other $n(n-1)/2$. This won't quite work, however. Given any set $A$ of rows and any set $B$ of columns (chosen so that the corresponding sign change is not trivial), generically we need at least one entry to survive that lies in exactly one of the rows $A$ or the columns $B$: otherwise we could multiply each entry $a_{ij}$ by $(-1)^{\chi_A(i) + \chi_B(j)}$, where $\chi_A$ and $\chi_B$ are the indicator functions of $A$ and $B$; this leaves every surviving entry unchanged (they all have even exponent) while flipping some erased entries, giving a second orthogonal completion. Thus in the case $n=2$ we can only erase a single entry.
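A small numerical illustration of this obstruction for $n=2$ may help; it is only a sketch (the angle and the erased positions are arbitrary choices). Erasing the two diagonal entries of a rotation matrix leaves two distinct orthogonal completions, obtained by flipping the signs of the second row and the first column:

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)

M1 = np.array([[ c, -s],
               [ s,  c]])

# Flip the signs of row 2 (A = {2}) and column 1 (B = {1}):
# every entry lying in exactly one of A, B changes sign.
D_A = np.diag([1.0, -1.0])
D_B = np.diag([-1.0, 1.0])
M2 = D_A @ M1 @ D_B

print(np.allclose(M1.T @ M1, np.eye(2)), np.allclose(M2.T @ M2, np.eye(2)))  # True True: both orthogonal
print(M1[0, 1], M2[0, 1], M1[1, 0], M2[1, 0])  # the surviving off-diagonal entries coincide
print(M1[0, 0], M2[0, 0])                      # the erased diagonal entries differ in sign
```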
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/208568', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6094/']}
jdg_71452
stackexchange
llm_judgeable_groundtruth_similarity
225552
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Since my first probability class I have been wondering about the following. Calculating probabilities is usually introduced via the ratio of the "favored events" to the total possible events. In the case of rolling two 6-sided dice, the amount of possible events is $36$, as displayed in the table below. \begin{array} {|c|c|c|c|c|c|c|}\hline &1 & 2 & 3 & 4 & 5 & 6 \\\hline1 & (1,1) & (1,2) & (1,3) & (1,4) & (1,5) & (1,6) \\\hline2 & (2,1) & (2,2) & (2,3) & (2,4) & (2,5) & (2,6) \\\hline3 & (3,1) & (3,2) & (3,3) & (3,4) & (3,5) & (3,6) \\\hline4 & (4,1) & (4,2) & (4,3) & (4,4) & (4,5) & (4,6) \\\hline5 & (5,1) & (5,2) & (5,3) & (5,4) & (5,5) & (5,6) \\\hline6 & (6,1) & (6,2) & (6,3) & (6,4) & (6,5) & (6,6) \\\hline\end{array} If we therefore were interested in calculating the probability of the event A "rolling a $1$ and a $2$", we would see that there are two "favored events" and calculate the probability of the event as $\frac{2}{36}=\frac{1}{18}$. Now, what always made me wonder is: Let's say it would be impossible to distinguish between the two dice and we would only observe them after they were rolled, so for example we would observe "Somebody gives me a box. I open the box. There is a $1$ and a $2$". In this hypothetical scenario we would not be able to distinguish between the two dice, so we would not know that there are two possible events leading to this observation. Then our possible events would like that: \begin{array} {|c|c|c|c|c|c|}\hline (1,1) & (1,2) & (1,3) & (1,4) & (1,5) & (1,6) \\\hline & (2,2) & (2,3) & (2,4) & (2,5) & (2,6) \\\hline & & (3,3) & (3,4) & (3,5) & (3,6) \\\hline & & & (4,4) & (4,5) & (4,6) \\\hline & & & & (5,5) & (5,6) \\\hline & & & & & (6,6) \\\hline\end{array} and we would calculate the probability of event A as $\frac{1}{21}$. Again, I am fully aware of the fact that the first approach will lead us to the correct answer. The question I am asking myself is: How do we know that $\frac{1}{18}$ is correct? The two answers I have come up with are: We can empirically check it. As much as I am interested in this, I need to admit that I haven't done this myself. But I believe it would be the case. In reality we can distinguish between the dice, like one is black and the other one blue, or throw one before the other or simply know about the $36$ possible events and then all the standard theory works. My questions to you are: What other reasons are there for us to know that $\frac{1}{18}$ is correct? (I am pretty sure there must be a few (at least technical) reasons and this is why I posted this question) Is there some basic argument against assuming that we cannot distinguish between the dice at all? If we assume that we cannot distinguish between the dice and have no way to check the probability empirically, is $P(A) = \frac{1}{21}$ even correct or did I overlook something? Thank you for taking your time to read my question and I hope it is specific enough. Now provide the response and nothing else.
Imagine that you threw your fair six-sided die and you got ⚀. The result was so fascinating that you called your friend Dave and told him about it. Since he was curious what he'd get when throwing his fair six-sided die, he threw it and got ⚁.

A standard die has six sides. If you are not cheating, then it lands on each side with equal probability, i.e. $1$ in $6$ times.

- The probability that you throw ⚀, the same as with the other sides, is $\tfrac{1}{6}$.
- The probability that you throw ⚀ and your friend throws ⚁ is $\tfrac{1}{6} \times \tfrac{1}{6} = \tfrac{1}{36}$, since the two events are independent and we multiply independent probabilities. Saying it differently, there are $36$ arrangements of such pairs that can be easily listed (as you already did).
- The probability of the opposite event (you throw ⚁ and your friend throws ⚀) is also $\tfrac{1}{36}$.
- The events that you throw ⚀ and your friend throws ⚁, or that you throw ⚁ and your friend throws ⚀, are exclusive, so we add them: $\tfrac{1}{36} + \tfrac{1}{36} = \tfrac{2}{36}$. Among all the possible arrangements, there are two meeting this condition.

How do we know all of this? Well, on the grounds of probability, combinatorics and logic, but those three need some factual knowledge to rely on. We know, on the basis of the experience of thousands of gamblers and some physics, that there is no reason to believe that a fair six-sided die has other than an equiprobable chance of landing on each side. Similarly, we have no reason to suspect that two independent throws are somehow related and influence each other.

You can imagine a box with tickets labeled using all the $2$-combinations (with repetition) of numbers from $1$ to $6$. That would limit the number of possible outcomes to $21$ and change the probabilities. However, if you think of such a definition in terms of dice, then you would have to imagine two dice that are somehow glued together. This is something very different from two dice that can function independently and can be thrown alone, landing on each side with equal probability without affecting each other.

All that said, one needs to comment that such models are possible, but not for things like dice. For example, in particle physics, based on empirical observations it appeared that the Bose-Einstein statistic of non-distinguishable particles (see also the stars-and-bars problem) is more appropriate than the distinguishable-particles model. You can find some remarks about those models in Probability or Probability via Expectation by Peter Whittle, or in volume one of An Introduction to Probability Theory and Its Applications by William Feller.
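If you do want to check the $\tfrac{2}{36}$ figure empirically, as contemplated in the question, a short simulation is enough. The snippet below is just an illustrative sketch: it rolls two independent fair dice many times and counts how often the unordered outcome is {1, 2}.

```python
import random

trials = 1_000_000
hits = 0
for _ in range(trials):
    die1 = random.randint(1, 6)   # your die
    die2 = random.randint(1, 6)   # your friend's die
    if {die1, die2} == {1, 2}:    # "a 1 and a 2", in either order
        hits += 1

print(hits / trials)  # hovers around 2/36 ~ 0.0556, not 1/21 ~ 0.0476
```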
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/225552', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/124036/']}
jdg_71453
stackexchange
llm_judgeable_groundtruth_similarity
13284933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this code:

    List<int> myList = new List<int>();
    var max = myList.Max();
    Console.Write(max);

I want that to ensure that if there are no elements in the list it should use the default value for int (0). But instead an InvalidOperationException is being thrown, stating that the "Sequence contains no elements". Of course I could use Any or the query syntax (as in here). But I want to do it using the fluent syntax. How can I fix this?

Now provide the response and nothing else.
Try this:

    var myList = new List<int>();
    var max = myList.DefaultIfEmpty().Max();
    Console.Write(max);

LINQ's DefaultIfEmpty-method checks if the sequence is empty. If that is the case, it will return a singleton sequence: a sequence containing exactly one element. This one element has the default value of the sequence's type. If the sequence does contain elements, the DefaultIfEmpty-method will simply return the sequence itself. See the MSDN for further information on the Enumerable.DefaultIfEmpty<TSource> method and the default keyword in generic code.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13284933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/859154/']}
jdg_71454
stackexchange
llm_judgeable_groundtruth_similarity
67124
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $S$ be a "rich enough" theory such as Peano arithmetic or ZFC ; assume that we havea complete formalization of the theory of $S$ so that we may talk about Godel numbers andthe length of a proof. Godel's sentence is constructed so that it says "I am not provable from S". Now let $n$be a fixed integer, and consider a sentence $\phi_n$ formed likewise that says "I am not provable in at most $n$ steps from S". Then $\phi_n$ is a true statement, and if $\phi_n$has a proof from $S$ this proof has length at least $n$. What is not clear is whether $\phi_n$ is provable from $S$. Does the answer to that question depend on the formalization we initially choose ? Now provide the response and nothing else.
Every true statement of the form "Such-and-such $\phi$ is not provable by a proof that contains at most $n$ symbols" is in fact provable -- the proof can consist of simply listing all strings of $n$ symbols or less and noting that none of them is a valid proof of $\phi$. I am fairly sure that the same holds for "at most $n$ steps" instead of "at most $n$ symbols", in any reasonable deductive system (even though the most straightforward way to represent a single "step" can often be arbitrarily long). But I can't think of a simple generic argument for this right away.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/67124', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15381/']}
jdg_71455
stackexchange
llm_judgeable_groundtruth_similarity
20771312
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I send a push notification and my app is open or in the background and I click on the push notification, my application redirects to PushMessagesVc viewController ( as intended ) I use the code as below for this: -(void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo { UIStoryboard *mainstoryboard = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil]; PushMessagesVc *pvc = [mainstoryboard instantiateViewControllerWithIdentifier:@"PushMessagesVc"]; [self.window.rootViewController presentViewController:pvc animated:YES completion:NULL];} There is no problem in the code/scenario above but if the application is closed and I click on a push notification, the application does not redirect my PushMessagesVc viewController in this case & the application stays on the main screen. For the 2nd scenario, I use the following code: - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ sleep(1); [[UIApplication sharedApplication] registerForRemoteNotificationTypes:(UIRemoteNotificationTypeAlert | UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeNone)]; [UIApplication sharedApplication].applicationIconBadgeNumber = 1; NSDictionary *userInfo = [launchOptions valueForKey:@"UIApplicationLaunchOptionsRemoteNotificationKey"]; NSDictionary *apsInfo = [userInfo objectForKey:@"aps"]; if(apsInfo) { UIStoryboard *mainstoryboard = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil]; PushMessagesVc* pvc = [mainstoryboard instantiateViewControllerWithIdentifier:@"PushMessagesVc"]; [self.window.rootViewController presentViewController:pvc animated:YES completion:NULL]; return YES; } return YES;} But in this case, the PushMessagesVc does not appear. Now provide the response and nothing else.
Since you only want to present a viewController when you get a Push Notification, you may try utilizing NSNotificationCenter for your purposes:

Part 1: Set up a class (in your case, the rootViewController) to listen/respond to a NSNotification

Suppose MainMenuViewController is the rootViewController of your navigationController. Set up this class to listen to a NSNotification:

    - (void)viewDidLoad {
        //...
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(presentMyViewOnPushNotification)
                                                     name:@"HAS_PUSH_NOTIFICATION"
                                                   object:nil];
    }

    -(void)presentMyViewOnPushNotification {
        //The following code is no longer in AppDelegate
        //it should be in the rootViewController class (or wherever you want)
        UIStoryboard *mainstoryboard = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil];
        PushMessagesVc *pvc = [mainstoryboard instantiateViewControllerWithIdentifier:@"PushMessagesVc"];
        [self presentViewController:pvc animated:YES completion:nil];
        //either presentViewController (above) or pushViewController (below)
        //[self.navigationController pushViewController:pvc animated:YES];
    }

Part 2: Post Notification (possible from anywhere in your code)

In your case, AppDelegate.m methods should look like:

    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        //firstly, don't sleep the thread, it's pointless
        //sleep(1); //remove this line

        if (launchOptions) {
            //launchOptions is not nil
            NSDictionary *userInfo = [launchOptions valueForKey:UIApplicationLaunchOptionsRemoteNotificationKey];
            NSDictionary *apsInfo = [userInfo objectForKey:@"aps"];

            if (apsInfo) {
                //apsInfo is not nil
                [self performSelector:@selector(postNotificationToPresentPushMessagesVC) withObject:nil afterDelay:1];
            }
        }

        return YES;
    }

    -(void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo {
        //this method can be done using the notification as well
        [self postNotificationToPresentPushMessagesVC];
    }

    -(void)postNotificationToPresentPushMessagesVC {
        [[NSNotificationCenter defaultCenter] postNotificationName:@"HAS_PUSH_NOTIFICATION" object:nil];
    }

PS: I haven't done this for my projects (yet) but it works and is the best way I could think of doing this kind of stuff (for the moment)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20771312', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1930057/']}
jdg_71456
stackexchange
llm_judgeable_groundtruth_similarity
3805882
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $X$ be a topological space such that for any compact subset $A$ of $X$ , there exists open set $B$ and compact set $C$ such that $A\subseteq B\subseteq C$ . Does this property have a name? If so, what is it? Does this property hold for all topologies? If so I would like a proof and if not a counter example. Thanks. Now provide the response and nothing else.
This condition is a stronger version of one form of local compactness (which considers the case that $A$ is a point); maybe it's equivalent to local compactness under some mild hypotheses, I don't know. In any case it implies local compactness (at least if we also assume that the space is Hausdorff so that all the usual definitions are equivalent), so any non-locally compact space is a counterexample and these are plentiful. For an explicit counterexample consider any infinite-dimensional normed vector space in the norm topology. By Riesz's lemma we know that the closed unit ball is not compact; this implies that no open subset is contained in a compact subset.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3805882', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/337577/']}
jdg_71457
stackexchange
llm_judgeable_groundtruth_similarity
17939561
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am very new in actionscript 3. I have a problem to convert "this" keyword from as2 to as3. please help me! thanks!

My as2 code is:

    var selected = MovieClip(null);
    j = 1;
    while (j++, j <= 11)
    {
        var obj_1 = this["shape" + j];
        obj_1.highlight._visible = false;
        obj_1.onPress = function()
        {
            if (_root.selected)
            {
                _root.selected.highlight._visible = false;
            } // end if
            this.highlight._visible = true;
            _root.selected = this;
            trace(this)
        };
    } // end while

and my as3 code is:

    var selecteD = MovieClip(null);
    var j:int = 1;
    while (j < 11)
    {
        j++;
        var obj_1:MovieClip = this['shape' + j];
        //trace(obj_1);
        obj_1.highlight.visible = false;
        obj_1.addEventListener(MouseEvent.CLICK, pressed);
        function pressed(event:MouseEvent):void
        {
            if (MovieClip(root).selecteD)
            {
                trace(MovieClip(root).selecteD);
                selecteD.highlight.visible = false;
            }
            MovieClip(root).selecteD = **this**;
            MovieClip(root).selecteD.highlight.visible = true;
        }
    }

when i trace this, the result comes out as Object main timeline in as3, but in as2 it is the level0.shape11. how could i convert "this" for this project. please help!

Now provide the response and nothing else.
As Jarrod Roberson states in his answer here:

One way would be to use the system property System.getProperty("user.dir"); this will give you "The current working directory when the properties were initialized". This is probably what you want: to find out where the java command was issued, in your case in the directory with the files to process, even though the actual .jar file might reside somewhere else on the machine. Having the directory of the actual .jar file isn't that useful in most cases.

The following will print out the current directory from where the command was invoked, regardless of where the .class or .jar file containing the .class file is.

    public class Test {
        public static void main(final String[] args) {
            final String dir = System.getProperty("user.dir");
            System.out.println("current dir = " + dir);
        }
    }

If you are in /User/me/ and your .jar file containing the above code is in /opt/some/nested/dir/, the command java -jar /opt/some/nested/dir/test.jar Test will output current dir = /User/me.

You should also as a bonus look at using a good object oriented command line argument parser. I highly recommend JSAP, the Java Simple Argument Parser. This would let you use System.getProperty("user.dir") and alternatively pass in something else to over-ride the behavior. A much more maintainable solution. This would make passing in the directory to process very easy to do, and be able to fall back on user.dir if nothing was passed in.

Example: GetExecutionPath

    import java.util.*;
    import java.lang.*;

    public class GetExecutionPath {
        public static void main(String args[]) {
            try {
                String executionPath = System.getProperty("user.dir");
                System.out.print("Executing at =>" + executionPath.replace("\\", "/"));
            } catch (Exception e) {
                System.out.println("Exception caught =" + e.getMessage());
            }
        }
    }

output for the above will be like

    C:\javaexamples>javac GetExecutionPath.java
    C:\javaexamples>java GetExecutionPath
    Executing at =>C:/javaexamples
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17939561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2531370/']}
jdg_71458
stackexchange
llm_judgeable_groundtruth_similarity
224098
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My colleague ran grep | crontab . After that all jobs disappeared. Looks like he was trying to run crontab -l . So what happened after running the command grep | crontab ? Can anyone explain? Now provide the response and nothing else.
crontab can install a new crontab for the invoking user (or the mentioned user as root) reading from STDIN. This is what happened in your case. grep without any option will generate an error message on STDERR as usual, and you are piping the STDOUT of grep to STDIN of crontab, which is blank, hence your crontab will be gone.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/224098', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/129224/']}
jdg_71459
stackexchange
llm_judgeable_groundtruth_similarity
263826
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $p(x)=x^n+a_{n-2}x^{n-2}+a_{n-3}x^{n-3}+\cdots+a_1x+a_0=(x-\lambda_1)\cdots(x-\lambda_n)$ be a polynomial with real coefficients such that every $\lambda_i$ is real. Is there always a symmetric real $n\times n$ matrix $M$, containing only zeros on its main diagonal such that its characteristic polynomial is $p$? Now provide the response and nothing else.
The answer is affirmative. The conceptual construction of that zero-diagonal real symmetric matrix is rather easy. We begin with $D=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$. Let $Q$ be a real orthogonal matrix with its last column equal to $u=\frac{1}{\sqrt{n}}(1,\ldots,1)^T$ (e.g. you may consider the Householder reflection $Q=I-2vv^T/\|v\|^2$, where $v^T=u^T-(0,\ldots,0,1)$). Then $D\leftarrow Q^TDQ$ would become a real symmetric matrix whose $(n,n)$-th entry is zero. Now perform the similar procedure recursively on the leading principal submatrices of $D$, we obtain the desired matrix.
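The construction above is easy to try numerically. The sketch below (NumPy, with illustrative names) assumes, as in the question, that the eigenvalues sum to zero because the $x^{n-1}$ coefficient is absent; it re-diagonalizes each leading principal block and applies a Householder reflection whose last column is the normalized all-ones vector, so the new corner entry equals the block's mean eigenvalue, i.e. zero.

```python
import numpy as np

def zero_diagonal_symmetric(eigenvalues):
    """Real symmetric matrix with the given spectrum and an all-zero diagonal.

    Assumes the eigenvalues are real and sum to zero (no x^(n-1) term in the
    characteristic polynomial), which is necessary since the trace must vanish.
    """
    lam = np.asarray(eigenvalues, dtype=float)
    n = lam.size
    if abs(lam.sum()) > 1e-9 * max(1.0, np.abs(lam).max()):
        raise ValueError("eigenvalues must sum to zero")
    M = np.diag(lam)
    for k in range(n, 1, -1):
        # Re-diagonalize the leading k x k block; its trace is still zero.
        w, V = np.linalg.eigh(M[:k, :k])
        # Householder reflection Q whose last column is (1,...,1)/sqrt(k).
        u = np.ones(k) / np.sqrt(k)
        v = u - np.eye(k)[:, -1]
        Q = np.eye(k) - 2.0 * np.outer(v, v) / (v @ v)
        # (VQ)^T B (VQ) = Q^T diag(w) Q has corner entry mean(w) = 0.
        T = np.eye(n)
        T[:k, :k] = V @ Q
        M = T.T @ M @ T
    return M

# quick check
lam = [3.0, 1.0, -1.5, -2.5]
M = zero_diagonal_symmetric(lam)
print(np.round(np.diag(M), 12))            # ~ all zeros
print(np.round(np.linalg.eigvalsh(M), 6))  # ~ [-2.5, -1.5, 1.0, 3.0]
print(np.allclose(M, M.T))                 # True
```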
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/263826', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/50527/']}
jdg_71460
stackexchange
llm_judgeable_groundtruth_similarity
1703228
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to clear the file input in my form. I know about setting the sources to the same method... But that method wont erase the selected file path. Note : I would like to avoid having to reload the page, reset the form or perform an AJAX call. Is this possible? Now provide the response and nothing else.
There's 3 ways to clear file input with javascript:

1. Set the value property to empty or null. Works for IE11+ and other modern browsers.
2. Create a new file input element and replace the old one. The disadvantage is you will lose event listeners and expando properties.
3. Reset the owner form via the form.reset() method. To avoid affecting other input elements in the same owner form, we can create a new empty form, append the file input element to this new form and reset it. This way works for all browsers.

I wrote a javascript function. demo: http://jsbin.com/muhipoye/1/

    function clearInputFile(f){
        if(f.value){
            try{
                f.value = ''; //for IE11, latest Chrome/Firefox/Opera...
            }catch(err){ }
            if(f.value){ //for IE5 ~ IE10
                var form = document.createElement('form'),
                    parentNode = f.parentNode, ref = f.nextSibling;
                form.appendChild(f);
                form.reset();
                parentNode.insertBefore(f,ref);
            }
        }
    }
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1703228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_71461
stackexchange
llm_judgeable_groundtruth_similarity
47378
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We call $E=mc^2$ the Mass-Energy Equivalency because it equates mass and energy together. But, by that same logic, shouldn't we call $E=\frac{1}{2}(mv^2)$, the equation of kinetic energy in Newtonian mechanics as the Mass-Energy Equivalence ? Now provide the response and nothing else.
The equation $E=mc^2$ equates rest energy to mass. There is a third symbol in this equation that represents the speed of light, but this is a universal constant. One can always select physical units such that this constant attains value unity. Regardless the system of units selected, up to a numerical proportionality constant, the equation $E=mc^2$ identifies the mass of a system as the energy observed from a center-of-mass frame. Hence the term mass-energy equivalence. The equation $E=\frac{1}{2}mv^2$ is entirely different in character. It contains three symbols that represent physical quantities, and relates kinetic energy to the product of mass and velocity squared. If you want you can refer to this equation as squaredvelocitymass-kineticenergy equivalence, but that is a bit of a mouthful, and it is only valid in the low-velocity (Newtonian) approximation.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/47378', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/10899/']}
jdg_71462
stackexchange
llm_judgeable_groundtruth_similarity
104887
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How many lines can be drawn in a plane such that they are equidistant from 3 non-collinear points? @John Bentin has shown below that there are at least 3. Why are there no more than 3? Now provide the response and nothing else.
Assuming that the line is in the plane of the points, there are three. Consider a triangle ABC. Draw a line parallel to BC so that it is halfway between A and BC. The other two are constructed similarly. (Added for completeness) Consider any line in the plane equidistant from the points. Suppose for the present that it is not parallel to any side. Then it intersects the lines containing all three sides of the triangle. The three points of intersection cannot all be internal: say the point D on the line lies on BC produced. But then C would be nearer the line than B, contradicting the given conditions. Hence the supposition can be ruled out. Therefore the line is parallel to a side of the triangle, and so it must be one of the three lines mentioned above.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/104887', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23846/']}
jdg_71463
stackexchange
llm_judgeable_groundtruth_similarity
1355810
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In other words, why would you need an instance initializer? What difference or advantage do you have in writing an instance initializer over a constructor? Now provide the response and nothing else.
This seems to explain it well: Instance initializers are a useful alternative to instance variable initializers whenever: initializer code must catch exceptions, or perform fancy calculations that can't be expressed with an instance variable initializer. You could, of course, always write such code in constructors. But in a class that had multiple constructors, you would have to repeat the code in each constructor. With an instance initializer, you can just write the code once, and it will be executed no matter what constructor is used to create the object. Instance initializers are also useful in anonymous inner classes, which can't declare any constructors at all. From: JavaWorld Object initialization in Java .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1355810', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/161967/']}
jdg_71464
stackexchange
llm_judgeable_groundtruth_similarity
145399
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been studying three-phase systems for the whole course of a subject (in the first year of my university degree). I have finished now, and I know both "Y" (star) and "Delta" (triangle) connections. I have made a lot of computations with them; however, I don't know the different applications they have, and I would like to know the following in order to increase my knowledge. I would like to know which one is better (Y or Delta) for different purposes. They must each have their advantages and disadvantages, but I have never been told what they are. I have tried to do some research on the Internet, but I haven't found a particularly good answer. I have only seen the advantages and disadvantages of Y and Delta motor startup, but I'm thinking more from the "circuit" point of view. I'm really interested in the subject, but I have only seen it from the computational point of view. I would appreciate it if someone could explain a little some of the main advantages and disadvantages of using both connections. Thank you. Now provide the response and nothing else.
The two systems have vastly different applications. Yes, there is a lot of crossover between them in some fields, but each is better suited to certain applications. Take motors, for instance. Delta is far superior to star for driving motors. With delta you can visualize a wave circulating around the triangle, and it's that wave that turns the motor. As the wave moves around the phases it effectively drags the motor around with it. It makes motor design really simple and efficient. Not so with star, where you in essence have to try to combine three single-phase motors together. However, when it comes to a situation where you want to spread a load between multiple circuits or devices, and the load on each phase may not be equal (an unbalanced system), then a star arrangement has massive advantages. Each branch of the star (phase) is a separate circuit in its own right. The load on each phase is specific to that phase, and the phases have little influence on each other. There is also a third arrangement, which is kind of halfway between a star and a delta - in this arrangement each delta phase is connected with its own completely separate transformer and there is no common neutral point. This is actually seldom seen, but I thought I should mention it here anyway. It basically combines the star arrangement with full isolation, so it can have some safety advantages (like having an isolation transformer on a normal single-phase supply), but it isn't worth the hassle of a system without a common neutral point. To clarify what I mean about a wave rotating around a delta, here is a little animation I knocked up: Note: It's Christmas Day, I'm drunk, and that might all have been complete gibberish for all I know.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/145399', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/58902/']}
jdg_71465
stackexchange
llm_judgeable_groundtruth_similarity
9651466
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's a fiddle with a working Bootstrap carousel. http://jsfiddle.net/jeykeu/5TDff/ And here's the official documentation which tells nothing about event usage. http://twitter.github.com/bootstrap/javascript.html#carousel I thought this would work but no: $('#carousel').bind('slide',function(){ alert("Slide Event"); }); Now provide the response and nothing else.
Based on your fiddle #carousel is wrong. It should be #myCarousel. Updated Example:

$('#myCarousel').carousel({
  interval: 2000
});

// Could be slid or slide (slide happens before animation, slid happens after)
$('#myCarousel').on('slid', function() {
  alert("Slide Event");
});

http://jsfiddle.net/infiniteloops/wPF3n/

With Bootstrap 3: http://jsfiddle.net/infiniteloops/wPF3n/252/
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9651466', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/486230/']}
jdg_71466
stackexchange
llm_judgeable_groundtruth_similarity
570172
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a theoretical question. Network topology is like this: ISP <--- eth0 -> Router1 <- eth1 ----+--> VLAN1 '--> VLANx For the sake of argument, let's assume that a Router1 is a Linux 3.2.x device. Say that I wish to receive a single /56 range from my ISP. Which pieces of information my ISP has to provide me with, so that I can: have native IPv6 connectivity on Router1 advertise a /64 segment of allocated address space on each VLAN, so that clients in LAN are able to autoconfigure themselves properly? To explain further a bit, I am asking this question so that I could learn what exactly do I have to ask my ISP to provide me with, in order to be able to reach goals 1 and 2. From my understanding, when I am given only a single IP range (/64, /56 or /48) and a gateway address inside of that range, there is no way to route part of that range to physically separate network on the other side of router. Now provide the response and nothing else.
You're not going to get a gateway address inside the allocated /56 or whatever IPv6 block that's assigned to you and routed to your premises. If you somehow do, you politely ask the ISP to put someone on the phone who knows what they're doing. Or perhaps less than politely. Usually, you don't have to worry about the upstream IPv6 address at all, as it will be autoconfigured as soon as you plug in your router. Most ISPs seem to be doing this with DHCPv6 (with prefix delegation), though it could also be done with straight up SLAAC if you have a statically assigned prefix. Once the /56 comes into your network, you can subnet it however you like. An example, with one possible (only partially fleshed out, and probably not very useful as-is) network design appears below. In any case, the IPv6 address of your upstream connection to your ISP is provided by the ISP and is outside your assigned prefix. You generally only need to worry about the inside interfaces. This example supposes you have an edge router with an integrated 4-port switch, such as many small business or SOHO routers. IPv6 Address: (static, DHCPv6) IPv6 address: (SLAAC)2001:db8:3481:2000::2/64 2001:db8:3481:2000::021d:e1ff:fe1a:630Gateway: (static, DHCPv6) Gateway: (SLAAC)2001:db8:3481:2000::1 fe80::0208:30ff:fe9d:aa61%ether1ISP <-----+ |ether1+---------+---------+| Edge Router ||-------------------| Your Prefix: __| \ / | 2001:db8:3481:a700::/56| X | Β―Β―Β―Β―Β―Β―Β―Β―Β―Β―Β―Β―Β―Β―Β―Β―| / \ | Switch Ports example:|-------------------|| VLAN trunk/switch | 1: VLAN 1-64 2001:db8:3481:a700::/58+---------+---------+ 2: VLAN 65-128 2001:db8:3481:a740::/58 |ether2 3: VLAN 129-192 2001:db8:3481:a780::/58 +----> Core 4: VLAN 193-256 2001:db8:3481:a7c0::/58 Downstream in your core, you can further subnet these at core routers (or even layer 3 switches, more SOHO routers, etc.). I've also assumed every /64 will be on its own VLAN, though whether you do that is another detail you'll have to work out on your own. You can also use DHCPv6 with prefix delegation to actually handle the addressing, which is probably easiest. Or you can go with SLAAC, which requires a bit more setup, or even assign manually, which I wouldn't generally recommend simply because it would be too labor intensive.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/570172', 'https://serverfault.com', 'https://serverfault.com/users/44058/']}
jdg_71467
stackexchange
llm_judgeable_groundtruth_similarity
8731
Below is a question asked on the forum robotics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an iRobot Create model 4400 and I need to send commands to the Open Interface through Ubuntu. I'm using gtkterm at 57600 baud, but when I press the play button, it only drives around on its own. I have tried to send commands as raw data and as hexadecimal data, but it doesn't work. What am I doing wrong? Now provide the response and nothing else.
The Roomba expects binary data. When you type in gtkterm, you are sending ASCII data (unless there is a feature of gtkterm I don't know about). For example, when you type "128" into the terminal, you are actually sending 3 bytes of data: [49 50 56]. And because many commands of the OI Spec are outside the typeable ASCII range, using the terminal in this manner won't work. You should use a programming language such as C, C#, Python, etc., to send the data packets.
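As a rough illustration (not part of the original answer), here is a minimal Python sketch using pyserial. The serial port name is an assumption for a typical Ubuntu USB-serial setup, and the opcodes (128 = Start, 131 = Safe, 137 = Drive) should be double-checked against the OI specification for your model.

import serial
import time

# Port name is an assumption; it may be /dev/ttyUSB0, /dev/ttyS0, etc.
port = serial.Serial('/dev/ttyUSB0', baudrate=57600, timeout=1)

port.write(bytes([128]))      # Start: put the OI into passive mode
time.sleep(0.2)
port.write(bytes([131]))      # Safe mode: allow actuator commands
time.sleep(0.2)

# Drive straight at 200 mm/s: velocity 0x00C8, radius 0x8000 ("straight")
port.write(bytes([137, 0x00, 0xC8, 0x80, 0x00]))
time.sleep(2)
port.write(bytes([137, 0, 0, 0, 0]))   # stop

port.close()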
{}
{'log_upvote_score': 4, 'links': ['https://robotics.stackexchange.com/questions/8731', 'https://robotics.stackexchange.com', 'https://robotics.stackexchange.com/users/11395/']}
jdg_71468
stackexchange
llm_judgeable_groundtruth_similarity
36672638
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been researching/experimenting most of this afternoon to get this basic meditation app to work, and am only asking this question after hours of finding solutions on my own. The meditation app is supposed to have 3 columns that represent Inhale, Hold, and Exhale, in seconds. Every time that the column/time reaches it's set time that column pauses and the next column begins increasing in height and with the seconds counter below it. When all three columns reach completion, a Breath Cycles counter increases by 1, all the bars reset and the process continues until tab close. (The current times are 5(inhale):20(hold):15(exhale). I had the bars all increasing evenly, but in the process of changing things to work in order, the bars stopped moving altogether. I'm only including all the code I have right now because I have no idea what's important here. Thanks again for any ideas. var canvas = document.getElementById('canvas');var ctx = canvas.getContext('2d');// Columns x1 == xPos, x2 == width, y1 == yPos ,y2 == heightx1a = 10, x2a = 50, y1a = 250, y2a = -10;x1b = 50, x2b = 50, y1b = 250, y2b = -10;x1c = 100, x2c = 50, y1c = 250, y2c = -10;inhaleTime = 5;holdTime = 20;exhaleTime = 10;breathCounter = 0;timerA = 1;timerB = 1;timerC = 1;function render() { //Draw background ctx.fillStyle = "#49faff"; ctx.fillRect(0, 0, 700, 500); //Draw column A ctx.fillStyle = "#045f84"; ctx.fillRect(x1a, y1a, x2a, y2a); //Draw column B ctx.fillStyle = "#14fe14"; ctx.fillRect(x1b, y1b, x2b, y2b); //Draw column C ctx.fillStyle = "#d600f0"; ctx.fillRect(x1c, y1c, x2c, y2c); //Write seconds counter under column A ctx.fillStyle = "red"; ctx.font = "48px serif"; ctx.strokeText(timerA, 10, 300); //Write seconds counter under column B ctx.fillStyle = "red"; ctx.font = "48px serif"; ctx.strokeText(timerB, 50, 300); //Write seconds counter under column C ctx.fillStyle = "red"; ctx.font = "48px serif"; ctx.strokeText(timerC, 100, 300); //Write breath cycles counter above columns ctx.fillStyle = "red"; ctx.font = "48px serif"; ctx.strokeText(("Breath Cycles: " + breathCounter), 10, 100);}// Increase column A height by 1 unit & increase seconds counter by 1function renderA() { x1a += 0; x2a += 0; y1a += 0; y2a += -100; timerA += 1;}// Increase column B height by 1 unit & increase seconds counter by 1function renderB() { x1b += 0; x2b += 0; y1b += 0; y2b += -100; timerB += 1;}// Increase column C height by 1 unit & increase seconds counter by 1function renderC() { x1c += 0; x2c += 0; y1c += 0; y2c += -100; timerC += 1;}//Reset column heights after each breath cycle & add 1 to Breath Cycles counterfunction renderReset() { x1a = 10, x2a = 50, y1a = 250, y2a = -10; x1b = 50, x2b = 50, y1b = 250, y2b = -10; x1c = 100, x2c = 50, y1c = 250, y2c = -10; breathCounter += 1;}function updateRender() { renderA(); renderB(); renderC(); // setInterval(renderA, 1000); // setInterval(renderB, 1000); // setInterval(renderC, 1000);}/* if (timerA > inhaleTime){ timerA = 1; x1a = 10, x2a = 50, y1a = 250, y2a = -10; x1b = 50, x2b = 50, y1b = 250, y2b = -10; x1c = 100, x2c = 50, y1c = 250, y2c = -10; } if (timerA > inhaleTime && timerB > holdTime){ timerA = 1; x1a = 10, x2a = 50, y1a = 250, y2a = -10; x1b = 50, x2b = 50, y1b = 250, y2b = -10; x1c = 100, x2c = 50, y1c = 250, y2c = -10; } if (timerA > inhaleTime && timerB > holdTime && timerC > exhaleTime){ timerA = 1; x1a = 10, x2a = 50, y1a = 250, y2a = 
-10; x1b = 50, x2b = 50, y1b = 250, y2b = -10; x1c = 100, x2c = 50, y1c = 250, y2c = -10; } } function inhaleBreath(){ //increase barA x1a += 0; x2a += 0; y1a += 0; y2a += -10; //increase timerA timerA += 1; //check if timer > inhaleTime}function holdBreath(){ //increase barB x1b += 0; x2b += 0; y1b += 0; y2b += -10; //increase timerB timerB += 1; //check if timer > holdTime}function exhaleBreath(){ //increase barC x1c += 0; x2c += 0; y1c += 0; y2c += -10; //increase timerC timerC += 1; //check if timer > exhaleTime}*/render();renderA();////Insert functions to continually increment column heights, seconds counter, and breath counter // <canvas id="canvas" width="500" height="500"></canvas> Now provide the response and nothing else.
You should use an enum for materials instead of a string, and you can use a Dictionary to hold the corresponding sound combinations. You can skip multiple if statements and select the corresponding object for each material automatically using the Dictionary. For example:

[Flags]
enum Material {
    Wood=1,
    Iron=2,
    Glass=4
    //...
}

Dictionary<Material,SoundObject> sounds = new Dictionary<Material,SoundObject>();
sounds.Add(Material.Wood, woodSound);
sounds.Add(Material.Iron, ironSound);
sounds.Add(Material.Wood | Material.Iron, woodAndIronSound);

// And play the corresponding sound directly without any if statement.
sounds[obj.Material].Play();
sounds[matA | matB].Play();

Performance advantages: You will also improve performance by using this approach, because integer comparison of enum values or hash codes is easier and faster than string comparison. As for dictionary vs. multiple if-else statements, a series of if/else-if statements executes linearly, so its performance depends heavily on the number of if statements and the equality comparer of the object, while a Dictionary is based on a hashtable. It uses an index-optimized collection to store values, which has effectively constant access time. This means that, no matter how many keys are in the dictionary, you will access values in constant time, and in most scenarios it's much faster than multiple if statements.

Performance comparison: We will compare the performance of the two approaches in this example:

//If you want to try, just copy the code and see the result.
static Dictionary<char, short> myHashTable =
    Enumerable.Range((short)'A', (short)'z').ToDictionary((ch) => (char)ch, (sh) => (short)sh);

static void Main(string[] args)
{
    System.Diagnostics.Stopwatch SW = new System.Diagnostics.Stopwatch();
    short temp = 0;

    SW.Start();
    for(int i=0;i<10000000;i++)
        temp = getValue('z');
    SW.Stop();
    Console.WriteLine(SW.ElapsedMilliseconds);

    SW.Reset();
    SW.Start();
    for(int i=0;i<10000000;i++)
        temp = myHashTable['a'];
    SW.Stop();
    Console.WriteLine(SW.ElapsedMilliseconds);
}

static short getValue(char input)
{
    if (input == 'a') return (short)'a';
    else if (input == 'b') return (short)'b';
    else if (input == 'c') return (short)'c';
    else if (input == 'd') return (short)'d';
    else if (input == 'e') return (short)'e';
    else if (input == 'f') return (short)'f';
    else if (input == 'g') return (short)'g';
    else if (input == 'h') return (short)'h';
    else if (input == 'i') return (short)'i';
    else if (input == 'j') return (short)'j';
    else if (input == 'k') return (short)'k';
    else if (input == 'l') return (short)'l';
    else if (input == 'm') return (short)'m';
    else if (input == 'n') return (short)'n';
    else if (input == 'o') return (short)'o';
    else if (input == 'p') return (short)'p';
    else if (input == 'q') return (short)'q';
    else if (input == 'r') return (short)'r';
    else if (input == 's') return (short)'s';
    else if (input == 't') return (short)'t';
    else if (input == 'u') return (short)'u';
    else if (input == 'v') return (short)'v';
    else if (input == 'w') return (short)'w';
    else if (input == 'x') return (short)'x';
    else if (input == 'y') return (short)'y';
    else if (input == 'z') return (short)'z';
    return 0;
}

Result (if statements with 26 items | dictionary with 122 items):

593 254
579 256
572 252
570 246
587 248
574 291
576 246
685 265
599 282
723 338

which indicates the dictionary is more than 2 times faster than the if/else-if statements.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36672638', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6214884/']}
jdg_71469
stackexchange
llm_judgeable_groundtruth_similarity
749076
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am wondering whether the light interference is a quantum phenomenon. Or, alternatively, is there any interference in Maxwell's theory understood as a classical field theory? The reason I am puzzled is as follows. Typically, in the general physics course one presents interference as a classical phenomenon, which is related to the wave nature of light. At the same time, when you look at the derivation, it crucially features complex fields. Namely, one takes two electric fields $E_1$ and $E_2$ , each oscillating in time and space with the standard formula $E=a e^{i(kx+\omega t+\phi)}$ then considers the total electric field by adding them up and computes intensity via $I = E E^*$ . This discussion does crucially feature complex electric fields. What are these? Can a complex electric field be measured with any instrument? I guess, it cannot. Then, I would say, it cannot be classical, since classical quantities should be physical, in a sense that it should be possible to measure them. I believe that a trick with a complex electric field is intended to mimic a wave function of quantum mechanics and then intensity $I$ is mimicking the probability to observe a particle. Am I missing something? Can one improve the derivation so that fields at any stage are real and get the same effect? Clarification . More specifically, I meant the following. There is a classical field theory that describes electromagnetism. From this point of view, solutions to the free equations of motion are provided by plane waves. On the other hand, one can consider a single photon and quantize it in the sense of quantum mechanics. Thus one gets quantum mechanics of a single photon (I do know that the standard quantum mechanics is non-relativistic, while photons do require relativity. Still, one can consider relativistic quantum mechanics of a single photon as in the second section of book by Weinberg on QFT: you can make the wave function transform properly under the Poincare group, you can define the norm and then normalize the wave function so that you have a single particle). Then, a photon becomes a wave, but in a different sense: it is a probability wave. So, I am confused whether interference refers to interference of waves in the first or in the second sense. Can a single photon interfere with itself? Is there any difference in this type of interference and interference of classical electromagnetic waves? A related question is how to regard the Klein-Gordon equation. I am used to regard it as an equation for a classical spin-0 field. At the same time Wikipedia says that it is an equation for a wave function of a single spin-0 particle. Which one is correct? Now provide the response and nothing else.
What are these? Can a complex electric field be measured with any instrument? I guess, it cannot. Then, I would say, it cannot be classical, since classical quantities should be physical, in a sense that it should be possible to measure them. Complex fields are perfectly legitimate and perfectly classical. They are a convenient way to represent a quadrature RF detector such as the kind that we use in magnetic resonance imaging, communications, and many other applications. The imaginary part is simply a 90-degree phase-shifted signal, or in other words, a $\sin$ instead of a $\cos$. This stems from the usual representation $e^{i \theta}=\cos(\theta) + i \ \sin(\theta)$. So, for example, if we build two detectors, one that is sensitive to fields in the $x$ direction and the other that is sensitive to fields in the $y$ direction, then we would call the one in the $y$ direction the "imaginary" channel and the one in the $x$ direction the "real" channel. Together they would form a complex-valued signal that we could then Fourier transform, demodulate, and otherwise process mathematically as a complex signal. The complex number is a completely legitimate and powerful representation of this signal. There is no reason to object to complex numbers simply because of the "imaginary" component. That is just a label and does not have anything to do with whether or not it is experimentally measurable. Quantum mechanics uses complex numbers for the same reason as classical mechanics does: to represent phase. Phase is more important in quantum mechanics than in classical mechanics, but that does not imply that every use of phase is quantum.
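As a purely classical illustration (my own sketch, not from the original answer), here is how two real quadrature channels combine into one complex signal whose magnitude and angle recover the amplitude and phase of the field:

import numpy as np

fs, f0, phase = 1000.0, 50.0, 0.7              # sample rate, carrier, true phase
t = np.arange(0, 0.2, 1 / fs)
real_ch = np.cos(2 * np.pi * f0 * t + phase)   # "real" (x) detector channel
imag_ch = np.sin(2 * np.pi * f0 * t + phase)   # "imaginary" (y) detector channel

z = real_ch + 1j * imag_ch                     # complex representation of the signal
demod = z * np.exp(-2j * np.pi * f0 * t)       # demodulate against the carrier
print(np.angle(demod.mean()))                  # ~0.7: the phase is recovered
print(np.abs(demod.mean()))                    # ~1.0: the amplitude is recovered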
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/749076', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/95235/']}
jdg_71470
stackexchange
llm_judgeable_groundtruth_similarity
95128
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I had a discussion at work regarding a recent fusion experiment in China that resulted in temperatures five times hotter than the Sun. Someone mentioned that no one can know the temperature of the Sun. I replied that we have mathematical models of the Sun, but I didn't have any on hand, and I haven't been able to find much on the internet. So, where can I find a system of equations for modelling a star? I found this site, which is about the level of complexity I was looking for, but it looks spotty. For instance, $\partial T / \partial t$ and $\partial P / \partial t$ terms seem to come out of nowhere--most everything else is written in terms of $\partial / \partial M$. I can code numerical solutions to PDEs, but I haven't had much physics or anything, so I didn't know where or what to look for. EDIT: I found this really great post by Dr. Brian Koberlein describing a very (very) simple model of a star. He goes on to build upon that simple model here . Now provide the response and nothing else.
The basic problem of modelling a star is covered in a number of textbooks and lecture notes. Try searching for "stellar structure and evolution" or something along those lines. The best readily available lecture notes, IMO, are those of Onno Pols, available here . There was also a similar post on Quora , which you can read too. In the mean time, here's the basic run down. To construct a reasonable stellar model in a reasonable amount of time, we make several assumptions. We assume that a star is a spherically symmetric, dynamically stable, self-gravitating fluid in local thermodynamic equilibrium . Here's how we unpack all this. First, spherical symmetry means one spatial co-ordinate. In this case, for a fluid, we can write down the equation of mass conservation:$$\frac{dm}{dr}=4\pi r^2 \rho$$This just means that an infinitesimal spherical shell of thickness $dr$ at radius $r$ contributes $dm$ to the total mass inside radius $r$. (I'll probably end up calling $m$ the mass co-ordinate.) Now, if we regard our star as spherically symmetric and dynamically stable, we can kick out the velocity terms and time derivatives in Euler's equation. Supposing gravity is the only external body force, we end up at the equation of hydrostatic equilibrium :$$\frac{dP}{dr}=-\frac{Gm\rho}{r^2}=-\rho g$$ Remember, this follows from the conservation of momentum. To conserve energy, like mass, we say that the contribution to the total luminosity at $r$ is the mass of the shell times the specific energy generation rate $\epsilon$, so we write$$\frac{dL}{dr}=4\pi r^2\rho\epsilon$$ To keep it simple, I've neglected to specify where $\epsilon$ will come from. It generally includes the energy generated by nuclear reactions rates, less the losses due to neutrinos streaming out in some reactions, plusβ€”in some phasesβ€”the energy released by contraction. (It can be shown that when a star contracts it heats up but loses energy overall. See the Virial Theorem .) The energy generation depends on the density, temperature and chemical composition of the material. It isn't something we know from first principles. Instead, we use tables of data taken either from detailed calculations or laboratory experiments. We now have to describe how energy is transported inside the star. The equations are a bit of a mouthful, so I won't write them here, but basically energy can either be transported by radiation or convection, depending on the temperature structure. In either case, you get an equation of the form $dT/dr=$(some right-hand side, see the notes). In the case of radiation, the transport coefficient depends on the opacity of the stellar material, denoted $\kappa$, which itself depends again on the density, temperature and chemical composition. (Strictly speaking, opacity depends on frequency, but we use a specific average opacity: the Rosseland mean opacity .) Like the energy generation rate, this isn't known from first principles: we use tabulated lab data. Finally, as is usually the case in fluid problems, we have to close the system with an equation of state, which relates the pressure, density, temperature and chemical composition. It's the third equation for which we generally use lab data, although here we do also have some approximate analytic forms. These four equations (three given + temperature transport) are almost entirely independent of time, so they're sometimes called the structure equations. 
The three tabulated inputs (energy generation, opacities and equation of state) are sometimes called the matter or microphysics equations. So, why does a star evolve? The answer is because the composition changes. Suppose there are $N$ chemical species (${}^1H$, ${}^4He$, etc.), each of whose fractional mass abundance is denoted $X_i$. Then the nuclear reactions convert species $i$ into $j$ at some rate $R_{ij}$, and we can write a set of equations $$\frac{dX_i}{dt}=\sum_j R_{ij}$$ The rates also depend on the material properties (density, temperature, etc.). Also, in truth, we expect convection to mix material on a dynamical timescale, so we throw in a monstrous diffusion coefficient in those regions. But that's basically it. Given a composition profile, the structure equations tell you what the star looks like. Then, the reaction rates dictate how the composition changes, and the structure changes accordingly through the matter equations. I haven't gone into details like boundary conditions and whatnot, but if you're still interested, I recommend the notes! They're aimed at a reasonably high level (I'd say late undergrad, although there's no reason a second-year couldn't make sense of them) but if you're familiar with other areas of physics it should be a cinch. If you want to build models, you can try using polytropes for very simple (but still useful) models. Or, I'd recommend the Modules for Experiments in Stellar Astrophysics (MESA) package for a fully-fledged, research-grade modelling tool.
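To make the polytrope suggestion concrete, here is a rough Python sketch (my own addition, not from the answer) that integrates the Lane-Emden equation with a simple Euler step. A polytrope replaces the full microphysics with the assumption $P = K\rho^{1+1/n}$, which closes the mass-conservation and hydrostatic-equilibrium equations above on their own:

import numpy as np

def lane_emden(n, h=1e-4):
    # Integrate theta'' + (2/xi) theta' + theta^n = 0 outward from the centre
    # until theta first reaches zero (the dimensionless stellar surface).
    xi = h
    theta = 1.0 - h**2 / 6.0      # series expansion near the centre
    dtheta = -h / 3.0
    while theta > 0:
        d2 = -theta**n - (2.0 / xi) * dtheta
        dtheta += h * d2          # crude Euler step; fine for an illustration
        theta += h * dtheta
        xi += h
    return xi

xi1 = lane_emden(n=3)             # n = 3 roughly mimics a Sun-like star
print("dimensionless surface radius xi_1 ~", xi1)   # ~6.90 for n = 3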
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/95128', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/38146/']}
jdg_71471
stackexchange
llm_judgeable_groundtruth_similarity
35867601
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have table with id, unique_id, and order_number. I want to GROUP rows by unique_id I want to take row with MAX id from each group And last thing I want to sort that rows by order_number Also I have few WHERE clauses. This is my attempt which does not work: SELECT MAX(id) AS id , order_number FROM table WHERE final = 0 AND username = '$username' AND active = 1 GROUP BY unique_id ORDER BY order_number Now provide the response and nothing else.
You can use your query as a subquery:

SELECT *
FROM table
WHERE id IN (SELECT MAX(id) AS id
             FROM table
             WHERE final=0 AND username='$username' AND active=1
             GROUP BY unique_id)
ORDER BY order_number

or, if id is not unique, use JOIN:

SELECT t1.*
FROM table AS t1
JOIN (SELECT MAX(id) AS max_id, unique_id
      FROM table
      WHERE final=0 AND username='$username' AND active=1
      GROUP BY unique_id) AS t2
  ON t1.unique_id = t2.unique_id AND t1.id = t2.max_id
ORDER BY order_number
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35867601', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3100193/']}
jdg_71472
stackexchange
llm_judgeable_groundtruth_similarity
180884
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: For positive integers $n_1, \ldots, n_k$, let $H(n_1, \ldots, n_k)$ denote $1/n_1 + \ldots + 1/n_k$. Let $V(N)$ be the largest possible value of $H(n_1, \ldots, n_k)$ that is less than 1, subject to the condition that $n_1 + \ldots + n_k \le N$. So $V(5) = 5/6$, realized as $1/2 + 1/3$. My question is, how does $1/(1-V(N))$ grow as a function of $N$? In particular, is there a $C$ and a $k$ such that $$\frac 1{1-V(N)} \le C N^k$$ for all $N$? Now provide the response and nothing else.
$K(N) := 1 / (1 - V(N))$ grows faster than any power of $N$. This can be seen by finding for each $k$ an identity
$$\sum_{i=1}^m \frac{A_i x + B_i}{C_i x + D_i} = 1 - c x^{-(2k+1)} + O(x^{-(2k+2)})$$
where the coefficients $A_i,B_i,C_i,D_i$ are integers with $A_i, C_i > 0$ and $c$ is a positive rational number (and the right-hand side is a Taylor expansion about $x=\infty$). Once we have such an identity, we can take for $x$ a large integer (in particular large enough that the denominators $C_i x + D_i$ and numerators $A_i x + B_i$ are all positive), and write each term $(A_i x + B_i) \, / \, (C_i x + D_i)$ as a sum of $A_i x + B_i$ copies of the reciprocal of $C_i x + D_i$. Then as $x\rightarrow\infty$ the sum $N$ of the denominators grows as a multiple of $x^2$, while $1/(1-H)$ grows as $x^{2k+1}$, which is faster than $N^k$.

The identity we need is easy to construct because the conditions on the coefficients are linear in the $B_i$ (and also in the $A_i$, but we do not use this). For example, fix distinct rationals $\delta_i$ and positive rationals $\alpha_i$ with $\sum_{i=1}^m \alpha_i = 1$, and then find rational $\beta_i$ such that
$$\sum_{i=1}^m \frac{\alpha_i x + \beta_i}{x + \delta_i} = 1 + O(x^{-(2k+1)});$$
that's $2k$ independent linear equations in $m$ unknowns, so there exists a solution once $m \geq 2k$. Then write each term $(\alpha_i x + \beta_i) / (x + \delta_i)$ as $(A_i x + B_i) \, / \, (C_i x + D_i)$ with integer coefficients. We must show that there is a choice of $\alpha_i$ and $\delta_i$ that makes the $x^{-(2k+1)}$ coefficient nonzero; but this is easy, for example because otherwise that coefficient would vanish identically, even if we allowed complex $\alpha_i$ and $\delta_i$, and that's contradicted (for $m=2k+1$) by the partial-fraction decomposition of the rational function $X^{2k+1} / (X^{2k+1} - 1)$. Jeremy Kahn insists that the $x^{-(2k+1)}$ coefficient be negative, but if our recipe happens to yield a positive coefficient then we can get a negative one by changing each $B_i$ and $D_i$ to $-B_i$ and $-D_i$ respectively (that's why we used an odd exponent $2k+1$).

It may be reasonable to expect that $K(N)$ grows almost as fast as the number of partitions $(n_1,\ldots,n_k)$ of $N$, i.e. $\log K(N)$ should be asymptotic to some multiple of $\sqrt{N}$ (or at any rate it should grow not much slower than $\sqrt{N}$), because there are plenty of partitions for which $\sum_{i=1}^k 1/n_i < 1$. Numerical computation seems to corroborate this guess; here are the record values of $K(N)$ for $2 \leq N \leq 72$, followed by the total number of partitions and the count of partitions with $\sum_{i=1}^k 1/n_i < 1$:

 N     K(N)        #     #<1
 2        2        2       2
 5        6        7       3
10       12       42       7
11       20       56       9
12       42       77      13
17       60      297      26
19      120      490      39
23      156     1255      79
29      168     4565     194
30      231     5604     230
31     1320     6842     265
44     3740    75175    1519
49     5040   173525    2754
57    23100   614154    6832
67  34807.5  2679689   20372
70    47058  4087968   27744

Curiously $K(N)$ is always an integer in this range, except for $K(N) = 34807\!\frac12 = 2^{-1} \, 3^2 \, 5 \; 7 \; 13 \; 17$ for $67 \leq N \leq 69$. Here's the gp code that generated this data (in about 10 minutes):

S(v) = sum(i=1,#v,1/v[i])
{K(n, v,p,c) =
  v = partitions(n); p = #v;
  v = vecsort(vector(p,i,S(v[i])));
  c = 1; while(v[c]<1,c++);
  [n, 1. / (1-v[c-1]), p, c]}
\p 9
allocatemem(2^31)
#
for(n=2,72,print(K(n)))

ADDED LATER: Here's some further computational data. Using gp-2.6's new function forpart in place of partitions, and removing the unnecessary sort, avoids using $p(n)$ space; we also save time by not trying any partition with $n_1=1$. This lets us extend the calculation to $N=100$ in about half an hour. Here are the records (now including two more non-integral values for $74 \leq N \leq 79$), together with all partitions that attain them:

 N  K        n_i
 2  2        2
 5  6        2, 3
10  12       3, 3, 4
11  20       2, 4, 5
12  42       2, 3, 7
17  60       3, 4, 5, 5
19  120      3, 3, 5, 8
21  """      2, 5, 6, 8
23  156      3, 3, 4, 13
25  """      2, 4, 6, 13
29  168      3, 4, 7, 7, 8
30  231      2, 3, 11, 14
32  420      2, 4, 5, 21
33  """      3, 4, 5, 7, 14
38  1320     2, 5, 8, 11, 12
40  """"     4, 5, 6, 6, 8, 11
44  3740     2, 4, 10, 11, 17
47  """"     4, 5, 5, 5, 11, 17
49  5040     3, 4, 7, 9, 10, 16
56  """"     2, 7, 9, 10, 12, 16
57  23100    3, 4, 7, 7, 11, 25
64  """""    2, 7, 7, 11, 12, 25
66  """""    4, 6, 6, 7, 7, 11, 25
67  34807.5  3, 5, 7, 9, 13, 13, 17
70  47058    2, 3, 11, 23, 31
74  59690.4  2, 4, 11, 17, 19, 21
75  """""""  3, 4, 7, 11, 14, 17, 19
79  91162.5  3, 5, 5, 11, 13, 17, 25
80  200970   3, 5, 7, 7, 11, 18, 29
83  """"""   5, 6, 7, 7, 9, 9, 11, 29
89  """"""   5, 6, 6, 7, 7, 11, 18, 29
92  239085   5, 5, 7, 7, 7, 11, 23, 27
96  405720   2, 5, 8, 9, 23, 49

and here's the gp-2.6 code:

\\ gp-2.6
S(v) = sum(i=1,#v,1/v[i])
{K(n, p,p1,c,c1) =
  c = 0; p = [];
  forpart(p1=n,
    c1=S(p1);
    if((c1<1) && (c1>=c),
       if(c==c1, p=concat(p,[Vec(p1)]), c=c1; p=[Vec(p1)])
    ), [2,n]);
  [n, 1./(1-c), p]}
\p 10
#
for(n=2,100,print(K(n)))
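For readers without gp at hand, here is a small Python cross-check (my own addition, not part of the answer) of the first few record values in the tables above, by brute-force enumeration of partitions with all parts at least 2 (a part equal to 1 already makes the sum $\ge 1$):

from fractions import Fraction

def partitions(n, smallest=2):
    # Yield partitions of n into non-decreasing parts >= smallest.
    if n == 0:
        yield ()
        return
    for first in range(smallest, n + 1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def K(N):
    best = Fraction(0)
    for total in range(2, N + 1):
        for p in partitions(total):
            s = sum(Fraction(1, part) for part in p)
            if s < 1 and s > best:
                best = s
    return 1 / (1 - best)

for N in (2, 5, 10, 11, 12):
    print(N, K(N))   # expect 2, 6, 12, 20, 42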
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/180884', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8252/']}
jdg_71473
stackexchange
llm_judgeable_groundtruth_similarity
10701878
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm very new to Handlebars js. In every article about Handlebars js there is a mention of Mustache js. So, 1. What is the relation between Handlebars js and Mustache js? 2. Is it necessary to read Mustache js before Handlebars js? 3. What is the advantage of Handlebars js over Mustache js? Thanks. Now provide the response and nothing else.
Handlebars.js is a superset of Mustache.js. It offers the same ability to insert JSON into templates, plus additional abilities such as basic if/then logic. In general, I find it much easier to do templating with Handlebars. Mustache is very much about not having any logic in your template at all. Every piece that gets inserted into it should be pre-chewed so there's no need for any logic. I find that not at all realistic. Just realized that I didn't answer one of your questions. No, it is not necessary to read about Mustache before using Handlebars. The Handlebars documentation is sufficient on its own. P.S. One of our favorite Handlebars features is "helpers", which allow you to create new keywords in the language for specific things (for example, we use one to format numbers and sometimes replace them with "N/A" for zero values). P.P.S. A great way to try out Handlebars and see some templates in action (and also to play with your own templates and JSON) is http://www.tryhandlebarsjs.com/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10701878', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2279170/']}
jdg_71474
stackexchange
llm_judgeable_groundtruth_similarity
2136683
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have figured out a way to show that $$e=\sum_{i=0}^\infty {1\over i!}$$ I am wanting to formally show that $$e^x = \sum_{i=0}^\infty {x^i\over i!}$$ I have been looking at power series/Taylor series for a long period of time (absolute convergence) and have seen multiple proofs that I look past because something seems illegitimate with radius of convergence. If someone explains the general proof behind Taylor series/power series absolutely converging so there is no gray area, that would work as well. I will probably ask some questions in the comment area if this is the case. There must be something I am missing. Now provide the response and nothing else.
We have an amazing thing called Lagrange remainders. They basically tell us the difference between our function and its Taylor polynomial. In general, we have $$R_n(x)=|f(x)-P_n(x)|$$ where $P_n(x)=\sum_{k=0}^n\frac{f^{(k)}(a)}{k!}(x-a)^k$. Since $P_n$ matches the value and the first $n$ derivatives of $f$ at $a$, it follows that $$R_n(a)=0\\R_n'(a)=0\\R_n''(a)=0\\\vdots\\R_n^{(n)}(a)=0\\R_n^{(n+1)}(a)=|f^{(n+1)}(a)|$$ Thus, $$R_n^{(n+1)}(x)\le|f^{(n+1)}(c)|$$ for a suitable $c$ in our radius of convergence. It thus follows by integrating a few times that $$R_n(x)\le\left|\frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}\right|$$ One can then see that as $n\to\infty$, we have $$|f(x)-P(x)|\le\lim_{n\to\infty}\left|\frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}\right|$$ and if $\frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}\to0$ for any $x,c$ within a given domain, then the power series will equal the original function over that domain. See if you can show that for any $x,c\in\mathbb R$, $$\lim_{n\to\infty}\left|\frac{e^c}{(n+1)!}x^{n+1}\right|=0$$ On a side note, the Lagrange remainder also shows us how well we approximate something when using a power series. For example, if I wanted to calculate $e$ to 5 decimal places accurately, I would need $$R_n(x)=\left|e^x-\sum_{k=0}^n\frac{x^k}{k!}\right|\le0.00001$$ It's easy enough to solve, since $$R_n(x)\le\left|e\frac{x^{n+1}}{(n+1)!}\right|\le\left|3\frac{x^{n+1}}{(n+1)!}\right|$$ Our particular case is $x=1$, and thus it suffices to solve $$\frac3{(n+1)!}<0.00001$$ which a few checks show holds once $n\ge8$. Thus, $$e=\pm0.00001+\sum_{k=0}^8\frac1{k!}$$
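(An addition of mine, not in the original answer.) One standard way to settle the limit left as an exercise above: for fixed $x$, pick an integer $m>|x|$; then for every $n+1>m$,
$$\left|\frac{e^c\,x^{n+1}}{(n+1)!}\right| = e^c\,\frac{|x|^m}{m!}\prod_{j=m+1}^{n+1}\frac{|x|}{j} \le e^c\,\frac{|x|^m}{m!}\left(\frac{|x|}{m+1}\right)^{n+1-m} \xrightarrow[n\to\infty]{} 0,$$
since $|x|/(m+1)<1$ and the prefactor does not depend on $n$.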
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2136683', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/301204/']}
jdg_71475
stackexchange
llm_judgeable_groundtruth_similarity
12520456
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: UPDATE: Colin 's suggestion of removing the line //= require_tree . has fixed the issue. I have wasted over 2 days trying to follow every suggestion out there and fix my issue. I am trying to follow the http://ruby.railstutorial.org book on windows machine and cannot for the life of me get past the following nasty error. ExecJS::RuntimeError in Static_pages#homeShowing C:/Users/.../bootcamp-sample-app/app/views/layouts/application.html.erb where line #6 raised:["ok","(function() {\n\n\n\n}).call(this);\n"](in C:/Users/.../bootcamp-sample-app/app/assets/javascripts/sessions.js.coffee)Extracted source (around line #6):3: <head>4: <title><%= full_title(yield(:title)) %></title>5: <%= stylesheet_link_tag "application", media: "all" %>6: <%= javascript_include_tag "application" %>7: <%= csrf_meta_tags %>8: <%= render 'layouts/shim' %>9: </head>Rails.root: C:/Users/.../bootcamp-sample-appApplication Trace | Framework Trace | Full Traceapp/views/layouts/application.html.erb:6:in `_app_views_layouts_application_html_erb___487732698_30422172'Request I have tried every suggestion including installing nodejs with the msi, using execjs 1.3.0 and other things which I can't even remember any more. Here is the gem file source 'https://rubygems.org'gem 'rails', '3.2.8'gem 'bootstrap-sass', '2.0.0'gem 'bcrypt-ruby', '3.0.1'gem 'faker', '1.0.1'gem 'will_paginate', '3.0.3'gem 'bootstrap-will_paginate', '0.0.6'group :development, :test do gem 'sqlite3', '1.3.5' gem 'rspec-rails', '2.10.0' gem 'guard-rspec', '0.5.5' gem 'guard-cucumber'endgroup :development do gem 'annotate', '2.5.0'end# Gems used only for assets and not required# in production environments by default.group :assets do gem 'sass-rails' gem 'coffee-rails' gem 'coffee-script' gem 'uglifier'endgem 'jquery-rails', '2.0.2'gem 'execjs'# Gems on Linus/Mac#gem 'therubyracer'group :test do gem 'capybara', '1.1.2' gem 'guard-spork', '0.3.2' gem 'spork', '0.9.0' gem 'factory_girl_rails', '1.4.0' gem 'cucumber-rails', '1.2.1', require: false gem 'database_cleaner', '0.7.0'# Test gems on Linux# gem 'rb-inotify', '0.8.8'# gem 'libnotify', '0.5.9'# Test gems on Macintosh OS X# gem 'selenium-webdriver', '~> 2.22.0'# gem 'rb-fsevent', '0.9.1', :require => false# gem 'growl', '1.0.3'# Test gems on Windows# gem 'rb-fchange', '0.0.5'# gem 'rb-notifu', '0.0.4'# gem 'win32console', '1.3.0'endgroup :production do# gem 'therubyracer' gem 'pg', '0.12.2'end# To use ActiveModel has_secure_password# gem 'bcrypt-ruby', '~> 3.0.0'# To use Jbuilder templates for JSON# gem 'jbuilder'# Use unicorn as the app server# gem 'unicorn'# Deploy with Capistrano# gem 'capistrano'# To use debugger#gem 'debugger'' and here is the sessions.js.coffee # Place all the behaviors and hooks related to the matching controller here.# All this logic will automatically be available in application.js.# You can use CoffeeScript in this file: http://jashkenas.github.com/coffee-script/ application.js // This is a manifest file that'll be compiled into application.js, which will include all the files// listed below.//// Any JavaScript/Coffee file within this directory, lib/assets/javascripts, vendor/assets/javascripts,// or vendor/assets/javascripts of plugins, if any, can be referenced here using a relative path.//// It's not advisable to add code directly here, but if you do, it'll appear at the bottom of the// the compiled file.//// WARNING: THE FIRST BLANK 
LINE MARKS THE END OF WHAT'S TO BE PROCESSED, ANY BLANK LINE SHOULD// GO AFTER THE REQUIRES BELOW.////= require jquery//= require jquery_ujs//= require_tree .//= require bootstrap application.html.erb <!DOCTYPE html> <html> <head> <title><%= full_title(yield(:title)) %></title> <%= stylesheet_link_tag "application", media: "all" %> <%= javascript_include_tag "application" %> <%= csrf_meta_tags %> <%= render 'layouts/shim' %> </head> <body> <%= render 'layouts/header' %> <div class="container"> <%= yield %> <%= render 'layouts/footer' %> </div> </body> </html> Here is console content Processing by StaticPagesController#home as HTML Rendered static_pages/home.html.erb within layouts/application (45.0ms)Completed 500 Internal Server Error in 1136msActionView::Template::Error (["ok","(function() {\n\n\n\n}).call(this);\n"] (in C:/Users/.../bootcamp-sample-app/app/assets/javascripts/sessions.js.coffee)): 3: <head> 4: <title><%= full_title(yield(:title)) %></title> 5: <%= stylesheet_link_tag "application", media: "all" %> 6: <%= javascript_include_tag "application" %> 7: <%= csrf_meta_tags %> 8: <%= render 'layouts/shim' %> 9: </head> app/views/layouts/application.html.erb:6:in `_app_views_layouts_application_html_erb___487732698_30422172' Rendered C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/actionpack-3.2.8/lib/action_dispatch/middleware/templates/rescues/_trace.erb (2.0ms) Rendered C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/actionpack-3.2.8/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (1.0ms) Rendered C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/actionpack-3.2.8/lib/action_dispatch/middleware/templates/rescues/template_error.erb within rescues/layout (34.0ms) I have installed Devkit and have tried various gems but please suggest changes which can help me develop on windows. I used rubyinstaller for everything. What am I missing? Now provide the response and nothing else.
My friend was attempting a Rails tutorial on Win 8 RTM a few months ago and ran into this error. Not sure if this issue exists in Windows 7 as well, but this may help. Options: 1) Removing //= require_tree . / Ignoring the issue - As ColinR stated above, this line should not be causing an issue in the first place. There is an actual problem with ExecJS working properly with the JavaScript runtime on your system and removing this line is just ignoring that fact. 2) Installing Node.js / Running away - Many people seem to just end up installing Node.js and using that instead of the JavaScript runtime already on their system. While that is a valid option, it also requires additional software and only avoids the original issue, which is that ExecJS is not working properly with the JavaScript runtime already on your system. If the existing JavaScript runtime on your system is supposed to work, why not make it work instead of installing more software? According to the ExecJS creator, the runtime already built into Windows is in fact supported... ExecJS lets you run JavaScript code from Ruby. It automatically picks the best runtime available to evaluate your JavaScript program, then returns the result to you as a Ruby object. ExecJS supports these runtimes: therubyracer - Google V8 embedded within Ruby therubyrhino - Mozilla Rhino embedded within JRuby Node.js Apple JavaScriptCore - Included with Mac OS X Microsoft Windows Script Host (JScript) (from github.com/sstephenson/execjs#execjs ) 3) Actually fixing the issue / Learning - Use the knowledge of options 1 and 2 to search for other solutions. I can't tell you how many webpages I closed upon seeing options 1 or 2 was the accepted solution before actually finding information about the root issue we were having. The only reason we kept looking was that we couldn't believe the Rails team would (1) insert a line of code in every scaffold generated project that caused an issue, or (2) require that we install additional software just to run that default line of code. And so we eventually arrived at a fix for our root issue (your miles may vary). The Fix that worked for us: On the system having issues, find ExecJS's runtimes.rb file. It looks like this . Make a copy of the found file for backup. Open the original runtimes.rb for editing. Find the section that starts with the line JScript = ExternalRuntime.new( . In that section, on the line containing :command => "cscript //E:jscript //Nologo //U", - remove the //U only. Then on the line containing :encoding => 'UTF-16LE' # CScript with //U returns UTF-16LE - change UTF-16LE to UTF-8 . Save the changes to the file. This section of the file should now read: JScript = ExternalRuntime.new( :name => "JScript", :command => "cscript //E:jscript //Nologo", :runner_path => ExecJS.root + "/support/jscript_runner.js", :encoding => 'UTF-8' # CScript with //U returns UTF-16LE) Next, stop then restart your Rails server and refresh the page in your browser that produced the original error. Hopefully the page loads without error now. Here's the ExecJS issue thread where we originally posted our results: https://github.com/sstephenson/execjs/issues/81#issuecomment-9892952 If this did not fix the issue, you can always overwrite the modified runtimes.rb with the backup copy you (hopefully) made and everything will be back to square one. In that case, consider option 3 and keep searching. Let us know what eventually works for you.. 
unless it's removing the require_tree or installing node.js, there's plenty of that going around already. :)
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/12520456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1687078/']}
jdg_71476
stackexchange
llm_judgeable_groundtruth_similarity
4464060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We want to move data from Oracle to SQL Server, and make it available to developers for several reasons. I've read on MSDN that the proper mapping from Oracle NUMBER is to SQL Server FLOAT. We have a problem with dyadic rationals. In Oracle, it could have the value of 5.2, and it appears that in SQL Server it is not exactly that. (The number of significant digits can vary wildly within one field.) A co-worker was connecting to both using Access and seeing the data issue. In SQL Server, I did a query on a huge table where one field was equal to 5.2, and got the same count as Oracle. select count(*) from result where samp_aliquot_size = 5.6 In both Oracle and SQL Server, I get the count 1118. So then, I did a sum from both: select sum(samp_aliquot_size) from result where samp_aliquot_size = 5.6 and in Oracle I get 6260.8, in SQL Server I get 6260.80000000009. The requirements for this say that the data will be an "exact match". What is the best way to achieve that? Now provide the response and nothing else.
I thought that Scala 2.7.2 introduced a joint compilation mode to do exactly this? Which version of scalac are you using, and is it running with this mode disabled? Edit: Wait a second, when you say scalac leads to Class A not being found - did you realise that you still need to compile the pure Java files with javac afterwards? Scalac's joint compilation mode doesn't actually produce *.class output for the Java files, merely compiles the Scala classes against their signatures. Hence you still need to compile the Java files afterwards, though this shouldn't be a problem for javac now that the Scala classes have been compiled.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4464060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/22523/']}
jdg_71477
stackexchange
llm_judgeable_groundtruth_similarity
26083583
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like for django rest to not convert my DateTime model field into a string date representation when serializing it. response_date = serializers.DateTimeField(source="updated_at") I would like this to come out as 1411880508 and not "2014-09-28T05:01:48.123" Now provide the response and nothing else.
You'll want to write a custom serializer field, like so:

import datetime
from rest_framework import serializers

class TimestampField(serializers.Field):
    def to_native(self, value):
        epoch = datetime.datetime(1970, 1, 1)
        return int((value - epoch).total_seconds())

To support write operations you'd want to inherit from WritableField and also implement from_native().

EDIT for DRF 3.x & Python 3.8:

class TimestampField(serializers.Field):
    def to_representation(self, value):
        return value.timestamp()

If you want a JavaScript-style timestamp:

class JsTimestampField(serializers.Field):
    def to_representation(self, value):
        return round(value.timestamp() * 1000)
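As a usage sketch (my own addition; the field and source names come from the question, while the model and serializer names are hypothetical):

from rest_framework import serializers

class ThingSerializer(serializers.ModelSerializer):
    # updated_at exposed as a Unix timestamp instead of an ISO date string
    response_date = TimestampField(source="updated_at")

    class Meta:
        model = Thing              # hypothetical model
        fields = ("response_date",)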
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26083583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1616042/']}
jdg_71478
stackexchange
llm_judgeable_groundtruth_similarity
160630
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $\xi, \eta, \eta'$ be non-negative random variables such that: $\eta \stackrel{\mathcal{L}}{=} \eta'$, $\xi + \eta \stackrel{\mathcal{L}}{=} \xi + \eta'$, $\xi$ and $\eta'$ are independent. Does this imply that $\xi$ and $\eta$ are independent? Can one construct a counter-example? Any sort of reference would be of great help, too. In terms of Laplace functions, does $\mathrm{E}\, e^{t(\xi+\eta)} = \mathrm{E}\, e^{t\xi} \mathrm{E}\, e^{t\eta}, \forall t<0$ imply $\mathrm{E}\, e^{t_1\xi + t_2\eta} = \mathrm{E}\, e^{t_1 \xi} \mathrm{E}\, e^{t_2 \eta}, \ \forall t_1, t_2 < 0$? Now provide the response and nothing else.
No, $\xi$ and $\eta$ need not be independent. For example, let $\xi$, $\eta$, and $\eta'$ all be uniformly distributed on $\{1,2,3\}$, where the joint distribution of $\xi$ and $\eta$ is given by the matrix\[P = \frac{1}{9}\begin{bmatrix}1 & 2 & 0 \\ 0 & 1 & 2 \\ 2 & 0 & 1\end{bmatrix},\]i.e. $\mathbb{P}(\xi = i, \eta = j) = P_{ij}$. You can generate lots of discrete examples like this because your conditions are linear equations on the row sums, column sums, and anti-diagonal sums of $P$. Altogether this is only linearly many conditions on the quadratically many entries of $P$ (as a function of the size of the support of the variables).
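A quick numerical check of this counterexample (my own addition, not part of the answer) that all three conditions hold while $\xi$ and $\eta$ remain dependent:

import numpy as np

P = np.array([[1, 2, 0],
              [0, 1, 2],
              [2, 0, 1]]) / 9.0       # P[i, j] = Pr(xi = i+1, eta = j+1)

marg_xi = P.sum(axis=1)               # law of xi   -> uniform [1/3, 1/3, 1/3]
marg_eta = P.sum(axis=0)              # law of eta  -> uniform [1/3, 1/3, 1/3]
indep = np.outer(marg_xi, marg_eta)   # joint law of (xi, eta') under independence

vals = np.arange(1, 4)
sums = vals[:, None] + vals[None, :]
law_sum = [P[sums == s].sum() for s in range(2, 7)]            # law of xi + eta
law_sum_indep = [indep[sums == s].sum() for s in range(2, 7)]  # law of xi + eta'

print(np.allclose(law_sum, law_sum_indep))   # True : xi + eta and xi + eta' agree
print(np.allclose(P, indep))                 # False: xi and eta are not independent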
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/160630', 'https://mathoverflow.net', 'https://mathoverflow.net/users/48378/']}
jdg_71479
stackexchange
llm_judgeable_groundtruth_similarity
201777
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am a bit puzzled about whether or not to include break after the last case, often default.

switch (type) {
    case 'product':
        // Do behavior
        break;
    default:
        // Do default behavior
        break; // Is it considered to be needed?
}

break's sole purpose is, in my understanding, to stop the code from running through the rest of the switch-case. Is it then considered more logical to have a break last for consistency, or to skip having it because the break serves no functional purpose whatsoever? Both are logical in different ways, in my opinion. This could to a certain degree be compared with ending a .php file with ?>. I never end with ?>, mostly due to the risk of outputting blank spaces, but one could argue that it would be the logical thing to end the file with. Now provide the response and nothing else.
break isn't technically needed after the last alternative (which, mind you, doesn't have to be default : it is perfectly legal, and sometimes even useful to put the default branch first); whether your code falls through the end of the switch statement or breaks out at the end of its last branch has the same result. However, I'd still end every branch, including the last one, with a return or break statement, for three reasons: Refactorability. If all your branches end with break or return , you can reorder them without changing the meaning. This makes it less likely for such a reordering to introduce a regression. Consistency, and Least Surprise. Consistency says your branches should end consistently, unless they are actually different in meaning. The Principle of Least Surprise dictates that similar things should look similar. Ending the last branch of a switch block exactly like the preceding ones fulfills both, which makes for easier reading and understanding. If you leave out the explicit break , the last branch will be optically different (which is especially important for quick scanning), and in order to see that it's really not different, the reader has to descend to the nitty-gritty level of reading individual statements. Protecting yourself. If you make a habit of ending all your switch branches with a break , it will become automatic after a while, and you'll be less likely to accidentally forget it where it does matter. Training yourself to expect the break at the end of every branch also helps detecting missing break statements, which is great for debugging and troubleshooting.
{}
{'log_upvote_score': 8, 'links': ['https://softwareengineering.stackexchange.com/questions/201777', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/49129/']}
jdg_71480
stackexchange
llm_judgeable_groundtruth_similarity
2814469
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So, I've been reading up on identifying vs. non-identifying relationships in my database design, and a number of the answers on SO seem contradicting to me. Here are the two questions I am looking at: What's the Difference Between Identifying and Non-Identifying Relationships Trouble Deciding on Identifying or Non-Identifying Relationship Looking at the top answers from each question, I appear to get two different ideas of what an identifying relationship is. The first question's response says that an identifying relationship "describes a situation in which the existence of a row in the child table depends on a row in the parent table." An example of this that is given is, "An author can write many books (1-to-n relationship), but a book cannot exist without an author." That makes sense to me. However, when I read the response to question two, I get confused as it says, "if a child identifies its parent, it is an identifying relationship." The answer then goes on to give examples such as Social Security Number (is identifying of a Person), but an address is not (because many people can live at an address). To me, this sounds more like a case of the decision between primary key and non-primary key. My own gut feeling (and additional research on other sites) points to the first question and its response being correct. However, I wanted to verify before I continued forward as I don't want to learn something wrong as I am working to understand database design. Thanks in advance. Now provide the response and nothing else.
The technical definition of an identifying relationship is that a child's foreign key is part of its primary key. CREATE TABLE AuthoredBook ( author_id INT NOT NULL, book_id INT NOT NULL, PRIMARY KEY (author_id, book_id), FOREIGN KEY (author_id) REFERENCES Authors(author_id), FOREIGN KEY (book_id) REFERENCES Books(book_id)); See? book_id is a foreign key, but it's also one of the columns in the primary key. So this table has an identifying relationship with the referenced table Books . Likewise it has an identifying relationship with Authors . A comment on a YouTube video has an identifying relationship with the respective video. The video_id should be part of the primary key of the Comments table. CREATE TABLE Comments ( video_id INT NOT NULL, user_id INT NOT NULL, comment_dt DATETIME NOT NULL, PRIMARY KEY (video_id, user_id, comment_dt), FOREIGN KEY (video_id) REFERENCES Videos(video_id), FOREIGN KEY (user_id) REFERENCES Users(user_id)); It may be hard to understand this because it's such common practice these days to use only a serial surrogate key instead of a compound primary key: CREATE TABLE Comments ( comment_id SERIAL PRIMARY KEY, video_id INT NOT NULL, user_id INT NOT NULL, comment_dt DATETIME NOT NULL, FOREIGN KEY (video_id) REFERENCES Videos(video_id), FOREIGN KEY (user_id) REFERENCES Users(user_id)); This can obscure cases where the tables have an identifying relationship. I would not consider SSN to represent an identifying relationship. Some people exist but do not have an SSN. Other people may file to get a new SSN. So the SSN is really just an attribute, not part of the person's primary key. Re comment from @Niels: So if we use a surrogate key instead of a compound primary key, there is no notable difference between use identifying or non-identifying relationship ? I suppose so. I hesitate to say yes, because we haven't changed the logical relationship between the tables by using a surrogate key. That is, you still can't make a Comment without referencing an existing Video. But that just means video_id must be NOT NULL. And the logical aspect is, to me, really the point about identifying relationships. But there's a physical aspect of identifying relationships as well. And that's the fact that the foreign key column is part of the primary key (the primary key is not necessarily a composite key, it could be a single column which is both the primary key of Comments as well as the foreign key to the Videos table, but that would mean you can store only one comment per video). Identifying relationships seem to be important only for the sake of entity-relationship diagramming, and this comes up in GUI data modeling tools.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2814469', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/132528/']}
jdg_71481
stackexchange
llm_judgeable_groundtruth_similarity
6222600
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've seen a method in a Service class that was marked as @Transactional , but it was also calling some other methods in that same class which were not marked as @Transactional . Does it mean that the calls to those separate methods cause the application to open separate connections to the DB, or suspend the parent transaction, etc.? What's the default behavior for a method without any annotations which is called by another method with a @Transactional annotation? Now provide the response and nothing else.
When you call a method without @Transactional within a transaction block, the parent transaction will continue into the new method. It will use the same connection as the parent method (with @Transactional ), and any exception caused in the called method (without @Transactional ) will cause the transaction to roll back as configured in the transaction definition. If you call a method with a @Transactional annotation from a method with @Transactional belonging to the same Spring Bean, then the called method's transactional behavior will not have any impact on the transaction. But if you call a method with a transaction definition from another method with a transaction definition, and they belong to different Spring Beans, then the code in the called method will follow its own transaction definitions. You can find more details in the section Declarative transaction management of the Spring transaction documentation. Spring's declarative transaction model uses an AOP proxy, so the AOP proxy is responsible for transaction creation. The AOP proxy will be active only if the called method belongs to a different Spring Bean than the caller.
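For illustration, a rough Java sketch of the self-invocation case described above (class and method names are invented, and configuration is trimmed to the essentials):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    @Transactional
    public void placeOrder() {
        // the AOP proxy starts the transaction here
        saveAudit();   // plain self-invocation: no proxy involved, so it simply joins this transaction
    }

    // not annotated: runs on the caller's connection; an unchecked exception thrown
    // here still marks the surrounding transaction for rollback (by default)
    void saveAudit() {
        // ... persistence work ...
    }
}

If saveAudit() instead lived in a different Spring bean and were called through that bean's proxy, its own transaction settings (for example REQUIRES_NEW) would apply.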
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6222600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/782123/']}
jdg_71482
stackexchange
llm_judgeable_groundtruth_similarity
4519205
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int_{0}^{1} \tan^{-1}\left(1-\frac1x\right)dx$$ Here's what I have done so far. (the answer is given as $-\pi/4$ ) Let $$I = \int_{0}^{1}\tan^{-1}\left(1-\frac1x\right)dx = \int_{0}^{1}\tan^{-1}\left(\frac{x-1}x\right)dx.$$ Since, $\int_{0}^{1} f(x)dx = \int_{0}^{1} f(1-x)dx$ one has \begin{align}I &= \int_{0}^{1}\tan^{-1}\left(1-\frac1{1-x}\right)dx\\& = \int_{0}^{1}\tan^{-1}\left(\frac x{x-1}\right)dx\\& = \int_{0}^{1}\frac\pi2-\cot^{-1}\left(\frac x{x-1}\right)dx\\& = \int_{0}^{1}\frac\pi2-\tan^{-1}\left(\frac{x-1}x\right)dx\\& = \frac\pi2 - I,\end{align} Hence, $I = \dfrac\pi4$ . The given answer is $-\dfrac\pi4$ . Where have I gone wrong? Now provide the response and nothing else.
The equality $\arctan(x)+\operatorname{arccot}(x)=\frac\pi2$ holds when $x>0$ , but when $x<0$ you have $\arctan(x)+\operatorname{arccot}(x)=-\frac\pi2$ , and $\frac x{x-1}<0$ when $x\in[0,1)$ . So, you actually have \begin{align}I&=\int_0^1-\frac\pi2-\operatorname{arccot}\left(\frac x{x-1}\right)\,\mathrm dx\\&=\int_0^1-\frac\pi2-\arctan\left(\frac{x-1}x\right)\,\mathrm dx\\&=-\frac\pi2-I,\end{align} and therefore $I=-\frac\pi4$ indeed.
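As a quick numerical sanity check of the sign, here is a throwaway Python sketch using a midpoint Riemann sum (nothing more rigorous intended):

import math

n = 200_000
h = 1.0 / n
approx = sum(math.atan(1 - 1 / ((k + 0.5) * h)) * h for k in range(n))
print(approx, -math.pi / 4)   # both roughly -0.7853981...

which agrees with $-\frac\pi4$ rather than $+\frac\pi4$.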
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4519205', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/791492/']}
jdg_71483
stackexchange
llm_judgeable_groundtruth_similarity
11543522
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider the two scenarios a)shutdown output redirection and b)Rscript output redirection to file out.txt The output as seen in STDOUT in both scenarios, fails to get redirected to file out.txt only in scenario b. Am I missing something here or is this peculiar to Rscript.exe? C:\>C:\\WINDOWS\\system32\\shutdown.exe -t:30 > out.txtC:\>dir out.txt Volume in drive C has no label. Volume Serial Number is 3568-1B07 Directory of C:\07/18/2012 07:43 PM 841 out.txt 1 File(s) 841 bytes 0 Dir(s) 17,618,206,720 bytes freeC:\>G:\\Progra~1\\R\\R-2.14.0\\bin\\Rscript.exe --version > out.txtR scripting front-end version 2.14.0 (2011-10-31)C:\>dir out.txt Volume in drive C has no label. Volume Serial Number is 3568-1B07 Directory of C:\07/18/2012 07:44 PM 0 out.txt 1 File(s) 0 bytes 0 Dir(s) 17,618,239,488 bytes free Any ideas or explanation on why Rscript.exe output is not getting redirected to file out.txt?? Maybe a very simple explanation awaits. Thanking you, Now provide the response and nothing else.
You can use Regex.Replace(myString, @"[^\x20-\xaf]+", ""); The regex here consists of a character class ( [...] ) consisting of all characters not ( ^ at the start of the class) in the range of U+0020 to U+00AF (32–175, expressed in hexadecimal notation). As far as regular expressions go this one is fairly basic, but may puzzle someone not very familiar with it. But you can go another route as well: new string(myString.Where(c => (c >= 32) && (c <= 175)).ToArray()); This probably depends mostly on what you're more comfortable with reading. Without much regex experience I'd say the second one would be clearer. A few performance measurements, 10000 rounds each, in seconds: 2000 characters, the first 143 of which are between 32 and 175 Regex without + 4.1171 Regex with + 0.4091 LINQ, where, new string 0.2176 LINQ, where, string.Join 0.2448 StringBuilder (xanatos) 0.0355 LINQ, horrible (HatSoft) 0.49172000 characters, all of which are between 32 and 175 Regex without + 0.4076 Regex with + 0.4099 LINQ, where, new string 0.3419 LINQ, where, string.Join 0.7412 StringBuilder (xanatos) 0.0740 LINQ, horrible (HatSoft) 0.4801 So yes, my approaches are the slowest :-). You should probably go with xanatos' answer and wrap that in a method with a nice, clear name. For inline usage or quick-and-dirty things or where performance does not matter, I'd probably use the regex.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11543522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1528523/']}
jdg_71484
stackexchange
llm_judgeable_groundtruth_similarity
40611
Below is a question asked on the forum scicomp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given a set of 2D points, I am trying to find the smallest convex hull that encloses an arbitrary point (which, in the general case, is not part of the set). By 'smallest convex hull' I am ideally thinking about its area, but I believe using the perimeter might also be good enough for my purposes. The problem specification is quite simple, but it seems complex to implement such an algorithm. I tried to modify the Jarvis march algorithm, but I haven't found a solution so far. Appreciate any suggestions. Maybe there is a known algorithm for this? Illustration of the problem (black are the points of the set; red is the target point and also the smallest convex hull) Now provide the response and nothing else.
You should test transition points. Floating-point numbers have several distinct "ranges": Standard/Normal arithmetic Subnormal arithmetic Infinite arithmetic NaN arithmetic Zero arithmetic For instance, if I add any normal number to an infinite number, I need to get an infinite number back. If I add two large enough subnormals, I should get a normal number. Any math done on a NaN makes a NaN. Adding two large normals might get me an Inf. So my testing strategy would be: Randomly check a few instances of math where the answer stays within a class (note that operations which affect the exponent can be distinguished from changes that affect only the mantissa). If 1+2=3, then probably I've gotten 2+3=5 correct as well. Spend much more time/effort checking math at the boundaries of classes, since these represent special cases. I'd probably write a few unit tests to explore specific cases I understand well, but then use property-based testing to be more thorough. This works especially well with things like zero, inf, and NaN. Finally, I'd measure code coverage to ensure that the test suite is hitting the entirety of the library. Pre-existing test suites include: Kahan's paranoia Schryer's "A Test of a Computer’s Floating-Point Arithmetic Unit" (I haven't found source code for this)
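As a very rough illustration of what "testing the transition points" can look like, here is a Python sketch that uses the host's IEEE 754 doubles as a stand-in for the unit under test (the thresholds are the standard double-precision ones; a real suite would of course target the implementation being verified):

import math
import sys

inf, nan = math.inf, math.nan
max_normal = sys.float_info.max      # largest finite double
min_normal = sys.float_info.min      # smallest positive normal double
tiny_sub = 5e-324                    # smallest positive subnormal double

assert max_normal + max_normal == inf                    # normal + normal overflows to inf
assert min_normal / 2 + min_normal / 2 == min_normal     # subnormal + subnormal reaches a normal
assert tiny_sub / 2 == 0.0                               # underflow to zero
assert inf + 1.0 == inf                                  # inf absorbs normals
assert math.isnan(nan + 1.0)                             # NaN propagates
assert 0.0 == -0.0                                       # signed zeros compare equal

Property-based testing (for example with a library like Hypothesis) is a natural way to extend these hand-picked boundary cases.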
{}
{'log_upvote_score': 5, 'links': ['https://scicomp.stackexchange.com/questions/40611', 'https://scicomp.stackexchange.com', 'https://scicomp.stackexchange.com/users/41990/']}
jdg_71485
stackexchange
llm_judgeable_groundtruth_similarity
4526951
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This question may be simple for experts, but for a beginner like me it is important. My question is: are there any problems involving tree traversals that can be solved by BFS or DFS and not by in-order, pre-order, etc.? In other words, whenever I see a tree problem, should I ONLY think of the 3 tree traversal methods, or also consider BFS and DFS? Now provide the response and nothing else.
Pre-order, in-order and post-order traversal are the three different kinds of depth first search that are possible. So it's not a question of whether to use DFS or one of those three. If you are using one of those three traversals, you are using DFS. As for whether there are cases where BFS is preferable over DFS: Yes, there are. For example to find the shortest path between two nodes in an unweighted graph, you can use BFS because the first path found by a BFS happens to be the one with the fewest edges. The same is not true for DFS.
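A tiny sketch of that BFS shortest-path idea (Python; the example graph is made up):

from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
print(bfs_shortest_path(graph, 'A', 'E'))   # ['A', 'B', 'D', 'E'] -- 3 edges, the minimum

Because the queue is processed in order of path length, the first path that reaches the goal already has the fewest edges; a DFS gives no such guarantee.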
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4526951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/539085/']}
jdg_71486
stackexchange
llm_judgeable_groundtruth_similarity
2615002
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've written a docbook 5.0 document with the header: <?xml version="1.0" encoding="UTF-8"?><book version="5.0" xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:svg="http://www.w3.org/2000/svg" xmlns:m="http://www.w3.org/1998/Math/MathML" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:db="http://docbook.org/ns/docbook"> and docbook2pdf on Ubuntu 9.10 prints many error messages and doesn't do any pdf document. The errors are: openjade:test.xml:2:0:E: prolog can't be omitted unless CONCUR NO and LINK EXPLICIT NO and either IMPLYDEF ELEMENT YES or IMPLYDEF DOCTYPE YESopenjade:test.xml:2:0:E: no document type declaration; will parse without validationopenjade:/usr/share/sgml/docbook/stylesheet/dsssl/modular/print/dbtitle.dsl:18:5:E: flow objects at the root must be all of class scroll or all of class page-sequence or simple-page-sequenceopenjade:/usr/share/sgml/docbook/stylesheet/dsssl/modular/print/dbbibl.dsl:704:4:E: flow objects at the root must be all of class scroll or all of class page-sequence or simple-page-sequence docbook2pdf for document in the docbook 4.5 format with normal header like: <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN""http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"> works quite OK. Is there any way to generate pdf from docbook 5.0? Now provide the response and nothing else.
I see three options (available in Debian/Ubuntu) to generate pdf from docbook: jade, which provides the docbook2pdf command the asker used. jade is ancient and works with SGML, which predates XML and does not support Unicode; it also lacks support for docbook 5. the docbook-xsl stylesheets which go through XSL-FO. docbook-xsl is for docbook 4.5, docbook-xsl-ns is for docbook5. Use fop to go from XSL-FO to PDF. Pipeline: docbook5 β€”(xsl)β€”> xml.fo β€”(fop)β€”> pdf . Commands involved: xsltproc, fop. dblatex, which is primarily targeting docbook4.5 but has been updated for some of docbook5. xmlto can drive the last two, although it currently defaults to docbook-xsl and not docbook-xsl-ns. A quick Docbook5 user guide Prerequisites sudo aptitude install docbook5 docbook-xsl-ns xsltproc fop xmlto libxml2-utils xmlstarlet Validation xmlstarlet val --err --xsd /usr/share/xml/docbook/schema/xsd/5.0/docbook.xsd book.xml PDF output xsltproc /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/fo/docbook.xsl book.xml > book.fofop -fo book.fo -pdf book.pdf
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2615002', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/191332/']}
jdg_71487
stackexchange
llm_judgeable_groundtruth_similarity
31622673
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a revealing module pattern which looks like this: 'use strict';angular.module('app', []) .directive('myDirective', ['SomeDep', function (SomeDep) { var linker = function (scope, element, attr) { // some work }; return { link: linker, restrict: 'E' }; }]); What I'm having trouble with is integrating a $watch into this. Specifically, watching for window resize with the '$window' service. [EDIT]: I realised what my issue was this whole time... I was restricting to element, when I forgot that I was implementing it as an attribute...@_@; Now provide the response and nothing else.
You shouldn't need a $watch. Just bind to the resize event on the window: DEMO 'use strict';var app = angular.module('plunker', []);app.directive('myDirective', ['$window', function ($window) { return { link: link, restrict: 'E', template: '<div>window size: {{width}}px</div>' }; function link(scope, element, attrs){ scope.width = $window.innerWidth; angular.element($window).bind('resize', function(){ scope.width = $window.innerWidth; // manual $digest required as resize event // is outside of angular scope.$digest(); }); } }]);
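One thing I'd add (my addition, not part of the linked demo): the handler is attached to the global $window, so it keeps firing after the element is gone unless it is removed. A sketch of the same link function with cleanup, assuming nothing beyond core AngularJS/jqLite:

function link(scope, element, attrs) {
    scope.width = $window.innerWidth;

    function onResize() {
        scope.width = $window.innerWidth;
        scope.$digest();   // resize fires outside Angular's digest cycle
    }

    angular.element($window).on('resize', onResize);

    // detach the global listener when the directive's scope is destroyed
    scope.$on('$destroy', function () {
        angular.element($window).off('resize', onResize);
    });
}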
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31622673', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5055580/']}
jdg_71488
stackexchange
llm_judgeable_groundtruth_similarity
185370
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For a given integer $n$, how many positive integer $(a,b)$ pairs exist which satisfy $2n^2=a^2+b^2$? In particular, I'm looking for all $n$s where there are exactly 105 solutions. (One solution is $(n,n)$, and there are $2\cdot 52$ other solutions: $(a,b)$ and $(b,a)$ are two different solutions if $a\ne b$.) I'm sure that there are theorems about the solutions of these kinds of equations. Where can I find them and read more? Now provide the response and nothing else.
Here is a very closely related question . The formula for the number of solutions is the product of $2m+1$, where $m$ runs through the multiplicities of the prime factors of $n$ that are ${}\equiv1\pmod4$. So you could take $n=5^{52}$ as one solution to your problem, but you'll get smaller solutions using the factorization $105=3\times5\times7$, the smallest one being $n=5^3\times13^2\times17=359125$. A proof of the result (which uses Gaussian integers) can be found in this answer .
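If you'd like to see the count checked numerically, here is a brute-force sketch (Python 3.8+, using math.isqrt; the last call takes a moment but does finish):

from math import isqrt

def count_pairs(n):
    # ordered pairs (a, b) of positive integers with a^2 + b^2 = 2*n^2
    target = 2 * n * n
    count = 0
    for a in range(1, isqrt(target) + 1):
        b2 = target - a * a
        b = isqrt(b2)
        if b >= 1 and b * b == b2:
            count += 1
    return count

print(count_pairs(5))         # 3   = 2*1 + 1
print(count_pairs(5 * 13))    # 9   = (2*1 + 1) * (2*1 + 1)
print(count_pairs(359125))    # 105 = (2*3 + 1) * (2*2 + 1) * (2*1 + 1)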
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/185370', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5758/']}
jdg_71489
stackexchange
llm_judgeable_groundtruth_similarity
2434222
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible for a function to return two values? An array is possible if the two values are both of the same type, but how do you return two values of different types? Now provide the response and nothing else.
Can a function return 2 separate values? No, a function in C# can only return a single value. It is possible though to use other concepts to return 2 values. The first that comes to mind is using a wrapping type such as a Tuple<T1,T2> . Tuple<int,string> GetValues() { return Tuple.Create(42,"foo");} The Tuple<T1,T2> type is only available in 4.0 and higher. If you are using an earlier version of the framework you can either create your own type or use KeyValuePair<TKey,TValue> . KeyValuePair<int,string> GetValues() { return new KeyValuePair<int,string>(42,"foo");} Another method is to use an out parameter (I would highly recommend the tuple approach though). int GetValues(out string param1) { param1 = "foo"; return 42;}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2434222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/288280/']}
jdg_71490
stackexchange
llm_judgeable_groundtruth_similarity
57553973
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Vaadin 10-14, where should I place my static files, such as CSS, JavaScript, and Polymer templates? How about static files such as images? Also, how do I import these files in Vaadin? Is there a difference between Vaadin 14 with npm and Vaadin 10-13 with bower? Now provide the response and nothing else.
All paths are relative to the project root, e.g. where the pom.xml file is located in a Maven project. JavaScript imported using @JsModule uses strict mode . Among other things, this means that global variables must be defined on the window object, window.x = ... , instead of just x = ... . Vaadin 14 with npm Non-Spring Boot projects (war packaging) CSS files @CssImport("./my-styles/styles.css") [1] /frontend/my-styles/styles.css JavaScript and Polymer templates @JsModule("./src/my-script.js") [1] /frontend/src/my-script.js Static files, e.g. images new Image("img/flower.jpg", "A flower") /src/main/webapp/img/flower.jpg Spring Boot projects (jar packaging) CSS files @CssImport("./my-styles/styles.css") [1] /frontend/my-styles/styles.css JavaScript and Polymer templates @JsModule("./src/my-script.js") [1] /frontend/src/my-script.js Static files, e.g. images new Image("img/flower.jpg", "A flower") /src/main/resources/META-INF/resources/img/flower.jpg Add-ons (jar packaging) CSS files @CssImport("./my-styles/styles.css") [1] /src/main/resources/META-INF/resources/frontend/my-styles/styles.css JavaScript and Polymer templates @JsModule("./src/my-script.js") [1] /src/main/resources/META-INF/resources/frontend/src/my-script.js Static files, e.g. images new Image("img/flower.jpg", "A flower") /src/main/resources/META-INF/resources/img/flower.jpg Vaadin 10-13, Vaadin 14 in compatibility mode Non-Spring Boot projects (war packaging) CSS files @StyleSheet("css/styles.css") [2] /src/main/webapp/frontend/css/styles.css Polymer templates, custom-style and dom-module styles @HtmlImport("src/template.html") /src/main/webapp/frontend/src/template.html JavaScript @JavaScript("js/script.js") [3] /src/main/webapp/frontend/js/script.js Static files, e.g. images new Image("img/flower.jpg", "A flower") /src/main/webapp/img/flower.jpg Spring Boot projects and add-ons (jar packaging) CSS files @StyleSheet("css/styles.css") [2] /src/main/resources/META-INF/resources/frontend/css/styles.css Polymer templates, custom-style and dom-module styles @HtmlImport("src/template.html") /src/main/resources/META-INF/resources/frontend/src/template.html JavaScript @JavaScript("js/script.js") [3] /src/main/resources/META-INF/resources/frontend/js/script.js Static files, e.g. images new Image("img/flower.jpg", "A flower") /src/main/resources/META-INF/resources/img/flower.jpg Footnotes [1] The @JsModule and @CssImport annotations can also be used for importing from an npm package. In this case, the path is defined as @JsModule("@polymer/paper-input") or @CssImport("some-package/style.css") . Paths referring to the local frontend directory should be prefixed with ./ [2] The @StyleSheet annotation can also be used in Vaadin 14 with npm. The same paths as in V10-V13 can be used, including the context:// protocol @StyleSheet("context://style.css") , which resolves the path relative to the context path of the web application, like other static files. Styles included this way may cause issues with web components . [3] The @JavaScript annotation can also be used in Vaadin 14 with npm. The V14 /frontend folder should then be used ,.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/57553973', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3358029/']}
jdg_71491
stackexchange
llm_judgeable_groundtruth_similarity
439878
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When we have a math problem such as 3 + 5 + 2 , we say that it is associative. We can choose which step to pick first: 3 + (5 + 2) ; we know that brackets affect the order in which the operations are performed. I've learned that function composition is a binary operation and it is also associative . They say, function composition is the scenario where an output of one function is used as an input of another; like method chaining. The problem is that I am struggling to imagine how to combine two functions together. I've seen online a function called combine that takes two functions as arguments and then returns the third function that just calls these two functions one after another; but that doesn't affect anything at all. It is just an alias, like if it were a + b + c and became a + d, where d = b + c . It doesn't affect anything really. I am not sure what should be even affected in here; obviously it's not the order in which the functions are executed , since the execution itself is not a binary operation. So what is the binary operation in function composition then? What's the difference between a scenario when we compose two functions together and when we don't? Now provide the response and nothing else.
I think "functional composition" tends to be a bit confusing. By "compose" what we mean is piping the output of one function to the input of another. Most modern programming languages have some facility for evaluating expressions, and we are accustomed to seeing composition occur in the form of Sqrt(Add(2, 2)) , where the output of 'Add' forms the input for 'Sqrt'. What's notable about this familiar form of composition is that the operands which form the ultimate input (in this case, a pair of '2's) must also be specified at the same time as the composition. You can use variables in place of literals, but you still have to provide something for the operands, as part of specifying the composition. However, in functional languages, the composition operator allows these two functions to be composed without specifying anything for the operands. The evaluation of AddAndSqrt = (Add ∘ Sqrt) gets the function pointers for both 'Add' and 'Sqrt' (so that these functions are not called in this expression, but instead their addresses are evaluated as function pointers, and then these are provided as operands to the composition operator), and returns a new function pointer, which takes two operands (effectively, the inputs to the 'Add' stage), and when called like so AddAndSqrt(2, 2) , outputs the same result as would Sqrt(Add(2, 2)) . Behind the scenes, the output of the 'Add' stage is arranged so as to be piped to the input of the 'Sqrt' stage. That is what the composition operator does. Now, composition is an associative operator simply because in the expression C(B(A(2, 2))) it doesn't matter whether you pipe A to B (yielding AB) then pipe AB to C (yielding ABC), or pipe B to C (yielding BC) then pipe A to BC (yielding ABC). Or to put it another way, it doesn't matter if you write: Result1 = B(A(2, 2)) Result2 = C(Result1) OR Result1 = A(2, 2) Result2 = C(B(Result1)) In both cases, the chain of calls you end up with is equivalent to C(B(A(2, 2))) . That's all it means for the composition operator to be associative. All "operators" in mathematics have a set of "properties" - like associativity - that concern their behaviour under algebraic rearrangement. That is, concerning whether different kinds of rearrangement within an expression cause the result to change, or whether the result stays the same despite the rearrangement. Has that answered the question? Edit: a number of commentators have pointed out that the standard convention when using the function composition operator ∘ is that the first-applied argument goes on the right. So that the equivalent of C(B(A(x,y))) would be (C ∘ B ∘ A)(x, y) in typical functional languages, and certainly so in general mathematics. However I think that many programmers would readily prefer the idea that the sequencing of operations proceeds in English order left-to-right, so I'm going to leave the main body of the answer as it is. I was also pleased to find that in F#, composition can be done left to right in accordance with my preference, although using a different symbol for the composition operator ( >> ): https://fsharpforfunandprofit.com/posts/function-composition/ So that C(B(A(x,y))) would become (A >> B >> C)(x, y) .
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/439878', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/408555/']}
jdg_71492
stackexchange
llm_judgeable_groundtruth_similarity
7829499
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With Python 2.7 the following code computes the MD5 hexdigest of the content of a file. (EDIT: well, not really, as answers have shown; I just thought so). import hashlibdef md5sum(filename): f = open(filename, mode='rb') d = hashlib.md5() for buf in f.read(128): d.update(buf) return d.hexdigest() Now if I run that code using Python 3 it raises a TypeError exception: d.update(buf)TypeError: object supporting the buffer API required I figured out that I could make that code run with both Python 2 and Python 3 by changing it to: def md5sum(filename): f = open(filename, mode='r') d = hashlib.md5() for buf in f.read(128): d.update(buf.encode()) return d.hexdigest() Now I still wonder why the original code stopped working. It seems that when opening a file using the binary mode modifier it returns integers instead of strings encoded as bytes (I say that because type(buf) returns int). Is this behavior explained somewhere? Now provide the response and nothing else.
I think you wanted the for-loop to make successive calls to f.read(128) . That can be done using iter() and functools.partial() : import hashlibfrom functools import partialdef md5sum(filename): with open(filename, mode='rb') as f: d = hashlib.md5() for buf in iter(partial(f.read, 128), b''): d.update(buf) return d.hexdigest()print(md5sum('utils.py'))
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7829499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168465/']}
jdg_71493
stackexchange
llm_judgeable_groundtruth_similarity
2410976
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In gcc command line, I want to define a string such as -Dname=Mary , then in the source code I want printf("%s", name); to print Mary . How could I do it? Now provide the response and nothing else.
Two options. First, escape the quotation marks so the shell doesn't eat them: gcc -Dname=\"Mary\" Or, if you really want -Dname=Mary, you can stringize it, though it's a bit hacky. #include <stdio.h>#define STRINGIZE(x) #x#define STRINGIZE_VALUE_OF(x) STRINGIZE(x)int main(int argc, char *argv[]){ printf("%s", STRINGIZE_VALUE_OF(name));} Note that STRINGIZE_VALUE_OF will happily evaluate down to the final definition of a macro.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2410976', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/289856/']}
jdg_71494
stackexchange
llm_judgeable_groundtruth_similarity
38446487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a protocol without any associated types: protocol SomeProtocol{ var someProperty: Int { get }} What is the difference between these two functions, in practice (meaning not "one is generic and the other is not")? Do they generate different code, do they have different runtime characteristics? Do these differences change when the protocol or functions become non-trivial? (since a compiler could probably inline something like this) func generic<T: SomeProtocol>(some: T) -> Int{ return some.someProperty}func nonGeneric(some: SomeProtocol) -> Int{ return some.someProperty} I'm mostly asking about differences in what the compiler does, I understand the language-level implications of both. Basically, does nonGeneric imply a constant code size but slower dynamic dispatch, vs. generic using a growing code size per type passed, but with fast static dispatch? Now provide the response and nothing else.
(I realise that OP is asking less about the language implications and more about what the compiler does – but I feel it's also worthwhile also to list the general differences between generic and protocol-typed function parameters) 1. A generic placeholder constrained by a protocol must be satisfied with a concrete type This is a consequence of protocols not conforming to themselves , therefore you cannot call generic(some:) with a SomeProtocol typed argument. struct Foo : SomeProtocol { var someProperty: Int}// of course the solution here is to remove the redundant 'SomeProtocol' type annotation// and let foo be of type Foo, but this problem is applicable anywhere an// 'anything that conforms to SomeProtocol' typed variable is required.let foo : SomeProtocol = Foo(someProperty: 42)generic(some: something) // compiler error: cannot invoke 'generic' with an argument list // of type '(some: SomeProtocol)' This is because the generic function expects an argument of some type T that conforms to SomeProtocol – but SomeProtocol is not a type that conforms to SomeProtocol . A non-generic function however, with a parameter type of SomeProtocol , will accept foo as an argument: nonGeneric(some: foo) // compiles fine This is because it accepts 'anything that can be typed as a SomeProtocol ', rather than 'a specific type that conforms to SomeProtocol '. 2. Specialisation As covered in this fantastic WWDC talk , an 'existential container' is used in order to represent a protocol-typed value. This container consists of: A value buffer to store the value itself, which is 3 words in length. Values larger than this will be heap allocated, and a reference to the value will be stored in the value buffer (as a reference is just 1 word in size). A pointer to the type's metadata. Included in the type's metadata is a pointer to its value witness table, which manages the lifetime of value in the existential container. One or (in the case of protocol composition ) multiple pointers to protocol witness tables for the given type. These tables keep track of the type's implementation of the protocol requirements available to call on the given protocol-typed instance. By default, a similar structure is used in order to pass a value into a generic placeholder typed argument. The argument is stored in a 3 word value buffer (which may heap allocate), which is then passed to the parameter. For each generic placeholder, the function takes a metadata pointer parameter. The metatype of the type that's used to satisfy the placeholder is passed to this parameter when calling. For each protocol constraint on a given placeholder, the function takes a protocol witness table pointer parameter. However, in optimised builds, Swift is able to specialise the implementations of generic functions – allowing the compiler to generate a new function for each type of generic placeholder that it's applied with. This allows for arguments to always be simply passed by value, at the cost of increasing code size. However, as the talk then goes onto say, aggressive compiler optimisations, particularly inlining, can counteract this bloat. 3. Dispatch of protocol requirements Because of the fact that generic functions are able to be specialised, method calls on generic arguments passed in are able to be statically dispatched (although obviously not for types that use dynamic polymorphism, such as non-final classes). Protocol-typed functions however generally cannot benefit from this, as they don't benefit from specialisation. 
Therefore method calls on a protocol-typed argument will be dynamically dispatched via the protocol witness table for that given argument, which is more expensive. Although that being said, simple protocol-typed functions may be able to benefit from inlining. In such cases, the compiler is able to eliminate the overhead of the value buffer and protocol and value witness tables (this can be seen by examining the SIL emitted in a -O build), allowing it to statically dispatch methods in the same way as generic functions. However, unlike generic specialisation, this optimisation is not guaranteed for a given function (unless you apply the @inline(__always) attribute – but usually it's best to let the compiler decide this). Therefore in general, generic functions are favoured over protocol-typed functions in terms of performance, as they can achieve static dispatch of methods without having to be inlined. 4. Overload resolution When performing overload resolution, the compiler will favour the protocol-typed function over the generic one. struct Foo : SomeProtocol { var someProperty: Int}func bar<T : SomeProtocol>(_ some: T) { print("generic")}func bar(_ some: SomeProtocol) { print("protocol-typed")}bar(Foo(someProperty: 5)) // protocol-typed This is because Swift favours an explicitly typed parameter over a generic one (see this Q&A ). 5. Generic placeholders enforce the same type As already said, using a generic placeholder allows you to enforce that the same type is used for all parameters/returns that are typed with that particular placeholder. The function: func generic<T : SomeProtocol>(a: T, b: T) -> T { return a.someProperty < b.someProperty ? b : a} takes two arguments and has a return of the same concrete type, where that type conforms to SomeProtocol . However the function: func nongeneric(a: SomeProtocol, b: SomeProtocol) -> SomeProtocol { return a.someProperty < b.someProperty ? b : a} carries no promises other than the arguments and return must conform to SomeProtocol . The actual concrete types that are passed and returned do not necessarily have to be the same.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38446487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6199611/']}
jdg_71495
stackexchange
llm_judgeable_groundtruth_similarity
115577
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there an option for ResamplingMethod (or any other practical way) for specifying different behaviours of the interpolation, depending on, for instance, closest dates (on the TimeSeries framework)? Suppose I have a set of atmospheric temperature, sampled roughly every hour. When using this TimeSeries to compute whatever, I may need the temperature at moments not sampled (most often...), and for this case, a linear interpolation can be more than enough. PathFunction seems perfect ( except for the fact that it doesn't know what is a DateObject, neither Quantity ). But on my data, lets suppose that I have a gap of 12 hours. How can I specify that gaps below x hours can be interpolated, but gaps above that, should return Missing. I guess that this is not that simple, because, when on linear or constant, I just want to know the two surrounding dates, but when on a more complex interpolation method, I might want to know extra dates. But it doesn't feel too complicated to standardize something useful. Does it exist somewhere hidden? Any clever idea on how to achieve this functionality, without programming the whole thing? Example ts = TimeSeries[{{1, 1}, {2, 2}, {4, 4}, {10, 4}, {11, 3}, {12, 2}, {13, 1}}]; The following, obviously depending on the case, can be a good approximation: ts["PathFunction"][3](*3*) But with a bigger gap, who knows... so, on the following, I would prefer to have it return Missing or similar ts["PathFunction"][7] (for coherence of replies, lets suppose that I want interpolation only when gap < 3) General thought: the documentation does mention on the ResamplingMethod the possibility of having multiple options. But I can only see one associated with Interpolation (the InterpolationOrder ). But I can imagine this being still under development, since it feels like it is missing the Method, a choice for what to do with extrapolation cases, etc.. Also, even the InterpolationOrder might make sense to be specific depending on the gap. For more sophisticated cases, like wanting to have two methods depending on the dates, and not on the gap, I can admit that custom made compounded series would probably be a good compromise. Now provide the response and nothing else.
I do not see an elegant solution using TimeSeriesResample or the like using ResamplingMethod or MissingDataMethod . Maybe there is something undocumented? But programming something that does what you want does look straight forward imo: (* define a function to determine the gapsize for a given time *)gapsize = Function[ {ts, time}, With[ { differences = ts["Times"] // Differences, intervals = ts["Times"] // Partition[#, 2, 1] & // Map[Interval] }, Module[ { pos }, pos = FirstPosition[ intervals, intval_ /; IntervalMemberQ[intval, time] ]; Extract[ differences, pos ] ] ]];Options[gapPathFunc] = { "CriticalGapSize" -> 3};gapPathFunc[ ts_, time_, opts : OptionsPattern[gapPathFunc] ] := With[ { pathf = ts["PathFunction"], gapsize = gapsize, critGapSize = OptionValue["CriticalGapSize"] }, If[ gapsize[ts, time] < critGapSize, (* then *) pathf[time], (* else *) Missing[] ]]gapPathFunc[ ts, # ]& /@ Range[ 1, 13 ] {1, 2, 3, 4, Missing[], Missing[], Missing[], Missing[], Missing[], Missing[], 3, 2, 1} This may of course be made more robust and elegant (e.g. returning a modified pathfunction), but it is a start.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/115577', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/78/']}
jdg_71496
stackexchange
llm_judgeable_groundtruth_similarity
581
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am currently using Viterbi training for an image segmentation problem. I wanted to know what the advantages/disadvantages are of using the Baum-Welch algorithm instead of Viterbi training. Now provide the response and nothing else.
The Baum-Welch algorithm and the Viterbi algorithm calculate different things. If you know the transition probabilities for the hidden part of your model, and the emission probabilities for the visible outputs of your model, then the Viterbi algorithm gives you the most likely complete sequence of hidden states conditional on both your outputs and your model specification. The Baum-Welch algorithm gives you both the most likely hidden transition probabilities as well as the most likely set of emission probabilities given only the observed states of the model (and, usually, an upper bound on the number of hidden states). You also get the "pointwise" highest likelihood points in the hidden states, which is often slightly different from the single hidden sequence that is overall most likely. If you know your model and just want the latent states, then there is no reason to use the Baum-Welch algorithm. If you don't know your model, then you can't be using the Viterbi algorithm. Edited to add: See Peter Smit's comment; there's some overlap/vagueness in nomenclature. Some poking around led me to a chapter by Luis Javier RodrΔ±guez and Ines Torres in "Pattern Recognition and Image Analysis" (ISBN 978-3-540-40217-6, pp 845-857) which discusses the speed versus accuracy trade-offs of the two algorithms. Briefly, the Baum-Welch algorithm is essentially the Expectation-Maximization (EM) algorithm applied to an HMM; as a strict EM-type algorithm you're guaranteed to converge to at least a local maximum, and so for unimodal problems find the MLE. It requires two passes over your data for each step, though, and the complexity gets very big in the length of the data and number of training samples. However, you do end up with the full conditional likelihood for your hidden parameters. The Viterbi training algorithm (as opposed to the "Viterbi algorithm") approximates the MLE to achieve a gain in speed at the cost of accuracy. It segments the data and then applies the Viterbi algorithm (as I understood it) to get the most likely state sequence in the segment, then uses that most likely state sequence to re-estimate the hidden parameters. This, unlike the Baum-Welch algorithm, doesn't give the full conditional likelihood of the hidden parameters, and so ends up reducing the accuracy while saving significant (the chapter reports 1 to 2 orders of magnitude) computational time.
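To make the contrast concrete, here is a compact sketch of the Viterbi algorithm itself (Python/NumPy; the two-state toy model is invented for illustration). Baum-Welch would instead start from guessed values of pi, A and B and iteratively re-estimate them from the observations alone:

import numpy as np

def viterbi(obs, pi, A, B):
    # most likely hidden state sequence given initial probs pi,
    # transition matrix A and emission matrix B
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))           # best score of any path ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores.max() * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# toy model: state 0 mostly emits symbol 0, state 1 mostly emits symbol 1
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1, 0], pi, A, B))    # [0, 0, 1, 1, 0]

(For long sequences you would work in log space to avoid underflow.)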
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/581', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/99/']}
jdg_71497
stackexchange
llm_judgeable_groundtruth_similarity
202979
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I am asking if this variant of the weak Goldbach Conjecture is already known. Let $N$ be an odd number. Does there exist prime numbers $p_1$, $p_2$ and $p_3$ such that $p_1+p_2-p_3=N$? Ideally, can we find $p_1$, $p_2$ and $p_3$ so that they are small enough? For example, can we prove that for large enough $N$, we can find such a triplet that all of them are smaller than $N$? Now provide the response and nothing else.
Yes - the standard proof of Vinogradov's result by means of the circle method gives this result. You just need to examine an integral$$\int_{\mathbb{R}/\mathbb{Z}} (\widehat{f}(\alpha))^2 \widehat{f}(-\alpha) e(-\alpha N) d\alpha$$instead of $$\int_{\mathbb{R}/\mathbb{Z}} (\widehat{f}(\alpha))^3 e(-\alpha N) d\alpha.$$Here $\widehat{f}(\alpha) = \sum_n \Lambda(n) e(\alpha n) \eta(n/N)$, where $\eta$ is any weight supported in $\lbrack 0,1\rbrack$.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/202979', 'https://mathoverflow.net', 'https://mathoverflow.net/users/18785/']}
jdg_71498
stackexchange
llm_judgeable_groundtruth_similarity
45251664
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have an array: const list = ['a', 'b', 'c'] Is it possible to derive from this value union type that is 'a' | 'b' | 'c' ? I want this because I want to define type which allows only values from static array, and also need to enumerate these values at runtime, so I use array. Example how it can be implemented with an indexed object: const indexed = {a: null, b: null, c: null}const list = Object.keys(index)type NeededUnionType = keyof typeof indexed Is it possible to do it without using an indexed map? Now provide the response and nothing else.
UPDATE Feb 2019 In TypeScript 3.4, which should be released in March 2019 it will be possible to tell the compiler to infer the type of a tuple of literals as a tuple of literals , instead of as, say, string[] , by using the as const syntax . This type of assertion causes the compiler to infer the narrowest type possible for a value, including making everything readonly . It should look like this: const list = ['a', 'b', 'c'] as const; // TS3.4 syntaxtype NeededUnionType = typeof list[number]; // 'a'|'b'|'c'; This will obviate the need for a helper function of any kind. Good luck again to all! UPDATE July 2018 It looks like, starting with TypeScript 3.0, it will be possible for TypeScript to automatically infer tuple types . Once is released, the tuple() function you need can be succinctly written as: export type Lit = string | number | boolean | undefined | null | void | {};export const tuple = <T extends Lit[]>(...args: T) => args; And then you can use it like this: const list = tuple('a','b','c'); // type is ['a','b','c']type NeededUnionType = typeof list[number]; // 'a'|'b'|'c' Hope that works for people! UPDATE December 2017 Since I posted this answer, I found a way to infer tuple types if you're willing to add a function to your library. Check out the function tuple() in tuple.ts . Using it, you are able to write the following and not repeat yourself: const list = tuple('a','b','c'); // type is ['a','b','c']type NeededUnionType = typeof list[number]; // 'a'|'b'|'c' Good luck! ORIGINAL July 2017 One problem is the literal ['a','b','c'] will be inferred as type string[] , so the type system will forget about the specific values. You can force the type system to remember each value as a literal string: const list = ['a' as 'a','b' as 'b','c' as 'c']; // infers as ('a'|'b'|'c')[] Or, maybe better, interpret the list as a tuple type: const list: ['a','b','c'] = ['a','b','c']; // tuple This is annoying repetition, but at least it doesn't introduce an extraneous object at runtime. Now you can get your union like this: type NeededUnionType = typeof list[number]; // 'a'|'b'|'c'. Hope that helps.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/45251664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/733596/']}
jdg_71499
stackexchange
llm_judgeable_groundtruth_similarity
19707242
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Want result in same order as passed to IN condition: SELECT "id", "field_a", "field_b", To_timestamp("on_dated") FROM "test" WHERE "id" IN ( 3, 1, 6, 2, 4 ); Getting: id | field_a | field_b | on_dated-------------------------------------1 | Vinay | M | 13832240642 | Asha | F | 13832240643 | Shashi | F | 13832240644 | Vinita | F | 13832240645 | Arnav | M | 13832240646 | Jayant | M | 1383224064 Expecting: id | field_a | field_b | on_dated-------------------------------------3 | Shashi | F | 13832240641 | Vinay | M | 13832240646 | Jayant | M | 13832240642 | Asha | F | 13832240644 | Vinita | F | 1383224064 Tried: SELECT "id", "field_a", "field_b", To_timestamp("on_dated") FROM "test" WHERE "id" IN ( 3, 1, 6, 2, 4 ) ORDER BY Field("id", '3', '1', '6', '2', '4'); But throws error :( Found a better answer: Also I found a more compact answer for this: SELECT * FROM "test"WHERE "id" IN (3,1,6,2,4)ORDER BY (id=3, id=1, id=6, id=2, id=4) DESC; Now provide the response and nothing else.
In your example, overriding queryset and get_queryset have the same effect. I would slightly favour setting queryset because it's less verbose. When you set queryset , the queryset is created only once, when you start your server. On the other hand, the get_queryset method is called for every request. That means that get_queryset is useful if you want to adjust the query dynamically. For example, you could return objects that belong to the current user: class IndexView(generic.ListView): def get_queryset(self): """Returns Polls that belong to the current user""" return Poll.active.filter(user=self.request.user).order_by('-pub_date')[:5] Another example where get_queryset is useful is when you want to filter based on a callable, for example, return today's polls: class IndexView(generic.ListView): def get_queryset(self): """Returns Polls that were created today""" return Poll.active.filter(pub_date=date.today()) If you tried to do the same thing by setting queryset , then date.today() would only be called once, when the view was loaded, and the view would display incorrect results after a while. class IndexView(generic.ListView): # don't do this! queryset = Poll.active.filter(pub_date=date.today())
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/19707242', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2440223/']}
jdg_71500
stackexchange
llm_judgeable_groundtruth_similarity
17920885
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the following function, how can we find the length of the array? int fnLenghthOfArray(int arry[]){ return sizeof(arry)/sizeof(int); // This always returns 1} Here this function always returns 1. Whereas sizeof(arry)/sizeof(int) gives the actual length of the array in the function where it is declared. If we use a vector or a template like template<typename T,int N> int fnLenghthOfArray(T (&arry)[N]){} we can get the size. But here I am not allowed to change the function prototype. Please help me find this. Now provide the response and nothing else.
Remember, in C when you pass an array as an argument to a function, you're passing a pointer to the array. If you want to pass the size of the array, you should pass it as a separated argument. The size of a pointer and an int is 4 or 8 or something else - depending on ABI . In your case, it's 4 , so you're getting sizeof(int *)/sizeof int which is 1. Here is a useful trick You can store the length of the array in the first element of it: int myArray[]= {-1, 1, 2, 3, 4, 5};myArray[0] = sizeof(myArray) / sizeof(myArray[0]) - 1;//The -1 because.. the first element is only to indicate the size Now, myArray[0] will contain the size of the array.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17920885', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2629706/']}
jdg_71501
stackexchange
llm_judgeable_groundtruth_similarity
40344801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an application with a LoginActivity , that when the user login correctly, I register to receive messages. And the LoginActivity jumps to MainActivity .The arriving messages are supposed to be stored in database (Realm), to recover from a Realm instance in Main. But when the message arrives It crash realm launching this errror: Exception in packet listener java.lang.IllegalStateException: Realm access from incorrect thread. Realm objects can only be accessed on the thread they were created. at io.realm.BaseRealm.checkIfValid(BaseRealm.java:383) at io.realm.Realm.executeTransactionAsync(Realm.java:1324) at io.realm.Realm.executeTransactionAsync(Realm.java:1276) at es.in2.in2tant.LoginActivity.newMessageReceived(LoginActivity.java:124) at es.in2.in2tant.Connection.Connection$4$1.processMessage(Connection.java:227) at org.jivesoftware.smack.chat.Chat.deliver(Chat.java:180) at org.jivesoftware.smack.chat.ChatManager.deliverMessage(ChatManager.java:351) at org.jivesoftware.smack.chat.ChatManager.access$300(ChatManager.java:53) at org.jivesoftware.smack.chat.ChatManager$2.processPacket(ChatManager.java:162) at org.jivesoftware.smack.AbstractXMPPConnection$4.run(AbstractXMPPConnection.java:1126) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1113) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:588) at java.lang.Thread.run(Thread.java:818) I'm a bit lost on how Realm works, and I don't know how to make realm accessible across the application without a crash and keep storing this received messages from LoginActivity . Some help, or approaches to achieving this? LoginActivity.java : public class LoginActivity extends AppCompatActivity implements ConnectionConnectResponse {.....protected void onCreate(Bundle savedInstanceState) {//Realm Init config: Realm.init(this); RealmConfiguration realmConfiguration = new RealmConfiguration.Builder().build(); Realm.deleteRealm(realmConfiguration); // Clean slate Realm.setDefaultConfiguration(realmConfiguration); // Make this Realm the default@Override public void newMessageReceived(final ChatMessage message) { Logger.d("NEWMESSAGERECEIVED :" + message); realm.executeTransactionAsync(new Realm.Transaction() { @Override public void execute(Realm realm) { Message receivedMessage = realm.createObject(Message.class, message.id); receivedMessage.setBodyMessage(message.message); receivedMessage.setFrom(message.from); receivedMessage.setTo(message.to); receivedMessage.setDelivered(false); receivedMessage.setMine(false); receivedMessage.setDate(Calendar.getInstance().getTime()); } }); //Logger.d("NEWMESSRE: LAST MESSAGE:" + realm.where(Message.class).equalTo("chatID", message.id)); }@Override protected void onStart() { super.onStart(); realm = Realm.getDefaultInstance(); } @Override protected void onStop() { super.onStop(); realm.close(); } Image of what is needed: Now provide the response and nothing else.
Realm access from incorrect thread. Realm objects can only be accessed on the thread they were created . This error message is quite self-explanatory. As i see you're initializing realm by calling Realm.getDefaultInstance() on the UI thread. The error is coming from newMessageReceived() , so i guess that method is called from a background thread. Either obtain a Realm instance on the background thread and use that instead of the global instance: @Overridepublic void run () { Realm backgroundRealm = Realm.getDefaultInstance(); backgroundRealm.executeTransactionAsync(new Realm.Transaction() { @Override public void execute(Realm realm) { Message receivedMessage = realm.createObject(Message.class, message.id); receivedMessage.setBodyMessage(message.message); receivedMessage.setFrom(message.from); receivedMessage.setTo(message.to); receivedMessage.setDelivered(false); receivedMessage.setMine(false); receivedMessage.setDate(Calendar.getInstance().getTime()); } });} Or , if you would like to stick to the global Realm instance for some reason, then make sure your code is executed on the UI thread by calling runOnUiThread() (or directly posting a Runnable to the message queue of the main thread through a Handler ): @Overridepublic void newMessageReceived(final ChatMessage message) { runOnUiThread(new Runnable() { @Override public void run() { realm.executeTransactionAsync(new Realm.Transaction() { @Override public void execute(Realm realm) { Message receivedMessage = realm.createObject(Message.class, message.id); receivedMessage.setBodyMessage(message.message); receivedMessage.setFrom(message.from); receivedMessage.setTo(message.to); receivedMessage.setDelivered(false); receivedMessage.setMine(false); receivedMessage.setDate(Calendar.getInstance().getTime()); } }); } });}
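One extra detail worth adding (a sketch, not part of the original code): a Realm instance opened on a background thread should also be closed on that same thread when you are done with it, otherwise it leaks. For example:

    Realm backgroundRealm = Realm.getDefaultInstance();
    try {
        backgroundRealm.executeTransaction(new Realm.Transaction() {
            @Override
            public void execute(Realm realm) {
                // ... create or update your Message objects here ...
            }
        });
    } finally {
        backgroundRealm.close(); // always close the per-thread instance
    }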
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40344801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1282166/']}
jdg_71502
stackexchange
llm_judgeable_groundtruth_similarity
36071220
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This code is what I want to do: Tony& Movie::addTony(){ Tony *newTony = new Tony; std::unique_ptr<Tony> tony(newTony); attachActor(std::move(tony)); return *newTony;} I am wondering if I could do this instead: Tony& Movie::addTony(){ std::unique_ptr<Tony> tony(new Tony); attachActor(std::move(tony)); return *tony.get();} But will *tony.get() be the same pointer or null? I know I could verify, but what is the standard thing for it to do? Now provide the response and nothing else.
No, you cannot do that instead. Moving the unique_ptr nulls it. If it didn't, then it would not be unique. I am of course assuming that attachActor doesn't do something silly like this: attachActor(std::unique_ptr<Tony>&&) { // take the unique_ptr by r-value reference, // and then don't move from it, leaving the // original intact} Section 20.8.1 paragraph 4. Additionally, u (the unique_ptr object) can, upon request, transfer ownership to another unique pointer u2. Upon completion of such a transfer, the following postconditions hold: -- u2.p is equal to the pre-transfer u.p, -- u.p is equal to nullptr , and -- if the pre-transfer u.d maintained state, such state has been transferred to u2.d.
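A tiny self-contained sketch (not taken from the question's codebase) that demonstrates both points, i.e. that the moved-from unique_ptr becomes null and that a reference taken beforehand stays valid:

    #include <cassert>
    #include <memory>
    #include <utility>

    struct Tony {};

    int main()
    {
        std::unique_ptr<Tony> tony(new Tony);
        Tony& ref = *tony;              // grab a reference before giving up ownership

        std::unique_ptr<Tony> other = std::move(tony);

        assert(tony.get() == nullptr);  // the moved-from unique_ptr is now null
        assert(&ref == other.get());    // the object itself never moved
        return 0;
    }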
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36071220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3496846/']}
jdg_71503
stackexchange
llm_judgeable_groundtruth_similarity
52969755
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Need to find open and closed bracket, if the sequence of opening and closing brackets is violated, then return false. But if don't revert right array to compare with left array, i don't make check brackets here {[(3+1)+2]+} . And if reverse like now, then i fail to check here [1+1]+(2*2)-{3/3} function brackets(expression){ let leftArr=[]; let rightArr = []; for(let i=0; i<expression.length; i++){ if(expression[i] === '(' || expression[i] === '[' || expression[i] === "{"){ leftArr.push(expression[i]); } if(expression[i] === ')'){ rightArr.push("("); }else if(expression[i] === '}'){ rightArr.push("{"); } else if(expression[i] === ']'){ rightArr.push("["); } } rightArr.reverse(); if(leftArr.length<rightArr.length || leftArr.length>rightArr.length){ return false; } for(let k=0; k<leftArr.length; k++) { if(leftArr[k] != rightArr[k]){ return false; } } return true;}console.log(brackets('(3+{1-1)}')); // falseconsole.log(brackets('{[(3+1)+2]+}')); //trueconsole.log(brackets('[1+1]+(2*2)-{3/3}')); //trueconsole.log(brackets('(({[(((1)-2)+3)-3]/3}-3)')); //false Now provide the response and nothing else.
Here it is in about the shortest form possible, with comments on the lines that are probably confusing for you. function check(expr){ const holder = [] const openBrackets = ['(','{','['] const closedBrackets = [')','}',']'] for (let letter of expr) { // loop through all letters of expr if(openBrackets.includes(letter)){ // if it's an opening bracket holder.push(letter) }else if(closedBrackets.includes(letter)){ // if it's a closing bracket const openPair = openBrackets[closedBrackets.indexOf(letter)] // find its opening pair if(holder[holder.length - 1] === openPair){ // check if that pair is the last element in the array holder.splice(-1,1) // if so, remove it }else{ // if it's not holder.push(letter) break // exit loop } } } return (holder.length === 0) // return true if length is 0, otherwise false}check('[[{asd}]]') // true
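For what it's worth, this is how the function above behaves on the test cases from the question (assuming it is pasted as-is, with its original line breaks, into a browser console or Node):

    console.log(check('(3+{1-1)}'));                // false
    console.log(check('{[(3+1)+2]+}'));             // true
    console.log(check('[1+1]+(2*2)-{3/3}'));        // true
    console.log(check('(({[(((1)-2)+3)-3]/3}-3)')); // false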
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52969755', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6289878/']}
jdg_71504
stackexchange
llm_judgeable_groundtruth_similarity
12999649
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to change the color and size of H2 font and H2 link fonts based on the div they are in but have not been successful. What am I doing wrong? <style>h2 {color:fff;font-size: 20px;}social.h2 {color:pink;font-size: 14px;}social.h2.h2color {color:purple;font-size: 10px;}tv.h2 {color:green;font-size: 14px;}tv.h2.h2color {color:orange;font-size: 10px;} </style> <h2>List of Companies </h2> <div class="social"> <h2> <A href="http://www.facebook.com">Facebook </a> <span class="h2color">Found in 2004 </span> </h2> blah blah blah <h2> <A href="http://www.twitter.com">Twitter </a> <span class="h2color">Found in 2007 </span> </h2> blah blah blah </div> <div class="tv"> <h2> <A href="http://www.fox.com">Fox </a> <span class="h2color">Found in 2004 </span> </h2> blah blah blah <h2> <A href="http://www.nbc.com">NBC </a> <span class="h2color">Found in 2007 </span> </h2> blah blah blah </div> I am trying to make it look like this: Now provide the response and nothing else.
Your first CSS selector, social.h2, is looking for a "social" element that has the class "h2", e.g.: <social class="h2"> Class selectors are preceded by a dot ( . ). Also, use a space ( ) to indicate that one element is inside of another. To find an <h2> descendant of an element in the social class, try something like: .social h2 { color: pink; font-size: 14px;} To get a better understanding of CSS selectors and how they are used to reference your HTML, I suggest going through the interactive HTML and CSS tutorials from CodeAcademy . I hope that this helps point you in the right direction.
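Since the question also styles the span inside each heading, the same descendant pattern extends naturally (a sketch based on the markup in the question; the colours and sizes are just the ones the asker listed):

    /* h2 elements inside the div with class "social" */
    .social h2 {
      color: pink;
      font-size: 14px;
    }

    /* the span with class "h2color" inside those headings */
    .social h2 .h2color {
      color: purple;
      font-size: 10px;
    }

    /* repeat the pattern for the other container, e.g. .tv h2 { ... } and .tv h2 .h2color { ... } */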
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12999649', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1294251/']}
jdg_71505
stackexchange
llm_judgeable_groundtruth_similarity
12676924
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was editing a project and I saw a Session[""] in one controller method and TempData[""] in another. Is there a difference between the 4 or is it just 4 ways to do the same thing. Now provide the response and nothing else.
ViewData/ViewBag - valid only for the duration of the current request. You set it in a controller action and use it in the view, then it disappears. The difference is that the first is a dictionary whereas the second is just a dynamic wrapper around this dictionary. Both point to the same data though. ViewBag was introduced in ASP.NET MVC 3. Example: public ActionResult Index(){ ViewData["foo"] = "bar"; return View();} and inside the view you could use this value: <div>@ViewData["foo"]</div> Same with ViewBag but it is dynamic: public ActionResult Index(){ ViewBag.foo = "bar"; return View();} and inside the view you could use this value: <div>@ViewBag.foo</div> So as you can see ViewData/ViewBag are just an alternative way to pass information to a view from a controller action compared to the classic and recommended way which is using a view model: public class MyViewModel{ public string Foo { get; set; }} and then: public ActionResult Index(){ var model = new MyViewModel { Foo = "bar" }; return View(model);} and inside your strongly typed view: @model MyViewModel<div>@Html.DisplayFor(x => x.Foo)</div> As you can see using view models provide a strongly typed approach in passing information to a view from a controller action. TempData - it allows for persisting information for the duration of a single subsequent request. You store something inside TempData and then redirect. In the target controller action to which you redirected you could retrieve the value that was stored inside TempData. Example: public ActionResult Foo(){ TempData["foo"] = "bar"; return RedirectToAction("bar");}public ActionResult Bar(){ var value = TempData["foo"] as string; // use the value here. If you need to pass it to the view you could // use ViewData/ViewBag (I can't believe I said that but I will leave it for the moment) return View();} ASP.NET MVC will automatically expire the value that was stored in TempData once you read it. Under the covers ASP.NET MVC persists the information into the Session . Session - same as TempData except that it never expires - it will be valid for all requests, not a single redirect.
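For completeness, a Session example in the same style as the ones above (a sketch; Session here is the classic ASP.NET session state, so the value survives across any number of later requests until the session ends or you remove it):

    public ActionResult Foo()
    {
        Session["foo"] = "bar";               // persists for the whole session
        return RedirectToAction("Bar");
    }

    public ActionResult Bar()
    {
        var value = Session["foo"] as string; // still available on this and later requests
        return View();
    }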
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12676924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/54197/']}
jdg_71506
stackexchange
llm_judgeable_groundtruth_similarity
42051588
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Just wondering which one is the best way to use import: import * as Foo from './foo'; VS: import { bar, bar2, bar3 } from './foo'; In terms of efficiency, say for example, I'm using webpack for bundling all the JavaScript files. Will the first one actually importing everything even though I'm not using them in the main code? Some references that I can find are: In Airbnb style guide , they are recommending no wildcard so there will always be default import object, and this . Now provide the response and nothing else.
If you use webpack with the dead code elimination provided by the new uglify, or rollupjs with tree-shaking, then the unused imports will be stripped. I partially agree with the airbnb styleguide's advice not to use wildcard imports, although JavaScript's wildcard imports do not suffer from the same diseases as, for example, Python's or Java's wildcard imports, namely they do not pollute the scope with variable names defined in other modules (you can only access them by moduleB.foo , not foo , when using import * as moduleB from ... ). About the article on testing: I kind of understand the concerns, but I see nothing there that cannot be solved. You can mock the imports themselves with some custom module loader (a custom AMD module loader is literally 15 lines of code), so you don't have to mess with the local scope of the tested module.
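A small illustration of the two forms (hedged: whether the unused exports are really dropped depends on your bundler settings and on foo.js being free of side effects):

    // foo.js
    export function bar()  { return 1; }
    export function bar2() { return 2; }
    export function bar3() { return 3; }

    // main.js, option 1: namespace import
    import * as Foo from './foo';
    console.log(Foo.bar());   // bar2/bar3 can still be tree-shaken away by the bundler

    // main.js, option 2: named import
    import { bar } from './foo';
    console.log(bar());       // only bar is brought into the local scope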
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42051588', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4162778/']}
jdg_71507
stackexchange
llm_judgeable_groundtruth_similarity
710568
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have 120 files (genomes.faa) that all have headers between each sequence >GENOME1_00001 HYPOTHETICAL PROTEIN ANQFTIAQSQVGLEDALLDL>GENOME1_00002 HYPOTHETICAL PROTEIN BNQFTIAQSQVGLEDALLDL>GENOME1_00003 HYPOTHETICAL PROTEIN CNQFTIAQSQVGLEDALLDLetc. I am trying to remove the "_0000X " after the name and replace it with a "|" >GENOME1|HYPOTHETICAL PROTEIN ANQFTIAQSQVGLEDALLDL>GENOME1|HYPOTHETICAL PROTEIN BNQFTIAQSQVGLEDALLDL>GENOME1|HYPOTHETICAL PROTEIN CNQFTIAQSQVGLEDALLDLetc. I have tried doing this: for file in *.faadosed -r 's/_.*$/|/g' $file > $file.1done This does not keep the 'HYPOTHETICAL PROTEIN A' afterwards, resulting in >ERR1156171|MMRQSVQTVLP instead of >ERR1156171|HYPOTHETICAL PROTEIN AMMRQSVQTVLP Any help is appreciated! Now provide the response and nothing else.
I think you were very close to a working command. This worked for me on the few examples you gave: sed -E 's/_[0-9]+ /|/' "$file" > "$file.1" I changed the match expression from _.* to _[0-9]+ to limit the match to only the underscore, numeric digits, and space character. I removed the $ because that matches at the end of the line, not the end of the first word. I changed the end of the substitute command from /g to / because your examples have only one place in each line that needs editing, rather than multiple places. Also, rather use -E than -r for extended regular expressions, as the former is more compatible with other versions of sed; and quote the variable expansions in case any filenames contain whitespace or special characters.
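Putting that back into the loop from the question, the whole script might look like this (a sketch; as in the question, it writes each edited copy next to the original with a .1 suffix):

    for file in *.faa; do
        sed -E 's/_[0-9]+ /|/' "$file" > "$file.1"
    done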
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/710568', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/534500/']}
jdg_71508
stackexchange
llm_judgeable_groundtruth_similarity
283358
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a theoretical question. Let's say that we have two black boxes, each with two terminals. One contains an infinitely (or arbitrarily) large capacitor in series with a 1 ohm resistor and the other contains just a 1 ohm resistor. Is there a way to tell which is which using finite resources (e.g. upper bound on experiment time, upper bound on voltage sources, upper bound on equipment accuracy, etc)? My thinking is that since the capacitor is arbitrarily large, it will not get charged to a noticeable voltage in a given finite time and thus will be identical to a short (zero voltage regardless of finite current). Now provide the response and nothing else.
Yes, your analysis is correct for an infinite capacitor. However, anything less than that can be detected in an arbitrarily short time. The problem is that the size of the signal to notice the difference gets smaller as the time to run the experiment gets smaller. Larger current makes the effect larger in the same amount of time. Let's say your current is limited to 1 A and you have a 12 bit A/D in a 3.3 V microcontroller. Let's see how large a capacitor this could detect. The voltage change of a cap as a result of some Amps for some seconds is: V = A s / F Where A is the current in Amps, s is the time the current is applied in seconds, and F is the capacitance in Farads. Flipping this around to solve for the capacitance yields: F = A s / V The minimum voltage change we can detect is (3.3 V)/4095 = 806 µV. Plugging in our particulars, we get: F = A s / V = (1 A)(1 s)/(806 µV) = 1.2 kF That's a very large capacitor. If you can supply 5 A and wait 2 seconds, then you can detect a 10x larger capacitor. Or conversely, be able to measure 1.2 kF to 1 part in 10. Yet another way to look at this is to apply a constant voltage for a fixed time, then see how much the open-circuit voltage went up afterwards. The voltage on the capacitor will rise exponentially, asymptotically approaching the fixed voltage being applied. Again let's say we can measure down to 1 part in 4095 of the applied voltage. That comes out to 0.000244 time constants, so if that is what 1 second of charging corresponds to, the time constant must be about 4096 seconds. With a 1 Ω resistor, that means the cap is 4.1 kF. Note that cheap $20 voltmeters can measure much smaller voltages than a 12 bit A/D running from 3.3 V. Basically, it takes an unrealistically large capacitor to not be detectable via rather simple means.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/283358', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/134548/']}
jdg_71509
stackexchange
llm_judgeable_groundtruth_similarity
40601991
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a recent discussion about distributed processing and streaming I came across the concept of 'code moving to data'. Can someone please help explaining the same. Reference for this phrase is MapReduceWay . In terms of Hadoop, it's stated in a question but still could not figure out an explanation of the principle in a tech agnostic way. Now provide the response and nothing else.
The basic idea is easy: if code and data are on different machines, one of them must be moved to the other machine before the code can be executed on the data. If the code is smaller than the data, better to send the code to the machine holding the data than the other way around, if all the machines are equally fast and code-compatible. [Arguably you can send the source and JIT compile as needed]. In the world of Big Data, the code is almost always smaller than the data. On many supercomputers, the data is partitioned across many nodes, and all the code for the entire application is replicated on all nodes, precisely because the entire application is small compared to even the locally stored data. Then any node can run the part of the program that applies to the data it holds. No need to send the code on demand.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40601991', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/462123/']}
jdg_71510
stackexchange
llm_judgeable_groundtruth_similarity
990909
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is the gist of some code I'm writing. I'm concerned that I am not properly addressing the retain/release issues with the array class method on NSMutableArray. Is the following actually leaking memory? for(a while) { // do stuff NSMutableArray *a = nil; // do stuff if (!a) { a = [NSMutableArray array]; }} // for(a while) Now provide the response and nothing else.
You wouldn't leak memory in this code, and releasing the array yourself will cause a crash when the array is autoreleased at the end of the run loop. Most Cocoa classes provide a couple of ways of making a new object, and are very consistent with this convention: [[NSSomeObject alloc] init] : you are responsible for releasing the object (instance method). [NSSomeObject someObject] : the object will be autoreleased for you, usually at the end of the run loop (class method). It's roughly equivalent to [[[NSSomeObject alloc] init] autorelease] . The proper use of the instance method would be: a = [[NSMutableArray alloc] init];// do stuff[a release]; The proper use of the class method would be: a = [NSMutableArray array];// do stuff, array is in the autorelease pool Note that Apple has recommended you stay away from the convenience methods as much as possible to improve performance. This is controversial advice , may not save much processor time, and separates the alloc-init from the release on an object you may not actually care much about keeping.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/990909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/116169/']}
jdg_71511
stackexchange
llm_judgeable_groundtruth_similarity
29356135
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I only apply a permission class to a detail route? class EventViewSet(viewsets.ModelViewSet): @detail_route(methods=['post']) def messages(self, request, pk=None): ### Check a permissions class. ... Now provide the response and nothing else.
Update-1 From DRF 3.8 onwards, the detail_route decorator has been replaced with the action decorator, which also needs a detail argument. class EventViewSet(viewsets.ModelViewSet): @action(detail=True, methods=['post'], permission_classes=[permissions.PermissionClass_]) def messages(self, request, pk=None): # your view code Original post You can add permissions basically by doing this: class EventViewSet(viewsets.ModelViewSet): @detail_route( permission_classes=[ permissions.PermissionClass_], methods=['post']) def messages(self, request, pk=None): ### Check a permissions class. ...
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29356135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1658458/']}
jdg_71512
stackexchange
llm_judgeable_groundtruth_similarity
15262747
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a fragment that inflates an xml layout. My requirement is to update the text size on all my views inside my fragment when my Activity is resumed. I tried fragment.getView().invalidate(); which didn't seem to do the work. I also tried fragment.getView().requestLayout(); which didn't work either. On another activity, I have a ListFragment which needs to do the same thing. I tried listfragment.getListView().invalidate(); which did the trick , refreshing my list view and redrawing all the items inside it. I don't understand why one works but not the other. I have also seen people recommending initiating a fragment transaction and replacing the current fragment with a new one, and it has kept me wondering Why should I create a whole new fragment and replace my current fragment when all I need is to refresh the text on the views that my fragment contains. Fragment transaction method will prevent me from defining my fragment in the layout xml of my activity and I will have to programatically insert the fragment at the right position. Is there any simple approach to this? Now provide the response and nothing else.
I do not think there is a built-in method for that. The fragment rebuilds its UI in onCreateView()... but that happens only when the fragment is created or recreated. You'll have to implement your own updateUI method in which you specify which elements should update and how. It's rather good practice anyway, since you need to do that when the fragment is created in any case. However, if this is not enough, you could do something like replacing the fragment with the same one, forcing it to call onCreateView(): FragmentTransaction tr = getFragmentManager().beginTransaction();tr.replace(R.id.your_fragment_container, yourFragmentInstance);tr.commit(); NOTE To refresh a ListView you need to call notifyDataSetChanged() on the ListView's adapter.
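A rough sketch of that first suggestion (all names here are made up for illustration; the activity looks up its fragment and asks it to re-apply the text size whenever it resumes):

    // Inside the fragment
    public void updateUi(float textSizeSp) {
        TextView title = (TextView) getView().findViewById(R.id.title);
        title.setTextSize(textSizeSp);
        // ... update the other views the same way ...
    }

    // Inside the activity
    @Override
    protected void onResume() {
        super.onResume();
        MyFragment fragment = (MyFragment)
                getFragmentManager().findFragmentById(R.id.my_fragment);
        if (fragment != null) {
            fragment.updateUi(18f); // e.g. a value read from preferences
        }
    }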
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15262747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/803936/']}
jdg_71513
stackexchange
llm_judgeable_groundtruth_similarity
727774
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to induction so please bear with me. How can I prove using induction that, for each positive integer $n$, the sum of the first $n$ odd positive integers is $n^2$? I think $9$ can be an example since the sum of the first $9$ positive odd numbers is $1,3,5,7,9,11,13,15,17 = 81 = 9^2$, but where do I go from here. Now provide the response and nothing else.
Induction is done by demonstrating that if the condition is true for some $n$ then it must also be true for $n+1$. If you then show that the condition is true for the base case, here $n=1$, it must be true for all $n\geq 1$. For this problem: Step $1$: $n=1$ The sum of the first $1$ odd numbers is $1$, and $1^2=1$. Therefore the condition holds for $n=1$. Step $2$: induction If the sum of the first $n$ odd numbers is $n^2$, then, since the $(n+1)$th odd number is $2n+1$, the sum of the first $n+1$ odd numbers is $n^2 + (2n + 1) = (n+1)(n+1)=(n+1)^2$. So the condition is also true for $n+1$. Step $3$: conclusion Since we have shown that the condition is true for $n=1$, and that if it is true for $n$ then it is also true for $n+1$, it follows by induction that it is true for all $n\geq 1$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/727774', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/137413/']}
jdg_71514
stackexchange
llm_judgeable_groundtruth_similarity
518203
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is exercise 1.1.9 in Friedman's "Foundations of Modern Analysis": Let $\mathscr{D}$ consist of those sets [in $2^X$?] which are either finite or have a finite complement... If $X$ is not finite, then $\mathscr{D}$ is not a $\sigma$-algebra. Now, $\mathscr{D}$ is an algebra, so I figure this comes down to showing that if $A_n \in \mathscr{D}$ ($n=1,2,...$) then $\cup A_n$ is neither finite nor has finite complement. Problem is I don't know how to do this in an effective way. I've tried breaking the problem up ("suppose the $A_n$ are finite" etc) but there are too many subcases, which in themselves are probably more difficult than the original question. I'm sure this result follows immediately from one crucical observation, I just don't know what it is. So I would appreciate a hint with this one, thank you. Now provide the response and nothing else.
Since you asked, I've replaced my hint with a full solution: First consider matrices of the form $$A = \begin{pmatrix}B & C\\D & E\end{pmatrix}$$ Where $B$ is an $r \times r$ nonsingular matrix. Since invertibility is an open condition, this set of such matrices, denoted $Z$, is a submanifold of $M_{m \times n}$. Postmultiply by the nonsingular matrix $$\begin{pmatrix}I & -B^{-1}C\\0 & I\end{pmatrix}$$ to obtain the matrix $$\begin{pmatrix}B & 0\\D & -DB^{-1}C + E\end{pmatrix}$$ the original matrix has rank $r$ iff this new matrix has rank $r$, which is clearly only the case if $-DB^{-1}C + E = 0$. Thus we can define a map $f$ from $Z$ to matrices of size $(m-r) \times (n-r)$ that sends $A$ as above to $-DB^{-1}C + E$. This is clearly smooth, so it suffices to check that it is a submersion. Now, the tangent space of the image is the same space as the image, since the image is a linear space. Let $X$ be an $(m-r) \times (n-r)$ matrix. Consider the curve passing through any matrix $A \in Z$. $$\gamma(t) = \begin{pmatrix}B & C\\D & E+tX\end{pmatrix}$$ The derivative of $f \circ \gamma$ at $0$ is $X$, and this is equal to $$df_{A}(\begin{pmatrix}0 & 0\\0 & X\end{pmatrix})$$ so that at any arbitrary point $A$ we have shown the existence of a tangent vector at $A$ that is mapped by $df$ to $X$. This verifies that $f$ is submersion, and hence $f^{-1}(0)$ is a smooth submanifold of $\mathbb{R}^{mn}$. The dimension $f^{-1}(0)$ is $mn - (m-r)(n-r)$, i.e. of codimension $(m-r)(n-r)$. Of course, we have only shown that matrices of rank $r$ contained in $Z$ form a smooth submanifold. However, any matrix can be put into the form of matrices in $Z$ by rearranging rows and columns, which is just a linear isomorphism. Thus if $A$ is matrix of rank $r$, we have a map $R$ to a matrix in $Z$ contained in chart $\psi$. Then we have that $\psi \circ R$ is a smooth chart around $A$ inherited from a chart on $M_{m \times n}$. The collection of these charts then extends to a maximal atlas giving the set of rank-$r$ matrices the structure of a smooth submanifold.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/518203', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/77637/']}
jdg_71515
stackexchange
llm_judgeable_groundtruth_similarity
5659384
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to populate a drop down select form with values from a database. Here is what I have currently. $modes = Set::combine($this->Setting->find('all', array('conditions' => array('setting_name LIKE' => 'mode_%'))), '{n}.Setting.id','{n}.Setting.setting_name');$this->set('modes', $modes); Then in the view, this is what I have echo $form->select('current_mode',$modes); That output <select name="data[Setting][current_mode]" id="SettingCategoryId"><option value=""></option><option value="2">mode_2</option><option value="1">mode_1</option><option value="3">mode_3</option></select> The output that I have right now almost work but how can I make the output to be like this? <select name="data[Setting][current_mode]" id="SettingCategoryId"><option value="mode_2">Title 2</option><option value="mode_1">Title 1</option><option value="mode_3">Title 3</option></select> Note: 1. no default option with empty value 2. Option's value isn't the id and titles comes from a "title" field in the table Thanks, Tee Now provide the response and nothing else.
See http://book.cakephp.org/view/1022/find-list and http://book.cakephp.org/view/1062/displayField . $settings = $this->Setting->find('list', array( 'conditions' => array('Setting.setting_name LIKE' => 'mode_%'), 'fields' => array('Setting.id', 'Setting.title')));$this->set(compact('settings'));// viewecho $this->Form->input('current_mode', array( 'type' => 'select', 'options' => $settings, 'empty' => false));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5659384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166940/']}
jdg_71516
stackexchange
llm_judgeable_groundtruth_similarity
399250
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: PHP's parse_url() has a host field, which includes the full host. I'm looking for the most reliable (and least costly) way to only return the domain and TLD. Given the examples: http://www.google.com/foo , parse_url() returns www.google.com for host http://www.google.co.uk/foo , parse_url() returns www.google.co.uk for host I am looking for only google.com or google.co.uk . I have contemplated a table of valid TLD's/suffixes and only allowing those and one word. Would you do it any other way? Does anyone know of a pre-canned valid REGEX for this sort of thing? Now provide the response and nothing else.
How about something like that? function getDomain($url) { $pieces = parse_url($url); $domain = isset($pieces['host']) ? $pieces['host'] : ''; if (preg_match('/(?P<domain>[a-z0-9][a-z0-9\-]{1,63}\.[a-z\.]{2,6})$/i', $domain, $regs)) { return $regs['domain']; } return false;} Will extract the domain name using the classic parse_url and then look for a valid domain without any subdomain (www being a subdomain). Won't work on things like 'localhost'. Will return false if it didn't match anything. // Edit: Try it out with: echo getDomain('http://www.google.com/test.html') . '<br/>';echo getDomain('https://news.google.co.uk/?id=12345') . '<br/>';echo getDomain('http://my.subdomain.google.com/directory1/page.php?id=abc') . '<br/>';echo getDomain('https://testing.multiple.subdomain.google.co.uk/') . '<br/>';echo getDomain('http://nothingelsethan.com') . '<br/>'; And it should return: google.comgoogle.co.ukgoogle.comgoogle.co.uknothingelsethan.com Of course, it won't return anything if it doesn't get through parse_url , so make sure it's a well-formed URL. // Addendum: Alnitak is right. The solution presented above will work in most cases but not necessarily all and needs to be maintained to make sure, for example, that their aren't new TLD with .morethan6characters and so on. The only reliable way of extracting the domain is to use a maintained list such as http://publicsuffix.org/ . It's more painful at first but easier and more robust on the long-term. You need to make sure you understand the pros and cons of each method and how it fits with your project.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/399250', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13203/']}
jdg_71517
stackexchange
llm_judgeable_groundtruth_similarity
6088372
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What are the ways in which to quantify the texture of a portion of an image? I'm trying to detect areas that are similar in texture in an image, sort of a measure of "how closely similar are they?" So the question is what information about the image (edge, pixel value, gradient etc.) can be taken as containing its texture information. Please note that this is not based on template matching. Wikipedia didn't give much details on actually implementing any of the texture analyses. Now provide the response and nothing else.
Do you want to find two distinct areas in the image that look the same (same texture), or match a texture in one image to a texture in another? The second is harder due to different radiometry. Here is a basic scheme for measuring the similarity of areas. 1) Write a function which takes an area of the image as input and calculates a scalar value, like average brightness; this scalar is called a feature. 2) Write more such functions to obtain about 8-30 features, which together form a vector that encodes information about the area. 3) Calculate such a vector for both areas that you want to compare. 4) Define a similarity function which takes two vectors and outputs how alike they are. You need to focus on steps 2 and 4. Step 2: Use features such as std() of brightness, some kind of corner detector, an entropy filter, a histogram of edge orientations, a histogram of FFT frequencies (x and y directions). Use color information if available. Step 4: You can use cosine similarity, min-max or weighted cosine. After you implement about 4-6 such features and a similarity function, start to run tests. Look at the results and try to understand why or where it doesn't work, then add a specific feature to cover that case. For example, if you see that a texture with big blobs is regarded as similar to a texture with tiny blobs, add a morphological filter that calculates the density of objects larger than about 20 sq. pixels. Iterate this identify-the-problem / design-a-feature loop about 5 times and you will start to get very good results.
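A minimal Python/NumPy sketch of steps 1-4 (illustrative only; the feature set is tiny and the choices of bins, gradients, and the epsilon are arbitrary):

    import numpy as np

    def features(patch):
        # Turn a 2-D grayscale patch into a small feature vector.
        gy, gx = np.gradient(patch.astype(float))
        return np.array([
            patch.mean(),                     # average brightness
            patch.std(),                      # contrast
            np.abs(gx).mean(),                # horizontal edge energy
            np.abs(gy).mean(),                # vertical edge energy
            np.histogram(patch, bins=8, range=(0, 255), density=True)[0].std(),
        ])

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # similarity = cosine_similarity(features(patch_a), features(patch_b))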
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6088372', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/297353/']}
jdg_71518
stackexchange
llm_judgeable_groundtruth_similarity
199847
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I came across a simple circuit in which, when a reed switch (which is normally closed) is activated, an LED is turned on (the LED turns ON when the switch is closed, as depicted in the schematic). Obviously the LED is ON only when the magnet is away from the switch. My question is how can the LED stay ON even when the magnet comes close to the switch after the first activation? I would like a solution without the use of a microcontroller. In order to make things clear: If the magnet is next to the switch the LED is OFF. If the magnet goes away the LED is ON. As long as the magnet is away the LED is ON, but I want the LED to remain ON even if the magnet comes back next to the switch, as long as the switch has been activated once. [CircuitLab schematic omitted] Now provide the response and nothing else.
You might be able to use a thyristor for this. Once triggered they stay on until the current is interrupted by another switch. e.g., [CircuitLab schematic omitted] You'll need to find one that will stay on with 10 - 20 mA going through it. Alternatively, you can make a thyristor with a PNP and NPN transistor. A web search should get you plenty of examples.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/199847', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/38896/']}
jdg_71519
stackexchange
llm_judgeable_groundtruth_similarity
25081344
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've always heard and searched for new php 'good writing practice', for example: It's better (for performance) to check if array key exists than search in array, but also it seems better for memory too: Assuming we have: $array = array( 'one' => 1, 'two' => 2, 'three' => 3, 'four' => 4,); this allocates 1040 bytes of memory, and $array = array( 1 => 'one', 2 => 'two', 3 => 'three', 4 => 'four',); requires 1136 bytes I understand that the key and value surely will have different storing mechanism, butplease can you actually point me to the principle how does it work? Example 2 (for @teuneboon) : $array = array( 'one' => '1', 'two' => '2', 'three' => '3', 'four' => '4',); 1168 bytes $array = array( '1' => 'one', '2' => 'two', '3' => 'three', '4' => 'four',); 1136 bytes consuming same memory: 4 => 'four', '4' => 'four', Now provide the response and nothing else.
Note, answer below is applicable for PHP prior to version 7 as in PHP 7 major changes were introduced which also involve values structures. TL;DR Your question is not actually about "how memory works in PHP" (here, I assume, you meant "memory allocation"), but about "how arrays work in PHP" - and these two questions are different. To summarize what's written below: PHP arrays aren't "arrays" in classical sense. They are hash-maps Hash-map for PHP array has specific structure and uses many additional storage things, such as internal links pointers Hash-map items for PHP hash-map also use additional fields to store information. And - yes, not only string/integer keys matters, but also what are strings themselves, which are used for your keys. Option with string keys in your case will "win" in terms of memory amount because both options will be hashed into ulong (unsigned long) keys hash-map, so real difference will be in values, where string-keys option has integer (fixed-length) values, while integer-keys option has strings (chars-dependent length) values. But that may not always will be true due to possible collisions. "String-numeric" keys, such as '4' , will be treated as integer keys and translated into integer hash result as it was integer key. Thus, '4'=>'foo' and 4 => 'foo' are same things. Also, important note : the graphics here are copyright of PHP internals book Hash-map for PHP arrays PHP arrays and C arrays You should realize one very important thing: PHP is written on C, where such things as "associative array" simply does not exist. So, in C "array" is exactly what "array" is - i.e. it's just a consecutive area in memory which can be accessed by a consecutive offset. Your "keys" may be only numeric, integer and only consecutive, starting from zero. You can't have, for instance, 3 , -6 , 'foo' as your "keys" there. So to implement arrays, which are in PHP, there's hash-map option, it uses hash-function to hash your keys and transform them to integers, which can be used for C-arrays. That function, however, will never be able to create a bijection between string keys and their integer hashed results. And it's easy to understand why: because cardinality of strings set is much, much larger that cardinality of integer set. Let's illustrate with example: we'll recount all strings, up to length 10, which have only alphanumeric symbols (so, 0-9 , a-z and A-Z , total 62): it's 62 10 total strings possible. It's around 8.39E+17 . Compare it with around 4E+9 which we have for unsigned integer (long integer, 32-bits) type and you'll get the idea - there will be collisions . PHP hash-map keys & collisions Now, to resolve collisions, PHP will just place items, which have same hash-function result, into one linked list. So, hash-map would not be just "list of hashed elements", but instead it will store pointers to lists of elements (each element in certain list will have same hash-function key). And this is where you have point to how it will affect memory allocation: if your array has string keys, which did not result in collisions, then no additional pointers inside those list would be needed, so memory amount will be reduced (actually, it's a very small overhead, but, since we're talking about precise memory allocation, this should be taken to account). And, same way, if your string keys will result into many collisions, then more additional pointers would be created, so total memory amount will be a bit more. 
To illustrate those relations within those lists, here's a graphic: Above there is how PHP will resolve collisions after applying hash-function. So one of your question parts lies here, pointers inside collision-resolution lists. Also, elements of linked lists are usually called buckets and the array, which contains pointers to heads of those lists is internally called arBuckets . Due to structure optimization (so, to make such things as element deletion, faster), real list element has two pointers, previous element and next element - but that's only will make difference in memory amount for non-collision/collision arrays little wider, but won't change concept itself. One more list: order To fully support arrays as they are in PHP, it's also needed to maintain order , so that is achieved with another internal list. Each element of arrays is a member of that list too. It won't make difference in terms of memory allocation, since in both options this list should be maintained, but for full picture, I'm mentioning this list. Here's the graphic: In addition to pListLast and pListNext , pointers to order-list head and tail are stored. Again, it's not directly related to your question, but further I'll dump internal bucket structure, where these pointers are present. Array element from inside Now we're ready to look into: what is array element, so, bucket : typedef struct bucket { ulong h; uint nKeyLength; void *pData; void *pDataPtr; struct bucket *pListNext; struct bucket *pListLast; struct bucket *pNext; struct bucket *pLast; char *arKey;} Bucket; Here we are: h is an integer (ulong) value of key, it's a result of hash-function. For integer keys it is just same as key itself (hash-function returns itself) pNext / pLast are pointers inside collision-resolution linked list pListNext / pListLast are pointers inside order-resolution linked list pData is a pointer to the stored value. Actually, value isn't same as inserted at array creation, it's copy , but, to avoid unnecessary overhead, PHP uses pDataPtr (so pData = &pDataPtr ) From this viewpoint, you may get next thing to where difference is: since string key will be hashed (thus, h is always ulong and, therefore, same size), it will be a matter of what is stored in values. So for your string-keys array there will be integer values, while for integer-keys array there will be string values, and that makes difference. However - no, it isn't a magic : you can't "save memory" with storing string keys such way all the times, because if your keys would be large and there will be many of them, it will cause collisions overhead (well, with very high probability, but, of course, not guaranteed). It will "work" only for arbitrary short strings, which won't cause many collisions. Hash-table itself It's already been spoken about elements (buckets) and their structure, but there's also hash-table itself, which is, in fact, array data-structure. 
So, it's called _hashtable : typedef struct _hashtable { uint nTableSize; uint nTableMask; uint nNumOfElements; ulong nNextFreeElement; Bucket *pInternalPointer; /* Used for element traversal */ Bucket *pListHead; Bucket *pListTail; Bucket **arBuckets; dtor_func_t pDestructor; zend_bool persistent; unsigned char nApplyCount; zend_bool bApplyProtection;#if ZEND_DEBUG int inconsistent;#endif} HashTable; I won't describe all the fields, since I've already provided much info, which is only related to the question, but I'll describe this structure briefly: arBuckets is what was described above, the buckets storage, pListHead / pListTail are pointers to order-resolution list nTableSize determines size of hash-table. And this is directly related to memory allocation: nTableSize is always power of 2. Thus, it's no matter if you'll have 13 or 14 elements in array: actual size will be 16. Take that to account when you want to estimate array size. Conclusion It's really difficult to predict, will one array be larger than another in your case. Yes, there are guidelines which are following from internal structure, but if string keys are comparable by their length to integer values (like 'four' , 'one' in your sample) - real difference will be in such things as - how many collisions occurred, how many bytes were allocated to save the value. But choosing proper structure should be matter of sense, not memory. If your intention is to build the corresponding indexed data, then choice always be obvious. Post above is only about one goal: to show how arrays actually work in PHP and where you can find the difference in memory allocation in your sample. You may also check article about arrays & hash-tables in PHP: it's Hash-tables in PHP by PHP internals book: I've used some graphics from there. Also, to realize, how values are allocated in PHP, check zval Structure article, it may help you to understand, what will be differences between strings & integers allocation for values of your arrays. I didn't include explanations from it here, since much more important point for me - is to show array data structure and what may be difference in context of string keys/integer keys for your question.
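If you want to reproduce the byte counts from the question yourself, a simple (approximate) way is to diff memory_get_usage() around the array creation; as noted at the top, the absolute numbers depend heavily on the PHP version and build:

    <?php
    // Rough measurement sketch; results vary per PHP version.
    $before = memory_get_usage();
    $array = array(
        'one'   => 1,
        'two'   => 2,
        'three' => 3,
        'four'  => 4,
    );
    $after = memory_get_usage();
    echo ($after - $before) . " bytes\n";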
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25081344', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3172092/']}
jdg_71520
stackexchange
llm_judgeable_groundtruth_similarity
13669554
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I have following arrays: Array ( [0] => Array ( [id] => 5 [name] => Education ) [1] => Array ( [id] => 4 [name] => Computers ) [3] => Array ( [id] => 7 [name] => Science [4] => Array ( [id] => 1 [name] => Sports ) ) And the second one: Array ( [0] => Array ( [id] => 1 [title] => Sport ) [1] => Array ( [id] => 7 [title] => Sci ) [3] => Array ( [id] => 4 [title] => Comp [4] => Array ( [id] => 5 [title] => Edu ) ) And desired output is: Array ( [0] => Array ( [id] => 5 [name] => Education [title] => Edu ) [1] => Array ( [id] => 4 [name] => Computers [title] => Comp ) [3] => Array ( [id] => 7 [name] => Science [title] => Sci [4] => Array ( [id] => 1 [name] => Sports [title] => Sport ) ) I have managed to merge these arrays with simply: foreach($first as $key => $value){ $result[$key] = array_merge($first[$key], $second[$key]);} But the output is not combined correctly: Array ( [0] => Array ( [id] => 5 [name] => Education [title] => Sport ) [1] => Array ( [id] => 4 [name] => Computers [title] => Sci ) [3] => Array ( [id] => 7 [name] => Science [title] => Comp [4] => Array ( [id] => 1 [name] => Sports [title] => Edu ) ) The problem is I would like to merge these arrays on the same id . Desired output sorting should be same as in the first array. How can I achieve this? Any help is much appreciated. Now provide the response and nothing else.
You can just do a nested loop and check if the id values match, then add title to $first (or name to $second ) foreach($first as $key => $value){ foreach($second as $value2){ if($value['id'] === $value2['id']){ $first[$key]['title'] = $value2['title']; } }}
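If the arrays get large, you can avoid the nested loop by indexing one array by id first (just a sketch of the same merge):

    // Build a lookup of title keyed by id, then walk $first once.
    $titlesById = array();
    foreach ($second as $item) {
        $titlesById[$item['id']] = $item['title'];
    }
    foreach ($first as $key => $value) {
        if (isset($titlesById[$value['id']])) {
            $first[$key]['title'] = $titlesById[$value['id']];
        }
    }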
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13669554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1292810/']}
jdg_71521
stackexchange
llm_judgeable_groundtruth_similarity
24479060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to compile Caffe ( http://caffe.berkeleyvision.org/installation.html ) and I get the following errors: /usr/bin/ld: cannot find -lcblas/usr/bin/ld: cannot find -latlas However, I have these libraries installed (libatlas).My LD_LIBRARY_PATH contains the path /usr/lib/atlas-base and it contains the files libcblas.so and libatlas.so (and some other files as well). Why ld can't find these libraries?Thanks. Now provide the response and nothing else.
tl;dr: Caffe makefile looks for libblas.so in /usr/lib . If missing, update-alternatives creates a symbolic link /usr/lib/libblas.so to the location where it is installed. Same applies to libcblas.so . LD_LIBRARY_PATH is for runtime, and doesn't have anything to do with this. LD_LIBRARY_PATH doesn't really help you when compiling. It only provides directories to look for shared libraries when executing programs that rely on them, after they are compiled. Still, when linking during the compilation, the compiler needs to find these shared libraries, and does so by other means than LD_LIBRARY_PATH . More to the point: if compiling with gcc or clang , the directories in which to look for libraries to link with are provided using the -L flag, and it does not consider the LD_LIBRARY_PATH environment variable. Common locations for libblas.so are /usr/lib/atlas-base/ and /usr/lib/libblas/ . The Makefile for caffe doesn't do anything particular to try and locate these subdirectories, and relies on these libraries being in the default library directory /usr/lib/ . Typically a symbolic link /usr/lib/libblas.so exists, and points to the real location of the shared library. For some reason, this wasn't the case in your initial configuration. When dealing with multiple alternatives for packages, update-alternatives comes in handy. In the case of libblas.so it let's you easily switch between multiple implementations ( libblas , openblas ) you might have installed, and does so by changing out the symbolic links. sudo update-alternatives --config libblas.so created this symbolic link when it was missing, which in turn let the compiler find the shared library, solving your problem. This is indicated by the output of the command: $ sudo update-alternatives --config libblas.soThere is only one alternative in link group libblas.so (providing /usr/lib/libblas.so): /usr/lib/libblas/libblas.soNothing to configure. Same kind of reasoning applies to libcblas.so .
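If update-alternatives is not an option for some reason, you can check where the library actually lives and create the expected symlink yourself (the paths below are just the common Ubuntu locations mentioned above; take the real source path from the ldconfig output):

    # see which BLAS libraries the dynamic linker already knows about
    ldconfig -p | grep -i blas

    # example: expose a libblas.so that only exists in a subdirectory
    sudo ln -s /usr/lib/libblas/libblas.so /usr/lib/libblas.so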
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24479060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/445114/']}
jdg_71522
stackexchange
llm_judgeable_groundtruth_similarity
11352486
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a game with monsters. Each one has a list of stats that are all going to be ints. I can set up each stat as it's own variable but I'd prefer to keep them in an NSDictionary since they are all related. I'm running into a problem when I'm trying to change the value's of each stat. What I Have: -(id) init { self = [super init]; if(self) { stats = [NSDictionary dictionaryWithObjectsAndKeys: @"Attack", 0, @"Defense", 0, @"Special Attack", 0, @"Special Defense", 0, @"HP", 0, nil]; } return self;} What I want to do -(void) levelUp { self.level++; [self.stats objectForKey:@"Attack"] += (level * 5); [self.stats objectForKey:@"Defense"] += (level * 5); [self.stats objectForKey:@"Special Attack"] += (level * 5); [self.stats objectForKey:@"Special Defense"] += (level * 5); [self.stats objectForKey:@"HP"] += (level * 5);} Error I'm Getting Arithmetic on pointer to interface 'id', which is not a constant size in non-fragile ABI So it seems obvious to me that the reason I'm getting the problem is that I'm getting an object returned from objectForKey instead of an integer. So I tried to do the intValue method on the object I'm getting but that gave me another error, specifically: Assigning to 'readonly' return result of an objective-c message not allowed I'm out of ideas for how to fix this. Any help? Would it be better to just give up the idea to store them all together and just use an int property for each stat? Now provide the response and nothing else.
You can only store objects, not primitives, within Cocoa collection classes, so to store numbers you need to use NSNumber objects. You need to use an NSMutableDictionary if you wish to change the contents later. Your call to dictionaryWithObjectsAndKeys has the keys and values reversed. Your stats object is not being retained, so it will be released next time round the run loop (if you're using manual reference counting, that is). You want: stats = [[NSMutableDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:0], @"Attack", [NSNumber numberWithInt:0], @"Defense", [NSNumber numberWithInt:0], @"Special Attack", [NSNumber numberWithInt:0], @"Special Defense", [NSNumber numberWithInt:0], @"HP", nil] retain]; In order to change the values you need to create a new NSNumber object as they are immutable, so something like: NSNumber *num = [stats objectForKey:@"Attack"];NSNumber *newNum = [NSNumber numberWithInt:[num intValue] + (level * 5)];[stats setObject:newNum forKey:@"Attack"]; All pretty tedious if you ask me; there must be an easier way, for example how about creating an Objective-C class to store and manipulate this stuff?
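A rough sketch of that last suggestion (a hypothetical class, written in the same manual-reference-counting style as the rest of the answer):

    @interface MonsterStats : NSObject
    @property (nonatomic, assign) int attack;
    @property (nonatomic, assign) int defense;
    - (void)levelUpBy:(int)points;
    @end

    @implementation MonsterStats
    @synthesize attack, defense;
    - (void)levelUpBy:(int)points {
        self.attack  += points;
        self.defense += points;
    }
    @end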
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11352486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/856275/']}
jdg_71523
stackexchange
llm_judgeable_groundtruth_similarity
85469
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In light of recent developments on SSL issues such as BEAST and POODLE, I decided to configure my browser to only allow TLS1.1 and higher. The trouble is, that I am finding a lot of websites which do not work correctly (some websites serve all JS and CSS resources as HTTPS always) or at all. A bit of digging usually reveals that there is no cipher overlap, or no protocol overlap. In a lot of cases, I found websites are only accepting maximum protocol version of TLS1.0. My question is, is there some good reason from a security perspective to allow only TLS 1.0, or is it simply pure "laziness", and there is no good reason not to allow TLS1.1 and TLS1.2 in addition to TLS1.0? Now provide the response and nothing else.
No, there is absolutely no security-related reason to continue to support TLS 1.0, but there are several other business concerns which can twist the arm of a system engineer into allowing it. For larger sites, they may be trying not to leave people with older browsers out in the cold. In some situations, the person publishing the website needs to assume you are using IE 6 with no updates. It sounds ridiculous, and it is, but it's a fact of life. That said, if the server doesn't support anything above TLS 1.0, that's a problem no matter how you slice it. You should e-mail them about it. Or, if it's not fixed, stop using the site. Sometimes it only takes one angry e-mail to set things right. Edit: I shall emphasize that supporting a legacy operating system or software stack is no excuse for exposing your users to encryption with major known faults.
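On the server side, allowing the newer protocol versions is usually a one-line configuration change. As an illustration only: this is an nginx directive, the exact spelling differs for other web servers, and the right protocol list depends on which clients you must still support:

# nginx: drop SSLv3 and TLS 1.0, keep TLS 1.1 and 1.2
ssl_protocols TLSv1.1 TLSv1.2;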
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/85469', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/71961/']}
jdg_71524
stackexchange
llm_judgeable_groundtruth_similarity
339520
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that the power factor of an induction motor varies with load and is very low at no load and high at full load, but does the reactive power vary as well? If so, how much can it vary? Now provide the response and nothing else.
When the load decreases, the power factor decreases primarily because of the decrease in real power. Reactive power is mostly due to the magnetizing current, but there is a small amount due to the stator and rotor leakage reactances. The magnetizing current depends on the voltage across the magnetizing branch of the equivalent circuit; that voltage increases slightly with reduced load, because the voltage drop across the stator resistance and leakage reactance falls as the current falls. In the per-phase equivalent circuit, R1 and X1 are the stator resistance and leakage reactance, Gc and Bm form the magnetizing branch, and X2 is the rotor leakage reactance. R2/s represents the rotor resistance combined with a variable resistance that models the mechanical load. Typical motor performance data shows reactive VA decreasing quite significantly as the motor load declines to zero: the slight increase in magnetizing VA is insignificant compared with the loss of reactive VA in the rotor branch as the rotor current drops to near zero.
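A rough per-phase decomposition makes the load dependence visible; this is a back-of-the-envelope sketch, not taken from the original answer: $$Q \approx \underbrace{E^2 B_m}_{\text{magnetizing}} + I_1^2 X_1 + I_2^2 X_2$$ where \$E\$ is the voltage across the magnetizing branch and \$I_1\$, \$I_2\$ are the stator and rotor currents. At no load, \$I_2 \approx 0\$ and \$I_1\$ is little more than the magnetizing current, so the leakage terms nearly vanish and \$Q\$ falls to roughly \$E^2 B_m\$; at full load, the \$I^2 X\$ terms add a substantial extra reactive component. So the reactive power does vary with load, although proportionally far less than the real power does.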
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/339520', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/19452/']}
jdg_71525
stackexchange
llm_judgeable_groundtruth_similarity
609491
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm a software dev dabbling in electronics and I am looking into wiring up a position-sensitive detector, the Hamamatsu S3932, on a breadboard. My knowledge is quite rudimentary and my first goal is to wire it up in a circuit without frying it. I am vaguely aware that diodes like a resistor, and that voltage regulators exist, but I have no idea how to take it from here. The S3932 data sheet has a lot to say about the optical performance under various circumstances but I am unable to find anything understandable to me on how to operate it. I am naively assuming that, after creating an appropriate circuit, I can then just take basic measurements with a multimeter but please do correct me if I'm wrong. Now provide the response and nothing else.
A linear PSD consists of two photodiodes. When you shine a small spot of light onto the PSD, the two photodiodes generate a current that is proportional to the distance between the light and each photodiode. As you move the spot closer to one end or the other, the balance of current generated shifts toward one photodiode or the other. If you look through the datasheet, page 5 shows that the photodiodes have a common cathode, so the internal diagram of the PSD can be drawn as two photodiodes, \$X_1\$ and \$X_2\$ , sharing a common cathode terminal. The datasheet also has a section called "Conversion formula of spot light position on the PSD". Rearranging the datasheet's formula for the distance from center, \$x\$ , gives us: $$x = \frac L 2 \times \frac{I_2-I_1} {I_1+I_2}$$ where \$I_1\$ and \$I_2\$ are the currents through photodiodes \$X_1\$ and \$X_2\$ . So, in order to figure out the position of the light spot on the PSD, you first need to measure the current generated through each photodiode. This is achieved by applying a reverse bias voltage across the photodiode, then measuring the current that flows using a transimpedance amplifier. The datasheet indicates that an appropriate bias voltage is 5V. It also indicates that the photocurrent ranges up to 100µA when the light spot is at its most extreme position on the sensor. A transimpedance amplifier can be used to convert the 0-100µA current input into a 0-1V output. The equation for a transimpedance amplifier is as follows: $$-I_{in} = \frac {V_{out}} {R_f}$$ Since we know \$I_{in}\$ is going to be between 0 and 100µA, we can calculate a suitable feedback resistance for any given output voltage that we would like: $${R_f} = \frac {V_{out}} {-I_{in}}$$ Substituting values in: $$\frac {1V} {100µA} = 10000$$ This tells us we need a 10 kΩ feedback resistor. A small capacitor across the feedback resistor is just there for filtering; 100pF is fine here. In a practical design, your photodiode is the current source feeding the amplifier's inverting input, and the datasheet notes that a 1kΩ current-limiting resistor should be placed in series with the photodiode, so each channel is: 5V bias, 1kΩ series resistor, photodiode, transimpedance amplifier. (The TL081 used in the original schematics was just the default op-amp in the schematic editor and hasn't been evaluated for suitability here.) From here you can read these voltages with a microcontroller and do the calculation yourself, or you could use a summing amplifier and a differencing amplifier to find the sum and difference of the voltages in hardware, then read those with an MCU or use an analog divider to do the whole thing in hardware.
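Once the two amplifier outputs have been digitized, the position calculation itself is only a few lines. A sketch in Python; the active-area length, the 12-bit ADC range, and the 1V full-scale figure are illustrative assumptions, not values taken from the datasheet:

# all of these constants are assumptions for illustration -- check your own design
L_MM = 12.0                # active length of the PSD, in mm
ADC_FULL_SCALE = 4095      # 12-bit ADC
V_FULL_SCALE = 1.0         # transimpedance stage scaled for ~1 V at 100 uA
R_F = 10_000.0             # 10 kOhm feedback resistor

def spot_position_mm(adc_x1: int, adc_x2: int) -> float:
    """Return the light-spot distance from the sensor centre, in mm."""
    # Convert raw ADC counts back to photocurrents (amps); the common scale
    # factor cancels in the ratio, so this step is mostly for readability.
    i1 = (adc_x1 / ADC_FULL_SCALE) * V_FULL_SCALE / R_F
    i2 = (adc_x2 / ADC_FULL_SCALE) * V_FULL_SCALE / R_F
    total = i1 + i2
    if total == 0:
        raise ValueError("no light detected on the PSD")
    return (L_MM / 2) * (i2 - i1) / total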
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/609491', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/8937/']}
jdg_71526
stackexchange
llm_judgeable_groundtruth_similarity
31179919
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to customise one UITextField for some reason. So currently I subclassed it and added few methods. Instead of that can I use extension over UITextField ? Which is good approach ? Please explain ! Now provide the response and nothing else.
As a general rule of thumb (YMMV): Are you adding general-purpose functionalities that should be available to every UITextField ? If so, make an extension. All UITextField instances can call the new methods. Are you adding functionality that should be restricted to special instances of UITextField that you would identify precisely? If so, make a subclass. Only the instances of the subclass can use the new methods. There are other technical considerations, like extensions can't add fields, for instance.
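A concrete illustration of the rule of thumb, in Swift (the names and styling choices here are hypothetical, purely to show the shape of each approach):

// Extension: every UITextField in the app gains this helper.
extension UITextField {
    func applyStandardPadding() {
        let padding = UIView(frame: CGRect(x: 0, y: 0, width: 8, height: 1))
        leftView = padding
        leftViewMode = .always
    }
}

// Subclass: only fields created as BorderedTextField behave this way, and a
// subclass can carry stored state (highlightColor), which an extension cannot.
class BorderedTextField: UITextField {
    var highlightColor: UIColor = .blue   // stored property: needs a subclass

    override func layoutSubviews() {
        super.layoutSubviews()
        layer.borderWidth = 1
        layer.borderColor = highlightColor.cgColor
    }
}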
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31179919', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3492982/']}
jdg_71527
stackexchange
llm_judgeable_groundtruth_similarity
34406196
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We're trying to use Swift structs where we can. We are also using RxSwift which has methods which take closures. When we have a struct that creates a closure that refers to self , that creates a strong reference cycle . import Foundationimport RxSwiftstruct DoesItLeak { var someState: String = "initial value" var someVariable: Variable<String> = Variable("some stuff") let bag = DisposeBag() mutating func someFoo() { someVariable.subscribeNext { person in self.someState = "something" } .addDisposableTo(bag) }} How do I know this? If I create 100,000 DoesItLeak objects and call someFoo() on each of them, I believe I have 100,000 objects with strong reference cycles. In other words, when I get rid of the DoesItLeak array containing those objects, the objects stay in memory. If I do not call someFoo(), there is no problem. Variable is a class. So, I can see this memory issue by using xcode's Instruments' Allocations and filtering in Variable< String > If I try to use [weak self] such as in the following, I get a compiler error: someVariable.subscribeNext { [weak self] person in The compiler error is "weak cannot be applied to non-class type" In real/non-example code, we access methods and variables via self and it's a memory issue. How can I resolve this memory issue while keeping the DoesItLeak a struct? Thanks for your help. Now provide the response and nothing else.
As Darren put it in the comments: " DoesItLeak can't be a struct " We cannot have the DoesItLeak be a struct and safely resolve the strong reference cycle issue. Value types like structs exist on the stack frame. Closures and classes are reference types. As the Strong Reference Cycles for Closures section puts it: This strong reference cycle occurs because closures, like classes, are reference types. Since the struct has the Variable class and the closure referring to self is stored into the Variable class using subscribeNext , it creates the strong reference cycle. See "Resolving Strong Reference Cycles for Closures" in Automatic Reference Counting Apple documentation.
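A sketch of the class-based version, keeping the names and the older subscribeNext / addDisposableTo spelling from the question (newer RxSwift releases use subscribe(onNext:) and disposed(by:), and replace Variable with BehaviorRelay, but the ownership pattern is the same):

final class DoesItLeak {
    var someState: String = "initial value"
    var someVariable: Variable<String> = Variable("some stuff")
    let bag = DisposeBag()

    func someFoo() {
        // [weak self] compiles now that DoesItLeak is a class,
        // so the closure no longer keeps the object alive.
        someVariable.subscribeNext { [weak self] person in
            self?.someState = "something"
        }
        .addDisposableTo(bag)
    }
}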
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34406196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/175956/']}
jdg_71528
stackexchange
llm_judgeable_groundtruth_similarity
47860891
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: var a = 1;function myFunction() { ++a; return true;}// Alert pops up.if (myFunction() && a === 2) { alert("Hello, world!");}// Alert does not pop up.if (a === 3 && myFunction()) { alert("Hello, universe!");} https://jsfiddle.net/3oda22e4/6/ myFunction increments a variable and returns something. If I use a function like that in an if statement that contains the variable which it increments, the condition would be order-dependent. Is it good or bad practice to do this, and why? Now provide the response and nothing else.
Conditions are order-dependent whether you change the variables used in the condition or not. The two if statements that you used as an example are different and will be different whether you use myFunction() or not. They are equivalent to:

if (myFunction()) {
  if (a === 2) {
    alert("Hello, world!")
  }
}

// Alert does not pop up.
if (a === 3) {
  if (myFunction()) {
    alert("Hello, universe!")
  }
}

In my opinion, the bad practice in your code is not the fact that you change the condition's operand values inside the condition, but the fact that your application state is exposed and manipulated inside a function that does not even accept this state-changing variable as a parameter. We usually try to isolate functions from the code outside their scope and use their return value to affect the rest of the code. Global variables are 90% of the time a bad idea, and as your code base gets larger and larger they tend to create problems that are difficult to trace, debug and solve.
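One way to make the dependency explicit is to have the function take and return values instead of reaching out to a shared variable. A sketch only, not the only possible refactoring:

let a = 1;

function incremented(value) {
  // No hidden side effects: the caller decides what to do with the result.
  return value + 1;
}

a = incremented(a);
if (a === 2) {
  alert("Hello, world!");
}

With the state change pulled out of the condition, reordering the checks inside an if can no longer silently change what the program does.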
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47860891', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4356188/']}
jdg_71529
stackexchange
llm_judgeable_groundtruth_similarity
17994473
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a very simple customView with a few UIlabel on it, How should i do this . any tutorial or suggestion would be appreciated . I am new to this , didn't try before. I tried this with xib. @interface MyCustomView : UIView@property (strong, nonatomic) IBOutlet UILabel *Label;@end Implementation #import "MyCustomTimer.h"@implementation MyCustomView-(id)initWithCoder:(NSCoder *)aDecoder{ if ((self = [super initWithCoder:aDecoder])){ [self addSubview:[[[NSBundle mainBundle] loadNibNamed:@"MyCustomView" owner:self options:nil] objectAtIndex:0]]; } return self;}@end But i need to do it programmatically ,please help . thanks Now provide the response and nothing else.
Here is a simple way, hope it helps you.

//in subclassed UIView
#import "CustomView.h"

@implementation CustomView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        // initialize all your UIView components
        UILabel *label1 = [[UILabel alloc] initWithFrame:CGRectMake(20, 30, 200, 44)];
        label1.text = @"i am label 1";
        [self addSubview:label1]; //add label1 to your custom view

        UILabel *label2 = [[UILabel alloc] initWithFrame:CGRectMake(20, 80, 200, 44)];
        label2.text = @"i am label 2";
        [self addSubview:label2]; //add label2 to your custom view

        [label1 release]; //i am using without ARC, comment if you are using ARC
        [label2 release]; //i am using without ARC, comment if you are using ARC
    }
    return self;
}

@end

// in your class where you want to use that view
#import "ViewController.h"
#import "CustomView.h" //import it

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    //create your view where you want it
    CustomView *cv = [[CustomView alloc] initWithFrame:CGRectMake(10, 10, 230, 400)]; //create an instance of your custom view
    [self.view addSubview:cv]; // add to your main view
    [cv release]; //comment if you are using ARC
}

@end
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17994473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2566102/']}
jdg_71530
stackexchange
llm_judgeable_groundtruth_similarity
25875618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a Universal Storyboard in Xcode 6, targeting iOS 7 and above. I've implemented a UISplitViewController which is now natively supported on iPhone running iOS 8, and Xcode will automatically backport it for iOS 7. It's working really well, except when you launch the app on iPhone in portrait running iOS 8, the split view's detail view controller is displayed when I expected to first see the master view controller. I believed this was a bug with iOS 8 because when you run the app on iOS 7, it correctly shows the master view controller. But iOS 8 is now GM and this is still occurring. How can I set it up so that when the split view controller is going to be collapsed (only one view controller displayed on screen), when the split view controller is displayed it shows the master view controller not the detail? I've created this split view controller in Interface Builder. The split view controller is the first view controller within a tab bar controller. Both the master and the detail VCs are navigation controllers with table view controllers embedded inside. Now provide the response and nothing else.
Oh man, this was causing me a headache for a few days and I could not figure out how to do this. The worst part was that creating a new Xcode iOS project with the master-detail template worked just fine. Fortunately, in the end, that little fact was how I found the solution. There are some posts I've found that suggest that the solution is to implement the new primaryViewControllerForCollapsingSplitViewController: method on UISplitViewControllerDelegate . I tried that to no avail. What Apple does in the master-detail template that seems to work is implement the new (take a deep breath to say all of this one) splitViewController:collapseSecondaryViewController:ontoPrimaryViewController: delegate method (again on UISplitViewControllerDelegate ). According to the docs, this method: Asks the delegate to adjust the primary view controller and to incorporate the secondary view controller into the collapsed interface. Make sure to read up on the discussion part of that method for more specific details. The way that Apple handles this is:

- (BOOL)splitViewController:(UISplitViewController *)splitViewController
collapseSecondaryViewController:(UIViewController *)secondaryViewController
  ontoPrimaryViewController:(UIViewController *)primaryViewController {

    if ([secondaryViewController isKindOfClass:[UINavigationController class]]
        && [[(UINavigationController *)secondaryViewController topViewController] isKindOfClass:[DetailViewController class]]
        && ([(DetailViewController *)[(UINavigationController *)secondaryViewController topViewController] detailItem] == nil)) {
        // Return YES to indicate that we have handled the collapse by doing nothing;
        // the secondary controller will be discarded.
        return YES;
    } else {
        return NO;
    }
}

This implementation basically does the following: If secondaryViewController is what we're expecting (a UINavigationController ), and it's showing what we're expecting (a DetailViewController -- your view controller), but has no model ( detailItem ), then "Return YES to indicate that we have handled the collapse by doing nothing; the secondary controller will be discarded." Otherwise, return "NO to let the split view controller try and incorporate the secondary view controller's content into the collapsed interface". The results are the following for the iPhone in portrait (either starting in portrait or rotating to portrait -- or, more accurately, the compact size class): if your view is correct and has a model, the detail view controller is shown; if your view is correct but has no model, the master view controller is shown; if your view is not correct, the master view controller is shown. Clear as mud.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/25875618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1795356/']}
jdg_71531
stackexchange
llm_judgeable_groundtruth_similarity
54848386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is the data row1| sbkjd nsdnak ABC row2| vknfe edcmmi ABCrow3| fjnfn msmsle XYZrow4| sdkmm tuiepd XYZrow5| adjck rulsdl LMN I have already tried this using pandas and got help from stackoverflow. But, I want to be able to remove the duplicates without having to use the pandas library or any library in general. So, only one of the rows having "ABC" must be chosen, only one of the rows having "XYZ" must be chosen and the last row is unique, so, it should be chosen. How do I do this?So, my final output should contain this: [ row1 or row2 + row3 or row4 + row5 ] Now provide the response and nothing else.
My team has a test utility in our project that lets us select an item easily after spending too much time trying to figure out how to do this properly. Sharing it here to hopefully help others. This doesn't rely on any React Select internals or mocking, but does require you to have set up a <label> with a for attribute linking to the React Select input. It uses the label to select a given choice value just like a user would on the real page.

const KEY_DOWN = 40

// Select an item from a React Select dropdown given a label and
// choice label you wish to pick.
export async function selectItem(
  container: HTMLElement,
  label: string,
  choice: string
): Promise<void> {
  // Focus and enable the dropdown of options.
  fireEvent.focus(getByLabelText(container, label))
  fireEvent.keyDown(getByLabelText(container, label), {
    keyCode: KEY_DOWN,
  })

  // Wait for the dropdown of options to be drawn.
  await findByText(container, choice)

  // Select the item we care about.
  fireEvent.click(getByText(container, choice))

  // Wait for your choice to be set as the input value.
  await findByDisplayValue(container, choice)
}

It can be used like this:

it('selects an item', async () => {
  const { container } = render(<MyComponent/>)
  await selectItem(container, 'My label', 'value')
})
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54848386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11107939/']}
jdg_71532
stackexchange
llm_judgeable_groundtruth_similarity
3848390
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For example, I'm querying on a field I know will be unique and is indexed such as a primary key. Hence I know this query will only return 1 row (even without the LIMIT 1) SELECT * FROM tablename WHERE tablename.id=123 LIMIT 1 or only update 1 row UPDATE tablename SET somefield='somevalue' WHERE tablename.id=123 LIMIT 1 Would adding the LIMIT 1 improve query execution time if the field is indexed? Now provide the response and nothing else.
Is there any point using MySQL “LIMIT 1” when querying on a primary key/unique field? It is not good practice to use LIMIT 1 when querying with filter criteria that are against either a primary key or a unique constraint. A primary key or unique constraint means there is only one row/record in the table with that value, so only one row/record will ever be returned. It's contradictory to have LIMIT 1 on a primary key/unique field -- someone maintaining the code later could mistake its importance and second-guess your code. But the ultimate indicator is the explain plan: explain SELECT t.name FROM USERS t WHERE t.userid = 4 ...returns:

id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra
--------------------------------------------------------------------------------------------
1  | SIMPLE      | users | const | PRIMARY       | PRIMARY | 4       | const | 1    |

...and: explain SELECT t.name FROM USERS t WHERE t.userid = 4 LIMIT 1 ...returns:

id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra
--------------------------------------------------------------------------------------------
1  | SIMPLE      | users | const | PRIMARY       | PRIMARY | 4       | const | 1    |

Conclusion: no difference, no need. The LIMIT appears to be optimized out in this case (only searching against the primary key). What about an indexed field? An indexed field doesn't guarantee uniqueness of the value being filtered; there could be more than one occurrence. So LIMIT 1 would make sense, assuming you want to return only one row.
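For contrast, a case where LIMIT 1 genuinely matters: a column that is indexed but not unique, so several rows can match. The table and column names below are made up for illustration:

-- `country` is indexed but NOT unique, so many rows may qualify
SELECT t.name
FROM users t
WHERE t.country = 'NZ'
LIMIT 1;  -- without this, every matching row is returned

Here the LIMIT both documents the intent (you only want one row) and lets the server stop reading as soon as the first qualifying row is found.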
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3848390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/188365/']}
jdg_71533
stackexchange
llm_judgeable_groundtruth_similarity
31154258
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following schema var mongoose = require('mongoose');var Schema = mongoose.Schema;var ShopSchema = new Schema({ name: Schema.Types.Mixed, country: { type: String, default: '' }, createdAt: { type: Date, default: Date.now }, defaultLanguage: { type: String }, account: {type : Schema.ObjectId, ref : 'Account'},});mongoose.model('Shop', ShopSchema); "name" field is multilingual. I mean, I will keep the multilingual data like name: { "en": "My Shop", "es": "Mi Tienda"} My problem is, in a controller, I am using this code to update the shop: var mongoose = require('mongoose')var Shop = mongoose.model('Shop')exports.update = function(req, res) {Shop.findByIdAndUpdate(req.params.shopid, { $set: { name: req.body.name }}, function(err, shop) { if (err) return res.json(err); res.json(shop); });}; and it is obvious that new data overrides the old data. What I need is to extend the old data with the new one. Is there any method to do that? Now provide the response and nothing else.
You should use the method .markModified() . See the doc http://mongoosejs.com/docs/schematypes.html#mixed Since it is a schema-less type, you can change the value to anything else you like, but Mongoose loses the ability to auto-detect and save those changes. To "tell" Mongoose that the value of a Mixed type has changed, call the document's .markModified(path) method, passing the path to the Mixed type you just changed. person.anything = { x: [3, 4, { y: "changed" }] }; person.markModified('anything'); person.save(); // anything will now get saved
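Applied to the update controller from the question, that means loading the document, merging the new translations into the existing name object, marking the path as modified, and saving. This is only a sketch, with error handling kept minimal:

exports.update = function(req, res) {
  Shop.findById(req.params.shopid, function(err, shop) {
    if (err || !shop) return res.json(err);

    // Merge the incoming translations into the existing ones
    // instead of replacing the whole object.
    shop.name = shop.name || {};
    Object.keys(req.body.name || {}).forEach(function(lang) {
      shop.name[lang] = req.body.name[lang];
    });

    // `name` is a Mixed path, so Mongoose must be told it changed.
    shop.markModified('name');

    shop.save(function(err, saved) {
      if (err) return res.json(err);
      res.json(saved);
    });
  });
};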
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31154258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/594781/']}
jdg_71534
stackexchange
llm_judgeable_groundtruth_similarity
15309071
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that Rust can run loops with lightweight threads. Something like: use task::spawn;fn main() { for 100.times { do spawn { io::println("Hello"); } } How I can do this in D? Now provide the response and nothing else.
Relevant API doc: std.parallelism Here are a few ways of accomplishing your example: Parallel foreach, using a TaskPool's parallel : foreach (i, val; taskPool.parallel(new int[50])) { writeln("Hello:", i);} Regular foreach, adding tasks to a task pool using put : foreach (i; 0 .. 50) { auto t = task!writeln("Hello:", i); taskPool.put(t);} Execute each task in a new thread instead of a TaskPool: foreach (i; 0 .. 50) { auto t = task!writeln("Hello:", i); t.executeInNewThread();} Rust's runtime has a built-in task scheduler, but with D, this is implemented as a library. That being said, the second is the closest in terms of functionality, and the last is the closest in terms of syntax (but they're OS threads, not lightweight). In D, lightweight threads are explicitly controlled by the programmer. A TaskPool is analogous to the scheduler in Rust/Go, but it gives more fine-grained control to the programmer. This makes it slightly more verbose, but it also gives you parallel versions of map , reduce , foreach , etc., which makes it easier to represent more complex algorithms efficiently. Running each example should give you the expected result: out-of-order writes. Note: From the doc: The worker threads in this pool are daemon threads, meaning that it is not necessary to call TaskPool.stop or TaskPool.finish before terminating the main thread. The second example doesn't wait until all workers are done, so in testing you may get no results (when main finishes, all remaining tasks are killed). You may need to block by calling finish : taskPool.finish(true);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15309071', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/942528/']}
jdg_71535
stackexchange
llm_judgeable_groundtruth_similarity
3193405
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f:\mathbb{R} \to \mathbb{R}$ . Can we say that $\lim\limits_{n \to \infty}f(x_0+\frac{1}{n})=l$ is another way of expressing the right-sided limit at $x_0$ ? I tried to use the definition of the limit,but I am stuck.Intuitively, it seems true, but I don't know how to prove it. Now provide the response and nothing else.
No, it is not true. Take, for instance, $$f(x)=\begin{cases}\sin\left(\frac\pi x\right)&\text{ if }x\neq0\\0&\text{ if }x=0.\end{cases}$$ Then the limit $\lim_{x\to0^+}f(x)$ doesn't exist, in spite of the fact that $\lim_{n\to\infty}f\left(\frac1n\right)=0$ .
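To verify the claimed sequence limit explicitly: for every positive integer $n$, $$f\left(\tfrac1n\right)=\sin(n\pi)=0,$$ so $\lim_{n\to\infty}f\left(\tfrac1n\right)=0$. On the other hand, along the points $x_k=\frac{2}{4k+1}\to0^+$ we get $f(x_k)=\sin\left(2k\pi+\tfrac\pi2\right)=1$, so $f$ takes both the values $0$ and $1$ arbitrarily close to $0$ from the right, and $\lim_{x\to0^+}f(x)$ cannot exist.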
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3193405', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/628203/']}
jdg_71536
stackexchange
llm_judgeable_groundtruth_similarity
9044084
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two date ranges where each range is determined by a start and end date (obviously, datetime.date instances). The two ranges can overlap or not. I need the number of days of the overlap. Of course I can pre-fill two sets with all dates within both ranges and the perform a set intersection but this is possibly inefficient...is there a better way apart from another solution using a long if - elif section covering all cases? Now provide the response and nothing else.
Determine the latest of the two start dates and the earliest of the two end dates. Compute the timedelta by subtracting them. If the delta is positive, that is the number of days of overlap. Here is an example calculation:

>>> from datetime import datetime
>>> from collections import namedtuple
>>> Range = namedtuple('Range', ['start', 'end'])
>>> r1 = Range(start=datetime(2012, 1, 15), end=datetime(2012, 5, 10))
>>> r2 = Range(start=datetime(2012, 3, 20), end=datetime(2012, 9, 15))
>>> latest_start = max(r1.start, r2.start)
>>> earliest_end = min(r1.end, r2.end)
>>> delta = (earliest_end - latest_start).days + 1
>>> overlap = max(0, delta)
>>> overlap
52
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9044084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2665694/']}
jdg_71537
stackexchange
llm_judgeable_groundtruth_similarity
46909516
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: object1 =[{country:us,code:50008},{country:uk,code:50001}]; object2 =[{empName:scot,id:50008},{country:nize,id:50001}]; please help me out to merge object2 with object1 and final array1 has to be object1 =[{country:us,code:50008},{country:uk,code:50001},{empName:scot,id:50008},{country:nize,id:50001}]; Now provide the response and nothing else.
You don't need a cross join . Just use window functions:

SELECT P.PersonID, SUM(PA.Total),
       SUM(PA.Total) * 100.0 / SUM(SUM(PA.Total)) OVER () AS Percentage
FROM Person P JOIN
     Package PA
     ON P.PersonID = PA.PackageFK
GROUP BY P.PersonID;

Note that you do not need the JOIN for this query:

SELECT PA.PersonID, SUM(PA.Total),
       SUM(PA.Total) * 100.0 / SUM(SUM(PA.Total)) OVER () AS Percentage
FROM Package PA
GROUP BY PA.PersonID;

SQL Server does integer division. I do such calculations using decimal numbers so they make more sense. Here is a SQL Fiddle, with two changes: The database is changed to SQL Server. The total is stored as a number rather than a string.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46909516', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4674784/']}
jdg_71538
stackexchange
llm_judgeable_groundtruth_similarity
11770794
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do we specify in broadcast sending application that which application can receive this broadcast, and in receiving application that which particular application has the permission to send broadcast to its broadcast receiver... I am new to android..I read the documentation etc on internet but couldn't find the syntax to specify these permissions. Now provide the response and nothing else.
To control who is able to receive the broadcast message, you can use the method sendBroadcast : public abstract void sendBroadcast (Intent intent, String receiverPermission) where you specify the name of the required permission. If the receiver does not declare this permission, it will not be able to get the message. For example, the broadcast sender can do: Intent broadcast = new Intent(this, MyBroadcastReceiver.class); sendBroadcast(broadcast, "andro.jf.mypermission"); In the manifest of the broadcast sender, a new permission should be declared: <!-- Declaring the special permission --> <permission android:name="andro.jf.mypermission" android:label="my_permission" android:protectionLevel="dangerous"></permission> Then, in the application that is supposed to receive this broadcast, you have to declare this permission and say that you use it. In the manifest you can add: <!-- I use the permission ! --> <uses-permission android:name="andro.jf.mypermission"/> and of course, you have to declare your broadcast receiver: <receiver android:name="MyBroadcastReceiver" android:exported="true" /> You can have a look at this post for a complete example of a custom permission and also the Android developer page about this. Be careful with the order of installation of your apps, because the one that defines the permission should be installed first .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11770794', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1204089/']}
jdg_71539
stackexchange
llm_judgeable_groundtruth_similarity
25285792
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When we sort a list, like a = [1,2,3,3,2,2,1]sorted(a) => [1, 1, 2, 2, 2, 3, 3] equal elements are always adjacent in the resulting list. How can I achieve the opposite task - shuffle the list so that equal elements are never (or as seldom as possible) adjacent? For example, for the above list one of the possible solutions is p = [1,3,2,3,2,1,2] More formally, given a list a , generate a permutation p of it that minimizes the number of pairs p[i]==p[i+1] . Since the lists are large, generating and filtering all permutations is not an option. Bonus question: how to generate all such permutations efficiently? This is the code I'm using to test the solutions: https://gist.github.com/gebrkn/9f550094b3d24a35aebd UPD: Choosing a winner here was a tough choice, because many people posted excellent answers. @VincentvanderWeele , @David Eisenstat , @Coady , @enrico.bacis and @srgerg provided functions that generate the best possible permutation flawlessly. @tobias_k and David also answered the bonus question (generate all permutations). Additional points to David for the correctness proof. The code from @VincentvanderWeele appears to be the fastest. Now provide the response and nothing else.
This is along the lines of Thijser's currently incomplete pseudocode. The idea is to take the most frequent of the remaining item types unless it was just taken. (See also Coady's implementation of this algorithm.)

import collections
import heapq

class Sentinel:
    pass

def david_eisenstat(lst):
    counts = collections.Counter(lst)
    heap = [(-count, key) for key, count in counts.items()]
    heapq.heapify(heap)
    output = []
    last = Sentinel()
    while heap:
        minuscount1, key1 = heapq.heappop(heap)
        if key1 != last or not heap:
            last = key1
            minuscount1 += 1
        else:
            minuscount2, key2 = heapq.heappop(heap)
            last = key2
            minuscount2 += 1
            if minuscount2 != 0:
                heapq.heappush(heap, (minuscount2, key2))
        output.append(last)
        if minuscount1 != 0:
            heapq.heappush(heap, (minuscount1, key1))
    return output

Proof of correctness

For two item types, with counts k1 and k2, the optimal solution has k2 - k1 - 1 defects if k1 < k2, 0 defects if k1 = k2, and k1 - k2 - 1 defects if k1 > k2. The = case is obvious. The others are symmetric; each instance of the minority element prevents at most two defects out of a total of k1 + k2 - 1 possible.

This greedy algorithm returns optimal solutions, by the following logic. We call a prefix (partial solution) safe if it extends to an optimal solution. Clearly the empty prefix is safe, and if a safe prefix is a whole solution then that solution is optimal. It suffices to show inductively that each greedy step maintains safety.

The only way that a greedy step introduces a defect is if only one item type remains, in which case there is only one way to continue, and that way is safe. Otherwise, let P be the (safe) prefix just before the step under consideration, let P' be the prefix just after, and let S be an optimal solution extending P. If S extends P' also, then we're done. Otherwise, let P' = Px and S = PQ and Q = yQ', where x and y are items and Q and Q' are sequences.

Suppose first that P does not end with y. By the algorithm's choice, x is at least as frequent in Q as y. Consider the maximal substrings of Q containing only x and y. If the first substring has at least as many x's as y's, then it can be rewritten without introducing additional defects to begin with x. If the first substring has more y's than x's, then some other substring has more x's than y's, and we can rewrite these substrings without additional defects so that x goes first. In both cases, we find an optimal solution T that extends P', as needed.

Suppose now that P does end with y. Modify Q by moving the first occurrence of x to the front. In doing so, we introduce at most one defect (where x used to be) and eliminate one defect (the yy).

Generating all solutions

This is tobias_k's answer plus efficient tests to detect when the choice currently under consideration is globally constrained in some way. The asymptotic running time is optimal, since the overhead of generation is on the order of the length of the output. The worst-case delay unfortunately is quadratic; it could be reduced to linear (optimal) with better data structures.

from collections import Counter
from itertools import permutations
from operator import itemgetter
from random import randrange

def get_mode(count):
    return max(count.items(), key=itemgetter(1))[0]

def enum2(prefix, x, count, total, mode):
    prefix.append(x)
    count_x = count[x]
    if count_x == 1:
        del count[x]
    else:
        count[x] = count_x - 1
    yield from enum1(prefix, count, total - 1, mode)
    count[x] = count_x
    del prefix[-1]

def enum1(prefix, count, total, mode):
    if total == 0:
        yield tuple(prefix)
        return
    if count[mode] * 2 - 1 >= total and [mode] != prefix[-1:]:
        yield from enum2(prefix, mode, count, total, mode)
    else:
        defect_okay = not prefix or count[prefix[-1]] * 2 > total
        mode = get_mode(count)
        for x in list(count.keys()):
            if defect_okay or [x] != prefix[-1:]:
                yield from enum2(prefix, x, count, total, mode)

def enum(seq):
    count = Counter(seq)
    if count:
        yield from enum1([], count, sum(count.values()), get_mode(count))
    else:
        yield ()

def defects(lst):
    return sum(lst[i - 1] == lst[i] for i in range(1, len(lst)))

def test(lst):
    perms = set(permutations(lst))
    opt = min(map(defects, perms))
    slow = {perm for perm in perms if defects(perm) == opt}
    fast = set(enum(lst))
    print(lst, fast, slow)
    assert slow == fast

for r in range(10000):
    test([randrange(3) for i in range(randrange(6))])
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25285792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/989121/']}
jdg_71540