Q:
/ampps-admin was not found on this server?
Using OS X 10.9.1 and AMPPS 2.1 (dated 11/18/13) I can launch the Apache server, but when attempting to go to the admin or configuration pages I get 404 errors for both localhost/ampps-admin and localhost/ampps.
Where should I be looking to track down this problem?
A:
When you close ampps and look at localhost, is it still saying "It works!"?
Mine was, which meant my mac was running its version of apache. I fixed it by killing my mac's apache and then restarting AMPPS.
Kill apache:
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
|
{
"pile_set_name": "StackExchange"
}
|
Q:
In Firestore can there be duplicates of FieldValue.serverTimestamp()?
Imagine you have thousands, even millions of users connecting simultaneously to your Firebase app. Your app (for every single user) writes this to Firestore:
const ref = firestore.collection('users').doc(myUID)
ref.set({
updatedAt: firebase.firestore.FieldValue.serverTimestamp()
})
Is it possible (even if very improbable) that more than one user has the same timestamp in updatedAt?
A:
There is no guarantee of uniqueness for server timestamps. That said, I imagine it's highly unlikely to get a duplicate, since they are measured to nanosecond precision.
Q:
How to handle the How to return the response from an AJAX call questions
Every single day we encounter 10+ questions of the same type on SO, dealing with how to handle the result of an asynchronous ajax request (a possible duplicate of this question); the same exists for event delegation for dynamic elements.
How should we handle it?
Right now I don't have a consistent approach: sometimes I answer the question, otherwise I mark it as a duplicate.
Note: It would be great if somebody could pull out stats on how many other questions are marked as duplicates of the said question.
A:
We keep closing them as duplicates. Occasionally, also posting a helpful quick answer to help the OP get started is okay. But if it's a duplicate, always vote to close too. If it's really low quality, downvote, and perhaps edit or vote to delete. There's nothing we can do to stop those questions from being posted.
I am aware the above paragraph is simplifying things, and in fact I asked myself this very same question recently (I agree with you, those questions are annoying). Sometimes, it's difficult to classify the question as a duplicate of that reference post, usually because:
The question is not about a "direct" ajax request (i.e., a vanilla-js XMLHttpRequest, or a jQuery $.ajax or similar), but about a third-party API "disguising" the ajax operation under a more specific name (think of Google Maps API geocoding methods).
or:
The question is not exactly about returning a value from a callback, but about some similar operation such as assigning a value to a global variable from within the callback, and expecting the variable to be populated before the callback actually runs (because in source-code order it looks like it already ran).
In both cases, the OP might not understand that the reference answers address what he's asking about, although we know that the underlying problem is the same: the OP is unaware of how asynchronous JavaScript operations work, and sometimes they don't even know the basics about HTTP requests. Frequently, it's unclear to us what exactly the OP is missing.
That's why I believe most such questions deserve at least a link to that reference post: both top answers there do a very good job in explaining the basic concepts. I also believe that closing as a duplicate of that question is adequate, unless you know a closer duplicate to point to. In cases where we feel a simple link/close vote is not enough, we can add a comment or short answer to help the asker understand why we are pointing him there.
If the OP is really interested in learning the why behind their problem, the reference answers should help a lot; and if the OP is not interested in that, should we be interested in helping?
Q:
Different ways an int n can be split into groups of 1 or 2
This is the question posed to me for an assignment:
A patient has n pills to take. In each day, he can take either one
pill or two pills until all pills are gone. Let T(n) denote the number
of different ways the patient can take all n pills. Give a closed form
for T(n). (Note that – for example – the two sequences (1, 2, 2) and
(2, 1, 2) are considered as two different ways of taking 5 pills.)
I have tried to work the sets for n = 1 through 8 to see if I can find a pattern like so:
n=1 {1}
n=2 {{1,1},{2}}
n=3 {{1,1,1},{1,2},{2,1}}
n=4 {{1,1,1,1},{1,1,2},{1,2,1},{2,1,1},{2,2}}
...
But I haven't been able to. The counts I get for n=1-8 are 1, 2, 3, 5, 8, 12, 18, 25.
Anyone have an idea?
A:
Your example shows wrong values after 8 (the next one should be 13...).
Consider the following approach: on the last day the patient can take either one pill or two pills (n = (n-1) + 1 or n = (n-2) + 2). So the number of ways to compose the T(n) value is
T(n) = T(n-1) + T(n-2)
Repeat the same process with T(n-1) and T(n-2) and you'll finish at T(0) or T(1) - these values are both equal to 1.
So build the recurrent sequence and solve the recurrence for any n.
Note that you can unwind the recurrence from the end (recursion method), or start from 0/1 (iteration method).
When you find the correct values, you might discover that they form a famous sequence, and you can read more about it.
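The recurrence can be sketched iteratively in Python (the function name T is taken from the assignment; this is the Fibonacci sequence shifted by one):

```python
def T(n):
    # T(n) = T(n-1) + T(n-2), with base cases T(0) = T(1) = 1.
    if n < 2:
        return 1
    a, b = 1, 1  # T(0), T(1)
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print([T(n) for n in range(1, 9)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```

Note the sixth value is 13, not the 12 in the question's list.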
Q:
Reading and Writing into CSV file at the same time
I want to read some input from a csv file, modify it, and replace the old values with new ones. For this purpose, I first read the values, but then I'm stuck, as I want to modify all the values present in the file.
So is it possible to open the file in 'r' mode in one for loop and then immediately in 'w' mode in another loop to write the modified data?
If there is a simpler way to do this, please help me out.
Thank you.
A:
Yes, you can open the same file in different modes in the same program. Just be sure not to do it at the same time. For example, this is perfectly valid:
with open("data.csv") as f:
# read data into a data structure (list, dictionary, etc.)
# process lines here if you can do it line by line
# process data here as needed (replacing your values etc.)
# now open the same filename again for writing
# the main thing is that the file has been previously closed
# (after the previous `with` block finishes, python will auto close the file)
with open("data.csv", "w") as f:
# write to f here
As others have pointed out in the comments, reading and writing on the same file handle at the same time is generally a bad idea and won't work as you expect (unless for some very specific use case).
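As a concrete sketch of the pattern described above, using the standard csv module (the file contents and the transformation are made up for the demo):

```python
import csv

# Create a small sample file for the demo (contents are made up).
with open("data.csv", "w", newline="") as f:
    csv.writer(f).writerows([["alice", "1"], ["bob", "2"]])

# First pass: read everything into memory; the file is closed
# automatically when the with-block ends.
with open("data.csv", newline="") as f:
    rows = list(csv.reader(f))

# Modify the data in memory (here: upper-case every field).
rows = [[field.upper() for field in row] for row in rows]

# Second pass: reopen the same filename for writing and overwrite it.
with open("data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```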
Q:
Openshift online - no longer running collectstatic
I've got 2 Python 3.6 pods currently running. They both used to run collectstatic upon redeployment, but then one wasn't working properly, so I deleted it and made a new 3.6 pod. Everything is working perfectly with it, except it no longer is running collectstatic on redeployment (so I'm doing it manually). Any thoughts on how I can get it running again?
I checked the documentation, and the docs for OpenShift 3.11 still look like they have a variable to disable collectstatic (which I haven't set), but the 4.* versions don't seem to have it. I don't know if that has anything to do with it.
Edit:
So it turns out that I had also updated the django version to 2.2.7.
As it happens, the openshift infrastructure on openshift online is happy to collectstatic with version 2.1.15 of Django, but not 2.2.7 (or 2.2.9). I'm not quite sure why that is yet. Still looking into it.
A:
Currently Openshift Online's python 3.6 module doesn't support Django 2.2.7 or 2.2.9.
Q:
Sum of elements in multidimensional array
I have the following input:
$data = array(
0 => array(
'date' => '2014-01-01',
'sales' => 1,
'price' => array(
'usd' => 1,
'eur' => 100
)
),
1 => array(
'date' => '2014-01-05',
'sales' => 1,
'price' => array(
'usd' => 1,
'eur' => 100,
'gbp' => 500
)
),
2 => array(
'date' => '2016-03-27',
'sales' => 5,
'age' => 50
),
3 => array(
'date' => '2016-03-28',
'sales' => 10
)
);
And I expect the following output:
$final = array(
'March 2016' => array(
'sales' => 15
),
'January 2014' => array(
'sales' => 2,
'price' => array(
'usd' => 2,
'eur' => 200,
'gbp' => 500
)
)
);
What I've done so far?
$monthlyData = array();
foreach ($data as $day)
{
$key = date('F Y', strtotime($day['date']));
if (!isset($monthlyData[$key]))
{
$monthlyData[$key] = $day;
continue;
}
foreach ($day as $metric => $value)
{
if(!empty($value))
{
$monthlyData[$key][$metric] += $value;
}
}
}
Well, I know that we can use good ol' foreach (with recursive calls) in order to get the right result, but I'm looking for some more elegant solution.
A:
You really just need one more condition and loop for this specific example.
foreach ($day as $metric => $value)
{
/* Added Condition */
if(is_array($value))
{
foreach($value as $nestedMetric => $nestedValue) {
$monthlyData[$key][$metric][$nestedMetric] += $nestedValue;
}
}
elseif(!empty($value))
{
$monthlyData[$key][$metric] += $value;
}
}
However, I'd probably do it differently by handling the calculation based on the metric, not just treating every metric dynamically.
Q:
Passing MySQL query via Javascript
In a Javascript function, I have the following JQuery in which I call a PHP script (i.e. getDBData.php) to get the database data from the query:
$("#dbcontent").load("getDBData.php", {query: "SELECT * FROM `texts` WHERE name='John' LIMIT 10;"});
In getDBData, I fetch this query via POST:
$query = $_POST['query'];
and give it as input for mysql_query:
$query = mysql_query($query) or die(mysql_error());
However, I get the following MySQL error:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '\'John\' LIMIT 10' at line 1
What could be wrong here? I guess it has something to do with character encoding when passing the query, but don't seem to get it right.
A:
You should never do this under any circumstances. You should be passing parameters that can then be used to build the proper query.
At least do something like this....
Javascript
$.post('getDBData.php', {
query: 'getTextsByUser',
user: 'John'
});
PHP
$queries = array(
'getTextsByUser' => 'SELECT * FROM texts WHERE name = ?',
'getNewsById' => 'SELECT * FROM news WHERE id = ?'
);
$stmt = $dbConnection->prepare($queries[$_POST['query']]);
$stmt->bind_param('s', $_POST['user']);
$stmt->execute();
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
// do something with $row
}
And then pass getTextsByUser via ajax to determine which query to run.
Note: If you're just beginning this project, mysql_query() has been deprecated and you should consider switching to mysqli.
Q:
Adding prefix and postfix to matching regular expression
Dear fellow Developers,
I would like to quote variables having @ inside, like [email protected], but not those starting with @, like @var1, @var2.
Does Vi/Vim/Neovim have any option to inspect the matched pattern and add a prefix and postfix, turning [email protected] into "[email protected]"?
:%s/ [a-zA-Z0-9]+@[a-zA-Z0-9]* / \"*\" /
does not work due to \"*\".
If Vi/Vim/Neovim has no such feature, what tools would you recommend for portability?
I know C++ has a feature to inspect the matched regex as a string for manipulation, but I would like to have a more pluggable solution for vim.
A:
Try using \0 to insert the contents of the match in the substitution:
:%s/\<[a-zA-Z0-9.]\+@[a-zA-Z0-9.]*\>/"\0"/g
Note that, given that you gave an example of an email address, I added . to your character classes so that the expression will match these, but it won't only match email addresses. e.g. It would match a@a.
I also changed the white spaces into start/end word atoms: \<, \> so that it will successfully match email addresses abutting punctuation, and added a g flag so that more than one match can be found on each line.
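Since the question also asks about portable tools: outside Vim, the same substitution can be sketched with Python's re module (the sample text is my own; \b plays the role of Vim's \< and \>):

```python
import re

text = "write to user.name@example but not to @var1"
# Quote tokens that contain an @ preceded by word characters;
# a leading-@ token like @var1 never matches, since the pattern
# requires at least one character before the @.
quoted = re.sub(r'\b[a-zA-Z0-9.]+@[a-zA-Z0-9.]*\b', r'"\g<0>"', text)
print(quoted)  # write to "user.name@example" but not to @var1
```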
Q:
Props in seminar
I've been asked to give a seminar at another research center about my work, specifically about a device which was installed in an experiment.
I have an early prototype of it (its size is about 20x5x2 cm). The prototype was never installed and never will be, and as such is safe for people to touch.
I was pondering bringing it along and flashing it briefly in front of the audience so they can better visualize it when I go over its layout. After that I would leave it on the table during the talk, so as not to distract the audience by circulating it, and let everybody who's interested come have a look afterwards. There is no subsequent speaker after me.
I was wondering if this is a good idea or considered unprofessional, since showing the device doesn't add any content per se (it is too small for the audience to see) and only tries to engage the audience.
A:
The goal of your seminar is to educate and inform your audience about your work. If showing the audience a prototype of your work will help them to better visualize and conceptualize what you are doing, then why wouldn't you want to take advantage of that during your talk?
So long as you properly integrate it into your talk, making it an essential part rather than just "for show" while you talk, the prop will do its duty.
What you might want to consider is taking some high-quality photos of the prototype that you can show on the screen as you talk about the prototype while holding it. Then you can get the best of both worlds.
A:
This is a terrific idea, and moreover you should also strongly consider passing the device around so that people can inspect it and hold it in their hands. The idea that the opportunity to have either visual or tactile contact with a scientific instrument or device "doesn't add any content" is simply false. At the very least, including a prop of this sort in your presentation will add an unusual and memorable element to your talk that would set it apart from the hundreds of other talks that come and go in a university department and are easily forgotten; at best, the prop will actually give the seminar participants some actual insight into your experiment and the related science. Either way, it can only work to your benefit.
As a small illustration, I recently brought some 3D-printed objects to a math seminar I was giving. Although one can make the same argument that showing pictures of the objects (which I also did) would contain exactly the same information as you could get from handling the 3D models, the psychological effect of being able to handle the 3D objects, and the reactions I got, were both very positive.
A:
Personally, I've always been unable to clearly understand the arrangement of a complex device from pictures or drawings, and I much appreciate the possibility of observing directly a device after its description.
In addition, since you've been invited to talk specifically about this device, the audience should appreciate your idea of bringing it along.
Therefore, yes, I think it's a good idea.
Q:
How to translate from statement-based to row-based replication in MySQL
I have a production system that uses MySQL statement based replication for hot failover should a master database die. Running version 5.5 Percona. I have to use statement based replication for reasons that are immutable for the purposes of this question.
Now, I'd like to see the row-based replication stream of the same data, with the goal of trying to adapt this into an HBase based data store.
Is it possible to set up a MySQL server to slave (read) using statement-based replication, but at the same time be replication master (write to other slaves) using row-based replication? If so, how do I set this up? I've looked in the docs but failed to find it.
A:
I have to use statement based replication for reasons that are immutable for the purposes of this question.
I assume, for the purpose of this question, that there may indeed be compelling reasons for using statement-based logging¹ but it is not generally recommended, because statement-based logging is relatively fragile. On any system where you have the flexibility not to use statement-based logging, don't use it -- use MIXED or ROW.²
MySQL Server (and compatible systems, like Percona Server, MariaDB, and Aurora for MySQL) automatically "translate" from one format to the other, based on the configuration of each individual server.
“Each MySQL Server can set its own and only its own binary logging format (true whether binlog_format is set with global or session scope). This means that changing the logging format on a replication master does not cause a slave to change its logging format to match.”
— https://dev.mysql.com/doc/refman/5.6/en/binary-log-setting.html
To restate this with some additional implications, what you want to do “just works,” because the setting of binlog_format on the slave does not specify what the slave expects. It only sets what the slave will generate.
Configure the slave with binlog_format=ROW and enable log_slave_updates in my.cnf on the slave (which causes incoming events to be rewritten to the slave's binlog).
...and you're done.
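A minimal my.cnf sketch for the intermediate slave, using the option names from the answer (the server-id value and log name are placeholders):

```ini
[mysqld]
server-id         = 2          # placeholder; must be unique within the topology
log-bin           = mysql-bin  # enable binary logging on this slave
binlog_format     = ROW        # what this server writes, regardless of what it receives
log_slave_updates              # re-log replicated events into this server's binlog
```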
The slave will log all of its DML as row-based events in spite of the master's binlog format. You don't really have to do anything else to make a slave also be a master, since every MySQL server with binary logging enabled is, essentially, already a master -- it just may happen to be a master without any actual slaves.
Any combination of master and slave binlog_format is valid except for a master configured as ROW and a slave configured as STATEMENT (the opposite of what you're doing here), because while statements can be translated into row events (they affect rows on the slave, after all), the converse isn't true -- you can't necessarily determine the specific statement that changed rows, if your only knowledge is the actual changed data. But for the application you're asking about, the above should do exactly what you intend.
I also discussed the interactions of the possible combinations of master and slave binlog format here, on dba.stackexchange.com in some additional detail.
¹logging is used here rather than "replication" because it's a more accurate description of what is actually being configured, though arguably the meaning is unchanged.
² STATEMENT logs the actual queries that made the change; ROW logs "row images" of the rows inserted/updated/deleted by the query. For updates, both the old and new values are logged. MIXED mode allows the server to choose the format for each query, always using ROW when there is anything about the query that makes its impact on the database potentially unsafe for statement-based replication, because the replica could potentially interpret it in a way that would cause the replica's data to diverge from the master, because the query isn't deterministic. Examples might include an unordered UPDATE ... LIMIT where the replica might update a different set of rows depending on its index selection, and statements using non-deterministic functions like UUID(). Other seemingly non-deterministic functions like NOW() and RAND() are compatible with statement-based replication, because there are hints written to the statement log to indicate the master's system time, and the master's random seed, at the time the query was executed.
Q:
start pm2 processes on system boot using a shell script
I was looking into how to write a SHELL script that:
Changes the working directory to directory path/to/dir/A
Then gets all the ".js" files in directory path/to/dir/A/X
And iterates over them all one by one with the command "pm2 start {files in directory A}"
I'm new to SHELL scripting so any help is welcome.
Just to add, those files are pm2 config files to start different processes. Each process has its own file. Thus the need to iterate over all of them.
A:
I'm not sure what X is in the question, but seeing how you've accepted an answer that ignores X, here is my solution:
for fname in /path/to/A/*.js; do
pm2 start "$fname"
done
This simply loops over a glob expanding to all .js files in A and runs pm2 start on them.
In case the process needs to be started from a specific directory, you do have to cd into it. After edits to the question:
cd /path/to/A
for fname in X/*.js; do
pm2 start "$fname"
done
Q:
merge function for mergesort - recursion vs. iteration
I decided to do an algorithms course (Roughgarden's on Coursera), and am setting out to implement each algorithm as it's introduced, in Lisp. We start with mergesort which is introduced as the canonical example of the divide and conquer paradigm.
The point of the paradigm is that a problem is divided (into two), the parts are solved, and then the results are combined. Thus two steps are involved: the solution step and the combination step. We use induction, such that subproblems are assumed solved; then we just need the base case. The base case here is that lists of length zero and of length 1 are already sorted, so they can just be returned.
This leaves us with the need to implement two things. First the divide part, then the combination part. In mergesort this means the functions mergesort and merge respectively.
Here's what I came up with for mergesort
(defun mergesort (lst)
"mergesort is the canonical example of the divide & conquer paradigm"
(if (or (null lst) (eq (length lst) 1))
lst
(let* ((len (length lst))
(mid (truncate len 2))
(sorted-lower (mergesort (subseq lst 0 mid)))
(sorted-upper (mergesort (subseq lst mid len))))
(merge. sorted-lower sorted-upper nil))))
And for merge
(defun merge. (x y acc)
"merge two lists by moving lowest car to output list"
(cond
((and (null x) (null y)) (reverse acc))
((null x) (merge. x (cdr y) (cons (car y) acc)))
((null y) (merge. (cdr x) y (cons (car x) acc)))
((<= (car x) (car y)) (merge. (cdr x) y (cons (car x) acc)))
(t (merge. x (cdr y) (cons (car y) acc)))))
These functions work fine.
Now to the problem.
The actual pseudocode given by the instructor, and the subsequent algorithmic analysis we're about to do, presumes that mergesort is done recursively (fine), but that merge is done iteratively.
But I naturally wrote merge recursively as above without really contemplating using a loop.
The instructor offers the following pseudocode
My thinking this morning has been, since in the next lecture we are going to study the theory of how to analyse the running time of divide and conquer, (including recursion tree method, generalising to the master method - i don't know what these are yet), that it might be better if I had an implementation which followed the actual pseudocode the instructor is assuming. (He did say at the beginning that any imperative language would be fine). But I would like more chance to use Lisp.
So, my implementation of the pseudocode given above is as follows.
This code also works fine so long as we change the last line of mergesort to call merge- with the appropriate signature, which this time is (merge- len sorted-lower sorted-upper).
(defun merge- (n a b)
(let ((acc nil)
(ca 0)
(cb 0))
(dotimes (i n (reverse acc))
(cond
((null (nth ca a))
(progn
(setf acc (cons (nth cb b) acc))
(setf cb (+ cb 1))))
((or (null (nth cb b))
(<= (nth ca a) (nth cb b)))
(progn
(setf acc (cons (nth ca a) acc))
(setf ca (+ ca 1))))
(t
(progn
(setf acc (cons (nth cb b) acc))
(setf cb (+ cb 1))))))))
But boy, that code isn't half ugly!
I also spent at least half an hour simply not having a clue why it wasn't initially working. The reason was that I'd "forgotten" to use setf on acc, a notion totally at odds with the recursive version, where instead of altering state we are defining divisions of the function and variables are irrelevant.
Since Common Lisp is multi-paradigm, I was wondering if the iterative version can be improved upon?
Would it be uncontroversial in the lisp community to observe that the recursive version is simply going to be better & more natural?
(and if that's the case, then why does this apply especially to lisp and less so to other languages? but lets not get into that transcendent question just yet! (maybe the answer is because we're using lists... which is maybe the key thing that makes recursion natural ... (?)))
Update 1: In response to Rainer's comment, here's a version using vectors:
(defun merge- (x y)
"merge sorted lists (or vectors) x & y into sorted array"
(let ((a (make-array (length x) :initial-contents x))
(b (make-array (length y) :initial-contents y))
(c (make-array (+ (length x) (length y))))
(i 0)
(j 0))
(dotimes (k (length c) c)
(cond
((= i (length a))
(setf (svref c k) (svref b j)
j (1+ j)))
((= j (length b))
(setf (svref c k) (svref a i)
i (1+ i)))
((< (svref a i) (svref b j))
(setf (svref c k) (svref a i)
i (1+ i)))
(t
(setf (svref c k) (svref b j)
j (1+ j)))))))
I'm wondering if style in the cond block above can be improved, or if that's how you'd normally do it?
Update 2: In response to Rainer's answer, I've written this new version of mergesort incorporating his suggestions (those I feel I fully understand at this point). Thank you Rainer.
(defun mergesort (lst)
"mergesort is the canonical example of the divide & conquer paradigm"
(flet ((merge- (a b)
"merge sorted arrays a & b into sorted array c"
(let ((c (make-array (+ (length a) (length b))))
(i 0)
(j 0))
(dotimes (k (length c) c)
(when (= i (length a)) ; [1]
(setf (subseq c k) (subseq b j)) ; [2]
(return c)) ; [3]
(when (= j (length b))
(setf (subseq c k) (subseq a i))
(return c))
(setf (aref c k)
(if (< (aref a i) (aref b j)) ; [4]
(prog1 (aref a i) (incf i))
(prog1 (aref b j) (incf j))))))))
(if (= (length lst) 0)
nil
(if (= (length lst) 1)
(make-array 1 :initial-element (first lst)) ; [5]
(let* ((len (length lst))
(mid (truncate len 2)))
(merge- (mergesort (subseq lst 0 mid))
(mergesort (subseq lst mid len))))))))
;; Notes
;; [1] when has an implicit progn
;; [2] use subseq in settable context to fill remaining array
;; [3] return from the implicit nil block created by dotimes
;; [4] the 2nd arg to setf becomes a conditional, with prog1 used to return
;; value of first arg, while tucking in the extra step needed in each case.
;; this is an advance in expressivity compared to c-style languages.
;; you can't do, there, for example:
;; a[3] = if(x < y) 2; else 3;
;; so in c/java you *must* say it this way, which is repetitive:
;; if x < y, a[3] = 2; else a[3] = 3;
;; [5] Base case of mergesort now returns an array, not a list.
;; That meant we can remove conversion of list to array in let.
;; Mergesort now receives list, but generates vector, right from the base case.
I'm very intrigued by the syntactical advance(?) over c-style languages which I mention in Note [4] above.
Any further discussion on that or any other points would be greatly appreciated :) thanks!
A:
There are some cases to be considered, though we can write it slightly differently:
CL-USER 32 > (let ((a #(1 5 8 10 11)) (b #(1 2 6 7 10)))
(flet ((merge- (x y
&aux
(lx (length x)) (ly (length y)) (lc (+ lx ly))
(c (make-array lc))
(i 0) (j 0))
"merge sorted vectors x & y"
(dotimes (k lc c)
(when (= i lx)
(setf (subseq c k) (subseq b j))
(return c))
(when (= j ly)
(setf (subseq c k) (subseq a i))
(return c))
(setf (aref c k)
(if (< (aref a i) (aref b j))
(prog1 (aref a i) (incf i))
(prog1 (aref b j) (incf j)))))))
(merge- a b)))
#(1 1 2 5 6 7 8 10 10 11)
Q:
How can I convert a character-encoded binary string to hexadecimal in SQL Server?
I'm trying to take a VARCHAR(MAX) with data in it as follows:
"00001001010001010111010101..." etc.
Then encode it as hexadecimal for more efficient return to the client.
Is it possible to do this? Either directly or converting the string into a real binary column first before calling master.dbo.fn_varbintohexstr?
As an example, given the string:
0000100101000101011101011110
We should end up with:
0000 = 0
1001 = 9
0100 = 4
0101 = 5
0111 = 7
0101 = 5
1110 = E
094575E.
Or if there is an even more efficient method (reading binary directly?) then that would be even better. SQL Server 2000 compatible solutions are preferable.
A:
Given your previous question, you're generating this string as part of another query. Why on Earth are you generating a string of ones and zeros when you can just multiply them by the appropriate power of 2 to make an INT out of them instead of a string? Converting from INT to hex string is trivial.
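As a side check of the worked example (in Python rather than T-SQL, purely for illustration): treating the character string as a base-2 integer makes the hex conversion a one-liner:

```python
bits = "0000100101000101011101011110"
# 28 bits -> 7 hex digits; zero-pad to keep the leading nibble.
hex_str = format(int(bits, 2), "07X")
print(hex_str)  # 094575E
```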
Q:
libpython2.7.so.1.0: cannot open shared object file: No such file or directory
I am trying to run a python script from the terminal but getting the following error message:
ImportError: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
if I run print sys.version I get :
>>> import sys
>>> print sys.version
2.7.3 (default, Feb 26 2013, 16:27:39)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)]
and if I run ldd /usr/local/bin/python
>> ldd /usr/local/bin/python
linux-vdso.so.1 => (0x00007fff219ff000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003300c00000)
libdl.so.2 => /lib64/libdl.so.2 (0x0000003300800000)
libutil.so.1 => /lib64/libutil.so.1 (0x0000003310e00000)
libm.so.6 => /lib64/libm.so.6 (0x0000003300000000)
libc.so.6 => /lib64/libc.so.6 (0x0000003300400000)
/lib64/ld-linux-x86-64.so.2 (0x00000032ffc00000)
I don't understand which python I have. Why is running this python script from the terminal failing?
I have tried to run
export LD_LIBRARY_PATH=/usr/local/lib/python2.7/
with no luck...
BTW - I have managed to debug this script in eclipse with the python plug-in, and when I look at the debug configuration I see that the PYTHONPATH is set for :
/..../eclipse/plugins/org.python.pydev_3.1.0.201312121632/pysrc/pydev_sitecustomize:/..../workspace/style_checker/src:/usr/local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg:/usr/local/lib/python2.7:/usr/local/lib/python2.7/plat-linux2:/usr/local/lib/python2.7/lib-tk:/usr/local/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages
so eclipse manages somehow to find these python2.7 libs... so how can I do it without eclipse and from the command line? What am I doing wrong? Using CentOS 6.
A:
Try to find the file libpython2.7.so.1.0:
locate libpython2.7.so.1.0
In my case, it shows the output:
/opt/rh/python27/root/usr/lib64/libpython2.7.so.1.0
Then add the line /opt/rh/python27/root/usr/lib64 to the file /etc/ld.so.conf and run ldconfig.
It solved my problem. Good luck!
A:
For some reason these two have worked perfectly for me:
apt-get install libpython2.7
sudo apt-get install libatlas3-base
I found them here and here
A:
Perhaps you could try the answer at https://stackoverflow.com/a/1100297/3559967.
The author of that question also stated that the LD_LIBRARY_PATH approach did not work for him, but adding the library path to /etc/ld.so.conf and running ldconfig worked.
Q:
how to remove HTML parameters in jquery
By default, my <button> has a disabled="disabled" attribute in my HTML markup,
which makes the <button> non-clickable:
<button id="submit" value="submit" name="submit" class="btn btn-large btn-block btn-primary" disabled="disabled" type="submit">Submit</button>
If the <textarea> content is more than 30 characters, it should remove the disabled attribute and make the button clickable.
I'm not sure how to remove or even add attributes inside HTML markup using jquery.
var $j = jQuery.noConflict();
$j(function(){
$j("#textField").keyup(function(){
num = $j(this);
if ($j(this).val().length >= 30 ) {
$j("#charNum").text(30-num.val().length);
$j("#submit") // disable
}
else {
$j("#submit") // enable
}
});
});
A:
Use .removeAttr() method.
var $j = jQuery.noConflict();
$j(function(){
$j("#textField").keyup(function(){
num = $j(this);
if ($j(this).val().length >= 30 ) {
$j("#charNum").text(30-num.val().length);
$j("#submit").removeAttr('disabled');
}
else {
$j("#submit").attr('disabled', 'disabled');
}
});
});
A:
To disable you can use
$('#submit').prop('disabled', true);
To enable you can use
$('#submit').prop('disabled', false);
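For reference, the enable/disable decision itself can be sketched in plain JavaScript, independent of jQuery. The helper name `updateSubmitState` and the plain-object stand-in for the button are hypothetical, not part of either answer:

```javascript
// Sketch of the rule: the button stays disabled until the text reaches
// 30 characters. `submitButton` stands in for the real DOM element.
function updateSubmitState(textLength, submitButton) {
  submitButton.disabled = textLength < 30;
  return submitButton.disabled;
}
```

In the real page you would call this from the keyup handler with `$j(this).val().length` and the button element.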
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Running different method when a different selection is made in a comboBox
I am trying to create a combo box with different options in it and fire a different method when each option is selected. However, when I run the app and select an option, nothing happens. Here is a snippet of the code:
<ComboBox x:Name="comboBox" HorizontalAlignment="Left" Margin="10,84,0,0" VerticalAlignment="Top" Width="100" SelectionChanged="comboBox_SelectionChanged" SelectedItem="{Binding Path=index, Mode=TwoWay}" SelectedValuePath="Tag">
<ComboBoxItem Content="Kilograms" Tag="0"></ComboBoxItem>
<ComboBoxItem Content="Pounds" Tag="1"></ComboBoxItem>
</ComboBox>
private void comboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
ComboBox Selector = (sender as ComboBox);
int index = Selector.SelectedIndex;
switch(Selector.ToString())
{
case "0":
workOutKilo();
break;
case "1":
break;
}
}
private void workOutPounds()
{
MessageBox.Show("This is the pounds conversion");
}
private void workOutKilo()
{
MessageBox.Show("This is the kilo conversion");
}
How can I get this working so that the methods will run when an option within the combo box is selected and display the message to the screen?
A:
Don't use the sender object as switch condition. Instead use the fetched index:
private void comboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
var comboBox = sender as ComboBox;
int index = comboBox.SelectedIndex;
switch (index)
{
case 0:
workOutKilo();
break;
case 1:
workOutPounds();
break;
}
}
Your current code does not work, because calling the ToString method on the ComboBox object yields the following text: System.Windows.Controls.ComboBox Items.Count:2, which is neither the string "0", nor the string "1".
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Read a string in AS3
I have a question regarding my project: how to read a string in AS3.
I have a text file named test.txt. For instance, it consists of:
Sun,Mon,Tue,Wed,Thu,Fri,Sat
and I want to put all of them into an array and then into a string, to show them in the dynamic text box called text_txt:
var myTextLoader:URLLoader = new URLLoader();
myTextLoader.addEventListener(Event.COMPLETE, onLoaded);
function onLoaded(e:Event):void
{
var days:Array = e.target.data.split(/\n/);
var str:String;
stage.addEventListener(MouseEvent.CLICK, arrayToString);
function arrayToString(e:MouseEvent):void
{
for (var i=0; i<days.length; i++)
{
str = days.join("");
text_txt.text = str + "\n" + ";"; //it does not work here
}
}
}
myTextLoader.load(new URLRequest("test.txt"));
But it does NOT show them on different lines and put a ";" at the end of each line!
I can make it show them on different lines, but only by putting them on different lines in the txt file, and I still do not get the ";" at the end of each line unless I also put it at the end of each line in the file.
Then I want to read the string and show an object from my library based on each word or line. For example:
//I do not know how to write this, or whether there is a function to read a string and divide it into words after each space or line
if (str.string="sun"){
show(obj01);
}
if (str.string="mon"){
show(obj02);
}
I hope I can get an answer to this question.
Please let me know if the last part is unclear; I will try to explain it more.
Thanks in advance
A:
You must enable the multiline ability for your TextField (if you have not already).
From the Adobe AS3 docs:
join() Converts the elements in an array to strings, inserts the
specified separator between the elements, concatenates them, and
returns the resulting string. A nested array is always separated by a
comma (,), not by the separator passed to the join() method.
So str = days.join(""); converts the Array into a single string, and since the parameter passed to join is empty (""), there is nothing between the fetched lines. And text_txt.text = str + "\n" + ";"; only puts a single newline and ";" at the end of the whole text, once.
var myTextLoader:URLLoader = new URLLoader();
var days:Array;
myTextLoader.addEventListener(Event.COMPLETE, onLoaded);
function onLoaded(e:Event):void
{
days = e.target.data.split(/\n/);
var str:String;
stage.addEventListener(MouseEvent.CLICK, arrayToString);
}
myTextLoader.load(new URLRequest("test.txt"));
function arrayToString(e:MouseEvent):void
{
text_txt.multiline = true;
text_txt.wordWrap = true;
text_txt.autoSize = TextFieldAutoSize.LEFT;
text_txt.text = days.join("\n");
}
Also, I moved arrayToString out of onLoaded.
For the second question: to check for the existence of a word, it is better to use indexOf("word") instead of comparing with the "==" operator, because of invisible characters like "\r" or "\n".
if (str.indexOf("sun") >= 0){
show(obj01);
}
if (str.indexOf("mon") >= 0){
show(obj02);
}
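The same string logic can be sketched in plain JavaScript, since the AS3 String and Array APIs are close enough that it carries over (the function names here are made up for illustration):

```javascript
// Split the loaded data into lines and rebuild the display text.
function linesToDisplay(data) {
  var days = data.split(/\n/);
  return days.join("\n"); // one day per line in the text field
}

// Decide which objects to show based on the words found. indexOf
// tolerates stray "\r" or "\n" characters around a word, which is
// exactly why it is preferred over "==" above.
function objectsToShow(str) {
  var shown = [];
  if (str.indexOf("sun") >= 0) shown.push("obj01");
  if (str.indexOf("mon") >= 0) shown.push("obj02");
  return shown;
}
```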
|
{
"pile_set_name": "StackExchange"
}
|
Q:
File system libraries that allow mounting on an application level
I have been looking into libraries for a file system that will allow path mounting on purely an application level. This may not be called just "path mounting", since that has the connotation of OS-level path mounting, but something else; I am not sure of the terminology. I was hoping to find a few but was unable to find anything close to what I am looking for (boost::filesystem was the closest I found). I wanted to compare several different libraries in hopes of seeing what advantages and disadvantages they have.
What I mean by a file system with path mounting is so I would have a path such as
"SomeRoot:data\file.txt"
and the "SomeRoot" would be replaced with C:\SomeFolder", which would be set to the file mount system.
Does anyone know of a file system that will allow path mounting?
Edit:
Since it appears that there may not be many libraries for this, I would also be interested in how to construct one properly.
A:
If you are looking for an "application level file system", then at the most basic level you are going to need to do a string replace. There are two strings:
MountPoint
Which will be used as the "mount point", such as your SomeRoot.
MountResolve
Which is the location to what mount point is pointed at for when "resolving" a file location. This is the same as your C:\SomeFolder.
Besides the obvious setters and getters for those variables, there is the need for a function to resolve the path, which in this case can be
bool ResolvePath(const String& mountPath, String& resolvedPath);
The contents of the ResolvePath are very simple, all you need to do is replace the current MountPoint string in mountPath and place the result into resolvedPath.
resolvedPath = mountPath;
resolvedPath.replace(0, mMountPoint.size() + 1, mMountResolve.c_str(), mMountResolve.size());
However, there is more that can be done in that function. The reason why I have it returning a bool is because the function should fail if mountPath does not start with the MountPoint. To check, just do a simple string::find.
if(mountPath.find(mMountPoint) == String::npos)
return false;
With this, you can now resolve SomeRoot:data\file.txt to C:\SomeFolder\data\file.txt if MountResolve is set to C:\SomeFolder\. However, as you mentioned, the resolve path may come without the trailing slash at the end. Since nothing currently verifies that slash, the result would be C:\SomeFolderdata\file.txt. This is wrong.
In your accessor for setting the mount resolve, you want to check whether there is a trailing folder slash. If there is not, then add it.
void FileSystem::SetMountResolve(const String& mountResolve)
{
mMountResolve = mountResolve;
if(*(mMountResolve.end() - 1) != FOLDERSLASH)
mMountResolve += FOLDERSLASH;
}
This will allow a basic "FileSystem" class to have one MountPoint/MountResolve. It will not be very difficult to extend this to allow multiple mount points either.
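Putting the pieces above together, a minimal sketch might look like this. Assumptions: std::string stands in for the answer's String type, and FOLDERSLASH is a backslash as in the Windows-style examples; a prefix comparison is used in place of string::find so that the mount point must appear at the start of the path:

```cpp
// Minimal sketch of application-level mount-point resolution.
#include <string>

const char FOLDERSLASH = '\\';

class FileSystem {
public:
    void SetMountPoint(const std::string& mountPoint) {
        mMountPoint = mountPoint;
    }

    // Ensure the resolve path ends with a folder slash.
    void SetMountResolve(const std::string& mountResolve) {
        mMountResolve = mountResolve;
        if (mMountResolve.empty() || mMountResolve.back() != FOLDERSLASH)
            mMountResolve += FOLDERSLASH;
    }

    // Replace the "MountPoint:" prefix with the resolve path.
    // Fails if mountPath does not start with the mount point.
    bool ResolvePath(const std::string& mountPath,
                     std::string& resolvedPath) const {
        const std::string prefix = mMountPoint + ":";
        if (mountPath.compare(0, prefix.size(), prefix) != 0)
            return false;
        resolvedPath = mMountResolve + mountPath.substr(prefix.size());
        return true;
    }

private:
    std::string mMountPoint;
    std::string mMountResolve;
};
```

Extending this to multiple mount points is then just a matter of keeping a map from mount point to resolve path.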
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Rx 2 Android what is better Single or Observable for api calls?
When we use Retrofit 2 for REST API calls with Rx, what is the best approach: Single or Observable?
public interface ApiService{
Single<Data> getDataFromServer();
Observable<Data> getDataFromServer();
}
A:
I'd suggest using a Single as it is a more accurate representation of the data flow: you make a request to the server and then you get either one emission of data OR an error:
Single: onSubscribe (onSuccess | onError)?
For an Observable you could theoretically get several emissions of data AND an error
Observable: onSubscribe onNext* (onCompleted | onError)?
However, if you are using rx-java2, I'd suggest using a Maybe instead of Single. The difference between those two is that Maybe handles also the case when you get the response from server but it contains no body.
Maybe: onSubscribe (onSuccess | onCompleted | onError)?
A:
The difference between Observable and Single is rather semantic. When you declare something as Single, you are saying that this observable is going to produce only one value, not a series of values.
Using proper semantic types is the best way to document your API.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Leave quotations in original (bad) English or edit?
I am editing a thesis and the writer has quoted from many interviews with non-native English speakers. I'm wondering whether it is correct practice to leave the quotations as is, or perhaps use (sic), or to edit them in correct English? Thanks in advance.
A:
In qualitative social research, two main methods of transcription are applied: naturalized transcription and denaturalized transcription.
Naturalized transcription is a detailed and less filtered transcription. It is as detailed as possible and focuses on the details of the discourse, such as breaks in speech, laughter, mumbling, involuntary sounds, gestures, body language, etc. as well as content [...] Denaturalized transcription is flowing, presenting ‘laundered’ data which removes the slightest socio-cultural characteristics of the data or even information that could shed light on the results of the study. It accurately describes the discourse, but limits dealing with the description of accent or involuntary sounds (Mero-Jaffe 2011, 232, emphasis added).
To what extent interview transcriptions ("quotations") can be altered is a methodological choice, which must be informed by the purpose of the study, and which is sometimes controlled by explicit transcription rules. It should therefore be left to the author, not to the editor.
As editor, I would leave all quotations as they are, and point this out to the author.
Mero-Jaffe, I. (2011). ‘Is that what I Said?’ Interview Transcript Approval by Participants: An Aspect of Ethics in Qualitative Research. International Journal of Qualitative Methods, 231–247. https://doi.org/10.1177/160940691101000304
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ARB placeholders not working
I'm implementing localization using Google's ARB API, but the placeholder is not working. I've spent 5 hours trying to solve this issue with no luck.
arb.register(
"login",
{
"title": "Login",
"subtitle": "to your account",
"MSG_BODY_TEST": "This is a test.",
"email": "Email {0}",
"@email": {
"placeholders": {
"0": {
"example": "$123.45",
"description": "cost presented with currency symbol"
}
}
},
"MSG_CURR_LOCALE": "...and the selected language is \"{currentLocale}\"."
}
);
arb.register(
"login:ar",
{
"title": "الدخول",
"subtitle": "الى حسابك",
"email": "البريد الاكتروني {0}",
"@email": {
"placeholders": {
"0": {
"example": "$123.45",
"description": "cost presented with currency symbol"
}
}
},
"MSG_CURR_LOCALE": "...and the selected language is \" {currentLocale}\"."
}
);
Could someone tell me what the problem is, please?
A:
Well, try to replace "email": "Email {0}" with "email@placeholder": "Email".
Check this: reference
Also, remove the placeholder bracket; you don't need it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Binlog has events but no rows?
I am working on a project where I need to watch binlog events, iterate over the rows, and do something. In my local environment I tested my code and everything worked: I could get the events and the rows in each event. However, after moving to the production environment and connecting to another database, I could only get the binlog events, but there were no rows in any of them.
I used python-mysql-replication; I dumped all the binlog events I received, and each of them looks like the one below:
=== UpdateRowsEvent ===
Date: 2018-06-27T15:46:33
Log position: 326768636
Event size: 87
Read bytes: 15
Table: db_xxx.t_yyy
Affected columns: 13
Changed rows: 0
Affected columns: 13
Values:
As you can see, Changed rows is 0 and the Values are empty!
A:
I found the solution: it was because the user had no SELECT grant on the table.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Print list element following a pattern match in Python
I would like to print the element that follows a pattern match in a list. In the basic example below I match the string 'foo' in a list. I would like to print out not the match itself ('foo') but the element following the match (in this case 'bar').
theList = ["foo", "bar", "baz", "qurx", "bother"] # example list
list1 = "foo" # matching string
regex = re.compile(list1)
[m.group(0) for l in theList for m in [regex.search(l)] if m] # returns 'foo'
The above code returns the match, but like I said I would like to return the following element in theList. Any help will be greatly appreciated.
A:
Don't try to use clever things like list comprehensions when you are not yet quite sure how they work. It is perfectly OK to use simple code - readable code is easier to verify, after all.
Try this (for clarity I left out the regular expression nonsense):
haystack = ["foo", "bar", "baz", "foo", "qurx", "bother"]
needle = "foo"
result = []
for i, element in enumerate(haystack):
if needle in element:
result.append(haystack[i+1])
print(result)
If you are only interested in the first (or only) match, you can use
haystack = ["foo", "bar", "baz", "foo", "qurx", "bother"]
needle = "foo"
for i, element in enumerate(haystack):
if needle in element:
print(haystack[i+1])
break
If you can also match on equality instead of using regex, this is the easiest approach:
idx = haystack.index(needle)
result = haystack[idx+1]
The examples above will all break if the needle is found at the last position of haystack, but I leave that as an exercise to the reader.
Properly implementing your original approach, the solution could look something like this:
pairs = zip(haystack, haystack[1:])
[following for matching, following in pairs if needle in matching]
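Putting the zip-based approach together as a small runnable sketch (the helper name elements_after is made up for illustration):

```python
# Runnable version of the zip-based approach: collect every element that
# immediately follows an element matching the needle.
def elements_after(needle, haystack):
    pairs = zip(haystack, haystack[1:])  # (element, next element) pairs
    return [following for matching, following in pairs if needle in matching]

haystack = ["foo", "bar", "baz", "foo", "qurx", "bother"]
# elements_after("foo", haystack) -> ["bar", "qurx"]
```

Note that with this version a needle at the last position simply yields nothing, instead of raising an IndexError.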
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How should I handle a scalability problem during routing my requests from server?
I have faced a problem with the initial design of my system where sensitive information was being sent to the front-end and front-end was responsible for calling 3rd party APIs. As you have probably guessed it was extremely vulnerable to attacks. To remedy this I added a back-end system to proxy those requests and call the 3rd party APIs on behalf of front-end. The issue with this approach is it is not scalable at all. I am currently looking at additional 15 servers to handle the current load and it is increasing day by day.
Any advice on how I can remove this back-end requirement? Is there any way to let the front-end still call the APIs but keep the data secure?
A:
No. It goes from very difficult (secured client with online executable validation) to impossible (browser).
The question is strangely vague for some reason, but if it is a web platform and you are trying to do this in JavaScript, then it is very much impossible.
If it is a custom client that you have full control over, then you'll need to do a self-validation that is checked online to make sure code is not modified. Even then, the network traffic can be attacked so that scenario is still vulnerable. You can then setup VPN connections from the client but the overhead goes up.
A:
Tokens
Depending on your collaborating third-parties you could use a token system.
The user obtains a suitably permission-restricted and signed token from your server.
They use this token to talk to the third-party services.
The third-party services verify the integrity of the token with you; alternatively, you have already cleared the path with the third party.
You will have to ensure that the token can only perform the operations you are fine with them doing on the third party directly, that it has a short expiry (say 20 minutes), and that it does not reveal any secret information.
Note that you have to presume that any barriers erected in the client to prevent an operation, do not exist when considering this approach. The user can do any mutative, or query action at any time. The token permissions must be set to ensure that only the mutations that are acceptable, and the data that is fine for public dissemination is returned.
Which does mean that very dangerous operations should still be proxied by your own services, if only to ensure you have an audit trail. And dangerous doesn't just mean deletion; creation and metadata changes also fit this mold.
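A minimal sketch of such a token, assuming an HMAC-SHA256 signature with a server-side secret (a real deployment would more likely use a standard format such as JWT; the helper names and secret here are made up):

```python
# Short-lived, permission-restricted token: signed payload + signature.
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical secret, kept off the client

def issue_token(user_id, permissions, ttl_seconds=1200):
    """Return (payload, signature) valid for ttl_seconds (default 20 min)."""
    payload = json.dumps(
        {"user": user_id,
         "perms": sorted(permissions),
         "exp": int(time.time()) + ttl_seconds},
        sort_keys=True)
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload, signature

def verify_token(payload, signature):
    """True only if the signature matches and the token has not expired."""
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # tampered with
    return json.loads(payload)["exp"] > time.time()
```

The third party would call the equivalent of verify_token (or you share the verification scheme with them in advance), and the "perms" list is what limits the token to acceptable operations.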
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Python - Read Dictionary of Lists from CSV
I'm trying to read a dictionary of lists from a CSV file.
I'm trying to access to data-structure like:
dbfree = {u'keyname': [u'x', 'y', 'z']}
These data structures are stored in a CSV file with this code:
for key, val in dbfree.items():
w.writerow([key, val])
I read the CSV in this way:
dbproj = {}
for key, val in unicodecsv.reader(open(filename)):
dbproj[key] = val
But the output is this:
{u'keyname': u"[u'x', 'y', 'z']"}
How can I correctly retrieve the full dictionary of lists from my CSV file?
A:
You wrote the repr() output of the nested list here:
for key, val in dbfree.items():
w.writerow([key, val])
here val is [u'x', 'y', 'z']; to store that in one column the csv file simply writes the result of repr(val).
You can decode that string to a Python object again with the ast.literal_eval() function:
import ast
dbproj = {}
for key, val in unicodecsv.reader(open(filename)):
dbproj[key] = ast.literal_eval(val)
ast.literal_eval() interprets the input as a Python expression, but limited to literals, Python syntax that defines objects such as dictionaries, lists, tuples, sets and strings, numbers, booleans and None.
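A round-trip sketch of the whole fix, using the stdlib csv module and an in-memory buffer in place of unicodecsv and a real file:

```python
# Write a dict of lists to CSV, then read it back and rebuild the lists
# with ast.literal_eval.
import ast
import csv
import io

dbfree = {u'keyname': [u'x', 'y', 'z']}

buf = io.StringIO()
writer = csv.writer(buf)
for key, val in dbfree.items():
    writer.writerow([key, val])  # val is stored as its repr()

buf.seek(0)
dbproj = {key: ast.literal_eval(val) for key, val in csv.reader(buf)}
# dbproj now holds real lists again, not their string representations
```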
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Corporate adoption rate of Git?
A discussion amongst some colleagues emerged recently how in today's software industry, two separate worlds exist:
FOSS oriented
Corporate
Question
How much is Git used in corporate environments?
What is your experience with Git in a corporate environment?
A:
For what it's worth, we use git in my workplace. Everyone is quite happy with it. Of course, no single person is really going to be able to tell you how common it is.
I suspect the continued prevalence of cvs/svn is much more to do with inertia than anything else. They were definitely among the best (if not the best) choices for a long time**, and a large number of developers have had the chance to learn to use them well. If most of your workforce is already comfortable with them, and they're good enough, how many companies can we really expect to try something new?
Another common factor in corporations' decisions has to do with a sort of stigma attached to free software. People tend to associate monetary cost and value, perceiving more expensive products as better (For example, I've read about a psychology study where people were given the same wine twice, and told one was a more expensive variety. They tended to rate it as tasting better). With software, there is a certain amount truth to this attitude - you can often buy some guarantee of support and maintenance with a product. We all know established open-source projects can easily still win out (more testers, more documentation writers, faster bugfix releases...), but I'm sure this still motivates many companies to purchase VCS/SCM products. However, this is clearly not the reason people are using cvs/svn.
** Please, no flamewars! I'm a diehard git fan, but I know it hasn't always existed. Of course, some still disagree, like Linus Torvalds:
For the first 10 years of kernel maintenance, we literally used tarballs and patches, which is a much superior source control management system than CVS ... The slogan of Subversion for a while was "CVS done right", or something like that, and if you start with that kind of slogan, there's nowhere you can go. There is no way to do CVS right.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
D3 collapsible tree - jumpy on zoom
I've already spent too much time trying to figure this out.
My goal is to create a d3 collapsible tree, but for some reason when you zoom it, it moves the tree to position 0,0. I've already seen a few questions with a similar problem, such as this one d3.behavior.zoom jitters, shakes, jumps, and bounces when dragging, but I can't figure out how to apply it to my situation.
I think this part is making a problem but I'm not sure how to change it to have the proper zooming functionality.
d3.select('g').transition()
.duration(duration)
.attr("transform", "translate(" + x + "," + y + ")scale(" + scale + ")")
zoomListener.scale(scale);
Here is my code: https://jsfiddle.net/ramo2600/y79r5dyk/11/
A:
You are translating your zoomable g to position [100,100] but not telling the zoom d3.behavior.zoom() about it. So it starts from [0,0] and you see the "jump".
Modify your centerNode function to:
function centerNode(source) {
scale = zoomListener.scale();
// x = -source.y0;
y = -source.x0;
// x = x * scale + viewerWidth / 2;
x = 100;
y = 100;
// y = y * scale + viewerHeight / 2;
d3.select('g').transition()
.duration(duration)
.attr("transform", "translate(" + x + "," + y + ")scale(" + scale + ")")
zoomListener.scale(scale);
zoomListener.translate([x,y]); //<-- tell zoom about position
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Magical Records how to create just one unique entity
I get an object from the server; it is a JSON string. I want to create an entity using the keys and values from this string.
So I use this method for create entity using Magical Records
Entity *entity = [Entity createEntity];
I have an id for each entity. Do I need to write a check in code for whether an entity with that id already exists, or is there an alternative mechanism in the Core Data model, like a primary key in SQL?
A:
As one possible option, you can find out how many entities exist by using a predicate. For example:
NSUInteger numberOfEntities = [Entity countOfEntitiesWithPredicate:[NSPredicate predicateWithFormat:@"entityIdAttributeName == %@", entityId]];
if(numberOfEntities == 0) {
Entity *entity = [Entity createEntity];
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Nested loops and SQL queries; need for speed
I'm having trouble solving a problem with iterative SQL queries (which I need to do away with) and I'm trying to work out an alternative.
(Also; unfortunately, AJAX is not really suitable)
Given I have the following tables for location data:
Country
country_id
name
State
state_id
country_id
name
City
city_id
state_id
name
Now, I'm trying to pull all of the data. It's actually quite tiny (147 cities, split between 64 states, split between 2 countries), yet it's taking forever because I'm iteratively looping:
// this is pseudo-code, but it gets the point across
$countries = getCountries();
foreach($countries as &$country){
$country['states'] = $states = getStates($country['country_id']);
foreach($states as &$state){
$state['cities'] = getCities($state['state_id']);
}
}
The reason I'm going this way, is because my final result set needs to be in the form:
$countries = array(
array(
'name' => 'country_name',
'id' => 'country_id',
'states' => array(
array(
'name' => 'state_name',
'id' => 'state_id',
'cities' => array(
array(
'name' => 'city_name',
'id' => 'city_id',
),
// ... more cities
),
),
// ... more states
),
),
// ... more countries
);
I can't seem to wrap my head around a faster approach. What alternatives exist to querying for hierarchical data?
Revised:
$sql = "SELECT
`dbc_country`.`name` as `country_name`,
`dbc_state`.`name` as `state_name`,
`city_id`,
`dbc_city`.`name` as `city_name`,
`latitude`,
`longitude`
FROM
`dbc_city`
INNER JOIN
`dbc_state` ON `dbc_city`.`state_id` = `dbc_state`.`state_id`
INNER JOIN
`dbc_country` ON `dbc_state`.`country_id` = `dbc_country`.`country_id`";
$locations = array();
foreach($datasource->fetchSet($sql) as $row){
$locations[$row['country_name']][$row['state_name']][] = array(
$row['city_id'],
$row['city_name'],
$row['latitude'],
$row['longitude'],
);
}
(I also removed the id values of states/countries, since they were uselessly taking up space)
A:
It would be much faster to do the joins in SQL, then iterate over the single (larger) result set.
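The grouping step after the join can be sketched generically (here in Python rather than PHP, with made-up rows standing in for what the joined query would return):

```python
# Group a flat, joined result set into the nested
# country -> state -> cities shape in a single pass.
rows = [
    {"country_name": "A", "state_name": "S1", "city_id": 1, "city_name": "C1"},
    {"country_name": "A", "state_name": "S1", "city_id": 2, "city_name": "C2"},
    {"country_name": "A", "state_name": "S2", "city_id": 3, "city_name": "C3"},
]

locations = {}
for row in rows:
    cities = locations.setdefault(row["country_name"], {}) \
                      .setdefault(row["state_name"], [])
    cities.append((row["city_id"], row["city_name"]))
```

One query, one loop, no per-row round trips to the database.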
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to mount a shared folder in a Debian guest on VirtualBox (host OS: Win7)?
I decided to install VirtualBox 5 and try out different operating systems, starting with several
Linux distributions: first Ubuntu (because the name sounded familiar), then Debian (I read that you can run a home server on it,
and that Debian is recommended exactly for that), and Kali Linux (network security testing and so on).
For my experiments I took images from http://www.osboxes.org/virtualbox-images/
I learned about all this from the article http://alv.me/?p=10087
With Ubuntu 15.10 64 bit everything went smoothly. The Guest Additions installed easily, and the shared folder was available
at /media/sf_namesharefolder. Drag-and-drop (moving files between the guest and host OS) also works, and so does sound.
In Kali Linux 2.0 64 bit the Guest Additions installed and the shared folder is available, but there is no sound. I could not find
a fix for the sound (googling did not help either). If anyone knows how to fix it, please write.
The hardest one turned out to be Debian Jessie 8.2 64 bit. First, the Guest Additions would only install as root (in a terminal,
type su and then the password of the user you logged in with; for the osboxes.org images the user is osboxes and the password is osboxes.org).
Only after that did the command sh VBoxGuestAdditions.run install the Guest Additions. That fixed the screen resolution and made full-screen mode possible. Sound works too, which is nice, but the shared folder is not accessible.
How do I set up a shared folder for Debian Jessie 8.2 64 bit (in VirtualBox)?
A:
After several days of struggling, the problem was solved!
All the Guest Additions installed. The Debian guest window expands to full screen.
There is sound. The clipboard works (drag-and-drop included). And, drum roll... the shared folder is accessible!
What it took:
(VirtualBox itself, version 5.0.12 with the Guest Additions pack, was already installed)
1. Install a clean Debian 8.2 x64 image from scratch, from the official Debian project site;
link: https://www.debian.org/distrib/netinst
I used the mini image, about 240 MB, for the 64-bit system.
2. The image was mounted and installed (this video helped: https://www.youtube.com/watch?v=rP_NEAGmGPo ).
Then, as root (type su and then the password you set when installing the image), the following
packages were installed (I don't even know which of them was the one that was really missing):
(taken from the discussion at https://askubuntu.com/questions/287205/building-the-main-guest-additions-module-fail )
sudo apt-get update
sudo apt-get install dkms
sudo apt-get install build-essential
sudo apt-get install virtualbox-guest-x11 (this command did not work)
sudo apt-get install linux-headers-generic
sudo apt-get install linux-headers-virtual
After a reboot the shared folder appeared at /media/sf_nameYourFolder.
But it was still not accessible (the current user needed additional rights):
Open a terminal.
Add the user to the group for the directory:
sudo adduser user_name vboxsf
Replace user_name with your own username, as the system knows it. Reboot the system for the changes to take effect.
After all these manipulations came the long-awaited success!
(Here is a link to the message about the successful Guest Additions installation: http://joxi.ru/KAgZaxBSg5qW82
which I was so eager to see.)
I hope my struggles will be useful to someone and help them avoid the same pitfalls :)
And thanks to everyone who tried to help me. The advice to "install a clean ISO" was especially helpful.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to fetch only a specified field from pivot with "with" function
I'm using Laravel 6.0 and I can't figure out how to fetch just a specified field from the pivot table with the with eager loading.
For the relationships I use
$builder->with('relation_name:field1,field2')
But it doesn't work for the pivot of that relation.
Is there any way to do it, or do I have to unset the other fields manually?
A:
For relations, the withPivot() method on the relationship is probably what you are looking for:
$builder->with(['relation_name' => function ($query) {
$query->withPivot('field1')->withPivot('field2');
}])->get();
You can combine, but for clarity, this is simplest.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Erro ao gerar relatório em produção - ReportViewer Versão 11
Estou trabalhando com webforms e gerando um relatório no reportviewer localmente, o relatório aceita parâmetros e possui um dataset
// Parametros
List<ReportParameter> parametersReport = new List<ReportParameter>();
parametersReport.Add(new ReportParameter("Nome", "Nome Teste TI"));
parametersReport.Add(new ReportParameter("Idade", "25"));
// DataSet
RecursoDeGlosaDataSet recursoDataSet = new RecursoDeGlosaDataSet();
// recursosFinalizados **vem do banco
recursosFinalizados.ToList().ForEach(y =>
{
recursoDataSet.Recurso.AddRecursoRow(
y.DataInicioRealizacao.ToString("dd/MM/yyyy"),
y.DataFim.Value.ToString("dd/MM/yyyy"),
y.CodigoTabela,
y.CodigoProcedimento,
y.DescricaoProcedimento,
y.GrauParticipacao,
y.CodigoItem,
y.ValorRecursado.ToString(),
y.Justificativaitem,
y.ValorAcatado.ToString(),
y.JustificativaCliente);
});
ReportViewer ReportViewer = new ReportViewer();
ReportViewer.ProcessingMode = ProcessingMode.Local;
eportViewer.LocalReport.ReportPath = "caminhoDoRelatorio"; // o caminho está ok
ReportViewer.LocalReport.DataSources.Add(
new Microsoft.Reporting.WebForms.ReportDataSource("RecursoDeGlosaDataSet",
(System.Data.DataTable)recursoDataSet.RecursoGlosa));
ReportViewer.LocalReport.SetParameters(parametersReport);
string mimeType = "";
string encoding = "";
string filenameExtension = "";
string[] streams = null;
Microsoft.Reporting.WebForms.Warning[] warnings = null;
string theDeviceSettings = "<DeviceInfo>
<HumanReadablePDF>True</HumanReadablePDF></DeviceInfo>";
byte[] bytes = ReportViewer.LocalReport.Render("PDF", theDeviceSettings,
out mimeType, out encoding, out filenameExtension, out streams, out
warnings);
Agora vou ao erro: relatório roda perfeitamente em minha máquina mas ao colocar em PRODUÇÂO caiu no seguinte erro :
Ocorreu um erro durante o processamento de relatórios local.
at Microsoft.Reporting.WebForms.LocalReport.EnsureExecutionSession() at
Microsoft.Reporting.WebForms.LocalReport.SetParameters(IEnumerable`1
parameters)
*Obs: instalei o reportviewer pelo nuget, então as seguintes dlls estão em produção (Install-Package Microsoft.Report.Viewer -Version 11.0.0)
Microsoft.ReportViewer.Common.dll
Microsoft.ReportViewer.ProcessingObjectModel.dll
Microsoft.ReportViewer.WebForms.dll
A:
Some DLLs are required on the server for Report Viewer to work. When deploying, in addition to the DLLs you already included, try adding the following:
Microsoft.ReportViewer.DataVisualization.dll
Microsoft.SqlServer.Types.dll
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to open new window using Vim Tsuquyomi plugin and :TsuDefinition command?
I use Vim with the Tsuquyomi plugin, and I use the :TsuDefinition command to navigate to function/type definitions. This command replaces the current window with the new file. Is it possible to open the definition in a new window with this plugin?
A:
From Tsuquyomi Doc:
:TsuquyomiSplitDefinition - Split current window in two. Navigate to the location where
the symbol is defined.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to build from src to binary for Android
I want to use some function calls (commands) designed for Linux. I can run them by entering them in adb (the Android command line).
Here are some examples of what others have done:
wget (because it isn't included in most Android device )
Iperf
But after reading their methods and suggestions, I only understand that I need to use the Android NDK and write a correct makefile. I have no idea how to build other people's source code (mostly C/C++) for Android, when on Linux you would only need the 'make' command mentioned in their README. The official NDK documentation mainly covers calling C libraries from Java.
Are there any how-tos, tutorials, or suggestions for this? Thanks!
I have compiled a single-file C++ program. Now I am trying to compile an alternative version of iperf:
https://github.com/tierney/iperf
It seems to depend on a library, some header files, and multiple C files. I failed to compile it by simply passing in all the C files. Is there anything I missed?
A:
Yes, you need the NDK; it provides a C/C++ compiler targeting Android.
In general the steps are all the same:
Setting up the NDK (I wrote a small how-to, but it's for Arch-Linux and the fish-shell, Windows how-to)
Adjusting your make file (instead of gcc compiler use Android NDK compiler, and so on)
Remember that Android uses Bionic C library, so only use functions supported by it
Run make, push the program to your device and make it executable
Of course, this is just an overview how it is done. You should try it and then ask specific questions if you run into troubles.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How shall I understand this deep binding example in Python?
From Programming Language Pragmatics by Scott,
Figure 3.14 Deep binding in Python. At right is a conceptual view of the run-time stack.
Referencing environments captured in closures are shown as dashed boxes and arrows. When
B is called via formal parameter P, two instances of I exist. Because the closure for P was
created in the initial invocation of A, B’s static link (solid arrow) points to the frame of that earlier
invocation. B uses that invocation’s instance of I in its print statement, and the output is a 1.
The catch is that a running program may have more than one instance of an object that is declared within a recursive subroutine. A closure in a language with
static scoping captures the current instance of every object, at the time the closure
is created. When the closure’s subroutine is called, it will find these captured instances, even if newer instances have subsequently been created by recursive calls.
So basically, the quote tries to explain that the following program (which is the same as the one in the screenshot) prints out 1:
def A(I, P):
def B():
print(I)
# body of A:
if I > 1:
P()
else:
A(2, B)
def C():
pass # do nothing
A(1, C) # main program
I don't quite understand the reason being "because the closure for P was
created in the initial invocation of A, B’s static link (solid arrow) points to the frame of that earlier invocation", and "When the closure’s subroutine is called, it will find these captured instances". So I modify the example, and then the new example prints out 2 instead of 1:
def A(I, P):
def B():
print(I)
if I > 2:
P()
elif I > 1:
A(3, B)
else:
A(2, B)
def C():
pass
A(1, C)
Another modified example prints 1:
def A(I, P):
def B():
print(I)
if I > 2:
P()
elif I > 1:
A(3, P)
else:
A(2, B)
def C():
pass
A(1, C)
So how can I know which closure matters?
Generally, is a closure created whenever a function is passed as an argument to another function?
Thanks.
A:
This has become too long for a comment, hence I am adding it as an answer.
I should say that I am answering this from the perspective of someone who learned about these ideas in other languages: I mostly write Python now, but it's possible my terminology is 'wrong' (for which read 'right, but latter-day languages like Python got it wrong'...). In particular I have intentionally skated over a bunch of Python-specific detail, and avoided dealing with things like like the mutability of bindings and the Python 3 nonlocal hack).
I also think the book's usage of the term 'deep binding' is confusing and perhaps wrong: see the end for a note on this. I have therefore largely ignored it.
Bindings, scope and extent
There are three important concepts.
A binding is the association between a name and something it denotes. The most common kind of binding is a variable binding which associates a variable name with a value: there are other kinds (for instance the bindings between exception classes and handlers for them established by try: ... except: ... constructs) but I'll only talk about variable bindings.
The scope of a binding is the region of the code it is accessible in.
The extent of a binding is how long it is accessible for.
There are several options for scope and extent. For variable bindings, Python has:
lexical scope, which means that a binding is accessible only to code for which it is visible in the source code;
and indefinite extent, which means that a binding exists as long as there is any possibility of reference.
(Another common scope is dynamic: a binding with dynamic scope is visible to any code for which it is visible in the source code and any code that is 'down the stack' from that code. Another common extent is definite, which means that a binding goes away the moment control leaves the construct which established it. Bindings for exception handlers have dynamic scope and definite extent in Python, for instance.)
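To make the dynamic case concrete, here is a small illustration (my example, not the book's) of how an exception-handler "binding" is found by searching up the call stack rather than by reading the source:

```python
def raiser():
    # Nothing lexically visible here says how ValueError is handled:
    # the handler is found dynamically, up the call stack.
    raise ValueError("boom")

def caller():
    try:
        return raiser()
    except ValueError as exc:   # visible to everything called from the try
        return "handled: " + str(exc)

print(caller())  # handled: boom
```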
What lexical scope means is that you can (almost) tell by reading the source what bindings a bit of code is referring to.
So consider this:
def outer():
a = 2
def inner(b):
return a + b
return inner(2)
There are two bindings here: a is bound in outer and b is bound in inner (actually, there are three: inner is also bound, to a function, in outer). Each of these two bindings is referenced once: in inner (and the inner binding is referenced once, in outer).
And the important thing is that, by reading the code, you can tell what the reference to a in inner is to: it's to the binding established in outer. That's what 'lexical' means: you (and the compiler, if it is smart enough) can tell what bindings exist by looking at the source.
This is only almost true. Consider this fragment:
def maybe(x):
return x + y
There is one binding created, in maybe, but two references: y is a reference to a binding which is not known to exist. But it may exist: there may be a top-level binding of y which would make this code work. So there is a special caveat around lexical bindings: there is a 'top-level' environment which all definitions can 'see' and which can contain bindings. So if the fragment above was enlarged to read
y = 4
def maybe(x):
return x + y
Then this code is fine, because maybe can 'see' the top-level environment (really, in Python, this is the bindings in the module in which it's defined).
Indefinite extent & first-class functions
For the above examples, the results would be the same with definite or indefinite extent. This stops being true if you consider first-class functions, which Python has. It stops being the case because functions are objects which, by being called, can reference bindings. Consider this:
def outer(x):
def inner(y):
return x + y
return inner
There are three bindings here: outer binds x and inner, and inner binds y (and can see x and inner). So, now, let add4 = outer(4): what should add4(3) return (or, equivalently, what should outer(4)(3) return)? Well, the answer is 7. And this is the case because the binding of x exists for as long as it can be referenced, or in other words, it exists for as long as any instances of inner exist, because they reference it. This is what 'indefinite extent' means.
(If Python had only definite extent, then outer(4)(3) would be some kind of error, since it would reference a binding which no longer exists. Languages with only definite extent can't really have first-class functions in any useful way.)
Something that's important to understand is that lexical scope tells you which bindings are visible, but the actual instances of those bindings which are visible are, of course, dynamic. So, if you consider something like this:
def adder(n):
return lambda e: e + n
a1 = adder(12)
a2 = adder(15)
then a1 and a2 reference different bindings of n: a1(0) is 12 while a2(0) is 15. So by reading the source code you can tell which bindings are captured, but you need to run it to know which instances of them are captured -- what the values of the variables are, in other words.
Compare that with this:
def signaller(initial):
s = [initial]
def setter(new):
s[0] = new
return new
def getter():
return s[0]
return (setter, getter)
(str, gtr) = signaller(0)
Now, str and gtr capture the same binding of s, so str(1) will cause gtr() to return 1.
Closures
So that's actually all there is to know. Except that there is some special terminology which people use, in particular the term 'closure'.
A closure is simply a function which refers to some lexical bindings outside its own definition. Such a function is said to 'close over' these bindings.
I think it would be a good question to ask why this term is needed at all? All you actually need to understand is the scope rules, from which everything else follows: why do you need this special term? I think the reason is partly historical and partly pragmatic:
historically, closures carried a lot of extra baggage in terms of all the closed-over bindings, so it was interesting to people whether a function was, or wasn't a closure;
pragmatically closures can have behaviour which depends on the bindings the close over in slightly non-obvious ways (see the example above) and this perhaps justifies the term.
The example code
The example code was
def A(I, P):
def B():
print(I)
# body of A:
if I > 1:
P()
else:
A(2, B)
def C():
pass # do nothing
A(1, C) # main program
So, when A is called, there is a local function, B which can see the binding of I (and also P & B itself, but it does not refer to these so we can ignore them). Each call to A creates new bindings for I, P & B, and these bindings are different for each call. This includes recursive calls, which is the trick that is being done here to confuse you.
So, what does A(1, C), do?
It binds I to 1, and B to a closure which can see this binding of I. It also binds P to the global (module) value of C, which is a function, but nothing refers to this binding.
Then it calls itself recursively (because I is 1) with arguments of 2 & the value of B, which is the closure that just got created.
In the recursive call, there are new bindings: I is now bound to 2, and P is bound to the closure from the outer call.
A new closure, is created, which captures these bindings and is itself bound, in this inner call, to B. Nothing refers to this binding.
The closure bound to P is then called. It is the closure created in the outer call, and the binding it can see for I is the binding visible to it, whose value is 1. So it prints 1, and we're done.
You can see what is happening by changing the definition to print some useful information:
from __future__ import print_function # Python 2
def A(I, P):
def B():
print(I)
print("{I} {P} {B}".format(I=I, P=P, B=B))
if I > 1:
P()
else:
A(2, B)
def C():
pass
A(1, C)
This prints (for example):
1 <function C at 0x7f7a03768e60> <function B at 0x7f7a03768d70>
recursing with (2, <function B at 0x7f7a03768d70>)
2 <function B at 0x7f7a03768d70> <function B at 0x7f7a03651a28>
calling <function B at 0x7f7a03768d70>
1
Note that, in the inner call, there are two functions which identify themselves as B: one of them is the same as the function created in the outer call (which will be called), while the other one is the just-created closure, which is never referenced again.
Deep binding
I think the book's usage of the term 'deep binding' is at best confusing and in fact probably outright wrong: it is possible that this term has changed meaning, but it certainly did not originally mean what the book thinks it means.
The terms 'deep binding' and 'shallow binding' refer to implementation strategies for languages with dynamic scope. In a system with dynamic scope, a 'free' variable reference (one which is not bound by a particular bit of code) is looked up by searching, dynamically, up the call stack until a binding for it is found. So in a language with dynamic binding you can't tell by looking at a bit of code what bindings it can see (and neither can the compiler!), because the bindings it can see depend on what the call stack looks like at the moment it is run.
Dynamic binding is great for things like exception handlers but usually terrible for most variable bindings.
One reason it's terrible is that a naïve implementation technique makes it inherently slow, and a clever implementation technique needs to be very clever to work in a system with more than one thread of control.
Deep binding is the naïve implementation technique. In deep binding, variable access works the way you think: the system searches up the stack until it finds the binding for the name it's looking for, and then uses that. If the stack is deep and the binding is a long way up it, this is slow.
Shallow binding is the clever implementation technique. This works by, instead of storing the current binding in the stack, storing the previous value in the stack, and smashing the current value into a slot associated with the variable name which is always in the same place. So now looking up a binding just involves looking up the name: there is no search. But creating and destroying bindings may be slower since the old values need to be stashed away. Additionally, shallow binding is not obviously safe in the presence of multiple threads of control: if all the threads share the slot for the binding then catastrophe will follow. So instead each thread needs to have its own slot, or slots need to be indexed by thread as well as name somehow.
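A toy single-threaded sketch of that slot-and-save-stack idea (my own illustration, not a real implementation): each dynamic variable gets one global slot; binding pushes the old value onto a stack and smashes the new one into the slot, so lookup never searches.

```python
slots = {}   # name -> current value (the "slot", always in the same place)
saved = []   # stack of (name, previous value) pairs

def dyn_bind(name, value):
    # Save the old value, then smash the new one into the slot.
    saved.append((name, slots.get(name)))
    slots[name] = value

def dyn_unbind():
    # Restore the previous value on the way out of the binding construct.
    name, old = saved.pop()
    slots[name] = old

def dyn_get(name):
    return slots[name]   # no search up the stack: constant-time lookup

dyn_bind("x", 1)
dyn_bind("x", 2)
assert dyn_get("x") == 2   # innermost binding wins
dyn_unbind()
assert dyn_get("x") == 1   # previous binding restored
```

The constant-time `dyn_get` is the whole point; the cost moves to bind/unbind, and sharing `slots` across threads is exactly the catastrophe described above.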
I suspect that, for cases where dynamic scope is used such as exception handlers, systems use deep binding because it is much simpler to get right in multithreaded systems and performance is not critical.
Here is the classic early paper on deep & shallow binding, by Henry Baker.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to achieve background image that is the size of the viewport
Many websites use this, but https://modsquad.com/ was the first example I found. When you visit the site, their background image (a video in this case) fills the full width and height of whatever screen you view it on; directly below it is separate content, but you only see the video before scrolling down. How do you achieve this? In my search for an answer I have only seen examples that set the entire html background to a certain image, which is not what I am looking for. Thanks in advance for any help.
A:
Visit https://modsquad.com/ and open the console; you can see what they do.
When the browser is resized, the video(or image) changes its height.
You can do it via javascript, read this question : JavaScript - Get Browser Height
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Excel Macro Not Doing anything?
This is my first Excel macro (and first time working with VBA), so it is most likely wrong, but I'm trying to go through each sheet in my workbook and rename it to the value of that sheet's "A2" cell. As the title says, nothing happens when I run the macro, although it does run. Here is my code:
Sub RenameSheets()
Dim WS_Count As Integer
Dim I As Integer
' Set WS_Count equal to the number of worksheets in the active
' workbook.
WS_Count = ActiveWorkbook.Worksheets.Count
' Begin the loop.
For I = 1 To WS_Count
ActiveSheet.Name = ActiveSheet.Range("A2").Value
Next I
End Sub
A:
Sub RenameSheets()
Dim WS_Count As Integer
Dim I As Integer
WS_Count = ActiveWorkbook.Worksheets.Count
For I = 1 To WS_Count
Dim WS As Worksheet
Set WS = ActiveWorkbook.Worksheets(I)
'Worksheet names can not be null
If Len(WS.Cells(2, 1)) > 0 Then
WS.Name = WS.Cells(2, 1)
End If
Next I
End Sub
A:
You are not selecting the different sheets, so ActiveSheet never changes. You can rewrite your procedure as below to get the intended result:
Dim currentWorksheet as Worksheet
For Each currentWorksheet in ActiveWorkbook.Worksheets
currentWorksheet.name = currentWorksheet.Range("A2").Value
Next currentWorksheet
The above is a For Each loop that sets currentWorksheet to each worksheet in the workbook in turn.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Use of "Au plaisir" at the end of a correspondance
It is a closing I see frequently in emails in Québec. Is it a reasonable equivalent to "regards" when sending out correspondence in two languages? It seems less stiff than "cordialement". Should it be used in correspondence with people you have not met before?
A:
Les usages sont différents dans les divers pays de la francophonie. « Au plaisir » est, je crois, plus courant au Québec qu'en France. « Au plaisir » se comprend en France mais est peu employé, on y dira plus facilement « Au plaisir de vous lire ».
Dans mes échanges électroniques professionnels (donc hors proches) j'utilise « cordialement » en français et « regards » en anglais. On utilise « cordialement » aussi avec des personnes qu'on connaît mais avec lesquelles on n'a pas de lien d’amitié. On peut aussi le faire précéder d'un adverbe pour lui donner un aspect plus convivial et dire, « bien cordialement » ou « très cordialement ».
Usage differs across the French-speaking world. I think "Au plaisir" is more common in Québec than in France. People will understand "Au plaisir" in France, but it is not much used, and when it is used it is usually followed by something, such as "Au plaisir de vous lire".
In my work-related email correspondence I use "Regards" when writing in English and "cordialement" when writing in French. "Cordialement" is not really stiff; it can be used with people we have never met, but also with people we know but with whom we have no personal relationship. If you think it is too stiff, you can soften it with an adverb and say "Bien cordialement" or "Très cordialement".
A:
"Au plaisir!" could be used in place of "Until then!" or "Looking forward to it!"
As in, it will be a pleasure to see, to welcome you, to meet you..
|
{
"pile_set_name": "StackExchange"
}
|
Q:
REST API of Teamcity for initiate a build
Can anybody suggest how the TeamCity REST API can be used to trigger a build for a particular project available in a Bitbucket/Mercurial repository, and to get the result of the build, i.e. the jar/war file?
A:
See JetBrains help:
To start a build, send POST request to
http://teamcity:8111/httpAuth/app/rest/buildQueue with the "build"
node in content - the same node as details of a queued build or
finished build. The queued build details will be returned.
Set up the build configuration as normal, and trigger it using the REST API with the steps above.
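The "build" node mentioned in the docs is a small XML payload POSTed with Content-Type: application/xml; a minimal example looks like this (the build configuration id "MyProject_Build" is an assumption, substitute your own):

```xml
<build>
    <buildType id="MyProject_Build"/>
</build>
```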
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Make blank params[] nil
When a user submits a form and leaves certain fields blank, they get saved as blank in the DB. I would like to iterate through the params[:user] collection (for example) and if a field is blank, set it to nil before updating attributes. I can't figure out how to do this though as the only way I know to iterate creates new objects:
coll = params[:user].each do |c|
if c == ""
c = nil
end
end
Thanks.
A:
Consider what you're doing here by using filters in the controller to affect how a model behaves when saved or updated. I think a much cleaner method would be a before_save call back in the model or an observer. This way, you're getting the same behavior no matter where the change originates from, whether its via a controller, the console or even when running batch processes.
Example:
class Customer < ActiveRecord::Base
NULL_ATTRS = %w( middle_name )
before_save :nil_if_blank
protected
def nil_if_blank
NULL_ATTRS.each { |attr| self[attr] = nil if self[attr].blank? }
end
end
This yields the expected behavior:
>> c = Customer.new
=> #<Customer id: nil, first_name: nil, middle_name: nil, last_name: nil>
>> c.first_name = "Matt"
=> "Matt"
>> c.middle_name = "" # blank string here
=> ""
>> c.last_name = "Haley"
=> "Haley"
>> c.save
=> true
>> c.middle_name.nil?
=> true
>>
A:
If you just want to kill the blanks, you can just do params.delete_if {|k,v| v.blank?}.
A:
A good gem for handling this in the model: https://github.com/rmm5t/strip_attributes
It defines a before_validation hook that trims whitespaces and sets empty strings to nil.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Keeping an open StreamWriter as Class Field
I was trying to write a class that would keep an open StreamWriter until the instantiated object was destroyed -- this is to avoid the normal using idiom, because it's important that outside sources do not edit the file while the program is running (hence FileShare.Read).
Once instantiated, the file is created successfully via the constructor. Calls to the Write method do not actually write anything, and throw no exceptions. When the object is destroyed, the destructor's sw.Close() throws "Cannot access a closed file", even though BaseStream isn't null. I'm not sure I understand the reasoning for that.
This question follows somewhat similar approach, but in some other type of class. So I thought this approach should have worked, but wasn't able to determine why it isn't.
class SchemaWriter
{
private StreamWriter sw;
private string path;
/// <summary>
/// Creates an object to handle writing Schema.ini information
/// </summary>
/// <param name="Path">Path to place Schema.ini file in</param>
public SchemaWriter(string Path)
{
path = Path;
sw = new StreamWriter(File.Open(Path + "Schema.ini", FileMode.Create,
FileAccess.ReadWrite, FileShare.Read));
}
/// <summary>
/// Writes Schema information about the supplied file name
/// </summary>
/// <param name="FileName">Name of file to write the Schema info about</param>
public void Write(string FileName)
{
sw.WriteLine(String.Format(@"[{0}]", FileName));
sw.WriteLine(@"Format=TabDelimited");
sw.WriteLine();
}
/// <summary>
/// Closes StreamWriter, deletes ini file
/// </summary>
~SchemaWriter()
{
if(sw.BaseStream != null)
sw.Close();
File.Delete(path + @"Schema.ini");
}
}
A:
The GC has already claimed it (and disposed of it): by the time your finalizer runs, the underlying stream may itself already have been finalized, because finalization order between objects is not guaranteed. That is why you should not touch other managed objects from a destructor.
The only thing you should do is to implement IDisposable. Do the following:
class SchemaWriter : IDisposable
{
private StreamWriter sw;
public void Dispose()
{
sw.Dispose();
}
...
}
You can now use your object with:
using(var writer = new SchemaWriter(path))
{
}
This will close your StreamWriter immediately when you are done with the object. And if you don't use the using the GC will collect the StreamWriter for you when it feels like it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to search one string for all of the words contained in a second string?
I need to check a string A for a match based on whether it contains all of the words in another string B - in whatever order.
So, let's say string A is this:
one two three four five
And string B is one of these:
one two three // match!
one three two // match! (order doesn't matter)
one two six // NOT A MATCH! ('six' is not found in string A)
one two three four // match!
one two three four five // match!
one two three four five seven // NOT A MATCH! ('seven' is not found in string A)
How would I find a match between string A and string B only if every word in string B is found in string A (regardless of the order of the words in either string and regardless of whether string A contains additional words that are not found in string B)?
I don't know if jQuery has any special features that would help with this or whether I need to do it strictly with pure JavaScript?
A:
Split the strings into arrays of words.
For each word in string A, assign obj[word] = true;.
For each word in string B, check if obj[word] === true;. If it does not, return false.
Return true.
This should be trivial enough to translate into code.
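A sketch of those four steps in plain JavaScript (the function name and the whitespace split are my choices, not part of the answer):

```javascript
function containsAllWords(a, b) {
  var obj = {};
  var wordsA = a.split(/\s+/);
  for (var i = 0; i < wordsA.length; i++) {
    obj[wordsA[i]] = true;           // step 2: mark every word of A
  }
  var wordsB = b.split(/\s+/);
  for (var j = 0; j < wordsB.length; j++) {
    if (obj[wordsB[j]] !== true) {   // step 3: any miss means no match
      return false;
    }
  }
  return true;                        // step 4
}

console.log(containsAllWords("one two three four five", "one three two")); // true
```

Checking `=== true` (rather than just truthiness) also guards against inherited property names like "constructor" producing false positives.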
A:
// If the client has the array method every, this method is efficient-
function commonwords(string, wordlist){
string= string.toLowerCase().split(/\s+/);
wordlist= wordlist.toLowerCase().split(/\s+/);
return wordlist.every(function(itm){
return string.indexOf(itm)!= -1;
});
}
commonwords('one two three four five','one two nine');
// If you want any client to handle it without a special function,
// you can 'explain' (shim) the advanced array methods-
Array.prototype.every= Array.prototype.every || function(fun, scope){
var L= this.length, i= 0;
if(typeof fun== 'function'){
while(i<L){
if(i in this && !fun.call(scope, this[i], i, this)) return false;
++i;
}
return true;
}
return null;
}
Array.prototype.indexOf= Array.prototype.indexOf || function(what, i){
i= i || 0;
var L= this.length;
while(i< L){
if(this[i]=== what) return i;
++i;
}
return -1;
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Output values to be separated by commas and to exclude the last comma
I need my output to be separated by commas, which it is, but I need the last comma excluded; also, how would I rerun my program? I need to do this in the simplest way possible. This is what I have so far:
System.out.print("Enter numbers (-1 to end):");
int num = input.nextInt();
int sum = 0;
String u= " ";
while (num != -1) {
sum += num;
u += num + ",";
num = input.nextInt();
}
System.out.println("Entered Numbers: " + u);
System.out.println("The Sum: " + sum);
A:
Replace
u += num + ",";
with
u += (u.length() == 1 ? "" : ",") + num;
This only appends the comma if something has already been appended to u.
Note that it is better to use a StringBuilder to concatenate strings in a loop.
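A StringBuilder version of the same "separator before every element except the first" idea (the class name Joiner is just for illustration):

```java
import java.util.List;

public class Joiner {
    // Builds "1,2,3" with no trailing comma: the separator is appended
    // *before* every element except the first.
    public static String join(List<Integer> nums) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < nums.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append(nums.get(i));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(List.of(1, 2, 3))); // prints 1,2,3
    }
}
```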
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I open an IPython notebook without the output?
I have an IPython notebook where I've accidentally dumped a huge output (15 MB) that crashed the notebook. Now when I open the notebook and attempt to delete the troublesome cell, the notebook crashes again—thus preventing me from fixing the problem and restoring the notebook to stability.
The best fix I can think of is manually pasting the input cells to a new notebook, but is there a way to just open the notebook without any outputs?
A:
There is this nice snippet (that I use as a git commit hook) to strip the output of an ipython notebook:
#!/usr/bin/env python
def strip_output(nb):
for ws in nb.worksheets:
for cell in ws.cells:
if hasattr(cell, "outputs"):
cell.outputs = []
if hasattr(cell, "prompt_number"):
del cell["prompt_number"]
if __name__ == "__main__":
from sys import stdin, stdout
from IPython.nbformat.current import read, write
nb = read(stdin, "ipynb")
strip_output(nb)
write(nb, stdout, "ipynb")
stdout.write("\n")
You can easily make it a bit nicer to use, currently you'd have to call it as
strip_output.py < my_notebook.ipynb > my_notebook_stripped.ipynb
A:
If you are running jupyter 4.x, you will get some API deprecation warnings when running filmor's script. Although the script still works, I update the script a bit to remove the warnings.
#!/usr/bin/env python
def strip_output(nb):
for cell in nb.cells:
if hasattr(cell, "outputs"):
cell.outputs = []
if hasattr(cell, "prompt_number"):
del cell["prompt_number"]
if __name__ == "__main__":
from sys import stdin, stdout
from nbformat import read, write
nb = read(stdin, 4)
strip_output(nb)
write(nb, stdout, 4)
stdout.write("\n")
A:
As for later versions of jupyter, there is a Restart Kernel and Clear All Outputs... option that clears the outputs but also removed the variables.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I get the size of a Solr document?
I would like to know the size in bytes of individual Solr documents/responses. Is there a straightforward way to figure this out?
We are using the solrj java client.
I've looked around and have only found ways to determine the size of the index, but nothing on the size of the documents themselves.
A:
The size of a Solr core is composed of both:
- the compressed indexes
- the compressed stored files
An easy way to know the size of your Solr core/node is to look in the Solr admin UI.
So in my case we have:
2993 documents for 5.41 MB = an average of about 1.8 KB per document, including the index and the stored fields.
There is also another way if you are more geeky and love to code:
how to get the index size in Solr
UPDATE 5/06/2014:
Hehe, I've found something that sounds good: the MAT tool.
You can look up the LRU cache in order to understand the memory used (and the space used by a specific doc). By running a get on the ID, you will have the ID retrieved and the memory taken by the ID on HDD/RAM! :)
See these great articles:
MAT usage,
MAT.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Get the H-Axis(X-Axis) value of google area chart
In the Google Charts DataTable object, I can get the currently selected point's value (V-axis/Y-axis) using datatable.getValue(...). However, I want to get the time/date/year from the X-axis (see screenshot). I didn't find any DataTable function to achieve that. Does anyone know how?
This is my code
google.visualization.events.addListener(chart, 'select', function(a, b, c, d) {
var selectedItem = chart.getSelection()[0];
if (selectedItem) {
// Get the current Y-axis value which is 1120
var value = data.getValue(selectedItem.row, selectedItem.column);
alert('The user selected ' + value);
// How I can get the value of 2015 which is the X-axis value??????
}
});
A:
In most cases, your axis value will be in column 0, so just change out selectedItem.column for 0, and you will have the axis value:
var axisValue = data.getValue(selectedItem.row, 0);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
CSS - why my browsers can't reflect any changes made on the server?
I was working on some CSS code when the server suddenly stopped reflecting any changes made on it. At first I thought it was a caching problem, but I disabled caching in my browser and even tried different browsers, and the site still uses the old version of my CSS file.
If I download my CSS file from the server and open in the text editor, it shows all the changes I made to the code but they don't reflect on my website at all. The site is using old version of the CSS file that doesn't even exist anymore on the server.
What on Earth is happening with my server? Can it be a router caching problem?
A:
I can't say exactly why this happens, but try adding a query string where you include the CSS in your HTML, e.g. ?v1.
Example: href="resources/yourDir/style.css?v1"
This will force the browser to download the new CSS.
You can add anything after the ?: a timestamp, a number, or a word. Whatever you like. Change it whenever the file changes.
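A complete cache-busting stylesheet include might look like this (the version value is arbitrary, bump it on each change):

```html
<link rel="stylesheet" href="resources/yourDir/style.css?v=2">
```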
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Kivy Plyer Notification
I am very new to Kivy and I just tried Plyer for the app I am making, but for some reason I cannot get the notify method to work. As soon as the Clock callback runs, it gives me this error: TypeError: notify() missing 1 required positional argument: 'self'
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.anchorlayout import AnchorLayout
from kivy.uix.switch import Switch
from kivy.clock import Clock
from kivy.uix.label import Label
import datetime
from kivy.event import EventDispatcher
import plyer
count = 0
class ConnectPage(GridLayout):
def __init__(self, **kwargs):
super(ConnectPage, self).__init__(**kwargs)
self.cols = 1
self.switch = Switch()
self.add_widget(self.switch)
self.label = Label(text="0")
self.add_widget(self.label)
def manager(self):
global count
count += 1
print("[", count, "]")
plyer.facades.Notification.notify(title='hehe', message='huhu')
Clock.schedule_interval(manager, 1 / 1.)
class TestService(App):
def build(self):
return ConnectPage()
TestService().run()
A:
notify() is a method of the class Notification, and it is not marked @staticmethod. So you need an instance of the class to call it.
According to the documentation, the proper way to create a notification is:
from plyer import notification
notification.notify(title='hehe', message='huhu')
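The error can be reproduced without plyer at all. This is a minimal sketch (the `Notification` class here is a stand-in, not plyer's) showing why calling notify() on the class rather than on an instance raises the TypeError:

```python
# Stand-in class: like plyer's Notification, notify() is an ordinary
# instance method, so it needs a 'self' (an instance) to be called on.
class Notification:
    def notify(self, title='', message=''):
        return '{}: {}'.format(title, message)

try:
    # Calling on the class itself supplies no instance -> no 'self'.
    Notification.notify(title='hehe', message='huhu')
except TypeError as err:
    print('class call failed:', err)

notification = Notification()  # plyer similarly exports a ready-made instance
print(notification.notify(title='hehe', message='huhu'))
```

This is why `from plyer import notification` (the lowercase instance) works while `plyer.facades.Notification.notify(...)` (the class) does not.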
|
{
"pile_set_name": "StackExchange"
}
|
Q:
C++ - "Most important const" doesn't work with expressions?
According to Herb Sutter's article http://herbsutter.com/2008/01/01/gotw-88-a-candidate-for-the-most-important-const/, the following code is correct:
#include <iostream>
#include <vector>
using namespace std;
vector<vector<int>> f() { return {{1},{2},{3},{4},{5}}; }
int main()
{
const auto& v = f();
cout << v[3][0] << endl;
}
i.e. the lifetime of v is extended to the lifetime of the v const reference.
And indeed this compiles fine with gcc and clang and runs without leaks according to valgrind.
However, when I change the main function thusly:
int main()
{
const auto& v = f()[3];
cout << v[0] << endl;
}
it still compiles but valgrind warns me of invalid reads in the second line of the function due to the fact that the memory was free'd in the first line.
Is this standard compliant behaviour or could this be a bug in both g++ (4.7.2) and clang (3.5.0-1~exp1)?
If it is standard compliant, it seems pretty weird to me... oh well.
A:
There's no bug here except in your code.
The first example works because, when you bind the result of f() to v, you extend the lifetime of that result.
In the second example you don't bind the result of f() to anything, so its lifetime is not extended. Binding to a subobject of it would count:
[C++11: 12.2/5]: The second context is when a reference is bound to a temporary. The temporary to which the reference is bound or the temporary that is the complete object of a subobject to which the reference is bound persists for the lifetime of the reference except: [..]
…but you're not doing that: you're binding to the result of calling a member function (e.g. operator[]) on the object, and that result is not a data member of the vector!
(Notably, if you had an std::array rather than an std::vector, then the code† would be absolutely fine as array data is stored locally, so elements are subobjects.)
So, you have a dangling reference to a logical element of the original result of f() which has long gone out of scope.
† Sorry for the horrid initializers but, well, blame C++.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PHP preg_quote() and less-than sign
echo preg_quote("aaa<bbb");
should write:
aaa\<bbb
but I get:
aaa\
This is the only sign that makes problems.
A:
If you want to display it in browser what it just is, you could wrap it in <pre> tag.
echo '<pre>'.preg_quote("aaa<bbb").'</pre>';
Or you could use htmlspecialchars to escape the <.
echo htmlspecialchars(preg_quote("aaa<bbb"));
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Get user's home folder under Python process started by supervisord
I would like to store login secrets in a file from the current user's account in a Django configuration. I'm using the recommended portable way to get the home folder, as in:
os.path.expanduser("~")
This worked in all environments, both locally and when started with gunicorn -D config.wsgi on a server.
My problem is however that I introduced supervisord to control the gunicorn process and now this function doesn't work, it simply returns /.
This is the relevant section of supervisord.conf
[program:kek_django]
command=.../venv/bin/gunicorn config.wsgi
directory=.../django
user=testuser
Under this environmet, os.path.expanduser("~") becomes /.
Can you tell me either how to fix this problem either by fixing the environment or the function used to detect the home directory?
note: OS is FreeBSD 10, if that is relevant
update: os.environ reports the following under the running process:
'SUPERVISOR_SERVER_URL': 'unix:///var/run/supervisor/supervisor.sock',
'RC_PID': '84177',
'SERVER_SOFTWARE': 'gunicorn/19.3.0',
'SUPERVISOR_ENABLED': '1',
'SUPERVISOR_PROCESS_NAME': 'test_django',
'PWD': '/',
'DJANGO_SETTINGS_MODULE': 'config.settings.production',
'SUPERVISOR_GROUP_NAME': 'test_django',
'PATH': '/sbin:/bin:/usr/sbin:/usr/bin',
'HOME': '/'
A:
As supervisord's docs for Subprocess Environment say:
No shell is executed by supervisord when it runs a subprocess, so environment variables such as USER, PATH, HOME, SHELL, LOGNAME, etc. are not changed from their defaults or otherwise reassigned. This is particularly important to note when you are running a program from a supervisord run as root with a user= stanza in the configuration. Unlike cron, supervisord does not attempt to divine and override “fundamental” environment variables like USER, PATH, HOME, and LOGNAME when it performs a setuid to the user defined within the user= program config option. If you need to set environment variables for a particular program that might otherwise be set by a shell invocation for a particular user, you must do it explicitly within the environment= program config option. An example of setting these enviroment variables is as below.
[program:apache2]
command=/home/chrism/bin/httpd -c "ErrorLog /dev/stdout" -DFOREGROUND
user=chrism
environment=HOME="/home/chrism",USER="chrism"
So, that's the actual fix. (If you construct the supervisord.conf file dynamically and need to know how to look those values up dynamically, I can explain that, but it's pretty easy, and I don't think you need it anyway.)
[program:kek_django]
command=.../venv/bin/gunicorn config.wsgi
directory=.../django
user=testuser
environment=HOME="/home/testuser"
If this doesn't make sense to you, consider:
If you're running supervisord as root, it doesn't have testuser's HOME or anything else. And all it does is setuid(testuser), which just changes its user ID; it doesn't give the shell, or any other part of the system, any opportunity to set up the variables for testuser. Most similar tools have workarounds to fake it, following in the well-worn footsteps of how cron works, but supervisord intentionally chose not to do that.
Alternatively, as the docs for expanduser say:
On Unix, an initial ~ is replaced by the environment variable HOME if it is set; otherwise the current user’s home directory is looked up in the password directory through the built-in module pwd. An initial ~user is looked up directly in the password directory.
And a quick look at the source shows that it does this in the most obvious way possible.
So there are three obvious workarounds from within your code:
Use ~testuser instead of ~ (and you can even generate that programmatically from the username if you want).
Write your own expanduser function that just does the pwd.getpwuid(os.getuid()).pw_dir without checking for HOME.
Manually set HOME to pwd.getpwuid(os.getuid()).pw_dir at startup if it's /.
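Workaround 3 can be sketched like this (assuming a POSIX system; `home_dir` is a hypothetical helper name, not part of Django or supervisord):

```python
import os
import pwd

def home_dir():
    """Return the user's home, trusting HOME unless it is unset or bogus."""
    home = os.environ.get('HOME')
    if not home or home == '/':
        # Fall back to the passwd database, which expanduser() also consults
        # when HOME is missing entirely.
        home = pwd.getpwuid(os.getuid()).pw_dir
    return home

print(home_dir())
```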
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get a related object sorted with Entity Framework for ASP.NET MVC
Having two classes like Blog and Post, in Entity Framework (and LINQ-to-Entities), how can you get the blogs with the posts sorted by date. I was getting the blogs with the posts this way:
from blog in db.BlogSet.Include("Posts") select blog
and now I'm forced to do this:
public class BlogAndPosts {
public Blog Blog { get; set; }
public IEnumerable<Post> Posts { get; set; }
}
from blog in db.BlogSet
select new BlogAndPosts () {
Blog = blog,
Posts = blog.Posts.OrderByDescending(p => p.PublicationTime)
}
which is very convoluted and ugly. The reason why I'm creating a BlogPosts class is that now, since I have to pass two variables, Blog and Posts, to MVC, I need a view model.
I'm even tempted to try this hack:
from blog in db.BlogSet
select new Blog(blog) {
Posts = blog.Posts.OrderByDescending(p => p.PublicationTime)
}
but what's the correct way to do it? Is Entity Framework not the way to go with MVC?
A:
I generally create a presentation model type wholly unaware of the Entity Framework and project into that. So I would do something like:
public class PostPresentation {
public Guid Id { get; set; }
public string Title { get; set; }
public DateTime PostTime { get; set; }
public string Body { get; set; }
}
public class BlogHomePresentation {
public string BlogName { get; set; }
public IEnumerable<Post> RecentPosts { get; set; }
}
from blog in db.BlogSet
select new BlogHomePresentation
{
BlogName = blog.name,
RecentPosts = (from p in blog.Posts
orderby p.PublicationTime descending
select new PostPresentation
{
Id = p.Id,
Title = p.Title,
PostTime = p.PublicationTime,
Body = p.Body
}).Take(10)
}
Does this seem like a lot of work? Consider the advantages:
Your presentation is entirely ignorant of your persistence. Not "ignorant as in having to make all of the properties public virtual," but entirely ignorant.
It is now possible to design the presentation before designing the database schema. You can get your client's approval without doing so much work up front.
The presentation model can be designed to suit the needs of the page. You don't need to worry about eager loading or lazy loading; you just write the model to fit the page. If you need to change either the page or the entity model, you can do either one without affecting the other.
Model binding is easier with simple types. You will not need a custom model binder with this design.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jquery iframe a links into new div / tab / window?
I have a tricky question.
All pages are within same server / portal.
I have a page that embeds iframe from the other page using the following code:
$('#mydiv').append('<p id="loading">Loading ...</p>');
$('#mydiv').append('<iframe id="myframe" name="myframe" class="myframe" onload="myframe()" style="display:none;"></iframe>');
$('#myframe').attr('src', 'https://mywebsite.com/page2');
function myframe() {
var $myframesearch = $('#myframe').contents();
$myframesearch.find("a").attr('target','_parent');
}
$('#myframe').load(function(){
$('#loading').remove();
$('#myframe').fadeIn();
});
All of the links within the iframe have no real href (just href="javascript:void(0)") and use scripts within the iframe to process the action dynamically.
Some links open in a new window, some do not.
I would like to force all links to open in a new tab, window, or append to a new div, but none of the methods work, like base/parent, onclick/new window, _top, _parent, etc.
However, my idea was to hide the iframe, wait until its content is loaded after a click, append the loaded content to a new hidden div, and then fade it in. When doing so, the loaded iframe content resets back to its default instead of keeping the new content.
Does anyone knows how this can be solved?
Thank you all!
A:
So I checked, and it appears that other JavaScript overwrites the "a" tag action with some "data" field in the tag, but only for the "a" tags that contain "Open:" in their link.
I found a solution to the problem below: load the link for this "a" tag from another page, bypassing the JavaScript overwriting:
$(function(){
$('#mydiv').append('<p id="loading">Loading ...</p>');
$('#mydiv').append('<iframe id="myframe" name="myframe" class="myframe" src="https://mywebsite.com/page2" onload="myframe()" style="display:none;"></iframe>');
$('#myframe').load(function(){
$('#loading').hide();
$('#myframe').fadeIn();
});
$('<div id="popup" style="display:none; width:1px; height:1px; position:absolute; top:0; left:0;"></div>').appendTo('html');
});
function myframe() {
var $myframesearch = $('#myframe').contents();
$myframesearch.find("a").attr('target','_parent');
$myframesearch.find('a:contains("Open:")').on('click',function(){
$(this).attr('data','');
var $texta = $(this).text();
var $text = $texta.replace(/Open: /,"");
$('#popup').load('https://mywebsite.com/page2' + ' a:contains("'+$text+'")', function(){
$('#popup a').each(function(){
this.href = this.href.replace(/https:\/\/mywebsite\.com\/page2\/a/, "https://mywebsite.com/page2");
this.click();
});
});
});
}
Hope this helps :)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Ubuntu linux takes longer time for incorrect passwords
When I log into my Ubuntu 8.10 box with a correct password the system figures out almost instantaneously that the password is correct and logs me in. However, if I supply an incorrect password, it takes significantly longer to figure out that the password is incorrect and to show me the login screen.
Why is this? Should it not take the same amount of time in both cases?
A:
It's a security feature to slow down people who are trying to guess your password. It takes Ubuntu the same amount of time to see if it's correct or not, but then it waits for a few seconds before letting you try again.
A:
As Dentrasi has explained - this is to make it more difficult for an attacker to carry out a brute-force attack on the password store. In almost all circumstances, you don't want to change this behavior.
If you have a good reason to (which I can't think of), you can modify it via /etc/login.defs - See the login.defs(5) man page.
FAIL_DELAY (number)
Delay in seconds before being allowed another attempt after a login failure.
Hmmm... At the end of the manpage...
Much of the functionality that used to be provided by the shadow password suite
is now handled by PAM. Thus, /etc/login.defs is no longer used by passwd(1), or
less used by login(1), and su(1). Please refer to the corresponding PAM
configuration files instead.
The appropriate PAM entry instead...
# Enforce a minimal delay in case of failure (in microseconds).
# (Replaces the `FAIL_DELAY' setting from login.defs)
# Note that other modules may require another minimal delay. (for example,
# to disable any delay, you should add the nodelay option to pam_unix)
auth optional pam_faildelay.so delay=3000000
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there a way to use FINDSTR with non-ASCII (in this case Japanese/Chinese) characters in batch?
I have a list of Japanese Kanji and their pronunciations saved in a text file (JouyouKanjiReadings.txt) like this
亜 ア
哀 アイ,あわれ,あわれむ
愛 アイ
悪 アク,オ,わるい
握 アク,にぎる
圧 アツ
(each gap is made by pressing TAB)
and I have a script like this
@echo off
set /p text=Enter here:
echo %text%>Search.txt
echo.
findstr /G:"Search.txt" JouyouKanjiReadings.txt || echo No Results && pause > nul && exit
pause > nul
However, when I run the script, I always get "No Results". I tried with English characters and it worked fine. I also tried the same script with this
findstr "%text%" JouyouKanjiReadings.txt || echo No Results && pause > nul && exit
but got the same results. Is there any way to get around this? Also, I'm displaying these characters correctly in the command prompt by using
chcp 65001
and a different font.
A:
You need to use find (which supports Unicode but not regex) instead of findstr (which supports regex but not Unicode). See Why are there both FIND and FINDSTR programs, with unrelated feature sets?
D:\kanji>chcp
Active code page: 65001
D:\kanji>find "哀" JouyouKanjiReadings.txt
---------- JOUYOUKANJIREADINGS.TXT
哀 アイ,あわれ,あわれむ
Redirect to NUL to suppress the output if you don't need it
That said, find isn't a good solution either. Nowadays you should use PowerShell instead of cmd with all of its quirks due to compatibility legacy issues. PowerShell fully supports Unicode and can run any .NET framework methods. To search for strings you can use the cmdlet Select-String or its alias sls
PS D:\kanji> Select-String '握' JouyouKanjiReadings.txt
JouyouKanjiReadings.txt:5:握 アク,にぎる
In fact you don't even need to use UTF-8 and codepage 65001. Just store the file in UTF-16 with BOM (that'll result in a much smaller file because your file contains mostly Japanese characters), then find and sls will automatically do a search in UTF-16
Of course if there are a lot of existing batch code then you can call PowerShell from cmd like this
powershell -Command "Select-String '哀' JouyouKanjiReadings.txt"
But if it's entirely new then please just avoid the hassle and use PowerShell
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Making column sum of adjacency matrix even.
Let $G$ be a connected graph with $N$ vertices, and say I have its adjacency matrix of order $N$. How can I make the sum of each column even?
For example, I have a graph with $4$ vertices and $4$ edges:
$\begin{bmatrix}0 & 1 & 0 &0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \end{bmatrix}$
And I want to convert it like this,
$\begin{bmatrix}0 & 0 & 0 &0 \\
1 & 0 & 1 & 0\\
0 & 0 & 0 & 0\\
1 & 0 & 1 & 0 \end{bmatrix}$
For reference, if $i^{th}$ element of the $j^{th}$ row is $1$ then the edge is directed from $j$ to $i$.
You can reverse any edge between two vertices such that the graph remains connected. And the adjacency matrix's indexing starts from 1 rather than from 0.
A:
As mentioned in the comments, this depends on what operations you allow, because you are clearly allowing your underlying graph to change --- the graph after your transformation isn't isomorphic to the first graph.
For your first matrix, you have the following graph:
For your second matrix, you have the following:
Taken as directed graphs, these are not isomorphic (however, if you relax them to merely undirected, they are). As such, it's difficult to determine what operations you're allowing as legal transformations.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Function behavior with very large variables
Whenever I think about how a function behaves, I always try to identify a general pattern of behavior with some common numbers (somewhere between 5 and 100 maybe) and then I try to see if anything interesting happens around 1, 0 and into negative numbers if applicable.
If that all works out, I essentially assume that I know that the function is going to behave similarly for very large numbers as it does for those relatively small numbers.
Are there notable (famous, clever or common) functions where very large numbers would cause them to behave significantly differently than would initially be thought if I followed my regular experimental pattern? If so, are there any warning signs I should be aware of?
A:
The Griewank function,
$$ f(\mathbf x) = \frac1{4000}\sum_{i=1}^n x_i^2 - \prod_{i=1}^n \cos\left(\frac{x_i}{\sqrt i}\right) + 1 $$
which is one of the objective functions used in testing optimization algorithms, looks completely different in large scale (dominated by x2) and small scale (dominated by cos x).
(Surface plots of the Griewank function at three zoom levels; images from geatbx.com omitted.)
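A quick numerical check of this behaviour, as a sketch in Python: at the origin the cosine product dominates and the function is exactly 0, while far from the origin the quadratic term dominates:

```python
import math

def griewank(x):
    """Griewank test function for a point x = [x1, ..., xn]."""
    quad = sum(xi * xi for xi in x) / 4000.0
    prod = 1.0
    for i, xi in enumerate(x, start=1):
        prod *= math.cos(xi / math.sqrt(i))
    return quad - prod + 1.0

print(griewank([0.0, 0.0]))      # global minimum: 0
print(griewank([600.0, 600.0]))  # quadratic term dominates (roughly 180)
```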
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jQuery $.POST only works when some function executed after it.
I have this jQuery project that is almost complete, for the final portion when the user's data has been succeesfully validated, it executes an external PHP script using the $.post function, passing as parameters the user entered info.
My problem though is that my callback function only works if i put an alert function after it.
Works:
$.post("backend.php", dataString,
function(response) {
if (response=="1") {
alert("ok");
}
}, 'html');
alert('test');
But if I don't put that alert after the closing tag, my script doesn't do the alert "ok", or anything else in the callback.
Do any of you see what's wrong with my code?
Thanks!
A:
My guess is that there's something that comes after the $.post() script that either causes the page to change or causes the post to be stopped or fail.
Fundamentally, all the alert('test') does is block your code waiting for the OK button to be pressed. The only way that could make your code suddenly work is if there is something executing in your code after that second alert that messed up the post.
Remember the post is asynchronous. It will happen in the background while any other code you have runs and only when it succeeds will the alert('ok') run.
So, my guess is that something else in your page is running and messing up the post.
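The ordering can be sketched without jQuery, using setTimeout as a stand-in for the asynchronous $.post (a hypothetical `fakePost`, purely for illustration):

```javascript
// Stand-in for $.post: it invokes the callback asynchronously,
// after the current run of synchronous code has finished.
function fakePost(callback) {
  setTimeout(function () { callback('1'); }, 0);
}

var order = [];
fakePost(function (response) {
  order.push('callback ran, response=' + response);
});
order.push('code after the post ran first');

// Give the fake request time to finish, then show the order of events.
setTimeout(function () { console.log(order); }, 20);
```

The line after the "post" always runs before the callback, which is why blocking on an alert() can mask a bug in whatever runs next.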
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Array expression syntax error
I am trying to create an array with 5 data strings in it, each comprised of three 0's and a lowercase letter. When I go to compile my code it gives me an error for that line saying the expression syntax is wrong. Is there something I am missing here? The line in the code is exactly as below.
segment .data
my_data db 000a, 000c, 000e, 000f, 000g ;skipped b,d due to use as binary and decimal notation
segment .text
code
Edit: I have tried using db, dd, dw, dq to state the size; none have worked. I also tried noting the number of elements in the array.
A:
Strings are defined between quotes. Below is an array of 7 strings, including "b" and "d":
my_data db "000a", "000b", "000c", "000d", "000e", "000f", "000g"
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Hypothetical question of prospective pet owners
This question by a prospective puppy owner got put on hold even after being cleaned up and focused on one particular aspect, because it is "hypothetical" rather than "a problem the OP actually faces". While this is true (the OP doesn't have a dog, so he doesn't have a problem with it biting), I would love to encourage non-pet owners to come to this site and ask questions that help them make an informed decision about whether and what pet to get. So many problem behaviours can be avoided if people start out with the right knowledge rather than trying to correct them after they manifest.
I appreciate that there is a possibility for this site to get flooded with hypothetical questions, but I'd like to believe that we can prevent that from happening by ensuring that questions are of interest to not just the OP but a wider audience. I'm confident we can come up with some requirements/benchmarks as to what constitutes a question from a non-pet owner that is on topic and useful to visitors of this site. As it stands, I don't see why a person thinking about getting a dog, and anxious about its biting behaviour, cannot ask how to teach a puppy bite inhibition.
A:
I agree with your stance on Hypothetical questions. I think that as long as they are squarely rooted in reality and is a situation that pet owners face, or could realistically face, the question should be allowed.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
codeigniter form validation with define CONSTANT
i define a constant in /application/config/abc.php
define('MINI_LENGTH', 5);
then in the /application/controllers/abc.php
$config_form = array(array(
'field' => 'name',
'label' => 'First name',
'rules' => 'trim|required|min_length[MINI_LENGTH]'),
it is not working and i got an error message :
"The xxx field must be at least MINI_LENGTH characters in length."
which part did i missed out or done wrongly? thanks.
A:
Try
$config_form = array(array(
'field' => 'name',
'label' => 'First name',
'rules' => 'trim|required|min_length['.MINI_LENGTH.']'),
I haven't tested it though
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Array editing not working correctly
OK so here's my code to edit a specific entry within the array, and the array layout is below.
$counter = 0;
foreach($_SESSION['cart'] as $listitem){
if ($listitem[0] == $_POST['product']){
if ($listitem[1] <= $_POST['remove']){
$remove = array($listitem[0], 0);
$_SESSION['cart'][$counter] = $remove;
} else {
$result = $listitem[1] - $_POST['remove'];
$remove = array($listitem[0], $result);
$_SESSION['cart'][$counter] = $remove;
}
}
$counter = $counter++;
}
Here's my $_SESSION['Cart'] Array layout
Array(
- [0] => Array ( [0] => 8 [1] => 0 )
- [1] => Array ( [0] => 10 [1] => 0 )
- [2] => Array ( [0] => 8 [1] => 1 )
)
Either my understanding of the array is wrong with this line of code:
$_SESSION['cart'][$counter]
Or my counter will not count:
$counter = $counter++;
since the only value it keeps editing the first entry [0]
Can anybody see where I've gone wrong?
A:
$counter = $counter++ will do nothing.
$counter++ increments the value of $counter, but evaluates to its current value (the one it had before incrementing). That way, you're assigning $counter its old value right back, so the increment is lost and the counter never advances.
Simply do $counter++ instead.
(Additional info: there's also the pre-increment operator, ++$counter, which increments the variable and returns it new value.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Network configuration: Preventing my computer being used as a proxy?
I recently installed an extension for Chrome called Hola (skip to the next paragraph if you know it). It calls itself a VPN, but AFAIK it is basically a proxy pool. You can choose a country from a list on a per-website basis, and your connection to that website comes from an IP in the selected country from then on. Meanwhile, you serve as a proxy for other users.
I tried using it at my uni but it was stuck at "initializing", and after a few tests (different browsers, different computers) I could only conclude that it was being blocked somehow. Which made me wonder: My computer is in a vulnerable position if it can be used by others as a proxy, possibly without my knowledge. So I was wondering which aspect of my network I need to configure in order to avoid this kind of thing. I am currently using this script to generate my firewall rules (iptables):
# Flush all rules
iptables -F
iptables -X
# Allow unlimited traffic on localhost (breaks MPI programs otherwise)
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A FORWARD -o lo -j ACCEPT
# Allow SSH traffic
iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
# Allow incomming traffic from estabilished and related connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Policy: Allow outgoing, deny incoming and forwarding
iptables -P OUTPUT ACCEPT
iptables -P INPUT DROP
iptables -P FORWARD DROP
I imagined the drop policy in "forward" would be enough, but it doesn't seem so. Is there anything I can do at the firewall level to block things like Hola? If not, what should I be doing?
A:
Your question of why the forward rule doesn't work is essentially a networking question, so I'll explain using an analogy. Imagine there are three people involved in the sending of a letter - Alice, Bob, and you. Alice wants to get a message to Bob, but using you as a middleman (proxy). Alice could use one of two techniques:
Method 1: Write a letter containing the message, stick it in an envelope, and address the envelope to Bob. However, instead of giving the envelope directly to Bob, she drops it off on your doorstep. She hopes you'll be kind enough to find it, recognize that the envelope isn't addressed to you, and find Bob to give it to him.
Method 2: Write a letter, to you, that says: "Can you find Bob and tell him [message]?" Stick the letter in an envelope, and address the envelope to you. You (or perhaps the Hola app running on your computer) open the envelope, read the letter, and tell Bob the message.
Firewalls are rather dumb, in that they can only understand basic information like Source IP, Destination IP, port, and some other bits of metadata. They can't actually understand the contents of the packets. It's analogous to only being able to look at the outside of the envelope, without being able to read the letter inside. So even though both of the above methods would result in the same outcome (in essence, Alice getting a message to Bob using you as a proxy), the forward chain only recognizes the first one as forwarding. As far as the firewall is concerned, in the second case Alice and you are the only parties involved in the conversation. It has no idea of Bob's involvement, as it would need to "read" the letter to know that.
As far as blocking Hola, it seems that Hola traffic can be detected, but you'll need a tool that can "read the letter," so to speak, by performing deep packet inspection and application layer filtering. Usually this is accomplished using an intrusion detection system as opposed to a basic firewall, and I suspect it's how your uni is blocking Hola. If you don't want your computer used as a proxy, your best bet would be to just not use Hola.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
React-router with tabs, how to set up routes
I have the following tabs set up in the AdministratorComponent component of my app. I am struggling with routes. I would like the route https://example.com/xyz/publish/create_news_item to go to the Publish tab below and show the <CreateNewsItem /> component. And https://example.com/xyz/publish/index should go to the Publish tab below and show the <PublishIndex /> component.
<div className="container">
<ul className="nav nav-tabs">
<li className="active"><a data-toggle="tab" href="#menu1">Menu1</a></li>
<li><a data-toggle="tab" href="#publish">Publish</a></li>
<li><a data-toggle="tab" href="#analytics">Analytics</a></li>
</ul>
<div className="tab-content">
<div id="menu1" className="tab-pane fade">
<h3>Menu 1</h3>
<p>Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p>
</div>
<div id="publish" className="tab-pane fade">
</div>
<div id="analytics" className="tab-pane fade">
<Analytics />
</div>
</div>
</div>
<div>
{this.props.children}
</div>
</div>
Here are the routes I have so far:
export default (
  <Route path="/xyz/index" component={AdministratorComponent} >
    <Route path="/xyz/create_news_publication" component={AdministratorComponent} />
</Route>
);
This does not work. I thought maybe I would put code into the AdministratorComponent which would look at the window.location.href, activate the Publish tab, and put in either <PublishIndex /> or <CreateNewsItem /> depending on the path. But I can’t get started as the above path in the url just causes a bunch of 404s. What is the correct way to do this?
A:
<Route path="/">
<Route path="/index" component={AdministratorComponent}/>
<Route path="/create_news_publication" component={AdministratorComponent}/>
</Route>
would be in routes.js, for example. The Route component comes from react-router.
To switch routes, you need to use the Link component from react-router.
To have text (for example) linking to another page, it would be like this:
<Link to={`/create_news_publication`}>click to go to create_news_publication!</Link>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
STL vector containing vector causing segfault
The following code causes a segfault when I try to issue my push_back call. What am I doing wrong?
#include <iostream>
#include <vector>
#include <string>
using namespace std;
int main() {
std::string * foo = new std::string("hello world");
cout << *foo << endl;
std::vector<std::vector<std::string *> > my_vecs;
my_vecs[0].push_back(foo); // segfaults
cout << "trying to print my_vecs size of " << my_vecs.size() << " but we never reach that point due to segfault " << endl;
return 0;
}
I'm pretty sure I'm violating one of the contracts for using vector, as the problem is surely not with the STL implementation.
A:
The outer vector must first be explicitly grown, before one can push to its elements.
This may be a little surprising since STL map's automatically insert their keys. But, it's certainly the way it is.
#include <iostream>
#include <vector>
#include <string>
using namespace std;
int main() {
const int DESIRED_VECTOR_SIZE = 1;
std::string * foo = new std::string("hello world");
cout << *foo << endl;
std::vector<std::vector<std::string *> > my_vecs;
for (int i = 0; i < DESIRED_VECTOR_SIZE; ++i) {
std::vector<std::string *> tmp;
my_vecs.push_back(tmp); // will invoke copy constructor, which seems unfortunate but meh
}
  my_vecs[0].push_back(foo); // no longer segfaults
cout << "now able to print my_vecs size of " << my_vecs.size() << endl;
return 0;
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Configuration of Jobs getting updated by System user on Jenkins
We are using the Jenkins enterprise version. Whenever we update job configs, there are cases where the config is then updated again by the System user, reverting it to what it was earlier.
We are unable to understand what process is doing that.
Thank you for the help in advance.
I'm unable to attach a screenshot to give an idea of what is happening as my reputation is not high enough.
A:
Most likely, the reason is the fact that your job includes a "Parameterized Jenkins Pipeline" stage (thanks to @rjohnston for the "Jenkinsfile" comment above, which is related to this). In this case, once the parameters code "is included at the top level of the pipeline script, any pipeline execution resets the job’s parameters to the specified values" (as is pointed out in this article: Parameterized Jenkins Pipelines).
So, you need to change the config parameters not on the job configuration page itself, but in the pipeline script, the relative path to which you can find in the "Script Path" field of the "Pipeline" section of the "Config" page.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Validating Close method and Windows Close button C#
I am having a method like MainPage_FormClosing(object sender, FormClosingEventArgs e) and I am using Close method to close the form.
The this.Close() method also triggers the MainPage_FormClosing method.
I just want to perform some function specifically when the user click on Form Windows Close Button.
I have seen some other questions here that use something like String.Equals((sender as Button).Name, @"CloseButton") to validate this.
The sender is always null for me
How can I validate this ?
A:
If you can't use e.CloseReason, the simplest solution would be to use a flag - have a form-level boolean variable that will only change its state if you are closing the form in code, and check it in the form closing event handler. Something like this will do:
private bool _isClosedFromCode = false;
...
private void CloseForm()
{
_isClosedFromCode = true;
Close();
}
...
private void MainPage_FormClosing(object sender, FormClosingEventArgs e)
{
if(_isClosedFromCode)
{
// do your stuff here
}
_isClosedFromCode = false;
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using jQuery to assign an active class to an area tag (image map)
Is there a way, through jQuery, to assign an active class to an area tag within an a map?
I have a bunch of areas defined like this:
<map name="mappy">
<area shape="rect" coords="162,105,179,136" href="#" title="Unique Title 1" alt="Unique Title 1" />
<area shape="rect" coords="205,72,222,101" href="#" title="Unique Title 2" alt="Unique Title 2" />
</map>
What I need to figure out is if it is possible to add a some jQuery that sniffs out the title or alt tag and applies an active class to the area if there is a match.
Something like... if title="Unique Title 1" then add class="active" to area. Is this possible?
A:
You can use the attribute-equals selector to find it and .addClass() to do the actual adding, like this:
$("area[title='Unique Title 1']").addClass("active");
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to know which key a song is written in with the same key signature?
How do you know which key a song is written in when several keys share the same key signature? For example, the C Lydian scale has the same key signature as G major (G Ionian) AND E (natural) minor (E Aeolian), which makes it pretty difficult. Sometimes you can tell if, in the bass staff, there are chords in a specific key, for example C major: CEG - CEG. But this isn't always the case. So how do you do it?
A:
With no key sig, the piece could be in one of several keys, C maj. and A minor being the most likely. What clues are there? In Am, there will likely be some G# notes.
However, it may also be using D Dorian, E Phrygian, F Lydian, G Mixolydian, or, unlikely, B Locrian. The melody would centre around the appropriate note, and would feel at rest on that note. For example if it was D Dorian, the last chord may well be D minor, and last note D.
That apart, there's no way of telling - unless it's announced at the top! The accompanying chords will all come from the same set, with perhaps the additional E major chord in Am, which is where the G# came from. Otherwise, it's usually academic.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I make an array of only Strings from a txt file with Strings and doubles?
Here's the txt file: http://txt.do/5w3em
I just need the Strings (EXCLUDING THE COMMENTED OUT ONES), the M's and B's, all on separate lines from the text file and none of the doubles. How do I make an array list to store them all?
I tried this
List<String> trainingDatasetStrings = new ArrayList<String>();
while(inFile.hasNextLine()){
String line = inFile.nextLine();
String[] words = line.split(",");
for(int i = 11; i < words.length; i++)
trainingDatasetStrings.add(words[i]);
}
But it won't help.
A:
Try this.
List<String> result = Files.readAllLines(Paths.get(fileName)).stream()
.filter(s -> !s.startsWith("//"))
.flatMap(s -> Stream.of(s.split(",")))
.filter(s -> !s.matches("\\d+(\\.\\d*)?"))
.collect(Collectors.toList());
System.out.println(result);
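For illustration only, the same filtering logic can be sketched in Python (the `lines` list stands in for the file contents, and the numeric regex mirrors the Java version):

```python
import re

# Stand-in for the file contents; in practice read them with open(...).readlines()
lines = ["// a commented-out line", "12.5,M,3.1,B", "7,B"]

result = [
    tok
    for line in lines
    if not line.startswith("//")              # drop commented-out lines
    for tok in line.strip().split(",")
    if not re.fullmatch(r"\d+(\.\d*)?", tok)  # drop the doubles
]
# result == ["M", "B", "B"]
```

The structure is the same as the stream pipeline above: filter out comment lines, split each remaining line on commas, and keep only the tokens that are not numbers.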
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why sifr text's swf takes more width than actual text?
I think if we could match the swf size to the original text's width, then rendering would be faster.
A:
The font-replacement flash generated by sIFR takes the same space as the replaced HTML element. So if for example the text you are replacing is inside of an h1 tag, sIFR will take up the whole space of that h1 element and not the space of the text inside of it.
I guess the element that you are replacing has a width of 100%. You can set a border on it (style="border: solid 1px black;") so you can see the bounds of the element.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is Kylo Ren more force capable than Snoke?
Kylo Ren used the Force to remotely turn and activate Luke's lightsaber and cut Snoke in half.
It has been stated over and over again, Snoke can read others' mind and feelings, how could Snoke not see Kylo's move?
Is Kylo so force-capable that he could hide his true feelings from Snoke?
A:
Kylo Ren is a very strong force user. Luke says something to the effect to Rey:
I wasn't frightened enough when I came across such power the first time
We also see him perform other 'tricks' that have never been seen in Star Wars before, like stopping a blaster bolt in mid air for more than a minute.
However I don't think we've seen what the powers of Snoke are, so cannot judge whether he or Kylo Ren are more capable.
The key thing we see is that Kylo Ren disguises his feelings about what he is about to do. You can see Snoke (apparently) reading Kylo's mind - repeating out loud that he is turning the lightsabre and is about to kill his target. Kylo does this acting out turning the lightsabre in his hands as he stands in front of Rey, while turning the sabre to the side of Snoke with the Force.
He is gambling that Snoke's attention is on what he is seeing in front of him. He is gambling that Snoke will assume the feelings he is reading relate to Rey's imminent death, and overlooks the possibility that his apprentice is planning to kill him.
So it's not that Kylo is so strong with the Force that he can disguise his thoughts completely. He uses his powers and misdirection to achieve his aims.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I open some nice looking UberChests?
Just started playing the game and I found some nice looking UberChests(tm).
The problem is that I cannot open them. Any ideas?
A:
To open UberChests you have to find a lever somewhere in the dungeon, which would open the chest for you.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Folder not created on first launch of a mobile app
Hi all! The problem: on the first launch of the app the folder is not created; if the app is restarted, the folder is created. If storage permission is granted before the first launch, the folder is also created. I use TedPermission to request the permission (I have tried the standard method too).
My code:
...
private static String FOLDER = "NewFolder";
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
PermissionListener permissionlistener = new PermissionListener() {
@Override
public void onPermissionGranted() {}
@Override
public void onPermissionDenied(List<String> deniedPermissions) {}
};
TedPermission.with(this)
.setPermissionListener(permissionlistener)
.setPermissions(Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.WRITE_EXTERNAL_STORAGE)
.check();
createFolder(); // create the folder
...
}
private File createFolder() {
File folder = new File(Environment.getExternalStorageDirectory(), FOLDER);
if (folder.exists())
return folder;
if (folder.isFile())
folder.delete();
if (folder.mkdirs())
return folder;
Environment.getExternalStorageDirectory();
return folder;}
The app does get the storage permission, but it only creates the folder on the second launch.
I have also tried variants such as:
File nfile=new File(Environment.getExternalStorageDirectory()+"/NewFolder");
nfile.mkdir();
and
if (!Environment.getExternalStorageState().equals(Environment.MEDIA_MOUNTED)){
Log.d("MyApp", "No SDCARD");
} else {
File directory = new File(Environment.getExternalStorageDirectory()+File.separator+"NewFolder");
directory.mkdirs();
}
Everything is declared in the manifest as well:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
package="ru.asd.dsa">
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
A:
That is expected. The write-permission request is non-blocking (asynchronous): you fire off the permission request and immediately "fall through" into createFolder(), which naturally fails, since the permission has not been granted yet.
You need to move the createFolder() call into the PermissionListener.onPermissionGranted() branch.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
HTML Button in iFrame does not work properly on iPad
I have a web application in which I have a div element with an onclick javascript action. This web application works fine on iPads and desktops alike.
When it is launched within an iFrame on an iPad, however, all of the sudden, my clicks/taps are rarely and inconsistently acted upon. When running in an iFrame on a desktop browser, I do not see this behavior.
Has anyone seen this type of behavior before?
A:
I'm not sure about your exact situation, but I was having a similar problem when my button had a "mouseenter" event trigger bound to it. The mouseenter would be called on the first "tap" and the button on the second, because of the way the iPad uses those two events.
My solution was to use the browser detection tool from http://detectmobilebrowsers.com/ and set a var ismobile to true or false, depending on whether the browser was mobile or not; then I used an if statement to unbind my mouseenter immediately if the browser was mobile. You do have to modify the http://detectmobilebrowsers.com/ code for iPad.
Hope this helps!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
cp -r .
I run the command cp -r <Folder Name> .<Folder Name>. It has done something, but I cannot find it.
Example:
cp -r Agent .Agent.old # (There is a dot symbol on the 2nd folder name)
A:
Prefixing a file name with a dot (.) usually marks it as hidden. Your folder was copied, but it is hidden for the same reason.
Try listing hidden items with:
ls -la .Agent.old
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Create db user with limited privileges
I have to create a SQL Server 2008 database user that can backup and restore databases but NOT execute INSERT, UPDATE, DELETE.
Is it possible or the two operations are incompatible?
A:
You will need to assign the db_backupoperator role to the login(s) to be able to do this.
They will be able to see the objects, but not the data in them or the code behind them.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I get the true negatives from y_true and y_pred?
Is it possible to get true negative from y_true and y_pred tensors in Keras?
I know we can get true positives from the following code:
true_positive = K.sum(y_true*y_pred)
How to do the same for the true negative?
A:
Here's an example of how you can extract the True Positive and the True Negative values of your confusion matrix:
def confusion(y_true, y_pred):
y_pred_pos = K.round(K.clip(y_pred, 0, 1))
y_pred_neg = 1 - y_pred_pos
y_pos = K.round(K.clip(y_true, 0, 1))
y_neg = 1 - y_pos
tp = K.sum(y_pos * y_pred_pos) / K.sum(y_pos)
tn = K.sum(y_neg * y_pred_neg) / K.sum(y_neg)
return {'true_pos': tp, 'true_neg': tn}
In case you are facing some NaN values in your output, try adding an epsilon to the denominators which will solve your problem.
Answer came from this link.
Hope it helped you.
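For intuition, here is the same confusion-matrix arithmetic in plain Python (no Keras; `round(min(max(p, 0), 1))` plays the role of `K.round(K.clip(p, 0, 1))`):

```python
def confusion(y_true, y_pred):
    # Threshold predictions to 0/1, mirroring K.round(K.clip(y_pred, 0, 1))
    y_pred_pos = [round(min(max(p, 0), 1)) for p in y_pred]
    y_pred_neg = [1 - p for p in y_pred_pos]
    y_pos = [round(min(max(t, 0), 1)) for t in y_true]
    y_neg = [1 - t for t in y_pos]
    # Element-wise products count the matches; dividing normalizes to rates
    tp = sum(a * b for a, b in zip(y_pos, y_pred_pos)) / sum(y_pos)
    tn = sum(a * b for a, b in zip(y_neg, y_pred_neg)) / sum(y_neg)
    return {'true_pos': tp, 'true_neg': tn}

print(confusion([1, 1, 0, 0], [0.9, 0.2, 0.1, 0.8]))
# → {'true_pos': 0.5, 'true_neg': 0.5}
```

The Keras version does exactly this, just on tensors instead of lists.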
|
{
"pile_set_name": "StackExchange"
}
|
Q:
unable to set proper scope for variable within a JavaScript promise
I've come across a weird issue where a new variable seems to be created in local scope even though it is defined outside.
In the code below,
after I call buildMeta() and check the contents of "data", it is always empty,
implying that it's not being modified at all, even though I've specifically targeted "that.data", where "that" refers to the class' object.
I'd appreciate it if anyone would point out what I am doing wrong.
class meta {
constructor(files) {
if(!files) throw Error("files not specified");
this.data = {};
this.ls = files;
}
buildMeta() {
var that = this;
for(let i = 0; i < that.ls.length; i++) {
mm.parseFile(that.ls[i]).then(x => {
var info = x.common;
that.data[info.artist] = "test";
}).catch((x) => {
console.log(x);
});
}
}
}
const mm = new meta(indexer); // indexer is an array of file paths
mm.buildMeta();
console.log(mm.data);
A:
You are logging mm.data before parseFile has finished. Your code implies that it returns a promise, so your insertion into that.data will happen after your console.log(mm.data) executes.
You need to return a promise from buildMeta, so that you can do...
const mm = new meta(indexer);
mm.buildMeta().then(() => {
console.log(mm.data);
})
Here's a buildMeta that should do what you need. This returns a promise that waits for all of the parseFile invocations to do their work and update this.data...
buildMeta() {
return Promise.all(this.ls.map(f => mm.parseFile(f).then(x => {
var info = x.common;
this.data[info.artist] = "test";
})))
}
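The same ordering pitfall exists in any async framework. As an illustration only (not part of the original question), here is the gather-then-read pattern in Python's asyncio, with parse_file as a hypothetical stand-in for mm.parseFile:

```python
import asyncio

class Meta:
    def __init__(self, files):
        self.data = {}
        self.ls = files

    async def parse_file(self, f):
        # Hypothetical stand-in for mm.parseFile: pretend to do async I/O
        await asyncio.sleep(0)
        return {"artist": f.upper()}

    async def build_meta(self):
        # Wait for every parse to finish before anyone reads self.data,
        # mirroring the Promise.all in the answer above.
        results = await asyncio.gather(*(self.parse_file(f) for f in self.ls))
        for info in results:
            self.data[info["artist"]] = "test"

m = Meta(["a.mp3", "b.mp3"])
asyncio.run(m.build_meta())
print(m.data)  # → {'A.MP3': 'test', 'B.MP3': 'test'}
```

Logging m.data before awaiting build_meta() would print an empty dict, which is exactly the bug in the JavaScript version.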
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Select on empty table but still get column names
I want to do a SELECT on an empty table, but I still want to get a single record back with all the column names. I know there are other ways to get the column names from a table, but I want to know if it's possible with some sort of SELECT query.
I know this one works when i run it directly in MySQL:
SELECT * FROM cf_pagetree_elements WHERE 1=0;
But I'm using PHP + PDO (FETCH_CLASS). This just gives me an empty object back instead of a row with all the column names (with empty values). So for some reason that query doesn't work with PDO FETCH_CLASS.
$stmt = $this->db->prepare ( $sql );
$stmt->execute ( $bindings );
$result = $stmt->fetchAll ( \PDO::FETCH_CLASS, $class );
print_r($result); // Empty object... I need an object with column names
Anyone any idea if there's another method that i can try?
A:
Adding on to what w00 answered, there's a solution that doesn't even need a dummy table
SELECT tbl.*
FROM (SELECT 1) AS ignore_me
LEFT JOIN your_table AS tbl
ON 1 = 1
LIMIT 1
In MySQL you can change WHERE 1 = 1 to just WHERE 1
A:
To the other answers who posted about SHOW COLUMNS and the information scheme.
The OP clearly said: "I know there are other ways to get the column names from a table, but i want to know if it's possible with some sort of SELECT query."
Learn to read.
Anyway, to answer your question; No you can't. You cannot select a row from an empty table. Not even a row with empty values, from an empty table.
There is however a trick you can apply to do this.
Create an additional table called 'dummy' with just one column and one row in it:
Table: dummy
dummy_id: 1
That's all. Now you can do a select statement like this:
SELECT * FROM dummy LEFT OUTER JOIN your_table ON 1=1
This will always return one row. It does however contain the 'dummy_id' column too. You can however just ignore that of course and do with the (empty) data whatever you like.
So again, this is just a trick to do it with a SELECT statement. There's no default way to get this done.
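The trick is easy to verify with an in-memory SQLite database from Python: the LEFT JOIN always yields one all-NULL row, and the column names come back in the cursor metadata.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (id INTEGER, name TEXT)")  # empty table

cur = conn.execute(
    "SELECT tbl.* FROM (SELECT 1) AS ignore_me "
    "LEFT JOIN your_table AS tbl ON 1 = 1 LIMIT 1"
)
columns = [d[0] for d in cur.description]  # names survive even with no rows matched
row = cur.fetchone()
print(columns, row)  # → ['id', 'name'] (None, None)
```

In PDO the same idea applies: the driver reports column metadata for the result set even though every value in the single returned row is NULL.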
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I check whether the hour of day has been filled in by the user?
I have 2 dialogs, a date picker and a time picker, that run one after another, so I am trying to write a check for whether the time picker was set, or whether Cancel was pressed and only the date picker was set, because I need it later to set an alarm... I will use the same check for the date as well...
//so here is what i have in mind
public void funk2()
{
if(mCalendar.isSet(Calendar.HOUR_OF_DAY))
{
Toast.makeText(dodadi.this,"Успешно е внесено2z! ",Toast.LENGTH_SHORT).show();
}
}
A:
if (mData.getText().toString().equals("SetAlarm")) {
} else {
    mData.setText(mData.getText() + " " + timeForButton);
    if (timeForButton != null) { n = timeForButton; }
}
int m = mCalendar.get(Calendar.MONTH);
if (n != null) {
    if (m != 0) {
        Toast.makeText(dodadi.this, "Успешно е внесено2z! ", Toast.LENGTH_SHORT).show();
    }
}
Try it like this: check a string, not an integer. First check that the time you want to put in the calendar is not null, then set it as the n variable.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to do looping in SQL?
I've never attempted to loop in SQL. I have done it in PHP, but mainly by copying examples (although I do understand the concept).
My Question is....
I have a report where I generate headings depending on the contents of three fields: Division, Department and Branch. I create the headings manually by doing...
sum(case when Division = 'Property' and Department = 'High Value' and Branch = 'London' then Net end) as 'Prop|HighValue|Lon',
and I have to do this for every combination of the three fields, which is a: Time consuming in itself and b: means that if a category is added, I need to then add a line of code to my view.
Is there a way of looping through the fields to dynamically create 'every Branch in the 1st record in Department for the 1st record in Division etc etc?
plus - is there a way to exclude a specific combination (given that not every combination exists in reality)?
Additional info....
'Division' is a column that contains 'Property', 'Litigation','Private Client'
'Department' is a column that contains 'High Value', 'Low Value'
'Branch' is a column that contains 'London', 'Manchester', 'Peterborough'
these are grouped columns that show summarised billing information (in the 'Net' column'
This is fine for grouping the categories downwards, what I want is a column for each combination of the three fields (eg the case statement above creates a column for 'Property|High Value|London' - I'm wondering if I can create that dynamically with a loop?
A:
What you are asking for is known as a crosstab, pivot or xtab. How you do it is backend-dependent. What you are currently doing is the oldest-style way of doing it and would be cumbersome (a poor man's pivot). E.g., for PostgreSQL you could simply use tablefunc:
tablefunc
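Conceptually, a dynamic pivot just builds one output column per combination that actually occurs in the data, which also answers the exclusion question: combinations that never appear simply produce no column. A small Python sketch of that idea, with made-up rows standing in for the table:

```python
# Hypothetical (division, department, branch, net) rows standing in for the table
rows = [
    ("Property", "High Value", "London", 100),
    ("Property", "High Value", "London", 50),
    ("Litigation", "Low Value", "Manchester", 70),
]

pivot = {}
for division, department, branch, net in rows:
    key = "{}|{}|{}".format(division, department, branch)  # one column per combo
    pivot[key] = pivot.get(key, 0) + net                   # sum(Net) per combination

print(pivot)
# → {'Property|High Value|London': 150, 'Litigation|Low Value|Manchester': 70}
```

A real crosstab feature (tablefunc, PIVOT, or dynamic SQL built from SELECT DISTINCT over the three columns) does the same grouping, but emits the keys as column headers instead of dictionary keys.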
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Overridden Controller in Magento, One Function not working
I'm attempting to get my contact form to redirect to the homepage after form submission. I set up my module and can confirm it is working. In the code below, I get the 'Pre-dispatched' and 'Index Action' log messages but do not get 'Post Action' and, as you can expect, it also does not redirect me to the homepage when it is complete. I do receive the contact e-mail properly. Can anyone tell me why the first two functions are working correctly and postAction() is not?
I copied all of the code from the original controller into my controller for troubleshooting purposes. Everything is default except the addition of log messages and the redirects at the bottom.
class MyCompany_Contacts_IndexController extends Mage_Contacts_IndexController
{
const XML_PATH_EMAIL_RECIPIENT = 'contacts/email/recipient_email';
const XML_PATH_EMAIL_SENDER = 'contacts/email/sender_email_identity';
const XML_PATH_EMAIL_TEMPLATE = 'contacts/email/email_template';
const XML_PATH_ENABLED = 'contacts/contacts/enabled';
public function preDispatch()
{
parent::preDispatch();
Mage::log('Pre-dispatched');
if( !Mage::getStoreConfigFlag(self::XML_PATH_ENABLED) ) {
$this->norouteAction();
}
}
public function indexAction()
{
Mage::log('Index Action.');
$this->loadLayout();
$this->getLayout()->getBlock('contactForm')
->setFormAction( Mage::getUrl('*/*/post') );
$this->_initLayoutMessages('customer/session');
$this->_initLayoutMessages('catalog/session');
$this->renderLayout();
}
public function postAction()
{
parent::postAction();
Mage::log('Post Action.');
$post = $this->getRequest()->getPost();
if ( $post ) {
$translate = Mage::getSingleton('core/translate');
/* @var $translate Mage_Core_Model_Translate */
$translate->setTranslateInline(false);
try {
$postObject = new Varien_Object();
$postObject->setData($post);
$error = false;
if (!Zend_Validate::is(trim($post['name']) , 'NotEmpty')) {
$error = true;
}
if (!Zend_Validate::is(trim($post['comment']) , 'NotEmpty')) {
$error = true;
}
if (!Zend_Validate::is(trim($post['email']), 'EmailAddress')) {
$error = true;
}
if (Zend_Validate::is(trim($post['hideit']), 'NotEmpty')) {
$error = true;
}
if ($error) {
throw new Exception();
}
$mailTemplate = Mage::getModel('core/email_template');
/* @var $mailTemplate Mage_Core_Model_Email_Template */
$mailTemplate->setDesignConfig(array('area' => 'frontend'))
->setReplyTo($post['email'])
->sendTransactional(
Mage::getStoreConfig(self::XML_PATH_EMAIL_TEMPLATE),
Mage::getStoreConfig(self::XML_PATH_EMAIL_SENDER),
Mage::getStoreConfig(self::XML_PATH_EMAIL_RECIPIENT),
null,
array('data' => $postObject)
);
if (!$mailTemplate->getSentSuccess()) {
throw new Exception();
}
$translate->setTranslateInline(true);
// Mage::getSingleton('customer/session')->addSuccess(Mage::helper('contacts')->__('Your inquiry was submitted and will be responded to as soon as possible. Thank you for contacting us.'));
$this->_redirect('');
return;
} catch (Exception $e) {
$translate->setTranslateInline(true);
// Mage::getSingleton('customer/session')->addError(Mage::helper('contacts')->__('Unable to submit your request. Please, try again later'));
$this->_redirect('');
return;
}
} else {
$this->_redirect('');
}
}
}
config.xml
<?xml version="1.0" encoding="UTF-8"?>
<config>
<modules>
<MyCompany_Contacts>
<version>0.0.1</version>
</MyCompany_Contacts>
</modules>
<frontend>
<routers>
<contacts>
<args>
<modules>
<MyCompany_Contacts before="Mage_Contacts">MyCompany_Contacts</MyCompany_Contacts>
</modules>
</args>
</contacts>
</routers>
</frontend>
</config>
A:
I think I figured this out. I remembered that the post data gets analyzed by Akismet before it gets sent, so it's entirely possible the default Mage_Contacts is already getting extended and the request goes through that module first. Added the logging to postAction() in the Akismet controller and that verified it. Thanks for setting me on the right track.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Email campaigns using Firebase Dynamic Links
Use case: send email to users that currently don’t have the app installed. The email shall contain two links for Google (Android devices) and Apple (for iPhone). When the user taps on the link, launch Google play or Apple store app on the phone pointing to the app so that the user can download.
Can this functionality be implemented with Firebase Dynamic Links? We support Android KitKat and above. What is the minimum OS version the user will need?
A:
Sai, the use-case you are describing is the reason why Firebase Dynamic Links exists. It does not matter how you get your link into the hands of the customers/users. You can use FDL in emails, SMS, iMessage, Facebook, Twitter, etc.
You do not need to specify two links. One Firebase Dynamic Link will work on both Android and iOS. If App is not installed, the link will navigate to AppStore or PlayStore. If App is installed, the link will open the App with deep link.
If opened on desktop, the link will navigate to deep link itself. Also check out "dfl" Desktop fallback link parameter for desktop behavior.
Firebase Dynamic Links on Android supports Ice Cream Sandwich and newer, so KitKat is supported by FDL.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Creation of 1.3 GB file takes only 1 second. How?
I have a folder which contains over 200 files and has a size of over 1.3 GB:
I use Gizmo Drive software to create an .iso file from that folder.
The interesting thing is it takes only 1 or 2 seconds!
I have tried that several times. I even tried to create the .iso file on another volume. Again it takes only 1 or 2 seconds.
I tried to mount the .iso file; everything works fine. I thought that it might be an .iso file referencing the source folder, so I moved the source folder to another place, but no luck. Even copying the produced .iso file takes minutes!
So how come creating the .iso file takes only 1 second?
Do you have any explanation for that?
Notes
All tests conducted on a regular HDD, no SSD.
Using Windows 7 x64, have 16 Gig memory, Core i5 CPU.
I have used sync.exe to flush all file system data to disk just after the .iso file is created, and sync.exe took 14 seconds to flush the data. That means it actually takes 14 seconds to create the .iso file. A quick benchmark on my D: drive shows that it can write the same .iso file from an SSD to my D: drive in 14 seconds, and that confirms the data is in RAM and it takes 14 seconds to flush it.
A:
With 16GB of RAM, you probably have a lot of it free for disk caching. The ISO has most likely just been buffered entirely in RAM by the operating system; it'll be written to disk later, but applications don't have to wait for that.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Having problems with a model provided placeholder for dropdown
So I have two cases that work independantly.
If I know I have model data in the dropdown, I use this code.
@Html.DropDownListFor(model => model.CarID, (SelectList)ViewBag.Cars, htmlAttributes: new { @class = "form-control" })
HOWEVER, the problem with this is that if CarID is null, the first item on the list will be pre-selected in the dropdown.
And in another form, I use this one:
@Html.DropDownListFor(model => model.CarID, "--Select a Car--", (SelectList)ViewBag.Cars, htmlAttributes: new { @class = "form-control" })
My Controller has this to provide the view data:
ViewBag.Cars = new SelectList(db.Units, "CarID", "Name", model.CarID);
Is there ONE of these that I can use where it will have a placeholder ONLY if model data is not present to fill the spot?
A:
In your second approach, you are using the helper method incorrectly.
It should be
@Html.DropDownListFor(model => model.CarID,(SelectList)ViewBag.Cars,
"Select one", new { @class = "form-control" })
This will render the dropdown with a "Select one" option as the first (and default) item if nothing is pre-selected (model.CarID is null). If Model.CarID has a valid value, it will select the corresponding option.
If you absolutely want to remove the "Select one" when Model.CarID is not null, you can write some javascript which executes on the document load (jQuery document ready :) ) to check the selected option and remove it as needed.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to set a column alias as the result of a SQL query?
I need to use result of a SQL query to set column aliases. Please see below script and the result of the script I need to use it as column aliases.
select
convert(varchar,DATEADD(month, -12, dateadd(d,-day(convert(date,dateadd(d,-(day(getdate())),getdate()))),convert(date,dateadd(d,+1-(day(getdate())),getdate())))),107),
convert(varchar,convert(date,dateadd(d,-day(convert(date,dateadd(d,-(day(getdate())),getdate()))),convert(date,dateadd(d,+1-(day(getdate())),getdate())))),107)
I need the answer for my question as soon as possible.
A:
Two solutions are described in the following link: Column alias based on variable
First solution:
Set the alias in a variable
Define the query as an nvarchar containing a reference to the variable.
Execute the query using sp_executesql
SET @column_alias = 'new_title'
SET @sql = 'SELECT keycol, datacol AS ' + @column_alias + ' FROM Foo'
EXEC sp_executesql @sql
Second solution: Rename the column after the execution of the query
INSERT INTO Results
SELECT keycol, datacol
FROM Foo
EXEC sp_rename 'Results.datacol', @column_alias, 'COLUMN'
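The same constraint holds outside T-SQL: a column alias is part of the SQL text, so it must be spliced into the statement (from a trusted value), not passed as a bound parameter. A quick Python/SQLite check of the first approach:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (keycol INTEGER, datacol TEXT)")
conn.execute("INSERT INTO foo VALUES (1, 'x')")

column_alias = "new_title"  # must come from trusted code, never from user input
cur = conn.execute('SELECT keycol, datacol AS "{}" FROM foo'.format(column_alias))
names = [d[0] for d in cur.description]  # aliases show up in the result metadata
print(names)  # → ['keycol', 'new_title']
```

Because the alias is concatenated into the statement, treat it exactly like the @sql string in sp_executesql: validate or whitelist it to avoid SQL injection.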
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to copy a register and do `x*4 + constant` with the minimum number of instructions
I am new to x86 assembly. For example, consider the following task: multiply the contents of ESP by 4 and add 0x11233344, storing the result in EDI.
How do I represent this in x86 assembly with the minimum number of instructions?
push esp
mov edi, 4
mul edi
add edi, 0x11233344
A:
Your asm doesn't make any sense (push esp copies to memory, not another register), and mul edi writes EDX:EAX not edi. It does EDX:EAX = EAX * src_operand. Read the manual: https://www.felixcloutier.com/x86/MUL.html. Or better, use imul instead unless you actually need the high-half output of the full 32x32 => 64-bit multiply.
Also, don't use the stack pointer register ESP to hold temporary values unless you know exactly what you're doing (e.g. you're in user-space, and you've made sure no signal handlers can asynchronously use the stack.) stack-pointer * 4 + large-constant is not something that a normal program would ever do.
Normally you could do this in one LEA instruction but ESP is the only register that can't be an index in an x86 address mode. See rbp not allowed as SIB base?
(The index is the part of an addressing mode that can have a 2-bit shift count applied, aka a scale factor).
I think our best bet is still just to copy ESP to EDI, then use LEA:
mov edi, esp
lea edi, [edi * 4 + 0x11223344]
Or you could copy-and-add with LEA, and then left shift, because the value we're adding has two zeros as its low bits (i.e. it's a multiple of 4). So we can right shift it by 2 without losing any bits.
SHIFTED_ADD_CONSTANT equ 0x11223344 >> 2
lea edi, [esp + SHIFTED_ADD_CONSTANT]
shl edi, 2
The add before left-shifting will produce carry into the top 2 bits, but we're about to shift those bits out so it doesn't matter what's there.
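The carry argument is easy to sanity-check numerically: since 0x11223344 is a multiple of 4, adding the pre-shifted constant and then shifting left by 2 matches x*4 + c modulo 2**32 for every 32-bit x.

```python
MASK = 0xFFFFFFFF  # 32-bit wrap-around, like EDI
c = 0x11223344
assert c % 4 == 0  # low two bits are zero, so c >> 2 loses nothing

for x in [0, 1, 0x7FFFFFFF, 0xDEADBEEF]:
    lea_then_shl = ((x + (c >> 2)) << 2) & MASK  # lea edi,[esp+const]; shl edi,2
    mul_then_add = (x * 4 + c) & MASK            # mov edi,esp; lea edi,[edi*4+c]
    assert lea_then_shl == mul_then_add
print("identity holds")
```

Any bits carried into the top two positions by the add are exactly the bits the shift discards, which is why the two sequences agree.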
This is also 2 uops, and more efficient on AMD Bulldozer-family CPUs (no mov-elimination for GP-integer mov, and where a scaled index costs an extra cycle of latency for LEA). Zen has mov-elimination but I think still the same LEA latencies so both versions are 2 cycle latency. Even "complex" LEA has 2/clock throughput on Zen, or 4/clock for simple LEA (any ALU port).
But less efficient on Intel IvyBridge and later CPUs where the mov can run with zero latency (mov elimination), and the [edi*4 + disp32] addressing mode is still a fast 2-component LEA. So on Intel CPUs with mov-elimination, the first version is 2 front-end uops, 1 unfused-domain uop for an execution unit, and only 1 cycle of latency.
Another 2-instruction option is to use a slower imul instead of a fast shift. (Addressing modes use a shift: even though it's written as * 1 / 2 / 4 / 8, it's encoded in a 2-bit shift-count field in machine code).
imul edi, esp, 4 ; this is dumb, don't use mul/imul for powers of 2.
add edi, 0x11223344
imul has 3 cycle latency on modern x86 CPUs which is pretty good, but is slower on old CPUs like Pentium 3. Still not as good as 1 or 2-cycle latency for mov + LEA, and imul runs on fewer ports.
(Number of instructions is not usually the thing to optimize for; number of uops usually matters more, and latency / back-end throughput. Also code-size in bytes of x86 machine code; different instructions are different lengths.)
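As a sanity check (not part of the original answer), the equivalence of the two LEA sequences can be verified outside of assembly. This is a hypothetical Python sketch that emulates the 32-bit wrap-around arithmetic of the registers:

```python
MASK32 = 0xFFFFFFFF  # emulate 32-bit register wrap-around

def lea_scaled(esp, disp=0x11223344):
    # mov edi, esp ; lea edi, [edi*4 + disp]
    return (esp * 4 + disp) & MASK32

def add_then_shift(esp, disp=0x11223344):
    # lea edi, [esp + (disp >> 2)] ; shl edi, 2
    # valid because disp is a multiple of 4: its low 2 bits are zero
    return (((esp + (disp >> 2)) & MASK32) << 2) & MASK32

# both sequences compute the same 32-bit result for any ESP value,
# including ones that carry into (and out of) the top 2 bits
for esp in (0, 1, 0x7FFFF000, 0xFFFFFFFC):
    assert lea_scaled(esp) == add_then_shift(esp)
```

The carry out of the top bits during the add is exactly what the final mask (the hardware's 32-bit truncation) discards, which is why the trick is safe.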
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I get the tap coordinates on a custom UIButton?
I'm using XCode 4.4 developing for iOS 5 on an iPad and am using the Storyboard layout when creating my custom button.
I have the touch event correctly working and logging but now I want to get the x/y coordinates of the tap on my custom button.
If possible, I'd like the coordinates to be relative to the custom button instead of relative to the entire iPad screen.
Here's my code in the .h file:
- (IBAction)getButtonClick:(id)sender;
and my code in the .m file:
- (IBAction)getButtonClick:(id)sender {
NSLog(@"Image Clicked.");
}
Like I said, that correctly logs when I tap the image.
How can I get the coordinates of the tap?
I've tried a few different examples from the internet but they always freeze when it displays a bunch of numbers (maybe the coordinates) in the log box. I'm VERY new to iOS developing so please make it as simple as possible. Thanks!
A:
To get the touch location you can use another variant of the button action method, myAction:forEvent: (if you create it from Interface Builder, note the "sender and event" option in the Arguments field).
Then in your action handler you can get touch location from event parameter, for example:
- (IBAction)myAction:(UIButton *)sender forEvent:(UIEvent *)event {
NSSet *touches = [event touchesForView:sender];
UITouch *touch = [touches anyObject];
CGPoint touchPoint = [touch locationInView:sender];
NSLog(@"%@", NSStringFromCGPoint(touchPoint));
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I swap rownames in these data?
I have a microarray gene expression matrix like this by weird gene IDs in rows.
> head(mat[1:10,1:5])
> dim(mat)
[1] 39302 76
>
There is matched gene symbol for each of these identifiers in another matrix
>
> dim(matched)
[1] 23107 1
>
How can I have the matched gene symbol alongside the probe identifiers in the row names of my expression matrix, please? The problem is, for one gene symbol we may have several probe identifiers.
I tried
> merged <- merge(mat, matched)
Error: cannot allocate vector of size 6.8 Gb
EDITED
tmp = paste(matched[rownames(mat)],rownames(mat),sep="_")
> rownames(mat) = tmp
EDITED
This is my matrix after matching prob identifiers to gene symbol
> head(array[,1:10,1:5])
GSM482796 GSM482797 GSM482798 GSM482799
1 OR2T6 0.0171 -0.1100 -0.0394 -0.0141
2 EBF1 0.1890 0.0222 0.0832 0.0459
3 DKFZp686D0972 1.9400 0.2530 0.3770 0.8310
4 ATP8B4 -0.1490 0.0690 -0.0637 -0.0527
5 NOTCH2NL 0.1540 -0.3880 0.2160 -0.0812
6 SPIRE1 0.2920 0.1690 0.5500 0.1430
but now for some genes I have several probes. For example, for gene A I have several matched probes, so gene A is repeated. How can I take the mean over the expression values of repeated genes, so that each gene gets a unique value?
A:
There are actually two questions.
The first one is about the memory issue. I believe merge from data.table would solve that issue.
The second one would be aggregating or summarizing the identifiers. As @StupidWolf mentioned, you need to have a rule, summing them up, averaging them, ... For this second part one way is using dplyr:
merged_data %>%
group_by(probe_colum) %>%
summarise_all(mean or sum or ...) %>%
mutate(new_column = paste(probe_column, gene_column, sep = "_"))
The same thing can be done faster with data.table:
merged_data[, lapply(.SD, mean or sum or ..., na.rm=TRUE), by = probe_column ] [, new_column := paste(probe_column, gene_column, sep = "_"),]
.SD means subset of data and your subsets are your groups based on your probes.
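The group-and-average step itself is language-agnostic. As an illustration only (plain Python rather than R, with made-up values), the same logic looks like:

```python
from collections import defaultdict
from statistics import mean

# rows: (gene_symbol, expression_value) -- gene "A" has two probes
rows = [("A", 0.25), ("B", 1.0), ("A", 0.75), ("C", -0.5)]

# group expression values by gene symbol
grouped = defaultdict(list)
for gene, value in rows:
    grouped[gene].append(value)

# one mean expression value per gene
averaged = {gene: mean(values) for gene, values in grouped.items()}
print(averaged)  # {'A': 0.5, 'B': 1.0, 'C': -0.5}
```

Whatever tool you use, the shape of the operation is the same: group by the gene column, then reduce each group with mean (or sum, or whichever rule you choose).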
A:
You need to think about whether to take the average for all probes belonging to one gene, or simply append the gene name to the probe id.
If it is simply adding a gene name, maybe try this below, note I am assuming that all your rownames of mat can be found in matched
# check whether all rownames are in matched
table(rownames(mat) %in% rownames(matched))
# rename the rows
tmp = paste(matched[match(rownames(mat),rownames(matched)),1],rownames(mat),sep="_")
rownames(mat) = tmp
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Should a title be required?
It appears that a question can be posted without a title being supplied (Example). Should this be a required field? Did the OP actually enter that title?
Previous title to the example, "What's your geographic information systems question? Be specific."
A:
It is the default text from the Asking a Question - some users miss this when in first posts.
see below
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SQL injection-proofing TextBoxes
I've found some tutorials on this already, but they aren't exactly what I'm looking for, I can use the following for username fields and password fields
Private Sub UsernameTextBox_KeyPress(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyPressEventArgs) Handles UsernameTextBox.KeyPress
If Char.IsDigit(e.KeyChar) OrElse Char.IsControl(e.KeyChar) OrElse Char.IsLetter(e.KeyChar) Then
e.Handled = False
Else
e.Handled = True
End If
End Sub
But for an email field how would I go about protecting against SQL injection for that textbox, as some email accounts have periods or dashes in them?
Update:
Below is an example of an insert statement I use.
Dim con As SqlConnection
con = New SqlConnection()
Dim cmd As New SqlCommand
Try
con.ConnectionString = "Data Source=" & Server & ";Initial Catalog=" & Database & ";User ID=" & User & ";Password=" & Password & ";"
con.Open()
cmd.Connection = con
cmd.CommandText = "INSERT INTO TB_User(STRUserID, password, Email) VALUES('" & UsernameTextBox.Text & "', '" & MD5Hash(PasswordTextBox.Text) & "', '" & EmailTextBox.Text & "')"
cmd.ExecuteNonQuery()
Catch ex As Exception
MessageBox.Show("Error while inserting record on table..." & ex.Message, "Insert Records")
Finally
con.Close()
End Try
So I need to run this with parametrized queries rather than how I'm doing it now?
A:
Instead of filtering out "invalid" data from user input, consider using parametrized queries and not putting user input directly into your queries; that's very bad form.
To run your current query using parameters, it's pretty easy:
Dim con As New SqlConnection()
Dim cmd As New SqlCommand()
Try
con.ConnectionString = "Data Source=" & Server & ";Initial Catalog=" & Database & ";User ID=" & User & ";Password=" & Password & ";"
con.Open()
cmd.Connection = con
cmd.CommandText = "INSERT INTO TB_User(STRUserID, password, Email) VALUES(@username, @password, @email)"
cmd.Parameters.Add("@username", SqlDbType.VarChar, 50).Value = UsernameTextBox.Text
cmd.Parameters.Add("@password", SqlDbType.Char, 32).Value = MD5Hash(PasswordTextBox.Text)
cmd.Parameters.Add("@email", SqlDbType.VarChar, 50).Value = EmailTextBox.Text
cmd.ExecuteNonQuery()
Catch ex As Exception
MessageBox.Show("Error while inserting record on table..." & ex.Message, "Insert Records")
Finally
con.Close()
End Try
All you have to do is use cmd.Parameters.Add with a parameter name and the right database type (the ones I guessed probably don't match up, so you'll want to change them), then set the value to the value you want used in the query. Parameter names start with an @.
A:
It doesn't depend on the textbox. Don't compose a SQL statement by joining strings like this:
"SELECT * FROM User WHERE UserName=" + tbName.Text + ...
Use stored procedures or parameterized queries and you'll be safe from SQL injection.
When you use parameters, the textbox content is used as a value, so it doesn't matter what it contains.
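The same principle holds in any language or driver, not just VB.NET/SqlClient. As a minimal language-agnostic sketch, here is Python's sqlite3 storing a classic injection payload harmlessly, because it is bound as a value rather than concatenated into the SQL text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb_user (username TEXT, email TEXT)")

# classic injection attempt -- harmless when passed as a parameter
evil = "x'); DROP TABLE tb_user; --"
con.execute("INSERT INTO tb_user (username, email) VALUES (?, ?)",
            (evil, "a@b.com"))

# the table still exists, and the payload was stored as a literal string
row = con.execute("SELECT username FROM tb_user").fetchone()
assert row[0] == evil
```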
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to insert data to the additional field which is inside pivot table with laravel eloquent
My pivot table has additional field...
id | user_id | language_id | level
---------------------------------------
and my code:
User::find($this->userId)->languages()->attach(['language_id' => $lang_id, 'level' => $level]);
but the result is:
id | user_id | language_id | level
---------------------------------------
1 1 1 null
1 1 2 null
actually, second line's language_id must be first line's level...
how can i do it properly like this?
id | user_id | language_id | level
---------------------------------------
1 1 1 2
A:
attach() works a bit differently. The first parameter is the id or an instance of the other model and the second parameter are other pivot fields:
User::find($this->userId)->languages()->attach($lang_id, ['level' => $level]);
As @ceejayoz mentioned you also don't have withPivot() defined in your relationship. That means level won't be available in the result. Change that by adding withPivot() to both sides of the relation:
public function languages() {
return $this->belongsToMany('Language')->withPivot('level');
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to prevent UITableView interaction while UITextField is FirstResponder
Is there a simple way to prevent user interaction while entering text in a UITextField (the textfield is in a cell of the tableView)?
I've already tried this:
- (void)textFieldDidBeginEditing:(UITextField *)textField {
self.tableView.userInteractionEnabled = NO;
}
with the result that the keyboard also stops showing up...
A:
The first thought that comes to mind is adding a transparent UIView overlay that would "mask" everything surrounding your cell, intercepting all the touch events. I suppose it would have to be two overlays - one for above the cell and one for below.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Loop Through String of Words, Return Word With Highest Score According To Character Value in Object - JavaScript
I'm trying to figure out how to solve this kata on CodeWars.
Function high recieves a string and returns the word with the highest "score" according to which letters in the word are present. The letters receive a score based on their position in the alphabet. So a = 1 point, b = 2 points, c = 3 points, and so on.
I think it makes sense to create an object where all of the letters in the alphabet are assigned a value:
If the letter in the word appears in alphabetScore, that word will receive its "points" and continue on to the next letter in the word, increasing the total points of the word.
I have:
function high(string) {
let words = string.split(" ");
let wordScore = 0;
const alphabetScore = {
a: 1,
b: 2,
c: 3,
d: 4,
e: 5,
f: 6,
g: 7,
h: 8,
i: 9,
j: 10,
k: 11,
l: 12,
m: 13,
n: 14,
o: 15,
p: 16,
q: 17,
r: 18,
s: 19,
t: 20,
u: 21,
v: 22,
w: 23,
x: 24,
y: 25,
z: 26
}
let word = words[i];
let wordCount = 0;
//loop through all words in the string
for (let i = 0; i < words.length; i++) {
let word = words[i];
//loop through all characters in each word
for (let j = 0; j < word.length; j++) {
let value = alphabetScore[j];
wordCount += alphabetScore[value];
}
}
return wordCount;
}
console.log(high("man i need a taxi up to ubud"));
And this is returning an error saying
i is not defined
in let word = words[i] - how else would I define a word, then?
If it's possible to solve this Kata with my existing logic (using for-loops), please do so.
EDIT: Changed wordCount = alphabetScore.value++; to wordCount += alphabetScore[value];
EDIT 2: This is now returning NaN
EDIT 3: Latest attempt:
function myScore(input) {
let key = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j",
"k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v",
"w", "x", "y", "z"
];
let bestWord = "";
let bestScore = 0;
let words = input.split(" ");
for (let i = 0; i < words.length; i++) {
let score = 0;
let word = words[i];
for (let j = 0; j < word.length; j++) {
let char = word[j];
score += (key.indexOf(char) + 1);
}
if (score > bestScore) {
bestScore = score;
bestWord = word;
}
}
return bestWord;
}
ReferenceError: high is not defined
at Test.describe._
A:
ran successfully on codewars
let key = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j",
"k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v",
"w", "x", "y", "z"
];
function wordScore(word) {
let score = 0;
for (let j = 0; j < word.length; j++) {
let char = word[j];
score += (key.indexOf(char) + 1);
}
return score;
}
function high(x) {
let bestWord = "";
let bestScore = 0;
words = x.split(" ");
for (let i = 0; i < words.length; i++) {
let word = words[i];
let score = wordScore(word);
if (score > bestScore) {
bestScore = score;
bestWord = word;
}
}
return bestWord;
}
console.log(high("man i need a taxi up to ubud"));
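For comparison only (not part of the accepted solution), the same algorithm fits in a few lines of Python by deriving each letter's score from its character code instead of an alphabet lookup table; max() returns the first word on ties, which matches a "return the earliest word" tie-break:

```python
def high(s):
    # a=1 ... z=26, derived from the character code
    score = lambda word: sum(ord(c) - ord('a') + 1 for c in word)
    return max(s.split(), key=score)

print(high("man i need a taxi up to ubud"))  # taxi
```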
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to check if an array has an element at the specified index?
I know there is array_key_exists() but after reading the documentation I'm not really sure if it fits for this case:
I have an $array and an $index. Now I want to access the $array, but don't know if it has an index matching $index. I'm not talking about an associative array, but an plain boring normal numerically indexed array.
Is there an safe way to figure out if I would really access an $array element with the given $index (which is an integer!)?
PHP may not care if I access an array with an index out of bounds and maybe just returns NULL or so, but I don't want to even attempt to code dirty, so I want to check if the array has the key, or not ;-)
A:
You can use either the language construct isset, or the function array_key_exists : numeric or string key doesn't matter : it's still an associative array, for PHP.
isset should be a bit faster (as it's not a function), but will return false if the element exists and has the value NULL.
For example, considering this array :
$a = array(
123 => 'glop',
456 => null,
);
And those three tests, relying on isset :
var_dump(isset($a[123]));
var_dump(isset($a[456]));
var_dump(isset($a[789]));
You'll get this kind of output :
boolean true
boolean false
boolean false
Because :
in the first case, the element exists, and is not null
in the second, the element exists, but is null
and, in the third, the element doesn't exist
On the other hand, using array_key_exists like in this portion of code :
var_dump(array_key_exists(123, $a));
var_dump(array_key_exists(456, $a));
var_dump(array_key_exists(789, $a));
You'll get this output :
boolean true
boolean true
boolean false
Because :
in the two first cases, the element exists -- even if it's null in the second case
and, in the third, it doesn't exist.
A:
You can easily use isset():
if (isset($array[$index])) {
// array index $index exists
}
And as you have suggested, PHP is not very kind if you try to access a non-existent index, so it is crucial that you check that you are within bounds when dealing with accessing specific array indexes.
If you decide to use array_key_exists(), please note that there is a subtle difference:
isset() does not return TRUE for array
keys that correspond to a NULL value,
while array_key_exists() does.
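The same distinction exists in other languages. As a hypothetical parallel, Python's dict shows the analogous split between "key exists" and "key exists with a non-null value":

```python
a = {123: 'glop', 456: None}

# analogue of array_key_exists(): membership only
assert 123 in a
assert 456 in a          # key exists even though its value is None
assert 789 not in a

# analogue of isset(): key must exist AND hold a non-None value
assert a.get(123) is not None
assert a.get(456) is None     # "isset" would report false here
assert a.get(789) is None     # missing key looks the same as None
```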
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do we physically apply the operators of quantum mechanics on a particle?
What do we have to perform physically that is equivalent to applying those quantum mechanical operators on a state $|\psi\rangle$?
Edit: I have removed the part I was asking regarding measurement because it takes us away from the real question.
A:
Perhaps I misunderstand your question but I would like to make clear that operating on a state with say, the momentum operator is not meant to be the equivalent of measuring the momentum of the system in that state.
Consider, for example, a state that is a superposition of two momentum eigenstates:
$$|\psi\rangle = \frac{1}{\sqrt{2}}\left(\,|p_1\rangle + |p_2\rangle \,\right)$$
If we operate on this state with the momentum operator, we get a different state:
$$\hat p |\psi\rangle = \frac{1}{\sqrt{2}}\left(\,p_1|p_1\rangle + p_2|p_2\rangle\, \right)$$
But note that $\hat p |\psi\rangle$ is a superposition of momentum eigenstates, i.e., operating on the state with the momentum operator did not 'collapse' the state to one or the other momentum eigenstate.
However, if we measure the momentum of the system in this state, we will measure either $p_1$ or $p_2$ and, further, the state of the system, immediately after the measurement, will be the associated eigenstate.
A measurement always causes the system to jump into an eigenstate of
the dynamical variable that is being measured, the eigenvalue this
eigenstate belongs to being equal to the result of the measurement.
P.A.M Dirac in "The Principles of Quantum Mechanics"
Thus, the 'momentum measurement operator' (whatever that is) is not the momentum operator.
Put another way, the result of operating on the state with the momentum operator is determined by the state; the result of the operation is certain.
However, the result of measuring the momentum of the system in this state is not determined. The result will be either $p_1$ or $p_2$ but which value will be measured is not determined by the state.
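The distinction can be made concrete with a toy numerical sketch (plain Python, amplitudes in a dict keyed by momentum eigenvalue; the values are illustrative). Applying the operator rescales each amplitude deterministically and leaves a superposition, whereas a measurement would instead pick a single eigenvalue with Born-rule probability |amplitude|²:

```python
from math import sqrt

# |psi> = (|p1> + |p2>) / sqrt(2), stored as eigenvalue -> amplitude
p1, p2 = 1.0, 2.0
psi = {p1: 1 / sqrt(2), p2: 1 / sqrt(2)}

# acting with the momentum operator multiplies each amplitude by its
# eigenvalue -- the result is still a superposition; nothing "collapses"
p_hat_psi = {p: p * amp for p, amp in psi.items()}
assert all(amp != 0 for amp in p_hat_psi.values())

# a measurement, by contrast, yields p1 OR p2, with these probabilities
probs = {p: amp ** 2 for p, amp in psi.items()}
assert abs(sum(probs.values()) - 1) < 1e-12
```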
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I filter sensitive Django POST parameters out of Sentry error reports?
To quote the Django docs:
@sensitive_post_parameters('pass_word', 'credit_card_number')
def record_user_profile(request):
UserProfile.create(user=request.user,
password=request.POST['pass_word'],
credit_card=request.POST['credit_card_number'],
name=request.POST['name'])
In the above example, the values for the pass_word and credit_card_number POST parameters will be hidden and replaced with stars (******) in the request’s representation inside the error reports, whereas the value of the name parameter will be disclosed.
To systematically hide all POST parameters of a request in error reports, do not provide any argument to the sensitive_post_parameters decorator:
@sensitive_post_parameters()
def my_view(request):
...
As a test, I added the following code to my Django 1.6 application:
views.py:
@sensitive_post_parameters('sensitive')
def sensitive(request):
if request.method == 'POST':
raise IntegrityError(unicode(timezone.now()))
return render(request, 'sensitive-test.html',
{'form': forms.SensitiveParamForm()})
forms.py:
class SensitiveParamForm(forms.Form):
not_sensitive = forms.CharField(max_length=255)
sensitive = forms.CharField(max_length=255)
When I submit this form via POST, I can see the values of both fields (including sensitive) clear as day in the Sentry report.
What am I doing wrong here? I'm using Django 1.6 and Raven 3.5.2.
Thanks in advance for your help!
A:
Turns out that this stemmed from a bug in Django itself!
If you haven't changed DEFAULT_EXCEPTION_REPORTER_FILTER in your settings file, you get the default filter of SafeExceptionReporterFilter.
If you've used the sensitive_post_parameters decorator, this will result in your calling SafeExceptionReporterFilter's get_post_parameters method:
def get_post_parameters(self, request):
"""
Replaces the values of POST parameters marked as sensitive with
stars (*********).
"""
if request is None:
return {}
else:
sensitive_post_parameters = getattr(request, 'sensitive_post_parameters', [])
if self.is_active(request) and sensitive_post_parameters:
cleansed = request.POST.copy()
if sensitive_post_parameters == '__ALL__':
# Cleanse all parameters.
for k, v in cleansed.items():
cleansed[k] = CLEANSED_SUBSTITUTE
return cleansed
else:
# Cleanse only the specified parameters.
for param in sensitive_post_parameters:
if param in cleansed:
cleansed[param] = CLEANSED_SUBSTITUTE
return cleansed
else:
return request.POST
The problem with the above is that while it will correctly return a QuerySet with the sensitive POST parameters set to CLEANSED_SUBSTITUTE ('********************')...it won't in any way alter request.body.
This is a problem when working with Raven/Sentry for Django, because it turns out that the get_data_from_request method of Raven's DjangoClient first attempts to get the request's POST parameters from request.body:
def get_data_from_request(self, request):
[snip]
if request.method != 'GET':
try:
data = request.body
except Exception:
try:
data = request.raw_post_data
except Exception:
# assume we had a partial read.
try:
data = request.POST or '<unavailable>'
except Exception:
data = '<unavailable>'
else:
data = None
[snip]
The fastest fix turned out to just involve subclassing DjangoClient and manually replacing its output with the cleansed QuerySet produced by SafeExceptionReporterFilter:
from django.views.debug import SafeExceptionReporterFilter
from raven.contrib.django.client import DjangoClient
class SafeDjangoClient(DjangoClient):
def get_data_from_request(self, request):
request.POST = SafeExceptionReporterFilter().get_post_parameters(request)
result = super(SafeDjangoClient, self).get_data_from_request(request)
result['sentry.interfaces.Http']['data'] = request.POST
return result
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Include dependencies in Maven assembly without include the actual artifact
I would like to create a Maven assembly that contains the transitive dependencies of an artifact without actually including the artifact itself. I have tried to exclude the artifact from the assembly, but then its dependencies aren't included as a result.
ArtifactA has DependencyA, DependencyB
Assembly should contain DependencyA, DependencyB (without ArtifactA)
And I would preferrably like to do this without having to explicitly specifiy what dependencies to be included in the assembly because this will be done with multiple projects that have many dependencies.
Thank you!
A:
I finally got it to work. This will produce an artifact that only contains the transitive dependencies of the declared dependency, without the dependency itself.
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>moduletest</groupId>
<artifactId>moduletest</artifactId>
<version>1.0</version>
<packaging>pom</packaging>
<dependencies>
<dependency>
<groupId>dependency</groupId>
<artifactId>dependency</artifactId>
<version>1.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.3</version>
<configuration>
<descriptors>
<descriptor>assembly.xml</descriptor>
</descriptors>
</configuration>
</plugin>
</plugins>
</build>
</project>
assembly.xml
<?xml version="1.0" encoding="UTF-8"?>
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<id>module</id>
<includeBaseDirectory>false</includeBaseDirectory>
<formats>
<format>zip</format>
</formats>
<dependencySets>
<dependencySet>
<excludes>
<exclude>dependency:dependency</exclude>
</excludes>
<useProjectArtifact>false</useProjectArtifact>
<useTransitiveDependencies>true</useTransitiveDependencies>
</dependencySet>
</dependencySets>
</assembly>
|
{
"pile_set_name": "StackExchange"
}
|