Dataset columns: id (string, 5 to 27 chars), question (string, 19 to 69.9k chars), title (string, 1 to 150 chars), tags (string, 1 to 118 chars), accepted_answer (string, 4 to 29.9k chars).
_codereview.30584
This program prints out a 10x10 square and uses only bit operations. It works well, but I don't like that global array. Can you tell me if the code is proper or not?

    #include <iostream>

    const std::size_t HEIGHT = 10;
    const std::size_t WIDTH = 10;

    char graphics[WIDTH/8][HEIGHT];

    inline void set_bit(std::size_t x, std::size_t y)
    {
        graphics[(x) / 8][y] |= (0x80 >> ((x) % 8));
    }

    void print_screen(void)
    {
        for (int y = 0; y < HEIGHT; y++)
        {
            for (int x = 0; x < WIDTH/8+1; x++)
            {
                for (int i = 0x80; i != 0; i = (i >> 1))
                {
                    if ((graphics[x][y] & i) != 0)
                        std::cout << '*';
                    else
                        std::cout << ' ';
                }
            }
            std::cout << std::endl;
        }
    }

    int main()
    {
        for (int x = 0; x < WIDTH; x++)
        {
            for (int y = 0; y < HEIGHT; y++)
            {
                if (x == 0 || y == 0 || x == WIDTH-1 || y == HEIGHT-1)
                    set_bit(x, y);
            }
        }
        print_screen();
        return 0;
    }
10x10 bitmapped square with bits
c++;matrix;bitwise
That global array is indeed not good. You'll need to pass around an array, but you shouldn't do it with a C-style array. Doing that will cause it to decay to a pointer, which you should avoid in C++. If you have C++11, you could use std::array, which will be set at an initial size. But if you don't have C++11, and also want to adjust the size, use a std::vector. You can also compare the two here. Either way, you'll be able to pass any of them around nicely, and it's something you should be doing in C++ anyway.

To match your environment, the following code does not utilize C++11.

I'll use std::vector here, but this can be done with other STL storage containers. Here's what a 2D vector would look like:

    std::vector<std::vector<T> > matrix; // where T is the type

This type does look long, and you may not want to type it out each time. To shorten it, you can use typedef to create an alias (which is not a new type):

    typedef std::vector<std::vector<T> > Matrix;

With that, you can use this type as such:

    Matrix matrix;

and create the 2D vector of a specific size. However, this is where the syntax gets nasty (especially lengthy). It's not set to a specific size, so you can just push vectors into it to increase the size. For a fixed size (using your size and data type), you'll have something like this:

    std::vector<std::vector<char> > matrix(HEIGHT, std::vector<char>(WIDTH));

This can be made shorter by having another typedef to serve as a dimension of the matrix. This will also make it a little clearer what the vector means in this context.

    typedef std::vector<char> MatrixDim;

It is then applied to the Matrix typedef:

    typedef std::vector<MatrixDim> Matrix;

The 2D initialization will then become this:

    Matrix matrix(HEIGHT, MatrixDim(WIDTH));

Now you can finally use this in main() and pass it to the other functions. Before you do that, you'll need a different loop counter type. With an STL storage container, you should use the container's size_type. With std::vector<char>, specifically, you'll have:

    std::vector<char>::size_type

You can use yet another typedef for this:

    typedef MatrixDim::size_type MatrixDimSize;

Here's what the functions will look like with the changes (explanations provided). I've also included some additional changes, which are also explained. The entire program with my changes applied produces the same output as yours.

set_bit():

    inline void set_bit(Matrix& matrix, MatrixDimSize x, MatrixDimSize y)
    {
        matrix[(x) / 8][y] |= (0x80 >> ((x) % 8));
    }

An additional parameter of type Matrix is added. The matrix is passed in by reference and modified within the function.
The std::size_t parameters were replaced with the MatrixDimSize type.

print_screen():

    void print_screen(Matrix const& matrix)
    {
        for (MatrixDimSize y = 0; y < HEIGHT; y++)
        {
            for (MatrixDimSize x = 0; x < WIDTH/8+1; x++)
            {
                for (int i = 0x80; i != 0; i >>= 1)
                {
                    std::cout << (((matrix[x][y] & i) != 0) ? '*' : ' ');
                }
            }
            std::cout << '\n';
        }
    }

A parameter of type Matrix is added. The matrix is passed in by const&, which is necessary as the function displays the matrix but does not modify it. It's also cheaper to pass it this way as opposed to copying (passing by value).
MatrixDimSize is added for the loop counter types.
The if/else is replaced with an equivalent ternary statement.
A newline is done with '\n' as opposed to std::endl. The latter also flushes the buffer, which is slower. You just need the former.
i = (i >> 1) is shortened to i >>= 1.

main():

    int main()
    {
        Matrix matrix(HEIGHT, MatrixDim(WIDTH));
        for (MatrixDimSize x = 0; x < WIDTH; x++)
        {
            for (MatrixDimSize y = 0; y < HEIGHT; y++)
            {
                if (x == 0 || y == 0 || x == WIDTH-1 || y == HEIGHT-1)
                {
                    set_bit(matrix, x, y);
                }
            }
        }
        print_screen(matrix);
    }

Both matrix vector typedefs are applied.
MatrixDimSize is added for the loop counter types.
The matrix is passed to and modified only by set_bit().
It is passed to print_screen() and is not modified.
_webmaster.53896
I recently noticed the following Google-related error on our website pages:

    GET http://pagead2.googlesyndication.com/teracent_product_template_V1/clearPixel.gif 404 (Not Found) pagead2.googlesyndication.com/teracent_product_template_V1/clearPixel.gif:1

I'm wondering if this is hindering AdSense clicks from registering with Google. I'm using the async analytics code that begins with:

    <script async src="http://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
Any alternatives to Google's clearPixel.gif 404 not found?
google analytics;google adsense;404
null
_softwareengineering.109138
I'm working on an installation script, and especially there I have many things that need to be checked and actions that only need to be performed in case of related outcomes. For example, a folder only needs to be created if it doesn't exist:

    MyDesktopFolderPath := AddBackslash(ExpandConstant('{commondesktop}')) + 'My Folder';
    if not DirExists(MyDesktopFolderPath) then
    begin
      ForceDirectories(MyDesktopFolderPath); //Create full path
    end;
    if DirExists(MyDesktopFolderPath) then
    begin
      //Perform other actions

Instead of this, I could use (exploit?) short-circuit evaluation to make it more compact:

    MyDesktopFolderPath := AddBackslash(ExpandConstant('{commondesktop}')) + 'My Folder';
    IsMyDesktopFolderExisting := DirExists(MyDesktopFolderPath) or ForceDirectories(MyDesktopFolderPath);
    if IsMyDesktopFolderExisting then
    begin
      //Perform other actions

This would work too, but I wonder if it is bad practice, mostly because it is less obvious that actions might be performed. Any opinions on that?

Edit: As pointed out by Secure, the code is not completely equivalent, but the discussion topic still stands for other cases.
Is it bad practice to use short-circuit evaluation instead of an if clause?
coding style
Good practice - no, as your peers may not be as intelligent as you are. But yes, in some languages (I'd dare say Perl) it is a common idiom. If exploiting short-circuit evaluation were considered good practice, it certainly would have appeared as a common idiom in a place like the Linux kernel source (those guys swear to write the clearest code on earth), and If..Then..Else would have been done away with by now. So, my suggestion: keep it simple, even if it means a few more keystrokes. Just an observation and a personal opinion, though.
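For illustration, here is the same pattern sketched in Python (my own hypothetical example, not from the original post); it also shows one way the compact form can mislead, since the value of the expression depends on what the functions return:

    import os

    path = "/tmp/my-folder"

    # Explicit version: the side effect is obvious at a glance.
    if not os.path.isdir(path):
        os.makedirs(path)

    # Short-circuit version: os.makedirs() only runs when isdir() is False.
    # Unlike Inno Setup's ForceDirectories, os.makedirs() returns None, so
    # the value of this expression is useless as an "exists now" flag,
    # which is one more way the compact idiom can quietly surprise a reader.
    os.path.isdir(path) or os.makedirs(path)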
_codereview.41130
I have created an MS Excel type of grid in jQuery. I want to learn best practices and I need your comments for more optimized code. Please review the code and offer your suggestions.

Demo

jQuery:

    var JS = JS || {};
    JS.training = JS.training || {};
    JS.training.tableData = JS.training.tableData || {};

    JS.training.tableData = {
        defaults: {
            $table: $('#myTable'),
            addTableRowBtn: $('#addRowBtn'),
            addTableColBtn: $('#addColBtn')
        },
        createTable: function () {
            var _this = this,
                table = '';
            table += '<thead>';
            for (i = 0; i < 5; i++) {
                var tableHeader = (i == 0) ? "<th style='border:1px solid #E5E5E5; background:#F1F1F1'></th>" : "<th style='border:1px solid #E5E5E5; background:#F1F1F1'>A" + i + "</th>";
                table += tableHeader;
            }
            table += '</thead>';
            table += '<tbody>';
            for (i = 0; i < 5; i++) {
                table += "<tr id='row" + i + "'>";
                for (var j = 0; j < 5; j++) {
                    var tableDataCells = (j == 0) ? "<td width='3%' style='border:1px solid #E5E5E5;'>" + (i + 1) + "</td>" : "<td width=100px style='border:1px solid #E5E5E5' id='td" + j + "' contenteditable=true> </td>";
                    table += tableDataCells;
                }
                table += '</tr>';
            }
            table += '</tbody>';

            //APPEND TABLE MARKUP
            $(_this.defaults.$table).append(table);

            //BIND EVENTS
            _this.bindEvents();
        },
        addTableRow: function () {
            var _this = this,
                colLen = $("#myTable tr:nth-child(1) td").length,
                colVal = parseInt($("#myTable tr:last-child td:first").text()) + 1;
            for (i = 0; i < 1; i++) {
                var table = "<tr id='row" + colVal + "'>";
                for (var j = 0; j < colLen; j++) {
                    if (j == 0) {
                        table += "<td width=3% style='border:1px solid #E5E5E5; background:#F1F1F1'>" + colVal + " </td>";
                    } else {
                        table += "<td width=100px style='border:1px solid #E5E5E5;' contenteditable=true id='td" + j + "'> </td>";
                    }
                }
                table += '</tr>';
            }
            $(_this.defaults.$table).append(table);
        },
        addTableColumn: function () {
            var _this = this,
                colVal = $("#myTable tr th:last-child").text(),
                colNum = parseInt(colVal.charAt(1)) + 1;
            console.log(colNum);
            $("#myTable thead tr:last-child").append("<th style='border:1px solid #E5E5E5; background:#F1F1F1'>A" + colNum + "</th>");
            $("#myTable tbody tr").each(function () {
                $(this).append("<td width=100px style='border:1px solid #E5E5E5;' contenteditable=true id='td" + colNum + "'></td>")
            });
        },
        bindEvents: function () {
            var _this = this;
            //CAPTURE ADD ROW BUTTON CLICK
            _this.defaults.addTableRowBtn.on('click', function () {
                _this.addTableRow();
            });
            //CAPTURE ADD COLUMN BUTTON CLICK
            _this.defaults.addTableColBtn.on('click', function () {
                _this.addTableColumn();
            });
        },
        init: function () {
            var _this = this;
            _this.createTable();
        }
    };

    //INIT CALL
    JS.training.tableData.init();

HTML:

    <div id="wrapper">
        <button id="addRowBtn" name="addRowBtn" value="Add Row">Add Row</button>
        <button id="addColBtn" name="addColBtn" value="Add Col">Add Column</button>
        <div id="table">
            <table id="myTable" cellpadding="0" cellspacing="0" border="0" style="border:1px solid #E5E5E5"></table>
        </div>
    </div>

CSS:

    body {
        font:normal 14px/16px Arial, Helvetica, sans-serif
    }
    #wrapper {
        margin:100px auto 0;
        width:80%;
    }
    #myTable {
        width:100%;
        margin:20px auto 0
    }
    #myTable th, #myTable td {
        padding:5px;
    }
    #myTable tr td.first {
        background:#F1F1F1;
        border:1px solid #E5E5E5;
    }
MS Excel type of grid in jQuery
javascript;jquery;css;excel
null
_unix.293506
Our Apache installation is in /var/www on a CentOS 7.2 virtual machine. On occasion I run the following to ensure there are no unintended backups which could leak information to an attacker:

    $ sudo find /var -name '*~' -exec ls -al {} \;

An attacker could read something like /var/www/html/.htaccess~, so I try to close the loop.

I'm finding a lot of *.journal~ files like below. My question is, is it safe to delete *.journal~ files?

    $ sudo find /var -name '*~' -exec ls -al {} \;
    -rw-r-x---+ 1 root systemd-journal  16777216 Jun 29 18:15 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal   8388608 Mar 22 09:44 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  16777216 Mar  1 12:47 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  41943040 Dec 20  2015 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  50331648 Jun 22 12:27 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal   8388608 Mar 22 02:12 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal   8388608 Mar 21 06:41 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal 109051904 Oct 30  2015 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  33554432 Dec 28  2015 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  25165824 Mar 16 10:22 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  25165824 Jan 25 05:21 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal   8388608 Jun 22 14:33 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  33554432 Feb 24 08:16 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  75497472 Sep 12  2015 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  25165824 Jan 31 19:57 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal   8388608 Mar  3 23:33 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal   8388608 Feb 24 18:47 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  25165824 Mar 21 00:01 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
    -rw-r-x---+ 1 root systemd-journal  50331648 May 11 12:00 /var/log/journal/dca38bef0d5e4e8f9f2d545d2d833d10/[email protected]~
Is it safe to delete '.journal~' files?
files;systemd;logs;rm
null
_softwareengineering.343172
The question is asked in the context of Python, but it is also relevant for any languages with named parameters support.

If some entity in my code (e.g. a pubsub implementation) or even a simple function accepts a callback, does it make more sense to call it with positional arguments or with named arguments?

    def foo(on_foo_ended):
        foo_result = await ...
        # Call with positional argument:
        on_foo_ended(foo_result)
        # Call with named argument:
        on_foo_ended(result=foo_result)

A call with positional arguments does not require any arbitrary parameter naming from the user, but may instead impose arbitrary parameter ordering. Named arguments require matching parameter names, but are (to some extent) self-documenting and do not force the user to order them.

Assuming we are developing a library, it would be preferable to use the most common convention if there is one. If there is none, would it make sense to prefer one way over the other? I suppose complex strategies like "pass positional argument when there is only one and named arguments when there are two or more" would be too arbitrary and probably difficult to use.
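One way to remove the ambiguity (a hypothetical sketch with made-up names, not part of the original question) is to make the callback's parameter keyword-only, so the caller has no ordering decision left to make:

    import asyncio

    async def foo(on_foo_ended):
        foo_result = await asyncio.sleep(0, result=42)  # stand-in for real work
        # The callback is invoked with a named argument only; a callback
        # defined as `def cb(*, result)` rejects positional calls outright.
        on_foo_ended(result=foo_result)

    def on_ended(*, result):
        print("got", result)

    asyncio.run(foo(on_ended))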
Should callbacks be called with named or positional arguments?
python;coding standards;functions;parameters
null
_datascience.13057
I have a dataset of about 1M observations and I had to predict a response that occurs only about 10,000 times (1%). I decided to train a random forest, but this takes a lot of time to train because the data is too large for my hardware. So I decided to take a sample, but a simple random sample would need to be very large to contain a minimum number of responses (if I take 10% at random, I would have only 1000 responses).

Then I took a stratified sample: all responses and 10,000 random non-responses, and trained my model on this dataset. But now I need to rescale the probability so I have the real probability of the observation being a response.

I tried to simulate this problem with this code in R, training a model on a balanced dataset and another one on the unbalanced data. But those models are not very correlated and I didn't find a good way to transform the probabilities to the original unbalanced scale.

Edit: found this, a good reference for future readers.

I found that for logistic regression I can do this by just changing the intercept this way:

$$\hat{\beta_0} = \hat{\beta_0^*} - \log\left(\frac{\gamma_1}{\gamma_2}\right)$$

where $\gamma_1 = Pr(Z=1|Y=1)$ and $\gamma_2 = Pr(Z=1|Y=0)$. $Z$ is a random variable indicating whether the observation is in the reduced dataset. This was found in this book (in Portuguese), page 216.

    simulate_data <- function(n){
      X <- data.frame(matrix(runif(n*20), ncol = 20))
      list(
        X = X,
        Y = rbinom(n, size = 1, prob = apply(X, 1, sum) %>% pnorm(mean = 13)) %>% as.factor()
      )
    }

    balance <- function(X, Y){
      X <- rbind(
        X[Y == 1,],
        X[Y == 0,] %>% sample_n(length(Y[Y == 1]))
      )
      return(list(
        X = X,
        Y = as.factor(c(rep(c(1,0), each = length(Y[Y == 1]))))
      ))
    }

    train <- simulate_data(100000)

    library(randomForest)
    m_desb <- mean(train$Y == 1)
    modelo_desb <- randomForest(train$X, train$Y, ntree = 200,
                                cutoff = c(1-m_desb, m_desb),
                                nodesize = 30, mtry = 8)

    bal <- balance(train$X, train$Y)
    m_bal <- mean(bal$Y == 1)
    modelo_bal <- randomForest(bal$X, bal$Y, ntree = 100,
                               cutoff = c(1-m_bal, m_bal),
                               nodesize = 50)

Code for plot:

    library(ggplot2)
    data.frame(
      unbalanced = predict(modelo_desb, newdata = test$X, type = "prob")[,2],
      balanced = predict(modelo_bal, newdata = test$X, type = "prob")[,2]) %>%
      ggplot(aes(x = balanced, y = unbalanced)) +
      geom_point(size = 0.3) +
      xlim(0,1) +
      geom_smooth() +
      geom_hline(yintercept = m_desb, linetype = "dashed") +
      geom_vline(xintercept = m_bal, linetype = "dashed")
Predict probability when model was trained in balanced dataset
machine learning
Daniel,

What you did goes under the name of oversampling. There is a sample of some real population, and you replace it with a sample from a manufactured population. The problem that makes sense in application is the estimation of

$$P_r(Y=1|X) = \text{probability of response=1 in the $\mathbf{real}$ population given the predictor } X$$

but by using an oversample you are estimating

$$P_m(Y=1|X) = \text{probability of response=1 in the $\mathbf{manufactured}$ population given the predictor } X$$

The two probabilities are related. I'll work out the details. I'll pretend the predictor $X$ is discrete; if $X$ takes numerical values one has to replace some probabilities by probability densities.

$$\dots\dots\dots$$

To simplify the notation, let $\pi_1 = P_r(Y=1)$ and $\mu_1 = P_m(Y=1)$ be the probabilities of response in the real and manufactured populations, and let

$$L_r = \frac{P_r(X=x|Y=1)}{P_r(X=x|Y=0)} = \frac{\frac{P_r(Y=1|X=x)}{P_r(Y=0|X=x)}}{\frac{\pi_1}{1-\pi_1}}$$

be the odds ratio of $Y=1$, i.e. the ratio of the odds among cases with $X=x$ to the odds in the general $\mathbf{real}$ population. Finally, let $L_m$ be the corresponding ratio in the $\mathbf{manufactured}$ population.

By Bayes' Theorem:

$$P_r(Y=1|X=x) = \frac{P_r(Y=1,X=x)}{P_r(X=x)} = \frac{P_r(X=x|Y=1)\,\pi_1}{P_r(X=x|Y=1)\,\pi_1 + P_r(X=x|Y=0)\,(1-\pi_1)} = \frac{L_r\,\pi_1}{L_r\,\pi_1 + (1-\pi_1)} \tag{1}$$

In a similar way, we get an analogous result for the manufactured population:

$$P_m(Y=1|X=x) = \frac{L_m\,\mu_1}{L_m\,\mu_1 + (1-\mu_1)} \tag{2}$$

Since the manufactured sample is a random sample, stratified by $Y$, the conditional distribution of $X$ within responders is the same as in the real population, and the same holds for non-responders, i.e.:

$$P_m(X=x|Y=j) = P_r(X=x|Y=j)$$

for $j=0,1$. If the sample stratified by values of $Y$ were anything other than a random sample these would not be true.

It follows that $\boxed{L_r = L_m}$. Next we solve for $L_m$ in terms of $P_m(Y=1|X)$ from (2) and replace in (1).

$$\dots\dots\dots$$

$\mathbf{Digression}$: Here is my easy way to carry out the steps, without mess. Two non-zero vectors $\mathbf{v_1}$, $\mathbf{v_2}$ are parallel iff there is $\lambda \ne 0$ such that $\mathbf{v_1}=\lambda \mathbf{v_2}$. Below I will use this idea, and I will not care about the exact value of $\lambda$, so I will be using $\lambda$ as a short-hand for $\mathbf{\text{some non-zero number}}$. $\mathbf{\text{End Digression}}$

$$\dots\dots\dots$$

The easy way to solve is to observe that for non-zero messy values of $\lambda$ (not the same in each occurrence!) one has:

$$\begin{bmatrix} P_r(Y=1|X) \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 & 0 \\ \pi_1 & 1-\pi_1 \end{bmatrix} \begin{bmatrix} L_r \\ 1 \end{bmatrix},$$

and

$$\begin{bmatrix} P_m(Y=1|X) \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \mu_1 & 0 \\ \mu_1 & 1-\mu_1 \end{bmatrix} \begin{bmatrix} L_m \\ 1 \end{bmatrix}.$$

Therefore,

$$\begin{bmatrix} L_m \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \mu_1 & 0 \\ \mu_1 & 1-\mu_1 \end{bmatrix}^{-1} \begin{bmatrix} P_m(Y=1|X) \\ 1 \end{bmatrix},$$

and so (remember that here $\lambda$ stands for some non-zero number)

$$\begin{bmatrix} P_r \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 & 0 \\ \pi_1 & 1-\pi_1 \end{bmatrix} \begin{bmatrix} L_r \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 & 0 \\ \pi_1 & 1-\pi_1 \end{bmatrix} \begin{bmatrix} \mu_1 & 0 \\ \mu_1 & 1-\mu_1 \end{bmatrix}^{-1} \begin{bmatrix} P_m \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 (1-\mu_1) & 0 \\ \pi_1 - \mu_1 & \mu_1 (1-\pi_1) \end{bmatrix} \begin{bmatrix} P_m \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 (1-\mu_1) P_m \\ (\pi_1 - \mu_1) P_m + \mu_1 (1-\pi_1) \end{bmatrix}.$$

Thus,

$$P_r = \frac{\pi_1 (1-\mu_1) P_m}{(\pi_1 - \mu_1) P_m + \mu_1 (1-\pi_1)}$$

$$\dots\dots\dots$$

Example: Let's work out the details of a binomial model,

$$P_m(Y=1|X) = \frac{e^{\beta_0 + \beta X}}{1+e^{\beta_0 + \beta X}}$$

or in the "where $\lambda$ is some non-zero scalar" notation (I would not have digressed before if I did not have an ulterior motive :)):

$$\begin{bmatrix} P_m \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} e^{\beta_0 + \beta X} \\ 1 + e^{\beta_0 + \beta X} \end{bmatrix}$$

What is the implied model in the real population?

$$\begin{bmatrix} P_r \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 (1-\mu_1) & 0 \\ \pi_1 - \mu_1 & \mu_1 (1-\pi_1) \end{bmatrix} \begin{bmatrix} P_m \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 (1-\mu_1) & 0 \\ \pi_1 - \mu_1 & \mu_1 (1-\pi_1) \end{bmatrix} \begin{bmatrix} e^{\beta_0 + \beta X} \\ 1 + e^{\beta_0 + \beta X} \end{bmatrix} = \lambda \begin{bmatrix} \pi_1 (1-\mu_1) e^{\beta_0 + \beta X} \\ \pi_1 (1-\mu_1) e^{\beta_0 + \beta X} + \mu_1 (1-\pi_1) \end{bmatrix} = \lambda \begin{bmatrix} \frac{\pi_1 (1-\mu_1)}{\mu_1 (1-\pi_1)} e^{\beta_0 + \beta X} \\ 1 + \frac{\pi_1 (1-\mu_1)}{\mu_1 (1-\pi_1)} e^{\beta_0 + \beta X} \end{bmatrix}.$$

If we let $\tau = \ln\left(\frac{\pi_1 (1-\mu_1)}{\mu_1 (1-\pi_1)}\right)$, we can absorb this constant into the exponents to get:

$$\begin{bmatrix} P_r \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} e^{\tau + \beta_0 + \beta X} \\ 1 + e^{\tau + \beta_0 + \beta X} \end{bmatrix}.$$

Taking the ratio and simplifying the non-zero constant in numerator and denominator, we get that fitting a logistic model to the manufactured population results in an implied logistic model for the real population, $\mathbf{\text{with the same coefficients for X}}$ and with a difference in the constant (in the logistic model) given by:

$$\beta_{real} = \tau + \beta_0$$

$$\dots$$

Note that, according to your reference, the ratio of $\gamma_1 = Pr(Z=1|Y=1)$ and $\gamma_0 = Pr(Z=1|Y=0)$ should come up. Indeed:

$$\gamma_1 = Pr(Z=1|Y=1) = \frac{P(Z=1,Y=1)}{P(Y=1)} = \frac{P_r(Y=1|Z=1)P_r(Z=1)}{P_r(Y=1)} = \frac{P_m(Y=1)}{P_r(Y=1)} P_r(Z=1) = \frac{\mu_1}{\pi_1}P_r(Z=1)$$

likewise (i.e. change $Y$ to $1-Y$),

$$\gamma_0 = \frac{1-\mu_1}{1-\pi_1}P_r(Z=1)$$

so

$$\ln\left(\frac{\gamma_1}{\gamma_0}\right) = -\ln\left(\frac{\pi_1 (1-\mu_1)}{\mu_1 (1-\pi_1)}\right) = -\tau,$$

which is exactly the intercept correction quoted in the question: $\hat{\beta_0} = \hat{\beta_0^*} - \ln(\gamma_1/\gamma_0) = \hat{\beta_0^*} + \tau$.

$$\dots\dots\dots$$

Notes for full disclosure: I worked with the probability model. When one works with finite samples, the example above suggests two ways of estimating the coefficients:

* estimate coefficients using the sample from the real population
* estimate coefficients using the manufactured population

It turns out that these two estimators are not the same (it is obvious if one considers that one estimator is based on more cases than the other). Both estimators are asymptotically consistent, but it can be shown that the one based on the manufactured population is more biased (forgot the reference :( ).

In the data science space we are more concerned with the quality of the predictions than with the parameters used to make those predictions, so as long as you check results properly (e.g. using one data set to build models and another to validate them), the bias in the parameters should not deter us from using oversampling.

$$\dots\dots\dots$$
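A small illustration of the final back-correction formula in Python (my own sketch, not from the original answer); here p_m is the score from the model trained on the balanced sample, pi1 the response rate in the real data, and mu1 the rate in the balanced sample:

    import numpy as np

    def rescale_probability(p_m, pi1, mu1):
        # Map P_m(Y=1|X) from the oversampled population back to
        # P_r(Y=1|X) in the real population, using
        # P_r = pi1*(1-mu1)*P_m / ((pi1-mu1)*P_m + mu1*(1-pi1)).
        p_m = np.asarray(p_m, dtype=float)
        return pi1 * (1 - mu1) * p_m / ((pi1 - mu1) * p_m + mu1 * (1 - pi1))

    # Example: 1% response rate in the real data, 50% in the balanced sample.
    # A balanced-model score equal to mu1 (0.5) maps back to pi1 (0.01).
    print(rescale_probability([0.2, 0.5, 0.9], pi1=0.01, mu1=0.5))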
_unix.144619
This is my original server with very loose security, given that it does not block any ports via iptables. /etc/sysconfig/iptables contents:

    # Generated by iptables-save v1.4.7 on Mon Jun 16 20:04:05 2014
    *filter
    :INPUT ACCEPT [8:607]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [6:1089]
    COMMIT
    # Completed on Mon Jun 16 20:04:05 2014

This (below) is a server with a different company, but it looks like it came with good security settings (allows only port 22). /etc/sysconfig/iptables contents:

    # Firewall configuration written by system-config-firewall
    # Manual customization of this file is not recommended.
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A INPUT -p icmp -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
    -A INPUT -j REJECT --reject-with icmp-host-prohibited
    -A FORWARD -j REJECT --reject-with icmp-host-prohibited
    COMMIT

This one looks a lot better. Can I simply copy these to my original server's /etc/sysconfig/iptables and then reboot the whole system and expect everything to work?
Is it OK to copy /etc/sysconfig/iptables to another machine?
security;iptables;firewall
null
_unix.216222
I am trying to read files from a directory into an array, but even when the file doesn't exist, it is saved into the array. I want to exclude the file name if it doesn't exist.

    a=(/tmp/nofileexists) && echo ${#a[@]} && echo ${a[@]}
    1
    /tmp/nofileexists

The path may contain a wildcard.

    a=(/tmp/nofileexists*.pdf) && echo ${#a[@]} && echo ${a[@]}
Handle wildcards matching no file in bash
bash;wildcards;array
You can use nullglob to have bash return an empty string when filename expansion fails:

    $ shopt -s nullglob
    $ a=(/tmp/nofileexists*.pdf) && echo ${#a[@]} && echo ${a[@]}
    0
    <blank line>

Or use failglob to report an error:

    $ shopt -s failglob
    $ a=(/tmp/nofileexists*.pdf) && echo ${#a[@]} && echo ${a[@]}
    bash: no match: /tmp/nofileexists*.pdf
_softwareengineering.65783
It seems to me that buying and reading books is one of the most important investments a developer can make, and one of the most important investments a company can make in a developer. There were a lot of times when I was confronted with a tricky problem, and what I did was to flip through an algorithm book and I would stumble upon an answer. And there were a lot of times when I tackled a problem head-on without first doing a literature search, only to find out later that my solution sucked and a proper solution had already been written down elsewhere.

But it also seems to me that a lot of companies view books as an expense that must be cut down and budgeted. Sad, but true.

How much does your company spend on books?
How much does your company spend on books?
books
null
_codereview.106457
I created a web crawler that uses Beautiful Soup to crawl images from a website and scrape them to a database. In order to use it you have to create a class that inherits from Crawler and implements 4 simple methods.

get_image_page_links() returns a list of the a tags that link to each image's individual page.
get_image_source_url() returns the url for the image using the page_soup that is provided.
get_image_thumbnail_url() returns a url to a smaller version in order to create the thumbnail quickly.
get_tags_ul() returns a BeautifulSoup object that represents the ul containing a list of tags for the image.

Example:

    class PexelCrawler(Crawler):
        def __init__(self, db_record=None):
            origin = 'PX'
            base_url = 'https://www.pexels.com/?format=html&page={}'
            domain = 'www.pexels.com'
            Crawler.__init__(self, db_record, origin, base_url, domain)

        def get_image_page_links(self, page_soup):
            article_tags = page_soup.select('article.photos__photo')
            return [article.find('a') for article in article_tags]

        def get_image_source_url(self, image_page_soup):
            return image_page_soup.find('a', class_='js-download')['href']

        def get_image_thumbnail_url(self, image_page_soup):
            return image_page_soup.find('img', class_='photo__img')['src']

        def get_tags_ul(self, image_page_soup):
            return image_page_soup.find('ul', class_='list-padding')

Here is the Crawler class:

    from urllib.parse import urljoin
    import requests
    from bs4 import BeautifulSoup
    from crawlers.models import Image, Tag, Crawler
    from requests.exceptions import HTTPError
    import pdb
    import signal
    import sys
    import time

    RED = "\033[01;31m{0}\033[00m"
    GREEN = "\033[1;36m{0}\033[00m"

    def signal_handler(signal, frame):
        global interrupted
        interrupted = True

    class Crawler():
        def __init__(self, db_record, origin, base_url, domain):
            """current_page is used to track what page the crawler is on.
            db_record is an instance of the model class Crawler and represents
            the associated record in the table that keeps track of the crawler
            in the database.
            base_url is the page that is used in order to scrape images from;
            must contain {} where the page number should be.
            domain_name is the domain name of the website, used to transform
            relative urls into absolute urls."""
            self.current_page = db_record.current_page
            self.db_record = db_record
            self.origin = origin
            self.base_url = base_url
            self.domain = domain

        def make_absolute_url(self, url):
            """returns an absolute url for a given url using the domain_name property
            example: '/photo/bloom-flower-colorful-colourful-9459/' returns
            'https://www.pexels.com/photo/bloom-flower-colorful-colourful-9459/'
            where the domain_name is 'www.pexels.com'"""
            protocol = "https://"
            return urljoin(protocol + self.domain, url)

        def get_page_soup(self, page_url, attempts=5, delay=2):
            for i in range(attempts):
                try:
                    response = requests.get(page_url)
                    response.raise_for_status()
                except HTTPError as e:
                    print(RED.format("page responded with " + str(response.status_code) + ". trying again"))
                    time.sleep(delay)
                else:
                    return BeautifulSoup(response.text)
            else:
                # failed to get the page, raise an exception
                response.raise_for_status()

        def get_image_page_urls(self):
            """returns a list of urls for each image on the page"""
            response = requests.get(self.base_url.format(self.current_page))
            page_soup = BeautifulSoup(response.text)
            image_page_urls = [link['href'] for link in self.get_image_page_links(page_soup)]
            # make sure urls are absolute
            image_page_urls = [self.make_absolute_url(url) for url in image_page_urls]
            return image_page_urls

        def crawl(self):
            global interrupted
            interrupted = False
            signal.signal(signal.SIGINT, signal_handler)
            images_added = 0
            images_failed = 0
            while True:
                print('crawling page {}'.format(self.current_page))
                image_page_urls = self.get_image_page_urls()
                for n, image_page_url in enumerate(image_page_urls):
                    if Image.objects.filter(page_url=image_page_url).exists():
                        print("Image already exists in database, moving on")
                        continue
                    print('crawling image at: {} (image {} of {})'.format(image_page_url, n+1, len(image_page_urls)))
                    try:
                        image_page_soup = self.get_page_soup(image_page_url)
                    except HTTPError:
                        print(RED.format("Failed to reach image page url at: {}, moving on".format(image_page_url)))
                        images_failed += 1
                        continue
                    print('getting image source url')
                    image_source_url = self.get_image_source_url(image_page_soup)
                    print('getting image thumbnail url')
                    image_thumbnail_url = self.get_image_thumbnail_url(image_page_soup)
                    print('getting tags')
                    tags = self.get_tags(image_page_soup)
                    print('storing image in db')
                    self.store_image(image_source_url, image_page_url, image_thumbnail_url, tags)
                    images_added += 1
                self.current_page += 1
                self.db_record.current_page += 1
                self.db_record.save()
                if interrupted:
                    print("Crawling halted.")
                    print(GREEN.format("{} images added to database".format(images_added)))
                    print(RED.format("{} images failed to add".format(images_failed)))
                    break

        def get_image_page_links(self, page_soup):
            return NotImplementedError("method get_image_page_links must be implemented")

        def get_image_source_url(self, image_page_soup):
            return NotImplementedError("method get_image_source_url must be implemented")

        def get_image_thumbnail_url(self, image_page_soup):
            return NotImplementedError("method get_image_thumbnail_url must be implemented")

        def get_tags_ul(self, image_page_soup):
            return NotImplementedError("method get_tags_ul must be implemented")

        def get_tags(self, image_page_soup):
            tags_ul = self.get_tags_ul(image_page_soup)
            tag_links = tags_ul.find_all('a')
            tag_names = [tag_link.string for tag_link in tag_links]
            return tag_names

        def store_image(self, image_source_url, image_page_url, image_thumbnail_url, tags):
            image = Image(source_url=image_source_url, page_url=image_page_url, origin=self.origin)
            print('creating thumbnail from url: ' + image_thumbnail_url)
            if image.create_thumbnail(image_thumbnail_url):
                print(GREEN.format("thumbnail created"))
            else:
                print(RED.format("thumbnail creation failed, deleting image"))
                image.delete()
                return
            print('saving image')
            image.save()
            print(GREEN.format("new image saved to database"))
            print('adding tags to image')
            for tag in tags:
                tag, created = Tag.objects.get_or_create(name=tag)
                image.tags.add(tag)

I'm looking for feedback regarding OOP or making the crawler faster, more efficient, or even better names for functions and variables.
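As one possible simplification of the hand-written retry loop in get_page_soup (a sketch of an alternative, not part of the submitted code), requests can delegate retries to urllib3:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    def make_session(attempts=5, delay=2):
        # A session whose adapter retries failed GETs with a backoff,
        # replacing the explicit attempt/sleep loop.
        retry = Retry(total=attempts, backoff_factor=delay,
                      status_forcelist=[500, 502, 503, 504])
        session = requests.Session()
        session.mount('http://', HTTPAdapter(max_retries=retry))
        session.mount('https://', HTTPAdapter(max_retries=retry))
        return session

    session = make_session()
    response = session.get('https://www.pexels.com/')
    response.raise_for_status()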
A web crawler for scraping images from stock photo websites
python;python 3.x;web scraping;django
null
_unix.82464
I have a Debian Squeeze guest system running on a Windows 7 Professional host, a notebook computer. As a web developer, I need to mount my Windows project root folder into the Debian system.

VirtualBox offers a shared folders function, however it does not recognize Windows-style symlinks (junctions). So I decided to run a Samba client in my Debian system and mount the project folders from the virtual network (NAT). I do that by using this command in /etc/rc.local:

    mount -t cifs //192.168.178.62/Projekte/workspace /media/smb_workspace -o user=Bill,password=XXXXXXX,domain=LOCALDOMAINNAME,uid=33,gid=33,ex$

This works fine, but as the name of the host machine couldn't be resolved, I had to use its IP address. When I'm in a different WIFI, the IP address changes and I have to change the mounting command. Obviously I'd prefer to enter the name of my Windows machine, like that:

    mount -t cifs //NOTEBOOKNAME/Projekte/workspace /media/smb_workspace -o user=Bill,password=XXXXXXX,domain=LOCALDOMAINNAME,uid=33,gid=33,ex$

I tried switching off my Windows firewall and the antivirus software, to no avail. The samba packages I installed on Debian are these, and apart from entering the workgroup information I left the configuration unchanged:

    libwbclient0 - Samba winbind client library
    samba - SMB/CIFS file, print, and login server for Unix
    samba-common - common files used by both the Samba server and client
    samba-common-bin - common files used by both the Samba server and client

So how can I get this to work? Any suggestions?
Virtualbox: Find Windows host DNS name on Debian guest
windows;virtualbox;samba;virtual machine
null
_unix.138729
I have a Lenovo X230 with a Centrino N-6300 wifi card. I cannot get wifi to work on it.

I did fw_update. I can do:

    ifconfig iwn0 scan

I have the list of wifi networks around, including mine. However, if I try to set up /etc/hostname.iwn0:

    nwid Livebox-XXXX
    wpakey XXXXXXXXX
    dhcp

then

    sh /etc/netstart iwn0

I got:

    No link: .............. sleeping

Same if I try:

    ifconfig iwn0 nwid Livebox-XXXX wpakey XXXXXXXX

What did I miss?
Centrino N-6300 OpenBSD 5.5 - No Link
wifi;openbsd
null
_cs.63144
If a decision problem A belongs to the polynomial complexity class P, must there be at least one YES instance and one NO instance of the problem? I know that in the definition of a Turing machine an accept state and separate reject state are defined but I'm not sure if that applies to this case. Is it maybe possible to have only YES instances or only NO instances?Thanks very much in advance.
Do problems in P have a minimum number of YES and NO instances?
complexity theory;decision problem;polynomial time
If a problem has only YES instances (resp. only NO instances), then the associated language, which is our formalization of a problem, contains every word in $\Sigma^*$ (resp. no words), with $\Sigma$ being the underlying alphabet. Both $\Sigma^*$ and $\emptyset$ are regular languages, and in particular, both are in $P$.

So yes - there are trivial languages in $P$. In fact, this argument also works when there are finitely many YES or NO instances, since finite languages are regular, and so are their complements.

So for a language not to be in $P$ it must first of all have both infinitely many YES and infinitely many NO instances.
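As a concrete illustration (my own hypothetical sketch), deciders for these two trivial languages run in constant time, which is comfortably polynomial:

    def decide_sigma_star(w: str) -> bool:
        # Every word is a YES instance: L = Sigma*.
        return True

    def decide_empty(w: str) -> bool:
        # No word is a YES instance: L is the empty language.
        return False

    print(decide_sigma_star("0101"), decide_empty("0101"))  # True False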
_webmaster.5302
I migrated my WordPress blog to a new server, and everything seemed to be working fine until it started giving me this error when entering the admin area:

    Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 4864 bytes) in /home/neworder/public_html/blog/wp-admin/includes/plugin.php on line 729

Line 729 has:

    $protected = array( '_wp_attached_file', '_wp_attachment_metadata', '_wp_old_slug', '_wp_page_template' );

I had installed the maintenance-mode plugin, and I have suspicions that this is what broke the forum.

If I remove the plugin it then gives another error:

    Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 19456 bytes) in /home/neworder/public_html/blog/wp-admin/includes/post.php on line 1158

And that line has:

    $content .= '<p class="hide-if-no-js">' . esc_html__( 'Remove featured image' ) . '</p>';
    }

I tried to restore the blog file-system from the old server and also to restore the database from the old server (2x), but still it gives me the same error. The blog itself seems to be working fine: http://blog.antinovaordemmundial.com/
Can't get into the admin console after migrating to new server
wordpress;administration
Try taking a look at this: http://nabtron.com/wordpress-3-0-fatal-error-allowed-memory-size-of-33554432-bytes-exhausted/1924/
_webmaster.100860
I'm starting a new development of a site and constantly checking the page load and YSlow scores with GTmetrix. All was almost perfect until I placed a Google ad from AdSense.

This is the report just before I placed the script, and after I placed the script (only one). As can be seen, the performance falls drastically, and the number of requests is big. Is there anything I can do to improve this?

By the way, the script is placed just before the tag with the following:

    <script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js" defer="defer"></script>
    <script defer="defer">(adsbygoogle = window.adsbygoogle || []).push({});</script>
Improve page load performance with Google ads
google adsense
The sad truth is that Google AdSense is an advertising service. They rent a window of your webpage for advertisements. These advertisements can contain any number of requests to other third-party helper URLs and may even contain unoptimized content. The content is what the advertisers produce, and you really have no control over it.

All I can suggest, if you really want to improve (a.k.a. speed up) loading time, is to disable all the fancy offerings with Google ads such as animated ads, video ads, ads that expand, etc. Just stick with the basic text and basic graphic ads, and if that isn't fast enough, then disable everything but text-based ads.

Now, having said all that, taking my advice MIGHT cost you money, and I say this because people are visual creatures. They like to see videos and graphics and advertisers understand this. If you remove graphical/animated ads, then people will look for other graphics and there may be fewer advertisers bidding on your ad slots, which can result in less revenue for you.
_codereview.44689
I am learning algorithms in graphs. I have implemented topological sort using Java. Kindly review my code and provide me with feedback.

    import java.util.LinkedList;
    import java.util.Queue;
    import java.util.Stack;

    public class TopologicalSortGraph {
        /**
         * This Topological Sort implementation takes the example graph in
         * Version 1: implementation with unweighted
         * Assumption : Graph is directed
         */
        TopologicalSortGraph Graph = new TopologicalSortGraph();
        //public LinkedList<Node> nodes = new LinkedList<Node>();

        public static void topologicalSort(Graph graph) {
            Queue<Node> q = new LinkedList<Node>();
            int vertexProcessesCtr = 0;
            for (Node m : graph.nodes) {
                if (m.inDegree == 0) {
                    ++vertexProcessesCtr;
                    q.add(m);
                    System.out.println(m.data);
                }
            }
            while (!q.isEmpty()) {
                Node m = q.poll();
                //System.out.println(m.data);
                for (Node child : m.AdjacenctNode) {
                    --child.inDegree;
                    if (child.inDegree == 0) {
                        q.add(child);
                        ++vertexProcessesCtr;
                        System.out.println(child.data);
                    }
                }
            }
            if (vertexProcessesCtr > graph.vertices) {
                System.out.println();
            }
        }

        public static void main(String[] args) {
            Graph g = new Graph();
            g.vertices = 8;
            Node TEN = new Node("10");
            Node ELEVEN = new Node("11");
            Node TWO = new Node("2");
            Node THREE = new Node("3");
            Node FIVE = new Node("5");
            Node SEVEN = new Node("7");
            Node EIGHT = new Node("8");
            Node NINE = new Node("9");

            SEVEN.AdjacenctNode.add(ELEVEN);
            ELEVEN.inDegree++;
            SEVEN.AdjacenctNode.add(EIGHT);
            EIGHT.inDegree++;
            FIVE.AdjacenctNode.add(ELEVEN);
            ELEVEN.inDegree++;
            THREE.AdjacenctNode.add(EIGHT);
            EIGHT.inDegree++;
            THREE.AdjacenctNode.add(TEN);
            TEN.inDegree++;
            ELEVEN.AdjacenctNode.add(TEN);
            TEN.inDegree++;
            ELEVEN.AdjacenctNode.add(TWO);
            TWO.inDegree++;
            ELEVEN.AdjacenctNode.add(NINE);
            NINE.inDegree++;
            EIGHT.AdjacenctNode.add(NINE);
            NINE.inDegree++;

            g.nodes.add(TWO);
            g.nodes.add(THREE);
            g.nodes.add(FIVE);
            g.nodes.add(SEVEN);
            g.nodes.add(EIGHT);
            g.nodes.add(NINE);

            System.out.println("Now calling the topologial sorts");
            topologicalSort(g);
        }
    }

Graph class:

    class Graph {
        public int vertices;
        LinkedList<Node> nodes = new LinkedList<Node>();
    }

Node class:

    class Node {
        public String data;
        public int dist;
        public int inDegree;
        LinkedList<Node> AdjacenctNode = new LinkedList<Node>();

        public void addAdjNode(final Node Child) {
            AdjacenctNode.add(Child);
            Child.inDegree++;
        }

        public Node(String data) {
            super();
            this.data = data;
        }
    }
Topological sort in Java
java;algorithm;graph
    class Graph {
        public int vertices;
        LinkedList<Node> nodes = new LinkedList<Node>();
    }

Graph

Graph, as other data structures in general, should be declared public, because you want them to be able to be used outside of the package they are declared in.

nodes and vertices should be private so that you can know that they are not changed outside the Graph class.

nodes should not be a LinkedList. You must depend on abstractions as much as possible. You should prefer interfaces such as List over specific implementations like LinkedList.

Nodes of a graph is not a List, it is a Set. A standard graph cannot have multiple copies of a node. You should prefer a Set to represent a set unless you have a good reason.

Node

All of the above points also apply to Node. Apart from those:

AdjacenctNode should be named adjacentNode by Java naming convention.

Feel free to remove the parameterless call to super(); although Eclipse adds it by default, it's just noise.

TopologicalSortGraph

Remove unused code:

    TopologicalSortGraph Graph = new TopologicalSortGraph();

Always remove commented code. If you need to see previous versions of a code, use a version control system:

    //public LinkedList<Node> nodes = new LinkedList<Node>();

Do not put more than one space between tokens; use the autoformat of your IDE to fix formatting after changing a piece of code:

    public static void topologicalSort(Graph graph) {

The snippets

    if(child.inDegree==0){
        q.add(child);
        ++vertexProcessesCtr;
        System.out.println(child.data);
    }

and

    if(m.inDegree==0){
        ++vertexProcessesCtr;
        q.add(m);
        System.out.println(m.data);
    }

are duplicates. They should be extracted to a private method. You are missing some kind of abstraction there.

You are changing the internals of an object passed in as a parameter:

    --child.inDegree;

I do not expect my graph to change after I ask to see its nodes printed in topological order. What if I want to print them again?

You are mixing the calculation and the printing out of the calculation result. I do not expect to see printlns in a method implementing an algorithm:

    System.out.println( .... );

What if I want the result to be printed somewhere other than System.out? What if I do not want the result to be printed at all and want it to be used as an intermediate step in a bigger calculation instead? You probably want to return a List<Node> from a topological sort algorithm. List is the standard return type when you are sorting some collection, that is, when the result is a collection whose order is important. You can then print that list as many times as you want or pass it as a parameter to wherever you like.

    public static void main(String[] args) {

You should separate test code from your main code. If your actual class is named TopologicalSortGraph, put your test code in TopologicalSortGraphTest. Use the de facto standard JUnit so that instead of one big main you can have many small tests. You can run any one of them or all of them easily from within your IDE or from the command line. You should try to separate the test code into separate source directories or even into separate projects. Your implementation code should not need or know about your test code to compile.

Another spacing (indentation) problem:

    Node TEN = new Node("10");

Your code should align well, so that it reads neatly top to bottom and scopes in it are easily identified.

TEN should be ten by Java naming convention.

Instead of these two you should use your addAdjNode method:

    SEVEN.AdjacenctNode.add(ELEVEN);
    ELEVEN.inDegree++;

Also, access chains like node.AdjacenctNode.add(otherNode) or x.y.z are usually a sign that your encapsulation is not good enough. In this case you are modifying a collection of one class from another class. It's a problem waiting to happen. The same encapsulation problem is present in ELEVEN.inDegree++, too.

The root problem here is that adjacency is a property of the graph (remember G = (V, E) from school?) and not of the nodes themselves. Instead of node.addAdjNode(otherNode), you should use a method like graph.addEdge(node, otherNode).

The same problem also exists here:

    g.nodes.add(TWO);

You should have a Graph.addNode(node) method instead. Coming back to G = (V, E): it says, if you listen carefully, that you need a set of vertices and a set of edges to have a graph, and you should not add nodes one by one. Ideally, Graph would have a constructor like this:

    public Graph(Set<Node> vertices, Set<Edge> edges)
_unix.64272
I use a standard Danish QWERTY keyboard (on Debian, if the distro matters). Is it at all possible to write German umlauts such as ä, ö, ü by some key combos (that is, without changing the layout to German)?
German umlaut on Danish keyboard
keyboard;keyboard layout
The key between Å and Enter should produce a dead diaeresis, i.e. pressing that key followed by u should produce ü.

I'm not Danish, so I'm basing this on my knowledge of the Finnish/Swedish keyboard and Wikipedia.
_webapps.108010
I want to make SUMIF in Google Spreadsheets look at the previous row for its evaluation. So if my sheet looks like this:

    trigger, value1
    something, value2

I want it to add value2, because the trigger is in the previous row. Can this be done with SUMIF, and if not, is there another way?
Google Spreadsheets SUMIF look to previous row
google spreadsheets;google documents;formulas;worksheet function
This effect is achieved by shifting the criteria range by one row:

    =sumif(A2:A15, "<6", B3:B16)

sums the entries in B where the number up-and-to-the-left is less than 6.
_unix.278010
I'm trying to connect to the local university eduroam wifi where I work with my Debian Jessie (xfce) laptop. The wifi is protected as WPA-EAP : TLS (using key pairs, .cer and .pem).

I tried using wicd, but I permanently get an error of 'bad password', and I'm not sure how to troubleshoot the connection (I can't see how to get the debug messages going through the terminal to work out what isn't connecting).

So I decided to try connecting directly via wpa_supplicant (they supply the config and a script). Here is the output after I attempt to connect using wpa_supplicant:

    $:~/ sudo wpa_supplicant -Dnl80211 -iwlan0 -c/etc/wpa_supplicant.conf
    Successfully initialized wpa_supplicant
    wlan0: Trying to associate with c4:7d:4f:4b:3f:71 (SSID='eduroam' freq=2437 MHz)
    wlan0: Associated with c4:7d:4f:4b:3f:71
    wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
    wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
    wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
    wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root'
    wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/OU=http://www.usertrust.com/CN=UTN-USERFirst-Hardware'
    wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=NL/O=TERENA/CN=TERENA SSL CA'
    wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/OU=Domain Control Validated/CN=radius.u-bordeaux.fr'
    wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
    wlan0: WPA: Key negotiation completed with c4:7d:4f:4b:3f:71 [PTK=CCMP GTK=TKIP]
    wlan0: CTRL-EVENT-CONNECTED - Connection to c4:7d:4f:4b:3f:71 completed [id=0 id_str=]
    wlan0: WPA: Group rekeying completed with c4:7d:4f:4b:3f:71 [GTK=TKIP]
    wlan0: WPA: Group rekeying completed with c4:7d:4f:4b:3f:71 [GTK=TKIP]
    wlan0: WPA: Group rekeying completed with c4:7d:4f:4b:3f:71 [GTK=TKIP]

Here the command seems to authenticate and then connect. However, I am unable to ping any IP, and hence no internet, no email from the laptop (and worse, no connection to my git repo!).

Is anyone able to give me any clues as to what is wrong in my setup, or how to troubleshoot? I'd really like to get this working. All help is hugely appreciated.

David
trying to connect to eduroam with wicd or wpa-supplicant fails
wpa supplicant;wicd
This may sound ridiculous, but it works!

I think in the first instance, with all the 'messing around' I had been doing trying to get the wifi to work, I had ended up with multiple wpa_supplicants running, or an issue with a conflict in wicd.

Anyway, I closed / stopped everything...

    sudo killall wpa_supplicant
    sudo /etc/init.d/wicd stop

and then when I did

    sudo wpa_supplicant -Dnl80211 -iwlan0 -c/etc/wpa_supplicant.conf -B

I got a different response, a simple

    Successfully initialized wpa_supplicant

Then running

    sudo dhclient -d wlan0

returned success... for the first time. Previously it just hung, which I assumed was a fault in wpa_supplicant (although I may be wrong).

    Internet Systems Consortium DHCP Client 4.3.1
    Copyright 2004-2014 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/
    Listening on LPF/wlan0/ac:81:12:70:6f:22
    Sending on   LPF/wlan0/ac:81:12:70:6f:22
    Sending on   Socket/fallback
    DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6
    DHCPREQUEST on wlan0 to 255.255.255.255 port 67
    DHCPOFFER from 123.456.789.123
    DHCPACK from 123.456.789.123
    bound to 987.654.321.321 -- renewal in 1494 seconds.

So I'm now happily making this response from my now connected laptop. Cool.

Now all I need to do is to enable the same connection via wicd, and I'll be super happy.

David.
_unix.372846
I have been converting all of my home videos to HEVC, and sometimes the files end up smaller and sometimes they don't. I am currently comparing all the video files manually and it takes forever. I was wondering if there is a script that can check the 2 folders, delete the larger of the 2 files, and keep the smaller one. After all, I am doing this to save space. I do all my conversion in Ubuntu 17.04 CLI, so a bash script would be preferable, but I am not a scripter.
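A minimal sketch of the comparison logic in Python (hypothetical folder names; it assumes the original and the HEVC re-encode share a base filename and differ only in extension, and it only prints by default):

    import os
    from pathlib import Path

    def prune_larger(dir_a, dir_b, dry_run=True):
        # For every base filename present in both folders, delete the
        # larger of the two files and keep the smaller one.
        a_files = {p.stem: p for p in Path(dir_a).iterdir() if p.is_file()}
        for b in Path(dir_b).iterdir():
            a = a_files.get(b.stem)
            if a is None or not b.is_file():
                continue
            bigger = a if a.stat().st_size > b.stat().st_size else b
            print("would delete" if dry_run else "deleting", bigger)
            if not dry_run:
                os.remove(bigger)

    # Hypothetical paths; run with dry_run=False once the output looks right.
    prune_larger("/videos/original", "/videos/hevc")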
Compare and delete larger video files in 2 directories
ubuntu;command line;diff
null
_codereview.36966
I'm writing an SDK for an NFC device in .NET so I don't have to import the SDK from C++. Right now I'm working on the ISO14443-3 part, which is just simple Halt, Request, and Anticollision commands. The communication part between the device and computer is simple enough, so I'm not going to post any of that. Just know that it is a serial device and that the command I send to it gets built before I write it to the SerialPort.

We have 2 different NFC devices with completely different SDKs. I plan on making them identical, but when I first started I was mainly concerned with only one of them and I was basing all my methods off of the SDK that came with the device. Note that this is not a debate about whether I should use the SDK or not. When I first started I figured that I would only have 1 method with a simple structure. It looked like this:

    private void BuildAndSendCommand(MasterRDCommands command, params byte[] data)

MasterRDCommands is a simple enum. I decided that was a bad idea when I started work on the sound and light commands... it was super hard to read something like

    nfc.BuildAndSendCommand(MasterRDCommands.SetLED, RFIDLED.Blue, 0x01, 0x10);

it's like... HUH???

So I made the BuildAndSendCommands private and made methods to make the code more clear. Now I have the method signature like this:

    public void SetLED(RFIDLED led, byte flashes, byte duration)

That sure makes it much nicer at the top-most level, but for the middle man I'm unsure if I should still use an enum. I feel that it would be much cleaner if I would just put a region at the bottom or top of my code with a few private constants, so that things like the RATS command would switch from

    public byte[] SendRATS_TypeA()
    {
        BuildAndSendCommand(MasterRDCommands.RATS);
        byte[] RATS = GetResponse(10);
        return RATS;
    }

to something like this:

    private const byte RATScmd = 0x1F;

    public byte[] SendRATS_TypeA()
    {
        BuildAndSendCommand(RATScmd);
        byte[] RATS = GetResponse(10);
        return RATS;
    }

It's not much different, but I don't plan on exposing any of the commands from my enum to the user, since most if not all of the commands require a certain order. Whereas the LED example is a good example (to me at least) of when to use an enum: the user has to choose from a very narrow set of LEDs.

In the end, the user still only sees the few methods that I mark as public and would never know if I ever deleted the Commands enum or not. What do you think? Keep them or remove them?
Enum or Constant
c#;enum;constants
I've never been a fan of a global constants file. It's a good idea to keep enums defined close to where they are needed. That makes it a little more apparent how the enum is used. This helps keep the code clean, as you mentioned, and also helps improve maintainability, since there's not a long master list of enums that needs to be mentally processed in order to confidently make a change.

On a side note, I would go ahead and put them in separate files. This will also help with maintainability later, in case a move becomes necessary.

If you can eliminate the enum and just use a private variable, then you've made it even better. The code is more readable, and it doesn't imply that the particular value is shared across larger portions of code.
_vi.3276
I have the following example:

    vim -E http://example.com/

where I'd like to search for the <head> tag. It seems that the search only works when the lines are separated, but when the lines are joined together (%j) then it doesn't work and it says:

    Search hit BOTTOM, continuing at TOP

What I'm doing is simply:

    /<head>

The above search works when lines are separated, but not when everything is on one line (just run %j before doing the search). Any idea why, or how to search properly for a pattern in Ex mode?

I'm expecting that after the search the cursor would be placed on the found pattern (similar to visual mode), so I'm able to perform further changes (for example, removing the inner tag content by: norm vitd), but it doesn't work when the cursor is not placed on the tag itself. In other words, it seems to work only when the tag is at the beginning of the line, but not when it's in the middle.
How to search in Ex mode and place the cursor on the matched pattern?
search;ex mode;filetype html
In vim we can read (:help :range):

    /{pattern}[/] - the next line where {pattern} matches

That means if there is no next line, no match can be found, because there is only one current line (all lines joined together).

So to search starting from the first/current line, we need to use 0;/foo:

    0;/that - the first line containing "that", also matches in the first line.

However, specifying ; means that the cursor isn't moved, so as a workaround you can manually jump to the next found phrase with: norm n.
_softwareengineering.255878
In an answer to a previous question, a small debate started about correct terminology for certain constructs. As I did not find a question (other than this or that, which is not quite the right thing) to address this clearly, I am making this new one.The questionable terms and their relationships are: type, type constructor, type parameter, kinds or sorts, and values.I also checked wikipedia for type theory, but that didn't clarify it much either.So for the sake of having a good reference answer and to check my own understanding:How are these things defined properly?What is the difference between each of these things?How are they related to each other?
Correct terminology in type theory: types, type constructors, kinds/sorts and values
type systems;data types;type theory
Alright, let's go one by one.

Values

Values are the concrete pieces of data that programs evaluate and juggle. Nothing fancy; some examples might be

    1
    true
    "fizz buzz"
    "foo bar"

Types

A nice description of a type is "a classifier for a value". A type is a little bit of information about what that value will be at runtime, indicated at compile time.

For example, if you tell me that e : bool at compile time, then I'll know that e is either true or false at runtime, nothing else! Because types classify values nicely like this, we can use this information to determine some basic properties of your program.

For example, if I ever see you adding e and e' when e : int and e' : String, then I know something is a bit off! In fact I can flag this and throw an error at compile time, saying "Hey, that doesn't make any sense at all!".

A more powerful type system allows for more interesting types which classify more interesting values. For example, let's consider some function

    f = fun x -> x

It's pretty clear that f : Something -> Something, but what should that Something be? In a boring type system, we'd have to specify something arbitrary, like Something = int. In a more flexible type system, we could say

    f : forall a. a -> a

That is to say, for any a, f maps an a to an a. This lets us use f more generally and write more interesting programs. Moreover, the compiler is going to check that f actually satisfies the classifier we've given it; if f = fun x -> true then we have a bug and the compiler will say so!

So as a tl;dr: a type is a compile-time constraint on the values an expression can be at runtime.

Type Constructors

Some types are related. For example, a list of integers is very similar to a list of strings, almost like how sort for integers is almost like sort for strings. We can imagine a sort of factory that builds these almost-the-same types by generalizing over their differences and building them upon demand. That's what a type constructor is. It's kind of like a function from types to types, but a little more limited.

The classic example is a generic list. A type constructor for lists is just the generic definition

    data List a = Cons a (List a) | Nil

Now List is a function which maps a type a to a list of values of that type! In Java-land I think these are perhaps called generic classes.

Type Parameters

A type parameter is just the type passed to a type constructor (or function). Just as at the value level foo(a) has a parameter a, List a has a type parameter a.

Kinds

Kinds are a bit tricky. The basic idea is that certain types are similar. For example, we have all the primitive types in Java - int, char, float... - which all behave as if they have the same "type". Except, when we're speaking of the classifiers for types themselves, we call the classifiers kinds. So int : Prim, String : Boxed, List : Boxed -> Boxed.

This system gives nice concrete rules about what sort of types we can use where, just like how types govern values. It'd clearly be nonsense to say

    List<List>

or

    List<int>

in Java, since List needs to be applied to a concrete type to be used like that! If we look at the kinds: List : Boxed -> Boxed, and since Boxed -> Boxed /= Boxed, the first example above is a kind error!

Most of the time we don't really think about kinds and just treat them as common sense, but with fancier type systems they become important to think about.

A little illustration of what I've been saying so far:

    value   : type : kind  : ...
    true    : bool : Prim  : ...
    new F() : Foo  : Boxed : ...

Better Reading Than Wikipedia

If you're interested in this sort of thing, I'd highly recommend investing in a good textbook. Type theory and PLT in general are pretty vast, and without a coherent base of knowledge you (or at least I) can wander around without getting anywhere for months. Two of my favorite books are

- Types and Programming Languages - Benjamin Pierce
- Practical Foundations for Programming Languages - Robert Harper

Both are excellent books that introduce what I've just talked about and much more in beautiful, well explained detail.
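To make the value/type/kind ladder concrete, here's roughly what it looks like in a GHCi session (a sketch; GHC prints kinds with * - or Type in newer versions - rather than the Prim/Boxed names used above):

    ghci> :type True
    True :: Bool
    ghci> :kind Bool
    Bool :: *
    ghci> :kind Maybe
    Maybe :: * -> *
    ghci> :kind Maybe Bool
    Maybe Bool :: *

Maybe plays the role of the List type constructor here: on its own it has an "arrow" kind, and only after being applied to a type parameter like Bool does it become an ordinary type that can classify values.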
_unix.88040
When I use grep to find some text which I need, it displays the lines containing a match to the given pattern. For example:

    # grep -r ".*Linux" *
    path0/output.txt:I hope you enjoyed working on Linux.
    path1/output1.txt:Welcome to Linux.
    path2/output2.txt:I hope you will have fun with Linux.

Then I want to edit the file path2/output2.txt, so I type vim path2/output2.txt. But I don't think that is an effective way. How can I reuse the path from the grep output?
How to use the grep result in command line?
bash;grep
null
_cstheory.5277
As an extension to the question posed recently by Bulatov, I wonder: what are the maximal subclasses of perfect graphs for which we know of combinatorial algorithms to compute a maximum independent set?
Combinatorial Independent set Algorithms for sub-classes of perfect graphs
ds.algorithms;graph theory;graph algorithms
One springs to mind and is listed as a maximal subclass in ISGCI, which surprised me: perfect claw-free graphs (a.k.a. perfect quasi-line graphs). This was done by Minty for all claw-free graphs around 1980. But a couple of other algorithms for claw-free graphs - including a recent $O(n^3)$ one by Faenza, Oriolo, and Stauffer from SODA 2011 - use the Chudnovsky-Seymour structural characterization of these graphs to reduce the problem to line graphs (and therefore maximum matching) in a fairly straightforward way. If you're only looking at perfect claw-free graphs, then the earlier characterization by Maffray and Reed is sufficient (and the reduction to line graphs is more obvious).
_unix.106720
The file is apache:apache 660 and username is jdoe and he belongs to groups jdoe and apache. He SFTPs into the server with WinSCP but when he goes to perform an edit which includes a set modtime command he gets sent status permission denied in the syslog. The file is on an NFS mount (v4) which is mounted rw on this server. How do I deal with this issue in SFTP, RHEL 6?
SFTP Modification time Permission denied when touching/updating files
linux;permissions;sftp
null
_softwareengineering.219532
We've all faced this: you apply to a cool project and they ask you to send them a piece of your code. On the surface, this looks OK and I am fine with it. But what shall I send them? My cool utility? A snapshot of the structure of a complex project? Something else?

So far, I have tried to persuade them to do a desktop share via Skype where I lead them through the code and structure. But somehow clients do not like this approach. So I'd like to hear your thoughts.
What to send when a client wants a sample of my code to test my qualities?
code sample
Do you keep track of the stuff you've built for other clients?

If you are a freelancer (it seems like you are) you should keep links to past projects you've done for other clients. I know that as a client I would click on the other projects to see exactly what you're capable of. Client testimonials would go well with this too.

Of course you don't want to define yourself by just your past projects, so you could also keep links within your website to demo projects you've worked on (normally your demos should highlight how 'deep' your knowledge level goes, as chances are you haven't built that many complex projects for clients). Snapshots could work here too, if you don't have links to actual demo projects.

Lastly, for displaying code, you can just link your GitHub account from your website. You don't want anyone having access to proprietary code you've written for another client, and GitHub (or even Bitbucket) is a great place to display the code you do want displayed (remember though, interest in seeing your code will depend on the client, as most won't care about that unless they are technical firms themselves).

PS. Just note that in situations like these, you are marketing yourself (especially if you are a freelancer). It is important that you look for strategies to improve your brand (which is basically YOU).
_unix.131348
I want to connect an external monitor to my laptop but I can't manage it properly. My setup is: Arch Linux x64 (Xfce) on a Dell L702X with Bumblebee, and the monitor on an HDMI -> DVI adapter.

I want this to work just as a regular dual display, with a common mouse pointer and the ability to move windows between both screens. Since the HDMI port in my laptop is connected to the Nvidia card, I've followed this help file: https://github.com/Bumblebee-Project/Bumblebee/wiki/Multi-monitor-setup but to no avail. I've found a way to get something on the second screen (so it's definitely working): I simply need to do echo DISPLAY=:8.0 (that's the default virtual port) and from then on everything will be started on the external screen - BUT not the X server, which always starts on my laptop's main screen despite any configuration changes.

I can share the cursor thanks to Synergy (that works fine) but I can't resize anything on the external screen, nor move windows; Alt+Tab also doesn't work. All the answers I've found are about starting another X server on the external display, so how can that be done? Unfortunately DISPLAY=:8.0 startx (or primusrun startx or optirun) just ignores the display; it starts on my laptop screen.

I've tested a lot of xorg.conf options (all of them are unfortunately ignored). One big difference I found is that xrandr always shows only one display, i.e.:

    $ DISPLAY=:0.0 xrandr
    Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767
    LVDS1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 382mm x 215mm
       1920x1080      60.01*+  40.01
       1400x1050      59.98
       1280x1024      60.02
       1280x960       60.00
       1024x768       60.00
       800x600        60.32    56.25
       640x480        59.94
    VGA1 disconnected (normal left inverted right x axis y axis)
    HDMI1 disconnected (normal left inverted right x axis y axis)
    DP1 disconnected (normal left inverted right x axis y axis)
    VIRTUAL1 disconnected (normal left inverted right x axis y axis)

and

    $ DISPLAY=:8.0 xrandr
    Screen 0: minimum 8 x 8, current 1920 x 1200, maximum 16384 x 16384
    HDMI-0 connected primary 1920x1200+0+0 (normal left inverted right x axis y axis) 518mm x 324mm
       1920x1200      59.95*+
       1920x1080      60.00
       1680x1050      59.95
       1600x1200      60.00
       1280x1024      60.02
       1280x960       60.00
       1024x768       60.00
       800x600        60.32
       640x480        59.94

So basically: how can I start an X server on the external display? Ideally this would work on both screens like any regular setup, but even that would be better than the current state.
How to start x server on another display?
arch linux;x11;xrandr;dual monitor;bumblebee
null
_webmaster.49585
A client of mine scrapped his entire WordPress site of many years for a custom CMS. The URL structures of both use the /<post_title> format, so a pattern-based .htaccess rule is out of the picture for saving the old URLs. The old site is effectively dead, but I'd hate to let all of those thousands of backlinks simply die a 404 death.

Is there a faster way than listing each URL in an .htaccess directive to redirect 1000s of URLs to the home page? (Faster as in load speed, not programming efficiency.)
Fastest way to redirect dead URLs to home page
apache;wordpress;redirects;301 redirect
In your custom 404 page you could check the structure of the URL and 301-redirect if it looks good - or preferably look it up against a known list of previous WordPress URLs. At least this way you are only doing the lookup/redirect when the page doesn't exist.

If you are doing 1000s of redirects with Apache, it will be more efficient/faster to do this in your server config (vhost) files. Ref: Move .htaccess content into vhost, for performance

EDIT: I expect you've considered this already, but the thought was beginning to nag... do these missing pages have equivalent new URLs? Although a lot more work, it will obviously be many(*N) times better (for SEO and users) to redirect to the new URL if possible. (Where N is a sufficiently large arbitrary number.)
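As a sketch of the vhost approach, Apache's RewriteMap can keep the list of old URLs out of the rule set itself (the file path and URLs below are made up; note RewriteMap only works in server/vhost context, not in .htaccess):

    RewriteEngine On
    RewriteMap oldlinks txt:/etc/apache2/old-urls.map

    # Redirect only when the requested path appears in the map
    RewriteCond ${oldlinks:$1|NOT_FOUND} !NOT_FOUND
    RewriteRule ^/(.+)$ ${oldlinks:$1} [R=301,L]

where /etc/apache2/old-urls.map holds one old-path new-URL pair per line:

    some-old-post    http://example.com/
    another-post     http://example.com/new-location/

Apache reads the text map efficiently, so this scales to thousands of entries far better than thousands of individual Redirect lines would.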
_cstheory.33640
It is known that for general arithmetic circuits there is not much of a difference between standard model and one with division: any circuit with divisions that computes a polynomial can be simulated by a circuit without divisions with only polynomial blow-up.However, in the non-commutative world, such a reduction is unknown. In fact, although exponential lower bounds are known for noncommutative formulas without division, proving lower bounds for noncommutative formulas with division is a big open question.Are we aware of lower bounds for noncommutative circuits with division, but where every gate computes a polynomial (and not just a rational function)?
Lower bounds for noncommutative arithmetic circuits with exact division?
lower bounds;arithmetic circuits
null
_unix.195443
Good day! Can anyone help me convert this .htaccess to nginx?

    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteRule ^data/([0-9]+)/(.*).html$ index.php?dir=$1 [QSA,L]
    RewriteRule ^data/file/([0-9]+)/(.*).html$ file.php?id=$1 [QSA,L]
    </IfModule>

I've tried to convert it with http://winginx.com/en/htaccess and the result is:

    location /data {
        rewrite ^/data/([0-9]+)/(.*).html$ /index.php?dir=$1 break;
        rewrite ^/data/file/([0-9]+)/(.*).html$ /file.php?id=$1 break;
    }

but I still get "not found" with the converted rewrite rules. Thanks in advance for your help. FYI, I use the Master Auto Index script.
Nginx rewrite rules
nginx;rewrite
null
_unix.61305
I accidentally left an open port on my router, leaving access to one of my computers via SSH with a rather insecure password. I noticed because I checked /var/log/auth.log and there is just one entry from the outside. There's no bash history nor anything else easily noticeable. What should I check to make sure my computer is not compromised?
What should I check after an unauthorized access?
ssh;security
null
_unix.6435
E.g. check if $PWD is a subdirectory of /home. In other words, I'm searching for a bash string operation to check whether one string starts with another.
How to check if $PWD is a subdirectory of a given path
bash;shell;directory;string
If you want to reliably test whether a directory is a subdirectory of another, you'll need more than just a string prefix check. Gilles' answer describes in detail how to do this test properly.

But if you do want a simple string prefix check (maybe you've already normalized your paths?), this is a good one:

    test "${PWD##/home/}" != "$PWD"

If $PWD starts with /home/, it gets stripped off on the left side, which means it won't match the right side, so != returns true.
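Wrapped up as a small helper it might look like this (a sketch; quoting the expansions keeps paths with spaces safe, and quoting $2 inside ## makes the prefix literal):

    #!/bin/bash
    # Succeeds when $1 starts with the prefix $2
    starts_with() {
        test "${1##"$2"}" != "$1"
    }

    if starts_with "$PWD" /home/; then
        echo "we are somewhere under /home"
    fi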
_webmaster.78719
This strikes my curiosity. I have two versions of my website: a desktop version and a mobile version. The objective of both is to allow users to view whatever full-size photos they'd like to see.

On the desktop version of the site, I fill each page with a lot of helpful text detailing the extra options available for a photo, such as ordering it. On the mobile site, I keep it very basic: maybe a sentence describing the photo, the photo itself, and that's it.

People say content is king, and I think the minimum number of words per page should be 250 to avoid a thin-content penalty. So far I have not received such a penalty on the mobile site from Google Webmaster Tools, though I never submitted a sitemap for the mobile site - I only submitted one for the desktop site.

The reason I keep the mobile version mostly stripped down is that I want users to spend less money on bandwidth, and paragraphs can eat bandwidth, especially if they're decorated.

So with that set aside: is there a different minimum word count for mobile sites, compared to desktop sites, in order to evade Google's thin-content penalty? The average age of users visiting my site ranges from about 18 to 24, and younger people cannot afford a lot of money.
Content length VS document size for mobile
website design;mobile;bandwidth;desktop;thin content
null
_softwareengineering.230873
How does the pattern of using command handlers to deal with persistence fit into a purely functional language, where we want to make IO-related code as thin as possible?

When implementing Domain-Driven Design in an object-oriented language, it's common to use the Command/Handler pattern to execute state changes. In this design, command handlers sit on top of your domain objects, and are responsible for the boring persistence-related logic like using repositories and publishing domain events. The handlers are the public face of your domain model; application code like the UI calls the handlers when it needs to change domain objects' state.

A sketch in C#:

    public class DiscardDraftDocumentCommandHandler : CommandHandler<DiscardDraftDocument>
    {
        IDraftDocumentRepository _repo;
        IEventPublisher _publisher;

        public DiscardDraftDocumentCommandHandler(IDraftDocumentRepository repo, IEventPublisher publisher)
        {
            _repo = repo;
            _publisher = publisher;
        }

        public override void Handle(DiscardDraftDocument command)
        {
            var document = _repo.Get(command.DocumentId);
            document.Discard(command.UserId);
            _publisher.Publish(document.NewEvents);
        }
    }

The document domain object is responsible for implementing the business rules (like "the user should have permission to discard the document" or "you can't discard a document that's already been discarded") and for generating the domain events we need to publish (document.NewEvents would be an IEnumerable<Event> and would probably contain a DocumentDiscarded event).

This is a nice design - it's easy to extend (you can add new use cases without changing your domain model, by adding new command handlers) and is agnostic as to how objects are persisted (you can easily swap out an NHibernate repository for a Mongo repository, or swap a RabbitMQ publisher for an EventStore publisher), which makes it easy to test using fakes and mocks. It also obeys model/view separation - the command handler has no idea whether it's being used by a batch job, a GUI, or a REST API.

In a purely functional language like Haskell, you might model the command handler roughly like this:

    newtype CommandHandler = CommandHandler {handleCommand :: Command -> IO Result}

    data Result a = Success a | Failure Reason
    type Reason = String

    discardDraftDocumentCommandHandler = CommandHandler handle
        where handle (DiscardDraftDocument documentID userID) = do
                  document <- loadDocument documentID
                  let result = discard document userID :: Result [Event]
                  case result of
                      -- in an event-sourced model, there's no extra step to save the document
                      Success events -> publishEvents events >> return result
                      Failure _ -> return result
              handle _ = return $ Failure "I expected a DiscardDraftDocument command"

Here's the part I'm struggling to understand. Typically, there'll be some sort of 'presentation' code which calls into the command handler, like a GUI or a REST API. So now we have two layers in our program which need to do IO - the command handler and the view - which is a big no-no in Haskell.

As far as I can make out, there are two opposing forces here: one is model/view separation and the other is the need to persist the model. There needs to be IO code to persist the model somewhere, but model/view separation says that we can't put it in the presentation layer with all the other IO code.

Of course, in a normal language, IO can (and does) happen anywhere.
Good design dictates that the different types of IO be kept separate, but the compiler doesn't enforce it.

So: how do we reconcile model/view separation with the desire to push IO code to the very edge of the program, when the model needs to be persisted? How do we keep the two different types of IO separate, but still away from all the pure code?

Update: The bounty expires in less than 24 hours. I don't feel that either of the current answers has addressed my question at all. @Ptharien's Flame's comment about acid-state seems promising, but it's not an answer and it's lacking in detail. I'd hate for these points to go to waste!
How does persistence fit into a purely functional language?
c#;architecture;functional programming;domain driven design;haskell
The general way to separate components in Haskell is through monad transformer stacks. I explain this in more detail below.

Imagine we're building a system that has several large-scale components:

- a component that talks with the disk or database (submodel)
- a component that does transformations on our domain (model)
- a component that interacts with the user (view)
- a component that describes the connection between view, model, and submodel (controller)
- a component that kickstarts the whole system (driver)

We decide that we need to keep these components loosely coupled in order to maintain good code style. Therefore we code each of our components polymorphically, using the various MTL classes to guide us:

- Every function in the submodel is of type MonadState DataState m => Foo -> Bar -> ... -> m Baz, where DataState is a pure representation of a snapshot of the state of our database or storage.
- Every function in the model is pure.
- Every function in the view is of type MonadState UIState m => Foo -> Bar -> ... -> m Baz, where UIState is a pure representation of a snapshot of the state of our user interface.
- Every function in the controller is of type MonadState (DataState, UIState) m => Foo -> Bar -> ... -> m Baz. Notice that the controller has access to both the state of the view and the state of the submodel.
- The driver has only one definition, main :: IO (), which does the near-trivial work of combining the other components into one system. The view and submodel will need to be lifted into the same state type as the controller using zoom or a similar combinator. The model is pure, and so can be used without restriction.

In the end, everything lives in (a type compatible with) StateT (DataState, UIState) IO, which is then run with the actual contents of the database or storage to produce IO.
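A minimal sketch of the shape this suggests (all names here are invented; real code would use zoom from the lens library instead of the hand-rolled lifting below):

    import Control.Monad.State

    data DataState = DataState { documents :: [String] } -- pure snapshot of storage
    data UIState   = UIState   { message   :: String }   -- pure snapshot of the UI

    -- submodel: only knows about DataState
    saveDocument :: MonadState DataState m => String -> m ()
    saveDocument doc = modify (\s -> s { documents = doc : documents s })

    -- controller: sees both states at once
    discardAndReport :: MonadState (DataState, UIState) m => String -> m ()
    discardAndReport doc = modify $ \(d, u) ->
        ( d { documents = filter (/= doc) (documents d) }
        , u { message = "discarded " ++ doc } )

    -- driver: everything bottoms out in StateT (DataState, UIState) IO
    main :: IO ()
    main = do
        let start = (DataState [], UIState "")
        (_, (_, ui)) <- runStateT (saveBoth "draft" >> discardAndReport "draft") start
        putStrLn (message ui) -- the only place IO actually happens
      where
        -- lift the submodel action into the combined state by hand
        saveBoth doc = modify $ \(d, u) -> (execState (saveDocument doc) d, u)

Persistence then becomes part of the driver's job: load DataState from disk before runStateT, and write the final DataState back afterwards, leaving every other component pure.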
_codereview.38249
I am trying to print all the strings given in an N x N Boggle board. It's basically an N x N scrabble-like board where words can be made from a position - horizontally, vertically and diagonally.

Here is my naive implementation. Is this correct? Any hints on optimizations? Any other useful links?

    public class BoggleSolver {
        private void solve(String prefix, int i, int j, char[][] Board, boolean[][] marker, int N) {
            if ((i < 0) || (j < 0) || (i >= N) || (j >= N))
                return;
            if (marker[i][j] == true)
                return;

            String s = prefix + Character.toString(Board[i][j]);

            // TODO Fictional dictionary that can tell us if
            // a string is a legal word
            if (dict.HasWord(s))
                System.out.println(s);

            // Mark current index and traverse horizontal, vertical
            // and diagonal
            marker[i][j] = true;
            solve(s, i, j + 1, Board, marker, N);
            solve(s, i + 1, j, Board, marker, N);
            solve(s, i + 1, j + 1, Board, marker, N);
        }

        public void solve(char[][] Board, int N) {
            boolean[][] marker = new boolean[N][N];
            solve("", 0, 0, Board, marker, N);
        }

        public static void main(String[] args) {
            // TODO make board and call solve(board, N)
        }
    }
Printing all strings in a Boggle board
java;optimization;algorithm;strings;recursion
I wouldn't consider that a Boggle solver, as you only consider words that start at [0, 0] and progress east, south, or southeast. (With coordinates that are always nondecreasing, you need not have bothered using marker to prevent backtracking.)

A complete search will probably take an extremely long time. You should use a dictionary that can check whether there are any words that begin with some prefix, so that you can prune fruitless search paths. Use a trie data structure, or possibly a TreeSet<String>.

Board, N, and HasWord() all have improper capitalization. It doesn't make sense to require N to be passed in explicitly, since it should be detectable from the board's dimensions. Appending a character to a string can be written simply as prefix + board[i][j].

I would recommend redesigning the class with a more versatile interface, because printing the results to System.out limits reusability.

    public interface BoggleSolutionHandler {
        public void foundWord(String word);
    }

    public class BoggleSolver {
        public static void solve(char[][] board,
                                 BoggleSolutionHandler callback,
                                 NavigableSet<String> dictionary) {
            
        }
    }

An iterator would be even nicer to use, but probably harder to implement since you can't take advantage of recursion:

    public class BoggleSolver implements Iterator<String> {
        public BoggleSolver(char[][] board, NavigableSet<String> dictionary) {
            
        }

        @Override
        public boolean hasNext() {
            
        }

        @Override
        public String next() {
            
        }

        @Override
        public void remove() {
            throw new UnsupportedOperationException();
        }
    }
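For the prefix pruning with a TreeSet, a sketch of the test (this works because ceiling returns the smallest element that is >= the prefix; the names here are illustrative):

    import java.util.NavigableSet;
    import java.util.TreeSet;

    class PrefixCheck {
        // True when some dictionary word starts with the given prefix
        static boolean hasWordWithPrefix(NavigableSet<String> dictionary, String prefix) {
            String candidate = dictionary.ceiling(prefix);
            return candidate != null && candidate.startsWith(prefix);
        }

        public static void main(String[] args) {
            NavigableSet<String> dict = new TreeSet<>();
            dict.add("board");
            dict.add("boggle");
            System.out.println(hasWordWithPrefix(dict, "bo")); // true
            System.out.println(hasWordWithPrefix(dict, "zz")); // false
        }
    }

Calling this before each recursive step lets the search abandon a path as soon as no dictionary word can possibly be formed from it.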
_codereview.21249
My conditional code here seems repetitive and long. Is there a better approach? I want to test for a string value in an NSDictionary object and then, depending upon the value, prefix a UILabel with a $, €, or £ currency symbol. I've just shown 2 examples below; I have more currencies and the code is very long.

    if ([[item objectForKey:@"currency"] isEqualToString:@"EUR"]) {
        NSString *priceConvertToStr = [NSString stringWithFormat:@"€%@", [[item objectForKey:@"price"] stringValue]];
        NSString *priceStringFix = [priceConvertToStr stringByReplacingOccurrencesOfString:@"(null)" withString:@""];
        priceLabelText.text = priceStringFix;
        [imgView2 addSubview:priceLabelText];
    }

    if ([[item objectForKey:@"currency"] isEqualToString:@"GBP"]) {
        NSString *priceConvertToStr = [NSString stringWithFormat:@"£%@", [[item objectForKey:@"price"] stringValue]];
        NSString *priceStringFix = [priceConvertToStr stringByReplacingOccurrencesOfString:@"(null)" withString:@""];
        priceLabelText.text = priceStringFix;
        [imgView2 addSubview:priceLabelText];
    }

    if ([[item objectForKey:@"currency"] isEqualToString:@"USD"]) {
        NSString *priceConvertToStr = [NSString stringWithFormat:@"$%@", [[item objectForKey:@"price"] stringValue]];
        NSString *priceStringFix = [priceConvertToStr stringByReplacingOccurrencesOfString:@"(null)" withString:@""];
        priceLabelText.text = priceStringFix;
        [imgView2 addSubview:priceLabelText];
    }
Applying a currency symbol based on a tested string value
strings;objective c;ios;dictionary
I'd create an NSDictionary holding those prefixes and wrap the whole thing in its own method, like so:

    - (NSString *)prefixForCurrency:(NSString *)currency
    {
        NSDictionary *currencyPrefixes = @{@"EUR": @"€",
                                           @"USD": @"$",
                                           @"GBP": @"£",
                                           @"NOK": @"kr."};
        NSString *returnString = [currencyPrefixes objectForKey:currency];
        return returnString;
    }

Then, instead of your current mass of if-statements you'd have something like the following:

    NSString *currency = [item objectForKey:@"currency"];
    NSString *currencyPrefix = [self prefixForCurrency:currency];
    NSString *price = [item objectForKey:@"price"];
    NSString *priceString = [NSString stringWithFormat:@"%@ %@", currencyPrefix, price];

In case you were wondering: the main reason I wrap the prefix dictionary in its own method is in case you'd, for instance, prefer to fetch this list from a file later on. Then you can just alter the innards of that one method...
_datascience.8018
I have data for each vehicle's lateral position over time and lane number, as shown in these 3 plots in the image and the sample data below:

    > a
      Frame.ID   xcoord Lane
    1      452 27.39400    3
    2      453 27.38331    3
    3      454 27.42999    3
    4      455 27.46512    3
    5      456 27.49066    3

The lateral position varies over time because a human driver does not have perfect control over the vehicle's position. The lane change maneuver starts when the lateral position changes drastically, and ends when the variation becomes 'normal' again. This cannot be identified from the data directly; I have to manually look at each vehicle's plot to determine the start and end points of the lane change maneuver in order to estimate its duration. But I have thousands of vehicles in the data set. Could you please direct me to any relevant image analysis / machine learning algorithm that could be trained to identify these points? I work in R. Thanks in advance.
Are there any machine learning techniques to identify points on plots/ images?
machine learning;r
null
_unix.37883
If I want to run the application MonoDevelop, I need to chdir to /usr/lib/monodevelop/bin and then execute ./MonoDevelop.exe. This is the same for all other Mono applications such as banshee, tomboy, etc. If I attempt to run the Mono applications from another location by simply running monodevelop, or even from their own directory, I get TypeInitializationExceptions like this:

    behrooz@behrooz:/usr/lib/monodevelop/bin$ monodevelop
    FATAL ERROR [2012-05-04 11:24:39Z]: MonoDevelop failed to start. Some of the assemblies required to run MonoDevelop (for example gtk-sharp, gnome-sharp or gtkhtml-sharp) may not be properly installed in the GAC.
    System.TypeInitializationException: An exception was thrown by the type initializer for Gtk.Application ---> System.EntryPointNotFoundException: glibsharp_g_thread_supported
      at (wrapper managed-to-native) GLib.Thread:glibsharp_g_thread_supported ()
      at GLib.Thread.get_Supported () [0x00000] in :0
      at Gtk.Application..cctor () [0x00000] in :0
      --- End of inner exception stack trace ---
      at MonoDevelop.Ide.IdeStartup.Run (MonoDevelop.Ide.MonoDevelopOptions options) [0x0007e] in /home/behrooz/Desktop/Monodevelop/monodevelop-2.8.6.5/src/core/MonoDevelop.Ide/MonoDevelop.Ide/IdeStartup.cs:95
      at MonoDevelop.Ide.IdeStartup.Main (System.String[] args) [0x0004f] in /home/behrooz/Desktop/Monodevelop/monodevelop-2.8.6.5/src/core/MonoDevelop.Ide/MonoDevelop.Ide/IdeStartup.cs:503

Why is that? I have tried reinstalling all Mono, Wine, GTK, Glib, X, and Gnome packages:

    apt-get --purge --reinstall install $(dpkg --get-selections | grep mono | grep install | grep -v deinstall | awk '{print $1}')

I also tried strace on open and got nothing by myself.

System configuration:
- Debian 6.0-updates 64 bit
- Kernel 3.2.0-2, 3.2.0-1, 3.1 and 3 (EDIT: not a kernel thing)
- Gnome 3.4 (EDIT: but a gnome thing)
- Mono 2.10.5 (TLS: __thread; SIGSEGV: altstack; Notifications: epoll; Architecture: amd64; Disabled: none; Misc: softdebug; LLVM: supported, not enabled; GC: Included Boehm (with typed GC and Parallel Mark))

Update: after upgrading to the new MonoDevelop 3.0.2 and the latest Mono, I can run MonoDevelop with the command monodevelop in a terminal - no chdir. But gnome-shell cannot run it.

Finally found it. As root:

    cd /usr/local/
    find | grep mono | xargs rm -rf  # Use with caution

Some applications may get messed up (stellarium has MONOchrome images...).
Why do Mono applications only start from their own directory?
linux;path;gnome shell;mono
It looks like you've built and installed MonoDevelop from source - did you do the same for dependencies like gtk-sharp? Since banshee and tomboy are broken too, it sounds like you have a dependency shared between the broken programs, and that's an obvious candidate. Do CLI Mono apps work?

From the MonoDevelop build documentation:

    We strongly recommend you install everything from packages if possible. If not, you should use a Parallel Mono Environment. Do not install anything to /usr or /usr/local unless you completely understand the implications of doing so.

If the other Mono applications will only run from the installed MonoDevelop tree, and reinstalling packages hasn't helped, you might have a mess of extra stuff floating around that the source install has added, which is interfering with Mono finding its libraries - possibly with hardcoded paths into the MonoDevelop install.

My Debian-fu is not strong, but there should be a way of identifying files in /usr that dpkg doesn't know about; that might be a place to start.
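One way to hunt for such stray files (a sketch; dpkg -S looks up which package owns a path and fails when nothing does, and a clean Debian system has essentially nothing in /usr/local):

    # List Mono-related files that no Debian package claims
    find /usr/local /usr/lib -iname '*mono*' -type f 2>/dev/null | while read -r f; do
        dpkg -S "$f" >/dev/null 2>&1 || echo "unowned: $f"
    done

Anything reported as unowned under /usr/lib is a likely leftover from the source install - which matches the cleanup under /usr/local that the questioner eventually found.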
_codereview.129518
I have written a program in x86 assembly (Intel syntax/MASM) that interprets brainfuck code that is fed to it via an interactive console and prints the final stack to stdout. Note that it does not include an implementation for the , command, but everything else has been implemented. The idea is that it presents the user with a prompt where they can enter their code; when the user hits Enter, it evaluates it and then dumps the resultant cell state to the console. While the cell state is maintained between prompt entries, the pointer position is not.

It looks like this:

    $++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
    Hello World!
    0 0 72 100 87 33 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    --------------------------------------------------
    $>>[.>]
    HdW!
    0 0 72 100 87 33 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    --------------------------------------------------
    $++++++++[>++++++++<-]>+.
    A
    0 65 72 100 87 33 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    --------------------------------------------------

This is my first major foray into assembly, so I'm mainly looking for general tips, such as:

- Which registers I should be using in particular cases
- When I should be using RAM and when I should be using registers
- Ways that the code could be simplified

    .386
    .model flat,stdcall
    .stack 4096

    include \masm32\include\masm32.inc
    includelib \masm32\lib\masm32.lib

    ExitProcess proto,dwExitCode:dword

    .data
    bfsrc BYTE 200 dup(0)   ; buffer to store source code
    bfcells BYTE 100 dup(0) ; 100-byte data array size for now
    loopStack DD 5 dup(0)   ; stores the position of the first instruction in the current loop. Maximum of 5 nested loops.
    charBuf BYTE 5 dup(0)   ; buffer for when we are dumping numbers
    newline BYTE 10,0       ; ASCII 10 is \n
    prompt BYTE "$",0       ; input prompt string
    hr BYTE 50 dup('-'),0   ; fake horizontal rule
    space BYTE ' ',0

    .code
    EvalBf proc

    start:
        ; print the prompt and then read input into the source array
        invoke StdOut, addr prompt
        invoke StdIn, addr bfsrc,200

        ; exit if input is empty
        cmp bfsrc,0
        je exit

        mov eax,0 ; eax is BF data pointer
        mov ebx,0 ; ebx is BF source pointer
        mov ecx,0 ; ecx is loop depth

    processInstruction:
        ; jump according to current source char
        cmp BYTE PTR bfsrc[ebx], '+'
        je plus
        cmp BYTE PTR bfsrc[ebx], '-'
        je minus
        cmp BYTE PTR bfsrc[ebx], '>'
        je fwd
        cmp BYTE PTR bfsrc[ebx], '<'
        je back
        cmp BYTE PTR bfsrc[ebx], '['
        je open
        cmp BYTE PTR bfsrc[ebx], ']'
        je close
        cmp BYTE PTR bfsrc[ebx], '.'
        je dot

        ; By default, skip instruction if we haven't caught it
        jmp processNextInstruction

    plus:
        inc BYTE PTR bfcells[eax]
        jmp processNextInstruction

    minus:
        dec BYTE PTR bfcells[eax]
        jmp processNextInstruction

    fwd:
        inc eax
        jmp processNextInstruction

    back:
        dec eax
        jmp processNextInstruction

    open:
        ; push the current source position
        ; onto the loop stack
        mov loopStack[ecx*4],ebx
        inc ecx
        jmp processNextInstruction

    close:
        dec ecx
        cmp BYTE PTR bfcells[eax], 0
        ; break out of loop if data cell is 0
        je processNextInstruction
        ; pop the innermost loop position and
        ; set it as the next instruction
        mov ebx,loopStack[ecx*4]
        inc ecx
        jmp processNextInstruction

    dot:
        ; transfer current cell value into char buffer through dl
        mov dl, BYTE PTR bfcells[eax]
        mov BYTE PTR charBuf[0], dl
        ; follow the character with null to terminate the string
        mov BYTE PTR charBuf[1],0
        ; save the registers we need to maintain so that stdout doesn't break anything
        push eax
        push ecx
        ; print generated string
        invoke StdOut, addr charBuf
        pop ecx
        pop eax
        jmp processNextInstruction

    processNextInstruction:
        inc ebx
        ; we're finished if we have hit the end of the input
        cmp BYTE PTR bfsrc[ebx], 0
        je done
        jmp processInstruction

    done:
        ; loop through every value in the BF data array and print it
        invoke StdOut, addr newline
        mov eax, 0

    printNext:
        ; the data array is 100 cells long, so stop looping when we hit cell 100
        cmp eax, 100
        jge reset
        ; save value in eax onto the stack
        push eax
        ; convert cell value to string and store it in the character buffer
        invoke dwtoa, BYTE PTR bfcells[eax], addr charBuf
        ; print the buffer, followed by a space
        invoke StdOut, addr charBuf
        invoke StdOut, addr space
        ; restore and increment value of eax
        pop eax
        inc eax
        jmp printNext

        ; when processing is complete, go back to the beginning and take new input
    reset:
        invoke StdOut, addr newline
        invoke StdOut, addr hr
        invoke StdOut, addr newline
        jmp start

    exit:
        invoke ExitProcess,0

    EvalBf endp
    end EvalBf
Brainf*ck interpreter written in x86 assembly
assembly;brainfuck
    mov eax,0 ; eax is BF data pointer
    mov ebx,0 ; ebx is BF source pointer
    mov ecx,0 ; ecx is loop depth

If you used the EDI register for the BF data pointer and the ESI register for the BF source pointer, not only would this be a more natural choice, you could also dismiss the push eax and pop eax around invoke StdOut, addr charBuf. By further using EBX for the loop depth you can also eliminate the need for push ecx and pop ecx around the same invoke StdOut, addr charBuf.

Do note that clearing a register is better done through an xor instruction:

    xor edi, edi ; EDI is BF data pointer
    xor esi, esi ; ESI is BF source pointer
    xor ebx, ebx ; EBX is loop depth

Next:

    cmp BYTE PTR bfsrc[ebx], 0
    je done
    jmp processInstruction
    done:

Here's a clear opportunity to optimize the code. Instead of conditionally jumping to done, you can use the opposite conditional jump and fall through. This saves an instruction:

    cmp BYTE PTR bfsrc[ebx], 0
    jne processInstruction
    done:

Then:

    mov eax, 0
    printNext:
    cmp eax, 100
    jge reset
    push eax
    ...
    pop eax
    inc eax
    jmp printNext

You've used a WHILE-loop here; a REPEAT-loop would have been more optimal. Also, a better way to zero a register is by xor-ing it with itself. Furthermore, by using the EDI register you eliminate the need for the push eax and pop eax:

        xor edi, edi
    printNext:
        ...
        inc edi
        cmp edi, 100
        jl printNext
    reset:

Finally:

    mov dl, BYTE PTR bfcells[eax]
    mov BYTE PTR charBuf[0], dl
    ; follow the character with null to terminate the string
    mov BYTE PTR charBuf[1],0

Use movzx here and shave off an instruction:

    movzx edx, BYTE PTR bfcells[eax]
    mov WORD PTR charBuf[0], dx ; follow the character with null to terminate the string
_unix.220066
My CentOS 7 system starts very slowly, and I do not know the cause. The problem appeared after I upgraded it to a new version. Can you please help me solve it? If more details are needed I will provide them. Thank you in advance.
Centos7 starting very slowly?
centos
null
_unix.248683
I have a Debian box that acts as an NFS server.

Debian 8 server settings:

    $ apt-get install nfs-kernel-server nfs-common

    # /etc/fstab
    /dev/disk/by-uuid/6e7815a5-cd91-450c-8e83-479f732ecd87 /mnt/hdd1-raid10 ext4 defaults 0 1

    # /etc/exports
    /mnt/hdd1-raid10/log/r1 freebsd-client(rw,no_root_squash,subtree_check)

I configured my FreeBSD box to act as an NFS client that pushes logs to a directory mounted from the Debian server.

FreeBSD 10.2 client settings:

    # /etc/rc.conf
    nfs_client_enable="YES"
    pflog_enable="YES"
    pflog_logfile="/mnt/r1/pflog"
    pflog_flags='tcp'

    # /etc/fstab
    debian-server:/mnt/hdd1-raid10/log/r1 /mnt/r1 nfs rw 0 0

    # /etc/newsyslog.conf
    /mnt/r1/pflog 600 * * @T00 B /var/run/pflogd.pid

    $ nfsstat -m
    debian-server:/mnt/hdd1-raid10/log/r1 on /mnt/r1
    nfsv3,tcp,resvport,hard,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=8192,readahead=1,wcommitsize=16777216,timeout=120,retrans=2

I am seeing much higher inbound than outbound traffic on bce3.101, the interface on the FreeBSD box that is used exclusively for NFS between the Debian server and the FreeBSD box:

    bce3.101  in    2.015 MB/s    4.295 MB/s   226.455 GB
              out   0.148 MB/s    0.314 MB/s    17.048 GB

This is very surprising, considering that I have only produced around 10 GB of logs in three days. The interface counters and bandwidth in no way correspond to the actual data stored on disk at /mnt/r1.

Why would the FreeBSD box be receiving so much inbound NFS traffic, even though all it does is write to the Debian server? Is this some kind of misconfiguration on my part? I am using only the default NFS settings.

Here's an example tcpdump -i bce3.101 -vvv on the FreeBSD box: http://pastebin.com/RFNC72SA

Thank you.
NFS client much more inbound traffic than outbound? Why?
linux;debian;freebsd;nfs
null
_webapps.58094
After I created the secondary domain, I found I had mistakenly added a www in front of it, so I deleted the secondary domain. When I tried to recreate it with the same domain name, it said:

    you already have a domain or alias in this name

Should I wait 24 hours for the deletion to take effect? Is it possible to create a secondary domain with the same name in that same Google Apps account?
Secondary Domain in Google Apps Deleted and unable to create it back
google apps;domain
null
_codereview.46060
The purpose of the code below is to update a column based on the field name sent from the .NET code. This allows one piece of code to handle multiple rows when a user is only adding/updating one at a time.

I have been using stored procedures for a while, but normally just plain adds/updates, not with variable field names. All tips appreciated.

    USE DB
    GO
    /****** Object: StoredProcedure [dbo].[spActionUpdateOldestDate] Script Date: 04/02/2014 14:24:09 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[spUpdateViaFieldName]
    -- spActionUpdateOldestDate '1234','date','field'
    -- Add the parameters for the stored procedure here
    @AlphaNumbericalRef nvarchar(50)
    ,@vValue nvarchar(MAX)
    ,@vFieldName varchar(MAX)
    AS
    BEGIN
    -- add selection for courseID etc.. here
    Execute ('UPDATE [TblActionsOldest] SET ' + @vFieldName + ' = ''' + @vValue + ''' WHERE RefID = ''' + @AlphaNumbericalRef + '''')
    END
Update column based on input variable in stored procedure
sql;sql server;stored procedure
null
_unix.2998
I have a WD MyBook World NAS on my home network. I currently use this for Time Machine backups for my Mac but I'd also like to use it as a backup location for my linux box which is running Ubuntu Server. How can I mount this drive on the linux server so that my cron backups can use it as a backup destination?I have configured the WD drive with a static IP address which I assume would be important in the solution.
How do I mount a WD MyBook World network drive in linux?
linux;ubuntu;networking;mount;backup
According to the manual, the MyBook supports both the CIFS/SMB and the NFS protocols.

CIFS/SMB is the protocol natively used by Windows for accessing network drives. You should be able to access the MyBook on a Linux/Unix system by using smbclient or mount.cifs. E.g. to access (mount) the public folder on the MyBook at the local directory /mnt, you would issue (from a root terminal):

    mount.cifs //ip.address.of.mybook/public /mnt -o username=admin,password=admin_passwd_on_mybook

or, equivalently:

    mount -t cifs -o username=admin,password=... //ip.address.of.mybook/public /mnt

where:

- you can substitute public with download (to access the pre-defined download share) or any share name that you have created with the MyBook storage manager;
- username/password can be those of any user that you have created in the MyBook storage manager interface; or just use -o guest instead of -o username=...,password=... to specify guest access.

Access by the NFS protocol is not enabled by default; you first have to enable it in the Advanced tab of the MyBook storage manager, then you can mount the disk shares via NFS with:

    mount -t nfs ip.address.of.mybook:/nfs/public /mnt

Again, public can be any defined share name.
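For unattended cron backups you'll probably want the share mounted at boot; an /etc/fstab entry along these lines should do it (a sketch - the mount point and credentials are placeholders):

    //ip.address.of.mybook/public  /mnt/mybook  cifs  username=admin,password=secret  0  0

If you'd rather not leave the password in the world-readable fstab, mount.cifs also accepts a credentials=/root/.mybook-creds option, pointing at a file with username= and password= lines that is readable only by root.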
_softwareengineering.341755
In our internal architecture, we have several Tomcat servers which distribute the workload and isolate the different processes of our business. We are planning to move to an approach where several tasks (mostly DB related) will be handled by a Tomcat endpoint (load balanced for availability and performance), but we are having trouble determining a best-practice method for internal communication between the different web apps that run on the different Tomcat servers. We've already been using HornetQ, MQTT (Mosquitto based) and HTTP for different aspects, but this planned endpoint should expose an internal API to the other servers.

The main goal of this architecture change is to avoid problems related to having different pieces of software doing similar work, but with different rules and algorithms, thus leading to coding mistakes and non-uniform behavior across the system for a single entity.

What should be considered? What inter-process communication would actually be recommended? Some kind of ESB, perhaps?
IPC design considerations
java;process;tomcat
null
_unix.53110
My LCD monitor is dying. It has a white-screen issue: when powered on, the screen is black (the power LED is flashing); after a while (maybe one hour) it turns to a white screen. Then I need to switch between different resolutions many, many times (hundreds, maybe) to get the LCD monitor to work.

Currently, I'm using VNC to connect to my Fedora 17 desktop from an Android phone, and I switch resolutions from System Settings -> Display. It's difficult to switch resolutions this way. So I thought: if the console had a different resolution to the X Window resolution, then I could simply use the Ctrl+Alt+[F1F2...] hotkeys to switch resolutions.

If I disable Kernel Mode Setting by passing the nomodeset parameter to the kernel, then the console resolution is different from the X Window resolution. However, X then does not get the native/best resolution (1680x1050) of the monitor; it only gets the 1280x1024 resolution which is listed in the VESA modes (vga=ask kernel parameter).

So, before I buy a new monitor or get this one fixed: is it possible to set a different resolution between the console and X when Kernel Mode Setting is enabled?

Edit (2012-11-28): I've sent the LCD monitor to an electrical appliance service shop and finally got it fixed; it was because some capacitors had failed. And now I realize I can also change to a different, lower resolution in the X environment, and then use the Ctrl+Alt+[F1F2...] hotkeys to switch resolutions. This lets me get a lower resolution with the same X:Y ratio as the native/best resolution.

An answer to this question is still wanted.
Is it possible to set different resolution between console and X when Kernel Mode Setting is enabled?
fedora;monitors;kms
null
_unix.151004
I set up a link to a web app that my company uses on my desktop, which resulted in a .desktop file like this:

    [Desktop Entry]
    Icon=/home/kris/Pictures/gplus.png
    Name[en_US]=Google Plus
    Name=Google Plus
    Type=Link
    URL[$e]=https://plus.google.com/

This works pretty well, but I can't figure out how to add an option to force it to open in a new window, or possibly even as an app window. I have tested in a console that what I want will work:

    $ firefox -new-window plus.google.com

Does anyone know of a way to modify the .desktop file to do this? Is the only way to redo it as an Exec-style launcher? Is there an editor in KDE for this?
How do I get a .desktop in KDE to open a new browser window?
kde;opensuse;desktop;freedesktop
null
_softwareengineering.205958
Say I have a class like this:

    public class MyObject
    {
        public List<string> MyCollection { get; set; }
    }

And a method like this:

    public void DoSomething(MyObject obj)
    {
        if (obj.MyCollection == null)
        {
            // MyCollection must not be null

            // Should I...
            // a) obj.MyCollection = new List<string>();
            // b) throw new ArgumentException("MyCollection can not be null");
        }
    }

I do not have control over MyObject. Normally I'd just instantiate the collection in the constructor and be done with it. Should I just instantiate the collection in my method, or throw an exception?
Throwing an exception for errors that can be fixed
exceptions
What you've got here are called guard statements, and you absolutely should throw an exception if obj.MyCollection is null.

Exceptions are meant for exceptional circumstances, and since you specify that obj.MyCollection must not be null, a null value here would indeed be exceptional. Just make sure the exception you throw is of a suitable type (for example an ArgumentNullException).
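A sketch of what the guarded method could look like (the message text and the use of nameof are just one way to do it; nameof requires C# 6 or later):

    public void DoSomething(MyObject obj)
    {
        if (obj == null)
        {
            throw new ArgumentNullException(nameof(obj));
        }
        if (obj.MyCollection == null)
        {
            throw new ArgumentException("MyCollection must not be null", nameof(obj));
        }

        // From here on the method can rely on obj.MyCollection being usable.
    }

Failing fast like this keeps the bug visible at the call site that produced the bad object, instead of papering over it by quietly instantiating a new collection.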
_webapps.103234
I know that repeat groups can operate in a few different ways:

- with a fixed repeat count (the repeat count is an integer loaded in from a hidden variable);
- with an unspecified repeat count (the repeat group continues until the mobile worker decides to exit the loop);
- with a Model Iteration ID Query (I've seen this used primarily to iterate over cases retrieved from the casedb).

Is there a 4th option that allows you to iterate over a space-separated list of items, and access each subsequent item as a hidden variable inside the repeat group?
Is it possible to use a repeat group to iterate over a space-separated list?
commcare
A repeat group is capable of iterating over a space-separated list with a Model Iteration query, but like any repeat group auto-expansion, this can only occur over a set of values which is fixed when the form opens. That means the list can't be determined by user input in the form, unless you follow the pattern where the repeat contains an entry for every possible selection and uses display conditions around an inner group to hide elements which aren't chosen.

With those caveats: you can actually accomplish this quite simply, by providing the path to the space-separated list as the Model Iteration query itself. Model iterations internally operate over a space-separated list that they generate by performing a

    join(' ', instance('something')/your/iteration/query)

operation on your input. As such, if you provide a query with only one element, the join will just return your space-separated list and proceed as usual!

EDIT: Forgot to mention - if you are going to use this method and reference a question inside the form (rather than an instance as in my example), it needs to:

- be set using a default value, not a calculation;
- come before the model iteration loop.
_codereview.44619
My code below is for a sort of onscreen keyboard. I was just wondering if it could be written shorter; it has a lot of ifs and elses.

    function input(key) {
        fieldName = currentSide + '_scratchfield';
        field = document.getElementById(fieldName);
        del = "DELETE";
        if (key == 'sp') key = ' ';
        if (key == 'minus') {
            if (field.value == del) {
                return false;
            } else if (field.value.charAt(0) == "-") {
                field.value = field.value.substr(1, field.value.length);
                return false;
            } else {
                field.value = "-" + field.value;
                return false;
            }
        }
        if (key == 'clr') {
            if (field.value == "" || field.value == null) {
                return false;
            } else if (field.value == del) {
                field.value = "";
                return false;
            } else {
                field.value = field.value.substring(0, field.value.length - 1);
                return false;
            }
        }
        if (key == 'del') {
            if (field.value == "" || field.value == null) {
                key = del;
            } else if (field.value == del) {
                field.value = "";
                return false;
            } else {
                field.value = "";
                key = del;
            }
        }
        key = key.toUpperCase();
        if (field.value == del) {
            field.value = "";
        }
        field.value += key;
    }
Onscreen keyboard
javascript
null
_unix.6488
I have a virtual machine running openSUSE 11.2 with Mono 2.6.4. I use this VM as a test server to test ASP.NET applications under Apache mod_mono.

I wanted to upgrade (in the same virtual machine) to Mono 2.8.2. I downloaded several RPM files from http://ftp.novell.com/pub/mono/download-stable/openSUSE_11.2/i586/ but I'm stuck in a dependency loop and don't know which packages to install in which order... (Did I mention that I know very little about SUSE?)

Edit: Is there a way to do the upgrade without network connectivity?

Thanks!
How to upgrade mono on openSuse
linux;opensuse;mono;software installation
Go to this page at opensuse.org and click the 1-Click Install button on the mono-complete-2.8.2 meta package. All your looped dependencies will then be resolved automatically by the YaST manager.

This is the usual, user-friendly way to install packages on openSUSE.
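If you prefer the command line, the equivalent should be to add the Mono repository and let zypper work out the install order for you (a sketch; the repository URL is derived from the download location in the question, and the alias mono-stable is arbitrary):

    zypper addrepo http://ftp.novell.com/pub/mono/download-stable/openSUSE_11.2/ mono-stable
    zypper refresh
    zypper install mono-complete

Either way, the point is to let the package manager compute the dependency order instead of feeding it individual RPMs by hand.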
_codereview.66893
A little while back, I wrote in a review of a Ruby bowling sim that I might try my hand at modelling the game myself, focussing on the funky deferred scores system. Since a2bfay, who posted the original question, has since posted a thoroughly updated piece of code, I figured I'd better get mine finished, especially since I chose a very different approach. I just figured it'd be fun to put up for review. The main point for me was mostly to do some plain ol' Ruby for fun and zero profit. It's a self-imposed programming challenge, I suppose.

This is not a complete bowling simulator like a2bfay's, though; it's actually just a single class for now. It does a lot, though. Perhaps too much, but that's for a review to tackle.

The class in question models the concept of a frame in a game of bowling. Since a frame's final score might depend on shots/rolls in the following frames, I implemented the frames as nodes in a singly-linked list. So at any time, a frame can report its score, and whether or not said score is final, by examining itself and (if necessary) shots from its following frame(s). The final frame behaves differently, allowing up to 3 shots/rolls in order to resolve any non-final scores.

Basic requirements:

- Adherence to regular 10-pin bowling rules. No support for any of the variants.
- Ability to calculate scores live, i.e. while the game is in progress.
- Ability to easily control skill, i.e. how many pins to knock down.

I'm fairly happy with the result, though the class is pretty packed with functionality. For instance, here's how to play a perfect game:

    game = Frame.create_list(10).each do |frame| # create 10 linked frames
      frame.play { 10 }                          # bowl a strike in each frame
    end
    game.map(&:score).inject(&:+) # => 300

So it doesn't leave much for other classes to do [1]. Not necessarily a bad thing, of course.

Anyway, enough intro, here's the code. It's also in a gist along with many, many tests (RSpec). Kinda went overboard, I guess, but I wanted to stretch my testing muscles.

    # This class models a frame in 10 pin bowling.
    class Frame
      # Number of pins lined up
      PIN_COUNT = 10

      # The next frame relative to the receiver
      attr_reader :next_frame

      # Create an array of +count+ linked frames, in which frame N is linked
      # to frame N+1 from first to last.
      def self.create_list(count)
        raise RangeError.new("count must be greater than 0") if count.to_i < 1
        frames = [self.new]
        (count.to_i - 1).times.inject(frames) do |frames|
          frames << frames.last.append_frame
        end
      end

      def initialize #:nodoc:
        @shots = []
      end

      # Returns a dup of the shots in this frame.
      def shots
        @shots.dup
      end

      # Append a new frame to the receiver (if one exists, it's returned).
      # If the receiver has already recorded 1 or more shots, no new frame
      # will be created, and the method will just return the existing
      # +next_frame+ which may be +nil+.
      def append_frame
        return @next_frame if last? && !shots.empty?
        @next_frame ||= self.class.new
      end

      # True if the receiver has been played to completion (i.e. no more shots
      # are possible).
      def played?
        shots.count >= shot_limit
      end

      # Play this frame till it's done. Returns the shots that were recorded
      #
      # See also #play_one_shot
      def play(&block)
        shots_played = []
        shots_played << play_one_shot(&block) until played?
        shots_played
      end

      # Bowl a single ball. If the frame's been played, +nil+ will be
      # returned.
      #
      # The block will receive the number of pins still standing, and
      # the numbers of the shots already taken, and must return the number
      # of pins to topple. The number is clamped to what's possible.
      def play_one_shot(&block)
        return if played?
        shot = yield remaining_pins, shots.count
        @shots << [0, shot.to_i, remaining_pins].sort[1]
        shots.last
      end

      # True if the receiver has no following frames (see also #next_frame).
      def last?
        next_frame.nil?
      end

      # The score for this frame (may not be final).
      def score
        scored_shots = successive_shots[0, shots_required_for_score]
        scored_shots.inject(0, &:+)
      end

      # True if all the necessary shots have been taken (in this frame or others).
      def score_finalized?
        successive_shots.count >= shots_required_for_score
      end

      # The number of pins knocked down in the shots that have been taken.
      def pins
        shots.inject(0, &:+)
      end

      # True if the first shot was a strike.
      def strike?
        shots.first == PIN_COUNT
      end

      # True if the first 2 shots constitute a spare.
      def spare?
        !strike? && shots[0, 2].inject(0, &:+) == PIN_COUNT
      end

      # Shots required in order to calculate the final score for this frame.
      # This includes shots already taken this frame.
      def shots_required_for_score
        strike? || spare? ? 3 : 2
      end

      # Returns the number of pins currently standing.
      def remaining_pins
        remaining = PIN_COUNT - (pins % PIN_COUNT)
        remaining %= PIN_COUNT if played?
        remaining
      end

      # Collect shots from this frame and successive frames.
      def successive_shots
        return [] if @shots.empty?
        collected = @shots.dup
        collected += next_frame.successive_shots if next_frame
        collected
      end

      private

      # The number of shots that can be taken in this frame.
      def shot_limit
        return 3 if fill_ball?
        return 1 if strike?
        return 2 # intentional but unnecessary return; just there for formatting
      end

      # Does this frame have a 3rd shot?
      def fill_ball?
        last? && (strike? || spare?)
      end
    end

Things I've noticed myself:

- Conflating a regular mid-game frame and the last frame in a single class. They do differ a fair bit. But while I tried breaking it into separate classes, super- and subclasses, and extracting logic into modules, I didn't find a structure I liked. So it's all one class.
- There are a handful of magic numbers in there, but creating constants or methods for them seemed more trouble than it's worth, since I'm aiming exclusively at regular 10-pin bowling. Still, though...
- Again, the class just does a lot. Hmm... it is kinda complex, isn't it?

Edits & addenda:

Edit: Added a missing comment on Frame.create_list

[1] I should have been clearer. I do not intend for this single class to do and be everything, even if it does do a lot. To more fully model a game, you'd still need something like a Game class, perhaps a Player class, etc.
Bowling scores in Ruby
object oriented;ruby;game;rags to riches
null
_webapps.78637
I don't see any sharing setting that allows full access to upload/view an entire Google Photos account.

I want my phone, my laptop, my wife's phone and her laptop to all sync whatever local photos the devices have to the same Google Photos account.

I don't want to use one of our personal Google accounts, like my personal Gmail, because in order for her to log in to see the photos online, she'd have to get through my two-factor authentication and would basically be completely logged in to all my Google services, which would be confusing for her.

Has anyone else figured out the best way? Should we create a new photos-only Google account?
Strategy for sharing the same Google Photos account for whole family
google photos
null
_codereview.109620
My tooltip works fine, but I need to simplify this code.

var genderSelect = {
    getGenderSelect: function () {
        if ($(this).val() === 'Girl') {
            $('#girl-character').show();
            $('#boy-character').hide();
        } else if ($(this).val() === 'Boy') {
            $('#boy-character').show();
            $('#girl-character').hide();
        }
    },
    init: function () {
        $('.control-label input[name=gender]').change(function () {
            $('#caracters').show();
            genderSelect.getGenderSelect.call(this);
        });
    }
};
genderSelect.init();
Simple jQuery tooltip
javascript;jquery
You can shorten the getGenderSelect function like this:

var genderSelect = {
    getGenderSelect: function () {
        $('#girl-character').toggle($(this).val() === 'Girl');
        $('#boy-character').toggle($(this).val() === 'Boy');
    },
    init: function () {
        $('.control-label input[name=gender]').change(function () {
            $('#caracters').show();
            genderSelect.getGenderSelect.call(this);
        });
    }
};
genderSelect.init();

The toggle function takes a boolean which determines whether to show or hide the selected elements.
_cs.60509
Question taken from The Algorithm Design Manual by Steven S. Skiena, 1997.

A vertex cover of a graph $G=(V,E)$ is a subset of vertices $V'\subseteq V$ such that every edge $e\in E$ contains at least one vertex from $V'$. Delete all the leaves from any depth-first search tree of $G$. Must the remaining vertices form a vertex cover of $G$? Give a proof or a counterexample.

Answer given:

If the tree has more than one vertex, then yes. The remaining vertices still form a vertex cover, because for every edge $e\in E$ incident on the leaves, the other end-point is still in the remaining tree.

My question:

The answer is right for undirected graphs. But I think there exist counterexamples to this question using directed graphs.

For example: if we are using DFS starting from vertex $a$ and we are traversing in alphabetical order, i.e. we explore $b$ first, then we end up with two tree edges, which are $(a,b)$ and $(a,c)$. Therefore, $b$ and $c$ are leaf-vertices. But here, if we are going to delete vertices $b$ and $c$, the edge $(c,b)$ has no incident vertex contained in our vertex cover.

Am I right? I am confused actually.
Vertex cover of a graph by removing leaf-vertices from a DFS tree
algorithms;algorithm analysis;graph traversal
Look at the definition of vertex cover (as provided by the book). It is strictly defined on undirected graphs. Thus, the answer doesn't apply to directed graphs, nor to any other kinds of graphs you might think of. So your counterexample is invalid, as it makes invalid assumptions (graphs are always undirected here). These invalid assumptions are the source of your confusion.To consider directed graphs, we first need to define what a vertex cover is on a directed graph. Well, we can say it is a subset of vertices such that every arc is incident to at least one vertex in the subset. Again, the book makes no claim about such directed vertex covers. If you are now asking does $\{ a \}$ form a (directed) vertex cover as per our definition?, the answer is NO as you correctly observed.
_scicomp.15874
I am trying to use the MPI share memory feature. I have several SMP nodes, and each of them has four cores. I need an array of size N for each node that should be accessed by all four cores in each node. My plan is to construct a shared window of size N/4 using MPI_Win_allocate_shared, and I expect that memory usage of each node would be N. In the example below, N is 4X10^9 bytes, but the memory usage of each node is not 4GB but 16GB. Am I missing something?#include <iostream>#include <mpi.h>int main(int argc, char** argv) { MPI_Init(&argc, &argv); int rank_all; int rank_sm; int size_sm; // all communicator MPI_Comm comm_sm; MPI_Comm_rank(MPI_COMM_WORLD, &rank_all); // shared memory communicator MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &comm_sm); MPI_Comm_rank(comm_sm, &rank_sm); MPI_Comm_size(comm_sm, &size_sm); std::size_t local_window_count(1000000000); char* base_ptr; MPI_Win win_sm; int disp_unit(sizeof(char)); MPI_Win_allocate_shared(local_window_count * disp_unit, disp_unit, MPI_INFO_NULL, comm_sm, &base_ptr, &win_sm); // write char buffer; if (rank_sm == 0) { buffer = 'A'; } else if (rank_sm == 1) { buffer = 'C'; } else if (rank_sm == 2) { buffer = 'G'; } else { buffer = 'T'; } MPI_Win_fence(0, win_sm); for (std::size_t it = 0; it < local_window_count; it++) { base_ptr[it] = buffer; } MPI_Win_fence(0, win_sm); // read long long int index_start(-1 * rank_sm * local_window_count); long long int index_end((size_sm - rank_sm) * local_window_count - 1); for (long long int it_rel = index_start; it_rel < index_end; it_rel++) { buffer = base_ptr[it_rel]; if (it_rel == index_start) { std::cout << rank_sm << start: << buffer << std::endl; } else if (it_rel == (index_end - 1)) { std::cout << rank_sm << end: << buffer << std::endl; } } MPI_Finalize(); return 0;}
total memory usage of MPI shared memory
c++;mpi;memory management
null
_cogsci.10575
This question is based on a previous question I wrote. For details, see here: https://parenting.stackexchange.com/q/21080/17089

Question: Are these claims actually saying classical music is more intelligence-enhancing than other forms of music? If so, is that true?

As a composer, I would find this interesting because I think there are certainly many different kinds of music, and it seems very surprising to single out one genre as requiring more intelligence. For instance, suppose you say Mozart is more sophisticated than the Boss (Bruce Springsteen) and therefore more intelligence-promoting. By that logic, Schoenberg's music should be better for children than Mozart, since Schoenberg is certainly more complex than Mozart (e.g. chromatic vs diatonic scales; polymeter vs single meter). But I would never play Schoenberg for my child instead of Mozart, because I find Schoenberg unnatural to listen to; it is so weird and complex. The point is, I think simply saying classical music is more sophisticated or superior isn't a very meaningful distinction, and greater clarification should be given about why classical music in particular enhances learning (if in fact that has any basis). Another way of saying this is: why doesn't music in general enhance it, then? Why classical music in particular?
Does classical music enhance intelligence in children more than other genres?
developmental psychology;music
A great overview of this topic is available in Chapter 6 of the book The Invisible Gorilla by Chabris & Simons. My answer is based, in large part, on their summary of the topic.The Mozart Effect was originally reported by Rauscher, Shaw, & Ky (1993). In the experiment, college students completed a set of typical IQ tests. Before taking the tests, the participants were randomly assigned to either (1) listen to 10 minutes of Mozart, (2) listen to 10 minutes of relaxation instructions, or (3) sit in silence for 10 minutes. They reported that participants who listened to Mozart scored an average of 8-9 IQ points higher on the tests than the other groups.The effect turns out to be difficult to replicate. This is nicely summarized in the introduction of Steele, Bass, & Crook (1999), who also conducted a replication of the original design and found no evidence of a Mozart effect.Your question is specifically about classical versus other kinds of music, and this is indeed a topic that has been researched as a follow-up to the Mozart Effect. One theory that arose to explain the Mozart Effect (even though the effect itself may not be all that reliable!) is that it has nothing to do with classical music in particular. Rather, the effect is simply about arousal and mood. Listening to music may not increase your intelligence, but it might make you more alert and engaged than sitting in silence or listening to a relaxation script. Schellenberg & Hallam (2006) reported a study in which participants (8,000 British school children!) listened to either (1) a Mozart string quintet, (2) three pop songs, or (3) a discussion about a science experiment. The children who listened to the pop songs performed significantly better than the others, and there was no benefit of Mozart over the science discussion. Nantais & Schellenberg (1999) demonstrated that there was no overall difference between listening to Mozart or an excerpt from a Stephen King story, but participants did tend to do better if they got to listen to the thing they preferred! One explanation, then, is that listening to something you like before taking an IQ test increases your performance (or, conversely, listening to something you don't like decreases performance).
_datascience.14688
Is there any reason why the validation mean squared error output from Keras is always very similar to 1? Thank you. All of my training results look like:

155/155 [==============================] - 0s - loss: 6062.6136 - mean_absolute_error: 0.8344 - mean_squared_error: 1.0271 - val_loss: 0.8252 - val_mean_absolute_error: 0.8252 - val_mean_squared_error: 1.0164
Epoch 29/1000
155/155 [==============================] - 0s - loss: 5870.5280 - mean_absolute_error: 0.8324 - mean_squared_error: 1.0211 - val_loss: 0.8246 - val_mean_absolute_error: 0.8246 - val_mean_squared_error: 1.0130
Epoch 30/1000
155/155 [==============================] - 0s - loss: 5668.5083 - mean_absolute_error: 0.8311 - mean_squared_error: 1.0134 - val_loss: 0.8244 - val_mean_absolute_error: 0.8244 - val_mean_squared_error: 1.0106
Epoch 31/1000
155/155 [==============================] - 0s - loss: 5530.8119 - mean_absolute_error: 0.8288 - mean_squared_error: 1.0115 - val_loss: 0.8243 - val_mean_absolute_error: 0.8243 - val_mean_squared_error: 1.0089
Epoch 32/1000
155/155 [==============================] - 0s - loss: 5222.6773 - mean_absolute_error: 0.8283 - mean_squared_error: 1.0119 - val_loss: 0.8245 - val_mean_absolute_error: 0.8245 - val_mean_squared_error: 1.0071
Epoch 33/1000
155/155 [==============================] - 0s - loss: 5090.0273 - mean_absolute_error: 0.8273 - mean_squared_error: 1.0078 - val_loss: 0.8247 - val_mean_absolute_error: 0.8247 - val_mean_squared_error: 1.0060
Epoch 34/1000
155/155 [==============================] - 0s - loss: 4878.2420 - mean_absolute_error: 0.8272 - mean_squared_error: 1.0093 - val_loss: 0.8245 - val_mean_absolute_error: 0.8245 - val_mean_squared_error: 1.0046

Note: I have standardized my input and output with sklearn standardization:

from sklearn import preprocessing
X_scaler = preprocessing.StandardScaler().fit(X_list_total)
X_list_total_standardized = X_scaler.transform(X_list_total)
Y_scaler = preprocessing.StandardScaler().fit(Y_list_total)
Y_list_total_standardized = Y_scaler.transform(Y_list_total)

Does it just mean that there is nothing to learn from the data at all?
keras validation mean squared error always similar to 1
machine learning;scikit learn;tensorflow;keras
null
_unix.272086
Is it possible to have the ls command behave differently based on the number of directory entries that may be listed? If I just use ls (with no modifying options, but I can specify directories or filters), I want it to:

- apply the -l long listing format if there are 10 entries or fewer
- show only the first 50 entries, and output a warning that there are x more entries

Is this possible? How can I do this? Note that I don't want to switch to a custom command; I am OK with a custom script or wrapper, but I still want to use ls to do this, with full functionality still maintained. That is, not my-custom-ls but just ls to call the script/wrapper. A rough sketch of the kind of wrapper I have in mind is below.
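(Untested sketch, assuming bash; the thresholds are the ones described above:)

ls() {
    # Count what plain ls would print; when piped, ls emits one name per line.
    local n
    n=$(command ls "$@" | wc -l)
    if [ "$n" -le 10 ]; then
        command ls -l "$@"
    elif [ "$n" -gt 50 ]; then
        command ls "$@" | head -n 50
        echo "... and $((n - 50)) more entries" >&2
    else
        command ls "$@"
    fi
}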
Have the ls command behave differently based on the number of entries
shell;ls
null
_softwareengineering.133825
I only have a verbal hosting agreement in place with a customer. Unfortunately the relationship has deteriorated to the point of no return, and I'd like to end the hosting agreement and hand them over the root password. This is a virtual dedicated server I pay for, and then they pay me.They are fine with this, but the problem is I don't trust this client even a little bit. Can I get them to sign something saying they release me from liability to changes made on the server once I give them the password? How is the root password usually given to clients who request it while making sure they don't come back later on and blame you for breaking the server? Does simply the act of giving them the root password infer that you are handing over complete control and thus complete responsibility?
How do I hand over the keys to a webserver I'm hosting?
client relations;web hosting
null
_unix.324777
I want to create a custom lang file for source highlighting in gedit etc. Before I start, I cloned an existing file for testing purposes,/usr/share/gtksourceview-3.0/language-specs/imagej.langand changed the header from<language id=imagej _name=ImageJ version=2.0 _section=Scientific>to<language id=imagej2 _name=ImageJ2 version=2.0 _section=Scientific>and saved it in the same folder under the name imagej2.lang. The new ImageJ2 appears in the list of languages in gedit, but nothing is actually highlighted when I choose to use it.What am I missing?
GtkSourceView lang file is loaded but nothing gets highlighted
xml;gedit;syntax highlighting
null
_unix.98115
I have a Dell Poweredge running Ubuntu 13.04 in my office to serve up an interal web-app address system. It has been at least a 2 months possibly 3 since my last login. Everything is running great, but I can not login. I know I have the correct credentials because they are saved in putty. Error simply says: Access DeniedWhat could possibly cause this to happen? Can it be fixed without pulling it off the shelf and hooking up monitors and keyboards etc (as a side-note it weighs something like 50+ pounds so I am not looking forward to that at all)?guest@buildsys2:~$ ssh -v [email protected]_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug1: Connecting to 192.168.1.10 [192.168.1.10] port 22.debug1: Connection established.debug1: SELinux support disableddebug1: identity file /tmp/guest-YBscPe/.ssh/id_rsa type -1debug1: identity file /tmp/guest-YBscPe/.ssh/id_rsa-cert type -1debug1: identity file /tmp/guest-YBscPe/.ssh/id_dsa type -1debug1: identity file /tmp/guest-YBscPe/.ssh/id_dsa-cert type -1debug1: identity file /tmp/guest-YBscPe/.ssh/id_ecdsa type -1debug1: identity file /tmp/guest-YBscPe/.ssh/id_ecdsa-cert type -1debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 > Debian-5ubuntu1.1debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH*debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: server->client aes128-ctr hmac-md5 nonedebug1: kex: client->server aes128-ctr hmac-md5 nonedebug1: sending SSH2_MSG_KEX_ECDH_INITdebug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ECDSA [removed]The authenticity of host '192.168.1.10 (192.168.1.10)' can't be established.ECDSA key fingerprint is [removed].Are you sure you want to continue connecting (yes/no)? yesWarning: Permanently added '192.168.1.10' (ECDSA) to the list of known hosts.debug1: ssh_ecdsa_verify: signature correctdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickey,passworddebug1: Next authentication method: publickeydebug1: Trying private key: /tmp/guest-YBscPe/.ssh/id_rsadebug1: Trying private key: /tmp/guest-YBscPe/.ssh/id_dsadebug1: Trying private key: /tmp/guest-YBscPe/.ssh/id_ecdsadebug1: Next authentication method: [email protected]'s password: debug1: Authentications that can continue: publickey,passwordPermission denied, please try [email protected]'s password:
Unable to login via ssh after several months
ubuntu;ssh;putty
null
_unix.207453
I have a question about backing up BTRFS partitions.

Assume I have a BTRFS partition on /dev/sda1 and an external hard drive with BTRFS on /dev/sdb1. I already found out I can make an initial backup by issuing:

btrfs replace start /dev/sda1 /dev/sdb1

Afterwards I change 2 things:

- I create a new regular file on sda1
- I create a new BTRFS snapshot on sda1

Now I want to bring my external hard drive sdb1 ('backup') into alignment with sda1, covering both the files and the BTRFS-specific stuff (snapshots). How do I do this? So I am looking for the rsync equivalent which also syncs the BTRFS features (snapshots):

rsync -avr --delete <mount point sda1> <mount point sdb1>

Thanks
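P.S. The closest thing I've found so far is btrfs send/receive, which works per read-only snapshot; an untested sketch of what I mean:

btrfs subvolume snapshot -r /mnt/sda1 /mnt/sda1/backup-snap
btrfs send /mnt/sda1/backup-snap | btrfs receive /mnt/sdb1

But I don't see how that covers the whole partition plus all of its existing snapshots in one go.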
Copy BTRFS partition to external harddisk BTRFS partition including snapshots
filesystems;btrfs
null
_unix.265302
I am interested in modifying a bash script file whose purpose is to copy a large number of files to a destination path. What I'm trying to achieve is to count the number of files as they are being copied. How can I achieve the objective stated above?
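(For concreteness, an untested sketch of the kind of per-file loop I mean; src and dest stand in for the script's real paths:)

count=0
for f in "$src"/*; do
    cp -- "$f" "$dest"/ && count=$((count + 1))
    printf '\rcopied %d files' "$count"   # overwrite the same status line
done
printf '\n'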
counting the number of files being copied
bash;shell script
null
_webmaster.102752
I have a domain reseller account under PDR (PublicDomainRegistry). One of my client's domains has come under Pending Verification. The message which I see in my control panel is as below:

"An email has been sent to the Registrant for verification. If the email address is not verified by Friday, January 13, 2017, the domain name will be deactivated."

The problem is that the client has no access to the current 'Registrant Contact' email. Also, I can't change the registrant contact because of this pending verification. Is there any other option by which I can overcome this, without the domain getting deactivated?

Thanks.
Alternative when the current whois email cant be accessed and needs to verify the contact info for ICANN validation
domain registrar;whois;icann
Your client, who obviously entered the wrong email address, should contact the registry directly and generally provide all sorts of ID.

If you are the reseller, the best thing you can do to look after your customer is contact your wholesaler as well and check what the process is. I doubt it will be easy.

But I reckon you have left your run a bit late. Your client had 15 days to do this; there are 5 left :(
_unix.13158
Possible Duplicate: Renaming multiple files in unix

Rename all files within a folder with the first word of their content (remember, all the files should be text files). For example, if a.txt contains "Unix is an OS" in its first line, then a.txt should be renamed to Unix.txt.
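(One untested sketch of the kind of loop I mean, assuming plain text files in the current folder:)

for f in *; do
    first=$(awk '{ print $1; exit }' "$f")   # first word of the first line
    [ -n "$first" ] && mv -- "$f" "$first.txt"
done

(Files whose first words collide would clobber each other; that case still needs handling.)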
Renaming multiple files in Unix
rename
null
_codereview.36582
This looks pretty messy (need to reduce nesting I feel). I need a check an input for a seat, but I can't guarantee it has a value (one may not be chosen for example). If the seats aren't in a certain color it goes to seat basic (1), otherwise seat premium (2). var seat = $(input[id*= + seatPrefix + ]); if (seat != 'undefined') { seat = seat[0].value; if (seat != ) { if (seat != red && seat != blue && seat != silver && seat != gold) { chooseSeat(seat, 2); } else { chooseSeat(seat, 1); } } }
null/undefined checking for checking seats
javascript;jquery;null
var seat = $(input[id*= + seatPrefix + ]); if (seat != 'undefined') {First off, an extra level of indentation has crept in here. Let me assume this is just a copy/paste error.Any chance you could be using classes here instead of prefixed IDs? Best case: a single ID you know in advance. It seems that you assume there's only one match anyways.jQuery, when it doesn't find anything, returns an empty collection, not undefined (or 'undefined'; did you get confused by typeof x !== 'undefined'?). To test if a jQuery collection is empty, you can use seat.length === 0 or even ! seat.length.I don't normally use hungarian notation, but I do use it for jQuery objects, especially when I'm dealing with native elements as well: $seat. On a side note, shouldn't the variable name be $seatInput or something, rather than just $seat?I recommend against the coercing equality operator (==). Use === instead. == can be a bit... unpredictable at times. There are some special cases or case where I find == acceptable (== null for null or undefined) but this is not one of them (if only because undefined != 'undefined')seat = seat[0].value;You can use seat.val() here. This has a positive side-effect that val returns undefined if the jQuery object holds no elements. Your code will attempt to dereference undefined in that case.Also, you are reusing the same variable to mean a jQuery object at one point, then to mean a string in the next line. You should use two separate variables (or inline the first one) here.if (seat != ) {If you use val, you can move it outside its condition block, and merge the condition with this one:if(seat !== undefined && seat !== )Since both undefined and the empty string are falsy and all other strings are truthy, this will work as well:if(seat)Of course, @retailcoder's suggestion to invert the condition and return early still applies.if (seat != red && seat != blue && seat != silver && seat != gold) {You can use indexOf to shorten that code and make it more readable:if ([red, blue, silver, gold].indexOf(seat) == -1)if you like shortcuts,if (!~[red, blue, silver, gold].indexOf(seat))At this point, the array definition can (and should) be moved outside the condition. It is marginally nicer to the memory, but more importantly it's easier to find the array in case you want to ever change it if you put it at the beginning of the file.if(...){ chooseSeat(seat, 2);} else { chooseSeat(seat, 1);}Shouldn't this logic be part of the chooseSeat function?Also, 1 and 2 are non-obvious. Since you're passing a string anyways (why?), perhaps chooseSeat should accept basic and premium as its arguments?If I couldn't modify the chooseSeat function or your HTML, I would probably refactor your code like this:var basicColors = [red, blue, silver, gold];var seat = $(input[id*= + seatPrefix + ]).val();if (!seat) return;if (~basicColors.indexOf(seat)) { chooseSeat(seat, 1);} else { chooseSeat(seat, 2);}or, if return cannot be used (this is not the whole body of the function it is in),var basicColors = [red, blue, silver, gold];var seat = $(input[id*= + seatPrefix + ]).val();if (seat) { if (~basicColors.indexOf(seat)) { chooseSeat(seat, 1); } else { chooseSeat(seat, 2); }}
_softwareengineering.254799
In 1989 Felix Lee, John Hayes and Angela Thomas wrote a Hacker's Test taking the form of a quiz with many insider jokes, such as "Do you eat slime-molds?"

I am considering the following series:

0015 Ever change the value of 4?
0016 ... Unintentionally?
0017 ... In a language other than Fortran?

Is there a particular anecdote making the number 4 special in this series? Did some Fortran implementation allow one to modify the value of constants? Was this possible in other languages in common use at that time?
Ever change the value of 4? - how did this come into Hayes-Thomas quiz?
history;fortran
In the old days (1970s and before) some computers did not have any MMU (and this is still true today for very cheap microcontrollers).

On such systems, there is no memory protection, so no read-only segment in the address space, and a buggy program could overwrite a constant (either in data memory, or even inside the machine code).

The Fortran compilers at that time passed formal arguments by reference. So if you did CALL FUN(4) and the SUBROUTINE FUN(I) has its body changing I, e.g. with a statement I = I + 1, you could have a disaster, changing 4 into 5 in the caller (or worse).

This was also true on the first microcomputers like the original IBM PC AT from 1984, with MS-DOS.

FWIW, I'm old enough to have used, as a teenager in the early 1970s, such computers: the IBM 1620 and CAB 500 (now in a museum: these are 1960s-era computers!). The IBM 1620 was quite fun: it used in-memory tables for additions and multiplications (and if you overwrote these tables, chaos ensued). So not only could you overwrite a 4, but you could even overwrite every future 2+2 addition or 7*8 multiplication (but I have really forgotten these dirty details, so I could be wrong).

Today, you might overwrite the BIOS code in flash memory, if you are persevering enough. Sadly, I don't feel that fun any more, so I never tried. (I'm even afraid of installing some LinuxBios on my motherboard.)

On current computers and operating systems, passing a constant by reference and changing it inside the callee will just provoke a segmentation violation, which sounds familiar to many C or C++ developers.

BTW, to be nitpicking: overwriting 4 is not a matter of language, but of implementation.
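To make this concrete, here is the kind of (hypothetical) FORTRAN 77 program I mean; on an implementation that pools literal constants and passes them by reference, it could print 5:

      PROGRAM FOUR
C     Pass the literal 4 by reference to a subroutine that mutates it.
      CALL BUMP(4)
C     On an affected implementation this can print 5, because the
C     second literal 4 refers to the same pooled constant.
      PRINT *, 4
      END

      SUBROUTINE BUMP(I)
      I = I + 1
      END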
_softwareengineering.135841
I know very little about smart card authentication in general, so please point out or correct me if anything below doesn't make sense.

Let's say I have:

- A Certificate Authority X's smart card (non-exportable private key)
- Drivers for that smart card written in C
- A smart card reader
- The CA's authentication OCSP web service
- A requirement to implement user authentication in a .NET fat client application via a smart card that was given out by the CA X.

I tried searching for info on the web but to no avail. What would the steps be? My first thought was:

1. Set up a web service that would allow saving of (for example) scores of a ping-pong game for each user. Each time someone tries to submit a score via the client application, he can only do so by inserting the smart card into the reader.
2. Then the public key is read from the smart card by native C calls through .NET and sent to my custom web service, which in turn uses the CA's authentication OCSP web service to prove the validity of the public key/public certificate (?). If the public key is okay and valid, encrypt a random sequence of bytes with the public key and send it to the client application.
3. If the client application sends back the correctly decrypted random sequence of bytes along with the score of the ping-pong game, then the score is saved in the database for the given user.

My question is, is this the correct way to do it? What else should I know about smart card authentication? A rough sketch of what I imagine the server side looks like follows.
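(Untested fragment of the server-side challenge step in step 2; certBytes is a placeholder for however the card's DER-encoded certificate arrives:)

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// Load the certificate the client sent us.
var cert = new X509Certificate2(certBytes);
// (The OCSP validity check against the CA would happen here.)
var rsa = (RSACryptoServiceProvider)cert.PublicKey.Key;

// Generate a random challenge.
var challenge = new byte[32];
using (var rng = RandomNumberGenerator.Create())
{
    rng.GetBytes(challenge);
}

// Encrypt with the card's public key; only the card's private key can decrypt.
byte[] encryptedChallenge = rsa.Encrypt(challenge, true); // true = OAEP padding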
How to implement smart card authentication with a .NET Fat client?
.net;windows
null
_hardwarecs.1310
I am looking for a device that would allow me to connect one laptop to two Ethernet cables. Each Ethernet cable has a distinct public IP and 100 Mbits symmetrical bandwidth. The goal of this link aggregation is to increase download/upload speed. The laptop has only one Ethernet port. I am not sure what the best solution is between adding another Ethernet port (e.g. with an Ethernet <-> USB adapter) and trying to configure the operating system to use both Ethernet cables simultaneously (Windows 7 SP1 x64 Ultimate and Kubuntu 14.04 LTS x64), or having an external device that takes care of link aggregation.
Connect one computer to two Ethernet cables
networking
null
_opensource.2699
I am using the Telegram API (licensed under the GNU GPL) to integrate chat services within my own application. The developer of Telegram asks:

"Please remember to publish your code too in order to comply with the licences."

So in a nutshell, my question is: do I need to open-source my entire application, or do I need to open-source only the part wherein I'm using the chat functionality with Telegram?

Any help will be highly appreciated.

Link 1 to GitHub page
Link 2 to GitHub
GNU GPL question: Do I need to open source my entire app?
gpl;derivative works;api
null
_cstheory.31418
I have a graph optimization problem which is hard to describe in the title.

There is a component-based system which consists of components and data transmissions between components (components and data transmissions will be represented by BC for short). The system has only one entry and one exit, which means the system will always execute from the start component to the end component. Suppose the system only has sequence and concurrence structures. In a sequence structure, the components execute one by one. In a concurrence structure, components in different branches execute at the same time.

Each component and data transmission has a response time $a_i$. Now I have $k$ resources. Each resource can only be allocated to one BC. If a BC has a resource, its response time will drop from $a_i$ to $b_i$. I can calculate the response time of the system, because response times in a sequence structure add up, and in a concurrence structure the response time is the maximum over the branches. So I can get the system response time if I know all the BCs' response times.

Now I need to know: what is the best resource allocation strategy to make the system response time minimum?

The system can be represented as a graph $S$ which has $n$ nodes and edges (BC for short). The nodes represent components and the edges represent data transmissions. The graph has only one entry and one exit and all the edges are in one direction. $S=\{BC_1,BC_2,\ldots, BC_n\}$. $BC_i$ has two values $a_i, b_i$: $a_i$ means the normal response time, and $b_i$ means the response time when allocated a resource.

I have a function to calculate the response time of any graph $G$ whose time complexity is $O(n)$ ($n$ is the number of edges and nodes it contains). A naive method is to choose the best from $\binom nk$ different allocations, so the time complexity is $O(n \cdot \binom nk)$.

Now I think I can use dynamic programming to solve this problem. My idea is to treat a complex structure as a node, so that some intermediate results can be cached. I have some images to describe my idea. For example, the graph $S$ can be represented as $N_1,E_A,N_A,E_B,N_3$. The best allocation must come from the 11 different allocations. I can cache the result of allocating $0,1,2$ resources to $N_A$ to avoid repetitive computation. If not, $F(N_A, 0)$ will be calculated 6 times by simply enumerating.

I think my solution's time complexity is better than $O(n \cdot \binom nk)$. However, I don't know how to prove it. Intuitively I think this problem is solvable in polynomial time.
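In symbols, the cached quantity would be something like $F(X, j)$, the minimum response time of substructure $X$ using $j$ resources (my own notation, assuming the graph decomposes into series/parallel compositions):

$$F(BC_i, 0) = a_i, \qquad F(BC_i, 1) = b_i$$
$$F(\mathrm{seq}(X, Y), j) = \min_{0 \le t \le j} \bigl( F(X, t) + F(Y, j - t) \bigr)$$
$$F(\mathrm{par}(X, Y), j) = \min_{0 \le t \le j} \max\bigl( F(X, t), F(Y, j - t) \bigr)$$

and the final answer would be $\min_{j \le k} F(S, j)$.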
How to solve such a graph optimization problem?
graph algorithms;optimization;dynamic programming
null
_webapps.44656
Okay, I've got a client's Twitter account and a Facebook page, and I am a manager for the Facebook page. I have tried using the official Twitter Facebook app to send posts to Facebook; I was unable to get posts to send to my page, but I got it working seamlessly with my profile. I used Selective Tweets to post, but that now isn't working anymore; it too worked with my profile.

As Selective Tweets was a recommendation from here, I'm now asking for any help in resolving this. It's like my Facebook profile doesn't have authorization to post to the page, but I have full access. I've spent too much time on this already and any help is welcome.
Sending Tweets to a Facebook Page not working
facebook;twitter;facebook pages
null
_cs.76295
Given a set S of integers, the task is to partition the set into subsets such that:Total number of partitions is maximizedEach partition has sum at least KThis looks like a variant of bin-packing problem, in which the bin has to filled upto a minimum capacity and the objective is to maximize the number of bins. I looked up the solutions of bin-packing problem, turned out ffd was the best approximation solution to it.Could someone please tell me how to approach the partition problem?
Subset partition problem variant
algorithms;combinatorics;sets;partitions
null
_codereview.47030
I would like to revise the fundamentals of Java threads on before-Java 5, so that I can understand improvements in Java 5 and beyond.I started off with a custom collection and need help on:Things I am doing wrong / grossly overlookingHow to make it betterSuggest alternate implementation, if anyWhich is the correct place to have the kill variable? Is it inside the thread or in the collection like I did?public class ThreadCreation { public static void main(String[] args) { MyCollection coll = new MyCollection(); coll.kill = false; RemoveCollClass t2 = new RemoveCollClass(); t2.setName(Thread2); t2.coll = coll; t2.start(); AddCollClass t1 = new AddCollClass(); t1.setName(Thread1); t1.coll = coll; t1.start(); RemoveCollClass t3 = new RemoveCollClass(); t3.setName(Thread3); t3.coll = coll; t3.start(); try { Thread.sleep(10000); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } coll.kill = true; } static class AddCollClass extends Thread { volatile MyCollection coll; int count = 0; @Override public void run() { while (!coll.kill) { System.out.println(Thread.currentThread().getName() + --> AddCollClass Attempt --> + count); coll.add(); count++; try { Thread.sleep(2000); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } } static class RemoveCollClass extends Thread { volatile MyCollection coll; int count = 0; @Override public void run() { // System.out.println(ThreadClass is running ); while (!coll.kill) { System.out.println(Thread.currentThread().getName() + -->RemoveCollClass Attempt --> + count); coll.remove(); count++; } } } static class MyCollection { Stack<String> container = new Stack<String>(); int maxSize = 5; volatile boolean kill = false; public synchronized boolean isFull() { if (container.size() >= maxSize) return true; else return false; } public synchronized boolean isEmpty() { if (container.size() <= 0) return true; else return false; } public synchronized void add() { if (isFull()) { try { System.out.println(wait triggered on--> + Thread.currentThread().getName()); wait(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); notify(); } } if (!isFull()) { container.add(Thread.currentThread().getName()); System.out.println( Add Completed by + Thread.currentThread().getName()); notify(); } } public synchronized void remove() { if (isEmpty()) { try { System.out.println(wait triggered on--> + Thread.currentThread().getName()); wait(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); notify(); } } if (!isEmpty()) { container.pop(); System.out.println( Remove Completed by + Thread.currentThread().getName()); } } }}
Pre-Java 5 threads
java;multithreading;thread safety
null
_codereview.136224
I have this nifty merge sort implementation for sorting singly-linked lists. Its running time is \$\Theta(n \log n)\$, yet it uses only \$\Theta(\log n)\$ space (the stack). See what I have below.ListMergesort.java:package net.coderodde.fun;import java.util.Random;/** * This class contains a method for sorting a singly-linked list. * * @author Rodion rodde Efremov * @version 1.6 (Jul 28, 2016) */public class ListMergesort { /** * This class implements a node in a singly-linked list. * * @param <E> the type of the datum hold by this node. */ public static final class LinkedListNode<E> { private final E datum; private LinkedListNode<E> next; public LinkedListNode(final E datum) { this.datum = datum; } public E getDatum() { return datum; } public LinkedListNode<E> getNext() { return next; } public void setNext(final LinkedListNode<E> node) { this.next = node; } } public static <E extends Comparable<? super E>> LinkedListNode<E> mergesort(final LinkedListNode<E> head) { if (head == null || head.getNext() == null) { return head; } return mergesortImpl(head); } private static <E extends Comparable<? super E>> LinkedListNode<E> mergesortImpl(final LinkedListNode<E> head) { if (head.getNext() == null) { return head; } final LinkedListNode<E> leftSublistHead = head; final LinkedListNode<E> rightSublistHead = head.getNext(); LinkedListNode<E> leftSublistTail = leftSublistHead; LinkedListNode<E> rightSublistTail = rightSublistHead; LinkedListNode<E> currentNode = rightSublistHead.getNext(); boolean left = true; // Split the input linked list into two smaller linked lists: while (currentNode != null) { if (left) { leftSublistTail.setNext(currentNode); leftSublistTail = currentNode; left = false; } else { rightSublistTail.setNext(currentNode); rightSublistTail = currentNode; left = true; } currentNode = currentNode.getNext(); } leftSublistTail.setNext(null); rightSublistTail.setNext(null); return merge(mergesortImpl(leftSublistHead), mergesortImpl(rightSublistHead)); } private static <E extends Comparable<? 
super E>> LinkedListNode<E> merge(LinkedListNode<E> leftSortedListHead, LinkedListNode<E> rightSortedListHead) { LinkedListNode<E> mergedListHead; LinkedListNode<E> mergedListTail; if (rightSortedListHead.getDatum() .compareTo(leftSortedListHead.getDatum()) < 0) { mergedListHead = rightSortedListHead; mergedListTail = rightSortedListHead; rightSortedListHead = rightSortedListHead.getNext(); } else { mergedListHead = leftSortedListHead; mergedListTail = leftSortedListHead; leftSortedListHead = leftSortedListHead.getNext(); } while (leftSortedListHead != null && rightSortedListHead != null) { if (rightSortedListHead .getDatum() .compareTo(leftSortedListHead.getDatum()) < 0) { mergedListTail.setNext(rightSortedListHead); mergedListTail = rightSortedListHead; rightSortedListHead = rightSortedListHead.getNext(); } else { mergedListTail.setNext(leftSortedListHead); mergedListTail = leftSortedListHead; leftSortedListHead = leftSortedListHead.getNext(); } } while (leftSortedListHead != null) { mergedListTail.setNext(leftSortedListHead); mergedListTail = leftSortedListHead; leftSortedListHead = leftSortedListHead.getNext(); } while (rightSortedListHead != null) { mergedListTail.setNext(rightSortedListHead); mergedListTail = rightSortedListHead; rightSortedListHead = rightSortedListHead.getNext(); } return mergedListHead; } public static <E> String toString(LinkedListNode<E> head) { final StringBuilder sb = new StringBuilder(); while (head != null) { sb.append(head.getDatum()).append(' '); head = head.getNext(); } return sb.toString(); } private static LinkedListNode<Integer> createRandomLinkedList(final int size, final Random random) { if (size == 0) { return null; } LinkedListNode<Integer> head = new LinkedListNode<>( random.nextInt(100)); LinkedListNode<Integer> tail = head; for (int i = 1; i < size; ++i) { final LinkedListNode<Integer> newnode = new LinkedListNode<>(random.nextInt(100)); tail.setNext(newnode); tail = newnode; } return head; } public static void main(final String... args) { final long seed = System.nanoTime(); final Random random = new Random(seed); LinkedListNode<Integer> head = createRandomLinkedList(10, random); System.out.println(Seed = + seed); System.out.println(toString(head)); head = mergesort(head); System.out.println(toString(head)); }}As always, any critique is much appreciated.
Merge sorting a singly-linked list in Java
java;algorithm;linked list;mergesort
On the whole, I like your approach and found it very easy to follow. There are a few things to consider though.TestHarnessI don't like having main methods in classes as test harnesses. I would prefer to either see JUnit tests, or for some kind of MergeSortTestHarness to contain the method that exercises your class. This creates a separation between the code that does the work and the code that exercises it. It also forces you to use the public interface to the class. At the moment, you've got ListMergesort which presents generic methods and contains a generic LinkedListNode class, but has a private method that's called by main that creates a random list of integers. This method clearly doesn't belong.Variable NamingWhen I first saw this code in toString, I thought it was going to be a bug:head = head.getNext();It looks like it's updating the head of the list as it iterates through to print. It's not of course, it's using a local variable that's actually iterating along the list. The name is a bit misleading, current or iter or something suggesting that it's expected to move along the list might be better.left = !leftAt the end of your split loop you can do:left = !left;currentNode = currentNode.getNext();This would allow you to remove the assignments from the split clauses above to make it more concise.Copy to endYou stop merging the lists after you've identified that one of the input streams is empty. At which point you copy the rest of the list across like this:while (leftSortedListHead != null) { mergedListTail.setNext(leftSortedListHead); mergedListTail = leftSortedListHead; leftSortedListHead = leftSortedListHead.getNext();}It feels a lot like at this point all you actually have to do is:if(leftSortedListHead != null) { mergedListTail.setNext(leftSortedListHead);} else if(rightSortedListHead != null) { mergedListTail.setNext(rightSortedListHead);}Each of the input lists is already a null terminated list and you don't need mergedListTail after this point, since you return the head, so you can just tack the rest of the input list onto the end.
_unix.339496
I'm running a read only filesystem on a raspberry pi so far everything works fine until i tried to mount /var as overlayfs for nginx and other services to work using this:VAROVRL=-o lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-workmount -t overlay ${VAROVRL} overlay /varwhile this is working and all services start no issues i noticed that the mount command outputs only the overlay mount and it gets duplicated every time i reboot.after 3 reboots:mountoverlay on /var type overlay (rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work)overlay on /var type overlay (rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work)overlay on /var type overlay (rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work)output of /etc/mount/dev/root / ext4 ro,relatime,data=ordered 0 0devtmpfs /dev devtmpfs rw,relatime,size=469532k,nr_inodes=117383,mode=755 0 0sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0proc /proc proc rw,relatime 0 0tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0debugfs /sys/kernel/debug debugfs rw,relatime 0 0mqueue /dev/mqueue mqueue rw,relatime 0 0configfs /sys/kernel/config configfs rw,relatime 0 0tmpfs /tmp tmpfs rw,relatime,size=102400k 0 0/dev/mmcblk0p1 /boot vfat ro,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 0 0/dev/mmcblk0p5 /mnt/persist ext4 rw,relatime,data=ordered 0 0/dev/mmcblk0p6 /mnt/cache ext4 rw,relatime,data=ordered 0 0/dev/mmcblk0p7 /mnt/osboot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 0 0/dev/mmcblk0p8 /mnt/osimage ext4 rw,relatime,data=ordered 0 0/dev/mmcblk0p9 /mnt/userdata ext4 rw,relatime,data=ordered 0 0overlay /etc overlay rw,relatime,lowerdir=/etc,upperdir=/mnt/persist/etc-rw,workdir=/mnt/persist/etc-work 0 0overlay /var overlay rw,relatime,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0output of /etc/mtaboverlay /var overlay rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0overlay /var overlay rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0overlay /var overlay rw,lowerdir=/var,upperdir=/mnt/persist/var-rw,workdir=/mnt/persist/var-work 0 0note that /etc is also mounted as overlayfs but does not generate this problem when it's the only overlay mount.anyone can spot something i'm doing wrong here?
mounting /var as overlayfs
filesystems;mount;raspbian;readonly;overlayfs
The file /etc/mtab is written by the mount and umount commands. Keeping it accurate requires a bit of work because they can only update /etc/mtab if that file is available and writable.For the usual case where /etc is mounted read-write at some point during boot, distributions set up a script that rewrite /etc/mtab during startup, as soon as the root partition has been mounted read-write. This is necessary in case the system shut down without unmounting everything (e.g. due to a system crash or power failure).In your case, where /etc is on overlayfs, either the startup script runs at the wrong time when /etc is still read-only, or it doesn't support the case of an overlay root. So if you want to keep /etc/mtab as a regular file, you'll have to tweak this script or the time when it's executed.But you probably don't need to do this. A common setup is to have /etc/mtab be a symbolic link to /proc/mounts. The two files contain mostly the same information with mostly the same syntax; from the point of view of applications that read them, they're compatible. Since /proc/mounts reflects the current kernel information, it is always up-to-date, and the mount and umount commands won't touch them.The downside of /proc/mounts compared with /etc/mtab is that it shows information (especially mount options) as printed back by the kernel, rather than the exact parameters passed to the mount command. So a little information is lost. That information is rarely useful though.
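For instance, converting a system to the symlink approach is typically a one-liner (run as root):

ln -sf /proc/self/mounts /etc/mtab

(/proc/mounts is itself a symlink to /proc/self/mounts, so either target works.)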
_unix.378
I am looking for a succinct howto on the basics.
Where are some good guides for making packages (deb, rpm, etc)?
packaging
The ubuntu packaging guide is a good introduction. The rest you can learn by studying existing packages, and reading manuals (CDBS, and of course Debian Policy). However, as directhex said, it depends a lot on the kind of package you work on.For RPM, I liked the Mandriva wiki, and some Fedora RPM Guide and Guidelines.
_softwareengineering.154310
I work for a small software company (about 200 people building 8-10 applications), and I was hoping to get some advice on products that might be out there to manage information about which clients are using which versions of our products.

The most fundamental relationship would be that a product has versions and a given version is used by a client. Uses would be:

- Determine which clients use which products
- Determine which clients are on which versions of a product
- Determine which clients are exposed to which vulnerabilities because of the version they use
- Determine which clients cannot move to a new version because of a vulnerability in the new version that they may hit
- Determine which clients should be approached for an upgrade

Any thoughts or product reviews would be greatly appreciated! Thanks in advance.
Options for Application Registry
design;architecture
There are lots of products for keeping software inventories (google for "software inventory"), but those are mostly aimed at gathering the complete software stack inside a company's network. Your case seems to be simple and small enough that someone could maintain that information more or less manually in an Excel sheet or an Access DB; nothing for which a full-blown product will pay off, I guess. That is probably the reason why you have trouble finding a ready-made solution for your case: it can be too easily solved with MS Office tools.

How to gather the information and transport it from your clients to you is a completely different question. There are lots of real-world examples of how online updaters can be designed to convince a user that he should allow a program to check for available updates and download those automatically (I don't think I have to list any examples; I am sure you know them). Part of that update process may be transferring the version information, together with the information about who is downloading the update, to you.
_unix.260527
I have partition names like Data-HD, Yhteinen and W10. I do have lots of other partitions, but I want a little more assurance that I am copying the right partition to the correct destination. I am quite error-prone with names like /dev/sdbd1 etc.

How do I get partition names to show?
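(For example, I gather that lsblk -o NAME,LABEL,SIZE can print the labels up front; what I'm really after is that reassurance at the moment I run dd or GDiskDump.)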
GDiskDump or DD to show partition names?
linux;partition;dd
null
_unix.14368
What are the differences between POSIX, the Single UNIX Specification, and the Open Group Base Specifications? I think their purpose is to determine whether an OS is Unix; is that right?
Difference between POSIX, Single UNIX Specification, and Open Group Base Specifications?
posix
Today, POSIX and SUS are basically the same thing; SUS encompasses a little more. Quoting here:

    Beginning in 1998, a joint working group known as the Austin Group began to develop the combined standard that would be known as the Single UNIX Specification Version 3 and as POSIX:2001 (formally: IEEE Std 1003.1-2001). It was released on January 30, 2002.

and

    In December 2008, the Austin Group published a new major revision, known as POSIX:2008 (formally: IEEE Std 1003.1-2008). This is the core of the Single UNIX Specification, Version 4.
_datascience.10143
I'm looking for information on how a Python machine learning project should be organized. For usual Python projects there is Cookiecutter, and for R there is ProjectTemplate. This is my current folder structure, but I'm mixing Jupyter Notebooks with actual Python code and it does not seem very clear.

.
├── cache
├── data
├── my_module
├── logs
├── notebooks
├── scripts
├── snippets
└── tools

I work in the scripts folder and currently add all the functions in files under my_module, but that leads to errors loading data (relative/absolute paths) and other problems.

I could not find proper best practices or good examples on this topic besides some Kaggle competition solutions and some notebooks that have all the functions condensed at the start of such a notebook.
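(For context, the Cookiecutter workflow I mean is just:

pip install cookiecutter
cookiecutter https://github.com/audreyr/cookiecutter-pypackage

and I'm hoping for an equivalent convention for ML work.)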
Python Machine Learning/Data Science Project Structure
python
null
_codereview.46635
I just finished writing a flat-file DB class for PHP which supports selecting, updating, inserting and deleting. I was wondering if there are any ways to make it faster or if I'm doing anything the wrong way.

<?php
class FlatDB {
    private static $field_deliemeter = "\t";
    private static $linebreak = "\n";
    private static $table_extension = '.tsv';
    public $table_name;
    public $table_contents = array("FIELDS" => NULL, "RECORDS" => NULL);

    /*
    ** This method creates a table
    **
    ** @param string $table_name
    **
    ** @example
    ** $db = new FlatDB;
    ** $db->createTable('Administrators');
    **/
    public function createTable($table_name, $table_fields) {
        // Create the file
        $tbl_name = $table_name.self::$table_extension;
        $header = '';
        foreach($table_fields as $field) {
            $header .= $field.self::$field_deliemeter;
        }
        file_put_contents($tbl_name, $header);
    }

    /*
    ** This method opens a table for querying, editing, etc
    **
    ** @param string $table_name
    **
    ** @example
    ** $db = new FlatDB;
    ** $db->openTable('Test.csv');
    **/
    public function openTable($table_name) {
        // Check if this table is found
        $table_name = $table_name.self::$table_extension;
        if(file_exists($table_name) === FALSE)
            throw new Exception('Table not found.');

        // Set the table in a property
        $this->table_name = $table_name;

        // Get the fields
        $table = file($this->table_name, FILE_IGNORE_NEW_LINES);
        $table_fields = explode(self::$field_deliemeter, $table[0]);
        unset($table[0]);

        // Put all records in an array
        $records = array();
        $num = 0;
        foreach($table as $record) {
            $records_temp = explode(self::$field_deliemeter, $record);
            $count = count($records_temp);
            for($i = 0; $i < $count; $i++)
                $records[$num][$table_fields[$i]] = $records_temp[$i];
            $num++;
        }
        $this->table_contents['FIELDS'] = $table_fields;
        $this->table_contents['RECORDS'] = $records;
    }

    /*
    ** This method returns fields selected by the user based on a where criteria
    **
    ** @param array $select an array containing the fields the user wants to select, if he wants all fields he should use a *
    ** @param array $where an array which has field => value combinations
    ** @return array it returns an array containing the records
    **
    ** @example
    ** $db = new FlatDB;
    ** $db->openTable('Test.csv');
    ** $select = array("id", "name", "group_id");
    ** $where = array("group_id" => 2);
    ** $db->getRecords($select, $where);
    **/
    public function getRecords($select, $where = array()) {
        // Do some checks
        if(is_array($select) === FALSE)
            throw new Exception('First argument must be an array');
        if(is_array($where) === FALSE && isset($where))
            throw new Exception('Second argument must be an array');
        if(empty($this->table_name) === TRUE)
            throw new Exception('There is no connection to a table opened.');

        // If the array contains only one key which is a *, then select all fields
        if($select[0] == '*')
            $select = $this->table_contents['FIELDS'];

        // Check if the field names in select are all found
        foreach($select as $field_name)
            if(in_array($field_name, $this->table_contents['FIELDS']) === FALSE)
                throw new Exception($field_name." is not found in the table.");

        // Check if the field names in where are all found
        foreach($where as $field_name => $value)
            if(in_array($field_name, $this->table_contents['FIELDS']) === FALSE)
                throw new Exception($field_name." is not found in the table.");

        // Find the record that the user queried in where
        $user_records = $this->table_contents['RECORDS'];
        if(isset($where)) {
            foreach($where as $field => $value) {
                foreach($this->table_contents['RECORDS'] as $key => $record) {
                    if($record[$field] != $value) {
                        unset($user_records[$key]);
                    }
                }
            }
        }

        // Preserve only the keys that the user asked for
        $final_array = array();
        $temp_fields = array_flip($select);
        foreach($user_records as &$record) {
            $final_array[] = array_intersect_key($record, $temp_fields);
        }
        return $final_array;
    }

    /*
    ** This method updates fields based on a criteria
    **
    ** @param array $update an array containing the fields the user wants to update
    ** @param array $where an array which has field => value combinations which is the criteria
    **
    ** @example
    ** $db = new FlatDB;
    ** $db->openTable('Test.csv');
    ** $update = array("group_id" => 1);
    ** $where = array("group_id" => 2);
    ** $db->updateRecords($update, $where);
    **/
    public function updateRecords($update, $where) {
        // Check if the connection is opened
        if(empty($this->table_name) === TRUE)
            throw new Exception('There is no connection to a table opened.');

        // Check if each field in update and where are found
        foreach($update as $field => $value)
            if(in_array($field, $this->table_contents['FIELDS']) === FALSE)
                throw new Exception($field." is not found.");
        foreach($where as $field => $value)
            if(in_array($field, $this->table_contents['FIELDS']) === FALSE)
                throw new Exception($field." is not found.");

        // Find the record that the user queried in where
        $user_records = $this->table_contents['RECORDS'];
        $preserved_records = array();
        foreach($where as $field => $value) {
            foreach($this->table_contents['RECORDS'] as $key => $record) {
                if($record[$field] != $value) {
                    unset($user_records[$key]);
                    $preserved_records[$key] = $record;
                }
            }
        }

        // Update whatever needs updating
        $user_records_temp = $user_records;
        foreach($user_records_temp as $key => $record) {
            foreach($update as $field => $value) {
                $user_records[$key][$field] = $value;
            }
        }

        // Merge the preserved records and the records that were updated, then sort them by their record number
        $user_records += $preserved_records;
        ksort($user_records, SORT_NUMERIC);

        // Modify the property of the records and insert the new table
        $this->table_contents['RECORDS'] = $user_records;

        // Implode it so we can save it in a file
        $final_array[] = implode(self::$field_deliemeter, $this->table_contents['FIELDS']);
        foreach($user_records as $record)
            $final_array[] = implode(self::$field_deliemeter, $record);

        // Implode by linebreaks
        $data = implode(self::$linebreak, $final_array);

        // Save the file
        file_put_contents($this->table_name, $data);
    }

    /*
    ** This method inserts a new record to the table
    **
    ** @param array $insert an array containing field => value combinations
    **
    ** @example
    ** $db = new FlatDB;
    ** $db->openTable('Test.csv');
    ** $array = array("id" => 7, "name" => "Jack", "password" => "1234567", "group_id" => 2);
    ** $db->insertRecord($array);
    **/
    public function insertRecord($insert) {
        if(is_array($insert) === FALSE)
            throw new Exception('The values need to be in an array');
        if(empty($this->table_name) === TRUE)
            throw new Exception('You need to open a connection to a table first.');

        // Check if each field in insert is found
        foreach($insert as $field => $value)
            if(in_array($field, $this->table_contents['FIELDS']) === FALSE)
                throw new Exception($field." is not found.");

        // Build the new record
        $newRecord = array();
        foreach($this->table_contents['FIELDS'] as $field) {
            if(isset($insert[$field]))
                $newRecord[$field] = $insert[$field];
            else
                $newRecord[$field] = NULL;
        }

        // Add the new record to the pre-existing table and save it in the records
        $records = $this->table_contents['RECORDS'];
        $records[] = $newRecord;
        $this->table_contents['RECORDS'] = $records;

        // Format it for saving
        $data = array();
        $data[] = implode(self::$field_deliemeter, $this->table_contents['FIELDS']);
        foreach($records as $record)
            $data[] = implode(self::$field_deliemeter, $record);

        // Implode by linebreaks
        $data = implode(self::$linebreak, $data);

        // Save in file
        file_put_contents($this->table_name, $data);
    }

    /*
    ** This method deletes records from a table
    **
    ** @param array $where an array which has field => value combinations which is the criteria
    **
    ** @example
    ** $db = new FlatDB;
    ** $db->openTable('Test.csv');
    ** $where = array("group_id" => 3);
    ** $db->deleteRecords($where);
    **/
    public function deleteRecords($where) {
        if(is_array($where) === FALSE)
            throw new Exception('The argument must be an array');
        if(empty($this->table_name) === TRUE)
            throw new Exception('You need to open a connection to a database first.');

        // Check if each field in where is found
        foreach($where as $field => $value)
            if(in_array($field, $this->table_contents['FIELDS']) === FALSE)
                throw new Exception($field." is not found.");

        // Find the records that match and delete them
        $records = $this->table_contents['RECORDS'];
        foreach($records as $key => $record) {
            foreach($where as $field => $value) {
                if($record[$field] == $value)
                    unset($records[$key]);
            }
        }

        // Save the records in the property
        $this->table_contents['RECORDS'] = $records;

        // Format it for saving
        $data = array();
        $data[] = implode(self::$field_deliemeter, $this->table_contents['FIELDS']);
        foreach($records as $record)
            $data[] = implode(self::$field_deliemeter, $record);

        // Implode by linebreaks
        $data = implode(self::$linebreak, $data);

        // Save the file
        file_put_contents($this->table_name, $data);
    }
}
?>
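For reference, the select/where behaviour implemented above (keep the rows whose fields match every field => value pair, then keep only the selected columns) is equivalent to this small Python sketch; it is purely illustrative, the names are invented, and it is not part of the class:

def get_records(rows, select, where=None):
    """Rows are dicts; keep those matching all where pairs, project to select."""
    where = where or {}
    matched = [r for r in rows if all(r.get(f) == v for f, v in where.items())]
    return [{f: r[f] for f in select} for r in matched]

rows = [{"id": 1, "name": "Jack", "group_id": 2},
        {"id": 2, "name": "Jill", "group_id": 3}]
print(get_records(rows, ["id", "name"], {"group_id": 2}))  # [{'id': 1, 'name': 'Jack'}]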
Flat-file DB with CRUD
php;classes;database;crud
null
_codereview.119508
This is a for-loop which runs once for each step with i > 1 and maybe once for 0 <= i <= 1.

var i;
var prob_ = 3.5;
for ( i = prob_; i >= 0; i -= 1 ) {
    if ( i > 1 || withProbability(i) ) {
        loadHttp(randomUrl());
    }
}

with

function withProbability(chance) {
    return Math.random() < chance;
}

JSLint gave me a for-loop error. Is this code ok despite that warning, or should this for-loop be changed to some other looping construct?

Background
prob_ is a ratio between fake actions and real actions. The loop is triggered once for each real action. If prob_ is greater than 1, the fake action is done. If it is in between 0 and 1, it is considered a probability and thus maybe a fake action follows. This is a case where it was not clear how to replace the for-loop with a forEach, as is often recommended. Maybe ES6 tail recursion could be used...

More context

function loadHttp(toLoad) {
    require("sdk/page-worker").Page({
        contentURL: toLoad
    });
}

The implementation of randomUrl() is rather lengthy and does not relate to the question. It determines a statistically probable length of an HTML (or embedded object like js, img, css, ...) site, then does a lookup for the next bigger URL in a hardcoded list. It is supposed that the randomness will confuse someone who watches for traffic patterns.

Full class coverTraffic.js
This is initialized via

const coverTraffic = require('./coverTraffic.js');
coverTraffic.setLoader(require('./load.js'));

with load.js containing something similar to loadHttp above. It is triggered via

function loads(URL) {
    if ( _.contains(activeHosts, URL.host) ) { // has already started
        coverTraffic.loadNext();
    } else {
        activeHosts.push(URL.host);
        coverTraffic.start();
    }
}
exports.loads = URL => loads(URL);

which is called whenever the browser loads a new URL.

"use strict";
exports.DOC = 'creates cover traffic, up to predetermined parameters';

const { setTimeout } = require("sdk/timers");
const coverUrl = require("./coverUrl.js");
const stats = require("./stats.js");

/** overhead of dummy traffic -1; 1.5 has overhead of 50% */
const FACTOR = 1.5;

var load;
var site_ = {};
var pad_ = {};
var prob_;

function setLoader(load_param) {
    load = load_param;
}
exports.setLoader = (load_param) => setLoader(load_param);

/** a website is loaded by the user. covertraffic determines a target
 * feature vector and adds to the load to approximate the target. */
function start() {
    site_ = {};
    pad_ = {};
    site_.html = stats.htmlSize(1); // td: buffer known sites in bloomfilter
    pad_.html = stats.htmlSize(FACTOR) - site_.html;
    site_.num_embedded = stats.numberEmbeddedObjects(1); // td: see above
    pad_.num_embedded = stats.numberEmbeddedObjects(FACTOR) - site_.num_embedded;
    prob_ = pad_.num_embedded / site_.num_embedded;
    // setTimeout(loadNext, stats.parsingTime());
    load.http(coverUrl.sized(pad_.html));
}
exports.start = start;

function loadNext() {
    var i;
    if ( pad_.num_embedded > 0 ) {
        for ( i = prob_; i >= 0; i -= 1 ) {
            if ( i > 1 || stats.withProbability(i) ) {
                load.http(coverUrl.sized(stats.embeddedObjectSize()));
                pad_.num_embedded -= 1;
            }
        }
    } else {
        console.log("cover traffic empty, num: " + pad_.num_embedded);
    }
}
exports.loadNext = loadNext;

The stats module provides statistical distributions; the load module just loads a URL as described above.
Creating cover traffic by calling random urls - Javascript loop to decrement float
javascript;random
The problem JSLint is pointing at is that you usually iterate with an index that is an integer. If you were to have an array, you couldn't use i to access the array elements, because if i is big enough, rounding issues may cause you to skip elements or see elements twice! What I'd recommend is that you just use i as an integer, and then once the for loop is done, check if there is an additional chance to roll for. That way, you don't have to worry about floating-point rounding errors accumulating. Another way you could do it is to roll first, and then if you roll high enough, bump i up by 1. That way you only have one place where you run your code. So like this:

var chance = i % 1;
if (chance !== 0 && withProbability(chance)){
    i = Math.ceil(i);
} else {
    i = Math.floor(i);
}
//for loop goes here
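Putting the two pieces together, the whole roll-then-round approach looks like this; a runnable Python sketch of the same logic, since the idea is language-independent (the print stands in for the real loadHttp call):

import math
import random

def times_to_run(prob):
    """For a ratio like 3.5: always 3 runs, plus one more with
    probability 0.5 (the fractional part), rolled exactly once."""
    chance = prob % 1
    if chance != 0 and random.random() < chance:
        return int(math.ceil(prob))
    return int(math.floor(prob))

for _ in range(times_to_run(3.5)):
    print("fake action")  # stands in for loadHttp(randomUrl())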
_webapps.104583
How do I set default accounts for YouTube and GMail?I have an email for school and an email for personal use. How can I make it so that, in Chrome, when I go to YouTube it logs me into my personal account but when I go to GMail it logs me into my school account?
How do I set default accounts for YouTube and GMail?
google chrome;gmail
null
_unix.327308
I'm new to Linux and its terminal, so forgive my errors, if any.

Issue
Whenever I type anything on my terminal the window keeps shrinking. However, once the terminal window is maximized this issue never occurs.

Specifications (according to the uname command)
OS: GNU/Linux
Kernel version: #1 SMP Debian 4.6.4-1kali1

Edit
By terminal here I mean the gnome-terminal. Only the width and height are affected, and the SHIFT and CTRL keys do not affect the window. The only keys affecting the window are the character keys, i.e. A-Z, 0-9, and other special symbols and punctuation, as well as the TAB key.
How to solve issue with terminal screen shrinking while typing?
kali linux;gnome terminal
null
_softwareengineering.294402
I am in the process of designing and building a small web app. While implementing a first prototype I discovered that I make many unwritten assumptions about the behaviour of the interface. For example:When the user selects a product in the product browser, the product inspector quickly slides out the left side displaying the product data. If the inspector is already opened, the data is only updated. The header background color changes with a radial animation.Right now it's just me on the project, so I define these implicit requirements on the fly. But after every change that could break something I have to recall all of them in order to test them. I obviously should write them down somewhere. I know unit testing, I know scenarios and personas, use-case diagrams; this seems to be something different and I don't know how to do it properly. So, what is the usual way of defining, documenting, and also guaranteeing such requirements? Where do I put them? How do I structure them? How do I test them effectively? Should I ship them with the source code or put them in the project Wiki?Also, these requirements are likely to change frequently, especially since I am using a rapid prototyping approach. I do not want to spend a lot of time drawing diagrams, etc. But simply dumping them into a text file without any structure seems to be a time waster as well, as soon as I have to test a specific part of the application.
Defining and testing detailed user interface requirements
testing;requirements;qa;ui
Sure, rapid prototyping can lead to frequent changes to your APIs for a time, but I would expect changes to eventually mature and stabilize after requirements fall into place, at which point you can essentially do an 'API freeze'. If you are making changes to undo past changes, then you may be getting ahead of yourself and straying away from the You Aren't Gonna Need It principle. The point here is to know when and how you need to put a stop to further requirements changes. With that out of the way, I think unit testing is definitely a potential starting point to validate any new changes you think you are going to implement against what is working currently. However, don't code weak unit tests that are either incomplete or only perform the more superficial test cases (e.g. making sure fail-fast for null arguments works, without stepping through the actual logic). Try to unit-test the more meaningful edge cases, which tend to be your first line of defense when you head back to the drawing board. It will be even more helpful if you can write your unit tests in a BDD style that gives a good definition of a class's contract. You should also try to document your API design/code from the viewpoint of an end user, or as a tabula rasa developer. Linking to the first point: once you feel you have reached a sufficiently mature point in your API design, start to document your codebase from scratch. Make sure to update the parts where your original 'implicit requirements' have changed. These can be included on your project's wiki page. If you are also placing your project documentation under version control, e.g. putting it up on GitHub, then you will also have a working version history to learn how the documentation plus your codebase have evolved.
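For instance, a BDD-style test suite reads as a list of behaviours rather than assertions about internals. A minimal Python sketch, where Inspector is a made-up stand-in for whatever class owns the behaviour under test:

import unittest

class Inspector:
    """Hypothetical class under test; replace with your own API."""
    def select(self, product):
        self.current = product

class DescribeProductInspector(unittest.TestCase):
    """Each test names one behaviour of the contract."""

    def test_it_shows_the_selected_product(self):
        inspector = Inspector()
        inspector.select("widget")
        self.assertEqual(inspector.current, "widget")

    def test_it_replaces_the_product_when_already_open(self):
        inspector = Inspector()
        inspector.select("widget")
        inspector.select("gadget")
        self.assertEqual(inspector.current, "gadget")

if __name__ == "__main__":
    unittest.main()

The test names double as the written-down requirements, so when one breaks you know which informal rule the change violated.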
_unix.171365
I have a Linux bridge whose first interface is eth0.123 (a VLAN-tagged interface) and one or more tap interfaces created by KVM (libvirt+KVM VMs).

[root@compute1 ~]# brctl show
bridge name     bridge id          STP enabled   interfaces
brq732eb7f9-16  8000.002590c6438e  no            eth0.123 tap81474f06-29 tap81474f06-30

I noticed that when I tried to ping IPs bound to VMs on the tap interfaces, it worked only intermittently. Sometimes the first VM spawned worked, sometimes the second worked. I fixed that problem by setting rp_filter=0.

Question 1: The gentleman at https://fvtool.wordpress.com/2013/04/19/install-kvm-on-centos-6-3-configure-networking-b/ has explained a similar problem, but I fail to understand why it works the first time but doesn't work for other tap interfaces. Could anyone explain why rp_filter is dropping those ping packets? When ping worked on these machines, ssh didn't. It was weird.

Question 2: When I was debugging the problem, tcpdump on eth0 revealed that I was receiving both ICMP request/reply packets, but the sending machine still reported 100% packet loss. Does tcpdump show packets even though they'll eventually be dropped by the kernel?
rp_filter dropping packets in linux bridges+vlan_tagged interface + tap interfaces configuration
networking;centos;bridge
null
_cseducators.2537
One way to extend the reach of your best students is to have an Honors Program that, perhaps, runs over several years. One model is to have a special program to which students are invited. Participation extends over several years, perhaps with separate classes. If you have experience with such a program, either as a participant, faculty member, or creator, what was the model, and what worked and didn't in your program? I was a faculty member for students in one such program but had little influence on its general structure. Students could designate any course(s) in the curriculum as an honors option. If the professor agreed, then the student and prof made a mini contract for extra work in the class. This was sort of small scale from my perspective but worked well in my (limited) experience. There were no special lectures for these students and the other students weren't affected. The honors student had to do more to earn an A in the course, but that was part of the contract. The only real difference was that I had to design a more complex project for the students (in my compiler course). When I was an undergrad at a liberal arts college we had a general honors program. I studied math and philosophy, but the program was tailored to the liberal arts. Students were invited at the end of the first year (of four). A faculty committee was responsible for invitations, though I suspect that a student could petition for entry. Every term the program had a different theme: literature, science, psychology, history, etc., depending on faculty interest. The students read a book on the topic of the term each week and met in seminar for a few hours once a week. The faculty said little as the students discussed the book. At the end of the second year of the program (third year of study) one of the seminars was public, held in an auditorium, with a reception following. These were well attended. The final year was different. We developed a thesis paper in our major study area, so mine was in the philosophy of math area. Since my major was fairly narrow (but also required science study) I came away with a nearly perfect Liberal Arts education. The only area of the Medieval University course of study that I missed was Astronomy. However, the honors courses were in addition to the regular student load. The courses were pretty intense, so they could lead to burnout. Also, someone already studying, say, philosophy, didn't get as much broadening as I did since they already studied many of the HP topics anyway. I liked that Honors Program model, then and now, but here I'm more interested in something specialized for Computer Science students. I think that both small-scale and large-scale programs are interesting and might serve as models for others to consider. Much later, I helped design a doctoral program in computing that had many features of the above honors model. Three years, two of study and one of dissertation. The years were calendar years with no breaks, and dissertation topics were expected to be ready to go by the beginning of the third year.
What are good models for an Honors Program for CS students?
best practice;student motivation;differentiation
null
_codereview.161386
I want to check if a string contains only a single occurrence of the given substring (so the result of containsOnce("foo-and-boo", "oo") should be false). In Java, a simple and straightforward implementation would be either

boolean containsOnce(final String s, final CharSequence substring) {
    final String substring0 = substring.toString();
    final int i = s.indexOf(substring0);
    return i != -1 && i == s.lastIndexOf(substring0);
}

or

boolean containsOnce(final String s, final CharSequence substring) {
    final String substring0 = substring.toString();
    final int i = s.indexOf(substring0);
    if (i == -1) {
        return false;
    }
    final int nextIndexOf = s.indexOf(substring0, i + 1);
    return nextIndexOf == 0 || nextIndexOf == -1; // nextIndexOf is 0 if both arguments are empty strings.
}

Can you suggest a simpler/more efficient implementation?
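For reference, in a language with a substring-count helper the check collapses to one comparison. A Python sketch; note the overlap caveat in the docstring, which is where this differs from the indexOf/lastIndexOf approach:

def contains_once(s, sub):
    """True if sub occurs exactly once in s.

    str.count() counts non-overlapping occurrences, so contains_once("aaa", "aa")
    is True here, while the indexOf/lastIndexOf approach sees two matches.
    """
    if not sub:  # empty substring: a degenerate case, decide its meaning explicitly
        return False
    return s.count(sub) == 1

assert contains_once("foo-and-boo", "oo") is False
assert contains_once("foo-and-bar", "oo") is True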
Checking whether a string contains a substring only once
java;strings
null
_codereview.46404
The InputHandler class for my game detects key presses, turns each one into a GameAction, and raises an event:

public class InputHandler
{
    public delegate void ActionListener(GameActions gameAction);
    public event ActionListener ActionRequested;

    private void ProcessKeyboard(KeyboardState keyboardState)
    {
        var pressedKeys = keyboardState.PressedKeys();
        foreach (var inputAction in pressedKeys.Select(GetInputAction))
        {
            ActionRequested(inputAction.Down);
        }
        var releasedKeys = keyboardState.ReleasedKeys(prevPressedKeys);
        foreach (var inputAction in releasedKeys.Select(GetInputAction))
        {
            ActionRequested(inputAction.Up);
        }
        prevPressedKeys = pressedKeys;
    }
}

Only one GameAction can be requested at a time. This means that if you press the key bound to GameActions.MoveUp and the key bound to GameActions.MoveLeft at the same time, two separate events are raised and it looks like your character is moving diagonally. Now, I need to detect this as a single GameAction, e.g. GameActions.MoveUpLeft. There has to be a better way to do it than what I've come up with.

private void ProcessKeyboard(KeyboardState keyboardState)
{
    var pressedKeys = keyboardState.PressedKeys();
    var inputActions = pressedKeys.Select(GetInputAction).ToList();
    var downActions = inputActions.Select(ia => ia.Down);
    if (downActions.Contains(GameActions.MoveUp) && downActions.Contains(GameActions.MoveLeft))
    {
        ActionRequested(GameActions.MoveUpLeft);
    }
    else if (downActions.Contains(GameActions.MoveUp) && downActions.Contains(GameActions.MoveRight))
    {
        ActionRequested(GameActions.MoveUpRight);
    }
    else if ...
}

I'm sure you can see how this can quickly get out of hand. How can I make it better?
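One common way out is to make the action a bit-flags type and raise a single event carrying the OR of everything pressed, so handlers test for components instead of enumerating every pair. A Python sketch of the idea (C#'s [Flags] enum supports the same pattern):

from enum import Flag, auto

class GameAction(Flag):
    NONE = 0
    MOVE_UP = auto()
    MOVE_DOWN = auto()
    MOVE_LEFT = auto()
    MOVE_RIGHT = auto()

def combined_action(pressed):
    """OR all per-key actions into one composite value."""
    action = GameAction.NONE
    for a in pressed:
        action |= a
    return action

combo = combined_action([GameAction.MOVE_UP, GameAction.MOVE_LEFT])
print(bool(combo & GameAction.MOVE_UP))    # True: a handler checks components
print(bool(combo & GameAction.MOVE_DOWN))  # False

This avoids needing a MoveUpLeft member at all: diagonal movement is just MOVE_UP | MOVE_LEFT.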
Turn multiple key presses into single GameAction
c#;xna
null
_codereview.171978
I am trying to create a logger wrapper around winston and express-winston (which enables middleware error logging) that is configurable. How could I improve this code?

//logger.ts
import * as winston from "winston";
import * as FileSystem from "fs";
import AppConfig from "../config/app/appConfig";
import CoreUtils from "../../common/utils/core";

// Express winston does not yet have declaration support.
const expressWinston = require("express-winston");

export default class Logger {
    public static loggerInstance: Logger;

    private static readonly CONFIG_KEY: string = "log";
    private static readonly CONFIG_KEY_LEVEL: string = "level";
    private static readonly CONFIG_KEY_TRANSPORT: string = "transports";
    private static readonly CONFIG_KEY_ENABLE_MIDDLEWARE: string = "enableMiddleware";
    private static readonly TRANSPORT_KEY_CONSOLE: string = "console";
    private static readonly TRANSPORT_KEY_FILE: string = "file";
    private static readonly LOG_DIR: string = "log";

    /**
     * Ensures a single logger object is maintained throughout the application.
     */
    public static getInstance(): Logger {
        let loggerInstance: Logger = Logger.loggerInstance;
        if (CoreUtils.isNull(loggerInstance)) {
            Logger.loggerInstance = loggerInstance = new Logger();
        }
        return loggerInstance;
    }

    public isMiddlewareEnabled: boolean = false;
    private logger: winston.LoggerInstance;
    private transports: winston.TransportInstance[] = [];

    constructor() {
        this.initialize();
    }

    public info(msg: string, logObject?: any): void {
        this.logger.info(msg, logObject);
    }

    public error(msg: string, logObject?: any): void {
        this.logger.error(msg, logObject);
    }

    public warn(msg: string, logObject?: any): void {
        this.logger.warn(msg, logObject);
    }

    public debug(msg: string, logObject?: any): void {
        this.logger.debug(msg, logObject);
    }

    public trace(msg: string, logObject?: any): void {
        this.logger.verbose(msg, logObject);
    }

    /**
     * Get express winston middleware configuration
     */
    public getMiddlewareLogger(): any {
        const options: any = { transports: this.transports };
        return expressWinston.logger(options);
    }

    private initialize() {
        const logConfig = AppConfig.getObject(Logger.CONFIG_KEY);
        const logLevel: string = logConfig[Logger.CONFIG_KEY_LEVEL];
        const logTransport: Object = logConfig[Logger.CONFIG_KEY_TRANSPORT];
        this.isMiddlewareEnabled = <boolean> logConfig[Logger.CONFIG_KEY_ENABLE_MIDDLEWARE];
        for (let key in logTransport) {
            if (logTransport.hasOwnProperty(key)) {
                if (key === Logger.TRANSPORT_KEY_CONSOLE) {
                    this.configureConsoleTransport(logTransport[key], logLevel);
                } else if (key === Logger.TRANSPORT_KEY_FILE) {
                    this.configureFileTransport(logTransport[key]);
                }
            }
        }
        this.logger = new (winston.Logger)({ transports: this.transports });
    }

    /**
     * Configuring console transport
     */
    private configureConsoleTransport(transport: Object, logLevel: string): void {
        const options: Object = Object.assign({ level: logLevel }, transport);
        this.transports.push(new (winston.transports.Console)(options));
    }

    /**
     * Configuring file transport
     */
    private configureFileTransport(transport: Object): void {
        const targetFileList: Object = AppConfig.getObject(Logger.CONFIG_KEY)["target"];
        // Creating log directory if it does not exist
        if (!FileSystem.existsSync(Logger.LOG_DIR)) {
            FileSystem.mkdirSync(Logger.LOG_DIR);
        }
        for (let key in targetFileList) {
            const options: Object = Object.assign({
                name: key,
                level: key,
                filename: `${Logger.LOG_DIR}/${targetFileList[key]}`
            }, transport);
            this.transports.push(new (winston.transports.File)(options));
        }
    }
}

//config.json
"log": {
    "level": "info",
    "enableMiddleware": true,
    "transports": {
        "console": {
            "colorize": true,
            "timestamp": true,
            "json": false,
            "showLevel": true
        },
        "file": {
            "json": true,
            "maxsize": 10485760,
            "maxfile": 5
        }
    },
    "target": {
        "trace": "trace.log",
        "debug": "debug.log",
        "error": "error.log",
        "warn": "info.log",
        "info": "info.log"
    }
}

//usage in main.ts
import Logger from "../core/common/logging/logger";
...
if (Logger.getInstance().isMiddlewareEnabled) {
    this.app.use(Logger.getInstance().getMiddlewareLogger());
}

What could be done better?
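For comparison, the overall shape here (one lazily created instance whose transports come from configuration) is the same in any ecosystem. A minimal Python sketch of the pattern using the standard logging module; the config keys are invented for the example and are not the config.json schema above:

import logging

_LOGGER = None  # module-level cache, the usual Python stand-in for a singleton

def get_logger(config):
    """Build handlers ("transports") once from a config dict, then reuse."""
    global _LOGGER
    if _LOGGER is None:
        _LOGGER = logging.getLogger("app")
        _LOGGER.setLevel(config.get("level", "INFO").upper())
        transports = config.get("transports", {})
        if "console" in transports:
            _LOGGER.addHandler(logging.StreamHandler())
        if "file" in transports:
            _LOGGER.addHandler(logging.FileHandler(transports["file"]["filename"]))
    return _LOGGER

log = get_logger({"level": "info",
                  "transports": {"console": {}, "file": {"filename": "info.log"}}})
log.info("configured once, reused everywhere")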
Creating a logger that wraps around winston and express-winston
javascript;node.js;express.js;typescript
null
_softwareengineering.338664
I want to confirm that my software works on an OS that is not installed on my workstation, so I want to use a virtual environment to test it. I am worried that there are disadvantages to that. Are there disadvantages to testing software in a virtual environment? By OS I mean operating system.
When I need to confirm that my software works on an OS not installed on my workstation, are there disadvantages to using a virtual environment?
testing
Virtual machines are very good. If your software works in a virtual machine, it should also work on a real machine. But sometimes software that works on a real machine fails in a virtual machine:

- if the software needs access to special hardware, like direct access to USB ports or graphics cards;
- if the software needs direct network access;
- if the software is related to virtualization. You can't create nested virtual machines.

So for most software, virtualization works well. There are a couple of disadvantages to virtual machines in general:

- they use a lot of RAM while they are running. This limits the number of virtual machines you can run at the same time.
- they take some time to boot up and shut down. You can't quickly test something.
- if the virtual machine runs on an emulator, the virtual machine will be much slower. This is necessary if the OS runs on a different CPU architecture. For example, you are developing on an x64 machine, but want to test on ARM (mobile phones) or SPARC (Sun/Oracle servers).

I have used a lot of virtual machines for testing, and it was much easier than running many physical machines.
_unix.204852
I'm currently running Debian Stretch (amd64) with Gnome Shell (Gnome 3). My hardware is an NVidia GT220, and I installed the proprietary drivers from the repository. I'm experiencing bad response times from Gnome Terminal, for example when coding in Vim: cursor movement is delayed, etc. Even while typing this message I get lag, hmm... Is there a bug in the drivers (340.76)? Has anyone heard about this problem, and maybe a solution? Or is this not a graphics problem, but the environment? CPU is an AMD Phenom II x4 945. glxgears returns ~6300 FPS.
Debian, slow response from Gnome Terminal in Gnome 3
gnome3;nvidia;gnome terminal;gnome shell
null
_unix.372199
First - sorry for the bad title. Consider the following:

root@debian-lap:/tmp echo "Step 1" || echo "Failed to execute step 1" ; echo "Step 2"
Step 1
Step 2
root@debian-lap:/tmp

As you can see, the 1st and 3rd echo commands executed normally. And if the first command failed I want to stop the script and exit from it:

root@debian-lap:/home/fugitive echo "Step 1" || echo "Failed to execute step 1" && exit 2 ; echo "Step 2"
Step 1
exit
fugitive@debian-lap:~$

The exit command executes and exits the shell, even though the exit code of the first command is 0. My question is - why? In translation, doesn't this say:

echo "Step 1"
if the command failed, echo 'Failed to execute step 1' and exit the script
else echo "Step 2"

Looking at this like:

cmd foo1 || cmd foo2 && exit

Shouldn't cmd foo2 and (&&) exit execute only when cmd foo1 failed? What am I missing?

Edit
I am adding a 2nd example, something that I am really trying to do (still a dummy test):

root@debian-lap:/tmp/bashtest a=$(ls) || echo "Failed" ; echo $a
test_file  # <- This is OK
root@debian-lap:

root@debian-lap:/tmp/bashtest a=$(ls) || echo "Unable to assign the variable" && exit 2; echo $a
exit
fugitive@debian-lap:~$  # <- this is the confusing part

root@debian-lap:/tmp/bashtest a=$(ls /tmpppp/notexist) || echo "Unable to assign the variable" ; echo $a
ls: cannot access /tmpppp/notexist: No such file or directory
Unable to assign the variable  # <- This is also OK
root@debian-lap:
Issue with booleans tests && and || in bash
bash;shell script;shell;boolean
Because the last command executed (the echo) succeeded. If you want to group the commands, it's clearer to use an if statement:

if ! step 1 ; then
    echo >&2 "Failed to execute step 1"
    exit 2
fi

You could also group the error message and exit with { ... } but that's somewhat harder to read. However, that does preserve the exact value of the exit status.

step 1 || { echo >&2 "step 1 failed with code: $?"; exit 2; }

Note, I changed the && to a semicolon, since I assume you want to exit even if the error message fails (and output those errors on stderr for good practice). For the if variant to preserve the exit status, you'd need to add your code to the else part:

if step 1; then
    : OK, do nothing
else
    echo >&2 "step 1 failed with code: $?"
    exit 2
fi

(that also makes it compatible with the Bourne shell, which didn't have the ! keyword).

As for why the commands group the way they do, the standard says:

An AND-OR list is a sequence of one or more pipelines separated by the operators && and ||. The operators && and || shall have equal precedence and shall be evaluated with left associativity.

Which means that something like somecmd || echo && exit acts as if somecmd and echo were grouped together first, i.e. { somecmd || echo; } && exit and not somecmd || { echo && exit; }.
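To see the left associativity mechanically, here is a tiny Python simulation of how a POSIX shell walks an AND-OR list; the commands are fake (name, status) pairs invented for the demo:

def run(cmd):
    name, status = cmd
    print("ran", name)
    return status

def and_or(first, *rest):
    """Equal precedence, strictly left to right, exactly as the standard says."""
    status = run(first)
    for op, cmd in rest:
        if (op == "||" and status != 0) or (op == "&&" and status == 0):
            status = run(cmd)
    return status

# a=$(ls) || echo "..." && exit  is walked as  (a || echo) && exit
and_or(("a=$(ls)", 0), ("||", ("echo", 0)), ("&&", ("exit", 0)))
# prints "ran a=$(ls)" then "ran exit": exit runs because the last status seen was 0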
_softwareengineering.335366
At my workplace we're soon going to be tasked with removing SQL injection vulnerabilities from a large code base. The application was originally written around 8 years ago and after years of bolt-ons and additional features, security is finally getting looked at. We'll be moving from using the mysql_ extension to PDO and prepared statements, binding parameters properly.We're looking at around 1100 queries, a reasonable mix of SELECT, UPDATE, INSERT, DELETE and the codebase is littered with mysql_fetch_assoc calls.What things can I do to make the process easier to manage?What other things can I do in addition to moving to prepared statements to prevent SQL injection?
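For concreteness, 'binding parameters properly' means user input travels as a bound value, never as SQL text. A minimal sketch of the idea in Python's DB-API, which has the same prepare/execute shape as PDO:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "1 OR 1=1"  # hostile input stays inert as a bound value
row = conn.execute("SELECT name FROM users WHERE id = ?", (user_input,)).fetchone()
print(row)  # None: the input was compared as a value, not spliced into the SQL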
Converting a large PHP codebase from mysql_ to PDO
php;mysql;sql injection
Personally, I think that sitting down and plowing through 1,100 database queries and converting them to PDO would drive me crazy. I'd opt for doing them in smaller bits, as you're working on other parts of the application. Fixing a bug? Adding a new feature? While you're in that area, convert the mysql_ code to use PDO before you move on to another part of the codebase. Now, if there's truly nothing else to be done to the app besides switching to PDO, then I don't know of any way other than starting from the top and converting them all one by one, testing each one as you go.
_codereview.149591
We have written a Python script using arcpy modules. It is was written by Python beginners and many parts of the code are written in 'unpythonic way'. The goal is to re-write or address 'unpythonic' code. The software used for this script are ArcMap 10.4 and written in Python 2.7. The complete script is available on GitHub.The main directory is C:/A__P6_GIS4/ and contains twelve sub-dirs the sub directories are actually names of counties, and each county sub-dir contains all relevant inputs inside the geodatabase or gdb file. First, the script loops and creates a .txt file with names of all the sub-dirs then uses this to loop through each sub-dir and execute set of functions for each county using set of if-else statements. The functions can be broadly broken by turf grass (TG) model, fractional (Frac) model, forest model and final model. The final model utilizes outputs from the aforementioned models and then creates the final model.Only contains main() due to character limitation:def main(): ALL_start_time = time.time() #ALL_start_time = timeit.default_timer() if arcpy.CheckExtension(Spatial) == Available: arcpy.AddMessage(Checking out Spatial) arcpy.CheckOutExtension(Spatial) else: arcpy.AddError(Unable to get spatial analyst extension) arcpy.AddMessage(arcpy.GetMessages(0)) sys.exit(0) # Create List of Counties for Loop arcpy.Delete_management(C:/GIS_temp/county_list.txt) co_list = C:/GIS_temp/county_list.txt county_list = open(co_list,a) #Open the list MainDIR = C:/A__P6_GIS4/ #Directory where LAND USE geodatabases are located for i in os.listdir(MainDIR): i_base = os.path.basename(i) county_list.write(i_base + \n) county_list.close() # Start Looping Through County List county_list = open(co_list,r) for i in county_list: #A text file is needed to ensure looping works! 
CoName = i.strip(\n) print CoName + started # Setup Directories MainDIR = C:/A__P6_GIS4/ #Directory where LAND USE geodatabases are located CntyDIR = os.path.join(MainDIR, CoName + /) OutputDIR = os.path.join(CntyDIR, Outputs/) TiffDIR = os.path.join(OutputDIR, CoName + _FINAL/) # Former FinalDirectory if not arcpy.Exists(OutputDIR): arcpy.CreateFolder_management(CntyDIR, Outputs) if not arcpy.Exists(TiffDIR): arcpy.CreateFolder_management(OutputDIR, CoName + _FINAL) Inputs = os.path.join(CntyDIR, CoName + _Inputs.gdb/) # Former CoGDB Temp_1m = os.path.join(OutputDIR, Temp_1m.gdb/) # Former TempGDB Temp_10m = os.path.join(OutputDIR, Temp_10m.gdb/) # Former Temp10GDB Final_10m = os.path.join(OutputDIR, Final_10m.gdb/) # Former Final_10m Final_1m = os.path.join(OutputDIR, Final_1m.gdb/) # Former LuGDB if not arcpy.Exists(Temp_1m): arcpy.CreateFileGDB_management(OutputDIR, Temp_1m.gdb) if not arcpy.Exists(Temp_10m): arcpy.CreateFileGDB_management(OutputDIR, Temp_10m.gdb) if not arcpy.Exists(Final_10m): arcpy.CreateFileGDB_management(OutputDIR, Final_10m.gdb) if not arcpy.Exists(Final_1m): arcpy.CreateFileGDB_management(OutputDIR, Final_1m.gdb) arcpy.Copy_management(Inputs + CoName + _IR_1m, Final_1m + CoName + _IR_1m) arcpy.Copy_management(Inputs + CoName + _INR_1m, Final_1m + CoName + _INR_1m) arcpy.Copy_management(Inputs + CoName + _TCoI_1m, Final_1m + CoName + _TCoI_1m) arcpy.Copy_management(Inputs + CoName + _WAT_1m, Final_1m + CoName + _WAT_1m) arcpy.Copy_management(Inputs + CoName + _LC, Final_1m + CoName + _LandCover) arcpy.env.overwriteOutput = True coord_data = Inputs + CoName + _Snap arcpy.env.outputCoordinateSystem = arcpy.Describe(coord_data).spatialReference arcpy.env.workspace = Temp_1m arcpy.env.scratchWorkspace = Temp_1m arcpy.env.extent = os.path.join(str(Final_1m) + str(CoName) + _IR_1m) arcpy.env.parallelProcessingFactor = 100% arcpy.env.snapRaster = str(Final_1m) + str(CoName) + _IR_1m #location of the default snap raster # Local variables: BAR = os.path.join(str(Inputs) + str(CoName) + _Barren) BEACH = os.path.join(str(Inputs) + str(Inputs),str(CoName) + _MOBeach) cc_wetlands = os.path.join(str(Inputs), str(CoName) +_WL) crpCDL = os.path.join(str(Inputs) + str(CoName) + _crpCDL) DEMstrm = os.path.join(str(Inputs) + str(CoName) + _Stream) DEV_UAC = os.path.join(str(Inputs) + str(CoName) + _DEV_UAC) DEV113 = os.path.join(str(Inputs) + str(CoName) + _DEV113) DEV37 = os.path.join(str(Inputs) + str(CoName) + _DEV37) DEV27 = os.path.join(str(Inputs) + str(CoName) + _DEV27) DEV18 = os.path.join(str(Inputs) + str(CoName) + _DEV18) fc_Tidal = os.path.join(str(Inputs), str(CoName) +_mask_tidal) fc_FPlain = os.path.join(str(Inputs), str(CoName) +_mask_fplain) fc_OTHWL = os.path.join(str(Inputs), str(CoName) +_mask_oth_wl) FEDS_sm = os.path.join(str(Inputs) + str(CoName) + _FedPark_small) FEDS_med = os.path.join(str(Inputs) + str(CoName) + _FedPark_medium) FEDS_lrg = os.path.join(str(Inputs) + str(CoName) + _FedPark_large) FINR_LU = os.path.join(str(Inputs) + str(CoName) + _FracINR) FTG_LU = os.path.join(str(Inputs) + str(CoName) + _FracTG) INST = os.path.join(str(Inputs) + str(CoName) + _TurfNT) T_LANDUSE = os.path.join(str(Inputs) + str(CoName) + _TgLU) M_LANDUSE = os.path.join(str(Inputs) + str(CoName) + _MoLU) LV = os.path.join(str(Inputs) + str(CoName) + _LV) MINE = os.path.join(str(Inputs) + str(CoName) + _ExtLFill) nwi_Tidal = os.path.join(str(Inputs), str(CoName) +_Tidal) nwi_FPlain = os.path.join(str(Inputs), str(CoName) +_NTFPW) nwi_OTHWL = os.path.join(str(Inputs), 
str(CoName) +_OtherWL) PARCELS = os.path.join(str(Inputs) + str(CoName) + _Parcels) pa_wetlands = os.path.join(str(Inputs), str(CoName) +_PA_wet) pasCDL = os.path.join(str(Inputs) + str(CoName) + _pasCDL) ROW = os.path.join(str(Inputs) + str(CoName) + _RoW) SS = os.path.join(str(Inputs),str(CoName) + _SS) TC = os.path.join(str(Inputs) + str(CoName) + _TC) Snap = os.path.join(str(Inputs) + str(CoName) + _Snap) # 1 meter LU Rasters - Listed in Hierarchical Order: IR = os.path.join(str(Final_1m) + str(CoName) + _IR_1m) INR = os.path.join(str(Final_1m) + str(CoName) + _INR_1m) TCI = os.path.join(str(Final_1m) + str(CoName) + _TCoI_1m) WAT = os.path.join(str(Final_1m) + str(CoName) + _WAT_1m) WLT = os.path.join(str(Final_1m) + str(CoName) + _WLT_1m) WLF = os.path.join(str(Final_1m) + str(CoName) + _WLF_1m) WLO = os.path.join(str(Final_1m) + str(CoName) + _WLO_1m) FOR = os.path.join(str(Final_1m) + str(CoName) + _FOR_1m) TCT = os.path.join(str(Final_1m) + str(CoName) + _TCT_1m) MO = os.path.join(str(Final_1m) + str(CoName) + _MO_1m) FTG1 = os.path.join(str(Final_1m) + str(CoName) + _FTG1_1m) FTG2 = os.path.join(str(Final_1m) + str(CoName) + _FTG2_1m) FTG3 = os.path.join(str(Final_1m) + str(CoName) + _FTG3_1m) FINR = os.path.join(str(Final_1m) + str(CoName) + _FINR_1m) TG = os.path.join(str(Final_1m) + str(CoName) + _TG_1m) # Temporary Datasets CDEdge = os.path.join(str(Temp_1m) + str(CoName) + _EDGE) EDGE = os.path.join(str(Temp_1m) + str(CoName) + _EDGE) FINRtemp = os.path.join(str(Temp_1m) + str(CoName) + _FINRtemp) FTGMask = os.path.join(str(Temp_1m) + str(CoName) + _FTGmask) FTGparcels = os.path.join(str(Temp_1m),str(CoName) + _FTG_parcels) FTGtemp = os.path.join(str(Temp_1m) + str(CoName) + _FTGtemp) FTGtemp2 = os.path.join(str(Temp_1m) + str(CoName) + _FTGtemp2) FTGtemp3 = os.path.join(str(Temp_1m) + str(CoName) + _FTGtemp3) HERB = os.path.join(str(Temp_1m) + str(CoName) + _Herb) INRmask = os.path.join(str(Temp_1m),str(CoName) + _INRmask) MOherb = os.path.join(str(Temp_1m) + str(CoName) + _MOherb) POT_FOR = os.path.join(str(Temp_1m),str(CoName) + _potFOR) RLTCP = os.path.join(str(Temp_1m) + str(CoName) + _RLTCP) RTmask = os.path.join(str(Temp_1m) + str(CoName) + _RTmask) RURmask = os.path.join(str(Temp_1m) + str(CoName) + _RURmask) TGMask = os.path.join(str(Temp_1m) + str(CoName) + _TGmask) TURFparcels = os.path.join(str(Temp_1m),str(CoName) + _TURF_parcels) TURFtemp = os.path.join(str(Temp_1m) + str(CoName) + _TURFtemp) TREES = os.path.join(str(Temp_1m) + str(CoName) + _MOTrees) URBmask = os.path.join(str(Temp_1m) + str(CoName) + _URBmask) WAT_FOR = os.path.join(str(Temp_1m),str(CoName) + _watFOR) print (IR, arcpy.Exists(IR)) print (INR, arcpy.Exists(INR)) print (TCI, arcpy.Exists(TCI)) print(WAT, arcpy.Exists(WAT)) print(BAR, arcpy.Exists(BAR)) print(LV, arcpy.Exists(LV)) print(SS, arcpy.Exists(SS)) print(TC, arcpy.Exists(TC)) print(BAR, arcpy.Exists(BAR)) print(DEV_UAC, arcpy.Exists(DEV_UAC)) print(DEV113, arcpy.Exists(DEV113)) print(DEV37, arcpy.Exists(DEV37)) print(DEV27, arcpy.Exists(DEV27)) print(DEV18, arcpy.Exists(DEV18)) print(FEDS_sm, arcpy.Exists(FEDS_sm)) print(FEDS_med, arcpy.Exists(FEDS_med)) print(FEDS_lrg, arcpy.Exists(FEDS_lrg)) print(BEACH, arcpy.Exists(BEACH)) print(MINE, arcpy.Exists(MINE)) print(T_LANDUSE, arcpy.Exists(T_LANDUSE)) print(M_LANDUSE, arcpy.Exists(M_LANDUSE)) print(FINR_LU, arcpy.Exists(FINR_LU)) print(FTG_LU, arcpy.Exists(FTG_LU)) print(INST, arcpy.Exists(INST)) print(PARCELS, arcpy.Exists(PARCELS)) print(ROW, arcpy.Exists(ROW)) 
########################## START ALL MODELS #################################### #ALL_start_time = time.time() #------------------------- TURF & FRACTIONAL MODELS ----------------------------- start_time = time.time() arcpy.Delete_management(str(Temp_1m) + Parcel_IMP) arcpy.Delete_management(str(Temp_1m) + Parcel_IMP2) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _INRmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RTmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_TURFtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_TURF) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TURF_parcels) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_FTGtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_FTG) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTG_parcels) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TGmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TURFtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGtemp2) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGtemp3) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FINRtemp) arcpy.Delete_management(str(Final_1m) + str(CoName) + _TG_1m) arcpy.Delete_management(str(Final_1m) + str(CoName) + _TCI_1m) arcpy.Delete_management(str(Final_1m) + str(CoName) + _FTG1_1m) arcpy.Delete_management(str(Final_1m) + str(CoName) + _FTG2_1m) arcpy.Delete_management(str(Final_1m) + str(CoName) + _FTG3_1m) arcpy.Delete_management(str(Final_1m) + str(CoName) + _FINR_1m) print(--- Removal of TURF & FRAC Duplicate Files Complete %s seconds --- % (time.time() - start_time)) # Call each function, passing the necessary variables... 
turf_1(CoName, Temp_1m, INR, IR, INRmask, TCI) turf_2(CoName, Temp_1m, HERB, BAR, LV) turf_3(CoName, Temp_1m, DEV18, DEV27) # # TURF 4: Create Parcel-based Turf and Fractional Turf Masks if arcpy.Exists(PARCELS): turf_4a(CoName, Temp_1m, PARCELS, IR) turf_4b(CoName, Temp_1m, PARCELS) turf_4c(CoName, Temp_1m, PARCELS) turf_4d(CoName, Temp_1m, DEV_UAC, RTmask, ROW, INST, T_LANDUSE, TURFparcels) turf_4e(CoName, Temp_1m, FTG_LU, FEDS_sm, FTGparcels) else: turf_5a(CoName, Temp_1m, DEV_UAC, RTmask, ROW, INST, T_LANDUSE) turf_5b(CoName, Temp_1m, FTG_LU, FEDS_sm) turf_6(CoName, Temp_1m, Final_1m, HERB, TGMask, TURFtemp) frac_1(CoName, Final_1m, HERB, FTGMask, FTGtemp) frac_2(CoName, Final_1m, HERB, FEDS_med, FTGtemp2) frac_3(CoName, Final_1m, HERB, FEDS_lrg, FTGtemp3) frac_4(CoName, Final_1m, FINR_LU, HERB, FINRtemp) # TURF & FRACTIONAL Clean up start_time = time.time() arcpy.Delete_management(str(Temp_1m) + Parcel_IMP) arcpy.Delete_management(str(Temp_1m) + Parcel_IMP2) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RTmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_TURFtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_TURF) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TURF_parcels) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_FTGtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _Parcels_FTG) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTG_parcels) #arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TGmask) #arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TURFtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGtemp2) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FTGtemp3) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _FINRtemp) print(--- TURF & FRAC Clean Up Complete %s seconds --- % (time.time() - start_time)) #--------------------------------FOREST MODEL---------------------------------------- start_time = time.time() arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RLTCP) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _EDGE) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _CDEdge) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _URBmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RURmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _CDEdge) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _URB_TCT) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RUR_TCT) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TCT1) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _nonTCT) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _potFOR) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _NATnhbrs) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _ForRG) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _MOtemp) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _MOspace) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _MOherb) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _MOTrees) arcpy.Delete_management(str(Final_1m) + str(CoName) + _FOR_1m) arcpy.Delete_management(str(Final_1m) + str(CoName) + _MO_1m) for_1(CoName, DEV113, TC) for_2(CoName, TC, DEV27) for_3(CoName, Temp_1m, RLTCP, EDGE) for_4(CoName, DEV37, CDEdge) for_5(CoName, DEV18, TC) for_6(CoName, Temp_1m, Final_1m) for_7(CoName, TCT, TC) for_8(CoName, Temp_1m, Final_1m, TC, WAT, WLF, WLO, WLT, WAT_FOR, POT_FOR) 
#---------------------------MIXED OPEN MODEL----------------------------------------------------- # MO 1: Create Mixed Open with just MOtrees and Scrub-shrub (no ancillary data) inrasListMO = [ ] if arcpy.Exists(BEACH): inrasListMO.append(BEACH) if arcpy.Exists(M_LANDUSE): inrasListMO.append(M_LANDUSE) if arcpy.Exists(MINE): inrasListMO.append(MINE) if not inrasListMO: mo_1(CoName, Temp_1m, Final_1m, TREES, SS) else: mo_2a(CoName, Temp_1m, inrasListMO) mo_2b(CoName, Temp_1m, BAR, HERB, LV) mo_2c(CoName, Temp_1m, HERB, MOherb) mo_2d(CoName, Temp_1m, Final_1m, MOherb, TREES, SS) # FOREST & MIXED OPEN Clean up start_time = time.time() arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RLTCP) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _EDGE) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _CDEdge) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _URBmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RURmask) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _CDEdge) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _URB_TCT) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _RUR_TCT) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _TCT1) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _nonTCT) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _potFOR) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _NATnhbrs) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _ForRG) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _MOtemp) #arcpy.Delete_management(str(Temp_1m) + str(CoName) + _MOspace) arcpy.Delete_management(str(Temp_1m) + str(CoName) + _MOherb) print(--- FOREST & MIXED OPEN Clean Up Complete %s seconds --- % (time.time() - start_time)) #----------------------FINAL AGGREGATION MODEL----------------------------------------- print (IR, arcpy.Exists(IR)) print (INR, arcpy.Exists(INR)) print (TCI, arcpy.Exists(TCI)) print (WAT, arcpy.Exists(WAT)) print (WLT, arcpy.Exists(WLT)) print (WLF, arcpy.Exists(WLF)) print (WLO, arcpy.Exists(WLO)) print (FOR, arcpy.Exists(FOR)) print (TCT, arcpy.Exists(TCT)) print (MO, arcpy.Exists(MO)) print (FTG1, arcpy.Exists(FTG1)) print (FTG2, arcpy.Exists(FTG2)) print (FTG3, arcpy.Exists(FTG3)) print (FINR, arcpy.Exists(FINR)) print (TG, arcpy.Exists(TG)) final_1(CoName, Temp_1m, IR, INR, TCI, WAT, WLT, WLF, WLO, FOR, TCT, MO, FTG1, FTG2, FTG3, FINR, TG) final_2(CoName, Temp_1m, Temp_10m, Final_1m, Snap, CntyDIR) final_3(CoName, Temp_10m, DEMstrm, crpCDL, pasCDL, Snap) final_4(CoName, Temp_10m, FTG1, FTG2, FTG3, FINR, WLT, WLF, WLO, Snap) final_5(CoName, Temp_10m, Final_10m, TiffDIR, WLT, WLF, Snap) print(--- All Models Complete %s seconds --- % (time.time() - ALL_start_time))## ############################################################################### <<< END MAIN >>>## ############################################################################## Need this to execute main()if __name__ == __main__: main()
ArcPy script to analyze land use in counties
python;beginner;python 2.7;file system;geospatial
Main
Use a helper function to separate units of work. Namely, I would use a function performing the required operations on a county and call it from main.

def main(main_directory="C:/A__P6_GIS4/"):
    if arcpy.CheckExtension("Spatial") != "Available":
        arcpy.AddError("Unable to get spatial analyst extension")
        arcpy.AddMessage(arcpy.GetMessages(0))
        sys.exit(1)

    arcpy.AddMessage("Checking out Spatial")
    arcpy.CheckOutExtension("Spatial")

    for county in os.listdir(main_directory):
        manage_county(main_directory, county)

And that's all you need in your main. Separating this manage_county function into turf, forest, mixed and final subfunctions could be a good thing to do too. A few things to note:

- sys.exit(0) means there was no error, so better use an exit status of 1 to indicate an error;
- os.listdir can be used directly to iterate over the county directories; there is no need to use a file as a buffer;
- using a parameter with a default value can help with reusability/maintenance, as the function can easily be tested and such a value is not buried within the code;
- timing and debug printing can be delegated to helper functions/decorators, more on that later.

Manage_county
The main issue with the rest of the code is the amount of redundant lines of code one can read. Once again, helper functions can help reduce the amount of repetition. Loops are also a great way to perform the same operation on copious amounts of filenames. You should also take some time to remove useless variables, such as BEACH, which is defined, printed, tested for existence, but nothing useful is done with it. You should also read PEP 8 and the official naming conventions to make the code look like Python code, and avoid needless abbreviations in your variable names. String management is also a mess: there are a lot of useless calls to str, as the variables they are applied to are already strings; os.path.join is mainly applied to a single string, thus it is just noise; and str.format should be preferred to string concatenation.

FEATURE_NAME_PATTERN = '{}/{}_{}'
FEATURE_1M_PATTERN = '{}/{}_{}_1m'

def create_directory(root, directory_name):
    directory = os.path.join(root, directory_name)
    if not arcpy.Exists(directory):
        arcpy.CreateFolder_management(root, directory_name)
    return directory

def create_geodatabase(root, filename):
    file_name = os.path.join(root, filename)
    created = False
    if not arcpy.Exists(file_name):
        arcpy.CreateFileGDB_management(root, filename)
        created = True
    return file_name, created

def manage_county(root, county_name):
    county_directory = os.path.join(root, county_name)
    output_directory = create_directory(county_directory, 'Outputs')
    tiff_directory = create_directory(output_directory, county_name + '_FINAL')
    inputs = os.path.join(county_directory, county_name + '_Inputs.gdb')  # Former CoGDB
    temp_1m, _ = create_geodatabase(output_directory, 'Temp_1m.gdb')  # Former TempGDB
    temp_10m, _ = create_geodatabase(output_directory, 'Temp_10m.gdb')  # Former Temp10GDB
    final_10m, _ = create_geodatabase(output_directory, 'Final_10m.gdb')  # Former Final_10m
    final_1m, created = create_geodatabase(output_directory, 'Final_1m.gdb')  # Former LuGDB

    if created:
        for feature in ['IR', 'INR', 'TCoI', 'WAT']:
            feature_in = FEATURE_1M_PATTERN.format(inputs, county_name, feature)
            feature_out = FEATURE_1M_PATTERN.format(final_1m, county_name, feature)
            arcpy.Copy_management(feature_in, feature_out)
        arcpy.Copy_management(
                FEATURE_NAME_PATTERN.format(inputs, county_name, 'LC'),
                FEATURE_NAME_PATTERN.format(final_1m, county_name, 'LandCover'))

    arcpy.env.overwriteOutput = True
    coord_data = FEATURE_NAME_PATTERN.format(inputs, county_name, 'Snap')
    ir_1m_path = FEATURE_1M_PATTERN.format(final_1m, county_name, 'IR')
    arcpy.env.outputCoordinateSystem = arcpy.Describe(coord_data).spatialReference
    arcpy.env.workspace = temp_1m
    arcpy.env.scratchWorkspace = temp_1m
    arcpy.env.extent = ir_1m_path
    arcpy.env.parallelProcessingFactor = "100%"
    arcpy.env.snapRaster = ir_1m_path  # location of the default snap raster

    #------------------------- TURF & FRACTIONAL MODELS -----------------------------
    for parcel in ['IMP', 'IMP2']:
        arcpy.Delete_management('{}/Parcel_{}'.format(temp_1m, parcel))
    for feature in ['INRmask', 'RTmask', 'Parcels_TURFtemp', 'Parcels_TURF',
                    'TURF_parcels', 'Parcels_FTGtemp', 'Parcels_FTG', 'FTG_parcels',
                    'TGmask', 'FTGmask', 'TURFtemp', 'FTGtemp', 'FTGtemp2',
                    'FTGtemp3', 'FINRtemp']:
        arcpy.Delete_management(FEATURE_NAME_PATTERN.format(temp_1m, county_name, feature))
    for feature in ['TG', 'TCI', 'FTG1', 'FTG2', 'FTG3', 'FINR']:
        arcpy.Delete_management(FEATURE_1M_PATTERN.format(final_1m, county_name, feature))

    # Call each function, passing the necessary variables...
    turf_1(final_1m, county_name, temp_1m)
    turf_2(inputs, county_name, temp_1m)
    turf_3(inputs, county_name, temp_1m)

    # # TURF 4: Create Parcel-based Turf and Fractional Turf Masks
    if arcpy.Exists('{}/{}_Parcels'.format(inputs, county_name)):
        turf_4a(inputs, county_name, temp_1m, final_1m)
        turf_4b(inputs, county_name, temp_1m)
        turf_4c(inputs, county_name, temp_1m)
        turf_4d(inputs, county_name, temp_1m)
        turf_4e(inputs, county_name, temp_1m)
    else:
        turf_5a(inputs, county_name, temp_1m)
        turf_5b(inputs, county_name, temp_1m)
    turf_6(inputs, county_name, temp_1m, final_1m)
    frac_1(inputs, county_name, final_1m)
    frac_2(inputs, county_name, final_1m)
    frac_3(inputs, county_name, final_1m)
    frac_4(inputs, county_name, final_1m)

    # TURF & FRACTIONAL Clean up
    for parcel in ['IMP', 'IMP2']:
        arcpy.Delete_management('{}/Parcel_{}'.format(temp_1m, parcel))
    for feature in ['INRmask', 'RTmask', 'Parcels_TURFtemp', 'Parcels_TURF',
                    'TURF_parcels', 'Parcels_FTGtemp', 'Parcels_FTG', 'FTG_parcels',
                    'TGmask', 'FTGmask', 'TURFtemp', 'FTGtemp', 'FTGtemp2',
                    'FTGtemp3', 'FINRtemp']:
        arcpy.Delete_management(FEATURE_NAME_PATTERN.format(temp_1m, county_name, feature))

    #--------------------------------FOREST MODEL----------------------------------------
    for feature in ['RLTCP', 'EDGE', 'CDEdge', 'URBmask', 'RURmask', 'URB_TCT',
                    'RUR_TCT', 'TCT1', 'nonTCT', 'potFor', 'NATnhbrs', 'ForRG',
                    'MOtemp', 'MOspace', 'MOherb', 'MOTrees']:
        arcpy.Delete_management(FEATURE_NAME_PATTERN.format(temp_1m, county_name, feature))
    for feature in ['FOR', 'MO']:
        arcpy.Delete_management(FEATURE_1M_PATTERN.format(final_1m, county_name, feature))
    for_1(inputs, county_name)
    for_2(inputs, county_name)
    for_3(inputs, county_name)
    for_4(inputs, county_name)
    for_5(inputs, county_name)
    for_6(inputs, county_name, temp_1m, final_1m)
    for_7(inputs, county_name)
    for_8(inputs, county_name, temp_1m, final_1m)

    #---------------------------MIXED OPEN MODEL-----------------------------------------------------
    # MO 1: Create Mixed Open with just MOtrees and Scrub-shrub (no ancillary data)
    inras_list_MO = [
        name for name in ['MOBeach', 'MoLU', 'ExtLFill']
        if arcpy.Exists(FEATURE_NAME_PATTERN.format(inputs, county_name, name))
    ]
    if not inras_list_MO:
        mo_1(inputs, county_name, temp_1m, final_1m)
    else:
        mo_2a(inputs, county_name, temp_1m, inras_list_MO)
        mo_2b(inputs, county_name, temp_1m)
        mo_2c(inputs, county_name, temp_1m)
        mo_2d(inputs, county_name, temp_1m, final_1m)

    # FOREST & MIXED OPEN Clean up
    for feature in ['RLTCP', 'EDGE', 'CDEdge', 'URBmask', 'RURmask', 'URB_TCT',
                    'RUR_TCT', 'TCT1', 'nonTCT', 'potFor', 'NATnhbrs', 'ForRG',
                    'MOtemp', 'MOspace', 'MOherb', 'MOTrees']:
        arcpy.Delete_management(FEATURE_NAME_PATTERN.format(temp_1m, county_name, feature))

    #----------------------FINAL AGGREGATION MODEL-----------------------------------------
    final_1(inputs, county_name, temp_1m)
    final_2(inputs, county_name, temp_1m, temp_10m, final_1m, county_directory)
    final_3(inputs, county_name, temp_10m)
    final_4(inputs, county_name, temp_10m)
    final_5(inputs, county_name, temp_10m, final_10m)

You'll see that there are still some repetitions, especially when deleting features from temp_1m before and after computations. But they are handled with less verbosity. However, I don't find any advantage in removing them both before and after. Either you leave the file clean for the next computation, or you clean it before your own, but doing both is counterproductive as one of them will yield no results. Instead, I recommend only deleting before your computation to start from a clean state and let the next computation perform its own cleanup when necessary.

You'll also note that I removed most of the parameters from each intermediate call. This is because they are variables that are unnecessary for this function. Instead, it is better to define them at the beginning of each of your helper functions. This is also the reason I added inputs and final_1m (but I might have missed some) as the first parameter for each of the calls. For instance, the first lines of turf_1 can become:

def turf_1(final_1m, county_name, temp_1m):
    INRmask = FEATURE_NAME_PATTERN.format(temp_1m, county_name, 'INRmask')
    IR = FEATURE_1M_PATTERN.format(final_1m, county_name, 'IR')
    INR = FEATURE_1M_PATTERN.format(final_1m, county_name, 'INR')
    TCI = FEATURE_1M_PATTERN.format(final_1m, county_name, 'TCoI')
    ...

Timing and debug prints
Even though debug prints inform the user that something is going on, they get in the way during development and maintenance. Instead, you could reduce the amount of information printed and rely on a helper function to time the execution:

import time
from functools import wraps

def timer(func):
    @wraps(func)
    def wrapper(*args):
        start = time.time()  # or time.perf_counter() in Python 3
        print 'Starting', func.__name__, args
        func(*args)
        end = time.time()  # or time.perf_counter()
        print 'Computation time:', end - start
    return wrapper

Usage being:

@timer
def manage_county(root, county_name):
    # rest of the code

And you can decorate other functions as well to get outputs more often.
_codereview.27461
When I wrote this it just felt messy due to the null checks. Any good ideas on how to clean it up would be most appreciated.

def getItemsInStock(self):
    itemsInStock = []
    items = self.soup.find_all("ul", {'class': re.compile('results.*')})
    getItems = [x for x in items if x.find("div", class_="quantity")]
    if getItems:
        itemsInStock += getItems
    pages = self.combThroughPages()
    if pages:
        for each in pages:
            self.uri = each
            soup = self.createSoup()
            items = soup.find_all("ul", {'class': re.compile('results.*')})
            if items:
                inStock = [x for x in items if x.find("div", class_="quantity")]
                if inStock:
                    itemsInStock += inStock
    if itemsInStock:
        return self.__returnItemDetailAsDictionary(itemsInStock)
    else:
        return None
refactor Python code with lots of None type checks
python
If a function like self.combThroughPages() returns None or a list of items, you can wrap such a call in a function listify() that makes None into an empty list:

def listify(l):
    if l is None:
        return []
    return l

and make the call like:

...
for each in listify(self.combThroughPages()):
    self.uri = each
...

This works even if you don't have control over the definition of combThroughPages(). Sometimes you have functions that return None, or a single item, or a list. I use a slightly more elaborate version of listify() myself to handle that:

def listify(val, none=None):
    """return a list if val is only an element. if val is None, normally it
    is returned as is, however if none is set and True then [none] is
    returned as a list, if none is set but not True ( listify(val, False) ),
    then the empty list is returned.
    listify(None) == None
    listify(None, 1) == [1]
    listify(None, [2]) == [[2]]
    listify(None, False) == []
    listify(3) == [3]
    listify([4]) == [4]
    listify(5, 6) == [5]
    """
    if val is None:
        if none is None:
            return None
        elif none:
            return [none]
        else:
            return []
    if isinstance(val, list):
        return val
    return [val]

In your case that version would be called like:

...
for each in listify(self.combThroughPages(), False):
    self.uri = each
...

Since you are not further using the temporary lists getItems or inStock, you can get rid of them and directly append items found to itemsInStock. This would get you (assuming you have the extended version of listify in your scope):

def getItemsInStock(self):
    itemsInStock = []
    for item in self.soup.find_all("ul", {'class': re.compile('results.*')}):
        if item.find("div", class_="quantity"):
            itemsInStock.append(item)
    for self.uri in listify(self.combThroughPages(), False):
        soup = self.createSoup()
        for item in listify(soup.find_all("ul", {'class': re.compile('results.*')}), False):
            if item.find("div", class_="quantity"):
                itemsInStock.append(item)
    if itemsInStock:
        return self.__returnItemDetailAsDictionary(itemsInStock)
    else:
        return None

It is of course impossible to test without context, but this should work. I also removed the variable each directly setting self.uri. I can only assume that self.createSoup is dependent on the value of self.uri, otherwise I am not sure why you would have differences in calling createSoup. Of course you don't need listify() around self.combThroughPages() if you change the latter to return an empty list, as @ruds already proposed; that would work as well. In that case I would probably also have getItemsInStock() return an empty dictionary ( return {} ) depending on how that function itself is called:

iis = self.getItemsInStock()
if iis:
    for key, value in iis.iteritems()

could then be changed to:

for key, value in iis.iteritems():

(or you can write a dictify() function).
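The dictify() mentioned at the end could be as small as the sketch below; this is one possible reading of it, not a definition given in the answer:

def dictify(d):
    """Mirror of listify() for mappings: None becomes an empty dict
    so callers can always iterate without a guard."""
    return {} if d is None else d

items = None  # e.g. a getItemsInStock() that returned None
for key, value in dictify(items).items():
    print(key, value)  # simply does not run when there was nothing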