Dataset columns: id (string, lengths 5 to 27), question (string, lengths 19 to 69.9k), title (string, lengths 1 to 150), tags (string, lengths 1 to 118), accepted_answer (string, lengths 4 to 29.9k).
_softwareengineering.33758
I've been using Eclipse for a long time to do development. One of the problems I've come across when working on other people's projects is that if they come from source control, some of the Eclipse project files (default.properties and other XML config files) are missing. It's usually a big pain in the butt to get the project running in Eclipse. I understand the reasoning for not tracking certain files, because they may be full of stuff specific to a certain Eclipse install. How do all of you manage that?
Managing Eclipse projects in source control
development process;version control;eclipse
My solution has always been to not check those files in. I have really never liked it when I do a checkout and have to filter through all the IDE-specific stuff. Why not use a common ground? If the IDE files are the common ground, great. But, more often than not, tools like Ant and Maven are the common ground. It depends on the audience, but I generally avoid it. Build tools are mostly universal, and IDEs fall near the edge of the text-editor religious wars. I'm a peaceful guy and practice my religion in private.
_webapps.103479
I got a message on my mobile today: "There's been a change to your Google account", along with a prompt to re-enter my password. Could someone be using my account, or is this a known issue?
Why did I get a There's been a change to your Google account message and password reprompt?
google account;security
null
_unix.339815
I am trying to make an if-statement that says: If a file in this directory has .PDF extension do this, otherwise do this...And I'm having a hard time figuring out how to do this in bash. I checked here : https://stackoverflow.com/questions/3856747/check-whether-a-certain-file-type-extension-exists-in-directory first but the solutions listed either don't work or give errors I don't know how to fix. Attached below is my script. I slightly modified the script I was working on in my previous question here: Change only the extension of a fileThe script is below: #!/bin/bash#shebang for bourne shell executionecho Hello this is task 1 of the homework #Initial prompt used for testing to see if script ran#shopt allows configuration of the shell -s dotglob allows the script to run on dot-filesshopt -s dotglob#loop to iterate through each file in Task1 directory and rename them#loop runs if there is a file type of .PDF in the folder Task1, if there isn't displays that there isn't a file of that type and terminates if [ [ -n $(echo *.PDF) ] ] # what should I put here??? then for file in Task1/*.PDF; do mv $file ${file%.PDF}.pdf echo $file has been updated doneelse echo No files of that type...fi~ **Edit 1: ** As per Ipor Sircer's answer below I changed my script to the following:#!/bin/bash#shebang for bourne shell executionecho Hello this is task 1 of the homework #Initial prompt used for testing to see if script ran#shopt allows configuration of the shell -s dotglob allows the script to run on dot-filesshopt -s dotglob#loop to iterate through each file in Task1 directory and rename them#loop runs if there is a file type of .PDF in the folder Task1, if there isn't displays that there isn't a file of that type and terminates if [ ls -1 *.PDF|xargs -l1 bash -c 'mv $0 ${0%.PDF}.pdf' ]then for file in Task1/*.PDF; do mv $file ${file%.PDF}.pdf echo $file has been updated doneelse echo No files of that type...fiI get the following errors: Hello this is task 1 of the homework./Shell1.sh: line 12: [: missing `]'mv: cannot stat ']': No such file or directoryNo files of that type...Edit 2As per Eric Renouf's comment fixing the spacing gives me the following script: #!/bin/bash#shebang for bourne shell executionecho Hello this is task 1 of the homework #Initial prompt used for testing to see if script ran#shopt allows configuration of the shell -s dotglob allows the script to run on dot-filesshopt -s dotglob#loop to iterate through each file in Task1 directory and rename them#loop runs if there is a file type of .PDF in the folder Task1, if there isn't displays that there isn't a file of that type and terminates if [[ -n $(echo *.PDF) ]]then for file in Task1/*.PDF; do mv $file ${file%.PDF}.pdf echo $file has been updated doneelse echo No files of that type...fiHowever if I run it twice I see the following: Hello this is task 1 of the homeworkmv: cannot stat 'Task1/*.PDF': No such file or directoryTask1/*.PDF has been updatedWhy don't I just see the else echo since there are no files of type .PDF in the folder anymore?
If a file exists in a directory do...?
bash;shell script
null
_codereview.120080
I have create a functioning automated traffic light sequence using an array and if statements. It all work correctly but I am wondering if there is anything more I can do to improve my code without changing to structure or way it works, so without the use of dictionary's etc. <!DOCTYPE html><head> <title> Traffic Light</title> <style> .rainbow { background-image: -webkit-gradient( linear, left top, right top, color-stop(0, red), color-stop(0.1, yellow), color-stop(0.2, green)); background-image: gradient( linear, left top, right top, color-stop(0, #f22), color-stop(0.15, #f2f), color-stop(0.3, #22f), color-stop(0.45, #2ff), color-stop(0.6, #2f2),color-stop(0.75, #2f2), color-stop(0.9, #ff2), color-stop(1, #f22) ); color:transparent; -webkit-background-clip: text; background-clip: text; } </style></head><body background=street.gif> <h1 class=rainbow>Traffic Light</h1> <canvas id=myCanvas width=200 height=300 style=border:1px solid #000000;> Your browser does not support the HTML5 canvas tag. </canvas> <script> var c = document.getElementById(myCanvas); var ctx = c.getContext(2d); ctx.rect(0, 0, 200, 300); ctx.fillStyle = grey; ctx.fill(); var colours=[red, yellow, green, black,red yellow]; var current=colours[0]; function offlight() { ctx.beginPath(); ctx.arc(95,50,40,10,12*Math.PI); ctx.fillStyle = black; ctx.fill(); ctx.stroke(); } function offlight1() { ctx.beginPath(); ctx.arc(95,150,40,10,12*Math.PI); ctx.fillStyle = black; ctx.fill(); ctx.stroke(); } function offlight2() { ctx.beginPath(); ctx.arc(95,250,40,10,12*Math.PI); ctx.fillStyle = black; ctx.fill(); ctx.stroke(); } function drawLight1() { ctx.beginPath(); ctx.arc(95,50,40,10,12*Math.PI); ctx.fillStyle = red; ctx.fill(); ctx.stroke(); } function drawLight2() { ctx.beginPath(); ctx.arc(95,150,40,10,12*Math.PI); ctx.fillStyle = yellow; ctx.fill(); ctx.stroke(); } function drawLight3() { ctx.beginPath(); ctx.arc(95,250,40,10,12*Math.PI); ctx.fillStyle = green; ctx.fill(); ctx.stroke(); } function changelight(){ if (current==colours[0]){ drawLight1(); offlight1(); offlight2(); current=colours[4] } else if (current==colours[4]){ drawLight1(); drawLight2(); offlight2(); current=colours[2] } else if (current==colours[2]) { offlight(); offlight1(); drawLight3(); current=colours[3] } else if (current==colours[3]){ offlight(); drawLight2(); offlight2(); current=colours[0] } } setInterval(changelight,1000); </script> <br><br> <button onclick=changelight()>Click</button></body>
Check of a traffic light sequence using an array and if statements
javascript;html;css
null
_unix.184379
I am trying to copy files from one server directly to another, bypassing my local computer. I did
scp -r [email protected]:~/data/* [email protected]:~/data/
Password:
Host key verification failed.
lost connection
Is this even possible? How may I fix it?
Scp from one server to another server?
scp
null
_vi.12109
I need to create a regex in Vim (for plugin purposes) which would work as follows.
Expected behavior: | | characters indicate the current cursor position, bold words indicate matching words.
Example nr 1:
foo bar foo bar
foo b|a|r foo bar
foo bar foo bar
The two bars from the first line match. The first bar from the second line does not match because the cursor is currently placed on it. The second bar in the second line does match. The two bars from the third line match.
Example nr 2:
foo bar foo bar
foo bar foo bar
fo|o| bar foo bar
The two foos from the first line match. The two foos from the second line match. The first foo from the third line does not match, as the cursor is currently placed on it. The second foo from the third line does match.
So I was tinkering a bit with it and I managed to create a regex which will match all words like the one under the cursor:
putting word under cursor into variable
let current_word = expand('<cword>')
regex matching all words like one under cursor
'\k*\<\V'.current_word.'\m\>\k*'
I cannot figure out how to exclude the word under the cursor from the matches. I found this special character \%# which, according to the Vim help, "Matches with the cursor position. (...)" Yet I couldn't figure out how to use it in my case. Any ideas?
VIM regex - match all words equal to one under cursor, except one currently under the cursor
regular expression
\%# will always track the cursor's position, but it will only work when matching text in the window. If that's what you want, this is the pattern to use:\%(\k*\%#\k*\)\@!\<bar\>It matches any bar, but excludes any word that's under the cursor. Using this in a / search will jump to the first match, which will seem strange if the cursor is currently over bar.You can use this pattern in :match if you just want to highlight all occurrences of <cword> that aren't under the cursor:autocmd CursorMoved * execute 'match Error /\%(\k*\%#\k*\)\@!\<' . expand('<cword>') . '\>/' If you want to match <cword> at the time of the search, you will need to create a more complicated pattern:function! s:search_allbut_cword() abort let p1 = searchpos('\<.', 'nbcW') let p2 = searchpos('.\>', 'ncW') return '/\%(\%' . p1[0] . 'l' \. '\%>' . (p1[1] - 1) . 'c' \. '\%<' . (p2[1] + 1) . 'c\)' \. '\@!\<' . expand('<cword>') . '\>' . \<cr>endfunctionnnoremap <expr> <leader>* <sid>search_allbut_cword()p1 is the position at the beginning of <cword> and p2 is the end. With those, it creates a pattern that matches <cword> except on the current line between two columns. This slightly flawed since it doesn't accurately account for changes made in the excluded region.
_softwareengineering.341090
I have a limited amount of input types:
34:56 = sensorA#, sensorA#, sensorB#
2:5 = { led# }
66 = otherSensor
2,3,4,5 = greenRelay#, redRelay#, relayA#, relayA#
a:b implies a range. {name} implies a global name for the dataset. # represents automatic enumeration in the name (not relevant for the question). Single values or comma-separated values mean what you'd expect. Fewer names than values implies automatic name assignment (not relevant for the question).
I need to extract the numerical values from the left side of the expression and the names from the right side so I can iterate to assign the names to the values. I don't know how to handle this task; I've been reading and I have sought a solution, but I'd like to reach a good methodology for this case.
Should I replace all the spaces and tabs before processing?
Should I use regex just to verify the correctness of the input, or for something more?
Should I use just plain string manipulation? I'm using Golang and strings are immutable; string manipulation implies allocations and a lot of code (speed is not REALLY important here, but I'd like to find the correct way to solve this).
Should I write a lexer and parser for this?
How to parse a simple custom syntax in Go?
parsing;methodology;regular expressions;go
Should I replace all the spaces and tabs before processing?You can do this if you want whitespace to be as meaningless as it is in c, c++, java, c#. This means doing a double pass over the file. For very large files this can be prohibitive because it forces you to hold the whole thing in memory or create a temp file. There are techniques to consume whitespace on the fly. Consider them before you resort to this.Should I use regex just to verify the correctness of the input or for something more?Not every language can be validated with regex. Be sure of which category you're in before you commit to it.Should I use just plain string manipulation? I'm using Golang and strings are immutable, string manipulation implies allocations and a lot of code (speed is not REALLY important here but I'd like to find the correct way to solve this).A lot of code is not a good way to define a language. Here's a good way:http://www.bottlecaps.de/rr/uiShould I write a lexer and parser for this?This offers the most power of anything you've mentioned. There are likely simpler alternatives that center around reusing parsers written for things like json or xml but then you're just shoving your input types into a different data format.
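Not Go, but here is a rough sketch of the plain split-and-expand option, written in Python only for brevity (the function name parse_line and the exact cleanup rules are my own assumptions, not something from the question); the same logic ports directly to Go with strings.Split, strings.TrimSpace and strconv.Atoi:

def parse_line(line):
    # "34:56 = sensorA#, sensorB#"  ->  ([34, ..., 56], ["sensorA#", "sensorB#"])
    left, right = (part.strip() for part in line.split("=", 1))
    if ":" in left:                      # a:b means an inclusive range
        lo, hi = (int(v) for v in left.split(":", 1))
        values = list(range(lo, hi + 1))
    else:                                # single value or comma-separated values
        values = [int(v) for v in left.split(",")]
    right = right.strip("{} ")           # {name} marks a global dataset name
    names = [n.strip() for n in right.split(",")] if right else []
    return values, names

# parse_line("2,3,4,5 = greenRelay#, redRelay#") -> ([2, 3, 4, 5], ["greenRelay#", "redRelay#"])

If the format is expected to grow, the lexer/parser route scales better; for the four input shapes shown, something this small may be enough.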
_webapps.92616
I've been trying to find a way using Twitter Advanced Search to return retweets made by a certain user. I'm trying to use a 3rd party tool called Twools to pull just retweets using a search. I've tried using this query, but it doesn't seem to return any retweets:
from:username include:retweets
I'm not sure such a thing exists, but I'd love it if there were something like
from:username onlyinclude:retweets
Is there any secret search syntax that might be usable?
Use Twitter Advanced search to find retweets made by a single account?
twitter
It seems like the syntax for showing all the retweets by a user is supposed to be:from:username include:retweets filter:retweetsHowever, it seems all of this search syntax is broken in modern Twitter, where retweets are treated as a reference to the original tweet, rather than a separate RT @originaluser stuff and things tweet from the retweeter. filter:retweets picks up any tweet that happens to contain the word RT anywhere in the tweet but not actual retweets anymore.
_unix.35956
I've found only puf (Parallel URL fetcher), but I couldn't get it to work with reading URLs from a file, and something like puf < urls.txt does not work either. The operating system installed on the server is Ubuntu.
Is there parallel wget? Something like fping but only for downloading?
ubuntu;download;parallel
You can implement that using Python and the pycurl library. The pycurl library has the multi interface that implements its own even loop that enables multiple simultaneous connections. However the interface is rather C-like and therefore a bit cumbersome as compared to other, more Pythonic, code. I wrote a wrapper for it that builds a more complete browser-like client on top of it. You can use that as an example. See the pycopia.WWW.client module. The HTTPConnectionManager wraps the multi interface.
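A rough sketch of that multi-interface pattern, adapted from the standard pycurl retriever example (the output file names and the 1-second select timeout are arbitrary choices, not requirements):

import pycurl

# One URL per line, e.g. the urls.txt from the question.
urls = [line.strip() for line in open("urls.txt") if line.strip()]

multi = pycurl.CurlMulti()
handles = []
for i, url in enumerate(urls):
    handle = pycurl.Curl()
    handle.out = open("download.%03d" % i, "wb")   # where this URL's body goes
    handle.setopt(pycurl.URL, url)
    handle.setopt(pycurl.WRITEDATA, handle.out)
    handle.setopt(pycurl.FOLLOWLOCATION, True)
    multi.add_handle(handle)
    handles.append(handle)

# Drive all transfers concurrently inside pycurl's own event loop.
active = len(handles)
while active:
    while True:
        status, active = multi.perform()
        if status != pycurl.E_CALL_MULTI_PERFORM:
            break
    multi.select(1.0)            # wait until some socket is ready again

for handle in handles:
    handle.out.close()
    multi.remove_handle(handle)
    handle.close()
multi.close()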
_datascience.14852
I want to perform SGD on the following neural network:Training set size = 200000input layer size = 784hidden layer size = 50output layer size = 10I have an algorithm that performs batch gradient descent. The following python function calculates cost function and gradients for batch gradient descent: def cost(theta,X,y,lamb): #get theta1 and theta2 from unrolled theta vector th1 = (theta[0:(hiddenLayerSize*(inputLayerSize+1))].reshape((inputLayerSize+1,hiddenLayerSize))).T th2 = (theta[(hiddenLayerSize*(inputLayerSize+1)):].reshape((hiddenLayerSize+1,outputLayerSize))).T#matrices to store gradient of theta1 &theta2 th1_grad = np.zeros(th1.shape) th2_grad = np.zeros(th2.shape) I = np.identity(outputLayerSize,int) Y = np.zeros((realTrainSetSize ,outputLayerSize)) #get Y[i] to the size of output Layer for i in range(0,realTrainSetSize ): Y[i] = I[y[i]] #add bais unit in each training example and perform forward prop and backprop A1 = np.hstack([np.ones((realTrainSetSize ,1)),X]) Z2 = A1 @ (th1.T) A2 = np.hstack([np.ones((len(Z2),1)),sigmoid(Z2)]) Z3 = A2 @ (th2.T) H = A3 = sigmoid(Z3) penalty = (lamb/(2*trainSetSize))*(sum(sum(np.delete(th1,0,1)**2))+ sum(sum(np.delete(th2,0,1)**2)) ) J = (1/2)*sum(sum( np.multiply(-Y,log(H)) - np.multiply((1-Y),log(1-H)) )) sigma3 = A3 - Y; sigma2 = np.multiply(sigma3@theta2,sigmoidGradient(np.hstack([np.ones((len(Z2),1)),Z2]))) sigma2 = np.delete(sigma2,0,1) delta_1 = sigma2.T @ A1 delta_2 = sigma3.T @ A2 th1_grad = np.divide(delta_1,trainSetSize)+(lamb/trainSetSize)*(np.hstack([np.zeros((len(th1),1)) , np.delete(th1,0,1)])) th2_grad = np.divide(delta_2,trainSetSize)+(lamb/trainSetSize)*(np.hstack([np.zeros((len(th2),1)) , np.delete(th2,0,1)])) #unroll gradients of theta1 and theta2 theta_grad = np.concatenate(((th1_grad.T).ravel(),(th2_grad.T).ravel())) return (J,theta_grad)I guess to perform SGD , the cost function should be modified to perform calculations on single training data(array of size 784) and then theta should be updated for each training data. Is it the correct way of implementing SGD ?If yes, I am not able to get this cost function to work for single training data , if no, then what is the correct way to implement SGD on a neural network ?
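For reference, the usual way to turn a batch routine like the one above into SGD is to shuffle the training examples every epoch and update theta after each single example (or small mini-batch) instead of once per full pass. A minimal sketch, assuming cost(theta, X, y, lamb) computes (J, grad) for whatever subset of rows it is given (the function name sgd and the hyperparameter values are just placeholders):

import numpy as np

def sgd(theta, X, y, lamb, cost, learning_rate=0.01, epochs=10, batch_size=1):
    m = X.shape[0]                                # number of training examples
    for epoch in range(epochs):
        order = np.random.permutation(m)          # visit examples in random order
        for start in range(0, m, batch_size):
            idx = order[start:start + batch_size] # one example (or a tiny mini-batch)
            J, grad = cost(theta, X[idx], y[idx], lamb)
            theta = theta - learning_rate * grad  # update immediately, not once per pass
    return theta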
How can I perform stochastic gradient descent for training neural network?
machine learning;python;neural network;data mining;gradient descent
null
_cogsci.5048
I was intrigued to read (in the question What positive writing exercises improve happiness?) the idea of a gratitude diary suggested as an intervention that causes psychological well-being levels to increase in a lasting way.Empirical studies suggest that people who use gratitude journals feel better about their lives and report fewer symptoms of illness. (Emmons & McCullough, 2003; Doverspike; Emmons Lab.)However, there's also lots of dubious law of attraction-style writing on the topic, like this:Did you know that appreciation, gratitude and love are the highest forms of vibration? You can only have one vibration at a time, and if you are noticing what you appreciate and noticing what you are grateful for, you can't be noticing what you don't like.And although studies find gratitude journals to be beneficial, they disagree on the most beneficial ways to keep one. Psychologist William Doverspike says:A daily gratitude intervention (self-guided exercises) resulted in more positive effects tha[n] did the weekly intervention.But Jason Marsh of the Greater Good Science Center at the University of California, Berkeley says:Writing occasionally (once or twice per week) is more beneficial than daily journaling. In fact, one study by psychologist Sonja Lyubomirsky and her colleagues found that people who wrote in their gratitude journals once a week for six weeks reported boosts in happiness afterward; people who wrote three times per week didnt.So: which is likely to be correct in practice? And what best practices for keeping a gratitude journal can be inferred from other research, either specifically about gratitude or more generally in the cognitive sciences?References:Doverspike, Ph.D., William F. Gratitude: A Key to Happiness. Georgia Psychological Association.Emmons, R. A. & McCullough, M. E. (2003). Counting blessings versus burdens: An experimental investigation of gratitude and subjective well-being in daily life. Journal of Personality and Social Psychology, 84, 377-389.Gratitude and Well-Being. Emmons Lab at the University of California, Davis.Update (9th Dec 2013):I'm continuing to research this question. This article, while not terribly scientific, has some good jumping off points (infographic and links) that may help potential answerers!
Evidence-based best practices for writing a gratitude journal
emotion;positive psychology;positive thinking
null
_cs.10447
I am currently learning how randomised Hashing works. So, you have a class (aka family) $H$ of hash functions, each of which maps the universe $U$ to the hash table $N$.That class is called strongly universal or pairwise independent if $\forall x,y \in U, x \neq y: \forall z_1,z_2 \in N: \Pr\limits_{h \in H}[h(x) = z_1 \land h(y) = z_2] \leq \frac{1}{|N|^2}$. In words: pick any two elements from the universe and two from the hash table. If you pick a hash function from the hash class at random, the probability that these two elements are mapped to each other by $h$ is less or equal than $\frac{1}{|N|^2}$.Now, what is confusing me is that, since $x$, $y$, $z_1$ and $z_2$ are all completely independent, it looks to me like you could just remove one pair from the equation and still get the same result. That would be $\forall x \in U: \forall z \in N: \Pr\limits_{h \in H}[h(x) = z] \leq \frac{1}{|N|}$. This, however, is called uniformity of a hash class.Could someone explain to me why these two attributes are different from one anoter?
Hash function - uniformity / strong universality
data structures;hash tables;hash
Arnab provided the answer. The family $\mathcal{H} = \{h_i : i \in N\}$, where $h_i(x) = i$ for all $x \in U$, is uniform but not pairwise independent. Similarly you can come up with families which are pairwise but not $3$-wise independent, and so on. To give a simple example, let $X,Y$ be two independent uniformly random coin tosses. Each of the possibilities $(H,H),(H,T),(T,H),(T,T)$ has the same probability. Now let $X'$ be a uniformly random coin toss, and let $Y' = X'$. Now it is not true that each of the four possibilities of $(X',Y')$ has the same probability, but each of $X',Y'$ by itself is a uniformly random coin toss.
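If it helps to see the difference numerically, here is a tiny Python check of the constant-function family above (the universe and table sizes are arbitrary toy choices):

U = [0, 1, 2]          # a tiny universe
N = [0, 1]             # hash table slots
family = [{x: z for x in U} for z in N]    # h_i(x) = i for every x: constant functions

# Uniformity: for fixed x and z, Pr over h of h(x) = z is 1/|N|.
x, z = 0, 1
print(sum(h[x] == z for h in family) / len(family))   # 0.5 = 1/|N|

# Pairwise independence fails: both values always land in the same slot.
x, y, z1, z2 = 0, 1, 0, 0
print(sum(h[x] == z1 and h[y] == z2 for h in family) / len(family))   # 0.5 > 1/|N|^2 = 0.25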
_codereview.58886
I am trying to compare the RemoveAt() function performance in an array and linked list.For array:public T RemoveAt(int index){ if (index >= this.count || index < 0) { throw new ArgumentOutOfRangeException( Invalid index: + index); } T item = this.arr[index]; Array.Copy(this.arr, index + 1, this.arr, index, this.count - index - 1); this.arr[this.count - 1] = default(T); this.count--; return item;}For linked list:public T RemoveAt(int index){ if (index < 0 || index >= count) { throw new IndexOutOfRangeException(Invalid Index + index); } else { int currentindex = 0; ListNode currentnode = this.head; ListNode prevnode = null; while (currentindex<index) { prevnode = currentnode; currentnode = currentnode.nextnode; currentindex++; } // Remove the found element from the list of nodes RemoveListNode(currentnode, prevnode); // Return the removed element return currentnode.element; }}private void RemoveListNode(ListNode node, ListNode prevNode){ prevNode.nextnode = node.nextnode;}Main program:I have inserted 10K elements in each and I am trying to remove the 500th element.Stopwatch s=new Stopwatch();CustomArrayList<int> listusingArray = new CustomArrayList<int>(10000);Console.WriteLine(Deleting 500th elements from array........\n);s.Start();listusingArray.RemoveAt(500);s.Stop();Console.WriteLine(Time taken to delete from array: + s.Elapsed);DynamicList<int> listusingDynamic = new DynamicList<int>();s.Reset();Console.WriteLine(Removing 500th elements from Link List........\n);s.Start();listusingDynamic.RemoveAt(500);s.Stop();Console.WriteLine(Time taken to remove from list: + s.Elapsed);Output:Time taken to delete from array :00:00:00.0003040Time taken to delete from list :00:00:00.0008685Shouldn't the linked list Remove at() function be faster as it avoids Array.Copy in array?
Linked list vs array performance for RemoveAt() function
c#;performance;array;linked list
null
_unix.254594
For an input file named Lab1:
034023 052030
034023 022130
044023 012030
034223 022030
034123 152030
024023 152030
the AWK command
awk 'gsub(/[0-9][0-9]/,"&:",$1) gsub(/[0-9][0-9]/,"&:",$2)' Lab1
results in:
03:40:23: 05:20:30:
03:40:23: 02:21:30:
04:40:23: 01:20:30:
03:42:23: 02:20:30:
03:41:23: 15:20:30:
02:40:23: 15:20:30:
How can I prevent the trailing colons? Desired result:
03:40:23 05:20:30
03:40:23 02:21:30
Adding : time-formatting using awk
text processing;awk
awk '
  {
   for(i=1;i<=NF;i++){
     sub(/[0-9]{4}$/,":&",$i)
     sub(/:[0-9]{2}/,"&:",$i)
     }
  }
  1
 ' <<<\
'034023 052030
034023 022130
044023 012030
034223 022030
034123 152030
024023 152030'
produces:
03:40:23 05:20:30
03:40:23 02:21:30
04:40:23 01:20:30
03:42:23 02:20:30
03:41:23 15:20:30
02:40:23 15:20:30
Other scripts are
1.
#!/usr/bin/awk -f
gsub(/[0-9]{4}\>/,":&") &&
gsub(/:[0-9][0-9]/,"&:")
2.
#!/usr/bin/awk -f
gsub(/[0-9]{2}\B/,"&:")
3.
#!/usr/bin/awk -f
BEGIN{ FS=OFS="" }
/[0-9]{6} [0-9]{6}/{
  $3=":"$3
  $4=$4":"
  $11=":"$11
  $12=$12":"
  print
}
4.
#!/usr/bin/awk -f
/[0-9]{6} [0-9]{6}/{
  printf("%02d:%d:%s:%d:%d\n",
    substr($0,0,2),
    substr($0,3,2),
    substr($0,5,6),
    substr($0,11,2),
    substr($0,13,2))
}
_cs.7019
Consider the following language over the alphabet $\mathcal{A} = \{a,b,c\}$:$$L = \left\{w \in \mathcal{A}^* \mid \text{\(|w|\) is odd and the middle character in \(w\) occurs nowhere else in \(w\)} \right\}$$I am trying to come up with a grammar for $L$, but I'm getting nowhere. I came up with some sample strings from the language which would be accepted: b, abcab, accbcaa I understand the length of the string has to be odd, and the middle character cannot be repeated anywhere in the string.Therefore, the above three strings are accepted. However, something like aabbb will not accepted, because even though the length is odd, the middle character is repeated.Can someone help with a grammar for $L$?
Grammar for a language: odd length, middle character not repeated
formal languages;formal grammars
null
_unix.192465
I have a file (file.php) like this:...Match user foo ChrootDirectory /NAS/foo.info/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noMatch user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noMatch user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noI am trying to write a bash script to delete one of the paragraphs.So say I wanted delete the user foo from the file.php. After running the script, it would then look like this:...Match user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noMatch user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noHow could I go about doing this. I have thought about using sed but that only seems to be appropriate for one liners?sed -i 's/foo//g' file.phpAnd I couldn't do it for each individual line as most of the lines withing the paragraph are not unique! Any ideas?
Remove paragraph from file
bash;shell;text processing;sed
Actually, sed can also take ranges. This command will delete all lines between Match user foo and the first empty line (inclusive):$ sed '/Match user foo/,/^\s*$/{d}' fileMatch user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noMatch user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noPersonally, however, I would do this using perl's paragraph mode (-00) that has the benefit of removing the leading blank lines:$ perl -00ne 'print unless /Match user foo/' fileMatch user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noMatch user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding noIn both cases, you can use -i to edit the file in place (these will create a backup of the original called file.bak):sed -i.bak '/Match user foo/,/^\s*$/{d}' fileorperl -i.bak -00ne 'print unless /Match user foo/' file
_unix.225756
What happened to the elementary OS logo? Why has it changed to an Ubuntu logo?
Why does my Elementary OS show an ubuntu logo?
ubuntu;elementary os
null
_softwareengineering.321941
After learning OOP design, I realized that my way of programming was not correct. One should convert physical entities or logically separable components into classes which are reusable and have their own behavior and properties. One must not simply convert all the entities into objects, because that would be really cumbersome. If I am developing a filter system for the console that checks that, if my program wants an integer from the user, then it must be an integer; it should not be a value corrupted by the user's typing error or wrong input. For such a purpose I should create a separate class to manage all of its workings and behaviors, kept separate from the console code, so that if I later want to use that system in a GUI then I can. But creating objects of such a class does not make sense, so I ended up making everything static, from data members to functions, which rather seemed like creating a somewhat procedural design with classes, with only one benefit: that all the data was bound into units by classes, known as encapsulation.
My question is: is it fine to have such interfaces with classes, or does one need to move back to procedural for them? I often face such problems with design when applying OOP to the working mechanisms of my code.
Edit-1
Let me explain the question more precisely. Suppose you want to create a filter system. You take input as strings from the user and then perform checks on whether the data contains numerics or alphas, or whether the data is really pure and not a mixture of alpha and numeric characters, making the code more robust and bug-free. For that I created a class to which one passes a string, and it will check what kind of data it is, based on its internal workings. While working with such a mechanism, creating multiple objects of the filter system class does not make sense, because if one does so then all the objects would be identical in functionality and usage, which clearly means that there would be no benefit if the class had multiple instances. So I ended up making everything static because I did not want to create objects.
Ideal OOP Design
c++
The thing you asked about is called a utility function. There is a lot of information, opinion, and mis-information about the role and propriety of utility functions in the object-oriented world.
In your example, the utility function is very simple, so I would have preferred to keep it as a non-member#1 utility function. Your utility function has a very simple signature:
bool ValidateNumber(const std::string& str);
#1: Not a member of a class. However, it could still be placed into a namespace.
When the utility function is used in the context of user I/O (specifically console-based user I/O), the following logic is typically involved:
Subroutine "ask user for number and repeat until success":
Print prompt
Accept one line of text from user
Validate input (by calling the utility function above)
If validation fails:
Print an error message explaining what is wrong
Loop back to "Print prompt", etc.
If validation succeeds, the converted number is returned from this subroutine.
Notice that this subroutine has plenty of state and logic. It has sufficient complexity that it can be made into a class and instantiated as an object. It could also be made customizable, e.g. allowing an application to instantiate two objects, each with a different prompt.
In a graphical user interface (GUI), it is more common to see a number-asking subroutine wrapped into an object. This is because in the GUI world, a GUI element that asks for a number will have some unique designs, most notably a spinbox (a pair of up/down arrows which allow the user to increment/decrement the number when clicked). This strongly favors converting the subroutine into an object.
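To make the shape of that subroutine concrete, here is a small sketch in Python rather than C++ (the names validate_number and ask_for_number are illustrative only):

def validate_number(text):
    # The stateless utility function: does the text look like an integer?
    text = text.strip()
    if text.startswith(("+", "-")):
        text = text[1:]
    return text.isdigit()

def ask_for_number(prompt):
    # The "ask user for number and repeat until success" subroutine.
    while True:
        line = input(prompt)                 # print prompt, accept one line of text
        if validate_number(line):
            return int(line)                 # success: return the converted number
        print("Please enter a whole number, e.g. 42.")   # explain what is wrong

# Usage: age = ask_for_number("How old are you? ")

Wrapping ask_for_number in a class only starts to pay off once you want several differently configured askers, which is exactly the customization point mentioned above.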
_cs.80168
Fiorini, Massar, Pokutta, Tiwary and De Wolf (Exponential Lower Bounds for Polytopes in Combinatorial Optimization, Journal of the ACM 62(2): article 17, 2015; PDF, ArXiv) show that any linear program that solves travelling salesman needs super-polynomially many constraints. Suppose $P=NP$ by 'some' method; then we can solve for the optimal tour explicitly and trivially set up an LP that 'solves' the TSP problem. So $P=NP$ implies that TSP has a poly-size LP formulation. The contrapositive is that if TSP has no poly-size LP formulation then $P\neq NP$. This paper shows TSP needs super-polynomially many constraints. So why doesn't this show that $P\neq NP$?
Why does this not prove $P\neq NP$?
complexity theory;proof techniques;linear programming;p vs np;traveling salesman
What Fiorini et al. show is the following:The TSP polytope $P_n$ over $n$ points is a polytope in $\binom{n}{2}$ dimensions whose vertices correspond to all Hamiltonian cycles in $K_n$ (the complete graph on $n$ vertices). (That is, it is the convex hull of the indicator vectors of all Hamiltonian cycles.)Suppose that $X_n$ is a polytope whose projection over the first $\binom{n}{2}$ dimensions is $P_n$, and let $d_n$ be the number of constraints needed to define $X_n$ (i.e., the number of facets of codimension 1). Then $d_n \geq f(n)$ for some function $f(n) = 2^{\Omega(\sqrt{n})}$.In other words, they show that TSP cannot be solved using LPs in one particular way. There could be some other way of using LPs to solve TSP which isn't ruled out by their result.For example, perhaps you could use iterative rounding to solve TSP, at each step solving an LP. This is consistent with the result of Fiorini et al.The method in your argument is likewise not ruled out by Fiorini et al.
_unix.231965
I have to write 17 TB to tape:
ssh some_host 'tar -cz /' | dd bs=20b of=/dev/tape
Of course 17 TB doesn't fit on one tape, so I need to change tapes automatically when a "no room" error occurs. I have a robot changer and mtx next works fine. I also need to write a label to a log when the tape changes, so I would prefer to write a hook script for this event. tar has a change-tape script feature, but I run tar on another host. Also, copying 17 TB to the local host is not an option. sshfs is not an option either; it's just as bad. And please don't offer huge backup solutions. What I need is a pipe tool like dd which is able to run some script on a 'no room' error and proceed afterwards. Specifying the block size is also important, as the tape drive requires certain values.
change tape on the fly script
tar;dd;tape
null
_softwareengineering.340869
I would like to initialize my variables at the top of my file, to prevent any Undefined variable notices. But what do you guys think is considered to be best practice (in PHP) in the case of a string-type variable?
Initialize the variable with a value of false by default?
$variable = false;
Or perhaps just an empty string?
$variable = '';
Or even a null value?
$variable = null;
Default value variable, null vs empty string vs false
php;coding standards;clean code
If you don't have a sensible value to initialize your variable with, then you should not be creating that variable at that point in your code.If you are getting notices that you are using an undefined variable, then that is a clear indication that you have a problem in your program flow.The correct course of action here is to give those variable a sensible value before they are being used. If such a sensible value doesn't exist, then you need to ask yourself why the variable is being used in the first place.Just blindly masking Undefined variable notices is not going to make the real problem go away. The Undefined variable notice is only a side-effect of that real problem.
_webmaster.56292
In Google Webmaster Tools under Search Traffic --> Internal Links, for some pages it's showing too many internal links. Is there any effect on my site due to these internal links? How do these internal links work?
For example: under /musical-instruments-2 it has Top 82 links and Total links 470. I observed that it contains unrelated links (other than musical instruments). Why has it linked unrelated data under musical instruments?
Internal links in Google Webmaster Tools showing unrelated data
seo;google search console;links
null
_unix.177058
How do you replace multiple lines of code in Kate? The following is the original text:
<div id="image">
<br style="clear: left"/>
<br style="clear: left"/>
<br style="clear: left"/>
<!-- Begin logo -->
...
<!-- End logo -->
</div>
This is the text that I would like to have:
<div id="image">
<!-- Begin logo -->
...
<!-- End logo -->
</div>
Thank you.
How do you search and replace multiple lines in KDE Kate?
search;replace;kate
Through Ctrl+R you switch into the search & replace mode. In the bar on the bottom, you first have to select as Mode either Escape sequences or Regular expressions. For instance, if you choose Escape sequences, click the right mouse button on the search field. A context menu appears with an item called Add, listing some valid escape sequences, among which you will find \n. The same applies for the replace line edit. The context menu is also available in the Regular expressions mode.
_datascience.1069
I am trying to evaluate and compare several different machine learning models built with different parameters (i.e. downsampling, outlier removal) and different classifiers (i.e. Bayes Net, SVM, Decision Tree).I am performing a type of cross validation where I randomly select 67% of the data for use in the training set and 33% of the data for use in the testing set. I perform this for several iterations, say, 20.Now, from each iteration I am able to generate a confusion matrix and compute a kappa. My question is, what are some ways to aggregate these across the iterations? I am also interested in aggregating accuracy and expected accuracy, among other things.For the kappa, accuracy, and expected accuracy, I have just been taking the average up to this point. One of the problems is that when I recompute kappa with the aggregated average and expected average, it is not the same with the aggregated kappa.For the confusion matrix, I have been first normalizing the confusion matrix from each iteration and then averaging them, in an attempt to avoid an issue of confusion matrices with different numbers of total cases (which is possible with my cross validation scheme).When I recompute the kappa from this aggregated confusion matrix, it is also different from the previous two.Which one is most correct? Is there another way of computing an average kappa that is more correct?Thanks, and if more concrete examples are needed in order to illustrate my question please let me know.
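For reference, this is the per-matrix computation I mean when I say recompute kappa from a confusion matrix; a small numpy sketch with a made-up matrix:

import numpy as np

def cohens_kappa(cm):
    # cm[i, j] = number of cases with true class i predicted as class j
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                                  # accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # expected accuracy
    return (observed - expected) / (1.0 - expected)

fold = np.array([[50, 10],
                 [ 5, 35]])        # one iteration's confusion matrix (made up)
print(cohens_kappa(fold))
# Summing raw counts across iterations and then computing kappa generally gives a
# different number than averaging the per-iteration kappas, which is the mismatch I see.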
Kappa From Combined Confusion Matrices
machine learning;confusion matrix
null
_unix.53367
I'm trying to find the path of a file and move it.When I try realpath, it is not useful. For example : I want to move the file All Hail the Generalist - Vikram Mansharamani - Harvard Business Review.htmlUsing realpath: realpath 'All Hail the Generalist - Vikram Mansharamani - Harvard Business Review.html/home/x/Downloads/All Hail the Generalist - Vikram Mansharamani - Harvard Business Review.htmlBut I can't do : mv /home/x/Downloads/All Hail the Generalist - Vikram Mansharamani - Harvard Business Review.html /home/I need something like that : mv /home/x/Downloads/All\ Hail\ the\ Generalist\ -\ Vikram\ Mansharamani\ -\ Harvard\ Business\ Review.html
Find and use the path of a file?
shell;rename;filenames;quoting
null
_softwareengineering.163266
I'm wondering about the differences between inheritance and composition, examined with concrete, code-relevant arguments. In particular my example was:
Inheritance:
class Do:
    def do(self):
        self.doA()
        self.doB()
    def doA(self):
        pass
    def doB(self):
        pass

class MyDo(Do):
    def doA(self):
        print("A")
    def doB(self):
        print("B")

x = MyDo()
vs Composition:
class Do:
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def do(self):
        self.a.do()
        self.b.do()

x = Do(DoA(), DoB())
(Note: for the composition version I'm missing some code, so it's not actually shorter.)
Can you name particular advantages of one or the other? I'm thinking of:
composition is useful if you plan to reuse DoA() in another context
inheritance seems easier; no additional references/variables/initialization
the method doA can access internal variables (be it a good or bad thing :) )
inheritance groups logic A and B together; even though you could equally introduce a grouped delegate object
inheritance provides a preset class for the users; with composition you'd have to encapsulate the initialization in a factory so that the user doesn't have to assemble the logic and the skeleton...
Basically I'd like to examine the implications of inheritance vs composition. I have often heard that composition is preferred, but I'd like to understand that by example. Of course I can always start with one and refactor later to the other.
How do inheritance and composition differ?
object oriented design;inheritance;composition
Conceptually speaking, composition models consists of relationships, whereas inheritance models is a.Using the car analogy, a car has wheels is a textbook example for composition, and it makes sense to have a class Wheel, and a class Car with a property wheels of type Wheel[].In theory, an example of inheritance would be a truck is a vehicle: properties common to all vehicles can be implemented in class Vehicle, while those specific to trucks can be implemented in class Truck.The truck example, however, also illustrates the problem of the inheritance approach: what if you have to make your vehicle class polymorphic not only for the vehicles purpose (passengers vs. freight), but also for fuel type? You'd have to create four classes to cover passenger cars and freight vehicles, as well as diesel vs. gasoline powered. Add another orthogonal property, and the number of classes doubles again. Worse yet, you have to decide which of these orthogonal properties comes first in the class hierarchy: is it Vehicle -> DieselVehicle -> DieselFreightVehicle -> Truck, or is it Vehicle -> FreightVehicle -> DieselFreightVehicle -> Truck? Either way, you have to duplicate some functionality, either the freight-specific things, or the diesel-specific things. The solution is to use composition anyway: A vehicle has an engine (which can be diesel- or gasonline-powered), and a cargo type (which can be passengers or freight); you don't need a truck class anymore, because your vehicle class can already model all sorts of vehicles by combining suitable engines and cargo types. Note that the components are still polymorphic, but the container object is not. The trick is to keep polymorphism one-dimensional, that is, each polymorphic hierarchy models one of many orthogonal properties, such as engine type, freight type, etc.This is why people say favor composition over inheritance; of course you are still inheriting, but you have separated your inheritance into independent strains.
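As a rough illustration of keeping the polymorphism one-dimensional, here is a small Python sketch (the class names are invented for the example):

class DieselEngine:
    def describe(self):
        return "diesel engine"

class GasolineEngine:
    def describe(self):
        return "gasoline engine"

class PassengerCargo:
    def describe(self):
        return "carries passengers"

class FreightCargo:
    def describe(self):
        return "carries freight"

class Vehicle:
    # The container is not polymorphic itself; it composes polymorphic parts.
    def __init__(self, engine, cargo):
        self.engine = engine
        self.cargo = cargo

    def describe(self):
        return "vehicle with a %s that %s" % (self.engine.describe(), self.cargo.describe())

truck = Vehicle(DieselEngine(), FreightCargo())
print(truck.describe())   # vehicle with a diesel engine that carries freight

Adding a new engine type or cargo type is one new small class, instead of doubling a Truck/Car inheritance tree.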
_unix.240974
What is the list of programs that were available in the first public Linux distribution (not just the kernel)? I am especially interested in when this distribution was released and whether the diff utility was there.
Was 'diff' included in the first version of Linux
linux;diff;history
Short answer - it did.
A bit of archeology reveals that:
The first Linux distributions were published in 1993. SLS 1.02, linked above, was the most popular at the time.
The GNU bulletin for Jan 1993 includes diff 2.0: "diff 2.0 GNU diff compares files showing line-by-line changes in several flexible formats. It is much faster than the traditional Unix versions. The diff distribution contains diff, diff3, sdiff, and cmp."
The SLS distribution, which later forked into Slackware and Debian, included diff in its /usr/bin, as linked above.
_unix.216973
I've been trying since Thursday to get a sabrent card to come up and show what wifi networks are available. I'm in midtown Atlanta...so seems there should be 1 or 2 networks showing.The card's power light is on.I tried running:ifconfig wlan0 upin the terminal which didn't give any errors but dmesg shows:IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not readyHow can I make wlan0 ready? ....or am I going at this all wrong?
Wi-Fi Sabrent PCI-8021N Linux Debian Jessie _x86_64 using ralink firmware/desktop
networking;wifi
null
_softwareengineering.323514
Say you've just started working in a very small team on a {currently relatively small, though hopefully bigger later} project. Note that this is an actual project intended to be used by other developers in the real world, not some academic project that is meant to be scrapped at the end of a semester.However, the code is not yet released to others, so no decision is yet set in stone.The MethodologiesOne of you likes to begin coding and make the pieces fit together as you go before you necessarily have a clear idea of how exactly all the components will interact (bottom-up design). Another one of you likes to do the entire design first and nail down the details of all the components and communication before coding a solution.Assume that you are working on a new system rather than mimicking existing ones, and thus it is not always obvious what the right end-design should look like. So, on your team, different team members sometimes have different ideas of what requirements are even necessary for the final product, let alone how to go about designing it.When the bottom-up developer writes some code, the top-down developer rejects it because of potential future problems envisioned in the design despite the fact that the code may solve the problem at hand, believing that it is more important to get the design correct before attempting to code the solution to the problem.When the top-down developer tries to work out the full design and the envisioned problems before starting to write the code, the bottom-up developer rejects it because the bottom-up developer doesn't think some of the problems will actually arise in practice, and thinks that the design may need to be changed in the future when the requirements and constraints become clearer.The ProblemThe problem that this has resulted in is that bottom-up developer ends up wasting time because the top-down developer frequently decides the solution that the bottom-up developer has written should be scrapped due to a design flaw, resulting in the need to re-write the code. The top-down developer ends up wasting time because instead of parallelizing the work, the top-down developer now frequently sits down to work out the correct design with the bottom-up developer, serializing the two to the point where it may even be faster for 1 person to do the work than 2.Both of the developers want to keep working together, but it doesn't seem that the combination is actually helping either of them in practice. The GoalsThe common goals are obviously to maximize coding effectiveness (i.e. minimize time wastage) and to write useful software.The QuestionPut simply, how do you solve this problem and cope with this situation? The only efficient solution I can think of that doesn't waste time is to let each developer follow his/her own style for the design. But this is harder than it sounds when you code-review and actually need to approve of each others' changes, and when you're trying to design a coherent framework for others to use. Is there a better way?
How to cope with different development styles (top-down vs. bottom-up) in a team?
design;development methodologies;methodology
null
_unix.313568
I have a problem getting my VM running via libvirt. Here is my setup:
I put my qcow2 image and domain XML (named win7.xml) under $HOME/vm, with all files and directories owned by my user and group, and permission bits 0644.
I uncommented the user = "root", group = "root" and dynamic_ownership = 1 lines in /etc/libvirt/qemu.conf, expecting qemu-system-x86_64 to run as root and therefore have full access to the dirs and files under $HOME/vm.
However, invocation of virsh create win7.xml as root failed:
error: Failed to create domain from win7.xml
error: internal error: early end of file from monitor, possible problem: 2016-10-01T03:36:02.049418Z qemu-system-x86_64: -drive file=/home/naitree/vm/win7/win7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0: Could not open '/home/naitree/vm/win7/win7.qcow2': Permission denied
The following error was logged in /var/log/libvirt/qemu/win7.log:
2016-10-01T03:36:02.049418Z qemu-system-x86_64: -drive file=/home/naitree/vm/win7/win7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0: Could not open '/home/naitree/vm/win7/win7.qcow2': Permission denied
2016-10-01 03:36:02.080+0000: shutting down
It looks like qemu failed to access my VM disk file. But why? Didn't qemu-system-x86_64 run as root? What should be done to make sure libvirt-qemu is able to access the disk image residing in my $HOME directory?
Additional version information:
libvirt, virsh version: 1.3.3.2
QEMU version: QEMU emulator version 2.6.1 (qemu-2.6.1-1.fc24)
distro: Fedora 24
kernel: 4.7.4-200.fc24.x86_64
libvirt qemu cannot access image inside my home directory, even as root?
kvm;qemu;libvirtd
null
_webmaster.91117
I was looking at http://marcjschmidt.de/blog/2013/10/25/php-imagepng-performance-slow.html and towards the end of the document, I learned something from the chart at the end.If I use a very high compression on an image, the processing time (including TTFB) will be higher and the download time will be smaller.The opposite is true if I use low compression.The problem here is if I pick too low of compression, then it will take substantially longer for people to load images on my site. If I pick too high of compression, then I believe the TTFB (time to first byte) value will be high. If that value is too high, then google might see my image loading speed as slow.Currently for the desktop version of my site, I'm using full-sized images at 80% JPEG quality, and for mobile site, I'm using 66% quality.I'm afraid if I raise the quality of the image (aka reduce compression) on either site, then there will be more data to download and some people might be billed high for data usage. My other option is to adjust the image size some more, but then if I go too small, visitors might complain.So whats the best thing to do? adjust JPEG image quality to different values (more optimal values)? or do I shrink the images and pray for no complaints?
Compressing images for desktop and mobile site or adjust size?
images;mobile;optimization;desktop;image size
null
_cs.80117
I have a brief proof about QBF being $\in$ APTIME, which claims that you can use the additional input of the alternation as values for your QBF formula.And therefore you can accept, if the quantifier less formula $\psi$ is true, given the additional input.That makes sense, but if you assume $\psi$ in 3-CNF. I could make that check in SPACE$(\log n)$ and therefore, I could solve QBF in $\mathbf{ALOGSPACE}$, which would place QBF in $\mathbf{P} = \mathbf{ALOGSPACE}$, that obviously can't be true, but I don't see my mistake.The proof claims runtime: $\mathcal{O}(n^2)$I know that APTIME = PSPACE and QBF is PSPACE complete therefore QBF is APTIME complete, so what am I missing?
QBF in APTIME - Why APTIME and not ALOGSPACE?
complexity theory;time complexity;runtime analysis
null
_vi.1879
I am using Vim on Windows. Any time I edit a file, Vim creates temporary files. When I created a file, mysql-build-properties.xml, Vim created files like these: mysql-build-properties.xml~ and mysql-build-properties.xml~~. Are these Vim temporary files? If so, how do I stop it?
Vim Windows creating temporary files
swap file;backup
null
_webmaster.2359
In my phpBB forum, just after I post a reply to a thread, a page is shown and I have to wait around 5 secs to return to the thread. How do I reduce the time to 0? (I tried setting the Flood Interval to 0 but it didn't work.)
How to reduce delay between forum posts in phpbb
php;forum
If you're using phpBB 3, then you can reduce the refresh time by editing the posting.php file in the root directory of the script.Inside you'll find (around line 1118 for 3.0.7PL1) an if statement similar to the following, depending on your version:// Check the permissions for post approval. Moderators are not affected.if ((!$auth->acl_get('f_noapprove', $data['forum_id']) && !$auth->acl_get('m_approve', $data['forum_id']) && empty($data['force_approved_state'])) || (isset($data['force_approved_state']) && !$data['force_approved_state'])){ meta_refresh(10, $redirect_url); $message = ($mode == 'edit') ? $user->lang['POST_EDITED_MOD'] : $user->lang['POST_STORED_MOD']; $message .= (($user->data['user_id'] == ANONYMOUS) ? '' : ' '. $user->lang['POST_APPROVAL_NOTIFY']);}else{ meta_refresh(3, $redirect_url); $message = ($mode == 'edit') ? 'POST_EDITED' : 'POST_STORED'; $message = $user->lang[$message] . '<br /><br />' . sprintf($user->lang['VIEW_MESSAGE'], '<a href=' . $redirect_url . '>', '</a>');}You'll notice there are two calls to meta_refresh() in there; the first one - waiting 10 seconds based on the first argument - is used when a forum is moderated, and a post needs to be approved first. It was changed to this length to give users enough time to see the actual message before the page refreshed.The second one - 3 seconds in the current phpBB version - is the one you'll probably want to change. You can reduce this down to 0 to have users redirected immediately, after which you'll just have the normal 1-2 second lag while the page is served, and the browser renders it.One thing to note - you may need to modify this every time you upgrade phpBB, as this is a core file.
_codereview.2490
I would like to efficiently import real numbers stored in CSV format into an array. Is there a way to modify my code to make it faster? Also, is there a way to scan the file and compute the number of rows and columns rather than having to provide these directly?double * getcsv(string fname, int m, int n){ double *a; a = (double *)malloc(n * m * sizeof(double)); ifstream fs(fname.c_str()); char buf[100]; for(int i=0;i<m;i++){ for(int j=0;j<n;j++){ if (j < (n-1) ){ fs.getline(buf,100,',');} else{ fs.getline(buf,100);} stringstream data(buf); data>>a[(i*n)+j]; } } return(a);}
Fast import of numbers stored in a CSV file
c++;csv
In C++, we usually avoid malloc - in this case, you could use a std::vector< double > in stead. Doing this, you will allow the user of your code to use RAII in stead of manually managing the allocated memory. You cal tell the vector how much to allocate by initializing it with the right size. The compiler will apply return value optimization to avoid copying the vector (by constructing it into the return value) so you shouldn't worry too much about that.You don't preserve the values of m and n. I don't know if this is intentional but if you don't want to burden your client code with lugging it around, consider a struct or a class to carry it, and the values you read, around.You don't check the return values of the methods you call on the fstream - you should.You'd probably be better off reading the values from the fstream directly into the array, rather than reading them into a buffer, copying that buffer into a stringstream and then reading them from the stringstream to convert. You can save the use of the stringstream (and therefore eliminate it altogether).Example program:#include <iostream>#include <iterator>#include <sstream>#include <vector>using namespace std;struct Data{ Data(unsigned int width, unsigned int height) : width_(width) , height_(height) , data_(width * height) { /* no-op */ } unsigned int width_; unsigned int height_; vector< double > data_;};Data split(istream &is, unsigned int width, unsigned int height){ Data retval(width, height); double d; int i(0); while (is >> d) { retval.data_[i++] = d; is.ignore(1, ','); } return retval;}int main(){ const char *test_string = 2.83, 31.26, 2354.3262, 0.83567\n 12.3, 35.236, 2854.3262, 0.83557\n 32.3, 33.26, 2564.3262, 0.83357\n 27.3, 3.2869, 2594.3262, 0.82357\n; const unsigned int width = 4; const unsigned int height = 4; stringstream ss(test_string); Data data(split(ss, width, height)); copy(data.data_.begin(), data.data_.end(), ostream_iterator< double >(cout, ));}
_unix.274432
I have the following folder structure:
/backup
/backup/copy.sh
/backup/archive/
/backup/20160405_logs/
/backup/20160405_logs/sql.log
/backup/20160405_logs/bak.log
I want to move the folder 20160405_logs into /backup/archive/. If I run mv /backup/20160405_logs /backup/archive from the CLI (manually type and run) it works perfectly. However, if I run that command from copy.sh I get the following error for each file within 20160405_logs:
copy.sh: line x: file_path: No such file or directory
where x is an incorrect line number for the mv call in copy.sh. All the files and their parent folder are moved though. So it's not like the move is failing...
What am I missing!?
Thanks in advance :)
mv misbehaves in shell script
shell script;command line;mv
Jeff Schaller's second comment pointed me in the right direction. My backup script looks like this:
tar source_folder dest_file >> /backup/20160405_logs/bak.log
mv /backup/20160405_logs /backup/archive
echo Backup complete >> /backup/20160405_logs /backup/archive
The missing files that are being reported are log files that I am trying to write to after I run the mv command. As mentioned in my comment above, if there were a badge for nitwits, I'd own one! Sorry for wasting everyone's time.
_webapps.20355
My brother and I used to play Star Fox 64 when we were kids, and when we did dog fights, we always loved doing barrel rolls so I decided to search for the quote that they always said the game in tutorial. So I start typing do a and well Google started auto-completing it for me. That's good so before I move to the suggested item Google decides it is going to make me sick.I know Easter eggs are all good and great but now I am afraid for when I need to search Google that something happens unexpectedly. I didn't even type in the full query. I just want to search Google.. you know without all the fireworks and barrel rolls. I love Google Search so is there an option/extension to stop these features?
How do I search for do a barrel roll without Google ... doing a barrel roll?
google search;easter eggs
You can avoid Google doing a barrel roll by encapsulating your query in quotation marks: "do a barrel roll" will not, ironically enough, do a barrel roll.

This should work for all easter egg queries: quotation marks signal to Google that it should search for the literal string instead of interpreting it to mean something else. Compare:

    tilt vs. "tilt"
    askew vs. "askew"
    once in a blue moon vs. "once in a blue moon"

This of course doesn't work when Google Instant is turned on, as Google will submit the search query before you can finish encapsulating the query in quotation marks. Unfortunately, this is a limitation/feature of Google Instant: to prevent Google from submitting the query before you're finished typing it, you'd have to disable Google Instant.

Beyond this, it's possible to disable certain types of Easter eggs, provided you know the nature of the Easter egg beforehand. You could, for instance, prevent the "do a barrel roll" Easter egg by adding the following snippet to your browser's custom stylesheet:

    body { -webkit-animation-name: none; -moz-animation-name: none; }

But since this would affect every <body> tag on every webpage, it's not ideal either. You could get around this by using Stylish, which allows you to specify site-specific custom stylesheets (userstyles). Creating a userstyle with the following should work:

    @-moz-document: domain(google.com)
    @-webkit-document: domain(google.com)
    @document: domain(google.com)
    body { -webkit-animation-name: none; -moz-animation-name: none; }

Of course, while this would allow you to disable this specific Easter egg, Google can and most likely will come up with new ones that do unexpected things in the name of being quirky. Without disabling JavaScript or Google Instant, it'd be nigh impossible to prevent them from happening at least once.
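For reference, the usual userstyle form wraps the rule in an @-moz-document block rather than the colon form above; a sketch (not from the original answer, and assuming the easter egg is still driven by CSS animations) would be:

    @-moz-document domain("google.com") {
        body {
            -webkit-animation-name: none !important;
            -moz-animation-name: none !important;
            animation-name: none !important;
        }
    }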
_unix.311597
Is there a way to block incoming GMAIL connections at the port level? Do all protocols use the same port for incoming and outgoing connections?
How to block a port connection (Ex: GMAIL) for incoming mails and allow only for outgoing mails?
iptables;email;http;https
Block incoming ports 25, 465 and 587, and you will have only outgoing mail from this server. You can also consider setting the SMTP daemon to listen only on 127.0.0.1, which will make it accept incoming mail only via localhost while still letting it send mail to the world.
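A rough iptables sketch of the "block the inbound mail ports" part (assuming IPv4 and that nothing else in your ruleset already handles these ports):

    # drop inbound SMTP/submission connections
    iptables -A INPUT -p tcp --dport 25  -j DROP
    iptables -A INPUT -p tcp --dport 465 -j DROP
    iptables -A INPUT -p tcp --dport 587 -j DROP
    # connections you initiate to remote mail servers (outgoing mail) are not affected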
_unix.195448
From my computer:$ cat /etc/issue && uname -aUbuntu 14.04.1 LTS \n \lLinux abc-pc 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/LinuxI am using Qt 5.4 and QtCreator 3.3.0I have read this answer regarding the opengl errors, but I am not sure if that applies to me too.The programs run very finely on my computer, but a peer does ssh from his own computer on my computer and runs the same program and the following errors are shown on his computer.Doing ssh means that he is actually working on my computer so when I am not getting these errors why is he?libGL error: failed to load driver: swrastConnected to xyzQOpenGLShader: could not create shaderQOpenGLShaderProgram: could not create shader programQOpenGLShader: could not create shaderQOpenGLShaderProgram::uniformLocation( imageTexture ): shader program is not linkedQOpenGLShaderProgram: could not create shader programQOpenGLShader: could not create shaderQOpenGLShader: could not create shadershader compilation failed: QOpenGLShaderProgram::uniformLocation( matrix ): shader program is not linkedQOpenGLShaderProgram::uniformLocation( opacity ): shader program is not linkedQOpenGLShaderProgram: could not create shader programQOpenGLShader: could not create shaderQOpenGLShader: could not create shadershader compilation failed: QOpenGLShaderProgram::uniformLocation( matrix ): shader program is not linkedQOpenGLShaderProgram::uniformLocation( opacity ): shader program is not linkedQOpenGLShaderProgram::uniformLocation( pixelSize ): shader program is not linkedQOpenGLShaderProgram: could not create shader programQOpenGLShader: could not create shaderQOpenGLShader: could not create shadershader compilation failed:
Error: QOpenGLShader: could not create shader - while running through ssh
opengl;ubuntu;ssh
null
_webapps.85053
I have a number of fan pages on my Facebook account. I am trying to get a Twitter account for each page, but because I already have my own Twitter account it will not let me connect the fan page unless I disconnect the main Twitter account from my private Facebook profile. Is there any other way to fix this?
Twitter accounts with Facebook fan pages
facebook;twitter;facebook pages;synchronization
null
_webmaster.86614
So I finally ran screaming frog SEO spider on a number of my pages, and what I'm left with now are some duplicate titles on some of my mobile pages containing this HTML:<link rel=canonical href=http://example.com/path/to/equivalentpage.htm> On the respective pages for desktop users, I added a link tag showing the mobile equivalent page to show google the relationship.On my desktop pages, my H1 tags on each page are unique. Some pages no longer have an H2 tag since I had to blend it into the H1 tag and screaming frog spider takes note of duplicate H2 tags.My question is for any two pages that contain rel=canonical pointing to the original unique pages, do I have to worry about unique titles (in H1 tag) on them? or do I only have to worry on pages without rel=canonicalI ask because if making the titles unique on the pages (mobile site pages) with rel=canonical are necessary, then users will have a worse experience since the title will take up a huge part of the above-the-fold real estate on my website, especially on the photo pages where the user's prime interest is the photo.
Is it mandatory to have unique titles (in H1 tag) on a mobile version of a page even if it contains rel=canonical?
tags;rel canonical;rel alternate;titles;h1
null
_reverseengineering.11162
I have been impressed with the reverse debugging (that is, stepping back in time through a program) capabilities in GDB and tools like QIRA, but I am a little confused as to why no such program exists for the OSX platform (GDB does not support reverse debugging on OSX). Is there a technical reason why a reverse debugger is not possible on OSX? I would imagine that under the same architectures the task of implementing a time-oblivious debugger would be almost exactly the same. Why would porting to OSX be impossible or difficult? I mostly assume there is a technical challenge here, because no one has implemented such an obviously useful program.
Are Reverse Debuggers Impossible on OSX?
debugging;osx
null
_unix.136497
I can do git clone like so ... git clone https://github.com/stackforge/puppet-heat.git... with no problems. But I want to exclude all the git meta stuff that comes with the cloning, so I figured I would use git archive but I get this error:$ git archive --remote=https://github.com/stackforge/puppet-heat.git fatal: Operation not supported by protocol.Anyone know why or what I am doing wrong?
git archive fatal: Operation not supported by protocol
git;github
I would simply run the git clone as you've described and then delete the .git directories that are dispersed throughout the cloned directory.$ find puppet-heat/ -name '.git' -exec rm -fr {} +
_webmaster.7135
All that is output on the page is the domain name of the advertiser, for example 'www.solar-aid.org'. The rest of the content is stripped, I believe because of a document.write() statement.I'd like to know if this is a common issue or something wrong with our setup. There are three domains causing the issue, which we've blocked from Adsense as a result.solar-aid.orgkiva.orggrameenfoundation.orgGiven the type of organizations I think they may be within the default group of 'public service ads' within the Backup Ads setting. If the issue doesn't completely resolve itself soon (one customer of ours complained today, even though I blocked them 5+ days ago), I'll disable public service ads and select the 'fill space with a solid color' option.
Some Adsense domain's ads are causing document.write() statements that remove the html from the page
javascript;advertising;google adsense
null
_unix.28151
I don't have a lot of knowledge of Linux, so please forgive me if this is a simple question. I manage a server with Oracle RAC 11g running on Redhat 5.2. There are a number of raw drives on the server but I cannot see how they are being mapped. I have had a look at the fstab and even the udev raw rules configuration (even though it's deprecated). But I cannot see where the various mappings from the partitions presented as /dev/sd[xy][123] would map to the various /dev/raw/raw[12345] etc. So is there any way I can work out which partitions are assigned to the mappings, and the size of the partitions?
How do I find out a server's drive mappings for raw devices?
rhel;partition
null
_unix.181879
I am using kubuntu 14.04. I have installed cron using sudo apt-get install cron, and then I created this file in IDLE, called openurl.py:

    #!/usr/bin/env python
    import webbrowser
    webbrowser.open('http://eample.com')

I then typed chmod +x openurl.py into the terminal to make the .py file executable. If I type ./openurl.py into the terminal, the script works. Then, using the kickoff application launcher, I clicked system settings > task scheduler > new task, searched for the openurl.py file, and selected when I wanted it to run. If I type crontab -e into the terminal, this is displayed:

    #openurl
    21 21 * * * /home/craig/openurl.py
    # File generated by KCron the Thursday 29 Jan 2015 21:20.

And then I wait, and nothing happens. What am I doing wrong?
How to automatically open a URL at specific times each day
cron;kubuntu
null
_unix.381862
I downloaded the deb packages; put them, without extracting them, into a USB drive* and, when asked, I told the installer to search them in that drive.Now, on my freshly installed system, the files the installer told me that were missing are in /lib/firmware, but dpkg -s <package> says the packages are not installed. Is it ok?*I did so because the guide says: If the firmware was loaded from a firmware package, debian-installer will also install this package for the installed system and will automatically add the non-free section of the package archive in APT's sources.list. This has the advantage that the firmware should be updated automatically if a new version becomes available. It's not clear whether the package should be uncompressed, I decided to leave it as it was.The firmware packages in question are firmware-brcm80211 and firmware-realtek. The missing firmware files are brcm/bcm43xx-0.fw and rtl_nic/rtl8168d-2.fw.
Check if non-free firmware has been installed correctly
debian;debian installer;firmware
null
_unix.181270
Using VMWare and the CentOS-7.0-1406-x86_64-DVD installer. I would like to install CentOS on this virtual machine. I did that successfully a few weeks ago, but now the Anaconda screen disappears so fast, before I can even click anything, and takes me to the user settings screen shown below. How can I get back to Anaconda?
Installation of CentOS 7 - Anaconda Disappeared
centos;system installation
null
_unix.150249
I want to convert the following string (20140805234656) into a date-time stamp (2014-08-05 23:46:56). I am new to gawk and I don't know the exact syntax: how can I put a - at positions 5 and 8, a : at positions 14 and 17, and a space at position 11? Is there any efficient way to achieve this in awk?

EDIT: Please note that I have the string as a variable in awk; I generated it during some processing of records.
Convert string into date time stamp in gawk or awk
awk;gawk
One way of doing it using GNU awk is this:

    echo 20140805234656 | awk 'BEGIN { FIELDWIDTHS = "4 2 2 2 2 2" } { printf "%s-%s-%s %s:%s:%s\n", $1, $2, $3, $4, $5, $6 }'
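Since you mention the value is already held in an awk variable, a portable alternative (a sketch that is not limited to gawk) is to slice it with substr() instead of FIELDWIDTHS:

    echo 20140805234656 | awk '{
        t = $0   # or whatever variable already holds the 14-digit string
        printf "%s-%s-%s %s:%s:%s\n",
            substr(t, 1, 4), substr(t, 5, 2), substr(t, 7, 2),
            substr(t, 9, 2), substr(t, 11, 2), substr(t, 13, 2)
    }'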
_unix.204293
After I type # shutdown -h now from the terminal, I can no longer reach the remote server over ssh (ssh [email protected]). How can I bring the ssh server back up?
open ssh server back
ssh;shutdown
null
_unix.117878
Using the less paginator, you can use the -r option to properly display colored input and the -S option to disable line wrap. However, when using less -rS, or equivalently less -r -S, colors are displayed but lines are wrapped. How can this be achieved?
show colors and disable line wrap
less
If the -r option doesn't work, maybe the -R option will do what you want:-R or --RAW-CONTROL-CHARSLike -r, but only ANSI color escape sequences are output in raw form. Unlike -r, the screen appearance is maintained correctly in most cases. ANSI color escape sequences are sequences of the form:ESC [ ... mwhere the ... is zero or more color specification characters For the purpose of keeping track of screen appearance, ANSI color escape sequences are assumed to not move the cursor. You can make less think that characters other than m can end ANSI color escape sequences by setting the environment variable LESSANSIENDCHARS to the list of characters which can end a color escape sequence. And you can make less think that characters other than the standard ones may appear between the ESC and the m by setting the environment variable LESSANSIMIDCHARS to the list of characters which can appear.
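In other words, combining the two flags as less -RS (rather than -rS) is usually what you want; a quick usage sketch:

    # colours rendered, long lines truncated instead of wrapped
    ls --color=always -l | less -RS
    # with -S active, the left/right arrow keys scroll horizontally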
_codereview.25165
I have the following CRC 16 code. Is it possible to improve the performance of the code with unsafe constructs? My knowledge about pointer arithmetic is rather limited.public enum Crc16Mode : ushort { Standard = 0xA001, CcittKermit = 0x8408 }public class Crc16Ccitt{ static ushort[] table = new ushort[256]; public ushort ComputeChecksum(params byte[] bytes) { ushort crc = 0; for (int i = 0; i < bytes.Length; ++i) { byte index = (byte)(crc ^ bytes[i]); crc = (ushort)((crc >> 8) ^ table[index]); } return crc; } public Crc16Ccitt(Crc16Mode mode) { ushort polynomial = (ushort)mode; ushort value; ushort temp; for (ushort i = 0; i < table.Length; ++i) { value = 0; temp = i; for (byte j = 0; j < 8; ++j) { if (((value ^ temp) & 0x0001) != 0) { value = (ushort)((value >> 1) ^ polynomial); } else { value >>= 1; } temp >>= 1; } table[i] = value; } }}
CRC 16 code with unsafe features
c#;performance
null
_unix.320515
My computer froze because of RAM exhaustion. I performed hard reset. When I launched Chromium, I was getting Aw, Snap! error on every page. So I deleted the folder .config/chromium/ and ran apt-get purge chromium and then rebooted and installed again. Unfortunately nothing changed. What should I do now?
Chromium on Debian Wheezy Aw, Snap! error
linux;debian;chrome
null
_scicomp.26230
The question is exactly as the title: Which 2D (triangular) mesh generator software can be used which has a set of geometric primitives, controlled mesh size and standard output (.vtk or something similar)?For testing my code I need a triangular mesh for a circle and I am looking for the easiest way to do that. E.g., in 3D case for a sphere I am happy to use NETGEN with several lines of code to get meshes with controlled mesh size and standard output format.Can anyone give me a recommendation?Thanks!
2D mesh generator with geometric primitives
software;mesh generation;unstructured mesh
Shewchuk's triangle mesh generator produces high quality meshes and is quite robust. However, the boundary definition is a series of straight lines, so to produce a sequence of refined meshes over a circle you would also have to refine the definition of the boundary for each mesh.

Another option is Persson's distmesh triangle mesher. As shown on this page, only two lines of MATLAB (or Octave) code are required to mesh a circle:

    fd=@(p) sqrt(sum(p.^2,2))-1;
    [p,t]=distmesh2d(fd,@huniform,0.2,[-1,-1;1,1],[]);

And the mesh can be refined by changing a single mesh density parameter with no changes to the geometry definition. You can conveniently use MATLAB to write the coordinate matrix, p, and the connectivity matrix, t, in whatever format you choose. And, as I've verified, it also runs in Octave if you don't have access to MATLAB.
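For the "write p and t in whatever format you choose" step, a minimal sketch (the file names are made up; dlmwrite works in both MATLAB and Octave) could be:

    % one node per row (x y), one triangle per row (three node indices)
    dlmwrite('circle_nodes.txt', p, ' ');
    dlmwrite('circle_triangles.txt', t, ' ');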
_softwareengineering.337863
I was reading the Go-lang documents and found, under the section on Types, that Go has no type hierarchy. What does that mean exactly? Is it like Python, where types are checked at run time (dynamically typed) rather than at 'compile time' (statically typed)?
What does a type system [that] has no hierarchy mean?
dynamic typing;go;static typing
null
_webmaster.104639
I'm developing a web application which downloads a number of web pages using PHP curl. It then uses diff to compare the files as they change each day.I reported a problem a few weeks back where seemingly identical files were being flagged by diff as being different: https://stackoverflow.com/questions/42552239/different-versions-of-diff-giving-mixed-results-when-comparing-2-identical-filThe answer to the above was that if diff was used with the -w flag it ignores whitespace.However, I've now noticed a separate problem. If I download one of the files I'm comparing, and re-upload (overwrite) it through an FTP client, the output changes.For example: Compare file1.html against file2.html with diff file1.html file2.html it may give output such as12159,12161c12159,12161< < < ---> > > 12163,12172c12163,12172< < < < < < < < < < ---However, if I download file2.html to my desktop and re-upload it through FTP, diff without the -w flag reports there being no differences at all i.e. it's now saying the files are identical.I've tried to check the encoding of the file using file -bi file2.html but it's reported the same before and after upload through FTP. The encoding is text/html; charset=us-asciiIf the encoding is no different and the file contents have not been modified, how is re-uploading the file through FTP changing anything?? I've tried it using FileZilla and also through Netbeans.I'm using macOS Sierra locally and the remote server is Apache 2/PHP 7/centOS.
Can file encoding change when FTP is used?
linux;ftp;content encoding
You are probably seeing a difference in line-endings. When transferring a file in ASCII/Text mode (as opposed to Binary mode) then most FTP clients will convert/normalise line-endings to the OS being transferred to.On Classic Mac OS (9.x and earlier) the line-ending char is simply \r (ASCII 13), on Mac OS X this changed to \n (ASCII 10), on Linux it is \n (ASCII 10). And Windows is \r\n or ASCII 13+10. (Thanks @8bittree for the Mac correction.)So, when downloading from one OS to another all line-endings are silently converted. The conversion is reversed when uploaded. (However, as noted in @Joshua's answer this can result in corruption, depending on the file's character encoding and specific characters contained in the file.) If there is a mishmash of line-endings then it's possible the FTP software is normalising/fixing the line-endings. This would explain why downloading and then uploading the file results in a different file to what was originally on the server (ie. it is fixed). Or it is reverting a previously miss-converted file? However, the EOL-conversion may not be so intelligent and you can just end up with either double spaced lines or missing line breaks altogether (ie. mildly corrupted).By default, most FTP clients are set to Auto transfer mode and have a list of known file types to transfer in ASCII/Text mode. Other file types are transferred in Binary mode. If you are transferring between the same OS, or you wish to transfer with no conversion, then you should use Binary mode only.Ordinarily, the FTP software will not change the character encoding of the transferred file unless the source/target operating systems use a very different character encoding with which to represent text files. As @KeithDavies noted in comments, one such example is when downloading from a mainframe, that uses EBCDIC, to a local Windows machine. EBCDIC is not supported natively by Windows, so a conversion is required to convert this to ASCII. Again, transferring in Binary mode avoids any such conversion. (Thanks to @KeithDavies for the note regarding character encoding.)The answer to the above was that if diff was used with the -w flag it ignores whitespace.Yes, line-endings (whitespace) are ignored in the comparison.If I download one of the files I'm comparing, and re-upload (overwrite) it through an FTP client, the output changes.If there was a mixture of line-endings in the original file then downloading and re-uploading in ASCII mode could well fix the inconsistent line-endings. So, the files are now the same.
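If you want to confirm this on your own files, a quick way (a sketch using common Unix tools; the sed line assumes GNU sed) is to look at the raw line endings before and after the transfer:

    # show control characters; \r \n at line ends means CRLF, \n alone means LF
    od -c file2.html | head
    # count lines that contain a carriage return
    grep -c $'\r' file2.html
    # normalise everything to plain LF endings before running diff
    sed -i 's/\r$//' file1.html file2.html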
_unix.98092
I would like to do something like the following$ tmp=name*$ mv $tmp new_$tmp$ ls new_*$ new_name-with-stuff-i-dont-want-to-type name-with-stuff-i-dont-want-to-typeSetting the environment variable seems to work$ $tmp$ name-with-stuff-i-dont-want-to-typeBut not when I do the mv$ mv $tmp new_$tmp$ ls new_*$ new_name*Question: Is there a clever easy way of doing this to save some typing?
Adding prefixes/suffixes to file names without typing it all over again?
bash;environment variables;rename
I always love bash expansion for this:

    ls
    name-with-stuff-i-dont-want-to-type
    mv name-with-stuff-i-dont-want-to-type{,_old}
    ls
    name-with-stuff-i-dont-want-to-type_old

This is equivalent to:

    mv name-with-stuff-i-dont-want-to-type name-with-stuff-i-dont-want-to-type_old

For prefixing:

    ls
    name-with-stuff-i-dont-want-to-type
    mv {,new_}name-with-stuff-i-dont-want-to-type
    ls
    new_name-with-stuff-i-dont-want-to-type

For easy file renaming:

    ls
    fileFOOname
    mv file{FOO,BAR}name
    ls
    fileBARname
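Brace expansion only touches one name at a time; if the goal (as in the question) is to avoid typing several long names, a small loop with a glob is the usual companion trick - a sketch:

    # add a prefix to every matching file
    for f in name*; do
        mv -- "$f" "new_$f"
    done

    # or add a suffix
    for f in name*; do
        mv -- "$f" "${f}_old"
    done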
_codereview.13780
I need to do pending methods invocation in Java (Android) in my game project. A method represents actions in a scene.Maybe you're familiar with Context.startActivity in Android, which is not suddenly starting the activity, but statements below it is still executed before starting the requested activity. I've found 3 ways, but I'm not sure which one should I choose by considering the performance and PermGen issues (actually I don't know whether Android has PermGen issue if there are so many classes). These methods will not be called very frequent, may be once in 5 seconds, but may be they will be very many (there are so many scenes).Please suggest the best way of doing this, by considering memory usage (maybe PermGen too), performance, ease of coding, and bug freedom.Using switch-caseI need to add each method call in switch-case.public class MethodContainer { public void invoke(int index) { switch (index) { case 0: method0(); break; . . . case 100: method100(); break; } } private void method0() { ... } . . . private void method100() { ... }}Using for-loop and annotation/reflectionLike the above, but make coding easier (except in defining constants).public class MethodContainer { private static final int METHOD_0 = 0; ... private static final int METHOD_100 = 100; public void invoke(int index) { for (Method m : MethodContainer.class.getMethods()) { MyAnnotation annotation = m.getAnnotation(MyAnnotation.class); if (annotation != null && annotation.value() == index) { try { m.invoke(this); break; } catch (...) { ... } } } } @MyAnnotation(METHOD_0) private void method0() { ... } . . . @MyAnnotation(METHOD_100) private void method100() { ... }}Using inner classesThere's no need to declare constants and no reflection, but too many classes.public class MethodContainer { public void invoke(Runnable method) { method.run(); } private Runnable method0 = new Runnable() { public void run() { ... } }; . . . private Runnable method100 = new Runnable() { public void run() { ... } };}
Pending method invocation for a game
java;android
Your first example (switch-case) is so poorly thought of in the Object Oriented community that there is a refactoring designed specifically to replace it with your third implementation: Replace Conditional With Polymorphism.To be honest, I would have never thought of your second implementation. It isn't completely heinous, though I would probably use the constructor to create an array of method objects that you can index into directly - rather than run through a for loop every time you want to make a call.Between a revised second implementation (array with annotation/reflection) and the third implementation (inner classes) I would personally be more fond of the inner class version. I don't think that memory or performance is going to be a factor at this level and I think the third, inner class, version is much more readable (at least in Java - in other languages, you could implement something like the second approach directly).
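As a rough illustration of the "array of method objects built in the constructor" idea mentioned above (names are made up, not from the original post):

    public class MethodContainer {
        private final Runnable[] actions;

        public MethodContainer() {
            actions = new Runnable[] {
                new Runnable() { public void run() { /* scene action 0 */ } },
                new Runnable() { public void run() { /* scene action 1 */ } }
                // ... one entry per scene action
            };
        }

        public void invoke(int index) {
            actions[index].run();   // direct lookup: no switch, no reflection
        }
    }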
_codereview.136658
I have a dataset with 16 million rows and may increase upwards of 30 million. I am using the parLapply to run across three cores in R. But it's taking two days to run to completion. When I try smaller datasets of about 60,000 it takes less than 5 minutes to run, what may be cause of this disparity.Desktop Specs : Corei5 -QuadCore , 4GB RAM FG DataSet (16 million rows)Id,R,T1,12,439632,12,502733,12,40805 4,13,502735,13,408056,14,40805AB (1.3 million rows)Id,R,1,12 2,133,14 4,15Locations (6600 rows)T,NEWLong,NEWLat,SITENAME,43963,-77.108995,17.942062,HARBOUR TOWN50273,-77.108995,17.942062,NEW MEADOWS40805,-77.108995,17.942062,ISLE AVENUECodenum_cores = detectCores() -1cl = makeCluster(num_cores)clusterExport(cl,varlist = c(FG,AB,sites,distancematrix) ,envir=environment())results = parLapply(cl,1:nrow(AB),function(i){ row = AB[i,2] filtered = subset(FG,FG$R == AB[i,2]) sites = merge(filtered , locations , by.x = T , by.y = T , all.x = FALSE) resultdf =unique(data.frame(sites$NAME,sites$NEWLong,sites$NEWLat)) if ((nrow(resultdf))==0) { VAL = data.frame(AN = AB[i,2] ,SCORE = 0 ,SITES = 0,DISTANCE = 0) } else if ((nrow(resultdf) > 0) & (nrow(resultdf) < 4)) { alldistance = round(distanceMatrix(resultdf)) VAL2 = data.frame(AN = AB[i,2] ,SCORE= 1 ,SITES = nrow(resultdf),DISTANCE=sum(alldistance)) } else if ((nrow(resultdf) >= 4) & (nrow(resultdf) <= 10 )) { alldistance = round(distanceMatrix(resultdf)) if (sum(alldistance) == 0) { VAL = data.frame(AN = AB[i,2] ,SCORE= 1 ,SITES = nrow(resultdf),DISTANCE=sum(alldistance)) } else { value = nrow(resultdf)-1 require(fpc) clustervaluePAMK = pamk(alldistance,krange = 1:value, criterion = asw ,critout = TRUE , usepam=FALSE, ns = 2) clustervaluePAMK = clustervaluePAMK$nc VAL2 = data.frame(AN = AB[i,2] ,SCORE= clustervaluePAMK ,SITES = nrow(resultdf),DISTANCE=sum(alldistance)) }}else { alldistance = round(distanceMatrix(resultdf)) if (sum(alldistance) == 0) { VAL = data.frame(AN = AB[i,2] ,SCORE= 1 ,SITES = nrow(resultdf),DISTANCE=sum(alldistance)) } else { require(fpc) clustervaluePAMK = pamk(alldistance,krange = 1:10, criterion = asw ,critout = TRUE , usepam=FALSE, ns = 2) clustervaluePAMK = clustervaluePAMK$nc VAL = data.frame(AN = AB[i,2] ,SCORE= clustervaluePAMK ,SITES = nrow(resultdf),DISTANCE=sum(alldistance)) }}}){FGL <- merge(FG, locations) }) and object.size(FGL) user system elapsed 393.70 10.24 993.51 656225664 bytesCode Profile--For this section I ran against 60,000 elements.$by.self self.time self.pct total.time total.pctunserialize 174.70 99.99 174.70 99.99as.character 0.02 0.01 0.02 0.01$by.total total.time total.pct self.time self.pctclusterApply 174.72 100.00 0.00 0.00do.call 174.72 100.00 0.00 0.00lapply 174.72 100.00 0.00 0.00parLapply 174.72 100.00 0.00 0.00staticClusterApply 174.72 100.00 0.00 0.00unserialize 174.70 99.99 174.70 99.99FUN 174.70 99.99 0.00 0.00recvData 174.70 99.99 0.00 0.00recvData.SOCKnode 174.70 99.99 0.00 0.00as.character 0.02 0.01 0.02 0.01cut 0.02 0.01 0.00 0.00cut.default 0.02 0.01 0.00 0.00factor 0.02 0.01 0.00 0.00split 0.02 0.01 0.00 0.00split.default 0.02 0.01 0.00 0.00splitIndices 0.02 0.01 0.00 0.00splitList 0.02 0.01 0.00 0.00structure 0.02 0.01 0.00 0.00$sample.interval[1] 0.02$sampling.time[1] 174.72
Clustering 16 million records in parallel
time limit exceeded;r;clustering;machine learning;geospatial
Hard to tell without sample data, but let's start with a cleaned up version, as there's so much duplicated code here and the formatting is inconsistent.

The require should probably go to the top?

AB[i, 2] and nrow(resultdf) are run more than once and that's not good.

Some expressions are the same in multiple branches and can be merged, e.g. alldistance = ... and sum(alldistance).

AFAIK parLapply just uses the return value of the function, so the assignments to VAL and VAL2 are super confusing.

All refactored, it looks like this now, which almost fits into a single screen:

    require(fpc)

    cl = makeCluster(detectCores() - 1)
    clusterExport(cl, varlist = c("FG", "AB", "sites", "distancematrix"), envir = environment())

    results = parLapply(cl, 1:nrow(AB), function(i) {
        row = AB[i, 2]
        filtered = subset(FG, FG$R == row)
        sites = merge(filtered, locations, by.x = "T", by.y = "T", all.x = FALSE)
        resultdf = unique(data.frame(sites$NAME, sites$NEWLong, sites$NEWLat))
        n = nrow(resultdf)

        if (n == 0) {
            data.frame(AN = row, SCORE = 0, SITES = 0, DISTANCE = 0)
        } else {
            alldistance = round(distanceMatrix(resultdf))
            s = sum(alldistance)
            if (n < 4 || s == 0) {
                score = 1
            } else {
                score = pamk(alldistance, krange = 1:min(n - 1, 10), criterion = "asw",
                             critout = TRUE, usepam = FALSE, ns = 2)$nc
            }
            data.frame(AN = row, SCORE = score, SITES = n, DISTANCE = s)
        }
    })
_softwareengineering.290023
I'm trying develop an add-in for an application using it's API and I have Option Strict turned on. Trying to work with these COM objects is causing multiple compile issues saying Option Strict On disallows implicit conversions from 'typeA' to 'typeB'.I'm currently overcoming the issue by using DirectCast, which means I have to define the type in multiple places, which makes future code maintenance kind of sucky.A simple example is: Dim templateMgr As IEdmTemplateMgr5 = TheVault.CreateUtility(EdmUtility.EdmUtil_TemplateMgr)The CreateUtility method returns an object which implements the interface specified by the argument (argument is an EdmUtility Enum value). The return is specified at compile time as System.Object so must be cast to the right type in order to use it. With option strict on I cannot do this implicitly, so I have been using DirectCast thusly:Dim templateMgr As IEdmTemplateMgr5 = DirectCast(TheVault.CreateUtility(EdmUtility.EdmUtil_TemplateMgr), IEdmTemplateMgr5)There are many, many, many (that many!) COM members that use generic members that make the compiler very unhappy with Option Strict turned on. I don't particularly like using DirectCast() because it means declaring the type in multiple places scattered all throughout the code. Is this a case where it's better to just turn off Option Strict??I feel like there has to be a better way!EDIT 1My compile options are: Option Explicit = ON Option Strict = ON Option Infer = ON Option Compare = Text Target CPU = AnyCPUEDIT 2Here is another example that isn't just creating a new object instance. In this example I am using a loop to get information on all the objects in an array that returned by another function. The return from Data.Get is always type System.Object which again causes teh compiler to complain that implicit conversion from type 'Object' to type 'String' is not allowed with Option Strict on.Dim DataReturn As System.ArrayDim refreshFlag As LongTry refreshFlag = TheTemplate.RunEx(TheCommand.mlParentWnd, TheVault.RootFolderID, DataReturn) If Not DataReturn.IsAllocated Then Throw New Exception(Nothing was created by the template.) 'Refresh the folder view if required If refreshFlag = EdmRefreshFlag.EdmRefresh_FileList Then TheVault.RefreshFolder(TheVault.RootFolderPath) 'Return the path(s) of the newly created file(s) Dim path As String = String.Empty For Each data As EdmData In DataReturn Select Case data.Type Case EdmDataType.EdmData_File path = DirectCast(data.Get(EdmDataPropertyType.EdmProp_Path), System.String) If path.Length > 0 Then CreatedFilesPaths.Add(path) path = String.Empty End Select Next
When to turn off Option Strict? Or how to deal with inheritance of COM using Option Strict?
vb.net;com
Dealing with COM objects is almost the canonical reason for setting Option Strict to off, or using the C# equivalent, dynamic.

Late binding means that you lose the help of the compiler in getting things right. If you are fighting the compiler more than it is helping you, it is quite reasonable to just say "I know what I am doing."

I would recommend isolating these functions into a separate file, and leaving Option Strict on for everything else.
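A sketch of the kind of isolation meant here (the module and helper names are invented, not part of the EPDM API): put the one late-bound call in its own file compiled with Option Strict Off, and expose a generic helper so the target interface is written only once at each call site.

    ' ComInterop.vb - the only file in the project with Option Strict Off
    Option Strict Off

    Public Module ComInterop
        ' Late-bound COM call wrapped in one place; the single cast happens here.
        Public Function CreateUtility(Of T)(vault As Object, utilityId As Object) As T
            Return DirectCast(vault.CreateUtility(utilityId), T)
        End Function
    End Module

    ' Usage from code that keeps Option Strict On:
    ' Dim templateMgr = ComInterop.CreateUtility(Of IEdmTemplateMgr5)(TheVault, EdmUtility.EdmUtil_TemplateMgr)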
_softwareengineering.122173
I was looking for a pattern/solution that allows me to call a method at runtime, chosen from a group of different methods, without using Reflection. I've recently become aware of the Abstract Factory Pattern. To me, it looks so much like polymorphism, and I thought it could be a case of polymorphism but without the super class GUIFactory, as you can see in the example of the link above. Am I correct in this assumption?
Can the Abstract Factory pattern be considered as a case of polymorphism?
object oriented;design patterns;polymorphism
I guess it depends on how it is used.Essentially the Factory pattern is a reference to a set of objects. Normally combined with something else, possibly like the Strategy pattern (which is more likely to be defined as a type of polymorphism) to provide reference to an object to act on.The Abstract Factory Pattern by itself is intended to be polymorphic as it is defined as an abstract class type. However the concrete implementation of the factory is the WidgetFactory in your type and is only polymorphic in reference to using a factory and providing an implementation of a factory.In terms of what you are after, you certainly require a concrete factory, and presumably the actions you perform depend on the exception being caught. To that end you would use your factory implementation in a non-polymorphic way simply by passing it the exception caught, and have the factory pattern return a method to invoke a strategy or even a chain-of-command pattern to deal with how you would like to handle the exception.Therefore, your factory would not necessarily be polymorphic, and your exception handlers would be polymorphic.
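A rough Java sketch of the split being described (the factory look-up itself is plain, while the polymorphism lives in the handlers it returns; all names here are invented):

    interface ExceptionHandler {
        void handle(RuntimeException e);
    }

    class RetryHandler implements ExceptionHandler {
        public void handle(RuntimeException e) { /* retry the failed action */ }
    }

    class LogAndRethrowHandler implements ExceptionHandler {
        public void handle(RuntimeException e) { throw e; }
    }

    class HandlerFactory {
        // Non-polymorphic lookup: pass in the caught exception, get back a strategy.
        ExceptionHandler handlerFor(RuntimeException e) {
            if (e instanceof IllegalStateException) {
                return new RetryHandler();
            }
            return new LogAndRethrowHandler();
        }
    }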
_softwareengineering.238755
Recently I started some projects to improve my programming skills, so I tried to develop point-of-sale software. I started to buy the required hardware (ticket printer, barcode scanner) to build my personal lab, but I'm unable to find out how to implement the payment process using an ATM terminal. Is there any possibility to simulate the payment process without having the equipment?
How to simulate an ATM terminal in a POS
simulation
An 'ATM terminal' is often a magnetic stripe reader (MSR). There are several different approaches for this:

Keyboard 'wedge' or USB. It hooks into the USB system and acts as another keyboard. Magtek makes some (admittedly, this is a higher end brand).

Integrated keyboard mag stripe readers are actually part of a keyboard, as can be seen with this Cherry Keyboard.

NFC devices.

Dedicated card readers with displays (such as those by VeriFone and Ingenico).

As some of these just act as keyboards themselves, yes, you can simulate it by typing the data in from a keyboard. Otherwise, if you want to support things such as Cherry keyboards and the VeriFone 870 you will probably need to get one. Especially consider that these devices are often dedicated computers of their own, with their own programming languages and operating systems (in the case of the 870 I linked, it's an embedded Linux system).

There are libraries that try to abstract away the 'card reader' (and then you can go about making something that works with JavaPOS or the like), but from experience, these can be very frustrating to work with as they often target the lowest common denominator of devices (and that can be very low).
_webapps.29922
Say I get a New Message notification (the red box over the Messages icon). I then click on the Messages icon, so I am viewing the preview of all of my recent conversations. The red notification icon goes away.Even if I didn't open the conversation itself to view the full text, does the message get marked as read?
Does previewing a Facebook message mark it as Seen?
facebook;chat
If you see text 1/2/3 in a small red cloud in the Envelope icon(Notification) in the top-blue bar & you click that, and preview the message, Message will NOT be marked as read. To simplify, Until you actually Go to Messages, click that Specific Message and see it in FULL, no message will be marked as Read. Unread messages always have a faint blue background, whereas read messages have pure white.Any message can be marked as Unread again, from the screen where you read it.TL:DR; No, it will not be marked as read. :)
_codereview.42741
I was wondering if there is a smarter way of doing the following code. Basically what is does is that it opens a data file with a lot of rows and columns. The columns are then sorted so each column is a vector with all the data inside.3.2.2 - Declare variableslineData = list()for line in File: splittedLine = line.split() # split lineData.append(splittedLine) #collect And here the fun begins3.2.3 - define desired variables from filecol1 = ElemNocol2 = Node1col3 = Node2col4 = Lengthcol5 = Areacol6 = Inertiacol7 = Fnode1col8 = Fnode2col9 = SigmaMincol10 = SigmaMax3.2.3 - make each variable as a list/vectorvar ={col1:[], col2:[], col3:[], col4:[], col5:[], col6:[], col7:[], col8:[] ,col9:[],col10:[]} 3.2.3 - take the values from each row in lineData and collect them into the correct variablefor row in lineData: var[col1] .append(float(row[0]) ) #[-] ElemNo var[col2] .append(float(row[1]) ) #[-] Node1 var[col3] .append(float(row[2]) ) #[-] Node2 var[col4] .append(float(row[3]) ) #[mm] Length var[col5] .append(float(row[4]) ) #[mm^2] Area var[col6] .append(float(row[5])*10**6) #[mm^4] Inertia var[col7] .append(float(row[6]) ) #[N] Fnode1 var[col8] .append(float(row[7]) ) #[N] Fnode2 var[col9] .append(float(row[8]) ) #[MPa] SigmaMin var[col10].append(float(row[9]) ) #[MPa] SigmaMaxAs you see this is a rather annoying way of making each row into a variable. Any suggestions?
Row/Column Transpose
python;matrix
First of all don't create variables for those keys, store them in a list.

    keys = ["ElemNo", "Node1", "Node2", "Length", "Area", "Inertia",
            "Fnode1", "Fnode2", "SigmaMin", "SigmaMax"]

You can use collections.defaultdict here, so no need to initialize the dictionary with those keys and empty lists.

    from collections import defaultdict
    var = defaultdict(list)

Now, instead of storing the data in a list, you can populate the dictionary during iteration over File itself.

    for line in File:
        for i, (k, v) in enumerate(zip(keys, line.split())):
            if i == 5:
                var[k].append(float(v)*10**6)
            else:
                var[k].append(float(v))
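Putting the pieces together, a self-contained sketch (the input file name is made up, and column 5 keeps the 10**6 scaling from the original code):

    from collections import defaultdict

    keys = ["ElemNo", "Node1", "Node2", "Length", "Area", "Inertia",
            "Fnode1", "Fnode2", "SigmaMin", "SigmaMax"]

    var = defaultdict(list)
    with open("columns.txt") as data_file:
        for line in data_file:
            for i, (k, v) in enumerate(zip(keys, line.split())):
                # column index 5 (Inertia) is scaled, as in the original code
                var[k].append(float(v) * 10**6 if i == 5 else float(v))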
_cs.25931
Can this problem be solved in poly time?

Input: $S_i \subset \{1,\cdots,n\}$ for $i=1,\cdots, n$.

Question: Is it possible to select an $a_i \in S_i$ for each $i=1,\cdots,n$, such that $\{a_1,\cdots,a_n\}=\{1,\cdots,n\}$?

Informally, the problem asks for selecting one element from each subset $S_i$ such that the selected elements cover the set $\{1, \cdots, n\}$.
A variant of the set cover problem: Is that a known problem?
algorithms;complexity theory;decision problem;polynomial time
Hint: Form a bipartite graph which has the sets $S_1,\ldots,S_n$ on one side and the numbers $1,\ldots,n$ on the other. Connect $S_i$ to $a$ if $S_i$ contains $a$. You are looking for a perfect matching in this graph.
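Should you also want to compute such a selection rather than only decide existence, a small augmenting-path matching (Kuhn's algorithm) is enough; here is a Python sketch, assuming the sets are given as lists of candidate values from 1..n:

    def can_cover(sets, n):
        # match[v] = 1-based index of the set currently assigned value v, 0 if free
        match = [0] * (n + 1)

        def try_assign(i, seen):
            for v in sets[i - 1]:
                if v in seen:
                    continue
                seen.add(v)
                if match[v] == 0 or try_assign(match[v], seen):
                    match[v] = i
                    return True
            return False

        return all(try_assign(i, set()) for i in range(1, n + 1))

    print(can_cover([[1, 2], [1], [2, 3]], 3))  # True: pick 2, 1, 3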
_cs.22558
For a proof I need to use the fact that every word in the language of an enumerator occurs on the output paper in finite time. Is it true?

For example, take the language of the natural numbers in decimal representation. Can the enumerator print the odd numbers first and then the even numbers? (If yes, I am wrong.)

$1, 3, 5, 7, ..., 2, 4, 6, 8, ...$

As far as I know, we get to the even numbers but not in finite time (maybe by transfinite induction based on this).
Does an enumerator print the first occurrence of a word in finite time?
turing machines
Your interpretation is correct, and the easiest way to look at it is to know both definitions of recursively enumerable sets:There is an algorithm (potentially running forever) that enumerates the members of $S$.There is an algorithm for which the algorithm halts only on elements of $S$.These two definitions are equivalent, and if you use the second one then you can easily answer your own question.However, do note an important caveat. Although for any given word $w \in S$ there is some time $T_w$ such that after $T_w$ steps, $w$ will have been enumerated. However, there is no way to bound $T_w$ from above as a function of $w$. It can grow faster than any computable function.
_codereview.42411
A day ago I have asked a question on here about Preventing email injection. I had some feedback and worked on it, and below is the latest update.Could anyone please share their opinion? Is it secure enough? Is there a way to short-cut the code? <?php session_start(); if ($_SERVER['REQUEST_METHOD'] == 'POST'){ ob_start(); if(isset( $_REQUEST['name'], $_REQUEST['email'], $_REQUEST['message'], $_REQUEST['number'], $_REQUEST['date'], $_REQUEST['select'], $_REQUEST['radio'], $_REQUEST['checkbox'], $_REQUEST['token'] )){ if($_SESSION['token'] != $_POST['token']){ $response = 0; }else{ $_SESSION['token'] = ; $name = $_REQUEST['name']; $email = $_REQUEST['email']; $message = $_REQUEST['message']; $number = $_REQUEST['number']; $date = $_REQUEST['date']; $select = $_REQUEST['select']; $radio = $_REQUEST['radio']; $checkbox = $_REQUEST['checkbox']; $spam_pattern = /[\r\n]|Content-Type:|Bcc:|Cc:/i; switch (true){ case !filter_var($email, FILTER_VALIDATE_EMAIL): $response = <b style='color: red'>Invalid Email Address!</b>; break; case !preg_match($spam_pattern, $name): case !preg_match($spam_pattern, $number): case !preg_match($spam_pattern, $date): case !preg_match($spam_pattern, $select): case !preg_match($spam_pattern, $radio): case !preg_match($spam_pattern, $checkbox): case !preg_match($spam_pattern, $message): $response = <b style='color: red'>Invalid Request!</b>; break; default: $to = ; $subject = New Message From: $name; $message = Name: $name<br/> number: $number<br/> date: $date<br/> select: $select<br/> radio: $radio<br/> checkbox: $checkbox<br/> Email: $email<br/> Message: $message; $headers = 'MIME-Version: 1.0' . \r\n; $headers .= 'Content-type: text/html; charset=utf-8' . \r\n; $headers .= 'From: '.$email . \r\n; if(mail($to, $subject, $message, $headers)){ $response = <h2 style='color: green'>Success! Your submission has been sent.</h2>; }else{ $response = <h2 style='color: blue'>Error! There was a problem with sending.</h2>; } break; } } } else { $response = <b style='color: red'>Error</b>; } ob_flush(); }?>
Preventing email injection - Part 2
php;security;email
null
_unix.111545
I am using the bash script to connect to the mysql database and execute a query. I use the below script to connect to the database and execute the query. #!/bin/bashTotal_Results=$(mysql -h server-name -P 3306 -u username-ppassword -D dbname<<<select URL from Experiment where URL_Exists = 1);for URL in $Total_Results;doecho $URLvar=$(curl -s --head $URL | head -n 1 | grep HTTP/1.[01] [23]..)echo $varif [ -z $var ]thenecho Ok we do not have a valid link and the value needs to be updated as -1 hereelseecho we will update the value as 1 from herefidoneThe problem is the result set is considered as a one whole result and I am getting inside the else loop only once (we will update the value as 1 from here is printed only once). I have 2500 valid URLs and I expect 2500 echoes of we will update the value as 1 from here.How can I process each and every row as a single result from mySQL query?
Mysql query resultset in bash script
bash;shell;mysql
mysql seems to output the results to a shell variable in a single line. One way round this is to write the contents to a temporary file, then process in a while loop.EDITOn my system IFS=\n before the mysql command (when the results are assigned to a shell variable) gives the correct multi-line output.e.g. IFS=\n Total_results=$(mysql.....)=============== End of Edit ==========================#!/bin/bashmysql --silent -h server-name -P 3306 -u username-ppassword -D dbname<<<select URL from Experiment where URL_Exists = 1 > tmp_resultswhile read URLdo echo $URL var=$(curl -s --head $URL | head -n 1 | grep HTTP/1.[01] [23]..) echo $var if [ -z $var ] then echo Ok we do not have a valid link and the value needs to be updated as -1 here else echo we will update the value as 1 from here fidone < tmp_results
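If you would rather avoid the temporary file, the same loop can read from a process substitution; a sketch (connection options shortened, -N suppresses the column header):

    #!/bin/bash
    while read -r URL; do
        var=$(curl -s --head "$URL" | head -n 1 | grep "HTTP/1.[01] [23]..")
        if [ -z "$var" ]; then
            echo "no valid link, value needs to be updated as -1 here"
        else
            echo "valid link, value will be updated as 1 here"
        fi
    done < <(mysql -N --silent -h server-name -P 3306 -u username -ppassword -D dbname \
             -e "select URL from Experiment where URL_Exists = 1")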
_unix.192885
So I installed GRUB2 and Ubuntu 14.10 alongside Windows 8.1. I am on an Acer laptop, which does not have a CD drive. I deleted the Ubuntu partition in Windows 8.1 through the integrated disk manager, restarted, and now am seeing this:

    GNU GRUB version 2.02~beta2-9ubuntu1
    Minimal BASH-like editing is supported. For the first word, TAB lists
    possible commands completions. Anywhere else TAB lists possible device
    or file completion
    grub>

I have googled some things and I think I need to restore the default Windows boot manager. However, I don't have a recovery disk for Windows 8.1, and again, I don't have a CD drive. Would another possibility be to make a USB with Ubuntu on it, boot that instead, and then somehow fix this?
Restore windows boot manager in grub command line
windows;grub;boot loader
null
_unix.253804
I just installed and am trying to start rpcbind on my redhat 7 machine. I'm new to linux so I'm having some trouble figuring out what to do next. I'm running all of these commands as the root user.When I run rpcbind I get the following output:Jan 07 09:44:28 sebilj systemd[1]: Starting RPC bind service...**Jan 07 09:44:28 sebilj rpcbind[17902]: /sbin/rpcbind: error while loading shared libraries: libkeyutils.so.1: cannot open shared object file: Permission denied**Jan 07 09:44:28 sebilj systemd[1]: rpcbind.service: control process exited, code=exited status=127Jan 07 09:44:28 sebilj systemd[1]: Failed to start RPC bind service.Jan 07 09:44:28 sebilj systemd[1]: Unit rpcbind.service entered failed state.Jan 07 09:44:28 sebilj systemd[1]: rpcbind.service failed.So I checked and the library in question does exist and it has chmod set to 777 so full permissions.I checked and this library is linking to a lib with the same name but a higher version, and this second lib also has full permissions:ldconfig -v | grep libkeyutils.so.1 libkeyutils.so.1 -> libkeyutils.so.1.5Finally I checked which libs rpcbind needs and it showed me the following:ldd /sbin/rpcbindlinux-vdso.so.1 => (0x00007ffe2b731000)libtirpc.so.3 => /lib64/libtirpc.so.3 (0x00007f4f06b43000)libsystemd.so.0 => /lib64/libsystemd.so.0 (0x00007f4f06b1b000)libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4f068fe000)libwrap.so.0 => /lib64/libwrap.so.0 (0x00007f4f066f3000)libc.so.6 => /lib64/libc.so.6 (0x00007f4f06332000)libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f4f060e5000)libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f4f05e00000)libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f4f05bce000)libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f4f059c9000)libcap.so.2 => /lib64/libcap.so.2 (0x00007f4f057c4000)libm.so.6 => /lib64/libm.so.6 (0x00007f4f054c2000)librt.so.1 => /lib64/librt.so.1 (0x00007f4f052b9000)libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f4f05094000)liblzma.so.5 => /lib64/liblzma.so.5 (0x00007f4f04e6f000)libgcrypt.so.11 => /lib64/libgcrypt.so.11 (0x00007f4f04bed000)libgpg-error.so.0 => /lib64/libgpg-error.so.0 (0x00007f4f049e8000)libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f4f047ce000)libdw.so.1 => /lib64/libdw.so.1 (0x00007f4f04586000)libdl.so.2 => /lib64/libdl.so.2 (0x00007f4f04382000)libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f4f0416c000)/lib64/ld-linux-x86-64.so.2 (0x00007f4f06f8a000)libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f4f03f52000)libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f4f03d43000)**libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f4f03b3f000)**libattr.so.1 => /lib64/libattr.so.1 (0x00007f4f03939000)libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f4f036d8000)libelf.so.1 => /lib64/libelf.so.1 (0x00007f4f034c1000)libbz2.so.1 => /lib64/libbz2.so.1 (0x00007f4f032b1000)libz.so.1 => /lib64/libz.so.1 (0x00007f4f0309b000)From this I can see libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f4f03b3f000). My assumption is that the higher version that libkeyutils.so.1 links to is causing the problem but I'm not sure how to resolve this since when searching for this lib it shows me the package that I already have installed. Any ideas?EDITI just want to add that Ijaz Khan's suggestion resolved the issue from me, I had a version issue when installing without Yum.
Redhat error while loading shared libraries: libkeyutils.so.1: cannot open shared object file: Permission denied
rhel;libraries;shared library
null
_cstheory.7491
I'm looking for a reference for the following result:Adding two integers in the factored representation is as hard as factoring two integers in the usual binary representation.(I'm pretty sure it's out there because this is something I had wondered at some point, and then was excited when I finally saw it in print.)Adding two integers in the factored representation is the problem: given the prime factorizations of two numbers $x$ and $y$, output the prime factorization of $x+y$. Note that the naive algorithm for this problem uses factorization in the standard binary representation as a subroutine.Update: Thanks Kaveh and Sadeq for the proofs. Obviously the more proofs the merrier, but I would also like to encourage more help in finding a reference, which as I said I'm fairly sure exists. I recall reading it in a paper with other interesting and not-often-discussed ideas in it, but I don't recall what those other ideas were or what the paper was about in general.
Adding integers represented by their factorization is as hard as factoring? Reference request
cc.complexity theory;reference request;time complexity;factoring;encoding
null
_unix.203338
Yesterday I add a SSD to my PC configuration and I make on it a fresh installation. At the moment of installation I replace my old HDD and there was only the SSD. When the installation finish I make a manually shutdown to attach the HDD with cables and then turn on the pc. After that I can't open my information on the HDD but in BIOS everything seems fine. From second HDD I can mount only boot partition which is 524MB from 500GB HDD. When I check with fdisk -l what is the situation the answer looks fine:Disk /dev/sda: 128.0 GB, 128035676160 bytes255 heads, 63 sectors/track, 15566 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x000d66f4 Device Boot Start End Blocks Id System/dev/sda1 * 1 64 512000 83 LinuxPartition 1 does not end on cylinder boundary./dev/sda2 64 15567 124521472 8e Linux LVMDisk /dev/sdb: 500.1 GB, 500107862016 bytes255 heads, 63 sectors/track, 60801 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x16481d17 Device Boot Start End Blocks Id System/dev/sdb1 * 1 64 512000 83 LinuxPartition 1 does not end on cylinder boundary./dev/sdb2 64 60802 487873536 8e Linux LVMDisk /dev/mapper/vg_andromeda-lv_root: 53.7 GB, 53687091200 bytes255 heads, 63 sectors/track, 6527 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000Disk /dev/mapper/vg_andromeda-lv_swap: 8136 MB, 8136949760 bytes255 heads, 63 sectors/track, 989 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000Disk /dev/mapper/vg_andromeda-lv_home: 65.7 GB, 65682800640 bytes255 heads, 63 sectors/track, 7985 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000Here is a screenshot of computer:///When I execute mount /dev/sdb2 /storage as rootI get the following error:mount: unknown filesystem type 'LVM2_member'When I run vgs here is the answer:WARNING: Duplicate VG name vg_andromeda: Existing gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73WARNING: Duplicate VG name vg_andromeda: Existing gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73WARNING: Duplicate VG name vg_andromeda: gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73WARNING: Duplicate VG name vg_andromeda: gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY (created here) takes precedence over bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73So can anyone helps me because I can't open my information from the HDD. I've tried to mount /dev/sdb and /dev/sdb2 (there is no problem with /dev/sdb1 because there is the boot partition). On the fresh installation I use the same username and hostname as on the old. Also on the old HDD there is a other CentOS installation but there is a lot of information and I want copy it first to the SSD and then I'll format the HDD.Best regards,Georgi!
Can't mount second hard drive on CentOS 6.6 - Duplicate VG name
mount;hard disk;automounting;ssd;duplicate
Volume group names should be unique on a system, by design. The problem occurs when a disk is moved from one system to another.

So you have a few options (detailed below):
rename the VG on the external [not mounted] disk(s).
rename the VG of your system (not realistic).
merge both volume groups into a single one (probably needs a rename first).

option 1 - rename the VG on the external

Use the command vgrename. You need to use vgdisplay or vgs to retrieve the volume group UUID.

    $ vgs -o vg_name,vg_attr,vg_uuid
    VG           Attr   VG UUID
    vg_andromeda wz--n- gc5zhX-vrW9-mEDA-mzNN-kZxf-9nON-1aWwGY
    ????         ?????? bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73

    $ vgrename bwQkRq-mgph-9BYf-9WPF-cKz0-FLFq-0Qxs73 vg_andromeda_old
    $ vgchange -ay vg_andromeda_old

(please, edit/update this post with the actual output of the command vgs)

option 2 - rename the VG of your system

This is not realistic. You can't rename an active volume group, so you would have to boot on a CD/DVD, rename the VG, and fix your system configuration in various places (fstab, bootloader)... However, since your installation is fresh, you could reinstall your system with another name.

option 3 - merge both volume groups into a single one

You could merge both VGs, but it has a few caveats:
1. it makes sense if both drives are meant to remain on the system.
2. you can't have two LVs with the same name in a single VG.
3. you have an SSD and an HDD; it's sensible to keep them on distinct VGs for clarity.
4. the vgmerge command seems to only merge two VGs by name (not UUID), so you have to rename the duplicate VG anyway.
_unix.66021
I need to copy files between Linux machines. The problem is that the user I use to log in (myuser) is different from the user that can access the file. If I just ssh to the machine, I can switch the user using sudo su someuser; can I do something similar while using scp? With WinSCP I managed to do it by configuring the SCP/Shell setting, so I believe there must be a similar way to do it via a pure shell.
Changing user while scp
users;scp
Assuming that the user you CAN ssh to doesn't need a password to sudo su into the target user, you can try this:dd if=myfile | ssh some.host sudo -u targetuser dd of=myfile ... Mind, I'm still unconvinced that simply configuring targetuser to only allow scp/sftp/rsync over SSH and using a RSA keypair for authentication isn't a much better option.
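An alternative sketch with rsync, which lets the remote side run under the target user in one go (this assumes the remote sudo rule lets your login user run rsync as targetuser without a password or TTY):

    rsync --rsync-path="sudo -u targetuser rsync" myfile myuser@some.host:/path/only/targetuser/can/write/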
_unix.230486
I am currently using Virtualbox to run this server where my current disk space is:$ df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/ol-root 50G 44G 6.1G 88% /devtmpfs 3.9G 0 3.9G 0% /devtmpfs 4.0G 80K 4.0G 1% /dev/shmtmpfs 4.0G 9.0M 3.9G 1% /runtmpfs 4.0G 0 4.0G 0% /sys/fs/cgroup/dev/sda1 497M 166M 331M 34% /boot/dev/mapper/ol-home 26G 2.9G 23G 12% /homeI would like to increase the /dev/mapper/ol-root size by 12gb. I already increased the size of the .vdi file. Then used gparted to allocate the unallocated space.However all it's done is massively increase my ol-home volume with a bunch of unused space. I'd like to move 12gb of that available space to that of ol-root.Can someone explain how to go about doing this and why gparted added the space to ol-home instead?
How to move space from 1 file system to another?
linux;filesystems;virtualbox;gparted
null
_webmaster.61575
I am using glyphicons in Bootstrap 3. They work very nicely here:

latest Chrome
latest Firefox
latest Safari
latest Explorer
latest Android

At one facility, the glyphicons don't show. The buttons come up blank. How do I troubleshoot? They are security sensitive there. I don't have systems or network access.. and am not in a position to request that. Troubleshooting with advanced tools isn't going to happen. Here's what I have access to:

Internet Explorer 9
Behind a very secure firewall

Sometimes, I think the glyphs not showing is the IE 9.. but my code should be addressing that. Sometimes, I think their firewall is blocking the CDN. Can I enter a URL into a browser to test if the CDN is there? Sometimes, I think my FB share and like buttons upset this facility's firewall, and they tie the whole thing down. Any suggestions on how I begin to research this? Or maybe you have an outright idea for IE 9 and glyphs (though my code is very-very close to the demos, which work).

UPDATE: I never did solve the problem of getting glyphs to render. However, out of deadline-driven desperation I did do a successful workaround: I switched to Font-Awesome. FA works as desired, though I like the Glyph graphic design better.
Diagnosing Bootstrap 3 Glyphicon Button Icons Not Showing
internet explorer;bootstrap
null
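One low-tech way to test the "is the CDN blocked?" theory from inside that network is to request the icon font directly; the URL below is only an example, the real one is whatever your page's bootstrap.css points at:
curl -I https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/fonts/glyphicons-halflings-regular.woff
(or paste the same URL into IE9's address bar). If that succeeds but the buttons are still blank, the usual IE9 suspects are compatibility/document mode forcing an older engine, the Font download security setting being disabled, or the font being served from a different domain without CORS headers.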
_reverseengineering.15221
Hello reverse engineers,I am reverse engineering a Mach-O executable for iOS.File says: Mach-O universal binary with 2 architectures: [arm_v7: Mach-O arm_v7 executable] [64-bit architecture=12].I need to convert a virtual address to a file offset in a Mach-O file.In IDA, I see some data in the data segment with the virtual address 0x0000000100366720, which I want to read with a C program.Using hexdump -C -v, I saw that the virtual address corresponds with the file offset 0xa9a720:00a9a720 89 00 00 00 00 00 00 00 50 b7 39 00 01 00 00 00 |........P.9.....|00a9a730 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a740 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a750 70 22 12 00 01 00 00 00 50 53 12 00 01 00 00 00 |p......PS......|00a9a760 58 53 12 00 01 00 00 00 00 00 00 00 00 00 00 00 |XS..............|00a9a770 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a780 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a790 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a7a0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a7b0 00 00 00 00 00 00 00 00 80 57 12 00 01 00 00 00 |.........W......|00a9a7c0 98 cf 32 00 01 00 00 00 94 cf 32 00 01 00 00 00 |..2.......2.....|00a9a7d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a7e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a7f0 00 67 36 00 01 00 00 00 c0 cf 32 00 01 00 00 00 |.g6.......2.....|00a9a800 20 54 12 00 01 00 00 00 7c 57 12 00 01 00 00 00 | T......|W......|00a9a810 38 ce 32 00 01 00 00 00 10 ce 32 00 01 00 00 00 |8.2.......2.....|00a9a820 34 ce 32 00 01 00 00 00 f0 53 12 00 01 00 00 00 |4.2......S......|00a9a830 51 00 00 00 e8 03 00 00 2c 00 00 00 25 00 00 00 |Q.......,...%...|00a9a840 46 00 00 00 ff 29 11 17 00 00 00 00 4b 17 00 00 |F....)......K...|00a9a850 80 00 00 00 08 00 00 00 08 00 00 00 0a 00 00 00 |................|00a9a860 00 00 00 00 0f 00 00 00 28 1b 00 00 d0 03 00 00 |........(.......|00a9a870 c8 02 00 00 a0 01 00 00 00 00 00 00 58 02 00 00 |............X...|00a9a880 a8 02 00 00 d8 01 00 00 00 00 00 00 50 01 00 00 |............P...|00a9a890 48 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |H...............|00a9a8a0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a8b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a8c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00a9a8d0 00 00 00 00 29 da f5 21 a1 b5 7a bf e9 7a d7 5b |....)..!..z..z.[|00a9a8e0 3a 97 49 72 00 00 00 00 20 67 36 00 01 00 00 00 |:.Ir.... g6.....|00a9a8f0 f0 e2 32 00 01 00 00 00 20 e3 32 00 01 00 00 00 |..2..... 
.2.....|Using IDA:0000000100366720 89 00 00 00 00 00 00 00 50 B7 39 00 01 00 00 00 ........P.9.....0000000100366730 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................0000000100366740 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................0000000100366750 70 22 12 00 01 00 00 00 50 53 12 00 01 00 00 00 p......PS......0000000100366760 58 53 12 00 01 00 00 00 00 00 00 00 00 00 00 00 XS..............0000000100366770 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................0000000100366780 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................0000000100366790 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000001003667A0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000001003667B0 00 00 00 00 00 00 00 00 80 57 12 00 01 00 00 00 .........W......00000001003667C0 98 CF 32 00 01 00 00 00 94 CF 32 00 01 00 00 00 ..2.......2.....00000001003667D0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000001003667E0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000001003667F0 00 67 36 00 01 00 00 00 C0 CF 32 00 01 00 00 00 .g6.......2.....0000000100366800 20 54 12 00 01 00 00 00 7C 57 12 00 01 00 00 00 T......|W......0000000100366810 38 CE 32 00 01 00 00 00 10 CE 32 00 01 00 00 00 8.2.......2.....0000000100366820 34 CE 32 00 01 00 00 00 F0 53 12 00 01 00 00 00 4.2......S......0000000100366830 51 00 00 00 E8 03 00 00 2C 00 00 00 25 00 00 00 Q.......,...%...0000000100366840 46 00 00 00 FF 29 11 17 00 00 00 00 4B 17 00 00 F....)......K...0000000100366850 80 00 00 00 08 00 00 00 08 00 00 00 0A 00 00 00 ................0000000100366860 00 00 00 00 0F 00 00 00 28 1B 00 00 D0 03 00 00 ........(.......0000000100366870 C8 02 00 00 A0 01 00 00 00 00 00 00 58 02 00 00 ............X...0000000100366880 A8 02 00 00 D8 01 00 00 00 00 00 00 50 01 00 00 ............P...0000000100366890 48 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 H...............00000001003668A0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000001003668B0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000001003668C0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................00000001003668D0 00 00 00 00 29 DA F5 21 A1 B5 7A BF E9 7A D7 5B ....)..!..z..z.[00000001003668E0 3A 97 49 72 00 00 00 00 20 67 36 00 01 00 00 00 :.Ir.... g6.....00000001003668F0 F0 E2 32 00 01 00 00 00 20 E3 32 00 01 00 00 00 ..2..... .2.....In the post Convert Mach-O VM Address To File Offset this formula is mentioned:you need to find the segment (LC_SEGMENT) load command which covers the address, then do something like this:fle_off = (address-seg.address)+ seg.offsetI have this load command in my Mach-O header:HEADER:0000000100000380 ; LC_SEGMENT_64 - 64-bit segment of this file to be mappedHEADER:0000000100000380 segment_command_64 <0x19, 0x5E8, __DATA, 0x100360000, 0x50000, \HEADER:0000000100000380 0x360000, 0xC000, 3, 3, 0x12, 0>If I fill in the formula:fle_off = (0x0000000100366720 - 0x100360000) + 0x360000 = 0x366720The result is not the same file offset as 0xa9a720, which I found out using hexdump.It seems like I just calculated the offset from the base address.What am I doing wrong?
Mach-O : Convert virtual address to file offset on disk
ida;c;ios;mach o
The offsets in the LC_SEGMENT command are counted from the beginning of the Mach-O header in the file. Normally the Mach-O header is at file offset 0, however OS X and iOS support so-called fat files which can contain several Mach-O files (usually for different architectures). You need to account for that and add a corresponding delta to the file offset, or, alternatively, extract the subfile you're interested in (e.g. using lipo) and work on a single file directly.
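In other words, for a fat file you also need the offset of the slice itself. A quick way to get it from the command line (the binary name is illustrative):
$ otool -f -v MyBinary                              # 'offset' field of each architecture in the fat header
$ lipo -detailed_info MyBinary                      # same information, different formatting
$ lipo MyBinary -thin arm64 -output MyBinary.arm64  # or extract the slice and work on it directly
With the numbers from the question, file_off = fat_slice_offset + (0x100366720 - 0x100360000) + 0x360000; since hexdump found the data at 0xa9a720, that slice evidently starts at 0xa9a720 - 0x366720 = 0x734000 in the fat file.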
_softwareengineering.205173
I wrote a number of NodeJS modules (some of which are actually good, in my modest opinion). I basically forgot to set a license for them.I would like to pick the AGPL (Affero GPL).How do I do that? Can I just get away with a LICENSE file in the project? Or do I need a license disclaimer in every single file in the project? What about CSS?
How do I license my software under a free license?
github;licensing;gpl;agpl
null
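A hedged sketch of the usual mechanics for a Node module (the SPDX identifier and commands are just the common convention, not a legal requirement):
$ curl -o LICENSE https://www.gnu.org/licenses/agpl-3.0.txt
$ $EDITOR package.json        # set  "license": "AGPL-3.0"  in the package metadata
$ git add LICENSE package.json && git commit -m 'License under the GNU AGPL v3'
A LICENSE file in the project root plus the package.json/README notice is what most projects rely on; the FSF additionally recommends a short copyright-and-license notice at the top of each source file, and CSS or other assets can be covered by the same grant (or a separate one if you prefer).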
_webmaster.6375
Looking in Google Webmaster Tools for my site, the sixth most common keyword is life. This appears a lot because I mention artificial life and, in places, second life in the page text. I want to increase the relevance of this page in terms of the keyword phrases in the text, not just life on its own, which, as you can see, is far too vague to be of use. How can I increase the ranking of artificial life as a keyword, instead of just life, on my site?
Single word keywords against 2 or 3 word keywords for SEO
seo;pagerank;keywords
You don't want to focus on one or two keywords for an entire site. Search engines don't rank sites. They rank pages. What that means is you want to focus on the keywords of specific pages. If you want to rank well for a certain keyword, whether it is a one or three word phrase, you need to dedicate a page to content related to that keyword(s). So if you want to have a page rank for artificial life you need to create page about artificial life. If you want your home page to rank for artificial life then you will need to alter your content to focus more on artificial life. This includes altering your internal links to include that phrase in it, etc.
_unix.145359
I have 2 hard drives 3TB each. The first of these drives is full with very important content. I have encrypted the second 3TB drive using the following technique:https://www.youtube.com/watch?v=J_g-W6hrkNAand have copied the files from the non encrypted first drive to the encrypted second drive. I now want to encrypt the first drive. How do I go about doing that? It has content in it, so I am assuming if I use the above link to encrypt it, the drive will be wiped?
Encrypting drive which has content
encryption;dm crypt
null
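A hedged outline of the usual copy-off/encrypt/copy-back dance, assuming the first drive is /dev/sdX (a placeholder - verify with lsblk, since luksFormat destroys whatever is on it) and the encrypted second drive is already mounted at /mnt/second:
# cryptsetup luksFormat /dev/sdX
# cryptsetup open /dev/sdX first
# mkfs.ext4 /dev/mapper/first
# mkdir -p /mnt/first && mount /dev/mapper/first /mnt/first
# rsync -aH /mnt/second/ /mnt/first/
So yes, encrypting it the same way as in the video wipes the drive; the data survives only because a copy already lives on the second (encrypted) disk. An in-place alternative on recent systems is cryptsetup-reencrypt, but it is slow and riskier without a backup.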
_softwareengineering.347445
Git can generate patches/diffs for binary files as well as for text files.I'm trying to figure out what encoding it uses for its binary patches.Here is an example:diff --git a/www/images/openconnect.png b/www/images/openconnect.pngnew file mode 100644index 0000000000000000000000000000000000000000..51a5d620083cafdc8be07fc42db44ee4a273caccGIT binary patchliteral 55947zcmdRWhd<R{{QouL+Lx^CE7^o(&x`1mLiWne-g{(SE4%EOaTT&R8Ie)46SA_pbc?KPzzUO|vkMHk)_}#}t>6X0T@AEpZ*K-|lT94EzNSR0>5D3M64OJZo1TO`A$Uup}J5o}Fzs^B+5FT{OaD0l@!ZDPTnN!&GzydV&wVcZAa(v9vV@a7F~HAC+wZg$>&mY%i{KR-WV...zM_(nPM^0iqGn&ziW^}xgq{7*>(Z~zK&uB(7n$e8P)2d=17EN{7l9w}@(Trv^qY==mzVj$pbc}Z>Q83UQojAk^WG1F>eAQJOc8zsxw&S*w6n$e8P)3L}vzB|q+oEgn%Ml+fbz)3L}vX6CCI&1gn5ngGoh$c$z*qZ!R;AX+sH#8%$gAZR*cATyfLjAk?eS~Uy=GVLP*l@a=IAWJWWZ(TrvU{C~H_V_Z$W5taY|002ovPDHLkV1k~|z(xQ7literal 0HcmV?d00001This is clearly some kind of binary-to-ASCII encoding but it is not the common Base64. It appears to use more ASCII characters and all the encoded lines (except for the last one!?) begin with z.
What is the encoding used in Git's binary patches?
git;text encoding;ascii
null
_unix.204912
I want to run a complex script via ssh: #!/bin/sh ARRAY1=(server1sserver123server12server14server13) for i in ${ARRAY1[@]}; do ssh $i case \$HOSTNAME in server1.domain.com) echo try1 ;; server12.domain.com) echo try12 ;; *) echo try123 ;; esac The problem is that ssh reads my local HOSTNAME variable and returns try123. Is it possible to read the variable on the remote side? I have tried \$VARIABLE and $VARIABLE but the result is the same.
SSH: remote variable
bash;ssh;variable
Using \$HOSTNAME is the correct way to escape the variable in this case. However, that variable often contains the short hostname (non-FQDN), or may not be populated at all. You should rather use the command hostname -f to get your server's FQDN. I don't know what your final script will look like, but connecting to server1 and then checking whether that server is server1 is somewhat pointless (security considerations aside). You could write some scripts, for instance script1.sh containing echo $HOSTNAME / $(hostname -f) and then run: for i in ${ARRAY1[@]}; do case $i in server1) ssh $i < script1.sh ;; server2) ssh $i < script2.sh ;; *) ssh $i < script_${i}.sh ;; esac; done EDIT: As stated in the comments, the OpenBSD-shipped version of hostname doesn't understand the -f option; its default behaviour is to display the FQDN.
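A related sketch that sidesteps the escaping question entirely: with a quoted here-document delimiter nothing is expanded locally, so the whole case statement runs verbatim on the remote host (hostnames as in the question):
for i in ${ARRAY1[@]}; do
  ssh $i 'bash -s' <<'EOF'
case $(hostname -f) in
  server1.domain.com)  echo try1  ;;
  server12.domain.com) echo try12 ;;
  *)                   echo try123 ;;
esac
EOF
done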
_datascience.21764
I have 20 000 plots most of which follow patterns similar to those I sketched below (sorry for my poor drawing skills!). I am now looking for a simple, ideally unsupervised algorithm, that would allow me to quickly categorize them and potentially find new patterns. I was considering Dynamic Time Warping, but I am afraid that it will not allow me to distinguish between, say, plots 1 and 2.Each single plot displays a sequence of binary decisions made by a single subject. These data were smoothed to create the above plots. I do have raw data. Subjects' binary decisions were made over a fixed time interval (5 minutes), but subjects could make as many decisions as they wanted. In other words, the number of data points differs between subjects but they span the same time (5 minutes). I analyze the data looking at choice by choice (i.e., choice 1, choice 2, ...), and not across time. However, shapes of these curves are relatively invariant to what ordering variable I choose.I would appreciate your advice on a quick way of categorizing these data - I am open to traditional machine learning approaches as well as deep learning. Right now, I don't care about understanding why these curves have different shapes. Once subjects' data are clustered, cluster numbers will serve as an input to a model, which I will use to gain that understanding.Many thanks for your help!
what is an appropriate algorithm for unsupervised clustering of curves or images?
machine learning;deep learning;image classification;unsupervised learning;sequential pattern mining
null
_codereview.54763
CSS and responsiveness in multiple columns with fixed and scaleable elements can be done in many ways.I have created a solution that seems to work, though I have no idea whether this is best practice.FiddleCSShtml, body { margin:0 auto; padding:0; background: #fff; text-align: center; }/* Clearfix============================================================================ */.CF { display:inline-block;overflow:hidden; }/* Elements============================================================================ */div#container {max-width: 1140px; min-width: 960px; margin:0 auto; margin-top: 10px; padding:0; background:#0F9; position:relative;} div#left-menu {width: 100px; background:#F30; position: absolute; top:0; left:0; } div#information {padding: 10px 10px 25px 10px; background:#39C; margin-left:100px;} div#information-wrapper {position:relative; background:#3FF; } div#information-left-menu {width: 125px; background:#C30; position: absolute; top: 0; left:0;} div#content {background:#FC0; margin-left: 125px; text-align:left;}HTML<div id=container class=CF > <!-- This is fixed Width --> <div id=left-menu> <p>Left 100px wide </p> </div> <!-- Width scales to size of Container --> <div id=information class=CF> <div id=information-wrapper> <div id=information-left-menu>Fixed width of 125px </div> <div id=content>text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text </div> </div> </div></div><!-- / END / Container / -->
Responsive / CSS fixed and variable widths
html;css
To have a responsive web design, you need to do more than have no horizontal scrolling when viewed with a desktop browser. You have to adapt to the viewport of any device, from the really small (phone) to the really big (desktop). This code does not, sorry. Responsive web design is typically achieved by using media queries (there are other ways, but they're unavailable in IE versions older than 10), which are completely absent from your code. I suggest you take the time to learn about what responsive web design is: http://bradfrostweb.com/blog/post/7-habits-of-highly-effective-media-queries/ http://www.smashingmagazine.com/responsive-web-design-guidelines-tutorials/ http://www.html5rocks.com/en/mobile/responsivedesign/ http://thesiteslinger.com/blog/responsive-design-why-youre-doing-it-wrong/ http://blog.cloudfour.com/the-ems-have-it-proportional-media-queries-ftw/ That said, there are other things that are not good here: Absolute positioning of the left menu. In general, absolute positioning should be avoided unless it is absolutely necessary (eg. drop menus, etc.). Absolutely positioned elements can become cut off if there's not enough surrounding content to prevent them from overflowing their ancestor elements. Multi-column layouts can easily be done using floats or the table display properties (eg. display: table-cell) and the content won't get cut off. Using px to restrict the width of text-containing elements. If the user needs to increase their font size for accessibility reasons, 100-125px is no longer an appropriately sized container for that text. You should be using ems or other relative units instead. No semantic markup. With HTML5 (see: http://html5doctor.com/), a whole slew of new semantic container elements have been added (eg. article, nav, aside, section) which may or may not be more appropriate than the general-purpose div (I can't tell because there's no real content here). Markup should be chosen to describe the content first, then you can worry about how to make it pretty.
_softwareengineering.83370
I am considering going back to school for my masters and I've been looking at several avenues I can take. I've been considering either an MBA or an MSIS degree. Overall I know that an MBA is going to give me a solid skill set that can help me become an executive. However they seem to be a dime a dozen these days and the University I can get into is good, but it's not exactly in the top 100 anything. My undergrad MINOR was in Business Information Systems. I'm rusty as hell, considering I haven't touched it, but an MSIS would be more in the direction of my past academic experience and seems to touch both on business management and IT. Question...With an MSIS will I just be a middleman? Will I really be an important person with a real skill set or will I merely be someone who isn't quite cut out to be a manager and who is clueless about the tech side?Is an MSIS degree going to give me a real chance to move up the pay scale quickly or am I better off learning programming, networking through another BS degree? What will give me more upward mobility career wise? An MBA or an MSIS?
MBA versus MSIS
education;management;functional programming
null
_vi.10615
I'm using codex for generating the tags file, but vim does not follow tags such as $ <$> . <*> with ctrl-]. It only works if I manually call the tag command, e.g. :tag $. Is it a bug, or is there something that I don't know? Thanks
ctrl-] does not work for tags consist of special character ( operators in haskell )
tags
ctrl-] uses the word under the cursor (as opposed to a WORD), which means that any punctuation is excluded. From :h word: A word consists of a sequence of letters, digits and underscores, or a sequence of other non-blank characters, separated with white space (spaces, tabs, <EOL>). This can be changed with the 'iskeyword' option. An empty line is also considered to be a word. So your options are: select it with visual mode first to tell it explicitly what you want to search for, or change the mapping to use a WORD (which is any sequence of non-whitespace characters). You can do so with the following mapping in your vimrc: nnoremap <silent><C-]> :exe "tag " . expand('<cWORD>')<CR> Here nnoremap: create a non-recursive, normal mode mapping; <silent><C-]>: map CTRL + ] and do it without echoing anything; :exe: execute the following string as a command; "tag ": use the tag command; expand('<cWORD>'): append the WORD under the cursor to the tag command; <CR>: a carriage return, which simply executes the command. Please note that doing this will not allow you to use tags in the vim help files, as they surround their tags with ||! See :h word, :h WORD for more info.
_unix.219260
I have a computer that I need to boot into, but the passwords seem to be bogus. Additionally I can't mount the drive for writing, and it is a MIPS processor, so I can't stick it in another machine to run it. Anyhow, the passwd file has some users that look like this, with a star after the user name. Does that mean a blank password, or what? root:8sh9JBUR0VYeQ:0:0:Super-User,,,,,,,:/:/bin/ksh sysadm:*:0:0:System V Administration:/usr/admin:/bin/sh diag:*:0:996:Hardware Diagnostics:/usr/diags:/bin/csh daemon:*:1:1:daemons:/:/dev/null bin:*:2:2:System Tools Owner:/bin:/dev/null uucp:*:3:5:UUCP Owner:/usr/lib/uucp:/bin/csh sys:*:4:0:System Activity Owner:/var/adm:/bin/sh adm:*:5:3:Accounting Files Owner:/var/adm:/bin/sh lp:VvHUV8idZH1uM:9:9:Print Spooler Owner:/var/spool/lp:/bin/sh nuucp::10:10:Remote UUCP User:/var/spool/uucppublic:/usr/lib/uucp/uucico auditor:*:11:0:Audit Activity Owner:/auditor:/bin/sh dbadmin:*:12:0:Security Database Owner:/dbadmin:/bin/sh rfindd:*:66:1:Rfind Daemon and Fsdump:/var/rfindd:/bin/sh
what does star in passwd file mean?
users;password;data recovery
null
_webmaster.38369
I'm looking for a something to visually create a sitemap for one of my websites. Id like something in a tree structure, so I have the hierarchical view of my site.A couple requirements I have though, the ability to map password protected pages, and (not REALLY a requirement) the ability to integrate Google Analytics data.
Visual sitemap generater
google analytics;sitemap
null
_unix.145682
Is it possible to make an output show the extended desktop to the left of the primary? I'm using xfce. $ xrandr --output VGA1 --primary --right-of LVDS1 The above command makes VGA1 an extended desktop to the right of LVDS1, but the primary part of the desktop (the part showing the apps menu button, the desktops, app instances, applets, time and date ...) is on LVDS1. I want it on VGA1.
Put the XFCE extended dektop on the right-side monitor
xorg;xfce;xrandr;multi monitor
null
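A hedged sketch of what usually works here (output names as in the question): put VGA1 on the left and mark it primary, then move the panel, since --primary by itself does not relocate an existing xfce4-panel:
$ xrandr --output VGA1 --auto --primary --output LVDS1 --auto --right-of VGA1
In recent XFCE versions the panel's monitor can then be chosen under Panel Preferences -> Display -> Output, or by unlocking the panel and dragging it onto the VGA1 screen.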
_softwareengineering.201804
Most of my programming experience is in OOP, where I have fully embraced the concepts thereof, including encapsulation. Now I'm back to structured programming, where I have a tendency to logically separate my code using subprocedures. For example, if I have a large switch case (30 cases or more), I'll put that in its own procedure so the main method looks a little neater. Generally, subprocedures are used to help keep things DRY, but in some instances these logical separations I create usually amount to being used only once. Some of my code was being reviewed, and it was mentioned that this is a bad idea. His backing for this claim is that it muddies the water and unnecessarily hides code. Instead, he insists that a subprocedure MUST be used more than once to merit making a subprocedure out of a section of code. While this idea of hiding code is commonplace in OOP, he does admit to having little to no understanding of OOP concepts and has only ever worked with structured programming. Is there any backing to his claim, or is this merely programming dogma?
Is using subprocedures to logically separate my code a bad idea for structured programming?
procedural;structural programming
Encapsulation, Data Hiding, Abstraction, Writing Readable Code etc. are nothing that is unique to OOP. In fact, all of those were invented long before OOP. Really, the only difference between OOP and other paradigms is the mechanism of Data Abstraction: OOP uses Procedural Abstraction, others typically use Abstract Data Types.Breaking up Subroutines such that every statement (or top-level expression in the case of languages that don't have statements) is on the same level of abstraction and organizing them so that the code tells a coherent story, is a universal principle, completely unrelated to any particular paradigm.
_ai.3164
I was trying to build an OCR system and heard about ANNs. I am weak at mathematics and statistics and couldn't stick with reading those massive mathematical documents (research papers or ANN-related books). But I kind of figured out that ANN training is all about balancing weights and biases. Am I right? Please also point me to some docs where I can get help understanding ANNs for use in my OCR system.
Neural Network training
training;artificial neuron
Sorry, this is a very broad area. Proper understanding of neural networks requires advanced mathematics. It's not sufficient to say balancing of weights and biases, because most ML algorithms have weights. You seriously need to grab a book. An OCR system itself is also very broad; it includes various object recognition techniques. You haven't even mentioned what you want to detect. If you want to study, try: https://github.com/Elucidation/tensorflow_chessbot This is a well-documented OCR example for chess pieces. The project uses both regression and a convolutional neural network.
_cogsci.16818
Employability is typically defined as the continuous fulfilling, acquiring or creating of work through the optimal use of competences. (Van der Heijde & Van der Heijden, 2006) One's employability does not only depend on one's ability to work (both physically and mentally), but also one's motivation to work and learn and the opportunity to work (Brouwers, 2012; dutch citation). Especially for elders, who are getting older and older, and have to work longer (i.e. until a higher age), employability is becoming incredibly relevant. They need to be able (and willing) to keep on working until their retirement, either in their current position or another less demanding job. This is a difficult job without clear insights. However, with such an incredibly broad term, it will even be difficult to gain those insights. Are there tools available to asses the personal factors of individuals' employability? Heijde, C. M., & Van Der Heijden, B. I. (2006). A competencebased and multidimensional operationalization and measurement of employability. Human resource management, 45(3), 449-476.Brouwer, S., de Lange, A., van der Mei, S., Wessels, M., Koolhaas, W., Bltmann, U., ... & van der Klink, J. (2012). Duurzame inzetbaarheid van de oudere werknemer: stand van zaken. Universitair Medisch Centrum Groningen, Groningen: Rijksuniversiteit Groningen.
What are ways to assess employability of workers?
measurement;io psychology;human factors;employability
null
_webmaster.24876
I have a Google Analytics account with a well-functioning funnel made up of 4 goals. I can query the API and get the data out, but it does not match the funnel report in Analytics. Without getting into specific values, I can give you an example with faked data.Here's how the funnel might look:Shopping Cart100 > 100 > 20 80 (80%)Address Page5 > 85 > 25 60 (71%)Payment Page2 > 62 > 10 52 (84%)Checkout1 > 53 (49.07% funnel conversion rate)Okay, so you would expect the API to output data something like this:goal1Starts goal1Completions goal1Abandons100 80 20goal2Starts goal2Completions goal2Abandons85 60 25goal3Starts goal3Completions goal3Abandons62 52 10goal4Starts goal4Completions goal4Abandons53 53 0Instead, it's different. Firstly, the abandons are associated with the following goal (so goal1 always has 0 abandons and goal4 always has >0 abandons. Okay, I can work with that. What's confusing is that the numbers are always a little different. The goal1Completions always match the report, as do the goal4Completions, but everything else is off by a small amount. Sometimes it's only 2 visits, other times it's off by 50.For the report above here's the kind of results I would tend to get:goal1Starts goal1Completions goal1Abandons100 100 0goal2Starts goal2Completions goal2Abandons105 84 21goal3Starts goal3Completions goal3Abandons90 65 25goal4Starts goal4Completions goal4Abandons58 53 5Here's what I know:Goal(n)Completions + Goal(n)Abandons = Goal(n)StartsGoal(n)Starts >= Goal(n-1)CompletionsGoal(n)Starts - Goal(n-1)Completions != reported number entering at that levelThat third one is particularly disappointing. So, here's my question:What data do I need to pull from the API in order to recreate the counts in the Funnel report in Google Analytics? I don't need the pages exited to entering from - just the counts at every level.
Google Analytics API data for goals (funnels) doesn't match - how do they reconcile?
google analytics
null
_vi.2108
If I use::tabedit file1 file2I get:E172: Only one file name allowedIs there any way to use :tabedit with multiple file names? Or another way to open multiple tabs at once?
How can I open multiple tabs at once?
tabbed user interface
Given the problems & complexity in my other answer using the built-in way by modifying the argument list, I've added my own small function to do this: Open multiple tabs at once: fun! OpenMultipleTabs(pattern_list)  for p in a:pattern_list  for c in glob(l:p, 0, 1)  execute 'tabedit ' . l:c  endfor  endfor  endfun  command! -bar -bang -nargs=+ -complete=file Tabedit call OpenMultipleTabs([<f-args>]) You can now use :Tabedit *.vim. This function will expand all globbing patterns and execute :tabedit <f> for every file. You can add as many pathnames as you want; for example, all of these work: :Tabedit file.rb / :Tabedit *.c / :Tabedit file1.py file2.py _*.py / :Tabedit /etc/hosts file{1,2}.sh ... and so forth. I put this in a little globedit.vim plugin, which also contains commands for :Edit, :Split, etc.
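For completeness, when starting from the shell rather than from inside vim, the stock binary already does this without any function (plain usage example):
$ vim -p file1.c file2.c *.h     # one tab page per file, limited by 'tabpagemax'
and from within vim, :args *.c followed by :tab all gives much the same result using the built-in argument list.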
_cs.55000
I have constructed truth tables to prove that:$ABC + ABC'+ AB'C +A'BC = AB+AC+BC$How do I prove it by simplifying the expression? I know that I can simplify: $ABC + ABC' = AB(C+C')=AB$. However I can't repeat this for $AB'C$ or $A'BC$, to get the answer I desire. I'm fairly new to boolean algebra and have tried to use the basic identities to figure it out, but can't seem to get there.
How do I simplify this boolean expression?
boolean algebra
null
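One worked route to the identity, using the idempotent law $X + X = X$ to duplicate the $ABC$ term before factoring:
$$ABC + ABC' + AB'C + A'BC = (ABC + ABC') + (ABC + AB'C) + (ABC + A'BC) = AB(C+C') + AC(B+B') + BC(A+A') = AB + AC + BC.$$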
_webmaster.21870
Is there a photo sharing service, such as flickr or picasa, that will collect the urls of the locations where the photo has been posted on other blogs (or mentioned in tweets, etc?) This could be accomplished by posting each photo as a blog entry using wordpress, which would then automatically handle pingbacks, but of course a blog doesn't perform quite like a proper photo service. Perhaps this could be done with a private photo hosting server like zenphoto by editing the php, but that seems rather involved.Does such a service already exist?
pingback / trackback support for a photo sharing website?
blog;backlinks;photo gallery
I'm not sure about a photo sharing service, but I have an idea of how you can track where your images are embedded with some PHP. You could probably build this in to WordPress somehow as well if you know what you're doing. I believe this could work.In your .htaccess file put a rule something like this:RewriteCond %{HTTP_REFERER} !^http://([-a-z0-9]+\.)?yourdomain\.com [NC]RewriteCond %{QUERY_STRING} !^pass=1$ [NC]RewriteRule ^(.*)\.(gif|jpe?g|png|bmp|swf)$ /hotlink.php?url=$1.$2 [R,NC,L]This will re-write all requests for images on your site, that don't come from your domain, to a file called hotlink.php with the address for the image they were accessing contained in the 'url' variable.Now in the hotlink.php file you can sort of do what you want. You can log the referrer and serve the image anyways, which would still allow your picture to be embedded in other sites, you can block certain sites from using your images but allow others, or you can block other sites from using your images at all.So if all you want to do is track the referring URL's you could put something like this in your hotlink.php file that all image requests are redirected through (untested):<?phpmysql_connect(localhost, admin, 1admin) or die(mysql_error());mysql_select_db(link_track) or die(mysql_error());$query = INSERT INTO image_tracking (img_url, date, referrer) VALUES('.$_GET[url].', '.date(DATE_RFC822).', '.$_SERVER['HTTP_REFERER'].' ) ;mysql_query($query); header(Location: .$url.?pass=1);?>This would take the url of the image that is being accessed and record it in a MySql database with the date and the referring url. It would then serve up the image that was being requested so the people embedding your images wouldn't even notice the difference. With the information in a database you could access the info however you wanted, through a custom php page, through something like phpMyAdmin, or by adding a page to the admin area of the blog software you are currently using.
_unix.274294
I'm exploring the linux frame buffer, /dev/fb0, and when I run sudo fbset -i from a virtual console in Gnome 3 (using Terminator) on Fedora 23, it reports the dimensions of the frame buffer as 1280x768, but my Gnome desktop resolution is 1680x1050. Why is fbset telling me that the frame buffer is 1280x768?Full output of fbset -i:mode 1280x768 geometry 1280 768 2048 2048 32 timings 0 0 0 0 0 0 0 rgba 8/16,8/8,8/0,0/0endmodeFrame buffer device information: Name : svgadrmfb Address : (nil) Size : 16777216 Type : PACKED PIXELS Visual : TRUECOLOR XPanStep : 1 YPanStep : 1 YWrapStep : 0 LineLength : 8192 Accelerator : No
Why does fbset -i report a different resolution?
framebuffer
null
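A hedged way to see the two resolutions side by side (read-only commands): the fbdev node and the X screen are separate things - svgadrmfb appears to be the framebuffer console exposed by the vmwgfx (VMware SVGA) DRM driver, while the GNOME session sets its own mode through X/DRM - so they need not agree:
$ cat /sys/class/graphics/fb0/virtual_size   # the kernel framebuffer's (virtual) size
$ xrandr | grep '\*'                         # the mode the running desktop actually uses
Switching to a text VT (Ctrl+Alt+F3) and back shows where the 1280x768 framebuffer mode is actually used; changing it with fbset would not affect the GNOME desktop resolution.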
_webapps.53742
I am wondering if there is a way to control the speed on ANY video on YouTube without using a plugin.
Control YouTube Video Speed Without Plugin
firefox;google chrome;video;youtube
null
_codereview.49409
I was asked to provide some example code for a job interview and didn't have anything to offer, so I wrote the following function.A barcode generator may seem a bit basic, but there's some logic behind the choice:It's something I recently needed, so it's not entirely contrived.It's simple enough to do in one function, so it can be reviewed easily.It's complex enough that a new programmer would make a mess of it.It's cute. You can render the generated barcode in a browser.It's useful, so I could make a little gist out of it.The barcode is in the Interleaved 2 of 5 Format. It's a simple format, and there's a Wikipedia article that explains it well.Update: A new version of this code has been posted here. The new code includes many of the suggestions made below.def render(digits): '''This function converts its input, a string of decimal digits, into a barcode, using the interleaved 2 of 5 format. The input string must not contain an odd number of digits. The output is an SVG string. import barcode svg = barcode.render('0123456789') Strings are used to avoid problems with insignificant zeros, and allow for iterating over the bits in a byte [for zipping]. # Wikipedia, ITF Format: http://en.wikipedia.org/wiki/Interleaved_2_of_5 ''' # Encode: Convert the digits into an interleaved string of bits, with a # start code of four narrow bars [four zeros]. bits = '0000' bytes = { '0': '00110', '5': '10100', '1': '10001', '6': '01100', '2': '01001', '7': '00011', '3': '11000', '8': '10010', '4': '00101', '9': '01010' } for i in range(0, len(digits), 2): # get the next two digits and convert them to bytes black, white = bytes[digits[i]], bytes[digits[i+1]] # zip and concat to `bits` 5 pairs of bits from the 2 bytes for black, white in zip(black, white): bits += black + white # Render: Convert the string of bits into an SVG string and return it. svg = '<svg height=50>' bar = '<rect x={0} y=0 width={1} height=50 {2}></rect>' pos = 5 # the `x` attribute of the next bar [position from the left] for i, bit in enumerate(bits): width = int(bit) * 2 + 2 color = '0,0,0' if (i % 2 == 0) else '255,255,255' style = 'style=fill:rgb({0})'.format(color) svg += bar.format(pos, width, style) pos += width return svg + '</svg>'
Python Barcode Generator
python;interview questions;python 3.x;converting;python 2.7
The code generally looks pretty good, but I think there are some things you could fix up here.Don't forget the stop codeYou have the start code of 0000 but you've forgotten the stop code which is 100. When you calculate, you could just tack it onto the end with bits += '100' Use SVG <def>sSince the purpose is to create an SVG barcode, it make sense to tighten the resulting SVG. You may already know how to use a <def> in SVG and it definitely makes things a little easier to understand here. The way your code is currently structures, it creates four different styles of <rect> which are all combinations of narrow/wide and black/white. You could predefine each of those shapes and then simply intantiate them within the body of the svg code. That would make the definition for svg and bar like this:svg = '''<svg height=50><defs><g id=b0><rect x=0 y=0 width=2 height=50/></g><g id=b1><rect x=0 y=0 width=4 height=50/></g></defs>'''bar = '<use xlink:href=#b{0} x={1} y=0 {2}/>'Using it would then be svg += bar.format(bit, pos, style)An improvement would be to create both bars and spaces like this:<g id=b0><rect x=0 y=0 width=2 height=50 style=fill:rgb(0,0,0)/></g><g id=b1><rect x=0 y=0 width=4 height=50 style=fill:rgb(0,0,0)/></g><g id=s0><rect x=0 y=0 width=2 height=50 style=fill:rgb(255,255,255)/></g><g id=s1><rect x=0 y=0 width=4 height=50 style=fill:rgb(255,255,255)/></g>Then your loop could look then like this:for i, bit in enumerate(bits): width = int(bit) * 2 + 2 svg += bar.format('bs'[i%2], bit, pos) pos += widthAlthough that may look verbose, it actually is over 2k shorter for a 12 digit barcode.Think carefully about data representationYour code currently translates digits into a series of '1' and '0' characters and then translates again into SVG rectangles. Why not eliminate a step? Your code could just as easily translate them in a single operation.Use list comprehensions instead of for loopsThe use of list comprehensions is almost always faster than a for loop in Python, so we use them when we can. It also tends to make the code shorter. So for example, we could change your loop to calculate the string of bar code bits to calculate all the black bits and then all the white bits like this:black = .join([bytes[i] for i in digits[0::2]])white = .join([bytes[i] for i in digits[1::2]])# shuffle them togetherdatabits = .join(.join(i) for i in zip(black,white))# create the full bar code string with start and stopbits = .join(['0000',databits,'100'])Doing this with join also saves time. Appending strings with += is very slow in Python.Prefer xrange to rangeWhen you use range, a whole list object is created, using memory. With xrange, a generator is created instead populated which can save memory. For any reasonably sized bar code this won't make much difference here, but it's good practice for Python 2.7. In Python 3, xrange doesn't exist and range creates a generator, so keep that in mind if you change versions.Don't draw more than you have toYou're drawing one <rect> for every bar or space, but it's really not necessary. Instead, you could create one larger rectangle that's the background space color and then only draw the black bars.
_webapps.31577
I would like to display YouTube thumbnail images instead of embedding the actual Flash player in a Tumblr theme, so that only when people click on the image does it start loading the Flash player and play the video. The reason is that if the front page has 10 YouTube videos, it loads very slowly because it is loading all the Flash players. It is a lot faster to load the thumbnails.
How do you display YouTube thumbnails instead of embedding the actual flash in Tumblr theme?
youtube;tumblr;tumblr themes
null
_softwareengineering.263202
Let's say I have an interest in file conversions, but everything should be done by hand and I have multiple output formats (say: CSV and Excel). Once I get contacted by a client, I have to link services and convert files. What would be the fastest way of doing this? My guess would be to start with NodeJS and use libraries to import data into a JSON format, then use libraries to convert it to a certain file format (using libraries also). Is there anything that would be faster / easier to create?
Manual repetitive conversion between file types
programming practices;data;data types;type conversion
null