id | question | title | tags | accepted_answer |
---|---|---|---|---|
_cstheory.17761 | What is the relationship between $\mathsf{PLS}$ and $\mathsf{APX}$? In other words, are problems that admit a polynomial time local search approximable? Do approximable optimization problems imply a local search algorithm in general? | What is the relationship between $\mathsf{PLS}$ and $\mathsf{APX}$? | cc.complexity theory;approximation algorithms | null |
_cs.76648 | Given is a graph with networks and bridges/switches. We know that the root bridge is the bridge with the minimal Bridge-ID. The cost of every connection between a bridge and a network is 1. In the lecture slides they calculated the best (shortest) path from one network to the root bridge. I assume they used BFS, because it would be the easiest way to calculate the shortest path, but if they used BFS, why is the algorithm called Spanning-Tree-Algorithm? | Spanning-Tree-Protocol and BFS? (Distributed Computing) | distributed systems;spanning trees | null |
_unix.341721 | I have the following test.txt. Using the command below, it prints the output xvf-9c3683ff. However I need the output xvf-bcb500df, based on the date. The command: cat test.txt | sort -k2 | awk '{print $2}' | sed 's/"//g' | grep xvf | head -1. The test.txt file contains: { "date": "2017-01-30T10:55:46.000Z", "Id": "xvf-9c3683ff"}, { "date": "2017-01-26T12:58:20.000Z", "Id": "xvf-bcb500df"}, { "date": "2017-01-31T18:33:20.000Z", "Id": "xvf-ee07b28d"}. The output should print the following result: xvf-bcb500df | grep with sort on column | grep | null |
_softwareengineering.187539 | I have a question about the best practice in this situation.At one point, my small application allowed the client to upload a file to a server, and download a file from the server (it would also compress / decompress as well).This was created in 1 solution which consisted of 4 projects:FTPCompressDecompressUITestsNow, the spec has changed and there will 2 end users, one who only wants to upload, the other who only wants to download and they should never have access to anything else (ie downloading people cannot upload and vice versa).So, I have a few choices here. I could either Keep it as 1 solution, and ask users to login, based upon the credentials will display a different UI Alter my UI so it only shows tools to download, create a new solution which consists of just a UI project and reference my .dll accordingly.Delete my UI, create 2 new solutions, each solution being created for either download or upload (and each solution probably only consisting of just 1 project, the UI) and again, referencing the .dllDoes any one have any suggestions? Would any guidelines have allowed me to have not gotten into this situation in the first place (or at least made me more aware of the potential disasters)? | How do I alter my solution / project when the spec changes | programming practices | I am mildly surprised by the requirement that the uploading user is only allowed to upload. I can see a use-case for read-only access (download only), but much less for write-only access (upload only), unless that role is to be filled by an automated system.With that in mind, I would weigh my options as followsIf both users are expected to be human, extend the UI with login functionality and provide either the upload or download functionality, depending on the user credentials. This keeps the project future-proof for the case that there will be another requirements change to support both up- and download again for one user.If one of the users is expected to be a machine, add another project to the solution, providing an API that is tailored for machine-machine communication, parallel to the functionality you currently have in the UI. The unneeded functionality should be stripped from the UI.Big scope-changes like this can usually not be foreseen during the initial development of a project, so there are no guidelines to prepare your codebase for them.What you can prepare for are requirements that seem oddly restricting (like the write-only user you have now), and write your code in such a way that the requirement can change without forcing a complete rewrite of a project. |
_codereview.14753 | This is a slightly specialised A* algorithm, basically it allows you to search to and from disabled nodes. It's for a game and we the entities spawn in houses and go into them. The rest of the search behaves normally, it doesn't traverse any disabled nodes in between the start and end points.NavData is a class which has the collections of nodes and edges in as well as some other things listed below:(also edges and nodes are simple. edges have two integers - connect from node and connected to node. They also have a value for their weight. Nodes have an int index, a vector3 position and an enabled bool)private List<Node> nodes; //just a list of all nodes in the level, unsortedprivate Dictionary<int, Dictionary<int, Edge>> edges;//edges[fromNode][toNode] private List<List<int>> nodeAdjacencies; //maintains a list of each node's connected nodes which can be used to index into 'nodes'private List<Edge> simpleEdges; //this keeps all the edges in one big list for easy traversalprivate bool aStarCore( int start, int goal, out List<Edge> shortestPath, out Dictionary<int, Edge> SPT ){ SPT = new Dictionary<int, Edge>(); //[toNode] = Edge NavData ND = NavData.GetSingleton; Vector3 targetCoords = ND.Nodes[goal].Position; // for calculating heuristic shortestPath = new List<Edge>(); SortedList<float, List<int>> openList = new SortedList<float, List<int>>();//lump nodes with same cost together, doesn't matter which one we grab List<int> searchedNodes = new List<int>(); // could definitely make this more efficient (find a better collection for lookup) // push the start node onto the open list openList.Add( Vector3.Distance( ND.Nodes[start].Position, targetCoords ), new List<int>() ); openList[ Vector3.Distance( ND.Nodes[start].Position, targetCoords ) ].Add( start ); int openListCount = 1; searchedNodes.Add(start); // while there are still nodes on the open list while ( openListCount > 0 ) { // look at the next lowest cost node int source = openList[ openList.Keys[0] ][0];//first node of first list if (openList[ openList.Keys[0]].Count == 1) openList.Remove( openList.Keys[0] ); else openList[ openList.Keys[0] ].RemoveAt(0); openListCount--; //Debug.Log( source: + source ); // only allow the code to look at enabled nodes, ( unless it's the start // node as I assume we'll want the agents to emerge from occupied tiles ) if ( ND.Nodes[source].Enabled == true || source == start ) { for ( int i = 0; i < ND.NodeAdjacencies[source].Count; i++ ) { //Debug.Log(adjacency count: + nodeAdjacencies[source].Count); int target = ND.NodeAdjacencies[source][i]; if ( !searchedNodes.Contains( target ) ) { SPT.Add( ND.Edges[source][target].ToNode, ND.Edges[source][target] ); //does the key(cost) already exist? float costToNode = ND.Edges[source][target].Weight + Vector3.Distance(ND.Nodes[target].Position, targetCoords); if ( openList.ContainsKey(costToNode) ) { openList[costToNode].Add(target); } else { openList.Add( costToNode, new List<int>() ); openList[costToNode].Add(target); } searchedNodes.Add( target ); openListCount++; if ( target == goal ) { //calculate shortest path from the SPT int counter = target; while ( counter != start ) { shortestPath.Add(SPT[counter]); counter = SPT[counter].FromNode; } return true; } } } } } shortestPath = null; return false;} | How can I make my A* algorithm faster? | c#;lookup;pathfinding | null |
_unix.236571 | I want to find the C source code for the scanf implementation on a Linux machine. Googling for the scanf implementation does not tell me the way to find it. I tried to find that source code in the gcc source tree using ctags and cscope, but I could not find it. Can anybody please tell me where the scanf function definition, i.e. the implementation source code, is? | Where is `scanf` implementation source code? | c;source;printf | It's in the glibc library, in the scanf.c source. glibc stands for GNU C Library; it is an implementation of the C standard library. It's not a part of the compiler, because you might have different implementations of it (like the Microsoft C run-time, for example) as well as different compilers (like clang). |
_codereview.157344 | I saw this interview question and decided to solve it using recursion in Java. Write a multiply function that multiplies 2 integers without using *. public class Main { public static void main(String[] args) { Scanner in = new Scanner(System.in); System.out.println("Enter first num: "); double num = in.nextDouble(); System.out.println("Enter second num: "); double numTwo = in.nextDouble(); System.out.println(multiply(num, numTwo)); } private static double multiply(double x, double y) { if (x == 0 || y == 0) { return 0; } else if (y > 0) { return x + multiply(x, y - 1); } else if (y < 0) { return -multiply(x, -y); } else { return -1; } }} What should I return instead of -1 to make this clear? | Multiplying 2 numbers without using * operator in Java | java;recursion;interview questions | What should I return instead of -1 to make this clear? Don't return -1, but recognise that you have exhausted the possible states of y, so simply do a return for the last possibility. private static double multiply(double x, double y) { if (x == 0 || y == 0) { return 0; } else if (y > 0) { return x + multiply(x, y - 1); } return -multiply(x, -y);} |
_codereview.84523 | Given a list of strings, my task is to find the common prefix.Sample input: [madam, mad, mast]Sample output: maSample input: [question, method]Sample output: Below is my solution, I'd be happy for help improving the algorithm (I'm open to totally different approaches) or general code improvement tips.Thanks :)public class PreFixer { public static void main(String[] args) { if(args.length < 1) { System.out.println(invalid arguments); return; } String commongPrefix = getCommonPrefix(args); System.out.println(Common Prefix for list is : + commongPrefix); } private static String getCommonPrefix (String[] list){ int matchIndex = recursiveChecker(0, list); return list[0].substring(0, matchIndex); } private static int recursiveChecker(int strIndex, String[] list){ for(int x=0; x<list.length; x++) { if(strIndex >= list[x].length()){ return strIndex; } if(list[0].charAt(strIndex) != list[x].charAt(strIndex)) { return strIndex; } } return recursiveChecker(strIndex + 1, list); }} | Finding the common prefix in a list of strings | java;algorithm;strings | There are some inconsistencies in your code style: sometimes you do put a space before an opening curly bracket, sometimes you don't. It is a good practice to adhere to one style(in this case, it is conventional to have a whitespace there). It is also conventional to surround all binary operators with whitespaces to make the code more readable. For instance, for(int x = 0; x < list.length; x++) looks better than for(int x=0; x<list.length; x++). In terms of time complexity, you algorithm is optimal(it is linear in the size of input). However, if it is supposed to work with long strings, I'd use iteration instead of recursion(it gets a StackOverflowError when the strings get really big). Here is my iterative solution:public static String getLongestCommonPrefix(String[] strings) { int commonPrefixLength = 0; while (allCharactersAreSame(strings, commonPrefixLength)) { commonPrefixLength++; } return strings[0].substring(0, commonPrefixLength);}private static boolean allCharactersAreSame(String[] strings, int pos) { String first = strings[0]; for (String curString : strings) { if (curString.length() <= pos || curString.charAt(pos) != first.charAt(pos)) { return false; } } return true;}In general, each class should have single responsibility(that it, you might create two separate classes here: one for computing the longest prefix and the other one for checking and parsing command-line arguments). But I think it is fine to have one class here(the entire class is pretty small) as long as the format of arguments is not going to change in the future. |
_softwareengineering.101431 | Hello and I'm apologizing in advance if this question doesn't fit programmers section of stackexchange.I'll try to clarify what the problem is by telling a story where the problem originates from.I'm working in a small company (8 people) where we have developers, sys admins and staff that handles day to day calls.The problem is the logins. There are various logins, for example web server root login. Then there's MySQL login, login for apps that we develop and to make the thing even more complex - logins differ from local development and testing versions to the ones that are actually deployed which makes the whole thing even more messier.What's happening is that no one knows all of the logins, and they're either written down somewhere deep on cryptically named files or certain people know them by heart. If someone who knows the login for the thing we need to work with is away (say holiday), the process of retrieving the login becomes a huge mess.Now, we are aware how terrible that is and we're looking into improving our login / project management. The other problem is that not everyone should know all the logins.My question is: how do you (or how would you) handle storing logins such as web server root information, web app admin login etc. in such a way that it's available to everyone within the company, but with restricted access (say, a secretary cannot obtain root details, no offence to secretaries)?P.S. I just spent 3 hours obtaining the login details to change a simple spelling mistake on a project which took exactly 1 second after I had the info. Seeing I can't afford losing any more nerves or time over such seemingly trivial things, I'm begging experienced and smart guys for help. Thanks in advance :) | How to handle storing the login data for various projects and web servers? | project management | Almost every server product or product targeted toward businesses can use LDAP. I'd go so far as to say that it's irresponsible not to do it in this day and age.Set up a directory server if you don't already have one, configure server products like mysql to use it, then update the authentication systems in whichever products you own the source for. One login for every app on the internal network.Logins to vendor sites are another story but you don't really mention those. I wouldn't waste time maintaining an enterprise-wide password vault unless I absolutely had to; it's too easy to run into problems with sensitive information getting stored in the wrong place or low-level idiots forgetting the password to the password vault (or worse, writing it down on a sticky note and attaching it to their monitor). Even if the information doesn't seem all that valuable, I would only ever trust competent IT professionals with it. We use KeePass where I work but the database is in a restricted (Admin-only) location.If everybody gets their own private password safe then that mitigates a lot of the harm (also a lot of the usefulness); shared password vaults violate non-repudiation and that is not a situation you want to be in when there's a major disaster and the auditors come a-sniffin'. |
_softwareengineering.352352 | I would like some input on some refactoring I am to do on a mobile backend (API).I have been tossed a mobile API which I need to refactor and improve upon, especially in the area of performance.One (of the many) place where I can improve the performance while also improve the architecture is how the backend currently deals with push notifications.Let me briefly describe how it currently works, and then how I intent to restructure it.This is how it works now. The example is about the user submitting a comment to a feed post:The user clicks send in the mobile app. The app shows a spinner and meanwhile sends a request to the backend.The backend receives the request and starts handling it. It inserts a row in the comments table, does some other bookkeeping stuff, and then for the affected mobile devices it makes a request to either the Apple Push Notification server or the Google Firebase Service (or both if the receiver has both an Android and an iPhone).On success the backend returns a 200 to the mobile app.Upon receiving status code 200 from the backend, the mobile app removes the spinner and updates the UI with the submitted comment.It is simple but the issue with the above as I see it isa) Currently this sample endpoint has too many responsibilities. It deals with saving a comment, and also with sending out push notifications to devices.b) The performance is not the best since the mobile app waits for the backend to both save a comment (which is pretty fast) and send a notification which requires a HTTP request (which can be anything from fast to slow).So my idea is to remove all about notifications from the backend, and host that in a separate backend app (you might call it a microservice).So what I am thinking is to do it like this:The user clicks Send in the mobile app. The app shows a spinner and meanwhile sends a request to the main API backend.The mobile app also sends of another HTTP request, this time to a notification service which is separate from the main API backend. This is kind of a fire and forget request. So the app does not wait for this in anyway, and it can be send in the background (in iOS using e.g. GCD).The main backend receives the request about the comment, and starts handling it. It inserts a row in the comments table, perhaps does some other bookkeeping stuff, and then it returns the response to the mobile app.The notification service receives the request about the comment, and inserts a row in a notification table (this is for historical reasons, e.g. to make an Activity view or something like that), and then puts a message on some queue (or on Redis). A separate job takes whatever is on the queue/Redis and handles it (this is where we actually send a request to Apple Push Notification Server and Googles Firebase Service). By not having the HTTP notification service do the talking with these external services it will be easier to scale the HTTP resources.Upon receiving the 200 from the main backend, the mobile app removes the spinner and updates the UI with the submitted comment. Again note that the mobile app does not wait on the second request it send off (it's not like it can do anything if that fails anyway).So this is way more complex. But the main API backend is now only concerned actually saving the comment. The mobile app also needs to send two requests instead of just one, but it doesn't need to wait for the second request. 
So overall it should give better performance, I think. With regards to the notification service, it could be simpler to not use a queue/Redis and just have the notification service call up Apple and Google with the push notifications. But I am thinking that by separating that out into a simple HTTP service that only does some basic bookkeeping and puts messages on a queue/Redis, it can be fast and simple, and the separate job would then do the actual work of calling up Apple and Google. Does it make sense? Or have I overcomplicated things? All comments appreciated. | Architecture of mobile backend | api;android;ios;mobile | null |
_webmaster.78660 | BackgroundI just moved my blog over from Blogger to WordPress, and I'm trying to replicate the stats reporting that I had before. I've got Google Analytics integrated with my website, but I'm having trouble figuring out how to see the data I'm looking for.What I want is to be able to see a list of all my posts, with the number of page views: today, this week, this month, and forever. Similar to this:QuestionIs it possible to set this up? | How Can I See Pageviews Per Post - Google Analytics & WordPress | google analytics;wordpress | Look under Behavior > Site Content > All Pages |
_webmaster.26325 | I ran my site through the W3C validator, and a bunch of errors were caused by the Google Website Optimizer javascript. It seems weird that that would happen. If I put CDATA around it, the errors go away. I assume that the code will still work? So, I'm wondering if there's ever a time it's bad to put CDATA around javascript? And why would Google's javascript snippets not validate in the first place? | Google Optimizer code, validation and cdata | google;google search console;validation | It probably has to do with the doctype. I suspect you're using XHTML. JavaScript contains characters which cannot exist in XHTML: raw < and & characters are not allowed except inside of CDATA sections. See this page for more. |
_softwareengineering.310547 | I need to write a function to detect if a set of strings needs delimiters when concatenated in any order. For example, the strings (A,B,C) do not need a delimiter: ABCBB -> [A,B,C,B,B]. However, the strings (Pop,corn,Popcorn) do need a delimiter, as the string Popcorn is ambiguous: it can either be [Pop,corn] or [Popcorn]. Two more testcases: (Pop,Popcorn,Kernel) -> Not ambiguous; (A,AB,BC,BA) -> Ambiguous (on the string ABCA). Algorithms that I've considered, but don't work: testing if a string starts with another string, which fails (Pop,Popcorn,Kernel); testing if a string is completely made up of other strings, which fails (A,AB,BC,BA); testing all possible combinations of strings (fails to finish on non-ambiguous). How can I detect (hopefully efficiently) if a set of strings needs a delimiter when concatenated? | Algorithm to detect if a list of strings need delimiters | algorithms | After some thinking, there is a simple algorithm. Assume you have two strings x and y, and neither is a prefix of the other. Nothing you could append to them would make them equal, so they are of no interest. What you are interested in are strings x and xa, where xa has not been created by adding strings to x: we might be able to add words from your list to both x and xa and end up with the same string. Actually, we are only interested in a. Let L be your list. We create a set S of all strings a such that for some x, both x and xa can be formed from words in your list, without creating xa by adding words to x. Initially S is empty. We check all pairs (x, y) of strings in L. If x is empty or x = y then L is ambiguous. Otherwise if y = xa or x = ya then add a to the set S. (In your last example, the two strings A and AB in your list add B to the set S). Then for each string x in S, and for each string y in L: If x = y then L is ambiguous. Otherwise if y = xa or x = ya then add a to the set S. Repeat until nothing else can be added to S; in that case L is unambiguous. In your last example, S contains B. Since L contains BC and BA we add C and A. C doesn't get us anything because no string in L starts with C. But A is actually an element of L, which makes L ambiguous. Your first example is unambiguous because none of A, B, C is a prefix of another. In your second example, because L contains Pop and Popcorn, you add corn to S. And corn is also an element of L, so it's ambiguous. In your third example, we also add corn to S, but L contains no prefix of corn and no string that starts with corn, so we are finished and L is unambiguous. |
_codereview.85803 | I'm trying to write a function explodeBy:explodeBy :: Int -> [Int] -> [Int]explodeBy n arr = foldr (\x acc-> (take n.repeat) x ++ acc ) [] arr This works as follows:> explodeBy 2 [1,2,3][1,1,2,2,3,3]The above uses list concatenation, so I tried writing a more efficient version:explodeBy2 n arr = foldr (\x acc -> f n x acc) [] arrf 0 _ rest = restf rep val rest = f ( rep -1) val (val:rest)Though this looks ugly, I expected it would work better. Reason: here we are using the (:) operator. Since rest is larger than (take n.repeat) x, I believe the second approach should have always been faster.Surprisingly:> sum $ explodeBy2 500 [2..1001]250750000(0.48 secs, 199527400 bytes)> sum $ explodeBy 500 [2..1001]250750000(0.14 secs, 102259320 bytes)explodeBy2 is consuming more time and memory!Is there something wrong with this code or my understanding of (++)? Are there also any tips on how to write explodeBy better?PS: This is where I got the impression ++ isn't efficient. Is this outdated? | Haskell: Efficient and clear list concatenation | haskell | GHCi is frequently significantly slower than compiled code even if you compile without any optimizations turned on, consider the timing information that GHCi gives you with great suspicion. Compile your code with -O2 and then time and the difference should largely disappear.One of the nice things about the Haskell Prelude is that there is very little magic going on behind the scenes. Haskell lists use some privileged constructor names ((:) and []) but besides that they are a data type like any other. So let's peek behind the curtain and take a look at how (++) is defined.(++) :: [a] -> [a] -> [a](++) [] ys = ys(++) (x:xs) ys = x : xs ++ ysYou should recognize the similarity to your function f. There is no way around the fact that growing a list by n elements takes n cons operations. When you want to add one element to the front of a list, you use (:) and that's it, you're done. When you want to add one element to the back of a list there's no constructor for that, you have to consider that element to be a new list of length one, and prepend your original list in front of it, causing you to have to walk down every single element of the original list. That is “slower” than the other case, but if the ordering of elements in your list is important you gotta do what you've gotta do, right?Take some time to familiarize yourself with the functions available in Prelude and Data.List. There are many easier ways to write explodeBy.As always, consider the types and use Hoogle. Looking at take n . repeat $ x think about the type you would give that function on its own. n is an Int, and x is any type a. So you'd have a type like func :: Int -> a -> [a]. The first hit is for replicate, which does exactly what you want.Next consider what easy intermediate values you can construct that get you closer to the result you want. The first step would be to replicate each element in the original list n times. That sounds like a job for map.map (replicate n) xs :: [[a]]Now we need :: [[a]] -> [a], which we could also search for. But this is of course list concatenation. There's one more trick you might not find by searching though, so here is when it comes in handy to have read through the documentation for Prelude. concatMap handles mapping and list concatenation in one go.explodeBy :: Int -> [a] -> [a]explodeBy n = concatMap (replicate n) |
_unix.323402 | I'm managing with some boards like Arduino/UDOO etc. but I always face an annoying problem with minicom or screen.If i type a long command, for example:sudo /sbin/wpa_supplicant -s -B -P /run/wpa_supplicant.wlan0.pid -i wlan0 -D nl80211 -c /etc/wpa_supplicant.confit reaches the end of terminal window, instead of continuing in a new line, it overwrites over the same line.One another annoying problem is when I up/down commands with UP/DOWN arrows. Check this gif while I up/down among last commands:This is my env:XDG_SESSION_ID=c1TERM=linuxSHELL=/bin/bashHUSHLOGIN=FALSEUSER=udooerLS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/gamesMAIL=/var/mail/udooerLC_MESSAGES=POSIXPWD=/home/udooerLANG=en_US.UTF-8NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascriptHOME=/home/udooerSHLVL=2LANGUAGE=en_US.UTF-8LOGNAME=udooerLESSOPEN=| /usr/bin/lesspipe %sXDG_RUNTIME_DIR=/run/user/1000LESSCLOSE=/usr/bin/lesspipe %s %s_=/usr/bin/envcheckwinsize is ON.Even nano is resized to a minipage in top-left of the bigger screen :/How to solve it? | Screen/Minicom multiline problem | terminal;gnu screen;minicom | You need to manually execute stty cols ... rows ... and/or export LINES and COLUMNS to set the remote side's belief about the size to the terminal's actual size. Unfortunately there's no way this could be set automatically over a serial line. |
_softwareengineering.22234 | Hypothetically speaking imagine that there exists a coworker that has a very shallow understanding of computing. To avoid stereotyping lets give this coworker the gender neutral name Chris. Now Chris' diagnostic ability is low and can't figure out the correct IP addresses to set his/her virtual machines to. Chris also fails to merge code properly and overwrites a commit I made fixing something, thereby re-introducing a bug. I let this slide, refix the bug and do not make a sound about it to management. Given a task Chris either 1) complains that there isn't sufficient information resulting in 2) you provide Chris with inordinately detailed instructions to satisfy 1). The more detail you provide in a list of steps to carry out, the more chance of an error being present in your instructions. Chris gets these instructions, tries to execute them, fails and it becomes your fault because your instructions aren't good enough. How do you deal with this? | Coworker that detrimentally picks on every minutia | management | Anyway given a task Chris either 1) complains that there isn't sufficient information resulting in 2) you provide Chris with inordinately detailed instructions to satisfy 1). The more detail you provide in a list of steps to carry out, the more chance of an error being present in your instructions.Having been in both your position and Chris's, I might be able to explain things a bit. I hear you saying that you're giving Chris tasks, but you don't mention involving him in coming up with those instructions. You're probably trying to help him do the right thing, but that's probably not how he sees it. When you're in Chris's place, it's difficult not to think of what you're trying to do as saying OK, here's your work. Now do your job, drone.In other words, the solution isn't to give Chris more instructions. In fact, you should give him no instructions. Instead, you should help him come up with a course of action. Once Chris sees his role in the process, he might very well turn into a different person altogether. |
_unix.306433 | I'm trying to make my HP Photosmart c4100 series scanner work. From my understanding, scanbd is able to detect the button presses of the multi-function device and trigger a script (which is exactly what I need). I've installed scanbd and configured the config file (with the proper script), but when I press the button nothing happens :( I'm not sure how to debug this... I have the following questions: Is my HP compatible with scanbd? Do I have to configure scanbd for the printer/scanner? (FYI I can scan with the command: scanimage > scan.pnm) Where can I see what scanbd detects? Where is the log (sorry for this question, but I didn't find anything in /var/log)? | Detect scan button of HP multi-function | scanner;sane | null |
_softwareengineering.297618 | If I have two CPU cores, one is writing a particular cache line and the other core wishes to either (a) read or (b) write the same cache line, what are the costs (in cycles) for doing so? I am a little unsure whether the Read-For-Ownership request has to propagate through the L1 and L2 caches on its way to the CPU-which-already-owns-the-cache-line. When the cache line is retrieved and returned, I know we pass through the L3, L2 and L1 caches, because all three need to be populated with the updated cache line. | Cost of cache coherency/sharing data across multiple cores? | architecture;performance;multithreading;optimization;cpu | null |
_unix.44869 | I've just been upgraded to Sky+ and I've got an old Sky box (Thomson 286_544 aka Thomson DSI4212C). How would I go about installing NetBSD (compatible with nearly everything) on it? Has anyone ever done this? | Installing Linux/NetBSD on an old Sky digibox? | system installation;hardware compatibility;netbsd | null |
_datascience.5186 | I need to draw a decision tree about this subject :The research and development manager in an old oil company, which is considering making some changes, lists the following courses of action for the company:(i) Adopt a process developed by another oil company. This would cost 7 million in royalties and yield a net 20 million profit (before paying the royalty).(ii) Carry out one or two (not simultaneously) alternative research projects :(R1) the more expensive one has a 0.8 chance of success; net profit 16 million and a further 6 million in royalties. If it fails there will be a net loss of 10 million. (R2) the alternative research programme is less expensive but only has a 0.7 chance of success with a net profit of 15 million and a further 5 million in royalties. If it fails a net loss of 6 million will be incurred.(iii) Make no changes. After meeting current operating costs the company expects to make a net 15 million profit from its existing process. Failure of one research program would still leave open all remaining courses of action (including the other research programme).I need also to indicate the different payoffs. This is what I've done so far :I would like to be sure that I'm going in the right direction since I'm a beginner with decision trees. And then I need to decide the best course of action using Bayes, Maximax and Maximin rules. | Decision Tree Bayes rules / Maximax / Maximin | data mining;statistics;visualization;decision trees | null |
_hardwarecs.7498 | I live in a place where the electricity quality is poor. If I plug my desktop computer directly into the wall, it ends up damaged after some days due to unstable electricity. I have used an UPS for some years, which solved the problem, but its batteries last only for a few years, what makes it an expensive solution on the long term.I heard about Automatic Voltage Regulators and I would like to know if they would solve my problem. I'm not worried about loss of data due to occasional power failures, since that is not really a problem here, I would just need to protect my computer from damage.What options do I have to protect my computer from bad quality electricity? Would Automatic Voltage Regulators be a good alternative to a UPS? | Alternatives to UPS to protect a computer from unstable electricity | power supply;ups;power converter | null |
_unix.184054 | I've been cloning complete HDD images to restore OS crashes using DD and GZIP for a while now, using dd if=/dev/sda | gzip > img.gz and gzip img.gz | dd of=/dev/sda. This always worked fine, but the process is a little slow. It takes more than 2 hours to create or restore an image. I started experimenting with faster (de)compression: LZ4. Again, using the same commands, dd if=/dev/sda | lz4 > img.lz4 and lz4 img.lz4 | dd of=/dev/sda. Creating and restoring an image now takes less than 50% of the time. Point is, this restored image delivers an unbootable PC. What am I doing wrong? Is LZ4 not suitable for this purpose? | HD clone using LZ4 and DD fails | dd;system recovery | Is the restored image the same size as the original one? You can test the restored size using: lz4 -v img.lz4 > /dev/null If not, maybe the following line would be a bit safer: lz4 -d img.lz4 | dd of=/dev/sda |
_unix.223022 | Please consider below file: foo,boo,900foo,boo,900foo,boo,850I need to compare the a field ($3) with the next record, if the difference is equal or more than 50, then print the record. i.e from the sample above, $3 from second record - $3 from the third record = 50, then the output would be: foo,boo,850Please advise how this could be implemented. | Compare records value with each others. | text processing;awk | You can try this awkawk -F, 'NR != 1 { if ((x - $3) >= 50) print $0; } { x = $3 }' fileand this one if you don't want to print row if filed $1 changed:awk -F, 'NR != 1 { if ($1 == fc && (x - $3) >= 50) print $0; } { x = $3; fc = $1; }' file |
_softwareengineering.204875 | Let's say I want to model an application which allows users to model class diagrams. The high level use case can be modelled as UC1: Model Class Diagram, which refines itself into UC11: Model Class, UC12: Model Connection, UC13: Model Composition, etc. Since UC11, 12, 13 are part of UC1, I used the include association. Unfortunately, the UML specification says that included use cases are essential parts, and if you were to leave one of them out the high level behavior could not be achieved any more. But in this example a valid class diagram can be created without modelling a connection or a composition, so these use cases are optional. To boil it down to its essence: how can optional use cases be modelled in UML while providing a mechanism for reuse (like the include association)? | How to model optional use cases in UML | uml;use case;modelling | You could use Extend in this case. Example include and extend: UC login includes UC sign up: The login page can be accessed straight away, but if you haven't signed up the alt path would lead you to the sign up page. You must complete this UC to get through. You can access the sign up page directly as well. So for reuse you could make this two use cases, instead of an alt path, and include the sign up UC. UC edit profile extends UC login: The UC Login always has a pop-up when you log in to ask if you want to change your profile. You don't have to do this to access the site. You can access the profile edit page from several places, with its own UC of course. You would draw this relationship as an extend because it's optional to get through. |
_unix.352282 | I need to install java on one of my VMs running SUSE Linux using ansible.Below is the playbook code I am using:- name: Download Java become_user: {{user}} command: wget -q -O {{java_archive}} --no-check-certificate --no-cookies --header 'Cookie: oraclelicense=accept-securebackup-cookie' {{download_url}} creates={{java_archive}}- name: Fix ownership become_user: {{user}} file: state=directory path={{java_name}} owner={{user}} group={{user}} recurse=yes- zypper: name={{download_folder}}/jdk-8u5-linux-x64.rpm become_user={{user}} state=present- name: Clean up become_user: {{user}} file: state=absent path={{java_archive}}The problem I'm facing is that the installer needs some interactions while installing. How do I automate that? Or there is some other way to achieve this?As requested in comments, following message appears when I try to install without ansible. | How do I install jdk on SUSE Linux with ansible? | suse;zypper;ansible;jdk | null |
_webapps.41768 | I cannot seem to add a member on my Trello in the iPhone app. When I am in a board, there is a 'members' button on the bottom bar, but I can only see who is a member, not add anyone. What am I missing? | Trello on iphone, how to add a member | trello | null |
_unix.31292 | I'm finding myself helping out some classmates in my computer science class, because I have prior development experience, and I'm having a hard time explaining certain things like the shell. What's a good metaphor for the shell in the context of the Terminal on Mac, contrasted with a remote shell via SSH? | Metaphor for the concept of shell? | shell;architecture | Put simply, a terminal is an I/O environment for programs to operate in, and a shell is a command processor that allows for the input of commands to cause actions (usually both interactively and non-interactively (scripted)). The shell is run within the terminal as a program.There is little difference between a local and remote shell, other than that they are local and remote (and a remote shell generally is connected to a pty, although local shells can be too). |
_unix.191694 | I would like to create a file by using the echo command and the redirection operator; the file should be made of a few lines. I tried to include a newline as \n inside the string: echo "first line\nsecond line\nthirdline\n" > foo but this way no file with three lines is created, only a file with a single line holding the verbatim content of the string. How can I create a file with several lines using only this command? | How to put a newline special character into a file using the echo command and redirection operator? | shell;command line;echo;newlines | You asked for a way using some syntax with the echo command: echo $'first line\nsecond line\nthirdline' > foo (But consider also the other answer you got.) The $'...' construct expands embedded ANSI escape sequences. |
_webmaster.209 | I have a pretty big legacy site with literally thousands of PDFs that are sometimes accounted for in a database, but often are just links on the page, and are stored in most every directory on the site. I have written a PHP crawler to follow all the links on my site, and then I am comparing that against a dump of the directory structure, but is there something easier? | Good tool to crawl my site and help me find dead link and unlinked files | site maintenance;web crawlers;dead links | I've used Xenu's Link Sleuth. It works pretty well, just be sure not to DOS yourself! |
_computergraphics.4101 | I need to write a photo-realistic renderer. I have been looking at ScratchAPixel site, asking a couple of questions here on CG, and going through the Advanced Global Illumination 2nd ed book. I've read about radiometry, probability, Monte Carlo, and a bit on Russian roulette. I'm aware of the rendering equation in its hemispherical and area formulations. I've written a SAH based kd-tree, so am ok for efficient ray casting.I'm poised to start writing some code alongside reading chapter 5 of my book which is about path tracing algorithms. However, the last half of chapter 4 is taken up with talking about the following concepts:The Importance FunctionThe Measurement EquationAdjoint equations and linear transport operatorsGRDF (Global Reflectance Distribution Function)These concepts I've not seen appearing elsewhere in my research (if you can call reading a few web pages about GI rendering proper research?). My question is, do I really need to know this stuff to progress to writing my path tracer? I suspect the answer might depend on which type of path tracer I'm going to developer. To start, based upon what little I know, I think it'll be unidirectional. | Can I ignore importance, adjoint equations, GRDFs for my path tracer? | pathtracing | No, you don't need to know this stuff to implement basic path tracer.Basic unidirectional path tracer is quite simple to implement. Just trace bunch of paths of length X for each pixel with uniform distribution over the normal oriented hemisphere at the path's intersection points. Then weight the remaining path with the BRDF at each intersection point and multiply with luminance of light once the path hits a light.You'll get quite a noisy (but unbiased!) image even for large number of paths and then you can start to look into methods to reduce noise, e.g. importance sampling & bidirectional path tracing. Just validate the more optimized path tracers towards earlier validated path tracers to avoid introducing accidental bias. |
_computergraphics.4334 | I have NURBS surface data. I have a list of control points, knot vectors in U and V params and the degree. The U knot vector lies in range -3.14 to 3.14 and the knot V vector lies in range -100 to 100. How can I normalize this data so that both knot U and V lies in range 0 to 1?Thanks for your help! | Normalize NURBS knot vector | nurbs;cad | The relative size of the spacing of knots is irrelevant for the NURBS curve. The only thing that matters is that they keep the relation. Note this may not be wise as parametrization may have other uses behind the scenes.Image 1: 3 differently parametrized knots result in same curve if knot values are relatively the same.So you can scale and offset knot points as you wish. However you can not make the relative distances between entries different or your curve will change.Image 2: On the other hand if you change the relative spacing your in trouble. So beware of floating point errors if you need to be really accurate. |
_unix.7691 | Recently in a bout of frustration with getting phpmyadmin setup, I decided to start from scratch.Unfortunately, during the uninstall phase, I was prompted with the root password for mysql which I didn't have on hand at the time. Suffice to say, it informed me that there would be residue components since it couldn't properly clean its database connectors.When I arrived home, I attempted to remove the package through aptitude purge which turns out to no more potent than aptitude remove in that it saw phpmyadmin, attempted to remove it, and failed since the directories associated with the package were already removed from my earlier attempt.I tried to reinstall phpmyadmin, but aptitude simply stated that there was no update available, and did nothing, if there were an update, I'd probably run into the same problems regardless.In this regard, I proceeded to clean up mysql by dropping the database it used, and cleaning it from the user tables. I however have no idea what else is left from the package, or even how to clean the hooks in aptitude.The result of dpkg --purgeickronia:/home/ken# dpkg --purge phpmyadmin(Reading database ... 27158 files and directories currently installed.)Removing phpmyadmin .../var/lib/dpkg/info/phpmyadmin.prerm: line 5: /usr/share/dbconfig-common/dpkg/prerm.mysql: No such file or directorydpkg: error processing phpmyadmin (--purge): subprocess pre-removal script returned error exit status 1/var/lib/dpkg/info/phpmyadmin.postinst: line 35: /usr/share/dbconfig-common/dpkg/postinst.mysql: No such file or directorydpkg: error while cleaning up: subprocess post-installation script returned error exit status 1Errors were encountered while processing:phpmyadminOn following Gile's advice, I tried to re-install the dependency dbconfig-commonickronia:/home/ken# aptitude reinstall dbconfig-commonReading package lists... DoneBuilding dependency treeReading state information... DoneReading extended state informationInitializing package states... DoneReading task descriptions... Donedbconfig-common is not currently installed, so it will not be reinstalled.dbconfig-common is not currently installed, so it will not be reinstalled.The following packages are BROKEN: phpmyadmin0 packages upgraded, 0 newly installed, 0 to remove and 3 not upgraded.Need to get 0B of archives. After unpacking 0B will be used.The following packages have unmet dependencies: phpmyadmin: Depends: php5-mcrypt but it is not installable Depends: dbconfig-common but it is not installable Depends: libjs-mootools (>= 1.2.4.0~debian1-1) which is a virtual package.The following actions will resolve these dependencies:Remove the following packages:phpmyadminScore is 121Accept this solution? [Y/n/q/?] n*** No more solutions available ***The following actions will resolve these dependencies:Remove the following packages:phpmyadminScore is 121Accept this solution? [Y/n/q/?] n*** No more solutions available ***The following actions will resolve these dependencies:Remove the following packages:phpmyadminScore is 121Accept this solution? [Y/n/q/?] yThe following packages will be REMOVED: phpmyadmin{a}0 packages upgraded, 0 newly installed, 1 to remove and 3 not upgraded.Need to get 0B of archives. After unpacking 17.7MB will be freed.Do you want to continue? [Y/n/?] yWriting extended state information... Done(Reading database ... 
27158 files and directories currently installed.)Removing phpmyadmin .../var/lib/dpkg/info/phpmyadmin.prerm: line 5: /usr/share/dbconfig-common/dpkg/prerm.mysql: No such file or directorydpkg: error processing phpmyadmin (--remove): subprocess pre-removal script returned error exit status 1/var/lib/dpkg/info/phpmyadmin.postinst: line 35: /usr/share/dbconfig-common/dpkg/postinst.mysql: No such file or directorydpkg: error while cleaning up: subprocess post-installation script returned error exit status 1Errors were encountered while processing: phpmyadminE: Sub-process /usr/bin/dpkg returned an error code (1)A package failed to install. Trying to recover:Reading package lists... DoneBuilding dependency treeReading state information... DoneReading extended state informationInitializing package states... DoneWriting extended state information... DoneReading task descriptions... Doneickronia:/home/ken#It appears that phpmyadmin cleanly cleared out dbconfig-commonickronia:/usr/share/dbconfig-common# ls -alFtotal 12drwxr-xr-x 3 root root 4096 2011-02-09 08:09 ./drwxr-xr-x 98 root root 4096 2011-01-20 21:42 ../drwxr-xr-x 3 root root 4096 2011-01-05 11:08 data/ickronia:/usr/share/dbconfig-common#Attempted to dpkg from archives as suggested by Gilesickronia:/usr/share/dbconfig-common# dpkg -i /var/cache/apt/archives/{dbconfig-common,php5-mcrypt,libjs-mootools}*.debSelecting previously deselected package dbconfig-common.(Reading database ... 27161 files and directories currently installed.)Unpacking dbconfig-common (from .../dbconfig-common_1.8.46_all.deb) ...Selecting previously deselected package php5-mcrypt.Unpacking php5-mcrypt (from .../php5-mcrypt_5.3.3-6_i386.deb) ...Selecting previously deselected package libjs-mootools.Unpacking libjs-mootools (from .../libjs-mootools_1.2.5~debian1-2_all.deb) ...Setting up dbconfig-common (1.8.46) ...dpkg: dependency problems prevent configuration of php5-mcrypt: php5-mcrypt depends on libltdl7 (>= 2.2.6b); however: Package libltdl7 is not installed. php5-mcrypt depends on libmcrypt4; however: Package libmcrypt4 is not installed.dpkg: error processing php5-mcrypt (--install): dependency problems - leaving unconfiguredSetting up libjs-mootools (1.2.5~debian1-2) ...Processing triggers for man-db ...Processing triggers for libapache2-mod-php5 ...Reloading web server config: apache2.Errors were encountered while processing: php5-mcryptickronia:/usr/share/dbconfig-common#I have a webserver running on php, but I'm willing to risk downtime to get this resolved. | Removing broken packages | debian;package management;aptitude | phpmyadmin depends on dbconfig-common, which contains /usr/share/dbconfig-common/dpkg/prerm.mysql. It looks like you've managed to uninstall dbconfig-common without uninstalling phpmyadmin, which shouldn't have happened (did you try to --force something?).My advice is to first try aptitude reinstall dbconfig-common. If it works, you should have a system in a consistent state from which you can try aptitude purge phpmyadmin again.Another thing you can do is comment out the offending line in /var/lib/dpkg/info/phpmyadmin.prerm. This is likely to make you able to uninstall phpmyadmin. 
I suspect you did what that line is supposed to do when you edited those mysql tables manually, but I don't know phpmyadmin or database admin in general, so I'm only guessing.The difference between remove and purge is that remove just removes the program and its data files (the stuff you could re-download), while purge first does what remove does then also removes configuration files (the stuff you might have edited locally). If remove fails, so will purge. |
_cstheory.4904 | This question was motivated by a question asked on stackoverflow.Suppose you are given a rooted tree $T$ (i.e. there is a root and nodes have children etc) on $n$ nodes (labelled $1, 2, \dots, n$). Each vertex $i$ has a non-negative integer weight associated: $w_i$.Additionally, you are given an integer $k$, such that $1 \le k \le n$.The weight $W(S)$ of a set of nodes $S \subseteq \{1,2,\dots, n\}$ is the sum of weights of the nodes: $\sum_{s \in S} w_s$.Given input $T$, $w_i$ and $k$, The task is to find a minimum weight sub-forest* $S$, of $T$, such that $S$ has exactly $k$ nodes (i.e. $|S| => k$). In other words, for any subforest $S'$ of $T$, such that $|S'| = k$, we must have $W(S) \leq W(S')$.If the number of children of each node were bounded (for instance binary trees), then there is a polynomial time algorithm using dynamic programming.I have a feeling that this is NP-Hard for general trees, but I haven't been able to find any references/proof. I even looked here, but could not find something which might help. I have feeling that this will remain NP-Hard even if you restrict $w_i \in \{0,1\}$ (and this might be easier to prove).This seems like it should be a well studied problem. Does anyone know if this is an NP-Hard problem/there is a known P time algorithm? *A sub-forest of $T$ is a subset $S$ of nodes of the tree $T$, such that if $x \in S$, then all the children of $x$ are in $S$ too. (i.e. it is a disjoint union of rooted sub-trees of $T$).PS: Please pardon me if it turns out that I missed something obvious and the question is really off-topic. | Minimum weight subforest of given cardinality | cc.complexity theory;reference request;np hardness;tree;application of theory | Similar to the solution for a binary tree, you can solve it in polynomial time on a tree without degree restriction:First, generalize the problem such that every node also has a count $c_i\in\{0,1\}$, and the problem is to find a subforest $S$ of count $k=\sum_{i\in S} c_i$.Generalize the dynamic programming approach to this version (it still works with a table, given a fixed count $C$, what is the minimal weight subforest in the subtree having count precisely $C$) Keep the original tree with nodes of count 1. Every node $v$ with degree greater than 2 is split into a binary tree with deg$(v)$ leaves (the shape does not matter). The new nodes have count and weight 0. Solve the problem on the new tree. When reading out the solution ignore any new node; this will still be a subforest of the same weight. Because any original subforest translates into a new subforest of the same weight, the found subforest is optimal. |
_webmaster.63204 | My site is working if i type it as domain.com. But If I type it as www.domain.com I get an error 404 page.The domain is registered with Google Apps for Business and the hosting is done with another company. I have some A records that are pointing the site to this server. However, as it appears the A records are only working halfway.What do I have to do to make it working for both http:// and http://www ? | Site is working with http:// not with http://www | dns | Normally you would have an A record for the domain name example.com and a CNAME for www pointing to example.com or an A record for www with the same IP address as example.com.As well, your website has to be set up for this. For example, in the Apache site configuration file often found in /etc/apache2/sites-available/ or /etc/local/apache2/sites-available/, you would need some thing like:ServerName example.comServerAlias www.example.comIf you have another web server, you will need to research this second part for specific configuration details. |
_softwareengineering.112631 | Many modern programming languages (Scala, Erlang, Clojure, etc.) are targeting the JVM. What are the main reasons behind that decision?JVM's maturity?Portability?Because JVM simply exists, and the language designers were not willing to write a new VM/interpreter? | Modern languages and the JVM | programming languages;jvm;compatibility;modern | Erlang is a standalone language but there is work being done on Erjang which is targeting the JVM. You're right that Scala and Clojure are targeting the JVM, and there are versions of Ruby and Python targeting the JVM as well (JRuby, Jython).Yes, the JVM is a very mature platform and modern JVMs are able to optimize code and compile it on-demand to the host's native code for increased performance.Yes, portability. Compiled Scala, Clojure, etc can be packaged using standard tools and distributed to any system that has a JVM (of a suitable version level).Yes, the JVM means new languages only have to write a compiler to bytecode (and any supporting libraries they want to provide). They don't have to write a new runtime (which is what Erlang, Ruby, Python, JavaScript etc have all done in the past).But I think you've missed one of the biggest benefits of the JVM: the huge ecosystem of libraries - both the Java standard library and the vast array of third party libraries are accessible to any language that targets the JVM. |
_datascience.14959 | My understanding is that GPUs are more efficient for running neural nets, but someone recently suggested GPUs are only needed for the training phase and that once trained, it's actually more efficient to run them on CPUs.Is this true? | After the training phase, is it better to run neural networks on a GPU or CPU? | machine learning | null |
_unix.248715 | I have a bash/expect script which can log in to a Cisco switch and shut/no-shut any port I have defined in it. What I want is to be able to pass the port number as an argument, so that I can define the port number on the command line like this: ./cisco.sh 10 (10 is the port number). But the script is not letting me add any variable, and gives the error can't read 1: no such variable while executing set PORT $1. I am using the following script: #!/usr/bin/expect -f set timeout 20 set IPaddress 192.168.0.1 set Username zaib set Password zaib set PORT $1 spawn ssh -o StrictHostKeyChecking no $Username@$IPaddress expect *assword: send $Password\r expect > send enable\r expect *assword: send $Password\r send conf term\r send interface gigabitEthernet 1/0/$PORT\r expect # send shut\r expect # send exit\r expect # send exit\r send wr\r send exit\rexit | set variable in bash script with EXPECT | bash;expect | null
_unix.346054 | After installing Cinnamon on FC25, I can no longer easily use GVim. All my title bars blend too closely for me. For some reason, I didn't have this issue on FC24 and Cinnamon, so I'm leaning towards this being a Cinnamon theme issue. I'm trying to pick some theme that's as close to Windows XP as possible. When I drag the window to resize it, the foreground goes to something dark, and the background goes to white. When the dragging handle is released, it goes back to something that I can't distinguish.The desktop theme is Mint-XP, and I've tried Arc, Arc Solid, Dark, etc. | Foreground and background color too similar in Cinnamon | linux;fedora;vim;cinnamon;gvim | null |
_codereview.128645 | This piece of code takes odata web services response then translates it to nested tables. Is there any way this code could be improved and shortened or divided into smaller functions, as this big function is really confusing?/// <summary>/// Generates the display table of the specified data./// </summary>/// <param name =data>The data to generate the display from.</param>/// <param name =title>The title of the table.</param>OData.explorer.DataExplorer.prototype.createResultsTable = function (data, title) { var me = this; var $table = $('<table class=defaultResultsFormatting/>'); if (data && data.length > 0) { var $thead = $('<thead />'); $table.append($thead); var columnCount = 0; // Add the column names. var $headRow = $('<tr/>'); $thead.append($headRow); var result = data[0]; for (var property in result) { var type = typeof result[property]; // DataJS returns the dates as objects and not as strings. if (type === 'string' || type === 'number' || type === 'boolean' || result[property] instanceof Date || !result[property]) { $headRow.append($('<th />', { text: property })); ++columnCount; } } var hasLinks = false; var $tbody = $('<tbody />'); $table.append($tbody); $.each(data, function (index, e) { var $bodyRow = $('<tr/>'); $tbody.append($bodyRow); var expandedChildResults = null; var links = []; $.each(e, function (index, property) { var type = typeof property; if (type === 'string' || type === 'number' || type === 'boolean') { $bodyRow.append($('<td />', { text: property })); } else if (property instanceof Date) { // DataJS returns the dates as objects and not as strings. $bodyRow.append($('<td />', { text: property.toDateString() })); } else if (!property) { $bodyRow.append('<td />'); } else if (typeof property === 'object' && property.results && index !== '__metadata') { expandedChildResults = property.results; } else if (property.__deferred) { links.push({ key: property.__deferred.uri, value: index }); hasLinks = true; } }); // Display the links only if there are some. if (links.length !== 0) { columnCount += 2; var $cell = $('<td />'); $bodyRow.prepend($cell); me.addDropdown('links', links, $cell, '', true, false); // Prepend a blank cell for the expand icon. var $expandCell = $('<td/>'); $bodyRow.prepend($expandCell); if (expandedChildResults) { // Add the expand/collapse button. $expandCell.append('<span class=expandChild />'); // Create a new row for the child results. $bodyRow = $('<tr class=expandedChild />'); $table.append($bodyRow); var $childCell = $('<td />', { colspan: columnCount }); $bodyRow.append($childCell); $childCell.append(me.createResultsTable(expandedChildResults)); } } }); // Display the links column names only if they exist. if (hasLinks) { $headRow.prepend('<th></th><th>Links</th>'); } // Add a title to the table. if (title) { var $titleRow = $('<tr />'); $thead.prepend($titleRow); $titleRow.append($('<th />', { text: title, colspan: columnCount })); $table.attr('data-tabletitle', title); } } else { this.noResults(); } return $table;};Coming from Odata-query-builder.js library, output is dynamic in nature but this is one of the examples: | Recursive nested table creation code | javascript | null |
_unix.55952 | My directory structure is given below. I need to move all the folders from Test3 to Test2 and concatenate the files with same names[jg@hpc Test2]$ tree.|-- Sample_1008| |-- 1008_ATCACG_L002_R1_001.fastq| |-- 1008_ATCACG_L002_R2_001.fastq| |-- 1008_ATCACG_L006_R1_001.fastq| `-- 1008_ATCACG_L006_R2_001.fastq`-- Sample_1009 |-- 1009_CGATGT_L002_R1_001.fastq |-- 1009_CGATGT_L002_R2_001.fastq |-- 1009_CGATGT_L006_R1_001.fastq `-- 1009_CGATGT_L006_R2_001.fastq[jg@hpc Test3]$ tree.|-- Sample_1008| |-- 1008_ATCACG_L002_R1_001.fastq| |-- 1008_ATCACG_L002_R2_001.fastq| |-- 1008_ATCACG_L006_R1_001.fastq| `-- 1008_ATCACG_L006_R2_001.fastq`-- Sample_1009 |-- 1009_CGATGT_L002_R1_001.fastq |-- 1009_CGATGT_L002_R2_001.fastq |-- 1009_CGATGT_L006_R1_001.fastq `-- 1009_CGATGT_L006_R2_001.fastqI triedmv Test3/* /auto/dr-lc_sa1/Data/Test2nothing worked but when I tried cp -r Test3/* Test2/It overwrites.I want the files to be concatenated. At the end I need to have one Test2 directory and under every sample and their fastq files in the Test3 directory will be concatenated to corresponding fastq files in Test2 directory. | How to move files with same name and concatenate | shell script;files;rename;cat | There's no built-in way to concatenate a file and remove it, you'll have to break it into two steps.In zsh, or in bash 4 after running shopt -s globstar, or in ksh after running set -o globstar:cd Test3for x in **/*.fastq; do cat $x >>/auto/dr-lc_sa1/Data/Test2/$x && rm $xdoneWithout ** to recurse into subdirectories, use find.cd Test3find . -name '*.fastq' -exec sh -c 'cat $0 >>/auto/dr-lc_sa1/Data/Test2/$0 && rm $0' {} \;If Test2 and Test3 are on the same filesystem and there are many files under Test3 that don't have a corresponding file in the destination, you can save some execution time by moving the file instead of concatenating it onto an empty file:for x in **/*.fastq; do if [ -s ../Test2/$x ]; then cat $x >>/auto/dr-lc_sa1/Data/Test2/$x && rm $x else mv $x /auto/dr-lc_sa1/Data/Test2/$x fidone |
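For readers who prefer a scripted equivalent of the accepted answer's append-then-remove loop, here is a rough Python sketch (my own illustration, not part of the answer). It walks the source tree, appends each file onto the file with the same relative path under the destination (creating directories and files as needed), then removes the source; it does not filter on *.fastq, so add an extension check if you need one.

import os
import shutil

def merge_and_concatenate(src_root, dst_root):
    """Append every file under src_root onto the same-named file under
    dst_root, then delete the source file -- mirroring the
    `cat src >> dest && rm src` idea from the shell answer."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            with open(src, "rb") as fin, open(dst, "ab") as fout:
                shutil.copyfileobj(fin, fout)   # append, creating dst if missing
            os.remove(src)

# e.g. merge_and_concatenate("Test3", "/auto/dr-lc_sa1/Data/Test2")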
_unix.349778 | I'm running a a WSGI application. When accessing a URL of the application, for instance from my browser, I do get the page I'm asking for. But I also get an error in the log files.URL:https://api.example.com/api/v0/api-docs/api-docs.jsonError message:[Tue Mar 07 17:43:52.331186 2017] [authz_core:error] [pid 23997] [client my.client.ip:59666] AH01630: client denied by server configuration: /var/www/html/api-docsHere's the content of my app's apache config file.<VirtualHost *:80> ServerName api.example.com Redirect permanent / https://api.example.com/</VirtualHost><VirtualHost *:443> ServerName api.example.com SSLEngine On SSLCipherSuite HIGH:MEDIUM SSLCertificateFile /etc/ssl/localcerts/apache.pem SSLCertificateKeyFile /etc/ssl/localcerts/apache.key # We may run multiple API versions in parallel # http://stackoverflow.com/questions/18967441/add-a-prefix-to-all-flask-routes # API v0 WSGIDaemonProcess api-v0 threads=5 WSGIScriptAlias /api/v0 /path/to/application/application.wsgi WSGIPassAuthorization On <Location /api/v0> WSGIProcessGroup api-v0 </Location> <Directory /path/to/application/> Options FollowSymLinks WSGIProcessGroup api-v0 WSGIApplicationGroup %{GLOBAL} # WSGIScriptReloading On Require all granted </Directory> # API v1 # ... # I don't have a v1 yet on the server. #This is not something I stripped for the example.</VirtualHost>I'm using a self-signed certificate, accepted in the client browser. I don't think it should make any difference...Apache 2.4.The error message reads as if apache was fetching the files in /var/www/html/, in which case the error would be understandable. Except WSGIScriptAlias is understood correctly since I get the page content anyway, so apache must be looking in /path/to/application/.EditI managed to remove the warning by adding a DocumentRoot directive to the VirtualHost.<VirtualHost *:443> DocumentRoot /path/to/application ServerName api.example.comI've read the docs about DocumentRoot and it is still unclear to mewhy it worked without itwhy I got that error in the logs while the client actually got the content anywaywhy other virtual host apparently don't need it (e.g. I also host a Redmine instance that does not have this directive)why it works on another machine without it (although on port 80, but should that matter?)The default config for *.443 (ssl) has the defaultDocumentRoot /var/www/html | WSGI application, AH01630: client denied by server configuration but client receives page anyway | apache httpd;wsgi | null |
_cs.68533 | I'm talking about Type-0 (Chomsky hierarchy) unrestricted grammar, where production rules of grammar are of the form $\alpha\rightarrow\beta$, where $\alpha,\beta\in N\cup\Sigma$.I can not find any example of real unrestricted grammar which produces a non-context-sensitive language (of words). While there are examples of non-context-sensitive languages like here or here, there are not examples with proper grammars for them. Could somebody provide such example?(Ideally if you would provide links to corresponding papers where such example(s) are described).Note: I've tried to build unrestricted grammar for known universal Turing machines, as it was suggested in commentaries above. I've used JFLAP tool for this purposes. But obtained grammars appear to be extremely huge, incomprehensible and complex. If you would use same approach and give clear explanation of result, this would partially suit me. Thanks. | Example of unrestricted grammar which produces non-context-sensitive language | formal languages;turing machines;formal grammars | For posterity. Start with undecidable Post Correspondence Problem, or PCP:Given two lists of words $(u_1,\dots,u_n)$ and $(v_1,\dots,v_n)$ does there exist a sequence of indices such that $u_{i_1}\dots u_{i_k} = v_{i_1}\dots v_{i_k}$?The language will consist of PCP over $\{a,b\}$ that have a solution, coded as string $(u_1;v_1) \dots (u_n;v_n) $ with $u_i,v_i \in \{a,b\}^*$.The grammar will generate a PCP instance, and non-deterministically an attempt for a solution. Then equality of the two strings is tested by deleting matching letters.The equality check is more easily done when one of the strings is stored in reverse, so we will generate $u^R_{i_k} \dots u^R_{i_1} X v_{i_1}\dots v_{i_k}$.Generate an instance of PCP.$ I \to (W;W) I \mid (W;W) $ (add a pair of words)$ W \to a W \mid b W \mid \varepsilon $ (generate a word)Copy one of the pairs of PCP and move it to the pair of words around $X$ that should be equal.$ D \tau \to \tau D$ for $\tau = a,b,(,),;$$ D ( \to ( L $ (pair selected)$ L ; \to ; R $$ R ) \to ) $$ L \sigma \to \sigma L_\sigma L $ for $\sigma = a,b$ (make a copy)$ R \sigma \to \sigma R_\sigma R $ for $\sigma = a,b$$ L_\sigma \tau = \tau L_\sigma $ for $\sigma = a,b$ and $\tau = a,b,(,),;$ (move right)$ R_\sigma \tau = \tau R_\sigma $ for $\sigma = a,b$ and $\tau = a,b,(,),;$ $ L_\sigma X \to \sigma X$ (drop copy to the left of $X$, it willreverse)$ R_\sigma X \to X \sigma$ (drop copy to the right of $X$)Check equality$ X \to Z $ $ \sigma Z \sigma \to Z$ for $\sigma = a,b$Start. The $\#$ marks the end of the string, or rather the end of the solution that is generated. The last rule will delete it and at the same time test whether the solution has been completely removed by the earlier rules checking equality.$ S \to C I X \#$ $ C \to C D \mid D $ for generating the solution, copy several word pairsDone$ ) Z \# \to )$PS. Most constructions for a non-context-sensitive language use diagonalization on context-sensitive grammars to get a language that is still recursive. This one is not recursive, but recursively enumerable.Indeed many rules are context-sensitive (or rather monotonic/non-contracting). But especially note the rules like $aZa\to Z$: they shorten the string. These are definitely type-0 and not monotonic, and they are essential. After the computation they delete the proposed solution of PCP. Like a scratch tape used by a Turing machine.PS2. 
Although the intuitive meaning of each of the productions is explained above, it is not very simple to formally prove the grammar is correct. This is mostly due to the parallellism. There are many nonterminals moving around in the grammar at the same time independently (for instance, when a copy of a PCP pair $(u_i,v_i)$ is made all these letters move in a row to the right; also we can start looking for the next pair to copy even before the last one was finished). This makes it hard to formulate invariants. It takes some time to check that all nonterminals keep in a proper order, for instance. Fortunately none of the moving nonterminals can overtake one another.For this reason analysing Turing Machines sometimes is less complicated. Only the reading/writing head is moving around.Example (added by Andremoniy)Consider simplest case: $(a;a)$. For this string derivation sequence will be:$S\Rightarrow CIX\Rightarrow D(W;W)X\Rightarrow C(a;a)X\Rightarrow D(a;a)X\Rightarrow (La;a)X\Rightarrow (aL_aL;a)X\Rightarrow (aL_a;Ra)X\Rightarrow (a;L_aRa)X\Rightarrow (a;L_a aR_aR)X\Rightarrow (a;aL_aR_aR)X\Rightarrow (a;aL_aR_a)X\Rightarrow (a;aL_a)R_aX\Rightarrow (a;a)L_aR_aX\Rightarrow (a;a)L_aXa\Rightarrow (a;a)aXa\Rightarrow (a;a)aZa\Rightarrow (a;a)Z\Rightarrow (a;a)$ |
_webmaster.18942 | I get a folder with the following structure from my designer*: file.css and images/. Is there any tool which can automatically create a sprite from the images folder and replace all image references in file.css with the appropriate sprite sections? *It is actually not a designer (the person who creates the design in, for example, Photoshop) but the person who produces the HTML+CSS from it. What is this profession correctly called in English? | Create sprites automatically | css;sprite | null
_softwareengineering.232301 | I have been puzzled lately by an intruiging idea.I wonder if there is a (known) method to extract the executed source code from a large complex algorithm. I will try to elaborate this question:Scenario: There is this complex algorithm where a large amount of people have worked on for many years. The algorithm creates measurement descriptions for a complex measurement device.The input for the algorithm is a large set of input parameters, lets call this the recipe. Based on this recipe, the algorithm is executed, and the recipe determines which functions, loops and if-then-else constructions are followed within the algorithm. When the algorithm is finished, a set of calculated measurement parameters will form the output. And with these output measurement parameters the device can perform it's measurement.Now, there is a problem. Since the algorithm has become so complex and large over time, it is very very difficult to find your way in the algorithm when you want to add new functionality for the recipes. Basically a person wants to modify only the functions and code blocks that are affected by its recipe, but he/she has to dig in the whole algorithm and analyze the code to see which code is relevant for his or her recipe, and only after that process new functionality can be added in the right place. Even for simple additions, people tend to get lost in the huge amount of complex code.Solution: Extract the active code path? I have been brainstorming on this problem, and I think it would be great if there was a way to process the algorithm with the input parameters (the recipe), and to only extract the active functions and codeblocks into a new set of source files or code structure. I'm actually talking about extracting real source code here.When the active code is extracted and isolated, this will result in a subset of source code that is only a fraction of the original source code structure, and it will be much easier for the person to analyze the code, understand the code, and make his or her modifications. Eventually the changes could be merged back to the original source code of the algorithm, or maybe the modified extracted source code can also be executed on it's own, as if it is a 'lite' version of the original algorithm.Extra information: We are talking about an algorithm with C and C++ code, about 200 files, and maybe 100K lines of code. The code is compiled and build with a custom Visual Studio based build environment.So...: I really don't know if this idea is just naive and stupid, or if it is feasible with the right amount of software engineering. I can imagine that there have been more similar situations in the world of software engineering, but I just don't know.I have quite some experience with software engineering, but definitely not on the level of designing large and complex systems.I would appreciate any kind of answer, suggestion or comment.Thanks in advance! | How to extract the active code path from a complex algorithm | c++;algorithms;c | null |
_unix.312362 | I have a WeeChat version 1.5 installed on Debian 8.5 with irc.server.freenode.ipv6 option set to on:10:57:15 weechat | [server] (irc.conf)10:57:15 weechat | irc.server.freenode.ipv6 = on (default: (undefined))10:57:15 weechat | 10:57:15 weechat | 1 option (matching with irc.server.freenode.ipv6)This should force WeeChat to prefer IPv6 over IPv4. irc.freenode.net has IPv6 AAAA records present:$ dig @8.8.8.8 -t AAAA irc.freenode.net +noall +shortchat.freenode.net.2a00:1a28:1100:11::422a01:270:0:666f::12a01:7e00::f03c:91ff:fee2:413b2001:6b0:e:2a18::118$ ..and for example I'm able to ping irc.freenode.net over IPv6:$ ping6 -nc 4 irc.freenode.netPING irc.freenode.net(2001:5a0:3604:1:64:86:243:181) 56 data bytes64 bytes from 2001:5a0:3604:1:64:86:243:181: icmp_seq=1 ttl=51 time=141 ms64 bytes from 2001:5a0:3604:1:64:86:243:181: icmp_seq=2 ttl=51 time=141 ms64 bytes from 2001:5a0:3604:1:64:86:243:181: icmp_seq=3 ttl=51 time=142 ms64 bytes from 2001:5a0:3604:1:64:86:243:181: icmp_seq=4 ttl=51 time=142 ms--- irc.freenode.net ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3004msrtt min/avg/max/mdev = 141.567/141.903/142.081/0.431 ms$ However, when I try to connect to irc.freenode.net in WeeChat, then IPv6 is not even tried (checked with tcpdump). Even if I reject connections on IPv4 to TCP port 6667, then IPv6 is not tried.I assume that the problem is not with the WeeChat (I even tried with different versions). Any ideas, what might cause such behavior? | WeeChat does not use IPv6 | debian;ipv6;weechat | Could you please try with weechat 1.6-rc2 (current devel version)?I fixed a bug with host address during connection to servers.By the way, version 1.6 is scheduled in 2 days. |
_codereview.64798 | I am learning Haskell and I am doing 99 Haskell problems. There is a problem to check if a list if a palindrome. It's obvious solution isisPalindrome x = x == reverse xThere are several other solutions with some kind of reverse. All of them use two passes -- one for building reverse list and one more for ==. I wanted to write a single pass palindrome check.walk :: (Eq a) => [a] -> Int -> Either (Either Int [a]) Boolwalk (x:xs) n = case walk xs (n + 1) of Left (Left back) | (back + 1) == n -> Left (Right xs) | back == n -> let (y:ys) = xs in if y == x then (if null ys then Right True else Left (Right ys)) else Right False | otherwise -> Left (Left (back + 1)) Left (Right (y:ys)) -> if y == x then (if n > 0 then Left (Right ys) else Right True) else Right False Left (Right []) -> error Impossible Right ans -> Right answalk [] _ = Left (Left (-1))isPalindrome :: (Eq a) => [a] -> Bool-- isPalindrome x = x == reverse xisPalindrome [] = TrueisPalindrome [_] = TrueisPalindrome x = ans where (Right ans) = walk x 0This looks quite ugly. Is there a better way to check if a string is a palindrome using only one pass? Is it worth trying to write a single pass check? | Palindrome check in Haskell | haskell;palindrome | From the readability point of view, I'd suggest to keep some maximum line length, like 72 or 80 characters. Beyond that it's difficult to read. Also instead of having Either (Either X Y) Z, declare your own data type with 3 constructors. Not only it'll be simpler, but the constructor names will also be more descriptive and it'll be easier to understand what's going on. You should also document the function better, it's not clear what the Int argument is without thoroughly inspecting the code. A small improvement is also replacing nested ifs with a case statement.In general, I'd say that it's not really possible to make a genuine single-pass solution.Obviously we can't avoid checking the last element, if the list is a palindrome, and we need to have the first element to compare it to. So we'll always have to either get to the last element at the beginning, or keep the elements as we traverse the list, to have them for comparison when we get to the end, So we can't get to the situation when we just traverse the list from the beginning to the end, releasing the elements already traversed.In your case, you're also traversing the list twice, although it's somewhat hidden. The first pass is the recursive call to walk, which gets to the end of the list. And the second traversal is occurring in the lineLeft (Right (y:ys)) -> if y == x then (if n > 0 then Left (Right ys) else Right True) else Right Falsewhere we're traversing ys as walk returns to its parent call.(The above nested ifs can be rewritten as ... -> case () of _ | y /= x -> Right False | n > 0 -> Left (Right ys) | otherwise -> Right Trueusing case.)Furthermore, since walk calls itself recursively and examines the result of the recursive call, it can't be optimized to tail recursion, so you're building a chain of n calls on the stack.Another thing to notice is that the whole original list is kept in memory until the whole chain of walks finishes.My attempt to solve this problem would look like this:isPal' :: (Eq a) => [a] -> BoolisPal' xs = f [] xs xs where f ss xs [] = ss == xs f ss (_:xs) [_] = ss == xs f ss (x:xs) (_:_:es) = f (x:ss) xs esWe're traversing the list at two different speeds at the same time in a tail-recursive loop, to get to the middle. 
During that we compute the reverse of the traversed part in ss. When we hit the middle after n/2 steps, we just compare the accumulated reversed first half with the rest. The combined length of ss and xs is always n and if we find out in the middle a difference, like in aaabcaaa, we finish and release resources early.Since all suggested algorithms are O(n), it's hard to decide which one will be more efficient just by reasoning. We'd have to do some measurements, for example using the criterion package.Update: The problem could be actually solved in a genuine one pass using a rolling checksum, at the cost of having false positives in (rare) cases of hash collisions. Just compute two rolling checksums while traversing the list, one forwards and one backwards, and compare them at the end. import Control.Arrow ((&&&), second) import Data.Foldable (foldMap) import Data.Function (on) import Data.Hashable import Data.Monoid import Data.Word -- see https://en.wikipedia.org/wiki/Rolling_hash -- and https://en.wikipedia.org/wiki/Lehmer_random_number_generator#Parameters_in_common_use modulus :: Word64 modulus = 2^31 - 1 expG :: Word64 expG = 7^5 data RKHash = RKHash { rkExp :: Word64, rkVal :: Word64 } deriving (Eq, Show) inject :: (Hashable a) => a -> RKHash inject = RKHash expG . (`mod` modulus) . fromIntegral . hash instance Monoid RKHash where mempty = RKHash 1 0 mappend (RKHash e1 v1) (RKHash e2 v2) = RKHash ((e1 * e2) `mod` modulus) ((v1 * e2 + v2) `mod` modulus) isPalindrome :: (Hashable a) => [a] -> Bool isPalindrome = uncurry (on (==) rkVal) . second getDual . foldMap ((id &&& Dual) . inject) |
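For readers more comfortable outside Haskell, here is a rough Python transcription of the two-speed traversal above (purely illustrative and not part of the original answer — in Python one would simply write xs == xs[::-1], since lists are random access). The slow cursor pushes the first half onto a stack while the fast cursor locates the middle; the stack must then match the remaining half, with early exit on the first mismatch.

def is_palindrome(xs):
    """Two-cursor version of the Haskell isPal': one pass to the middle,
    accumulating the reversed first half, then an element-by-element compare
    against the second half."""
    stack = []
    slow, fast = 0, len(xs)      # 'fast' counts how much road the fast cursor has left
    while fast >= 2:             # the fast cursor advances two steps per round
        stack.append(xs[slow])
        slow += 1
        fast -= 2
    if fast == 1:                # odd length: skip the middle element
        slow += 1
    for x in xs[slow:]:
        if stack.pop() != x:
            return False
    return not stack

assert is_palindrome("abcba") and not is_palindrome("aaabcaaa")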
_webapps.1987 | Gmail really likes to make contacts for me and stick them in my All Contacts list. I'd much rather manage my contacts myself and only add people when I want to specifically do so. (Actually, I wouldn't mind if they managed the list automatically except that oftentimes I get contacts made from mailing lists and sometimes the names are wrong, which is inconvenient.)How can I make it so that my contacts list is neither obnoxiously long nor badly spelled and punctuated? | Stop Gmail from automatically creating contacts | gmail;google contacts | The previously-posted answers are no longer correct.On the General tab under Settings is Create contacts for auto-complete:When I send a message to a new person, add them to Other Contacts so that I can auto-complete to them next timeI'll add contacts myselfScreen shot:The second option will prevent Gmail from auto-creating contacts for you.This was one of many features announced in April, 2011. |
_codereview.68461 | I would like some feedback on my red black tree implementation. Anything is fine. I've debugged this and it seems to be working fine, however I may have missed something.Basically, this is a red black tree that stores character strings as keys and the passage that contains those strings as values. Since these keys are able to be repeated, they form a linked list as well. TNODE *tree_add(TNODE *root, const KEY k, const VALUE v) { LNODE *lnode = NULL; if (root == NULL) { TNODE *node = talloc(k); lnode = lalloc(v); node->head = lnode; node->tail = lnode; node->is_red = true; return node; } if (strcmp(k, root->key) < 0) { root->left = tree_add(root->left, k, v); } else if (strcmp(k, root->key) > 0) { root->right = tree_add(root->right, k, v); } else { if (strcmp(k, root->key) == 0) { lnode = lalloc(v); root->tail->next = lnode; root->tail = lnode; root->tail->next = NULL; } } if (is_red(root->right) && !is_red(root->left)) { root = rotate_left(root); } if (is_red(root->left) && is_red(root->left->left)) { root = rotate_right(root); } if (is_red(root->left) && is_red(root->right)) { flip_colors(root); } return root;}Here are TNODE and LNODE: // LNODE is the data structure for a singly linked list.typedef struct lnode { VALUE val; // A pointer to the value stored in the linked list. struct lnode *next; // Pointer to the next item in the list; it should be NULL if there is no successor.} LNODE;typedef struct tnode { KEY key; // Search key for this binary search tree node. struct tnode *right; // Right child. struct tnode *left; // Left child. LNODE *head; // Head of the linked list storing the values for the search key. LNODE *tail; // Tail of the linked list storing the values for the search key. bool is_red; // Flag use only in red-black trees to denote redness.} TNODE;Here are some more functions: TALLOC, LALLOC, and rotate TNODE *talloc(const KEY k) { TNODE *tnode = malloc(sizeof(TNODE)); if (tnode == NULL) { return NULL; } tnode->key = k; tnode->is_red = false; tnode->head = NULL; tnode->tail = NULL; tnode->right = NULL; tnode->left = NULL; return tnode;}LNODE *lalloc(const VALUE v) { LNODE *lnode = malloc(sizeof(LNODE)); if (lnode == NULL) { return NULL; } lnode->val = v; lnode->next = NULL; return lnode;}TNODE *rotate_left(TNODE *h) { TNODE *x = h->right; h->right = x->left; x->left = h; x->is_red = h->is_red; h->is_red = true; return x;}TNODE *rotate_right(TNODE *h) { TNODE *x = h->left; h->left = x->right; x->right = h; x->is_red = h->is_red; h->is_red = true; return x;}void flip_colors(TNODE *h) { h->is_red = true; h->left->is_red = false; h->right->is_red = false;} | Red Black Tree Implementation | c;tree | Implementation Issue:I would call strcmp(k, root->key) once:int cmpval;if (root == NULL){ ...}else{ cmpval = strcmp(k, root->key); if (cmpval < 0) { ... } else if (cmpval > 0) { ... } else // if (cmpval == 0) { ... }}Design Issue:The use of strcmp essentially couples the KEY type with a null-terminated string type.You should try to decouple them in order to allow the user to easily change the KEY type.One way to do it is by implementing a comparison function alongside the KEY type:typedef char* KEY;int compare(const KEY key1,const KEY key2){ return strcmp(key1,key2);}Of course, this does not really decouple KEY from char*, but at least it lets the user know that changing the KEY type must be followed by changing the implementation of the compare function.There is probably a design-pattern specifically for the case at hand... |
_unix.375512 | Enabling syncookies helps against some attacks, but the official docs still say that syncookies "seriously violate TCP protocol, do not allow to use TCP extensions". Today I discovered that at some point in kernel development syncookies were extended to also handle Timestamp, ECN, SACK and WScale. So: how can I find out from which kernel version this applies? And with syncookies enabled together with Timestamp, ECN, SACK and WScale, how do other operating systems behave? | syncookies and tcp options | linux;tcp ip | null
_codereview.62678 | BackgroundAs this related question describes, there does not appear to be a canonical way to validate XML files against an XSD then subsequently transform them using an XSL template with file paths determined from a catalog resolver.The XSL templates can be XSLT 1.0 or XSLT 2.0, the latter requiring Saxon9HE.ProblemThe given answer works, but has a number of issues that are undesirable, including:Using an XMLCatalogResolver and a CatalogResolver.Creating an XML catalog resolver instance using the catalog resolver instance.Traversing a DOM to determine the XSD URI.Creating a SchemaFactory to perform the validation.Calling the XML catalog resolver instance to find the local XSD file path.Passing the catalog resolver instance to the XSL transformer instance.It seems like those aspects of the code should be handled by existing APIs, especially the contortions required to extract the XSD URI from the DOM.SourceA repository exists that contains the entire example, complete with catalog files, schema definitions, and XML tests. The main source file that has the problems noted above follows:package src;import java.io.*;import java.net.URI;import java.util.*;import java.util.regex.Pattern;import java.util.regex.Matcher;import javax.xml.parsers.*;import javax.xml.xpath.*;import javax.xml.XMLConstants;import org.w3c.dom.*;import org.xml.sax.*;import org.apache.xml.resolver.tools.CatalogResolver;import org.apache.xerces.util.XMLCatalogResolver;import static org.apache.xerces.jaxp.JAXPConstants.JAXP_SCHEMA_LANGUAGE;import static org.apache.xerces.jaxp.JAXPConstants.W3C_XML_SCHEMA;import javax.xml.validation.SchemaFactory;import javax.xml.validation.Schema;import javax.xml.validation.Validator;import javax.xml.transform.Result;import javax.xml.transform.Source;import javax.xml.transform.Transformer;import javax.xml.transform.TransformerFactory;import javax.xml.transform.dom.DOMSource;import javax.xml.transform.sax.SAXSource;import javax.xml.transform.stream.StreamResult;import javax.xml.transform.stream.StreamSource;/** * Download http://xerces.apache.org/xml-commons/components/resolver/CatalogManager.properties */public class TestXSD { private final static String ENTITY_RESOLVER = http://apache.org/xml/properties/internal/entity-resolver; /** * This program reads an XML file, performs validation, reads an XSL * file, transforms the input XML, and then writes the transformed document * to standard output. * * args[0] - The XSL file used to transform the XML file * args[1] - The XML file to transform using the XSL file */ public static void main( String args[] ) throws Exception { // For validation error messages. ErrorHandler errorHandler = new DocumentErrorHandler(); // Read the CatalogManager.properties file. CatalogResolver resolver = new CatalogResolver(); XMLCatalogResolver xmlResolver = createXMLCatalogResolver( resolver ); logDebug( READ XML INPUT SOURCE ); // Load an XML document in preparation to transform it. 
InputSource xmlInput = new InputSource( new InputStreamReader( new FileInputStream( args[1] ) ) ); DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance(); dbFactory.setAttribute( JAXP_SCHEMA_LANGUAGE, W3C_XML_SCHEMA ); dbFactory.setNamespaceAware( true ); DocumentBuilder builder = dbFactory.newDocumentBuilder(); builder.setEntityResolver( xmlResolver ); builder.setErrorHandler( errorHandler ); logDebug( PARSE XML INTO DOCUMENT MODEL ); Document xmlDocument = builder.parse( xmlInput ); logDebug( CONVERT XML DOCUMENT MODEL INTO DOMSOURCE ); DOMSource xml = new DOMSource( xmlDocument ); logDebug( GET XML SCHEMA DEFINITION ); String schemaURI = getSchemaURI( xmlDocument ); logDebug( SCHEMA URI: + schemaURI ); if( schemaURI != null ) { logDebug( CREATE SCHEMA FACTORY ); // Create a Schema factory to obtain a Schema for XML validation... SchemaFactory sFactory = SchemaFactory.newInstance( W3C_XML_SCHEMA ); sFactory.setResourceResolver( xmlResolver ); logDebug( CREATE XSD INPUT SOURCE ); String xsdFileURI = xmlResolver.resolveURI( schemaURI ); logDebug( CREATE INPUT SOURCE XSD FROM: + xsdFileURI ); InputSource xsd = new InputSource( new FileInputStream( new File( new URI( xsdFileURI ) ) ) ); logDebug( CREATE SCHEMA OBJECT FOR XSD ); Schema schema = sFactory.newSchema( new SAXSource( xsd ) ); logDebug( CREATE VALIDATOR FOR SCHEMA ); Validator validator = schema.newValidator(); logDebug( VALIDATE XML AGAINST XSD ); validator.validate( xml ); } logDebug( READ XSL INPUT SOURCE ); // Load an XSL template for transforming XML documents. InputSource xslInput = new InputSource( new InputStreamReader( new FileInputStream( args[0] ) ) ); logDebug( PARSE XSL INTO DOCUMENT MODEL ); Document xslDocument = builder.parse( xslInput ); transform( xmlDocument, xslDocument, resolver ); System.out.println(); } private static void transform( Document xml, Document xsl, CatalogResolver resolver ) throws Exception { if( versionAtLeast( xsl, 2 ) ) { useXSLT2Transformer(); } logDebug( CREATE TRANSFORMER FACTORY ); // Create the transformer used for the document. TransformerFactory tFactory = TransformerFactory.newInstance(); tFactory.setURIResolver( resolver ); logDebug( CREATE TRANSFORMER FROM XSL ); Transformer transformer = tFactory.newTransformer( new DOMSource( xsl ) ); logDebug( CREATE RESULT OUTPUT STREAM ); // This enables writing the results to standard output. Result out = new StreamResult( new OutputStreamWriter( System.out ) ); logDebug( TRANSFORM THE XML AND WRITE TO STDOUT ); // Transform the document using a given stylesheet. transformer.transform( new DOMSource( xml ), out ); } /** * Answers whether the given XSL document version is greater than or * equal to the given required version number. * * @param xsl The XSL document to check for version compatibility. * @param version The version number to compare against. * * @return true iff the XSL document version is greater than or equal * to the version parameter. */ private static boolean versionAtLeast( Document xsl, float version ) { Element root = xsl.getDocumentElement(); float docVersion = Float.parseFloat( root.getAttribute( version ) ); return docVersion >= version; } /** * Enables Saxon9's XSLT2 transformer for XSLT2 files. */ private static void useXSLT2Transformer() { System.setProperty(javax.xml.transform.TransformerFactory, net.sf.saxon.TransformerFactoryImpl); } /** * Creates an XMLCatalogResolver based on the file names found in * the given CatalogResolver. 
The resulting XMLCatalogResolver will * contain the absolute path to all the files known to the given * CatalogResolver. * * @param resolver The CatalogResolver to examine for catalog file names. * @return An XMLCatalogResolver instance with the same number of catalog * files as found in the given CatalogResolver. */ private static XMLCatalogResolver createXMLCatalogResolver( CatalogResolver resolver ) { int index = 0; List files = resolver.getCatalog().getCatalogManager().getCatalogFiles(); String catalogs[] = new String[ files.size() ]; XMLCatalogResolver xmlResolver = new XMLCatalogResolver(); for( Object file : files ) { catalogs[ index ] = (new File( file.toString() )).getAbsolutePath(); index++; } xmlResolver.setCatalogList( catalogs ); return xmlResolver; } private static String[] parseNameValue( String nv ) { Pattern p = Pattern.compile( \\s*(\\w+)=\([^\]*)\\\s* ); Matcher m = p.matcher( nv ); String result[] = new String[2]; if( m.find() ) { result[0] = m.group(1); result[1] = m.group(2); } return result; } /** * Retrieves the XML schema definition using an XSD. * * @param node The document (or child node) to traverse seeking processing * instruction nodes. * @return null if no XSD is present in the XML document. * @throws IOException Never thrown (uses StringReader). */ private static String getSchemaURI( Node node ) throws IOException { String result = null; if( node.getNodeType() == Node.PROCESSING_INSTRUCTION_NODE ) { ProcessingInstruction pi = (ProcessingInstruction)node; logDebug( NODE IS PROCESSING INSTRUCTION ); if( xml-model.equals( pi.getNodeName() ) ) { logDebug( PI IS XML MODEL ); // Hack to get the attributes. String data = pi.getData(); if( data != null ) { final String attributes[] = pi.getData().trim().split( \\s+ ); String type = parseNameValue( attributes[0] )[1]; String href = parseNameValue( attributes[1] )[1]; // TODO: Schema should = http://www.w3.org/2001/XMLSchema //String schema = attributes.getNamedItem( schematypens ); if( application/xml.equalsIgnoreCase( type ) && href != null ) { result = href; } } } } else { // Try to get the schema type information. NamedNodeMap attrs = node.getAttributes(); if( attrs != null ) { // TypeInfo.toString() returns values of the form: // schemaLocation=uri schemaURI // The following loop extracts the schema URI. for( int i = 0; i < attrs.getLength(); i++ ) { Attr attribute = (Attr)attrs.item( i ); TypeInfo typeInfo = attribute.getSchemaTypeInfo(); String attr[] = parseNameValue( typeInfo.toString() ); if( schemaLocation.equalsIgnoreCase( attr[0] ) ) { result = attr[1].split( \\s )[1]; break; } } } // Look deeper for the schema URI. if( result == null ) { NodeList list = node.getChildNodes(); for( int i = 0; i < list.getLength(); i++ ) { result = getSchemaURI( list.item( i ) ); if( result != null ) { break; } } } } return result; } /** * Writes a message to standard output. */ private static void logDebug( String s ) { System.out.println( s ); }}The most problematic parts of the code are the:getSchemaURI method; andif( schemaURI != null ) { ... 
} code block.I think that they are redundant and brittle, but do not know what mechanisms are available to avoid having to manually parse and validate against an XSD whose file path is looked up using an XML catalog.QuestionWithout directly involving SAX, how do you use a catalog resolver to both validate XML files using an XSD and transform documents (in DOM) whose XSL file paths are specified in the catalog?Relatedhttp://xerces.apache.org/xerces2-j/faq-xcatalogs.htmlhttp://xerces.apache.org/xml-commons/components/resolver/resolver-article.htmlhttp://www.xml.com/pub/a/2004/03/03/catalogs.htmlhttp://saxonica.com/documentation/sourcedocs/xml-catalogs.html | Validate XML using XSD, a Catalog Resolver, and JAXP DOM for XSLT | java;xml;dom;xslt;xsd | /** * Retrieves the XML schema definition using an XSD. * * @param node The document (or child node) to traverse seeking processing * instruction nodes. * @return null if no XSD is present in the XML document. * @throws IOException Never thrown (uses StringReader). */ private static String getSchemaURI( Node node ) throws IOException { String result = null; if( node.getNodeType() == Node.PROCESSING_INSTRUCTION_NODE ) { ProcessingInstruction pi = (ProcessingInstruction)node; logDebug( NODE IS PROCESSING INSTRUCTION ); if( xml-model.equals( pi.getNodeName() ) ) { logDebug( PI IS XML MODEL ); // Hack to get the attributes. String data = pi.getData(); if( data != null ) { final String attributes[] = pi.getData().trim().split( \\s+ ); String type = parseNameValue( attributes[0] )[1]; String href = parseNameValue( attributes[1] )[1]; // TODO: Schema should = http://www.w3.org/2001/XMLSchema //String schema = attributes.getNamedItem( schematypens ); if( application/xml.equalsIgnoreCase( type ) && href != null ) { result = href; } } } } else { // Try to get the schema type information. NamedNodeMap attrs = node.getAttributes(); if( attrs != null ) { // TypeInfo.toString() returns values of the form: // schemaLocation=uri schemaURI // The following loop extracts the schema URI. for( int i = 0; i < attrs.getLength(); i++ ) { Attr attribute = (Attr)attrs.item( i ); TypeInfo typeInfo = attribute.getSchemaTypeInfo(); String attr[] = parseNameValue( typeInfo.toString() ); if( schemaLocation.equalsIgnoreCase( attr[0] ) ) { result = attr[1].split( \\s )[1]; break; } } } // Look deeper for the schema URI. if( result == null ) { NodeList list = node.getChildNodes(); for( int i = 0; i < list.getLength(); i++ ) { result = getSchemaURI( list.item( i ) ); if( result != null ) { break; } } } } return result; }First off: The combination of 2-space tabs and new lines for elses on if-else statements is making it hard to read for me.Now, I don't have a solution for your main problems. I think you'll have to ask somewhere else for that; I can't help you refactor out huge parts of your program just like that. All I can do is review the code as it is based on my knowledge in Java.I believe this method suffers because you try to validate everything before deciding whether you're going to use it. // Hack to get the attributes. String data = pi.getData(); if( data != null ) { final String attributes[] = pi.getData().trim().split( \\s+ );data has no other uses. So why not do final String attributes[] = data.trim().split( \\s+ );instead? 
final String attributes[] = pi.getData().trim().split( \\s+ ); String type = parseNameValue( attributes[0] )[1]; String href = parseNameValue( attributes[1] )[1]; // TODO: Schema should = http://www.w3.org/2001/XMLSchema //String schema = attributes.getNamedItem( schematypens ); if( application/xml.equalsIgnoreCase( type ) && href != null ) { result = href; }After this bit of code, you return result. There's an else block, but it's not executed if this snippet of code is reached.In that light, there's no other uses for type and href in this function. Additionally, result was null to begin with.So all that's actually relevant is to do this: final String attributes[] = pi.getData().trim().split( \\s+ ); String type = parseNameValue( attributes[0] )[1]; // TODO: Schema should = http://www.w3.org/2001/XMLSchema //String schema = attributes.getNamedItem( schematypens ); if( application/xml.equalsIgnoreCase( type )) { result = parseNameValue( attributes[1] )[1]; //href }Validating whether href is null is not needed since you're just setting null to null otherwise anyway.I also feel this function should be split in three:One function for ProcessingInstruction nodes.One function for determining SchemaURI from node.getAttributes()and one function for determining SchemaURI from node.getChildNodes().This will get rid of the deep nesting of statements you have here and make it easier to understand your code. |
_softwareengineering.119827 | I got this question in an interview and I was not able to solve it.You have a circular road, with N number of gas stations.You know the amount of gas that each station has.You know the amount of gas you need to GO from one station to the next one.Your car starts with 0.You can only drive clockwise.The question is: Create an algorithm, to know from which gas station you must start driving so that you complete a full circle.As an exercise to me, I would translate the algorithm to C#. | Can anyone help solve this complex algorithmic problem? | c#;algorithms | (Update: now allows a gas tank size maximum)You can solve this in linear time as follows:void FindStartingPoint(int[] gasOnStation, int[] gasDrivingCosts, int gasTankSize){ // Assume gasOnStation.length == gasDrivingCosts.length int n = gasOnStation.length; // Make a round, without actually caring how much gas we have. int minI = 0; int minEndValue = 0; int gasValue = 0; for (int i = 0; i < n; i++) { if (gasValue < minEndValue) { minI = i; minEndValue = gasValue; } gasValue = gasValue + gasOnStation[i] - gasDrivingCosts[i]; } if (gasValue < 0) { Console.WriteLine(Instance does not have a solution: not enough fuel to make a round.); } else { // Try a round. int gas = DoLeg(0, minI, gasTankSize); if (gas < 0) { Console.WriteLine(Instance does not have a solution: our tank size is holding us back.); return; } for (int i = (minI + 1) % n; i != minI; i = (i + 1) % n) { gas = DoLeg(gas, i, gasTankSize); if (gas < 0) { Console.WriteLine(Instance does not have a solution: our tank size is holding us back.); return; } } Console.WriteLine(Start at station: + minI); }}int DoLeg(int gas, int i, int gasTankSize){ gas += gasOnStation[i]; if (gas > gasTankSize) gas = gasTankSize; gas -= gasDrivingCosts[i]; return gas;}First, we look at the case where we don't have a gas tank with a maximum.Essentially, in the first for-loop, we just drive the circle around, not caring if our fuel tank has negative fuel or not. The point of this is that no matter where you start, the difference between how much there is in your fuel tank at the start (0) and at the end is the same.Therefore, if we end up with less fuel than we started (so less than 0), this will happen no matter where we start, and so we can't go a full circle.If we end up with at least as much fuel as we started after going a full circle, then we search for the moment our fuel tank was at its lowest point (which is always just as we get to a gas station). If we start at this point, we will never end up with less fuel than at this point (because it is the lowest point and because we don't lose fuel if we drive a circle).Therefore, this point is a valid solution, and in particular, there always is such a point.Now we'll look at the version where our gas tank can hold only so much gas.Suppose our initial test (described above) we found out it is not impossible to go the entire circle. Suppose that we start at gas station i, we tank at gas station j, but our gas tank ends up being full, so we miss out on some extra gas the station has available. Then, before we get to station k, we end up not having enough fuel, because of the gas we missed out on.We claim that in this scenario, this will end up happening no matter where you start. 
Suppose we start at station l.If l is between j and k, then we either stop (long) before we can get to station k because we started at a bad station, or we'll always have at most the amount of fuel that we had when we started at i when we try to get to k, because we passed through the same stations (and our tank was full when we passed j). Either case is bad.If l is not between j and k, then we either stop (long) before we get to j, or we arrive at j with at most a full tank, which means that we won't make it to k either. Either case is bad.This means that if we make a round starting at a lowest point just like in the case with the infinitely large gas tank, then we either succeed, or we fail because our gas tank was too small, but that means that we will fail no matter which station we pick first, which means that the instance has no solution. |
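The core of the accepted answer — make one bookkeeping pass, remember where the running fuel balance is lowest, and start just after that point — fits in a few lines. Below is a compact Python rendering of the unlimited-tank part (my own sketch; the names and the omission of the tank-size check are assumptions, not the answer's code).

def find_starting_station(gas, cost):
    """gas[i]: fuel available at station i; cost[i]: fuel needed to drive from
    station i to station (i+1) % n.  Returns an index from which a full
    clockwise round is possible, or None if the total fuel is insufficient."""
    tank = 0
    best_start, lowest = 0, 0
    for i in range(len(gas)):
        if tank < lowest:              # arriving at i with the lowest balance so far
            best_start, lowest = i, tank
        tank += gas[i] - cost[i]
    return best_start if tank >= 0 else None

print(find_starting_station([1, 2, 3, 4, 5], [3, 4, 5, 1, 2]))   # 3

Starting at the station where the running balance bottoms out guarantees the tank never goes negative afterwards, which is exactly the argument made in the answer.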
_cstheory.36258 | I was wondering if there exists a brute force search algorithm for semidefinite programming problems. Specifically, can we find finite number of points in the positive semidefinite cone such that for any objective, we can get a good approximation by searching over these finite points?In a linear program, the answer is positive; we can search over all the vertices of the constraint, which is a convex polytope. This question is closely related to the representation of a spectrahedron, which is the intersection of the PSD cone with planes or half-spaces. Specifically, if the spectrahedron is finite representable, then we can search the values of the objective over the basis of the representations. | Brute force search algorithm for semidefinite programming (representation of spectrahedron) | approximation algorithms;linear programming;convex optimization;integer programming;semidefinite programming | null |
_softwareengineering.272586 | In an agile software development team, who would be the one to fix the bugs introduced in an update? The developer who writes the feature?Someone else specialized specifically in debugging with a certain title?The best developer in the team? | Who fixes bugs in a team? | debugging;maintenance;bug | null |
_codereview.39493 | I would like to get some feedback on my code. I am starting to program after doing a few online Python courses.Would you be able to help me by following my track of thinking and then pinpointing where I could be more efficient? (I have seen other answers but they seem to be more complicated than they need to be, albeit shorter; but perhaps that is the point).Question:Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.fibValue = 0valuesList = []a = 0b = 1#create listwhile fibValue <= 4000000: fibValue = a + b if fibValue % 2 == 0: valuesList.append(fibValue) a = b b = fibValueprint (valuesList) #just so that I can see what my list looks like after fully compiledprint() #added for command line neatnessprint() #added for command line neatnessnewTot = 0for i in valuesList: newTot += iprint (newTot) | More efficient solution for Project Euler #2 (sum of Fibonacci numbers under 4 million) | python;project euler;fibonacci sequence | null |
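One observation on the script above: because the limit is checked before the next term is computed, the loop computes one Fibonacci number beyond four million and would have appended it had it been even; that term happens to be the odd 5,702,887, so the final sum is unaffected. A tighter sketch, shown purely for comparison (not an official answer to the review), keeps the bound check next to the summation:

def even_fib_sum(limit=4_000_000):
    """Sum of the even Fibonacci terms that do not exceed `limit`,
    using the 1, 2, 3, 5, ... convention from the problem statement."""
    total, a, b = 0, 1, 2
    while b <= limit:
        if b % 2 == 0:
            total += b
        a, b = b, a + b
    return total

print(even_fib_sum())   # 4613732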
_unix.263446 | I want to copy the last used (or maybe created) files of a total size to another folder. Is this possible without additional tools?I have an USB drive of a certain size that is less than the total size of a folder. As I can't copy all files to USB I like to copy based on latest usage until there is no more space. Ideally the method also supports updating the files without the need to erase all files and re-copy them. | Copy last used files of total size | linux;rsync;file copy | On the assumption (based on the [linux] tag) that you have bash available, as well as the stat and sort commands; on the further assumption that you want to sync the most-recently-modified files first (see man stat for other timestamp options), then here is a bash script that will loop through all the files in the current directory (for f in * is the key line for that), gathering their last-modified timestamps into an array, then it loops through the sorted timestamps and prints -- a sample! -- rsync command for each file (currently has timestamp debugging information attached as proof).You'll have to adjust the rsync command for your particular situation, of course. This script will output rsync commands for every file in the current directory; my suggestion would be to either execute these rsync's blindly, letting the ones at the end fail, or to put them into a script to execute separately. This script does not attempt to optimize the space utilization of the destination in any way -- the only ordering it does is the last-modification timestamp (and the arbitrary ordering of the associative array in case there are multiple files modified in the same second).#!/usr/bin/env bashdeclare -A times# gather the files and their last-modified timestamp into an associative array,# indexed by filename (unique)for f in *do [ -f $f ] && times[$f]=$(stat -c %Y $f)done# get the times in (unique) sorted orderfor times in ${times[@]}do echo $timesdone | sort -run | while read tdo # then loop through the array looking for files with that modification time for f in ${!times[@]} do if [[ ${times[$f]} = $t ]] then echo rsync $f -- timestamp ${times[$f]} fi donedone |
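The answer leaves the size limit to rsync simply failing once the stick is full. If you would rather select the files up front, a small script can sort by modification time and keep a running total against the drive's capacity. The sketch below is my own illustration (not from the answer); it copies newest-first, skips anything that no longer fits, and does not recopy files that already match by size and timestamp, so re-running it updates the stick incrementally.

import os
import shutil

def copy_recent_until_full(src, dst, capacity_bytes):
    """Copy the most recently modified files from src into dst, newest first,
    skipping any file that no longer fits in the remaining capacity."""
    entries = []
    for name in os.listdir(src):
        path = os.path.join(src, name)
        if os.path.isfile(path):
            st = os.stat(path)
            entries.append((st.st_mtime, st.st_size, name, path))
    entries.sort(reverse=True)                     # newest first
    free = capacity_bytes
    for mtime, size, name, path in entries:
        if size > free:
            continue                               # does not fit any more
        target = os.path.join(dst, name)
        if not (os.path.exists(target)
                and os.path.getsize(target) == size
                and int(os.path.getmtime(target)) == int(mtime)):
            shutil.copy2(path, target)             # copy2 preserves the timestamp
        free -= size

# e.g. copy_recent_until_full("/data/music", "/media/usbstick", 4 * 1024**3)

It does not delete files on the destination that have dropped out of the "most recent" set; clearing those first, or building a staging directory this way and letting rsync --delete mirror it, is left as a variation.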
_unix.31579 | I frequently use :ab to save typing time during coding. For e.g. :ab mat matrix to replace mat by matrix every time I type mat. Is there any way of storing and loading the abbreviations I create for a given file?I want something to store my abbreviations as and when I declare them and also reload them when I open a file. I would prefer the abbreviations to be local to a file rather than global but I can work around this if necessary. | Storing Abbreviations in Vim | vim | If it's for a single specific file, you could add an autocommand (:help autocommand or :help 40.3) to your .vimrc:au BufRead,BufNewFile /path/to/foobar call FoobarSettings()function FoobarSettings() ab mat matrix ... more setup commandsendfunctionChange foobar to something that makes more sense for you.A less flexible clunkier shotgun-style approach is to use sessions (:help sessions or :help 21.4). It is unwieldy because sessions by default save a great deal of things including window sizes, open files, options, mappings, folds, etc. You can change this with the 'sessionoptions' option if you like.After you've created opened the file and set up the abbreviations, :mksession! sessionfile.vim.To restore the session, from the shell you can do vim -S sessionfile.vim or from inside vim you can do :source sessionfile.vim. |
_webmaster.54366 | I'm at my wits' end.I have just ripped out a website and in the process of rebuilding everything.Previously, the 'home page' of the website is a blog, with the address www.mydomain.com/blog1.php.After exporting everything, I deleted the whole directory, and -- based on request -- immediately create a blog/ directory. The idea is to get the blog back up as soon as possible, and temporarily redirect people accessing www.mydomain.com to the blog.Accessing the blog via http://www.mydomain.com/blog/ works. So I put in an index.php file containing a (temporary) redirect to the blog's address.The problem: The server insists on opening blog1.php instead of index.php. Even after we deleted all the files (including .htaccess). And even putting in a new .htaccess file with the single line of DirectoryIndex index.php doesn't work. The server stubbornly wants blog1.php.Now, the server is actually a webhosting, so I have no actual access to it. I have to do my work via cPanel.Currently, I work around this issue by creating blog1.php; but I really want to know why the server does not revert to opening index.php. Did I perhaps miss some important settings in the byzantine cPanel menu page? | Webserver insists on opening blog1.php instead of index.php | php;htaccess;cpanel | null |
_codereview.18302 | There are a number of different ways to do this. What do people prefer and why?public boolean checkNameStartsWith(List<Foo> foos) { for (Foo foo : foos) { if (!(foo.getName().startsWith(bar))) { return Boolean.FALSE; } } return Boolean.TRUE;} | Looping over a list, checking a boolean and return value | java | null |
_softwareengineering.178149 | I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If turns out well I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work I may not be allowed to do this from a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands here. My one sentence question is, When is it okay (legally/ethically) to open-source a software tool originally written by you for work at work? What if you have expanded the original source significantly during off-hours?Follow-up: Suppose I write the whole thing at home on my time then simply use it at work, does that change things drastically?Follow-up 2: Note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own)--I'm just wondering if there's a fair way of doing this for all involved... It would be nice if some nonprofit down the road could use my code and save them some time. Also, there's another issue at stake. If I write the library for a very simple, generic thing (like HTML tables in Javascript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it was a whole new fresh rewrite or a segment of a larger project). Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side-note. | When can I publish a software tool written at work? | legal | It is almost never OK, legally or ethically, to release products that you have created using your employer's resources or while being payed by the employer for your time without permission.However, it depends on your employment contract. If you were paid by the company and/or used company resources to produce the product, chances are that the work belongs to your company. You need to go through your supervisor and your legal department. Depending on your employment contract, there might also be restrictions on working on related technologies or using knowledge gained at your employer in projects, even if you work on them using personal resources on your own time.If you are using paid time, company resources, or are developing something that might be considered related to the business of your company, always seek guidance from your manager and/or legal department to ensure that you aren't violating any agreements and to get the appropriate permission to work on projects. Typically, it's easier to do this before you begin work as it might change the approaches that you take on the project.Writing products for the use at work on your own time is questionable and depends on the regulations that your employer must adhere to. At the very least, you could be interfering with your employers schedule, budget, and estimates by taking work off-line. In some cases, you could be violating the contractual regulations by creating products outside of time that is tracked and billed appropriately. |
_unix.325757 | Short Version: I'm doing regular backups with help of Btrfs send and receive commands. The snapshot which contains the data to be backed up (SOURCE) is a read-only snapshot. Creating this snapshot with Btrfs is atomic. The backup then is made using a combination of Btrfs send and receive commands. My question is: Does the Btrfs receive command also create the backup snapshot atomically on the destination volume?Long Version: For my daily backup strategy I use Btrfs to send changes of a source sub-volume to a backup-drive. The sub-volume I want to backup is located in SOURCE, while the backup itself will be stored in DEST.Before I can make a backup, I need a read-only snapshot of SOURCE which I will store below SOURCE itself in a sub-directory called .snapshots. This is done with the commandsbtrfs subvolume snapshot -r SOURCE SOURCE/.snapshots/current_backupsyncThe sync command above is needed according to the Btrfs-wiki to make btrfs send work. Now I want to send the snapshot called current_backup to a backup volume DEST on a different drive. I do this with the commandbtrfs send SOURCE/.snapshots/current_backup | btrfs receive DESTMy question is about the btrfs receive part of this backup process: Does this happen atomically? In other words: Is the backup on volume DEST only available if it has been completely received and written? | Is Btrfs Receive-Command Atomic? | backup;btrfs | No, it is not atomic. Btrfs receive does create a subvolume, so that's atomic, but initially the subvolume is empty. Then, btrfs receive fills the subvolume with the incoming data.You can test this by cd'ing to DEST while performing the backup and doing ls or find repeatedly. |
_webmaster.79382 | We're using a few Google APIs on our site such as embed maps, static maps and address lookup.We were thinking about removing these and going with other services, or even saving the returned static map as an image and loading that instead of doing a Google request every time.If we did that, would our rank in Google decrease? | Does Google rank websites higher for using their APIs? | google;pagerank;google ranking | null |
_codereview.93545 | In light of the recent SQL frenzy of sorts in The 2nd Monitor, I decided to take a stab at writing my own SEDE query. Essentially, what it does is find questions that could be answered based on the following parameters.-- @MinQuestionVotes - The minimum amount of votes on a question.-- @MaxQuestionAnswers - The maximum amount of answers to a question.-- @QuestionTags - The tags that should be on the questions.At the moment, I feel that those parameters are necessary for finding possible questions to answer, but if you feel that one isn't needed, just mention it. Anyways, here's the code, and here's the SEDE query link.-- User parameters for finding questions. Here is-- a brief description of what each parameter does. -- @MinQuestionVotes - The minimum amount of votes on a question. -- @MaxQuestionAnswers - The maximum amount of answers to a question. -- @QuestionTags - The tags that should be on the questions.DECLARE @MinQuestionVotes INT = ##MinQuestionVotes##;DECLARE @MaxQuestionAnswers INT = ##MaxQuestionAnswers##;DECLARE @QuestionTags NVARCHAR(150) = ##QuestionTag1##;-- SELECT the final results. Data is filtered based-- on the following conditions. -- ClosedDate IS EQUAL TO null -- PostTypeId IS EQUAL TO question -- Score GREATER THAN OR EQUAL TO @MinQuestionVotes -- AnswerCount LESS THAN OR EQUAL TO @MaxQuestionAnswers -- Tags CONTAIN @QuestionTagsSELECT Posts.Id AS [Post Link] , OwnerUserId AS [User Link] , Posts.Score , Posts.Tags , Posts.ViewCount , Posts.AnswerCount FROM Posts INNER JOIN PostTags ON Posts.Id = PostTags.PostId INNER JOIN Tags ON PostTags.TagId = Tags.Id WHERE Posts.PostTypeId = 1 AND Posts.ClosedDate IS NULL AND Posts.Score >= @MinQuestionVotes AND Posts.AnswerCount <= @MaxQuestionAnswers AND Tags.TagName LIKE CONCAT('%', @QuestionTags, '%');Finally, here's an example of possible inputs. When entering into the QuestionTags field, you need to surround your tags with single quotes, like this: 'python'.@MinQuestionVotes: 1@MaxQuestionAnswers: 0@QuestionTags: 'python' | Finding questions to answer | sql;sql server;stackexchange | First, your comments:They're structured well, but, the content could be improved:-- SELECT the final results. Data is filtered based ^^^^^^^^^^^^^^^^^^^^^^^^^-- on the following conditions.You're not really SELECTing the final results, you SELECT them based on the conditions, you don't SELECT them and then filter them.Your declaration DECLARE @QuestionTags NVARCHAR(150) = ##QuestionTag1## is a little confusing:Tags is plural, but then the variable you ask for input is Tag1 (singular)?You ask for input like ##MinQuestionVotes##, but there's no reason to abbreviate, it could really just be:##MinimumQuestionVotes##. Same things applies to the other two.Finally, CONCAT('%', @QuestionTags, '%') is good, but you could really just:'%' + @QuestionTags + '%' instead.You could even build the tag string with the ' attached so the user doesn't need to input.Other than that, your code looks really clean and nice. Good Work! |
_unix.59790 | Right now I'm using OpenSUSE 12.2 with KDE 4.9.4. If I upgrade that to KDE 4.10 in January, will it also bring in Qt 5 (or at least newer Qt packages)? Or are the Qt packages tied to the OS? | Will updating KDE also update Qt? | kde;opensuse;qt | null |
_unix.206163 | I was trying to create a cron job which runs a ruby code on digital ocean, however it seems I'm making a mistake. It doesn't give any error but it doesn't also do anything. I ran this cronjob on my raspberry pi however on digital ocean it doesn't work. Here my cronjob59 17 * * * ruby /home/workspace/delta/analytics/analyze.rb 7 >> /home/testrubyIt creates testruby file but analyze.rb 7 doesn't work. I tested running ruby /home/ .... and it is working. What might be the problem?UPDATEerror file: bin/sh: 1: /usr/local/bin/ruby: not foundThis is what I wrote in my crontab* * * * * /usr/local/bin/ruby /home/workspace/deriva/analytics/analyze.rb 7 >> /home/testruby 2>&1 | Running a ruby cron job | cron | Different environment variables, working directory, ... You need to debug where exactly analyze.rb is bailing out.First, you're only redirecting stdout, not stderr. Errors probably go to the later, so adding a 2>&1 to the end may help a lot. Or setting EMAIL= at the top of your crontab to have them mailed to you.You can confirm that ruby is starting up the print starting!\n or similar to the beginning of your Ruby script, and seeing if that shows up in the log file. |
_softwareengineering.189026 | This has been bugging me for a while. Most of the time, when it comes to storing data in structures such as hashtables, programmers, books and articles insist that indexing elements in said structures by String values is considered to be bad practice. Yet, so far, I have not found a single such source to also explain WHY it is considered to be bad practice. Does it depend on the programming language? On the underlying framework? On the implementation?Take two simple examples, if it helps:An SQL-like table where rows are indexed by a String primary key.A .NET Dictionary where the keys are Strings. | Why is the usage of string keys generally considered to be a bad idea? | programming practices;data structures;database design | It all has to do with the two things basically:1) The speed of lookup (where integers for instance fare much better)2) The size of indexes (where string indexes would explode)Now it all depends on your needs and the size of the dataset. If a table or a collection has like 10-20 elements in it, the type of the key is irrelevant. It will be very fast even with a string key.P.S. May not be related to your question, but Guids are considered bad for database keys too (16 byte Guid vs. 4 byte integer). On large data volumes Guids do slow down lookup. |
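As a rough illustration of the two costs the answer names (lookup speed and key size), here is a small, purely in-memory Python sketch; it is an analogy, not a database benchmark, and the sizes and key choices are arbitrary:

import sys
import timeit

# Two dictionaries with the same values but different key types:
# integer keys vs. the equivalent string keys.
n = 100_000
int_keys = {i: i for i in range(n)}
str_keys = {str(i): i for i in range(n)}

# Time repeated lookups of one existing key in each dictionary.
t_int = timeit.timeit(lambda: int_keys[54321], number=1_000_000)
t_str = timeit.timeit(lambda: str_keys["54321"], number=1_000_000)
print(f"int-key lookups:    {t_int:.3f} s")
print(f"string-key lookups: {t_str:.3f} s")

# Rough size comparison of the key objects themselves (indexes in a
# database pay a similar, usually much larger, storage cost).
print("approx. key bytes (int):", sum(sys.getsizeof(k) for k in int_keys))
print("approx. key bytes (str):", sum(sys.getsizeof(k) for k in str_keys))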
_softwareengineering.113393 | As an expansion from my previous question about using separate projects for seperate layers - Good practice on Visual Studio SolutionsI now wish to know if I am putting the right functionality in the correct layers.BackgroundI'm building a WPF application from scratch, that contains business logic and business objects. The database itself sits on another server on the web with access to it, restricted to web API calls using OAuth authentication.I think the following content should be in these layers. The idea being, you go from layer 1 to layer 4, you are only depending on the layers below you. To prevent circular dependencies.1. PresentationWPF View (what the user will see)WPF ViewModel (how the program responds to user interaction)No WPF Model, as it will just be the business object2. Application/ServicesRepository (class used by ViewModel to load/save business objects)Utility classes to assist in saving objects, by selecting the correct API calls.3. Business layerBusiness objects/Entities/DTO (whichever name is preferred)Factories (Used by the repositories in the creation of business objects)Other misc business class (i.e. storage of currently logged in user)4. Infrastructure/Data AccessOAuth client (makes authenticated calls against the web server, used by repository classes) | Recommended content for layers | design;design patterns;architecture;design principles;layers | null |
_unix.327599 | So, I have zero to none programming/coding experience though I'm trying to avoid having to manually load, edit and save 1600 files using AutoDock Tools.I have 1600 PDB files that I need to convert to PDBQT file for autodockVina docking. This would take a month using the ADT GUI so I though I'd do it using awk. Yes, I though about using openBabel to do this though since the PDB files I have are non-standard format it does not work.I mange to iterate through several files, reading the PDB and reproducing the PDBQT files using a buttload of nested if's. I now it's not pretty, anywhere, but its still better then the alternative.It all seems to be working except for three things 1) I have to add TER as the last line entry in every file. I can either get it in the last file OR in every file BUT the last file.... I cannot figure out where to place the print TER > out12) I'm guessing I might need to close the output at some point to avoid running out of memory, so I might need to place an close(out1) somewhere.3) I have a swedish-counting computer meaning that . and , means different things. Since I could not find a way to use gsub to convert backwards without saving all to a file, opening that file, converting back and closing, I'm running the script as LC_NUMERIC=us_US.UTF-8 ./awkFile.awk input.fileIs there any way to put this in the script instead of at the prompt?It has taking me a several hours to get to this point and now I've given up :-) Any help is appreciated, this is what I have:#!/usr/bin/awk -fBEGIN { }FNR==1 && NR!=1 { var=x=; } # works though skips the last fileFNR==1 { out1=scriptTest_FILENAME.pdbqt; print REMARK 4 XXXX COMPLIES WITH FORMAT V. 2.0 > out1; next; }{ endFunction(); print TER > out1;}END { }function endFunction(){ var= if ( $3 ~ C || $3 ~ N || $3 ~ O || /H5 MAA/ || /H16 DP/ || /H15 DP/ ) { # look for particular MAA atoms and assign the same values as in FF if ( /C MAA X/ ) { var = 0.170 C } ... if ( /H5 MAA X/ ) { var = 0.167 HD } # look for particular XLI atoms and assign the same values as in FF if ( /C XLI X/ || /C7 XLI X/ ) { var = 0.054 C } ... if ( /C3 XLI X/ || /C4 XLI X/ ) { var = 0.206 C }# x+=1 printf %-s %6d %s %-3s %3s %s %3d %11.3f %7.3f %7.3f %5.2f %5.2f %s\n, $1, x, , $3, $4, $5, $6, $7, $8, $9, $10, $11, var > out1; } # print TER > out1; needs to be added after the last line in each file # I suppose I also need to solve the open-file issue. # close(out1); return # And the decimal point thing...}The ... is replacing something like 120 lines of if's :-P | Using awk to print to last row of every file and closing | awk;scripting;osx | null |
_softwareengineering.236780 | I have a website that allows for users to paste content (like snippets of code, etc) for sharing. Like Pastebin and Github, I also have a raw link that will display the raw contents of those posts.However, some users are posting up code and then using our service as a host for distributing content that violates our TOS (for example, javascript code and then linking to that code from external sites).Running on NGINX and PHP, what is the best way to manage this?I have a feature that when reported, I can disable the raw version of a particular post. However, it is not feasible to monitor each and every post (and then be sure that I understand what is good / what is bad).Is my only solution to disable raw functionality across the board? Should I block the raw versions from sites like facebook (using referrer maybe)? I played around with hotlink protection, but in all truth, it doesn't really appear to work all that well (or it could be just my configuration of it). | User Generated Content and Hotlink Protection | php | Output raw content in plain text and tell (modern) browsers to honor it.header(Content-Type: text/plain);header('X-Content-Type-Options: nosniff');You can also do a referrer check, but that will disable all external linking to the raw content. |
_codereview.30557 | I am an average coder trying to improve my Python by doing solved problems in it.One of the problem I did is this, here is the code I tried whichI have based on the official solution:def area(A, B, C): return float((x[B] - x[A])*(y[C] - y[B]) - (y[B] - y[A])*(x[C] -x[B]))/2x, y = {}, {}n = int(raw_input())for i in xrange(n): arr = raw_input().split() x[i] , y[i] = int(arr[0]), int(arr[1])maxarea = 0for i in xrange(n): for j in xrange(i+1, n): maxminus, maxplus = -1, -1 for k in xrange(n): if k != i and k != j: a = area(i,j,k) if(a<0): maxminus = max(maxminus, -a) else: maxplus = max(maxplus, a) if maxplus >= 0 and maxminus >=0: maxarea = max(maxarea, (maxplus+maxminus))print maxareaThe code is still giving me TLE on test case 7.Can anybody suggest further optimization? | Is any further optimization possible? (Codeforces) | python;optimization | You can do some minor optimization as follows:def main(): n = int(raw_input()) coords = list([i] + map(int, raw_input().split()) for i in range(n)) max_area = 0 for a, Ax, Ay in coords: for b, Bx, By in coords[a+1:]: max_minus, max_plus = 0, -1 for c, Cx, Cy in coords: if c != a and c != b: ccw = (Bx - Ax) * (Cy - By) - (By - Ay) * (Cx - Bx) if ccw < max_minus: max_minus = ccw elif ccw > max_plus: max_plus = ccw if max_plus >= 0 and max_minus < 0 and max_plus - max_minus > max_area: max_area = max_plus - max_minus print(max_area / 2.0)main()Note that your use of float doesn't do anything because the values to be passed are integers. Anyway, there's no need to divide by 2 every time - you can just divide the final value by 2 at the end.I think this still won't pass the speed test, though. If it is doable in python it probably needs an algorithm that makes better use of python's functions and standard library (and are other libraries allowed?). You could try something like this, for example:from itertools import permutationsdef main(): n = int(raw_input()) coords = list([i] + map(int, raw_input().split()) for i in range(n)) max_area = 0 for (a, Ax, Ay), (b, Bx, By) in permutations(coords, 2): ccws = [(Bx - Ax) * (Cy - By) - (By - Ay) * (Cx - Bx) for c, Cx, Cy in coords if c != a and c != b] low, high = min(ccws), max(ccws) if low < 0 and high >= 0 and high - low > max_area: max_area = high - low print(max_area / 2.0) |
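On top of the algorithmic changes in the answer, a common constant-factor win for judge problems like this is reading the whole input in one pass instead of calling raw_input()/input() once per line. A minimal sketch, written for Python 3 and assuming the same input format as the original script (n, then n lines of "x y"):

import sys

def read_points():
    # Read everything at once and split into tokens; far cheaper than
    # per-line input calls when n is large.
    data = sys.stdin.read().split()
    n = int(data[0])
    nums = list(map(int, data[1:]))
    # Build (index, x, y) triples, like the `coords` list in the answer.
    return [(i, nums[2 * i], nums[2 * i + 1]) for i in range(n)]

if __name__ == "__main__":
    print(read_points()[:3])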
_unix.108724 | Kernel = 2.6.23.1-42genisoimage 1.1.6 (Linux)Wodim 1.1.10 Create iso image:genisoimage -V Data_Layer_1 -v -J -r -o cdl_data_1.iso cdl_1/Test integrity of iso image:mount -t iso9660 -o loop cdl_data_1.iso /mnt/iso_test/cksum each of 6 files in iso image against original file, byte counts and CRCs matchumount /mnt/iso_test/Burn iso to CD-R:Close X11 desktop, go to single user console mode as root userinsert blank diskmount -l, check that blank disk isn't mounted wodim -v -dao speed=2 dev=/dev/cdrw cdl_data_1.isoeject diskLook for burn errors:no errors or warnings in wodim outputdmesg | tailcdrom: This disk doesn't have any tracks I recognize! tail /var/log/messageslocalhost kernel: cdrom: This disk doesn't have any tracks I recognize! (The timestamp matches the time the blank disk was inserted)Test individual burned files:startxinsert burned CDexecute cksum on individual files on mounted CD, byte counts do not match, CRC values match example of post-burn cksum comparison [kfw@localhost ~]$ cksum /media/Data_Layer_1/CDL_2012_004.zip 1556659744 97975264 /media/Data_Layer_1/CDL_2012_004.zip [kfw@localhost ~]$ cksum CDL_2012_004.zip 752249099 97975264 CDL_2012_004.zipexample of cmp execution on individual files [kfw@localhost ~]$ cmp /media/Data_Layer_1/CDL_2012_004.zip CDL_2012_004.zip /media/Data_Layer_1/CDL_2012_004.zip CDL_2012_004.zip differ: byte 705623, line 1199copy individual files from burned CD to HDD, test integrityunzip CDL_2012_004.zip... error: invalid compressed data to inflate bad CRC 27b7a348 (should be eb348979)All data CD burns of different types of binary files suffer this problem; I have burned many dozens of audio disks with no problems at all.Any ideas? | Burned binary files don't match original files | data cd;burning | null |
_softwareengineering.301117 | JsonLogic is a data format (built on top of JSON) for storing and sharing rules between front-end and back-end code. It's essential that the same rule returns the same result whether executed by the JavaScript client or the PHP client.Currently the JavaScript client has tests in QUnit, and the PHP client has tests in PHPunit. The vast majority of tests are given these inputs (rule and data), assert the output equals the expected result. As the test set grows (and certainly as we add parsers in other languages) how can we maintain just one standard set of test data and expected results that each get executed in each language's testing framework? | Sharing Unit Tests between several language implementations of one spec? | unit testing;phpunit | A simple approach would be to just write one JSON file like:[ [ rule, data, expected ], [ rule, data, expected ]]And then in each language, download the file, parse it, and test it, row by row. My first inclination was to use a CSV (I'm back now to edit that answer), but as soon as the test data includes objects and arrays, suddenly you have a CSV with JSON in the cells, and it becomes eye-stabbingly frustrating to maintain. |
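To make the shared-file idea concrete, here is a hedged sketch of how the Python side could consume such a file; tests.json is a hypothetical filename and apply_rule is a stand-in for whatever entry point the real JsonLogic client exposes:

import json
import unittest

def apply_rule(rule, data):
    # Placeholder: wire this to the actual JsonLogic implementation under test.
    raise NotImplementedError

class SharedSpecTests(unittest.TestCase):
    def test_shared_cases(self):
        with open("tests.json") as fh:
            cases = json.load(fh)          # [[rule, data, expected], ...]
        for rule, data, expected in cases:
            # subTest keeps going after a failure and reports which row broke.
            with self.subTest(rule=rule, data=data):
                self.assertEqual(apply_rule(rule, data), expected)

if __name__ == "__main__":
    unittest.main()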
_unix.61090 | I came across the following blurb in some RHEL 6 training documentation: The number of drives that can be installed on modern computers has increased. With port multipliers, it's relatively easy to configure 16 Serial Advanced Technology Attachment (SATA) drives on a system (assuming you can fit all of those drives). Does this mean that RHEL 6 won't allow more than 16 SATA drives from a software perspective? Or just that practical hardware constraints usually don't allow for more than 16 but it's technically possible? | Does RHEL 6 enforce software constraints on the number of SATA drvices that can exist on a system? | rhel;hard disk;sata | RHEL's limitations are core- and RAM-based, not drive count-based; the wording is hinting at few chassis being able to mount more than 10 or so drives. Linux itself is limited to 128 SCSI drive devices (sda through sddx). |
_unix.76052 | To run my Matlab scripts, I've created a shell script to which I give two parameters - the path to the matlab file ($1) and to the log file ($2):nohup time matlab -some_parameters -r run $1;exit &>> $2 &When I need to kill one of the Matlab processes, it's sometimes difficult to tell which one is which. Would it be possible to somehow include the pid of the Matlab process in the log file (i.e. in $2)? | Print process ID (PID) of a Matlab instance | shell;shell script;process;kill;matlab | In the end, it seems that the Matlab command is subsequently spanning other processes (JVM) when called. However, there is an undocumented function feature that returns the PID of the running Matlab process:nohup time matlabR2012b -nodesktop -nosplash -nodisplay \ -r fprintf('PID: %s\n', num2str(feature('getpid')));run $1; exit &> $2 & |
_datascience.15115 | Currently, we are working on a school project which is trying to predict the number of crimes in some area/neighbourhood. There are 8 different categories for crimes and we've tried to find the correlation among those categories and now we only have 4 left. Instead of building a model for each category, we want to predict these 4 categories simultaneously by some multi-output algorithm.Our sample size is around 27,000 for 6 years (from 2011 to 2016, 4000+ for each year). We are going to use (maybe) cross-validation to build/test our model.Would you please list 2-3 algorithms which already have fully or partially implemented library in Python (preferred) or R you would recommend to use with our dataset scale?I only found scikit-learn with this algorithm. But it's for classification rather then prediction numbers.This is a intro-level ML course project, the group is not very experienced in the field and the time is limited so we don't want to implement an algorithm from scratch. | What are recommended methods for multi-task prediction? | machine learning;python;neural network;predictive modeling | null |
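A minimal scikit-learn sketch of multi-output regression; the feature matrix and the 4-column target below are random placeholders for the real data. RandomForestRegressor accepts a 2-D target natively, and sklearn.multioutput.MultiOutputRegressor can wrap estimators that do not:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder data: ~27,000 rows of features and 4 crime-count targets.
rng = np.random.default_rng(0)
X = rng.random((27000, 10))
Y = rng.poisson(2.0, size=(27000, 4))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# One model predicts all 4 categories at once.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, Y_train)
pred = model.predict(X_test)

print("MAE per category:", mean_absolute_error(Y_test, pred, multioutput="raw_values"))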
_cs.24572 | I need to generate binomial random numbers:A binomial random number is the number of heads in $N$ tosses of a coin with probability $p$ of a heads on any single toss. If you generate $N$ uniform random numbers on the interval $(0,1)$ and count the number less than $p$, then the count is a binomial random number with parameters $N$ and $p$.In my case, my $N$ could range from $10^3$ to $10^{10}$ and my $p$ is around $10^{-7}$. Often my $Np$ is around $10^{-3}$.There is a trivial implementation to generate such binomial random number through loops:getBinomial(int N, double p): x = 0 repeat N times: if getUniformRandom() < p: # getUniformRandom() returns a real number in (0,1) x = x+1 return xThis naïve implementation is very slow, $O(N)$. I tried the Acceptance Rejection/Inversion method [1] implemented in the Colt (http://acs.lbl.gov/software/colt/) lib. It is very fast, but the distribution of its generated number only agrees with the naïve implementation when $Np$ is not very small. In my case when $Np = 10^{-3}$, the naïve implementation can still generate the number 1 after many runs, but the Acceptance Rejection/Inversion method can never generate the number 1 (always returns 0).Does anyone know what is the problem here? Or can you suggest a better binomial random number generating algorithm that can solve my case?[1] V. Kachitvichyanukul, B.W. Schmeiser (1988): Binomial random variate generation, Communications of the ACM 31, 216-222. | A binomial random number generating algorithm that works when $Np$ is very small | algorithms;pseudo random generators | null |
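Two practical options, sketched under the assumption that numpy is available: numpy's generator has a robust binomial sampler, and the classic waiting-time (geometric skip) method does O(Np + 1) expected work instead of O(N), which suits Np around 10^-3. This is a generic textbook method, not the Colt implementation:

import math
import random
import numpy as np

# Option 1: numpy's sampler copes well with tiny N*p.
rng = np.random.default_rng()
sample = rng.binomial(n=10**9, p=1e-7)

# Option 2: waiting-time method; skip ahead by geometrically distributed
# gaps between successes, so N = 10^10 is no problem in pure Python.
def binomial_waiting_time(n, p):
    if p <= 0.0:
        return 0
    successes = 0
    i = 0
    log_q = math.log1p(-p)               # log(1 - p), accurate for small p
    while True:
        u = 1.0 - random.random()        # uniform in (0, 1], avoids log(0)
        i += int(math.log(u) / log_q) + 1  # trials until the next success
        if i > n:
            return successes
        successes += 1

print(sample, binomial_waiting_time(10**10, 1e-7))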
_unix.363265 | I Have a request to alert usage of disk every 30 minutes, The thing is recent output should check old alert to avoid to send same alert again and again.#!/bin/bash#export [email protected] [email protected];#df -PH | grep -vE '^Filesystem|none|cdrom'|awk '{ print $5 $6 }' | while read output;df -PH | grep -vE '^Filesystem|none|cdrom|swdepot'|awk '{ print $5 $6 }' > diskcheck.log;#diskcheck is current output whereas disk_alert is previous runned outputif [ -s $HOME/DBA/monitor/log/disk_alert.log ]; then#Getting variables and compare with old usep=$(awk '{ if($1 > 60) print $0 }' $HOME/DBA/monitor/diskcheck.log | cut -d'%' -f1) usep1=$(awk '{ if($1 > 60) print $0 }' $HOME/DBA/monitor/log/disk_alert.log | cut -d'%' -f1) partition=$(cat $HOME/DBA/monitor/diskcheck.log | awk '{ print $2 }' )else cat $HOME/DBA/monitor/diskcheck.log > $HOME/DBA/monitor/log/disk_alert.logfi**echo $usep;echo $usep1;**if [ $usep -ge 60 ]; then if [ $usep -eq $usep1 ]; then mail=$(awk '{ if($usep == $usep1) print $0 }' $HOME/DBA/monitor/diskcheck.log) echo Running out of space \$mail ($usep%)\ on $(hostname) as on $(date) | mail -s Disk Space Alert: Mount $mail is $usep% Used $maillist; fifiOutput (ERROR):66 65 85 6666 65 85 66disk_alert.sh: line 19: [: 66658566: integer expression expectedI think the problem is in variables($usep and $usep1) it stores the values in single line which means (66 65 85 66), But it should be 66658566Then only:if [ $usep -ge 60 ]; then this condition will pass.Dear Guru's please help me with workaround. | Compare two files for greater than value | shell script;text processing;awk;disk usage;numeric data | null |
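The error comes from $usep holding several values at once (one per filesystem), so the numeric test cannot work; the comparison has to be made per mount point against the previously recorded state. A hedged Python sketch of that idea; the threshold, paths and the plain print standing in for the mail command are placeholders:

import json
import os
import subprocess

THRESHOLD = 60
STATE_FILE = os.path.expanduser("~/DBA/monitor/log/disk_alert.json")

def current_usage():
    # Parse `df -PH`: skip the header, map mount point -> use% as an int.
    out = subprocess.run(["df", "-PH"], capture_output=True, text=True, check=True).stdout
    usage = {}
    for line in out.splitlines()[1:]:
        fields = line.split()
        if len(fields) >= 6:
            usage[fields[5]] = int(fields[4].rstrip("%"))
    return usage

def main():
    previous = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as fh:
            previous = json.load(fh)

    usage = current_usage()
    for mount, pct in usage.items():
        # Alert only when over threshold and not already reported at this level.
        if pct >= THRESHOLD and previous.get(mount) != pct:
            print(f"ALERT: {mount} is {pct}% used")   # replace with the mail call

    with open(STATE_FILE, "w") as fh:
        json.dump(usage, fh)

if __name__ == "__main__":
    main()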
_softwareengineering.166539 | I may not be able to give the right title to the question. But here it is,We are developing financial portal for wealth management. We are expecting over 10000 clients to use the application. The portal calculates various performance analytics based on the the technical analysis of the stock market.We developed lot of the functionality through Stored procedures, user defined functions, triggers etc. through Database. We thought we can gain huge performance boost doing stuff directly in database than through C# code. And we actually did get a huge performance boost.When I tried to brag about the achievement to our CTO, he counter questioned my decision of having functionality implemented in database rather than code. According to him such applications suffer scalability problems. In his words These days things are kept in memory/cache. Clustered data is hard to manage over time. Facebook, Google have nothing in database. It is the era of thin servers and thick clients. DB is used only to store plain data and functionality should be completely decoupled from the database.Can you guys please give me some suggestions as to whether what he says is right. How to go about architect such an application? | Is having functionality in DB a road block to scalability? | architecture;database;application design | In short, I would agree with your CTO. You've probably gained some performance at the expense of scalability (if those terms are confusing, I'll clarify below). My two biggest worries would be maintainability and lack of options to scale horizontally (assuming you are going to need that).Proximity to data: Let's take a step back. There are some good reasons for pushing code into a DB. I would argue that the biggest one would be proximity to the data - for example, if you are expecting a calculation to return a handful of values, but these are aggregations of millions of records, sending the millions of records (on-demand) over the network to be aggregated elsewhere is hugely wasteful, and could kill easily your system. Having said this, you could achieve this proximity of data in other ways, essentially using caches or analysis DBs where some of the aggregation is done upfront.Performance of code in the DB: Secondary performance effects, such as caching of execution plans are more difficult to argue. Sometimes, cached execution plans can be a very negative thing, if the wrong execution plan was cached. Depending on your RDBMS, you may get the most out of these, but you won't get much over parametrised SQL, in most cases (those plans typically get cached, too). I would also argue that most compiled or JIT'ed languages typically perform better than their SQL equivalents (such as T-SQL or PL/SQL) for basic operations and non-relational programming (string manipulation, loops, etc), so you wouldn't be losing anything there, if you used something like Java or C# to do the number crunching. Fine-grained optimisation is also pretty difficult - on the DB, you're often stuck with a generic B-tree (index) as your only data structure. To be fair, a full analysis, including things like having longer-running transactions, lock escalation, etc, could fill books.Maintainability: SQL is a wonderful language for what it was designed to do. I'm not sure it's a great fit for application logic. 
Most of the tooling and practices that make our lives bearable (TDD, refactoring, etc) are difficult to apply to database programming.Performance versus scalability: To clarify these terms, I mean this: performance is how quick you'd expect a single request to go through your system (and back to the user), for the moment assuming low load. This will often be limited by things like the number of physical layers it goes through, how well optimised those layers are, etc. Scalability is how performance changes with increasing number of users / load. You may have medium / low performance (say, 5 seconds+ for a request), but awesome scalability (able to support millions of users). In your case, you will probably experience good performance, but your scalability will be bounded by how big a server your can physically build. At some point, you will hit that limit, and be forced to turn to things like sharding, which may not be feasible depending on the nature of the application.Premature Optimisation: Ultimately, I think you've made the mistake of optimising prematurely. As others have pointed out, you don't really have measurements showing how the other approaches would work. Well, we can't always build full-scale prototypes to prove or disprove a theory... But in general, I'd always be hesitant to chose an approach which trades maintainability (probably the most important quality of an application) for performance. EDIT: On a positive note, vertical scaling can stretch quite far in some cases. As far as I know, SO ran on a single server for quite some time. I'm not sure how it matches up to your 10 000 users (I guess it would depend on the nature of what they are doing in your system), but it gives you an idea of what can be done (actually, there are far more impressive examples, this just happens to be a popular one people can easily understand).EDIT 2: To clarify and comment on a few things raised elsewhere:Re: Atomic consistency - ACID consistency may well be a requirement of the system. The above doesn't really argue against that, and you should realise that ACID consistency doesn't require you to run all your business logic inside the DB. By moving code which does not need to be there into the DB, you're constraining it to run in the physical environment of the rest of the DB - it's competing for the same hardware resources as the actual data management portion of your DB. As for scaling only the code out to other DB servers (but not the actual data) - sure, this may be possible, but what exactly are you gaining here, apart from additional licensing costs in most cases? Keep things that don't need to be on the DB, off the DB.Re: SQL / C# performance - since this seems to be a topic of interest, let's add a bit to the discussion. You can certainly run native / Java / C# code inside DBs, but as far as I know, that's not what was being discussed here - we're comparing implementing typical application code in something like T-SQL versus something like C#. There a number of problems which have been difficult to solve with relational code in the past - e.g. consider the maximum concurrent logins problem, where you have records indicating a login or logout, and the time, and you need to work out what the maximum number of users logged in at any one time was. The simplest possible solution is to iterate through the records and keep incrementing / decrementing a counter as you encounter logins / logouts, and keeping track of the maximum of this value. 
It turns out that unless your DB supports a certain sliding window aggregation (which SQL 2008 didn't, 2012 may, I don't know), the best you can do is a CURSOR (the purely relational solutions are all on different orders of complexity, and attempting to solve it using a while loop results in worse performance). In this case, yes, the C# solution is actually faster than what you can achieve in T-SQL, period. That may seem far-fetched, but this problem can easily manifest itself in financial systems, if you are working with rows representing relative changes, and need to calculate windowed aggregations on those. Stored proc invocations also tend to be more expensive - invoke a trivial SP a million times and see how that compares to calling a C# function. I hinted at a few other examples above - I haven't yet encountered anyone implement a proper hash table in T-SQL (one which actually gives some benefits), while it is pretty easy to do in C#. Again, there are things that DBs are awesome at, and things that they're not so awesome at. Just like I wouldn't want to be doing JOINs, SUMs and GROUP BYs in C#, I don't want to be writing anything particularly CPU intensive in T-SQL. |
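To make the concurrent-logins example above concrete, here is a small sketch of that sweep in ordinary application code; the event list stands in for rows fetched from the database:

# Each event is (timestamp, +1 for login, -1 for logout).
events = [(1, +1), (2, +1), (3, -1), (4, +1), (5, +1), (6, -1)]

def max_concurrent(events):
    current = peak = 0
    # Sort by time; on ties, process logouts before logins so a logout/login
    # at the same instant is not counted as an overlap.
    for _, delta in sorted(events, key=lambda e: (e[0], e[1])):
        current += delta
        peak = max(peak, current)
    return peak

print(max_concurrent(events))  # -> 3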
_unix.177175 | I have an audio CD (burnt a few years ago) that I want to rip (with K3B or other) to flac. K3B was unable to complete and I realized the CD was damaged. I managed to recover the data with safecopy and the --stage-1-3 arguments. From the output (see below) it seems that the data was properly recovered.However, I expected to be able to mount the file and take it from there. Unfortunately it doesn't seem to be the case:$ sudo mount -o loop -t iso9660 diskimage /media/cdrom1/mount: block device /mnt/data/Bureau/diskimage is write-protected, mounting read-onlymount: wrong fs type, bad option, bad superblock on /dev/loop1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or sodmesg doesn't show much useful output:$ dmesg | tailISOFS: Unable to identify CD-ROM format.Indeed it seems to be in an unrecognized format:$ file diskimage diskimage: dataUnsurprisingly, renaming the file to .iso, .raw, .img or .bin made no difference.Some people on the Internet recommend using ccd2iso but it fails as well (Unrecognized sector mode (0) at sector 0!).How can I proceed to extract the audio from this raw data dump?Here is the output from safecopy. The stage3.badblocks is empty.$ safecopy /dev/sr0 diskimage --stage1Low level device calls enabled mode: 2Reported hw blocksize: 4096CDROM audio - low level access: drive reset, raw readCDROM low level disk size: 784954128CDROM low level block size: 2352Reported low level blocksize: 2352File size: 784954128Blocksize: 2352Fault skip blocksize: 78493296Resolution: 78493296Min read attempts: 1Head moves on read error: 0Badblocks output: stage1.badblocksMarker string: BaDbLoCkStarting block: 0Source: /dev/sr0Destination: diskimage......................................... [40961] ......................................... [82945] ......................................... [124929] ......................................... [166913] ......................................... [208897] ......................................... [250881] ................................[284577](+669325104){X [317950] }[317950](+78493296)................[333739](+37135728){X}[367112](+78493296)Done!Recovered bad blocks: 0Unrecoverable bad blocks (bytes): 2 (156986592)Blocks (bytes) copied: 333739 (784954128)xavier@marvin:~/Bureau$ safecopy /dev/sr0 diskimage --stage2Low level device calls enabled mode: 2Reported hw blocksize: 4096CDROM audio - low level access: drive reset, raw readCDROM low level disk size: 784954128CDROM low level block size: 2352Reported low level blocksize: 2352File size: 784954128Blocksize: 2352Fault skip blocksize: 301056Resolution: 2352Min read attempts: 1Head moves on read error: 0Incremental mode file: stage1.badblocksIncremental mode blocksize: 2352Badblocks output: stage2.badblocksStarting block: 0Source: /dev/sr0Destination: diskimageCurrent destination size: 863447424........................[309047](+726878544){X [309175] <<<<<<<}[309048](+2352).....[313338](+10090080){X<<<<<<<}[313339](+2352)..... 
8-( 95%Done!Recovered bad blocks: 0Unrecoverable bad blocks (bytes): 2 (4704)Blocks (bytes) copied: 317950 (747818400)$ safecopy /dev/sr0 diskimage --stage3Low level device calls enabled mode: 2Reported hw blocksize: 4096CDROM audio - low level access: drive reset, raw readCDROM low level disk size: 784954128CDROM low level block size: 2352Reported low level blocksize: 2352File size: 784954128Blocksize: 2352Fault skip blocksize: 2352Resolution: 2352Min read attempts: 4Head moves on read error: 1Incremental mode file: stage2.badblocksIncremental mode blocksize: 2352Badblocks output: stage3.badblocksStarting block: 0Source: /dev/sr0Destination: diskimageCurrent destination size: 863447424. 8-( 93%Done!Recovered bad blocks: 0Unrecoverable bad blocks (bytes): 0 (0)Blocks (bytes) copied: 313339 (736973328) | Recover audio CD after safecopy | audio cd;safecopy | null |
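For what it is worth, the mount fails because an audio CD carries no ISO9660 filesystem: the recovered image is raw CD-DA, i.e. 2352-byte sectors of 16-bit stereo PCM at 44.1 kHz, so the way forward is to wrap or split the PCM rather than mount it. A hedged Python sketch that wraps the whole image as a single WAV (filenames are placeholders; track splitting still needs the disc's TOC, and some drives return the opposite byte order, in which case the samples must be swapped first):

import wave

# Raw CD-DA: 2352-byte sectors = 588 stereo frames of 16-bit PCM at 44.1 kHz.
with open("diskimage", "rb") as src:
    pcm = src.read()

with wave.open("diskimage.wav", "wb") as wav:
    wav.setnchannels(2)      # stereo
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(44100)  # CD audio rate
    # If playback is pure noise, byte-swap first: audioop.byteswap(pcm, 2)
    wav.writeframes(pcm)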
_softwareengineering.89552 | I am very curious to hear input from others on a problem I've been contemplating for some time now.Essentially I would like to present a user with a text document and allow him/her to make selections of text and annotate it. Specific to the annotations i aim to achieve the following:Allow users to make a text selection, annotate it, then save the selection and annotation for reference later(UI) Support representing overlapped annotations. For example if the string where: This is the test sentence for my example test sentence, user1 might have an annotation on is the test sentence for my example and user2 might have an annotation on for my example.Account for a situations where the document's text changes. The annotations would to be updated, if possible.How would you tackle this from a technical perspective? Some ideas I've had are:Use javascript ranges and store an annotation as a pair of integers something like: (document_start_char, document_end_char). Save this pair in the db.Alternatively, using JS get the text selected and actually save the full text in the db. (not sure how i would then do overlapping annotations)Represent overlapped annotations by applying a css style to highlight the text then darken the stack of annotations where they overlap. Smallest annotation would always have to be on the top of the stack.What are your thoughts or areas of improvement? How the heck could I support a document's text being updated without breaking all the annotations? | Javascript, Text Annotations and Ideas | web development;design;javascript;user interface;document | null |
_softwareengineering.236391 | Most BST examples show a sample of a BST with unique values; mainlyto demonstrate the order of values. e.g. values in the left subtree are smaller than the root, and values in the right subtree are larger.Is this because BSTs are normally just used to represents SETs ?If I insert an element say 4 which already exists in the BST, what should happen ?e.g. In my case, 4 is associated with a payload. Does it mean I override the existing node's payload. | What happens to equal elements when inserting into a binary search tree? | data structures;trees | The classic examples of the BST demonstrate a set where there is one entry for a given value in the structure. An example of this is the TreeSet in Java (yes, thats a red-black tree behind the scenes - but its a tree and a set).However, there's nothing saying that there can't be additional values stored at the location indicated by the value. Once you decide to do this, it becomes an associative array (sometimes called a map). Again, going to Java for the example, there is the TreeMap.An example of this could be:TreeMap<Integer, String> ageToName = new TreeMap<Integer, String>();ageToName.put(4,Alice);ageToName.put(25,Bob);ageToName.put(16,Charlie);The structure of this would look like: 16 -> Charlie / \ / \ 4 -> Alice 25 -> BobA balanced binary tree (the red-black part of that structure) with a left child and a right child. You access it by looking for the value 4, or 16, or 25 and get back the associated data stored in that node.One aspect of the tree is that you can't have two different values with the same index. There is no way with this design to insert David at age 16 also. However, one could put another data structure such as a list instead of the String at the node and allow you to store multiple items in that list.But the (binary) tree, by itself, is a set that requires the indexes into it to be comparable (orderable) and that it will contain only distinct values (no duplicates).Realize that everyone is free to implement their own trees and sets and how they deal with the addition of another item with the same key. With a TreeSet, if you add an already existing value, the add function returns false and leaves the set unchanged. With a TreeMap, if you call put with a key that already exists, it replaces the old value and returns the replaced value (null if nothing was there).What should happen is whatever you need to have happen for that implementation. There's no inscribed tablet that all the computer scientists signed that dictates how any abstract data structure should behave. They behave as they should and when they need to behave other ways, document it and do it that way. |
_codereview.69289 | This is my implementation of recursive merge sort, using sentinels: #include<stdio.h> #include<stdlib.h> #include<limits.h>void merge(int a[], int p, int q, int r){ int n1,n2; int i,j,k; int *l,*m; n1 = q - p + 1; n2 = r - q; l = (int*)malloc(sizeof(int)*(n1+1)); m = (int*)malloc(sizeof(int)*(n2+1)); for(i=0; i<n1; i++) l[i] = a[p+i]; for(j=0; j<n2; j++) m[j] = a[q+j+1]; l[i] = INT_MAX; m[j] = INT_MAX; i = j = 0; for(k=p; k<r+1; k++){ if(l[i] <= m[j]){ a[k] = l[i]; i++; } else{ a[k] = m[j]; j++; } }}void merge_sort(int a[],int p, int r){ int q; if(p<r){ q = (p+r)/2; merge_sort(a,p,q); merge_sort(a,q+1,r); merge(a,p,q,r); }}int main(){ int *num,n; int i; printf(Enter number of digits:); scanf(%d,&n); num = (int*)malloc(sizeof(int)*n); printf(Enter numbers:); for(i=0 ; i<n; i++){ scanf(%d,&num[i]); } merge_sort(num,0,n-1); printf(Sorted array:\n); for(i=0; i<n; i++) printf(%d ,num[i]); free(num); return 0;}I'm looking for reviews, suggestions, and improvements. | Merge sort using sentinels | c;recursion;sorting;mergesort | In merge(), you call malloc() twice with no corresponding calls to free(). That can't be good for your memory consumption!When computing the average of two values, don't do q = (p+r)/2, because that is vulnerable to overflow. Instead, write it as q = p + (r - p) / 2. |
_codereview.549 | How does this class to resize an image look?using System;using System.Collections.Generic;using System.Web;using System.Drawing;using System.IO;/* * Resizes an image **/public static class ImageResizer{ // Saves the image to specific location, save location includes filename private static void saveImageToLocation(Image theImage, string saveLocation) { // Strip the file from the end of the dir string saveFolder = Path.GetDirectoryName(saveLocation); if (!Directory.Exists(saveFolder)) { Directory.CreateDirectory(saveFolder); } // Save to disk theImage.Save(saveLocation); } // Resizes the image and saves it to disk. Save as property is full path including file extension public static void resizeImageAndSave(Image ImageToResize, int newWidth, int maxHeight, bool onlyResizeIfWider, string thumbnailSaveAs) { Image thumbnail = resizeImage(ImageToResize, newWidth, maxHeight, onlyResizeIfWider); thumbnail.Save(thumbnailSaveAs); } // Overload if filepath is passed in public static void resizeImageAndSave(string imageLocation, int newWidth, int maxHeight, bool onlyResizeIfWider, string thumbnailSaveAs) { Image loadedImage = Image.FromFile(imageLocation); Image thumbnail = resizeImage(loadedImage, newWidth, maxHeight, onlyResizeIfWider); saveImageToLocation(thumbnail, thumbnailSaveAs); } // Returns the thumbnail image when an image object is passed in public static Image resizeImage(Image ImageToResize, int newWidth, int maxHeight, bool onlyResizeIfWider) { // Prevent using images internal thumbnail ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); // Set new width if in bounds if (onlyResizeIfWider) { if (ImageToResize.Width <= newWidth) { newWidth = ImageToResize.Width; } } // Calculate new height int newHeight = ImageToResize.Height * newWidth / ImageToResize.Width; if (newHeight > maxHeight) { // Resize with height instead newWidth = ImageToResize.Width * maxHeight / ImageToResize.Height; newHeight = maxHeight; } // Create the new image Image resizedImage = ImageToResize.GetThumbnailImage(newWidth, newHeight, null, IntPtr.Zero); // Clear handle to original file so that we can overwrite it if necessary ImageToResize.Dispose(); return resizedImage; } // Overload if file path is passed in instead public static Image resizeImage(string imageLocation, int newWidth, int maxHeight, bool onlyResizeIfWider) { Image loadedImage = Image.FromFile(imageLocation); return resizeImage(loadedImage, newWidth, maxHeight, onlyResizeIfWider); }} | Image resizing class | c#;asp.net;image | PascalCase the method names and method params if you are feeling overly ambitious. // Set new width if in bounds if (onlyResizeIfWider) { if (ImageToResize.Width <= newWidth) { newWidth = ImageToResize.Width; } }FindBugs barks in Java for the above behavior... refactor into a single if since you are not doing anything within the first if anyways... // Set new width if in bounds if (onlyResizeIfWider && ImageToResize.Width <= newWidth) { newWidth = ImageToResize.Width; }Comments here could be a bit more descriptive; while you state what the end result is I am still lost as to why that would resolve the issue. // Prevent using images internal thumbnail ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); Maybe something similar to what is stated on this blog... 
// Prevent using images internal thumbnail since we scale above 200px; flipping // the image twice we get a new image identical to the original one but without the // embedded thumbnail ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); |
_unix.249624 | I have a CentOS 7 headless system with no serial ports. I sometimes want to access the server using a serial cable, so I plug in a USB serial cable (to my laptop's serial port) but I can't get a console/BASH from the connection.Is there something I have to do to tell the kernel to always create a serial console on appearance of a USB serial port? | Create serial console on plug in USB serial device | kernel;console;serial port | EDIT: This won't work if you have a recent udev version, because udev prevents you from starting long-lived background processes in RUN scripts. You may or may not be able to get around this by prefixing the getty command with setsid but in any case it's discouraged if not outright disallowed. If you have a system which uses systemd then there is another way to achieve this, which I hope someone will supply with another answer. In the meantime, I'm leaving this answer here in case it works for you.You cannot use a USB serial port as a console because USB is initialized too late in the boot sequence, long after the console needs to be working.You can run getty on a USB serial port to allow you to log in and get a shell session on that port, but it will not be the system's console.To get getty to start automatically, try this udev rule (untested):ACTION==add, SUBSYSTEM==tty, ENV{ID_BUS}=usb, RUN+=/usr/local/sbin/usbrungettyPut that in a rules file in /etc/udev/rules.d and then create this executable script /usr/local/sbin/usbrungetty:#!/bin/sh/sbin/getty -L $DEVNAME 115200 vt102 & |
_cs.50597 | I got confused with the analysis of algorithms in the average case. Here is my understanding of the average case, using sorting as the example: suppose we have an array of 5 elements to be sorted with insertion sort. The running time depends on the particular arrangement of the elements in the array. In general, when an algorithm's running time depends on the particular ordering of the input, i.e. on which instance of problem size n it receives, the different cases (best, average and worst) arise. In this example there are 5! = 120 possible instances of problem size 5. For one instance, where the elements are already sorted, the algorithm takes the least time; that is the best case. For another instance, where the elements are in reverse order, it takes the longest time; that is the worst case. That still leaves 118 other instances. For the average-case time complexity, we should take the average of the running times over all possible input instances (the remaining 118 plus the other 2), that is, the average of the 120 running times for the 120 different instances. Why does a probability distribution play a role in computing average-case time complexity? Why don't we just take a simple average of the running times over all possible input instances of the same problem size? | Average Case Complexity Revisited | algorithm analysis;runtime analysis;average case | null
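One way to see where the distribution enters: the simple average over all 120 orderings is exactly the expected running time under the uniform distribution on inputs, and a non-uniform distribution just replaces the equal 1/120 weights. A small sketch that enumerates the 120 inputs and averages insertion sort's comparison count:

from itertools import permutations

def insertion_sort_comparisons(arr):
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

perms = list(permutations(range(5)))                 # all 120 inputs of size 5
costs = [insertion_sort_comparisons(p) for p in perms]
print("best:", min(costs), "worst:", max(costs))
print("uniform average:", sum(costs) / len(costs))   # equal weight 1/120 per input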
_softwareengineering.313307 | Several of our customers have come to us with an interesting problem that involves adjusting data that occurred in the past, which is rolled up and reported based on an org hierarchy. For example: if you run a report by Sales Team in March, it will roll up sales closed by members currently assigned to that team. Then some team members switch teams at the start of April. A few days after that, some data for March is adjusted for business-specific reasons (returns, audits, etc.). If someone reruns the same team-based report, they will now get drastically different results, beyond the data adjustments themselves, because team members have moved around. We currently have our own solution to this issue, which involves a significant amount of denormalization and updating effective time stamps whenever someone moves around in the hierarchy. I'd love to learn how others have solved this, but I've struggled to even find examples of other CRMs or reporting platforms that attempt to support it. What software have you seen that supports this? Is this so uncommon that it's not worth the time to solve for most businesses? Or is this issue a symptom of a deeper problem? | Handling retroactively adjusted data that is reported hierarchically | database;reporting;hierarchy | null
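The usual pattern for this is effective-dated membership (a slowly changing dimension): store who was on which team over which interval and resolve every sale against the interval containing its transaction date, so a rerun of March after adjustments still credits the March-era team. A minimal sketch with made-up data:

from datetime import date
from collections import defaultdict

# Effective-dated team membership: (rep, team, valid_from, valid_to).
memberships = [
    ("alice", "team-east", date(2023, 1, 1), date(2023, 3, 31)),
    ("alice", "team-west", date(2023, 4, 1), date(9999, 12, 31)),
    ("bob",   "team-east", date(2023, 1, 1), date(9999, 12, 31)),
]

# Sales (possibly adjusted later): (rep, sale_date, amount).
sales = [
    ("alice", date(2023, 3, 15), 100.0),
    ("alice", date(2023, 3, 20), -20.0),   # March adjustment entered in April
    ("bob",   date(2023, 3, 22), 50.0),
]

def team_on(rep, when):
    for r, team, start, end in memberships:
        if r == rep and start <= when <= end:
            return team
    return None

totals = defaultdict(float)
for rep, when, amount in sales:
    # Credit the team the rep belonged to on the sale date, not today's team.
    totals[team_on(rep, when)] += amount

print(dict(totals))   # March always rolls up to team-east, even after the move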
_cogsci.16175 | We know from many studies (see e.g. Taylor, 2009 for an empirical and experiential overview) that processing of information is massively heterogeneous with respect to hemisphere. How does this lateral effect show up in processing of basic somatosensory information? Do we fundamentally experience our left and right sides of our body differently? Some evidence points to emotional lateralization in the brain. If the answer to the last question above is yes, how are the two related?Taylor, Jill Bolte. My stroke of insight. Hachette UK, 2009. | Difference in hemispheres when processing of somatosensory information | neurobiology;emotion;sensation;lateralization | null |
_webmaster.65446 | I had to do a massive URL change stuff (Categories and Products) on my e-commerce site while keeping 301 redirect of old URLs to new ones. I did change category URLs (appr: 800 URLs) to new and improved ones and went live with them; but for an automated (scripted) 301 redirect; I had to get done with new improved Products URLs as well. And to avoid any 404 issues with old category URLs; I didn't want Google to crawl my site until I was done and gone live with Products' new URLs; I put a robots.txt block on my entire site! thinking that I'll have enough time converting all products to new URLs and keeping Google away of the site. It was intended to be temporary block. When I put robots.txt back to Allow all URLs; things started appearing all wrong! All the site's ranking has gone down! I am confused; what to do now! The robots.txt is now allowing all URLs; 301 redirect for old URLs is in place and working. Entire site has NEW URLs. I have submitted new sitemapsAnd I want to gain the ranking all back to normal. I have submitted new sitemaps and Google's not friendly with them. 99.9% of URLs in sitemap are still saying URLs blocked by robots.txt. What did I do wrong? And how's it gonna solve the best way? | Robots.txt destroyed my ranking? | robots.txt | Never use robots during maintenanceYou've created the problem by blocking the site using the robots.txt, you should only ever use robots to inform search engines what to index and what not to index. Because you blocked all URLS it's likely it dropped some from its index meaning you lost your rankings on those pages. Normally this process takes a couple of weeks to kick in. Generally pages that are older will rank better than newer ones due to several reasons such as links, age and so on, using a 301 redirect generally passes 99% of the rankings across but only if those URLS are still indexed.Correct methods when doing site maintenanceUsing the status 503 Service Unavailable (General down message with redirect)Using the status 307 Temporary Redirect (Redirect to a maintenance page)Using the status 401 Unauthorized (Temporary Password Authentication)Recovering from the problemYou should start by finding out if all OLD urls have been dropped, if they have then sadlyly there isn't anything anyone can tell you other than wait and see if your rankings will return, if Google dropped the old urls then those new urls won't rank as good as they would of if you hadn't blocked them using the robots, robots is like using 'NOINDEX' which your informing Google to not index those URLS which is a big mistake on your part. But anyway, Google may return your results depending on how long your URLS were dropped for and if you had any backlinks, if you have lots of backlinks going to the OLD urls then as Google figures out you got new urls it should pass those links to the new URLS and in the meantime you juts need to be patient.Finding out if Google has dropped my URLSYou should do site: www.example.com and find out if the old URLS have been dropped, you may find some new ones and if this is the case then its possible they dropped before hand, 301's normally take a week or more to index the new URLS over OLD. |
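To make the 503 option in the answer concrete: during maintenance the crawler should receive a 503 with a Retry-After header instead of a robots.txt block. A minimal sketch, assuming a Python/Flask stack purely for illustration (the actual site may run on something else entirely):

from flask import Flask, Response

app = Flask(__name__)
MAINTENANCE = True

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    if MAINTENANCE:
        # 503 + Retry-After tells crawlers the outage is temporary,
        # so indexed URLs are kept rather than dropped.
        return Response("Down for maintenance", status=503,
                        headers={"Retry-After": "3600"})
    return "Normal content"

if __name__ == "__main__":
    app.run()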
_scicomp.17574 | I'm with the background of computer engineering and generally use FEM for graphics simulation. As far as I know, FEM formulation is usually expressed with respect to the reference configuration, i.e., the volume integration is over the initial domain and the stress-based force $f$ is computed using the first Piola-Kirchhoff stress $P$ integrated over the initial element $\Omega_0$ as:$$f = \int_{\Omega_0}P\nabla_X Nd\Omega$$Is there any good reason why initial configuration is preferred over current configuration? Theoretically the force can be expressed with the cauchy stress $\sigma$ integrated over current volume.The integration is usually approximated with Gauss quadrature. One reason I assume is that accuracy and robustness of the numerically approximated volume integration maybe at stake if the shape of current domain is close to degenerated (with nearly not invertible jacobian). Integrating over initial volume is better as we assume the initial mesh quality is good. | Why does FEM usually formulate the problems in reference configuration? | finite element | null |
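A short addition to make the trade-off concrete (standard notation assumed: $F = \partial x/\partial X$, $J = \det F$; this is the usual Piola transform, not anything specific to a particular code): with $P = J\sigma F^{-T}$, $\nabla_X N = F^{T}\nabla_x N$ and $d\Omega = J^{-1}\,d\omega$, the two forms are identical, $$f = \int_{\Omega_0} P\,\nabla_X N\, d\Omega = \int_{\Omega} \big(J\sigma F^{-T}\big)\big(F^{T}\nabla_x N\big)\,\frac{d\omega}{J} = \int_{\Omega}\sigma\,\nabla_x N\, d\omega,$$ so the choice is practical rather than fundamental: on the reference domain the shape-function gradients and quadrature data are computed once on the initial, well-shaped mesh (exactly the robustness point raised in the question), and hyperelastic material laws are most naturally written in terms of $F$ on the reference configuration.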
_softwareengineering.291930 | We have an important legal document that our app generates in WordML, with foreign characters represented via Unicode. These foreign characters vary widely, and include languages with special characters like Korean and Cyrillic. We have all of the unicode hex values for WordML, but our print room has informed us that they can't accept .doc files at all - only PDFs. So we're now converting the entire file into an XSL-FO document. The problem is - XSL-FO doesn't use the same Unicode hex values, and in fact when we try to produce the XSL-FO document, the hex values come out as # symbols, indicating that no proper value was found. Not all the unicode characters failed to be produced - in particular, special characters for French and Spanish seemed to display just fine. But none of the Cyrillic or Korean characters were successfully displayed. Is there a library of Hex code characters for XSL-FO, or some type of simple conversion we could do to make these hex codes match the XSL-FO Unicode values? | How can I resolve Unicode Hex Value Mismatches between WordML and XSL:FO? | unicode | null |
_unix.368729 | I would like to set a password for setting up samba share directory using a shell script. I wrote the following script test.sh:#!/bin/bashpass=123456(echo $pass; echo $pass) | smbpasswd -s -a $(whoami)This prints the following:When run by root: smbpasswd [options] [username]otherwise: smbpasswd [options]options: -L local mode (must be first option) -h print this usage message -s use stdin for password prompt -c smb.conf file Use the given path to the smb.conf file -D LEVEL debug level -r MACHINE remote machine -U USER remote usernameextra options when run by root or in local mode: -a add user -d disable user -e enable user -i interdomain trust account -m machine trust account -n set no password -W use stdin ldap admin password -w PASSWORD ldap admin password -x delete user -R ORDER name resolve orderAs it points out, I was not running it as root, when I run it as root, i.e., sudo ./test.sh, it runs fine. But the catch is, it adds root instead of noobuser, which is my logged in user. How can I add noobuser by doing something similar (I have a feeling I'm missing something here)? | Shell script to set password for samba user | shell script;ubuntu;scripting;samba | null |
_codereview.172959 | I created this batch script for incremental and scheduled backup using xcopy command in batch.The first execution of this script is to configure the paths of the source and the destination. It saves them in a .cfg file and then makes a full copy for the first time.It creates a scheduled task to run every hour with an incremental copy (ie : only copies the new files or has been modified from the source).Please suggest any improvements for this batch script.@echo off:: Incremental_Backup.bat Created by Hackoo on 12/08/2017:: It is a total copy first and then incrementally,:: ie, it just copies the new files and changed files.:: Create a Schedule Task for Copying files HourlyMode con cols=95 lines=5 & color 0ETitle %~nx0 for Incremental Backup with XCopy Command by Hackoo 2017set Settings=%~dpn0%_Settings.cfgSet FirstFull_CopyLog=%~dpn0_FirstFull_CopyLog.txtSet LogFile=%~dpn0_Incremental_CopyLog.txtSet TaskName=Backup_TaskRem The repeated task is in minutes (60 min = 1 hour)Set Repeat_Task=60If not exist %Settings% ( Call :BrowseForFolder Please choose the source folder for the backup SourceFolder Setlocal EnableDelayedExpansion If defined SourceFolder ( echo( echo You chose !SourceFolder! as source folder ) else ( echo( Color 0C & echo The source folder is not defined ... Exiting ...... Timeout /T 2 /nobreak>nul & exit ) Call :BrowseForFolder Please choose the target folder for the backup TargetFolder If defined TargetFolder ( echo( echo You chose !TargetFolder! as Target folder ) else ( echo( Color 0C & echo The Target folder is not defined ... Exiting ...... Timeout /T 2 /nobreak>nul & exit )Timeout /T 3 /nobreak>nul ( echo !SourceFolder! echo !TargetFolder!\Backups_%ComputerName%\ )> %Settings%cls & echo( & echo(echo Please wait a while ... The Backup to !TargetFolder!\Backups_%ComputerName%\ is in progress... Call :Backup_XCopy !SourceFolder! !TargetFolder!\Backups_%ComputerName%\ !FirstFull_CopyLog!Timeout /T 1 /nobreak>nul Call :Create_Schedule_Task_Copy %Repeat_Task% %TaskName%Start !FirstFull_CopyLog! & exit) else (Setlocal EnableDelayedExpansionfor /f delims= %%a in ('Type %Settings%') do ( set /a idx+=1 set Param[!idx!]=%%a)Set SourceFolder=!Param[1]!Set TargetFolder=!Param[2]!Cls & echo( & echo(echo Please wait a while ... The Backup to !TargetFolder! is in progress... Call :Backup_XCopy !SourceFolder! !TargetFolder! !LogFile!Rem Just to query the Backup_Task and log it ( echo( echo %Date% @ %Time% echo( echo ======================================== ====================== =============== @for /f skip=2 delims= %%a in ('Schtasks /Query /TN %TaskName%') do ( @echo %%a ) echo ======================================== ====================== =============== )>> !LogFile!)Timeout /T 1 /nobreak>nul Exit::****************************************************************************:BrowseForFolderset psCommand=(new-object -COM 'Shell.Application')^.BrowseForFolder(0,'%1',0,0).self.pathfor /f usebackq delims= %%I in (`powershell %psCommand%`) do set %2=%%Iexit /b::****************************************************************************:Backup_XCopy <Source> <Target> <LogFile>Xcopy /I /D /Y /S /E /J /C /F %1 %2 > %3 2>&1Exit /b::****************************************************************************:Create_Schedule_Task_Copy <Repeat_Task_Every(N)Minute> <TaskName>( Schtasks /create /SC minute /MO %1 /TN %2 /TR %~f0 Schtasks /Query /TN %2 )>> !FirstFull_CopyLog! 
2>&1exit /b::**************************************************************************** | Incremental and scheduled backup using xcopy command | console;windows;batch | null |
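The batch script above relies on xcopy /D to copy only files that are newer at the source than at the destination. As a rough, language-neutral illustration of that incremental idea (a sketch with placeholder paths, not a drop-in replacement for the batch script):

import shutil
from pathlib import Path

def incremental_copy(src: Path, dst: Path) -> None:
    # Copy a file only when the destination copy is missing or older
    # than the source (modification-time comparison, like xcopy /D).
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps

if __name__ == "__main__":
    # Placeholder paths -- substitute the configured source and target folders.
    incremental_copy(Path(r"C:\Data"), Path(r"D:\Backups\HOSTNAME"))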
_webmaster.13438 | I've upgraded one primary page of a site to use the async version of the GA tracking code. Since the upgrade, the number of visits has increased by 60%, avg. time on site has decreased by 40%, and the bounce rate of that page is now always zero. Pageviews are intact. I suspect this has to do with using both the traditional and async snippets on the same profile. Other than that, it's a pretty standard setup. Right before </body>*, I have this: var _gaq = [['_setAccount', 'UA-XXXXXX-XX'], ['_trackPageview'], ['_setDomainName', 'domain.com'], ['_setAllowLinker', true], ['_setAllowHash', false], ['_setAllowAnchor', true]]; (function (d, t) { var g = d.createElement(t), s = d.getElementsByTagName(t)[0]; g.async = 1; g.src = ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js'; s.parentNode.insertBefore(g, s) } (document, 'script')); Any other ideas, confirmations and suggestions? *I know it should go before </head>; I will fix that in a new version. | Upgraded one page of a site to the async Google Analytics - Data is now messed up | google analytics | I notice that you're calling _trackPageview before _setDomainName. That could cause some problems with the cookies. Try putting _trackPageview at the end of the command list and see if it solves the problem.
_codereview.39336 | I'm just getting into testing and wanted to see if someone can tell me if I'm writing tests correctly. I'm using C#, NUnit, & Should.This is the class I'm testing: using System; using System.Collections.Generic; using System.Diagnostics;using System.Linq;using System.Reflection.Emit;Using VM.Models.Common;namespace Vm.Models.CustomerNS{ public class Customer {// ReSharper disable once InconsistentNaming public int ID { get; set; } public List<Address> Locations { get; set; } public List<User> Users { get; set; } public List<BuyerNum> BuyerNumbers { get; set; } public Dictionary<string, string> VehicleList { get; set; } public List<Contact> Contacts { get; set; } public string Name { get; set; } public Customer() { Locations = new List<Address>(); Users = new List<User>(); BuyerNumbers = new List<BuyerNum>(); Contacts = new List<Contact>(); VehicleList = new Dictionary<string,string>(); } public void AddLocation(AddressType addtype, string addressName, string streetAddress, string city, string state, string zip, string country) { var addressId = 0; var lastOrDefault = Locations.LastOrDefault(); if (lastOrDefault != null) addressId = lastOrDefault.AddressId + 1; Address newAddress = new Address { AddressId = addressId, AddressName = addressName, AddressType = addtype, City = city, State = state }; if (streetAddress != null) newAddress.StreetAddress = streetAddress; if (zip != null) newAddress.Zip = zip; Locations.Add(newAddress); } public void RemoveLocation(int addressId) { Debug.Assert(Locations != null, Locations != null); var addressid = Locations.Where(x => x.AddressId == addressId); var enumerable = addressid as IList<Address> ?? addressid.ToList(); if (enumerable.Count() != 1) { throw new InvalidOperationException(Cannot Delete); } Locations.Remove(enumerable.First()); } public void ModifyLocation(int addressId, AddressType addtype, string addressName, string streetAddress, string city, string state, string zip, string country) { var addressid = Locations.Where(x => x.AddressId == addressId); var enumerable = addressid as Address[] ?? addressid.ToArray(); if (enumerable.Count() != 1) { throw new InvalidOperationException(Cannot Delete); } Locations.Remove(enumerable.First()); } public Address GetLocation(int addressId) { return Locations.First(x => x.AddressId == addressId); } public List<Address> GetAllLocations() { return Locations; } }}Testing Class using System.Linq;using Vm.Models.Common;using VM.Models.CustomerNS;using NUnit.Framework;using Should;// ReSharper disable once CheckNamespacenamespace vm.Models.CustomerNS.Tests{ [TestFixture()] public class CustomerTests { [Test()] public void CustomerTest() { var customerObject = new Customer() { Name = Hello World!! }; customerObject.ShouldNotBeNull(); Assert.AreEqual(customerObject.Name, Hello World!!); } [Test()] public void AddLocationTest() { var customerObject = new Customer() { Name = Hello World!! }; customerObject.AddLocation(AddressType.Business, Hello Location, 123 Street Name, Hello City, HS, 10001, US); customerObject.Locations.First().ShouldNotBeNull(); Assert.AreEqual(customerObject.Locations.First().City, Hello City); } [Test()] public void RemoveLocationTest() { var customerObject = new Customer() { Name = Hello World!! 
}; customerObject = GenerateMultipleLocations(5, customerObject); var inintialCount = customerObject.Locations.Count; var custObjCopy = customerObject.Locations.First(); var thirdlocation = customerObject.Locations[3]; customerObject.RemoveLocation(1); customerObject.Locations.Count.ShouldEqual(inintialCount - 1); customerObject.Locations.First().ShouldEqual(custObjCopy); customerObject.Locations[3].ShouldNotEqual(thirdlocation); } private Customer GenerateMultipleLocations(int p, Customer customer) { int i = 0; while (i < p) { customer.AddLocation(AddressType.Business, EfficientlyLazy.IdentityGenerator.Generator.GenerateName().First, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().AddressLine, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().City, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().StateAbbreviation, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().ZipCode, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().City); i++; } return customer; } [Test()] public void ModifyLocationTest() { var customerObject = new Customer() { Name = Hello World!! }; customerObject = GenerateMultipleLocations(5, customerObject); var inintialCount = customerObject.Locations.Count; var custObjCopy = customerObject.Locations.First(); var thirdlocation = customerObject.Locations[3]; customerObject.Locations[3].AddressId.ShouldEqual(3); customerObject.ModifyLocation(3, AddressType.Business, EfficientlyLazy.IdentityGenerator.Generator.GenerateName().First, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().AddressLine, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().City, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().StateAbbreviation, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().ZipCode, EfficientlyLazy.IdentityGenerator.Generator.GenerateAddress().City); thirdlocation.ShouldNotEqual(customerObject.Locations[3]); } [Test()] public void GetLocationTest() { Assert.Fail(); } [Test()] public void GetAllLocationsTest() { Assert.Fail(); } }}PS: I realize I haven't tested the bottom 2 methods. | Testing classes | c#;unit testing;nunit | I'm a little more concerned about the implementation class (as opposed to the testing class). It exposes List<T> and Dictionary<T,U> properties publicly, assigns them in the constructor but also allows them to be set via the property. Something smells, but I can't speak to it not knowing the full design. However, if you can, I'd recommend developing to interfaces, IList<T> and IDictionary<T,U> instead and not allowing setters on the properties. 
Here's what I'd code up:namespace Vm.Models.CustomerNS{ using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; using Vm.Models.Common; public class Customer { private readonly IList<Address> locations = new List<Address>(); private readonly IList<User> users = new List<User>(); private readonly IList<BuyerNum> buyerNumbers = new List<BuyerNum>(); private readonly IList<Contact> contacts = new List<Contact>(); private readonly IDictionary<string, string> vehicleList = new Dictionary<string, string>(); public IList<BuyerNum> BuyerNumbers { get { return this.buyerNumbers; } } public IList<Contact> Contacts { get { return this.contacts; } } // ReSharper disable once InconsistentNaming public int ID { get; set; } public IList<Address> Locations { get { return this.locations; } } public string Name { get; set; } public IList<User> Users { get { return this.users; } } public IDictionary<string, string> VehicleList { get { return this.vehicleList; } } public void AddLocation( AddressType addtype, string addressName, string streetAddress, string city, string state, string zip, string country) { var lastOrDefault = this.Locations.LastOrDefault(); var newAddress = new Address { AddressId = lastOrDefault == null ? 0 : lastOrDefault.AddressId + 1, AddressName = addressName, AddressType = addtype, City = city, State = state }; if (streetAddress != null) { newAddress.StreetAddress = streetAddress; } if (zip != null) { newAddress.Zip = zip; } this.Locations.Add(newAddress); } public IList<Address> GetAllLocations() { return this.Locations; } public Address GetLocation(int addressId) { return this.Locations.First(x => x.AddressId == addressId); } public void ModifyLocation( int addressId, AddressType addtype, string addressName, string streetAddress, string city, string state, string zip, string country) { var addressid = this.Locations.Where(x => x.AddressId == addressId).ToList(); if (addressid.Count() != 1) { throw new InvalidOperationException(Cannot Delete); } this.Locations.Remove(addressid.First()); } public void RemoveLocation(int addressId) { Debug.Assert(this.Locations != null, Locations != null); var addressid = this.Locations.Where(x => x.AddressId == addressId).ToList(); if (addressid.Count() != 1) { throw new InvalidOperationException(Cannot Delete); } this.Locations.Remove(addressid.First()); } }}Onto the unit test class. There are a number of stylistic issues I'd probably change, but I think only two real little testing bits that were off:In method ModifyLocationTest, you get the location count, but it would seem prudent to Assert it:GenerateMultipleLocations(5, customerObject);var initialCount = customerObject.Locations.Count;Assert.AreEqual(initialCount, 5);And method GenerateMultipleLocations can be made static.Back to the issue of the initial classes - if you can create interfaces for them, it would make dependency injection and mocking easier for unit testing. I can expound on that if you'd like, but I suggest reading up on those first. |
_unix.246362 | I want to do this without an internet connection, since I travel to places where I have no internet. On an Arch-based system, what is the command to download packages, and how do I then install them from the terminal? How can I write a script to automate the installation? I also cannot change my backlight in Manjaro: the backlight files are not found when running legacy BIOS, yet on the laptop the brightness does change with the Fn key, even though I don't see the backlight files in Manjaro either. I want to do this without any extra utility; I know the files must be located somewhere else, since the Fn key works. | How do I download packages manually and install them then? | arch linux;package management;manjaro;deb | null
_unix.297270 | I have an SQL script which, when run, prompts me to input a value. If I don't input any value and just press the Enter/Return key, it selects the default value and pulls all the records. I am trying to incorporate this into a shell script where the script should not prompt me for any value and should simply accept the Return key by default. Is there a way I can do this? Example of what I'm trying to do: in_code is the variable in execute.sql which accepts a value. I want this variable to receive the Enter/Return key by default. The code I use below doesn't work.
#!/bin/bash
sqlplus -s xxx/xxx<<!
define in_code='echo -e \n'
@execute.sql
exit
! | Input return key in an SQL script | shell script;sql | null
_softwareengineering.37677 | In my short experience here at programmers.stackexchange, when someone needs help getting started with programming, quite a lot of users suggest learning this or that language, but few suggest learning the basics of programming (data structures, control-flow structures, algorithms, paradigms, etc.). So I'm coming to the conclusion that the programming community in general puts more value on languages than on core programming skills, and here is my question: Is knowing multiple programming languages and understanding every single implementation detail more important than knowing how to abstract, transform and create code that solves a certain problem? | Programming skills, problem solving genius or language guru? | programming languages;skills | You see, people usually experience feelings, and sometimes those feelings are a barrier to the most important thing: teamwork. There are those who have excellent problem-solving skills, and those who manage to remember all the tiny details of every language. Over the years I've met people having one and lacking the other, and vice versa. I once worked with someone with superior problem-solving skills. He'd take part in programming contests and achieve excellent results. He was a star programmer. But working with him on a team as a partner on a daily basis was more than just complicated. His teamwork skills amounted to the rest of the team cheering him on while he did all the work. Then I moved jobs and met the Architect. He knew all the design patterns by heart, creating tons of layers of abstraction just because it makes sense to keep things separated, leading to an over-engineered solution twice the size of a simpler one. And again, instead of communicating his solution to the rest, he'd open Eclipse and write all the code by himself, just because it was easier. Finally I met Q. He wasn't as smart as the first one, nor did he know all the design patterns like the Architect. But he'd code like a machine, creating elegant and simple solutions. His most remarkable skill was explaining things, a skill the other two completely lacked.
_cs.32651 | Let $\phi$ and $\chi$ be two propositional formulas such that $\phi\implies\chi$ is valid (a tautology). How can I prove that there is a formula $\psi$ with $var(\psi)\subseteq var(\phi)\cap var(\chi)$ such that $\phi\implies\psi$ and $\psi\implies\chi$ are both valid? | Prove the existence of a propositional formula such that the following conditions are fulfilled | logic;propositional logic | The formula $\psi$ is called an interpolant, and the method of finding such an interpolant is called Craig interpolation. In the Wikipedia article you can find more information about it, including a proof that interpolants always exist in propositional logic.
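A tiny concrete instance (an illustrative example, not part of the original post) shows what such a $\psi$ looks like: take $\phi = p \land q$ and $\chi = q \lor r$. Then $\phi \implies \chi$ is valid, $var(\phi) \cap var(\chi) = \{q\}$, and the interpolant $\psi = q$ satisfies both $\phi \implies \psi$ and $\psi \implies \chi$.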
_unix.11302 | Is there a way to write a bash script with the following functionalities?Be launched when I press some key or key combination. (not so important requirement)Identify the 7 most visited directories in last 3 hours.Offer me the list of this 7 directories so I can cycle through them with tab and shift-tab (backward). Then press enter would mean cd to the selected directory.thank you | Shell script printing the most visited directories | bash;shell script;scripting;directory | Add the following to your ~/.bashrc file:#### cd history mechanism ##############################################export CDHISTFILE=~/.cdhistoryif [ -e $CDHISTFILE ]then cdht=`mktemp` tail -500 $CDHISTFILE > $cdht mv $cdht $CDHISTFILEfifunction keep_cd_history() { if [ -z $1 ] ; then d=$HOME ; else d=$1 ; fi cdhcan=`readlink -f $d` if 'cd' $d then echo -e `date +%s`\t$cdhcan >> $CDHISTFILE fi}function pick_cwd_from_history() { f=~/.cdhistgo cdhistpick $f if [ -r $f ] ; then cd `head -1 $f` ; fi}alias cd=keep_cd_historyalias cdh=pick_cwd_from_history########################################################################The first section truncates the cd history mechanism's custom history file if it's gotten bigger than 500 lines since the last time we looked at it. We can't use Bash's built-in history because it doesn't include timestamps, which you need in order to get the in the last 3 hours behavior.The two Bash functions do things we cannot do in the Perl code below, which otherwise does all the heavy lifting. The only tricky bit here is the readlink call, which canonicalizes the paths you use. We have to do that so that cd $HOME ; cd ; cd ~ ; cd ../$USER results in 4 instances of the same path in the cd history, not four different entries.The aliases are just convenience wrappers for the functions.Now the really tricky bit:#!/usr/bin/perl -wuse strict;use List::Util qw(min);#### Configurables ###################################################### Number of seconds back in time to look for candidate directoriesmy $history_seconds_threshold = 3 * 60 * 60;# Ignore directories we have gone to less than this many timesmy $access_count_threshold = 1;# Number of directory options to give in pick listmy $max_choices = 7;#### DO NOT OPEN. NO USER-SERVICEABLE PARTS INSIDE. ##################### Get file name our caller wants the cd choice to be sent todie usage: $0 <choice_file>\n unless $#ARGV == 0;my $cdhg_file = $ARGV[0];unlink $cdhg_file; # don't care if it fails# Build summary stats from history file to find recent most-accessedmy $oldest_interesting = time - $history_seconds_threshold;my %stats;open my $cdh, '<', $ENV{HOME}/.cdhistory or die No cd history yet!\n;while (<$cdh>) { chomp; my ($secs, $dir) = split /\t/; next unless $secs and $secs >= $oldest_interesting; ++$stats{$dir};}# Assemble directory pick listmy @counts = sort values %stats;$access_count_threshold = $counts[$max_choices - 1] - 1 if @counts > $max_choices;my @dirs = grep { $stats{$_} > $access_count_threshold } keys %stats;$max_choices = min($max_choices, scalar @dirs);# Show pick list, and save response to the file pick_cwd_from_history()# expects. Why a file? The shell must call chdir(2), not us, because# if we do, we change only our CWD. Can't use stdio; already in use.my $choice;if ($max_choices > 1) { for (my $i = 0; $i < $max_choices; ++$i) { print $i + 1, '. ', $dirs[$i], \n; } print \nYour choice, O splendid one? 
[1-$max_choices]: ; $choice = <STDIN>; chomp $choice; exit 0 unless $choice =~ /^[0-9]+$/ && $choice <= $max_choices;}elsif ($max_choices == 1) { print Would you like to go to $dirs[0]? [y/n]: ; $choice = 1 if uc(<STDIN>) =~ /^Y/;}else { die Not enough cd history to give choices!\n;}if ($choice) { open my $cdhg, '>', $cdhg_file or die Can't write to $cdhg_file: $!\n; print $cdhg $dirs[$choice - 1], \n;}Save this to a file called cdhistpick, make it executable, and put it somewhere in your PATH. You won't execute it directly. Use the cdh alias for that, as it passes in a necessary argument via pick_cwd_from_history().How does it work? Ummmm, exercise for the reader? :)To get your first requirement, the hotkey, you can use any macro recording program you like for your OS of choice. Just have it type cdh and press Enter for you. Or, you can run cdh yourself, since it's easy to type.If you want a simpler but less functional alternative that will work everywhere, get into the habit of using Bash's reverse incremental search feature, Ctrl-R. Press that, then type cd (without the quotes, but with the trailing space) to be taken back to the previous cd command. Then each time you hit Ctrl-R, it takes you back to the cd command prior to that. In this way, you can walk backwards through all the cd commands you've given, within the limits of Bash's history feature.Say:$ echo $HISTFILESIZEto see how many command lines Bash history will store for you. You might need to increase this to hold 3 hours worth of command history.To search forward through your command history after having stepped backwards through it, press Ctrl-S.If that doesn't work on your system, it is likely due to a conflict with software flow control. You can fix it with this command:$ stty stop undefThat prevents Ctrl-S from being interpreted as the XOFF character.The consequence of this is that you can then no longer press Ctrl-S to temporarily pause terminal output. Personally, the last time I used that on purpose was back in the days of slow modems. These days with fast scrolling and big scroll-back buffers, I only use this feature by accident, then have to spend a second remembering to press Ctrl-Q to get the terminal un-stuck. :) |
_unix.320836 | I am trying to build a smart host on a Unix system, with the Server app, the Mail service and an Open Directory for local users. My question is whether there is a way to use one email account just for forwarding mail from several different mail users. I have tested this with a single user email account, both for forwarding and in the Mail app, and it works well, but when I tried to add a second account I received a message saying that there was already an account for that domain (something like that; I don't remember exactly). In Microsoft Exchange I have seen a configuration where there was an account used just for forwarding mail: on the ISP's mail server it was a normal email account, but locally in Exchange I could access it like a regular local email account. Is this possible to do, or are there workarounds for it with these apps, or in the background by configuring Postfix directly? Thanks! | Postfix relay to one server from multiple users | osx;email;postfix;forwarding | null
_cs.48475 | You are designing an elevator controller for a building with 25 floors. The controller has two inputs: UP and DOWN. It produces an output indicating the floor that the elevator is on. There is no floor 13. What is the minimum number of bits of state in the controller? Answer: The system needs at least five bits of state to represent the 24 floors that the elevator might be on. Can someone explain how this comes about? | Difficult Question to Understand (Computer Architecture) | computer architecture;computer algebra | Well, as binary typically goes, one bit represents two possible values: 0 and 1. With two bits there are four possible values: 0, 1, 2, 3 in hex/decimal/octal, or 00, 01, 10, 11 in binary. Expand to three bits and you have eight values: 0 to 7, or 000, 001, 010, 011, 100, 101, 110 and 111. Each extra bit doubles the number of representable states, so four bits cover only 2^4 = 16 states while five bits cover 2^5 = 32; since the elevator has 24 possible floors (25 minus the missing floor 13), five bits is the minimum. For reference (with a nice animation for counting in any base): https://www.mathsisfun.com/binary-number-system.html
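A short sketch of the same count in code (illustrative only; the 24 comes from 25 floors minus the missing floor 13):

floors = 25 - 1                       # the building skips floor 13, so 24 distinct states
min_bits = (floors - 1).bit_length()  # smallest n with 2**n >= floors
print(min_bits)                       # prints 5, since 2**4 = 16 < 24 <= 32 = 2**5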