id (stringlengths 5-27) | question (stringlengths 19-69.9k) | title (stringlengths 1-150) | tags (stringlengths 1-118) | accepted_answer (stringlengths 4-29.9k, nullable) |
---|---|---|---|---|
_codereview.141187 | I've got a basic python function that I've been tasked with finishing in a class. It's already fulfilling all the requirements for an A as it's an introductory course to Python, but I'd like to get some advice on it still as it's taking quite some time to execute. I've got a few years of experience with general programming, but I'm still quite new to Python. As such any sort of advice on what might be taking up time would be appreciated.As it stands at the moment, it takes about .5 seconds to run through the code and print all the data.A bit of detail to the program:It reads three files:A part of Alice in the Wonderland's first chapter (alice-ch1.txt)A list of common words (common-words.txt)And a list of correctly spelt words.Finally it prints the top 7 words that have passed through the filters.def analyse(): #import listdir from os import listdir #import counter from collections import Counter files = *************************************\n for file in listdir(): if file.endswith('.txt'): files = files + file + \n choice = input(These are the files: \n + files + *************************************\nWhat file would you like to analyse?\n) if choice.strip() == : choice = alice-ch1.txt elif choice.endswith(.txt): print(choice) else: choice = choice + .txt print(choice) with open(choice) as readFile, open('common-words.txt') as common, open('words.txt') as correct: correct_words = correct.readlines() common_words = common.readlines() common_words = list(map(lambda s: s.strip(), common_words)) correct_words = list(map(lambda s: s.strip(), correct_words)) words = [word for line in readFile for word in line.split()] for word in list(words): if word in common_words or not word in correct_words: words.remove(word) print(There are + str(len(words))) c = Counter(words) #for word, count in c.most_common(): # print (word, count) nNumbers = list(c.most_common(7)) out = print(*************************************\nThese are the 7 most common:) for word, count in nNumbers: out = out + word + , + str(count) + \n print(out + \n*************************************) input(\nPress enter to continue...) | Given some text and a word list, print the 7 most common correctly spelled words | python;performance;beginner;strings;file | List ComprehensionI'm not at all sure it'll run much faster (though it might--it's a little faster under Python 2.7, anyway), but I think a more Pythonesque approach would be to replace your loop: for word in list(words): if word in common_words or not word in correct_words: words.remove(word)... with a list comprehension, something like:words = [word for word in words if not word in common_words and word in correct_words]AlgorithmTo gain substantial speed, you probably want to rearrange your operations. Right now you're looking at each word in the input separately (and looking at all of them). Then, after you've found all the words that aren't common and are spelled correctly, you choose the 7 most common.I'd reverse that: start by creating a Counter of all the input words. Then print those filtering words that are common or aren't spelled correctly. When you've printed seven of them, stop: words = [ word for line in readFile for word in line.split() ] c = Counter(words) counter = 0 for word, count in c.most_common(): if not word in common_words and word in correct_words: print word + , + str(count) counter = counter + 1 if counter == 7: break;You could simplify that inner loop a little by by doing a little preprocessing. 
Instead of testing against both the common and correctly spelled lists, you could start by removing all the common words from the correctly spelled list, to get a single list of the words that are acceptable. Then when you're printing out your results, you'd check only against that one list. Given the sizes of the lists, this would be a win primarily if you did it once and saved the result so you can re-use it. If you re-did the preprocessing every time you ran the program, you'd probably use more time on the preprocessing than you'd save on the output loop.There's probably more than can be done to make this neater as well, but nothing occurs to me immediately. At least for me in a quick test, this seems to run around ten to fifteen (or so) times as fast as the code in the question. The exact difference in speed will probably depend (heavily) on the size of input file though. In particular, I believe this is changing from \$O(N^2)\$ complexity to an expected complexity around \$O(N)\$1.As an aside, I did consider (and test with) using a set instead of a list for common_words and correct_words, but at least in my testing, with the updated algorithm this didn't seem to make a difference that I could replicate dependably. With the original algorithm, however, changing these from list to set can improve performance considerably.LogicAs it stands right now, your if/then chain:if choice.strip() == : choice = alice-ch1.txtelif choice.endswith(.txt): print(choice)else: choice = choice + .txtprint(choice)... prints out choice twice if it starts out ending with .txt. I suspect you really want something closer to:if choice.strip() == : choice = alice-ch1.txtelif not choice.endswith(.txt) choice = choice + .txtprint(choice)Magic numberIt would probably be better to use something on the order of:mostCommonLimit = 7# ...if counter == mostCommonLimit break;If you want to get technical, it probably is still \$O(N^2)\$. The Counter presumably uses a hash table, which is \$O(1)\$ expected complexity, but can be \$O(N)\$ in the worst case (where all keys produce equivalent hashes). This is, however, so rare that in practice it's often ignored. |
_webapps.28694 | I accidentally created two Facebook accounts. I want to migrate to one account only, but continuously get friend requests on the account I don't want to use. If I could configure an auto-response for all friend requests, telling the requestor to please redirect their request to the other account, I could safely begin ignoring the old one. | Can I setup an auto-response for all friend requests? | facebook | null |
_codereview.5191 | As I was trying to demystify the Android AsyncTask functionalities, I wrote this sample app to test it. Please review my code and suggest possible improvements:public class AsyncTaskExampleActivity extends Activity implements OnClickListener{ private Boolean success = true; private static AsyncTaskExampleActivity MainActivityInstance; private CallBack c; ProgressDialog progressDialog; Button startAsyncTask; MyAsyncTask aTask; Button cancelAsyncTask; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); startAsyncTask = (Button)findViewById(R.id.button1); cancelAsyncTask = (Button)findViewById(R.id.button2); startAsyncTask.setOnClickListener(this); cancelAsyncTask.setOnClickListener(this); MainActivityInstance = this; //ProgressDialog progressDialog; progressDialog = new ProgressDialog(this.getApplicationContext()); progressDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL); progressDialog.setMessage(On Progress...); progressDialog.setCancelable(false); c = new CallBack() { public void onProgress(){ //progressDialog.show(); Toast toast = Toast.makeText(getMainActivity().getApplicationContext(), Progress!!, 1000); toast.show(); } public void onResult(Boolean result){ if(result.equals(true)){ Toast toast = Toast.makeText(getMainActivity().getApplicationContext(), Bingo...Success!!, 1000); toast.show(); } else { Toast toast = Toast.makeText(getMainActivity().getApplicationContext(), Alas!! Failure, 1000); toast.show(); } } public void onCancel(Boolean result){ Toast toast = Toast.makeText(getMainActivity().getApplicationContext(), Cancelled, 1000); toast.show(); } }; aTask = new MyAsyncTask(c); } static AsyncTaskExampleActivity getMainActivity(){ return MainActivityInstance; } public Boolean getSuccessOrFailureResult(){ return success; } public void onClick(View v){ if(v.equals(startAsyncTask)){ aTask.execute(Start); } if(v.equals(cancelAsyncTask)){ aTask.cancel(true); } }}public class MyAsyncTask extends AsyncTask<String, Integer, Boolean> { private CallBack cb; Boolean running = true; MyAsyncTask(CallBack cb){ this.cb = cb; } protected Boolean doInBackground(String... params){ while(running){ if(isCancelled()){ break; } try{ for (int i = 0; i<5; i++){ if(isCancelled()){ break; } Thread.sleep(10000,0); publishProgress(); } } catch(InterruptedException e){ return false; } return true; } return false; } protected void onProgressUpdate(Integer... progress){ cb.onProgress(); } protected void onPostExecute(Boolean result){ cb.onResult(result); } protected void onCancelled(){ running = false; cb.onCancel(true); }}public interface CallBack { public void onProgress(); public void onResult(Boolean result); public void onCancel(Boolean result);} | AsynTask example | java;android;asynchronous | null |
_unix.82919 | What do the Linux interface names mean? eth0, eth1, wlan0. My current assumption is that when we are connected to the Internet via a LAN cable it's eth0 or eth1, and when we are connected to the Internet via WiFi it's wlan0. | What does the eth0 interface name mean in Linux? | linux;networking | Your assumption is correct. The names, however, can be set/chosen by the user or the operating system that you are using. eth0 and eth1 are used because they are more intuitive than an arbitrary name: a LAN cable connection, as you said, is Ethernet (hence the eth in eth0 and eth1). Similarly, when you connect to WiFi, it is wireless LAN (hence the wlan in wlan0). |
_softwareengineering.324082 | I am programmer with 1 year experience, recently I realized I seldom start a project correctly (most of my side project), normally the project cycle goes likeStart with a few use-casesStart codingRealize a few things I did not handle well, and does not fit well in current codebase.Rewrite most part of codeand this might go a few timesSo my questions areIs such practice common, or it implies I am not competent?How can I improve myself on this aspect? | How can I get things right at the beginning of a software project? | programming practices;development process | The cycle you describe is normal. The way to improve things is not to avoid this cycle, but to streamline it. The first step is to accept that:It's near impossible to know everything on day one of a project.Even if you do somehow know everything, by the time you've finished the project then something (the client's requirements, the market they're in, the tech you're working with, their customers' wishes) will have changed and made at least part of what you knew invalid or incorrect.Therefore, it's impossible to plan everything up front, and even if you could, following that plan would lead you to build something imperfect or obsolete. Knowing this, we integrate change into our planning. Let's look at your steps:Start with a few use-casesStart codingRealize a few things I did not handle well, and does not fit well in current codebase.Rewrite most part of codeThat's actually a great starting point. Here's how I'd approach it:1. Start with a few use-casesGood. By saying use cases, you're focusing on what the software is for. By saying a few, you're not trying to discover everything; you're sticking to a manageable amount of work. All I'd add here is to prioritise them. With your client or end user, work out the answer to this question:What is the smallest, simplest piece of software I could give you that would improve your situation?This is your minimum viable product - anything smaller than this isn't helpful to your user, but anything bigger risks planning too much too soon. Get enough information to build this, then move on. Be mindful that you won't know everything at this point.2. Start coding.Great. You get working as soon as possible. Until you've written code, your clients have received zero benefit. The more time you spend planning, the longer the client has spent waiting with no payback.Here, I'd add a reminder to write good code. Remember and follow the SOLID Principles, write decent unit tests around anything fragile or complex, make notes on anything you're likely to forget or that might cause problems later. You want to be structuring your code so that change won't cause problems. To do this, every time you make a decision to build something this way instead of that way, you structure your code so that as little code as possible is affected by that decision. In general, a good way to do this is to separate your code:use simple, discrete components (depending on your language and situation, this component might be a function, a class, an assembly, a module, a service, etc. 
You might also have a large component that is built out of smaller ones, like a class with lots of functions, or an assembly with lots of classes.)each component does one job, or jobs relating to one thingchanges to the way one component does its internal workings should not cause other components to have to changecomponents should be given things they use or depend on, rather than fetching or creating themcomponents should give information to other components and ask them to do work, rather than fetching information and doing the work themselvescomponents should not access, use, or depend upon the inner workings of other components - only use their publicly-accessible functionsBy doing this, you're isolating the effects of a change so that in most cases, you can fix a problem in one place, and the rest of your code doesn't notice.3. Encounter issues or shortcomings in the design.This will happen. It is unavoidable. Accept this. When you hit one of these problems, decide what sort of problem it is.Some problems are issues in your code or design that make it hard to do what the software should do. For these problems, you need to go back and alter your design to fix the problem.Some problems are caused by not having enough information, or by having something that you didn't think of before. For these problems, you need to go back to your user or client, and ask them how they'd like to address the issue. When you have the answer, you then go and update your design to handle it.In both cases, you should be paying attention to what parts of your code had to change, and as you write more code, you should be thinking about which parts may have to change in the future. This makes it easier to work out what parts might be too interlinked, and what parts might need to be more isolated.4. Rewrite part of the codeOnce you've identified how you need to change the code, you can go and make the change. If you've structured your code well, then this will usually involve changing only one component, but in some cases it might involve adding some components as well. If you find that you're having to change a lot of things in a lot of places, then think about why that is. Could you add a component that keeps all of this code inside itself, and then have all these places just use that component? If you can, do so, and next time you have to change this feature you'll be able to do it in one place.5. TestA common cause of issues in software is not knowing the requirements well enough. This is often not the developers' fault - often, the user isn't sure what they need either. The easiest way to solve this is to reverse the question. Instead of asking what do you need the software to do?, each time you go through these steps, give the user what you've built so far and ask them I built this - does it do what you need?. If they say yes, then you've built something that solves their problem, and you can stop working! If they say no, then they'll be able to tell you in more specific terms what's wrong with your software, and you can go improve that specific thing and come back for more feedback.6. LearnAs you go through this cycle, pay attention to the problems you're finding and the changes you're making. Are there patterns? 
Can you improve?Some examples:If you keep finding you've overlooked a certain user's viewpoint, could you get that user to be more involved in the design phase?If you keep having to change things to be compatible with a technology, could you build something to interface between your code and that technology so you only have to change the interface?If the user keeps changing their mind about words, colours, pictures or other things in the UI, could you build a component that provides to the rest of the application those so that they're all in one place?If you find that a lot of your changes are in the same component, are you sure that component is sticking to just one job? Could you divide it into a few smaller pieces? Can you change this component without having to touch any others?Be AgileWhat you're moving towards here is a style of working known as Agile. Agile isn't a methodology, it's a family of methodologies incorporating a whole load of things (Scrum, XP, Kanban, to name a few) but the thing they all have in common is the idea that things change, and as software developers we should plan to adapt to changes rather than avoiding or ignoring them. Some of its core principles - in particular, the ones that are relevant to your situation - are the following:Don't plan further ahead than you can predict with confidenceMake allowances for things to change as you goRather than building something big in one go, build something small and then incrementally improve itKeep the end user involved in the process, and get prompt, regular feedbackExamine your own work and progress, and learn from your mistakes |
_unix.230621 | I'm trying to submit a job on a cluster via qsub, but it gets stuck in state Eqw with an error message: $ qstat -j 466 | grep error, which outputs: error reason 1: 09/18/2015 17:12:32 [1125:3453]: error: can't chdir to /export/home/rafaelmf: No such file or direct. I'm just using a test.sh script with echo Hello World output so I can try adding or removing options like -cwd, -j, -o. But nothing works. qstat also shows: sge_o_home: /export/home/rafaelmf; sge_o_workdir: /state/partition1/home/rafaelmf; and I know export is a link to state/partition1/. Also, I don't have root access, and all is done with ssh. So, does anyone know how to deal with such an error (without sudo)? | qsub job in state Eqw error: can't chdir to directory: No such file or directory | cluster;batch jobs;cluster ssh | null |
_unix.331542 | I have an absolute minimal Linux system that I have built myself. The next step in getting it to some sort of functional state is to install a working package manager, i.e. apt-get. How can I install apt-get and get it configured with all the right setup and directories, given that I have no package manager already on the system? Thanks. | How to install apt-get from scratch on a minimal system? | apt | null |
_unix.28568 | In the /etc/passwd file on my system, the comment field, field 5, is inconsistent in its contents. I thought that I could extract it to get the full name of the user.fullname=`awk -F: '$1 == name {print $5}' name=$LOGNAME /etc/passwd`However this returns with $fullname containing a name with 0, 3, or 4 commas following. Exploring the man page (man 5 passwd) provides no details of this field other than describing it as user name or comment field.Perhaps there is additional information that is stored along with the user name? | Where can I find a reference to the format of the comment field (field 5) of the /etc/passwd file? | files;users;password | This field is often formatted as a GECOS field, which typically has 4 comma-separated fields for extra information in addition to the user's name, such as phone number, building number, etc.In all cases I have seen, if the field has a comma, the name is what is before the comma. But I can imagine cases where this is not the case (a name of Foo Bar, Jr would break, for instance). |
_softwareengineering.345414 | I have an API which is basically comprised of two parts: 1. A TensorFlow neural net that provides predictions based on input image (mainly GPU computations) and 2. Post processing on those predictions (mainly CPU)This is kind of a best practices/recommendation question. What I am wondering is if these two sections of the application should be decoupled, placed in separate Docker containers and scaled separately. There is no other use for the TensorFlow predictions (no other apps would want to receive predictions directly so there is no need for decoupling in terms of accessibility).The only scenario I can think of that would warrant decoupling is if the Post-Processing consumed a large amount of CPU resources that forced the application to scale when the GPU was being underutilized (the prediction part of the app was handling the load just fine) and by forcing the application to scale we are using more GPU resources than necessary.However as long as sufficient CPU resources can be allocated to the server so that the point at which the app scales is a point of high utilization on both the CPU and GPU I would see no reason why the services should be decoupled.Hopefully this makes sense - any suggestions? | Should TensorFlow prediction be decoupled from post-processing? | architecture;backend | null |
_webmaster.43874 | I'm working on a fairly large site that generates a dynamic sitemap hourly. The sitemap isn't submitted in Google Webmaster Tools yet, and I'm shying away because I'm afraid that the new content (which appears in the dynamic sitemap) won't get crawled as quickly. So my question is: how often does GWT check the sitemap once it is submitted? Is there anything else I should be aware of when working with GWT and dynamic pages? P.S. I checked the thread How often are sitemap.xml checked for updates by crawlers? and from what I understand Google crawls more often when the site gets updated regularly - but does the same apply for GWT? | How often does GWT check dynamic sitemaps? | seo;google search console;sitemap | null |
_unix.292162 | How can I install an ISO from a portable hard drive? The guides that I have seen require using GRUB2 on the local drive to load the ISO on the portable drive. I would like to boot from the portable drive. I prefer methods that are automated or elegant. I could perform the actions from Windows 10 or Cinnamon. Edit: The portable drive must boot in UEFI and BIOS, and I'd like to leave one NTFS partition on the hard drive for shuttling data. Currently, I'm trying to work off of https://wiki.archlinux.org/index.php/Multiboot_USB_drive, but I'm having some trouble figuring out how to incorporate the UUID into the grub.cfg. | Easy-ish Grub2 loading of iso FROM a portable hard drive | linux mint;system installation;grub2;iso;external hdd | null |
_cs.14826 | Scott Meyers describes here that traversing a symmetric matrix row-wise performes significantly better over traversing it column-wise - which is also significantly counter-intuitive. The reasoning is connected with how the CPU-caches are utilized. But I do not really understand the explanation and I would like to get it because I think it is relevant to me.Is it possible to put it in more simple terms for somebody not holding a PhD in computer architecture and lacking experience in hardware-level programming? | Performance of row- vs. column-wise matrix traversal | cpu cache;performance | In today's standard architectures, the cache uses what is called spatial-locality. This is the intuitive idea that if you call some cell in the memory, it is likely that you will want to read cells that are close by. Indeed, this is what happens when you read 1D arrays.Now, consider how a matrix is represented in the memory: a 2D matrix is simply encoded as a 1D array, row by row. For example, the matrix $\left(\begin{array}{ll} 2, 3 \\4, 5\end{array}\right)$ is represented as $2,3,4,5$.When you start reading the matrix in cell $(0,0)$, the CPU automatically caches the cells that are close by, which start by the first row (and if there is enough cache, may also go to the next row, etc). If your algorithm works row-by-row, then the next call will be to an element still in this row, which is cached, so you will get a fast response. If, however, you call an element in a different row (albeit in the same column), you are more likely to get a cache miss, and you will need to fetch the correct cell from a higher memory. |
_softwareengineering.311585 | We have built a complex Angular application that sends multiple HTTP request to a REST service that is also built in house. Since both the frontend and the backend is being developed in parallel, bugs can happen in either side. It could be a bug in the REST service, or it can be a problem with the HTTP Request generated from the front-end. When a bug has been reported, it's important to identify where the error occurs. There are specific structures for each of the requests. The data models are mostly populated when the users give inputs in a form or a directiveHow do we approach testing these HTTP requests? Can we only rely on unit tests? Can the testing be done with dummy data that produces a pre-defined JSON object? Or should integration tests be done with the actual data?And by which ever method we pick, how do we identify where the error lies when there is a bug? | How and where to test if the JSON request objects generated by the front-end is valid | javascript;unit testing;testing;angularjs;integration tests | null |
_codereview.123346 | Problem from hackerrank:Youre given the pointer to the head nodes of two sorted linked lists. The data in both lists will be sorted in ascending order. Change the next pointers to obtain a single, merged linked list which also has data in ascending order. Either head pointer given may be null meaning that the corresponding list is empty.Input FormatYou have to complete the Node* MergeLists(Node* headA, Node* headB) method which takes two arguments - the heads of the two sorted linked lists to merge. You should NOT read any input from stdin/console.Output FormatChange the next pointer of individual nodes so that nodes from both lists are merged into a single list. Then return the head of this merged list. Do NOT print anything to stdout/console.Sample Input1 -> 3 -> 5 -> 6 -> NULL2 -> 4 -> 7 -> NULL15 -> NULL12 -> NULLNULL 1 -> 2 -> NULLSample Output1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 712 -> 15 -> NULL1 -> 2 -> NULL/*Merge two sorted lists A and B as one linked listNode is defined as struct Node{ int data; struct Node *next;}*/Node* MergeLists(Node *headA, Node* headB){// This is a method-only submission. // You only need to complete this method if(headA == NULL) return headB;if(headB == NULL) return headA;Node* temp1;Node* temp2;Node* originalHead;Node* head = new Node;head->data = 0;head->next = 0;//temp1 should always point to head with smaller valueif(headA->data <= headB->data){ temp1 = headA; originalHead = headA; temp2 = headB;}else{ originalHead = headB; temp1 = headB; temp2 = headA;}while(temp1 != 0 && temp2 != 0){ if(temp1->data <= temp2->data){ head->next = temp1; head = temp1; if(temp1->next != NULL) temp1 = temp1->next; else{ head->next = temp2; break; } } else{ head->next = temp2; head = temp2; if(temp2->next != NULL) temp2 = temp2->next; else{ head->next = temp1; break; } }}return originalHead;} | Merging sorted linked lists - C++ | c++;algorithm;programming challenge;linked list | SimplifyIn this loop, the loop condition is practically useless:while(temp1 != 0 && temp2 != 0){ if(temp1->data <= temp2->data){ head->next = temp1; head = temp1; if(temp1->next != NULL) temp1 = temp1->next; else{ head->next = temp2; break; } } else{ head->next = temp2; head = temp2; if(temp2->next != NULL) temp2 = temp2->next; else{ head->next = temp1; break; } }}The statements before the loop have already checked that temp1 and temp2 are not null. So the condition will be true for the first time.Then in each cycle, you check if the next value of temp1 or temp2 will be null and if yes then break out. So you could as well change the loop condition to while (true), and the program will still work.But instead of doing that,it would be simpler to move those checks out of the loop body,and let the loop condition be useful:do { if (temp1->data <= temp2->data) { head->next = temp1; head = temp1; temp1 = temp1->next; } else { head->next = temp2; head = temp2; temp2 = temp2->next; }} while (temp1 != NULL && temp2 != NULL);head->next = temp1 != NULL ? temp1 : temp2;Taking it one step further, head can be updated outside of the if-else, like this:do { if (temp1->data <= temp2->data) { head->next = temp1; temp1 = temp1->next; } else { head->next = temp2; temp2 = temp2->next; } head = head->next;} while (temp1 != NULL && temp2 != NULL);head->next = temp1 != NULL ? temp1 : temp2;You have this comment://temp1 should always point to head with smaller valueNope, not really! 
This would work just as well:if(headA->data <= headB->data) { originalHead = headA;} else { originalHead = headB;}temp1 = headA;temp2 = headB;Memory managementYou created a new Node for head, but you forgot to delete it.Suggested implementationSome further simplifications and improvements are possible:No need to check if one of the heads are null. It's possible to rewrite using a dummy a node to handle such cases naturally without special treatmentThe variable names can be improvedImplementation:Node* MergeLists(Node *headA, Node* headB){ Node *dummy = new Node(); Node *node = dummy; Node *nodeA = headA; Node *nodeB = headB; while (nodeA != NULL && nodeB != NULL) { if (nodeA->data <= nodeB->data) { node->next = nodeA; nodeA = nodeA->next; } else { node->next = nodeB; nodeB = nodeB->next; } node = node->next; } node->next = nodeA != NULL ? nodeA : nodeB; Node *head = dummy->next; delete dummy; return head;} |
_cs.56081 | I recently found out about the Rose tree data structure, but just going off of a Haskell data definition and the tiny Wikipedia description of it, I've got some trouble understanding what applications a Rose tree might have.For reference, the Haskell data definition:data RoseTree = RoseTree a [RoseTree a]For those unfamiliar with Haskell -- it's a recursive data type definition with an arbitrary type a, where the type constructor is provided with a literal of type a followed by an optionally empty list of type RoseTree on the same type a.The way I see it:This data structure is unordered by default (although I assume most practical applications do implement some form of ordering for searching)The data structure doesn't enforce a fixed number of nodes per layer at any point, except the global root, which must have a single nodeGiven that minimal amount of information, I'm having trouble figuring out when one might use this type of tree.In addition to the question in the title, if search is indeed implemented in most applications of a Rose tree, how is this done? | What are the applications of Rose trees? | trees | You seem to have an overly data structures and algorithms mindset. Not every tree is some kind of search tree. Data structures are often designed to correspond to or capture aspects of a domain model.S-expressions are almost exactly rose trees. (Or rather, I would say how they are typically thought of is as rose trees. Wikipedia is correct in saying they are more like binary trees, but what you might call proper S-expressions are only slightly different from rose trees.) At any rate, you can use them as a generic representation for an abstract syntax tree. The benefit of doing this is that you can easily write generic operations, e.g. find all variables or swap parameters or rename this symbol. It's also extensible in that adding a new type of node to your abstract syntax often doesn't require really changing anything. The downsides are there aren't really any constraints, so it doesn't a priori prevent you from writing nonsense. This can be mitigated for users by standard abstract data type techniques, but the implementer of transforms and such must deal with the unstructured representation even though they know that the input is structured via a data type invariant. Of course, when that certainty is misplaced (possibly because things have changed), the errors tend to be unpredictable and hard to debug.In practice, while the Data.Tree module in the standard libraries provides a rose tree, almost no one uses it in the Haskell community. Defining custom data types that explicitly capture the constraints is so easy that there is little reason to use a generic library type. Further, there has been an enormous amount of research and practice around performing generic operations over custom types which eliminates many of the benefits of using a generic representation. Finally, Haskellers tend to be very much in favor of explicit, enforced constraints and are willing to pay to get it.To answer your last question, oftentimes searching an AST is unimportant and/or the ASTs are generally assumed to be small enough that just walking the whole thing is acceptable. Admittedly, it's not uncommon to collect definitions in a separate data structure with references into the AST which could be viewed as a sort of index. Similarly, some optimization passes will (usually locally and temporarily) build up indexes to simplify and speed up their operation. 
The structure of the AST corresponds to the input and so it can't be rebalanced or anything like that. As such, it's uncommon for the AST itself to contain indexing information or information to help searching. |
_unix.196646 | How do I get bytes usage using iptables for a particular IP, including YouTube streaming videos? Currently, I am using the following IP tables rules to get data usage by IP address: sudo iptables -I FORWARD 1 -s 192.168.10.10 -j ACCEPT; sudo iptables -I FORWARD 1 -d 192.168.10.10 -j ACCEPT. But it's not recognizing data usage by YouTube videos. | Linux - iptables - get YouTube streaming bytes usage | linux;networking;iptables;streaming | null |
_unix.177580 | Changing from Exchange to iRedMail... Is there a way to import MS Exchange 2003 mail into iRedMail running on Ubuntu 14.04? | Changing from Exchange to IredMail...Mail Import? | email | null |
_cs.57278 | I have nodes a, b, c, d, N, and e in an adjacency matrix. If I follow the order a, b, c, d, N, e, I get 100010 for b (the particular question does not matter because I'm asking about the order). But if I follow the order a, b, c, N, d, e, I get 100100, which is what my TA did in class. Does the order really matter? If so, is there a way to find the order from the adjacency graph? | Does the order matter in the adjacency matrix? | discrete mathematics;adjacency matrix | null |
_cs.50127 | I see from Wikipedia that the optimal number of hash functions is: $k =\frac{m}{n}\ln{2}$. However, it's not obvious to me why, even after reading the Wikipedia article (including the section on false positives). Would anyone be interested in explaining it in simple words? :) | Why does a Bloom filter need $\frac{m}{n}\ln{2}$ hash functions? | bloom filters;hashing | This is explained in Wikipedia. Given $n,m$, the false positive probability is $$\left(1 - \left(1 - \frac{1}{m}\right)^{kn}\right)^k.$$ This is the quantity we want to minimize. While the exact expression is hard to minimize exactly, we can use the approximation $$\left(1 - \left(1 - \frac{1}{m}\right)^{kn}\right)^k \approx (1-e^{-kn/m})^k,$$ which is good if $m$ is large. We can optimize the latter expression using calculus, and the result is an expression which is very close to $(m/n) \ln 2$. This is a calculation that I leave to you. |
_unix.346973 | I already got this installed: 1 core/archlinux-keyring 20170104-1 [installed]; 10 blackarch/blackarch-keyring 20140118-3 [installed]. But I got an error when upgrading libc++abi from the AUR: ==> Verifying source file signatures with gpg... llvm-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) libcxx-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) libcxxabi-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) ==> ERROR: One or more PGP signatures could not be verified! ==> ERROR: Makepkg was unable to build libc++. ==> Restart building libc++abi ? [y/N] How do I resolve this? (Is there a way to know which keyring I should install to solve this issue?) | unknown public key 8F0871F202119294 on ArchLinux | arch linux | Keys from the AUR are not in the keyrings provided by the distributions' repositories. You will need to find and add the AUR package/upstream keys manually, if you trust them. Start by checking the PKGBUILD file of the package, then the comments in the AUR to see where/if to get and add the keys. |
_vi.8984 | I just started using ycm with clang-completer, which apparently can also do syntax checking.It instantly reminded me why I avoided syntax checking in gVim: As soon as an error is found, the signs appear on the left in an extra column. By doing so they shift the window to the right to make place for the characters of the sign.As soon as I correct the error and it was just a single error in the file, the signs disappear and with it the sign column.This can become quite flashy, see the gifHow can I make it steady, so that the sign column stays, and just he signs disappear?I found the :sing unplace but don't know how to NOT execute it. Or maybe expand beforehand and don't expand anymore when signs are added? | Make column for signs permanent in gVim | gvim;plugin you complete me;signs | You can do what some plugins do and create a dummy sign:sign define Dummyautocmd VimEnter,SessionLoadPost,BufRead * execute 'sign place 97349278 line=9999 name=Dummy buffer='.bufnr('%')All this does is creates an empty sign on line 9999, which should be far enough from valid lines in a file you actually want to see signs in. It has to be set on a far off line since only one sign can occupy a line at a time. 97349278 is an ID for the sign, which I got by mashing the keyboard. All that matters is that it's unique.I don't use YCM, but it might have an option to use a dummy sign. It may also remove your dummy sign, in which case, you'll have to look in its source to see how it can be prevented. |
_unix.127653 | I run Gentoo Linux on my laptop. I have an issue, though, where if I'm building some very large piece of software (as I do fairly frequently, since the purpose of this laptop is development), the CPU tends to heat up more than I'd like.I used to use cpufreqd to manage this, since it has an lm_sensors plugin and can reduce the CPU frequency once it reaches a particular temperature threshold.However, this is no longer going to be a good option, since (apparently) cpufreqd is no longer actively maintained, and as such is going to be removed from Gentoo's package tree.Because of this, my question is: is there some other way I can solve this problem?I am aware of other similar CPU frequency management daemons, as well as the drivers that are built into the Linux kernel, but as far as I know they do not manage CPU frequency as a function of CPU temperature. | Slow down CPU when it heats up | linux;gentoo;cpu;cpu frequency;temperature | There are still a couple of options left, please refer to Arch wikipage.The one you are looking for, specifically, is thermald. |
_unix.147671 | I sync my .vimrc file between two machines, one running Debian testing and the other Ubuntu. On Debian everything works fine.On Ubuntu, the fold column is gray instead of black, even though black is specified:213: hi FoldColumn ctermbg=Black ctermfg=BlackIf I comment out line 213, the fold column turns black, but then foldmarkers default to white (the whole point is to hide them black on black). If i just do:213: hi FoldColumn ctermfg=BlackThe FoldColumn is gray again. I do not find plugin conflicts with grep -r FoldColumn .vim/. Any ideas? | Vim FoldColumn color different on Debian / Ubuntu | vim;vimrc | null |
_unix.140167 | INPUT: a@notebook:~$ cat in.csv'XYZ843141'^'ASDFSAFXYVFSHGDSDg sdGDS dsGDSgfa assfd faSDFAS saDFSAFD adFSA343fa sdfSADF'^'BAAAR'^'YYY'^'..... and so on, further columns'YYZ814384'^'ASfdEtRiuognfnseaFREQTzKb aSFfdsaADSFSA adsFdsa34 34 ASFfsas saftrzj etrzrasdfasffasf safs'^'foooobaaar'^'ZZZ'^'..... and so on, further columnsOUTPUT: a@notebook:~$ cat in.csv | SOMEMAGIC'XYZ843141'^'ASDFSAFXYVFSHGDSDg s'^'BAAAR'^'YYY'^'..... and so on, further columns'YYZ814384'^'ASfdEtRiuognfnseaFRE'^'foooobaaar'^'ZZZ'^'..... and so on, further columnsMy question: If: '^'is the separator, then how can SOMEMAGIC (an awk/sed??) truncate the second column to given length? Example: 20 chars max, from this: ASDFSAFXYVFSHGDSDg sdGDS dsGDSgfa assfd faSDFAS saDFSAFD adFSA343fa sdfSADFto this: ASDFSAFXYVFSHGDSDg sand preserve all the other things :\ | How to truncate only given column length? | sed;awk | > awk -v OFS='^' -F'\\\\^' '{if(length($2)>20) $2=substr($2,1,20); print;}' file'XYZ843141'^'ASDFSAFXYVFSHGDSDg s'^'BAAAR'^'YYY'^'..... and so on, further columns'YYZ814384'^'ASfdEtRiuognfnseaFRE'^'foooobaaar'^'ZZZ'^'..... and so on, further columns |
_softwareengineering.16798 | In my web application, I give the user the option to import all of his/her contacts from their email account, and then send out invites to all of these accounts or map the user to the existing accounts based on emails. Now the question is: once all of these contacts are imported, would it be right to save these contacts for repeated reminders, etc.? I am quite confused here because that is the way all of the sites operate, but would that not be a violation of data privacy? Is there an algorithm for this? | Handling contacts imported from a user's email account | web development;algorithms | I think it would only be valid to store those contacts for repeatedly reminding people if they explicitly opt in to do so. Also, very importantly, that reminder should not be sent unless the original user clicks on the magic button (s/he annoying their friends is better than you annoying them). Contacts for a user change all of the time anyhow, so inviting them to go through the process from scratch is probably a better idea. |
_webmaster.41141 | My site is a single page web-app. I am following the suggestions based on making AJAX applications crawl-able.My URL looks like this: http://domain.com/#!pages/contactUsMy understanding is:http://domain.com/#!chair/12 goes to http://domain.com/?_escaped_fragment=chair/12 As I am not using any server-side scripting on this project, I have created HTML pages with the application states and put them in a folder like so: http://domain.com/htmlFiles/1.htmlIn Apache I have forwarded requests that include _escaped_fragment_= to the right html page:RewriteEngine onRewriteCond %{QUERY_STRING} ^_escaped_fragment_=chair\/([\w]*)RewriteRule ^(.*)$ htmlFiles/%1.html? [R=302,L]The forwarding works correctly and the appropriate page shows up if the _escaped_fragment URL is used.The sitemap I submitted to Google looks like this:<url><loc>http://domain.com/#!pages/contactUs</loc><lastmod>2012-12-30</lastmod><changefreq>weekly</changefreq><priority>0.8</priority></url>The problem now is this:my whole htmlFiles folder (http://domain.com/htmlFiles/1.html) with the HTML files is indexed in Google. These pages are there in the first place just to show Google what content my actual pages contain.My entire website works from http://domain.com/These pages should not be coming up in the search results. As they had said they will only index pretty URLs, but still, I am reluctant to have them remove these pages as I don't know if it's going to hamper something else.Could it be that 302 is not the right redirect and 301 should be used instead?Also is there something wrong with this redirect approach thing in the first place? | Google indexed my escaped_fragment pages | seo;301 redirect;302 redirect;rich snippets | null |
_cs.47103 | Karp reduction (polynomial-time many one) is used in complexity theory to define NP-completeness. However, Cook reductions (polynomial-time Turing) is more powerful and intuitive from information theoretic perspective since it could offer an insight into the information content of hard sets in NP.Intuitively, if we say problem $A$ Cook reduces to problem $B$ then the information content of set $A$ should be proportional to the number of calls made to $B's$ oracle. For instance, we have a truth-table (Cook) reduction from Graph Automorphism problem to Graph Isomorphism problem but no such reduction in the opposite direction is known. I am interested in techniques for lower bounding the number of calls to GA's oracle (in possible Turing reduction from GI to GA).Why is it hard to find lower bounds by lower bounding the required number of calls to $B's$ oracle? Is there an intuition that supports $P^{GI} \ne P^{GA}$? | Complexity lower bounds via Cook reductions | complexity theory;np;graph isomorphism | null |
_unix.333239 | On my 2 Linux servers I have the below entries in /etc/grub.conf. Server 1: password --encrypted $1$something$somevalue. Server 2: password --md5 $1$something$somevalue. My question is: are they the same? Is there any difference between the usage of --encrypted and --md5? | Are password --encrypted and password --md5 the same for the GRUB config file? | linux;configuration;password;grub | null |
_cogsci.1770 | I'm wondering if the human brain predicts how certain weeks of the year should feel? For example, a child who is going to school may have a more positive affect in anticipation of summer holidays, and may feel more negatively towards the end of the summer, as he/she knows that freedom and fun are about to end. Another example may be siblings' or parents' birthdays, where the anticipation of the birthday increases positive affect.I'm wondering if year after year of such patterns (in childhood, the teenage years and maybe young adolescence) can create an association in the brain between positive/negative affect and the specific time of the year, or specific photoperiod duration (day length). What I'm trying to understand is, if a pattern of such ups and downs that may have been established in the childhood persists throughout adulthood?Have there been any studies that looked at the previous life history and incidence of mania/depression episodes in bipolar disorder, or the onset of depression in Seasonal Affective Disorder(SAD), and correlated them with holidays, birthdays, etc? I've read that SAD may manifest a depression in any season, not just winter, which got me thinking of the possible causes. | Is there any predictive component to positive/negative affect in Seasonal Affective Disorder and Bipolar Disorder? | emotion;abnormal psychology;bipolar disorder | null |
_codereview.86505 | The following query returns the latest Odds for each Offer based on the timestamp on the Odds. However, the query takes an average of 1497ms - and I'm sincerely asking for help to optimize it.SELECT DISTINCT ON (odds_odds.offer_id) odds_odds.id, odds_odds.o1, odds_odds.o2, odds_odds.o3, odds_offer.odds_type_id, odds_offer.match_id, odds_offer.bookmaker_id FROM odds_oddsINNER JOIN odds_offer ON ( odds_odds.offer_id = odds_offer.id )INNER JOIN odds_match ON ( odds_offer.match_id = odds_match.id ) WHERE (odds_match.start_time >= ? AND odds_offer.match_id IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) AND (odds_offer.flags = ? OR (odds_offer.flags = ?AND odds_offer.last_verified >= ?)) AND NOT (((odds_odds.o1 = ? AND odds_odds.o1 IS NOT NULL) OR (odds_odds.o2 = ? AND odds_odds.o2 IS NOT NULL))))ORDER BY odds_odds.offer_id ASC, odds_odds.time DESCThese are the stats from Heroku's log:1497ms Avg. time0/min Throughput29ms I/O timeHere is the output from EXPLAIN (ANALYZE, VERBOSE, BUFFERS):Unique (cost=342394.61..342394.61 rows=3 width=46) (actual time=46453.678..46703.724 rows=31430 loops=1) Output: odds_odds.id, odds_odds.o1, odds_odds.o2, odds_odds.o3, odds_offer.odds_type_id, odds_offer.match_id, odds_offer.bookmaker_id, odds_odds.offer_id, odds_odds.time Buffers: shared hit=482834 read=24654 dirtied=9, temp read=1902 written=1902 I/O Timings: read=93.940 -> Sort (cost=342394.61..342394.61 rows=3 width=46) (actual time=46453.674..46580.738 rows=250485 loops=1) Output: odds_odds.id, odds_odds.o1, odds_odds.o2, odds_odds.o3, odds_offer.odds_type_id, odds_offer.match_id, odds_offer.bookmaker_id, odds_odds.offer_id, odds_odds.time Sort Key: odds_odds.offer_id, odds_odds.time Sort Method: external merge Disk: 15208kB Buffers: shared hit=482834 read=24654 dirtied=9, temp read=1902 written=1902 I/O Timings: read=93.940 -> Nested Loop (cost=0.11..342394.60 rows=3 width=46) (actual time=8.455..45710.497 rows=250485 loops=1) Output: odds_odds.id, odds_odds.o1, odds_odds.o2, odds_odds.o3, odds_offer.odds_type_id, odds_offer.match_id, odds_offer.bookmaker_id, odds_odds.offer_id, odds_odds.time Buffers: shared hit=482827 read=24654 dirtied=9 I/O Timings: read=93.940 -> Nested Loop (cost=0.00..342383.74 rows=1 width=16) (actual time=8.409..44383.739 rows=33436 loops=1) Output: odds_offer.odds_type_id, odds_offer.match_id, odds_offer.bookmaker_id, odds_offer.id Join Filter: (odds_offer.match_id = odds_match.id) Rows Removed by Join Filter: 66691138 Buffers: shared hit=38450 read=24654 dirtied=9 I/O Timings: read=93.940 -> Seq Scan on public.odds_offer (cost=0.00..341815.13 rows=3222 width=16) (actual time=0.135..2791.383 rows=33922 loops=1) Output: odds_offer.odds_type_id, odds_offer.match_id, odds_offer.bookmaker_id, odds_offer.id Filter: ((odds_offer.flags OR ((NOT odds_offer.flags) AND (odds_offer.last_verified >= '2015-04-10 13:43:30.556949+00'::timestamp with time zone))) AND (odds_offer.match_id = ANY 
('{2725665,2725667,2725670,2725671,2725674,2725668,2723416,2723423,2723421,2723422,3006845,3006846,3006848,2726643,2726644,2730552,2731247,2731250,2731248,2731249,2733487,2733490,2733740,2733741,2733742,2733743,2734281,2734286,2734288,2736599,2736600,2735768,2735770,2735769,2735773,2735767,2735772,2737269,2738308,2738309,3018437,3018441,3094187,3091835,2740985,2740982,2741303,2741304,2741309,2768481,2768487,2768483,2768482,2768488,2768485,2768484,2742802,3044541,2746058,2746057,2746063,2749068,2753763,2750377,2748517,3065622,2762436,2762437,2762439,2764009,2764320,3016595,2935111,2772316,2772318,2781140,2781144,2780837,2788433,3050601,3094643,3094641,2801042,2801044,2801047,2801048,2801049,2795387,2795390,2795388,2795389,2795395,2795391,2795392,2795394,2821571,2821729,2821730,2821731,2821732,2821733,2821735,2821736,2821738,2821739,2821740,2880288,2829676,2829678,2829679,2829680,2829681,2829682,2829683,2829685,3053895,2839492,2839497,2839501,2850609,2877859,2927855,2927848,2927852,2927854,2927850,3072825,2953872,2953874,3089862,3117521,3007435,3007428,3007427,3007430,3007436,3007444,3007445,3007442,3007446,3007447,3007448,3007429,3007449,3007431,2988273,3047885,3047887,3014213,3018787,3018790,3102572,3119336,3040014,3040020,3043864,3043861,3043862,3043865,3045244,3045245,3045246,3045247,3045248,3045249,3045250,3045251,3045252,3054436,3050931,3063078,3063079,3063080,3063081,3063082,3063083,3057971,3064730,3064731,3064732,3064733,3064734,3064735,3111903,3120490,3120446,3121373}'::integer[]))) Rows Removed by Filter: 5523694 Buffers: shared hit=38116 read=24654 dirtied=9 I/O Timings: read=93.940 -> Materialize (cost=0.00..517.06 rows=4 width=4) (actual time=0.001..0.511 rows=1967 loops=33922) Output: odds_match.id Buffers: shared hit=334 -> Seq Scan on public.odds_match (cost=0.00..517.05 rows=4 width=4) (actual time=7.797..8.636 rows=1967 loops=1) Output: odds_match.id Filter: (odds_match.start_time >= '2015-04-10 13:53:30.556949+00'::timestamp with time zone) Rows Removed by Filter: 50333 Buffers: shared hit=334 -> Index Scan using odds_odds_offer_id on public.odds_odds (cost=0.11..10.68 rows=60 width=34) (actual time=0.014..0.033 rows=7 loops=33436) Output: odds_odds.id, odds_odds.o1, odds_odds.o2, odds_odds.o3, odds_odds.offer_id, odds_odds.time Index Cond: (odds_odds.offer_id = odds_offer.id) Filter: (((odds_odds.o1 <> 0::numeric) OR (odds_odds.o1 IS NULL)) AND ((odds_odds.o2 <> 0::numeric) OR (odds_odds.o2 IS NULL))) Rows Removed by Filter: 2 Buffers: shared hit=444377 Total runtime: 46726.458 ms | Bookmaker odds and offers query | performance;sql;postgresql | null |
_webmaster.16384 | What are the laws on this? If I have a simple registration form, can I have underneath it:[x] Subscribe to the blog[x] Email me when a new release is issuedAuto checked? Or do they need to be Opt In by law (I remember reading this somewhere). If it makes a difference, we are registered in the UK, and our web server is also UK located.EditI'm not sure if people quite understand this question, what I mean is, can I have these check boxes checked by default? I see a lot of sites doing this. It will be presented in a 100% clear and non deceptive way. | Auto subscribe checkbox on registration | forms;registration;automation;subscription | From http://www.ico.gov.uk/for_organisations/privacy_and_electronic_communications/opt_in_out.aspxIf you provide a clear and prominent message along the following lines, the fact that a suitably prominent opt-out box has not been ticked may help establish that consent has been given. For example:'By submitting this registration form, you will be indicating your consent to receiving email marketing messages from us unless you have indicated an objection to receiving such messages by ticking the above box.'I would say removing a tick is the same as ticking an empty box, so you're probably OK. |
_unix.153203 | I'm using the 14 px Gohu font, and it looks like these characters are offset one pixel to the left from the cursor, which causes the cursor to ignore that part of the text when typing. I am using bspwm + urxvt + compton. Things I have tried: disable compton; set cursor to underscore. This problem did not occur with the default font. What is causing this and how is it fixed? | Urxvt cursor cutting off wide characters like 'w' and 'm' | debian;graphics;urxvt | null |
_softwareengineering.30254 | These days, so many languages are garbage collected. It is even available for C++ by third parties. But C++ has RAII and smart pointers. So what's the point of using garbage collection? Is it doing something extra?And in other languages like C#, if all the references are treated as smart pointers(keeping RAII aside), by specification and by implementation, will there still be any need of garbage collectors? If no, then why is this not so? | Why Garbage Collection if smart pointers are there | garbage collection;smart pointer | So, what's the point of using garbage collection?I'm assuming you mean reference counted smart pointers and I'll note that they are a (rudimentary) form of garbage collection so I'll answer the question what are the advantages of other forms of garbage collection over reference counted smart pointers instead.Accuracy. Reference counting alone leaks cycles so reference counted smart pointers will leak memory in general unless other techniques are added to catch cycles. Once those techniques are added, reference counting's benefit of simplicity has vanished. Also, note that scope-based reference counting and tracing GCs collect values at different times, sometimes reference counting collects earlier and sometimes tracing GCs collect earlier.Throughput. Smart pointers are one of the least efficient forms of garbage collection, particularly in the context of multi-threaded applications when reference counts are bumped atomically. There are advanced reference counting techniques designed to alleviate this but tracing GCs are still the algorithm of choice in production environments.Latency. Typical smart pointer implementations allow destructors to avalanche, resulting in unbounded pause times. Other forms of garbage collection are much more incremental and can even be real time, e.g. Baker's treadmill. |
_softwareengineering.273323 | How did these earlier programmers know what combinations of binary produced certain results? Is there a way I can create an assembler from binary today? | How were assemblers created straight from binary? | binary;assembly | null |
_computergraphics.354 | I know the GPU prefetches textures and that's why dependent texture reads are slower, but how does it work and at what point does that happen? EDIT: Split the content of this question into others as suggested by trichoplax. Here are links to the other questions: How does Texture Cache work considering multiple shader units; How does Texture Cache work in Tile Based Rendering GPU; Is using many texture maps bad for caching? | How does Texture Prefetch work? | opengl;texture;gpu;optimisation | null |
_webmaster.74131 | I am trying to find and use off-page SEO techniques for my website, but most of my searches tell me that Google has updated its SEO algorithm and now considers some off-page techniques to be spam. So, can anyone list valid off-page SEO techniques, please? | What are the best valid offpage SEO techniques, as per Google's latest algorithm? | seo;google panda algorithm;google penguin algorithm | I think a lot of people get very fixated on SEO techniques and what the latest algorithm is. Basing your decisions on the very frequent changes that Google makes is not the best way to run a site or a business - unless you are using black-hat techniques and are trying to stay ahead of Google catching you. Focus on things you can control. You will never guess what the next algorithm update will bring or how it might or might not affect your site. Additionally, those changes might become reversed or obsolete. So focus on things that you can fully control: content quality, content strategy/inbound marketing, user experience, site performance, etc. All of the aforementioned items will lead to quality backlinks, social sharing and return visits. Ask yourself: Why would someone visit my site? Does it provide value? Is it something that people are interested in reading about? You can control every aspect of this, and you will be rewarded if you pay close attention to what your target audience wants and not whatever link-baiting fad everyone is trying this month. |
_unix.102211 | I want to know how use rsync for sync to folders recursive butI only need to update the new files or the updated files (only the content not the owner, group or timestamp) and I want to delete the files that not exist in the source. | rsync ignore owner, group, time, and perms | rsync | I think you can use the -no- options to rsync to NOT copy the ownership or permissions of the files you're sync'ing.Excerpt From rsync Man Page--no-OPTION You may turn off one or more implied options by prefixing the option name with no-. Not all options may be prefixed with a no-: only options that are implied by other options (e.g. --no-D, --no-perms) or have different defaults in various circumstances (e.g. --no-whole-file, --no-blocking-io, --no-dirs). You may specify either the short or the long option name after the no- prefix (e.g. --no-R is the same as --no-relative). For example: if you want to use -a (--archive) but dont want -o (--owner), instead of converting -a into -rlptgD, you could specify -a --no-o (or -a --no-owner). The order of the options is important: if you specify --no-r -a, the -r option would end up being turned on, the opposite of -a --no-r. Note also that the side-effects of the --files-from option are NOT positional, as it affects the default state of several options and slightly changes the meaning of -a (see the --files-from option for more details).Ownership & PermissionsLooking through the man page I believe you'd want to use something like this:$ rsync -avz --no-perms --no-owner --no-group ...To delete files that don't exist you can use the --delete switch:$ rsync -avz --no-perms --no-owner --no-group --delete ....TimestampsAs for the timestamp I don't see a way to keep this without altering how you'd do the comparison of SOURCE vs. DEST files. You might want to tell rsync to ignore timestamps using this switch:-I, --ignore-times Normally rsync will skip any files that are already the same size and have the same modification timestamp. This option turns off this quick check behavior, causing all files to be updated.UpdateFor timestamps, --no-times might do what you're looking for. |
_codereview.164601 | I'm working on a Machine Learning project and I'm in Data Exploration step, and my dataset has both categorical and continuous attributes.I decided to compute a chi square test between 2 categorical variables to find relationships between them!I've read a lot and check if i can found a simple solution by library but nothing !So I decided to write a whole class by myself and using some scipy function .Please reviews and tell me how I can improve it for performance on large dataset.here is the code :import pandas as pdimport numpy as npimport matplotlib.pyplot as pltimport seaborn as sns #for beatiful visualisations%matplotlib inline import scipy.stats as scs #for statisticsimport operatorfrom scipy.stats import chi2_contingencyclass ChiSquareCalc(object): this class is designed to calculated and interpret the relationship between 2 categorials variables by computing the chi square test between them you can find more on chi square test on this video https://www.youtube.com/watch?v=misMgRRV3jQ it will use pandas , numpy ,searborn matplotlib , scipy def __init__(self, X,Y,dataset,**kwargs): we will initailise the with 2 colums of a datafrme the input must be a data and columns names if isinstance(dataset,pd.DataFrame) and isinstance(X,str)and isinstance(Y,str) and X in dataset.columns and Y in dataset.columns : if operator.and_(operator.__eq__(dataset[X].dtypes, 'object'),operator.__eq__(dataset[Y].dtypes, 'object')): self.dataset=dataset self.X=dataset[X] self.Y=dataset[Y] self.contingency=pd.DataFrame() self.c=0 self.p=0 self.dof=0 self.q=0.95 #lower tail probability else: raise TypeError('Class only deal wih categorial columns') else: raise TypeError('Columns names must be string and data must be a DataFrame') def contengencyTable(self): this method will return a contengency table of the 2 variables self.contingency = pd.crosstab(self.X,self.Y) return self.contingency def chisquare(self): this one will calculate the chi square value and return q: chi square results df: degree of freedom p: probability expexcted: excepected frequency table if (not self.contingency.empty): self.c, self.p, self.dof, expected = chi2_contingency(self.contingency) return pd.DataFrame(expected,columns=self.contingency.columns,index=self.contingency.index) else: raise ValueError('contingency table must be initialised') def conclude(self,on): we can decide to conclude on chi square value(chi) or on p (p)value Here is how we build the conclusion according to p value Probability of 0: It indicates that both categorical variable are dependent Probability of 1: It shows that both variables are independent. 
Probability less than 0.05: It indicates that the relationship between the variables is significant at 95% confidence And according to chi square value and df we use a ccritical value calculate with : q:lower tail probability df:degree of freedom the conclusion is approving or rejecting a null hypothesis NulHyp='is no relationship between '+self.X+'and '+self.Y criticalValue=scs.chi2.ppf(q = self.q, df =self.dof) if on not in ['chi','p']: raise ValueError('choose chi or p') else: if on=='chi': if criticalValue > self.c: return 'null hypothesis is accepted : '+NulHyp else: return 'null hypothesis is rejected : '+NulHyp else: if self.p==0: return ' It indicates that both categorical variable are dependent' elif self.p==1: return 'It shows that both variables are independent' elif self.p <(1-self.q): return 'It indicates that the relationship between the variables is significant at confidence of %s',self.q else: return 'there is no relationship ' def DrawPlot(self): and as for bonus you can draw plot to visualise the relationship sns.countplot(hue=self.X,y=self.Y,data=self.dataset) | Calculate relationship between 2 categorical variables in a pandas Dataset with chi square test | python;statistics;pandas;machine learning;scipy | null |
_codereview.74205 | Some fellow code reviewers (hi @Janos!) have been inquiring about a SEDE query to allow to check progress of the Red Shirt hat progression. Try it here!BackgroundRed Shirtcast 5 downvotes on posts that are later deleted or closedLimitationsThere are certain Stack Exchange limitations which make querying this information a bit tricky. Namely:The data is only refreshed once a week, on Sundays. This makes it impossible to have real time results.User voting activity is anonymous, in that a user can only see their own voting activity in their own profile. This disallows joining voting and user data on SEDE.AssumptionsI have made certain assumptions, based on the trends I generally see on closed questions. They are:A user who votes to close/delete a bad question will usually also downvote the question.A user who downvotes a question will usually do so before voting to close/delete.UsageTo get usefulness out of this (and as indicated in the SQL comments at the top of the query):The way that this report can be used is by comparing the results set side-by-side with your votes under your activity reports. Filter by down-votes and look to see if questions you down-voted are in the result set below.For example:Query/* Winter Bash 2014Red Shirt hat estimationCast 5 downvotes on posts that are later deleted or closedThe way that this report can be used is by comparing the results set side-by-side with your votes under your activity reports.Filter by down-votes and look to see if questions you down-voted arein the result set below.*/-- NumberWeeks: Number of weeks to go back-- DATETIME VARIABLESDECLARE @today DATETIME;SET @today = CURRENT_TIMESTAMP;DECLARE @weeks_ago INT;SET @weeks_ago = ##NumberWeeks:int?4##;-- Number of weeks must not go into the future, hence the following:SET @weeks_ago = (CASE WHEN @weeks_ago >0 THEN -@weeks_ago ELSE @weeks_ago END);DECLARE @target_week DATETIME;SET @target_week = DATEADD(WEEK, @weeks_ago, CURRENT_TIMESTAMP);-- POST-RELATED VARIABLESDECLARE @downvote TINYINT;SET @downvote = (SELECT Id FROM VoteTypes WHERE Name LIKE 'Down%'); --3DECLARE @question_post TINYINT;SET @question_post = (SELECT Id FROM PostTypes WHERE Name = 'Question'); --1DECLARE @closed_post TINYINT;SET @closed_post = (SELECT Id FROM PostHistoryTypes WHERE Name = 'Post Closed'); --10DECLARE @deleted_post TINYINT;SET @closed_post = (SELECT Id FROM PostHistoryTypes WHERE Name = 'Post Deleted'); --12-- QUERY BEGINSWITH cte_downvoted_posts AS( SELECT Votes.PostId AS dvote FROM Votes INNER JOIN Posts ON Votes.PostId = Posts.Id WHERE VoteTypeId = @downvote)SELECT Posts.Id AS [Post Link] -- magic column , Posts.OwnerUserId AS [User Link] -- magic column , Posts.CreationDate AS [Creation Date] , Posts.ClosedDate AS [Closed Date]FROM PostsINNER JOIN cte_downvoted_posts ON Posts.Id = cte_downvoted_posts.dvoteINNER JOIN PostHistory ON Posts.Id = PostHistory.PostIdWHERE Posts.CreationDate <= @today AND Posts.CreationDate > @target_week AND Posts.PostTypeId = @question_post AND Posts.ClosedDate IS NOT NULLGROUP BY Posts.Id , Posts.OwnerUserId , Posts.CreationDate , Posts.ClosedDateORDER BY Posts.CreationDate DESCConcernsNitpicks are fine! Anything from naming to indentation to inconsistencies, please don't be shy to point out anything at all!I noticed a lot of nested loops in the execution plan, when it's doing joins. Is there a better way to do this to avoid them?Is there a way to make this query result set more useful, or user-friendly?Are my comments appropriate/useful? 
Should I have fewer, or more of them? | Winterbash 2014 Red Shirt Estimation | sql;sql server;t sql;stackexchange | You don't need to join to Posts at all in your CTE since you don't use any of its columns and you get a Post must exist requirement from your main query.Along the same lines, you don't use PostHistory at all in your query but join to it all the same.You declare @deleted_post but you never use it; you just set @closed_post twice (presumably incorrectly the second time to the ID of the Deleted Post row.)But then again, you don't use @closed_post or @deleted_post in your query, so why are you getting those exactly?Some None-1-2 testing would reveal if SE ever adds a new votetype beginning with Down your query will fail as you assume your @downvote variable will only be assigned a single scalar value. So either add TOP 1 to your query or change your WHERE clause to an = operator to prevent this. (Also, since you're only searching for the DownMod votetype, why the LIKE operator in the first place?)I removed the CTE entirely and changed your Main Query to:SELECT Posts.Id AS [Post Link] -- magic column , Posts.OwnerUserId AS [User Link] -- magic column , Posts.CreationDate AS [Creation Date] , Posts.ClosedDate AS [Closed Date]FROM PostsINNER JOIN Votes ON Posts.Id = Votes.PostIdWHERE Posts.CreationDate <= @today AND Posts.CreationDate > @target_week AND Posts.PostTypeId = @question_post AND Posts.ClosedDate IS NOT NULL AND VoteTypeId = @downvoteGROUP BY Posts.Id , Posts.OwnerUserId , Posts.CreationDate , Posts.ClosedDateORDER BY Posts.CreationDate DESCPersonally, I would just call your Top 1 Subquery to get the DownMod VotetypeID and Question PostTypeID directly in the query, but that's just religion. |
_webapps.46463 | I have searched and tried various solutions elsewhere but none seems to work for both conditions. I would like to have the formula:=IF($A2=Elementary, VLOOKUP($B2,'ELEM PRINCIPAL'!$B:$AA,7,false), VLOOKUP($B2,'HS PRINCIPAL'!$B:$AA,7,false))inserted as new rows are added to my spreadsheet. That is all simple I hope. Thanks. | Auto paste formula on form submission | google spreadsheets;google apps script | null |
_unix.259203 | As part of my security job, I analyze dozens of Google Chrome history files each day using sqlite3 over SSH.There are a few dozen authorized safe sites each user is allowed to navigate to. For my purposes, I don't care about these safe sites. To list the URLs of each history file and ignore the safe websites, I use grep -v and list each safe site as follows:sqlite3 /home/me/HistoryDatabaseFile.db select * from urls order by url; | grep -v safesite1.com | grep -v safesite2.com | grep -v safesite3.com | grep -v safesite4.comand on and on. My command has grown to at least 20 lines and is becoming unmanageable. Is there any way I could show the user's list of URLs while excluding my safe sites in a listed format? I'm imagining something like:safesite1.comsafesite2.comsafesite3.comand then bringing that list into the command. It can be internal or external- I don't really care as long as it ends up outputting in bash.Thanks for any help you can give me! | Run multiple piped grep commands from a list in bash | bash;ssh;grep;sql | null |
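The file-based list imagined in the question above maps directly onto grep's -f option; a minimal sketch, assuming the exclusions live in a file called safe_sites.txt (a hypothetical name) with one domain per line:

    # safe_sites.txt (one entry per line):
    #   safesite1.com
    #   safesite2.com
    # -F treats each line as a fixed string, -f reads the patterns from that file,
    # -v keeps only URLs matching none of them -- same behaviour as the long chain of grep -v pipes
    sqlite3 /home/me/HistoryDatabaseFile.db "select * from urls order by url;" | grep -vFf safe_sites.txt

Adding or removing a safe site then only means editing the text file, not the command itself.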
_codereview.162076 | I'm writing a Mandelbrot Set implementation, and to map the Mandelbrot coordinates to screen coordinates, I'm using Quil's map-range function. Essentially, it takes a number within a certain range, and maps it to a number of another range. It worked great when I was first testing since it's fast, but it forces using float numbers, which is unacceptable since I need far more precision.I looked up the source of the function, and wrote my own Clojure version. It works, but it's painfully slow. The following measurements were taken using the Criterium library's bench function. Casting times were not included in the measurement:quil's version: 31.7nsmy version, no casting: 450.7nsmy version, casting input to float: 130.1ns... casting to double: 122.5ns... casting to long: 440.8ns... casting to BigDecimal: 880.2nsIdeally, I'd like to be using BigDecimal, but using it currently causes the entire IDE to freeze, and froze my entire computer once.The main problem is, the code is so simple, I don't know how I could possibly improve it. It's literally just a math equation:(defn map-range [value start1 stop1 start2 stop2] (+ start2 (* (- stop2 start2) (/ (- value start1) (- stop1 start1)))))I understand that BigDecimal will always be slower, I've accepted that. But even a float to float comparison is more than 4x slower.Is there any kind of trickery I can use to at least make this as fast as quil's version? All they use is plain math; my code is a direct translation from Java's infix notation to Clojure's prefix notation.Any advice here would be appreciated. | Map-range supporting different precisions | performance;clojure | null |
_codereview.56957 | I am using try/catch syntax in combination with a database transaction to (hopefully) prevent partial registrations.I am wondering if I'm on the right track, and what ways, if any, I can improve my code.Please be advised that __construct() is passed an array of unsanitized post values that have been organized into an array after the controller has checked the CSRF tokens, and this object is constructed. Once construction is finished without error, a try catch block on add_user(). I would show some of the other code, but its a proprietary design pattern that closely emulates angular JavaScript (with far superior performance, but sacrificing readability and longer development time). Someday I'll opensource the design pattern.Here is an excerpt from my class AddNewUser that illustrates the question:function __construct($user){ if(!is_array($user)) { incident('possible hack attempt','registration'); throw new \Exception('Invalid data received'); } if($user['termsofservice'] !== 'agree') { incident('non ajax submission','registration'); throw new \Exception ('Must agree to terms of service'); } if($user['privacypolicy'] !== 'agree') { incident('non ajax submission','registration'); throw new \Exception ('Must agree to privacy policy'); } if(usernameExists($user['username'])){ incident('possible user enumeration','registration'); throw new \Exception('Username Taken'); } if(emailExists($user['email'])) { incident('possible email enumeration','registration'); throw new \Exception('Email Taken'); } if(minMaxRange(3,25,$user['username'])) { throw new \Exception('Username must be 3 to 25 charachters in length') ; } if(minMaxRange(8,50,$user['password'])) { throw new \Exception('Password must be 8 to 50 charachters in length'); } $this->user_name = security($user['username']); $this->user_pass = generateHash($user['password']); $this->user_email = $user['email']; $this->user_ip = get_ip_address(); $this->verification = generateActivationToken(); $this->signupstamp = time(); $this->user_agent = security($user['agent']); $this->user_active = 0; $this->user_verified = 0; $this->terms_of_service = security($user['termsofservice']); $this->privacy_policy = security($user['privacypolicy']);}public function adduser(){ global $db,$cfg; try{ $query = $db->query(START TRANSACTION;); $stmt = $db->prepare(INSERT INTO users (u_name, u_pass, u_email, u_verified, u_ip, u_active, u_verification, u_signup_stamp, u_agent) VALUES (?,?,?,?,?,?,?,?,?)); $stmt->bind_param('sssisisis',$this->user_name,$this->user_pass,$this->user_email,$this->user_verified,$this->user_ip,$this->user_active,$this->verification,$this->signupstamp,$this->user_agent); $stmt->execute(); $stmt->close(); $stmt = $db->prepare(INSERT INTO termsofservice (ip_addr,u_name,u_agent,answer,timestamp) VALUES (?,?,?,?,?)); $stmt->bind_param('ssssi',$this->user_ip,$this->user_name,$this->user_agent,$this->terms_of_service,$this->signupstamp); $stmt->execute(); $stmt->close(); $stmt = $db->prepare(INSERT INTO privacypolicy (ip_addr,u_name,u_agent,answer,timestamp) VALUES (?,?,?,?,?)); $stmt->bind_param('ssssi',$this->user_ip,$this->user_name,$this->user_agent,$this->privacy_policy,$this->signupstamp); $stmt->execute(); $stmt->close(); try{ $mail = new Email('registration',$this->user_email,$this->user_name,'Important: Account Activation',$cfg['email']['no-reply']); $query = $db->query(COMMIT;); } catch(Exception $e){ incident('smtp unreachable','registration'); throw new \Exception('Unable to send activation email'); } return true; } 
catch(Exception $e) { incident('sql problems','registration'); throw new \Exception('An unknown error occurred'); }} | Usage of try/catch and database transactions | php;error handling | null
_webmaster.49794 | I am trying to figure out best-practise for moving a single page of content from one domain to another in a way that will preserve search engine ranking. I have found plenty of instructions for wholesale domain moves, but this situation is slightly different. I have a blog, one page of which gets a lot of search traffic for a specific term. I would like to take that one page of content and move it to an entirely new domain, which I will then use to start adding new content to.My plan so far:register the new domainremove the single page of content from the old domainput up the single page of content on the new domainset up a 301 redirect for the URL of the page on the old domain, pointing to the new domain Is this the right way to do it? Is there anything else I need to do in order to keep Google happy? | How to move a single page of content to a new domain | seo;google;domains;blog | Yes, that's a good start. To add to your steps:Check to make sure that you don't have any internal links to the old page in your content or sitemap, and if so change them to the new URL.Check for external links in Google Webmaster Tools, and if possible, try to contact the webmaster of the referring site to change them to the new URL.To avoid any potential for duplicate content issues, in case it appears elsewhere, you might add a Canonical link to the page.Submit your updated sitemap to Google Webmaster Tools, and other search engines.Use the Fetch As Google function in Google Webmaster Tools so that Google will re-crawl and index your site. See this for more information on that:Ask Google to crawl a page or site |
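A concrete form of the redirect step, assuming the old domain is served by Apache; the page and domain names below are placeholders:

    # .htaccess (or the vhost config) on the old domain, using mod_alias
    Redirect 301 /popular-post https://newdomain.example/popular-post

    # quick check from a shell that the redirect answers as expected
    curl -I http://olddomain.example/popular-post
    # look for: HTTP/1.1 301 Moved Permanently
    #           Location: https://newdomain.example/popular-post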
_unix.375644 | Can I use aplay to play sound from an internet stream in real time, for example: aplay http://... If possible, how would I write the command? | Can 'aplay' play sound from the internet? | command line;http;music player | null
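aplay has no network support of its own, but it does read audio from standard input, so a downloader can feed it; this only works directly for formats aplay understands (WAV or raw PCM). A sketch with placeholder URLs:

    # WAV / raw PCM stream: fetch it with curl (or wget -O -) and pipe it straight into aplay
    curl -s http://example.com/stream.wav | aplay

    # compressed streams (MP3, Ogg, ...) are not understood by aplay; a decoder/player such as
    # mpg123, if installed, can read the piped stream itself
    curl -s http://example.com/stream.mp3 | mpg123 -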
_unix.25404 | Possible Duplicate:Is Linux a Unix? Kind of confused by the two terms. Is there any difference between the two terms? | Is there a difference between Linux and Unix? | linux | This really depends on what you mean by Unix. Unix has come to mean various things in modern times (and even at the creation point of Linux, it meant multiple things).In general, Unix is not a particular system, but a specification for systems calling themselves Unix-like. When people say Unix they do not necessarily mean the proprietary operating system owned by AT&T/Novell/Cisco/whoever now owns it when you're reading this, rather, they usually are referring to the whole spectrum of Unix-like OSes, like AIX, HP-UX, Linux, BSD, Solaris, etc. To this degree, Linux is a FOSS, Unix-like kernel. It is not a direct fork of the original Unix codebase, but it shares many similarities.Another reason that many people regard Linux to be Unix-like is the fact that it is mostly POSIX-compliant (which is very important for compatibility with other Unix-like systems). Some also associate Linux with Unix because of the initial history of the project -- Linux was largely inspired by (but was not a fork of) MINIX, which is, and was, widely regarded to be an attempt to create a FOSS Unix clone. Many Linux distributions also often implement many tools (or clones/approximations of tools) from Unix, often in the form of GNU Coreutils. Nowadays these tools have been changed a lot (some would argue for the worse, GNU Coreutils is notorious for feature creep), but usually still maintain portability with their original counterparts.Linux is also indisputably free, open-source software under the GPL, whereas the licensing of the original Unix codebase often depends on who you're asking, and when. |
_unix.387046 | I built a script that queries the domain registry. Just a disclaimer, this is NOT a hacking attempt. I am trying to gather a list of domains that my company does NOT host. Lets assume that the NameServer my company owns is ns.foo.net and my script does the following:Checks whois to find the Name Server of each domain.Then checks to see if the A record (querying the NameServer found) matches our IP address.If the Name Server is not ours AND the A record of the domain is not ours, add to a list. Otherwise, ignore it and move on through the list.Let us also assume the list of domains are as follows:example.netexampleagain.orgblah.orgwhatever.comHere is my script:#!/bin/bashFILE=sites.txtREGEX=Name ServerOUR_IP=192.168.5.10#Read from Text filewhile read -r line; do SERVER=`whois $line | grep -m 1 $REGEX | awk '{print $3}'` shopt -s nocasematch HOST=`host -ta $line $SERVER` if [[ ! $SERVER =~ foo.net && ! $HOST =~ $OUR_IP ]]; then echo $line >> results.txt fi sleep 1;done<$FILEI am noticing that the domains being added contains an A record of $OUR_IP and sometimes foo.net (my domain). What could be wrong with the if statement that is breaking the logic? | DNS query with bad logic | bash;dns | null |
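One way to make the comparison in that script less fragile, sketched rather than offered as a diagnosis of the exact failure: host -ta returns a whole sentence, and unquoted variables plus regex matches (where dots match any character) make false positives easy, so extracting bare values and comparing exact strings is safer. The dig +short call, the awk extraction and the variable names below are assumptions, since whois output formatting varies by registry:

    #!/bin/bash
    OUR_IP="192.168.5.10"
    OUR_NS_SUFFIX="foo.net"

    while read -r domain; do
        # take the first "Name Server:" value and lowercase it (format varies per registry)
        ns=$(whois "$domain" | awk -F': *' 'tolower($0) ~ /name server/ {print tolower($2); exit}')
        # ask that name server directly and keep only the address (A is dig's default query type)
        ip=$(dig +short "$domain" @"$ns" | head -n1)
        if [[ "$ns" != *"$OUR_NS_SUFFIX" && "$ip" != "$OUR_IP" ]]; then
            echo "$domain" >> results.txt
        fi
        sleep 1
    done < sites.txt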
_unix.209270 | Let's say you have to echo this into a file: RZWa4k6[)b!^%*X6Evf How do you do it? My actual line to echo is a 2048-character line. | Echoing something with multiple quotes and key characters (&, $, !, etc.) | shell script;quoting;echo | null
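The usual ways to write such a line verbatim, shown with the short sample string from the question (file.txt is a placeholder name):

    # single quotes pass everything through literally, including !, $, &, * and parentheses
    echo 'RZWa4k6[)b!^%*X6Evf' > file.txt

    # printf avoids echo's portability quirks around backslashes
    printf '%s\n' 'RZWa4k6[)b!^%*X6Evf' > file.txt

    # if the real 2048-character line contains single quotes as well, a quoted here-document
    # keeps it verbatim with no escaping at all
    cat > file.txt <<'EOF'
    RZWa4k6[)b!^%*X6Evf
    EOF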
_unix.344440 | I have two computers connected to the same router (so they are essentially connected in a LAN). Both run some GNU+Linux distribution. I have a bunch of files, in a directory ~/A/ on my first computer that I would like to transfer to my second computer.The names of the files in A are contained in a certain list, say names_list. Now I would like for each of these files to be accessible via a local address, provided with reference to the router (such as 192.168.2.1:2112/name_of_file or something similar), so that the second computer may simply download each file one-by-one when given the names_list.How can I do this? The downloading part is trivial, I am asking mainly regarding setting up the host computer to provide files at specific local addresses. | Make files available through local address | files;filesystems;file sharing;file transfer;lan | Plenty of remote filesystems exist. There are three that are most likely to be useful to you.SSHFS accesses files via an SSH shell connection (or more precisely, via SFTP). You don't need to set up anything exotic: just install the OpenSSH server on one machine, install the client on the other machine, and set up a way to log in from the client to the server (either with a password or with a key). Then mount the remote directory on the first computer:mkdir ~/second-computer-Asshfs 192.168.2.1:A ~/second-computer-ASSHFS is the easiest one to set up as long as you have access to all the files through your user account on the second computer.NFS is Unix's traditional network filesystem protocol. You need to install an NFS server on the server. Linux provides two, one built into the kernel (but you still need userland software to manage the underlying RPC protocol and the additional lock protocol) and one as a pure userland software. Pick either; the kernel one is slightly faster and slightly easier to set up. On the server, you need to export the directory you want to access remotely, by adding an entry to /etc/exports:/home/zakoda/A 192.168.2.2(rw,sync)On the second computer, as root:mkdir /media/second-computer-Amount -t nfs 192.168.2.1:/home/zakoda/A /media/second-computer-ABy default NFS uses numerical user and group IDs, not user and group names. So this only works well if you have the same user IDs on the server and on the client. If you don't, set up nfsidmap on the server.Samba is Windows's network filesystem protocol (rather, it's an open-source implementation of the protocol, which was called SMB and is now called CIFS). It's also available on Linux and other Unix-like systems. It's mainly useful to mount files from a Windows machine on a Unix machine or vice versa, but it can also be used between Unix machines. It has the advantage that matching accounts is easier to set up than with NFS. The initial setup is a bit harder but there are plenty of tutorials, e.g. server and client. |
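Since the answer above only gestures at Samba, a minimal version of that route might look as follows; the share name, the zakoda account and the service name are assumptions (the daemon is smbd on Debian-style systems, smb on Red Hat-style ones), and the client needs cifs-utils installed:

    # on the first computer: add a share to /etc/samba/smb.conf
    [A]
       path = /home/zakoda/A
       valid users = zakoda
       read only = no

    # give the Unix account a Samba password and restart the daemon
    smbpasswd -a zakoda
    systemctl restart smbd      # or: service smb restart on Red Hat-style systems

    # on the second computer: mount the share
    mkdir -p /media/second-computer-A
    mount -t cifs //192.168.2.1/A /media/second-computer-A -o username=zakoda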
_unix.197122 | First of all I'm new to awk so please excuse if it's something simple.I'm trying to generate a file that contains paths. I'm using for this an ls -LT listing as well as an awk script:This is an example of the input file:vagrant@precise64:/vagrant$ cat structure-of-home.cnf/home/:vagrant/home/vagrant:postinstall.shThis would be the expected output:/home/vagrant/home/vagrant/postinstall.shThe awk script should do the following:Check whether the line has a : in itIf yes allocate the string (without :) to a variable ($path in my case)If the line is empty print nothingIf it's not empty and it does not contain a : print the $path and then the current line $0Here's the script:BEGIN{path=}{ if ($1 ~ /\:/) { sub(/\:/,,$1) if (substr($1, length,1) ~ /\//) { path=$1; } else { path=$1/ } } else if (length($0) == 0) {} else print $path$1}The problem is that when I run the script I get the following mess:vagrant@precise64:/vagrant$ awk -f format_output.awk structure-of-home.cnfvagrantvagrantpostinstall.shpostinstall.shWhat am I doing wrong please? | awk ifs and variables - cannot pass a variable from one line towards subsequent lines | awk | As pointed out by taliezin, your mistake was to use $ to expand path when printing. Unlike bash or make, awk doesn't use the $ to expand variables names to their value, but to refer to the fields of a line (similar to perl).So just removing this will make your code work:BEGIN{path=}{ if ($1 ~ /\:/) { sub(/\:/,,$1) if (substr($1, length,1) ~ /\//) { path=$1; } else { path=$1/ } } else if (length($0) == 0) {} else print path$1}However, this is not really an awkish solution:First of all, there is no need to initialize path in a BEGIN rule, non-defined variables default to or 0, depending on context.Also, any awk script consist of patterns and actions, the former stating when, the latter what to do.You have one action that's always executed (empty pattern), and internally uses (nested) conditionals to decide what to do.My solution would look like this:# BEGIN is actually a pattern making the following rule run only once:# That is, before any input is read.BEGIN{ # Split lines into chunks (fields) separated by :. # This is done by setting the field separator (FS) variable accordingly:# FS=: # this would split lines into fields by : # Additionally, if a field ends with /, # we consider this part of the separator. # So fields should be split by a : that *might* # be predecessed by a /. # This can be done using a regular expression (RE) FS: FS=/?: # ? means the previous character may occur 0 or 1 times # When printing, we want to join the parts of the paths by /. # That's the sole purpose of the output field separator (OFS) variable: OFS=/}# First we want to identify records (i.e. in this [default] case: lines),# that contain(ed) a :.# We can do that without any RE matching, since records are# automatically split into fields separated by :.# So asking >>Does the current line contain a :?<< is now the same# as asking >>Does the current record have more than 1 field?<<.# Luckily (but not surprisingly), the number of fields (NF) variable# keeps track of this:NF>1{ # The follwoing action is run only if are >1 fields. # All we want to do in this case, is store everything up to the first :, # without the potential final /. 
# With our FS choice (see above), that's exactly the 1st field: path=$1}# The printing should be done only for non-empty lines not containing :.# In our case, that translates to a record that has neither 0 nor >1 fields:NF==1{ # The following action is only run if there is exactly 1 field. # In this case, we want to print the path varible (no need for a $ here) # followed by the current line, separated by a /. # Since we defined the proper OFS, we can use , to join output fields: print path,$1 # ($1==$0 since NF==1)}And that's all. Removing all the comments and moving the [O]FS definitions to command line arguments, all you have to write is:awk -F/?: -vOFS=\/ 'NF>1{path=$1}NF==1{print path,$1}' structure-of-home.cnf |
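Running that condensed one-liner on the sample file from the question reproduces the expected output (the quoting below differs cosmetically from the line above but is equivalent):

    $ awk -F'/?:' -v OFS='/' 'NF>1{path=$1} NF==1{print path,$1}' structure-of-home.cnf
    /home/vagrant
    /home/vagrant/postinstall.sh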
_unix.265911 | What are the differences betweenKill a processSuspend a process Terminate a processIn which situations each methods uses practically? | What are the differences between KILL, SUSPEND and TERMINATE of a process | bash;process | To suspend a process means to make it stop executing for some time. When the process is suspended, it doesn't run, but it's still present in memory, waiting to be resumed (woken up). A process can be suspended by sending it the STOP signal, and resumed by sending it the CONT signal.To kill a process means to cause it to die. This can be done by sending it a signal. There are various different signal, and they don't all cause the process to die. the KILL signal always does cause the process to die; some other signals typically do but the process can choose to do something different; and there are signals whose role is not to cause the process to die, for example STOP and CONT. Note that the kill utility and the kill C function send a signal, which may or may not actually kill the process.To terminate a process means to cause it to die. The difference between kill and terminate is that kill generally refers specifically to sending a signal, whereas terminate usually also includes other methods such as sending the process a command that tells it to exit (if the process includes a command interpreter of some kind). |
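A short terminal demonstration of those three signals, using a throwaway sleep process:

    sleep 600 &              # a long-running test process
    pid=$!

    kill -STOP "$pid"        # suspend: the process keeps its memory but stops executing
    ps -o pid,stat,cmd -p "$pid"   # STAT shows 'T' (stopped) while it is suspended

    kill -CONT "$pid"        # resume exactly where it left off

    kill -TERM "$pid"        # ask it to terminate (this is what plain 'kill' sends by default)
    kill -KILL "$pid"        # only needed if it ignored TERM; KILL cannot be caught or ignored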
_softwareengineering.136376 | I was a full time java developer, now I'm also working with JavaScript and Android. A couple of years back when I started learning JavaScript, the first library I tried was jQuery. But it made my life harder, and after sometime I started writing fairly large a JavaScript app. It wasn't coming together for me using jQuery. I had huge a code base without much of a structure. Method blocks updating HTML blocks using selectors. Then I tried MooTools and obviously as a Java developer it appealed to me a lot. And I was able to write managable web apps having huge code base. As per my understanding MooTools is not considered a preferred way to write JavaScript because it mimics conventional OO over default prototype-based OO language. So now to really understand Javascript and desire of walking with the world, I decided to try other approaches, so again I turned back to jQuery, and realise that only jQueryis not enough. So started looking at current trending frameworks like backbone, spine, ember.js, sprouteCore. Strangely I found that these frameworks mimic conventional OO like MooTools only by having constructors and creating a object of class and reusing this class object to create instance objects. SoAm I missing something? Is MooTools really wrong? MooTools project is very alive and releases new versions/features, but I don'tsee many people talking about it on internet, also there are nocomparisons vs backbone/spine etc. | Is mootools alternative of jquery + backbone / spine / sprouteCore | javascript;jquery;mootools | null |
_softwareengineering.63215 | We're going to do a complete review of a Java/JEE based application. This includes an architecture review, code review and platform hardware review.While we're a bit aware of code review techniques, I'm wondering if there is a template or reference model for doing an architecture review for Java/JEE systems.Currently we're looking at following the ATAM Model to build a Quality Attribute Tree to cover the elements of Performance, Reliability, Availability, Security, Modifiability, Portability, Variability, Subsetability, Conceptual integrity and FunctionalityThis is a first for us - so the question is whether there are any other standard models you follow or whether anyone has tried ATAM before and has any tips/recommendations/tools for the Architecture review. | Architecture Review Guidelines for Java/JEE project | java;architecture;quality;review | null |
_webmaster.53132 | I got a ton of not found errors in Google Webmaster Tools, but the weird thing is, all of the pages ARE found. There is absolutely no error in the URLs they list that supposedly don't exist. Why is this happening? | Google Webmaster Tools increase in not found errors is wrong | google search console;404;crawl errors | null |
_unix.85411 | To prevent fork bomb I followed this http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htmulimit -a reflects the new settings but when I run (as root in bash) :(){ :|:&};: the VM still goes on max CPU+RAM and system will freeze. How to ensure users will not be bring down the system by using fork bombs or running a buggy application?OS: RHEL 6.4 | How to prevent fork bomb? | rhel;freeze;resources;ulimit | The superuser or any process with the CAP_SYS_ADMIN or CAP_SYS_RESOURCE capabilities are not affected by that limitation, that's not something that can be changed. root can always fork processes.If some software is not trusted, it should not run as root anyway. |
_softwareengineering.191949 | I know there are different ways to combine programming languages (Haskell's FFI, Boost with C++ and Python, etc...). I have an odd interest in combining programming languages; however, I have only found it necessary once (I didn't want to rewrite some older code). Also, I notice that this interest is shared (there are an abundance of questions about integrating languages on SO).My question is, simply, are there any other benefits in combining programming languages? Is there value in mixing different programming paradigms (e.g. functional+OO, procedural+aspect-oriented)?Any from-the-field examples would be much appreciated.UPDATEWhen I say combine two languages I am talking about using them in conjunction, in ways not necessarily originally intended. For example, suppose I use Boost to incorporate Python code in C++. | Benefits of combining programming languages | programming languages;language agnostic | A typical example shows up in the Computer games, particularly AAA titles where a C++ backend is the norm. The interface section will often be designed in a scripting language such as Python or Lua. This allows for easy modification by both the developers so they can test out new interface designs without messing with the highly complex physics and graphics engines underneath and at the same time allows for easy modification by players who may not have the coding chops to handle a full game engine but can competently do a few interface tweaks.Another widespread use case is the web itself. Javascript, CSS and HTML combine to form a front end with whatever you want as a backend on the server. Windows 8 apps use a similar approach with the declarative XAML defining interfaces and providing class outlines while C# fills in the details and runs the thing.Thus, the typical divide will be a fast, statically typed language running a solidly tested codebase handling heavily numerical work with a scripting language running on top of it to provide easily changeable frontends handling visual and presentation functions as well as input.Other use cases exist as well. Data processing languages like R can be bolted on to other codebases to provide analytic and presentation functions more easily than the native codebase might be able to. As far as combinations of paradigms, a combination of Prolog for database work and some other language to handle the standard program functions is a known case. Another case would be having Fortran or Assembly routines for fast numerical computation within some scripting language like Python. Which is exactly what Numpy and Scipy do. |
_unix.242142 | I know that Iptables is a user-space module, I also read it configures kernel modules to do big part of the filtering. So my question is, if I add a rule to allow only TCP:443 packets, would this be handled at the kernel level? | Iptables: does dropping UDP packets take place in the user-space or kernel-space? | linux;linux kernel;iptables;firewall | null |
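The split the question asks about can be seen from the tooling itself: the iptables binary runs in user space and only installs rules, while the in-kernel netfilter hooks do the actual matching and dropping and keep the packet counters. A small example (take care with a DROP policy on a remote machine; make sure SSH is allowed first):

    # user-space command loads rules into the kernel's netfilter tables
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    iptables -P INPUT DROP              # default policy for everything else

    # the rules and their hit counters live in kernel memory; listing them reads that state back
    iptables -L INPUT -n -v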
_codereview.52140 | ProblemI am learning about HPC and code optimization. I attempt to replicate the results in Goto's seminal matrix multiplication paper. Despite my best efforts, I cannot get over ~50% maximum theoretical CPU performance.BackgroundSee related issues here, including info about my hardware.What I have attemptedThis related paper has a good description of Goto's algorithmic structure. I provide my source code below.My questionI am asking for general help. I have been working on this for far too long, have tried many different algorithms, inline assembly, inner kernels of various sizes (2x2, 4x4, 2x8, ..., mxn with m and n large), yet I cannot seem to break 50% CPU GFLOPS. This is purely for education purposes and not a homework.Compile OptionsOn 32 bit GCC:gcc -std=c99 -O3 -msse3 -ffast-math -march=nocona -mtune=nocona -funroll-loops -fomit-frame-pointer -masm=intelSource CodeI set up the macro structure (for loops) as described in the 2nd paper above. I pack the matrices as discussed in either paper. My inner kernel computes 2x8 blocks, as this seems to be the optimal computation for Nehalem architecture (see GotoBLAS source code - kernels). The inner kernel is based on the concept of calculating rank-1 updates as described here.#include <stdio.h>#include <time.h>#include <stdlib.h>#include <string.h>#include <x86intrin.h>#include <math.h>#include <omp.h>#include <stdint.h>// define some prefetch functions#define PREFETCHNTA(addr,nrOfBytesAhead) \ _mm_prefetch(((char *)(addr))+nrOfBytesAhead,_MM_HINT_NTA)#define PREFETCHT0(addr,nrOfBytesAhead) \ _mm_prefetch(((char *)(addr))+nrOfBytesAhead,_MM_HINT_T0)#define PREFETCHT1(addr,nrOfBytesAhead) \ _mm_prefetch(((char *)(addr))+nrOfBytesAhead,_MM_HINT_T1)#define PREFETCHT2(addr,nrOfBytesAhead) \ _mm_prefetch(((char *)(addr))+nrOfBytesAhead,_MM_HINT_T2)// define a min function#ifndef min #define min( a, b ) ( ((a) < (b)) ? 
(a) : (b) )#endif// zero a matrixvoid zeromat(double *C, int n){ int i = n; while (i--) { int j = n; while (j--) { *(C + i*n + j) = 0.0; } }}// compute a 2x8 block from (2 x kc) x (kc x 8) matricesinline void __attribute__ ((gnu_inline)) __attribute__ ((aligned(64))) dgemm_2x8_sse( int k, const double* restrict a1, const int cs_a, const double* restrict b1, const int rs_b, double* restrict c11, const int rs_c ){ register __m128d xmm1, xmm4, // r8, r9, r10, r11, r12, r13, r14, r15; // accumulators // 10 registers declared here r8 = _mm_xor_pd(r8,r8); // ab r9 = _mm_xor_pd(r9,r9); r10 = _mm_xor_pd(r10,r10); r11 = _mm_xor_pd(r11,r11); r12 = _mm_xor_pd(r12,r12); // ab + 8 r13 = _mm_xor_pd(r13,r13); r14 = _mm_xor_pd(r14,r14); r15 = _mm_xor_pd(r15,r15); // PREFETCHT2(b1,0); // PREFETCHT2(b1,64); //int l = k; while (k--) { //PREFETCHT0(a1,0); // fetch 64 bytes from a1 // i = 0 xmm1 = _mm_load1_pd(a1); xmm4 = _mm_load_pd(b1); xmm4 = _mm_mul_pd(xmm1,xmm4); r8 = _mm_add_pd(r8,xmm4); xmm4 = _mm_load_pd(b1 + 2); xmm4 = _mm_mul_pd(xmm1,xmm4); r9 = _mm_add_pd(r9,xmm4); xmm4 = _mm_load_pd(b1 + 4); xmm4 = _mm_mul_pd(xmm1,xmm4); r10 = _mm_add_pd(r10,xmm4); xmm4 = _mm_load_pd(b1 + 6); xmm4 = _mm_mul_pd(xmm1,xmm4); r11 = _mm_add_pd(r11,xmm4); // // i = 1 xmm1 = _mm_load1_pd(a1 + 1); xmm4 = _mm_load_pd(b1); xmm4 = _mm_mul_pd(xmm1,xmm4); r12 = _mm_add_pd(r12,xmm4); xmm4 = _mm_load_pd(b1 + 2); xmm4 = _mm_mul_pd(xmm1,xmm4); r13 = _mm_add_pd(r13,xmm4); xmm4 = _mm_load_pd(b1 + 4); xmm4 = _mm_mul_pd(xmm1,xmm4); r14 = _mm_add_pd(r14,xmm4); xmm4 = _mm_load_pd(b1 + 6); xmm4 = _mm_mul_pd(xmm1,xmm4); r15 = _mm_add_pd(r15,xmm4); a1 += cs_a; b1 += rs_b; //PREFETCHT2(b1,0); //PREFETCHT2(b1,64); } // copy result into C PREFETCHT0(c11,0); xmm1 = _mm_load_pd(c11); xmm1 = _mm_add_pd(xmm1,r8); _mm_store_pd(c11,xmm1); xmm1 = _mm_load_pd(c11 + 2); xmm1 = _mm_add_pd(xmm1,r9); _mm_store_pd(c11 + 2,xmm1); xmm1 = _mm_load_pd(c11 + 4); xmm1 = _mm_add_pd(xmm1,r10); _mm_store_pd(c11 + 4,xmm1); xmm1 = _mm_load_pd(c11 + 6); xmm1 = _mm_add_pd(xmm1,r11); _mm_store_pd(c11 + 6,xmm1); c11 += rs_c; PREFETCHT0(c11,0); xmm1 = _mm_load_pd(c11); xmm1 = _mm_add_pd(xmm1,r12); _mm_store_pd(c11,xmm1); xmm1 = _mm_load_pd(c11 + 2); xmm1 = _mm_add_pd(xmm1,r13); _mm_store_pd(c11 + 2,xmm1); xmm1 = _mm_load_pd(c11 + 4); xmm1 = _mm_add_pd(xmm1,r14); _mm_store_pd(c11 + 4,xmm1); xmm1 = _mm_load_pd(c11 + 6); xmm1 = _mm_add_pd(xmm1,r15); _mm_store_pd(c11 + 6,xmm1);}// packs a matrix into rows of sliversinline void __attribute__ ((gnu_inline)) __attribute__ ((aligned(64))) rpack( double* restrict dst, const double* restrict src, const int kc, const int mc, const int mr, const int n){ double tmp[mc*kc] __attribute__ ((aligned(64))); double* restrict ptr = &tmp[0]; for (int i = 0; i < mc; ++i) for (int j = 0; j < kc; ++j) *ptr++ = *(src + i*n + j); ptr = &tmp[0]; //const int inc_dst = mr*kc; for (int k = 0; k < mc; k+=mr) for (int j = 0; j < kc; ++j) for (int i = 0; i < mr*kc; i+=kc) *dst++ = *(ptr + k*kc + j + i);}// packs a matrix into columns of sliversinline void __attribute__ ((gnu_inline)) __attribute__ ((aligned(64))) cpack(double* restrict dst, const double* restrict src, const int nc, const int kc, const int nr, const int n){ double tmp[kc*nc] __attribute__ ((aligned(64))); double* restrict ptr = &tmp[0]; for (int i = 0; i < kc; ++i) for (int j = 0; j < nc; ++j) *ptr++ = *(src + i*n + j); ptr = &tmp[0]; // const int inc_k = nc/nr; for (int k = 0; k < nc; k+=nr) for (int j = 0; j < kc*nc; j+=nc) for (int i = 0; i < nr; ++i) *dst++ = *(ptr + k + i + j);}void 
blis_dgemm_ref( const int n, const double* restrict A, const double* restrict B, double* restrict C, const int mc, const int nc, const int kc ){ int mr = 2; int nr = 8; double locA[mc*kc] __attribute__ ((aligned(64))); double locB[kc*nc] __attribute__ ((aligned(64))); int ii,jj,kk,i,j; #pragma omp parallel num_threads(4) shared(A,B,C) private(ii,jj,kk,i,j,locA,locB) {//use all threads in parallel #pragma omp for // partitions C and B into wide column panels for ( jj = 0; jj < n; jj+=nc) { // A and the current column of B are partitioned into col and row panels for ( kk = 0; kk < n; kk+=kc) { cpack(locB, B + kk*n + jj, nc, kc, nr, n); // partition current panel of A into blocks for ( ii = 0; ii < n; ii+=mc) { rpack(locA, A + ii*n + kk, kc, mc, mr, n); for ( i = 0; i < min(n-ii,mc); i+=mr) { for ( j = 0; j < min(n-jj,nc); j+=nr) { // inner kernel that compues 2 x 8 block dgemm_2x8_sse( kc, locA + i*kc , mr, locB + j*kc , nr, C + (i+ii)*n + (j+jj), n ); } } } } } }}double compute_gflops(const double time, const int n){ // computes the gigaflops for a square matrix-matrix multiplication double gflops; gflops = (double) (2.0*n*n*n)/time/1.0e9; return(gflops);}// ******* MAIN ********//void main() { clock_t time1, time2; double time3; double gflops; const int trials = 10; int nmax = 4096; printf(%10s %10s\n,N,Gflops/s); int mc = 128; int kc = 256; int nc = 128; for (int n = kc; n <= nmax; n+=kc) { //assuming kc is the max dim double *A = NULL; double *B = NULL; double *C = NULL; A = _mm_malloc (n*n * sizeof(*A),64); B = _mm_malloc (n*n * sizeof(*B),64); C = _mm_malloc (n*n * sizeof(*C),64); srand(time(NULL)); // Create the matrices for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { A[i*n + j] = (double) rand()/RAND_MAX; B[i*n + j] = (double) rand()/RAND_MAX; //D[j*n + i] = B[i*n + j]; // Transpose C[i*n + j] = 0.0; } } // warmup zeromat(C,n); blis_dgemm_ref(n,A,B,C,mc,nc,kc); zeromat(C,n); time2 = 0; for (int count = 0; count < trials; count++){// iterations per experiment here time1 = clock(); blis_dgemm_ref(n,A,B,C,mc,nc,kc); time2 += clock() - time1; zeromat(C,n); } time3 = (double)(time2)/CLOCKS_PER_SEC/trials; gflops = compute_gflops(time3, n); printf(%10d %10f\n,n,gflops); _mm_free(A); _mm_free(B); _mm_free(C); } printf(tests are done\n);} | Optimizing multiplication of square matrices for full CPU utilization | optimization;c;matrix;sse;openmp | null |
_unix.89550 | I fight with svn since 2 hours to store my password inside the gnome keyring, but nothing worked. I'm on a fresh installed archlinux system with the following packages installed:acl 2.2.52-1alsa-lib 1.0.27.2-1alsa-utils 1.0.27.2-1apr 1.4.8-1apr-util 1.5.2-1arandr 0.1.7.1-1archlinux-keyring 20130818-1aspell 0.60.6.1-1at-spi2-atk 2.8.1-1at-spi2-core 2.8.0-1atk 2.8.0-1attr 2.4.47-1aurvote 1.5-2autoconf 2.69-1automake 1.14-1avahi 0.6.31-10bash 4.2.045-5binutils 2.23.2-3bison 3.0-1boost-libs 1.54.0-3bzip2 1.0.6-4ca-certificates 20130610-1ca-certificates-java 20130815-1cairo 1.12.16-1cdparanoia 10.2-4chromium 29.0.1547.65-1cloog 0.18.0-2clucene 2.3.3.4-7colord 1.0.2-2compositeproto 0.4.2-2coreutils 8.21-2cracklib 2.9.0-1cronie 1.4.9-5cryptsetup 1.6.2-1curl 7.32.0-1customizepkg 0.2.1-2damageproto 1.2.1-2db 5.3.21-1dbus 1.6.12-1dbus-glib 0.100.2-1dconf 0.16.1-1desktop-file-utils 0.21-1device-mapper 2.02.100-1dhcpcd 6.0.5-1dialog 1.2_20130523-2diffutils 3.3-1dirmngr 1.1.1-1dnssec-anchors 20130320-1dotconf 1.3-3e2fsprogs 1.42.8-1elfutils 0.155-1enca 1.14-1enchant 1.6.0-4exo 0.10.2-1expat 2.1.0-2faac 1.28-4faad2 2.7-3fakeroot 1.19-1farstream-0.1 0.1.2-2fftw 3.3.3-1file 5.14-1filesystem 2013.05-2findutils 4.4.2-5firefox 23.0.1-1fixesproto 5.0-2flac 1.3.0-1flashplugin 11.2.202.297-1flex 2.5.37-1fontconfig 2.10.95-1fontsproto 2.1.2-1freeglut 2.8.1-1freetype2 2.5.0.1-1fribidi 0.19.5-1garcon 0.2.1-1gawk 4.1.0-1gcc 4.8.1-3gcc-libs 4.8.1-3gconf 3.2.6-2gcr 3.8.2-1gdbm 1.10-1gdk-pixbuf2 2.28.2-1gettext 0.18.3.1-1giflib 5.0.4-2git 1.8.4-1glib-networking 2.36.2-1glib2 2.36.4-1glibc 2.18-3glu 9.0.0-2gmp 5.1.2-1gnome-icon-theme 3.8.3-1gnome-icon-theme-symbolic 3.8.3-1gnome-keyring 3.8.2-1gnupg 2.0.21-1gnutls 3.2.4-1gpgme 1.4.3-1gpm 1.20.7-3graphite 1:1.2.3-1grep 2.14-2grml-zsh-config 0.8.2-1groff 1.22.2-5grub 2.00.5086-1gsettings-desktop-schemas 3.8.2-1gsm 1.0.13-7gstreamer0.10 0.10.36-2gstreamer0.10-bad 0.10.23-4gstreamer0.10-bad-plugins 0.10.23-4gstreamer0.10-base 0.10.36-1gstreamer0.10-base-plugins 0.10.36-1gstreamer0.10-ffmpeg 0.10.13-1gstreamer0.10-good 0.10.31-3gtk-engines 2.21.0-1gtk-update-icon-cache 2.24.20-1gtk2 2.24.20-1gtk2-xfce-engine 3.0.1-1gtk3 3.8.4-1gtk3-xfce-engine 3.0.1-1gtkspell 2.0.16-2gzip 1.6-1harfbuzz 0.9.19-1harfbuzz-icu 0.9.19-1heirloom-mailx 12.5-3hicolor-icon-theme 0.12-2hspell 1.2-1hunspell 1.3.2-2hwids 20130607-1hyphen 2.8.6-1iana-etc 2.30-3icon-naming-utils 0.8.90-2icu 51.2-1inetutils 1.9.1-6inputproto 2.3-1intel-dri 9.2.0-1iproute2 3.10.0-1iptables 1.4.19.1-1iputils 20121221-3isl 0.12.1-1iso-codes 3.44-1jasper 1.900.1-8jdk7-openjdk 7.u40_2.4.1-3jfsutils 1.1.15-4jre7-openjdk 7.u40_2.4.1-3jre7-openjdk-headless 7.u40_2.4.1-3js 17.0.0-1json-c 0.11-1kbd 2.0.0-1kbproto 1.0.6-1keyutils 1.5.5-5kmod 15-1krb5 1.11.3-1ladspa 1.13-4lcms2 2.5-1ldns 1.6.16-1less 458-1lib32-gcc-libs 4.8.1-3lib32-glibc 2.18-3lib32-libstdc++5 3.3.6-6lib32-ncurses 5.9-2lib32-zlib 1.2.8-1libarchive 3.1.2-2libass 0.10.1-1libassuan 2.1.1-1libasyncns 0.8-4libatasmart 0.19-2libcap 2.22-5libcap-ng 0.7.3-1libcdaudio 0.99.12-6libcroco 0.6.8-1libcups 1.6.3-1libdaemon 0.14-2libdatrie 0.2.6-1libdc1394 2.2.1-1libdca 0.0.5-3libdrm 2.4.46-2libdv 1.0.0-4libdvdnav 4.2.0-2libdvdread 4.2.0-1libedit 20130601_3.1-1libevent 2.0.21-2libexif 0.6.21-1libffi 3.0.13-3libfontenc 1.1.2-1libgcrypt 1.5.3-1libglade 2.6.4-3libgme 0.6.0-2libgpg-error 1.12-1libgssglue 0.4-1libgusb 0.1.6-1libice 1.0.8-1libidn 1.26-1libimobiledevice 1.1.5-1libjpeg-turbo 1.3.0-2libksba 1.3.0-1libldap 2.4.35-4liblrdf 0.5.0-1libltdl 2.4.2-10libmbim 
1.4.0-1libmms 0.6.2-1libmng 2.0.2-2libmodplug 0.8.8.4-1libmp4v2 2.0.0-2libmpc 1.0.1-1libmpcdec 1.2.6-3libnice 0.1.4-1libnl 3.2.22-1libnotify 0.7.5-1libofa 0.9.3-4libogg 1.3.1-1libpcap 1.4.0-1libpciaccess 0.13.2-1libpipeline 1.2.4-1libplist 1.10-1libpng 1.6.3-1libproxy 0.4.11-2libpulse 4.0-2libpurple 2.10.7-4libqmi 1.4.0-2libraw1394 2.1.0-1libreoffice-af 4.1.1-1libreoffice-base 4.1.1-2libreoffice-calc 4.1.1-2libreoffice-common 4.1.1-2libreoffice-draw 4.1.1-2libreoffice-gnome 4.1.1-2libreoffice-impress 4.1.1-2libreoffice-math 4.1.1-2libreoffice-postgresql-connector 4.1.1-2libreoffice-sdk 4.1.1-2libreoffice-sdk-doc 4.1.1-2libreoffice-writer 4.1.1-2librsvg 1:2.37.0-1libsamplerate 0.1.8-2libsasl 2.1.26-4libsecret 0.15-2libsm 1.2.1-1libsndfile 1.0.25-2libsoup 2.42.2-1libssh2 1.4.3-1libtasn1 3.3-1libthai 0.1.19-1libtheora 1.1.1-3libtiff 4.0.3-3libtirpc 0.2.3-1libtool 2.4.2-10libunique 1.1.6-5libusbx 1.0.16-2libvdpau 0.7-1libvisual 0.4.0-4libvorbis 1.3.3-1libvpx 1.2.0-1libwebp 0.3.1-3libwnck 2.30.7-1libwpd 0.9.9-1libwps 0.2.9-1libx11 1.6.1-1libxau 1.0.8-1libxcb 1.9.1-2libxcomposite 0.4.4-1libxcursor 1.1.14-1libxdamage 1.1.4-1libxdmcp 1.1.1-1libxext 1.3.2-1libxfce4ui 4.10.0-1libxfce4util 4.10.1-2libxfixes 5.0.1-1libxfont 1.4.6-1libxft 2.3.1-1libxi 1.7.2-1libxinerama 1.1.3-1libxkbcommon 0.3.1-1libxkbfile 1.0.8-1libxklavier 5.3-1libxml2 2.9.1-2libxmu 1.1.1-1libxpm 3.5.10-1libxrandr 1.4.1-1libxrender 0.9.8-1libxres 1.0.7-1libxslt 1.1.28-1libxss 1.2.2-1libxt 1.1.4-1libxtst 1.2.2-1libxv 1.0.9-1libxvmc 1.0.8-1libxxf86vm 1.1.3-1libzeitgeist 0.3.18-3licenses 20130203-1linux 3.10.10-1linux-api-headers 3.10.6-1linux-firmware 20130725-1llvm-libs 3.3-1logrotate 3.8.6-1lpsolve 5.5.2.0-2lsb-release 1.4-13lsof 4.87-2lvm2 2.02.100-1lzo2 2.06-1m4 1.4.16-3make 3.82-6man-db 2.6.5-1man-pages 3.53-1mcpp 2.7.2-4mdadm 3.2.6-4mesa 9.2.0-1mesa-libgl 9.2.0-1mime-types 9-1mjpegtools 2.0.0-3mkinitcpio 0.15.0-1mkinitcpio-busybox 1.21.1-2modemmanager 1.0.0-1mozilla-common 1.4-3mpfr 3.1.2-1mtdev 1.1.3-1mumble 1.2.4-2musicbrainz 2.1.5-5nano 2.2.6-2ncurses 5.9-5neon 0.29.6-4net-tools 1.60.20130531git-1netctl 1.3-1nettle 2.7.1-1networkmanager 0.9.8.2-1nspr 4.10-2nss 3.15.1-1openjpeg 1.5.1-1openresolv 3.5.6-1openssh 6.2p2-1openssl 1.0.1.e-3opus 1.0.3-1orc 0.4.17-1p11-kit 0.18.4-1package-query 1.2-2pacman 4.1.2-1pacman-mirrorlist 20130830-1pam 1.1.6-4pambase 20130113-1pango 1.34.1-1parted 3.1-2patch 2.7.1-2pciutils 3.2.0-3pcmciautils 018-7pcre 8.33-1perl 5.18.1-1perl-error 0.17021-1perl-xml-parser 2.41-4perl-xml-simple 2.20-1pidgin 2.10.7-4pinentry 0.8.3-1pixman 0.30.2-1pkg-config 0.28-1pm-quirks 0.20100619-3pm-utils 1.4.1-6polkit 0.111-1poppler 0.24.1-1popt 1.16-6postgresql-libs 9.2.4-2ppp 2.4.5-8procps-ng 3.3.8-2protobuf 2.5.0-3psmisc 22.20-1pth 2.0.7-4pygobject2-devel 2.28.6-9pygtk 2.24.0-3python 3.3.2-1python-xdg 0.25-1python2 2.7.5-1python2-cairo 1.10.0-1python2-gobject2 2.28.6-9qt4 4.8.5-2randrproto 1.4.0-1raptor 2.0.9-2rasqal 1:0.9.30-1readline 6.2.004-1recode 3.6-7recordproto 1.14.2-1redland 1:1.0.16-2reiserfsprogs 3.6.24-1renderproto 0.11.1-2rsync 3.0.9-6rtmpdump 20121230-2run-parts 4.4-1schroedinger 1.0.11-1scrnsaverproto 1.2.2-1sdl 1.2.15-3seahorse 3.8.2-1sed 4.2.2-3serf 1.3.0-1sg3_utils 1.36-1shadow 4.1.5.1-6shared-color-profiles 0.1.5-1shared-mime-info 1.1-1slim 1.3.5-3snappy 1.1.0-1soundtouch 1.7.1-1speech-dispatcher 0.8-1speex 1.2rc1-3sqlite 3.8.0.1-1startup-notification 0.12-3strace 4.8-1subversion 1.8.1-2sudo 1.8.7-1sysfsutils 2.1.0-8systemd 204-3systemd-sysvcompat 204-3sysvinit-tools 2.88-11tar 1.26-4texinfo 
5.1-1thunar 1.6.3-1thunar-volman 0.8.0-1thunderbird 17.0.8-1tmux 1.8-1ttf-bitstream-vera 1.10-9tumbler 0.1.29-1tzdata 2013d-1udisks 1.0.4-8unixodbc 2.3.1-1upower 0.9.20-2usbmuxd 1.0.8-2usbutils 007-1util-linux 2.23.2-1vi 1:050325-3videoproto 2.3.2-1vim 7.4.0-2vim-runtime 7.4.0-2vte 0.28.2-3vte-common 0.34.7-1wayland 1.2.1-1wget 1.14-2which 2.20-6wildmidi 0.2.3.5-2wpa_supplicant 2.0-4xcb-proto 1.8-2xcb-util 0.3.9-1xdg-utils 1.1.0.git20130520-1xextproto 7.2.1-1xf86-input-evdev 2.8.1-1xf86-input-synaptics 1.7.1-1xf86-video-intel 2.21.15-1xf86-video-vesa 2.3.2-3xf86vidmodeproto 2.3.1-2xfce4-appfinder 4.10.1-1xfce4-mixer 4.10.0-2xfce4-panel 4.10.1-1xfce4-power-manager 1.2.0-4xfce4-session 4.10.1-2xfce4-settings 4.10.1-1xfce4-terminal 0.6.2-1xfconf 4.10.0-3xfdesktop 4.10.2-1xfsprogs 3.1.11-1xfwm4 4.10.1-1xfwm4-themes 4.10.0-1xineramaproto 1.2.1-2xkeyboard-config 2.9-2xorg-bdftopcf 1.0.4-1xorg-font-util 1.3.0-1xorg-font-utils 7.6-3xorg-fonts-alias 1.0.3-1xorg-fonts-encodings 1.0.4-3xorg-fonts-misc 1.0.1-2xorg-iceauth 1.0.6-1xorg-mkfontdir 1.0.7-1xorg-mkfontscale 1.1.1-1xorg-server 1.14.2-2xorg-server-common 1.14.2-2xorg-setxkbmap 1.3.0-1xorg-xauth 1.0.7-1xorg-xinit 1.3.2-3xorg-xinput 1.6.0-1xorg-xkbcomp 1.2.4-1xorg-xrandr 1.4.1-1xorg-xrdb 1.0.9-2xorg-xset 1.2.3-1xproto 7.0.24-1xvidcore 1.3.2-1xz 5.0.5-1yajl 2.0.4-1yaourt 1.3-1zip 3.0-3zlib 1.2.8-1zsh 5.0.2-3my svn configs looks like this:~/.subversion/config:cat ~/.subversion/config | grep -v ^#[auth]store-passwords = yesstore-auth-creds = yespassword-stores = gnome-keyring~/.subversion/serverscat ~/.subversion/servers | grep -v ^#[global]store-passwords = yesstore-plaintext-passwords = askI also played with store-plaintext-passwords = no|yes while not having a proper result! According to several Threads it should work with this configuration. Has Anyone an idea what I#m doing wrong or can try next? | Subversion (svn) doesn't store passwords in gnome-keyring | arch linux;password;subversion;gnome keyring | The gnome-keyring-daemon must be running for Subversion to store passwords in it. When the daemon starts, it emits two variables that need to be exported into your environment. So if it's already running, it might be easier to kill it and start over. Start it up like this:export $(nohup gnome-keyring-daemon 2>/dev/null)The output that gets sent to export looks something like this:GNOME_KEYRING_SOCKET=/tmp/keyring-OpuUEI/socketGNOME_KEYRING_PID=9256Now when you execute a Subversion subcommand that requires it to contact the server, the client will prompt for your Subversion password first, then your Gnome keyring password. The keyring should stay unlocked for at least the duration of your login session (and maybe longer).There are also some pointers on the ArchWiki that may be Arch-specific, so take a look there if my suggestions don't work. |
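A quick way to confirm that Subversion actually used the keyring once the daemon is running (the repository URL is a placeholder, and the cached file name differs per server):

    # any subcommand that contacts the server will prompt once and then cache the credential
    svn info https://svn.example.com/repo/trunk

    # with the keyring in use, the cached entry records a passtype of gnome-keyring
    # instead of a plaintext password line
    grep -A2 passtype ~/.subversion/auth/svn.simple/*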
_webapps.94729 | On the Evernote web app, is there a way to find and replace text?I can use the inbuilt find tool in my browser (Chrome) by pressing Ctrl + F, but I can't work out a quick way to replace text as well, given there doesn't seem to be an obvious option in the app.Is there an option I'm not seeing, or even a hacky way to do this that doesn't involve opening up a text editor and copy pasting?Googling for this leads to Evernote forum posts about using the Evernote Mac or PC application, which do have find/replace available, but unfortunately I can't find anything regarding this feature on the web app. | Find and replace on Evernote web app | evernote | null |
_unix.322196 | I have a Debian 32-bit machine running a server application. During the previous reboot there seems to have been some problem with the display manager. After boot the display is blank. I am able to SSH to it from other systems and have root access. During boot the display works fine. Can you please tell me how I can reinstall GNOME or the display manager, or reset these display settings? | Debian Wheezy GNOME corrupted after reboot | linux;debian;gnome;display | I don't know which display manager you use, so this will reinstall your display manager and GNOME: apt-get install --reinstall $(cat /etc/X11/default-display-manager | cut -d / -f 4) gnome-session
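If reinstalling alone does not bring the login screen back, restarting the display manager over SSH and checking the X log are reasonable next steps; a sketch, assuming the binary named in default-display-manager matches the init script name (true for gdm3, lightdm, slim and kdm):

    dm=$(basename "$(cat /etc/X11/default-display-manager)")
    service "$dm" restart          # Wheezy still uses sysvinit service scripts

    # if the screen stays blank, look for (EE) error lines from the X server
    grep '(EE)' /var/log/Xorg.0.log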
_scicomp.4979 | I have seen the method PCHIP in MATLAB, which implements the monotone Hermite interpolation method originally proposed by Carlson in the 1980s. It seems to accomplish the goal of preventing the values from going outside of the range. I have not seen the error estimate results, but I guess it does about as well as a cubic polynomial, locally $O(h^4)$. Now there are the more recent ENO/WENO methods and their many descendants. I would like to hear why these methods stand out and why they are better or worse compared to monotone Hermite. | ENO/WENO vs monotone Hermite interpolation | hyperbolic pde;interpolation | PCHIP is not a conservative reconstruction, making it inappropriate for conservation laws. Furthermore, hyperbolic problems have discontinuous solutions, so there is generally no benefit to a continuous reconstruction. Conservative monotone spline reconstructions are being investigated by the UK Met Office for use in tracer advection for atmosphere modeling; see papers on the multi-dimensional case, quartic splines, and applications. These methods are relatively new and are not currently popular with many other groups. Some reasons for this include: nonlocal reconstruction is inconvenient, especially in parallel; these methods are semi-Lagrangian and currently only suitable for advection, especially in multiple dimensions; there is no characteristic spline-based reconstruction; and only structured grids can be used.
_reverseengineering.15815 | I'm looking for help reverse engineering a production Ionic iOS app and turning it into a buildable project. I have taken the ipa file from the App Store, extracted the contents of the bundle, and copied the www folder out. I've then created a new Ionic blank starter app, copied the www folder from the ipa to the new project, installed all the plugins that I saw in the plugins folder (and as a result updated the config.xml and package.json files).With these steps, I can run the project in the simulator just fine, but did I miss any steps? Are there any other files that need to be copied from the bundle or settings I need to tweak to get a production Ionic app into a test project? Or can I just start editing the application JavaScript.My first pass can be found in this GitHub repo: subwaytime-2-re | Creating a buildable project from an Ionic iOS App Store app | ios;ionic | null |
_codereview.23981 | Here is a solution to the SPOJ's JPESEL problem. Basically the problem was to calculate cross product of 2 vectors modulo 10. If there is a positive remainder, it's not a valid PESEL number; (return D) if the remainder equals 0, it's a valid number (return N).import re, sysn = input()t = [1,3,7,9,1,3,7,9,1,3,1]while(n): p = map(int, re.findall('.',sys.stdin.readline())) # Another approach I've tried in orer to save some memory - didn't help: # print 'D' if sum([int(p[0]) * 1,int(p[1]) * 3,int(p[2]) * 7,int(p[3]) * 9,int(p[4]) * 1,int(p[5]) * 3,int(p[6]) * 7,int(p[7]) * 9,int(p[8]) * 1,int(p[9]) * 3,int(p[10]) * 1]) % 10 == 0 else 'N' print 'D' if ( sum(p*q for p,q in zip(t,p)) % 10 ) == 0 else 'N' n-=1The solution above got the following results at SPOJ:Time: 0.03sMemory: 4.1Mwhere the best solutions (submitted in Python 2.7) got:Time: 0.01sMemory: 3.7MHow can I optimize this code in terms of time and memory used?Note that I'm using different function to read the input, as sys.stdin.readline() is the fastest one when reading strings and input() when reading integers. | SPOJ Pesel challemge | python;optimization;programming challenge;python 2.7 | You can try to replace re module with just a sys.stdin.readline() and replace zip interator with map and mul function from operator module like this:from sys import stdinfrom operator import mulreadline = stdin.readlinen = int(readline())t = [1,3,7,9,1,3,7,9,1,3,1]while n: p = map(int, readline().rstrip()) print 'D' if (sum(map(mul, t, p)) % 10) == 0 else 'N' n -= 1UpdateIt seems getting item from a small dictionary is faster than int so there is a version without int:from sys import stdinreadline = stdin.readlineval = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}val3 = {0: 0, 1: 3, 2: 6, 3: 9, 4: 12, 5: 15, 6: 18, 7: 21, 8: 24, 9: 27}val7 = {0: 0, 1: 7, 2: 14, 3: 21, 4: 28, 5: 35, 6: 42, 7: 49, 8: 56, 9: 63}val9 = {0: 0, 1: 9, 2: 18, 3: 27, 4: 36, 5: 45, 6: 54, 7: 63, 8: 72, 9: 81}n = int(readline())while n: # Expects only one NL character at the end of the line p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11, _ = readline() print 'D' if ((val[p1] + val3[p2] + val7[p3] + val9[p4] + val[p5] + val3[p6] + val7[p7] + val9[p8] + val[p9] + val3[p10] + val[p11]) % 10 == 0) else 'N' n -= 1 |
_softwareengineering.290557 | I've been attempting to learn C++, but it is famously plagued by bad tutorials. I learned about a clever little trick called RAII (Resource Acquisition is Initialization), where one wraps a heap variable in an object placed on the stack. One would free the resources in the destructor of this object, so one would not have to worry about calling delete on the heap based object. However, I also know that you are supposed to create as few objects as possible, as to save RAM. Which brings me to my question, how much should I use RAII? Especially in a project that creates a lot of heap variables. | How often should RAII be used? | c++;programming practices;raii | How often should RAII be used?As often as it makes sense to use (that is, whenever you have an operation that will need to be inverted/undone/closed/finalized/committed/etc. you should probably use RAII).However, I also know that you are supposed to create as few objects as possible, as to save RAM.No; This is a form of premature optimization which is bad enough, but it also relies on a fallacy:The number of variables in the code should not be a limiting factor in using RAII, because if you need a variable allocated, the allocation will be the same, whether it is in a wrapper or raw. An extra RAII wrapper will not add anything significant to the memory footprint of the application.In other words, the following pieces of code should have the same (or comparable) memory footprints:resource* allocate_resource() { return new resource{}; }void release_resouce( resource * r ) { delete resource; }// client codeauto r = allocate_resource();release_resouce(r);andstruct resource_ptr { resource * r_; resource_ptr() : r_ { new resource }{} ~resource_ptr() { delete r_; }};// client code:resource_ptr p;Both the resource* in the first sample and the resource_ptr instance in the second will have the same size (4 bytes on 32bit systems) and the calls to new/delete are in separate functions (which again, should have the same footprint whether in a structure or at top-level).Something to keep in mind:RAII can also be translated to mean Responsibility acquisition is initialization, in which case, it will mean something more abstract (and larger) than pointer/resource release in destructors. It effectively applies to any operation that requires a counter-operation later:files that are opened will need to be closeddatabase transactions will need to be committed or rolled backmutexes that are locked will need to be unlockednetwork connections that have been opened will need to be closedNone of these are heap variables, but RAII applies naturally to all of them. |
_unix.200166 | I know these are tools used for improving security. But what I would like to know is what they are and how they work? | What are the differences between sudo and the use of groups? | security;sudo;group;privileges | null |
_unix.151162 | I have a concatenated log file with multiple logs inside that I'm trying to parse out into individual log files (I will later rename them to the date/time of each). Each log is separated by --- LOG REPORT ---. So far I have... sed -n '/--- LOG REPORT ---/,/--- LOG REPORT ---/p' logname.log > test.out However, as you can imagine, that only outputs the first instance of the pattern. I looked over the man page for sed and I'm not convinced it can output multiple files. Perhaps I could keep extracting from a file until it's empty, but that seems like too much work. How can I achieve this? Maybe I should be using awk instead? Example of input file filename.log: --- LOG REPORT ---MaryHadALittleLamb--- LOG REPORT ---HerFleeceWasWhiteAsSnow Desired output: In filename_1.log: --- LOG REPORT ---MaryHadALittleLamb In filename_2.log: --- LOG REPORT ---HerFleeceWasWhiteAsSnow | Parse multiple sections of data into separate files | text processing;sed;awk | How about something like awk '/--- LOG REPORT ---/ {n++;next} {print > ("test" n ".out")}' logname.log
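A possible adaptation of the awk one-liner above to the exact output names asked for in the question (filename_1.log, filename_2.log, ...), keeping the --- LOG REPORT --- separator line in each piece; the parentheses around the file-name expression and this variant as a whole are assumptions, not part of the accepted answer:
    awk '/--- LOG REPORT ---/ {n++} {print > ("filename_" n ".log")}' filename.log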
_cs.24312 | In computer science it is often assumed that a human mind can be reduced to a Turing machine. This is the assumption that underlies the field of artificial intelligence. However, it is an assumption, one that has neither been proven nor disproven. Is there any kind of test within our current capabilities with which we can prove or disprove this assumption? If not, is there any evidence that would suggest one way or another? Here is a similar question I asked a while back on theoretical computer science: https://cstheory.stackexchange.com/questions/3170/human-intelligence-and-algorithms | What would show a human mind is/is not reducible to a Turing machine? | turing machines;artificial intelligence;computer vs human | If we identify a certain task that is non-computable but that the human mind can perform, then this proves the human mind is not a Turing machine. As an example, Turing machines cannot make the distinction between proof and truth. Yet we humans can, as with the statement "this statement is unprovable", which is true but unprovable.
_webapps.39891 | Is it possible to customize the Google upper black panel? I want to add groups and translation to the upper panel and hide other items that I don't need. | Google: Customization of upper panel | google;customization | null
_cstheory.9007 | In the definition of (strong) fixed-parameter tractability, the time bound is an expression of the form $$f(k).p(|x|),$$ where the input instance is $(x,k)$ with parameter $k$, $p$ is a polynomial, and $f$ is a computable function.It is possible to replace the computability requirement for $f$ with other classes of functions, as long as the notion of reduction is similarly restricted. (For instance, Flum and Grohe cover exponential and subexponential families in chapters 1516 of their textbook, with the associated erf and serf reductions.)Has anyone studied the family of elementary functions for the parameter bound $f$?An elementary function can be bounded above by a fixed tower of exponentials, so this class is closed under composition. The growth in the parameter in a reduction must then be bounded above by an elementary function as well.There do exist interesting problems from automata theory which are fixed-parameter tractable, but where the parameter bound is non-elementary (unless P = NP, see Frick and Grohe, doi: 10.1016/j.apal.2004.01.007). I am wondering if anyone has looked at the fixed-parameter tractable problems which exclude fixed values of the parameter leading to such galactic constants (to use Richard Lipton and Ken Regan's term). Speculating wildly, such a restriction might have useful connections with finite model theory, such as being characterized by a fragment of monadic second-order logic that doesn't lead to the non-elementary constants that can arise from applying Courcelle's Theorem to a fragment with unbounded quantifier alternation. | Elementary bounds on parameter in fixed-parameter tractability? | cc.complexity theory;reference request;fixed parameter tractable | null |
_webapps.51128 | Please see the data below:What I want to do is to write a formula in column D to sum up the order quantity (column C) starting from the row no. specified in column F until current row, for the same item no. However, when I use the address function nested in the sumif function, it gives an error (if I replace the address function with a value then it works). So can someone please kindly advise me how I should write the formula in column D instead?Thank you so much in advance!Stanley | Error using sumif together with address function | google spreadsheets | null |
_webapps.10470 | My step-son tried to log in to Facebook and saw that account was disabled. Supposedly because he was using a fake name, which he wasn't. Is there any way to appeal it? | How to appeal being banned from Facebook for 'fake name' | facebook | I found this at Facebook's help center: Why was my personal Facebook account disabled? At the end, it says If you believe your account was disabled by mistake, click here.That link takes you to a form you can use to request re-enabling the account. It's possible that Facebook will not care about the requests though. :/ The question reminded me of a newspaper article (in Finnish; very bad Google translation here 'valo' means 'light') about someone whose name real name is Ville Valo (i.e. he's the namesake of a well-known rock artist) who got his FB account disabled, and Facebook just refused to listen to his pleas to re-open it. |
_webapps.40546 | I'd like to pull values from a named range into a calculation. I have this function: function getNamedRange(n){ SpreadsheetApp.getActiveSpreadsheet().getRangeByName(n);}Seems pretty simple. I have a named range called budgetItems. It definitely exists and has about 6 values in it. But when I try to pull the values with var items = getNamedRange(budgetItems); items.getValues(); it usually says items is null. I've gotten it to work in the past but it seems really flaky. I suspect there is eventual consistency and caching goofing things up here. I've attached this function (to pull the values from the range) to a menu item. When I run that menu item it takes about 15s to run 5 lines of js -- and then fails. That's... suspicious. | how do I get named ranges in Google Spreadsheets to be more reliable/current? | google spreadsheets;google apps script | This little script will a retrieve named range and make a summation:function namedRange() { var ss = SpreadsheetApp.getActiveSpreadsheet(); var sh = ss.getActiveSheet(); var nRange = ss.getRangeByName(budgetItems); var data = nRange.getValues(); var sum=0; for(var i=0; i<5; i++) { sum += parseInt(data[i]); } sh.setActiveSelection(B1).setValue(sum);}Using the above code as a formula in Google Spreadsheet, allows for significant reduction of code and API calls:function getTest(range){ var sum=0; for(var i=0, len=range.length; i<len; i++) { sum += parseInt(range[i]); } return sum;}You can address the range as: =getTest(A1:A5) or =getTest(A1:A9)See example file: getRangeByName (editable) |
_codereview.55386 | Design a Data Structure SpecialStack that supports all the stack operations like push(), pop(), isEmpty(), isFull() and an additional operation getMin() which should return minimum element from the SpecialStack. All these operations of SpecialStack must be O(1). To implement SpecialStack, you should only use standard Stack data structure and no other data structure like arrays, list, .. etc.Looking for code review, optimizations, best practices. public class StackMinimum<T>{ /* * Composition triumphs over inheritance :) */ private final Stack<T> stack1 = new Stack<T>(); private final Stack<T> stack2 = new Stack<T>(); public void push(T item) { stack1.push(item); if (stack2.isEmpty() || ((Comparable<T>) item).compareTo(stack2.peek()) < 0) { stack2.push(item); } } public T pop() { T item = stack1.pop(); if (item.equals(stack2.peek())) { stack2.pop(); } return item; } public T peek() { return stack1.peek(); } public int size() { return stack1.size(); } public T getMinimum () { return stack2.peek(); } public boolean isEmpty() { return stack1.isEmpty(); } }public class StackMinimumTest { @Test public void test() { StackMinimum<Integer> stack1 = new StackMinimum<Integer>(); stack1.push(1); stack1.push(2); stack1.push(3); assertEquals(1, (int)stack1.getMinimum()); stack1.push(-1); assertEquals(-1, (int)stack1.getMinimum()); stack1.pop(); assertEquals(1, (int)stack1.getMinimum()); while(!stack1.isEmpty()) { assertEquals(1, (int)stack1.getMinimum()); stack1.pop(); } }} | Stack with 'getMinimum' operation | java;algorithm;stack | You have a bug: .getMinimum() loses track if you push the same new minimum value twice.You should use both inheritance and composition.Use inheritance for the main stack, because your data structure is a stack one with an extra feature. That gives you the read-only operations .peek(), .size(), and .isEmpty() for free.Use composition for the minimum stack, as you have currently done. |
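A minimal sketch of the fix for the duplicate-minimum bug pointed out in the answer above, replacing only the push method of StackMinimum; the sole change is comparing with <= instead of <, and this patch is illustrative rather than code from the original review:
    public void push(T item) {
        stack1.push(item);
        // <= (not <) so a re-pushed minimum is recorded again; pop() already
        // removes one copy from stack2 when the popped item equals its top,
        // so getMinimum() stays correct after a duplicate minimum is popped.
        if (stack2.isEmpty() || ((Comparable<T>) item).compareTo(stack2.peek()) <= 0) {
            stack2.push(item);
        }
    }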
_unix.125983 | I found this weird behavior in find. Depending on the order of the parameters to find it finds different files.For example, I have a directory tree with the following content.. configure.ac Makefile.am src hello.c Makefile.amif I runfind -name '*.cpp' -o -name '*.[chS]' -print0 | xargs -0 echoIt lists./src/hello.cAnd if I runfind -name '*.[chS]' -o -name '*.cpp' -print0 | xargs -0 echoIt doesn't list anything. Notice that the only thing I changed is the order of the file name.Can anyone explain why the second command doesn't list any files? | Why does to order of the parameters affect the files found by `find`? | find | The -print0 action gets bound only to the second -name filter (test in find parlance), so it will only print out something if the second filter matches. This is because the default operator in the find expression is and, and binds tighter than or (-o). i.e. your second expression is evaluated as:find -name '*.[chS]' -o \( -name '*.cpp' -print0 \) | xargs -0 echoTry grouping the filters:find \( -name '*.[chS]' -o -name '*.cpp' \) -print0 | xargs -0 echoYou could also do this if you felt like it:find -name '*.[chS]' -print0 -o -name '*.cpp' -print0 | xargs -0 echo |
_vi.9888 | It is possible to pipe visually selected lines (i.e. selected with uppercase V) using :, after which I can enter a command in vim's command line, e.g.::'<,'>!python -m base64 -dI'd like to do the same for the selected character range (i.e. selected with lowercase v). Using : still creates a linewise range (:'<'>). Trying to manually provide a character range like this::`<,`>!python -m base64 -dStill does not work; it outputs this:E492: Not an editor command: `<`>!python -m base64 -dThe question: How can I pipe visually selected characters to a system (cmd) program? | How to pipe *characters* to cmd ( `:!` ) | command line;visual mode;range | null |
_softwareengineering.305435 | Well basically, I have an Engine class that recieves a command as string from the input and passes it to a CommandHandler class which executes the apropriate command.The CommandHandler passes the string to a CommandFactory to get the command and calls the method Execute() of the command, but the problem is that every command depends on different classes to execute properly. For example, one commands need the IOutputWriter to write something, the other needs IBuldingFactory to create a building, etc. I am using a reflection in the CommandFactory class, and I can't pass the dependencies through the constructor using Activator.CreateInstance(), because every command has different dependencies.My current architecture looks something like this:class Engine(){ IData data; // application database IInputReader inputReader; ICommandHandler commandHandler; public void Run() { string command = inputReader.Read(); commandHandler.Handle(command, data); }}class CommandHandler(){ ICommandFactory commandFactory(); public void Handle(string command, IData data) { string executableCommand = commandFactory().createCommand(command); executableCommand.Execute(data); }}class DisplayDataCommand : ICommand{ IOutputWriter outputWriter; public void Execute(IData data) { outputWriter.Print(data.ToString()); }}class BuildCommand : ICommand{ IBuildingFactory buildingFactory; string buildingType; public void Execute(IData data) { var building = buildingFactory.createBuilding(buildingType); data.AddBuilding(building); }}I can have different methods in the command handler for each command and call the appropriate method using switch case, but that would violate the Open/Closed principle. So my real question is - How to implement this without violating the Open/Closed principle. | Command handler executing commands with different dependencies | c#;object oriented;architecture | The problem you have is the conversion of the string to the class.Not much you can do about it, As you will always have to have some factory which knows how to parse string x into object a,b,c somewhere.But I would move the logic outside the command handler, which should just accept the command objects. And do the conversion as the strings are read into the application, via a repository (your inputReader or IData?) into which you inject your string parsing factories. If you allow failover, so that when one repo/factory cant handle an input string it moves onto the next, that will give you further seperation of concerns.If you use a DI or object serialization/deserialization library in yoir repo/factories (unity,json.net) that will hide some of the reflection/switch statements from your code and make it neater.Also I would allow for the case where a command cannot be handled. This isnt a bad thing, you should expect only to be able to handle the commands your code 'knows' about and for some other program to deal with the others.Additionaly, are you sure you need both commandHandler AND command.Execute() it seems to me that you should choose one or the other. If you go with command handlers you cam have one or more per type. 
The handler has the injected dependancies and the logic from the execute method.This keeps your handler decoupled, as it only handles a single type of command, you can pull only those types from a queue(IData).Of course if you keep execute and only pull one type (or set of known types) of object you have the exect same code just arranged differntly, but you can cut out the command handler class, as it just calls execute.The benefit (if any) of the handler is you can handle the same command type more than one way. Whereas with execute you have to define a new command type with the same data.example :class Engine(){ IData data; // application database IInputReader inputReader; Dictionary<string,ICommandHandler> commandHandlers; public void Run() { //todo use a DI framework to inject these types commandHandlers = new Dictionary<string,ICommandHandler>(); commandHandlers.Add(DisplayData, new DisplayDataCommandHandler(new OutputWriter()) ); commandHandlers.Add(Build, new BuildDataCommandHandler(new Builder()) ); foreach(var data in this.data.GetCommands()) { if(commandHandlers.Keys.Contains(data.type)) { var command = inputReader.Read(data.type, data.serializedObject); //handle the command commandHandlers[data.type].Handle(command); } else { //some other program will handle these commands } } }}public class InputReaderAndFactory : IInputReader{ public ICommand GetCommand(string commandType, string commandJson) { switch (commandType) { case DisplayData : return JsonConvert.DeserializeObject<DisplayDataCommand>(commandJson); case Build : return JsonConvert.DeserializeObject<BuildCommand>(commandJson); default : return new UnknownCommand(commandJson); } }}class DisplayDataCommandHandler : ICommandHandler{ IOutputWriter outputWriter; public DisplayDataCommandHandler(IOutputWriter outputWriter) { this.outputWriter = outputWriter; } public void Handle(ICommand command) { var cmd = command as DisplayDataCommand; outputWriter.Print(cmd.Data.ToString()); }}class BuildDataCommandHandler : ICommandHandler{ IBuilder builder; public BuildDataCommandHandler(IBuilder builder) { this.builder = builder; } public void Handle(ICommand command) { var cmd = command as BuildCommand; builder.Build(cmd.Data, cmd.MoreData); }}class DisplayDataCommand : ICommand{ public Data Data {get;set;}}class BuildCommand : ICommand{ public SomeOtherTypeOfData Data {get;set;} public MoreData MoreData {get;set;}} |
_unix.102349 | I have access to my university's VPN through OpenVPN, and would like to extend it to all the devices at home. I have cable internet, a DD-WRT router, a bunch of clients (mostly Windows), and a RHEL-derivative, two-NIC, always-on PC. Right now, the Linux router intermediates the traffic, with a setup is modem <-> RHEL-like router <-> DD-WRT device <-> clients. Usually, the traffic is masqueraded directly, but the Linux router automatically connects to uni's VPN, and for a bunch of journals, a script sets up VPN-intermediated traffic: ip route add table main 123.45.67.89 dev tun0.I'd like to replace the RHEL computer with a single-NIC computer. The setup I am thinking about is modem <-> DD-WRT device <-> {clients, new RHEL router}. RHEL router will connect to the internet via the DD-WRT device. It will also connect to VPN. When the other clients want access to the internet, DD-WRT should route them through RHEL, which in turn will decide to route directly or, if a connection to 123.45.67.89 is desired, through tun0. Is that possible? How would you do it? | complex dd-wrt routing setup - is it possible? | routing | null |
_softwareengineering.94620 | I'm a C++ developer. I know how Windows works on the native level, but I'm not a big expert in C# and .NET. Now I need a C# developer in my team (all my developers are C++). How can I hire a great C# developer if I don't know C# at good level? How to ask questions, how to test whether answers are great or are with silly mistakes? | How to hire a good C# developer if I don't know C#? | c#;c++;hiring | I am occasionally faced with the problem of interviewing programmers who are primarily experienced in C++, which I do not know as well as them. My strategy is to: mostly ask general programming questions, algorithms, OO design, how torefactor, what makes a good unit test, etc. I add in a few generalquestions targeted at the style of language so for C++ I might askabout memory management and object lifetimes for C# I might askthings like, can you have a memory leak when using a garbagecollector?try to find out how they learnt the language, what books they have read, etc.verify that they have written a substantial amount of C++. Go intodepth on when they have used it, how much, what they did with it andwho for. Then try to check this as far as possible using theirreferences.If they can answer the difficult design and theory questions well and they have written a decent amount of C++ then I expect they will be half good at least, and probably able to learn any missing stuff quite quickly. |
_unix.238922 | My server hosted at Hostgator was recently hit by malware and hence to monitor the file system I use find -mmin -xx command at regular intervals. But everytime I run the find command, I see the first 2 results returned are the same:[email protected] [~]# find -mmin -10./.bash_history./.dnsWhile the ./.bash_history is understable, I can't really figure out what changes are made to the ./.dns entry? Although it should be noted that on physical verification of the dns entry I find no altercations.Pls help me understand. | What changes are made to ./.dns? | shell;find;dns | null |
_unix.352374 | When I use the command ping ff02::1%eth0 to get a response from all IPv6 hosts in the network segment, I get responses from a bunch of link-local (fe80::) addresses (LLIPs).In order to figure out which hosts are (or are not) responding, I then have to use arp to see the MAC address associated with the IPv4 address, and then try to match them up with the IPv6 address to figure out which host is which.Although I am starting to remember which MAC address belongs to which host, I'd much rather have the system automatically map the LLIPs back to hostnames.Is there any mechanism to do this?I have tried putting the LLIP in /etc/hosts, with and without the zone (%interface suffix), and this allows me to ping a single host by name, but it is not used by ping to convert IP responses back to hostnames.ping has the -n option to avoid name resolution which hints to me that what I am trying to achieve is possible, I'm just not quite sure how to do it! | Make ping show hostnames instead of IPv6 addresses | ipv6;ping;hosts | null |
_softwareengineering.215219 | Is there any reason anyone would use GPL v2 over GPL v3 when starting a new project, or is GPL v2 still around only because older projects can't or haven't updated their license yet? | Is there a reason someone would choose GPLv2 instead of GPLv3? | licensing;gpl | If you use GPL v3 you give up your right to assert patents you have on the technology.If you own patents and want to monetize these, don't distribute your patented code under GPL v3. |
_reverseengineering.14749 | I'm tryin to RE dark souls 2.But, the strings are f***** up. I tried searching for DARK SOULS II which is the name of the window. No luck. I think it uses some encryption method (I've checked every dll and exe in the DS2 directory, no luck). Heres a pic of what I mean:As you can see, I'm clearly searching dark souls ii, and nothing pops up (this is the exe file thats being disassembled). However, I've searched in every other dll file, still no luck.What's going on? | Dark Souls 2 String Encryption? | c++;encryption | null |
_unix.239220 | I'm running XFCE 4.12 with 3 monitors setup into two X screens and two video cards on the same computer. Two of the monitors form a single X screen using nvidia twinview functionality, which is Screen0 on Device0 in the Xorg config. The 3rd monitor is for the second screen which is Screen1 on Device1 in the Xorg config. I can drag windows fine between the monitors on Screen0. I can also move my mouse freely between Screen0 and Screen1 and even the clipboard data is carried between the two X screens ok.Both of these X screens act as independent desktops which have their own set of viewports. I like it this way and its useful for making one side stick automatically. However if I start a program on one X screen, I can't move it to the other X screen by simply dragging it. If I want to run that program on the other screen I have to restart it on that screen.My question is if there is way to move the program while its running to the other screen using some command or other function of X windows. Thanks.Update: I'm going to start a bounty on this question but I've been wondering about this for a while. To earn the bounty, you have to provide some citation for proof. | Possible to move a window from one X screen to another on same host? | x11;xorg | null |
_unix.72759 | I'd like to be able to install Apache Subversion on Red Hat with yum. Can anyone recommend a package repository? | Looking for a yum package repository containing apache subversion | rhel;yum;subversion | Is there something you wouldn't be getting with the Subversion package in the default channel? Step-by-step guide to installing subversion on RHEL |
_unix.4999 | I'm looking for somthing like top is to CPU usage. Is there a command line argument for top that does this? Currently, my memory is so full that even 'man top' fails with out of memory :) | How to find which processes are taking all the memory? | process;memory;top | From inside top you can try the following:Press SHIFT+fPress the Letter corresponding to %MEMPress ENTER You might also try:$ ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5This will give the top 5 processes by memory usage. |
_webmaster.102602 | I run a site that has a paging list showing a fixed number of items per page. I want to find out how how likely users are to navigate to the next page, given there is a next page. In other words, if my event tracking shows that on, say, page 3, users are very unlikely to click next, I want to be sure that this is not because most of the list are only 3 pages long.My idea has been to send one event for each page view that tells GA whether this particular page is the last or not, and then to send another event if the user clicks on 'next'. I would then assume I could somehow extract information in Google Analytics telling me how often a user clicks 'next', given the last event was one saying the current page was not the last.However, this does not appear to be something Google Analytics can easily do. My question therefore is: Is there another way to solve the same problem, or is there indeed away to get the information from Google Analytics with the approach I've been using?Update: I want to clarify that on my site there are many of these lists, and they are of varying lengths. Which means that sometimes when the user is on page 2 of a list, sometimes there is a page 3 and sometimes there isn't. Obviously the user can't go to page 3 if there are only two pages. If I don't correct the data for these cases, I have no way of knowing if a sharp decline in users clicking 'next' is caused by a unusually many list that are only one page long, or something else. | How do I determine how many users click 'next' given 'next' exists? | google analytics;pagination | null |
_unix.272572 | I changed my /etc/issue file to display IP and system info with background color, and built a system which run on CentOS 7, but problem is when i deploy my machine on ESX it display regular /etc/issue file containt and after login and logout to Virtual console it display correct things with my changes !This is just after deploy i see and after i login and logout i see correct things !can someone please suggest me how do i get it fixed ?Thanks | /etc/issue file does not get reflect without login logout | systemd;login;virtual machine;logout;getty | null |
_codereview.57378 | One of the tenets of Windsor IoC (probably applies to all IoC containers too) is to release what you explicitly resolve, which admittedly should occur rarely. But we have a fair few UsingFactoryMethod setups in our installers, which are resolving directly (which appears to be a valid case for explicit resolution).However, remembering to call Dispose() on those resolved things is something I doubt a lot of people will remember, so I came up with a helper method in order to be able to wrap that logic away (always a good thing, right?). However, I'm not sure if it's just a bit overkill in this case.namespace Castle.Windsor { using System; using Castle.MicroKernel; public static class KernelExtensions { public static ResolvedResult<TInstance> ResolveDispose<TInstance>(this IKernel kernel) { return new ResolvedKernelResult<TInstance>(kernel); } public static ResolvedResult<TInstance> ResolveDispose<TInstance>(this IWindsorContainer container) { return new ResolvedContainerResult<TInstance>(container); } public abstract class ResolvedResult<T> : IDisposable { public T Instance { get; private set; } protected ResolvedResult(Func<T> resolver, Action<object> releaser) { this.Instance = resolver(); this._Releaser = releaser; } #region IDisposable Members public void Dispose() { if (null == _Releaser) { throw new ObjectDisposedException(this.GetType().Name); } //Dis-associate the resolved instance, can't null assign as T isn't typed Instance = default(T); _Releaser(Instance); _Releaser = null; } #endregion private Action<object> _Releaser; public static implicit operator T(ResolvedResult<T> res) { return res.Instance; } } private sealed class ResolvedKernelResult<T> : ResolvedResult<T> { internal ResolvedKernelResult(IKernel kernel) : base(kernel.Resolve<T>, kernel.ReleaseComponent) { } } private sealed class ResolvedContainerResult<T> : ResolvedResult<T> { internal ResolvedContainerResult(IWindsorContainer container) : base(container.Resolve<T>, container.Release) { } } }}UsageBefore:container.Register( //... Component.For<MyComponentType>() .UsingFactoryMethod(k => { var altComponent = k.Resolve<SomeAlternativeComponent>(); var component = new ImplementingComponentType(altComponent.RandomProperty); k.ReleaseComponent(altComponent); return component; }))After:container.Register( //... Component.For<MyComponentType>() .UsingFactoryMethod(k => { using(var altComponent = k.ResolveDispose<SomeAlternativeComponent>()) { return new ImplementingComponentType(altComponent.Instance.RandomProperty); } })) | Helper extension to release Windsor component; not sure if it's over-kill | c#;extension methods | null |
_unix.373820 | I have a script built just a way to learn bash and it uses jq for json parsing suppose someone else downloads it and runs the file, will bash automatically prompt the user to install jq or should I include in the script to install it?Yes I understand that the terminal will probably throw jq: command not found but is there a way to handle it more gracefully? Or is this how it's usually handled?How is that do you want to install the package jq (Y/N)? is achieved? | Should I include code to install the packages that my script requires? | shell script;dependencies;packaging;jq | You should leave it. Typically, you would only install dependencies when creating a package for a specific package manager, not as part of a program or script.There are so many different package managers, each with their own way of handling dependencies, and you want to let people choose which one to install with. That way they can be consistent. Otherwise, they could end up with problems like duplicate packages and incompatible versions of libraries.Also, your script won't know how to install dependencies on all systems, even if you compile from source (some machines don't have a compiler).You should list them in your documentation, if you have any (README file, comments, etc.) |
_softwareengineering.317260 | For the company I work at, all of our projects,including a new one started last year, are written in C89.We write for vxWorks (a real time embedded operation system).Our software runs multi-threaded through various spawned tasksWe are massively behind schedule, and I am struggling to be productive based on the company's design for new software components. Coming from a C++ background, I'm use to wrapping mutable state inside of a class, and then providing methods as an interface to manipulate the state: I realize that C doesn't have these language features, but I've always been under the impression that C developers do something similar with a set of global functions and having their struct as the first argument to all of these functions. In both of these cases, we still have an interface. It describes the actions that can be taken on the component, and helps with readability / dealing with mutable state. I am not allowed to write software components in the way I described above.Our company design, as I have come to understand it, is as followed: Every component must be expressed as a TypeInput, and a TypeOutput. Each component will have two global functions associated with it: Initialize (takes and modifies TypeOutput)Update (takes TypeInput, and TypeOutput; modifies TypeOutput)Here is the smallest example I could find that helps express the idea.A popable breaker: typedef struct CircuitBreakerInputs_t{ BOOL m_bPopped;} CircuitBreakerInputsT;typedef struct CircuitBreakerOutputs_t{ BOOL m_bPopped;} CircuitBreakerOutputsT;void InitializeCircuitBreaker(CircuitBreakerOutputsT *const ptOutputs){ //initalizes ptOutputs}void UpdateCircuitBreaker(CircuitBreakerOutputsT *const ptOutputs, const CircuitBreakerInputsT *const ptInputs){ //observes inputs to modify outputs}On the surface, this may seem like a simple and straightforward design.Inputs become outputs based on the current state of outputs.Now consider a component with more than one functionality. Let's take a millisecond timer. It can move forward in time, be paused, be unpaused, and be reset. That would look like this: typedef struct MillisecondTimerInputs_t{ BOOL m_bAdvanceTime; BOOL m_bSetPauseState; BOOL m_bReset; BOOL m_bPause;} MillisecondTimerInputsT;typedef struct MillisecondTimerOutputs_t{ double m_dElapsedTime_ms; BOOL m_bPaused; unsigned long m_ulLastUpdateTickAmount;} MillisecondTimerOutputsT;void InitializeMillisecondTimer(MillisecondTimerOutputsT *const ptOutputs){ //initializes ptOutputs}void UpdateMillisecondTimer(MillisecondTimerOutputsT *const ptOutputs, const MillisecondTimerInputsT *const ptInputs){ //observes inputs to modify outputs}This gets more complicated to use. Our inputs, are becoming triggers as to how we want to use the component. In order to get the specified functionality out of our component, we need to move the interface into the Update call and rely on the input data to dispatch appropriately. Of course, this allows you to call more than one hidden method when setting more than one method trigger(a boolean) to true. If the order in which the methods are called matters, you are left to either rely on the implementation, or set each one to triggers separately and call Update for each of them. You also need to be careful that you aren't calling any additional methods by accident. For example, when you construct the InputType, you need to set all the triggers to false initially. Then you need to set the trigger you want to call to true, and call update. 
You then need to potentially set that trigger back to false if you intend on enabling a new trigger for the next update. To make the process easier, I began using enums to represent the method I want to call. This was more in accordance with the initial design I showed earlier, because only one method can be called based on the enumerated value. I had to revert back to the booleans through because our design did not permit a third enumerated type for doing this kind of dispatch. The only way I was able to write concise tests for these components, was by creating a set of utility functions that take the OutputType, forward params into the input struct, and surface the appropriate output as a return value. This gave me back the interface I lost. I then went on to write a language that allowed me to express code in the same manner as the first diagram. I write code as a stateful object with an API in C++, and it can algorithmically be turned into any of the other diagrams shown thus far. Based on this design I can better construct, and test the code. I generate the input/output C design, and then I generate a C++ wrapper that holds the C OutputType, and provides the interface shown in the diagram above. My automation breaks down in regards to component composition.Our company prohibits InputTypes from containing OutputTypes, and vice versa. Which is unfortunate, because the OutputTypes are what have the state. So I can't easily pass a component into the method of another component. It is inputs, outputs, and updates all the way down. Additionally, you can only call update once for each of the sub components. Only being able to call Update once on each of the sub-components, has also hurt my productivity. Dealing with component interaction, and component transformation has to be done over the duration of various Updates. There are other routes I have tried to take, but it overcomplicates the Input/Output types. Are there good merits to this design?The lead engineer has told me: however the stateful object above has no specific division of memory for mutual exclusion. In order to do this, we needed new objects for memory management and cross thread locksIs this design better for multithreading? I don't understand why we couldn't just make copies, and still use the threading mechanisms appropriate when dealing with shared memory. Additionally, we don't use any threading in our sub components, and the highest level OutputType, gets copied while performing a semaphore locked read/write. We also don't use any dynamic memory allocation on the heap, but even if we did, I don't think this would affect the explanation. We would just need to manage that memory appropriately, when making copies and such. | What are the benefits of an input/output component design? | design;c++;c;object oriented design;interfaces | null |
_webapps.30182 | If I make a series of edits on Wikipedia that I decide to revert, is it possible to revert multiple edits at once?The edits are spread across multiple articles. | Reverting multiple edits at once on Wikipedia | mediawiki;wikipedia | null |
_opensource.1809 | Suppose I have some open source project which makes use of both GPL and MIT licensed components. The source code of my project is also MIT licensed (not copyleft).What is the right way to indicate this and comply with all licenses in the final (binary) distribution of the software?Many projects just include a single LICENSE.txt or COPYING.txt file, but due to the multiple licenses I'm not sure how to apply that here without creating confusion. The requirements are the following:Due to the GPL component the binary distribution must be distributed under the GPL. This has to be made clear.But I don't want to create the misconception that the project's source code is GPL licensed, as it is not (and has substantial reusable parts that do not depend on GPL'd libraries).The license of all utilized libraries needs to be indicated, with attribution (i.e. from the MIT license: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.)What is the usual way to achieve this? What text should go where in the final package, to make sure I comply with all requirements and do not create confusion? This is not a GUI program. | How to best indicate license of source code and copying terms of binary when multiple licenses are involved? | licensing;copyleft;distribution;attribution | null |
_reverseengineering.14190 | I'm trying to reverse engineer a game with the goal of creating an emulator.I want to know how to get the structure of a network packet of a game whether it is client or server.Example (Random) Client -> Server: XX XX XX XX XX XX XX XX XX XX XX XX XX Structure: uint16: 10 - byte: 3 - int16: 300 I just want to know how to get the type of each byte is.I already gathered a few packet structures from publically available repos.I want to know how everyone does this? Is there a tool out there that helps with getting structures? | How to get Packet Structures | packet;structure | null |
_webmaster.28213 | I would like to find a nice white cotton paper texture, which is very common in printwork, but seems really difficult to find online. this will be for commercial work. | looking for cotton paper texture | graphics;photoshop | null |
_unix.247074 | I have a very strange problem. I have two servers, namely daytona, which serves as a storage server with a raid array. I WOL it when I want to back up to it. The second server is testarossa which runs my services. It is the latter that I want to backup daily using duplicity. Both machines run Ubuntu Server 14.04, fully up-to-date.I have written a script to WOL the machine and then execute the duplicity backup each day on a fixed time. The import part of the backupscript is shown below. The backup runs as user root on testarossa and backups over SSH via backupper on daytona. Then it shuts down via ssh using user christophe on daytona.I have configured ssh keys on testarossa so I can ssh into daytona using backupper and christophe. I can execute the commands from the script just fine, and even execute the script in the shell as well (./script.sh). I have added the script in the cronjobs using:0 10 * * * /bin/bash /root/scripts/dailybackup >> /var/log/backup.daily.log 2>&1Each time the cronjob runs I get the following error:BackendException: ssh connection to [email protected]:22 failed: [Errno 111] Connection refusedI have, suggested on #ubuntu-server, tried echo | nc 192.168.1.120 22 and that returns the following error:SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3Protocol mismatch.This led me to believe that I had to upgrade daytona which I did. There was an upgrade for the gnu-openssl package and then the cronjob ran fine. But now it doesn't anymore.I am out of ideas on how to debug this. I have too little experience to fix it. Any pointers?Scriptserverip=192.168.1.120servermac=14:DA:E9:4C:6E:17attempts=50sourcedir=/targetdir=sftp://[email protected]//mnt/raidarr0/backups/testarossa/duplicity/dailyencryptkey=AC7A8F8Ckeep=1Msudouser=christophe fullbackup=## Load in the passphrase file env variable. /root/.passphraseexport PASSPHRASE## Do the snapshot backupif [ $fullbackup == full ]; then $(which duplicity) full --encrypt-key $encryptkey --exclude /srv --exclude /usr --exclude /cdrom --exclude /lib64 --exclude /bin --exclude /sbin --exclude /boot --exclude /dev --exclude /proc --exclude /sys --exclude /tmp --exclude /run --exclude /mnt --exclude /media --exclude /lost+found $sourcedir $targetdirelse $(which duplicity) --encrypt-key $encryptkey --exclude /srv --exclude /usr --exclude /cdrom --exclude /lib64 --exclude /bin --exclude /sbin --exclude /boot --exclude /dev --exclude /proc --exclude /sys --exclude /tmp --exclude /run --exclude /mnt --exclude /media --exclude /lost+found $sourcedir $targetdirfiecho Backup to target completed.## Remove older backups. We only want to backup 30 days. # (We have a full every month)$(which duplicity) remove-older-than $keep --force $targetdirecho Removal of stale backups completed## Shut down the machine using a sudo account. Expects the user to have a key installed for this.ssh $sudouser@$serverip sudo shutdown -h nowecho Shutdown command issued to remote machineFollow up:1) The script has a function which waits for the host to be ping-able. So it only starts backing up when the host has fully booted. (This script ran fine for over a year on a different machine with Debian.)2) The script runs fine in the shell of the root indeed.3) And no, I do not have a proxy command in either setting files.4) I have tried running the command using sudo /bin/bash /root/scripts/dailybackup and now, for some reason, it asks me to verify the authenticity of the host (with yes/no). So now it seems like the duplicity command is not using my known_hosts file? 
| SSH Protocol mismatch | ubuntu;ssh;duplicity | null |
_webapps.98887 | Would it be possible to automatically export a Google sheet to a SQL type database which I could query? I am trying to make a information dashboard out of the sheet that contains data from a Google form. What possible ways could I do this? | Google Sheets to SQL type database | google spreadsheets;migrate data | null |
_unix.79103 | I'm trying to use netcat on Linux server to stream video to my windows client using VLCI started running netcat on Linux: cat /media/HD1/myMovie.mkv | nc -l 8668In VLC Windows Client I tried to:Open VLC > Open network stream vlc > rtp://@serverIP:8668Without success. | Stream Video using Netcat and VLC | raspberry pi;vlc;netcat | null |
_cogsci.8876 | For example, I show to my students that XY=Z, so I ask for them to do some exercises, so in the future, they will always know that XY=Z.Is there any known study relating the numbers of exercises needed to learn a subject?And that relates the numbers of steps with the amount of practise?Because we can see clearly that the sequency 1-3-8 is easier to remember than 1-3-8-6-9-8-7-5-5.But the second sequency will need three more times of exercise to remember?I think this may be useful in my studies, considering time a important factor.If a do just some exercises, I just can half learn, and if I study much more than I need, I waste my time. | Number of exercises and the learning of some subject? | mathematical psychology;long term memory;mathematical ability;short term memory | null |
_unix.43068 | I am trying to use Wine to play a 3d game on Linux that will run as long as the base system has the correct Direct X rendering. I have tried the Wine configurations and come up with:http:/(s19.postimage.org/lgv1ky8dv/Selection_002.jpg)$# wine ~/.wine/drive_c/windows/system32/dxdiag.exeX Error of failed request: BadRequest (invalid request code or no such operation)Major opcode of failed request: 136 (GLX)Minor opcode of failed request: 19 (X_GLXQueryServerString)Serial number of failed request: 145Current serial number in output stream: 145The error may be related to the Wine configuration but I followed the guide (with ver jun2010) to the T on this and I just can't seem to get this to work on my Pinguy (Ubuntu 12.04 kernel: 3.2.0-25-generic-pae) system. Can anyone assist me with these drivers?I believe it is a bug within the X.org. I do not have an /etc/x11/xorg.conf and my system graphics settings does not show (as other bug reports have said also). I still don't understand how I can fix this.I have tried AMD installs (in case my processor has a hybrid). I have tried to see if I could compile a new driver (unsuccessful).I believe it is the stock drivers (MESA Intel sandybridge) but it should be (Intel HD 3000) also I have found that there is a bug reported for xorg or mesa (both are listed as causes) where it does not show up on the system tools and does not play well, both are my symptoms, yet installing mesa-tools does not solve itMy complete computer specs | Wine fails with DirectX on Pinguy | xorg;wine;opengl | null |