id | question | title | tags | accepted_answer |
---|---|---|---|---|
_softwareengineering.221766 | I'm integration testing a system, by using only the public APIs. I have a test that looks something like this: def testAllTheThings(): email = create_random_email() password = create_random_password() ok = account_signup(email, password) assert ok url = wait_for_confirmation_email() assert url ok = account_verify(url) assert ok token = get_auth_token(email, password) a = do_A(token) assert a b = do_B(token, a) assert b c = do_C(token, b) # ...and so on... Basically, I'm attempting to test the entire flow of a single transaction. Each step in the flow depends on the previous step succeeding. Because I'm restricting myself to the external API, I can't just go poking values into the database. So, either I have one really long test method that does `A; assert; B; assert; C; assert...`, or I break it up into separate test methods, where each test method needs the results of the previous test before it can do its thing: def testAccountSignup(): # etc. return email, password def testAuthToken(): email, password = testAccountSignup() token = get_auth_token(email, password) assert token return token def testA(): token = testAuthToken() a = do_A(token) # etc. I think this smells. Is there a better way to write these tests? | How to structure tests where one test is another test's setup? | testing | null |
_cs.68742 | POINTS_TABLE = [3, 5, 7, 1]function score(answer) { result = 0 for i in 0..4 result += POINTS_TABLE[answer[i]] return result}answer = [1, 2, 1, 0]s = score(answer)The sum performed is 5 + 7 + 5 + 3 = 20. It uses the values from the input as the indexes to read from the POINTS_TABLE.This is the style of the answer I'm trying to work out:\begin{equation}score(answer) = \sum_{i=1}POINTS\_TABLE_{answer_i}\end{equation} | How can I write this simple function in mathematical notation? | notation;mathematical foundations | null |
_softwareengineering.141485 | I just wanted to know what the difference is between static code analysis and code review. How are each of these two done? More specifically, what are the tools available today for code review/static analysis of PHP? I would also like to know about good tools for code review for any language. | What is the difference between Static code analysis and code review? | code quality;terminology;code reviews;quality | null |
_ai.2578 | The original Lovelace Test, published in 2001, is used generally as a thought experiment to prove that AI cannot be creative (or, more specifically, that it cannot originate a creative artifact). From the paper:Artificial Agent A, designed by H, passes LT if and only ifA outputs o,A outputting o is not the result of a fluke hardware error, but rather the result of processes A can repeatH (or someone who knows what H knows, and has H's resources) cannot explain how A produced o.The authors of the original Lovelace Test then argues that it is impossible to imagine a human developing a machine to create an artifact...while also not knowing how that machine worked. For example, an AI that uses machine learning to make a creative artifact o is obviously being 'trained' on a dataset and is using some sort of algorithm to be able to make predictions on this dataset. Therefore, the human can explain how the AI produced o, and therefore the AI is not creative.The Lovelace Test seems like an effective thought experiment, even though it appears to be utterly useless as an actual test (which is why the the Lovelace Test 2.0 was invented). However, since it does seem like an effective thought experiment, there must be some arguments against it. I am curious to see any flaws in the Lovelace Test that could undermine its premise. | Are there any refutation of the original Lovelace Test? | intelligence testing | I am a future neurologist with a very complete understanding of linguistic processing in the brain. I am also an overprotective parent, so I monitor every phrase uttered to my child, and also completely determine all the books she reads in the course of her education.When my child writes a poem, then, I know the dataset on which her brain was trained, as well as the processes by which her language inputs became language outputs--in broad outline I know these processes are non-linear and are based on how different inputs along with the current collection of trillions of distinct synaptic weights updates the synaptic weights. I don't know what her poem will be, of course, because there are random factors and the whole history of her synaptic weights are unobservable, but I adhere to the Lovelace test and can therefore conclude that composing the poem was not a creative act.The Lovelace Test, like the Chinese Room Argument, implicitly assumes that what computers/AI can do in processing symbols and information and what brains can do are distinct. If you accept that assumption, then the argument ceases to be interesting-- you've merely redefined creativity as one of the distinct things that brains can do. If you reject the assumption, the argument that computers are incapable of creativity ceases to be valid. The thought experiment itself does nothing to assist us in evaluating the truth of the assumption. |
_unix.165473 | I have two terminals installed, gnome-terminal and xfce4-terminal.I would like to have only the xfce terminal showing a simple > as prompt when I start it. The gnome-terminal prompt should remain unchanged (so no bashrc modification, I think).I don't mind starting xfce-terminal from a script or another terminal with some parameters.I tried:xfce4-terminal -x export PS1='> 'but that throws an error and is apparently not do-able.Any solution is welcome, even if it's a bit hackish | Change prompt when starting a terminal from bash script (but don't affect all terminals) | environment variables;path;prompt;gnome terminal;xfce4 terminal | null |
_webapps.90611 | I have a custom label in Gmail that sends mail I want to look at about once a month to a folder labeled JUNK. The filter tells the emails to skip the inbox, mark the emails as read, and label them as JUNK.I've noticed that a few months ago, my emails from Scotiabank regarding interac email transfers were being labeled as Junk. Undesirable behaviour! I looked in my filter settings, but nothing there indicates why this mail is being sent there. As far as I can see, none of my filters should result in messages with Scotiabank's information being sent to JUNK.Is there a way to determine why a specific email is being sent to a folder through a label filter? When I try to remove the label JUNK from the email, it still won't show up in my inbox. If there isn't a way to do this, how can I force these emails to go to the inbox?I looked at this post (Force email to stay in inbox), but I think it assumes I actually know why my mail is being filtered to JUNK in the first place.This is the first time I've posted here, so please let me know if I can add any information to my post. | Why is my email labeled and sent to a folder in Gmail instead of the inbox? | gmail;gmail filters;gmail labels | null |
_datascience.16175 | As the title indicates, over multiple runs of my program, I get different values of trans and est. Are these local minimum? If so how do I get the optimal one?NOTE - They all have the same starting value of trans and est.Thanks!Edit: I realized why I was getting different values of trans and est over different runs but it raised another question in my mind.My objective is to train an HMM on classical song data (MFCC coefficients). To discretize it, I assigned the MFCC coefficients to 7 clusters.I have a sequence of 10 observations stored in matrix M that I am providing to the hmmtrain function as input. Each row represents a song, and each column are the different clusters the MFCC coefficients (for different frames of the song) were assigned to.ex:M = 1 1 1 3 3 3 3 2 7 7 7 1 1 1 3 1 1 1 1 3 1 1 3 3 3 2 2 3 3 2 2 3 3 6 6 6 2 2 5 5 2 2 5 5 3 3 3 3 1 1 1 1 1 1 1 7 7 7 7 5 5 5 7 2 5 5 3 3 3 2 2 3 3 2 2 6 6 7 7 5 7 2 3 6 2 2 2 2 7 7 7 7 7 7 7 7 7 7 2However, since I was recomputing the clustering every time, my input data would change. By saving the matrix to a file and using that every time my results were consistent. What I do not understand is why for certain formations of clusters (i.e. for certain input matrices) I would get better performance? Shouldn't cluster formation be the same by and large (even if the number ex. 1,2.. might represent a different cluster)?Sorry about earlier. I'll repost the question if necessary. Thanks again! | hmmtrain in Matlab converging to different values of trans and est over different runs | matlab | null |
_unix.317440 | Can I have separate / and /tmp but /home + /usr + /var on one partition somehow? Separate /tmp is good because I can set it up with some quick unreliable filesystem. I often change distributions, therefore separate / is a blessing - quick re-install and I'm good as long as /home, /usr and /var are untouched. The problem is, I don't want to designate space for any of the last three - I want them to share available resources. I sometimes need more space in /var and I can see there's available space in /usr that I cannot use, sometimes it's the other way around. It's frustrating. Any ideas? | Multiple mountpoints on one partition? | mount;partition;root;home | You can always mount your third partition somewhere (like /mnt/combo or something), and then bind-mount subdirectories from this mountpoint to the three designated directories. In fstab, this would look something like UUID=... /mnt/combo auto defaults /mnt/combo/usr /usr none bind /mnt/combo/var /var none bind /mnt/combo/home /home none bind Also consider this: /home makes sense to live on a separate partition - even better, a separate drive, which can be somehow protected (raid, backups, ...). /var would make sense to be separate if you really have something personal in there (websites and such), otherwise it makes no difference. /usr can definitely be part of /, it makes no sense to have it separate because on a modern system, the distinction between /bin and /usr/bin is blurred and no one cares about it anymore, and segmenting a system only creates problems if one of the partitions somehow doesn't mount. /tmp should normally be ram-backed anyway (tmpfs), unless you really are running out of RAM, and most distros do that by default unless you change it. Big picture: separate /home if you have to, the rest is just overhead - you probably have no reason to have different filesystem types or different permissions on any of these, and partitioning doesn't usually mean physical separation (same hard drive?). |
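A quick way to try this out before committing to fstab is to issue the equivalent bind mounts by hand. This is only a sketch; /dev/sdX3 is a placeholder for whatever the actual third partition is:

```sh
mount /dev/sdX3 /mnt/combo          # mount the real partition once
mount --bind /mnt/combo/usr  /usr   # then bind each subdirectory
mount --bind /mnt/combo/var  /var
mount --bind /mnt/combo/home /home
```

`mount --bind` is the command-line counterpart of the `none ... bind` fstab lines shown in the answer.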
_softwareengineering.232832 | In our organization some of the resources (such as QA machines etc) are shared. Different folks get done at different times and some tests have to be run (during dev and QA) on these machines. Right now, we just skype a message to the team stating I am going to run some destructive tests on such and such machines - let me know if anyone has an issue. This approach obviously has many issues (what happens if someone missed the message etc.) Apart from maintaining a shared google doc that needs to be constantly updated - is there an easier way that folks use for such coordination? | Coordinating between developers on common resources | project management;agile | A central scheduler is the heavy weight way to go but has several advantages. If you set up a number of machines (or virtual machines) that tests can be run on (or machines can be spun up when a test is required) then you can have a central queue that people can submit jobs to and it can figure out what machines are free and what resources can be made available for them. Advantages:That way you know tests can be run one at once (if they need to be e.g. are destructive).You can see what is in the queue. You can schedule regular tests to have an appropriate priority to your new requests for tests. You can also kill off tests when you know you need to clear the queue for an important test run. The other big option that I would normally recommend you consider is continuous delivery / deployment but from your question I'm guessing that isn't an option at the moment. |
_unix.318706 | I am new to Laravel and am using Ubuntu. I have installed my project in the /opt/lampp/htdocs folder and it is denying permission on some folders. When I try to run the command chmod -R 644 app/storage it shows: user@host:~$ chmod -R 644 app/storage chmod: cannot access 'app/storage': No such file or directory. When I try to run the project it shows: file_put_contents(/opt/lampp/htdocs/bazaa/app/storage/sessions/7b2822ce03a7f890afe496675cd269695c3bb1e8): failed to open stream: Permission denied. Can you please suggest what the problem is? | permission access denied in laravel4.2 in linux | linux | null |
_unix.287383 | I added few vim plugins like sytastic, nerdTree. They change the status line and other UI elements, which works fine while editing files.But when I invoke vimdiff on 2 files, the nerdTree pane also open, the status lines are of no help. Is there anyway I can disable these plugins if I call vimdiff command? | How to disable vimplugins while invoking vimdiff command | vimrc;plugin;vimdiff | null |
_cs.18646 | Given $A,B$ regular languages with $A \prec B$. Prove the existence of $C\in L_{\text{regular}}$ so that: $A \prec C \prec B$.Here, $A\prec B$ stands for: $A\subset B $ and $B\setminus A $ is infinite. I tried to go for: $C=\overline{B} \cup A$ and some other options but it didn't work out. | Prove the existence of regular $C$ so that: $A \prec C \prec B $ | formal languages;regular languages;closure properties | Hint: Note that $B \setminus A$ is regular by closure properties of $\mathsf{REG}$. Since $B \setminus A$ is also infinite, you can find an infinite language $D \in \mathsf{REG}$ so that $D \prec B \setminus A$. |
_softwareengineering.339782 | For developing a C++ dynamically-linked library (i.e. shared object) which may interface with C programs, what is the best practice to save program state across exported function calls?I as a not-experienced C++ programmer can think of the following methods:Compilation unit level static variables.Instantiate a struct at heap which holds the state and passing back and forth its address in each API call (somewhat like JNI).The problem with the first approach is that my state variables need some data to be initialized and these data are provided by calling init API (one of exported functions). On the other side, when using module's level static variables, those data aren't available yet when those variables are getting initialized.Also my problem with the second method is that each API function should be supplied with that pointer and this is a bit cumbersome.Note that there is another option that static variables are pointers to those state variables and are assigned in that init function (actually state variables are instantiated in init and their address are saved in those static variables). This option is fine, but I would like to not use pointers where possible. | Best Practice for saving state in a C++ shared object | c++;libraries | Passing a pointer to a state object around is probably the best solution. Why?The statefulness of your functions is made explicit and obvious.It is the most general and most extensible solution.Unfortunately, APIs with a (hidden) global state are fairly common. They tend to make simple things simpler, but often make more difficult things outright impossible.E.g. imagine a database client library. To access a database, you need to create a connection first. What happens if I want to connect to multiple databases at the same time? If there is a single global connection hidden in the library, this is outright impossible.Passing an extra parameter around is somewhat annoying. But as long as all the state is grouped into a single object that has to be carried around, it isn't very annoying.Even if I would decide to keep a hidden global state, I would group that state into a single object so that it becomes easier to manage. Keeping track of multiple related global variables is quite error-prone, checking whether a single global variable has been initialized is much easier. Note that with a hidden global state, you will have to check whether the state is set in each exported function in order to prevent user errors. |
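As a rough illustration of the answer's recommendation, here is a minimal C++ sketch (the names are invented for illustration, not taken from the question) of a C-compatible shared-object API where all state is grouped into one explicit object that every exported function receives:

```cpp
#include <string>

// All library state lives in one explicit object.
struct LibState {
    std::string config;  // whatever an init() call would have set up
    int counter = 0;
};

extern "C" LibState* lib_init(const char* config) {
    return new LibState{config, 0};   // caller owns the returned handle
}

extern "C" int lib_do_work(LibState* s) {
    return ++s->counter;              // no hidden globals involved
}

extern "C" void lib_shutdown(LibState* s) {
    delete s;                         // state is released explicitly
}
```

C callers only ever see an opaque pointer, so two independent states (for example, two database connections) can coexist.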
_unix.59860 | I'm setting conky and I'd like to add the usb space, I use:$font${color DimGray}/ $alignc ${fs_used /} / ${fs_size /} $alignr ${fs_free_perc /}%${fs_bar /}for the full hdd, what should I write as path for USB? | Add USB space in conky | arch linux;path;xfce;conky | It should be:${fs_used /media/Name_You_See} / ${fs_size /media/Name_You_See}Or, if you use udisks2:${fs_used /run/media/User/Name_You_See} / ${fs_size /run/media/User/Name_You_See}Also consider ${if_existing /media/Name_You_See} to check if path exists (which means it's mounted, not accurate but useful) |
_webmaster.99260 | I recently created a website of my own after buying domain name from namecheap and hosting it on digitaloceans.http://piyushkhemka.meAfter googling my own name after a few days, I see this website which is hosting my website on its own domain :gobismarckmandan.org (fixed now)Initially I thought someone was just copying my website, however after a few days, I received a stop and desist letter from them. They thought I hacked their site and used it to host my content.They are a non-profit website which is understaffed and both of us have no idea why their domain name is pointing to my website.Any ideas what could have happened?Anyways, the real question is: to prevent such things from happening again, what should I do? How do I prevent other websites from ever hosting my content?Do I need to edit .htaccess file? Currently, it looks like this:ErrorDocument 404 /404.htmlOptions -Indexes## EXPIRES CACHING ##<IfModule mod_expires.c>ExpiresActive OnExpiresByType image/jpg access 1 monthExpiresByType image/jpeg access 1 monthExpiresByType image/gif access 1 monthExpiresByType image/png access 1 monthExpiresByType text/css access 1 weekExpiresByType text/html access 1 weekExpiresByType application/pdf access 1 dayExpiresByType text/x-javascript access 1 weekExpiresByType image/x-icon access 1 weekExpiresDefault access 1 week</IfModule>## EXPIRES CACHING ##Do I need to add some options to this file? | Another domain hosting my content | domains;htaccess;redirects;hacking | null |
_codereview.33321 | I have an array of structure records and function intersection and difference.The intersection function take list1,list2,list3(an array of records),and size of both list1 and list2. Here, intersection and difference both are the boolean operators ( like : A U B , A - B , A intersection B). Also list1 and list2 are given as input and the result is copied to list3. My both functions are working fine. But given that the two lists are already sorted (on author name and if same author name then name of the book), how can I optimize the code? intersection is of O(n2) and difference is less than O(n2). copy() copies a record to first argument from second argument. //Intersection of two listsvoid intersection(struct books *list1,struct books *list2,struct books *list3,int n1,int n2){ int i,j,size1,size2; if(n1<n2){size1=n1;size2=n2;}else{size1=n2;size2=n1;} for(i=0;i<size1;i++) { for(j=0;j<size2;j++) { if(strcmp(list1[i].name,list2[j].name)==0 && strcmp(list1[i].author,list2[j].author)==0) { if(list1[i].copies < list2[j].copies) { copy(&list3[i],&list1[i]); } else { copy(&list3[i],&list2[j]); } } } }}//set difference on lists (optimised)void difference(struct books *list1,struct books *list2,struct books *list3,int n1,int n2){ int i,j,k=0,exists=0; for(i=0;i<n1;i++) { exists=0; for(j=0;j<n2 && exists==0;j++) { if(strcmp(list1[i].author,list2[j].author)==0 && strcmp(list1[i].author,list2[j].author)==0) { exists=1; } } if(exists==0) { copy(&list3[k],&list1[i]); k++; } }} | Optimised library database code for sorted array of structures | optimization;c | Given that the two lists are already sorted, you can do MUCH better than just the naive O(|A| * |B|) implementation. Let me reduce the problem to just intersecting lists of chars, and let's say our lists are:A: [... a elems ..., 'C', 'D', ... ]B: [... b elems ..., 'C', 'E', ... ]Let's say we just in the outer looped matched A's 'C' to B's 'C'. Cool, we got that right. Now, we're going to try to check for 'D'. Do we really need to start looking at B[0]? We know that B is sorted... and we know that B[b] == 'C', so we know for sure that if there is a 'D' in B then it cannot be in the first b+1 elements. If you change your inner loop from looping over every element in the 2nd list, to just making sure you end up only walking over the list once, you can reduce your complexity to O(|A| + |B|), which is pretty huge. |
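To make the answer's hint concrete, here is a sketch of the single-pass merge walk on plain sorted int arrays; the same shape carries over to the book records by comparing author first and then name:

```c
/* Intersection of two sorted arrays in O(n1 + n2). */
int intersection(const int *a, int n1, const int *b, int n2, int *out)
{
    int i = 0, j = 0, k = 0;
    while (i < n1 && j < n2) {
        if (a[i] < b[j])        i++;             /* a[i] cannot appear later in b */
        else if (a[i] > b[j])   j++;             /* b[j] cannot appear later in a */
        else { out[k++] = a[i]; i++; j++; }      /* match: keep it, advance both  */
    }
    return k;  /* number of common elements copied to out */
}
```

The set difference works the same way: emit a[i] instead of skipping it whenever a[i] < b[j], since that element provably does not occur in b.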
_unix.70878 | I have a situation where I want to replace a particular string in many files: replace a string AAA with another string BBB. But there are a lot of strings starting with AAA or ending in AAA, and I want to replace only the one on line 34 and keep the others intact. Is it possible to specify it by line number? In all files this string is exactly on the 34th line. | Replacing string based on line number | sed;awk | You can specify the line number in sed, or NR (the number of the current record) in awk: awk 'NR==34 { sub("AAA", "BBB") }' or use FNR (the record number within the current file) if you want to specify more than one file on the command line: awk 'FNR==34 { sub("AAA", "BBB") }' or sed '34s/AAA/BBB/'. To do in-place replacement with sed: sed -i '34s/AAA/BBB/' file_name |
_webapps.36249 | I cannot see my recent activities in my timeline on Facebook. They have disappeared. Others can see my timeline's activities, but I can't; I can only see the activity log. What can I do? | I cannot see my recent activities in my timeline on Facebook | facebook | null |
_codereview.41939 | public static void streamReport(this Report report, Stream stream){ using (var streamWriter = new StreamWriter(stream)) { //some logic that calls streamWrite.Write() streamWriter.Flush(); } //Should return stream; here ? }I am writing an extension method to an object in which I would like to transform to a CSV which is generated and returned to a client (web).My questions are:Should I return stream (change void to Stream) ?Should I use the out keyword in before the Stream parameter ? Is this a common way to let the caller know the parameter will be changed ?Should I change the method to generate a new stream (and not accept one) and trust the user to Dispose it ?Should I pass StreamWriter instead of Stream ? | Extension method that writes to a stream | c#;asp.net;stream | Your questions imply that you might not be quite aware of how streams or references work in C#. You pass in a reference to a StreamYou create a StreamWriter which writes to that StreamThis will automatically make changes visible to anyone holding a reference to the same Stream.Therefor there is no need to try and return the Stream in any way from the method to make the changes visible to the caller - you just need to write to it. This is at least how I interpret your questions.However there is a catch in your implementation: disposing the StreamWriter will automatically close the underlying Stream which I find hugely annoying at times. Only since .NET 4.5 there is a constructor for StreamWriter which allows you to leave the Stream open. So right now your extension method will close the Stream which I guess is not the intention. Your only option to avoid that is to use .NET 4.5 or do not wrap the StreamWriter in a using statement. |
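For completeness, here is a sketch of the extension method using the .NET 4.5 StreamWriter(Stream, Encoding, int, bool) overload the answer mentions, which leaves the underlying stream open. Report is the caller's type from the question; everything else is standard library:

```csharp
using System.IO;
using System.Text;

public static class ReportExtensions
{
    public static void streamReport(this Report report, Stream stream)
    {
        // leaveOpen: true prevents the writer from closing 'stream'
        using (var writer = new StreamWriter(stream, Encoding.UTF8,
                                             bufferSize: 1024, leaveOpen: true))
        {
            // ... write the CSV content here ...
            writer.Flush();
        }
        // 'stream' is still usable here; the caller decides when to dispose it.
    }
}
```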
_unix.284191 | I have tried the following commands cut -c-11 ifshell.sh cat ifshell.sh | cut -c-11 ifshell.sh cat ifshell.sh | awk '{print $1} | cut -c-11 ifshell.shBut every time I get the full contents of the .sh file. These commands work perfectly on .txt files. The primary goal is to extract the first 11 character of the script #!/bin/bash as checking if the file is really a bash bin script. | cut -c won't work on my .sh file | shell;scripting;cut | You can also use the standard file command : [PRD][]user@localhost:~ 17:21:30$ head -n 1 setproxymkt.sh #!/bin/bash[PRD][]user@localhost:~ 17:21:38$ file setproxymkt.sh setproxymkt.sh: Bourne-Again shell script, ASCII text executable |
_unix.365355 | In the ext4 wiki article I've seen that ext4 can be used up to 1 EiB, but is only recommended up to 16 TiB. Why is that the case? Why is XFS recommended for larger file systems?(ELICS: Explain me like I'm a CS student, but without much knowledge in file systems) | Why is ext4 only recommended up to 16 TB? | filesystems;ext4;xfs | The exact quote from the ext4 Wikipedia entry isHowever, Red Hat recommends using XFS instead of ext4 for volumes larger than 100 TB.The ext4 howto mentions thatThe code to create file systems bigger than 16 TiB is, at the time of writing this article, not in any stable release of e2fsprogs. It will be in future releases.which would be one reason to avoid file systems larger than 16 TiB, but that note is outdated: e2fsprogs since version 1.42 (November 2011) is quite capable of creating and processing file systems larger than 16 TiB. mke2fs uses the big and huge types for such systems (actually, big between 4 and 16 TiB, huge beyond); these increase the inode ratio so that fewer inodes are provisioned.Returning to the Red Hat recommendation, as of RHEL 7.3, XFS is the default file system, supported up to 500 TiB, and ext4 is only supported up to 50 TiB. I think this is contractual rather than technical, although the Storage Administration Guide phrases the limits in a technical manner (without going into much detail). I imagine there are technical or performance reasons for the 50 TiB limit...The e2fsprogs release notes do give one reason to avoid file systems larger than 16 TiB: apparently, the resize_inode feature has to be disabled on file systems larger than this. |
_codereview.149615 | RunMyAtm package ATM;import java.util.*;import java.io.*;public class RunMyAtm { int input;static Scanner sc = new Scanner(System.in);Account[] myAccounts = new Account[3];public static void main(String[] args){ RunMyAtm rma = new RunMyAtm(); rma.preAtmMenu();}public void preAtmMenu(){ while (input != 5) { System.out.println("1.) Populate Accounts"); System.out.println("2.) Pick Account"); System.out.println("3.) Load Accounts"); System.out.println("4.) Save Account"); System.out.println("5.) Exit"); System.out.print("Please select one of the options: "); input = sc.nextInt(); System.out.println(); if (input == 1) { populateAccts(); System.out.println(); } else if (input == 2) { pickAccts(); System.out.println(); } else if (input == 3) { loadAccount(); } else if (input == 4) { saveAccount(); } else if (input <=0 || input >=6) { System.out.println("Please enter a nubmer from the Menu"); } }} public void populateAccts(){ for(int i = 0; i < myAccounts.length; i++) { myAccounts[i]= new Account ((i+1), 100); System.out.println(myAccounts[i].getAcctNum()); } } public void pickAccts(){ while (input != 4) { System.out.println("Press 1 for account 1"); System.out.println("Press 2 for account 2"); System.out.println("Press 3 for account 3"); System.out.println("Press 4 to exit"); System.out.print("Select an account: "); input = sc.nextInt(); System.out.println(); if (input <1 || input >4) { System.out.println("Please enter another number"); } else if(input == 1 || input == 2 || input ==3) { myAccounts[input - 1].AtmMenu(); saveAccount(); } } } public void saveAccount(){ try { FileOutputStream outStream = new FileOutputStream("E:/03INFSYS 3806001 - Mngrl Appl Obj-Orntd Prg/tempfile1/BankAccounts.txt"); ObjectOutputStream os = new ObjectOutputStream(outStream); os.writeObject(myAccounts); os.flush(); os.close(); } catch (IOException ioe) { System.err.println(ioe); }} void loadAccount(){ try { FileInputStream inStream = new FileInputStream("E:/03INFSYS3806 001-Mngrl Appl Obj-Orntd Prg/tempfile1/BankAccounts.txt"); ObjectInputStream is = new ObjectInputStream(inStream); myAccounts = (Account[])is.readObject(); is.close(); } catch (Exception ioe) { System.out.println(ioe.getMessage()); } } Account package ATM;import java.io.Serializable;import java.text.DecimalFormat;import java.text.NumberFormat;import java.text.ParsePosition;import java.text.SimpleDateFormat;import java.util.*;public class Account implements Serializable{int acctnum;double newBalance;double withdraw;double deposit;double amount;int firstdate;int seconddate; double rate;Date date = new Date();boolean dateflag = false;static Scanner sc = new Scanner(System.in);Calendar cal1 = new GregorianCalendar();Calendar cal2 = new GregorianCalendar();DecimalFormat df = new DecimalFormat("#.##");static NumberFormat fmt = NumberFormat.getCurrencyInstance(Locale.US);Account(){}Account(int acctnum, double newBalance){ this.newBalance = newBalance; this.acctnum = acctnum;}public void setAcctNum(int newId){ acctnum = newId;}public int getAcctNum(){ return this.acctnum;} public void withdraw(int amount){ System.out.println("Your current balance is : " + fmt.format(this.getNewBalance()) + "\n"); System.out.print("Enter withdraw amount: "); amount = sc.nextInt(); System.out.println(); if (this.getNewBalance() >= amount) { newBalance = this.getNewBalance() - amount; System.out.println("Your current balance is: " + fmt.format(newBalance)); } else { System.out.println("Insufficient Funds Availiable" + "\n"); }} public void deposit(double amount){ System.out.println("Your current balance is : " + fmt.format(this.getNewBalance()) + "\n"); System.out.print("Enter deposit amount: "); amount = sc.nextDouble(); newBalance = amount + this.getNewBalance(); System.out.println("Your new balance is: " + fmt.format(newBalance)); System.out.println();}public void newBalance(){ System.out.println("Your balance is: " + fmt.format(newBalance) + "\n"); }public double getNewBalance(){ return this.newBalance;}public void calcInterest(){ getDate1(); getDate2(); if (firstdate > seconddate) { System.out.println("You must enter a future date:"); getDate2(); } else { System.out.println(" Thank you:"); } int datediff = seconddate - firstdate; rate = .05/365; double ratetime = Math.pow(1+rate,datediff); newBalance = getNewBalance() * ratetime; System.out.println("Your Balance with interest is: " + df.format(newBalance));}public void getDate1(){ System.out.print("Enter first date(mm/dd/yyyy): "); String input = sc.next(); SimpleDateFormat formatter = new SimpleDateFormat("MM/dd/yyyy"); ParsePosition pos = new ParsePosition(0); Date date = formatter.parse(input, pos); cal1.setTime(date); firstdate = cal1.get(Calendar.DAY_OF_YEAR); dateflag = true;} public void getDate2(){ System.out.print("Enter second date(mm/dd/yyyy): "); String input = sc.next(); System.out.println(); SimpleDateFormat formatter = new SimpleDateFormat("MM/dd/yyyy"); ParsePosition pos = new ParsePosition(0); Date date = formatter.parse(input, pos); cal2.setTime(date); seconddate = cal2.get(Calendar.DAY_OF_YEAR); dateflag = true; }public void AtmMenu(){ int input = 0; while (input !=5) { System.out.println("1.) Withdraw"); System.out.println("2.) Deposit"); System.out.println("3.) Check Balance"); System.out.println("4.) Calculate Interst"); System.out.println("5.) Exit"); System.out.print("Please enter a nubmer from the menu above " + "and press enter: "); input = sc.nextInt(); System.out.println(); if (input == 1) { withdraw((int) input); } else if (input == 2) { deposit(input); } else if (input == 3) { newBalance(); } else if (input == 4) { calcInterest(); } else if (input <=0 || input >=6) { System.out.println("Please enter a nubmer from the Menu"); } }} } | Runs simple menu driven ATM program with various option in Java | java;array;io;constructor | null |
_cs.77326 | Unique SAT is defined as: Given any SAT instance, does it have exactly 1 solution? As I understand, it is coNP-hard. I am unclear how it is in coNP. Assuming the problem has more than 1 solution, any 2 solutions can be a certificate of it being non-unique. That part is coNP. Now, given that a problem can be unsatisfiable, how do we get a certificate of no solution? It's also said that an NP oracle can solve this problem in polynomial time, but an NP oracle just tells whether the problem is satisfiable. So how is that possible? Apologies, but I have no clue regarding this version of SAT. | Unique SAT complexity clarification | complexity theory;satisfiability | null |
_unix.5351 | I'd like to install Ubuntu (Desktop or netbook edition, preferably latest version), onto a laptop using a small USB stick. (480 MB free space.)How can I do this? | Install Ubuntu from a small USB stick | ubuntu;system installation | There is a dedicated article on this:https://help.ubuntu.com/community/Installation/FromUSBStickIn brief:Download the ISO.Download UnetBootin http://unetbootin.sourceforge.net/Burn the ISO to your USB using UnetBootin. Your USB will become aliveUSB from which you can boot.Boot the system using USB and choose Install. |
_softwareengineering.180026 | A project I am working on has a bunch of legacy tests that were not properly mocked out. Because of this the only dependency it has is EasyMock, which doesn't support statics, constructors with arguments, etc. The tests instead rely on database connections and such to run the tests. Adding powermock to handle these cases is being shot down as cost prohibitive due to the need to upgrade the existing project to support it (Another discussion). My questions are, what are the REAL world tangible benifits of proper unit testing I can use to push back? Are there any? Am I just being a stickler by saying that bad unit tests (even if they work) are bad? Is code coverage just as effective? | What are tangible advantages to proper Unit Tests over Functional Test called unit tests | unit testing | null |
_codereview.146726 | I recently had a bug where extracting a decimal from a string failed due to locale settings. That is, some locales use a , as a decimal point, rather than a .. An important goal is that the conversion function is deterministic.I have usually used boost::lexical_cast for such tasks, but my understanding is that this is reliant on the global application locale. I have therefore implemented a variant of lexical_cast that uses the std::locale::classic C locale for the conversion.#include <type_traits>#include <locale>#include <string>#include <sstream>#include <stdexcept>namespace typeconv{/** * Convert @c str to a T * * @param str string to convert to a T * @return the value contained within @c as a T * * @pre @c str is arithmetic and can be converted to a T * @note std::locale::classic() is used for the conversion. * * @throws std::invalid_argument if str cannot be converted to an object of type T. */template<typename T>inline auto lexical_cast(const std::string& str) -> typename std::enable_if<std::is_arithmetic<T>::value, T>::type{ std::istringstream istr(str); istr.imbue(std::locale::classic()); T val; istr >> val; if (istr.fail()) throw std::invalid_argument(str); return val;}}I plan to extend the template function in future to include other conversions, such as an arithmetic type to std::string. Any comments on the implementation are welcome. | C++ conversion from std::string to arithmetic type using std::locale::classic() | c++;converting;c++14;type safety;localization | null |
_scicomp.20332 | I am curious, if there is a function to convert MPIAIJ (distributed matrices in AIJ format) to a SEQAIJ matrix that lie on a single processor. It is possible to do such an operation for PETSc vectors with VecScatterCreateToAll or VecScatterCreateToZero, but I couldn't find a similar function for matrices. It is not a scalable operation obviously, but can be helpful for debugging easily.Naively, I thought the following will work, but the problem with this code is that every process generate a different SEQAIJ matrix. Interestingly, petsc4py doesn't generate an error, but simply leaves the matrix with all zeros.#Input A is an MPIAIJ matrixdef getSEQAIJ(A): N=A.getSize()[0] B=PETSc.Mat().create(comm=PETSc.COMM_SELF) B.setType(PETSc.Mat.Type.SEQAIJ) B.setSizes(N) B.setUp() rstart, rend = A.getOwnershipRange() for i in xrange(rstart,rend): cols,vals = A.getRow(i) #maybe restore later B.setValues(i,cols,vals,addv=PETSc.InsertMode.INSERT) B.assemble() return B | How to convert MPIAIJ to SEQAIJ matrix in petsc/petsc4py? | python;sparse;petsc;matrix | null |
_cs.27560 | Suppose we have two sorted arrays $A$ and $B$, and we want to find the indices in $B$ of all elements of $A$. We can do this in $\mathcal O(|A|\log|B|)$ time by simply binary searching $|A|$ times. We can do it in $\mathcal O(|A| + |B|)$ time by iterating through the arrays together like the merge phase of a mergesort; this may or may not be an improvement, depending on the sizes of $A$ and $B$.Can we do better? I don't expect any substantial improvement for $|A| \in \mathcal O(1)$ or $|A| \in \mathcal O(|B|)$, but for, say, $|A| \in \mathcal O(\log |B|)$, can we do better than $\mathcal O((\log |B|)^2)$? My ideas so far have been about some sort of adaptation of binary search that divides $B$ into a number of intervals depending on $|A|$, but I'm not yet sure whether it's an improvement. | Searching for multiple elements of an array | search algorithms | null |
_cogsci.1 | As a computer programmer, I have noticed an interesting phenomenon: If I am stuck on a particular problem in my work, often if I stop thinking about the problem and do something else, the answer will suddenly come to me. Is there a name for this phenomenon? How does this work? Has any research been done on this? How is it that taking a break from a problem sometimes allows you to figure out the answer? Edit: I remember now where I heard about this phenomenon: on the Charlie Rose Brain Series, Eric Kandel of Columbia University says (at 43:20 in): [The unconscious] can do many processes at the same time. You can either focus on one thing or another, you can't focus on two or three things at the same time. Because consciousness is very limited in what it can do; unconsciousness is much broader. And although we know very little about the true nature of creativity, one emerging theme that comes out of this is that, if you're trying to solve a mathematical problem... if you're trying to solve any intellectual problem... you keep on focusing at it, you may get stuck. Taking a break, taking a shower, going for a walk, playing golf, you come back refreshed. And often doing the other activity, boom, the idea will come to you. But then they change the subject! What is the concept he's talking about, and where can I read more about how it works? | How is it that taking a break from a problem sometimes allows you to figure out the answer? | terminology;problem solving;unconscious;creativity | It sounds like you're talking about a classic example of Incubation. Incubation is defined as a process of unconscious recombination of thought elements that were stimulated through conscious work at one point in time, resulting in novel ideas at some later point in time. Here's a great article by John F. Kihlstrom: Intuition, Incubation, and Insight: Implicit Cognition in Problem Solving. Basically it is believed that Incubation, or stopping conscious thought on a problem, allows one to find more creative solutions to a problem: In these cases, Wallas argued, thinkers enter an incubation stage in which they no longer consciously think about the problem. Wallas (1926) actually distinguished between two forms of incubation: the period of abstention may be spent either in conscious mental work on other problems, or in a relaxation from all conscious mental work (p. 86). Wallas believed that there might be certain economies of thought achieved by leaving certain problems unfinished while working on others, but he also believed that solutions achieved by this approach suffered in depth and richness. In many cases of difficult and complex creative thought, he believed, deeper and richer solutions could be achieved by a suspension of conscious thought altogether, permitting the free working of the unconscious or partially conscious processes of the mind (p. 87). In either case, Wallas noted that the incubation period was often followed by the illumination stage, the flash (p. 93) in which the answer appears in the consciousness of the thinker. Kihlstrom's references contain many good experiments backing up the claims made. A reason incubation may work is because it releases fixation; that case of being stuck which is a sort of mental rut which prevents one from thinking of new answers or methods of solving a problem. We become stuck on an idea that we believe should work but doesn't, which may hold us back from thinking of a different solution which actually does work; one we may have previously not considered or disregarded. A great dissertation by Bo T. Christensen covers both ideas of Fixation and Incubation in depth: Creative Cognition: Analogy and Incubation. |
_datascience.211 | I'm new to this community and hopefully my question will well fit in here.As part of my undergraduate data analytics course I have choose to do the project on human activity recognition using smartphone data sets. As far as I'm concern this topic relates to Machine Learning and Support Vector Machines. I'm not well familiar with this technologies yet so I will need some help. I have decided to follow this project idea http://www.inf.ed.ac.uk/teaching/courses/dme/2014/datasets.html (first project on the top)The project goal is determine what activity a person is engaging in (e.g., WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING) from data recorded by a smartphone (Samsung Galaxy S II) on the subject's waist. Using its embedded accelerometer and gyroscope, the data includes 3-axial linear acceleration and 3-axial angular velocity at a constant rate of 50Hz.All the data set is given in one folder with some description and feature labels. The data is divided for 'test' and 'train' files in which data is represented in this format: 2.5717778e-001 -2.3285230e-002 -1.4653762e-002 -9.3840400e-001 -9.2009078e-001 -6.6768331e-001 -9.5250112e-001 -9.2524867e-001 -6.7430222e-001 -8.9408755e-001 -5.5457721e-001 -4.6622295e-001 7.1720847e-001 6.3550240e-001 7.8949666e-001 -8.7776423e-001 -9.9776606e-001 -9.9841381e-001 -9.3434525e-001 -9.7566897e-001 -9.4982365e-001 -8.3047780e-001 -1.6808416e-001 -3.7899553e-001 2.4621698e-001 5.2120364e-001 -4.8779311e-001 4.8228047e-001 -4.5462113e-002 2.1195505e-001 -1.3489443e-001 1.3085848e-001 -1.4176313e-002 -1.0597085e-001 7.3544013e-002 -1.7151642e-001 4.0062978e-002 7.6988933e-002 -4.9054573e-001 -7.0900265e-001And that's only a very small sample of what the file contain. I don't really know what this data represents and how can be interpreted. Also for analyzing, classification and clustering of the data, what tools will I need to use? Is there any way I can put this data into excel with labels included and for example use R or python to extract sample data and work on this?Any hints/tips would be much appreciated. | Human activity recognition using smartphone data set problem | bigdata;machine learning;databases;clustering;data mining | The data set definitions are on the page here:Attribute Information at the bottomor you can see inside the ZIP folder the file named activity_labels, that has your column headings inside of it, make sure you read the README carefully, it has some good info in it. You can easily bring in a .csv file in R using the read.csv command.For example if you name you file samsungdata you can open R and run this command:data <- read.csv(directory/where/file/is/located/samsungdata.csv, header = TRUE)Or if you are already inside of the working directory in R you can just run the followingdata <- read.csv(samsungdata.csv, header = TRUE)Where the name data can be changed to whatever you want to call your data set. |
_webmaster.90717 | I am creating a website which will have much more videos than actual text. Now this concerns me as I want the particular site which I can not reveal details about here, to rank within the top 3 on google search engine.For example purposes, let's say that my website is for providing users with videos of skateboarding. There will be around 20 videos of people skateboarding in skate parks.Now for the whole style of the website, in design content isn't looking right. I have designed the pages with correct H1 ,H2 ,H3 ,H4 tags. For example:<h1>Skateboarding Videos</h1>Then a row of 4 videos, these videos will have random names which may not have the words skateboarding, as it would look silly if all the videos had near the same name.<h2>Watch out best moments in skateboarding</h2>then another row of 4 videos.Now in two columns would be something like<h3>The users voted best skateboarding clip</h3>Then one big video.<h3>Check out your nearest skateboarding parks now</h3>Then a few links to near places.<h4>Skateboarding products we suggest</h4>almost like an amazon style of listed products in a row.So this is not a real website as I can't disclose the idea, however as you can see it will be mostly made up of h tags, very small chunks of paragraph tags, and a lot of video content.I am a bit of an SEO freak and using Yoast's SEO tool I normally have all of my pages green (passed on everything). However, I know this will not be achievable here.What would you guys suggest for me to do in this situation/example to bring as much traffic as possible for the search terms like skateboarding videos, skateboarding clips, skateboarding in UK etc... you get the jist. | SEO optimisation for a video-heavy website | seo;google search;optimization | null |
_webapps.33840 | Using Google Spreadsheets, you can write queries. However, if you have column letters in quotes, then they aren't updated as column order changes. Is there a way to write these queries so they don't need to be updated every time a column is added or removed?Is it possible to use named ranges in queries to solve this problem?Here's an example: If you add a column after 'F', then column 'G' gets pushed to 'H' and the meaning of the formula changes.=Query(B:J,select avg(J) group by G)Related questionsThis question is not the same as Using Query with column headers instead of column letters because this one is focused on the use of named ranges. | Is it possible to use named ranges in Google spreadsheet queries so that the columns references are kept up to date? | google spreadsheets | It's a kind of tricky, but it is possible with a helper Range and some concatenation.What needs to be done:Create a named range, COLS, to carry the column letters like this:A B C D E ...Do it in a vertical way as shown.Assemble the query string like this:=QUERY( B:J, SELECT AVG( & INDEX(**COLS**, COLUMN(J1)) & ) GROUP BY & INDEX(**COLS**, COLUMN(G1)) ) |
_unix.292416 | I've found this question that explains how to edit a remote file with vim using:vim scp://user@myserver[:port]//path/to/file.txtIs it possible to do this as root (via sudo) on the remote host? I've tried creating a file with root permissions on the remote host and editing it with the above. Vim can see the content, can edit it, and can save it but nothing changes on the remote host (probably because vim is just saving its temp file and then giving that to scp to put back?)When doing this with a file saved by my user it behaves as expected. My SSH uses a key to authenticate and the remote server has NOPASSWD for my sudo accessThis question is similar, but the only answer with votes uses puppet which is definitely not what I want to use.Edit: In response to @drewbenn's comment below, here is my full process for editing:vim scp://nagios//tmp/notouchWhere /tmp/notouch is the file owned by root, I see vim quickly show :!scp -q 'nagios:/tmp/notouch' '/tmp/vaHhwTl/0'This goes away automatically to yield an empty black screen with the text/tmp/vaHhwTl/0 1L, 12CPress ENTER or type command to continuePressing enter allows me to edit the fileSaving pops up the same kind of scp command as the beginning, which quickly and automatically goes away (it's difficult to read it in time but the scp and /tmp/... files are definitely there) | Can vim edit a remote file as root? | vim;sudo;remote | I'm going to say this is not possible because vim is not executing remote commands. It is simply using scp to copy the file over, edit it locally and scp it back when done. As stated in this question sudo via scp is not possible and it is recommended that you either modify permissions to accomplish what you're wanting or just ssh across to the remote machine. |
_webapps.101288 | I have two Google accounts, say Account1 and Account2. I use and maintain a single calendar from Account1. Each time I'm in Account2 and need to add a calendar event, I log out of Account2 then into Account1 and finally add the event to the calendar.Is there a better way? One option would be to maintain separate calendars for each account but more interesting would be to set Account2 to use (and be able to view, edit, etc) the calendar from Account1...I'd be fine with disabling the calendar for Account2. | Share Calendar with two Google accounts | google calendar;google account;synchronization | You can share your main calendar from Account1 with Account2.Just go to the settings (gear, upper right) in Account1, choose Calendars and enter Share this calendar right of your main-calendar. Then you can enter the e-mail address of Account2 and choose Make changes and Manage sharing and you'll get a notification-mail in Account2 to add that calendar. Now you can manage the complete calendar for Account1 while being logged-in as Account2.Please note that when adding events, your main calendar of Account2 is checked by default. If you only want Account1s calendar visible you can press the pull-down next to Account1-calendar and choose Display only this calendar. Any new events will be for Account1 as default. |
_unix.154047 | I am writing a simple Hello World kernel module. The Makefile I wrote is giving me such an error:esp@ubuntu:~/task1-2$ make allmake -C /usr/src/linux-headers-3.13.0-35-generic SUBDIRS = /home/esp/task1-2 modulesmake: ****** empty variable name. Stop.make: ** [all] Error 2How do I rectify it?My Makefile:obj-m += task1-2.oKDIR = /usr/src/linux-headers-3.13.0-35-genericall: $(MAKE) -C $(KDIR) SUBDIRS = $(PWD) modulesclean: rm -f *.o rm -f *.ko rm -f *.mod.* rm -f *.symvers rm -f *.order | Makefile error: empty variable name | make | The section 9.3 of the (GNU) Make manual describes overriding variables.An argument that contains = specifies the value of a variable: v=x sets the value of the variable v to x.The problem is not with your makefile, but with the invocation. The argument that contains = is just =. Make does not concatenate multiple arguments into one you should specify: SUBDIRS=/home/esp/task1-2. |
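The concrete fix, then, is to remove the spaces so make sees a single v=x argument; note also that current kernel build documentation uses M= rather than SUBDIRS= for external modules. A corrected Makefile might look like this:

```make
obj-m += task1-2.o
KDIR = /usr/src/linux-headers-3.13.0-35-generic

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules   # no spaces around '='

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```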
_cs.10493 | Given any graph $G$ on $V(G)=\{1,\dots,n\}$ and its adjacency matrix $$A(G)=\left(\matrix{A_{1,1} & A_{1,2} & \dots & A_{1,n}\\A_{2,1} & A_{2,2} & \dots & A_{2,n}\\&&\dots&\\A_{n,1} & A_{n,2} & \dots & A_{n,n}}\right)$$ any permutation on $\{1,\dots,n\}$ defines a new isomorphic graph $G'$. A common approach to canonization is to take the lexicographically minimal string $A'_{1,2}A'_{1,3}\dots A'_{n-1,n}$ (i.e. the upper/lower triangular matrix) such that $G$ is isomorphic to $G'$ with $A'=A(G')$.If you now consider a permutation on $I=\{(i,j)\mid 1\leq i<j\leq n\}$ or equivalently a bijective function $\pi : \{1,\dots,{n \choose 2}\} \rightarrow I$, we can try to minimize $A'_{\pi(1)},\dots,A'_{\pi({n\choose 2})}$ instead, or at least to compute the first $k$ bits of the minimal string.Observe that the complexity of this task heavily depends on the choice of $\pi$:If you stick with default permutation (upper triangular matrix) you can easily compute the first $2n-1$ bits in polynomial time (adjacent vertices with maximal degrees).If you choose $\pi(i)=(i,i+1)$ for the first $\sqrt[c]{n}$ positions, you can reduce HamiltonPath to this in polynomial time.Now my questions:Given a fixed function $k$ and input $(G,\pi)$ how hard is it to compute the first bits $k(|V(G)|)$ of the minimal string $A'_{\pi(1)},\dots,A'_{\pi({n\choose 2})}$? Is there any (not necessarily strictly) monotonically increasing $k$ for which this is feasible? Is there a $\pi$ s.t. even $\omega(n)$ bits can be computed in polynomial time?Do you know any other reductions to a problem of this kind where $\pi$ is fixed (i.e. only depends on $|V(G)|$) and the input has the form $(G,k)$ or $G$ (i.e. $k$ is fixed too)?Note: Answering (1) is enough to get accepted.Edit: In the meanwhile there appeared a somewhat connected question: Is induced subgraph isomorphism easy on an infinite subclass? | Complexity of computing the first bits of a minimal permuted adjacency matrix | complexity theory;graph theory;graph isomorphism | null |
_webmaster.16446 | I have a project where I am migrating a website from one platform to another, but the look and feel will still be the same (this is for standardization within an organization). Through re-write rules, I have to maintain any of the links which someone could have bookmarked. The practical implication is that I have to inventory every link and make sure it goes to the right place on the new site. Since there are often multiple paths to the same pages, I've found that site-mapping tools that draw a hierarchical tree aren't giving me everything I need. The ones I've tried so far just show me the first, shortest path that landed them on that page. What I want to see is the inter-connectedness of the site -- if 5 pages all link to the sales page, I want to see that. Is there a site-mapping software, preferably open source, that will show me the 'many-to-many' relationship of the site's pages, rather than the 'parent-child' hierarchy of navigation? | software for mapping links for site migration? | sitemap | null |
_unix.252208 | The long and short of it is we have a system with a single ethernet port and we need to provide both an IPv4 connection (that can be completely in control of the user and varies per system) and we want an IPv6 connection on the same adapter that can be predicted for each system. The Link-Local address based on the EUI-64 MAC address is great and it should provide the connectivity we need just fine. Network Manager completely manages our interfaces (i.e., we do not have an /etc/network/interfaces file on the system at all), so typically we just modify the connection settings in the connection file (/etc/NetworkManager/system-connections/<con_name>), but IPv6 is not behaving as I would expect. I get the EUI-64 address when I have method set to ignore, but if I try using method=auto or method=manual, I will get an address in my ifconfig output, but I cannot ping the unit from any outside machine. Even if I connect directly between 2 PCs with the Ethernet cable, I only ever get Destination Host Unreachable. With method=ignore, it seems that I have no control over how my IPv6 address is set up; it is set up based on the ISP (so in my current ISP, which is not IPv6 ready, I happen to end up with the link-local address I want, but in a different network I may end up with a global scope and an IP address I cannot predict). How can I set this up on my system? What do I need to configure a manual address for my IPv6 connection via the NM connection files? Why is it generating an IP at all if I set IPv6 to method=ignore? I am using the following: Yocto custom OS running systemd (I have a kernel config file in /usr/lib/sysctl.d but this has no config settings for IPv6), Network Manager 1.0.6, Linux kernel 4.1.8 | Manually configure Link-Local IPV6 regardless of ISP | networking;networkmanager;ipv6 | null |
_unix.363084 | My goal is to create a redundant internet connection. I have a USB-LTE modem and a wired connection. I work on Ubuntu 16.04. I can use both on their own but I want to combine them together to create redundancy. Now I searched for solutions and found the kernel bonding module: http://lxr.free-electrons.com/source/Documentation/networking/bonding.txt?v=3.13 I tried multiple configurations. The mode I want to get working is the active-backup mode as this is the mode which would give me redundancy. What I achieved: I can add both interfaces to bond0 and ping through that interface (tested with ping and route). The output from /proc/net/bonding/bond0 also looks fine to me: Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: fault-tolerance (active-backup) Primary Slave: usb0 (primary_reselect always) Currently Active Slave: usb0 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 200 Down Delay (ms): 200 Slave Interface: usb0 MII Status: up Speed: Unknown Duplex: Unknown Link Failure Count: 0 Permanent HW addr: 02:1e:10:1f:00:00 Slave queue ID: 0 Slave Interface: enx00044b580af6 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 00:04:4b:58:0a:f6 Slave queue ID: 0 However, if I test the worst case and remove the USB-LTE modem, the connection is lost completely (can't even ping anymore). So no redundancy at all. My guess is that I have a dhcp/gateway problem here, because the two slave interfaces have completely different ISPs, etc. Sadly, I don't have a lot of experience with networking on Linux and can't solve it on my own. So my question: Is it possible to bond two such different connections together with the bonding module? And if yes, any ideas how? | Bond eth0 and LTE Modem | ubuntu;networking;bonding;lte | null |
_unix.366495 | Situation: I have a Python script that will recursively and separately count the total number of files and directories. Below is the code:

import os

def traverse(top):
    filecount = 0
    dircount = 0
    for root, dirs, files in os.walk(top):
        dircount += len(dirs)      # subdirectories found at this level
        filecount += len(files)    # files found at this level
    print("Num of dir: " + str(dircount))
    print("Num of files: " + str(filecount))

Problem: I get a different number of directories and files almost every time I run the code.
Question: Mind suggesting a reason why the file and directory counts fluctuate? Maybe it is just how Linux operates?
Additional information: I just want to make sure, as this portion of my script is very important to the whole program. | Fluctuating number of files and directories | files;directory;python;python3 | A running Unix system will create temporary files and directories every once in a while during normal operation.
Just opening a file in an editor, or sending an email, is likely to create one or two temporary files, and browsing the web may create and delete hundreds of files over a short timespan. Also, a graphical desktop environment may do caching and other things that you don't usually notice, which creates and deletes temporary files.
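A quick way to observe the churn the answer describes: snapshot the tree twice and diff the path sets (illustrative sketch; /tmp is just an example of a busy directory):

import os
import time

def snapshot(top):
    paths = set()
    for root, dirs, files in os.walk(top):
        paths.update(os.path.join(root, name) for name in dirs + files)
    return paths

before = snapshot("/tmp")
time.sleep(5)                      # let the system do its usual background work
after = snapshot("/tmp")
print("created:", len(after - before), "removed:", len(before - after))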
_unix.82388 | I got a strange-looking warning message in Windows Vista about a potential hard disk failure. I say strange because I have never in my life seen that type of warning in Windows. It suggested that I back up everything on this disk as soon as possible.
The hard disk in question is the one I use for Ubuntu Linux. I know Windows can't read Linux file systems, not natively anyway, so it's probably some SMART reading that caused Windows to warn me about this disk drive.
Ever since this happened I can't boot into Ubuntu Linux. I see several error lines passing by, something that indeed seems to be related to a disk failure. At the end it only presents the command prompt; the desktop doesn't load.
Is there a way I can recover from this error? How do I grab the error logs from the command prompt? I would like to post them here.
Here are a few screenshots: | Can I recover from a system disk error on Ubuntu Linux? | ubuntu;filesystems;hard disk | I would attempt to repair the disk with either HDAT (freeware) or possibly Spinrite (commercial). I've used both of these tools to recover disks that were failing and they have both worked well in the past.
Once the drive is in a usable state I'd use Clonezilla to replicate it as quickly as you can to an alternate HDD.
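A free alternative to the tools named in the answer is GNU ddrescue, which copies the readable sectors first and keeps a map file so an interrupted rescue can resume. A sketch only; the device names are placeholders, so double-check them before running anything:

# /dev/sdX = failing disk, /dev/sdY = healthy disk of at least the same size
ddrescue -f -n /dev/sdX /dev/sdY rescue.map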
_cstheory.11882 | There seem to be many randomized algorithms for polynomial identity testing, checking whether or not a given polynomial is zero. Are there any results on algorithms that perform some sort of estimation of polynomials over a specific set of points? This could be, for instance, approximating the fraction of these points at which the polynomial evaluates to zero, or approximating the average value of the polynomial over these points. The set of points can be specific to the algorithm. | What are some results on algorithms that estimate polynomials over a given set of points? | ds.algorithms;approximation algorithms;randomized algorithms;derandomization;polynomials | null
_webmaster.992 | My graphics skills are seriously lacking. I can see when something looks nice and when it doesn't but have a hard time coming up with anything myself given a blank slate. What should I do? | As a non-designer what are some good sites/books/tutorials for learning web design? | css;design;graphics;website design | null |
_codereview.138932 | This is how I used to implement quicksort:

def quickSort(alist):
    quickSortHelper(alist, 0, len(alist)-1)

def quickSortHelper(alist, first, last):
    if first < last:
        splitpoint = partition(alist, first, last)
        quickSortHelper(alist, first, splitpoint-1)
        quickSortHelper(alist, splitpoint+1, last)

def partition(alist, first, last):
    pivotvalue = alist[first]
    leftmark = first + 1
    rightmark = last
    done = False
    while not done:
        while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
            leftmark = leftmark + 1
        while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
            rightmark = rightmark - 1
        if rightmark < leftmark:
            done = True
        else:
            temp = alist[leftmark]
            alist[leftmark] = alist[rightmark]
            alist[rightmark] = temp
    temp = alist[first]
    alist[first] = alist[rightmark]
    alist[rightmark] = temp
    return rightmark

alist = [54,26,93,17,77,31,44,55,20]
quickSort(alist)
print(alist)

But recently I have come across this way of implementing quicksort:

def quickSort(L):
    if L == []:
        return []
    return quickSort([x for x in L[1:] if x < L[0]]) + L[0:1] + quickSort([x for x in L[1:] if x >= L[0]])

a = [54,26,93,17,77,31,44,55,20]
a = quickSort(a)
print(a)

Which implementation is better in terms of memory usage and time complexity? | Which is better code for implementing quicksort? | python;algorithm;sorting;comparative review | null
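Since the question asks which is better, one rough way to check the time side empirically is a timeit comparison (a sketch; rename one of the functions first, since both are called quickSort):

import random
import timeit

data = [random.randrange(10**6) for _ in range(10**4)]

# the in-place version mutates its argument, so hand it a fresh copy each run
print(timeit.timeit(lambda: quickSort(list(data)), number=10))

Memory is the clearer difference: the list-comprehension version builds two new lists at every call, so it allocates O(n) extra space per recursion level, while the in-place version only pays for the recursion itself.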
_webapps.35502 | Every time I log in on a new device or browser I get the SMS text message with the second-step digits. It's pretty annoying. I already have Google Authenticator to generate my codes and it works great.
How do I disable SMS? | How to disable SMS in Google's 2-step verification? | google;security | null
_cs.55564 | I have a question where I am asked to find the size of a cache. I am given the following info:
a) the length of a memory address
b) the number of bits for the offset, index, and tag fields.
I know I can use the numbers of bits to easily work out the line size and the number of sets. But as far as I know, there is no way for me to find the cache size without knowing the associativity, which we are not given.
Is there any way to find the associativity from the length of the memory address and the number of tag/index/offset bits?
Although it is tempting to assume the cache is direct-mapped because we are not given the associativity, the next question asks what the cache mapping scheme is, and this prof is notoriously tricky, so I'm assuming there's more to it. | Is it possible to figure out cache size and associativity using the length of offset, index, tag fields? | computer architecture;cpu cache;cpu | null
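A worked example (with assumed field widths, not the assignment's numbers) shows exactly what is missing. For an $A$-way set-associative cache,
$$\text{cache size} = 2^{\text{index bits}} \times A \times 2^{\text{offset bits}}$$
so with, say, 4 offset bits and 10 index bits the size is $2^{10} \times A \times 2^{4} = 16A$ KiB: 16 KiB if direct-mapped ($A=1$), 64 KiB if 4-way. The address-field widths alone pin down the number of sets and the block size, but not $A$, unless the problem states (or intends) a direct-mapped cache.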
_unix.167554 | It has been a while since I updated one of my RHEL6 machines (except for the occasional update of specific packages with known vulnerabilities).
As a result of this, I have an old ca-certificates package: ca-certificates-2010.63-3.el6_1.5.noarch.
The new ca-certificates package depends on p11-kit-trust >= 0.18.4-2, which in turn conflicts with nss < 3.14.3-33, which is currently installed (as nss-3.13.3-6.el6.x86_64). As a result, I cannot figure out how to correctly update ca-certificates.
I have p11-kit installed, but not p11-kit-trust, since nss blocks it. yum update nss says No Packages marked for Update. yum erase nss refuses, since it implies erasing yum as well.
The complete output from yum update looks like this:
Loaded plugins: product-id, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package ca-certificates.noarch 0:2010.63-3.el6_1.5 will be updated
---> Package ca-certificates.noarch 0:2014.1.98-65.1.el6 will be an update
--> Processing Dependency: p11-kit-trust >= 0.18.4-2 for package: ca-certificates-2014.1.98-65.1.el6.noarch
--> Running transaction check
---> Package p11-kit-trust.x86_64 0:0.18.5-2.el6_5.2 will be installed
--> Processing Conflict: p11-kit-trust-0.18.5-2.el6_5.2.x86_64 conflicts nss
Finished Dependency Resolution
Error: p11-kit-trust conflicts with nss-3.13.3-6.el6.x86_64
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
package-cleanup --problems finds no problems, and package-cleanup --cleandupes finds no duplicates. ca-certificates cannot be uninstalled, since openssl depends on it.
Is there a way that I can resolve this without using override parameters such as --dbonly, --force, --nodeps or similar, and without manually downloading an old RPM off the net? | What is the correct way to resolve this rpm conflict? (Error: p11-kit-trust conflicts with nss-3.13.3-6.el6.x86_64) | rhel;yum;dependencies | Download all these packages (I took the CentOS 6.6 versions from rpmfind.net):
nss-3.16.1-14.el6.x86_64.rpm
nss-util-3.16.1-3.el6.x86_64.rpm
nss-softokn-3.14.3-17.el6.x86_64.rpm
nss-softokn-freebl-3.14.3-17.el6.x86_64.rpm
nss-tools-3.16.1-14.el6.x86_64.rpm
nss-sysinit-3.16.1-14.el6.x86_64.rpm
and install them all in one go with rpm -Uvh nss-*.rpm.
That satisfies the dependencies of p11-kit-trust that yum couldn't figure out how to resolve on its own.
After that, yum update can update ca-certificates and install p11-kit-trust (for dependencies).
_unix.2295 | Just noticed some 640MB wtmp file in a virtual container (Ubuntu Hardy).
# last -n 10000 -f /var/log/wtmp.1|wc -l
384
# ls -hl /var/log/wtmp.1
-rw-rw-r-- 1 root utmp 641M 21. Sep 07:49 /var/log/wtmp.1
logrotate was not installed (I just did that and forced rotating).
Are there records in there not being displayed by last (which should show the last 10000 entries, but apparently there are only 384)?
From quickly skimming the wtmp/utmp man page, it does not look like a single entry should use about 1.6 MB.
Is there another program besides last to inspect these files? | Why does /var/log/wtmp becomes so huge? How to inspect wtmp files? | login;logs | logrotate was a good idea. Like any regular file, wtmp could have been sparse (cf. lseek(2) holes and ls -s), which can show an extreme file size that actually occupies little disk. How did the hole get there, if it was a hole? getty(8) and friends could have had a bug. Or a system crash and fsck repair could have caused it.
If you are looking to see the raw contents of wtmp, od or hd are good for peeking at binaries and have the happy side effect of showing long runs of empty (zero) bytes as such.
Unless it recurs, I wouldn't give it much more thought. A marginally competent intruder would do a better job than that, the contents aren't all that interesting, and little depends on them.
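Two quick checks for the sparse-file theory, shown as a sketch with the file from the question:

ls -ls /var/log/wtmp.1        # first column = blocks actually allocated; compare to the 641M apparent size
od -c /var/log/wtmp.1 | head  # od collapses repeated identical lines into a single '*', so long zero runs (holes) stand out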
_cs.14910 | We have two sets of vectors of positive numbers, $X$ and $Y$, where for $x\in X$ we write $x=(x_1,x_2,\ldots,x_k)$ and similarly for $y\in Y$ we write $y=(y_1,y_2,\ldots,y_k)$.
We are given two vectors $l=(l_1,l_2,\ldots,l_k)$ and $u=(u_1,u_2,\ldots,u_k)$ such that $l_i\le u_i$ for all $i$.
We want to find all pairs $(x,y)$, $x\in X$ and $y\in Y$, such that $l_i\le x_i+y_i\le u_i$.
Handwaving a bit, we can do this in a divide and conquer sort of way, separating each set into pieces that are larger and smaller than $l_i/2$ and throwing away the pairs where both are in the smaller half.
This gives a recurrence
$$T(m,n) = T(m,n/2) + T(m/2, n/2) + c(m+n)$$
where $m=|X|$ and $n=|Y|$. For equal-size sets, this gives $T(n,n) = O(n^{1.7})$, which is better than quadratic, but less than I would hope for.
A similar question could be asked for three or more sets. | Range query for sum of vectors | algorithms;computational geometry | null
_softwareengineering.156569 | When I (re-)build large systems on a desktop/laptop computer, I tell make to use more than one thread to speed up the compilation, like this:
$ make -j$[ $K * $C ]
Where $C is supposed to indicate the number of cores (which we can assume to be a number with one digit) the machine has, while $K is something I vary from 2 to 4, depending on my mood.
So, for example, I might say make -j12 if I have 4 cores, indicating to make to use up to 12 threads.
My rationale is that if I only use $C threads, cores will be idle while processes are busy fetching data from the drives. But if I do not limit the number of threads (i.e. make -j), I run the risk of wasting time switching contexts, running out of memory, or worse. Let's assume the machine has $M gigs of memory (where $M is in the order of 10).
So I was wondering if there is an established strategy to choose the most efficient number of threads to run. | How many make threads to use? | multithreading;efficiency;multi core;make;build system | I ran a series of tests, building llvm (in Debug+Asserts mode) on a machine with two cores and 8 GB of RAM:
[graph: build time vs. number of make jobs]
Oddly enough, it seems to climb until 10 and then suddenly drops below the time it takes to build with two jobs (one job takes about double the time, not included in the graph).
The minimum seems to be 7*$cores in this case.
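For the record, a common starting point (a heuristic, not a rule) is to derive the job count from the core count rather than hard-coding it:

make -j"$(nproc)"              # one job per core
make -j"$(( $(nproc) * 2 ))"   # oversubscribe a little to hide I/O stalls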
_codereview.139912 | I'm working on a logger that has the name of the module that called the logger (when I create an instance of a logger in my program, I call LoggingHandler(__name__)) to send all the messages, including info and debug, to the log file, and print the messages specified by max_level to console (so, by default, it will not print info and debug messages to console, but will still write them into the file).
The problem came when I was managing levels. If I set level in basicConfig to WARNING, then it will not print info and debug to file, even though I've set fh.setLevel(logging.DEBUG). It just won't go to levels lower than the one specified in basicConfig. Okay, I could just go ahead and specify filename in basicConfig to make it output to file, but I want a RotatingFileHandler to take care of it (because I require its rollover functionality). So, I've set level in basicConfig to NOTSET, the lowest one possible. Things go better now except one problem. The output to console doubles. It prints

[2016-08-29 10:58:20,976] __main__: logging_handler.py[LINE:51]# WARNING  hello
[2016-08-29 10:58:20,976] __main__: logging_handler.py[LINE:51]# WARNING  hello
[2016-08-29 10:58:20,977] __main__: logging_handler.py[LINE:48]# ERROR    hola
[2016-08-29 10:58:20,977] __main__: logging_handler.py[LINE:48]# ERROR    hola
[2016-08-29 10:58:20,977] __main__: logging_handler.py[LINE:54]# INFO     info message
[2016-08-29 10:58:20,977] __main__: logging_handler.py[LINE:57]# DEBUG    debug message

So, the global logger does the output and the StreamHandler does too. I need to prevent the global logger from outputting anything. So I redirect its output to a dummy class Devnull. Now the code works exactly as I need, but it feels like such an approach is what they call bodging. So, I'd like to know if there's a better way to write this code.

#!/usr/bin/python3 -u
# -*- coding: utf-8 -*-

import logging
from logging.handlers import RotatingFileHandler
from os import path, makedirs

LOGS_DIR = "logs"
DEBUG_FILE_NAME = 'debug.log'
MAX_LOG_SIZE = 10*1024*1024
BACKUP_FILES_AMOUNT = 3
LOG_FORMAT = u'[%(asctime)s] %(name)s: %(filename)s[LINE:%(lineno)d]# %(levelname)-8s %(message)s'

class Devnull(object):
    def write(self, *_, **__):
        pass

class LoggingHandler:
    def __init__(self, logger_name, max_level="WARNING"):
        makedirs(LOGS_DIR, exist_ok=True)
        logging.basicConfig(format=LOG_FORMAT,
                            level="NOTSET",
                            stream=Devnull(),
                            )
        self.main_logger = logging.getLogger(logger_name)
        # create file handler which logs even debug messages
        fh = RotatingFileHandler(path.join(LOGS_DIR, DEBUG_FILE_NAME),
                                 maxBytes=MAX_LOG_SIZE,
                                 backupCount=BACKUP_FILES_AMOUNT)
        fh.setLevel(logging.DEBUG)
        # create console handler with a higher log level
        ch = logging.StreamHandler()
        ch.setLevel(max_level)
        # create formatter and add it to the handlers
        fmter = logging.Formatter(LOG_FORMAT)
        fh.setFormatter(fmter)
        ch.setFormatter(fmter)
        # add the handlers to the logger
        self.main_logger.addHandler(fh)
        self.main_logger.addHandler(ch)

    def error(self, message):
        self.main_logger.error(message)

    def warning(self, message):
        self.main_logger.warning(message)

    def info(self, message):
        self.main_logger.info(message)

    def debug(self, message):
        self.main_logger.debug(message)

if __name__ == '__main__':
    # Tests
    log = LoggingHandler(__name__)
    log.warning("hello")
    log.error("hola")
    log.info("info message")
    log.debug("debug message") | Python logging handler | python;python 3.x;logging | null
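One common alternative to the Devnull workaround, sketched here rather than offered as a full review: never call basicConfig at all (it exists precisely to configure the root logger being silenced) and configure only the named logger, stopping propagation to the root:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)   # let the attached handlers filter from here down
logger.propagate = False         # nothing bubbles up to the root logger
# then attach fh and ch exactly as in the question; no duplicate console output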
_codereview.164261 | I've wrote a Javascript binary file handling library (write,close,show download propmt). Code passes tests, it's pretty small (6.5 kB) and really well comented with JSDoc style. It's redistributed under MIT license. You can view some tests, minified version and a readme here. I'll post only full, commented version. In my opinion code is pretty well. You can check commit history to see what was going on (really much...). Here is main file (containing pretty everything):/* * The MIT License * * Copyright 2017 Krzysztof Szewczyk * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the Software), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN * THE SOFTWARE. *//** * Main Namespace * @type object */var binjs = binjs || {};/** * File 'class' for binjs namespace. * Contains all of library code. *//** * Creates new file object with specified * name. Remember that parameter 'name' * is critical. You cannot create File * instance without providing it's name. * * @param {string} name * @returns {BinJSFile} */binjs.File = function (name) { if (name === undefined) { throw [bin.js] Filename must be provided.; } else { /** * Filename */ this.name = name; /** * File Buffer */ this.buffer = []; /** * Variable that holds invaildation status * (after closing invaildate changes to true) * * If invaildate is set to true, you cannot * perform any action on file. Everything * you can do then is just set object to undefined. */ this.invaildate = false; /** * Closes file, by removing name * and buffer properties of File * class. * * Invaildates it, so you cant use * any function on file object * after calling this method. * * Please look at 'invaildate' * field description. */ this.close = function () { if (!this.invaildate) { delete this.buffer; delete this.name; this.invaildate = true; } else throw [bin.js] File alreday closed.; }; /** * Sets file buffer to passed variable. * It's expected to be string. * If not, error will be thrown. * * @param {string} txt */ this.setText = function (txt) { if (this.invaildate) { throw [bin.js] File alreday closed.; } else if (typeof (txt) === 'string') buffer = txt.split(''); else throw [bin.js] You can set buffer only to string variable.; }; /** * Returns text that curerntly is in buffer. * Throws an exception if file was * closed before. * * @returns {string} */ this.getText = function () { if (this.invaildate) { throw [bin.js] File alreday closed.; } else return this.buffer.join(''); }; /** * This function adds character to file buffer. * Be care what you pass to it. 
It's faster * than: * <pre><code> * * var f = new binjs.File('dummy.js'); * * // ... * * f.setText(f.getText() + text); * * </code></pre> * * Uses buffer.push(). * * Throws an error if file was * closed before. * * @param {any except undefined and array} c */ this.put = function (c) { if (typeof (c) === 'undefined') throw [bin.js] You must pass at least one argument.; if (this.invaildate) { throw [bin.js] File alreday closed.; } else this.buffer.push(c); }; /** * Starts downloading process. * Characters here are escaped using * encodeURI function. It creates * invisible '<A>' element in document * body and forces download file * dialog. Should work with all * HTML5-ready browsers. Shouldn't * break website layout. * * I'm unsure about older browsers * (I am looking on you, damn IE9). * * Throws error if buffer was closed * before function call * * @returns nothing */ this.download = function () { if (this.invaildate) { throw [bin.js] File alreday closed.; } else { var element = document.createElement('a'); element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURI(this.buffer.join(''))); element.setAttribute('download', this.name); element.style.display = 'none'; document.body.appendChild(element); element.click(); document.body.removeChild(element); } }; /** * Returns hash code of buffer. * Name of file doesn't affect * hash code. File with any name * and same content must return * the same value of this hashing * function. * * Throws an exception if buffer * was closed before function call. * * @returns {Number} */ this.hashCode = function() { if (this.invaildate) { throw [bin.js] File alreday closed.; } /** * Simple implementation of simple * Java's algorithm - String.hashCode(). * @param {type} str * @returns {Number} */ var hashcode = function (str) { var hash = 0; if (str.length === 0) return hash; else for (i = 0; i < str.length; i++) { char = str.charCodeAt(i); hash = ((hash << 5) - hash) + char; hash = hash & hash; } return hash; }(this.buffer); return hashcode; }; }};I've figured out how to create some kind of namespace (to don't pollute global one). Before I thought about about using class keyword of JavaScript but not every minifier/lint etc. was supporting it, but I didn't figure out how to hide my public class variables, eg. make 'name' and 'buffer' private. For hashCode method I've chosen Java .hashCode method because it was pretty simple to implement. Sorry for my poor English, I'll try to fix all spelling errors in code.Can you suggest me how to make my code better? | Binary file management library | javascript | null |
_webmaster.84039 | My website works when you go to it; however, Google AdWords is telling me that my site is redirecting / not showing up properly to their bots.
When you put the site into a website checker like GTmetrix or Domain Tuno (won't let me post another link), they both show the default Parallels Plesk page.
My Plesk server currently hosts 300+ websites, and the IP address (the A record) is the same across them all. However, if you go to the domain's IP you'll get the same Plesk default page, yet if you type in the domain you get the correct website.
On my other server, when I input websites into those site checkers it pulls them up just fine. And the default Parallels Plesk page index.html was deleted from the root, so that's not the issue. The only index on my sites is the index.php, which is the correct page.
I could set one default domain for the IP address on my server and I think that would resolve it, but that only fixes the problem for one site and not the rest when being looked up by these Google AdWords bots or website bots that check to see if your site is functioning correctly. I'm completely stumped and don't know what to do. | Parallels Plesk Apache dedicated server shared IP / A Address serving default page | php;htaccess;redirects;plesk | null
_unix.232447 | I'm working with a Linux embedded board. It uses Linux kernel v2.6.37 with an external I2C RTC cbc34803. I've successfully integrated the RTC hardware. It works correctly except for a problem synchronizing the system time and the hardware clock time.
As I understand it, there are two types of time in Linux: system time and hardware clock time (RTC). When the system boots, the system time is set from the hardware clock time. But when I change the system time with the date command, the system time does not sync to the RTC. Of course, it will be synced if I use the hwclock -w command.
I want the system to automatically write the system time to the RTC (hardware clock) whenever the system time is changed. The question is: which component is responsible for syncing the system time to the RTC, and what do I need to do? | Automatically synchronization between system time and hardware clock time | linux;date;clock | You could write a function that does both:

set_both_clocks() {
    date "$@"
    hwclock -w
}

Give it the exact same arguments you'd give to date when setting the system clock.
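Usage mirrors date's own syntax, e.g. (hypothetical timestamp):

set_both_clocks -s "2015-09-01 12:00:00"   # set the system clock, then persist it to the RTC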
_datascience.16825 | Is there a way to reload all attributes after having removed some of them, without reopening the data file? Any help please? | How to reload all attributes in WEKA | weka | Judging from the screenshot, you are currently looking at the data in the Preprocess tab of the Explorer module. In the menu above the top menu in your screenshot there should be an Undo option (5th option from the left).
_unix.238567 | I want to rename all files in a directory to lowercase without renaming subdirectories. I know about the rename command, but it renames both files and subdirectories, which is not what I want. Can you help? | Rename files not directories | bash;shell;files;rename | null
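For illustration, one way to do this in Python (a sketch; note it does not guard against two names colliding after lowercasing):

import os

for name in os.listdir("."):
    if os.path.isfile(name):     # regular files only; directories are skipped
        lower = name.lower()
        if lower != name:
            os.rename(name, lower)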
_codereview.92312 | I've just encountered this question and am trying to solve it using Java.Here is my solution to it, which may not be optimized or might not be right way to do it. Someone please review whether it is correct or there is some good way to do it. For focusing on logic, I have hard coded List creation and code repetition is present.package linkedlist.singly;//Add two numbers represented by linked lists // 245 : 5 -> 4 -> 2// 99789 : 9 -> 8 -> 7 -> 9 -> 9// Ans : 99341public class Add2NumbersInLinkListType2 { static int carry=0; public static void main(String[] args) { Node templ11 = new Node(5); Node templ12 = new Node(4); Node templ13 = new Node(2); Node templ21 = new Node(9); Node templ22 = new Node(8); Node templ23 = new Node(7); Node templ24 = new Node(9); Node templ25 = new Node(9); templ11.setNext(templ12); templ12.setNext(templ13); templ21.setNext(templ22); templ22.setNext(templ23); templ23.setNext(templ24); templ24.setNext(templ25); Node res = findSum(templ11, templ21, 0); if(carry==1){ Node tempNode = new Node(carry); tempNode.setNext(res); res = tempNode; } while(res!=null){ System.out.print(res.getData()); res=res.getNext(); } } private static Node findSum(Node l1, Node l2, int diff){ int length1 = findLength(l1); int length2 = findLength(l2); if(length1>length2){ //l1 having more nodes Node res = findSum(l1.getNext(), l2, diff--); int data = l1.getData() + carry; if(data>9){ carry=1; Node tempNode = new Node(data%10); tempNode.setNext(res); res = tempNode; }else{ carry=0; Node tempNode = new Node(data); tempNode.setNext(res); res = tempNode; } return res; }else if(length2>length1){ //l2 having more nodes Node res = findSum(l1, l2.getNext(), diff++); int data = l2.getData() + carry; if(data>9){ carry=1; Node tempNode = new Node(data%10); tempNode.setNext(res); res = tempNode; }else{ carry=0; Node tempNode = new Node(data); tempNode.setNext(res); res = tempNode; } return res; }else{ //both have same length Node res = findSumForListOfSameSize(l1, l2); return res; } } private static Node findSumForListOfSameSize(Node l1, Node l2){ if(l1==null && l2==null) return null; Node head = findSumForListOfSameSize(l1.getNext(), l2.getNext()); int temp = l1.getData() + l2.getData() + carry; if(temp>9){ carry=1; }else{ carry=0; } if(head==null){ head = new Node(temp % 10); }else{ Node tempNode = new Node(temp % 10); tempNode.setNext(head); head = tempNode; } return head; } private static int findLength(Node node){ int count=0; while(node!=null){ count++; node = node.getNext(); } return count; }} | Add two numbers represented by linked lists | java;algorithm;linked list | null |
_unix.196974 | I want to create a machine-readable copyright file for a Debian package, as defined in https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/#fields .
I have some 3rd-party files which are released under a different license. Debian recommends using the Files: syntax, but I have problems understanding which path I should use.
The line in my package/debian/rules file:
install -oroot -gstaff -m0644 share/includes/idna_convert.class.php debian/gwhois/usr/share/gwhois/includes/
On the target machine, the file is installed at /usr/share/gwhois/includes/idna_convert.class.php .
So, which is the correct usage?
a)
Files: share/includes/idna_convert.class.php
Copyright: 2004-2014, phlyLabs Berlin, http://phlylabs.de
License: LGPL-2.1
b)
Files: debian/gwhois/usr/share/gwhois/includes/idna_convert.class.php
Copyright: 2004-2014, phlyLabs Berlin, http://phlylabs.de
License: LGPL-2.1
c)
Files: /usr/share/gwhois/includes/idna_convert.class.php
Copyright: 2004-2014, phlyLabs Berlin, http://phlylabs.de
License: LGPL-2.1 | Debian machine readable copyright: Files path | debian | null
_unix.351061 | Am I right that all input typed from the keyboard goes through a controlling terminal? That means that if a program is run without a controlling terminal, it won't be able to receive any user input. Is that right for every kind of program in Linux?UPDATE #1: To clarify the question, my pager module for Python crashes when stdin is redirected:$ ./pager.py < README.rst... File pager.py, line 566, in <module> page(sys.stdin) File pager.py, line 375, in page if pagecallback(pagenum) == False: File pager.py, line 319, in prompt if getch() in [ESC_, CTRL_C_, 'q', 'Q']: File pager.py, line 222, in _getch_unix old_settings = termios.tcgetattr(fd)termios.error: (25, 'Inappropriate ioctl for device')This is because I try to get descriptor to setup keyboard input as fd = sys.stdin.fileno(). When stdin is redirected, its file descriptor no longer associated with any keyboard input, so attempt to setup it fails with input-output control error.I was told to get this controlling terminal instead, but I had no idea where does it come from. I understood that it is some kind of channel to send signals from user to running processes, but at the same time it is possible to run processes without it.So the question is - should I always read my keyboard input from controlling terminal? And what happens if the pager process is run without it? Will keyboard input still matter to user? Should I care to get it from some other source? | Does keyboard input always go through a controlling terminal? | terminal;ioctl;controlling terminal | No. Terminal applications read keyboard input from the device file (on Linux, something like /dev/ttyS0 or /dev/ttyUSB0... for a serial device, /dev/pts/0 for a pseudo-terminal device) corresponding to the terminal with the keyboard you're typing on.That device doesn't have to be the controlling terminal of the process (or any process for that matters).You can do cat /dev/pts/x provided you have read permission to that device file, and that would read what's being typed on the terminal (if any) at the other end.Actually, if it is the controlling terminal of the process and the process is not in the foreground process group of the terminal, the process would typically be suspended if it attempted to read from it (and if it was in the foreground process group, it would receive a SIGINT/SIGTSTP/SIGQUIT if you sent a ^C/^Z/^\ regardless of whether the process is reading from the terminal device or not). Those things would not happen if the terminal device was not the controlling terminal of the process (if the process was part of a different session). That's what controlling terminal is about. That is intended for the job control mechanism as implemented by interactive shells. Beside those SIGTTIN/SIGTTOU and SIGINT/SIGTSTP/SIGQUIT signals, the controlling terminal is involved in the delivery of SIGHUP upon terminal hang hup, it's also the tty device that /dev/tty redirects to.In any case, that's only for terminal input: real as in a terminal device connected over a serial cable, emulated like X11 terminal emulators such as xterm that make use of pseudo-terminal devices, or emulated by the kernel like the virtual terminals on Linux that interact with processes with /dev/tty<x> (and support more than the standard terminal interface).Applications like the X server typically get keyboard input from the keyboard drivers. On Linux using common input abstraction layers. 
The X server, in turn provides an event mechanism to communicate keyboard events to applications connecting to it. For instance, xterm would receive X11 keyboard events which it translates to writing characters to the master side of a pseudo-terminal device, which translates to processes running inside xterm reading the corresponding characters when they read from the corresponding pseudo-terminal slave device (/dev/pts/x).Now, there's no such thing as a terminal application. What we call terminal application above are applications that are typically used in a terminal, that are expected to be displayed in a terminal and take input from a terminal like vi, and interactive shell or less. But any application can be controlled by a terminal, and any application that reads or writes files or their stdin/stdout/stderr can be made to perform I/O to a terminal device.For instance, if you run firefox, an application that connects to the X server for user I/O, from within a shell running in an xterm, firefox will inherit the controlling terminal from its shell parent. ^C in the terminal would kill it if it was started in foreground by the shell. It will also have its file descriptors 0, 1 and 2 (stdin, stdout and stderr) open on that /dev/pts/<x> file (again as inherited from its shell parent). And firefox may very well end up writing on the fd 2 (stderr) for some kind of errors (and if it was put in background and the terminal device was configured with stty tostop, it would then receive a SIGTTOU and be suspended).If instead, firefox is started by your X session manager or Windows manager (when you click on some firefox icon on some menu), it will likely not get any controlling terminal and will have no file descriptor connected to any (you'll see that ps -fp <firefox-pid> shows ? as the tty and lsof -p <firefox-pid> shows no file descriptor on /dev/pts/* or /dev/tty*). If however you browsed to file:///dev/pts/<x>, firefox could still do some I/O to a terminal device. And if it opened that file without the O_NOCTTY flag and if it happened to be a session leader and if that /dev/pts/<x> didn't already have a session attached to it, that device would end up being the controlling terminal of that firefox process.More reading at:How do keyboard input and text output work?What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?EditAfter your edit clarifies a bit the question and adds some context.The above should make it clear that a process can read input from any terminal device they like (except the controlling terminal if the process is not in its foreground process group), but that's not really what is of interest to you here.Your question would be: for an interactive terminal application, where to get the user input from when stdin no longer points to the terminal.Applications like tr get their input from stdin and write on stdout. When stdin/stdout is a tty device with a terminal at the other end, they happen to be interactive in that they read and write data from/to the user.Some terminal text editors (like ed/ex and even some vi implementations) continue reading their input from stdin when stdin is no longer a terminal so they can be scriptable.A pager though is a typical application that still needs to interact with the user even when their input is not a terminal (at least when their output still goes to the terminal). So they need another channel to the terminal device to take user input. 
And the question is: which terminal device should they use?Yes, it should be the controlling terminal. As that's typically what the controlling terminal is meant to be. That's the device that would send the pager a SIGINT/SIGTSTP when you press Ctrl-C/Z, so it makes sense for the pager to read other key strokes from that same terminal.The typical way to get a file descriptor on the controlling terminal is to open /dev/tty that redirects there (note that it works even if the process has changed euid so that it doesn't have read permission to the original device. It's a lot better than trying to find a path to the original device (which can't be done portably anyway)).Some pagers like less or most open /dev/tty even if stdin is a tty device (after all, one could do less < /dev/ttyS0 from within a terminal emulator to see what's being sent over serial).If opening /dev/tty fails, that's typically because you don't have a controlling terminal. One might argue that it's because you've been explicitly detached from a terminal so shouldn't be attempting to do user interaction, but there are potential (unusual) situations where you have no controlling terminal device but your stdin/stdout is still a tty device and you'd still want to do user interaction (like an emergency shell in an initrd).So you could fall back to get user interaction from stdin if it's a terminal.One could argue that you'd want to check that stdout is a terminal device and that it points to the same terminal device as the controlling one (to account for things that do man -l /dev/stdin < /dev/ttyS0 > /dev/ttyS1 for instance where you don't want the pager spawned by man to do user interaction) but that's probably not worth the bother especially considering that it's not easy to do portably. That could also potentially break other weird use cases that expect the pager to be interactive as long as stdout is a terminal device. |
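Tying this back to the pager in the question, the usual pattern looks roughly like this (a sketch):

import os
import sys
import termios

try:
    fd = os.open("/dev/tty", os.O_RDWR)   # the controlling terminal; raises if there is none
except OSError:
    fd = sys.stdin.fileno() if sys.stdin.isatty() else None   # conservative fallback

if fd is not None:
    old_settings = termios.tcgetattr(fd)  # now safe: fd is guaranteed to refer to a tty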
_softwareengineering.47402 | I'm looking for a toolkit in the form of one or a couple of applications that can be used to write long technical texts (such as an introduction to a programming language).
What applications (or combinations of them) are suitable for this?
How should said applications be set up (for example, how would one set up MS Word to best fit writing a technical text)?
How do you deal with source code, syntax coloring and formatting?
In the case of it being several applications, how do you interact between them? | What is the best toolkit for writing long technical texts? | text editor;applications;writing | LaTeX is what you need. MiKTeX - for Windows.
Text Editors:
Kile - for Linux.
TeXnicCenter - for Windows.
Documentation:
Free LaTeX documentation.
Complexity comparison.
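For the source-code / syntax-colouring part of the question, the usual LaTeX route is the listings package (or minted); a minimal sketch:

\documentclass{article}
\usepackage{xcolor}
\usepackage{listings}
\lstset{language=Python,
        basicstyle=\ttfamily\small,
        keywordstyle=\color{blue},
        commentstyle=\color{gray}}
\begin{document}
\begin{lstlisting}
def greet(name):
    return "Hello, " + name  # comments come out gray
\end{lstlisting}
\end{document}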
_unix.103459 | I'd like to get output showing what has been copied by cp.
The only problem is how to do it when I cp many files at a time. For instance:
cp ./sourceDir/* $destinationPath/ | Display what has been copied by `cp` (using `ksh`) | ksh;cp | Like Lawrence has mentioned, you can use
cp -v
to enable verbose mode, which displays the files you copy. Something else that might be useful is
cp -v source... destination > foo
which will write the list of copied files to a file called foo. This is useful if you're going to copy a lot of files and you want to be able to review the list later.
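A small refinement on the same idea: tee shows each copied file on screen and keeps the list in one go (assuming a cp that supports -v, as above):

cp -v ./sourceDir/* "$destinationPath/" | tee copied.log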
_cstheory.25621 | The wikipedia page on PSPACE mentions that the inclusion $NL\subset PH$ is not known to be strict (unfortunately without references).
Q1: What about $L\subset PH$ and $L\subset P^{\#P}$ - are these known to be strict?
Q2: If not, is there an established class $C$ which contains $P^{\#P}$ and for which it is not known whether the inclusion $L\subset C$ is strict?
Q3: Are such inclusions discussed in the literature? | Large classes which contain LOGSPACE for which strict inclusions are unknown | cc.complexity theory;complexity classes;logspace | null
_webmaster.23727 | First of all, I am a programmer and have only really basic knowledge, like the fact that keywords, titles and content make a difference to SEO ranking. I have a website which is more like yellow pages. I have imported about 40,000 businesses; they have SEO-friendly URLs and they are divided into categories and subcategories. If I expose all of those to Google at the same time, will that create any problems? Should I open them to Google a few hundred at a time, with a couple of days between batches? As this is a large number of pages, can Google end up thinking it is a spam website and hence punish it in the rankings? Also, my category and paging URLs use query strings, i.e. ?page=1&cat=12. Can Google crawl those kinds of URLs? Do I have to change them to /12/1-style URLs with URL rewriting? | Need SEO Guidance on Huge dynamic website | seo;dynamic | null
_webmaster.59568 | Some time ago I bought a cool domain name to match my username, namely, wol.ph. After looking in Google Webmaster Tools recently, I saw that the geographic target for my domain is, understandably, the Philippines.
Since the Webmaster Tools don't allow me to change the target, I was wondering if this will actually affect my site negatively and if I should just move the site to a different domain for it to show up in rankings.
So... should I move my site to a different (.com, .org, .net, etc.) TLD? | Will the Geographic Target of my website negatively affect my search ranking? | seo;google search console;geotargeting;negative seo | When your website is targeted to a specific country, you will rank better in that country and much worse everywhere else in the world.
Unless your top-level domain is on Google's list of generic top-level domains, you won't be able to use it for a website that targets a global audience and gets traffic from Google search.
_codereview.32656 | Here is the code I wrote for my mother's dental office. It fulfills these tasks:determines whether a tooth is anterior or posteriordetermines if the patient needs a fillingsets up a later appointmentMy issue is with cleaning the code to increase readability, because I think there is a better way to write it. I also need help outputting the answers to a .txt file for later reference, but I'm not sure if that will screw up the code.I am newbie, so tips are very much appreciated. I have a lot of bad habits when it comes to programming.// fillings program#include <iostream>using namespace std;int main(){ bool post, // posterior: ? ante; // anterior: ? int fillings; // fillings: Y/N int tooth_number; // tooth_number: 1-32 int surface_num; // surface_num should be 1-5 int amal_or_comp; // amalgum or composite filling int cav_deep; // input for if cavity is deep cout << Is the cavity deep?: 1.Yes 2.No << endl; cin >> cav_deep; if (cav_deep==1) cout << Temporary filling needed. Patient needs to come back for permanent fill at later date. << endl; else if (cav_deep==2) { cout << Fillings Needed?: 1.Yes 2.No << endl; cin >> fillings; if (fillings==2) { cout << Proceed to 'crowns' << endl; } if (fillings==1) { cout << Tooth #: << endl; cin >> tooth_number; cout << Tooth # entered is: << tooth_number << endl; // if tooth is posterior if (tooth_number>=1&&tooth_number<=5) { cout << Posterior Tooth << endl; cout << 1. Amalgam or 2. Composite? << endl; cin >> amal_or_comp; if(amal_or_comp==1) cout << Amalgam selected. << endl; if(amal_or_comp==2) cout << Composite selected. << endl; cout << Surface #: ; cin >> surface_num; cout << Surface # entered: << surface_num; } if (tooth_number>=12&&tooth_number<=16) { cout << Posterior Tooth << endl; cout << 1. Amalgam or 2. Composite? << endl; cin >> amal_or_comp; if(amal_or_comp==1) cout << Amalgam selected. << endl; if(amal_or_comp==2) cout << Composite selected. << endl; cout << Surface #: ; cin >> surface_num; cout << Surface # entered: << surface_num; } if (tooth_number>=17&&tooth_number<=21) { cout << Posterior Tooth << endl; cout << 1. Amalgam or 2. Composite? << endl; cin >> amal_or_comp; if(amal_or_comp==1) cout << Amalgam selected. << endl; if(amal_or_comp==2) cout << Composite selected. << endl; cout << Surface #: ; cin >> surface_num; cout << Surface # entered: << surface_num; } if (tooth_number>=28&&tooth_number<=32) { cout << Posterior Tooth << endl; cout << 1. Amalgam or 2. Composite? << endl; cin >> amal_or_comp; if(amal_or_comp==1) cout << Amalgam selected. << endl; if(amal_or_comp==2) cout << Composite selected. << endl; cout << Surface #: ; cin >> surface_num; cout << Surface # entered: << surface_num; } // if tooth is anterior if (tooth_number>=6&&tooth_number<=11) { cout << Anterior Tooth << endl; cout << Composite << endl; cout << Surface #: ; cin >> surface_num; cout << Surface # entered: << surface_num; } if (tooth_number>=22&&tooth_number<=27) { cout << Anterior Tooth << endl; cout << Composite << endl; cout << Surface #: ; cin >> surface_num; cout << Surface # entered: << surface_num; } } }} | Dental office program | c++;beginner | null |
_unix.250666 | I am new to Linux and learning it. While installing gambas, I am facing some problems. The list of commands I need to execute follows:
$> sudo add-apt-repository ppa:gambas-team/gambas-daily
$> sudo apt-get update
$> sudo apt-get install gambas3
But when running the first command it shows the following: sudo: apt-get: command not found
Any help? | Command not found in CentOS 6 | centos;package management | As per the gambas3 Compilation & Installation documentation, there is no prebuilt package available for CentOS 6 to install gambas3. If you still want to install it, you will have to compile and install it manually. For this, I think you should try the instructions for Fedora 13, 14, 15 & 16 given in the Gambas 3.0 compilation instructions documentation for Fedora.
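If you go the compile-from-source route the linked documentation describes, the flow is the usual autotools one. A sketch only; take the exact steps and the long dependency list from the Gambas docs:

./reconf-all        # regenerate the build system (script shipped in the Gambas source tree)
./configure -C
make
sudo make install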
_codereview.126050 | I'm trying to learn the Repository pattern, and I have some questions regarding my current understanding of it.All the examples I've been able to find of database repositories use ORMs, but for a number of reasons, I can't use an ORM in the project I am learning this for. So, when not using an ORM, where should the SQL queries go? My best guess was in the repository class itself, so that's what I did in the example below.How's my naming convention for the repository's methods? I stuck with the create/update/delete verbiage of SQL as a sort of placeholder, but is there a better way?Because I'm not using an ORM, I need a setId() method in my repository. I recognize the danger inherent in allowing id's to be changed after object creation. Right now I prevent that by throwing an exception in setId() if id is not null. Is that alright or is there a better way?Am I doing anything just completely wrong in general?Here is my current implementation, as far as I understand the concepts.Product.php<?phpnamespace Vendor\Package\Module\Entities;class Product{ /** @var int $id */ protected $id; /** @var string $name */ protected $name; public function getId() { return $this->id; } public function setId($id) { if ($this->id !== null) { throw new Exception('id cannot be reset.'); } $this->id = $id; return $this; } public function getName() { return $this->name; } public function setName($name) { $this->name = $name; return $this; }}ProductRepositoryInterface.php<?phpnamespace Vendor\Package\Module\Repositories;use PDO;use Vendor\Package\Module\Entities\Product;interface ProductRepositoryInterface{ public function findAll(); public function findById($id); public function create(Product $product); public function update(Product $product); public function delete(Product $product);}ProductRepository.php<?phpnamespace Vendor\Package\Module\Repositories;use PDO;use Vendor\Package\Module\Entities\Product;class ProductRepository implements ProductRepositoryInterface{ /** @var PDO $db */ protected $db; public function __construct(PDO $db) { $this->db = $db; } /** * @return array */ public function findAll() { $stmt = $this->db->query( 'SELECT id, name FROM products WHERE active = 1' ); $products = []; while ($stmt->fetch(PDO::FETCH_ASSOC)) { $product = new Product(); $product ->setId($result['id']) ->setName($result['name']) ; } return $products; } /** * @param int $id * * @return Product */ public function findById($id) { $stmt = $this->db->prepare( 'SELECT id, name FROM products WHERE id = :id AND active = 1 LIMIT 1' ); $stmt->bindValue(':id', $id, PDO::PARAM_INT); $stmt->execute(); $result = $stmt->fetch(PDO::FETCH_ASSOC); $product = new Product(); $product ->setId($result['id']) ->setName($result['name']) ; } /** * @param Product $product * * @return int */ public function create(Product $product) { $stmt = $this->db->prepare( 'INSERT INTO products ( name ) VALUES ( :name )' ); $stmt->bindValue(':name', $product->getName(), PDO::PARAM_STR); $stmt->execute(); $id = $this->db->lastInsertId(); $product->setId($id); return $id; } /** * @param Product $product * * @return bool */ public function update(Product $product) { $stmt = $this->db->prepare( 'UPDATE products SET name = :name WHERE id = :id AND active = 1' ); $stmt->bindValue(':name', $product->getName(), PDO::PARAM_STR); $stmt->bindValue(':id', $product->getId(), PDO::PARAM_INT); return $stmt->execute(); } /** * @param Product $product * * @return bool */ public function delete(Product $product) { $stmt = $this->db->prepare( 'UPDATE 
products SET active = 0 WHERE id = :id AND active = 1' ); $stmt->bindValue(':id', $product->getId(), PDO::PARAM_INT); return $stmt->execute(); }}demo.php<?phpuse Vendor\Package\Module\Entities\Product;use Vendor\Package\Module\Repositories\ProductRepository;$repository = new ProductRepository($db);// Createif ( isset($_POST['create']) && isset($_POST['name'])) { $product = new Product(); $product ->setName($_POST['name']) ; $repository->create($product);}// Updateif ( isset($_POST['update']) && isset($_POST['id']) && isset($_POST['name'])) { $product = new Product(); $product ->setId($_POST['id']) ->setName($_POST['name']) ; $repository->update($product);}// Deleteif ( isset($_POST['delete']) && isset($_POST['id'])) { $product = new Product(); $product ->setId($_POST['id']) ; $repository->delete($product);} | Repository Pattern without an ORM | php;design patterns;repository | null |
_unix.285698 | As the root user, I created a file in the / directory. When logged in as a normal user (say A), I can only read this file, as expected.
I changed the ownership to A. Now A can read as well as write.
But when I try to delete it, a "permission denied" message appears. Can anyone explain why? | Why can't I delete a file when I have the file's ownership? | permissions;chown | null
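The usual explanation, illustrated (hypothetical transcript): deleting a file means removing its entry from the containing directory, so what matters is write permission on the directory - here / - not ownership of the file itself:

$ ls -ld /
drwxr-xr-x 23 root root 4096 May 29 10:00 /    # only root may modify / itself
$ rm /somefile
rm: cannot remove '/somefile': Permission denied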
_datascience.9590 | I have no background AT ALL in data science/stats/mathematics. However, I've always been interested in what data shows.
I have a huge dataset right now: daily attendance figures for a factory of ~300 for the past 10 years. I'm interested in finding answers to questions like "is there a pattern of leaves correlated with public holidays?" For example, around which holiday (+/- 2 days) are workers most likely to take a leave? This is an expected pattern. Or, was there a significant increase (+10%) in on-time reporting after bonuses were issued? Maybe there are hidden patterns which an algorithm can find.
Is there a tool I can plug this data into which can help me find these patterns? Google tells me there's a tool http://www.i-programmer.info/news/84-database/3501-mine-finding-patterns-in-big-data.html but I'm not sure if this is the right direction for me.
I'd appreciate any advice! | What tool to find expected and hidden patterns in data? | data mining | null
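As a concrete starting point for the holiday question, a pandas sketch (the file name and column names are hypothetical):

import pandas as pd

df = pd.read_csv("attendance.csv", parse_dates=["date"])  # assumed columns: date, worker_id, present
daily_rate = df.groupby("date")["present"].mean()         # attendance rate per day
print(daily_rate.sort_values().head(10))                  # the ten worst-attended days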
_codereview.105670 | I'm trying to break down the individual functionality of a advanced table UI into different react components (right now all table components are really heavy).I came up with this way of exposing properties of a component to the parent component which allows components to talk to each other. It's quite messy and I'm looking for better ways of achieving this.Here's the working demo.function filter (value, caseSensitive, children) { return main.recursion (children, function (children, recursion, wrapper) { return _.filter(children, wrapper(function (child) { if (typeof child.props.children !== 'string') { var result = recursion(child.props.children) return Boolean(result.length) } else { var flag = (caseSensitive) ? '' : 'i' var pattern = new RegExp(value, flag) return (child.props.children.match(pattern)) } })) })}var FilterChildren = React.createClass({ displayName: 'FilterChildren', propTypes: { children: React.PropTypes.node, provideFilter: React.PropTypes.func, provideFilterChildren: React.PropTypes.func, onFilter: React.PropTypes.func, caseSensitive: React.PropTypes.bool, selector: React.PropTypes.string }, getInitialState: function () { return { children: this.props.children } }, componentWillMount: function () { if (this.props.provideFilter) this.props.provideFilter(this.filter) if (this.props.provideFilterChildren) this.props.provideFilterChildren(this.filterChildren) }, filterChildren: function (children) { this.children = children }, filter: function (value) { this.props.onFilter(filter(value, this.props.caseSensitive, this.children)) }, render: function () { return this.state.children }})var StatefulChildren = React.createClass({ displayName: 'StatefulChildren', propTypes: { children: React.PropTypes.node, provide: React.PropTypes.func, supply: React.PropTypes.func, element: React.PropTypes.string }, getInitialState: function () { return { children: this.props.children } }, componentWillMount: function () { if (this.props.provide) this.props.provide(this.setChildren) if (this.props.supply) this.props.supply(this.props.children) }, setChildren: function (children) { this.setState({ children: children }) }, render: function () { return React.createElement( this.props.element, null, this.state.children ) }})var Page = React.createClass({ invokeFilter: function (event) { return this.filter(event.target.value) }, handleFilter: function (nodes) { this.setTbodyChildren(nodes) }, provideFilter: function (filter) { this.filter = filter }, provideFilterChildren: function (filterChildren) { this.filterChildren = filterChildren }, provideSetTbodyChildren: function (setTbodyChildren) { this.setTbodyChildren = setTbodyChildren }, supplyTbodyChildren: function (tbodyChildren) { this.filterChildren(tbodyChildren) }, render: function () { return ( <div> <input type='text' onChange={this.invokeFilter}></input> <table> <FilterChildren provideFilter={this.provideFilter} provideFilterChildren={this.provideFilterChildren} onFilter={this.handleFilter}> <StatefulChildren element='tbody' provide={this.provideSetTbodyChildren} supply={this.supplyTbodyChildren}> <tr> <td>French Fries</td> <td>Mimes</td> <td>Discotech</td> </tr> <tr> <td>Bread</td> <td>Coffee</td> <td>Wine</td> </tr> </StatefulChildren> </FilterChildren> </table> </div> ) }})React.render(<Page/>, document.body) | Filter children component via input component | javascript;react.js | null |
_unix.343493 | I would like to know how to monitor TCP traffic between my localhost and a given IP address, keeping a record of the activity in a file. I tried iftop and tcptrack but I cannot save the activity to a file. These tools don't target a specific IP address; they monitor the interface only:
iftop -i eth2 -f dst port 22
I tried to put the IP address in place of dst but it doesn't work.
The idea is to detect any suspicious traffic. Thanks for the help. | How to monitor TCP traffic between my localhost and an IP address | networking;monitoring;network interface;bandwidth | As @blametheadmin mentioned in a comment, you can use tshark. Another option is tcpdump:
$ tcpdump -w trace.out host <hostname-or-ip>
Then later, you can examine that trace with:
$ tcpdump -r trace.out
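When replaying the capture you can also narrow it with the same filter syntax, for example:

tcpdump -r trace.out 'tcp port 22'    # only the ssh traffic from the saved trace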
_unix.115732 | Quite often, we run an executable that needs to write / read some temporary files. We usually create a temporary directory, run the executable there, and delete the directory when the script is done.
I want to delete the directory even if the executable is killed. I tried to wrap it in:

#!/bin/bash
dir=$(mktemp -d /tmp/foo.XXXXXXX) && cd $dir && rm -rf $dir
/usr/local/bin/my_binary

When my_binary dies, the kernel will delete the directory, as the script is the last process holding that inode; but I can't create any file in the deleted directory:

#!/bin/bash
dir=$(mktemp -d /tmp/foo.XXXXXXX) && cd $dir && rm -rf $dir
touch file.txt

outputs touch: file.txt: No such file or directory
The best I could come up with is to delete the temp directory when the process dies, catching the most common signals, and run a cleanup process with cron:

#!/bin/bash
dir=$(mktemp -d /tmp/d.XXXXXX) && cd $dir || exit 99
trap 'rm -rf $dir' EXIT
/usr/local/bin/my_binary

Is there some simple way to create a really temporary directory that gets deleted automatically when the current binary dies, no matter what? | How to delete a directory automatically when an executable is killed | bash;files | Your last example is the most fail-safe.

trap 'rm -rf $dir' EXIT

This will execute as long as the shell itself is still functional. Basically SIGKILL is the only thing that it won't handle since the shell is forcibly terminated. (Perhaps SIGSEGV too, didn't try, but it can be caught.)
If you don't leave it up to the shell to clean up after itself, the only other possible alternative is to have the kernel do it. This is not normally a kernel feature, however there is one trick you can do, but it has its own issues:

#!/bin/bash
mkdir /tmp/$$
mount -t tmpfs none /tmp/$$
cd /tmp/$$
umount -l /tmp/$$
rmdir /tmp/$$
do_stuff

Basically you create a tmpfs mount, and then lazy unmount it. Once the script is done it'll be removed.
The downside, other than being overly complex, is that if the script dies for any reason before the unmount, you've now got a mount laying around.
This also uses tmpfs, which will consume memory. But you could make the process more complex and use a loop filesystem, and remove the file backing it after it's mounted.
Ultimately the trap is best as far as simplicity and safety, and unless your script is regularly getting SIGKILLed, I'd stick with it.
_unix.229350 | Since my previous question on this topic, I upgraded my kernel a few times and I ran into another problem: cpupower doesn't seem to show and set the cpu frequency in a reliable way.First, some information:# uname -aLinux yoga 4.0.5-gentoo #3 SMP Tue Jul 21 08:43:04 HKT 2015 x86_64 Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz GenuineIntel GNU/Linux# cpupower frequency-info analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 782 MHz - 1.70 GHz available frequency steps: 1.70 GHz, 1.70 GHz, 1.60 GHz, 1.50 GHz, 1.40 GHz, 1.30 GHz, 1.20 GHz, 1.10 GHz, 1000 MHz, 900 MHz, 800 MHz, 782 MHz available cpufreq governors: conservative, ondemand, powersave, userspace, performance current policy: frequency should be within 782 MHz and 1.70 GHz. The governor performance may decide which speed to use within this range. current CPU frequency is 1.70 GHz (asserted by call to hardware). cpufreq stats: 1.70 GHz:90.12%, 1.70 GHz:0.00%, 1.60 GHz:0.64%, 1.50 GHz:0.00%, 1.40 GHz:0.00%, 1.30 GHz:0.00%, 1.20 GHz:0.00%, 1.10 GHz:0.00%, 1000 MHz:0.00%, 900 MHz:0.00%, 800 MHz:0.00%, 782 MHz:9.25% (267) boost state support: Supported: yes Active: yes 2400 MHz max turbo 4 active cores 2400 MHz max turbo 3 active cores 2400 MHz max turbo 2 active cores 2600 MHz max turbo 1 active coresAnd now the weird stuff:# cpupower frequency-info|grep -P The governor|CPU frequency The governor performance may decide which speed to use current CPU frequency is 1.70 GHz (asserted by call to hardware).# grep MHz /proc/cpuinfocpu MHz : 1701.000cpu MHz : 1701.000cpu MHz : 1701.000cpu MHz : 1701.000# cpupower frequency-set -f 800Setting cpu: 0Setting cpu: 1Setting cpu: 2Setting cpu: 3# cpupower frequency-info|grep -P The governor|CPU frequency The governor userspace may decide which speed to use current CPU frequency is 1.70 GHz (asserted by call to hardware).# grep MHz /proc/cpuinfocpu MHz : 782.000cpu MHz : 782.000cpu MHz : 782.000cpu MHz : 782.000# cpupower frequency-set -f 1700Setting cpu: 0Setting cpu: 1Setting cpu: 2Setting cpu: 3# cpupower frequency-info|grep -P The governor|CPU frequency The governor userspace may decide which speed to use current CPU frequency is 1.70 GHz (asserted by call to hardware).# grep MHz /proc/cpuinfo cpu MHz : 782.000cpu MHz : 782.000cpu MHz : 782.000cpu MHz : 782.000# cpupower frequency-set -g performanceSetting cpu: 0Setting cpu: 1Setting cpu: 2Setting cpu: 3# cpupower frequency-info|grep -P The governor|CPU frequency The governor performance may decide which speed to use current CPU frequency is 1.70 GHz (asserted by call to hardware).# grep MHz /proc/cpuinfocpu MHz : 1701.000cpu MHz : 1701.000cpu MHz : 1701.000cpu MHz : 1701.000To summarize:When I set the frequency to 800, cpupower sets it to 782, and says it is still 1700 (asserted by call to hardware!)When I set the frequency back to 1700, cpupower does nothing (and still says it is 1700)When I set the governor to performance, cpupower finally sets the frequency to 1700Is there a way to make cpupower work reliably? Or is it a bug? | Cpupower doesn't work reliably | linux;cpu frequency | null |
_softwareengineering.185574 | I have almost a 'best-practice' question that's been nagging at me for a while.When I use JavaScript libraries and APIs such as JQGrid or Google Maps, I tend to find myself creating server-side libraries to render the JavaScript for me.For example I might have:$map->setZoom(10);$map->setDimensions(200,250);in PHP, which would internally render the relevant JS when I call $map->render();, for example.This means I can easily generate JavaScript based on the state of my applications. I also find it easier to work with the data I already have in my controllers etc.Is this a suitable way of using common functionality within a JavaScript library or would it be better to write JS as standard? | Server side JavaScript rendering | javascript | First issue: code qualityHow do you:unit test,debug,review for security issues,this generated code?The issue is the same for any generated code.That's why code generation tools are used only for simplistic tasks where the code is mostly boilerplate.For example, in the .NET world, Visual Studio generates code for Windows Forms and is limited to positioning and customizing controls, but nothing technically challenging. In the same way, Entity Framework generates the mappings for the database, which means lines and lines of boilerplate, uninteresting, monotonous code.Second issue: performanceIf the code is generated, how do you cache it? Is it at least cached? If not, have you measured the precise impact on performance, compared to correctly-cached static JavaScript code? What about the impact on the server which needs to generate this code, and how does it scale?This issue may be non-existent in some cases (or at least the slight performance impact is kept very small by aggressive caching, and a few milliseconds spent by the server generating the files is outweighed by the gains in terms of time you spend writing code), but you still need to measure the impact to know exactly how it is affecting your application.Third issue: lower interoperabilityJavaScript code written directly in JavaScript can be used no matter which framework is used server-side. JavaScript code I wrote for an ASP.NET MVC website two years ago can still be used for a new website in Python.If the code is generated server-side using a server-side programming language, you won't be able to reuse the same code in websites powered by other programming languages. Moreover, even migrating to newer versions of the same framework may be painful.
_softwareengineering.211023 | So I was wondering today, where would you put utility classes in an ASP.NET MVC app? By utility classes I mean classes that can be static and are just used to perform a function. Like a class to send an email that takes an email address, subject, and body as arguments.I would assume maybe creating a separate folder and namespace would be good enough, but wanted to get everyone's opinions | Utility Classes in MVC - ASP.NET | asp.net;asp.net mvc 4 | You don't. And your own example is a perfect one to show why not.You want to send emails, right? So you create somewhere a static class CommunicationUtilities with a static SendEmail() in it. You use this method from some class which does a bunch of stuff, for example one that resets the password of a user and sends him a new one by email. Perfect.Now, what happens if you want to unit-test your class? You can't, because every time you want to test the method which resets the password, it changes the database (which is not suitable for a unit test), and moreover sends an email (which is even worse).You may have read about Inversion of Control, which has the benefit of making unit testing easier. Articles about IoC will explain to you that instead of doing something like:void ResetPassword(UserIdentifier userId){ ... new MailSender().SendPasswordReset(userMail, newPassword);}you do:void ResetPassword(IMailSender sender, UserIdentifier userId){ ... sender.SendPasswordReset(userMail, newPassword);}which allows you to use mocks and stubs.
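A minimal sketch of what that seam enables in a test (IMailSender's single method and the fake below are illustrative names, not from a real framework):
using System.Collections.Generic;

public interface IMailSender
{
    void SendPasswordReset(string userMail, string newPassword);
}

// In a unit test, inject a fake instead of the real SMTP-backed sender:
class FakeMailSender : IMailSender
{
    public List<string> Recipients = new List<string>();
    public void SendPasswordReset(string userMail, string newPassword)
    {
        Recipients.Add(userMail); // record the call; no email leaves the test
    }
}
The test then calls ResetPassword(fake, userId) and asserts on fake.Recipients.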
_cogsci.12395 | I've recently encountered the domain of social network modelling while reading the papers on models of strategic voting (unpublished), opinion dynamics and influence spread.All of these papers create complex computational models, usually modelling social networks as graphs and inferring that their behaviour match actual social behaviour.However, I don't understand how these models can be empirically rigorous. It seems to me that the ways to validate a model is to match with empirical data, either via data matching or prediction. This doesn't seem to be done in any of those papers.Consequently, how are these models validated? | Empirical proof for social network models | cognitive modeling;social networks;sociology | I found two papers in the same vein with considerably more empirical evidence.The first paper is Modeling the Size of Wars. In the paper, provinces and conflicts are modeled to justify the Richardson's observation that the proportion of the severity of conflicts in relation to their frequency is described by a power law. In other words, the more space there is between each conflict, the more casualties will result. The models uses a lot of detail. A geographical map is created and conflicts (down to technological advancement, political structural change and resource allocation) are simulated in models with dozens of parameters. More importantly, the level of detail in the paper remain meaningful given that parameters can be set to reflect and test historical scenarios to disprove or give further evidence to the model.The second paper is The Dynamics of Polarisation. The focus of the paper is modeling the change of public opinion in the United States as a way to provide explanation for two phenomenon: that polarization of opinion is rare despite being perceived otherwise and that opinion homogeneity is rare, despite being perceive otherwise. To model these phenomenon, a network similar to that of the Opinion Dynamics paper from the question is built, with imposed homophily. However, the paper grounds it's parameters in reality (for example, take-off issues that stimulate much discussion are considered rare). In contrast, the Opinion Dynamics paper creates parameters for skepticism and empathy, without giving much consideration for the mechanisms behind these attributes and how they might change over time.In summary, there are papers with empirical basis in the domain of social network models. Typically, this realism is achieved by basing parameters on empirical evidence, rather than the usual awkward simplification of cognitive phenomena.Update: The author of two of the papers I cited, Alan Tsang, was kind enough to provide a rebuttal to my scepticism via personal correspondence:The premise behind both the papers was to examine the effects of a particular psychological phenomenon by examining it in isolation. We base our work on more conventional agent models from economics, which assume completely rational actors. We want to see what happens when the rational behavior is tweaked to incorporate a behavioral component. In particular, we are interested in the qualitative effects that are produced, and the mechanisms by which these are achieved. An agent based simulation is the ideal way of studying this, because it allows us to drill down and take measurements that would be impossible or very difficult to do in an actual community. So what we are doing is more mathematics and less science. 
The results of the papers provide qualitative insights on what are plausible effects of these behaviors on a larger system. The next step could certainly be to validate the model against real data, but the goal of the paper is not to perform full stack science. Rather, we hope it would prove useful component in a more detailed model to be used on data collect in the wild.That said, we are also interested in validating our model against real data as well, but they are difficult to come by. For instance, Facebook almost certainly has access to data that can be used to detect homophily in networks, and how they might affect opinions over time. But the data is very proprietary and, even if we were to get it, ethical concerns might limit how it can be used. As you've pointed out in the follow-up post, there are a few papers that examine political affiliation, and those could potentially be useful ground truth. But I don't have the background to properly assemble such a data set from scratch (as it would need both a time series of opinions, and the underlying network structure). Moreover, there are sure to be other effects at work as well, which may confound the analysis. We're giving some thought to gathering data for the last Canadian election, since strategic voting was so widespread and successful, but any data we collect will likely be disconnected from social network structure. Maybe some aggregate network properties could be inferred (ex: based on region), but it would be a multi-layered problem. |
_unix.55069 | I want to accumulate the line counts of a number of files contained in a folder. I have written the following script:let a=0let num=0for i in folder/*do num=`cat $i | wc -l` a=$a+$numdoneecho $aWhat I am getting at the end of the script is 123+234+432+... and not the result of the arithmetic operation of addition. | How to add arithmetic variables in a script | shell;shell script;arithmetic | Your arithmetic evaluation syntax is wrong. Use any of the following (the first is extremely portable but slow, the second is POSIX and portable except to the Bourne shell and earlier versions of the Almquist shell, the last three require ksh, bash or zsh):a=`expr $a + $num`a=$(($a+$num))((a=a+num))let a=a+num((a+=num))Or you can just skip the entire for loop and just do:wc -l folder/*Or, if you only want the total:cat folder/* | wc -lOr with zsh and its mult_ios option:wc -l < folder/*
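For completeness, here is the original loop with the arithmetic fixed, a sketch using the POSIX form (wc -l < "$i" also avoids the file name appearing in the output):
a=0
for i in folder/*
do
    num=$(wc -l < "$i")
    a=$((a + num))
done
echo "$a"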
_unix.198283 | How do I use wget to download files to a specific location, without creating directories, and overwrite the original every time.I've tried using the -r -P and nc options in combination but this resulted in several undesirable effects.wget -P ./temp -r https://raw.githubusercontent.com/octocat/Spoon-Knife/master/README.md -ndThe above downloads README.md to the /temp directory in the current folder, but preserves the original README.md and numbers all subsequent README.md files.wget -r -P ./temp https://raw.githubusercontent.com/octocat/Spoon-Knife/master/README.md -ndAbove command does the same thing.wget -P ./temp -nc https://raw.githubusercontent.com/octocat/Spoon-Knife/master/README.md -rWith this one, the file is replaced but directories are created. | Using wget, how to download to a specific location, without creating folders, and always overwrite original files | files;terminal;wget | You can achieve the required result in wget (or curl) by specifying an output document.With wget:wget https://raw.git...etc.../README.md -O ./temp/README.mdWith curl:curl https://raw.git...etc.../master/README.md > ./temp/README.md |
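If the destination directory may not exist yet, curl can also create it for you (a small addition to the above; --create-dirs applies to the -o path):
curl --create-dirs -o ./temp/README.md https://raw.githubusercontent.com/octocat/Spoon-Knife/master/README.md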
_datascience.548 | I am often building a model (classification or regression) where I have some predictor variables that are sequences, and I have been trying to find technique recommendations for summarizing them in the best way possible for inclusion as predictors in the model.As a concrete example, say a model is being built to predict if a customer will leave the company in the next 90 days (anytime between t and t+90; thus a binary outcome). One of the predictors available is the level of the customer's financial balance for periods t_0 to t-1. Maybe this represents monthly observations for the prior 12 months (i.e. 12 measurements). I am looking for ways to construct features from this series. I use descriptive statistics of each customer's series such as the mean, high, low, std dev., and fit an OLS regression to get the trend. Are there other methods of calculating features? Other measures of change or volatility? ADD:As mentioned in a response below, I also considered (but forgot to add here) using Dynamic Time Warping (DTW) and then hierarchical clustering on the resulting distance matrix - creating some number of clusters and then using the cluster membership as a feature. Scoring test data would likely have to follow a process where the DTW was done on new cases and the cluster centroids - matching the new data series to their closest centroids... | Feature Extraction Technique - Summarizing a Sequence of Data | machine learning;feature selection;time series | null
_unix.9162 | I have somehow managed to get my keyboard only to work and be able to select with the mouse cursor, only when the Shift and Control keys are held.How can I undo this?I have tried System -> Preferences -> Keyboard but nothing appears there to resolve this.Edit:I can only enter text or select anything (even opening a new tab in web browser), only if the shift and control keys are held. once the selection is active then I can type normally, until the next time I need to type, etc..Below as recommended:KeyRelease event, serial 36, synthetic NO, window 0x5a00001,root 0x1ad, subw 0x0, time 2238759, (-435,502), root:(312,552),state 0x11, keycode 38 (keysym 0x41, A), same_screen YES,XLookupString gives 1 bytes: (41) AXFilterEvent returns: FalseEdit:Pressing (and holding) the Alt or Windows key has a similar action as to holding the Shift and Control keys (Shift and Control are held simultaneously).shift Shift_L (0x32), Shift_R (0x3e)lock Caps_Lock (0x42)control Control_L (0x25), Control_R (0x69)mod1 Alt_L (0x40), Alt_R (0x6c), Meta_L (0xcd)mod2 Num_Lock (0x4d)mod3 mod4 Super_L (0x85), Super_R (0x86), Super_L (0xce), Hyper_L (0xcf)mod5 ISO_Level3_Shift (0x5c), Mode_switch (0xcb) | ungrabbing keys | xorg;keyboard | null |
_datascience.6787 | Recently a friend of mine was asked in an interview whether the decision tree algorithm is a linear or a nonlinear algorithm. I tried to look for answers to this question but couldn't find any satisfactory explanation. Can anyone answer and explain the solution to this question? Also, what are some other examples of nonlinear machine learning algorithms? | Is the decision tree algorithm linear or nonlinear? | machine learning;algorithms;decision trees | null
_cstheory.1940 | What are the most practically efficient algorithms for multiplying two very sparse boolean matrices (say, N=200 and there are just some 100-200 non-zero elements)?Actually, I have the advantage that when I'm multiplying A by B, the B's are predefined and I can do arbitrarily complex preprocessing on them. I also know that the results of products are always as sparse as the original matrices.The rather naive algorithm (scan A by rows; for each 1 bit of the A-row, OR the result with the corresponding row of B) turns out very efficient and requires only a couple thousand of CPU instructions to compute a single product, so it won't be easy to surpass it, and it's only surpassable by a constant factor (because there are hundreds of one bits in the result). But I'm not losing hope and asking the community for help :) | Fast sparse boolean matrix product with possible preprocessing | ds.algorithms;matrix product;sparse matrix;boolean matrix;implementation | null |
_codereview.140088 | I have put together a stored procedure to load and parse the Stack Exchange Data Dump into a relational database (akin to Stack Exchange Data Explorer). Each site has 8 XML files like these:The stored procedure below performs the following steps:Fetch the Badges.xml file for the target site from the local file systemLoad the XML document into the databaseParse the XML document <row> notes and populate the destination table with each attribute in its own columnI wrote this for Badges data, but I have to apply the same logic for all 8 types of XML data, so I would like to make this procedure as good as possible before I apply its model to processing the other XML files.The (very simple) structure of the Badges.xml files is as follows:<?xml version=1.0 encoding=utf-8?><badges> <row Id=1 UserId=2 Name=Autobiographer Date=2011-01-19T20:52:02.027 Class=3 TagBased=False /> <row Id=2 UserId=4 Name=Autobiographer Date=2011-01-19T20:57:02.100 Class=3 TagBased=False /> <row Id=3 UserId=6 Name=Autobiographer Date=2011-01-19T20:57:02.133 Class=3 TagBased=False /> ... <row Id=176685 UserId=99330 Name=Supporter Date=2016-03-06T03:34:14.827 Class=3 TagBased=False /></badges>TablesThe following 3 tables are used in conjunction with the procedure:CREATE TABLE RawDataXml.Badges ( SiteId UNIQUEIDENTIFIER PRIMARY KEY, ApiSiteParameter NVARCHAR(256) NOT NULL, RawDataXml XML NULL, XmlDataSize BIGINT NULL, Inserted DATETIME2 DEFAULT GETDATE(), CONSTRAINT fk_Badges_SiteId FOREIGN KEY (SiteId) REFERENCES CleanData.Sites(Id));CREATE TABLE CleanData.Badges ( SiteId UNIQUEIDENTIFIER NOT NULL, ApiSiteParameter NVARCHAR(256) NOT NULL, RowId INT, UserId INT, Name NVARCHAR(256), CreationDate DATETIME2, Class INT, TagBased BIT, Inserted DATETIME2 DEFAULT GETDATE(), CONSTRAINT fk_Badges_SiteId FOREIGN KEY (SiteId) REFERENCES CleanData.Sites(Id));CREATE TABLE RawDataXml.Globals ( Parameter NVARCHAR(256) NOT NULL, Value NVARCHAR(256) NOT NULL, Inserted DATETIME2 DEFAULT GETDATE());The RawDataXml.Globals table contains values such as these. The TargetSite values are meant to be used to run the procedure with a cursor iterating each of the sites (will show an example at the end).Parameter ValueSourcePath D:\Downloads\stackexchange\TargetSite codereview.stackexchange.comTargetSite meta.codereview.stackexchange.comTargetSite stats.stackexchange.comTargetSite meta.stats.stackexchange.comThe procedureThis is the CREATE PROCEDURE statement. 
I added comments throughout to hopefully make it easy to understand and maintain.IF EXISTS ( SELECT 1 FROM INFORMATION_SCHEMA.ROUTINES WHERE SPECIFIC_SCHEMA = 'RawDataXml' AND SPECIFIC_NAME = 'usp_LoadBadgesXml')DROP PROCEDURE RawDataXml.usp_LoadBadgesXml;GOCREATE PROCEDURE RawDataXml.usp_LoadBadgesXml @SiteDirectory NVARCHAR(256), -- Delete the loaded XML file after processing if True/1 (default True): @DeleteXmlRawDataAfterProcessing BIT = 1, -- Display/Return results to caller if @ReturnRows is set to True (default False) @ReturnRows BIT = 0AS BEGIN SET NOCOUNT ON; -- Fetch global source path parameter: DECLARE @SourcePath NVARCHAR(256); DECLARE @bslash CHAR = CHAR(92); SET @SourcePath = (SELECT Value FROM RawDataXml.Globals WHERE Parameter = 'SourcePath'); -- Make sure path ends with backslash (ASCII char 92) IF(SELECT RIGHT(@SourcePath, 1)) <> @bslash SET @SourcePath += @bslash; -- Fetch site identifiers based on @SiteDirectory parameter: DECLARE @SiteId UNIQUEIDENTIFIER; DECLARE @ApiSiteParameter NVARCHAR(256); SELECT @SiteId = Id, @ApiSiteParameter = ApiSiteParameter FROM CleanData.Sites WHERE SiteDirectory = @SiteDirectory; -- Throw error if @SiteDirectory parameter does not match an existing site: IF @SiteId IS NULL OR @ApiSiteParameter IS NULL BEGIN DECLARE @ErrMsg NVARCHAR(512) = 'The input site directory ' + @SiteDirectory + ' could not be matched to an existing site. Please verify and try again.'; RAISERROR(@ErrMsg, 11, 1); END -- Delete any previous XML data that may be present for the site: DELETE FROM RawDataXml.Badges WHERE SiteId = @SiteId; /** XML FILE HANDLING ** This section loads the XML file from the file system into a table. If @DeleteXmlRawDataAfterProcessing is set to 1 (default) this XML data will be deleted from the database (but not from the file system) after the data is parsed into a relational table (below). *****/ DECLARE @FilePath NVARCHAR(512) = @SourcePath + @SiteDirectory + @bslash + 'Badges.xml'; DECLARE @SQL_OPENROWSET_QUERY NVARCHAR(1024); -- Dynamic SQL is used here because OPENROWSET will only accept a string literal as argument for the file path. SET @SQL_OPENROWSET_QUERY = 'INSERT INTO RawDataXml.Badges (SiteId, ApiSiteParameter, RawDataXml)' + CHAR(10) + 'SELECT ' + QUOTENAME(@SiteId, '''') + ', ' + CHAR(10) + QUOTENAME(@ApiSiteParameter, '''') + ', ' + CHAR(10) + 'CONVERT(XML, BulkColumn) AS BulkColumn' + CHAR(10) + 'FROM OPENROWSET(BULK ' + QUOTENAME(@FilePath, '''') + ', SINGLE_BLOB) AS x;' PRINT CONVERT(NVARCHAR(256), GETDATE(), 21) + ' Processing ' + @FilePath; -- Execute the dynamic query to load XML into the table: EXECUTE sp_executesql @SQL_OPENROWSET_QUERY; /** XML DATA PARSING & PROCESSING ** This section parses the loaded XML document into columns and puts those in CleanData.Badges table. If previous data existed, that data is deleted prior to adding new data, to avoid duplication of rows and ensure a fresh set of data. 
*****/ -- Clear any existing data: DELETE FROM CleanData.Badges WHERE SiteId = @SiteId; -- Prepare XML document for parsing: DECLARE @XML AS XML; DECLARE @Doc AS INT; SELECT @XML = RawDataXml FROM RawDataXml.Badges WHERE SiteId = @SiteId; EXEC sp_xml_preparedocument @Doc OUTPUT, @XML; -- Parse XML <row> node attributes and insert them into their respective columns: INSERT INTO CleanData.Badges ( SiteId, ApiSiteParameter, RowId, UserId, Name, CreationDate, Class, TagBased ) SELECT @SiteId, @ApiSiteParameter, Id, UserId, Name, [Date], Class, CASE WHEN LOWER(TagBased) = 'true' THEN 1 ELSE 0 END AS TagBased FROM OPENXML(@Doc, 'badges/row') WITH ( Id INT '@Id', UserId INT '@UserId', Name NVARCHAR(256) '@Name', [Date] DATETIME2 '@Date', Class INT '@Class', TagBased NVARCHAR(256) '@TagBased' ); EXEC sp_xml_removedocument @Doc; -- Delete the loaded XML file after processing if True/1 (default True): IF @DeleteXmlRawDataAfterProcessing = 1 BEGIN DELETE FROM RawDataXml.Badges WHERE SiteId = @SiteId; END -- Display/Return results to caller if @ReturnRows is set to True (default False) IF @ReturnRows = 1 BEGIN SELECT * FROM CleanData.Badges WHERE SiteId = @SiteId ORDER BY CreationDate ASC; ENDENDGOExample run with statsHere is an example run for the 4 sites currently in the Globals table. Note that this is a post-compile run, i.e., it was ran before this run to calculate the execution plan.DECLARE @Start DATETIME2 = GETDATE();DECLARE @RowsProcessed INT;DECLARE @Now DATETIME2;DECLARE @CurrentSite NVARCHAR(256);DECLARE _SitesToProcess CURSOR FOR SELECT Value FROM RawDataXml.Globals WHERE Parameter = 'TargetSite';OPEN _SitesToProcess;FETCH NEXT FROM _SitesToProcess INTO @CurrentSite;WHILE @@FETCH_STATUS = 0BEGIN SET @Now = GETDATE(); EXECUTE RawDataXml.usp_LoadBadgesXml @CurrentSite; PRINT 'Processing time: ' + CAST(DATEDIFF(MILLISECOND, @Now, GETDATE()) AS VARCHAR(20)) +' ms.'; FETCH NEXT FROM _SitesToProcess INTO @CurrentSite;ENDCLOSE _SitesToProcess;DEALLOCATE _SitesToProcess;PRINT 'TOTAL Processing time: ' + CAST(DATEDIFF(MILLISECOND, @Start, GETDATE()) AS VARCHAR(20)) +' ms.';SELECT * FROM CleanData.Badges ORDER BY CreationDate DESC;Which prints the following to console, and finally displays the rows parsed from the XML document.2016-08-31 00:05:04.983 Processing D:\Downloads\stackexchange\codereview.stackexchange.com\Badges.xmlProcessing time: 8060 ms.2016-08-31 00:05:13.033 Processing D:\Downloads\stackexchange\meta.codereview.stackexchange.com\Badges.xmlProcessing time: 1517 ms.2016-08-31 00:05:14.550 Processing D:\Downloads\stackexchange\stats.stackexchange.com\Badges.xmlProcessing time: 8120 ms.2016-08-31 00:05:22.670 Processing D:\Downloads\stackexchange\meta.stats.stackexchange.com\Badges.xmlProcessing time: 1740 ms.TOTAL Processing time: 19437 ms.(345368 row(s) affected)Finally, here is a screenshot of the nontrivial parts of the actual execution plan: | Load and parse Stack Exchange data dump XML into DB table | performance;sql;sql server;xml;stackexchange | ReadabilityWhitespaceMulti-line statements are written without any indentation, which makes them a bit harder to read. Especially if they are also not separated from each other by extra vertical whitespace. 
This had me squint a bit on-- Prepare XML document for parsing:DECLARE @XML AS XML;DECLARE @Doc AS INT;SELECT @XML = RawDataXmlFROM RawDataXml.BadgesWHERE SiteId = @SiteId;EXEC sp_xml_preparedocument @Doc OUTPUT, @XML;which I would write like-- Prepare XML document for parsing:DECLARE @XML AS XML;DECLARE @Doc AS INT;SELECT @XML = RawDataXml FROM RawDataXml.Badges WHERE SiteId = @SiteId;EXEC sp_xml_preparedocument @Doc OUTPUT, @XML;making sure that only the first line of each statement is fully left-aligned.Dynamic SQLDynamic SQL always adds some readability issues, as most editors won't syntax highlight it, and readers will often have to count single quotes to make sure they are escaped correctly. But it is not necessary to split strings in order to use newlines with CHAR(10). You could rewrite the string like so:SET @SQL_OPENROWSET_QUERY = 'INSERT INTO RawDataXml.Badges (SiteId, ApiSiteParameter, RawDataXml) SELECT ' + QUOTENAME(@SiteId, '''') + ' , ' + QUOTENAME(@ApiSiteParameter, '''') + ' , CONVERT(XML, BulkColumn) AS BulkColumn FROM OPENROWSET( BULK ' + QUOTENAME(@FilePath, '''') + ' , SINGLE_BLOB ) AS x;'This will remove many of the distracting +es, CHAR(10) calls and quotes, and bring the SQL code back to a more readable formatting as well.PerformanceAdd primary keys or index to all tablesThe CleanData.Badges table is actually not a table but a heap. Because it doesn't have a primary key or other clustered index, SQL Server will always have to consult all full rows when operating on the data. The execution plan screenshot actually has a suggestion for this:Missing Index (Impact 31.7402): CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname, >] ON [CleanData].[Badges] ([SiteId])Which means that it would help if there were an index on the SiteId column. Indices only work with tables, not heaps, so you would also need to add a primary key (which would probably be (SiteId, RowId), guessing from the example data).Using xml.nodes() instead of OPENXMLNote: This is actually only a suggestion based on a hunch I have, based on what I see in the execution plan, not because I know this to have better performance. I do not have a sql server instance at hand to compare performance.The execution plan shows a 57% cost for Remote Scan, which is basically have a remote server give me all their data. Performance of that remote server is a black box, as far as the execution plan is concerned.A test like used in this answer may show that using @XML.nodes('/badges/row') may not only be less of a hassle with sp_xml_preparedocument, but may also perform better. The code would look like:-- Prepare XML document for parsing:DECLARE @XML AS XML;SELECT @XML = RawDataXml FROM RawDataXml.Badges WHERE SiteId = @SiteId;-- Parse XML <row> node attributes and insert them into their respective columns:INSERT INTO CleanData.Badges ( SiteId, ApiSiteParameter, RowId, UserId, Name, CreationDate, Class, TagBased ) SELECT @SiteId, @ApiSiteParameter, x.r.value('@Id','INT') as Id, x.r.value('@UserId', 'INT') as UserId, x.r.value('@Name', 'NVARCHAR(256)') as Name, x.r.value('@Date', 'DATETIME2') as [Date], x.r.value('@Class', 'INT') as Class, CASE WHEN LOWER(x.r.value('@TagBased', 'NVARCHAR(256)')) = 'true' THEN 1 ELSE 0 END AS TagBased FROM @XML.nodes('/badges/row') as x(r);This answer may also give pointers on how to select values from multiple xml documents combined, allowing you to process RawDataXml.Badges entries from multiple sites as a set. |
_codereview.171456 | I often have multiple versions of the same reports that accumulate in some directory. I'd like to automate the process of moving the old versions of each report into an archive.Sometimes these report titles are formatted so that the date is at the end of the file name (before the extension), but the date format can vary from report to report. For example: Tax Report 5.1.17.xlsx Tax Report 12.1.17.xlsxCompliance Report 5-1-2017.xlsxCompliance Report 6-1-2017.xlsxInsurance Report (May 2017).pdfInsurance Report (June 2017).pdf Each report should be handled separately (ie I'd like to keep the newest version of each report that I specify) based on a partial string identifier. The dates will be extracted using InStrRev and start/end indicators (by default, is the start indicator and . is the end indicator).So if all of the files above were in the same directory and I ran the code below, the files with May dates would be archived, and the others would remain.Dim sourceDir As StringDim backupDir As StringsourceDir = C:\Users\johndoe\Reports\backupDir = C:\Users\johndoe\Reports\Archive\Call archiveFiles(sourceDir, backupDir, Array(Tax*, Comp*), True)Call archiveFiles(sourceDir, backupDir, Ins*, True, (, ))Other times the report titles might not include dates, or the dates may be in a non-standard format. So I've included the option to determine the newest report based on date created or date modified (If you try to use the date string version and the procedure can't find any file names with valid dates, it won't move any of the files).I'm open to any feedback that might improve speed/stability/flexibility/readability/etc. I've tried to account for the obvious potential errors (trying to move an open file, trying to move a file to a directory containing an identically named file, etc.) but I may have missed some.Option ExplicitSub archiveFiles(sourcePath As String, backupPath As String, ByVal toMove As Variant, Optional leaveNewest As Boolean = False, Optional ByVal dateType As Variant = 1, Optional startIndicator As String = , Optional endIndicator As String = .)'Moves files meeting name criteria (toMove) from one path (sourcePath) to another (backupPath)'If a file already exists in the backup folder, version number is added to file name'Optionally leaves the newest file, which can be determined based on (by dateType)' - Date within file name (String or 1)' - Date file created (Created or 2)' - Date file last modified (Modified or 3) If Not IsArray(toMove) Then Dim tempStr As String tempStr = toMove ReDim toMove(1 To 1) As String toMove(1) = tempStr End If Dim i As Long For i = LBound(toMove) To UBound(toMove) If leaveNewest Then Dim keepName As String keepName = getNewestFile(sourcePath, CStr(toMove(i)), dateType, startIndicator, endIndicator) End If Dim FSO As Object Set FSO = CreateObject(Scripting.Filesystemobject) Dim f As Object For Each f In FSO.GetFolder(sourcePath).Files If f.Name Like CStr(toMove(i)) Then Dim goAhead As Boolean If Not leaveNewest Then goAhead = True ElseIf f.Name = keepName Then goAhead = False ElseIf keepName = Then goAhead = False Else goAhead = True End If If goAhead Then If Not isFileOpen(f) Then Dim j As Long Dim fMoved As Boolean j = 1 fMoved = False Do Until fMoved If Dir(backupPath & f.Name) <> Then Dim fileExt As String fileExt = Right(f.Name, Len(f.Name) - InStrRev(f.Name, .) + 1) If j = 1 Then f.Name = Left(f.Name, InStrRev(f.Name, .) - 1) & v1 & fileExt Else f.Name = Left(f.Name, InStrRev(f.Name, .) 
- Len(CStr(j)) - 1) & j & fileExt End If j = j + 1 Else f.Move backupPath fMoved = True End If Loop End If End If End If Next NextEnd SubFunction getNewestFile(strDir As String, Optional strFileName As String = *, Optional ByVal dateType As Variant = 1, Optional startIndicator As String = , Optional endIndicator As String = .) As String'Returns the name of the newest file in a directory (strDir) with a given filename (strFileName)'Determines newest file using dateType, which can be:' - String or 1 (date within file name),' - Created or 2 (date file created), or' - Modified or 3 (date file last modified) If Not IsNumeric(dateType) Then Select Case dateType Case Modified dateType = 3 Case Created dateType = 2 Case String dateType = 1 Case Else MsgBox Invalid date type getNewestFile = End Select ElseIf dateType < 1 Or dateType > 3 Then MsgBox Invalid date type getNewestFile = End If Dim tempName As String Dim tempDate As Date tempName = tempDate = DateSerial(1900, 1, 1) Dim FSO As Object Set FSO = CreateObject(Scripting.Filesystemobject) Dim f As Object For Each f In FSO.GetFolder(strDir).Files If f.Name Like strFileName Then If dateType = 3 Then If f.DateLastModified > tempDate Then tempDate = f.DateLastModified tempName = f.Name End If ElseIf dateType = 2 Then If f.DateCreated > tempDate Then tempDate = f.DateCreated tempName = f.Name End If Else Dim tempStart As String Dim tempEnd As String Dim tempStr As String tempStart = InStrRev(f.Name, startIndicator) + 1 tempEnd = InStrRev(f.Name, endIndicator) - 1 tempStr = Replace(Mid(f.Name, tempStart, tempEnd - tempStart + 1), ., /) If tempStart > 0 And tempStart < tempEnd Then If IsDate(tempStr) Then If CDate(tempStr) > tempDate Then tempDate = CDate(tempStr) tempName = f.Name End If End If End If End If End If Next getNewestFileName = tempNameEnd FunctionFunction isFileOpen(ByVal f As Variant) As Boolean'Determines whether a file (f) is open and returns true or false'Parameter f can be passed as a File object or as a complete file path string Dim errNum As Long Dim fileNum As Long fileNum = FreeFile() On Error Resume Next If IsObject(f) Then Open f.Path For Input Lock Read As #fileNum Else Open f For Input Lock Read As #fileNum End If Close fileNum errNum = Err On Error GoTo 0 Select Case errNum Case 0 isFileOpen = False Case 70 isFileOpen = True Case Else Error errNum End SelectEnd Function | Move files to archive while keeping newest file (based on date string in filename, date created, or date modified) | vba | null |
_vi.6283 | The HTML that's being returned by this PHP function doesn't have any syntax highlighting.If I delete the ' on line 13 the HTML highlighting works (but the PHP function breaks); with it, the HTML highlighting does not work.How do I get the HTML to have its proper syntax highlighting inside of this function?Do I want to be doing something like this? I'm having a hard time figuring out what to make of that and how I would adapt it for my situation, let alone whether or not that's the right approach.It would be great if Vim could automatically recognize HTML inside a PHP file without having to type any hard-to-remember commands. | Why doesn't Vim recognize HTML inside PHP? | syntax highlighting;filetype php | From :help ft-php-syntax:There are the following options for the php syntax highlighting.[..]Enable HTML syntax highlighting inside strings: let php_htmlInStrings = 1You can add that to your vimrc.
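For instance, a minimal ~/.vimrc sketch (the option must be set before the PHP syntax file is loaded):
" Treat string contents as HTML inside PHP files
let php_htmlInStrings = 1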
_webapps.12426 | I am using the IP address to detect countries, and browser headers to detect language. But when it comes to having the user override these, it would be good interface design to:let them select neighbouring countries on top of the listlet them select languages by clicking on spoken language lists on countriesIn order to avoid very long or multiple-step (continent, country) dropdowns.I know you can find ISO lists of countries, currencies, currency symbols and notation, but has this all been put together in some kind of package with an API? | Are there datasets/frameworks available that map countries to neighbouring ones, spoken languages, currencies, notations? | localization | null
_cs.73949 | Look at this solution:Is the lower bound $m\log n$ because we are looking at the lower bound for union by rank only? If we make $n$ MAKE-SET operations, then there would be $\log n$ UNION operations, and then $m - 2n + 1$ FIND-SET operations. The lower bound seems larger to me but what am I missing? | Why is the lower bound $m \log n$ for this make-set, union and find-set sequence? | algorithm analysis;data structures;lower bounds;union find | You are asking two questions.Is the lower bound only for this specific implementation?Yes. If you also use path compression, the running time will be $o(m\log n)$.The lower bound seems too large to me.Your mistake is that you assume that there are only $\log n$ UNION operations, whereas there are $2^{\lfloor \log_2 n \rfloor}-1 = \Theta(n)$ of them, as the solution you quote indicates.
_cs.47448 | If we take Solomonoff's prior $m$, defined here and normalize it we get a probability mass function on all finite words.But, the pmf isn't completely determined until we fix a universal Turing machine (UTM) $U$.Say $m_U$ is the normalized prior with respect to a UTM $U$.Is there a $U$ such that $m_U$ has maximum entropy? | Maximum entropy probability distribution among Solomonoff priors | turing machines;probability theory;entropy | null |
_codereview.61334 | I'm reading through Write Yourself a Scheme after finishing going through Learn You a Haskell. I attempted one of the early exercises: writing a program to get an operator and two numbers and do a computation. It works fine.Things I would like to know:How should I structure a program, in terms of building larger functions out of smaller functions? Are there redundancies in my code?What's the most effective way to use the Maybe type to indicate failure when main is of type IO ()? Is my checkSuccess an appropriate way to do this?module Main whereimport System.Environment-- parses the first arithmetic operator in a stringparseOperator :: String -> Maybe CharparseOperator [] = NothingparseOperator (x:xs) | x == '*' = Just '*' | x == '/' = Just '/' | x == '+' = Just '+' | x == '-' = Just '-' | otherwise = parseOperator xsparseNum :: String -> Maybe DoubleparseNum x = let parsed = reads x :: [(Double,String)] in case parsed of [(a,)] -> Just a [(_,_)] -> Nothing [] -> Nothingcompute :: Maybe Char -> Maybe Double -> Maybe Double -> Maybe Doublecompute Nothing _ _ = Nothingcompute _ Nothing _ = Nothingcompute _ _ Nothing = Nothingcompute (Just c) (Just x) (Just y) | c == '*' = Just $ x * y | c == '/' = Just $ x / y | c == '+' = Just $ x + y | c == '-' = Just $ x - ycheckSuccess :: Maybe Double -> IO ()checkSuccess Nothing = putStrLn Failed. Check correctness of inputscheckSuccess (Just r) = putStrLn $ Result: ++ (show r)runSequence :: String -> String -> String -> IO ()runSequence os xs ys = checkSuccess $ compute (parseOperator os) (parseNum xs) (parseNum ys)main = do putStrLn Enter operator: * / + - operator <- getLine putStrLn Enter first number first <- getLine putStrLn Enter second number second <- getLine runSequence operator first second | Simple Haskell calculator, using Maybe for error handling | beginner;haskell;functional programming;calculator | null |
_reverseengineering.3480 | Is understanding of Cryptography really important for a reverse engineer? Thanks. | How much Cryptography knowledge is important for reverse engineering? | cryptography | It is increasingly important for practical reverse-engineering. It is now present in malware; the examples of Stuxnet, Flame and others are quite typical of the usage of cryptography in such contexts. And it is also present in most protection schemes, because a lot of techniques use cryptography to protect the code and data. Just consider software such as Skype or iTunes, which rely on cryptography to protect their protocol or to hide information in the executable.So, indeed, it would really be a problem if you do not understand at least a bit of cryptography when reversing. And, by understanding cryptography, I mean at least being able to recognize the code of classical cipher algorithms at assembly level, such as DES, AES, SHA-1, SHA-3, and so on. And also to know classical flaws and cryptanalysis techniques for weak crypto (such as frequency analysis).A good way to learn about the cryptography needed for reverse-engineering would be to implement (with the help of existing code found on the Net) your own cryptographic library with classical ciphers and look at the generated assembly. If you do not have the patience to do so, just look at the crypto-lib of OpenSSL, get it compiled and look at the code and the assembly.
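As a concrete illustration of recognizing classical ciphers in a binary, here is a minimal sketch of the constant-scanning idea used by FindCrypt-style tools (my own example, not from the answer): it searches a file for two well-known magic values, the start of the AES S-box and the SHA-1 initial state:
import struct
import sys

SIGNATURES = {
    "AES S-box": bytes([0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5]),
    "SHA-1 initial state (little-endian)": struct.pack(
        "<5I", 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0),
}

data = open(sys.argv[1], "rb").read()
for name, sig in SIGNATURES.items():
    offset = data.find(sig)
    if offset != -1:
        print("%s found at offset 0x%x" % (name, offset))
Real tools carry many more signatures (and check both byte orders), but the principle is the same: table-driven ciphers and hash functions leave recognizable constants in the binary.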
_softwareengineering.273673 | Say you have some basic code where similar operations will take place in nearby lexical scopes. Take for example some simple pseudo code:variable = foo# Do something with variableif (True) { variable = bar # Do something else with variable}for i in range 1..100 { variable = i # Do another thing with variable}Say that in each scope, the variable is used for a distinct, but similar task and thus the name variable is appropriate in each case. What is the best practice in this case for naming? Should you use the name variable each time? Increment the name such as variable1, variable2, etc.? Or something else entirely? | Is it bad form to use the same variable name in different scopes? | programming practices;naming;variables;scope | If the variable in question represents the same thing for both functions, I can't see why it would be a problem. If you're arbitrarily using variable to mean any variable within a function that can do anything then yes, it is a problem. Name your variables in the context to which they are used. |
_webapps.33320 | I have a webpage where i am having a textbox preferbaly for storing email address. I need to create an email intake database where i need a simple database built to store emails of users signing up . One of the ways i am thinking this is using an excel document on Google docs other being a standalone DB.Can anyone share links/pointers/tutorials regarding same. | Store form data to Google Docs | email;google spreadsheets;database | Try using the Forms functionality that is already provided by Google: http://support.google.com/docs/bin/answer.py?hl=en&answer=87809This help article shows you how to set up a form, and the form responses go to a Google Spreadsheet with all of the data. You could create a simple form that asks users to enter an email address and hit SubmitThe easiest way to create a form is:Create a Google SpreadsheetClick Form > Create FormAdd your question(s) and edit your question typesOnce the form is created:Click Form > Embed Form in a WebpageYou can then use the <iframe> to embed the form onto your site. |
_unix.331693 | I have a bunch of services (say C0, C1, … C9) that should only start after a service S has completed its initialization and is fully running and ready for the other services. How do I arrange that with systemd?In Ordering services with path activation and target in systemd it is assumed that service S has a mechanism for writing out some sort of flag file. Assume here, in contrast, that I have full control over the program that service S runs, and can add systemd mechanisms into it if needs be. | How can a systemd service flag that is is ready, so that other services can wait for it to be ready before they start? | systemd | null |
_softwareengineering.146009 | Is there a language that is capable of developing apps to cross platform OSs (win,*nix) and mobile apps (IOS, android) ..I'm a pro web developer but want to explore more environment to deploy my code into...Python ? Ruby ? | A single language to learn to develop desktop and mobile phone applications? | programming languages;cross platform | null |
_codereview.129191 | We want to refactor two methods that are exactly the same, except for one difference: one takes an org.hibernate.Criteria and the other org.hibernate.criterion.DetachedCriteria. These two do implement a mutual interface (org.hibernate.criterion.CriteriaSpecification), but this only contains some final static fields, and no methods.Here are the methods (removed comments and javadoc for compactness):public static DetachedCriteria applyRestrictionsToCriteria(final DetachedCriteria criteria, final Vector<RestrictionsHelper> filter) { final Map<String, DetachedCriteria> subCriteriaMap = new HashMap<>(); if (filter != null) { final Iterator<RestrictionsHelper> itp = filter.iterator(); while (itp.hasNext()) { final RestrictionsHelper restric = itp.next(); if (restric.getClassname().equals()) { final Iterator<Criterion> ir = restric.getCriterions().iterator(); while (ir.hasNext()) { final Criterion criterion = ir.next(); criteria.add(criterion); if (criterion.toString().contains(Happening_fk)) { criteria.setFetchMode(Happeningdetails, FetchMode.JOIN); } } final Iterator<Order> or = restric.getOrders().iterator(); while (or.hasNext()) { criteria.addOrder(or.next()); } } else { final String[] buff = restric.getClassname().split(\\.); DetachedCriteria subcriteria = criteria; String path = ; for (final String element : buff) { final String[] name = getNameAndAlias(element); path += name[0]; final DetachedCriteria exsubcriteria = subCriteriaMap.get(path); if (exsubcriteria == null) { subcriteria = subcriteria.createCriteria(name[0], name[1], CriteriaSpecification.LEFT_JOIN); subCriteriaMap.put(path, subcriteria); } else { subcriteria = exsubcriteria; } path += .; } final Iterator<Criterion> ir = restric.getCriterions().iterator(); while (ir.hasNext()) { subcriteria.add(ir.next()); } final Iterator<Order> or = restric.getOrders().iterator(); while (or.hasNext()) { subcriteria.addOrder(or.next()); } } } } return criteria;}andpublic static Criteria applyRestrictionsToCriteria(final Vector<RestrictionsHelper> filter, final Criteria criteria) { final Map<String, Criteria> subCriteriaMap = new HashMap<String, Criteria>(); if (filter != null) { final Iterator<RestrictionsHelper> itp = filter.iterator(); while (itp.hasNext()) { final RestrictionsHelper restric = itp.next(); if (restric.getClassname().equals()) { final Iterator<Criterion> ir = restric.getCriterions().iterator(); while (ir.hasNext()) { final Criterion criterion = ir.next(); criteria.add(criterion); if (criterion.toString().contains(Happening_fk)) { criteria.setFetchMode(Happeningdetails, FetchMode.JOIN); } } final Iterator<Order> or = restric.getOrders().iterator(); while (or.hasNext()) { criteria.addOrder(or.next()); } } else { final String[] buff = restric.getClassname().split(\\.); Criteria subcriteria = criteria; String path = ; for (final String element : buff) { final String[] name = getNameAndAlias(element); path += name[0]; final Criteria exsubcriteria = subCriteriaMap.get(path); if (exsubcriteria == null) { subcriteria = subcriteria.createCriteria(name[0], name[1], CriteriaSpecification.LEFT_JOIN); subCriteriaMap.put(path, subcriteria); } else { subcriteria = exsubcriteria; } path += .; } final Iterator<Criterion> ir = restric.getCriterions().iterator(); while (ir.hasNext()) { subcriteria.add(ir.next()); } final Iterator<Order> or = restric.getOrders().iterator(); while (or.hasNext()) { subcriteria.addOrder(or.next()); } } } } return criteria;}Since both methods do the same, we obviously want to refactor it into one 
method.Some things we've tried without resultCreating an interface (GenericCriteria) and two subclasses:public class OwnCriteria extends CriteriaImpl implements GenericCriteriaandpublic class OwnDetachedCriteria extends DetachedCriteria implements GenericCriteriaand use that interface in the method.Problem:We use Criteria#createCriteria(String, String, int) which returns a new Subcriteria(this, String, String, int);. Because Subcriteria is a final class, we can't create our own sub-class and we also can't make a convert-constructor in our Own classes because there aren't getters for all required constructor-parameters.Directly make an anonymous class from the interface (i.e. new BagGenericCriteria(){ @Override ... }Possible work-around that will most likely work, but is rather ugly:Using the shared interface (org.hibernate.criterion.CriteriaSpecification) as parameter and then use multiple instanceof checks for one or the other.NotesWe use Java 7 (so we can't use Java 8 features - for now)We use hibernate version 3.3.2.GA (so we can't use hibernate 4+ - for now)Some other parts of the code in the methods can be refactored as well, but right now we just want to have two exact identical methods (apart from the parameter used) refactored into one. | Overloaded applyRestrictionsToCriteria() methods | java;object oriented;interface;hibernate;overloading | Vector vs List/ArrayListVector was retrofitted to be part of the Java Collections framework, and if you do not need the synchronization feature, you should update to the ArrayList class. In fact, you should opt for the List interface, so that callers of these methods can eventually be refactored to pass in other List implementations, like ArrayList, and these two methods only know they are dealing with Lists.Indentationpublic static DetachedCriteria applyRestrictionsToCriteria(DetachedCriteria criteria, Vector<RestrictionsHelper> filter) { final Map<String, DetachedCriteria> subCriteriaMap = new HashMap<>(); if (filter != null) { // processing goes here } return criteria;}If you do an early return from the null-check, you can reduce one level of indentation. You also eliminate the possibly redundant new HashMap<>() declaration too, when the null-check holds true:public static DetachedCriteria applyRestrictionsToCriteria(DetachedCriteria criteria, List<RestrictionsHelper> filter) { // changed Vector -> List if (filter == null) { return criteria; } Map<String, DetachedCriteria> subCriteriaMap = new HashMap<>(); // processing goes here return criteria;}My take on final modifiers on method arguments and variables these days is that they are largely redundant, as long as you can easily observe that they are not carelessly reassigned. If you happen to come from a (programming) culture where this is done way too often, and thus you are introducing final to check such practices, then feel free to leave them in until such 'reminders' can be removed.Looping via iterationAnother way of doing looping via iteration is to rely on the standard for-loop as such:for (Iterator<RestrictionsHelper> helpers : filter.iterator(); helpers.hasNext(); ) { // more processing goes here}This scopes the Iterator to within the for-loop itself. 
The simpler way is to use the enhanced for-each loop:

for (RestrictionsHelper helper : filter) {
    // more processing goes here
}

Deduplicating code blocks, part 1

final Iterator<Order> or = restric.getOrders().iterator();
while (or.hasNext()) {
    subcriteria.addOrder(or.next());
}

Since this is done regardless of restric.getClassname().equals(), you can perform it outside of the if-block (illustrating only for DetachedCriteria):

public static DetachedCriteria applyRestrictionsToCriteria(DetachedCriteria criteria, List<RestrictionsHelper> filter) {
    if (filter == null) {
        return criteria;
    }
    Map<String, DetachedCriteria> subCriteriaMap = new HashMap<>();
    for (RestrictionsHelper helper : filter) {
        DetachedCriteria currentCriteria;
        if (helper.getClassname().isEmpty()) { // instead of String.equals()
            currentCriteria = criteria;
            // some processing here
        } else {
            // some processing here
            // use currentCriteria instead of subcriteria
        }
        for (Order order : helper.getOrders()) {
            currentCriteria.addOrder(order);
        }
    }
    return criteria;
}

Declaring variables closer to usage

Now let's take a look at the Map declaration again:

Map<String, DetachedCriteria> subCriteriaMap = new HashMap<>();

It's only being used when RestrictionsHelper.getClassname() is not empty. In addition, the only thing that code block seems to be doing is to eventually have currentCriteria be the final DetachedCriteria (following the example from the previous section) after splitting the class name. This suggests that we can extract the block into a method:

private static DetachedCriteria processClassName(DetachedCriteria criteria, String className) {
    DetachedCriteria result = criteria;
    StringBuilder path = new StringBuilder();
    Map<String, DetachedCriteria> subCriteriaMap = new HashMap<>();
    for (String element : className.split("\\.")) {
        String[] name = getNameAndAlias(element);
        path.append(name[0]);
        DetachedCriteria temp = subCriteriaMap.get(path.toString());
        if (temp == null) {
            result = result.createCriteria(name[0], name[1], CriteriaSpecification.LEFT_JOIN);
            subCriteriaMap.put(path.toString(), result);
        } else {
            result = temp;
        }
        path.append('.');
    }
    return result;
}

Deduplicating code blocks, part 2

The method in question now looks much shorter:

public static DetachedCriteria applyRestrictionsToCriteria(DetachedCriteria criteria, List<RestrictionsHelper> filter) {
    if (filter == null) {
        return criteria;
    }
    for (RestrictionsHelper helper : filter) {
        DetachedCriteria currentCriteria;
        if (helper.getClassname().isEmpty()) {
            currentCriteria = criteria;
            for (Criterion criterion : helper.getCriterions()) {
                criteria.add(criterion);
                if (criterion.toString().contains("Happening_fk")) {
                    criteria.setFetchMode("Happeningdetails", FetchMode.JOIN);
                }
            }
        } else {
            currentCriteria = processClassName(criteria, helper.getClassname());
            for (Criterion criterion : helper.getCriterions()) {
                currentCriteria.add(criterion);
            }
        }
        for (Order order : helper.getOrders()) {
            currentCriteria.addOrder(order);
        }
    }
    return criteria;
}

Before we even move to the DetachedCriteria/Criteria discussion, we can simplify this method one step further:

public static DetachedCriteria applyRestrictionsToCriteria(DetachedCriteria criteria, List<RestrictionsHelper> filter) {
    if (filter == null) {
        return criteria;
    }
    for (RestrictionsHelper helper : filter) {
        boolean isClassnameEmpty = helper.getClassname().isEmpty();
        DetachedCriteria currentCriteria = isClassnameEmpty
                ? criteria
                : processClassName(criteria, helper.getClassname());
        for (Criterion criterion : helper.getCriterions()) {
            currentCriteria.add(criterion);
            if (isClassnameEmpty && criterion.toString().contains("Happening_fk")) {
                currentCriteria.setFetchMode("Happeningdetails", FetchMode.JOIN);
            }
        }
        for (Order order : helper.getOrders()) {
            currentCriteria.addOrder(order);
        }
    }
    return criteria;
}

Deduplicating code blocks, part 3

Finally, the DetachedCriteria/Criteria discussion. You pointed out that an instanceof check is one way. Another alternative is to take inspiration from Guava's Function and Java 8's BiFunction and define your own bespoke two-argument 'processor'-like interface:

public interface OwnBiFunction<T, U> {
    T apply(T original, U name);
}

(BTW, a 'true' BiFunction would have a third generic type for the return type, but since we know we want an instance of T to be returned, we'll take a short-cut here.)

Then, modify processClassName() to accept this additional argument:

private static <T> T processClassName(T criteria, String className, OwnBiFunction<T, String[]> biFunction) {
    T result = criteria;
    StringBuilder path = new StringBuilder();
    Map<String, T> subCriteriaMap = new HashMap<>();
    for (String element : className.split("\\.")) {
        String[] name = getNameAndAlias(element);
        path.append(name[0]);
        T temp = subCriteriaMap.get(path.toString());
        if (temp == null) {
            result = biFunction.apply(result, name);
            subCriteriaMap.put(path.toString(), result);
        } else {
            result = temp;
        }
        path.append('.');
    }
    return result;
}

The implementations for DetachedCriteria and Criteria are respectively:

private static final OwnBiFunction<DetachedCriteria, String[]> DETACHED_CRITERIA =
        new OwnBiFunction<DetachedCriteria, String[]>() {
            @Override
            public DetachedCriteria apply(DetachedCriteria original, String[] name) {
                return original.createCriteria(name[0], name[1], CriteriaSpecification.LEFT_JOIN);
            }
        };

private static final OwnBiFunction<Criteria, String[]> CRITERIA =
        new OwnBiFunction<Criteria, String[]>() {
            @Override
            public Criteria apply(Criteria original, String[] name) {
                return original.createCriteria(name[0], name[1], CriteriaSpecification.LEFT_JOIN);
            }
        };

// Usage for both types
// ...
boolean isClassnameEmpty = helper.getClassname().isEmpty();
DetachedCriteria currentCriteria = isClassnameEmpty
        ? criteria
        : processClassName(criteria, helper.getClassname(), DETACHED_CRITERIA);
// ...

// ...
boolean isClassnameEmpty = helper.getClassname().isEmpty();
Criteria currentCriteria = isClassnameEmpty
        ? criteria
        : processClassName(criteria, helper.getClassname(), CRITERIA);
// ...

Putting it all together

You can have a single processClassName() method:

private static <T> T processClassName(T criteria, String className, OwnBiFunction<T, String[]> biFunction) {
    T result = criteria;
    StringBuilder path = new StringBuilder();
    Map<String, T> subCriteriaMap = new HashMap<>();
    for (String element : className.split("\\.")) {
        String[] name = getNameAndAlias(element);
        path.append(name[0]);
        T temp = subCriteriaMap.get(path.toString());
        if (temp == null) {
            result = biFunction.apply(result, name);
            subCriteriaMap.put(path.toString(), result);
        } else {
            result = temp;
        }
        path.append('.');
    }
    return result;
}

And finally, a single applyRestrictionsToCriteria() method that takes in the bespoke interfaces for the actual processing on a DetachedCriteria or Criteria type:

public static <T> T applyRestrictionsToCriteria(T criteria, List<RestrictionsHelper> filter,
        OwnBiFunction<T, String[]> biFunction,
        OwnBiFunction<T, Criterion> criterionAdder,
        Function<T, Void> fetchModeSetter, // this can be Guava's Function
        OwnBiFunction<T, Order> orderAdder) {
    if (filter == null) {
        return criteria;
    }
    for (RestrictionsHelper helper : filter) {
        boolean isClassnameEmpty = helper.getClassname().isEmpty();
        T currentCriteria = isClassnameEmpty
                ? criteria
                : processClassName(criteria, helper.getClassname(), biFunction);
        for (Criterion criterion : helper.getCriterions()) {
            criterionAdder.apply(currentCriteria, criterion);
            if (isClassnameEmpty && criterion.toString().contains("Happening_fk")) {
                fetchModeSetter.apply(currentCriteria);
            }
        }
        for (Order order : helper.getOrders()) {
            orderAdder.apply(currentCriteria, order);
        }
    }
    return criteria;
}

Java 8

When you get the chance to upgrade to Java 8, it's relatively simple to 'upgrade' the method signature to the Java 8 types BiFunction, BiConsumer and Consumer:

public static <T> T applyRestrictionsToCriteria(T criteria, List<RestrictionsHelper> filter,
        BiFunction<T, String[], T> biFunction,
        BiConsumer<T, Criterion> criterionAdder,
        Consumer<T> fetchModeSetter,
        BiConsumer<T, Order> orderAdder) {
    // same method body as above,
    // except that BiConsumer's method is accept(T, U) instead of apply(T, U)
    // and Consumer's method is accept(T) instead of Guava's Function.apply(T)
}

An example call can be:

Criteria criteria = /* ... */;
Criteria result = applyRestrictionsToCriteria(criteria, filter,
        (v, name) -> v.createCriteria(name[0], name[1], CriteriaSpecification.LEFT_JOIN),
        (v, c) -> v.add(c),
        v -> v.setFetchMode("Happeningdetails", FetchMode.JOIN),
        (v, o) -> v.addOrder(o));
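For symmetry with the lambda-based call above, here is a rough usage sketch of the pre-Java-8 variant for DetachedCriteria (untested; it reuses the DETACHED_CRITERIA constant and the OwnBiFunction interface defined earlier, plus Guava's Function for the fetch-mode setter):

DetachedCriteria criteria = /* ... */;
DetachedCriteria result = applyRestrictionsToCriteria(criteria, filter,
        DETACHED_CRITERIA, // createCriteria(..., LEFT_JOIN), defined above
        new OwnBiFunction<DetachedCriteria, Criterion>() {
            @Override
            public DetachedCriteria apply(DetachedCriteria original, Criterion criterion) {
                return original.add(criterion);
            }
        },
        new Function<DetachedCriteria, Void>() { // com.google.common.base.Function
            @Override
            public Void apply(DetachedCriteria input) {
                input.setFetchMode("Happeningdetails", FetchMode.JOIN);
                return null;
            }
        },
        new OwnBiFunction<DetachedCriteria, Order>() {
            @Override
            public DetachedCriteria apply(DetachedCriteria original, Order order) {
                return original.addOrder(order);
            }
        });

The Criteria variant is identical apart from the type parameter, which is exactly why the generic method pays off.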
_unix.183118 | Yesterday I tried to install Kali Linux on my laptop. The installation succeeds and asks me to remove the installer CD/DVD/external drive at the end. But after that, when I boot into Kali, it starts the installation process again. I continued the installation 2 more times, but the same issue persists. I am very confused about the issue.
My laptop configuration:
- Intel Pentium dual-core processor
- 2 GB RAM
- 320 GB hard disk
- Windows 7 installed on the C: drive
- Three further drives of 71 GB each: D:, E:, F:
I am installing Kali on the F: drive. I created a root partition, a boot partition and a swap partition. At my boot screen I can see 2 operating systems:
1. Windows 7
2. Debian Linux installer | Kali Linux Install Issue | linux;kali linux;debian installer | I think there is some issue with the boot loader. GRUB is not installed properly, and the installation is not actually finished.
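If it is indeed the boot loader, one recovery approach is to boot the Kali live/installer medium, chroot into the installed system, and reinstall GRUB. This is a rough sketch, not a tested recipe — /dev/sda1 and /dev/sda below are assumptions; substitute the actual Kali root partition and boot disk:

# from a Kali live session, as root
mount /dev/sda1 /mnt              # assumed Kali root partition
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
grub-install /dev/sda             # install to the disk's MBR, not to a partition
update-grub                       # regenerate the menu; should also pick up Windows 7
exit
reboot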
_scicomp.136 | How can the gravitational n-body problem be solved numerically in parallel? Is a precision-complexity tradeoff possible? How does precision influence the quality of the model? | How can the gravitational n-body problem be solved in parallel? | algorithms;numerics;precision;complexity;ode | There is a wide variety of algorithms; Barnes-Hut is a popular $\mathcal{O}(N \log N)$ method, and the Fast Multipole Method is a much more sophisticated $\mathcal{O}(N)$ alternative. Both methods make use of a tree data structure in which nodes essentially interact only with their nearest neighbors at each level of the tree; you can think of splitting the tree between the set of processes at a sufficient depth, and then having them cooperate only at the highest levels. You can find a recent paper discussing FMM on petascale machines here.
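To make the tree idea concrete, here is a minimal, serial Barnes-Hut-style sketch in Python (2D, G = 1, fixed opening angle — all simplifications for illustration; a production parallel code would distribute this tree across processes as described above):

import math

class Node:
    """A square cell: either empty, a leaf holding one body, or an internal node."""
    def __init__(self, cx, cy, size):
        self.cx, self.cy, self.size = cx, cy, size  # cell center and side length
        self.mass = 0.0
        self.comx = self.comy = 0.0                 # center of mass of the cell
        self.body = None                            # (x, y, m) when a leaf
        self.children = None                        # 4 sub-cells when internal

def _quadrant(node, x, y):
    return node.children[(2 if x >= node.cx else 0) + (1 if y >= node.cy else 0)]

def insert(node, x, y, m):
    if node.children is None and node.body is None:      # empty leaf
        node.body = (x, y, m)
    else:
        if node.children is None:                        # occupied leaf: subdivide
            h = node.size / 4
            node.children = [Node(node.cx + dx * h, node.cy + dy * h, node.size / 2)
                             for dx in (-1, 1) for dy in (-1, 1)]
            bx, by, bm = node.body
            node.body = None
            insert(_quadrant(node, bx, by), bx, by, bm)  # push old body down
        insert(_quadrant(node, x, y), x, y, m)
    total = node.mass + m                                # update cell aggregates
    node.comx = (node.comx * node.mass + x * m) / total
    node.comy = (node.comy * node.mass + y * m) / total
    node.mass = total

def accel(node, x, y, theta=0.5, eps=1e-9):
    """Acceleration at (x, y) due to all mass in this cell (G = 1)."""
    if node.mass == 0.0:
        return 0.0, 0.0
    if node.body is not None and node.body[0] == x and node.body[1] == y:
        return 0.0, 0.0                                  # skip self-interaction
    dx, dy = node.comx - x, node.comy - y
    r = math.sqrt(dx * dx + dy * dy + eps)               # eps: tiny softening
    if node.children is None or node.size / r < theta:   # far enough: one pseudo-body
        a = node.mass / (r * r * r)
        return a * dx, a * dy
    ax = ay = 0.0
    for child in node.children:                          # too close: open the cell
        cax, cay = accel(child, x, y, theta, eps)
        ax += cax
        ay += cay
    return ax, ay

# usage: rebuild the tree each time step, then query one acceleration per body
bodies = [(0.0, 0.0, 5.0), (1.0, 0.0, 1.0), (0.0, 2.0, 1.0)]
root = Node(0.0, 0.0, 8.0)                               # box must cover all bodies
for bx, by, bm in bodies:
    insert(root, bx, by, bm)
print(accel(root, 1.0, 0.0))                             # acceleration on body 2

The opening angle theta is exactly where the precision-complexity tradeoff lives: smaller theta opens more cells (more accurate, slower), theta = 0 degenerates to the exact $\mathcal{O}(N^2)$ direct sum.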
_softwareengineering.155211 | I am about to start my first project with a client; however, I will work as a consultant. So do I need to get the developer certificate and post my client's app in the App Store? Or should I ask my client to get the license, and then help them deploy the app under their name? They don't want the company name to be my organisation — they want their company name to show up in the App Store. However, the developer of the app is my organisation, not them. How do I deal with this situation? | Who should get a developer certificate from Apple if the client wants their company name to show up in the App Store | development process;deployment;apple;appstore;product owner | Apple has this scenario covered. Your client will need to join the iOS dev program so they can post things to the store. They can then add you to their program for development certificates and such if you don't have your own, as well as provision an iTunes Connect account for you to publish to the store on their behalf. I would advise getting your own iOS dev program account, if for no other reason than convenience.
_unix.288970 | I saw this question: Converting .odm to .odt, where someone shows how to convert an .odm file into .odt or .pdf. Is there also a way to do this on the command line? I have an Open Document Master file which links to two external files, but when I type:
soffice --headless --convert-to pdf master.odm
then I only see a blank file. The same happens when I first try to convert to .odt and then to .pdf:
soffice --headless --convert-to odt master.odm
soffice --headless --convert-to pdf master.odt | Converting .odm to .odt or .pdf - command line | command line;conversion;libreoffice;openoffice | null
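The blank output suggests the linked sub-documents are never loaded during headless conversion. One avenue worth exploring (an untested sketch — the macro name, the file paths and the Standard.Module1 location are assumptions) is to drive LibreOffice with a small Basic macro that loads the master with link updating forced on before exporting:

' Save under My Macros > Standard > Module1, then run headless with:
'   soffice --headless "macro:///Standard.Module1.ConvertOdm"
Sub ConvertOdm
    Dim oDoc As Object
    Dim aLoad(0) As New com.sun.star.beans.PropertyValue
    aLoad(0).Name = "UpdateDocMode"
    aLoad(0).Value = com.sun.star.document.UpdateDocMode.FULL_UPDATE ' resolve links
    oDoc = StarDesktop.loadComponentFromURL( _
        "file:///home/user/master.odm", "_blank", 0, aLoad())
    Dim aStore(0) As New com.sun.star.beans.PropertyValue
    aStore(0).Name = "FilterName"
    aStore(0).Value = "writer_pdf_Export"                            ' PDF export filter
    oDoc.storeToURL("file:///home/user/master.pdf", aStore())
    oDoc.close(False)
End Sub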
_softwareengineering.228911 | This is the problem I'm working with: given a phone number from anywhere in the world and some location information (state, province, possibly country name if I'm lucky, etc.), return the ISO country code for that number. For the purposes of this question, I will not focus on the location information, as that provides an alternative way of determining the country code which doesn't even need the phone number any more (though it would be useful for validation purposes).
When I first started working on the problem, I was hoping there was a deterministic way to figure this out because some international standard existed. It became immediately apparent that none does for phone numbers. There are standards within countries, and between countries (NANP, for example), but no unified international standard.
Playing around with libphonenumber for a few days, it seems to be able to provide accurate validation of a phone number if I'm given a country code (e.g. CA for Canada, GB for the United Kingdom, etc.). The library provides two methods, isPossibleNumber and isValidNumberForRegion. This is the code I'm using:

boolean isValid;
PhoneNumber number;
PhoneNumberUtil util = PhoneNumberUtil.getInstance();
String numStr = "(123) 456-7890";
for (String r : util.getSupportedRegions()) {
    try {
        // check if it's a possible number
        isValid = util.isPossibleNumber(numStr, r);
        if (isValid) {
            number = util.parse(numStr, r);
            // check if it's a valid number for the given region
            isValid = util.isValidNumberForRegion(number, r);
            if (isValid) {
                System.out.println(r + ": " + number.getCountryCode() + ", " + number.getNationalNumber());
            }
        }
    } catch (NumberParseException e) {
        e.printStackTrace();
    }
}

So for example, if I took an arbitrary phone number like +44 20 7930 4832 and ran it through the method, I would get the following output:

GB: 44, 2079304832

Now, that's assuming I'm given the dialing code (sometimes it's there). If I weren't given the dialing code, I might just get something like 20 7930 4832, and the results are not as pretty:

DE: 49, 2079304832
US: 1, 2079304832
GB: 44, 2079304832
FI: 358, 2079304832
AX: 358, 2079304832
RS: 381, 2079304832
CN: 86, 2079304832
NZ: 64, 2079304832
IN: 91, 2079304832
IR: 98, 2079304832
JP: 81, 2079304832

Given a phone number, I can run it through all of the different rules for every country and filter the list down from 244 candidates to around 20 or fewer if I'm lucky, but I'm not sure if there's anything else I could do to try and guess the country. | Guessing a phone number's country code | java | null