id | question | title | tags | accepted_answer
---|---|---|---|---|
_vi.2207 | When I do vimdiff file2 file1, file2 naturally goes on the left and file1 on the right. Sometimes I find that I put them the wrong way round, so I'd like to be able to switch them round without leaving Vim. Is that possible? | In Vimdiff, how do I switch the left and right panes? | split;vimdiff | You can use Ctrl-w x. From :he CTRL-W_x: CTRL-W x (also CTRL-W CTRL-X). Without a count: exchange the current window with the next one; if there is no next window, exchange with the previous window. With a count: exchange the current window with the Nth window (the first window is 1). The cursor is put in the other window. When vertical and horizontal window splits are mixed, the exchange is only done in the row or column of windows that the current window is in. |
_softwareengineering.307253 | When Eclipse is written in Java and Java is platform independent, why does Eclipse offer different versions according to platforms? I assume it should be "write once, use anywhere" code. | Why is Eclipse platform dependent? | java;cross platform;eclipse | Although the Eclipse IDE is written in Java, its graphical control elements use the Standard Widget Toolkit (SWT), whereas most Java applications use the Java standard Abstract Window Toolkit (AWT) or Swing. To display GUI elements, the SWT implementation accesses the native GUI libraries of the operating system using JNI (Java Native Interface), in a manner similar to programs written using operating-system-specific APIs. Programs that call SWT are portable, but the implementation of the toolkit, despite part of it being written in Java, is unique for each platform. SWT must be ported to every new GUI library that needs supporting. Unlike Swing and AWT, SWT is not available on every Java-supported platform, since SWT is not part of the Java release. Therefore the Eclipse distribution must include a different SWT implementation for each supported platform. |
_softwareengineering.200643 | In Mercurial you can close a branch like this: hg commit --close-branch; this means that the branch will not be listed anymore but will still exist, and can still be listed if you use hg branches --closed. I have learned that in Git branches are generally not kept; they disappear, especially when doing fast-forward merges. So my question is: can you keep branches in Git, but in a way that you don't list them anymore? Edit: Some additional context: In Mercurial branches are metadata, so each commit always knows its branch. In Mercurial, if you want to delete a branch, you strip the base branch revision. You can also rebase and end up with the same result as with Git. Mercurial bookmarks are pointers, so they have the exact same behavior as a Git branch. | Can you close branches in Git? | git;mercurial;dvcs | I know it's not exactly the same, but could you use a workflow where you tag the feature branches before "closing" them? For example: git merge feature-wxyz -m "Merge message", then git tag closed-feature-wxyz feature-wxyz, then git branch -d feature-wxyz. Of course, you could also annotate the tag (git tag -a closed-feature-wxyz -m "Description of closed branch feature-wxyz") if you wish. I know it's not exactly what you asked for, but this would satisfy: (1) a record of the branch is kept; (2) the branch is no longer visible in git branch; (3) you can still base a new branch on the closed one: git checkout -b new-feature-ghij closed-feature-wxyz |
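The answer's tag-then-delete workflow can be exercised end to end in a throwaway repository. The branch and tag names below are the hypothetical ones from the answer, and the repo is a scratch directory:

```shell
# Scratch repo; names (feature-wxyz, closed-feature-wxyz) are from the answer.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "init"
main=$(git symbolic-ref --short HEAD)       # works whether HEAD is master or main
git checkout -q -b feature-wxyz
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "work"
git checkout -q "$main"
git -c user.name=t -c user.email=t@t merge -q feature-wxyz
git tag closed-feature-wxyz feature-wxyz    # keep a permanent record of the tip
git branch -d feature-wxyz                  # drop it from 'git branch' output
git tag -l 'closed-*'                       # the record survives
```

Re-opening later is `git checkout -b new-feature closed-feature-wxyz`, exactly as the answer says.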
_webapps.59661 | Whenever entering the date, e.g. day-month-year, it automatically changes to month-day-year. I went into Format > Number > More formats > More date and time formats, and I changed it to day-month-year. And again, when entering day-month-year, e.g. 18-4-2014, it changes it to 4-18-2014, but when I click on the date, it shows as 18/4/2014 next to fx above the page. So it looks like Google Spreadsheet is messing up. How do I fix this? | Date format in Google Spreadsheet is not correct | google spreadsheets;formatting;date | The solution is to set the correct Locale under the spreadsheet's settings ("This affects formatting defaults, such as currency."). So if you want US-locale style (MM-DD-YYYY), then choose the US locale. If you are in a different locale and want formatting |
_webapps.44430 | I have a separate sheet with some data, where a user's input will reference the row number of a sheet. I'm trying to reference that sheet by inserting the column letter and taking a number input from a cell to make up the row number. I've tried: =Sheet2!INDIRECT("A"&A1) where A1 in the current sheet holds a value of 2; thus I want it to obtain =Sheet2!A2. The formula returns #ERROR (error - parse error). How (if possible) can I reference it in that way? | Using INDIRECT() in sheet reference range | google spreadsheets;formulas;worksheet function | Figured it out... took me a while... INDIRECT() has to wrap the entire reference. In order to achieve what I needed, the formula looked like this: =INDIRECT("Sheet2!A"&A1) |
_unix.237427 | I am trying to eliminate a single letter (K) from each line of a file_in.dat that looks as follows: 283 K00845.01 16.329762180 177.2951100 0.9830284 K00846.01 27.807562927 186.7135320 15 K00847.01 80.872063900 203.8969600 4.73 0.764016 K00848.01 3.166464930 17 K00849.01 10.355331770 170.9368500 3.09 0.918018 K00850.01 10.526294063 176.5225030 8.50 So I thought to use sed or echo, or the following idea: cut -d \K -f file_in.dat > file_out.dat But I found some problems with this idea. Can someone help me? Thanks. | Cut a letter from each line of a file | shell script;text processing | Did you try: sed 's/K//' file_in.dat > file_out.dat |
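The accepted `sed 's/K//'` removes only the first K on each line, which is all the sample data contains; a quick check on a line shaped like the question's input:

```shell
# Remove the first (and here only) K from a sample data line.
printf '%s\n' '283 K00845.01 16.329762180 177.2951100 0.98' | sed 's/K//'
# prints: 283 00845.01 16.329762180 177.2951100 0.98
```

If a line could contain more than one K, the g flag would be needed to delete them all: `sed 's/K//g'`.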
_scicomp.8637 | Suppose I have a set of small subgraphs $A = \{G_i\}$ of an original directed acyclic graph $G$, typically $|G_i| \ll |G|$, which together span the original graph $$G = \cup G_i$$ My question is then: if I take an arbitrary subset of these graphs $A' \subset A$, and a single subgraph from this subset, $a \in A'$, is there any simple way (or known algorithm) of reassembling the subgraph $G' \subset G$ given by that portion of the union of $A'$ connected to $a$? In words, this is rather like a jigsaw problem where $A$ is the total collection of pieces that originally came in the box, $A'$ is the subset left after half of them got lost, and $a \in A'$ is the randomly selected piece you put down to start the puzzle off. The question is then: what is the largest connected graph (connected subset of pieces $x \in A'$) that you can lay down on the board? The actual application arises where each subgraph $G_i$ of $A$ represents a rule (e.g. $x \wedge y \Rightarrow z$) and the objective is to find the largest rule implied transitively from an initial seed rule $a \in A' \subset A$ and the remaining rules contained in the thinned-out subset $A' \subset A$. I think similar things are possible in declarative languages such as Prolog, but I suspect that Prolog can actually do more. Any good, up-to-date reference on declarative programming languages would also be very useful. | a jigsaw problem: recreating a subgraph from a limited number of fragments of an original graph | graph theory | null |
_unix.359643 | I'm using screen /dev/ttyS0 to connect to the serial console of a device. Once I have established my environment on the device I start screen on the device to have multiple shells there.Of course, this means I need to double-escape the screen commands for the inner session, which is annoying, because you're not used to it. That's why I do :escape for the outer session, because I don't use screen features in the outer session anymore, but only in the inner session.This is fine for most of the time, but no matter which character I use as outer escape key, every now and then I will need it as regular character. But since this happens so rarely, I don't remember to escape it, causing strange things to happen (you probably know that).So I would prefer to disable it completely. I don't need it anymore (if I want to close the outer screen session, I close the terminal window). From the manual I don't find a way to disable it. Am I missing something? | screen in screen session: How to disable outer screen's escape? | gnu screen | null |
_unix.331408 | How would you monitor a directory on a Linux machine to check if a user (or someone from the network) attempted to access it? | Monitor accesses to a directory on a Linux machine | files;security;monitoring;audit | Use inotify, like so: inotifywait -m -e modify,create,delete -r /var/www >> /var/log/i-see-www 2>&1 (assuming you meant "worked in" when you said "access"; simply listing or reading files would be harder to do). |
_cstheory.10291 | While it is very common to see successful autodidact musicians, painters, authors and architects, I am not familiar with any famous autodidacts in the field of TCS. Are there any examples of an accomplished autodidact theoretical computer scientist (i.e., someone who published a significant paper without ever going to grad school)? | Accomplished autodidact theoretical computer scientists | soft question | In addition to some of the great people listed in the comments, Gregory Chaitin independently developed much of Kolmogorov complexity while he was a high school student in New York City. |
_unix.96650 | I would very much like to allow users in a small office environment to harness the power of the slocate indexed database on the file server. Currently, when users are looking for a file on our file server, they need to run find from their Windows workstations on the network shares that are available from the server. This loads up the server while others are working. Alternatively, I could set the indexers on every workstation to index the server locations. This is not ideal either, as the server would again be loaded with a task that must be run multiple times a day on the same set of data! Ideally, the file server will carry out its own indexing, and my users (who are oblivious to Linux and its command line) will be able to log on to a simple website on the file server and run a search in much the same way I run locate commands in the command line. Is there something available? | Is there a web app for returning results to a search on an indexed database? | web;file server;locate | I looked and did not find any offering that provides just a web app interface to an existing slocate database file. So you have the following options: (1) Roll your own. It shouldn't be too difficult; a CGI-based approach would allow users to search for entries in your pre-built slocate database file. (2) Skip the slocate database file and use a dedicated search engine that includes both a crawler and a web frontend, such as one of the following: OpenSearchServer; Hyper Estraier; Recoll + Recoll-WebUI; Wumpus Search Engine |
_codereview.113894 | I'm a beginner at coding, and I am looking to improve the structure of how I write code and will take any tips. I have posted a simple program that takes your name as input, then runs the string through a switch statement to print out how many of each character your name has, as well as any spaces and symbols. package practice;import java.util.Scanner;public class counter { public static void main(String[] args) { Scanner data = new Scanner(System.in); String name; System.out.println("Enter name"); name = data.nextLine(); int len = name.length(); int ch = 0; int charCount = 0; int space = 0; int symbols = 0; int a = 0,b = 0,c = 0,d = 0,e = 0,f = 0,g = 0,h = 0,i = 0,j = 0,k = 0,l = 0,m = 0; int n = 0,o = 0,p = 0,q = 0,r = 0,s = 0,t = 0,u = 0,v = 0,w = 0,x = 0,y = 0,z = 0; for (int in = 0; in < len; in++) { switch(name.charAt(ch)) { case 'a': a++; charCount++; break; case 'b': b++; charCount++; break; case 'c': c++; charCount++; break; case 'd': d++; charCount++; break; case 'e': e++; charCount++; break; case 'f': f++; charCount++; break; case 'g': g++; charCount++; break; case 'h': h++; charCount++; break; case 'i': i++; charCount++; break; case 'j': j++; charCount++; break; case 'k': k++; charCount++; break; case 'l': l++; charCount++; break; case 'm': m++; charCount++; break; case 'n': n++; charCount++; break; case 'o': o++; charCount++; break; case 'p': p++; charCount++; break; case 'q': q++; charCount++; break; case 'r': r++; charCount++; break; case 's': s++; charCount++; break; case 't': t++; charCount++; break; case 'u': u++; charCount++; break; case 'v': v++; charCount++; break; case 'w': w++; charCount++; break; case 'x': x++; charCount++; break; case 'y': y++; charCount++; break; case 'z': z++; charCount++; break; default: if(name.charAt(ch) == ' ') space++; if(name.charAt(ch) != ' ') symbols++; break; } char temp = Character.toUpperCase(name.charAt(ch)); if(name.charAt(ch) == temp && temp != ' ' ) { System.out.println("ERROR: UPPERCASE NOT ALLOWED"); System.out.println("EXITING APPLICATION"); System.exit(1); } if(name.charAt(ch) == '0' && name.charAt(ch) == '1' && name.charAt(ch) == '2' ) { System.out.println("ERROR: NUMBERS NOT ALLOWED"); System.out.println("EXITING APPLICATION"); System.exit(1); } if(name.charAt(ch) == '3' && name.charAt(ch) == '4' && name.charAt(ch) == '5' ) { System.out.println("ERROR: NUMBERS NOT ALLOWED"); System.out.println("EXITING APPLICATION"); System.exit(1); } if(name.charAt(ch) == '6' && name.charAt(ch) == '7' && name.charAt(ch) == '8' ) { System.out.println("ERROR: NUMBERS NOT ALLOWED"); System.out.println("EXITING APPLICATION"); System.exit(1); } if(name.charAt(ch) == '9' ) { System.out.println("ERROR: NUMBERS NOT ALLOWED"); System.out.println("EXITING APPLICATION"); System.exit(1); } ch++; } if (a > 0){ System.out.println("There are ("+a+"-As) in your name"); } if (b > 0){ System.out.println("There are ("+b+"-Bs) int your name"); } if (c > 0){ System.out.println("There are ("+c+"-Cs) int your name"); } if (d > 0){ System.out.println("There are ("+d+"-Ds) int your name"); } if (e > 0){ System.out.println("There are ("+e+"-Es) int your name"); } if (f > 0){ System.out.println("There are ("+f+"-Fs) int your name"); } if (g > 0){ System.out.println("There are ("+g+"-Gs) int your name"); } if (h > 0){ System.out.println("There are ("+h+"-Hs) int your name"); } if (i > 0){ System.out.println("There are ("+i+"-Is) int your name"); } if (j > 0){ System.out.println("There are ("+j+"-Js) int your name"); } if (k > 0){ System.out.println("There are ("+k+"-Ks) int your name"); } if (l > 0){ System.out.println("There are ("+l+"-Ls) int your name"); } if (m > 0){ System.out.println("There are ("+m+"-Ms) int your name"); } if (n > 0){ System.out.println("There are ("+n+"-Ns) int your name"); } if (o > 0){ System.out.println("There are ("+o+"-Os) int your name"); } if (p > 0){ System.out.println("There are ("+p+"-Ps) int your name"); } if (q > 0){ System.out.println("There are ("+q+"-Qs) int your name"); } if (r > 0){ System.out.println("There are ("+r+"-Rs) int your name"); } if (s > 0){ System.out.println("There are ("+s+"-Ss) int your name"); } if (t > 0){ System.out.println("There are ("+t+"-Ts) int your name"); } if (u > 0){ System.out.println("There are ("+u+"-Us) int your name"); } if (v > 0){ System.out.println("There are ("+v+"-Vs) int your name"); } if (w > 0){ System.out.println("There are ("+w+"-Ws) int your name"); } if (x > 0){ System.out.println("There are ("+x+"-Xs) int your name"); } if (y > 0){ System.out.println("There are ("+y+"-Ys) int your name"); } if (z > 0){ System.out.println("There are ("+z+"-Zs) int your name"); } System.out.println("\nThere are a total of "+charCount+" characters"); System.out.println("\nThere are ("+space+"-Spaces) in the data"); System.out.println("\nThere are "+symbols+"-Symbols in the data"); }} | Counting characters | java;beginner;strings;console | null |
_unix.19225 | Possible Duplicate: Resources to learn linux architecture in detail? I migrated to UNIX (Linux, Ubuntu) and I'm trying to understand the organisation of files and directories. I stumbled upon the File Hierarchy Standard (quite old, it seems) and it made me wonder if this is the ACTUAL standard that is used. May I also ask for additional links to resources to broaden my knowledge (and that of everyone who asks questions about the FHS) of these wonderful *NIX environments. | Where can I find the Official File Hierarchy Standard for UNIX? | fhs;standard | Here it is: The FHS 2.3 Specification |
_unix.237249 | My document viewer is not able to launch. When I open it from the command line it gives me the following error: evince evince: error while loading shared libraries: libffi.so.6: failed to map segment from shared object: Permission denied When I sudo it, it gives me the following error: sudo evince ass3.pdf No protocol specified ** (evince:13785): WARNING **: Could not open X display No protocol specified error: XDG_RUNTIME_DIR not set in the environment. Cannot parse arguments: Cannot open display: How do I make it open normally; by "normally" I mean when I click on a PDF with my mouse? | Some applications on my Ubuntu won't launch (document viewer) | libraries;desktop | null |
_unix.295864 | I want to paste one file (with vectors 1xn) into another file. Example: File1 rs01 rs02 rs03 File2 AA BB CC How can I do this? But in my case, I have 55,000 columns in File1 and File2, so I've been thinking that it is difficult to use the command to put head. | how to paste a file into another file (turn one), but one under the other? | terminal;paste | The cat utility concatenates all its inputs into one data stream. Giving it two files, it produces output consisting of the complete contents of the first file, followed by the complete contents of the second file, in that order. In your case: $ cat file1 file2 >file-1-and-2 |
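A minimal check of the answer, using the two example files from the question (the file names are the question's): cat emits file1 in full and then file2, i.e. one under the other.

```shell
tmp=$(mktemp -d)
printf 'rs01 rs02 rs03\n' > "$tmp/file1"   # File1 from the question
printf 'AA BB CC\n'       > "$tmp/file2"   # File2 from the question
cat "$tmp/file1" "$tmp/file2" > "$tmp/file-1-and-2"
cat "$tmp/file-1-and-2"
# prints:
# rs01 rs02 rs03
# AA BB CC
```

Note that cat never looks at the columns, so the 55,000-column width is irrelevant; only a side-by-side (column-wise) merge would need the paste utility instead.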
_webapps.17455 | Firstly, I understand how I can use Google Apps to point a domain (my company's domain, say 'greatapp.com') at 'greatapp.appspot.com'; no problems there. I also understand that once I've got it onto 'greatapp.com' I can use wildcard hostnames to host the app on '*.greatapp.com'; no issues there. The point is that the application I'm thinking about building would be one which I would charge companies to use, and it would probably need to be co-branded, and therefore the companies might want to use their own domains for displaying their data to customers, so data.companyname.com instead of companyname.greatapp.com. Would GAE accept a domain pointed by the company via CNAME (or whatever), or is that not possible? | Can Google App Engine support multiple domains pointing at a single appspot.com (or Apps domain) GAE app? | google app engine | James, have a look at this http://www.tipfy.org/wiki/cookbook/dynamic-subdomains/ for a code example. It is possible. One thing to note is that App Engine doesn't allow naked domains (a domain without www). So when you are setting up a custom domain pointing to your App Engine instance, you are setting up a CNAME for www. |
_unix.223656 | I am trying to include an attachment in my sendmail eml file. The current eml file (order.eml) has the following contents: From: Sender <[email protected]> To: [email protected] Subject: Report Mime-Version: 1.0 Content-Type: multipart/mixed; boundary=B835649000072104Jul07 --B835649000072104Jul07 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Content-Disposition: inline Body Copy --B835649000072104Jul07 Content-Type: application/pdf Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename=por5151.pdf base64 por5151.pdf --B835649000072104Jul07-- Both order.eml and por5151.pdf are in the same directory, and I try to send it with # /usr/sbin/sendmail -t < order.eml When the email arrives I can see por5151.pdf in the attachments, but it is blank (0 bytes). I don't know why this is and I am struggling to fix it. | sendmail attachment is empty | centos;sendmail;troubleshooting | What you would need to do is include the file's encoded contents below the Content-Disposition: attachment; filename=por5151.pdf line while you generate the .eml file. You can do so using the base64 utility: base64 por5151.pdf Make sure the closing boundary (--B835649000072104Jul07--) gets inserted behind that. sendmail doesn't interpret the file that you hand it, and therefore doesn't magically insert the contents of the .pdf file. |
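The bug in the question is that the .eml contains the literal text `base64 por5151.pdf` where that command's output should be. A sketch of generating the attachment part correctly; the boundary and filename are the question's, and the file content here is a stand-in:

```shell
tmp=$(mktemp -d); cd "$tmp"
printf 'fake pdf bytes' > por5151.pdf       # stand-in for the real PDF
{
  echo '--B835649000072104Jul07'
  echo 'Content-Type: application/pdf'
  echo 'Content-Transfer-Encoding: base64'
  echo 'Content-Disposition: attachment; filename="por5151.pdf"'
  echo ''
  base64 por5151.pdf                        # actual encoded data, not the command text
  echo '--B835649000072104Jul07--'
} > attachment.part
```

Splicing attachment.part into order.eml in place of the broken section gives a mail client real base64 data to decode, so the attachment is no longer 0 bytes.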
_webmaster.41004 | Assuming one submits a sitemap for a website in Google Webmaster, does Google reprocess it from time to time? If yes, at what rate? | Does Google reprocess submitted sitemaps from time to time? | google search console;sitemap | Edit: Google will recrawl it (see the comment by John Mueller from Google below). However, if you want your sitemap reprocessed more quickly by Google, the recommended practice is to resubmit it. You can resubmit using Webmaster Tools or using an HTTP request. Using Webmaster Tools: On the Webmaster Tools Home page, click the site you want. Under Optimization, click Sitemaps. Select the Sitemap(s) you want to resubmit, and then click the Resubmit Sitemap button. Source: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=183669 The linked source also describes the slightly more complex alternative method where you use an HTTP request. |
_unix.14374 | I want to list USB ports in Linux and then send a message to the printer connected to one. That message is sensed by the printer to open the cash drawer. I know I can use echo -e and a port name, but my difficulty is finding the port name. How can I list the available ports or the ports that are currently used? | List USB ports in Linux | linux;usb | The lsusb command will yield the list of recognised USB devices. Here is an example: $ lsusb Bus 002 Device 003: ID 1c7a:0801 LighTuning Technology Inc. Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 004: ID 04ca:f01c Lite-On Technology Corp. Bus 001 Device 003: ID 064e:a219 Suyin Corp. Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub You can note that the information provided includes the bus path as well as the vendorId/deviceId. I'm not sure what "the ports that are currently used" actually means. Edit: To write a message to the device on bus 1, device 2, you must access the device: $ ls -l /dev/bus/usb/001/002 crw-rw-r-- 1 root root 189, 1 2011-06-04 03:11 /dev/bus/usb/001/002 |
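The Bus and Device fields in lsusb output map directly to the /dev/bus/usb path used at the end of the answer. A small sketch that extracts them from a captured line; the sample line is taken from the answer's own output:

```shell
# Parse one captured lsusb line into the matching device node path.
line='Bus 001 Device 004: ID 04ca:f01c Lite-On Technology Corp.'
bus=${line#Bus };     bus=${bus%% *}    # field after "Bus "    -> 001
dev=${line#*Device }; dev=${dev%%:*}    # field after "Device " -> 004
echo "/dev/bus/usb/$bus/$dev"
# prints: /dev/bus/usb/001/004
```

On a live system the same parsing can be fed from `lsusb` directly; the resulting node is what you would open to talk to the printer.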
_unix.364708 | The thing is, you can specify a port to SCP, and you can transfer stuff from a remote host to another.If both hosts use different ports on SSH (i.e. 2203 and 2541), how can I specify these ports to the SCP command?I know I can doscp -P <port> host1:/file host2:/fileBut that port will apply to both hosts.So... how can I specify two different ports for the two different hosts? | SCP between two different servers with two different ports | scp;remote | null |
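This row has no accepted answer; one hedged sketch (untested here) is to give each remote an alias in ~/.ssh/config so that each carries its own port. The host names and ports below are the question's examples, and the aliases h1/h2 are hypothetical:

```
# ~/.ssh/config (aliases h1/h2 are hypothetical)
Host h1
    HostName host1
    Port 2203

Host h2
    HostName host2
    Port 2541
```

Then `scp -3 h1:/file h2:/file` copies between them; the -3 flag routes the transfer through the local machine, so the local config (and therefore both ports) applies to both connections.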
_unix.305345 | Currently I have Arch and Windows with GRUB installed and configured. I'm going to make another Arch installation on a separate partition. Do I need to install and configure GRUB again on the newly installed distribution, or can I use the old one? I suppose that if I continue using the old (current, from this point of view) GRUB I'd have to configure it again so that it sees the new Arch installation. What will happen if I format the current partition (with the old Arch installation)? Will GRUB continue working or not (i.e., would I need to boot some live CD to fix it)? To sum up: is GRUB installed in some general place, independent of any OS, or is it tied to some OS (my current Arch installation)? Tutorials give this command: grub-mkconfig -o /boot/grub/grub.cfg, which makes me think that GRUB is tied to the specific Linux installation; but they also show a grub-install command without specifying any directory. And if GRUB were tied to the current installation, how would my computer know which partition to check for GRUB? Otherwise, if it were general, why would I have to install it as a package on the specific Arch installation? | Where is grub installed and do I need a new one for a separate linux installation? | filesystems;boot;grub2;dual boot;boot loader | Naming convention: GRUB (some of it) stays in the MBR; GRUB (rest of it) is several files that are loaded from /boot/grub (for example, that nice image that appears as a background in GRUB is not stored in the MBR). Notes: this answer considers an MBR setup; GRUB can be used in other setups. In an EFI setup things get hairy: GRUB can be used, but so can the kernel itself as its own EFI stub. GRUB (some of it) is installed in the MBR. The MBR is the first 512 bytes on a disk. The MBR is also used by the partition table of the disk, therefore GRUB itself has somewhat less space than the 512 bytes. The GRUB (some of it) inside the MBR loads a more complete GRUB (rest of it) from another part of the disk, which is defined during GRUB installation to the MBR (grub-install). Since the MBR GRUB needs to find its counterpart on disk, which normally resides in /boot, the partition where the main GRUB resides cannot be too far from the partition table (often 512MB, but this may vary). It is very useful to have /boot as its own partition, since then GRUB for the entire disk can be managed from there. What does it mean: The GRUB in the MBR can only load one GRUB (the rest of it) from disk, and that specific GRUB on disk must be configured to find all OSes on the machine. The command grub-mkconfig -o /boot/grub/grub.cfg runs os-prober (if it can find it), which scans all partitions and produces a grub.cfg pointing to all the OSes. Therefore if you have several partitions with /boot (or the MS Windows equivalents; I do not know them, but os-prober knows), os-prober will find them and create grub.cfg accordingly. Running grub-install installs a GRUB (some of it) on the MBR that points to the GRUB of the currently running OS with the current grub.cfg. What does this mean: You only need a single GRUB for the entire system. You can have different GRUBs on different disks (since they have distinct MBRs), but that only makes sense if you plan to remove a disk. You can manage the boot of all OSes from a single GRUB installation. On a single disk you shall always run grub-install from a single OS only! That's important, otherwise you will keep overwriting your config. |
_webmaster.83896 | I have a website for testing. I don't want this site to be indexed by search engines. Currently, pages on the site return 200 OK in their headers. How can I make the whole site send a 404 code in its headers, but keep working? The site is built on ModX. | How to make the whole site send 404 headers? | indexing;404;modx | null |
_webmaster.95276 | Can we add multiple Google Analytics property IDs in one Google Tag Manager tag? | Multiple Google Analytics property IDs in one Google Tag Manager tag | google analytics;google tag manager | null |
_unix.26619 | I have a C program which I want to run as a daemon. How do I install it so it will run as a daemon on CentOS? Someone said to use @reboot, and some said to put it in /etc/rc.d/rc.local. Which is the right way? | How to run a C program as a daemon? | centos;daemon;c | Neither. If you want it to behave properly, like a real daemon, you should place it using the init system (/etc/init.d) and make the appropriate runlevel links in the /etc/rc.X folders. Run a search or have a look at something like this: https://serverfault.com/questions/204695/comprehensive-guide-to-init-d-scripts |
_opensource.1582 | If I release a program specification [1] under the AGPL, and the program itself under the AGPL, the two are obviously compatible: I can develop them at the same time, copy text (for example, method headers) between them freely, derive methods from the spec requirements, and back-derive requirements from the implementation (it happens, OK ;). But the specification would not be compatible with BY-SA works (say, Wikipedia or Stack Exchange). If I release the specification under BY-SA instead, it is still compatible in one direction (from spec to software), because BY-SA is one-way compatible with the GPL [2], and the GPL is compatible with the AGPL. But if I release the specification under BY-SA (including releasing it before the software), am I losing any of the copyleft strength of the AGPL, i.e. enabling someone to create a non-copyleft work where they couldn't before? (Looking at it another way, if I release just an AGPL specification, am I guaranteeing that any implementation must be AGPL? Or is anyone free to develop the ideas in the spec under any license they want?) [1] To clarify, a specification is not end-user documentation (a manual); it is a (sometimes very detailed) outline and plan for what the software must do. The answer obviously rests on the judgment of to what extent the code is a derivative work of the specification. I will obviously accept an answer of "it depends, not tested in law", etc., if that is the most accurate answer possible :D [2] It appears I am dreaming and this is underway, but not yet finalised. Let's assume for the sake of argument that we are a few months in the future and it's true. | Does a CC BY-SA 4.0 specification lose any AGPL3+ benefits? | license compatibility;copyleft;cc by sa;agpl 3.0;software development | null |
_unix.197588 | I was doing this tutorial, but when it comes to the part where I should run these commands: local-server# ssh -NTCf -w 0:0 87.117.217.27 local-server# ssh -NTCf -w 1:1 87.117.217.44 it says: channel 0: open failed: administratively prohibited: open failed How can I fix that? | Channel 0: open failed: administratively prohibited: open failed | ssh;ssh tunneling | After discussing this in a chat and debugging the issue, it turned out that the required directive PermitTunnel yes was not in place and active. After adding the directive to /etc/ssh/sshd_config and reloading sshd with service sshd reload, this was resolved. We added -v to the ssh command to get some debugging information, and from that we found: debug1: forking to background root@ubuntu:~# debug1: Entering interactive session. debug1: Remote: Server has rejected tunnel device forwarding channel 0: open failed: administratively prohibited: open failed debug1: channel 0: free: tun, nchannels 1 The server actively rejected the tunnel request, which pointed us to the right directive. |
_codereview.49576 | I have written a partition function in Python (my_partition_adv). I have written this one after reading a similar function from a website. The version of the partition function from the website is also given below (parititon_inplace).As I am a beginner in Python, I want to know the following things:My version looks certainly more readable than the partition_in_place_with_additional_memory.Are there any drawbacks for my version in terms of complexity that I am missing?Are there any other drawbacks for my_partition_adv over partition_inplace?def partition_inplace(A, start, stop, pivot_index): items_less_than = [] items_greater_than_or_equal = [] read_index = start while read_index <= stop: item = A[read_index] if item < pivot_value: items_less_than.append( item ) else: items_greater_than_or_equal.append( item ) read_index += 1 write_index = start for item in items_less_than: A[write_index] = item write_index += 1 for item in items_greater_than_or_equal: A[write_index] = item write_index += 1 return len(items_less_than)def my_partition_adv(A,p_val): less_than = [] greater_than_or_equal = [] for index, item in enumerate(A): if item < p_val: less_than.append(item) else: greater_than_or_equal.append(item) for index,item in enumerate(less_than): A[index] = less_than[index] for index,item in enumerate(greater_than_or_equal): A[len(less_than) + index] = greater_than_or_equal[index] return len(less_than) | Comparing two partition functions in Python | python;beginner;python 2.7;comparative review | null |
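Both functions in the question implement the same stable two-list partition; a compact sketch of that idea, keeping the write-back step that both versions perform but doing it through a single slice assignment (names here are illustrative, not from the question):

```python
def partition(items, pivot):
    """Stable partition around pivot; returns the number of items < pivot."""
    less = [x for x in items if x < pivot]
    rest = [x for x in items if not (x < pivot)]
    items[:] = less + rest   # overwrite in place, like the write-back loops above
    return len(less)

data = [9, 1, 8, 2, 7, 3]
split = partition(data, 5)
print(split, data)   # prints: 3 [1, 2, 3, 9, 8, 7]
```

This removes the unused index from `enumerate` in my_partition_adv and the manual read_index/write_index counters in partition_inplace, while preserving the behavior of both.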
_webapps.71003 | Is there any way to configure the video speed on Udacity, not just for the current video but for all videos? Each time I change the video speed, it goes back to 1 when I go to another video by clicking on some link (i.e., not waiting for the next video to load); the same happens for the video quality. | Configuring the video speed on Udacity | video | null |
_cstheory.34361 | Manuel Blum is a well-known theoretical computer scientist and a Turing award winner. But more interestingly, he has the highest number of students who have gone on to win a Turing award (Leonard Adleman, Shafi Goldwasser, Silvio Micali) in the whole computer science. The list of his students is amazing and even more so if we include the students of his students.Can anyone comment on Manuel's supervisory style? What makes him so successful in training exceptional researchers? Anything that can help other supervisors be more successful in training exceptional researchers? | Manuel Blum's supervisory style | soft question;research practice | null |
_softwareengineering.303090 | I would like to understand what would be the optimal method of finding minimum tree coverage of tree nodes. Let me explain.I have a self-referencing structure that represents a tree, with a limited depth of X.Nodes in the tree can be logically selected. From the application perspective, it means that user would like to have some aggregate information about the selection.Let's say user picks nodes A, D, I, J, L and M.What I would like to do is to be able to restructure the users selection in order to pick the minimum set of nodes that cover the entire selection.For this example:nodes I and J can be covered by their common parent node F, so I pick F and remove I and Jnode M can be covered by it's parent node H, so I pick H and remove Mnode D is already covered by node A, so I remove DAfter that, nothing can be restructured further - algorithm stops and I should get the following selection.First, I don't know if minimum coverage is the good name. Second, I can't find any better phrase, so Google is not my best friend here.Another thing to note is that the tree itself can be stored:in-memoryin transactional databasein OLAP database (not exactly self-referencing any more)I'm stuck :(EDITI have been thinking a lot about this problem, and there might be a solution that I have to analyse.Following can be introduced:let's call tree itself the template treelet's call selection the selection treeeach node of the template tree has a precalculated redundant attribute - number of children nodesthe selection tree can be dirty or cleanafter a node is added, if nothing is done yet, it's in the dirty stateafter the restructuring is done, it's in the clean stateeach node of the selection tree has redundant attributes - number of children nodes selected and state: selected or ghostselection tree is a variant of in-memory doubly linked tree (child knows its parent, parent knows its children)After that, adding a node in the selection tree (selection-node) 
would work like this. Let's call the new parent of the new node in the selection tree a selection-parent, and the parent of the new node in the template tree a template-parent. Let's call the children of a selection tree node selection-children, and the children of a template tree node template-children. 1. Set the dirty state of the selection tree 2. Remove all selection-children of the selection-node 3. If nonexistent in the selection tree, fetch the template-parent of the selection-node, along with its number of children attribute; add the selection-parent as a ghost node, with selected number of children = 1 4. If existent in the selection tree, access the selection-parent of the selection-node and increment its selected number of children 5. If the selection-parent has selected number of children equal to the number of template-children, promote its state from ghost to selected; treat that selection node as a new node and start at 1 6. Set the clean state of the selection tree I'm curious whether this algorithm already exists as an implementation somewhere. Its behavior wouldn't obviously be worse than O(log(n)), right? Of course, that is if it doesn't lack something in the logic. | Minimum tree coverage | algorithms;trees | null
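The incremental scheme sketched in this question can be cross-checked against a much simpler batch computation. This is a hedged sketch (the dict-of-children representation, the function name, and the root parameter are mine, not from the question); it applies the same two rules: a selected node covers its whole subtree, and a parent replaces its children once every child subtree is fully covered.

```python
def minimal_cover(children, selected, root='root'):
    """Return the minimal set of nodes covering every selected node.

    `children` maps each node of the template tree to a list of its
    children; `selected` is the set of selected nodes.
    """
    def resolve(node):
        # Returns (fully_covered, cover_set) for the subtree at `node`.
        if node in selected:
            # A selected node covers everything below it, so any
            # selected descendants can be dropped.
            return True, {node}
        kids = children.get(node, [])
        if not kids:
            # Unselected leaf: nothing here is covered.
            return False, set()
        results = [resolve(k) for k in kids]
        if all(full for full, _ in results):
            # Every child subtree is fully covered: the parent
            # replaces its children (the ghost-to-selected promotion).
            return True, {node}
        cover = set()
        for _, sub in results:
            cover |= sub
        return False, cover

    return resolve(root)[1]
```

For the question's example, selecting I and J collapses to their common parent F, and a selected D disappears under a selected A.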
_unix.247902 | I am trying to calculate the percentage of successful queries in an Apache log. I have two commands: cat access_log|cut -d' ' -f10|grep 2..|wc -l and cat access_log|cut -d' ' -f10|wc -l They return the number of successful queries and the total number of queries. I want to calculate the percentage of successful requests using bash, and if possible it should be a one-line script. It is supposed to output just the percentage number, like 50 or 12, without any additional info. I tried to use bc with it but failed because of my lack of knowledge. Can somebody help me? | How to calculate % of successful queries | bash;bc | Try this: echo $(( 100 * $( cut -d' ' -f10 access_log|grep 2..|wc -l) / $(cut -d' ' -f10 access_log|wc -l) )) Bash can only handle integers.
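Since $(( ... )) truncates to an integer, a hedged awk variant (same field number as the question's cut -f10; the sample log lines are made up) counts and divides in one pass and can keep decimals:

```shell
# Sample two-line log: one 2xx and one 404; the status code sits in the
# 10th space-separated field, mirroring the question's cut -d' ' -f10.
printf 'a b c d e f g h i 200\na b c d e f g h i 404\n' > access_log

# One pass with awk: count total lines and 2xx lines, then divide,
# keeping decimals (drop the %.1f if a whole number is wanted).
awk '{ total++; if ($10 ~ /^2/) ok++ }
     END { if (total) printf "%.1f\n", 100 * ok / total }' access_log
```

For the sample above this prints 50.0.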
_unix.351571 | I have two programs (mplayer and a custom java application) which both present GUI using framebuffers. They run in separate processes. I want to be able to switch from one program to the other without ending/killing the process of the other(the reason is that launching the java program takes a lot of time).I want to simulate sending one of the two programs to background and hiding its GUI and showing the other program's gui.I am running this on a Raspberry Pi, Debian distribution. | Switching between two framebuffer programs | linux;debian;shell;terminal;framebuffer | null |
_unix.127625 | How can I view all the addresses that sshguard has blocked to iptables? | View blocked address from sshguard | ssh;iptables | SSH Guard have to has it's own chain in iptables called sshguard and you can view rules in this chain by:iptables -nvL sshguardMore info on setup sshguard for iptables here |
_unix.140408 | I have tried creating a bootable Linux USB that I want to use on a MacBook Pro 2013. I used 'Mac Linux USB Loader' to do that. Linux distros I tried it with: Slack puppy and DSL (Damn Small Linux). Flash drive/USB: 512MB, 1024 bytes, FAT32, MBR, formatted on Windows; Mac doesn't do FAT32 for some reason. When I boot through the Mac Alt/Option menu, it shows an EFI drive, boots a bit, and then I get this error: > Kernel path: /live/vmlinuz | ramdisc path: /live/initrd.lz boot parameters: Loading Linux kernel... done unaligned pointer 0x2. Aborted. I get this same error when booting either of the above mentioned OSes. I have tried literally all other approaches to creating a bootable Linux USB and none of them worked; the Mac doesn't even show an EFI drive on 'Alt' booting in those scenarios. I have tried many other tools listed here for Mac, but they don't detect my USB itself. | Problem booting using Live USB on Mac | linux;boot;macintosh | null
_softwareengineering.215727 | When I tried searching this I would just get things on equality. When I was reading through some documentation for navigation in Android I had come across something I had never seen before. I came across this:mTitle = mDrawerTitle = getTitle();It almost looks like something you can do in JavaScript where you can take the first not-null variable and assign it to a variable.In JavaScript I would do this:mTitle = mDrawerTitle || getTitle();And it would return the first not null, in Java, is this double equals usage the equivalent in Java? What is this type of expression called? | Double equals (Not equality) during assigning Java | java | null |
_softwareengineering.162402 | I'm trying to learn implementing TDD with mocking/fake objects. One of the questions I have is how to initialize a dependency in an application which implements TDD? An example from this article Beginning Mocking With Moq 3 shows:public class OrderWriter{ private readonly IFileWriter fileWriter; public OrderWriter(IFileWriter fileWriter) { this.fileWriter = fileWriter; } public void WriteOrder(Order order) { fileWriter.WriteLine(String.Format({0},{1}, order.OrderId, order.OrderTotal)); }}In this example, the constructor takes an IFileWriter parameter, I suppose because you want to supply the real file writer in case of the actual application, and the fake one for unit test. My question is, in the real application, who will supply this parameter? I suppose it will be the caller of this application. What if it has dependency as well in the constructor? Will the caller code be responsible for that too?Maybe the better way is to use factory. How would this factory work? And how will the factory be distributed? Will it be in the constructor parameter like in the above manner? | Who should initialize dependencies in a TDD application? | .net;unit testing;tdd;dependency injection;mocking | What you're looking for is an IoC container to autowire all your objects at application startup. Take a look at Ninject, it has a very simple example on the front page. (It's also a good product and ... well, ninjas!)As a general rule, you should attempt to resolve all of your top-level objects (eg. Page in ASP.NET, Controller in MVC for ASP.NET, Form in Winforms) directly from the IoC container and let it wire up ALL your dependencies through constructor injection. 
There will be times when you have to force it to resolve a lower-level item -- this is known as using it as a Service Locator -- but this should generally be avoided as they are tricky (but not impossible) to test, and create an API that can be confusing for the consumer, if that isn't you.ASP.NET for MVC has, since v3, been specifically designed to abstract away the IoC container from the rest of your code while allowing you to auto-inject into any top-level class (Controller, View, Filter, etc) through the DependencyResolver class. Other .NET frameworks take a bit more effort, but it's possible if you Google around.There is a book on the subject called Dependency Injection in .NET. I haven't read it personally, but I've heard good things. |
_bioinformatics.664 | From an RNA-seq experiment I have about 17000 gene ids for 2 sample conditions arranged according to their log2 fold changes when compared to a control. I need to annotate these, but I've never done annotation before and am wondering how to do this in R? There seems to be multiple packages available, and I'm wondering if any of them stand out as being the best?I'm primarily interested in human samples and annotated pathways. | How to perform functional analysis on a gene list in R? | r;annotation | null |
_opensource.2312 | When you're copying open source from another project (not simply linking), how should you provide attribution in your source repo? I've copied some things into my code base that are probably not actually copyrightable (e.g., word lists), but I'd like to provide the appropriate attribution for the MIT/BSD licensed projects they were copied from. For example LICENSE or LICENSE.txt are common for your project's license, but what about attribution? Also, what's the minimal amount of text required? | How should you put attribution into your project? | attribution;mit;bsd | [obvious disclaimer - I am not a lawyer]A common practice I've seen is to add an additional file, e.g. NOTICE.txt with references to other projects being used.For example, take a look at Apache Commons Lang (yes, I know it doesn't use the MIT license, it's just a really simple example for this practice, which holds for various licenses). It has a NOTICE.txt file which states that it uses code from the Spring Foundation, the licensing terms it was used by, and a reference to the exact location in the code. If you look at that location in the code, you'll find the complete details. |
_webmaster.13339 | Having a website that generates and receives JSON requests via AJAX, I failed to find a tool that shows me the communication live, including the content of the JSON calls. I thought that the Google Chrome developer tools or the IE 9 developer tools would have such a feature, but again, I failed. Searching Google, I failed too. So my question is: Is there a client-side tool to monitor the content of JSON requests that a website sends to the server? | Monitoring JSON requests sent/received from the browser? | javascript;ajax | You can use the Firebug plugin/addon in Chrome and Firefox. Open Firefox. Search for and download the Firebug add-on, then install it. Open the site whose data transfer you want to monitor and enable Firebug on that particular site. Check the Net panel for the details of the requests made, and more...
_cstheory.174 | Wikipedia only lists two problems under unsolved problems in computer science:P = NP?The existence of one-way functionsWhat are other major problems that should be added to this list?Rules:Only one problem per answerProvide a brief description and any relevant links | Major unsolved problems in theoretical computer science? | big list;open problem | null |
_softwareengineering.168655 | Sometimes I would like to declare a property like this: public string Name { get; readonly set; } I am wondering if anyone sees a reason why such a syntax shouldn't exist. I believe that because it is a subset of get; private set;, it could only make code more robust. My feeling is that such setters would be extremely DI friendly, but of course I'm more interested in hearing your opinions than my own, so what do you think? I am aware of 'public readonly' fields, but those are not interface friendly so I don't even consider them. That said, I don't mind if you bring them up in the discussion. Edit: I realize from reading the comments that perhaps my idea is a little confusing. The ultimate purpose of this new syntax would be to have an automatic property syntax that specifies that the backing private field should be readonly. Basically, declaring a property using my hypothetical syntax public string Name { get; readonly set; } would be interpreted by C# as: private readonly string name; public string Name{ get { return this.name; }} And the reason I say this would be DI friendly is that when we rely heavily on constructor injection, I believe it is good practice to declare our constructor-injected fields as readonly. | DI and hypothetical readonly setters in C# | c#;dependency injection | The C# team has considered that this would be a very useful feature, and that's why in C# 6, they implemented it (just a little different from your proposal). Getter-only auto-properties: This new kind of property is getter-only, but can be set either inline or in the constructor.
This means they are backed internally by a readonly field. Instead of declaring a setter as readonly set, you simply do not declare a setter at all. Inline assignment: public class Customer{ public string First { get; } = "Jane"; public string Last { get; } = "Doe";} Constructor assignment: public class Customer{ public string Name { get; } public Customer(string first, string last) { Name = first + " " + last; }} You can read about this and other new features of C# 6 in the Roslyn wiki on GitHub.
_unix.22538 | Around version 7.0 (? just guessing) SuSE became paid only distribution (i.e. you had to pay to get the copy), after several releases SuSE came back to free + paid model.Now -- the most important to me, what was the first paid-only version, and also interesting, what was the last paid-only version?Editors, please do not fix the spelling of SuSE, back then it was SuSE not SUSE (not sure about end of paid period though). | What was the paid only period in history of SuSE? | opensuse;history;suse | null |
_unix.242163 | I tried making some crontabs. I created a file called brewupdater in my user folder, containing 0 */5 * * * ~/bin/brewupdate2. I then tried to run cron ~/brewupdater, but it told me: cron: can't open or create /var/run/cron.pid: Permission denied So I tried sudo cron ~/brewupdater. But the script doesn't run (it should run every 5 hours), because the files that should appear don't. | cron doesn't do anything | osx;cron | null
_unix.377857 | I am following a tutorial on installing git on a shared host and need some clarification if possible. I have access to GCC: jpols@MrComputer ~$ ssh nookdig1@***.***.**.*'gcc --version gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18) Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.' and I can edit the bashrc file: jpols@MrComputer ~$ vi .bashrc However, I don't really understand how to check whether the path has been added correctly. The tutorial says: Update your $PATH. None of this will work if you don't update the $PATH environment variable. In most cases, this is set in .bashrc. Using .bashrc instead of .bash_profile updates $PATH for interactive and non-interactive sessions, which is necessary for remote Git commands. Edit .bashrc and add the following line: export PATH=$HOME/bin:$PATH I added the above to the file and saved, but it goes on to say: Be sure ~/bin is at the beginning since $PATH is searched from left to right; But ~/bin is different from the given path.
Could someone please explain what this means?After adding the Path as specified the output is:jpols@MrComputer ~$ source ~/.bashrcjpols@MrComputer ~$ echo $PATH/home/jpols/bin:/usr/local/bin:/usr/bin:/cygdrive/c/Python27:/cygdrive/c/Python27/Scripts:/cygdrive/c/WINDOWS/system32:/cygdrive/c/WINDOWS:/cygdrive/c/WINDOWS/System32/Wbem:/cygdrive/c/WINDOWS/System32/WindowsPowerShell/v1.0:/cygdrive/c/Program Files/nodejs:/cygdrive/c/Program Files/Git/cmd:GYP_MSVS_VERSION=2015:/cygdrive/c/WINDOWS/system32/config/systemprofile/.dnx/bin:/cygdrive/c/Program Files/Microsoft DNX/Dnvm:/cygdrive/c/Program Files/Microsoft SQL Server/130/Tools/Binn:/cygdrive/c/HashiCorp/Vagrant/bin:/cygdrive/c/MAMP/bin/php/php7.0.13:/cygdrive/c/ProgramData/ComposerSetup/bin:/cygdrive/c/Program Files (x86)/Yarn/bin:/cygdrive/c/Program Files/PuTTY:/cygdrive/c/Program Files (x86)/Brackets/command:/cygdrive/c/Program Files (x86)/Calibre2:/cygdrive/c/Ruby22-x64/bin:/cygdrive/c/Users/jpols/AppData/Local/Microsoft/WindowsApps:/cygdrive/c/Users/jpols/AppData/Roaming/npm:/cygdrive/c/Users/jpols/AppData/Roaming/Composer/vendor/bin:/cygdrive/c/Users/jpols/AppData/Local/Yarn/bin:/cygdrive/c/Program Files (x86)/NmapJust comparing the first part:Tutorial: /home/joe/bin:/usr/local/bin:/bin:/usr/binMine: /home/jpols/bin:/usr/local/bin:/usr/bin:/They are different so before I go on I am hoping someone can explain what I am trying to achieve and how to do it correctly. Thanks. | Clarification on updating path in bashrc | linux;path;git;bashrc | The '~' character is used to indicate the current user's home directory on UNIX systems. Because the username on your computer is different from the one on the machine used in the tutorial you referred to, different directory paths have been appended to the PATH variable. 
By using '~' one does not have to manually enter one's username for referring to the user home directory, which allowed the creator of the tutorial to create code which makes the PATH variable look into both of your home directories, even though both of your systems have different paths to your home directories. (e.g. /home/joe/bin and /home/jpols/bin are different directories, but ~/bin can be used to refer to both, as the '~' will be expanded to the correct path by the system) |
_computerscience.5413 | I am trying to do simple PCF with Unity but I am facing some issues and I don't know where they come from. If anybody has an idea...Here are two examples// C#GetTemporaryRT(_shadowMapProperty, _shadowSettings.shadowAtlasWidth, _shadowSettings.shadowAtlasHeight, _depthBufferBits, FilterMode.Bilinear, RenderTextureFormat.Depth, RenderTextureReadWrite.Linear);// CGsampler2D_float _ShadowMap;float4 _ShadowMap_TexelSize;half ShadowAttenuation(float3 shadowCoord){ float depth = tex2D(_ShadowMap, shadowCoord).r;#if defined(UNITY_REVERSED_Z) return step(depth - _ShadowData.y, shadowCoord.z);#else return step(shadowCoord.z, depth + _ShadowData.y);#endif}/////// EXAMPLE 1 /////// float shadow = ShadowAttenuation(half3(shadowCoord.xy, shadowCoord.z));return shadow;/////// EXAMPLE 2 ///////float3 UnityCombineShadowcoordComponents(float2 baseUV, float2 deltaUV, float depth){ float3 uv = float3(baseUV + deltaUV, depth); uv.z += dot(deltaUV, receiverPlaneDepthBias.xy); return uv;} half shadow = 1;const float2 offset = float2(0.5, 0.5);float2 uv = (shadowCoord.xy * _ShadowMap_TexelSize.zw) + offset;float2 base_uv = (floor(uv) - offset) * _ShadowMap_TexelSize.xy;float2 st = frac(uv);float2 uw = float2(3 - 2 * st.x, 1 + 2 * st.x);float2 u = float2((2 - st.x) / uw.x - 1, (st.x) / uw.y + 1);u *= _ShadowMap_TexelSize.x;float2 vw = float2(3 - 2 * st.y, 1 + 2 * st.y);float2 v = float2((2 - st.y) / vw.x - 1, (st.y) / vw.y + 1);v *= _ShadowMap_TexelSize.y;half sum = 0;sum += uw[0] * vw[0] * ShadowAttenuation(UnityCombineShadowcoordComponents(base_uv, float2(u[0], v[0]), shadowCoord.z));sum += uw[1] * vw[0] * ShadowAttenuation(UnityCombineShadowcoordComponents(base_uv, float2(u[1], v[0]), shadowCoord.z));sum += uw[0] * vw[1] * ShadowAttenuation(UnityCombineShadowcoordComponents(base_uv, float2(u[0], v[1]), shadowCoord.z));sum += uw[1] * vw[1] * ShadowAttenuation(UnityCombineShadowcoordComponents(base_uv, float2(u[1], v[1]), shadowCoord.z));shadow = sum / 16.0f;return 
shadow;Here are the results. | DX9 Shadow map PCF issue | shader;shadow;shadow mapping;unity;directx | It seems that sampler2D_float doesn't allow to interpolate shadow lookup linearly. So I had to do it by hand. Here's an example of interpolated shadowing.float texture2DCompare(sampler2D depths, vec2 uv, float compare){ float depth = texture2D(depths, uv).r; return step(compare, depth);}float texture2DShadowLerp(sampler2D depths, vec2 size, vec2 uv, float compare){ vec2 texelSize = vec2(1.0)/size; vec2 f = fract(uv*size+0.5); vec2 centroidUV = floor(uv*size+0.5)/size; float lb = texture2DCompare(depths, centroidUV+texelSize*vec2(0.0, 0.0), compare); float lt = texture2DCompare(depths, centroidUV+texelSize*vec2(0.0, 1.0), compare); float rb = texture2DCompare(depths, centroidUV+texelSize*vec2(1.0, 0.0), compare); float rt = texture2DCompare(depths, centroidUV+texelSize*vec2(1.0, 1.0), compare); float a = mix(lb, lt, f.y); float b = mix(rb, rt, f.y); float c = mix(a, b, f.x); return c;} |
_webmaster.92755 | My AWS Free Tier is about to expire. How do I pay for the entire Reserved Instance with one upfront payment? I noticed from the Amazon EC2 Pricing page that reserved instances with the All Upfront option are cheaper. Do reserved instances (like t2.micro) include EBS storage (Amazon EBS Pricing)? If yes, what is the size? | How do I pay for the entire Reserved Instance with one upfront payment after AWS's Free Tier expires? | amazon aws;amazon ec2;pricing | These are questions you should be asking the AWS billing department. Contact info is found here. EC2 Reserved Instances require buying a time block on a certain instance, rather than going hour to hour with the on-demand instances, with a choice of all upfront, partial upfront/partial monthly, or monthly billing. As you move towards monthly, the cost increases, but it's still cheaper than the on-demand instances. You pay in advance for what you expect to use for a period of time. Otherwise the instances themselves are the same. EC2 pricing (reserved and regular) is here. Elastic Block Storage (EBS) is a separate product and doesn't appear to have upfront pricing. Pricing for that is over here.
_unix.372173 | I have around 50 gigabytes that I would like to move. I want to do it over TCP/IP (hence network in the title) optimized for a local area network. My problem is that the connection occasionally gets interrupted and I never seem to get all of the data reliably to its destination. I'd like this thing to not give up so easily keep retrying automatically (assuming that both machines are powered up).My approach would be to use rsync.SOURCE=/path/to/music/ # slash excludes music dirDESTINATION=/path/to/destination rsync \ --archive \ # archive mode; equals -rlptgoD (no -H,-A,-X) --compress \ # compress file data during the transfer --progress \ # show progress during transfer --partial \ # delete any partially transferred files after interrupt, on by default but I added it for kicks --human-readable \ #output numbers in a human-readable format $SOURCE \ $DESTINATION \Are there other parameters that I should consider? | How can I move (rsync) a huge quantity of data reliably the can handle network interruptions? | rsync;file transfer | Rsync ParametersIt would seem that my rsync parameters are fine. I had to add a parameter to deal with files that exist after a connection failure. The choices were --ignore-existing or --update to avoid rewriting things already written. I am still not sure which one is better (perhaps someone knows) but in this case I went with with --update after reading this https://askubuntu.com/questions/399904/rsync-has-been-interrupted-copy-from-beginningCompare:--update skip files that are newer on the receiver--ignore-existing skip updating files that already exist on receiverConnection InterruptionsThe connection problem conundrum (flaky wifi etc.) was solved by continually calling rsync when an exit code is not zero, thereby forcing my process to continue until the transfer is a success. 
(unless I cut the power, lightning strikes my power lines, or I kill it using a signal). To handle network disconnects, I used a while loop: while [ 1 ]; do # STUFF; done. while [ 1 ] has a caveat: using a signal like Ctrl-C for an interrupt (SIGINT) will not work unless you add an explicit check for an exit code above 128 that calls break. Because every command (including the [ test itself) overwrites $?, the script first saves rsync's exit code in a variable (rc=$?) and then checks that: if [ $rc -gt 128 ] ; then break. Then you can check for rsync's normal exit code; zero means all files have been moved: elif [ $rc -eq 0 ] ; then exit. Otherwise, the transfer is not complete: else sleep 5. Script Example sync-music.sh The rsync script assumes passwordless ssh key authentication. #!/bin/bash SOURCE=/path/to/Music/ [email protected]:/media/Music while [ 1 ] do rsync -e 'ssh -p22' \ --archive \ --compress \ --progress \ --partial \ --update \ --human-readable \ $SOURCE \ $DESTINATION rc=$? if [ $rc -gt 128 ] ; then echo "SIGINT detected. Breaking while loop and exiting." break elif [ $rc -eq 0 ] ; then echo "rsync completed normally" exit else echo "rsync failure. Reestablishing connection after 5 seconds" sleep 5 fi done
_unix.33067 | Can I use TLS/SSL over Unix pipe with Unix command line? I want the equivalent of$ mkfifo /tmp/spipe$ echo a|openssl s_server -acceptFifo /tmp/spipe &[1] 25563$ openssl s_client -connectFifo /tmp/spipea[1] Done echo a|openssl s_server -acceptFifo /tmp/spipe(Yes, it's not hard to write a short program to do that, but I was hoping it is possible with existing tools)Let me clarify, I do not want a tcp connection any time in the process. I want to use the TLS/SSL protocol over a UNIX pipe. The client will open a unix pipe, and will connect to the server listening on another pipe. I do NOT want to move data from TLS tcp connection to a pipe. | TLS over unix pipe | pipe;ssl;openssl;tls | null |
_webapps.86897 | I have numbers in range D7:D and there are times when some cells are empty. I want the average of the last 7 numbers but to skip blanks. So if in the last 7 there's only 3 that are full, I want it to go back further and find 7 total and average them out. | Average of last 7 non-empty non-blank cells in Google Sheets | google spreadsheets | Here is one approach: =average(indirect(D & iferror(large(filter(row(D7:D), len(D7:D)), 7), row(D7)) & :D))Explanation:filter(row(D7:D), len(D7:D)) returns an array that consists of the row numbers of the nonempty entries in the given range.large(..., 7) picks the 7th largest number from this array: this is the row number where you want to start averaging.iferror(..., row(D7)) is a safeguard in case your range has fewer than 7 non-blank entries: in this case, the averaging will begin with D7. I could have just put 7 instead of row(D7), but row(D7) makes the formula more portable in case you decide to copy it elsewhere. indirect(D & ... & :D) forms the range for averaging, e.g., D9:D if the output of preceding computation was 9.Finally, average does the average. You could put other aggregate functions here, too. |
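As a hedged cross-check of the same last-7-non-blank logic outside Sheets (the function and its behaviour are my own sketch, with a Python list standing in for column D and None or '' for blank cells):

```python
def avg_last_nonblank(values, n=7):
    """Average the last n non-blank entries of a column.

    If fewer than n non-blank entries exist, average all of them,
    mirroring the iferror(..., row(D7)) fallback in the formula above.
    """
    nonblank = [v for v in values if v is not None and v != '']
    tail = nonblank[-n:]
    if not tail:
        raise ValueError("no non-blank values to average")
    return sum(tail) / len(tail)
```

For example, with blanks scattered through the list, only non-blank entries count toward the window of 7.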
_scicomp.266 | I am running into a problem with COMSOL 4.2.0.187 on Ubuntu (both 10.04 LTS and 11.04). When using the option -np x with x > 1, COMSOL crashes systematically after a short, yet random amount of time with the following error:> [thread 140538491361024 also had an error] terminate called without an active exception > AbortedWhen setting only one working process (-np 1), the error disappear. The COMSOL support staff told me that there must be a problem with the threading libraries. I suspect that they just have no idea what to tell me.Do you guys have any idea as how to solve this problem? Have you ever encountered such an error with COMSOL or another piece of software?Thanks a lot, | COMSOL on Ubuntu (10.04 LTS and 11.04) | comsol;parallel computing | null |
_webmaster.29555 | I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and I asked the support line to what name I should point my 'www' CNAME to. They responded that they don't do that and I need to set my domain's NS records for the hosting to work.Why would you ever want to do it that way? Our service to you includes DNS and our servers are probably much better than the one your registrar provides.This was a bit of surprise as all of the other webhosts I've worked with happily support this. I've set up (eg) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost and the webhost does name-based hosting for 'gallery.myfriend.example'.(Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.)Is my impression correct, that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide a service if they control the DNS service too?For example Alice is a customer, owning Alice.example.BobHost, CarolHost and DaveHost are webhosts. Alice has a domain registration, DNS hosting and website hosting with BobHost.BobHost have the following DNS setup... * Alice.example A 192.0.2.1 * Gallery.Alice.example CNAME SomeServer.CarolHost.exampleand her main website content and email is served from 192.0.2.1.Alice also has website hosting with CarolHost, but only to serve the 'Gallery' sub-domain. 
Her gallery content is served from SomeServer.CarolHost.example, but only when the 'Host:' header of a request is 'Gallery.Alice.example'. Yes, this is sub-optimal, but it's the only practical way to have a sub-domain hosted elsewhere from the main site. Frankly, it works. CarolHost can change the IP of 'SomeServer' whenever they like without having to inform anyone, as long as they update their own DNS records. No-one complains to CarolHost that the gallery is off-line when the fault is with BobHost's DNS service failing to serve that CNAME record. Continuing the story, Alice now wishes to put an additional sub-domain, 'Blog.Alice.example', to be hosted by DaveHost. She calls DaveHost support and asks how to host 'Blog' in the same way that 'Gallery' is hosted by CarolHost. DaveHost respond that they don't support this. If Alice wishes to use DaveHost's service, Alice will need to move DNS hosting to DaveHost. | Are webhosts that require NS instead of a CNAME common? | web hosting;nameserver;virtualhost | As you correctly note, the web hosting provider wants to have control over the actual IP address your host name resolves to, so that they can e.g. move your site from one server to another or implement DNS-based load balancing. There are basically two ways in which this can be done. For example, assuming that the hostname you want for your site is host.yourdomain.com, you can either: let your webhost also be the DNS provider for yourdomain.com by telling your registrar to point the NS records for yourdomain.com to the web hosting company's nameservers, or let your DNS provider return a CNAME record for host.yourdomain.com pointing to e.g.
host-yourdomain-com.webhost.com, which your webhost can then resolve to whichever IP address they want.The second way, using a CNAME record, is slightly less efficient, since it includes an additional indirection step. However, as you note, it's the only practical way to have your web hosting and DNS service be provided by different companies.As such, I don't fault your webhost for recommending the first method. I do, however, think that they're providing sub-par service if they're insisting on it and refusing to deal with a CNAME record if you prefer to use one. |
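To make the two options concrete, the records involved might look like this in BobHost's zone for Alice.example (a hypothetical zone-file sketch using the names and the 192.0.2.1 address from the question; the DaveHost target name is invented):

```
; Zone alice.example, served by BobHost's nameservers (option 2: one CNAME per externally hosted subdomain)
alice.example.          IN  A      192.0.2.1
gallery.alice.example.  IN  CNAME  someserver.carolhost.example.
blog.alice.example.     IN  CNAME  someserver.davehost.example.   ; what Alice asked DaveHost for
```

With option 1 there would be no such CNAMEs here; instead the NS records for alice.example would point at the webhost's own nameservers, and the webhost would publish and update the A records itself.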
_codereview.100967 | I wanted to get some eyes on some code that I wrote, and see if anyone can help improve what I did. Just looking for some constructive feedback to help make my code more performant and more elegant. From a high level, my code needs to accomplish the following:

Create a function that takes an array of objects that each represent an event. Each object in the array has a start and end time represented in minutes. The start and end times are represented as minutes after 9am. For example {start:30, end: 60} represents an event that starts at 9:30am and ends at 10am. The function takes the objects, and plots them as events on a vertical timeline. Events that have crossover will split the container in half, while events that don't crossover will take the whole container width.

Here is the JS that I wrote to accomplish the above task:

//sample data - array of objects that represent events
var myArray = [
  {start: 30, end: 130},
  {start: 140, end: 147},
  {start: 145, end: 155},
  {start: 150, end: 175},
  {start: 200, end: 250},
];

var myFunc = function(a) {
  //sort the input array from earliest start time to latest. Items with identical start times will compare end times - the longer duration item resolves with lower index. Assumes no exact duplicate items.
  var sortedDates = a.sort(function(a,b) {
    var caseOne = (a.end > b.end) ? -1 : 1,
        caseTwo = a.start - b.start;
    return (a.start === b.start) ? caseOne : caseTwo;
  });
  for (var i = 0; i < sortedDates.length; i++) {
    var currentItem = sortedDates[i],
        itemDuration = currentItem.end - currentItem.start,
        itemTop = currentItem.start,
        prevItem = sortedDates[i-1],
        nextItem = sortedDates[i+1],
        newDiv = document.createElement('div');
    //set a default direction to each item
    currentItem.direction = 'left';
    //determine which items overlap and set a property to indicate
    if (nextItem !== undefined && (currentItem.end - nextItem.start > 0)) {
      currentItem.overlap = true;
      nextItem.overlap = true;
    }
    //ensure items flow in UI by staggering overlapping items
    if (prevItem !== undefined && (prevItem.direction === 'left')) {
      currentItem.direction = 'right';
    }
    //set class names on new DOM element based on overlap
    if (currentItem.overlap === true) {
      if (currentItem.hasOwnProperty('direction')) {
        newDiv.setAttribute('class', 'split ' + currentItem.direction);
      } else {
        newDiv.setAttribute('class', 'split');
      }
    }
    //set the size and position based on computed duration and start time
    newDiv.setAttribute('style', 'height:' + itemDuration + 'px; top:' + itemTop + 'px');
    //insert new element into DOM
    document.getElementById('stuff').appendChild(newDiv);
  }
};

Here is a rough working example. | Generating a day event calendar based on an array of objects | javascript;jquery;html | null
_codereview.171832 | I have the following code on my server that takes a string and inserts newlines in such a way that the string is separated into lines, all of which are shorter than maxLineLength in characters. This ensures that the text, when printed, will fit within a certain width.

const formatTextWrap = (text, maxLineLength) => {
  var words = text.replace(/[\r\n]+/g, ' ').split(' ')
  var lineLength = 0
  var output = ''
  for (var word of words) {
    if (lineLength + word.length >= maxLineLength) {
      output += `\n${word} `
      lineLength = word.length + 1
    } else {
      output += `${word} `
      lineLength += word.length + 1
    }
  }
  return output
}

What optimizations could I make? This code works, but are there any pitfalls to using this? | Text-wrapping function in JavaScript | javascript;strings;node.js;formatting | I would modify the logic regarding spacing between words in lines a bit. With your solution you might end up with unnecessary spaces at line ends. Additionally, using Array.reduce instead of a for loop is a more idiomatic JS way to build the result.

const formatTextWrap = (text, maxLineLength) => {
  const words = text.replace(/[\r\n]+/g, ' ').split(' ');
  let lineLength = 0;

  // use functional reduce, instead of for loop
  return words.reduce((result, word) => {
    if (lineLength + word.length >= maxLineLength) {
      lineLength = word.length;
      return result + `\n${word}`; // don't add spaces upfront
    } else {
      lineLength += word.length + (result ? 1 : 0);
      return result ? result + ` ${word}` : `${word}`; // add space only when needed
    }
  }, '');
}

let testingText = `Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam blandit mauris id venenatis tincidunt. Vestibulum at gravida sapien. Mauris tellus augue, aliquet sed laoreet blandit, pulvinar sed felis. Phasellus nec est vitae enim blandit facilisis. Vestibulum fermentum ligula sit amet volutpat fermentum. Sed in faucibus orci. Pellentesque a dui ex. Curabitur sollicitudin, nulla id dignissim lacinia, odio mauris blandit nisi, eget auctor arcu odio nec est.`;

console.log(formatTextWrap(testingText, 20));
_webmaster.12942 | I'm assuming that when Google Analytics tracks a visitor's language it uses the Accept-language header from the request (most browser/OS combinations seem to populate it automatically).My browser sends:Accept-Language:en-US,en;q=0.8,en-GB;q=0.6,fr-CA;q=0.4,fr;q=0.2And yet I can't figure out how to get stats on what percentage of my visitors can speak a certain language even if it isn't their primary one (in my case I normally use English but I can use French). Does any analytics program do this, or am I going to have to capture it in my logs and track it manually? | Does Google Analytics track users with multiple languages? | google analytics;multilingual | Google Analytics claims to track the preferred language [not languages] that visitors have configured on their computers.This implies that they strip info after the first semicolon in the Accept-Language header. It's hard to tell from their documentation whether or not this is the case for sure, but you could test it by creating a secret page, adding analytics code, visiting it ten times yourself, and seeing which languages were reported under Visitors>Languages.Competing services appear to take the same approach, so your options appear to be limited to using Google Analytics custom variables or Woopra Custom Visitor Data or a home-baked solution to manipulate and store the full Accept-Language header. |
_unix.196630 | I ran rsync (cygwin) and got this error. I think they changed the Red Hat Linux OS from version 5 to version 6, last night. Would that be the cause of this error message? What do I need to do to fix this? I remember, the sysadm ran a command called ssh-keygen I think on my computer after he set up cygwin. Do I re-run that and copy the file to the RH6 server?@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!Someone could be eavesdropping on you right now (man-in-the-middle attack)!It is also possible that a host key has just been changed.The fingerprint for the RSA key sent by the remote host isxx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx.Please contact your system administrator.Add correct host key in /home/xxxxx/.ssh/known_hosts to get rid of this message.Offending RSA key in /home/xxxxx/.ssh/known_hosts:2RSA host key for xxxxx has changed and you have requested strict checking.Host key verification failed.rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]rsync error: unexplained error (code 255) at /home/lapo/package/rsync-3.0.9-1/src/rsync-3.0.9/io.c(605) [Receiver=3.0.9] | SSH error: WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! | ssh;rsync;cygwin;ssh keygen | Assuming you believe the host really did change its host key you can delete the old entry. Since this one tells you the old entry is on line 2 you can dosed -i -e '2d' ~/.ssh/known_hoststo remove the old entry from you known hosts file |
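As an alternative to hand-editing the file, OpenSSH's ssh-keygen has a -R option that removes every entry for a given host name (and saves a backup as known_hosts.old). A minimal demonstration on a throwaway file — the host names here are invented, and a real key is generated only so the demo file parses cleanly:

```shell
tmp=$(mktemp -d)
# Generate a throwaway host key so the demo known_hosts lines are valid
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key"
pub=$(cut -d' ' -f1-2 "$tmp/key.pub")
printf 'stale.example %s\nkeep.example %s\n' "$pub" "$pub" > "$tmp/known_hosts"
# Remove all keys belonging to stale.example from that file
ssh-keygen -R stale.example -f "$tmp/known_hosts"
cat "$tmp/known_hosts"
```

For the actual warning above, running ssh-keygen -R with the server's name (and no -f) edits ~/.ssh/known_hosts directly, which avoids counting lines for sed by hand.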
_cs.55153 | I am currently finishing up my junior year as a Biochemistry major at a 4-year university. In a year, I will graduate with a B.S. in Biochemistry and way more credit hours than anyone should ever have due to AP classes in high school. To make a long story short, I want to pursue computer programming once I graduate next year. I need to know what to do now, where I should go next. My university does not offer any CS classes, but I am willing to work extracurricularly if it means gaining relevant experience. I'm not completely at square 1, I have some experience with programming, but not nearly enough to compete with a Bachelor's.1) Do I need a Bachelor's in CS to pursue programming? I have developed and published a handful of websites and iPhone/Android applications over the years, but I never took it on as a full time job. I am conversant in most relevant computer languages, but never as a result of any official classes, just through personal study.2) What resources are available to me regarding open positions in CS? I have been stuck in my birth state my entire life, and thus have little exposure to the job market outside of a 200-mile radius. I am engaged and plan to move away with my fianc when we both graduate next year, but I would like to move somewhere that will be conducive to my programming ambitions. Are there any notable cities that are iconic in the CS market?3) What should I be focusing on now? I would like to finish out my degree program so the past 3 years won't be completely for naught, but what should I be doing in the meantime? Should I pursue some form of internship, should I attempt to strike further out in freelance work, or should I take some sort of online classes?4) What do employers look for? If I'm looking to make the best first impression, what sort of things should I become conversant in? 
Is there a core set of languages I should go ahead and start learning?Honestly I'm just extremely lost and need to know what steps I should take from here. Also, if I can avoid paying for another 4 years of college, that would be great. | I want to pursue a career in computer programming. Where do I start? | programming languages | You are already making iOS Apps, so you most likely already know at least 1 programming language. You don't need a degree in computer science to be a programmer. You (usually) need a degree in computer science to be a computer scientist.You should continue programming, as it's the only way you'll get better. If you want to put it on your resume you need to be extremely fluent in at least 1 language and show that you're fluent (you're off to a great start with those iOS apps).Employers don't look for a CS degree, although it is nice. Most jobs require a bachelors degree, but not in computer science per se.Putting personal projects on your resume along with an explanation of your ability to write code should be good enough. And trust me, they will be testing your abilities as a programmer from the second you get a callback.During the first interview, you will most likely be asked to write code on paper with a developer at the company. If you pass that, they might ask you to complete a lengthy exercise to demonstrate your proficiency.So in my opinion: Keep doing what you're doing. If you want to land a job as a programmer master at least 1 language. |
_unix.177653 | I have a text file with data that looks like this (1875 lines to be exact):

chr1 MOTEVOC_cage_181208 TF_binding_site_cage_181208 6585538 6585547 0.905022147 - . TF_binding_site_cage_181208 MEF2A,B,C,D-148428 ;ALIAS MEF2A,MEF2B,MEF2C,MEF2D ;L3_ID L3_chr1_-_6585517
chr1 MOTEVOC_cage_181208 TF_binding_site_cage_181208 6767855 6767864 0.703029237 + . TF_binding_site_cage_181208 MEF2A,B,C,D-148303 ;ALIAS MEF2A,MEF2B,MEF2C,MEF2D ;L3_ID L3_chr1_+_6768100
chr1 MOTEVOC_cage_181208 TF_binding_site_cage_181208 8686283 8686292 0.481284243 + . TF_binding_site_cage_181208 MEF2A,B,C,D-148085 ;ALIAS MEF2A,MEF2B,MEF2C,MEF2D ;L3_ID L3_chr1_-_8685906
chr1 MOTEVOC_cage_181208 TF_binding_site_cage_181208 10660924 10660933 0.818294903 + . TF_binding_site_cage_181208 MEF2A,B,C,D-148400 ;ALIAS MEF2A,MEF2B,MEF2C,MEF2D ;L3_ID L3_chr1_+_10661128
chr1 MOTEVOC_cage_181208 TF_binding_site_cage_181208 12327417 12327426 0.584010382 - . TF_binding_site_cage_181208 MEF2A,B,C,D-148387 ;ALIAS MEF2A,MEF2B,MEF2C,MEF2D ;L3_ID L3_chr1_+_12327504
chr1 MOTEVOC_cage_181208 TF_binding_site_cage_181208 12327433 12327442 0.825226087 - . TF_binding_site_cage_181208 MEF2A,B,C,D-148388 ;ALIAS MEF2A,MEF2B,MEF2C,MEF2D ;L3_ID L3_chr1_+_12327504

I am looking for a solution to extract the lines that have + near the very end (it comes after the last ;). Similarly, I am looking to extract the - strand lines and put them in a separate file.

Edit: change of data set, was looking at the wrong file before. | extracting lines from a large text file | grep | From the comments, I understand that you are looking to extract lines whose 7th column is either + or -. The input file is tab-separated. To do that, while saving the + lines to the file called plus and the minus lines to the file called minus, the most natural tool is probably awk:

awk -F'\t' '$7=="+"{print > "plus"} $7=="-"{print > "minus"}' file

How it works:

-F'\t'
awk reads in a record (line) at a time and separates it into fields. Here, we set the field-separator to a tab.

$7=="+"{print > "plus"}
If the 7th field is a +, then save the line in the file plus.

$7=="-"{print > "minus"}
Similarly, if the 7th field is a -, then save the line in the file minus.
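To sanity-check the command, here is a tiny self-contained run on two fake tab-separated rows (the row contents are made up and only the 7th column matters; the output files plus and minus are as in the answer):

```shell
cd "$(mktemp -d)"
printf 'chr1\tsrc\tfeat\t1\t9\t0.5\t+\t.\tattrs\n'  > sample
printf 'chr1\tsrc\tfeat\t2\t8\t0.4\t-\t.\tattrs\n' >> sample
# Split by the strand column into the files "plus" and "minus"
awk -F'\t' '$7=="+"{print > "plus"} $7=="-"{print > "minus"}' sample
cat plus
```

Running this leaves the + row in the file plus and the - row in the file minus.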
_unix.217221 | I'm building a cross compiled 3.2.15 kernel for a Marvell Armada 370 system. The vendor's default config file for this is armada_370_v7up_defconfig. So when I perform a make armada_370_v7up_defconfig step, shouldn't that result in a .config file that matches the armada_370_v7up_defconfig file?Instead, I'm seeing a lot of differences (can include if needed).Or am I misunderstanding how make defconfig works? | Linux kernel build : shouldn't make defconfig yield the same .config file? | linux kernel;compiling;configuration | Defconfig generates a new kernel configuration with the default answer being used for all options. The default values are taken from a file located in the arch/$ARCH/configs/armada_370_v7up_defconfig file.These default configurations are not designed to exactly fit your target but are rather meant to be a superset so you only have to modify them a bit.The make armada_370_v7up_defconfig creates your initial .config, which you can now edit through make menuconfig and make your changes. After that, you can run make which will then compile the kernel using your settings. |
_softwareengineering.271829 | I am never sure which of these is better form:

Option A

def a(x,y):
    def b(z):
        return z+y
    return map(b, x)

print a([10,20], 5)

Option B

def b(z,y):
    return z+y

def a(x,y):
    return map(lambda x: b(x,y), x)

print a([10,20], 5)

Suppose that b() is ONLY ever called from inside a(). I know Option B is more efficient because it only declares b() once, and then the variable y is just passed as a context variable. But Option A seems to be simpler syntax, eliminating the need to construct a lambda, and simplifying the interface of b(). Suppose further that there could actually be many context arguments. The complexity blows up on the interface of b():

Option A2:

def a(x,y1,y2,y3,y4,y5):
    def b(z):
        return z+y1+y2+y3+y4+y5
    return map(b, x)

print a([10,20], 5,6,7,8,9)

Option B2:

def b(z,y1,y2,y3,y4,y5):
    return z+y1+y2+y3+y4+y5

def a(x,y1,y2,y3,y4,y5):
    return map(lambda x: b(x,y1,y2,y3,y4,y5), x)

print a([10,20], 5,6,7,8,9)

Option A2 is far fewer characters. Thoughts? | Iterating a function with a static argument: Global functions + lambdas vs internal function? | python;functions;lambda | Per the Zen of Python: simple is always better than complex. I'd pick the first option on the principle that it is vastly easier to understand and thus easier to maintain. Generally speaking, in Python, one worries more about the ease of use of the code than the efficiency of the code. If you want to simplify long lists of arguments, use *args:

4.7.3. Arbitrary Argument Lists

Finally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and Sequences). 
Before the variable number of arguments, zero or more normal arguments may occur.

def write_multiple_items(file, separator, *args):
    file.write(separator.join(args))

Normally, these variadic arguments will be last in the list of formal parameters, because they scoop up all remaining input arguments that are passed to the function. Any formal parameters which occur after the *args parameter are keyword-only arguments, meaning that they can only be used as keywords rather than positional arguments.

>>> def concat(*args, sep="/"):
...     return sep.join(args)
...
>>> concat("earth", "mars", "venus")
'earth/mars/venus'
>>> concat("earth", "mars", "venus", sep=".")
'earth.mars.venus'
_webmaster.5362 | Similar question but different on key points.I have the following setup:example.com requests forward to example.com/landing/re.php with a 301 permanently moved./landing/re.php evaluates cookies (or other request data) and directs you to either the landing page or to a specific language site, it redirects with a 303 see-other.All http-redirects are sent in the HTTP response header section.I'm really worried that bots won't be able to deal with this and I should resolve this in some way. My first instinct is to evaluate user-agent and send google to our English page, however I am unaware of the issues with my setup. What's the best way for me to deal with this? Should I re-work what I'm doing to some other way? | SEO and http-redirect headers? | seo;redirects | Most people who link to your site are likely to link to yourdomain.tld so having that page point to a 303 redirect may be wasting some PR (though perhaps trust and authority still get passed to the domain) as the 303 will not pass on PR, and will likely cause http://yourdomain.tld to just be unindexed.That said, there are a lot of sites that use this approach (having domain.tld dynamically redirect to *lang*.domain.tld). One way to do it without wasting PR would be to:Have domain.tld simply be a language-selection page that links to all the different language subsites.When a user goes to a particular language subsite, save (via cookie) that as their language preference.Next time the user comes to domain.tld use JavaScript to redirect them to their preferred subsite.This way all of your domain.tld PR flows to each of your language subsites, but you still auto-redirect users to the language they last visited.However, this is still considered cloaking as return users will see the language selection page in the SERP and instead end up at one of the language subsites. So that might be a reason to stick with your current setup and simply have search users go directly to one of the subsites. 
Google is pretty good at determining which language the user is looking for anyway (based on the query as well as which Google portal the user is searching from). |
_unix.116457 | I downloaded gnuconio from this site (http://sourceforge.net/projects/gnulinuxconioh/). I unzipped the zip and it generated the following files:

bash-4.2$ ls
Doxyfile READ-ME.txt conio.c conio.ppr constream titledoc.html
Makefile READ-ME.txt~ conio.h conio_test.c constream_test.cpp+

I read the READ-ME.txt:

GNUCONIO 0.1 2012

Thanks for downloading the opensource and GPL license gnuconio-0.1 library. With this you can use colors, getch and others graphical functions based on the conio.h library, using the #include conio.h normally, in Windows or Gnu Linux systems.

In Gnu Linux systems only copy the conio.h file to your programs folder to use it. You will need the NCURSES library to work on linux (libncurses-dev).

In Windows you will need to compile the conio.c file, and use the conio.o file in your compiler library list. Tested on the compiler Code::Blocks.

To start the conio use any function, except the printf and scanf. To end the program, make the text color == background color, and use the clrscr() function.

Have Fun !

I do not know how to do what it says in the readme. I am compiling my project directly in the terminal; I'm using Slackware 14.1 and the ncurses library is installed. Can someone help me? | How do I install gnuconio on linux | gcc;ncurses | 
The package you are refering to is old (last update in 2012), is not present in the Fedora repository (that usually means it is not up to snuff), and pastes a Windows specific interface over the perfectly usable and well-tested Unix interface. |
_unix.82215 | This thread What do the numbers in a man page mean? answers the question of the significance of the numbers between parentheses within a man page. My question is related. I have a local installation of a package called ffmpeg. The build folder has the typical bin, lib, etc. and then the folder man/man1/ with the following files:

ffmpeg-bitstream-filters.1 ffmpeg-scaler.1 libavdevice.3
ffmpeg-codecs.1 ffmpeg-utils.1 libavfilter.3
ffmpeg-devices.1 ffmpeg.1 libavformat.3
ffmpeg-filters.1 ffplay.1 libavutil.3
ffmpeg-formats.1 ffprobe.1 libswresample.3
ffmpeg-protocols.1 ffserver.1 libswscale.3
ffmpeg-resampler.1 libavcodec.3

My questions are:

Why is there a subfolder under man called man1? Why not just in man? And why the suffix 1?
Which path should I add to MANPATH? The one pointing to man? Or man/man1?
What do the suffixes in the files above mean? Are they the same numbers within parentheses described in the thread I mentioned above? | Man folders and MANPATH | directory structure;man | The suffixes (such as 1) correspond to the numbers mentioned in What do the numbers in a man page mean?. They represent sections of the manual.

Which path should I add to MANPATH? The one pointing to man? Yes (i.e., not one of the inner man1, man2, etc. directories).

These have the same significance as the directory suffixes from #1. Notice man1 contains all .1 files, man2 all .2 files, etc.
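One way to see the layout in action is with a throwaway man tree (the page name demo and the paths are invented for the example). man searches each MANPATH entry for man1, man2, etc. subdirectories, which is why you add the outer man directory — for the ffmpeg build that would be something like export MANPATH="$MANPATH:/path/to/ffmpeg/build/man":

```shell
root=$(mktemp -d)
mkdir -p "$root/man1"
# A minimal section-1 page; the .1 suffix and the man1 directory must agree
printf '.TH DEMO 1\n.SH NAME\ndemo \\- throwaway test page\n' > "$root/man1/demo.1"
MANPATH="$root" man -w demo
```

man -w prints the path of the page it would display, so the last command should print a path ending in man1/demo.1 inside the temporary tree.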
_webapps.75307 | I am trying to understand and use the values for putting a date range in my form. I want the applicant to put employment range of from-to. I am not sure what to type in the field. Please advise. | Using & Understanding Date Values in Form | cognito forms | null |
_unix.44444 | in fvwm, you can change the default icon of any window using Style <regex> Icon <icon_name.xpm>. In other words, I want to over ride the default icon set for an application without having to hack at system files. How can I achieve the same effect in awesome?In particular, how can I make sure that the application icon in the status bar is the one I set and not the default? | Change status bar applications' default icon | window manager;awesome | I'm not completely sure, but awesome probably honors the applications' .desktop files which contain an Icon key... You could change this, I suppose.(FVWM pre-dates the fdo standardisation work, I think.) |
_codereview.49647 | I'm using Anemone to spider a website, and I am then using a set of rules specific to that website to find certain parameters. I feel like it's simple enough, but any attempt I make to save the products into arrays looks very messy. The rules are different for each site (the script is simply grabbing site 1 from the DB at the moment). rule.name should become the column name. Any ideas of a good way to store this data (not on a db)? My array pushing seemed horrible.

So my way of storing goes like this: I have 2 hashes (entity and product) and an array (array). I loop through the rules making an entity which I merge to Product after each successful iteration. I then push Product to Array before moving on to the next page. As I said, it seems and feels crappy. I would like to add a Product Model with a method to set variable keys for a hash, but I'm not certain.

desc "Crawl client site"
task :crawl => :environment do
  require 'anemone'
  @client = Client.find(1)
  @rules = @client.rules
  $i = 0 #just for testing

  array = Array.new #Set up model/object or array to save the data.

  Anemone.crawl(@client.url) do |anemone|
    anemone.on_every_page do |page|
      #puts page.url
      #Create new instance of object or row of array?
      entity = Hash.new
      product = Hash.new
      product = {url: page.url}
      @client.rules.each do |rule|
        # if page.doc.at_css(rule.rule) != nil || !rule.required? #.text[/[0-9\.]+/]
        if page.doc.xpath(rule.rule) != nil || !rule.required? #.text[/[0-9\.]+/]
          entity[rule.product_attribute.name] = page.doc.xpath(rule.rule).remove
          product.merge!(entity)
        else
          #Not a product Page. Break the rules loop and move on to next page. (also delete current instance)
          product = nil
          break
        end
        #$i += 1
      end
      if product then array.push(product) end
    end
  end

  #puts $i
  puts array
end | scraping and saving using Arrays or Objects | ruby;array;ruby on rails;web scraping | null
_cs.44219 | This might be too easy... But I just don't get it.I've been reading about flow in networks and I stumbled upon this definition in wikipedia: https://en.wikipedia.org/wiki/Flow_network$\sum\limits_{w\in V} f(u,w) = 0 $$(\forall u \in V-\{s, t\})$That implies $\sum\limits_{(u,v)\in E} f(u,v) = \sum\limits_{(v, z)\in E} f(v,z)$It sounds trivial, but how does that implication work? The flow is 0 when there is no edge. So I think I can rewrite the first sum to:$\sum\limits_{w\in V} f(u,w) = 0 \iff \sum\limits_{(u,v)\in E} f(u,v) = 0 $ for a node $u$That would result in every flow being zero, wouldn't it?What am I doing wrong? Thanks in advance :) | Flow in a network: Conservation of flow definition | network flow;ford fulkerson;max flow | The short answer is $f(u,v)=-f(v,u)$. Note that the first sum includes both vertices $w$ such that $(u,w) \in E$, as well as vertices $w$ such that $(u,w) \notin E$ but $(w,u) \in E$. Now unpack the implications of the first sum, separating by these two cases, and I think you'll see what happens.For a more lengthy explanation, this is covered in standard textbooks. Make a trip to a library to check out a few textbooks to find a detailed derivation; or, there are even online algorithm texts. |
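Spelling the implication out — this is just the skew-symmetry argument from the answer written as one chain, splitting the sum over $w$ according to which direction of the edge exists (assuming, as usual in this formalism, that at most one of the two directions is present):

```latex
0 = \sum_{w \in V} f(u,w)
  = \sum_{(u,w) \in E} f(u,w) + \sum_{(w,u) \in E} f(u,w)
  = \sum_{(u,w) \in E} f(u,w) - \sum_{(w,u) \in E} f(w,u)
```

so $\sum_{(u,w)\in E} f(u,w) = \sum_{(w,u)\in E} f(w,u)$: total flow out of $u$ equals total flow into $u$. The proposed rewrite in the question drops the second sum — when only the reverse edge $(w,u)$ exists, $f(u,w)$ is not $0$ but $-f(w,u)$ by skew symmetry — which is why it wrongly forces every flow to zero.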
_cstheory.16436 | I am looking to write a research paper for an undergraduate class in security.I'd like to explore the details of a security attack that's happened within the last ~15 years (within the last 5 years is better). This topic really interests me but I'm having trouble finding a suitable topic.What attacks can you recommend? Good candidates would be attacks that:Are well documented (lots of technical details), including which vulnerabilities were exploited and great detail about how the attack was doneHave the attack's code that can be analysed on github or made publicEdit: I was going to pick the topic of the Playstation network DDoS by Anonymous, but that topic has already been selected by another student. | Good exploit to explore for an undergraduate research paper | soft question;security;project topic | null |
_softwareengineering.27943 | I'm interested in taking on more contract projects for programming and, although it'd be nice to find clients just in my area, I want to expand my search a bit to other areas where a company may allow me to telecommute. I'm wondering if anyone who has experience with this has noticed that whether a company would be more likely to be open to this based on if they are in a bigger city vs a smaller town.I would think that companies in smaller areas may have trouble finding programmers for some things and therefore would be more willing to outsource but maybe I am wrong. Any info is appreciated, I'm just trying to get an idea as to how to concentrate my efforts with networking with people form different areas, etc. | Are companies in certain locations (big city vs small town, etc) more likely to offer telecommute contract programming work? | freelancing;telecommuting | null |
_softwareengineering.315160 | I am developing a program which is to be installed on a simple machine in a LAN; the only particularity is that the NICs are connected to mirror ports. The software can scan and monitor the network. What I am looking for is a technique to take down a host on this network. When I discover a new host on the network I want to block it until I have authorized it, but the machine is not a firewall nor a real Network Access Controller (NAC), so I can only use passive methods.

I have tried ARP cache poisoning to associate a fake MAC with the real gateway IP in the target host's ARP cache, but there are two major issues:

The victim can manually set the ARP cache entry of the gateway as static.
The victim can still communicate with the other LAN hosts.

So I tried another method: poison the ARP cache of every host on the network except the victim's. I send a broadcast ARP request (works better than a reply) with the IP of the victim and a fake MAC. But there are still issues:

When the victim tries to communicate with another LAN host, it sends an ARP request which puts the valid MAC and IP association in the other hosts' ARP caches.
The victim can also manually send ARP requests or replies to overwrite the fake MAC in the hosts' ARP caches.

For these reasons I wonder if there is a better method to accomplish such a thing. | Technique to passively take down a host on the network | security;monitoring | null
_softwareengineering.254746 | I've been thinking about language design lately, and reading over some of the new things in Haskell (always a nice source of inspiration). I'm struck by the many odd uses of the left <- and right -> arrow operators. I guess the many different usages come from prior art in math and other languages regarding arrow syntax, but is there some other reason not to try and make the usage more consistent or clear? Maybe I'm just not seeing the big picture?

The right arrow gets used as a type constructor for functions, the separator between argument and body of lambda expressions, separator for case statements, and it's used in pattern views, which have the form (e -> p). The left arrow gets used in do notation as something similar to variable binding, in list comprehensions for the same (I'm assuming they are the same, as list comprehensions look like condensed do blocks), and in pattern guards, which have the form (p <- e).

Now the last examples for each arrow are just silly! I understand that guards and views serve different purposes, but they have almost identical form except that one is the mirror of the other! I've also always found it kind of odd that regular functions are defined with = but lambdas with ->. Why not use the arrow for both? Or the equals for both?

It also gets pretty odd when you consider that for comparing some calculated value against a constant, there's nearly a half-dozen ways to do it:

test x = x == 4
f x = if test x then g x else h x

f' 4 = g 4
f' x = h x

f'' x@(test -> True) = g x
f'' _ = h x

f''' x
  | True <- test x = g x
  | otherwise = h x

f'''' x = case (x) of
  4 -> g x
  _ -> h x

Variety is the spice of source code though, right? | What is the logic behind the use of different arrows (-> | language design;haskell | null
_unix.311846 | I have a Linux 3.1.6 kernel as a router on a server with two Xeon E5405 CPUs. The machine has two 1 Gbps network interfaces (Ethernet). We have several networks; two of them are 10.0.0.0/20 and 10.1.0.0/20. When copying a file between two machines in the same network I get about 1 Gbps copying speed, but when copying between networks the speed degrades to ~200 Mbps. Copying to/from the outside world yields the same speed (~200 Mbps), but it should be much higher; we have about ~1 Gbps to the outside and servers nearby with high available download speeds (confirmed, tested). So the problem is the routing server (we also did several tests confirming this). What could be the problem? Can the NAT process be this slow? Is routing between networks slow? The CPUs aren't too busy (load is negligible); is it a kernel bug?

HAH, UPDATE (17:40): I discovered that this is an IPv6 issue somehow. How?

wget SERVER_NETWORK1_IPv4/file (~1 Gbps)
wget SERVER_NETWORK2_IPv4/file (~1 Gbps)
wget **SERVER_DNS_NAME**/file (~200 Mbps with DNS name) HA!
wget SERVER_IPv6/file (~200 Mbps with IPv6 address) HA!

So, a different question: why is IPv6 multiple times slower? | Linux as router, bandwidth degradation when using IPv6 | linux;routing;ipv6;router;nat | null
_cstheory.7069 | Brief Background: In Multi-Party Protocols by Chandra, Lipton, and Furst [CFL83], a Ramsey-theoretic proof is used to show a lower bound (and later, a matching upper bound) for the predicate Exactly-$n$ in the NOF multiparty communication complexity model. From the paragraph at the top of the second column of Page 1, we can see that they define the model such that the communication is strictly cyclic: e.g., for parties $P_0, P_1, P_2$, $P_0$ broadcasts at time $t=0$, $P_1$ broadcasts at time $t=1$, $P_2$ broadcasts at time $t=2$, then $P_0$ broadcasts at time $t=3$, and so on.

In most other papers, this cyclic ordering restriction is not made. For an (arbitrary) example, in Separating Deterministic from Nondeterministic NOF Multiparty Communication Complexity by Beame, David, Pitassi, and Woelfel [BDPW07], a counting argument over protocols separates $\bf{RP}^{cc}_k$ from $\bf{P}^{cc}_k$. By their definition, a protocol specifies, for every possible [public] blackboard contents [i.e., broadcast history], whether or not the communication is over, the output if over, and the next player to speak if not. (emphasis added)

Importantly, the proof technique in [CFL83] appears (to my eyes) to crucially depend on the parties speaking in a cyclic/modular fashion.

Question: Allow me to play Devil's Advocate: Doesn't the lower bound proof of [CFL83] break if we allow the parties to speak in an ordering specified by the protocol? More specifically, is it possible that there could be a protocol with a different communication pattern than cyclic for Exactly-$n$ in the NOF model that costs less than the $\log(\chi_k(n))$ lower bound given in the paper? Or more generally: what's going on here? Why is one (highly cited) paper (I use the following very liberally) allowed to restrict the possible protocols to round-robin communication patterns only? | Effect of protocol ordering on multiparty comm.
complexity | communication complexity | Any protocol $\pi$ can be modified into an equivalent protocol $\hat\pi$ that has the special round-robin communication pattern. The modification is as follows: Whenever party $i$ generates an output in $\pi$, it holds it in a buffer until party $i-1$ has spoken. After party $i-1$ speaks, party $i$ either releases its buffer or broadcasts a dummy (or empty) message if there is nothing in the buffer.The conversion of $\pi$ into $\hat\pi$ is without loss of generality, with respect to the correctness and security properties of the protocol. Whether it incurs a loss of generality with respect to communication complexity depends on whether the model allows empty messages. You should see whether the proof technique in this paper assumes that parties can broadcast only non-empty messages. If empty messages are allowed, then $\hat\pi$ has the same communication complexity as $\pi$. |
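The buffering trick in the answer above can be sketched in a few lines of Python (a toy illustration, not code from [CFL83]; the function and variable names are invented): a transcript in an arbitrary speaking order is rewritten into the strict cyclic schedule, with empty broadcasts standing in for the dummy messages.

```python
def to_round_robin(transcript, k):
    """Rewrite a transcript [(speaker, msg), ...] into strict cyclic order
    0, 1, ..., k-1, 0, 1, ...  Each party buffers its pending messages and,
    on its turn, either releases the next one or broadcasts an empty
    (dummy) message."""
    out = []
    i = 0          # index of the next original message still to be released
    speaker = 0    # whose turn it is in the cyclic schedule
    while i < len(transcript):
        if transcript[i][0] == speaker:
            out.append(transcript[i])   # release the buffered message
            i += 1
        else:
            out.append((speaker, ""))   # dummy/empty broadcast
        speaker = (speaker + 1) % k
    return out
```

If empty messages cost nothing, the number of non-empty broadcasts, and hence the communication complexity, is unchanged by the rewrite, which is the point of the equivalence.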
_unix.45524 | I need a GUI on a remote server to which I have an ssh connection. Is it possible to share my X11 window system with it, given that there's no X11 installed on the server? | Share X11 with remote server where X11 is not installed | ssh;x11;remote desktop | null
_codereview.59030 | I'm normally a C# developer, but I've started to learn F# and I want to make sure I'm writing code in a functional way that suits the language. I've quickly pieced this together with my knowledge from reading Functional Programming for the Real World and various material from the Web. Please look and provide feedback on style, layout or anything you want (Program.fs is where the meat is):

// DataUtils.fs
module DataUtils

open System.IO
open System.Runtime.Serialization.Json
open System.Text

let GetJsonFromObject<'T> (obj:'T) =
    use ms = new MemoryStream()
    (new DataContractJsonSerializer(typeof<'T>)).WriteObject(ms, obj)
    Encoding.Default.GetString(ms.ToArray())

let GetObjectFromJson<'T> (json:string) : 'T =
    use ms = new MemoryStream(ASCIIEncoding.Default.GetBytes(json))
    let obj = (new DataContractJsonSerializer(typeof<'T>)).ReadObject(ms)
    obj :?> 'T

// Web.fs
module Web

open System
open System.Net
open System.IO

let GetJsonFromWebRequest url =
    let req = WebRequest.Create(new Uri(url)) :?> HttpWebRequest
    req.ContentType <- "application/json"
    let res = req.GetResponse () :?> HttpWebResponse
    use stream = res.GetResponseStream()
    use sr = new StreamReader(stream)
    let data = sr.ReadToEnd()
    data

// Model.fs
module Model

open System.Runtime.Serialization

[<DataContract>]
type exchangeRate =
    { [<field: DataMember(Name = "to")>] toCurrency:string;
      [<field: DataMember(Name = "rate")>] rate:decimal;
      [<field: DataMember(Name = "from")>] fromCurrency:string }

// Program.fs
module Program

open System
open Model

let getExchangeRate fromCurrency toCurrency =
    let url = String.Format("http://rate-exchange.appspot.com/currency?from={0}&to={1}", fromCurrency, toCurrency)
    let json = Web.GetJsonFromWebRequest url
    let rate:exchangeRate = DataUtils.GetObjectFromJson json
    rate

let rec displayExchangeRate currencies =
    match currencies with
    | [] -> []
    | (a, b)::tl ->
        let r = getExchangeRate a b
        printfn "%s -> %s = %A" a b r.rate
        displayExchangeRate tl

let currencies =
    [ ("GBP", "EUR")
      ("GBP", "USD")
      ("EUR", "GBP")
      ("EUR", "USD")
      ("USD", "GBP")
      ("USD", "EUR") ]

displayExchangeRate currencies |> ignore

printfn "Press any key to exit"
let input = Console.ReadKey()

As you can see it's not doing an awful lot at the moment, just getting some exchange rates from a REST service and displaying them. The next step would be to add exchange rates to the database. I'd normally use Entity Framework for this, but given that Functional Programming is partly about reducing the amount of state that is held, is EF even the way to go? | Functional Programming style in F# | functional programming;f# | It looks good! Just a couple of points.

Instead of

    let data = sr.ReadToEnd()
    data

you can write

    sr.ReadToEnd()

sprintf is more idiomatic than String.Format.

displayExchangeRate would be better written as a loop or using List.iter (or Seq.iter).

DataContractJsonSerializer.WriteObject uses Encoding.UTF8, but you're decoding with Encoding.Default, which returns the OS's current ANSI code page.
_webmaster.84581 | I was just wondering what the best way is to structure my URLs and whether my URLs are holding me back from an SEO point of view. Currently our category URLs display keywords from that category, but they contain numbers at the end of the URL, for example http://www.example.co.uk/nike-shoes-c-31.html. Would it be better, or have an effect on our rankings, if we just had http://www.example.co.uk/nike-shoes.html? The subcategory to this would then be http://www.example.co.uk/nike-shoes-c-31_110.html. Would it be better if we explained our on-page content a bit better in the subcategory URL, for example http://www.example.co.uk/nike-shoes-latest-designs.html? | URL structures for SEO | seo;url | null
_softwareengineering.143052 | I have a Swing application with a custom TreeModel that can refer to domain instances. I'm wondering what changes I could make if I consider moving to a web interface later on. Would a pluggable model be considered as good design? What would you do? | Separating model from UI | java;design;ui;swing;model | A pluggable model, using MVC (at a larger system level than what Swing defines MVC as), or a facade pattern would all work |
_unix.31184 | In some devices the cpu speed is dynamic, being faster when there is more load. I was wondering if it is possible to set the nice level or priority of a process so that it does not cause an increase in cpu speed when it is running flat out. I.e. the process is running flat out, but only using spare cpu cycles at low priority, while also not causing an increase in cpu speed.

When the cpu is off, the process stops.
When the cpu is slow, the process may get some cpu, maybe most of it.
When the cpu is fast because another process is running at 90%, the process gets the remaining 10% of the fast cpu.

Then the other process stops, so the low priority process gets 100% of the cpu, but the frequency controller does not see this low priority process and drops the frequency. | Process priority and cpu speed | linux;cpu;priority;efficiency;cpu frequency | You can use the ondemand cpu-freq governor, as long as you set the ignore_nice_load parameter to 1. From Documentation/cpu-freq/governors.txt, ondemand section:

ignore_nice_load: this parameter takes a value of '0' or '1'. When set to '0' (its default), all processes are counted towards the 'cpu utilisation' value. When set to '1', the processes that are run with a 'nice' value will not count (and thus be ignored) in the overall usage calculation. This is useful if you are running a CPU intensive calculation on your laptop that you do not care how long it takes to complete as you can 'nice' it and prevent it from taking part in the deciding process of whether to increase your CPU frequency.
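On most modern kernels the governor and this parameter are exposed through sysfs. As a rough sketch (the exact paths vary with kernel version and with whether per-policy governors are in use, so treat them as assumptions to verify on your system), you can inspect them like this:

```python
from pathlib import Path

# Typical sysfs locations; on some kernels ignore_nice_load lives under
# /sys/devices/system/cpu/cpufreq/policy0/ondemand/ instead.
GOVERNOR = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")
IGNORE_NICE = Path("/sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load")

def read_knob(path: Path, default: str = "unavailable") -> str:
    """Return the knob's current value, or a default if the file is absent."""
    try:
        return path.read_text().strip()
    except OSError:
        return default

if __name__ == "__main__":
    print("governor:", read_knob(GOVERNOR))
    print("ignore_nice_load:", read_knob(IGNORE_NICE))
    # To make nice'd processes invisible to the governor (needs root):
    #   echo 1 > /sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load
```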
_codereview.79156 | I am practicing some simple Java coding problems in an attempt to learn while doing them. I would like to know if my code is redundant and whether there is an easier way to accomplish the same thing.

Question: Which Alien?

A person who witnessed the appearance of the alien has come forward to describe the alien's appearance. The program will determine which alien has arrived. The three alien species that it could be are:

TroyMartian, has at least 3 antenna and at most 4 eyes
VladSaturnian, has at most 6 antenna and at least 2 eyes
GraemeMercurian, has at most 2 antenna and at most 3 eyes

Sample session (with output shown in text, user input in italics):

How many antennas?
2
How many eyes?
3
VladSaturnian
GraemeMercurian

If the description does not match any of the aliens, there is no output.

My code:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Alien {

    public static int antenna;
    public static int eye;

    public static void main(String args[]) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            System.out.println("How many antennas?");
            antenna = Integer.parseInt(in.readLine());
            System.out.println("How many eyes?");
            eye = Integer.parseInt(in.readLine());
            if (troy(antenna, eye)) {
                System.out.println("TroyMartian");
            }
            if (vlad(antenna, eye)) {
                System.out.println("VladSaturnian");
            }
            if (graeme(antenna, eye)) {
                System.out.println("GraemeMercurian");
            }
            return;
        } catch (IOException e) {
            System.err.println("Error");
        }
    }

    public static boolean troy(int antenna, int eye) {
        if ((antenna >= 3) && (eye <= 4)) {
            return true;
        } else {
            return false;
        }
    }

    public static boolean vlad(int antenna, int eye) {
        if ((antenna <= 6) && (eye >= 2)) {
            return true;
        } else {
            return false;
        }
    }

    public static boolean graeme(int antenna, int eye) {
        if ((antenna <= 2) && (eye <= 3)) {
            return true;
        } else {
            return false;
        }
    }
} | Alien distinguisher | java;beginner | You're on the right track. Good job using the try-with-resources block for the BufferedReader. You could use a Scanner instead for convenience. (You could even use Scanner.nextInt(), but it would work slightly differently: the newline in the input would be optional.)

The variables antenna and eye can be local to main(), and therefore should not be static members of the class.

In Java, a common naming convention for methods that return a boolean is isSomething() or hasSomething(). In this case, though, maybeSomething() seems more appropriate.

It is rarely necessary to write return true; or return false; explicitly. Usually, you would be better off returning a boolean expression.

import java.util.Scanner;

public class Alien {
    public static void main(String args[]) {
        try (Scanner in = new Scanner(System.in)) {
            System.out.println("How many antennas?");
            int antenna = Integer.parseInt(in.nextLine());
            System.out.println("How many eyes?");
            int eye = Integer.parseInt(in.nextLine());
            if (maybeTroy(antenna, eye)) {
                System.out.println("TroyMartian");
            }
            if (maybeVlad(antenna, eye)) {
                System.out.println("VladSaturnian");
            }
            if (maybeGraeme(antenna, eye)) {
                System.out.println("GraemeMercurian");
            }
        }
    }

    public static boolean maybeTroy(int antenna, int eye) {
        return ((antenna >= 3) && (eye <= 4));
    }

    public static boolean maybeVlad(int antenna, int eye) {
        return ((antenna <= 6) && (eye >= 2));
    }

    public static boolean maybeGraeme(int antenna, int eye) {
        return ((antenna <= 2) && (eye <= 3));
    }
}
_unix.370900 | How can I pass a full raw/MIME message (raw file) to the Linux mailx command for delivery? I don't want to extract the recipient, subject, body etc. from the message - I want to feed a complete existing raw mail message 'as is' to mailx for sending, whilst retaining all existing headers. An example message is as follows:

Received: (qmail 32389 invoked by uid 0); 13 Jun 2017 09:24:51 -0400
Date: Tue, 13 Jun 2017 09:24:51 -0400
From: [email protected]
To: [email protected]
Subject: Test Email
Message-ID: <593fe7a3.IgSR+/BLy+NYXlVZ%[email protected]>
User-Agent: Heirloom mailx 12.5 7/5/10
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

The test mail content

So I want to be able to feed the above to the mailx command on the command line. The purpose of this is to make the server deliver the original message (exactly as it was read from the raw message file) via a secondary SMTP server - to do this we would use mailx's -S switch to specify the secondary SMTP server, e.g.:

mailx -S smtp=backup-mail-server.com:25 < feed in the MIME message here somehow

How can I do this with mailx? | Send raw message with mailx command | mailx;mail command | mailx -p -f /var/mail/nobody | mailx -S smtp=backup-mail-server.com:25

This will read the RAW mail file and pipe it into your send.
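As a side note not taken from the accepted answer: if mailx proves awkward, the same relay-the-raw-file idea can be sketched with Python's standard email and smtplib modules, which also leave the existing headers untouched. The hostname and the sample message below are placeholders, not values from the question.

```python
import smtplib
from email import message_from_string

# A stand-in raw RFC 822 message; in practice you would read this from a file.
RAW = """\
From: sender@example.com
To: recipient@example.com
Subject: Test Email

The test mail content
"""

def load_raw(text):
    """Parse a raw message without altering its headers."""
    return message_from_string(text)

def relay(msg, host="backup-mail-server.com", port=25):
    """Hand the message, headers intact, to the secondary SMTP server."""
    with smtplib.SMTP(host, port) as s:
        s.send_message(msg)  # envelope addresses are taken from the headers

if __name__ == "__main__":
    relay(load_raw(RAW))
```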
_unix.48051 | In Cinnamon, you can move your pointer to the top left corner of the screen to activate the desktop switcher. This would be awesome... if I wasn't left-handed. I throw the pointer up there most of the time. It's massively frustrating. Is there a way to switch it to the top right of the screen? | How can I change my workspace switcher in Cinnamon? | linux mint;workspaces;cinnamon | There is an option for that in recent Cinnamon versions. Open Cinnamon Settings, click on Hot Corner and choose Top Right. If you do not have this option, you need to update Cinnamon:

$ sudo apt-get install cinnamon
_unix.40743 | I have a Linux machine, and I'll switch between using an RTL8187 based card, and a RT2800USB based card (the cards are both USB dongles by the same manufacturer).I've loaded the drivers for both cards:modprobe rtl8187modprobe rt2800usbI can run an iwlist scan against the RTL8187 card until I'm blue in the face, but after a few minutes of doing the same with the RT2800USB card, it stops working, and displays the following:$ sudo iwlist wlan0 scanwlan0 No scan resultsWhat can I do to get the card to work 100%? | RT2800USB wireless adapter stops scanning | drivers;usb;wifi | null |
_unix.255791 | I am used to opening files in multiple tabs the way Visual Studio or Eclipse do. How can I open a file in an existing vim process? | In xterm (mintty, bash), how can I open a file in an existing vim process? | bash;vim | There are two ways you could do this:

run vim in a screen session, and attach to that from different terminals
run vim in client/server mode

Further reading:

Taking Command of the Terminal with GNU Screen
Using GNU Screen to Manage Persistent Terminal Sessions
Server and client mode in Vim
How does vim support C/S mode?
_unix.186697 | When a running process produces lots of stdout output throughout its lengthy run, you don't want to kill it and rerun it. How can you stop showing the output? Thanks. | How to not show the stdout output of a running process? | process;stdout | null
_webmaster.107894 | I am planning to buy a domain with the keyword Android for my website. But I am worried about running into legal problems by doing this. If I buy it, what things do I have to watch out for? | Will a domain name with the keyword Android cause any legal problems? | seo;domains;html;php;javascript | null
_unix.213986 | When I used the ps -u root command in Ubuntu, it showed the following output:

  PID TTY          TIME CMD
    1 ?        00:00:02 init
    2 ?        00:00:00 kthreadd
    3 ?        00:00:00 ksoftirqd/0
    5 ?        00:00:00 kworker/0:0H
    7 ?        00:00:02 rcu_sched
    8 ?        00:00:00 rcu_bh
    9 ?        00:00:00 migration/0

Where are the processes for vhand, bdflush, and sched? How do I find out about these processes? | Process command | linux;ubuntu;process | null
_vi.11105 | I have an issue where in MacVim, I don't get any syntastic/eslint related functionality. When I run !which eslint in macvim, I get eslint not found - eslint is definitely installed.If I run !which eslint on iTerm, I get the correct location of eslint (/Users/fpe/.nvm/versions/node/v4.4.5/bin/eslint).It looks like MacVim is not aware of my $PATH. I didn't set anything specifically. I am also using zsh, if that makes a difference.How do I fix this? | syntastic not working on macvim, but works perfectly fine on the terminal | macvim;plugin syntastic;path | null |
_webmaster.84047 | I'm considering nested sitemaps for a site with many small files. I'm thinking of nesting sitemaps going 3 levels deep into the directories.

Question 1: When you submit nested sitemaps to Google, do you have to submit all of the sitemaps, OR do you submit the root sitemap and expect that Googlebot will crawl to deeper levels? I suspect submitting the root sitemap will suffice.

Question 2: Related comments from Google include this text: A sitemap index file can't list other sitemap index files, but only sitemap files. Does this mean I can only have the root sitemap reference other sitemaps? Or, in other words, I can't go three levels deep such as:

sitemap
+-----sitemap
      +-----sitemap
            +-----sitemap

I should only go two levels deep (root to all other sitemaps)? | Submission of nested sitemaps and nesting levels | sitemap;googlebot;xml sitemap | Sitemap index files can contain references to actual sitemap files, and each of those sitemap files then contains references to the URLs that you want search engines to index. When you submit a sitemap index file to Google, it actually processes all sitemap files that are connected to it, but you may have to wait up to a few hours to notice any action in webmaster tools because Google likes to be slow. So your structure would then be something like this:

SitemapIndexFile.xml
  -SitemapFile1.xml
      -URL1
      -URL2
      ....
      -URLN
  -SitemapFile2.xml
      -OtherURL1
      -OtherURL2
      ....
      -OtherURLN
  ....
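Spelled out as actual XML, the two-level layout described in this answer might look like the following (filenames and URLs are illustrative only, not taken from the question):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- SitemapIndexFile.xml: an index may reference only sitemap files, never another index -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://example.com/SitemapFile1.xml</loc></sitemap>
  <sitemap><loc>https://example.com/SitemapFile2.xml</loc></sitemap>
</sitemapindex>
```

Each referenced file (SitemapFile1.xml, SitemapFile2.xml) is then an ordinary urlset sitemap listing URL1, URL2, and so on.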
_unix.354383 | I think I turned on some option which keeps randomly swapping pieces of my code around; for example, when I press delete, instead of deleting it moves the pointer a few lines back and copies some other text. What have I done? It seems to persist through reopening emacs. | emacs randomly deleting lines and text | emacs | null
_vi.12619 | <tspan x="0 0.54717 0.75472 1.32077 1.81134 2.35851 2.58493 3.13213.39625 3.96229 4.50947">Lincoln 30%</tspan> I have to yank the text between the tspan tags 32 times in an XML document. This is what I did: vsplit and enew to create a separate window. Key combination: vit and yy to select and copy text between tags. Go to the new window, paste, and return to the old window. I do this 32 times. It works, but my question is: do you know a more efficient way of doing this? | More efficient way of yanking text out of tags? | cut copy paste | null
_unix.7940 | I am trying to restore a VM but I get this error message:I think this happened because, while the VM was live, I removed one snapshot.How do I fix this, short of restoring older snapshots?NOTE: This problem happens when I use version 4.0.4. Version 3.2.10 allows me to delete a snapshot of a VM, even though it's live. I guess it's a regression... watch me downgrading. | I'm failing to restore a VirtualBox VM | virtualbox | null |
_webapps.5155 | So you're the winning bidder all week until the last 5 seconds, when someone shoots in and outbids you. Is there anything to help prevent that from happening, or is that just a fact of web auctions? | Are there any tricks to prevent last second losses on eBay? | ebay | Bid the real maximum price you're willing to pay; then if you lose you don't mind, because you didn't want to spend that much anyway. As for a real answer, I don't think there's a trick besides bidding at the last second too.
_vi.2165 | I got a plain text file with whitespace separated columns of values. Like this:

AU 3030 .... ... .... 
AU 3031 .... ... .... 
AU 3032 .... ... .... 
AU 3033 .... ... .... 
IT 48100 ... .. .....
IT 40100 ... .. .....
IT 48123 ... .. .....
UK 3333 ... ... ..... 
UK 4444 ... ... .....
UK 5555 ... ... .....

I also got this regex which will match any adjacent line with the same value in the first column (assume the file is sorted on the first column) except the last:

/^\(\([A-Z0-9]\+\)\s\+.*\n\)\(\2\)\@=

(or to make it less hairy):

/^\v([A-Z0-9]+)\s+.*\n(\1)@=

Is it possible to fold lines over the line which was not matched? Having this result:

+-- 4 lines AU ....
+-- 3 lines IT ....
+-- 3 lines UK .... | Folding by regex search pattern | regular expression;folding | Do set foldmethod=expr and use 'foldexpr' to set a vim script expression that will define the fold start points.

set foldmethod=expr
set foldexpr=get(split(getline(v:lnum-1)),0,'')!=get(split(getline(v:lnum)),0,'')?'>1':'='

This looks more complicated than it is, because we can't easily use spaces in :set, but with spaces, and a newline or 2, it looks like:

get(split(getline(v:lnum - 1)), 0, '') != get(split(getline(v:lnum)), 0, '')
    \ ? '>1'
    \ : '='

Overview: Basically this compares the first word of each line with the previous line. If the words are different then the line is the start of the fold, >1.
Otherwise it keeps the same fold level, =.

Glory of Details:

set foldmethod=expr to tell Vim to use a vim script expression to determine the folds
The 'foldexpr' option holds the vim script expression
Evaluate the condition with a ternary that returns >1 when a fold should start and = when the fold level should continue
v:lnum is the current line that 'foldexpr' is running on to update the folds
Get the contents of the current line (v:lnum) and the previous line (v:lnum - 1) via getline()
Split each line into words via split()
Use get() to get the first index of the freshly split words
Use a default value of '' in case of a blank line, e.g. get(words, 0, '')
Compare the first word of the current line with the first word of the previous line in the condition portion of the ternary

Note: this method may have some performance issues with very large documents.

For more help see:
:h 'foldmethod'
:h 'foldexpr'
:h getline(
:h v:lnum
:h split(
:h get(
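To see what the expression computes, here is a toy re-implementation of the same first-word comparison in Python (illustrative only; it mirrors the 'foldexpr' logic rather than using Vim itself), run over data shaped like the question's:

```python
def fold_levels(lines):
    """Mimic the foldexpr: '>1' when the first word changes from the previous
    line, '=' otherwise.  A blank line contributes '' as its first word, just
    like get(split(...), 0, '') in the Vim expression."""
    def first_word(s):
        parts = s.split()
        return parts[0] if parts else ""
    levels = []
    prev = ""  # getline(0), the line before the first one, is empty in Vim
    for line in lines:
        cur = first_word(line)
        levels.append(">1" if cur != prev else "=")
        prev = cur
    return levels
```

Each '>1' marks a line that opens a new fold; '=' continues the current one, so each country prefix gets exactly one fold.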
_unix.384031 | First of all, I'm not sure at all if this is the correct place to ask. If it isn't, please forgive me. Well, I have an OrangePi zero as a server. I installed armbian on it (legacy, xenial version, kernel 4.11.5-sun8i). This has a 100Mb ethernet port; I set it up as a NAS, but I'm experiencing slow downloads from it. I ran iperf3, and this is the output I received on the server. First I tested it as a server (so I verified the UPLOAD speed):

user@DiscoRete:~$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.104, port 51784
[  5] local 192.168.1.2 port 5201 connected to 192.168.1.104 port 51785
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  10.8 MBytes  90.8 Mbits/sec
[  5]   1.00-2.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   2.00-3.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   3.00-4.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   4.00-5.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   5.00-6.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   6.00-7.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   7.00-8.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   8.00-9.00   sec  11.3 MBytes  94.9 Mbits/sec
[  5]   9.00-10.00  sec  11.3 MBytes  94.9 Mbits/sec
[  5]  10.00-10.02  sec   220 KBytes  94.6 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.02  sec   113 MBytes  94.5 Mbits/sec  sender
[  5]   0.00-10.02  sec   113 MBytes  94.5 Mbits/sec  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Bandwidth is about 95Mb/s; since the link is 100Mb ethernet, that is quite ok for me. Then I tested the DOWNLOAD speed:

user@DiscoRete:~$ iperf3 -c 192.168.1.104
Connecting to host 192.168.1.104, port 5201
[  4] local 192.168.1.2 port 47504 connected to 192.168.1.104 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   284 KBytes  2.33 Mbits/sec   37   2.83 KBytes
[  4]   1.00-2.00   sec   270 KBytes  2.21 Mbits/sec   39   2.83 KBytes
[  4]   2.00-3.00   sec   393 KBytes  3.22 Mbits/sec   40   1.41 KBytes
[  4]   3.00-4.00   sec   352 KBytes  2.88 Mbits/sec   29   1.41 KBytes
[  4]   4.00-5.00   sec   481 KBytes  3.94 Mbits/sec   32   1.41 KBytes
[  4]   5.00-6.00   sec   488 KBytes  4.00 Mbits/sec   56   2.83 KBytes
[  4]   6.00-7.00   sec   315 KBytes  2.58 Mbits/sec   37   2.83 KBytes
[  4]   7.00-8.00   sec   236 KBytes  1.93 Mbits/sec   28   2.83 KBytes
[  4]   8.00-9.00   sec   346 KBytes  2.84 Mbits/sec   38   2.83 KBytes
[  4]   9.00-10.00  sec   293 KBytes  2.40 Mbits/sec   32   2.83 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  3.38 MBytes  2.83 Mbits/sec  368   sender
[  4]   0.00-10.00  sec  3.32 MBytes  2.78 Mbits/sec        receiver

iperf Done.

Wait: less than 3Mb/s??? What is the problem here, in your opinion? How can I test it further? Thank you.

UPDATE: I changed the ethernet wire; it worked better the first time (transfers around 8MB/s), but then it went back to 300kB/s (so the 2Mb/s seen here). If it helps, I noticed that when the cable is disconnected the two LEDs on the port light up on the OrangePi zero... This weekend, when I get back home, I'll try swapping the oPI board for another one I've got. Then I'll try with a direct connection. Do you have any other hints on how I can debug this? | Very low outgoing network speed | networking;performance | null
_codereview.165782 | I have created a scraper for yell.com in VBA. The scraper is efficient enough to pull data from that site, whatever the search parameter is. If any link from that site is given to my parser, it is able to scrape all the records irrespective of how many pages they are spread across. There is no "a" tag for the first page in the pagination option; for this reason it was previously scraping all the records except for the first page. However, I've fixed that issue and now it is working flawlessly, pulling all the records available there. I tried to make it accurate, yet there is always room for improvement.

Sub Yell_parser()
    Const mlink = "https://www.yell.com"
    Dim http As New XMLHTTP60
    Dim html As New HTMLDocument, html2 As New HTMLDocument
    Dim page As Object, newlink As String
    Dim I As Long, x As Long

    With http
        .Open "GET", "https://www.yell.com/ucs/UcsSearchAction.do?keywords=coffee&location=United+Kingdom&scrambleSeed=1370600159", False
        .setRequestHeader "User-Agent", "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
        .send
        html.body.innerHTML = .responseText
    End With

    Set page = html.getElementsByClassName("row pagination")(0).getElementsByTagName("a")

    ' First page first, selected already; "row pagination" doesn't have an "a" for it
    GetPageData x, html

    For I = 0 To page.Length - 2
        newlink = mlink & Replace(page(I).href, "about:", "")
        With http
            .Open "GET", newlink, False
            .setRequestHeader "User-Agent", "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"
            .send
            html2.body.innerHTML = .responseText
        End With
        ' Next pages start from here
        GetPageData x, html2
    Next I
End Sub

Sub GetPageData(ByRef x, ByRef html As HTMLDocument)
    Dim post As HTMLHtmlElement
    For Each post In html.getElementsByClassName("js-LocalBusiness")
        x = x + 1
        With post.getElementsByClassName("row businessCapsule--title")(0).getElementsByTagName("a")
            If .Length Then Cells(x + 1, 1) = .item(0).innerText
        End With
        With post.getElementsByClassName("col-sm-10 col-md-11 col-lg-12 businessCapsule--address")(0).getElementsByTagName("span")
            If .Length > 1 Then Cells(x + 1, 2) = .item(1).innerText
        End With
        With post.getElementsByClassName("col-sm-10 col-md-11 col-lg-12 businessCapsule--address")(0).getElementsByTagName("span")
            If .Length > 2 Then Cells(x + 1, 3) = .item(2).innerText
        End With
        With post.getElementsByClassName("col-sm-10 col-md-11 col-lg-12 businessCapsule--address")(0).getElementsByTagName("span")
            If .Length > 3 Then Cells(x + 1, 4) = .item(3).innerText
        End With
        With post.getElementsByClassName("businessCapsule--tel")
            If .Length > 1 Then Cells(x + 1, 5) = .item(1).innerText
        End With
    Next post
End Sub | Web scraper for Yell | vba;web scraping | I would focus on the following improvements:

Avoid code duplication - for instance, you have the User-Agent string specified twice; extract it as a constant and re-use it. GetPageData also has duplicated code.

Some of your locators are layout-oriented, which makes them less reliable and less readable. Bootstrap classes like col-lg-12 or col-md-11 have a layout/design meaning and a high probability of change. "row businessCapsule--title" can become "businessCapsule--title"; "col-sm-10 col-md-11 col-lg-12 businessCapsule--address" would become "businessCapsule--address".
_softwareengineering.284255 | I'm trying to create a calendar app similar to this design: Calendar Design. I'm currently using this calendar framework: CVCalendar, and it's working great, but my question is: what do you think is the best approach to take for displaying the events beneath the calendar? I see 2 options to take:

Option 1: I use a UITableView and place the calendar view as the only TableViewCell in its own section, and the events would be the rows underneath it in a separate section. The problem with that is that every time the user selects a new day, I need to load the events pertaining to that day, which would require me to reload the whole TableView, including the calendar view, which doesn't need to be reloaded.

Option 2: I create custom views for the events, and add them as subviews in the scroll view along with the calendar view. That way, when the user clicks on a new day, I would just delete the event subviews and recreate them based on the new data. | Best architecture approach to develop iOS app | design;ios;mobile;xcode;swift language | null
_unix.207719 | I have a tricky step to walk through for my system (openvpn, 20 clients - raspberries, 1 CentOS server). I want to make an auto install, from the clients, for their keys. They call a php page on the server, which generates a .tar containing client.key, client.crt & ca.crt (and conf). Alright, this works perfectly in a root ssh session. Now I make a php script with a shell_exec(my shell script). But almost all commands fail. It's a matter of being root or not (my php is in /home/non-root-user/public_html/testgenerateforinit). For instance: pkitool, cp /etc/openvpn/mykey.tgz /home/.../public_html/.../ fail in php. How should I proceed? Thanks | Php shell_exec() : for non root user, how to access root command | shell script;php;openvpn | null
_unix.38591 | In Firefox we have two options in the Firefox->Preferences->Fonts and colors->Colors menu: Use system colors and Sites can use other colors. I would like to keep the first one checked (and this is ok) and change the second in a quick way. A quick way could be pressing a keyboard shortcut, running a terminal command, or changing the content of a config file (because I can write a shell script and bind it to a keyboard command). My motivation is that I would like to always use my system colors, but if a webpage has strange visuals, I'd like to switch back to the original quickly. Any ideas? | How to change a Firefox option in a quick way (via shortcuts, command line, ...)? | command line;configuration;keyboard shortcuts;firefox;options | I found a solution... I asked on the Mozilla forum and they gave me an answer. The solution is: install an extension called PrefBar. With this extension we can put a checkbox in Firefox that will change the property browser.display.use_document_colors. We can set a shortcut too (for example, F1). With this extension we can enable several other options too.
_softwareengineering.344145 | The Scrum Guide defines a single unit that consists of a Product Owner, a Development Team of 3-9 members, and 1 Scrum Master, for somewhere between 5 and 11 members. I've seen instances where the Product Owner may have support staff or the team may not have a dedicated Scrum Master to vary that number slightly, but it seems to cap out at about a dozen people.

The Nexus Guide describes one method of scaling Scrum to handle 3-9 Scrum teams working on a single product. It adds a new Nexus Integration Team, which may be dedicated members or may be composed of people from the various Scrum teams. Based on that guide, it would scale to about 20-120 individuals.

Disciplined Agile can scale up from one team to N teams. A standard individual team size would be about the same as in Scrum - 3-9 members plus supporting roles from various specialists, independent test teams, domain experts, etc. The considerations in this framework aren't simply about scaling, but about applying agile methods in large organizations, regulated environments with mandated compliance, outsourcing, and globally distributed teams. It seems like the limit is that you would have one instance of DA per product or product line.

To varying degrees, I've been involved in working on or implementing processes using Scrum, Nexus, and DAD, so I have a solid understanding of those. I don't have a working knowledge of LeSS and SAFe beyond what I'm reading other people say.

LeSS seems straightforward. It's an alternative to Nexus that has the capability to scale much larger. The rules of LeSS state that LeSS is designed for 2-8 teams and LeSS Huge is designed for 8+ teams, which I would estimate puts the development organization size at about 15-80 for LeSS and 80+ for LeSS Huge. Depending on your organization, you would probably be looking at 20-110 people in the product organization for LeSS and 100+ people in LeSS Huge, counting management, independent QA, operations, and so on.
Both forms of LeSS appear to be geared toward a single product, or perhaps a closely related set of products (such as a product line or set of microservices). Every product would have its own instance of LeSS (or LeSS Huge).

SAFe seems to be inclusive of the whole organization - operations, user experience, enterprise architects and systems engineers, product managers, QA, developers, and so on. It has two models - a 3-level organization and a 4-level organization. The 3-level organization identifies Team, Program, and Portfolio. The 4-level organization adds a Value Stream level between Program and Portfolio. Based on the number of roles identified, it seems like this is targeting large enterprise organizations with multiple products and concurrent programs. Reading their implementation guidance, it seems like they expect an implementing organization to train executives and management and then at least 50 members of a development team. The minimum organization size would seem to be a couple of hundred people across all of the identified groups, with multiple products, to make implementation make sense.

Am I right in my assumption that LeSS is a competitor to Nexus with respect to the target audience, and that SAFe is targeting very large organizations with a large number of products or product lines, far more than the other scaled agile frameworks are? | What is the intended or target organizational size for Large Scale Scrum (LeSS) and Scaled Agile Framework (SAFe)? | agile;large scale scrum;scaled agile framework | null
_softwareengineering.237078 | I want to build a social network targeted at a specific interest, and I want to generate revenue in the form of ads. I first thought of building it from scratch, which would take a lot of time (even if it's something simple); then I remembered there are open source options like Diaspora or Friendica. I don't know if there are others... My question is: with these previously mentioned projects, or any other, is it possible to legally generate revenue? Diaspora's license is AGPLv3, with some parts dual-licensed under the MIT License as well. Friendica's is AGPL. Thank you! | Open source social network allowing advertising | open source | Both the AGPL and the MIT license only address redistribution and have no restrictions at all regarding how you use the software. Neither forbids any form of commercial activity. The only restriction is that the AGPL forces you to publish any code changes you make. So when you change the software to add support for displaying advertisements, you will have to publish these changes.
_unix.195050 | I have a C program which I set up as a systemd service for boot-time start. After a restart it started successfully, but after a while it got killed and started again successfully, as the type is set to forking. The error log:

mqtt_to_REST.service - TCUP MQTT to RESTful connector
   Loaded: loaded (/lib/systemd/system/mqtt_to_REST.service; enabled)
   Active: activating (auto-restart) (Result: timeout) since Wed 2015-04-08 12:19:18 UTC; 648ms ago
  Process: 289 ExecStart=/usr/bin/sim (code=killed, signal=TERM)

Apr 08 12:19:18 edison systemd[1]: Failed to start TCUP MQTT to RESTful connector.
Apr 08 12:19:18 edison systemd[1]: Unit mqtt_to_REST.service entered failed state.

The startup script:

[Unit]
Description=TCUP MQTT to RESTful connector
#Documentation=NA
#DefaultDependencies=no
#Before=xdk-daemon.service
#After=mqtt.service

[Service]
Type=forking
ExecStart=/usr/bin/sim
#ExecStart=/home/root/jsmn/example/sim /dev/null 2>&1
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=10
#WatchdogSec=1min

[Install]
WantedBy=multi-user.target

So where is the problem actually happening? | Yocto systemd service script problem | systemd | null
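An aside on the unit file in the question above (general systemd behavior, not the asker's confirmed fix): the combination of Result: timeout and code=killed, signal=TERM in the log is what systemd reports when Type=forking is used for a program that never forks into the background - systemd waits for the initial process to exit, times out, and sends SIGTERM. If /usr/bin/sim runs in the foreground, a sketch of the unit with the matching service type would be:

```ini
[Unit]
Description=TCUP MQTT to RESTful connector

[Service]
# "simple" assumes /usr/bin/sim stays in the foreground;
# Type=forking is only correct when the program daemonizes
# and the first process exits after setup.
Type=simple
ExecStart=/usr/bin/sim
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Conversely, if the program really does daemonize, keeping Type=forking and adding a PIDFile= directive helps systemd track the main process.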