id | question | title | tags | accepted_answer |
---|---|---|---|---|
_webapps.42397 | If an email is seen in Gmail's Sent Mail folder, should it not match the same email in the Inbox folder? Can an email appear in the Sent Mail folder without actually having been sent? | Can someone appear to have sent an email and place it in the Sent Mail folder? | gmail | null |
_webapps.76380 | Let's say I have a column of values (e.g., first names). There may be some duplicate data (some first names are very common). How can I get the unique values? In SQL I would do SELECT DISTINCT first_name FROM... How can I do it in a Google Spreadsheet? | Get unique values from range of cells | google spreadsheets | In Google Spreadsheets, you can use the UNIQUE() formula to do that. Formula: =UNIQUE(A1:A10). Explained: for the range as seen in the screenshot, there are 10 entries. The UNIQUE() formula accepts a range, filters out the duplicates, and returns that range (see screenshot), leaving only 7 unique entries. Reference: Google Spreadsheets Help: UNIQUE() |
_softwareengineering.226111 | Can I use MongoDB as the database for providing a paid service? MongoDB is licensed under the AGPL, but the drivers I'm using are MIT licensed. Do I have to buy a commercial license for MongoDB, or can I use it as a backend for my app? | Can I use MongoDB for a commercial web based service? | licensing;mongodb;agpl | MongoDB may be used as a backend database for commercial web based services, and doing so does not require you to GPL or AGPL the web based service. Do note that nothing in the GPL or the AGPL prevents anyone from using the library/database/whatever commercially - just that you need to distribute the source code of the work in its entirety to people you have distributed the work to. MongoDB recognizes that applications using their database are a separate work: "we promise that your client application which uses the database is a separate work". This means that you don't need to be concerned with the licensing of MongoDB to use it. They'll even send signed letters asserting the promise to legal departments if there are questions (and they'll do commercial licenses if the signed letter isn't enough for the legal department, or you live somewhere where such a promise isn't binding). That said, when a web programmer sees the AGPL, it is indeed right to go "wait, what?" and look closely at what is being used where and what it implies about your source code licensing. The specifics of why MongoDB is using the AGPL, rather than some other, more permissive license, stem from commercial companies' modifications of MySQL. For example, Google Cloud uses MySQL in its backend. However, there have been some changes to it (disabling some features... and possibly some optimizations). Since MySQL is under the GPL and has the web services loophole available to it, Google doesn't need to submit those changes back to the MySQL community. MongoDB, by selecting the AGPL, ensures that if a company were to do what Google has done with MySQL, any changes would have to be submitted back to the community. This is only an issue if you have modified MongoDB from its distribution. If there are no changes to MongoDB, you may use it however you like. See also: http://www.mongodb.org/about/licensing/ |
_codereview.8371 | Here is my solution for Project Euler 35. (Find the number of circular primes below 1,000,000. "Circular" means all rotations of the digits are prime, e.g. 197, 719, 971.) The code takes about 30 minutes to run. Can you help me identify which parts of the algorithm are hogging up computation time? I know there are many solutions out there; I just want to know why this one is soooo slow. I suspect it has something to do with the p.count function call. p is initialized to a list of all primes below 1 million using the Sieve of Eratosthenes. total is initialized to 0. for i in p: primer = str(i) circ = 1 for j in range(len(primer)-1): primer = primer[1:]+primer[:1] if (p.count(int(primer)) == 1): circ += 1 if circ == len(primer): total += 1 | Euler 35 - python solution taking too long | python;optimization;algorithm;project euler;primes | null |
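This record has no accepted answer in the dump, but the asker's suspicion is worth illustrating: p.count(x) scans the entire list on every rotation, so the loop does roughly len(p) work per membership test. A self-contained sketch of the usual fix, replacing the list scan with a set lookup (my code, not from the thread):

```python
def sieve(limit):
    # simple Sieve of Eratosthenes, as the question says p was built
    is_prime = [False, False] + [True] * (limit - 2)
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i*i::i] = [False] * len(is_prime[i*i::i])
    return [i for i, f in enumerate(is_prime) if f]

primes = sieve(1_000_000)
prime_set = set(primes)   # O(1) membership, unlike p.count(...), which scans all of p

total = 0
for i in primes:
    s = str(i)
    # every rotation of the digits, including the number itself
    if all(int(s[j:] + s[:j]) in prime_set for j in range(len(s))):
        total += 1
print(total)
```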
_unix.47009 | How do I determine whether a server is slow or not with the Unix traceroute command? Here is the traceroute output for a host IP: traceroute 188.165.247.43traceroute to 188.165.247.43 (188.165.247.43), 30 hops max, 60 byte packets 1 iPhone.local (172.20.10.1) 1.493 ms 2.546 ms 3.287 ms 2 * * * 3 10.52.141.50 (10.52.141.50) 782.228 ms 784.069 ms 786.188 ms 4 10.52.141.54 (10.52.141.54) 786.491 ms 786.510 ms 786.927 ms 5 10.52.92.237 (10.52.92.237) 787.157 ms 788.059 ms 788.001 ms 6 aircel-gprs-177.5.251.27.aircel.co.in (27.251.5.177) 787.140 ms 98.452 ms 100.978 ms 7 114.79.219.41 (114.79.219.41) 158.391 ms 161.252 ms 161.610 ms 8 abs-cn-61.194.148.202.aircel.co.in (202.148.194.61) 178.216 ms 175.575 ms 193.356 ms 9 114.79.196.185 (114.79.196.185) 197.859 ms 218.156 ms 220.694 ms10 abs-cn-129.198.148.202.aircel.co.in (202.148.198.129) 221.497 ms 238.732 ms 157.212 ms11 * * *12 * * *13 125.17.180.149 (125.17.180.149) 137.955 ms 157.563 ms 139.677 ms14 AES-Static-137.36.144.59.airtel.in (59.144.36.137) 289.250 ms 310.797 ms 290.745 ms15 * * *16 * * *17 * * *18 * * *19 * * *20 * * *21 * * *22 * * *23 * * *24 * * *25 * * *26 * * *27 * * *28 * * *29 * * *30 * * * Is it possible to determine whether the remote server is responding well or not by looking at the output? | How to use traceroute command in unix | networking;ip;internet;traceroute | From your output, you are not able to reach the destination. The * denotes a timeout. The traceroute command shows the path to your destination: the packets you send pass through intermediate routers, and you receive a response from each according to the packet's time-to-live (TTL) value. A * denotes a timeout, i.e. a response from an intermediate router saying the packet has expired. This could be due to various reasons: either the TTL value is not enough, or it could be that a firewall or router is denying the trace packets, in which case you cannot always confirm that the destination server is in fact down. Search Google for the traceroute command; you will find plenty of resources. |
_softwareengineering.187543 | I am building a RESTful API, and so far, to make sure that my resources work as I need them to, I am using a REST client called Postman. This makes it easy for me to store routes and quickly make requests to them for testing. My current collection of routes in Postman looks like this: The trouble with testing the API this way is that I have to manually change resource IDs. For example, if I want to test the PUT method on my posts resource, I have to first create a resource, find its ID, and then paste it into the PUT URI. This is tedious! What is considered professional practice for building a RESTful API? Should I be writing unit tests for each route, dynamically creating the post before testing methods like update? | Workflow for building a RESTful API | testing;api;workflows | null |
_unix.225500 | I am designing n-factor authentication for CentOS 7 using a custom PAM module. When the user tries to SSH in, they will be texted a PIN code and prompted to enter the PIN. Where should I store the user's cell phone number? And how should it be retrieved by the PAM module? I am starting by customizing the 2ndfactor.c file shown in this link. The sample in the link uses a link to a web service, but my module will call a Java program to send the text. Sending is a separate question; in this question, I want to know where to store the data and how to retrieve it in CentOS 7. Is it stored in the OS? In a database? I don't want to create a security risk by leaving emails and phone numbers exposed somewhere. | persisting multi-factor user data for custom pam authentication in CentOS 7 | centos;ssh;authentication;pam | Where should I store the user's cell phone number? And how should it be retrieved by the PAM module? This is entirely your design decision. Some PAM modules store information in local files in /etc, like pam_access or the Google Authenticator module. Other modules may contact a remote server, like the RADIUS authentication module. A scalable solution would probably involve some sort of database or directory service (like LDAP), so that the same information could be used on multiple servers. A simple solution would probably store the information in local files, making synchronizing these files across multiple servers a problem for the local administrator. |
_webapps.105229 | Is there a way I could use hangouts.google.com chat windows in a fullscreen mode? I mean, I use Messenger via messenger.com (full size) and Slack using the website slack.com (also full screen). When I open a Hangouts chat window, it is a very small window in the bottom right corner. I could pop it out into a new window, but I would like to keep it inside the tab, not in a new window. Is there a way? (I have a big screen, so it is kinda funny looking at the small thing in the corner) | Hangouts full screen chat window | google chrome;google hangouts | null |
_unix.192894 | I have a long audio file that was created by concatenating many short files. I would like to detect the silences between the speech segments (just a threshold is enough for my purposes) and replace them with absolute zeros, so that there is no background noise. It is important for me to retain the length of the recording. I know that sox can detect silence at the beginning and end of a file, and that I can use silence, reverse, pad etc. to remove the samples and fill in the zeros. Is there a way to do it everywhere in the file, not just at the start and end? UPD: this is probably a pretty complicated way to ask if there are tools for voice activity detection for Linux. | How to use sox or ffmpeg to detect silence intervals in a long audio file and replace them by zeros (aka suppress background noise)? | audio;ffmpeg;sox | null |
_webapps.45419 | I'd like to be able to use Google Spreadsheets to log in to my registrar and automatically list into rows the current name server and email settings of all my sites, rather than use the heinously clunky and slow admin interface of my registrar (who also has a habit of defaulting them back to their original settings or pointing my domains to their holding page (*)). Fortunately the URLs are in the easy form of /email?domain=mydomain.co.uk&submit=Submit+Query&r=y /manage-dns/?domain=mydomain.co.uk and so on. And I have the necessary wherewithal to extract data like this from a normal webpage. A Google employee has helpfully posted some code which will pass a username and password in the case of HTTP basic or digest auth via the UrlFetchApp parameters (docs). And I've tried to apply the method in this Stack Overflow posting relating to doing the same with Java and Python. When I look at the traffic via the most excellent Fiddler2 I get the following: # Result Protocol Host URL Body Caching Content-Type Process Comments Custom 3 302 HTTPS www.123-reg.co.uk /public/login 311 post-check=0, pre-check=0; Expires: Tue, 01 Jan 2013 00:00:00 GMT text/html; charset=utf-8 chrome:7036 4 200 HTTPS www.123-reg.co.uk /secure 71,061 text/html; charset=UTF-8 chrome:7036 and I see that this passes username=myusername&password=mypass&login=Log+Me+In&login_submit= and I get some cookies back. Is this beyond the realms of Google Apps Scripting? I have a feeling it could probably be done with cURL, but I was learning G-A-S and wondered what it could do. If there's a better suggestion for doing this, be it a browser plugin or other, please let me know, cos I'm done in after 2 days of Googling. (*) 1: Yes, I should get a better registrar. 2: No, no-one is hacking in and changing it. | Google Apps Script UrlFetchApp and password protected pages | google spreadsheets;google drive;google apps script | null |
_unix.25395 | So I've done my share of investigating this problem... I just recently created a CentOS 6 VM on my Linux Mint box using VirtualBox. I left all the recommended values the same upon creation (8GB HDD, 512MB RAM, blah blah blah...). But it DOESN'T connect to the internet! When I try to ping an external network (google.com) it doesn't recognize the host: ping: unknown host google.com, and when I try to ping the internal IP of my host box it says: connect: Network is unreachable (perhaps it can see my host network, it just doesn't know a route to it?). I've cleared my iptables, used every single network adapter VirtualBox offers, and used both NAT and Bridged Adapter modes. I've also scoured Google for everything I could find. When I run ifconfig, it only shows the lo interface (loopback), unless I manually enable the eth0 interface via the ifcfg-eth0 file (set ONBOOT=yes) and restart. But even when I enable the eth0 interface, it doesn't show an IP; just the MAC and an IPv6 one. I don't think it's the host's internet or its link to the guest OS at all... I have a WinXP VM set up and it can access the internet. Perhaps the answer is more obvious than I think? | CentOS doesn't know what the internet is | linux;networking;centos;virtualbox;virtual machine | Try dhclient -v eth0 - this forces the interface to get an IP via DHCP, which may not be happening for some reason. Once you have an IP, try ping 8.8.8.8 - if this works but ping www.google.com doesn't, you have a DNS issue (check your resolv.conf). |
_codereview.146048 | I have a question regarding the online PCA implemented in the paper. I am interested in Algorithm 2. I am not certain whether there is a bug in my code; can someone confirm that my code is correct? The algorithm is as follows: input: $X$, $\Delta$. $U \leftarrow$ an all-zero matrix (the dimensions are not given in the paper; 0 rows and 0 columns). $B \leftarrow$ a covariance sketch. for $x_t \in X$ do: add $x_t$ to sketch $B$; while $\|(I-UU^{\top})B\|^2 \geq \Delta$, add the top left singular vector of $(I-UU^{\top})B$ to $U$; yield $y_t = U^{\top}x_t$ (I think the while loop ends before yielding; this is not mentioned in the paper); end for. Following is the R code I wrote: PCA<-function(X,delta){ n<-nrow(X) d<-ncol(X) B<-cov(X) Nrow<-nrow(B) Ncol<-ncol(B) U<-matrix(0,Nrow,Ncol) Y<-matrix(0,n,d) idx<-0 for(i in 1:n){ inc<-X[i,] B<-cbind(B,inc) Id<-diag(Nrow) while(norm((Id-(U%*%t(U)))%*%B,2)^2 >= delta){ idx=idx+1 test<-(Id-(U%*%t(U)))%*%B topleft<-as.matrix(eigen(test%*%t(test))$vectors[,1]) add<-cbind(U,topleft) U<-add[,which(!apply(add,2,FUN=function(x){all(x==0)}))] } y<-matrix(0,1,d) if(idx >0){y[1,1:idx]<-t(U)%*%X[i,]} Y[i,]<-y } Y<-Y[,which(!apply(add,2,FUN=function(x){all(x==0)}))] return(Y)} Then, to test that what I have done makes sense, I used Fisher's iris data, and I got: log.ir <- log(iris[, 1:4])ir.species <- iris[, 5]ir.pca <- prcomp(log.ir)ir.pca$rotation #this is the built-in R function's equivalent output, which is very different than what I get; in my understanding I get (ir.pca$x) from the online PCA implementation. OPCA<-PCA(as.matrix(log.ir),300)#If you change the value 300 to 0.34 say then you will have X1,X2,X3 and X4 rotationOPCA<-data.frame(t(t(OPCA) %*% as.matrix(log.ir) %*% solve(cov(log.ir)))) | Online Principal Component Analysis with spectral bounds | algorithm;matrix;r | null |
_unix.311780 | I have built a VPN on Ubuntu using libnetfilter in conjunction with iptables rules, in order to alter packets in the queue on eth0 (to the outside world) and eth1 (to the inner LAN). I had to use MASQUERADE for source NATing and the mangle table for packet alteration. Now I have noticed that the side which initiates the ping request or TCP connection executes the MASQUERADE rule just right (this is correct for both sides). But the MASQUERADE rule is not applied to the reply packets and TCP ACKs, and those packets exit with the internal LAN IP (192.168.x.x). This is also happening on both sides. Q1. What must I do to control these reply packets? Q2. As the reply and TCP ACK packets arrive at the other side over the internet with the LAN source IP: is it allowed on the internet for the source IP to be a private IP (192.168.x.x)? N.B. All of the above also applies to the source port, which I change but which is not changed in reply or TCP ACK packets, while changes to payloads or destination IP are applied correctly. Thanks. Atman | Does the mangle table prevent masquerade in reply packets | shell;ubuntu;terminal;iptables;packet | null |
_cstheory.33240 | We are not able to settle the (non-)existence of a polynomial kernel for a parameterized combinatorial NP-complete problem (we also tried to apply some recent lower-bound techniques to prove the non-existence of a polynomial kernel under reasonable complexity-theoretic assumptions). So we are searching for major open problems that could be used in a parameter-preserving reduction to underline its hardness. What are the major parameterized NP-complete problems for which it is unknown whether they have a polynomial kernel? Is there a survey/technical report on the subject? An example could be ODD CYCLE TRANSVERSAL (OCT), the task of making an undirected graph bipartite by deleting as few vertices as possible, parameterized by the number of allowed vertex deletions (though Stefan Kratsch and Magnus Wahlström recently showed a randomized polynomial kernel for OCT). | Major open problems on polynomial kernel (non)existence | reference request;big list;parameterized complexity | Currently, I would say the 3 major open cases are: Directed Feedback Vertex Set (make a given digraph acyclic by deleting at most k vertices), parameterized by the size of the solution; Planar Vertex Deletion (make a graph planar by deleting at most k vertices); Edge Multiway Cut (given an undirected graph and a list of terminals, delete at most k edges to ensure all the terminals end up in different connected components). For all of these, the relevant parameter is the size of the solution. You can have a look at the open problem list from the 2013 Workshop on Kernelization ( http://worker2013.mimuw.edu.pl/slides/worker-opl.pdf ) for others. Pointers to other (but older) open problem lists in parameterized complexity can be found here: http://fpt.wikidot.com/open-problems . |
_cs.16754 | Suppose I have an undirected graph which is stored as an adjacency matrix. The graph contains a single cycle; all other vertices are isolated. How can I efficiently find the length of the cycle? The best I've been able to come up with: 1) Starting at row 0 of the matrix, traverse through the rows until an initial 1 is found, say at row startVertex and column k. Increment a counter. 2) Search column k for its other 1, say at row j. Increment the counter. 3) Search row j for its other 1 value. Increment the counter. 4) Repeat steps 2 and 3 until a row or column which matches startVertex is found. The complexity of this algorithm is $\mathcal{O}(V^2)$. Is there a better algorithm out there? | Find Cycle Length | algorithms;graphs | null |
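To make the described traversal concrete, here is a small Python sketch of the same idea (illustrative code, not from the thread): start at any vertex on the cycle, repeatedly step to the neighbour you did not just come from, and count steps until you return. Each neighbour lookup scans one row of the matrix, so it stays within the stated $\mathcal{O}(V^2)$ bound.

```python
def cycle_length(adj):
    """adj: 0/1 adjacency matrix of a graph that is one cycle plus isolated vertices."""
    n = len(adj)
    start = next(i for i in range(n) if any(adj[i]))          # any vertex on the cycle
    prev, cur = start, next(j for j in range(n) if adj[start][j])
    length = 1
    while cur != start:
        # move to the neighbour of cur that we did not just come from
        prev, cur = cur, next(j for j in range(n) if adj[cur][j] and j != prev)
        length += 1
    return length

# e.g. a triangle on vertices 0, 1, 2 with vertex 3 isolated -> 3
print(cycle_length([[0,1,1,0],[1,0,1,0],[1,1,0,0],[0,0,0,0]]))
```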
_unix.259943 | I have a USB flash drive that I formerly used as an installation medium for Fedora Linux. The stick still has the Fedora Live USB installation files on it. When I insert it into my olde laptoppe, it appears as a disk named Fedora-Live-KDE-x86_64-22-3 in KDE Dolphin. Fair enough. So, I destroy all partitions on it using fdisk, create a new partition, and set up an ext4 filesystem on said partition. I insert the flash drive. It appears as Fedora-Live-KDE-x86_64-22-3 in KDE Dolphin. UNDEAD FLASH DRIVE TIME! Where does that name come from? It feels like it does not come from the USB flash drive, but factoid (3) below indicates that it actually does. Where is that name coming from, and how do I change it? Here is some research on where the name is coming from, the conclusion being that it apparently comes from the ISO-9660 data left on the disk. But how is this sane behaviour by Linux? (1) e2label /dev/sdd1 shows nothing: the filesystem has no label. (2) blkid /dev/sdd1 shows /dev/sdd1: UUID=10aab422-4212-45c8-9f99-35e5eb719154 TYPE=ext4 PARTUUID=5c4a815c-01. (3) Using the flash drive on another machine also results in the name Fedora-Live-KDE-x86_64-22-3 being displayed. (4) One can dump the labels (whatever those are) by looking at the filesystem under /dev: ls -l /dev/disk/by-label/ This shows the symlink Fedora-Live-KDE-x86_64-22-3 -> ../../sdb. Note that the symlink points to the device, not the partition. So this is not a filesystem label, but something like a disk label. (5) The original filesystem label obtainable with e2label being empty, we set it and then see what's up: # e2label /dev/sdb1 Scooby Doo# ls -l /dev/disk/by-label/lrwxrwxrwx. 1 root root 9 Feb 4 23:43 Fedora-Live-KDE-x86_64-22-3 -> ../../sdblrwxrwxrwx. 1 root root 10 Feb 4 23:43 Scooby\x20Doo -> ../../sdb1 So now both the disk and the filesystem/partition have a label. However, after removal/reinsertion, Dolphin (or rather, Linux) now settles on the Scooby Doo name of the filesystem. And why not! We can then erase the label again using e2label /dev/sdb1 ... and then the name is back, but only partially: Fedora-Live-KDE- (why partially? because it's read from 0x9000 onwards, whereas the full label is at 0x8000, see below). (6) Also tried to see what parted does. It seems mightily confused: it thinks the 8GiB stick with 512-byte blocks is actually a 32GiB stick with 2048-byte blocks and detects an Apple partition, while fdisk is absolutely happy with finding an 8GiB Linux partition. Curiouser and curiouser. (parted) printWarning: The driver descriptor says the physical block size is 2048bytes, but Linux says it is 512 bytes.Ignore/Cancel? iModel: Generic USB Flash Disk (scsi)Disk /dev/sdb: 32.2GBSector size (logical/physical): 2048B/512BPartition Table: macDisk Flags:Number Start End Size File system Name Flags 1 2048B 10.2kB 8192B Apple 2 88.1kB 5278kB 5190kB EFI 3 5319kB 26.1MB 20.8MB EFI It's probably not TOTALLY confused, because on the stick we find this: (7) Additional weirdness: the reformatted USB stick seems to be un-writeable but traversable for a non-root user. Writing as root works though. But that's just a side remark. (8) Getting a disk dump with Okteta shows the disk name string at a position just past 0x8000, i.e. in block 64 (blocks being 512-byte-sized). This evidently stems from the Live CD structure. Looking further shows the name again, likely in UTF-16 format, just past 0x9000, with the version suffix dropped, probably because the field has constant size. (9) Time to POKE and see what happens. We modify the string at the 0x8000 mark; we also modify the string at the 0x9000 mark. Then we write the blocks back to the stick (because we have been modifying a file obtained using dd), sync, sync and eject. Then we reinsert the stick. Linux settles in this case on the string at 0x9000: [root@elf ~]# ls -l /dev/disk/by-label/total 0lrwxrwxrwx. 1 root root 10 Feb 9 22:09 DellUtility -> ../../sda1lrwxrwxrwx. 1 root root 10 Feb 9 23:20 MOTHRA-Dead-KDE- -> ../../sdb1lrwxrwxrwx. 1 root root 10 Feb 9 22:09 OS -> ../../sda2lrwxrwxrwx. 1 root root 10 Feb 9 22:09 RECOVERY -> ../../sda4 Dolphin shows the content of /dev/disk/by-label. So, we know where the string comes from. It does not seem useful to be able to change it, as it comes from the CD-ROM structure, whereas we have put a standard partitioning scheme onto the USB disk. Why does Linux mash these two structures together? | USB flash drive formatted as Linux Live CD keeps the CD-ROM name after re-partitioning | linux;partition;usb;live usb;iso | null |
_softwareengineering.112794 | According to Martin Fowler's classic article, there are two types of verification: state and behaviour verification. At the same time, I often see people talking about implementation vs. behaviour verification. So I guess we are talking about pretty much the same stuff here: state in the first classification is behaviour in the second; behaviour in the first equals implementation in the second. What I don't like is that the word behaviour appears in both namings, but on opposite sides, which causes a lot of confusion. Which naming do you prefer? | Verification naming confusion | unit testing;tdd | These two approaches use different meanings of the word behavior. The first one (behavior vs. state) is based on a design mindset that views programs as a collection of states and transitions between those states. In such a model, state is a static thing; if you had a computer that you could freeze or single-step (like those back in the 1970's), then you could stop it at any time and inspect its state (the values currently stored in its register and memory), but to observe its behavior, you need it to run. It is quite common to visualize the dynamic aspects of a (part of a) program as a diagram where boxes represent states, and arrows represent transitions between those states. The second meaning offsets functional characteristics of the program (what does it do?) against technical details (how did they make it do what it does?). This separation is fairly common; many methodologies split the design phase into functional and technical design parts. In this mindset, the behavior of a program is what it functionally and observably does, from a user's perspective (e.g., when I click this button here, a bunch of dummy text appears above it); the implementation, by contrast, is how it's done (e.g., there's this HTML form with a submit button, and the server-side script reads the POST request fired by that script, calls a web service to generate some lorem ipsum, and inserts it into the form which then gets sent back in the response). The first meaning is interesting, because deciding what you want to model as state (data) and what you want to follow implicitly through behavior has a huge impact on any software project. Making your data too rigid makes you inflexible to future change; putting too much information in your logic and too little in your data makes for an unmaintainable mess of complex logic. The second one is important, because it allows you to specify requirements without making any technical decisions (yet). If the functional requirements are unambiguous and clear, you can then verify that the technical design meets the functional requirements, and later, during acceptance testing, you can also verify that the actual product meets the functional requirements (which is far more important than meeting the technical requirements: who cares if you haven't used XML, as long as the thing does what it's supposed to do). |
_webmaster.3337 | I have a website that is currently hosted in Japan, and has been for a number of years. It's well established in the search engines, in both Yahoo Japan and Google Japan. For numerous reasons it has become necessary to move our site to a new host, and our systems administrator has asked me if it's OK to use a non-Japanese host in a country close by, preferably the US. (Not exactly close, but closer than some.) I am trying to find out if moving the site away from its homeland will have a significant effect on its search engine rankings in Japan. I've heard from somewhere that it will negatively affect the site, but I was hoping someone could provide some evidence, or ideally a case study, for this? It's a tall order, but maybe someone can help out there? Thanks! | Japanese Hosting And Search Engine Rankings | seo;web hosting;serps | null |
_unix.180753 | I'm looking for a file system for an external HDD which will basically be used for backups. It will only be used with Linux machines, so I don't mind having a Linux-specific file system. I may consider encrypting the drive, but it is not necessary, since I don't mind encrypting sensitive files and directories manually. I did some research on file systems like ext4, Btrfs and XFS, and even found a benchmark, but I couldn't come to a conclusion. Is there a significant difference between the file systems supported by Linux which I should consider in this setup? | Linux File System for an External HDD | linux;filesystems | ZFS is an ideal candidate in this case because of its robust checksums, snapshots, the ability to export and detach the pool, and the use of ZFS send and receive for highly efficient differential backups. One important gotcha with external USB drives is to make certain that your pool isn't going to be marked as faulted if your drive spins down for power saving. There are workarounds for this, such as disabling power saving on the device, or exporting the pool after your backups complete so that it can safely sleep. Also, lz4 is great for compression and is available in later pool versions. |
_vi.4830 | Upon opening a new file in an active buffer in the current window, the message line at the very bottom of the screen shows %f [New File].How does Vim know that the buffer contains a new file, rather than an existing file? I want to detect this in order to test whether quickfix has correctly parsed a file name from an error message. Since getqflist() gives 'bufno' instead of 'filename', I can't use filereadable() for this.I've also tried checking getbufvar() but until you actually try to jump to the error with :cc, any buffers created by quickfix after parsing an error message for a file name are unlisted and getbufvar() returns an empty dictionary. I want to determine whether the buffer will contain a new or existing file and intervene before jumping to the error and opening the file. | How can I detect whether an unlisted buffer contains a new file or an existing file? | vimscript;buffers | null |
_webmaster.18439 | I have a client that wants to make their products firmware files available for download on their website. The firmware files have a custom extension: .bi2. The client wants the files to be downloaded directly and not placed in a container (like a .zip file).Is there an IIS setting that will instruct the browser to download the .bi2 file instead of trying to open it as a webpage?Thanks in advance for your help! | Download Link for Custom File Type | iis7;webserver;download;filenames;iis6 | In order for IIS to allow access to the file at all, it needs to be assigned a MIME-type. Use application/octet-stream and the browser will almost certainly treat it as a file it can't handle itself.(You could also experiment with application/x-whatever-you-want) |
_unix.261087 | My long-standing previous installation somehow tied VLC to the GTK file dialog. I didn't even do anything special, apart from installing VLC. After updating to VLC 2.2.1 the file dialog was replaced with a Qt one, and I don't see any obvious way to get back to GTK. When I mark vlc-qt for removal, the whole of VLC is marked for removal as well. openSUSE 13.2 | How to setup VLC with gtk file dialog? | gtk;vlc;qt | VLC media player has been using the Qt interface for quite a long time. VLC, however, has an option to override the window style, which will also change the file dialog. In VLC media player, do the following steps: Go to Tools > Preferences (or press Ctrl+P). In the first tab, titled Interface Settings, look for the last option under Look and feel. There is an option called Force window style:, where System's default is probably selected. Click on the drop-down menu and change from System's default to GTK+. Finally, click on the Save button and the changes will be applied. Then, go to Media > Open File... (or press Ctrl+O) to confirm that the file dialog now uses the GTK+ window style. That's all. Tested working for VLC 2.2.1 in Debian 8.2 Xfce (Xfce 4.10). Force style for Qt5 in Debian/Ubuntu: install the libqt5libqgtk2 package from the repository, which is available for the following releases of Debian and Ubuntu; no further configuration is needed: Debian Testing (stretch) and newer; Ubuntu 15.10 (wily) and newer. This has been tested working for VLC 2.2.2 in Xubuntu 16.04 (Xfce 4.12). I didn't test in Debian, but it reportedly works according to this post on Ask Ubuntu. Force style for Qt5 in other distributions: the package above is not available in the repositories of other distributions, including openSUSE, according to this search result from software.opensuse.org. According to the Arch Wiki, setting QT_STYLE_OVERRIDE=GTK+ will force a specific style on Qt5 applications. This may be added in one of the following locations: ~/.profile (reportedly works in Linux Mint, suggested in this post on Unix.SE); ~/.bashrc (suggested in this post on Ask Ubuntu); ~/.xsession or ~/.xinitrc (suggested in this post on the FreeBSD forum); ~/.xsessionrc (suggested for Openbox in this post on the CrunchBang Linux forum). Without installing the package, I have tried adding export QT_STYLE_OVERRIDE=GTK+ to each of the above configuration files one at a time, except for the last one. However, none of these worked for VLC in Xubuntu 16.04, so I can't verify whether the environment variable really works or not. |
_webmaster.52650 | Example:For a PC game review website would it be bad for SEO to have one Amazon affiliate link in addition to the game review? Since these links would be coded in the form of <iframe> would adding a rel=nofollow be of much benefit? Lets say they were not iframe links and regular <a href=>link</a>.A respected SEO equated affiliate links to poison for rankings and recommended they be pulled to a separate domain or buried deeper in the site away from root. So is it better to just remove all affiliate links and not risk any penalty from Google?I have looked around and not really been able to find a direct answer to this from Google. Here is some information I did find:http://moz.com/blog/getting-seo-value-from-your-affiliate-links http://www.nichepursuits.com/how-to-get-a-google-penalty-using-affiliate-links-and-how-to-recover | Are Amazon affiliate links bad for SEO? | seo;affiliate | bybe, that's simply not true at all.Affiliate DO links blackball your site. I could give a million example links pointing to case studies from ePN's forum, Amazon affiliates, PHPbay, Warrior Forum and more - it's just something that can't be ignored. You will NOT get a penalty for cloaking links (cloaking = renaming the links to adapt to your domain name). Cloaking is completely legit. Read any TOS. Look at any affiliate site out there, big or small, there's hardly a single one not cloaking links. Using methods to force people into launching an affiliate link is a different story.Google can punish whomever they want. We saw this with Panda & Penguin -- the number of innocent sites, especially those of affiliate marketers, was staggering. Nobody is going to sue one of the biggest corporations in the world for something they're doing on their own search engine, and win. A LOT of businesses went under after Panda. Many of them did nothing wrong besides being an affiliate marketer, which Google has a vendetta against.A completely legal way to remove your affiliate links from the equation is to separate them and put them on a NOINDEXed page. Unfortunately this going to increase the number of clicks that a customer goes through to get to your affiliate links. So, you'd have a site with nothing indexable by search engines but pure content, with the affiliate stuff on NOINDEX pages which are not being mixed into part of your site's ranking. Unfortunately, this is the sort of thing you have to do if you want to continue being an affiliate marketer on a search engine that is doing everything possible to make sure you don't succeed. |
_unix.372631 | I started to drop an InnoDB table in my test database: echo DROP DATABASE test;|mysql -u root -p which is really slow, which I didn't know at the start, so I stopped the command with ^C. Now my MySQL database is in a broken state, where it says this in /var/log/syslog: InnoDB: Warning: MySQL is trying to drop database `test`.`` InnoDB: though there are still open handles to table `test`.`testtable`. InnoDB: Warning: MySQL is trying to drop database `test`.`` InnoDB: though there are still open handles to table `test`.`testtable`. ... When I try to restart MySQL, it fails. Finally, I had to reboot the server to solve this. How could I have solved this without a reboot? | InnoDB: Warning: MySQL is trying to drop database though there are still open handles to table | mysql;innodb | null |
_unix.288886 | I'm trying a double loop in Bash, using array values as array names for the loop: array1="name1 name2"; name1="one two"; name2="red blue"; for name in $array1; do for value in $name; do echo "$name - $value"; done; done. I need to use the value of 'name' as '$name' in the 2nd loop, but this doesn't work for me. How can I use the values of array1 as the names of arrays inside the 2nd loop? | Bash array values like variables inside loop | bash;array;loop device | That's not how you define arrays in bash. a="foo bar" defines a string/scalar variable. And using it as $a (unquoted) performs the split+glob operator, which only makes sense for strings representing a $IFS-separated list of file patterns. In bash, arrays are defined as: a=(foo bar). So here, you'd want: array1=(name1 name2); name1=(one two); name2=(red blue); for name in "${array1[@]}"; do typeset -n nameref=$name; for value in "${nameref[@]}"; do printf '%s\n' "$name - $value"; done; done. typeset -n is a relatively recent addition to bash; it declares a nameref, that is, a variable that contains the name of another variable and, when expanded, actually refers to the named variable. |
_webapps.100651 | I live in the Bahamas and cannot add my cell phone number as a recovery number for my account. My cell phone number is with a new cellular provider that has just emerged here, and many people are having the same issue. I get an error saying invalid number. I am using a SIM card from a new cellular company in the Bahamas, NewCo2015 / Aliv. | Can't add my phone number as recovery number | gmail;google account;account management | null |
_scicomp.388 | I am trying to integrate $$\int^1_0 t^{2n+2}\exp\left({\frac{\alpha r_0}{t}}\right)dt$$ which is a simple transformation of $$\int^{\infty}_1 x^{2n}\exp(-\alpha r_0 x)dx$$ using $t = \frac1{x}$, because it is difficult to numerically approximate improper integrals. This does, however, lead to the problem of evaluating the new integrand near zero. It will be very easy to get the proper number of quadrature nodes, seeing as the interval is only of length 1 (so the comparable $dt$ can be made very small), but what sort of considerations should I make when integrating near zero? On some level, I think that simply taking $\int^1_\epsilon t^{2n+2}\exp({\frac{\alpha r_0}{t}})dt$ is a good idea, where $\epsilon$ is some small number. However, what number should I choose? Should it be machine epsilon? Is the result of dividing by machine epsilon a well-quantified number? Furthermore, if division by machine epsilon (or close to it) gives an incredibly large number, then $\exp(\frac{1}{\epsilon})$ will be larger still. How should I account for this? Is there a way to have a well-defined numerical integral of this function? If not, what is the best way of integrating the function? | numerical integration with possible division by 'zero' | numerics;quadrature;accuracy | This can be done by integration by parts: $$ \int^\infty_1 x e^{-ax} = \frac{-1}{a} x e^{-ax}\mid^\infty_1 - \frac{-1}{a} \int^\infty_1 e^{-ax} = \frac{e^{-a}}{a} + \frac{e^{-a}}{a^2} = \frac{a+1}{a^2} e^{-a} $$ and continuing on by induction $$ \int^\infty_1 x^k e^{-ax} = \frac{-1}{a} x^k e^{-ax}\mid^\infty_1 - \frac{-k}{a} \int^\infty_1 x^{k-1} e^{-ax} = \frac{e^{-a}}{a} + \frac{k}{a} \int^\infty_1 x^{k-1} e^{-ax} $$ so that $$ I(k) = \frac{e^{-a}}{a} + \frac{k}{a} I(k-1) $$ and $I(0) = \frac{e^{-a}}{a}$. |
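The recurrence in the accepted answer sidesteps the near-zero quadrature problem entirely, since the original improper integral (with $k = 2n$) is evaluated exactly. A small sketch of how one might evaluate it, purely as an illustration of the recurrence (my code, not from the thread):

```python
import math

def I(k, a):
    # I(0) = e^{-a}/a ; I(k) = e^{-a}/a + (k/a) * I(k-1)
    val = math.exp(-a) / a
    for j in range(1, k + 1):
        val = math.exp(-a) / a + (j / a) * val
    return val

# sanity check against the closed form derived in the answer:
# I(1) = (a+1)/a^2 * e^{-a}
a = 2.0
assert abs(I(1, a) - (a + 1) / a**2 * math.exp(-a)) < 1e-12
```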
_softwareengineering.22552 | We've all (almost all) heard the horror stories, and perhaps even studied them. It's easy to find stories of software that is over budget and late. I wanted to hear the opposite story from developers. Question: Do you know of, or have you worked on, a project that was on budget and on time? What is the most valuable lesson you learned from it? | Software on Budget and on time? | project management;project | Yep, I've seen it happen. Key elements: 1) Well-defined requirements, clearly agreed, with a solid change control process. 2) Developers involved in the estimates, with no pressure on them to produce estimates which were what the client wanted to hear, just what they really thought would be needed to complete the work properly. 3) Estimates that also took account of risks and uncertainties. 4) Early feedback from the client - we've provided videos and demos (hands-on and hands-off, depending on stability) as early as possible. 5) A stable team whose availability has been realistically figured into the schedule (for instance, if they spend a day a week doing support and admin, then they're only expected to complete 4 days a week of work on the project). It's not rocket science, but removing the commercial pressures and, critically, getting the requirements clear and controlling them is challenging (and is where things normally fall down). |
_unix.34171 | I need to save data from a failing hard drive.Sounds like ddrescue or myrescue (or maybe clonezilla?) will be my best friends here, but I'm just wondering what will likely be faster:using dd/ddrescue/myrescue/clonezilla to simply clone the failing drive to a new drive of identical capacityusing rsync/tar/cp to move files from the failing drive to a new drive?dd-ish choices avoid moving data back and forth between kernel-space and user-space, right? But rsync and others avoid moving empty space, right?Another oddly fortunate bit if I choose a dd-ish solution: the failing drive is currently mounted read-only (part of the failure process, I think) so I guess I don't have to worry about data changing while I'm dd'ing.This is the root partition, so dd would be handy in that I should be able to boot the new drive after it completes. | What's faster, dd 1.5TB or rsync 500GB? | hard disk;performance;dd | No question, rsync will be faster. dd will have to read and write the whole 1.5TB and it will hit every bad block, triggering multiple read retries which will further slow an already long process. rsync will only have to read blocks that matter, and since it is unlikely that every bad block occurs in existing files or directories, rsync will encounter fewer of them.The bad thing about using rsync for disk rescue is that if it does encounter a bad block, it gives up on the file or directory that contains it. If a directory contains a lot of subdirectories and rsync gives up on reading it, then your copy could be missing a lot of what you want to save. The problem is that rsync relies on the filesystem structures to tell it what to copy and the filesystem itself is no longer trustworthy.For this reason I would first use rsync to copy files off the drive, but I would look very carefully at the output to see what was missed. If you can't live without what rsync failed to copy, then use dd or one of the other low level block copying methods. You can then fsck the copy, mount it and see if you can recover more of your files. |
_webmaster.61773 | There was a change in Google Keyword Planner that added a search volume trends graph. How is it calculated? The keyword keyword1 has 800 average monthly searches, while the search volume trends graph shows 3000 - why? | What are search volume trends in AdWords? | seo;google adwords;google keyword tool | null |
_webmaster.95435 | Please consider the following image, from Google Webmaster Tools. I have these unnaturally high link counts coming in from lowish-PageRank domains (Moz score 13). Is this natural? Is this hurting my site? Is this an indication of someone trying to bring my SEO rankings down? The green circle is from a friendly competitor (which is natural, hence green); the other two I don't know. | Unnaturally high number of links from domain | seo;google search console;links;backlinks | Without knowing the domains linking to your website, and indeed your own domain, it is impossible to be 100% certain of the nature of, and reason for, those domains having a number of links pointing to a page on your website, but there is not necessarily any cause for alarm or concern. Firstly, metric-wise, don't worry that a domain linking to you has a low Moz Domain Authority. DA is based on authority passed through linkage, so if a website is relatively new or, for whatever reason, does not have that many external links pointing to it, it does not mean that it is spammy, should not be trusted, or can cause harm. There are many possible instances that could lead to a sitewide link pointing to one of your pages, or multiple links from a domain pointing to one of your pages. When you have multiple links coming from the same domain to the same page on your own website, the sitewide/multiple nature diminishes the links anyhow, to something roughly equivalent to just one link from that domain to your page being counted/paid attention to. Something to note: there is nothing (or at least, very little) you can detect as unnatural in Google Search Console, as it is quite a basic tool that only touches the surface of website behaviour. If you want to check for any malicious activity in relation to your website and external linkage, ensure that you don't have an abnormally high count of unnatural and spammy-looking exact-match keyword anchors pointing to your website. This is the quickest way a competitor can win with negative SEO against you. Hope that helps - of course, we could provide more information knowing the domains in question censored in your screenshot. |
_cs.30772 | Consider the Mergesort algorithm on inputs of size $n = 2^k$. Normally, this algorithm would have a recursion depth of $k$. Suppose that we modify the algorithm so that after $k/2$ levels of recursion, it switches over to insertion sort. What is the running time of this modified algorithm? I know that Mergesort has a worst-case runtime of $O(n \log n)$ and insertion sort has a worst-case runtime of $O(n^2)$, but I'm not sure how these bounds are affected in the problem above. | What is the runtime of Mergesort if we switch to Insertion Sort at logarithmic depth? | algorithms;algorithm analysis;runtime analysis;sorting | null |
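No answer is recorded for this one; for the curious reader, here is a sketch of the standard way to carry the two bounds through (my working, not from the thread). After $k/2$ levels there are $2^{k/2} = \sqrt{n}$ subarrays, each of size $n/2^{k/2} = \sqrt{n}$, so

$$\underbrace{2^{k/2} \cdot O\!\big((\sqrt{n})^2\big)}_{\text{insertion sort on the leaves}} = \sqrt{n}\cdot O(n) = O(n^{3/2}), \qquad \underbrace{\frac{k}{2}\cdot O(n)}_{\text{merging the } k/2 \text{ levels}} = O(n\log n),$$

and the $O(n^{3/2})$ term dominates.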
_opensource.2821 | I searched all over the Internet but I was not able to find if anyone has raised this issue. I want to edit some files in an open source sample project provided by Google (change some methods and the package name of all the files), and then deploy the app on the Play Store.I have the following questions:Can I do this?If I can, do I need to keep any license declaration in my app, or do I need to open source my project?NOTE: I know that the Google samples are licensed under the Apache License 2.0. But I am not able to find the answers to my questions in the licensing terms. | Can we edit Google Android sample project and create app | licensing;apache 2.0;proprietary code;open source definition | null |
_softwareengineering.60872 | I have a list of products. Each of them is offered by N providers. Each provider quotes us a price for a specific date. That price is effective until that provider decides to set a new price. In that case, the provider will give the new price with a new date. The MySQL table header currently looks like: provider_id, product_id, price, date_price_effective. Every other day, we compile a list of products/prices that are effective for the current day. For each product, the list contains a sorted list of the providers that have that particular product. In that way, we can order certain products from whoever happens to offer the best price. To get the effective prices, I have a SQL statement that returns all rows that have date_price_effective >= NOW(). That result set is processed with a Ruby script that does the sorting and filtering necessary to obtain a file that looks like this: product_id_1,provider_1,provider_3,provider8,provider_10... product_id_2,provider_3,provider_2,provider1,provider_10... This works fine for our purposes, but I still have an itch that a SQL table is probably not the best way to store this kind of information. I have a feeling that this kind of problem has been solved previously in other, more creative ways. Is there a better way to store this information than in SQL? Or, if using SQL, is there a better approach than the one I'm using? | How to store prices that have effective dates? | design patterns;database | For items that vary based on time (such as being able to answer things like what was the price of X on date D, or which cow was in feedlot Q on date E), I recommend reading the book Developing Time-Oriented Database Applications in SQL. While this book is out of print, the author has graciously made the PDF of the book, as well as the associated CD, available on his website: http://www.cs.arizona.edu/~rts/publications.html (look for the first item under books). For a brief introduction online, see: http://talentedmonkeys.wordpress.com/2010/05/15/temporal-data-in-a-relational-database/ http://martinfowler.com/eaaDev/TemporalProperty.html |
_cseducators.2800 | One of the next big things that will get people interested is blockchain-based technology. Many people are asking WTH a blockchain is, and it seems that this will be asked quite a lot in the coming days, both by CS and non-CS folks. Now, I love explaining things using an analogy. Is there some analogy that I could use to ELI5 to non-CS folks who are curious? (I'd really appreciate answers, as I will simply link to this question for people asking about blockchains.) | How do I explain blockchain using an analogy? | teaching analogy;layperson | Use a classroom activity, then present that as the analogy. An old campfire activity, for those that remember it: a growing story that nobody knows the end of, or even whether it will end. The objective is to create a story, with everyone adding their parts in turn. Someone starts the story by saying a few lines, and ending mid-sentence, just before some action happens. The next person repeats what the first person said, and has to finish that sentence with something that makes sense, and then continues the story using their own idea, since they don't know what the first person was thinking. Like the first person, the second stops mid-sentence in what they're saying. The third person repeats everything the second said (which includes what the first one said), and finishes the sentence left incomplete by the second. Adding more lines to the story, this person also ends mid-sentence. It continues in a similar fashion, with each person repeating the whole story from the beginning, adding a couple of lines, and ending mid-sentence. At some point the next person will not be able to repeat, even in their own words, the story so far, and the chain fails, hence ending. If it happens to be too short, or if not every student has had the opportunity to participate, a new story may be started and the activity tried again. Once everyone has had a chance to participate, in as many rounds as you deem appropriate, you can pick up the chain, finish the sentence, and finish the story, bringing it to a successful completion. After the story, or stories, have run their course, you can relate the process to a blockchain. Each piece (except the first) depends upon the preceding piece, and is meaningless without it. If, at any point, someone in the chain doesn't hold up their part of the contract by completing the previous sentence, the story is ruined. Still, even without an end, everything up to the last incomplete sentence remains valid and can be traced back to the original piece. This is totally non-technical, so it will not be of value in discussing blockchain implementations. It will, however, be memorable, and the students should be able to grasp the concepts behind a blockchain implementation when you do present it. |
_unix.211231 | How can I read out the WLAN RSSI value for each radio channel from the command line? The target system would be either Ubuntu 15.04 or Raspbian on a Raspberry Pi. By RSSI, I mean the raw received power, before any WLAN-specific L1 operations. This is pretty much the same way as in 3GPP WCDMA, where the RSSI value means the raw energy received at the antenna. Not just the received code power, not the signal-to-noise ratio; just the overall received signal power, containing both the payload signal and any possible noise. The only solution I have found so far is wavemon: when started with the parameter -d, it will print out signal and noise values, and I can grep them out easily. But are there other possibilities, or is there even some ready-made utility to scan for noise over all WLAN channels? The reason for this question is that both of my home 2.4G band WLAN networks have random but frequent problems that block usage of both networks simultaneously. The problems are not related to the WLAN base station HW, channel numbers, or any of my own HW - all of those have already been eliminated. The problems are not visible on my 5G band WLAN networks - those operate well even during the 2.4G band problems. I'm now suspecting that my 2.4G band WLAN networks are victims of some external noise. I need to collect more evidence, and my plan was to set up one Ubuntu or Raspberry device to continuously scan over the 2.4G WLAN channels, and to combine the resulting long-term information with e.g. ping status over my 2.4G WLAN networks. Additional information: I found one utility: https://github.com/simonwunderlich/FFT_eval This one uses the laptop's existing WLAN card (assuming the card has a certain chipset) to make a proper FFT scan over the WLAN band. Here is an example measurement: I'll try to tweak this utility so that I get regular (like once per 10 seconds or so) scan results stored to a file. | How to scan WLAN RSSI in command line? | wlan | null |
_computergraphics.100 | I often find myself copy-pasting code between several shaders. This includes both certain computations or data shared between all shaders in a single pipeline, and common computations which all of my vertex shaders need (or any other stage). Of course, that's horrible practice: if I need to change the code anywhere, I need to make sure I change it everywhere else. Is there an accepted best practice for keeping DRY? Do people just prepend a single common file to all their shaders? Do they write their own rudimentary C-style preprocessor which parses #include directives? If there are accepted patterns in the industry, I'd like to follow them. | Sharing code between multiple GLSL shaders | glsl | There's a bunch of approaches, but none is perfect. It's possible to share code by using glAttachShader to combine shaders, but this doesn't make it possible to share things like struct declarations or #define-d constants. It does work for sharing functions. Some people like to use the array of strings passed to glShaderSource as a way to prepend common definitions before your code, but this has some disadvantages: It's harder to control what needs to be included from within the shader (you need a separate system for this.) It means the shader author cannot specify the GLSL #version, due to the following statement in the GLSL spec: The #version directive must occur in a shader before anything else, except for comments and white space. Due to this statement, glShaderSource cannot be used to prepend text before the #version declarations. This means that the #version line needs to be included in your glShaderSource arguments, which means that your GLSL compiler interface needs to somehow be told what version of GLSL is expected to be used. Additionally, not specifying a #version will make the GLSL compiler default to using GLSL version 1.10. If you want to let shader authors specify the #version within the script in a standard way, then you need to somehow insert #include-s after the #version statement. This could be done by explicitly parsing the GLSL shader to find the #version string (if present) and make your inclusions after it, but having access to an #include directive might be preferable to control more easily when those inclusions need to be made. On the other hand, since GLSL ignores comments before the #version line, you could add metadata for includes within comments at the top of your file (yuck.) The question now is: Is there a standard solution for #include, or do you need to roll your own preprocessor extension? There is the GL_ARB_shading_language_include extension, but it has some drawbacks: It is only supported by NVIDIA (http://delphigl.de/glcapsviewer/listreports2.php?listreportsbyextension=GL_ARB_shading_language_include) It works by specifying the include strings ahead of time. Therefore, before compiling, you need to specify that the string /buffers.glsl (as used in #include /buffers.glsl) corresponds to the contents of the file buffer.glsl (which you have loaded previously). As you may have noticed in point (2), your paths need to start with /, like Linux-style absolute paths. This notation is generally unfamiliar to C programmers, and means you can't specify relative paths. A common design is to implement your own #include mechanism, but this can be tricky since you also need to parse (and evaluate) other preprocessor instructions like #if in order to properly handle conditional compilation (like header guards.) If you implement your own #include, you also have some liberties in how you want to implement it: You could pass strings ahead of time (like GL_ARB_shading_language_include). You could specify an include callback (this is done by DirectX's D3DCompiler library.) You could implement a system that always reads directly from the filesystem, as done in typical C applications. As a simplification, you can automatically insert header guards for each include in your preprocessing layer, so your processor layer looks like: if (#include and not_included_yet) include_file(); (Credit to Trent Reed for showing me the above technique.) In conclusion, there exists no automatic, standard, and simple solution. In a future solution, you could use some SPIR-V OpenGL interface, in which case the GLSL to SPIR-V compiler could be outside of the GL API. Having the compiler outside the OpenGL runtime greatly simplifies implementing things like #include since it's a more appropriate place to interface with the filesystem. I believe the current widespread method is to just implement a custom preprocessor that works in a way any C programmer should be familiar with. |
_unix.170346 | In a CentOS 7 server, I want to get the list of selectable units for which journalctl can produce logs. How can I change the following code to accomplish this?

    journalctl --output=json-pretty | grep -f UNIT | sort -u

In the CentOS 7 terminal, the above code produces "grep: UNIT: No such file or directory".

EDIT: The following Java program is terminating without printing any output from the desired grep. How can I change things so that the Java program works in addition to the terminal version?

    String s;
    Process p;
    String[] cmd = {"journalctl --output=json-pretty ", "grep UNIT ", "sort -u"};
    try {
        p = Runtime.getRuntime().exec(cmd);
        BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
        while ((s = br.readLine()) != null)
            System.out.println("line: " + s);
        p.waitFor();
        System.out.println("exit: " + p.exitValue() + ", " + p.getErrorStream());
        BufferedReader br2 = new BufferedReader(new InputStreamReader(p.getErrorStream()));
        while ((s = br2.readLine()) != null)
            System.out.println("error line: " + s);
        p.waitFor();
        p.destroy();
    } catch (Exception e) {}
 | list of selectable units for journalctl | grep;systemd | journalctl can display logs for all units - whether these units write to the log is a different matter.

To list all available units, and therefore all units available for journalctl to use:

    systemctl list-unit-files --all

As for your Java code: in order to make pipes work with Runtime.exec(), you could either put the command in a script and invoke the script, or use a string array that hands the pipeline to a shell, something like:

    String[] cmd = {"sh", "-c", "command1 | command2 | command3"};
    p = Runtime.getRuntime().exec(cmd);

or:

    Runtime.getRuntime().exec(new String[]{"sh", "-c", "command1 | command2 | command3"});
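Putting that together for the pipeline in the question, a minimal working version could look like this (a sketch; it assumes journalctl is on the PATH and that printing the matching lines is all that is needed):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class ListUnits {
        public static void main(String[] args) throws Exception {
            // Let a shell interpret the pipes instead of passing them to exec() directly.
            String pipeline = "journalctl --output=json-pretty | grep UNIT | sort -u";
            Process p = Runtime.getRuntime().exec(new String[]{"sh", "-c", pipeline});

            BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = out.readLine()) != null)
                System.out.println(line);

            p.waitFor();
        }
    }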
_scicomp.10843 | I'm dealing with the Jacobi iterative method for solving sparse systems of linear equations. For small matrices it works well and gives the right answers even if the matrix is not strictly diagonally dominant; however, for the case of really big matrices ($100000 \times 100000$) it does not converge, because the matrix is not diagonally dominant. Many articles suggest interchanging rows and columns in order to make the matrix diagonally dominant, but my matrix cannot be made diagonally dominant that way. Could anyone please suggest how to deal with this problem? Maybe there is some method for choosing the right initial approximation, or maybe a more robust algorithm exists. I'm a newcomer in this field and would appreciate any help. | Problem with convergence of Jacobi iterative algorithm | matrices;linear solver;convergence | The Jacobi iteration is the worst possible solver for linear systems. Furthermore, contrary to your belief (but easy to show), it is entirely independent of the ordering of the unknowns, so reordering rows and columns of the system makes absolutely no difference.

There are many better methods for solving linear systems, among them CG and GMRES, and there are many good books on the subject (e.g., the one by Y. Saad). My take on many of the issues with solvers and preconditioners is given in lectures 34-38 at http://www.math.tamu.edu/~bangerth/videos.html .
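For example, with SciPy's sparse solvers the switch is essentially a one-line change (a sketch; the 3x3 matrix below is a stand-in for your large sparse system, and argument names may differ slightly between SciPy versions):

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import cg, gmres

    # Tiny stand-in for the real 100000 x 100000 sparse matrix.
    A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))
    b = np.array([1.0, 2.0, 3.0])

    # GMRES handles general (nonsymmetric, non-diagonally-dominant) systems.
    x, info = gmres(A, b, tol=1e-8)   # info == 0 means it converged

    # If the matrix is symmetric positive definite, CG is usually preferable.
    x, info = cg(A, b, tol=1e-8)

For very large systems, pairing either solver with a suitable preconditioner matters at least as much as the choice of solver itself.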
_codereview.3138 | I have a simple implementation for a LRU cache using LinkedHashMap. I want it to be as generic as possible. This is not for production use, just practice, so I don't care if it's thoroughly robust, as long as it is correct. However, I will welcome any comments, especially the ones which might make this better with simple changes :) Are there any other ways of doing this?

    import java.util.Collections;
    import java.util.LinkedHashMap;
    import java.util.Map;

    class LRUCache<E> {

        @SuppressWarnings("unchecked")
        LRUCache(int size) {
            fCacheSize = size;

            // If the cache is to be used by multiple threads,
            // the hashMap must be wrapped with code to synchronize
            fCacheMap = Collections.synchronizedMap(
                // true = use access order instead of insertion order
                new LinkedHashMap<Object, E>(fCacheSize, .75F, true) {
                    @Override
                    public boolean removeEldestEntry(Map.Entry eldest) {
                        // when to remove the eldest entry
                        return size() > 99; // size exceeded the max allowed
                    }
                }
            );
        }

        public void put(Object key, E elem) {
            fCacheMap.put(key, elem);
        }

        public E get(Object key) {
            return fCacheMap.get(key);
        }

        private Map<Object, E> fCacheMap;
        private int fCacheSize;
    } | LinkedHashMap as LRU cache | java;cache;collections | null
_unix.338543 | I am unable to figure out where exactly the problem on my disk is. As per the screenshot, it says that there is an Input/Output error. | lvs shows Input/Output error | linux;lvm | The error indicated is happening at 4 different offsets (sectors) of your /dev/sdao device:

    0
    4096
    75161862144
    75161919488

How you determined that it is not a hardware failure is beyond me, as that is most likely the case.
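If you want to double-check the hardware diagnosis before replacing the disk, something like the following is a reasonable next step (assuming smartmontools is installed; /dev/sdao as in your output):

    # SMART health status and error log for the disk
    smartctl -a /dev/sdao

    # Read-only scan for unreadable sectors (slow on large disks)
    badblocks -sv /dev/sdao

Reallocated or pending sectors in the SMART output, or any blocks reported by badblocks, would confirm a failing drive.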
_cs.26389 | Couldn't the type inference in Apple's new programming language Swift have been made more aggressive? For instance, why can't the return type of a function be deduced?

    func sayHello(personName: String) -> String {
        let greeting = "Hello, " + personName + "!"
        return greeting
    } | Why isn't the Swift programming language type inference more aggressive? | programming languages;type inference | null
_unix.69185 | When I execute the following command to get CPU usage, I get nice + user CPU usage:

    top -b -n1 | grep "Cpu(s)" | awk '{print $2 + $4}'

Output: 14.5

The problem I am having is that the output depends on top's first sample, so it doesn't update the way interactive top does: it keeps giving the same output instead of the current CPU usage.

I want to get real-time CPU usage in the output. Please help me improve my command. | Getting cpu usage same every time. | command line;cpu | null
_softwareengineering.288925 | While chasing a segfault around a complicated and grouchy C++ program, I added several //comments and cout statements, but no 'actual' code. Then, suddenly, for no apparent reason, the segfault vanished.

I'm happy, but still a little worried, because I don't think I fixed anything, and there was clearly something wrong. How can I debug a problem that has disappeared? (Sadly, I don't have a version that's still giving a segfault; any older versions have other problems.)

As an aside, do you think I am mistaken in thinking that I have only added //comments and cout statements? Is it more likely that I accidentally altered something else? | How to debug a program after it appears to fix itself | c++;debugging | Getting a segmentation fault only happens when you have invoked undefined behaviour. And undefined behaviour means that the normal rules of a programming language don't apply: whatever the run-time system does is by definition OK, and you don't get to complain about it. It might even do the expected thing, just to confuse you.

In particular, adding debug statements can change a program so that it crashes where it didn't before, or vice versa. Indeed, this is expected, because detecting memory access violations depends on how precisely things are laid out in memory, and any code you add changes these details.

Therefore, your program was definitely wrong before, and if you don't get crashes now, it is almost certainly still wrong, only less obviously wrong. It is much more likely that introducing debug messages changed the variety of undefined behaviour you get than that you fixed your logic and didn't notice.
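To make this concrete, here is a representative (hypothetical) example of the kind of bug that behaves this way:

    #include <iostream>

    int main() {
        int buf[4];
        buf[4] = 42;          // out-of-bounds write: undefined behaviour
        std::cout << "ok\n";  // adding or removing lines like this can move buf
                              // around in memory and hide or expose the crash
    }

Tools such as Valgrind or AddressSanitizer (-fsanitize=address in GCC and Clang) catch the invalid write deterministically, whether or not the program happens to crash - which is exactly how you debug a problem that has "disappeared".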
_codereview.40067 | I'm trying to come up with an alternative to the gmean implementation in scipy, because it is awkwardly slow. To that end I've been looking into alternate calculation methods and implementing them in numpy. My only issue is that, of the two methods I'm implementing, the one that I feel ought to be faster is in fact much slower. I feel like this is an issue with my implementation and would love advice on improving it.

    import math
    import numpy as np

    def fast_gmean(vector, chunk_size=1000):
        base, exponent = np.frexp(vector)
        exponent_sum = float(np.sum(exponent))
        while len(base) > 1:
            base = np.array_split(base, math.ceil(float(base.size)/chunk_size))
            intermediates = np.array([np.prod(split) for split in base])
            base, current_exponent = np.frexp(intermediates)
            exponent_sum += np.sum(current_exponent)
        return (base[0]**(1.0/vector.size)) * (2**(exponent_sum/vector.size))

    def actually_fast_gmean(vector):
        return np.exp(np.mean(np.log(vector)))

While these both outperform scipy's gmean implementation, the second method is about 33% faster than the first.

Note: I'm testing this on arrays of approximately 5000 entries. | Optimizing numpy gmean calculation | python;numpy | 1. Checking your claim

You claim that "these both outperform scipy's gmean implementation", but I can't substantiate this. For example:

    >>> import numpy
    >>> data = numpy.random.exponential(size=5000)
    >>> from timeit import timeit
    >>> timeit(lambda:fast_gmean(data), number=10000)
    5.540040018968284
    >>> timeit(lambda:actually_fast_gmean(data), number=10000)
    1.4999530320055783
    >>> from scipy.stats import gmean
    >>> timeit(lambda:gmean(data), number=10000)
    1.4939542019274086

So as far as I can tell, there's no significant difference in runtime between your actually_fast_gmean and scipy.stats.gmean, and your fast_gmean is more than 3 times slower.

So I think you need to give us more information. What's the basis for your claim about performance? What kind of test data are you using?

(Update: in comments it turned out that you were using scipy.stats.mstats.gmean, which is a version of gmean specialized for masked arrays.)

2. Read the source!

If you look at the source code for scipy.stats.gmean, you'll see that it's almost exactly the same as your actually_fast_gmean, except that it's more general (it takes dtype and axis arguments):

    def gmean(a, axis=0, dtype=None):
        if not isinstance(a, np.ndarray):
            # if not an ndarray object attempt to convert it
            log_a = np.log(np.array(a, dtype=dtype))
        elif dtype:
            # Must change the default dtype allowing array type
            if isinstance(a, np.ma.MaskedArray):
                log_a = np.log(np.ma.asarray(a, dtype=dtype))
            else:
                log_a = np.log(np.asarray(a, dtype=dtype))
        else:
            log_a = np.log(a)
        return np.exp(log_a.mean(axis=axis))

So it's not surprising that these two functions have almost identical runtimes.

3. Why fast_gmean is slow

Your strategy is to avoid calls to log by performing arithmetic on the exponent and mantissa parts of the floating-point numbers. Very roughly speaking, for each element of the input, you avoid one call to each of log and mean, and gain one call to each of frexp, sum, array_split and prod.

    >>> from numpy import log, mean, frexp, sum, array_split, prod
    >>> for f in log, mean, frexp, sum, prod:
    ...     print(f.__name__, timeit(lambda:f(data), number=10000))
    log 1.0724926821421832
    mean 0.3662677980028093
    frexp 0.34479621006175876
    sum 0.21649421006441116
    prod 0.280590218026191
    >>> timeit(lambda:array_split(data, 5), number=10000)
    2.1635821380186826

So it's the call to numpy.array_split that's costly.
You could avoid this call and split the array yourself, like this:

    def fast_gmean2(vector, chunk_size=1000):
        base, exponent = np.frexp(vector)
        exponent_sum = np.sum(exponent)
        while base.size > 1:
            intermediates = []
            for i in range(0, base.size, chunk_size):
                intermediates.append(np.prod(base[i:i + chunk_size]))
            base, current_exponent = np.frexp(np.array(intermediates))
            exponent_sum += np.sum(current_exponent)
        return base[0] ** (1.0/vector.size) * 2 ** (exponent_sum/vector.size)

and this is roughly twice as fast as your version:

    >>> timeit(lambda:fast_gmean2(data), number=10000)
    2.585187505930662

but still about twice as slow as scipy.stats.gmean, and that's because of the Python interpreter overhead. Numpy has a speed advantage whenever you can vectorize your operations so that they run on fixed-size datatypes in the Numpy core (which is implemented in C for speed). If you can't vectorize your operations, but have to loop over them in Python, then you pay a penalty.

So let's vectorize that:

    def fast_gmean3(vector, chunk_size=1000):
        base, exponent = np.frexp(vector)
        exponent_sum = np.sum(exponent)
        while len(base) > chunk_size:
            base = np.r_[base, np.ones(-len(base) % chunk_size)]
            intermediates = base.reshape(chunk_size, -1).prod(axis=0)
            base, current_exponent = np.frexp(intermediates)
            exponent_sum += np.sum(current_exponent)
        if len(base) > 1:
            base, current_exponent = np.frexp([base.prod()])
            exponent_sum += np.sum(current_exponent)
        return base[0] ** (1.0/vector.size) * 2 ** (exponent_sum/vector.size)

For arrays of the size we've been testing (about 5000), this is a little slower than fast_gmean2:

    >>> timeit(lambda:fast_gmean3(data), number=10000)
    2.8020136120030656

But for larger arrays it beats gmean:

    >>> bigdata = np.random.exponential(size=1234567)
    >>> timeit(lambda:gmean(bigdata), number=100)
    3.192410137009574
    >>> timeit(lambda:fast_gmean3(bigdata), number=100)
    2.3945167789934203

So the fastest implementation depends on the length of the array.

4. Other comments on fast_gmean

There's no docstring. What does this function do and how do I call it? What value should I pass in for the chunk_size argument?

It's critical that chunk_size is not too large, otherwise the call to prod could underflow and the result will be incorrect. So there needs to be a check that the value is safe, and a comment explaining how you computed the safe range of values.
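On that last point, the safe range can be derived directly: np.frexp returns mantissas in [0.5, 1), so a chunk's product is at least 0.5**chunk_size, and that must stay above the smallest positive normal double. A quick check of the bound:

    import numpy as np

    # We need 0.5**chunk_size >= np.finfo(float).tiny == 2.0**-1022,
    # i.e. chunk_size <= 1022.
    max_safe_chunk = int(-np.log2(np.finfo(float).tiny))
    print(max_safe_chunk)  # 1022 -- the default chunk_size=1000 is just inside the limit

So the default works, but only barely; the docstring and a validation check should say so.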
_unix.48223 | I'm trying to run strace through ccze, and the pipe doesn't work as expected. The command line I'm running to test is

    sudo strace -p $(pgrep apache2) | grep open

and all lines are output, ignoring grep. Is there something special about strace that causes this behavior? | piping strace to grep | io redirection;strace | strace prints its traces on standard error, not on standard output. That's because it's common to want to redirect the standard output of the traced program, but usually not a problem that strace's stderr and the program's stderr are mixed.

So you should redirect strace's stderr to stdout to be able to pipe it:

    sudo strace -p $(pgrep apache2) 2>&1 | grep open

except that what you're really looking for is

    sudo strace -p $(pgrep apache2) -e open
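For the original goal of colorizing with ccze, the same redirection applies (the -A flag should put ccze into raw ANSI mode, which behaves better in a plain pipe; check your version's man page):

    sudo strace -p $(pgrep apache2) 2>&1 | ccze -A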
_codereview.15513 | I am trying to place JButtons from an array onto a JFrame. The way I'm doing it is having it test how many more buttons are left, and whether buttons have been placed at the edge of the frame. The end result is an ugly piece of code.

    JButton[] grid = new JButton[2501];
    JFrame MapFrame = new JFrame();

    public void makeMap() {
        MapFrame.setBounds(40, 0, 750, 773);
        int x = 0;
        int y = 0;
        for (int i = 0; i < grid.length; i++) { // grid is the JButton Array
            if (x > 749) {
                x = 0;
                y = y + 15;
            }
            grid[i] = new JButton();
            grid[i].setBounds(x, y, 15, 15);
            MapFrame.add(grid[i]);
            x = x + 15;
        }
        MapFrame.setVisible(true);
        MapFrame.repaint();
    }

The code just looks so bulky, with variables being changed in different braces, and with so many braces. How could I make this more elegant? (Please don't recommend layouts, as none of them fit my requirements.) | How can I make placing JButtons from an array more elegant? | java;swing | In order to find a more elegant solution, we first need to identify the problems; only then can we solve them.

Magic Numbers

One of the confusing things about this snippet is that we constantly (pun intended) come across magic values such as 15 and 749. What if the map gets bigger in the future, or the dimensions of the buttons change? Solution: define constants.

Note: I used the Java naming convention for constants, which is SHOUTY_CASE, although I dislike it, because unified coding standards when sharing code trump personal preferences.

    private static final int NUMBER_OF_BUTTONS = 2501;
    private static final int BUTTON_SIDE = 15;

Unnecessary use of array

Why are you copying every button into your grid before adding them to the mapFrame? If you aren't accessing them from the array later on, you can get rid of all the access-by-index complexity:

    JButton button = new JButton();
    button.setBounds(x, y, BUTTON_SIDE, BUTTON_SIDE);
    mapFrame.add(button);

Function doing too many things

The empty lines that you have used to split your code into sections are a code smell: they indicate that your function is doing too many different things, and therefore needs to be divided into several functions, each doing one thing. To quote Robert C. Martin, author of Clean Code: A Handbook of Agile Software Craftsmanship: "Functions should do one thing. They should do it well. They should do it only."

Using complex syntax for simple things

Code like the following two examples from your code snippet

    x = x + 15;
    y = y + 15;

can be shortened by using the combined += operator.

Refactored

    private static final int NUMBER_OF_BUTTONS = 2501;
    private static final int BUTTON_SIDE = 15;

    private JFrame mapFrame = new JFrame();

    public void createAndShowMap() {
        mapFrame.setBounds(40, 0, 750, 773);
        addButtons();
        mapFrame.setVisible(true);
    }

    private void addButtons() {
        int leftOffset = 0;
        int topOffset = 0;
        for (int i = 0; i < NUMBER_OF_BUTTONS; i++) {
            if (isBeyondEndOfLine(leftOffset)) {
                leftOffset = 0;
                topOffset += BUTTON_SIDE;
            }
            addButtonAt(leftOffset, topOffset);
            leftOffset += BUTTON_SIDE;
        }
    }

    private boolean isBeyondEndOfLine(int x) {
        return x >= mapFrame.getBounds().width;
    }

    private void addButtonAt(int x, int y) {
        JButton button = new JButton();
        button.setBounds(x, y, BUTTON_SIDE, BUTTON_SIDE);
        mapFrame.add(button);
    }
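One more habit worth forming: build Swing UIs on the Event Dispatch Thread. Assuming the refactored methods live in a class called ButtonMap (a placeholder name), the entry point would look like:

    public static void main(String[] args) {
        javax.swing.SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                // All Swing component creation and updates belong on the EDT.
                new ButtonMap().createAndShowMap();
            }
        });
    }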
_softwareengineering.63918 | Is it conceptually feasible to have, on a PostgreSQL cluster, a transactional database and at the same time a data warehouse that is fed by the transactional database? | Transactional database and Datawarehouse on the same Postgresql cluster? | database;database design;cluster | null
_codereview.93692 | I want to keep trying to get a response until its code is either 200 or unknown. In the first case it should be stored in the response variable. In the other case I should raise some kind of exception.

    response = nil
    1.times do
      response = begin
        http.request request
      rescue Net::ReadTimeout
        puts "Net::ReadTimeout"
        retry
      end
      case response.code
      when "503"
        puts "servers are busy at #{Time.now}?"
        sleep 5
        redo
      when "200"
        "ok"
      else
        fail "#{response.code} at '#{request.path}'"
      end
    end

The 1.times thing is taken from SO. | Retry storing HTTP response into a variable until specific code | ruby;error handling;http | null
_codereview.19712 | This is a project I am working on, which generates an HTML table out of a query result (the result DataTable of an SQL command via SP). This section of the project is the one that generates the headers of the table according to the list of columns of each table selected (from the database's list of tables). What I am trying to review here is the section that's responsible for generating HTML markup programmatically. I would like to know your opinion and what you would do differently.

In aspx (I will move this back after tests are completed):

    <%
        //instantiating class to get a List<string> returned (of all db table-columns)
        // this code is implementing the GetClassFields class
        // you can see its code in section #2 below (helpers section)
        var TableColscls = GetClassFields.AsListStr(GetClassFields.SelectedClass.tables, tbls.TblTimeCPAReport);
    %>

Then using the list above for the generated HTML table:

    <div style="width:90%;" dir="rtl"><%=RenderHtmlTblHeaders(TableColscls)%></div>

.cs code behind:

    // some usings
    using Lsts = HTMLGenerator.dataSource.List;

    // this is what i call HTML TABLE GENERATOR
    // or DB TO HTML Tables Adapter
    public string RenderHtmlTblHeaders(List<string> SelectedListStr)
    {
        List<string> OmittedCols = new List<string>();
        OmittedCols.Add(imprtCPAcols.tbName);
        OmittedCols.Add(imprtCPAcols.tbIdentCol);
        StringBuilder NwTRLoopSB = new StringBuilder();
        string curRowStyle = string.Empty,
               nwLine = Environment.NewLine + "\t\t\t",
               BaseTemplateTD = string.Empty;
        NwTRLoopSB.Append(
            string.Format(
                "<table id='tbl_Settings' cellspacing='0' border='1'><tr id='TR_headers'{0}>{1}",
                curRowStyle, nwLine
            )._Dhtml_DoubleQoutes()
        );

        // a new approach i've discovered (in one of the posts on SO)
        // to have a counter with foreach loops
        foreach (var Item in SelectedListStr.Select((Val, counter) => new { Value = Val, Index = counter }))
        {
            if (Lsts.ExcludeColumns(Item.Value, OmittedCols))
            {
                BaseTemplateTD = string.Format("<td>{0}</td>{1}", Item.Value, nwLine)._Dhtml_DoubleQoutes();
                NwTRLoopSB.Append(BaseTemplateTD);
            }
        }
        // End TR cells generator section

        NwTRLoopSB.Append("</tr></table>");
        return NwTRLoopSB.ToString();
    }

.cs helper namespaces and classes - the code blocks below are extracted by relevance to this project, as the whole file serves all of my projects as a bunch of helpers.

Extensions - this one is used to avoid the use of \" within formatted text:

    /// <summary>
    /// Replaces a single Quote with Double. Used for html Attributes:
    /// </summary>
    public static string _Dhtml_DoubleQoutes(this string NewTRString)
    {
        return NewTRString.Replace("'", "\"");
    }

Class to list:

    // using reflection to list / extract all fields of a given class
    public class GetClassFields
    {
        public enum SelectedClass { tables, columns, ColHeaders }

        public List<string> AsListStr(SelectedClass tabls_Cols_StoerdProc, string TableName)
        {
            var tbls = new HTDB_Tables();
            HTDB_Cols Cols = new HTDB_Cols();
            var ColHeds = new Htdb_PresentedHebColHeaders();
            switch (tabls_Cols_StoerdProc)
            {
                case SelectedClass.tables:
                    return typeof(HTDB_Tables).GetFields()
                        .Select(f => f.GetValue(tbls).ToString()).ToList<string>();
                case SelectedClass.columns:
                    return typeof(HTDB_Cols).GetNestedTypes()
                        .First(t => String.Compare(t.Name, TableName, true) == 0)
                        .GetFields()
                        .Select(f => f.GetValue(Cols).ToString())
                        .ToList<string>();
                case SelectedClass.ColHeaders:
                    return typeof(Htdb_PresentedHebColHeaders).GetNestedTypes()
                        .First(t => String.Compare(t.Name, TableName, true) == 0)
                        .GetFields()
                        .Select(f => f.GetValue(ColHeds).ToString())
                        .ToList<string>();
                default:
                    return typeof(HTSPs.GetWorkerNameAndDataForTcReportCPABySnif_Fields).GetNestedTypes()
                        .First(t => String.Compare(t.Name, TableName, true) == 0)
                        .GetFields()
                        .Select(f => f.GetValue(null) as string)
                        .ToList();
            }
        }
    }

HTML Generator (this is the short version; you could see my other post for a longer version). Another method to produce style-background-color as the bgColor for the alternation of rows within the generated HTML table:

    public class HTMLGenerator
    {
        // i guess i will add what's in the cs code behind to this next section of helpers
        public class HTMFactory
        {
            // TablesAlternatingRow
            public static string DynamicStyle_Generator(
                int RowCounter = -1,
                Dictionary<string, string> StyleAttributeDict = null)
            {
                string BaseStyle = "", propTerminator = "'", BgCol = "";
                StringBuilder StylerSB = new StringBuilder();
                BgCol = "";
                bool bgclaltrnator;
                if (RowCounter >= 0)
                {
                    RowCounter++;
                    bgclaltrnator = (RowCounter % 2) == 0;
                    if (bgclaltrnator)
                        BgCol = "#70878F";
                    else
                        BgCol = "#E6E6B8";
                }
                BaseStyle = string.Format("style='background-color:{0};", BgCol);
                ///string.Format("{0}:{1};", StlProps.BgColor, val);
                return string.Concat(BaseStyle, StyleAttributeDict, propTerminator);
            }
        }

        // when inside the loop this will supply the correct data source
        // that will be the content of the table cells
        // for now it is the selector of which column to omit that
        // i have placed here...
        public class dataSource
        {
            public sealed class List
            {
                public static bool ExcludeColumns(string ListItem, List<string> OmittedColumns)
                {
                    bool Ret = false;
                    foreach (string col in OmittedColumns)
                    {
                        Ret = string.Compare(ListItem, col) == 0;
                        if (Ret) return false;
                    }
                    return true;
                }
            }
        }
    } | DataTable 'adapter' to HTML table generator | c#;performance;html;asp.net | null
_softwareengineering.312978 | From the Java 5 language guide: "When you see the colon (:) read it as 'in'."

Why not use in in the first place, then? This has been bugging me for years, because it's inconsistent with the rest of the language. For example, in Java there are implements, extends and super for relations between types, instead of symbols as in C++, Scala or Ruby.

In Java, the colon is used in 5 contexts. Three of these are inherited from C, and the other two were endorsed by Joshua Bloch - at least, that is what he says during "The Closures Controversy" talk. It comes up when he criticises the use of a colon for mapping as inconsistent with the for-each semantics, which to me seems odd, because it's the for-each that abused the expected patterns, like list_name/category: elements or label/term: meaning.

I've snooped around the JCP and JSR pages but found no sign of a mailing list, and Google turns up no discussions on this matter - only newbies confused by the meaning of the colon in for.

The main arguments against in provided so far:

- it requires a new keyword; and
- it complicates lexing.

Let's look at the relevant grammar definitions:

    statement
        : 'for' '(' forControl ')' statement
        | ...
        ;

    forControl
        : enhancedForControl
        | forInit? ';' expression? ';' forUpdate?
        ;

    enhancedForControl
        : variableModifier* type variableDeclaratorId ':' expression
        ;

Changing : to in would seem to bring no additional complexity and require no new keyword. | Why for-each has colon instead of in? | java | Normal parsers, as they are generally taught, have a lexer stage before the parser touches the input. The lexer (also scanner or tokenizer) chops the input into small tokens that are annotated with a type. This allows the main parser to use tokens as terminal elements rather than having to treat each character as a terminal, which leads to noticeable efficiency gains. In particular, the lexer can also remove all comments and white space. However, a separate tokenizer phase means that keywords cannot also be used as identifiers (unless the language supports stropping, which has somewhat fallen out of favour, or prefixes all identifiers with a sigil like $foo).

Why? Let's assume we have a simple tokenizer that understands the following tokens:

    FOR = 'for'
    LPAREN = '('
    RPAREN = ')'
    IN = 'in'
    IDENT = /\w+/
    COLON = ':'
    SEMICOLON = ';'

The tokenizer will always match the longest token, and prefer keywords over identifiers. So interesting will be lexed as IDENT:interesting, but in will be lexed as IN, never as IDENT:in. A code snippet like

    for(var in expression)

will be translated to the token stream

    FOR LPAREN IDENT:var IN IDENT:expression RPAREN

So far, that works. But any variable named in would be lexed as the keyword IN rather than as an identifier, which would break code. The lexer does not keep any state between the tokens, and cannot know that in should usually be a variable except when we are in a for loop. Also, the following code should be legal:

    for(in in expression)

The first in would be an identifier, the second would be a keyword.

There are two reactions to this problem:

Contextual keywords are confusing, let's reuse keywords instead

Java has many reserved words, some of which have no use except for providing more helpful error messages to programmers switching to Java from C++. Adding new keywords breaks code. Adding contextual keywords is confusing to a reader of the code unless they have good syntax highlighting, and makes tooling difficult to implement because it has to use more advanced parsing techniques (see below).

When we want to extend the language, the only sane approach is to use symbols that previously were not legal in the language. In particular, these can't be identifiers. With the foreach loop syntax, Java reused the existing : keyword with a new meaning. With lambdas, Java added a -> keyword which could not previously occur in any legal program (--> would still be lexed as '--' '>' which is legal, and -> might have previously been lexed as '-', '>', but that sequence would be rejected by the parser).

Contextual keywords simplify languages, let's implement them

Lexers are indisputably useful. But instead of running a lexer before the parser, we can run them in tandem with the parser. Bottom-up parsers always know the set of token types that would be acceptable at any given location. The parser can then request the lexer to match any of these types at the current position. In a for-each loop, the parser would be at the position denoted by ● in the (simplified) grammar after the variable has been found:

    for_loop        = for_loop_cstyle | for_each_loop
    for_loop_cstyle = 'for' '(' declaration ';' expression ';' expression ')'
    for_each_loop   = 'for' '(' declaration ● 'in' expression ')'

At that position, the legal tokens are SEMICOLON or IN, but not IDENT. A keyword in would be entirely unambiguous. In this particular example, top-down parsers wouldn't have a problem either, since we can rewrite the above grammar to

    for_loop      = 'for' '(' declaration for_loop_rest ')'
    for_loop_rest = ';' expression ';' expression
    for_loop_rest = 'in' expression

and all the tokens necessary for the decision can be seen without backtracking.

Consider usability

Java has always tended towards semantic and syntactic simplicity. For example, the language doesn't support operator overloading because it would make code far more complicated. So when deciding between in and : for a for-each loop syntax, we have to consider which is less confusing and more apparent to users. The extreme case would probably be

    for (in in in in())
    for (in in : in())

(Note: Java has separate namespaces for type names, variables, and methods. I think this was a mistake, mostly. This does not mean later language design has to add more mistakes.)

Which alternative provides a clearer visual separation between the iteration variable and the iterated collection? Which alternative can be recognized more quickly when you glance at the code? I've found that separating symbols are better than a string of words when it comes to these criteria. Other languages have different values. E.g. Python spells out many operators in English so that they can be read naturally and are easy to understand, but those same properties can make it quite difficult to understand a piece of Python at a glance.
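To see the lexer argument in running code, here is a toy tokenizer (hypothetical, and far simpler than a real Java lexer) that prefers keywords over identifiers; note that it keeps no state, so every in becomes a keyword token:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    class ToyLexer {
        private static final Pattern TOKEN = Pattern.compile("\\s*(\\w+|[();:])");

        static List<String> lex(String source) {
            List<String> tokens = new ArrayList<String>();
            Matcher m = TOKEN.matcher(source);
            while (m.find()) {
                String t = m.group(1);
                if (t.equals("for") || t.equals("in")) {
                    tokens.add("KEYWORD:" + t);   // keywords win over identifiers
                } else if (Character.isLetterOrDigit(t.charAt(0)) || t.charAt(0) == '_') {
                    tokens.add("IDENT:" + t);
                } else {
                    tokens.add("SYMBOL:" + t);
                }
            }
            return tokens;
        }

        public static void main(String[] args) {
            // Both occurrences of "in" come out as KEYWORD tokens - the lexer
            // has no context to tell the variable apart from the loop keyword.
            System.out.println(lex("for (in in expression)"));
        }
    }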
_datascience.21955 | import tensorflow as tf

    x = tf.placeholder(tf.float32, [None,4])  # input vector
    w1 = tf.Variable(tf.random_normal([4,2])) # weights between first and second layers
    b1 = tf.Variable(tf.zeros([2]))           # biases added to hidden layer
    w2 = tf.Variable(tf.random_normal([2,1])) # weights between second and third layer
    b2 = tf.Variable(tf.zeros([1]))           # biases added to third (output) layer

    def feedForward(x, w, b):                 # function for forward propagation
        Input = tf.add(tf.matmul(x, w), b)
        Output = tf.sigmoid(Input)
        return Output

    Out1 = feedForward(x, w1, b1)             # output of first layer
    Out2 = feedForward(Out1, w2, b2)          # output of second layer
    MHat = 50*Out2                            # final prediction is in the range (0,50)

    M = tf.placeholder(tf.float32, [None,1])  # placeholder for actual (target) value of marks
    J = tf.reduce_mean(tf.square(MHat - M))   # cost function -- mean square error

    train_step = tf.train.GradientDescentOptimizer(0.05).minimize(J) # minimize J using Gradient Descent

    sess = tf.InteractiveSession()            # create interactive session
    tf.global_variables_initializer().run()   # initialize all weight and bias variables with specified values

    xs = [[1,3,9,7],
          [7,9,8,2],                          # x training data
          [2,4,6,5]]

    Ms = [[47],
          [43],                               # M training data
          [39]]

    for _ in range(1000):                     # performing learning process on training data 1000 times
        sess.run(train_step, feed_dict = {x:xs, M:Ms})

    >>> print(sess.run(MHat, feed_dict = {x:[[1,15,9,7]]}))
    [[50.]]
    >>> print(sess.run(MHat, feed_dict = {x:[[3,8,1,2]]}))
    [[50.]]
    >>> print(sess.run(MHat, feed_dict = {x:[[6,7,10,9]]}))
    [[50.]]

In this code, I am trying to predict the marks M obtained by a student in a test out of 50, given how many hours he/she slept, studied, used electronics and played the day before the test. These 4 features come under the input feature vector x.

To solve this regression problem, I am using a deep neural network with an input layer with 4 perceptrons (the input features), a hidden layer with two perceptrons and an output layer with one perceptron. I have used sigmoid as the activation function. But I am getting the exact same prediction ([[50.0]]) for M for all possible input vectors I feed in. Can someone please tell me what is wrong with the code above, and why I get the same result each time? | Tensorflow regression model giving same prediction every time | neural network;deep learning;regression;tensorflow | null
_codereview.110936 | The purpose of this code is to let me loop over 100 items (up to MAX_CONCURRENT at a time), performing some action on them, and then return only once all items have been processed:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    /// <summary>Generic method to perform an action or set of actions
    /// in parallel on each item in a collection of items, returning
    /// only when all actions have been completed.</summary>
    /// <typeparam name="T">The element type</typeparam>
    /// <param name="elements">A collection of elements, each of which to
    /// perform the action on.</param>
    /// <param name="action">The action to perform on each element. The
    /// action should of course be thread safe.</param>
    /// <param name="MaxConcurrent">The maximum number of concurrent actions.</param>
    public static void PerformActionsInParallel<T>(IEnumerable<T> elements, Action<T> action)
    {
        // Semaphore limiting the number of parallel requests
        Semaphore limit = new Semaphore(MAX_CONCURRENT, MAX_CONCURRENT);

        // Count of the number of remaining threads to be completed
        int remaining = 0;

        // Signal to notify the main thread when a worker is done
        AutoResetEvent onComplete = new AutoResetEvent(false);

        foreach (T element in elements)
        {
            Interlocked.Increment(ref remaining);
            limit.WaitOne();
            new Thread(() =>
            {
                try
                {
                    action(element);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error performing concurrent action: " + ex);
                }
                finally
                {
                    Interlocked.Decrement(ref remaining);
                    limit.Release();
                    onComplete.Set();
                }
            }).Start();
        }

        // Wait for all requests to complete
        while (remaining > 0)
            onComplete.WaitOne(10); // Slightly better than Thread.Sleep(10)
    }

I include a timeout on the WaitOne() before checking remaining again to protect against the rare case where the last outstanding thread decrements 'remaining' and then signals completion between the main thread checking 'remaining' and waiting for the next completion signal, which would otherwise result in the main thread missing the last signal and locking forever. This is faster than just using Thread.Sleep(10) because it has a chance to return immediately after the last thread completes.

Goals:

- Ensure thread safety - I want to be sure I won't accidentally return too early (before all elements have been acted on), and be sure that I don't become deadlocked or otherwise stuck.
- Add as little overhead as possible - minimizing the amount of time that fewer than MAX_CONCURRENT threads are executing action, and returning as soon as possible after the final action has been performed. | Parallel foreach with configurable level of concurrency | c#;multithreading;concurrency;locking | null
_webapps.4398 | If I am looking for a particular C# language construct, or how to use a keyword in JavaScript, I would like to search only in code, not in blog paragraphs about code. I thought I could do this at code.google.com, but there, if I type in e.g. "protected internal", I get discussions about that keyword and have to look through the results to find actual code.

What are some web app search engines which allow me to search only through large repositories of code? | Is there a web app that allows me to search through large repositories of code? | webapp rec | null
_softwareengineering.213343 | Is it bad coding practice/design to make a class which will only be instantiated once?

I have some variables and functions that can be grouped together under a class to look good (for lack of a better description), since they are somewhat related, but they could just as well be global variables and global functions.

(Btw, I am using JavaScript, AngularJS, Express, MongoDB.) | Is it a bad idea to create a class which will only have one instance? | design;design patterns | A single instance of a class makes sense if the object represents a single resource, like an Ethernet connection or the operating system's task manager, for instance.

Functions are put in a class only if they act on the variables of instances of that class; otherwise the maintainer will be confused about the intention of that class.

Usually, a good reason exists why your app has global variables. Try to find the common purpose of them and design a class around this purpose. It will not only make the design clear, but your mind as well.
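In JavaScript specifically, you often don't even need a class for a single instance: a plain module object groups the related state and functions, and Node's module cache makes it a de facto singleton. A sketch (all names here are placeholders):

    // config.js -- one shared instance; no class ceremony needed
    const config = {
        host: 'localhost',
        port: 27017,

        connectionString() {
            return 'mongodb://' + this.host + ':' + this.port;
        }
    };

    // Node caches modules, so every require('./config') returns this same object.
    module.exports = config;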
_softwareengineering.299339 | What I have

This is a prototype. I have a pool of 100 clients connected to the server via websockets, reporting things and awaiting commands. The server polls the commands database table (of type MEMORY) in a loop, using a query with WHERE client_id=?. I can insert a combination of client_id+command into that table, and once I do that, the corresponding loop will match and SELECT it and pass it back to the client.

What's the problem

The approach sounds like it would work, but as far as I understand, I'm talking about n simultaneous database connections and queries in an endless loop (n being the number of clients), which doesn't sound effective. It'd be much better to do one query in one loop and then somehow check the client_id, if any, and distribute the results to the corresponding clients.

This reminds me of the approach where you select articles first and then for () {} over the resultset and do separate queries to get the details for each of the items, which results in n+1 queries being made. The solution to that is doing one big query with JOINs and also preloading the other data that doesn't fit into the main JOINed query. There should similarly be a more effective way to do the database polling.

UPDATE: I found this answer in the related section, and it says pretty much the same thing: "Hammering your database isn't really a good idea. While I'm pretty sure you've realized this, others might not have. I remember a friend of mine tried to use a php script and a Javascript AJAX function in a loop for a semi-real time game. He very quickly realized that performance degraded as more people joined, simply because he was executing a ton of queries per second which hammered the database."

So polling the database for each client sounds as unscalable and ineffective as building an AJAX chat application.

What I'm asking for

I guess that every possible programming approach must have been named and covered by now, so what is this one called? What is the common advice/approach here? | How do I balance 100 clients checking the same database table in a loop? | database;node.js;sockets;websockets;polling | null
_codereview.28429 | I want to verify that I'm correctly handling the risk of overflowing fixed-length string buffers. I'm also aware that I'm using C-style strings, which falls short of full C++ code.

Main:

    /*PURPOSE:
    attempt to implement, without using given/known-good code, various concepts of c++.
    one exception: code in header fileToArray/set_cArrayTemp()
    */

    //MOVED TO TOP - ELSE MY HEADERS FAIL
    //boilerplate - won't have to use: std::
    using namespace std;

    //needed to use cout/cin
    #include <iostream>
    //for using system calls -
    //warning: (www.cplusplus.com/forum/articles/11153/)
    #include <cstdlib>
    //to use strlen
    #include <cstring>

    //c++ headers (e.g. iostream) don't use .h extension so i am not either
    //provides indentation and v spacing for logging
    #include "header/logFormatter"
    //reads a file to a char array
    #include "header/fileToArray"
    //dialog gets filename from user
    #include "header/fileDialog"

    //this is the (p)array that should hold all the text from the file
    char *cArray;
    //the name of the file to read
    char *fileName;

    void set_fileName(){
        cout << vspace << col1 << "begin set_fileName()";
        char *temp = getFileName();
        cout << col2 << "tmp: " << temp;
        fileName = new char[strlen(temp)];
        strcpy(fileName, temp);
        delete[] temp;
        cout << col2 << "fileName is set: " << fileName;
        cout << col1 << "end set_fileName()";
    }

    void set_cArray(){
        cout << vspace << col1 << "begin set_cArray()";
        char *temp = fileToArray(fileName);
        if(temp){
            cout << col2 << "tmp: " << temp;
            /*FROM MAN STRCPY:
              If the destination string of a strcpy() is not large enough, then
              anything might happen. Overflowing fixed-length string buffers is a
              favorite cracker technique for taking complete control of the machine.
            */
            //so, this guards against overflow, yes?
            cArray = new char[strlen(temp)];
            strcpy(cArray, temp);
            delete[] temp;
            cout << col2 << "cArray is set: " << cArray;
            cout << col1 << "end set_cArray()";
            return;
        }
        cout << col2 << "fail - did not set cArray";
        cout << col1 << "end set_cArray()";
    }

    //expect memory leaks = 0
    void cleanup(){
        cout << vspace << col1 << "begin cleanup()";
        if(cArray){
            delete[] cArray;
            cout << col2 << "cArray deleted";
        }
        if(fileName){
            delete[] fileName;
            cout << col2 << "fileName deleted";
        }
        cout << col1 << "end cleanup()";
    }

    void closingMessage(){
        cout << vspace << col2 << "APPLICATION COMPLETE";
    }

    int main(){
        system("clear;");/*yes, i know (www.cplusplus.com/forum/articles/11153/)...
                           but it provides focus for the moment and it is simple. */
        //col0
        cout << "begin main()";
        set_fileName();
        set_cArray();
        cleanup();
        closingMessage();
        cout << "\nend main()";
        return 0;
    }

    //TODO -
    //1. use a doWhile so that user can run once and read many files without
    //app exit.
    //
    //2. find way to provide init (not null/safe state) of pointers

header (fileToArray):

    //reads a file to a char array
    //for using files
    #include <fstream>

    // user inputs a file [path]name
    char *filenameTemp;
    //this is the (p)array that should hold all the text from the file
    char *cArrayTemp;

    void set_filenameTemp(char *fileName_param){
        cout << vspace << col3 << "begin set_filenameTemp()";
        filenameTemp = new char[strlen(fileName_param)];
        strcpy(filenameTemp, fileName_param);
        cout << col4 << "file name assigned: " << filenameTemp;
        cout << col3 << "end set_filenameTemp()";
    }

    void set_cArrayTemp(){
        cout << vspace << col3 << "begin set_cArrayTemp()";
        ifstream file(filenameTemp);
        if(file.is_open()){
            /*---------------------------------------------
              source: www.cplusplus.com/doc/tutorials/files/
              modified a bit*/
            long begin, end, fileSize;
            begin = file.tellg();
            file.seekg(0, ios::end);
            end = file.tellg();
            fileSize = (end - begin - 1);/* -1 because testing shows fileSize is always one
                                            more than expected based on known length of string in file.*/
            /*---------------------------------------------*/
            //cout << col4 << "bytes in file (fileSize=): " << fileSize;
            cArrayTemp = new char[fileSize];
            file.seekg(0, ios::beg);
            file.read(cArrayTemp, fileSize);
            file.close();
            cout << col3 << "end set_cArrayTemp()";
            return;
        }
        cout << col4 << "fail - file not open";
        cout << col3 << "end set_cArrayTemp()";
    }

    //caller is responsible for memory
    //may return null
    char *fileToArray(char *fileName_param){
        cout << vspace << col2 << "begin fileToArray()";
        if(fileName_param){
            cout << col3 << "file name received: " << fileName_param;
            set_filenameTemp(fileName_param);
            set_cArrayTemp();
            delete[] filenameTemp;
            cout << col2 << "end fileToArray()";
            return cArrayTemp;
        }
        cout << col3 << "received NULL fileName_param";
        cout << col2 << "end fileToArray()";
        return cArrayTemp;
    }

header (fileDialog):

    //dialog gets filename from user
    // user inputs a file [path]name

    //number of chars to get from user input
    static const int size = 20;//20 will do for now

    /*FROM MAN STRCPY:
      If the destination string of a strcpy() is not large enough, then anything
      might happen. Overflowing fixed-length string buffers is a favorite cracker
      technique for taking complete control of the machine.
    */
    //so, this guards against overflow, yes?
    //that is, by using getline, rather than cin >> var,
    //the size of the array is controlled.

    //caller is responsible for memory
    char *getFileName(){
        cout << vspace << col2 << "begin getFileName()";
        char* input = new char[size];
        cout << col2 << "enter filename: ";
        cin.getline(input, size);
        cout << col2 << "end getFileName()";
        return input;
    }

    //NOTE:
    // currently, this hardly justifies a header file - i expect to do more later.
    // may eventually use this for all user dialog and rename to userDialog

header (logFormatter):

    //STRICTLY FOR LOGGING
    // spares me from managing chains of \n and \t
    //this allows me to use indentation to show progress/flow,
    // making this a pseudo-debugger
    //
    //left-most column - col0 is imaginary/conceptual
    //char col0 = "";
    static const char col1[] = "\n    ";//4 spaces
    static const char col2[] = "\n        ";//8 spaces
    static const char col3[] = "\n            ";// etc.
    static const char col4[] = "\n                ";
    static const char vspace[] = "\n\n";
    //NOTE: changed from using \t since default is 8 spaces | Manage risk of Overflowing fixed-length string buffers | c++;c;strings | It is very apparent that you are new. That is fine; we were all new once. I'll try to adjust my feedback accordingly. Let me know if you are not familiar with some of the terminology, and I'll provide an explanation or definition.

Overflow safeguards

First of all, I'll discuss what seems to be the issue you are most concerned with: buffer overflows. I'd like to state that at this level, you should not really care about that (yet), at least not from a security standpoint. You should focus on learning the language and programming in general first.

Your code is following the generally correct idea: make sure buffers are large enough by dynamically allocating memory, and limit the size of input strings when you are not. Note that std::string does all of this for you by growing as needed.

In modern C++ code, you would normally avoid allocating buffers the way you do, because memory management quickly becomes hard as a program grows. In C++, the RAII pattern is essential. It boils down to allocating resources in the constructor of an object (a class instance), and freeing them automatically in the destructor when the object goes out of scope. This is what std::string does for you, as well as growing as needed if you add text to the string.

High-level issues

1. I strongly recommend you to reduce the commenting level.

I used to teach programming at the local university, and I saw that over-commenting technique a lot. My experience is that it is not a good idea. It works as a crutch, allowing you to read your comments rather than your code. However, you already know how to read text; you need to learn how to read code. Stick to regular commenting levels. If you must have notes, keep them in a separate document. You want to make it as inconvenient as possible for you to look at them, forcing you to read the code itself when possible.

2. You are not writing C++.

You are writing C code with C++ library calls. Write your own String class instead of using raw arrays in the code. Take advantage of all the things C++ has to offer. (This normally means using std::string, but writing your own String class for practice is a nice exercise.)

3. Your headers should not contain function definitions.

In C++, there is something called the one definition rule, which states that any definition should occur at most once in a program. (There are some exceptions to this, but you don't have to think about that yet.) Headers are meant to be included in several files, so they can only have declarations in them. For example:

In fileDialog.hpp:

    char *getFileName();

In fileDialog.cpp:

    char *getFileName(){
        cout << vspace << col2 << "begin getFileName()";
        char* input = new char[size];
        cout << col2 << "enter filename: ";
        cin.getline(input, size);
        cout << col2 << "end getFileName()";
        return input;
    }

The former is a declaration, the latter a definition. While we're on the subject of headers: it's normal for user-defined headers to have a .h, .hpp or .hxx suffix. I personally prefer .hpp to separate them from C headers.

4. Avoid using global variables.

Global variables are bad because they have a very large scope and can be changed from anywhere, at any time, sometimes without you realizing. Either implement a class design and put the variables into class scope, or pass them around using function arguments. Variables that will never change (often called constants :-) ) can be left in the global scope, but should be declared const.

5. Learn the basics of a debugger.

Basic use of a debugger is very simple, and it allows you to remove a lot of the cout calls that clutter the code. Learn to set breakpoints and step through your code; that's all you need for now. As a beginner, I recommend using a visual debugger and not just raw gdb. (You can use a gdb frontend, though.)

6. Separate output from computations.

Functions that do something should generally not perform IO. One of the key reasons for that is reusability. You want to write code that you can reuse later. While it's not very likely that you will reuse these particular functions later, you should train as you fight and follow good practice whenever possible. Later users of your functions (i.e. you at a later time) may not want that output, and the way to solve that is to decouple IO from computations.

Lower-level issues

7. It's safe to delete a null-pointer.

Instead of writing

    if(cArray){
        delete[] cArray;

write

    delete[] cArray;
    cArray = nullptr;

Deleting a null-pointer has zero effect, and is therefore harmless, so there's no point in checking against null. What you should do, however, is set your pointer to null after deleting it, ensuring that nothing bad will happen if it is deleted again. In C++11, the null pointer is called nullptr. If for some reason you are not using C++11 (as a C++ learner in 2013, you should be), use 0 (or NULL) instead.

8. Consider inverting conditions to reduce nesting.

Instead of this code (superfluous comments and couts removed, whitespace inserted to increase readability):

    if (temp) {
        cArray = new char[strlen(temp)];
        strcpy(cArray, temp);
        delete[] temp;
        return;
    }
    // Handle temp == nullptr ...

consider writing this:

    if (!temp) {
        // Handle temp == nullptr ...
    }
    cArray = new char[strlen(temp)];
    strcpy(cArray, temp);
    delete[] temp;

There is a lot more to comment on, but these are the most pressing issues for now, and should be more than enough to get you started. I encourage you to implement as many of these changes as you can (except maybe refactoring to classes), and then post your updated code as a new question for further review.

Some of the things that still remain to be pointed out are:

- Best practices
- Design issues -- what I would do differently, and why
- Identifier naming
- Exception safety and memory leaks
- File IO

(I am listing these so you can think about them yourself before posting another review.)
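To illustrate points 2, 4 and 6 together: once the standard library manages the memory, the whole read-a-file task shrinks to a few lines (a sketch):

    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Reads a whole file into a std::string: no new/delete, no strcpy,
    // and the buffer cannot overflow because the string grows as needed.
    std::string fileToString(const std::string& fileName)
    {
        std::ifstream file(fileName);
        std::stringstream buffer;
        buffer << file.rdbuf();
        return buffer.str();
    }

    int main()
    {
        std::string fileName;
        std::cout << "enter filename: ";
        std::getline(std::cin, fileName);
        std::cout << fileToString(fileName) << '\n';
    }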
_ai.3850 | Is it possible to feed a neural network the output from a random number generator and expect it to learn the hashing/generator function, so that it can predict what the next generated number will be? Does something like this already exist? If research has already been done on this, or on something related (predicting pseudo-random numbers), can anyone point me to the right resources? Any additional comments or advice would also be helpful.

Currently I am looking at this library and its related links: https://github.com/Vict0rSch/deep_learning/tree/master/keras/recurrent | Using Machine/Deep learning for guessing Pseudo Random generator | deep learning;unsupervised learning;prediction;lstm | null
_softwareengineering.316538 | I sort of understand unobtrusive JavaScript. Even in my CSS now I hardly ever use classes or ids, because I like clean, easy-to-read, uncluttered HTML files. For example, why use this:

    <body id="anchor" ontouchstart="">
        <nav id="nav">
            <div id="design" class="option">
                <p class="vCenter">design</p>
            </div>
            <div id="function" class="option">
                <p class="vCenter">function</p>
            </div>
            <div id="rule"></div>
            <div id="advanced" class="option">
                <p class="vCenter">advanced</p>
            </div>
        </nav>
    </body>

When I can use this:

    <body>
        <nav>
            <div>
                <p>design</p>
            </div>
            <div>
                <p>function</p>
            </div>
            <div></div>
            <div>
                <p>advanced</p>
            </div>
        </nav>
    </body>

And then use the very powerful CSS3 selectors to access all of my elements. Or I could use JavaScript to give these elements classes and ids. Am I too obsessed with clean code? Or is this a more future-proof, cleaner way of developing? | Are there any reasons not to ever use classes or ids anymore? | javascript;html;css | This seems like a very bad idea to me. Defining CSS rules for classes and adding those classes to the HTML is a great way to make your CSS reusable. The way you're suggesting, with complex selectors, sounds like a recipe for mangled stylesheets. Sure, your HTML is clean as a whistle, but now the CSS is a pain in the butt to maintain. Consider:

    .centre-box {
        /* your rules */
    }

vs

    body div > div:nth-of-type(3) > div {
        /* your rules */
    }

Then next week you add a div above the box you want to be centred, and it's broken. To fix it, you have to find the tangly CSS rule that targeted your centre-box before, and change it to be something new. And all of this hassle so that your HTML looks cleaner? More up-front time, higher maintenance cost, no extenuating circumstance that makes it necessary. End of story.

Addendum

Why do complicated CSS selectors even exist? Sometimes you want to style more than just one element. Consider this example from Bootstrap, a very popular CSS framework. (Note: it's written in Less, which compiles to CSS. It supports nesting, so all you need to know when reading the example is that foo { bar { /* rule */ } } in Less is foo bar { /* rule */ } in CSS.)

Example: the navbar source uses the > selector (direct child selector) to style direct children of the .navbar-brand element. But in this case, you use a class with a meaningful name to relate the CSS rule to a part of the DOM, and you use the fancy selectors to style the child elements of that class.

What about doing it in JavaScript?

This seems like a non-solution to me too... to convert from using classes and ids and your old stylesheet to doing it with JavaScript, you'll keep your CSS the same, simplify your HTML, but add an entirely new JavaScript file which must either (1) use complex selectors with jQuery, so it's as much of a rat's nest as the crazy-selector stylesheet option, or (2) use JavaScript without jQuery to traverse the DOM and attach classes as needed. (1) is just as bad as putting it in CSS, and (2) is worse than (1) in my opinion, because you'll basically have to duplicate your DOM structure in your JavaScript file (just in a different format, but the same info), so you still have a DOM with ids and classes, it's just written in JavaScript. That's a lot more complexity.

So what is unobtrusive JavaScript?

I won't give a full treatment of it here because there's lots on Google if you're looking for details. But the key point as it relates to this question is that you don't want your JavaScript to intrude on your HTML. This relies on using ids and classes to identify which elements to attach JavaScript behaviour to. Unobtrusive JavaScript says: use ids and classes to attach events to elements instead of inlining the JavaScript events.

In a nutshell

CSS classes with meaningful names are the current best way to associate a set of CSS rules with a portion of the HTML that you want to modify. This is the current convention, and the alternatives that you're suggesting add complexity and reduce maintainability.
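For instance, the unobtrusive version of attaching behaviour reuses the same class hook that the stylesheet uses (a sketch; .centre-box is the hypothetical class from earlier):

    <div class="centre-box">...</div>

    <script>
        // Unobtrusive: the markup stays clean, and the same class that
        // carries the styling is the hook for behaviour.
        document.querySelector('.centre-box')
                .addEventListener('click', function () { /* behaviour here */ });
    </script>

Contrast that with the obtrusive equivalent, <div class="centre-box" onclick="doSomething()">, where the behaviour is inlined into the markup.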
_cs.35311 | I cannot comprehend how you can prove hardness between two NP-complete problems.

For example, let X be an NP-hard problem; I want to prove that Y is also NP-hard. I can do this by reducing X to Y: if Y is as difficult as X, then it is NP-hard; otherwise it is not. But how is this done exactly? Do we restate the problem?

When I looked online, it was something about reducing the 3-SAT problem to the Clique problem, but I don't even know what these problems are. Is there a trivial example showing how this is done? Thanks! | Can someone provide a trivial example to the reduction procedure used to prove hardness? | complexity theory;reductions;np hard | Let $\Sigma$ and $\Gamma$ be two finite alphabets and $L_A$ and $L_B$ be two languages over $\Sigma$ and $\Gamma$, respectively. A polynomial reduction is a function $f$ from $\Sigma^{\star}$ to $\Gamma^{\star}$, which is computable in polynomial time, such that for all words $x \in \Sigma^{\star}$ it is true that
\begin{align} x \in L_A \iff f(x) \in L_B.\end{align}

The function $f$ maps words from one language to words from another language. When speaking of problems, we mean the associated decision problems. Decision problem $A$: given a word $x \in \Sigma^{\star}$, is it true that $x \in L_A$? (Analogously for $B$.) Such a word $x \in \Sigma^{\star}$ is also called an instance of the decision problem $A$.

One easy reduction is the reduction from $\mathrm{CLIQUE}$ to $\mathrm{IS}$ (independent set). The languages are defined as follows:
\begin{align} \mathrm{CLIQUE} &= \{ (G, k) \mid \text{the graph $G$ contains a complete subgraph with $k$ vertices} \} \\ \mathrm{IS} &= \{ (G, k) \mid \text{the graph $G$ contains $k$ vertices that have no edges between each other} \}\end{align}

The complete subgraph in the first definition is called a $k$-clique, and the set of vertices in the second definition is called an independent set.

As you already stated in your question, $\mathrm{3SAT}$ can be reduced to $\mathrm{CLIQUE}$, thus $\mathrm{CLIQUE}$ is NP-hard. For proving that $\mathrm{IS}$ is NP-hard, we reduce $\mathrm{CLIQUE}$ to $\mathrm{IS}$: we map each instance $(G, k)$ to $(G', k)$, where $G'$ is the complement graph of $G$ (that means two vertices are connected in $G'$ if and only if they are not connected in $G$). We can compute $G'$ in $\mathcal{O}(|V(G)|^2)$ many steps.

If we find a clique $H$ in $G$, all nodes of $H$ are connected with each other in $G$. Thus there is no edge between those nodes in the complement graph $G'$, and therefore the nodes of $H$ form an independent set in $G'$. If we find an independent set $U \subseteq V(G')$ with $k$ elements in $G'$, we know that there is no edge between any of the vertices of $U$ in $G'$. Thus there is an edge between any two vertices of $U$ in $G$, and therefore $G$ contains a $k$-clique.
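For a concrete instance of this reduction: let $G$ be the path $a - b - c$ (edges $\{a,b\}$ and $\{b,c\}$) and $k = 2$. The pair $\{a,b\}$ is a $2$-clique in $G$. The complement graph $G'$ has the single edge $\{a,c\}$, so $a$ and $b$ are not adjacent in $G'$, and $\{a,b\}$ is indeed an independent set of size $2$ in $G'$. Conversely, the independent set $\{a,c\}$ of $G$ ($a$ and $c$ are not adjacent in $G$) is exactly a $2$-clique in $G'$.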
_codereview.127286 | I began studying C# 5.0 a few days ago and am trying to avoid duplicated code for validation of input values.

class transcript
{
    //Use lambda expression and Func for validation logic
    Func<byte, byte> validate = (grade) =>
    {
        if (grade > 100)
            throw new ArgumentOutOfRangeException("grade can't be more than 100");
        else
            return grade;
    };

    public string name { get; set; }

    public byte kor
    {
        get { return kor; }
        set { kor = validate(value); }
    }

    public byte eng
    {
        get { return this.eng; }
        set { eng = validate(value); }
    }
} | Validating input values in C# | c#;validation;lambda | null
_reverseengineering.11868 | I'm trying to learn how to use the IDA Pro debugger (having used Visual Studio's C++ debugger for years) and I'm struggling to find how to switch the code/asm view back to the current instruction that the debugger broke on - similar to the "Show next statement" button in Visual Studio.

PS. Here's my situation: say I broke on some instruction and then, using IDA's graph view, navigated away from that instruction. How do I go back? | What is the command to go to current statement in IDA debugger? | ida;windows;debuggers | You can navigate back to the previous position simply by pressing ESC. If you want to go back to the current IP address, just press the right mouse button and select "Jump to IP". Alternatively, you can press G and enter EIP as the address.
_unix.351965 | I have 2 XFS filesystems where space seems to disappear mysteriously.

The system (Debian) was installed many years ago (12 years ago, I think). The 2 XFS filesystems were created at that time. Since then, the system has been updated, both software and hardware, and both filesystems have been grown a few times. It's now running 32-bit up-to-date Debian Jessie, with a 64-bit 4.9.2-2~bpo8+1 linux kernel from the backports archive.

Now, within days, I see that the used space on those filesystems grows, much more than it should because of the files. I have checked with lsof +L1 that it's not related to files that would have been deleted but kept open by some processes. I can reclaim the lost space by unmounting the filesystems and running xfs_repair. Here is a transcript that shows it:

~# df -h /home
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-home  2.0G  1.7G  361M  83% /home
~# du -hsx /home
1.5G    /home
~# xfs_estimate /home
/home will take about 1491.8 megabytes
~# umount /home
~# xfs_repair /dev/system/home
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
sb_fdblocks 92272, counted 141424
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
~# mount /home
~# df -h /home
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-home  2.0G  1.5G  521M  75% /home
~#

In this example, only 161MB were lost, but if I wait too long, the filesystem is 100% full, and I have real problems.

If that matters, both filesystems are bind-mounted in an LXC container. (I don't have any other XFS filesystem on this system.) Does anybody have an idea why this happens or how I should investigate? | Some space mysteriously disappears on XFS filesystems | linux;debian;xfs | null
_unix.148875 | So I had nginx set up with the default site. I decided I wanted to change it because my website is at /var/www/server.nyro.net/... So before I changed anything, I went to 127.0.0.1 to see if everything worked. It did: I got the "This page is run by nginx!" page. Then I decided to edit the conf file... and now I get 403 Forbidden. I did the chmod correctly... I don't know what is wrong...

Here is my nginx.conf: http://pastebin.com/cbmvguW1
Here is my nginx.conf.default: http://pastebin.com/aWdhv9JR

What am I doing wrong? | Setting up nginx on Fedora 19 | linux;fedora;webserver;nginx | null
_codereview.102596 | I want to extend the Closeable interface so that I can add listeners that get notified when the resource is closed. The CloseListener API is rather simple and straightforward:

public interface CloseListener {
    public void objectWillBeClosed(CloseableObservable closeable);
    public void objectClosed(CloseableObservable closeable);
}

I found a nasty trick in a blog that uses Java 8's default methods to recreate multiple inheritance, and I asked myself if it might be valid to try it for an Observable interface:

public interface CloseableObservable extends Closeable {

    class HiddenAndNasty {
        private static final Map<CloseableObservable, Collection<CloseListener>> allObservers = new WeakHashMap<>();

        private static final Collection<CloseListener> getObservers(CloseableObservable observable) {
            synchronized (observable) {
                Collection<CloseListener> observers = allObservers.get(observable);
                if (observers == null) {
                    observers = new ArrayList<>();
                    allObservers.put(observable, observers);
                }
                return observers;
            }
        }
    }

    public default void addObserver(CloseListener observer) {
        HiddenAndNasty.getObservers(this).add(observer);
    }

    @Override
    public default void close() throws IOException {
        // Copy Collection to avoid concurrent modification
        Collection<CloseListener> observers = new ArrayList<>(HiddenAndNasty.getObservers(this));
        observers.forEach(observer -> observer.objectWillBeClosed(this));
        closeInternal();
        observers.forEach(observer -> observer.objectClosed(this));
    }

    /** not part of the public api, use close() instead */
    void closeInternal() throws IOException;
}

This seems to work - here is a simple example that works:

import java.io.IOException;

public class TestAutoCloseableOnObservableCloseable {

    public static class SomeCloseableClass implements CloseableObservable {
        @Override
        public void closeInternal() throws IOException {
            System.out.println("Now I close myself");
        }
    }

    public static void main(String[] args) throws IOException {
        try (SomeCloseableClass someObject = new SomeCloseableClass()) {
            someObject.addObserver(new CloseListener() {
                @Override
                public void objectWillBeClosed(CloseableObservable closeable) {
                    System.out.println("Object will be closed!");
                }

                @Override
                public void objectClosed(CloseableObservable closeable) {
                    System.out.println("Object was closed!");
                }
            });
        } // try-with-resource
    } // main
}

It works - but it seems somehow evil to me. Very evil. Is it an anti-pattern? | Observe closeable with default methods | java | null
_unix.277662 | I'm running some performance testing, and I'm trying to send the same file repeatedly to a socket. If I do something like:

$ socat -b1048576 -u OPEN:/dev/zero TCP4-LISTEN:9899,reuseaddr,fork
$ socat -b1048576 -u TCP:127.0.0.1:9899 OPEN:/dev/null

Then with that 1MB buffer iftop tells me that I'm pushing 20Gbps. However, what I'm really trying to do is something more like:

$ socat -b1048576 -u OPEN:somefile.dat TCP4-LISTEN:9899,reuseaddr,fork
$ myprog TCP:127.0.0.1:9899 > /dev/null

But it only pushes that somefile.dat one time; I'd really like it to rewind() to the beginning and send it again. | How can I repeatedly send the contents of a file via socat / ncat to a socket | networking;performance;netcat;socat;nc | null
_codereview.67173 | I'm fairly new to Java, arriving in the future from C and returning to type safety from Python. I'm looking for your suggestions to improve this code in the following areas:

- Correctness - are there any bugs? Have I used the language correctly?
- Java conventions and idioms.
- Runtime performance.
- Generalization of input data type.

import java.util.ArrayList;
import java.util.List;

public class MergeSorter {

    /**
     * Merge sort
     *
     * Running time O(nlog(n))
     * @param list
     * @return sortedSequence
     */
    public List<Integer> sort(List<Integer> list) {
        // base case
        if(list.size() <= 1) return list;

        int halfwayIndex = list.size() / 2;

        List<Integer> leftSortedSeq = sort(list.subList(0, halfwayIndex));
        List<Integer> rightSortedSeq = sort(list.subList(halfwayIndex, list.size()));

        return merge(leftSortedSeq, rightSortedSeq);
    }

    /**
     * Merge step
     * Running time O(n)
     * @param leftSortedSeq
     * @param rightSortedSeq
     * @return mergedSortedSequences
     */
    private List<Integer> merge(List<Integer> leftSortedSeq, List<Integer> rightSortedSeq) {
        if(leftSortedSeq.isEmpty()) return rightSortedSeq;
        else if (rightSortedSeq.isEmpty()) return leftSortedSeq;

        List<Integer> sortedSeq = new ArrayList<>();
        int lIdx = 0;
        int rIdx = 0;
        int leftSortedSize = leftSortedSeq.size();
        int rightSortedSize = rightSortedSeq.size();

        while(lIdx < leftSortedSize && rIdx < rightSortedSize) {
            Integer leftSmallestElem = leftSortedSeq.get(lIdx);
            Integer rightSmallestElem = rightSortedSeq.get(rIdx);

            if(leftSmallestElem < rightSmallestElem) {
                sortedSeq.add(leftSmallestElem);
                lIdx++;
            } else {
                sortedSeq.add(rightSmallestElem);
                rIdx++;
            }
        }

        // copy over remainder from both seqs
        sortedSeq.addAll(leftSortedSeq.subList(lIdx, leftSortedSize));
        sortedSeq.addAll(rightSortedSeq.subList(rIdx, rightSortedSize));

        return sortedSeq;
    }
} | Merge Sorting Lists | java;beginner;sorting;mergesort | Your four questions are good ones:

Correctness - are there any bugs?
I can't see any significant bugs. There are lesser potential bugs which relate to unexpected input (for example, null lists, or lists with null members; each of those will throw NullPointerExceptions).

Correctness - have I used the language correctly?
For the most part, it is neat, and well structured. Your names and conventions are good. Your use of the sublist is uncommon but creative, and useful. The few places where there are problems are technically functional, but, for example, this line here is concerning:

if(leftSmallestElem < rightSmallestElem) {

Here you have two Integer instances, and the comparison is a <, which will 'unbox' the Integer values to int primitives, and do the integer compare. That's not broken, but it's not great either. For a start, it's slow. The better way is to use the natural ordering of the Integer object...

if(leftSmallestElem.compareTo(rightSmallestElem) < 0) {

This removes the unboxing.

Java conventions and idioms
Here it gets interesting. Mostly good. You have been passing around List<Integer> instances instead of ArrayList<Integer> instances, and this is a good thing. Many 'novices' pass concrete, rather than interface, types. You have JavaDoc, and I always like seeing that. Unfortunately the details are very sparse in it; it's not worth having if it is not useful. You have used private and public appropriately. My only real concern here is that the methods are not static. There is no reason to link these methods to a specific instance of MergeSorter.
Making the methods static would mean that you call them with:

List<Integer> sorted = MergeSorter.sort(unsorted);

One other thing: I would expect that the sort method does an in-place sort. This is only because I am more familiar with that from the Java API. Returning a new instance of sorted data is not wrong, just odd. Note that for small (and empty) inputs, you return the same instance as the one you sort. This difference in behaviour is problematic. I would return a new instance on the small sorts as well as the large ones, or alternatively, just copy the results back into the source at the end, and not return anything (in-place sort).

Runtime performance
The performance problem you have here is significant. Using an ArrayList in the recursion you have implies that you will be creating a lot of ArrayList instances. The other performance issue you have is in your algorithm. Typically, the merge sort is done into a single 'buffer' of the same size as the input. You merge the small blocks from the input to the buffer, then you swap them, and merge the larger blocks back into the original, and keep swapping the buffers, until you have a result from the top merge.

Generalization of input data type
This is the open question. If you bring your input data down to the lowest common denominator of the Comparable class, you could create a method:

public static <T extends Comparable<T>> List<T> sort(List<T> list) {}

and then, in each place where you have the generic type Integer, you replace it with T; then you can sort any Java class with a natural order (Numbers, Strings, etc.). The solution will need to use the compareTo mechanism, not the < comparison. I mentioned that earlier.
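For completeness, a sketch of the generic version described above (the parameter name and the merge body are my own; the signature is the one suggested):

public static <T extends Comparable<T>> List<T> sort(List<T> list) {
    if (list.size() <= 1) return list;
    int half = list.size() / 2;
    return merge(sort(list.subList(0, half)),
                 sort(list.subList(half, list.size())));
}

private static <T extends Comparable<T>> List<T> merge(List<T> left, List<T> right) {
    List<T> merged = new ArrayList<>(left.size() + right.size());
    int l = 0, r = 0;
    while (l < left.size() && r < right.size()) {
        // compareTo avoids unboxing and works for any Comparable type
        merged.add(left.get(l).compareTo(right.get(r)) <= 0
                   ? left.get(l++) : right.get(r++));
    }
    merged.addAll(left.subList(l, left.size()));
    merged.addAll(right.subList(r, right.size()));
    return merged;
}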
_webapps.59431 | Because of messages I received about the mail server being insecure, I was told to change my password. This I did with relative ease, got a verification code to use, went through two or three steps to verify this and thought all was well. However, when I try to log into my Gmail, it asks for my user name and password. Neither the new nor old credentials work. What do I do now? | Can't get into my Gmail account | gmail;google account | null |
_codereview.13127 | I'm wondering about the difference between these two linq statements:

bool overlap = GetMyListA().Intersect(GetMyListB()).Any(); // 1.

vs

bool overlap = GetMyListA().Any(i => GetMyListB().Contains(i)); // 2.

Will statement 2 call GetMyListB() for each item in ListA? Which is more readable? | Which linq statement is better to find there is an overlap between two lists of ints? | c#;linq | Assuming LINQ to objects (i.e. these are in-memory collections, not LINQ to Entities IQueryables or something):

Will statement 2 call GetMyListB() for each item in ListA?
Yes. If you want to avoid this, you'll have to store the result of GetMyListB() outside the function.

Which is more readable?
In my opinion the two are about equally readable, although I would change #2 to be:

bool overlap = GetMyListA().Any(GetMyListB().Contains); // 2.

As far as performance, #1 will probably perform better in most cases, since it will dump all of one list into a hash table and then do an O(1) lookup for each element in the other list. Two exceptions to this that I can think of are (1) if the lists are very small, then building the hash table data structure might not be worth the overhead, and (2) option 2 can theoretically return without having considered every element in either list. If the lists are large lazy enumerables and you expect this to return true in most cases, then this could lead to a performance increase.
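If you want the hash-table behaviour to be explicit (and to make the single evaluation of GetMyListB() obvious), a small sketch - the method names are the question's own, the rest is mine:

using System.Collections.Generic;
using System.Linq;

// Materialize B once into a HashSet<int>, then probe it in O(1) per item.
var setB = new HashSet<int>(GetMyListB());
bool overlap = GetMyListA().Any(setB.Contains);

This keeps the early-exit property of option 2 while avoiding repeated Contains scans over a plain list.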
_webapps.65925 | I'm monitoring the commit log of a project my team is working on, and I was wondering whether there's any way to view the commits in an RSS/Atom reader. I.e., is there any URL provided by github that simply outputs the commit log in a format readable by an RSS/Atom reader? | Tracking github commits using RSS | rss;github | null |
_unix.181728 | I'm a newbie to Linux. I have encountered an error when working with Cygwin and the cmd prompt in Windows. I posted the snapshots. I'm trying to use the caret ^ to find the info that begins with K, but I get an error in the cmd prompt while it works correctly in Cygwin. Do I need to install any package, or is it a problem with the console? Thanks | Linux command error: not working in cmd prompt | linux | null
_unix.214562 | I had taken backups with TimeMachine, and now my Macbook Air (mid-2013) has finally died, so I have to rescue the files in Debian 8.1. However, it seems that no backups of some directories have been taken. I have backups which have these permissions and owners:

ls -ls /media/masi/disc2/
drwxrwxr-x       1 root root       481 Jul  5 23:28 .
drwxr-xr-x       1 root root         7 Jul  5 23:41 ..
-rwxrwxrwx       8 99   99      780966 Sep 29  2014 09292014232514.pdf
-r--r--r--     184 root 1922214      0 Jun 24 20:38 100 kuvaa
-rwxrwxrwx       8 99   99   101499390 Aug 17  2014 20140817_Sami_airfoil.zip
-r--r--r-- 1900902 root 1922218      0 Jun 24 20:38 248
-r--r--r--     197 root 1922219      0 Jun 24 20:38 2ndsemester

I run, as root:

cp -r /media/masi/disc2/ /home/masi/

but I get

ls -la /home/masi/disc2/
drwxr-xr-x 29 root root     20480 Jul  8 11:48 .
drwxr-xr-x 29 masi masi      4096 Jul  8 11:36 ..
-rwxr-xr-x  1 root root    780966 Jul  8 11:36 09292014232514.pdf
-r--r--r--  1 root root         0 Jul  8 11:36 100 kuvaa
-rwxr-xr-x  1 root root 101499390 Jul  8 11:36 20140817_Sami_airfoil.zip
-r--r--r--  1 root root         0 Jul  8 11:36 248
-r--r--r--  1 root root         0 Jul  8 11:36 2ndsemester

and I have to do chown -R masi:masi /home/masi/disc2/ to be able to read those files:

drwxr-xr-x 29 sami sami     20480 Jul  8 11:48 .
drwxr-xr-x 29 sami sami      4096 Jul  8 11:36 ..
-rwxr-xr-x  1 sami sami    780966 Jul  8 11:36 09292014232514.pdf
-r--r--r--  1 sami sami         0 Jul  8 11:36 100 kuvaa
-rwxr-xr-x  1 sami sami 101499390 Jul  8 11:36 20140817_Sami_airfoil.zip
-r--r--r--  1 sami sami         0 Jul  8 11:36 248
-r--r--r--  1 sami sami         0 Jul  8 11:36 2ndsemester

where you see that some folders such as 248 and 100 kuvaa are empty. Are those files/directories, indicated by field five in the first code block, really empty?

dmg2img: it falsely reports that the file is not a dmg image, and converting such a file yields a broken document. Many other threads also cover this dmg2img tool, but none succeeds.

tmfs (Oct 31 2012) try: I installed tmfs with apt-get install tmfs, which is a filesystem layer over HFS made for Time Machine backups. I run, as its manual says:

# mkdir /mnt/hfs /mnt/tm
# mount /home/masi/Disc2/ /mnt/hfs
mount: /home/masi/Disc2 is not a block device

where I am following the manual:

mkdir /mnt/hfs /mnt/tm
mount /dev/sdXX /mnt/hfs
tmfs /mnt/hfs /mnt/tm -ouid=$(id -u $USER),gid=$(id -g $USER),allow_other

Why do you get the error "mount: /home/masi/Disc2 is not a block device"? This may be a filesystem issue: my disk is ext4 in Debian, but the OSX backup disc is in some default OSX format. How can you recover these files from the OSX filesystem in Debian? | To recover OSX data in Debian | osx;backup;data recovery | The latest versions of the HFS+ utilities on Debian are, as far as I can tell, from 2006 and lacking a maintainer. Apple released Time Machine in 2007, and when they did, they introduced some quite significant changes to HFS+ (particularly to do with hard links to directories). It is highly likely that the HFS+ tools on Debian cannot deal very well with a Time Machine backup.

In your situation I would try to get OSX running in a virtual machine and read the backup from there.
_cstheory.5593 | Let $L$ be a context-free language. Define $ppc(L)$ to be the pre- and postfix closure of $L$; in other words, $ppc(L)$ contains all of $L$'s prefixes and postfixes, and hence $L$ itself. My question: if $L$ is context-free and has a non-ambiguous grammar, is the same true for $ppc(L)$?

I believe that this kind of basic question would already have been resolved in the heyday of language theory, but I could not find a suitable reference. | Closure of unambiguous context-free languages under pre- and postfix. | fl.formal languages;automata theory;grammars;context free languages | The set $\mathit{ppc}(L)$ is certainly context-free, but I think it can be inherently ambiguous: consider
$$L=\{a^mb^mc^nd\mid m,n\geq 0\}\cup\{da^mb^nc^n\mid m,n\geq 0\}\;,$$
then $\mathit{ppc}(L)$ includes the classical inherently ambiguous language
$$L'=\{a^mb^mc^n\mid m,n\geq 0\}\cup\{a^mb^nc^n\mid m,n\geq 0\}\;,$$
and one can prove $\mathit{ppc}(L)$ is also inherently ambiguous by the usual argument (apply Ogden's Lemma to both $a^{n+n!}b^nc^n$ and $a^nb^nc^{n+n!}$ to deduce the existence of two distinct trees for $a^{n+n!}b^{n+n!}c^{n+n!}$).
_unix.241679 | I'm using Elementary OS Freya (which is based on Ubuntu), with nautilus managing the desktop.

Is there a way to force nautilus desktop icons to open files with another file manager (PCManFM in this case)? I have already set PCManFM as my default file manager in settings, but because the desktop is handled by nautilus, it opens them with nautilus. | Force nautilus desktop to open files with another file manager | linux;files;desktop;elementary os;nautilus | null
_cs.63764 | I recently came across a problem where I was charged $50 by a merchant with my bank card, but there was a communication error, so the money was taken from my account but never arrived in the merchant's account. I have contacted both banks, and neither knows what happened to the money. How would one construct software to avoid this problem? | How to build a reliable bank transfer? | database theory;software engineering;reliability | null
_unix.385604 | Is it possible to determine the vendor name of the memory used in a dedicated GPU in Linux? Under Windows, there is a tool called GPU-Z that shows this value, though under Linux there seems to be no tool to display it... The GPU I'm using is a GeForce GTX 1060, with CUDA 8 and the Nvidia proprietary drivers. Cheers | Determine the GPU memory vendor name under linux | nvidia;gpu | null
_unix.37489 | I have a btrfs partition. When I run df -h, it shows:

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda2   113G  101G  8.3G  93% /home

Why is that? Is it because of reserved space for root, as with ext2/3/4? Or is it something else? If the former, how can I change it and reclaim those 4GB? As per the btrfs wiki, I know that metadata are stored twice, which inflates the size of the Used data:

user@machine:~$ df -h /
Filesystem  Size  Used Avail Use% Mounted on
/dev/sda1   894G  311G  583G  35% /
                  ^^^^
user@machine:~$ btrfs fi df /
Metadata: total=18.00GB,  >>used=6.10GB<<    *2 = 12.20GB
Data:     total=358.00GB, >>used=298.37GB<<  *1 = 298.37GB
System:   total=12.00MB,  >>used=40.00KB<<   *1 = 0.00GB
                                             == 310.57GB ~~ 311 GB

But this still does not explain why Used + Avail < Size. | When using btrfs, why Size, Used and Avail values from df do not match? | btrfs;df | Unless you specified otherwise when you formatted, the default is to store duplicate copies of the metadata blocks for improved reliability. You probably have 2GB worth of metadata that is stored twice, using 4GB. You can see more details with btrfs filesystem df. In particular, 1.75GB is allocated for metadata, so it consumes twice that, or 3.5GB of space. Only 385MB of that 1.75GB is currently used for metadata, but the full 1.75GB is reserved for that use and so is not counted towards available space for file data.
_cogsci.12843 | Are the limitations of our vision, like the field of view and singular focus, entirely based on the limitations of the eye?

It seems it's possible to feed an artificial signal into the brain through the optic nerve. What would happen if you fed a 360 video through such an interface? What kinds of differences in experience would this provide, regarding the ability to focus on a particular object, for example? It seems that moving the eye would no longer be meaningful (no need for optical focusing), which introduces another curious situation on its own. Would the brain be able to adapt to operate with multiple (mental-visual) focus points in such a setting? | How much our visual limitations are instrument(eye)-based, and how much are they brain-based? | vision;optical illusion;brain computer interface | null
_unix.2477 | I'm new to LVM and have been very confused by this: I am transferring a large file to a partition that I thought had about 1.5 terabytes of space on it. Near the end of the transfer, rsync exits with an error claiming that the partition is full. I investigate and find the following:

$ sudo lvm lvs
  LV        VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  home      system -wi-ao  97.66G
  log       system -wi-ao  48.81G
  log.audit system -wi-ao   9.75G
  root      system -wi-ao 341.59G
  swap      system -wi-ao   4.88G
  temp      system -wi-ao  97.66G
  var       system -wi-ao   1.46T

This seems to imply that /var (the partition that I'm transferring to) has the amount of storage I expect. However, then I see:

$ sudo df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/system-root       331G  1.3G  313G   1% /
/dev/mapper/system-temp        95G  188M   90G   1% /tmp
/dev/mapper/system-var         95G   90G     0 100% /var
/dev/mapper/system-home        95G  188M   90G   1% /home
/dev/mapper/system-log         48G  264M   45G   1% /var/log
/dev/mapper/system-log.audit  9.5G  340M  8.7G   4% /var/log/audit
/dev/sda1                      99M   25M   70M  26% /boot
tmpfs                         8.0G     0  8.0G   0% /dev/shm

I'm guessing this has something to do with the volume being resized at some point. While I have reliable backups, I'd rather not interrupt services for the time it will take to get the backup and restore. Thus, is there any way to make the filesystem seen by the OS match the space available according to LVM without losing data? | Why is there a mismatch between size reported by LVM and the size reported by df -h? | lvm;disk usage | If this is an ext3 filesystem, you can extend it to the LV size by running:

resize2fs /dev/system/var

If this is anything other than ext3, use the appropriate tool, e.g. xfs_growfs /var if it's XFS. This is absolutely nothing to be afraid of. I have extended hundreds of filesystems in more than 10 years on several operating systems, and I have never seen the operation lead to a disruption of any kind.
_webapps.75664 | How do I export a YouTube playlist (video names) to Excel or a document? | How do I export a YouTube playlist? | youtube;export;youtube playlist | If the playlist is public, you can get an Atom feed of it with an HTTP request like:

https://www.youtube.com/feeds/videos.xml?playlist_id=ID

where ID is replaced with an actual ID, like this:

https://www.youtube.com/feeds/videos.xml?playlist_id=PL1KYPbM0Swd0MJQ_oox0zTjYYCr57YDEy

With that document you will have all the information about the feed, and you can further process it using other methods.
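For instance, a minimal Python sketch (my addition, standard library only) that pulls the video titles out of that Atom feed - paste the printed list into Excel or a document; note the feed may only contain the most recent entries:

import urllib.request
import xml.etree.ElementTree as ET

# Substitute your own playlist id here.
url = ("https://www.youtube.com/feeds/videos.xml"
       "?playlist_id=PL1KYPbM0Swd0MJQ_oox0zTjYYCr57YDEy")
ns = {"atom": "http://www.w3.org/2005/Atom"}

with urllib.request.urlopen(url) as resp:
    root = ET.parse(resp).getroot()

# Each <entry> element is one video; print its <title>.
for entry in root.findall("atom:entry", ns):
    print(entry.find("atom:title", ns).text)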
_opensource.914 | Note: this is a hypothetical situation, not one I have actually encountered (and one I hope I won't encounter).

I started a small open source project and have gathered a few loyal committers. We don't have much in the way of a hierarchical structure and make our decisions by way of consensus. This has not led to any significant problems. Recently, the core contributors have split based on differing opinions about a rather central part of the project. (Which relates mainly to how users interact with the product on a very fundamental level.) A compromise seems unlikely, and I fear the project will take a serious blow if it loses almost half of its contributors. How can I limit the damage this disagreement will deal to the project, and how can I prevent something like this from happening in the future? For the sake of completeness, here are two related questions (separate links). | What do I do if my contributors are split into two camps? | contributor;community;collaboration;human resources | Been there, done that.

Why does it happen?
In my experience, a split due to creative differences usually happens because different people have a different idea of what the project goal actually is, but nobody is aware of that. As soon as a contributor realizes that someone else's vision is different from theirs, arguments and power struggles will start, which can quickly become personal and tear a project apart when not moderated properly.

How to prevent it?
The best way to prevent this situation early on is to communicate your creative vision of the project early, clearly and often. Every project should have some kind of official document which outlines the goals. Make sure every contributor knows and understands it, so everyone is on the same page and nobody gets any misconceptions about where you are heading. Make clear that anyone who wants the project to go somewhere else should fork from the start and not get involved in the mainline in the first place.

Should there be any disagreements about aspects of the project direction which were not set in stone beforehand, it is important to make a binding decision before things turn ugly. Having a clear hierarchy or well-defined decision-making process is fundamental here. Without a binding process to make a decision - whether autocratic or democratic - people have no other choice but to reach consensus by either talking down the opposition through countless hours of filibustering in your communication channels (time they could rather spend working) or driving them out of the project through bullying and intrigue (a lose-lose situation for everyone involved).

Unfortunately, when the conflict is already under way, it is likely too late to establish a proper decision-making process. Such a process only works when everyone supports it. But when you try to establish it now, everyone will perceive it in the context of the current conflict, and their support for it will depend on whether this process would decide the current matter in their favor or not. Opening up this new battlefield now will likely deepen the trench instead of bridging it.

The split still happened. How to deal with it?
When the project is under a copyleft license (or when, under a permissive license, the other group is committed to keeping its code open), you can still merge any of their commits into your codebase or vice versa, so the manpower is not completely lost to your project.
But a split is still a considerable blow to the project because organisation structures and infrastructure need to be duplicated, coordination between the forks is impaired and their commits need to be carefully reviewed for relevance and merge conflicts. |
_unix.244673 | After upgrading Squeeze to Wheezy my server will no longer boot. I'm only able to boot by selecting a previous kernel (2.6.32).

linux:~# find /lib/modules/3.2.0-4-amd64/ -maxdepth 2
/lib/modules/3.2.0-4-amd64/
/lib/modules/3.2.0-4-amd64/modules.order
/lib/modules/3.2.0-4-amd64/modules.builtin
/lib/modules/3.2.0-4-amd64/kernel
/lib/modules/3.2.0-4-amd64/kernel/sound
/lib/modules/3.2.0-4-amd64/kernel/net
/lib/modules/3.2.0-4-amd64/kernel/mm
/lib/modules/3.2.0-4-amd64/kernel/lib
/lib/modules/3.2.0-4-amd64/kernel/fs
/lib/modules/3.2.0-4-amd64/kernel/drivers
/lib/modules/3.2.0-4-amd64/kernel/crypto
/lib/modules/3.2.0-4-amd64/kernel/arch
linux:~# uname -rms
Linux 2.6.32-5-amd64 x86_64
linux:~# dpkg -l linux-image* | grep ^ii
ii  linux-image-2.6.32-5-amd64  2.6.32-48squeeze6  amd64  Linux 2.6.32 for 64-bit PCs
ii  linux-image-3.2.0-4-amd64   3.2.68-1+deb7u6    amd64  Linux 3.2 for 64-bit PCs
ii  linux-image-amd64           3.2+46             amd64  Linux for 64-bit PCs (meta-package)

So it appears modules.dep is not being created, even though the install worked. I tried depmod -a, and I've tried apt-get install --reinstall on the kernel; nothing is fixing this issue. | Upgrade Squeeze to Wheezy now no modules.dep | debian;linux kernel;upgrade;kernel modules | When you run depmod, by default it only calculates the dependencies and creates modules.dep for the currently running kernel, unless you provide an alternate kernel version as an argument. In your case, since you are booting with version 2.6.32-5-amd64, you need to run:

$ sudo depmod -a 3.2.0-4-amd64

in order for it to create the file /lib/modules/3.2.0-4-amd64/modules.dep.

From: http://www.computerhope.com/unix/depmod.htm
depmod generates a list of kernel module dependencies and associated map files.
_scicomp.18677 | I have an irregular grid of points describing this surface (a large subduction fault in South America). The color is depth. Anyway, I have 3D coordinates (lon, lat, depth) at irregular intervals. I'm trying to generate a triangular mesh using gmsh, but I'm struggling with how. I can make every irregular grid point a Point in gmsh with a little python function:

def xyz2gmsh(fout, x, y, z):
    f = open(fout, 'w')
    for k in range(len(x)):
        line = 'Point(' + str(k+1) + ') = {%.6f, %.6f, %.6f, 0.01};\n' % (x[k], y[k], z[k])
        f.write(line)
    f.close()

And this generated file (fout) loads into gmsh, but nothing shows! As I understand it, I need to somehow tell gmsh that these points collectively represent a surface to be meshed. How? And can I tell gmsh to shoot for making elements of a certain size? Thanks! | Triangular mesh of a 3D surface | mesh generation;gmsh | null
_datascience.16545 | Let's say I begin with an exceptionally large dataframe (e.g., imported/munged from tsv files). Several of these columns are categorical labels. (As a more concrete example, let's imagine a group of students in a school district, pre-school to high school.) Now, I begin using sklearn and instantiate a t-SNE model, similar to the example here: http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html

import numpy as np
from sklearn.manifold import TSNE
X  # my data
model = TSNE(n_components=2, random_state=0)
np.set_printoptions(suppress=True)
model.fit_transform(X)

and then we plot this. The plot might look something like this: http://imgur.com/a/3amkJ

Here's my problem: with real datasets, after using t-SNE to learn/cluster, you will have a number of clusters. Then, using the categorical labels, I try to go through each of these and try to figure out what structure the t-SNE plot is giving me. For our school example, I'd get the t-SNE output, then I would label the datapoints. (Let's assume that the clusters are actually representative of age/classroom, e.g. the first-graders group together, the second-graders are a group, etc.)

If I try to color this plot with grades, I'll see that the grades do not really explain the structure of this plot. (Why? Because every class level has students with As, Bs, Cs, etc.) Then I might try height... that does pretty well (because there's a correlation between short students --> pre-school, tall students --> high-school seniors). How does one use a t-SNE plot to infer the most correct labels of the data? How does one use t-SNE plots to explain (and further explore) the plot structure? | Given a t-SNE plot, how can I infer the most correct labels? How does one understand its structure? | clustering;labels;tsne | null
_unix.171061 | I'd like to make my ~/.ssh/config file dynamically generated by a shell script (or anything else that prints to STDOUT). Is there a UNIX trick to make reading a file result in executing a command and reading its STDOUT?

What I'd like:

#!/bin/bash
echo Hello World

$ cat myfile
Hello World | Dynamic ~/.ssh/config | bash;ssh;files;stdout | null
_unix.224283 | I have seen yum used with --enablerepo and --disablerepo. But what happens when I enable a repo, say 'apache-tomcat', and what happens if I disable the same repo? | What happens when I enable or disable a repo | yum | null
_unix.17162 | For security reasons I have to boot Linux from u-boot with all output hidden (silently) until a password is entered. I've configured u-boot to do this correctly using the CONFIG_AUTOBOOT_KEYED macro and can successfully boot silently. The issue I am having is that when u-boot boots the Linux kernel and silent mode is enabled, it passes console= as part of the bootargs to the Linux kernel. This is fine for silent booting, but I can't seem to find a way to re-enable the console again after bootup. I've also tried to boot normally and append loglevel=0 to the kernel bootargs, which works for silent bootup, but again I cannot re-enable the console. I've tried:

dmesg -n 4

and

klogd -c 4

to try to set the kernel loglevel back to KERN_WARNING (4), without luck. These commands work properly when I boot the kernel normally. The best guide I've found on the matter is "Silencing the boot process" on blackfin.uclinux.org.

Ideally I'd like to use u-boot's silent mode, where it passes console= as part of the bootargs, but still take input on the console and re-enable output when the password is entered. I am open to other ideas; if anyone can help guide me, I would greatly appreciate it. | Silent booting Linux from u-boot | kernel;boot | If anyone else runs into this issue: I never found a good fix. I ended up hacking both u-boot and the Linux kernel serial driver, basically checking whether the password had been entered. If it had, I allowed the code to run normally. If it hadn't, I just returned from the functions so that nothing was actually printed out on the console.

For the kernel, I edited the receive_chars() function to look for the password (input) and transmit_chars() to mask output. I had u-boot pass the password in as part of the bootargs. If it was null, then the password was already entered and we ignored the special code. If it was a value, then we grabbed input chars via receive_chars() and compared them to the stored string from bootargs.

In u-boot I just used CONFIG_AUTOBOOT_KEYED and the associated default macros for the password entry. I then changed common/cmd_bootm.c to not call fixup_silent_linux(), to avoid masking the console= value, and let the kernel deal with it as stated above.

Hopefully this helps someone else.
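To make the kernel-side idea concrete, here is a rough C sketch of the masking logic described above. This is my illustration, not the actual patch: the names (boot_password, maybe_unlock) are hypothetical, and real serial drivers differ per platform.

/* Parsed from bootargs at init; set to NULL once the password is entered. */
static const char *boot_password;
static size_t match_pos;

/* Called from receive_chars() for every byte read from the UART. */
static void maybe_unlock(unsigned char ch)
{
    if (!boot_password)
        return;                           /* already unlocked */
    if (ch == boot_password[match_pos]) {
        if (boot_password[++match_pos] == '\0')
            boot_password = NULL;         /* full match: unmute the console */
    } else {
        match_pos = (ch == boot_password[0]) ? 1 : 0;
    }
}

/* At the top of transmit_chars():
 *     if (boot_password)
 *         return;    -- swallow all output until unlocked
 */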
_cs.44187 | I am working on an algorithm which approximates a certain optimal quantity. The approximation becomes better when the size of the problem ($n$) becomes larger: the difference from the optimum is approximately $1/n$. Initially, I wrote that the algorithm achieves an approximation of:
$$\Omega(1-1/n)$$
But now I am not sure this notation is correct: it is just like writing $\Omega(1)$ (the smaller element is swallowed in the larger element, which is 1). Should I write:
$$1-O(1/n)$$
Or maybe:
$$1-1/\Omega(n)$$
Which of these is the correct notation? | Expressing that a function converges to 1 with linear rate using Landau notation | asymptotics;landau notation;notation | Both of the options you listed are acceptable. They have the same meaning; $f\in O(1/n)$ if and only if $1/f \in \Omega(n)$.

Let $f\in O(1/n)$. Then there exist $n_0,M>0$ such that for all $n>n_0$, $f\leq M/n$. Then $1/f\geq n/M$ for all $n>n_0$, thus $1/f\in \Omega(n)$, since for $n>n_0$, $1/f \geq 1/M \cdot n$.

The other direction is similar.
_unix.330246 | In Linux, directories are represented as a special type of file that holds an entry for each file name it contains. Obviously, we could traverse these entries and resolve paths with inodes alone, so why do we need dentries to assist in traversing paths? In other words, what is the significance of a dentry if its job could be done by inodes themselves? | Need of Dentry despite traversal could be done by Inode | files;filesystems;inode | null
_codereview.51964 | I am working on some problems on Hackerrank.

John has discovered various rocks. Each rock is composed of various elements, and each element is represented by a lowercase Latin letter from 'a' to 'z'. An element can be present multiple times in a rock. An element is called a 'gem-element' if it occurs at least once in each of the rocks. Given the list of rocks with their compositions, you have to print how many different kinds of gem-elements he has.

Input Format: The first line consists of N, the number of rocks. Each of the next N lines contains a rock's composition. Each composition consists of small alphabets of the English language.

Output Format: Print the number of different kinds of gem-elements he has.

Constraints: 1 ≤ N ≤ 100. Each composition consists of only small Latin letters ('a'-'z'). 1 ≤ Length of each composition ≤ 100.

Sample Input
3
abcdde
baccd
eeabg

Sample Output
2

Explanation: Only a, b are the two kinds of gem-elements, since these characters occur in each of the rocks' compositions.

I solved the problem, but I don't feel that it is fast or pythonic. I was wondering if someone could help me increase the performance and perhaps reduce the amount of memory being used.

numRocks = int(raw_input())
rockList = []

for x in xrange(numRocks):
    rock = raw_input()
    # list of sets()
    rockList.append(set(rock))

gemElement = 0
for x in rockList[0]:
    rocks = len(rockList) - 1
    count = 0
    for y in rockList[1:]:
        if x in y:
            count += 1
    if count == rocks:
        gemElement += 1

print gemElement | Hackerrank Gem Stones | python;optimization;performance;memory management;programming challenge | Your solution looks pretty good. However, a few details can be changed to make your code more pythonic:

Unused variable
You can use _ to name a variable whose value is not used. In your case, it applies to the first loop variable x.

List comprehension
You can rewrite your initialisation of rockList using a list comprehension:

rockList = [set(raw_input()) for _ in xrange(int(raw_input()))]

Simplifying the logic
At the moment, for each element in the first set, you count in how many other sets it appears to know if it is a gem. You can make things clearer by considering that by default it is a gem, except if we don't find it in one of the other sets (and in that case, we can stop looping):

for x in rockList[0]:
    is_gem = True
    for y in rockList[1:]:
        if x not in y:
            is_gem = False
            break
    if is_gem:
        gemElement += 1

Using Python good stuff
Using Python's all builtin, we can write this as all(x in y for y in rockList[1:]). You can extract rockList[1:] to call it only once (note that the cute way to write this in Python 3 would be to use extended iterable unpacking). Your code becomes:

gemElement = 0
other_rocks = rockList[1:]
for x in rockList[0]:
    if all(x in y for y in other_rocks):
        gemElement += 1
print gemElement

Now, you can see that we can easily get the actual list of gems by doing:

gems = (x for x in rockList[0] if all(x in y for y in other_rocks))

This is not an actual list but a generator expression; you'll need to call list(gems) if you want to see an actual list, but we don't really care about the list, we just need the number of elements: len(list(gems)), or sum(1 for _ in gems) if you don't want to build an intermediate list in memory.

One step back from the code
The problem we are trying to solve is linked to a common problem: computing the intersection of multiple sets.
It is a generic enough problem that we can google it and find this answer, for instance. Making the solution as concise as possible, one can write:

print len(set.intersection(*[set(raw_input()) for _ in xrange(int(raw_input()))]))
_webmaster.74256 | Is it possible to configure my website to show different content in search results for users from the same Google domain but from different cities?

When I search on google.co.in from my city, Chennai, for generic keywords like 'part time jobs', I get search results like 'Part time jobs in Chennai' for different websites. Do those websites manipulate this, or is it solely in the hands of Google? | City specific result in google for searches from different city for generic keyword? | seo;google;local seo | Yes and no.

Google shows local searches from Chennai for the keyword "part time jobs" because it believes that if you search for this keyword, you're looking for a job near you. This behaviour is solely in the hands of Google.

But you can optimize your website in such a way that it will show up in these results:

- Add the name of the city to the title and the description of your pages.
- Make sure the address of the company is in this region, and add this address on each page of the website.
- Add a landline with an area code from this region on each page of the website.
- Add the name of the city to the content, alt-texts of images, image names.
- Create a Google business page and verify your address.
- Add your business to Google Maps.

If your business has multiple stores, then you can create a page for each store to feature that location (e.g. coffee shops):

- Show the address and the landline of the store on its page.
- Make sure to write unique content for each store to avoid being flagged as duplicate content (about 300 words of unique content).

The problem gets harder, but the idea stays the same, when your business covers multiple locations but has no physical store in these locations (e.g. a construction firm that covers 3-4 adjacent cities). What you can do in this case is, for example, create a page with testimonials for each city.
_datascience.10802 | I understand from Hinton's paper that t-SNE does a good job in keeping local similarities and a decent job in preserving global structure (clusterization). However, I'm not clear whether points appearing closer in a 2D t-SNE visualization can be assumed to be more similar data points. I'm using data with 25 features.

As an example, observing the image below, can I assume that blue datapoints are more similar to green ones, specifically to the biggest green-points cluster? Or, asking differently, is it ok to assume that blue points are more similar to green ones in the closest cluster than to red ones in the other cluster? (disregarding green points in the red-ish cluster)

When observing other examples, such as the ones presented at scikit-learn's manifold learning page, it seems right to assume this, but I'm not sure if it is correct statistically speaking.

EDIT: I have calculated the distances from the original dataset manually (the mean pairwise euclidean distance), and the visualization actually represents a proportional spatial distance regarding the dataset. However, I would like to know if this is fairly acceptable to be expected from the original mathematical formulation of t-SNE and not mere coincidence. | Can closer points be considered more similar in T-SNE visualization? | visualization;dimensionality reduction;tsne;manifold | I would present t-SNE as a smart probabilistic adaptation of locally-linear embedding. In both cases, we attempt to project points from a high-dimensional space to a small one. This projection is done by optimizing the conservation of local distances (directly with LLE; by producing a probability distribution and optimizing the KL-divergence with t-SNE). Then, if your question is whether it keeps global distances, the answer is no. It will depend on the shape of your data (if the distribution is smooth, then distances should be somewhat conserved).

t-SNE actually doesn't work well on the swiss roll (your "S" 3D image), and you can see that, in the 2D result, the very middle yellow points are generally closer to the red ones than the blue ones (they are perfectly centered in the 3D image).

Another good example of what t-SNE does is the clustering of handwritten digits. See the examples on this link: https://lvdmaaten.github.io/tsne/
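If you want to sanity-check this on your own data, a small sketch along the lines of the EDIT in the question (my illustration; assumes scikit-learn and scipy are available):

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.manifold import TSNE

X = np.random.rand(200, 25)          # stand-in for your (n_samples, 25) data
emb = TSNE(n_components=2, random_state=0).fit_transform(X)

# Rank-correlate original vs. embedded pairwise distances: a high rho means
# the 2D layout roughly respects the original distances; a low rho means not.
rho, _ = spearmanr(pdist(X), pdist(emb))
print(rho)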
_softwareengineering.284249 | I'm currently exploring TypeScript and I was wondering why not compile the whole app to a single JS file instead of compiling every .ts file to its corresponding .js.

An example of such an app is TypeDoc, which basically compiles to a single bin/typedoc.js file. The common approach is to compile each .ts file into a .js file with --module commonjs as an argument to the TypeScript compiler. Is there something I need to worry about if I build a scalable (big) web application which compiles to a single file? | Is it a bad practice to compile TypeScript NodeJS app to a single JS file? | node.js;typescript | null
_codereview.47158 | I have implemented Tic-Tac-Toe so that a human can play against the computer, where the computer should never lose. I did a simple analysis before implementing, and I found out that there are certain cells you want to occupy (I named them BEST_CELLS). Here is the link to the implementation: https://github.com/yangtheman/tictactoe

I have five classes:

- Player: human or CPU
- TicTacToe: holds the board instance variable
- TicTacToeGame: gets user input and makes the CPU move
- TicTacToePrint: prints out the board
- TicTacToeScan: scans the board for any two in a row and three in a row (column/row/diagonal)

I don't feel that my class design is optimal, and the scanning algorithms can be improved. I also used hashes for the board for faster look-up, but perhaps using arrays would be easier? I am looking for feedback on my design and algorithm.

tic_tac_toe.rb

class TicTacToe
  X_COORDS = ["A", "B", "C"]
  Y_COORDS = ["1", "2", "3"]

  attr_reader :board

  def initialize
    @best_cells = ["B2", "A1", "A3", "C1", "C3"]
    @board = {}
    X_COORDS.each do |x|
      @board[x] = {}
    end
  end

  def x_coords
    X_COORDS
  end

  def y_coords
    Y_COORDS
  end

  def place_marker(coord_string, player)
    x, y = coord_string.upcase.split("")
    return nil unless coord_valid?(x, y)
    deduct_from_best_cells(x, y)
    @board[x][y] = player
  end

  def best_cells_left
    @best_cells
  end

  def board_full?
    sum = X_COORDS.inject(0) {|sum, x| sum += @board[x].size}
    sum == 9
  end

  def player_cell?(x, y, player)
    @board[x][y] == player
  end

  def empty_cell?(x, y)
    @board[x][y].nil?
  end

  def cell_marker(x, y)
    @board[x][y].nil? ? "." : @board[x][y].marker
  end

  private

  def coord_valid?(x, y)
    within_range?(x, y) && @board[x][y].nil?
  end

  def within_range?(x, y)
    X_COORDS.include?(x) && Y_COORDS.include?(y)
  end

  def deduct_from_best_cells(x, y)
    @best_cells -= ["#{x}#{y}"]
  end
end

tic_tac_toe_game.rb

require_relative './player'
require_relative './tic_tac_toe'
require_relative './tic_tac_toe_scan'
require_relative './tic_tac_toe_print'

class TicTacToeGame
  def initialize
    @board = TicTacToe.new
    @cpu = Player.new
    @human = Player.new(false)
  end

  def play
    print_initial_instruction
    interact_with_human
    play if continue_to_play?
  end

  def print_board
    TicTacToePrint.print_board(@board)
  end

  def scan_board(player)
    TicTacToeScan.new(@board, player)
  end

  private

  def print_initial_instruction
    puts "Welcome to a Tic-Tac-Toe Game!\nYou are playing against the computer. Try to win."
    puts "CPU marker is #{@cpu.marker}\nYour marker is #{@human.marker}"
    print_board
  end

  def interact_with_human
    loop do
      if human_turn
        break if game_over?
      else
        puts "Invalid Move. Please try again."
      end
    end
  end

  def game_over?
    human_scan = scan_board(@human)
    return true if game_finished?(human_scan)
    cpu_scan = scan_board(@cpu)
    cpu_turn(cpu_scan, human_scan)
    print_board
    return true if game_finished?(scan_board(@cpu))
  end

  def human_turn
    print "Your Next Move (for example A1 or C3): "
    input = STDIN.gets.chomp().upcase
    @board.place_marker(input, @human)
  end

  def cpu_turn(cpu_scan, human_scan)
    cpu_cell = calculate_cpu_cell(cpu_scan, human_scan)
    @board.place_marker(cpu_cell, @cpu)
    puts "CPU put his/her marker on #{cpu_cell}"
  end

  def calculate_cpu_cell(cpu_scan, human_scan)
    playable_cells = cpu_scan.get_playable_cells
    to_block = human_scan.get_playable_cells
    if playable_cells[2] && playable_cells[2].length > 0
      cpu_cell = playable_cells[2].first
    elsif to_block[2] && to_block[2].length > 0
      cpu_cell = to_block[2].first
    elsif @board.best_cells_left.length > 0
      cpu_cell = @board.best_cells_left.first
    elsif playable_cells[1] && playable_cells[1].length > 0
      cpu_cell = playable_cells[1].first
    else
      cpu_cell = playable_cells[0].first
    end
    cpu_cell
  end

  def game_finished?(scan)
    if scan.winner?
      if scan.player == @human
        puts "Congratulations, You Won!"
      else
        puts "Sorry. You Lost!"
      end
      return true
    elsif @board.board_full?
      puts "Awwwww. No one won! Game is tied!"
      return true
    end
    false
  end

  def continue_to_play?
    print "Would you like to play again? (Y or N): "
    if STDIN.gets.chomp() =~ /Y|y/
      @board = TicTacToe.new
      return true
    end
    false
  end
end

tic_tac_toe_scan.rb

class TicTacToeScan
  attr_reader :player, :playable_cells

  def initialize(game, player)
    @game = game
    @player = player
    @playable_cells = {}
  end

  def get_playable_cells
    calculate_playable_cells
    @playable_cells
  end

  def winner?
    calculate_playable_cells if @playable_cells == {}
    @playable_cells[3] && @playable_cells[3] == []
  end

  private

  def calculate_playable_cells
    scan_rows
    scan_cols
    scan_diag_w2e
    scan_diag_e2w
  end

  def add_to_playable_cells(num, array)
    @playable_cells[num] ||= []
    @playable_cells[num] += array
    @playable_cells[num].uniq!
  end

  def scan_rows
    @game.y_coords.each do |y|
      player_cell_num = 0
      empty_cells = []
      @game.x_coords.each do |x|
        empty_cells << "#{x}#{y}" if @game.empty_cell?(x, y)
        player_cell_num += 1 if @game.player_cell?(x, y, @player)
      end
      add_to_playable_cells(player_cell_num, empty_cells)
    end
  end

  def scan_cols
    @game.x_coords.each do |x|
      player_cell_num = 0
      empty_cells = []
      @game.y_coords.each do |y|
        empty_cells << "#{x}#{y}" if @game.empty_cell?(x, y)
        player_cell_num += 1 if @game.player_cell?(x, y, @player)
      end
      add_to_playable_cells(player_cell_num, empty_cells)
    end
  end

  def scan_diag_w2e
    player_cell_num = 0
    empty_cells = []
    @game.x_coords.each_with_index do |x, index|
      y = @game.y_coords[index]
      empty_cells << "#{x}#{y}" if @game.empty_cell?(x, y)
      player_cell_num += 1 if @game.player_cell?(x, y, @player)
    end
    add_to_playable_cells(player_cell_num, empty_cells)
  end

  def scan_diag_e2w
    player_cell_num = 0
    empty_cells = []
    @game.x_coords.each_with_index do |x, index|
      y = @game.y_coords.reverse[index]
      empty_cells << "#{x}#{y}" if @game.empty_cell?(x, y)
      player_cell_num += 1 if @game.player_cell?(x, y, @player)
    end
    add_to_playable_cells(player_cell_num, empty_cells)
  end
end | Tic-Tac-Toe implementation where computer should not lose | algorithm;object oriented;ruby;design patterns | Random access and performance
Flambino has correctly remarked that performance is no issue with any container holding a 3x3 matrix, but for the sake of argument, let's say that it might be an issue. A major benefit of a Hash structure is that it keeps $O(1)$ complexity for setting as well as fetching elements in it, no matter how large it is (as long as the hashing function is well thought out). This is what is called random access. An Array, on the other hand, has... random access as well! That is, as long as you know where your item is, reaching that item is done immediately. In actuality, small hashes with a fixed number of elements will always be less performant than arrays, since the hashing function will be much too generic, and since hashes are implemented using buckets (some variants of a tree structure) which are kept in an array... I guess when you said "faster look-up" you might have meant using less code, or maybe having a structure closer to the human metaphor (human player enters A2...) - both are debatable, but from a performance point of view it is quite clear cut - there is no reason to work with a Hash for the board state - an array (or a two-dimensional array) will be your best option.

Class names and motivations
Flambino has noted that the TicTacToe prefix is not advisable, and should be removed. Class names should be of actors and not of actions - this is extra obvious after removing the prefix of the class names - Printer and Scanner are better names than Print and Scan. Both of those classes look suspiciously specific, which should make us think "are they really class-worthy?" Shouldn't the TicTacToePrint class simply be a def print method within the Game class? After all - it is used only once, and you even chose to omit its implementation, since it is trivial... Also, it seems that TicTacToeScan is an elaborate class intended to maintain the state of the board; I believe that it should either be part of the board's state, or simply make the calculation ad hoc on the fly. Oh, and I almost forgot - what does Player do? Does it do anything?
Is it really class-worthy?

Method boundaries
A method should do exactly what it claims it does. For example - interact_with_human looks innocent enough, since it calls human_turn, and breaks if game_over?, but actually game_over? plays the computer's turn! This is unpredictable and confusing. game_over? should do just that - check whether the game is over. Any other logic should be done elsewhere.

Where is your strategy?
In the title of the post you put at center stage that the "computer should not lose" - which means that the point of the exercise is to showcase the strategy for the cpu player. But your strategy code is strewn all over the place (some in TicTacToe, some in TicTacToeGame, and some in TicTacToeScan) - so that it is impossible to understand in one reading what your strategy actually is. Of all the classes you decided to implement, the one most obviously missing is the Strategy class (you could call it CpuPlayer, as it stands as a complement to the human player against it). It should know (or, at least, claim) which are the best cells, decide how to score playable cells by player_cell_num (which, I admit, I couldn't thoroughly understand), and, of course, decide which is the next cell the CPU player should play.

Be DRY
Your TicTacToeScan class is full of cut-and-paste code. This makes it hard to read, and hard to maintain.
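To illustrate the missing class, a sketch (mine, reusing the question's method names where they exist; the finer scoring is left to the author):

# All move-selection logic in one place: win if possible, block if
# necessary, otherwise take the best open cell.
class CpuPlayer
  def initialize(board)
    @board = board
  end

  def next_cell(my_scan, opponent_scan)
    winning  = (my_scan.get_playable_cells[2] || []).first
    blocking = (opponent_scan.get_playable_cells[2] || []).first
    winning || blocking || @board.best_cells_left.first ||
      (my_scan.get_playable_cells[1] || my_scan.get_playable_cells[0]).first
  end
end

Now the strategy can be read - and tested - in one place, and TicTacToeGame shrinks to pure game flow.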
_codereview.101612 | I wanted an easy way to augment objects by adding functionality from any other object(s). More importantly, I needed a way to augment the object from multiple sources in a clean one-line solution.

The inheritance is done with this:

function extend(proto, args){
    this[proto.id] = Object.create(proto);
    proto.constructor.call(this[proto.id], args);
}

extend.call(this, vehicle, args); // one-liner

I can invoke as many objects as needed; with this pattern it's easy to swap and change the prototype chain, so I could just put the above extend code in the vehicle object, if that's where I want augmentation. It's now easy to create a car by pulling in whatever you need:

extend.call(this, vehicle, args);
extend.call(this, sunroof, args);
extend.call(this, tyres, args);
extend.call(this, wings, args);
extend.call(this, rocket, args);
etc...

Questions:

- Plausibility: is it flawed in some way?
- Optimisation: can I enhance the pattern?
- Clunky: I had to create an id property for each object, so the extend function knows how to create a property on the executing context. It seems like a hack. Is there a way to get a prototype name from a named constructor?

var manufacturer = {
    id:'manufacturer',
    constructor : function (args) {
        this.boss = args.boss || 'Bernie Ecclestone';
        this.country = args.country || 'UK';
        return this;
    }
};

var vehicle = {
    id:'vehicle',
    constructor : function (args) {
        this.colour = args.colour || 'blue';
        this.wheels = args.wheels || 2;
        extend.call(this, manufacturer, args);
        return this;
    }
};

var driver = {
    id:'driver',
    constructor : function (args) {
        this.name = args.name || 'John';
        return this;
    },
    info : function () {
        console.log(this.name);
    }
};

var engine = {
    id:'engine',
    constructor : function (args) {
        this.type = args.type || 'V6';
        this.fuel = args.fuel || 'petrol';
        return this;
    },
    tune : function () {
        this.type = 'super-charged';
        this.fuel = 'ethanol';
        console.log('Car now ' + this.type + ' with ' + this.fuel);
    }
};

var car = {
    id:'car',
    constructor : function (args) {
        extend.call(this, vehicle, args);
        extend.call(this, driver, args);
        extend.call(this, engine, args);
        return this;
    },
    info : function () {
        console.log('boss: ' + this.vehicle.manufacturer.boss);
        console.log('country: ' + this.vehicle.manufacturer.country);
        console.log('driver: ' + this.driver.name);
        console.log('colour: ' + this.vehicle.colour);
        console.log('wheels: ' + this.vehicle.wheels);
        console.log('type: ' + this.engine.type);
        console.log('fuel: ' + this.engine.fuel);
        console.log('\n');
    }
};

function extend(proto, args){
    this[proto.id] = Object.create(proto);
    proto.constructor.call(this[proto.id], args);
}

var ferrari = Object.create(car).constructor({
    boss: 'Maurizio Arrivabene',
    country:'Italy',
    name: 'Steve',
    colour: 'red',
    wheels: 4,
    type:'100cc',
    fuel:'diesel'
});

var lotus = Object.create(car).constructor({
    name: 'Jenson Button'
});

var mclaren = Object.create(car).constructor({
    type:'hybrid',
    fuel:'battery/petrol'
});

ferrari.engine.tune();
ferrari.info();
/*
Car now super-charged with ethanol
boss: Maurizio Arrivabene
country: Italy
driver: Steve
colour: red
wheels: 4
type: super-charged
fuel: ethanol
*/

lotus.info();
/*
boss: Bernie Ecclestone
country: UK
driver: Jenson Button
colour: blue
wheels: 2
type: V6
fuel: petrol
*/

mclaren.info();
/*
boss: Bernie Ecclestone
country: UK
driver: John
colour: blue
wheels: 2
type: hybrid
fuel: battery/petrol
*/ | Multiple inheritance pattern for vehicle information | javascript;object oriented;inheritance | null
_unix.166458 | I have several workstations (laptops and desktops). I'd like to synchronize files among them so that each one is a mirror of the others. I used to run an NFS server and share files out, but that only works while I am on the network. I'd like to have access to my files offline, be able to make changes, and have the changes I've made propagate to the other volumes when I connect to the network again.

I'm considering btrfs, or perhaps a clustered filesystem such as GlusterFS or Lustre. Are any of these a good fit for frequently offline nodes? It seems they'd work well if the nodes were always online, but likely not when they are frequently offline. | fs synchronization among desktops | btrfs;replication | I am actually using git-annex. I've been using it for over a year and it works reasonably well.
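For reference, a minimal two-machine setup looks roughly like this (the hostnames and paths are made up - adjust them to your machines):

# on the laptop
git init ~/annex && cd ~/annex
git annex init "laptop"
git annex add Documents
git commit -m "add files"

# point it at a clone on the desktop, then sync metadata and content
git remote add desktop ssh://desktop.local/home/me/annex
git annex sync --content

Each machine can make changes while offline; the next git annex sync reconciles them.
 |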
_unix.384468 | I am using watch to periodically run a Perl script that requires Term::Size to obtain the terminal width using

$columns = Term::Size::chars *STDOUT{IO};

Curiously, $columns is an empty string in this case. Does watch somehow manipulate STDOUT or the terminfo database? | Linux watch and terminfo | linux;terminal;perl | Unfortunately, watch uses pipes to collect output from the subprocess, as you can see from

watch 'ls -l /proc/self/fd'

Every 2.0s: ls -l /proc/self/fd ...

lrwx------ 1 64 Aug  7 16:28 0 -> /dev/pts/6
l-wx------ 1 64 Aug  7 16:28 1 -> pipe:[42416612]
l-wx------ 1 64 Aug  7 16:28 2 -> pipe:[42416612]
lr-x------ 1 64 Aug  7 16:28 3 -> /proc/3509/fd

Since file descriptor 1 is a pipe rather than a terminal, Term::Size has nothing to measure.
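One possible workaround (an untested sketch, not from watch's documentation): query the controlling terminal via /dev/tty, which still refers to the real terminal even though stdout has been replaced with a pipe:

use Term::Size;

open(my $tty, '<', '/dev/tty') or die "cannot open controlling terminal: $!";
my ($columns, $rows) = Term::Size::chars($tty);
 |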
_unix.210954 | I'm trying to delete a user:

pgrep -u test
ps -fp $(pgrep -u test)
killall -KILL -u test
userdel -r test

But the last command always returns

userdel: user test is currently used by process xxx

where xxx is always different. | Unable to delete a user: user test is currently used by process xxx | linux;users;accounts | null |
_cstheory.32053 | The emptiness problem for context-free grammars (CFGs) is well studied. The same holds for the equivalence between pushdown automata (PDA) and CFGs. Therefore, given a PDA, the straightforward way to decide whether the language it accepts is empty is to convert the PDA into a CFG and then use the known algorithm to decide emptiness of that CFG.

I am wondering whether there exists an algorithm that directly checks emptiness of the PDA's language, without going through the conversion to a context-free grammar. | Emptiness of PDA without constructing the corresponding CFG | automata theory;grammars | Quick Answer: Yes, there is a really lovely algorithm that solves non-emptiness for pushdown automata without constructing the equivalent CFG.

Possible Drawback: Correct me if I am wrong, but it doesn't appear to be more efficient than the approach where you convert to a CFG.

Basic Idea: It can be viewed as a sort of dynamic programming algorithm where you solve reachability without ever constructing the possibly exponential-length paths you would otherwise need to consider.

You start with the state diagram of a pushdown automaton. Let's call a transition that doesn't manipulate the stack a resting transition. You then proceed in a series of stages.

Start of Stage: You combine all compatible push and resting transitions. Next, you combine all compatible pop and resting transitions. Then, you combine all compatible pairs of resting transitions with each other. Finally, you combine all compatible push and pop transitions. Now you add all of the new transitions to the state diagram. End of Stage.

You repeat this process stage after stage. There are only finitely many possible transitions, so eventually you either obtain a transition that leads from the start state to a final state, or you run out of transitions to add. At this point, you know whether the automaton's language is empty or not.

Question: Can you provide me with any books or papers that give a good exposition of this algorithm? Whenever I searched for it several years ago, it seemed that this algorithm is unpopular or not well known. I personally really like it.

Thanks for asking the question! I really appreciate it and I hope this helps a little bit. Have a nice day! :)
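To make the stages concrete, here is a compact Python sketch of the saturation idea - my own reconstruction, not code from a reference. It assumes acceptance by final state with empty stack; for acceptance by final state alone you would additionally treat pushes as ordinary edges when checking reachability from the start state.

def pda_nonempty(states, start, finals, resting, pushes, pops):
    # resting: set of (p, q) pairs for transitions that ignore the stack
    # pushes:  set of (p, a, q) meaning "p -> q, pushing symbol a"
    # pops:    set of (p, a, q) meaning "p -> q, popping symbol a"
    # R collects "net-empty" transitions: p can reach q leaving the
    # stack exactly as it found it.
    R = set(resting) | {(s, s) for s in states}
    while True:
        new = set()
        # Combine two compatible net-empty transitions.
        for (p, q) in R:
            for (q2, r) in R:
                if q == q2 and (p, r) not in R:
                    new.add((p, r))
        # Combine a push with a matching pop around a net-empty segment:
        # p -push a-> p2, p2 =>* q2 (net-empty), q2 -pop a-> q.
        for (p, a, p2) in pushes:
            for (q2, b, q) in pops:
                if a == b and (p2, q2) in R and (p, q) not in R:
                    new.add((p, q))
        if not new:
            break
        R |= new
    # Non-empty iff some final state is reachable with an empty stack.
    return any((start, f) in R for f in finals)

Termination is guaranteed because R only grows and is bounded by the finite set of state pairs.
 |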
_softwareengineering.235045 | I am very confused about white box testing.

A simplified version of the example: the entire system consists of three methods - methodA(), methodB(), and methodC(). The program starts from methodA(); methodB() requires input from methodA(), and methodC() requires input from methodB().

Do we create three white box tests, one for each method, or do we create one white box test for the entire system? | Do we do white box testing on methods or on an overall program? | unit testing;software;engineering | null |