id
stringlengths 5
27
| question
stringlengths 19
69.9k
| title
stringlengths 1
150
| tags
stringlengths 1
118
| accepted_answer
stringlengths 4
29.9k
⌀ |
---|---|---|---|---|
_codereview.123820 | I am using the following VBA code to replace accented letters with regular letters in a spreadsheet. This is necessary because these spreadsheets have to be uploaded to an import tool that does not allow foreign characters.Function RemoveAccentsFromForeignLetters() StartNewTask (Removing accents from foreign letters) Dim AccChars As String Dim RegChars As String AccChars = RegChars = SZszYAAAAAACEEEEIIIIDNOOOOOUUUUYaaaaaaceeeeiiiidnooooouuuuyy Set MyRange = ActiveSheet.UsedRange Dim A As String * 1 Dim B As String * 1 Dim i As Integer For i = 1 To Len(AccChars) A = Mid(AccChars, i, 1) B = Mid(RegChars, i, 1) MyRange.Replace What:=A, Replacement:=B, LookAt:=xlPart, MatchCase:=True ' TODO: highlight changed cells yellow NextEnd FunctionI googled the code from somewhere and it works, but it is a bit slow. In a spreadsheet with 1.5 million cells (7000 rows, 200 columns), it takes 21 seconds to run.I wanted to look into ways to optimize it, for example:Maybe a RegEx would be faster?Maybe I should pass the entire spreadsheet to a DLL using an array, have the DLL do the replace, then pass it back? One article I read suggested that this is up to 10x faster than using native Excel VBA.Maybe I should use the same DLL trick as above, but add multi-threading?Any other ideas? | Replacing accented letters with regular letters in a spreadsheet | performance;vba | In addition to @Raystafarian's observations, there are a couple of other issues that I see.I would personally put your AccChars and RegChars variables intoConsts, because you never change their values.Your code also requires that AccChars and RegChars are the samelength, and will fail if they aren't. I'd add an assert that teststhat before you do anything that writes to the Worksheet.You generally want to avoid using the Integer type unless youabsolutely need to (for example in an API call). VBA stores them as aLong regardless of how they are declared.Declaring A and B as fixed-length strings is a bit of prematureoptimization that is backfiring. When you pass them to .Replace asparameters, they are actually being implicitly cast back tovariable length ones.Minor thing, but I prefer the term stripped to regular - allcharacters are regular in their native context.Using string functions is not ideal when what you really care aboutare individual characters as opposed to a sub-string. VBA allows adirect assignment of a String to a Byte array, and indexing intothe array is much faster than calling Mid. The performance hit is alot higher when you're doing it inside a loop. As a side note, youshould always use the String returning functions that end with '$'to avoid superfluous casting unless you explicitly require aVariant type. Mid$ (returns a String) as opposed to Mid(returns a Variant).Your call to .Replace method is acting on every single cell in yourRange. This is a huge performance hit, because I'm guessing thatnot every cell in the entire Worksheet is going to have an accentedcharacter in it. You should really only be concerned with cells thatdo. By performing the replacement on every cell, your performance is scaling directly with the number of cells, not the number ofreplacements. So, if only 5% of the cells have accented charactersyou are still doing 100% of the work. This is where the regularexpression would be useful, but you can't easily pawn that off onExcel (except maybe with using .Find, which has issues of its own).A loop would be better - a loop over an array pulled from the Rangewould be best.Your TODO: highlight changed cells yellow is going to be much moredifficult using the .Replace function, because it would requirestoring the state of the entire sheet, then doing a cell-by-cellcomparison. It will be a lot easier to track this concurrently whileyou are making changes.With all of that in mind, I'd do something more like this:Private Sub RemoveAccentsFromForeignLetters() Dim Target As Range Set Target = ActiveSheet.UsedRange StartNewTask (Removing accents from foreign letters) Dim Values() As Variant Values = Target.Value Debug.Assert Len(AccentedChars) = Len(StrippedChars) Dim FindChars() As Byte Dim ReplaceChars() As Byte FindChars = AccentedChars ReplaceChars = StrippedChars Dim AccentedTest As RegExp Set AccentedTest = New RegExp AccentedTest.Pattern = [ & AccentedChars & ] Dim index As Long Dim character As Long Dim col As Long Dim row As Long For row = 1 To UBound(Values, 1) For col = 1 To UBound(Values, 2) 'Ignore strings that don't require character replacements. If AccentedTest.Test(Values(row, col)) Then Dim buffer() As Byte buffer = StrConv(Values(row, col), vbUnicode) 'Skip every other character - VBA Unicode expansion 'inserts nulls there. For character = 0 To UBound(buffer) Step 2 For index = 0 To UBound(FindChars) If buffer(character) = FindChars(index) Then buffer(character) = ReplaceChars(index) End If Next index Next character 'Highlight changed cells yellow left as an exercise for 'the reader. Values(row, col) = StrConv(buffer, vbFromUnicode) End If Next col Next row ActiveSheet.UsedRange = ValuesEnd SubSome quick and dirty benchmarks, all done with 2000 rows and 10 columns. In the worse case benchmarks, all cells have the value . In the average case benchmarks, 5% of the cells have the and the rest of them contain XXXXXXXXXXXXXXX.Replace method, worst case: 3.51 seconds. Array method, worst case: 1.15 seconds. Replace method, average case: .40 seconds (supports disabling ScreenUpdating). Array method, average case: .08 seconds. |
_unix.266460 | Reading up on signal(7) I can see that now: two, but once: three; signal numbers past 31 are reserved for use by the Real-time signal system and should not be used:Real-time SignalsLinux supports real-time signals as originally defined in the POSIX.1b real-time extensions (and now included in POSIX.1-2001). The range of supported real-time signals is defined by the macros SIGRTMIN and SIGRTMAX. POSIX.1-2001 requires that an implementation support at least POSIX_RTSIG_MAX(8) real-time signals.The Linux kernel supports a range of 32 different real-time signals, numbered 33 to 64. However, the glibc POSIX threads implementation internally uses two (for NPTL) or three (for LinuxThreads) real-time signals (see pthreads(7)), and adjusts the value of SIGRTMIN suitably (to 34 or 35). Because the range of available real-time signals varies according to the glibc threading implementation (and this variation can occur at run time according to the available kernel and glibc), and indeed the range of real-time signals varies across UNIX systems, programs should never refer to real-time signals using hard-coded numbers, but instead should always refer to real-time signals using the notation SIGRTMIN+n, and include suitable (run-time) checks that SIGRTMIN+n does not exceed SIGRTMAX.So, how do I determine the value (in a C program that needs to set up signal handling for itself and any children) of SIGRTMIN when the program is running? I have looked through questions and answers here but they all seem to treat SIGRTMIN as if it was a #define SIGRTMIN 34 when the man page says that should not be done! | How does one establish SIGRTMIN at run-time? | linux kernel;c;signals | Stupidly I had forgotten that things that are #defined are not constant unless they are written that way!As @RuiFRibeiro points out in the /usr/include/architecture-specific/bits/signum.h include file at the bottom is the pair of MACROS that provides what is needed:#define SIGUNUSED 31#define _NSIG 65 /* Biggest signal number + 1 (including real-time signals). */#define SIGRTMIN (__libc_current_sigrtmin ()) #define SIGRTMAX (__libc_current_sigrtmax ()) /* These are the hard limits of the kernel. These values should not be used directly at user level. */#define __SIGRTMIN 32 #define __SIGRTMAX (_NSIG - 1)So now I know how to prevent signal handlers from being attempted to be replaced for those reserved one - I suspect any attempt would be rejected anyway but for error reporting it is better to know what the limits are rather than to determine them from a suck it and see approach! |
_unix.138303 | I want to build an Ubuntu kernel from scratch for beaglebone black. I have been searching for where I can download the kernel source code for more than two days but haven't found anything.So, please tell me from where I can get the kernel source code. | kernel source code for beaglebone black | ubuntu;linux kernel | The first result for ubuntu kernel source code in duckduckgo.com is https://wiki.ubuntu.com/Kernel/SourceCode which explains the process of getting and compiling an Ubuntu kernel. I reproduce it here:All of the Ubuntu Kernel source is maintained under git. The source for each release is maintained in its own git repository on kernel.ubuntu.com. These can be browsed in gitweb, the official Ubuntu trees are in the ubuntu/ directory. The Ubuntu Linux kernel git repository is located at git://kernel.ubuntu.com/ubuntu/ubuntu-.git or http://kernel.ubuntu.com/git-repos/ubuntu/ubuntu-.git. To obtain a local copy you can simply git clone the repository for the release you are interested in as below. The git command is part of the git-core package:git clone git://kernel.ubuntu.com/ubuntu/ubuntu-<release>.gitFor example to obtain the maverick tree:git clone git://kernel.ubuntu.com/ubuntu/ubuntu-maverick.gitThis will download several hundred megabytes of data. If you plan on working on more than one kernel release you can save space and time by downloading the upstream kernel tree. Note that once these two trees are tied together you cannot remove the virgin Linus tree without damage to the Ubuntu tree:git clone git://kernel.ubuntu.com/ubuntu/linux.gitgit clone --reference linux git://kernel.ubuntu.com/ubuntu/ubuntu-karmic.gitgit clone --reference linux git://kernel.ubuntu.com/ubuntu/ubuntu-maverick.gitIn each case you will end up with a new directory ubuntu- containing the source and the full history which can be manipulated using the git command from within each directory.By default you will have the latest version of the kernel tree, the master tree. You can switch to any previously released kernel version using the release tags. To obtain a full list of the tagged versions in the release as below:$ git tag -l Ubuntu-*Ubuntu-2.6.27-7.10Ubuntu-2.6.27-7.11Ubuntu-2.6.27-7.12Ubuntu-2.6.27-7.13Ubuntu-2.6.27-7.14$To look at the 2.6.27-7.13 version you can simply checkout a new branch pointing to that version:git checkout -b temp Ubuntu-2.6.27-7.13You may then manipulate the release for example adding new commits. |
_softwareengineering.209862 | C++ is a great language in many ways, but some things in particular are cumbersome to write without an IDE. As a VIM user, it would be very interesting if I had access to a higher level language which enabled me to write C++ with S-Expressions and possibly with Lisp-like macros, allowing for the generation of clean code while avoiding rewriting the same patters over and over again.I've asked on freenode and tested several ideas, such as compiling Lisp->C with compilers such as ECL and Bigloo, but none of those generated particularly clean C code.Are there any works on this issue? | Is it possible to compile a higher level language to readable C++? | programming languages;c++;c;lisp;vim | null |
_cstheory.37010 | For a graph $G$ on $n$ vertices, what is the value of following ratio: $$\max_{f:V\rightarrow [\frac{-1}{2},\frac{1}{2}], \\ \sum_{v}{f(v)}=0} \frac{f^T L_G f}{n-f^Tf} ,$$ where $L_G=D_G-A_G$ is the laplacian matrix of $G$? Is this parameter related to the spectrum of $G$? Is this parameter polynomially computable? Remark: Note that we have $$\max_{f:V\rightarrow [\frac{-1}{2},\frac{1}{2}], \\ \sum_{v}{f(v)}=0} \frac{f^T L_G f}{f^Tf} = \lambda_n(G),$$ where $\lambda_n(G)$ is the largest eigenvalue of $L_G$. | Is the value of $\max_{f:V\rightarrow [\frac{-1}{2},\frac{1}{2}], \\ \sum_{v}{f(v)}=0} \frac{f^T L_G f}{n-f^Tf}$ polynomially computable? | cc.complexity theory;polynomial time;spectral graph theory | null |
_unix.129290 | My system has 6 interfaces, a loopback, a Xen Bridge, and then 4 ethernet interfaces eth0-3. In this system, my DNS server assigned by DHCP on xenbr0 is 192.168.1.1.Initially eth1 is disabled and DNS assigned (checked via /etc/resolv.conf) is 192.168.1.1. When I enable eth1, internet on this system stops working, I checked with /etc/resolv.conf and now DNS is 127.0.0.1 (why?). Anyway DNS resolution doesn't work now.Question is How can I make internet work while keeping all interfaces active? Why DNS server changes, and how to stop that?About Environment: This is a Ubuntu 12.04 VM running in Xen in VirtualBox. eth1 is connected to a VBox host-only network 192.168.56.0/24. eth2 and eth3 are connected to VBox internal networks, there is no config on these two interfaces. xenbr0 is xen bridge and eth0 is added as its one port. In Vbox eth0 is sharing IP with host machine via NAT (currently getting 10.0.2.15). IP of host machine is 192.168.1.x and router IP is 192.168.1.1 which is default gateway for host and DNS too. This is hown DNS is propagated upto guest machine via xenbr0. | Why DNS server changes when enabling another interface on machine | networking;dns | Sounds like your system uses the same init script on all(?) interfaces, thus the most-recent DHCPconfigured connection will overwrite /etc/resolv.conf with whatever that DHCP defined |
_codereview.16211 | I made a simple library to help me doing A/B tests in a web app. The idea is simple: I have two or more page options (URL) for a given page and every call to the library method should give me a URL so at the end all options were given the same traffic.My questions are: Is this 100% thread safe? Is this performatic (I'm worried about the locks in a multithreaded (web) environment)? Did I use the best data structure for it?Can it be more readable?Here are the tests and the implementation:[TestFixture]public class ABTestTest { [SetUp] public void Setup() { ABTest.ResetAll(); } [Test] public void GetUrlWithTwoCandidates() { ABTest.RegisterUrl(CarrinhoPagamento, Url1.aspx); ABTest.RegisterUrl(CarrinhoPagamento, Url2.aspx); ABTest.GetUrl(CarrinhoPagamento).Should().Be.EqualTo(Url1.aspx); ABTest.GetUrl(CarrinhoPagamento).Should().Be.EqualTo(Url2.aspx); ABTest.GetUrl(CarrinhoPagamento).Should().Be.EqualTo(Url1.aspx); ABTest.GetUrl(CarrinhoPagamento).Should().Be.EqualTo(Url2.aspx); ABTest.GetUrl(CarrinhoPagamento).Should().Be.EqualTo(Url1.aspx); ABTest.GetUrl(CarrinhoPagamento).Should().Be.EqualTo(Url2.aspx); ABTest.GetUrl(CarrinhoPagamento).Should().Be.EqualTo(Url1.aspx); } [Test] public void GetUrlWithThreeCandidates() { var OptionsSelected = new Dictionary<string, int>(); OptionsSelected.Add(Url1.aspx, 0); OptionsSelected.Add(Url2.aspx, 0); OptionsSelected.Add(Url3.aspx, 0); ABTest.RegisterUrl(CarrinhoPagamento, Url1.aspx); ABTest.RegisterUrl(CarrinhoPagamento, Url2.aspx); ABTest.RegisterUrl(CarrinhoPagamento, Url3.aspx); var nextUrl = ; for (int i = 1; i < 10; i++) { nextUrl = ABTest.GetUrl(CarrinhoPagamento); OptionsSelected[nextUrl]++; } OptionsSelected[Url1.aspx].Should().Be.EqualTo(3); OptionsSelected[Url2.aspx].Should().Be.EqualTo(3); OptionsSelected[Url3.aspx].Should().Be.EqualTo(3); } [Test] public void GetUrlWithThreeCandidatesThreaded() { var OptionsSelected = new Dictionary<string, int>(); OptionsSelected.Add(Url1.aspx, 0); OptionsSelected.Add(Url2.aspx, 0); OptionsSelected.Add(Url3.aspx, 0); ABTest.RegisterUrl(CarrinhoPagamento, Url1.aspx); ABTest.RegisterUrl(CarrinhoPagamento, Url2.aspx); ABTest.RegisterUrl(CarrinhoPagamento, Url3.aspx); ThreadPool.SetMaxThreads(3, 3); for (int i = 1; i < 10; i++) { ThreadPool.QueueUserWorkItem((object state) => { var nextUrl = ABTest.GetUrl(CarrinhoPagamento); OptionsSelected[nextUrl]++; }); } while (OptionsSelected.Select(x => x.Value).Sum() != 9) { Thread.Sleep(100); } OptionsSelected[Url1.aspx].Should().Be.EqualTo(3); OptionsSelected[Url2.aspx].Should().Be.EqualTo(3); OptionsSelected[Url3.aspx].Should().Be.EqualTo(3); }}public class ABTest{ private volatile static Hashtable NextOption = new Hashtable(); private static Hashtable Options = new Hashtable(); public static void ResetAll() { lock (NextOption) { NextOption = new Hashtable(); } lock (Options) { Options = new Hashtable(); } } public static void RegisterUrl(string key, string url) { if (Options.ContainsKey(key.GetHashCode())) { lock (Options) { ((List<ABTestOption>)Options[key.GetHashCode()]).Add(new ABTestOption() { Url = url, Count = 0 }); } } else { if (!Options.ContainsKey(key.GetHashCode())) { lock (Options) { if (!Options.ContainsKey(key.GetHashCode())) { Options.Add( key.GetHashCode(), new List<ABTestOption>() { new ABTestOption() { Url = url, Count = 0 } }); } } } if (!NextOption.ContainsKey(key.GetHashCode())) { lock (NextOption) { if (!NextOption.ContainsKey(key.GetHashCode())) { NextOption.Add(key.GetHashCode(), url); } } } } } public static string GetUrl(string key) { lock (NextOption) { var nextUrl = (string)NextOption[key.GetHashCode()]; var keyOptions = (List<ABTestOption>)Options[key.GetHashCode()]; var selectedOption = keyOptions.Where(x => x.Url == nextUrl).First(); selectedOption.Count++; NextOption.Remove(key.GetHashCode()); NextOption.Add(key.GetHashCode(), keyOptions.OrderBy(x => x.Count).First().Url); return nextUrl; } } private class ABTestOption { public string Url { get; set; } public int Count { get; set; } }} | Library for doing A/B tests in a web app | c#;multithreading;thread safety | This part is not thread safe: lock (NextOption) { NextOption = new Hashtable(); }The problem is that the object being used to synchronize is re-assigned. That means a subsequent caller will acquire a different lock.It's conceivable that two writer threads hit RegisterUrl and each see a different lock, but end up adding concurrently to the same hash table, which is a harmful race condition. There are also more subtle problems if two threads are both calling ResetAll and a third thread is inserting.You should do something like this:object NextOptionLock = new object();Hashtable NextOption = new Hashtable();lock (NextOptionLock){ NextOption = new HashTable();}I would do this for all other places where you use lock(/* ... */). It's a good practice to keep the lock object separate from the data being protected.This next part looked kind of suspicious to me because its thread safety depends on the implementation of Hashtable: if (Options.ContainsKey(key.GetHashCode())) { lock (Options) { ((List)Options[key.GetHashCode()]).Add(new ABTestOption()The question is whether or not it's safe to call ContainsKey while another thread does Add. According to Microsoft's documentation it is safe:Hashtable is thread safe for use by multiple reader threads and a single writing thread. It is thread safe for multi-thread use when only one of the threads perform write (update) operations, which allows for lock-free reads provided that the writers are serialized to the Hashtable. So while it might be a red flag for someone auditing for concurrency problems, it seems OK.Your use of GetHashCode looks a little weird. Don't MS's classes do this for you when you use an object as a hash table key?Also, why use a non-generic class like Hashtable? I see from MSDN that Dictionary<K,V> does not allow reads to be overlapped with a write in the same way that Hashtable's documentation claims, so maybe that's the reason... Have you looked at ConcurrentDictionary? |
_unix.47222 | I am installing an SSD and would like to put / on the SSD and /home, /var, and /tmp on the HDD. My current distro is Kubuntu but I would not mind trying another distro if this procedure can be accomplished easier there. I have installed many different Linux OSes on multiple partitions, however I know of no installer that lets one mount multiple directories on a single partition. I would rather not use three separate partitions as particularly /home, /var, and /tmp are prone to large changes in size and it is not practical to allot each of them some arbitrary maximum.Note that I am discussing a new install, not moving the current system to the SSD / HD split. | How to mount multiple directories on the same partition? | linux;partition;system installation | There are two approaches you can use. For either approach, you need first mount your hard disk partition wherever (for example, under /hd) and also add it to /etc/fstab, then create home, var, and tmp inside the mount.Use symlinks. Then create symlinks from /home to /hd/home, etc.Instead of symlinks, use bind mounts. Syntax is mount --bind /hd/home /home. You can (should) also put that in fstab, using 'bind' as the fstype. The basic way to get it to install like that is to set up the target filesystem by hand before starting the actual install. I know its easy enough with debian-installer to use the installer to create your partitions, mount, and then switch to a different terminal (say, alt-f2), cd into /target, and create your symlinks (or bind mounts). Then switch back to alt-f1 and continue the install. Ubuntu's (and I assume Kubuntu's) installers are based on debian-installer, so I assume similar is possible. |
_cstheory.32747 | There is a polynomial-time algorithm for computing the number of words of length $n$ in an unambiguous CFG $G = (V, \Sigma, R, S)$ (via a dynamic programming approach). However, for ambiguous CFGs, the algorithm only computes the number of parse trees resulting in strings of length $n$. Therefore, this result is not the number of words of length $n$ in an ambiguous CFG.Is there a result (other than testing all possible strings of length $n$) for inherently ambiguous CFGs? | Counting words of length $n$ in an inherently ambiguous CFG? | grammars;context free;dynamic programming | null |
_webmaster.68692 | If I am building a new site where I want to publish my articles that have previously been published on other sites (that I do not have control over them), what is the best thing to do? | Publishing articles twice | seo;duplicate content | null |
_webapps.6596 | How can I delete images uploaded to TinyGrab? | How can I delete images uploaded to TinyGrab? | delete;screenshot | According to the TinyGrab FAQ:Q. How can I delete a grab from the TinyGrab server?Visit the TinyGrab online control panel at http://tinygrab.com/go/panel to delete an image that you've uploaded to TinyGrab. This is a Premium service only. Only under special / certain circumstances will we respond to support requests to delete images from our service. Free TinyGrab accounts are intended as a trial and not to be used as a replacement to TinyGrab Premium.You can submit a support ticket requesting them to remove a specific screenshot, but they will only do so under special circumstances. I've done this before, and they removed it pretty quickly. |
_unix.256686 | My laptop – which is my main everyday computer – is currently dual-booting Win10 and Ubuntu 14.04 (though I almost never use Windows). I recently tried out a USB boot stick of the live Debian 8.0.2 image, with the Cinnamon desktop GUI. I was very impressed, and am interested in using it. I don't want to get rid of Ubuntu, and unfortunately can't get rid of Windows (occasionally I still have to be able to read/write 100% docx and xlsx compatible files).So would it be more efficient (in terms of RAM and CPU workload) to bootstrap a Debian image within Ubuntu, or actually triple-boot? I have room on the HDD for another partition, but would I need multiple partitions for grub, etc like when I initially set up the Ubuntu dual-boot? I'm well experienced with double-booting machines, but have never tried a triple. | Most efficient for multiple distros on one machine | debian;ubuntu;dual boot;grub | null |
_unix.52659 | I recall that eval dircolors -b used to display the colours that LS_COLORS was using, based on the file types or extensions. It was not simply the colour values that were displayed but the colours themselves. I could see the colour in which a .png or .ogg file would be displayed and change it if needed through a custom file.I find that the output of eval dircolors -b is no more in colour.Can someone kindly explain how I might get it back? Perhaps some environment variable is not getting set. Otherwise, is there a workaround? | How can I list LS_COLORS in colour? | ls;colors | Try this script:( # Run in a subshell so it won't crash current color settings dircolors -b >/dev/null IFS=: for ls_color in ${LS_COLORS[@]}; do # For all colors color=${ls_color##*=} ext=${ls_color%%=*} echo -en \E[${color}m${ext}\E[0m # echo color and extension done echo)Output: |
_webmaster.20282 | This snippit from my apache log reflects some piece of malware casually browsing through my site for some unknown purpose. The most puzzling thing to me is that the GET address doesn't correspond to anything that exists, so how is this in the transfer log and not the error log?Aside from that, what agent is doing this, and why?60.169.78.42 - - [29/Sep/2011:18:49:53 -0500] GET /wp-admin/post-new.php HTTP/1.0 404 301 http://www.boardspace.net/cgi-bin/login.cgi$kx_http_post$cookie=on&language=&password=super123&pname=biffarcarm Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:53 -0500] GET /member/manage_blog.php?tab=add HTTP/1.0 404 302 http://www.boardspace.net/wp-admin/post-new.php Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:54 -0500] GET /profile_blog_new.php HTTP/1.0 404 300 http://www.boardspace.net/member/manage_blog.php?tab=add Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:55 -0500] GET /account/submit/add-blog/ HTTP/1.0 404 304 http://www.boardspace.net/profile_blog_new.php Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:55 -0500] GET /blogs.php?action=new_post HTTP/1.0 404 289 http://www.boardspace.net/account/submit/add-blog/ Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:56 -0500] GET /blogs/my_page/add/ HTTP/1.0 404 298 http://www.boardspace.net/blogs.php?action=new_post Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:56 -0500] GET /blogs.php?action=write HTTP/1.0 404 289 http://www.boardspace.net/blogs/my_page/add/ Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:57 -0500] GET /my_blogs&action=add HTTP/1.0 404 303 http://www.boardspace.net/blogs.php?action=write Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.266.251.84.28 - - [29/Sep/2011:18:49:56 -0500] GET /cgi-bin/login.cgi?pname=DrRaven&language=english HTTP/1.1 200 21878 - Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMNTDF; InfoPath.2)60.169.78.42 - - [29/Sep/2011:18:49:57 -0500] GET /index.php?do=/blog/add/ HTTP/1.0 404 289 http://www.boardspace.net/my_blogs&action=add Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:58 -0500] GET /blog_edit.php HTTP/1.0 404 293 http://www.boardspace.net/index.php?do=/blog/add/ Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.260.169.78.42 - - [29/Sep/2011:18:49:58 -0500] GET /manager/add_entry.php HTTP/1.0 404 301 http://www.boardspace.net/blog_edit.php Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 | Why are nonexistant files listed in the TRANSFER log? | web crawlers;logging | null |
_unix.47458 | I want to delete a folder of portable hard drive on terminal in Mac. Are there any command line ? | access external hard drive to delete folder on terminal Mac | command line;osx;external hdd | You can use rm to remove the folder on your external hard drive.The full Terminal command looks like this rm -r /Volumes/$drivename/$folderReplace $drivename with the name of your external hard drive.Replace $folder with the name of your folder.If you don't know the name of your external hard drive you can look it up withls /Volumes |
_unix.359532 | I'm trying to install backport 4.4.2-1 on my Kali-Rolling vm machine but I'm getting the following error. I have no idea what went wrong but what I did was first install the linux headers using the following command --> #apt-get install linux-headers-$(uname -r)Everything went well.But When i tried to make install, I got the following error.Please help me. Are there any dependencies missing??? make[4]: 'conf' is up to date.boolean symbol HWMON tested for 'm'? test forced to 'n'boolean symbol HWMON tested for 'm'? test forced to 'n'## configuration written to .config#Building backport-include/backport/autoconf.h ... done. CC [M] /root/Downloads/backports-4.4.2-1/compat/main.oIn file included from /root/Downloads/backports-4.4.2-1/backport-include/backport/backport.h:7:0, from <command-line>:0:/usr/src/linux-headers-4.9.0-kali3-common/include/asm-generic/qrwlock.h: In function __qrwlock_write_byte:/root/Downloads/backports-4.4.2-1/backport-include/linux/kconfig.h:25:28: error: implicit declaration of function config_enabled [-Werror=implicit-function-declaration] #define IS_BUILTIN(option) config_enabled(option) ^/usr/src/linux-headers-4.9.0-kali3-common/include/asm-generic/qrwlock.h:156:26: note: in expansion of macro IS_BUILTIN return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN); ^~~~~~~~~~/usr/src/linux-headers-4.9.0-kali3-common/include/asm-generic/qrwlock.h:156:37: error: CONFIG_CPU_BIG_ENDIAN undeclared (first use in this function) return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN); ^/root/Downloads/backports-4.4.2-1/backport-include/linux/kconfig.h:25:43: note: in definition of macro IS_BUILTIN #define IS_BUILTIN(option) config_enabled(option) ^~~~~~/usr/src/linux-headers-4.9.0-kali3-common/include/asm-generic/qrwlock.h:156:37: note: each undeclared identifier is reported only once for each function it appears in return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN); ^/root/Downloads/backports-4.4.2-1/backport-include/linux/kconfig.h:25:43: note: in definition of macro IS_BUILTIN #define IS_BUILTIN(option) config_enabled(option) ^~~~~~cc1: some warnings being treated as errors/usr/src/linux-headers-4.9.0-kali3-common/scripts/Makefile.build:298: recipe for target '/root/Downloads/backports-4.4.2-1/compat/main.o' failedmake[7]: *** [/root/Downloads/backports-4.4.2-1/compat/main.o] Error 1/usr/src/linux-headers-4.9.0-kali3-common/scripts/Makefile.build:549: recipe for target '/root/Downloads/backports-4.4.2-1/compat' failedmake[6]: *** [/root/Downloads/backports-4.4.2-1/compat] Error 2/usr/src/linux-headers-4.9.0-kali3-common/Makefile:1507: recipe for target '_module_/root/Downloads/backports-4.4.2-1' failedmake[5]: *** [_module_/root/Downloads/backports-4.4.2-1] Error 2Makefile:150: recipe for target 'sub-make' failedmake[4]: *** [sub-make] Error 2Makefile:8: recipe for target 'all' failedmake[3]: *** [all] Error 2Makefile.build:6: recipe for target 'modules' failedmake[2]: *** [modules] Error 2Makefile.real:88: recipe for target 'modules' failedmake[1]: *** [modules] Error 2Makefile:40: recipe for target 'install' failedmake: *** [install] Error 2 | Kali Linux (Rolling) backport 4.4 installation problem? | kali linux;backports | null |
_codereview.138856 | I am currently working on a new version of my application, and I just finished rebuilding the log part. It works well but I am skeptical about the choices I made. I am a student and I work alone in some side projects, and I am afraid of developing bad habits.I use java.util.logging, because it is simple enough for what I need to do.The logger will be used by several part of the application (but not at the same time). Each executable will have a logging file. All configurations are stored in Settings class.Main class :public class MyLogger{ public static Logger LOGGER = null; // Logger use by Spider, Cleaner, Checker private static Handler logFile = null; // Logger file /** * Initialize logger before first use * * @param folderName sub-folder after the log folder * @param fileName log file name (without extension) */ public static void initLogger(String folderName, String fileName) { if (MyLogger.LOGGER != null) throw new IllegalStateException(Logger already instantiated); MyLogger.LOGGER = Logger.getLogger(MyLogger.class.getName()); // Check if log is not disable if (Settings.getProp().getLogLevel() != Level.OFF) { String folderPath = Settings.getProp().getLogPath() + folderName; File dir = new File(folderPath); // if the directory does not exist, create it (recursively) if (!dir.exists()) { try { dir.mkdirs(); } catch (SecurityException e) { e.printStackTrace(); // Throw RunTime instead ? } } try { // Open log file MyLogger.logFile = new FileHandler(folderPath + File.separator + fileName + .log, true); MyLogger.logFile.setEncoding(UTF-8); // Attache file to logger MyLogger.LOGGER.addHandler(MyLogger.logFile); } catch (SecurityException | IOException e) { e.printStackTrace(); // Throw RunTime instead ? } } LOGGER.setLevel(Settings.getProp().getLogLevel()); // Console Text formatter LOGGER.setUseParentHandlers(false); MyConsoleHandler handler = new MyConsoleHandler(); handler.setLevel(Settings.getProp().getConsoleLevel()); handler.setFormatter(new MyFormatter()); LOGGER.addHandler(handler); LOGGER.config(Logger Started : + folderName + - + fileName); LOGGER.config(Settings.getProp().toString()); // Print application settings }}Formatter :public class MyFormatter extends Formatter{ // Create a DateFormat to format the logger timestamp. private static final DateFormat df = new SimpleDateFormat(H:mm:ss.SSS); /* (non-Javadoc) * @see java.util.logging.Formatter#format(java.util.logging.LogRecord) */ @Override public String format(LogRecord record) { StringBuilder builder = new StringBuilder(1000); builder.append(df.format(new Date(record.getMillis()))).append( ); // time builder.append(().append(record.getThreadID()).append() ); // Thread ID builder.append([).append(record.getLevel()).append(] ); // level builder.append(formatMessage(record)); // message builder.append(\n); return builder.toString(); }}Console Handler :public class MyConsoleHandler extends ConsoleHandler{ /* (non-Javadoc) * @see java.util.logging.Handler#publish(java.util.logging.LogRecord) */ @Override public void publish(LogRecord record) { try { if(record.getLevel().intValue() >= this.getLevel().intValue()) { String message = getFormatter().format(record); if (record.getLevel().intValue() >= Level.WARNING.intValue()) { System.err.write(message.getBytes()); } else { System.out.write(message.getBytes()); } } } catch (Exception exception) { reportError(null, exception, ErrorManager.FORMAT_FAILURE); return; } }}Example of use :public static void main(String[] args){ MyLogger.initLogger(test, 001); MyLogger.LOGGER.info(Testing message); MyLogger.LOGGER.warning(A warning message !);}Example of output (console) :22:43:26.918 (1) [CONFIG] Logger Started : test - 00122:43:26.921 (1) [CONFIG] Appication Settings : - [...] - Console Level : CONFIG - Log Level : FINER - Log Path : ./log/ - [...] - OutputQueueLimit : 1000 items22:43:26.922 (1) [INFO] Testing message22:43:26.922 (1) [WARNING] A warning message ! | Wrapper for Java logging | java;logging | 1) Braces stylePutting opening curly braces on new lines is fine, though the big majority of Java developers don't do it and it wastes space.Not putting curly braces around single line statements is dangerous and many people will discourage you from doing it!:https://stackoverflow.com/questions/8020228/is-it-ok-if-i-omit-curly-braces-in-javahttps://softwareengineering.stackexchange.com/questions/16528/single-statement-if-block-braces-or-no/16530 2) capitalized field namesCapitalized field names (LOGGER) are classically reserved for final constants, your LOGGER is not final. Meanwhile your DateFormat df should be capitalized.3) Properly build pathsUse the File constructor to combine path parts:https://stackoverflow.com/questions/412380/combine-paths-in-javaI guess making your ultimate File like File file = new File(folderName, fileName + .log) and then calling file.getParentFile().mkdirs(); would be the most pleasant invocation.Pass the path of the file to your FileHandler constructor.4) Throw RuntimeException on SecurityExceptionYour logger won't work. Something is wrong and your program shouldn't just silently ignore it! I'd prefer it to crash at that point: Fix the logging or I won't work! 5) SimpleDateFormat is not threadsafe!Evil JRE developers want to secretly and unexpectedly mess up your date formatting:https://stackoverflow.com/questions/6840803/simpledateformat-thread-safetyYou should at least be aware of that if you plan to use multithreading.6) Append cascadeYour append cascade is somewhat painful to read: It's hard for me to see exactly what you are combining.I'd put every .append() on a new line.7) Append cascadeIs there any advantage to calling write(message.getBytes()); instead of println/print (message)? I have never seen that before. 8) Seperate loggers for different classes, no direct accessNormally you create one logger for each class it's used in and put it in a static final field. That way it's easier to control logging and find out where the log entry comes from.9) HH instead of HYou might want to use HH instead of H in your SimpleDateFormat, so your log entries are better aligned and more consistent. |
_webmaster.30100 | I've received a message from SpamCop.net that email from 1 of my websites has been marked as spam. The report was sent to my hosting provider who then forwarded it to me.I'n now investigating what happened exactly.I've never been in touch with SpamCop, what exactly does it do?Did 1 user report email I sopposedly sent as spam? Or is there a minimum treshhold before SpamCop sends the report?Are there immedicate consequences when receiving such a report? | Spamcop message received, what now? | bulk email | The main purpose of SpamCop.Net is to report spam sources, not web sites. If it reported your website and not an e-mail you sent, then that was merely as a convenience to you and your ISP to let you know that someone is Spamvertizing (also see this page) your web site. If the unlikely event that you have any control over whoever used your web site in their e-mail, insist that she or he stop using it. If not, there is nothing you can (or need to) do unless your ISP is threatening to punish you, in which case you can refer them to this Answer and the SpamCop Forum web site in general.In answer to your question, But I was wondering if this report is immediately sent after just 1 complaint to SpamCop or if there's a treshhold within SpamCop after which SpamCop sends this report to my hosting provider? the answer is that each report from a SpamCop user is sent to the hosting provider, at the option of the SpamCop user (by default, it is sent). It is not really a complaint but, rather, a heads-up that your web site is being referenced in spam.Note from originator, Steve T: thanks to Su for improving my original post. |
_unix.81530 | Got debian installed finally but during install I think I choose to use the local media over the internet for packages. needed to do that as was not sure if will try to connect right away (my modem needs a login).Now I want to install/ enable sudo and install other programs like thunderbird eclipse, and chrome. but I do not have sudowhen I type the command,aptitude install sudoGet this message:The following NEW packages will be installed: sudo 0 packages upgraded, 1 newly installed, 0 to remove and 0 notupgraded. Need to get 0 B/842 kB of archives. After unpacking 1,882 kBwill be used. Media change: Please insert the disc labeled 'DebianGNU/Linux 7.1.0 _Wheezy_ - Official amd64 CD Binary-1 20130615-23:06'into the drive '/media/cdrom/' and press [Enter].Do not have a cdrom. What to do? Have root user password though logged in as another user. | use net instread of cdrom. linux, aptitude install | sudo;system installation | If you have access to root, you can edit the /etc/apt/sources.list file as described on the debian site hereuse suenter root passwordthen with an editor (nano is easy to use as it has menus if you have it installed otherwise vi is most likely to be present, see a manual for vi here )nano /etc/apt/sources.listor vi /etc/apt/sources.list |
_softwareengineering.240444 | I'm developing a website (using Django) which will depend on an API for it's main functionality which is create/update/delete objects.But the API also provides:User sign up and loginUser relations to their objectsUser groups and permissionsThis is great but I'm conserned about fully depending on the API for everything even user sign up and login, so I have 2 choices:Using the API for everything:Advantages:User authentication, objects, relations, permissions are already managedMy job is only to query the API and display the resultsDisadvantages:Lots of HTTP requests to the APIThe website will break if the API goes downThe website rendering time will be slower (will use ajax)Using the API when needed:Advantages:Fewer HTTP requestsThe website will be a little fasterIf the API goes down, not all the functions of the website will stopDisadvantages:I'll have to manage user authentication, permissions and relations to his objects in the APIDuplicate user data in my database and the API (I'll have a copy of the user objects)Worried about the data sync (I'll update the database only on create/update/delete requests to the API)Which one is a better choice and why ? what design is usually used for such cases ? | Website as an API client vs using the API only when needed? | design;web development;web applications;api;scalability | This is an interesting question, and not very easy to answer. It depends on what website you are developing, in order to justify the effort in any direction. Here is my opinion for some of your points:Using the API for everything: DisadvantagesLots of HTTP requests to the API This depends on whether you are developing a standard website or one that should be accessible extensively via mobile devices. It makes sense to use smaller in portions, but more frequent communication in case when the client is a mobile device. Mobile devices have limited capabilities in terms of hardware and could also have slower internet connection. A large server response may take significant time to be processed on the device itself.The website will break if the API goes downWell, that is a problem of the website, not the API. The website must be developed with caution to the case when the API is not available. This could be a real-life scenario, as if the API is being updated and redeployed, the site would experience a downtime and not work. Regardless of whether entirely depending on the API or not, you must provide means for graceful error handling - show a suitable error page, or limit the functionality accordingly.The website rendering time will be slower (will use ajax)I disagree. AJAX actually causes the site to load faster. Each AJAX request, if made asynchronous, would render a separate portion of the site. If you were to wait for all portions to load at once, the cumulative loading time would hardly differ. Besides, almost everyone uses AJAX today, and this has proven to not be an issue, if properly done. Additionally, AJAX is extensively used to bring some processing to the client, not the server - this is usually employed as a technique to reduce server load.Using the API when needed: DisadvantagesI'll have to manage user authentication, permissions and relations to his objects in the APIIf that is a concern to you, I mean if you need to do the user authentication and security by youtself, then you should not rely on the API entirely. It is often the case that people develop their own system and maintain both the local and the remote (the API's) accounts at once - the concept of linked accounts. It is popular approach for system with already existing users to provide means for linking the user account with, for instance, the user's facebook account. If you want have your own user account information, but are reluctant to manage the authentication and authorization yourself, consider using OpenID if the API supports it, or consider implementing one, if you are developing the API too.Duplicate user data in my database and the API (I'll have a copy of the user objects)As per the above point, it is inevitable to have some data repeated. If the user object to your project means something different to you than the user from the API, then you must manage this information yourself. Worried about the data sync (I'll update the database only on create/update/delete requests to the API)Maintaining two distinct systems in sync is something that has always been a serious development effort. If you can avoid this, and rely entirely on the API, you'd probably prefer the API approach. If the API does not provide you with all the capabilities for your own project, then you could still have to face this maintenance task. In addition to your concerns, you need to consider some additional ones:Does the API guarantee backwards compatibility if they introduce changes?Is the API documented well enough so you can cover the aspects of your project with it entirely?How does one introduce fixes/improvements to the API? There are serious differences between maintaining and improving OpenSource projects/APIs and commercial ones. Also, any project usually has its own policy to when is appropriate to apply certain fixes and changes (most open source projects will do this faster than a commercial product, that is not to say fast enough for you). This is something to consider if you discover a problem with the API that breaks your code. If this is not seen to be fixed by the API's team, you would be on your own for working this around.It is sometimes acceptable to not directly use the external API, but create a wrapping API that is what you would directly use for your project. Then delegate the API availability and compatibility problems to your API wrapper. That way, changes will not directly affect the website, or any other application that consumes it. You will also have the freedom to expose the external API in a form that is more suitable. For example, you may expose a single method that results in multiple calls to the external API - so you will reduce the number of requests the website does.As a bottom line, the real concerns you may have is the data duplication and the site availability in case of API crash or incompatible changes. I would decide based on the estimated development effort with either approach, and the likeability of problems with that approach. |
_cs.67042 | We have the word w = aabceefgeebdaabbceeffghdcbbeefbbbbghhie . I have created a Huffman tree for the string w. We get the following table: Now I want to create a Huffman tree for a Block-Code with length of block $4$. Do we maybe take each consecutive 4 letters, i.e., {aabc, eefg, eebd, aabb, ceef, fghd, cbbe, efbb, bbgh, hie} to make the tree? But then the last one is of length 3 and not 4. So, do we choose in an other way the blocks? | What is a Huffman tree for a Block-Code? | data structures;trees;binary trees;huffman coding | null |
_unix.115443 | I usually use kate as my preferred text editor, however, whenever I open it there are no window borders, and thus no maximize, minimize, and close buttons. How can I fix this?I tried going thought the view menu, but was unable to find any settings to alter this.Moreover, it seems to be stuck in always on top mode and thus I cannot view my panel to use right click to close or modify it.Pressing F11 produces no change.There is a scroll bar, but it doesn't appear to be controlled by the window manager (kwin) as it is much wider than normal and has a mini preview of the entire document as its background. I'd prefer not to override this feature if possible.I have an almost default install of Linux Mint KDE 16 x64.Related: How do I exit full screen after enabling via the right click context menu of kwin (KDE)? | Kate has no window borders, and thus no minimize, maximize, and close buttons | window manager;kate;window decorations | This will happen if you:right click on the kate window border or it's entry in the task manager barselect 'more actions' -> 'Full screen'orpress ctrl-shift-FI originally solved this by:removing katerc from ~/.kde/share/configThis is best undone by:pressing ctrl-shift-F |
_softwareengineering.152848 | It happens that some one just leaves the company all of sudden. Now his work needs to be completed and you are being assigned it. Having no idea what was he up to (was it 90% done or 9%), how do you manage the leftover?Shall I start from scratch? What if it was 90% done? Shall I try and understand whatever he has done? What if it was just nonsense? | How do you manage projects left over by other employees? | project management;efficiency | null |
_softwareengineering.205835 | Despite being very stakeholders friendly, ATDD aimed to provide a stop line when a feature has just been done. This avoids wasting time to add non-focused (and sometimes useless) code.That's why some teams start by establishing a walking skeleton of the application, and directly specifying with an acceptance test the first required feature. Let's suppose this first acceptance test (not representing a relevant first acceptance test, just being an example):Given Michael has just been created in the application,his status should be left to non-activated.I want to write my acceptances tests focusing on business logic directly (use-cases), not dealing with GUI for business rules.Thus my question would be...how to write it? since I don't even already know what is a User, what is a status etc...Indeed, shouldn't it be the role of TDD to emerge the design and therefore these components?But if I firstly practice TDD in order to emerge them, the benefit of ATDD (as a stop line) would disappear.I imagine that it would be more consistent to write some acceptance tests (before entering TDD cycle) when the project has well progressed, since all main components would already be designed.To sum up, should I always write my acceptances test BEFORE my TDD cycle? | How to practice ATDD if design is not yet emerged from TDD? | design;agile;testing;tdd;acceptance testing | Acceptance tests access the application through a special purpose API. You presented this use case:Given Michael has just been created in the application, his status should be left to non-activated.The API implied from this use case is something like:CreateUser(String name);enum UserStatus {non-activated};UserStatus GetUserStatus(String name);So far this has nothing to do with TDD. It's just a simple API that your acceptance tests can use to access the application.Now, to make this acceptance test pass, you'll have to implement this API. That's when you start doing TDD. The decisions you make while test-driving the solution will help you determine the design of the application. Note that the design of the application has nothing to do with the design of the API that's used by your acceptance tests. That API is an adapter layer between those tests and your application. That layer allows your application to assume any design you so desire.Regarding TDD and design. It is true that design emerges from TDD. But TDD is not the sole process by which you design your application. You also think through the design in many other ways. You might draw some UML diagrams. You might use CRC cards. You might have a design session with your co-workers. Indeed, you should likely do ALL of these things. And you should also allow designs to emerge with TDD. TDD doesn't replace previous design tools, it adds a new tool to the kit. Some folks will likely complain that this sounds like BDUF, and doesn't sound very Agile. The problem with that is the letter 'B'. It's entirely true that we don't want to do BIG design up front. But it's not true at all that we don't want to do some design up front. We do! A few hours, or even days of design up front is not bad. Months and months of it is. |
_unix.220223 | I like the default Adwaita background that came with cinnamon installed on Arch linux. For some reason however this background is very dark, much darker than it appears in the settings for choosing background:As I like to be able to view my backgound through my transparent terminal and just generally prefer a slightly happier look is there a way to fix this or a reason this is happening? I know I could always edit the image with something like GIMP increasing the brightness to counteract this effect, but I'm wondering if there is a more elegant solution. | Adwaita Background Appears Much Darker in Reality | linux;arch linux;cinnamon;wallpaper | null |
_cogsci.8215 | Literature suggests that analogies are helpful in teaching and learning science concepts. But every concept can not and should not be taught using analogies as there are some concepts that can be better taught using examples and learning by doing .My question is for what kind of concepts do we really need an analogy ?Answers may include a reference to a piece of literature / some insights from teaching experience /blogs etc. | For what kind of concepts do we really need an analogy? | learning;educational psychology | null |
_codereview.100594 | This entire class came out of a chat discussion, and I'm curious on how it looks. (This is like literally 30 minutes of development time.)The idea is to allow very easy, quick implementations of Google's reCAPTCHA (I am not a robot) checkbox CAPTCHA algorithm. It only requires minor work from the implementor to make it work properly, and that was the point.Warning: Minimal implementation effort required.public class ReCaptchaValidator{ private const string _HeadScriptInclude = <script src='https://www.google.com/recaptcha/api.js'></script>; private const string _ReCaptchaLocationInclude = <div class=\g-recaptcha %EXTRACLASSES%\ data-sitekey=\%SITEKEY%\></div>; private readonly string _ReCaptchaSecret; private readonly string _ReCaptchaSiteKey; /// <summary> /// Returns the script to be included in the <code><head></code> of the page. /// </summary> public string HeadScriptInclude { get { return _HeadScriptInclude; } } /// <summary> /// Use this to get or set any extra classes that should be added to the <code><div></code> that is created by the <see cref=ReCaptchaLocationInclude/>. /// </summary> public List<string> ExtraClasses { get; set; } /// <summary> /// Returns the <code><div></code> that should be inserted in the HTML where the reCAPTCHA should go. /// </summary> /// <remarks> /// I'm still not sure if this should be a method or not. /// </remarks> public string ReCaptchaLocationInclude { get { return _ReCaptchaLocationInclude.Replace(%SITEKEY%, _ReCaptchaSiteKey).Replace(%EXTRACLASSES%, string.Join( , ExtraClasses)); } } /// <summary> /// Creates a new instance of the <see cref=ReCaptchaValidator/>. /// </summary> /// <param name=reCaptchaSecret>The reCAPTCHA secret.</param> /// <param name=reCaptchaSiteKey>The reCAPTCHA site key.</param> public ReCaptchaValidator(string reCaptchaSecret, string reCaptchaSiteKey) { _ReCaptchaSecret = reCaptchaSecret; _ReCaptchaSiteKey = reCaptchaSiteKey; } /// <summary> /// Determines if the reCAPTCHA response in a <code>NameValueCollection</code> passed validation. /// </summary> /// <param name=form>The <code>Request.Form</code> to validate.</param> /// <returns>A boolean value indicating success.</returns> public bool Validate(NameValueCollection form) { string reCaptchaSecret = _ReCaptchaSecret; string reCaptchaResponse = form[g-recaptcha-response]; bool passedReCaptcha = false; using (WebClient client = new WebClient()) { byte[] response = client.UploadValues(https://www.google.com/recaptcha/api/siteverify, new NameValueCollection() { { secret, reCaptchaSecret }, { response, reCaptchaResponse } }); string reCaptchaResult = System.Text.Encoding.UTF8.GetString(response); if (reCaptchaResult.IndexOf(\success\: true) > 0) passedReCaptcha = true; } return passedReCaptcha; }}Usage:string reCaptchaSecret = ;string reCaptchaSiteKey = ;ReCaptchaValidator rcv = new ReCaptchaValidator(reCaptchaSecret, reCaptchaSiteKey);bool passedReCaptcha = rcv.Validate(Request.Form);It should be pretty self-explanatory. You can use the ReCaptchaValidator.HeadScriptInclude to get the entire <script> tag for the head, and ReCaptchaValidator.ReCaptchaLocationInclude to get the <div> element for placement in the body. These aren't demonstrated here, but are easy to implement. | Google reCAPTCHA Validator | c#;object oriented;polymorphism;captcha | Some quick shots at the code by retrieving the ReCaptchaLocationInclude property an ArgumentNullException is thrown, because you didn't initialize the ExtraClasses property. I would also like to suggest changing this property from autoimplemented to a normal one, so you can validate any set value. bool passedReCaptcha = false; is not really a good name here. so I wouldn't use this variable at all. Instead I would replace this if (reCaptchaResult.IndexOf(\success\: true) > 0) passedReCaptcha = true; with return (reCaptchaResult.IndexOf(\success\: true) > 0);and for the IDE's love add a return false; at the end of the method. If you don't want to do this, it is fine too, but you should replace the former if with passedReCaptcha = (reCaptchaResult.IndexOf(\success\: true) > 0); |
_unix.114908 | I want to convert all *.flac to *.mp3 in the specific folder.This is what I've tried, but not works:# change to the home directorycd ~/music# convert all *.flac filesffmpeg -i *.flac -acodec libmp3lame *.mp3# (optional: check whether there are any errors printed on the terminal)sleep 60How to get my goal? | Bash script to convert all *flac to *.mp3 with FFmpeg? | bash;shell script;ffmpeg | Try this: for i in *.flac ; do ffmpeg -i $i -acodec libmp3lame $(basename ${i/.flac}).mp3 sleep 60 done |
_webapps.108521 | Is there a way to attach my images directly from my google drive inline when composing a message on gmail?When I try to attach an inline image from a sharable link on my google drive, it doesn't show inline when attached, but an image is visible on the preview of image when using the Add Photo > Web URL functionality of gmail.Getting Sharable Link from Google DriveUsing Insert Photo from GMailPhoto is Not Properly AttachedThe reason why we need to use the photo from Google Drive is that we need the versioning of the file, and when we update the image, all the emails that has been sent with the image should be updated as well. We use it for our company wide announcements and resending an updated image would just spam everyone. Is there a way for this to be possible? | Inline image attachment on GMail from GDrive | email;gmail;google drive | null |
_softwareengineering.33228 | So, I'm seriously considering axing ASP.NET AJAX from my future projects as I honestly feel it's too bloated, and at times convoluted. I'm also starting to feel it is a dying library in the .NET framework as I hardly see any quality components from the open-source community. All the kick-ass components are usually equally bloated commercial components... It was cool at first, but now I tend to get annoyed with it more than anything else.I'm planning on switching over to the jQuery library as just about everything in ASP.NET AJAX is often easily achievable with jQuery, and, more often than not, more graceful of a solution that ASP.NET AJAX and it has a much stronger open-source community.Perhaps, it's just me, but do you feel the same way about ASP.NET AJAX? How was/is your experience working with ASP.NET AJAX? | ASP.NET AJAX and my axe! | asp.net;jquery;ajax | I gotta assume you're talking about WebForms. When first announced, I thought this framework sounded sorta cool... Then I actually got to use it, and immediately hated it. The superficial resemblance to WinForms provides a leaky abstraction that resembles - but utterly fails to match - traditional Windows desktop APIs, while adding endless pitfalls, mountains of tedious boilerplate, and very little else. Early promises of painless cross-browser/cross-device development quickly amounted to nothing, and viewstate brain-damage made it all too easy for new developers to create insanely large, slow pages. The underlying framework isn't all that bad though. |
_unix.374198 | I try to move a VirtualBox VM to a docker image. We use the VirtualBox to crosscompile source code for a armhf device (something based on a BeagleBone)I got problems at RUN apt-get install -y build-essential:armhfThe complete Docker code looks like this:FROM debian:jessieRUN apt-get updateRUN apt-get upgradeRUN apt-get install -y build-essential module-assistant curl git cmakeRUN curl http://emdebian.org/tools/debian/emdebian-toolchain-archive.key | apt-key add -RUN echo deb http://emdebian.org/tools/debian jessie main > /etc/apt/sources.list.d/crosstools.listRUN dpkg --add-architecture armhfRUN apt-get updateRUN apt-get install -y crossbuild-essential-armhfRUN apt-get install -y curl:armhfRUN apt-get install -y libcurl4-openssl-dev:armhf openssl:armhf RUN apt-get install -y build-essential:armhfRUN apt-get install -y libssl-dev:armhfI get the following error when running docker build:Step 12/13 : RUN apt-get install -y build-essential:armhf ---> Running in ca5a82d30cc7Reading package lists...Building dependency tree...Reading state information...Some packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: build-essential:armhf : Depends: gcc:armhf (>= 4:4.9.1) but it is not going to be installed Depends: g++:armhf (>= 4:4.9.1) but it is not going to be installed Depends: make:armhfE: Unable to correct problems, you have held broken packages.The command '/bin/sh -c apt-get install -y build-essential:armhf' returned a non-zero code: 100If i do look at the bash history on the VM it looks the same. What is the problem with my Docker code? EDIT: Output with -o Debug::pkgProblemResolver=yes: Starting pkgProblemResolver with broken count: 2Starting 2 pkgProblemResolver with broken count: 2Investigating (0) dpkg-dev [ amd64 ] < 1.17.27 > ( utils )Broken dpkg-dev:amd64 Depends on make [ amd64 ] < 4.0-8.1 > ( devel ) Considering make:amd64 1 as a solution to dpkg-dev:amd64 2 Added make:amd64 to the remove listBroken dpkg-dev:amd64 Depends on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 1 as a solution to dpkg-dev:amd64 2 Added binutils:amd64 to the remove list Fixing dpkg-dev:amd64 via keep of make:amd64 Fixing dpkg-dev:amd64 via keep of binutils:amd64Investigating (0) binutils [ armhf ] < none -> 2.25-5+deb8u1 > ( devel )Broken binutils:armhf Conflicts on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 1 as a solution to binutils:armhf 2 Added binutils:amd64 to the remove list Fixing binutils:armhf via remove of binutils:amd64Investigating (0) make [ amd64 ] < 4.0-8.1 > ( devel )Broken make:amd64 Conflicts on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to make:amd64 1 Added make:armhf to the remove list Fixing make:amd64 via keep of make:armhfInvestigating (0) binutils-arm-linux-gnueabihf [ amd64 ] < 2.25-5 > ( devel )Broken binutils-arm-linux-gnueabihf:amd64 Depends on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 1 as a solution to binutils-arm-linux-gnueabihf:amd64 1 Removing binutils-arm-linux-gnueabihf:amd64 rather than change binutils:amd64Investigating (1) build-essential [ armhf ] < none -> 11.7 > ( devel )Broken build-essential:armhf Depends on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to build-essential:armhf 9999 Re-Instated make:armhfInvestigating (1) dpkg-dev [ amd64 ] < 1.17.27 > ( utils )Broken dpkg-dev:amd64 Depends on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 1 as a solution to dpkg-dev:amd64 2 Added binutils:amd64 to the remove list Fixing dpkg-dev:amd64 via keep of binutils:amd64Investigating (1) gcc-4.9-arm-linux-gnueabihf [ amd64 ] < 4.9.2-10 > ( devel )Broken gcc-4.9-arm-linux-gnueabihf:amd64 Depends on binutils-arm-linux-gnueabihf [ amd64 ] < 2.25-5 > ( devel ) (>= 2.25) Considering binutils-arm-linux-gnueabihf:amd64 1 as a solution to gcc-4.9-arm-linux-gnueabihf:amd64 2 Added binutils-arm-linux-gnueabihf:amd64 to the remove list Fixing gcc-4.9-arm-linux-gnueabihf:amd64 via keep of binutils-arm-linux-gnueabihf:amd64Investigating (1) binutils [ armhf ] < none -> 2.25-5+deb8u1 > ( devel )Broken binutils:armhf Conflicts on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 1 as a solution to binutils:armhf 2 Added binutils:amd64 to the remove list Fixing binutils:armhf via remove of binutils:amd64Investigating (1) make [ amd64 ] < 4.0-8.1 > ( devel )Broken make:amd64 Conflicts on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to make:amd64 1 Added make:armhf to the remove list Fixing make:amd64 via keep of make:armhfInvestigating (1) binutils-arm-linux-gnueabihf [ amd64 ] < 2.25-5 > ( devel )Broken binutils-arm-linux-gnueabihf:amd64 Depends on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 1 as a solution to binutils-arm-linux-gnueabihf:amd64 1 Removing binutils-arm-linux-gnueabihf:amd64 rather than change binutils:amd64Investigating (2) build-essential [ armhf ] < none -> 11.7 > ( devel )Broken build-essential:armhf Depends on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to build-essential:armhf 9999 Considering make-guile:armhf -1 as a solution to build-essential:armhf 9999 Re-Instated libgc1c2:armhf Re-Instated libltdl7:armhf Re-Instated libtinfo5:armhf Re-Instated libncurses5:armhf Re-Instated libreadline6:armhf Re-Instated libunistring0:armhf Re-Instated guile-2.0-libs:armhf Re-Instated make-guile:armhfInvestigating (2) dpkg-dev [ amd64 ] < 1.17.27 > ( utils )Broken dpkg-dev:amd64 Depends on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 1 as a solution to dpkg-dev:amd64 2 Added binutils:amd64 to the remove list Fixing dpkg-dev:amd64 via keep of binutils:amd64Investigating (2) gcc-4.9-arm-linux-gnueabihf [ amd64 ] < 4.9.2-10 > ( devel )Broken gcc-4.9-arm-linux-gnueabihf:amd64 Depends on binutils-arm-linux-gnueabihf [ amd64 ] < 2.25-5 > ( devel ) (>= 2.25) Considering binutils-arm-linux-gnueabihf:amd64 1 as a solution to gcc-4.9-arm-linux-gnueabihf:amd64 2 Added binutils-arm-linux-gnueabihf:amd64 to the remove list Fixing gcc-4.9-arm-linux-gnueabihf:amd64 via keep of binutils-arm-linux-gnueabihf:amd64Investigating (2) binutils [ armhf ] < none -> 2.25-5+deb8u1 > ( devel )Broken binutils:armhf Conflicts on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 2 as a solution to binutils:armhf 2 Holding Back binutils:armhf rather than change binutils:amd64Investigating (2) make [ amd64 ] < 4.0-8.1 > ( devel )Broken make:amd64 Conflicts on make-guile [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make-guile:armhf -1 as a solution to make:amd64 1 Added make-guile:armhf to the remove list Fixing make:amd64 via keep of make-guile:armhfInvestigating (3) build-essential [ armhf ] < none -> 11.7 > ( devel )Broken build-essential:armhf Depends on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to build-essential:armhf 9999 Considering make-guile:armhf 1 as a solution to build-essential:armhf 9999Investigating (3) gcc-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel )Broken gcc-4.9:armhf Depends on binutils [ armhf ] < none -> 2.25-5+deb8u1 > ( devel ) (>= 2.25) Considering binutils:armhf 2 as a solution to gcc-4.9:armhf 3 Holding Back gcc-4.9:armhf rather than change binutils:armhfInvestigating (3) gcc [ armhf ] < none -> 4:4.9.2-2 > ( devel )Broken gcc:armhf Depends on gcc-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel ) (>= 4.9.2-1~) Considering gcc-4.9:armhf 3 as a solution to gcc:armhf 1 Holding Back gcc:armhf rather than change gcc-4.9:armhfInvestigating (3) g++ [ armhf ] < none -> 4:4.9.2-2 > ( devel )Broken g++:armhf Depends on gcc [ armhf ] < none -> 4:4.9.2-2 > ( devel ) (>= 4:4.9.2-2) Considering gcc:armhf 1 as a solution to g++:armhf 0 Holding Back g++:armhf rather than change gcc:armhfInvestigating (3) g++-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel )Broken g++-4.9:armhf Depends on gcc-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel ) (= 4.9.2-10) Considering gcc-4.9:armhf 3 as a solution to g++-4.9:armhf 0 Holding Back g++-4.9:armhf rather than change gcc-4.9:armhfInvestigating (4) build-essential [ armhf ] < none -> 11.7 > ( devel )Broken build-essential:armhf Depends on gcc [ armhf ] < none -> 4:4.9.2-2 > ( devel ) (>= 4:4.9.1) Considering gcc:armhf 1 as a solution to build-essential:armhf 9999 Re-Instated binutils:armhf Re-Instated gcc-4.9:armhf Re-Instated gcc:armhfBroken build-essential:armhf Depends on g++ [ armhf ] < none -> 4:4.9.2-2 > ( devel ) (>= 4:4.9.1) Considering g++:armhf 0 as a solution to build-essential:armhf 9999 Re-Instated g++-4.9:armhf Re-Instated g++:armhfBroken build-essential:armhf Depends on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to build-essential:armhf 9999 Considering make-guile:armhf 1 as a solution to build-essential:armhf 9999Investigating (4) binutils [ armhf ] < none -> 2.25-5+deb8u1 > ( devel )Broken binutils:armhf Conflicts on binutils [ amd64 ] < 2.25-5+deb8u1 > ( devel ) Considering binutils:amd64 2 as a solution to binutils:armhf 2 Holding Back binutils:armhf rather than change binutils:amd64Investigating (5) build-essential [ armhf ] < none -> 11.7 > ( devel )Broken build-essential:armhf Depends on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to build-essential:armhf 9999 Considering make-guile:armhf 1 as a solution to build-essential:armhf 9999Investigating (5) gcc-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel )Broken gcc-4.9:armhf Depends on binutils [ armhf ] < none -> 2.25-5+deb8u1 > ( devel ) (>= 2.25) Considering binutils:armhf 2 as a solution to gcc-4.9:armhf 3 Holding Back gcc-4.9:armhf rather than change binutils:armhfInvestigating (5) gcc [ armhf ] < none -> 4:4.9.2-2 > ( devel )Broken gcc:armhf Depends on gcc-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel ) (>= 4.9.2-1~) Considering gcc-4.9:armhf 3 as a solution to gcc:armhf 1 Holding Back gcc:armhf rather than change gcc-4.9:armhfInvestigating (5) g++ [ armhf ] < none -> 4:4.9.2-2 > ( devel )Broken g++:armhf Depends on gcc [ armhf ] < none -> 4:4.9.2-2 > ( devel ) (>= 4:4.9.2-2) Considering gcc:armhf 1 as a solution to g++:armhf 0 Holding Back g++:armhf rather than change gcc:armhfInvestigating (5) g++-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel )Broken g++-4.9:armhf Depends on gcc-4.9 [ armhf ] < none -> 4.9.2-10 > ( devel ) (= 4.9.2-10) Considering gcc-4.9:armhf 3 as a solution to g++-4.9:armhf 0 Holding Back g++-4.9:armhf rather than change gcc-4.9:armhfInvestigating (6) build-essential [ armhf ] < none -> 11.7 > ( devel )Broken build-essential:armhf Depends on gcc [ armhf ] < none -> 4:4.9.2-2 > ( devel ) (>= 4:4.9.1) Considering gcc:armhf 1 as a solution to build-essential:armhf 9999Broken build-essential:armhf Depends on g++ [ armhf ] < none -> 4:4.9.2-2 > ( devel ) (>= 4:4.9.1) Considering g++:armhf 0 as a solution to build-essential:armhf 9999Broken build-essential:armhf Depends on make [ armhf ] < none -> 4.0-8.1 > ( devel ) Considering make:armhf 0 as a solution to build-essential:armhf 9999 Considering make-guile:armhf 1 as a solution to build-essential:armhf 9999Done | Docker CrossCompile Debian build-essential:armhf unmet dependencies | debian;docker;cross compilation | This is the line causing problems:RUN apt-get install -y build-essential:armhfYou dont need build-essential:armhf to cross-compile. You should remove that; docker build should then be able to build a container without issue. |
_unix.213539 | I have installed CentOS 7 DVD ISO on my MSI GE70 Laptop that has geforce gtx 765m, when the GUI comes after 5 seconds screen freeze.Is this an nvidia problem? How can I solve it?I cannot disable my geforce video card from BIOS. So how can I use my default video or there is solution for that? Even if I use my default video card there will be a problem about watching videos on youtube, etc., because of lag issue. | Centos 7 GUI Freeze problem | centos;freeze;intel graphics;graphic card | null |
_codereview.155962 | GoalWrite an R function to generate probabilities modeled by the following equation (IRT; 2 Parameter Logistic Model):Dataset.seed(1)a = runif(10, 1.5, 3)b = rnorm(10, 0, 1)theta = rnorm(10000000)# the output of the implementation should result in # a matrix with 100 mil. rows and 10 columnsImplementationStrategy A# the implementation of the equationcomputeProbability = function(theta, b, a) { x = exp(1) ^ (a * (theta - b)) return(x / (1 + x))}strategy.A = function(theta, b, a) { n.rows = length(theta) n.cols = length(b) prob_mtx = matrix(nrow = n.rows, ncol = n.cols) for(i in 1:n.rows) { prob_mtx[i, ] = computeProbability(theta[i], b, a) } return(prob_mtx)}Strategy Bstrategy.B = function(theta, b, a) { return(t(sapply(theta, computeProbability, b = b, a = a)))}Strategy Cstrategy.C = function(theta, b, a) { return(1 / (1 + exp(-sweep(outer(theta, b, -), 2, a, *))))}Timings # Strategy A | # Strategy B | # Strategy C | | user system elapsed | user system elapsed | user system elapsed64.76 0.27 65.08 | 82.01 0.91 82.93 | 7.81 0.64 8.46Question: Strategy C is by far the most efficient way, but how can I make it even faster? | Increase performance in an R function using a vectorized approach | performance;r;iteration;vectorization | null |
_unix.191057 | I am trying to replace a range of text with sed containing special characters.I have the following output:Users /Users/Users SERVER1Roaming Profiles /Roaming Profiles/Roaming Profiles SERVER2I would like it to be like this:Users SERVER1Roaming Profiles SERVER2 | Replace a range of text with special characters using sed | sed | null |
_unix.102484 | This question concerns the yes command found in UNIX and Linux machines: Basically, what is the point (if any) and history of this tool? Are there practical applications for it? Can an example be shown where it is useful in a script or chained (via pipe or redirect) with another tool?The manpage is below:YES(1) BSD General Commands Manual YES(1)NAME yes -- be repetitively affirmativeSYNOPSIS yes [expletive]DESCRIPTION yes outputs expletive, or, by default, ``y'', forever.HISTORY The yes command appeared in 4.0BSD.4th Berkeley Distribution June 6, 1993 4th Berkeley DistributionSample output:$ yes whywhywhywhywhy^Cwhy | What is the point of the `yes` command? | utilities | It's usually used as a quick and dirty way to provide answers to an interactive script:yes | rm -r large_directorywill not prompt you about any file being removed. Of course in the case of rm, you can always supply -f to make it steamroll the directory removal, but not all tools are so forgiving.UpdateA more relevant example of this that I recently came across is when you are fscking a filesystem and you don't want to bother answering y when prompted before fixing each error:yes | fsck /dev/foo |
_reverseengineering.9309 | I bought a decibel meter off amazon recently (http://www.amazon.com/Sound-Measure-Tester-Pressure-Decibel/dp/B00CPKSE38/ref=sr_1_1?ie=UTF8&qid=1436376590&sr=8-1&keywords=wensn) which outputs db measurements to a microsd card. I opened the sd card's content on my computer and encountered four separate *.wsn files, two of which I created and the other two apparently made by the manufacturer in testing perhaps. Anyway, did a google search for the .wsn file extension and can't find anything but something called a whoopsie skin file which doesn't appear to be what I'm looking for. Can anyone help me find a way to parse this file? I imagine it simply contains a table of information with two columns (db level and time)Here's the link to a sample of the file. It is time series data for just a couple of seconds from the decibel meter.https://drive.google.com/file/d/0B0yWXI3LgLr4ME1DeUs4SDFMX1E/view?usp=sharingUpdate:The db measurement device's manufacturer is wensn, which clearly accounts for the file name extension. I've found two leads so far:manufacture website for device (not very helpful, in fact it has been HACKED)reverse engineering wensn usb stream data (pretty advanced stuff)wensn usb stream github parsing project in pythonbut both are for parsing usb streams of *.tmp files from the device, rather than the static wsn files. I'm guessing I could utilize this code to parse the wsn files, but I don't know how to do that yet. Honestly I'm in way over my head at this point.I first posted this question meta.stackexchange, you can find that discussion here. The expected output data is columnar, time and decibel level, as seems to be indicated here.Update 2:I think there may be a way to decode this file using the 'sigrok' utility, which is used to read all sorts of serial outputs from scientific sensors. | Mysterious bytecode (executable?) file from a chinese decibel meter whose manufacturer has been hacked &/or gone bankrupt | executable;byte code | null |
_softwareengineering.340481 | Should every single class in my system have an interface?I understand that interfaces provide an abstraction from the implementation of a class and so changes to the implementation do not affect classes that are using the interface, i.e. cross platform implementations may differ but the interface can stay the same. But what about when testing? I want to test every class and function I write, and sometimes I want to mock classes that may use heavy resources like databases or third party web apis. If I create an interface for every single class in my system I can easily create mocks and unit tests against the interfaces, and this will allow me to test all classes and functions in the system, but I feel like this is overkill. Is there a more efficient way to get 100% coverage when testing a system, using unit tests and mocks, without creating interfaces for every single class? | Should every class in my system have an interface? | architecture;interfaces | null |
_codereview.164048 | I'm new in Java and want to learn and improve.The full project can be inspected hereThis is a Terminal game Battle Ship:The computer sets ships randomly on an ocean and you have to shoot down all those ships.I got several questions:I try to follow best practice, dependency injection, as much as possible. But I'm not sure whether my approach is a good one. For example in OceanImpl.java, I created classes that takes the OceanImpl-cass itself as it's parameter. Is this dependency injection implemented correctly? If not, how would you do it?I tried to make the classes not larger than 100 lines, thus I exported as much as possible into separated small classses and functions, in order to be DRY, create reusable components. But for some classes I think the it's still not dry enough. For example SetOnOceanVertically and SetOnOceanHorizontally is quite similar. Is it possible to refactor them further?Eventhough I used interfaces in order to make the program as flexible to changes as possible, I think it is still not flexible enough. Lookint at Game.java, in particular the line int[] userInput = Helper.getIntegerUserInputInRange(ocean.getXLength(), ocean.getYLength());, if I wanted to switch from Commandline UI to a GUI or changing from 2D to 3D, then I would have to write at least that line too. So, I can't just swap the UI and expect not to touch other parts of the code.Main.javapackage main;import main.controller.Game;import main.model.MaritimeElement;import main.model.OceanImpl;public class Main { public static void main(String[] args) throws Exception { Ocean ocean = new OceanImpl(6, 7); ocean.setShipWhereThereIsPlace(MaritimeElement.AIRCRAFT_CARRIER); ocean.setShipWhereThereIsPlace(MaritimeElement.AIRCRAFT_CARRIER); ocean.setShipWhereThereIsPlace(MaritimeElement.CRUISER); ocean.setShipWhereThereIsPlace(MaritimeElement.CRUISER); ocean.setShipWhereThereIsPlace(MaritimeElement.DESTROYER); ocean.setShipWhereThereIsPlace(MaritimeElement.DESTROYER); Game game = new Game(ocean); game.start(); }}OceanImpl.javapackage main.model;import main.controller.*;import main.controller.assertion.AssertionMaritime;import main.controller.assertion.AssertionMaritimeImpl;import main.controller.utils.Helper;import java.awt.*;import java.util.HashMap;import java.util.HashSet;import java.util.Map;import java.util.Set;public class OceanImpl implements Ocean{ MaritimeElement[][] ocean; Map<Point, MaritimeElement> shotsMade = new HashMap<>(); Set<Point> shipsPlaced = new HashSet<>(); RandomCoordinateFactory randomCoordinateFactory; FindFreePosition findFreePosition; AssertionMaritime assertShip = new AssertionMaritimeImpl(); SetOnOcean setOnOceanHorizontally; SetOnOcean setOnOceanVertically; SetOnOcean[] setOnOcean; public OceanImpl(int xLength, int yLength) throws Exception { assertShip.isLargerThanMinimumDimension(xLength, yLength); ocean = Helper.initOcean(yLength, xLength); randomCoordinateFactory = new RandomCoordinateFactory(ocean[0].length, ocean.length); findFreePosition = new FindFreePosition(this, assertShip); setOnOceanHorizontally = new SetOnOceanHorizontally(this); setOnOceanVertically = new SetOnOceanVertically(this); setOnOcean = new SetOnOcean[]{setOnOceanHorizontally, setOnOceanVertically}; } @Override public int getXLength() {return ocean[0].length;} @Override public int getYLength() {return ocean.length;} @Override public MaritimeElement getLocationStatusAt(int x, int y) {return ocean[y][x];} @Override public MaritimeElement shootAt(int[] userInput) throws Exception { int x = userInput[0], y = userInput[1]; assertShip.isPointWithinRange(x,y, this.getXLength(), this.getYLength()); shotsMade.put(new Point(x,y), getLocationStatusAt(x,y)); shipsPlaced.remove(new Point(x,y)); return getLocationStatusAt(x,y); } @Override public int howManyTargetsHit() {return shipsPlaced.size();} @Override public MaritimeElement getShotMade(int x, int y) {return shotsMade.get(new Point(x,y));} @Override public Result setShipWhereThereIsPlace(MaritimeElement ship) throws Exception { int[] position = findFreePosition.getPosition(ship.val()); if (position[Coordinate.X.val()] != -1){ setOnOcean[position[Coordinate.ORIENTATION.val()]] .setShip(position[Coordinate.X.val()],position[Coordinate.Y.val()],ship); return Result.SUCCESS; } return Result.FAILED; } @Override public void setMaritime(int x, int y, MaritimeElement ship) { try { ocean[y][x] = ship; } catch(Exception e) { e.getCause(); } } @Override public void setShipsPlaced(int x, int y) { shipsPlaced.add(new Point(x, y)); }}Game.javapackage main.controller;import main.controller.assertion.AssertionMaritime;import main.controller.assertion.AssertionMaritimeImpl;import main.controller.utils.Helper;import main.model.Ocean;import main.model.OceanImpl;import main.model.MaritimeElement;import main.view.CommandLineInterface;import main.view.UserInterface;public class Game { Ocean ocean; UserInterface ui = new CommandLineInterface(); AssertionMaritime assertUser = new AssertionMaritimeImpl(); public Game(OceanImpl ocean) { this.ocean = ocean; } public void start() throws Exception { do { ui.showOceanHidden(ocean); int[] userInput = Helper.getIntegerUserInputInRange(ocean.getXLength(), ocean.getYLength()); MaritimeElement shotAtElement = ocean.shootAt(userInput); displayResult(shotAtElement); } while(ocean.howManyTargetsHit() != 0); ui.displayFeedbackWin(); ui.showOcean(ocean); } private void displayResult(MaritimeElement shotAtElement) { if (assertUser.isWater(shotAtElement)) { ui.displayFeedbackShotMissed(); } else { ui.displayFeedbackShotHit(); } }}SetOnOceanHorizontally.javapackage main.controller;import main.model.MaritimeElement;import main.model.Ocean;public class SetOnOceanHorizontally implements SetOnOcean { Ocean ocean; public SetOnOceanHorizontally(Ocean ocean) { this.ocean = ocean; } @Override public void setShip(int x, int y, MaritimeElement ship) { for (int i = 0; i < ship.val(); i++) { int xCoordinate = x + i, yCoordinate = y; try { ocean.setMaritime(xCoordinate, yCoordinate, ship); } catch(Exception e) { e.getCause(); } ocean.setShipsPlaced(xCoordinate,yCoordinate); } }}SetOnOceanVertically.javapackage main.controller;import main.model.MaritimeElement;import main.model.Ocean;public class SetOnOceanVertically implements SetOnOcean { Ocean ocean; public SetOnOceanVertically(Ocean ocean) { this.ocean = ocean; } @Override public void setShip(int x, int y, MaritimeElement ship) { for (int i = 0; i < ship.val(); i++) { int xCoordinate = x, yCoordinate = y + i; try { ocean.setMaritime(xCoordinate,yCoordinate, ship); } catch(Exception e) { e.getCause(); } ocean.setShipsPlaced(xCoordinate,yCoordinate); } }}RandomCoordinateFactory.javapackage main.controller;import java.awt.*;public class RandomCoordinateFactory { private int xLength; private int yLength; public RandomCoordinateFactory(int xLength, int yLength) { this.xLength = xLength; this.yLength = yLength; } private int getRandomHorizontalXPosition(int shipLength) { return (int)Math.floor(Math.random() * (xLength - shipLength));} private int getRandomHorizontalYPosition() { return (int)Math.floor(Math.random() * yLength); } private int getRandomVerticalXPosition() { return (int)Math.floor(Math.random() * xLength); } private int getRandomVerticalYPosition(int shipLength) {return (int)Math.floor(Math.random() * (yLength - shipLength));} public Point getStartPointForHorizontalShip(int shipLength) { return new Point(getRandomHorizontalXPosition(shipLength), getRandomHorizontalYPosition()); } public Point getStartPointForVerticalShip(int shipLength) { return new Point(getRandomVerticalXPosition(), getRandomVerticalYPosition(shipLength)); }}FindFreePosition.javapackage main.controller;import main.controller.assertion.AssertionMaritime;import main.model.MaritimeElement;import main.model.Ocean;import main.model.Orientation;import java.awt.*;public class FindFreePosition { Ocean ocean; RandomCoordinateFactory randomPoint; AssertionMaritime assertShip; public FindFreePosition(Ocean ocean, AssertionMaritime assertShip) { this.ocean = ocean; this.assertShip = assertShip; randomPoint = new RandomCoordinateFactory(ocean.getXLength(), ocean.getYLength()); } public int[] getPosition(int shipLength) throws Exception { int[][] startingPoints = findFreePositionsHorizontallyAndVertically(shipLength); int selectRandomly = (int)(Math.random() * startingPoints.length); if (startingPoints[selectRandomly][0] != -1) return startingPoints[selectRandomly]; throw new Exception(No space for ships found); } private int[][] findFreePositionsHorizontallyAndVertically(int shipLength) throws Exception { Point startPointHorizontalShip = randomPoint.getStartPointForHorizontalShip(shipLength); Point startPointVerticalShip = randomPoint.getStartPointForVerticalShip(shipLength); int[] coordHorizontal = findFreePositionHorizontally(shipLength, startPointHorizontalShip.x, startPointHorizontalShip.y); int[] coordVertical = findFreePositionVertically(shipLength, startPointVerticalShip.x, startPointVerticalShip.y); return new int[][]{coordHorizontal, coordVertical}; } private int[] findFreePositionVertically(int shipLength, int xOffset, int yOffset) throws Exception { int x = xOffset,y = yOffset, k = 0, xIteration = 0; int[] start = {-1,-1,Orientation.VERTICAL.getValue()}; while (x < ocean.getXLength() && xIteration < 2) { while (y < ocean.getYLength()) { MaritimeElement currentMaritimeElement = ocean.getLocationStatusAt(x,y); if (k == 0) start = new int[]{x, y, Orientation.VERTICAL.getValue()}; if (assertShip.isWater(currentMaritimeElement)) k++; if (k == shipLength) return start; if (assertShip.isSpaceAvailable(currentMaritimeElement, ocean.getYLength(), shipLength, y, k)) { k = 0; start = new int[]{-1, -1, Orientation.VERTICAL.getValue()}; } y++; } y = 0; k = 0; x++; if (x >= ocean.getXLength()) xIteration++; x = x % ocean.getXLength(); } return start; } private int[] findFreePositionHorizontally(int shipLength, int xOffset, int yOffset) throws Exception { int x = xOffset, y = yOffset, k = 0, yIteration = 0;; int[] start = {-1,-1, Orientation.HORIZONTAL.getValue()}; while (y < ocean.getYLength() && yIteration < 2) { while (x < ocean.getXLength()) { MaritimeElement currentMaritimeElement = ocean.getLocationStatusAt(x,y); if (k == 0) start = new int[]{x, y, Orientation.HORIZONTAL.getValue()}; if (assertShip.isWater(currentMaritimeElement)) k++; if (k == shipLength) return start; if (assertShip.isSpaceAvailable(currentMaritimeElement, ocean.getXLength(), shipLength, x, k)) { k = 0; start = new int[]{-1, -1, Orientation.HORIZONTAL.getValue()}; } x++; } x = 0; k = 0; y++; if (y >= ocean.getYLength()) yIteration++; y = y % ocean.getYLength(); } return start; }}CommandLindInterface.javapackage main.view;import main.controller.DrawMaritime;import main.controller.DrawMaritimeImpl;import main.model.MaritimeElement;import main.model.Ocean;public class CommandLineInterface implements UserInterface{ DrawMaritime drawMaritime = new DrawMaritimeImpl(); @Override public void display(String message) { System.out.println(message); } @Override public void displayFeedbackWin() { display(You won!); } @Override public void displayFeedbackShotMissed() { display(Missed); } @Override public void displayFeedbackShotHit() { display(Hit); } @Override public void showOceanOpen(Ocean ocean) { genericDrawOcean(ocean, drawShipsOpenly); } @Override public void showOcean(Ocean ocean) { genericDrawOcean(ocean, drawAllShips); } @Override public void showOceanHidden(Ocean ocean) { genericDrawOcean(ocean, drawShotsMade); } private void genericDrawOcean(Ocean ocean, DrawStuffOnOcean drawStuffOnOcean) { for (int y = 0; y < ocean.getYLength(); y++) { for (int x = 0; x < ocean.getXLength(); x++) { if (y == 0 && x == 0) { System.out.print(\t); for (int i = 0; i < ocean.getXLength(); i++) { System.out.print(i + \t); } System.out.println(); } if (x == 0) System.out.print(y + \t); drawStuffOnOcean.draw(ocean, y, x); } System.out.println(); } } interface DrawStuffOnOcean{ void draw(Ocean ocean, int y, int x); } DrawStuffOnOcean drawShipsOpenly = (Ocean ocean, int y, int x) -> { MaritimeElement element = ocean.getLocationStatusAt(x,y); if(element == MaritimeElement.WATER) { drawMaritime.water(); } else if (element == MaritimeElement.DESTROYER) { drawMaritime.destroyer(); } else if (element == MaritimeElement.CRUISER) { drawMaritime.cruiser(); } else if (element == MaritimeElement.AIRCRAFT_CARRIER) { drawMaritime.aircraftCarrier(); } }; DrawStuffOnOcean drawAllShips = (Ocean ocean, int y, int x) -> { MaritimeElement element = ocean.getLocationStatusAt(x,y); MaritimeElement checkForShotsMade = ocean.getShotMade(x,y); if (checkForShotsMade == null) { drawMaritime.water(); } else { if(element == MaritimeElement.WATER) { drawMaritime.missShip(); } else if (element == MaritimeElement.DESTROYER) { drawMaritime.destroyer(); } else if (element == MaritimeElement.CRUISER) { drawMaritime.cruiser(); } else if (element == MaritimeElement.AIRCRAFT_CARRIER) { drawMaritime.aircraftCarrier(); } } }; DrawStuffOnOcean drawShotsMade = (Ocean ocean, int y, int x) -> { MaritimeElement element; if (ocean.getShotMade(x,y) == null) { drawMaritime.water(); } else { element = ocean.getShotMade(x,y); if(element == MaritimeElement.WATER) { drawMaritime.missShip(); } else { drawMaritime.hitShip(); } } };} | Battle Ship game Terminal Game | java | null |
_softwareengineering.160941 | I've finished a first working, releasable version of a testing framework. Prior to release, I want to apply a proper license to it. Normally I'd choose something like GPLv3 but here I am pretty unsure. If I put the testing framework under GPLv3, does that mean users won't be able to test their commercial applications without also putting them under the GPL? Would a license like the MIT be a better fit?I've just had a look at several popular testing frameworks (JUnit, Jasmine) and none of them uses the GPL.EDITForgot to mention the project is open sourced. | Choosing the right license for a testing framework | licensing;gpl | Firstly it probably doesn't matter. There are a lot of testing frameworks. Almost every company/project/developer seems to roll their own version of cppunit, so the chance of the license you choose actually having any major effect is small.Secondly GPL limits how you can distribute a derived work. Unless your testing framework requires some changes to my main codebase (in which case it's a very bad testing framework!) then I should only need to use your code for my unit tests which I'm not going to distribute anyway. |
_unix.189445 | The command brew install boost works for me on my MacOSX, but it installs the latest boost 1.57. How to use brew to install the older 1.55? | How to install a particular version of boost with brew on MacOSX? | command line;macintosh | You'll want to run brew install homebrew/versions/boost155$ brew search boostboost homebrew/science/boost-computeboost-bcp homebrew/versions/boost149boost-build homebrew/versions/boost150boost-python homebrew/versions/boost155Caskroom/cask/iboostup Caskroom/cask/pivotalbooster Caskroom/cask/turbo-boost-switcher |
_softwareengineering.199592 | My background in mathematics is very poor (i.e. last relevant math class taken was high school Trigonometry two years ago - another story for another time). I'm reading 'Javascript: The Definitive Guide' and it's a term that is being repetitively used and I've sort of just ran with it. But I've come to a chapter (Chapter 6 - Objects) where my lack of understanding of the term and its application in programming/OOP is starting to become detrimental to the learning process. The online dictionaries aren't helping out, so does somebody have a more explainable definition and/or example to show? | What does 'enumerable' mean? | object oriented;definition | null |
_softwareengineering.80702 | Programming books often contain a lot of code scattered within it. Usually there will be an accompanying website to download the code used in the book.How do you use the code? Do you just run them and check the results or do you code it from scratch again?If you are coding it from scratch, have you found any advantages( like remembering the content better etc)? | How do you use the sample codes while reading programming books? | learning | The best advice on the matter I know of comes from the prologue of Zed Shaw's Learn Python The Hard Way:This simple book is meant to get you started in programming. The title says its the hard way to learn to write code; but its actually not. Its only the hard way because its the way people used to teach things. With the help of this book, you will do the incredibly simple things that all programmers need to do to learn a language:Go through each exercise. Type in each sample exactly. Make it run.Thats it. This will be very difcult at rst, but stick with it.And later on he elaborates:It seems stupidly obvious, but, if you have a problem typing, you will have a problem learning to code. Especially if you have a problem typing the fairly odd characters in source code. Without this simple skill you will be unable to learn even the most basic things about how software works.Typing the code samples and getting them to run will help you learn the names of the symbols, get familiar with typing them, and get you reading the language.Almost all of the prologue is dedicated to why typing code is preferred, there is no point to copy it here, there's a free pdf version of the book you can read. So, to answer your question, I always type in the code. Every time you choose to type the code instead of just compiling / running the accompanying code you actually get some valuable practice and extra muscle memory points in the language's syntax, conventions and quirks. It's not just the code that matters, unless all you want to achieve is reading the language. Python, where indentation is a language requirement and not a matter of style, is a perfect example of why you need to type everything when learning. |
_scicomp.21029 | Convert the following model into an LP model. Note that you're not being asked to convert this to standard form.$$\min z = \max (x_1, x_2, x_3, 2000)$$s.t.$$-2x_1 + x_2 + x_3 \geq -4$$$$3x_1 - 4x_2 + x_3 \leq 12$$$x_i \geq 0$ for all $i$I am having difficulty thinking through how to set up an objective function that isn't a formula, but selects the maximum of multiple variables. I've tried setting up the $\max(x_1, x_2, x_3, 2000)$ as a matrix and as another variable x4, but it doesn't make sense to me on paper. When I write the problem in Excel, I use =MAX(x1, x2, x3, 2000) as the objective function but I don't understand the algorithm I would use to solve this problem. I've researched several textbooks and am stuck on setting up an objective function that selects a single maximum value of several variables. | Convert the following model into an LP model (not asking for standard form), includes a max (a,b,c,d) | linear programming;model | null |
_unix.334973 | I want to use find command to find the new file uploaded from spacific time to my server,NOT by access time -aminNOT by modify time -mminfor exampleI have manually upload file.ex with SFTProot@server [/path-of-file]# stat file.exFile: `file.ex'Size: 1668 Blocks: 8 IO Block: 4096 regular fileDevice: 903h/2307d Inode: 22820305 Links: 1Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2017-01-05 07:37:52.000000000 +0100Modify: 2016-12-27 13:03:10.000000000 +0100Change: 2017-01-05 06:48:26.000000000 +0100I need to find with Change statswhich isChange: 2017-01-05 06:48:26.000000000 +0100All I want to dofind what uplaoded file (with change stat only) from 24 hours and save output to file | how to find by change date in meta data | find;stat | It seems you're looking for the -ctime option. For example, find /path -ctime -1 will find files with change time within the last 24 hours. |
_opensource.5888 | I am looking for Android Open Source Project of PDF Viewer when I can have the select/copy text feature. Like Acrobat Reader provided. Any advices? I have already checked few projects:https://github.com/barteksc/AndroidPdfViewerhttps://github.com/JoanZapata/android-pdfviewhttps://github.com/voghDev/PdfViewPagerBut all of them has not this feature. | Is any Open Source Android PdfViewer project when I can select and copy text? | android | null |
_cstheory.16276 | Suppose you have a data set composed of n images as training examples. You run clustering on each image ( initializing 3 clusters per image) and learn the centers. Is it ok to then take the cluster centers themselves as features for a supervised learning algorithm and thus have a vocabulary for each image that way or is it inconsistent ? Are there are other more consistent measures that can be used ? | K means feature learning | machine learning | I'd say you can. Coordinates of point of importance inside image qualifies as a feature so you can use it in supervised learning.If clustering is a way to discover that coordinate you can use that.A practical example: say you want to use x-coordinate of left eye as a feature in a classification problem. It does not matter how you get the object's values for that feature, as long as the values are accurate. If you have an algorithm (e.g. a clustering algorithm) which can compute the values you can use it. |
_webapps.14760 | Sometimes I get partnership offers from people but I'm not ready to work with them yet. I'd like to have an easy way to add them to Highrise (the 37 signals app) and tag them.Can I do that? | How do I add a new contact to Highrise via email? | crm | You can email a contact into Highrise using your 'Highrise Dropbox' email address. You can find this address in Settings > My Info > Email dropbox.You can bcc Highrise when communicating with a prospect, or you can forward an email from the prospect to your Highrise dropbox. If the contact exists, Highrise will attach the email to their case. If the contact doesn't exist, Highrise will create it automatically.See the Highrise demo here:http://highrisehq.com/emaildropboxI don't think you can tag them at the same time, but you can create a Highrise task (to remind you to tag them) using the process above. |
_unix.34524 | Which application can I use to figure out what to put in .inputrc for any custom keyboard shortcut? I've tried a few, and none of them seem to be usable:showkey, showkey -a and read just print ' if you press Ctrl-'.xev prints them separately, and doesn't print anything that seems usable for .inputrc. | How to print keypresses in .inputrc format? | keyboard shortcuts;keyboard;inputrc | I believe ctrl-' will not be passed to applications in the console. It also doesn't show up in xev.It may be the input system or even PC hardware, but without trickery some of the key combinations may be impossible to detect. |
_scicomp.11315 | What's the difference between these two methods? Can a problem be solved by one method will be able to solved by the other? Can both/or one of them be parallelized with OpenMP and/or MPI? | What's the difference between conjugate gradient method and biconjugate gradient method | mpi;conjugate gradient | The conjugate gradient method is the provably fastest iterative solver, but only for symmetric, positive-definite systems. What would be awfully convenient is if there was an iterative method with similar properties for indefinite or non-symmetric matrices.The CG method seeks approximate solutions at each step $k$ within the Krylov subspace$K_k(A,b) = \{b, Ab, A^2b,\ldots,A^kb\}$.The essential idea of the biconjugate gradient method is to maintain a second Krylov subspace$K_k^*(A,b) = \{b, A^*b, (A^*)^2b,\ldots,(A^*)^kb\}$and seeking a recurrence with similar orthogonality properties to that of CG, but without the stability issues of solving $A^*Ax = A^*b$.Unfortunately, that fails if you apply it naively. However, by performing one step of the generalized minimum residual (GMRES) algorithm after each BiCG step, the resulting iteration is stable; this is usually referred to as BiCG-Stab.So, BiCG-Stab is (in principle) a more general solver than CG but suffers worse efficiency when applied to the problems for which CG was intended. BiCG or BiCG-stab require more matrix-vector multiplications and more dot products, so if you parallelize them via distributed-memory multiprocessing you'll incur more communication overhead, but nonetheless they can be scaled up as much as you like.There are two things worth noting here which are more important than all that other junk I just said:For every iterative method (BiCG, GMRES, QMR...), there is a matrix that will make it fail to converge in finite-precision arithmetic.Therefore, coming up with a good preconditioner for your specific matrix is probably more important than using the optimal outer-level iterative solver.EDIT: For open-source libraries, the two most popular are PETSc and Trilinos. I highly recommend you also get the Python bindings, respectively petsc4py and PyTrilinos. You can also try Eigen. On the one hand, it doesn't have many features, but on the other hand, it has just what you need and no more; if you intend to read the code rather than just use it, Eigen might be the easiest.${}$See also: Yousef Saad, Iterative Methods for Sparse Linear Systems; Nachtigal et al, How Fast Are Nonsymmetrix Matrix Iterations? |
_codereview.92238 | This program reads a filename from standard input and then prints its content. Please review:#include <unistd.h>#include <fcntl.h>#include <stdio.h>#include <errno.h>int main(){ char fileName[20]; // Get the filename from the user: int fileNameLength = read(0,fileName,19); // We need to get rid of the new line character caused by terminal // Replace the new line character with \0 fileName[fileNameLength-1] = fileName[fileNameLength]; printf(You want to see the contents of: %s\n, fileName); // open the file: int fd = 0; fd = open(fileName,O_RDONLY); if(fd == -1) { // Something went wrong, perhaps no such file: perror(NULL); printf(%s\n, fileName); } else { // Read until the read method returns 0 bytes. char buf[20]; int numBytesRead = read(fd,buf,20); while(numBytesRead) { write(1,buf,numBytesRead); numBytesRead = read(fd,buf,20); } close(fd); } puts();}And in action:Korays-MacBook-Pro:~ koraytugay$ gcc koray.cKorays-MacBook-Pro:~ koraytugay$ ./a.out k.txtYou want to see the contents of: k.txtHello Code Review!This is the contents of k.txt.Have a good day!Korays-MacBook-Pro:~ koraytugay$ | Showing the contents of a file | c;file | Magic numbers 20 and 19?Instead of this:char fileName[20];// Get the filename from the user:int fileNameLength = read(0,fileName,19);Define MAX_FILENAME_LENGTH somewhere, and use that as the buffer size parameter, and MAX_FILENAME_LENGTH + 1 as the array size containing the buffer, to account for the terminating null character.Write in code your intention exactlyWrite in code more explicitly what you want.For example here you say in comment that you want to write a \0 to the final position of the char[], but the code doesn't do exactly that:// We need to get rid of the new line character caused by terminal// Replace the new line character with \0 fileName[fileNameLength-1] = fileName[fileNameLength];Don't assume what might be at an uninitialized memory location.If you want to set the terminating character to \0,then do exactly that:fileName[fileNameLength-1] = '\0';More magic number 20All those 20 everywhere:char buf[20];int numBytesRead = read(fd,buf,20);while(numBytesRead) { write(1,buf,numBytesRead); numBytesRead = read(fd,buf,20);}Why not introduce a variable so that you can change it later if you want to:int bufsize = 20;char buf[bufsize];int numBytesRead = read(fd,buf,bufsize);while(numBytesRead) { write(1,buf,numBytesRead); numBytesRead = read(fd,buf,bufsize);}And why use a buffer of size 20? Why read a file 20 byte at a time? It would be faster to read in larger chunks. Surely you have enough memory to load 4 kbyte at a time. Luckily, now it's easy to change that:int bufsize = 4096;Avoid code duplicationIn the previous code snippet, read(fd,buf,bufsize); appears twice,which is not pretty. It can be rewritten without such duplication:int numBytesRead;while ((numBytesRead = read(fd, buf, bufsize)) > 0) { write(1, buf, numBytesRead);}close(fd);On the other hand, some people find this writing style potentially error prone or confusing. In my opinion code duplication is the bigger evil,so I still prefer this writing style, which is also shorter.Comments stating the obvious// open the file:int fd = 0;fd = open(fileName,O_RDONLY);Is that comment really necessary there? Or is it just noise?Pointless variable initializationint fd = 0;fd = open(fileName,O_RDONLY);If you're going to set fd to something else, why set it to 0?UsabilityWhen you run the program,it prints nothing,it's just waiting for user input.It would be better to print a prompt,for example:puts(Enter file name:); |
_unix.173790 | I have a program written in Python 2.7 and wxPython 2.8 under Mint 13. One of the modules pulls data off a number of USB devices for subsequent analysis. Everything works fine with Dell laptops but when installed on the HP 255 G1 ~40% of them report USB I/O errors, all ports are USB 2.0One suggestion made to me was to try booting via USB with Mint 17 to see if there are any updates included that resolve the problem. Whilst booting with Mint 17 is not a problem I could not see the program as the only directory under /home is 'mint'. I then put a copy of the software in /home/mint but found that although Mint 17 includes Python it does not appear to include wxPython.I am somewhat unsure where to go next. Can I drop wxPython into the Mint 17 boot; are there other diagnostics I could run; could there be a driver update as I have seen many times on Windowz; etc... | Get I/O errors on USB ports | linux mint;usb | null |
_unix.88123 | I'm trying to search and replace some line from files, but when I run sed if that regex not match then sed does not return any status code, So instead of adding one more condition with grep, is there way in sed and Why sed does not return exit status ?Problem : | Why `sed` does not return exit status if regex does not matched ? | sed | sed does return an exit status:$ echo foo | sed 's/xx/yy/'foo$ echo $?0$ sed 's/xx/yy/' foo.txtsed: can't read foo.txt: No such file or directory$ echo $?2Exit codes are about whether a program exited successfully or not, they have nothing to do with the internals of what sed is doing, just with whether or not the command managed to run.That said, GNU sed does offer a way to do this: q [exit-code] Immediately quit the sed script without pro cessing any more input, except that if auto- print is not disabled the current pattern space will be printed. The exit code argument is a GNU extension.For example (taken from here):$ echo foo|sed '/foo/ s/f/b/;q' boo$ echo $?0$ echo trash|sed '/foo/{s/f/b/;q}; /foo/!{q100}'trash$ echo $?100 |
_datascience.8038 | import numpy as npfrom sklearn import linear_modelX = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])Y = np.array(['C++', 'C#', 'java','python'])clf = linear_model.SGDClassifier()clf.fit(X, Y)print (clf.predict([[1.7, 0.7]]))#pythonI am trying to predict the values from arrays Y by giving a test case and training it on a training data which is X, Now my problem is that, I want to change the training set X to TF-IDF Feature Vectors, so how can that be possible?Vaguely, I want to do something like this:import numpy as npfrom sklearn import linear_modelX = np.array_str([['abcd', 'efgh'], ['qwert', 'yuiop'], ['xyz','abc'], ['opi', 'iop']])Y = np.array(['C++', 'C#', 'java','python'])clf = linear_model.SGDClassifier()clf.fit(X, Y) | Passing TFIDF Feature Vector to a SGDClassifier from sklearn | machine learning;classification;python;scikit learn | It's useful to do this with a Pipeline:import numpy as npfrom sklearn import linear_model, pipeline, feature_extractionX = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])Y = np.array(['C++', 'C#', 'java','python'])clf = pipeline.make_pipeline( feature_extraction.text.TfidfTransformer(use_idf=True), linear_model.SGDClassifier())clf.fit(X, Y)print(clf.predict([[1.7, 0.7]])) |
_codereview.29469 | I'm working on a general Requestmethod class which sanitizes and recasts the input of users in an automatic fashion. I've also tried to do array-access in cookies.My biggest question are:Is this too much for one class alone?Is the readability/thinking right on this?Code: Example: (<input type=checkbox name=checkboxes[] />) $a = new Request('POST'); $b = $a->getArray('checkboxes'); $b->get(0); $b->get(1); $b->get(2); class REQUESTCONF { const XSS = false; const SALT = 'anysalt'; const c_expire = 24; const c_path = '/'; const c_domain = ''; const c_secure = false; const c_httponly = false; } /** * Class to get Secure Request Data * * XSS Levels: * 0 / false = off * 1 = htmlentities * 2 = strip tags * * @package Request */ class Request { private $DATA; private $CURSOR = false; private $XSS = false; private $METHOD; /** * Constructor * * @package Request * @param string $METHOD POST GET SESSION or COOKIE * @param boolean $XSS XSS Prevent Level * @uses sanitize(); * @return object Request * @access public */ function __construct($METHOD,$XSS=false) { $this->METHOD = strtoupper($METHOD); $this->XSS = $XSS; switch ($this->METHOD) { case 'FILE': $this->DATA = $_FILES; break; case 'GET': $this->DATA = $_GET; break; case 'POST': $this->DATA = $_POST; break; case 'COOKIE': foreach($_COOKIE as $k => $v) { // hiding Notice - but no other way :'( $array = @unserialize($v); if ($array && is_array($array)) { $this->DATA[$k] = $array; } else { $this->DATA[$k] = $v; } } break; case 'SESSION': $this->DATA = $_SESSION; break; default: trigger_error('Parameter must be a valid Request Method (GET, POST, FILE, SESSION or COOKIE)',E_USER_ERROR); die(); break; } $this->DATA = $this->sanitize($this->DATA); return $this; } /** * Destruct - clean up * * @package Request * @access public */ function __destruct() { $this->save(); if ( $this->METHOD == 'SESSION' ) session_regenerate_id(); unset($this->XSS); unset($this->DATA); unset($this->CURSOR); unset($this->METHOD); } /** * Removes a Value with $key of $this->DATA * * @package Request * @param string $key Arraykey of Element in $DATA * @access public */ public function remove($key){ if ( $this->CURSOR != false && isset($this->DATA[$this->CURSOR]) && is_array($this->DATA[$this->CURSOR]) ) { unset($this->DATA[$this->CURSOR][$key]); } elseif (isset($this->DATA[$key])) { unset($this->DATA[$key]); } else { return false; } } /** * Dumps whole $DATA * * @package Request * @access public */ public function dump(){ var_dump($this->DATA); } /** * Set Value in $DATA * * @package Request * @param string $key Arraykey of Element in $DATA * @param mixed $value String Integer Float or Bool want to set in $DATA * @return boolean * @access public */ public function set($key,$value){ if ( $this->CURSOR != false && isset($this->DATA[$this->CURSOR]) && is_array($this->DATA[$this->CURSOR]) ) { $this->DATA[$this->CURSOR][$key] = $value; return 1; } if (is_array($value)) return false; $this->DATA[$key] = $value; $this->setToken(); return $this; } /** * Get a Value from $DATA with validation * * @package Request * @uses object validateString * @param string $key Arraykey of Element in $DATA * @param mixed $validate Regex Rule for validation or false * @return mixed $this->DATA[$var] Contents of the Arraykey * @access public */ public function validGet($key,$validateRegex){ if (!$this->checkToken()) return false; if ( $string = $this->get($key) ) { if (preg_match($validateRegex,$string)) return $string; } return false; } /** * Get Filesize when Cursor is on a FILE Request * * @package Request * @return int Filesize in Bytes of File * @access public */ public function getFilesize(){ if ($this->METHOD != 'FILE' || !$this->CURSOR) return false; return $this->DATA[$this->CURSOR]['size']; } /** * Get Filename when Cursor is on a FILE Request * * @package Request * @return string Name of File * @access public */ public function getFilename(){ if ($this->METHOD != 'FILE' || !$this->CURSOR) return false; return $this->DATA[$this->CURSOR]['name']; } /** * Get MIME when Cursor is on a FILE Request * * @package Request * @return string MIME-Type of File * @access public */ public function getFileType(){ if ($this->METHOD != 'FILE' || !$this->CURSOR) return false; // When File is a Picture getimagesize() mime is more secure and can handle PSDs - exif not if ($img = getimagesize($this->DATA[$this->CURSOR]['tmp_name'])) { return $img['mime']; } elseif (isset($this->DATA[$this->CURSOR]['type'])) { return $this->DATA[$this->CURSOR]['type']; } else { return false; } } /** * Saves the File to a given destination * * @package Request * @param string $dest Save Destination * @return boolean * @access public */ public function saveFile($dest){ if ($this->METHOD != 'FILE' || !$this->CURSOR) return false; if (move_uploaded_file($this->DATA[$this->CURSOR]['tmp_name'],$dest)) { return true; } else { return false; } } /** * Sets the Cursor to a FILE Request * * @package Request * @param string $key Arraykey of Element in $DATA * @access public */ public function getFile($key){ if ($this->METHOD != 'FILE') return false; //sanitize filename => no leading dot return $this->getArray($key); } /** * Get a Value from $DATA * * @package Request * @param string $key Arraykey of Element in $DATA * @return mixed $this->DATA[$var] Contents of the Arraykey * @access public */ public function get($key){ if (!$this->checkToken()) return false; if ( $this->CURSOR != false && isset($this->DATA[$this->CURSOR][$key])) return $this->DATA[$this->CURSOR][$key]; if (!isset($this->DATA[$key])) return false; return $this->DATA[$key]; } /** * Set a Array in $DATA * * @package Request * @param array $array Array you want to Store in $DATA * @return boolean true * @access public */ public function setArray($key,$array){ if (!is_array($array) OR $this->METHOD == 'FILE') return false; $this->DATA[$key] = $array; $this->setToken(); return $this; } /** * Sets the Cursor to an existing arraykey from Data when its an Array * Otherwise set it to false and return false * * @package Request * @param string $key Key of an Array in $DATA * @return object Request * @access public */ public function getArray($key){ if (!isset($this->DATA[$key]) || !is_array($this->DATA[$key])) { $this->CURSOR = false; return false; } $this->CURSOR = $key; return $this; } /* Sanitizer * * * @package Request * @return array sanitized and pseudotyped Array (since POST and GET is String only) * @access private */ private function sanitize($array){ foreach ($array as $k => $v) { if (is_numeric($v)) { $array[$k] = $v + 0; if ( is_int($v) ) { $array[$k] = (int) $v; } elseif ( is_float($v) ) { $array[$k] = (float) $v; } } elseif (is_bool($v)) { $array[$k] = (bool) $v; } elseif (is_array($v)) { $array[$k] = $this->sanitize($array[$k]); } else { if ($this->XSS > 0) { switch ($this->XSS) { case 1: $array[$k] = htmlentities(trim($v)); break; case 2: $array[$k] = strip_tags(trim($v)); break; } } else { $array[$k] = (string) trim($v); } } } return $array; } /** * refill the original REQUEST * * @package Request * @access public */ public function save() { switch($this->METHOD) { case GET : $_GET = $this->DATA; break; case POST : $_POST = $this->DATA; break; case SESSION : $_SESSION = $this->DATA; break; case COOKIE : $expire = time()+3600*REQUESTCONF::c_expire; foreach ($this->DATA as $K => $V) { if ( is_array($V)) { setcookie($K,serialize($V), $expire, REQUESTCONF::c_path, REQUESTCONF::c_domain, REQUESTCONF::c_secure, REQUESTCONF::c_httponly ); } else { setcookie($K,$V, $expire, REQUESTCONF::c_path, REQUESTCONF::c_domain, REQUESTCONF::c_secure, REQUESTCONF::c_httponly ); } } break; } return 1; } /** * Generates a Token with data serializing and a given salt from Config * saves it in the first level of Session or Cookie * * @package Request * @access private */ private function setToken() { if ($this->METHOD == 'SESSION' || $this->METHOD == 'COOKIE') { if ( isset($this->DATA['TOKEN'])) unset($this->DATA['TOKEN']); $this->DATA['TOKEN'] = crc32(serialize($this->DATA).REQUESTCONF::SALT); } } /** * checks the inserted Tokenhash with actual Data in Session or Cookie * * @package Request * @return boolean true on success false on fail * @access private */ private function checkToken() { if ($this->METHOD != 'SESSION' && $this->METHOD != 'COOKIE' ) return 1; if ( isset($this->DATA['TOKEN'])) { $proof = $this->DATA['TOKEN']; unset($this->DATA['TOKEN']); if ( $proof != crc32(serialize($this->DATA).REQUESTCONF::SALT) ) { return false; } else { $this->DATA['TOKEN'] = crc32(serialize($this->DATA).REQUESTCONF::SALT); return 1; } } else { return false; } } } function Request($method,$xss=false) { if (!$xss) $xss = REQUESTCONF::XSS; return new Request($method,$xss); } ?> | Critique Request: PHP Request-Method Class | php;object oriented;classes | Right, let me be clear, I mean this in the nicest of ways, but I don't like this code one bit. Let me start off a couple of simple things:Please, follow the coding standards as described by PHP-FIGI've noticed you're (ab)using the error supressing operator (eg @unserialize). Don't. Errors and Warnings are there to help you improve on your code. If there's a warning issued, don't cover it up, fix it. Even more worrying is where I saw this being used:foreach($_COOKIE as $k => $v){ $array = @unserialize($v); if ($array && is_array($array)) { $this->DATA[$k] = $array; } else { $this->DATA[$k] = $v; }}Now this implies you're actually setting cookies to hold serialized objects or arrays. First off, it gives me a solid clue on what your stack looks like (I know for a fact you're using PHP). Serialized arrays are easily changed client side, so there's no guarantee on data integrity. If they're serialized objects, you're in trouble... big time. That would mean you're actually sending an object to the client, containing its state, and possibly all sorts of information on your server. If I were to be a 16 year old would-be hacker, I'd feel like I just struck gold. With some trial and error, I could easily manipulate that objects state, to gain access to those parts of your app you probably rather I didn't know about.Cookies should cosist of little slivers of, on its own, meaningless data, Sessions are where you could store serialized objects, but still: I wouldn't.Your Request class has a public function __construct function defined. Why then, do you also define a function Request? It looks as though you're trying to catch errors if somebody omitted the new keyword, or you're trying to mimic the factory pattern. Either implement a factory, or drop the function. If it's there to catch code that doesn't construct its instances using new, then that code contains bugs: fix them, don't work around them.Next: The REQUESTCONF class, which only contains constants (ignoring the naming issues here). These constants obviously belong to the Request object: the Request object's state is defined by it. Drop that class, which is used as a constant array anyway, and define the constants as Request constants.You don't need to call all those unset's in the destructor. Dealing with the session is fine, anything else is just overhead (the values will be unset when the object goes out of scope anyway), which is right after the destructor returns.Moving on to some actual code:public function getArray($key){ if (!isset($this->DATA[$key]) || !is_array($this->DATA[$key])) { $this->CURSOR = false; return false; } $this->CURSOR = $key; return $this;}This doesn't make sense, to me at least. I'd expect the return type of getArray to be either an array or null, you're returning false, or the object itself. I see what you're doing here, but the name is just begging for accidents to happen.either implement a child of the Traversable interface (which is what you're trying to do anyway) or change the method-name.Your first question: Is this too much for one class?Yes, it is. You might want to take a look at existing frameworks and how they manage the Request, what objects they use and how they're tied in.Your Request object deals with everything, from the basic GET params to sessions. They're not the same thing, and should be treated as seperate entities. You're using a class as a module, which violates the single responsability principle.I'd suggest you write a class for each request that requires its own treatment. Take, for example, a Session class. That class should indeed implement a destructor, but a Get object shouldn't. Their constructors should be different, too, but they can both implement the same basic getter and setter.Start off by writing an abstract RequestType class, that holds the shared methods/properties, and extend it with the various request-type classes:abstract class RequestType{ protected $data = null; protected $writeable = true; abstract public function __construct();//ensure all children implement their own constructor public function __get($name) { if (!is_array($this->data) || !isset($this->data[$name])) {//or throw exception return null; } return $this->data[$name]; } public function __set($name, $val) { if ($this->writeable === false) { throw new RuntimeException(sprintf('%s instance is Read-Only', get_class($this))); } $this->data[$name] = $value; return $this; }}//thenclass Session extends RequestType{ public function __construct($id = null, $readOnly = false) { $this->writeable = !!$readOnly; //start session, assign to $this->data }}You get the idea... You can use the abstract class for type-hinting, and implement the Traversable interface there, too.Your main Request object then sort of becomes a service, then:class Request{ private $session = null; private $cookie = null; private $post = null; private $get = null; private $xhr = false;//isAjax() private $uri = null; //type constants const TYPE_COOKIE = 1; //000001 const TYPE_SESSION = 2;//000010 const TYPE_POST = 4; //000100 const TYPE_GET = 8; //001000 const TYPE_URI = 16; //010000 const TYPE_AJAX = 32; //100000 const TYPE_ALL = 63; //111111 //config constants const CONF_XSS = 0; //use PDO-style: array(Request::CONF_XSS => XSS_ON) const XSS_ON = 1; const XSS_OFF = 0; private static $objects = array( 1 => 'Cookie', 2 => 'Session', 4 => 'Post', 8 => 'Get', 16 => 'Uri', 32 => 'Ajax' ); private $options = array( self::CONF_XSS => XSS_OFF, 'default' => 'config here' ); public function __construct($type = self::TYPE_ALL, array $options = null) { if (is_array($options)) { foreach($options as $conf => $value) { $this->options[$conf] = $value; } } if ($type === self::TYPE_ALL) { foreach(self::$objects as $type) {//assume setPost, setSession, .. $this->{'set'.$type}($this->options); } return $this; } if (($type & ($type - 1)) === 0 && ($type & self::TYPE_ALL) === $type) {//^^ $type is power of 2 , and it's value < 63 ===> constant exists $this->{'set'.self::$objects[$type]}($this->options); } return $this; }}You implement, as the constructor shows, your set<Type> methods, and some lazy-loading get<Type> methods, too, and you're good to go.In case you're not sure what I mean by lazy-loading getters:public function getSession(){ if ($this->session === null) { $this->session = new Session($this->options);//session_start is the responsability of the Session class! } return $this->session;} |
_softwareengineering.288789 | Is there a connection between futures and exceptions? async-await looks very similar to throw-catch. | Connection between futures and exceptions? | exceptions;exception handling;async;asynchronous programming | null |
_scicomp.25146 | Consider the equations$$\int_0^L \mathbf W(\mathbf u, s) \, \mathrm ds = \mathbf 0$$where $0 \leq s \leq L$ and $\mathbf u$ is a vector of constants. Numerically, what is the best way to determine $\mathbf u$ that satisfy the equations? | Solve integral equation for unknown constant | integral equations;constraints | null |
_cs.16431 | I have a table in which some values are repeated often as shown in the figure below. I want to encode that table such that it makes use of less memory. I have heard about run length encoding (RLE) but I would like to know if there are any other such encoding techniques or algorithms which can perform better than RLE or their performance is almost equivalent to RLE. | Encoding algorithms better than or equivalent to Run Length Encoding | algorithms;encoding scheme | null |
_unix.259757 | Suppose i have a folderwith a lot of file namessome very strange and nonsenseI want to rename it like File-1File-2File-3..I have tried this(echo is for tryng)for name in *; do echo mv $name File-`echo $(( RANDOM % (10 - 5 + 1 ) + 1 ))`;doneBut give me a lot of duplicatesmv bio1 file-3mv memory23 file-1mv mernad file-3mv nio2 file-4mv nun3 file-4 | Rename files randomly but without repetition | scripting;rename;mv;random | You could maybe use shuf (from the GNU coreutils package), which generates permutations rather than individual random samples - something likefor f in *; do read i; echo mv -- $f file-$i; done < <(shuf -i 1-10)or (perhaps better) shuffle the filenames - and then simply rename them sequentiallyi=1; shuf -z -e -- * | while IFS= read -rd '' f; do echo mv -- $f File-$((i++)); done |
_datascience.19256 | I am currently working with the famous titanic dataset from Kaggle. Now I want to explore the influence on different features on the chance of survival. I use the random fores classifier for an accuracy score output.I want to know whether people with sibling have a bigger chance of survival than people without. I could slice my dataset into one with siblings and one without.However these might probably not be comparable either because of the size or e.g. that people with sibblings are more often female. The comparison is distorted.How can I account for that and answer my question correctly. | Explore Influence of features on titanic dataset | machine learning;scikit learn | null |
_unix.361145 | I have 2 csv files whose contents are-expo1.csv:102,GREAT,adjective,ENG,p1_0,no,p2_1,no,p3,no,4,yes,p5_2,no,p6,yes....,su1,amb,su_09,no104,BHAAG,verb,HIN,p1,yes,p2,no,p3_7,amb,p4,no,p5,no,p6_9,yes....,sg4_3,yes,su119,amb110,.......,su11_0,ambandimpo1.csv:104,p1,no102,p2,yes104,p10,no110,su11,noBasically expo1.csv is a file on the server, and impo1.csv is a file I created to update expo1.csv. A script makes the changes in expo1.csv as specified in impo1.csv after performing slight processing in the impo1 data (eg. The line 102,p2,yes from impo1.csv is processed and then an update is made to expo1.csv - p2_1,yes.)expo1.csv after changes:102,GREAT,adjective,ENG,p1_0,no,p2_1,yes,p3,no,4,yes,p5_2,no,p6,yes....,su1,amb,su_09,no104,BHAAG,verb,HIN,p1,no,p2,no,p3_7,amb,p4,no,p5,no,p6_9,yes....,sg4_3,yes,su119,amb110,.........,su11_0,noNow after the script makes the changes, we need to validate if the changes are done properly by comparing the impo1 and expo1 files. This is where i'm stuck.So far I could isolate the data between the commas in impo1.csv separately into variables using awk:Sno=104 102 104Posw=p1 p2 p10cho=no yes noNow the question is, how do I check this? The impo1.csv files contains around 3000 updates. If I grep p1 expo1.csv|grep no expo1.csv, obviously it will not return the correct result as the file has many 'no' strings. I have tried using a for loop to separate the data using awk into separate variables and then grep using a wildcard - grep sno expo1.csv|grep '/<$posw.*,$cho>/' expo1.csv - but it doesn't work.Using GNU bash 4.1.2.EDIT - Should have mentioned this earlier, my bad - There are no clear patterns in the impo1.csv file which I can use to check the expo1 file. I have made corrections to the sample file contents which illustrate my point. | Using awk/for/grep for comparing 2 files | awk;grep;loop device | The solution is rather simple. You just need to create a pattern from each line of impo1.csv and then grep it from expo1.csv after updatedvalidate() { # $1 ~ impo1.csv # $2 ~ expo1.csv after changes while read pattern; do grep -q ^$pattern $2 || return 1 done < <(sed s/,/,.*/ $1 )} |
_cs.18310 | Some guy on the internet recommends using the same ntp server when it is required to troubleshoot asymmetric routes through ICMP, and it's somewhat important to have synchronised time between the two machines doing ICMP.Granularity of timestamps in ICMP is 1ms (unique per 24h period), assume packet roundtrip between the source and destination of at least 100ms, each way of at least 50ms, plus jitter.I find the recommendation of using the same ntp server unreasonable; for one, because it would seem that the likelihood of any given reliable ntp server, anywhere in the world, carrying correct time is much higher than the likelihood of transmitting said time through the internet over longer distances (plus with potential jitter and packet loss), e.g. a good collection of local servers is already the best you could do for the task at stake.Basically, my conjecture is that, should a single ntp server be shared, it won't necessarily be a good server for both hosts doing ICMP, and would not contribute to the clock between the two (and only two) machines being the most synchronised, compared to a good collection of local servers instead.What's the mathematical take on this? | NTP: synchronisation of time between two machines for ICMP timestamping | algorithm analysis;reference request;optimization;computer networks;synchronization | null |
_unix.259892 | I have a mount point for my nfs share:drwxrwxrwx 2 patryk patryk 4.0K Feb 4 16:23 nfs_shareafter I mount it I get$ sudo mount -t nfs 10.9.XXX.XXX:/root/src /home/patryk/nfs_share -o rw,user,vers=3 drwxr-xr-x 2 root root 4.0K Feb 4 17:06 nfs_shareI tried with /etc/fstab but I get the same results:10.9.XXX.XXX:/root/src /home/patryk/nfs_share nfs rw,user,vers=3 0 0The funny thing is that I cannot chown this after mounting:$ sudo chown patryk:patryk nfs_sharechown: changing ownership of `nfs_share': Operation not permittedMy server is configured as follows:// 10.9.XXX.XXX$ cat /etc/exports/root/src/napet_src/ *(rw,nohide,insecure,no_subtree_check,async)How do I define those permissions so that I can write to this folder? | Why does my nfs mount always changes to be owned by root after mounting? | linux;permissions;mount;nfs | null |
_cstheory.38583 | We know succinct version of many $P$-complete problems are $EXP$-complete. There are standard ways to define $EXP$-complete graph problems from succinct representations of these $P$ complete problems. What is the standard way to define $EXP$-complete problem from succinct representations of $P$ complete problems that do not come from graphs if there are any?For example what is the succinct version of the $P$-complete iterated mod problem $$\mbox{given }a, b_1, b_2,\dots, b_n\in\Bbb Z,\mbox{ is }((\dots((a \bmod b_1) \bmod b_2) \dots) \bmod b_n) = 0$$ and would that be $EXP$-complete and what is the succinct version of linear programming and would that be $EXP$-complete?Is there a higher version of $ETH$ (Exponential Time Hypothesis) that is applicable to the $EXP$ versus $NEXP$ problem for $NEXP$ complete problems that come from succinct version of $NP$ complete problems? | On succinct $EXP$ and $NEXP$ complete problems? | cc.complexity theory;nexp;succinct | The description of a succinct problem has very little to do with graphs, per se. Given a language $L \subseteq \Sigma^*$, we can define its succinct version as the set of Boolean circuits $C$ such that, if $C$ has $m$ inputs, then the string $s$ of length $2^m$ which is the concatenation of $C(0^m) C(0^{m-1} 1) C(0^{m-2} 10) \dotsb C(1^m)$ is in $L$.You can make such a version, though I haven't seen it studied before. The smallest known upper bound on the deterministic time complexity of $\mathsf{NEXP}$ is $\mathsf{EEXP} = \mathsf{DTIME}(2^{2^{n^{O(1)}}})$, so you could conjecture that a standard $\mathsf{NEXP}$-complete problem is not in significantly sub-doubly-exponential time. |
_unix.62617 | I have an iPod touch (2nd gen) and constant troubles with the clock setting itself one hour ahead of the actual time when I connect it to the computer. long story short, When I SSH into the device:date +%Z returns ARST which is correct (I'm in Buenos Aires, Argentina)date +%z the result is -0200, which is wrong and should be -0300My question is: How do I correct the offset of my timezone to the real value?I have found mentions of zic, zdump and references to a IANA Time Zone Database.I've tried to find allready compiled files in order to replace the whole zoneinfo folder, but the downloads I could find seem to use a different folder structure than the one on the iPod.edit: I am looking for a way to edit or update the timezone information, so that my timezone ARST is configured correctly. I have found several references to a compiler named zic, but need help in order to work out a solution.Both zic and zdump are present on the device, which leads me to believe it can be done via SSH and UNIX-commands. | Wrong timezone offset. How do I correct it? (help with zic timezone compiler) | timezone;ios | OK, I have stumbled upon the solution.Here's the link where I got the info from: http://brickybox.com/2009/10/18/os-x-fix-argentina-dst-october-2009The tzdata source has changed its url. It is now to be found at: ftp://ftp.iana.org/tz/ or http://www.iana.org/time-zones for more information.I downloaded the updated tzdata-file: in this casetzdata2012j.tar.gz and extracted it to a temporary folder.Then I SSHed into the iPod and copied the extracted files to theiPod. I chose User/Downloads and created a new (temporary) folder tzfix into which I copied everything.after that came the zic compile: zic southamerica, which took afew short secondsthen cp /usr/share/zoneinfo/America/Argentina/Buenos_Aires /usr/share/zoneinfo/America/Buenos_AiresI don't understand what this realy does. Copy, and overwrite, the file with itself?testing date +%z and date +%Z both return correct values, now: -0300 and ARTFinally! I can set the clock to the correct time without twitter refusing to login and Google authenticator throwing wrong auth codes. |
_unix.111223 | I am trying to compile iproute2-3-12-0 on Fedora 19, I have BerkeleyDB installed, the command ls -la /usr/lib/libdb* gives following results:-rwxr-xr-x 1 root root 1847852 May 16 2013 /usr/lib/libdb-5.3.solrwxrwxrwx 1 root root 12 Sep 18 20:15 /usr/lib/libdb-5.so -> libdb-5.3.solrwxrwxrwx 1 root root 18 Jan 4 12:57 /usr/lib/libdbus-1.so.3 -> libdbus-1.so.3.7.4-rwxr-xr-x 1 root root 317720 Nov 11 19:24 /usr/lib/libdbus-1.so.3.7.4I have newest version of Bison and Flex. I use kernel: 3.12.8-200.fc19.x86_64.I have ldb in /usr/lib and /usr/lib64. I did not find any LDFLAGS in Makefile though.I get an error:ssfilter.y: conflicts: 27 shift/reduce/usr/bin/ld: cannot find -ldbcollect2: error: ld returned 1 exit statusmake[1]: *** [arpd] Error 1make: *** [all] Error 2A closer look at the end of make log reveals: make[1]: Entering directory `/root/Traffic_Shaping/iproute2-3.12.0/bridge'gcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o bridge.o bridge.cgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o fdb.o fdb.cgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o monitor.o monitor.cgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o link.o link.cgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o mdb.o mdb.cgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o vlan.o vlan.cgcc bridge.o fdb.o monitor.o link.o mdb.o vlan.o ../lib/libnetlink.a ../lib/libutil.a ../lib/libnetlink.a ../lib/libutil.a -o bridgemake[1]: Leaving directory `/root/Traffic_Shaping/iproute2-3.12.0/bridge'make[1]: Entering directory `/root/Traffic_Shaping/iproute2-3.12.0/misc'gcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o ss.o ss.cbison ssfilter.y -o ssfilter.cssfilter.y: conflicts: 27 shift/reducegcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -c -o ssfilter.o ssfilter.cgcc ss.o ssfilter.o ../lib/libnetlink.a ../lib/libutil.a -o ssgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -o nstat nstat.c -lmgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -o ifstat ifstat.c ../lib/libnetlink.a ../lib/libutil.a -lmgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -o rtacct rtacct.c ../lib/libnetlink.a ../lib/libutil.a -lmgcc -Wall -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wold-style-definition -O2 -I../include -DRESOLVE_HOSTNAMES -DLIBDIR=\/usr/lib64\ -DCONFDIR=\/etc/iproute2\ -D_GNU_SOURCE -I/usr/include/libdb4 -o arpd arpd.c ../lib/libnetlink.a ../lib/libutil.a -ldb -lpthread/usr/bin/ld: cannot find -ldbcollect2: error: ld returned 1 exit statusmake[1]: *** [arpd] Error 1make[1]: Leaving directory `/root/Traffic_Shaping/iproute2-3.12.0/misc'make: *** [all] Error 2How can I get ld to find libdb? | /usr/bin/ld: cannot find -ldb while compiling iproute2 | make | As @bersh astutely points out in comments, you appear to be mixing libraries that have been compiled for different architectures (32-bit vs. 64-bit). On Fedora 32-bit libraries go in the /usr/lib, while 64-bit libraries go in /usr/lib64. You can convince yourself of this with a couple of examples.ExampleLet's pick on one of the share libraries for the DNS resolver, /usr/lib/libresolv-2.17.so. We can see that it's part of a 32-bit RPM.$ rpm -qf /usr/lib/libresolv-2.17.so glibc-2.17-20.fc19.i686You can also see that the library is a 32-bit ELF headered file.$ file /usr/lib/libresolv-2.17.so/usr/lib/libresolv-2.17.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), BuildID[sha1]=0xeee8b9e6cb49f8dd64059cc158ce2c55f8c6df5b, for GNU/Linux 2.6.32, not strippedSo you need to take care when compiling your software to make sure that you have the appropriate libraries in place (32 & 64) as well as the corresponding header files. On Fedora (and all Red Hat based distros) the packages are named like so:32-bit - libdb-5.3.21-11.fc19.i68664-bit - libdb-5.3.21-11.fc19.x86_6432-bit header files - libdb-devel-5.3.21-11.fc19.i68664-bit header files - libdb-devel-5.3.21-11.fc19.x86_64Your library, libdbIf you notice the library file is available in both architectures. Given the output of your kernel package being x64, I would assume you meant to install the 64-bit versions of the libraries. Also since you're attempting to compile you'll want to install the header files for your architecture too.$ rpm -qf /usr/lib/libdb-5.3.solibdb-5.3.21-11.fc19.i686$ rpm -qf /usr/lib64/libdb-5.3.solibdb-5.3.21-11.fc19.x86_64How do I know what package to install?If you see your compiles are calling for files that you do not have then you can use repoquery to find out what package(s) provide various files like so:$ repoquery -f '*/libdb-5.3.so'libdb-0:5.3.21-11.fc19.x86_64libdb-0:5.3.21-11.fc19.i686 |
_codereview.51190 | I am a Java developer who is taking Python for the first time.I'm sure this is not at all elegant since I am thinking more in C syntax.This module contains two implementations of the algorithm Sieve ofEratosthenes. # import ############################################################### import #import mathimport numpy# # fun1 ################################################################# fun1 #def SieveBasic(n): This function runs the basic sieve of eratosthenis algorithm (non-optimized) and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Example l = list(range(2, n+1)) isPrime = [True] * (n-1) for m,n in enumerate(l): currentCheck = n for i,x in enumerate(l[m+1:]): #take each number and compare with later numbers if x % currentCheck == 0: isPrime[i+m+1] = False primes = [0,1] for i,x in enumerate(isPrime): if x==True: primes.append(2+i) return primes This function generates a list of Js required for optimized sieve of eratosthenis algorithm. def generateJs(i,n): j=i**2 if j<n: yield j while j+i<=n: j+=i yield jdef SieveOptimized(n): This function runs the optimized sieve of eratosthenis algorithm and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Implementation l = list(range(2, n+1)) isPrime = [True] * (n+1) maxChk = math.sqrt(n) for q,r in enumerate(l): if r>maxChk: break for z in generateJs(r,n): isPrime[z] = False return numpy.where(isPrime)[0] # how cool is this | Sieve of Eratosthenes - Standard and Optimized implementation | python;primes;numpy;sieve of eratosthenes | StyleYou code does not follow PEP 8, the usual Python coding convention. You'll find tools such as pep8 (or its online version), pylint, pyflakes or pychecker to check for this and other points that might make your code cleaner, more idiomatic or more correct.I can't be bothered to break the too long lines nor to fix typos in the comments but I did perform the other changes so make pep8 happy.The result is :This module contains two implementations of the algorithm Sieve ofEratosthenes.import mathimport numpydef SieveBasic(n): This function runs the basic sieve of eratosthenis algorithm (non-optimized) and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Example l = list(range(2, n+1)) isPrime = [True] * (n-1) for m, n in enumerate(l): currentCheck = n # take each number and compare with later numbers for i, x in enumerate(l[m+1:]): if x % currentCheck == 0: isPrime[i+m+1] = False primes = [0, 1] for i, prime in enumerate(isPrime): if prime: primes.append(2+i) return primesdef generateJs(i, n): This function generates a list of Js required for optimized sieve of eratosthenis algorithm. j = i**2 if j < n: yield j while j+i <= n: j += i yield jdef SieveOptimized(n): This function runs the optimized sieve of eratosthenis algorithm and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Implementation l = list(range(2, n+1)) isPrime = [True] * (n+1) maxChk = math.sqrt(n) for q, r in enumerate(l): if r > maxChk: break for z in generateJs(r, n): isPrime[z] = False return numpy.where(isPrime)[0]def main(): Main function for n in range(1, 200): s1 = SieveBasic(n) s2 = SieveOptimized(n).tolist() if s1 != s2: print n, s1, s2if __name__ == __main__: main()Also, the name of generateJs is not so great as it doesn't tell us much.CorrectnessWhen you write two functions to perform the same thing, it can be a good idea to check that you get the same result on a big set of inputs. Here's what I wrote :def main(): Main function for n in range(1, 200): s1 = SieveBasic(n) s2 = SieveOptimized(n).tolist() if s1 != s2: print n, s1, s2This shows that the result is different when n is 4, 9, 25, 49, 121, 169. The optimised algorithm consider these values as prime even though they shouldn't (as they are perfect squares - of prime numbers which might have its importance when looking for the fix). Once this is fixed, you can use assert to ensure the results of the two functions are the same.In your case, the fix is simple : changing if j < n for if j <= n in generateJs :def generateJs(i, n): This function generates a list of Js required for optimized sieve of eratosthenis algorithm. j = i**2 if j <= n: yield j while j+i <= n: j += i yield 0 and 1 shouldn't be considered as primes. Again, fixing this is simple : initialising primes as being primes = [] in the basic function and setting isPrime[0] = isPrime[1] = False in the optimised function.Improving the code for the optimised versionIn generateJs, you can factorise code by writing :def generateJs(i, n): This function generates a list of Js required for optimized sieve of eratosthenis algorithm. j = i**2 while j <= n: yield j j += iInterestingly, this now looks a lot like range (or xrange depending on the version of Python you are using). We can get rid of it and write :Also, you don't need to use enumerate:def SieveOptimized(n): This function runs the optimized sieve of eratosthenis algorithm and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Implementation l = list(range(2, n+1)) isPrime = [True] * (n+1) isPrime[0] = isPrime[1] = False maxChk = math.sqrt(n) for r in l: if r > maxChk: break for z in xrange(r*r, n+1, r): isPrime[z] = False return numpy.where(isPrime)[0]Now, if you think about it, the list l is not that useful. The point is just to perform a re-indexing : what we are really doing is that we loop starting as 2 and we stop at n (included). We can write this directly : for r in xrange(2, n+1):. The same kind of comment also applies to the basic version of the code.def SieveOptimized(n): This function runs the optimized sieve of eratosthenis algorithm and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Implementation isPrime = [True] * (n+1) isPrime[0] = isPrime[1] = False maxChk = math.sqrt(n) for r in xrange(2, n+1): if r > maxChk: break for z in xrange(r*r, n+1, r): isPrime[z] = False return numpy.where(isPrime)[0]Because sqrt(n) + 1 <= n + 1 for any n >= 1, we can use maxChk in the call to range :def SieveOptimized(n): This function runs the optimized sieve of eratosthenis algorithm and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Implementation isPrime = [True] * (n+1) isPrime[0] = isPrime[1] = False for r in xrange(2, int(math.sqrt(n))+1): for z in xrange(r*r, n+1, r): isPrime[z] = False return numpy.where(isPrime)[0]Improving the code for the basic versionHere again, the list l is not that useful. The point is just to perform a re-indexing : what we are really doing is that we loop starting as 2 and we stop at n (included).You can put a few asserts in your code to double check that we don't really need the values from l as we can compute them.for m, n in enumerate(l): assert m == n-2 assert n == m+2 currentCheck = m+2 # take each number and compare with later numbers for i, x in enumerate(l[m+1:]): assert x - m - i == 3 assert x == 3 + m + i if x % currentCheck == 0: isPrime[i+m+1] = FalseMessing a bit with indices, you get :def SieveBasic(n): This function runs the basic sieve of eratosthenis algorithm (non-optimized) and returns a list of prime numbers. The algorithm is implemented as described @: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes#Example isPrime = [True] * (n-1) for x in range(2, n+1): # take each number and compare with later numbers for j in range(x+1, n+1): if j % x == 0: isPrime[j - 2] = False primes = [] for i, prime in enumerate(isPrime): if prime: primes.append(2+i) return primesAnd the end of the function can be re-written with a list comprehension :return [2+i for i, prime in enumerate(isPrime) if prime]This code is still not optimal but because it corresponds to the basic version, I guess there is no need to try to make it too much better.Additional noteIn my code, I've been using range and xrange as I am used to switch between Python 2 and Python 3. Depending on the version you are using, I suggest you have a look online to see more details about this. Just keep in mind that you should try to avoid creating a list when an iterator is enough for you. |
_unix.372849 | I'm running Manjaro and I'd like to add a line to my conky that shows how many updates are available for my system. This should ideally include updates to AUR packages as well.How can I do this? | Update information in conky | manjaro;pacman;conky | null |
_unix.287532 | I am running a low-end server with about 1GB RAM and I was trying to optimize my ram. But after increasing my swap file and emptying the buffers, my total ram, judging from free -m, have lowered from 1 GB to 660 MB - and I cannot run my applications anymore.Repro steps:I have used the commands echo 1 > /proc/sys/vm/drop_cachesecho 2 > /proc/sys/vm/drop_cachesecho 3 > /proc/sys/vm/drop_cachesThen, I have tried swapoff -a and swapon -aAnd then I increased my swap file, as described here:https://www.linux.com/learn/increase-your-available-swap-space-swap-fileI have no idea what lowered my total memory. Could anyone help, please? Thanks in advance!I am running Centos. | Total memory decreased when increasing swap file? | linux;centos;memory | null |
_softwareengineering.350328 | I have read plenty of questions on here, which appear to confuse the MVP/MVC Model with the Domain Model. In my mind the MVP Model calls the Service, which then calls a rich Domain Model i.e. the MVC/MVP model is a view model..I have seen a lot of code, which does this (this is the MVC Model):public class Model : IModel { private IService service; public PersonModel GetPerson(int id) { PersonDTO personDTO = service.GetPerson(int id); PersonModel personModel = Mapper.Map<PersonModel>(personDTO); return personModel; } }The model calls the service and the service calls a rich domain model i.e. a domain model where the classes contain both state and behaviour.Notice in the above code that there is a class called Model (which contains behaviour and calls the service) and a class called PersonModel. Should there be one class called PersonModel, which contains both state and behaviour if a rich Domain Model is by the business layer/domain layer? I am talking about best practice here. I know both approaches work. | Should an MVP/MVC Model contain behaviour? | c#;design patterns;domain driven design;asp.net mvc;mvp | null |
_cstheory.496 | Are there any NP-complete problems for which an algorithm is known that the expected running time is polynomial (for some sensible distribution over the instances)?If not, are there problems for which the existence of such an algorithm has been established?Or does the existence of such an algorithm imply the existence of a deterministic polynomial time algorithm? | Are there NP-complete problems with polynomial expected time solutions? | cc.complexity theory;np hardness | Basically, Max 2-CSP on $n$ variables and $n$ randomly chosen constraints can be solved in expected linear time (see the reference below for the exact formulation of the result). Note that Max 2-CSP remains NP-hard when the number of clauses equals the number of variables as it is NP-hard if the constraint graph of the instance has maximum degree at most 3 and you can add some dummy variables to decrease the average degree to 2.Reference:Alexander D. Scott and Gregory B. Sorkin. Solving sparse random instances of Max Cut and Max 2-CSP in linear expected time. Comb. Probab. Comput., 15(1-2):281-315, 2006. Preprint |
_unix.285659 | I am having small initramfs with static busybox into it. The sole purpose of this initramfs is to download/upload files to the HTTPS server.I have the proper certificate and credentials to do so. But when I execute the command:curl --cacert /tmp/filename.pem -T /tmp/file_to_upload -u user:pass https://Server_name/I greeted with an error:curl: (60) SSL certificate problem: unable to get local issuer certificateIf I use the same command with same certificate onto Ubuntu, then everything goes smooth.How am I suppose to resolve this issue ?EDIT: I do not want to use -k or --insecure switchNOTE: I do not have openssl or /etc/ssl directory into initramfs | SSL Certificate Problem: unable to get local issuer certificate | linux;curl;certificates | null |
_webapps.106357 | Not the same as this... if only life were so simple.I'm talking about the find box... i.e. say you want to search among your emails. Whenever I start typing in this box it goes crazy with predictive suggestions. And this seems to be a more aggressive type of predictive text than normal: it often adds characters on to the end of my text, in the middle of typing, and generally needs turning off for the sake of my sanity. | Turn off predictive text in Gmail search box | gmail | null |
_unix.385326 | I'm on a fresh install of Mint 18.2, and I'm adding a bunch of multimedia tools. I would like to have LMMS use VST files, as well; but this requires Wine as (curiously) VSTs are all portable executables.I'm not a big fan of Wine, I have Windows for that; but so be it. However, before installing it, I see that it requires the removal of systemd. I don't know why this is, and honestly I'm not a huge fan of systemd either; but it serves a pretty critical role. I'm not ready to ditch it just yet.It's probably safe; this is through the Software Manager. However it seems appropriate to at least do a little digging first.Does anyone know why Wine insists on removing systemd? Is it replaced with one of these other packages? Am I safe in doing this? | Why does Wine insist on removing systemd? | linux mint;systemd;wine;uninstall | Interesting results... seems it doesn't actually need to remove systemd after all. Not according to apt-get & apt, and not according to Synaptic. I got Wine on here with a number of other dependencies, and compiled LMMS's most recent release (everything, including VST, now working beautifully).I think my question just got boiled down to Why does Software Manager insanely suggest that I remove systemd? Which is kind of a different question, and makes this problem resolved.I'm honestly wondering if this is even a bug of some kind. |
_webapps.51212 | In Google Docs, you can click on the menu items and then choose an action. For example, you can click on the Format menu item, then click Align > Left to align text left. Is there a way to see what code runs when you do that? | In Google Docs, is it possible to view the code that runs when you click on an action? | google documents | Load the page in ChromeChoose Tools->Developer Tools to bring up the developer pane. (This is a whole tutorial in itself, but I will tell you how to do the specific thing you want).Right click on the body element in the Elements tab and choose Break -> on subtree modification. This is just a guess, but its a good place to start. I'm guessing Google will modify the DOM when you click things. Now when you move the mouse around, your breakpoint might fire prematurely for what you are looking at. Just hit F8 to continue.Hit the {} button at the bottom when it breaks to make the google optimized javascript code pretty print or display in a more readable form.Now, basically, its up to you to figure out what you want to do and where you want to go. You need to familiarize yourself with developer tools to really have the full power you need to do what you want probably, but that's a whole book. |
_codereview.48577 | I use the following code to find the lowest denominator Rational that is within a certain delta from a double.The rationale is that the I am pulling float numbers from a database and in many cases summing them. All of the numbers are calculated using simple maths such as +, -, * and /. No transcendental numbers are involved, nor is there any trigonometry. In most cases finding the nearest Rational to the float gets what the original figure is supposed to be rather than the results of adding mashed-up numbers together.// Create a good rational for the value within the delta supplied.public static Rational valueOf(double dbl, double delta) { // Primary checks. if (delta <= 0.0) { throw new IllegalArgumentException(Delta must be > 0.0); } // Remove the sign and integral part. long integral = (long) Math.floor(dbl); dbl -= integral; // The value we are looking for. final Rational d = new Rational((long) ((dbl) / delta), (long) (1 / delta)); // Min value = d - delta. final Rational min = new Rational((long) ((dbl - delta) / delta), (long) (1 / delta)); // Max value = d + delta. final Rational max = new Rational((long) ((dbl + delta) / delta), (long) (1 / delta)); // Start the fairey sequence. Rational l = ZERO; Rational h = ONE; Rational found = null; // Keep slicing until we arrive within the delta range. do { // Either between min and max -> found it. if (found == null && min.compareTo(l) <= 0 && max.compareTo(l) >= 0) { found = l; } if (found == null && min.compareTo(h) <= 0 && max.compareTo(h) >= 0) { found = h; } if (found == null) { // Make the mediant. Rational m = mediant(l, h); // Replace either l or h with mediant. if (m.compareTo(d) < 0) { l = m; } else { h = m; } } } while (found == null); // Bring back the sign and the integral. if (integral != 0) { found = found.plus(new Rational(integral, 1)); } // That's me. return found;}In a recent test using 0.000001 as my delta this code took 75% of the CPU. Dropping it to 0.0001 reduced that dramatically but it is still a significant bottleneck.Is there a quicker way of doing this?My implementation of Rational forces the numerator and denominator to be fully reduced at all times. I accept that that is likely the biggest overhead but as mentioned in Wikipedia the rationals must be fully reduced for the mediant function to work correctly.Here is the full class - borrowed from the mentioned site and enhanced:/** * *********************************************************************** * Immutable ADT for Rational numbers. * * Invariants * ----------- * - gcd(num, den) = 1, i.e, the rational number is in reduced form * - den >= 1, the denominator is always a positive integer * - 0/1 is the unique representation of 0 * * We employ some tricks to stave of overflow, but if you * need arbitrary precision rationals, use BigRational.java. * * Borrowed from http://introcs.cs.princeton.edu/java/92symbolic/Rational.java.html * because it has a mediant method. * ************************************************************************ */public class Rational extends Number implements Comparable<Rational> { public static final Rational ZERO = new Rational(0, 1); public static final Rational ONE = new Rational(1, 1); private long num; // the numerator private long den; // the denominator // create and initialize a new Rational object public Rational(long numerator, long denominator) { // deal with x/0 if (denominator == 0) { throw new IllegalArgumentException(Denominator cannot be 0.); } // reduce fraction long g = gcd(numerator, denominator); num = numerator / g; den = denominator / g; // only needed for negative numbers if (den < 0) { den = -den; num = -num; } } public Rational(Rational from) { num = from.num; den = from.den; } // return the numerator and denominator of (this) public long numerator() { return num; } public long denominator() { return den; } // return double precision representation of (this) public double toDouble() { return (double) num / den; } public BigDecimal toBigDecimal() { // Do it to just 4 decimal places. return toBigDecimal(4); } public BigDecimal toBigDecimal(int digits) { // Do it to n decimal places. return new BigDecimal(num).divide(new BigDecimal(den), digits, RoundingMode.DOWN).stripTrailingZeros(); } // return string representation of (this) @Override public String toString() { if (den == 1) { return num + ; } else { return num + / + den; } } public int compareTo(Rational b) { // return { -1, 0, +1 } if a < b, a = b, or a > b Rational a = this; long lhs = a.num * b.den; long rhs = a.den * b.num; if (lhs < rhs) { return -1; } if (lhs > rhs) { return +1; } return 0; } @Override public boolean equals(Object y) { // is this Rational object equal to y? if (y == null) { return false; } if (y.getClass() != this.getClass()) { return false; } Rational b = (Rational) y; return compareTo(b) == 0; } @Override public int hashCode() { int hash = 5; hash = 97 * hash + (int) (this.num ^ (this.num >>> 32)); hash = 97 * hash + (int) (this.den ^ (this.den >>> 32)); return hash; } // create and return a new rational (r.num + s.num) / (r.den + s.den) public static Rational mediant(Rational r, Rational s) { return new Rational(r.num + s.num, r.den + s.den); } // return gcd(|m|, |n|) private static long gcd(long m, long n) { if (m < 0) { m = -m; } if (n < 0) { n = -n; } if (0 == n) { return m; } else { return gcd(n, m % n); } } // return lcm(|m|, |n|) private static long lcm(long m, long n) { if (m < 0) { m = -m; } if (n < 0) { n = -n; } return m * (n / gcd(m, n)); // parentheses important to avoid overflow } // return a * b, staving off overflow as much as possible by cross-cancellation public Rational times(Rational b) { Rational a = this; // reduce p1/q2 and p2/q1, then multiply, where a = p1/q1 and b = p2/q2 Rational c = new Rational(a.num, b.den); Rational d = new Rational(b.num, a.den); return new Rational(c.num * d.num, c.den * d.den); } // return a + b, staving off overflow public Rational plus(Rational b) { Rational a = this; // special cases if (a.compareTo(ZERO) == 0) { return b; } if (b.compareTo(ZERO) == 0) { return a; } // Find gcd of numerators and denominators long f = gcd(a.num, b.num); long g = gcd(a.den, b.den); // add cross-product terms for numerator Rational s = new Rational((a.num / f) * (b.den / g) + (b.num / f) * (a.den / g), lcm(a.den, b.den)); // multiply back in s.num *= f; return s; } // return -a public Rational negate() { return new Rational(-num, den); } // return a - b public Rational minus(Rational b) { return plus(b.negate()); } public Rational reciprocal() { return new Rational(den, num); } // return a / b public Rational divides(Rational b) { Rational a = this; return a.times(b.reciprocal()); } // Default delta to apply. public static final double DELTA = 0.0001; public static Rational valueOf(double dbl) { return valueOf(dbl, DELTA); } public static Rational valueOf(BigDecimal dbl) { return valueOf(dbl.doubleValue(), DELTA); } public static Rational valueOf(double dbl, int digits) { return valueOf(dbl, Math.pow(10, -digits)); } public static Rational valueOf(BigDecimal dbl, int digits) { return valueOf(dbl.doubleValue(), Math.pow(10, -digits)); } // Create a good rational for the value within the delta supplied. public static Rational valueOf(double dbl, double delta) { // Primary checks. if (delta <= 0.0) { throw new IllegalArgumentException(Delta must be > 0.0); } // Remove the sign and integral part. long integral = (long) Math.floor(dbl); dbl -= integral; // The value we are looking for. final Rational d = new Rational((long) ((dbl) / delta), (long) (1 / delta)); // Min value = d - delta. final Rational min = new Rational((long) ((dbl - delta) / delta), (long) (1 / delta)); // Max value = d + delta. final Rational max = new Rational((long) ((dbl + delta) / delta), (long) (1 / delta)); // Start the fairey sequence. Rational l = ZERO; Rational h = ONE; Rational found = null; // Keep slicing until we arrive within the delta range. do { // Either between min and max -> found it. if (found == null && min.compareTo(l) <= 0 && max.compareTo(l) >= 0) { found = l; } if (found == null && min.compareTo(h) <= 0 && max.compareTo(h) >= 0) { found = h; } if (found == null) { // Make the mediant. Rational m = mediant(l, h); // Replace either l or h with mediant. if (m.compareTo(d) < 0) { l = m; } else { h = m; } } } while (found == null); // Bring back the sign and the integral. if (integral != 0) { found = found.plus(new Rational(integral, 1)); } // That's me. return found; } private static void print(String name, Rational r) { System.out.println(name + = + r + ( + r.toDouble() + )); } private enum TestNumber { OneTenth(0.100000001490116119384765625), Pi(Math.PI), E(Math.E), OneThird(0.3333333333333), MinusOneThird(-0.3333333333333), ABig1(1.87344227533222758141533568138280569154340745619495504034120344898213260187710089517712780269958755185722145694193999220); final double v; TestNumber(double v) { this.v = v; } } private static void test2() { for (TestNumber n : TestNumber.values()) { print(n.name(), Rational.valueOf(n.v)); } } private static void test1() { Rational x, y, z; // 1/2 + 1/3 = 5/6 x = new Rational(1, 2); y = new Rational(1, 3); z = x.plus(y); System.out.println(z); // 8/9 + 1/9 = 1 x = new Rational(8, 9); y = new Rational(1, 9); z = x.plus(y); System.out.println(z); // 1/200000000 + 1/300000000 = 1/120000000 x = new Rational(1, 200000000); y = new Rational(1, 300000000); z = x.plus(y); System.out.println(z); // 1073741789/20 + 1073741789/30 = 1073741789/12 x = new Rational(1073741789, 20); y = new Rational(1073741789, 30); z = x.plus(y); System.out.println(z); // 4/17 * 17/4 = 1 x = new Rational(4, 17); y = new Rational(17, 4); z = x.times(y); System.out.println(z); // 3037141/3247033 * 3037547/3246599 = 841/961 x = new Rational(3037141, 3247033); y = new Rational(3037547, 3246599); z = x.times(y); System.out.println(z); // 1/6 - -4/-8 = -1/3 x = new Rational(1, 6); y = new Rational(-4, -8); z = x.minus(y); System.out.println(z); } // test client public static void main(String[] args) { //test1(); test2(); } // Implement Number. @Override public int intValue() { return (int) doubleValue(); } @Override public long longValue() { return (long) doubleValue(); } @Override public float floatValue() { return (float) doubleValue(); } @Override public double doubleValue() { return toDouble(); }} | Finding the nearest Rational to a double - is there a more efficient mechanism? | java;performance;mathematics;floating point;rational numbers | NamingYou definitely need to improve your variable names. Here are some suggested changes with the reasoningdbl -> value (does not change much but repeating the type of a variable as its name does not add value (no pun intended))delta -> epsilon (the name epsilon is much more common if you want to define a closeness boundary)ABig1 (I have no idea for this but it does not help much. This one is as big as every other double. More importantly, double is not capable of catching all the digits you give there.)Now to the AlgorithmDoing a binary search is quite efficient in comparison with other methods but it still has an \$\mathbb{O}(\log n)\$ runtime. Lets think about the representation of doubles in IEEE 754 format.A double consists of 64 bits of which the first is the sign bit, the next eleven bits store a biased exponent and the remaining 52 bits store the mantissa.The idea is as follows: We use the mantissa as numerator and the exponent as denominator (if it is negative) or as factor for the numerator when it is positive:public static long getMantissaBits(double value) { // select the 52 lower bits which make up the mantissa return Double.doubleToLongBits(value) & 0xFFFFFFFFFFFFFL;}public static long getMantissa(double value) { // add the hidden 1 of normalized doubles return (1L << 52) + getMantissaBits(value);}public static long getExponent(double value) { int exponentOffset = 52; long lowest11Bits = 0x7FFL; long shiftedBiasedExponent = Double.doubleToLongBits(value) & (lowest11Bits << exponentOffset); long biasedExponent = shiftedBiasedExponent >> exponentOffset; // remove the bias return biasedExponent - 1023;}public static Rational valueOf(double value) { long mantissa = getMantissa(value); long exponent = getExponent(value) - 52; int numberOfTrailingZeros = Long.numberOfTrailingZeros(mantissa); mantissa >>= numberOfTrailingZeros; exponent += numberOfTrailingZeros; // apply the sign to the numerator long numerator = (long) Math.signum(value) * mantissa; if(exponent < 0) return new Rational(numerator, 1L << -exponent); else return new Rational(numerator << exponent, 1);}As you can see, we don't need the delta anymore as we are as close as possible. Of course there are some caveats. If the double is denormalized getMantissa will give wrong results (but you could detect that and return the correct result). Another problem stems from too big/small exponents where the shift of 1L is greater or equal to 64bits (and thus the result is 0). However, if you think about this problem you will find that these exponents only occur when the number is too big/small to fit into your Rational class anyways.As noticed in the comments this will return the exact result which is not wanted. I tried to come up with a solution to get the rounding correct but I failed so I shamelessly translated python's solution to Java:public Rational limitDenominator(long maximumDenominator) { if (maximumDenominator < 1) { throw new IllegalArgumentException(Denominator cannot be less than 1.); } if(this.den <= maximumDenominator) // we can't get closer than the current value return this; long p0 = 0; long q0 = 1; long p1 = 1; long q1 = 0; long n = this.num; long d = this.den; while(true) { long a = n / d; long q2 = q0 + a * q1; if(q2 > maximumDenominator) break; long oldP0 = p0; p0 = p1; q0 = q1; p1 = oldP0 + a * p1; q1 = q2; long oldN = n; n = d; d = oldN - a * d; } long k = (maximumDenominator - q0) / q1; Rational bound1 = new Rational(p0 + k * p1, q0 + k * q1); Rational bound2 = new Rational(p1, q1); if(bound2.minus(this).abs().compareTo(bound1.minus(this).abs()) <= 0){ return bound2; } else { return bound1; }}I cannot say much about the exact details in play here because I need to first understand them myself but the idea is that you find the closest fraction with a denominator less or equal than the given maximum. To do so you find an upper and lower closest and choose the one that is closer.Algorithm explanation (some math?)I finally found some time to have a closer look at the algorithm. Let us at first note that the problem we are trying to solve is the Diophantine approximation. As noted in the linked article we can use convergents or semiconvergents of the continued fraction representation of the given number to approximate it. Its best approximations for the second definition are\$3, \frac{8}{3}, \frac{11}{4}, \frac{19}{7}, \frac{87}{32}, \ldots\,\$ ,while, for the first definition, they are\$3, \frac{5}{2}, \frac{8}{3}, \frac{11}{4}, \frac{19}{7}, \frac{30}{11}, \frac{49}{18}, \frac{68}{25}, \frac{87}{32}, \frac{106}{39}, \ldots \$As you can see the semiconvergents have a much tighter spacing so we should use them (because the target number can lie in a hole between the convergents that is filled by semiconvergents). If you take a look into the linked python documentation you will find that it also uses semiconvergents.The Wikipedia article also tells us thatEven-numbered convergents are smaller than the original number, while odd-numbered ones are bigger.So we corner the number from both sides. The same section also tells us thatIf successive convergents are found, with numerators h1, h2, and denominators k1, k2, then the relevant recursive relation is:\$h_n = a_nh_{n 1} + h_{n 2}\$, \$k_n = a_nk_{n 1} + k_{n 2}\$.The successive convergents are given by the formula\$\frac{h_n}{k_n} = \frac{a_nh_{n 1} + h_{n 2}}{a_nk_{n 1} + k_{n 2}}\$ Looking closer in the code you will note that \$h_{n-1}\$ corresponds to p1 and \$h_{n-2}\$ to p0 (similarly for the \$k\$s and qs), \$a_n\$ is a in the code.So the loop calculates convergents that approximate better and better and always increase in their denominator. This gives a convenient break condition when the next convergent would have a denominator greater than the allowed maximum. So we hit the end of the road and stop there.Now we found the nearest convergents but there might be a closer semiconvergent.We know that the convergent with the denominator q0 + a * q1 is too big and the one with the denominator q1 is smaller (or equal) so we only have to look if there are semiconvergents between the two that are closer. Any semiconvergent with bigger denominator will be a better approximation [Citation Needed] (say closer). So we need to find the semiconvergent with the biggest denominator that is still smaller than the maximum. The following lines do this exactly:long k = (maximumDenominator - q0) / q1;denominator = q0 + k * q1;numerator = p0 + k * p1;The python code does compare the closeness of this with \$\frac{p1}{q1}\$ but I am not sure if this is really necessary. This concludes my discussion of the algorithm. The next step would be to explain why the semi-/convergents work that way but I will leave that out just believe Wikipedia/some houndred years of mathematics. |
_cstheory.9639 | When interpreting keys as natural numbers we can use the following formula.\begin{equation}h(k) = \lfloor m (kA\bmod{1}) \rfloor\end{equation}What I am having trouble understanding is how we choose the value of A where:\begin{equation}0 < A < 1\end{equation}According to Knuth an optimal value is:\begin{equation}A \thickapprox (\sqrt{5} - 1) / 2 = 0.6180339887...\end{equation}So my question is how did Knuth come to this and how could I calculate an optimum value for my specific data? | How did Knuth derive A? | hash function | See exercise 9 of section 6.4 of The Art of Computer Programming.Any irrational $A$ would work, because $\{kA\}$ breaks up a largest gap of $\{A\}, \{2A\}, \ldots, \{(k-1)A\}$ (I use the notation $\{x\}$ for $x\mod 1$).But if $A = \phi^{-1}$ or $A = \phi^{-2}$, it has a special property: these are the only values for which neither of the two newly created gaps is more than twice as long as the other. |
_webmaster.88659 | The CloudFlare free tier service offers unlimited bandwith while other CDNs charge starting at about $.10/gb.CloudFlare does not have bandwidth limits. As long as the domains being added comply with our Terms of Service, CloudFlare does not impose any limits.There's very little said on their website in the way of restrictions. The offer appears to be legit. Is there a catch? | How can CloudFlare offer a free CDN with unlimited bandwidth? | cdn;cloudflare | It does not offer unlimited bandwidth. Unlimited bandwidth does not exist and is an impossibility. It is only a marketing term that states the limit is higher than most users require. There is always a catch somewhere when anything is unlimited. With something that says unlimited, you are worse off than a service that has a specific known limit (or pay per use).Read:https://www.cloudflare.com/terms/SECTION 10: LIMITATION ON NON-HTML CACHINGYou further agree that if, at CloudFlares sole discretion, you are deemed to have violated this section, or if CloudFlare, in its sole discretion, deems it necessary due to excessive burden or potential adverse impact on CloudFlares systems, potential adverse impact on other users, server processing power, server memory, abuse controls, or other reasons, CloudFlare may suspend or terminate your account without notice to or liability to you.So if you cost them too much, they can stop providing you a service without notice.Plus, what guarantee do you have that their service will not have large outages when you pay nothing? |
_webmaster.102904 | I recently overhauled a website for a client. Their preference was to switch from a www url to a non-www url as part of the redesign.Once we launched the site, it took a major plummet in its position on SERPs. Now, I know this could be for several reasons outside of the projected issue I presented. But has anyone else ever made a similar switch and saw their website take a decline on SERPs?Thanks in advance for your comments. | www vs non-www in seo | seo;url;serps;no www;negative seo | null |
_unix.75822 | Is it possible with the normal Unix permissions system to enable group members to have the right to change a file's permissions?In other words, suppose we have a file awesome.rb which is owned by user brandon who belongs to group developers, and user darryl (also a member of developers) needs to make a particular file executable. Is there any way to make this possible? | Is there any way for group owners to have full rights of a file or directory owner? | permissions | null |
_webmaster.59663 | In a .htaccess file in the subfolder /preview (not in document root), I have this rule:RewriteRule !^public/ /preview/forbidden.php [L,R]It redirects all /preview/something requests that are not in /preview/public/ to the fobidden message.However, I don't like the fact that the directory name preview is in the .htaccess file. I would like to copy the entire website to another folder or server simply by copying the file without having to change the .htaccess file.So, is it possible to achieve the effect of that rule in some other way? | Is there a current directory variable in .htaccess RewriteRule? | htaccess | null |
_unix.44466 | I want to create an image of some directory tree that is writable directly to an USB drive, just like the images of many linux distributions. For example, with openSUSE you can download an ISO image that is writable directly the USB using dd.root@computer# dd if=openSUSE.iso of=/dev/sdbI tried to create images using mkisofs, but when I ran the dd command above with that image didn't get a partition table, which made windows not recognize the format of the drive and linux didn't present /dev/sdb1. I also tried to create an empty file, and then create a filesystem in that file using mkfs.vfat.There seem to be a lot of tutorials on the web on how to write images to USB drives as well as dumping an usb drive to file, but I haven't found anything on creating an image with a partition table. The problem I'm trying to solve with this is to distribute a preformated USB stick, so there is no need for the stick to be bootable, and I would also like to make this process scriptable. | How do I create a USB image with a partition table? | linux;usb;dd | You can use kpartx for this. Here is a way to create a complete disk image.# create empty imagedd if=/dev/zero of=myvm.img bs=1G count=0 seek=100# partition the image file with fdisk/gdisk or any other toolgdisk myvm.img# make the partitions in the image file available as individual deviceskpartx -a myvm.img# work with the partitions./someprogram /dev/mapper/loop0p1# close the partitionskpartx -d myvm.img |
_unix.200667 | I'm trying to verify the all packages except for a pre-defined list of packages that I know are going to fail for known reasons. This script is going to be run on all Solaris systems within our environment to confirm a system baseline.I'm open to any technique which will work here, and is possible to put on a single line (Limitation of the tool I'm using for validation).My initial thought was that I'd take a pkg list, run it through AWK to grab the package name, filter out the packages I don't want, and then run an individual pkg verify on each package remaining individually.This is the code I've created below:pkg list | awk 'BEGIN {c=0} $1 == exclude1 || $1 == exclude2 { next } { system(pkg verify $1); c++ } END { if (c == 0) print none }'The problem I'm running into is I'm not seeing any output even though I know there should be a few things that fail the pkg verify. I thought the system( would capture the output, but I'm relatively new to AWK, and it could be I'm misunderstanding something. | Solaris: PKG - Script To Verify All Packages Except for a Few | awk;scripting;solaris | On Solaris 11.3, you will want to use nawk rather than awk. nawk (new awk) is installed by default and should be in your path (/usr/bin/nawk).The system() function in awk (any implementation) does not return the output of the command, but its exit code. This is ok though, as you probably don't want the actual output from pkg anyway. The pkg command will exit with a non-zero exit code if something went wrong (see the pkg manual).The following pipeline will take the pkg list output and skip the first line (which is a header), and all lines matching the excluded package names. For the remaining lines of input, it will execute pkg verify through system() with the package name.If pkg verify returns a non-zero exit status, it will increment a counter. At the end of processing, the counter will be displayed, showing how many verification errors occurred.pkg list | nawk 'NR > 1 && !/exclude1|exclude2/ { if (system(pkg verify $1)) { e++ } } END { printf(%d errors\n, e) }'This is rather inefficient though. It's quicker to get a list of the packages and verify them in one go:pkg list | nawk 'NR > 1 && !/exclude1|exclude2/ { print $1 }' | ( xargs pkg verify ) || echo there were errorsIf you have a list of packages to ignore in a file:pkg list | /usr/xpg4/bin/grep -F -v -f excluded.txt | nawk 'NR > 1 { print }' | ( xargs pkg verify ) || echo there were errors |
_codereview.40256 | I wrote the following program to calculate n digits of Pi (where n could be anything, like 10M) in order to benchmark the CPU and it works perfectly (without OpenMP):/*** Simple PI Benchmarking tool* Author: Suyash Srijan* Email: [email protected]** This program calculates how much time your CPU takes to compute n digits of PI using Chudnovsky Algorithm* (http://en.wikipedia.org/wiki/Chudnovsky_algorithm) and uses the GNU Multiple Precision Arithmetic Library* for computation.** For verification of digits, you can download the digits from here: http://piworld.calico.jp/estart.html** It's a single threaded program but you can compile it with OpenMP support to enable parallelization.* WARN: OpenMP support is experimental** Compile using gcc : gcc -O2 -Wall -o pibench pibench.c -lgmp -lssl -lcrypto* Compile using gcc (with OpenMP): gcc -O2 -Wall -o pibench pibench.c -lgmp -lssl -lcrypto -fopenmp**/#include <gmp.h>#include <stdio.h>#include <stdlib.h>#include <string.h>#include <time.h>#include <sys/resource.h>#include <sys/utsname.h>#include <openssl/md5.h>/* Import OpenMP header if compiling with -fopenmp */#if defined(_OPENMP)#include <omp.h>#endif/* You can't compile this on Windows */#ifdef _WIN32#error >>> Fatal: It is not possible to compile this program on Windows <<<#endif/* Build timestamp */#define build_time __TIME__#define build_date __DATE__/* Calculate log to the base 2 using GCC's bit scan reverse intrinsic */__inline__ unsigned int clc_log2(const unsigned int num) { return ((num <= 1) ? 0 : 32 - (__builtin_clz(num - 1)));}/* Calculate MD5 checksum for verification */__inline__ char *clc_md5(const char *string) { MD5_CTX context; unsigned char digest[16]; char *checksum = (char*)malloc(33); int i; MD5_Init(&context); MD5_Update(&context, string, strlen(string)); MD5_Final(digest, &context); for (i = 0; i < 16; ++i) { snprintf(&(checksum[i*2]), 3, %02x, (unsigned int)digest[i]); } return checksum;}/* Calculate pi digits main function */__inline__ char *clc_pi(unsigned long dgts){ /* Variable declaration */ struct timespec start, end; unsigned long int i, ti, constant1, constant2, constant3; unsigned long iters = (dgts / 15) + 1; unsigned long precision; double bits; char *oput; mpz_t v1, v2, v3, v4, v5; mpf_t V1, V2, V3, total, tmp, res; mp_exp_t exponent; /* Initialize */ constant1 = 545140134; constant2 = 13591409; constant3 = 640320; bits = clc_log2(10); precision = (dgts * bits) + 1; mpf_set_default_prec(precision); mpz_inits(v1, v2, v3, v4, v5, NULL); mpf_inits(res, tmp, V1, V2, V3, total, NULL); mpf_set_ui(total, 0); mpf_sqrt_ui(tmp, 10005); mpf_mul_ui(tmp, tmp, 426880); /* Get high-res time */ clock_gettime(CLOCK_REALTIME, &start); /* Print total iterations and start computation of digits */ printf(Total iterations: %lu\n\n, iters - 1);#if defined(_OPENMP)#pragma omp parallel for private(v1, v2, v3, v4, v5, V1, V2, V3, ti) reduction(+:total)#endif /* Iterate and compute value using Chudnovsky Algorithm */ for (i = 0x0; i < iters; i++) { ti = i * 3; mpz_fac_ui(v1, 6 * i); mpz_set_ui(v2, constant1); mpz_mul_ui(v2, v2, i); mpz_add_ui(v2, v2, constant2); mpz_fac_ui(v3, ti); mpz_fac_ui(v4, i); mpz_pow_ui(v4, v4, 3); mpz_ui_pow_ui(v5, constant3, ti); if ((1 & ti) == 1) { mpz_neg(v5, v5); } mpz_mul(v1, v1, v2); mpf_set_z(V1, v1); mpz_mul(v3, v3, v4); mpz_mul(v3, v3, v5); mpf_set_z(V2, v3); mpf_div(V3, V1, V2); mpf_add(total, total, V3); /* Print interations executed if debugging (I don't like spamming stdout unnecesarily) */ #ifdef DEBUG printf(Iteration %lu of %lu successfully executed\n, i, iters - 1); #endif } /* Some final computations */ mpf_ui_div(total, 1, total); mpf_mul(total, total, tmp); /* Get high-res time */ clock_gettime(CLOCK_REALTIME, &end); /* Calculate and print time taken */ double time_taken = (double)(end.tv_sec - start.tv_sec) + (double)(end.tv_nsec - start.tv_nsec) / 1E9; printf(Done!\n\nTime taken (seconds): %lf\n, time_taken); /* Store output */ oput = mpf_get_str(NULL, &exponent, 10, dgts, total); /* Free up space consumed by variables */ mpz_clears(v1, v2, v3, v4, v5, NULL); mpf_clears(res, tmp, V1, V2, V3, total, NULL); /* Return value */ return oput;}/* Entry point of program */int main(int argc, char *argv[]) { /* Set number of threads if compiling with -fopenmp */#if defined(_OPENMP) omp_set_num_threads(8);#endif /* Variable declaration and initialization */ unsigned long how_many_digits = 10000; unsigned int base = 10; char *tmp_ptr; int pd = 0; int dd = 0; /* Try setting process priority to highest */ int returnvalue = setpriority(PRIO_PROCESS, (id_t)0, -20); if (returnvalue == -1) { printf(WARN: Unable to max out priority. Did you not run this app as root?\n); } /* Parse command line */ if (argc == 3 && ((strcmp(argv[2], --printdigits) == 0) || (strcmp(argv[2], --nodigits) == 0) || (strcmp(argv[2], --dumpdigits) == 0))) { how_many_digits = strtol(argv[1], &tmp_ptr, base); pd = (strcmp(argv[2], --printdigits) == 0) ? 1 : 0; dd = (strcmp(argv[2], --dumpdigits) == 0) ? 1 : 0; } /* Invalid command line parameters */ else { fprintf(stderr, Error: Invalid command-line arguments!\nUsage: pibench [digits] [parameter]\nParameter:\n--printdigits : Prints all digits on console\n--nodigits : Suppresses printing of digits on console\n--dumpdigits : Saves all the digits to a text file\n\nUsage example: pibench 50000 --printdigits\n); exit(1); } /* Print introductory text */ struct utsname uname_ptr; uname(&uname_ptr); printf(\n---------------------------------------------------------------); printf(\nPi Bench v1.0 beta (%s)\nBuild date: %s %s\n, uname_ptr.machine, build_date, build_time); printf(---------------------------------------------------------------\n\n); /* Check if digits isnt zero or below */ if (how_many_digits < 1) { fprintf(stderr, Error: Digit cannot be lower than 1\n); exit(1); } /* Calculate digits of pi */ printf(Computing %lu digits of PI...\n, how_many_digits); char *digits_of_pi = clc_pi(how_many_digits); /* Print the digits if user specified the --printdigits flag */ if (pd == 1) { printf(Here are the digits:\n\n%.1s.%s\n, digits_of_pi, digits_of_pi + 1); } /* Save digits to text file if user specified the --dumpdigits flag */ if (dd == 1) { FILE *file; if ((file = fopen(pidigits.txt, w)) == NULL) { fprintf(stderr, Error while opening file\n); exit(-1); } else { fprintf(file, %.1s.%s\n, digits_of_pi, digits_of_pi + 1); fclose(file); } } /* Print MD5 checksum */ char *md5 = clc_md5(digits_of_pi); printf(MD5 checksum (for verification): %s\n, md5); /* Free the memory */ free(digits_of_pi); /* Time to go! */ printf(Goodbye!\n); return 0;}The source code is available here.Any suggestions or tips will be greatly appreciated! | Pi Benchmarking in C | performance;c;multithreading;openmp;openssl | Just a few notes on some things I didn't see mentioned.Compilation:I originally couldn't compile the program with the command in the comments./tmp/cc2H2h0a.o: In function 'clc_pi': test.c:(.text+0x148): undefined reference to 'clock_gettime' test.c:(.text+0x2f0): undefined referenceto 'clock_gettime' collect2: ld returned 1 exit statusAdd -lrt to the list of libraries you link to.// Compile using gcc : gcc -O2 -Wall -o pibench pibench.c -lgmp -lssl -lcrypto -lrtSyntax:The DEBUG stuff is distracting. Maybe it is temporary, but if youwanted to leave it in, I suggest extracting it:#include <stdarg.h>static inline void debug(const char *format, ...){#ifdef DEBUG va_list ap; va_start(ap, format); vfprintf(stdout, format, ap); va_end(ap);#endif}and calling it:debug(Iteration %lu of %lu successfully executed\n, i, iters - 1);If DEBUG is undefined, the inline debug function will be empty and willbe excluded during compilation - it disappears.Put the else on its own line.if (dd == 1) { FILE *file; if ((file = fopen(pidigits.txt, w)) == NULL) { fprintf(stderr, Error while opening file\n); exit(-1); } else { fprintf(file, %.1s.%s\n, digits_of_pi, digits_of_pi + 1); fclose(file); }}When you use it this way, it is very easy to overlook it. I almost glanced over it when examining your code. There isn't really a reason to put it on it's own line, except to save LOC, which you could do better in other places.if (dd == 1) { FILE *file; if ((file = fopen(pidigits.txt, w)) == NULL) { fprintf(stderr, Error while opening file\n); exit(-1); } else { fprintf(file, %.1s.%s\n, digits_of_pi, digits_of_pi + 1); fclose(file); }}Put all statements on separate lines. From Code Complete, 2nd Edition, pg. 759:With statements on their own lines, the code reads from top to bottom, instead of top to bottom and left to right. When youre looking for a specific line of code, your eye should be able to follow the left margin of the code. It shouldnt have to dip into each and every line just because a single line might contain two statements.I would use more comments, especially around your OpenMP #pragmas and function calls.Define i in your for loops.(C99)for (int i = 0x0; i < iters; i++)Miscellaneous:fopen(), a widely-used file I/O functions that you are using, got a facelift in C11. It now supports a new exclusive create-and-open mode (...x). The new mode behaves like O_CREAT|O_EXCL in POSIX and is commonly used for lock files. The ...x family of modes includes the following options:wx create text file for writing with exclusive access.wbx create binary file for writing with exclusive access.w+x create text file for update with exclusive access.w+bx or wb+x create binary file for update with exclusive access.Opening a file with any of the exclusive modes above fails if the file already exists or cannot be created. Otherwise, the file is created with exclusive (non-shared) access. Additionally, a safer version of fopen() called fopen_s() is also available. That is what I would use in your code if I were you, but I'll leave that up for you to decide and change.CLOCK_REALTIME represents the machine's best-guess as to the current wall-clock, time-of-day time. This means that CLOCK_REALTIME can jump forwards and backwards as the system time-of-day clock is changed, including by NTP.CLOCK_MONOTONIC represents the absolute elapsed wall-clock time since some arbitrary, fixed point in the past. It isn't affected by changes in the system time-of-day clock.If you want to compute the elapsed time between two events observed on the one machine without an intervening reboot, CLOCK_MONOTONIC is the best option. |
_unix.42533 | I'm trying to do...ssh -av -e [email protected]:/var/www/domain.com /Users/user/workspace/domainBut it's outputting this (I presume because of the period character):OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011Bad escape character '[email protected]:/var/www/domain.com'.I have tried ssh -av -e [email protected]:/var/www/domain\.com /Users/user/workspace/domainAnd various combinations with quotes. What is the right syntax? | How do I escape a dot character for an rsync command? | bash;ssh;rsync | You're doing this:ssh -av -e [email protected]:/var/www/domain.com /Users/user/workspace/domainYou're not executing rsync at all and ssh is telling you that [email protected]:/var/www/domain.com is not a valid escape character.Read ssh(1):-e escape_char Sets the escape character for sessions with a pty (default: `~'). The escape character is only recognized at the beginning of a line. The escape character followed by a dot (`.') closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to ``none'' disables any escapes and makes the session fully transparent.I think what you meant to run is this:rsync -e ssh -av [email protected]:/var/www/domain.com /Users/user/workspace/domain |
_unix.89370 | I have to change the date after an SSH login into machine, but I am not able to change it. Here is the script I have written:#!/bin/bashENVIRONMENT_LIST=environment_ip_listUSERNAME=rootdeclare ENVIRONMENT_ARRAYmdate=$#readIp(){while read IP do ENVIRONMENT_ARRAY[$env_count]=$IP let env_count++ done < $ENVIRONMENT_LIST}change_date(){ for ((i = 0; i < env_count; i++)) do ssh -t -t -o StrictHostKeyChecking=no $USERNAME@${ENVIRONMENT_ARRAY[i]} 'date -s $1 $2 $3 $4' done}readIpchange_dateIn a terminal, I get this output:~/Desktop/changedate_script $ ./change.sh 04 SEP 2012 10:36:[email protected]'s password: bash: date -s : command not foundConnection to 192.168.12.160 closed. | Change date after SSH login in shell script | bash;shell script | There are too many quotes in ssh command.Use the following one: ssh -t -t -o StrictHostKeyChecking=no $USERNAME@${ENVIRONMENT_ARRAY[i]} date -s '$1 $2 $3 $4'also change tsring with change_date function call to:change_date $1 $2 $3 $4 |
_webmaster.2827 | Could you answer to at least one of these questions:Is GoDaddy SSL standard certificate compatible with all browsers(Chrome and Safari on iPhone, or Android browsers included)?http://www.godaddy.com/ssl/ssl-certificates.aspx?ci=8979Is it running on Apache servers? | Is GoDaddy SSL standard certificate compatible with all browsers? | godaddy;security certificate | YesYes:)I use it for all of my clients' websites which are hosted on an Apache powered web server. I obviously wouldn't do that if it wasn't 100% compatible. |
_scicomp.21092 | I am a student doing physics hons and have had very little experience in programming. This semester we are supposed to do a computational project in thermodynamics. I have to solve these two coupled diff eqns:$$\begin{aligned} (p,T ) &= \frac{p ^2}{2m} + \frac{2}{N}\sum_q f(p q)\,n (q) \frac{1}{N^2}\sum_{s,t} f(s t)\,n(s)\,n(t) \quad\text{and} \\n(p) &= \frac{1}{\exp\left[\cfrac{ (p,T ) \mu}{kT}\right] 1}\end{aligned}$$$\omega$ is the energy per boson.$p$ is the momentum of a boson.$n(p)$ is the number of bosons in the state with momentum $p$.$f$ is a function of the form $$ f(p)=\frac{1}{2} \left[\epsilon_0- \frac{p^2}{2m}\right]$$ $\epsilon_0$ is elementary excitation energy at 0 K. $N$ is total no of bosons.$T$ is temp and $k$ is a constant.Can someone guide me to any simple methods to generate some crude solution to this problem? Based on a paper: Evaluation of specic heat for superuid helium between 0 - 2.1 K based on nonlinear theory By Shosuke Sasaki (arXiv:0807.1361v1 [cond-mat.other] 9 Jul 2008 | Coupled Diff Equation from Bose Einstein distribution | numerical analysis;c++;computational physics | null |
_unix.283783 | I would like to add a new directory to my user's font directories. To achieve that, I've added the following file:$ cat ~/.config/fontconfig/conf.d/dropbox-fonts.conf <?xml version='1.0'?><!DOCTYPE fontconfig SYSTEM 'fonts.dtd'><fontconfig> <dir>~/Dropbox/fonts</dir></fontconfig>The reason for using a separate file is that it's easier for me to define it with Puppet.However, the fonts are not picked up. As soon as I create a symlink from ~/Dropbox/fonts to ~/.fonts/fonts they are picked up.How can I define an additional font directory in a separate file? | Adding a new per-user font directory | fontconfig | The configuration file was not being picked up since it is apparently necessary to have a numerical prefix for the files placed in conf.d directories, e.g. ~/.config/fontconfig/conf.d/10-dropbox-fonts.conf works, while ~/.config/fontconfig/conf.d/dropbox-fonts.conf does not.The leading 10- in the file name makes the difference. |
_codereview.87003 | Here's a method inside my controller that reads the values of angle and point data from my database. Then it grabs the data and adds it to a new list and sends the JSON to the view.I can't simplify this simply putting my if statements into two ActionResults because I can only bind one datasource to one kendo grid.[OutputCache(NoStore = true, Duration = 0, VaryByParam = *)] public ActionResult ReadMeasurements([DataSourceRequest] DataSourceRequest request, string viewType) { JsonResult json = new JsonResult(); List<AngleData> angledata = UserSession.GetValue(StateNameEnum.Planning, ScreenName.Planning.ToString() + Angles + viewType, UserSessionMode.Database) as List<AngleData>; List<PointData> pointData = UserSession.GetValue(StateNameEnum.Planning, ScreenName.Planning.ToString() + Points + viewType, UserSessionMode.Database) as List<PointData>; if(pointData != null && angledata != null) { List<PlanningViewParam> angles = new List<PlanningViewParam>(); foreach (AngleData i in angledata) { string col = # + ColorTranslator.FromHtml(String.Format(#{0:X2}{1:X2}{2:X2}, (int)(i.color.r * 255), (int)(i.color.g * 255), (int)(i.color.b * 255))).Name.Remove(0, 2); int angleVal = (int)i.angleValue; angles.Add(new PlanningViewParam() { Color = col, Label = Angle, Value = angleVal, Number = i.angleNumber }); } List<DPlanningViewParam> points = new List<PlanningViewParam>(); foreach (PointData f in pointData) { string col = # + ColorTranslator.FromHtml(String.Format(#{0:X2}{1:X2}{2:X2}, (int)(f.color.r * 255), (int)(f.color.g * 255), (int)(f.color.b * 255))).Name.Remove(0, 2); string pointAnglesVal = f.pointAnglesValue; points.Add(new PlanningViewParam() { Color = col, Label = Point, ValueTwo = pointAnglesVal, Number = f.pointNumber }); } return Json(new { Angles = angles, Points = points }, JsonRequestBehavior.AllowGet); } if (angledata != null) { List<PlanningViewParam> angles = new List<PlanningViewParam>(); foreach (AngleData i in angledata) { string col = # + ColorTranslator.FromHtml(String.Format(#{0:X2}{1:X2}{2:X2}, (int)(i.color.r * 255), (int)(i.color.g * 255), (int)(i.color.b * 255))).Name.Remove(0, 2); int angleVal = (int)i.angleValue; angles.Add(new PlanningViewParam() { Color = col, Label = Angle, Value = angleVal, Number = i.angleNumber }); } return json = Json(angles.ToDataSourceResult(request, i => new PlanningViewParam() { Color = i.Color, Label = i.Label, Value = i.Value, Number = i.Number }), JsonRequestBehavior.AllowGet); } if (pointData != null) { List<PlanningViewParam> points = new List<PlanningViewParam>(); foreach (PointData f in pointData) { string col = # + ColorTranslator.FromHtml(String.Format(#{0:X2}{1:X2}{2:X2}, (int)(f.color.r * 255), (int)(f.color.g * 255), (int)(f.color.b * 255))).Name.Remove(0, 2); string pointAnglesVal = f.pointAnglesValue; points.Add(new PlanningViewParam() { Color = col, Label = Point, ValueTwo = pointAnglesVal, Number = f.pointNumber }); } return json = Json(points.ToDataSourceResult(request, f => new PlanningViewParam() { Color = f.Color, Label = f.Label, Value = f.Value, Number = f.Number }), JsonRequestBehavior.AllowGet); } return null; } | Return existing values inside database | c#;json;asp.net mvc 4 | First of all, get rid of this line:JsonResult json = new JsonResult();Just use return new JSON(...).Now, you have repeating chunks of codes when you construct the angles and points Lists. I recommend you extract them to separate methods. If you do not want to clutter your code with the methods use the delegate() or Func<>() to create a functions inside your method.So, with that your code will be simpler: public ActionResult ReadMeasurements([DataSourceRequest] DataSourceRequest request, string viewType) { List<AngleData> angledata = UserSession.GetValue(StateNameEnum.Planning, ScreenName.Planning.ToString() + Angles + viewType, UserSessionMode.Database) as List<AngleData>; List<PointData> pointData = UserSession.GetValue(StateNameEnum.Planning, ScreenName.Planning.ToString() + Points + viewType, UserSessionMode.Database) as List<PointData>; if(pointData != null && angledata != null) { List<PlanningViewParam> angles = BuildAngles(angledata); List<DPlanningViewParam> points = BuildPoints(pointData); return new Json(new { Angles = angles, Points = points }, JsonRequestBehavior.AllowGet); } else if (angledata != null) { List<PlanningViewParam> angles = BuildAngles(angledata); return new Json(angles.ToDataSourceResult(request, i => new PlanningViewParam() { Color = i.Color, Label = i.Label, Value = i.Value, Number = i.Number }), JsonRequestBehavior.AllowGet); } else if (pointData != null) { List<DPlanningViewParam> points = BuildPoints(pointData); return new Json(points.ToDataSourceResult(request, f => new PlanningViewParam() { Color = f.Color, Label = f.Label, Value = f.Value, Number = f.Number }), JsonRequestBehavior.AllowGet); } return null; }BTW, I noticed that the JSON for the first condition pointData != null && angledata != null is returned differently. You return just new { Angles = angles, Points = points } allowing the .Net engine to serialise the data for you. For other conditions you explicitly list all elements. You either did not test the first condition, or the explicitly listing all elements is not required, as the engine does the job for just fine. If the latter is the case then use return new Json(new { Angles = angles }, JsonRequestBehavior.AllowGet); instead of angles.ToDataSourceResult(request, i... Try this and see how it goes: public ActionResult ReadMeasurements([DataSourceRequest] DataSourceRequest request, string viewType) { List<AngleData> angledata = UserSession.GetValue(StateNameEnum.Planning, ScreenName.Planning.ToString() + Angles + viewType, UserSessionMode.Database) as List<AngleData>; List<PointData> pointData = UserSession.GetValue(StateNameEnum.Planning, ScreenName.Planning.ToString() + Points + viewType, UserSessionMode.Database) as List<PointData>; if(pointData != null && angledata != null) { List<PlanningViewParam> angles = BuildAngles(angledata); List<DPlanningViewParam> points = BuildPoints(pointData); return new Json(new { Angles = angles, Points = points }, JsonRequestBehavior.AllowGet); } else if (angledata != null) { List<PlanningViewParam> angles = BuildAngles(angledata); return new Json(new { Angles = angles }, JsonRequestBehavior.AllowGet); } else if (pointData != null) { List<DPlanningViewParam> points = BuildPoints(pointData); return new Json(new { Points = points }, JsonRequestBehavior.AllowGet); } return null; } |
_datascience.18191 | can someone please tell me the difference between a BI trendline, and a linear/exponential regression?When explaining this to a hardcore BI person, what can be used to mark the difference? Thanks. | BI vs Data Science. Looking for a difference in definitions | regression | Any difference in regression models can be reduced to differences in the latent model (e.g., linear vs. exponential), regularizer (e.g., $L^p$ norm), and loss function. So you can have subtle differences by keeping some of these three parameters fixed while modifying the rest.My understanding of a BI trend line is that it assumes an affine latent model without saying anything about the regularizer or loss function (though I'd assume it's the MSE unless stated otherwise). In the data science world, you should also state what loss function and regularizer you used if you want to be clear. |
_cs.45455 | Is there a hardware interrupt that is pre-configured by the OS or something? Try to keep the answer on the scale of a register or so. Are some special preparatory signals sent across the bridges to the hardware to make boundaries? | How does the operating system set up memory boundaries? | computer architecture;memory management;memory access | How memory boundaries work depends on the system, but the most common method is a memory management unit (MMU) or a memory protection unit (MPU). Whenever the CPU makes a memory access, the address is analyzed by the MPU or MMU. An MPU either allows or forbid the access; an MMU is more powerful and translates the virtual address passed by the processor into a physical address used by the memory controller.When the access is denied, this triggers a kind of interrupt (not necessarily an actual interrupt because it often comes from inside the CPU; the vocabulary depends on the architecture but it's often called trap or exception). Other than that, no interrupts are involved.On most architectures, the MMU is part of the CPU. So MMU configuration does not involve any signals sent across the bridges (did you mean the buses?).The MPU/MMU acts based on tables. The CPU sets the tables, generally by setting a register which contains a pointer to the main table. The operating system normally modifies the table whenever a context switch between tasks occurs or a task allocates or frees memory. |
_unix.111578 | I want to copy positionXYZ into another directory's inside I want both of them.I put:Tutorials myname $ cp -r positionXYZ Documents/Gerris\ Programs/Tutorials/tutorial6/Then it says :cp: directory Documents/Gerris Programs/Tutorials/tutorial6 does not existTutorials is the positionXYZ's current parent directory, tutorial6 is the directory which I want to copy the file into. | Copy a file into another directory's inside | file copy | I assume you are already in Documents/Gerris Programs/Tutorials/so, all you need to do is:cp -r positionXYZ tutorial6/or if you want to use an absolute path (assuming that Documents is in your home directory ~):cp -r positionXYZ ~/Documents/Gerris\ Programs/Tutorials/tutorial6/ |
_unix.112184 | This is the output of apt-cache policy firefoxfirefox: Installed: 26.0~linuxmint1+lmde Candidate: 26.0~linuxmint1+lmde Version table: *** 26.0~linuxmint1+lmde 0 500 http://packages.linuxmint.com/ debian/import amd64 Packages 100 /var/lib/dpkg/statusThe version is shown as 26.0~linuxmint1+lmde, what is the 0 that comes after it? I have tried with various packages and there is always a 0 after the package version. Presumably, this can also take other values but I haven't seen it.I read through both the man and info pages of apt-cache and could not find anything relevant. The only explanation of policy is: policy [pkg...] policy is meant to help debug issues relating to the preferences file. With no arguments it will print out the priorities of each source. Otherwise it prints out detailed information about the priority selection of the named package. | What are the numbers after the version in the output of apt-cache policy? | debian;apt;version | It is <minimum-priority-to-consider>:A general output would be:package-name: Installed: <installed-version> Candidate: <version-installed-when-doing-apt-get-upgrade> Package-Pin: <version-of-Pin-in-etc-apt-preferences> Version table: *** <some-version> <minimum-priority-to-consider> <priority-of-this-instance> <repository1> <priority-of-this-instance> <repository2> *** <some-other-version> <minimum-priority-to-consider> <priority-of-this-instance> <repository3> <priority-of-this-instance> <repository4>So, in the output above, package firefox has version 26.0~linuxmint1+lmde with minimum priority of 0, and may be provided by two repositories with a priority of 500 and 100 respectively.From this debian errata section, via linuxquestions.org and askubuntu. |
_unix.329685 | When I enable or disable locales in /etc/locale.gen configuration file, then I need to execute locale-gen. Looks like locale-gen processes enabled locale files(or locale template files?) for enabled locales in /usr/share/i18n/locales/ directory. Does it produce some kind of binary file? How does this affect the glibc or locale-aware binaries? Am I correct that glibc and locale-aware binaries use the locales based on the variables seen in the output of locale command? | relationship between locales and glibc/locale-aware binaries | debian;locale | Does it produce some kind of binary file?Yes, typically /usr/lib/locale/locale-archive.How does this affect the glibc or locale-aware binaries?Usually they will fall back to the standard locale (called C or POSIX) which does not require a locale-archive file.Am I correct that glibc and locale-aware binaries use the locales based on the variables seen in the output of locale command?Yes. You can set separate locales for displaying monetary values LC_MONETARY, etc. But the main variable that acts as a default if the others aren't set is LANG. locale just reports what your current environment variables are implying. |
_unix.352516 | I don't quite understand the output from the free -h command since I'm a newbie. I have tried searching but am still not quite sure.Should I be worried my free memory is only 46M or is the -/+ buffers/cache row value that says 351M free also available for whatever? total used free shared buffers cached Mem: 594M 548M 46M 76M 28M 277M-/+ buffers/cache: 242M 351MSwap: 0B 0B 0BIf it matters, this is a web server that hosts a few websites that don't get more than 30 visits per day each. | Should I be concerned my free memory is so low or is the free memory in buffers/cache available for anything also? | debian;memory;webserver | The -/+ buffers/cache indicate the size of RAM that is dedicated directly for read/write by all the process of running applications. When you run free with -m flag, -/+ buffers/cache is the most important row to look at. In your case, it doesn't mean that (351+46)Mb is your total free memory but is a way to visualize that 242 Mb has been used by processes and 351Mb of buffers/cache in RAM is dedicatedly free for other application to use.Linux always tries to use RAM to speed up disk operations by using available memory for buffers (file system metadata) and cache (pages with actual contents of files or block devices). It may be noted that if a system has been running for a while, a small number can be seen under the free column of the mem row. |
_codereview.43350 | Here's a novel-length summary of the issue:I'm trying to write a VB.net program to help me collect remote site statistics from system-generated logs, but I'm a little like a carpenter who only knows how to use a hammer, and my project has turned into a bit of a monstrosity; as embarrassing as my code is, I would really love to get some professional opinions on how I can make it more streamlined and efficient, and generally less embarrassing.Here's the basic rundown of relevant program functionality:The user can select up to five plaintext log files, each of which can be relatively long (the longest I have available for testing is 26k lines).The program reads through every line of each file in turn, using IO.File.ReadLines, looking for relevant entries (in this case, every time a terminal goes UP or DOWN), and records the information in an entry object, which is stored in a list of entry objects. (At this point, I do a lot with the entries, but I'm going to focus just on one activity for this question).To find individual site outages, the program reads through the list of entries until it finds the first DOWN entry. It records the site ID, the site's group ID, and the outage start time. At this point, things start to get grossly inefficient.After it has collected the information listed in step 3, it records the current entry list index as a bookmark, then proceeds to look through all the following indices until it finds the next entry with that site ID; if that entry has an UP status, then it records that entry time as the outage end time, and calculated the total duration of the outage, then it goes back to the bookmark to look for the next outage start time. If it's a DOWN status, it scraps the current outage and goes back to the bookmark to look for the next outage start time. All of this information (and that recorded from step 3) is recorded in an outage object, and stored in a list of outages. This step takes an extremely long time.The program then goes through the list of outages, and checks to see if the site ID is contained in a dictionary(of string, array). If so, it adds the outage duration to the dictionary value array index 0, and it adds 1 to the array index 1. This way, I can keep track of the total outage duration for that site, and the total number of outages.Once all the outages have been added to the dictionary of sites, the program runs through that dictionary and calculates the average downtime of each site, and puts the results into another dictionary(of string,integer) to associate the site ID with its average downtime. It also adds each average downtime to a list (called sorter). This next part is really sloppy, but I don't know how to do it better (or at all).The sorter list is then sorted in descending order. When the average outage times are graphed (xval is index, yval is average duration in minutes), it only plots durations greater than the value in sorter(9); my intent was to graph only the top ten sites (by average downtime), because thousands can be present in a single log file and graphing all of them would be unreadable. However, there are many many problems with doing it this way, and I don't know how to do it better when the values are stored in a dictionary. Likewise, I can't store them in a list(of array), because I'd need to store strings and integers in the same array (unless there's a way around mixing types like that?).Here are the specific questions I have:Is there a more efficient way to perform these searches without resorting to so many time-consuming nested loops?Is there a more efficient and effective way to sort my outages by the outage duration (integer), while still keeping that duration associated with the site ID (string)?Public Sub avgdowntimepersite(ByVal type As DataVisualization.Charting.SeriesChartType) Dim stats As New Dictionary(Of String, Integer) Dim sorter As New List(Of Integer) Dim outages As New List(Of outage) Dim sites As New Dictionary(Of String, Array) Dim x = 0 'This is a bookmark to return to after finding the start and stop times of an outage. For i = 0 To searchedlist.Count - 2 Dim entry = searchedlist(i) Dim newoutage As New outage newoutage = Nothing If entry.status = Down Then 'Find the first Down status in the list of search results. newoutage.termid = entry.termid 'Gather as much info as you can from the Down status. newoutage.popid = entry.popid newoutage.starttime = entry.dtg x = i 'Set the bookmark index to the index at which the Down status was found. For a = i + 1 To searchedlist.Count - 2 'Go to the next line and start searching for the next status for this site. Dim findend = searchedlist(a) 'If the searchresult termid matches the termid of the current outage... If findend.termid = newoutage.termid And findend.status = Up Then '...and the status is Up... newoutage.endtime = findend.dtg '...collect the end time of the outage... newoutage.duration = newoutage.endtime - newoutage.starttime '...and calculate the duration, in minutes. outages.Add(newoutage) 'Finally, add the new outage to the list of outages. i = x + 1 'Go to one line after the bookmark to start looking for the next outage. ElseIf findend.termid = newoutage.termid And findend.status = Down Then 'If the searchresult termid matches the outage termid, but it's another Down status... newoutage = Nothing '...scrap the current outage as unresolveable... i = x '...and go back to the bookmark and start looking for the next outage. Continue For End If Next End If Next If outages.Count > 0 Then 'If there were actually outages found by the above loop... sites.Add(outages(0).termid, {outages(0).duration.Minutes, 1}) 'Add the first outage to the list of sites. Format is: termid,(total duration, total # of outages) For i = 1 To outages.Count - 1 Dim item As String = outages(i).termid If sites.ContainsKey(item) Then 'If the current outage is already in the dictionary of sites... sites(item)(0) = sites(item)(0) + outages(i).duration.Minutes '...add the duration of the current outage to the total outage duration for that site... sites(item)(1) += 1 '...and increase that site's total number of outages by one. Else sites.Add(item, ({outages(i).duration.Minutes, 1})) End If Next End If For Each tml In sites stats.Add(tml.Key, (tml.Value(0) / tml.Value(1))) 'Calculate the average duration of each outage, and add it to the stats dictionary. sorter.Add((tml.Value(0) / tml.Value(1))) Next sorter.Sort() sorter.Reverse() If stats.Count > 0 Then outagechart.Series.Clear() outagechart.Series.Add(avgpersite) outagechart.Series(0).ChartType = type outagechart.Series(0).Color = Color.Lime outagechart.ChartAreas(0).AxisY.LabelStyle.ForeColor = Color.Gold outagechart.ChartAreas(0).AxisX.LabelStyle.Angle = 45 outagechart.ChartAreas(0).AxisX.LabelStyle.ForeColor = Color.Gold outagechart.ChartAreas(0).AxisX.Interval = 1 outagechart.ChartAreas(0).AxisX.IntervalType = DataVisualization.Charting.DateTimeIntervalType.NotSet End If For Each tml In stats If tml.Value > sorter(9) Then outagechart.Series(0).Points.AddXY(tml.Key, tml.Value) outagechart.Series(0).IsXValueIndexed = True End If NextEnd Sub | Collect and calculate average times from log, then display top 10 longest durations | optimization;strings;vb.net;dictionary | null |
_softwareengineering.254271 | Best practices with git (or any VCS for that matter) is supposed to be to have each commit do the smallest change possible. But, that doesn't match how I work at all.For example I recently I needed to add some code that checked if the version of a plugin to my system matched the versions the system supports. If not print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error message so I edited that code. That code was in the startup module of one entry to the system. The plugin checking code was in another path that didn't use that entry point so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test my plugin checking code works I need to go edit UI/UX code to make sure it tells the user You need to upgrade.When all is said and done I've edited 10 files, changed dependencies, the 2 entry points are now both dependant on the colorization code, etc etc. Being lazy I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating which brings up the questionAre there workflows that work for you or that make this process easier?I don't think I can some how magically always modify stuff in the perfect order since I don't know that order until after I start modifying and seeing what comes up.I know I can git add --interactive etc but it seems, at least for me, kind of hard to know what I'm grabbing exactly the correct changes so that each commit is actually going to work. Also, since the changes are sitting in the current directory it doesn't seem like it would be easy to run tests on each commit to make sure it's going to work short of stashing all the changes. And then, if it were to stash and then run the tests, if I missed a few lines or accidentally added a few too many lines I have no idea how I'd easily recover from that. (as in either grab the missing lines from the stash and then put the rest back or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit.Thoughts? Suggestions?PS: I hope this is an appropriate question. The help says development methodologies and processes | git workflow for separating commits | version control;git | null |
_webmaster.27294 | Google page speed rank demands me to set an expiry date or a maximum age in the HTTP headers, by changing the Expires and Cache-Control: max-age.The site is hosted on a hosting company (not in my garage) on a windows platform.I tried by uploading a .htaccess file but they have IISPassword program that blocks it.The question is how do I modify the HTTP headers?when checking the current header this is what I get:HTTP/1.1 304 Not ModifiedContent-Location: http://pcgroup.co.il/Default.htmLast-Modified: Wed, 14 Mar 2012 18:38:56 GMTAccept-Ranges: bytesVary: Accept-EncodingServer: Microsoft-IIS/6.0X-Powered-By: ASP.NETDate: Wed, 14 Mar 2012 19:16:43 GMT | How do I modify the HTTP headers? | webserver;page speed;http headers;http server | Create a file called web.config.txt and then upload it to your server root. Then rename the file, removing .txt extension. You will then need to modify the file with the correct code. |
_unix.240414 | I am trying to build a very simple login page, which asks the user for his register_no, username and password. And when he presses the submit button. I am trying to check whether it is an existing user or a new user and display a message accordingly.My folder hierarchy is like thisprodicus@Acer:~/Downloads/souvik_refactoring$ tree. cgi-bin creating_user_base_table.py user_base.db usr_check.py index.html keyCheck.py1 directory, 5 filesWhat I have tried:For the index.html<!DOCTYPE html><html><head> <title>Login page</title></head><body> <div style = text-align : center ; > <h1>Login page</h1> <form action=/cgi-bin/usr_check.py method=get> Registration number : <input type=number name=register_no min=1 max=2000000000> <br><br> Username : <input type=text name=username> <br><br> Password : <input type=password name = password> <br><br> <input type=submit value = Login> </form> </div></body></html>For creating_user_base_table.py#!/usr/bin/env python3.4import sqlite3import osdb_name = user_base.dbif db_name in os.listdir(): print(removing the user_base.db and creating a fresh copy of it) os.system(rm user_base.db)print(Creating the database)conn = sqlite3.connect(db_name)cur = conn.cursor()user_table = CREATE TABLE users(reg_no INTEGER PRIMARY KEY, user_name TEXT, pass TEXT)new_users = ( (1081310251, 'admin', 'admin'), (1081310234, 'foo', 'admin123'))cur.execute(user_table)print(table created)cur.executemany('INSERT INTO users VALUES(?, ?, ?)', new_users)conn.commit()print(default users created \n\ndisplaying them)cur.execute('SELECT * FROM users')print(cur.fetchall())and finally usr_check.py#/usr/bin/env python3.4import cgi, cgitbimport osimport sqlite3cgitb.enable()form = cgi.FieldStorage()register_no = form.getvalue('register_no')username = form.getvalue('username')passwd = form.getvalue('password')print(Content-type:text/html\r\n\r\n)print(<html>)print(<head>)print(<h1>Shit gets real here</h1>)print(</head>)print(<body>)print('<div style = text-align:center ; ')# print(</div>)print()conn = sqlite3.connect('user_base.db')cur = conn.cursor()## now to check whether the entered data is for## -> new user ## -> an old usercur.execute('SELECT user_name FROM users WHERE register_no = ?', (register_no,))rows = cur.fetchall()print(<br><br>)if len(rows) == 0: print(<p>User : <b>, username , </b> does not exist.</p>) cur.execute('INSERT INTO users VALUES(?, ?, ?)', (register_no, username, passwd)) print(<p>User was created successfully</p>) print(Done)else: print(<p>Welcome<b>, username ,</b>. Good to have you back) print(<br><p>Your account details</p>) print(<ul>) print(<li>Register number : , register_no, </li>) print(<li>Username , username, </li>) print(</ul>)Error log : prodicus@Acer:~/Downloads/souvik_refactoring$ python -m CGIHTTPServerServing HTTP on 0.0.0.0 port 8000 ...127.0.0.1 - - [02/Nov/2015 12:43:23] GET /index.html HTTP/1.1 200 -127.0.0.1 - - [02/Nov/2015 12:44:03] GET /cgi-bin/usr_check.py?register_no=1081310234&username=foo&password=admin123 HTTP/1.1 200 -Traceback (most recent call last): File /usr/lib/python2.7/CGIHTTPServer.py, line 252, in run_cgi os.execve(scriptfile, args, env)OSError: [Errno 8] Exec format error127.0.0.1 - - [02/Nov/2015 12:44:03] CGI script exit status 0x7f00Following this https://stackoverflow.com/questions/10793042/sqlite3-insert-using-python-and-python-cgi I have the files permissionsprodicus@Acer:~/Downloads/souvik_refactoring$ lltotal 36drwxrwxr-x 3 prodicus prodicus 4096 Nov 2 08:30 ./drwxr-xr-x 15 prodicus prodicus 20480 Nov 2 11:29 ../drwxrwxrwx 2 prodicus prodicus 4096 Nov 2 12:23 cgi-bin/-rw-rw-r-- 1 prodicus prodicus 629 Nov 2 08:38 index.html-rwxrwxr-x 1 prodicus prodicus 463 Nov 2 08:29 keyCheck.py*and prodicus@Acer:~/Downloads/souvik_refactoring/cgi-bin$ lltotal 20drwxrwxrwx 2 prodicus prodicus 4096 Nov 2 12:23 ./drwxrwxr-x 3 prodicus prodicus 4096 Nov 2 08:30 ../-rwxrwxrwx 1 prodicus prodicus 710 Nov 1 23:12 creating_user_base_table.py*-rwxrwxrwx 1 prodicus prodicus 2048 Nov 2 12:23 user_base.db*-rwxrwxrwx 1 prodicus prodicus 1576 Nov 2 08:26 usr_check.py*Surprisingly, cgitb is not showing an error. Where am I going wrong? Have been breaking my head on this since morning! | CGI error while trying to retrieve from sqlite3 | ubuntu;python;sqlite;cgi | null |
_softwareengineering.165263 | Assume I have abstract base model class called MoneySource. And two realizations BankCard and CellularAccount. In MoneysSourceListViewController I want to display a list of them, but with ListItemView different for each MoneySource subclass. What if I define a category on MoneySource@interface MoneySource (ListItemView)- (Class)listItemViewClass; @endAnd then override it for each concrete sublcass of MoneySource, returning suitable view class.@implementation CellularAccount (ListItemView)- (Class)listItemViewClass{ return [BankCardListView class];}@end@implementation BankCard (ListItemView)- (Class)listItemViewClass{ return [CellularAccountListView class];}@end@implementation MoneySourceListController- (ListItemView *)listItemViewForMoneySourceAtIndex:(int)index{ MoneySource *moneySource = [items objectAtIndex:index]; Class viewClass = [moneySource listItemViewClass]; ListItemView *view = [[viewClass alloc] init]; [view setupWithMoneySource:moneySource]; return [view autoreleased];}@endso I can ask model object about its view, not violating MVC principles, and avoiding class introspection or if constructions.Thank you! | Let a model instance choose appropriate view class using category. Is it good design? | mvc;objective c | null |
Subsets and Splits