id
stringlengths
5
27
question
stringlengths
19
69.9k
title
stringlengths
1
150
tags
stringlengths
1
118
accepted_answer
stringlengths
4
29.9k
_codereview.68880
I'm trying to create a very simple jQuery plugin in the object-oriented way. Now, I'm not sure whether the code I produced is correct and efficient OO programming. The plugin's aim is to change the navigation ul li based on the clicked country.jsFiddle// Widget container(function($) {// Widget container plugin $.fn.myWidget = function () { this.each (function () { // Vars var item = $ (this); // Set events item.click (function (e) { if (e) e.preventDefault (); combine_all(item); }); }); }; //uk navigation var uk_navigation = '<ul>' +'<li id=menu-1>UK stuff</li>' +'<li id=menu-2>UK phones</li>' +'</ul>'; //global navigation var global_navigation = '<ul>' +'<li id=menu-1>EU stuff</li>' +'<li id=menu-2>EU phones</li>' +'</ul>'; //italy navigation var italy_navigation = '<ul>' +'<li id=menu-1>Italy stuff</li>' +'<li id=menu-2>Italy phones</li>' +'</ul>'; //germany var germany_navigation = '<ul>' +'<li id=menu-1>Germany stuff</li>' +'<li id=menu-2>Germany phones</li>' +'</ul>'; this.get_region = function(item){ my_region = $ (item).attr ('id'); return my_region; }; this.make_active = function(item){ item.each(function(){ item.siblings().removeClass('flag-active'); }); item.addClass('flag-active'); } this.update_navigation = function(my_region){ var old_navigation = $('#navigation').find('ul'); var new_navigation; if(my_region==='europe_flag'){ new_navigation = global_navigation; }else if(my_region==='uk_flag'){ new_navigation = uk_navigation; }else if(my_region==='germany_flag'){ new_navigation = germany_navigation; }else{ new_navigation = italy_navigation; } $(old_navigation).hide().html(new_navigation).fadeIn(800); }; this.combine_all = function(item){ this.get_region (item); this.update_navigation(my_region); this.make_active(item); };})(jQuery);// Main$(function() { $('#flags').find('div').myWidget();});
Change the navigation ul li based on the clicked country
javascript;jquery;html;object oriented
null
_cstheory.38621
Given a Boolean formula $\varphi$ over the variables $\{x_1...x_n\}$ , an assignment $T_0$ for $\varphi$ and an integer $k$, I am interested in the following question:Does $k$ is the minimal number of bits that we have to change with respect to $T_0$ to change the value of $\varphi$? I.e. there exists an assignment $T_1$ such that $T_1$ is different from $T_0$ in at least $k+1$ different places and $T_0(\varphi) \neq T_1(\varphi)$, and for all $T$ such that $T$ is different from $T_0$ in $k$ places or less, it holds that $T(\varphi)=T_0(\varphi)$.Edit : I suspect that this problem is in dp, but can't prove its completeness. Ideas wil be welcome
Dp completeness of a problem
cc.complexity theory;complexity classes;polynomial hierarchy
null
_unix.93875
Can someone please point me in the right direction? Hopefully first by letting me know if this is possible...I recently purchased a home theater system which you plug via hdmi into the tv. It has its own nice gui with netflix, and youtube, and blah blah blah. One of the options was to browse your computer. When you click on it it tries to connect to a server to find music/videos. Can I use my Ubuntu installed laptop to host a server to put music on and play it wirelessly essentially? I believe this is possible and should be pretty straight forward.How would I go about creating a server on the same laptop I would be interacting with it...I could then secure copy or sftp the files back and forth. Is there like special settings to keep in mind?Any words of wisdom would be appreciated. TxC
Setting up server to share music with tv
ubuntu
You'll need to confirm which standard your home theater ting is using, but it is probably using DLNA, an overly-cumbersome UPnP-based standard Consumer Electronics Manufacturers use.I have used 2, MediaTomb and MiniDLNA (now ReadyMedia, recently renamed). I definitely recommend MiniDLNA. MediaTomb is overly complicated and doesn't seem to be that actively maintained now. MiniDLNA just worked once my wife installed it.
_codereview.26923
is there a better why how can I refactor this code, making sure that values in a Hash are typecasted to true/false if their value is '1' or '0' while leaving unaltered the rest?I'm using Ruby 2.0.0 if that matters, and I'd like to improve this code. def transform hsh = {} preferences.each do |k, v| v = case v when '1' true when '0' false else v end hsh[k.to_sym] = v end hsh endupdated with benchmarksAll right, here's the performance test based on the replies so far:class Test HSH = {xxx=>xxx-rrr, yyy=>0, rrr=>1, nnn=>0, kkk=>1, iii=>1, lll=>default, mmm=>76, www=>1} def self.transform_case hsh = {} HSH.each do |k, v| v = case v when '1' true when '0' false else v end hsh[k.to_sym] = v end hsh end def self.transform_ternary Hash[ HSH.map { |k, v| [k.to_sym, v == '1' ? true : v == '2' ? false : v ] } ] end def self.transform_fetch special_values = {1 => true, 0 => false} Hash[HSH.map { |k, v| [k.to_sym, special_values.fetch(v, v)] }] end def self.transform_negation Hash[ preferences.map {|k,v| [k.to_sym, !!v]} ] endendBenchmark.bm(20) do|b| b.report('case') do 1500.times { Test.transform_case } end b.report('ternary') do 1500.times { Test.transform_ternary } end b.report('fetch') do 1500.times { Test.transform_fetch } end b.report('negation') do 1500.times { Test.transform_negation } endend user system total realcase 0.010000 0.000000 0.010000 ( 0.009282)ternary 0.010000 0.000000 0.010000 ( 0.013794)fetch 0.010000 0.000000 0.010000 ( 0.012804)negation 0.010000 0.000000 0.010000 ( 0.010760)It appears that my original implementation is faster. Or is the BM test wrong?
Manipulate Hash to typecast true/false for certain values
ruby
Code should be as declarative as possible (usually by using functional style):def transform special_values = {1 => true, 0 => false} Hash[preferences.map { |k, v| [k.to_sym, special_values.fetch(v, v)] }]endHowever, Hash[...] is very ugly and I prefer a more OOP approach with Enumerable#mash, so I'd really write preferences.mash { |k, v| [k.to_sym, special_values.fetch(v, v)] }.
_webmaster.46656
I am trying to figure out the best option for web hosting of a startup business. I want to get a VPS hosting for this business and one factor that affects the price is the number of IP addresses.I was wondering what would be the advantage of having multiple IP addresses for one server. I have read the other questions and I know if I am going to use SSL for multiple websites on my server I would need unique IP address for each one but in this case I only would have one website.
Multiple IP addresses to a server
ip address;vps
One IP could be for your web (HTTP/HTTPS) traffic, another could be for FTP or SSH access, another could be for mail, &c. If the publicly known IP (i.e. the one published to DNS) is separate to the one used for administration (only known by you and your team) then that would be a way of securing your server - by allowing different types of traffic over different IP addresses.
_webmaster.105218
I registered an app in MailChimp but I changed my mind. I wonder if I can delete it?
How to remove registered app in mailchimp?
mailchimp
null
_unix.111142
On my test system. I was doing some testing and i move the grub.conf file from /boot/grub/ to /opt/And on boot black screen came as expected with just grub> written on it. I tried to solve it using some tuts but it is not working./boot is on /dev/sda1is there a way to recover grub.conf without using live media.Sorry i forgot to add that this server is installed uisng Linux KVM Technology
How to recover missing Grub File
rhel;grub2
null
_cs.13785
As a software engineer, I write a lot of code for industrial products. Relatively complicated stuff with classes, threads, some design efforts, but also some compromises for performance. I do a lot of testing, and I am tired of testing, so I got interested in formal proof tools, such as Coq, Isabelle... Could I use one of these to formally prove that my code is bug-free and be done with it? - but each time I check out one of these tools, I walk away unconvinced that they are usable for everyday software engineering. Now, that could only be me, and I am looking for pointers/opinions/ideas about that :-)Specifically, I get the impression that to make one of these tools work for me would require a huge investment to properly define to the prover the objects, methods... of the program under consideration. I then wonder if the prover wouldn't just run out of steam given the size of everything it would have to deal with. Or maybe I would have to get rid of side-effects (those prover tools seem to do really well with declarative languages), and I wonder if that would result in proven code that could not be used because it would not be fast or small enough. Also, I don't have the luxury of changing the language I work with, it needs to be Java or C++: I can't tell my boss I'm going to code in OXXXml from now on, because it's the only language in which I can prove the correctness of the code... Could someone with more experience of formal proof tools comment? Again - I would LOVE to use a formal prover tool, I think they are great, but I have the impression they are in an ivory tower that I can't reach from the lowly ditch of Java/C++... (PS: I also LOVE Haskell, OCaml... don't get the wrong idea: I am a fan of declarative languages and formal proof, I am just trying to see how I could realistically make that useful to software engineering)Update: Since this is fairly broad, let's try the following more specific questions: 1) are there examples of using provers to prove correctness of industrial Java/C++ programs? 2) Would Coq be suitable for that task? 3) If Coq is suitable, should I write the program in Coq first, then generate C++/Java from Coq? 4) Could this approach handle threading and performance optimizations?
Formal program verification in practice
programming languages;program correctness;software verification
I'll try to give a succinct answer to some of your questions. Please bear in mind that this is not strictly my field of research, so some of my info may be outdated/incorrect.There are many tools that are specifically designed to formally prove properties of Java and C++. However I need to make a small digression here: what does it mean to prove correctness of a program? The Java type checker proves a formal property of a Java program, namely that certain errors, like adding a float and an int, can never occur! I imagine you are interested in much stronger properties, namely that your program can never enter into an unwanted state, or that the output of a certain function conforms to a certain mathematical specification. In short, there is a wide gradient of what proving a program correct can mean, from simple security properties to a full proof that the program fulfills a detailed specification.Now I'm going to assume that you are interested in proving strong properties about your programs. If you are interested in security properties (your program can not reach a certain state), then in general it seems the best approach is model checking. However if you wish to fully specify the behavior of a Java program, your best bet is to use a specification language for that language, for instance JML. There are such languages for specifying the behavior of C programs, for instance ACSL, but I don't know about C++.Once you have your specifications, you need to prove that the program conforms to that specification.For this you need a tool that has a formal understanding of both your specification and the operational semantics of your language (Java or C++) in order to express the adequacy theorem, namely that the execution of the program respects the specification.This tool should also allow you to formulate or generate the proof of that theorem. Now both of these tasks (specifying and proving) are quite difficult, so they are often separated in two: One tool that parses the code, the specification and generates the adequacy theorem. As Frank mentioned, Krakatoa is an example of such a tool.One tool that proves the theorem(s), automatically or interactively. Coq interacts with Krakatoa in this manner, and there are some powerful automated tools like Z3 which can also be used.One (minor) point: there are some theorems which are much too hard to be proven with automated methods, and automatic theorem provers are known to occasionally have soundness bugs which make them less trustworthy. This is an area where Coq shines in comparison (but it is not automatic!).If you want to generate Ocaml code, then definitely write in Coq (Gallina) first, then extract the code. However, Coq is terrible at generating C++ or Java, if it is even possible.Can the above tools handle threading and performance issues? Probably not, performance and threading concerns are best handled by specifically designed tools, as they are particularly hard problems. I'm not sure I have any tools to recommend here, though Martin Hofmann's PolyNI project seems interesting.In conclusion: formal verification of real world Java and C++ programs is a large and well-developed field, and Coq is suitable for parts of that task. You can find a high-level overview here for example.
_unix.5778
There are two syntaxes for command substitution: with dollar-parentheses and with backticks.Running top -p $(pidof init) and top -p `pidof init` gives the same output. Are these two ways of doing the same thing, or are there differences?
What's the difference between $(stuff) and `stuff`?
shell;command line;command substitution
The old-style backquotes ` ` do treat backslashes and nesting a bit different. The new-style $() interprets everything in between ( ) as a command.echo $(uname | $(echo cat))Linuxecho `uname | `echo cat``bash: command substitution: line 2: syntax error: unexpected end of fileecho catworks if the nested backquotes are escaped:echo `uname | \`echo cat\``Linuxbackslash fun:echo $(echo '\\')\\echo `echo '\\'`\The new-style $() applies to all POSIX-conformant shells.As mouviciel pointed out, old-style ` ` might be necessary for older shells.Apart from the technical point of view, the old-style ` ` has also a visual disadvantage:Hard to notice: I like $(program) better than `program`Easily confused with a single quote: '`'`''`''`'`''`'Not so easy to type (maybe not even on the standard layout of the keyboard)(and SE uses ` ` for own purpose, it was a pain writing this answer :)
_scicomp.16412
Before there was CUDA or OpenCL people were using GPUs for computation. I am trying to find out how they did that -- because I want to press my Rasberry Pi's GPU for computing and it does not seem to have OpenCL support. I am looking for notes on how they did this.I've done cursory Googling but have come up empty handed -- primarily because I think the search indexes and full of CUDA and OpenCL related material.Would appreciate any pointers on how to use GPUs for computation in the absence of CUDA and OpenCL.
using GPUs before CUDA and OpenCL
gpu;cuda;opencl
I don't know a definitive source, but have a look at GPU Gems 2, which is a book published by NVIDIA about ten years ago and available online. While much of it is about computer graphics, it has a number of sections devoted to general purpose computing on GPUs from the time before CUDA and OpenCL.I am not familiar enough with Raspberry Pi to tell if this book will actually help you very much.
_unix.123876
I have comp54.tgz installed.# cd /root && ftp http://ftp.openbsd.org/pub/OpenBSD/`uname -r`/src.tar.gz && tar -xzf /root/src.tar.gz -C /usr/src# uname -r5.4# pwd/usr/src# ls -latotal 124drwxrwxr-x 17 root wsrc 512 Apr 13 19:35 .drwxr-xr-x 17 root wheel 512 Jul 30 2013 ..drwxr-xr-x 2 root wsrc 512 Jul 29 2013 CVS-rw-r--r-- 1 root wsrc 3456 Jul 24 2013 Makefile-rw-r--r-- 1 root wsrc 16419 Jul 7 2013 Makefile.crossdrwxr-xr-x 36 root wsrc 1024 Jul 29 2013 bindrwxr-xr-x 31 root wsrc 512 Jul 29 2013 distribdrwxr-xr-x 35 root wsrc 2560 Jul 29 2013 etcdrwxr-xr-x 44 root wsrc 1024 Jul 29 2013 gamesdrwxr-xr-x 9 root wsrc 512 Jul 29 2013 gnudrwxr-xr-x 7 root wsrc 2048 Jul 7 2013 includedrwxr-xr-x 11 root wsrc 512 Jul 29 2013 kerberosVdrwxr-xr-x 40 root wsrc 1024 Jul 29 2013 libdrwxr-xr-x 40 root wsrc 1024 Jul 29 2013 libexecdrwxr-xr-x 15 root wsrc 512 Jul 10 2010 regressdrwxr-xr-x 78 root wsrc 1536 Jul 29 2013 sbindrwxr-xr-x 14 root wsrc 512 Jul 29 2013 sharedrwxr-xr-x 228 root wsrc 4096 Jul 29 2013 usr.bindrwxr-xr-x 144 root wsrc 2560 Jul 29 2013 usr.sbin# which gcc/usr/bin/gcc# # ftp http://ftp.openbsd.org/pub/OpenBSD/patches/5.4/common/001_pflow.patch Trying 129.128.5.191...Requesting http://ftp.openbsd.org/pub/OpenBSD/patches/5.4/common/001_pflow.patch100% |*******************************************************| 803 00:00 803 bytes received in 0.00 seconds (11.10 MB/s)# # patch -p0 < 001_pflow.patch Hmm... Looks like a unified diff to me...The text leading up to this was:--------------------------|Apply by doing:| cd /usr/src| patch -p0 < 001_pflow.patch||Then build and install a new kernel.||Index: sys/net/if_pflow.c|===================================================================|RCS file: /vol/openbsd/cvs/src/sys/net/if_pflow.c,v|retrieving revision 1.32|diff -u -p -r1.32 if_pflow.c|--- sys/net/if_pflow.c 5 Jul 2013 17:14:27 -0000 1.32|+++ sys/net/if_pflow.c 7 Nov 2013 16:48:45 -0000--------------------------File to patch: # what do I need to write here???????No file found--skip this patch? [n] patch: **** can't find ## My question: how do I get past of the File to patch: ?
How to apply a patch in OpenBSD?
openbsd
null
_unix.334476
In a fresh Bugzilla installation (5.0.3), on a Scientific Linux 6 server I can't set the mail parameters due to this error:The new value for smtpserver is invalid: Cannot connect to mail.smpt.serverDespite the fact that SMTP settings are correct. The SMTP server for the company is fully functional, on that server I downloaded thunderbird and was able to log in to my account smoothly without any problem which should mean the server has no problem with SMTP at all.I'm still searching with no clear hind what the cause could be. Any help is really appreciated!Update: In the old installation I can set these values and I can submit a bug but the email is not sent. This article says that bugzilla does not support SMTP with authentication (Not sure if true or not).Update: I installed the necessary modules here and again no luck.Update: On the old installation (4.something), we managed to setup an email account without authentication. This Bugzilla can now send email and it should work smoothly.Update: I found this useful article which applies a custom send mail script but the error message could not tell me where the error is.
Bugzilla cannot connect to an SMTP server
bugzilla
null
_codereview.108315
I am about to post a question to Stack Overflow about how to do a better job adding a number to the list inside the User class. But I feel if I refactored this I might solve my problem with coupling my data like this. So I am open to suggestions.Ideonepublic static void addOrUpdate(Dictionary<int, User> dic, int key, User user){ //sets a new user data and just replaces it in the dictionary //var used = new User { ID = id1, Name = Harry }; var used = new User{ ID = id3, Name = Henry ,AddressBook = new List<ContactNumber>(){new ContactNumber(3111),new ContactNumber(4444)}}; if (dic.TryGetValue(key, out user)) { // yay, value exists! dic[key] = used; } else { // darn, lets add the value dic.Add(key, used); }}
Data Structures C# Dictionary and user has a property that is a list
c#;object oriented;dictionary
null
_unix.55207
I am trying to run Windows XP in a Xen DomU virtual machine with a PCIe device, for which there are no Linux drivers, being passed through from a Debian Squeeze Dom0. My hardware supports virtualization and it is active in the bios. If I rungrep -E (vmx|svm) --color=always /proc/cpuinfowhen I boot from the standard kernel I can see my processor supports vmx, although when I boot the Xen kernel, vmx doesn't show up.I have followed the setup in http://wiki.xen.org/wiki/Xen_Beginners_Guide. The guide basically creates a minimal Debain Squeeze install as Dom0, a PV Debian Squeeze DomU and a HVM Windows DomU running on an LVM volume. I have followed the guide essentially to the letter with the only differences being network bridge is different and I didn't install a Debian PV DomU.I currently have a DomU on an LVM volume that is running a fully updated version of Windows XP with the GPLPV drivers. I am now trying to pass the PCI device, but am running into problems. If I compare the output of lspci with and without the PCIe card that I am trying to pass I see the following two new entries:05:00.0 PCI bridge: PLX Technology, Inc. PEX 8111 PCI Express-to-PCI Bridge (rev 21)06:04.0 Bridge: Device 4550:9054 (rev 01)I also see that another entry has changed its address from 06:00.0 IDE interface: Marvell Technology Group Ltd. 88SE6121 SATA II Controller (rev b2)to07:00.0 IDE interface: Marvell Technology Group Ltd. 88SE6121 SATA II Controller (rev b2)I modified /etc/default/grub to include eitherGRUB_CMDLINE_XEN=xen-pciback.hide=(05:00.0)(06:04.0)orGRUB_CMDLINE_XEN=pciback.hide=(05:00.0)(06:04.0)and run update-grub and update-grub2 after making the change and then fully powered down and rebooted. This doesn't appear to do anything and nothing shows up withxm pci-list-assignable-devicesLooking at the Xen wiki guide http://wiki.xen.org/wiki/Xen_PCI_Passthrough I have tried things likeecho 0000:05:00.0 > /sys/bus/pci/devices/0000:05:00.0/driver/unbindecho 0000:05:00.0 > /sys/bus/pci/drivers/pciback/new_slotecho 0000:05:00.0 > /sys/bus/pci/drivers/pciback/bindand some other stuff related to pci-stub. Sometimes my random fiddling results in xm pci-list-assignable-deviceslisting 05:00.0 and 06:04.0. If I modify my .cfg file to includepci = ['05:00.0', '06:04.0']I get an error about pci-stub not owning the 05:00.0 device. If I only try and pass 06:04.0 the DomU won't boot.Any ideas how to get pci passthrough working.
PCI passthrough with Xen
debian;virtual machine;virtualization;xen;pci
null
_scicomp.7375
How can I convert matrices L and U output by dgssvx() of SuperLU to triples format (to matrix market format)? Also how can I convert input matrix A in triples format (in matrix market format) to the format required by SuperLU? What is the easiest way to perform these format conversions?
Converting matrices L and U output by dgssv() of SuperLU to triples format
matrices;io
null
_softwareengineering.220769
In other words, is DRY (don't repeat yourself) applied at a class level a subset of SRP (single responsibilty principle)? What I mean is that while SRP states that each class should have only a single responsibility ( ie. class should only have one reason to change ), is it the application of DRY at class level that prevents two classes from having the same responsibility ( thus it is DRY that prevents the repetition/duplication of same responsibility in two different classes )?Thank you
How is DRY principle ( applied at class level ) related to SRP?
design patterns;dry;single responsibility
What I mean is that while SRP states that each class should have only a single responsibility ( ie. class should only have one reason to change ), is it the application of DRY at class level that prevents two classes from having the same responsibility ( thus it is DRY that prevents the repetition/duplication of same responsibility in two different classes )?No. You can still not repeat yourself but violate single responsibility. Likewise, you can repeat yourself but classes only have a single (duplicated, but subtly different) responsibility.While the two guidelines can and will tend to overlap, they are orthogonal concepts.
_unix.320255
I installed Apache version 2.0.65.Installation Process:cd /usr/local/srctar xvfz httpd-2.0.65.tar.gzcd httpd-2.0.65./configure --prefix=/usr/local/apache --enable-so --enable-module=so --enable-shared=max --enable-rewrite --enable-shared=rewritemakemake install/usr/local/apache/bin/apachectl startBut nothing like below:[root@ip- bin]# ./apachectl start<br>[root@ip- bin]#This is the output:[root@ip-172-31-2-245 bin]# ./apachectl restart<br> httpd not running, trying to startThe strange thing is that I think there is no problem in the setting.[root@ip- bin]# ./httpd -t<br> Syntax OK<br> [root@ip- bin]#./apachectl configtest<br> Syntax OK<br>So, I see in /usr/local/apache/logs/error_log:[crit] (22)Invalid argument: mod_rewrite: Could not set permissions onrewrite_log_lock;<br>check User and Group directives Configuration FailedWhat is the problem here, then?
Apache version 2.0.65 apachectl is not working
apache httpd;amazon ec2
null
_softwareengineering.59870
The more I explore Github, the more I like it. I really enjoy how coding is becoming more social.I'm curious as to if there are any bad practices that programmers should avoid in sharing their code with each other. And in naming bad practices, what are the best practices for code sharing?For example: Is it a bad practice for a single repo to have multiple scripts/projects named 'MiscProjects'? Where this repo, as the name suggest, is a collection of miscellaneous small scripts and projects. This may resemble how a programmer organizes projects on his/her local storage, but it's possibly not optimal for code sharing? Maybe if a good README/documentation is done, it would be better? Or as long as it's well documented, anything goes?
Best/Bad practices for code sharing?
github
While there no 'bad practices' set in stone, likewise with other version control systems, there are conventions.Your Git repo should be as small as possible. If you're coming from the CVS/SVN module, it was common to have a structured single repository which could compose of multiple repositories for a number of projects. The Git way is to split these up and have separate Git repos for each project. Reasons are:Git is faster for smaller repos.Due to its design, each operations affects the entire repo. It is inefficient to perform Git operations over necessary projects if you're only working on one of them.Documentation, as always, is a must. While people are adept at reading code, no one wants to interpret code any more than they need to. Using the top-level README to describe the project and the structure of the Git repo will always be a good thing for those involved (or looking to get involved) in the project.The majority of the project on GitHub conform to the conventions. Use them as examples for how to structure your future projects.
_unix.15982
How can I find out the Mflops on my linux computer? I can see the bogomips on my /proc/cpuinfo but I don't see any Mflops on that file:$ cat /proc/cpuinfo | grep -ie mips -ie flopsbogomips : 3591.29bogomips : 3590.96bogomips : 3590.96bogomips : 3590.96
Finding MFLOPS using Linux
performance;cpu
Similar question asked here - ServerFaultThe person who asked the question was pleased with Phoronix Test Suite.
_codereview.110473
is there an easier/more efficient way to join two JSON objects on a common property than this? It's a GeoJSON file (basically an array of objects), and I want to pull in a property from another JSON file (array of objects) based on a common property, similar to a SQL SELECT a.*, b.* FROM a LEFT JOIN b ON a.id = b.a_id$.when( $.getJSON('https://data.phila.gov/resource/bbgf-pidf.geojson'), $.getJSON('https://data.phila.gov/resource/r24g-zx3n.json?%24select=count(*)%20as%20value%2C%20%3A%40computed_region_bbgf_pidf%20as%20label&%24group=%3A%40computed_region_bbgf_pidf&%24order=value%20desc')).done(function(responseGeojson, responseData) { var data = responseData[0] var geojson = responseGeojson[0] // Create hash table for easy reference var dataHash = {} data.forEach(function(item) { if(item.label) dataHash[item.label] = item.value }) // Add value from hash table to geojson properties geojson.features.forEach(function(item) { item.properties.incidents = +dataHash[item.properties._feature_id] || null }) console.log(geojson)})
Join JSON objects on common property
javascript;json;geospatial;join
I notice a query on one of your urls. I suggest you use jQuery's $.param to construct the query cleanly.For your first loop, you could use reduce instead of forEach to generate your hash. Additionally, you might want kill NaN early. You don't want to be carrying NaN in any data structure, otherwise you'll get some unexpected NaN in your code.var dataHash = data.reduce(function(hash, item) { var value = +item.value hash[item.label] = isNaN(value) ? null : value; return hash;}, {});geojson.features.forEach(function(item) { item.properties.incidents = dataHash[item.properties._feature_id];});As for efficiency, I guess that's about it. You don't want to be nesting a lookup inside the other loop. Two separate loops is better then one nested inside another.
_webmaster.26370
I just bought the deluxe web-hosting service of godaddy and am very confused with the term primary domain. What is this thing for and can I remove it so I can organize my websites in different directories./root/site1/root/site2/root/site3....PS I used to work with MediaTemple and remember that there was no primary domain....please help.
what is the primary domain in godaddy for?
web hosting;domains;godaddy
Godaddy for some reason requires you to have a primary domain setup for each hosting account. So this basically makes the root whatever the primary domain is.There is a way that you can setup a false domain so that you can take the previous primary domain and move it. see the link here http://help.godaddy.com/article/4067This will make the root non-web-accessible. You basically create a false domain of anything you want as long as it isn't used by anyone else. You can then move all your domains to a central area.For instance my previous setup was this/root/primary/root/sites/site2/root/sites/site3After the switch I have/root/falsedomain <--not web accessible as well as any folders at same level/root/sites/site1 <--previous primary/root/sites/site2/root/sites/site3Note that it can take some time to make the domain changes so if you have live sites they could be down for up to 24 hours. I'm pretty sure you also won't be able to use some things Godaddy has like the stats.
_cs.18326
This is one of the special cases for Chinese Postman Problem.I know the answer is (Cost of all Edges + Cost of shortest path between the odd degree vertices). How do I prove this?I am trying to prove the solution to this UVA problem.
Minimum cost tour for a simple connected graph having exactly two odd degree vertices
graph theory
null
_codereview.91324
Two strings are isomorphic if the characters in s can be replaced to get t.Can efficiency be increased further for this code? As I am getting errors that limit is being exceeded for very large strings.static boolean isIsomorphic(String s, String t) {s= s.toLowerCase();t = t.toLowerCase();if(s.length()!=t.length()){ return false;}if(s.equalsIgnoreCase(t)){ return true;} HashMap<Character,Integer> mapOfFirst = new HashMap<Character,Integer>(); HashMap<Character,Integer> mapOfSec = new HashMap<Character,Integer>(); int cnt1 =0 ; int cnt2 =0 ; for(int i =0;i<s.length();i++){ if(mapOfFirst.get(s.toCharArray()[i])!=null){ } else{ mapOfFirst.put(s.toCharArray()[i],cnt1); cnt1 = cnt1+1; } } for(int i =0;i<t.length();i++){ if(mapOfSec.get(t.toCharArray()[i])!=null){ } else{ mapOfSec.put(t.toCharArray()[i],cnt2); cnt2 = cnt2+1; } } char[] sCharArray_Fir = s.toCharArray(); char[] sCharArray_Sec= t.toCharArray(); for(int i = 0 ; i< s.length();i++){ int ch1 = mapOfFirst.get(sCharArray_Fir[i]); int ch2 = mapOfSec.get(sCharArray_Sec[i]); if(ch1!=ch2){ return false; } } return true;}
Isomorphic Strings
java;algorithm;time limit exceeded
The efficiency could be increased if you placechar[] sCharArray_Fir = s.toCharArray();char[] sCharArray_Sec= t.toCharArray();before your for loops and replace everys.toCharArray()[i]withsCharArray_Fir[i]The same applies tot.toCharArray()[i].Each time you call that, your string is converted to char array.Call it once before the for loops.Also, consider removing the if conditional, and fill your maps first instead of checking them for mappings while they're empty:for(int i =0;i<s.length();i++){ // if(mapOfFirst.get(s.toCharArray()[i])!=null){ // // } // else{ mapOfFirst.put(sCharArray_Fir[i],cnt1); cnt1 = cnt1+1; // }}Another thing is that you already checked if your strings are of the same length, so you can include contents of your second for loop in the first one (use the same cnt value too):for(int i =0;i<s.length();i++){ mapOfFirst.put(sCharArray_Fir[i],cnt1); mapOfSec.put (sCharArray_Sec[i],cnt1); cnt1 = cnt1+1; }
_unix.337608
I have been using Qubes 3.2 for 5 months now without any significant problems. Then, out of nothing, I can't login. Qubes passed to sddm login screen as usual, then I could choose from desktop environments and type password but, after I did this, cross cursor (x) showed up and nothing more happend. It's not frozen, I can still switch to tty and login there, but even no ethernet controller has been loaded. Creating new user did also nothing.Some time ago I answered a question here regarding exactly the same scenario on standard fedora 24 but that steps won't work for me so.. I believe someone ever solved that as it happend twice to me (fedora support forums are still ignoring this).Note that I has been using Qubes clean install,no testing repos enableddomains and dom0 were up to date5 months without a bugno noob magic usedQubes is based on fedora 23 and xen kernel...Any fresh idea will be awesome as I have to boot an less secure OS now and don't have time and handy tools to secure it myself (better nothing than hacked you know).
Qubes (Fedora 23) won't login
fedora;xfce;xen;qubes;sddm
null
_webmaster.2408
Could someone explain me what could be an advnatage of being on Twitter for a company website?Except the fact that it's on fashion right now. :)I'm asking because on so many site I find the little bird with the words FOLLOW us on Twitter.Cool I click on the bird and it brings me to the Twitter page of the company.Ihave seen these Twitter's page on many sites, but these pages are always a list of short messages that I don't understand. Soem seem to be private message, some seem to be advertisments from others.What should I FOLLOW in those messages?How should I read through those messages?Could you give a real life example that shows what's a potenatial advanatge of being on Twitter?ThanksUPDATE:Just some example of hosting companies useing Twitter. Plz have a look and explain me what's the catch, what are they getting out from these (to me useless) twitter pages?http://twitter.com/inmotionhostinghttp://twitter.com/arhoster
Web marketing: tangible advantages of being on Twitter?
twitter;marketing;social networks
A raw twitter stream will typically look like gibberish for the uninitiated, especially when Twitter is used right. The short format (max 140 characters) typically force the tweeter to use very domain specific lingo, which may look weird for the outsider. This is why you might feel alienated when you visit certain Twitter streams.You should only follow companies/people you care about, and you need not read all their tweets, at least not addressed messages (those tweets start with @somename). Create your own twitter login and follow those that interest you. When you follow someone, you don't see addressed messages to strangers and this typically remove the 'private noise' you see on their Twitter profile page. If you find the tweets from someone boring or just too frequent, stop following them. Tweet, re-tweet and reply wisely. When done well people will start to follow you.Some companies tweet short sensible announcement, like a long boring list of short news headlines. The more savvy companies manage to engage with 'their followers' - people (typically customers) who somehow take an interest in what the company is doing. Companies who manage to do this have a 'direct line' to their most hard core customers/fans out there. They can chat back and forth with them both as a collective group and more individually.I would say that Twitter has value for a company if - and only if - they can reach and engage customers, potential customers or relevant influential people via their Twitter activity. Twitter without engagement is probably a waste of time. Obvious benefits of engaging relevant people via Twitter are keeping the outside world up-to-date with your products and getting feedback from people who care enough to give it, but you can also use Twitter to build up your brand as a caring company with great opinions on industry matters (react to feedback, share tips, voice you opinion on major relevant events). If you manage to do this you will probably get higher customer loyalty, and as a side effect you also get great input for future product development and voicing your opinions might also spawn of more awareness inside your company.Some companies can benefit from Twitter while other would be wasting their time. If you sell high-end skateboards to rich teens, Twitter could probably boost your sales a lot. If you are a mortician I'd guess you wouldn't find Twitter useful.Question UPDATE:http://twitter.com/inmotionhosting talks with customers and respond to positive feedback and negative feedback - those customers are probably happy being heard. He is also following people who contact him. Both can build loyalty.He is sharing some tips and he is showing a social side of the company by tweeting about employees birthdays and pictures from social events - this is to try to make him interesting to follow and show a human face. He also has some personal now Im going to meeting tweets which I guess he might as well not do I expect no one cares.In short he is engaging customers and showing a side of the company that would otherwise have gone unnoticed. The followers (that read his tweets) are often reminded of the companys existence and get a more personal relationship with the company.http://twitter.com/arhosterjust use twitter as a list of news headlines. They dont seem to spend much time on it and they dont seem to be using Twitter as a social media. They might be starting up their twitter activity it takes time to learn Twitter and engage people.
_unix.63379
I'm running Solaris 8 on a Sun Ultra 2 connected to a Verizon router. nslookup seems to work (it finds google.com) but the existing browser (Netscape 4.76 - yes - it's very old) fails to reach any web page Unable to connect to server (TCP Error. Network is unreachable).Obviously I'm a novice when it comes to these connectivity issues. Any help will be appreciated.
Browser fails to reach any host on Solaris 8
solaris;tcp;browser
null
_unix.202828
I'm writing a custom reboot program in C and trying to decide if I should use reboot(2) directly or call system(/sbin/reboot).Both reboot(8) and init 6 change the runlevel and gracefully shut down services, then unmount all filesystems. But, reboot(2) does neither of these things.When should reboot(2) be used in preference to reboot(8)?(I know from the man page to call sync(2) before reboot(2).)
When to use reboot(2) vs reboot(8)
linux;init
null
_unix.168170
How can I list the X clients which registered for a specific keyboard event (i.e. a key press; a shortcut thing). Those things are called passive key(board) grabs.And the list should contain what that application is registered for what keysyms (with what modifiers).
Which application receives which hotkey? (List X clients which hold key grabs.)
x11;keyboard shortcuts;xkb
null
_cstheory.10142
I am faced with the following question on max. integer multiflow:INSTANCE: An acyclic directed graph G=(V,E), a capacity function c:EN, k pairs of vertices (si,ti) and a demand function d:{1,,k}N.Objective: Find the integer flows satisfying maximum demand.what is the hardness status of this problem?
multi-commodity flow acyclic digraphs
cc.complexity theory;graph theory;max flow;multicommodity flow
null
_codereview.161691
I am currently using the following query to add a specific row (with 10 columns) from an Excel spreadsheet (~1500 rows and hosted on SharePoint - I've also tested locally with the same issue) to an array in PowerShell.$connection = New-Object System.Data.OleDb.OleDbConnection$connectstring = Provider=Microsoft.ACE.OLEDB.12.0;Data Source=$Source;Extended Properties='Excel 12.0 Xml;HDR=YES'; $connection.ConnectionString = $connectstring$connection.open()$cmdObject = New-Object System.Data.OleDb.OleDbCommand$query = Select * from [Sheet1$] WHERE [Sheet1$].[Test] = '$Test'$cmdObject.CommandText = $query$cmdObject.CommandType = Text$cmdObject.Connection = $connection$oReader = $cmdObject.ExecuteReader()[void]$oReader.Read() $Global:oData = New-Object PSObject $oData | Add-Member NoteProperty List1 $oReader[0] ... $oData | Add-Member NoteProperty List10 $oReader[9]$oReader.Close()$cmdObject.Dispose()$connection.Close()$connection.Dispose()This works absolutely fine, however, it is often quite slow. Is there any way in which I could speed up the query? I can't add the entire Excel sheet into an array, as the data changes throughout the day and is queried regularly.I've found other questions, such as this one. However, that doesn't seem relevant to this particular issue.
Excel query within PowerShell
performance;excel;powershell;sharepoint
null
_softwareengineering.131698
I am working on an issue where the exception only occurs in our production environment. I don't have access to these environments, nor do I know what this exception means. Looking at the error description, I'm unable to understand the cause.javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failureWould someone please advise me on how to approach this kind of problem?
How can I debug exceptions that are not easily reproducible and only occur in a production environment?
exceptions
In general, better debug logging. Figure out what you want to know, add it to the code, and have that in the logs so that you can work it out. Capturing more details of the environment at the time also help - what request, when, etc.In specific, I would look for a common pattern in clients hitting this - and if you found one optimise - but then go and capture the TCP layer traffic.Looking at the SSL messages exchanged should give you some idea what is going wrong in the protocol, or at least what the common properties of the request are. Once you have that it should be closer to being debugged.As a guide, I would guess this comes from one of three things:Something that isn't SSL talked to the SSL port. (port scans are common, but HTTP to the HTTPS port also happens.)The client doesn't share an acceptable set of ciphers with the server.The client offers a certificate, and the server has a hissy-fit. (Uncommon, but possible.)
_unix.147857
Supervisord is running on centos server. If I dops -e -o %mem,%cpu,cmd | grep supervisord | awk '{memory+=$1;cpu+=$2} END {print memory,cpu}'I get 0 0 just because supervisord is just an initialization daemon. It runs four child processes on my server:# pgrep -P $(pgrep supervisord) | wc -l4How can I find summarized CPU and mem usage of these child processes in one-line-command?Thanks
How to find the cpu and memory usage of child processes
shell;centos
null
_vi.6697
If I have the following text:foobarI visually select it and copy it.The text is now stored in the unnamed register and here is its contents (output of :reg ): foo^Jbar^JAccording to this chart, it seems ^J is the caret notation for a Line Feed. If I want to duplicate the unnamed register in the a register by typing: :let @a = @Here is its contents (output of :reg a): a foo^Jbar^JIt didn't change.If I now duplicate it in the search register by typing :let @/ = @, here is its contents (output of :reg /):/ foo^@bar^@According to the previous chart, it seems ^@ is the caret notation for a Null character.Why is a Line Feed automatically converted into a Null character inside the search register (but not the a register)?If I insert the unnamed register on the command line (or inside a search after /), by typing :<C-R>, here is what is inserted::foo^Mbar^MAgain, according to the last chart, ^M seems to be the caret notation for a Carriage Return.Why is a Line Feed automatically converted into a Carriage Return on the command line?Edit: Usually you can insert a literal control character by typing:<C-V><C-{character in caret notation}> For example, you can insert a literal <C-R> by typing <C-V><C-R>.You can do it for seemingly any control character.However I've noticed that I'm unable to insert a literal LF inside a buffer or on the command line, because if I type: <C-V><C-J> it inserts ^@, a null character, instead of ^J.Is it for the same reason a LF is converted into NUL inside the search register?Edit 2:In :h key-notation, we can read this:<Nul> zero CTRL-@ 0 (stored as 10) <Nul><NL> linefeed CTRL-J 10 (used for <Nul>)The stored as 10 part on the first line and used for <Nul> on the second line could indicate that there's some sort of overlap between a LF and a NUL, and that they could be interpreted as the same thing. But they can't be the same thing, because after executing the previous command :let @/ = @, if I type n in normal mode to get to the next occurrence of the 2 lines foo and bar, instead of getting a positive match, I have the following error message: E486: Pattern not found: foo^@bar^@Besides this link seems to explain that a NUL denotes the end of a string, whereas a LF denotes the end of a line in a text file.And if a NUL is stored as 10 as the help says, which is the same code as for a LF, how is Vim able to make the difference between the 2?Edit 3:Maybe a LF and a NUL are coded with the same decimal code, 10, as the help says. And Vim makes the difference between the 2 thanks to the context. If it meets a character whose decimal code is 10 in a buffer or any register, except the search and command registers, it interprets it as a LF.But in the search register (:reg /) it interprets it as a NUL because in the context of a search, Vim only searches for a string where the concept of end of line in a file doesn't make sense because a string is not a file (which is weird since you can still use the atom \n in a searched pattern, but maybe that's only a feature of the regex engine?). So it automatically interprets 10 as a NUL because it's the nearest concept (end of string end of line).And in the same way, on the command line / command register (:reg :) it interprets the code 10 as a CR, because the concept of end of line in a file doesn't make sense here. 
The nearest concept is end of command so Vim interprets 10 as a CR, because hitting Enter is the way to end/execute a command and a CR is the same as hitting Enter, since when you insert a literal one with <C-V><Enter>, ^M is displayed.Maybe the interpretation of the character whose code is 10 changes according to the context: end of line in a buffer (^J)end of string in a search (^@)end of command on the command line (^M)
Why is a Line Feed converted into a Null character inside the search register and into a Carriage Return on the command line?
search;register;line breaks
null
_unix.106841
I am not sure what's wrong with my rule in the Iptables. I am actually trying to build a SSH server and following a cyberciti tutorial.My Iptables configuration looks like this:# Firewall configuration written by system-config-firewall# Manual customization of this file is not recommended.*filter:INPUT ACCEPT [0:0]:FORWARD ACCEPT [0:0]:OUTPUT ACCEPT [0:0]-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT-A INPUT -p icmp -j ACCEPT-A INPUT -i lo -j ACCEPT-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT-A INPUT -j REJECT --reject-with icmp-host-prohibited-A FORWARD -j REJECT --reject-with icmp-host-prohibitedCOMMIT-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPTWhen I restart of iptables, this is the transcript:[root@localhost raja]# service iptables restartiptables: Setting chains to policy ACCEPT: filter [ OK ]iptables: Flushing firewall rules: [ OK ]iptables: Unloading modules: [ OK ]iptables: Applying firewall rules: iptables-restore: line 14 failed [FAILED][root@localhost raja]#I tried placing COMMIT at the end, but it didn't help:[root@localhost raja]# cat /etc/sysconfig/iptables# Firewall configuration written by system-config-firewall# Manual customization of this file is not recommended.*filter:INPUT ACCEPT [0:0]:FORWARD ACCEPT [0:0]:OUTPUT ACCEPT [0:0]-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT-A INPUT -p icmp -j ACCEPT-A INPUT -i lo -j ACCEPT-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT-A INPUT -j REJECT --reject-with icmp-host-prohibited-A FORWARD -j REJECT --reject-with icmp-host-prohibited-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPTCOMMIT[root@localhost raja]# service iptables restartiptables: Setting chains to policy ACCEPT: filter [ OK ]iptables: Flushing firewall rules: [ OK ]iptables: Unloading modules: [ OK ]iptables: Applying firewall rules: iptables-restore: line 13 failed [FAILED][root@localhost raja]#What's wrong?I am using CentOS 32-bit.
iptables-restore just says FAILED
centos;iptables
The rule -A RH-Firewall-1-INPUT adds a rule to the chain called RH-Firewall-1-INPUT, but there is no prior line that creates this chain.This is not the output from iptables-save. I don't recommend writing an iptables rule file manually (which makes this tutorial dubious; I haven't looked at it further). Use iptables to create and manipulate rules. When you're satisfied with the state of the system, run iptables-save to save it to a file, and don't edit that file.
_softwareengineering.215382
I want to create a web/browser-based GUI for a command-line python application. The goal is to make use of HTML/JS technologies to create this GUI. As the application itself, it needs to run on Linux and Windows, and the interface will be accessible only from localhost (not exposed to internet). The GUI will contain 5 to 10 pages. I don't want a traditional desktop GUI that includes HTML/JS, but just a bunch of html files and some kind of controller between those and the application.I also want to make use of asynchronous programming (ajax like) so I can load and print data in the GUI without refreshing the whole page. I'd probably use jQuery for that and a couple other things. How would you recommend to design this? Performance is not the key here, I'm rather looking at reliability, portability and simplicity.I'm thinking of using a lightweight python HTTP server / framework (like CherryPy) and maybe later a Python templating system (at the begining it will just be a couple pages).EDIT:I'm looking for ideas/recommendations how to build this, not for alternatives to browser/web-based GUI.
Browser-based GUI for a python application
design;python;web applications;gui;browser
null
_codereview.18748
I am writing a simple-ish C# CLI app to improve our telephony reporting system. At present, I have a nightly DTS package which dumps a couple of tables from the Cisco telephony box into a database on our corporate cluster. An hour later, I have a SQL Server Agent job that runs a variety of messy stored procedures to generate queue-level statistics. I was advised by a telephony consultant to do it this way, but have never been 100% happy with it, due to the lack of error handling.

I have found 6 stored procedures on the UCCX Cisco telephony box, which are used by the Historical Cisco Reporting application. These contain a lot more statistics than I can currently provide.

So... my plan is as follows:

1. Get a list of current queue names, including their open and close times, from a SQL Server table
2. Run a stored procedure x amount of times, once for each queue, passing in the parameters from step 1
3. For each row generated by the stored procedure in step 2, write a row in a table on the SQL Server cluster
4. Do this for each stored procedure.

My code, which works for one of the stored procedures, is here. It is very very basic, with poor errrmmm everything ha. I want to make this a bit more OO, remove duplicate code, and make it a bit better, but I am not too great at that. I can code, but not at a very advanced standard.

Any advice on how to tackle this beast? I am willing to learn new techniques, and am willing to chuck out what I've done thus far and start again. I really need to skill up, as I'll be getting more projects in a similar vein.

Thanks guys and gals!

I have two classes, the main class and the LogFile class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Configuration;

namespace UCCXtoSQL
{
    public static class LogFile
    {
        public static void write(string logMessage)
        {
            string message = string.Empty;
            string logFileLocation = @"C:\debug\UCCXtoSQL.log";
            StreamWriter logWriter;
            message = string.Format("{0}: {1}", DateTime.Now, logMessage);
            if (!File.Exists(logFileLocation))
            {
                logWriter = new StreamWriter(logFileLocation);
            }
            else
            {
                logWriter = File.AppendText(logFileLocation);
            }
            logWriter.WriteLine(message);
            logWriter.Close();
        }
    }
}

Main:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.Sql;
using System.Data.SqlClient;
using System.Data.SqlTypes;

namespace UCCXtoSQL
{
    class Program
    {
        static void Main(string[] args)
        {
            getData();
        }

        static void getData()
        {
            string connString = @"Data Source=uccx-pri\crssql;Initial Catalog=db_cra;Integrated Security=SSPI;";
            string sql2k5ConnString = @"Data Source=sql2k5;Initial Catalog=db_cra;User Id=sqlsupport;Password=blahblahblah;";
            string procedure = @"sp_csq_activity";
            int id = 0;
            //string queueName = @"|CSQ-SHG01";
            SqlConnection conn = new SqlConnection(connString);
            SqlConnection sql2k5conn = new SqlConnection(sql2k5ConnString);
            DataTable csq = getCSQTable(sql2k5conn);
            foreach (DataRow row in csq.Rows)
            {
                id++;
                string name = row["CSQName"].ToString();
                string open = row["CSQOpen"].ToString();
                string close = row["CSQClose"].ToString();
                string format = "yyyy-MM-dd ";
                DateTime today = DateTime.Now.Date.AddDays(-1);
                Console.WriteLine(id + " " + name);
                LogFile.write(id + " " + name);
                string paramstart = (today.ToString(format) + open);
                string paramend = (today.ToString(format) + close);
                Console.WriteLine("Queue open: " + paramstart);
                Console.WriteLine("Queue close: " + paramend);
                //var Open = TimeSpan.Parse(row["CSQOpen"].ToString());
                //string statsDate = DateTime.Now.ToShortDateString();
                //string newDateTime = statsDate + Open;
                //Console.WriteLine("Converted " + newDateTime);
                csqActivity(paramstart, paramend, procedure, name, conn, sql2k5conn);
            }
        }

        private static void csqActivity(string pstart, string pend, string procedure, string queueName, SqlConnection conn, SqlConnection sql2k5conn)
        {
            try
            {
                queueName = @"|" + queueName;
                SqlDataAdapter da = new SqlDataAdapter();
                da.SelectCommand = new SqlCommand(procedure, conn);
                da.SelectCommand.CommandType = CommandType.StoredProcedure;
                //da.SelectCommand.Parameters.Add(new SqlParameter("@starttime", "2012-11-08 08:30:00"));
                //da.SelectCommand.Parameters.Add(new SqlParameter("@endtime", "2012-11-08 17:15:00"));
                da.SelectCommand.Parameters.Add(new SqlParameter("@starttime", pstart));
                da.SelectCommand.Parameters.Add(new SqlParameter("@endtime", pend));
                da.SelectCommand.Parameters.Add(new SqlParameter("@csqlist", queueName));
                DataSet ds = new DataSet();
                da.Fill(ds, "result_name");
                DataTable dt = ds.Tables["result_name"];
                foreach (DataRow row in dt.Rows)
                {
                    //Console.WriteLine(row["CSQ_Name"]);
                    Console.WriteLine("Queue: {0} - Presented: {1}", row["CSQ_Name"], row["Calls_Presented"]);
                    LogFile.write("Queue: " + row["CSQ_Name"]);
                    LogFile.write("Presented: " + row["Calls_Presented"]);
                    InsertRecord(sql2k5conn, row, pstart, pend, queueName);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("Error: " + e);
                LogFile.write("Error: " + e);
            }
            finally
            {
                Console.WriteLine("Done");
                conn.Close();
            }
        }

        private static void InsertRecord(SqlConnection sql2k5conn, DataRow row, string start, string end, string queue)
        {
            try
            {
                DateTime startdate = Convert.ToDateTime(start);
                DateTime endDate = Convert.ToDateTime(end);
                sql2k5conn.Open();
                SqlCommand sql2k5comm = new SqlCommand("insert_csq_activity", sql2k5conn);
                sql2k5comm.CommandTimeout = 0;
                sql2k5comm.CommandType = CommandType.StoredProcedure;
                //sql2k5comm.Parameters.Add(new SqlParameter(/*PARAMNAME*/, /*PARAM*/));
                sql2k5comm.Parameters.Add(new SqlParameter("@CSQ_Name", row["CSQ_Name"]));
                //sql2k5comm.Parameters.Add(new SqlParameter("@CSQ_Name", queue));
                sql2k5comm.Parameters.Add(new SqlParameter("@Call_Skills", row["Call_Skills"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Calls_Presented", row["Calls_Presented"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Avg_Queue_Time", row["Avg_Queue_Time"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Max_Queue_Time", row["Max_Queue_Time"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Calls_Handled", row["Calls_Handled"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Avg_Speed_Answer", row["Avg_Speed_Answer"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Avg_Handle_Time", row["Avg_Handle_Time"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Max_Handle_Time", row["Max_Handle_Time"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Calls_Abandoned", row["Calls_Abandoned"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Avg_Time_Abandon", row["Avg_Time_Abandon"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Max_Time_Abandon", row["Max_Time_Abandon"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Avg_Calls_Abandoned", row["Avg_Calls_Abandoned"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Max_Calls_Abandoned", row["Max_Calls_Abandoned"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Calls_Dequeued", row["Calls_Dequeued"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Avg_Time_Dequeue", row["Avg_Time_Dequeue"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Max_Time_Dequeue", row["Max_Time_Dequeue"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@Calls_Handled_by_Other", row["Calls_Handled_by_Other"]));
                sql2k5comm.Parameters.Add(new SqlParameter("@CSQ_StartDateTime", startdate));
                sql2k5comm.Parameters.Add(new SqlParameter("@CSQ_EndDateTime", endDate));
                sql2k5comm.ExecuteNonQuery();
            }
            catch (Exception e)
            {
                Console.WriteLine("Error: " + e);
                LogFile.write("Error: " + e);
            }
            finally
            {
                sql2k5conn.Close();
            }
        }

        static DataTable getCSQTable(SqlConnection connection)
        {
            SqlDataAdapter da = new SqlDataAdapter();
            da.SelectCommand = new SqlCommand("get_csqnames", connection);
            da.SelectCommand.CommandType = CommandType.StoredProcedure;
            DataSet ds = new DataSet();
            da.Fill(ds, "csqTable");
            DataTable dt = ds.Tables["csqTable"];
            return dt;
        }
    }
}
How to dump UCCX stored procedure results via c# to SQL Server
c#;sql;sql server
Your task falls into the ETL (Extract-Transform-Load) category, and SQL Server Integration Services (SSIS) is the service dedicated to ETL. You could actually implement all the processing described here as an SSIS package (the next version of DTS), and it might result in a faster and more reliable solution, because SSIS uses stream-based processing and optimised interaction with SQL Server.

If you still want to use a .NET app to do the transfer, then the suggestion will vary depending on how much data you need to transfer, and how likely it is that you would need to maintain this solution in the future.

If the data volume is not large (i.e. less than a couple of thousand records) then you can keep using DataTables; otherwise it would be better to switch to streaming techniques (process the data while you load it).

If it's a one-time implementation then usually it's not worth investing a lot of effort into a clean design, as long as the program does the job.

Since you've asked for suggestions to clean up the code, here is my list of what I would do with it:

Apply .NET naming conventions (don't use Hungarian notation in particular).

write method: use the using keyword for all disposable objects (StreamWriter) + you can initialise logWriter with a single line:

public static void Write(string logMessage)
{
    const string logFileLocation = @"C:\debug\UCCXtoSQL.log";
    using (var logWriter = new StreamWriter(logFileLocation, true))
    {
        logWriter.WriteLine(string.Format("{0}: {1}", DateTime.Now, logMessage));
    }
}

Replace DataSet/DataTable with Entity Framework - that's the biggest change, since it will remove all the manual SqlCommand/SqlConnection/DataTable/DataSet/DataAdapter processing in favor of typed data objects. Read more on Entity Framework.

Replace your custom LogFile logging class with a logging framework (log4net, NLog).
_unix.186172
I am trying to implement synproxy on my firewall, on a bridge interface (between eth1 and eth2). Here are my rules (note: the first rule's chain flag was presumably -I, since -i is already used for the interface):

/usr/local/sbin/iptables -t raw -I PREROUTING -i br0 -m physdev --physdev-in eth1 -p tcp -m tcp --syn -j CT --notrack
/usr/local/sbin/iptables -A FORWARD -i br0 -m physdev --physdev-in eth1 -p tcp -m tcp -m state --state INVALID,UNTRACKED -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460
echo 0 > /proc/sys/net/netfilter/nf_conntrack_tcp_loose

I have a client connected to eth1 and an HTTP server (192.168.0.1) connected to eth2.

With these rules, when I try to run curl 192.168.0.1 from the client I get a timeout. It seems that the normal TCP request does not pass through. When I run tcpdump on br0 I don't see any SYN-ACK or ACK packets. It seems all SYN packets get lost in the SYNPROXY target.

Any ideas?
synproxy on a bridge
linux;bridge
null
_unix.30741
... and what are the differences between them? I formulated my question like this to make it clear I'm not interested in a flamewar of opinions, rather in an objective comparison between the different flavors of BSD Unix. Ideally I could get feedback from users who have experience in all of them.

Background

I recently discovered that there's much more to Unix than merely Linux. I use Solaris at work; it opened my eyes. Now I'm interested in new unices, I want to try a new one and I'm naturally curious about BSDs.

The problem

I'm not asking for advice or opinions on what BSD to install; I want to know the differences (and common points) between them so I can make up my own mind. The problem is that it's difficult to get proper comparisons between them.

If you're lucky, you get some hasty definition like this one:

FreeBSD = Popular all-rounder.
NetBSD = Portable (runs on a lot of platforms, including a toaster)
OpenBSD = Security above anything else.

(It might be true, but it's not really useful. I'm sure FreeBSD is portable and secure as well ...)

If you're unlucky you get caught in one of those inevitable Unix legends about projects splitting, forking, rebranding on intellectual/moral grounds, how Theo de Raadt is an extremist and how MacOS X and FreeBSD had a common ancestor over 20 years ago. Fascinating, but not really informative, is it?

The BSDs

The BSDs I am interested in are:

FreeBSD
OpenBSD
NetBSD

and optionally

Dragonfly
Darwin
...

My questions

In order to understand the differences better, here's a list of somewhat related questions about the different distributions (can we use this term?). If you present your answer under some form of tabular data, you are my all-time hero!

Do they use the same kernel?
Do they use the same userland tools? (what are the differences, if any?)
Do they use the same package/source management system?
Do they use the same default shell?
Are binaries portable between them?
Are sources portable between them?
Do they use different directory trees?
How big are their respective communities? Are they the same order of magnitude?
How much of the current development is common?
What are the main incompatibilities between them?

I don't know how easy those questions are to answer, and how relevant to the StackExchange format this question really is. I just never came across a simple document listing the differences between BSDs in a clear way, useful for fairly experienced users to look at and make a choice easily.
What do different BSDs have in common?
distribution choice;bsd
I don't think I will provide you and everyone with the perfect answer; however, using a BSD system every day for work, I am sure I can give you a useful insight into the BSD world.

I have never used NetBSD, so I won't talk a lot about it.

Do they use the same kernel?

No, although there are similarities due to the historic forks. Each project evolved separately.

Do they use the same userland tools? (what are the differences, if any?)

They all follow POSIX. You can expect a set of tools to have the same functionality between *BSD. It's also common to see some obvious differences in process/network management tools within the BSDs.

Do they use the same package/source management system?

They each provide a packaging system, different for each OS.

Do they use the same default shell?

No; for example, FreeBSD uses csh and OpenBSD uses ksh.

Are binaries portable between them?

No:

(XXXX@freebsd-6 101)file `which ls`
/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (FreeBSD), for FreeBSD 5.5, dynamically linked (uses shared libs), stripped

They don't really support stable and fast binary emulation. Don't rely on it.

Are sources portable between them?

Some are, as long as you don't use kernel code or libc code (which is tied tightly to the OS), for example.

Do they use different directory trees?

No, they are very similar to Linux here. However, FreeBSD advocates the use of /usr/local/etc for third-party software's configuration files; OpenBSD puts it all in /etc... They put all third-party software in /usr/local, whereas Linux distributions do as they see fit. In general you can say that the *BSDs are very conservative about that: things belong where they belong, and that's not something to make up.

How big are their respective communities? Are they the same order of magnitude?

FreeBSD's is the largest and most active; you can reach it through a lot of different forums, mailing lists, IRC channels and such... OpenBSD has a good community, but mostly visible through IRC and mailing lists. Actually, if you think you need a good community, FreeBSD is the way to go.

NetBSD and OpenBSD communities are centered around development, talking about new improvements, etc. They don't really like to do basic user support or advertising. They expect everyone to be an advanced Unix user and able to read the documentation before asking anything.

How much of the current development is common?

Due to really free licenses, code can flow among the projects: OpenBSD often patches their code following NetBSD (as their sources have a lot in common), FreeBSD takes and integrates OpenBSD's Packet Filter, etc. It's obviously harder when it comes to drivers and other kernel things.

What are the main incompatibilities between them?

They are not compatible in binary form, but they are mostly compatible in syntax and code. You can rely on that to achieve portability in your code. It will build and/or execute easily on all flavors of BSD, except if you're going too close to the kernel (ifconfig, pfctl...).

Here's how you can enjoy learning from the BSD world:

Try to replace your home router with an OpenBSD box, and play with pf and the network. You will see how easy it is to make what you want. It's clean, reliable and secure.
Use FreeBSD as a desktop; they support a lot of GPUs, you can use Flash to some extent, and there's some compatibility with Linux binaries. You can safely build your custom kernel (actually this is recommended). It's overall a good learning experience.
Try NetBSD on very old hardware or even toasters.

Although they are different, each of them tries to be a good OS, and each will match users more than situations. As a learning experience, try them all (Net/Open/Free), but later you might find yourself using only one for most situations (since you're more knowledgeable in a specific system or fit in more with the community).

The other BSDs are hybrids or just slightly modified versions; I find it better to stay close to the source of the software development (use Packet Filter on OpenBSD, configure your desktop yourself on FreeBSD, ...).

As a personal note, I'm happy to see an enthusiast like you, and I hope you will find a lot of good things in the BSD world. BSD is not about hating Windows or other OSs, it's about liking Unix.
_unix.191336
My task is simple: I need to be able to delete the whole root, which I can do with rm -rf ./*, but the problem here is that HFS recovery can get the data back, since it is not actually deleted from disk.

If I use shred, how can I shred the whole disk?

How can I combine these two commands and make it work? Is it even possible? Is there still some chance that the data can be recovered?

Is there another way of filling the files with zeros or unwanted data (instead of shred) and then doing an rm -rf? (One zero-fill approach is sketched after this entry.)

Any help is appreciated.
How to delete the whole root with rm -rf and shred together?
root;rm;deleted files;shred
null
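Regarding the zero-fill idea in the question above: as a minimal, illustrative sketch only (not a security guarantee - journaling file systems, snapshots, and SSD wear-levelling can all keep stale copies of data), one could overwrite each regular file in place before unlinking it. Hypothetical Python, with a placeholder target path:

import os

def zero_fill(path):
    # Overwrite a regular file with zeros in 1 MiB chunks, flush, then unlink it.
    remaining = os.path.getsize(path)
    with open(path, "r+b") as f:
        while remaining > 0:
            n = min(remaining, 1 << 20)
            f.write(b"\0" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # push the zeros to the device before removal
    os.remove(path)

def wipe_tree(root):
    # Bottom-up walk, so directories are empty by the time we rmdir them.
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            p = os.path.join(dirpath, name)
            if os.path.isfile(p) and not os.path.islink(p):
                zero_fill(p)
            else:
                os.remove(p)  # symlinks and other non-regular files: just unlink
        for name in dirnames:
            os.rmdir(os.path.join(dirpath, name))

wipe_tree("/target/dir")  # placeholder path; never point this at a live root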
_opensource.2026
I've read that the BSD-2 or Simplified BSD software license removes from BSD-3 some language that is "no longer necessary". Looking at the difference between these two licenses, it appears that this refers to "Neither the name of [project] nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission."

In what sense is this language no longer necessary? Is this protection implicit because of some other statute or agreement?
What's no longer necessary about parts of BSD-3?
licensing;bsd
null
_unix.377955
Sometimes the rpmdb gets corrupted, usually due to some process dying. The fix is quite easy: simply run rpm --rebuilddb, and maybe remove the lock and some other files.

My question is: is there any way to check whether the rpmdb is corrupted or not before trying to use it?

Just to give some context, I am managing multiple machines and sometimes the rpmdb gets corrupted; I'm looking for a simple way to check.
Check rpmdb corruption
fedora;rpm
null
_unix.316618
In a subfolder of my home dir, there is a file called '-r':

-rw-r--r-- 1 pi pi 10240 Sep 15 18:19 ./prog_python/-r

I don't even know why. I would like to rename the file or delete it, but nothing like rename '/-r' newName or rename '\-r' newName works.

Any advice? (One working approach is sketched after this entry.) Thanks!
How to rename or handle a '-r' named file?
rename;escape characters
null
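For the '-r' question above: the usual tricks are to end option parsing with --, or to give a path that doesn't start with a dash. From Python this is a non-issue, since os.rename passes the name straight to the OS; a small illustration (the new name is just an example):

import os

# Python's os.rename does no option parsing, so the leading dash is harmless:
os.rename("./prog_python/-r", "./prog_python/newName")

# From a shell, the equivalents would be:
#   mv -- -r newName      (`--` marks the end of options)
#   mv ./-r newName       (a path that doesn't start with `-`)
#   rm ./-r               (same trick for deleting)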
_unix.96324
I'm trying to add myself to the fuse user group, but it doesn't look like the change is taking effect, even though /etc/group looks correct after invoking addgroup or usermod.

I've tried both:

sudo addgroup fjohnson fuse

and

sudo usermod -a -G fuse fjohnson

/etc/group shows the change:

fuse:x:104:fjohnson

but I can't read

-rw-r----- 1 root fuse 215 Oct 16 10:39 /etc/fuse.conf

as

cat: /etc/fuse.conf: Permission denied

and groups(1) returns

fjohnson adm dialout cdrom plugdev lpadmin admin sambashare
Added user to supplementary group, but 'groups(1)' not showing change
users;group
When you add a user to a group, that user should log out and log back in for the change to take effect. You can also use the newgrp command.

$ id
uid=1000(romain) gid=1000(romain) groups=1000(romain),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),105(scanner),110(bluetooth),112(netdev)
$ sudo addgroup romain fuse
Adding user `romain' to group `fuse' ...
Adding user romain to group fuse
Done.
$ id
uid=1000(romain) gid=1000(romain) groups=1000(romain),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),105(scanner),110(bluetooth),112(netdev)
$ newgrp fuse
$ id
uid=1000(romain) gid=103(fuse) groups=1000(romain),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),103(fuse),105(scanner),110(bluetooth),112(netdev)
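To see the gap that newgrp bridges, i.e. the difference between what the group database says and the credentials your running process actually has, here is a quick check from Python (purely illustrative):

import os, grp, getpass

user = getpass.getuser()
# Supplementary groups the current process actually has (what `id` reports):
active = sorted(grp.getgrgid(g).gr_name for g in os.getgroups())
# Groups the group database lists the user in (effective at next login):
configured = sorted(g.gr_name for g in grp.getgrall() if user in g.gr_mem)
print("active:    ", active)
print("configured:", configured)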
_unix.31149
It looks like I can get to the waste-basket through Nautilus, but when I look at the location given by Properties, I see trash:///. But I can't cd trash:///. Where is the waste-basket? And in general, if I can find a file in Nautilus, how do I get there from the terminal? I've had some similar issues in the past with mounted media as well, so a general answer would be greatly appreciated.

In case it is relevant, I'm using PinguyOS.
How to find Nautilus wastebasket in the file system
gnome;nautilus;trash
trash:// is a protocol, not a location. A post on AskUbuntu says it should be in ~/.local/share/Trash. Try there.
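If you want to reach the same place from a script: the Trash directory follows the FreeDesktop trash specification, with trashed items under files/ and their metadata under info/. A small illustrative Python snippet:

import os

trash = os.path.expanduser("~/.local/share/Trash")
for entry in os.listdir(os.path.join(trash, "files")):
    print(entry)
# Each item also has ~/.local/share/Trash/info/<name>.trashinfo
# recording its original path and deletion date.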
_unix.311313
I'm completely new to Ubuntu.

I ordered my new notebook and chose one without an OS. It had a lite version of Linpus installed when I got it, but it seemed completely useless to me. So I followed a guide to install Linux Mint with a USB stick. The installation seemed to have worked perfectly; during the installation process I chose to disable Secure Boot and to format all of the disk. Of course, after the installation it told me to restart, remove the installation medium, etc. I did that, and all it displayed then was "no bootable device".

I tried rebooting, but it was always the same game. I could boot via the USB stick and start the installation from scratch, but never reboot without the USB stick. I also switched between UEFI and legacy and played around with the boot priority order, and found that Secure Boot was always enabled in UEFI, without the option to disable it. In legacy mode I couldn't even boot from the USB stick.

Then I tried with Ubuntu MATE, but it's still the same issue.

Here is my most recent Boot-Repair report: http://paste2.org/pFB7gdW1

Any ideas how I can get any Ubuntu OS to work? I'd be happy with pretty much any version at this point.
Ubuntu Mate doesn't boot without usb stick after installation
boot;startup;uefi
null
_softwareengineering.289930
I am currently planning an upcoming project and am looking for an algorithm for searching a database.

The search is as follows: there will be some specifically labelled criteria (or fields), and I would like to find any objects whose fields match the specified criteria. As well as this, it needs to rank partial results based on the number of matches for each field.

Here's an example -

Person 1
Name: John
Occupation: Developer
Favourite Colour: Blue

Person 2
Name: John
Occupation: Manager
Favourite Colour: Blue

Person 3
Name: John
Occupation: Developer
Favourite Colour: Green

Person 4
Name: Larry
Occupation: Mailman
Favourite Colour: Red

Search Criteria
Name: John
Occupation: Developer
Favourite Colour: Blue

Results
Rank 1
Person 1
Rank 2
Person 2
Person 3

The ranks would not be visible, but would control the order of the result list. I could do this quite easily for a small data set, for example in JavaScript:

results = [];
for (var i = 0; i < objects.length; i++) {
    var result = _.intersection(criteria, objects[i]);
    if (result.length > 0) {
        objects[i].rank = result.length;
        results.push(objects[i]);
    }
}
return results (and order by rank)

Obviously this won't work when querying a db, but I am hoping someone much smarter than me can point me in the right direction. (A set-based sketch follows this entry.)

I feel like there must be a solution to this out there and it's probably simple, but my Google-fu is failing me.
Looking for an Appropriate Search Algorithm
algorithms;search
null
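For the ranking question above, a set-based query is the usual approach: sum one match flag per field and order by the total. A hedged sketch using Python's built-in sqlite3, with the table name, columns, and data made up to mirror the example:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, occupation TEXT, colour TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    ("John",  "Developer", "Blue"),
    ("John",  "Manager",   "Blue"),
    ("John",  "Developer", "Green"),
    ("Larry", "Mailman",   "Red"),
])

criteria = {"name": "John", "occupation": "Developer", "colour": "Blue"}
rows = con.execute("""
    SELECT name, occupation, colour,
           (name = :name) + (occupation = :occupation) + (colour = :colour) AS score
    FROM people
    WHERE name = :name OR occupation = :occupation OR colour = :colour
    ORDER BY score DESC
""", criteria).fetchall()

for row in rows:
    print(row)  # full matches first, then partial matches ranked by matched-field count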
_codereview.19307
@implementation User

- (void)refreshProperties
{
    assert(!self.isSyncing);

    NetworkOperation *op = [[NetworkOperation alloc] initWithEndPoint:GetUserDetails parameters:nil];
    [self setupCompletionForRefresh:op];
    [self setNetworkOperation:op];
    [self setSyncing:YES];
    [[NetworkManager sharedManager] enqueueOperation:op];
}

The above code is used by a UIViewController subclass to update the properties of a User instance (i.e. the view controller calls [aUser refreshProperties] in pull-to-refresh).

Concerted efforts have been made to make sure there are clear lines defining the separation of concerns:

The Network Operation class only deals with getting data from the network.
The Network Manager class deals with enqueuing operations and watching reachability.
The User class houses the strings, numbers, bools, etc. that make up a User instance.

However, there are some problems with the above method. First and foremost, it's hard to test. There isn't any way to mock a network operation or network manager and get them inside the method call without swizzling. Also, it couples the User class with the NetworkOperation and NetworkManager classes.

While the above code works, I would like to refactor it.

Looking through GoF, it looks like there are some ways to remove the dependency between User and NetworkOperation/NetworkManager: adapter, dependency inversion (using a protocol), etc. From what I can tell, that code would look something like:

[aUser setNetworkOperation:<object conforming to protocol>];
[aUser setNetworkManager:<object conforming to protocol>];
[aUser refreshProperties];

But... isn't this just shifting the dependency? The above calls would happen in the view controller, and now it has to know about and use three classes instead of just the main one it was originally concerned with - User.

Thoughts, discussion, resources, or code examples are greatly appreciated!
Dependency Inversion / Injection? - Networking code in model classes
objective c;ios
git checkout -b fetch_remote_users

Let's start by reviewing the concerns we want the system to address. We will need:

1. A place to store the set of attributes which define a user.
2. A request (to some URI with some set of parameters) which defines how we can obtain an updated view of a user.
3. A queue of requests to perform, with the ability to suspend execution of those requests when the network is unavailable and to retry network failures. We probably also want this queue to be an ordered set of requests so we can avoid enqueuing duplicate requests.
4. A way to react to successful requests so that we can merge and persist the newly received set of User attributes.
5. An interface to allow user interaction in some view to enqueue a new request and possibly to see some indication of the request's state (in progress, failed, finished).

That's a reasonable number of concerns, and it'll be easy to end up with a confused or tightly coupled design trying to express all of them, but let's see what we can do.

Starting with #1. We can define a User model to store our user attributes. It is likely that we'll find some behaviors which should also belong to this model, but let's start simple.

@interface User : NSObject
@end

So far so good. No coupling, and while we haven't been motivated to write a test yet, we could easily test this User in isolation.

On to #2. NetworkOperation is probably still a good name for this since we're probably going to be dealing with create, read, update, or destroy operations in order to synchronize the state of our User resource. We've only talked about reads so far, but it might be useful to capture the type of an operation when we create it so that we know what it is trying to accomplish.

This might be a good place to start writing tests. Let's think about creating a NetworkOperation (using Kiwi's BDD syntax).

describe(@"NetworkOperation", ^{
    context(@"when created", ^{
        __block NetworkOperation *operation;
        beforeEach(^{
            operation = [NetworkOperation operationOfType:CRUDOperationTypeRead
                                                      URL:[NSURL URLWithString:@"example.com/users/1"]
                                           withParameters:nil];
        });
        it(@"has an operation type", ^{
            [[theValue(operation.type) should] equal:theValue(CRUDOperationTypeRead)];
        });
        it(@"uses the specified URI", ^{
            [[operation.URL should] equal:[NSURL URLWithString:@"example.com/users/1"]];
        });
    });
});

That's easy enough to build.

typedef enum {
    CRUDOperationTypeCreate,
    CRUDOperationTypeRead,
    CRUDOperationTypeUpdate,
    CRUDOperationTypeDelete
} CRUDOperationType;

@interface NetworkOperation : NSObject
@property(readonly) CRUDOperationType operationType;
@property(readonly) NSURL *URL;
- (id)operationOfType:(CRUDOperationType)type URL:(NSURL *)url withParameters:(NSDictionary *)parameters;
@end

So we can create a NetworkOperation to encapsulate a request. That should also give us a good place to parse the response from whatever format was sent over the wire; we'll see about how to map it to our User domain object later. Who then should be responsible for actually creating NetworkOperations?

We could follow the example in the original question and create operations inline when needed, but, as noted, that's going to become hard to test and contributes to the coupling between the User model and the network classes. In order to separate the responsibilities of these classes it might be useful to be able to consider just the creation of a NetworkOperation as its own role. A Factory for building NetworkOperations for fetching Users.

It does however seem reasonable for User to be the authority on how to locate its remote representation. We can use a Strategy to allow our domain model objects to act as Factories for their appropriate NetworkOperations.

describe(@"User", ^{
    __block User *user;
    beforeEach(^{
        user = [[User alloc] init];
    });
    describe(@"as a RemoteResource", ^{
        it(@"builds a read operation", ^{
            NetworkOperation *read = [user buildReadOperation];
            [[theValue(read.operationType) should] equal:theValue(CRUDOperationTypeRead)];
        });
    });
});

@protocol RemoteResource
- (NetworkOperation *)buildReadOperation;
@end

@interface User : NSObject <RemoteResource>
@end

Now Users can define the NetworkOperations needed to update themselves, but the consumer of those operations can work with generic RemoteResources (some of whom happen to be users).

On to #3! We probably want to have an object which manages our network operations, and it is going to need to exist for most, if not all, of the life of our application so that it can keep track of the state of our operation queue as we wait for operations to finish. Sounds like a good fit for a service (often presented as part of the controller layer of MVC, but not a view controller and certainly not a UIViewController).

describe(@"RemoteResourceManager", ^{
    context(@"given an existing resource", ^{
        context(@"reading the resource", ^{
            pending(@"enqueues the read network operation");
            pending(@"sets a success callback block");
            pending(@"sets a failure callback block");
        });
    });
});

Hang on a second! That sounds like a nice interface for updating a RemoteResource, but we're not talking about NetworkOperations anymore. We could have this RemoteResourceManager maintain a queue of the operations it is using, but that seems like an internal detail that would be hard to test. Looks like we skipped a step; let's create another service instead. Something the RemoteResourceManager can depend on to manage operations, but which doesn't need to know anything about RemoteResources.

describe(@"NetworkOperationManager", ^{
    context(@"enqueuing an operation", ^{
        pending(@"adds the operation to the queue");
        context(@"when an equivalent operation is already in the queue", ^{
            pending(@"does not add the operation");
        });
    });
    context(@"when an operation succeeds", ^{
        __block id mockOperation;
        beforeEach(^{
            mockOperation = [NetworkOperation mock];
        });
        pending(@"calls the operation's success callback");
        pending(@"sends a notification containing the resource's identifier");
    });
    context(@"when an operation fails", ^{
        pending(@"calls the operation's failure callback");
        pending(@"sends a failure notification containing the resource's identifier");
    });
});

Now we can revisit our RemoteResourceManager.

describe(@"RemoteResourceManager", ^{
    __block RemoteResourceManager *manager;
    __block id mockNetworkOperationManager;
    beforeEach(^{
        mockNetworkOperationManager = [NetworkOperationManager mock];
        manager = [[RemoteResourceManager alloc] initWithNetworkOperationManager:mockNetworkOperationManager];
    });
    context(@"given an existing resource", ^{
        context(@"reading the resource", ^{
            __block id mockOperation;
            __block id mockResource;
            beforeEach(^{
                mockOperation = [NetworkOperation mock];
                mockResource = [KWMock mockForProtocol:@protocol(RemoteResource)];
                [mockResource stub:@selector(buildReadOperation) andReturn:mockOperation];
            });
            it(@"enqueues the read network operation", ^{
                [[[mockNetworkOperationManager should] receive] enqueueOperation:mockOperation];
                [manager read:mockResource];
            });
            pending(@"sets a success callback block");
            pending(@"sets a failure callback block");
        });
    });
});

...and so on.

For #4 we might extend the RemoteResource protocol to define a method to which we can pass data from our NetworkOperations in our success callbacks to apply updates to the model.

#5 is a little tricky and really depends on what UX we aim to provide. We can probably start by providing our view controller with a shared RemoteResourceManager. We could do that via a singleton, but I'm reluctant to do so. A singleton would introduce a strong and non-obvious coupling between the view controller and the resource manager. Besides, it's not wrong to have several resource managers in our app. We just want them to be able to outlive view controllers so that they have time to finish their operations even if the view controller that started them is no longer needed. The fact that the singleton would be hard to replace with a double in a test is a good hint that this might be a poor design decision.

Instead I would provide an instance of an existing resource manager either via a dependency injection framework or explicitly pass one to the controller from its creator. Either way, we can easily substitute a test double for the resource manager to test our controller's interaction with it.

The NetworkOperationManager spec hints at how we might update the UI to reflect the state of the operation queue. Our resource manager supplies callback blocks to react to the success and failure of operations. On the other hand, our view controller may no longer exist (or at least no longer be visible) when the operation finishes. Instead of callbacks, we can listen for notifications about the particular resource we are interested in. If the controller gets a notification, we can update the view to show that an update has started or finished. When we no longer need to maintain that view, the controller unsubscribes from the notifications and ignores them.

git add .
git commit -m "stream of consciousness networking refactor"
git push codereview 19307/dependency-inversion-injection-networking-code-in-model-classes

Hope that's useful.
_cs.41262
Take a number i. You need to repeatedly split it into two pieces until every piece is less than or equal to j. How does one do this with the minimum number of splits?

Examples:

Splitting in half is optimal here:

i = 4, j = 2
4 -> 2, 2 // (1 split)

However, splitting in half is not optimal here:

i = 9, j = 3
9 -> 5, 4 // Split nine in half
5 -> 3, 2 // Then you have to split the 5
4 -> 2, 2 // and the 4 (3 splits)

The optimal solution here would be:

i = 9, j = 3
9 -> 6, 3
6 -> 3, 3 // (2 splits)

Is there some straightforward way to obtain the optimal split without brute-force iteration?
Algorithm for fastest division below threshold
algorithms
The optimal strategy is the greedy one: repeatedly split chunks of size $j$. This strategy results in $\lceil \frac{i}{j} \rceil - 1$ splits. We can prove that this strategy is optimal by induction on $i$. The strategy is clearly optimal for $i \leq j$. Now suppose that we split $i > j$ as $i = i_1 + i_2$. In total, this choice will use up at least this many splits:
$$1 + \lceil \tfrac{i_1}{j} \rceil - 1 + \lceil \tfrac{i_2}{j} \rceil - 1 = \lceil \tfrac{i_1}{j} \rceil + \lceil \tfrac{i_2}{j} \rceil - 1.$$
Let $\alpha = i/j$, $\alpha_1 = i_1/j$, $\alpha_2 = i_2/j$, so that $\alpha = \alpha_1 + \alpha_2$. To complete the proof, notice that $\lceil \alpha_1 \rceil + \lceil \alpha_2 \rceil \geq \lceil \alpha \rceil$, and so the quantity above is always at least $\lceil \tfrac{i}{j} \rceil - 1$, as claimed.

I have described one optimal strategy. Using the proof above, we can actually describe all optimal strategies. Splitting $i$ to $i_1+i_2$ is optimal iff $\lceil \frac{i_1}{j} \rceil + \lceil \frac{i_2}{j} \rceil = \lceil \frac{i}{j} \rceil$. When does that happen? Define
$$i = aj-b, \quad i_1 = a_1j-b_1, \quad i_2 = a_2j-b_2,$$
where $0 \leq b,b_1,b_2 < j$. A splitting is optimal iff $a_1 + a_2 = a$. Notice that
$$i_1 + i_2 = (a_1+a_2)j - b_1 - b_2.$$
This shows that the splitting is optimal iff $b_1 + b_2 = b$ (rather than $b + j$).
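The greedy argument above translates directly into code; a small Python sketch returning both the minimum split count and one optimal set of pieces:

def min_splits(i, j):
    # ceil(i/j) - 1, exactly as derived above
    return -(-i // j) - 1

def greedy_pieces(i, j):
    # Repeatedly carve off a chunk of size j.
    pieces = []
    while i > j:
        pieces.append(j)
        i -= j
    pieces.append(i)
    return pieces

assert min_splits(4, 2) == 1 and greedy_pieces(4, 2) == [2, 2]
assert min_splits(9, 3) == 2 and greedy_pieces(9, 3) == [3, 3, 3]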
_unix.204284
I'm trying to create a shell script which will execute some commands which I require frequently. I made the SSH login skip the password prompt using a public/private key pair, and after some searching I'm able to execute something like:

File: ssh.sh

ssh -X [email protected] << EOF
cd /root/myDir
gedit a.c b.c
EOF

But I don't see any difference if I put:

ssh -X [email protected] << EOF
cd /root/myDir
gedit a.c b.c &
EOF

The & doesn't make any difference. With &, generally after executing gedit, it would be sent to the background and the prompt would come back. But I'm not getting the prompt after gedit. (Though whatever is on the next line is getting executed; if I put echo Hello, it displays Hello.)

What's wrong here? Is there any other method?

I want to do an SSH and execute some commands through a shell script, but I don't want to log out after the shell script finishes its execution. I want to log out manually, after executing some of my own commands which are not repetitive like the above script.
shell scripting remote commands through ssh
shell script;ssh
Without &, the script that you're executing remotely says "change to /root/myDir, then execute gedit and wait for it to exit". The variant with & says not to wait for gedit to exit. Either way, the shell exits once it's executed the last command in the script.

If you want to execute a script and then execute more commands that you type interactively, you need to execute an interactive shell at the end.

ssh -Xt [email protected] 'cd /root/myDir && gedit a.c b.c; exec bash'

The option -t tells SSH to set up a virtual terminal on the server; by default it doesn't do that when you pass a remote command to execute. exec bash at the end tells the shell to replace itself by a new instance of bash, which will be an interactive shell (showing a prompt, listening to your commands, etc.) since its input is coming from a terminal.
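If the real goal is scripted remote commands rather than an interactive session, the same pattern can also be driven from Python with paramiko; a sketch, with the host as a placeholder (key-based auth assumed, and no X forwarding shown):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys properly
client.connect("192.0.2.10", username="root")  # placeholder host

stdin, stdout, stderr = client.exec_command("cd /root/myDir && ls -l")
print(stdout.read().decode())
client.close()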
_codereview.131694
The four parameters a, b, c and d can be -1 (meaning "not set") or a random one of {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}. If they are different from -1, they are guaranteed to be distinct.

The code looks at the four least significant bits of the four ints and tests if they all have a bit in common:

1101
1110
0100
0101

all have the second bit from the left set to 1.

1001
1010
0000
0001

all have the second bit from the left set to 0.

1000
0110
0101
1101

have no common bit.

The following is possible:

a = -1, b = -1, c = 15, d = 4; // sim(a, b, c, d) == 0, because at least one is -1
a = -1, b = -1, c = -1, d = -1; // sim(a, b, c, d) == 0, because all are -1
a = 9, b = 1, c = 3, d = 5; // sim(a, b, c, d) == 1, because all values have a bit in common (a & 1, b & 1, c & 1 and d & 1 are all 1)
a = 8, b = 1, c = 2, d = 4; // sim(a, b, c, d) == 0, because there is no bit that is equal for a, b, c and d

The following will never happen:

a = 1, b = 1, c = 2, d = 5; // will never happen, because a == b is guaranteed not to occur

The code is meant to be as fast as possible on x86; readability would be a plus but is not necessary.

The code:

int sim(int a, int b, int c, int d)
{
    return ((a != -1) && (b != -1) && (c != -1) && (d != -1)) &&
        ((((a & 8) == (b & 8)) && ((a & 8) == (c & 8)) && ((a & 8) == (d & 8))) ||
         (((a & 4) == (b & 4)) && ((a & 4) == (c & 4)) && ((a & 4) == (d & 4))) ||
         (((a & 2) == (b & 2)) && ((a & 2) == (c & 2)) && ((a & 2) == (d & 2))) ||
         (((a & 1) == (b & 1)) && ((a & 1) == (c & 1)) && ((a & 1) == (d & 1))));
}

Are there obvious ways to speed up the code?
Test four ints for commonality in the lower four bits
performance;c;bitwise
The code isn't bad, but it's a little more verbose than it needs to be. Consider that we don't really need to check one bit at a time; we can check four simultaneously.

The key here is that we're looking for bits that are the same in all four numbers. If we wanted to look for ones that were shared, we could do this:

a & b & c & d & 0xf

If we want to look for zeroes, we can simply invert:

~a & ~b & ~c & ~d & 0xf

If we put those together with the -1 part, it might look like this:

return a != -1 && b != -1 && c != -1 && d != -1 &&
    ((a & b & c & d) || (~a & ~b & ~c & ~d & 0xf));

However, the problem with this is that even though it's parallel, it still requires more operations than might be required.

If we consider the exclusive-or function, it effectively returns a 1 whenever the bits differ. So if there were only two numbers, we could do:

return (a ^ b) ^ 0xf;

The expression would only be false if all of the bits were different. We can use a similar strategy for three numbers.

return ((a ^ b) | (b ^ c)) ^ 0xf;

For this one, (a ^ b) returns 1 for every bit that is not the same between a and b, and the expression (b ^ c) returns 1 for every bit that is not the same between b and c. When we do a bitwise or of those quantities, only the bits that are the same for all three quantities remain zeroes. When we do ^ 0xf we invert the bottom 4 bits so that only bits that are the same are ones.

Extrapolating this to four quantities,

return ((a ^ b) | (b ^ c) | (c ^ d)) ^ 0xf;

This is good but not sufficient, since we still need to deal with the possible -1 quantities that may be among the inputs. One obvious way to do this is this:

return a != -1 && b != -1 && c != -1 && d != -1 &&
    (((a ^ b) | (b ^ c) | (c ^ d)) ^ 0xf);

To compare this routine (which I'll call "Edward") to the one above (which I'll call "naive"), to the original, and to the other three proposals so far, I used this code:

testcode.c

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
#include <stdbool.h>

bool sim(int a, int b, int c, int d)
{
    return ((a != -1) && (b != -1) && (c != -1) && (d != -1)) &&
        ((((a & 8) == (b & 8)) && ((a & 8) == (c & 8)) && ((a & 8) == (d & 8))) ||
         (((a & 4) == (b & 4)) && ((a & 4) == (c & 4)) && ((a & 4) == (d & 4))) ||
         (((a & 2) == (b & 2)) && ((a & 2) == (c & 2)) && ((a & 2) == (d & 2))) ||
         (((a & 1) == (b & 1)) && ((a & 1) == (c & 1)) && ((a & 1) == (d & 1))));
}

bool naive(int a, int b, int c, int d)
{
    return a != -1 && b != -1 && c != -1 && d != -1 &&
        ((a & b & c & d) || (~a & ~b & ~c & ~d & 0xf));
}

bool edward(int a, int b, int c, int d)
{
    return a != -1 && b != -1 && c != -1 && d != -1 &&
        (((a ^ b) | (b ^ c) | (c ^ d)) ^ 0xf);
}

bool BitsInCommon(int a, int b, int c, int d)
{
    if (a == -1 || b == -1 || c == -1 || d == -1)
        return false;
    int rslt = 0;
    rslt = (a & 15) & (b & 15) & (c & 15) & (d & 15);
    if (rslt != 0)
        return true;
    //check for zero bits in common??? if needed?
    rslt = ((a ^ -1) & 15) & ((b ^ -1) & 15) & ((c ^ -1) & 15) & ((d ^ -1) & 15);
    return rslt != 0;
}

bool jan_korous(int a, int b, int c, int d) {
    if( a == -1 || b == -1 || c == -1 || d == -1 ) {
        return 0;
    }
    const unsigned int diff = (a ^ b) | (c ^ d) | (a ^ c);
    return ( (diff & 1) == 0 ) || ( (diff & 2) == 0 ) || ( (diff & 4) == 0 ) || ( (diff & 8) == 0 );
}

bool scottbb(int a, int b, int c, int d) {
    if (a == -1 || b == -1 || c == -1 || d == -1) {
        return 0;
    } else {
        unsigned int all_1 = (unsigned int)a & (unsigned int)b & (unsigned int)c & (unsigned int)d;
        unsigned int all_0 = ~((unsigned int)a | (unsigned int)b | (unsigned int)c | (unsigned int)d);
        return (all_1 | (all_0 & 0xF)) ? 1 : 0;
    }
}

int main()
{
    bool troubleshoot = false;
    struct {
        const char *name;
        bool (*func)(int, int, int, int);
        double elapsed;
        bool isCorrect;
    } tests[] = {
        { "original", sim, 0, true },
        { "naive", naive, 0, true },
        { "Edward", edward, 0, true },
        { "dbasnett", BitsInCommon, 0, true },
        { "Jan Korous", jan_korous, 0, true },
        { "scottbb", scottbb, 0, true },
        { NULL, NULL, 0, false }
    };
    // see if they're all correct
    for (int a = -1; a < 16; ++a) {
        for (int b = -1; b < 16; ++b) {
            for (int c = -1; c < 16; ++c) {
                for (int d = -1; d < 16; ++d) {
                    for (size_t i = 1; tests[i].func; ++i) {
                        if (tests[i].func(a,b,c,d) != tests[0].func(a,b,c,d)) {
                            if (troubleshoot) {
                                printf("%s failed! [%d, %d, %d, %d] => %d, should have been %d\n",
                                        tests[i].name, a, b, c, d,
                                        tests[i].func(a, b, c, d),
                                        tests[0].func(a, b, c, d));
                            }
                            tests[i].isCorrect = false;
                            if (troubleshoot)
                                return -1;
                        }
                    }
                }
            }
        }
    }
    puts("All functions checked for accuracy; checking timing...");
    for (int iterations = 100; iterations; --iterations) {
        for (size_t i = 0; tests[i].func; ++i) {
            if (tests[i].isCorrect) {
                for (int a = -1; a < 16; ++a) {
                    for (int b = -1; b < 16; ++b) {
                        for (int c = -1; c < 16; ++c) {
                            for (int d = -1; d < 16; ++d) {
                                clock_t start = clock();
                                tests[i].func(a,b,c,d);
                                tests[i].elapsed += clock() - start;
                            }
                        }
                    }
                }
            }
        }
    }
    // print results
    for (size_t i = 0; tests[i].func; ++i) {
        if (tests[i].isCorrect) {
            printf("%12s\t%.10f\t%f%% %s than %s\n",
                    tests[i].name,
                    tests[i].elapsed,
                    100.0*fabs(tests[i].elapsed-tests[0].elapsed)/tests[0].elapsed,
                    (tests[i].elapsed > tests[0].elapsed ? "slower" : "faster"),
                    tests[0].name);
        } else {
            printf("%12s\twas not correct; no time recorded\n", tests[i].name);
        }
    }
}

Results:

Note: I've updated the results to include everybody's correct versions and to include the two solutions proposed by @JS1, although I didn't bother duplicating the source code for those solutions in the code above.

All functions checked for accuracy; checking timing...
    original    1052389.0000000000    0.000000% faster than original
       naive    1034792.0000000000    1.672100% faster than original
      Edward    1024637.0000000000    2.637048% faster than original
    dbasnett    1032151.0000000000    1.923053% faster than original
  Jan Korous    1028349.0000000000    2.284326% faster than original
     scottbb    1026372.0000000000    2.472185% faster than original
         JS1    1035062.0000000000    1.646444% faster than original
  JS1 Lookup    1023049.0000000000    2.787942% faster than original

So in my tests, on a quad-core x86_64 machine, the Edward and Jan Korous routines are nearly identical in time and both are a small improvement (approximately 2%) over the original. There was a slight but persistent speed advantage to the JS1 Lookup version on this machine.

Compile command (Linux) was:

gcc -O2 -std=c99 testcode.c -o testcode && ./testcode

On a quad-core ARM7, I got this result with the same code:

All functions checked for accuracy; checking timing...
    original    17302157.0000000000    0.000000% faster than original
       naive    17211229.0000000000    0.525530% faster than original
      Edward    17231692.0000000000    0.407261% faster than original
    dbasnett    17255411.0000000000    0.270174% faster than original
  Jan Korous    17115011.0000000000    1.081634% faster than original
     scottbb    17120390.0000000000    1.050545% faster than original
         JS1    17267606.0000000000    0.199692% faster than original
  JS1 Lookup    17118642.0000000000    1.060648% faster than original

I also tested on Windows under Cygwin, but found that the variability in timing on that platform was so large as to render the test results meaningless. As an example, here are four successive runs on that platform:

All functions checked for accuracy; checking timing...
    original    1902.0000000000    0.000000% faster than original
       naive    2124.0000000000    11.671924% slower than original
      Edward    2182.0000000000    14.721346% slower than original
    dbasnett    2091.0000000000    9.936909% slower than original
  Jan Korous    1980.0000000000    4.100946% slower than original
     scottbb    2144.0000000000    12.723449% slower than original
         JS1    2276.0000000000    19.663512% slower than original
  JS1 Lookup    2075.0000000000    9.095689% slower than original

All functions checked for accuracy; checking timing...
    original    2174.0000000000    0.000000% faster than original
       naive    2121.0000000000    2.437902% faster than original
      Edward    2088.0000000000    3.955842% faster than original
    dbasnett    2055.0000000000    5.473781% faster than original
  Jan Korous    2136.0000000000    1.747930% faster than original
     scottbb    2109.0000000000    2.989880% faster than original
         JS1    1855.0000000000    14.673413% faster than original
  JS1 Lookup    2149.0000000000    1.149954% faster than original

All functions checked for accuracy; checking timing...
    original    2073.0000000000    0.000000% faster than original
       naive    2154.0000000000    3.907381% slower than original
      Edward    1960.0000000000    5.451037% faster than original
    dbasnett    2288.0000000000    10.371442% slower than original
  Jan Korous    2032.0000000000    1.977810% faster than original
     scottbb    2091.0000000000    0.868307% slower than original
         JS1    2245.0000000000    8.297154% slower than original
  JS1 Lookup    2138.0000000000    3.135552% slower than original

All functions checked for accuracy; checking timing...
    original    2158.0000000000    0.000000% faster than original
       naive    2277.0000000000    5.514365% slower than original
      Edward    1840.0000000000    14.735867% faster than original
    dbasnett    2081.0000000000    3.568119% faster than original
  Jan Korous    2135.0000000000    1.065802% faster than original
     scottbb    2228.0000000000    3.243744% slower than original
         JS1    1969.0000000000    8.758109% faster than original
  JS1 Lookup    2010.0000000000    6.858202% faster than original
I'm a math student and have encountered the concept of (mainly time) complexity of algorithms in several courses so far (Analysis of Algorithms, Cryptography, Numerical Analysis). However what strikes me as odd is that the definitions I have encountered so far seem to differ greatly. In particular, the differences that I have noted areDifference between the uniform and logarithmic cost models: this can, for example, make a difference, when evaluating the time complexity of something like for i = 1, 2, ..., n do c(i),where c(i) is an instruction such that the number of bits involved depends on i. When dealing with, say, sorting algorithms, all references I have ever come across adopt the uniform cost model, while in Cryptography, the time complexity of algorithms is always calculated in terms of bit operations. Another difference is the definition of the size of an input: again, in sorting algorithms, the size of the input is simply the length of the vector that has to be sorted (as in graph searching algorithms, the number of nodes and edges). In Cryptography and Computational Number Theory, instead, the size of the input is always the number of bits involved. I don't know if this distinction is considered part of 1. , but it certainly makes a huge difference, since an algorithm that is linear using the first criterion, becomes exponential when adopting the second. Factorizing integers would be linear (actually, $O(\log{n} \sqrt{n})$) if we considered the input size of $n$ to be $n$.One could justify these discrepancies with arguments such as well, in Number Theory we are dealing with large integers, so adopting a more realistic model makes more sense. But this defies the whole purpose of evaluating asymptotic expressions for the complexity of algorithms! If, in certain contexts, we knew that all inputs were smaller then a fixed quantity, then everything would be $O(1)$. Also, how can it be possible to define, without ambiguity, complexity classes and other rigorous Computer Science concepts, when the definition of complexity varies from field to field?
Different definitions of complexity
cc.complexity theory;soft question;time complexity
The performances of an algorithm are always analyzed in the context of a well defined computational model. Traditional sequential models, e.g. the RAM model, assume that:All memory accesses are equally expensive;There are no concurrent operations; All reasonable instructions take unit time;With the notable exception of function calls!Constant word size;Unless we are explicitly manipulating bits!In Cryptography and strictly related fields, you can not assume that arbitrary precision arithmetic operations can be performed in constant time; so adding two n bits numbers requires $O(n)$ instead of $O(1)$, multiplying them requires $O(n \lg n)$ using the Fast Fourier Transform (there are better algorithms) etc.In practice, using the uniform or the logarithmic cost model strictly depends on the target application. Even using the uniform cost model, please consider carefully what the size of the input actually is. Here you must remember that the efficiency of solving a problem strictly depends on how you encode the problem. As an example, consider a simple algorithm requiring as input just an integer $x$, and whose complexity is $O(x)$. For instance, think about a loop that is executed $x$ times, and in each iteration of the loop you perform an operation requiring $O(1)$. So, you conclude that the algorithm is linear in $x$.However, if you encode the input $x$ using the traditional binary representation of the integer $x$, then the input length is $n = \lfloor \log x \rfloor +1$. Therefore, the running time of the algorithm is $O(x) = O(2^n)$, which is exponential in the size of the input!
_softwareengineering.279299
I have a question about working with independent testers doing manual testing (not about automated unit and regression testing.)In a flow process I do my work on a feature branch until I'm confident that it works and I haven't introduced bugs. I merge from the develop branch to my feature, late and often, to ensure I haven't broken anything in merges with other recent work. Sometimes I'll even do it again during the testing phase, so that the tester can work with the most recent snapshot. Still, there's always a small window of time after testing where new work can -- and in high traffic times does -- come in from other features. This means that the merge back to the develop/release branch is sometimes not trivial, despite our treating it like it should be. (Sometimes it's even iterative: by the time I'm done making sure I've correctly integrated one feature that's slipped in, running regression tests, checking the code, and doing some manual testing, yet another one has come in.)My question is, is there a workflow for developers and testers where you don't lose out on the safety net of testers for that last step (but also hopefully don't need to ask again and again for re-testing tested work)? What are industry best practices here? If we could assure that branches won't interfere with one another, we'd be fine, but in practice we get conflicts sometimes.I'll add that I'm sure we don't want to do our main testing on the develop/release branch. It's been a huge win and stress-reducer since we switched to flow. We can easily put off releasing work that's created a problem or raised a question during testing. In our pre-flow practice, we wound up with emergencies near a release, where a problem was found that we had to deal with urgently before releasing because the work of a non-critical feature was already merged into the main branch for testing.
How do you get complete manual testing by QA in a git/hg flow development process?
testing;development process
null
_codereview.119934
Can someone please review my code and let me know if there are some bugs or possible improvements? /** * Definition for a binary tree node. * public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */public class Solution { //return true if the root node is null or there is only root node. public boolean isBalanced(TreeNode root) { if (root == null || (root != null && root.left == null && root.right == null)) { return true; } else { int balancedRight = 0; int balancedLeft = 0; if (root.left != null) { balancedLeft = findHeight(root.left); } if (root.right != null) { balancedRight = findHeight(root.right); } return Math.abs(balancedLeft - balancedRight) <= 1 ? true : false; } }//find the height of the tree public int findHeight(TreeNode root) { if (root == null) { return 0; } else if (root.left == null && root.right == null) { return 1; } else if (root.left == null && root.right != null) { return 1 + findHeight(root.right); } else if (root.left != null && root.right == null) { return 1 + findHeight(root.left); } else { return Math.max(1+findHeight(root.left), 1+findHeight(root.right)); } }}
Finding if the tree is balanced or not
java;tree;interview questions
First of all, your algorithm checks if the root of the tree is balanced or not. Rather, it should check for 3 conditions, whether the root , left subtree and right subtree are balanced or not.So check for this recursively,Math.abs(leftSubtreeHeight - rightSubtreeHeight) <= 1) && isBalanced(root.left) && isBalanced(root.right)Moreover, your height method is doing too many unnecessary checks. It can be simply reduced to :-private int treeHeight(TreeNode root) {if(root == null) return 0;return Integer.max(treeHeight(root.left),treeHeight(root.right) ) +1;}take this as an example:-else if (root.left == null && root.right != null) { return 1 + findHeight(root.right);in this, if root.left is null, then it will return 0. Hence, from the recursive call of the root, you'll get return 1 + Math.max(findHeight(root.left), findHeight(root.right));findHeight(root.left) will return 0 and you'll be left with just 1 + findHeight(root.right) which is what you have manually written.Also, it's better to keep the treeHeight(TreeNode root) function private if nobody outside this class is going to use it.You need not write else after this if (root == null || (root != null && root.left == null && root.right == null)) { return true; } else {because if you returned from the if statement then anyways you won't go further. And, if the if condition doesn't return true, you'll go to the else part anyways.Also, change the names from:-balancedRight to rightSubtreeHeight,balancedLeft to leftSubtreeHeight,isBalanced to isTreeBalancedAdditionally, you need not have this condition|| (root != null && root.left == null && root.right == null)because, if the root.left == null, it's height would be 0 and 0 will be returned from the right hand side as well. So, for root node, you received 0 from both left and right. Now, the difference between them is 0 which is <=1. The above code is anyways checking that using recursion.Try to make the entire recursion tree for various problems to have a better understanding of how recursion works. By knowing the power of recursion, you can make your code more elegant.The whole code could be reduced to public boolean isTreeBalanced(TreeNode root){ if(root == null) return true; int leftSubtreeHeight = treeHeight(root.left); int rightSubtreeHeight = treeHeight(root.right); if((Math.abs(leftSubtreeHeight - rightSubtreeHeight) <= 1) && isTreeBalanced(root.left) && isTreeBalanced(root.right)) return true; return false;}private int treeHeight(TreeNode root) { if(root == null) return 0; return Integer.max(treeHeight(root.left),treeHeight(root.right) ) +1;}Edit: However, if we look further deeply, we can find out the time complexity of the algorithm can be reduced further(from O(N^2) to O(N)) because isBalancedTree and treeHeight are going through same pattern of recursion. We can get the result in one traversal of the tree. Here is the code:-public boolean isBalanced(TreeNode root) { if(root == null) return true; if(helper(root) != -1) return true; else return false;}private static int helper(TreeNode root) { if(root == null) return 0; int lefth = helper(root.left); int righth = helper(root.right); if(lefth == -1 || righth == -1) return -1; if( Math.abs(lefth - righth) <= 1) return Math.max(lefth, righth) + 1; else return -1;}Edit: The code could be further compressed by removing unnecessary if else statements. 
public boolean isBalanced(TreeNode root) { if(root == null) return true; return helper(root) != -1;}private static int helper(TreeNode root) { if(root == null) return 0; int lefth = helper(root.left); int righth = helper(root.right); if(lefth == -1 || righth == -1) return -1; if( Math.abs(lefth - righth) <= 1) return Math.max(lefth, righth) + 1; else return -1;}
_codereview.118970
Related: Something like a LINQ providerI needed to work with the Sage300 View API. I had never worked with it, but my first impression has been that the API is stringly-typed, and makes you write very procedural and repetitive code.So I decided to make my life easier, and wrap it with a familiar interface:public interface IRepository<TEntity> where TEntity : class, new(){ /// <summary> /// Projects all entities that match specified predicate into a <see cref=TEntity/> instance. /// </summary> /// <param name=filter>A function expression that returns <c>true</c> for all entities to return.</param> /// <returns></returns> IEnumerable<TEntity> Select(Expression<Func<TEntity, bool>> filter); /// <summary> /// Projects the single that matches specified predicate into a <see cref=TEntity/> instance. /// </summary> /// <exception cref=InvalidOperationException>Thrown when predicate matches more than a single result.</exception> /// <param name=filter>A function expression that returns <c>true</c> for the only entity to return.</param> /// <returns></returns> TEntity Single(Expression<Func<TEntity, bool>> filter); /// <summary> /// Updates the underlying View for the specified entity. /// </summary> /// <param name=entity>The existing entity with the modified property values.</param> void Update(TEntity entity); /// <summary> /// Deletes the specified entity from the underlying View. /// </summary> /// <param name=entity>The existing entity to remove.</param> void Delete(TEntity entity); /// <summary> /// Inserts a new entity into the underlying View. /// </summary> /// <param name=entity>A non-existing entity to create in the system.</param> void Insert(TEntity entity);}So, to implement a repository, I make a simple POCO class, and I use a custom MapsToAttribute to tell the engine how to map the class and its properties to a View and its fields - note, SageViews is an internal static class exposing nothing but internal const string members:[MapsTo(SageViews.HeadersViewId)]public class PurchaseOrderHeader{ [MapsTo(PONUMBER)] public string Number { get; set; } [MapsTo(VDCODE)] public string VendorCode { get; set; } [MapsTo(PORTYPE)] public PurchaseOrderType Type { get; set; } [MapsTo(ONHOLD)] public bool IsOnHold { get; set; } [MapsTo(ORDEREDON)] public DateTime OrderDate { get; set; } [MapsTo(EXPARRIVAL)] public DateTime ExpectedDate { get; set; } [MapsTo(FOBPOINT)] public string FreeOnBoardPoint { get; set; } [MapsTo(VIACODE)] public string ShipViaCode { get; set; } [MapsTo(VIANAME)] public string ShipViaName { get; set; } [MapsTo(TERMSCODE)] public string TermsCode { get; set; } [MapsTo(TERMSCODED)] public string TermsName { get; set; } [MapsTo(DESCRIPTIO)] public string Description { get; set; } [MapsTo(REFERENCE)] public string Reference { get; set; } [MapsTo(COMMENT)] public string Comment { get; set; }}I decided to implement the interface in a base class first, to avoid having to implement the same reflection code over and over for every entity:/// <summary>/// Encapsulates a View and its CRUD operations./// </summary>/// <typeparam name=TEntity>The entity type associated with the view.</typeparam>/// <remarks>/// <see cref=TEntity/> should be a POCO class exposing get/set properties/// marked with a <see cref=MapsToAttribute/>./// </remarks>public abstract class SageRepositoryBase<TEntity> : IRepository<TEntity> where TEntity : class, new(){ /// <summary> /// Uses reflection to discover <see cref=MapsToAttribute/> mappings on specified <see cref=TEntity/> type /// and reflects on specified 
<see cref=entity/> to retrieve property values, mapped to the appropriate field. /// </summary> /// <param name=entity>The entity object to retrieve mapped values for.</param> /// <returns> /// Returns a dictionary keyed with View names, where each entry contains all values for that view. /// </returns> protected IDictionary<string, IEnumerable<EntityPropertyInfo<TEntity>>> DiscoverMappedValues(TEntity entity) { return entity.GetPropertyInfos() .GroupBy(property => property.ViewName) .ToDictionary(grouping => grouping.Key, grouping => grouping.AsEnumerable()); } /// <summary> /// Uses reflection to discover <see cref=MapsToAttribute/> mappings and fetch /// property values from the mapped views and fields. /// </summary> /// <returns> /// Returns an entity representing the current/active record in the composed views. /// </returns> protected TEntity ReadEntity() { var result = new TEntity(); var properties = result.GetPropertyInfos(); foreach (var property in properties) { property.Property.SetValue(result, Views[property.ViewName].Fields.FieldByName(property.FieldName).Value); } return result; } /// <summary> /// Projects the single that matches specified predicate into a <see cref=TEntity/> instance. /// </summary> /// <exception cref=InvalidOperationException>Thrown when predicate matches more than a single result.</exception> /// <param name=filter>A function expression that returns <c>true</c> for the only entity to return.</param> /// <returns>Returns the single entity matching specified criteria.</returns> public abstract TEntity Single(Expression<Func<TEntity, bool>> filter); /// <summary> /// Projects all entities that match specified predicate into a <see cref=TEntity/> instance. /// </summary> /// <param name=filter>A function expression that returns <c>true</c> for all entities to return.</param> /// <returns>Returns all entities matching specified criteria.</returns> public abstract IEnumerable<TEntity> Select(Expression<Func<TEntity, bool>> filter); /// <summary> /// Updates the underlying View for the specified entity. /// </summary> /// <param name=entity>The existing entity with the modified property values.</param> public abstract void Update(TEntity entity); /// <summary> /// Deletes the specified entity from the underlying View. /// </summary> /// <param name=entity>The existing entity to remove.</param> public abstract void Delete(TEntity entity); /// <summary> /// Inserts a new entity into the underlying View. /// </summary> /// <param name=entity>A non-existing entity to create in the system.</param> public abstract void Insert(TEntity entity); /// <summary> /// Gets a dictionary containing all composed views <see cref=TEntity/> maps to. /// Dictionary key is each View's RotoID/name. 
/// </summary> protected abstract IDictionary<string, View> Views { get; }}Here is the PurchaseOrderHeadersRepository class - I have not yet implemented all CRUD operations, but the implemented ones work perfectly.public sealed class PurchaseOrderHeadersRepository : SageRepositoryBase<PurchaseOrderHeader>, IDisposable { private View _headersView; private View _commentsView; private View _headersOptionalFieldsView; private View _requisitionsView; private View _functionsView; private View _detailsView; private View _detailsOptionalFieldsView; private View _shipViaAddressesView; private View _vendorsView; private View _termsView; public void Compose(DBLink context) { _headersView = context.OpenView(SageViews.HeadersViewId); _commentsView = context.OpenView(SageViews.CommentsViewId); _headersOptionalFieldsView = context.OpenView(SageViews.HeaderOptionalFieldsViewId); _requisitionsView = context.OpenView(SageViews.RequisitionsViewId); _functionsView = context.OpenView(SageViews.FunctionsViewId); _detailsView = context.OpenView(SageViews.DetailsViewId); _detailsOptionalFieldsView = context.OpenView(SageViews.DetailsOptionalFieldsViewId); _shipViaAddressesView = context.OpenView(SageViews.ShipViaAddressesViewId); _vendorsView = context.OpenView(SageViews.VendorsViewId); _termsView = context.OpenView(SageViews.TermsViewId); _headersView.Compose(new[]{ _commentsView, _detailsView, _requisitionsView, _functionsView, _headersOptionalFieldsView }); _detailsView.Compose(new[]{ _headersView, _commentsView, _functionsView, null, null, _detailsOptionalFieldsView }); _commentsView.Compose(new[]{ _headersView, _detailsView }); _requisitionsView.Compose(new[]{ _headersView, _functionsView }); _functionsView.Compose(new[]{ _headersView, _commentsView, _detailsView, _requisitionsView }); _views = new Dictionary<string, View> { { SageViews.HeadersViewId, _headersView }, { SageViews.DetailsViewId, _detailsView }, { SageViews.CommentsViewId, _commentsView }, { SageViews.RequisitionsViewId, _requisitionsView }, { SageViews.FunctionsViewId, _functionsView } }; } private IDictionary<string, View> _views; protected override IDictionary<string, View> Views { get { return _views; } } public override PurchaseOrderHeader Single(Expression<Func<PurchaseOrderHeader, bool>> filter) { var result = Select(filter).ToList(); return result.Single(); } public override IEnumerable<PurchaseOrderHeader> Select(Expression<Func<PurchaseOrderHeader, bool>> filter) { var searchFilter = filter.ToFilterExpression(); _headersView.Browse(searchFilter, true); if (!_headersView.GoTop()) { yield break; } do { yield return ReadEntity(); } while (_headersView.GoNext()); } public override void Update(PurchaseOrderHeader entity) { throw new NotImplementedException(); } public override void Delete(PurchaseOrderHeader entity) { throw new NotImplementedException(); } public override void Insert(PurchaseOrderHeader entity) { throw new NotImplementedException(); } public void Dispose() { _headersView.Dispose(); _commentsView.Dispose(); _headersOptionalFieldsView.Dispose(); _requisitionsView.Dispose(); _functionsView.Dispose(); _detailsView.Dispose(); _detailsOptionalFieldsView.Dispose(); _shipViaAddressesView.Dispose(); }}I don't like that I need to compose the views explicitly, and put that responsibility on the caller - especially since the Compose(DBLink) method isn't part of the interface. 
I thought of lazy-composing the views on first access, but then I would require the DBLink through the constructor and stored as a field, but I couldn't dispose it because I don't own the object - would it be a good idea to do that?Other than that.. I'd be happy to hear anyone with experience with the Sage300 View API, tell me what beartrap I might have just stuck my foot in with this code. Or is this clever code that will bite me later? Looks clear enough?All feedback welcome.Almost forgot - here's the GetPropertyInfos extension method, performing the reflection magic:public static IEnumerable<EntityPropertyInfo<TEntity>> GetPropertyInfos<TEntity>(this TEntity entity) where TEntity : class, new(){ var type = typeof (TEntity); var mapsToView = type.GetCustomAttribute<MapsToAttribute>(); if (mapsToView == null) { throw new InvalidOperationException(Entity type is missing a MapsToAttribute.); } return from property in typeof (TEntity).GetProperties() let mapsToField = property.GetCustomAttribute<MapsToAttribute>() where mapsToField != null select new EntityPropertyInfo<TEntity>(entity, property, mapsToView, mapsToField);}Does it get any cleaner?
Wrapping the Sage300 View API with... a Repository
c#;api;reflection;repository
The code looks good, but I can see a couple of things that could be a bit cleaner. For example, this method:

public override PurchaseOrderHeader Single(Expression<Func<PurchaseOrderHeader, bool>> filter)
{
    var result = Select(filter).ToList();
    return result.Single();
}

result is a poor name IMO as it's a list (even if it only has one item). I think it should be pluralised: results.

There's no need to call .ToList(), which I think makes the intermediate variable pointless:

public override PurchaseOrderHeader Single(Expression<Func<PurchaseOrderHeader, bool>> filter)
{
    return Select(filter).Single();
}

Regarding the DBLink dependency... I definitely think you should either pass it in as a constructor parameter or create a factory method for your repository. Even when you do that, I don't see why you'd need to store the link as a field, which means your Dispose implementation would be unchanged.

FWIW, I would go with a Create method over the constructor as you're doing some work there and I don't know whether any of that takes time... Either way, having an extra method that the caller has to know about before they can use the instance is always frustrating.

You could create it as an extension method on the DBLink if you want to treat that a bit like a unit of work. You could leave your create + initialize as separate steps, or you could create a static factory method. (Note that an extension method has to live in a static class, and since Compose returns void you have to compose first and then return the instance; returning the concrete type also lets the caller dispose it.)

public static class DBLinkExtensions
{
    public static PurchaseOrderHeadersRepository GetPurchaseOrderHeadersRepository(this DBLink link)
    {
        if (link == null) throw new ArgumentNullException("link");

        var repository = new PurchaseOrderHeadersRepository();
        repository.Compose(link);
        return repository;

        // OR
        // return PurchaseOrderHeadersRepository.Create(link);
    }
}

Then your client code would be:

using (var dbLink = GetTheDBLink(...))
using (var purchaseOrderHeadersRepository = dbLink.GetPurchaseOrderHeadersRepository())
{
    // Your stuff.
}

Just in case you're interested, what you're creating here is called an Anticorruption Layer. You're hiding all of the nastiness of the 3rd party library with a well-designed API - a very good idea!
_unix.363765
I have noticed on my Arch Linux (with GNOME 3.24.2 and GDM) installation that my ~ is filled with files like this and they keep increasing:-rw-r--r-- 1 root root 0 May 8 00:01 wget-log-rw-r--r-- 1 root root 0 May 8 00:01 wget-log.1-rw-r--r-- 1 root root 0 May 8 00:01 wget-log.2-rw-r--r-- 1 root root 0 May 8 00:01 wget-log.3-rw-r--r-- 1 root root 0 May 8 00:01 wget-log.4-rw-r--r-- 1 root root 0 May 8 20:04 wget-log.5-rw-r--r-- 1 root root 0 May 8 20:04 wget-log.6-rw-r--r-- 1 root root 0 May 8 20:04 wget-log.7-rw-r--r-- 1 root root 0 May 8 20:04 wget-log.8-rw-r--r-- 1 root root 0 May 8 20:04 wget-log.9In fact, there would be more if I didn't delete them every day. I have noticed these files appearing after running sudo pacman -Syu, but I have also observed them not appearing after doing so so perhaps it was just coincidence? But I would really like to track down the cause of these empty log files appearing in ~ as they are actually quite annoying and seem to serve no real purpose.So what are they caused by and is there any way I get either stop them from appearing or have them do so in a different location?
Why do I keep getting wget-log file in ~ on Arch Linux?
arch linux;logs;configuration;wget
This looks like a bug, a regression from wget 1.18 to wget 1.19.1, which is the version used by Arch Linux. I have opened a bug report here: https://savannah.gnu.org/bugs/?51181
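Until a fixed wget reaches the repositories, one possible stopgap (assuming the stray files really do come from backgrounded wget invocations; a user's ~/.wgetrc only affects that user's runs, so root-owned logs would need /etc/wgetrc) is to point wget's log output elsewhere in its startup file:

# ~/.wgetrc (or /etc/wgetrc for system-wide/root invocations)
# Send wget's log output to /dev/null instead of creating ./wget-log;
# this is the same as passing -o /dev/null on every run.
logfile = /dev/null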
_codereview.25315
Given two integers, X and Y, you must create all possible integer sequences of length Y using the numbers from 1 to X.For certain reasons, the function must return a deque<deque<int> > representing such (the primary deque holds all my sequences - each of then is a deque of ints).This is not a permutation generator.Example InputX = 3, Y = 3This means that I require sequences of length 3 each one. To compose the sequences, I use the integers 1, 2, and 3.Example OutputThe previous input should produce:1 1 1 1 1 2 1 1 3 1 2 1 1 2 2 1 2 3 1 3 1 1 3 2 1 3 3 2 1 1 2 1 2 2 1 3 2 2 1 2 2 2 2 2 3 2 3 1 2 3 2 2 3 3 3 1 1 3 1 2 3 1 3 3 2 1 3 2 2 3 2 3 3 3 1 3 3 2 3 3 3 My IdeaI'm not very good with this stuff. My approach is:Create a start and end integerStart is 1 repeated Y times. So in the previous example it is 111End is X repeated Y times. So in the previous example it is 333I count i from start to end.If i contains an invalid digit (like 4, 5,...), abort this sequence.If i is valid, then this sequence is valid, thus add it to my resulting deque.When a sequence is valid, due to my needs, I split the integer and make a deque of each of its digits.Why I need a reviewI think it is pretty clear that this method doesn't sound very efficient. So many wasted iterations! Can you help me improve this code, and, perhaps, shorten it?My AttemptNote, you may notice that instead of sequences I call then procedures.#include <iostream>#include <sstream>#include <deque>using namespace std;// Converts a value to any typetemplate <typename T,typename S>T convert(S original) { stringstream ss; ss << original; T result; ss >> result; return result;}// The actual functiondeque<deque<int> > allPossibleProcedures(int values, int depth) { deque<deque<int> > result; // I create a start point and an end point for my counting string minString; for (int d = 0; d < depth; ++d) { minString.append(1); } string limitString; for (int d = 0; d < depth; ++d) { limitString.append(convert<string>(values)); } int start = convert<int>(minString); int limit = convert<int>(limitString); // I begin counting for (int i = start; i <= limit; ++i) { deque<int> procedure; string text = convert<string>(i); bool ok = true; for (int c = 0; c < text.length() && ok; ++c) { int x = convert<int>(text.at(c)); if (x <= values && x > 0) { // This is a valid character. Add to procedure. procedure.push_back(x); }else{ // Not a valid character. Abort this procedure. ok = false; break; } } if (ok) { result.push_back(procedure); } } return result;}int main(int argc, const char * argv[]) { deque<deque<int> >procedures = allPossibleProcedures(3,3); for (int i = 0; i < procedures.size(); ++i) { for (int j = 0; j < procedures[i].size(); ++j) { cout << procedures[i][j] << ; } cout << endl; } return 0;}
All integer sequences given amount of integers and length of sequence
c++
I don't have time to do a style review at the moment, so until I can expand on this later, I'll just suggest a different algorithm.

Rather than depending on string manipulation and generating more items than you need, just take an arithmetic approach. In particular, you can take a deque of ints and add 1 to it until it is equal to the max value.

std::deque< std::deque<int> > results;
std::deque<int> min(Y, 1);
std::deque<int> max(Y, X);

for (; min != max; min = increment(min, X)) {
    results.push_back(min);
}
results.push_back(max);

As a bit more explanation, a std::deque<unsigned int> is frequently used as a make-shift arbitrary precision integer. The way this is done is to have a sign flag, and then to see each element of the deque as a digit of a radix = std::numeric_limits<unsigned int>::max() + 1 number.

As a concrete example, a std::deque<unsigned char> d = {95, 14, 230} would represent 95 * 256^2 + 14 * 256^1 + 230 * 256^0, in the same way common base ten 134 = 1 * 10^2 + 3 * 10^1 + 4 * 10^0.

Anyway, I'm getting carried away with this. What you're in need of is a much more specific case of this. Rather than making a generic adder, all you have to do is just add 1, and if that causes the lowest digit to overflow, push it back down to a 1 and carry the add up the chain.

std::deque<int> increment(std::deque<int> d, int X)
{
    for (std::deque<int>::reverse_iterator it = d.rbegin(), end = d.rend(); it != end; ++it) {
        *it += 1;
        if (*it > X) {
            *it = 1;   // this digit overflowed: reset it and carry into the next digit
        } else {
            return d;  // no overflow means no carry; we're done
        }
    }
    // Every digit overflowed. For fixed-length sequences the driver loop above
    // stops at max before this can happen, but grow the number just in case.
    d.push_front(1);
    return d;
}

The above code is unoptimized and only lightly checked. It can probably be written more compactly, but it's been a long time since I've done anything like this, and I'm just trying to illustrate how it can be done arithmetically. Also, if you're concerned about performance, you might want to have increment mutate d rather than return a modified copy.
_cstheory.8799
Suppose I have a simple polygon $S$ and an integer $k$. What are some existing approaches for finding the smallest radius $r$ such that I can cover $S$ with $k$ circles of radius $r$? How about if $r$ is fixed, and I want to minimize $k$?
Covering a simple polygon with circles
cg.comp geom;planar graphs;set cover
Use the k-center clustering algorithm: see Section 4.2 in http://goo.gl/pLiEO. One can get a (1+eps)-approximation algorithm using sliding grids. It is natural to assume the problem is NP-hard because of the work by Feder and Greene.
_webapps.82525
Recently, while trying to log in to Endomondo with my Facebook ID, I saw an error message along the lines of: Could not log you in. Some software (like AdBlock) can interfere with login. (This is not the exact message; I did not make a copy.) I know that some websites detect AdBlock or similar tools and then try to convince the user to turn it off for their site, either by guilt-tripping the user or even by disabling some functionality and content. Can tools like AdBlock really interfere with Facebook login? Or is it just a clever way for a website to convince me to turn AdBlock off?
Can AdBlock interfere with Facebook login?
browser addons
null
_scicomp.13005
I want to solve the following system of nonlinear reaction-diffusion equations (Schnakenberg Turing) using FEM methods (such as deal.ii):$$ \partial_{t} u = \Delta u + \gamma\left(a-u+uv\right)$$$$ \partial_{t} v = d\Delta v + \gamma\left(b-uv\right)$$where d, $\gamma$, a, b being constants.Probably I need to first apply a time integration scheme (such as Crank-Nicholson, implicit Runge-Kutta), and then FE space discretization.Question: How should I choose and perform the time integration for this nonlinear system? Any hints for very similar examples or specific literature?
Time Integration of a nonlinear reaction-diffusion system
pde;finite element;nonlinear equations;time integration
null
_unix.177789
I'm running Linux Mint 17.1, but this probably applies to most distros anyway.I just installed LM cleanly from a bootable USB (which is fine - I checked) and I need to install the appropriate drivers from NVIDIA (which I can't do), which requires the X server to be closed. I try sudo /etc/init.d/mdm stop, and that takes me to a fully functional terminal - except from one thing. The entire screen is black, no text. Commands work fine, however.Any pointers as to how I can fix this problem?lshw -C display output: *-display UNCLAIMED description: VGA compatible controller product: GM107 [GeForce GTX 750 Ti] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller cap_list configuration: latency=0 resources: memory:fd000000-fdffffff memory:d0000000-dfffffff memory:ce000000-cfffffff ioport:dc00(size=128) memory:feb00000-feb7ffffEdit: have now installed the NVIDIA drivers - they work, but still no terminal.
Blank console when exiting X
linux mint;xorg;console;nvidia
null
_unix.302641
OK, so I made a bootable USB with Kali on it, and I've used it on two different laptops and it booted just fine, but when I try booting it on my laptop it just skips the USB and boots from the hard drive. The two laptops that it did boot on were running Windows (one had Vista, the other had 10), but on mine I have Ubuntu installed, and before that I was running Mint. I don't know if that's the problem, but it's the only thing I can think of right now.
Booting Kali Linux from live usb, USB skipped
kali linux;live usb
null
_unix.106392
I have been using VNC for a long time for development, and now I have decided to move to iterm2 and screen. But there is a small problem. I have to ssh into a remote machine and use vim there, and I want to enable the set mouse=a option. I also need the remote vim clipboard in the ssh session to be available to Mac applications. For this, I used these instructions, but they did not help. I installed X11, selected the needed options in preferences->pasteboard, and ssh'ed to the remote machine with ssh -x [email protected] For some reason this does not seem to work. Am I missing something?
Use remote ViM clipboard from iterm2 on Mac OS X
ssh;vim;osx
It's possible the remote machine's vim doesn't support the X clipboard/selection registers. If you type :ver there, does it say +clipboard or +xterm_clipboard?If not, you may have to resort to other means to use a clipboard, such as running clipper on your Mac, plus always forwarding port 8377, plus adding necessary key bindings to your Vim config files. The Clipper page is sparse but the demo video should tell you what you need to know to use it.
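To check from the shell, and to rule out the forwarding flag as the culprit (note the question used lowercase -x, which actually disables X11 forwarding; the capital -X enables it):

# On the remote host: does this vim build know about the X selection registers?
vim --version | grep -E '[+-](xterm_)?clipboard'

# When connecting, request X11 forwarding with capital -X (or trusted -Y):
ssh -X user@remotehost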
_softwareengineering.96383
I understand that exception in program means that something unpredictable happened (but not so bad to unavoidably crash the application!). The try-catch-finally sequence makes me sad because the program is harder to read (one more level of curly brackets) and harder to understand (jump from anywhere to catch in case of exception happened, it is deprecated GOTO).Since we have OOP, I suggest to create proxy class which in case of exception consumes it silently, returns some default value and fires onError event. So if we propagate exception, we have onError call instead, and the two disadvantages mentioned above are solved. See the example in C#:Standard exception attitude class Computer { // let's say we need to propagate exception and decide what to do lately public int divide(int a, int b) { int result = a / b; return result; } } class Program { public static void Main() { Computer c = new Computer(); try { Console.WriteLine(c.divide(1, 0)); } catch(ArithmeticException e) { // we are teleported here from the middle of Computer.divide method! // do something } } } // three levels of brackets (without namespace) in such trivial example??onError handler attitude class ProxyComputer { private Computer c = new Computer(); // it is not virtual, can not be overriden public int divide(int a, int b) { // alas, exceptions are standard, but we can stop them in this class try { return c.divide(a, b); } catch(ArithmeticException e) { this.onError(e); } } protected virtual void onError(Exception e) { // do nothing } } class MoreStrictComputer : ProxyComputer { protected override void onError(Exception e) { // mail to IT department, revert all transactions etc. Console.WriteLine(I can't seem to do that); } } class Program { public static void Main() { ProxyComputer pc = new ProxyComputer(); MoreStrictComputer msc = new MoreStrictComputer(); // fires onError instead of exception Console.WriteLine(pc.divide(1, 0)); // no problem Console.WriteLine(msc.divide(1, 0)); // this time it won't be so easy } } // only two levels of bracketsOften I see empty (or trivial) catch blocks. This means programmers often consider exception as no problem situation, but with standard exception attitude, they still have to bother with try-catch. The second attitude makes it optional.So the question is: Which solution would you prefer? Is this some kind of pattern or am I missing something?Edit As Paul Equis suggests, throw e; should be default onError behavior, not do-nothing silence swalowing.
Is onError handler better than exceptions?
c#
I agree with Paul Equis's answer that exceptions should be preferred, but with a caveat. One major feature of exceptions is that they break control flow. This is usually desirable, but if it isn't then some other pattern might be useful for augmenting the exception system.For example, suppose you're writing a compiler. Exceptions might not be the best choice here, because throwing an exception stops the compile process. This means that only that first error would be reported. If you want to keep reading the source code in order to try and find more errors (as the C# and VB compilers do), then some other system is needed for reporting errors to the outside world.The easiest way to take care of that would be saving the exceptions to a collection and then returning it. However, using an OnError delegate might be worthwhile if you want to give the caller an opportunity to give advice on how to proceed after each error. That sounds like an uncommon scenario to me, though. If you're not asking the caller to really micro-manage error handling, then using some flags to specify error-handling behavior would be less fiddly to work with.
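As a rough C# illustration of that collect-and-continue idea (ParseLine and SyntaxException are hypothetical stand-ins, not part of any real compiler API):

// Sketch: accumulate errors instead of stopping at the first one.
public sealed class CompileResult
{
    private readonly List<string> _errors = new List<string>();
    public IReadOnlyList<string> Errors { get { return _errors; } }
    internal void AddError(string message) { _errors.Add(message); }
}

public CompileResult Compile(IEnumerable<string> sourceLines)
{
    var result = new CompileResult();
    var lineNumber = 0;
    foreach (var line in sourceLines)
    {
        lineNumber++;
        try
        {
            ParseLine(line); // hypothetical parser; throws SyntaxException on bad input
        }
        catch (SyntaxException ex)
        {
            // Record the problem and keep reading so later errors are reported too.
            result.AddError(string.Format("Line {0}: {1}", lineNumber, ex.Message));
        }
    }
    return result;
}

The caller then decides what to do with the full list of errors, and control flow is never broken mid-compile.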
_unix.56197
I've noticed that ls -l doesn't only change the formatting of the output, but also how directory symlinks are handled:> ls /rmnbiweekly.sh daily.sh logs ...> ls -l /rmnlrwxrwxrwx 1 root root 18 Feb 11 2011 /rmn -> /root/maintenance/I'd like to get a detailed listing of what's in /rmn, not information about the /rmn symlink.One work-around I can think of is to create a shell function that does something like this:cd /rmnls -lcd -But that seems too hacky, especially since it messes up the next use of cd -. Is there a better way?(I'm on CentOS 2.6.9)
How to ls using the long format (-l) while still following directory symlinks?
ls;symlink
See if your ls has the options: -H, --dereference-command-line follow symbolic links listed on the command line --dereference-command-line-symlink-to-dir follow each command line symbolic link that points to a directoryIf those don't help, you can make your macro work without messing up cd - by doing:(cd /rmn ; ls -l)which runs in a subshell.
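For completeness, two shorter spellings that should give the same result without a subshell:

ls -lH /rmn    # -H: follow symlinks that are named on the command line
ls -l /rmn/    # trailing slash: list the directory the link points to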
_unix.207007
I'm trying to use iptables to load balance web traffic over multiple DSL lines by marking the packets and routing based on the mark. I'm working with CentOS 6.6, Kernel 2.6.32-504.16.2.el6.x86_64, Iptables v1.4.7.For now I've done the following, as a proof of concept:iptables -t mangle -A PREROUTING -j MARK --set-mark 2iptables -t mangle -A OUTPUT -j MARK --set-mark 2Plus some logging and failsafe for the remote connection:iptables -t mangle -A PREROUTING -p tcp --dport 22 -j ACCEPTiptables -t mangle -A OUTPUT -j LOG --log-prefix output iptables -t mangle -A PREROUTING -j LOG --log-prefix prerouting So iptables -t mangle -L -v gives me Chain PREROUTING (policy ACCEPT 177 packets, 93050 bytes) pkts bytes target prot opt in out source destination 164 13112 ACCEPT tcp -- any any anywhere anywhere tcp dpt:ssh 7687 4287K MARK all -- any any anywhere anywhere MARK set 0x2 7687 4287K LOG all -- any any anywhere anywhere LOG level warning prefix `prerouting 'Chain INPUT (policy ACCEPT 184 packets, 91203 bytes) pkts bytes target prot opt in out source destinationChain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destinationChain OUTPUT (policy ACCEPT 25 packets, 3100 bytes) pkts bytes target prot opt in out source destination 304 38367 MARK all -- any any anywhere anywhere MARK set 0x2 304 38367 LOG all -- any any anywhere anywhere LOG level warning prefix `output 'Chain POSTROUTING (policy ACCEPT 25 packets, 3100 bytes) pkts bytes target prot opt in out source destinationI've set up alternative routing tables. ip route show table DSL2 gives me10.77.0.0/16 via 112.112.224.1 dev eth4112.112.0.0/16 via 112.112.224.1 dev eth4default via 10.177.55.33 dev eth2(112.112.0.0/16 and 10.77.0.0/16 via eth4 is the LAN, 10.177.55.33 via eth2 is one of the DSL routers.)And I added a policy to use table DSL2 when mark is set to 2. ip rule shows: 0: from all lookup local32764: from all fwmark 0x2 lookup DSL232765: from all fwmark 0x1 lookup DSL132766: from all lookup main32767: from all lookup default(Ignore DSL1 for now. It comes into play when it's working so far.)The logs show that the mark is being applied: (end of the line)Jun 1 17:05:03 squidXXX kernel: output IN= OUT=eth4 SRC=112.112.xxx.xxx DST=10.77.xxx.xxx LEN=312 TOS=0x08 PREC=0x00 TTL=64 ID=60789 DF PROTO=TCP SPT=22 DPT=49328 WINDOW=543 RES=0x00 ACK PSH URGP=0 MARK=0x2 But when I try to connect to an outside address I get a network unreachable reply, both when pinging from the local machine or when connecting to the proxy from another machine. Note: I have a squid proxy running on that machine as well which is working as intended.When I add 10.177.55.33 as the default route in the main routing table I can reach outside networks just fine.Now I've read about someone having the same problem and solving it be replacing the default route with target net 0.0.0.0/1. Not only is this wrong (any addresses above 128.0.0.0 wouldn't be accessible) but it also doesn't work in my case. Anyways what I get from it is that my routing table could be faulty so it takes the main routing table instead, but I don't see any errors. Or are there known bugs?Following this lead I tried adding ip rule add from all lookup DSL2 prio 1002 which routes the packets as expected, so that's probably not it.So it looks to me as if ip rule can't properly read the MARK or not use the table specified. But why?
ip rule not acting on fwmark
iptables;routing
null
_computergraphics.3645
Using VTK 7.0 I have found that rendering a 20k triangle STL model takes approximately 17ms on my Nvidia GTX970. However, I am only interested in the silhouette of this model (like the image below) and was wondering if: a) such speeds are reasonable for this model size, and b) since I am not really interested in a full render, is there a much faster way to compute just that silhouette? I considered a ray tracing approach where I just compute whether or not each pixel hits the model, but since I am only interested in speed I do not know if this is a good route to pursue. As a side note, these questions need not be specific to VTK - I am really just conerned with the fastest way to utilize a GPU to compute a silhouette (suggestions algorithms, theory, or libraries are all all welcome!) and what reasonable times I can expect for such model sizes.
How fast should I expect to render the silhouette of a 20k triangle model?
rendering;silhouette;vtk
null
_unix.195271
I want to write a script to set permissions on a windows share from linux client. I know that I can use smbclient, cifs or smbfs to mount windows share from linux. But I have no idea how to set permissions on windows share for a specific user from linux. Any help appreciated.Just for the information, I can set permissions for windows share from windows with cacls. Is there any equivalent command/procedure to set permissions on windows share from linux?
How can I set permissions on a windows share from linux client
linux;smb
null
_computerscience.1698
I can't really seem to figure out how to bind two constant buffers to my shaders. I have them described like so. One in slot b0 and the other in slot b1.cbuffer WVPData : register(b0){ matrix model; matrix view; matrix projection;};cbuffer DirLightData : register(b1){ float4 Ambient; float4 Diffuse; float4 Specular; float3 Direction; float pad;};Then for the root signature it's described like so.CD3DX12_DESCRIPTOR_RANGE range[2];CD3DX12_ROOT_PARAMETER parameter[1];range[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0);range[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 1);parameter[0].InitAsDescriptorTable(_countof(range), range, D3D12_SHADER_VISIBILITY_ALL);D3D12_ROOT_SIGNATURE_FLAGS rootSignatureFlags =D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT | // Only the input assembler stage needs access to the constant buffer.D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS |D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS |D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS;CD3DX12_ROOT_SIGNATURE_DESC descRootSignature;descRootSignature.Init(_countof(parameter), parameter, 0, nullptr, rootSignatureFlags);ComPtr<ID3DBlob> pSignature;ComPtr<ID3DBlob> pError;DX::ThrowIfFailed(D3D12SerializeRootSignature(&descRootSignature, D3D_ROOT_SIGNATURE_VERSION_1, pSignature.GetAddressOf(), pError.GetAddressOf()));DX::ThrowIfFailed(d3dDevice->CreateRootSignature(0, pSignature->GetBufferPointer(), pSignature->GetBufferSize(), IID_PPV_ARGS(&mRootSignature)));I believe that's correct. I think the problem I have is when I create the constant buffer and the cbv below. Specifically right under the Describe and create the constant buffer view. comment. I don't really understand what's going on.// Create a descriptor heap for the constant buffers.{ D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {}; heapDesc.NumDescriptors = 2; heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; // This flag indicates that this descriptor heap can be bound to the pipeline and that descriptors contained in it can be referenced by a root table. heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; DX::ThrowIfFailed(d3dDevice->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&mCbvHeap))); mCbvHeap->SetName(LConstant Buffer View Descriptor Heap);}// Create the constant buffer.DX::ThrowIfFailed(d3dDevice->CreateCommittedResource( &uploadHeapProperties, D3D12_HEAP_FLAG_NONE, &CD3DX12_RESOURCE_DESC::Buffer(CAlignedWVPDataSize), D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&mWVPConstantBuffer)));DX::ThrowIfFailed(d3dDevice->CreateCommittedResource( &uploadHeapProperties, D3D12_HEAP_FLAG_NONE, &CD3DX12_RESOURCE_DESC::Buffer(CAlignedDirLightDataSize), D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&mDirLightConstantBuffer)));// Describe and create a constant buffer view.D3D12_CONSTANT_BUFFER_VIEW_DESC cbvDesc[2];// = {};cbvDesc[0].BufferLocation = mWVPConstantBuffer->GetGPUVirtualAddress();cbvDesc[0].SizeInBytes = CAlignedWVPDataSize;cbvDesc[1].BufferLocation = mDirLightConstantBuffer->GetGPUVirtualAddress();cbvDesc[1].SizeInBytes = CAlignedDirLightDataSize;CD3DX12_CPU_DESCRIPTOR_HANDLE cbvHandle0(mCbvHeap->GetCPUDescriptorHandleForHeapStart(), 0, 0);d3dDevice->CreateConstantBufferView(cbvDesc, cbvHandle0);CD3DX12_CPU_DESCRIPTOR_HANDLE cbvHandle1(mCbvHeap->GetCPUDescriptorHandleForHeapStart(), d3dDevice->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV), 1);d3dDevice->CreateConstantBufferView(cbvDesc, cbvHandle1);// Initialize and map the constant buffers. 
We don't unmap this until the// app closes. Keeping things mapped for the lifetime of the resource is okay.DX::ThrowIfFailed(mWVPConstantBuffer->Map(0, nullptr, reinterpret_cast<void**>(&mMappedWVPBuffer)));memcpy(mMappedWVPBuffer, &mWVPData, sizeof(mWVPData)); DX::ThrowIfFailed(mDirLightConstantBuffer->Map(0, nullptr, reinterpret_cast<void**>(&mMappedDirLightBuffer)));memcpy(mMappedDirLightBuffer, &mDirLightData, sizeof(mDirLightData));Note: Slot b0 works perfect. I can change world view projection data just fine. But b1 does not work at all. What am I doing wrong?
DirectX 12 Constant Buffer Binding
c++;directx12;constant buffer
The problem looks like it's in this line:d3dDevice->CreateConstantBufferView(cbvDesc, cbvHandle1);The first parameter should be &cbvDesc[1]. As it is now, you're setting up two copies of cbvDesc[0].Also, it looks like you've reversed the second and third arguments to the cbvHandle1 constructor: the second argument should be the offset (1) and the third should be the increment size. Not that it really matters, since those two values just get multiplied together anyway.By the way, I don't think you need to set up two separate descriptor ranges when creating the root signature; since they're contiguous, you could just use a single range of two descriptors. But it shouldn't make a difference to the results.
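Putting both fixes together, the corrected view creation would look roughly like this (same variable names as in the question):

// One CBV per buffer, each written into its own descriptor slot.
d3dDevice->CreateConstantBufferView(&cbvDesc[0], cbvHandle0);

const UINT increment = d3dDevice->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
CD3DX12_CPU_DESCRIPTOR_HANDLE cbvHandle1(
    mCbvHeap->GetCPUDescriptorHandleForHeapStart(),
    1,           // offset: the second descriptor in the heap
    increment);  // size of one descriptor
d3dDevice->CreateConstantBufferView(&cbvDesc[1], cbvHandle1); // &cbvDesc[1], not cbvDesc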
_unix.346797
I'm considering to create a encrypted partition to store some documents and programs using Dm-crypt + Luks. Let's say it will be mounted on /mnt/secret.My question is: is it safe to have symbolic links in my plain partitions pointing to files in /mnt/secret?My apps will also store paths to /mnt/secret, for example, list of recently used documents.
Partition encryption vs symbolic links
disk encryption
A symbolic link is essentially just a special kind of pseudo-file containing the target path. There is no information about the contents, size, etc., of the target file.Besides leaking names of encrypted files, there should be no problem with having symbolic links to an encrypted partition security-wise, it is equivalent to a text file containing the target path.
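A quick way to see exactly what a symlink gives away while the encrypted partition is locked (the file name here is just an example):

ln -s /mnt/secret/taxes-2016.pdf ~/taxes.pdf
ls -l ~/taxes.pdf      # the target path is visible even when /mnt/secret is unmounted
readlink ~/taxes.pdf   # prints /mnt/secret/taxes-2016.pdf: the name leaks, the contents don't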
_cogsci.8572
I've recently experienced a number of hypnogogic near sleep states characterized by change in thinking (stage 1-2 sleep). I noticed that if I let go and get absorbed in the state, I can follow it.I can describe the hypnogogic sensation as a feeling of empty space without a definite boundary. Typically the state arises at 17-23 minutes after bedtime (I'm using a timer to check). However, as soon as I activate inner voice or think a thought spoken in that voice, those other near sleep states get completely suppressed. Literally, a single word disrupts these states.This makes me interested what happens in the brain when a person thinks using a single threaded, spoken train of thought? (I'm typing a question on cogsci right now or I'm reading a question right now) Is there some part of the brain that gets activated while others get suppressed? In particular I'm interested why spoken thought suppresses other non-verbal states that a brain can consciously experience?Internal monologue, also known as inner voice, internal speech, or verbal stream of consciousness is thinking in words. It also refers to the semi-constant internal monologue one has with oneself at a conscious or semi-conscious level.It would be interesting to know if there's some difference that fMRI can show between a brain that's reading using voice and brain that's reading non-verbally. I recall reading about similar phenomenon in the eastern spiritual traditions, like Taoism or Buddhism, where the states of mind they are trying to achieve are also incompatible with inner voice.
What is the neurobiological basis of the inner voice used for thought or reading?
neurobiology;cognitive neuroscience;language
null
_webapps.25521
Can the court in any way have access to a deactivated Facebook account? And if so, can they recover a permanently deleted account?
Deactivated Facebook accounts used in court
facebook;user interface
null
_webmaster.46765
Q1) If an automated bot visits my domain, will Google Analytics count it as traffic? What factors does Google Analytics use to decide what is legitimate traffic?

Q2) If a user visits xyz.domain.com, does Google Analytics consider that a visit to domain.com?
How does Google Analytics consider traffic?
google;google analytics
null
_softwareengineering.115720
I need two capabilities:
- calculating mutual friends, distinguishing between different types of edges (e.g. FRIEND, ENEMY and others);
- getting relationships, distinguishing between different types of edges as above.

My problem is speed: if I use a database such as MySQL, I can get thousands of relationships in a few moments, but if I need to calculate mutual friends, it costs a lot for my server, doesn't it? I have about 100,000 accounts on my site, and I want to introduce a relationship system, but obviously I have to decide on the right way to develop it. Do you have any ideas?
Which database should I use to manage relationship?
architecture;database;database design
null
_webmaster.17609
I just purchased and set up a Linode VPS. Normally I purchase a new domain name and set it up with my hosting, but Linode didn't ask for that. What do I have to do to publish my web application on Linode?
Publishing a web application with Linode
publishing;linode
Linode is not a managed service. This means you have to set everything up from A to Z.
They have a good range of StackScripts to set up your server. Then you'll need to add your domain using the DNS manager on the Linode Manager page. Then you'll probably need to manually configure your server's config files to add your domain.
It's a bit of a process if you're new to it all, but that's actually the benefit of an unmanaged VPS: having full control of the system.
If you're new to this and didn't realise what you were getting into, it might be best to lodge a support request for a refund. And next time look for a managed VPS; managed means someone will set it all up for you.
Hope this helps.
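As a sketch of the "manually configure your server's config files" step, here is roughly what a minimal virtual host looks like if you went the nginx route on a Debian-style layout (the domain and paths are placeholders, not anything Linode sets up for you):

# /etc/nginx/sites-available/example.com
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;   # wherever the application is deployed
    index index.html;
}

# enable the site and reload nginx
# ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
# service nginx reload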
_unix.211237
I have been learning about the different IPC mechanisms present in Linux for communication between user-space processes. I want to ask: what are the various ways in Linux for the kernel to communicate with a user-space process (the opposite of a system call, where user space initiates the request)? Can signals be one of them? What are the others?
Linux process - messages from kernel
linux;linux kernel
null
_unix.159947
I have an NFS mount that I use to log into many Linux/Unix servers. I created a passphraseless RSA and DSA key from with I copied the id_rsa.pub and id_dsa.pub files over to authorized_keys. total 9drwx------. 2 myusername mygroup 1024 Oct 7 2014 .drwxr-xr-x. 16 myusername mygroup 1024 Oct 7 2014 ..-rw-------. 1 myusername mygroup 621 Oct 7 2014 authorized_keys-rw-------. 1 myusername mygroup 668 Oct 7 2014 id_dsa-rw-r--r--. 1 myusername mygroup 620 Oct 7 2014 id_dsa.pub-rw-------. 1 myusername mygroup 887 Oct 7 2014 id_rsa-rw-r-----. 1 myusername mygroup 224 Oct 7 2014 id_rsa.pub-rw-r--r--. 1 myusername mygroup 1276 Oct 7 2014 known_hostsNow I am able to log into another Linux server without entering a password (great!), but the same thing doesn't work for the HP-UX machines. Not only does this not work it prevents me from logging in altogether. The password prompt will not take my password (neither ldap or local). Here is the output when I try to connect.[myusername@machine1 .ssh]$ ssh -vvv machine2OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010debug1: Reading configuration data /etc/ssh/ssh_configdebug1: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to machine2 [192.168.100.50] port 22.debug1: Connection established.debug1: identity file /home/mynfsmount/myusername/.ssh/identity type 0debug3: Not a RSA1 key file /home/mynfsmount/myusername/.ssh/id_rsa.debug2: key_type_from_name: unknown key type '-----BEGIN'debug3: key_read: missing keytypedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug2: key_type_from_name: unknown key type '-----END'debug3: key_read: missing keytypedebug1: identity file /home/mynfsmount/myusername/.ssh/id_rsa type 1debug3: Not a RSA1 key file /home/mynfsmount/myusername/.ssh/id_dsa.debug2: key_type_from_name: unknown key type '-----BEGIN'debug3: key_read: missing keytypedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug3: key_read: missing whitespacedebug2: key_type_from_name: unknown key type '-----END'debug3: key_read: missing keytypedebug1: identity file /home/mynfsmount/myusername/.ssh/id_dsa type 2debug1: Remote protocol version 2.0, remote software version OpenSSH_3.9debug1: match: OpenSSH_3.9 pat OpenSSH_3.*debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_5.3debug2: fd 3 setting O_NONBLOCKdebug1: SSH2_MSG_KEXINIT sentdebug3: Wrote 792 bytes for a total of 813debug1: SSH2_MSG_KEXINIT receiveddebug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: ssh-rsa,ssh-dssdebug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: ssh-rsa,ssh-dssdebug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctrdebug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctrdebug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,zlibdebug2: kex_parse_kexinit: none,zlibdebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5debug1: kex: server->client aes128-ctr hmac-md5 nonedebug2: mac_setup: found hmac-md5debug1: kex: client->server aes128-ctr hmac-md5 nonedebug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_GROUPdebug3: Wrote 24 bytes for a total of 837debug2: dh_gen_key: priv key bits set: 137/256debug2: bits set: 496/1024debug1: SSH2_MSG_KEX_DH_GEX_INIT sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_REPLYdebug3: Wrote 144 bytes for a total of 981debug3: check_host_in_hostfile: filename /home/mynfsmount/myusername/.ssh/known_hostsdebug3: check_host_in_hostfile: match line 1debug3: check_host_in_hostfile: filename /home/mynfsmount/myusername/.ssh/known_hostsdebug3: check_host_in_hostfile: match line 1debug1: Host 'machine2' is known and matches the RSA host key.debug1: Found key in /home/mynfsmount/myusername/.ssh/known_hosts:1debug2: bits set: 527/1024debug1: ssh_rsa_verify: signature correctdebug2: kex_derive_keysdebug2: set_newkeys: mode 1debug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug3: Wrote 16 bytes for a total of 997debug2: set_newkeys: mode 0debug1: SSH2_MSG_NEWKEYS receiveddebug1: SSH2_MSG_SERVICE_REQUEST sentdebug3: Wrote 48 bytes for a total of 1045debug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug2: key: /home/mynfsmount/myusername/.ssh/id_rsa (0x7f83a699deb0)debug2: key: /home/mynfsmount/myusername/.ssh/id_dsa (0x7f83a699e540)debug3: Wrote 64 bytes for a total of 1109debug1: Authentications that can continue: publickey,password,keyboard-interactivedebug3: start over, passed a different list publickey,password,keyboard-interactivedebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering public key: /home/mynfsmount/myusername/.ssh/id_rsadebug3: send_pubkey_testdebug2: we sent a 
publickey packet, wait for replydebug3: Wrote 240 bytes for a total of 1349debug1: Server accepts key: pkalg ssh-rsa blen 149debug2: input_userauth_pk_ok: SHA1 fp 96:97:2b:5e:98:cd:2a:2e:5a:14:e1:ab:75:79:41:3f:eb:03:b1:65debug3: sign_and_send_pubkeydebug1: read PEM private key done: type RSAdebug3: Wrote 384 bytes for a total of 1733debug1: Authentications that can continue: publickey,password,keyboard-interactivedebug1: Offering public key: /home/mynfsmount/myusername/.ssh/id_dsadebug3: send_pubkey_testdebug2: we sent a publickey packet, wait for replydebug3: Wrote 528 bytes for a total of 2261debug1: Server accepts key: pkalg ssh-dss blen 434debug2: input_userauth_pk_ok: SHA1 fp 9b:97:04:7f:b8:09:ff:51:26:fa:d4:05:c0:e1:55:d3:2d:c0:54:60debug3: sign_and_send_pubkeydebug1: read PEM private key done: type DSAdebug3: Wrote 592 bytes for a total of 2853debug1: Authentications that can continue: publickey,password,keyboard-interactivedebug2: we did not send a packet, disable methoddebug3: authmethod_lookup keyboard-interactivedebug3: remaining preferred: passworddebug3: authmethod_is_enabled keyboard-interactivedebug1: Next authentication method: keyboard-interactivedebug2: userauth_kbdintdebug2: we sent a keyboard-interactive packet, wait for replydebug3: Wrote 96 bytes for a total of 2949debug2: input_userauth_info_reqdebug2: input_userauth_info_req: num_prompts 1Password: At this point it will keep prompting for a password until it disconnects me from to many authentication failures. If I remove or empty .ssh/authorized_keys it will work just fine after putting in my password. So it appears that the HP-UX machines are having trouble reading the public key in authorized_keys.To make matters worse, some of the other employees are able to authenticate with RSA/DSA to the HP-UX servers just fine. The problem is that they set up their configuration 8 years ago and have no clue what they did differently. I've compared the files and permissions and don't see a difference. Here the ssh versions on the two machines I've tried to create the keys on:OpenSSH_3.9, OpenSSL 0.9.7d 17 Mar 2004HP-UX Secure Shell-A.03.91.002, HP-UX Secure Shell versionOpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010My syslog.log on the HP-UX machine doesn't give any useful information. The errors that you see below are caused from failed PAM authentication after the RSA public key has already been passed over. I'm including just for good measure.Oct 8 09:34:40 machine2 sshd[25497]: error: PAM: Success for myusername from machine1.example.comOct 8 09:34:40 machine2 sshd[25497]: Failed keyboard-interactive/pam for myusername from 192.168.100.90 port 59015 ssh2Oct 8 09:34:42 machine2 sshd[25497]: error: PAM: Authentication failed for myusername from machine1.example.comOct 8 09:34:43 machine2 sshd[25497]: Failed password for myusername from 192.168.100.90 port 59015 ssh2On the HP-UX machine I ran a sshd -d -p 5555 and connected with a client using ssh -p 5555 machine2. Here is the output. 
It doesn't seem to give any errors.# /usr/sbin/sshd -d -p 5555debug1: sshd version OpenSSH_3.9 [ HP-UX Secure Shell-A.03.91.002 ]debug1: read PEM private key done: type RSAdebug1: private host key: #0 type 1 RSAdebug1: read PEM private key done: type DSAdebug1: private host key: #1 type 2 DSAdebug1: rexec_argv[0]='/usr/sbin/sshd'debug1: rexec_argv[1]='-d'debug1: rexec_argv[2]='-p'debug1: rexec_argv[3]='5555'debug1: Bind to port 5555 on 0.0.0.0.Server listening on 0.0.0.0 port 5555.debug1: Server will not fork when running in debugging mode.debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8I give up for now. I just put that same RSA public key from my local account into root's authorized_keys and I was able to log in as root flawlessly. Then I put roots RSA public key in my local account's authorized_keys and it worked as well.The problem only appears to be when I ssh from my NFS mounted account to my NFS mounted account. Why that would make any difference I don't know.
DSA/RSA keys work with Linux but not HP-UX
ssh;authentication;hp ux
null
_softwareengineering.201908
I am new to storing dates based on time zones, and I need to know the standard way to store dates in the datastore.
My requirements are:
- easy to query the dates based on a date range;
- show each date in the appropriate time zone selected by the client (I maintain a separate table for the time zones);
- able to query using the datastore Admin console as well.
Any suggestions/ideas regarding this would be a great help in proceeding further.
What is the best way of storing date?
java;google app engine
null
_unix.36021
How can I change 'change' date?$ touch -t 9901010000 test;stat test File: `test' Size: 0 Blocks: 0 IO Block: 4096 regular empty fileDevice: fe01h/65025d Inode: 11279017 Links: 1Access: (0644/-rw-r--r--) Uid: ( 1000/ x) Gid: ( 1000/ x)Access: 1999-01-01 00:00:00.000000000 +0100Modify: 1999-01-01 00:00:00.000000000 +0100**Change: 2012-04-08 19:26:56.061614473 +0200** Birth: -
How can I change 'change' date of file?
linux;files;timestamps
null
_cogsci.3425
For some time I have been very interested in the intelligent design (ID)/creationism vs. evolution debate. My parents are both medical professionals and consider themselves creationists. I have many other family members who are also medical professionals and believe in creationism. While reading webpages on the subject and other widely rejected ideas, my experience suggested that medical professionals were more likely to accept such ideas than people from other technical professions. Is there any evidence to support this?Are people from medical professions more likely than average to accept conspiracy theories?I did find one article that appeared to address the issue, but I could not find a free copy of it anywhere.
Is there a variance in acceptance of conspiracy theories by occupation?
social psychology
null
_cogsci.9173
To my understanding, the steps of an action potential are as follows:The neuron is at rest--there is a negative charge (K ions) inside the cell, and a positive charge (Na ions) outside the cell. Pumps work hard, pumping in K and pumping out Na to maintain this polarization.Excitatory NTs bind with the dendrites. As soon as it's over a certain threshold it triggers an action potential.The inside of the cell depolarizes, and the depolarized chunk propagates through the axon.Now here's my confusion:A. How do neurotransmitters manage to depolarize the inside of the cell? Do they force the cell to give up pumping out Na ions? Do the neurotransmitters themselves contain positively charged ions that the ion pumps are not sensitive to?B. When the cell depolarizes, the electrical impulse travels down the axon. When this happens, Na+ and K+ ions rapidly pass through the membrane as the signal fires. (Like this picture) http://en.wikipedia.org/wiki/Action_potential#mediaviewer/File:Action_Potential.gifWhy are ions being pumped in and out of the axon in such a way to propagate an action potential? Why isn't it just a positively charged signal running through the axon, without regard to the outside environment? (please let me know if this is unclear!)
Explanatory gaps in the formation and propagation of action potentials
neurobiology;neurophysiology
I will try to answer all of your main questions and sub-questions structurally below.

How do neurotransmitters manage to depolarize the inside of the cell?
Do they force the cell to give up pumping out Na ions?
No, the Na,K-ATPase (the sodium-potassium pump) remains active, also during the action potential (AP).
Do the neurotransmitters themselves contain positively charged ions that the ion pumps are not sensitive to?
They can, but the charge of the neurotransmitter itself is irrelevant.
Answer: Neurotransmitters bind to their corresponding receptors. An example of an excitatory neurotransmitter is glutamate (Glu). Glu has many receptors, and one of them is the NMDA receptor. The NMDA receptor is coupled to a cation channel that opens when Glu binds (and other conditions are met). In turn, Na+ and other depolarizing cations can enter the cell through the open channel. Other neurotransmitters and receptor mechanisms exist, but coupling to a cation channel is a commonly encountered theme.

When the cell depolarizes, the electrical impulse travels down the axon. When this happens, Na+ and K+ ions rapidly pass through the membrane as the signal fires. [...] Why are ions being pumped in and out of the axon in such a way as to propagate an action potential?
During an action potential, the voltage changes are not the result of ions being pumped in or out of the cell. Instead, ions flow along their concentration and charge gradients out of or into the cell. For example, Na+, a key ion in any action potential, flows passively into the cell during an action potential. Passive influx occurs because Na+ is continuously and actively pumped out of the cell into the extracellular fluid by the Na,K-ATPase. Moreover, the inside of the cell is highly negatively charged. Both the concentration gradient and the charge gradient (i.e., the potential difference) will cause Na+ to surge into the cell once Na+ channels open. How then does the signal move along the axon? The trick is that Na+ depolarizes the cell membrane. When, for example, Glu binds to its NMDA receptor, Na+ enters the cell in the dendrite. Then voltage-gated ion channels take over. Voltage-gated ion channels open or close depending on the local membrane potential. Most notably, voltage-gated sodium channels (VGSCs) open when the cell membrane depolarizes. Hence, after NMDA receptors are activated in the dendrite, Na+ enters. This in turn depolarizes the dendrite, and VGSCs open. This causes further depolarization, adjacent VGSCs open, and so on. Voltage-gated potassium channels open after the VGSCs and re-polarize the membrane. An overview is provided in the following image from Antranik.org:

Why isn't it just a positively charged signal running through the axon, without regard to the outside environment?
Charges only move toward an opposite charge. A neuron is not differentially charged from dendrite to axon terminal, so another means of action potential transduction is needed. Step-wise opening of VGSCs is a clever trick to use a constant cell membrane potential to generate a directional action potential.
_opensource.1544
I am a contributor to a very old project presently in violation of the GPL. Our project is a plugin for a closed-source program, but the GPL does not permit this kind of linking. The project should have been licensed under the LGPL instead, but the original authors were not aware of this.Relicensing is already covered in another question, so we will ignore that here.There are only about 30 contributors to the project, but it's almost ten years old at this point. Many have moved on to other projects.The contributors clearly intended for their work to be used with the closed-source program in question. They did not know the GPL forbade it. It is exceedingly unlikely that one of them will take legal action against the project.If the project continues despite the GPL violation, what is our legal exposure to third parties? Can the FSF or any other group sue us?
GPL Violation - What is our legal exposure to third parties (not our contributors)?
gpl;law;collaboration;enforcement
If the project does not create and distribute a combined work of the plugin and the closed-source host, then I see no realistic liability. This is because you aren't violating the GPL. A violation occurs when someone combines the GPL library with the closed-source component and then distributes the result. You haven't done that, so you haven't violated anything. Someone who uses your plugin in the privacy of their home or business, but doesn't redistribute it, doesn't violate the GPL, either.

Only the copyright holders of your project have a cause of action if someone does combine the works and distribute them. So if all of you agree that it was your intent to permit this use, then none of you will bother users. If none of you bother users, including anyone who creates and distributes a combination, there is no one to sue anyone else.

A very faint liability might come into play if, in fact, some copyright holder did sue some third party for violating the license, and the third party tried to come up with a counter-suit claiming that you all somehow led him or her astray. I don't believe it; at worst, the fact that you all published this component looks to me like estoppel against any attempt by any of you to enforce the license.

Remember, the GPL is a license that you, the copyright holder, grant. You own the rights. You are the only people who can bother anyone else for violating the terms of the license. If you choose (like Linux) to take a different view of plugins and linkage, you can do that.
_unix.385552
I'm running Bananian Linux on my Banana Pro. Recently I changed some config settings but quit with Ctrl+C without finishing editing all of them. After the restart I am unable to log in with the default login root; I get an "incorrect login" error every time I try. I tried checking my username in /etc/passwd and /etc/shadow.

/etc/passwd file

    root:x:0:0:root:/root:/bin/bash
    daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
    bin:x:2:2:bin:/bin:/usr/sbin/nologin
    sys:x:3:3:sys:/dev:/usr/sbin/nologin
    sync:x:4:65534:sync:/bin:/bin/sync
    games:x:5:60:games:/usr/games:/usr/sbin/nologin
    man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
    lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
    mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
    news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
    uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
    proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
    www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
    backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
    list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
    irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
    gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
    nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
    systemd-timesync:x:100:103:systemd Time Synchronization,,,:/run/systemd:/bin/false
    systemd-network:x:101:104:systemd Network Management,,,:/run/systemd/netif:/bin/false
    systemd-resolve:x:102:105:systemd Resolver,,,:/run/systemd/resolve:/bin/false
    systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false
    ntp:x:104:109::/home/ntp:/bin/false
    sshd:x:105:65534::/var/run/sshd:/usr/sbin/nologin

/etc/shadow file

    root:$6$9KzHxAiY$L8WtC4E1KoZYbzaxMCK4AhpVGfS3oKLNdn1YjIbunGcQDJLm8GwjRy1fXU7vhHh7DrR8hNChqPnaoL76efh/f/:14610:0:99999:7:::
    daemon:*:16628:0:99999:7:::
    bin:*:16628:0:99999:7:::
    sys:*:16628:0:99999:7:::
    sync:*:16628:0:99999:7:::
    games:*:16628:0:99999:7:::
    man:*:16628:0:99999:7:::
    lp:*:16628:0:99999:7:::
    mail:*:16628:0:99999:7:::
    news:*:16628:0:99999:7:::
    uucp:*:16628:0:99999:7:::
    proxy:*:16628:0:99999:7:::
    www-data:*:16628:0:99999:7:::
    backup:*:16628:0:99999:7:::
    list:*:16628:0:99999:7:::
    irc:*:16628:0:99999:7:::
    gnats:*:16628:0:99999:7:::
    nobody:*:16628:0:99999:7:::
    systemd-timesync:*:16628:0:99999:7:::
    systemd-network:*:16628:0:99999:7:::
    systemd-resolve:*:16628:0:99999:7:::
    systemd-bus-proxy:*:16628:0:99999:7:::
    ntp:*:16628:0:99999:7:::
    sshd:*:16628:0:99999:7:::
Unable to recover lost login
debian;login;password;passwd;shadow
Since you seem to have access to /etc/shadow as a privileged user (sudo?), do:

    sudo passwd root

If, on the other hand, you are editing the filesystem in the MicroSD card in another machine, just edit out the root password in /etc/shadow. Delete the encrypted password field as in:

    root::14610:0:99999:7:::

Then you will be able to enter as root in the console, press ENTER when asked for the password, and change it once you login with passwd.
_codereview.132960
I'm working on a Meteor application which integrates a user's contacts from external sources (Google in the case of this example). I'm currently writing the server side code to retrieve this data and send it to the client.I figured using promises to do this made sense due to the asynchronous manner of requests. So I have getContacts which sends the request to the Google API, and processContacts which processes and formats the response data:getContacts = function (accessToken) { return new Promise(function (resolve) { httpRequest.get({ url: 'https://www.google.com/m8/feeds/contacts/default/full?alt=json', auth: { 'bearer': accessToken }, headers: { 'GData-Version': 3.0 }, }, function (err, res, body) { resolve(body); }); });}processContacts = function(googleContacts) { return new Promise(function(resolve) { const contacts = JSON.parse(googleContacts).feed.entry; const allContacts = []; const groupedContacts = { conflicts: [], new: [] }; ... ... resolve(groupedContacts); });}I have a Meteor method google.contacts.import which is called synchronously from the client, and because all of this is asynchronous I'm using a future to force the client to wait for the call to finish: Meteor.methods({ 'google.integration.import'(orgId) { check(orgId, String); const user = Users.getOne(Meteor.userId()); let fut = new Future(); getContacts(user.services.google.accessToken) .then(processContacts) .then(function(contacts){ fut.return(contacts); }) .catch(function(err){ console.log('error! :' + err); }); return fut.wait(); },})This all works fine, but it seems...messy or somewhat convoluted. I'm relatively new to Javascript and ES6 in particular so I feel I could definitely improve this. Am I correct to be using promises here? I guess they probably aren't necessary in the case of processContacts. I also realize this is lacking in terms of error checking/reporting.Any help or guidance is appreciated!
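For comparison, a minimal sketch of an alternative, assuming a Meteor release that resolves promises returned from server methods (newer releases do; check the docs for your version): the Future can be dropped entirely by returning the promise chain itself. getContacts, processContacts, and Users are the names from the code above; nothing else is introduced.

    // Sketch, not a drop-in: assumes Meteor awaits a promise returned from a method.
    Meteor.methods({
      'google.integration.import'(orgId) {
        check(orgId, String);
        const user = Users.getOne(Meteor.userId());
        // Returning the chain lets Meteor wait for it; rejections propagate
        // to the client as method errors instead of being swallowed by a catch.
        return getContacts(user.services.google.accessToken)
          .then(processContacts);
      },
    });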
Using promises to GET and process data
javascript;asynchronous;ecmascript 6;meteor
null
_unix.316264
I had issues with 1.2.x version of insync, so I googled for its website and I downloaded the deb file of the newest version for my 64 bit debian. dpkg couldn't install it, even setting the --force-all option, the output is ale@debian:~/Scaricati$ sudo dpkg --force-all -i insync_1.3.12.36116-wheezy_amd64.deb [sudo] password for ale: (Reading database ... 342721 files and directories currently installed.) Preparing to unpack insync_1.3.12.36116-wheezy_amd64.deb ... Traceback (most recent call last): File <string>, line 5, in <module> zipimport.ZipImportError: not a Zip file: '/usr/lib/insync/library.zip' dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg: trying script from the new package instead ... Traceback (most recent call last): File <string>, line 5, in <module> zipimport.ZipImportError: not a Zip file: '/usr/lib/insync/library.zip' dpkg: error processing archive insync_1.3.12.36116-wheezy_amd64.deb (--install): subprocess new pre-removal script returned error exit status 1 Traceback (most recent call last): File <string>, line 5, in <module> zipimport.ZipImportError: not a Zip file: '/usr/lib/insync/library.zip' *** Error in `dpkg': munmap_chunk(): invalid pointer: 0x000055edcb3d9751 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x3d93a70bcb)[0x7f34a30b2bcb] /lib/x86_64-linux-gnu/libc.so.6(+0x3d93a76fa6)[0x7f34a30b8fa6] dpkg(+0x20060)[0x55edc8b7c060] dpkg(+0x204b9)[0x55edc8b7c4b9] dpkg(+0x277fa)[0x55edc8b837fa] dpkg(+0x16b07)[0x55edc8b72b07] dpkg(+0x16ce5)[0x55edc8b72ce5] dpkg(+0x16f2d)[0x55edc8b72f2d] dpkg(+0xa297)[0x55edc8b66297] dpkg(+0x1ff9b)[0x55edc8b7bf9b] dpkg(+0x201a1)[0x55edc8b7c1a1] dpkg(+0x9d22)[0x55edc8b65d22] dpkg(+0x66a9)[0x55edc8b626a9] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f34a30622b1] dpkg(+0x67e9)[0x55edc8b627e9] ======= Memory map: ======== 55edc8b5c000-55edc8ba0000 r-xp 00000000 08:06 5505633 /usr/bin/dpkg 55edc8da0000-55edc8da3000 r--p 00044000 08:06 5505633 /usr/bin/dpkg 55edc8da3000-55edc8da4000 rw-p 00047000 08:06 5505633 /usr/bin/dpkg 55edc8da4000-55edc8fb8000 rw-p 00000000 00:00 0 55edcac76000-55edceccb000 rw-p 00000000 00:00 0 [heap] 7f34a1bff000-7f34a1c15000 r-xp 00000000 08:06 5637055 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f34a1c15000-7f34a1e14000 ---p 00016000 08:06 5637055 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f34a1e14000-7f34a1e15000 rw-p 00015000 08:06 5637055 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f34a1e15000-7f34a2170000 rw-p 00000000 00:00 0 7f34a2170000-7f34a217a000 r-xp 00000000 08:06 5637413 /lib/x86_64-linux-gnu/libnss_files-2.24.so 7f34a217a000-7f34a237a000 ---p 0000a000 08:06 5637413 /lib/x86_64-linux-gnu/libnss_files-2.24.so 7f34a237a000-7f34a237b000 r--p 0000a000 08:06 5637413 /lib/x86_64-linux-gnu/libnss_files-2.24.so 7f34a237b000-7f34a237c000 rw-p 0000b000 08:06 5637413 /lib/x86_64-linux-gnu/libnss_files-2.24.so 7f34a237c000-7f34a2382000 rw-p 00000000 00:00 0 7f34a2382000-7f34a238d000 r-xp 00000000 08:06 5637477 /lib/x86_64-linux-gnu/libnss_nis-2.24.so 7f34a238d000-7f34a258c000 ---p 0000b000 08:06 5637477 /lib/x86_64-linux-gnu/libnss_nis-2.24.so 7f34a258c000-7f34a258d000 r--p 0000a000 08:06 5637477 /lib/x86_64-linux-gnu/libnss_nis-2.24.so 7f34a258d000-7f34a258e000 rw-p 0000b000 08:06 5637477 /lib/x86_64-linux-gnu/libnss_nis-2.24.so 7f34a258e000-7f34a25a2000 r-xp 00000000 08:06 5639731 /lib/x86_64-linux-gnu/libnsl-2.24.so 7f34a25a2000-7f34a27a2000 ---p 00014000 08:06 5639731 /lib/x86_64-linux-gnu/libnsl-2.24.so 7f34a27a2000-7f34a27a3000 r--p 00014000 08:06 5639731 
/lib/x86_64-linux-gnu/libnsl-2.24.so 7f34a27a3000-7f34a27a4000 rw-p 00015000 08:06 5639731 /lib/x86_64-linux-gnu/libnsl-2.24.so 7f34a27a4000-7f34a27a6000 rw-p 00000000 00:00 0 7f34a27a6000-7f34a27ad000 r-xp 00000000 08:06 5637380 /lib/x86_64-linux-gnu/libnss_compat-2.24.so 7f34a27ad000-7f34a29ac000 ---p 00007000 08:06 5637380 /lib/x86_64-linux-gnu/libnss_compat-2.24.so 7f34a29ac000-7f34a29ad000 r--p 00006000 08:06 5637380 /lib/x86_64-linux-gnu/libnss_compat-2.24.so 7f34a29ad000-7f34a29ae000 rw-p 00007000 08:06 5637380 /lib/x86_64-linux-gnu/libnss_compat-2.24.so 7f34a29ae000-7f34a29c6000 r-xp 00000000 08:06 5636302 /lib/x86_64-linux-gnu/libpthread-2.24.so 7f34a29c6000-7f34a2bc5000 ---p 00018000 08:06 5636302 /lib/x86_64-linux-gnu/libpthread-2.24.so 7f34a2bc5000-7f34a2bc6000 r--p 00017000 08:06 5636302 /lib/x86_64-linux-gnu/libpthread-2.24.so 7f34a2bc6000-7f34a2bc7000 rw-p 00018000 08:06 5636302 /lib/x86_64-linux-gnu/libpthread-2.24.so 7f34a2bc7000-7f34a2bcb000 rw-p 00000000 00:00 0 7f34a2bcb000-7f34a2bcd000 r-xp 00000000 08:06 5636507 /lib/x86_64-linux-gnu/libdl-2.24.so 7f34a2bcd000-7f34a2dcd000 ---p 00002000 08:06 5636507 /lib/x86_64-linux-gnu/libdl-2.24.so 7f34a2dcd000-7f34a2dce000 r--p 00002000 08:06 5636507 /lib/x86_64-linux-gnu/libdl-2.24.so 7f34a2dce000-7f34a2dcf000 rw-p 00003000 08:06 5636507 /lib/x86_64-linux-gnu/libdl-2.24.so 7f34a2dcf000-7f34a2e41000 r-xp 00000000 08:06 5636499 /lib/x86_64-linux-gnu/libpcre.so.3.13.3 7f34a2e41000-7f34a3040000 ---p 00072000 08:06 5636499 /lib/x86_64-linux-gnu/libpcre.so.3.13.3 7f34a3040000-7f34a3041000 r--p 00071000 08:06 5636499 /lib/x86_64-linux-gnu/libpcre.so.3.13.3 7f34a3041000-7f34a3042000 rw-p 00072000 08:06 5636499 /lib/x86_64-linux-gnu/libpcre.so.3.13.3 7f34a3042000-7f34a31d7000 r-xp 00000000 08:06 5636270 /lib/x86_64-linux-gnu/libc-2.24.so 7f34a31d7000-7f34a33d6000 ---p 00195000 08:06 5636270 /lib/x86_64-linux-gnu/libc-2.24.so 7f34a33d6000-7f34a33da000 r--p 00194000 08:06 5636270 /lib/x86_64-linux-gnu/libc-2.24.so 7f34a33da000-7f34a33dc000 rw-p 00198000 08:06 5636270 /lib/x86_64-linux-gnu/libc-2.24.so 7f34a33dc000-7f34a33e0000 rw-p 00000000 00:00 0 7f34a33e0000-7f34a3404000 r-xp 00000000 08:06 5636565 /lib/x86_64-linux-gnu/libselinux.so.1 7f34a3404000-7f34a3603000 ---p 00024000 08:06 5636565 /lib/x86_64-linux-gnu/libselinux.so.1 7f34a3603000-7f34a3604000 r--p 00023000 08:06 5636565 /lib/x86_64-linux-gnu/libselinux.so.1 7f34a3604000-7f34a3605000 rw-p 00024000 08:06 5636565 /lib/x86_64-linux-gnu/libselinux.so.1 7f34a3605000-7f34a3607000 rw-p 00000000 00:00 0 7f34a3607000-7f34a362a000 r-xp 00000000 08:06 5636259 /lib/x86_64-linux-gnu/ld-2.24.so 7f34a3662000-7f34a37fb000 r--p 00000000 08:06 5505564 /usr/lib/locale/locale-archive 7f34a37fb000-7f34a37fd000 rw-p 00000000 00:00 0 7f34a3825000-7f34a3829000 rw-p 00000000 00:00 0 7f34a3829000-7f34a382a000 r--p 00022000 08:06 5636259 /lib/x86_64-linux-gnu/ld-2.24.so 7f34a382a000-7f34a382b000 rw-p 00023000 08:06 5636259 /lib/x86_64-linux-gnu/ld-2.24.so 7f34a382b000-7f34a382c000 rw-p 00000000 00:00 0 7fffe1f34000-7fffe1f55000 rw-p 00000000 00:00 0 [stack] 7fffe1f9e000-7fffe1fa0000 r--p 00000000 00:00 0 [vvar] 7fffe1fa0000-7fffe1fa2000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] AbortedSo I said who cares about insync, nautilus has google drive integration, so I decided to uninstall insync but I couldn't. Now, magically, I cannot run anymore apt-get upgrade or aptitude upgrade, I get this error output: Resolving dependencies... 
The following partially installed packages will be configured:
      insync
    No packages will be installed, upgraded, or removed.
    0 packages upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
    E: Can't find a source to download version '1.3.10.36104-wheezy' of 'insync:amd64'
    After unpacking 0 B will be used.
    E: Can't find a source to download version '1.3.10.36104-wheezy' of 'insync:amd64'
    E: Internal error: couldn't generate list of packages to download
    E: Perhaps the package lists are out of date, please try 'aptitude update' (or equivalent) first

This may be because the insync repo is inaccessible (which is, for the record, this one). What's more, I can't even open Synaptic without getting this nice error:

    An error occurred the following details are provided
    E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
    E: _cache->open() failed, please report.

Of course insync is the cause of all this:

    ale@debian:~/Scaricati$ sudo dpkg --audit && echo ok
    The following packages are in a mess due to serious problems during installation. They must be reinstalled for them (and any packages that depend on them) to function properly:
      insync    Google Drive sync and backup with multiple account suppor

What can I do, what can be done to solve this hideous situation and have my daily package upgrade back? Thank you all in advance.
SOLVED! insync messed apt and I can't do apt-get update/upgrade
debian;apt;dpkg;aptitude
I solved it another way that I found online: I removed the insync entry in /var/lib/dpkg/status, ran apt-get update && sudo apt-get upgrade again, and everything went back to working fine.
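Spelled out as a sketch (the backup step and the editor choice are additions here, not part of the one-line fix above):

    # Back up dpkg's status database before touching it.
    sudo cp /var/lib/dpkg/status /var/lib/dpkg/status.bak

    # Open the file and delete the whole "Package: insync" stanza
    # (from its "Package:" line down to the following blank line).
    sudo nano /var/lib/dpkg/status

    # Then refresh and upgrade as described.
    sudo apt-get update && sudo apt-get upgrade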
_vi.10728
We can make a single line from two lines with J. It looks like the following:

    line 1
    line 2

After pressing J on line 1 we've got this:

    line 1 line 2

But is there a combination that does the opposite (not u)? I mean splitting

    line 1 line 2
          ^ cursor

into

    line 1
    line 2

i<ENTER><ESC> is too messy. Is there a one-button shortcut for this in vim already?
Splitting a line into two
normal mode
As far as I know, vim doesn't have a command for this. But vim is also all about customization. Easy enough to make your own mapping!

    nnoremap s i<CR><Esc>
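A related built-in trick: with the cursor on the space separating the two halves, r<CR> in normal mode replaces that character with a line break, splitting the line in one keystroke with no custom mapping.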
_unix.386744
I'm writing a script to monitor some things on websites, and one of them is the HTTP status and response time.

In the script I run a command to get the http_status (the http command is provided by httpie, a curl-like tool for humans):

    http --timeout 10 --follow -h "http://$I/" | grep HTTP\/1.1 | awk '{print $2}'

This command will return the status itself, i.e. 200, 404, 403, etc., or it will return one of two other things:

    http: error: Request timed out (10.0s)

or

    http: error: ConnectionError ...

Note: increasing the timeout does not solve my problem. I need it to be 10 seconds.

How do I substitute a specific code for these two other cases? For example, return 9999 for the timeout and 8888 for the connection error.
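A sketch of one way to map those cases (assumptions: httpie prints its errors on stderr, and the error strings are worded exactly as quoted above; adjust the patterns if your version differs):

    #!/bin/sh
    # Capture both stdout and stderr so the error messages are visible to the script.
    out=$(http --timeout 10 --follow -h "http://$I/" 2>&1)

    case $out in
        *"Request timed out"*)  status=9999 ;;   # timeout sentinel
        *"ConnectionError"*)    status=8888 ;;   # connection-error sentinel
        *)  # Normal case: pull the numeric code off the status line.
            status=$(printf '%s\n' "$out" | awk '/HTTP\/1.1/ {print $2; exit}') ;;
    esac

    echo "$status"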
Shell Script to read output of command
shell;scripting;http
null
_unix.140194
How can I get the process with the biggest pid using ps?
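One possible approach (a sketch, since the thread has no accepted answer): list only the PID column with ps, sort numerically, and keep the last entry.

    # Print every PID (the trailing '=' suppresses the header),
    # sort numerically, and take the largest one.
    ps -eo pid= | sort -n | tail -n 1

    # To see the command as well, not just the number:
    ps -eo pid=,comm= | sort -n | tail -n 1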
How can I get the process with the biggest PID?
process;sort;ps;process management
null
_unix.255647
I've looked and looked and can't find a working solution for a bash script I'm trying to create to shut a process down and wait for it and its spawned processes to finish. I'm still learning a lot about Linux.

Context:

Process FOO runs.
Process BAR is used to check FOO and is also used to kill it (I have no control over the internals of these 2 processes).
All I can do is pass commands to BAR and it performs them.
In this case, I send a command to BAR to kill FOO and it spawns another process and backgrounds it.

Goal:

I'm trying to run 20 commands simultaneously to execute kill on 20 FOOs (via BAR) and WAIT for all FOOs to die before moving on to the next part of the script.

Problem:

So far all I can do is wait for BAR to execute, and the script moves on before the backgrounded process kills FOO.

    BAR exit FOO1
    BAR exit FOO2
    ...
    BAR exit FOO20
    wait
    do more stuff

I've also tried

    BAR exit FOO1
    PID1=$!
    BAR exit FOO2
    PID2=$!
    wait $PID1 $PID2

without luck.
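Since $! only ever names the BAR process (never whatever BAR spawns), one workable sketch is to fire the BAR commands, wait for them, then poll until the FOO processes themselves are gone. Assumptions: the FOO instances appear in the process table under names pgrep can match, and a one-second polling interval is acceptable.

    #!/bin/bash
    # Ask BAR to kill each FOO; background the BAR calls so they run in parallel.
    for i in $(seq 1 20); do
        BAR exit "FOO$i" &
    done
    wait   # waits only for the BAR processes themselves

    # Now poll until every FOO has actually exited.
    # pgrep -f matches against the full command line; adjust the pattern to your FOOs.
    while pgrep -f 'FOO[0-9]+' > /dev/null; do
        sleep 1
    done

    # do more stuff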
How to wait for all spawned and backgrounded processes to finish in bash script
bash;shell script;process;background process;exit
null
_webmaster.102913
On my webpage I have nice URL addresses like this:

    example.com/category

Then I have a filter which appends a query string to the URL; the address then looks like this:

    example.com/category/?type=1

The problem is that on Google I can see these results:

    example.com/category/?type=1
    example.com/category/?type=2
    example.com/category/?type=etc.

How could I get rid of these duplicates, please? Thanks.
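A common remedy, sketched with the placeholder domain from the question: declare the unfiltered page as the canonical URL, so the ?type= variants consolidate onto it. Each filtered page would carry a tag like this in its <head>:

    <!-- Served on example.com/category/?type=1, ?type=2, etc. -->
    <link rel="canonical" href="https://example.com/category/">

Depending on whether the filtered pages should rank at all, a noindex on the variants or Search Console's URL-parameter settings are alternative routes.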
SEO - how to get rid of duplicate web URLs
seo;url
null
_softwareengineering.187595
I was recently hired by a large multi-national corporation to head up mobile development for their sales operations/support team. In a company of close to 10,000 people I am, at least in the Americas, the only mobile developer. They are testing the waters, and phase 1 (temp-to-hire) went well enough for them. Now they are considering expanding to other developers for their other sales operations/support teams, and I've been tasked with assisting in/leading the writing of a standardization guide for iOS programming.

I am a big believer in giving people the freedom to work in the manner they are most comfortable with, but at the same time, I have been the creator of, and on the receiving end of, big-ball-of-mud applications. Having learned through experience, I have several standards that I follow religiously, such as commenting (at times almost every line; just short notes, but enough to let someone else know what is going on), using #pragma mark - DESCRIPTION to block off related methods, consistent indentation, naming classes with a prefix to avoid namespace conflicts, etc.

So I guess what I am looking for is not to tell another programmer how they should iterate through an array, but rather some basic standardization so anyone can jump into anyone else's project and find their way around with little learning curve. I'd love to see what other means people use to maintain control over a software development group.
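For concreteness, a small Objective-C sketch of two of the conventions mentioned; the ABC prefix and the method names are made up for illustration:

    // ABCSyncManager: "ABC" is a hypothetical three-letter company prefix,
    // used to avoid collisions since Objective-C has no real namespaces.
    #import <Foundation/Foundation.h>

    @interface ABCSyncManager : NSObject
    - (void)startSync;
    @end

    @implementation ABCSyncManager

    #pragma mark - Lifecycle

    - (instancetype)init {
        self = [super init];
        return self;
    }

    #pragma mark - Networking

    // Short one-line comments like this keep intent obvious to the next reader.
    - (void)startSync {
        // ...
    }

    @end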
Setting up a development standardization guide for in-house/vendor programmers
development process;ios
null
_webmaster.18290
A friend of mine is starting a business. The business model is that his website is the medium through which businesses compete with each other and bid to provide a service to members of the public. He has no technical background, but he knows the industry he's servicing well. He hired a web design company to create the site, and the idea is that once everything is up, the site will more or less manage itself: consumers will put up requests to be bid on, businesses will bid to provide the service, and consumers will accept bids.

Given that he has no technical background, it's unlikely that he will be able to resolve any technical issues. What kind of support should he seek from the web design company post-launch? He can't afford to have a full-time technical person on the payroll right now. What are the likely problems he can expect to run into? I'm thinking things like browser incompatibility as updates come out, and potentially performance issues as traffic ramps up if the site takes off.
What are the challenges for a non-technical person running a web based business?
management
Future browser compatibility isn't so much of an issue provided the website is designed to be future-proof (within reason) from the off; by that, I mean valid HTML/CSS that displays properly in a range of browsers on a number of devices.

If your friend were to experience an influx of traffic, it is merely a case of upgrading the server or the data-transfer allowance with his/her web host. Obviously, with your friend being non-technical, this may be something the web design company has taken care of, which means they would be his first port of call either way should this become an issue.

Provided the site is programmed and put together well, there is no reason why it shouldn't stand the test of time (as far as the intended functionality goes), though many of the best websites are updated with both features and content over time to keep things interesting and to continually optimize whatever purpose they are intended to serve.

It sounds like there will be a customer support role (and it also sounds like your friend will likely be taking care of it), and of course you never know what people might ask. That being said, if someone did have a technical issue he couldn't answer, the chances are the source of the issue is either local to that user or, as I keep saying, already covered because the site was designed to be expandable in the first place.

tl;dr: Provided the website is designed to display properly on a number of browsers and devices as well as being optimized for performance, and his web design company is also taking care of the hosting, he shouldn't have any worries until the point where the business has scaled to requiring full-time technical support anyway.
_webmaster.34330
For a site http://imageocd.com that I just set up, I initially spelled the category "automobiles" as "autimobiles"... I know it's ridiculous. I then set up over 10,000 pages behind that category, e.g. http://imageocd.com/automobiles/hillman-minx-cabrio-pictures-and-wallpapers.

So, I set up over 10,000 301 URL redirects to fix the spelling of "automobiles". I just checked my Google Webmasters report and got an error saying:

    http://www.imageocd.com/: Googlebot can't access your site
    Sep 7, 2012
    Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 66.7%.

Could the overabundance of 301 redirects be causing this? I host 13 sites on this dedicated server and all sites are running fine. I also contacted GoDaddy and they said the server is running fine. Any ideas on what might be going on?

Also, I have canonical set up for every URL. Could this be part of the error? Thanks.
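For what it's worth, DNS resolution happens before any HTTP redirect is ever seen, so the two layers can be tested independently. A quick sketch (assumes a machine with dig and curl installed):

    # DNS layer: should print the server's IP address with no errors.
    dig +short imageocd.com

    # HTTP layer: the first response line should show the 301 for a misspelled URL.
    curl -sI http://imageocd.com/autimobiles/ | head -n 1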
Can too many 301 redirects cause a DNS error?
google search console;dns;301 redirect
null
_cs.9686
What does a pseudo-polynomial algorithm tell us about the problem it solves? I don't see how running time improves if the algorithm is exponential in the input length and polynomial in the input value; so how do we explain this shift from exponential to polynomial?
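A standard illustration that may help (the textbook example, not something taken from this post): the dynamic-programming algorithm for Subset Sum with $n$ numbers and target value $W$ fills an $n \times W$ table. Nothing about the running time "improves"; what changes is the yardstick. Measured against the numeric value $W$, the cost is polynomial, but the input encodes $W$ in only about $\log_2 W$ bits, so measured against input length the same count is exponential:

$T(n,W) = O(nW) = O(n \cdot 2^{b}), \quad b = \lceil \log_2 W \rceil$

Concretely, $W = 1{,}000{,}000$ costs about a million table columns while occupying only 20 bits of input, and adding one bit to $W$ doubles the work. That is why a pseudo-polynomial algorithm becomes genuinely polynomial when numbers are given in unary, and why such problems are called weakly NP-complete.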
Weak and strong completeness
algorithms;complexity theory;np complete;pseudo polynomial
null
_cs.72271
I'm rather new to formal languages and reading Peter Linz's An Introduction to FORMAL LANGUAGES and AUTOMATA. As an exercise in the prerequisite chapter, there's a proof. I'm not so familiar with formal proofs, so I would like someone to tell me:

Does it prove it?
How would you make this proof stronger?

I am not interested so much in seeing an alternative way to prove this (assuming mine is good); I would rather know how I can make my proofs better in the future.

PROOF:

|A∪B| ≤ n + m, where n = |A|, m = |B| and A and B are sets.

So if I understand correctly, I want to prove that the size of the union of A and B is smaller than or equal to the sum of the individual sizes of A and B. Which intuitively makes lots of sense to me.

|A∪B| ≤ n + m
|A∪B| ≤ |A| + |B|

I then try to express the right-hand side differently. If I draw it, I can see that the following holds true (U for universal set, sorry still working out those symbols):

|A| + |B| = |U| - |A| + |A∩B| + |U| - |B| + |A∩B|
|A| + |B| = 2|U| - |A| - |B| + 2|A∩B|   (1)

And also

|A∪B| = |U| - |A| - |B| + 2|A∩B|   (2)

If I substitute back into the initial inequality:

|A∪B| ≤ |A| + |B|
|U| - |A| - |B| + 2|A∩B| ≤ 2|U| - |A| - |B| + 2|A∩B|
|U| ≤ 2|U|

Which is always true.
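For reference, a more direct route to the same statement (a standard argument, offered as a sketch alongside the proof above rather than a verdict on it) goes through inclusion-exclusion and needs no universal set:

$|A \cup B| = |A| + |B| - |A \cap B| \le |A| + |B| = n + m$

since $|A \cap B| \ge 0$. A proof that detours through $|U|$ silently assumes a finite universe exists; an argument that avoids $U$ is stronger because it holds for any finite sets $A$ and $B$.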
Proof verification: |A∪B| ≤ n + m
formal languages
null
_unix.148321
Is it possible for awk to read the program and the input from the standard input? I would like to be able to pipe a file to the following function.

    process_data () {
        awk -f - <<EOF
    {print}
    EOF
    }

Note: the actual program is longer, it can't be passed as a command line argument, and I'd rather not use temporary files.

Currently it doesn't output anything.

    $ yes | head | process_data
    $
awk - read program AND input from the standard input?
shell;awk;io redirection;here document
    process_data() {
      awk -f /dev/fd/3 3<< \EOF
         awk code here
    EOF
    }

Note that command line arguments can contain the newline character, and while there's a length limit, it's generally over a few hundred kilobytes:

    awk '
      BEGIN {...}
      /.../ ...
      END {...}'

If the issue is about embedding single quote characters in the awk script, another approach is to store the code in a variable:

    awk_code=$(cat << \EOF
    {print 'quoted' $0}
    EOF
    )

And do:

    process_data() {
      awk "$awk_code"
    }
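As a quick check of the /dev/fd/3 version: with {print} as the program body, yes | head | process_data prints ten lines of y, since the program arrives on file descriptor 3 while stdin stays free for the piped data.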
_unix.169039
I have a problem with locale settings and I can't find any solution that works! Every tutorial is similar to this one:
http://www.thomas-krenn.com/en/wiki/Perl_warning_Setting_locale_failed_in_Debian

So, this is the problem with locale:

    pi@server [~]:$ sudo deluser --remove-home cm22
    perl: warning: Setting locale failed.
    perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_CTYPE = "UTF-8",
    LANG = "en_GB.UTF-8"
    are supported and installed on your system.
    perl: warning: Falling back to the standard locale ("C").
    Looking for files to backup/remove ...
    Removing user `cm22' ...
    Warning: group `cm22' has no more members.
    Done.

How can I resolve this? Thank you.
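A sketch of the usual fixes (assumes Debian's locales package; the sed pattern is illustrative, so adapt it to the locale you actually need):

    # Interactive route: enable en_GB.UTF-8 and optionally set it as default.
    sudo dpkg-reconfigure locales

    # Non-interactive equivalent: uncomment the locale and regenerate.
    sudo sed -i 's/^# *en_GB.UTF-8 UTF-8/en_GB.UTF-8 UTF-8/' /etc/locale.gen
    sudo locale-gen

The LC_CTYPE = "UTF-8" line in the warning also suggests an SSH client forwarding an invalid value; unsetting it for the session (unset LC_CTYPE) or disabling SendEnv LANG LC_* on the client side is a complementary fix.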
Problem with locale: Setting locale failed.
debian;locale
null
_webmaster.22981
I am totally new to the SEO world.I got a client though, for whom I am doing SEO. He has a laundry/dry clean/moving/shoe repair kind of business and a website for it.I have no idea how much I should charge him.Also I am not sure whether it has to be a one time thing or ongoing work with monthly payments.We agreed that he pays me $300 and gives one month to work on his site, and we'll see what happens after that.Could you please give me any guidelines?Thank you!
SEO consultant job description and compensation guidelines
seo
null
_codereview.127999
I am working with a TreeView (the default control from the .NET Framework) that displays hierarchical data. Data are bound to the view using the MVVM pattern with a HierarchicalDataTemplate.

    <HierarchicalDataTemplate DataType="{x:Type models:TreeItemViewModel}" ItemsSource="{Binding Children, Mode=OneWay}">
        <TextBlock Text="{Binding Name}" />
    </HierarchicalDataTemplate>

The child items are loaded automatically when the view requests the child collection. To stay responsive, the child items are loaded in the background up to a specific depth:

    public class TreeItemViewModel : ViewModelBase
    {
        private readonly ObservableCollection<TreeItemViewModel> myChildren = new ObservableCollection<TreeItemViewModel>();
        private const int PRELOADING_DEPTH = 1;

        public string Name { get; set; }
        public bool IsLoaded { get; set; }
        public bool IsLoading { get; set; }

        public ObservableCollection<TreeItemViewModel> Children
        {
            get
            {
                ThreadPool.QueueUserWorkItem(o => EnsureChildrenAreLoaded(myChildren, 0));
                return myChildren;
            }
        }

        public void EnsureChildrenAreLoaded(ObservableCollection<TreeItemViewModel> childrenToFill, int depth)
        {
            if (!IsLoaded && !IsLoading)
            {
                IsLoading = true;
                var children = LoadChildren().ToArray();
                App.RunOnGuiThread(() =>
                {
                    foreach (var child in children)
                        childrenToFill.Add(child);
                    IsLoaded = true;
                    IsLoading = false;
                });
            }

            if (depth < PRELOADING_DEPTH)
                foreach (var child in myChildren)
                    child.EnsureChildrenAreLoaded(child.myChildren, depth + 1);
        }

        protected virtual IEnumerable<TreeItemViewModel> LoadChildren()
        {
            // logic for loading sub items
            return Enumerable.Empty<TreeItemViewModel>();
        }
    }

RunOnGuiThread:

    public static void RunOnGuiThread(Action action)
    {
        if (Current == null || Current.Dispatcher == null)
            action();
        else
            Current.Dispatcher.Invoke(action);
    }

Is that a proper solution? Does it have any disadvantages / potential for improvements?
Loading sub items of TreeView in background
c#;multithreading;wpf;mvvm
I do not see your full UI, so maybe you're using it (to display a busy indicator?), but from what I have here it seems that IsLoading is just a repetition of IsLoaded, and as such it should be dropped. Any reason for EnsureChildrenAreLoaded() to be public and TreeItemViewModel not to be sealed?

EnsureChildrenAreLoaded() might be:

    var children = LoadChildren().ToArray();
    App.RunOnGuiThread(() =>
    {
        foreach (var child in children)
            childrenToFill.Add(child);
        IsLoaded = true;
    });

However, we now have the problem of preloading the children. My major concern here is that you're queuing an action in the pool each time you read the property. Not such a big overhead, but avoidable. Also, I'd move these responsibilities to separate methods:

    void LoadChildrenAndUpdateUi(ObservableCollection<TreeItemViewModel> childrenToFill)
    {
        IsLoaded = true;
        var children = LoadChildren().ToArray();
        App.RunOnGuiThread(() =>
        {
            foreach (var child in children)
                childrenToFill.Add(child);
        });
    }

And:

    ObservableCollection<TreeItemViewModel> EnsureChildrenAreLoaded(
        ObservableCollection<TreeItemViewModel> childrenToFill, int depth)
    {
        if (depth >= PRELOADING_DEPTH)
            return myChildren;

        if (!IsLoaded)
            ThreadPool.QueueUserWorkItem(_ => LoadChildrenAndUpdateUi(myChildren));

        foreach (var child in myChildren)
            child.EnsureChildrenAreLoaded(child.myChildren, depth + 1);

        return myChildren;
    }

Our getter will then be:

    public ObservableCollection<TreeItemViewModel> Children => EnsureChildrenAreLoaded(myChildren, 0);

The major difference is that we're going through the children to determine whether they have to be loaded in the calling UI thread, but we're queuing an action in the thread pool only when required. Note that we're using only one property to track the state (IsLoaded, which should be private). I set its value to true before effectively reading the children to avoid multiple parallel requests; the name is now somewhat misleading and should be changed to something meaningful (_isVisited or something like that).

What next? If PRELOADING_DEPTH might ever be 0, or loading time may be really long, then I'd add a dummy item to the child collection (to display the expand/collapse indicator), something like:

    public TreeItemViewModel()
    {
        Children.Add(new TreeItemViewModel { Name = "Loading..." });
    }

Removed before you start adding new items:

    childrenToFill.Clear();
    foreach (var child in children)
        childrenToFill.Add(child);

But, well, you already have a busy indicator, so you don't need this at all!

Edit: if you're using IsLoading in your UI then, unfortunately, you can't drop it that easily. I think you have two options: replace IsLoading and IsLoaded with an enum like enum LoadingStatus { NotLoaded, Loading, Loaded }, used in conjunction with a value converter to hide/show the busy indicator, or use ordering (as you are already doing) to avoid locks (because, fortunately, you have only one thread for writing):

    var children = LoadChildren().ToArray();
    App.RunOnGuiThread(() =>
    {
        foreach (var child in children)
            childrenToFill.Add(child);
        IsLoaded = true;
        IsLoading = false;
    });

Set IsLoading in the UI thread to avoid race conditions (both read and write will then be done in the same UI thread):

    if (!IsLoaded && !IsLoading)
    {
        IsLoading = true;
        ThreadPool.QueueUserWorkItem(_ => LoadChildrenAndUpdateUi(myChildren));
    }

Final notes.

If your depth is limited to direct children, then you may simplify your implementation (just invoke the lazy-load method when you populate the list). You may also want to try a LazyAsync<T> implementation (I saw a nice one somewhere... can't find/remember where).

WPF supports asynchronous binding with a simple IsAsync=true in the binding expression. You may want to experiment with that; see MSDN for details.
_codereview.167049
I first want to apologize for dumping a lot of code into here, but I have been stuck on this issue for days. I was assigned to build a planning screen wherein I could pull corresponding information regarding Sales, Production, and Inventory. As a preface to all of this, this code works for me and does exactly what I need it to do. I originally posted this in Stack Overflow, but someone mentioned I should also post it here. I also want to note that the SpeedUp and SpeedDown functions are in a different module and are used to affect: Screen Updating, Events, Calculations, and the Status Bar.The problem, however, is the amount of time it takes to run. Originally, it was taking about 5-7 minutes, but I have been able to reduce it to 1-2 minutes depending on the computer being used. I have tried changing multiple things, and can not reduce the time more. Any input or advice would be greatly appreciated.Sub FillInventoryAcross()'This code is the formulas for the Total Inventory, Sales, and Production Data. Just addition formulas.Call SpeedUpDim strFormulas(1 To 3) As VariantWith ThisWorkbook.Sheets(Inventory)strFormulas(1) = =SUM(C15,C20,C25,C30,C35,C40)strFormulas(2) = =SUM(C16,C21,C26,C31,C36,C41)strFormulas(3) = =SUM(C18,C23,C28,C33,C38,C43).Range(C11:W11).formula = strFormulas(1).Range(C11:W11).FillRight.Range(C12:W12).formula = strFormulas(2).Range(C12:W12).FillRight.Range(C13:W13).formula = strFormulas(3).Range(C13:W13).FillRightEnd WithCall SpeedDownEnd SubSub FillInventoryPerLocation()'This code will fill in the inventory per location. It will add up all of the sales, movement and production per plant along with the prior days inventory.Call SpeedUpDim strformula(1 To 12) As VariantWith ThisWorkbook.Sheets(Inventory)strformula(1) = =IFERROR(IF(TRIM(Inventori!$D:$D)=L95,INDEX(Inventori!$E:$E,MATCH(Inventory!$M$3,Inventori!$B:$B,0))+SUM(C16:C19)+C46,0),0)strformula(2) = =IFERROR(IF(TRIM(Inventori!$D:$D)=L90,INDEX(Inventori!$E:$E,MATCH(Inventory!$M$3,Inventori!$B:$B,0)),0)+SUM(C21:C24),0)strformula(3) = =IFERROR(IF(TRIM(Inventori!$D:$D)=L91,INDEX(Inventori!$E:$E,MATCH(Inventory!$M$3,Inventori!$B:$B,0)),0)+SUM(C26:C29),0)strformula(4) = =IFERROR(IF(TRIM(Inventori!$D:$D)=L93,INDEX(Inventori!$E:$E,MATCH(Inventory!$M$3,Inventori!$B:$B,0)),0)+SUM(C31:C34),0)strformula(5) = =IFERROR(IF(TRIM(Inventori!$D:$D)=L94,INDEX(Inventori!$E:$E,MATCH(Inventory!$M$3,Inventori!$B:$B,0)),0)+SUM(C36:C39),0)strformula(6) = =IFERROR(IF(TRIM(Inventori!$D:$D)=A78,INDEX(Inventori!$E:$E,MATCH(Inventory!$M$3,Inventori!$B:$B,0)),0)+SUM(C41:C44),0)strformula(7) = =C15+Sum(D16:D19)+D46strformula(8) = =C20+sum(D21:D24)strformula(9) = =C25+sum(D26:D29)strformula(10) = =C30+sum(D31:D34)strformula(11) = =C35+sum(D36:D39)strformula(12) = =C40+sum(D41:D44).Range(C15).formula = strformula(1).Range(C20).formula = strformula(2).Range(C25).formula = strformula(3).Range(C30).formula = strformula(4).Range(C35).formula = strformula(5).Range(C40).formula = strformula(6).Range(D15:W15).formula = strformula(7).Range(D15:W15).FillRight.Range(D20:W20).formula = strformula(8).Range(D20:W20).FillRight.Range(D25:W25).formula = strformula(9).Range(D25:W25).FillRight.Range(D30:W30).formula = strformula(10).Range(D30:W30).FillRight.Range(D35:W35).formula = strformula(11).Range(D35:W35).FillRight.Range(D40:W40).formula = strformula(12).Range(D40:W40).FillRightEnd WithCall SpeedDownEnd SubSub SumIfSales()'This code will pull up all of the sales information for a product. Just a Sumif looking up information that matches Date/SKU. 
After the code is in the starting cell, it is then dragged accross for all of the other dates.Call SpeedUpDim strformula(1 To 6) As VariantWith ThisWorkbook.Sheets(Inventory)strformula(1) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$B$1:$B$200000,L95,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN)*-1)strformula(2) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$B$1:$B$200000,L90,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN)*-1)strformula(3) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$B$1:$B$200000,L91,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN)*-1)strformula(4) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$B$1:$B$200000,L93,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN)*-1)strformula(5) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$B$1:$B$200000,L94,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN)*-1)strformula(6) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$B$1:$B$200000,A78,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN)*-1).Range(C16:W16).formula = strformula(1).Range(C16:W16).FillRight.Range(C21:W21).formula = strformula(2).Range(C21:W21).FillRight.Range(C26:W26).formula = strformula(3).Range(C26:W26).FillRight.Range(C31:W31).formula = strformula(4).Range(C31:W31).FillRight.Range(C36:W36).formula = strformula(5).Range(C36:W36).FillRight.Range(C41:W41).formula = strformula(6).Range(C41:W41).FillRightEnd WithCall SpeedDownEnd SubSub SumIfMovement()'This code works in a similar way to the prior code, but looks to match Date/SKU to find product movement.Call SpeedUpDim strformula(1 To 6) As VariantWith ThisWorkbook.Sheets(Inventory)strformula(1) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$O$1:$O$200000,95)-SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$B$1:$B$200000,L95))strformula(2) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$O$1:$O$200000,90)-SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$B$1:$B$200000,L90))strformula(3) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$O$1:$O$200000,91)-SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$B$1:$B$200000,L91))strformula(4) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$O$1:$O$200000,93)-SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$B$1:$B$200000,L93))strformula(5) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$O$1:$O$200000,94)-SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$B$1:$B$200000,L94))strformula(6) = 
=(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$O$1:$O$200000,78)-SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!C8,Inventory!$M$3),Sales!$AD$1:$AD$200000,VT,Sales!$B$1:$B$200000,A78)).Range(C17:W17).formula = strformula(1).Range(C17:W17).FillRight.Range(C22:W22).formula = strformula(2).Range(C22:W22).FillRight.Range(C27:W27).formula = strformula(3).Range(C27:W27).FillRight.Range(C32:W32).formula = strformula(4).Range(C32:W32).FillRight.Range(C37:W37).formula = strformula(5).Range(C37:W37).FillRight.Range(C42:W42).formula = strformula(6).Range(C42:W42).FillRightEnd WithCall SpeedDownEnd SubSub SumIfProduction()'This code yet again works like the other two codes, but for production.Call SpeedUpDim strformula(1 To 6) As VariantWith ThisWorkbook.Sheets(Inventory)strformula(1) = =(SUMIFS(Production!$Q$1:$Q$100000,Production!$B$1:$B$100000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Production!$I$1:$I$100000,P95))strformula(2) = =(SUMIFS(Production!$Q$1:$Q$100000,Production!$B$1:$B$100000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Production!$I$1:$I$100000,P90))strformula(3) = =(SUMIFS(Production!$Q$1:$Q$100000,Production!$B$1:$B$100000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Production!$I$1:$I$100000,P91))strformula(4) = =(SUMIFS(Production!$Q$1:$Q$100000,Production!$B$1:$B$100000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Production!$I$1:$I$100000,P93))strformula(5) = =(SUMIFS(Production!$Q$1:$Q$100000,Production!$B$1:$B$100000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Production!$I$1:$I$100000,P94))strformula(6) = =(SUMIFS(Production!$Q$1:$Q$100000,Production!$B$1:$B$100000,CONCATENATE(Inventory!C$8,Inventory!$M$3),Production!$I$1:$I$100000,A78)).Range(C18:W18).formula = strformula(1).Range(C18:W18).FillRight.Range(C23:W23).formula = strformula(2).Range(C23:W23).FillRight.Range(C28:W28).formula = strformula(3).Range(C28:W28).FillRight.Range(C33:W33).formula = strformula(4).Range(C33:W33).FillRight.Range(C38:W38).formula = strformula(5).Range(C38:W38).FillRight.Range(C43:W43).formula = strformula(6).Range(C43:W43).FillRightEnd WithCall SpeedDownEnd SubSub DailySalesHistory()'This code works to look up the Sales History by day for a given product. Takes each starting Monday and will add a day to it accross until Sunday, then the next week starts. Does the Date/SKU thing like the other sections. 
It then multiplies the end value by -1 to make the sales values positive, as the user would like to see them as.Call SpeedUpDim strformula(1 To 7) As VariantWith ThisWorkbook.Sheets(Inventory)strformula(1) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!$B51,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN))strformula(2) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!$B51+1,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN))strformula(3) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!$B51+2,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN))strformula(4) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!$B51+3,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN))strformula(5) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!$B51+4,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN))strformula(6) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!$B51+5,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN))strformula(7) = =(SUMIFS(Sales!$I$1:$I$200000,Sales!$D$1:$D$200000,CONCATENATE(Inventory!$B51+6,Inventory!$M$3),Sales!$AD$1:$AD$200000,VN)).Range(D51).formula = strformula(1).Range(D51:D108).FillDown.Range(E51).formula = strformula(2).Range(E51:E108).FillDown.Range(F51).formula = strformula(3).Range(F51:F108).FillDown.Range(G51).formula = strformula(4).Range(G51:G108).FillDown.Range(H51).formula = strformula(5).Range(H51:H108).FillDown.Range(I51).formula = strformula(6).Range(I51:I108).FillDown.Range(J51).formula = strformula(7).Range(J51:J108).FillDownEnd WithCall SpeedDownEnd SubSub WeeklySalesHistory()'This code will take all of the valuse returned in the prior code and add them together. This will give the user the total sales of a product for a given week.Call SpeedUpWorksheets(Inventory).Range(K51).formula = =SUM(D51:J51)Range(K51:K108).FillDownCall SpeedDownEnd SubSub TopDaysoftheWeek()'This code will bring up the days of the week for three weeks. Starts with Sunday and ends with Saturday. First formula finds the Sunday, the other formulas just adds 1 to the day.Call SpeedUpWorksheets(Inventory).Range(C8).formula = =TODAY()-WEEKDAY(TODAY(),2)Worksheets(Inventory).Range(D8).formula = =C8+1Range(D8:W8).FillRightCall SpeedDownEnd SubSub InventoryInfo()'This code runs vlookups on the inputted SKU number to pull up corresponding information. If the cell is blank, it will tell the user what will come up. 
If there is an error, it will reflect that.Call SpeedUpWorksheets(Inventory).Range(B6).formula = =IF(ISBLANK($M$3),SKU Number,INDEX(ItemMaster!$B:$B,MATCH(Inventory!$M$3,ItemMaster!$B:$B,0)))Worksheets(Inventory).Range(C6).formula = =IF(ISBLANK($M$3),Product Name,IF(ISTEXT(INDEX(ItemMaster!$D:$D,MATCH(Inventory!$M$3,ItemMaster!$B:$B,0))),INDEX(ItemMaster!$D:$D,MATCH(Inventory!$M$3,ItemMaster!$B:$B,0)),INDEX(ItemMaster!$C:$C,MATCH(Inventory!$M$3,ItemMaster!$B:$B,0))))Worksheets(Inventory).Range(F6).formula = =IF(ISBLANK($M$3),Pieces Per Case in,INDEX(ItemMaster!$I:$I,MATCH(Inventory!$M$3,ItemMaster!$B:$B,0)))& piecesWorksheets(Inventory).Range(I6).formula = =IF(ISBLANK($M$3),Pieces Per Case in ,(ROUND(INDEX(ItemMaster!$J:$J,MATCH(Inventory!$M$3,ItemMaster!$B:$B,0))*35.274,2)))& OzWorksheets(Inventory).Range(L6).FormulaArray = =IF(ISBLANK($M$3),Date of Last Run,MAX(IF(Production!$H:$H=Inventory!$M$3,Production!$N:$N)))Worksheets(Inventory).Range(N6).formula = =IF(ISBLANK($M$3),Line of Last Run,INDEX(Production!$E:$E,MATCH(Inventory!$M$3,Production!$H:$H,0)))Worksheets(Inventory).Range(P6).formula = =IF(ISBLANK($M$3),Allergen Codes,Codes: &$AI$5)Worksheets(Inventory).Range(Q6).formula = =IF(ISBLANK($M$3),,$AI$6)Worksheets(Inventory).Range(R6).formula = =IF(ISBLANK($M$3),,$AI$7) Worksheets(Inventory).Range(S6).formula = =IF(ISBLANK($M$3),,$AI$8)Worksheets(Inventory).Range(T6).formula = =IF(ISBLANK($M$3),,$AI$9)Worksheets(Inventory).Range(U6).formula = =IF(ISBLANK($M$3),,$AI$10)Worksheets(Inventory).Range(V6).formula = =IF(ISBLANK($M$3),,$AI$11)Worksheets(Inventory).Range(W6).formula = =IF(ISBLANK($M$3),,$AI$12)Worksheets(Inventory).Range(E10).formula = =IF(ISBLANK($M$3),Cases Per Dough,IF(ISNA(VLOOKUP(NUMBERVALUE($M$3),CPD!$A$1:$D$381,2,FALSE)),0,VLOOKUP(NUMBERVALUE($M$3),CPD!$A$1:$D$381,2,FALSE)))Worksheets(Inventory).Range(J10).formula = =IF(ISBLANK($M$3),Lines Product Was Run On,Lines: &$AO$5)Worksheets(Inventory).Range(K10).formula = =IF(ISBLANK($M$3),,IF(OR(ISERR(AO6),ISNA(AO6)),,AO6))Worksheets(Inventory).Range(L10).formula = =IF(ISBLANK($M$3),,IF(OR(ISERR(AO7),ISNA(AO7)),,AO7))Worksheets(Inventory).Range(M10).formula = =IF(ISBLANK($M$3),,IF(OR(ISERR(AO8),ISNA(AO8)),,AO8))Worksheets(Inventory).Range(N10).formula = =IF(ISBLANK($M$3),,IF(OR(ISERR(AO9),ISNA(AO9)),,AO9))Worksheets(Inventory).Range(O10).formula = =IF(ISBLANK($M$3),Average Cases Sold Per Week, SUM(K56:K67)/12)Worksheets(Inventory).Range(R10).formula = =IF(ISBLANK($M$3),Average Cases Sold Per Day, $O$10/7)Worksheets(Inventory).Range(U10).formula = =IF(ISBLANK($M$3),Days of Inventory Remaining,(INDEX(Inventori!$F:$F,MATCH(Inventory!$M$3,Inventori!$B:$B,0)))/R10)Call SpeedDownEnd SubSub HiddenFormulas()'This code runs some of the hidden formulas used to calculate and find factors for the inventory screen. 
The user will not be allowed to see or interact with them.

    Call SpeedUp
    Worksheets("Inventory").Range("AF5").formula = "=INDEX(ItemMaster!$K:$K,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AF6").formula = "=INDEX(ItemMaster!$L:$L,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AF7").formula = "=INDEX(ItemMaster!$M:$M,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AF8").formula = "=INDEX(ItemMaster!$N:$N,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AF9").formula = "=INDEX(ItemMaster!$O:$O,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AF10").formula = "=INDEX(ItemMaster!$P:$P,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AF11").formula = "=INDEX(ItemMaster!$Q:$Q,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AF12").formula = "=INDEX(ItemMaster!$R:$R,MATCH($M$3,ItemMaster!$B:$B,0))"
    Worksheets("Inventory").Range("AI5").formula = "=IF($AF$5=""X"",""O"","""")"
    Worksheets("Inventory").Range("AI6").formula = "=IF(AF6=""X"",""A"","""")"
    Worksheets("Inventory").Range("AI7").formula = "=IF(AF7=""X"",""B"","""")"
    Worksheets("Inventory").Range("AI8").formula = "=IF(AF8=""X"",""C"","""")"
    Worksheets("Inventory").Range("AI9").formula = "=IF(AF9=""X"",""AB"","""")"
    Worksheets("Inventory").Range("AI10").formula = "=IF(AF10=""X"",""AC"","""")"
    Worksheets("Inventory").Range("AI11").formula = "=IF(AF11=""X"",""BC"","""")"
    Worksheets("Inventory").Range("AI12").formula = "=IF(AF12=""X"",""ABC"","""")"
    Worksheets("Inventory").Range("AL5").FormulaArray = "=INDEX(Production!$E:$E, SMALL(IF(Inventory!$M$3=Production!$H:$H, ROW(Production!$H:$H)-ROW($A$1)+1), ROW(1:1)))"
    Range("AL5:AL554").FillDown
    Worksheets("Inventory").Range("AO5").FormulaArray = "=INDEX($AL$5:$AL$554, MATCH(0, COUNTIF($AO$4:AO4,$AL$5:$AL$554), 0))"
    Worksheets("Inventory").Range("AO6").FormulaArray = "=INDEX($AL$5:$AL$554, MATCH(0, COUNTIF($AO$4:AO5,$AL$5:$AL$554), 0))"
    Worksheets("Inventory").Range("AO7").FormulaArray = "=INDEX($AL$5:$AL$554, MATCH(0, COUNTIF($AO$4:AO6,$AL$5:$AL$554), 0))"
    Worksheets("Inventory").Range("AO8").FormulaArray = "=INDEX($AL$5:$AL$554, MATCH(0, COUNTIF($AO$4:AO7,$AL$5:$AL$554), 0))"
    Worksheets("Inventory").Range("AO9").FormulaArray = "=INDEX($AL$5:$AL$554, MATCH(0, COUNTIF($AO$4:AO8,$AL$5:$AL$554), 0))"
    Call SpeedDown
    End Sub

    Sub BottomDaysoftheWeek()
    'This code runs with TopDaysoftheWeek. While that code will pull up the date of each day, this code will convert that into the name of the day.
    Call SpeedUp
    Worksheets("Inventory").Range("C9").formula = "=TEXT(C8,""ddd"")"
    Range("C9:W9").FillRight
    Call SpeedDown
    End Sub

    Sub SalesHistoryByMonth()
    'This code finds the months for Sales History. It will find the month ahead of the current month all the way until a year prior.
    Call SpeedUp
    Worksheets("Inventory").Range("L51").formula = "=EOMONTH(TODAY(),0)+1"
    Worksheets("Inventory").Range("L53").formula = "=EOMONTH(TODAY(),-1)+1"
    Worksheets("Inventory").Range("L55").formula = "=EOMONTH(TODAY(),-2)+1"
    Worksheets("Inventory").Range("L57").formula = "=EOMONTH(TODAY(),-3)+1"
    Worksheets("Inventory").Range("L59").formula = "=EOMONTH(TODAY(),-4)+1"
    Worksheets("Inventory").Range("L61").formula = "=EOMONTH(TODAY(),-5)+1"
    Worksheets("Inventory").Range("L63").formula = "=EOMONTH(TODAY(),-6)+1"
    Worksheets("Inventory").Range("L65").formula = "=EOMONTH(TODAY(),-7)+1"
    Worksheets("Inventory").Range("L67").formula = "=EOMONTH(TODAY(),-8)+1"
    Worksheets("Inventory").Range("L69").formula = "=EOMONTH(TODAY(),-9)+1"
    Worksheets("Inventory").Range("L71").formula = "=EOMONTH(TODAY(),-10)+1"
    Worksheets("Inventory").Range("L73").formula = "=EOMONTH(TODAY(),-11)+1"
    Worksheets("Inventory").Range("L75").formula = "=EOMONTH(TODAY(),-12)+1"
    Worksheets("Inventory").Range("L77").formula = "=EOMONTH(TODAY(),-13)+1"
    Call SpeedDown
    End Sub

    Sub SalesHistoryWeeks()
    'This code finds the weeks of sales history.
The first formula takes the first date of the inventory section and adds 6 weeks to it. The following formula just decreases the week by a week until the end of the table.Call SpeedUpWorksheets(Inventory).Range(B51).formula = =(C8+42)-WEEKDAY(C8,3)Worksheets(Inventory).Range(B52).formula = =B51-7Range(B52:B108).FillDownCall SpeedDownEnd SubSub SalesHistoryMonthlyCalculations()'This code calculates the sales history by month. It works by adding together all of the weekly sales history to the left of these formulas based on if the weeks are within the corresponding month. This formula is a little iffy, where it works on a beginning of the week basis (i.e. 5/28-6/4 counts as May, not May and June). Aside form that, works really well. May need to rework this formula.Call SpeedUpWorksheets(Inventory).Range(L52).formula = =SUMIFS(K51:K108,B51:B108,>=&L51,B51:B108,<=&EOMONTH(L51,0))Worksheets(Inventory).Range(L54).formula = =SUMIFS(K51:K108,B51:B108,>=&L53,B51:B108,<=&EOMONTH(L53,0))Worksheets(Inventory).Range(L56).formula = =SUMIFS(K51:K108,B51:B108,>=&L55,B51:B108,<=&EOMONTH(L55,0))Worksheets(Inventory).Range(L58).formula = =SUMIFS(K51:K108,B51:B108,>=&L57,B51:B108,<=&EOMONTH(L57,0))Worksheets(Inventory).Range(L60).formula = =SUMIFS(K51:K108,B51:B108,>=&L59,B51:B108,<=&EOMONTH(L59,0))Worksheets(Inventory).Range(L62).formula = =SUMIFS(K51:K108,B51:B108,>=&L61,B51:B108,<=&EOMONTH(L61,0))Worksheets(Inventory).Range(L64).formula = =SUMIFS(K51:K108,B51:B108,>=&L63,B51:B108,<=&EOMONTH(L63,0))Worksheets(Inventory).Range(L66).formula = =SUMIFS(K51:K108,B51:B108,>=&L65,B51:B108,<=&EOMONTH(L65,0))Worksheets(Inventory).Range(L68).formula = =SUMIFS(K51:K108,B51:B108,>=&L67,B51:B108,<=&EOMONTH(L67,0))Worksheets(Inventory).Range(L70).formula = =SUMIFS(K51:K108,B51:B108,>=&L69,B51:B108,<=&EOMONTH(L69,0))Worksheets(Inventory).Range(L72).formula = =SUMIFS(K51:K108,B51:B108,>=&L71,B51:B108,<=&EOMONTH(L71,0))Worksheets(Inventory).Range(L74).formula = =SUMIFS(K51:K108,B51:B108,>=&L73,B51:B108,<=&EOMONTH(L73,0))Worksheets(Inventory).Range(L76).formula = =SUMIFS(K51:K108,B51:B108,>=&L75,B51:B108,<=&EOMONTH(L75,0))Worksheets(Inventory).Range(L78).formula = =SUMIFS(K51:K108,B51:B108,>=&L77,B51:B108,<=&EOMONTH(L77,0))Call SpeedDownEnd SubSub ProductionHistoryInfo()'This code is all of the formulas regarding Production History. Finds Cases, Doughs, and Line product was run on. Also has the yield formulas (cases/doughs).Call SpeedUpWorksheets(Inventory).Range(Q52).FormulaArray = =IF(OR(($O52)=,$O52=DATE(1900,1,0)),,INDEX(Production!$R:$R,MATCH(CONCATENATE(Inventory!$O52,Inventory!$M$3),Production!$B:$B,0)))Range(Q52:Q104).FillDownWorksheets(Inventory).Range(R52).formula = 0Range(R52:R104).FillDownWorksheets(Inventory).Range(S52).FormulaArray = =IF(OR(($O52)=,$O52=DATE(1900,1,0)),,INDEX(Production!$E:$E,MATCH(CONCATENATE(Inventory!$O52,Inventory!$M$3),Production!$B:$B,0)))Range(S52:S104).FillDownWorksheets(Inventory).Range(T52).formula = =IF(ISERR(Q52/R52),,Q52/R52)Range(T52:T104).FillDownCall SpeedDownEnd SubSub ProductionHistoryDates()'This code finds the dates that a given product was run on. Uses an array function to look up dates of production based on matching SKU numbers. Most volatile function in this section. Can only lookup/match up to 20,000 values, which could be problematic. May need some editing done based on how actual tables are set up. 
Goes from oldest to newest, which may also be a problem.Call SpeedUpWorksheets(Inventory).Range(O52).FormulaArray = =IFERROR(OFFSET(Production!$H$1:$H$100000,SMALL(IF(Production!$H$1:$H$100000=Inventory!$M$3,ROW(Production!$H$1:$H$100000)-ROW(INDEX(Production!$H$1:$H$100000,1,1))),ROW()-51),COLUMN()-9),)Range(O52:O104).FillDownCall SpeedDownEnd SubSub BorderFixer()'This code was created after I discovered that some of these vba formulas will break the borders I made after running. Its only purpose is to fill in those broken borders and fix them to look like how they looked before. With Worksheets(Inventory).Range(W10).Borders(xlEdgeBottom).LineStyle = xlContinuous.Weight = xlThin.ColorIndex = 2End With With Worksheets(Inventory).Range(W7:W43).Borders(xlEdgeRight) .LineStyle = xlContinuous .Weight = xlThin .ColorIndex = 2 End With With Worksheets(Inventory).Range(B108:K108).Borders(xlEdgeBottom) .LineStyle = xlContinuous .Weight = xlThin .ColorIndex = 2 End With With Worksheets(Inventory).Range(K108).Borders(xlEdgeRight) .LineStyle = xlContinuous .Weight = xlThin .ColorIndex = 2 End With End SubSub ResetInventory()'This code is made to run only if the user manages to unlock all of the cells on this sheet and start deleting the formulas I added. Once clicked, this code will: deactivate all of Excel's functions, run all of the prior codes related to inventory, run the border fixer to repair broken borders, and reactivate Excel's functions. While it runs fast and effectively, has a chance to break the worksheet. If so, click fix frozen cells. Will return error if the user tries to run this while sheets are protected. Call SpeedUp Call InventoryInfo Call TopDaysoftheWeek Call BottomDaysoftheWeek Call SumIfSales Call SumIfMovement Call SumIfProduction Call FillInventoryPerLocation Call FillInventoryAcross Call BorderFixer Call CPD Call SpeedDown End SubSub ResetSalesHistory()'This code is made to run only if the user manages to unlock all of the cells on this sheet and start deleting the formulas I added. Once clicked, this code will: deactivate all of Excel's functions, run all of the prior codes related to sales history, run the border fixer to repair broken borders, and reactivate Excel's functions. While it runs fast and effectively, has a chance to break the worksheet. If so, click fix frozen cells. Will return error if the user tries to run this while sheets are protected.Call SpeedUpCall SalesHistoryWeeksCall DailySalesHistoryCall WeeklySalesHistoryCall SalesHistoryByMonthCall SalesHistoryMonthlyCalculationsCall BorderFixerCall SpeedDownEnd SubSub ResetProductionHistory()'This code is made to run only if the user manages to unlock all of the cells on this sheet and start deleting the formulas I added. Once clicked, this code will: deactivate all of Excel's functions, run all of the prior codes related to production history, run the border fixer to repair broken borders, and reactivate Excel's functions. While it runs fast and effectively, has a chance to break the worksheet. If so, click fix frozen cells. Will return error if the user tries to run this while sheets are protected.Call SpeedUpCall ProductionHistoryDatesCall ProductionHistoryInfoCall BorderFixerCall SpeedDownEnd SubSub InventoryPrintPreview()'This code is made to show the users a print preview of what the worksheet will look like. I already made the print areas for the three tables on this sheet. 
User can change margins or format.Application.ExecuteExcel4Macro SHOW.TOOLBAR(Ribbon,True)Application.DisplayFormulaBar = TrueActiveWindow.DisplayWorkbookTabs = TrueWorksheets(Inventory).PrintPreviewApplication.ExecuteExcel4Macro SHOW.TOOLBAR(Ribbon,False)Application.DisplayFormulaBar = FalseActiveWindow.DisplayWorkbookTabs = FalseEnd Sub
Speeding Up My Current Code Inventory Management Planning Screen
performance;vba;excel
null
_unix.172159
In Debian, how can I tell initramfs not to request an IP address via DHCP? I'm using initramfs-tools. I'd be okay with assigning a static IP address for the initramfs, but I can't find how to set that either. I saw in the manual page initramfs-tools(8) the ip parameter, but I don't know where to specify it.Update: ip is not being passed as a kernel command line parameter:cat /proc/cmdlineBOOT_IMAGE=/boot/vmlinuz-3.16-3-amd64 root=/dev/mapper/root-root_vol ro root=/dev/mapper/root-root_vol ro rootdelay=10I watched it boot and the dhcp is definitely happening after the initramfs starts.
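One place to put it, sketched under the assumption that GRUB is the bootloader: initramfs-tools reads ip= from the kernel command line, and a value of off/none requests no autoconfiguration (whether that fully suppresses the DHCP attempt depends on what inside your initramfs is asking for an address, so treat this as a starting point):

    # /etc/default/grub: append ip=off (or a static spec) to the kernel command line.
    GRUB_CMDLINE_LINUX="ip=off"

    # Static alternative, same option; the full format is
    # ip=<client-ip>:<server-ip>:<gateway>:<netmask>:<hostname>:<device>:<autoconf>
    # GRUB_CMDLINE_LINUX="ip=192.0.2.10::192.0.2.1:255.255.255.0:myhost:eth0:off"

    # Regenerate grub.cfg and reboot:
    sudo update-grub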
disable dhcp in initramfs
debian;networking;ip;initramfs
null