id | question | title | tags | accepted_answer
---|---|---|---|---
_codereview.16437 | I am currently using this code, but I am pretty sure there is a better way of doing it: private string ReturnEmployeeName(string userName){ try { // Username coming is "RandomDomain\RandomLengthUsername.whatever" string[] sAMAccountName = userName.Split('\\'); return sAMAccountName[1]; How can I make this faster? I am not sure if my try block will catch any exceptions that may arise because of this line. | Splitting a string of random length | c#;performance;strings;error handling;sharepoint | Fast is not too relevant here, since it's such a simple method. And the canned response also applies: if you want to improve the speed, first do some benchmarking to know where the bottlenecks really are. But here is my proposal of a better way: private const string DefaultEmployeeName = "JDoe"; /// <param name="userName">Format expected: Domain\Name</param> private string ParseEmployeeName(string userName) { if (userName == null) { return DefaultEmployeeName; } // Username coming is "RandomDomain\RandomLengthUsername.whatever" // Let's split by '\', returning maximum 2 substrings. // The second (n) will contain everything from the first (n-1) separator onwards. string[] parts = userName.Split(new char[] { '\\' }, 2); if (parts.Length < 2) { return DefaultEmployeeName; } // Let's remove whitespace, just in case. string name = parts[1].Trim(); if (string.IsNullOrEmpty(name)) { return DefaultEmployeeName; } return name; // "RandomLengthUsername.whatever" } The comment with /// on top is a documentation comment. Those will enrich the Intellisense view of the method, providing extra information. |
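The same defensive pattern as the answer's C# proposal can be sketched in Python; this is a hypothetical translation, not code from the original post, and the default name "JDoe" is just the answer's placeholder:

```python
DEFAULT_EMPLOYEE_NAME = "JDoe"  # placeholder default, as in the answer above

def parse_employee_name(user_name):
    """Extract the account name from 'Domain\\Name', with the same
    guard clauses as the C# proposal: None input, a missing separator,
    and whitespace-only names all fall back to the default."""
    if user_name is None:
        return DEFAULT_EMPLOYEE_NAME
    parts = user_name.split("\\", 1)  # split once: at most 2 pieces
    if len(parts) < 2:
        return DEFAULT_EMPLOYEE_NAME
    name = parts[1].strip()  # remove whitespace, just in case
    return name or DEFAULT_EMPLOYEE_NAME
```

Note that `split("\\", 1)` mirrors `String.Split(new char[] { '\\' }, 2)`: everything after the first backslash stays in one piece.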
_unix.15050 | I know why this is good in general: faster security fixes, easier packaging, more features. However, I'm trying to persuade some co-workers that we don't need to bundle a library with our program. It will not work without this library, but the library has been stable for a while now and will remain so for the foreseeable future. I don't see any reason NOT to unbundle it. What arguments could I use to persuade them? My specific situation is this: I'm working on SymPy, which is an open-source Python library for symbolic mathematics. A core part of it is mpmath, which is a library for multi-precision floating-point arithmetic. SymPy doesn't work without mpmath; there is no alternative. As such, it has been bundled with SymPy since the start (I was told that there were usually small incompatibilities to fix every time a new version is imported). It should also be noted that the developer of mpmath used to be involved in SymPy development. There is now an issue on unbundling mpmath; you can read it all here. To summarize the discussion there: Unbundle: (1) somewhat easier porting to Python 3 (minor argument IMHO); (2) easier packaging for distributions; (3) faster (security) feature updates to users; (4) packaging and handling dependencies are hard problems, but they are solved; this is definitely not an area where we should do our own thing. Keep bundling: (1) installation: it's easy on Linux, harder on Mac and very hard on Windows (lack of su access and other problems); (2) it is an integral part of SymPy, i.e. SymPy does not work without it (at all); (3) there is no other package that can do the job of mpmath; (4) when I, as a user, download SymPy, I expect it to just work. That's my specific situation, but I'd accept an answer that provides a good, general answer as well. | Why are libraries shipped separately instead of bundled with every program?
| package management | Yet another answer, but one I consider to be the most important (just my own personal opinion), though the others are all good answers as well. Packaging the lib separately allows the lib to be updated without the need to update the application. Say there's a bug in the lib: instead of just being able to update the lib, you'd have to update the entire application. That means your application would need a version bump without its code even having changed, just because of the lib. |
_cogsci.16747 | Tell people that God says something and they believe. Tell people that the paint is wet and they have to touch it to believe. If I ask people whether God really made http://thegoodlordabove.com/god-talks-several-trump-supporters-need-safe-space/ people would laugh at me. If I ask people whether God really made the Bible or sent prophets, many will stone me for even questioning their sacred beliefs. Why? Is it fear of authority, an instinct to obey authority, or the lack of personal cost of being wrong that makes people believe things so easily? The Bible and the Quran, for example, carry tons of authority and tend to support the status quo. People would at least have an incentive to pretend they believe. Wet paint doesn't. Or what? What are the explanations? | How do we explain that some humans require far more evidence for some claims than others? | religion | There are at least two important factors or phenomena at play here. The first is whether the subject of the belief has real-world consequences to the believer. The second is whether the belief relates to groups to which the believer associates or belongs. When a person's belief on a particular matter makes no functional difference to that person, he or she is free to believe whichever or whatever way without gain or loss. In this sense, it is easy for a person to criticise others for doing something a certain way when the person doing the criticism is not in the business of trying to accomplish the same goal. A person might complain, for example, that company X would be more successful if they focused on product Y instead of product Z. Having this belief is inconsequential to the believer since he or she has nothing to gain or lose from being right or wrong. A similar occurrence is when a person believes something to be dangerous without real evidence, as for example the consumption of GMO foods. Whether GMOs actually hurt you is irrelevant to a person who eats strictly non-GMO organic foods. 
Granted the person has no shortage of money, the accuracy of this belief makes no functional difference, so there is little if any incentive to be honest about the actual safety record. For a person having barely enough money to buy food, on the other hand, the actual safety record of low-cost foods is much more pertinent and important since the decision makes a substantial difference in financial burden. It is easy for a person having ample money to make the claim that expensive items are substantially safer and more effective. For a person with limited funds, the truth of the matter is far more important.In the case of wet paint, if there is a risk to the person of getting the paint on his or her clothes, then the person is likely to see real consequences in the reality of the matter. Hence, there is little room for carelessly choosing a belief -- the truth has real consequences to this person.A person's affiliations and group identity may be given high priority, especially when a person stands to gain from that association. Any attack or perceived attack on the group or its collective beliefs may be taken as an attack on the individual. The group may be seen and felt as an extended self. A person having no such association or feeling toward a group or its beliefs is not likely to be much offended or protective of those beliefs. Hence, he or she is free to consider their truth or falsehood. Someone whose life and well-being depend on that group or set of beliefs, on the other hand, is not going to take skepticism lightly since it poses a very real threat. In this sense, the situation is similar to that mentioned above about consequences. If a person depends on a group or belief-system for material or emotional well-being, then any attacks on that group's reputation or status may harm the person indirectly.Naturally there is a third factor where certain religions impose or threaten penalties for those who do not defend the religion and its beliefs. 
In this case, a true believer may protect the group and its beliefs purely out of fear for said consequences.To answer your main question more directly, those for whom knowing the truth is more consequential will require greater evidence than those for whom knowing the truth is irrelevant or even harmful (whether materially or emotionally). A person dependent on a group for material or emotional support is more likely to believe in that group and its teachings, regardless of the truth.On a side note, even science can be seen as a group identity these days, from which we can expect that some individuals would derive emotional or material well-being. Having an emotional stake in science would make a person require more evidence to the contrary before considering a religious perspective. Unfortunately, this type of group identity can blind individuals on both sides of the science/religion debate, among other debates (politics, GMOs, et cetera). Ego and emotion, when left unchecked, can be very harmful. |
_cs.29896 | I am going to take Introduction to Artificial Intelligence this semester, and I know that our main resource will be Artificial Intelligence: A Modern Approach. Since I want to learn deeply and implement all the algorithms we are going to learn, I want to implement a simple intelligent mini-game for each topic in the course. So my question is: what games would be best for each topic? The only ones I could come up with were a Sudoku solver and a simple player-vs.-computer Tic-Tac-Toe. | Mini-Games for Artificial Intelligence Course | algorithms;artificial intelligence;education | null |
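The question above is unanswered in this dump; as one concrete pairing of topic and mini-game, adversarial search maps naturally onto the Tic-Tac-Toe idea already mentioned. A minimal minimax sketch (Python; the board encoding and function names are my own illustration, not from the course or the textbook): a board is a 9-character string, and the value is +1, -1, or 0 for an X win, O win, or draw under perfect play.

```python
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Game value of `board` (a 9-char string) with `player` to move:
    +1 if X can force a win, -1 if O can, 0 for a draw, under perfect play."""
    won = winner(board)
    if won == "X":
        return 1
    if won == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0
    nxt = "O" if player == "X" else "X"
    values = [minimax(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(values) if player == "X" else min(values)
```

With perfect play Tic-Tac-Toe is a draw, so `minimax(" " * 9, "X")` evaluates to 0; picking the move whose child has the best value turns this evaluator into a computer player.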
_softwareengineering.292028 | I've spent six full days so far working on a spec for a web-app component. Apart from personally wanting some task that doesn't involve Word, I'm wondering if there is a point at which I know that the spec I'm working on is finished (but isn't a spec a constant work-in-progress?). I feel that the spec still doesn't explain a good solution for all the requirements I have, so I'm still on it. Is there any good heuristic or red flag that says I should stop working on a spec? Note: As opposed to this question and its answers, I am looking for a completion metric specific to the design, rather than how to decide when code quality is good enough to know that the implementation is complete. | How much effort should I put into the functional specification? | specifications | What you are looking for is traceability. Whether you use old-school waterfall or more modern iterative approaches, a unit of functionality necessarily follows a simple process: 1. Define business requirements. 2. Define functional requirements. 3. Define technical requirements. 4. Design the software. <-- You are here 5. Implement the software. 6. Test the software. At each step of the process, you should be able to trace your requirements back and forth between steps. Each functional requirement (the system shall have function X) must be traceable to a business requirement (we need a system to do X). Note that a business requirement is normally high level and spawns many functional requirements. Each design element traces back to a functional requirement. E.g. the form elements on this screen all support functional requirement X. All of the data requirements in this functional or technical requirement are satisfied by this screen or interface. When you have 100% coverage throughout the whole process, you know your design is functionally complete. But wait! Can the design be implemented reasonably? The key here is collaboration. This is where design reviews come into play. 
Get the key players involved: customer, project management, developers, and QA. Can the design be implemented? Can it be tested? Does it really satisfy the requirements? Once the team comes to a consensus, you are likely done with the design. |
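The coverage check described in the answer above can even be mechanized. A hypothetical sketch (the requirement IDs and the dictionary encoding are my own illustration, not from the answer): model each layer as a child-to-parent mapping and flag anything that does not trace back.

```python
def untraced(children, parents):
    """Return the child items whose parent link is missing, i.e.
    requirements or design elements that trace back to nothing."""
    return {child for child, parent in children.items() if parent not in parents}

# Illustrative layers: business reqs, functional reqs, design elements.
business = {"B1": "we need a system to do X"}
functional = {"F1": "B1", "F2": "B1"}                    # functional -> business
design = {"login screen": "F1", "report screen": "F9"}   # "F9" is an orphan link
```

An empty result at every layer is the "100% coverage" condition the answer describes; any non-empty result names the exact elements whose traceability is broken.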
_unix.129300 | I have two Python instances on a CentOS machine, i.e. /usr/bin/python2.4 and /usr/bin/python2.7. Modules for 2.4 are in /usr/lib/python2.4, and modules for 2.7 are in /usr/local/python27. When I do yum install numpy, which I want to install for python2.7, it automatically installs for python2.4. How can I specify which instance to install modules for with yum, easy_install and pip? | install python module for particular python instance | centos;yum;python | null |
_unix.86255 | I'm using Debian testing. I want to configure wifi on my netbook, but failed to do so, and when the system boots I get the following message: INFO: task wpa_supplicant:1634 blocked for more than 120 seconds. echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message. I have come across info that this is a new feature that reports when a service is failing. The service in my case is /etc/init.d/networking. The problem is this: the booting process hangs, prints the message above every 120 seconds, and that is it. I can't use my netbook any more. Is there any way to boot without this service? PS. What I'm doing right now is booting from a rescue USB disk to fix the issue, but I wonder if there are any boot options I could use in circumstances like this? | boot without failing services | debian;boot;services | I think you have basically the following options: (1) disable the service from starting up: $ sudo update-rc.d networking disable (2) disable configuration on boot (by editing /etc/default/networking): # Set to 'no' to skip interfaces configuration on boot #CONFIGURE_INTERFACES=yes (3) boot to a runlevel without it and then, after fixing, move to the desired level. Debian networking is set up in the S runlevel, so this doesn't help that much unless you move the service to a different runlevel. That can be done using update-rc.d. Then while booting you just have to pass a boot parameter to the kernel saying what runlevel to enter (or update /etc/inittab, modifying the default): kernel /boot/vmlinuz-2.6.30 root=/dev/sda2 ro 3 You might find the following resource useful: https://wiki.debian.org/RunLevel. There is also a tool named rcconf for manipulating runlevels and enabling/disabling services. To me option 2 seems like the easiest until you fix your issue. |
_unix.163883 | It used to be root@localhost. The system is CentOS release 6.4 in a VMware virtual machine. Yesterday I did some tests of the user and group commands, and I noticed that the tty1 login prompt changed to bogon login. The pts prompt changed after that. My question is: how do I change it back? Since the prompt change was not due to any deliberate modification of the variable PS1 on my part, I suspect that there must be some reason for this. I want to dig it out and thus prevent the name from automatically changing to bogon again. Also, I want to know what the word bogon stands for. As I indicated in the comment, this is not the first time this has happened to my Linux virtual machine. Yes, the virtual machine is connected to a DHCP wifi router. | Why my bash prompt changed to root@bogon | prompt | null |
_codereview.3222 | For every word there are 2^n different ways of writing the word if you take into account upper/lower case letters. E.g. for word we can write: word, Word, wOrd, WOrd, woRd, WoRd, etc. I've written this code to calculate all the combinations. Is there any way I can improve the performance? Profiling tells me that this method takes 99.9% of the execution time of my program (which measures password strength). String word = "word"; int combinations = 1 << word.length(); for (int i=0; i<combinations; i++) { StringBuilder buf = new StringBuilder(word); for (int j=0; j<word.length(); j++) { if ((i & 1<<j) != 0) { String s = word.substring(j, j+1).toUpperCase(); buf.replace(j, j+1, s); } } System.out.println(buf); } | Finding all upper/lower case combinations of a word | java;optimization | null |
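The bit-mask loop in the question enumerates the same set that a cartesian product over per-letter case choices does; here is a sketch of that idea (my own illustration in Python, not an answer from the thread):

```python
from itertools import product

def casings(word):
    """All 2**n upper/lower spellings of `word` (with possible duplicates
    when a character has no distinct upper-case form, e.g. digits)."""
    choices = ((c.lower(), c.upper()) for c in word)
    return ["".join(combo) for combo in product(*choices)]
```

For password-strength estimation it may be enough to count 2**len(word) candidates rather than materialize all the strings, which sidesteps the performance question entirely.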
_codereview.4654 | I have a lot of repeatable blocks of code and want to optimize/simplify them: function animationInit() { content = $('#content') .bind('show',function(event, f) { $(this).animate({right:0}, lag + 200, function() { if (f) f(); }); }) .bind('hide',function(event, f) { $(this).animate({right:-700}, lag + 200, function() { if (f) f(); }); }); hotelnav = $('#hotel') .bind('show',function(event, f) { hotelnav.removeClass('small'); $(this).animate({left:0}, lag + 200, function() { if (f) f(); }); }) .bind('hide',function(event, f) { $(this).animate({left:isGallery()?-300:-300}, lag + 200, function() { hotelnav.addClass(isGallery()?'small':''); if (f) f(); }); }); bottompanel = $('#bottompanel') .bind('show',function(event, f) { $(this).animate({bottom:40}, lag + 200, function() { if (f) f(); }); }) .bind('hide',function(event, f) { $(this).animate({bottom:-120}, lag + 200, function() { if (f) f(); }); }); booknow = $('#booknow') .bind('show',function(event, f) { $(this).fadeIn(lag + 200, function() { if (f) f(); }); }) .bind('hide',function(event, f) { $(this).fadeOut(lag + 200, function() { if (f) f(); }); }); };How i can optimize repeatable parts of code with callbacks?Im trying to create separate function like this:function cb(callback) { if (callback) callback();};... but just have a lot of asynchronous callbacks... | jQuery callbacks optimization | javascript;jquery;callback | I don't understand the need for function() { if (f) f();}put f instead of that whole thing. If it is undefined it won't get called.e.g.bind('show',function(event, f) { $(this).animate({right:0}, lag + 200, f);});Another thought came to me. The functions you use could be factored and used like:function animateRight(event, f, rightValue, delay){ $(this).animate({right: rightValue}, lag + delay, f);}(or if you can't pass that information in:function animateRight(event, f, rightValue, delay){ return new function() { $(this).animate({right: rightValue}, lag + delay, f); }} |
_softwareengineering.37191 | I think that most people would agree that ASP.NET MVC is one of the better technologies Microsoft has given us. It gives full control over the rendered HTML, provides separation of concerns, and suits the stateless nature of the web. The last version of the framework gave us new features and tools, and that's great, but... what solutions should Microsoft include in new versions of the framework? What are the biggest gaps in comparison with other web frameworks like PHP or Ruby? What could improve developer productivity? What's missing in ASP.NET MVC? Why is this missing feature important? How do you do without it now? | What's missing in ASP.NET MVC? | web development;php;.net;ruby on rails;asp.net mvc | null |
_unix.378659 | I downloaded the firmware and copied it to /lib/firmware and still I keep getting errors.Contents of /lib/firmware :-rw-r--r-- 1 root root 337520 Jun 16 2014 iwlwifi-1000-5.ucode-rw-r--r-- 1 root root 337572 Jun 16 2014 iwlwifi-100-5.ucode-rw-r--r-- 1 root root 689680 Jun 16 2014 iwlwifi-105-6.ucode-rw-r--r-- 1 root root 701228 Jun 16 2014 iwlwifi-135-6.ucode-rw-r--r-- 1 root root 695876 Jun 16 2014 iwlwifi-2000-6.ucode-rw-r--r-- 1 root root 707392 Jun 16 2014 iwlwifi-2030-6.ucode-rw-r--r-- 1 root root 670484 Jun 16 2014 iwlwifi-3160-7.ucode-rw-r--r-- 1 root root 667284 Jun 16 2014 iwlwifi-3160-8.ucode-rw-r--r-- 1 root root 666792 Jun 16 2014 iwlwifi-3160-9.ucode-rw-r--r-- 1 root root 1180356 Jul 15 12:13 iwlwifi-3165-15.ucode-rw-r--r-- 1 root root 150100 Jun 16 2014 iwlwifi-3945-2.ucode-rw-r--r-- 1 root root 187972 Jun 16 2014 iwlwifi-4965-2.ucode-rw-r--r-- 1 root root 353240 Jun 16 2014 iwlwifi-5000-2.ucode-rw-r--r-- 1 root root 340696 Jun 16 2014 iwlwifi-5000-5.ucode-rw-r--r-- 1 root root 337400 Jun 16 2014 iwlwifi-5150-2.ucode-rw-r--r-- 1 root root 454608 Jun 16 2014 iwlwifi-6000-4.ucode-rw-r--r-- 1 root root 444128 Jun 16 2014 iwlwifi-6000g2a-5.ucode-rw-r--r-- 1 root root 677296 Jun 16 2014 iwlwifi-6000g2a-6.ucode-rw-r--r-- 1 root root 679436 Jun 16 2014 iwlwifi-6000g2b-6.ucode-rw-r--r-- 1 root root 463692 Jun 16 2014 iwlwifi-6050-4.ucode-rw-r--r-- 1 root root 469780 Jun 16 2014 iwlwifi-6050-5.ucode-rw-r--r-- 1 root root 683236 Jun 16 2014 iwlwifi-7260-7.ucode-rw-r--r-- 1 root root 679780 Jun 16 2014 iwlwifi-7260-8.ucode-rw-r--r-- 1 root root 679380 Jun 16 2014 iwlwifi-7260-9.ucode-rw-rw-r-- 1 root root 885224 Jun 18 2015 iwlwifi-7265-13.ucode-rw-r--r-- 1 root root 1180224 Jul 15 15:46 iwlwifi-7265-14.ucode-rw-r--r-- 1 root root 1180356 Jul 15 12:13 iwlwifi-7265-15.ucode-rw-r--r-- 1 root root 690452 Jun 16 2014 iwlwifi-7265-8.ucode-rw-r--r-- 1 root root 691960 Jun 16 2014 iwlwifi-7265-9.ucode-rw-rw-r-- 1 root root 1008692 Jun 18 2015 
iwlwifi-7265D-13.ucode-rw-r--r-- 1 root root 1384256 Jul 15 15:46 iwlwifi-7265D-14.ucodeDmesg output:[ 8.549314] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-26.ucode (-2)[ 8.549420] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-26.ucode failed with error -2[ 8.549447] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-25.ucode (-2)[ 8.549546] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-25.ucode failed with error -2[ 8.549569] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-24.ucode (-2)[ 8.549667] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-24.ucode failed with error -2[ 8.549689] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-23.ucode (-2)[ 8.549786] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-23.ucode failed with error -2[ 8.549807] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-22.ucode (-2)[ 8.549905] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-22.ucode failed with error -2[ 8.549925] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-21.ucode (-2)[ 8.550093] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-21.ucode failed with error -2[ 8.550114] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-20.ucode (-2)[ 8.550280] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-20.ucode failed with error -2[ 8.550301] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-19.ucode (-2)[ 8.550467] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-19.ucode failed with error -2[ 8.550488] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-18.ucode (-2)[ 8.550654] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-18.ucode failed with error -2[ 8.550674] iwlwifi 0000:03:00.0: firmware: failed to load iwlwifi-7265D-17.ucode (-2)[ 8.550839] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-7265D-17.ucode failed with error -2 | Loading wifi 
drivers on a Lenovo laptop, Debian 8 | debian;networking;network interface;iwlwifi | Fixed by installing the firmware package: root@server:~# apt-get install -t jessie-backports firmware-iwlwifi |
_webapps.70329 | I've noticed that recently when I enter a letter into the search bar, the first person that comes up isn't a friend of mine, but the rest are. Does anyone know why this person comes up first? | Result starts with someone who is not a friend in Facebook search | facebook;search | null |
_softwareengineering.100956 | I am thinking of developing customized software for desktops in Visual FoxPro 9 and want to know what type of licensing is required. As a developer, would I need to have a Visual FoxPro 9 license, and would my users need to have the End User License? What type of licenses would be needed for commercial release? How would the licensing change if I released this as freeware? | What kind of licensing is required for Visual FoxPro? | licensing;eula | null |
_codereview.121206 | I needed to create a mortgage calculator for an intro to CS class. As part of the assignment, an interest rate of 6% needs to change to 7% after 3 years, hence the if statement. The char dummy line at the end is a requirement from my professor. I'm mainly looking for ways to clean it up, or anything I missed. I'm required to use namespace std, and I know a lot of you don't care for it. #include <iostream> using namespace std; int main(){ double monthlyPayment; double balance; double interestRate; double interestPaid; double initialBalance; double termOfLoan; double month = 1; cout.setf(ios::fixed); cout.setf(ios::showpoint); cout.precision(2); cout << "Enter the current balance of your loan: $"; cin >> balance; cout << "Enter the initial yearly interest rate : "; cin >> interestRate; cout << "Enter the desired monthly payment : $"; cin >> monthlyPayment; initialBalance = balance; while (interestRate >= 1) /*Converts the interest rate to a decimal if the user inputs in percentage form*/ { interestRate = interestRate / 100; } if(month >= 36); { if(interestRate=.06) { interestRate=.07; } } balance = balance * (1 + interestRate / 12) - monthlyPayment; cout << "After month 1 your balance is $" << balance << endl; while (balance > 0) { if (balance < monthlyPayment) { balance = balance - balance; } else { balance = balance * (1 + interestRate / 12) - monthlyPayment; } month = month++; month = month + 1; month += 1; cout << "After month " << month << ", your balance is : $" << balance << endl; } cout << "You have paid off the loan at this point. Congratulations!" << endl; termOfLoan = month; interestPaid = (monthlyPayment * termOfLoan) - initialBalance; /*I believe the formula above would work if only there was a way to calculate how many months it took to pay off the loan, but since it varies, I don't know how to calculate termOfLoan.*/ cout << "You paid a total ammount of $" << interestPaid << " in intrest." << endl; cout << "Total number of months = " << month << "."
<< endl; char dummy; cout << "Enter any key to quit." << endl; cin >> dummy; } | Mortgage calculator for homework | c++;homework;finance | You use double values for calculations. Especially for money, I would suggest switching to int and handling it in cents. That way you can avoid problems with floating-point precision. You don't check whether cin >> fails or succeeds. If the user doesn't enter a number, you may have a problem here. A possible solution might be to let the user re-enter the value as long as it isn't correct: do { cout << "Enter the current balance of your loan: $"; } while (!(cin >> balance)); You have a wrong check in the line if (interestRate = .06). You probably meant if (interestRate == .06). Additionally, this check might also be a problem, as you do some calculations on interestRate that may cause imprecision (again the floating-point precision problem). As you obviously expect the user to input the value as an int, your check should be against 6, before you divide the value by 100. You have this piece of code: if (balance < monthlyPayment) { balance = balance - balance; } which is the same as: if (balance < monthlyPayment) { balance = 0; } I think the second version is easier to understand. I didn't check any more of the code yet. I especially didn't prove the correctness of your calculations. |
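The reviewer's first suggestion (integer cents instead of double) can be sketched as follows; this is an illustrative Python translation, not code from the thread, and the truncate-interest-to-whole-cents rule is my own simplification:

```python
def months_to_payoff(balance_cents, yearly_rate, payment_cents):
    """Count the months until the loan is repaid, doing the arithmetic
    in integer cents so repeated updates cannot accumulate float drift.
    Interest is truncated to whole cents each month."""
    months = 0
    while balance_cents > 0:
        interest = int(balance_cents * yearly_rate / 12)
        if payment_cents <= interest:
            raise ValueError("payment never amortizes the loan")
        balance_cents += interest
        balance_cents -= min(payment_cents, balance_cents)  # final payment may be smaller
        months += 1
    return months
```

The guard raises when the payment cannot even cover the monthly interest, which also avoids the endless loop the original C++ would enter for a too-small payment.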
_unix.180871 | I've got an Arch Linux system, and every time I start it, it opens applications that I don't want opened: it starts Ark, Mozilla Firefox and the Downloads folder. Why does this happen? How do I fix it? | Applications that start with the system | arch linux;kde | null |
_cstheory.9787 | Normal Order Reduction (NOR): reduce the leftmost, outermost redex. Normal Order Evaluation (NOE): reduce the leftmost, outermost redex, but not within the body of abstractions. So $\lambda w.\,(\lambda x.x)\, z$ is in normal form under NOE, but not under NOR. Does using NOE instead of NOR weaken the Normalization property? The Normalization property states that if there is a normal form under beta-reduction, then NOR will find it. Edit: Sorry, it's the Curry/Feys Theorem that says that NOR always finds a normal form if it exists. | Does using Normal Order Evaluation instead of Normal Order Reduction lose the Normalization theorem? | lo.logic;pl.programming languages;lambda calculus | The NOE reduction strategy (as you define it) won't find the $\beta$-normal form of a term. For example, the normal form of $\lambda x.((\lambda y.y)x)$ is $\lambda x.x$, but NOE won't find it, because it won't reduce the inner redex $(\lambda y.y)x$, which is under the outer $\lambda$-abstraction. So it's not a normalizing reduction strategy. However, in a typed functional language with data types like Int, if you know that some expression is of such a type, you know that it cannot be a function, so it's enough to restrict reductions to the so-called Weak Head Normal Form (which is close to NOE), and it simplifies many things. See also: http://www.haskell.org/haskellwiki/Weak_head_normal_form https://en.wikibooks.org/wiki/Haskell/Graph_reduction#Weak_Head_Normal_Form https://stackoverflow.com/questions/6872898/haskell-what-is-weak-head-normal-form |
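To make the NOR/NOE distinction concrete, here is a small executable sketch (my own Python encoding, not from the thread): terms are tuples, and the only difference between the two strategies is whether the stepper descends under a λ. The substitution is naive and assumes no variable capture, which holds for the examples used here.

```python
def subst(term, name, value):
    """Naive substitution [value/name]term (assumes no variable capture)."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        return term if term[1] == name else ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step(term, under_lambdas):
    """One leftmost-outermost beta step, or None if no redex is reachable.
    under_lambdas=True gives NOR; False gives NOE, which never enters
    the body of a 'lam'."""
    kind = term[0]
    if kind == 'app':
        fun, arg = term[1], term[2]
        if fun[0] == 'lam':                      # the application is itself a redex
            return subst(fun[2], fun[1], arg)
        reduced = step(fun, under_lambdas)
        if reduced is not None:
            return ('app', reduced, arg)
        reduced = step(arg, under_lambdas)
        return None if reduced is None else ('app', fun, reduced)
    if kind == 'lam' and under_lambdas:
        reduced = step(term[2], under_lambdas)
        return None if reduced is None else ('lam', term[1], reduced)
    return None
```

On the question's example $\lambda w.\,(\lambda x.x)\,z$, the NOE stepper reports no redex while the NOR stepper reduces the body, matching the claim that NOE can stop short of the $\beta$-normal form.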
_unix.234448 | We have two completely different computers with Mint 17.2: one is a Xeon server with ECC memory and the other is an i7 laptop with normal memory. Both have 8GB of RAM with default swap. Both are used to play a CPU-demanding Flash Player game (50% of CPU) in the latest Chrome (not Chromium) with the latest integrated Flash Player. Both have the recommended kernel installed: :~ > uname -r 3.16.0-38-generic The problem is, they both freeze almost every day, with no ability to switch to the console using Ctrl+Alt+F1, so I don't even know what happened. Please help. | Inability to switch to the console using Ctrl+Alt+F1 when Mint freezes | linux;linux mint;freeze | Since you have a couple of computers, you can access one from the other (if the system and network are still running) using ssh, for instance; that way you may be able to check what happened and kill the process(es) you need. |
_unix.58251 | After following several tutorials on setting up postfix at a basic level on CentOS for my VPS, I continue to get the following: -bash-4.1# postfix start /usr/libexec/postfix/postfix-script: line 317: cmp: command not found postfix/postfix-script: warning: /usr/lib/sendmail and /usr/sbin/sendmail differ postfix/postfix-script: warning: Replace one by a symbolic link to the other postfix/postfix-script: starting the Postfix mail system In the main.cf below are my edits: myhostname = myservername.sub.mysite.net mydomain = sub.mysite.net myorigin = $myhostname myorigin = $mydomain inet_interfaces = all inet_interfaces = $myhostname inet_interfaces = localhost mydestination = $myhostname, localhost.$mydomain, localhost mynetworks = 192.168.0.0/16, 127.0.0.0/8 I also removed sendmail but haven't yet set up the db; I'm just trying to get postfix to start without errors. Is this the problem? Why are two files in red? | how to solve postfix errors? | centos;postfix | You're probably lacking the diffutils package, which provides the cmp binary that postfix needs for its sanity checks. sudo yum install diffutils should help you on. |
_unix.166050 | What I am trying to do is route all traffic that comes in on a given interface to a specific, per-interface VPN connection, where all outbound traffic actually leaves on WAN0. I frankly don't have a notion of where to start. I was thinking of a VLAN per interface, where there is only one route out of that VLAN, and that's over the VPN connection. But then I tried to think how I could manage 4 separate VPN connections and I got lost. So: eth0 -> VPN1, eth1 -> VPN2, eth2 -> VPN3, eth3 -> VPN4, and VPN{1,2,3,4} -> WAN0. | how to bind each interface to a separate VPN config | linux;routing;vpn | null |
_unix.181784 | Moving multiple folders from one Subversion repository to another Subversion repository. I have a CentOS 6.4 server with Subversion 1.4.2 installed. I have two Subversion repositories on my server. I've been using 'repoOLD' for the past 4 months, and now I've created another repo with the name 'repoNEW'. 'repoOLD' contains 100 folders (projects). 'repoNEW' is newly created, and I need to copy a few projects from 'repoOLD' to 'repoNEW'. Now the problem is: how can I transfer multiple folders (projects) from my 'repoOLD' to 'repoNEW'? I've tried googling, but I was unable to find tutorials for moving multiple folders from one Subversion repository to another. | Move multiple folders from one subversion repository to another subversion repository | repository;subversion | null |
_unix.286301 | I'm writing a program in Python that needs to edit some files in /etc: some belong to the system, some are its own. How do I get those permissions from within the program itself without running sudo, as the program will be non-interactive? I'm not sure yet where I'm going to autostart my program, but I will likely use monit or something similar for that purpose. | How do non-interactive programs get the permission to edit files in /etc | linux;python;monit | null |
_webmaster.53711 | Actually I do not know where I should ask this question, so I'm asking here. If a website has a ranking of 1000 in Alexa, how many people click or hit the server? I need to establish a server, and we are aiming to get such traffic. Can anybody share some suggestions? | Need to get Alexa ranking meaning | server | null |
_hardwarecs.291 | I am on a headset for work several hours each day. I also listen to music extensively at work.I currently listen to Sennheiser HD 380 headphones. My headset is pretty low quality comparatively.What I would like is a good quality headset which has comparable sound quality and comfort.My criteria:Sound quality comparable to my current Sennheiser headphonesPreferably over-ear entirely (for ambient noise reduction)USB connection (acceptable to have 3.5mm audio input, too) | USB headset with good headphone quality and comfort? | usb;headset;audio quality | null |
_unix.226401 | I used:gsettings set org.gnome.desktop.wm.keybindings switch-applications []gsettings set org.gnome.desktop.wm.keybindings close []to disable Alt+Tab and to disable Alt+F4 respectively.In Ubuntu 14.04: Alt+F4 worked, Alt+Tab failed (meaning the gsettings succeeded but the short-cut is still active)In Debian 8: Both Alt+F4 and Alt+Tab failedHow can I solve this problem? (Or is there another way to disable short-cuts through the command line?) | Failed to disable shortcut on ubuntu14.04 and debian8 | debian;ubuntu;gnome;keyboard shortcuts | You used the same key binding to try to do different things: the correct ones are:gsettings set org.gnome.desktop.wm.keybindings switch-applications []gsettings set org.gnome.desktop.wm.keybindings close []If you did that already, try setting then to something ridiculously complex like ['Above_Tab'] that you will never type by accident, as your goal is to not type it accidentally... |
_unix.275474 | Situation: When I turn on my Linux Mint 17.3 / 18 Cinnamon, NumLock is off in the login window. Objective: Turn on NumLock automatically at startup. | Turn on NumLock on startup in Linux Mint | linux mint;login;keyboard;numlock | First, you need to install a program needed for this purpose - numlockx (see its man page): sudo apt-get install numlockx. Then, choose if you wish to achieve the goal through the CLI or GUI below. GUI (probably most convenient under normal operation): Once numlockx is installed, the menu item Login Window -> Options -> Enable NumLock becomes available (screenshot omitted). As pointed out in the other answer, this will add the following line to /etc/mdm/mdm.conf: EnableNumLock=true. CLI (suitable if you are setting other computers up through SSH, for instance): Open this file in a text editor you are skilled in, e.g. nano if unsure: sudo nano /etc/mdm/Init/Default, and add these lines at the beginning of the file: if [ -x /usr/bin/numlockx ]; then /usr/bin/numlockx on; fi. As pointed out by Gilles, don't put exec in front of the command. |
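The guard added to /etc/mdm/Init/Default simply skips the call when the binary is absent. A standalone sketch of that [ -x ... ] pattern (the paths below are illustrative, not part of the original answer):

```shell
# Run a program only if it exists and is executable, mirroring the
# guard around /usr/bin/numlockx in the Init/Default snippet.
run_if_executable() {
    if [ -x "$1" ]; then
        echo "would run: $1"
    else
        echo "skipping missing: $1"
    fi
}

run_if_executable /bin/sh            # present on any POSIX system
run_if_executable /no/such/numlockx  # deliberately nonexistent
```

The same guard style is common in login-manager init scripts precisely so they keep working when an optional tool is uninstalled.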
_softwareengineering.302421 | I fooled around with for-loops, remembered the 'with' keyword from Delphi, and came up with the following pattern (defined as a live template in IntelliJ IDEA): for ($TYPE$ $VAR$ = $VALUE$; $VAR$ != null; $VAR$ = null) { $END$ }. Should I use it in productive code? I think it might be handy for creating temporary shortcut variables, like the one-character variables in lambdas and counting for loops. Plus, it checks if the variable you are going to use in the block is null first. Consider the following case in a swing application: // init `+1` button for (JButton b = new JButton("+1"); b != null; add(b), b = null) { b.setForeground(Color.WHITE); b.setBackground(Color.BLACK); b.setBorder(BorderFactory.createRaisedBevelBorder()); // ... b.addActionListener(e -> { switch (JOptionPane.showConfirmDialog(null, "Are you sure voting +1?")) { case JOptionPane.OK_OPTION: // ... break; default: // ... break; } }); } | Is using for loop syntax for a with(variable) block an anti-pattern? | java;anti patterns;loops | It's not an anti-pattern, because that would mean it is a commonly used technique that's problematic somehow. This code fails to meet the "commonly used" criterion. However, it is problematic. Here are some of its problems: Misleading: it uses a loop structure, but never executes more than once. Hiding a check: the != null check is easy to miss where it is placed. Hidden code is harder to understand. It's also not clear whether the condition is actually necessary or just there to terminate the loop after the first iteration. (Your statements about the check indicate that you have situations where it's necessary, whereas in your example it's not, as new never returns null.) Hiding an action: the add(b) statement is hidden even better. It's not even sequentially at the position where you'd expect it. Unnecessary: You can just declare a local variable.
If you don't want its name to be visible, you can use a block statement to limit its scope, although that would be an anti-pattern (or at least a code smell), indicating that you should extract a function. |
_unix.278865 | I have a FILE consisting of lines like the following: URL=http://someURL]somefilename. I need to download the URL link if somefilename isn't already there. I wanted to use a shell command like: for i in $(cat FILE); do if [ ! -f somefilename ] somecode; done, but I don't know what to use for somecode. Any ideas? Edit: To address terdon's questions: yes, there is only one ] per line, the one after someURL, and yes, the filename is the whole string after ] to the end of the line. | Extracting several strings in a regexp | shell | Here's a pure shell approach: while IFS='=]' read a url file; do [ -f "$file" ] || echo wget "$url]$file"; done < file. This will iterate over the file, splitting each line on either = or ] and reading each resulting field into the variables a (the string URL), $url (the url until the file name) and $file (the file name). Then, if the $file doesn't exist in the current directory (so [ -f "$file" ] returns false), it will download it. |
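A runnable version of that loop on made-up sample lines, with the wget call replaced by an echo so the field splitting is visible:

```shell
# IFS='=]' makes read split each line at '=' and ']' into three fields:
# the literal "URL", the url part, and the trailing filename.
parse_lines() {
    while IFS='=]' read -r tag url file; do
        echo "file=$file url=$url]$file"
    done
}

printf 'URL=http://example.com/a]file1\nURL=http://example.com/b]file2\n' | parse_lines
```

Following the answer, the download target is rebuilt as $url]$file; in the real loop you would call wget on it only when [ -f "$file" ] fails.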
_unix.106047 | I have a csv file formatted as below: "col1","col2","col3","col4" / 1,"text1","<p>big html text</p>","4th column" / 2,"text2","<p>big2 html2 text2</p>","4th column2". I want to extract the 4th column. I think that awk is the best tool for this (let me know if I am wrong). I tried awk -F, '{print $4}' myFile.csv but it fails, I think because the 3rd column is a multiline one. How can I use awk or any other unix command to extract the 4th column? I am looking for an efficient solution since my real file is big (> 2GB). | extract the 4th column from a csv file using unix command | text processing;awk;perl;csv | UPDATE: Actually, a much easier way is to set the record separator in gawk: $ gawk 'BEGIN{RS="\"\n"; FS=","}{print $4}' myFile.csv, which outputs: "col4 / "4th column / "4th column2. However, this will remove the trailing " from the end of each column. To fix that you can print it yourself: $ gawk 'BEGIN{RS="\"\n"; FS=","}{print $4"\""}' myFile.csv, which outputs: "col4" / "4th column" / "4th column2". If you don't want the quotes at all, you can set the field separator to ",": $ gawk 'BEGIN{RS="\"\n"; FS="\",\""}{print $3}' myFile.csv, which outputs: col3 / 4th column / 4th column2. One way of doing this is to first modify the file and then parse it. In your example, the newline that actually separates two records is always following a ". If that is the case for the entire file, you can replace all newlines that are not immediately after a " with a placeholder and so have everything in a single line. You can then parse normally with gawk and finally replace the placeholder with the newline again. I will use the string &%& as a placeholder since it is unlikely to exist in your file: $ perl -pe 's/"\s*\n/"&%&/; s/\n//g; s/&%&/\n/;' myFile.csv | awk -F, '{print $4}', which outputs: "col4" / "4th column" / "4th column2". The -p flag for perl means print each line of the input file after applying the script given by -e.
Then there are 3 substitution (s/foo/bar/) commands: s/"\s*\n/"&%&/ : this will find any " which is followed by 0 or more whitespace characters (\s*) and then a newline character (\n), and replace that with "&%&. The quotes are added to preserve the format, and the &%& is just a random placeholder; it could be anything that does not appear in your file. s/\n//g : since the real newlines have been replaced with the placeholder, we can now safely remove all remaining newlines in this record. This means that all lines of the current record have now been concatenated into the current line. s/&%&/\n/ : this turns the placeholder back into a normal new line. To understand the output of the command, run it without gawk: $ perl -pe 's/"\s*\n/"&%&/; s/\n//g; s/&%&/\n/;' myFile.csv, which outputs: "col1","col2","col3","col4" / 1,"text1","<p>big html text</p>","4th column" / 2,"text2","<p>big2 html2 text2</p>","4th column2". So, you now have your long records on single lines and this is perfect food for gawk. You can also do it directly in Perl: perl -ne '$/="\"\n"; chomp; @a=split(/,/); print "$a[3]\n"' myFile.csv, which outputs: "col4 / "4th column / "4th column2. This is using a bit more Perl magic. The $/ special variable is the input record separator. By setting it to "\n we tell Perl to split lines not at \n but only at "\n, so that each record will be treated as a single line. Once that is done, chomp removes the record separator from the end of the line (for printing later) and split splits each record (on ,) and saves it in the array @a. Finally, we print the 4th element of the array (arrays are numbered from 0, so that is $a[3]), which is the 4th column. And even more magic: turn on auto splitting (-a) and split on commas (-F,). This will split each record into the special @F array and you can print the 4th element of the array: $ perl -F, -ane '$/="\"\n"; chomp; print $F[3]' myFile.csv, which outputs: "col4 / "4th column / "4th column2. |
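As a side note not in the original answer: when the multi-line fields are properly double-quoted, as they are in this kind of CSV, Python's csv module parses them directly, with no record-separator tricks:

```python
import csv
import io

# Same shape as the sample file: the 3rd field contains embedded newlines.
data = '''"col1","col2","col3","col4"
1,"text1","<p>big
html
text</p>","4th column"
2,"text2","<p>big2
html2
text2</p>","4th column2"
'''

rows = list(csv.reader(io.StringIO(data)))
fourth = [row[3] for row in rows]
print(fourth)  # ['col4', '4th column', '4th column2']
```

For a multi-gigabyte file, iterating over csv.reader(open(...)) row by row keeps memory use constant, so this scales to the 2 GB case mentioned in the question.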
_softwareengineering.327828 | We have a large legacy Java-based project, the availability of certain features throughout the application is determined by its corresponding value in a feature_enabled table in a SQL database. Much like Windows enables features based on Registry values. Lately, we have been working to significantly reduce the number of SQL queries made in a specific part of the app. So we wrote a caching class that calls the query once and caches the results. Unfortunately, we can't rewrite the entire functions as other parts of the app are dependent on it. So, when our team calls the function (which executes the SQL query), we supply the entire class with the cached table property. The function then simply checks if the property has been set. If so, then uses the data from there, otherwise runs a fresh query.Problem is we would like to add this caching technique as a feature which can be enabled like others. Querying the DB every time the cached db is called to see if the feature is enabled would undo whatever query saving we are doing. So my question is what is a good pattern of design to accomplish this? | How should I query the DB for a specific key without querying every time the feature is called? | sql;query | null |
_webapps.95166 | I log into Facebook and it is in Spanish or some language I don't recognize. How do I correct this? | Language suddenly changed to something besides English | facebook | null |
_codereview.21334 | Im currently working on a system that will communicate with other systems via webservice (or some sort of communication). I have a system that stores all user data already and don't want to duplicate data in this new system so I have come up with a way of accessing the data when needed. In my current system I am planning to just store the user ID from the user system and fetch the data when required. My question is, is the following code considered acceptable/understood or would you suggest an alternative way of achieving this?public class Person{ private string id; [Transient] private string name; [Transient] private bool isPopulated; public Person(string id){ this.id = id; } public string Id{get;set;} public string Name{ get{ init(); return this.name; } set{ this.name = value; } } private void init(){ if(!isPopulated){ TempPerson tempPerson = UserService.getPerson(this.id); this.name = tempPerson.Name; this.isPopulated = true; } }}Is there a better way to do this and are there any problem with this way? | Populating a class whose data is stored in an external application | c#;design patterns | What you have implemented is a sort of Active Record where the record itself knows how to communicate with the storage. What is bad about your design is that this kind of code will be extremely hard to unit test. Imagine that you need to write a unit test for a class that uses Person objects. How can you prevent it from calling UserService?The proper solution for your problem depends on use cases. Will you need to update your entities and propagate those changes back to server? Can you load several entities at once using methods other than UserService.getPerson(this.id);?What is the lifetime of the entities loaded?Generally the most flexible solution (as I see it) would be to implement a repository and unit-of-work pattern (similar to ISession in NHibernate or DbContext in Entity Framework). 
Basically it's better not to hide communication with 3rd-party but rather expose it in such a way that you have maximum control and flexibility.Primitive implementation may look like:public class Person{ public string Id { get; set; } public string Name { get; set; }}public interface ISessionFactory{ IUserServiceSession CreateSession();}public interface IUserServiceSession{ Person GetPerson(string id);}public class SessionFactory : ISessionFactory{ public IUserServiceSession CreateSession() { UserService userService = new UserService(); //better use dependency injection, or cache it once if it's thread-safe. return new UserServiceSession(userService); }}public class UserServiceSession : IUserServiceSession{ private readonly Dictionary<string, Person> _cache = new Dictionary<string, Person>(); private readonly UserService _userService; public UserServiceSession(UserService userService) { _userService = userService; } public Person GetPerson(string id) { Person result; if (!_cache.TryGetValue(id, out result)) result = _cache[id] = _userService.getPerson(id); return result; }} |
_unix.214956 | I want to have my super key start dmenu. I have set it as a keyboard shortcut in my rc.xml as follows: <keybind key="0xffeb"> <action name="Execute"> <command>dmenu_run</command> </action></keybind>. I tried specifying it in the key attribute as W, W-, and 0xffeb, but none of these worked. W responds to pressing the letter w, and the others appear to do nothing. I want the shortcut to trigger when the super key is pressed and released on its own. Is this possible? This is cross posted from super user as per the guidelines here. I've read this question: Super key as shortcut - Openbox, but I didn't see any useful information in it. | How to set a single modifier key as a shortcut in openbox? | keyboard shortcuts;desktop environment;openbox | I ended up using xcape, a utility designed to do exactly this: "xcape allows you to use a modifier key as another key when pressed and released on its own. Note that it is slightly slower than pressing the original key, because the pressed event does not occur until the key is released." Quoted from the xcape readme. Using xcape, you can assign the press and release of a modifier key to a different key or even a sequence of keys. For example, you can assign Super to a placeholder shortcut like Ctrl+Shift+Alt+Super+D with: xcape -e 'Super_L=Control_L|Shift_L|Alt_L|Super_L|D'. Now when you press and release Super without pressing any other keys, xcape will send keyboard events simulating presses of Ctrl+Shift+Alt+Super+D (holding all the modifier keys down as if you pressed them like a shortcut). If you press Super and another key (or hold Super too long, the default timeout is 500 ms), xcape will pass the keyboard events through as is, without firing extra keys. If you put the placeholder shortcut in rc.xml, it will run when Super and only Super is pressed.
<keybind key="C-A-S-W-d"> <action name="Execute"> <command>dmenu_run</command> </action></keybind>. Other shortcuts involving Super will not be affected. Note that you'll have to run xcape each time you boot, so you may want to put it somewhere like ~/.config/openbox/autostart where it will be run automatically. |
_unix.382878 | I have been monitoring this server for a while, it consistently shows 100% util in iostat, even though nothing much looks like being read/written not in iostat, iotop or dstat. Here is the output from iostat. What could be a possible reason for this? Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %utilxvda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.00 0.00 0.00 0.00 0.00 100.00OS : Ubuntu 14.04.3 LTS, Kernel 3.13Machine : AWS - t2.xlargeEdit :Output of top :top - 14:47:43 up 234 days, 41 min, 4 users, load average: 0.17, 0.47, 0.39Tasks: 143 total, 1 running, 142 sleeping, 0 stopped, 0 zombie%Cpu(s): 5.4 us, 0.3 sy, 0.0 ni, 94.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.2 stKiB Mem: 16433184 total, 14287468 used, 2145716 free, 268984 buffersKiB Swap: 0 total, 0 used, 0 free. 5704204 cached Mem | Iostat showing 100% utilization even though not much is being read/written | ubuntu;iostat | null |
_cs.29100 | Is ```sii``sii the smallest Unlambda program that doesn't halt? In other words, what is the smallest non-terminating combinator term in SKI augmented with $C$ (call/cc) and $D$ (delay)? Is it $SII(SII)$? | Smallest non-halting unlambda program | lambda calculus;halting problem;combinatory logic | Intuitively speaking, a non-terminating program needs either: a combinator such as $Y$ which, when applied, reduces to a larger expression containing itself; or two combinators such as $S$ which, when applied, replicate at least one of their arguments: one to do the initial replication and one to be replicated. Unlambda lacks a combinator of the first type, but has two combinators of the second type: $S$ and $C$. In the SKI-calculus, following the intuition above, a non-terminating term needs to somehow apply $S$ with the first argument being $S$. So it would have to be of the form $Swx(Syz)$ (i.e. ```swx``syz in Unlambda). This suggests that $SII(SII)$ is minimal. (Note that I've only given an intuition, I haven't proved it!) However Unlambda also includes c (call/cc), and this is a very powerful operator in terms of replicating its argument. A term of the form $Cfx$ applies its own continuation $\phi$ to the function $f$; if $f \phi$ itself arranges not to destroy its context, then the term won't terminate. For example, $CI(CI)$ is non-terminating (exercise: work it out). ``ci`ci is known as the Yin-Yang puzzle. To see it in action, make it print a trace of its execution: ``.@`ci`.*`ci. Because $S$ requires 3 arguments and $C$ requires 2, intuitively, there can't be a non-terminating 3-combinator term, so the 4-combinator term we found above is minimal. Here's a quick-and-dirty bash script that enumerates all possible Unlambda terms of up to 4 combinators (i.e. 3 application nodes) and prints out the ones that take more than 1 second to terminate.
I omitted the I/O primitives which reduce like i, as well as e (exit), which obviously wouldn't help to make a program non-terminating. This is an experimental way to list the non-terminating terms: the terms not printed here are guaranteed to be terminating (assuming a correct implementation), and the terms printed here are likely to be non-terminating. for a in s k i c d v; do for b in s k i c d v; do for c in s k i c d v; do for d in s k i c d v; do for p in @$a$b @@$a$b$c @$a@$b$c @@@$a$b$c$d @@$a@$b$c$d @@$a$b@$c$d @$a@@$b$c$d @$a@$b@$c$d; do p=${p//\@/\`}; timeout 1 unlambda <<<"$p" || echo "$p"; done; done; done; done; done. The result of the experiment is that only the following terms are potentially non-terminating: ```scc? and ``c?`c?, where each ? can be independently i, c or d. In other words, the non-terminating combinator terms are $SCCx$ and $Cx(Cy)$ (or so the experiment suggests, but it happens to be correct). Exercises: (1) Work out the reductions for these terms and check that they are indeed non-terminating. Do they loop or do they grow forever? (2) Prove that all smaller terms terminate. (I don't think there's anything more interesting than a long case enumeration.) (3) How does $SCCI$ relate with my intuition above concerning the minimum content of a non-terminating term: the $S$ replicates only $I$? (4) Prove or disprove that there is no smaller non-terminating SKI term than $SII(SII)$. Are there others of the same size? |
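The cycle behind $SII(SII)$ can also be checked mechanically. The following sketch (mine, not from the answer) implements one step of leftmost-innermost SKI reduction and confirms the term reduces back to itself, so it never terminates:

```python
# Terms: 'S', 'K', 'I' are atoms; a 2-tuple (f, x) is the application f x.
# step() performs one leftmost-innermost reduction and reports whether
# anything changed.
def step(t):
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    nf, changed = step(f)          # reduce inside the function part first
    if changed:
        return (nf, x), True
    nx, changed = step(x)          # then inside the argument
    if changed:
        return (f, nx), True
    if f == 'I':                   # I x -> x
        return x, True
    if isinstance(f, tuple):
        g, y = f
        if g == 'K':               # K y x -> y
            return y, True
        if isinstance(g, tuple) and g[0] == 'S':
            _s, z = g              # S z y x -> (z x) (y x)
            return ((z, x), (y, x)), True
    return t, False

SII = (('S', 'I'), 'I')
t = (SII, SII)
trace = [t]
for _ in range(3):
    t, _changed = step(t)
    trace.append(t)
print(trace[3] == trace[0])  # True: SII(SII) cycles back after 3 steps
```

Under this strategy the cycle is SII(SII) -> (I(SII))(I(SII)) -> SII(I(SII)) -> SII(SII); note that under normal-order (leftmost-outermost) reduction the term grows instead of cycling, but it is non-terminating either way.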
_computergraphics.1502 | When rendering 3D scenes with transformations applied to the objects, normals have to be transformed with the transposed inverse of the model view matrix. So, with a normal $n$, modelViewMatrix $M$, the transformed normal $n'$ is $$n' = (M^{-1})^{T} \cdot n $$When transforming the objects, it is clear that the normals need to be transformed accordingly. But why, mathematically, is this the corresponding transformation matrix? | Why is the transposed inverse of the model view matrix used to transform the normal vectors? | transformations;geometry | Here's a simple proof that the inverse transpose is required. Suppose we have a plane, defined by a plane equation $n \cdot x + d = 0$, where $n$ is the normal. Now I want to transform this plane by some matrix $M$. In other words, I want to find a new plane equation $n' \cdot Mx + d' = 0$ that is satisfied for exactly the same $x$ values that satisfy the previous plane equation.To do this, it suffices to set the two plane equations equal. (This gives up the ability to rescale the plane equations arbitrarily, but that's not important to the argument.) Then we can set $d' = d$ and subtract it out. What we have left is:$$n' \cdot Mx = n \cdot x$$I'll rewrite this with the dot products expressed in matrix notation (thinking of the vectors as 1-column matrices):$${n'}^T Mx = n^T x$$Now to satisfy this for all $x$, we must have:$${n'}^T M = n^T$$Now solving for $n'$ in terms of $n$,$$\begin{aligned}{n'}^T &= n^T M^{-1} \\n' &= (n^T M^{-1})^T\\n' &= (M^{-1})^T n\end{aligned}$$Presto! If points $x$ are transformed by a matrix $M$, then plane normals must transform by the inverse transpose of $M$ in order to preserve the plane equation.This is basically a property of the dot product. 
In order for the dot product to remain invariant when a transformation is applied, the two vectors being dotted have to transform in corresponding but different ways.Mathematically, this can be described by saying that the normal vector isn't an ordinary vector, but a thing called a covector (aka covariant vector, dual vector, or linear form). A covector is basically defined as a thing that can be dotted with a vector to produce an invariant scalar. In order to achieve that, it has to transform using the inverse transpose of whatever matrix is operating on ordinary vectors. This holds in any number of dimensions.Note that in 3D specifically, a bivector is similar to a covector. They're not quite the same since they have different units: a covector has units of inverse length while a bivector has units of length squared (area), so they behave differently under scaling. However, they do transform the same way with respect to their orientation, which is what matters for normals. We usually don't care about the magnitude of a normal (we always normalize them to unit length anyway), so we usually don't need to worry about the difference between a bivector and a covector. |
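A quick numeric check of this result (my own example): with a non-uniform scale, a naively transformed normal fails the plane equation while the inverse-transpose one preserves it. Using a diagonal matrix keeps the sketch dependency-free, since its inverse transpose is just the reciprocal diagonal:

```python
# Non-uniform scale M = diag(2, 1, 1), plane n.x + d = 0 with d = 0.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

M = [2.0, 1.0, 1.0]                  # diagonal entries of M
M_inv_T = [1.0 / m for m in M]       # diagonal of (M^-1)^T

n = (1.0, 1.0, 0.0)                  # plane normal
x = (1.0, -1.0, 5.0)                 # a point on the plane
assert dot(n, x) == 0.0

Mx = tuple(m * xi for m, xi in zip(M, x))        # transformed point
n_wrong = tuple(m * ni for m, ni in zip(M, n))   # naive: transform n by M
n_right = tuple(m * ni for m, ni in zip(M_inv_T, n))

print(dot(n_right, Mx))  # 0.0 -> transformed point still satisfies the plane
print(dot(n_wrong, Mx))  # 3.0 -> naive transform breaks the plane equation
```

This matches the derivation above: the transformed point Mx satisfies the plane equation only when the normal is transformed by the inverse transpose.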
_unix.374514 | This is my first time having a fancy UEFI PC. I partitioned all my drives in GPT using gdisk, installed Windows 10, and then installed Debian. At the end of the installation a dialogue box warned me that many EFI implementations are buggy and asked if I wanted to install GRUB on a removable media (it didn't tell what that media was); I clicked yes. GRUB didn't detect Windows 10. I rebooted my PC: no GRUB, it booted straight to Windows 10. When I picked the drive explicitly from the boot menu (pressing F12), it did boot into Debian, though I have to do this at every boot. Is there a way to make GRUB detect Windows 10 and be the default bootloader like back in the good old days of MBR? | GRUB booting only when the drive is explicitly selected | debian;boot;windows;dual boot;grub | null |
_unix.91999 | I am an absolute beginner Linux user, and I simply want to know how to install the JDK (Java Development Kit) and the Eclipse IDE in Linux Mint 15. | How to install JDK and Eclipse in Linux Mint 15? | linux mint | null |
_unix.225729 | I want to find the location of the info file of the jcal program. It has appropriate info when I call info jcal. The output of info -w jcal is: *manpages*. Did I go the wrong way to get the full location of the info file? What is the best way to get the info file location? Dist: Slackware Current; jcal: 0.4.1; info: 4.13 | Where does info file exist | info | The command info looks for files at places defined in the $INFOPATH variable (usually /usr/share/info/, etc.), but if it doesn't find the appropriate file there, as a fallback it switches to the man pages for help (see the $MANPATH variable) and prints exactly the same content as man. So if info -w shows *manpages*, then try man -w to get the information you wanted. |
_webmaster.75747 | I am very much a newbie in PHP. Could you tell me about basic looping in PHP if I want create a web design. Also, how do I connect to a MySQL database? | Basic looping on PHP for web design | php;mysql | LOOPINGThere are two methods to loop in PHP: for, and;foreachThe PHP for LoopThe for loop is used when you know in advance how many times the script should run.Syntaxfor (init counter; test counter; increment counter) { code to be executed;}Parameters:init counter: Initialize the loop counter valuetest counter: Evaluated for each loop iteration. If it evaluates to TRUE, the loop continues. If it evaluates to FALSE, the loop ends.increment counter: Increases the loop counter valueThe example below displays the numbers from 0 to 10:Example<?php for ($x = 0; $x <= 10; $x++) { echo The number is: $x <br>;} ?>The PHP foreach LoopThe foreach loop works only on arrays, and is used to loop through each key/value pair in an array.Syntaxforeach ($array as $value) { code to be executed;}For every loop iteration, the value of the current array element is assigned to $value and the array pointer is moved by one, until it reaches the last array element.The following example demonstrates a loop that will output the values of the given array ($colors):Example<?php $colors = array(red, green, blue, yellow); foreach ($colors as $value) { echo $value <br>;}?>Source: PHP 5 for LoopsSQL CONNECTIONOpen a Connection to MySQLBefore we can access data in the MySQL database, we need to be able to connect to the server:Example (MySQLi Object-Oriented)<?php$servername = localhost;$username = username;$password = password;// Create connection$conn = new mysqli($servername, $username, $password);// Check connectionif ($conn->connect_error) { die(Connection failed: . $conn->connect_error);} echo Connected successfully;?>A note on the object-oriented example above: $connect_error was broken until PHP 5.2.9 and 5.3.0. 
If you need to ensure compatibility with PHP versions prior to 5.2.9 and 5.3.0, use the following code instead:// Check connectionif (mysqli_connect_error()) { die(Database connection failed: . mysqli_connect_error());}Example (MySQLi Procedural)<?php$servername = localhost;$username = username;$password = password;// Create connection$conn = mysqli_connect($servername, $username, $password);// Check connectionif (!$conn) { die(Connection failed: . mysqli_connect_error());}echo Connected successfully;?>Close the ConnectionThe connection will be closed automatically when the script ends. To close the connection before, use the following:Example (MySQLi Object-Oriented)$conn->close();Example (MySQLi Procedural)mysqli_close($conn);Source: PHP Connect to MySQL |
_unix.355512 | I have an old PC that I started putting Linux distros on a long time ago and now I got it running one called Elementary OS. I'm trying to boot from a USB drive ISO, and it won't let me do it. I've never dealt with grub before and I don't understand it. I've read some of the related answers on here and elsewhere like this http://blog.viktorpetersson.com/post/93191892924/how-to-boot-from-usb-with-grub2But if I type initrd(hd1,0) then boot it gives me an error saying I have to load the kernel first.Another answer said chainloader +1 but that also gives me an error.I can get to the BIOS (Gateway) and select the USB drive but it always puts me right into grub. Am I stuck using Elementary OS forever? | Linux boot from USB - grub, bios | boot;usb | null |
_unix.369993 | I'm trying to compile a gstreamer plugin, using the general autotools setup. The plugin requires a specialized shared library 'libdce' which I know is installed on my machine, as a shared object in /usr/lib/ as libdce.so.1. When I run configure, the script fails as libdce is not a package. The error log is below:checking for LIBDCE... configure: error: Package requirements (libdce >= 1.0.0) were not met:No package 'libdce' foundConsider adjusting the PKG_CONFIG_PATH environment variable if youinstalled software in a non-standard prefix.Alternatively, you may set the environment variables LIBDCE_CFLAGSand LIBDCE_LIBS to avoid the need to call pkg-config.See the pkg-config man page for more details.How would I be able to link the shared library to the configure script?There is also a libdce package installed on this machine, installed as libdce1 | Configure not finding shared library, despite that library being installed | software installation;configure;gstreamer | null |
_unix.162311 | Normally I like to have all of the debug output of a script go to a file, so I will have something like:exec 2> somefileset -xvThis work very will in bash, but I have noticed in ksh it behaves differently when it comes to functions. I noticed when I do this in ksh, the output does not show the function trace, only that the function was called.When doing some additional testing, I noticed the behavior also depends on how the function was declared, if I use the ksh syntax of:function doSometime {....}All I see is the function call, however if declare the function using the other method, egdoSomething() {....}The trace works as expected. Is it possible to get set -xv to work the same with both types of function declarations? I tried export SHELLOPTS and that did not make a difference either.I am using ksh93 on Solaris 11. | set -xv behavior in ksh vs bash | ksh;debugging;function | From the documentation:Functions defined by the function name syntax and called by name execute in the same process as the caller and share all files and present working directory with the caller. Traps caught by the caller are reset to their default action inside the function.WhereasFunctions defined with the name() syntax and functions defined with the function name syntax that are invoked with the . special built-in are executed in the caller's environment and share all variables and traps with the caller.The solution is to not use the function keyword; stick to the standard form of function definitions.Alternatively, if you're only interested in a few functions, typeset -tf fname will just trace the function fname (if it was defined with the function keyword).To stop tracing: typeset +tf fnameTo trace all such functions in ksh93: typeset -tf $(typeset +f)To see which functions are traced: typeset +tfTo stop tracing all functions: typeset +tf $(typeset +tf) |
_cstheory.37071 | I'm a mathematics student in my junior year and I'm interested in computational complexity, especially geometric complexity theory. I'm going to learn algebraic geometry and representation theory, but I want to concentrate on the parts that are related to geometric complexity theory, so I wonder: what are the topics that should be mastered by someone who wants to understand geometric complexity theory? Surely, a lot of algebraic geometry and representation theory are needed, but which topics? And representation theory of what? Finite groups? Lie algebras (probably not)? And which topics in algebraic geometry are needed? It would be great if it is possible to name some topics in algebraic geometry and representation theory that are required to be well understood before tackling geometric complexity theory. Naming good resources (texts etc.) that cover this background will be highly appreciated too. I have asked the same question on Math Stack Exchange but got no answer, so I thought I should ask it here. | What is the background in algebraic geometry and representation theory needed for geometric complexity theory? | cc.complexity theory;reference request;lo.logic;time complexity;p vs np | null |
_softwareengineering.21987 | I know there have been questions like What is your favorite editor/IDE?, but none of them have answered this question: Why spend the money on IntelliJ when Eclipse is free?I'm personally a big IntelliJ fan, but I haven't really tried Eclipse. I've used IntelliJ for projects that were Java, JSP, HTML/CSS, Javascript, PHP, and Actionscript, and the latest version, 9, has been excellent for all of them.Many coworkers in the past have told me that they believe Eclipse to be pretty much the same as IntelliJ, but, to counter that point, I've occasionally sat behind a developer using Eclipse who's seemed comparably inefficient (to accomplish roughly the same task), and I haven't experienced this with IntelliJ. They may be on par feature-by-feature but features can be ruined by a poor user experience, and I wonder if it's possible that IntelliJ is easier to pick up and discover time-saving features.For users who are already familiar with Eclipse, on top of the real cost of IntelliJ, there is also the cost of time spent learning the new app. Eclipse gets a lot of users who simply don't want to spend $250 on an IDE.If IntelliJ really could help my team be more productive, how could I sell it to them? For those users who've tried both, I'd be very interested in specific pros or cons either way. | How is IntelliJ better than Eclipse? | java;ide;eclipse;intellij | I work with Intellij (9.0.4 Ultimate) and Eclipse (Helios) every day and Intellij beats Eclipse every time. How? Because Intellij indexes the world and everything just works intuitively. I can navigate around my code base much, much faster in Intellij. F3 (type definition) works on everything - Java, JavaScript, XML, XSD, Android, Spring contexts. Refactoring works everywhere and is totally reliable (I've had issues with Eclipse messing up my source in strange ways). CTRL+G (where used) works everywhere. 
CTRL+T (implementations) keeps track of the most common instances that I use and shows them first. Code completion and renaming suggestions are so clever that it's only when you go back to Eclipse that you realise how much it was doing for you. For example, consider reading a resource from the classpath by typing getResourceAsStream("/. At this point Intellij will be showing you a list of possible files that are currently available on the classpath, and you can quickly drill down to the one you want. Eclipse - nope.
The (out of the box) Spring plugin for Intellij is vastly superior to SpringIDE, mainly due to its code inspections. If I've missed out classes or spelled something wrong then I'm getting a red block in the corner and red ink just where the problem lies. Eclipse - a bit, sort of.
Overall, Intellij builds up a lot of knowledge about your application and then uses that knowledge to help you write better code, faster.
Don't get me wrong, I love Eclipse to bits. For the price, there is no substitute, and I recommend it to my clients in the absence of Intellij. But once I'd trialled Intellij, it paid for itself within a week, so I bought it, and each of the major upgrades since. I've never looked back. |
_hardwarecs.2038 | I am looking for a DVR-like product capable of the following:

- can connect to various CCTV cameras (with a standard BNC output), mostly PAL CVBS
- downloading of historical data can be performed remotely (nightly) over a secured channel (SFTP, SSH, Windows share etc.), possibly less secure FTP etc., from a distant location over an IP network; the format should preferably be chunks of some standard video format like mp4 with the H.264 codec
- supports options to embed real-time video output into a custom web page or Windows app
- supports good motion detection and good quality with options (ideally storing both motion-detected scenes and the whole time separately)
- it would be nice to have good metadata about the videos created, so that an output of recording time vs. motion detection could be created programmatically
- can be set up to delete old recordings automatically or programmatically
- provides an option to connect an external monitor to show real-time cameras' pictures on site

I am considering creating an intranet video surveillance server. So I am looking for a DVR with good automation options, ideally with a video surveillance server software option.
OK, I just need a good universal and not too expensive device even without points 2-3 (downloading ..., embedded video output), if there are any ideas? | One DVR device to standardize video surveillance on multiple sites | video;video camera;video capture | After some consideration I have decided on the HIKVISION platform. I chose that platform over Dahua mostly because of better support in my country and a better firmware upgrade program.
Multiple platforms (like Dahua, HIKVISION, AVTECH etc.) cannot be mixed nowadays, mostly because the ONVIF Profile G standard is not yet widely implemented.
Some attempts, like the Ozeki SDK, exist to unify surveillance devices in terms of a universal software library, but downloading historical data is not supported on the DVR/NVR devices yet.
So it is a better option to buy a new hybrid DVR on every site and use one common software package like iVMS 4200.
Such a device is either an HCVR5104/5108/5116HS-S3 with SmartPSS, or a DS-7204/7208/16HQHI-F1/N with iVMS 4200 (which I chose).
Hope one day ONVIF Profile G gets implemented and there will be a decent SDK or universal tools to use it with various devices. |
_codereview.56225 | I'm trying to get a better understanding of decoupling methods. Right now, I have this method:

private bool ContainsLegalFirstName(DataRow row, string legalFirstNameColumn)
{
    return row.Table.Columns.Contains(legalFirstNameColumn)
        && !String.IsNullOrEmpty(row[legalFirstNameColumn].ToString());
}

I was thinking about changing the DataRow to an IDataRecord and adding an IListSource parameter and passing in the DataTable, but then the problem is I can't access the columns of the DataTable from the IListSource. Any suggestions, or is what I have good enough? | Loose coupling, accessing class properties | c#;interface;.net datatable | As DataRow does not implement IDataRecord, you would have to pass an adapter that wraps the row and implements the interface. Also, passing an IListSource would only help if it always returned an ITypedList (which is not guaranteed), which you could then query for available properties by name. But what would be the advantage of all that?
In my opinion, the parameter types are alright. We are still talking about a private helper method, not one that needs to be accessible using all kinds of interfaces. There are, however, some things that I might change:
Either: Rename the method to represent that it actually does not care about the legal first name. You could currently rename it to ContainsLegalLastName without any change in behaviour.
Or: Remove the legalFirstNameColumn and get that value from somewhere else (some configuration, const, ).
Unless you decide to remove the column name parameter and get the column name from an instance variable, make the method static. It currently does not use any state at all.
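Putting the reviewer's suggestions together - static method, no state, and the column name pulled from a constant instead of a parameter - a sketch might look like this (the constant's value is an assumption for illustration, not from the original post):

```csharp
// Sketch only: assumes the column is always named "LegalFirstName".
private const string LegalFirstNameColumn = "LegalFirstName";

private static bool ContainsLegalFirstName(DataRow row)
{
    return row.Table.Columns.Contains(LegalFirstNameColumn)
        && !String.IsNullOrEmpty(row[LegalFirstNameColumn].ToString());
}
```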
_unix.273185 | I've got a problem: my laptop loads the Nvidia driver despite it having been added to /etc/modprobe.d/blacklist.conf as blacklist nvidia, as well as in /etc/default/grub, as rdblacklist nvidia in GRUB_CMDLINE_LINUX. This leads to the machine running hot and not-so-smooth on battery. Why is Fedora not obeying my blacklist configuration? What can be done?
Update. Files:

[0] % cat /etc/modprobe.d/bumblebee.conf
blacklist nvidia
blacklist nouveau
options bbswitch load_state=0 unload_state=0

[0] % cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap nouveau.modeset=0 rd.driver.blacklist=nouveau,nvidia rhgb quiet"
GRUB_DISABLE_RECOVERY=true

EDIT: lsmod | grep nvidia

[1] % lsmod | grep nvidia
nvidia               8642560  1
drm                   335872  12 i915,drm_kms_helper,nvidia

| Nvidia is loaded despite it being blacklisted | fedora;nvidia;bumblebee | The module might be loaded in the initramfs on boot. You must regenerate the initramfs to include your modifications to /etc/modprobe.d/*
Run the following to regenerate your initramfs:

dracut -f /boot/your-initramfs

On reboot, the driver should not be loaded automatically |
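A sketch of the full sequence on Fedora (the initramfs path follows the usual Fedora naming convention and is an assumption here; adjust it to the installed kernel):

```shell
# /etc/modprobe.d/bumblebee.conf already contains "blacklist nvidia",
# so only the initramfs needs rebuilding to pick that blacklist up:
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

# After a reboot, verify that the module stayed out:
lsmod | grep nvidia
```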
_unix.148905 | I want to redirect all the inside network IPs (and only the inside network 192.168.1.0) to an error page, except some IPs; a condition like this:

if ( IP_from_Network = 192.168.1.0 and ((IP != 192.168.1.4) and (IP != 192.168.1.5) and (IP != 192.168.1.6)) ) {
    redirect to an error page
}

so I have been trying to achieve this using RewriteEngine:

RewriteEngine On
RewriteCond %{REMOTE_ADDR} !^192\.168\.1\.4$ [NC]
RewriteCond %{REMOTE_ADDR} !^192\.168\.1\.5$ [NC]
RewriteCond %{REMOTE_ADDR} !^192\.168\.1\.6$ [NC]
RewriteCond %{REMOTE_ADDR} ^192\.168\.1\.*$ [NC]
RewriteCond %{REQUEST_URI} ^/test/manager/.* [NC]
RewriteRule ^(.*)$ - [R=404,L]

but this didn't work for me.
Should I use other flags like [OR] or [AND]?
Update: Directory tag:

<Directory /var/www/html/test>
    Order allow,deny
    Allow from 192.168.1
    RewriteEngine on
    RewriteCond %{REMOTE_ADDR} !^192\.168\.1\.4$ [NC]
    RewriteCond %{REMOTE_ADDR} !^192\.168\.1\.5$ [NC]
    RewriteCond %{REMOTE_ADDR} !^192\.168\.1\.6$ [NC]
    RewriteCond %{REMOTE_ADDR} ^192\.168\.1\.*$ [NC]
    RewriteCond %{REQUEST_URI} ^/test/manager/.* [NC]
    RewriteRule ^(.*)$ - [R=404,L]
</Directory>

| Forbid some IPs from a certain Network on Apache? | apache httpd;rewrite | Use Allow/Deny instead:

<Location /test/manager/>
    Order Deny,Allow
    Deny from 192.168.1.0/24
    Allow from 192.168.1.4 192.168.1.5 192.168.1.6
</Location>

Notice that this also allows any other IP, which I think is not what you want. If so, swap the Order and remove the Deny line:

<Location /test/manager/>
    Order Allow,Deny
    Allow from 192.168.1.4 192.168.1.5 192.168.1.6
</Location> |
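Note that Order/Allow/Deny is the Apache 2.2 access-control syntax; in Apache 2.4 it was replaced by mod_authz_core's Require directives. A sketch of the second (allow-only-these-hosts) variant in 2.4 syntax, for reference:

```apache
<Location /test/manager/>
    # Apache 2.4: only these inside hosts may reach the manager
    Require ip 192.168.1.4 192.168.1.5 192.168.1.6
</Location>
```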
_cs.47682 | I have a set $E$ which is the set of all possible $d$-tuples ($d$-dimensional vectors) of integers between $1$ and $n$.Typically $d=3$ and $n\approx1000$, but for the sake of making a small example, suppose $d = 2$ and $n = 4$, so$$E = \{(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(2,3),(2,4),(3,1),(3,2),(3,3),(3,4),(4,1),(4,2),(4,3),(4,4)\}\,.$$Given an arbitrary nonempty $J\subseteq E$, I would like to find the product of the lengths of its projections on the axes, in other words, the area of the minimal axis-aligned bounding box of the 2d points in $J$.For example, given $J = \{(2,3),(3,2),(4,1),(4,2)\}$, my function would return$$f(J) = (\max(2,3,4,4)-\min(2,3,4,4))(\max(3,2,1,2)-\min(3,2,1,2)) = (4 - 2)(3 - 1) = 4\,.$$I want to perform this calculation for every possible non-empty $J\subseteq E$, of which there are $2^{16} - 1 = 65,535$. At present I call built-in Max and Min commands every time, i.e. four such calls for each evaluation of $f$, hence $65,535\times 4 = 262,140$ calls. 
Presumably each time Max or Min is called, it sorts the positive integers fed to it. This seems horribly inefficient, since for $J_1$ and $J_2$ having many elements in common, the lists to be sorted for $J_1$ are very similar to the lists to be sorted for $J_2$, so there may be great repetition of comparisons, even if the built-in Max or Min function is efficient within itself.
What is a good way to solve this problem which achieves a balance between efficiency of time and memory, given the typical parameters $d = 3$ and $n = 1000$? Should I store the nonempty subsets $J$ of $E$ in some kind of graph structure, and have my algorithm traverse this graph somehow?
EDIT: So what I'm actually trying to do is to evaluate
$R(s_1, \ldots, s_d, n_1, \ldots, n_d; q) = 1 + \sum_{J \in \mathcal{P}(E) \setminus \emptyset}(-1)^{|J|}\prod_{J' \in \mathcal{P}(J) \setminus \emptyset}\exp\left [(-1)^{|J'|}\ln\left (\frac{1}{q}\right )\prod_{r=1}^d\max\left (0, s_r -\left (\max_{e \in J'} e_r - \min_{e \in J'} e_r\right )\right )\right ],$
where $d \in \mathbb{N}$, $n_1, \ldots, n_d \in \mathbb{N}$, and $s_1, \ldots s_d \in \mathbb{N}$ with $1 \leq s_r \leq n_r$ for all $r \in \{1, \ldots, d\}$, and $E = \prod_{r=1}^d \{1, \ldots, n_r - s_r + 1 \}$, and $q$ is a symbol. This expression gives a polynomial in $q$; the sequence of coefficients of the polynomial is what I would like to obtain.
This formula itself might well not be a very good formula for computing this polynomial, but my present task is to make an algorithm based upon the formula.
The quantity
$\prod_{r=1}^d\max\left (0, s_r -\left (\max_{e \in J'} e_r - \min_{e \in J'} e_r\right )\right )$
represents the volume of the intersection of several contiguous subarrays of dimensions $s_1 \times \ldots \times s_d$, within a large array of dimensions $n_1 \times \ldots \times n_d$. The reason for taking the maximum of 0 and $s_r -\left (\max_{e \in J'} e_r - \min_{e \in J'} e_r\right )$ is that some such intersections will be empty.
Re.
Tom's comment about enumerating bounding boxes instead of subsets, I did have the following idea, which I think might be in that spirit:
Since the volume of the intersection of several contiguous subarrays is invariant with respect to translation of that set of subarrays within the large array, I could choose only to compute the volumes of a certain number $M$ of representative cases, and then copy the result to all the translates. I think
$M=2^{\left (\prod_{r=1}^d\min(s_r, n_r - s_r + 1)\right )-1}$,
which I computed by supposing that the top-left contiguous subarray was included in the set of subarrays, and asking which other contiguous subarrays could intersect that one.
EDIT: I do apologise. In my original question, $f(J)$ should not be the product of the lengths of the projections of the subset on the axes; it should simply be the sequence of lengths of the projections on the axes. As can be seen in my first edit, above, I do not compute the product of those lengths $l_r$, but rather the product $\prod_{r=1}^d \max(0,(s_r - l_r))$. When the $n_r$ are large compared with the $s_r$, many of those products $\prod_{r=1}^d \max(0,(s_r - l_r))$ will be zero, which should allow me to check many fewer than the $2^{(1000^3)}$ cases mentioned in Tom's comment. | Efficient method to sort very large set of integer vectors by all coordinates simultaneously | time complexity;efficiency | null
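As a concrete aside: Max and Min need not sort anything - a single pass of running minima and maxima per coordinate suffices, and the same running values can be updated incrementally as elements are added to $J$. A minimal sketch (the function name is invented) that reproduces the 2-D example from the question:

```python
def projection_lengths(points):
    """Side lengths of the minimal axis-aligned bounding box of `points`."""
    d = len(points[0])
    lo = list(points[0])              # running minimum per coordinate
    hi = list(points[0])              # running maximum per coordinate
    for p in points[1:]:              # one pass, no sorting
        for r in range(d):
            lo[r] = min(lo[r], p[r])
            hi[r] = max(hi[r], p[r])
    return [hi[r] - lo[r] for r in range(d)]

J = [(2, 3), (3, 2), (4, 1), (4, 2)]
lengths = projection_lengths(J)
area = lengths[0] * lengths[1]        # (4 - 2) * (3 - 1) = 4, as in the question
```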
_softwareengineering.355733 | Most, if not all, IT people I know believe that it is beneficial to model software with UML or other types of diagrams before coding. (My question is not about UML specifically; it could be any graphical or textual description of the software design.)
I am not so sure about it. The main reason is: code doesn't lie. It is checked by the compiler or interpreter. It hopefully has automated tests and needs to pass static code analysis. If a module does not interface correctly with another module, it is usually obvious in code because you get an error message.
All of this cannot be done with diagrams and other documents. Yes, there are tools that check UML, but everything I've seen so far is very limited. Therefore these documents tend to be incomplete, inconsistent or simply false. Even if the diagrams themselves are consistent, you cannot be sure that the code actually implements them. Yes, there are code generators, but they never generate all of the code.
I sometimes feel like the obsession with modeling results from the assumption that code inevitably has to be some incomprehensible mess that architects, designers or other well-paid people who get the big picture should not have to deal with. Otherwise it would get way too expensive. Therefore all design decisions should be moved away from code. Code itself should be left to specialists (code monkeys) who are able to write (and maybe read) it but don't have to deal with anything else. This probably made sense when assembler was the only option, but modern languages allow you to code at a very high level of abstraction. Therefore I don't really see the need for modeling any more.
What arguments for modeling software systems am I missing?
By the way, I do believe that diagrams are a great way to document and communicate certain aspects of software design, but that does not mean we should base software design on them.
Clarification: The question has been put on hold as being unclear.
Therefore let me add some explanation: I am asking if it makes sense to use (non-code) documents that model the software as the primary source of truth about software design. I do not have the case in mind where a significant portion of the code is automatically generated from these documents. If this were the case, I would consider the documents themselves as source code and not as a model.
I listed some disadvantages of this procedure that make me wonder why so many people (in my experience) consider it the preferable way of doing software design. | What are the benefits of modeling software systems vs. doing it all in code? | design;architecture;uml;modeling | The benefit of modeling software systems vs. all in code is: I can fit the model on a whiteboard.
I'm a big believer in the magic of communicating on one sheet of paper. If I tried to put code on the whiteboard, when teaching our system to new coders, there simply isn't any code at the needed level of abstraction that fits on a whiteboard.
I know the obsession with modeling that you're referring to. People doing things because that's how they've been done before, without thinking about why they're doing it. I've come to call it formalism. I prefer to work informally, because it's harder to hide silliness behind tradition.
That doesn't mean I won't whip out a UML sketch now and then. But I'll never be the guy demanding you turn in a UML document before you can code. I might require that you take 5 minutes and find SOME way to explain what you're doing, because I can't stand the existence of code that only one person understands.
Fowler identified different ways people use UML that he called UML modes. The dangerous thing with all of them is that they can be used to hide from doing useful work. If you're doing it to code using the mouse, well, I've seen many try. Haven't seen anyone make that really work. If you're doing it to communicate, you'd better make sure others understand you.
If you're doing it to design, you damn well better be finding and fixing problems as you work. If everything is going smoothly and most of your time is spent making the arrows look nice, then knock it off and get back to work.
Most importantly, don't produce diagrams that you expect to be valid for more than a day. If you somehow can, you've failed. Because software is meant to be soft. Do not spend weeks getting the diagrams just right. Just tell me what's going on. If you have to, use a napkin. That said, I prefer coders who know their UML and their design patterns. They're easier to communicate with. So long as they know that producing diagrams is not a full-time job. |
_softwareengineering.141005 | How would one know if the code one has created is easily readable, understandable, and maintainable? Of course, from the author's point of view the code is readable and maintainable, because the author wrote it and edited it to begin with. However, there must be an objective and quantifiable standard by which our profession can measure code.
These goals are met when one may do the following with the code without the expert advice of the original author:

It is possible to read the code and understand at a basic level the flow of logic.
It is possible to understand at a deeper level what the code is doing, including inputs, outputs, and algorithms.
Other developers can make meaningful changes to the original code such as bug fixes or refactoring.
One can write new code such as a class or module that leverages the original code.

How do we quantify or measure code quality so that we know it is readable, understandable, and maintainable? | How would you know if you've written readable and easily maintainable code? | code quality;code reviews;readability;maintainability | Your peer tells you after reviewing the code.
You cannot determine this yourself, because you know more as the author than the code says by itself. A computer cannot tell you, for the same reasons that it cannot tell if a painting is art or not. Hence, you need another human - capable of maintaining the software - to look at what you have written and give his or her opinion. The formal name of said process is Peer Review.
_unix.336196 | I'm trying to find a program, preferably web-based to schedule tasks, distribute files such as scripts and get an overview of about 50 RHEL servers in a confined environment.So far I've tried puppet, but it can't really schedule tasks unless you make it change cron files.Capistrano looks promising, but I have yet to try it.Hoping any of you had experience with Capistrano or other programs that might do the work or at least be able to schedule and manage tasks. | Program for task scheduling and managing on RHEL | rhel;capistrano | null |
_webapps.65691 | Is there a way to find friends who are only male or female in my friend list or in my friend's friend list? Because I see only All Friends, Mutual Friends, Recently Added, People You May Know, Followers when I visit the friends list on my profile or my friend's profile. | Sort facebook friends by gender | facebook | To find all of your friends who are male or female, type one of the following into the main search box at the top of the Facebook page and then press Enter:

My friends who are male
My friends who are female

To find all of the people in your friend's friend list who are male or female, type one of the following into the main search box at the top of the Facebook page and press Enter (you'll want to replace [name of friend] with your friend's name):

Friends of [name of friend] who are male
Friends of [name of friend] who are female

Note that Facebook will only find those people who have filled in the gender in their profile; it will not find anyone who hasn't.
_softwareengineering.319696 | I have an in-house messaging system, similar to a message broker. We have one master message broker and one slave message broker. A message broker just receives messages and sends them to all nodes. The slave is acting as a node, receiving messages from the master and building state so it can take over in case of master failure.
Now my problem is: how can I detect, if possible and without human intervention, that the master is dead? The master may merely look dead, and the slave might be tempted to take over, but then you might end up in the situation of two masters in your system.
I'm trying to understand how clustering systems implement master-dead detection. Until now it looks like a human has to manually kill the master and turn on a slave, but it would be much more preferable for this process to be automatic. | Master-Slave Cluster - How to make sure the master is really dead for the slave to take over? | message queue;cluster;messaging | I'd suggest defining criteria for what "dead" means, then periodically polling for the dead condition and performing the swing over. Perhaps "dead" gets defined as "hasn't sent any messages to any of the nodes in X seconds" -- whatever decision tree a human currently follows to ascertain whether or not to flip service. It may be 1 condition, 10, or dozens. How well the logic is defined will control how accurately it detects "dead" and fails over.
Also, the swing-over process should include informing the dead master that it has been declared dead and should not perform any master type of operations. With one exception -- you might want it to retry any messages that had been passed while it was master but did not go out. Or, if the client code is under your control, have the clients manage re-trying failed messages. You need something in place to prevent messages falling through the cracks.
It would be a good idea to also have the dead master, if it comes back online, come online as a secondary.....
and have the deadness detector now polling the new master, ready to fail back to the original master if the new master dies while the original is back up. |
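A sketch of the polling idea in Python (the class name and the single last-message-seen criterion are illustrative assumptions; a real detector would encode whatever conditions a human checks before flipping service):

```python
import time

class DeadnessDetector:
    """Declare the master dead when no message has been seen for `timeout_s`
    seconds. Illustrative only: real criteria may combine many conditions."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_seen = time.monotonic()

    def record_message(self):
        # Called whenever the master is observed doing master-type work.
        self.last_seen = time.monotonic()

    def master_is_dead(self, now=None):
        if now is None:
            now = time.monotonic()
        return (now - self.last_seen) > self.timeout_s

detector = DeadnessDetector(timeout_s=5.0)
detector.record_message()
alive_now = detector.master_is_dead()                               # False: just seen
dead_later = detector.master_is_dead(now=detector.last_seen + 6.0)  # True: 6 s of silence
```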
_scicomp.25728 | I have a heat transfer equation in a cube in $R^{100}$: $[0,1]\times[0,1]\times[0,1]\dots$:
$$\nabla^2 \varphi = f,$$
with boundary conditions set in such a form that, at a number of points $p_i$, the temperature field should deviate as little as possible from observed values $o_i$; in other words, the solution of the heat equation should minimise
$$\sum_{i=0}^{m}|\varphi(p_i) - o_i|^2.$$
This would be a pretty straightforward problem in the 2-3 dimensional case (assuming the problem is well-posed); I've solved it with FEM successfully. But for the high-dimensional case I cannot even build the grid, let alone do any calculations. (I don't store $f$; I can easily calculate it at any point.)
It seems I need to employ some grid-less method. I've skimmed Google briefly and found two possible avenues: to use radial basis functions or to use particle methods. Are they applicable in my case? Is my problem feasible at all? I've never worked with high-dimensional problems before, so I would like to hear all suggestions and references to the relevant and possibly relevant literature. | Solving Poisson equation while suffering from the curse of dimensionality | reference request;poisson;high dimensional | null
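As an illustrative aside (not part of the original exchange): the grid-free RBF route mentioned in the question is commonly set up as a collocation least-squares problem. With a hypothetical ansatz $\varphi(x) \approx \sum_{j=1}^{N} c_j\, \psi(\|x - x_j\|)$ over scattered centres $x_j$, one would solve something like

```latex
\min_{c \in \mathbb{R}^N} \;
\sum_{k=1}^{K} \Big| \sum_{j=1}^{N} c_j \,\nabla^2 \psi(\|y_k - x_j\|) - f(y_k) \Big|^2
\;+\; \lambda \sum_{i=0}^{m} \Big| \sum_{j=1}^{N} c_j \,\psi(\|p_i - x_j\|) - o_i \Big|^2
```

where the $y_k$ are collocation points for the PDE, the $p_i$ carry the observations, and $\lambda$ weights the data term; all of these symbols are assumptions for illustration. Whether such a formulation is feasible in $\mathbb{R}^{100}$ is exactly the open question of the post.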
_unix.44339 | I have a Windows XP/Debian Squeeze (XFCE desktop) dual-boot set up on a Dell Latitude laptop. The Windows XP portion boots fine. However, sometimes the Debian portion does not boot. If it doesn't boot, and I do a hard reboot, it will boot the second time. However, sometimes Debian will boot the first time.How can I diagnose this problem?All help appreciated![EDIT] I should mention that the improper boot manifests itself as a failure to reach the login screen. I see the standard Debian wallpaper with the stars and so on, but it doesn't display the login box. | Weird Booting Problem with Debian Squeeze | debian;windows;dual boot | null |
_codereview.55063 | I'm doing some of the Codility challenges using Ruby. The current challenge is The MaxCounter, which is described as:

Calculate the values of counters after applying all alternating operations: increase counter by 1; set value of all counters to current maximum.

See the link above for more details. I managed to get a solution. The performance, however, scored 0, being that some operations timed out. How can I improve my algorithm to perform better?

def solution(n, a)
  counter = (0..n-1).to_a.map{|z| z = 0}
  for value in a
    counter.map!{|x| x = counter.max} unless value <= n
    counter[value-1] += 1 unless value == n+1
  end
  counter
end

Update - my second and third attempts, still fails on performance

2nd

def solution(n, a)
  counter = (0..n-1).to_a.map{|z| z = 0}
  a.each{|x| counter.map!{|c| c = counter.max} unless x <= n; counter[x-1] +=1 unless x==n+1;}
  counter
end

3rd

def solution(n, a)
  counter = (0..n-1).to_a.map{|z| z = 0}
  a.each{|x| counter[x-1] +=1 and next unless x==n+1; counter.map!{|x| x = counter.max};}
  return counter
end

| Performance of Codility MaxCounter challenge solution | optimization;performance;algorithm;ruby;programming challenge | I can't speak to performance, since I don't know what codility considers acceptable, but I'll give it a go.
In terms of reviewing your code, your solutions seem to be getting more and more compact, but that doesn't necessarily help performance. Shorter code does not equal faster code. Sure, if you can skip code, that's one thing, but just shaving off bytes of source code is unnecessary.
You certainly should not make confusing constructions like unless ... unless or and next unless. Just read that out loud, and it'll sound strange.
And I don't think there's any advantage to using the self-modifying map!. In fact, there's no reason to use map at all.
And please don't add pointless semi-colons.
A nice aspect of Ruby is not having to use those darn things everywhere.
You also seem to be misinterpreting how map (with or without the !) works. These are all equivalent:

counter.map! {|x| x = counter.max}
counter.map! {|x| counter.max}
counter.map! {counter.max}

In other words, you setting the block parameter x does absolutely nothing useful - you don't even need the block parameter. The map method(s) only use the block's return value.
Moreover, your block gets invoked n times by map!, since n is the counter-array's length. That means that counter.max gets called n times, even though the result is the same each time! If n is large, you're looking at a lot of unnecessary work being done there. Worst case is that all of the values in a equal n + 1, in which case you'll be calling counter.max a total of a.count * n times. And given that you're using map!, I doubt Ruby even has a chance to cache/memoize or otherwise optimize the result of counter.max, since you keep changing the array in-place.
Besides, Array provides a lot of help here, if you check the docs:

fill does what it says in the name: fills the array with a given value (or by using a block), which is what you're trying to do with map!
and Array.new accepts a length and a seed value, so you can fill a brand new array right away.
Don't muck around with mapping a range; just make an array of the right length that's filled with zeros.
Here's my take:

def solution(n, a)
  counters = Array.new(n, 0) # an array of zeros (and make the var name plural, since it's an array)
  limit = n + 1 # let's just calculate this once, since it's constant
  a.each do |v|
    if v == limit
      counters.fill(counters.max) # max gets called once, and the array gets filled
    elsif v > 0 && v < limit
      counters[v-1] += 1 # just increment
    end
  end
  counters
end

It works for the sample input in Codility's example, but as mentioned I haven't gone beyond that.
Update: With a bit of manual work, you can get rid of the call to max entirely by tracking the maximum yourself:

def solution(n, a)
  counters = Array.new(n, 0)
  limit = n + 1
  maximum = 0 # the maximum counter value
  a.each do |v|
    if v == limit
      counters.fill(maximum) # use our known maximum
    elsif v > 0 && v < limit
      counter = (counters[v-1] += 1) # increment a counter and store the result
      maximum = counter if counter > maximum # use the new value as maximum, if it's higher
    end
  end
  counters
end

This will likely be faster than the first approach, especially for larger values of n. It's maybe not quite as Ruby-esque to do things manually like this, but it's not terribly complex either. |
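As a quick sanity check, the max-tracking version can be run against the commonly quoted Codility MaxCounters sample (quoted here from memory, so treat the input as an assumption; the printed result below is what this code actually produces for it):

```ruby
# Self-contained copy of the max-tracking version, plus a spot check.
def solution(n, a)
  counters = Array.new(n, 0)
  limit = n + 1
  maximum = 0
  a.each do |v|
    if v == limit
      counters.fill(maximum)
    elsif v > 0 && v < limit
      counter = (counters[v - 1] += 1)
      maximum = counter if counter > maximum
    end
  end
  counters
end

result = solution(5, [3, 4, 4, 6, 1, 4, 4])
# => [3, 2, 2, 4, 2] -- the 6 (a "max counter" op) flattens everything to 2,
#    then counters 1 and 4 are bumped afterwards
```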
_webmaster.17720 | I want to transfer a domain that I am buying off somebody but I want to make sure i'm not getting scammed. Is there some kind of 3rd-party website that can help me with this kind of transfer so that I can safely buy the domain and move it over to me? | Transferring a Domain Safely | domains;transfer;purchase | Use an escrow service like escrow.com. Ensuring Buyers get the Domain and Sellers get paid.Whether you're buying or selling domain names online, Escrow.com is a name you can trust. Escrow.com is a government licensed and audited 3rd party that safely holds a Buyer's payment in a trust account until the entire transaction is complete. That way, Buyers can be confident the domain will be registered in their name and Sellers can be sure they'll be paid.Escrow.com protects your money and your domain.Since the Buyer pays Escrow.com and not the Seller, Escrow.com can withhold payment until it's satisfied the domain name has been transferred by the Seller. One of the ways Escrow.com does this is by checking the WHOIS database of the appropriate Registrar* to make certain it properly reflects the new Buyer's name as the domain name Registrant. Once this has been verified, Escrow.com releases payment to the Seller.Buy and sell Domains without fear of fraud.Anytime you pay in advance for something you've purchased on the Internet, you're taking a chance. People can forge their identities. They can misrepresent what they're selling. And even with the best of intentions, some people are just plain irresponsible.That's why it's important to turn to a trusted 3rd party like Escrow.com for transactions involving a high risk of fraud like domain name transfers. 
Relying on Escrow.com is like having an insurance policy that protects you against fraud, deception and irresponsibility.

Benefits for Buyers
- Peace of mind, security, and convenience
- Assured domain name transfer prior to paying the Seller
- Ability to confirm domain ownership directly with the registrar before the seller is paid
- Ability to pay by wire transfer and in some cases PayPal, check, money order, or credit card (Visa, MasterCard, American Express); restrictions apply
- Ability to send credit card information to a financial institution, not a stranger
- Easy access to live customer support by phone or email

Benefits for Sellers
- Peace of mind, security, and convenience
- Payment verification prior to domain name transfer
- Guaranteed payment once transfer specifications are met
- Protection against credit card fraud, insufficient funds or credit card chargebacks
- Ability to accept credit cards and PayPal if otherwise not able to; restrictions apply
- Escrow fees that may be lower than merchant credit card processing fees
- Easy access to live customer support by phone or email
_unix.29292 | While using Fedora 14 (Gnome 2), each time I did some privileged task (e.g. mounting a new hard drive), I would be prompted for a password, and then an icon (like keys) would appear on the taskbar. By clicking on it, I could exit the elevated privileges mode. I do not see any such feature - to exit privileged mode - in F15/G3. Is it possible to do so? How? | Fedora15/Gnome3 - Exit elevated privileges mode | fedora;security;users;gnome3 | In Fedora 15/Gnome 3, when you execute a command which requires elevated privileges, those privileges will remain until the window that required them is closed or until the privilege timeout is reached. I think this was actually fixed just at the end of Gnome 2. It is worth testing to see how long you retain privileges for.
_unix.107698 | yum install ypchsh and yum install ldapmodify both don't find a package. I only want to change the shell for certain LDAP users on this one server. Can someone please help? I can't seem to figure this simple thing out. The passwd file won't work because the user isn't listed there, I think, and I don't suppose I can just create an entry there. If I edit their login shell in their LDAP profile, I think it would apply to all systems. There must be a mapping file somewhere - in /etc, I am guessing - to map their attributes to local system settings. | How do I change login shell to nologin for an LDAP user RHEL 6? | shell;users;ldap;passwd | It was so easy: if your nsswitch is "files ldap", you can just add an entry to /etc/passwd and modify the user's shell to whatever value you want. You cannot use useradd, though; you need to edit the passwd file with an editor like vi, or use the vipw command.
_unix.189787 | What is the difference between echo and echo -e? Which quotes (" " or ' ') should be used with the echo command, i.e. echo "Print statement" or echo 'Print statement'? Also, what are the available options that can be used along with echo? | Difference between 'echo' and 'echo -e' | shell | echo by itself displays a line of text. It will take anything within the following "..." quotation marks literally, and just print it out as it is. However, with echo -e you're telling echo to enable interpretation of backslash escapes. With this in mind, here are some examples.

Input:
echo "abc\n def \nghi"
Output:
abc\n def \nghi

Input:
echo -e "abc\n def \nghi"
Output:
abc
 def 
ghi

Note: \n is a newline (line feed). If you want to know what other sequences are recognized by echo -e, type man echo in your terminal.
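(My addition, not part of the answer.) The "interpret backslash escapes" step can be mimicked outside the shell as well; this Python sketch, with a helper name I made up, turns the literal text that plain echo prints into the expanded text that echo -e prints:

```python
# `echo` prints its argument literally; `echo -e` first rewrites escape
# sequences such as \n and \t into the control characters they name.

def interpret_escapes(text: str) -> str:
    """Rewrite backslash escapes (\\n, \\t, \\\\, ...) roughly the way `echo -e` does."""
    # 'unicode_escape' understands the common C-style escapes; it is an
    # approximation, not a full reimplementation of echo's escape table
    # (e.g. \c and \e behave differently in real echo).
    return text.encode("latin-1", "backslashreplace").decode("unicode_escape")

plain = r"abc\n def \nghi"               # what plain `echo` prints: one line
interpreted = interpret_escapes(plain)   # what `echo -e` prints: three lines

print(plain)
print(interpreted)
```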
_webmaster.100429 | I've just started using Tag Manager combined with Schema JSON-LD for the first time. I am attempting to use Custom HTML, but have hit a brick wall when attempting to assign Custom HTML to a URL based on Page View. I'm using the following custom HTML:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "ProfessionalService",
  "additionalType": "http://www.productontology.org/id/Web_design",
  "name": "BYBE",
  "description": "The Web Design Company",
  "telephone": "01202 949749",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "Flat 11, East Cliff Grange, 35 Knyveton Road",
    "addressLocality": "Bournemouth",
    "addressRegion": "Dorset",
    "postalCode": "BH1 3QJ"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 50.73744,
    "longitude": -1.8495269
  }
}
</script>

Which I can confirm works when testing it with Google's Rich Snippet Testing Tool.

This works fine with:

Tag Type: Custom HTML
Trigger Type: All Pages Page View

However, it does not work with any of the following configurations:

Tag Type: Custom HTML
Trigger Type: Page View > The trigger fires on:
Page URL > equals > bybe.net/about
Page URL > equals > bybe.net/about/
Page URL > equals > www.bybe.net/about
Page URL > equals > www.bybe.net/about/

As you can see from above, I have tried plenty of different URL types and it's not clear what Google is expecting. Google's own examples do not use HTTP, HTTPS, or WWW, so I'm not sure why this is not working; hopefully someone can assist in pointing me in the right direction! | Google Tag Manager Custom HTML Page View URL Equals Trigger not Working | schema.org;rich snippets;google tag manager;json ld | Page URL is the full URL (including protocol), so try "contains" instead of "equals". Ref: https://support.google.com/tagmanager/answer/6106965?hl=en

To check the value of each built-in variable on your page, follow these steps:
1. Enable Preview & Debug mode on your GTM web container.
2. Open your website; you'll see a Quick Preview panel at the bottom of your website.
3. 
Go to the event (Window Loaded/DOM Ready/Page View) you want your variable on and then navigate to the Variables tab; there you'll find all user-defined and built-in variables. Search for the variable you're looking for and there you'll find its value listed.
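The first sentence of the answer is the whole story; a tiny Python sketch (mine, with a hypothetical URL value) shows why "equals" against a bare host/path can never match a variable that holds the full URL:

```python
# GTM's built-in Page URL variable holds the full URL, protocol included,
# so an "equals" condition against a bare host/path never matches.

page_url = "https://www.bybe.net/about/"   # hypothetical value GTM would see

def equals(value: str, needle: str) -> bool:
    return value == needle

def contains(value: str, needle: str) -> bool:
    return needle in value

print(equals(page_url, "bybe.net/about/"))    # False: protocol and host prefix differ
print(contains(page_url, "bybe.net/about"))   # True: substring match succeeds
```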
_unix.107566 | I know this xmodmap script can swap ctrl and capslock:

remove Lock = Caps_Lock
remove Control = Control_L
keysym Caps_Lock = Control_L
keysym Control_L = Caps_Lock
add Lock = Caps_Lock
add Control = Control_L

I don't quite understand it, so I tried this:

remove Lock = Caps_Lock
remove Control = Control_L
add Lock = Control_L
add Control = Caps_Lock

And this script doesn't work. Could someone explain, in simple words, why the 1st script works and the other one doesn't? | how to swap ctrl and capslock using xmodmap? | x11;keyboard layout;xmodmap | null
_unix.299677 | I want to configure a PC with netcat or socat to execute a script when I tell the server to do so. I have an old app capable of sending simple messages (UDP preferred). The message is stored in a playlist.

Example: let's say I want to send a message to open a macro/script on the PC that is running netcat/socat, e.g. C:\Users\xxx\Desktop\script.bat. The server needs to listen on a port and execute the program when the command is received. How do I do this? I don't know how to start, and I found nothing on the internet.

PS: please don't mind UDP security or reliability; it's a LAN thing, and I don't need the server to tell me anything back. | Controlling a PC via tcp/udp commands netcat/socat | netcat;socat | This is a classic use of netcat. But this is unix.SE, so my answer will be completely in unix.

Note: netcat has different names on different distros:

- netcat: alias to nc on some distros
- nc: GNU netcat on Linux or BSD netcat on *BSD
- ncat: Nmap netcat, consistent on most systems

Options between different versions of netcat vary; I'll point out where different versions may behave differently. 
Moreover, I strongly recommend installing the nmap version of netcat (ncat), since its command line options are consistent across different systems. I'll be using ncat as the netcat name throughout the answer.

TCP

To use TCP to control a machine through netcat you have two options: using a named pipe (which works with all versions of netcat) and using -e (which only exists in the linux version; or, more exactly, -e on *BSD does something completely different).

On the server side you need to perform either:

mkfifo pinkie
ncat -kl 0.0.0.0 4096 <pinkie | /bin/sh >pinkie

Where: 0.0.0.0 is the placeholder for all interfaces (use a specific IP to limit it to a specific interface); -l is listen and -k keep open (to not terminate after a single connection).

Another option (on linux/ncat) is to use:

ncat -kl 0.0.0.0 4096 -e /bin/sh

to achieve the same result.

On the client side you can use your app or simply perform:

ncat <server ip> 4096

And you are in control of the shell on the server, and can send commands.

UDP

UDP is similar but has some limitations. You cannot use -k for the UDP protocol without -e, therefore you need to use the linux/ncat to achieve a reusable socket.

On the server side you do:

ncat -ukl 0.0.0.0 4096 -e /bin/sh

And on the client side (or from your app):

ncat -u <server ip> 4096

And once again you have a working shell.
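If installing ncat is not an option, the UDP variant can be approximated in a few lines of Python (my sketch, not part of the answer; port 4096 mirrors the answer, and there is no authentication, so treat it exactly as trustingly as the ncat -e version):

```python
# Minimal UDP "run what you receive" listener: each datagram is treated as
# one shell command line, and the command's output is sent back to the sender.
import socket
import subprocess

def run_command(datagram: bytes) -> bytes:
    """Execute one received command line through the shell, return its output."""
    result = subprocess.run(datagram.decode().strip(), shell=True,
                            capture_output=True, timeout=10)
    return result.stdout + result.stderr

def serve_once(host: str = "0.0.0.0", port: int = 4096) -> None:
    """Handle a single datagram: run it and send the output back."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        data, addr = sock.recvfrom(65535)
        sock.sendto(run_command(data), addr)
```

A client can then drive it with the same ncat -u <server ip> 4096 invocation from the answer (or wrap serve_once in a loop for a persistent server).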
_codereview.72255 | The original question is on careercup.

"Write a multi threaded C code with one thread printing all even numbers and the other all odd numbers. The output should always be in sequence ie. 0,1,2,3,4....etc"

Now I want to use C# for it.

class Program
{
    static Object obj = new Object();
    static Thread t1;
    static Thread t2;
    static LinkedList<int> a = new LinkedList<int>();

    static void Main(string[] args)
    {
        for (int i = 0; i < 10; i++)
        {
            a.AddLast(i);
        }
        t1 = new Thread(PrintOdd);
        t2 = new Thread(PrintEven);
        t1.Name = "Odd";
        t2.Name = "Even";
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Done!");
        Console.Read();
    }

    private static void PrintOdd()
    {
        while (true)
        {
            if (a.Count == 0) break;
            lock (obj)
            {
                int x = a.First();
                if (x % 2 != 0)
                {
                    Console.WriteLine(Thread.CurrentThread.Name + " " + x);
                    a.RemoveFirst();
                }
            }
        }
    }

    private static void PrintEven()
    {
        while (true)
        {
            lock (obj)
            {
                if (a.Count == 0) break;
                int x = a.First();
                if (x % 2 == 0)
                {
                    Console.WriteLine(Thread.CurrentThread.Name + " " + x);
                    a.RemoveFirst();
                }
            }
        }
    }
}

Any improvements? | Printing all even and odd numbers with threads | c#;multithreading | This looks very good to me, but I am not very advanced at C# myself. I can give you some tips, though.

First, you should not use ambiguous names like t1 and t2. 
You should use more descriptive names like EvenThread and OddThread instead.

Second, you can shorten this:

while (true)
{
    if (a.Count == 0) break;

into this:

while (Numbers.Count > 0) // For PrintOdd()

and this:

while (Numbers.Count > 1) // For PrintEven()

Also, you could look into other forms of mutual exclusion techniques, such as semaphores and mutexes, instead of just using an Object.

This is an implementation with a SemaphoreSlim (only code shown is changed):

using System.Threading;

class Program
{
    static SemaphoreSlim ThreadLock = new SemaphoreSlim(1,1);
    static Thread Odd;
    static Thread Even;
    static LinkedList<int> Numbers = new LinkedList<int>();

    static void Main(string[] args)
    {
        for (int i = 0; i < 10; i++)
        {
            Numbers.AddLast(i);
        }
        Odd = new Thread(PrintOdd);
        Even = new Thread(PrintEven);
        Odd.Name = "Odd";
        Even.Name = "Even";
        Odd.Start();
        Even.Start();
        Odd.Join();
        Even.Join();
        Console.WriteLine("Done!");
        Console.Read();
    }

    private static void PrintOdd()
    {
        while (Numbers.Count > 0)
        {
            ThreadLock.Wait();
            int x = Numbers.First();
            if (x % 2 != 0)
            {
                Console.WriteLine(Thread.CurrentThread.Name + " " + x);
                Numbers.RemoveFirst();
            }
            ThreadLock.Release();
        }
    }

    private static void PrintEven()
    {
        while (Numbers.Count > 1)
        {
            ThreadLock.Wait();
            int x = Numbers.First();
            if (x % 2 == 0)
            {
                Console.WriteLine(Thread.CurrentThread.Name + " " + x);
                Numbers.RemoveFirst();
            }
            ThreadLock.Release();
        }
    }
}
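For comparison (my addition, not from the review), the same front-of-queue hand-off can be written in a few lines of Python with one lock guarding a shared deque; results are collected in a list so the ordering can be checked:

```python
# Two workers, one per parity, each popping the front element only when it
# matches its parity. One lock plays the role of the C# lock(obj)/SemaphoreSlim.
import threading
from collections import deque

def print_in_sequence(n: int = 10) -> list:
    numbers = deque(range(n))
    out = []
    guard = threading.Lock()

    def worker(parity):
        while True:
            with guard:
                if not numbers:
                    return                      # queue drained: done
                if numbers[0] % 2 == parity:
                    out.append(numbers.popleft())  # our turn: take the front

    threads = [threading.Thread(target=worker, args=(p,)) for p in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

print(print_in_sequence(10))  # [0, 1, 2, ..., 9], always in order
```

Like the reviewed C# code, this busy-waits when the front element belongs to the other thread; it is a demonstration of the hand-off, not an efficient design.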
_computergraphics.1755 | I am interested in duplicating a figure (shown below, ch 1 fig 1.21) in the book "Algorithmic Beauty of Plants". The book is available here: http://algorithmicbotany.org/papers/#abop

This image appears in several resources, but I have been unable to find the exact rules for the axial system that produced it. In the book, this figure is presented in the context of L-systems and is referenced in the text as follows:

"Of special interest are methods proposed by Horton [70, 71] and Strahler, which served as a basis for synthesizing botanical trees [37, 152] (Figure 1.21)."

I am unable to find a copy of the PhD thesis (ref 37), and ref 152 does not produce this figure. Performing a Google image search with this image points to material related to the book, such as slides. Has anyone here reproduced this figure? | What exact algorithm and parameters reproduce L-system plant growth figure in Algorithmic Beauty of Plants | untagged | null
_unix.279793 | When I run the ping command it outputs information for each ping and, in the end, when I kill the process, it outputs some overall statistics:

--- 192.168.0.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9013ms
rtt min/avg/max/mdev = 1.275/2.596/7.246/1.870 ms

Is there a way to see these statistics while ping is running, without having to kill it? I'm particularly interested in continuously monitoring the packet loss statistic, because you need to wait a bit to get accurate numbers for that. | Can I see the summed up results of the ping command while it is running? | linux;ping | If you press CTRL+\ while ping is running, it will display the stats. See: Check ping statistics without stopping
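Under the hood (my sketch, not from the answer): ping can do this because it installs a handler for SIGQUIT, the signal that CTRL+\ sends. The same pattern in Python, with made-up counters standing in for ping's real ones:

```python
# Install a SIGQUIT handler that reports interim stats instead of letting the
# default action (core dump) kill the process; then simulate pressing CTRL+\.
import os
import signal

stats = {"transmitted": 10, "received": 9}   # hypothetical running counters

def format_stats() -> str:
    loss = 100 * (1 - stats["received"] / stats["transmitted"])
    return f"{stats['transmitted']} packets transmitted, {stats['received']} received, {loss:.0f}% packet loss"

def on_quit(signum, frame):
    print(format_stats())                    # report, but keep running

signal.signal(signal.SIGQUIT, on_quit)       # CTRL+\ now reports instead of killing
os.kill(os.getpid(), signal.SIGQUIT)         # simulate pressing CTRL+\
```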
_codereview.14061 | I needed to write a function today in JavaScript that would return all elements based on a given attribute, e.g. retrieve all elements that have an id attribute in them. The function I wrote for this is as follows:

function getElements(attrib) {
    // get all dom elements
    var elements = document.getElementsByTagName("*");

    // initialize array to put matching elements into
    var foundelements = [];

    // loop through all elements in document
    for (var i = 0; i < elements.length; i++) {
        // check to see if element has any attributes
        if (elements[i].attributes.length > 0) {
            // loop through element's attributes and add it to array if it matches attribute from argument
            for (var x = 0; x < elements[i].attributes.length; x++) {
                if (elements[i].attributes[x].name === attrib) {
                    foundelements.push(elements[i]);
                }
            }
        }
    }

    return foundelements;
}

Looking at this, I am sure it could be written a great deal better. Any feedback would be much appreciated! | JavaScript function to get DOM elements by any attribute | javascript;dom | querySelectorAll

First off, if you're only dealing with relatively modern browsers (basically anything above IE7), you can use querySelectorAll, which is the fastest and easiest method to go about this:

document.querySelectorAll('[' + attrib + ']');

Here's the fiddle: http://jsfiddle.net/rc6Pq/

Sizzle

If you're stuck having to support IE7 and below, then you might as well just include the Sizzle selector engine, since you're bound to be using some additional selectors in the future. Once you include the Sizzle script in your page, you could then just use it in a similar fashion to the native querySelectorAll:

Sizzle('[' + attrib + ']');

Here's the fiddle: http://jsfiddle.net/rc6Pq/1/

jQuery

If you're already using jQuery on the page, you don't have to use Sizzle separately, since jQuery has Sizzle incorporated within it. If that's the case, just use this:

$('[' + attrib + ']').get();

Here's the fiddle: http://jsfiddle.net/rc6Pq/2/
_cstheory.20861 | Everybody knows there exist many decision problems which are NP-hard on general graphs, but I'm interested in problems that are NP-hard even when the underlying graph is a path. So, can you help me collect such problems? I've already found a related question about NP-hard problems on trees. | NP-hard problems on paths | graph theory;np hardness | null
_webmaster.51647 | That was my question, in the form of a question.Fortunately, I've already whipped up some code and can only point you to errors in the html.. although I don't know what they are. Here's what I've got:http://cssdesk.com/uDaLgWhen the extra navigation links (which would normally be characterized by line breaks) are laid out in my navigation bar, my html puts a gap in the padding on the right side. I have highlighted the background in case you're overly skeptical that it is in fact my css. Of course, you can also go and view the css for yourself. | How do I get rid of unwanted gaps in a horizontal navigation bar? | html | There is always a gap between inline elements like spans. There are a couple of ways to get rid of the space. You can put all the spans on one line or you can do something like this to hack it: <span><a href=#>Link</a></span><!-- --><span><a href=#>Link</a></span><!-- --><span><a href=#>Link</a></span> |
_softwareengineering.163 | Are there any great programming or software development books that are language agnostic? Why should I read it? | Language agnostic programming books | books;language agnostic | null |
_unix.129279 | I have a Plantronics 590 bluetooth headset (the type probably does not matter, but I have no alternative to test).Using the old 3.02 I was able to use this via pulseaudio.With the current 3.11-amd64 kernel this does no longer work.I am still able to pair and to connect to the headset, using HSP profile.I get a beep on the headset to confirm connection.Unfortunataly there is neither input nor output in pulseaudio(with the old kernel pavucontrol shot the headset).This is most likely related to the kernel or a module.I am using debian testing (jessie).The current version of linux-image-amd64 is 3.13+56The current version of bluez is 4.101-4.1The current version of pulseaudio and pulseaudio-module-bluetooth is 5.0-2The current version of alsa-base is 1.0.25+3I also tried debian stable (wheezy), 32bit, not working, but different:I can connect the device, it appears in pavucontrol but sound does not work.The current version of linux-image-686-pae is 3.2+46The current version of bluez is 4.99-2The current version of pulseaudio and pulseaudio-module-bluetooth is 2.0-6.1The current version of alsa-base is 1.0.25+3~deb7u1 | Bluetooth headset profile not working with recent kernel | linux kernel;audio;bluetooth | null |
_softwareengineering.226613 | Some quick background - we don't have PMs or upper management breathing down our necks about status of features, etc, as we almost always deliver ahead of time and have built up a high level of trust with them. In other words, we have a huge amount of flexibility as far as process goes. We do very well with our current process, but we feel there can be some improvement.We have a small team (3 devs, 1 tester) and everyone on the team is senior level and can deliver large pieces of functionality, and usually does so on an individual basis. Sometimes two people work on the same story/task, but since we are all very familiar with the codebase(s), we typically handle things by ourselves and consult/collaborate when needed.We roll to a live site and have the ability to roll on a daily basis, which someone on our team typically does. On average, I'd say an individual developer rolls his own code 3 times a week.We have been doing scrum with 2 week sprints, but from my experience scrum was more advantageous when we needed the majority of the team to work on the same features at the same time (swarm-type stuff), and when needing to communicate out to external teams. We currently don't have either of those needs, so we're re-evaluating our process.What it seems like we're moving toward is a model where each team member exists in his/her own sprint (with optional teammates being less common), which lasts anywhere from 1-3 days. We don't have a requirement that all sprints have to end on the same days, etc, so in theory we could pull this off. My question is, is there something better than scrum that models this type of development process?Oh, I almost forgot. We're moving away from TFS to use Git, and while I have experience with Git, most of the team members don't. Since it's a paradigm shift for some on the team, I'm wondering if we can use the change to our advantage somehow process-wise. Thoughts? 
| What development process encourages frequent releases (rolling code to a live site), as well as developer individuality | development process;teamwork | If you take a step back and look at why 'formal' development processes such as Waterfall, Extreme, and Scrum evolved, it's because many development houses struggle with developing software products (as opposed to 'software programs') that meet the business and customers' requirements. Every published process is an attempt to provide a framework to help achieve these goals. Some frameworks (e.g. Rational RUP) are large and prescriptive; others (e.g. Agile) are lightweight and provide guidelines. What you have evolved may not be Scrum, but it is a form of Agile. It does not need a name to work. It appears to be working for you and does not appear to be broken, so do not change what you are doing because someone says they have a different way. What I suggest you do is look at your process and ensure that it is more than luck that is keeping it working. Document how you work now so it's repeatable - pretend another team wants to adopt what you are doing and write it up. Look for weaknesses - for instance, a team of 3 could become a team of two overnight, and feasibly a team of one. Is the process robust against this kind of change? Can you bring a new member into the team with little process change? Does it scale - can you double the size of the team and still get the same results? If not, ask whether that is needed, and document the decision.
_cs.12624 | Or at least generate a set of strings that one NFA accepts, so I can feed it into the other NFA. If I do a search through every path of the NFA, will that work? Although that will take a long time. | Is there a way to test if two NFAs accept the same language? | algorithms;regular languages;finite automata | The decision problem is PSPACE-complete as Shaull noted.

However, it turns out that in practice it is often possible to decide NFA equivalence reasonably quickly. Mayr and Clemente (based on experimental evidence) claim that the average-case complexity scales quadratically. Their techniques rely on pruning the underlying labelled transition system via local approximations of trace inclusions.

Just like SAT is NP-complete in a worst-case analysis, yet often turns out surprisingly tractable for real-world instances, it therefore seems likely that NFA equivalence can be decided efficiently for many real-world instances.

Richard Mayr and Lorenzo Clemente, Advanced automata minimization, POPL 2013, doi:10.1145/2429069.2429079 (preprint)
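For small machines, the textbook procedure is perfectly usable despite the worst-case bound. Here is a Python sketch (mine, not from the answer): determinize both NFAs on the fly via the subset construction and breadth-first search the product of the two resulting DFAs; the languages differ iff some reachable pair of subset-states disagrees on acceptance.

```python
# Worst-case exponential (the problem is PSPACE-complete), but fine for
# small automata without epsilon transitions.
from collections import deque

def nfa_equal(nfa1, nfa2, alphabet):
    """Each NFA is a triple (starts, accepts, delta), where delta maps
    (state, symbol) to a set of successor states."""
    def step(states, sym, delta):
        return frozenset(q for s in states for q in delta.get((s, sym), ()))

    start = (frozenset(nfa1[0]), frozenset(nfa2[0]))
    seen, todo = {start}, deque([start])
    while todo:
        s1, s2 = todo.popleft()
        if bool(s1 & nfa1[1]) != bool(s2 & nfa2[1]):
            return False                       # a distinguishing word exists
        for a in alphabet:
            nxt = (step(s1, a, nfa1[2]), step(s2, a, nfa2[2]))
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return True

# Two machines for "odd-length strings over {a}", one with a redundant state:
m1 = ({0}, {1}, {(0, "a"): {1}, (1, "a"): {0}})
m2 = ({0}, {1, 2}, {(0, "a"): {1, 2}, (1, "a"): {0}, (2, "a"): {0}})
print(nfa_equal(m1, m2, "a"))  # True
```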
_cs.35195 | Define the language

$\qquad R = \{x \in \{0,1\}^\ast \mid C(x) \ge |x| \}$

where $C(x)$ is the Kolmogorov complexity of $x$ and $|x|$ denotes the length of $x$. Prove that $R$ is co-recursively enumerable (co-r.e.).

So far, I have the following. In order to prove the above, we need to show that

$\qquad R^c = \{x \in \{0,1\}^\ast \mid C(x) \lt |x| \}$

is r.e.; that is, there exists a program $\pi$ such that the universal TM $U$ with argument $\pi$ outputs $x$ and $|\pi| \lt |x|$ (where $U(\pi) = x$ and $|\pi| \lt |x|$). We start by enumerating all strings in $\{0,1\}^\ast$. I have no idea where to go from this point onward. | Show that the set of programs whose Kolmogorov complexity is smaller than their length is recursively enumerable | computability;turing machines;semi decidability;kolmogorov complexity | null
_unix.180669 | I am using Cloud9 for Rails development and it uses an Ubuntu environment. In the documentation about using the PostgreSQL database, it says:Connect to the service:$ sudo sudo -u postgres psql What is the meaning of typing sudo twice?https://docs.c9.io/setting_up_postgresql.html | What is the meaning of sudo sudo? | sudo | sudo -u postgres allows you to impersonate the postgres user when running the command. Your user probably doesn't have that privilege, but root's does.So the first sudo gives you root's privileges and the second sudo allows you (as root) to sudo -u to postgres allowing the command to be run as the postgres user. |
_unix.116304 | According to the Debian installation manual, section 4.3, the hybrid installation ISO image can be easily copied on a USB key this way: # cp debian.iso /dev/sdX# syncBut according to my previous question about sync, it looks like it only works to flush file system buffers. Then, why would sync work in the above command which does not involve a file system? | Why does the Debian installation manual suggest to do sync after raw copy of image file to USB key? | debian;system installation;usb drive;disk image | null |
_softwareengineering.30103 | I was trying to do some C++ coding that can send files from my laptop to my webmail account. It took me about 10 minutes to realize that there is no easy way to do this. Look into these links: GMAIL: http://code.google.com/apis/gmail/oauth/code.htmlYAHOO: http://developer.yahoo.com/mail/I am trying to understand why PHP or Python or Java support exist but no C++. No flame wars, I love all programming languages and their creators equally but I am curious to understand the logic behind such implementation choices. | Why no developer API in C++ for Google or Yahoo mail? | c++;email | C++ has its strengths and weaknesses. One weakness is that its library is very thin. Email involves a lot of protocols, HTTP/S, SMTP, POP3, IMAP, etc. I don't see how you can support these protocols easily in a standardized way with C++. |
_unix.63680 | I'm trying to copy a bunch of files named folder.jpg into a folder. The problem is, because all the files are named the same thing, I need to rename them in the process. I know I can probably do it with sed, but I'd like to rename them to the name of (part of) the parent folder. Here is what I got just to find and copy the files:

cp $(find . -iname folder.jpg) .albumart/

The folder structure is ./artist/artist.year.album/folder.jpg and I'd like to use the parent folder (or just part of it) to name the file. Can someone help me with a one liner to accomplish the task?

To make things even trickier, some folders have one more level of CD1 and CD2 that I would like to ignore if they are present (e.g. ./artist/artist.year.album/CD1/folder.jpg) | Use find + sed + cp to find files and copy them to a directory with a different name | sed;find;cp | Assuming you have bash, this version simply takes your folder structure (e.g. ./foo/bar/baz/folder.jpg) and replaces all the slashes with underscores (so you get foo_bar_baz_folder.jpg):

find . -iname folder.jpg -exec bash -c 'for x; do x=${x#./}; cp -i "$x" .albumart/"${x//\//_}"; done' _ {} +

Note that no matter what you do, any time you move files from multiple locations into the same destination, there is always a chance of a name collision.
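For anyone who finds ${x//\//_} hard to read, here is the same rename-while-copying idea as a Python sketch (mine, not from the answer; note that rglob is case-sensitive, unlike find -iname, and name collisions remain possible whenever two source paths flatten to the same name):

```python
# Flatten each matching file's relative path into its destination file name,
# joining the path components with underscores.
import shutil
from pathlib import Path

def flatten_name(path: Path, root: Path) -> str:
    """music/artist/album/folder.jpg -> artist_album_folder.jpg (root=music)"""
    return "_".join(path.relative_to(root).parts)

def collect_covers(root: str = ".", dest: str = ".albumart",
                   pattern: str = "folder.jpg") -> None:
    root_path, dest_path = Path(root), Path(dest)
    dest_path.mkdir(exist_ok=True)
    for src in root_path.rglob(pattern):
        shutil.copy2(src, dest_path / flatten_name(src, root_path))
```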
_webmaster.89627 | I want to create a website that will grab content from other news websites using their RSS and insert it in my database. I am only going to show the title and an excerpt with a link to the original post.Is this a good idea? Will Google ban my website? Is it bad for SEO and is it against Google AdSense rules? | Copying content from other websites and linking to the original post | seo;google adsense;copyright | null |
_unix.287540 | I'm trying to filter one array from another array. That is, I'm trying to create a third array with a logical not-intersection. The best I can tell, it appears this block of code never matches, and found remains 0:

found=0
...
if [ "$flag" = "$filtered" ]; then
    found=1
fi

I've tried using == with the same result. I also tried the X trick, but that did not work either (does it even apply here?): if [ "X$flag" = "X$filtered" ].

I'm restricted to Bash 3. I'm using Bash because I somewhat know it. I'm restricted to 3 because the script runs on OS X, too. Because of Bash 3, I think I'm missing many useful functions, like HashMaps.

Why are the strings not matching?

Here is the relevant snippet. CXXFLAGS can be set by the user in his/her environment. I'm trying to remove flags that we explicitly test in our test script, like -DDEBUG, -DNDEBUG, and optimizations like -O0 and -O1.

# Respect user's preferred flags, but filter the stuff we explicitly test
FILTERED_CXXFLAGS=(-DDEBUG, -DNDEBUG, -O0, -O1, -O2, -O3, -Os, -Og)

# Additional CXXFLAGS we did not filter
RETAINED_CXXFLAGS=()

if [ ! -z "CXXFLAGS" ]; then
    TEMP_CXXFLAGS=$(echo "$CXXFLAGS" | sed 's/\([[:blank:]]*=[[:blank:]]*\)/=/g')
    IFS=' ' read -r -a TEMP_ARRAY <<< "$TEMP_CXXFLAGS"
    for flag in "${TEMP_ARRAY[@]}"
    do
        echo "Flag: $flag"
        found=0
        for filtered in "${FILTERED_CXXFLAGS[@]}"
        do
            echo "Filtered: $filtered"
            if [ "$flag" = "$filtered" ]; then
                echo "Found: $flag"
                found=1
            fi
        done
        echo "Found: $found"
        if [ "$found" -eq 0 ]; then
            echo "Retaining $flag"
            RETAINED_CXXFLAGS+=("$temp")
        else
            echo "Discarding $temp"
        fi
    done
fi

Here's a trace with the echo's in place. 
The test data was simply export CXXFLAGS="-DNDEBUG -g2 -O3 -mfpu=neon"

Flag: -DNDEBUG
Filtered: -DDEBUG,
Filtered: -DNDEBUG,
Filtered: -O0,
Filtered: -O1,
Filtered: -O2,
Filtered: -O3,
Filtered: -Os,
Filtered: -Og
Found: 0
Retaining -DNDEBUG
Flag: -g2
Filtered: -DDEBUG,
Filtered: -DNDEBUG,
Filtered: -O0,
Filtered: -O1,
Filtered: -O2,
Filtered: -O3,
Filtered: -Os,
Filtered: -Og
Found: 0
Retaining -g2
Flag: -O3
Filtered: -DDEBUG,
Filtered: -DNDEBUG,
Filtered: -O0,
Filtered: -O1,
Filtered: -O2,
Filtered: -O3,
Filtered: -Os,
Filtered: -Og
Found: 0
Retaining -O3
Flag: -mfpu=neon
Filtered: -DDEBUG,
Filtered: -DNDEBUG,
Filtered: -O0,
Filtered: -O1,
Filtered: -O2,
Filtered: -O3,
Filtered: -Os,
Filtered: -Og
Found: 0
Retaining -mfpu=neon | Strings from two distinct arrays not matching? | bash;shell script;text processing;string | They don't match because FILTERED_CXXFLAGS has commas and ${TEMP_ARRAY[@]} does not:

Flag: -DNDEBUG
Filtered: -DDEBUG,
Filtered: -DNDEBUG,

If the commas are supposed to be there, then replace:

if [ "$flag" = "$filtered" ]; then

with:

if [ "$flag" = "${filtered%%,}" ]; then

Alternatively, if the commas are not supposed to be there, then the issue is with:

FILTERED_CXXFLAGS=(-DDEBUG, -DNDEBUG, -O0, -O1, -O2, -O3, -Os, -Og)

One can use declare -p to see what value bash has given to a variable:

$ declare -p FILTERED_CXXFLAGS
declare -a FILTERED_CXXFLAGS='([0]="-DDEBUG," [1]="-DNDEBUG," [2]="-O0," [3]="-O1," [4]="-O2," [5]="-O3," [6]="-Os," [7]="-Og")'

One can see that the commas are included in the value of each element. While many languages require array elements to be separated by commas, Unix shell does not. Instead it treats them as part of the value of the array elements. Thus, replace the above definition with:

FILTERED_CXXFLAGS=(-DDEBUG -DNDEBUG -O0 -O1 -O2 -O3 -Os -Og)
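To see the intended logic without the quoting pitfalls, here is the same filter as a Python sketch (mine, not from the answer): an order-preserving set difference.

```python
# The flags we explicitly test and therefore want to strip from the
# user's CXXFLAGS; everything else is retained, in its original order.
FILTERED = {"-DDEBUG", "-DNDEBUG", "-O0", "-O1", "-O2", "-O3", "-Os", "-Og"}

def retained_flags(cxxflags: str) -> list:
    """Drop the flags we explicitly test; keep everything else, in order."""
    return [f for f in cxxflags.split() if f not in FILTERED]

print(retained_flags("-DNDEBUG -g2 -O3 -mfpu=neon"))  # ['-g2', '-mfpu=neon']
```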
_unix.8450 | I would like to configure squid in such way, so that only specific (public) ip (reverse proxy), could connect to the server, but I don't know how... can someone tell me how to do this? | squid (reverse proxy) configuration | firewall;ip;proxy;squid | In Squid this is done by specifying the public IP address in http_port, and using loopback address for the web server and Apache may be configured like in httpd.conf to listen on the loopback address:Port 80BindAddress 127.0.0.1 |
_unix.185728 | When I did the command : wget -r ftp://user:[email protected]/It's missing any sub-sub-directories. Does recursive FTP have a limit? | Why doesn't wget -r get all FTP subdirectories? | wget;ftp | How many level deep are you getting? If you need more than 5, you need to provide the -l option.man wget -r --recursive Turn on recursive retrieving. The default maximum depth is 5. -l depth --level=depth Specify recursion maximum depth level depth. -m --mirror Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing. |
_softwareengineering.179047 | I notice two types of design used in web applications, some with a particular subdomain for users contents, and some with same URL structure for all the accounts.Ex: unique.domain.com and another_unique.domain.com for subdomains for sites like blogspot, wordpress, basecamp etc.while in the other approach domain.com/action1 and domain.com/action2 the content is shown according to the user logged in, but the URL is same for every user.What are main differences between both of these kind of design? | What are the advantages and disadvantages of having a subdomain for each user account? | web development;web applications | The server-side differences vary quite a bit from platform to platform.In most cases, however, it is easier to write an application that assumes it runs in the root (at least in PHP and ASP.NET it is) and then set up separate sites/virtual directories for each.From a user's perspective, telling them to go to mysite.example.com is typically easier to remember than www.example.com/mysite. There is no reason this must be so, but most people have it more or less ingrained to ignore everything after the / or, for that matter, before the main domain. As a more esoteric example, if you were to tell end users to go to ww2.host.example.com/mysite what most people will actually remember is example.com, their minds discarding the trash.The downside is, of course, that provisioning subdomains is kind of manual out of the box. You can script it up, of course, but in IIS or Apache, as well as most domain registrars it tends to be done by hand. So, you have to go to the effort of automating that (though I am sure there are some extant tools that do the trick). |
_webmaster.260 | If I register a really good .com domain name, should I also register: alternate spellings; mis-spellings; hyphenated versions; other variations (iExample.com, eExample.com, myExample.com, etc.); other TLDs (.org, .net, .biz, .info); international TLDs? Which definitely, which maybe and which no? How do you know where to draw the line? | Should I preemptively register alternate domain names? | domains;registration | That depends very much on how important the site you are talking about is going to be. It is not unknown for spammers and such to buy those alternative domain names because people tend to make spelling mistakes and then land on those pages. Of course this will only happen if your site is really huge, otherwise nobody will bother. Some slight variations so users have less difficulty remembering your site (hyphenated for example) or international TLDs if you have a site in several languages are reasonable extra domains to register. |
_datascience.14193 | I'm using a neural network to analyze item choices made by players in a computer game. In the game players can choose between 0 and 7 items. Right now I'm struggling with how I can evaluate my data. Tensorflow provides a nice method for getting the k highest values: https://www.tensorflow.org/api_docs/python/tf/nn/top_k tf.nn.top_k(input, k=1, sorted=True, name=None) The input here would be the prediction by my neural network. The output I get by applying top_k to the prediction would then be compared to applying top_k to the correct_output (the one the neural net should have had), and by doing this multiple times and averaging I get the accuracy that I want to have. The problem I'm running into is that k should depend on the amount of 1's in the correct_output. I am lost as to how I can achieve this. Edit: correct_output (if I understand how this works correctly) should already be a tensor. It is loaded from a .pickle file and at the beginning of the code it is prepared as follows: correct_output = tf.placeholder('float') As to what it looks like: it is simply a list of given length of 1's and 0's. | Using tensorflow to test a variable amount of correct labels | neural network;tensorflow;evaluation;multilabel classification | null |
_unix.11951 | I'm currently working on an assignment in a cluster programming class. The class has been given an account on the cluster, so we can ssh in and do our work. The problem is that the one account is shared among everyone in the class. Each student just makes their own directory and works within it.Obviously a problem is that students can just look at each others work and plagiarize. I don't want people to see my work until after the assignments have been submitted.There are no version control systems on the cluster, so I can't just pull a repository from my own machine and work on the assignment, then push it and remove what's on the cluster each time I work.What is the best way to prevent others from seeing my work? Ordinarily I'd just work on my own machine and then upload to the server when it is due. But because I don't have my own cluster, we all need to actually use the one account.Yes, believe it or not this is actually a real world problem I am facing - not a hypothetical. | Hiding work in account with multiple users | security | The real solution to this problem is to talk to your instructor to give you separate accounts, or to change the assignments to be group assignments. If he/she can't or won't do that then I would just ignore any plagiarizing attempts from other students. You don't lose anything when they copy your work, they lose.That said, here is a way to keep your source code virtually inaccessible for anyone else.you@local$ ssh shared@cluster gcc -x c -o yourdir/secret - < source.cNote the dash at the end of the gcc command. That means gcc will read the source from stdin. This will compile source.c from your local machine to yourdir/secret on the cluster. The secret source code will never exist as a real file on the cluster. It will only exist as a stream in some buffer (in the sshd process, I assume).If your code is not written in C then your will have to change the c in the -x c option. 
See here for more information about that.Other students can still grab your compiled file and decompile that. To minimize even that risk you can delete the file right after compiling and executing.you@local$ ssh shared@cluster gcc -x c -o yourdir/secret - && yourdir/secret ; rm yourdir/secret < source.cIf you are really paranoid you should make sure that you are executing the real gcc. Other students might write a wrapper around gcc, which saves the source code before compiling it, and place that wrapper in your path. You should also execute the real rm.you@local$ ssh shared@cluster /usr/bin/gcc -x c -o yourdir/secret - && yourdir/secret ; /bin/rm yourdir/secret < source.c |
_webmaster.24464 | For the purposes of this discussion, consider the following scenario:You have written a fair amount of quality, unpublished content that third-party websites wish to license from you and white-label on their own sites (i.e. you are ghostwriting for these third parties). In other words, there will be no attribution or links referring to your name, email address, or website at all; it will appear to readers and search engines as if the article were written by the third party instead of by you.Consider that 10,000 of these third-party sites were created independently, without any association to each other (they are absolutely NOT part of a blog farm network). From your collection of 100 articles, you plan to license the white-label publishing right of each article to 100 of these sites (again, on a per-article basis). So, while each of your 100 articles will be published 100 times, it's fair to assume that no site will host all 100 articles. The most any one site will host is 30 articles.Now for the question: Given this scenario, and that your intent is not to game search engines at all (but rather to provide quality content on established websites), do you need to include no-index meta tags on the pages that display your content?Clearly, you wouldn't want your customers to receive the wrath of the Google Duplicate Content Penalty. However, at the same time, if they can benefit from any Google juice deriving from your content, then that would be a nice benefit.I suppose a more concise question may be, Would this behavior cause Google to penalize the third-party websites? Or, would Google simply choose one of the sites as hosting the best version of the content on a per-article basis? | Is no-index necessary with extensive, white-labeled syndication? | seo;google;search engines;duplicate content | null |
_vi.8257 | I'd like to be able to view the last n commands, similar to the history command in bash, and then be able to execute the nth command similar to the way it is done in bash by using !<command number> is there an equivalent to this in vim? | Equivalent of !n in bash in vim for ex commands? | ex mode;command history | null |
_vi.12227 | I am running Vim version 8.0.563 on Solaris and when I run Vim, the CtrlV block selection works as expected, I type ^V and move the cursor and a block of text is highlighted by columns. So far, so good.Meanwhile, for another user running Vim from my directory with my .vimrc file, this does not work. The ^V is ignored and moving the cursor, moves the cursor but nothing is highlighted. Regular v, block mode works but the ^V column block mode is broken.I tried entering:vim --noplugin -u /dev/nulland it acts the same. I checked the shared libraries and they are the same. I tried clearing out (almost) all the environment variables, still no joy.Does anybody have any good ideas of what is wrong or something else to try?Thank you in advance. | Vim ^V Visual Block mode not working | visual mode;solaris | null |
_unix.244607 | I have a list of strings and want to find and delete lines containing these strings in a file. A short example of the list of strings is listed as below.File S1Mo 32,332Mo 7,262Mo 7,272Mo 7,28And a short example of the file is as follows.File A1Mo 32,33 I love you.2Mo 7,26 I like you.Hi 1,2 This is not so fun.Ab 3,4 I am stupid.My expected output is like this:Hi 1,2 This is not so fun.Ab 3,4 I am stupid.I tried to use the following command, unfortunately I failed:grep -f file S file A|awk '{print $0}'I searched the related question, but most of them focus on deleting the line with one specific pattern. I Does anybody know how to deal with this issue? Thx. | Find and delete lines containing multiple patterns in a file | sed;awk;grep;string | null |
_unix.15453 | Is it possible to use an environment variable in a tmux.conf file? I am trying to set a default-path to an environment variable. Currently what I am trying is:set -g default-path $MYVARfurther I would like to check if $MYVAR is set already so I could do:if($MYVAR == ) set-environment -g MYVAR /somepath/Any ideas? | using environment variables in tmux.conf files | environment variables;tmux | Yes it looks like it is possible to expand shell variables in .tmux.conf file It looks like it's not required, but a good idea to quote them, esp. I was able to do this successfully with the status bar options just now.# In ~/.tmux.conf:set -g status_left $MYVAR etc: $ export MYVAR=Shell stuff$ tmuxI don't know about any 'if' or other control structures in the config, but there might be. |
_codereview.142453 | This is the first program I've ever made in Python, and I never really studied the language (just looked at bits of code online) so I'm sure the performance is less than optimal.

The Objective: At my college, security alerts are occasionally sent to our phones, and sometimes posted to our subreddit for discussion. To automate this, I set up my phone to forward relevant texts to an email address, then take the message from the email address and post to Reddit. The script scans my inbox every 30 seconds for unread emails and posts the newest one if it finds anything.

Issues:
- I wanted to use something like twilio to avoid the email address part, but I can only sign up for Alerts with one phone number, and I still want to receive alerts on my phone.
- Too many points of failure. My phone could be off, my computer (script host) could be off/asleep, text forwarding could fail, gmail could be down, etc.
- Scanning the inbox every 30 seconds seems needlessly excessive and I wish there was a better way to do it.
- There are a lot of while loops, but I don't know any other way to catch exceptions.

I'm planning on putting the script onto a Raspberry Pi 3 later on so there's something dedicated running it, but I'd like to optimize the code as much as possible before I do that. I also had to remove the OAuth codes for security reasons.

import praw
import imaplib
import email
import time
import getpass
import RUAlerts
from datetime import datetime

app_id = 'xxxxxxxxxxxx'
app_secret = 'xxxxxxxxxxxx'
app_uri = 'xxxxxxxxxxxx'
app_scopes = 'account creddits edit flair history identity mysubreddits privatemessages read report save submit subscribe vote'
app_ua = 'xxxxxxxxxxxx'
app_account_code = 'xxxxxxxxxxxx'
app_refresh = 'xxxxxxxxxxxx'

def login():
    r = praw.Reddit(app_ua)
    r.set_oauth_app_info(app_id, app_secret, app_uri)
    r.refresh_access_information(app_refresh)
    return r

mail = imaplib.IMAP4_SSL('imap.gmail.com')

while True:
    try:
        emailpass = getpass.getpass('Please enter the password for xxxxxxx@xxxxx: ')
        mail.login('[email protected]', emailpass)
        break
    except imaplib.IMAP4.error:
        print('Incorrect password')

mail.select("inbox")

while True:
    try:
        r = RUAlerts.login()
        while 1:
            result, response = mail.uid('search', None, "(UNSEEN)")
            unread_msg_nums = response[0].split()
            result, data = mail.uid('search', None, "ALL")
            latest_email_uid = data[0].split()[-1]
            result, data = mail.uid('fetch', latest_email_uid, '(RFC822)')
            raw_email = data[0][1]
            email_message = email.message_from_bytes(raw_email)
            if len(unread_msg_nums) > 0:
                print('\t' + str(datetime.now().strftime("%Y-%m-%d %H:%M:%S")) + ' - Something\'s wrong!')
                for part in email_message.walk():
                    if part.get_content_type() == 'text/plain':
                        Alert = part.get_payload()
                while True:
                    try:
                        r.submit(subreddit='xxxxxxxxxxxx', title=Alert, text=str(Alert) + "\n \n ******** \n \n*^^I ^^am ^^a ^^bot. ^^For ^^any ^^questions, ^^comments, ^^or ^^concerns, ^^please ^^email [^^xxxxxxx@xxxxx](mailto://xxxxxxx@xxxxx)*")
                        print('\t' + str(Alert), end=' ')
                        break
                    except praw.errors.ExceptionList as e:
                        print('\tReddit error!' + str(e) + '\tRetrying in 5 minutes - ' + str(datetime.now().strftime("%Y-%m-%d %H:%M:%S")))
                        ##mail.uid('STORE', latest_email_uid, '-FLAGS', '\SEEN')
                        time.sleep(300)
            else:
                print(str(datetime.now().strftime("%Y-%m-%d %H:%M:%S")) + ' - All clear on the RU front')
            time.sleep(30)
        break
    except:
        print('\t' + str(datetime.now().strftime("%Y-%m-%d %H:%M:%S")) + ' - No connection! Retrying in 5 minutes')
        time.sleep(300)

 | Reddit bot that posts text messages to subreddit | python;beginner;python 3.x;email;reddit | My main comment is that you should separate the different concerns of your code into descriptive functions. This will make it a lot more readable (and re-usable).

One comment before I get started: your login function, which looks like it would log you in with the Reddit API, seems to be unused at the moment. I guess this is a copy&paste error from censoring?

Your first concern is to log in with Gmail to get the mail object. This can be pasted directly into a separate function:

def mail_login():
    mail = imaplib.IMAP4_SSL('imap.gmail.com')
    while True:
        try:
            emailpass = getpass.getpass(
                'Please enter the password for xxxxxxx@xxxxx: ')
            mail.login('[email protected]', emailpass)
            break
        except imaplib.IMAP4.error:
            print('Incorrect password')
    mail.select("inbox")
    return mail

The second task, which is repeated quite often, is to print a message with the current time-stamp preceding it:

def log(text):
    print('\t{:%Y-%m-%d %H:%M:%S} - {}'.format(datetime.now(), text))

Note that I used the custom format options of str.format here.

Another task is to post an alert to Reddit, once it is found:

def post_alert(alert, r):
    alert_text = """{}

 ********

*^^I ^^am ^^a ^^bot. ^^For ^^any ^^questions, ^^comments, ^^or ^^concerns, ^^please ^^email [^^xxxxxxx@xxxxx](mailto://xxxxxxx@xxxxx)*"""
    while True:
        try:
            r.submit(subreddit='xxxxxxxxxxxx', title=alert,
                     text=alert_text.format(alert))
            print('\t{}'.format(alert), end=' ')
            break
        except praw.errors.ExceptionList as e:
            log('Reddit error! {}'.format(e))
            time.sleep(300)

I build the alert text first and filled it with str.format and used the log function.

The second to last task is to search in your emails for new messages and yield all alert texts:

class ShortTimeOut(Exception):
    pass

def search_for_alerts(mail):
    result, response = mail.uid('search', None, "(UNSEEN)")
    unread_msgs = response[0].split()
    if not unread_msgs:
        raise ShortTimeOut
    else:
        log('Something\'s wrong!')
    result, data = mail.uid('fetch', unread_msgs[-1], '(RFC822)')
    email_message = email.message_from_bytes(data[0][1])
    for part in email_message.walk():
        if part.get_content_type() == 'text/plain':
            yield part.get_payload()

I yield the email contents (to be iterated over in the outer scope). I also added a custom exception to allow handling the short time-out in main.

It seems to me like you did one request too many. After your first request you already have a list of all unseen emails, the last of which is the latest email. So there should be no need to do another request here.

Lastly, I re-ordered the logic, so that if there are no new messages, no further requests are made.

The last function is a main function, which calls all the other functions. It is executed in a if __name__ == "__main__": guard to allow importing your code from other scripts:

def main():
    while True:
        try:
            r = RUAlerts.login()
            mail = mail_login()
            try:
                for alert in search_for_alerts(mail):
                    post_alert(alert, r)
            except ShortTimeOut:
                log('All clear on the RU front')
            time.sleep(30)
        except Exception as e:
            log('{} Retrying in 5 minutes'.format(e))
            time.sleep(300)

if __name__ == "__main__":
    main()

Final code:

import praw
import imaplib
import email
import time
import getpass
import RUAlerts
from datetime import datetime

class ShortTimeOut(Exception):
    pass

app_id = 'xxxxxxxxxxxx'
app_secret = 'xxxxxxxxxxxx'
app_uri = 'xxxxxxxxxxxx'
app_scopes = 'account creddits edit flair history identity mysubreddits privatemessages read report save submit subscribe vote'
app_ua = 'xxxxxxxxxxxx'
app_account_code = 'xxxxxxxxxxxx'
app_refresh = 'xxxxxxxxxxxx'

def login():
    r = praw.Reddit(app_ua)
    r.set_oauth_app_info(app_id, app_secret, app_uri)
    r.refresh_access_information(app_refresh)
    return r

def log(text):
    print('\t{:%Y-%m-%d %H:%M:%S} - {}'.format(datetime.now(), text))

def mail_login():
    mail = imaplib.IMAP4_SSL('imap.gmail.com')
    while True:
        try:
            emailpass = getpass.getpass(
                'Please enter the password for xxxxxxx@xxxxx: ')
            mail.login('[email protected]', emailpass)
            break
        except imaplib.IMAP4.error:
            print('Incorrect password')
    mail.select("inbox")
    return mail

def post_alert(alert, r):
    alert_text = """{}

 ********

*^^I ^^am ^^a ^^bot. ^^For ^^any ^^questions, ^^comments, ^^or ^^concerns, ^^please ^^email [^^xxxxxxx@xxxxx](mailto://xxxxxxx@xxxxx)*"""
    while True:
        try:
            r.submit(subreddit='xxxxxxxxxxxx', title=alert,
                     text=alert_text.format(alert))
            print('\t{}'.format(alert), end=' ')
            break
        except praw.errors.ExceptionList as e:
            log('Reddit error! {}'.format(e))
            time.sleep(300)

def search_for_alerts(mail):
    result, response = mail.uid('search', None, "(UNSEEN)")
    unread_msgs = response[0].split()
    if not unread_msgs:
        raise ShortTimeOut
    else:
        log('Something\'s wrong!')
    result, data = mail.uid('fetch', unread_msgs[-1], '(RFC822)')
    email_message = email.message_from_bytes(data[0][1])
    for part in email_message.walk():
        if part.get_content_type() == 'text/plain':
            yield part.get_payload()

def main():
    while True:
        try:
            r = RUAlerts.login()
            mail = mail_login()
            try:
                for alert in search_for_alerts(mail):
                    post_alert(alert, r)
            except ShortTimeOut:
                log('All clear on the RU front')
                time.sleep(30)
        except Exception as e:
            log("{}! Retrying in 5 minutes".format(e))
            time.sleep(300)

if __name__ == "__main__":
    main()
 |
_softwareengineering.251220 | My condition: a WCF service which is self-hosted on a Win8 machine. The client is a WPF program on another machine. I followed the article on CodeProject about how to set an X509 certificate for WCF. Problem description: communication between client and service was OK when they were on the same machine. When I put the client on another machine, an exception occurs saying "The caller is not authenticated by the service." I believe the cause of the exception above may be related to the X509 certificate. When I put the Client.exe on another computer, I just generate a new certificate for the client - is that right? I want to know if the X509 client certificate should be exported from the service machine which generated both the client and server certificates and then be imported into the other client machine, or whether I should just use makecert.exe to generate another certificate on the other client machine. In short, can the certificate be generated by any machine, or only by the machine that generated the service certificate? | X509 certificate question on WCF | web services;wcf;certificate | Public key infrastructure always involves a key PAIR (public and private). When you are authorizing to a WCF service with an x509 certificate you must have the private key, and the service you are calling must have the public key (which is inside the x509 certificate). It must be the same pair, because only the public key that matches your private key can verify the private key's signature. The two are mathematically connected. You can export the certificate from the certificate store then import it on the other machine (using mmc with the certificate snap-in). It is also important that you transfer the certificate by secure means AND/OR verify the hash of the certificate is correct before installing. If the wrong certificate were installed, then someone else could access your service with THEIR private key. |
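The "same pair" point in the answer can be illustrated outside WCF with the openssl CLI (an assumption here is that openssl is installed; makecert and the Windows certificate store work on the same underlying principle): a signature made with a private key verifies only against the matching public key.

```shell
# Generate a key pair, sign a message with the private key,
# and verify the signature with the extracted public key.
dir=./x509_demo
mkdir -p "$dir"
openssl genrsa -out "$dir/priv.pem" 2048 2>/dev/null
openssl rsa -in "$dir/priv.pem" -pubout -out "$dir/pub.pem" 2>/dev/null
printf 'hello' > "$dir/msg"
openssl dgst -sha256 -sign "$dir/priv.pem" -out "$dir/sig" "$dir/msg"
openssl dgst -sha256 -verify "$dir/pub.pem" -signature "$dir/sig" "$dir/msg"
# prints: Verified OK
```

A freshly generated, unrelated public key would fail this verification step, which is why generating a new certificate on the client machine cannot authenticate against the pair the service already trusts.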
_cstheory.33965 | I am interested in a class of optimization problems of which we know that the input variable is first subjected to noise $\xi$ before entering the data-producing process $f$. I write the objective in probability, e.g. $x^* = \underset{x \in X}{\arg } \{ P[\partial f(x + \xi)^\intercal w \leq \epsilon_1 ] \geq 1 - \epsilon_2\}$ where $X \subset \mathbb{R}^d$ and $\epsilon_i$ and $w$ are constant w.r.t. $x$. Notes: $f$ is not known explicitly but is continuous and quite regular, so we are able to compute subgradients for any realization of $\xi$. I am pretty sure it is not convex over all of $X$. We do not know the distribution of $\xi$ and it might have a dependency on $x$. Question: I've seen many works in stochastic approximation dealing with additive noise (noisy zeroth- and first-order oracles); is there any relevant literature on noisy control variables? I would be very grateful for any pointers, especially to approximation algorithms. Thank you in advance. | Stochastic optimization with erroneous oracles | reference request;approximation algorithms;optimization;stochastic process | null |
_softwareengineering.346927 | In functional programming languages, such as Scala, data types and structures are really important. I am in two minds about the use of type-defs in helping with the readability of the code manipulating non-trivial data structures.

Here is an example of a function that takes a generic collection in Scala, traverses it once in parallel and calculates its average value. Here I have used a type-def simply in order not to have (Int,Int) all over the place:

def average(xs: GenTraversable[Int]): Int = {
  type IntTuple = (Int, Int)
  def addIntTuples(x: IntTuple, y: IntTuple): IntTuple = (x._1 + y._1, x._2 + y._2)
  val (sum, len) = xs.map(x => (x, 1))
                     .aggregate((0, 0))(addIntTuples, addIntTuples)
  sum / len
}

Here is another version of the above function which tries to give the reader a better idea about what the function is doing by introducing typedefs indicating what the values in the tuple represent:

def readableAverage(xs: GenTraversable[Int]): Int = {
  type Sum = Int
  type Len = Int
  type SumLen = (Sum, Len)
  def add(x: SumLen, y: SumLen): SumLen = (x._1 + y._1, x._2 + y._2)
  val (sum, len) = xs.map(x => (x, 1))
                     .aggregate((0, 0))(add, add)
  sum / len
}

The second version is longer, but it perhaps gives the reader more of an insight into how the function operates. Question is: firstly, do you consider the second version actually more readable and insightful? If so, is the added benefit worth the increase in code length? | functional programming: impact of typedef-ing datatypes on code readability and maintenance | functional programming;scala | I strongly prefer the first version: addIntTuples does exactly what it says. It is a generic method that could even exist outside of this scope. This means that when I reason about the code, I can think:

"okay this function just adds pairs of Ints, simple, lets see what the rest does..."

The other version forces specific meaning that I need to appreciate before looking at how it is actually used. Then I have to go back and check:

"What is this SumLen again? Ah.. just a tuple of these Sum and Len... What type was Sum again? Int or Double? Int, (why?) Okay, lets go back again..."

This is of course exaggerated for small functions, but you can see it can become an issue for larger ones. I generally find type aliases that obscure the underlying type annoying.

When two approaches look of similar complexity, I always opt for the one that is the most generic. E.g. try to separate the essence of what a method does from utility-like methods. That means you can easily factor out a commonly used utility, and IMHO it makes code easier to reason about.

EDIT: The main benefit of having generic helper/util methods is that you communicate that there is nothing to see here, no tricky business logic, just something that you wanted to hide/abstract from the actual interesting parts of the code.

Check this relevant SO answer that uses the scalaz semigroup:

import scalaz._, Scalaz._

scala> (1, 2.5) |+| (3, 4.4)
res0: (Int, Double) = (4,6.9)

or the second answer that uses Numeric to create essentially the same thing that scalaz provides:

implicit class Tupple2Add[A : Numeric, B : Numeric](t: (A, B)) {
  import Numeric.Implicits._
  def |+| (p: (A, B)) = (p._1 + t._1, p._2 + t._2)
}

(2.0, 1) |+| (1.0, 2) == (3.0, 3)

These not only create reusable code, but do something more important: they communicate that there is nothing special there. E.g. there is nothing special about Int; it works with any type that has a Numeric, so that it can add them: p._1 + t._1.

There is a very nice talk that touches this topic, "Constraints Liberate, Liberties Constrain" by Runar Bjarnason.

In a nutshell: def f[T](a:T):T has only one valid implementation: def f[T](a:T):T = a. Being so generic, the method is constrained to a single valid implementation. def f(a:Int):Int has Int.MaxValue * 2 valid implementations. The takeaway message is that leaving your code needlessly specific to a particular use case opens it to multiple (and maybe incorrect) implementations and mental interpretations.

As for the type aliases, I don't really like them because they just give a different name to the same type, and the compiler will happily accept either. I prefer value classes and tagged types (http://eed3si9n.com/learning-scalaz/Tagged+type.html). Both create a different type from the original, e.g. Int, so the compiler will complain if you use e.g. a Len type at the place where it expects a Sum type. |
_unix.304559 | We have an ancient Business Basic application which prints reports to a simple line printer, and we would like to capture that output to a file (to then scrape the data from it). This runs on Red Hat 8 (circa 2002). The Basic code OPENs then PRINTs to "LP", which makes its way to lpd printer lp. Inspecting a couple of random spool files that didn't get deleted in /var/spool/lpd/lp/, these look to have suitable content. So the question is, how to temporarily change something such that the Basic program sends its output only to a file (and that file doesn't get printed). One could achieve the effect by changing the Basic code, but the system is extensive, has many places where printing is performed, and there would be no easy way to offer an option at those places. Hence the pursuit of a way to do this, external to the Basic application, which can be instated and uninstated (to return printing to normal) from a script. In case it's relevant, the printcap entry:

lp:\
    :ml#0:\
    :mx#0:\
    :sd=/var/spool/lpd/lp:\
    :af=/var/spool/lpd/lp/lp.acct:\
    :sh:\
    :rm=[ip address]:\
    :rp=pr0:\
    :lpd_bounce=true:\
    :if=/usr/share/printconf/util/mf_wrapper:

Thanks! | Redirect lpd lp to a file? | lpd | If you define your printcap entry similar to:

lp:\
    :ml#0:\
    :mx#0:\
    :sd=/var/spool/lpd/lp:\
    :sh:\
    :lp=/dev/null:\
    :of=/var/output/capture:

then the lp entry points to /dev/null and so it will never print anything out. The magic is in the of filter. It's a very simple script:

#!/bin/sh

DIR=/var/output/files
d=`/bin/date +%Y-%m-%d_%H:%M:%S`
output=$DIR/$d.$$

cat > $output
chmod 644 $output
exit 0

Now:

mkdir /var/output/files
chown daemon /var/output/files

At this point we can do something like:

% echo "this is a test" | lpr

And as if by magic:

% ls /var/output/files
2016-08-20_09:44:19.26541
% cat /var/output/files/2016-08-20_09\:44\:19.26541
this is a test

You can modify the script to your exact needs. (I've tested this on FreeBSD, which is the only machine I have that still uses lpd!) Now, you had an "if" filter in your original; "if" is an input filter which is designed to modify the incoming file to a normalised format. I'm not sure what mf_wrapper does ("m" format?), but if you're seeing a mess in your output files then you might change the printcap to include the original if filter:

lp:\
    :ml#0:\
    :mx#0:\
    :sd=/var/spool/lpd/lp:\
    :sh:\
    :lp=/dev/null:\
    :of=/var/output/capture:\
    :if=/usr/share/printconf/util/mf_wrapper:

For files that you're happy with, you could then manually send them to another print queue with lpr -Prealqueue or similar. |
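A local dry run of the filter's file-naming scheme described in the answer (no lpd involved; the paths are placeholders, and the printf line stands in for the filter's cat > $output, which reads the print job from stdin):

```shell
# Simulate what the of-filter does with each print job
DIR=./capture_demo
mkdir -p "$DIR"
d=$(date +%Y-%m-%d_%H:%M:%S)
output="$DIR/$d.$$"

# In the real filter this is `cat > $output`, consuming the job from stdin
printf 'this is a test\n' > "$output"
chmod 644 "$output"

ls "$DIR"
cat "$output"
```

Each invocation produces a new timestamped file whose name also embeds the process ID, so concurrent jobs cannot clobber each other.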
_webmaster.106662 | I've got an Apache web server that serves 2 domains; at my school one domain is blocked and one isn't (both show the same webpage for the moment). I want people who connect to my old domain (the one that isn't blocked) to get redirected, unless the request comes from the school's IP address. How would I do this? I know it has something to do with .htaccess, but I don't know how. | Redirect domain except those users coming from a specific IP address | htaccess;redirects;apache | null |
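This record has no accepted answer; a plausible .htaccess sketch (my assumption, not from the post) uses mod_rewrite with two conditions, where the old hostname, the target URL, and the school's IP address are all placeholders:

```
RewriteEngine On
# Only act on requests for the old (unblocked) hostname
RewriteCond %{HTTP_HOST} ^old\.example\.com$ [NC]
# ...unless the visitor comes from the school's IP address
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.10$
RewriteRule ^ http://new.example.com%{REQUEST_URI} [R=302,L]
```

R=302 keeps the redirect temporary while testing; it could be switched to R=301 once the behavior is confirmed.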
_unix.267047 | I am looking for a way to monitor access to disk blocks, and to represent the access as a bitmap of blocks. I also need the capability to freeze (and queue) the device block access (and also to unfreeze and write the pending blocks). It seems that these features must be supported in kernel mode (this probably can't be done as a user application). In the kernel there is blk-core.c, which probably is the gate before calling the actual block device. I thought that I could use that for this purpose. It seems that it already uses some queue mechanism, and that I would need some way to understand when the actual writing to the device is done:

void blk_start_queue(struct request_queue *q)
{
    WARN_ON(!irqs_disabled());
    queue_flag_clear(QUEUE_FLAG_STOPPED, q);
    __blk_run_queue(q);
}
EXPORT_SYMBOL(blk_start_queue);

I also see that it uses sectors, not blocks (which is what I need to trace). Is it that the kernel filesystem write request is in sectors, while the disk's device driver below works in blocks? If yes, then block monitoring must be in the disk driver instead. I am also not sure about the block device itself (for example, hd.c). The request structure contains the exact place where the transfer should be made:

struct request {
    ....
    sector  // the position in the device at which the transfer should be made
    ....
}

This gives information about the exact sector to read/write, but how can the layer above, which sends the request, decide about it? Isn't that the decision of the block driver (hd.c in this case) to take? I am probably missing something in my understanding. Thank you for any suggestions on the subject. | monitoring block access to disk | drivers;disk;storage;sata;trace | null |