Dataset columns:
id: string (5 to 27 chars)
question: string (19 to 69.9k chars)
title: string (1 to 150 chars)
tags: string (1 to 118 chars)
accepted_answer: string (4 to 29.9k chars)
_softwareengineering.299635
I need advice on creating an architecture where I want an API layer between the UI layer and the business layer. The UI layer should only consume REST services for displaying data. The reason for doing this is that we need to expose the same services to other clients like iPad, Android, etc.

Now my questions are:

1. Do we need dependency injection in this case? (I don't think so, because we are not going to use any reference at the UI layer. The only thing we do is manipulate the JSON returned by the service.)
2. Will it hurt performance?
3. Is this the right approach?
RESTful service layer with MVC
architecture;rest;services;portability
Do we need dependency injection in this case?

It depends on what you are trying to accomplish. Dependency injection is needed in order to easily replace the underlying implementation. Common examples are:

- To replace the actual implementation by stubs/mocks in the context of unit testing.
- To easily swap between several data access layers (for instance, to deal with several database systems).

In your case, you would probably want to test your presentation layer without having to make the actual calls to the API, in which case DI would be useful.

Will it hurt performance?

Any additional abstraction or layer hurts performance. What you should ask yourself is:

- Do you actually have performance issues?
- If yes, what does profiling reveal about the source of the slowness?

Don't guess. Measure.

Is this the right approach?

Having a common API which is then used at once by desktop applications, mobile applications and web applications is a common practice, and it makes it possible to reduce code duplication and simplify the porting of a system to different types of devices.
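To make the testing point concrete, here is a minimal sketch of that kind of DI, with hypothetical names (my illustration, not part of the original answer):

// An abstraction over the REST API that the UI layer consumes.
interface ApiClient {
    String fetchJson(String resource);
}

// The presentation layer depends only on the abstraction.
class ProductView {
    private final ApiClient api;

    ProductView(ApiClient api) { // constructor injection
        this.api = api;
    }

    String render() {
        return "Products: " + api.fetchJson("/products");
    }
}

// In a unit test, a stub replaces the real HTTP-backed client.
class StubApiClient implements ApiClient {
    public String fetchJson(String resource) {
        return "[{\"id\":1}]"; // canned response, no network call
    }
}

Production wiring passes an HTTP-backed implementation while tests pass the stub; making that swap without touching ProductView is exactly what DI buys here.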
_codereview.157456
Specification: Given first, last (which are ForwardIterators, and whose std::iterator_traits::value_type is LessThanComparable), find the most frequent element in the sequence and return a pair of an iterator to the last occurrence of the element and the frequency count. When using the overload with a comparator (which is Compare), the restriction on the value type of the iterators is lifted.

Usage guidelines: Should be used when the elements in the sequence are non-copyable or too expensive to copy. Should be avoided when the entropy of the sequence is very high (most of the values in the sequence are distinct), the size of the value type of the iterators is within the range of a few integers, and the sequence is very large.

Code:

#ifndef AREA51_ALGORITHM_HPP
#define AREA51_ALGORITHM_HPP

#include <utility>
#include <map>
#include <iterator>
#include <cstddef>

template <typename ForwardIterator, typename Comparator>
std::pair<ForwardIterator, std::size_t> most_frequent(ForwardIterator first, ForwardIterator last, Comparator comparator)
{
    auto comp = [&comparator](const auto& lhs, const auto& rhs)
    {
        return comparator(lhs.get(), rhs.get());
    };
    std::map<std::reference_wrapper<typename std::iterator_traits<ForwardIterator>::value_type>, std::size_t, decltype(comp)> counts(comp);
    std::size_t frequency = 0;
    auto most_freq = first;
    while (first != last)
    {
        std::size_t current = ++counts[*first];
        if (current > frequency)
        {
            frequency = current;
            most_freq = first;
        }
        ++first;
    }
    return std::make_pair(most_freq, frequency);
}

template <typename ForwardIterator>
std::pair<ForwardIterator, std::size_t> most_frequent(ForwardIterator first, ForwardIterator last)
{
    return most_frequent(first, last, std::less<>{});
}

#endif //AREA51_ALGORITHM_HPP

It took roughly 3 milliseconds to find the most frequent integer in a sequence of 100,000 integers varying from 0 to 100 (release build; still faster than human reaction). The benchmark was very simplistic, so real-world performance can differ. Some further twisting of the input (still simple tests) showed that the range of the input (e.g. how many distinct elements there are in the sequence) causes algorithmic performance degradation.

Usage:

#include <iostream>
#include <string>

struct integer
{
    int x;
    integer(int y): x(y) {}
    integer(const integer& other) = delete; // non-copyable
    integer& operator=(const integer& other) = delete;
};

bool operator<(const integer& lhs, const integer& rhs)
{
    return lhs.x < rhs.x;
}

std::ostream& operator<<(std::ostream& os, const integer& x)
{
    return os << x.x;
}

int main()
{
    int arr[] = {1, 2, 3, 4, 5, 1};
    std::string names[] = {"Olzhas", "Erasyl", "Aigerym", "Akbota", "Akbota", "Erasyl", "Olzhas", "Olzhas"};
    auto answer = most_frequent(std::begin(arr), std::end(arr));
    std::cout << "The most frequent integer is " << *answer.first << " which occurred " << answer.second << " times\n";
    auto most_frequent_name = most_frequent(std::begin(names), std::end(names));
    std::cout << "The most frequent name is " << *most_frequent_name.first << " which occurred " << most_frequent_name.second << " times\n";
    integer weird_integers[] = {0, 1, 2, 3, 4, 5, 6, 1};
    auto most_frequent_integer = most_frequent(std::begin(weird_integers), std::end(weird_integers));
    std::cout << "The most frequent weird integer is " << *most_frequent_integer.first << " which occurred " << most_frequent_integer.second << " times\n";
}

The code executes faster than human reaction time, so I think for a first version this should be enough. I'm interested in naming (I believe most_frequent doesn't really match the algorithm), readability and conformance to the specification (from the transform_iterator review, I found that it is quite hard to conform to one, though it works for my needs). I also thought about std::unordered_map, but then I would have to specify too many input variables and types. By the way, the names are Kazakh names :)
Find the most frequent element in a sequence without copying elements
c++;algorithm;template;c++14
Does not compile unless I add the move operators to integer:

struct integer
{
    int x;
    integer(int y): x(y) {}
    integer(const integer& other) = delete; // non-copyable
    integer& operator=(const integer& other) = delete;

    // Added these
    integer(integer&&) = default;
    integer& operator=(integer&&) = default;
};

You don't need that lambda. You just did not specify your comparison operator correctly:

return most_frequent(first, last, std::less<>{});
// Should be
return most_frequent(first, last, std::less<typename std::iterator_traits<ForwardIterator>::value_type>{});

Then you can remove:

auto comp = [&comparator](const auto& lhs, const auto& rhs)
{
    return comparator(lhs.get(), rhs.get());
};

and just use Comparator where you use comp.

There is no need to have two versions of the function most_frequent; just use default parameter values.

// This is not needed
// You can remove it.
template <typename ForwardIterator>
std::pair<ForwardIterator, std::size_t> most_frequent(ForwardIterator first, ForwardIterator last)
{
    return most_frequent(first, last, std::less<>{});
}

Modify the declaration of the main function:

template <typename ForwardIterator,
          typename Comparator = std::less<typename std::iterator_traits<ForwardIterator>::value_type>>
//                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
std::pair<ForwardIterator, std::size_t> most_frequent(
    ForwardIterator first,
    ForwardIterator last,
    Comparator comparator = Comparator())
//             ^^^^^^^^^^^^

"The code executes in time faster than human reaction, so I think for the first version this should be enough."

I would hope so. Your test sets are relatively small. How long it takes to scan the whole Library of Congress would be a better test.

Final version:

template <typename ForwardIterator,
          typename Comparator = std::less<typename std::iterator_traits<ForwardIterator>::value_type>>
std::pair<ForwardIterator, std::size_t> most_frequent(ForwardIterator first, ForwardIterator last, Comparator comparator = Comparator())
{
    std::map<std::reference_wrapper<typename std::iterator_traits<ForwardIterator>::value_type>, std::size_t, Comparator> counts(comparator);
    std::size_t frequency = 0;
    auto most_freq = first;
    while (first != last)
    {
        std::size_t current = ++counts[*first];
        if (current > frequency)
        {
            frequency = current;
            most_freq = first;
        }
        ++first;
    }
    return std::make_pair(most_freq, frequency);
}
_unix.188414
I installed the GNOME desktop environment on CentOS 6 on my VPS, then I installed Firefox, Flash Player and VLC. Everything works well, but I can't hear any sound. I searched for a solution on the Internet but couldn't find one. Here are 3 screenshots of the system, the sound output, and the sound card after running 'alsamixer' from the terminal.
Sound doesn't work on a Centos 6.5 VPS
audio
null
_softwareengineering.306180
What's a good approach to exposing web services of different versions on the same URL? I don't want to have different URLs for different versions, so that I can change which version consumers are using from the server side. If the version is in the URL, it's not optional and I can't provide a sensible default.

Offhand I can think of:

- Putting a version parameter in the query string
- Putting a version parameter in the POST body

Are either of these good choices, or is there a different approach I should be taking?

Let me also add another question: how would a framework/program execute different versions if they reside in different JAR or EAR versions of the code?
Approach to Web Services Exposure By Version
web services;versioning
It's not an altogether uncommon practice to use headers for version specification on web services these days. The query string is also one I've seen used; both of these rely on the concept of a sensible default, which is typically the latest version, applied when the header or query string is not provided.

Another approach can be to have versions tied to consumers: if you require some authentication token on all requests, then along with using it to identify who the consumer is, it can be used to identify what version of the API they have selected to use. This approach has good and bad sides to it; depending on how consumers use it, they may wish to use multiple versions at the same time (perhaps for one call they want to use a later service version, while for all the other calls they're still on an older one).

I would strongly discourage using a parameter in the POST body, because this would require you to make every single request a POST, which is bad form for REST. If you're using SOAP, however, then the SOAP headers would actually be the perfect place to put this, since you can't rely on HTTP headers: SOAP can be transmitted via non-HTTP transports (e-mail, TCP, MQ, etc.).

tl;dr: If it's REST, use HTTP headers or the query string, whichever is easier for you. For SOAP, use the SOAP headers. If none of these are viable choices, I'd say you'll have to fall back on the classic approach of using different URLs for different versions.
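As a rough illustration of the header-with-default idea (my sketch, not part of the original answer; the header name X-Api-Version, the query parameter v and the version numbers are made up), a servlet filter could resolve the requested version once and let downstream handlers dispatch on it:

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;

public class VersionFilter implements Filter {
    private static final String LATEST = "3"; // sensible default: the latest version

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        String version = http.getHeader("X-Api-Version");      // header wins
        if (version == null) version = http.getParameter("v"); // then query string
        if (version == null) version = LATEST;                 // then the default
        req.setAttribute("apiVersion", version); // downstream handlers dispatch on this
        chain.doFilter(req, res);
    }

    public void destroy() {}
}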
_scicomp.26619
I have a binary full-rank matrix of size, say, $25 \times 50$. I need to count how many subsets of its columns form matrices with full column rank, i.e. all the columns in the subset are linearly independent.

The straightforward approach would be to iterate over all subsets of columns of size up to $25$, and then check whether the corresponding submatrix has full column rank. This way one needs to test
$$\binom{50}{1} + \binom{50}{2} + \dotsb + \binom{50}{25} = 626\,155\,256\,640\,187 \approx 6.3 \times 10^{14}$$
matrices. Hence one needs a really fast algorithm to test whether a particular submatrix has full column rank. For example, assume I have 500 cores and I want to finish the calculation in 24 hours. Then I need to test $1.4 \times 10^7$ submatrices per second on one core. Good old Gaussian elimination fails at this task. Can I do something much faster?

Another approach might be some optimised method like branch and bound, so that one does not need to check all the submatrices, but only a small portion of them. However, I don't see at the moment what can be done in this direction.

P.S. All operations are over the Galois field $\mathbb F_2$.
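Since no answer is recorded for this question, a hedged aside on the standard trick: over $\mathbb F_2$, vectors can be packed into machine words, so one Gaussian-elimination step becomes a single XOR of words. A minimal sketch in Java (my illustration, not from the thread; here each column of the $25 \times 50$ matrix is packed into the low 25 bits of a long):

// Tests whether a set of GF(2) column vectors is linearly independent,
// maintaining a row-echelon basis indexed by leading bit position.
static boolean fullColumnRank(long[] cols) {
    long[] basis = new long[64]; // basis[b] = reduced column whose highest set bit is b
    for (long v : cols) {
        // reduce v against the basis collected so far
        while (v != 0 && basis[63 - Long.numberOfLeadingZeros(v)] != 0) {
            v ^= basis[63 - Long.numberOfLeadingZeros(v)]; // clears the leading bit
        }
        if (v == 0) {
            return false; // v is a linear combination of earlier columns
        }
        basis[63 - Long.numberOfLeadingZeros(v)] = v; // v becomes a new pivot
    }
    return true;
}

Each column costs at most 25 word XORs. This incremental shape also fits the branch-and-bound direction mentioned in the question: enumerate subsets depth-first while carrying the basis along, and cut a branch as soon as a dependent column appears, since every superset of a dependent set is also dependent.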
Fast counting of all submatrices of a binary matrix with a full column rank
linear algebra;matrices;linear solver;rank
null
_unix.158601
I'm trying to check how much memory strain a process is actually putting on the system, but ps, top and friends are almost useless for that purpose, as they only report 3 statistics:

- RES: the resident memory set; includes only data pages that are in physical memory (not including swapped-out pages) but also includes loaded shared libraries.
- VIRT: includes all pages mapped into memory by the kernel, including swapped-out pages but also memory-mapped files, shared libraries, etc.
- SHR: possibly the most useless of all; includes just the memory used by libraries that can be shared, but as I understand it, it does not actually account for the memory used by the process; it counts the entire size of the library even if only part of it is actually resident.

In multi-process computing software, I want to know how much memory will be used/freed by running or killing another process with similar or identical shared memory/libraries as the existing ones, which means I need to know how large the data set used by the process is, including all swapped-out data pages but excluding all non-data pages such as shared libraries, shared memory pages, memory-mapped files, etc.

I'm not afraid of some coding, but it would be better if there's already a top replacement I'm not aware of that shows that information.
How to get a process's actual memory usage (including data in swap)
linux;process;memory
null
_codereview.156890
I wrote a program in C that looks for integer solutions of different formulas. The program asks the user for a dimension, which defines the formula: for example, the formula of the third dimension has three variables, the formula of the second dimension has two variables. The formulas look similar to the following:

dim  formula
2    a + b
3    a + b + c
4    a + b + c + d

and so on. Next, the program asks the user for a maximum value for all the variables. For a given dimension, 3 for instance, the code could look like:

for(int a = 1; a <= max; a++)
{
    for(int b = 1; b <= max; b++)
    {
        for(int c = 1; c <= max; c++)
        {
            do_sth();
        }
    }
}

Of course, this would be a bad algorithm. It checks each possible combination up to 6 times, just in different orders. The better alternative would be:

for(int a = 1; a <= max; a++)
{
    for(int b = 1; b <= a; b++)
    {
        for(int c = 1; c <= b; c++)
        {
            do_sth();
        }
    }
}

To make that work for all dimensions, I had to find a way to stack multiple loops depending on the user's input. I wrote a recursive loop function:

int number[100];
int dim, max;

void loop(int depth)
{
    for(number[depth] = 1; number[depth] <= number[depth-1]; number[depth]++)
    {
        if(depth == dim - 1)
        {
            do_sth();
        }
        else
        {
            loop(depth+1);
        }
    }
}

int main(void)
{
    printf("Enter a dimension: ");
    scanf("%d", &dim);
    printf("Enter a maximum value: ");
    scanf("%d", &max);
    for(number[0] = 1; number[0] <= max; number[0]++)
    {
        loop(1);
    }
}

Notes:

- It is not important what do_sth() does. It builds the actual formula, which I don't want to show here.
- To be able to calculate the runtime, I wrote a program that calculates the number of combinations to try. I posted it in this question.

My code works fine, but I would like to know if there's a better way to do this.
trying out possible combinations of varying formula with recursion
performance;algorithm;c;recursion
Can be done without recursion

You can do the same thing without recursion, by simply adding one to the rightmost dimension and then carrying over when that number exceeds the one to its left.

Here is a sample implementation:

void loop()
{
    int i = 0;
    for (i = 0; i < dim; i++)
    {
        number[i] = 1;
    }
    do
    {
        do_sth();
        for (i = dim - 1; i > 0; i--)
        {
            if (number[i] < number[i-1])
            {
                number[i]++;
                break;
            }
            number[i] = 1;
        }
        if (i == 0)
        {
            if (number[0] < max)
            {
                number[i]++;
            }
            else
            {
                return;
            }
        }
    } while (1);
}
_webmaster.45605
Of the various people and organizations that have engaged in scraping activity, Automattic is one of the strangest. They have numerous independent IP ranges, as if they know they will be blocked and want to make it hard. But who are they? Why are they scraping my site?

Update: By scraping I mean excessive, unwarranted visits to my non-WordPress site, like visiting the same page 10 times in under a minute, or visiting every day. I've had to ban their many IP ranges, which are located in many geographical regions.
Who is Automattic and why are they visiting my non-Wordpress site so often?
scraper sites
null
_unix.44793
I am using Ubuntu 11.10 (Oneiric). /var/log/mail keeps inflating on my server:

Aug 5 10:48:25 domU-12-31-39-0B-C4-54 sm-msp-queue[13360]: q71He1xw027248: to=postmaster, delay=3+17:03:10, xdelay=00:00:00, mailer=relay, pri=23074446, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]
Aug 5 10:48:25 domU-12-31-39-0B-C4-54 sm-msp-queue[13308]: q717K1wk024979: to=postmaster, delay=4+03:23:18, xdelay=00:00:00, mailer=relay, pri=25779463, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]
Aug 5 10:48:25 domU-12-31-39-0B-C4-54 sm-msp-queue[13360]: q71He1xx027248: to=postmaster, delay=3+17:03:10, xdelay=00:00:00, mailer=relay, pri=23075343, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]
...

I am not using sendmail directly, and would prefer to disable it. It seems sendmail cannot start:

$ sudo /etc/init.d/sendmail start
 * Starting Mail Transport Agent (MTA) sendmail
451 4.0.0 /etc/mail/sendmail.cf: line 100: fileclass: cannot open '/etc/mail/local-host-names': Group writable directory

I believe I have the correct permissions:

$ ls -ld /etc/mail/local-host-names
-rw-r--r-- 1 root root 52 2011-12-04 06:58 /etc/mail/local-host-names

But are the permissions on the parent folders OK?

$ ls -ld / /etc /etc/mail
drwxr-xr-x 23 root  root  4096 2012-05-23 08:38 /
drwxrwxr-x 99 root  root  4096 2012-08-05 07:29 /etc
drwxr-sr-x  7 smmta smmsp 4096 2011-12-04 06:58 /etc/mail

I want to either fix sendmail or disable it. I tried:

$ sudo update-rc.d sendmail disable
update-rc.d: warning: sendmail start runlevel arguments (none) do not match LSB Default-Start values (2 3 4 5)
update-rc.d: warning: sendmail stop runlevel arguments (none) do not match LSB Default-Stop values (1)
 Disabling system startup links for /etc/init.d/sendmail ...
 Removing any system startup links for /etc/init.d/sendmail ...
   /etc/rc0.d/K19sendmail
   /etc/rc1.d/K19sendmail
   /etc/rc2.d/K79sendmail
   /etc/rc3.d/K79sendmail
   /etc/rc4.d/K79sendmail
   /etc/rc5.d/K79sendmail
   /etc/rc6.d/K19sendmail
 Adding system startup for /etc/init.d/sendmail ...
   /etc/rc0.d/K19sendmail -> ../init.d/sendmail
   /etc/rc1.d/K19sendmail -> ../init.d/sendmail
   /etc/rc6.d/K19sendmail -> ../init.d/sendmail
   /etc/rc2.d/K79sendmail -> ../init.d/sendmail
   /etc/rc3.d/K79sendmail -> ../init.d/sendmail
   /etc/rc4.d/K79sendmail -> ../init.d/sendmail
   /etc/rc5.d/K79sendmail -> ../init.d/sendmail

but my mail.log still gets the same errors.

Additional links I looked into:
http://www.linuxforums.org/forum/servers/184133-connection-refused-127-0-0-1-mail-logs.html
https://serverfault.com/questions/314429/cannot-open-etc-mail-trusted-users-group-writable-directory
How to prevent /var/log/mail.log from inflating?
ubuntu;sendmail
null
_unix.359882
I just installed the Linux Mint 18 KDE version. All seemed to work fine except the wifi: wifi networks were listed, but the laptop was not able to establish any connection. I checked and found that I am using the BCM4313 wireless adapter from Broadcom, which is always shown as not working on the Linux wireless site. How can I make Linux Mint 18 connect to wifi? I had used Kubuntu 16.1 and Linux Mint 17.3 before on my laptop, and both worked properly with wifi devices. Can someone please help?
How do I make my wifi connection work with Linux mint 18?
linux mint;wifi;drivers
null
_codereview.82114
I'm developing a web application where I have a start and a finish date and time, and I have to set the duration between them. There are inputs for dates and times. I have a function which calculates the duration between the two of them, and it works properly. The duration, though, needs to be dynamic. So I made this piece of code, attaching an 'on change' handler to each input, so that when one changes, it builds an object with the four datetime values and passes them to the getDuration function.

$(document.body).on('change','.data-inicio > input',function () {
    var tr = $(this).parent().parent();
    var time = {
        dstart: dateDbToView($(this).val()),
        hstart: tr.find('.hora-inicio').find('input').val(),
        dfinish: dateDbToView(tr.find('.data-final').find('input').val()),
        hfinish: tr.find('.hora-final').find('input').val()
    };
    var drc = getDuration(time.dstart, time.hstart, time.dfinish, time.hfinish);
    tr.find('.duracao').find('input').val(drc);
});

$(document.body).on('change','.hora-inicio > input',function () {
    var tr = $(this).parent().parent();
    var time = {
        dstart: dateDbToView(tr.find('.data-inicio').find('input').val()),
        hstart: $(this).val(),
        dfinish: dateDbToView(tr.find('.data-final').find('input').val()),
        hfinish: tr.find('.hora-final').find('input').val()
    };
    var drc = getDuration(time.dstart, time.hstart, time.dfinish, time.hfinish);
    tr.find('.duracao').find('input').val(drc);
});

$(document.body).on('change','.data-final > input',function () {
    var tr = $(this).parent().parent();
    var time = {
        dstart: dateDbToView(tr.find('.data-inicio').find('input').val()),
        hstart: tr.find('.hora-inicio').find('input').val(),
        dfinish: dateDbToView($(this).val()),
        hfinish: tr.find('.hora-final').find('input').val()
    };
    var drc = getDuration(time.dstart, time.hstart, time.dfinish, time.hfinish);
    tr.find('.duracao').find('input').val(drc);
});

$(document.body).on('change','.hora-final > input',function () {
    var tr = $(this).parent().parent();
    var time = {
        dstart: dateDbToView(tr.find('.data-inicio').find('input').val()),
        hstart: tr.find('.hora-inicio').find('input').val(),
        dfinish: dateDbToView(tr.find('.data-final').find('input').val()),
        hfinish: $(this).val()
    };
    var drc = getDuration(time.dstart, time.hstart, time.dfinish, time.hfinish);
    tr.find('.duracao').find('input').val(drc);
});

My question is: this looks pretty confusing for anyone but me, which can be a bad point when it comes to maintenance. There must be a better way of making this work, so I'm asking for suggestions.
Getting duration between two dates
javascript;jquery;html;datetime;form
null
_unix.131468
Using Debian Wheezy 7.5 x64 on an Acer Aspire 5738Z, I found that the maximum brightness (value '9' in /sys/class/backlight/acpi_video0/max_brightness) is much lower than the maximum brightness offered by the hardware (when using Windows on the same machine it is almost 1.5 times brighter!).

Considering that:

- many googled related questions suggest that max_brightness should be '15' on Debian;
- the file max_brightness is not modifiable by root even with rwx permissions (it says I/O error);

is there any workaround for this limit?
Max Brightness too low on Debian
debian;drivers;video;brightness
null
_datascience.2427
Example: I have two classification rules (Refund is a predictor and Cheat is a binary response):

(Refund, No) -> (Cheat, No)   Support = 0.4, Confidence = 0.57
(Refund, No) -> (Cheat, Yes)  Support = 0.3, Confidence = 0.43

=> multi-class classification rule:

(Refund, No) -> (Cheat, No) v (Cheat, Yes)

When predicting the classification for test data, (Cheat, No) will be selected with priority (it has the higher confidence), so why do we need to have (Cheat, Yes) in the multi-class classification rule here?
The meaning of multi-class classification rules
classification
null
_scicomp.2124
I am looking for recommendations for a C++ math library, with a permissive licence, well suited to calculating a wide variety of statistics on segmentations of time-based parameter data. I would be particularly interested in anything that can quantify properties of the curve shapes as well as of the raw data set.
Permissive Math Library for Parameter Statistics in C++
c++;statistics
null
_webmaster.37854
Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case-sensitive and our app was not, we implemented a 301 in Varnish to redirect to lower case. Example: if you search for "PlumBer StockHOLM" you get a 301 redirect to "plumber stockholm", and then "plumber stockholm" is cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly got a crazy amount of "Status - Not able to follow" errors, as you can see in the image below.

This of course stirred up some panic, and I started to read up on the documentation once again. If I clicked one of the links, I got to the help section, where I found this.

Well, this is strange, but as the day progressed more and more errors were thrown by Google. We took the decision to make Varnish return 200 instead of the 301. Now, when testing the links that appear in the "Not able to follow" section, I get a 200 back. I have tested with Chrome, curl and the Lynx reader and everything looks OK, but the number of errors is still increasing. What is a little bit comforting is that the links that appear in the "Not able to follow" section are dated before the 200 change in Varnish.

Why do I get these errors, and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
Major increase in Google "not able to follow" errors since introducing 301s to site
google;google search console;301 redirect
null
_unix.372979
I'm using SSH to access a jump box, essentially. I have two machines. The local machine, the one I'm physically seated in front of, is running Fedora 25. The server is running CentOS 7. It sits behind a router, and I use it to hop into the network behind that router. Both machines have an identical user account, user1.

I connect to the server by opening my favorite terminal emulator on the local machine and entering ssh -X -p 2201 server-dns.net, where server-dns.net is the correct domain name of the server. I enter my password, and I reach a prompt. When I look at the prompt, I see that my username hasn't changed, but my hostname has.

This is where the confusion begins. Both machines have a copy of Firefox installed, but only the server has a copy of Chromium installed. When I launch Chromium once connected, the remote instance of Chromium appears, and I can browse the remote network. But when I launch Firefox, my local install of Firefox opens. Why? When I ssh in as a different user and launch Firefox, the remote install of Firefox opens. I suspect this issue is related to the usernames being identical, but how?
When attempting to open an application over X11 forwarding over SSH, why is a local instance of the application opening?
ssh;x11;users;xforwarding
null
_softwareengineering.328377
I am trying to learn how lazy evaluation works because I'm going to try to implement it in the programming language I'm developing (I know it isn't the best thing to do, to try to implement something you don't even understand, but it's making for a good extensive lesson through the world of functional languages and the like), so I was reading the paper "A Gentle Introduction to Haskell". Soon I encountered the paragraph on the non-strictness of Haskell functions and the possibility of creating theoretically infinite data structures, as shown in this example:

numsfrom n = 1 : numsfrom (n + 1)
squares = map (^ 2) (numsfrom 0)

Now, take 5 squares would return [0, 1, 4, 9, 16], right? Well, my problem is understanding how it would do it. First things first, this is what I understood of lazy evaluation:

lazy = lambda x, y: x or y

Assuming Python was non-strict, if I passed 1 and 5 ** 1000000 to lazy, the second parameter would not get evaluated, but it would get evaluated if I passed False as the first argument, because or would then have requested it.

So when calling take 5 squares, squares has to be evaluated: map is called with (^ 2) and (numsfrom 0) as the arguments; but since map uses its second argument, numsfrom 0 will be evaluated, starting an infinite loop. I cannot understand how map would return if it's evaluating an infinite loop, and what it would return. Can someone please explain this to me?
Concerns on lazy evaluation and infinite data structures
functional programming;haskell;evaluation
You have the fundamental concept absolutely correct. The problem is that you're not applying it on a large enough scale.

In Haskell, everything (or close enough, for the purposes of this question and answer) behaves the way that or does in Python (and many, many other languages). Including, say, "Get me the next element of the list."

So when you call numsfrom, the result of numsfrom is not in fact created immediately and returned. It is produced as needed, on an element-by-element basis. The function numsfrom can be partially evaluated, piece by piece, as the result is needed. Since map doesn't ask for an infinite number of elements, there is no infinite looping going on. This is similar to iterators in Python, I believe.

As a side note, shouldn't numsfrom be numsfrom n = n : numsfrom (n + 1)? It seems to me like your numsfrom will produce an infinite list of 1s.
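For readers coming from strict languages, here is a rough analogue (my sketch, not part of the original answer) of this element-by-element production, using explicit thunks in Java; Haskell simply does this implicitly for every value:

import java.util.function.Supplier;

// A lazy, potentially infinite list: the head is a value, the tail is a thunk.
final class LazyList {
    final int head;
    final Supplier<LazyList> tail; // not evaluated until get() is called

    LazyList(int head, Supplier<LazyList> tail) {
        this.head = head;
        this.tail = tail;
    }

    // numsfrom n = n : numsfrom (n + 1), with the recursion hidden in a thunk
    static LazyList numsFrom(int n) {
        return new LazyList(n, () -> numsFrom(n + 1));
    }

    // take 5 (map (^2) (numsfrom 0)): only five cells are ever forced
    public static void main(String[] args) {
        LazyList xs = numsFrom(0);
        for (int i = 0; i < 5; i++) {
            System.out.println(xs.head * xs.head); // 0 1 4 9 16
            xs = xs.tail.get(); // force exactly one more cell
        }
    }
}

The call numsFrom(0) returns immediately because the recursive call is wrapped in a lambda; the "infinite loop" only happens if someone keeps asking for more tails.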
_vi.12677
I am new to Vim and am trying to set a colorscheme. However, for both pre-installed and newly installed colorschemes, the only thing that happens when I change the colorscheme is that the background color changes. For example, :colo evening shows up with just a lighter gray background; :colo pablo changes nothing at all. Regardless of whether I change it in the vimrc file or manually in a file, the result is the same. I am using iTerm2. What am I missing here?

Update: changing the colorscheme now changes the background color as well as some messages, such as the "a .swp file exists" warning. However, the text color of the actual program does not change.
Changing colorscheme does not change text color?
vimrc;colorscheme;iterm2
null
_codereview.92162
In our automated test framework, written in Java 8, there are different entities representing test data, having different states and transitions between them. To model this behavior, I started to implement a simple finite state machine (or at least what I understand as an FSM). The idea would be to use it like this:

public class Example {
    private enum Human {
        UNBORN, BORN, KID, ADULT, DEAD
    }

    @Test
    public void test() {
        StateMachine<Human> fsm = StateMachineBuilder.create(Human.class)
                .from(Human.UNBORN).to(Human.BORN).to(Human.DEAD, this::died)
                .from(Human.BORN).to(Human.KID).to(Human.DEAD, this::died)
                .from(Human.KID).to(Human.ADULT).to(Human.DEAD, this::died)
                .from(Human.ADULT).to(Human.DEAD)
                .startAt(Human.UNBORN);

        // dying as an unborn :(
        fsm.go(Human.DEAD);

        // going from UNBORN to BORN, KID, ADULT
        fsm.reset();
        fsm.go(Human.ADULT);
    }

    private void died() {
        System.out.println("Oh no :(");
    }
}

The reason I chose to write an implementation myself was that, e.g., stateless4j doesn't support going directly from UNBORN to ADULT, because it doesn't look for a shortest path.

Although I'm thankful for every bit of feedback I can get, I'm mostly thinking about the following points:

- Is my state machine a state machine? The input it receives isn't a trigger or the like, but a target state it should transition to.
- Is my builder really a builder?
- Is there a simple way to not allow multiple from calls like builder.from(KID).from(ADULT)? I know I could introduce another class as the return value for the first call to from, using TransitionAdder only for to, but it seems like overkill.
- Are the names okay? In particular, I'm unhappy with TransitionAdder.
- Have I missed important information in the Javadoc?
- shortestRoute.get().add(0, from); - should I use LinkedLists if I want to do this, or is it okay with ArrayLists in this case?
- If I were to share this and deploy it in a central Maven repository, should I use a logger (if so, I would use slf4j instead of log4j) or none at all?
- Is it okay to use Runnable in this case, or should I introduce my own functional interface like Transition?

StateMachine.java

package fsm;

import com.google.common.collect.Table;
import org.apache.log4j.Logger;

import java.util.*;
import java.util.stream.Collectors;

/**
 * An implementation of a state machine able to choose the shortest path between two states.
 *
 * @param <S> the Enum describing the possible states of this machine
 */
public class StateMachine<S> {
    private static final Logger log = Logger.getLogger(StateMachine.class);

    private final Table<S, S, List<Runnable>> transitions;
    private final S initialState;
    private S currentState;
    private int transitionsDone = 0;

    StateMachine(Table<S, S, List<Runnable>> transitions, S initialState) {
        this.transitions = Objects.requireNonNull(transitions);
        this.initialState = currentState = Objects.requireNonNull(initialState);
    }

    /**
     * Tries to look for the shortest path from {@code currentState} to {@code state}, executing all registered
     * transition actions.
     *
     * @param state the state to go to
     * @return this
     * @throws IllegalArgumentException if there is no path to {@code state}
     */
    public StateMachine<S> go(S state) {
        if (currentState != state) {
            final List<Runnable> runnables = transitions.get(currentState, state);
            if (runnables != null) {
                // there's a direct path
                log.trace("Going to state " + state);
                runnables.forEach(Runnable::run);
                currentState = state;
                transitionsDone++;
            } else {
                // check if there is a path
                List<S> intermediaryStates = getShortestStatePathBetween(currentState, state);
                if (intermediaryStates != null) {
                    // the first item is the same as currentState, but since we ignore going to the current state,
                    // we don't have to strip it
                    intermediaryStates.forEach(this::go);
                } else {
                    throw new IllegalArgumentException("There is no valid transition!");
                }
            }
        }
        return this;
    }

    /**
     * Returns the current state the machine is in.
     *
     * @return the current state of the machine
     */
    public S getCurrentState() {
        return currentState;
    }

    /**
     * Returns how many transitions were done by this machine.
     * <p>
     * Mostly used for debugging purposes.
     *
     * @return an integer greater than or equal to 0, describing how many transitions were done
     */
    public int getTransitionsDone() {
        return transitionsDone;
    }

    /**
     * Resets the current state to the state the machine was created with, without doing any transitions.
     * <p/>
     * Also, {@link StateMachine#getTransitionsDone()} will return 0 again after {@code reset}.
     */
    public void reset() {
        currentState = initialState;
        transitionsDone = 0;
    }

    /**
     * Looks for the shortest available state path between the states {@code from} and {@code to}.
     * <p>
     * Given the transitions {@code A -&gt; B -&gt; C -&gt; D -&gt; E}, a call to
     * {@code getShortestStatePathBetween(B, D)} will return the list {@code [B, C, D]}.
     *
     * @param from the state to start looking from
     * @param to   the state to find a path to
     * @return either a list describing the shortest path from {@code from} to {@code to} (including themselves),
     * or null if no path could be found
     */
    private List<S> getShortestStatePathBetween(S from, S to) {
        final Set<S> reachableStates = getKeysWithoutValue(transitions.row(from));
        if (reachableStates.contains(to)) {
            final List<S> l = new ArrayList<>();
            l.add(from);
            l.add(to);
            return l;
        }
        final List<List<S>> routes = new ArrayList<>();
        for (S reachableState : reachableStates) {
            final List<S> statesBetween = getShortestStatePathBetween(reachableState, to);
            if (statesBetween != null) {
                routes.add(statesBetween);
            }
        }
        final Optional<List<S>> shortestRoute = getShortestList(routes);
        if (shortestRoute.isPresent()) {
            shortestRoute.get().add(0, from);
            return shortestRoute.get();
        } else {
            return null;
        }
    }

    protected static <T> Set<T> getKeysWithoutValue(Map<T, ?> map) {
        return map.entrySet().stream().filter(e -> e.getValue() != null).map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    protected static <T> Optional<List<T>> getShortestList(List<List<T>> lists) {
        return lists.stream().min((l1, l2) -> l1.size() - l2.size());
    }
}

StateMachineBuilder.java

package fsm;

import com.google.common.collect.ArrayTable;
import com.google.common.collect.Table;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**
 * Configuration class for creating enum based StateMachines.
 * <p>
 * To create a builder, call the static factory method {@link StateMachineBuilder#create(Class)}.
 * <p>
 * Configuration is fluently done using {@link StateMachineBuilder#from(Enum)} and
 * {@link fsm.StateMachineBuilder.TransitionAdder#to(Enum)}.
 * <p>
 * Example usage:
 * <pre>
 * StateMachineBuilder&lt;SomeEnum&gt; builder = StateMachineBuilder.create(SomeEnum.class);
 *
 * builder.from(SomeEnum.A).to(SomeEnum.B)
 *        .from(SomeEnum.B).to(SomeEnum.C).to(SomeEnum.D)
 *        .from(SomeEnum.A).to(SomeEnum.C, () -&gt; System.out.println(&quot;Transition to C&quot;));
 *
 * StateMachine&lt;SomeEnum&gt; stateMachine = builder.startAt(SomeEnum.A);
 * </pre>
 *
 * @param <S> the used Enum
 */
public class StateMachineBuilder<S extends Enum<S>> {
    private final Table<S, S, List<Runnable>> transitions;

    private StateMachineBuilder(S[] validStates) {
        final List<S> valueList = Arrays.asList(validStates);
        transitions = ArrayTable.create(valueList, valueList);
    }

    public static <T extends Enum<T>> StateMachineBuilder<T> create(Class<T> e) {
        return new StateMachineBuilder<>(e.getEnumConstants());
    }

    public TransitionAdder from(S state) {
        return new TransitionAdder(transitions.row(state));
    }

    /**
     * Creates a new {@link StateMachine} using the current configuration.
     *
     * @param initialState the starting state of the state machine
     * @return a new StateMachine
     */
    public StateMachine<S> startAt(S initialState) {
        return new StateMachine<>(transitions, initialState);
    }

    public class TransitionAdder {
        private final Map<S, List<Runnable>> transitionsTo;

        private TransitionAdder(Map<S, List<Runnable>> transitionsTo) {
            this.transitionsTo = transitionsTo;
        }

        /**
         * Creates a new transition to {@code state}, executing the transition {@code transition} when
         * switching the state to it.
         *
         * @param state      the state to create the transition to
         * @param transition a functional interface which should be executed when transitioning to {@code state}
         */
        public TransitionAdder to(S state, Runnable transition) {
            List<Runnable> runnables = transitionsTo.get(state);
            if (runnables == null) {
                runnables = new ArrayList<>();
                transitionsTo.put(state, runnables);
            }
            runnables.add(transition);
            return this;
        }

        /**
         * Creates a new transition to {@code state} without an action.
         *
         * @param state the state to create the transition to
         */
        public TransitionAdder to(S state) {
            List<Runnable> runnables = transitionsTo.get(state);
            if (runnables == null) {
                runnables = new ArrayList<>();
                transitionsTo.put(state, runnables);
            }
            return this;
        }

        /**
         * @see StateMachineBuilder#from(Enum)
         */
        public TransitionAdder from(S state) {
            return StateMachineBuilder.this.from(state);
        }

        /**
         * @see StateMachineBuilder#startAt(Enum)
         */
        public StateMachine<S> startAt(S initialState) {
            return StateMachineBuilder.this.startAt(initialState);
        }
    }
}
Finite State Machine supporting shortest path transitions
java;state machine
null
_softwareengineering.284905
Can you tell me the value of using a PHP encoder (ionCube, phpSHIELD) when there is currently a service like decry.pt (http://www.decry.pt/) that can easily decode the source code? I have tried decry.pt's free demo: just drag and drop an encoded source file and it will return the decoded one. It is that easy. It seems the value of an encoder can easily be cancelled out.
What is the value of PHP encoders when decoders are readily available?
php;security
null
_webapps.2088
I have seen a couple of web apps that say something along the lines of: "Use your Twitter account -> User ___ Password ___", and then they take you to some other page. After all these phishing warnings, why should I trust one of those apps?
Am I to trust my user/password to some apps that claim to be integrated with Twitter?
phishing;twitter integration
What sites should do is use Twitter's OAuth to sign in: it will redirect you to Twitter, where you will be asked if you want to share details (never your password). External sites will soon no longer be able to sign users into Twitter using a username and password, so behaviour like this will soon be going the way of the dodo. To see it in action, I have built a site for the Stack Apps API that uses Twitter OAuth, called Stack of Twits.
_unix.218999
The default interface of Zorin OS is a Windows 7-like interface, which is what I am looking for. The gallery page of the Zorin OS website shows the option to change to a Windows 7, XP or 2000 interface. I downloaded the free Zorin OS Lite and installed it next to Windows 7 (dual boot). After I installed it, the interface was a Windows 2000-like interface. I found out in the Zorin OS look changer that I only have the option of changing the look to Windows 2000 or Mac OS X, and I don't like either of them.

This is for a former Windows user; he has an old PC with Win7 that's just too slow, and I want to make his transition to Linux as enjoyable as possible. Hence the Windows 7 look is needed.
How to install the Windows 7 look on Zorin OS, Ubuntu Based Distro
zorin
null
_webmaster.34383
Possible Duplicate: Self hosted Web Analytics like urchin

There are sites that provide web analytics solutions, but I do not want a dependency on external sites; I want to embed an analyzer in my PHP-based website. Is there an off-the-shelf solution I can embed in my site with no dependency on other sites? What are the pros and cons of each one? Which one is most extensible?
Is there any alternative to web analytics websites?
traffic;analytics
null
_softwareengineering.263912
I'm new to the open source community. When I look at projects on GitHub, I don't see any forum; there's only an issues page. Is that only meant for submitting bugs, or can I say other things? For example:

1. Can I suggest features? e.g. "I have an idea. It would be great if this project had this and that."
2. Can I ask questions like "How does this work?" or "What's the syntax for this?"
3. Is it fine to describe problems that I'm having? e.g. "I tried this but it didn't work. Maybe I'm doing something wrong. Can someone help me?"
4. Can I ask what the owners think about a pull request I want to send? e.g. "I'm going to implement this. Would you accept it?"
What can I say in Github issues?
github
Yes to all, but check to see if there is a better place for 2 and 3. Stack Overflow should be your first port of call: first to check whether your question has already been asked, and second to construct a question. 1 and 4 are most certainly the sort of things you can create an issue for. The maintainers are likely to tag them as 'Feature request', 'Improvement', etc.
_unix.299432
I stumbled on an outdated man page on my system (Ubuntu 14.04), and I want to reinstall all man pages. I tried to use sudo mandb, and also sudo apt-get install man-db and sudo apt-get install manpages-dev; none of them worked. What are the other options?
Why do I have outdated man pages and what can I do about it?
upgrade;man
The man pages on your system correspond to the software that's installed on your system. It would be bad if you had documentation that didn't describe the software you're running! Ubuntu 14.04, by definition, includes software versions released at least a few months before April 2014. Reinstalling man pages won't give you more recent versions; reinstalling just gives you what you already had.

If you want your system to have recent documentation, you need to upgrade your distribution. You'll get more recent software and the associated documentation.

If you want to read documentation for software that you don't have installed, then just read it on a website.

If you want easy access to software and documentation that's newer or older than your distribution, you can install another distribution on your machine (e.g. another release of Ubuntu), either in a virtual machine or in a chroot. See "How do I run 32-bit programs on a 64-bit Debian/Ubuntu?"
_codereview.145820
At the moment I have the following code, which creates or updates some entity from DTO items:

public void Upsert(List<Object> items)
{
    var existings = _db.Get().Where().ToList();
    foreach (var item in items)
    {
        var existing = existings.FirstOrDefault(x => x.Id == item.SomeId);
        if (existing == null)
        {
            if (item.Property != 645)
            {
                throw new EntityNotFoundException();
            }
            // create new
        }
        else
        {
            // update existing
        }
    }
    _db.Save();
}

I want to move the if (item.Property ...) logic out of this layer. I cannot add a method to Object; it is simply a DTO. I also cannot subclass UpsertService.

What I can do is pass an Action<Object> checkWhenNotFound to the Upsert method. Disadvantage: I need to write all that mocking, It.IsAny<Action<Object, NotOnlyObjectInReal, AndMaybeMore>>, in tests.

The other way is to inject the checker inside the Upsert caller's constructor:

interface IUpsertService
{
    void Upsert(List<Object> items);
    void SetChecker(Action<Object>);
}

public Owner(IUpsertService service)
{
    _service = service;
    service.SetChecker(CheckWhenNotFound);
}

private CheckWhenNotFound(Object item)
{
    if (item.Property != 645)
    {
        throw new EntityNotFoundException();
    }
}

Is there a common solution to such a problem?
Business rule check inside DAL service
c#
Can't add a comment, so I'm going to try to answer the question.

If I'm understanding the question correctly, you don't want the responsibility of validation to be in the Upsert method. If that is the case, then you can invert the control to another class by passing a 'Validator', which I think is what you are alluding to in the second part of the question, but I would recommend the following implementation.

In a nutshell, you create a class which is responsible for validation and inject it into the main class. This makes it testable as well.

private readonly IValidator _validator;

public Constructor(IValidator validator)
{
    _validator = validator;
}

public void Upsert(List<Object> items)
{
    var existings = _db.Get().Where().ToList();
    foreach (var item in items)
    {
        var existing = existings.FirstOrDefault(x => x.Id == item.SomeId);
        if (existing == null)
        {
            _validator.ValidateEntity(existing);
        }
        else
        {
            // update existing
        }
    }
    _db.Save();
}

An example implementation of the IValidator interface would be:

public class Validator : IValidator
{
    void ValidateEntity(object existing)
    {
        if (item.Property != 645)
        {
            throw new EntityNotFoundException();
        }
    }
}

You could take this further, use generics, and allow the Validator class to decide what the object is and what needs validating.
_unix.287024
Pseudocode:

ln -s $HOME/file $HOME/Documents/ $HOME/Desktop/

where I want to create a symlink from one source to two destinations, probably using moreutils and pee. How can you create many symlinks from one source?
ln -s: from one source to many destinations
symlink
You can't do this with a single invocation of ln, but you could loop through all the necessary destinations:

$ for i in $HOME/Documents/ $HOME/Desktop/; do ln -s $HOME/file $i; done
_cstheory.38305
The answer to my last question on the subject made several insightful points on how EAL could be used as the basis of a practical programming language, which, in turn, could be evaluated using the abstract part of Lamping's algorithm. I understand most of the practical remarks, and they match my experimentation. I don't, though, understand in a precise manner the restrictions I must follow in order to ensure my terms are EAL-typeable.

These are the typing rules for EAL*, as presented here:

I have a vague understanding of what this is saying. I do understand that functions can only be applied to terms without !s, and I understand there is a rule to introduce and remove those !s, but I don't fully grasp it. What, exactly, are the restrictions imposed by EAL? What does it mean for a term to be stratified? What are those boxes about? I'd highly appreciate (non-PhD) resources to catch up on the understanding I'm missing here.
What, in simple terms, are the restrictions imposed by Elementary Affine Logic?
cc.complexity theory;lambda calculus;interaction nets
The terms stratification and boxes come from proof nets. Elementary linear logic ($\mathbf{ELL}$) was originally introduced by Girard as a variant of light linear logic ($\mathbf{LLL}$), and its execution was formulated in terms of proof nets. It is on proof nets that the elementary bound is satisfied, i.e., every $\mathbf{ELL}$ proof net $\pi$ may be reduced to its cut-free form in a number of steps bounded by
$$\left.2^{\vdots^{2^s}}\right\}d$$
where $s$ and $d$ are the size and the depth of $\pi$ (maybe the height of the tower is not exactly $d$; it is linear in $d$, I guess).

The size of a proof net is similar to the size of a $\lambda$-term. The depth, on the other hand, is the maximum number of nested boxes in $\pi$. A box is basically a sub-proof net which may be duplicated or erased: remember that in linear logic there is no free contraction or weakening, which means that proofs in general may not be duplicated or erased; those that can must be marked with a special construct, called a box.

Stratification refers precisely to the depth. It is an informal word that people in the linear logic community use to describe the restricted cut-elimination dynamics typical of systems like $\mathbf{ELL}$. In full linear logic proof nets, cut-elimination may completely alter the depth: if $\pi\to\pi'$ by means of a cut-elimination step and $a$ is a node of $\pi$ at depth $i$ which has a residue $a'$ in $\pi'$, the depth of $a'$ may be anything between $0$ ($a$ is pulled out of all boxes) and $2d$, where $d$ is the depth of $\pi$ ($a$ is at maximal depth and it enters a box which is also at maximal depth). On the contrary, $\mathbf{ELL}$ proof nets, because of the structural constraints that define them, have the remarkable property that the depth is invariant under cut-elimination: in the above case, the depth of $a'$ is exactly $i$. This stratification property is essential in proving the complexity bounds of light logics (elementary for $\mathbf{ELL}$ and polynomial for $\mathbf{LLL}$).

From the point of view of Lamping's algorithm, the depth corresponds to the integer label of fan nodes. Stratification means that a sharing graph corresponding to an $\mathbf{ELL}$ proof net will need no brackets and croissants to be evaluated, because the labels of fan nodes do not change.

After Girard, people started applying the principles of light and elementary linear logic to define type systems for usual $\lambda$-terms (instead of proof nets) which would ensure interesting normalization properties. The paper you mention falls in this line of work, which explains the terminology it uses. Informally, typing a simply-typed $\lambda$-term amounts to decorating it with boxes, which is what their so-called pseudo-terms are for.

For the rest, $EAL^\star$ is just like any other non-trivial type system for $\lambda$-terms: there is no simple description of what a typable term looks like; the shortest description is its type derivation!
_unix.184945
I've spent the greater part of two hours trying to see if I could find a way to redirect the output of a bash script to the line that called the bash script. The best example I can use to describe what I'm looking for is the completion function that gets executed when one presses [TAB]:

$ech[tab] #--> $echo

I should note that the #--> was supposed to represent how the completion finished off the word echo. The closest thing I was able to find was setting certain options for echo so that \b\b can be read and stdout can update the same line. But this is not what I'm looking for. I'd like to be able to output directly to either the current or the next line within the console input. Is this even possible from a bash script?

Edit: Thanks to all who responded. I feel like I should update my original question, and possibly even change the question title itself. The scope of my question changed in the following way: this is no longer a question about what I can or can't accomplish via scripting alone. My original goal was to emulate the completion function, which I've come to learn is actually a built-in bash command. It seems that the specific library I should pay attention to (in order to even begin trying to emulate bash's tab completion) is the Readline library. bash-completion itself was implemented in C(++?). In any case, thanks again for the insightful responses, and if anyone has tips beyond what is stated in the above article (maybe from personally using the Readline or completion libraries), I look forward to reading your comments.
Set bash script output to the line that called bash script
linux;bash;shell script;io redirection;io
null
_webmaster.91140
Plesk allows adding "Additional nginx directives" for hosted websites. I'm trying to use this to set a 410 status on some old directories, but I can't find anything as simple as RewriteRule ^rss/ - [G] was in .htaccess rules. I've seen

location /rss/ { return 410; }

suggested, but that doesn't work. Is there a simple nginx directive to 410 a whole directory in Plesk?
Plesk 410 Directory
nginx;plesk;410 gone
Use:

location ^~ /rss {
    return 410;
}

Regex matches get applied earlier than other prefix rules, and the ^~ modifier makes this prefix match take precedence over them, so your rule gets applied first. The problem may be that some other (regex) rule was getting applied first.
_webapps.20118
I love Dropbox. But Dropbox for Teams seems most appropriate for Teams with few members and a lot of data. I'm working with a team that has lots of members, but not much data, so the Dropbox pricing seems high. Does anybody have a recommendation for a file synchronization server for teams with lots of people, but not a lot of data (<50GB)?
Is there cost effective file synchronization service for large teams with not much data?
dropbox;files;synchronization
null
_softwareengineering.110176
I am writing an iPad application. The main view of the application will be a PDF. I have made considerable progress in parsing out the contents of the PDF. The application will also have at least two side views. These side views may or may not themselves be driven by PDF files; that is part of what I am trying to figure out.

The main PDF will contain some hidden buttons which cause the side view to show certain things. For example, there might be a button over the name Lincoln that brings up a side view about Abe Lincoln, another one over the name Washington that brings up a side view about George Washington, and so on. The creation of these hidden buttons should be driven by data in the main PDF. I'm thinking this means annotations.

Two questions:

1) Is there one type of annotation that I might prefer over another? Options I can see include actions, URIs, and maybe links, but the last one could be complicated by the numerous internal links within the PDF.

2) Should I use PDFs for the side views? What are the arguments, pro and con?

Considerations:

A) Ease of moving the app to other platforms later.

B) Piracy. I would prefer that someone who got my PDFs not be able to reproduce the app without some work.

Edit: answering the questions posed in an answer:

By action, I mean a PDF action. See section 12.6 of the PDF specification (http://www.adobe.com/devnet/pdf/pdf_reference.html).

The main view is a PDF because that's the format the content author is comfortable creating. But since this PDF will be embedded in the app, I have to assume that it could escape into the wild.
application architecture with one or more PDFs
architecture;ios;ipad;pdf
1) URIs and links are universal; they're your friends and enable anything. What is an "action" for you?

2) Why are you using a PDF in the center view? Is it some sort of PDF reader? If yes, why do you bother parsing instead of embedding an existing PDF reader? If you're already using PDF for the center, why would it be any worse to use it on the side?

A) If you're concerned about other platforms, why not make it web-based?

B) All source code can be stolen. Your app will always be much easier to copy than to write, and if you want to keep your market share, your only solution is to remain ahead in terms of customer perception (i.e. no one switches from your app to the copy, and people switch from the copy to your app, because you have the features first and it works better).
_webapps.101407
New to Cognito... I want to create a form so that each section is filled out by a different user. Example:

User A fills out section 1 and sends the form to User B.
User B fills out section 2 and sends the form to User C.
User C fills out section 3 and sends the form to upper management for approval.

Is this possible? How?
Multiple people fill out single Cognito form
cognito forms
null
_unix.367925
Following this question, "CIFS randomly losing connection to Windows share", about share problems on a Debian Jessie server mounting a remote Windows CIFS directory hosted by a Windows server: I just found out I have something like 12 copies of the same remote CIFS mount point, mounted with the same name in the same directory, after doing sudo mount -a. How can that happen? How can I prevent it?

In my /etc/fstab, some mounts are made with:

//10.2.1.2/XX/ZZ/YY /mnt/mount_point cifs credentials=/root/.smbcredentials,iocharset=utf8,file_mode=0770,dir_mode=0770,uid=1001,gid=1001 0 0

and some more with:

//10.2.1.2/XX/ZZ/YY /mnt/mount_point cifs credentials=/root/.smbcredentials,iocharset=utf8,file_mode=0770,dir_mode=0770,uid=1001,gid=1001,vers=2.1 0 0

Example of the multiple mount points:

$ mount
//10.2.1.2/XX/ZZ/YY on /mnt/mount_point type cifs (rw,relatime,vers=1.0,cache=strict,username=someusername,domain=XXX,uid=1001,forceuid,gid=1001,forcegid,addr=10.2.1.2,file_mode=0770,dir_mode=0770,nounix,serverino,mapposix,rsize=61440,wsize=65536,echo_interval=60,actimeo=1)
//10.2.1.2/XX/ZZ/YY on /mnt/mount_point type cifs (rw,relatime,vers=1.0,cache=strict,username=someusername,domain=XXX,uid=1001,forceuid,gid=1001,forcegid,addr=10.2.1.2,file_mode=0770,dir_mode=0770,nounix,serverino,mapposix,rsize=61440,wsize=65536,echo_interval=60,actimeo=1)
//10.2.1.2/XX/ZZ/YY on /mnt/mount_point type cifs (rw,relatime,vers=1.0,cache=strict,username=someusername,domain=XXX,uid=1001,forceuid,gid=1001,forcegid,addr=10.2.1.2,file_mode=0770,dir_mode=0770,nounix,serverino,mapposix,rsize=61440,wsize=65536,echo_interval=60,actimeo=1)
//10.2.1.2/XX/ZZ/YY on /mnt/mount_point type cifs (rw,relatime,vers=2.1,cache=strict,username=someusername,domain=XXX,uid=1001,forceuid,gid=1001,forcegid,addr=10.2.1.2,file_mode=0770,dir_mode=0770,nounix,serverino,mapposix,rsize=61440,wsize=65536,echo_interval=60,actimeo=1)
//10.2.1.2/XX/ZZ/YY on /mnt/mount_point type cifs (rw,relatime,vers=2.1,cache=strict,username=someusername,domain=XXX,uid=1001,forceuid,gid=1001,forcegid,addr=10.2.1.2,file_mode=0770,dir_mode=0770,nounix,serverino,mapposix,rsize=61440,wsize=65536,echo_interval=60,actimeo=1)
CIFS mounting multiple copies of the same share on the same mount point
debian;cifs
There was an open bug about this in Debian in the past, #589218: "cifs-utils: mount -a mounts cifs shares multiple times (+1 time for each call of mount -a)". However, the general consensus seems to be that this is a feature, not a bug. Please avoid running sudo mount -a when trying to recover the service, and start using:

sudo mount -o remount -a

Otherwise, you mount the remote share on your mount point yet again. On the other hand, the good news is that you can unmount the copies in the reverse order you mounted them; as a remediation manoeuvre, I would run the corresponding umount command n-1 times.
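If you already have a stack of duplicate mounts, a rough cleanup sketch (standard util-linux tools; adjust the mount point to yours):

while mountpoint -q /mnt/mount_point; do sudo umount /mnt/mount_point; done
sudo mount /mnt/mount_point

Each umount peels off the topmost copy of the stacked mount, and the final mount puts a single, fresh one back.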
_unix.213349
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?

What do these errors mean? I am getting them when I try to run two installations simultaneously. Is there any way to avoid them?
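For reference, a diagnostic I've seen suggested (it only identifies the lock holder, it doesn't fix anything):

sudo fuser -v /var/lib/dpkg/lock
ps aux | grep -E 'apt|dpkg'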
Multiple simultaneous instances of apt-get
apt;concurrency
null
_unix.26800
I currently have this log/run script as part of a runit service:

#!/bin/sh
set -e
exec svlogd -tt ./main

If I tail -f log/main/current, I don't see the service's output written in real time; it seems to be flushed only in 4K increments. So if the service is used lightly, I can't see the most recent log data unless I actually do an 'sv restart' on the service, in which case all data is written to the logs before the service is restarted. I've played around with the -l and -b arguments, but these did not have any effect (and I'm not even sure they matter at this point).
How do I get svlogd to write data more often within a runit job
linux;logs
It looks like the fault unfortunately lies with the daemon, which does not flush its stdout after writing the log data. svlogd itself only does line buffering: it writes complete lines to the log file as soon as they arrive on stdin, so if the data sits in the daemon's own stdio buffer, svlogd never sees it until the buffer fills (typically 4K) or the daemon exits.
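If the daemon can't be patched, one common workaround (assuming it is an ordinary dynamically linked program whose stdout goes through stdio buffering) is to force line buffering with stdbuf from GNU coreutils in the service's main run script; mydaemon below is a placeholder for whatever the service actually executes:

#!/bin/sh
set -e
exec stdbuf -oL mydaemon 2>&1

The daemon's output then reaches svlogd line by line instead of in 4K chunks.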
_softwareengineering.299247
In ISO-8601-style time patterns there are multiple hour formats; one of them is kk, for hours 1-24. What is the purpose of this? Are there countries that offset their time? Is it for military usage? The Wikipedia article didn't clarify the exact relationship between HH and kk. The main source of my confusion is the behaviour of the formats in SimpleDateFormat.

Edit: the part of the SimpleDateFormat documentation that I'm referring to is this:

H    Hour in day (0-23)    Number    0
k    Hour in day (1-24)    Number    24

In usage...

HH:mm:ss // 00:00:00
kk:mm:ss // 24:00:00
What is the difference between kk and HH+1 in ISO-8601?
java;date format
Direct answer to your question: kk is not HH+1. The two patterns render every hour from 01 to 23 identically; they differ only at midnight, which HH renders as 00 and kk renders as 24, so there is no +1 computation involved for any other value. My source can be found at: http://www-01.ibm.com/support/knowledgecenter/SSKM8N_8.0.0/com.ibm.etools.mft.doc/ak05616_.htm?lang=pt

Here you find this description:

HH    hour of day in 24 hour form (00-23)
kk    hour of day in 24 hour form (01-24)

Example: HH 00 is kk 24, while HH 13 is still kk 13, and so on for every other value. As for the usage, it exists for i18n purposes, since different parts of the world use different notations: in some of them, a time at the end of the day is written 24:00 rather than 00:00. Normally you would only use kk in situations where business hours run up to or past the end of the day -- TV stations, for example.
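A quick way to see this with SimpleDateFormat (standard java.text API; only the sample date is made up):

import java.text.SimpleDateFormat;
import java.util.Calendar;

Calendar cal = Calendar.getInstance();
cal.set(2015, Calendar.JULY, 1, 0, 0, 0);  // midnight
System.out.println(new SimpleDateFormat("HH:mm:ss").format(cal.getTime())); // 00:00:00
System.out.println(new SimpleDateFormat("kk:mm:ss").format(cal.getTime())); // 24:00:00
cal.set(Calendar.HOUR_OF_DAY, 13);         // 1 PM
System.out.println(new SimpleDateFormat("kk:mm:ss").format(cal.getTime())); // 13:00:00, same as HH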
_webapps.56643
I'm using the Admin SDK with Google Apps Script to create a directory of users' names and emails on a Google Site. The code seems to work fine, because when I view the logs I see the results:

[14-02-04 12:24:05:996 GMT] Joe Ardee ([email protected])
[14-02-04 12:24:05:997 GMT] The Headmaster ([email protected])
[14-02-04 12:24:05:997 GMT] Edward Smith ([email protected])

After I publish it as a web app and add it to my page using the insert-script function, I get a 500 error; on the page I get a message saying "Google Drive encountered an error". When I published the app I set it so that only I can access it. I have also enabled the API and enabled API access in the admin console. Here is the code I am using:

function listAllUsers() {
  var users = AdminDirectory.Users.list({domain: 'example.com'}).users;
  if(users.length != 0) {
    for (var i=0; i<users.length; i++) {
      var user = users[i];
      Logger.log('%s (%s)', user.name.fullName, user.primaryEmail, user.phones);
    }
  } else {
    Logger.log('No users found.');
  }
}

(Instead of example.com in domain I have used the correct domain.)
500 error with Google admin sdk
google apps script;google sites
That's because a published web app needs a whole different approach: the UI must be built and returned from a doGet(e) function. Use the following code.

Code

function doGet(e) {
  var app = UiApp.createApplication();
  var flex = app.createFlexTable();
  var users = AdminDirectory.Users.list({domain: 'jacobjantuinstra.nl'}).users;
  if(users.length != 0) {
    for (var i=0; i<users.length; i++) {
      var user = users[i];
      flex.setWidget(i, 0, app.createLabel(user.name.fullName));
      flex.setWidget(i, 1, app.createLabel(user.primaryEmail));
    }
  } else {
    flex.setWidget(0, 0, app.createLabel('No users found.'));
  }
  app.add(flex);
  return app;
}
_softwareengineering.339018
I'm trying to set up very simple user-defined variables that can be set in an administration panel and used throughout the system. Our application is moving in the direction of configuration over custom development. This would be almost identical to setting a property in Spring and then injecting it with @Value("${property}") boolean foo; The best way I can think of is to have a table where every value is stored as a String along with its associated data type, then a class that acts like the application context: it pulls the values from the database, casts them to the specified data type (handling errors), and returns them through some sort of generic get method. Is there a more pre-built way, perhaps leveraging what Spring already has? Right now we have a property file managed by an 'application specialist', but the customers want an admin to be able to alter basic values from the UI without restarting the application or contacting someone.
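To make the question concrete, a rough sketch of the lookup class I have in mind (all names are hypothetical):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

public class DbSettings {
    private final Map<String, String> raw = new ConcurrentHashMap<>();

    // Reloaded periodically, or whenever an admin saves; rows are (key, value).
    public void refresh(DataSource ds) throws SQLException {
        try (Connection c = ds.getConnection();
             ResultSet rs = c.createStatement()
                             .executeQuery("SELECT prop_key, prop_value FROM app_settings")) {
            while (rs.next()) {
                raw.put(rs.getString(1), rs.getString(2));
            }
        }
    }

    // Typed getters do the casting that @Value would otherwise handle.
    public boolean getBoolean(String key) { return Boolean.parseBoolean(raw.get(key)); }
    public int getInt(String key)         { return Integer.parseInt(raw.get(key)); }
}

Is there a Spring facility that already covers this, so I don't have to hand-roll the casting and refresh logic?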
Implementing Data Type Independent User Set Values
java;spring;generics;data types;properties
null
_unix.195608
I am using Linux Mint and I am a newbie. Someone is trying to access my computer via my MAC address and open ports. I have some questions: I know that there are different types of ports, like TCP and UDP. Should I close ALL the open (listening) ports, both TCP and UDP, to keep my computer safe from hacking? And how do I close a port, if that is required?
Close the necessary ports
linux;firewall;tcp;udp
null
_reverseengineering.1727
I would like more information about the mathematical foundations of vulnerability research and exploit development. Pointers in the right direction to online sources or books would be helpful.
mathematical background behind exploit development and vulnerabilities
vulnerability analysis
I would read up on static program analysisStatic program analysis is the analysis of computer software that is performed without actually executing programs (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code and in the other cases some form of the object code.dynamic program analysis,Dynamic program analysis is the analysis of computer software that is performed by executing programs on a real or virtual processor. For dynamic program analysis to be effective, the target program must be executed with sufficient test inputs to produce interesting behaviorabstract interpretation,In computer science, abstract interpretation is a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices. It can be viewed as a partial execution of a computer program which gains information about its semantics (e.g. control-flow, data-flow) without performing all the calculations.symbolic execution,In computer science, symbolic execution (also symbolic evaluation) refers to the analysis of programs by tracking symbolic rather than actual values, a case of abstract interpretation. The field of symbolic simulation applies the same concept to hardware. Symbolic computation applies the concept to the analysis of mathematical expressions. Symbolic execution is used to reason about all the inputs that take the same path through a program.symbolic computation,In mathematics and computer science, computer algebra, also called symbolic computation or algebraic computation is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objectssymbolic simulation,In computer science, a simulation is a computation of the execution of some appropriately modelled state-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor.model checking,In computer science, model checking aka property checking refers to the following problem: Given a model of a system, exhaustively and automatically check whether this model meets a given specification.might want to read System Assurance: Beyond Detecting Vulnerabilities.Rolf probably has a ton of really good input on this subject. Read about his advice here
_webmaster.46855
We use Joomla with Remository to store and manage publications (don't ask me why). Files (PDF) are stored in a database and can be accessed via dynamic, rewritten links of the form

http://domain.de/some/path/filename.html

Here is an example: some file

Current browsers reliably detect that they are getting a PDF. wget saves it under the .html filename, but after renaming I get a working PDF file. curl behaves similarly; piping its output into a (suitably named) file gives a working file. All this leads me to believe that -- against all odds, one might say -- the data our system serves is generally valid and understandable for clients. However, Google does not seem to index PDF files referenced by such links. Our publication list is indexed, but the PDFs linked there are not (they don't show up in web or Scholar searches). How can we tell search robots to retrieve our files and index them?
How to make Google index files retrieved from database?
seo;indexing;links;pdf;dynamic
null
_unix.334110
I'm using pfSense, which uses a customised base of FreeBSD 10. pkg -vv shows the following relevant definitions:PKG_DBDIR = /var/db/pkg;PKG_CACHEDIR = /var/cache/pkg;PORTSDIR = /usr/ports;REPOS_DIR [ /etc/pkg/, /usr/local/etc/pkg/repos/,]Repositories: pfSense-core: { url : pkg+https://pkg.pfsense.org/pfSense_v2_3_2_amd64-core, enabled : yes, priority : 0, mirror_type : SRV } pfSense: { url : pkg+https://pkg.pfsense.org/pfSense_v2_3_2_amd64-pfSense_v2_3_2, enabled : yes, priority : 0, mirror_type : SRV }Looking at the two directories named in REPOS_DIR:/etc/pkg contains what looks like a default FreeBSD.conf (enabled=yes)./usr/local/etc/pkg/repos contains a different FreeBSD.conf (enabled=no) and also a pfsense.conf that contains the two repo definitions reported by pkg -vv.There is also /usr/local/share/pfSense/pkg/repos which contains the same FreeBSD.conf and pfsense.conf as /usr/local/etc/pkg/repos (the latter under a different filename though: pfSense-repo.conf), and also a link to further development repos in a separate file pfSense-repo-devel.conf.I'm trying to work out the logic by which pkg chooses which of these overrides which others, especially since when a priority is given, in each case it's the same (=0). Does a /usr/local/etc/pkg/*.conf file automatically override a similarly-named file at /etc/pkg/*.conf, if both are present? If not, what's going on and how is pkg choosing which repos to pay attention to?
How pkg is choosing its repos (FreeBSD 10)
package management;freebsd;pfsense
The behaviour is actually all documented in the manual (q.v.). REPOS_DIR is taken from pkg.conf, and its directories are processed in the order given; the files in each directory are processed in alphabetical order. There is no notion of comparing filenames. Rather, a file that is processed later overrides anything earlier that it conflicts with.

Further reading: pkg.conf, §5, FreeBSD Manual, 2015.
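In concrete terms, that is why the stock FreeBSD repository stays disabled on your box: /etc/pkg/FreeBSD.conf (enabled: yes) is processed first, and the later /usr/local/etc/pkg/repos/FreeBSD.conf overrides it. The usual pkg idiom for disabling it yourself is exactly such an override file:

# /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: { enabled: no }

The two pfSense repositories defined alongside it in pfsense.conf are then the only active ones.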
_unix.247034
I have been having a lot of trouble getting an encrypted multi-disk root filesystem to boot up reliably under systemd on Debian Jessie while only having to enter the password once. Previously I handled this in Debian by using the decrypt_derived keyscript in /etc/crypttab for every device except the first, and this worked well. However, that does not play well once systemd is introduced. systemd-cryptsetup-generator does not handle keyscripts, and when I tried to find more information about how to solve this, I only found vague references to some custom password agent in an email from one of the systemd developers, which only gives the unhelpful advice that "it is easy to write additional agents. The basic algorithm to follow looks like this", followed by a list of 13 steps to take. Clearly not meant for an end user.

In Debian, I have got it to work to some degree by playing with a couple of kernel options that tell systemd to ignore /etc/crypttab during boot, or to ignore it completely. Debian's update-initramfs will copy the keyscript to the initramfs and unlock the devices before systemd takes over, but I have found that this leads to issues later, because systemd then does not have any unit files for the decrypted devices, so mounts that rely on them sometimes seem to hang or get delayed. One place where this breaks is when trying to mount btrfs subvolumes: they are mounted from the same physical device as root, but systemd is not aware that the devices are already unlocked, and halts at boot.

TL;DR -- my actual question: What is the systemd way to handle an encrypted root filesystem spanning multiple devices (be it a btrfs filesystem, an LVM mirror, etc.) where you only need to enter the password once? I hardly consider this an exceptionally unusual case, so here's hoping that there is a method in place to do this.

Some possible solutions that come to mind:

A tiny encrypted partition containing a keyfile, which is unlocked before root; the root devices would refer to this keyfile. How would I tell systemd about this?

Some sort of caching password agent running in the initramfs, which remembers the password and hands it to all devices needing it at boot.

Someone has already written a systemd agent emulating decrypt_derived. How would I integrate it into my boot procedure?

I run Debian exclusively, but after having tried for days to find a solution to my problem, I feel that this is perhaps a more system-wide problem.
What is the proper way to unlock a root filesystem spanning two LUKS devices by only entering the password once, using systemd?
systemd;btrfs;luks;root filesystem
This is a well-known problem, currently without a solution. On Debian (and other systems), systemd fails to assemble an encrypted BTRFS array because of its parallel processing and various checks. All the volumes of a BTRFS array must be present for it to be mounted (properly), but since all the volumes of the array share the same UUID (by design), systemd tries to mount the first volume it opens without waiting for the others (which would expose the same UUID, confusing systemd even more). Currently, the only way to use encrypted multi-device BTRFS volumes on Debian is to not use systemd (packages sysvinit-core, systemd-shim, etc.). There is no possible systemd way at present.
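For reference, the switch away from systemd on Jessie is roughly the following (package names as above; treat this as a sketch and test it in a VM first, since init swaps can leave a system unbootable):

apt-get install sysvinit-core systemd-shim
# reboot, then verify which init is running:
ps -p 1 -o comm=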
_unix.269739
I'm running a live CD linux distro and I'm getting out of memory exceptions. >java -version#Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000646e00000, 264241152, 0) failed; error='Cannot allocate memory' (errno=12)## There is insufficient memory for the Java Runtime Environment to continue.# Native memory allocation (mmap) failed to map 264241152 bytes for committing reserved memory.# An error report file with more information is saved as:# /tmp/hs_err_pid50274.logI ran free -m command and it shows ~250Mb of free RAM and 19Gb used for cache.>free -m total used free shared buffers cachedMem: 24128 23827 301 0 15 18929-/+ buffers/cache: 4881 19247Swap: 0 0 0Here is the memory dump:--------------- S Y S T E M ---------------OS:RapidLinux 20151103uname:Linux 3.18.22 #1 SMP Fri Oct 9 19:28:11 UTC 2015 x86_64libc:glibc 2.21 NPTL 2.21 rlimit: STACK 8192k, CORE infinity, NPROC 96487, NOFILE 4096, AS infinityload average:2.08 1.73 1.30/proc/meminfo:MemTotal: 24708040 kBMemFree: 307572 kBMemAvailable: 173696 kBBuffers: 15612 kBCached: 19383916 kBSwapCached: 0 kBActive: 3784768 kBInactive: 19327244 kBActive(anon): 3742084 kBInactive(anon): 19303520 kBActive(file): 42684 kBInactive(file): 23724 kBUnevictable: 15016 kBMlocked: 15016 kBSwapTotal: 0 kBSwapFree: 0 kBDirty: 96 kBWriteback: 0 kBAnonPages: 3727472 kBMapped: 55972 kBShmem: 19327344 kBSlab: 671580 kBSReclaimable: 116376 kBSUnreclaim: 555204 kBKernelStack: 23664 kBPageTables: 24588 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 12354020 kBCommitted_AS: 28666748 kBVmallocTotal: 34359738367 kBVmallocUsed: 738156 kBVmallocChunk: 34346400260 kBHardwareCorrupted: 0 kBAnonHugePages: 0 kBDirectMap4k: 11748 kBDirectMap2M: 2072576 kBDirectMap1G: 23068672 kBMemory: 4k page, physical 24708040k(307572k free), swap 0k(0k free)I tried to clear the cache by running sync ; echo 3 | sudo tee /proc/sys/vm/drop_caches as a sanity check and surprise surprise the cache did not go down at all, but the command completed successfully. There was a ton of old logs that I deleted (from the aufs / which should be in RAM), ran the command to clear the cache - still nothing.The rest of the file system takes only ~9Gb. How can I force my cache to clear?
How can I clear my cache?
kali linux;cache;out of memory
null
_webapps.92466
You can ignore columns D, E, F and G because those are just alphabetically ordered. Basically, I want to pull the names from the responses sheet into this tab (see screenshot), which is what I did. But now: does a formula or script exist whereby responses with the same name get overwritten by the latest edit? For example: if AiwenLyra first signed up as 'Yes and Yes' and later signs up as 'No and No', I want the last answer to be the only one; answers would have to be replaced. Does anything like that exist at all? Thanks in advance!
Replacing last edited response with the old one in another tab
google spreadsheets
null
_softwareengineering.105630
How can I study C# from Stack Overflow? I have only the basics of the C# language from some simple exercises; now I want to advance to a higher level of C# through Stack Overflow by reading questions tagged c#. The questions are not ordered from easiest to hardest, though. Do you have any ideas on how to learn C# from Stack Overflow?
How can I study C# from Stack Overflow
c#
NO, you can't. But you can surely get help when you get stuck at some point. Stack Overflow is a Q&A website which provides answers to code-related questions from users all around the world in various languages, including C#. Questions asked on Stack Overflow are particular to a project, a user, and their requirements. Since they are very specific to a particular project, learning from them would be a nightmare and hellishly confusing. I suggest you start a project, like an accounting application or a website. Keep going, and when you get stuck, post the question on SO; the humble community will definitely get you through. Or you can read a book which builds sample applications; you can find various books in the Problem-Design-Solution format.
_codereview.39246
I'm learning JavaScript and I was trying to make a custom addEvent function that cares about compatibility (I don't want to use jQuery [nor any other library] yet, in order to master the basics of JavaScript first). I came across this code on GitHub (https://gist.github.com/eduardocereto/955642):

/**
 * Cross Browser helper to addEventListener.
 *
 * @param {HTMLElement} obj The Element to attach event to.
 * @param {string} evt The event that will trigger the binded function.
 * @param {function(event)} fnc The function to bind to the element.
 * @return {boolean} true if it was successfuly binded.
 */
var cb_addEventListener = function(obj, evt, fnc) {
    // W3C model
    if (obj.addEventListener) {
        obj.addEventListener(evt, fnc, false);
        return true;
    }
    // Microsoft model
    else if (obj.attachEvent) {
        return obj.attachEvent('on' + evt, fnc);
    }
    // Browser don't support W3C or MSFT model, go on with traditional
    else {
        evt = 'on'+evt;
        if(typeof obj[evt] === 'function'){
            // Object already has a function on traditional
            // Let's wrap it with our own function inside another function
            fnc = (function(f1,f2){
                return function(){
                    f1.apply(this,arguments);
                    f2.apply(this,arguments);
                }
            })(obj[evt], fnc);
        }
        obj[evt] = fnc;
        return true;
    }
    return false;
};

But I was not pleased with the solution even though it is very short and readable, so I made my own (below) with a little help from the book I'm reading, Secrets of the JavaScript Ninja. I want to know whether you think I have something wrong, and whether you have any comments or improvements that I might not be seeing. I commented on that gist the following (I'm pg2800 on GitHub):

ORIGINAL COMMENT: 12-January-2014. UPDATED 13-January-2014. I was intrigued by your implementation and I believe I added some best practices to the code in general, plus some improvements to the traditional way. All three implementations of the addEvent custom method below (meaning: with or without either of addEventListener or attachEvent -- forcing the browser to test all three) worked for: CHROME Version 32.0.1700.72 m; FIREFOX 26.0; EXPLORER Version 10.0.9200.16750. Needless to say, I didn't examine all possible scenarios in my test cases, only a few... Let me know what you think.

(function(){
    // I test for features at the beginning of the declaration instead of every time we have to add an event.
    if(document.addEventListener) {
        window.addEvent = function (elem, type, handler, useCapture){
            elem.addEventListener(type, handler, !!useCapture);
            return handler; // for removal purposes
        }
        window.removeEvent = function (elem, type, handler, useCapture){
            elem.removeEventListener(type, handler, !!useCapture);
            return true;
        }
    } else if (document.attachEvent) {
        window.addEvent = function (elem, type, handler) {
            type = "on" + type;
            // Bind the element as the context,
            // because attachEvent uses the window object to add the event and we don't want to pollute it.
            var boundedHandler = function() {
                return handler.apply(elem, arguments);
            };
            elem.attachEvent(type, boundedHandler);
            return boundedHandler; // for removal purposes
        }
        window.removeEvent = function(elem, type, handler){
            type = "on" + type;
            elem.detachEvent(type, handler);
            return true;
        }
    } else {
        // FALLBACK (I did some tests for both your code and mine; the tests are at the bottom.)
        // I removed the wrapping from your implementation and added closures and memoization.
        // Browser doesn't support the W3C or MSFT model, go on with traditional
        window.addEvent = function(elem, type, handler){
            type = "on" + type;
            // Applying some memoization to save multiple handlers
            elem.memoize = elem.memoize || {};
            // Just in case we haven't memoized the event type yet.
            // This code will be run just one time.
            if(!elem.memoize[type]){
                elem.memoize[type] = {
                    counter: 1
                };
                elem[type] = function(){
                    for(key in nameSpace){
                        if(nameSpace.hasOwnProperty(key)){
                            if(typeof nameSpace[key] == "function"){
                                nameSpace[key].apply(this, arguments);
                            };
                        };
                    };
                };
            };
            // Thanks to hoisting we can point to the nameSpace variable above.
            // Thanks to closures we are able to access its value when the event is triggered.
            // I used closures for the nameSpace because it improved performance by 44% on my laptop.
            var nameSpace = elem.memoize[type],
                id = nameSpace.counter++;
            nameSpace[id] = handler;
            // I return the id so we are able to remove a specific function bound to the event.
            return id;
        };
        window.removeEvent = function(elem, type, handlerID){
            type = "on" + type;
            // I remove the handler with the id
            if(elem.memoize && elem.memoize[type] && elem.memoize[type][handlerID])
                elem.memoize[type][handlerID] = undefined;
            return true;
        };
    };
})();

The first two (with addEventListener or attachEvent) run like the original ones; I didn't notice any differences. But for the traditional way: my original test was 150k repetitions of adding an empty function to the element's event and then running the event. As you wrap the handlers onto each other, JavaScript throws the error "Maximum call stack size exceeded", which is only natural. Then I tested for the maximum stack size allowed, which was 7816 (I made that my test size). The results of adding 7816 empty functions to the same type of event of the same element and then executing the event were:

Your code: minimum = 19ms, maximum = 33ms, average = 30ms.
My code: minimum = 20ms, maximum = 37ms, average = 27ms.

There is obviously no improvement in performance whatsoever, but we can now delete specific handlers, we have room for more handlers, and we can standardize our code with the same functions to add and remove events, so we don't have to worry about cross-browser considerations. If we had very little to no memory available, I would definitely go with your implementation. --> Tests done on a Sony Vaio, 8GB RAM, second-generation Core i7.

i.e.

var div = document.getElementById("divID"); // returns a div element
var handler = addEvent(div, "click", function(){ /* do something */ }, false);
/* more code */
removeEvent(div, "click", handler);

P.S. Pardon me if I made any grammatical or orthographic mistakes; English is not my native language.
JavaScript custom addEvent function to add event handlers
javascript;event handling
Your code is clean and consistent in style and formatting. Good job. I've noticed two small things that are not problems, but rather were unexpected to me and might trip you up coming back to this code in 6 months' time:

You end all your code blocks with }; when you don't need to. Your code:

if(something){
    // ...
};

This will not cause issues, but it isn't needed. You only need it for statements¹, not code blocks.

Statements:

var something = { someProp: true, other: 'test' };
var somethingelse = function () {
    // ...
};
myObject.someMethod();

Code blocks:

if(logicalTest){
    // ...
}
while(count < 0){
    // ...
}
function myFunction(){
    // ...
}

elem.memoize[type] = { counter: 1 }; and id = nameSpace.counter++; mean there will never be a handler with id 0. I'm not sure that is a problem, but I assumed it would start at 0 like all JavaScript lists, which are 0-based. In fact I might actually use a list:

elem.memoize[type] = [];
elem[type] = function(){
    for(var i = 0; i < elem.memoize[type].length; i++){
        if(typeof elem.memoize[type][i] == "function"){
            elem.memoize[type][i].apply(this, arguments);
        }
    }
};
// ...
    var nameSpaceList = elem.memoize[type],
        id = nameSpaceList.length;

¹ You don't technically need it for statements (there are ways of writing JavaScript without them), but I am personally not a fan.
_unix.384488
Device files are not files per se; they are an I/O interface for using devices in Unix-like operating systems. They use no space on disk; however, they still use an inode, as reported by the stat command:

$ stat /dev/sda
  File: /dev/sda
  Size: 0          Blocks: 0          IO Block: 4096   block special file
Device: 6h/6d      Inode: 14628      Links: 1     Device type: 8,0

Do device files use physical inodes in the filesystem, and why do they need them at all?
Why do special device files have inodes?
filesystems;devices;inode;stat
The short answer is that they do only if you have a physical filesystem backing /dev (and if you're using a modern Linux distro, you probably don't). The long answer follows:

This all goes back to the original UNIX philosophy that everything is a file. This philosophy is part of what made UNIX so versatile, because you could directly interact with devices from userspace without needing special code in your application to talk directly to the physical hardware.

Originally, /dev was just another directory with a well-known name where you put your device files. Some UNIX systems still take this approach (I believe OpenBSD still does), and you can usually tell when a system is like this because it will have lots of device files for devices the system doesn't actually have (for example, files for every possible partition on every possible disk). This saves memory and time at boot at the cost of a bit more disk space, which was a good trade-off for early systems because they were generally very memory-constrained and not very fast. This is generally referred to as having a static /dev.

On modern Linux systems (and I believe also FreeBSD and possibly recent versions of Solaris), /dev is a temporary in-memory filesystem populated by the kernel (or by udev if you use systemd, because its developers don't trust the kernel to do almost anything). This saves some disk space at the price of some memory (usually less than a few MB) and a very small processing overhead. It also has a number of other advantages, one of the biggest being that it's easier to detect hot-plugged hardware. This is generally referred to as having a dynamic /dev.

In both cases, though, device nodes are accessed through the regular VFS layer, which by definition means they have to have an inode (even if it's a virtual one that just exists so that things like stat() work as they are supposed to). From a practical perspective, this has zero impact on systems that use a dynamic /dev, because they simply store the inodes in memory or generate them as needed, and near-zero impact where /dev is static, because inodes take up almost no space on disk and most filesystems either have no upper limit on them or provision far more than anybody is likely to need.
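You can see both halves of this from the shell with standard tools; on a typical modern distro:

findmnt /dev    # shows devtmpfs (or tmpfs): a dynamic, in-memory /dev
df -i /dev      # the inodes counted here live in RAM, not on disk

On a system with a static /dev, the same commands would instead show /dev as part of the root filesystem, with its device nodes consuming ordinary on-disk inodes.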
_datascience.5166
I have the following data:

($x^1_i$, $y^1_i$) for $i=1,2,\ldots,N_1$
($x^2_i$, $y^2_i$) for $i=1,2,\ldots,N_2$
...
($x^m_i$, $y^m_i$) for $i=1,2,\ldots,N_m$

Is it possible to train a neural net to produce some $y_k$, where $k \le \min(N_1,\ldots,N_m)$, given an input ${x_1, x_2, ..., x_{k-1}}$? If so, any suggestions for documentation or libraries I can look at (preferably Python)?
training neural net with multiple sets of time-series data
machine learning;dataset;neural network;time series;regression
Yes, this is a straightforward application for neural networks. In this case the $y_k$ are the outputs of the last layer (the classifier): $x_k$ is a feature vector, and $y_k$ is what it gets classified into. For simplicity, prepare your data so that $N$ is the same for all series. The problem you may have is that in the case of time series you won't have enough data: you need (ideally) many thousands of examples to train a network, which in this case means thousands of time series, not points. Look at the specialized literature on neural networks for time-series prediction for ideas on network architecture. Library: try Pylearn2 at http://deeplearning.net/software/pylearn2/. It's not the only good option, but it should serve you well.
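To make the "same N for all" preprocessing concrete, a minimal sketch in Python/NumPy (independent of the training library; truncating to the shortest series is just one possible choice, and the names are mine):

import numpy as np

def equalize(series_list):
    # series_list: a list of (x, y) pairs, each a sequence of possibly different length.
    # Truncate every series to the length of the shortest one.
    n = min(len(x) for x, _ in series_list)
    X = np.stack([np.asarray(x)[:n] for x, _ in series_list])
    Y = np.stack([np.asarray(y)[:n] for _, y in series_list])
    return X, Y  # shapes (m, n): one row per series

Each row of X then yields input vectors x_1..x_{k-1} with the corresponding y_k as the training target.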
_codereview.67205
I have a servlet that processes user registration for a website. It takes inputs from an HTML form like username, password, email, etc.

MySQL table:

CREATE TABLE IF NOT EXISTS user(
    user_id VARCHAR(255),
    user_password VARCHAR(255) NOT NULL,
    user_last_name VARCHAR(255),
    user_first_name VARCHAR(255),
    user_email VARCHAR(255) UNIQUE NOT NULL,
    user_type TINYINT UNSIGNED NOT NULL, /* VALUES: 0 - Guest, 1 - Admin, 2 - User */
    PRIMARY KEY(user_id)
);

Servlet:

protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    String userId = request.getParameter("username");
    String userFirstName = request.getParameter("firstname");
    String userLastName = request.getParameter("lastname");
    String userEmail1 = request.getParameter("email1");
    String userEmail2 = request.getParameter("email2");
    String userPassword1 = request.getParameter("pass1");
    String userPassword2 = request.getParameter("pass2");
    String captchaAnswer = request.getParameter("answer");

    try {
        // simple captcha
        HttpSession session = request.getSession(true);
        Captcha captcha = (Captcha) session.getAttribute(Captcha.NAME);
        request.setCharacterEncoding("UTF-8");
        boolean isCaptchaCorrect = captcha.isCorrect(captchaAnswer);
        session.setAttribute("isCaptchaCorrect", isCaptchaCorrect);
        session.setAttribute("userId", userId);
        session.setAttribute("userFirstName", userFirstName);
        session.setAttribute("userLastName", userLastName);
        session.setAttribute("userEmail1", userEmail1);
        session.setAttribute("userEmail2", userEmail2);

        if(isCaptchaCorrect) {
            // put database entries into a String[]
            DatabaseManipulator dm = new DatabaseManipulator();
            String[] usernameArray = dm.dbEntriesToArray("user_id");
            String[] emailArray = dm.dbEntriesToArray("user_email");

            // validate inputs
            RegistrationModule rm = new RegistrationModule();
            boolean hasDuplicateUsername = rm.hasDuplicate(usernameArray, userId);
            boolean hasDuplicateEmail = rm.hasDuplicate(emailArray, userEmail1);
            boolean isEmailMatch = rm.isMatch(userEmail1, userEmail2);
            boolean isPasswordMatch = rm.isMatch(userPassword1, userPassword2);

            // bind objects to session
            session.setAttribute("hasDuplicateUsername", hasDuplicateUsername);
            session.setAttribute("hasDuplicateEmail", hasDuplicateEmail);
            session.setAttribute("isEmailMatch", isEmailMatch);
            session.setAttribute("isPasswordMatch", isPasswordMatch);

            // throw user-defined exceptions
            if(hasDuplicateUsername) {
                try {
                    throw new UsernameAlreadyExistsException();
                } catch(UsernameAlreadyExistsException uaee) {
                    // redirect to result page
                    response.sendRedirect("register-result.jsp");
                }
            } else if(hasDuplicateEmail) {
                try {
                    throw new EmailAlreadyExistsException();
                } catch(EmailAlreadyExistsException eaee) {
                    response.sendRedirect("register-result.jsp");
                }
            } else if(!isEmailMatch) {
                try {
                    throw new MismatchedEmailsException();
                } catch(MismatchedEmailsException mee) {
                    response.sendRedirect("register-result.jsp");
                }
            } else if(!isPasswordMatch) {
                try {
                    throw new MismatchedPasswordsException();
                } catch(MismatchedPasswordsException mpe) {
                    response.sendRedirect("register-result.jsp");
                }
            // register success
            } else {
                // assign if match
                String userPassword = userPassword1;
                String userEmail = userEmail1;

                // assemble user bean object
                User user = UserAssembler.getInstance(
                    userId, userPassword, userLastName,
                    userFirstName, userEmail, 2 // 2 = User
                );

                // insert user into database
                dm.registerUser(user);
                response.sendRedirect("register-result.jsp");
            }
        // wrong captcha answer
        } else {
            response.sendRedirect("register-result.jsp");
        }
    } catch(NullPointerException npe) {
        // redirect when servlet is illegally accessed
        response.sendRedirect("index.jsp");
    }
}

Everything works as it should. However, during a quick code review my instructor commented that I should not be catching NPEs. I am using the catch clause to redirect the user to index.jsp if they try to jump to the servlet URL without going through the required pages. My other servlets are formatted similarly. What is the best practice if catching NPE is discouraged?
User registration Servlet
java;mysql;error handling;null;servlets
What's even worse is all this:

try {
    throw new SomeException();
} catch (SomeException uaee) {
    response.sendRedirect("some-result.jsp");
}

It would be better to just do

response.sendRedirect("some-result.jsp");

directly. There is really no need to throw an exception just to catch that same exception on the next line.

if(hasDuplicateUsername) {
    response.sendRedirect("register-result.jsp");
} else if(hasDuplicateEmail) {
    response.sendRedirect("register-result.jsp");
} else if(!isEmailMatch) {
    response.sendRedirect("register-result.jsp");
} else if(!isPasswordMatch) {
    response.sendRedirect("register-result.jsp");
}

As for the NullPointerException, I assume that it is one of these that is null:

String userId = request.getParameter("username");
String userFirstName = request.getParameter("firstname");
String userLastName = request.getParameter("lastname");
String userEmail1 = request.getParameter("email1");
String userEmail2 = request.getParameter("email2");
String userPassword1 = request.getParameter("pass1");
String userPassword2 = request.getParameter("pass2");
String captchaAnswer = request.getParameter("answer");

The fix for this is easy: check whether they are null before using them:

if (userId == null || userFirstName == null || userLastName == null || userEmail1 == null || ...) {
    response.sendRedirect("index.jsp");
    return;
}
_softwareengineering.258247
I'm integrating with a shipping API built in PHP. They have a strange coding standard where the comments sit between the function name and the first curly bracket, which -- subjectively -- makes the code really hard to read. Is this a particular, albeit non-standard, commenting convention? Here's an example of such a function:

public function qualityControlDescription($qcCode)
/*
Converts a Quality Control code (e.g. 'U') to a descriptive string.
Input parameters (case-insensitive):
    $qcCode = a Quality Control code, as returned by invokeWebService
Returned:
    Description string (e.g. 'UNSERVICEABLE'), or "" if not found
*/
{
    if (is_null($qcCode)) {
        return "";
    }
    $descriptionMap = $this->qualityControlDescriptionMap();
    $returnVal = $descriptionMap[strtoupper($qcCode)];
    if (is_null($returnVal)) {
        $returnVal = "";
    }
    return $returnVal;
}
Is there a particular coding standard with comments between function name and body?
php;coding standards;comments
PHP code tends to use a block before the function:

/**
 * Is the given array an associative array?
 */
function isAssoc($arr) {
    return array_keys($arr) !== range(0, count($arr) - 1);
}

I have seen much C, Java, JavaScript, Perl, and other code that uses a similar block-before-function style. Other languages (Python comes to mind most quickly) do use the between-the-definition-and-the-code style. E.g.:

def is_string(s):
    """Is the given value `s` a string type?"""
    return isinstance(s, (str, unicode))

There are a number of other conventions that tend to be language- and/or documentation-system specific for documenting the types, purposes, and default values of parameters and return types. So that style is idiosyncratic for the PHP community, but not out of bounds considering all common documentation styles. Here is more on the PHP DocBlock style.
_unix.65194
Every network manager I've tried is incompatible with this version of GNOME's settings panel. I can still connect fine from the command line, but that's a pain. Solutions I have tried:

pacman -Syu networkmanager installed everything fine, but didn't solve the problem.
pacman -S gnome-extra installed everything fine, but didn't solve the problem.
pacman -S gnome-network-manager: pacman says the package was not found (the package is outdated according to the wiki).
pacman -S network-manager-applet installed everything fine, but didn't solve the problem.

The Arch wiki says network-manager-applet should suffice for GNOME, but the GUI won't support it, which is inconvenient. Any help is appreciated.
networkmanager with gnome 3.6.2 in Arch Linux
arch linux;gnome;gnome3;networkmanager
null
_unix.237544
I have a RHEL 7 machine running an Apache web server with multiple virtual hosts on it. I've recently run into an issue where I am unable to upload media (images, pictures, video) to my machine from applications such as a blog (WordPress) and a forum (XenForo). I've tried to figure out what's going on and I can't. Both applications seem to share the same root problem. I've double-checked that the file permissions should be correct, yet I still run into the same errors.

Case 1: WordPress, ../wp-content/

$ ls -Alh
total 8.0K
-rw-rw-r--. 1 jflory apache 28 Sep 26 09:50 index.php
drwxrwxr-x. 5 jflory apache 84 Oct 20 15:42 plugins
drwxrwxr-x. 7 jflory apache 4.0K Sep 26 12:07 themes
drwxr-xr-x. 2 jflory apache 6 Oct 20 15:42 upgrade
drwxrwxr--. 3 jflory apache 17 Sep 26 11:40 uploads

"image.png" has failed to upload due to an error: "The uploaded file could not be moved to wp-content/uploads/2015/10."

Case 2: XenForo

$ ls -Alh ../public_html/
total 76K
drwxrwxrwx. 7 jflory apache 96 Oct 11 20:19 data
drwxrwxrwx. 6 jflory apache 4.0K Jul 19 02:57 internal_data

"The following errors occurred while verifying that your server can run XenForo: The directory /var/www/crystalcraftmc.com/public_html/data must be writable. Please change the permissions on this directory to be world writable (chmod 0777). If the directory does not exist, please create it. The directory /var/www/crystalcraftmc.com/public_html/internal_data must be writable. Please change the permissions on this directory to be world writable (chmod 0777). If the directory does not exist, please create it. Please correct these errors and try again."

I have double-checked that the SELinux context is correct by running sudo restorecon -Rv /var/www/. This has worked in the past, but this time it was not the solution. HOWEVER, when I disable SELinux with sudo setenforce 0 and restart the Apache service, the issue is resolved. It seems SELinux is to blame, but I am unsure why, and I don't wish to keep it disabled. I am completely lost about what the issue can be. If any further information is needed, please ask me for clarification.
RHEL / SELinux - Apache unable to write to a directory it can write to
files;permissions;rhel;apache httpd;selinux
null
_webmaster.24773
It seems my vBulletin forum is still having problems with unregistered guests spamming threads and members' inboxes. Do you have a solution for this?
How do I stop my vBulletin forum from having unregistered guests spamming it?
spam;vbulletin
null
_codereview.145651
I had to do this exercise for a further education in which I'm currently enrolled in:Write a Java class Air Plane.Object-property names and types are given and compulsory. Write the according constructor and getter-, setter-method. Check within the constructor the given values for being valid.Moreover are the following methods to implement:infoloadfillUpflygetTotalWeightgetMaxReachFurther requirements concerning the implementation of the methods I have written into my code as comments.Here's my Plane-classpackage plane;public class Plane { private double maxWeight; private double emptyWeight; private double loadWeight; private double travelSpeed; private double flyHours; private double consumption; private double maxFuel; private double kerosinStorage; public Plane( double maxWeight, double emptyWeight, double loadWeight, double travelSpeed, double flyHours, double consumption, double maxFuel, double kerosinStorage ) { this.maxWeight = maxWeight; this.emptyWeight = emptyWeight; this.loadWeight = loadWeight; this.travelSpeed = travelSpeed; this.flyHours = flyHours; this.consumption = consumption; this.maxFuel = maxFuel; this.kerosinStorage = kerosinStorage < this.maxFuel ? kerosinStorage : this.maxFuel; } public double getMaxWeight() { return maxWeight; } public double getEmptyWeight() { return emptyWeight; } public double getLoadWeight() { return loadWeight; } public double getTravelSpeed() { return travelSpeed; } public double getFlyHours() { return flyHours; } public double getConsumption() { return consumption; } public double getMaxFuel() { return maxFuel; } public double getKerosinStorage() { return kerosinStorage; } public void setMaxWeight(double maxWeight) { this.maxWeight = maxWeight; } public void setEmptyWeight(double emptyWeight) { this.emptyWeight = emptyWeight; } public void setLoadWeight(double loadWeight) { this.loadWeight = loadWeight; } public void setTravelSpeed(double travelSpeed) { this.travelSpeed = travelSpeed; } public void setFlyHours(double flyHours) { this.flyHours = flyHours; } public void setConsumption(double consumption) { this.consumption = consumption; } public void setMaxFuel(double maxFuel) { this.maxFuel = maxFuel; } public void setKerosinStorage(double kerosinStorage) { this.kerosinStorage = this.kerosinStorage + kerosinStorage > maxFuel ? maxFuel : this.kerosinStorage + kerosinStorage; } /* Returns the total weight of the plane. Which is: emptyWeight + weight of load + weight of kerosin. Expect 1 liter Kerosin as 0.8 kg. */ public double getTotalWeight () { return emptyWeight + loadWeight + (kerosinStorage * 0.8); } /* How far can the plane fly with the current kerosin storage? */ public double getMaxReach () { return (kerosinStorage / consumption) * travelSpeed; } /* Prevent flying further then possible (with the current kerosin) ! */ public boolean fly (double km) { if (km <= 0 || getMaxReach() < km || getTotalWeight() > maxWeight) { return false; } flyHours += (km / travelSpeed); kerosinStorage -= (km / travelSpeed) * consumption; return true; } /* ! The parameter 'liter' can be a negative number. Doesn't have to be overfilled. Prevent a negative number as value of the 'kerosinStorage' property ! */ public void fillUp (double liter) { if ((kerosinStorage + liter) > maxFuel) { kerosinStorage = maxFuel; } else if ((kerosinStorage + liter) < 0) { kerosinStorage = 0; } else { kerosinStorage += liter; } } /* Prevent illogical value-assignments ! 
*/
public boolean load (double kg) {
    if ((loadWeight + emptyWeight + kg) > maxWeight) {
        return false;
    } else if ((emptyWeight + kg) < 0) {
        loadWeight = 0;
        return true;
    } else {
        loadWeight += kg;
        return true;
    }
}

// Display flying hours, kerosin storage & total weight on the terminal.
public void info () {
    System.out.println("Flying hours: " + flyHours + ", Kerosin: " + kerosinStorage + ", Weight: " + getTotalWeight());
}
}

And my Plane test class:

package plane;

public class TestPlane{
    public static void main (String[] args) {
        Plane jet = new Plane(
            70000, 35000, 10000, 800,
            500, 2500, 25000, 8000);
        jet.info();
        jet.setKerosinStorage(1000);
        System.out.println(jet.getKerosinStorage());
        System.out.println(jet.getTotalWeight());
        System.out.println("Maximal reach: " + jet.getMaxReach());
        System.out.println("Fly hours 1: " + jet.getFlyHours());
        jet.fly(5000);
        System.out.println("Fly hours 1: " + jet.getFlyHours());
        jet.load(10000);
        jet.info();
    }
}

They run automated tests on the code, and it passed, but I'm still not sure about it. I would therefore appreciate your comments and hints concerning my implementation of the described task.
Java beginner exercise: Write a class Air Plane
java;beginner;object oriented
Builder pattern

Consider the builder pattern. As I started to pass arguments to the constructor, it was hard to keep the semantics right. The builder pattern helps the developer to abstract from argument input order, handle a lot of constructor arguments, abstract from default values that make sense, and make arguments optional, thereby avoiding telescoping constructors. The builder pattern gives only one assertion: no matter how many arguments you passed in, it will always build a consistent object.

Avoid multiple return statements

Return statements are structurally identical to goto statements, although they are a formalized version. What all goto-like statements (return, continue, break) have in common is that they are not refactoring-stable: they hinder you from applying refactorings like "extract method". If you have to insert a new case into an algorithm that uses break, continue and return statements, you may have to rethink the whole algorithm so that your change does not break it.

Avoid inexpressive return values

You may see return values like true/false used to indicate whether something was processed successfully. Such return values may be sufficient for trivial cases in trivial environments where few exceptional cases occur. In a complex environment a method execution may fail for several reasons: a connection to the server was lost, an inconsistency was recognized on the database side, the execution failed for security reasons... to name only the tip of the iceberg. For these, modern languages introduce a concept for exceptional cases: exceptions.

E.g. you have the following signature:

public boolean load (double kg)

Besides mixing two concerns (loading/unloading) in one method and treating them differently (overloading will not be allowed, unloading will be corrected), you also try to publish success information via the return value. I suggest not publishing true or false. Return either nothing or the new value of the loadWeight property, and handle exceptional cases with the concept of exceptions. I would expect a signature like this:

public double load (double kg) throws OverloadedException

The OverloadedException may not be signature-relevant (a RuntimeException), but it expresses the intention of the method. Beyond that, I would split responsibilities to match the SRP and introduce a method:

public double unload (double kg)

Avoid comments

If you want to write comments, that is an indicator that your code itself may not be clear enough. I intentionally said "avoid comments", not "do not comment anything". First make the things that will be compiled and run as clear as possible. Then, if you still think it's necessary to comment, comment. Comments have to be maintained separately; they are uncompiled code and cannot be put under test, so they may lie if they diverge from your code's semantics.

E.g. you have the following signature:

public void fillUp (double liter)

In your comment you mentioned that liter may be negative. This is an allowed value, but the method name says "fillUp", so one of them is lying. You now have two possibilities: think of a name that abstracts from draining or filling up fuel (adjust?) so it is clear that a negative argument is allowed, or separate the concerns (draining, filling up) into separate methods.

The best comment for a procedure, function or method is a set of tests that show its usage, so other developers can see how your code behaves in different situations. Instead of testing your object in a main method, I suggest writing...

Unit tests

Following these suggestions you can write expressive unit tests:

public class TestPlane {
    /**
     * A plane's fuel can be filled up.
     */
    @Test
    public void fillUpNormal() {
        Plane plane = new PlaneBuilder().setMaxFuel(2000).setInitialKerosinStorage(1700).build();
        Assert.assertEquals(1800, plane.fillUp(100));
    }

    /**
     * A plane cannot be filled up beyond max fuel.
     */
    @Test
    public void fillUpOverfilled() {
        Plane plane = new PlaneBuilder().setMaxFuel(2000).setInitialKerosinStorage(1700).build();
        try {
            plane.fillUp(400);
            Assert.fail();
        } catch (OverfilledException e) {
            Assert.assertEquals(100, e.getOverfilledBy());
        }
    }
}

You should decide which coverage you want to aim for. I prefer condition coverage over statement coverage because it forces you to keep your methods small: methods under condition coverage have at least 2^(condition elements) test cases, so long methods with several conditions make the test-case count explode. As you can see, the test cases carry comments, and those comments describe the business rules you want to enforce.
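For completeness, a minimal sketch of the PlaneBuilder used in those tests (the names are mine, not from your code, and only two of the eight Plane parameters are shown; the rest follow the same pattern):

public class PlaneBuilder {
    private double maxFuel = 0;
    private double kerosinStorage = 0;
    // ... fields and setters for the other Plane constructor arguments ...

    public PlaneBuilder setMaxFuel(double maxFuel) {
        this.maxFuel = maxFuel;
        return this; // returning this enables the fluent chaining used in the tests
    }

    public PlaneBuilder setInitialKerosinStorage(double kerosinStorage) {
        this.kerosinStorage = kerosinStorage;
        return this;
    }

    public Plane build() {
        // One single place to validate: no matter how the setters were called,
        // build() only ever hands out a consistent Plane.
        if (kerosinStorage > maxFuel) {
            throw new IllegalStateException("kerosin storage exceeds max fuel");
        }
        return new Plane(0, 0, 0, 0, 0, 0, maxFuel, kerosinStorage);
    }
}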
_softwareengineering.311635
We are developing a website for students on which they first have to fulfill specific tasks in order to use our service. The problem is that those tasks are on another website, which has nothing to do with ours and is currently placed in an iframe on our page. One thing worth mentioning is that after the user finishes the task (on the other website), a new page opens up which says that they have finished. So our question would be: is it possible to get verification from the other website that the user has fulfilled the task? At first we thought of reading out a specific span class name from the page the user lands on last; the problem there is the same-origin policy. Do you know a better way to solve this, or a safe and legal way around that policy? Thanks for your help! P.S.: I am a Bachelor's student in informatics, so I'm still learning; pardon any logical mistakes in my question. (:
Best way to verify that a user has completed a task on another website.
javascript;verification;dom;tracking
null
_cs.14333
Given languages $X$, $Y$ and $Z$, each over an alphabet $\Sigma$, define $X/Y/Z$ as: $X/Y/Z = \{ w \in \Sigma^{*} \mid \exists u \in Y \text{ and } \exists v \in Z \text{ such that } wuv \in X \}$. Prove that if $X$ is context-free, and $Y$ and $Z$ are regular, then $X/Y/Z$ is context-free.
Prove that X/Y/Z is context-free
formal languages;regular languages;context free
null
_codereview.25059
I have some code that allows me to enumerate over months in a year. This code is used both in a Web application as well as a standalone exe. Although it doesn't have to be efficient it is used a lot so if there are any improvements that would be great (I haven't done any profiling). It also needs to be thread-safe.public enum MonthEnum{ Undefined, // Required here even though it's not a valid month January, February, March, April, May, June, July, August, September, October, November, December}public static class MonthEnumEnumerator{ private static readonly ReadOnlyCollection<MonthEnum> MonthsInYear = CreateYear(); private static readonly ReadOnlyCollection<MonthEnum> ReversedMonthsInYear = CreateYear(janToDec: false); private static ReadOnlyCollection<MonthEnum> CreateYear(bool janToDec = true) { var months = new List<MonthEnum>(); for (int i = 1; i <= 12; i++) months.Add((MonthEnum)i); return new ReadOnlyCollection<MonthEnum>(janToDec ? months : months.OrderByDescending(p => (int)p).ToList()); } /// <summary> /// Returns an array of MonthEnums without the MonthEnum.Undefined value /// </summary> /// <returns></returns> public static IEnumerable<MonthEnum> GetValues() { return MonthsInYear; } /// <summary> /// Returns an array of Months starting from December to January not including the Undefined value /// </summary> public static IEnumerable<MonthEnum> GetValuesReversed() { return ReversedMonthsInYear; } /// <summary> /// Gets a list of months in range of start and end. For example with a start month of Feb and end of April this function /// would return Feb, March, April. If the start month was Nov and end Month Feb it would return Nov, Dec, Jan, Feb. /// </summary> /// <param name=start>Start of month range to return</param> /// <param name=end>End of month range to return</param> /// <returns>Array of months in order from start to end</returns> public static IEnumerable<MonthEnum> GetInRange(MonthEnum start, MonthEnum end) { var range = new List<MonthEnum>(); if(start <= end) { // simple start to end of months with no december rollover for(MonthEnum month = start; month <= end; month++) range.Add(month); } else { // end month wraps around december i.e. Nov - Feb for (MonthEnum month = start; month <= MonthEnum.December; month++) range.Add(month); // now jan - end month for (MonthEnum month = MonthEnum.January; month <= end; month++) range.Add(month); } return new ReadOnlyCollection<MonthEnum>(range); } public static IEnumerable<MonthEnum> GetInRange(MonthEnum start) { return GetInRange(start, start.Previous()); } public static MonthEnum Next(this MonthEnum month) { return month == MonthEnum.December ? MonthEnum.January : month + 1; } public static MonthEnum Previous(this MonthEnum month) { return month == MonthEnum.January ? 
MonthEnum.December : month - 1; } public static MonthEnum Subtract(this MonthEnum month, int months) { MonthEnum subtracted = month; while ((months--) > 0) subtracted = subtracted.Previous(); return subtracted; } public static MonthEnum Add(this MonthEnum month, int months) { MonthEnum added = month; while ((months--) > 0) added = added.Next(); return added; }}I use it like so:foreach (var month in MonthEnumEnumerator.GetValues()){ // do stuff that is month related}or// for getting the months from July to Decemberforeach (var month in MonthEnumEnumerator.GetInRange(MonthEnum.July, MonthEnum.December)){ // do something}or if I want to get the previous month to what I'm on I can dovar previousMonth = currentMonth.Previous();UPDATE:I updated my answer after comments/answers below.
Enumerating over an enum that defines months in the year
c#
It took me a while to figure out what the purpose of janToDec was. Given that you're using LINQ, I can't see any reason not to just implement ReversedMonthsInYear as

private static readonly IEnumerable<MonthEnum> ReversedMonthsInYear = MonthsInYear.Reverse();

IMO that's a lot easier on the maintenance programmer. But then CreateYear without the parameter is simply duplicating code, and you can eliminate it in favour of

private static readonly IEnumerable<MonthEnum> MonthsInYear = GetInRange(MonthEnum.January, MonthEnum.December);

Since you're not afraid to use arithmetic on your enum, you can make Subtract and Add a bit less loopy:

public static MonthEnum Subtract(this MonthEnum month, int months)
{
    if (months < 0) throw new ArgumentOutOfRangeException("months", "months must be non-negative");
    MonthEnum subtracted = month - (months % 12);
    if (subtracted < MonthEnum.January) subtracted += 12;
    return subtracted;
}

and similarly.
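For completeness, the matching Add with the same modular arithmetic instead of a loop (an untested sketch, mirroring Subtract above):

public static MonthEnum Add(this MonthEnum month, int months)
{
    if (months < 0) throw new ArgumentOutOfRangeException("months", "months must be non-negative");
    MonthEnum added = month + (months % 12);   // lands in January..(December + 11)
    if (added > MonthEnum.December) added -= 12;
    return added;
}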
_datascience.9769
Suppose you have an input layer with $n$ neurons and the first hidden layer has $m$ neurons, with typically $m < n$. Then you compute the activation $a_j$ of the $j$-th neuron in the hidden layer by $a_j = f\left(\sum\limits_{i=1..n} w_{i,j} x_i+b_j\right)$, where $f$ is an activation function like $\tanh$ or $\text{sigmoid}$. To train the network, you compute the reconstruction of the input, denoted $z$, and minimize the error between $z$ and $x$. The $i$-th element of $z$ is typically computed as: $z_i = f\left ( \sum\limits_{j=1..m} w_{j,i}' a_j+b'_i \right)$

I am wondering why the reconstruction $z$ is usually computed with the same activation function instead of the inverse function, and why separate $w'$ and $b'$ are useful, instead of tied weights and biases. It seems much more intuitive to me to compute the reconstruction with the inverse activation function $f^{-1}$, e.g. $\text{arctanh}$, as follows: $$z_i' = \sum\limits_{j=1..m} \frac{f^{-1}(a_j)-b_j}{w_{j,i}^T}$$ Note that here tied weights are used, i.e. $w' = w^T$, and the biases $b_j$ of the hidden layer are used, instead of introducing an additional set of biases for the input layer.

And a very related question: to visualize features, instead of computing the reconstruction, one would usually create an identity matrix with the dimension of the hidden layer. Then one would use each column of the matrix as input to a reactivation function, which induces an output in the input neurons. For the reactivation function, would it be better to use the same activation function (resp. the $z_i$) or the inverse function (resp. the $z'_i$)?
Why is Reconstruction in Autoencoders Using the Same Activation Function as Forward Activation, and not the Inverse?
machine learning;visualization;deep learning;autoencoder
null
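No answer was recorded here; a minimal NumPy sketch of the convention the question asks about may still help (illustrative only; the sizes and random data are made up). It also hints at one practical objection to the inverse-activation idea: $\text{arctanh}$ is undefined outside $(-1,1)$ and blows up as activations saturate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3                     # visible and hidden sizes (arbitrary)
W = rng.normal(size=(m, n))     # encoder weights
b = rng.normal(size=m)          # hidden biases
c = rng.normal(size=n)          # separate visible biases for the decoder

f = np.tanh
x = rng.normal(size=n)

a = f(W @ x + b)                # encoder, as in the question
z = f(W.T @ a + c)              # usual decoder: same f, tied W' = W.T

# One reading of the question's inverse variant, via a pseudo-inverse:
# arctanh(a) is only defined for |a| < 1 and diverges near saturation,
# which is one practical reason the forward activation is reused instead.
z_inv = np.linalg.pinv(W) @ (np.arctanh(a) - b)
```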
_cstheory.4816
This question is inspired by a similar question about applied mathematics on mathoverflow, and that nagging thought that important questions of TCS such as P vs. NP might be independent of ZFC (or other systems). As a little background, reverse mathematics is the project of finding the axioms necessary to prove certain important theorems. In other words, we start from a set of theorems we expect to be true and try to derive the minimal set of 'natural' axioms that make them so. I was wondering if the reverse mathematics approach has been applied to any important theorems of TCS, in particular to complexity theory. With deadlock on many open questions in TCS, it seems natural to ask "what axioms have we not tried using?". Alternatively, have any important questions in TCS been shown to be independent of certain simple subsystems of second-order arithmetic?
Axioms necessary for theoretical computer science
cc.complexity theory;lo.logic;proof complexity
Yes, the topic has been studied in proof complexity. It is called Bounded Reverse Mathematics. You can find a table containing some reverse mathematics results on page 8 of Cook and Nguyen's book, Logical Foundations of Proof Complexity, 2010. Some of Steve Cook's previous students have worked on similar topics, e.g. Nguyen's thesis, Bounded Reverse Mathematics, University of Toronto, 2008. Alexander Razborov (also other proof complexity theorists) has some results on the weak theories needed to formalize circuit complexity techniques and prove circuit complexity lower bounds. He obtains some unprovability results for weak theories, but the theories are considered too weak. All of these results are provable in $RCA_0$ (Simpson's base theory for Reverse Mathematics), so AFAIK we don't have independence results from strong theories (and in fact such independence results would have strong consequences as Neel has mentioned; see Ben-David's work (and related results) on independence of $\mathbf{P}$ vs. $\mathbf{NP}$ from $PA_1$, where $PA_1$ is an extension of $PA$).
_webapps.58876
I use Soundcloud to add a preview for an MP3 on a Facebook post (the original question included a screenshot of this). How can I add a preview for an MP3 on a Facebook post without Soundcloud, so that the user doesn't have to open a new tab to listen to the MP3?
How can I add a preview for an MP3 on a Facebook post without Soundcloud?
facebook;music
Facebook has removed this feature since then: it is not possible anymore to preview an MP3 without Soundcloud.
_reverseengineering.2103
According to the techy zilla blogIt will be much harder to deobfuscate code that has been obfuscated using multiple obfuscating algorithms. According to them, jsbeautifier can't fix this obfuscated code. Can you find another way to deobfuscate this type of obfuscation? If not, what is the closest you can get?var _0x2815=[\x33\x20\x31\x28\x29\x7B\x32\x20\x30\x3D\x35\x3B\x34\x20\x30\x7D,\x7C,\x73\x70\x6C\x69\x74,\x78\x7C\x6D\x79\x46\x75\x6E\x63\x74\x69\x6F\x6E\x7C\x76\x61\x72\x7C\x66\x75\x6E\x63\x74\x69\x6F\x6E\x7C\x72\x65\x74\x75\x72\x6E\x7C,\x72\x65\x70\x6C\x61\x63\x65,,\x5C\x77\x2B,\x5C\x62,\x67];eval(function (_0xf81fx1,_0xf81fx2,_0xf81fx3,_0xf81fx4,_0xf81fx5,_0xf81fx6){_0xf81fx5=function (_0xf81fx3){return _0xf81fx3;} ;if(!_0x2815[5][_0x2815[4]](/^/,String)){while(_0xf81fx3--){_0xf81fx6[_0xf81fx3]=_0xf81fx4[_0xf81fx3]||_0xf81fx3;} ;_0xf81fx4=[function (_0xf81fx5){return _0xf81fx6[_0xf81fx5];} ];_0xf81fx5=function (){return _0x2815[6];} ;_0xf81fx3=1;} ;while(_0xf81fx3--){if(_0xf81fx4[_0xf81fx3]){_0xf81fx1=_0xf81fx1[_0x2815[4]]( new RegExp(_0x2815[7]+_0xf81fx5(_0xf81fx3)+_0x2815[7],_0x2815[8]),_0xf81fx4[_0xf81fx3]);} ;} ;return _0xf81fx1;} (_0x2815[0],6,6,_0x2815[3][_0x2815[2]](_0x2815[1]),0,{}));
Try to deobfuscate multi layered javascript
obfuscation;javascript;deobfuscation
null
_webmaster.60252
So I got hit by Panda/Penguin two years ago. My site was completely white-hat, a wallpaper-related site. Such sites don't have much text, but I tried to describe every wallpaper with 100 words (again, nothing spammy). For these two years nothing has changed: I update content every second day, I removed some backlinks, etc. So I'm starting to think about buying a new domain and starting over. But what should I do with the old wallpapers/content? People love them, share them, like them, etc. Would it be wise to 301 redirect everything from the old to the new domain? What would be your recommendation?
Can't recover from Panda/Penguin/Zoo? Think to start a new site
seo;search engines;google search;google panda algorithm;google penguin algorithm
null
_unix.185708
I have a directory (/var/www/dental-atelier.ch/) that I would like to make accessible in two different ways.

As a normal web page:

<VirtualHost 78.47.122.114:80>
    ServerAdmin [email protected]
    DocumentRoot /var/www/dental-atelier.ch
    <Location />
        Options +Includes
    </Location>
    ServerName dental-atelier.ch
    ServerAlias dental-atelier.ch www.dental-atelier.ch
    ErrorLog logs/dental-atelier.ch-error_log
    CustomLog logs/dental-atelier.ch-access_log combined
</VirtualHost>

and once with WebDAV (but this time with SSL):

<VirtualHost _default_:443>
    DocumentRoot /var/www/html
    # Use separate log files for the SSL virtual host; note that LogLevel
    # is not inherited from httpd.conf.
    ErrorLog logs/ssl_error_log
    TransferLog logs/ssl_access_log
    LogLevel warn
    <Directory /var/www/html>
        Options +Includes
    </Directory>
    Alias /webdav /var/www/webdav
    <Directory /var/www/webdav/dental-atelier.ch/>
        AuthType Basic
        AuthName "Password Required"
        AuthUserFile /etc/shadow
        Require user user
        DAV On
        Options Indexes FollowSymLinks
    </Directory>
</VirtualHost>

This was working without any problem with httpd 2.2. After upgrading to 2.4, httpd is not allowing both settings for the same directory. The first one works alone (with the first vhost) and the second one works alone with the second vhost. If I configure both, I get:

$ cadaver https://78.47.122.114/webdav/dental-atelier.ch
WARNING: Untrusted server certificate presented for `ip1.corti.li':
Certificate was issued to hostname `ip1.corti.li' rather than `78.47.122.114'
This connection could have been intercepted.
Issued to: ip1.corti.li
Issued by: http://www.CAcert.org, CAcert Inc.
Certificate is valid from Thu, 10 Apr 2014 10:43:34 GMT to Sat, 09 Apr 2016 10:43:34 GMT
Do you wish to accept the certificate? (y/n) y
Authentication required for Password Required on server `78.47.122.114':
Username: user
Password:
Could not access /webdav/dental-atelier.ch/ (not WebDAV-enabled?):
405 Method Not Allowed
Connection to `78.47.122.114' closed.
dav:!>

Any idea on how to make an HTTP-shared directory also available via WebDAV (for editing)? The SSL virtual host logs show errors about the Includes directive which is specified in the non-SSL virtual host (port 80):

ssl_access_log:
129.132.179.107 - - [19/Feb/2015:15:40:29 +0100] OPTIONS /webdav/dental-atelier.ch/ HTTP/1.1 401 381
129.132.179.107 - user [19/Feb/2015:15:40:34 +0100] OPTIONS /webdav/dental-atelier.ch/ HTTP/1.1 200 -
129.132.179.107 - user [19/Feb/2015:15:40:34 +0100] PROPFIND /webdav/dental-atelier.ch/ HTTP/1.1 405 261

ssl_error_log:
[Thu Feb 19 15:40:34.556872 2015] [include:warn] [pid 29499] [client 129.132.179.107:65259] AH01374: mod_include: Options +Includes (or IncludesNoExec) wasn't set, INCLUDES filter removed: /webdav/dental-atelier.ch/index.html
[Thu Feb 19 15:40:34.557949 2015] [include:warn] [pid 29499] [client 129.132.179.107:65259] AH01374: mod_include: Options +Includes (or IncludesNoExec) wasn't set, INCLUDES filter removed: /webdav/dental-atelier.ch/index.html

Edit: The issue is really related to having the same directory used differently in two different virtual hosts. If I copy the very same directory to /var/www/webdav/test and configure the SSL virtual host with the test directory, everything works like a charm. The same applies if I remove the HTTP virtual host for the same directory. If I have the same data in both, then somehow Apache httpd detects it. It was not like that in 2.2.
Apache httpd, WebDAV and multiple settings
apache httpd;webdav
Actually the problem was something different: the directory contains an index.html file and Apache httpd was automatically delivering it. Setting

DirectoryIndex disabled

solved the problem.
_softwareengineering.13746
I keep coming across the term "hooks" in various programming articles, but I don't understand what they are or how they can be used. So I just wanted to know what the concept of hooks is; if someone could link me to some examples, particularly in the context of web development, that would be great.
What are hooks?
web development;python
My answer pertains to WordPress which is written in PHP, but this is a general development mechanic so it shouldn't really matter, despite the fact that you put 'python' in your question title.One good example of usage of hooks, coincidentally in web development, are WordPress' hooks.They are named appropriately in that they allow a way to 'hook into' certain points of the execution of a program.So for example, the wp_head is an 'action' that is emitted when a WordPress theme is being rendered and it's at the part where it renders the part that's within the <head> tags. Say that you want to write a plugin that requires an additional stylesheet, script, or something that would normally go within those tags. You can 'hook into' this action by defining a function to be called when this action is emitted. Something like:add_action('wp_head', 'your_function');your_function() could be something as simple as:function your_function() { echo '<link rel=stylesheet type=text/css href=lol.css />';}Now, when WordPress emits this action by doing something like do_action('wp_head');, it will see that your_function() was 'hooked into' that action, so it will call that function (and pass it any arguments if it takes any, as defined in the documentation for any particular hook).Long story short: It allows you to add additional functionality at specific points of the execution of a program by 'hooking into' those points, in most cases by assigning a function callback.
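The mechanic generalizes beyond WordPress. Here is a minimal, framework-agnostic sketch in Python (the names add_action/do_action mirror the PHP above, but this is illustrative code, not a real API):

```python
# A tiny hook registry: callbacks are stored per hook name and invoked
# when the host program reaches that point of its execution.
_hooks = {}

def add_action(name, callback):
    _hooks.setdefault(name, []).append(callback)

def do_action(name, *args):
    for callback in _hooks.get(name, []):
        callback(*args)

add_action("wp_head", lambda: print('<link rel="stylesheet" href="lol.css"/>'))
do_action("wp_head")   # the host emits the hook; every registered callback runs
```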
_datascience.22387
These two convolution operations are very common in deep learning right now. I read about the dilated convolutional layer in this paper: WAVENET: A GENERATIVE MODEL FOR RAW AUDIO. Deconvolution is in this paper: Fully Convolutional Networks for Semantic Segmentation. Both seem to up-sample the image, but what is the difference?
What is the difference between Dilated Convolution and Deconvolution?
machine learning;deep learning;convnet;computer vision;convolution
null
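No answer was accepted here; a toy 1-D NumPy sketch (illustrative, with made-up signals) may still help contrast the two: dilation spreads the kernel taps apart to grow the receptive field without changing resolution, while a transposed ("de-")convolution scatters scaled kernel copies and does up-sample.

```python
import numpy as np

x = np.arange(8, dtype=float)          # input signal, length 8
k = np.array([1.0, 1.0, 1.0])          # a 3-tap kernel

def dilated_conv1d(x, k, d):
    # Valid conv with taps spread d apart: effective kernel span grows,
    # output length does not exceed the input length.
    span = (len(k) - 1) * d
    return np.array([sum(k[j] * x[i + j * d] for j in range(len(k)))
                     for i in range(len(x) - span)])

def transposed_conv1d(x, k, stride):
    # Upsamples: each input sample scatters a scaled copy of the kernel
    # into the output at `stride` spacing.
    out = np.zeros((len(x) - 1) * stride + len(k))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(k)] += v * k
    return out

print(dilated_conv1d(x, k, d=2))       # same-or-shorter output, wider context
print(transposed_conv1d(x, k, 2))      # longer output: the up-sampling effect
```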
_unix.164600
I am trying to migrate mails from a server to a new server using OfflineIMAP.My config looks like:[general]accounts = TestAccountui = noninteractive.Basic[Account TestAccount]localrepository = TestAccountSourceremoterepository = TestAccountDestinationmaxsyncaccounts = 3maxconnections = 3[Repository TestAccountSource]type = IMAPremotehost = localhostremoteuser = [email protected] = password[Repository TestAccountDestination]type = IMAPremotehost = new.machine.comremoteuser = [email protected] = passwordssl = yesWhen I run this on the old server the syncing starts and mails are being copied. However directories are not being copied to the new machine:offlineimap -c /path/to/my/configThe original directory looks like:ChatsContactscourierimapkeywordscourierimapsubscribedcourierimapuiddbcurDraftsEmailed ContactsINBOXJunknewNotesSenttmp.Subdir.Subdir.SubSubdir1.Subdir.SubSubdir2.Subdir.SubSubdir3.Subdir.SubSubdir4All mails in the inbox are correctly synced, but the directory .SubDir (including subdirs of this) never end up on the new mailserver.P.S. Old mailserver uses courier, new mailserver uses Zimbra.
Migrating mail to a new server using OfflineIMAP - dot folders not copied
imap;migration;offlineimap
null
_unix.339712
Basically I have the scenario below, e.g.

grep 'test: \K(\d+)' $file => 15
grep 'test1: \K(\d+)' $file => 20

Is there any way to store the results of both commands in one variable, with a comma as separator?

Test=grep 'test: \K(\d+)' $file;grep 'test1: \K(\d+)' $file
Answer=eval $Test

Expected output: 15,20
Combine multiple grep outputs in a variable
bash;shell script;grep;variable
null
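No answer was accepted; for the record, in the shell itself this is normally done with two command substitutions, e.g. Test="$(grep -oP 'test: \K\d+' "$file"),$(grep -oP 'test1: \K\d+' "$file")" (note that \K needs grep -oP). The same extraction as a hedged Python sketch, with the file path a placeholder:

```python
import re

# Placeholder for $file from the question; assumes both keys are present.
with open("/path/to/file") as fh:
    text = fh.read()

values = [re.search(r"test: (\d+)", text).group(1),
          re.search(r"test1: (\d+)", text).group(1)]
print(",".join(values))   # -> "15,20"
```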
_unix.116434
I first posted this in February, but I'm making the problem clearer because I had no responses in 6 months, and the problem still exists. My Arch boots to the terminal. I log in as root, start Cinnamon, and everything is fine. I log out of root while in Cinnamon and I'm returned to the terminal. I then log in under my own user account, start Cinnamon, and get a black screen with a mouse pointer that I can move. I can't get out of this and it requires a shutdown. This problem is reproducible. After booting up again, I log in under my normal user account and it works, but the wallpaper has changed to some basic, default image. What is happening and how can I fix this?
Arch Linux problem: What is causing the black screen problem when logging into Cinnamon?
linux;arch linux;login;desktop environment;cinnamon
null
_unix.80128
I am using Fedora 17 and want to compile and use Geary. However, the required library versions are only available in Fedora 18. For various reasons I want (need!) to stick to F17, so I was thinking of compiling in Fedora 18 (in VirtualBox) and then moving the result across to F17. However, I presume that once I do this and try to run it on F17, it will complain about missing libraries. Is there a way to compile on F18 and have it pull all the required libraries into a folder that I can just copy across? Or is there a better way to do this?
Compile newer software for outdated versions of the same distribution
fedora;compiling
null
_codereview.53804
<html>
<head>
    <title>GPS Tracker</title>
    <link rel="stylesheet" type="text/css" href="bootstrap/css/bootstrap.min.css">
    <link rel="stylesheet" type="text/css" href="view/style.css">
    <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
    <script type="text/javascript">
    function show_maps() {
        var div_peta = document.getElementById('kanvas');
        var tengah = new google.maps.LatLng(-8.801502,115.174794);
        var options = {
            center : tengah,
            zoom : 14,
            mapTypeId : google.maps.MapTypeId.ROADMAP
        }
        //make map object
        var google_map = new google.maps.Map(div_peta,options);
    }
    </script>
</head>
<body onload="show_maps();">
    <div class="container">
        <div class="row">
            <div class="col-md-4">
                <div class="panel panel-default panel-body">
                    <p><b><h4>Tracking</h4></b>
                    <hr>
                    <?php
                    include "controller/connection.php";
                    $query = mysql_query("SELECT id_history FROM tb_history");
                    $count = mysql_num_rows($query);
                    ?>
                    <form method="POST" action="<?=$_SERVER['PHP_SELF']?>">
                        <input type="hidden" name="count" value="<?php echo $count ?>">
                        <input type="submit" name="submit" class="btn btn-primary" value="Get Location" style="width:280px"></p>
                    </form>
                </div>
            </div>
            <div class="col-md-8">
                <div class="panel panel-default panel-body" style="height:490px">
                    <div id="kanvas">kanvas peta</div>
                </div>
            </div>
        </div>
<?php
if (isset($_POST['submit'])) {
    include "controller/connection.php";
    $count1 = $_POST['count'];
    $count2 = $count1 + 1;
    $sql = mysql_query("SELECT id_history FROM tb_history");
    $count3 = mysql_num_rows($sql);
    if ($count3 == $count1) {
        echo "<script>setTimeout(show_maps,1000)</script>";
    } else {
        echo "<script type='text/javascript'> alert('Data has been added!'); </script>";
    }
}
?>
</body>
</html>

The code that I wrote above will show a button and a Google map. Once the button has been clicked, the code will send the amount of data that exists in tb_history. The PHP code at the bottom will always check whether any new data has been added. If not, it will refresh the Google map until new data has been added to the table tb_history. Can anyone suggest a better algorithm for this?
Refreshing Google map until new data has been added to the database
javascript;php;mysql;geospatial;google maps
null
_unix.179551
I used to access my NAS via Caja or some other GUI file manager. When I copied a file, the date modified on the target was the same as on the source file. Now I mount my NAS via the command line, but the date modified on the target takes on the current system date and time at the time of copying. What option can I add to or remove from my mount statement so that the date modified on the target matches that of the source file? The mount command I use is:

sudo mount -t cifs -o username=<username>,password=<password> //<ip address>/directory /<mount point>

The output of mount -l is:

//<ip address>/<directory> on /media/dnas type cifs (rw,relatime,vers=1.0,cache=strict,username=<username>,domain=<NAS domain>,uid=0,noforceuid,gid=0,noforcegid,addr=<ip address>,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1)
Mount of NAS affects date / time modified during copy
mount;nas
null
_unix.153769
For whatever reason, the first column is highlighted and I can't figure out why. I have not messed with my vimrc file, nor any zsh options. The column is highlighted only with the "aliases.zsh" file; opening any other file or filetype, the highlighted column is absent. Any ideas?
Random colored column?
vim
That can be either the fold column (showing nothing because no folds have been found). You can turn that off via:set foldcolumn=0You can find out where that was set via :verbose set foldcolumn?, and then remove / undo that.Or it could be the signs column, which should disappear if you remove all signs via:sign unplace *
_softwareengineering.283092
Consider the following type of Java/Spring web application, with an SQL database:

- there are multiple data entity types (about 100) with relations between them
- the entities are viewed, edited or exposed to APIs, and frequently this happens with several entity types (joined)

The current approach uses three layers:

- a data layer that queries the tables, and uses entities that are a 1:1 match to the database
- a service layer to perform the business logic, and call the data layer as needed
- a controller layer - exposing operations to the client-side code and to the API

My questions related to handling the models are:

1. Should every layer have its own models / entity classes? If yes, how is it best to handle copying / merging the models across the layers?
2. Sometimes, at the service layer, an entity might require certain fields to be filled in one case, but not in other cases. Should there be two model classes for these two situations? (To make sure you can count on what fields are provided by the service.)
3. Given the large number of entities, is it worth being consistent in addressing the issues above in the same manner, regardless of any extra complexities involved?
Application model management questions
java;architecture;spring;model
I don't think each layer should have their own entity classes. Doing this would multiply number of classes and raise complexity without any real benefit. Sometimes one concept from service layer doesn't map well to relational DB (or other persistence technology) so it's sensible to create different entity classes, but mostly it's not necessary and you can have the same entity class for all layers. But keep in mind that things will change and requirements for one layer will diverge from the others (e.g. you must keep the old contract at the controller layer) - that's when you need to be prepared to create different entity classes for different use cases. If you use one entity for all layers, changes will affect all layers - e.g. renaming attribute can possibly break contract of your controllers.This is usually done for performance reasons. Best thing to do is to avoid it as much as possible - beware the dangers of premature optimizations. But sometimes this is necessary and then it's difficult to give one answer - if this is one special case where this incomplete entity is used, you might get just fine with just documenting it. If it's used more often, you should create special entity class for this, otherwise you'll get lost in following complexity.Personally I think that it's better to handle this on one-by-one basis. Entities will evolve differently and trying to handle them all in a unified way will bring a lot of overhead and useless complexity.So, my advice is to create entity per layer only when it's necessary/meaningful. But you need to keep in mind the consequences and be ready to divide the entity class once the need arises.Also, be careful about transactions. It's too easy to forget whether you are in a transactional context or not, especially if you're using same entity classes everywhere. From my experience, it's usually good to have clear transactional demarcation - i.e. whole service layer is transactional (by default), everything outside (controllers, tasks etc.) is not.
_unix.210995
I want to make a script that counts up, outputting the number it's at plus a space each time, until it gets to 10 and then stops. I'm learning scripting and I'm just making simple scripts to learn. Here's what I've made so far:

x= 1
if [$x < 10]
then
echo [ $x += 1 ]
echo "\n"
then
echo "done!"
fi
One `if`, two `then`s: Why not?
linux
null
_codereview.87043
Would you have any suggestions for improvements for the code below?def my_transpose(arr) # number of rows m = arr.count #number of columns n = arr[0].count transposed_arr = Array.new(n) { Array.new(m) } # loop through the rows arr.each_with_index do |row, index1| # loop through the colons of one row row.each_with_index do |num, index2| # swap indexes to transpose the initial array transposed_arr[index2][index1] = num p transposed_arr end end transposed_arrend
Transposing a 2D array
beginner;ruby;matrix
null
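No review was posted for this one; two quick notes all the same. Ruby already ships Array#transpose, so the hand-rolled version is mostly a learning exercise, and the same idea is a one-liner in other languages - e.g. this Python sketch:

```python
# Transpose by unpacking the rows into zip, which pairs up the columns.
arr = [[1, 2, 3],
       [4, 5, 6]]
transposed = [list(row) for row in zip(*arr)]
print(transposed)   # [[1, 4], [2, 5], [3, 6]]
```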
_codereview.59838
I have a large service application I designed; it's a solution made up of about 30 projects.I was wondering if anyone could point me in the right direction regarding using the Task-based Asynchronous Pattern (TAP) and/or the Event-based Asynchronous Pattern (EAP).BackgroundThe project in question is installed as a .NET 4.5 Windows service. This service runs a thread that uses a pre-emptive algorithm to execute detached task every 60000ms +/- 20ms (real time).The service has the ability auto recover, adjust itself based on system load, execute methods based on scheduling criteria in XML.My QuestionSince the service executes (groups of methods/no methods/single method) at specific times from a static context, what is the proper way to validate the asynchronous execution.I have implemented TPL across the solution, but I am not confident with validating the state of asynchronous running code within this project against a scheduler while implementing the task based asynchronous pattern in a windows service.In particular handling re-entrancy in Async with parallelism is the main goal.An ExampleThe Handler receives a list of methods that need to be run at time (DHHMM). This occurs every minute, at 500ms past the minute. I then asynchronously execute those method(s).For each method (methodX) I am to track:if the method has completed (will return a value indicating if it was a success or failure).if the method threw an exception (all exceptions have been handled, but just in case).if the method was canceled (due to timeout, or service onShutdown, onPause, etc).if the method is still running.But before I execute any methodX at any time (DHHMM) I need to make sure that methodX is not still currently running from any previous cycle.I must maintain that one and only one instance of any methodX be running at any time.If methodX is called while methodX is still running it is placed on a waiting list for the next interval.If methodX is called while methodX has failed or thrown an exception previously it is placed on a blocking list for all intervals(until removed from blocklist).I came up with a solution using a static dictionary to manage the status of tasks, but this is becoming apparently complicated, as I try to ensure mutual exclusion of the data structure. As I code the taskexecuter, I am attempting to batch execute collections of task based functions. I have extensively read about implementing the EAP, or current TAP. I will update the question as I code.namespace ServiceTestFloor{ public class TestFloor2 { public static List<Func<bool, bool>> lstActions = TestFloor2A.getMethodsEWS(); public static void ExecuteTask(string processname) { // if processname is in the list of functions then we fire the task /* adding more fuctionality requires external processes, or a re-complie with new code */ foreach (Func<bool, bool> act in lstActions) { string strclassname = act.Method.Name; // delay for the main action for this contract if (strclassname == processname) { bool isValid = act(true); return; } } } public static void init() { Console.WriteLine(Running TestFloor2A.cs); initalizeMonitor(); Console.WriteLine(Monitor initalized); // set up a program to pass lists of functions to test the process handler. 
// list one, sleep, list 2, sleep, list 3, sleep, list 4, sleep List<string> testgroup = new List<string>(); } public static void initalizeMonitor() { /* Clears the monitor */ dTMx = new Dictionary<string, int[]>(); isActive = false; blocklist = new List<string>(); waitlist = new List<string>(); } private static bool isActive; private static List<string> recientCompleted = new List<string>(); private static int intervalFailBlocking = 60; /* if a process failed then reset the failed list after an hour*/ private static List<string> waitlist = new List<string>(); private static List<string> blocklist = new List<string>(); /* blocking for the master process */ private static Dictionary<string, int[]> dTMx = new Dictionary<string, int[]>(); /* * taskstatus * Key(string) = process name * Value(int[]) * inx[0] = process status 0 = Complete, 1 = Incomplete, 2 = Failed, 3 = Canceled * inx[1] = Rcycs repetitions cycle counter * inx[2] = Wcycs waiting cycle counter * inx[3] = Fcycs failed cycle counter * inx[4] = Ccycs canceled cycle counterss */ public static void ReportProcessCompleted(string processname) { if (dTMx.ContainsKey(processname)) { int[] inxx = dTMx[processname]; inxx[0] = 0; dTMx[processname] = inxx; //recientCompleted.Add(processname); Console.WriteLine(Process Complete: + processname + Rcycle time: + inxx[1]); } else { Console.WriteLine(ReportProcessCompleted Failed: + processname + Not Found); } } public static void ReportProcessFailed(string processname) { if (dTMx.ContainsKey(processname)) { int[] inxx = dTMx[processname]; inxx[0] = 2; dTMx[processname] = inxx; Console.WriteLine(Process Failed: + processname + Rcycle time: + inxx[1]); } else { Console.WriteLine(ReportProcessFailed Failed: + processname + Not Found); } } public static void ReportProcessCancel(string processname) { if (dTMx.ContainsKey(processname)) { int[] inxx = dTMx[processname]; inxx[0] = 3; dTMx[processname] = inxx; Console.WriteLine(Process Canceled: + processname + Rcycle time: + inxx[1]); } else { Console.WriteLine(ReportProcessCancel Failed: + processname + Not Found); } } private static void processPre() /* Preemptive actions */ { foreach (KeyValuePair<string, int[]> entry in dTMx) /* iterate the dTMx dictionary */ { int[] inx = entry.Value; string processname = entry.Key; if (inx[0] == 0) { Console.WriteLine(Process Completed: + processname + Removing from Master); dTMx.Remove(processname); /* process completed sucessfully so remove from monitored processes */ } } if (dTMx.Count == 0) { isActive = false; } else { isActive = true; } } public static void processMain(List<string> pls) /* calling method, main action handler */ { processPre(); List<string> tcci = pls; List<string> tcco = new List<string>(); /* output combinations */ Console.WriteLine(Items in Input List); foreach (string str in tcci) { Console.WriteLine(str); } Console.WriteLine(); if (waitlist != null) { Console.WriteLine(Merging Wait List with Input); Console.WriteLine(Items in Wait List); foreach (string str in waitlist) { Console.WriteLine(str); } Console.WriteLine(); tcco = tcci.Union(waitlist).ToList(); Console.WriteLine(Items in Merged Input List); foreach (string str in tcco) { Console.WriteLine(str); } Console.WriteLine(); waitlist = new List<string>(); Console.WriteLine(Wait List Cleared); } else { Console.WriteLine(Wait List Empty); } if (blocklist != null) { Console.WriteLine(Removing Blocked Items); Console.WriteLine(Items in Block List); foreach (string str in blocklist) { Console.WriteLine(str); } Console.WriteLine(); tcco = 
tcco.Except(blocklist).ToList(); Console.WriteLine(Excepted Output List); foreach (string str in tcco) { Console.WriteLine(str); } Console.WriteLine(); } else { Console.WriteLine(Block List Empty); } Console.WriteLine(Finalized List); foreach (string str in tcco) { Console.WriteLine(str); } Console.WriteLine(); if (pls != null || pls.All(x => string.IsNullOrWhiteSpace(x))) /*check if list is empty or invalid */ { /* input list has processes */ foreach (string proc in tcco) { int[] inxx = { 1, 0, 0, 0, 0 }; if (dTMx.ContainsKey(proc)) { /* update the process if complete, canceled, Failed * inx[0] = process status 0 = Complete, 1 = Incomplete, 2 = Failed, 3 = Canceled * inx[1] = Rcycs repetitions cycle counter * inx[2] = Wcycs waiting cycle counter * inx[3] = Fcycs failed cycle counter * inx[4] = Ccycs canceled cycle counter */ inxx = dTMx[proc]; if (inxx[0] == 0) //0 = Complete, 1 = Incomplete, 2 = Failed, 3 = Canceled { dTMx[proc] = inxx; } else if (inxx[0] == 1) /* incomplete code, inc the Wcycs++ */ { int Wcycs = inxx[2]; Wcycs++; inxx[2] = Wcycs; dTMx[proc] = inxx; addtoWaitList(proc); } else if (inxx[0] == 2) /* failed code, inc the Fcycs++ */ { int Fcycs = inxx[3]; Fcycs++; inxx[3] = Fcycs; dTMx[proc] = inxx; } else if (inxx[0] == 3) { int Ccycs = inxx[4]; Ccycs++; inxx[4] = Ccycs; dTMx[proc] = inxx; } } else { /* add the process as incomplete */ //dTMx.Add(proc, inxx); addProcess(proc); } } } else { /* input list is empty */ } processPost(); } private static void processPost() /* Postemptive actions */ { incrementRCycs(); /* after setting up new tasks and registering the tasks */ /* now check the tasks that have registered cancel if any one exceed 5 failures in a row, 5 cancels then alert the necessary */ List<string> ffailedlist = new List<string>(); List<string> ccancellist = new List<string>(); List<string> wwaitedlist = new List<string>(); foreach (KeyValuePair<string, int[]> entry in dTMx) /* iterate the dTMx dictionary */ { int[] inx = entry.Value; string processname = entry.Key; int Fcycs = inx[3]; int Wcycs = inx[2]; int Ccycs = inx[4]; if (Fcycs > 5) { ffailedlist.Add(processname); } if (Wcycs > 5) { wwaitedlist.Add(processname); } if (Ccycs > 5) { ccancellist.Add(processname); } } // if an item is waited for 5 times, it may be a long running process, after 5 times Console.WriteLine(----------------------------); Console.WriteLine(Items in ffailedlist); foreach (string str in ffailedlist) { Console.WriteLine(str); } Console.WriteLine(); Console.WriteLine(Items in wwaitedlist); foreach (string str in wwaitedlist) { Console.WriteLine(str); } Console.WriteLine(); Console.WriteLine(Items in ccancellist); foreach (string str in ccancellist) { Console.WriteLine(str); } Console.WriteLine(); } private static void incrementRCycs() { foreach (KeyValuePair<string, int[]> entry in dTMx) /* iterate the dTMx dictionary */ { int[] inx = entry.Value; string processname = entry.Key; int Rcycs = inx[1]; Rcycs++; inx[1] = Rcycs; dTMx[processname] = inx; } } private static void addtoWaitList(string processname) { if (waitlist.Contains(processname)) { /* ignore this access attempt */ Console.WriteLine(Process.Add WaitList: + processname + Was Skipped, becuase it was already in the list); } else { waitlist.Add(processname); } } private static void addProcess(string processname) /* returns stats of processname 0 = complete, 1 = incomplete, 2 = Failed, 3 = canceled, 7 = empty */ { if (dTMx.ContainsKey(processname)) { /* ignore this access attempt */ Console.WriteLine(Process.Add Master: + 
processname + Was Skipped, becuase it was already in the list); } else { ExecuteTask(processname); int[] inxx = { 1, 0, 0, 0, 0 }; dTMx.Add(processname, inxx); } } private static int getStatus(string processname) /* returns stats of processname 0 = complete, 1 = incomplete, 2 = Failed, 3 = canceled, 7 = empty */ { int x = 7; if (dTMx.ContainsKey(processname)) { int[] inxx = dTMx[processname]; x = inxx[0]; } return x; } }}
Validating asynchronous behavior
c#;multithreading;asynchronous
null
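The question received no answer, but the core re-entrancy rule it describes - at most one running instance per method, re-requests deferred to a wait list, failures moved to a block list - can be sketched compactly with a task registry. A hypothetical Python/asyncio rendering (the names and the failure policy are assumptions drawn from the question, not the original C#):

```python
import asyncio

running: dict[str, asyncio.Task] = {}   # methodX -> its single running task
waiting: set[str] = set()               # re-requested while still running
blocked: set[str] = set()               # failed/cancelled, until cleared

async def dispatch(name, coro_factory):
    if name in blocked:
        return                          # blocked until an operator clears it
    task = running.get(name)
    if task is not None and not task.done():
        waiting.add(name)               # already running: defer to next cycle
        return
    task = asyncio.create_task(coro_factory())
    running[name] = task

    def on_done(t):
        # order matters: check cancelled() first, since exception() raises then
        if t.cancelled() or t.exception() is not None or t.result() is False:
            blocked.add(name)           # failure/cancel -> block list
    task.add_done_callback(on_done)
```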
_softwareengineering.263113
I'm developing a desktop application in .Net that follows a plugin architecture, something like this:

- I have a core .Net solution, containing the desktop exe project and a handful of class library projects. These classes provide all sorts of shared/common functionality, and are referenced by the application exe and individual modules.
- Each module lives in its own .Net solution. The solution has its own copy of the above shared library DLLs that the module's project(s) reference.
- End-user installation really just involves deploying the core exe & DLLs, along with the modules' DLLs required at that site, into one folder. I've got no problem with the mechanism used by the core exe to discover which modules are present, then load & initialise them.

My concerns are around managing and deploying the DLLs. In an ideal world I should be able to make a change to module X and redeploy just that module's DLLs. However it's feasible that a change may also involve updating one of the shared libraries. When I redeploy these updated DLLs, the updated shared library functionality could break other modules (or even the application exe) that reference it. I guess what I should be doing in this scenario is to rebuild/redeploy all solutions, not just module X.

An easier approach may be to treat the core and all modules as a single product, and redeploy everything in every release, even if it's just a change to one module. But it feels like I would be losing one of the advantages of a plugin architecture - I should be able to release a new version of a module in isolation.

Any thoughts? Or are such DLL referencing issues an unavoidable part of using a plugin architecture?
Plugin/modular architecture - deployment concerns
architecture;plugin architecture
null
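For what it's worth, a common middle ground between "redeploy everything" and "redeploy one module" is to version the shared-library contract and let the host refuse plugins built against an incompatible version. A hypothetical sketch of that check (Python for brevity; in .NET this role is played by assembly versions and binding policy):

```python
# Host declares the (major, minor) of its shared contract; a plugin is
# loadable when majors match (no breaking changes) and the host's minor
# is at least what the plugin was built against.
SHARED_CONTRACT = (2, 3)

def compatible(required):
    major, minor = required
    return major == SHARED_CONTRACT[0] and minor <= SHARED_CONTRACT[1]

plugins = {"ModuleX": (2, 1), "ModuleY": (1, 9)}
for name, req in plugins.items():
    print(name, "load" if compatible(req) else "skip: rebuild against v2.x")
```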
_softwareengineering.219615
I was asked about immutable strings in Java. I was tasked with writing a function that concatenated a number of "a"s to a string.

What I wrote:

public String foo(int n) {
    String s = "";
    for (int i = 0; i < n; i++) {
        s = s + "a";
    }
    return s;
}

I was then asked how many strings this program would generate, assuming garbage collection does not happen. My thoughts for n=3 were:

""
"a"
"a"
"aa"
"a"
"aaa"
"a"

Essentially 2 strings are created in each iteration of the loop. However, the answer was n^2. What strings will be created in memory by this function, and why is it that way?
How many strings are created in memory when concatenating strings in Java?
java;strings;object
"I was then asked how many strings this program would generate, assuming garbage collection does not happen. My thoughts for n=3 was" (7 strings)

Strings 1 ("") and 2 ("a") are the constants in the program; these are not created as part of things but are 'interned' because they are constants the compiler knows about. Read more about this at String interning on Wikipedia. This also removes strings 5 and 7 from the count, as they are the same "a" as String #2. This leaves strings #3, #4, and #6. The answer is 3 strings are created for n = 3 using your code.

The count of n^2 is obviously wrong, because at n=3 this would be 9, and even by your worst-case answer, that was only 7. If your non-interned strings were correct, the answer should have been 2n + 1.

So, the question of how should you do this? Since the String is immutable, you want a mutable thing - something you can change without creating new objects. That is the StringBuilder.

The first thing to look at is the constructors. In this case we know how long the string will be, and there is a constructor StringBuilder(int capacity), which means we allocate exactly as much as we need.

Next, "a" doesn't need to be a String, but rather it can be a character 'a'. This has some minor performance boosting when calling append(String) vs append(char) - with the append(String), the method needs to find out how long the String is and do some work on that. On the other hand, char is always exactly one character long.

The code differences can be seen at StringBuilder.append(String) vs StringBuilder.append(char). It's not something to be too concerned with, but if you're trying to impress the employer it is best to use the best possible practices.

So, how does this look when you put it together?

public String foo(int n) {
    StringBuilder sb = new StringBuilder(n);
    for (int i = 0; i < n; i++) {
        sb.append('a');
    }
    return sb.toString();
}

One StringBuilder and one String have been created. No extra strings needed to be interned.

Write some other simple programs in Eclipse. Install pmd and run it on the code you write. Note what it complains about and fix those things. It would have found the modification of a String with + in a loop, and if you changed that to StringBuilder, it would maybe have found the initial capacity, but it would certainly catch the difference between .append("a") and .append('a').
_unix.172278
On a SLES 9 machine I added the line below to /etc/security/limits.conf (edited with vi):

USERNAME hard cpu 70

but when I check it with ulimit -a:

SERVER:~ # su USERNAME
USERNAME@SERVER:/root> ulimit -a | grep -i cpu
cpu time (seconds, -t) unlimited
USERNAME@SERVER:/root> ulimit -Ha | grep -i cpu
cpu time (seconds, -t) unlimited
USERNAME@SERVER:/root>

It still says unlimited. Question: What am I missing?
limits.conf modification doesn't work
su;ulimit
null
_softwareengineering.231273
I work at a company that wants to be agile, but the business analysts often provide us user stories that are more solution than problem statement. This makes it difficult to make good design decisions, or in more extreme cases, leaves few design decisions to be made. It does not help the programmers understand the user's needs or make better design decisions in the future. Our product owner makes an effort to provide us with problem statements, but we still sometimes get solution statements, and that tends toward a code monkey situation.An additional challenge is that some (not all) of my teammates do not see a problem with this, and some of them honestly want to be told what to do. Thus, when we receive a solution statement on our backlog, they are eager to jump right in and work on it.I believe that as a software engineer part of my job is to understand the user's needs so that I can build the right thing for the user. However, within our organization structure, I have zero contact with the user. What kind of things can I do to better understand our users?
How to get better understanding of the users as a programmer
users;systems analysis
This really depends on your management.What I have found in over 13 years of software engineering is that I need to talk directly to the customers and sometimes even end users (often in software these are separate entities). Business analysts are hit or miss: some of them understand the customer and end users and write really good requirements (not solutions), some of them go too far and assume too much.I would recommend talking to your management and argue that you need to be involved earlier in the process, ideally right after sales exits the picture and the BAs enter. Do not be just a fly on the wall: you need to assert yourself as a product expert seeking to understand the problem domain (ugh, buzzwords: you know your software, you want to learn what the customer wants you to do with it).For existing projects, I would work through your project manager and try to cut the BAs out of the picture. You will likely get answers that match what you have already been told. Do not accept this. Try to get in meetings with the customer to discuss requirements. In an Agile environment there should be no problem with this: if you do not understand something, the customer is the final arbiter for requirements (but not necessarily for what your company will authorize you to do in the billable hours paid for).Essentially, keep asking questions and do not shut your mouth until you fully understand what the customer wants. Just because a BA gave you an odd-sounding requirement does not mean this is what the customer wants, or that it is the best way to accomplish the customer's wishes.
_webmaster.42432
Since Firefox 18 was released, new users have been unable to register on our Joomla-based site. The form loads and works in Chrome, IE, and Firefox 17 but not Firefox 18. To make things even more confusing: Inspect Element in Firefox 18 shows a form element that is empty, however, View Page Source in Firefox 18 shows the entire form. Furthermore, using the Web Developer Tools, we checked the HTTP request and response. The response contains the entire form (including inner elements) but Firefox 18 and Inspect Element still don't show these. We've tried dumping the cache, installing the latest Java update, and resetting Firefox to default (i.e. no add-ons or themes.) We are completely stumped on what to do. We've put in a support request, but we're wondering if anyone else has any idea what could be the problem.Here's the site for reference: SIJHSAA -- if you click on Create an account in the right hand sidebar, this is the form that is not working in Firefox 18.
Form content not loading in Firefox 18
joomla;firefox
Joomla 1.5 is pretty old now and no longer supported, so at some point you should upgrade.

The problem is MooTools: newer Firefox versions ship a native String.prototype.contains that behaves differently from the MooTools version this template expects. You can fix this by adding the following to /components/com_gantry/js/mootools-1.2.5.js:

String.prototype.contains = function(string, separator){return (separator) ? (separator + this + separator).indexOf(separator + string + separator) > -1 : String(this).indexOf(string) > -1;};

Ensure you do not add it within another function. Alternatively, you could try updating MooTools, though the issue might be caused by Joomla itself - still worth trying, mind. You could ask or read up on this issue here: http://forum.joomla.org/viewtopic.php?f=428&t=785730
_webmaster.103380
We have our company's website successfully connected to a Google Analytics account, which we can access with one email address. We have only reading rights and can't see the main admin or add other users to the Analytics account. How do we take control of our own analytics without losing the historical data?
Company's analytics account - How to change admin when current admin is unknown
google analytics
null
_cs.38023
A program takes as input a balanced binary search tree with n leaf nodes and computes the value of a function $g(x)$ for each node x. If the cost of computing $g(x)$ is min{no. of leaf-nodes in left-subtree of x, no. of leaf-nodes in right-subtree of x} then what is the worst case time complexity?Since $g(x)$ is applied on the 2 halves of the binary tree, I guess the recurrence relation must look something like :$$T(n) = 2T\Big(\frac{n}{2}\Big)+k$$What I could understand from the question is that $g(x)$ is applied on all the $n$ nodes and instead of $k$ it must be something of the order $O(n)$ but I'm not sure what it is. Am I heading in the right direction?
How to write recurrence relation for the following scenario?
algorithm analysis;runtime analysis;recurrence relation;binary trees
null
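No answer was recorded here, so a sketch of the standard analysis, under the assumption that "balanced" means the leaves split evenly at every node: a node with $k$ leaves below it costs $\min(k/2, k/2) = k/2$, giving

$$T(n) = 2T\left(\frac{n}{2}\right) + \frac{n}{2}, \qquad T(1) = O(1)$$

Each of the $\Theta(\log n)$ levels of the recursion then contributes $n/2$ work in total, so $T(n) = \Theta(n\log n)$ - the guessed recurrence is right except that the additive term is $\Theta(n)$ per level in aggregate, not a constant $k$.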
_unix.264516
After many, many, many failed attempts to get accelerated HTML5 video working on any hardware (tested about 5 machines) I came to the conclusion that accelerated HTML5 is something difficult under Linux. Now I just need some hardware to realize a HTML5/WebRTC based (not only) video conferencing application for use with a TV, but I don't know where to find suitable hardware. It's all easier with Windows, but I'd like to stick with Linux for other reasons. Can somebody tell me how to find or suggest some hardware that will...

- be supported by some HTML5 browser with WebRTC support (video conferencing) - preferably Chrome/Chromium
- allow fluid video playback up to HD resolutions
- may be Intel architecture (preferred) or also ARM if there is some open board support package
- have HDMI output
- under Debian Jessie, or perhaps Ubuntu
- preferably with X11, but it's not important as only the (headless) browser is displayed full-screen (HTML5 application)

It would also be a great help if it boils down to "graphics card X works well with Chrome if you use Kernel X". I know that I'm asking for hardware but it's actually Linux software that's heavily limiting the selection, so I assume this is not off-topic. Thanks.
accelerated HTML5 video: what hardware is supported well by Linux?
debian;hardware;video;browser;gpu
null
_scicomp.21299
Consider a $GF(2^n)$ field and a $GF(2^k)$ Galois field, where $n = k \times m$ and $GF(2^k)$ is a ground field of $GF(2^n)$. I'd appreciate pointers to papers or suggestions on:

How to find $\log(a)$ and $\exp(a)$, where $a$ is in $GF(2^n)$, given $\log/\exp$ look-up tables of $GF(2^k)$?
How to convert values between $GF(2^n)$ and $GF\left( (2^k)^m \right)$?

Specifically I need a solution for $n = 16$ (with any combination of integers $m$ and $k$, e.g. $k = 8$, $m = 2$) such that the amount of calculation used for conversion is minimal. Generator polynomials for all three fields can be assumed to be known; for example, for the case of $n = 16$, $k = 8$, $m = 2$:

$$\begin{eqnarray}&& GF(2^{16}) : x^{16} + x^5 + x^3 + x^2 + 1 \\&& GF(2^8) : x^8 + x^4 + x^3 + x^2 + 1 \\&& GF\left( (2^8)^2 \right) : x^2 + 3x + 1\end{eqnarray}$$

Additional background info: generally I have $\log$ and $\exp$ look-up tables for $GF(2^n)$ and can avoid the conversion problem altogether, but $2^n$ tables don't fit into the memory-constrained CPU I'm using. Thus I'm interested in calculating $\log$ and $\exp$ of $GF(2^{16})$ using the $\log/\exp$ tables of $GF(2^8)$ or $GF(2^4)$. I came across this paper, but it explicitly says that $GF(2^{km})$ is not identical to $GF \left( (2^k)^m \right)$ and doesn't offer a way to convert between the two: the result of multiplication using ground and extension fields doesn't match the multiplication result using any other method (presumably because the composite field is not identical to the original field). Thanks in advance for any help.

p.s. the same question is also posted on math.stackexchange, here.
How to calculate log or exp of a value in GF(2^n) using log/exp table of GF((2^k)^m) where n=k*m?
linear algebra
null
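No answer was posted; as a building block, here is a hedged Python sketch of how the ground-field $\log/\exp$ tables themselves are generated (it does not solve the composite-field conversion, which additionally needs a basis-change matrix between the two representations). It assumes $\alpha = 2$ is primitive for the question's polynomial $x^8+x^4+x^3+x^2+1$ (0x11D), which holds for that polynomial:

```python
def build_tables(poly=0x11D):
    # exp is doubled so exp[log[a] + log[b]] needs no modular reduction.
    exp = [0] * 512
    log = [0] * 256
    x = 1
    for i in range(255):
        exp[i] = x
        log[x] = i
        x <<= 1                # multiply by alpha = 2
        if x & 0x100:          # reduce modulo the primitive polynomial
            x ^= poly
    for i in range(255, 512):
        exp[i] = exp[i - 255]
    return exp, log

exp, log = build_tables()

def gf_mul(a, b):
    # Multiplication via the log/exp table-lookup trick; log[0] is unused.
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]
```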
_softwareengineering.257174
I understand exceptions, throwing them, handling them, and propagating them to a method lower in the call stack (i.e. throws).What I don't understand is this:public static void main(String[] args) throws Exception { ...}Now, I assume that in the case that main throws an Exception, the JVM handles it (correct?). If that's the case, then my question is:How does the JVM handle exceptions thrown by main? What does it do?
How does the JVM handle an exception thrown by the main method?
java;exceptions;jvm
You might think that the public static void main method in Java or the main function in C is the real entry point of your program, but it isn't. All high-level languages (including C) have a language runtime that initializes the program and then transfers control flow to the entry point. In the case of Java, initialization will include:

setting up the JVM
loading required classes
running static initializer blocks. This can execute user-defined code before main is invoked. These blocks aren't supposed to throw exceptions.

There are a variety of ways to implement exception handling, but for the purpose of this question, they all can be viewed as a black box. The important thing however is that the language runtime must always provide an outermost exception handler that catches all exceptions that aren't caught by user code. This exception handler will usually print out a stack trace, shut down the program in an orderly fashion, and exit with an error code. Properly shutting down the program includes destroying the object graph, invoking finalizers, and freeing resources such as memory, file handles, or network connections.

For purposes of illustration, you can imagine the runtime wrapping all code in a giant try-catch that looks like

try {
    loadClasses();
    runInitializers();
    main(argv);
    System.exit(0);
} catch (Throwable e) {
    e.printStackTrace();
    System.exit(-1);
}

except that it's not necessary for a language to actually execute code like this. The same semantics can be implemented in the code for throw (or equivalent) that searches for the first applicable exception handler.
_unix.239629
Given a certain PID, is it possible to discover what command line launched that process? top, atop, and ps provide real-time information; I'm looking for something whereby I can look into the past, because I saw a process taking up many of the machine's resources, I killed it, and now I want to know more about it.
history/log of command-lines executed to launch processes(pid)
process;logs
In the general classic sense, no, it's not possible to discover that information by PID - for one thing, PIDs wrap (32768 by default on Linux), so a PID alone doesn't uniquely identify a past process. Facilities that do retain this kind of information include BSD process accounting (the acct/psacct package, queried with lastcomm, though it records only command names) and the kernel audit subsystem (auditd with rules on execve), as well as other security packages or loggers.
_unix.292203
I want to compile guile on shared hosting, but when I run ./configure I get the error:

configure: error: GNU MP 4.1 or greater not found, see README

So I downloaded GMP and tried to install it locally (following an answer to the Stack Overflow question "install library in home directory"):

mkdir /home/jcubic/lib
./configure --prefix=/home/jcubic/
make
make install

It created these files in /home/jcubic/lib:

libgmp.a
libgmp.la
libgmp.so
libgmp.so.10
libgmp.so.10.3.1

Then I ran configure from the guile directory (I found the option by reading the configure script):

./configure --with-libgmp-prefix=/home/jcubic

but the error remains. How can I use the local GNU MP files while running guile's ./configure and make?
How to use local shared library while compiling the FOSS project?
compiling;libraries;configure
To sum up the comments: one has to set the following environment variables:

LD_LIBRARY_PATH=/home/<user>/lib
LIBRARY_PATH=/home/<user>/lib
CPATH=/home/<user>/include
_unix.66503
The executable files that gcc creates have execute permissions:

-rwxrwxr-x

which are different from the permissions that the source file has:

-rw-rw-r--

How does gcc set these permissions?
How does gcc handle file permissions?
permissions;files;gcc
Four things intervene to determine the permission of a file.When an application creates a file, it specifies a set of initial permissions. These initial permissions are passed as an argument of the system call that creates the file (open for regular files, mkdir for directories, etc.).The permissions are masked with the umask, which is an attribute of the running process. The umask indicates permission bits that are removed from the permissions specified by the application. For example, an umask of 022 removes the group-write and other-write permission. An umask of 007 leaves the group-write permission but makes the file completely off-limits to others.The permissions may be modified further by access control lists. I won't discuss these further in this post.The application may call chmod explicitly to change the permissions to whatever it wants. The user who owns a file can set its permissions freely.Some popular choices of permission sets for step 1 are:666 (i.e. read and write for everybody) for a regular file.600 (i.e. read and write, only for the owner) for a regular file that must be remain private (e.g. an email, or a temporary file).777 (i.e. read, write and execute for everybody) for a directory, or for an executable regular file.It's the umask that causes files not to be world-readable even though applications can and usually do include the others-write permission in the file creation permissions.In the case of gcc, the output file is first created with permissions 666 (masked by the umask), then later chmod'ed to make it executable. Gcc could create an executable directly, but doesn't: it only makes the file executable when it's finished writing it, so that you don't risk starting to execute the program while it's incomplete.
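The 666-masked-by-umask rule is easy to verify empirically. A small Python sketch (the file name is arbitrary; run it anywhere writable):

```python
import os

os.umask(0o022)                     # remove group/other write bits
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o666)  # ask for 666
print(oct(os.fstat(fd).st_mode & 0o777))  # prints 0o644: 666 & ~022
os.close(fd)
```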
_webapps.78944
In Twitch chat, if your name is mentioned, it shows up with an inverse background (black in normal mode, white in theater mode). While this makes it easy to notice when you've been mentioned while you are watching chat, it doesn't do much for when you are not paying attention.I'd like a way to get a desktop notification or sound notification when my name is mentioned in chat. Is this possible?Also, is there a way to do this with any random keyword I'd like to?
Is there a way to be notified when my name is mentioned in Twitch?
twitch.tv
You can use NightDev's BetterTTV extension. It provides various enhancements to the twitch.tv website and it can be configured to provide desktop notifications and sounds on highlight. Note however that the latter of those two is currently labeled BETA.It does a lot more than what you're asking for but the structure of the extension seems to be pretty modular, so I imagine you can disable most, if not all annoyances you may have with it.As a side note, I'd like to mention that you can interface with Twitch chat through any IRC client (see how here).
_hardwarecs.5733
Are there switches which have a direct connection between two ports (e.g. port 1 and 2) when the power is not plugged in? I would like those two ports linked as if there was a direct connection with a cable instead of a (switched off) switch in between. As soon as the power is plugged in, the switch should operate as usual.Are there any? Is it probably a common behavior of some models? Is there a name for that feature?
Are there switches with a direct connection between two ports when the power is off?
networking;switch;power
null
_webmaster.11807
I am currently in the process of setting up an Apache server that will be hosting projects built from SVN using a continuous integration server. The problem is, however, that while I've managed to configure the build server to output the revisions of a project to a directory, I'm stumped as to how to actually configure Apache using mod_vhost_alias to serve the different projects.The directory structure is generated using the following pattern:/usr/share/Projects/r[revision]Inside, there are two directories which I'd like to configure access via a subdomain:api.r[revision].testserver.local -> ./serverr[revision].testserver.local -> ./webThere is already a local DNS server that serves the wildcards on *.testserver.local and already resolves them to the correct IP, but Apache needs to resolve the correct DocumentRoot for the different subdomains.The end result I'm hoping for is that as long as the build server outputs and configures the projects inside /usr/share/Projects/r*, Apache will know how to resolve these subdomains without the need to write conf files and reloading configurations each time a new revision is fetched and built from SVN.
Configuring `mod_vhost_alias` to serve subdomain-based websites
apache;httpd.conf
null
_unix.353885
I just installed Ubuntu 16.04 LTS on my old Lenovo G50-80. I've been having issues with getting the wifi to work from the start. During install I gave it a network cable and then after the install I couldn't get the wifi working at first so I configured it manually somehow in Unity (something like adding a record about a Host with AP name and password in a config file). At last it was working, so I started polishing the look, and decided to go with gnome instead. Now the wifi still works on login and it automatically connects to my AP but as soon as I open my VPN client, connect and then later disconnect, it will not work again for some reason. At any point in time, when I try to change the gnome UI wifi settings, it's searching and never seems to find any APs, despite me already being connected to my own AP when looking at it.Is gnome not recognizing my wifi drivers?Otherwise, what could be the problem?
Ubuntu wifi connected but not working correctly
ubuntu;wifi;gnome;drivers;gdm3
null
_cs.11741
Let $p$ be the six-variable Boolean function with the following definition: $p(x_{0},x_{1},x_{2},x_{3},x_{4},x_{5})=\begin{cases} true & \text{if } x_{0}=x_{5} \text{ and } x_{1}=x_{4} \text{ and } x_{2}=x_{3}, \\ false & \text{else.}\end{cases}$ This function obviously yields $true$ iff $x_{0}x_{1}x_{2}x_{3}x_{4}x_{5}$ is a palindrome. Provide a BDD for $p$ relative to a variable ordering of your choice. My problems begin when I try to define an appropriate variable ordering, so I am only able to guess it: $x_{0}=x_{5} < x_{1}=x_{4} < x_{2}=x_{3}$. I'm actually pretty lost with this exercise and any help is much appreciated (sorry for not being able to provide a better approach of my own).
Binary decision diagram for a six-variable Boolean function
formal methods
So finally this should be the correct solution: the variable ordering is $x_{0} < x_{5} < x_{1} < x_{4} < x_{2} < x_{3}$. [The BDD diagram that followed is missing; with this ordering the diagram tests each pair in turn: the root tests $x_{0}$, its two children test $x_{5}$, any mismatch leads straight to the 0-terminal, the two matching branches merge into a single node testing $x_{1}$, and so on down to $x_{3}$, for 9 internal nodes in total.]
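As a purely illustrative aside (not part of the original answer), a brute-force Python sketch that counts ROBDD nodes by merging identical subfunctions confirms why this interleaved ordering is good: it yields 9 internal nodes, versus 21 for the natural ordering $x_{0} < x_{1} < x_{2} < x_{3} < x_{4} < x_{5}$:

    import itertools

    def p(x):  # x is a tuple (x0, ..., x5) of 0/1 values
        return x[0] == x[5] and x[1] == x[4] and x[2] == x[3]

    def robdd_size(order):
        # Truth table of the subfunction obtained by fixing the first
        # len(vals) variables of `order` to the values in `vals`.
        def table(vals):
            rows = []
            for rest in itertools.product((0, 1), repeat=6 - len(vals)):
                assign = dict(zip(order, vals + rest))
                rows.append(p(tuple(assign[i] for i in range(6))))
            return tuple(rows)

        nodes = set()
        def walk(vals):
            t = table(vals)
            if len(set(t)) == 1:       # constant subfunction: a terminal
                return
            if table(vals + (0,)) == table(vals + (1,)):
                walk(vals + (0,))      # redundant test: reduced away
                return
            nodes.add((len(vals), t))  # identical subfunctions merge
            walk(vals + (0,))
            walk(vals + (1,))

        walk(())
        return len(nodes)

    print(robdd_size((0, 5, 1, 4, 2, 3)))  # pairs tested adjacently: 9
    print(robdd_size((0, 1, 2, 3, 4, 5)))  # natural ordering: 21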
_unix.75622
I made a copy of all the backups made on the Wednesday of every week. The timestamps of the files are not sorted, but every file's day is a Wednesday. Now I need to sort the files based on timestamp; e.g. if the date is 1-May, it should first display the backup files of 1-May, then the files of 8-May.I used this command, but of course it is slapping me with errors:sort $(cat /home/emerg/Wedbackup.txt)The error issort: invalid option -- wI don't know how to use the output of one command as the input of another command, so I need advice on how to do it.
How to use output of one command as input in another command
rhel;sort
There's no need to use cat in this case:

    sort /home/emerg/Wedbackup.txt

The problem with your example is that your file's contents are being passed as the command line to sort, which is not what you want. For example, if this were your file:

    foo bar
    baz qux
    wibble wobble

the arguments would look like this:

    sort foo bar baz qux wibble wobble

This is not what you want. What you actually want is to pass the file to sort on stdin, which can be done like this:

    sort < /home/emerg/Wedbackup.txt

This is more generalisable, as taking a filename as an argument is specific to sort and is not a universal convention. In the case of sort itself, though, you should prefer to pass the filename as an argument rather than on stdin, as it allows seeking on the file, which can improve sorting efficiency.
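If the end goal is to order the backup names by the date embedded in them rather than lexically, GNU sort's key options can do that too. Assuming, purely for illustration, lines shaped like backup-1-May.tar (the question doesn't show the real filename format), something like

    sort -t- -k3M -k2n /home/emerg/Wedbackup.txt

sorts by month name first (-M) and day number second (-n), using - as the field separator.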
_unix.382358
I have a process that uses the /dev/ttyAMA2 port to communicate with an external device on a PCB.What options do I have for listening to the communication between a process and a serial port, on an ARM processor with a read-only file system (except /home), without interfering with their communication? Other questions suggest using socat or interceptty, but I was not able to cross-compile those for this processor. I can, however, compile my own C code and run it successfully. All I need is to capture what the process is sending to that port (I don't need the response data).
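For reference, since compiling C is an option, one approach is to do in miniature what interceptty does: create a pseudoterminal, point the process at the pty slave instead of /dev/ttyAMA2, and relay bytes in both directions while logging the outgoing side. A minimal sketch follows; the missing termios/baud setup, the thin error handling, and the log path under /home are all assumptions to adapt:

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        int real = open("/dev/ttyAMA2", O_RDWR | O_NOCTTY);
        if (real < 0) { perror("open /dev/ttyAMA2"); return 1; }

        int master = posix_openpt(O_RDWR | O_NOCTTY);
        if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
            perror("pty setup"); return 1;
        }
        /* Reconfigure the process to open this path instead of ttyAMA2. */
        printf("slave pty: %s\n", ptsname(master));

        FILE *log = fopen("/home/serial-tap.log", "w"); /* /home is writable */
        if (!log) { perror("fopen log"); return 1; }

        char buf[512];
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(real, &fds);
            FD_SET(master, &fds);
            if (select((real > master ? real : master) + 1,
                       &fds, NULL, NULL, NULL) < 0)
                break;

            if (FD_ISSET(master, &fds)) {            /* process -> device */
                ssize_t n = read(master, buf, sizeof buf);
                if (n <= 0) break;
                fwrite(buf, 1, (size_t)n, log);      /* capture outgoing data */
                fflush(log);
                write(real, buf, (size_t)n);
            }
            if (FD_ISSET(real, &fds)) {              /* device -> process */
                ssize_t n = read(real, buf, sizeof buf);
                if (n <= 0) break;
                write(master, buf, (size_t)n);
            }
        }
        return 0;
    }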
Listening on the data between a process and a serial port
tty;arm;serial port
null
_cs.2069
I am trying to solve the following coverage problem.There are $n$ transmitters with a coverage area of 1 km and $n$ receivers. Decide in $O(n\log n)$ whether every receiver is covered by some transmitter. All receivers and transmitters are represented by their $x$ and $y$ coordinates.The most advanced solution I can come up with takes $O(n^2\log n)$: for every receiver, sort all transmitters by their distance to this receiver, then take the transmitter with the shortest distance; this shortest distance should be within 0.5 km.But the naive approach looks much better in time complexity, $O(n^2)$: just compute the distances between all pairs of transmitters and receivers.I am not sure whether I can apply range-search algorithms to this problem. For example, kd-trees allow us to find such ranges; however, I have never seen an example, and I am not sure whether there is a kind of range search for circles. The given complexity $O(n\log n)$ suggests that the solution should be somehow similar to sorting.
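For concreteness, the nearest-transmitter formulation — every receiver is covered iff its nearest transmitter lies within the coverage radius — can be sketched with a k-d tree. Note this gives $O(n\log n)$ construction but only expected $O(\log n)$ per query on typical inputs, not a worst-case guarantee; for a guaranteed bound one would use the Voronoi diagram of the transmitters with $O(\log n)$ point location per receiver. The radius parameter should match whichever interpretation (1 km or 0.5 km) is intended:

    import numpy as np
    from scipy.spatial import cKDTree

    def all_covered(transmitters, receivers, radius=1.0):
        # A receiver is covered iff its nearest transmitter is in range.
        tree = cKDTree(np.asarray(transmitters))
        dist, _ = tree.query(np.asarray(receivers))  # nearest-neighbour distances
        return bool((dist <= radius).all())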
Coverage problem (transmitter and receiver)
algorithms;computational geometry;search problem
null
_unix.13034
Is it possible to divorce my input from the overall shell using Screen? What I'm aiming for is akin to a status line that expands if I type more than would fit within a single line and is 'submitted'/'sent' to the shell when I press enter.I'm looking to put together a simple configuration to use as a MUSH/MUD/MUCK/MOO client using screen+telnet. The current issue with using telnet is that data sent from the remote server is inserted at the cursor position, which sucks badly if you're typing a lengthy paragraph.
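For reference, one low-tech alternative to configuring this in screen itself is to wrap telnet in rlwrap, which keeps the line being edited separate from server output and redraws it after incoming text (host and port here are made up):

    rlwrap telnet mud.example.org 4000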
Divorced input line in GNU Screen
shell;terminal;gnu screen;telnet
null