Columns: id (stringlengths 5-27), question (stringlengths 19-69.9k), title (stringlengths 1-150), tags (stringlengths 1-118), accepted_answer (stringlengths 4-29.9k)
_unix.302631
I'm working on preparing an enterprise software package to run in the cloud, but I'm facing the issue that the package runs as a real-time process on our current deployments. No one is really sure whether that's actually necessary for the system, but everyone heavily recommends it. Running on a cloud service, however, our VM will share a host with dozens of other VMs (maybe hundreds?), and even though I can set the process to be scheduled as real-time inside the VM, the VM itself will still have normal priority on the host. Is that correct? Is the virtualization software scheduled like any other process on the host?
Does a VM guest system run only when the VM's process is scheduled on the host?
virtual machine;cpu;scheduling
As far as the host is concerned, a VM is one process that is scheduled like any other process. In the end, each processor (each core) can only be running one program at a time, and the host's scheduler decides which one it is.

As far as I know, none of the virtual machine technologies typically used on cloud services offer real-time guarantees. It's definitely possible to build virtual machines with real-time guarantees, but there's a cost: the other processes get less CPU time. The cost/benefit typically doesn't match what cloud hosting aims for, which is to amortize resources among many contenders so that processors don't stay idle for long.

If you want real-time guarantees, that's going to be a fundamentally different service from basic cloud hosting, and one that you'll need to pay for. Since putting multiple real-time processes together tends to require a holistic view to make sure that all of them meet their deadlines, you'll most likely end up running your stuff the way you want, on dedicated hardware.

Cloud and real-time strike me as a strange combination anyway. A task running on a cloud service is only completed once you've downloaded the response, and you typically wouldn't have any service guarantees for the communication between the endpoint that needs the response in real time and the cloud service. Real-time computations normally have to be kept within a network perimeter under your control, where you have throughput and latency guarantees.
_scicomp.23852
I'm working on a project where I have two advection-diffusion domains coupled through their respective source terms (one domain adds mass, the other subtracts it). For brevity, I'm modeling them in steady state. The equations are the standard advection-diffusion transport equations with source terms:$$\frac{\partial c_1}{\partial t} = 0 = \mathcal{F}_1 + \mathcal{Q}_1(c_1,c_2) \\ \frac{\partial c_2}{\partial t} = 0 = \mathcal{F}_2 + \mathcal{Q}_2(c_1,c_2)$$where $\mathcal{F}_i$ is the diffusive and advective flux for species $i$, and $\mathcal{Q}_i$ is the source term for species $i$. I have been able to write a solver for my problem using the Newton-Raphson method, and have completely coupled the two domains using a block matrix, i.e.:$$F_{coupled} = \left[\begin{array}{c c}A_1 & 0 \\0 & A_2 \\\end{array}\right]\underbrace{ \left[\begin{array}{c}c_{1,i} \\c_{2,i} \\\end{array}\right] }_{x_i} - \left[\begin{array}{c}b_1(c_{1,i}, c_{2,i}) \\b_2(c_{1,i}, c_{2,i}) \\\end{array}\right]$$The term $F_{coupled}$ is used to determine the Jacobian matrix and update both $c_1$ and $c_2$:$$\mathcal{J}(x_i) \left[x_{i+1} - x_i \right] = -F_{coupled}$$or$$x_{i+1} = x_i - \left(\mathcal{J}(x_i)\right)^{-1} F_{coupled}$$To speed things up, I don't calculate the Jacobian every iteration; right now I'm recomputing it every five iterations, which has seemed to work well enough and keep the solution stable. The problem is: I'm going to be moving to a larger system where both domains are in 2D/2.5D, and calculating the Jacobian matrix is going to quickly deplete my available computing resources. I'm building this model to be used in an optimization setting later on, so I also can't be behind the wheel at every iteration tuning the damping factor, etc. Am I right to be looking elsewhere for a more robust algorithm for my problem, or is this as good as it gets? I've looked a bit into quasi-linearization, but am not sure how applicable it is to my system. Are there any other slick algorithms I may have missed that can solve a system of nonlinear equations without recalculating the Jacobian as often?
Methods of solving non-linear advection-diffusion systems beyond Newton-Raphson?
nonlinear equations;newton method;advection diffusion;coupling
I'm assuming the limitation in 2D and 3D is storing the Jacobian.

One option is to retain the time derivatives and use explicit pseudo-time-stepping to iterate to steady state. However, the stable time step (CFL number) for diffusive and reactive systems can get prohibitively small. You could try geometric multigrid (if using structured grids) or algebraic multigrid, together with local time-stepping, to speed up convergence.

The other option is to use a fully implicit scheme as you're doing now, but not store the global Jacobian: a matrix-free implicit scheme. The Newton update$$DF(u^n)\, \delta u^n = -F(u^n)$$(where $DF$ is the Jacobian) can be solved with a Krylov subspace solver like GMRES or BiCGStab using the fact that$$DF(u^n)\, \delta u \approx \frac{F(u^n+\epsilon\,\delta u) - F(u^n)}{\epsilon}.$$This works because GMRES and BiCGStab never need the matrix $A$ itself; they only need to be able to compute the product $Ax$ for a given vector $x$. With a proper value of $\epsilon$ (in practice scaled by $1/\Vert\delta u\Vert$, around $10^{-7}$ for double-precision floats), you can execute a Newton loop without ever computing or storing a Jacobian. I know for a fact that this technique is used to solve some non-trivial cases in computational fluid dynamics. Note, however, that the number of evaluations of the function $F$ will be higher than in a matrix-storage technique, since every matrix-vector product costs an extra function evaluation.

Another thing to note: if your system is such that a powerful preconditioner is needed (i.e. point- or block-Jacobi will not suffice), you might want to try using the above-mentioned method as a smoother in a multigrid scheme. If you want to try a point- or block-Jacobi preconditioner, you could compute and store only the diagonal elements or diagonal blocks of the Jacobian, which is not much. A Gauss-Seidel or SSOR preconditioner may also be possible to implement without explicitly storing the Jacobian, depending on your exact governing equation.
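As a toy illustration of the matrix-free building block described above (plain Python, a made-up 2x2 system, not the poster's PDE), the finite-difference formula approximates the Jacobian-vector product with a single extra residual evaluation:

```python
# Toy illustration of the matrix-free Jacobian-vector product used in
# Jacobian-free Newton-Krylov methods. F is a made-up 2x2 nonlinear residual;
# DF(u) v is approximated with one extra evaluation of F.
import math

def F(u):
    x, y = u
    return [x * x + y - 3.0, x + y * y - 5.0]

def jac_vec(F, u, v, eps=1e-7):
    # DF(u) v  ~  ( F(u + eps * v/||v||) - F(u) ) * ||v|| / eps
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return [0.0] * len(u)
    up = [ui + eps * vi / norm for ui, vi in zip(u, v)]
    Fu, Fup = F(u), F(up)
    return [(a - b) * norm / eps for a, b in zip(Fup, Fu)]

# Analytic Jacobian at u = (1, 2) is [[2x, 1], [1, 2y]] = [[2, 1], [1, 4]],
# so DF(u) @ (1, -1) = (1, -3); the approximation should be very close.
u, v = [1.0, 2.0], [1.0, -1.0]
approx = jac_vec(F, u, v)
```

A Krylov solver such as GMRES only ever calls jac_vec, so the full Jacobian is never formed or stored.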
_unix.213969
I'm wondering if there is an application that makes it so that commands after it operate in an environment that treats the working directory as if it were the top-most one, and there is absolutely no way to access the wider file system via '..' and such?
Making the current directory as if it were where an absolute path starts
shell;directory
It looks like you're looking for chroot.

Note that while something like ../../../../../.. will not escape the restricted root directory, there are other ways to escape indirectly, by leveraging other processes. If you're concerned about a malicious application, run it as a user who doesn't run any process outside the chroot.

For a more advanced/packageable/secure solution, have a look at Docker or other container-based solutions.
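A minimal illustration of the chroot approach (hypothetical paths, requires root; the jail must contain copies of every binary and shared library the confined command needs):

```shell
# Build a tiny jail and run a shell confined to it (sketch; run as root).
mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/lib64
cp /bin/sh /srv/jail/bin/
# also copy the shared libraries /bin/sh links against; 'ldd /bin/sh' lists them
chroot /srv/jail /bin/sh
# inside the jail, /srv/jail appears as /, and 'cd ..' from / stays at /
```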
_computergraphics.4637
If not, should I create a new VAO for every VBO that has its own vertex attribute configuration? Could you please give me a code snippet that shows how to use one VAO for multiple VBOs? All the examples I find on the internet call glVertexAttribPointer before drawing a buffer, but if you have to call that method every time you want to draw a buffer, then VAOs don't make much sense. PS: The tutorial I'm following uses OpenGL 3.3.
Can one VAO store multiple calls to glVertexAttribPointer?
opengl;c++;vertex buffer object
Yes, VAO state includes vertex attribute specification for multiple attributes. Each attribute has its own format information and can come from a distinct buffer object. That's part of why you can only bind one VAO at a time.

// two VBOs but one VAO
GLuint points_vbo = 0;
glGenBuffers(1, &points_vbo);
glBindBuffer(GL_ARRAY_BUFFER, points_vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);

GLuint colours_vbo = 0;
glGenBuffers(1, &colours_vbo);
glBindBuffer(GL_ARRAY_BUFFER, colours_vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), colours, GL_STATIC_DRAW);

GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

rendering-loop {
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, points_vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
    glBindBuffer(GL_ARRAY_BUFFER, colours_vbo);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

PS: you will need to be careful about the locations of the vertex attributes. Those used in the rendering loop need to match those in the vertex shader.
_softwareengineering.336081
I am curious whether there are metrics on whether code coverage actually improves code quality. Are there any research studies? If so, at what percentage does it become a case of diminishing returns? If not, why do so many people treat it as religious doctrine? My skepticism is anecdotal and was brought on by two projects I was involved with; both implemented the same reasonably complex product. The first one just used targeted unit tests here and there; the second one had a mandated 70% code coverage. If I compare the number of defects, the second one has almost an order of magnitude more of them. Both products used different technologies and had different sets of developers, but I am still surprised.
Does Code Coverage improve code quality?
code quality;metrics;test coverage
I'm assuming you are referring to a code coverage metric in the context of unit testing. If so, I think you have indirectly answered your own question already:

    First project just used targeted unit tests here and there. Second one has a mandated 70% code coverage. If I compare the amount of defects, the 2nd one has almost an order of magnitude more of them.

In short: no, a code coverage metric does not, by itself, improve the quality of a project at all.

There's also a common belief that code coverage reflects the quality of the unit tests, but it doesn't. It doesn't tell you which parts of your system are properly tested, either; it only says which code has been executed by your test suite. What you know for sure is that code coverage tells you which parts of your system are not tested.

However, the code coverage metric may relate to overall code quality if you are sure of the quality of your unit tests. The quality of a unit test can be defined as its ability to detect a change in your code base that breaks some business requirement. In other words, every change that breaks a particular requirement (acceptance criterion) should be detected by good-quality tests (such tests should simply fail). One of the simplest automated approaches to measuring the quality of your test suite, which does not involve too much additional effort on your side, is mutation testing.

UPDATE: http://martinfowler.com/bliki/TestCoverage.html
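To make the mutation-testing idea concrete, here is a toy, self-contained sketch (the function and its mutant are made up): a suite with 100% line coverage can still let a behavior-changing mutant survive, while one extra boundary test "kills" it.

```python
# Toy mutation-testing sketch: full line coverage does not imply the suite
# can detect a behavior-breaking change (a "mutant").

def price_with_discount(price_cents, qty):
    """Original: 10% discount for orders of 10 or more items."""
    total = price_cents * qty
    if qty >= 10:
        total = total * 9 // 10
    return total

def price_with_discount_mutant(price_cents, qty):
    """Mutant: '>=' changed to '>', altering only the qty == 10 boundary."""
    total = price_cents * qty
    if qty > 10:
        total = total * 9 // 10
    return total

def weak_suite(fn):
    # Executes every line of the original (100% coverage)...
    return fn(500, 1) == 500 and fn(200, 20) == 3600

def strong_suite(fn):
    # ...but only this boundary case distinguishes original from mutant.
    return weak_suite(fn) and fn(100, 10) == 900

mutant_survives_weak = weak_suite(price_with_discount_mutant)
mutant_killed_by_strong = not strong_suite(price_with_discount_mutant)
```

A mutation-testing tool automates exactly this: it generates many such mutants and reports the ones your suite fails to kill.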
_unix.259182
I'm playing around a little with LVM. I've noticed that I can add a new HDD, include it in the volume group, and add space to all of the existing volumes without unmounting their filesystems or rebooting the machine. But when I tried to do the opposite, i.e. reduce the volumes' space and remove the drive from the system, I was unable to do it online. This can only be achieved when the filesystems are unmounted, or from a live CD/DVD/pendrive (for the root filesystem). So the question is: why can't the filesystem be shrunk online?
Why can the ext4 filesystem be shrunk only when not mounted?
filesystems;lvm
Presumably you already found this: https://serverfault.com/questions/528075/is-it-possible-to-on-line-shrink-a-ext4-volume-with-lvm, so the short answer is: because the folks who wrote ext4 don't support it.

The slightly longer answer is that it's hard, especially while maintaining any sort of backward compatibility with ext2. Finding all the bits and pieces of the filesystem that are in use past the end of the desired new size, then moving them all back within that new size, is hard enough. Now do it while all of a filesystem's daily chores are still taking place, and it becomes very easy to destroy your data. The ext4 folks have essentially said that if you need to shrink... do it safely.

The very long answer delves deep into the internal operations of filesystems and is probably more than you really want to know. But if you do, this might be a good (free) place to start: http://www.nobius.org/~dbg/practical-file-system-design.pdf
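For completeness, the offline shrink procedure the ext4 developers expect looks roughly like this (device, mountpoint and sizes are placeholders; back up first, and note that the filesystem is shrunk before the LV underneath it):

```shell
# Hypothetical offline shrink of an ext4 filesystem on an LV (run as root).
umount /mnt/data
e2fsck -f /dev/vg0/data        # resize2fs insists on a fresh clean check
resize2fs /dev/vg0/data 40G    # shrink the filesystem first
lvreduce -L 45G /dev/vg0/data  # then the LV, leaving a safety margin
mount /dev/vg0/data /mnt/data
```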
_cs.62952
I'm doing research on NLG systems. I need to annotate my corpus (~6 million words) automatically. My algorithm works well, and I want to calculate Cohen's kappa. What I cannot figure out is the second annotator: I don't have the possibility of obtaining another annotation. Is there any solution to this problem? Could I take a sample of my data, annotate it manually, and calculate Cohen's kappa on that? Is there any other measurement for corpus analysis? Thanks.
Evaluation of annotation
natural language processing;computational linguistics
null
_codereview.18727
In java.util.Random, the Oracle implementation of nextInt(int) is as follows:

public int nextInt(int n) {
    if (n <= 0)
        throw new IllegalArgumentException("n must be positive");
    if ((n & -n) == n)  // i.e., n is a power of 2
        return (int)((n * (long)next(31)) >> 31);
    int bits, val;
    do {
        bits = next(31);
        val = bits % n;
    } while (bits - val + (n-1) < 0);
    return val;
}

I have a need to do the same thing for longs, but this is not included as part of the class signature. So I extended the class to add this behavior. Here's my solution, and even though I'm pretty sure I have it right, bit-twiddling can subtly fluster even the best of devs!

import java.util.Random;

public class LongRandom extends Random {
    public long nextLong(long n) {
        if (n <= 0)
            throw new IllegalArgumentException("n must be positive");
        if ((n & -n) == n)  // i.e., n is a power of 2
            return nextLong() & (n - 1);  // only take the bottom bits
        long bits, val;
        do {
            bits = nextLong() & 0x7FFFFFFFL;  // make nextLong non-negative
            val = bits % n;
        } while (bits - val + (n-1) < 0);
        return val;
    }
}

Have I introduced a subtle bug? Are there improvements to make? What might I need to watch out for?
Extending java.util.Random.nextInt(int) into nextLong(long)
java;random
Try it with n > 2^32 and your code will fail: masking with 0x7FFFFFFFL truncates the random value to 31 bits, reducing your code to the int range and breaking your extension to long. You need to use the maximal value of long, not int, in your mask, i.e. 0x7FFFFFFFFFFFFFFFL (Long.MAX_VALUE).
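A sketch of the corrected extension along the lines of this answer (the self-check in main is mine, not from the original post):

```java
import java.util.Random;

// Corrected sketch: a 63-bit mask (Long.MAX_VALUE) keeps the full long
// range, where 0x7FFFFFFFL silently truncated everything to 31 bits.
class LongRandom extends Random {
    public long nextLong(long n) {
        if (n <= 0)
            throw new IllegalArgumentException("n must be positive");
        if ((n & -n) == n)                          // n is a power of 2
            return nextLong() & (n - 1);            // take the bottom bits
        long bits, val;
        do {
            bits = nextLong() & 0x7FFFFFFFFFFFFFFFL; // force non-negative
            val = bits % n;
        } while (bits - val + (n - 1) < 0);         // reject to avoid modulo bias
        return val;
    }

    public static void main(String[] args) {
        LongRandom r = new LongRandom();
        long n = (1L << 40) + 12345L;               // well above the int range
        for (int i = 0; i < 100000; i++) {
            long v = r.nextLong(n);
            if (v < 0 || v >= n)
                throw new AssertionError("out of range: " + v);
        }
        System.out.println("all values in [0, n)");
    }
}
```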
_codereview.126705
I'm trying to learn to use the C++ Standard Library and some of the modern C++11 features. Can someone review my counting sort algorithm below and critique my style/algorithm/use of the STL? Thank you!

#include <algorithm>
#include <chrono>
#include <iostream>
#include <iterator>
#include <random>
#include <vector>

const int kSize = 100000000;      // Size of container to sort
const int kRangeFrom = -1000000;  // first of range for random number generator
const int kRangeTo = 1000000;     // last of range for random number generator

// Linear time sorting algorithm for integers
template<typename InputIterator>
void counting_sort(InputIterator first, InputIterator last) {
    auto minmax_el = std::minmax_element(first, last);
    auto min = *minmax_el.first;
    auto max = *minmax_el.second;
    std::vector<std::size_t> counts(max - min + 1, 0);
    std::for_each(first, last, [&](auto x) {
        ++counts[x - min];  // Store value counts
    });
    for (auto it_c = counts.begin(); it_c != counts.end(); ++it_c) {
        auto idx = std::distance(counts.begin(), it_c);
        std::fill_n(first, *it_c, idx + min);  // Store in sorted order
        std::advance(first, *it_c);
    }
}

int main() {
    std::random_device rd;
    std::mt19937 mt(rd());
    std::uniform_int_distribution<int> dist(kRangeFrom, kRangeTo);
    std::vector<int> v1(kSize);
    std::generate(v1.begin(), v1.end(), [&]() { return dist(mt); });
    std::vector<int> v2(kSize);
    std::copy(v1.begin(), v1.end(), v2.begin());

    auto first1 = std::chrono::steady_clock::now();
    counting_sort(v1.begin(), v1.end());
    auto last1 = std::chrono::steady_clock::now();

    auto first2 = std::chrono::steady_clock::now();
    std::sort(v2.begin(), v2.end());
    auto last2 = std::chrono::steady_clock::now();

    std::cout << "counting sort time: " << std::chrono::duration<double, std::milli>(last1 - first1).count() << " ms" << '\n';
    std::cout << "std::sort time: " << std::chrono::duration<double, std::milli>(last2 - first2).count() << " ms" << '\n';
    std::cout << "v1 == v2: " << std::equal(v1.begin(), v1.end(), v2.begin()) << '\n';
    return 0;
}
Counting sort using STL
c++;algorithm;c++11;sorting
Associative container

Update: Because of the O(n log n) nature of std::map we have concluded this is not a good idea (but it was worth the test).

Rather than using a vector to store the counts you can use an associative container.

std::vector<std::size_t> counts(max - min + 1, 0);
// replace with
using ValueType = typename std::iterator_traits<InputIterator>::value_type;
std::map<ValueType, std::size_t> counts;

This will limit the amount of memory you use; otherwise the amount of space you use could potentially exceed memory. Also, iterating over a sparse array would be expensive (as there will be lots of zero counts). By using an associative container you only iterate over valid values.

Range based for

Use the new range based for.

for (auto it_c = counts.begin(); it_c != counts.end(); ++it_c) {
// replace with:
for (auto const& value : counts) {

Combine range based for and associative containers

// needs <map> and Boost.Range (boost/range/iterator_range.hpp)
void counting_sort(InputIterator first, InputIterator last)
{
    using ValueType = typename std::iterator_traits<InputIterator>::value_type;
    std::map<ValueType, std::size_t> counts;
    for (auto value : boost::make_iterator_range(first, last)) {
        ++counts[value];
    }
    for (auto const& count : counts) {
        ValueType const& value = count.first;
        std::size_t size = count.second;
        std::fill_n(first, size, value);
        std::advance(first, size);
    }
}
_unix.282962
This is what is happening:

[4]+  Stopped                 sudo nohup exec php socket_axd.php
adnan@vm085:~/server/axdchat/Server$ sudo nohup exec php socket_axd.php &
[5] 2312
adnan@vm085:~/server/axdchat/Server$ sudo nohup exec php axd.com.php &
[6] 2321
[5]+  Stopped                 sudo nohup exec php socket_axd.php
adnan@vm085:~/server/axdchat/Server$

I want to run multiple files in the background.
Can't run multiple files with nohup
command line;nohup
null
_unix.294664
I know the thread and tried to fix my find with -mindepth 15, unsuccessfully:

find -L $HOME -type f -name *.tex \
    -exec fgrep -l janne /dev/null {} + | vim -R -

Unsuccessful attempt:

find -L $HOME -type f -mindepth 15 -name *.tex \
    -exec fgrep -l janne /dev/null {} + | vim -R -

(find -L: about it here.) Its STDOUT:

Vim: Reading from stdin...
find: /home/masi/LOREM: Too many levels of symbolic links

Visualization of symlinks, unsuccessful because it gives all files while I would like to see only symlinked directories and files in the system:

tree -l

Law29's proposal:

# include symlinks
find $1 -type l -name $2* -print0 \
    | xargs -0 grep -Hr --include *.tex $2 /dev/null {} + | vim -R -

Output unsuccessful, but it should not be empty:

Vim: Reading from stdin...
grep: {}: No such file or directory
grep: +: No such file or directory

Characteristics of the system:

masi@masi:~$ ls -ld -- $HOME /home/masi/LOREM
drwxr-xr-x 52 masi masi 4096 Aug 16 16:09 /home/masi
lrwxrwxrwx  1 masi masi   17 Jun 20 00:27 /home/masi/LOREM -> /home/masi/LOREM/
masi@masi:~$ type find
find is /usr/bin/find
masi@masi:~$ find --version
find (GNU findutils) 4.7.0-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Eric B. Decker, James Youngman, and Kevin Dalley.
Features enabled: D_TYPE O_NOFOLLOW(enabled) LEAF_OPTIMISATION FTS(FTS_CWDFD) CBO(level=2)

System: Linux Ubuntu 16.04 64 bit
Find: 4.7.0
Grep: 2.25
Script at the thread: here
Application of find: haetex here
How to Avoid Many Levels of symlinks with this find?
find;symlink
If you want to display all files under $HOME, including those referenced via symbolic links, that end with .tex and contain the string janne:

find -L $HOME -type f -name '*.tex' -exec grep -l 'janne' {} + 2>/dev/null | vim -R -

If you want to display only the symbolic links found under $HOME named *.tex that point to files containing the string janne:

find -L $HOME -xtype l -name '*.tex' -exec grep -l 'janne' {} + 2>/dev/null | vim -R -

The only way to avoid the error message "Too many levels of symbolic links" is to discard all errors, which I've done with the 2>/dev/null construct.

In both cases find will not traverse files and directories that it has already traversed: it remembers where it has already visited and prunes those parts of the filesystem tree automatically. For example:

mkdir a a/b a/b/c
cd a/b/c
ln -s ../../../a
# Here you can ls a/b/c/a/b/c/a/b/...
# But find will not continue for very long
find -L a
a
a/b
a/b/c
find: File system loop detected; a/b/c/a is part of the same file system loop as a.
_softwareengineering.180573
I have some legacy code which uses Lisp as its scripting language. To broaden, ease and accelerate scripting, I'd like to replace Lisp with JavaScript. In order to be able to build on all the existing scripting files, I first need to translate all the .lsp files to .js. Now, I found Parenscript, but I'm not yet sure what it is good for (it seems to modify JavaScript to be able to run Lisp, which is not what I want). There are also some converters on the web which seem to work quite well. Has anyone already done this and can share some experiences, best practices and tools?
How to translate Lisp to JavaScript
javascript;lisp
null
_unix.286080
I would like to find an Ubuntu Linux 16.04 system command, similar to strace, to find out why my C++ program, ServiceController.exe, which calls

execle("/usr/lib/mono/4.5/mono-service", "/usr/lib/mono/4.5/mono-service", "./Debug/ComputationalImageClientServer.exe", 0, char const* EnvironmentPtr)

mysteriously stops running after 90 seconds, where ComputationalImageClientServer.exe and ComputatationalImageClientServer.exe are C#/.NET 4.5 executables.

In contrast, when I run /usr/lib/mono/4.5/mono-service.exe ./Debug/ComputatationalImageVideoServer.exe at the command prompt, it runs continuously for at least 24 hours a day, 7 days a week. Why can't the first example run continuously 24x7? How might I diagnose, debug and fix this error?

open("Delaware_Client_Server.exe", O_RDONLY) = 3
pipe2([4, 5], O_CLOEXEC) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f743e4dca10) = 3509
close(5) = 0
fcntl(4, F_SETFD, 0) = 0
fstat(4, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
read(4, "", 4096) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=3509, si_uid=1000, si_status=1, si_utime=0, si_stime=0} ---
close(4) = 0
wait4(3509, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 3509
write(1, "\n", 1) = 1
write(1, "Process returned 256\n", 21) = 21
Why does Ubuntu 16.04 execle of a specific C# image halt after 90 seconds while others run 24x7?
ubuntu;exec;mono;c#
Use the GNU debugger, gdb, or something similar.
_codereview.117884
All I am trying to achieve is better use of built-in ES functions.

var arr = [0, 1, 2, 3, -1, -2, -3];

function countPositiv(p, c, i, a) {
    if (c > 0) {
        return p + 1;
    } else {
        return p;
    }
}

var positiv = arr.reduce(countPositiv, 0);
console.log(positiv);
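Two compact alternatives using built-ins, as a sketch of the direction a review would likely take:

```javascript
// Counting strictly positive numbers with built-in array methods.
const arr = [0, 1, 2, 3, -1, -2, -3];

// filter + length states the intent directly:
const viaFilter = arr.filter(x => x > 0).length;

// reduce with a concise arrow function, same spirit as countPositiv:
const viaReduce = arr.reduce((count, x) => count + (x > 0 ? 1 : 0), 0);

console.log(viaFilter, viaReduce); // 3 3
```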
Finding the positive numbers count in an ECMAScript array
javascript;ecmascript 6
null
_unix.20515
I'm trying to create an interface that will show a message on the screen at set intervals. Cron is an ideal tool for this, except that it doesn't read data from a file during its run (as far as I can tell). I could create a bunch of files to read, but that's redundant. Is there a way to add a line to the user's crontab, or remove a line from it, after I have read it from a file?
Is there a way to combine a file with crontab?
cron
You can always write a cron job that calls a script which conditionally runs crontab -l > oldcrontab and then executes crontab file, where file is the new crontab to be installed, constructed from oldcrontab plus or minus an appended/removed line.
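A sketch of that script (the job line and temp-file paths are hypothetical):

```shell
# Rebuild the user's crontab from the currently installed one (sketch).
crontab -l > /tmp/mycron 2>/dev/null || true       # dump the current table, if any
echo '*/5 * * * * /usr/local/bin/show-message' >> /tmp/mycron   # append a job
# ...or instead delete a previously added line:
# grep -v 'show-message' /tmp/mycron > /tmp/mycron.new && mv /tmp/mycron.new /tmp/mycron
crontab /tmp/mycron                                # install the edited table
```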
_softwareengineering.263584
I've obviously heard a lot about the Micro-Services Architecture and think it makes a lot of sense (especially with the success stories of Netflix). I'd like to implement a small Grails application in Micro-Services (although the framework doesn't matter too much). My question is about the Security or Users Micro Service. My initial thought would be to create an application with a REST interface where my other Micro-Services would query the Security Service's REST interface. However, security would be duplicated in every service. This seems like an inevitable problem, but what is the best way to handle this? Should I be implementing Spring Security in every service and querying a User Service with simple rights information? Or could a central Security Service work in some way?
How do I set up a micro-services architecture that can take advantage of a common, centralized security service?
web services;grails
As @user454322 says, look up the OAuth2 spec; it's a widely used and well-supported standard that will work well. There are (basically) two ways to deploy an authorization server:

1) As a reverse proxy (as shown in @user454322's diagram). This is when all requests from the outside go through the OAuth2 server and then to your services. This centralizes the authorization concern, so that it's handled before any request reaches the services. It's the same idea as terminating SSL in the load balancer: in essence, the authorization server becomes part of your network middleware, like the firewalls and load balancers.

   The primary downside is that implementing a reverse proxy can be tricky, especially if you have large payloads or are doing clever things with HTTP (HTTP starts simple, but there are lots of complicated wrinkles it adds). You can buy API management solutions which provide the reverse-proxy functionality and add things like OAuth, metrics, throttling, etc.

2) As an authorization server. This is a slightly different layout, where each service takes requests directly (from the load balancers, etc.). Each request comes with an access token in the header, and the service makes an HTTP call to the authorization service to validate the token. The authorization server is responsible for authenticating users and granting tokens in the first place; your microservices don't have to do that part.

   The primary downside is that each incoming request has to make a round trip to the auth service, which adds to your latency. A secondary downside is that you have to make sure that every one of your services calls the auth service; otherwise, any service that doesn't will be open to the internet and unprotected.
_unix.240901
I executed this command, but it took a long time when checking checksums. The load average was 1.00 at all times. Can the rsync command run using multiple CPUs?

$ rsync --checksum -av -e ssh /usr/local/xxx/* hostname:/usr/local/xxx/

CPU: 4 cores; OS: CentOS release 5.11; Shell: bash
Can the rsync command run using multiple CPUs?
rsync;cpu
null
_webmaster.103719
I'm curious to know how search engines like Google enforce their noFollow policy on social sites. It seems like it would be largely out of their control, especially for webpages that cannot be crawled. What is to prevent the social sites from allowing doFollow on posted links and then preventing search engines from crawling those particular pages?EDIT: What I mean is, how does Google enforce the tagging of the link as noFollow, when a website could so easily allow these links to be tagged as doFollow, and thereby make the ranking process more difficult because of a low signal-to-noise ratio.
How is noFollow enforced on sites like Quora and Facebook?
web crawlers;nofollow
null
_unix.46014
How can I match a forward slash in bbe?

If I have this text file called test.txt:

foo / bar

I can match it in sed like so:

sed -e 's/foo \/ bar/it worked!/' test.txt

However, when doing the same thing in bbe, it doesn't replace it:

bbe -e 's/foo \/ bar/it worked!/' test.txt

I have also tried double-escaping and triple-escaping the slash, but it doesn't seem to work either. What am I doing wrong?
Match slash in bbe
sed
Not sure how to escape the /, but here are some alternate solutions:

Using a hex escape sequence: 's/foo \x2F bar/it worked!/'
Using a different delimiter such as underscore: 's_foo / bar_it worked!_'
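bbe may not be installed everywhere, but the alternate-delimiter idea is easy to sanity-check with sed's s command, which accepts the same form:

```shell
# Demonstrate the alternate-delimiter form of the s command with sed.
printf 'foo / bar\n' > test.txt
sed -e 's_foo / bar_it worked!_' test.txt   # underscore as delimiter
sed -e 's|foo / bar|it worked!|' test.txt   # other punctuation works too
```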
_codereview.113312
I'm trying to implement an algorithm able to search for multiple keys through ten huge files in Python (16 million rows each). I've got a sorted file with 62 million keys, and I'm trying to scan each of the ten files in the dataset to look for each key and its respective value. This is a follow-up code on feedback from Scanning multiple huge files in Python. All files are encoded with UTF-8 and they should contain multiple languages.

Here is a little slice of my sorted key file:

en Mahesh_Prasad_Varma
en Mahesh_Saheba
en maheshtala
en Maheshtala_College
en Mahesh_Thakur
en Maheshwara_Institute_Of_Technology
en Maheshwar_Hazari
...
en Just_to_Satisfy_You_(song) 1
en Just_to_See_Her 2
en Just_to_See_You_Smile 2
en Just_Tricking 1
en Just_Tricking! 1
en Just_Tryin%27_ta_Live 1
en Just_Until... 1
en Just_Us 1
en Justus 2
en Justus_(album) 2
...
en Zsfia_Polgr 1

Here is an example of some lines from one of my dataset files:

en Mahesh_Prasad_Varma 1
en maheshtala 1
en Maheshtala_College 1
en Maheshwara_Institute_Of_Technology 2
en Maheshwar_Hazari 1

Here is an example of the output file, showing a given key, maheshtala, which appears only once in the first file of the dataset:

...
1,maheshtala,1,0,0,0,0,0,0,0,0,0,en
...

The sorted_keys file contains only unique keys obtained with the cat and sort -u Unix commands on all of the ten dataset files. Each key can't be present more than once in a given dataset file, and each key can be present in more than one of the data files (not important: I put zero if a key is not in a specific file).

I've improved my new solution with the multiprocessing module, and I'm now able to process each slice in 3 min, so it will take about 87 min to output the final result (27 + 3*20 = 87 min against the 447 min obtained before). But due to a memory issue I'm not able to save the res dictionary. I'm sure that different processes have different address spaces, and so all of them write to their own local copy of the dictionary.
I'm forced to use Manager to share data between processes, obtaining worst performance. May I use queue?Here it is the bash script I use to create sorted keys file.#! /bin/bashclearBASEPATH=/home/processmkdir processedmkdir processed/slicecat $BASEPATH/dataset/* | cut -d' ' -f1,2 | sort -u -k2 > $BASEPATH/processed/sorted_keyssplit -d -l 3000000 processed/sorted_keys processed/slice/slice-for filename in processed/slice/*; do python processing.py $filenamedonerm $BASEPATH/processed/sorted_keysrm -rf $BASEPATH/processed/sliceFor each slice I launch processing.pyHere is my working code, with Manager:import os,sys,datetime,time,thread,threading;from multiprocessing import Process, Managerfiles_1 = [20140601,20140602,20140603]files_2 = [20140604,20140605,20140606]files_3 = [20140607,20140608]files_4 = [20140609,20140610]def split_to_elements(line): return line.split( )def print_to_file(): with open('processed/20140601','a') as output: for k in keys: splitted = split_to_elements(k) sum_count = 0 clicks = j=0 while j < 10: click = res.get(k+-+str(j), 0) clicks += str(click) + , sum_count += click j+=1 to_print = str(sum_count) + , + splitted[1] + , + clicks + splitted[0]+ \n output.write(to_print)def search_window(files,length): n=length for f in files: with open(dataset/pagecounts-+f) as current_file: for line in current_file: splitted = split_to_elements(line) res[splitted[0]+ +splitted[1]+-+str(n)] = int(splitted[2].strip(\n)) n+=1with open(sys.argv[1]) as sorted_keys: manager = Manager() res = manager.dict() keys = [] print STARTING POPULATING KEYS AT TIME: + datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S') for keyword in sorted_keys: keys.append(keyword.strip(\n)) print ENDED POPULATION AT TIME: + datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S') print STARTING FILES ANALYSIS AT TIME: + datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S') procs = [] procs.append(Process(target=search_window, 
args=(files_1,0,)))
    procs.append(Process(target=search_window, args=(files_2,3,)))
    procs.append(Process(target=search_window, args=(files_3,6,)))
    procs.append(Process(target=search_window, args=(files_4,8,)))
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print_to_file()
    print "ENDED FILES ANALYSIS AT TIME: " + datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
    print "START PRINTING AT TIME: " + datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
    print "ENDED PRINT AT TIME: " + datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
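On the "may I use a queue?" point: yes. A common pattern is to let each worker build a plain local dict and put it on a multiprocessing.Queue once when it finishes, then merge the per-worker dicts in the parent, instead of issuing one Manager-proxy call per input line. This is only a minimal, self-contained sketch of that pattern (counting characters stands in for your per-file key counting; the names here are hypothetical, not from your script):

```python
from multiprocessing import Process, Queue

def count_chars(text, queue):
    # Each worker builds a plain local dict (fast, no proxy overhead)
    # and ships it back with a single put() when it is done.
    local = {}
    for ch in text:
        local[ch] = local.get(ch, 0) + 1
    queue.put(local)

def merge_counts(chunks):
    # Merge the per-worker dicts in the parent process.
    merged = {}
    for part in chunks:
        for key, value in part.items():
            merged[key] = merged.get(key, 0) + value
    return merged

def parallel_count(texts):
    queue = Queue()
    procs = [Process(target=count_chars, args=(t, queue)) for t in texts]
    for p in procs:
        p.start()
    # Collect one result per worker *before* joining, so a full pipe
    # cannot deadlock the children while they block on put().
    parts = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return merge_counts(parts)

if __name__ == "__main__":
    print(parallel_count(["abca", "bc"]))
```

The key difference from the Manager approach is the number of inter-process round trips: one put() per worker instead of one proxy call per line.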
Scanning multiple huge files in Python (follow-up)
python;file;time limit exceeded;join
null
_webmaster.59292
I have several pages about an entity. To illustrate, an example:

- John Smith Overview
- John Smith Places He Grew Up In
- John Smith Companies He Worked For
- John Smith His High School Friends

Currently all are accessible to search engines. I'm thinking about removing all the pages as far as search engines are concerned and focusing all of the SE traffic on the main overview page, while keeping this page distribution for users. In order to do that I was thinking about doing a 301 redirect from all pages about 'John Smith' to his overview page. The content itself will be available to users at a new URL that will be blocked through robots.txt. The idea is to focus all current SEO power into one page and that way rank higher for the search term 'John Smith'.

Is that a good idea? Is there a different way to combine pages' authority into one main page?
How to combine pages' authority into one page?
seo
null
_datascience.21674
I have some data I collected off a forum. It is essentially Nissan Leaf battery state of health along with ODO reading, number of charges, age of vehicle, generation of model, and battery size. It's not great, but this stuff seems scarce!

What I want to be able to do with it is produce a rudimentary battery health model: with inputs of miles driven, age, and number of charges, I want state of health to pop out the other side. I have no idea where to start! I have been pointed in the direction of Orange and it seems nice, but there are so many things to choose from!

From general knowledge I know batteries degrade over time. They also degrade with discharge use (driving) and they degrade with charging too. So age, ODO miles and number of charges should be correlated with state of health.

I think I need to remove a few outliers, then I guess linear regression might be the first port of call?

I am going to try to watch the tutorials on YouTube. I can share the data if that helps.
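The linear-regression first port of call can be sketched in a few lines of numpy as an ordinary least squares fit. All the numbers below are made up purely for illustration; the real forum data would go in their place:

```python
import numpy as np

# Hypothetical data, NOT the real forum numbers.
# Feature columns: odometer miles, age in years, number of charges.
X = np.array([
    [10000, 1,  300],
    [30000, 2,  800],
    [50000, 3, 1400],
    [70000, 5, 2100],
    [90000, 6, 2600],
], dtype=float)
y = np.array([98.0, 94.5, 90.0, 84.0, 79.5])  # state of health, %

# Ordinary least squares with an intercept column prepended.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(miles, age, charges):
    # Linear model: SoH = b0 + b1*miles + b2*age + b3*charges
    return coef[0] + coef[1] * miles + coef[2] * age + coef[3] * charges

print(predict(40000, 2.5, 1100))
```

Since miles, age and charges all grow together, the three features will be strongly correlated, so the individual coefficients should not be over-interpreted; removing outliers first, as suggested, helps.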
What can I do with this data?
machine learning;regression;linear regression;orange
null
_codereview.42203
I have written this small game for the Learn Python the Hard Way book. I'd like some opinions about it and how to improve it. I know about pygame and other game engines, but the whole game runs in the terminal. The text files orc.txt, cyclops.txt, etc. are ASCII drawings.

from PIL import Image
import numpy as np
import msvcrt
from time import sleep
from random import choice, randint

previous = 15  # start position tile

def createmap():
    """The map is sketched in bmp, I make a array to use it like a tileset"""
    global bmp
    bmp = Image.open('sketch.bmp')
    bmp_data = bmp.getdata()
    bmp_array = np.array(bmp_data)
    global map
    map = bmp_array.reshape(bmp.size)

def array_to_text_map():
    """Tileset
    # = Wall  . = sand  + = door  e = enemy  D = dragon  $ = gold  @ = the player
    numbers are due to the 16 color palette used in the sketch"""
    text_map = open('map.txt', 'w')
    for x in map:
        for y in x:
            if y == 0:
                text_map.write('#')
            elif y == 3:
                text_map.write('.')
            elif y == 8:
                text_map.write('+')
            elif y == 9:
                text_map.write('e')
            elif y == 10:
                text_map.write('D')
            elif y == 12:
                text_map.write('$')
            elif y == 13:
                text_map.write('@')
            elif y == 15:
                text_map.write(' ')
        text_map.write('\n')
    text_map.close()

def print_text_map():
    text_map = open('map.txt')
    for x in range(bmp.size[1]):
        print(text_map.readline(), end='')

def move_player_by_key():
    wkey = msvcrt.getch
    if wkey() == b"\xe0":
        key2 = wkey()
        if key2 == b"H":
            move_in_matrix('up')
        elif key2 == b"M":
            move_in_matrix('right')
        elif key2 == b"K":
            move_in_matrix('left')
        elif key2 == b"P":
            move_in_matrix('down')
    else:
        exit()

def position():
    ver = np.where(map == 13)[0][0]
    hor = np.where(map == 13)[1][0]
    return ver, hor

def move_in_matrix(direction):
    ver, hor = position()
    try:
        if direction == 'up':
            analize(ver-1, hor)
        elif direction == 'right':
            analize(ver, hor+1)
        elif direction == 'left':
            analize(ver, hor-1)
        elif direction == 'down':
            analize(ver+1, hor)
    except IndexError:
        console.append('Not that way')

def analize(verm, horm):
    # Take decision for each tile.
    # verm and horm = next position; ver and
    # hor = actual position
    # previous = keeps tile number for later
    global previous
    ver, hor = position()

    def restore(ver, hor, previous):
        """Restore color of the leaved tile"""
        map[ver, hor] = previous

    if map[verm, horm] == 0:
        pass
    elif map[verm, horm] == 3:
        restore(ver, hor, previous)
        previous = map[verm, horm]
        map[verm, horm] = 13
    elif map[verm, horm] == 8:
        print('Open the door (yes/no)?', end='')
        answer = input(' ')
        if door(answer):
            restore(verm, horm, 15)
            console.append('The door was opened')
            console.append(doors_dict[(verm, horm)])  # show the description
                                                      # attached to the door
    elif map[verm, horm] == 9:
        print('Do you want to fight against the monster (yes/no)?', end='')
        answer = input(' ')
        result = fight(answer)
        if result and randint(0, 1) == 1:
            restore(verm, horm, 12)
        elif result:
            restore(verm, horm, 15)
    elif map[verm, horm] == 10:
        print('the beast seems asleep, you want to wake her?(yes/no)?', end='')
        answer = input(' ')
        result = fight(answer, dragon=True)
        if result:
            win()
    elif map[verm, horm] == 12:
        print("You want to grab the items in the floor (yes/no)?", end='')
        if input(' ') == 'yes':
            gold()
            restore(verm, horm, 15)
    elif map[verm, horm] == 15:
        restore(ver, hor, previous)
        previous = map[verm, horm]
        map[verm, horm] = 13

def door(answer):
    if answer == 'yes':
        return True
    elif answer == 'no':
        return False
    else:
        console.append('Invalid command')

def identify_doors():
    """Hardcode: Identify each door and zip() it with a description stored previously"""
    doors_array = np.where(map == 8)
    doors = zip(doors_array[0], doors_array[1])
    dict_doors = {}
    narrate = open('narrate.txt')
    for x in doors:
        dict_doors[x] = narrate.readline()
    return dict_doors

def fight(answer, dragon=False):
    if answer == 'yes':
        if dragon == False:
            monster = choice([Orc(), Goblin(), Cyclops()])
        else:
            monster = Dragon()
        monster.show_appearance()
        print('Fighting against', monster.name)
        sleep(2)
        while True:
            monster.show_appearance()
            print(monster.name, 'is attacking you', end='')
            answer = input('fight (1) or defend (2)? 
')
            if answer == '1':
                player.defense = 1 + player.armor
                monster.life = monster.life - player.damage
            elif answer == '2':
                player.defense = 1.5 + player.armor
            else:
                print('Invalid command')
                continue
            print(monster.name, 'counterattack!!')
            player.life = player.life - (monster.damage / player.defense)
            print(monster.life, 'remaining', monster.name, 'life')
            print(player.life, 'remaining Player life')
            if player.life <= 0:
                print('The End')
                exit()
            if monster.life <= 0:
                print('\n' * 5)
                print('Enemy defeated')
                print('You earn', monster.gold, 'gold coins')
                player.gold += monster.gold
                break
            sleep(3)
        return True
    else:
        return False

def moredamge(val):
    player.damage += val

def morearmor(val):
    player.armor += val

golds = {'Fire sword': (moredamge, 20),
         'Oak shield': (morearmor, 0.1),
         'Anhilation sword': (moredamge, 40),
         'Iron shield': (morearmor, 0.2),
         'Siege shield': (morearmor, 0.5)
         }

def gold():
    bunch = randint(10, 200)
    print('You have found', bunch, 'gold coins')
    player.gold += bunch
    sleep(1)
    print('Is there anything else?')
    if randint(0, 10) > 7:
        obtained = choice(list(golds))
        print('You have get', obtained)
        golds[obtained][0](golds[obtained][1])  # access to key: (function, quantity)
        del golds[obtained]
    else:
        print('Oooohh..
nothing else')
    sleep(2)

def win():
    print('You have win the game')
    print('You get', player.gold, 'gold coins!!')
    exit()

class Orc():
    def __init__(self):
        self.name = 'Orc'
        self.life = 100
        self.damage = 10
        self.defense = 1
        self.gold = 100
        self.appearance = open('orc.txt')

    def show_appearance(self):
        print(self.appearance.read())

class Player(Orc):
    def __init__(self):
        self.life = 1000
        self.damage = 20
        self.gold = 0
        self.appearance = open('knight.txt')
        self.armor = 0

class Goblin(Orc):
    def __init__(self):
        self.name = 'Goblin'
        self.life = 50
        self.damage = 10
        self.gold = 50
        self.appearance = open('goblin.txt')

class Dragon(Orc):
    def __init__(self):
        self.name = 'Dragon'
        self.life = 400
        self.damage = 40
        self.gold = 2000
        self.appearance = open('dragon.txt')

class Cyclops(Orc):
    def __init__(self):
        self.name = 'Cyclops'
        self.life = 150
        self.damage = 20
        self.gold = 120
        self.appearance = open('cyclops.txt')

def presentation():
    print('\n'*2)
    player.show_appearance()
    print('Welcome hero, are you ready for fight?')
    input()

if __name__ == "__main__":
    player = Player()
    presentation()
    console = []
    createmap()
    array_to_text_map()
    doors_dict = identify_doors()
    print_text_map()
    while True:
        move_player_by_key()
        array_to_text_map()
        print_text_map()
        if console:
            for x in console:
                print(x)
        else:
            print()
        console = []
Simple console roguelike game
python;game;console
I'll assume you're fairly new to Python; tell me if you feel this stuff is too basic, and I'll start criticizing you for things that you don't deserve to be criticized for.

State

Ok, so the first thing that made me go eww in the code was global. In general, whenever you use global, you're probably doing something wrong. If I were to write your createmap function, it might look something more like this:

def createmap():
    """The map is sketched in bmp, I make a array to use it like a tileset"""
    bmp = Image.open('sketch.bmp')
    bmp_data = bmp.getdata()
    bmp_array = np.array(bmp_data)
    map = bmp_array.reshape(bmp.size)
    return bmp, map

Instead of using a global bmp and map variable, I'm returning the non-global bmp and map variables. This means that I have a lot more flexibility in calling the function:

# I can use it to create the old global variables:
global bmp, map
bmp, map = createmap()

# Or some local ones
local_bmp, local_map = createmap()

# But most importantly, I can use it to create multiple sets of them
bmp1, map1 = createmap()
bmp2, map2 = createmap()

The last case is the most important; using your version of createmap(), when I call createmap() for the second time, it overwrites the previous bmp and map variables. This might not seem like a big problem ("why would you ever need more than one map?") but it makes it much harder to test things when they change global variables.

We can do similar things to your other functions to avoid using global variables. The array_to_text_map function currently uses the map global variable, but we can replace that with a map argument. This way we will be able to convert any map to a text map, not just the one stored in the global map variable.

One downside to this is that it does require more typing; I have to explicitly pass around all of my state.
However, one of the core tenets of Python is that explicit is better than implicit, so get used to it ;) I should mention that people often wrap up their program's commonly used variables into classes, though that's probably more complex than you are looking for right now.

Switch Statements

Another thing that makes me feel uncomfortable about the code is when you have long chains of elif statements:

if y == 0:
    text_map.write('#')
elif y == 3:
    text_map.write('.')
elif y == 8:
    text_map.write('+')
elif y == 9:
    text_map.write('e')
elif y == 10:
    text_map.write('D')
elif y == 12:
    text_map.write('$')
elif y == 13:
    text_map.write('@')
elif y == 15:
    text_map.write(' ')

This is ugly, and violates the DRY principle. What you are doing here is looking up a value, based on the value of the variable y. The Pythonic (and sane) way of doing this is to define a dictionary (mapping) from the values of y to the string that should be written to text_map:

TEXT_MAP_CHARS = {
    0: "#",
    3: ".",
    8: "+",
    9: "e",
    10: "D",
    12: "$",
    13: "@",
    15: " ",
}

We can then look up the correct character, and then write that to the file:

char = TEXT_MAP_CHARS[y]
text_map.write(char)

Which is much neater, and doesn't mix up program data (the mapping from numbers to characters) and program logic (the writing of the character to a file based on the value of y).

With that said, I would praise you for your style. For every problem your code may have semantically, it is formatted very well (hooray for PEP8)
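As a self-contained, runnable version of that idea (the "?" fallback for tile values outside the table is my addition, via dict.get; the elif chain just skipped them silently):

```python
# Mapping from tile numbers to display characters: the same data as
# the elif chain, expressed once as data instead of control flow.
TEXT_MAP_CHARS = {
    0: "#", 3: ".", 8: "+", 9: "e",
    10: "D", 12: "$", 13: "@", 15: " ",
}

def tile_to_char(tile):
    # .get() supplies a visible fallback for unknown tile values.
    return TEXT_MAP_CHARS.get(tile, "?")

def row_to_text(row):
    # Render one row of the map array as a string of characters.
    return "".join(tile_to_char(t) for t in row)

print(row_to_text([0, 3, 13, 3, 0]))
```

Writing a whole row with one join and one write() call is also cheaper than one write() per tile.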
_cs.24180
You have a description of a language that you have to prove is regular, context free, or other. In order to prove that it does not belong to a certain class of languages, you might think that it will be more convenient to prove it by using a subset of that language. The problem is that a subset of a language does not necessarily belong to the same language class as the superset language. For example: with $\Sigma = \{0,1\}$, $\Sigma^*$ is a regular language, while $ A = \{ 0^n1^n \mid n \geq 0\} \subseteq \Sigma^* $ is a context-free language. (I have fallen into the trap of trying to use subsets of languages in order to try to prove that they belong to a certain class of languages.)

An example: let $\Sigma = \{0,1\} $ and $L = \{ w \in \Sigma^* \mid \text{w contains fewer 1's than 0's} \}$. Is $L$ regular, context free, or neither?

My intuition says that it is not regular, since finite state machines can't count. I think it is context free. The strategy I'm thinking of is to show that this language is context free:

$ L_2 = \{ 0^n1^m \mid n \gt m\}$

This language is clearly a proper subset of $L$. But, like we've seen, that may not be a very useful fact if we want to prove things about the proper superset (to my understanding). It seems that $L_2$ is easier to generate than $L$: it seems easier to count how many consecutive 0's there are rather than counting the number of 0's in a string. (Especially if you consider using a PDA to count it, since then it just boils down to pushing consecutive 0's and then popping when you start seeing 1's, accepting the word if there are 0's left on the stack when you have consumed the word.) Consequently, if this intuition of hardness is correct, then $L$ is at least not regular. But then another problem reveals itself: maybe $L$ is not context free? So then you have at least shown it to not be regular, but you still have to show whether it is context free or not.

This is not the only problem for which I want to use this strategy.
For example:

$L_p = \{ w \in \Sigma^* \mid \text{the number of 0's in w is prime}\}$

Here I think it might be more convenient to use something like the pumping lemma on this language:

$L_{p2} = \{ 0^n \mid \text{n is prime}\}$

Is this kind of strategy valid?
Proving that a language does not belong to a language class by using more specific instances of that language
formal languages;proof techniques
Yes. The formal way to name this trick is the use of closure properties. In your specific case the families (regular, context-free) are closed under intersection with a regular language: if $K$ is regular (context-free) then so is $K\cap R$, where $R$ is regular.

In your examples $L_2 = L \cap 0^*1^*$ and $L_{p2} = L_p \cap 0^*$.

So, if $L_{p2}$ isn't context-free then neither is $L_p$. Also if $L_2$ isn't regular then neither is $L$.

Regarding the first part of your question this will not help: $L_2$ may be context-free while $L$ isn't. (Not in this example, where they both are context-free: use pushing and popping but in a more general way; think how to represent negative numbers.) On the other hand, when you are not able to solve the general case, it might be helpful to develop intuition on the special cases.
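Spelled out, the argument is just the contrapositive of the closure property; a LaTeX sketch of the chain of reasoning:

```latex
% Closure under intersection with a regular language,
% and the contrapositive used in the answer above.
\begin{itemize}
  \item Closure: if $L$ is context-free and $R$ is regular,
        then $L \cap R$ is context-free.
  \item Contrapositive: if $L \cap R$ is \emph{not} context-free
        for some regular $R$, then $L$ is not context-free.
  \item Applied here: $L_{p2} = L_p \cap 0^*$ with $0^*$ regular,
        so $L_{p2}$ not context-free implies $L_p$ not context-free.
\end{itemize}
```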
_unix.225053
My directory ~/foo contains many HTML files. Each one has a different unwanted title element. That is, each file contains the code<title>something unwanted</title>Many of these files also contain some span elements like this<span class=org-document-info-keyword>#+Title:</span> <span class=org-document-title>correct title</span>I'd like to write a script that scans each HTML file and, for each file that contains a code-block of the second type, replaces the unwanted title with the correct title.Once the title has been replaced, I'd like the script to remove the code in the second block.For example, running the script on <!DOCTYPE html PUBLIC -//W3C//DTD HTML 4.01//EN><!-- Created by htmlize-1.47 in css mode. --><html> <head> <title>foo.org</title> <style type=text/css> <!-- body { color: #839496; background-color: #002b36; } .org-document-info { /* org-document-info */ color: #839496; } .org-document-info-keyword { /* org-document-info-keyword */ color: #586e75; } .org-document-title { /* org-document-title */ color: #93a1a1; font-size: 130%; font-weight: bold; } .org-level-1 { /* org-level-1 */ color: #cb4b16; font-size: 130%; } a { color: inherit; background-color: inherit; font: inherit; text-decoration: inherit; } a:hover { text-decoration: underline; } --> </style> </head> <body> <pre><span class=org-document-info-keyword>#+Title:</span> <span class=org-document-title>my desired title</span><span class=org-document-info-keyword>#+Date:</span> <span class=org-document-info>&lt;2015-08-23 Sun&gt;</span><span class=org-level-1>* hello world</span>Vivamus id enim. </pre> </body></html>should result in<!DOCTYPE html PUBLIC -//W3C//DTD HTML 4.01//EN><!-- Created by htmlize-1.47 in css mode. 
--><html> <head> <title>my desired title</title> <style type=text/css> <!-- body { color: #839496; background-color: #002b36; } .org-document-info { /* org-document-info */ color: #839496; } .org-document-info-keyword { /* org-document-info-keyword */ color: #586e75; } .org-document-title { /* org-document-title */ color: #93a1a1; font-size: 130%; font-weight: bold; } .org-level-1 { /* org-level-1 */ color: #cb4b16; font-size: 130%; } a { color: inherit; background-color: inherit; font: inherit; text-decoration: inherit; } a:hover { text-decoration: underline; } --> </style> </head> <body> <pre> <span class=org-document-info-keyword>#+Date:</span> <span class=org-document-info>&lt;2015-08-23 Sun&gt; </span> <span class=org-level-1>* hello world</span> Vivamus id enim. </pre> </body></html>Is there a tool in linux that can easily do this?
How can I replace the title element in many HTML files?
shell script;html
You are probably best off scripting something. This script is not robust (doesn't check for empty strings, doesn't account for the desired title being on several lines, etc.) but it might be something to get you started. Back up before you start doing anything crazy.

#! /bin/bash
FILES="./*.html"
for f in $FILES
do
    grep '.*org-document-title>.*' $f |\
    sed -e 's/.*org-document-title>\([^<]\+\).*/\n\1/g' |\
    tail -n 1 |\
    xargs -I new_title sed -i.bak 's/<title>[^>]\+<\/title>/<title>new_title<\/title>/g' $f
done

This only replaces the title with the new "my desired title". You could expand by doing another pass and getting rid of the unwanted span elements.
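If the shell pipeline gets unwieldy, the same transformation can be sketched in Python. This is a regex sketch under the same not-robust caveats as the script above (it assumes quoted class attributes and a single-line title), not a proper HTML parser:

```python
import re

def fix_title(html):
    # Find the desired title inside the org-document-title span, if any.
    match = re.search(r'class="org-document-title">([^<]+)</span>', html)
    if not match:
        return html  # no span of the second type: leave the file alone
    desired = match.group(1)
    # Replace the first (unwanted) <title> element with the desired text.
    # A lambda replacement avoids backslash-escape surprises in re.sub.
    return re.sub(r'<title>[^<]*</title>',
                  lambda m: '<title>%s</title>' % desired,
                  html, count=1)
```

Wrapped in a loop over `~/foo/*.html` that reads, transforms and rewrites each file, this would cover the title replacement; stripping the two span elements afterwards would be a second, similar substitution.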
_unix.226253
I've got a computer with an SSD with Linux Mint installed and a bunch of HDDs. When in the file browser I click one of the HDDs, it gets mounted in /media, which is fine for me. However, I need to put files on those drives remotely. I'm using WinSCP and that works fine, but only when the drives are already mounted, so I have to physically access the Linux computer and click on the HDD icons. So my question is: is there a command to mount all the drives to /media, like what happens when I click on them in the file browser? Then I can use PuTTY to send that command.

Thanks
Easy mount hdd from remote computer
mount;hard disk
null
_vi.4809
Say I entered :e bla/bla/bla.txt and I realize I want to put a ! after the e. Can I get there without using the arrow keys?
Can one jump in ex mode?
ex mode
Command Line Window

You can use cmdline-window to edit the command line the same way as you would any other window. Enter this window by pressing <c-f> while on the command line, or use q: in normal mode.

There is a similar Vimcasts episode on the subject: Refining search patterns with the command-line window

Command line mappings

The command line also has many of its own mappings to help navigate/modify:

- <c-b>/<home> to go to the beginning (some people remap this to <c-a> to match emacs/bash/readline)
- <s-left> or shift + left arrow moves to the left one WORD
- <c-u> clears the command line completely
- <c-r>{reg} will put the value of register {reg} into the line

Many more. See Q_ce for a quick review.

For more help see:

:h cmdwin
:h cmdline-editing
_softwareengineering.174860
Is an ID field always needed in database tables?

In my case I have a user with firstName, lastName and email fields. email is unique and not null, so it could be used as an ID, right? So in that case, could/should I try to remove the ID?

Also I want to have another table which extends this one. Let's say it's called patient and it has its own field additionalData, and I would like to link the relationship through the email of the user I mentioned. So the relationship should be 1 to 1, right? And I wouldn't need the IDs? Somehow MySQL Workbench wants me to use the IDs.

What do you guys think? Any suggestions on this topic?
Do database tables need to have IDs?
database
Virtually every table needs a primary key. I would strongly argue that every table needs a primary key, but I'm willing to make the occasional exception.

Whether the primary key of a table should be a natural primary key -- some column or columns that are part of the business data that are naturally unique -- or whether a synthetic primary key should be used -- some additional data that has no business meaning and is used solely as an identifier, generally either an incrementing integer or a GUID -- is a bit of a religious debate. Personally, I tend to prefer synthetic keys over natural keys, but other data modelers whom I have a great deal of respect for prefer natural keys.

In your case, one of the primary issues with using an email address as a natural key is dealing with what happens when someone wants to change their email address. If email address is the primary key, then you'll have to change the data in the USER table, but you'll also have to ripple the change through every child table that has a foreign key relationship to the USER table. Depending on your database, that can range from annoying to a major undertaking, depending on whether your database happens to support cascading updates. Oracle, for example, believes that primary keys ought to be immutable, so it does not support cascading updates -- you'd have to write that code yourself (or leverage one of the packages floating around to do it). I believe MySQL does support cascading updates, so you merely have to ensure that each and every foreign key constraint in the system is set to cascade on update, and test that you haven't broken the ability to change an email address every time you add a new table to the database or modify a constraint. On the other hand, if you use a nice, simple synthetic key, i.e.
a user_id column, you can simply update the email address column like you would update any other piece of business information.

Whatever you define as the primary key of your parent table should be used as the foreign key in your child table.
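To make the trade-off concrete, here is a small runnable sketch using Python's bundled sqlite3 (table and column names mirror the question but are otherwise illustrative): because the patient child table references the synthetic user_id rather than the email, changing the email is a one-row update and nothing has to ripple through child tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite enforces foreign keys only when this pragma is on (per connection).
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("""CREATE TABLE user (
                    user_id INTEGER PRIMARY KEY,
                    email   TEXT UNIQUE NOT NULL)""")
conn.execute("""CREATE TABLE patient (
                    user_id INTEGER PRIMARY KEY
                        REFERENCES user(user_id),
                    additional_data TEXT)""")

conn.execute("INSERT INTO user VALUES (1, 'old@example.com')")
conn.execute("INSERT INTO patient VALUES (1, 'notes')")

# The email changes; the patient row, keyed on the immutable user_id,
# is untouched and no cascade is needed.
conn.execute("UPDATE user SET email = 'new@example.com' WHERE user_id = 1")

row = conn.execute("""SELECT u.email, p.additional_data
                      FROM user u JOIN patient p USING (user_id)""").fetchone()
print(row)
```

With email as the primary key instead, that UPDATE would require either ON UPDATE CASCADE on every referencing constraint or a manual fix-up of each child table.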
_softwareengineering.187926
What is this pattern called? Is this a variant of the Module pattern? I remember there's a disadvantage to this pattern, and it's related to memory usage.

var module = function() {
    this.Alert = function() {
        alert("I'm a public");
    }
    function iamPrivate() {
        alert("not callable because I'm private");
    }
}

var inst = new module();
inst.Alert();
Is this a variant of the Module pattern?
javascript
null
_unix.77469
The shell script below behaves in a weird way:

ARG_DATE=`sqsh -S $SERVER -U $DB_USER -P $DB_PASSWORD -D dbname -h<<END
SET NOCOUNT ON
go
select convert(varchar, PRIOR_COB_DATE, 112) from TABLE
go
END`

echo $ARG_DATE

if [ ${ARG_DATE} != "1" ] && [ ${ARG_DATE} != "" ]; then
    if [ ${ARG_DATE/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/1} = 1 ]; then
        PRIOR_POSITION_DATE=${ARG_DATE}
        echo "assigned $PRIOR_POSITION_DATE"
    else
        echo "Date must be in follow format: YYYYMMDD"
        echo "POSITION_DATE will be used."
    fi
fi

URL_PARAMS="HttpAutosysJobExecutor/NotificationEmailGenerator.job?prior_cob_date=${PRIOR_POSITION_DATE}"
echo $URL_PARAMS
echo ${CONNECTION_STATUS_FILE} "http://${SERVER_ADDRESS}:${SERVER_PORT}/HTTP/${URL_PARAMS}"
wget -o ${CONNECTION_STATUS_FILE} "http://${SERVER_ADDRESS}:${SERVER_PORT}/HTTP/${URL_PARAMS}"

if [ -f ${CONNECTION_STATUS_FILE} ]; then
    RESPONSE_STATUS=`grep -o '200 OK' ${CONNECTION_STATUS_FILE}`
    if [ -z ${RESPONSE_STATUS} ]; then
        echo "Login failed."
        exit 3
    fi
    # rm ${CONNECTION_STATUS_FILE}
fi

While executing wget I am getting many whitespace characters; because of the whitespace, the database procedure fails to cast the string to a DB date. Using sed I can remove the whitespace, but I want to know the possible reason for this.

Log in CONNECTION_STATUS_FILE:

http://XXXXX:7001/MYAPP/HttpAutosysJobExecutor/NotificationEmailGenerator.job?prior_cob_date=%2020130523%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20
Unexpected whitespace using sqsh in command substitution
shell script;quoting;whitespace
The echo does not show you the whitespace at the end. For that you need something like echo "${URL_PARAMS}x". You can do set -x immediately before the wget call and set +x immediately after to see how wget is called.

The problem is the shell's word splitting:

${ARG_DATE/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/1} = 1

instead of

"${ARG_DATE/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/1}" = "1"

A solution would be to put this before that (fixed) line:

ARG_DATE=${ARG_DATE// /}
_unix.312826
I'm trying to figure this out, and I would like to know if some of you have experienced it before.

I have my SFTP script (before it was FTP, and it was migrated to SFTP). When I'm sending a file to the client server, I'm getting a message in the verbose output, "No such file or directory", but the file is on the client server. This didn't happen before when I had an FTP connection.

Have any of you experienced this before? Even though I'm exiting with status 0, this is very weird for me...

debug1: Couldn't stat remote file: No such file or directory
debug1: Couldn't stat remote file: No such file or directory
debug1: channel 0: read<=0 rfd 6 len 0
debug1: channel 0: read failed
debug1: channel 0: close_read
debug1: channel 0: input open -> drain
debug1: channel 0: ibuf empty
debug1: channel 0: send eof
debug1: channel 0: input drain -> closed
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: rcvd close
debug1: channel 0: output open -> drain
debug1: channel 0: obuf empty
debug1: channel 0: close_write
debug1: channel 0: output drain -> closed
debug1: channel 0: almost dead
debug1: channel 0: gc: notify user
debug1: channel 0: gc: user detached
debug1: channel 0: send close
debug1: channel 0: is dead
debug1: channel 0: garbage collecting
debug1: channel_free: channel 0: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
debug1: fd 2 clearing O_NONBLOCK
debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 4.6 seconds
debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0
debug1: Exit status 0

Code: sure, I have this:

sftp -v -b ${sftp_file} ${username}@${server} > ${tmplog1} 2>&1 > ${tmplog2}
GetStatus=$?
if (( $GetStatus != 0 )); then
    if [[ $(grep -c "No such file or directory" ${tmplog1}) > 0 ]]; then
        ErrorMessage="No such file or directory"
    elif [[ $(grep -c "Connection refused" ${tmplog1}) > 0 ]]; then
        ErrorMessage="Connection refused with the server"
    elif [[ $(grep -c "Connection timed out" ${tmplog1}) > 0 ]]; then
        ErrorMessage="Connection timed out with the server"
    elif [[ $(grep -c "No route to host" ${tmplog1}) > 0 ]]; then
        ErrorMessage="No route to the server."
    else
        ErrorMessage="Unknown Error in transmission process."
    fi
fi

Thanks!
Getting a "No such file or directory" message with an SFTP connection, but the file is on the server
shell script;shell;sftp;return status
null
_unix.60637
Possible Duplicate: Match word containing characters beyond a-zA-Z

I do not understand vim's definition of a word. From the help for the motion w (:h w):

w  [count] words forward. |exclusive| motion.

These commands move over words or WORDS.
							*word*
A word consists of a sequence of letters, digits and underscores, or a sequence of other non-blank characters, separated with white space (spaces, tabs, <EOL>). This can be changed with the 'iskeyword' option.

This means when I invoke the w motion, vim needs to check which characters can make up a word with the help of the iskeyword option. So let's check what characters a word may be comprised of:

:set iskeyword?
iskeyword=@,48-57,_,192-255

Let's test this with a character not included in the characters listed in the iskeyword option, e.g. ś, U+015B LATIN SMALL LETTER S WITH ACUTE. Pressing ga on ś tells us that it has the decimal value 347, which is larger than 255 and thus outside the range of iskeyword. The cursor is placed on the t of treś and I press w:

treś bar
^ (cursor)

The result:

treś bar
     ^ (cursor)

If a word can be comprised of letters, digits, underscores and other characters, the only possibility is that vim treats the ś as a letter, since it's obviously not a digit or an underscore.

Let's check how to find out if a character is a letter. From :h [:alpha:]:

The following character classes are supported:
[:alpha:]	letters

A test with

/[[:alpha:]]

shows that ś is not considered to be a letter.

Why did the cursor jump to the b if ś is neither a letter, nor a digit, nor an underscore, and not listed in iskeyword?

Tested on VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Dec 27 2012 21:21:18), included patches: 1-762, on Debian GNU/Linux with locale set to en_GB.UTF-8.
What does vim consider to be a word?
vim
null
_softwareengineering.219351
Reading code and discussions pertaining to code, I often see the words state and status used interchangeably, but the following tendencies seem to exist:

- When a variable holds a value intended to indicate that something is in a certain state, the name of that variable more often than not contains the word state, or an abbreviation thereof.
- However, when the return value of a function serves to indicate some such state, we tend to call that value a status code; and when that value is stored in a variable, this variable is commonly named status or something similar.

In isolation that's all fine I guess, but when the aforementioned variables are actually one and the same, a choice needs to be made involving the perverted intricacies of English language (or human language in general).

What is the prevailing coding standard or convention when it comes to disambiguating between the two? Or should one of those two always be avoided?

This english.stackexchange question is also relevant, I suppose.
state or status? When should a variable name contain the word state, and when should a variable name instead contain the word status?
terminology;coding standards;variables;conventions;state
null
_softwareengineering.196934
Trying to convert some entities into value objects, I am stuck on a case where what seems to be a value object must be unique within an aggregate.

Suppose we have a Movie entity which is the root of an aggregate. This Movie entity is related to some set of AdvertisementEvent objects with the role of displaying an advertisement at a certain timestamp.

The AdvertisementEvent contains a link to some Banner that must be displayed, the coordinates and some effect filters. Since AdvertisementEvent is just a collection of configuration parameters, I am not sure if I should care about its identity, and whether to treat it like just a large value object. However, I do care that within a Movie there must be only one AdvertisementEvent at a certain timestamp, probably even around the timestamps.

I find it hard to split my doubts into multiple independent questions, so here they go:

1. Does a collection of configuration parameters sound like a value object?
2. Am I mixing the concept of uniqueness of AdvertisementEvent within Movie and the transactional integrity rule?
3. Does any of the choices in point (2) imply that AdvertisementEvent must be a member of the aggregate made by Movie?
4. Is my AdvertisementEvent object an Entity, a Value Object or an Event Object? (I used the Event suffix in the name to highlight my confusion.)
5. Are large value objects like this a design smell?

I guess that I am not dealing with an Event in the sense of DDD, because it is not something that just happens. The real DDD event should be something more like AdvertisementEventReached.
Unique Value Object vs Entity
domain driven design;entity;value object
The distinction between Entity and Value object should be based around the question: If I have two objects with the same contents (two AdvertisementEvents linking to the same Banner with the same parameters), should I treat them differently or can one be replaced by the other without affecting how the software works?In this case, I would say that you can replace one AdvertisementEvent by another with the same values without affecting the operation of the software. This makes them Value objects (the contained value is what counts, not the identity of the object itself).As for the size of a Value object: As long as it contains a coherent set of parameters for a single responsibility, there is no limit on how large a Value object can be. In the implementation it might be good to pay special attention to large value objects to ensure they are not needlessly and excessively copied, but otherwise it is no problem.As for the constraints on the number of AdvertisementEvents you have within a Movie, this is a constraint on the relation between a Movie and its collection of AdvertisementEvents, not on one of those classes individually. As such, the most logical place to enforce the constraint is at the point where the collection gets maintained in Movie (thus in the method where you try to add an AdvertisementEvent).
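To make the Entity-vs-Value distinction concrete, here is a minimal sketch of AdvertisementEvent as a Value Object (in Python, with illustrative fields that are not taken from the question's actual model): immutable, and equal whenever its contents are equal.

```python
from dataclasses import dataclass

# A Value Object's identity *is* its contents: two instances with equal
# fields are interchangeable. frozen=True makes the object immutable, and
# the generated __eq__ compares field values rather than object identity.
# (banner_id, timestamp, x, y are hypothetical fields, not the real model.)
@dataclass(frozen=True)
class AdvertisementEvent:
    banner_id: str
    timestamp: float
    x: int
    y: int

a = AdvertisementEvent("banner-1", 12.5, 10, 20)
b = AdvertisementEvent("banner-1", 12.5, 10, 20)
assert a == b       # same contents -> the same value, freely replaceable
assert a is not b   # ...even though they are two distinct objects
```

The uniqueness-per-timestamp rule would then live in Movie's add method, which checks the existing collection before accepting a new event.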
_unix.232450
I'm getting conflicting information from the manuals, especially regarding kmod and modprobe. All of these programs come together in the same package, but are any of these backends or frontends? Does modprobe call insmod and rmmod? Does depmod call modinfo when building a list of dependencies? Is kmod used as a backend by insmod and rmmod?From kmod.8.man: kmod is a multi-call binary which implements the programs used to control Linux Kernel modules. Most users will only run it using its other names.
How do depmod, insmod, kmod, lsmod, modinfo, modprobe, and rmmod all relate?
linux;modprobe
null
_webmaster.84864
I have somewhere close to 500,000 user-uploaded images hosted on a Cloudfront CDN -- separate from our main host (exampledomain.com). Up until this point, few of them have been getting indexed at the default distribution URLs. Example:https://d7oxxxxxxx.cloudfront.net/images/example_directory/subdirectory/LG_example_filename.jpgSo I added a CNAME (alternate domain name) so that the URLs have now become:http://media.exampledomain.com/images/example_directory/subdirectory/LG_example_filename.jpgAnd I added media.exampledomain.com as a verified domain in Google Search Console.I also have a dynamic sitemap hosted on exampledomain.com that lists all of the images I would want to get indexed -- one image per page (there are probably close to 240,000 pages altogether). Example:<url><loc>http://www.exampledomain.com/directory/pagename</loc><changefreq>daily</changefreq><image:image><image:loc>http://media.exampledomain.com/images/exampledirectory/subdirectory/LG_filname.jpg</image:loc><image:title>Example Image Title</image:title><image:caption>Example Image Caption</image:caption></image:image></url>According to what I've read, this should get Google to start indexing all of the images. However, I do not want to potentially wait a whole week to find out that there is something else I may not have done or that something else may be blocking the images from being indexed. The Cloudfront URLs are all fully public as far as I can tell and there aren't any robots.txt restrictions in place on the CDN. I only have one Cloudfront distribution currently active so I don't believe there should be any issues with duplicate content. Is there anything else I may need to account for or some way I can see in advance if it is going to work?Thanks for any help you can provide.UPDATE:I've been tracking this for a few days now. The Google bots have been crawling and indexing all of our site's pages at a nice swift rate (over 50,000 pages in a day!). 
However, there is still something up with the images. I see that there are over 160,000 images submitted in the sitemap and Google has crawled roughly 15,000 of them, but only 50 have actually been indexed. Does anyone have any ideas why Google may be having difficulty with these?Here is an example format for one of URLs. There is a 12-14 digit timestamp appended to the end of all of the files:http://media.exampledomain.com/images/category/id/LG_keywords_1442182082.5437.jpg
getting CDN images indexed with Google
images;google index;cdn;cname;amazon cloudfront
null
_cstheory.5614
In a nutshell, the time hierarchy theorems say that a Turing machine can solve more problems if it has more time for computation. In detail for deterministic TM and time-constructable functions $f,g$ with $f(n) \log f(n) = o(g(n))$ it is$$ DTIME(f(n)) \subsetneq DTIME(g(n))$$and for nondeterministic TM and time-constructable functions $f,g$ with $f(n+1)=o(g(n))$ it is$$ NTIME(f(n)) \subsetneq NTIME(g(n)).$$There are a lot of (old and current) results which use the time hierarchy theorems to prove lower bounds. Here are my questions: What happens if we can prove a better result for the deterministic or nondeterministic case? If we can prove that there is a gap between the deterministic time hierarchy and the nondeterministic time hierarchy, does this imply $P \neq NP$?
What happens if we improve the time hierarchy theorems?
cc.complexity theory;lower bounds;big picture;time complexity;time hierarchy
null
_unix.118712
In Ubuntu, when I am in the / directory I type:sudo find . -name .erlang.cookieand the result is:./var/lib/rabbitmq/.erlang.cookieThen, when I am in the directory /var/lib/rabbitmq and I type ls, I see one file named mnesia.When I type the find command again, I see ./.erlang.cookie -- what does that mean?
Why doesn't ls show the file that find discovered?
find;ls
In Unix, a filename beginning with a dot, like .erlang.cookie, is considered a hidden file and is not shown by bare ls. Type ls -a to also show hidden files.From man ls: -a, --all do not ignore entries starting with .However, you can show a hidden file with ls if you specify the name:$ ls .erlang.cookie.erlang.cookie
_unix.175540
I'd like to understand how Linux detects which display devices are available (video output) and how it decides what to display on each one.For example: if I have an embedded device with a serial line and an HDMI port, how do I make the console appear on the HDMI display instead of the serial console?Also, if I want to use a simple OpenGL application that's linked against video drivers, what interface would OpenGL use to draw on the HDMI port?Pointers to the proper documentation would be awesome.
Where do I start to understand the display controller management?
linux;displayport
For most systems, handling which screen device to output to is dependent on the GPU or some other video display controller. All interfacing with the video device(s) on the system is handled by the Direct Rendering Manager (DRM) and the closely related Kernel Mode Setting (KMS) kernel subsystems.From the Wikipedia page on the topic:In computing, the Direct Rendering Manager (DRM), a subsystem of the Linux kernel, interfaces with the GPUs of modern video cards. DRM exposes an API that user-space programs can use to send commands and data to the GPU, and to perform operations such as configuring the mode setting of the display. DRM was first developed as the kernel space component of the X Server's Direct Rendering Infrastructure, but since then it has been used by other graphic stack alternatives such as Wayland.User-space programs can use the DRM API to command the GPU to do hardware-accelerated 3D rendering and video decoding as well as GPGPU computing.The official Linux docs can be found in the source repository under Documentation/gpu. Here is the github link, for your convenience.Additionally, the Wikipedia article seems quite extensive. Depending on your goals, this resource alone might be sufficient, and it is certainly easier and less technical reading than the official documentation is.
_cs.76102
What is the height of an AVL tree with n nodes? Can you explain it with a formula? I couldn't find any formula to solve this problem.
What is the height of an AVL tree with n nodes
trees
null
_unix.321550
I tried to install OpenSUSE Tumbleweed, but one of the main problems was that the WLAN card (Centrino 6230) was completely disabled during the setup, something which did not happen with Ubuntu (and derivatives) or Windows. In the boot text it tells me it did not find a driver for it. How can I fix that, if the WLAN card is my only access to the internet?
OpenSuse Tumbleweed does not find my WLAN card
opensuse;wlan
null
_softwareengineering.215682
I know the basics but do not know how to model this (it is an exercise with no solution provided):An order can contain 1 or more cartons of beer. Each carton contains 6 beers of the same kind, but there are some cartons consisting of 3 different beer types (still 6 beers in total).Also, having a Carton class, how can I model it so that it holds either 6 beers of the same kind OR 6 beers of 3 different types?
How to draw a class diagram where a class contains either 6 identical classes or multiple different ones
diagrams;class diagram
null
_webmaster.28903
Only for websites, not webapps.There are not many computers left with 1024x768, so why still support it?
Should I still support a screen resolution of 1024x768 or can I ignore it and support 1280x720 and higher?
web development;website design;screen size;resolution
That entirely depends on your user base, for a commercial site I work on 1024x768 represents 9.49% (166,453) of our visitors, we will continue to support that for some time.The flip side to that, a hobby project that I work on has a different audience and I don't support 1024x768 as it only represents about 2%.Check your existing stats and use that to make the decision. Failing that create a responsive layout and stop worrying about the resolution.
_unix.375640
I've just changed a filesystem label:btrfs filesystem label / rootfsWhen I run lsblk -f, I still see the old label.What do I need to do to refresh the filesystem information?
lsblk -f shows old filesystem label
btrfs;lsblk
null
_vi.10234
So, I installed the Airline plugin which is working quite nicely. But, I'd like it a bit sweeter. So I went to Powerline.I installed patched fonts from https://github.com/powerline/fonts/tree/master/Hack and modified my .vimrc like soBut, as you can see, it doesn't seem to work correctly... Is it a font issue or is it something else?
Powerline fonts not working on Airline
plugin vim airline;plugin powerline
null
_softwareengineering.267218
So, I should really know this stuff already, but I am starting to learn more about the lower levels of software development. I am currently reading Computer Systems: A Programmer's Perspective by Bryant and O'Hallaron. I am on chapter 2 and he is talking about the way data is represented internally. I am having trouble understanding something conceptually and I am sure that I'm about to make my ignorance lucid here.I understand that a word is just a set of bytes and that the word size is just how many bits wide the system bus is. But, he also says: the most important system parameter determined by the word size is the maximum size of the virtual address space. That is, for a machine with a w-bit word size, the virtual addresses can range from 0 to (2^w)-1, giving the program access to at most 2^w bytesI am confused both about the general relationship between the word size and the number of addresses in the system, and about how the specific formula w-bit word size = 2^w bytes of memory available comes about.I am really scratching my head here, can someone help me out?EDIT: I actually misinterpreted his definition of a word and consequently the definition of word size. What he really said was: Busses are typically designed to transfer fixed-sized chunks of bytes known as words. The number of bytes in a word (the word size) is a fundamental system parameter that varies across systems. most machines today have word sizes of either 4 bytes(32 bits) or 8 bytes(64 bits). For the sake of our discussion here, we will assume a word size of 4 bytes, and we will assume that buses transfer only one word at a time.which is pretty much a catch-all for the cases discussed in the answers without having to go into detail. He also said he would be oversimplifying some things; perhaps in later sections he will go into more detail.
How does word size affect the amount of virtual address space available?
word;bit
The idea is that one word of memory can be used as an address in the address space (i.e., a word is wide enough to hold a pointer). If an address were larger than a word, addressing would require multiple sequential words. That's not impossible, but is unreasonably complicated (and likely slow).So, given a w-bit word, how many values can that word represent? How many addresses can it denote? Since any bit can take on two values, we end up with 2*2*...*2 (w factors) = 2^w addresses. Addresses are counted starting from zero, so the highest address is 2^w - 1.Example using a three-bit word: There are 2^3 = 8 possible addresses:bin: 000 001 010 011 100 101 110 111dec: 0 1 2 3 4 5 6 7The highest address is 2^3 - 1 = 8 - 1 = 7. If the memory is an array of bytes and the address is an index to this array, then using a three-bit word we can only address eight bytes. Even if the array physically holds more bytes, we cannot reach them with a restricted index. Therefore, the amount of virtual memory is restricted by the word size.
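A quick numeric illustration of the 2^w relationship (a sketch, using Python integers to stand in for machine words):

```python
# Each of the w bits takes one of two values, so a w-bit word can denote
# 2**w distinct addresses; counting from zero, the highest one is 2**w - 1.
for w in (3, 16, 32, 64):
    n = 2 ** w
    print(f"{w:2d}-bit word: {n} addresses, highest address {n - 1}")

assert 2 ** 3 == 8                 # the three-bit example above
assert 2 ** 3 - 1 == 7             # ...with highest address 7
assert 2 ** 32 == 4 * 1024 ** 3    # a 32-bit word addresses 4 GiB of bytes
```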
_webmaster.24380
My VPS is giving me the option between running PHP as an Apache Module or a FastCGI.How should one make this decision? Performance? Security? Ease of use? Compatibility?I'm using PLESK.
What are the pros and cons of running PHP as an Apache module or a FastCGI?
php;apache;vps;plesk
null
_codereview.51321
As a test of my C# skills, I have been asked to create a simple web service which will take a message from a queue table in a SQL database and send it to a web application when a button is pressed on the application. I have never written a web service before, so I am going in a little blind here and was after some advice on whether I have done this correctly or not.The stored procedure usp_dequeueTestProject gets a value from the top of the list in a table and then deletes the row in that table. At the moment I do not have this being archived anywhere; would it be better practice, instead of deleting this, to just mark it as sent?[WebService(Namespace = http://tempuri.org/)][WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)][System.ComponentModel.ToolboxItem(false)]// To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. // [System.Web.Script.Services.ScriptService]public class Service1 : System.Web.Services.WebService{ [WebMethod] public string GetDataLINQ() { try { TestProjectLinqSQLDataContext dc = new TestProjectLinqSQLDataContext(); var command = dc.usp_dequeueTestProject(); string value = command.Select(c => c.Command).SingleOrDefault(); return value; } catch (Exception ex) { return ex.ToString(); } }}I've opted for using LINQ for starters; I am not sure if this is the best way to do it or not. I am only passing through the one string as well... In reality I guess you would normally want to send more than one field, such as datetime sent, message type etc., but I wasn't sure what data type to use for this. I have seen this done using a struct, but wasn't sure if this was correct. Any guidance would be greatly appreciated.
Web service getting value using LINQ from queue table in SQL database
c#;beginner;linq;asp.net;web services
A Linq-to-SQL DataContext is an IDisposable, and your code isn't calling its Dispose() method. Wrap it in a using block to ensure proper disposal.I would use var in the TestProjectLinqSQLDataContext instantiation as well, but that's just personal preference. I find Foo bar = new Foo() redundant.Using SingleOrDefault assumes that the SP is returning only 1 record. That may very well be the case now, but the SP being outside of the code, I wouldn't make that assumption. Use FirstOrDefault to grab only the first record - this way the day the SP is modified to return more than 1 row, your code won't break.Something like this:string result;using (var dc = new TestProjectLinqSQLDataContext()){ var command = dc.usp_dequeueTestProject(); result = command.Select(c => c.Command).FirstOrDefault();}return result;I agree with @Jesse's comment about exceptions usage: the calling code has no way of easily telling a valid response from an error message (exception type, message and stack trace), since your code returns a string in every case.Let it blow up!
_softwareengineering.112162
Two things I've noticed in the past week that make me wonder:An interview where my Perl skills were reviewed. I always use C-style for loops and use map about once in every 10,000 lines of code, so I almost always have to reference it before using it. While I can interpret others code using that, it's not my style and that is for reasons involving readability and ease of changing between languages. I at least perceived that I was dinged for that.This answer to my question in which the answerer proceeded to refactor and bullet-proof my tiny script. I admit I was a little bit offended and kind of annoyed at that, though I can understand how that as a habit may be helpful.I understand that in a team, my style will need to adapt what the team has decided is correct -- this is necessary in every project and everybody has their own ideas. Yet I feel like I'm judged for what is syntactically fine and readable code.How much does the style of your smaller snippet code weigh in? Does code like that imply to others that I don't have control of the language because I didn't use all the features? Does my code imply that I'm not familiar with unit testing and proper testing because I didn't fully error-protect a tiny script?
How much does/should your style imply about your skill in a language?
interview;coding style
null
_unix.80099
Hi, I need to test my arbitrary precision calculator, and bc seems like a nice yardstick to compare to; however, bc truncates the result of each multiplication to what seems to be the maximum scale of the involved operands.Is there a quick way to turn this off, or to automatically set the scale of each multiplication to the sum of the scales of the factors so that it doesn't lose any precision?If you have a more elegant solution to this involving something other than bc, I would appreciate your sharing it.Example:$ bc <<< '1.5 * 1.5'2.2The real answer is 2.25.
bc: automatic full-precision multiplication
bc;calculator
You can control the scale that bc outputs with the scale=<#> argument.$ echo scale=10; 5.1234 * 5.5678 | bc28.52606652$ echo scale=5; 5.1234 * 5.5678 | bc28.52606Using your example:$ bc <<< 'scale=2; 1.5 * 1.5'2.25You can also use the -l switch (thanks to @manatwork) which will initialize the scale to 20 instead of the default of 0. For example:$ bc -l <<< '1.5 * 1.5'2.25$ bc -l <<< '1.52 * 1.52'2.3104You can read more about scale in the bc man page.
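As an aside (not bc itself), the behaviour the question asks for, where the scale of a product equals the sum of the operands' scales, is what Python's decimal module does by default; it can serve as a handy cross-check for an arbitrary-precision calculator:

```python
from decimal import Decimal

# Decimal keeps full precision in multiplication: the number of fractional
# digits of a product is the sum of the operands' fractional digits (up to
# the context precision, 28 significant digits by default).
assert Decimal("1.5") * Decimal("1.5") == Decimal("2.25")
assert Decimal("1.52") * Decimal("1.52") == Decimal("2.3104")  # 2 + 2 digits

print(Decimal("1.5") * Decimal("1.5"))   # exact, unlike bc's default scale
```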
_unix.279938
I have successfully installed Windows 10 on a 3-disk (hardware) RAID 0 setup, in a 150GB NTFS partition. As part of that, and because I booted my installation media in UEFI mode, the Windows installer created an EFI partition. The disks in the raid group all have a GPT partition table. I'm attempting to install Fedora 23 (in UEFI mode) in order to dual boot. In following various guides, it looks like all I need to do is mount the existing EFI System Partition (created by Windows) at /boot/efi, create my other partitions as desired, and everything should work.Unfortunately, it doesn't look like the F23 installer is recognizing the EFI partition created by Windows as a valid option. When hitting DONE to apply my partition changes, I get an Error checking storage configuration. Clicking the link for more details reads as such:No valid boot loader target device found. See below for details.For a UEFI installation, you must include an EFI System Partitionon a GPT-formatted disk, mounted at /boot/efi.However, the disk meets those requirements. The relevant output of sudo parted -l reads:$ sudo parted -lPartition Table: gptNumber Start End Size File system Name Flags2 473MB 578MB 105MB fat32 EFI system partition boot, espI have disabled Windows' fast boot via the Power Management control panel.Any help or pointers in the right direction would be greatly appreciated; I'm tired of programming on my old, slow laptop and would love to utilize my desktop's resources.Update #1After reading through this bug report this morning, I think I may have found my issue. When installing Windows 10, it creates a 450MB recovery partition containing WinRE, the Windows Recovery Environment -- this is the first partition on the RAID0 volume, the ESP is second. I've got to go into the office now, but will update this post if I find a resolution tonight.Specifically, I believe comment #59 on that issue may be the solution I'm looking for.
Installing Fedora 23 alongside Windows 10; EFI partition not valid
fedora;windows;dual boot;uefi
A solution: So it looks like I have found a working solution given my particular environment.I'll first describe my goals and environment, and then give step-by-step instructions. Goals: Side-by-side installation of Fedora 23 and Windows 10 in UEFI mode. Environment: One hardware-based RAID0 volume, formatted using a GPT partition table (let's call this group r0); two separate 1TB internal hard disk drives (sdd and sde); two bootable USBs containing the latest release of F23 as of this post, and Windows 10 (created using the Windows media creation tool); a motherboard capable of booting said installation media in UEFI mode. Steps: Insert installation media for F23. Boot in UEFI mode and select install to hard drive. When selecting the disks, I chose r0 and sdd, and then chose I will configure partitioning. Change the new partition type from the default (LVM) to Standard Partition. Create your partitions. After creating each partition, check the settings and ensure that the partition is only on your desired drive. Note that the sizes below are what I chose to use -- your partition sizes may differ based on needs and availability. I created the following partitions, in order:/boot/efi, 500MB, on r0, as an EFI System Partition/, 50GB, on r0, ext4/var, 20GB, on r0, ext4/home, size left blank, on sdd, ext4 (after creation I reduced the partition size by 4GB)swap, 4GB, on sddClick Done. At this point, you will receive a warning saying that no valid boot loader was found. Press Done again to bypass it. Click Select disks again. Select the same disk(s). At the bottom of the window, click Full summary. In the window that pops up, select the boot drive (the drive with the ESP partition mounted at /boot/efi). Click remove boot flag, and then click add boot flag. Click done. You'll be at the partitioning screen again. Click done and accept the changes. Voila! You can now install Fedora. Continue with the installation - it should be pretty straightforward from here on. 
To install Windows 10, I simply inserted my installation media (after installing and updating Fedora) and went through the install process. When given the option, choose Custom Install. Choose the boot drive (r0 in my case), and add a new partition to it - I went with 150GB. Windows popped up with its normal we might create additional partitions alert -- hit okay. At this point, I also chose to format sde so that I could see my internal HDD when I booted into Windows. Complete the installation process. Wonderfully, you'll notice that Windows has not overridden your efi partition but simply added to it -- unfortunately, we aren't done yet. Restart and enter your F23 desktop. At this point, you'll have noticed that we didn't have Windows as an option in the GRUB menu. To fix that, we need to create a new menu entry in /etc/grub.d/40_custom:menuentry Microsoft Windows 10 UEFI-GPT { insmod part_gpt insmod fat insmod search_fs_uuid insmod chain search --fs-uuid --set=root --hint-efi=hd0,gpt1 DEVICE_ID chainloader /EFI/Microsoft/Boot/bootmgfw.efi}After saving the file, you'll need to regenerate your grub config. Run grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg and voila! You are done!
_scicomp.24351
Looking at the plain heat equation $u_t=u_{xx}$, the explicit scheme for it would look like the following iteration:$$u_{m,n+1}=\rho u_{m-1,n}+(1-2\rho)u_{m,n}+\rho u_{m+1,n}$$I noticed this equation resembles a probability equation: basically, if I look at it as$$u_{m,n+1}=p_1 u_{m-1,n}+p_2u_{m,n}+p_3u_{m+1,n}$$I notice $p_1+p_2+p_3=1$, and if I restrict those numbers to lie in $[0,1]$, as in the case of probabilities, this matches the stability criterion. Is there some sort of connection between probability and FDM here? It looks to me as if $u_{m,n+1}$ is some sort of expectation, but I can't make my argument complete, so I would appreciate some help on that.
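The observation in the question can be checked numerically. The sketch below (an illustration, not a proof) verifies that the weights $(\rho, 1-2\rho, \rho)$ always sum to 1 and lie in $[0,1]$ exactly when $\rho \le 1/2$ (the classical stability bound), and that one explicit step is then a weighted average obeying a maximum principle, as an expectation would:

```python
import numpy as np

# The explicit update u_{m,n+1} = rho*u_{m-1,n} + (1-2rho)*u_{m,n} + rho*u_{m+1,n}
# uses weights (rho, 1-2rho, rho). They always sum to 1, and they all lie
# inside [0,1] exactly when rho <= 1/2 -- the classical stability bound.
for rho in (0.25, 0.5, 0.75):
    w = np.array([rho, 1.0 - 2.0 * rho, rho])
    print(rho, w.sum(), bool(np.all((w >= 0) & (w <= 1))))

# When the weights are probabilities, one step is the expected value of u at
# the position of a random walker (left/right with probability rho, stay
# with probability 1-2rho), so a new value never exceeds the old extremes:
rho = 0.25
u = np.sin(np.linspace(0.0, np.pi, 11))
u_next = rho * u[:-2] + (1.0 - 2.0 * rho) * u[1:-1] + rho * u[2:]
assert u_next.max() <= u.max() + 1e-12
assert u_next.min() >= u.min() - 1e-12
```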
explicit scheme stability restriction
parabolic pde
null
_unix.198145
I am trying to improve the kernel boot time of my device and I would like some help. I am using an OMAPL138 with kernel version 2.6.37 and it takes about 50 seconds until the boot process is finished, which I think is a long time. Below is an image of some stages of the boot process. As you can see, there is a delay of 19 seconds until the message EMAC: MII PHY CONFIGURED shows up, and I think this is the main problem of my boot time.After some tests I discovered that this delay occurs during the unpacking of initramfs.cpio.lzma. I discovered it by printing some messages in the initramfs.c file; the delay happens in the while loop inside the unpack_to_rootfs function. The initramfs.cpio.lzma is 5.3MB and the total kernel image (uImage) is 7.3MB.My question is: am I doing something wrong, or is the only way to improve this by reducing the size of my kernel? Maybe some of you had to deal with this problem before, so I would like some suggestions on how to proceed to improve my boot time. Thank you very much.
Unpacking of initramfs is very slow
boot;time;initramfs
null
_scicomp.21612
I'm trying to solve a 1D Poisson equation with pure Neumann boundary conditions. I've found many discussions of this problem, e.g.1) Poisson equation with Neumann boundary conditions2) Writing the Poisson equation finite-difference matrix with Neumann boundary conditions3) Discrete Poisson Equation with Pure Neumann Boundary Conditions4) Finite differences and Neumann boundary conditionsAnd many more.Some of the answers seem unsatisfactory though. For example, the answer in 2) contains a document that explains how the matrix $A$ changes when applying Neumann boundary conditions, but does not explain how to solve the singular system. The comment left in 2) by @Evgeni Sergeev is a reference to a problem with mixed boundary conditions, and not pure Neumann boundary conditions.That said, there are some useful things I've found. I do like the second comment given in 1) by @Sumedh Joshi. It suggests subtracting the mean from the RHS. Other references I've read suggest using a Dirichlet BC at one location in the computational domain, which should result in a unique solution; however, I prefer removing the mean since this seems more elegant.The problem setup is:$\frac{\partial^2 u}{\partial x^2} = f, \qquad f = -(2\pi)^2 cos(2\pi x), \qquad 0 \le x\le 1, \qquad \left(\frac{\partial u}{\partial x}\right)_{0} = 0, \qquad \left(\frac{\partial u}{\partial x}\right)_{1} = 0$I have ghost points in my computational domain, so my matrix $A$ has some extra zeros (I remove them later). 
Using central differences everywhere in the computational domain, matrix $A$ takes the following form:$A = \frac{1}{\Delta x^2}\left[\begin{array}{ccccccccc}0 & 0 & 0 & & & & & & \\1 & -2 & 1 & & & & & & \\0 & 1 & -2 & 1 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & 1 & -2 & 1 & 0 \\ & & & & & & 1 & -2 & 1 \\0 & & & & & & 0 & 0 & 0 \\\end{array}\right]$Once applying the ghost points and adjusting the matrix $A$, I end up with$A = \frac{1}{\Delta x^2}\left[\begin{array}{ccccccccc}0 & 0 & 0 & & & & & & \\0 & -2 & 2 & & & & & & \\0 & 1 & -2 & 1 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & 1 & -2 & 1 & 0 \\ & & & & & & 2 & -2 & 0 \\0 & & & & & & 0 & 0 & 0 \\\end{array}\right]$Correspondingly, the RHS has changed to$\frac{- 2u_b + 2u_i}{\Delta x^2} = f_b - \frac{-2\theta \Delta x}{\Delta x^2}$Where $u_b,u_i,f_b,\theta$ are the $u$ at the boundary, $u$ at the first interior point, $f$ at the boundary and the slope of the solution at the boundary (zero in this case) respectively. The interior of this matrix is the same as the one discussed in the document given in the accepted answer in 2).Here I show the setup and results of a small MATLAB script where I print out $A, f, mean(f)$ and the (attempted) solution $u$ using MATLAB's backslash operator. A = -100.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -200.0000 100.0000 0 0 0 0 0 0 0 0 0 100.0000 -100.0000 F = -19.7392 -31.9387 -12.1995 12.1995 31.9387 39.4784 31.9387 12.1995 -12.1995 -31.9387 -19.7392 meanF = -9.6892e-016 Warning: Matrix is singular to working precision. 
> In main at 121 u = NaN NaN NaN NaN NaN NaN NaN NaN NaN -Inf -InfMy question is why is this not working? I thought that removing the mean of the RHS would work as suggested in @Sumedh Joshi's comment. Any help is greatly appreciated.UPDATE:I found out that this same exact problem setup works perfectly fine for 12 unknowns (and 15 and much larger, e.g. 100), but does not work for 10 unknowns (as posted). So I suppose that the problem is setup correctly. This still begs the question though, what is going on here? It seems that this may be more of a question regarding numerical analysis.Here are the matlab results for 12 unknowns: A = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -2.0 1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.0 -1.0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 f = 0 -19.7392 -34.1893 -19.7392 -0.0000 19.7392 34.1893 39.4784 34.1893 19.7392 0.0000 -19.7392 -34.1893 -19.7392 0 mean(f) ans = -9.4739e-016 A(2:end-1,2:end-1)\f(2:end-1) ans = 0.9167 0.7796 0.4051 -0.1065 -0.6181 -0.9926 -1.1297 -0.9926 -0.6181 -0.1065 0.4051 0.7796 0.9167
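For reference, here is a small NumPy sketch (Python rather than MATLAB, and an illustration rather than the poster's exact code) of the same pure-Neumann setup: subtract the mean from the RHS as suggested, solve the singular system in the least-squares sense, and fix the additive constant by de-meaning the solution:

```python
import numpy as np

n = 50                         # number of unknowns (illustrative size)
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Interior: standard central difference; boundary rows use the ghost-point
# elimination u_{-1} = u_1 (zero Neumann), giving (-2 u_0 + 2 u_1)/dx^2 = f_0.
A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A[0, 1] = 2.0
A[-1, -2] = 2.0
A /= dx ** 2

f = -(2.0 * np.pi) ** 2 * np.cos(2.0 * np.pi * x)
f -= f.mean()                  # compatibility: remove the mean of the RHS

# A is singular (constants are in its null space), so a backslash-style
# exact solve fails; a least-squares solve picks a member of the solution
# family, and de-meaning u selects the zero-mean representative.
u, *_ = np.linalg.lstsq(A, f, rcond=None)
u -= u.mean()

exact = np.cos(2.0 * np.pi * x)
exact -= exact.mean()
print(np.max(np.abs(u - exact)))   # small discretization error
```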
Poisson equation finite-difference with pure Neumann boundary conditions
finite difference;boundary conditions;poisson
null
_webmaster.27938
I'm working on a community platform written in PHP and MySQL.I have some questions about the server usage; maybe someone can help me out.The community is based on jQuery, with many AJAX requests to update content.It makes 5-10 AJAX (JSON, GET, POST) requests every 5 seconds; the requests fetch user data like user notifications and messages by doing MySQL queries.I wonder how a server will handle this when there are more than 5000 users online.Then it will be 50,000 requests every 5 seconds: what kind of server do you need to handle this?Or maybe even more: when there are 15,000 users online, 150,000 requests every 5 seconds.My webserver has the following specs:Xeon Quad, 2048MB RAM, 5000GB traffic.Will it be good enough, and for how many users?Can anyone help me out, or does anyone know where to find such information, like how to make such a calculation?
Question about server usage, big community platform
web hosting;usage data
null
_unix.284665
Currently I start x11vnc from /home/odroid/.config/lxsession/LXDE/autostart with:

@/bin/x11vnc -bg -forever -shared -rfbauth /home/odroid/.vnc-passwd -noxdamage -norc -noxrecord -capslock -no6 -rfbport 5900

Autologin on startup is OK and it works well, but I rarely log in in graphical mode. I want it to work like sshd.socket (vs. sshd.service), i.e. started only on demand. Do you have an idea or a line of research?
How to start x11vnc by socket (ie only when needed)
arch linux;x11;systemd;vnc
null
_unix.234084
I have some code that kills processes and their children/grandchildren. I want to test this code. Currently I'm doing $ watch date > date.txt, which creates a process with a child. Is there any way to create a parent -> child -> grandchild tree? What command can make that happen?
How to create a process tree?
linux;command line;process;command
#!/bin/sh
# This is a process with an id of $$
( sleep 1000 )&                 # this creates an idle child
( (sleep 1000)& sleep 1000 )&   # this creates an idle child and grandchild
wait                            # this waits for direct children to finish

Running the above as ./1.sh & on my system created the following process tree:

$ command ps -o pid,ppid,pgrp,stat,args,wchan --forest
  PID  PPID  PGRP STAT COMMAND                    WCHAN
24949  4783 24949 Ss   /bin/bash                  wait
25153 24949 25153 S     \_ /bin/sh ./1.sh         sigsuspend
25155 25153 25153 S     |   \_ sleep 1000         hrtimer_nanosleep
25156 25153 25153 S     |   \_ sleep 1000         hrtimer_nanosleep
25158 25156 25153 S     |       \_ sleep 1000     hrtimer_nanosleep

You can notice that the tree shares the same process group (PGRP) of 25153, which is identical to the PID of the first process. The shell creates a new process group whenever you start a command in interactive mode (or with job control explicitly turned on). The PGRP mechanism allows the shell to send a signal to the whole process group at once without creating a race condition. This is used for job control, and when your script runs as a foreground job, for sending:

(SIG)INT when the user presses C-c
(SIG)QUIT when the user presses C-\
(SIG)TSTP when the user presses C-z

You can do the same yourself, for example:

kill -INT -25153

where INT is the signal and 25153 is the process group you want to send the signal to. The - before the 25153 means you're targeting a PGRP id rather than a PID. In your case, the signal you should be sending is TERM (request termination). TERM is the default signal kill sends; however, you have to specify it explicitly if you're targeting a group rather than a PID:

kill -TERM -25153

If you want to kill the process group of the last background job you started, you can do:

kill -TERM -$!
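As a rough follow-up sketch of this group-signalling idea (not part of the original answer; the names and timings are illustrative): note that a non-interactive script runs its background jobs in the script's own process group, so the sketch uses setsid to give the child a group of its own before signalling that group.

```shell
# Start a child in its own session (and thus its own process group);
# without setsid, a non-interactive script's background job shares the
# script's group and a group-wide kill would hit the script itself.
setsid sleep 30 &
pid=$!
sleep 0.2                                # give setsid a moment to re-exec

# Look up the child's process group id (here equal to its own pid,
# since setsid made it a session/group leader).
pgid=$(ps -o pgid= -p "$pid" | tr -d ' ')

# Signal the whole group: the leading '-' marks a PGRP id, not a PID.
kill -TERM -"$pgid"
sleep 0.2
if kill -0 "$pid" 2>/dev/null; then echo alive; else echo terminated; fi
```

After the group-wide TERM is delivered, the probe at the end reports that the child is gone.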
_cs.7013
I am interested in the self-reducibility of the Graph 3-Colorability problem.

Definition of the Graph 3-Colorability problem. Given an undirected graph $G$, does there exist a way to color the nodes red, green, and blue so that no adjacent nodes have the same color?

Definition of self-reducibility. A language $L$ is self-reducible if an oracle Turing machine $T$ exists such that $L=L(T^L)$ and, for any input $x$ of length $n$, $T^L(x)$ queries the oracle for words of length at most $n-1$.

I would like to show in a very strict and formal way that Graph 3-Colorability is self-reducible. The proof of the self-reducibility of SAT can be used as an example (self-reducibility of SAT). In my opinion, the general idea of a proof of self-reducibility of Graph 3-Colorability differs from the proof for SAT in a few aspects. SAT has two choices for every literal (true or false), while Graph 3-Colorability has three choices (namely red, green, blue). The choices for SAT literals are independent of each other, while the color choices in Graph 3-Colorability are strictly dependent: any adjacent nodes must have different colors, a property that could potentially reduce the number of iterations over all colors.

The general idea of the proof. Let's denote by $c_{v_i}$ the color of the vertex $v_i$, which can take one of the values (red, green, blue). Define a graph $G'$ from a given graph $G$ by coloring an arbitrary vertex $v_0$: assign $c_{v_0}$ the value 'red' and put the graph $G'$ with the colored vertex $v_0$ on the input of the oracle.
If the oracle answers 1, which means that the modified graph is still 3-colorable, save the current assignment and start a new iteration with a different vertex $v_1$ chosen arbitrarily, coloring $v_1$ according to the colors of the adjacent vertices. If the oracle answers 0, which means the previous assignment has broken 3-colorability, pick a different color from the set of three colors, again according to the colors of the adjacent vertices.

The previous proof is not mathematically robust; the question is how to improve it and make it more formal and mathematically strict. It looks like I need to distinguish more carefully between the case where the new vertex has no edges to already colored vertices and the case where the new vertex is adjacent to already colored vertices. In addition, I would like to prove that Graph 3-Colorability is downward self-reducible.

Definition of a downward self-reducible language. A language $A$ is said to be downward self-reducible if it is possible to determine in polynomial time whether $x \in A$ using the results of queries on shorter inputs. The idea seems simple and intuitive: start by coloring an arbitrary vertex, and on each iteration add one more colored vertex and check via the oracle whether the graph is still 3-colorable; if not, reverse the previous coloring and try another color. But how do I write the proof in a strict way and, more importantly, how do I find an appropriate encoding of the graph? In short, I would like to show that Graph 3-Colorability is self-reducible and downward self-reducible in a strict and formal way. I would appreciate you sharing your thoughts.

Update:

Downward self-reducibility. Downward self-reducibility applies to decision problems, and its oracle answers the same decision problem on a shorter input; at the end of the process of downward self-reduction we should have the right color assignment. Every 3-colorable graph $G$ with more than three vertices has two vertices $x,y$ with the same color.
Apparently, since there are only three colors and more than three vertices, some non-adjacent vertices must have the same color. If we merge $x$ and $y$ having the same color, the result is still a 3-colorable graph, because if the graph is 3-colorable, then there exists a correct assignment for all vertices adjacent to $x$ and $y$ that is consistent with the common color of $x, y$; so by merging $x, y$ we don't need to change the color of any vertex, we only need to add more edges between already correctly colored vertices (I know it's not the best explanation; I would appreciate it if someone could explain it better). On every iteration we take two non-adjacent vertices $x,y$ of the graph $G$, merge $x$ and $y$, and get a graph $G'$, which is our shorter input to the oracle. The oracle answers whether it is 3-colorable or not. Now the problem is: before putting $G'$ on the input of the oracle, I should color the merged vertex and test the colorability of $G'$, and if it's not 3-colorable, change the color; but how do I implement this correctly? I need the right encoding for it.

Self-reducibility. First, we should check whether the given graph $G$ is 3-colorable at all, so we put it on the input of the oracle, and the oracle answers whether it is 3-colorable; if yes, we start the process. Any two non-adjacent vertices can have the same color in a 3-colorable graph. We should run the self-reducibility process in iterations: I think we can start from a small subgraph $G'$ of the given graph $G$ and on every iteration add one more vertex from $G$ to $G'$. In parallel, we should maintain the assignment of the already colored vertices. Unfortunately, I still don't get the idea completely. I would appreciate help and hints.
Graph 3-colorability is self-reducible
complexity theory;reductions
As Vor mentions in his comment, your reduction doesn't work, since 3-colorability doesn't accept partial assignments of colors. The problem goes even deeper, since setting the color of a single vertex doesn't make any progress in determining whether the graph is 3-colorable: indeed, the graph is 3-colorable iff there is a 3-coloring in which vertex $v$ is assigned color $c$, for any $v,c$ you choose.Here is a hint on how to solve your exercise, second part. In any 3-coloring of a graph $G$ on more than three vertices, there are two vertices $x,y$ getting the same color (why?). If we merge $x$ and $y$, the resulting graph is still 3-colorable (why?). Try to use this idea to construct a downward self-reducing algorithm for 3-colorability.Edit: And here is a hint on how to solve the exercise, first part. Consider any two unconnected vertices $x,y$. If there is a coloring in which they get the same color then $G_{xy}$ is 3-colorable (why?), and a coloring of $G$ can be extracted from a coloring of $G_{xy}$ (how?). When will this process stop?
_codereview.141940
I'm trying to find a simple, clean and fast way to implement an XML config file in C# without any 3rd-party tools. It should also replace the restrictive System.Configuration.ConfigurationManager.

using System;
using System.IO;

[Serializable]
public class MyConfigFile
{
    // Static Members
    public static readonly string ConfigFilename = "myConfig.config";
    public static readonly string ConfigFullFilename = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), ConfigFilename);

    // Singleton
    private static MyConfigFile _myConfig;
    public static MyConfigFile Instance
    {
        get
        {
            if (_myConfig == null)
            {
                if (!File.Exists(ConfigFullFilename))
                {
                    _myConfig = new MyConfigFile();
                    Save();
                }
                else
                {
                    _myConfig = Load();
                }
            }
            return _myConfig;
        }
    }

    // Constructor
    public MyConfigFile()
    {
        Version = 1;
        ValueA = "Important Value";
        ValueB = DateTime.Now;
        ValueC = false;
    }

    // Properties
    public int Version { get; set; }
    public string ValueA { get; set; }
    public DateTime ValueB { get; set; }
    public bool ValueC { get; set; }

    // Static Methods
    public static void Save()
    {
        System.Xml.Serialization.XmlSerializer xs = new System.Xml.Serialization.XmlSerializer(_myConfig.GetType());
        StreamWriter writer = File.CreateText(ConfigFullFilename);
        xs.Serialize(writer, _myConfig);
        writer.Flush();
        writer.Close();
    }

    public static MyConfigFile Load()
    {
        System.Xml.Serialization.XmlSerializer xs = new System.Xml.Serialization.XmlSerializer(typeof(MyConfigFile));
        StreamReader reader = File.OpenText(ConfigFullFilename);
        var c = (MyConfigFile)xs.Deserialize(reader);
        reader.Close();
        return c;
    }
}

This code works fine, but I wonder if you can improve it or have any better ideas.
C# XML Config File
c#;reinventing the wheel;xml;configuration
Code

The StreamReader/StreamWriter in the methods Load and Save should be disposed in a finally block (or better, by creating them in a using). Otherwise, the object is not disposed if the serialization/deserialization fails.

The method Load does not check whether the file exists. If it doesn't, an exception will be thrown. It is probably enough to make that method private, because it is not a required part of the API.

Is there any reason for the My prefix? If not, just ConfigFile sounds better to me.

Consider using a more application-specific path, for example SpecialFolder.ApplicationData\[CompanyName]\[Application] for user-specific data or SpecialFolder.CommonApplicationData\[CompanyName]\[Application] for program-specific data.

API

The path of the config file is defined within the class. That has some disadvantages (central configuration is not possible, the class cannot be tested by unit tests, the class is not reusable, ...). Therefore, consider passing the path as a constructor argument instead and do not use the singleton pattern.

Consider renaming the Instance property to 'Default' or something like that. The methods Save/Load cannot be static if the class loses its singleton status - but that should not be a problem.
_webmaster.13011
Early last year I implemented several new and unique features on Neocamera, and most have been copied by DPReview. They announced the latest one this morning. It is called 'My Short List' there, while on Neocamera it is called 'Camera Bag', which I explained in a blog post last September. They had a vague announcement in March for the previously copied features, which I have also had since last September: basically search engines for cameras and lenses by features or specifications. It is obvious that these are the same features, but they are presented differently. So this is not a case of design or style being copied, only the capabilities. Once you look at them, you see that there is an uncanny overlap in functionality. Can something be done about it? In case it matters, my site is based in Canada while theirs is registered in the UK. It belongs to Amazon; I'm not sure which corporate entity.
What to do about features of a website copied by another bigger one?
website features;copyright;copy
A Drupal view with exposed filters can be used to create something like those feature searches with almost no actual code being written. While these features are certainly convenient, they are not particularly unique. Your sites are in competition. You're going to steal ideas from each other. The actual problem is doing things better.Unless you can prove they somehow stole actual, proprietary code, or you have some sort of trademark or patent like Amazon's on one-click shopping, I doubt you have any case at all.
_codereview.164491
I wanted to extend concurrent.futures.Executor to make the map method non-blocking. It seems to work fine, but I would be very interested in feedback about the general approach, implementation, and code quality. The entire thing is on Github, but below is all the important code:Usage example:import timefrom itertools import count, islicefrom streamexecutors import StreamThreadPoolExecutordef produce(): for i in range(100): time.sleep(0.02) yield idef square(x): time.sleep(0.1) return x ** 2def print10(iterable): print(list(islice(iterable, 10)))squares = StreamThreadPoolExecutor().map(square, produce())print10(squares)And the implementation:import timefrom queue import Queuefrom concurrent.futures import Executor, ThreadPoolExecutor, ProcessPoolExecutorfrom concurrent.futures.process import _get_chunks, _process_chunkfrom functools import partialimport sysimport threadingimport itertoolsclass CancelledError(Exception): passclass StreamExecutor(Executor): def map(self, fn, *iterables, timeout=None, chunksize=1, buffer_size=10000): Returns an iterator equivalent to map(fn, iter). Args: fn: A callable that will take as many arguments as there are passed iterables. timeout: The maximum number of seconds to wait. If None, then there is no limit on the wait time. chunksize: The size of the chunks the iterable will be broken into before being passed to a child process. This argument is only used by ProcessPoolExecutor; it is ignored by ThreadPoolExecutor. buffer_size: The maximum number of input items that may be stored at once; default is a small buffer; 0 for no limit. The drawback of using a large buffer is the possibility of wasted computation and memory (in case not all input is needed), as well as higher peak memory usage. Returns: An iterator equivalent to: map(func, *iterables) but the calls may be evaluated out-of-order. Raises: TimeoutError: If the entire result iterator could not be generated before the given timeout. 
Exception: If fn(*args) raises for any values. if not callable(fn): raise TypeError('fn argument must be a callable') if timeout is None: end_time = None else: end_time = timeout + time.time() if buffer_size is None: buffer_size = -1 elif buffer_size <= 0: raise ValueError('buffer_size must be a positive number') iterators = [iter(iterable) for iterable in iterables] # Set to True to gracefully terminate all producers cancel = False # Deadlocks on the two queues are avoided using the following rule. # The writer guarantees to place a sentinel value into the buffer # before exiting, and to write nothing after that; the reader # guarantees to read the queue until it encounters a sentinel value # and to stop reading after that. Any value of type BaseException is # treated as a sentinel. future_buffer = Queue(maxsize=buffer_size) # This function will run in a separate thread. def consume_inputs(): while True: if cancel: future_buffer.put(CancelledError()) return try: args = [next(iterator) for iterator in iterators] except BaseException as e: # StopIteration represents exhausted input; any other # exception is due to an error in the input generator. We # forward the exception downstream so it can be raised # when client iterates through the result of map. future_buffer.put(e) return try: future = self.submit(fn, *args) except BaseException as e: # E.g., RuntimeError from shut down executor. # Forward the new exception downstream. future_buffer.put(e) return future_buffer.put(future) # This function will run in the main thread. def produce_results(): def cleanup(): nonlocal cancel cancel = True while True: future = future_buffer.get() if isinstance(future, BaseException): break else: future.cancel() raise exc # Ensure cleanup happens even if client never starts this generator. try: yield None except GeneratorExit as exc: cleanup() while True: future = future_buffer.get() if isinstance(future, BaseException): # Reraise upstream exceptions at the map call site. 
raise future if end_time is None: remaining_timeout = None else: remaining_timeout = end_time - time.time() # Reraise new exceptions (errors in the callable fn, TimeOut, # GeneratorExit) at map call site, but also cancel upstream. try: yield future.result(remaining_timeout) except BaseException as exc: cleanup() thread = threading.Thread(target=consume_inputs) thread.start() result = produce_results() # Consume the dummy `None` result next(result) return resultclass StreamThreadPoolExecutor(StreamExecutor, ThreadPoolExecutor): ...class StreamProcessPoolExecutor(StreamExecutor, ProcessPoolExecutor): def map(self, fn, *iterables, timeout=None, chunksize=1, buffer_size=10000): if buffer_size is not None: buffer_size //= max(1, chunksize) if chunksize < 1: raise ValueError(chunksize must be >= 1.) results = super().map(partial(_process_chunk, fn), _get_chunks(*iterables, chunksize=chunksize), timeout=timeout, buffer_size=buffer_size) return itertools.chain.from_iterable(results)Update:I failed to fix an issue of the process hanging in some cases of main thread termination. I had to rewrite this code somewhat to ping the main thread to check if it's alive. Not sure if I should copy the updated version from Github to this post, since it might end up being too many edits. I guess I'll leave the original code, since I'm still interested to see if it's good: I prefer my original approach to the periodic ping.
Implementing non-blocking Executor.map
python;multithreading;concurrency
null
_softwareengineering.234106
Consider the following situation: one hardware device, two applications (1 C# application, 1 firmware). The C# application sends frames to the firmware and the firmware executes scripts.

C# -> transmit frame[x]
FW -> receive frame[x]
FW -> execute script relating to frame[x]
FW -> before finishing, a FW event is triggered forcing moving parts to stop (safety precaution).

C# requirement after such events are raised: poll FW for status; if status == stopped, send a resume frame. Caveat: the C# logic therefore needs to adjust itself to being restarted. So this is the situation I have; but what I'm seeking advice on is how I should design a class which can handle the pausing and restarting of tasks an unlimited number of times. While I can write convoluted methods which are hard to maintain through a chain of logical statements, I'm struggling to find a design which is reusable and clean. Such scenarios hold for ~50 unique tasks, so finding a solution I can apply every time would save a lot of headache. Any time a script is restarted or resumed, a physical user can easily cause a safety-trigger stop, so the solution needs to be robust. While I'm sure recursion would be a good first step, I'm worried that stack limits have the potential to introduce errors.
A task described by a frame typically has two attributes: the time it takes to execute (need to manually observe) and an end target status (complete with status identifiers). Currently I poll the firmware for the status until the target status is reached or until time-out.Recursion:What I meant by that was (if not complete -> callSelfAgain())Pause: User has done something to the device, wait here until they fix (can happen at ANY time during the time the firmware is running its jobs).Restart: The user has amended the state, now start the job from the beginning (or where we left off [task dependent]).
Designing software functions which are both pausable and restartable
c#;design;object oriented design;recursion;embedded systems
I think your best bet is to design the system as a state machine. The idea is that you have objects symbolizing each discrete step of the process, including any state data that has accumulated so far, and by pointing to the subtask currently being executed you can reconstruct the execution from that point. However, I've just remembered that there is a C# mechanism that actually works in this exact manner and also fulfills all of your requirements. Although they weren't made for this purpose, you can implement the design using iterators. They already support pausing execution at arbitrary points, and also support restarting it any number of times as the need arises. They don't actually have to return a value, but they can, and that's a bonus.

public class Unit
{
    private Unit() { }
}

public class StateMachine
{
    public IEnumerable<Unit> Run(int input)
    {
        int working = input + 1;
        yield return null; // stop execution
        // restarting... we still have all the working state data!
        working *= input;
        // more work...
        yield return null; // stop work
        // restart...
        if (IsError())
        {
            throw new Exception();
        }
        if (IsOverPrematurely())
        {
            yield break;
        }
    }
}

(I might be misunderstanding whether or not your question requires a return value, though.)
_codereview.13820
I am writing an API wrapper, and the endpoint takes dates in a very specific format. The user of the API can pass in the parameters in whatever format they prefer, but regardless of what they pass in, I want to be able to clean up their input prior to submitting their query. My question centers around the best way to update the options hash in place, and I have thought of a few possible ways to implement it.

1. A helper method inside the class, so you can overwrite:

options = reformat_hash(options)

2. A singleton on that specific variable:

def options.clean_up!
  # see internals below
end

3. Or open up Hash and do the cleaning from the class:

class Hash
  def clean_hash!
    self.each { |key, value|
      if value.is_a? Date
        self[key] = value.strftime('%Y-%m-%d %H:%M:%S')
      else
        self[key] = value.to_s
      end
    }
  end
end

so that I can just call it on whatever the variable may be named, like:

def api_request(options={})
  options.clean_hash!
  # the options variable is now clean and I can pass it to the api
  HHTParty.get(path, :query => options).parsed_response
end

Is there a best practice for modifying or formatting hashes after they're passed into a method? I feel like #3 is the neatest, but should I be worried about opening up Hash to do this?
Reformatting a method options hash
ruby
Here, while it may look nice, the method clean_hash is not general enough to be valid across all Hashes. So adding a method such as clean_hash to all Hashes would only serve to increase coupling, which is bad. A second problem is that you are mutating your method argument, which is almost never advisable. The solution is to define the clean method outside, perhaps as part of your internal API object, and call HHTParty.get(path, :query => clean(options)).parsed_response. I would also define the clean method this way:

def clean(opt)
  Hash[opt.collect { |k, v| [k, v.is_a?(Date) ? v.strftime('%Y-%m-%d %H:%M:%S') : v.to_s] }]
end
_unix.140730
We all know we can use tree to get a nicely formatted text visualization of the structure of a directory; say:

$ tree -spugD /usr/include/boost/accumulators/numeric/
/usr/include/boost/accumulators/numeric/
├── [drwxr-xr-x root     root         4096 Dec 19  2011]  detail
│   ├── [-rw-r--r-- root     root         2681 Oct 21  2010]  function1.hpp
│   ├── [-rw-r--r-- root     root          406 Oct 21  2010]  function2.hpp
│   ├── [-rw-r--r-- root     root          409 Oct 21  2010]  function3.hpp
│   ├── [-rw-r--r-- root     root          409 Oct 21  2010]  function4.hpp
│   ├── [-rw-r--r-- root     root         6725 Oct 21  2010]  function_n.hpp
│   └── [-rw-r--r-- root     root          530 Oct 21  2010]  pod_singleton.hpp
├── [drwxr-xr-x root     root         4096 Dec 19  2011]  functional
│   ├── [-rw-r--r-- root     root         2316 Oct 21  2010]  complex.hpp
│   ├── [-rw-r--r-- root     root        16627 Oct 21  2010]  valarray.hpp
│   └── [-rw-r--r-- root     root        12219 Oct 21  2010]  vector.hpp
├── [-rw-r--r-- root     root         9473 Oct 21  2010]  functional_fwd.hpp
└── [-rw-r--r-- root     root        21312 Oct 21  2010]  functional.hpp

2 directories, 11 files

What I would want is the reverse of this - given a text file with the contents as above saved in dirstruct.txt, I could write something like this (pseudo):

$ reverse-tree dirstruct.txt -o /media/destpath

... and so the /media/destpath directory would be created if it doesn't exist, and inside I would get a detail subfolder with files function1.hpp, etc., as per the tree above. Of course, I could always do a copy with cp -a and get the same; the idea here is that I could change filenames, directory names, sizes, permissions and timestamps in the text file - and have that reconstructed in the output structure. For files, I first thought I'd be happy with them just being touched (that is, 0 bytes in size) - but it's probably better that the size is reconstructed too, by filling in either 0x00 or random bytes up to the requested size.
The primary use of this would actually be to post questions :) - some of those rely on a directory structure, say from a program I have installed, but the program in itself is irrelevant to the question; then instead of targeting answerers that may happen to have the program installed, I could simply ask a question with respect to an anonymized directory tree, which they themselves could quickly reconstruct on their machines simply by pasting the tree text description from the post. So - is there a straightforward way to achieve this?
The reverse of `tree` - reconstruct file and directory structure from text file contents?
files;text processing;scripting;tree
null
_codereview.85225
The problem is adding two lists as numbers, L1 = [1,9,9] and L2 = [0,9,9]:

?- sum([1,9,9],[0,9,9], Lo).
Lo = [2,9,8]

But I also wanted this addition to work:

?- sum([8,1,9],[1,8,2],Lo).
Lo = [1, 0, 0, 1].

I used the backtracking method I've learned:

link([],L,L).
link([Head|Tail],L2,L3):- link(Tail,L2,L), L3=[Head|L].

inve([],[]):-!.
inve([X|Xs],L):- inve(Xs,L2), link(L2,[X],L).

sum(L1,L2,L3):- inve(L1,LI1),inve(L2,LI2), sumID(LI1,LI2,L), inve(L,[Li|LIs]), Li > 9, Li2 is Li-10 , L3 = [1,Li2|LIs] ,!.
sum(L1,L2,L3):- inve(L1,LI1),inve(L2,LI2), sumID(LI1,LI2,L), inve(L,L3),!.

sumID([],[],[]):- !.
sumID([X|[Xs|Xss]],[Y|Ys],[L|Ls] ):- XY is X+Y , XY > 9 , Head is XY - 10, L = Head, Xs1 is Xs + 1, sumID([Xs1|Xss], Ys, LTail) ,Ls = LTail,!.
sumID([X|Xs],[Y|Ys],[L|Ls]):- L is X+Y, sumID(Xs,Ys,LTail), Ls = LTail.

A friend told me to invert the lists, add from left to right, and then invert the final list. How can I improve this solution so that it is not so long? I'd also appreciate a better idea for solving this problem. It took me more than 2 hours, and in the exam this is supposed to take 20 minutes.
Adding elements in two lists as numbers
beginner;backtracking;prolog
null
_vi.4710
I was trying to revamp the vim editor, so I installed NeoBundle, but I didn't like it. How can I uninstall it completely? I followed the installation instructions given in the readme:

$ curl https://raw.githubusercontent.com/Shougo/neobundle.vim/master/bin/install.sh > install.sh
$ sh ./install.sh

Should I just erase the lines in the .vimrc file?
How to uninstall NeoBundle in OS X?
vimrc;macos;installing
null
_webmaster.69257
According to Google Webmaster Tools, my site, www.yomkippurshoes.com, is being indexed. However, when I try to find it in results (e.g. by typing site:yomkippurshoes.com in Google), I don't see it in the results. This isn't an SEO issue, as I'm not concerned about the ranking (yet); I'm just not sure why it isn't being shown in the results at all. According to this article, Google will not show the results if the site is down. However, mine isn't down. Any thoughts on the disparity?
My site is indexed, but not showing up in results
google search;google index;search results
null
_vi.7346
I know that it is possible: I googled it some time ago, but now I can't find the solution. What do I need to add to my .vimrc to automatically resize the currently active window to full height after switching windows?
Automatically resize the active window to maximum height
vimrc;vim windows;autocmd
null
_cstheory.5045
Fáry's theorem says that a simple planar graph can be drawn without crossings so that each edge is a straight line segment. My question is whether there is an analogous theorem for graphs of bounded crossing number. Specifically, can we say that a simple graph with crossing number k can be drawn so that there are k crossings in the drawing and so that each edge is a curve of degree at most f(k) for some function f?

EDIT: As David Eppstein remarks, it is readily seen that Fáry's theorem implies a drawing of a graph with crossing number k so that each edge is a polygonal chain with at most k bends. I'm still curious though whether each edge can be drawn with bounded degree curves. Hsien-Chih Chang points out that f(k) = 1 if k is 0, 1, 2, 3, and f(k) > 1 otherwise.
Drawing graphs of bounded crossing number
graph theory;co.combinatorics;planar graphs
If a graph has bounded crossing number it can be drawn with that number of crossings in the polyline model (i.e. each edge is a polygonal chain, much more common in the graph drawing literature than bounded-degree algebraic curves) with a bounded number of bends per edge. It's also true more generally if there is a bounded number of crossings per edge. To see this, just planarize the graph (replace each crossing by a vertex) and then apply Fáry's theorem.

Now, to use this to answer your actual question, what you need to do is to find an algebraic curve that is arbitrarily close to a given polyline, with degree bounded by a function of the number of polyline bends. This can also be done, fairly easily. For instance: for each segment $s_i$ of the polyline, let $e_i$ be an ellipse with high eccentricity that is very close to $s_i$, and let $p_i$ be a quadratic polynomial that is positive outside $e_i$ and negative inside $e_i$. Let your overall polynomial take the form $p=\epsilon-\prod_i p_i$ where $\epsilon$ is a small positive real number. Then one component of the curve $p=0$ will lie a little outside the union of the ellipses and can be used to substitute for the polyline; its degree will be twice the number of ellipses, which is linear in the number of crossings per edge.
_unix.338629
In bash, when I write this line:

new_line=new\nline

I get this as expected:

echo $new_line
new\nline

And this also works as expected:

echo -e $new_line
new
line

as the manual says: -e enable interpretation of backslash escapes. However, this doesn't give me an interpreted \n newline character:

cur_log=$(who)
echo -e $cur_log
myuser pts/0 2017-01-19 07:10 (:0) myuser pts/1 2017-01-19 09:26 (:0) myuser pts/4 2017-01-19 09:14 (:0)

I thought that there was no newline character, but if I write:

echo $cur_log

I get the newline characters interpreted:

myuser pts/0 2017-01-19 07:10 (:0)
myuser pts/1 2017-01-19 09:26 (:0)
myuser pts/4 2017-01-19 09:14 (:0)

Why doesn't echo -e $cur_log interpret the newline characters, but echo -e $new_line does?
Why doesn't echo -e interpret a new line?
bash;newlines
The reason is that in your first variable (new_line) there is only an escape sequence (i.e. \n = backslash followed by n), which is passed unchanged to echo, while in the second one (cur_log) there are actual newlines, which are stripped out by the shell because the newline is part of the IFS variable.

A new line is, under Unix/Linux, a single character whose ASCII code is 10 (line feed). When a file containing this character is displayed on screen, it is converted into two characters, carriage return plus line feed (CR-LF), 13 + 10. When an editor like gedit opens such a file, it stores each line separately. The line feed is only used to detect the separation between two contiguous lines. \n is made of two characters, ASCII 92 + 110. If you edit a file containing occurrences of \n, these two characters will be left unchanged and displayed as is, unlike real newlines.
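To make the two cases concrete, a small demonstration (my own sketch, not from the original post; od is used to inspect the raw bytes):

```shell
#!/bin/sh
# Case 1: the variable holds a literal backslash + n (two separate bytes).
new_line="new\nline"
printf '%s' "$new_line" | od -An -c   # the backslash and 'n' appear as two bytes

# Case 2: the variable holds a real newline character (ASCII 10).
cur_log="line1
line2"
printf '%s' "$cur_log" | od -An -c    # a real \n byte sits between line1 and line2

# Length check: "new\nline" is 9 characters, not 8.
echo "${#new_line}"   # 9
```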
_unix.147843
In my script on my Ubuntu machine I declare a string like this:

DEBUG_PACKAGE_LIST=$(apt-cache search dbg | awk '{ print $1 }' | grep -e -dbg)

To help you understand my problem, here is the output of

echo $DEBUG_PACKAGE_LIST >> debug

The following

if [[ ! $DEBUG_PACKAGE_LIST =~ [^-a-z0-9]libmagick++5-dbg[a-z]* ]]; then echo "not contained"; fi

echoes "not contained", despite $DEBUG_PACKAGE_LIST containing the string libmagick++5-dbgsym. Could you help me understand why? Basically my intention is to match libmagick++5-dbg, whereas libmagick should only be preceded by a space character.
bash: regular expressions in if expression
bash;regular expression;distros;test
In bash 3.2 or above:

shopt -u compat31
[[ ! $DEBUG_PACKAGE_LIST =~ [^-[:alnum:]]'libmagick++5-dbg' ]]

In bash 3.1:

[[ ! $DEBUG_PACKAGE_LIST =~ '[^-[:alnum:]]libmagick\+\+5-dbg' ]]

(note that [a-z]* is redundant since it also matches the empty string, so it will always match).

Works in both:

re='[^-[:alnum:]]libmagick\+\+5-dbg'
[[ ! $DEBUG_PACKAGE_LIST =~ $re ]]
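A quick check of the variable-based form (my own test harness, assuming bash; the package list is invented for illustration):

```shell
#!/usr/bin/env bash
# Sample list: the regex should match " libmagick++5-dbg" (preceded by a space)
# and not require anything after it.
DEBUG_PACKAGE_LIST='foo-dbg libmagick++5-dbg libmagick++5-dbgsym'
re='[^-[:alnum:]]libmagick\+\+5-dbg'
if [[ $DEBUG_PACKAGE_LIST =~ $re ]]; then
  match=${BASH_REMATCH[0]}
  echo "matched: $match"
fi
```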
_codereview.73561
These are questions from an interview:

1. Reverse a string
2. Find matching anagrams in a word list

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication2
{
    public class ReverseString
    {
        public ReverseString()
        {
            string str = "Gilad";
            string res = Reverse(str);
            List<string> wordsList = new List<string>();
            wordsList.Add("batel");
            wordsList.Add("Gilad");
            wordsList.Add("daliG");
            wordsList.Add("enon");
            wordsList.Add("none");
            wordsList.Add("letab");
            var pairs = PairAnagramWords(wordsList);
        }

        public string Reverse(string str)
        {
            StringBuilder s = new StringBuilder();
            for (int i = str.Length - 1; i >= 0; i--)
            {
                s.Append(str[i]);
            }
            return s.ToString();
        }

        Dictionary<string, string> PairAnagramWords(List<string> wordsList)
        {
            wordsList.Sort();
            Dictionary<string, string> res = new Dictionary<string, string>();
            foreach (var item in wordsList)
            {
                if (res.ContainsValue(item))
                {
                    continue;
                }
                foreach (var item2 in wordsList)
                {
                    if (item == item2 || item.Length != item2.Length)
                    {
                        continue;
                    }
                    var reversed = Reverse(item2);
                    int i;
                    for (i = 0; i < item.Length; i++)
                    {
                        if (item[i] != reversed[i])
                        {
                            break;
                        }
                    }
                    if (i == item.Length)
                    {
                        res.Add(item, item2);
                    }
                }
            }
            return res;
        }
    }
}

I don't like the fact that I'm doing it in \$ O(n^2) \$, although I am checking for no repetitions. Please review my code's complexity and algorithm. I did the testing inside the constructor just for convenience, although I usually will use unit tests.
String reverse and pairing reversed words
c#;algorithm;interview questions
Please find my (language-agnostic) suggestions below:

String reversal: use the same string variable and swap the i-th and (len-i-1)-th characters -> O(n/2). Also note that your program caters to str == reverse(str) and not anagrams in general, i.e. gilad == dalig is checked, but not gilad == gliad / gladi / ladig / glaid, etc.

Anagram finder: here are multiple ways to do it:

1. A hash function on each string which generates a unique number for strings with the same letters. It could be a unique prime mapped to each character, with H(S) = P1 * P2 * ... => H(gilad) = P(g) * P(i) * P(l) * P(a) * P(d).

2. Sort each string, sort the array, and traverse to find anagrams:

{gilad, glaid, bat, tac, act, tab}
= {adgil, adgil, abt, act, act, abt}   // sort each string
= {abt, abt, act, act, adgil, adgil}   // sort the array
= traverse to know the anagrams.

3. Use a trie to store the sorted strings. Each traversal of a sorted string will point to its anagrams, i.e. once gilad => adgil is added to the trie, the addition of glaid => adgil will point to its earlier existence and hence its anagram.

4. Use a HashMap keyed on the sorted string and you would easily find the anagrams.
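Since the suggestions above are language-agnostic, here is a quick sketch of the key idea behind approach 2 (sort each string's letters; anagrams share the same key), written as a shell snippet rather than C#; the `key` function name is my own:

```shell
#!/bin/sh
# key: a word's letters in sorted order; two words are anagrams
# exactly when they produce the same key.
key() { printf '%s' "$1" | fold -w1 | sort | tr -d '\n'; }

key gilad; echo    # adgil
key glaid; echo    # adgil (same key => anagram of gilad)
key batel; echo    # abelt
```

Grouping a word list by this key (e.g. with a hash map, as in approach 4) then yields all anagram classes in one pass.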
_webmaster.85637
I am experimenting with Google Cloud. I was trying to test a VM instance and network load balancing. Here's what I did:

1. Created an Instance Template => Instance Group => 1 VM instance (say VM1) under the group.
2. Uploaded all my files to VM1. I can access all the files using VM1's IP address.
3. Set up load balancing (say LB) on the instance group.
4. Sent huge traffic to LB and it created another VM instance (say VM2) to handle the load.
5. After the traffic was brought down, LB detected that and decided to delete one VM instance.
6. It deleted VM1 and now I have only VM2, with none of the files originally uploaded to VM1.

Am I getting the whole point of cloud computing wrong? How can I keep accessing all the files on the server even when instances are deleted automatically by LB?
Google Cloud Instance Deletes Files When used with Network Load Balancer
virtualhost;cloud;storage
null
_webapps.79222
I'm trying to create a calculation based on the Yes/No toggle field where Yes would add a cost to a Calculated total at the bottom. I understand in general how to do calculations with checkboxes, but what is the code to use for the calculation to identify the toggle field in a section?
Calculations in Cognito Forms based on Yes/No checkbox
cognito forms
null
_unix.335354
We are very low on entropy (HTTPS Apache server, Ubuntu 12.04, not a VM; HW: a Lenovo ThinkCentre M58):

root@server:~# cat /proc/sys/kernel/random/entropy_avail
417
root@server:~#

Question: could this affect the performance of our HTTPS server? Or OpenSSH?
Can having low entropy cause an HTTPS server to be slower?
random;https
null
_unix.12157
I am using Ubuntu 9.10 - the Karmic Koala - released in October 2009. I downloaded mysql-workbench-gpl-5.2.33b-1ubu1004-i386.deb from here. But when I run this package it shows me the following error:

Error: Dependency is not satisfiable: libatk1.0-0 (>= 1.29.3)

I am new to Ubuntu. I have tried many packages but have been unable to install MySQL Workbench. How can I install it on my Ubuntu? Thanks
Install MySQL Workbench on Ubuntu
ubuntu;install;package management;mysql
The simple answer is to either upgrade libatk or find an older version of MySQL Workbench. Sometimes there are newer packages in the backports repository, but I checked karmic-backports in Ubuntu's package repository and a newer version has not been backported. Ubuntu 10.04+ does have a newer version of libatk that's compatible with MySQL Workbench. You could try installing on 9.10, ignoring dependencies, and hope it works anyway:

dpkg --ignore-depends=libatk1.0 -i mysql-workbench-gpl-5.2.33b-1ubu1004-i386.deb

The only other option is to download the source from 10.04 Maverick and recompile it on Karmic. Get the source from http://packages.ubuntu.com/source/maverick/atk1.0. You will need the .dsc, .diff.gz, and .tar.gz files from that page.

wget 'http://archive.ubuntu.com/ubuntu/pool/main/a/atk1.0/atk1.0_1.32.0-0ubuntu1.dsc'
wget 'http://archive.ubuntu.com/ubuntu/pool/main/a/atk1.0/atk1.0_1.32.0.orig.tar.gz'
wget 'http://archive.ubuntu.com/ubuntu/pool/main/a/atk1.0/atk1.0_1.32.0-0ubuntu1.diff.gz'
sudo apt-get install dpkg-dev
dpkg-source -x atk1.0_1.32.0-0ubuntu1.dsc
cd atk1.0-1.32.0
dpkg-buildpackage

Once it's done building, you will have a number of .deb files in the parent directory. You will need to install the new packages over what's already installed.
_unix.163909
I just installed Cygwin a few days ago. Now when I want to explore it, I can't run KDE-Openbox. I haven't tried anything else, except compiling MPlayer, which did work.

Last time I had Cygwin installed, I posted a question here about not being able to run programs in it; I deleted the question after getting no answers. I thought I had figured it out: Norton 360 was finding viruses on every install and is set to remove them automatically. I learned that this is common, and the fix was to disable Norton antivirus protection while unpacking the archives, then enable it again and scan after the install. I thought this would then be the reason for Cygwin never working, but maybe that's not the case. I installed all Cygwin packages, and Norton didn't remove any packages while installing.

I started KDE-Openbox from the Windows start menu. I'm assuming KDE-Openbox starts its own X server. I've tried both with the X server already running and without.
KDE-Openbox closes immediately after it opens on Cygwin
windows;kde;openbox;cygwin
null
_cstheory.8335
Background: The motivation for this question is two-fold. First, I would like to get some hard facts to better understand the ongoing conferences vs. journals debate. Second, if this information was somewhere available, I could make a more informed decision when submitting papers for review; I would be happy to favour journals whose editors do a good job at selecting and shepherding referees.

Question: Are there any TCS journals that have consistently fast reviewing?

The rules:

- I am not looking for any anecdotal evidence; I would like to see hard facts, such as "according to our statistics, during the last 3 years, 98% of our submissions were reviewed in at most 4 months."
- Only time from the initial submission to the first decision counts. I do not care how long it takes to actually print a physical journal; it is beyond our control anyway.
- A journal that is just a series of conference proceedings does not count. We all know that conference reviewing is quick.
- Open-access on-line journals are perfectly fine.
- (And if a journal is so questionable that you would be embarrassed if your name was somehow associated with it, let's skip it entirely.)
Journals with quick reviewing
soft question;research practice;paper review;journals
null
_webmaster.92306
I have around 20 websites on 20 sub-domains; all are using the same Google Analytics tracking code for tracking hits. One of the sites, i.e. one sub-domain, is getting lots of hits, and I would like to know more about that sub-domain and that site only, e.g. the bounce rate of that site, avg. page-views, avg. time spent on that sub-domain. How can I find out something like that? I have already added a filter in Google Analytics to show sub-domains too in the reports, e.g.:

Filter Name: Add whole domain
Filter Type: Advanced
Field A -> Extract A: Hostname (.*)
Field B -> Extract B: Request URI (.*)
Output To -> Constructor: $A1$B1
How can I find out which subdomain is bringing the majority of traffic in Google Analytics
tracking;subdomain;google analytics
null
_unix.101252
I will get a value like 2743410360.320 and I want a value like 2743410360 in a variable. I tried

INTValueOfGB=$(echo "($gb+0.5)/1" | bc)

But I am getting (standard_in) 1: syntax error
How to round or convert a float value to int with bc? getting: (standard_in) 1: syntax error
shell script;variable;conversion;bc;floating point
null
_unix.77615
I'm trying to upgrade from Mint 13 to Mint 15, but receive the following error:

Calculating upgrade... Failed
The following packages have unmet dependencies:
 mate-media : Depends: mate-media-gstreamer but it is not going to be installed or
              mate-media-pulse but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

It seems that mate-media-gstreamer and mate-media-pulse conflict with each other. Could someone help with this?
Apt-get dist-upgrade fails
linux mint;apt;upgrade;mate
null
_softwareengineering.355968
Imagine you have a Vehicle entity in your domain model. The Vehicle entity has a Reserve method that puts the vehicle in a reserved state and does other stuff. But the Reserve method has to do some checking first to ensure that the reservation can be done. This checking is done by a legacy stored procedure that has to be called as part of the reservation process. The stored procedure call is encapsulated in a repository method.

The question: Should I pass the repository as a parameter of a domain entity method if the method interacts with data storage? Are there any drawbacks to such a solution? Are there alternatives?

The sample:

class VehicleRepository : IVehicleRepository
{
    public bool IsReserveAvailable()
    {
        // call stored proc here
    }
}

class Vehicle
{
    public void Reserve(IVehicleRepository vehicleRepository)
    {
        bool isReserveAvailable = vehicleRepository.IsReserveAvailable();
        if (isReserveAvailable)
        {
            // do stuff ...
        }
    }
}
DDD: should entity method use repository for stored procedures (not CRUD)?
domain driven design;repository;stored procedures
The advantage of having the method on the Vehicle class is that it matches your business language: "I want to reserve the vehicle, please!"

The disadvantage is that whenever you have a vehicle, you also have to have the dependent service around in case you want to reserve it.

If you can refactor the check out of the sproc so that it is only a logic operation on the members of Vehicle, great. But if you can't, because the operation relies on information outside of the Vehicle, such as knowledge of all other vehicles, then you might want to consider a VehicleReservationService class which you pass the Vehicle object to. After all, you are really only hiding the existence of this service with Vehicle.Reserve(IDependency repo). Sometimes the business language is wrong and needs to change.

Clarification: No, you should not pass the repository. Either move the code out of the db if it can be made a pure function of Vehicle, or create a VehicleReservationService to deal with reservations.

Sample:

public class Vehicle
{
    public bool Reserve()
    {
        if (this.x && this.y)
        {
            this.Status = "reserved";
            return true;
        }
        return false;
    }
}

public class VehicleReservationService
{
    public VehicleReservationService(IRepository repo)
    {
        this.repo = repo;
    }

    public bool Reserve(Vehicle v)
    {
        return repo.Reserve(v.Id, v.OtherParametersOfSproc);
    }
}
_unix.308794
I'm using Ansible for adding zones to my firewall on a CentOS machine. I didn't realize until it was (almost) too late that I'm not getting the IN_Internal interface working; it's all going to public, which is the default defined by firewalld.conf. This is my internal.xml:

<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Internal</short>
  <description>For use on internal networks. You mostly trust the other computers on the networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <interface name="eth0"/>
  <service name="ipp-client"/>
  <service name="mdns"/>
  <service name="dhcpv6-client"/>
  <service name="ssh"/>
</zone>

It doesn't seem like it's getting used at all, because, for whatever reason, I wind up with:

Chain IN_internal (0 references)
 2  120 IN_public  all  --  eth0  *  0.0.0.0/0  0.0.0.0/0  [goto]

I can't see why that's happening. When I run firewall-cmd --zone=internal --change-interface=eth0 it works (even after I reload the firewall), but it's exactly the same XML. Since I'm deploying my settings with Ansible, and not running firewall-cmd on the machine, I'd like to know what firewall-cmd is doing behind the scenes so that I can push out those configs.
Where is the zone actually configured for an interface in firewalld?
firewalld;ansible
null
_codereview.5363
My implementation:

Array.prototype.binarySearchFast = function(search) {
    var size = this.length,
        high = size - 1,
        low = 0;
    while (high > low) {
        if (this[low] === search) return low;
        else if (this[high] === search) return high;
        target = (((search - this[low]) / (this[high] - this[low])) * (high - low)) >>> 0;
        if (this[target] === search) return target;
        else if (search > this[target]) low = target + 1, high--;
        else high = target - 1, low++;
    }
    return -1;
};

Normal implementation:

Array.prototype.binarySearch = function(find) {
    var low = 0,
        high = this.length - 1,
        i, comparison;
    while (low <= high) {
        i = Math.floor((low + high) / 2);
        if (this[i] < find) { low = i + 1; continue; };
        if (this[i] > find) { high = i - 1; continue; };
        return i;
    }
    return null;
};

The difference being my implementation makes a guess at the index of the value based on the values at the start and end positions instead of just going straight to the middle value each time. I wondered if anyone could think of any case scenarios where this would be slower than the original implementation.

UPDATE: Sorry for the bad examples. I have now made them a little easier to understand and have set up some tests on jsPerf. See here: http://jsperf.com/binary-search-2

I'm seeing about 75% improvement by using my method.
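For readers skimming the interpolation guess, here it is in isolation as a standalone function (my own extraction, not part of the posted method; note the `+ low` offset, included here so the guess always lands inside the current [low, high] window):

```javascript
// Interpolation guess: estimate where `search` should sit between
// arr[low] and arr[high], assuming roughly uniformly spaced values.
function guessIndex(arr, low, high, search) {
  var frac = (search - arr[low]) / (arr[high] - arr[low]);
  return low + Math.floor(frac * (high - low));
}

var a = [0, 10, 20, 30, 40];
console.log(guessIndex(a, 0, 4, 20)); // 2 (exact, since the values are evenly spaced)
```

On evenly spaced data the guess is exact, which is where this approach beats the plain midpoint; on skewed data the guess can be far off, which is the kind of case scenario the question asks about.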
Efficient Binary Search
javascript;algorithm;search;binary search
null
_unix.98831
I'm trying to use a bash script to process a webserver log file and replace any IPs it finds with their corresponding DNS hostnames. An example entry of a single line from the log file is:

<12>1 2013-11-04T15:04:05+00:00 networkname kernel - - - kernel: [161030.740000] ACCEPT IN=br0 OUT= MAC=00:11:22:33:44:11:00:11:11:11:11:11:11:11 SRC=192.168.1.6 DST=192.168.1.1 LEN=71 TOS=0x00 PREC=0x00 TTL=64 ID=30324 DF PROTO=UDP SPT=43729 DPT=53 LEN=51

(I have changed all private details in the above line for example purposes.) So above, the two fields SRC=192.168.1.6 and DST=192.168.1.1 contain IP addresses that I need to convert into DNS hostnames (I understand they are just internal addresses; this is just an example). This is what I have come up with so far for my script:

#!/bin/bash

logFile=$1

while read line
do
    for word in $line
    do
        # if word is ip address change to hostname
        if [[ $word =~ 'DST='^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]
        then
            # check if ip address is correct
            ip=($word) | cut -d'=' -f 2
            echo -n `nslookup $word | grep Name | cut -d' ' -f 8`
            echo -n " "
        # else print word
        else
            echo -n $word
            echo -n " "
        fi
    done
    # new line
    echo ""
done < $logFile

The part that is throwing me is interpreting the DST= and SRC= fields as an IP address. I'm not really sure of the syntax to strip this prefix off prior to DNS processing, then add it back on following DNS processing, or if there is a better way. I did search the forums in advance and found the following article:

resolve all ip addresses in command output using standard command line tools

However, it didn't seem to work, potentially given the format of my log files.
Converting Webserver Logged IP Addresses to DNS
bash;networking;dns;ip;bash script
null
_codereview.121065
I have a pseudosite, where I actually post small functions so that I can re-use them*, but some posts have visitors. The top one is Quicksort (C++). I feel that beginners visit it, so I do not care about improving its (poor) performance, but only the readability, so that a beginner can catch things more easily.

#include <iostream>

void quickSort(int a[], int first, int last);
int pivot(int a[], int first, int last);
void swap(int& a, int& b);
void swapNoTemp(int& a, int& b);
void print(int array[], const int& N);

using namespace std;

int main()
{
    int test[] = { 7, -13, 1, 3, 10, 5, 2, 4 };
    int N = sizeof(test)/sizeof(int);

    cout << "Size of test array : " << N << endl;
    cout << "Before sorting : " << endl;
    print(test, N);

    quickSort(test, 0, N-1);

    cout << endl << endl << "After sorting : " << endl;
    print(test, N);

    return 0;
}

/**
 * Quicksort.
 * @param a - The array to be sorted.
 * @param first - The start of the sequence to be sorted.
 * @param last - The end of the sequence to be sorted.
*/
void quickSort( int a[], int first, int last )
{
    int pivotElement;

    if(first < last)
    {
        pivotElement = pivot(a, first, last);
        quickSort(a, first, pivotElement-1);
        quickSort(a, pivotElement+1, last);
    }
}

/**
 * Find and return the index of pivot element.
 * @param a - The array.
 * @param first - The start of the sequence.
 * @param last - The end of the sequence.
 * @return - the pivot element
*/
int pivot(int a[], int first, int last)
{
    int p = first;
    int pivotElement = a[first];

    for(int i = first+1 ; i <= last ; i++)
    {
        /* If you want to sort the list in the other order, change "<=" to ">" */
        if(a[i] <= pivotElement)
        {
            p++;
            swap(a[i], a[p]);
        }
    }

    swap(a[p], a[first]);

    return p;
}

/**
 * Swap the parameters.
 * @param a - The first parameter.
 * @param b - The second parameter.
*/
void swap(int& a, int& b)
{
    int temp = a;
    a = b;
    b = temp;
}

/**
 * Swap the parameters without a temp variable.
 * Warning! Prone to overflow/underflow.
 * @param a - The first parameter.
 * @param b - The second parameter.
*/
void swapNoTemp(int& a, int& b)
{
    a -= b;
    b += a;      // b gets the original value of a
    a = (b - a); // a gets the original value of b
}

/**
 * Print an array.
 * @param a - The array.
 * @param N - The size of the array.
*/
void print(int a[], const int& N)
{
    for(int i = 0 ; i < N ; i++)
        cout << "array[" << i << "] = " << a[i] << endl;
}

As you see, I use many things from C, since I wanted the code to be able to work in a C program with minor modifications, thus I am adding the c tag too.

*since I find it easier to find them there than in my FS, and it helps in restoring harmony after a nuke.
Quicksort for a pseudosite
c++;c;sorting;quick sort
Only small/minor issues:

Since the code is to be portable to C with minor mods, suggest demarcating what is sort code from what is test code. It appears the sort code is only these 3:

void quickSort( int a[], int first, int last )
int pivot(int a[], int first, int last)
void swap(int& a, int& b)

Do not mix test code with the application code. Better in separate files.

pivot(), swap() should be static functions. No need for them outside this file.

Minor: int vs. size_t. An int index may lack sufficient range to index an array. Use size_t, as that is the Goldilocks type, neither too narrow nor too wide a type to handle all array sizes - it is just right. As size_t is some unsigned type, watch out for attempting to create negative values. I did not notice any issue for your code concerning that.

// int N = sizeof(test)/sizeof(int);
size_t N = sizeof(test)/sizeof(int);

// void quickSort( int a[], int first, int last )
void quickSort( int a[], size_t first, size_t last )

// int pivot(int a[], int first, int last)
size_t pivot(int a[], size_t first, size_t last)

The sort would work well with other types besides int. Perhaps code that way.

typedef int sort_type;
void quickSort(sort_type a[], size_t first, size_t last )

Function signature: Rather than oblige the calling code to supply 0, n-1, create a top-level call:

void quickSortTop(sort_type a[], size_t size) {
  if (size) quickSort(a, 0, size-1);
}

Minor: Keep local variables as local as able. pivotElement could be declared and initialized in 1 step, local to the if() block, as it is not used outside the block. This borders on style issues, but as a rule, limiting variable scope makes it easier to see its use and impact.

int pivotElement; // here
if(first < last) {
    int pivotElement = pivot(a, first, last); // or here
    quickSort(a, first, pivotElement-1);
    quickSort(a, pivotElement+1, last);
}

Minor: Correct comment? I would think that to reverse the order, ">=" would be needed. If this is not so, then I would expect "<" to work slightly faster than "<=".

/* If you want to sort the list in the other order, change "<=" to ">" */
/* If you want to sort the list in the other order, change "<=" to ">=" */

Minor: Dead code swapNoTemp(). No explanation for its existence here.
_codereview.30029
I have a piece of code that has a bit of a problem. Not that it doesn't work, though. The problem I am having is figuring out the optimal way of working with the MemoryStream, which I find quite difficult to do in this case.

while (checkBox1.Checked && tt1.Connected)
{
    tcpstream.Read(lenArray, 0, 4);
    read = false;
    Int32 length = BitConverter.ToInt32(lenArray, 0);
    var tempBytes = new byte[length];
    texturestream = new MemoryStream(tempBytes);
    int currentPosition = 0;
    while (currentPosition < length && checkBox1.Checked)
    {
        currentPosition += tcpstream.Read(tempBytes, currentPosition, length - currentPosition);
    }
    AutoReset.Set();
}

As you can see, I'm making the texturestream over and over again (which contains tempBytes). From my perspective, I can't understand why I need to even have a byte array. Isn't it possible to just write to the MemoryStream immediately? And just have the MemoryStream made outside the loop with a using, and then reuse the same thing over and over (as they are dynamic with their size)? I have tried making this possible, but it doesn't work. I was thinking something like this:

tcpstream.Read(texturestream.GetBuffer(), ..., ...)

I thought that would make it read to the MemoryStream instead of the byte array, which then goes into the MemoryStream. But it didn't work, so I am at a loss at what can be done with these loops. Any ideas?

EDIT: So if you check the first code, it's simply a while loop: it reads 4 bytes, then gets the int data from that (BitConverter). This is the length of the TCP stream, which is why I remake it every loop, as I send the new length etc. Then I take the tcpstream and set it into a byte array, then write that array to the MemoryStream. This isn't what I would like to do, however; I don't really need a byte array in the first place. I however probably need a MemoryStream, though not totally sure; I may be able to work without one.
It depends, as I use 2 threads that work in parallel, so one is reading while the other receives and fills the MemoryStream. So if I use only the TCP stream, I think it would take a longer time, as the total operation would have to halt to complete it. If I use a MemoryStream, the tcpstream can continue to work, by getting the length and writing to a byte array etc., until it writes to the MemoryStream (when it reaches that, the other thread will already have read the data from it). So here is the reading thread:

SharpDX.Windows.RenderLoop.Run(form, () =>
{
    AutoReset.WaitOne();
    if (read == true)
    {
        sprite.Begin(SharpDX.Direct3D9.SpriteFlags.None);
        device.BeginScene();
        texturestream.Position = 0;
        try
        {
            using (tx = SharpDX.Direct3D9.Texture.FromStream(device, texturestream, -2, -2, 1, SharpDX.Direct3D9.Usage.None, SharpDX.Direct3D9.Format.X8R8G8B8, SharpDX.Direct3D9.Pool.Default, SharpDX.Direct3D9.Filter.None, SharpDX.Direct3D9.Filter.None, 0))
            {
                sprite.Draw(tx, new ColorBGRA(0xffffffff));
                if (tx.GetLevelDescription(0).Width != form.Width || tx.GetLevelDescription(0).Height != form.Height)
                {
                    wid = tx.GetLevelDescription(0).Width;
                    heg = tx.GetLevelDescription(0).Height;
                    if (heg * wid > 40000)
                    {
                        form.Height = heg;
                        form.Width = wid;
                        presentParams.BackBufferHeight = heg;
                        presentParams.BackBufferWidth = wid;
                        device.Reset(presentParams);
                    }
                    Console.WriteLine("Width: " + tx.GetLevelDescription(0).Width + " Height: " + tx.GetLevelDescription(0).Height);
                }
            }
        }
        catch (Exception ex)
        {
            //if (ex is SharpDX.SharpDXException)
            MessageBox.Show(ex.Message, "Rendering");
        }
        sprite.End();
        device.EndScene();
        device.Present();
    }
});

Sorry for the mess. But here I have a rendering loop, and while not ideal, it does the work currently. So, it only runs after the receiving thread has told it to (right after it has written to the MemoryStream).
And I also have a bool just for safety. And then it does its device things, and finally reads the texture from the MemoryStream. And then draws it. So that's basically what the MemoryStream is used for: 1 thread writes, 1 thread reads, pretty much like that. Hope this is the information needed.

EDIT 2: So here it is currently, with the MemoryStream outside the loop, using Write, and resetting the position right before the write. So the MemoryStream will only be as big as the biggest data written at one time.

using (texturestream = new MemoryStream())
{
    while (checkBox1.Checked && tt1.Connected)
    {
        tt1.GetStream().Read(lenArray, 0, 4);
        read = false;
        var length = BitConverter.ToInt32(lenArray, 0);
        var tempBytes = new byte[length];
        int currentPosition = 0;
        while (currentPosition < length && checkBox1.Checked && tt1.Connected)
        {
            currentPosition += tt1.GetStream().Read(tempBytes, currentPosition, (int)length - currentPosition);
        }
        texturestream.Position = 0;
        texturestream.Write(tempBytes, 0, (int)length);
        read = true;
        AutoReset.Set();
    }
}

But I would like to prevent having to use a byte array as a middle hand; can't I write to the MemoryStream directly? And, about this:

while (currentPosition < length && checkBox1.Checked && tt1.Connected)
{
    currentPosition += tt1.GetStream().Read(tempBytes, currentPosition, (int)length - currentPosition);
}

if I don't have (int)length - currentPosition, it won't work. I have tried using this:

tt1.GetStream().Read(tempBytes, 0, (int)length);

And only that, without a while loop. And that doesn't work; I think it can work one time or so.
Not sure why it doesn't work though, as it should work in my eyes. I mean, I tell it to read the length of the data, so I see no reason for it to fail. (The fail is System.OverflowException, Arithmetic Overflow.)

EDIT 3: Missed the first parts that you wrote. I would gladly hear how you would embed the size, please tell. And CopyTo is probably as you say; I think I tried it before sometime. But I don't get why I have to allocate a byte array: if I have a MemoryStream, I should be able to simply stream the TCPStream into the MemoryStream (at least I think so). I tried:

tt1.GetStream().CopyTo(texturestream);

And that didn't work, as I can't tell when it's supposed to stop (it reads on forever).

EDIT 3: Okay, as you said, the stream goes on forever; I always send new data, and always read new data. So this is why I embed the size in the first 4 bytes. If you know a better way of doing this, please tell. And for the MemoryStream, am I supposed to use Write? As I guess that's the only way to write to the MemoryStream without remaking it. Sadly, from my tests, this:

while (currentPosition < length && checkBox1.Checked && tt1.Connected)
{
    currentPosition += tt1.GetStream().Read(tempBytes, currentPosition, length - currentPosition);
}
texturestream.Position = 0;
texturestream.Write(tempBytes, 0, length);

which has the using texturestream outside of the loop, is slower than:

texturestream = new MemoryStream(tempBytes);
while (currentPosition < length && checkBox1.Checked && tt1.Connected)
{
    currentPosition += tt1.GetStream().Read(tempBytes, currentPosition, length - currentPosition);
}

Though here I can also have the using outside, so it will dispose of it after the while loop. But I think it's faster because tempBytes is linked to the MemoryStream, so writing to tempBytes will write to the MemoryStream. But I don't understand this:

"Finally - if your application logic allows - then don't use using(textureStream = new MemoryStream())."

Why shouldn't I use using? Using does exactly the same thing you mention: it disposes when I am done, which is when the loop ends.
Can this be improved by working directly with the MemoryStream?
c#
Edit - rewrote my entire answer.

First off, I kind of misunderstood the purpose of lenArray. I sort of get it now, although I don't fully understand why your app is designed like that. I would perhaps use other means of getting Length instead of embedding that value into the stream, but that is beside the point.

Perhaps the true answer you are looking for then is: each stream has a CopyTo() method. So you can perhaps use that. This way you avoid having to create a separate byte array. Although I think what you end up winning is simply code clarity (fewer lines of code), because the CopyTo method has to internally do the exact same thing you are doing now: allocate memory (a byte array) for the MemoryStream and write bytes into it. The documentation isn't very clear about it, though, in that I can't be certain if it will copy the entire stream (with your 4-byte embedded length that you don't need to represent your texture) or, as an MSDN comment suggests, only copy from the current stream position, in which case you could simply seek over the embedded length if you can't remove it to begin with. So you need to test that out yourself.

Other things worth mentioning:

You can avoid having to constantly reallocate the MemoryStream in the loop. Move the declaration outside the loop; at the start of the loop call the .Seek(0, SeekOrigin.Begin) method. That way you can rewrite the internal MemoryStream data with the new data and don't incur the performance penalty of allocating and garbage collecting new objects. If your stream length changes based on the texture, then also call the .SetLength() method to make your MemoryStream the correct size. For all of this to work your app logic must be sound, in that one thread doesn't read the MemoryStream while another is rewriting it, etc.

Finally, looking at your original code example.
You are wasting CPU cycles (unless the compiler is smart enough to optimize this) here: currentPosition += tcpstream.Read(tempBytes, currentPosition, length - currentPosition); Before this line you set currentPosition = 0;, which means you are effectively saying length - 0; that is a pointless subtraction.

Edit 2

With length - currentPosition I stand corrected; I overlooked the while loop, sorry about that.

"I tried: tt1.GetStream().CopyTo(texturestream); And that didn't work as I can't tell when it's supposed to stop (it reads on forever)."

It will continue on forever if your tcpStream never stops/closes. I'm thinking that is the case: that you are continually streaming textures with different sizes, and it would be the only reason I can currently think of why you would embed the length in the stream (otherwise you could simply access stream.Length and use that)...

"From my perspective, I can't understand why I need to even have a byte array. Isn't it possible to just write to the MemoryStream immediately?" "But, I would like to prevent having to use a byte array as a middleman; can't I write to the MemoryStream directly?"

In that case you don't quite understand how streams work. In order to work with a stream (Read or Write) you need finite things. A byte array is a fixed, finite thing. A stream in and of itself doesn't convey a finite construct; it's kind of an abstract concept. I'm struggling to think of a good analogy here... Suffice it to say that there doesn't exist a write API/method for a stream that accepts another stream (apart from CopyTo). Thus your only option is to read into a byte array from the source stream and then write that byte array's content to the destination stream. End of story.

Finally - if your application logic allows - then don't use using(textureStream = new MemoryStream()). You are (if I understand correctly what your app is doing) constantly allocating in a loop new, relatively short-lived MemoryStream objects that will have to be garbage collected.
Simply write MemoryStream textureStream = new MemoryStream(); outside your loop. Seek to the beginning at each loop iteration, then rewrite the old information with new data from tcpStream (if necessary also call textureStream.SetLength()). Once your application closes or you stop showing your constant stream of textures, call textureStream.Dispose(). This way you reuse your object and don't apply unnecessary memory pressure or waste CPU cycles.

Edit 3

"Okay, as you said, the stream goes on forever; I always send new data and always read new data. So this is why I embed the size in the first 4 bytes. If you know a better way of doing this, please tell."

Most likely that is the best way to do it, unless you consider opening a different tcpStream for each texture.

"And for the MemoryStream, am I supposed to use Write? As I guess that's the only way to write to the MemoryStream without remaking it."

Apart from CopyTo and WriteByte, yes, Write is the only construct to write to a MemoryStream. There are other means to get the same end result, if they apply, which is discussed next.

"Which has the using for texturestream outside of the loop, is slower than:"

Well, the two pieces of code behave very differently. In your example where you have the MemoryStream outside the loop, this is happening: tcpStream references memory X; tempBytes references memory Y; new MemoryStream() references memory Z. You are now doing 2 memory copies: one from X -> Y (tcpStream.Read(tempBytes...)), and then Y -> Z (textureStream.Write(tempBytes...)). Now consider your other example with textureStream = new MemoryStream(tempBytes); inside the loop. This statement effectively means that the new MemoryStream references the existing memory Y, not Z. So you end up doing only one direct memory copy, X -> Y.
As you read from tcpStream into tempBytes, the textureStream will already contain that information because it is based on tempBytes. That is why it is faster.

"But I think it's faster because tempBytes is linked to the MemoryStream, so writing to tempBytes will write to the MemoryStream."

Just noticed this; so yes, you came to the same conclusion I was trying to make.

"Why shouldn't I use using? using does exactly the same thing you mention: it disposes when I am done, which is when the loop ends."

What is it with me and while loops today? Again I completely overlooked that you don't leave the outer while loop, which at first I thought you did, thus disposing the textureStream; then as you re-entered you would have re-created it... Pardon me, ignore my previous statement about this subject.

Optimization

SharpDX.Direct3D9.Texture has a method called FromMemory(), which takes a byte array as one of its arguments. It would seem that that method is more appropriate in your scenario, because that way you can completely avoid using textureStream. Simply pass in tempBytes and the job is done (I assume).

Race condition

You use read as a thread synchronization mechanism, but you are not using any locking. That can lead to the following race condition (which might not affect your app, or maybe it needs to work like that). After this code has taken place: texturestream.Write(tempBytes, 0, (int)length); read = true; AutoReset.Set(); that same thread can go back to the beginning of the loop and set read = false; before the other thread has a chance to continue. Thus you will miss an entire texture, because calling AutoReset.Set(); doesn't mean that the other thread will start execution immediately. Or worse yet, the second thread can start, evaluate if (read == true) and go in, then pause. Now the first thread continues, reads half the bytes from tcpStream and writes them into textureStream, then pauses. As the second thread now continues, it will get a garbled textureStream.
A System.Collections.Concurrent.ConcurrentQueue<T> might be a better fit in this case. You put stuff in from the tcpStream and the other thread checks if anything is queued, if so then displays that stuff.
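To illustrate the queue suggestion: a thread-safe FIFO removes the need for the read flag and the AutoResetEvent entirely, because only complete frames are handed over, so the race cannot occur. A minimal sketch in Python (queue.Queue is the closest stdlib analog to ConcurrentQueue<T>; the frame contents are made up):

```python
import queue
import threading

frames = queue.Queue()   # thread-safe FIFO: no shared flag, no reset event
shown = []

def display_loop():
    # Blocks until a frame is available; frames can't be skipped or torn.
    while True:
        frame = frames.get()
        if frame is None:          # sentinel meaning "producer finished"
            break
        shown.append(frame)

t = threading.Thread(target=display_loop)
t.start()
for i in range(3):
    frames.put(b"texture-%d" % i)  # producer enqueues complete frames only
frames.put(None)
t.join()
assert shown == [b"texture-0", b"texture-1", b"texture-2"]
```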
_cstheory.17427
I need a simple augmentation to support median/order-statistic queries in O(log log n) time, without increasing the time for the other operations.
Give a simple way to augment a van Emde Boas tree to find/delete the median in O(log log u) time
ds.data structures
null
_codereview.68418
I code Haskell as a hobbyist. I'm interested in feedback on my naive implementation of Conway's Game of Life. Specifically, as stated in the Quick Tour of the website, I am interested in:Best practices and design pattern usageCorrectness in unanticipated casesAdmitting the naivete of the implementation, I'm not so much interested in security issues or performance, unless my implementation is just a totally unwise implementation. That is, if I might, for example, run three iterations of a 3x3 Blinker and kill the CPU. No matter how pretty I think my code, that's just stupid.First I list the test specifications. We don't currently use TDD at work, so I'm a bit inexperienced in TDD.module Life_Spec whereimport Test.Hspecimport Life-- Any live cell with fewer than two live neighbors dies.-- Any live cell with two or three live neighbors lives.-- Any live cell with more than three live neighbors dies.-- Any dead cell with exactly three live neighbors becomes live.main :: IO ()main = hspec $ do describe Life $ do it Returns a dead cell for a live cell with fewer than two live neighbors. $ generation (Alive, 1) `shouldBe` Dead it Returns a live cell for a live cell with two or three live neighbors. $ generation (Alive, 2) `shouldBe` Alive it Returns a live cell for a live cell with two or three live neighbors. $ generation (Alive, 3) `shouldBe` Alive it Returns a dead cell for a live cell with more than three live neighbors. $ generation (Alive, 4) `shouldBe` Dead it Returns a live cell for a dead cell with more exactly three live neighbors. $ generation (Dead, 3) `shouldBe` Alive it Returns the indices of a cell's neighbors for a 3x3 grid. $ neighbors 0 [] [Dead, Alive, Dead] [Dead, Alive, Dead] `shouldBe` [Alive, Dead, Alive] it Returns an empty Grid when given an empty Grid. $ gridGeneration (Grid []) `shouldBe` (Grid []) it Successfully processes the 3x3 blinker grid. 
$ gridGeneration (Grid [[Dead, Alive, Dead], [Dead, Alive, Dead], [Dead, Alive, Dead]]) `shouldBe` (Grid [[Dead , Dead , Dead ], [Alive, Alive, Alive], [Dead , Dead , Dead ]]) it Successfully processes the 5x5 blinker grid. $ gridGeneration (Grid [[Dead, Dead, Dead , Dead, Dead], [Dead, Dead, Alive, Dead, Dead], [Dead, Dead, Alive, Dead, Dead], [Dead, Dead, Alive, Dead, Dead], [Dead, Dead, Dead , Dead, Dead]]) `shouldBe` (Grid [[Dead, Dead , Dead , Dead , Dead], [Dead, Dead , Dead , Dead , Dead], [Dead, Alive, Alive, Alive, Dead], [Dead, Dead , Dead , Dead , Dead], [Dead, Dead , Dead , Dead , Dead]])Next I list my implementation:module Life whereimport Data.Maybe (catMaybes)data State = Dead | Alive deriving (Eq, Show)newtype Grid = Grid [[State]] deriving (Eq, Show)generation :: (State, Int) -> Stategeneration (Alive, 2) = Alivegeneration (_ , 3) = Alivegeneration (_ , _) = Dead-- Surely this can be done more cleanly...neighbors :: Int -> [State] -> [State] -> [State] -> [State]neighbors x rowAbove rowOfX rowBelow = let (w,y) = (x-1,x+1) cs = [w, x, y, w, y, w, x, y] rs = replicate 3 rowAbove ++ replicate 2 rowOfX ++ replicate 3 rowBelow in catMaybes . map maybeCell $ zip rs cs where maybeCell (r,c) | c < 0 = Nothing | c >= length r = Nothing | otherwise = Just (r!!c)gridGeneration :: Grid -> GridgridGeneration (Grid []) = Grid []gridGeneration (Grid rs@(row0:row1:row2:rows)) = Grid $ g ([]:rs) where g (r0:r1:r2:rs) = [processRow r0 r1 r2] ++ g (r1:r2:rs) g (r0:r1:[]) = [processRow r0 r1 []] g _ = [] processRow r0 r1 r2 = reverse $ foldl p [] [0..(length r1 - 1)] where p a n = (generation (r1!!n, live $ neighbors n r0 r1 r2)) : a live = length . filter (==Alive)gridGeneration _ = undefined
(Yet Another) Conway's Game of Life in Haskell (Naive)
haskell;unit testing;game of life
Idiomatic code

HLint gave some hints, catMaybes . map maybeCell <=> mapMaybe maybeCell and a few superfluous brackets, but nothing big. HLint doesn't catch some other unnecessarily convoluted formulations, though: reverse $ foldl p [] someList with p a n = f n : a. In Haskell, foldl is usually less efficient than foldr due to laziness. (In some cases you want to use the strict version foldl', but almost never foldl.) foldr p [] someList with p n a = f n : a. And this is completely equivalent to: map p someList with p n = f n.

Another example: using a tuple instead of just two values as arguments to generation. This is not wrong, just unidiomatic, and doesn't serve any purpose (as far as I can see). generation :: (State, Int) -> State; why not use generation :: State -> Int -> State?

Formatting

You can use formatting to emphasize structure: cols = [w, x, y, w, y, w, x, y]. Use clear names.

Testing

Your tests look good; you might want to add cases for smaller grids and non-square ones as well, though. You might also consider whether QuickCheck could be helpful, but I don't see any obvious properties, apart from that all functions should preserve length, and that is already (mostly) covered by the existing tests. Thanks for supplying tests; it helped me verify that my suggestions didn't break anything. :) Edit: According to the Hspec documentation you are supposed to use one describe per function, not one per module as you did.

Edge cases

As I said, you crash on grids with one or two rows, but your code actually supports them already (they are handled by g). Just remove the arbitrary constraint of needing at least three rows in the second case of gridGeneration (and remove the third case): gridGeneration (Grid rs) = Grid $ g ([]:rs) where ... The function g also handles empty grids correctly, so you can remove the first case as well.

Efficiency

I know you weren't that interested in efficiency, but here it is anyway. Your code currently traverses the whole row for every single cell.
Example:neighbors 0 [] [] $ repeat [Dead]Will never terminate, despite not really needing anything other than the first valueA good rule of thumb is that if you are indexing over lists, you're probably doing something wrong. And if you are iterating over all indexes of a list, you're definitely doing something wrong.Either use a different data structure or a different method. In the latter case, you could probably just use a fold or a map.In your case it is not as simple as that, but you can reuse (read factor out as a function) the pattern you used for traversing the neighboring rows, but for the columns. Misc pointsYou can loosen up the types of some functions, without changing any other code, like neighbors:neighbors :: Int -> [a] -> [a] -> [a] -> [a]This may or may not be better, but it's good to know, so you can factor out more generic functions and just keep the specific ones with the code.My versionWithout changing anything but formatting, names, adding type signatures and replacing equivalent constructs:module Life whereimport Data.Maybe (mapMaybe)data State = Dead | Alive deriving (Eq, Show)newtype Grid = Grid [[State]] deriving (Eq, Show)gridGeneration :: Grid -> GridgridGeneration (Grid rs) = Grid $ g ([]:rs) where g (r0:r1:r2:rs) = processRow r0 r1 r2 : g (r1:r2:rs) g [r0,r1] = [processRow r0 r1 []] g _ = []processRow :: [State] -> [State] -> [State] -> [State]processRow r0 r1 r2 = map updateCell [0..(length r1 - 1)] where updateCell n = generation (r1 !! n) (live $ neighbors n r0 r1 r2) live = length . filter (==Alive)neighbors :: Int -> [a] -> [a] -> [a] -> [a]neighbors x rowAbove rowOfX rowBelow = mapMaybe maybeCell $ zip rows cols where (w,y) = (x-1,x+1) cols = [w, x, y, -- Use formatting to indicate structure w, y, w, x, y] rows = replicate 3 rowAbove ++ replicate 2 rowOfX ++ replicate 3 rowBelow maybeCell :: ([a], Int) -> Maybe a maybeCell (r, idx) | idx < 0 = Nothing | idx >= length r = Nothing | otherwise = Just (r !! 
idx)generation :: State -> Int -> Stategeneration Alive 2 = Alivegeneration _ 3 = Alivegeneration _ _ = DeadUpdate 1:With new algorithmI also remade the code with using the same method for traversing both rows and columns. I factored out the function g as map3 and added the padding first, to simplify things. Then I used zip3 to encapsulate the three lists, so I could map3 over them again.module Life whereimport Data.List (zip3)data State = Dead | Alive deriving (Eq, Show)newtype Grid = Grid [[State]] deriving (Eq, Show)gridGeneration :: Grid -> GridgridGeneration (Grid rs) = Grid $ map3 processRow (withEmptyRows . map withEmptyCols $ rs) where emptyRow = repeat Dead withEmptyCols xs = Dead : xs ++ [Dead] withEmptyRows xss = emptyRow : xss ++ [emptyRow]-- Map a function over each triplet of neighbouring valuesmap3 :: (a -> a -> a -> b) -> [a] -> [b]map3 f (x0:x1:x2:xs) = f x0 x1 x2 : map3 f (x1:x2:xs)map3 f _ = []processRow :: [State] -> [State] -> [State] -> [State]processRow r0 r1 r2 = map updateCell . map3 neighbors $ rows where rows :: [(State,State,State)] rows = zip3 r0 r1 r2 updateCell (cell, neighs) = generation cell (live neighs) live = length . filter (==Alive)neighbors :: (a,a,a) -> (a,a,a) -> (a,a,a) -> (a,[a])neighbors (x1,x2,x3) (x4,x5,x6) (x7,x8,x9) = (x5, [x1,x2,x3, x4, x6, x7,x8,x9])generation :: State -> Int -> Stategeneration Alive 2 = Alivegeneration _ 3 = Alivegeneration _ _ = DeadUpdate 2:Here is a Hspec for map3, using QuickCheck: describe Life.map3 $ do it Decreases the length of the list by two, but not to a negative length $ property $ \xs -> length (map3 f xs) === 0 `max` (length xs - 2) where f () () () = ()The fact that the type is completely generic (map3 :: (a->a->a->b)->[a]->[b]) means that it cannot do anything with the values, so we don't have to test for that.
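The map3 helper above is just a sliding window of width three. For comparison, the same pattern in Python, where zip stopping at the shortest argument plays the role of the Haskell base cases (this is an illustration, not a translation of the full program):

```python
def map3(f, xs):
    # Apply f to every triple of neighbouring elements.
    return [f(a, b, c) for a, b, c in zip(xs, xs[1:], xs[2:])]

# Shrinks the list by two, matching the Haskell version's behaviour.
assert map3(lambda a, b, c: a + b + c, [1, 2, 3, 4, 5]) == [6, 9, 12]
assert map3(lambda a, b, c: b, [1, 2]) == []  # too short: empty result
```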
_unix.331581
I have computer A, on which I run the xclock program. I want to forward its graphical interface to my PC, not directly but through computer B. Is that possible? I'm running Xming on my PC. In summary: A runs xclock > A forwards to B > B forwards to my PC. Is it possible?
Forward xclock graphical interface
solaris;xming
null
_unix.268165
I want to be able to log out inactive sessions on my web server. I have done this like so:

13. Restrict idle users. Timeout after a certain pre-defined amount of time.
a. In the directory /etc/profile.d:
i. Create a file called autologout.sh and add the following lines: TMOUT=300 readonly TMOUT export TMOUT. This sets the autologout settings for the bash shell.
ii. Create a file called autologout.csh and add the following line: set -r autologout 5
iii. Add execute privileges to both files with: sudo chmod +x /etc/profile.d/autologout.*

I notice that the above lines log the user out only from the currently active account and do not terminate the session completely. E.g., if I have sudo-ed to root, I am logged out of root and returned to my user account. Can I log the user out completely? If so, how do I do it?
Inactivity-based auto logout from all sessions
bash;logout
Many years ago, I used to use a program called timeoutd to do exactly this. It seems to have vanished from debian since I last used it (or maybe it was never in debian and I compiled it myself - I can't remember, I last used it in the mid-1990s).Anyway, I found a copy of it at:https://launchpad.net/ubuntu/+source/timeoutdIt is configurable with an /etc/timeouts file. You can find the man page in the package, with the source, or at http://manpages.ubuntu.com/manpages/gutsy/man8/timeoutd.8.html
_webmaster.108304
I would like to remove the outdated cached copy of my web page from Google's search results. What would be the next step?
Remove outdated cache of my webpage
google
null
_unix.34836
I used to send files in Unix to a printer with lp, and used -ofp16.16 or -ofp12 to change the size of the fonts. This does not work on Linux; what should I use instead?
How do I change the font size when using lp on Linux?
linux;printing;fonts
null
_webapps.52725
Let's say we have some data in a spreadsheet like:

A (Price) | B (Quantity) | C (Genre)
20 | 2500 | Car
10 | 1000 | Car
10 | 2500 | Bike

I can filter Cars with Quantity 2500 with FILTER(A:A;B:B=2500;C:C=Car) - easy. But what if there is another column, for postage, like:

A (Price) | B (Quantity) | C (Genre) | D (Postage)
20 | 2500 | Car | 0
10 | 1000 | Car | 6
10 | 2500 | Bike | 10

I want to filter to find everything whose total price (postage + price) is 20 and of which I have 2500. I tried various combinations but can't seem to do it. Should I create a sum column first?
Filter by the sum of two cells in Google Spreadsheet
google spreadsheets
null
_unix.18919
I'm new to VCS and I decided to give Mercurial a try. I signed up for Bitbucket and created some repositories. Then I created /home/max/hgrepo/ and ran hg clone http://bitbucket.org/[username]/[repository]. That made a [repository] directory. I copied some source files into this directory, then ran hg add, hg commit and hg push. Then I wanted to move my source files into a src directory instead of having them in the root dir. So I moved all the source files down one directory into my src directory. I then ran hg add *, then hg commit and lastly hg push. The problem I have is that the old source files in the root directory of my repo are still there. How do I remove them? I don't have them in my local repo anymore. Is there a way to fully sync my local repository and my remote one?
How to fully sync local repository using Mercurial (bitbucket)
repository;version control;mercurial
You can use the addremove command to mark missing files (those prefaced with a !) as removed.See the excellent Mercurial: the Definitive Guide chapter on tracking files.For future reference, there is a command to move files, hg mv.
_unix.87200
I have a symlink with these permissions:lrwxrwxrwx 1 myuser myuser 38 Aug 18 00:36 npm -> ../lib/node_modules/npm/bin/npm-cli.js*The symlink is located in a .tar.gz archive. Now when I unpack the tar.gz archive using maven the symlink is no longer valid. I'm therefore trying to reconstruct the symlink. First I create the symlink using ln but how do I set the same permissions as the original symlink?
Change permissions for a symbolic link
symlink;chmod
You can make a new symlink and move it to the location of the old link.ln -s <new_location> npm2mv -f npm2 npmThat will preserve the link ownership. Alternatively, you can use chown to set the link's ownership manually.chown -h myuser:myuser npmOn most systems, symlink permissions don't matter. When using the symlink, the permissions of the components of symlink's target will be checked.
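The make-a-new-link-then-rename trick can also be scripted. A sketch in Python (paths are illustrative; os.replace performs the same atomic swap as mv -f, so the old link is replaced rather than modified):

```python
import os
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, "npm")
os.symlink("old-target", link)          # stand-in for the existing link

# Build the replacement under a temporary name, then rename over the old link.
tmp = link + ".tmp"
os.symlink("../lib/node_modules/npm/bin/npm-cli.js", tmp)
os.replace(tmp, link)                   # atomic swap, like `mv -f npm2 npm`

assert os.readlink(link) == "../lib/node_modules/npm/bin/npm-cli.js"
```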
_webapps.103423
I'm new to Chef and I'm trying to figure out how its pricing works. I'd like to launch my Chef server using (for example) an EC2 instance. But after a couple of hours of reading the docs I found: "...When using more than 25 nodes, a configuration change to your Chef server needs to be made in order for your Chef server to be properly configured and recognize your purchased licenses. You will need to edit your chef-server.rb file..." I don't understand what this means. So I run a Chef server on my instance, which costs something, and I also have to pay for using Chef when my network reaches more than 25 instances? I also read the pricing page and... it's quite expensive. That's okay; the software is brilliant in my opinion, but I also want to understand what options and limitations it has.
Chef Server and additional charges
amazon ec2
null
_codereview.157478
The idea of this class is that several threads are sending data over a network and each thread are sharing the same instance of this class and before sending N bytes over the network each thread is calling ThrottledWait(n).My worry is that each thread might run on different core and get different value for DateTime.UtcNow.Ticks. I am not 100% sure it's thread-safe.Also calling Thread.Sleep(ts) might sleep for longer than asked for and might cause traffic to not be smooth because of aliasing so we might want to do a thread.sleep() for less than the calculated amount and waste the remaining time checking DateTime.UtcNow.Ticks in a busy loop.public class Throttler{ // Use this constant as average rate to disable throttling public const long NoLimit = -1; // Number of consumed tokens private long _consumedTokens; // timestamp of last refill time private long _lastRefillTime; // ticks per period private long _periodTicks; private double _averageRate; public long BurstSize { get; set; } public long AverageRate { get { return (long)_averageRate; } set { _averageRate = value; } } public TimeSpan Period { get { return new TimeSpan(_periodTicks); } set { _periodTicks = value.Ticks; } } public Throttler() { BurstSize = 1; AverageRate = NoLimit; Period = TimeSpan.FromSeconds(1); } /// <summary> /// Create a Throttler /// ex: To throttle to 1024 byte per seconds with burst of 200 byte use /// new Throttler(1024,TimeSpan.FromSeconds(1), 200); /// </summary> /// <param name=averageRate>The number of tokens to add to the bucket every interval. 
</param> /// <param name=period>Timespan of on interval.</param> /// <param name=burstSize></param> public Throttler(long averageRate, TimeSpan period, long burstSize = 1) { BurstSize = burstSize; AverageRate = averageRate; Period = period; } public bool TryThrottledWait(long amount) { if (BurstSize <= 0 || _averageRate <= 0) { // Instead of throwing exception, we just let all the traffic go return true; } RefillToken(); return ConsumeToken(amount); } private bool ConsumeToken(long amount) { while (true) { long currentLevel = System.Threading.Volatile.Read(ref _consumedTokens); if (currentLevel + amount > BurstSize) { return false; // not enough space for amount token } if (Interlocked.CompareExchange(ref _consumedTokens, currentLevel + amount, currentLevel) == currentLevel) { return true; } } } public void ThrottledWait(long amount) { while (true) { if (TryThrottledWait(amount)) { break; } long refillTime = System.Threading.Volatile.Read(ref _lastRefillTime); long nextRefillTime = (long) (refillTime + (_periodTicks / _averageRate)); long currentTimeTicks = DateTime.UtcNow.Ticks; long sleepTicks = Math.Max(nextRefillTime - currentTimeTicks, 0); TimeSpan ts = new TimeSpan(sleepTicks); Thread.Sleep(ts); } } /// <summary> /// Compute elapsed time using DateTime.UtcNow.Ticks and refil token using _periodTicks and _averageRate /// </summary> private void RefillToken() { long currentTimeTicks = DateTime.UtcNow.Ticks; // Last refill time in ticks unit long refillTime = System.Threading.Volatile.Read(ref _lastRefillTime); // Time delta in ticks unit long TicksDelta = currentTimeTicks - refillTime; long newTokens = (long)(TicksDelta * _averageRate / _periodTicks); if (newTokens > 0) { long newRefillTime = refillTime == 0 ? 
currentTimeTicks : refillTime + (long)(newTokens * _periodTicks / _averageRate); if (Interlocked.CompareExchange(ref _lastRefillTime, newRefillTime, refillTime) == refillTime) { // Loop until we succeed in refilling newTokens tokens while (true) { long currentLevel = System.Threading.Volatile.Read(ref _consumedTokens); long adjustedLevel = (long)Math.Min(currentLevel, BurstSize); // In case burstSize decreased long newLevel = (long) Math.Max(0, adjustedLevel - newTokens); if (Interlocked.CompareExchange(ref _consumedTokens, newLevel, currentLevel) == currentLevel) { return; } } } } }}To throttle to 1024 byte per seconds with burst of 200 byte we would dovar throttler = new Throttler(1024,TimeSpan.FromSeconds(1), 200);Then each time we need to send some bytevoid Sendbytes(byte[] byteArray) { throttler.ThrottledWait(byteArray.Length); ... // write the bytes}
Throttling class
c#;thread safety
I updated the code to solve some of the issue. For reference I am including the latest version below:public class Throttler{ // Use this constant as average rate to disable throttling public const long NoLimit = -1; // Number of consumed tokens private long _consumedTokens; // timestamp of last refill time private long _lastRefillTime; // ticks per period private long _periodTicks; private double _averageRate; public long BurstSize { get; set; } public long AverageRate { get { return (long)_averageRate; } set { _averageRate = value; } } public TimeSpan Period { get { return new TimeSpan(_periodTicks); } set { _periodTicks = value.Ticks; } } public Throttler() { BurstSize = 1; AverageRate = NoLimit; Period = TimeSpan.FromSeconds(1); } /// <summary> /// Create a Throttler /// ex: To throttle to 1024 byte per seconds with burst of 200 byte use /// new Throttler(1024,TimeSpan.FromSeconds(1), 200); /// </summary> /// <param name=averageRate>The number of tokens to add to the bucket every interval. </param> /// <param name=period>Timespan of on interval.</param> /// <param name=burstSize></param> public Throttler(long averageRate, TimeSpan period, long burstSize = 1) { BurstSize = burstSize; AverageRate = averageRate; Period = period; } public long TryThrottledWait(long amount) { if (BurstSize <= 0 || _averageRate <= 0) { // Instead of throwing exception, we just let all the traffic go return amount; } RefillToken(); return ConsumeToken(amount); } // Return number of consummed token private long ConsumeToken(long amount) { while (true) { long currentLevel = Volatile.Read(ref _consumedTokens); long available = BurstSize - currentLevel; if (available == 0) { return 0; } long toConsume = amount; if (available < toConsume) { toConsume = available; } if (Interlocked.CompareExchange(ref _consumedTokens, currentLevel + toConsume, currentLevel) == currentLevel) { return toConsume; } } } /// <summary> /// Wait that works inside synchronous methods. 
/// </summary> /// <param name=amount>number of tokens to remove</param> /// <returns>Returns once all Thread.Sleep have occurred</returns> public void ThrottledWait(long amount) { long remaining = amount; while (true) { remaining -= TryThrottledWait(remaining); if (remaining == 0) { break; } TimeSpan ts = GetSleepTime(); Thread.Sleep(ts); } } /// <summary> /// Wait that works inside Async methods. /// </summary> /// <param name=amount>number of tokens to remove</param> /// <returns>Returns once all Task.Delays have occurred</returns> public async Task ThrottledWaitAsync(long amount) { long remaining = amount; while (true) { remaining -= TryThrottledWait(remaining); if (remaining == 0) { break; } TimeSpan ts = GetSleepTime(); await Task.Delay(ts).ConfigureAwait(false); } } /// <summary> /// Compute elapsed time using DateTime.UtcNow.Ticks and refil token using _periodTicks and _averageRate /// </summary> private void RefillToken() { long currentTimeTicks = DateTime.UtcNow.Ticks; // Last refill time in ticks unit long refillTime = Volatile.Read(ref _lastRefillTime); // Time delta in ticks unit long TicksDelta = currentTimeTicks - refillTime; long newTokens = (long)(TicksDelta * _averageRate / _periodTicks); if (newTokens <= 0) { return; } long newRefillTime = refillTime == 0 ? 
currentTimeTicks : refillTime + (long)(newTokens * _periodTicks / _averageRate); // Only try to refill newTokens If no other thread has beaten us to the update _lastRefillTime if (Interlocked.CompareExchange(ref _lastRefillTime, newRefillTime, refillTime) != refillTime) { return; } // Loop until we succeed in refilling newTokens tokens // Its still possible for 2 thread to concurrently run the block below // This is why we need to make sure the refill is atomic while (true) { long currentLevel = Volatile.Read(ref _consumedTokens); long adjustedLevel = Math.Min(currentLevel, BurstSize); // In case burstSize decreased long newLevel = Math.Max(0, adjustedLevel - newTokens); if (Interlocked.CompareExchange(ref _consumedTokens, newLevel, currentLevel) == currentLevel) { return; } } } /// <summary> /// Get time to sleep until data can be sent again /// </summary> /// <returns>Timespan to wait</returns> private TimeSpan GetSleepTime() { long refillTime = Volatile.Read(ref _lastRefillTime); long nextRefillTime = (long)(refillTime + (_periodTicks / _averageRate)); long currentTimeTicks = DateTime.UtcNow.Ticks; long sleepTicks = Math.Max(nextRefillTime - currentTimeTicks, 0); TimeSpan ts = new TimeSpan(sleepTicks); return ts; }}
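The refill arithmetic above can be sanity-checked against a single-threaded model of the same token bucket. A sketch in Python (the names and the explicit now parameter are mine; without threads, no interlocked operations are needed):

```python
class TokenBucket:
    # Single-threaded model: at most burst_size tokens, refilled at rate/period.
    def __init__(self, rate, period, burst_size):
        self.rate = rate          # tokens added per period
        self.period = period      # seconds per period
        self.burst = burst_size
        self.consumed = 0
        self.last_refill = 0.0

    def try_consume(self, amount, now):
        # Refill based on elapsed time, then grant as many tokens as fit.
        new_tokens = int((now - self.last_refill) * self.rate / self.period)
        if new_tokens > 0:
            self.consumed = max(0, min(self.consumed, self.burst) - new_tokens)
            self.last_refill += new_tokens * self.period / self.rate
        granted = min(amount, self.burst - self.consumed)
        self.consumed += granted
        return granted

b = TokenBucket(rate=1024, period=1.0, burst_size=200)
assert b.try_consume(300, now=0.0) == 200   # capped at the burst size
assert b.try_consume(100, now=0.0) == 0     # bucket drained
assert b.try_consume(100, now=0.1) == 100   # ~102 tokens refilled after 0.1 s
```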
_scicomp.27188
I am trying to compute the one-dimensional energy spectra for my channel-flow simulation. I have already written a post-processing script to achieve this; however, I need to validate my code before proceeding. To do so, I am taking the two-point cross-correlation (per given plane) vector from a DNS database. Then I am trying to apply my code and plot against the 1D energy spectra provided by the same database in the same (homogeneous) direction. I am always getting the wrong output, but I cannot figure out what went wrong. I have looked into several reference books and forum posts here and there with no tangible result as of yet.

The steps I am following are: given a two-point cross-correlation vector in the streamwise (i.e. x) and homogeneous direction, I am applying the following (e.g. in Matlab form):

% where Ruu is the correlation vector from the DNS database
N = length(Ruu);
Nk = 2^nextpow2(N);
% Fourier transform data
Bx1 = zeros(Nk, 1);
for k=1:Nk
  for n=1:N
    Bx1(k) = Bx1(k) + (1/N)*Ruu(n)*exp(-2i*pi*(k-1)*(n-1)/N);
  end
end
% wavenumbers initialization
kx = zeros(Nk, 1);
% total distance between correlations used
Lx = (max(x) - min(x));
for n=1:Nk
  % streamwise coordinates to wavenumber
  kx(n) = pi*(n-1)/Lx;
end
% calculate 1D streamwise energy spectra
Eu = Bx1.*conj(Bx1);
% show only first half due to symmetry
loglog(kx(1:end/2), 2*Eu(1:end/2))

The resulting figure, in case you were wondering, is depicted below. The output of this figure is nowhere near what I am looking for. Can someone shed some light on the issue? Thank you!

Reference DNS Data: Moser, Robert D.; Kim, John; Mansour, Nagi N., Direct numerical simulation of turbulent channel flow up to $Re_{\tau} = 590$, Phys. Fluids 11, No. 4, 943-945 (1999). ZBL1147.76463.
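The double loop above is an O(N^2) DFT, so one way to isolate the problem is to verify the transform machinery on a signal whose spectrum is known. A pure-Python sketch (with Nk = N, since the exponent is N-periodic in k, so padding beyond N only repeats values):

```python
import cmath
import math

def dft(x):
    # O(N^2) DFT with the same 1/N normalisation as the Matlab loop.
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n)) / n
            for k in range(n)]

# A pure cosine at wavenumber 3 must put all its energy in bins 3 and N-3.
n = 32
x = [math.cos(2 * math.pi * 3 * m / n) for m in range(n)]
spectrum = [abs(c) ** 2 for c in dft(x)]
assert [k for k, e in enumerate(spectrum) if e > 1e-6] == [3, n - 3]
```

If this check passes, the remaining suspect is the wavenumber axis: under the usual DFT convention, bin k corresponds to the physical wavenumber 2*pi*k/Lx, so the pi*(n-1)/Lx mapping in the script is worth double-checking against the database's convention.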
1D turbulent energy spectra in homogeneous direction (non-isotropic)
numerical analysis;fluid dynamics;computational physics;fourier analysis;statistics
null
_unix.264299
My directory variablePOSTMAP=/work/Documents/Projects/untitled\ folder/untitled\ folder/*/*_tsta.bamMy for statement:for file0 in ${POSTMAP}; do...It seems that the whitespace in 'untitled folder' messes with the globbing. I suspect this because file0 ends up being '/untitled'. Note that I have 'shopt -s extglob'.
Globbing error due to whitespace
bash;shell;quoting;wildcards
It's not really messing up with the globbing. Here, by using $POSTMAP unquoted, you're using the split+glob operator.With the default value of $IFS, on your /work/Documents/Projects/untitled\ folder/untitled\ folder/*/*_tsta.bam, it will first split it into /work/Documents/Projects/untitled\, folder/untitled\ and folder/*/*_tsta.bam. Only the third one contains wildcard characters and thus be subject to the glob part. However, the glob would just search for files in the folder directory relative to the current directory.If you only want the glob part and not the split of that split+glob operator, set $IFS to the empty string. For that operator, backslash can't be used to escape $IFS separators (with bash (and bash only among Bourne-like shells), it can be used to escape wildcard glob operators though).So either:POSTMAP=/work/Documents/Projects/untitled folder/untitled folder/*/*_tsta.bamIFS= # don't splitset +f # do globfor file0 in $POSTMAP # invoke the split+glob operatordo...Or probably better here with shells supporting arrays like bash, yash, zsh, ksh:postmap=( '/work/Documents/Projects/untitled folder/untitled folder/'*/*_tsta.bam) # expand the glob at the time of that array assignmentfor file0 in ${postmap[@]} # loop over the array elementsdo....
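As the answer explains, the glob itself has no problem with whitespace; it is the shell's word splitting that breaks the pattern apart. The distinction is easy to see in an environment that has pathname expansion but no splitting step, e.g. Python's glob module (directory names here are made up to mirror the question):

```python
import glob
import os
import tempfile

d = tempfile.mkdtemp()
os.makedirs(os.path.join(d, "untitled folder", "sub"))
open(os.path.join(d, "untitled folder", "sub", "a_tsta.bam"), "w").close()

# One pattern, spaces and all: no $IFS, so nothing gets split.
matches = glob.glob(os.path.join(d, "untitled folder", "*", "*_tsta.bam"))
assert len(matches) == 1 and matches[0].endswith("a_tsta.bam")
```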