id | question | title | tags | accepted_answer |
---|---|---|---|---|
_softwareengineering.243331 | In my C# program I have to perform 5 steps (tasks) sequentially. Basically, each of the five should execute only if the previous task succeeded. Currently I have done it in the following style, but this is not a very good code style to follow: var isSuccess = false; isSuccess = a.method1(); if (isSuccess) isSuccess = a.method2(); if (isSuccess) isSuccess = a.method3(); if (isSuccess) isSuccess = a.method4(); if (isSuccess) isSuccess = a.method5(); How can I refactor this code? What is the best way I can follow? | calling methods if previous call success | c# | null |
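The row above has no accepted answer, but the pattern it describes (run each step only if the previous one succeeded) maps onto a short-circuiting loop; a minimal Python sketch, with hypothetical step names standing in for a.method1()..a.method5():

```python
def step_one():
    print("step one")
    return True

def step_two():
    print("step two")
    return False  # simulate a failure; later steps are skipped

def step_three():
    print("step three")
    return True

def run_steps(steps):
    """Run callables in order; all() short-circuits at the first failure."""
    return all(step() for step in steps)

ok = run_steps([step_one, step_two, step_three])
print("overall success:", ok)
```

The same shape works in C# with a list of Func&lt;bool&gt; and a foreach that returns early on failure.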
_codereview.102457 | I just wrote an advanced CSV parser that translates CSV to SQL (SQLite-compatible) statements, which are then inserted into the database. It translates variable names in the CSV to values defined in the script.

Example CSV

0,1,An empty black tile,${ASSET_PATH}/BlackTile.png
1,1,Grassy Tile,${ASSET_PATH}/Grass.png
2,1,Lava Tile,${ASSET_PATH}/Lava.png
3,1,Stone Brick Texture,${ASSET_PATH}/StoneBrick.png

The code

# Variables that are used in the CSV files
ASSET_PATH=~/spacegame/assets

# $1 - File to read. Named same as table to insert to plus .csv
# $2 -
function parseCsvAdv2Db {
    local oldIFS=$IFS
    local table=$(echo $1 | cut -d'.' -f 1)
    local ins="INSERT INTO $table$2 VALUES "
    IFS='|'
    while read line
    do
        # Preprocess the line
        local data=$(eval echo $line | \
            awk 'BEGIN { FS=","; OFS="|"; } { $1=$1; print $0; }')
        local tmpdata=\(
        for field in $data
        do
            tmpdata+="'$field'"
        done
        tmpdata+=')'
        ins+=$tmpdata
    done < $1
    ins=$(echo $ins | sed -e 's/)(/),(/g' -e "s/''/','/g")
    ins+=';'
    sqlite3 $dbfile "$ins"
    # Restore state
    IFS=$oldIFS
}

parseCsvAdv2Db test.csv '(id,type,descr,path)'
 | Advanced CSV-to-SQLite converter | parsing;bash;csv;sqlite;awk | First of all, it should be noted that this script would be vulnerable to arbitrary command execution as well as SQL injection. It might be OK if you trust the CSV data not to contain malicious shell commands or characters with special significance in SQL. Several features make this code hard to follow:

- Mixing Bash and AWK. It should either be pure Bash or mostly AWK (with a thin Bash wrapper to invoke AWK with the right parameters). Calling AWK like this, especially with one invocation per line, is both confusing and bad for performance.
- What does the | character have to do with anything? It seems to be a secret delimiter used for Bash-AWK communication. That's bad: will a literal | in the data break the script?
- If you are going to override IFS temporarily, use the set-this-variable-for-one-command syntax.
- Why is there post-processing done using sed?
- $(echo $1 | cut -d'.' -f 1) can be better expressed in Bash using ${1%%.*}.
- A corner case is that the code generates a malformed INSERT statement if the CSV file is empty.

Suggested solution

# $1 - Name of CSV file to read. Table name is inferred from this by
#      dropping the filename extension.
# $2 - Optional (column, names) for the INSERT statement
# $dbfile - SQLite filename
parseCsvAdv2Db() {
    (
        local rec_sep="INSERT INTO ${1%%.*}$2 VALUES"
        while IFS=',' read -r -a fields ; do
            local field_sep=
            echo "$rec_sep ("
            for field in "${fields[@]}" ; do
                echo -n "$field_sep'$(eval echo $field)'"
                field_sep=', '
            done
            echo -n ")"
            rec_sep=,
        done < "$1"
        echo ';'
    ) | sqlite3 "$dbfile"
}

Alternatively, define this function to do just one thing (generate the INSERT statement) and let the caller pipe the result: parseCsvAdv2Db test.csv | sqlite3 $dbfile.

parseCsvAdv2Db() {
    local rec_sep="INSERT INTO ${1%%.*}$2 VALUES"
    while IFS=',' read -r -a fields ; do
        local field_sep=
        echo "$rec_sep ("
        for field in "${fields[@]}" ; do
            echo -n "$field_sep'$(eval echo $field)'"
            field_sep=', '
        done
        echo -n ")"
        rec_sep=,
    done < "$1"
    echo ';'
} |
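The review's headline point, injection through eval and string-built SQL, is easy to illustrate; here is a hedged Python sketch of the same CSV-to-SQLite job using parameterized queries (the file, table, and column names come from the question, the function name is made up, and it assumes the target table already exists):

```python
import csv
import sqlite3

# Hypothetical stand-in for the script's shell variable.
ASSET_PATH = "~/spacegame/assets"

def load_csv(db_path, csv_path, table, columns):
    """Insert CSV rows with parameterized SQL, avoiding SQL injection.

    Identifiers (table/column names) cannot be parameterized, so they
    must still come from a trusted source.
    """
    placeholders = ",".join("?" * len(columns))
    sql = "INSERT INTO {} ({}) VALUES ({})".format(
        table, ",".join(columns), placeholders)
    with sqlite3.connect(db_path) as conn, open(csv_path, newline="") as f:
        for row in csv.reader(f):
            # Expand ${ASSET_PATH} as plain text, never via eval.
            row = [field.replace("${ASSET_PATH}", ASSET_PATH) for field in row]
            conn.execute(sql, row)

load_csv("game.db", "test.csv", "test", ["id", "type", "descr", "path"])
```

Because the field values travel as bound parameters, a value like `'); DROP TABLE test; --` is stored verbatim instead of being executed.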
_unix.272004 | I need to write a script that figures out if a reboot has occurred after an RPM has been installed. It is pretty easy to get the epoch time for when the RPM was installed: rpm -q --queryformat "%{INSTALLTIME}\n" glibc | head -1, which produces output that looks like this: 1423807455. This cross-checks with rpm -q --info.

# date -d@`rpm -q --queryformat "%{INSTALLTIME}\n" glibc | head -1`
Fri Feb 13 01:04:15 EST 2015
# sudo rpm -q --info glibc | grep "Install Date" | head -1
Install Date: Fri 13 Feb 2015 01:04:15 AM EST      Build Host: x86-022.build.eng.bos.redhat.com

But I am getting stumped trying to figure out how to get the epoch time from uptime or from cat /proc/uptime. I do not understand the output of cat /proc/uptime, which on my system looks like this: 19496864.99 18606757.86. Why are there two values? Which should I use, and why do these numbers have a decimal in them?

UPDATE: thanks techraf, here is the script that I will use ...

#!/bin/sh
now=`date +'%s'`
rpm_install_date_epoch=`rpm -q --queryformat "%{INSTALLTIME}\n" glibc | head -1`
installed_seconds_ago=`expr $now - $rpm_install_date_epoch`
uptime_epoch=`cat /proc/uptime | cut -f1 -d'.'`
if [ $installed_seconds_ago -gt $uptime_epoch ]
then
    echo "no need to reboot"
else
    echo "need to reboot"
fi

I'd appreciate any feedback on the script. Thanks | How do I get the time when the system booted up in epoch format? | linux;bash;shell script;uptime | As manuals (and even Wikipedia) point out: /proc/uptime shows how long the system has been on since it was last restarted. The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds. On multi-core systems (and some Linux versions) the second number is the sum of the idle time accumulated by each CPU. The decimal point separates seconds from fractions of a second. To calculate the point in time the system booted up using this metric, you would have to subtract the number of seconds the system has been up (the first number) from the current time in epoch format, rounding up the fraction. |
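For comparison, the same reboot check can be done without shelling out for the uptime part; a small Python sketch based on the answer's description of /proc/uptime (it assumes an RPM-based system with glibc installed):

```python
import subprocess
import time

def boot_epoch():
    """Boot time in epoch seconds: now minus the first /proc/uptime field."""
    with open("/proc/uptime") as f:
        uptime_seconds = float(f.read().split()[0])
    return time.time() - uptime_seconds

def rpm_install_epoch(package):
    """Install time of a package, using the query format from the question."""
    out = subprocess.check_output(
        ["rpm", "-q", "--queryformat", "%{INSTALLTIME}\n", package])
    return int(out.decode().splitlines()[0])

# Installed after the last boot means a reboot is still pending.
if rpm_install_epoch("glibc") > boot_epoch():
    print("need to reboot")
else:
    print("no need to reboot")
```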
_softwareengineering.111306 | Rather than asking a general question about WebForms vs MVC (such as in ASP.NET v/s ASP.NET MVC), I have a specific question. It appears the main differences between the two approaches are:

- WebForms is event-driven and uses pre-built components
- MVC has a built-in layer that WebForms does not: the Model
- MVC has the Controllers in a separate folder from the Views, while the WebForms Controller is the CodeBehind

One could easily add a folder to a WebForms project called Model that stores all the business logic used in the CodeBehind (decoupling the two). The main argument against WebForms is that it's easy to write business logic in the CodeBehind. But you could easily add business logic to your MVC controller, completely violating the separation of concerns. Now for the question: Isn't it the case that you could write a WebForms project in such a way that the business logic is separate from the Controller/CodeBehind, which would give all the benefits of MVC (separation of concerns) while keeping the benefits of WebForms: rich controls and event-driven programming (if you like events)? | Obtaining the best of both worlds: MVC and WebForms | asp.net;mvc;business logic;webforms | What you're describing is the MVP pattern. There is a framework called WebFormsMVP specifically designed to facilitate this. I'm not sure many will agree that it's the best of both worlds, but it is a considerably more testable way to use WebForms than putting all your code in the codebehind class. However, putting a service layer in between your forms and your data model also achieves this, much more simply. |
_softwareengineering.258475 | We are planning to use Git pull requests for code review in our company. Before we start I have a basic question: How often should I open a pull request? Is it best to open one for every little commit I create? Or should I open a single pull request for a larger quantum of work, such as all the commits in, say, a user story? What is the right size?What do you do in your team? | How often to open pull requests | git;code reviews | open a single pull request for a larger quantum of work, such as [...] a user story?That's what you should do. Two reasons:There is a mantra that one should commit early and often. Once you get used to it, you will recognize that this is a good habit. The side effect of it is, that you will produce a larger number of commits, which you may or may not want to squash later into fewer ones. That's perfectly ok as long as you didn't publish the stuff in any way.Once you're finished with your work and want it to be merged, you open a PR. Keep in mind that the people who are acting upon your PR are not interested in every single minorish step of the development. They want a completed feature as a whole, because they will have to review it.One major point about DVCS is exactly that: There is no need to publish every tiny step, but you still have the benefits of a repository. Once the branch is merged into the main development branch and/or repository you are contributing to via PRs, typically only the end result is of interest. |
_unix.12307 | Is there a piece of Linux software that does what GraphClick does in Mac OS X? That is, is there Linux software for graph digitizing that can automatically retrieve the original (x,y) data from the image of a scanned graph? | Linux equivalent of GraphClick? | data recovery;image editor;ocr;graph | You can use g3data in conjunction with Gnuplot. |
_unix.127981 | I installed Erlang on Amazon EC2, on FreeBSD 10, with:

fetch http://www.erlang.org/download/otp_src_17.0.tar.gz
gunzip -c otp_src_17.0.tar.gz | tar xf -
cd otp_src_17.0
./configure --disable-hipe
gmake
gmake install

and I get this error:

configure: error: Perl is required to generate v2 to v1 mib converter script
configure: error: /bin/sh '/usr/home/ec2-user/otp_src_17.0/lib/snmp/./configure' failed for snmp/.
configure: error: /bin/sh '/usr/home/ec2-user/otp_src_17.0/lib/configure' failed for lib

How can I install Erlang on FreeBSD 10? | Erlang install on Freebsd 10 on Amazon ec2 | freebsd | null |
_codereview.52424 | This morning, being in urgent need of an effective subject code solution, I have had to write a quick & dirty custom one:using System;using System.Collections.Generic;using System.Linq;using System.Reflection;namespace MyTestConsole{ /// <summary> /// Handles System.Windows.Forms.WebBrowser DocumentCompleted event handlers /// </summary> /// <remarks> /// /// Part of code borrowed from /// http://stackoverflow.com/questions/3783267/how-to-get-a-delegate-object-from-an-eventinfo /// by /// http://stackoverflow.com/users/259769/enigmativity /// /// Needs refactoring, any is very welcome. /// /// </remarks> public class WebBrowserDocumentCompletedEventHandlersKeeper { private const string EVENT_NAME = DocumentCompleted; private System.Windows.Forms.WebBrowser _webBrowser; public WebBrowserDocumentCompletedEventHandlersKeeper(System.Windows.Forms.WebBrowser webBrowser) { _webBrowser = webBrowser; } public static EventInfo GetEventInfo(Type controlType, string targetEventName) { foreach (var eventInfo in controlType.GetEvents()) { if (string.Compare(eventInfo.Name, targetEventName, true) == 0) return eventInfo; } return null; } public void AddEventHandler(System.Windows.Forms.WebBrowserDocumentCompletedEventHandler handler) { _webBrowser.DocumentCompleted += handler; } public void AddEventHandlers(params System.Windows.Forms.WebBrowserDocumentCompletedEventHandler[] handlers) { handlers.ToList().ForEach(handler => AddEventHandler(handler)); } public void RemoveEventHandler(System.Windows.Forms.WebBrowserDocumentCompletedEventHandler handler) { int countBefore = this.Count; if (countBefore <= 0) throw new InvalidOperationException(WebBrowser instance doesn't have any attached DocumentCompleted event handlers.); _webBrowser.DocumentCompleted -= handler; if (countBefore == this.Count) throw new ArgumentException(String.Format('{0}' is missing in the list of WebBrowser instance's attached DocumentCompleted event handlers, handler.Method.Name)); } public void RemoveEventHandlers(params System.Windows.Forms.WebBrowserDocumentCompletedEventHandler[] handlers) { handlers.ToList().ForEach(handler => RemoveEventHandler(handler)); } public void RemoveAllEventHandlers() { if (this.Count <= 0) return; var eventInfo = GetEventInfo(typeof(System.Windows.Forms.WebBrowser), EVENT_NAME); Func<EventInfo, FieldInfo> ei2fi = ei => _webBrowser.GetType().GetField(eventInfo.Name, BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField); var fieldInfo = ei2fi(eventInfo); var eventHandler = fieldInfo.GetValue(_webBrowser); var removeMethodInfo = eventInfo.GetRemoveMethod(); removeMethodInfo.Invoke(_webBrowser, new object[] { eventHandler }); } public IEnumerable<MethodInfo> EnumerateAddedHandlers() { var eventInfo = GetEventInfo(typeof(System.Windows.Forms.WebBrowser), EVENT_NAME); Func<EventInfo, FieldInfo> ei2fi = ei => _webBrowser.GetType().GetField(eventInfo.Name, BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField); return from eventInfo1 in new EventInfo[] { GetEventInfo(typeof(System.Windows.Forms.WebBrowser), EVENT_NAME) } let eventFieldInfo = ei2fi(eventInfo1) let eventFieldValue = (System.Delegate)eventFieldInfo.GetValue(_webBrowser) from subscribedDelegate in eventFieldValue.GetInvocationList() select subscribedDelegate.Method; } public int Count { get { try { return EnumerateAddedHandlers().Count(); } catch { return -1; } } } #region Testing instrumentation public void ListHandlers() { System.Console.WriteLine(\n === List Event Handlers: count = {0}, 
this.Count); if (this.Count > 0) { int index = 1; foreach (var h in this.EnumerateAddedHandlers()) System.Console.WriteLine( {0}. {1} in {2}, index++, h.Name, h.ReflectedType.FullName); // .Assembly.GetName().Name); // .FullyQualifiedName); } else System.Console.WriteLine( *** Event handlers are missing.); } public byte RunTest(int testIndex, string testTitle, Action a, int expectedCountTestResult, Type expectedException = null) { System.Console.Write(\n{0}. '{1}': , testIndex, testTitle); try { if (_webBrowser.InvokeRequired) _webBrowser.Invoke(a); else a(); ListHandlers(); } catch (Exception ex) { System.Console.WriteLine(\n Error = '{0}',\n ExpectedException = {1}, ex.Message, expectedException != null && ex.GetType() == expectedException); } System.Console.WriteLine(\n *** Test result = {0} ***, (expectedCountTestResult == this.Count).ToString().ToUpper()); return expectedCountTestResult == this.Count? (byte)1 : (byte)0; } #endregion }}Here are tests - I have written simple custom test runner as part of this code solution just to have as little as possible bindings to any test frameworks: partial class Program { [STAThread] static void Main(string[] args) { try { var k = new WebBrowserDocumentCompletedEventHandlersKeeper(new System.Windows.Forms.WebBrowser()); byte c = 0; c += k.RunTest(1, test initial count, () => System.Console.WriteLine(Count1 = {0}, k.Count), -1); c += k.RunTest(2, test remove not attached handler from empty handlers list, () => k.RemoveEventHandler(docCompleted2), -1, typeof(InvalidOperationException)); c += k.RunTest(3, test add one handler, () => k.AddEventHandler(docCompleted1), 1); c += k.RunTest(4, test remove not attached handler, () => k.RemoveEventHandler(docCompleted2), 1, typeof(ArgumentException)); c += k.RunTest(5, test add two handlers, () => k.AddEventHandlers(docCompleted2, docCompleted3), 3); c += k.RunTest(6, test add already added handler, () => k.AddEventHandler(docCompleted3), 4); c += k.RunTest(7, test add already added handlers, () => k.AddEventHandlers(docCompleted1, docCompleted2, docCompleted3), 7); c += k.RunTest(8, test remove one handler, () => k.RemoveEventHandler(docCompleted3), 6); c += k.RunTest(9, test remove two handlers, () => k.RemoveEventHandlers(docCompleted2, docCompleted3), 4); c += k.RunTest(10, test remove all handlers, () => k.RemoveAllEventHandlers(), -1); c += k.RunTest(11, test remove all handlers when none are attached, () => k.RemoveAllEventHandlers(), -1); System.Console.WriteLine(\n\n*** All tests' overall success count == 11 = > {0:U} ***, (c == 11).ToString().ToUpper()); } catch (Exception ex) { System.Console.WriteLine(Main: Error = '{0}', ex.Message); } } private static void docCompleted1(object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs e) { throw new NotImplementedException(); } private static void docCompleted2(object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs e) { throw new NotImplementedException(); } private static void docCompleted3(object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs e) { throw new NotImplementedException(); } }Here are the test results:1. 'test initial count': Count1 = -1 === List Event Handlers: count = -1 *** Event handlers are missing. *** Test result = TRUE ***2. 'test remove not attached handler from empty handlers list': Error = 'WebBrowser instance doesn't have any attached DocumentCompleted event handlers.', ExpectedException = True *** Test result = TRUE ***3. 
'test add one handler': === List Event Handlers: count = 1 1. docCompleted1 in MyTestConsole.Program *** Test result = TRUE ***4. 'test remove not attached handler': Error = ''docCompleted2' is missing in the list of WebBrowser instance's attached DocumentCompleted event handlers', ExpectedException = True *** Test result = TRUE ***5. 'test add two handlers': === List Event Handlers: count = 3 1. docCompleted1 in MyTestConsole.Program 2. docCompleted2 in MyTestConsole.Program 3. docCompleted3 in MyTestConsole.Program *** Test result = TRUE ***6. 'test add already added handler': === List Event Handlers: count = 4 1. docCompleted1 in MyTestConsole.Program 2. docCompleted2 in MyTestConsole.Program 3. docCompleted3 in MyTestConsole.Program 4. docCompleted3 in MyTestConsole.Program *** Test result = TRUE ***7. 'test add already added handlers': === List Event Handlers: count = 7 1. docCompleted1 in MyTestConsole.Program 2. docCompleted2 in MyTestConsole.Program 3. docCompleted3 in MyTestConsole.Program 4. docCompleted3 in MyTestConsole.Program 5. docCompleted1 in MyTestConsole.Program 6. docCompleted2 in MyTestConsole.Program 7. docCompleted3 in MyTestConsole.Program *** Test result = TRUE ***8. 'test remove one handler': === List Event Handlers: count = 6 1. docCompleted1 in MyTestConsole.Program 2. docCompleted2 in MyTestConsole.Program 3. docCompleted3 in MyTestConsole.Program 4. docCompleted3 in MyTestConsole.Program 5. docCompleted1 in MyTestConsole.Program 6. docCompleted2 in MyTestConsole.Program *** Test result = TRUE ***9. 'test remove two handlers': === List Event Handlers: count = 4 1. docCompleted1 in MyTestConsole.Program 2. docCompleted2 in MyTestConsole.Program 3. docCompleted3 in MyTestConsole.Program 4. docCompleted1 in MyTestConsole.Program *** Test result = TRUE ***10. 'test remove all handlers': === List Event Handlers: count = -1 *** Event handlers are missing. *** Test result = TRUE ***11. 'test remove all handlers when none are attached': === List Event Handlers: count = -1 *** Event handlers are missing. *** Test result = TRUE ****** All tests' overall success count == 11 = > TRUE *** | Keeping WebBrowser control's DocumentCompleted event handlers | c#;winforms;event handling | public static EventInfo GetEventInfo(Type controlType, string targetEventName){ foreach (var eventInfo in controlType.GetEvents()) { if (string.Compare(eventInfo.Name, targetEventName, true) == 0) return eventInfo; } return null;} a little bit linq with with the help of FirstOrDefault() will lead to public static EventInfo GetEventInfo(Type controlType, string targetEventName){ return controlType.GetEvents() .FirstOrDefault(evt => string.Compare(evt.Name, targetEventName, true) == 0);} public void AddEventHandlers(params System.Windows.Forms.WebBrowserDocumentCompletedEventHandler[] handlers){ handlers.ToList().ForEach(handler => AddEventHandler(handler)); } Although this looks short and clear, it is creating unneccesary objects by the call to ToList(). public void AddEventHandlers(params System.Windows.Forms.WebBrowserDocumentCompletedEventHandler[] handlers) { foreach(var handler in handlers) { AddEventHandler(handler); } }This should be applied for RemoveEventHandlers too. 
public IEnumerable<MethodInfo> EnumerateAddedHandlers(){ var eventInfo = GetEventInfo(typeof(System.Windows.Forms.WebBrowser), EVENT_NAME); Func<EventInfo, FieldInfo> ei2fi = ei => _webBrowser.GetType().GetField(eventInfo.Name, BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField); return from eventInfo1 in new EventInfo[] { GetEventInfo(typeof(System.Windows.Forms.WebBrowser), EVENT_NAME) } let eventFieldInfo = ei2fi(eventInfo1) let eventFieldValue = (System.Delegate)eventFieldInfo.GetValue(_webBrowser) from subscribedDelegate in eventFieldValue.GetInvocationList() select subscribedDelegate.Method;} there is no need call GetEventInfo() twice. Just reuse eventInfo. public void RemoveEventHandler(System.Windows.Forms.WebBrowserDocumentCompletedEventHandler handler){ int countBefore = this.Count; if (countBefore <= 0) throw new InvalidOperationException(WebBrowser instance doesn't have any attached DocumentCompleted event handlers.); _webBrowser.DocumentCompleted -= handler; if (countBefore == this.Count) throw new ArgumentException(String.Format('{0}' is missing in the list of WebBrowser instance's attached DocumentCompleted event handlers, handler.Method.Name)); } this can throw an ArgumentException based on the usage of this class and a registering of the DocumentCompleted outside of this class. Assume that from the application somehow a thread comes along registering that event just after if (countBefore <= 0). Then the condition countBefore == this.Count will be true which results in the said exception. I think this whole concept how you are doing this registering of events is somehow sub optimal. I don't really get the sense of doing all of this. If you want to be sure that you only register once to that event, you should simply do a -= before you add the handler. If you need to keep track of the amount of handlers registered to the control, why don't you just have a normal Count property which will be increased and decreased while adding or removing the handler(s). |
_softwareengineering.149792 | Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it.BackgroundRecently, our team just got this big project that is to built from scratch. It is a strategic application with complex business requirements. Of course, I wanted it to be nice and clean, which for me meant: maintainable and testable. So I wanted to use DI.ResistanceThe problem was in our team, DI is taboo. It has been brought up a few times, but the gods do not approve. But that did not discourage me.My MoveThis may sound weird but third-party libraries are usually not approved by our architect team (think: thou shalt not speak of Unity, Ninject, NHibernate, Moq or NUnit, lest I cut your finger). So instead of using an established DI container, I wrote an extremely simple container. It basically wired up all your dependencies on startup, injects any dependencies (constructor/property) and disposed any disposable objects at the end of the web request. It was extremely lightweight and just did what we needed. And then I asked them to review it.The ResponseWell, to make it short. I was met with heavy resistance. The main argument was, We don't need to add this layer of complexity to an already complex project. Also, It's not like we will be plugging in different implementations of components. And We want to keep it simple, if possible just stuff everything into one assembly. DI is an uneeded complexity with no benefit.Finally, My QuestionHow would you handle my situation? I am not good in presenting my ideas, and I would like to know how people would present their argument.Of course, I am assuming that like me, you prefer to use DI. If you don't agree, please do say why so I can see the other side of the coin. It would be really interesting to see the point of view of someone who disagrees.UpdateThank you for everyone's answers. It really puts things into perspective. It's nice enough to have another set of eyes to give you feedback, fifteen is really awesome! This are really great answers and helped me see the issue from different sides, but I can only choose one answer, so I will just pick the top voted one. Thanks everyone for taking the time to answer.I have decided that it is probably not the best time to implement DI, and we are not ready for it. Instead, I will concentrate my efforts on making the design testable and attempt to present automated unit testing. I am aware that writing tests is additional overhead and if ever it is decided that the additional overhead is not worth it, personally I would still see it as a win situation since the design is still testable. And if ever testing or DI is a choice in future, the design can easily handle it. | Dependency injection: How to sell it | dependency injection | Taking a couple of the counter arguments:We want to keep it simple, if possible just stuff everything into one assembly. DI is an uneeded complexity with no benefit.its not like we will be plugging in different implementations of components.What you want is for the system to be testable. To be easily testable you need to be looking at mocking the various layers of the project (database, communications etc.) and in this case you will be plugging in different implementations of components.Sell DI on the testing benefits it gives you. 
If the project is complex then you're going to need good, solid unit tests.Another benefit is that, as you are coding to interfaces, if you come up with a better implementation (faster, less memory hungry, whatever) of one of your components, using DI makes it a lot easier to swap out the old implementation for the new.What I'm saying here is that you need to address the benefits that DI brings rather than arguing for DI for the sake of DI. By getting people to agree to the statement:We need X, Y and ZYou then shift the problem. You need to make sure that DI is the answer to this statement. By doing so you co-workers will own the solution rather than feeling that it's been imposed on them. |
_datascience.14519 | I'm trying to solve a multivariate regression problem similar to PLS regression. The problem can be described as a connectivity analysis problem, where we have two regions with unknown unidirectional connections (many-to-many), and given a set of input region patterns and output region patterns, we want to infer the underlying connections. Mathematically, the problem can be formulated as $Y = BX$, where $Y \in \mathbb{R}_+^{M\times N}$, $X \in \mathbb{R}_+^{L\times N}$, and $B \in \mathbb{R}_+^{M\times L}$ with $L > M \gg N$. The columns of $X$ and $Y$ are vectorized versions of 2D images. Although this results in a highly underdetermined system, I do have some prior knowledge about the patterns in the input/output regions that I can incorporate in the model. Is there a model/idea that I can use in a situation like this? | Multitask multivariate regression? | machine learning;clustering;regression | You may want to model your problem using Bayesian regression, which may allow you to introduce your prior knowledge in the form of priors (a priori distributions) on the model parameters. It would also allow you to model latent variables that govern the dynamics of the interactions (and impose priors on them as well). The specific approach may be based on sampling (e.g. Markov chain Monte Carlo) or optimization (e.g. variational Bayes). One of the most popular Bayesian frameworks is Stan, which has bindings to R (rstan) and Python (pystan). In R there are other alternatives such as BUGS and JAGS. In the Python realm other options are PyMC (which is also pretty popular) or Edward. |
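To make the answer's priors-on-parameters idea concrete: with a zero-mean Gaussian prior on the entries of $B$, the MAP estimate reduces to ridge regression with a closed form. A minimal NumPy sketch with toy sizes follows; note that it ignores the question's non-negativity constraint, which a real model would handle with, e.g., truncated priors or NNLS:

```python
import numpy as np

# Toy dimensions standing in for the problem's L > M >> N.
rng = np.random.default_rng(0)
M, L, N = 4, 6, 3
B_true = rng.random((M, L))
X = rng.random((L, N))
Y = B_true @ X

# MAP estimate of B under a zero-mean Gaussian prior on its entries:
# maximizing the posterior of Y = BX + noise is ridge regression.
# lam encodes prior strength (smaller lam = weaker prior).
lam = 1e-2
B_map = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(L))

print("reconstruction error:", np.linalg.norm(B_map @ X - Y))
```

With $L > M \gg N$ the system stays underdetermined, which is exactly why the prior term is needed to pick one solution out of the many that fit the data.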
_softwareengineering.181482 | We're trying to move our project documentation process from Google Documents to a set of self-hosted Git repositories. Text documents are Git-friendly enough; since we usually do not need any fancy formatting, we'll just convert everything to, say, MultiMarkdown with an option to embed LaTeX for complex cases. But spreadsheets are quite a different story... Is there a spreadsheet(-like) format that is friendly to version control systems (and, preferably, is as human-readable as Markdown)? Friendly format: Git works well with the format (it doesn't with XML) and it generates human-readable diffs (extra configuration involving external tools is OK). Obviously, Markdown flavors allow one to build static tables, but I'd like to be able to use stuff like SUM() etc... (Note that CSV has the same problem.) No WYSIWYG is OK, but decent editor/tool support would be nice. Update: Linux-friendly answers only, please. No MS Office stuff. | Git-friendly spreadsheet format? | version control;documentation;tools;linux | null |
_webmaster.52346 | Let's assume everything else is equal: which domain name might rank higher if I search for the term "how to fix computer"? HowToFixComputers.com or htfc.com? | Do long domain names reduce or increase your PageRank? | seo;pagerank;ranking | null |
_unix.242444 | I have the following contrived script to illustrate my issue:

#!/bin/bash
set -eux
sudo sleep 120 &
spid=$!
sleep 1
sudo kill $spid
wait $!

This will print

$ ./test.sh
+ spid=21931
+ sleep 1
+ sudo sleep 120
+ sudo kill 21931
+ wait 21931

and then hang on 'wait' until the 'sleep 120' times out. However, when I run sudo kill 21931 from another terminal, the sleep process is killed immediately. I expected the 'sudo kill $spid' line in the script to also kill the sleep process immediately. Why doesn't this work, and how do I make this work? (Might be relevant: I see this behaviour with bash 4.3.42 and dash 0.5.7 on Ubuntu 15.10.) | Why does kill not work from script, but does work from terminal? | bash;ubuntu;kill;dash | null |
_cstheory.4126 | I'm reading Simon Peyton Jones's The Implementation of Functional Programming Languages and there's one statement that surprised me a little bit (on page 39):To a much greater extent than is the case for imperative languages, functional languages are largely syntactic variations of one another, with relatively few semantic differences.Now, this was written in 1987 and my thoughts on this subject might be influenced by more modern programming languages that weren't around or popular then. However, I find this a bit hard to believe. For instance, I think that the described Miranda programming language (an early predecessor to Haskell) has much more different semantics compared to a strict language like ML than say C has to Pascal or maybe even C has to smalltalk (although I'll cede that C++ provides some validation of his point :-).But then again, I'm basing this on my intuitive understanding. Is Simon Peyton Jones largely correct in saying this, or is this a controversial point? | How are imperative languages more different from each other than functional languages? | functional programming;semantics;imperative programming | null |
_datascience.20319 | I want to begin exploring OpenCV in Python but I'm stuck at importing the package cv2. I have installed the package through pip3 install opencv-python and it got installed at this location: C:/Users/Kshitiz/AppData/Local/Programs/Python/Python36-32/Lib/site-packages. When I'm trying to import cv2 using this:

import sys
sys.path.append('C:/Users/Kshitiz/AppData/Local/Programs/Python/Python36-32/Lib/site-packages')
import cv2

It gives the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:/Users/Kshitiz/AppData/Local/Programs/Python/Python36-32/Lib/site-packages\cv2\__init__.py", line 7, in <module>
    from . import cv2
ImportError: cannot import name 'cv2'

I have searched a lot but cannot find anything relevant. Please suggest what needs to be done. | Import Error: cannot import name 'cv2' | python;computer vision | null |
_unix.65450 | I have a KVM host based on Ubuntu 10.04, and the guest is RHEL 5.3 64-bit. On the guest I tried to execute mii-tool eth0:

SIOCGMIIREG on eth0 failed: Input/output error
SIOCGMIIREG on eth0 failed: Input/output error
eth0: no autonegotiation, 100baseTx-FD, link ok

and mii-tool -v eth4:

eth0: no autonegotiation, 100baseTx-FD, link ok
  product info: vendor 00:50:43, model 2 rev 0
  basic mode:   software reset, autonegotiation enabled
  basic status: autonegotiation complete, link ok
  capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
  advertising:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
  link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD

and after mii-tool -r eth0, this is the output in /var/log/messages:

Feb 20 13:16:44 xil1 kernel: [ 1289.790780] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX

so it is supposed to work at 1000 Mbps, but it is still working at 100 Mbps. Any suggestion what may be the problem? | kvm guest Network interface no Authenication | ubuntu;networking;kvm | null |
_unix.386882 | I have again and again had this problem: I have a glob that matches exactly the correct files, but causes "Command line too long". Every time, I have converted it to some combination of find and grep that works for the particular situation, but which is not 100% equivalent. For example:

./foo*bar/quux[A-Z]{.bak,}/pic[0-9][0-9][0-9][0-9]?.jpg

Is there a tool for converting globs into find expressions that I am not aware of? Or is there an option for find to match the glob without matching the same glob in a subdir (e.g. foo/*.jpg is not allowed to match bar/foo/*.jpg)? | Convert glob to `find` | find;wildcards | If the problem is that you get an argument-list-is-too-long error, use a loop, or a shell builtin. While "command glob-that-matches-too-much" can error out, "for f in glob-that-matches-too-much" does not, so you can just do:

for f in foo*bar/quux[A-Z]{.bak,}/pic[0-9][0-9][0-9][0-9]?.jpg
do
    something "$f"
done

The loop might be excruciatingly slow, but it should work. Or:

printf '%s\0' foo*bar/quux[A-Z]{.bak,}/pic[0-9][0-9][0-9][0-9]?.jpg | xargs -r0 something

(printf being builtin in most shells, the above works around the limitation of the execve() system call)

$ cat /usr/share/**/* > /dev/null
zsh: argument list too long: cat
$ printf '%s\n' /usr/share/**/* | wc -l
165606

Also works with bash. I'm not sure exactly where this is documented though. Both Vim's glob2regpat() and Python's fnmatch.translate() can convert globs to regexes, but both also use .* for *, matching across /. |
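The answer name-drops Python's fnmatch.translate(); here is a short sketch of using it to apply an over-long glob while sidestepping the matching-across-/ caveat by testing basenames only:

```python
import fnmatch
import os
import re

# fnmatch.translate() turns a shell glob into a regex string. Its '*'
# would match across '/', so we match against basenames only.
pattern = fnmatch.translate("pic[0-9][0-9][0-9][0-9]?.jpg")
rx = re.compile(pattern)

for dirpath, dirnames, filenames in os.walk("."):
    for name in filenames:
        if rx.match(name):  # anchored at both ends by translate()
            print(os.path.join(dirpath, name))
```

Since the expansion happens inside the script rather than on a command line, the argument-list length limit never comes into play.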
_cs.37992 | I am working with random number generation and testing, so I'm using NIST statistical tests to examine my random numbers. Now I want to compare my solution with other RNGs, but i can't find any probabilities of passing a NIST statistical test for them. So does anyone have that info? Info about both PRNGs and TRNGs would be appreciated. | Approximate probabilities of passing a NIST statistical test | reference request;random number generator | null |
_webapps.22601 | Recently when posting links I've found Twitter has no longer offered to automatically shorten URLs; they used to be shortened to a max of 20 characters just when I pasted them into the tweet box. Now they take as many characters as the full URL. | Why doesn't Twitter's URL shortening always work? | twitter;url shortening | null |
_unix.272817 | I am trying to make a tar file. I have 2 folders that I need to tar. Let me clarify my question: folder 1 is /temp1 and folder 2 is /temp2. Now I want my tar output to be such that when I untar it I get /temp1/* (the contents of temp1) and /temp1/temp2/* (temp2 and its subdirectories inside temp1). Right now I am copying temp2 into temp1 and then tarring it. Can anyone suggest a way to do this where I don't have to copy the stuff? As it stands, if I kill the process in between, I will be left with some of temp2's contents inside temp1. | putting files in subdirectory while making tar | tar | null |
_codereview.114250 | The source data represents interaction between people. Some are internal, others are external.The internals are recorded in the Users table (represented as the Users CTE in this demonstration).Each Entries record is identified by an ID (ItemID), the time of the interaction (Sequence) and who performed the interaction (UserID).The goal is to have a single line per ItemID with the following columns:ItemID - self explanatoryFirstSequence - Sequence value of first interaction (remember in reality this is a time-stamp)firstInternal - UserID of first user that IsInternalFirstInternalSequence - Sequence of firstInternalCountPerFirstInternal - A count of all interactions by firstInternal userCountAllInternal - A count of all interaction with any IsInternal userCountAll - Count of all interactions for ItemIDLastSequence - The last interaction for ItemID - allows to measure the 'age' of the interaction.----- Demo source data BEGINWITH Users AS ( SELECT 'A' AS UserID UNION ALL SELECT 'B' AS UserID), Entries AS ( SELECT 10001 ItemID, 'X' AS UserID, 101 AS Sequence UNION ALL SELECT 10001 ItemID, 'A' AS UserID, 102 AS Sequence UNION ALL SELECT 10001 ItemID, 'X' AS UserID, 103 AS Sequence UNION ALL SELECT 10001 ItemID, 'B' AS UserID, 104 AS Sequence UNION ALL SELECT 10001 ItemID, 'X' AS UserID, 105 AS Sequence UNION ALL SELECT 10001 ItemID, 'A' AS UserID, 106 AS Sequence UNION ALL SELECT 10020 ItemID, 'Y' AS UserID, 201 AS Sequence UNION ALL SELECT 10020 ItemID, 'Y' AS UserID, 202 AS Sequence UNION ALL SELECT 10020 ItemID, 'B' AS UserID, 203 AS Sequence UNION ALL SELECT 10020 ItemID, 'Y' AS UserID, 204 AS Sequence UNION ALL SELECT 10020 ItemID, 'A' AS UserID, 205 AS Sequence UNION ALL SELECT 10020 ItemID, 'Y' AS UserID, 206 AS Sequence UNION ALL SELECT 10020 ItemID, 'B' AS UserID, 207 AS Sequence UNION ALL SELECT 10020 ItemID, 'B' AS UserID, 208 AS Sequence UNION ALL SELECT 10300 ItemID, 'A' AS UserID, 301 AS Sequence UNION ALL SELECT 10300 ItemID, 'Z' AS UserID, 302 AS Sequence UNION ALL SELECT 10300 ItemID, 'Z' AS UserID, 303 AS Sequence UNION ALL SELECT 10300 ItemID, 'Z' AS UserID, 304 AS Sequence UNION ALL SELECT 10300 ItemID, 'A' AS UserID, 305 AS Sequence UNION ALL SELECT 10300 ItemID, 'Z' AS UserID, 306 AS Sequence UNION ALL SELECT 10300 ItemID, 'A' AS UserID, 307 AS Sequence)----- Demo source data END----- Code I am asking about, Src AS ( SELECT e.ItemID , e.UserID , e.Sequence , CASE WHEN u.UserID IS NULL THEN 0 ELSE 1 END AS IsInternal FROM Entries AS e LEFT JOIN Users as u ON u.UserID = e.UserID), Src_UserID AS ( SELECT * , ROW_NUMBER() OVER ( PARTITION BY Src_UserID.ItemID, Src_UserID.IsInternal ORDER BY Src_UserID.FirstUserSequence ) AS RC FROM ( SELECT src.ItemID , src.IsInternal , src.UserID , COUNT(*) AS CountPerUser , MIN(src.Sequence) AS FirstUserSequence FROM src GROUP BY src.ItemID, src.IsInternal, src.UserID ) as Src_UserID), Src_Items AS ( SELECT src.ItemID , COUNT(*) AS CountAll , SUM(IsInternal) AS CountAllInternal , MIN(src.Sequence) AS FirstSequence , MAX(src.Sequence) AS LastSequence FROM src GROUP BY src.ItemID), Src_FirstInternal AS ( SELECT src.ItemID , src.UserID AS firstInternal , src.CountPerUser AS CountPerFirstInternal , MIN(src.FirstUserSequence) AS FirstInternalSequence FROM Src_UserID AS src WHERE src.IsInternal = 1 AND src.RC = 1 GROUP BY src.ItemID, src.IsInternal, src.UserID, src.CountPerUser) SELECT s0.ItemID , s0.FirstSequence , s1.firstInternal , s1.FirstInternalSequence , s1.CountPerFirstInternal , s0.CountAllInternal , s0.CountAll , 
s0.LastSequence FROM Src_Items as s0 JOIN Src_FirstInternal AS s1 ON s1.ItemID = s0.ItemIDDifference between the Demo code and real life:Items are in tables and not UNION ALL CTEsThe list represented by Entries in the demo, in reality is 800K rows, and takes 8 seconds to retrieve.Sequence column is actually a date.Execution plan:Inspecting the code in the Execution Plan, I see that most of the time is spent on SORT, and it occurs 3 times.GoalI'm trying to get the above query to perform better. Running this on even on a limited set of results takes ages. I wonder if there is a better way of writing this query. | Query to count interactions between users | performance;sql;sql server;t sql | Before we begin with the code...I just want to address one thing regarding test cases with sample data. To get the best out of a performance review of your queries, try to provide a sample that's as close as possible to your real data. You stated:Difference between the Demo code and real life:Items are in tables and not UNION ALL CTEsThe list represented by Entries in the demo, in reality is 800K rows, and takes 8 seconds to retrieve.Sequence column is actually a date.While (2) would be difficult to replicate on a small scale, (1) and (3) are fairly simple. I modified your sample data in the following ways to match your real life data more closely:Created temp tables #Users and #Entries including keys and indexes (clustered indexes created automatically on primary key constraintsN/A - cannot produce 800K rows of demo dataChanged sequence column to DATETIME type and seeded demo data using a RAND() formula with DATEADD(). While not completely identical, it should be close enough. New demo data:----- Demo source data BEGINIF OBJECT_ID('tempdb..#Users') IS NOT NULL DROP TABLE #Users;IF OBJECT_ID('tempdb..#Entries') IS NOT NULL DROP TABLE #Entries;GOCREATE TABLE #Users ( UserID VARCHAR(100) NOT NULL, CONSTRAINT PK_#Users PRIMARY KEY (UserID));CREATE TABLE #Entries ( ItemID INT NOT NULL, UserID VARCHAR(100) NULL, Sequence DATETIME NOT NULL, CONSTRAINT PK_#Entries PRIMARY KEY (ItemID, Sequence), CONSTRAINT FK_#Users FOREIGN KEY (UserID) REFERENCES #Users(UserID));GOINSERT INTO #Users (UserID) SELECT 'A' UNION ALL SELECT 'B' ;INSERT INTO #Entries (ItemID, UserID, Sequence) SELECT 10001 ItemID, 'X' AS UserID, DATEADD(HOUR, (RAND() * 1000), GETDATE()) AS Sequence UNION ALL SELECT 10001 ItemID, 'A' AS UserID, DATEADD(HOUR, (RAND() * 1000), GETDATE()) AS Sequence UNION ALL -- etc. SELECT 10300 ItemID, 'A' AS UserID, DATEADD(HOUR, (RAND() * 1000), GETDATE()) AS Sequence ;GO----- Demo source data ENDFor reference to others looking at this, the result set after running the whole query with sample data is as follows:PerformanceThis being the meat of your question, let's start by looking at our execution plan, which I ran based on the above sample data. I added markers 1-4 which caught my attention and will address individually. Note: I will make changes mainly in formatting as we go along.1. Duplicated Index ScansBoth of those identical scans come from the Src CTE, which is called from 2 the other 2 CTEs separately. I was looking for a way to eliminate the left join in favor of an existence check, however due to needing the u.UserID in your IsInternal field we will have to keep this join. 
One possibility, if this kind of operation (checking whether a user is internal) is something that is done frequently in your code base, you may consider adding an IsInternal boolean/bit column in Entries so you could eliminate this join altogether from your code base when you need to check if an entry is internal.I cannot tell you exactly how to optimize that CTE otherwise, but since you are scanning the same source data sets twice, perhaps consider storing the result set inside a temp table, which presumably might be a smaller set than the entire two original tables. WITH Src AS ( SELECT Src_Ent.ItemID , Src_Ent.UserID , Src_Ent.Sequence /*If the user for this item sequence is not found in Users, we mark it as Internal.*/ , (CASE WHEN Src_Usr.UserID IS NULL THEN 0 ELSE 1 END) AS IsInternal FROM #Entries AS Src_Ent LEFT JOIN #Users AS Src_Usr ON Src_Usr.UserID = Src_Ent.UserID)Improvements to formatting: Changed table aliases to make query (and execution plan) easier to read. #Entries AS Src_Ent was e and #Users AS Src_Usr was u. I also wrapped the CASE expression in round brackets to help isolate it visually from its alias. I added a bit of documentation to the CASE statement.2. Sort #1 - Src_UserID_SubThis Sort results from the GROUP BY clause of Src_UserID_Sub subquery. Unfortunately it's not possible to eliminate this expensive sort, as rows must be sorted prior to being grouped. It's possible that this would be less expensive if you used a temp table as suggested in step (1) if you had a clustered index, for example by making an artificial primary key such as a RowID INT IDENTITY(1,1) column on the temp table. 3. Sort #2 - Src_UserID with ROW_NUMBER()This Sort is also impossible to eliminate with your current logic, otherwise the following error will be raised: The function 'ROW_NUMBER' must have an OVER clause with ORDER BY. It might be possible to do away with ROW_NUMBER(), but that might also be more harmful than beneficial, as it would likely require a loop or some other construct that is not very SQL-ish, and in the end, the query optimizer can probably work better with this built-in function than if we rolled our own. So again, very little optimization possible. I did eliminate the SELECT * in favor of enumerating the columns, as it makes it easier to understand, and in general SELECT * should usually be avoided for a variety of reasons. , Src_UserID AS ( SELECT Src_UserID_Sub.ItemID , Src_UserID_Sub.IsInternal , Src_UserID_Sub.UserID , Src_UserID_Sub.CountPerUser , Src_UserID_Sub.FirstUserSequence , ROW_NUMBER() OVER ( PARTITION BY Src_UserID_Sub.ItemID , Src_UserID_Sub.IsInternal ORDER BY Src_UserID_Sub.FirstUserSequence ) AS [RowCount] FROM ( /*This subquery is used to get the number of entries per user, as well as the earliest sequence related to said entries*/ SELECT Src.ItemID , Src.IsInternal , Src.UserID , COUNT(*) AS CountPerUser , MIN(Src.Sequence) AS FirstUserSequence FROM Src GROUP BY Src.ItemID, Src.IsInternal, Src.UserID ) AS Src_UserID_SubImprovements to formatting: Changed subquery alias from Src_UserID to Src_UserID_Sub to differentiate it from the CTE name and therefore make the code less ambiguous. Got rid of SELECT * as mentioned above. Added a small amount of documentation explaining what the subquery is for. 4. Hash MatchSo here is the most expensive operation in your whole execution, at 31% operator cost. 
I'm going to quote one of the pros on DBA.StackExchange:The hash join is one of the more expensive join operations, as it requires the creation of a hash table to do the join. That said, its the join thats best for large, unsorted inputs. It is the most memory-intensive of any of the joins.Now, the thing is with hash joins, they are not necessarily slow, but they can be slow, depending on the memory load of the server at the time it is being executed. performance can also vary wildly based on the size of the build input vs. the amount of memory available, as the SQL optimizer will attempt to hold the hash table in memory, if it can. If it cannot due to insufficient memory, then it has to resort to more complex constructs such as Grace Hash Join and Recursive Hash Join. The type of hash join that is used is not easily discerned when optimizing, as this is done dynamically. Per TechNet page on Understanding Hash Joins:It is not always possible during optimization to determine which hash join is used. Therefore, SQL Server starts by using an in-memory hash join and gradually transitions to grace hash join, and recursive hash join, depending on the size of the build input.This one being less predictable, you will have to benchmark different solutions and compare the results. Would using temp tables instead of CTEs help? Maybe. Maybe not. Only way to know for sure is trying it in your environment. SELECT items.ItemID , items.FirstSequence , internal.firstInternal , internal.FirstInternalSequence , internal.CountPerFirstInternal , items.CountAllInternal , items.CountAll , items.LastSequenceFROM Src_Items AS itemsJOIN Src_FirstInternal AS internal ON internal.ItemID = items.ItemID;Formatting improvements: changed aliases as such: s0 -> items and s1 -> internal. OverallI think overall your SQL code is quite well written. From the looks of it, this probably belongs in a stored procedure. If it doesn't, then maybe you should make it so, that would give you further performance improvement by saving the execution plan after first execution. |
_unix.356082 | When we cat /proc/stat, the first line is the time spent in certain modes: user, nice, system, idle, iowait, irq, softirq, etc. My question is how the number of cores or the number of CPUs impacts the value. For example, if the computer has two CPUs, each with two cores, will the idle time be the sum over all four cores? | How the value of /proc/stat will be impacted by the count of cpu or cpu cores? | linux;proc;time | It is the sum of the idle times of all CPUs present in the machine. Assuming the machine has two CPUs, you shall see something like this:

cpu  12025658 7696 2460383 3405462812 174924 2 19062 144244 0 0   <----- first line
cpu0 8463714 3740 1309236 1700443907 15984 0 68 63475 0 0
cpu1 3561944 3955 1151147 1705018904 158940 2 18994 80769 0 0

I am not sure how we can get core-level information within a CPU. For example, what is the idle time of core0 in cpu0? Will update if I get to know about it. |
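The "first line is the sum" claim is easy to verify programmatically; a small Python sketch reading /proc/stat (values are in clock ticks, typically 1/100 s):

```python
def idle_times():
    """Per-CPU and aggregate idle time from /proc/stat, in clock ticks."""
    result = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            if fields[0].startswith("cpu"):
                # Field order after the label: user nice system idle iowait ...
                result[fields[0]] = int(fields[4])
    return result

times = idle_times()
per_core = sum(v for k, v in times.items() if k != "cpu")
print("aggregate idle:", times["cpu"], "sum over cores:", per_core)
```

On a live system the two numbers drift apart by a tick or two between reads, but they agree up to that sampling noise.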
_softwareengineering.272627 | This is a question concerning the fundamental approach of TDD, so the example below is as simple as possible which might make it seem a little useless; but of course the question applies to more complicated situations as well.Some colleagues and I are currently discussing and trying out some basic TDD ways of coding. We came across the questions how to deal with cheap solutions for existing but not encompassing TCs. In TDD one writes a TC which fails, then implements whatever it takes (and not more!) to let the TC pass. So the task at hand would be to make the TC green with as little effort as possible. If this means to implement a solution which uses inside knowledge of the TC, so be it. The reasoning was that later TCs would check for more general correctness anyway, so that first solution would need to be improved then (and only then).Example:We want to write a comparison function for a data structure with three fields. Our comparison shall return whether the given values are equal in all three fields (or differ in at least one). Our first written TC only checks if a difference in the two first values is detected properly: It passes (a,b,c) and (a,b,c) and checks for a correct detection of equality, then it passes (a,b,c) and (x,b,c) and checks for a correct detection of inequality.Now the cheap approach would be to also implement only a comparison of the first field because this should be enough to pass this TC. Keep in mind that this can be done because we know that later tests will also check for the equality of the two other fields.But of course it does not seem very useful to only implement such a (more or less) nonsense solution; every programmer doing this would do it in the knowledge of writing a bug. It obviously seems more natural to write a decent comparison right away in the first iteration.On the other hand, writing a correct solution without having a TC which checks it might lead to the situation that such a TC which tests the behaviour more thoroughly will never get written. So there is behaviour which was written without having a TC for it (i. e. which is not developed test-driven).Maybe a proper approach is to not write such rudimentary TCs (like the one only checking the first field) in the first place, but that would mean to demand perfect TCs in first iteration (and of course in complexer situations one will probably not always write perfect TCs).So how should one deal with rudimentary TCs? Implement a cheap solution or not? | Cheap implementations in fundamental TDD | unit testing;testing;tdd | null |
_softwareengineering.202399 | So, I've been evaluating Entity Framework and NHibernate (I'm not looking for an EF vs. NH battle here, though!). One thing that I see come up very often is that NHibernate is recommended for legacy/brownfield database projects, and lighter-weight ORMs (Dapper, etc.) are sometimes recommended for newer DBs. I will be applying my ORM to a brownfield database. What specific features of NHibernate make it so widely recommended for legacy DBs? (I have never heard anyone say "here's why NHibernate is better for legacy DBs" -- I really want to know that, so that I can evaluate NHibernate appropriately.) And by the way, what is the definition of legacy here? Do people mean databases that are not well normalized? (or) databases that are being accessed through non-ORM means, such as SQL queries or stored procs? (or) not talking about the database at all, but referring to classic 2-tier systems (or 2-tier web apps, where there is thick session state and no application tier)? (or) any database that isn't a noSQL database? If it's of any use to the discussion: I will be using this ORM to build distributed, multi-tier software, so I think that a lot of the stateful features in ORMs, like change tracking, etc., will not matter to me very much. | What specific features of NHibernate cause it to be recommended for legacy database systems? | architecture;orm;nhibernate | I'm not familiar with EF, so it's possible that what I'm about to mention exists in EF as well. I'm working with Priority ERP, which has a legacy database. What does legacy mean in this case?

- No foreign keys
- Sometimes being forced to create both a sequence numeric primary key and a unique key due to Priority ERP
- Table and field names limited to 20 characters, capital letters only
- Fake floating numbers (field stores int 10500, actual value is 10.500)
- Booleans are stored as a one-character varchar field, where Y is true and anything else is false (and I do mean anything else; some Priority ERP procedures use empty string, some N)
- Dates and times are stored as the number of minutes since 1-1-1988 (only minutes, no ability to store seconds)
- Having to work with prebuilt tables that were built in the 80's; because of the lack of foreign keys, the relationship between the tables is awkward to say the least
- Some tables have FIELD1...FIELD10 per row instead of a join table, which makes it impossible to do normal queries on the table
- No nulls allowed in any field
- Every table, even with zero data, has an empty row filled with default values that is used as a replacement for outer join because of the no-nulls setting

NHibernate plus ActiveRecord enables me to support all those limitations pretty easily:

- Built-in extension points when handling CRUD operations
- Ability to map the actual field contents on a field and convert it back and forth with a property
- Letting me define almost any mapping between entities
- Creating a custom query with HQL to do exactly what I need, even if I can't map the relationships between the entities |
_codereview.93973 | I have two lists of objects, Person and PersonResult. Both are linked through the property PersonId. I need to create a filter for the list of PersonResult that meets certain criteria for the Person (e.g. Person.Gender == "female"). I'm currently using the following LINQ query to achieve this:

PersonResultList = PersonResultList.Where(pr => PersonList.FirstOrDefault(p => pr.PersonId == p.PersonId) != null && PersonList.FirstOrDefault(p => pr.PersonId == p.PersonId).Gender == "female");

This works apparently well; however, I must iterate twice through PersonList to check if the person exists and its gender. Is there a more elegant way to achieve this? | LINQ query that filters elements from a list of objects | c#;linq | You can simply combine the conditions inside the FirstOrDefault(), like:

PersonResultList = PersonResultList
    .Where(pr => PersonList
        .FirstOrDefault(p => pr.PersonId == p.PersonId && p.Gender == "female") != null
    );

Because I only changed your existing code, it didn't come to my mind what Nikita Brizhak commented here: you should probably use Any instead of FirstOrDefault. So let us change the code to:

PersonResultList = PersonResultList
    .Where(pr => PersonList
        .Any(p => pr.PersonId == p.PersonId && p.Gender == "female"));

This is based on the assumption that for each entry in the first list there will be only one entry in the second list. |
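The same filter translated to Python, with one extra step that the double-iteration complaint suggests: precompute the set of matching ids once, so each result is checked in constant time instead of scanning the whole person list per result (the class shapes here are illustrative):

```python
class Person:
    def __init__(self, person_id, gender):
        self.person_id = person_id
        self.gender = gender

class PersonResult:
    def __init__(self, person_id):
        self.person_id = person_id

people = [Person(1, "female"), Person(2, "male")]
results = [PersonResult(1), PersonResult(2), PersonResult(3)]

# Build the set of matching ids once, then filter in one pass:
# same effect as the answer's Any(), but O(n + m) instead of O(n * m).
female_ids = {p.person_id for p in people if p.gender == "female"}
filtered = [r for r in results if r.person_id in female_ids]
print([r.person_id for r in filtered])  # [1]
```

The C# analogue of the precomputed set is a HashSet&lt;int&gt; built from PersonList before the Where clause.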
_unix.155818 | With Buildroot I'm making images for my embedded Linux hardware. Mainly I'm trying to speed up the boot sequence (and along the way lower the memory usage), and I've tried many techniques successfully. What I'd like to do: recently I've heard about removing duplicated files in a directory (e.g. by replacing those files with symbolic links), and I'd like to apply this method to my rootfs. The surroundings: with Buildroot I can have many different types of rootfs formats (cramfs, cpio, ext2/3/4, etc.), which are created during make as one (packed) file (e.g. rootfs.cpio). Now I don't really know how to:

1. open up the image
2. remove duplicated files (well, I know how to remove duplicated files in general)
3. pack the rootfs again, so that I can still use it to flash and execute it on my hardware

Maybe it's not even possible at all. I believe that, at least when using static libraries, many files could be replaced. Does somebody have an idea? | Removing duplicated files in the rootfs - to speed up booting time, and improve memory usage | linux;files;embedded;startup;buildroot | null |
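This row has no accepted answer, but the deduplication step the asker says they already know can be sketched: hash file contents, keep the first copy, and symlink the rest. A hedged Python sketch of that step only; it does not unpack or repack the image, so try it on a scratch copy of an unpacked rootfs first:

```python
import hashlib
import os

def dedupe_with_symlinks(root):
    """Replace duplicate regular files under root with relative symlinks.

    Sketch only: reads each file fully into memory and keeps the first
    occurrence of each content hash as the canonical copy.
    """
    seen = {}  # content hash -> first path with that content
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.remove(path)
                # Relative link so the tree stays valid after repacking.
                os.symlink(os.path.relpath(seen[digest], dirpath), path)
            else:
                seen[digest] = path
```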
_scicomp.21375 | When learning the deal.II FE library, I am a bit confused about the mechanism of its SparsityPattern class. Through reading the documentation, I only got to know that it uses the Compressed Row Storage format to store indices of nonzero entries of a sparse matrix. To make my confusion explicit, suppose I have a square 10x10 sparse matrix which stores the values corresponding to 10 degrees of freedom, namely dof_handler.n_dofs = 10:
\begin{pmatrix}
1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
3 & 4 & 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 6 & 7 & 8 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 9 & 10 & 11 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 12 & 13 & 14 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 15 & 16 & 17 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 18 & 19 & 20 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 21 & 22 & 23 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 24 & 25 & 26\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 27 & 28\\
\end{pmatrix}
But at first, suppose we don't know exactly where the nonzero entries are and can only get a maximal estimate of their number at each row, say max_per_row, and let max_per_row = 5. Then the sparsity pattern would be:
\begin{pmatrix}
X & X & 0 & 0 & 0 & 0 & 0 & X & X & X\\
X & X & X & 0 & 0 & 0 & 0 & 0 & X & X\\
X & X & X & X & X & 0 & 0 & 0 & 0 & 0\\
0 & X & X & X & X & X & 0 & 0 & 0 & 0\\
0 & 0 & X & X & X & X & X & 0 & 0 & 0\\
0 & 0 & 0 & X & X & X & X & X & 0 & 0\\
0 & 0 & 0 & 0 & X & X & X & X & X & 0\\
0 & 0 & 0 & 0 & 0 & X & X & X & X & X\\
X & 0 & 0 & 0 & 0 & 0 & X & X & X & X\\
X & X & 0 & 0 & 0 & 0 & 0 & X & X & X\\
\end{pmatrix}
One way to get the sparsity pattern is:

SparsityPattern sparsity_pattern;
sparsity_pattern.reinit(10, 10, 5);
DoFTools::make_sparsity_pattern(dof_handler, sparsity_pattern);
sparsity_pattern.compress()

where dof_handler stores the information about the degrees of freedom. It can also be implemented as below:

DynamicSparsityPattern dynamic_pattern (dof_handler.n_dofs());
DoFTools::make_sparsity_pattern (dof_handler, dynamic_pattern);
constraints.condense (dynamic_pattern);
SparsityPattern sp;
sp.copy_from (dynamic_pattern);

So, my questions are:
*1) Here, when we call sparsity_pattern.reinit(10, 10, 5), does it create an empty 10x10 matrix (2D vector) or just a 1x100 vector storing the values row by row? And whether it is 10x10 or 1x100, what is the functionality of the third parameter, max_per_row=5?
*2) I think sparsity_pattern.reinit(10, 10, 5) creates a 10x10 matrix and then DoFTools::make_sparsity_pattern(dof_handler, sparsity_pattern) is used to create a CRS format, i.e., two vectors storing, respectively, the column indices of the nonzero entries in each row and the index starting a new row, according to the dof information stored in dof_handler. Does this understanding make sense?
*3) I know that in the C++ STL we can shrink a vector to its actual capacity using its member function vector.shrink_to_fit. But here, if DoFTools::make_sparsity_pattern has already made a compressed format of 5 entries at each row (according to *2), how can it be compressed again using sparsity_pattern.compress()?
*4) Does DynamicSparsityPattern dynamic_pattern (dof_handler.n_dofs()) create an (n_dofs) x (n_dofs) matrix? Then what's the difference between DoFTools::make_sparsity_pattern(dof_handler, sparsity_pattern) and DoFTools::make_sparsity_pattern (dof_handler, dynamic_pattern)?
*5) What about constraints.condense (dynamic_pattern)?
*6) In the C++ STL, we know that if we copy a vector v1 of size 4, capacity 8 to a vector v2, then v2 will be of capacity 4. In this way, we can also shrink a vector to its actual capacity.
Is it the same mechanism to use sp.copy_from(dynamic_pattern)? I know these questions are very basic to experienced users of deal.II, but to green learners like me they are really great hurdles to jump. I sincerely hope someone could be so kind as to give some help, and any comments would be greatly appreciated. Thanks in advance! | Topics about the deal.II finite element library class SparsityPattern | c++;sparse;data storage;deal.ii | I will try to answer based on my experience with deal.ii.
1. The max_per_row=5 means that at most there will be 5 non-zeros per row in the matrix. Since we now know this, we do not need to have a $1\times{}100$ matrix but rather a $1\times{}50$. In other words, this parameter sets an upper bound on the memory needed. In reality it is not stored as one vector but rather as two: a row pointer vector and a column indices vector, in accordance with the CRS format. See 2).
2. Yes, the sparsity pattern holds the row pointers and the column indices as two vectors.
3. We must potentially compress again because, while we set a maximum of 5 entries per row, there may in fact be fewer for any given row. One row might have 5 while a different row might only have 3. This results in extra entries in the column index vector that should be removed by compression. Note that these extra entries in the column index vector are often written as $-1$'s: since a column index can never be negative, if it is $-1$ we know that no non-zero has been assigned for that column index.
4. As the documentation says, the DynamicSparsityPattern is used to find the sparsity pattern while being compressed at all times. This reduces memory overhead at the expense of CPU time. "This class acts as an intermediate form of the SparsityPattern class. From the interface it mostly represents a SparsityPattern object that is kept compressed at all times. However, since the final sparsity pattern is not known while constructing it, keeping the pattern compressed at all times can only be achieved at the expense of either increased memory or run time consumption upon use. The main purpose of this class is to avoid some memory bottlenecks, so we chose to implement it memory conservative. The chosen data format is too unsuited to be used for actual matrices, though. It is therefore necessary to first copy the data of this object over to an object of type SparsityPattern before using it in actual matrices. Another viewpoint is that this class does not need up front allocation of a certain amount of memory, but grows as necessary." An extensive description of sparsity patterns can be found in the documentation of the Sparsity patterns module.
5. The documentation has this to say: "Condense a sparsity pattern. The name of the function mimics the name of the function we use to condense linear systems, but it is a bit of a misnomer for the current context. This is because in the context of linear systems, we eliminate certain rows and columns of the linear system, i.e., we reduce or condense the linear system. On the other hand, in the current context, the function does not remove nonzero entries from the sparsity pattern. Rather, it adds those nonzero entry locations to the sparsity pattern that will later be needed for the process of condensation of constrained degrees of freedom from a linear system. Since this function adds new nonzero entries to the sparsity pattern, the given sparsity pattern must not be compressed. The constraint matrix (i.e., the current object) must be closed.
The sparsity pattern is compressed at the end of the function." We copy the DynamicSparsityPattern to a regular SparsityPattern using the copy call because apparently the dynamic one isn't in a nice enough format for later operations. How this is done internally may involve vector copies, as you suggest, but I am not sure.

Concrete example. Let's run through your first SparsityPattern code snippet (keeping in mind that the exact details of how this is done in deal.ii might differ, but the basic idea I think is correct):

SparsityPattern sparsity_pattern;
sparsity_pattern.reinit(10, 10, 5);
DoFTools::make_sparsity_pattern(dof_handler, sparsity_pattern);
sparsity_pattern.compress()

1. Call SparsityPattern sparsity_pattern;. This just creates our sparsity pattern object with the default constructor. The sparsity_pattern object contains member variables like row and col, both of which are vectors (currently of size 0). Note: row and col are names I made up; they are called something else in deal.ii.
2. Call sparsity_pattern.reinit(10, 10, 5);. Our sparsity pattern initializes the row pointer and column index vectors. This might look something like row.resize(10+1,0); and col.resize(10*5,-1);
3. Call make_sparsity_pattern. This determines where the non-zeros in our sparse matrix will be; however, some of the col entries will still be -1's, i.e. cases where there were fewer than 5 non-zeros in the row. Otherwise row is correctly filled, and col contains both -1's and other non-negative column indices indicating the columns where non-zeros exist.
4. Call compress. This compresses the col vector by removing any -1's.
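To pin down what the two CRS vectors contain after compression, here is a small standalone illustration (not deal.II's actual internals; the names follow the answer's made-up row/col) for the tridiagonal 10x10 pattern from the question:

```cpp
#include <cstddef>
#include <vector>

int main()
{
    // row[i] points at the first entry of row i inside col; row[10] is one past the end.
    std::vector<std::size_t> row = {0, 2, 5, 8, 11, 14, 17, 20, 23, 26, 28};

    // col lists the column indices of the nonzeros, row by row: rows 0 and 9 have
    // 2 entries, rows 1..8 have 3 each, for 28 entries total (no -1 placeholders left).
    std::vector<std::size_t> col = {0, 1,
                                    0, 1, 2,  1, 2, 3,  2, 3, 4,  3, 4, 5,
                                    4, 5, 6,  5, 6, 7,  6, 7, 8,  7, 8, 9,
                                    8, 9};

    // Row i has row[i + 1] - row[i] nonzeros, e.g. row 0 has 2 and row 1 has 3.
    std::size_t nnz_row1 = row[2] - row[1]; // == 3
    (void)nnz_row1;
}
```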
_softwareengineering.345741 | I'm writing a java library (jar file) to log web service requests and responses in a database for an in-house application. This library will have two methods, registerRequest and registerResponse. I'm wondering: is it a good idea to pass a database connection to these library methods? Passing the connection to the library has some pros and cons.
Pros: one connection can be used to register both the request and the response. This decreases the delay of opening a second connection. In some cases, the caller service will use the same connection too.
Cons: the caller service and the library become coupled. | Passing database connection to the library | java;design patterns;libraries;coupling;inversion of control | null
_codereview.61571 | Some patterns are emerging (fmap (b->a) . fmap (c->b) . .. . (IO z)). Also, infix zip is kind of a hack. What is:
- Best practice in point-free style?
- Best practice?
Elegance > Performance; Functional > Imperative

import qualified Data.Map.Lazy as M
import qualified Data.ByteString.Char8 as BS

fromFile :: FilePath -> IO (M.Map Char Int)
fromFile = fmap (M.fromListWith (+)) . fmap (`zip` [1,1..]) . readFile
  where readFile = fmap BS.unpack . BS.readFile | Count frequency of characters in a file | haskell;io | Best practice would be to separate the pure operations of your program from those that actually require IO. Counting the frequency of elements in a list doesn't require any IO, so you should tease that fromFile function apart into its constituent components for reusability, testing, comprehensibility, or whatever other purpose you'd like.

frequencies :: Ord k => [k] -> Map k Int
frequencies = fromListWith (+) . (`zip` [1,1..])

fromFile :: FilePath -> IO (Map Char Int)
fromFile = fmap frequencies . fmap unpack . readFile

I'd tweak this just a bit further, using repeat from the Prelude to build the infinite list instead of abusing list ranges, and taking advantage of the Functor laws to drop a few characters. I flip back and forth on writing functions in pointfree style when it requires infix sectioning too; in this case I'd probably keep the points, but I don't know that one choice is clearly better than the other.

frequencies ks = fromListWith (+) $ zip ks (repeat 1)

fromFile = fmap (frequencies . unpack) . readFile
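A quick sanity check of the refactored frequencies function (a sketch; it assumes the definitions and imports above are in scope):

```haskell
-- Minimal check of the pure piece in isolation.
main :: IO ()
main = print (frequencies "abbccc")
-- prints: fromList [('a',1),('b',2),('c',3)]
```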
_softwareengineering.108664 | I am developing a very simple iPhone game with this view hierarchy:

Main Menu View
 > New Game View
 |   > Player vs Computer Game View
 |   |   > Pause View
 |   |   > End Turn View
 |   |   > End Game View
 |   > Player vs Player (offline) Game View
 |       > Pause View
 |       > End Turn View
 |       > End Game View
 > Information View

My current implementation has a single ViewController that controls every aspect of the user interface and a single XIB file that contains every View of the game. Is this correct? It looks a bit confusing... Should I have more ViewControllers and more XIB files? And what's the proper way to make them cooperate? | What's the proper way to organize ViewControllers and XIB? | iphone;ios;game development;user interface | null
_webapps.103082 | I keep getting email sent to addresses similar to mine. For example, if my email address is [email protected], I also get emails addressed to [email protected]. This j.frank address is not associated with my email account, so I cannot remove the association. How can I block these types of emails? | Receiving gmail to an account that is not linked to my email address | gmail | null
_hardwarecs.7578 | I'm looking for options to replace my current mouse, which has some interesting features but not all I want.
Must-haves / hard requirements:
- Price must be <100 in Germany
- The configuration software must work with Windows 10 Creator's Update
- The mouse needs to have at least two clickable buttons in addition to a standard clickable scroll wheel and the left and right mouse buttons
- The configuration software must be able to assign the following actions: open Windows Explorer (win-key + e), copy (Ctrl+c), paste (Ctrl+v), open Start (win-key), double-click
- The mouse must feature a sensor that either has no hardware mouse acceleration or where it can be turned off through the configuration software
- It must be wired and use USB as its interface
- It must feature a closed design, that is, it must not be / look like the Mad Catz RAT series

Really nice-to-haves:
- The mouse allows me to program 4 of the 5 above-listed functionalities at once, i.e. it has either 4 buttons or features something like a shift button
- The mouse works out of the box with Windows using default drivers (HID)
- The mouse can save its programming and apply it to new machines without the configuration software installed (i.e. if I configure ExtraButton1 to be copy, it must work on all machines out of the box)
- The mouse should last >5 years; for the sake of comparability we set this requirement equal to having >2 years of manufacturer warranty

Neat features:
- A configurable lift distance
- A configurable DPI value
- Configurable weight | Linear, durable and programmable mouse? | mice | null
_codereview.60695 | The if/else statements below are not good. How can I improve this method?

public T GetContentByNodeIdSync<T>(Guid nodeId)
{
    var data = m_CMSCatalog.GetContentByNodeId(nodeId);
    if (typeof(T) == typeof(WebFolder))
    {
        var model = (WebFolderDTO)data;
        return Mapper.Map<WebFolderDTO, T>(model);
    }
    else if (typeof(T) == typeof(ContentListItem))
    {
        return Mapper.Map<ContentListItemDTO, T>((ContentListItemDTO)data);
    }
    else if (typeof(T) == typeof(Image))
    {
        return Mapper.Map<ImageDTO, T>((ImageDTO)data);
    }
    else if (typeof(T) == typeof(File))
    {
        return Mapper.Map<FileDTO, T>((FileDTO)data);
    }
    else if (typeof(T) == typeof(Folder))
    {
        return Mapper.Map<FolderDTO, T>((FolderDTO)data);
    }
    else if (typeof(T) == typeof(WebRoot))
    {
        return Mapper.Map<WebRootDTO, T>((WebRootDTO)data);
    }
    else if (typeof(T) == typeof(Article))
    {
        var model = (ArticleDTO)data;
        return Mapper.Map<ArticleDTO, T>(model);
    }
    else if (typeof(T) == typeof(WebContent))
    {
        var model = (WebContentDTO)data;
        return Mapper.Map<WebContentDTO, T>(model);
    }
    return default(T);
} | Simplifying a series of type checks and casts in a generic method | c#;generics;type safety | null
_unix.136439 | I'm running A file | B --params > file.txt. Since I want to accelerate the processing, I used parallel -j+0 < a.txt to run 20 jobs concurrently. a.txt contains all the commands:
A file1 | B --params > file1.txt
A file2 | B --params > file2.txt
A fileN | B --params > fileN.txt
Is this way safe? Will the stdout from different programs get mixed up when running in parallel? | Is it safe to pipe the stdout of A to B in parallel | bash;pipe;parallel | null
_datascience.15346 | I've been using statsmodels for multivariable regression and I'd like to know if it has a lincom command like Stata. For example, if I want to carry out hypothesis testing on an estimate with H0: b=1 and H1: b>1 after regression. | Python equivalent to Stata's lincom command | python;dataset | null
_unix.368365 | So I recently installed the KDE packages, running the command apt install kde-full to install the KDE desktop environment. After testing it for a while, I realized I didn't want it anymore, so I removed the packages with apt remove kde-full, but I noticed that some packages and applications remained and weren't completely removed. I am on the Debian 8 distribution. I recently turned to the Linux world, so any help will be appreciated.
[Edit] I also tried tasksel --list-task:
u desktop            Debian desktop environment
u gnome-desktop      GNOME
u xfce-desktop       Xfce
u kde-desktop        KDE
u cinnamon-desktop   Cinnamon
u mate-desktop       MATE
u lxde-desktop       LXDE
u web-server         web server
u print-server       print server
u ssh-server         SSH server
i laptop             laptop

Some package listings with the Synaptic package manager (screenshot in the original post). aptitude why kde-base-artwork output:
aptitude why kde-base-artwork
i   kdeartwork         Depends kscreensaver (>= 4:4.14.2-1)
i A kscreensaver       Depends kde-workspace-bin
i A kde-workspace-bin  Depends kde-workspace-data (= 4:4.11.13-2)
i A kde-workspace-data Depends kde-base-artwork | How to remove all kde packages? | debian;apt;package management;kde | null
_codereview.77683 | I tried to help someone on Stack Overflow with a refactoring exercise. I made many changes to their original code and arrived at a somewhat decent solution (at least in my eyes). I was wondering whether someone could critique my implementation.

class Theatre
  COST = { running: 3, fixed: 180 }

  attr_accessor :number_of_audience, :ticket_price

  def revenue
    @number_of_audience * @ticket_price
  end

  def total_cost
    COST[:fixed] + (@number_of_audience * COST[:running])
  end

  def net
    revenue - total_cost
  end

  def profit?
    net > 0
  end
end

class TheatreCLI
  def initialize
    @theatre = Theatre.new
  end

  def seek_number_of_attendes
    print 'Number of audience: '
    @theatre.number_of_audience = gets.chomp.to_i
  end

  def seek_ticket_price
    print 'Ticket price: '
    @theatre.ticket_price = gets.chomp.to_i
  end

  def print_revenue
    puts "Revenue for the theatre is RM #{@theatre.revenue}."
  end

  def print_profit
    message_prefix = @theatre.profit? ? 'Profit made' : 'Loss incurred'
    puts "#{message_prefix} #{@theatre.net.abs}."
  end

  def self.run
    TheatreCLI.new.instance_eval do
      seek_ticket_price
      seek_number_of_attendes
      print_revenue
      print_profit
    end
  end
end

TheatreCLI.run | Refactoring a simple Ruby CLI program | ruby | It's very strange that you're using instance_eval. That's not how a regular program should work, and it is not needed in your code. If you want to avoid calling 4 instance methods from a class method, you can create a new instance method like:

def workflow
  seek_ticket_price
  seek_number_of_attendes
  print_revenue
  print_profit
end

def self.run
  TheatreCLI.new.workflow
end

Maybe it's not a problem here, but using a lot of instance_eval for saving keystrokes looks like working around a bad API. Also, my gut feeling tells me that such practice can lead to unexpected troubles if used outside a class body.
_softwareengineering.300801 | I am currently building an application with a layered architecture in C. Currently, I have built and tested the bottom layer, which is a networking module providing functionality such as connecting/disconnecting, sending messages, etc. On top of it, I am building another layer that implements a communication protocol. It has functionality such as connect, which calls network_connect internally to create the actual connection and does some protocol-related business, such as registering on a server. Now, the problem is: how should I test the second layer? The original approach was to create 2 threads, open a server on one of them, connect to it and monitor the traffic (basically, check whether the required protocol-related data has been transferred). However, I do not need to test the actual network connection, since that is already tested in the tests for the networking module, and I feel this approach is too complicated. One approach I could think of was to expose the internal socket file descriptor from the networking module through getters/setters (which I needed anyway) and replace the network connection with a pipe. This approach works for most of the operations, except for the actual connection, where the network connect routine is called internally. Also, the connection in this case is made to a server whose ip/port is not exposed outside of this module (it is currently defined as a constant and will soon be moved into a config file). How should I approach the testing of this module? | Testing of a layered software architecture | testing;layers | null
_unix.184098 | I have VMware virtual machine running Debian Wheezy. I have compiled my own kernel 3.14. I have noticed dmesg is flooded with messages pci BAR 7: can't assign io (size 0x1000).I have no idea what these messages mean. The VM seems to be running OK, I don't see any problems. Nevertheless, I am bothered by these error messages, and I would be happy if I could get rid of them.Could somebody please explain 1) what do these messages mean2) how can I get rid of them...pnp: PnP ACPI: found 9 devicesACPI: bus type PNP unregisteredpci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000pci 0000:00:15.3: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:15.4: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:15.5: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:15.6: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:15.7: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:16.3: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:16.4: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:16.5: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:16.6: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:16.7: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:17.3: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:17.4: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:17.5: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:17.6: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:17.7: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:18.2: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:18.3: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:18.4: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:18.5: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 
0000:00:18.6: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:18.7: res[7]=[io 0x1000-0x0fff] get_res_add_size add_size 1000pci 0000:00:0f.0: BAR 6: assigned [mem 0xc0000000-0xc0007fff pref]pci 0000:00:15.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.2: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:18.2: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:17.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:16.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.7: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.6: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.5: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.4: BAR 7: can't assign io (size 0x1000)pci 0000:00:15.3: BAR 7: can't assign io (size 0x1000)pci 0000:00:01.0: PCI bridge to [bus 01]pci 0000:00:11.0: PCI bridge to [bus 02]pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]pci 0000:00:11.0: bridge window [mem 0xd1900000-0xd23fffff]pci 0000:00:11.0: bridge window [mem 0xdc400000-0xdc9fffff 64bit pref]pci 0000:03:00.0: BAR 6: assigned [mem 0xd4400000-0xd440ffff pref]pci 0000:00:15.0: PCI bridge to [bus 03]pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]pci 0000:00:15.0: bridge window [mem 0xd2400000-0xd24fffff]pci 0000:00:15.0: bridge window [mem 0xd4400000-0xd44fffff 64bit pref]pci 0000:00:15.1: PCI bridge to [bus 04]pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]pci 0000:00:15.1: bridge window [mem 0xd2800000-0xd28fffff]pci 0000:00:15.1: bridge window [mem 0xd4800000-0xd48fffff 64bit pref]pci 0000:00:15.2: PCI bridge to [bus 05]pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]pci 0000:00:15.2: bridge window [mem 0xd2c00000-0xd2cfffff]pci 0000:00:15.2: bridge window [mem 0xdcb00000-0xdcbfffff 64bit pref]pci 0000:00:15.3: PCI bridge to [bus 06]pci 0000:00:15.3: bridge window [mem 
0xd3000000-0xd30fffff]pci 0000:00:15.3: bridge window [mem 0xdcd00000-0xdcdfffff 64bit pref]pci 0000:00:15.4: PCI bridge to [bus 07]pci 0000:00:15.4: bridge window [mem 0xd3400000-0xd34fffff]pci 0000:00:15.4: bridge window [mem 0xdcf00000-0xdcffffff 64bit pref]pci 0000:00:15.5: PCI bridge to [bus 08]pci 0000:00:15.5: bridge window [mem 0xd3800000-0xd38fffff]pci 0000:00:15.5: bridge window [mem 0xdd100000-0xdd1fffff 64bit pref]pci 0000:00:15.6: PCI bridge to [bus 09]pci 0000:00:15.6: bridge window [mem 0xd3c00000-0xd3cfffff]pci 0000:00:15.6: bridge window [mem 0xdd300000-0xdd3fffff 64bit pref]pci 0000:00:15.7: PCI bridge to [bus 0a]pci 0000:00:15.7: bridge window [mem 0xd4000000-0xd40fffff]pci 0000:00:15.7: bridge window [mem 0xdd500000-0xdd5fffff 64bit pref]pci 0000:0b:00.0: BAR 6: assigned [mem 0xd4500000-0xd450ffff pref]pci 0000:00:16.0: PCI bridge to [bus 0b]pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]pci 0000:00:16.0: bridge window [mem 0xd2500000-0xd25fffff]pci 0000:00:16.0: bridge window [mem 0xd4500000-0xd45fffff 64bit pref]pci 0000:00:16.1: PCI bridge to [bus 0c]pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]pci 0000:00:16.1: bridge window [mem 0xd2900000-0xd29fffff]pci 0000:00:16.1: bridge window [mem 0xd4900000-0xd49fffff 64bit pref]pci 0000:00:16.2: PCI bridge to [bus 0d]pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]pci 0000:00:16.2: bridge window [mem 0xd2d00000-0xd2dfffff]pci 0000:00:16.2: bridge window [mem 0xd4b00000-0xd4bfffff 64bit pref]pci 0000:00:16.3: PCI bridge to [bus 0e]pci 0000:00:16.3: bridge window [mem 0xd3100000-0xd31fffff]pci 0000:00:16.3: bridge window [mem 0xd4d00000-0xd4dfffff 64bit pref]pci 0000:00:16.4: PCI bridge to [bus 0f]pci 0000:00:16.4: bridge window [mem 0xd3500000-0xd35fffff]pci 0000:00:16.4: bridge window [mem 0xd4f00000-0xd4ffffff 64bit pref]pci 0000:00:16.5: PCI bridge to [bus 10]pci 0000:00:16.5: bridge window [mem 0xd3900000-0xd39fffff]pci 0000:00:16.5: bridge window [mem 0xd5100000-0xd51fffff 64bit pref]pci 0000:00:16.6: PCI bridge to [bus 11]pci 0000:00:16.6: bridge window [mem 0xd3d00000-0xd3dfffff]pci 0000:00:16.6: bridge window [mem 0xd5300000-0xd53fffff 64bit pref]pci 0000:00:16.7: PCI bridge to [bus 12]pci 0000:00:16.7: bridge window [mem 0xd4100000-0xd41fffff]pci 0000:00:16.7: bridge window [mem 0xd5500000-0xd55fffff 64bit pref]pci 0000:00:17.0: PCI bridge to [bus 13]pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]pci 0000:00:17.0: bridge window [mem 0xd2600000-0xd26fffff]pci 0000:00:17.0: bridge window [mem 0xd4600000-0xd46fffff 64bit pref]pci 0000:00:17.1: PCI bridge to [bus 14]pci 0000:00:17.1: bridge window [io 0xa000-0xafff]pci 0000:00:17.1: bridge window [mem 0xd2a00000-0xd2afffff]pci 0000:00:17.1: bridge window [mem 0xdca00000-0xdcafffff 64bit pref]pci 0000:00:17.2: PCI bridge to [bus 15]pci 0000:00:17.2: bridge window [io 0xe000-0xefff]pci 0000:00:17.2: bridge window [mem 0xd2e00000-0xd2efffff]pci 0000:00:17.2: bridge window [mem 0xdcc00000-0xdccfffff 64bit pref]pci 0000:00:17.3: PCI bridge to [bus 16]pci 0000:00:17.3: bridge window [mem 0xd3200000-0xd32fffff]pci 0000:00:17.3: bridge window [mem 0xdce00000-0xdcefffff 64bit pref]pci 0000:00:17.4: PCI bridge to [bus 17]pci 0000:00:17.4: bridge window [mem 0xd3600000-0xd36fffff]pci 0000:00:17.4: bridge window [mem 0xdd000000-0xdd0fffff 64bit pref]pci 0000:00:17.5: PCI bridge to [bus 18]pci 0000:00:17.5: bridge window [mem 0xd3a00000-0xd3afffff]pci 0000:00:17.5: bridge window [mem 0xdd200000-0xdd2fffff 64bit pref]pci 0000:00:17.6: PCI bridge to [bus 19]pci 
0000:00:17.6: bridge window [mem 0xd3e00000-0xd3efffff]pci 0000:00:17.6: bridge window [mem 0xdd400000-0xdd4fffff 64bit pref]pci 0000:00:17.7: PCI bridge to [bus 1a]pci 0000:00:17.7: bridge window [mem 0xd4200000-0xd42fffff]pci 0000:00:17.7: bridge window [mem 0xdd600000-0xdd6fffff 64bit pref]pci 0000:00:18.0: PCI bridge to [bus 1b]pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]pci 0000:00:18.0: bridge window [mem 0xd2700000-0xd27fffff]pci 0000:00:18.0: bridge window [mem 0xd4700000-0xd47fffff 64bit pref]pci 0000:00:18.1: PCI bridge to [bus 1c]pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]pci 0000:00:18.1: bridge window [mem 0xd2b00000-0xd2bfffff]pci 0000:00:18.1: bridge window [mem 0xd4a00000-0xd4afffff 64bit pref]pci 0000:00:18.2: PCI bridge to [bus 1d]pci 0000:00:18.2: bridge window [mem 0xd2f00000-0xd2ffffff]pci 0000:00:18.2: bridge window [mem 0xd4c00000-0xd4cfffff 64bit pref]pci 0000:00:18.3: PCI bridge to [bus 1e]pci 0000:00:18.3: bridge window [mem 0xd3300000-0xd33fffff]pci 0000:00:18.3: bridge window [mem 0xd4e00000-0xd4efffff 64bit pref]pci 0000:00:18.4: PCI bridge to [bus 1f]pci 0000:00:18.4: bridge window [mem 0xd3700000-0xd37fffff]pci 0000:00:18.4: bridge window [mem 0xd5000000-0xd50fffff 64bit pref]pci 0000:00:18.5: PCI bridge to [bus 20]pci 0000:00:18.5: bridge window [mem 0xd3b00000-0xd3bfffff]pci 0000:00:18.5: bridge window [mem 0xd5200000-0xd52fffff 64bit pref]pci 0000:00:18.6: PCI bridge to [bus 21]pci 0000:00:18.6: bridge window [mem 0xd3f00000-0xd3ffffff]pci 0000:00:18.6: bridge window [mem 0xd5400000-0xd54fffff 64bit pref]pci 0000:00:18.7: PCI bridge to [bus 22]pci 0000:00:18.7: bridge window [mem 0xd4300000-0xd43fffff]pci 0000:00:18.7: bridge window [mem 0xd5600000-0xd56fffff 64bit pref]pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff]pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff]pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff]pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff]pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff]pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff]pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7]pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff]pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]pci_bus 0000:02: resource 1 [mem 0xd1900000-0xd23fffff]pci_bus 0000:02: resource 2 [mem 0xdc400000-0xdc9fffff 64bit pref]pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff]pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff]pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff]pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff]pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff]pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff]pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7]pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff]pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]pci_bus 0000:03: resource 1 [mem 0xd2400000-0xd24fffff]pci_bus 0000:03: resource 2 [mem 0xd4400000-0xd44fffff 64bit pref]pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]pci_bus 0000:04: resource 1 [mem 0xd2800000-0xd28fffff]pci_bus 0000:04: resource 2 [mem 0xd4800000-0xd48fffff 64bit pref]pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]pci_bus 0000:05: resource 1 [mem 0xd2c00000-0xd2cfffff]pci_bus 0000:05: resource 2 [mem 0xdcb00000-0xdcbfffff 64bit pref]pci_bus 0000:06: resource 1 [mem 0xd3000000-0xd30fffff]pci_bus 0000:06: resource 2 [mem 0xdcd00000-0xdcdfffff 64bit pref]pci_bus 0000:07: resource 1 [mem 0xd3400000-0xd34fffff]pci_bus 0000:07: resource 2 [mem 0xdcf00000-0xdcffffff 64bit pref]pci_bus 0000:08: resource 1 [mem 
0xd3800000-0xd38fffff]pci_bus 0000:08: resource 2 [mem 0xdd100000-0xdd1fffff 64bit pref]pci_bus 0000:09: resource 1 [mem 0xd3c00000-0xd3cfffff]pci_bus 0000:09: resource 2 [mem 0xdd300000-0xdd3fffff 64bit pref]pci_bus 0000:0a: resource 1 [mem 0xd4000000-0xd40fffff]pci_bus 0000:0a: resource 2 [mem 0xdd500000-0xdd5fffff 64bit pref]pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]pci_bus 0000:0b: resource 1 [mem 0xd2500000-0xd25fffff]pci_bus 0000:0b: resource 2 [mem 0xd4500000-0xd45fffff 64bit pref]pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]pci_bus 0000:0c: resource 1 [mem 0xd2900000-0xd29fffff]pci_bus 0000:0c: resource 2 [mem 0xd4900000-0xd49fffff 64bit pref]pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]pci_bus 0000:0d: resource 1 [mem 0xd2d00000-0xd2dfffff]pci_bus 0000:0d: resource 2 [mem 0xd4b00000-0xd4bfffff 64bit pref]pci_bus 0000:0e: resource 1 [mem 0xd3100000-0xd31fffff]pci_bus 0000:0e: resource 2 [mem 0xd4d00000-0xd4dfffff 64bit pref]pci_bus 0000:0f: resource 1 [mem 0xd3500000-0xd35fffff]pci_bus 0000:0f: resource 2 [mem 0xd4f00000-0xd4ffffff 64bit pref]pci_bus 0000:10: resource 1 [mem 0xd3900000-0xd39fffff]pci_bus 0000:10: resource 2 [mem 0xd5100000-0xd51fffff 64bit pref]pci_bus 0000:11: resource 1 [mem 0xd3d00000-0xd3dfffff]pci_bus 0000:11: resource 2 [mem 0xd5300000-0xd53fffff 64bit pref]pci_bus 0000:12: resource 1 [mem 0xd4100000-0xd41fffff]pci_bus 0000:12: resource 2 [mem 0xd5500000-0xd55fffff 64bit pref]pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]pci_bus 0000:13: resource 1 [mem 0xd2600000-0xd26fffff]pci_bus 0000:13: resource 2 [mem 0xd4600000-0xd46fffff 64bit pref]pci_bus 0000:14: resource 0 [io 0xa000-0xafff]pci_bus 0000:14: resource 1 [mem 0xd2a00000-0xd2afffff]pci_bus 0000:14: resource 2 [mem 0xdca00000-0xdcafffff 64bit pref]pci_bus 0000:15: resource 0 [io 0xe000-0xefff]pci_bus 0000:15: resource 1 [mem 0xd2e00000-0xd2efffff]pci_bus 0000:15: resource 2 [mem 0xdcc00000-0xdccfffff 64bit pref]pci_bus 0000:16: resource 1 [mem 0xd3200000-0xd32fffff]pci_bus 0000:16: resource 2 [mem 0xdce00000-0xdcefffff 64bit pref]pci_bus 0000:17: resource 1 [mem 0xd3600000-0xd36fffff]pci_bus 0000:17: resource 2 [mem 0xdd000000-0xdd0fffff 64bit pref]pci_bus 0000:18: resource 1 [mem 0xd3a00000-0xd3afffff]pci_bus 0000:18: resource 2 [mem 0xdd200000-0xdd2fffff 64bit pref]pci_bus 0000:19: resource 1 [mem 0xd3e00000-0xd3efffff]pci_bus 0000:19: resource 2 [mem 0xdd400000-0xdd4fffff 64bit pref]pci_bus 0000:1a: resource 1 [mem 0xd4200000-0xd42fffff]pci_bus 0000:1a: resource 2 [mem 0xdd600000-0xdd6fffff 64bit pref]pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]pci_bus 0000:1b: resource 1 [mem 0xd2700000-0xd27fffff]pci_bus 0000:1b: resource 2 [mem 0xd4700000-0xd47fffff 64bit pref]pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]pci_bus 0000:1c: resource 1 [mem 0xd2b00000-0xd2bfffff]pci_bus 0000:1c: resource 2 [mem 0xd4a00000-0xd4afffff 64bit pref]pci_bus 0000:1d: resource 1 [mem 0xd2f00000-0xd2ffffff]pci_bus 0000:1d: resource 2 [mem 0xd4c00000-0xd4cfffff 64bit pref]pci_bus 0000:1e: resource 1 [mem 0xd3300000-0xd33fffff]pci_bus 0000:1e: resource 2 [mem 0xd4e00000-0xd4efffff 64bit pref]pci_bus 0000:1f: resource 1 [mem 0xd3700000-0xd37fffff]pci_bus 0000:1f: resource 2 [mem 0xd5000000-0xd50fffff 64bit pref]pci_bus 0000:20: resource 1 [mem 0xd3b00000-0xd3bfffff]pci_bus 0000:20: resource 2 [mem 0xd5200000-0xd52fffff 64bit pref]pci_bus 0000:21: resource 1 [mem 0xd3f00000-0xd3ffffff]pci_bus 0000:21: resource 2 [mem 0xd5400000-0xd54fffff 64bit pref]pci_bus 0000:22: resource 1 [mem 0xd4300000-0xd43fffff]pci_bus 0000:22: 
resource 2 [mem 0xd5600000-0xd56fffff 64bit pref]

Following is the identification of the device with lspci:
00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
    Memory behind bridge: d3000000-d30fffff
    Prefetchable memory behind bridge: 00000000dcd00000-00000000dcdfffff
    Capabilities: [40] Subsystem: VMware PCI Express Root Port
    Capabilities: [48] Power Management version 3
    Capabilities: [50] Express Root Port (Slot+), MSI 00
    Capabilities: [8c] MSI: Enable- Count=1/1 Maskable+ 64bit+
    Kernel driver in use: pcieport

UPDATE: Actually, dmesg contains over 550 lines of (debug) logs related to PCI. I have pasted the relevant part here. How can I get rid of these logs? They are not useful and are just flooding my logs. | dmesg: pci BAR 7: can't assign io | kernel;linux kernel;pci;dmesg;hot plug | null
_unix.18731 | 1. In the djvused bookmark format, how to disable (escape) characters that have special meaning in the djvused bookmark format. The format of a djvused bookmark of a djvu file is, for example:

(bookmarks
 ("1 first chapter" "#10"
  ("1.1 first section" "#11"
   ("1.1.1 first subsection" "#12" ))
  ("1.2 second section" "#13" ))
 ("2 second chapter" "#14"
  ("2.1 first section" "#16" )
  ("2.2 second section" "#13" )))

...where the main points are the pairing of left and right parentheses for the tree-like organization of sections and chapters, double quotes around each bookmark item, and each page number being preceded by a #. How can I escape characters like ", ( and ) so that they are not interpreted as control characters in the titles of chapters and sections? E.g. the following examples will not be accepted by djvused:
("2.2 "Hello!"" "#13" )
("2.2 f(g)" "#13" )
The command I use to embed bookmarks into a djvu file is djvused in.djvu -e 'set-outline bmks' -s, where bmks is the text file for the bookmarks.

2. In the djvused bookmark format, how to enable characters that have special meaning in general text files. The character \n means new line, but if used directly in the djvu bookmark format, it will be shown as-is, not interpreted as a new line. For example:
(bookmarks
 ("long title part 1 \n long title part 2" "#10" )
The long title will not be broken into two lines where \n is specified. | How to specify special characters in djvused bookmarks | djvu | null
_computerscience.1666 | I can't understand math equations; I'm a graphic designer.
- What is importance sampling?
- What is multiple importance sampling?
Could you explain easily, using illustrations and no math equations? What is the difference between importance sampling and multiple importance sampling? | What is the difference between importance sampling and multiple importance sampling? | raytracing;sampling;importance sampling | null
_codereview.44307 | This draws blocks on the screen from a grid to make a background for my game. I am wondering if anyone has any suggestions on optimizing it for speed.

int blockwidth=blocksize-2;
//Draw coloured blocks
for (int x=0;x<sizex;x++){
    int screenx=-(int)camerax+(x*blocksize);
    if (screenx>-blocksize && screenx<gamewidth){
        for (int y=0;y<sizey;y++){
            int screeny=-(int)cameray+(y*blocksize);
            if (screeny>-blocksize && screeny<gameheight){
                if (tiles[x][y][0]>0){
                    g.setColor(new Color( tiles[x][y][1]));
                    //g.fillRect(screenx,screeny,blockwidth,blockwidth);
                    g.drawImage(Iloader.Imagelist.get(0), screenx,screeny, screenx+blockwidth,screeny+blockwidth, graphicsize,0,graphicsize*2,graphicsize, null);
                } else {
                    //g.setColor(new Color( tiles[x][y][1] | 0xFFFF0000));
                    g.setColor(new Color( tiles[x][y][1]));
                    g.fillRect(screenx,screeny,blockwidth,blockwidth);
                }
            }
        }
    }
} | Drawing blocks for a 2D game background | java;game;graphics | There are a few optimizations I can see in your code.
- Creating a new Color every time is a little severe. You can do a few things here; for example, if your color palette is limited, cache the individual Color instances. I know it sounds petty right now, but when you add it up, there are a lot of new Color instances created. What you should do at minimum is track your last Color used, and only create a new one if it is different.
- Pull the Iloader.Imagelist.get(0) outside the loop, and have Image image = Iloader.Imagelist.get(0)
- Pull calculations outside the loops where you can... and continue/break when you can too.

Image image = Iloader.Imagelist.get(0);
int screenx = -(int)camerax - blocksize;
for (int x = 0; x < sizex; x++){
    screenx += blocksize;
    if (screenx <= -blocksize) {
        continue;
    }
    if (screenx >= gamewidth) {
        break;
    }
    int screeny = -(int)cameray - blocksize;
    for (int y = 0; y < sizey; y++){
        screeny += blocksize;
        if (screeny <= -blocksize) {
            continue;
        }
        if (screeny >= gameheight) {
            break;
        }
        if (tiles[x][y][0] > 0) {
            // need to set the color here?
            g.setColor(new Color( tiles[x][y][1]));
            g.drawImage(image, screenx, screeny, screenx + blockwidth, screeny + blockwidth, graphicsize, 0, graphicsize * 2, graphicsize, null);
        } else {
            //g.setColor(new Color( tiles[x][y][1] | 0xFFFF0000));
            g.setColor(new Color( tiles[x][y][1]));
            g.fillRect(screenx,screeny,blockwidth,blockwidth);
        }
    }
}

The above code does not have the mechanism for caching the color. You should figure one out.
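One way to implement the "track your last Color" suggestion as an actual helper (a sketch only; the field names are invented, and it assumes java.awt.Color and java.awt.Graphics are imported):

```java
// Cache the last Color so a new object is only allocated when the RGB value changes.
private int lastRgb = Integer.MIN_VALUE; // sentinel unlikely to be a real tile color
private Color lastColor = null;

private void setColorCached(Graphics g, int rgb) {
    if (lastColor == null || rgb != lastRgb) {
        lastColor = new Color(rgb);
        lastRgb = rgb;
    }
    g.setColor(lastColor);
}
```

Calling setColorCached(g, tiles[x][y][1]) in both branches keeps the inner loop mostly allocation-free whenever adjacent tiles share a color.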
_unix.283305 | In /var/log/syslog I see a lot of pure-ftpd logs that indicate download actions; however, I do not see any entries related to moving or deleting files. I would like to get rid of all download logs in /var/log/syslog. I would also like to create three separate log files for upload, move and delete actions. How can I do it? | Separate modification logs in pure-ftpd | pure ftpd | null
_unix.160319 | #!/bin/bash
#organization: Seneca College
#Purpose: Validate a date
#Usage: chkdate year month day
#
year=$1; month=$2; day=$3; extra=$4
if [[ "$year" == "" || "$month" == "" || "$day" == "" ]]; then
  # Not enough data!
  echo "Usage: chkdate year month day"
  exit 0
fi
if [[ ! ( $year =~ ^[0-9]+$ && $month =~ ^[0-9]+$ && $day =~ ^[0-9]+$ ) ]]; then
  # Date not numeric!
  echo "Usage: chkdate year month day"
  exit 1
fi
if [[ $year -lt 1 || $year -gt 9999 || $month -lt 1 || $month -gt 12 || $day -lt 1 || $day -gt 31 ]]; then
  # Date out of range!
  echo "Usage: chkdate year month day"
  exit 2
fi
if [[ ( $month == 1 || $month == 3 || $month == 5 || $month == 7 || $month == 8 || $month == 10 || $month == 12 ) && $day -gt 31 ]]; then
  # Invalid day!
  echo "Usage: chkdate year month day"
  exit 3
fi
if [[ ( $month == 4 || $month == 6 || $month == 9 || $month == 11 ) && $day -gt 30 ]]; then
  # Invalid day!
  echo "Usage: chkdate year month day"
  exit 4
fi
if [[ ($month == 9) && ($year == 1752) && ( $day -gt 2) && $day -le 14 ]] ; then
  #invalid day!
  echo "Usage: chkdate year month day"
  exit 5
date -d $2/$3/$1 > /dev/null 2>&1
if [[ $@ ]] ; then
  echo "valid date"
else
  echo "not a valid"
fi

It says my script has a line 44; however, my script's total line count is 43. | syntax error: unexpected end of the file | bash;shell script | null
_codereview.81840 | I have created a little program to search a set of folders holding documents scanned.The folder structure is as follows:c:\images\year\month\date\documenttype\firstpartofdocumentNo.\the year folder contains years from 2005 - 2015the month folder contains the months of the year (Obviously) same with datethe documenttype folder can contain between 1 and 5 foldersthe firstpartofdocumentno. can contain between 1 and 3 foldersThe code I am using at the moment is:CompName = Environment.MachineNameTicketNo = TxtTicketNo.TextIf CompName = Comp1 Then ImageDir = C:\Images\Else ImageDir = \\Comp2\Images\End IfFor Each DirYear As String In Directory.GetDirectories(ImageDir) Dim YearInfo As New DirectoryInfo(DirYear) For Each DirMonth As String In Directory.GetDirectories(DirYear) Dim MonthInfo As New DirectoryInfo(DirMonth) For Each DirDate As String In Directory.GetDirectories(DirMonth) Dim DateInfo As New DirectoryInfo(DirDate) For Each DirType As String In Directory.GetDirectories(DirDate) Dim TypeInfo As New DirectoryInfo(DirType) For Each DirStart As String In Directory.GetDirectories(DirType) Dim StartInfo As New DirectoryInfo(DirStart) MainDirectory = ImageDir & YearInfo.Name & \ & MonthInfo.Name & \ & DateInfo.Name & \ & TypeInfo.Name & \ & StartInfo.Name & \ For Each Ticket As String In Directory.GetFiles(MainDirectory, TicketNo & *) LstFiles.Items.Add(Ticket) Next Next Next Next Next NextI have a textbox on the form which is used to enter the last four numbers of the ticketno and then this code runs when the button is clicked.The problem is it can take up to five minutes to search, so I was wondering if there is a way to optimize this code to speed it up a bit or does this sound about right for searching that many folders. | Recursive filename search | performance;vb.net;file system;search | null |
_vi.1915 | In vim, I often need to delete, or yank/put, large blocks of text. I can count the lines of text and enter something like 50dd to delete 50 lines, but that's a bit of a pain. How can I delete a large block of text without having to know how many lines to delete in advance? | How do I delete a large block of text without counting the lines? | cut copy paste | Go to the starting line of your block, and type ma ("mark a"; you can of course use different letters, and even multiple letters for different marks, provided you can keep it straight in your head what each letter is a mark of). Then go to the last line and enter d'a (delete to mark a) or y'a (yank to mark a) (a). That will delete/yank all lines from the current one to the marked one (inclusive). Then you can paste it somewhere else with the normal paste commands, such as p or P. It's also compatible with vi as well as vim, on the off chance that your environment is not blessed with the latter.

(a) I also use this to save my place in the file if I have to go looking elsewhere for something, such as copy-pasting the definition of a function I want to call. I simply mark the current line in the same way, ma, then wander off to find whatever I'm looking for. Then, once I've found it and copied it to a register, I just enter 'a to go back to mark a, the line I saved beforehand, where I can then paste it.
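A worked example of the whole round trip, shown as a vimscript-style listing (the " comments are annotations, and the /end search is just a stand-in for any motion):

```vim
ma      " mark the first line of the block as mark a
/end    " move to the last line with any motion (search, G, }, etc.)
y'a     " yank every line from here back to mark a, inclusive
''      " jump back, or move wherever the text should go
p       " put the yanked block below the cursor
```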
_cstheory.29094 | Can the majority of $n$ bits be computed by a depth-2 formula all of whose gates compute the majority of $m$ bits, where $m=O(n^c)$ for a constant $c<1$? Such a formula contains $m+1$ gates and $m^2$ leaves, so $c$ must be at least $1/2$. I assume that the leaves can only be labeled by variables (without negations), but it would also be interesting to know the answer if we also allow negated variables and constants. Two examples of such formulas are given (as figures in the original post): one for $n=7$, $m=5$, and one for $n=9$, $m=7$. | Computing $\operatorname{MAJ}_n$ by $\operatorname{MAJ}_m$ in depth 2 | cc.complexity theory;circuit complexity;boolean functions;boolean formulas;circuit depth | null
_codereview.164219 | As far as I know, there is no standard method yet of maintaining keyword-value pairs. I'm certain most implementations would come to a screeching halt given my number-crunching requirements. The blogged benchmarking I've seen of the different methods ranges from dismal loops to object[keyword] dominance, but it's for static data. My dynamic-data HashCompactor() algorithm has been gathering dust at SourceForge since 2006, but when I recently raced it against the built-in methods, I was always neck and neck with o[k], sometimes even beating it. I designed HashCompactor to be fast, and it looks like it's even faster than I imagined. Tool, weapon, whatever you want to call it: here it is; have fun. There's a busking tip jar if anyone cares. I've hosted the HashCompactorLite() version of my algorithm at JSFiddle.

Here's a simple implementation where a list of image file extensions is compared to a url. In this case, each keyword's data in exts is an array of length 1. If ext === 'html', then .item() will return null after being unable to find 'h' among ['j','g','t','p','f']. Of that initial list of 15, this possible worst-case scenario only concerns itself with 5 of them.

let exts = new HashCompactorLite(['jpeg', 'jpg', 'jif', 'jfif', 'gif', 'tif', 'tiff', 'png', 'pdf', 'jp2', 'jpx', 'j2k', 'j2c', 'fpx', 'pcd']);
ext = extFromURL(url);
if (!! exts.item(ext)) displayImage(url);

In my WebLogHog implementation of HashCompactor, I'm analyzing website Log Format files. In this example, the nested .sort() functionality begins with this.referrers being a HashCompactor object before returning as a sorted Array. The optional second parameter to HashCompactor's .sort() routine preprocesses the data.

// compress duplicates and sort by most common; report total count for each
time = performance.now();
this.referrers = this.referrers.sort(function(a,b) {
    let _a = a.count, _b = b.count;
    if (_a > _b) return -1;
    if (_b > _a) return 1;
    return 0;
  }, function(keyword,data,arrayToFill) {
    let hc = new HashCompactor(),
        count = data.length;
    for (let i=0; i < count; i++) {
      hc.add(data[i]);
    }
    data = []; // used by .forEach() below
    hc.sort(function(a,b) {
        let _a = a.count, _b = b.count;
        if (_a > _b) return -1;
        if (_b > _a) return 1;
        return 0;
      }, function(keyword2,data2,arrayToFill2) {
        // keyword2, data2, arrayToFill2 for readability, but scope protects them
        arrayToFill2.push({ keyword: keyword2, count: data2.length });
      }).forEach(function(e,i,l) {
        // data [] from above
        data.push(e.keyword + " (" + e.count + ")");
      });
    arrayToFill.push({ keyword: keyword, data: data, count: count });
  });
time = performance.now() - time;
console.log("sort referrers: " + time);

I first started talking about my algorithm back in the early 1990s, but geopolitical concerns have made bringing it to fruition among civilians that much more difficult. Here's the gist of the routine.
- HashCompactorLite(optional HashCompactorLite object OR Array of Strings) creates (a copy of) a HashCompactorLite object, or, if an array is provided, .add()s each item as a keyword.
- .add(keyword, optional datum) returns an array of data. Multiple calls with the same keyword push the datum onto the array. An undefined datum works well for calculating word counts or simple comparisons against a list. The keyword can be either a String, a Number converted to a String (to take advantage of "1" being the most common number), or an array of String objects (such as names of modules).
- .count(optional show data count) returns the number of keywords, or the total data items if the parameter is true, a feature object[keyword] doesn't offer.
- .item(keyword) returns the keyword's data array, otherwise null.
- .set(keyword, data, optional combine data) sets the keyword's data array, either overriding what may have existed or, if true, combining with any existing data.
- .forEach(callback, optional thisp) iterates through each keyword, calling the callback function with the parameters (keyword, data) in the this context provided.
- .sort(comparison function, optional preprocessor function, optional partial keyword, optional thisp) returns an array of keywords, sorted after preprocessing HashCompactor data into an array for comparisons using the preprocessor parameters (keyword, data, arrayToFill). If the partial keyword is 'a', for example, only the keywords beginning with 'a' will be returned.
- .copy(HashCompactorLite object OR Array of keywords) deletes this HashCompactorLite and replaces it with a copy, or, if an array of keywords is provided, .add()s each one.
- .clear() deletes internal objects created by HashCompactor.
- .deleteKeyword(keyword) removes a keyword from the HashCompactor. See .add() for keyword requirements.

[Edit] I've been turned onto the Map() feature that flew in under my radar and internalized the character searches using it, but it didn't seem to speed up the WebLogHog stress test. I'm still dedicated to the read-once keyword searching, especially since I'm envisioning the six-foot-long keywords that are DNA. Having designed a real-time MIDI-to-music staff notation spelling algorithm back in the early 1990s, I'm into reading data bit by bit. I'm just not up to speed with the world of software engineering. | HashCompactor() keyword-value pair manager will become .hash() | javascript;algorithm | null
_bioinformatics.982 | I was wondering how I can calculate the charge of a protein peptide (e.g. RKTTLVPNTQTASPR) computationally in R or another tool. | Calculating the charge of a peptide computationally | r;proteins | null |
_codereview.48553 | Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five thousand first names, begin by sorting it into alphabetical order. Then, working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score. For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 x 53 = 49714. What is the total of all the name scores in the file?

I'm relatively new to clojure, and this is what I came up with:

(def names
  (sort (map (fn [x] (replace x #"\"" ""))
             (split (slurp "/users/calvinfroedge/Downloads/names.txt") #","))))

(loop [i 0 total 0]
  (if (not= i (count names))
    (recur (inc i)
           (+ total (* (inc i)
                       (reduce + (map (fn [x] (- (int x) 64))
                                      (nth names i))))))
    total))

I read somewhere that doseq is preferred to loop/recur, but it wasn't apparent to me how to comprehensibly AND idiomatically approach this problem without using an explicit loop with an incrementing value. Am I missing something? | More succinct / ideal solution to Project Euler #22 | clojure;programming challenge | null
_unix.105395 | Some days ago I received a LDLC Iris FB2-I5-8-S2 notebook and installed Linux on it (Linux Mint 16 Cinnamon 32bit, Kernel 3.11.0-12-generic).Everything except the TouchPad works out of box (even the touch screen).I searched a lot but not found any solution. Its not a problem of having disabled the device using Fn+F*Here is some output from various commands:lsusb:Bus 001 Device 002: ID 8087:8000 Intel Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 002 Device 007: ID 2808:5001 Bus 002 Device 005: ID 294e:1001 Bus 002 Device 004: ID 1532:000d Razer USA, Ltd Bus 002 Device 003: ID 8087:07dc Intel Corp. Bus 002 Device 002: ID 0489:d616 Foxconn / Hon Hai Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hublspci:00:00.0 Host bridge: Intel Corporation Haswell-ULT DRAM Controller (rev 09)00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 09)00:03.0 Audio device: Intel Corporation Device 0a0c (rev 09)00:14.0 USB controller: Intel Corporation Lynx Point-LP USB xHCI HC (rev 04)00:16.0 Communication controller: Intel Corporation Lynx Point-LP HECI #0 (rev 04)00:1b.0 Audio device: Intel Corporation Lynx Point-LP HD Audio Controller (rev 04)00:1c.0 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 1 (rev e4)00:1c.2 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 3 (rev e4)00:1c.3 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 4 (rev e4)00:1d.0 USB controller: Intel Corporation Lynx Point-LP USB EHCI #1 (rev 04)00:1f.0 ISA bridge: Intel Corporation Lynx Point-LP LPC Controller (rev 04)00:1f.2 SATA controller: Intel Corporation Lynx Point-LP SATA Controller 1 [AHCI mode] (rev 04)00:1f.3 SMBus: Intel Corporation Lynx Point-LP SMBus Controller (rev 04)02:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)03:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. 
RTS5229 PCI Express Card Reader (rev 01)lsmodModule Size Used byparport_pc 31981 0 ppdev 17391 0 arc4 12536 2 rfcomm 53664 0 x86_pkg_temp_thermal 13810 0 coretemp 13195 0 bnep 18893 2 kvm 364766 0 crc32_pclmul 12967 0 aesni_intel 18156 1 aes_i586 16995 1 aesni_intelxts 12749 1 aesni_intellrw 13057 1 aesni_intelgf128mul 14503 2 lrw,xtsablk_helper 13357 1 aesni_intelcryptd 15577 1 ablk_helperiwlmvm 149128 0 mac80211 513247 1 iwlmvmbinfmt_misc 13140 1 microcode 18830 0 snd_hda_codec_realtek 45473 1 snd_hda_codec_hdmi 40508 1 snd_seq_midi 13132 0 snd_seq_midi_event 14475 1 snd_seq_midisnd_rawmidi 25094 1 snd_seq_midirtsx_pci_ms 17807 0 iwlwifi 143578 1 iwlmvmserio_raw 13189 0 memstick 16008 1 rtsx_pci_mssnd_hda_intel 42658 5 snd_seq 55383 2 snd_seq_midi_event,snd_seq_midilpc_ich 16864 0 uvcvideo 71309 0 cfg80211 401436 3 iwlwifi,mac80211,iwlmvmsnd_hda_codec 164003 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intelvideobuf2_vmalloc 13048 1 uvcvideobtusb 23443 0 videobuf2_memops 13170 1 videobuf2_vmallocsnd_hwdep 13272 1 snd_hda_codecmei_me 13933 0 snd_pcm 89488 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intelvideobuf2_core 39125 1 uvcvideobluetooth 323534 12 bnep,btusb,rfcommvideodev 107508 2 uvcvideo,videobuf2_corejoydev 17097 0 hid_multitouch 17191 0 snd_page_alloc 14230 2 snd_pcm,snd_hda_intelmei 66411 1 mei_mesnd_seq_device 14137 3 snd_seq,snd_rawmidi,snd_seq_midisnd_timer 24447 2 snd_pcm,snd_seqsnd 60790 21 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_mididm_multipath 22402 0 scsi_dh 14458 1 dm_multipathsoundcore 12600 1 sndintel_smartconnect 12610 0 mac_hid 13037 0 lp 13299 0 parport 40795 3 lp,ppdev,parport_pcdm_mirror 21715 0 dm_region_hash 15984 1 dm_mirrordm_log 18072 2 dm_region_hash,dm_mirrorhid_generic 12492 0 usbhid 47361 0 hid 87192 3 hid_multitouch,hid_generic,usbhidi915 589697 5 rtsx_pci_sdmmc 22898 0 i2c_algo_bit 13197 1 i915drm_kms_helper 46867 1 i915drm 242354 4 i915,drm_kms_helperrtsx_pci 43458 2 rtsx_pci_ms,rtsx_pci_sdmmcahci 25579 2 libahci 26554 1 ahciwmi 18590 0 video 18777 1 i915Output of xinput can be found in a Pastebin.Do you have an idea how to enable the touchpad? | Touchpad not detected | linux;touchpad;xinput | I wrote a Linux driver for this crappy device, it can be found here:https://github.com/daedric/cntouch_driverI've also submitted it for review and merge.Next time I buy a laptop from LDLC without OS (if that happens), I'll think twice...EDIT:There are only events for:click (or double tap with one finger, same event is generated);right click;horizontal wheel;vertical wheel.There is no event for a tap with two fingers (usually to simulate a right click). |
_softwareengineering.156266 | If I come across a non-critical typo in code (say, an errant apostrophe in a print(error) statement), is it worth making a commit to resolve that error, or should it simply be left alone?Specifically, I'm curious about weighing the gumming-up of the commit log against the value of resolving these non-critical typos. I'm leaning toward resolving them. Am I being pedantic? | Is it worth making a commit solely to resolve non-critical typos? | version control;grammar | My personal feeling is that improving quality is worth the minor inconvenience of an additional commit log entry, even for small improvements. After all, small improvements count a lot when you factor in the broken window effect.You might want to prefix it with a TRIVIAL: tag, or mark it as trivial if your VCS supports it. |
_codereview.116154 | The Win32 API has a so-called 'high performance counter' (QueryPerformanceCounter() and friends) but often it is neither precise enough nor reliable enough, due to high jitter.The low resolution is no surprise since the value is often derived by shifting off the 10 low bits of the CPU's time stamp counter (TSC), after adding a value that reflects the cumulative sleep/hibernation time of the system. A good overview of the official story is given in the MSDN article Acquiring high-resolution time stamps.On many (most?) reasonably non-ancient systems the time stamp counter is global - shared by all logical CPUs in the processor package - which is how Windows can use it for timing purposes in the first place. This makes the RDTSC instruction even more attractive than it always has been, since it can now also be used for global timing measurements of longer duration - across time slices and across logical CPUs. For many purposes it's just as good as QueryPerformanceCounter() but a thousand times as precise.However, modern CPUs with their deep pipelines and out-of-order execution add another difficulty. By the time the TSC value is read out, some of the instructions preceding RDTSC may not have finished executing and/or some instructions that follow RDTSC may already have been executed. These vagaries introduce a lot of jitter when the TSC is used for timing code fragments. The CPUID instruction comes to the rescue here since it has a serialising effect on the execution of the instruction stream; it basically acts like a full barrier. When CPUID returns, all preceding instructions will have finished execution and none of the instructions following it will have begun execution. Its drawback is that it takes hundreds of cycles to execute and that its execution time is highly variable. That's no problem if CPUID is placed before the initial TSC measuremnt - before the code fragment to be timed - but it's a big problem for the second measurement, after the execution of the code fragment to be timed.This is where RDTSCP comes in. This instruction is available on most reasonably modern CPUs, and it forces the retirement of all instructions that precede it in the instruction stream (i.e. the instructions of the code fragment to be timed). A CPUID instruction can then be placed after the RDTSCP - where its own timing cannot add to the measured time - in order to keep subsequent instructions from jumping the queue.A good overview of various issues is in Performance measurements with RDTSC, including cache considerations and so on. The full story about precise measurements with RDTSC is in Intel's article How to Benchmark Code Execution Times on Intel IA-32 and IA-64 Instruction Set Architectures.Hence the timing of a code fragment can be done like this:t0 := RdTSC0; // CPUID before RDTSCcode_to_be_timed;t1 := RdTSC1; // CPUID after RDTSCPcycles := t1 - t0;Note: this applies only to measuring the cycles for code that can be bracketed as shown above, since RdTSC0 adds lots of cycles before the initial measurement and RdTSC1 adds lots of cycles after the second measurement.For flank-to-flank measurements of external events it is best to use plain RDTSCP without CPUID at the back. 
The reason is that the reading of the TSC must still be kept from occurring before the instruction that detects the external event (like the change of a shared memory location), which requires RDTSCP instead of plain RDTSC, but there is no place where a CPUID instruction can be stowed without its timing getting in the way.Hence, precise timing calls for three different functions that read the TSC: a pair RdTSC0 and RdTSC1 for bracketing code fragments, and RdTSCP for flank-to-flank measurements. Of course, there 's a ton of auxiliary functions that are necessary - like for setting thread affinity and priority, or even a humble Sleep(0) in the right places - but those won't be shown here.At long last, here's the code for the three TSC functions:type TTicks64 = type Int64; // signed, so that deltas can be represented cleanly///////////////////////////////////////////////////////////////////////////////////////////////////// CPUID implements a full barrier; it doesn't influence the timing as it is called before RDTSC.// Full story: ia-32-ia-64-benchmark-code-execution-paper.pdffunction RdTSC0: TTicks64; // the 'before' tickasm{$ifdef CPUX64} xor rax, rax push rbx // Delphi requires EBX/RBX to be preserved cpuid // full fence pop rbx rdtsc shl rdx, 32 or rax, rdx{$else} xor eax, eax push ebx cpuid pop ebx rdtsc{$endif}end;//--------------------------------------------------------------------------------------------------// RDTSCP implements a sort of read fence: it waits until all preceding instructions in the stream// have been executed but it doesn't keep later instruction from jumping the queue. That's why the// RDTSCP is bracketed by CPUID from behind.function RdTSC1: TTicks64; // the 'after' tickasm{$ifdef CPUX64} {$ifdef ZX_dont_use_RDTSCP} rdtsc {$else} rdtscp {$endif} shl rdx, 32 or rdx, rax xor rax, rax push rbx push rdx cpuid pop rax pop rbx{$else} {$ifdef ZX_dont_use_RDTSCP} rdtsc {$else} db $0F, $01, $F9 // rdtscp; X2 understands the mnemonic for x64 but not for x86 {$endif} push eax xor eax, eax push edx push ebx cpuid pop ebx pop edx pop eax{$endif}end;//-------------------------------------------------------------------------------------------------// for flank-to-flank measurementsfunction RdTSCP: TTicks64;asm{$ifdef CPUX64} {$ifdef ZX_dont_use_RDTSCP} rdtsc {$else} rdtscp {$endif} shl rdx, 32 or rax, rdx{$else} {$ifdef ZX_dont_use_RDTSCP} rdtsc {$else} db $0F, $01, $F9 // rdtscp; X2 understands the mnemonic for x64 but not for x86 {$endif}{$endif}end;The ZX_dont_use_RDTSCP $define is there to allow compilation without RDTSCP. It makes the measurements less precise but it offers a quick and dirty way of compiling test programs for older CPUs, where the code with RDTSCP would bomb.Whether a given machine has a suitable TSC can be ascertained in two different ways.A quick and dirty manual way is tracing into QueryPerformanceCounter(); if that thing uses RDTSC then it's presumably okay to do so.Another way is to run a bit of test code on every logical CPU in parallel; each thread must be confined to its own logical CPU by setting thread affinity, and thread priority must be raised to the max to increase the likelihood of getting a clean test run without preemption. 
Glossing over a lot of details, the important bits of the test code look like this:constructor CTestThread.Create (mask_bit: DWORD_PTR);begin inherited Create(true); FreeOnTerminate := false; // so that the calling code can read results if SetThreadAffinityMask(Handle, mask_bit) = 0 then zw_ThrowLastWin32Error('SetThreadAffinityMask'); if not SetThreadPriority(Handle, THREAD_PRIORITY_TIME_CRITICAL) then zw_ThrowLastWin32Error('SetThreadPriority');end;//-------------------------------------------------------------------------------------------------procedure CTestThread.Execute;begin t0.measure; g_start_event.WaitFor; t1.measure; if InterlockedDecrement(g_sleeping) = 0 then m_woken_last := true else while g_sleeping <> 0 do ; t2.measure;end;g_sleeping is initialised to the number of threads (logical CPUs) by the thread that initialises the whole shebang, before it rings in the fun by setting the global g_start_event. This event is intended to offer rough synchronisation between threads, before they start precise synchronisation by spinning on the g_sleeping. This reduces the time during which the system is unresponsive.Each thread gets a different bit from the 1-bits found in the affinity mask of the process. t0 etc. are timers that are member variables of the test threads.Sample output on my notebook:mask t0 t1 t1-min(t1) t2 t2-min(t2)-------------------------------------------------------------------------------0001: 000000003905436A 00000000390CE1E7 0 000000003910B14D 690002: 000000003907DA4B 00000000390CEBC7 2528 000000003910B14B 670004: 0000000039095559 00000000390DA98E 51111 000000003910B12E 380008: 00000000390CAC0E 00000000390DA960 51065 000000003910B143 590010: 00000000390E1E4D 00000000390E292A 83779 000000003910B12D 370020: 00000000390F0F2F 00000000390F1826 144959 000000003910B149 650040: 0000000039101DF6 0000000039102965 214910 000000003910B139 490080: 000000003910A73C 000000003910AF25 249150 000000003910B108 * 0Obviously, the thread that is the last to decrement g_sleeping (i.e. m_woken_last == true, marked with a star) will be the only one that can make it to the t2 measurement without delay. The other threads have to wait for the memory change go propagate through their cache hierarchies.Still, it can be seen that spinning on a global variable manages to synchronise all threads to within about 50 cycles of each other (column t2-min(t2)). Contrast this to synchronising via a Win32 event where eons can pass between the different threads being released.I'd be most grateful for reviews of the three TSC functions (superfluous instructions, missing instructions, non-optimal instruction order) and insights regarding the methodology - especially potential weak points.Please bear in mind that the code is intended to be run on the developer's machine and selected test systems, which means that things like automatic selection of appropriate code paths for different CPU architectures and so on are basically irrelevant. Also, the code is not intended to replace functions like QueryPerformanceCounter(), which still serves the bulk of my timing needs. It is intended for cases where the TSC is most appropriate. | Precise timings with low jitter via RDTSC (for x86 and x64) | performance;timer;delphi | null |
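The bracketing discipline above generalizes beyond Delphi and inline asm. As a rough, hedged analogue (not part of the original post, and with no serialization and a far coarser clock than RDTSC/RDTSCP), Python's perf_counter_ns can be used the same way, taking the minimum over repetitions to suppress jitter, since scheduling noise only ever adds to a measurement:

import time

def time_fragment(fn, reps=1000):
    # Analogue of the RdTSC0/RdTSC1 bracket: repeat and keep the minimum.
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter_ns()   # 'before' tick
        fn()                          # code fragment to be timed
        t1 = time.perf_counter_ns()   # 'after' tick
        best = min(best, t1 - t0)
    return best

print(time_fragment(lambda: sum(range(100))), "ns (min of 1000 runs)")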
_unix.271739 | I had a computer with a windows 7 and I installed lubuntu 14.04 on another partition (I think I shrinked the windows NTFS partition first).I installed lubuntu this way:-> NTFS boot-> NTFS windows 7-> extended partition -> / (ext4) -> /home (ext4) -> MAYBE there was a data partition in ext4 (I don't remember, not my computer) -> swapAfter installation, when you booted on windows (in GRUB), it didn't work and it kept rebooting each time.Today I decided to reinstall windows 7 so here is what I did:I launched on lubuntu, installed gparted and removed both Windowspartition.I made one big NTFS partition with the intention to install windows 7 on it.I installed windows 7It erased my / and /home partitions (but they aren'treformatted)I don't even know why it did so, maybe because I should have kept the windows partition unformatted and the boot partition created erased linux. Seriously, I'm very surprised.So now, here are two questions:I) How can I retrieve all the data from /home ? I suppose it's not removed for now since I didn't write anything on these partitions as of now.II) What caused this accident? | Retrieve data from erased home partition while installing win7 | ubuntu;partition;dual boot;data recovery;ext4 | null |
_cs.6977 | I'm reading a book about computer network theory, and one topic is discusses is routing algorithms. It only mentions (probably not intentionally) how routers participate in forming the understood network topology stored in each routers memory - the routing tables. So this brings me to my question, do Host acts like dedicated routers in this case and participate in and become part of that understood topology of the network in nearby routers?For example, it says that routers communicate with each other to form their routing tables to find the best path. Do hosts as end user machines find themselves in this routing table (as an entry in dedicated router, routing tables) or routing topology? Do they participate in forming it?Likewise do Computer Hosts, have entries for nearby dedicate routers in their routing tables? I'm trying to find the relation ship between a router and a host in the this process.The issue I'm having, is if they don't participate in this process, how do the routers not where the end user machines are in the topology?Thanks :-) | Routing Algorithms and Hosts | computer networks | I believe you are talking about the Internet and the IP. Here are answers of your questions: First note the internet is a network of networks. Networks communicate with each other through routers. These big networks are like the ISP, your university or a big organization. Your computer belong to a small network in your ISP subnetwork. This small network is probably your WiFi switch and your computer. Do hosts find themselves in the routing tables ? NO, but from the IP address of the host and its subnetwork, a router can tell whether they belong to the same subnetwork or not. If they do belong, then there is no need to route the message to another router. The message is simple sent down the subnetwork. You have to study in this case the BGP protocol and the structure of IP. Your host got nothing to do with routing. The ISP takes care of all that. or your cellular company if you are using your mobile phone .. or etc .. It would be hugely complicated to let these terminals take care of that. Again, you must have a look on the IP structure. If you try to send a message to X, then your router will look at X (which has an address probably similar to 122.23.12.11 or whatever !) from the upper digits, the router (which is your ISP router in this case) will know which neighbor router to send it to. In general, Internet routing is greedy. How a router selects its neighbors ? [that's another topic]One advice: dont look to the Internet from a pure theoretical point-of-view.I guess I answered your question ? |
_cs.75073 | On my research I came across the following problem.Given a weighted graph $G = (V, E, w)$ and four nodes $s_1, t_1, s_2, t_2$ find the minimum number of edges that need to be deleted from $G$ so that the set of shortest paths from $s_1$ to $t_1$ and the set of shortest paths from $s_2$ to $t_2$ have at least one edge in common.I have been trying to prove that this problem is NP-hard but I was not able to come up with anything. Does anyone have any idea? | Minimum edges to delete to make shortest paths intersect | np hard;shortest path | null |
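No answer is recorded for this question. As a baseline for experimenting on small instances, here is a hedged brute-force sketch in Python with networkx; it assumes the reading that some shortest s1-t1 path and some shortest s2-t2 path must share an edge, and all names are invented:

from itertools import combinations
import networkx as nx

def intersects(G, s1, t1, s2, t2):
    def shortest_path_edges(s, t):
        try:
            paths = list(nx.all_shortest_paths(G, s, t, weight="weight"))
        except nx.NetworkXNoPath:
            return set()
        return {frozenset(e) for p in paths for e in zip(p, p[1:])}
    return bool(shortest_path_edges(s1, t1) & shortest_path_edges(s2, t2))

def min_deletions(G, s1, t1, s2, t2):
    # Try deletion sets in increasing size; exponential, for tiny graphs only.
    edges = list(G.edges())
    for k in range(len(edges) + 1):
        for removed in combinations(edges, k):
            H = G.copy()
            H.remove_edges_from(removed)
            if intersects(H, s1, t1, s2, t2):
                return k, removed
    return None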
_unix.384990 | I'm facing a really frustrating problem on this specific server: every time I press Ctrl+C, I am logged out of the root session. Running CentOS Linux release 7.3.1611 & Bash (4.2.46-21.el7_3.x86_64)[root@server ~]# uname -a Linux server 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux [root@server ~]# ^C [root@server ~]# logout[user@server ~]$ | Control-C triggers logout from root in bash | linux;bash;centos;root | null
_unix.252244 | I'm running an Ubuntu 12.04 VM and trying to convert an rpm file to a deb file. When I run sudo alien --to-deb --scripts oracle-xe-11.2.0-1.0.x86_64.rpm I get this error: dpkg-deb: error: control directory has bad permissions 777 (must be >=0755 and <=0775)I tried sudo chmod 0755 oracle-xe-11.2.0-1.0.x86_64.rpm and sudo chmod -R 0755 on the directory containing the file and still get the error. What is the control directory?UpdateSorry for not realizing this before, but I am getting the following error before the control directory error: dpkg-shlibdeps: warning: /usr/lib/x86_64-linux-gnu/libXm.so.3 has an unexpected SONAME (libXm.so.4) dpkg-shlibdeps: error: no dependency information found for /usr/lib/x86_64-linux-gnu/libXm.so.3I ran sudo apt-file search libXm.so.3 and it returned libmotif4: /usr/lib/x86_64-linux-gnu/libXm.so.3, so I downloaded libmotif4 and still got the error, and then downloaded libmotif3 as well and got the error. I ran sudo alien -g my.rpm and that generated the oracle-xe-11.2.0 and oracle-xe-11.2.0.orig directories. I ran sudo chmod -R 0755 oracle-xe-11.2.0 and then ran debian/rules binary to generate the errors described above. | dpkg-deb: error: control directory has bad permissions | ubuntu;dpkg;deb;alien | null
_cstheory.25338 | A flow network is a directed graph in which each edge has a capacity. A flow through this network is an assignment of a value to each edge that is less or equal to the edge capacity, and such that the net incoming flow to every node balances with the net outcoming flux at that node. Two special nodes are exempted from this last restriction: the source (which can output a net non-zero flux) and the sink (which can receive a net non-zero flux).There are algorithms to find the maximum net flow from source to sink in such a network (for example, Ford-Fulkerson algorithm).I am looking for algorithms that generate pseudo-random admissible flows through such a network. Hopefully the space of admissible flows should be sampled as uniformly as possible. What methods are available here? | Random flows through fixed network | pseudorandom generators;flow problems | null |
_codereview.146167 | This is a classic problem with a classic solution and I've seen it a number of times on this site, but I wanted to know what people thought of this C# implementation, as opposed to the numerous C implementations.Below you'll find the source code, the output of the code and then a link to repl.it, so you can run the code in your browser.Source codeusing System;using System.Collections.Generic;using System.Linq;class MainClass { // An array of valid words. Would normally contain way more words. private static string[] dictArray = new string[] { trainee, train }; public static void Main (string[] args) { Node dictTrie = CreateDictTrie(dictArray); Console.WriteLine(IsWord(Train, dictTrie)); // True Console.WriteLine(IsWord(Traine, dictTrie)); // False Console.WriteLine(IsWord(Trainee, dictTrie)); // True } // Create the trie to use for spell checking, based on an array of valid words. private static Node CreateDictTrie(string[] dictArray) { Node root = new Node(); for (int i = 0; i < dictArray.Length; i++) { string word = dictArray[i]; Node node = root; for (int j = 0; j < word.Length; j++) { char character = word[j]; if (!node.Children.ContainsKey(character)) { node.Children[character] = new Node(); } node = node.Children[character]; } node.IsWord = true; } return root; } // Check whether a string is a valid word. private static bool IsWord(string word, Node dictTrie) { word = word.ToLower(); Node node = dictTrie; for (int i = 0; i < word.Length; i++) { char character = word[i]; if (!node.Children.ContainsKey(character)) { return false; } node = node.Children[character]; } return node.IsWord; }}// Class used for the trie structure.public class Node { public bool IsWord; public Dictionary<char, Node> Children { get; set; } public Node() { Children = new Dictionary<char, Node>(); }}OutputTrueFalseTrueRepl.it linkhttps://repl.it/EOlt/14I tried to make the code pretty readable, so if you wanted to really optimize the code, you could probably remove a few of the variable declarations and so on. But other than that, are there any overall performance problems with my approach? Anything else you think I could have done better? Feedback is much appreciated. Thanks!Edit: I updated the code to use arrays and for loops instead of Lists and foreach loops, which should improve the performance a bit. | Trie-based spell checker | c#;trie | The trie creation could by very pretty if you made it an extension then can call it on the array:var trie = new string[] { trainee, train, tree}.ToTrie();The method itself can be simplified. You can use foreach to get rid of the indexes (unless performance really, really, really matters).Sometimes it's also prettier to use the TryGetValue + else where you can assign all values in a single line to avoid duplication like: currentNode.Children[character] = new Node(); }currentNode = currentNode.Children[character];To my taste it doesn't look nice. Instead I suggest this where I also use currentNode instead of just node which I find is easier to understand.public static Node ToTrie(this string[] values){ var root = new Node(); foreach (var value in values) { var currentNode = root; foreach (var c in value) { var node = (Node)null; if (currentNode.TryGetValue(c.ToString(), out node)) { currentNode = node; } else { currentNode = (currentNode[c.ToString()] = new Node()); } } currentNode.IsWord = true; } return root;}The Node can actually be simplified too by deriving it from a dictionary. By adding the search method to it you can easily use it on any node. 
I named it Contains.Another adjustment you can make is to use string instead of char so you can use one of the constructor overloads of the dictionary and make it case insensitive. This way you won't need the ToLower.public class Node : Dictionary<string, Node>{ public Node() : base(StringComparer.OrdinalIgnoreCase) {} public bool IsWord { get; set; } public bool Contains(string value) { var currentNode = this; foreach (var c in value) { var node = (Node)null; if (currentNode.TryGetValue(c.ToString(), out node)) { currentNode = node; } else { return false; } } return currentNode.IsWord; }}Usage:var result1 = trie.Contains("train"); // Truevar result2 = trie.Contains("TraINE"); // Falsevar result3 = trie.Contains("TRAINEE"); // True
_cs.66410 | I am trying to create an algorithm for a Distributed Systems and Algorithms lecture. It is to control flows between a producer and a consumer. I want to understand how to manage the indices of insertion and extraction into the consumer buffer.Let there be two computers, $P$ the producer and $C$ the consumer, such that the producer sends messages to the consumer through a unidirectional channel. When the application layer of $P$ wants to send a message, it calls produce(m), where $m$ is the message argument. When the application layer of the consumer wants to receive a message from the producer, it calls consume(m), where $m$ is passed by reference.To resolve the asynchronism of sites $P$ and $C$, one solution is to subjugate production to emission: each message sent consumes an authorization. On the other hand, the consumer sends back an authorization at the end of each call to consume(m). It goes without saying that the consumer consumes only if its buffer isn't empty.Variables: two control variables. Nbmess is the consumer variable giving the number of messages in the buffer; Nbcell is the number of authorizations. On the other hand, buffer management leads to creating the following variables: T, the buffer of size N containing the messages to consume; in, the index of insertion; out, the index of retrieval.Algorithm: produce(m)Begin Wait(Nbcell>0) send_to(C,m); Nbcell = Nbcell -1;Endon_reception_of(C,Ack)Begin Nbcell = Nbcell +1;EndAnd for the consumer: consume(m)Begin Wait(Nbmess>0) m = T[out]; out = (out+1)%N; Nbmess = Nbmess -1; send_to(P,Ack);Endon_reception_of(P,m)Begin T[in]=m; in = (in+1)%N; Nbmess = Nbmess + 1;EndMy question: I don't understand the indices of extraction: in = (in+1)%N; out = (out+1)%N; How come the buffer never overflows?The given proof is that we always have the equality Nbmess + Nbcell + Nbt = N, with Nbt the number of messages in transit: when we call produce, Nbcell decreases and Nbt increases; when we call consume, Nbmess decreases and Nbt increases; when receiving a message, Nbmess increases and Nbt decreases; when receiving an ack, Nbcell increases and Nbt decreases.But does that imply that in = (in+1)%N and out = (out+1)%N are safe?My first guess was to construct an array of so huge a size that it would never overflow, such as: produce(m)Begin Wait(NbCell>0) T[in]=m; NbCell--; in++;Endconsume(m)Begin Wait(Nbmess>0) m=T[out] out++; NbCell++End | Managing the buffer size of point-to-point producer consumer distributed algorithm | distributed systems | null
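The modular indices are safe precisely because of the quoted invariant: with only N authorizations in circulation, at most N messages can be unread at once, so in can never lap out. A small Python simulation of the invariant (names invented; transit latency folded to zero, so Nbt == 0):

import random

N = 4
T = [None] * N
in_i = out_i = 0
nbcell, nbmess = N, 0      # with zero-latency links, nbcell + nbmess == N

for step in range(10_000):
    if random.random() < 0.5 and nbcell > 0:   # produce: consumes an authorization
        nbcell -= 1
        T[in_i] = step                          # on_reception_of(P, m)
        in_i = (in_i + 1) % N
        nbmess += 1
    elif nbmess > 0:                            # consume: frees a slot, acks the producer
        _ = T[out_i]
        out_i = (out_i + 1) % N
        nbmess -= 1
        nbcell += 1
    assert nbcell + nbmess == N                 # the invariant from the question
    assert 0 <= nbmess <= N                     # at most N unread slots: 'in' never laps 'out'

print("10000 steps, invariant held, no overflow")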
_unix.176168 | From the book Linux Administration Made Easy:When deciding on a backup solution, you will need to consider the following factors:Portability: Is backup portability (i.e. the ability to back up on one Linux distribution or implementation of Unix and restore to another; for example from Solaris to Red Hat Linux) important to you? If so, you'll probably want to choose one of the command-line tools (e.g. dd, dump, cpio, or tar), because you can be reasonably sure that such tools will be available on any *nix system.What does "back up on one Linux distribution or implementation of Unix and restore to another" mean? Is it to back up a Linux system and then restore it later? Then what does it mean by "restore to another"?Remote backups: Is the ability to start backups and restores from a remote machine important to you? If so, you'll probably want to choose one of the command-line tools or text-based utilities instead of the GUI-based utilities (unless you have a reasonably fast network connection and the ability to run remote X sessions).Network backups: Is performing backups and restores to and from networked hosts important to you? If so, you'll probably want to use one of several of the command-line utilities (such as tar) which support network access to backup devices, or a specialized utility such as Amanda or one of several commercial utilities.It seems that remote backup and network backup are the same. What are their differences? | What do portability of backup, remote backup and networked backup mean? | backup | null
_softwareengineering.319218 | I have been learning algorithms and trying to solve problems, and now I have the following problem:There is a 4x4 matrix containing fields with heights. There is a start field with a given height, and also a maximum height a field can have. To be able to traverse from one field to another, the height of the current field must be higher than or equal to that of the field we want to go to.There are also unmarked fields with no height assigned to them, meaning we can choose it.The goal is to traverse all the fields with a given height by changing the height of the unmarked "?" fields. For a solution to count as valid, all given "?" fields have to have an assigned height.I think this will need brute-forcing all the possible combinations of the "?" fields.Example: "2 2" with the grid rows xx*x / x1?1 / x?1x / xxxx. The minimum height a field can have is 0. The first digit represents the height of the * and the second the maximum height a field can have. So the * represents the start point and has height 2 (for this case we have 2 as the maximum height); from there we need to go to the other fields with numbers, by changing the value of the ? fields. We need to find how many variations are valid.In this case there are 6, because it does not matter whether the ? on the 3rd row gets traversed or not. Here are the solutions (rows listed per grid): xx*x / x121 / x21x / xxxx; xx*x / x121 / x11x / xxxx; xx*x / x121 / x01x / xxxx; xx*x / x111 / x21x / xxxx; xx*x / x111 / x11x / xxxx; xx*x / x111 / x01x / xxxx. The fields that matter have been traversed in all cases. We use Breadth First Search to traverse all the fields. The ? in the 3rd row is not traversed in some of the cases because this field is not in the group of the target fields and its height does not affect reaching any of the target fields. | How to implement backtracking to check if all fields have been traversed | algorithms | null
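A hedged Python sketch of the brute force the asker proposes: enumerate every height assignment for the "?" cells, then BFS from "*" (moving only onto cells whose height is <= the current cell's) and check that every numbered target was reached. The grid encoding ('x' blocked, '*' start) is my reading of the example; it reproduces the expected count of 6.

from collections import deque
from itertools import product

def count_valid(grid, start_h, max_h):
    cells = [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))]
    unknowns = [p for p in cells if grid[p[0]][p[1]] == "?"]
    targets = [p for p in cells if grid[p[0]][p[1]].isdigit()]
    start = next(p for p in cells if grid[p[0]][p[1]] == "*")
    valid = 0
    for heights in product(range(max_h + 1), repeat=len(unknowns)):
        h = {p: int(grid[p[0]][p[1]]) for p in targets}   # fixed heights
        h.update(dict(zip(unknowns, heights)))            # trial '?' heights
        h[start] = start_h
        seen, q = {start}, deque([start])
        while q:                                          # plain BFS
            r, c = q.popleft()
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nxt in h and nxt not in seen and h[nxt] <= h[(r, c)]:
                    seen.add(nxt)
                    q.append(nxt)
        valid += all(t in seen for t in targets)
    return valid

print(count_valid(["xx*x", "x1?1", "x?1x", "xxxx"], start_h=2, max_h=2))  # 6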
_unix.375489 | Today morning I was browsing the internet with Chromium and it closed out of nowhere. I went to open it again but it opens very briefly (less than a second) and then it closes back again. This has never happened before.The only way I can open it back up is by completely deleting ~/.config/chromium/Default. Then I can open it back again but it closes again within a few minutes. Things I've tried:I have purged and reinstalled ChromiumI have deleted every file that was crated within the time frame that the problem started (from ~/.config/chromium/)I have tried browsing only different websites to see if it's some specific kind of website that triggers it. Apparently the website I am in doesn't make a difference.For the moment I'm using Chrome, but I really would like to get back to using Chromium.If I open Chromium with a terminal these are the messages I get (keep in mind that as far as I know the first two messages are normal and happen even when Chromium works):Gkr-Message: couldn't connect to dbus session bus: Failed to connect to socket /tmp/dbus-FvyymbKhrF: Connection refused(chromium-browser:29177): LIBDBUSMENU-GLIB-WARNING **: Unable to get session bus: Could not connect: Connection refusedReceived signal 11 SEGV_MAPERR 000000000010#0 0x7f0dc4dcc425 base::debug::StackTrace::StackTrace()#1 0x7f0dc4dcc80b <unknown>#2 0x7f0dc50f7390 <unknown>#3 0x562a8f85edc8 <unknown>#4 0x562a8f861656 <unknown>#5 0x562a8f861df9 <unknown>#6 0x562a8f862143 <unknown>#7 0x7f0dc4e47821 <unknown>#8 0x7f0dc4dcdeea base::debug::TaskAnnotator::RunTask()#9 0x7f0dc4df6e90 base::MessageLoop::RunTask()#10 0x7f0dc4df897d base::MessageLoop::DeferOrRunPendingTask()#11 0x7f0dc4df983d <unknown>#12 0x7f0dc4dfa300 base::MessagePumpLibevent::Run()#13 0x7f0dc4df5f15 base::MessageLoop::RunHandler()#14 0x7f0dc4e20628 base::RunLoop::Run()#15 0x7f0dc4e4ce36 base::Thread::ThreadMain()#16 0x7f0dc4e47726 <unknown>#17 0x7f0dc50ed6ba start_thread#18 0x7f0dae79c3dd clone r8: 000000000000002e r9: 0000562a912b56ec r10: 0000000000000000 r11: 00007f0dae829f50 r12: 00007f0d23ffcff0 r13: 0000000000000008 r14: 0000000000000008 r15: 00007f0d23ffceb0 di: 0000000000000000 si: 00007f0d23ffceb0 bp: 00007f0d23ffcf00 bx: 00007f0d23ffceb0 dx: 000000000000006b ax: 0000000000000000 cx: 00007f0d0800a790 sp: 00007f0d23ffce60 ip: 0000562a8f85edc8 efl: 0000000000010206 cgf: 002b000000000033 erf: 0000000000000004 trp: 000000000000000e msk: 0000000000000000 cr2: 0000000000000010[end of stack trace]Calling _exit(1). Core file will not be generated.I'm using Linux Mint 18.1 with a 4.10 Kernel.CheersEDITAs Faheem Mitha pointed out, this could be a hardware issue, although the fact that the laptop shows no other weird behaviors kind of points to software. I will use memtest86 when I can.One other thing that I found out these files are created in ~/.config/chromium:Jul 5 10:02 SingletonSocket -> /tmp/.org.chromium.Chromium.vibNiB/SingletonSocketJul 5 10:02 SingletonLock -> NP900X3N-2368Jul 5 10:02 SingletonCookie -> 1648236092507555754And the output of file Singleton* shows thatSingletonCookie: broken symbolic link to 1648236092507555754SingletonLock: broken symbolic link to NP900X3N-2368SingletonSocket: symbolic link to /tmp/.org.chromium.Chromium.vibNiB/SingletonSocketI don't know if they are important, but the fact that two of those links are broken and that NP300X3N is my laptop model tell me that they have something to do with the issue. 
| Can't open Chromium anymore | linux mint;chrome | Just for completeness: it turns out this was simply a bug that got corrected after a couple of updates from the repository. |
_codereview.74668 | Please take a look at the following Scala program and give me suggestions for improvement. I'm sure there would be plenty. This is my very first Scala code, so please don't be frustrated because of its low quality.abstract class Expression { def eval() : List[List[String]] = this match { case Identifier(token) => List(List(token)) case Union(exprs) => exprs.flatMap(e => e.eval) case Sequence(exprs) => exprs.map(e => e.eval).reduceLeft(product) case Iteration(min, max, expr) => { val subResult = expr.eval; (min to max toList) .flatMap(card => List.fill(card)(subResult).foldLeft(List(List[String]()))(product)) } } def product(first: List[List[String]], second: List[List[String]]) : List[List[String]] = { for { x <- first; y <- second} yield x ++ y }}case class Identifier(token: String) extends Expressioncase class Union(subExprs: List[Expression]) extends Expressioncase class Sequence(subExprs: List[Expression]) extends Expressioncase class Iteration(minCard: Int, maxCard: Int, subExpr: Expression) extends Expressionobject App { def main(args: Array[String]) = { println( Iteration( 1, 2, Union( List( Identifier(cat), Sequence( List( Identifier(dog), Iteration( 0, 1, Identifier(pig) ), Identifier(bird) ) ) ) ) ).eval ) }} | Scala Case Classes | scala | This is quite good. I have one suggestion, though:Use traits instead of abstract classes, and since probably all your data of type Expression will be defined only in this file make it a sealed trait:sealed trait Expression { // same body}Sealing a trait (or abstract class for that matter) has the advantage that whenever you'll do a pattern match over a value the compiler can tell you if you omitted a case. Also, using a trait has two advantages over abstract classes:traits can be used to express everything that an abstract class can, with little syntactic overhead (when expressing the equivalent of class parameters). While the converse is not true (you cannot inherit, or mixin, multiple abstract classes).traits are a slight performance optimization, since for any non-abstract member of a trait, the compiler literally copies those definitions in the bodies of the subclasses (not that this optimization is ever truly useful, the knowledge of how the compiler works is more important though).Second, as a response to all suggestions that you should use the OO style more, that's really a choice that depends on the situation. By using the functional design you leave yourself vulnerable to adding new data, i.e. whenever you add a new case class you have to update every pattern match, but adding new functionality does not require you to update any of the previously defined case classes. While in OO the opposite would be true. So choosing between the two styles is really a question about leaving your code open to easy extension with respect to new data (OO), or new functionality (functional). |
_codereview.137916 | I need to encrypt/decrypt 2D arrays (double pointers) using AES-128/CTR using Intel TinyCrypt (written in C). The following are two helper methods to simplify the library usage. Any comments would be highly appreciated. Thanks in advance.#include <tinycrypt/constants.h>#include <tinycrypt/ctr_mode.h>#include <tinycrypt/aes.h>#define AES_128_KEY_LENGTH 16#define AES_128_CTR_LENGTH 16typedef struct aes_128_ctr_params_t{ byte key[AES_128_KEY_LENGTH]; byte ctr[AES_128_CTR_LENGTH];} aes_128_ctr_params_t;//---------------------------------------------------------------------------inline int32_t encrypt(uint8_t const * const * const plaintext, uint8_t * const * const cihpertext, size_t const height, size_t const width, aes_128_ctr_params_t params) { //TODO: Do some validation here! struct tc_aes_key_sched_struct sched; uint32_t result = TC_CRYPTO_SUCCESS; result = tc_aes128_set_encrypt_key(&sched, params.key); if (result != TC_CRYPTO_SUCCESS) return result; size_t const row_size_in_bytes = sizeof(uint8_t) * width; for (size_t row_index = 0; row_index < height; ++row_index) { result = tc_ctr_mode(cihpertext[row_index], row_size_in_bytes, plaintext[row_index], row_size_in_bytes, params.ctr, &sched); if (result != TC_CRYPTO_SUCCESS) return result; }}//---------------------------------------------------------------------------inline int32_t decrypt(uint8_t const * const * const cihpertext, uint8_t * const * const plaintext, size_t const height, size_t const width, aes_128_ctr_params_t params) { //TODO: Do some validation here! struct tc_aes_key_sched_struct sched; uint32_t result = TC_CRYPTO_SUCCESS; result = tc_aes128_set_encrypt_key(&sched, params.key); if (result != TC_CRYPTO_SUCCESS) return result; size_t const row_size_in_bytes = sizeof(uint8_t) * width; for (size_t row_index = 0; row_index < height; ++row_index) { result = tc_ctr_mode(plaintext[row_index], row_size_in_bytes, cihpertext[row_index], row_size_in_bytes, params.ctr, &sched); if (result != TC_CRYPTO_SUCCESS) return result; }} | Intel TinyCrypt: AES-128/CTR to encryption/decryption of 2D arrays | c++;c;cryptography;aes | I find your code a bit dense and hard to read overall.Your identifier names are compound words but all lower case.plaintextI prefer plainText others would prefer plain_text (and a lot of the code uses this second C like style). But either is preferable to your current style.This seems redundant.uint32_t result = TC_CRYPTO_SUCCESS;result = tc_aes128_set_encrypt_key(&sched, params.key);Just use one line:uint32_t result = tc_aes128_set_encrypt_key(&sched, params.key);Technically both functions exhibit undefined behavior (in C++ not sure about C). There is no return on successful completion. if (result != TC_CRYPTO_SUCCESS) return result; } // Add the following line return result;} |
_unix.110294 | I have a notebook with Ubuntu/Linux. When I'm at home I plug it into my 20 monitor thats hooked up to a switch. Occasionally I need to use it remotely and the screen is so small I need to zoom in numerous times on every page. Is there a way to set zoom command into the .kshrc so it will automatically be larger? Kinda doubt it but thought I would ask. | Need to Zoom in permanently | linux | null |
_unix.77723 | In Debian Squeeze, if I right clicked on something in the Applications menu I could lock it to the top bar. I upgraded to Debian Wheezy and now if I right click it just opens the program. I'm using Virtualbox, so maybe the right click just isn't working. I also said debian gnome because it looks different than the regular gnome I know.So how can I create shortcuts in gnome 3/debian wheezy? I don't care if it's pinning it to the top bar, or locking it in the task bar (bottom). And while I'm at it, is there a way I can get a shortcut to the desktop (preferably a button, which I had in Debian squeeze). I've googled for all sorts of combinations of debian (or gnome) shortcut to desktop and debian (or gnome) pin to taskbar | Lock to launcher/Pin to taskbar in Debian Wheezy/Gnome (was possible in Squeeze) | debian;gnome | Use the Alt key together with the right-mouse.On applications that should give you the possibility to Move/Remove. On the open area of the taskbar in Add to Panel… |
_softwareengineering.313010 | In the language I work with, Progress OpenEdge 11.5.1, there is nothing like anonymous classes. However, the system design would really benefit the use of such classes.Is there some nice known way of constructing such classes without having them in the language specification? My thoughts goes like a class having a constructor that must be injected by the user object in a smart sense or a constructor with some key. All ideas are welcome.Background:I have class A with one purpose: to calculate a value P. Some users of A, but not all, need a heavy machinery in order to calculate P. Hence, I would like to hide such calculations for other users in order to speed up loading of the object A. | Alternatives to anonymous class | object oriented;language design;language features | You can use the Strategy Pattern to implement the various strategies to calculate P. The basic idea is that you make a class for each version of the algorithms to calculate P, they each inherit from the same interface (for instance ICalculateP). Your class A would then have a member of type ICalculateP that binds on runtime to one of the concrete classes.That way at runtime you can decide which strategy fits best with the specific situation. |
_cstheory.12012 | An elaboration on this question, but with more constraints.The idea is the same, to find a simple, fast algorithm for k-nearest-neighbors in 2 euclidean dimensions. The bucketing grid seems to work nicely if you can find a grid size that will suitably partition your data. However, what if the data is not uniformly distributed, but has areas with both very high and very low density (for example, the US population), so that no fixed grid size could guarantee both enough neighbors and efficiency? Can this method still be salvaged?If not, other suggestions would be helpful. | Simple k-nearest-neighbor algorithm for euclidean data with highly variable density? | ds.algorithms;ds.data structures;cg.comp geom;clustering;near neighbors | null |
_webapps.37886 | This bit of code will make a new blogger post. What can I do to update (not create) an existing page (not post)?curl -v --request POST -H Content-Type: application/atom+xml \ -H Authorization: GoogleLogin auth=$AUTH \ http://www.blogger.com/feeds/$FEED/pages/default --data @blog_post.xml | Automatically updating a blogger page? | blogger | null |
_cs.54317 | As a fun project, I've been working on a C# implementation of Richard Korf's - Finding Optimal Solutions to Rubik's Cube Using Pattern Databases.https://www.cs.princeton.edu/courses/archive/fall06/cos402/papers/korfrubik.pdfI actually have it working, I'm just trying to improve my solution.One thing that Korf glazes over in his paper is how he stores and indexes into the pattern databases. Ideally, I think we want to use an instance of a rubik's cube to generate an index into an array.My question is about the best way to generate this index.My solution is to generate a minimal perfect hash. This involves keeping ALL of the cubes in memory until I have discovered the entire pattern database then generating a minimal perfect hash based off of that. The MPH takes a couple hours to run depending on the pattern database size, but I only need to do it once since I save it to disk. In the end, I can throw away the cubes themselves storing only the MPH. That way I can take a randomized rubik's cube, apply the pattern, then look up the array index in the MPH to get an estimated solution length.I believe Korf and Shultz describe a better way to determine the cube's index in their 2005 paper called Large Scale Breadth-First Searchhttps://www.aaai.org/Papers/AAAI/2005/AAAI05-219.pdfThis paper describes an algorithm to generate an index based off of the lexicographical ordering of a permutation. Basically you can take the permutation {1, 2, 3} and figure that it is the smallest with an index of 0. {1, 3, 2} is next up with an index of 1 and so on.I feel like I should be able to apply this algorithm to a rubik's cube to get its index within a pattern database, but I'm having a hard time figuring out how it would work in practice.The corners only pattern database for instance contains all rubik's cubes that have had their edge stickers taken off. There are exactly 88,179,840 cubes in this set. Any corner cubie on a rubiks cube can be in one of 24 different states. The state of the 8th corner cubie can be calculated based on the other 7 so cubes in the corners only pattern database each have 7 values between 0 and 23e.g.{0, 3, 6, 9, 12, 15, 18, 21} defines the solved cube with all edge stickers removed.if I rotate the front face 90 degrees the permutation might be:{0, 3, 11, 23, 12, 15, 8, 20}Is there a way to get an index out of these sort of permutations? | Indexing into a pattern database - Korf's Optimal Rubik's Cube solution | algorithms;permutations | You don't explain what the numbers from 0 to 23 mean, but according to this answer, you can represent the state of the corners using eight pairs $(p_i,o_i)$, where $(p_0,\ldots,p_7)$ is a permutation of $(0,\ldots,7)$, $o_i \in \{0,1,2\}$, and $o_7$ (say) is determined by $o_0,\ldots,o_6$. In total, this gives $8! \cdot 3^7 = 88179840$ degrees of freedom. Assuming that you can decompose your $\{0,\ldots,23\}$ to pairs $(p_i,o_i)$, you can easily convert a position to an index by encoding separately the permutation $(p_0,\ldots,p_7)$ (which the AAAI paper explains how to do) and the values $o_0,\ldots,o_6$, which you can encode in base 3. Putting the two values together in the obvious way (for example, $3^7p + o$ or $8!o + p$), we get an index. |
_softwareengineering.251445 | I have a classic Java webapp. It is composed of a database (PostgreSQL), a servlet container (Tomcat) and my code (deployed on Tomcat as a *.war file).I want to package/deploy it using Docker (mostly for testing for now), but I'm unsure what would be the best way to map it.My initial idea was to have an app-in-a-box - define a container that has Java, Postgres and Tomcat on it, exposing just the http port.Further reading of the Docker docs shows that this, although possible (install and run supervisord as the single foreground process, have it start both Postgres and Tomcat) is probably not the intended usage. Going by the spirit of the tutorials I should probably create a container for Postgres, another for Tomcat, and a data-container to hold the application code (my *.war) and database files. This would mean 3+ containers (should the db files and *.war share the same data container?)What's the common practice here?Since I have no previous experience with Docker, what pitfalls can I expect from each approach?Is there some other approach I'm missing? | docker-izing a classical db-based webapp - single or multiple containers? | java;virtualization;docker | The recommendations I've seen is to have all-in-one container: Docker Misconceptions:Misconception: You should have only one process per Docker container!It's important to understand that it is far simpler to manage Docker if you view it as role-based virtual machine rather than as deployable single-purpose processes. For example, you'd build an 'app' container that is very similar to an 'app' VM you'd create along with the init, cron, ssh, etc processes within it. Don't try to capture every process in its own container with a separate container for ssh, cron, app, web server, etc.One way to think about it is to ask yourself if you'd ever need one piece running without the others. OK, maybe you'd want the DB running without the app server, but how often? |
_softwareengineering.340220 | I'm developing a Java software according to the object-oriented Layers architectural pattern. Every layer should be clearly separated from the rest, and provide a well-defined interface to use it's services (maybe more than one).A common example for these layers could be an architecture consisting of a request processing layer, a business logic layer and a persistence layer.However, I'm not sure how to use Java interfaces correctly to implement this structure. I guess that each layer should have it's own Java package. Should every layer contain one Java interface that defines methods to access it? Which classes implement these interfaces? Classes from the layer or classes from outside the layer? Which classes methods does an outside object use if it wants to use a layer? | Java Interfaces in Layers pattern | java;object oriented design;layers | The idea of the Layered Architecture is that each layer provides an abstraction to the previous layer, so a layer depends only on the previous layer. As an example with a Web ServiceRequest Managementpublic interface IXController { post(); get(); delete();}public class XControler implements IXController {public XController (IXService service){} post(){} get(){} delete(){}}Business Layerpublic interface IXSercice { doSomething(); }public class XService implements IXService { public XService (IXDao dao){} doSomething(){}}Persistence layerpublic interface IXDao { doSomething();}public class XDao implements IXDao { public XDao (){} doSomething(){}}As you may see the interfaces role is only to provide contracts between your layers, this can also be useful when using soke patterns as Factory or Dependency injection.Who access the interfaces? Whoever has a dependency on the object. Everything else is solved with SOLID principles and OOP, and you should consider using design patterns.Anything else? |
_unix.368557 | I had been having a big problem with the VPN connection that I was using to pass my ISP's filtering constraints against sites like Facebook or YouTube. The true problem was that some websites which are as default filtered by the ISP would show up, but not Facebook or YouTube. After spending so much of my time purging and reinstalling all I would assume to be related to my ubuntu 16.04 networking system, I suddenly and but mere chance notice the Mask value assigned to what I think is related to my VPN connection:ppp0 Link encap:Point-to-Point Protocol inet addr:168.158.114.168 P-t-P:80.84.49.159 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1400 Metric:1 RX packets:8208 errors:0 dropped:0 overruns:0 frame:0 TX packets:7599 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:6213278 (6.2 MB) TX bytes:1098996 (1.0 MB)To check it by myself to see if I can make my browser to view youtube, I tried the command below:sudo ifconfig ppp0 netmask 255.255.255.0refreshed, and the browser just started loading the previously unavailable web page. Now, when I disconnect from the VPN service, everything would become the same as it was before, and I should retype that command to be able to use vpn again. As there is no ppp0 value after disconnecting the vpn, i would like to ask you to help me find the file responsible for holding the values as defaults to be used by vpn service each time I connect to it.P.S: Please forgive me and my dumbness about the whole thing, But I am terribly exhausted by all the effort spent, and do not dare to add something to any of the files just to check if it works or not. Thank you very much for your help in advance. | Having problem in finding the file to set a permanent netmask | vpn;ifconfig;defaults | null |
_softwareengineering.107242 | I have a project to develop an inventory system for a medical shop. Till date, I confronted simple requirements which were fulfilled with XML as the backend db. My interaction knowledge with XML is pretty-good and I can make almost anything with XML using LINQ-TO-XML.Since, this is an inventory system, I am bit confused as to which database should I use. Can i stick with XML or proceed with SQL Server 2008. In case I use SQL Server, will I need to install the SQL Server on Client Machine as well. This is important information because SQL Server is a commercial product, hence i need to include this in my project estimation cost. | Which database to prefer while developing a WPF medical inventory system? | sql server;wpf;xml | You don't need SQL Server. You can use any number of open source databases such as MySQL and PostgreSQL. There are others such as MongoDB, CouchDB, neo4j, etc, but you're not really in need of NoSQL solution (and imho, they have a little more of a learning curve as they aren't ORM friendly and still relatively new).Depending on the size of the application, I might also recommend SQLite, which is a file-based database ;) However, I'd only recommend SQLite if you don't have a large number of concurrent users or a very large dataset. See this document for further reasons as to why to and not to use SQLite.I would highly suggest not using XML for your back end solution as it isn't scalable and much more error prone than using a proper database solution.Take a look at http://sqlite.phxsoftware.com/ for a managed wrapper for SQLite.Side note: I haven't used C# in a number of years, so I may be missing something in terms of managed support outside of SQLite. |
_unix.209866 | When doing ifup wlan0 on a system with / mounted as read-only (embedded computer), I get this error:Failed to connect to non-global ctrl_ifname: wlan0 error: Read-only file systemInternet Systems Consortium DHCP Client 4.3.1Copyright 2004-2014 Internet Systems Consortium.All rights reserved.For info, please visit https://www.isc.org/software/dhcp/can't create /var/lib/dhcp/dhclient.wlan0.leases: Read-only file systemListening on LPF/wlan0/80:1f:02:d3:42:b8Sending on LPF/wlan0/80:1f:02:d3:42:b8Sending on Socket/fallbackDHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 13...DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5No DHCPOFFERS received.No working leases in persistent database - sleeping.On the other hand, when doing ifup wlan0 with / mounted as read-write, no problem, an IP is succesfully attributed.How to make DHCP work on a read-only root filesystem?# /etc/network/interfacesauto loiface lo inet loopbackauto eth0iface eth0 inet dhcpauto wlan0allow-hotplug wlan0iface wlan0 inet dhcpwpa-ssid <myssid>wpa-psk <mypasswd> | DHCP and read-only root filesystem | networking;dhcp;readonly;etc | null |
_cstheory.4988 | Let's say we discover alien civilizations that are able to send and receive messages using an interstellar digital communications channel. (Say using modulated radio waves, laser pulses, re-positioning stars in various orbits, what have you.) Let's assume we have decided to make contact with them.Once we initiate a dialog, how would we go about establishing a communications protocol and language? What methodology would we use to agree on a basic vocabulary and ways of expressing logical ideas? Is it ad-hoc or is there some way to optimize the process of establishing a common language based on symbolic manipulations. We would want to agree on a language quickly and minimize the resources required to encode and send messages (since they're quite slow to send).Next, reciprocity: Once we have a shared language, how would we make sure that both sides reciprocate in trading secrets? That is, we don't want to be in a situation where we give away valuable technology without receiving anything in return. Can both sides prove that they posses certain technology? Is there a way to send results piecemeal, gradually, so that each side can have increasing confidence in the value of the message? | Best alien communication protocol? | big picture;communication complexity | Your first question is the topic of Brendan Juba's PhD thesis on Universal Semantic Communication. You should take a look at it, as well as some of the papers on his website.As for your second question, you might want to read about zero-knowledge proofs. |
_webapps.101087 | I have a limited monthly bandwidth allowance. 240p is almost always good enough for me (and if there isn't any on-screen text, even 144p is usually sufficient). When I'm on the YouTube.com website, it respects my preferences (well, usually; sometimes there isn't a version at that low a quality, and when that happens it chooses a medium quality from the available range). Is there any way to force embedded clips on sites that I do not control to also respect my preferences? Quite often they play at a much higher resolution, and by the time I've noticed, they've already buffered the rest of the clip anyway, so changing the quality manually doesn't help. | Force low-bandwidth on embedded YouTube clips | youtube | null
_unix.111041 | I am trying to install Gerris. The website is: http://gfs.sourceforge.net/wiki/index.php/Mac_OSX_Installation
I followed the instructions, but I could not install the Gerris dependencies. I installed Xcode, the command line tools, and XQuartz as the page suggests, but I am not sure about brew, because of what the terminal says in the steps below. As the page says, I made a directory:
% cd
% mkdir soft
The page then says:
Paths
For installed software to be properly localized, various environment variables have to be set accordingly in ~/.bashrc
% export PATH=$PATH:$HOME/soft/bin
% export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/X11/lib:$HOME/soft/lib
% export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/X11/lib/pkgconfig/:$HOME/soft/lib/pkgconfig
Note: make sure that the file ~/.profile contains the line
source ~/.bashrc
for these changes to be taken into account.
I created ~/.bashrc with nano and included the lines
% export PATH=$PATH:$HOME/soft/bin
% export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/X11/lib:$HOME/soft/lib
% export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/X11/lib/pkgconfig/:$HOME/soft/lib/pkgconfig
and pressed Ctrl+X; then I created ~/.profile with nano, included source ~/.bashrc on the last line, and pressed Ctrl+X. But at the stage below I got stuck.
Gerris dependencies
Now that brew is installed, almost every dependency needed by gerris can be installed in just a single command line:
% brew install gtkglext
% brew install gnuplot
% brew install gawk
% brew install gsl
% brew install gfortran
% brew install open-mpi
% brew install proj
% brew install netcdf
% brew install ode
% brew install fftw
% brew install ffmpeg --with-theora
% brew install coreutils
% brew install autoconf
% brew install libtool
% brew install automake
When I type brew install gtkglext, the terminal says
samires-mbp:~ samirebalta$ brew install gtkglext
-bash: brew: command not found
I set up ~/.bashrc and ~/.profile, so I do not know what is wrong. | Mac 10.9.1 // Gerris installation // | terminal | OK, there are a couple of issues. First, the lines in your .bashrc should be:
export PATH=$PATH:$HOME/soft/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/X11/lib:$HOME/soft/lib
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/X11/lib/pkgconfig/:$HOME/soft/lib/pkgconfig
You shouldn't have the % at the beginning of each line; that's a mistake on the page you followed. The other issue is that you don't seem to have installed brew. The page you linked to includes these instructions:
brew
Homebrew is a package manager that will make gerris (and dependencies) installation smooth and manageable. To install brew, just enter
ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
So, you need to run the command above in order to install brew. Once you have done so, the brew command will be available and you won't get the error you show.
_cs.22256 | Consider a set of $N$ nodes. There is a $N\times N$ non-negative valued matrix $D$ where the $(i,j)$th element $d_{ij}$ gives the positive metric between node $i$ and $j$, where $i,j\in [N]$. Thus the diagonal entries of $D$ are all zero and $d_{ij}=d_{ji}$ so $D$ is symmetric. Then there is a set of $k$ colors. I want to assign these colors to the $N$ nodes such that the minimum metric of a common color between any pair of nodes is maximized. So if $c(i)$ is the color assigned to $i\in [N]$ by the assignment $a\in A$, where $A$ is the set of all possible color assignments, we are looking for $$\max_{a\in A} \min_{i,j} \{d_{ij}:c(i)=c(j)\}.$$Is this problem NP-hard? If it is, what sort of reduction can be used to show that this problem is NP-hard? | Relaxed graph coloring, with penalties for assigning adjacent vertices the same color | complexity theory;reductions;np hard;colorings | Yes. Reduce from graph coloring.$D$ is given by $d_{i,j} = \begin{cases} 0 & \text{ if } i=j \\ 1 & \text{ if } i \text{ is adjacent to } j \\ 2 & \text{ else}\end{cases}$. |
_computerscience.5068 | I have multiple reflection cube map that's prebaked before a scene.However I am being confused as to which one to choose.I am told I need to choose closest cube map then use it.However it is unclear to me that whether I should do that based off of per pixel or per object(object's position).I looked at Unity to get some ideas as well.Unity chooses a reflection probe based off of distance and area probe affects the object.However there is no clear do this. So I am tentative and left unsure. | How to choose which reflection probe to use? | reflection | null |
_cs.57978 | Let $$L = \{ \langle M \rangle \mid M \text{ is a Turing machine so } A_{TM} \leq_m L(M) \}$$The question is whether $L$ is in $\mathcal{R}, \mathcal{RE}, co-\mathcal{RE}$ or in $\overline{\mathcal{RE} \cup co-\mathcal{RE}}$ ?I gained some progrees showing $L \notin co- \mathcal{RE}$:Define reduction $f:A_{TM}\rightarrow L$ on input $\left\langle M,w\right\rangle$ returns:if $M$ accepts $w$ return $ \left\langle U_{TM}\right\rangle $ ($L \left(U_{TM}\right)=A_{TM}$ so $\left\langle U_{TM}\right\rangle \in L$)if $\left\langle M\right\rangle$ rejects $w$ return 1 (not a TM encoding hence not in L)$f$ is computable and $\left\langle M,w\right\rangle \in A_{TM}\iff f\left(\left\langle M,w\right\rangle \right)\in L$ hence $A_{TM}\leq_m L \implies L\notin co-\mathcal{RE}$. Now I want to show that $L \notin \mathcal{RE}$. And I'm stuck..Notation:$A_{TM} = \{ \langle M,w \rangle \mid M \text{ is a TM}, w \in L(M)\}$$H_{TM} = \{ \langle M,w \rangle \mid M \text{ is a TM and $M$ halts on $w$} \}$ | Classify the set of all TMs whose languages from the accepting problem | computability;turing machines;undecidability | As suggested in the comments, the extended version of Rice's theorem clearly states this language is not in $RE$. Nevertheless, let us prove this claim via a direct reduction from a language known not to be in $RE$, let's say $$ \overline{HP} = \{\langle M,w\rangle \mid M \text{ doesn't halts on }w \}$$We will show that $\overline{HP} \le L$ and conclude that $L \notin RE$.The reduction will assume we have a machine that recognizes $A_{TM}$ (call it $R$) and goes as follow. Given an input $\langle M,w\rangle$ we construct the output string $\langle M_w\rangle$ which is a machine that, on input x, does:repeat the loop:1.1 run one step of $M$ on $w$1.2 run one step of $R$ on $x$1.3 if 1.1 halts - the machine $M_w$ rejects. If 1.2 accepts - the machine $M_w$ accepts.It is easy to verify this is a computable reduction. Let's just verify it is valid.Case I: $\langle M,w\rangle\notin \overline {HP}$, then eventually (say after $T$ steps) $M$ will halt on $w$ before the computation of 1.2 concludes, thus $L(M_w)$ can be decided in less then $T$ steps and in particular it is decidable. Therefore $A_{TM} \not\le L(M_w)$ and thus $\langle M_w\rangle \notin L$.Case II: $\langle M,w\rangle\in \overline {HP}$, then $M$ never halts on $w$, which means that only step 1.2 is relevant, which means that $M_w$ behaves in this case just like $R$. So it holds that $L(M_w) = A_{TM}$ and in particular, $A_{TM} \le L(M_w)$. Then, $\langle M_w \rangle \in L$. |
_cs.40507 | The common examples of NP-hard problems (clique, 3-SAT, vertex cover, etc.) are of the type where we don't know whether the answer is yes or no beforehand.Suppose that we have a problem in which the we know the answer is yes, furthermore we can verify a witness in polynomial time.Can we then always find a witness in polynomial time? Or can this search problem be NP-hard? | Can finding a witness be NP-hard even if we already know there is one? | complexity theory;np hard;search problem | TFNP is the class of multivalued functions with values that are polynomially verified and guaranteed to exist. There exists a problem in TFNP that is FNP-complete if and only if NP = co-NP, see Theorem 2.1 in:Nimrod Megiddo and Christos H. Papadimitriou. 1991. On total functions, existence theorems and computational complexity. Theor. Comput. Sci. 81, 2 (April 1991), 317-324. DOI: 10.1016/0304-3975(91)90200-L and the references [6] and [11] within. PDF available here. |
_cs.44226 | For example, we can say we have an abstract program that, given a finite binary string as input, removes all of the zeros (i.e. 0010001101011 evaluates to 111111), which is definitely a Turing-computable function. How can a cyclic tag system compute this (which it can, by definition of it being Turing-complete) when it only halts when it reaches the empty string? The Wikipedia article gives an example of converting to a 2-tag system, but it adds an emulated halt that the original system does not have. I can't find any reference to how a cyclic tag system halts meaningfully. What is its output supposed to be? I've considered things like:
- the number of steps (but then the input restricts the possible output without some kind of fancy encoding I can't find)
- the last production (but that only has a finite output range)
- fixed points (which can't be detected in this system and only exist with very limited production rules and inputs)
but they don't work, at least not in any way I can see. | How can a cyclic tag system halt with an output? | computability;turing machines | Neary and Woods describe an efficient simulation of Turing machines using cyclic tag systems, improving on work of Matthew Cook. Turing-completeness is a somewhat fluid and informal notion. A computing system X simulates another computing system Y if, given each program in Y, we can come up with a program in X such that, looking at the transcript of the X-program, we can recover a transcript of the Y-program. You can look at the papers above to see what this means for cyclic tag systems. The basic idea is that when the Turing machine halts, the cyclic tag system keeps going, forever repeating the same sequence of configurations, representing the halting configuration of the Turing machine. In this sense it can actually compute functions. In an earlier answer I noted that some computation models can only compute decision problems, in the sense that they either don't halt, or they halt with just one bit of output. In that case you can encode a general function in at least two ways: Given a function $f$, consider the language of pairs $\langle x,f(x) \rangle$. Given a function $f$, consider the language of triples $\langle x,i,b \rangle$ such that the $i$th bit of $f(x)$ (if any) equals $b$. As usual, we require that the machine always halt.
_unix.297776 | I have installed Linux Mint 18 64-bit and noticed that Google, YouTube, and others load flawlessly (even HD videos play), but some sites like Wikipedia don't show up: the tabs in Mozilla either show "loading" or show "Wikipedia - the free encyclopedia", but the site itself never appears. The computer loads some sites but doesn't load others, and they don't change: if site A doesn't load, then it doesn't load next time either; if site B loads, then it loads afterwards too. Last time I checked the internet connection on a different computer with Windows, everything worked, so it's not the ISP's fault. It does this even with ufw disabled. It doesn't work with the Mint live DVD either; it's the same thing. If I open up a web proxy and type in Wikipedia, then it goes there. It's the same thing with other browsers. After typing wget wikipedia.org:
--2016-07-20 21:30:40-- http://wikipedia.org/
Resolving wikipedia.org (wikipedia.org)... 91.198.174.192, 2620:0:862:ed1a::1
Connecting to wikipedia.org (wikipedia.org)|91.198.174.192|:80... connected.
HTTP request sent, awaiting response... 301 TLS Redirect
Location: https://wikipedia.org/ [following]
--2016-07-20 21:30:40-- https://wikipedia.org/
Connecting to wikipedia.org (wikipedia.org)|91.198.174.192|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.wikipedia.org/ [following]
--2016-07-20 21:30:40-- https://www.wikipedia.org/
Resolving www.wikipedia.org (www.wikipedia.org)... 91.198.174.192, 2620:0:862:ed1a::1
Connecting to www.wikipedia.org (www.wikipedia.org)|91.198.174.192|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: index.html
index.html [<=> ] 0 --.-KB/s
...and it stops and never continues. I had to interrupt it. The index.html I found in my home folder is totally empty, BUT the tab says Wikipedia. The dig wikipedia.org output is:
; <<>> DiG 9.10.3-P4-Ubuntu <<>> wikipedia.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15688
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;wikipedia.org. IN A
;; ANSWER SECTION:
wikipedia.org. 343 IN A 91.198.174.192
;; Query time: 21 msec
;; SERVER: 193.231.252.1#53(193.231.252.1)
;; WHEN: Wed Jul 20 21:38:59 EEST 2016
;; MSG SIZE rcvd: 58
After ping -c 3 wikipedia.org:
PING wikipedia.org (91.198.174.192) 56(84) bytes of data.
64 bytes from text-lb.esams.wikimedia.org (91.198.174.192): icmp_seq=1 ttl=59 time=50.5 ms
64 bytes from text-lb.esams.wikimedia.org (91.198.174.192): icmp_seq=2 ttl=59 time=48.3 ms
64 bytes from text-lb.esams.wikimedia.org (91.198.174.192): icmp_seq=3 ttl=59 time=49.4 ms
--- wikipedia.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 48.342/49.475/50.594/0.954 ms
After ping -c 3 91.198.174.192:
PING 91.198.174.192 (91.198.174.192) 56(84) bytes of data.
64 bytes from 91.198.174.192: icmp_seq=1 ttl=59 time=50.6 ms
64 bytes from 91.198.174.192: icmp_seq=2 ttl=59 time=50.7 ms
64 bytes from 91.198.174.192: icmp_seq=3 ttl=59 time=48.2 ms
--- 91.198.174.192 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 48.267/49.898/50.796/1.155 ms
It's not an MTU issue. Edit: at the suggestion of mrwhale, I will post the output of route -n, ip route, and cat /etc/resolv.conf on both the usable distro and the not-so-usable distro. I mentioned that the internet works through a web proxy, and he asked me to post this. On the usable distro:
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.1 0.0.0.0 UG 0 0 0 ppp0
10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
ip route
default via 10.0.0.1 dev ppp0 proto static
10.0.0.1 dev ppp0 proto kernel scope link src (here it showed my IP address)
cat /etc/resolv.conf
Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
On the current broken distro (Linux Mint):
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.1 0.0.0.0 UG 100 0 0 ppp0
10.0.0.1 0.0.0.0 255.255.255.255 UH 100 0 0 ppp0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 ppp0
ip route
default via 10.0.0.1 dev ppp0 proto static metric 100
10.0.0.1 dev ppp0 proto kernel scope link src (my IP address) metric 100
169.254.0.0/16 dev ppp0 scope link metric 1000
cat /etc/resolv.conf
Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 193.231.252.1
nameserver 213.154.124.1
nameserver 127.0.1.1 | Some websites load while others load forever in Linux Mint 18 | networking;linux mint | null
_unix.140150 | From time to time crontab -e fails on me on an Ubuntu box. It is the same for all users, including root, and for crontab -eu <user> run as root. It goes like this on a regular basis, with a success here and there:
$ crontab -e
/tmp/crontab.SHw8Ge: Input/output error
Creation of temporary crontab file failed - aborting
$ crontab -e
/tmp/crontab.L8gEG4: Input/output error
Creation of temporary crontab file failed - aborting
$ crontab -e
crontab: installing new crontab
$ crontab -e
/tmp/crontab.Vvp59T: Input/output error
Creation of temporary crontab file failed - aborting
$ crontab -e
crontab: installing new crontab
I don't seem to have problems creating files there by hand or by script repeatedly:
$ vim /tmp/crontab.Vvp59T
$ ls -la /tmp/crontab.Vvp59T
-rw-r--r-- 1 <user> <user> 6 2014-07-01 04:17 /tmp/crontab.Vvp59T
/tmp permissions:
$ ls -la /
...
drwxrwxrwt 92 root root 20480 2014-07-01 04:24 tmp
...
It looks like a storage issue to me, but I have no clue how to prove or rule that out. Any ideas what could be causing this and how to test?
$ lsb_release -dc
Description: Ubuntu 9.10
Codename: karmic
Update regarding comments: /tmp disk space use is 5%, inode use 1%. The crontab file size is 13K, ~200 lines. All this seems okay. | `crontab -e` sometimes fails with Creation of temporary crontab file failed | ubuntu;files;cron;io;storage | null
_softwareengineering.115250 | I am not sure if this is the right place to ask this, but I wrote an app that shows all the movies on one's computer with the appropriate info such as genre, director, rating, etc. I am wondering how I can make it so that the user can filter them based on criteria such as genre, rating, etc. Most of them are enums, and I was thinking of using a ComboBox for these, but the user should be able to specify more than 1 genre. So should I use ListBox controls for these? Then it will be harder to present all these options in list boxes. I haven't seen any examples of apps doing similar things; that's why I am not sure. Any ideas? | How to allow filtering Films in my app intuitively? (GUI design) | c#;design;.net;gui | You can implement (or obtain elsewhere) a custom component that allows for checkboxes in front of items in a drop-down list, similar to what you can find in Excel. Another option would be to have a button that says Filter... which opens a dialog box containing a drop-down list with the criteria, e.g. Director, Genre, etc. Depending on the choice of criteria you could then offer the values: e.g. when Director is selected, a list box could contain the names of directors; when Genre is selected in the drop-down, the list box could contain the different genres. Each of the value lists could also contain the value All as the first entry. When you click this, it ticks all entries in the list. Upon closing this dialog box you probably want to also indicate somehow that a filter has been applied. Maybe in the column header. Of course these are just some very simple suggestions. User interfaces have a gazillion options for how to represent something, and you can make them very flashy. At some point it is a matter of taste, but you certainly want to try out your UI to see whether it works with others.
_unix.324729 | I just upgraded my ThinkPad T560 from Fedora 24 to Fedora 25.On Fedora 24, I used these commands$ xrandr --output eDP-1 --scale 1.25x1.25$ xrandr --output eDP-1 --panning 3600x2025to set up proper scaling. These commands no longer work on Fedora 25:$ xrandr --output eDP-1 --scale 1.25x1.25warning: output eDP-1 not found; ignoringApparently the display identifier is now XWAYLAND0 (and not eDP-1 anymore):$ xrandr -qScreen 0: minimum 320 x 200, current 2880 x 1620, maximum 8192 x 8192XWAYLAND0 connected 2880x1620+0+0 340mm x 190mm 2880x1620 59.92*+However, using this new identifier with the old command also does not work:xrandr --output XWAYLAND0 --scale 1.25x1.25X Error of failed request: BadValue (integer parameter out of range for operation) Major opcode of failed request: 139 (RANDR) Minor opcode of failed request: 26 (RRSetCrtcTransform) Value in failed request: 0x20 Serial number of failed request: 22 Current serial number in output stream: 23As a short term solution I choose GNOME on Xorg on login. Then I can use the xrandr commands shown above as before.Can somebody please point me to a how-to for properly setting up HiDPI displays on Fedora 25? Thanks! | Fedora 25, Wayland, and HiDPI display | fedora;wayland | null |
_unix.369691 | My diff shows some numerical differences between two log files. That means, for example:
fileA: Parameter n (fill abs) /All_Data/Height 9830400
fileB: Parameter n (fill abs) /All_Data/Height 9830500
So, if the diff command is executed between the files:
% diff fileA fileB ->
< /All_Data/Height 9830400
---
> /All_Data/Height 9830500
I would like to set a threshold in the diff command, that is to say: display the difference only if the discrepancy between the numbers is greater than 500. So here |9830400 - 9830500| = 100, and no difference should be displayed. | Can diff show numerical differences, with a threshold to not show them as differences? | diff;file comparison;numeric data | null
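Stock diff has no numeric-tolerance option (dedicated tools such as numdiff exist for fuzzy numeric comparison), but the comparison described above is easy to script. A minimal Python sketch, assuming the two files line up row for row and that the number is the last whitespace-separated field on each line:
import sys

THRESHOLD = 500  # report a pair of lines only when they differ by more than this

with open(sys.argv[1]) as a, open(sys.argv[2]) as b:
    for line_a, line_b in zip(a, b):
        delta = abs(float(line_a.split()[-1]) - float(line_b.split()[-1]))
        if delta > THRESHOLD:
            print(line_a.rstrip(), "->", line_b.rstrip())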
_unix.93729 | After my last dist-upgrade of my Debian testing system, X refuses to start. I can see the following error (which shows up when gnome-session is started): symbol lookup error: /usr/lib/i386-linux-gnu/libcairo.so.2: undefined symbol: glXGetProcAddress. Besides, even texlive refuses to upgrade, with the same error (caused by luatex). I don't know how to fix this issue: is it possible that one crucial library is missing? If not, what else could cause this problem? | error caused by undefined symbol: glXGetProcAddress | debian;xorg;system installation | @peterph's answer was very close to the problem. The video card was a Matrox G550 (mga), but in the past an NVIDIA driver had been installed, and some GLX stuff remained in /usr/lib/tls. I have no experience with the mga driver, but my understanding is that there is no proprietary GLX implementation for it, so we went ahead and tried to get Mesa working. Once libgl1-mesa-glx and glx-alternative-mesa were installed, we checked libcairo with ldd, then used dpkg -S with the full path to find out which packages provided the libGL and libGLcore that ldd resolved, just to check that it was Mesa. Neither library belonged to any package. We moved those libraries away, and this time ldd showed that the right Mesa libraries were being used. At this point I asked @zar to check again, and he reported that this time apt-get -f install finished properly and gdm3 ran without errors. Even though this behaves like a bug, I don't think we can file a bug report, as the non-Debianized NVIDIA driver broke the contract. Proprietary driver installation is still evil; I don't understand why vendors favor their own supposed-to-install-everywhere .run installers instead of looking for some collaboration effort, at least with the major distros (which would probably come for free/at no charge).
_unix.117609 | #!/bin/sh
if [ $(ls sample01.log | wc -l) = 1 ]
then
echo File Found > lsOutput.log
else
echo File Not Found > lsOutput.log
fi
But if sample01.log does not already exist, my code returns the error:
ls: cannot access sample01.log: No such file or directory
and the script no longer does what I want: "File Not Found" will not be displayed anymore. I wanted to capture that error (No such file or directory) to a file, so that each time such an error occurs it is written and documented in the log file. Thank you. | Capture Error of LS to file | shell;shell script;files;io redirection | You don't need to parse ls in order to determine if a file exists. help test would tell you:
-e FILE True if file exists.
You could say:
if [ -e sample01.log ]
then
    echo File Found > lsOutput.log
else
    echo File Not Found > lsOutput.log
fi
If you want to ensure that the file is a regular file, use -f instead:
-f FILE True if file exists and is a regular file.
(What you've done also works, but it causes ls to emit an error message (to STDERR), which perhaps leads you into thinking that it doesn't work.)
_codereview.86400 | I created this responsive jQuery navigation which will re-size when the pixels get < 500 px and display the mobile navigation, although when the browser resizes > 500px it runs the widthCheck() function which is supposed to run the jQuery code that returns it to tablet/desktop size, although it doesn't do this transition smoothly.Could someone help me? Maybe this is something in my code that isn't proper. It works but it isn't doing it smoothly.$(document).ready(function() { //on ready function widthCheck ();$(window) .resize(function(){ //Resize widthCheck widthCheck ();});$('.nav a') .click(function(){ //NavLinkClick Function Event Handler navLinkClick();});/*============= Mobile Navigation Click Function ================*/$('#menu').click(function(){ $(this).toggleClass('open'); if ($(this) .hasClass('open')) { $('.nav') .slideDown('fast', function() { $('.nav a') .fadeIn('fast'); }); } else { $('.nav a') .fadeOut('fast', function() { $('.nav') .slideUp('fast'); }); } });}); //Document.Ready Close/*============= Nav Links Click Function ================*/ function navLinkClick () { //Nav Link Function var width = $(window).width(); if (width <= 500) { $('.nav a') .fadeOut('fast', function() { $('.nav') .slideUp('fast'); }); } } /*============= Device Width Check Function ================*/ function widthCheck () { // Device Width Check var width = $(window).width(), $menu = $('#menu'), $nav = $('.nav'), $navA = $('.nav a'); if (width <= 500) { $navA .fadeOut(400, function() { $nav .slideUp(400,function(){ $menu .fadeIn(400); }); }); } // Close If else { $menu .fadeOut('fast', function() { $nav .slideDown(400, function() { $navA .fadeIn(400); }); }); } // Close Else } //Close Function/******************************************************************Website Name: Website URL:Website Description: Author:Sean ParsonsAuthor Portfolio: http://seanpar203.github.io/portfolio/Author Linkedln: https://www.linkedin.com/in/seanparsons203******************************************************************//* Table of Content==================================================#Fonts#Reset & Basics#Header & Navigation *//*#Fonts=================================================================== */@import url(http://fonts.googleapis.com/css?family=Source+Sans+Pro:400,600,700);@import url(http://fonts.googleapis.com/css?family=Pacifico);/*#Reset & Basics=================================================================== */ html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em, img, ins, kbd, q, s, samp, small, strike, strong, sub, sup, tt, var, b, u, i, center, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td, article, aside, canvas, details, embed, figure, figcaption, footer, header, hgroup, menu, nav, output, ruby, section, summary, time, mark, audio, video { margin: 0; padding: 0; border: 0; font-size: 100%; font: inherit; font-family: 'Source Sans Pro', sans-serif; }article, aside, details, figcaption, figure, footer, header, hgroup, menu, nav, section { display: block; }body { line-height: 1; }ol, ul { list-style: none; }blockquote, q { quotes: none; }table { border-collapse: collapse; border-spacing: 0; } .active{ color: rgb(255,255,255); }/* #Header=================================================================== *//*Header Color*/.header { position: fixed; width: 100%; background-color: #cc4646; padding-bottom: 20px; border-bottom: 
4px solid black;}/* Logo Attributes */.logo { font-family: 'Pacifico', cursive; font-size: 2.2em; color: #fff; font-weight: bold; text-decoration: none; height: 70px; margin-top: 20px; padding-bottom: 15px;}/*Navigation Attributes*/nav ul li { margin-top: 15px; padding-bottom: 15px; font-size: 1.2em; font-variant: small-caps; text-align: center;}nav a { text-decoration: none; color:rgba(255,255,255, 0.65); }nav a:hover { text-decoration: none; color: rgb(255,255,255);}/* Mobile Navigation Button*/#menu { position: fixed; top: 2em; right: 1.5em; cursor: pointer;}<script src=https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js></script><html lang=en> <head> <meta charset=utf-8> <meta http-equiv=X-UA-Compatible content=IE=edge> <meta name=viewport content=width=device-width, initial-scale=1> <meta name=description content= Website Description > <meta name=keywords content= Website Keywords > <meta name=author content= Website Author > <title>Test Website</title> <!-- Bootstrap CSS --> <link rel=stylesheet href=https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css> <!-- Main Style Sheet --> <link rel=stylesheet type=text/css href=main.css> <!-- jQuery --> <script src=https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js></script></script></head><body> <header class=header> <nav> <div class=rows container-fluid> <h1 class=logo col-md-10 col-xs-10>Sean Parsons Portfolio</h1><!--/====/===/===/===/ Button For Mobile Navigation ===/===/===/===/===/== --> <div id=menu class=col-md-2 col-xs-2> <img src=https://cdn3.iconfinder.com/data/icons/eightyshades/512/45_Menu-128.png height=40px width=40px alt=Mobile Menu> </div> <div class=container> <!-- Navigation Links --> <ul class=nav> <div class=col-xs-6 col-sm-4 col-md-2> <li><a href=# title=Home>Home</a></li> </div> <div class=col-xs-6 col-sm-4 col-md-2> <li><a href=# title=About Me rel=author>About Me</a></li> </div> <div class=col-xs-6 col-sm-4 col-md-2> <li><a href=# title=Skills>Skills</a></li> </div> <div class=col-xs-6 col-sm-4 col-md-2> <li><a href=# title=Experience>Experience</a></li> </div> <div class=col-xs-6 col-sm-4 col-md-2> <li><a href=# title=Portfolio>Portfolio</a></li> </div> <div class=col-xs-6 col-sm-4 col-md-2> <li><a href=# title=Contact Us>Contact Us</a></li> </div> </ul> </div> <!-- Navigation Container Collapse --> </div> <!-- Row/Container-Fluid Collapse --> </nav> </header><!--/.header--> </body></html> | Nav slider needs to be smoothed out | javascript;jquery;html;css | null |
_cs.62981 | Let's say I have a graph with $N$ nodes, $A$ arcs and an average branching factor $b$. I want to find the $K$ shortest paths between two nodes.Is there some relation (even approximate is fine) that expresses the dependency between the parameter $K$ and percentage of nodes included in the paths discovered by running the algorithm (Yen's loopless KSP)?For example, in a graph of 20 nodes, the ($1st$) shortest path from node $1$ to $12$ is $1-4-7-12$, while the $2nd$ shortest path is $1-4-6-9-12$.So for $K=1$, the discovered path contains $4/20 = 20\%$ of the nodes in the graph. For $K=2$, the two paths contain $6/20 = 30\%$ of the nodes. This relation between $K$ and the percentage is what I'm looking for. | K shortest paths - any relation between K and % of graph nodes in discovered paths? | algorithms;graphs;shortest path | null |
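Whatever the general relation turns out to be, the percentage described in the example is easy to measure empirically for a given graph. A minimal sketch with networkx, whose shortest_simple_paths generator yields simple paths in order of increasing length, as Yen's algorithm does (the function name is mine):
from itertools import islice
import networkx as nx

def node_coverage(G, source, target, K):
    # Fraction of the graph's nodes touched by the K shortest simple paths.
    covered = set()
    for path in islice(nx.shortest_simple_paths(G, source, target), K):
        covered.update(path)
    return len(covered) / G.number_of_nodes()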
_softwareengineering.118962 | Sometimes I stare blankly into space, or sketch ideas and write some pseudocode on paper. Then I scratch it out and start again; when I think I have the correct solution for the problem, I begin writing the code. Is it normal to think for days without writing any code? Is this a sign that I am approaching the problem entirely wrong? It makes me nervous not to be getting any tangible code written in my IDE. | Is it normal to think about a design problem for days with no code written? | design | Depending on the problem you are trying to solve, the design phase can take weeks or months (if not years), not just days. It takes experience not to start bashing out code immediately. Thinking about the architecture and high-level design should take days if not longer - definitely something that should happen before you start writing your code.
_codereview.88456 | Edit. Version 2. I am working on a genetic algorithm in order to solve a little puzzle. Given a text file with N rows of 4 ints each, the idea is to establish 2 bijections between 2 pairs of columns, with the same number of 0s in each column. For this purpose, the program is only allowed to shift the data to the right. For example, if a row has elements {1, 2, 3, 4}, they can be not shifted at all, shifted 1 place ({4, 1, 2, 3}), 2 places ({3, 4, 1, 2}) or 3 places ({2, 3, 4, 1}). No vertical permutations are allowed. No horizontal shuffles are allowed (e.g., {1, 4, 2, 3} is forbidden). When a solution is found, a text file outputs the DNA of this puzzle, with each gene being 0, 1, 2, or 3, that is, whether and how many places each row is shifted. Example: a 16-row puzzle can give 1113311331133111, the first 1 referring to the fact that row #1 is shifted one place to the right, and the last 1 referring to the fact that row #16 is also shifted one place to the right. This input text is formatted like this: 1. 2 3 4 5. The first number, 1., is the identification of the row, and 2 3 4 5 are the elements of this row. The bijections are to be established between the column containing the first elements and the third one, and between the second one and the fourth one. I hope my explanation is clear. If not, it is detailed here. My program works, but it does not seem efficient. Of course, it is difficult to evaluate the efficiency of a genetic algorithm, but I think the way I code is far from optimal (see for example the horrible use of goto, which seems to me extremely useful in this context, but is not recommended...). My code is a little bit long, and I doubt you have time to go into the details of its implementation, of course. But I think you can easily spot what seems wrong with my code, or what can be improved. Indeed, I think my code does not use the memory efficiently, but I do not know how to solve this issue. I have selected only the relevant segments of the code (I show the deleted segments with [. . .]); the full source code is released here if you are interested. Moreover, if you have any comments regarding the genetic algorithm parameters (population, mutation, etc.), feel free to share them.
#define PUZZLE 36
#define POPULATION 30
#define COMPTEUR PUZZLE * POPULATION * 50
#define TEST 0
#define COUPE 50
#define MUTATION 1
#include <iostream>
#include <algorithm>
#include <vector>
#include <fstream>
#include <string>
#include <math.h>
#include <random>
#include <functional>
#include <stdlib.h>
#include <ctime>
#include <iomanip>
using namespace std;
random_device rd;
mt19937 gen(rd());
uniform_real_distribution<double> dist(0, 4);
class Pieces
{
public:
    vector<int> ADN;
    int intersections;
    double fitness;
    bool best;
    bool candidat;
    bool solution;
    Pieces(){};
    ~Pieces(){};
};
int reproduction(int geneA, int geneB, int j)
{
    if (j < ((COUPE * PUZZLE) / 100)) return geneA;
    else return geneB;
}
int aleADN()
{
    if (TEST == 0) return (int)dist(gen);
    else return 0;
}
int main()
{
    unsigned long compteur = 0;
    int i, j, k;
    string e1, e2, e3, e4, e5;
    vector<int> R, A, B, C, D; // A right, B bottom, C left, D top
    if (TEST != 0) cout << TEST << endl;
/* -----------------------
   OPENING OF THE FILE
   -----------------------*/
    [. . .]
/* -------------------
   INTEGRITY CHECKS
   -------------------*/
    [. . .]
/* ------------------
   INITIALIZATION
   ------------------*/
    [. . .]
Pieces * pieces = new Pieces[POPULATION];/* ------------- EVOLUTION -------------*/ do { double fitness = 0; double fitness_ref = fitness; for (i = 0; i < POPULATION; i++) { pieces[i].ADN.clear(); for (j = 0; j < PUZZLE; j++) { pieces[i].ADN.push_back(aleADN()); } } for (i = 0; i < POPULATION; i++) { pieces[i].fitness = 0; pieces[i].solution = false; pieces[i].best = false; pieces[i].intersections = 0; } do { compteur++; for (i = 0; i < POPULATION; i++) { pieces[i].candidat = false; pieces[i].best = false; }/* -------------- EVALUATION --------------*/ int rotation; for (i = 0; i < POPULATION; i++) { int** evaluation = new int*[4]; for (k = 0; k < 4; k++) evaluation[k] = new int[PUZZLE]; for (j = 0; j < PUZZLE; j++) { rotation = pieces[i].ADN[j]; evaluation[(0 + rotation) % 4][j] = A[j]; evaluation[(1 + rotation) % 4][j] = B[j]; evaluation[(2 + rotation) % 4][j] = C[j]; evaluation[(3 + rotation) % 4][j] = D[j]; } double eval = 0; // EVAL BORDURES bool OK_zeros = true; int zeros; for (int col = 0; col < 4; col++) { zeros = 0; for (int j = 0; j < PUZZLE; j++) { if (evaluation[col][j] == 0) { zeros++; } } if (abs(nb_lignes - zeros) != 0) { OK_zeros = false; eval += abs(nb_lignes - zeros); } } if (OK_zeros != true) eval++; // EVAL DOUBLONS vector<int> bijA, bijB, bijC, bijD; vector<int> intersection; for (j = 0; j < PUZZLE; j++) { bijA.push_back(evaluation[0][j]); bijB.push_back(evaluation[1][j]); bijC.push_back(evaluation[2][j]); bijD.push_back(evaluation[3][j]); } sort(begin(bijA), end(bijA)); sort(begin(bijC), end(bijC)); set_intersection(begin(bijA), end(bijA), begin(bijC), end(bijC), back_inserter(intersection)); bijA.clear(); bijC.clear(); eval += abs(PUZZLE - (int)intersection.size()); pieces[i].intersections = PUZZLE - (int)intersection.size(); intersection.clear(); sort(begin(bijB), end(bijB)); sort(begin(bijD), end(bijD)); set_intersection(begin(bijB), end(bijB), begin(bijD), end(bijD), back_inserter(intersection)); bijB.clear(); bijD.clear(); eval += abs(PUZZLE - (int)(intersection.size())); pieces[i].intersections += PUZZLE - (int)intersection.size(); intersection.clear(); // Calcul du fitness pieces[i].fitness = 1 / (eval + 1); if (pieces[i].fitness == 1) { pieces[i].solution = true; goto Solution; } for (k = 0; k < 4; k++) delete[] evaluation[k]; delete[] evaluation; }/* ------------- SELECTION -------------*/ // Best for (i = 0; i < POPULATION; i++) { if (pieces[i].fitness > fitness) { fitness = pieces[i].fitness; } } for (i = 0; i < POPULATION; i++) { if (pieces[i].fitness == fitness) { pieces[i].best = true; break; } } if (fitness > fitness_ref) { fitness_ref = fitness; k = 0; for (i = 0; i < POPULATION; i++) { if (pieces[i].best == true && k == 0) { cout << pieces[i].intersections << \t << fitness << endl; k++; } } } // Roulette double fitness_total = 0; for (i = 0; i < POPULATION; i++) fitness_total += pieces[i].fitness; uniform_real_distribution<double> pool_rand(0, fitness_total); vector<int> candidats; vector<double> pool_fitness; for (i = 0; i < POPULATION; i++) pool_fitness.push_back(pieces[i].fitness); sort(begin(pool_fitness), end(pool_fitness), greater<double>()); do { double r = pool_rand(gen); k = 0; while (r > 0) { r -= pool_fitness[k]; k++; } for (i = 0; i < POPULATION; i++) { if (pieces[i].fitness == pool_fitness[k - 1]) { candidats.push_back(i); break; } } } while (candidats.size() < POPULATION); pool_fitness.clear();/* ---------------- REPRODUCTION ----------------*/ for (i = 0; i < POPULATION; i++) { if (pieces[i].best == true) { pieces[0].ADN = 
pieces[i].ADN; } } for (i = 1; i < POPULATION; i++) { for (j = 0; j < PUZZLE; j++) { pieces[i].ADN[j] = reproduction ( pieces[0].ADN[j], pieces[candidats[i]].ADN[j], j ); } } candidats.clear();/* ------------ MUTATION ------------*/ uniform_real_distribution<double> mutation_rand(0, PUZZLE); for (i = 1; i < POPULATION; i++) { for (j = 0; j < PUZZLE; j++) { if (mutation_rand(gen) <= MUTATION) { pieces[i].ADN[j] = (int)dist(gen); } } } } while (compteur < COMPTEUR);/* ------------ SOLUTION ------------*/ Solution: for (i = 0; i < POPULATION; i++) { if (pieces[i].solution == true) { [. . .] // Save the output text file } } compteur = 0; cout << *RESET* << endl << endl; } while (1);} | Genetic algorithm for solving a puzzle | c++;beginner;combinatorics;genetic algorithm | Constants#define PUZZLE 36#define POPULATION 30#define COMPTEUR PUZZLE * POPULATION * 50#define TEST 0#define COUPE 50#define MUTATION 1You're using C++. You have type-safe const declarations available. You should be using them instead of the textual substitution of #define macros. Instead write:const int PUZZLE = 36;const int POPULATION = 30;const int COMPTEUR = PUZZLE * POPULATION * 50;const int TEST = 0;const int COUPE = 50;const int MUTATION = 1;Choice of header files to include#include <math.h>#include <stdlib.h>These happen to be the C versions of the header files. You ought to be using the C++ versions of these files, as you are with <ctime>:#include <cmath>#include <cstdlib>Namespaceusing namespace std;Please, please, please don't do this. It's a really bad idea. You pollute the global namespace with everything from std::, and if anything in std:: conflicts with anything in your project, you're in big trouble.It's really not that much work to put the std:: prefix before the appropriate types and functions, but if you really want to avoid doing so, you can limit the namespace pollution to just those objects:using std::random_device;using std::mt19937;using std::uniform_real_distribution;using std::vector;using std::string;using std::cout;using std::endl;using std::sort;using std::set_intersection;using std::abs;Infinite loopsdo{ // [...]} while (1) The preferred idiom is for(;;) -- it makes it more clear from the beginning what's going on, without any magic numbers.Repeated codeLoopyYou don't need to pre-initialize pieces[i].best = false; since it will be initialized in the loop.A, B, C, DWhenever you repeat code with slightly different variables, it might be a good idea to put the things in an array and then loop over the array. Particularly since there's a connection between evaluation[0] and A.Get rid of an unnecessary loop for (i = 0; i < POPULATION; i++) { if (pieces[i].fitness > fitness) { fitness = pieces[i].fitness; } } for (i = 0; i < POPULATION; i++) { if (pieces[i].fitness == fitness) { pieces[i].best = true; break; } }You could get away with only one loop, by maintaining a std::vector that stores the indexes of entries that are tied with the current best, .clear() ing the vector once you find something better, then looping over the vector to set the best entries when you're all done.Separate user interface and calculationsYou've got that cout in the middle of the routine. It should be in a separate routine. The calculation routine should return something; its caller should output. |
_webmaster.18298 | I am a grey-haired professional programmer, quite conversant in PHP, MySQL, HTML, and CSS, which means I can tweak things or code plugins if need be, but I'd like something off the shelf as far as possible. I would prefer something free but can pay if it is not hideously expensive (actually, I started out thinking free only, until I found http://pligg.com/, which is dedicated to communities, but has plugins which cost about $10, and I might be willing to pay for convenience). Before I found that, I was heavily in favour of Drupal, but might be swayed by something dedicated to my needs, or with many, many users, or great support, or a great deal of plug-ins. Anyhoo, what am I trying to do? A social site for ex-pats to help them adjust to their new country: preparing to get there, legal/visa issues, employment opportunities, accommodation, shopping, plus social stuff. The demographic is mainly girls in their 20s, so a social angle is important. Integration of Facebook, Flickr & the like might be nice. What I am thinking of is:
- main content provided by me and a few moderators
- a Wiki to which everyone can contribute
- forums for discussions
- small ads/freebies
- user registration (which may limit access to certain parts of the site)
- groups of friends, with shared .. stuff (photos, discussion rooms, etc)
- per-user blog
- per-user photo album
- mailing lists
- I think you get the picture (and can you suggest more?)
As I said, I was originally set on Drupal, which also has some pre-configured distros, but I haven't yet found one that really matches my needs. Then I saw Pligg, which looks good, but I might have to shell out $100 or $200 to get it the way I want it. And I still continue to search. Any suggestions? | Which CMS system for a community-based site? | cms;community | null
_unix.61625 | vi /etc/ssh/sshd_config
egrep -i 'Pubkey|Password|Listen' /etc/ssh/sshd_config | grep -v '^#'
ListenAddress 0.0.0.0
PermitRootLogin without-password
PubkeyAuthentication yes
PasswordAuthentication no
on an OpenBSD 5.1 server. But I can still log in with FileZilla, and FileZilla only knows my password AFAIK (or not?). How can I restrict any ssh/scp/sftp access to only accept key auth? UPDATE: the client side is Scientific Linux 6.3; AFAIK the key is not cached. | I'm only allowing pubkey auth via ssh. How can I still log in with password? | ssh;scp;sftp | Assuming your OS is Microsoft Windows, an SSH agent (like Pageant) may have cached your key if you have already connected to your server using, for example, PuTTY. See this page on the FileZilla wiki for details. So your server is probably set up as needed.
_unix.62556 | Is it possible to issue WHOIS requests from my local machine over a port other than 43, to be executed by a remote box? Ideally, I'm using JSch and I'd like to be able to round-robin these requests. I've looked in /etc/services but didn't learn much. Any time I specify a port other than 43, the whois request hangs (whois -p 4999 74.125.224.72). I know about the RIRs and their various limits, and I realize that I could pay somewhere like DomainTools. But let's be honest, that wouldn't be as much fun, would it? | WHOIS over SSH via specific port | port forwarding;whois | null
_cs.13783 | The Wikipedia article Pushdown automaton (as of Aug 16, 2013) states: "In general, pushdown automata may have several computations on a given input string, some of which may be halting in accepting configurations. If only one computation exists for all accepted strings, the result is a deterministic pushdown automaton (DPDA)." My professor gave this as an example of why we shouldn't trust Wikipedia but rather consult a textbook. Is he right? | Definition of deterministic pushdown automaton | terminology;pushdown automata | null