Dataset columns: id (string, 5 to 27 chars), question (string, 19 to 69.9k chars), title (string, 1 to 150 chars), tags (string, 1 to 118 chars), accepted_answer (string, 4 to 29.9k chars, null when absent)
_webapps.91855
I often accidentally hit the Esc key when preparing emails in OWA, which makes my draft mail disappear into the ether. Is there any way to disable this behaviour? I'm using Chrome on Windows 8.
Disable Escape to discard mail with Outlook Web Access
outlook web access
null
_cstheory.18015
I am working on normal-form continuous games. I am not very familiar with dynamic game theory. I would like to know if there is any relation between static Nash equilibria and dynamic equilibria. If players play a certain game repeatedly, learn about the strategies of the other players, or follow certain dynamics, will they always converge to static Nash equilibria? Is there any result relating the set of static Nash equilibria and the set of dynamic equilibria (under some conditions)? Thanks in advance!
Relation between static Nash equilibria and dynamic equilibria
gt.game theory;dynamic algorithms
null
_codereview.56175
I have a huge list of song objects in my program and I need those objects in almost all activities. Well, at least part of it, sometimes all of it. So I created a class which looks pretty much like this:

class DataStore {
    private static ArrayList<Song> songList;
    private static ArrayList<Album> albumList;

    private DataStore() { }

    // getters and setters for the private variables above
    public void getSongList() { ... }
    ...
}

So this DataStore has a lot of private variables and getters / setters to access them. I call these functions like this:

ArrayList<Song> songList = DataStore.getSongList();

Now I am wondering, is that a good approach? Should I do anything different to create global variables I can use in every activity? Also, I saw a few questions about Singletons. Is this a Singleton class?
Android global data
java;android;singleton
"I have a huge list of song objects in my program and I need those objects in almost all activities."

Your description is an excellent fit for Content Providers: http://developer.android.com/guide/topics/providers/content-providers.html

Implementing a content provider for your songs might seem like a lot of work at first, but it will be worth the investment. It's the clean and recommended approach, and you'll benefit from it greatly. As an added bonus, other applications will be able to use your song database too. Go for it.

About the singleton pattern, read this: http://en.wikipedia.org/wiki/Singleton_pattern

And avoid using singletons as much as possible.
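To give a feel for what that entails, here is a bare skeleton of such a provider (a hedged sketch: the class name, authority and MIME type are made up, and a real implementation would back these methods with your song storage):

import android.content.ContentProvider;
import android.content.ContentValues;
import android.database.Cursor;
import android.net.Uri;

// Hypothetical provider exposing the song list to this app (and others).
public class SongProvider extends ContentProvider {
    // Made-up authority; it must match the <provider> entry in the manifest.
    public static final Uri CONTENT_URI =
            Uri.parse("content://com.example.songs.provider/songs");

    @Override
    public boolean onCreate() {
        // Open the underlying database / storage here.
        return true;
    }

    @Override
    public Cursor query(Uri uri, String[] projection, String selection,
                        String[] selectionArgs, String sortOrder) {
        // Return a Cursor over the songs; activities can load it with a CursorLoader.
        return null; // placeholder in this skeleton
    }

    @Override
    public Uri insert(Uri uri, ContentValues values) { return null; }

    @Override
    public int update(Uri uri, ContentValues values, String selection,
                      String[] selectionArgs) { return 0; }

    @Override
    public int delete(Uri uri, String selection, String[] selectionArgs) { return 0; }

    @Override
    public String getType(Uri uri) {
        return "vnd.android.cursor.dir/vnd.com.example.songs";
    }
}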
_codereview.29489
if ((key == null && group.Key == null)
    || (key is DBNull && group.Key is DBNull)
    || (!(key is IComparable) && !(group.Key is IComparable)))

Can the above code be simplified like below?

if (key == group.Key || (!(key is IComparable) && !(group.Key is IComparable)))
IComparable comparison
c#
No, not unless you restrict what values the variables can have, or what type they are declared as.

If, for example, key is an int with the value 4 and group.Key is an int with the value 4, the first code gives false while the second code gives true.
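A small demo of why the declared type matters here (illustrative only; with object-typed variables, == falls back to reference comparison, so two equal boxed ints compare as different):

using System;

class Demo
{
    static void Main()
    {
        object key = 4;        // boxed int
        object groupKey = 4;   // a different box holding the same value

        Console.WriteLine(key == groupKey);       // False: reference comparison
        Console.WriteLine(key.Equals(groupKey));  // True: value comparison

        int a = 4, b = 4;
        Console.WriteLine(a == b);                // True: numeric comparison
    }
}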
_codereview.92442
CreateInputParameter is an overloaded function with 6 type overloads. Is it possible to simplify the following code, or is this as good as it gets? I wish to call it like this: CreateInputParameter(string, object), as in the following code:

private void AddInputParameter(string description, object value)
{
    Type type = value.GetType();
    if (type == typeof(int)) { CreateInputParameter(description, (int)(value)); return; }
    if (type == typeof(decimal)) { CreateInputParameter(description, (decimal)(value)); return; }
    if (type == typeof(DateTime?)) { CreateInputParameter(description, (DateTime?)(value)); return; }
    if (type == typeof(bool)) { CreateInputParameter(description, (bool)(value)); return; }
    if (type == typeof(byte[])) { CreateInputParameter(description, (byte[])(value)); return; }
    if (type == typeof(Guid)) { CreateInputParameter(description, (Guid)(value)); return; }
}
Generically calling an overloaded method
c#
At first glance:

Value shouldn't be capitalized, since it's a parameter.
The parameter is called description, yet your code uses Description.
Why does each if contain a return;? Why not simply use else if? Also, can an object even be two different types at once?
Is this really actual, working code?

Also, you're not showing us CreateInputParameter; I wouldn't be surprised if it doesn't need to be an overloaded function with 6 type overloads. I also would expect a method called Create... to return something, but that's open for debate.

You could look into System.Convert and use that to convert the object to a value, but again: this depends on CreateInputParameter. I'd advise you to submit a new question, include those methods, and provide us with more background.
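If the overloads must stay, one way to collapse the dispatch on newer C# versions is a type switch (a sketch, assuming the same CreateInputParameter overloads as above):

private void AddInputParameter(string description, object value)
{
    switch (value)
    {
        // Note: a boxed DateTime? holding a value arrives here as a plain DateTime,
        // so the typeof(DateTime?) comparison in the original can never match.
        case int i:      CreateInputParameter(description, i); break;
        case decimal d:  CreateInputParameter(description, d); break;
        case DateTime t: CreateInputParameter(description, (DateTime?)t); break;
        case bool b:     CreateInputParameter(description, b); break;
        case byte[] a:   CreateInputParameter(description, a); break;
        case Guid g:     CreateInputParameter(description, g); break;
        default: throw new ArgumentException("Unsupported type: " + value.GetType());
    }
}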
_unix.305848
On an older router, I was using this command:

iptables -t mangle -I PREROUTING -i br0 -s 192.168.157.0/24 -j MARK --set-mark 7

That router died (red LED of death), and now I'm forced to try to recreate an implementation I didn't quite understand even when I got it working the first time. Anyway, the error message it produces is:

iptables: No chain/target/match by that name.

If I leave off everything after -j, then this command executes successfully (though it does little to accomplish my goals). That suggests to me that both PREROUTING and mangle are available. I remember vaguely that packet marking might be its own kernel module, but I'm not even able to determine what its name would be, were that the case.

What do I need to do to debug this? I'm not sure where to start with this one. Is something not compiled into the kernel that should be? The kernel version appears to be 2.6.22.19. I have other iptables commands that are failing similarly, though I expect that the explanation for this one command will shed light on the others.
What is wrong with this particular iptables command?
iptables;router;netfilter
null
_softwareengineering.314738
In the book Database Fundamentals (Silberschatz), it is explained that aggregate functions can be computed on the fly. This makes sense: to calculate the maximum, the average, or the count of the items in a set, you don't need to pass a copy of the set to the aggregate procedures; you can process each record as you traverse the set.

One naive implementation could be to keep a variable for each desired aggregate. For example, SELECT sum(a_field), count(a_field), max(a_field) FROM a_set could be implemented as:

sum_ = 0
count_ = 0
max_ = -INF
for record in a_set:
    sum_ = sum_ + record.a_field
    count_ = count_ + 1
    max_ = max(max_, record.a_field)
return (sum_, count_, max_)

Of course, this is unthinkable, as the loop over the set should not be so tied to the aggregate computation. I suppose the loop delegates the aggregation to a kind of coroutine. Suppose a coroutine is a kind of object with two methods:

feed: where you can pass a value to the coroutine
get: which gives you the result of a computation

The loop would be something like:

# Given a set C of aggregation coroutines
for record in a_set:
    for c in C:
        c.feed(record.a_field)
return (c.get() for c in C)

In this case, I imagine a coroutine like max as:

max_ = -INF
while item = consume():
    max_ = max(max_, item)
yield max_

Here, I'm supposing that when the coroutine invokes consume it waits until somebody calls its feed method, and when it calls yield, that value is collected later by whoever invokes its get method. Just for fun, let's implement sum:

sum_ = 0
while item = consume():
    sum_ = sum_ + item
yield sum_

So, this is broadly what I imagine is happening behind the scenes, but I can't be sure, so:

How is this process actually implemented in most SQL engines?
What would happen with an aggregation that requires two or more traversals of the dataset, such as the standard deviation?

Note: the pseudocode is a kind of pseudo-Python.
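For what it's worth, the feed/get protocol imagined above can be made concrete with Python generators (the Aggregate wrapper is hypothetical glue, not something a database engine actually exposes):

class Aggregate:
    """Wrap a generator-based coroutine behind feed()/get()."""
    def __init__(self, gen_fn):
        self._gen = gen_fn()
        next(self._gen)          # prime: run to the first yield
        self._result = None

    def feed(self, value):
        # send() resumes the generator; it yields back the running result
        self._result = self._gen.send(value)

    def get(self):
        return self._result

def max_agg():
    max_ = float("-inf")
    while True:
        item = yield max_
        max_ = max(max_, item)

def sum_agg():
    sum_ = 0
    while True:
        item = yield sum_
        sum_ = sum_ + item

coroutines = [Aggregate(max_agg), Aggregate(sum_agg)]
for value in [3, 1, 4, 1, 5]:
    for c in coroutines:
        c.feed(value)
print([c.get() for c in coroutines])   # [5, 14]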
How is one or more aggregate function implemented in most SQL engines?
database;sql;database development;big data;etl
null
_softwareengineering.306563
When Instant was introduced with JSR-310, convenience methods were added to perform conversion between Date and Instant:

Date input = new Date();
Instant instant = input.toInstant();
Date output = Date.from(instant);

I wonder why it was chosen to be this way instead of, let's say, having Date output = instant.toDate(). My guess is that it was intended to avoid making Instant dependent on Date. Is this right?

But to me it is somehow strange that you make the new class independent of the already existing one, and make the old one dependent on the newly introduced one. Is this because of an expected deprecation of Date, or is there some other reason?
Why was conversion between Instant and Date named the way it was?
java;api design
Problems with Date:

It is a mutable object.
It is not thread-safe.
The cost of using it in a thread-safe manner (synchronized) is expensive.

Instant, on the other hand, is designed to be thread-safe and immutable. Making the object immutable is by itself a simple and inexpensive way of guaranteeing thread-safety. The Javadoc for Instant states:

"Implementation Requirements: This class is immutable and thread-safe."

Now, if you wanted to do something like Instant t = Instant.from(new Date()), the implementers of the API would have to introduce synchronization to guarantee thread-safe access to the Date object. I suspect that they wanted a clean break with the old ways and to avoid introducing any explicit synchronized methods. Introducing synchronized methods would have suggested that it is OK to use them; while it might be useful to have, it would set a bad precedent.

Instead, they chose to keep the conversion of Date to Instant within the Date API, which does not document any thread-safety. The safe assumption is that it is up to the user to synchronize access to the Date object first, if required to do so.
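The resulting shape of the API, in code (the calls are the ones from the question; the comments give one reading of the design):

import java.time.Instant;
import java.util.Date;

public class Conversions {
    public static void main(String[] args) {
        Date legacy = new Date();

        // The legacy, mutable class carries the instance conversion...
        Instant instant = legacy.toInstant();

        // ...while Instant stays free of any reference to Date:
        // the reverse conversion is a static factory on Date itself.
        Date roundTrip = Date.from(instant);

        System.out.println(instant + " <-> " + roundTrip);
    }
}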
_webapps.39787
The obvious way is to just copy and paste. However, the HTML has stylesheets like:

.auto-style9 {
    background-image: url('http://domainname.com/backgroundTall.png');
    background-repeat: no-repeat;
}

That is a background image of one of the tables. If I view the HTML in Firefox, I can see the background image. If I copy it to Gmail or Outlook, the background is gone.

So how can I do it? Actually, how can I send HTML email in general? How do corporations do that?
How to send HTML email via Gmail including the background CSS
gmail;html
null
_unix.346838
I have two scripts:

foo.sh:

#!/bin/bash
echo -e "one\ntwo" |
while read line; do
    cat not-existing
    echo hello $line
done

bar.sh:

#!/bin/bash
echo -e "one\ntwo" |
while read line; do
    ssh user@machine 'cat not-existing' # Here is the only difference
    echo hello $line
done

And now I run them:

$ ./foo.sh
cat: not-existing: No such file or directory
hello one
cat: not-existing: No such file or directory
hello two

$ ./bar.sh
cat: not-existing: No such file or directory
hello one

The output of bar.sh is surprising to me. I would expect it to be the same for both scripts. Why does the output of foo.sh and bar.sh differ? Is it a bug or a feature?

Note: the following works as I expect, i.e. its output is the same as the output of foo.sh:

#!/bin/bash
for line in `echo -e "one\ntwo"`; do
    ssh user@machine 'cat not-existing'
    echo hello $line
done

Why?
Unexpected behaviour of a shell script
bash;shell script;ssh;stdin
In bar.sh, the "two" is consumed by ssh. In the last example, the full output from echo is used by for before it starts looping.

To avoid having ssh gobble up your data from standard input, use ssh -n. This will hook up the standard input of ssh with /dev/null rather than with the standard input of the while loop. This will do what you expect it to do:

#!/bin/bash
echo -e "one\ntwo" |
while read line; do
    ssh -n user@machine 'cat not-existing' # Here is the only difference
    echo hello $line
done

If you had written

#!/bin/bash
echo -e "one\ntwo" |
while read line; do
    ssh user@machine 'cat'
    echo hello $line
done

then the cat on the remote machine would have output "two", since its standard input is handed down to it from ssh, which in turn got it from the loop and echo. It prints "two" rather than "one" because the first line of input has already been consumed by read.
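The same stdin-stealing effect can be reproduced locally without ssh; here head plays the role of the remote command (an illustrative sketch):

#!/bin/bash
printf 'one\ntwo\n' |
while read line; do
    # head shares the loop's stdin and silently eats the next line,
    # exactly like ssh without -n does
    head -n 1 > /dev/null
    echo hello $line
done
# prints only: hello one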
_unix.248842
Absolute noob here. I'm loving every last bit of Linux, but I'm still at the very first steps, and apparently tonight I managed to pull off my first royal screw-up; most of all, I really hope the answers won't be what I fear.

Long story short: while learning bash scripting etc., I erroneously ran chmod -R 770 /bin (please don't ask why, this is already quite embarrassing as is). The issue that made me realize the horrible mistake was a denied permission running /bin/bash when logging in as a user (resulting in a closed SSH connection), and after trying many other solutions found by googling, I checked .bash_history to find out the comic mistake.

Anyway, is there any way at all to get permissions back to defaults for the folder and its files (other than reinstalling the OS)? I have a backup of the whole SD card (I'm on a headless RasPi running Minibian) not older than 3 days, but I'm not quite sure rolling back to the previous version would actually change any permissions. Are these details stored in the folder itself, or in some sort of registry?

Also: why is it that, despite the permissions being rwx for the user as well as root, the scripts aren't executed?
/bin (and sub) default permissions
permissions;system recovery
null
_unix.378373
I want to create a dummy, virtual output on my Xorg server on a current Intel iGPU (on Ubuntu 16.04.2 HWE, with Xorg server version 1.18.4). It is similar to Linux Mint 18.2, where xrandr shows the following:

Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767
...
eDP1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
...
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
...

In Linux Mint 18.2, I can turn off the built-in display (eDP1) and turn on the VIRTUAL1 display with any arbitrary mode supported by the X server, attach x11vnc to my main display, and I get a GPU-accelerated remote desktop.

But in Ubuntu 16.04.2, that's not the case. The VIRTUAL* display doesn't exist at all in xrandr. Also, FYI, xrandr's output names are a little bit different on Ubuntu 16.04.2, where every number is prefixed with a "-". E.g. eDP1 in Linux Mint becomes eDP-1 in Ubuntu, HDMI1 becomes HDMI-1, and so on.

So, how do I add the virtual output in Xorg/xrandr? And how come Linux Mint 18.2 and Ubuntu 16.04.2 (which I believe use the exact same Xorg server, since LM 18.2 is based on Ubuntu, right?) can have very different xrandr configurations?

Using xserver-xorg-video-dummy is not an option, because the virtual output won't be accelerated by the GPU.
Add VIRTUAL output to Xorg
x11;xorg;xrandr;opengl;virtual desktop
null
_softwareengineering.105294
Is there a valid reason for the browsers to prefix new CSS features, instead of letting webmasters use the non-prefixed version? For example, sample code for a background gradient looks like:

#arbitrary-stops {
    /* fallback DIY */
    /* Safari 4-5, Chrome 1-9 */
    background: -webkit-gradient(linear, left top, right top, from(#2F2727), color-stop(0.05, #1a82f7), color-stop(0.5, #2F2727), color-stop(0.95, #1a82f7), to(#2F2727));
    /* Safari 5.1+, Chrome 10+ */
    background: -webkit-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727);
    /* Firefox 3.6+ */
    background: -moz-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727);
    /* IE 10 */
    background: -ms-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727);
    /* Opera 11.10+ */
    background: -o-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727);
}

What's the point in forcing webmasters to copy-paste the same code four times to get the same result?

Note: one of the reasons often quoted is that prefixed styles are intended to be temporary, while either the browser does not implement the spec correctly, or the spec is not definitive. IMO, this reason is nonsense:

If the browser engine does not implement the spec correctly, the browser will not be compliant, no matter whether it fails to implement it in a non-prefixed form or in a prefixed form.

If the spec is not definitive, it may matter when there were previous implementations with the same name. For example, if CSS2 had linear-gradient, but CSS3 were intended to extend linear-gradient with additional features, it would be clever to temporarily prefix the new, draft implementation as -css3-<style> to differentiate between the working CSS2 one and the experimental CSS3 one. In practice, CSS2 doesn't have linear-gradient or the other CSS3 novelties.

I would also understand if different browsers had different implementation formats: for example, let's say Firefox required, for text shadow, <weight-of-shadow distance-x distance-y color>, while Chrome required <distance-x distance-y weight-of-shadow color>. But actually, this is not the case; at least, all the new CSS3 features I've used so far had the same format.
What is the reason to put prefixes in new CSS features?
web development;css;browser compatibility
According to this W3C note:

"To avoid clashes with future CSS features, the CSS2.1 specification reserves a prefixed syntax for proprietary and experimental extensions to CSS. Prior to a specification reaching the Candidate Recommendation stage in the W3C process, all implementations of a CSS feature are considered experimental. The CSS Working Group recommends that implementations use a vendor-prefixed syntax for such features, including those in W3C Working Drafts. This avoids incompatibilities with future changes in the draft."

You can follow the state of the CSS specifications here and here.
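In practice, the usual defensive pattern while a feature is still prefixed is to list the standard form last, so the cascade lets it win once the browser supports it (a generic illustration with border-radius):

.box {
    -webkit-border-radius: 4px; /* old WebKit builds */
       -moz-border-radius: 4px; /* old Gecko builds */
            border-radius: 4px; /* standard syntax wins where supported */
}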
_softwareengineering.73611
I'm pretty new to code review, and I often feel overwhelmed by incoming changes. I mean, when there are serious code changes coming from several developers, I tend to accept everything without reviewing the whole, especially when I have lots of stuff to finish up. What techniques can help me be efficient in this area?
How to avoid being overwhelmed when doing code review?
code reviews
Plan code review as the first thing in the morning, before you do any of your other projects. If you have several developers passing their code to you, then it will likely take a few hours. It helps if the developer is there explaining why he did something the way he did. You should try to dedicate the time to the code review. If you are feeling overwhelmed, go to your manager and tell him you need to lighten your workload because you can't handle it. Don't feel pressure to finish other projects because you are being overloaded.
_codereview.110246
Below is the code for user recommendations using Mahout.

DataModel dm = new FileDataModel(new File(inputFile));
UserSimilarity sim = new LogLikelihoodSimilarity(dm);
UserNeighborhood neighborhood = new NearestNUserNeighborhood(100, sim, dm);
GenericUserBasedRecommender recommender = new GenericUserBasedRecommender(dm, neighborhood, sim);

After the recommendations are generated, I am trying to write them to a file like this:

FileWriter writer = new FileWriter(outputFile);
for (LongPrimitiveIterator userIterator = dm.getItemIDs(); userIterator.hasNext();) {
    long user = (long) userIterator.next();
    List<RecommendedItem> recs = recommender.recommend(user, numOfRec);
    for (RecommendedItem item : recs) {
        writer.write(user + "," + item.getItemID() + "," + item.getValue() + "\n");
    }
}
writer.close();

This code to write to the file is taking a lot of time. How can I speed up the write operations? I tried with BufferedWriter, but was unable to gain a speed-up.
File-write operations
java;performance;machine learning
You can gain some efficiency by wrapping the writer in a BufferedWriter. If you are using Java 7 or better, you should use try-with-resources to auto-close the writer; otherwise you should use a try-finally:

try (BufferedWriter writer = new BufferedWriter(new FileWriter(outputFile))) {
    // the for loops
}

With try-finally:

BufferedWriter writer = new BufferedWriter(new FileWriter(outputFile));
try {
    // the for loops
} finally {
    writer.close();
}

This ensures that the writer is closed should an exception occur.

Second, appending Strings for output is not the most performant thing to do; instead, pass each piece separately (note that write takes Strings, so the numeric values need explicit conversion):

writer.write(Long.toString(user));
writer.write(",");
writer.write(Long.toString(item.getItemID()));
writer.write(",");
writer.write(String.valueOf(item.getValue()));
writer.newLine(); // only available in BufferedWriter

As a more general suggestion, there is a better approach than reading the entire file, doing some processing, and then writing the output. Instead, you can read only as far as you need to process a part of the data, and then write the result out again. Whether you can do this depends on what you are actually doing with the data; it is only possible if you have a 1-pass transform.
_codereview.85787
I've been building a simple C# server-client chat-style app as a test of my C#. I've picked up code from a few tutorials, and extended what's there to come up with my own server spec.In this post (the second will be the client), I'd like to get some feedback on the server. To me, the code seems bulky and as if it could be brought down by some judicious use of functions or a utility class (I've spotted a doubled-up function (SendToClient) that I guess I could just make public to save on LOC, but what else is there?)1 - Program.csusing System;using System.Collections.Generic;using System.Linq;using System.Text;namespace MessengerServer{ class Program { static void Main(string[] args) { int port = 1100; if (args.Length == 2 && args[0] == --port) { try { port = Int32.Parse(args[1]); } catch (Exception e) { Output.Log(Not a valid port number. Defaulting to 1100., LogType.Error); } } Output.Log(Starting server on port + port, LogType.Info); new Server(port); } }}2 - Server.csusing System;using System.Collections.Generic;using System.Net;using System.Net.Sockets;using System.Threading;using System.Text;namespace MessengerServer{ class Server { private TcpListener tcpListener; private Thread listenThread; private ASCIIEncoding encoder = new ASCIIEncoding(); public Dictionary<DateTime, string> Messages = new Dictionary<DateTime, string>(); private List<string> MessagesAfter(string time) { List<string> matches = new List<string>(); DateTime sinceTime = DateTime.Parse(time); foreach (KeyValuePair<DateTime, string> pair in Messages) { if (pair.Key.CompareTo(sinceTime) > 0) { matches.Add(pair.Value); } } return matches; } private List<TcpClient> connectedClients = new List<TcpClient>(); public Server(int port) { Output.Log(Starting server: all network interfaces, port + port, LogType.Info); this.tcpListener = new TcpListener(IPAddress.Any, port); this.listenThread = new Thread(new ThreadStart(ListenForClients)); this.listenThread.Start(); } private void ListenForClients() { Output.Log(Listener thread spawned, LogType.Info); this.tcpListener.Start(); Output.Log(TCP listener started, ready to accept clients., LogType.Info); while (true) { TcpClient client = this.tcpListener.AcceptTcpClient(); Output.Log(Client connected, starting thread..., LogType.Info); Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComms)); clientThread.Start(client); connectedClients.Add((TcpClient) client); } } private void HandleClientComms(object client) { int clientId = new Random().Next(0, int.MaxValue - 1); TcpClient tcpClient = (TcpClient) client; NetworkStream clientStream = tcpClient.GetStream(); Output.Log(Communication thread started with client at + tcpClient.Client.RemoteEndPoint.ToString(), LogType.Info); Output.Log(Client identifier is + clientId, LogType.Info); SendToClient(tcpClient, clientId.ToString()); byte[] message = new byte[4096]; int bytesRead; string data = ; while (true) { bytesRead = 0; data = ; try { bytesRead = clientStream.Read(message, 0, 4096); Output.Log(Read + bytesRead + bytes from + clientId, LogType.Info); } catch(Exception e) { Output.Log(Could not read from client: + e.Message, LogType.Error); if (e.GetType() == Type.GetType(System.IO.IOException)) { connectedClients.Remove(tcpClient); Thread.CurrentThread.Abort(); } break; } if (bytesRead == 0) { Output.Log(Client + clientId + disconnected, LogType.Info); connectedClients.Remove(tcpClient); Thread.CurrentThread.Abort(); break; } string received = encoder.GetString(message, 0, bytesRead); data += received; 
Output.Log(Message: + received, LogType.Info); try { HandleMessage(tcpClient, clientId, data); } catch (ThreadAbortException tae) { Output.Log(Client thread disconnect complete: + tae.Message, LogType.Info); } catch (Exception e) { Output.Log(Could not handle message: + e.Message, LogType.Warn); } } } public void SendToClient(TcpClient client, string message) { try { NetworkStream stream = client.GetStream(); byte[] msg = new byte[message.Length]; msg = encoder.GetBytes(message); stream.Write(msg, 0, message.Length); stream.Flush(); Output.Log(Sent to client: + message, LogType.Info); } catch (Exception e) { Output.Log(Unexpected exception sending: + e.Message, LogType.Error); } } private void HandleMessage(TcpClient client, int clientId, string message) { Output.Log(Attempting to handle message + message, LogType.Info); if (message.StartsWith([AllSince])) { try { string date = message.Split(']')[1]; List<string> messages = MessagesAfter(date); string data = ; foreach (string msg in messages) { data += msg + |&|; } SendToClient(client, data); } catch (IndexOutOfRangeException e) { SendToClient(client, [200]); throw new Exception(No date was found for [AllSince]: + e.Message); } catch (FormatException e) { SendToClient(client, [201]); throw new Exception(Date was not formatted correctly: + e.Message); } catch (Exception e) { SendToClient(client, [100]); throw new Exception(Unexpected exception: + e.Message); } } else if (message.StartsWith([Send])) { try { string text = message.Split(']')[1]; Messages.Add(DateTime.Now, < + clientId + > + text); Output.Log(Added to message list: + text, LogType.Info); NotifyAllClients(< + clientId + > + text); SendToClient(client, [600]); } catch (Exception e) { SendToClient(client, [300]); throw new Exception(No message could be found for [Send]: + e.Message); } } else if (message.StartsWith([Disconnect])) { Output.Log(Client + clientId + 's client thread disconnected, LogType.Warn); connectedClients.Remove(client); Thread.CurrentThread.Abort(); } else if (message.StartsWith([Command])) { string command = message.Substring(9); Commands.HandleCommand(client, command); } else { SendToClient(client, [400]); throw new Exception(No handling protocol specified.); } } public void NotifyAllClients(string message) { Output.Log(Notifying all clients of message + message, LogType.Info); foreach (TcpClient client in connectedClients) { Output.Log(Notify: client + client.Client.RemoteEndPoint.ToString(), LogType.Info); SendToClient(client, [Message] + message); } } }}3 - Commands.csusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Net;using System.Net.Sockets;namespace MessengerServer{ class Commands { public static void HandleCommand(TcpClient client, string command) { string[] args = command.Split(' '); switch (args[0].ToLower().Trim()) { case force: if (args.Length == 2) { SendToClient(client, [ + args[1] + ]); } else { SendToClient(client, [CommandInvalid]); } break; default: SendToClient(client, [CommandInvalid]); break; } } public static void SendToClient(TcpClient client, string message) { try { NetworkStream stream = client.GetStream(); byte[] msg = new byte[message.Length]; msg = Encoding.ASCII.GetBytes(message); stream.Write(msg, 0, message.Length); stream.Flush(); Output.Log(Sent to client: + message, LogType.Info); } catch (Exception e) { Output.Log(Unexpected exception sending: + e.Message, LogType.Error); } } }}The other class, Output (in Output.cs) is one I'm happy with - I use it widely and it's pretty solid, 
so I've decided not to put it up for review. Note: I've also got XML documentation comments in my code, but have excluded them here for succinctness.
C# Chat - Part 1: Server
c#;server
I would start with looking at the private methods. Looks like we have quite a few of them. Based on the work they're doing, my first stab would be to extract them as their own classes and expose the methods as public methods used by the server. This way you can actually add unit tests around your classes.

E.g. I would take HandleClientComms() as my first candidate to be extracted out. Once we have it out, the other private methods that are called by HandleClientComms() would be the next candidates to become their own classes.

public interface IHandleClientCommunications
{
    void Handle(TcpClient client);
}

The Server class that has the method below would look something like this:

private void ListenForClients()
{
    //Output.Log("Listener thread spawned", EntityConstants.LogType.Info);
    this.tcpListener.Start();
    //Output.Log("TCP listener started, ready to accept clients.", EntityConstants.LogType.Info);

    while (true)
    {
        TcpClient client = this.tcpListener.AcceptTcpClient();
        Thread clientThread = new Thread(o => _handleClientCommunications.Handle(client));
        clientThread.Start(client);
        connectedClients.Add((TcpClient)client);
    }
}

Once you do this, you will see that a lot of your private methods move out of the Server class, as they are not needed there. They will be moved under the new class that we introduced above. This is OK.

The next thing to look at is extracting HandleMessage() (I smell a Strategy pattern once it's extracted: we have quite a few if...else branches that run different algorithms based on certain criteria, which is a good candidate to refactor). Extract out NotifyAllClients() (this should take care of Client and Clients). Extract out RemoveClient(). Note: as you start extracting things out, you want to ensure that you are adding tests.

Usually I stick with:

the Single Responsibility principle
the Dependency Inversion principle
the ability to write unit tests for the piece of software

as my yardstick to decide if a certain object needs to be broken down or not. This guideline helps me avoid the urge of over- or under-engineering a certain piece of software.

There are a lot of good books out there that have helped me immensely to grow as a software engineer. I list a few below:

http://martinfowler.com/books/refactoring.html
http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
_webapps.51362
I want to reindex a page using Google Custom Search. Google Webmaster Tools says I manage the site. In Google Custom Search, I go to Specific URLs under Index Now. I add a URL or two and click Index Now. It always says "Invalid url to index". Is there some trick to this, like some special formatting I have to do with the URLs? Has anyone gotten this to work?
Invalid url to index in Google Custom Search
google custom search
null
_cs.21935
I'm struggling a bit to understand two of the problems we were given in class. Could someone look over my work and maybe give me a few hints? State whether the following languages are regular or not and prove your answer.

$$\{0^n1^m \mid m \geq 0 \text{ and } n = 2m+1\}$$
$$\{0^a1^b0^c \mid 0 \leq a \leq b \leq c \leq 100\}$$

The first one appears to me to be non-regular, as $m$ is unbounded. To prove that it is not regular, I used the pumping lemma with $0^P 1^{\frac{P-1}{2}}$ as the string. For the second language, I'm not really sure if it is regular or not. I can't come up with a DFA/NFA/regexp for it, nor can I figure out how to apply the pumping lemma to it. Is it safe to use $0^P1^P0^P$ as the string and say that if $Y$, where $Y = 0^k$, $1 \leq k \leq P$, appears more than once (i.e. $XYYZ$), the resulting string is not in the language and thus the language is not regular?

Edit: I'm sorry if this question seems elementary. I've already read through most of the posts pertaining to this topic and some things were unclear to me. I was asking for someone to look over my work and make sure I followed the correct procedures. For instance, I'm still confused about the second question. The pumping lemma argument I used makes it seem like the language is not regular, but at the same time it is finite. Does that mean I used the pumping lemma incorrectly? In what way did I use it incorrectly?
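For reference, here is one way the pumping argument for the first language can be written out in full (a sketch that pumps on the block of 0s, with $p$ the pumping length):

Take $w = 0^{2p+1}1^p \in L$, so $|w| \geq p$. Any decomposition $w = xyz$ with $|xy| \leq p$ and $|y| \geq 1$ forces $y = 0^k$ for some $1 \leq k \leq p$. Pumping once gives
$$xy^2z = 0^{2p+1+k}1^p,$$
and since $2p+1+k \neq 2p+1$, the pumped string violates $n = 2m+1$ and is not in $L$. Hence $L$ is not regular.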
Proving a language is regular or non-regular
regular languages;pumping lemma
null
_unix.202309
In a GNU/Linux bash shell, I am using a command to identify and save all my user-defined variables at a specific point in time for debugging purposes (I source the file later):

declare | grep '^[[:lower:]]' | grep -v '^colors' | sort > ${output_dir}/env_variables.tmp

I am aware that this is bad practice, as I am programming by coincidence, trusting that all system variables (except colors) will begin with an upper-case letter. I would like to come up with a better way. I have also considered saving a version of my current env variables before running the script, and then simply diffing the env variables in the child shell against it. This seems as hack-ish as my current attempt, so I was wondering: is there a way to filter variables by date or by user-defined status, or any other criterion to identify those that are newly created by a certain child shell?
Identify user defined variables
bash;environment variables;variable
No, there's no way to filter variables by date or by who defined them. You COULD set all existing variables to read-only and then later use declare -p to filter those out. But a more common way to solve this is to prefix all your variables with __project_ (where project is whatever). The variables get lengthy, but that seems to be the safest way.

Your idea of saving the variables on startup isn't a bad one at all. You can save just the names with:

declare | awk -F= '/=/ { print $1 }' > tmpfile.$$.shvars

Or back to the read-only idea:

while read var; do declare -r "$var"; done < tmpfile.$$.shvars

(note the quoted "$var": an unquoted declare -r var would make the loop variable itself read-only, not the variable it names). Now you declare yours, and later, when you are done:

declare -p | awk '$2 !~ /^.r$/ { print $3 }' | cut -d= -f1

gets you the list of your variables. The downside is that all those variables are now read-only that shouldn't be.
_softwareengineering.351840
Imagine a structure like this, a list with products:

<div class="container">
    <div class="teaser">
        <img src="...">
        <p>Product 1</p>
    </div>
    <div class="teaser">
        <img src="...">
        <p>Product 2</p>
    </div>
</div>

We need to test that the number of .teaser elements in .container is greater than X.

Now the question. I suggested to my team that we rename the generic class names to real semantic names, so we change the CSS to fit the semantic classes and we can write frontend tests against the semantic fields.

My suggestion: radically change the classes and the CSS, with no extra class names for testing purposes.

<div class="product-list">
    <div class="product">
        <img src="...">
        <p>Comment</p>
    </div>
    <div class="product">
        <img src="...">
        <p>Another Comment 2</p>
    </div>
</div>

My team's counter-suggestion: keep the class names we already have and add test-specific classes (with the prefix "testing") used only for testing purposes:

<div class="container testing-product-list">
    <div class="teaser testing-product">
        <img src="...">
        <p>Comment</p>
    </div>
    <div class="teaser testing-product">
        <img src="...">
        <p>Another Comment 2</p>
    </div>
</div>

Which solution is better?
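Either way, the frontend test itself boils down to a selector count along these lines (a generic sketch; the threshold of 3 and the semantic class names are only illustrative):

// Hypothetical browser-side assertion using the semantic markup
const products = document.querySelectorAll('.product-list .product');
console.assert(products.length > 3, 'expected more than 3 products');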
Add extra HTML classes for frontend-tests?
testing;tdd;html;css;front end
null
_softwareengineering.305077
Reading Mary Rose Cook's Practical Introduction to Functional Programming, she gives as an example of an anti-pattern:

def format_bands(bands):
    for band in bands:
        band['country'] = 'Canada'
        band['name'] = band['name'].replace('.', '')
        band['name'] = band['name'].title()

since the function does more than one thing, the name isn't descriptive, and it has side effects. As a proposed solution, she suggests pipelining anonymous functions:

pipeline_each(bands, [call(lambda x: 'Canada', 'country'),
                      call(lambda x: x.replace('.', ''), 'name'),
                      call(str.title, 'name')])

However, this seems to me to have the downside of being even less testable; at least format_bands could have a unit test to check it does what it's meant to, but how do you test the pipeline? Or is the idea that the anonymous functions are so self-explanatory that they don't need to be tested?

My real-world application for this is in trying to make my pandas code more functional. I'll often have some sort of pipeline inside a munging function:

def munge_data(df):
    df['name'] = df['name'].str.lower()
    df = df.drop_duplicates()
    return df

Or, rewriting in the pipeline style:

def munge_data(df):
    munged = (df.assign(name=lambda x: x['name'].str.lower())
                .drop_duplicates())
    return munged

Any suggestions for best practices in this kind of situation?
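One middle ground is to name each pipeline step so it stays individually testable (a sketch; the helper and test names are made up):

import pandas as pd

def lowercase_names(df):
    # one small, named pipeline step: trivial to test in isolation
    return df.assign(name=df['name'].str.lower())

def dedupe(df):
    return df.drop_duplicates()

def munge_data(df):
    return dedupe(lowercase_names(df))

def test_lowercase_names():
    df = pd.DataFrame({'name': ['Alice', 'BOB']})
    assert list(lowercase_names(df)['name']) == ['alice', 'bob']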
Unit testing for data munging pipelines made up of one-line functions
python;unit testing
null
_unix.316185
Where can I get detailed information (including file format, database format) on built-in NSS services such as db, files?
Where to get information on built-in NSS services?
nsswitch
Read the source! Prepare via:

sudo apt-get install apt-src

Then, pick a file associated with NSS, and find out which software package(s) provide it. Let's use /etc/nsswitch.conf:

dpkg -S /etc/nsswitch.conf

Then, select a package, and apt-src install it.
_codereview.85099
I wrote 3 subroutines related to batch data processing, they will be used together. A bit of background, I wrote this for my admin colleagues who do not write code. An application dumps daily .ack files onto a shared drive which contain data processing messages (success, errors, etc.). I wrote the code with comments aimed at my colleagues, hence stating what would be obvious to someone who knows VBA, please be mindful of that; they are intended so they can modify the data locations and such for their own purposes.The first two subs are quite simple but if something can be improved it would be great:Sub Copy_Files_With_Specific_Extension() Dim FSO As Object Dim FromPath As String Dim ToPath As String Dim FileExt As String ' change the value in quotes to the source and destination path you need FromPath = C:\Users\fveilleux-gaboury\Documents ToPath = C:\test ' change the value in quotes to the file extension you want to copy ' change the value to *.* to copy all file types FileExt = *.ack* ' DO NOT CHANGE ANYTHING BELOW THIS LINE If Right(FromPath, 1) <> \ Then FromPath = FromPath & \ End If Set FSO = CreateObject(Scripting.FileSystemObject) If FSO.FolderExists(FromPath) = False Then MsgBox Source folder & FromPath & doesn't exist Exit Sub End If If FSO.FolderExists(ToPath) = False Then MsgBox Destination folder & ToPath & doesn't exist Exit Sub End If FSO.CopyFile Source:=FromPath & FileExt, Destination:=ToPath MsgBox You can find the files from & FromPath & in & ToPath Set FSO = NothingEnd SubSub Rename_File_Extension() Dim FileName As String Dim FSO As Object Dim Folder As Object Set FSO = CreateObject(Scripting.FileSystemObject) ' change the value inside the quotes to the folder containing the files Set Folder = FSO.GetFolder(C:\test) Dim OldText As String Dim NewText As String ' change the value inside the quotes to find and replace different extensions OldText = .ack NewText = .txt ' DO NOT CHANGE ANYTHING BELOW THIS LINE For Each File In Folder.Files If InStr(1, File.Name, OldText) <> 0 Then FileName = Replace(File.Name, OldText, NewText) File.Name = FileName End If Next MsgBox File extension & OldText & has been replaced with & NewText & in folder & Folder Set FSO = Nothing Set Folder = NothingEnd SubThis sub is more complicated and I would really want to improve it. It loops through a folder to grab all the file names, puts them in an array, then another loop goes over the indexes and performs IO functions. The output is a large text file which contains every line of data from all of the input files (which I can then import into an Access database for further processing). 
Sub Combine_Text_Files() ' change the value inside the quotes to the folder containing the files ' only supports plain text files *.txt Dim InputDirPath As String InputDirPath = C:\test\ ' change the value inside the quotes to the folder where you want the output file to go Dim OutputDirPath As String OutputDirPath = C:\ ' change the value inside the quotes to the desired output file name Dim OutputFileName As String OutputFileName = _CombinedOutput.txt ' DO NOT CHANGE ANYTHING BELOW THIS LINE If Right(InputDirPath, 1) <> \ Then InputDirPath = InputDirPath & \ End If If Right(OutputDirPath, 1) <> \ Then OutputDirPath = OutputDirPath & \ End If Dim InputFileType As String InputFileType = *.txt Dim InputFileName As String InputFileName = Dir$(InputDirPath & InputFileType) Dim FileArray() As String Dim i As Integer: i = 0 Do Until InputFileName = vbNullString ReDim Preserve FileArray(0 To i) FileArray(i) = InputFileName InputFileName = Dir$ i = i + 1 Loop Dim FSO As Object Set FSO = CreateObject(Scripting.FileSystemObject) Dim Stream As Object Set Stream = FSO.CreateTextFile((OutputDirPath & OutputFileName), OverWrite:=True, Unicode:=False) Dim FileNameAndPath As String For i = LBound(FileArray) To UBound(FileArray) FileNameAndPath = (InputDirPath & FileArray(i)) Debug.Print (Processing: & FileNameAndPath) Dim FileToCopy As File Set FileToCopy = FSO.GetFile(FileNameAndPath) Dim StreamToCopy As TextStream Set StreamToCopy = FileToCopy.OpenAsTextStream(ForReading) Dim CopiedText As String CopiedText = StreamToCopy.ReadAll Stream.WriteLine CopiedText Debug.Print (Appended to & OutputFileName & : & FileNameAndPath) Next i MsgBox InputFileType & files in & InputDirPath & have been merged together. & vbNewLine _ & You can find the output file & OutputFileName & in this location: & vbNewLine _ & OutputDirPath Stream.Close Set FSO = Nothing Set Stream = NothingEnd Sub
Text files: Copy, Rename, Append/Merge together
beginner;strings;array;file system;vba
given the fact that you wrote it for your colleagues who may change it in future, I'd have all the code in one module to make it a bit clear for them.The code is well structured and easy to read. I'd change just couple of thingsError handlingYou currently do:Sub Copy_Files_With_Specific_Extension() Dim FSO As Object Dim FromPath As String Dim ToPath As String Dim FileExt As String ' change the value in quotes to the source and destination path you need FromPath = C:\Users\fveilleux-gaboury\Documents ToPath = C:\test ' change the value in quotes to the file extension you want to copy ' change the value to *.* to copy all file types FileExt = *.ack* ' DO NOT CHANGE ANYTHING BELOW THIS LINE If Right(FromPath, 1) <> \ Then FromPath = FromPath & \ End If Set FSO = CreateObject(Scripting.FileSystemObject) If FSO.FolderExists(FromPath) = False Then MsgBox Source folder & FromPath & doesn't exist Exit Sub End If If FSO.FolderExists(ToPath) = False Then MsgBox Destination folder & ToPath & doesn't exist Exit Sub End If FSO.CopyFile SOURCE:=FromPath & FileExt, Destination:=ToPath MsgBox You can find the files from & FromPath & in & ToPath Set FSO = NothingEnd Subwhich means that your FSO object may not be disposed correctly if any of the condition is true, like hereIf FSO.FolderExists(FromPath) = False Then MsgBox Source folder & FromPath & doesn't exist Exit Sub End IfI'd change all your methods to support proper error handling like here:Public Sub Copy_Files_With_Specific_Extension() Const SOURCE As String = Copy_Files_With_Specific_Extension Dim FSO As Object Dim FromPath As String Dim ToPath As String Dim FileExt As String On Error GoTo ErrorHandler ' change the value in quotes to the source and destination path you need FromPath = C:\Users\fveilleux-gaboury\Documents ToPath = C:\test ' change the value in quotes to the file extension you want to copy ' change the value to *.* to copy all file types FileExt = *.ack* ' DO NOT CHANGE ANYTHING BELOW THIS LINE If Right(FromPath, 1) <> \ Then FromPath = FromPath & \ End If Set FSO = CreateObject(Scripting.FileSystemObject) If FSO.FolderExists(FromPath) = False Then MsgBox Source folder & FromPath & doesn't exist GoTo ExitRoutine End If If FSO.FolderExists(ToPath) = False Then MsgBox Destination folder & ToPath & doesn't exist GoTo ExitRoutine End If FSO.CopyFile SOURCE:=FromPath & FileExt, Destination:=ToPath MsgBox You can find the files from & FromPath & in & ToPathExitRoutine: Set FSO = Nothing Exit SubErrorHandler: MsgBox Hey mate, something went wrong, call me and tell me this & vbNewLine & _ Method name: & SOURCE & vbNewLine & _ Error code: & Err.Number & vbNewLine & _ Error description: & Err.Description GoTo ExitRoutineEnd Subif any of your conditions are true or if any unexpected error is thrown, the FSO object will be always properly disposed.My assumption is that the source folder and the destination folder may change in future and I don't think your colleagues have to go to the code and change it. 
I'd write a code that will allow them to change the folder as they need and set the folder you mentioned as defaultHere is code I use (but didn't write it):Option ExplicitPrivate Type BrowseInfo hWndOwner As Long pIDLRoot As Long pszDisplayName As String lpszTitle As String ulFlags As Long lpfnCallback As Long lParam As Long iImage As LongEnd TypePublic Const BIF_RETURNONLYFSDIRS = &H1Public Const BIF_DONTGOBELOWDOMAIN = &H2Public Const BIF_STATUSTEXT = &H4Public Const BIF_RETURNFSANCESTORS = &H8Public Const BIF_EDITBOX = &H10Public Const BIF_VALIDATE = &H20Public Const BIF_NEWDIALOGSTYLE = &H40Public Const BIF_USENEWUI = (BIF_NEWDIALOGSTYLE Or BIF_EDITBOX)Public Const BIF_BROWSEINCLUDEURLS = &H80Public Const BIF_UAHINT = &H100Public Const BIF_NONEWFOLDERBUTTON = &H200Public Const BIF_NOTRANSLATETARGETS = &H400Public Const BIF_BROWSEFORCOMPUTER = &H1000Public Const BIF_BROWSEFORPRINTER = &H2000Public Const BIF_BROWSEINCLUDEFILES = &H4000Public Const BIF_SHAREABLE = &H8000Private Const MAX_PATH = 260Private Const WM_USER = &H400Private Const BFFM_INITIALIZED = 1Private Const BFFM_SELCHANGED = 2Private Const BFFM_SETSTATUSTEXT = (WM_USER + 100)Private Const BFFM_SETSELECTION = (WM_USER + 102)Public Declare Function SHGetPathFromIDList Lib shell32.dll Alias SHGetPathFromIDListA (ByVal pidl As Long, ByVal pszPath As String) As LongPublic Declare Function SHBrowseForFolder Lib shell32.dll Alias SHBrowseForFolderA (lpBrowseInfo As BrowseInfo) As LongPublic Declare Sub CoTaskMemFree Lib ole32.dll (ByVal pv As Long)Private Declare Function SendMessage Lib user32 Alias SendMessageA (ByVal hWnd As Long, ByVal wMsg As Long, ByVal wParam As Long, ByVal lParam As String) As LongPrivate mstrSTARTFOLDER As StringPublic Function GetFolder(ByVal hWndModal As Long, _ Optional StartFolder As String = , _ Optional Title As String = Please select a folder:, _ Optional IncludeFiles As Boolean = False, _ Optional IncludeNewFolderButton As Boolean = False) As String Dim bInf As BrowseInfo Dim RetVal As Long Dim PathID As Long Dim RetPath As String Dim Offset As Integer 'Set the properties of the folder dialog bInf.hWndOwner = hWndModal bInf.pIDLRoot = 0 bInf.lpszTitle = Title bInf.ulFlags = BIF_RETURNONLYFSDIRS Or BIF_STATUSTEXT If IncludeFiles Then bInf.ulFlags = bInf.ulFlags Or BIF_BROWSEINCLUDEFILES If IncludeNewFolderButton Then bInf.ulFlags = bInf.ulFlags Or BIF_NEWDIALOGSTYLE If StartFolder <> Then mstrSTARTFOLDER = StartFolder & vbNullChar bInf.lpfnCallback = GetAddressofFunction(AddressOf BrowseCallbackProc) 'get address of function. 
End If 'Show the Browse For Folder dialog PathID = SHBrowseForFolder(bInf) RetPath = Space$(512) RetVal = SHGetPathFromIDList(ByVal PathID, ByVal RetPath) If RetVal Then 'Trim off the null chars ending the path 'and display the returned folder Offset = InStr(RetPath, Chr$(0)) GetFolder = Left$(RetPath, Offset - 1) 'Free memory allocated for PIDL CoTaskMemFree PathID Else GetFolder = End IfEnd FunctionPrivate Function BrowseCallbackProc(ByVal hWnd As Long, ByVal uMsg As Long, ByVal lp As Long, ByVal pData As Long) As Long On Error Resume Next Dim lpIDList As Long Dim ret As Long Dim sBuffer As String Select Case uMsg Case BFFM_INITIALIZED Call SendMessage(hWnd, BFFM_SETSELECTION, 1, mstrSTARTFOLDER) Case BFFM_SELCHANGED sBuffer = Space(MAX_PATH) ret = SHGetPathFromIDList(lp, sBuffer) If ret = 1 Then Call SendMessage(hWnd, BFFM_SETSTATUSTEXT, 0, sBuffer) End If End Select BrowseCallbackProc = 0End FunctionPrivate Function GetAddressofFunction(add As Long) As Long GetAddressofFunction = addEnd Functionand how I implemented it to your code:Public Sub Copy_Files_With_Specific_Extension() Const SOURCE As String = Copy_Files_With_Specific_Extension Dim FSO As Object Dim FromPath As String Dim ToPath As String Dim FileExt As String On Error GoTo ErrorHandler ' change the value in quotes to the source and destination path you need FromPath = GetFolder(hWndModal:=0, _ StartFolder:=C:\Users\fveilleux-gaboury\Documents, _ Title:=Select the source folder that contains all the *.ack* files, _ IncludeNewFolderButton:=True)the change means they will have to make one extra click if the folder is correct but on other side it will give them ability to change it easily in future if needed.The same way you can do the **C:\test** folder but here I'd consider to make the path as a constant that will be at top of your module and any change applied to the constant will be reflected to all places in your codeConst WORKING_FOLDER As String = C:\testPublic Sub Copy_Files_With_Specific_Extension()... ToPath = WORKING_FOLDER...Sub Rename_File_Extension()... Set Folder = FSO.GetFolder(WORKING_FOLDER)...Sub Combine_Text_Files()... InputDirPath = WORKING_FOLDER...I noticed that you use for your ack files two 'formats:FileExt = *.ack*OldText = .ackIs this intended? If it should be the same, again, I'd make a constant at top of your module to make it easy for change in future = one placeFinally your Combine_Text_Files method. I do find your method readable and appropriate. I'm not sure if there is any better/faster method for reading and appending text files but if it's not slow just use it. 
I found some minor bugs there like **ForReading* constant and not disposing some object variables but otherwise it seems to be OKHere is how it looks like in my editor after all the changes Option ExplicitConst WORKING_FOLDER As String = C:\testPublic Sub Copy_Files_With_Specific_Extension() Const SOURCE As String = Copy_Files_With_Specific_Extension Dim FSO As Object Dim FromPath As String Dim ToPath As String Dim FileExt As String On Error GoTo ErrorHandler ' change the value in quotes to the source and destination path you need FromPath = GetFolder(hWndModal:=0, _ StartFolder:=C:\Users\fveilleux-gaboury\Documents, _ Title:=Select the source folder that contains all the *.ack* files, _ IncludeNewFolderButton:=True) ToPath = WORKING_FOLDER ' change the value in quotes to the file extension you want to copy ' change the value to *.* to copy all file types FileExt = *.ack* ' DO NOT CHANGE ANYTHING BELOW THIS LINE If Right(FromPath, 1) <> \ Then FromPath = FromPath & \ End If Set FSO = CreateObject(Scripting.FileSystemObject) If FSO.FolderExists(FromPath) = False Then MsgBox Source folder & FromPath & doesn't exist GoTo ExitRoutine End If If FSO.FolderExists(ToPath) = False Then MsgBox Destination folder & ToPath & doesn't exist GoTo ExitRoutine End If FSO.CopyFile SOURCE:=FromPath & FileExt, Destination:=ToPath MsgBox You can find the files from & FromPath & in & ToPathExitRoutine: Set FSO = Nothing Exit SubErrorHandler: MsgBox Hey mate, something went wrong, call me and tell me this & vbNewLine & _ Method name: & SOURCE & vbNewLine & _ Error code: & Err.Number & vbNewLine & _ Error description: & Err.Description, vbExclamation, Unexpected error at & SOURCE GoTo ExitRoutineEnd SubSub Rename_File_Extension() Const SOURCE As String = Rename_File_Extension Dim FileName As String Dim FSO As Object Dim Folder As Object Dim File As Object On Error GoTo ErrorHandler Set FSO = CreateObject(Scripting.FileSystemObject) ' change the value inside the quotes to the folder containing the files Set Folder = FSO.GetFolder(WORKING_FOLDER) Dim OldText As String Dim NewText As String ' change the value inside the quotes to find and replace different extensions OldText = .ack NewText = .txt ' DO NOT CHANGE ANYTHING BELOW THIS LINE For Each File In Folder.Files If InStr(1, File.Name, OldText) <> 0 Then FileName = Replace(File.Name, OldText, NewText) File.Name = FileName End If Next MsgBox File extension & OldText & has been replaced with & NewText & in folder & FolderExitRoutine: Set FSO = Nothing Set Folder = Nothing Set File = Nothing Exit SubErrorHandler: MsgBox Hey mate, something went wrong, call me and tell me this & vbNewLine & _ Method name: & SOURCE & vbNewLine & _ Error code: & Err.Number & vbNewLine & _ Error description: & Err.Description, vbExclamation, Unexpected error at & SOURCE GoTo ExitRoutineEnd SubSub Combine_Text_Files() Const SOURCE As String = Combine_Text_Files Const fso_ForReading As Integer = 1 ' change the value inside the quotes to the folder containing the files ' only supports plain text files *.txt Dim InputDirPath As String On Error GoTo ErrorHandler InputDirPath = WORKING_FOLDER ' change the value inside the quotes to the folder where you want the output file to go Dim OutputDirPath As String OutputDirPath = C:\ ' change the value inside the quotes to the desired output file name Dim OutputFileName As String OutputFileName = _CombinedOutput.txt ' DO NOT CHANGE ANYTHING BELOW THIS LINE If Right(InputDirPath, 1) <> \ Then InputDirPath = InputDirPath & \ End If If 
Right(OutputDirPath, 1) <> \ Then OutputDirPath = OutputDirPath & \ End If Dim InputFileType As String InputFileType = *.txt Dim InputFileName As String InputFileName = Dir$(InputDirPath & InputFileType) Dim FileArray() As String Dim i As Integer: i = 0 Do Until InputFileName = vbNullString ReDim Preserve FileArray(0 To i) FileArray(i) = InputFileName InputFileName = Dir$ i = i + 1 Loop Dim FSO As Object Set FSO = CreateObject(Scripting.FileSystemObject) Dim Stream As Object Set Stream = FSO.CreateTextFile((OutputDirPath & OutputFileName), OverWrite:=True, Unicode:=False) Dim FileNameAndPath As String For i = LBound(FileArray) To UBound(FileArray) FileNameAndPath = (InputDirPath & FileArray(i)) Debug.Print (Processing: & FileNameAndPath) Dim FileToCopy As Object Set FileToCopy = FSO.GetFile(FileNameAndPath) Dim StreamToCopy As Object Set StreamToCopy = FileToCopy.OpenAsTextStream(fso_ForReading) Dim CopiedText As String CopiedText = StreamToCopy.ReadAll Stream.WriteLine CopiedText Debug.Print (Appended to & OutputFileName & : & FileNameAndPath) Next i MsgBox InputFileType & files in & InputDirPath & have been merged together. & vbNewLine _ & You can find the output file & OutputFileName & in this location: & vbNewLine _ & OutputDirPath Stream.CloseExitRoutine: Set FSO = Nothing Set FileToCopy = Nothing Set StreamToCopy = Nothing If Not Stream Is Nothing Then Stream.Close End If Set Stream = Nothing Exit SubErrorHandler: MsgBox Hey mate, something went wrong, call me and tell me this & vbNewLine & _ Method name: & SOURCE & vbNewLine & _ Error code: & Err.Number & vbNewLine & _ Error description: & Err.Description, vbExclamation, Unexpected error at & SOURCE GoTo ExitRoutineEnd Sub
_unix.344059
I am trying to recover the data from an external HDD for a friend. I am using the latest version of Knoppix, booting it from USB. I created an image (.img) using a tutorial for ddrescue, but now I have the copia.img file and can't mount it. If I try to mount it, the terminal says:

mount: wrong fs type, bad option, bad superblock on .....

The drive was used to store photos and does not contain any OS or similar. If I run the file command on copia.img it says:

DOS/MBR boot sector, code offset 0x52+2, OEM-ID NTFS, Media descriptor 0xf8, sectors/track 63, heads 255, hidden sectors 63, dos <4.0 BootSector (0x80), FAT (1Y biy by descriptor); NTFS, sectors/track 63, sectors 1953520001, $MFT start cluster 21931768, $MFTMirror start cluster 477176, clusters/RecordSegment 2, clusters/index block 8, serial number 0d2c6a522c6a507b5; contains Microsoft Windows XP/Vista bootloader BOOTMGR

Also, if I run the dmesg command it says:

Please, can you help me recover it?
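One thing worth checking, given the "hidden sectors 63" in that output: if copia.img is a whole-disk image, mount needs the partition's byte offset (a hedged sketch; the offset value and the ntfs-3g driver are assumptions, not a diagnosis):

# Inspect the partition table inside the image, if there is one
fdisk -l copia.img

# If a partition starts at sector 63, mount read-only at that byte offset
sudo mount -t ntfs-3g -o ro,loop,offset=$((63 * 512)) copia.img /mnt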
ddrescue image can't be mounted
ddrescue
null
_softwareengineering.164909
History

We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details; the gateway then returns him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site, etc.).

Intention & problem description

We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing, bearing in mind that normally simplicity is robust whereas complexity is fragile.

Examples

User abort: the user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for the response. The user hits stop in his browser or closes the window.

ignore_user_abort() in PHP may be an option, but is that reliable?
Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the connection?

Database goes away: sounds over-complicated, but with e.g. a webserver in the States and a DB in the UK, it has happened and will happen again. The user clicks together his order, and the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, in PHP, to start something like an SQL transaction that only gets committed or rolled back at the very end, depending on the individual steps? Should neither commit nor rollback happen, I could lock the user to prevent him from paying again or to avoid improperly accounting for payments, but how?

And what else do I need to consider technically? None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and either my search engine or my search terms weren't good enough to find somebody else's thoughts on this.

Thank you very much for any insight on how you would approach this!
Integrating with a payment provider; Proper and robust OOP approach
design;php;payment
null
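(One standard pattern for both failure cases above — sketched here, not tied to any particular gateway or to the asker's stack: write a local "pending" payment record, keyed by your own idempotent reference, *before* the XML request goes out, and reconcile it afterwards. If the user aborts or the result can't be stored, the orphaned pending record tells a background job to ask the gateway for the real outcome. A minimal Python illustration; all table and field names are invented:)

import sqlite3
import uuid

db = sqlite3.connect("payments.db")          # stand-in for the real DB
db.execute("""CREATE TABLE IF NOT EXISTS payment
              (ref TEXT PRIMARY KEY, state TEXT, amount REAL)""")

def start_payment(amount):
    """Record intent *before* talking to the gateway."""
    ref = uuid.uuid4().hex                   # our idempotent reference
    with db:                                 # commits, or rolls back on error
        db.execute("INSERT INTO payment VALUES (?, 'pending', ?)", (ref, amount))
    return ref

def finish_payment(ref, gateway_ok):
    """Reconcile after the gateway response (or later, from a sweep job)."""
    state = "captured" if gateway_ok else "failed"
    with db:
        db.execute("UPDATE payment SET state = ? WHERE ref = ? AND state = 'pending'",
                   (state, ref))

# Any row still 'pending' after a timeout marks an aborted/lost transaction:
# a background job can query the gateway for the status of that ref.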
_webapps.15022
Using MediaWiki 1.16 with the Vector skin (fresh installation, no upgrades or imports), how do I get that new fancy Ajax-enabled editing toolbar? I've dug through all available settings in PHP, but whatever I do, I get the old monobook-style toolbar (screenshot omitted). Apparently, this is not a caching issue, as I was able to use this method to add more buttons to this toolbar. Currently I have the following customizations:

$wgDefaultSkin = 'vector';
$wgUseAjax = true;
$wgEnableMWSuggest = true; # Enable ajax suggestions for search box
$wgGroupPermissions['*']['edit'] = false; # Disable anonymous edits

I've also enabled short URLs as described here. Everything else on my wiki definitely looks like Vector; it's just the toolbar. The official manual seems to be having a good laugh at me, as this new fancy toolbar is said to be the default for the Vector skin.
New toolbar for editing in Vector: How to get?
mediawiki
It appears that the new toolbar is an experimental feature that you have to install: http://www.mediawiki.org/wiki/Extension:UsabilityInitiative
_webmaster.11145
I have a site which was performing well in the search engines. I wanted to redevelop the site, so in the interim period I set up a redirect from my site to my parent company's site (which has a small section relating to my services). Fairly quickly, this section of the parent site inherited my SEO ranking, backlinks etc., which is fine and is what I expected. However, I now have a new site ready and plan to remove the redirect - do you know how this is likely to affect my site? Many thanks
SEO on site temporarily redirected, then re-enabled
seo
null
_unix.111736
I need to monitor the time it takes to rlogin from one HP-UX machine to another. So I wrote this:

#!/bin/sh
result=`rlogin 10.10.10.1 << EOF
exit
EOF`
result2=`{ time $result >/dev/null; } |& grep real`
echo $result2

This shell script works when I run it on a Linux machine, but for some reason it doesn't work when I run it on HP-UX. Both are using /bin/sh. Why doesn't it work on HP-UX?
How to measure rlogin time?
shell;shell script;time;hp ux
null
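(Independent of the shell-portability issue — time as a shell keyword and |& are bash-isms that HP-UX's classic /bin/sh may not understand — the measurement itself can be done outside the shell. A rough Python sketch of the same idea, timing how long the remote login takes to run a trivial command; the host is the placeholder from the question:)

import subprocess
import time

HOST = "10.10.10.1"          # placeholder target

start = time.monotonic()
# Feed a single 'exit' to the remote shell, as the original heredoc does.
subprocess.run(["rlogin", HOST], input=b"exit\n",
               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
elapsed = time.monotonic() - start
print(f"rlogin round trip: {elapsed:.2f}s")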
_softwareengineering.315098
I plan to have a web server, which will serve the JavaScript used to make connections, and a socket server which that JavaScript will talk to. How can I make sure that, when deploying a new update, the JavaScript and the socket server are on the same version and so don't get confused? Do I have to restart both at exactly the same time?
Keeping deploys in Sync
sockets;websockets
null
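(A common answer — sketched generically, not specific to any framework — is to not rely on synchronized restarts at all: have the client announce the version it was built from when it opens the socket, and let the server refuse mismatches so the page reloads and fetches fresh JavaScript. A minimal Python illustration over a plain TCP socket; the version string and port are placeholders:)

import socket

SERVER_VERSION = "2024-05-01"        # stamped at deploy time (placeholder)

def handle(conn):
    # First line from the client is the version of the assets it was served.
    client_version = conn.makefile().readline().strip()
    if client_version != SERVER_VERSION:
        conn.sendall(b"RELOAD\n")    # tell the client to refresh its assets
        conn.close()
        return
    conn.sendall(b"OK\n")            # versions match; proceed as normal
    # ... real protocol continues here ...

srv = socket.create_server(("0.0.0.0", 8000))
while True:
    conn, _ = srv.accept()
    handle(conn)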
_cogsci.507
Knowing that sleep quantity and quality affect cognitive performance across many domains, why aren't pre-test sleep measures or intra-test measures of arousal a standard part of all cognitive test paradigms?
Why aren't sleep measures consistently measured as mediators/moderators of cognitive performance?
cognitive psychology;measurement;methodology;sleep
null
_softwareengineering.337229
I am working on setting up a multi-tenant site where users can select a theme. Each of these themes has different settings, so I would like someone to be able to select a theme and then, when they edit their site settings, be shown a form for that theme's options. I am sure I could do this through hard-coded data, but it would seem that I would be better off using plugins, to allow new theme plugins to be added and to remove the need for magic strings or for a dynamic settings page that reads what fields to display from a database. There is plenty of documentation on things like MEF, but I need some help figuring out how to display a unique view for each theme and then store the results. Any help or direction would be appreciated.
Plugin Strategy For MVC 5 Site
c#;asp.net mvc5
null
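(For what it's worth, the core pattern here is independent of MEF or C#: each theme plugin declares the settings fields it needs, and the host renders a form from that declaration instead of hard-coding one page per theme. A language-agnostic sketch in Python, with all names invented for illustration:)

from dataclasses import dataclass

@dataclass
class Field:
    name: str
    label: str
    kind: str = "text"        # e.g. "text", "color", "checkbox"

THEME_REGISTRY = {}

def theme(name):
    """Decorator: theme plugins register themselves by name."""
    def register(cls):
        THEME_REGISTRY[name] = cls()
        return cls
    return register

@theme("midnight")
class MidnightTheme:
    def settings_fields(self):
        return [Field("accent", "Accent colour", "color"),
                Field("banner_text", "Banner text")]

def render_form(theme_name):
    # The host builds the settings form purely from the plugin's declaration.
    for f in THEME_REGISTRY[theme_name].settings_fields():
        print(f"<{f.kind} field> {f.label} (stored as '{f.name}')")

render_form("midnight")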
_unix.125015
GPIO User Space App describes a user space application to test the GPIO. Another related link is Linux GPIO Driver. This would be running on a Xilinx Zynq board with an ARM Cortex-A9 processor. I am unable to understand why they are asking to compile this source code using gcc:

// the following bash script to toggle the gpio is also handy for
// testing
//
// while [ 1 ]; do
//     echo 1 > /sys/class/gpio/gpio240/value
//     echo 0 > /sys/class/gpio/gpio240/value
// done

// to compile this, use the following command
// gcc gpio.c -o gpio

Should it not be arm-linux-gcc instead of gcc? Or are these commands to be typed on the target once the kernel boots?
GPIO User Space App
linux kernel;embedded;drivers;gpio
null
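(Whichever compiler is meant — arm-linux-gcc on a cross-compiling host, or native gcc on the target — the sysfs interface the comments toggle can also be exercised without compiling anything, run directly on the target. A small Python sketch of the same toggle; gpio240 is simply the pin number from the comments above and may differ on other boards:)

import time

GPIO = "/sys/class/gpio/gpio240/value"   # pin number from the example above

def write(path, text):
    with open(path, "w") as f:
        f.write(text)

# Mirrors the bash loop in the driver's comments: toggle forever.
# Assumes the pin was already exported (echo 240 > /sys/class/gpio/export)
# and its direction set to "out".
while True:
    write(GPIO, "1")
    write(GPIO, "0")
    time.sleep(0.001)                    # small pause so it's scope-friendly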
_codereview.133141
I'm implementing a SinglyLinkedList struct that uses a private Node class in its implementation. (See this Gist.)

public struct SinglyLinkedList<Element> {
    private var head: Node<Element>?
    ...
}

extension SinglyLinkedList: MutableCollection { ... }
extension SinglyLinkedList: RangeReplaceableCollection { ... }

private class Node<Element> {
    private var value: Element
    private var next: Node?
}

In order to give my linked list value semantics, I want to give it copy-on-write behaviour, much like Swift's native Array, Set and Dictionary structures. So before any mutation takes place, I need to make a copy of the data in case the data is shared with another list:

extension SinglyLinkedList {
    /// Adds a new element to the front of the list.
    public mutating func prepend(_ element: Element) {
        copyIfNeeded()
        head = Node(value: element, next: head)
    }
}

The only thing left to do is to implement copyIfNeeded(). Clearly I need to use isUniquelyReferencedNonObjC for that. The naive way to do this would be to traverse the whole list and call the isUniquelyReferencedNonObjC function once for each node. However, this would make the time complexity of each mutating operation O(n), defeating the purpose of using a linked list in the first place.

Checking if head is uniquely referenced is not enough, because I made SinglyLinkedList its own subsequence, meaning that any slice of a linked list will be another linked list. The slice and the original list won't necessarily share their head nodes.

In order to be able to run copyIfNeeded() in constant time, I introduce an empty Reference class:

private class Reference {}

I also give my linked list a reference property:

public struct SinglyLinkedList<Element> {
    ...
    private var reference: Reference

    public init() {
        reference = Reference()
        ...
    }
    ...
}

This allows me to implement copyIfNeeded() like this:

extension SinglyLinkedList {
    /// - returns: `true` if a copy was made, `false` otherwise.
    @discardableResult
    private mutating func copyIfNeeded() -> Bool {
        guard !isUniquelyReferencedNonObjC(&reference) else { return false }

        var copy = SinglyLinkedList()
        // add all elements of self to copy
        self = copy
        return true
    }
}

This implementation works fine. I have verified that a copy is only made if two lists reference the same nodes, and only when one of the lists is being mutated. However, it doesn't feel right to implement an empty class that I only use for its reference count. Are there any alternatives to this approach?
Copy-on-write linked list with value semantics
linked list;swift;reference
null
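(The same "one sentinel object whose reference count stands in for the whole structure" trick can be illustrated outside Swift. A toy Python sketch — CPython-specific, since it leans on sys.getrefcount, and purely to show the concept rather than idiomatic Python:)

import sys

class _Sentinel:            # plays the role of the empty Reference class
    pass

class CowList:
    def __init__(self, items=()):
        self._items = list(items)
        self._ref = _Sentinel()

    def _copy_if_needed(self):
        # getrefcount sees the attribute plus its own argument => 2 when unique.
        if sys.getrefcount(self._ref) > 2:
            self._items = list(self._items)     # O(n) copy, only when shared
            self._ref = _Sentinel()

    def share(self):
        """Cheap 'value copy': shares storage until either side mutates."""
        other = CowList.__new__(CowList)
        other._items = self._items
        other._ref = self._ref                  # the sentinel is now shared
        return other

    def append(self, x):
        self._copy_if_needed()
        self._items.append(x)

a = CowList([1, 2]); b = a.share()
b.append(3)
print(a._items, b._items)   # [1, 2] [1, 2, 3] — copy happened on mutation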
_webapps.70767
Yesterday I started to create an account on Dwolla. As I couldn't input certain required information, I had to cancel the signup. When I tried to find a "Delete account" option, I noticed there is none; I can still log in by entering my email and password. So I sent an email to support asking them to (please) delete my account, and this is the reply I got:

Our data retention policy is based on the laws applicable to Dwolla. We are required to retain certain customer information even after account closure in order to comply with those laws. Per our policy, we will only retain customer information to allow us to comply with such laws and enforce our TOS. Please be assured that your information is protected and kept secure on our encrypted servers.

Is this right? I can't get them to delete my account, even if I ask them to do so?
Dwolla not deleting my account
user accounts
null
_webapps.17170
Is there a way to always watch YouTube videos in HD (when available) without changing the resolution yourself? YouTube offers 360p as standard for the small player, 480p for the large one and 720p for full screen. I would like to have the 720p high-definition version. Always.
Always watch videos in HD on YouTube
youtube
I don't believe that there is a setting in YouTube to do this. You can achieve this by installing this GreaseMonkey script though.YouTube HD SuiteScript Summary: Perfect package to enjoy HD videos in YouTube. Always watching or downloading the highest quality format ( HD 1440p / HD 1080p / HD 720p / HQ FLV / MP4 iPod ). Add download icons in video list page.Version: 3.4.1
_codereview.122564
This function tries to scan the given function in n dimensions over the given range by adaptively dividing the area into polygons of n+1 points (e.g. a triangle in 2D; a tetrahedron in 3D). I tried to separate the implementation of splitting/initializing, as it would be easier to apply it to other cases (e.g. square lattice scan, discrete scan, tree scan, etc.).

async_scan.py

from initializer import RecursiveBoundaryCentreInitializer
from solver import Solver
from explorer import Explorer
from splitter import LargestDisputeLineSplitter


def async_scan(executor, initializer, splitter, func, *iterables):
    solver = Solver(executor, func)
    explorer = Explorer(solver, splitter)
    for points in initializer(iterables):
        explorer.register(points)
    for point, future in solver:
        exception = future.exception()
        result = future.result() if exception is None else exception
        yield point, result
        explorer.trigger(point)


def async_surface_scan(executor, tol, func, *iterables):
    initializer = RecursiveBoundaryCentreInitializer()
    splitter = LargestDisputeLineSplitter(tol)
    return async_scan(executor, initializer, splitter, func, *iterables)


def example():
    from concurrent.futures import ThreadPoolExecutor as Executor
    from math import atan2 as func
    with Executor() as executor:
        x = y = (-1, 3)
        results = async_surface_scan(executor, 0.03, func, x, y)
        for (x, y), f in results:
            print(x, y, f)


if __name__ == '__main__':
    example()

explorer.py

from collections import defaultdict
from concurrent.futures import Future


class Explorer(object):
    def __init__(self, solver, splitter):
        self.solver = solver
        self.splitter = splitter
        self.registry = defaultdict(set)

    def register(self, points):
        job = Job(self.solver, points)
        for point in points:
            self.registry[point].add(job)

    def trigger(self, point):
        queue = self.registry[point]
        while queue:
            job = queue.pop()
            if not job.done():
                continue
            for p in job.points:
                self.registry[p].discard(job)
            for points in self.split(job):
                self.register(points)

    def split(self, job):
        points = job.points
        exceptions = job.exception()
        if exceptions is not None:
            return
        results = job.result()
        return self.splitter(points, results)


class Job(Future):
    def __init__(self, solver, points):
        self.points = points
        self.futures = tuple(map(solver, points))

    def __hash__(self):
        return hash(self.points)

    def cancel(self):
        return all(f.cancel() for f in self.futures)

    def cancelled(self):
        return all(f.cancelled() for f in self.futures)

    def running(self):
        return any(f.running() for f in self.futures)

    def done(self):
        return all(f.done() for f in self.futures)

    def result(self, timeout=None):
        return tuple(f.result(timeout) for f in self.futures)

    def exception(self, timeout=None):
        exceptions = (f.exception(timeout) for f in self.futures)
        exceptions = tuple(e for e in exceptions if e is not None)
        if len(exceptions) < 1:
            return None
        return exceptions

solver.py

from collections import ChainMap
from concurrent.futures import as_completed

from bidict import bidict


class Solver(object):
    def __init__(self, executor, func):
        self.fresh = bidict()
        self.done = dict()
        self.cache = ChainMap(self.fresh, self.done)
        self.func = func
        self.executor = executor

    def __call__(self, args):
        if args not in self.cache:
            self.fresh[args] = self.executor.submit(self.func, *args)
        return self.cache[args]

    def __iter__(self):
        return self

    def __next__(self):
        if not self.fresh:
            raise StopIteration
        future = as_completed(self.fresh.values())
        future = next(iter(future))
        args = self.fresh.inv[future]
        self.done[args] = self.fresh.pop(args)
        return args, future

initializer.py

from itertools import chain, tee, product
from statistics import mean


def pairwise(iterable):
    a, b = tee(iterable)
    next(b)
    yield from zip(a, b)


def selection(seq):
    for i, x in enumerate(seq):
        yield i, x, seq[:i] + seq[i+1:]


def iappend(iterable, x):
    return chain(iterable, [x])


class RecursiveBoundaryCentreInitializer(object):
    """
    Given cuts for n dimensions, generate polygons with n+1 points,
    each containing two points on the boundary, one point on the centre
    of the sides, one point on the centre of the volumes, etc...
    """

    @classmethod
    def __call__(cls, iterables):
        xs = list(map(tuple, iterables))
        return cls.reduce(xs)

    @classmethod
    def reduce(cls, xs):
        if len(xs) <= 1:
            yield from pairwise((x,) for x in xs[0])
            return
        c = tuple(map(mean, xs))
        for i, y, zs in selection(xs):
            for j, z in product(y, cls.reduce(zs)):
                t = tuple(p[:i] + (j,) + p[i:] for p in z)
                yield tuple(iappend(t, c))


def example():
    initializer = RecursiveBoundaryCentreInitializer()
    for points in initializer([range(2), range(2), range(2)]):
        print(points)


if __name__ == '__main__':
    example()

splitter.py

from itertools import combinations, repeat
from statistics import mean

from numpy.linalg import matrix_rank
from numpy import array


class LargestDisputeLineSplitter(object):
    """
    Given a polygon with n+1 points in n dimensions,
    split the polygon at the side with the largest difference in result,
    and check the split polygons for closeness and collinearity.
    """

    def __init__(self, tol):
        self.tol = tol

    def __call__(self, points, results):
        pairs = combinations(zip(points, results), 2)
        pairs = ((abs(r0 - r1), (p0, p1)) for (p0, r0), (p1, r1) in pairs)
        diff, pairs = max(pairs)
        if diff < self.tol:
            return ()
        mid = tuple(map(mean, zip(*pairs)))
        old = tuple(p for p in points if p not in pairs)
        news = zip(pairs, repeat(mid))
        groups = (old + new for new in news)
        groups = map(list, groups)
        groups = map(self.remove_close, groups)
        groups = map(self.remove_colinear, groups)
        groups = (points for points in groups if len(points) > 1)
        groups = map(tuple, groups)
        return tuple(groups)

    def remove_close(self, points):
        while True:
            for x, y in combinations(points, 2):
                if any(abs(i-j) > self.tol for i, j in zip(x, y)):
                    continue
                points.remove(y)
                break
            else:
                return points

    def remove_colinear(self, points):
        while True:
            for three in combinations(points, 3):
                T = array(three)
                T -= T[0, :]
                if matrix_rank(T, self.tol) > 1:
                    continue
                mid = sorted(three)[1]
                points.remove(mid)
                break
            else:
                return points
Adaptive scan of function concurrently
python;multithreading;python 3.x;multiprocessing
null
_unix.66915
I compiled mod_rewrite for Apache (version 1.3.0); however, when I try to run the server I get this error about mod_rewrite:

Syntax error on line 27 of /home/myuser/apache/etc/httpd.conf:
Cannot load /home/myuser/apache/libexec/mod_rewrite.so into server: /home/myuser/apache/libexec/mod_rewrite.so: undefined symbol: lstat
./sbin/apachectl start: httpd could not be started

A similar problem is described here https://bugzilla.redhat.com/show_bug.cgi?id=101837 but the solution provided does not help me... I get an error after __THROW (the classic "missing , ; before {"), and removing the __THROW clause leads me to an "lstat already defined here" error (pointing at the sys/stat.h header). Can you help me?
mod_rewrite: undefined symbol: lstat error
apache httpd;libraries;c
null
_webapps.80538
I have a business conversation going on and I'm trying to find a particular message I'm sure was part of it. Let's say it includes the term "widget". By itself that would be fine: there were (for example) 3 messages with that term, so I'd search for it, see three results, read them and find the one I want. But the problem is that one of those messages had a very long back-and-forth, so there's a long "re:" history and that term is repeated each and every time. So when I search I get 590 matches instead of 3.

I want to search without including all the quoted replies, so that only the typed message is included in the search. Possible?
Searching Gmail messages while excluding the reply history?
gmail;gmail search
null
_webapps.39923
If someone doesn't show his/her friends in Facebook, would that person appear in another friend's list?I mean, I want to hide my friend list. I don't want to be related in terms of Facebook friendship with anyone. If I hide the list, would I appear in my friend's friend list when a stranger looks at it?I am referring to hiding the list from strangers (people who aren't my friends on Facebook).
If someone doesn't show his/her friends in Facebook, will that person appear in another friend's list?
facebook;friends;friend list
Yes, you will be visible in your friends' friend lists. AFAIK there is no way to disable this unless your friends also hide their friend lists.
_softwareengineering.233464
Recently, I stumbled across a question about predicting the output of code which heavily uses post/pre increment operators on integers. I am an experienced C programmer, so I felt at home, but people made statements that Java just blindly copied this from C (or C++), and that it is a useless feature in Java.

That being said, I am not able to find a good reason for it (especially keeping in mind the difference between the post/pre forms) in Java (or C#), because it's not like you will manipulate arrays and use them as strings in Java. Also, it was long ago that I last looked into bytecode, so I don't know if there's an INC operation, but I don't see any reason why this

for(int i = 0; i < n; i += 1)

could be less efficient than

for(int i = 0; i < n; i++)

Is there any particular part of the language where this is really useful, or is this just a feature to bring C programmers "in town"?
Reason for (post/pre) increment operator in Java or C#
java;c#
is this just a feature to bring C programmers in townIt is surely a feature to bring C (or C++) programmers in town, but using the word just here underestimates the value of such a similarity. it makes code (or at least code snippets) easier to port from C (or C++) to Java or C#i++ is less to type than i += 1, and x[i++]=0; is much more idiomatic than x[i]=0;i+=1;And yes, code which heavily uses post/pre increment operators maybe hard to read and maintain, but for any code where these operators are misused, I would not expect a drastic increase in code quality even when the language would not provide these operators. That's because if you have devs which don't care for maintainability, they will always find ways to write hard-to-understand code. Related: Is there any difference between the Java and C++ operators? From this answer one can deduce that any well-defined operator behaviour in C is similar in Java, and Java does only add some definitions for operator precedence where C has undefined behaviour. This does not look like coincidence, it seems to be pretty intentional.
_unix.132341
I have 2 files, a.txt and b.txt, and I want to compare them.

a.txt contains:

abc
jkl < jkl
mno > mno
pqr <> pqr

b.txt contains:

abc
jkl < jkl
mno > mno
pqr <> pqrss
tu

I'm using this script:

$ diff a.txt b.txt | grep ">" | cut -c3- > c.txt

Which results in c.txt:

pqr <> pqr
pqr <> pqrss
tu

Why is "pqr <> pqr" being included in the results? How can I resolve this?
Diff not working as I expect
shell script;diff
null
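(As an aside, the same "lines only in b.txt" extraction can be done without worrying about how grep treats the > characters inside the data at all. A small Python sketch using the standard difflib module:)

import difflib

with open("a.txt") as f:
    a = f.read().splitlines()
with open("b.txt") as f:
    b = f.read().splitlines()

with open("c.txt", "w") as out:
    for line in difflib.ndiff(a, b):
        if line.startswith("+ "):        # '+' marks lines present only in b
            out.write(line[2:] + "\n")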
_unix.122011
I'm endeavoring to put Kali Linux onto a USB stick - I know it's already written up, but I'd like to use only a portion of the total space (the aforementioned link will use the entire drive space). Let's have my 16GB USB stick mounted as sdb ... the goal is:

16 GB total, split like this...

----------------------------
|   11   |   01   |   04   |  (GB)
----------------------------
  sdb1      sdb2     sdb3     (partition ID)
  FAT32     FAT32    FAT32    (format)
  storage   fatdog   kalipart (label)

- sdb1 is FAT32 and the main storage area (so that Windows can see it, along with any other OSes)
- sdb2 is bootable and has Fatdog64 (6.3.0) and Precise Puppy (5.7.1) installed (multi-booting from one syslinux menu)
- sdb3 is the target partition for Kali to use

The objective is to multi-boot Fatdog64, Puppy, and Kali Linux. Currently, sdb2 is bootable (syslinux) and successfully passes to Fatdog and Puppy, both on sdb2. Next I'd like to add chainloading to Kali on sdb3. It seems to me that the best way to do that is to load GRUB4DOS from syslinux (both on sdb2), map sdb3 and chainload to sdb3 from GRUB4DOS.

So I ask: how do I install Kali onto an existing partition on this USB stick?

Other options:
- Install live Kali onto the USB stick/partition from the Kali distro itself - but this doesn't seem to be an option the same way it is with Fatdog/Puppy/Ubuntu
- Boot directly to sdb3, chainloading to sdb2 if necessary (not preferred, but an option)

Update: I have tried copying the files from a mounted iso to sdb3 using Fatdog64 and noticed several errors, mostly in copying the firmware files. Here are two examples:

Copying /mnt/+mnt+sda1+isos+kali-linux-1+0+6-i286+kali-linux-1+0+6-i286+iso/firmware/amd64/microcode_1.20120910-2_i386.deb as /mnt/sda3/firmware/amd64-microcode_1.20120910-2_i286.deb
ERROR: Operation not permitted

Copying /mnt/+mnt+sda1+isos+kali-linux-1+0+6-i286+kali-linux-1+0+6-i286+iso/debian as /mnt/sda3/debian
ERROR: Operation not permitted

These errors look like permission errors, but I can't tell if they affect booting or not (I can troubleshoot other errors later; I'd prefer to keep this question to just multi-boot).

I'm chainloading GRUB4DOS from the SYSLINUX installed by default via Fatdog64 ...

label grub4dos
menu label grub4dos
boot /boot/grub/grldr
text help
Load grub4dos via grldr (in /boot/grub)
endtext

... and then, once in GRUB4DOS, I have successfully chainloaded GRUB2 (on the Kali partition) ...

title Load GRUB2 inside of kali
find --set-root /g2ldr.mbr
chainloader /g2ldr.mbr

... but all this gives me is a grub> prompt, and I haven't figured out any proper combination of GRUB4DOS commands to load GRUB2 with a GRUB2 config file - and to add to the confusion, I thought the live CD iso of Kali ran on syslinux. (@jasonwryan @user63921)
How to install Kali linux on to a specific (existing) partition on a USB stick
partition;grub2;usb drive;live usb;kali linux
null
_webapps.31374
In which country, or countries, are servers running Trello located? Because of the laws in Canada regulating public institutions, we are not allowed to store certain kinds of information if the servers are in the USA. A solution would be to run this nice tool on our own servers, but I just read that this is not an option.
Where are the Trello servers located?
trello;legal
- The Trello servers (including the databases) are hosted on Amazon Web Services (EC2, in the United States)
- The Trello JavaScript/CSS is hosted on Amazon CloudFront (with edge nodes around the world)
- Attachments that are uploaded to Trello are stored on Amazon S3, but in a US region.
- Google Drive/Docs attachments that are attached to Trello cards are (of course) hosted wherever Google stores its files.
_softwareengineering.56215
In C, you cannot have the function definition/implementation inside the header file. However, in C++ you can have full method implementation inside the header file. Why is the behaviour different?
Why can you have the method definition inside the header file in C++ when in C you cannot?
c++;c;headers
In C, if you define a function in a header file, then that function will appear in each module that is compiled that includes that header file, and a public symbol will be exported for the function. So if function additup is defined in header.h, and foo.c and bar.c both include header.h, then foo.o and bar.o will both include copies of additup.When you go to link those two object files together, the linker will see that the symbol additup is defined more than once, and won't allow it.If you declare the function to be static, then no symbol will be exported. The object files foo.o and bar.o will still both contain separate copies of the code for the function, and they will be able to use them, but the linker won't be able to see any copy of the function, so it won't complain. Of course, no other module will be able to see the function, either. And your program will be bloated up with two identical copies of the same function.If you only declare the function in the header file, but do not define it, and then define it in just one module, then the linker will see one copy of the function, and every module in your program will be able to see it and use it. And your compiled program will contain just one copy of the function.So, you can have the function definition in the header file in C, it's just bad style, bad form, and an all-around bad idea.(By declare, I mean provide a function prototype without a body; by define I mean provide the actual code of the function body; this is standard C terminology.)
_codereview.120427
I recently built a Docker image for Composer. I'd love to get a review of the image, the Bash-based wrapper script, its recommended use, and the repository structure.

Here's the Dockerfile for the latest tag:

FROM alpine:edge
MAINTAINER Samuel Parkinson <[email protected]>

RUN echo "http://dl-4.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && \
    apk add --no-cache \
        ca-certificates \
        git \
        mercurial \
        subversion \
        php7 \
        php7-curl \
        php7-json \
        php7-openssl \
        php7-phar \
        php7-posix

RUN /usr/bin/php7 -r "readfile('https://getcomposer.org/installer');" | \
    /usr/bin/php7 -- --install-dir=/usr/local/bin --filename=composer

COPY ./composer-wrapper /usr/local/bin/composer-wrapper

VOLUME ["/usr/src/app", "/root/.composer"]
WORKDIR /usr/src/app

ENTRYPOINT ["/usr/local/bin/composer-wrapper"]

The wrapper script, which adds the --ignore-platform-reqs flag for supported commands, so that users don't have to do it themselves, as the image doesn't contain many of the common php extensions libraries require:

#!/bin/sh

# Loop over each argument.
for argument in "$@"; do
    case $argument in
        # Prepend the flag if the command matches one we need to use `--ignore-platform-reqs` with.
        # Found using the following search: https://github.com/composer/composer/search?q=ignore-platform-reqs+path%3Asrc%2FComposer%2FCommand%2F
        # Uses `set` to update the arguments, see https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html.
        create-project|install|remove|require|update) set -- "--ignore-platform-reqs" "$@";;
        # Otherwise just pass it on.
        *) ;;
    esac
done

# Call composer with the updated arguments.
exec /usr/bin/php7 /usr/local/bin/composer "$@"

The recommended way to use the image, mounting the source and the composer user folder, and read-only mounting the user's ssh key for use with git-based dependencies:

docker run --rm -it \
    -v $(pwd):/usr/src/app \
    -v ~/.composer:/root/.composer \
    -v ~/.ssh:/root/.ssh:ro \
    graze/composer

And finally the repository itself (which includes tests of the image!): https://github.com/graze/docker-composer

I'm no expert at bash, and pretty new to Docker too, so I'd love to know if there's anything I'm missing, or anything I'm doing that's unconventional.
A Dockerfile for Composer (a dependency manager for PHP)
php;bash;dockerfile
null
_unix.349203
Is it possible to allow OCSP requests using, for example, iptables or Squid? I'm doing an experiment where I reject insecure outgoing HTTP requests, but OCSP is of course a valid exception, since the response is validated by the browser. Or is there at least some way to identify OCSP HTTP requests uniquely, so as to write a custom filter for them?
How to allow OCSP but disallow all other outgoing HTTP requests?
arch linux;iptables;proxy;http;squid
null
_softwareengineering.191914
I have created web services before that are used by a small number of users, but I have a new project that would have lots of users. For each user that uses the service, this is what happens:

1) The user calls a method on the web service that calculates a price based on parameters they pass in.
2) The actual method and algorithm for calculating and returning a price is not very complicated and runs quickly, although a lookup in a database table is necessary for each call.
3) The problem is that this method could be called over and over again for each item that needs a price, and lots of users could be using the web service at once (I don't know an exact number; let's say 1,000 users, 10,000 users, whatever). I don't know what I need to think about in terms of managing high traffic, many different users calling the method at the same time over and over again, and pulling data from a table over and over again.

So I would like some advice from someone who has experience with web services with high amounts of traffic and many different users, with a method pulling from a database table repeatedly: what steps/things do I need to think about when designing the service to avoid congestion, at what point would a certain number of users start slowing the service down, and what else should I worry about? Appreciate any help, thanks!
How do I create a web service with high amounts of traffic that works effectively with lots of different users?
database;web services;wcf;performance;high performance
For item 2, you could definitely use a cache to avoid going to the DB for every call, especially if the lookup data is not that volatile. The lifetime of the cache depends on how long you keep the data; the cache could be refreshed as needed by your requirements. As a side note, most DBs cache the results of common queries anyway.

Every service has limits. I would performance-test the service to see how many calls it can handle. Definitely build it stateless; then it is just a matter of how many calls a second it needs to service. Say your requirement is 1,000 calls per second and your performance is 400 calls per second for 1 server. Then you need 3 servers to meet your requirement. It should be no problem to implement that sort of infrastructure.

You will not know your service's throughput for sure until it is performance tested. Once you have baseline numbers, you can tweak the code or the environment to improve performance as needed.
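(To make the caching suggestion concrete, here is a minimal sketch of a time-to-live cache in front of the price lookup; the lookup function and the 60-second lifetime are placeholders, and locking would be needed if handlers run on multiple threads:)

import time

TTL = 60.0                      # seconds a cached price stays valid
_cache = {}                     # item_id -> (expiry_time, price)

def lookup_price_in_db(item_id):
    ...                         # placeholder for the real DB query

def get_price(item_id):
    now = time.monotonic()
    hit = _cache.get(item_id)
    if hit is not None and hit[0] > now:
        return hit[1]           # fresh cache hit: no DB round trip
    price = lookup_price_in_db(item_id)
    _cache[item_id] = (now + TTL, price)
    return price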
_unix.353715
I'm trying to extend my root partition with the unallocated space, but it seems like the unallocated space doesn't exist. When I reinstalled the system I had two different free spaces, one of 32GB and the other of 19GB, but I wasn't able to join them into the same partition. I've tried the solutions in other posts with no result...
Can't extend root partition with unallocated space
debian;root;ext4;gparted
It seems like you're running gparted from your Linux distro, and that means that some of your partitions - including your root partition - are in use (that's what that icon looking like a numerical keypad or whatever means). You can't move or resize a partition you're actively using (which you are here).

Try running gparted from a live DVD. It may use your swap partition, but it should leave the root partition alone. When it's not in active use, you should be able to resize it into your free space (this will probably take a long while, since stuff will have to be moved too).

(I could add that the mother program - parted - may be a bit less restrictive than gparted... but then again, the interface is more difficult and it's easier to make a mistake.)
_webapps.15344
I'm trying to generate some simple plots of shapes on the coordinate plane using Wolfram Alpha. I've had success with plotting the shape, but sometimes the plot output doesn't look nice. For example, the following plots the vertices of a rhombus on the coordinate plane:

Plot {(-5,3), (-1,0), (-1,-5), (-5,-2) (-5,3)}

However, the plot is distorted because:

- the y-axis is shown at x = -5; and
- the scale for the x-axis doesn't match the scale for the y-axis.

As a result, my rhombus doesn't look like a rhombus. Is there a way to control what portion of the coordinate plane Alpha plots on? I've tried adding

on x:[-8,8] y:[-8,8]

at different points in the command, but that won't work. Anyone know the secret?
Can a user control the scale of plots in Wolfram Alpha?
wolfram alpha
No. I've tried various combinations; the only thing I can come up with is "find edges in Plot{(-5,3) (-1,0) (-1,-5) (-5,-2)}, x", and the image output is horrible. Sorry.
_codereview.87486
An object (POJO) holds query arguments and has two-way binding to a form using some MVC framework. Later, the object needs to be converted to a query string that will be appended to an HTTP request.

function objectToQueryString(obj) {
    return Object.keys(obj)
        .filter(key => obj[key] !== '' && obj[key] !== null)
        .map((key, index) => {
            var startWith = index === 0 ? '?' : '&';
            return startWith + key + '=' + obj[key];
        }).join('');
}

Any pitfalls? IE9+ is what we support.

Example input:

{isExpensive: true, maxDistance: 1000, ownerName: 'Cindy', comment: ''}

And its expected output:

?isExpensive=true&maxDistance=1000&ownerName=cindy
Object to query string
javascript;url
You have neglected to escape your keys and values. If any of the data contains a special character such as &, the generated URL will be wrong.To perform the escaping, I recommend calling encodeURIComponent().
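(The same pitfall exists in any language, and most standard libraries ship an escaper so you never hand-roll it. For comparison, a Python sketch of the equivalent conversion, where urllib.parse.quote plays the role of encodeURIComponent:)

from urllib.parse import quote

def object_to_query_string(obj):
    parts = [f"{quote(str(k))}={quote(str(v))}"
             for k, v in obj.items()
             if v is not None and v != ""]
    return "?" + "&".join(parts) if parts else ""

print(object_to_query_string(
    {"isExpensive": True, "maxDistance": 1000, "ownerName": "Cindy", "comment": ""}))
# -> ?isExpensive=True&maxDistance=1000&ownerName=Cindy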
_unix.79191
If I use vim, put the cursor at the beginning of a line, and then input 3>> or 3<<, I can shift 3 lines left or right by 8 columns. If I just want to shift by 4 columns, what should I do?
How to right or left shift a block of text by a specific number of columns?
vim
:set shiftwidth=4

and then use 3>> as normal.
_unix.174202
I have a Debian 7 server hosted by Google Cloud, running a game server and a Rails server. The Rails server runs on port 80 and the game server on port 8000. I want to apply a network rule that gives the game server's packets a higher priority, in order to minimize latency. For now, I found that iptables could help me with this:

iptables -A PREROUTING -t mangle -p tcp --dport 8000:8010 -j TOS --set-tos Minimize-Delay

But when I check whether my rule has been added:

iptables -L -vt nat

Chain PREROUTING (policy ACCEPT 877 packets, 100K bytes)
 pkts bytes target prot opt in out source destination

Chain INPUT (policy ACCEPT 877 packets, 100K bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 329 packets, 20395 bytes)
 pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 329 packets, 20395 bytes)
 pkts bytes target prot opt in out source destination

I'm not seeing my rule. What am I doing wrong? And also, is this the right way to do what I want?
Minimize game server delay with iptables
iptables;delay
null
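(One thing worth checking, offered as an observation rather than an answer: the rule was added to the mangle table, but the listing above queries the nat table, so iptables -L -vt mangle may already show it. Independently of iptables, the game server itself could request low-delay handling by setting the TOS byte on its own sockets; a Python sketch, with the port taken from the question:)

import socket

IPTOS_LOWDELAY = 0x10          # the "Minimize-Delay" TOS value

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel to mark this socket's packets as low-delay traffic.
srv.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)
srv.bind(("0.0.0.0", 8000))    # game-server port from the question
srv.listen()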
_unix.334237
I know vi well. I would really like to use it when I'm piping things around on the command line. Is there an easy way to pipe the stdout of a process into a headless version of vi, apply some commands, and get the result back on stdout? Something like this:

$ uname -a | <headless_vi> "3f D"
Linux robbie 4.8.13-1-ARCH
piping through a headless vi-style editor
pipe;vi
null
_webmaster.82853
How can you get the homepage to show instead of random pages? Is there a special meta tag or something?
How to make homepage show in google instead of other pages?
seo
I did a search as you specified and I get your home page as #1. However, if you do not, this is not a problem, and therefore there is nothing to fix. Going beyond that, you will not be able to exercise control over how Google decides to present search results, short of some level of SEO, which never guarantees any result.

Here is what is wrong with your search: without using the site: annotation before the domain, Google will return pages in the order of importance as Google sees it, including pages from other sites. Just using the domain name without the site: annotation opens the search up to any result that ranks for the domain name. However, using the site: annotation limits the results to the domain only. In this case, the home page is often the first page; however, this is not always the case and should not be taken as an indicator of a problem.
_cs.25804
I want to understand the expected running time and the worst-case expected running time. I got confused when I saw this figure (source), where $I$ is the input and $S$ is the sequence of random numbers. What I don't understand from the equation in that figure is why the expected running time is given for one particular input $I$.

I always thought that for a problem $\pi$, $E(\pi) = \sum_{input \in Inputs} Pr(input) \cdot T(input)$ — isn't this correct?

So, let's assume $Pr(x)$ is the uniform distribution, and we are to find the expected running time of searching for an element in an $n$-element array using linear search. Isn't the expected running time for linear search
$$E(LinearSearch) = \frac{1}{n}\sum_{i=1}^{n} i\;?$$

And what about the worst-case expected running time — isn't it the time complexity of the worst behaviour, as in the second figure?

I would highly appreciate it if someone could help me understand the two figures above. Thank you in advance.
Understanding Expected Running Time of Randomized Algorithms
algorithms;time complexity;probability theory;randomized algorithms;average case
There are two notions of expected running time here. Given a randomized algorithm, its running time depends on the random coin tosses. The expected running time is the expectation of the running time with respect to the coin tosses. This quantity depends on the input. For example, quicksort with a random pivot has expected running time $\Theta(n\log n)$. This quantity depends on the length of the input $n$.Given either a randomized or a deterministic algorithm, one can also talk about its expected running time on a random input from some fixed distribution. For example, deterministic quicksort (with a fixed pivot) has expected running time $\Theta(n\log n)$ on a randomly distributed input of length $n$. This is known as average-case analysis. Your example of linear search fits this category.A related concept is smoothed analysis, in which we are interested in the expected running time of an algorithm in the neighborhood of a particular input. For example, while the simplex algorithm has exponential running time on some inputs, in the vicinity of each input it runs in polynomial time in expectation.
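(For the linear-search example in the question, the average-case expectation under the uniform distribution can be finished off explicitly — a short derivation, under the assumption that the sought element is equally likely to sit at any of the $n$ positions:)

$$E[T_{\text{linear}}] = \sum_{i=1}^{n} \Pr[\text{found at position } i]\cdot i = \frac{1}{n}\sum_{i=1}^{n} i = \frac{1}{n}\cdot\frac{n(n+1)}{2} = \frac{n+1}{2} = \Theta(n).$$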
_scicomp.21021
Let $A\in \mathbb{R}^{n\times n}$ be symmetric and positive semidefinite, and $\omega\in \mathbb{R}\setminus\{0\}$. I am interested in solving the following linear system for a range of values of $\omega$:
$$((A-\omega^2 I)(A-\omega^2 I)+\omega^2 I)x = b.$$
It may be useful to note that the matrix factors as
$$(A-(\omega^2-i\omega)I)(A-(\omega^2+i\omega)I),$$
where $i^2 = -1$.

Details:
- $A$ is sparse and I won't have direct access to its entries.
- The dimension of the null space of $A$ is a non-negligible fraction of $n$.
- The dimension of the problem, $n$, will be as big as the computer's RAM will allow.

What is a good way to preprocess / precondition this system? Note that the RHS, $b$, will change when $\omega$ changes.

Notes: This is a follow-up question to this one. The idea of the proposed solution to that question shows that if we could perform a complete eigendecomposition of $A$, we would have a pretty much ideal preprocess. I have implemented a Lanczos iteration to approximate this eigendecomposition, but it doesn't perform as well as I had hoped. I can explain this idea in more detail as an addendum if there is interest.

Of course full answers are appreciated, but they are not expected. I am mainly looking for ideas to investigate. Any comments and pointers to the literature are much appreciated.

Note to mods: Is this kind of question acceptable? I can change it to something more definite if asking for ideas is unacceptable.

Edit: This is what I plan on doing. First note that as $\omega\to \infty$ the matrix starts looking like $(\omega^4+\omega^2)I$, so we are mainly interested in $\omega$ comparable to the norm of $A$ and smaller.

To that end, we compute $r$ eigenpairs of $A$, $(\lambda_i,q_i)\in \mathbb{R}\times \mathbb{R}^{n}$, with the largest eigenvalues. Then, since these eigenvectors can be made orthonormal, we have
$$x= \sum_{i=1}^r \alpha_i q_i + \sum_{i=r+1}^n \alpha_i q_i.$$
Now, taking the dot product of both sides of the equation with $q_i$ for $1\le i\le r$ we get
$$\alpha_i = \left\langle q_i,b \right\rangle \frac{1}{(\lambda_i - \omega^2)(\lambda_i - \omega^2) + \omega^2}.$$
I plan on using this information to construct an initial guess for $x$. I am still unsure what preconditioner to use.
What are some ideas to preprocess / precondition the following linear system?
linear algebra;linear solver;eigensystem
null
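(A sketch of the plan in the question's edit, using scipy's Lanczos-based eigsh to get the r largest eigenpairs — illustrative only; the matvec access, r, and omega are placeholders:)

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def initial_guess(A_matvec, n, b, omega, r=20):
    """Deflation-style first iterate built from the r largest eigenpairs of A."""
    A_op = LinearOperator((n, n), matvec=A_matvec)
    lam, Q = eigsh(A_op, k=r, which="LA")       # largest algebraic eigenvalues

    # alpha_i = <q_i, b> / ((lambda_i - omega^2)^2 + omega^2), as in the edit
    coeffs = (Q.T @ b) / ((lam - omega**2) ** 2 + omega**2)
    return Q @ coeffs          # components outside span(Q) are left at zero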
_unix.165207
I'm noticing an error:

bash: syntax error near unexpected token `-105.5*7+50*3'

when executing the below script/expression:

expr (-105.5*7+50*3)/20 + (19^2)/7 | bc -l

Is there any other way to evaluate such mathematical/floating-point operations?

EDIT #1
NOTE: echo in place of expr does resolve this; however, I've used expr with bc before and it has handled floats quite normally, so why not in this scenario is what I'd like to find out now.
Why isn't this `expr ... | bc -l` command working?
bash;shell;quoting;bc
null
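(For cross-checking the arithmetic itself, one alternative is to skip expr/bc entirely; e.g. a one-line Python equivalent — note that bc's ^ becomes ** for exponentiation:)

python3 -c 'print((-105.5*7 + 50*3)/20 + (19**2)/7)'
# prints roughly 22.1464285714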
_unix.67468
I want to create a binding that executes g++ !$ in the same way that the shell would interpret it if I just typed it. I tried:

- bind '"\ee": "g++ !$"', but it doesn't execute the command (it just pastes it)
- bind -x '"\ee": "g++ !$"', but it doesn't interpret the !$ part correctly

Is there any way to overcome this without using custom shell scripts?
Bind to execute command with last argument of previous command
bash;keyboard shortcuts
bind '"\ee": "g++ !$"' does exactly what you wrote, which is to insert g++ !$ on the command line. If you want the command to be executed, you need to press Enter:

bind '"\ee": "g++ !$\r"'
_unix.188345
Whenever I boot openSUSE, the KDE splash screen appears, which is normal. What is not normal, however, is that it never disappears again! The previous session's windows just appear. KDE is functioning absolutely normally, but this is really annoying, because I can't use the taskbar and I have to use Alt+F2 whenever I want to launch anything. This also makes me lose desktop features, like folders, wallpapers, the clock and themes. I am using KDE 5.

What can I do to:
- Make KDE start a new session every time, not just continue the previous one.
- Make the splash screen disappear so that I can get my desktop again.

Thanks in advance.
KDE Splash Screen doesn't disappear
linux;kde;opensuse;desktop;desktop environment
null
_cs.55434
I'm implementing the ID3 algorithm (Iterative Dichotomiser 3). I have an attribute which happens to be continuous, with values like 12.21, 3.01, etc., AND which has missing values, marked as NA.

How I'm discretizing the data: I'm finding the optimal split, the one which results in the maximum information gain.

How I'm dealing with missing values: I will use the most probable attribute value to replace the "?".

Of course I can do the two steps in either order, and this is where my confusion arises. Is there a correct way of handling this?
How to handle missing continuous attribute values in ID3 (Iterative Dichotomiser 3)?
machine learning
I would like to point to a paper about ID3 and its successors, and generally about decision tree algorithms. Using the mean, median, mode etc. is very tempting, and it works to some degree, but of course the outcome depends on the values inserted for the missing (NA) data. The mean has the nice property in many statistics that it acts just like a missing value while increasing the weight of the other ones (since it changes nothing, other values are counted with +1/N weight).

But in decision trees the effect is bigger, changing the classifier, so there is one big idea - try all possible missing values :-/. There are also three easier techniques:

- apply the mean and do not care
- reconstruct the data to fit the classifier better (very often trial and error, but since the continuous data is discretized, only values that differ by a multiple of $\epsilon$ need to be checked)
- try to reconstruct the data

The last one should yield the best results, but it is not always possible, and still these are not exact values. If you can predict the most probable value and replace the missing ones - this is the best way to do it.
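(A minimal sketch of the "most probable value" imputation the question describes — mean for the continuous attribute, mode for a categorical one — done before tree induction; the column names are invented:)

import statistics

def impute(rows, column, continuous=True):
    """Replace None ("NA") entries in-place with the mean (continuous) or mode."""
    observed = [r[column] for r in rows if r[column] is not None]
    fill = statistics.mean(observed) if continuous else statistics.mode(observed)
    for r in rows:
        if r[column] is None:
            r[column] = fill

data = [{"humidity": 12.21}, {"humidity": 3.01}, {"humidity": None}]
impute(data, "humidity")
print(data)   # the NA row now carries the column mean (~7.61)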
_cogsci.12958
Most defense mechanisms are unconscious, but for those who know about them and still use them, they somehow become conscious. Such people know they are using them, and don't do anything about it. For example, when someone loses trust in everyone around them, yet tells anyone who will listen their secrets or things that shouldn't be shared easily - reaction formation, yes.

So, say someone has mood swings: they cry when a bit depressed or even just sad, and laugh like there's no tomorrow when they're happy, even if it's the moment right after crying, or they shout out loud because they are angry while generally being a calm person. They simply let it all sink in and accept everything that comes, without doing anything about it, even though they know they should, and they know where the wrong is. So can't acceptance be classed as a defense mechanism?
Can acceptance become a Defense Mechanism?
cognitive psychology;consciousness;depression;unconscious
null
_hardwarecs.1292
I'm building a new PC, but I'm stuck on one thing: what CPU should I choose, given that what I mostly do is programming & gaming? I saw opinions here and there saying Intel's CPUs would be good, others saying AMD's, but none seemed to give me a clear answer. The 2 options I'm considering are the Intel Skylake Core i5 6400 2.70GHz and the AMD Vishera FX-8350 4.0GHz. I'm open to any other recommendations, preferably with a good price/value ratio.
CPU recommendation: Programming + Gaming
processor
I'd recommend Intel: they're really reliable, have low power consumption, and are future-proof if you're into keeping your rig for more than 3-4 years. AMD has a good performance/price ratio, but is more risky, with heat issues and cores that are useless beyond marketing. (See a pros/cons comparison of the two.)

For gaming, just rely on a good GPU; the CPU bottleneck is pretty much a myth if you have an i5 2500 or better. You'd get better performance with a 4th-gen Intel CPU and a better GPU. Keep that in mind.
_codereview.126899
I have started learning recursion and search algorithms, especially DFS and BFS. In this program, I have tried to make an implementation of a maze-solving algorithm using DFS. The program is working functionally, but as a beginner I am fully aware there are many possible areas of improvement.

If not in a specific way, what are some of the more general and possibly theoretical criticisms of my program? Keep in mind that the current product is based on an elementary understanding of C++ and search algorithms.

I have a Graph class for loading in the maze from an external text file:

#include <fstream>   // ifstream
#include <iostream>  // standard c output modes
#include <iomanip>   // setprecision()
#include <vector>    // vectors, including 2-dimensional
#include <cstdlib>   // system("cls") to clear console
#include <stack>     // stacks
#include <math.h>    // sqrt()
#include <ctime>     // clock in DelayFrame()
#include "Cell.h"    // Class for individual unit cells of maze

class Graph
{
    public:
        Graph();
        virtual ~Graph();
        void LoadGraph(const std::string &fileName);
        void DisplayGraph();
        void DFS(int r, int c);
        void DelayFrame(clock_t millisec);

    private:
        int height;        // # of rows of maze
        int width;         // # of columns of maze
        int numPaths;      // # of possible path positions in maze
        int pathDistance;  // Total distance of correct position sequence
        char buffer;       // To store char elements from external text-file
        const char obstacle, goal, path;  // Constant chars to represent elements of maze
        double cellsVisited;              // # of cells visited; does not contain duplications of cells
        std::vector<std::vector<Cell*> > maze;  // Stores maze
        std::vector<Cell*> cells;               // Stores individual rows of maze to be allocated into maze 2-dimensional vector
        std::stack<Cell*> cellSequence;         // Stack of cells to store sequence of positions in use
};

I have also implemented a Cell class for the individual cells in the maze:

class Cell
{
    public:
        Cell(int r, int c, char symbol);
        virtual ~Cell();
        int GetRow();
        int GetColumn();
        char GetChar();
        void SetChar(char toReplace);
        char GetCounter();
        void IncrementCounter();

    protected:
        int r;        // Row of cell
        int c;        // Column of cell
        char symbol;  // Symbol of cell
        int counter;  // Number of visits; initialized to be 1 in cell constructor
};

Here is the loading member function in the Graph class (I am wondering if I can detect a new line and skip the extraction without extracting first and then redoing it):

// Loads in the maze from an external text-file
// Gets # rows, # columns and all symbols for all elements in maze
void Graph::LoadGraph(const std::string &fileName)
{
    std::ifstream fileData(fileName.c_str());

    // # rows
    fileData >> height;
    // # columns
    fileData >> width;

    // Don't skip blank spaces
    fileData >> std::noskipws;

    // Adds elements from external text-file to one row of the maze
    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            fileData >> buffer;

            // If there is a new line character, take the next character
            if (buffer == '\n')
            {
                fileData >> buffer;
            }

            cells.push_back(new Cell(row, col, buffer));

            // If there is a new path position, increment the counter
            if (buffer == path)
            {
                numPaths++;
            }
        }

        // Pushes the row into a 2-dimensional vector
        maze.push_back(cells);
        cells.clear();
    }

    // Close file
    fileData.close();
}

Most critically, here is the implementation of DFS I am using to try to search the maze. The end (goal) of the maze is represented by a '$' symbol. Walls are represented as 'X's and paths are represented by the blank space ' ' symbol.

Basically, it keeps searching until it reaches the goal, starting from position (1,1). It searches in all 4 directions, as long as the direction is not blocked by an obstacle and there is a neighboring unvisited cell; if neither condition is met, then it backtracks. If the goal is not reached once the stack is empty, then there is no solution. I realize this implementation is again not the most efficient, but I think it is relatively robust and has some additional functionality.

/*
    Depth First Search
    Maze search starts at r = 1, c = 1
*/
void Graph::DFS(int r, int c)
{
    // Displays state of maze as it is being solved
    // Clears the console screen to make room for an updated display
    std::system("cls");
    DisplayGraph();

    // Pause for 200 milliseconds so user can monitor progression of search
    DelayFrame(200);

    // If goal is reached, stop
    if (maze[r][c] -> GetChar() == goal)
    {
        // Declare array to hold 'solution set' for valid path
        int stackSize = cellSequence.size();
        Cell** solutionSet = new Cell*[stackSize];

        // Fill array with path positions
        for (int i = 0; i < stackSize; i++)
        {
            solutionSet[i] = cellSequence.top();
            // Remove the topmost cell once it has been added to array
            cellSequence.pop();
        }

        // Write dimensions of maze solved
        std::cout << std::endl << "# Rows: " << height << std::endl;
        std::cout << "# Columns: " << width << std::endl;
        std::cout << std::endl << "Path Sequence: " << std::endl;

        // Display valid path positions in correct order as array elements
        for (int j = stackSize - 1; j >= 0; j--)
        {
            std::cout << "(" << solutionSet[j] -> GetRow() << "," << solutionSet[j] -> GetColumn() << ") -> ";

            // Makes the display more optimal for viewing by approximately equalizing display x and y dimensions
            int interval = sqrt(stackSize);
            if ((stackSize - j) % interval == 0)
            {
                std::cout << std::endl;
            }
        }

        // Don't forget position of goal at the end which is not in stack
        std::cout << "(" << r << "," << c << ") = $" << std::endl;

        // Delete dynamically allocated array
        delete solutionSet;

        // Total distance of path is the stack size + 1 for the goal cell
        pathDistance = stackSize + 1;

        // Writes path length
        std::cout << std::endl << "Solved | # Steps in Path: " << pathDistance;

        // Writes # cells visited
        std::cout << std::endl << " | % Cells Visited: " << std::setprecision(4)
                  << cellsVisited / numPaths * 100 << " (" << cellsVisited << "/"
                  << numPaths << " possible path positions)";
    }
    else
    {
        // Otherwise, push current cell to stack
        if (maze[r][c] -> GetChar() == path)
        {
            cellSequence.push(maze[r][c]);
            cellsVisited++;
        }

        // Set current cell as visited and mark it with # times visited - 1 (know how many repeats)
        maze[r][c] -> SetChar(maze[r][c] -> GetCounter());
        // Increment the number of times visited (prior)
        maze[r][c] -> IncrementCounter();

        // Goes through all 4 adjacent cells and checks conditions
        // Down
        if (r+1 < maze.size() && ((maze[r+1][c] -> GetChar() == path) || (maze[r+1][c] -> GetChar() == goal)))
        {
            r++;
            DFS(r, c);
        }
        // Up
        else if ((r-1 > 0) && ((maze[r-1][c] -> GetChar() == path) || (maze[r-1][c] -> GetChar() == goal)))
        {
            r--;
            DFS(r, c);
        }
        // Right
        else if (c+1 < maze[0].size() && ((maze[r][c+1] -> GetChar() == path) || (maze[r][c+1] -> GetChar() == goal)))
        {
            c++;
            DFS(r, c);
        }
        // Left
        else if (c-1 > 0 && ((maze[r][c-1] -> GetChar() == path) || (maze[r][c-1] -> GetChar() == goal)))
        {
            c--;
            DFS(r, c);
        }
        else
        {
            // No neighboring cells are free and unvisited, so we need to backtrack
            // Sets current cell to obstacle
            maze[r][c] -> SetChar(obstacle);

            // Remove current (top) cell from stack
            cellSequence.pop();

            if (cellSequence.empty())
            {
                // If the stack is empty, there are no neighboring cells that can be used and there is no solution
                std::cout << std::endl << "No solution: -1";
            }
            else
            {
                // Get row and column of last valid cell in stack and use those to resume search
                r = cellSequence.top() -> GetRow();
                c = cellSequence.top() -> GetColumn();
                DFS(r, c);
            }
        }
    }
}
Searching a maze using DFS in C++
c++;recursion;depth first search
null
_unix.317041
I have to access a RHEL server from Fedora. They have given me a VPN URL, an RSA SecurID hardware token generator, a user name, and a personal password to be used as a prefix for the token key. On Windows I can use these four bits of information in a VPN program to make a VPN connection to the RHEL server. Now I have switched to Fedora 24 from Windows. In Fedora 24 there is a VPN dialog with tabs like Details, Identity, IPv4, IPv6 and Reset. I cannot see how to enter my hardware token. Any suggestions?
RSA hardware token on a Fedora 24 client to RHEL
fedora;openvpn;vpn
null
_codereview.151543
For some reason or another, I want to have available unsigned integers of sizes other than 1, 2, 4 and 8 (e.g. an unsigned integer with 3 bytes). For most platforms, compilers don't make those available; so - I rolled my own. Other than asking for a general review, I will also pose a few specific questions / requests for guidance.

#ifndef UINT_H_
#define UINT_H_

#include <boost/integer.hpp>
#include <climits>
#include <ostream>
#include <istream>
#include <cstring> // for memcpy and memset

namespace util {

/**
 * A hopefully-fast integer-like class with arbitrary size
 *
 * @note Heavily dependent on compiler optimizations...
 * @note For now, assumes little-endianness
 * @note For now, limited to small sizes
 *
 */
template <unsigned N>
class uint_t final
{
    static_assert(N <= sizeof(unsigned long long), "Size not supported, for now");

public: // types and constants
    enum { num_bytes = N, num_bits = N * CHAR_BIT };
    using byte = unsigned char;
    using value_type = byte[N];
    using fast_builtin_type  = typename boost::int_t<num_bits>::fast;
    using least_builtin_type = typename boost::int_t<num_bits>::least;

protected: // data members
    value_type value; // Note it is _not_ necessarily aligned

public: // constructors
    uint_t() noexcept = default;
    uint_t(const uint_t& x) noexcept = default;
    uint_t(uint_t&& x) noexcept = default;

protected: // building blocks for converting ctors, assignments and conversion operators

    /* The next two methods are buggy, see @Deduplicator's answer */

    template <typename I>
    uint_t& assign(I x) noexcept
    {
        if (sizeof(I) < N) {
            std::memset(value, sizeof(uint_t) - sizeof(I), 0);
        }
        std::memcpy(value, &x, N);
        return *this;
    }

    template <typename I>
    I as_integer() const noexcept
    {
        I result;
        if (sizeof(I) < N) { result = 0; }
        std::memcpy(&result, value, N);
        return result;
    }

    /*
    // Alternative for the two above methods,
    // following @Deduplicator's answer:

    static constexpr size_t min(size_t x, size_t y) { return x < y ? x : y; }

    template <typename I>
    uint_t& assign(I x) noexcept
    {
        auto x_bytes = (const byte* const) &x;
        for (auto j = 0; j < min(sizeof(I), N); j++) {
            value[j] = x_bytes[j];
        }
        for (auto j = min(sizeof(I), N); j < N; j++) {
            value[j] = 0;
        }
        return *this;
    }

    template <typename I>
    I as_integer() const noexcept
    {
        I result;
        if (sizeof(I) > N) { result = 0; }
        auto result_bytes = (byte* const) &result;
        for (auto j = 0; j < min(sizeof(I), N); j++) {
            result_bytes[j] = value[j];
        }
        return result;
    }
    */

public: // converting constructors
    uint_t(char x)               noexcept { assign<char              >(x); }
    uint_t(signed char x)        noexcept { assign<signed char       >(x); }
    uint_t(unsigned char x)      noexcept { assign<unsigned char     >(x); }
    uint_t(short x)              noexcept { assign<short             >(x); }
    uint_t(unsigned short x)     noexcept { assign<unsigned short    >(x); }
    uint_t(int x)                noexcept { assign<int               >(x); }
    uint_t(unsigned x)           noexcept { assign<unsigned          >(x); }
    uint_t(long x)               noexcept { assign<long              >(x); }
    uint_t(unsigned long x)      noexcept { assign<unsigned long     >(x); }
    uint_t(long long x)          noexcept { assign<long long         >(x); }
    uint_t(unsigned long long x) noexcept { assign<unsigned long long>(x); }

    ~uint_t() = default;

public: // operators
    uint_t& operator = (const uint_t& other) noexcept = default;
    uint_t& operator = (uint_t&& other) noexcept = default;

    uint_t& operator = (char x)               noexcept { return assign<char              >(x); }
    uint_t& operator = (signed char x)        noexcept { return assign<signed char       >(x); }
    uint_t& operator = (unsigned char x)      noexcept { return assign<unsigned char     >(x); }
    uint_t& operator = (short x)              noexcept { return assign<short             >(x); }
    uint_t& operator = (unsigned short x)     noexcept { return assign<unsigned short    >(x); }
    uint_t& operator = (int x)                noexcept { return assign<int               >(x); }
    uint_t& operator = (unsigned x)           noexcept { return assign<unsigned          >(x); }
    uint_t& operator = (long x)               noexcept { return assign<long              >(x); }
    uint_t& operator = (unsigned long x)      noexcept { return assign<unsigned long     >(x); }
    uint_t& operator = (long long x)          noexcept { return assign<long long         >(x); }
    uint_t& operator = (unsigned long long x) noexcept { return assign<unsigned long long>(x); }

    uint_t& operator += (const fast_builtin_type& other) noexcept { return *this = as_fast_builtin() + other; }
    uint_t& operator -= (const fast_builtin_type& other) noexcept { return *this = as_fast_builtin() - other; }
    uint_t& operator *= (const fast_builtin_type& other) noexcept { return *this = as_fast_builtin() * other; }
    uint_t& operator /= (const fast_builtin_type& other)          { return *this = as_fast_builtin() / other; }

    uint_t& operator += (const uint_t& other) noexcept { return operator+=(other.as_fast_builtin()); }
    uint_t& operator -= (const uint_t& other) noexcept { return operator-=(other.as_fast_builtin()); }
    uint_t& operator *= (const uint_t& other) noexcept { return operator*=(other.as_fast_builtin()); }
    uint_t& operator /= (const uint_t& other)          { return operator/=(other.as_fast_builtin()); }

    bool operator == (const uint_t& other) noexcept { return value == other.value; }
    bool operator != (const uint_t& other) noexcept { return value != other.value; }

public: // conversion operators
    operator fast_builtin_type() const noexcept { return as_integer<fast_builtin_type>(); }

public: // non-mutator methods
    fast_builtin_type as_fast_builtin()  const noexcept { return as_integer<fast_builtin_type>(); }
    fast_builtin_type as_least_builtin() const noexcept { return as_integer<least_builtin_type>(); }
};

// Additional operators which can make do with public members
template <unsigned N> bool operator >  (const uint_t<N>& x, const uint_t<N>& y) noexcept { return x.as_fast_builtin() >  y.as_fast_builtin(); }
template <unsigned N> bool operator <  (const uint_t<N>& x, const uint_t<N>& y) noexcept { return x.as_fast_builtin() <  y.as_fast_builtin(); }
template <unsigned N> bool operator >= (const uint_t<N>& x, const uint_t<N>& y) noexcept { return x.as_fast_builtin() >= y.as_fast_builtin(); }
template <unsigned N> bool operator <= (const uint_t<N>& x, const uint_t<N>& y) noexcept { return x.as_fast_builtin() <= y.as_fast_builtin(); }

template <unsigned N> uint_t<N>& operator ++ (uint_t<N>& i) noexcept { return (i += 1); }
template <unsigned N> uint_t<N>& operator -- (uint_t<N>& i) noexcept { return (i -= 1); }

template <unsigned N>
uint_t<N> operator ++ (uint_t<N>& i, int) noexcept
{
    uint_t<N> result = i;
    i += 1;
    return result;
}

template <unsigned N>
uint_t<N> operator -- (uint_t<N>& i, int) noexcept
{
    uint_t<N> result = i;
    i -= 1;
    return result;
}

template <unsigned N>
std::ostream& operator<<(std::ostream& os, uint_t<N> i) { return os << i.as_least_builtin(); }

template <unsigned N>
std::istream& operator>>(std::istream& is, uint_t<N> i)
{
    typename uint_t<N>::fast_builtin_type fast_builtin;
    is >> fast_builtin;
    i = fast_builtin;
    return is;
}

} // namespace util

#endif /* UINT_H_ */

My questions/requests for guidance:

1. My implementation currently assumes little-endianness. What would you suggest as the 'proper' way to support big-endian platforms? Another template parameter? Preprocessor directives? Something else?
2. I've been very cavalier in my treatment of signed integers, since it's not quite clear to me what I should be doing.
3. Should I try and optimize the memcpy() myself? E.g. with a large switch statement over N (the number of bytes), which perhaps also accounts for the alignment or mis-alignment of the data? I was thinking about that and noticed different behavior in different compilers.
4. If I specialize std::numeric_limits for this class - what should I set "traps" to?
A template-fixed-size unsigned integer class
c++;template
null
_unix.360757
With systemd, how can I make certain services dependent on certain network interfaces being up?

For example, let's say I have an 802.3ad bond interface I need to wait on for access to my SAN/NAS before I bring up libvirtd, even though I may have network access available via a different interface. Or let's say I have an sshfs mount I want to come up automatically (and be torn down automatically) depending on a VPN connection.

What's the idiomatic way to handle fine-grained dependencies on network interfaces?

At present, I'm using NetworkManager on Ubuntu and CentOS 7, but I'm open to other platform-appropriate mechanisms for managing network state.
How do I make certain services dependent on certain interfaces?
centos;systemd;networkmanager
There isn't really a standard built-in way that I know of, but there are a few things you can leverage in systemd.

ExecStartPre=

    Additional commands that are executed before [...] the command in ExecStart=, respectively. Syntax is the same as for ExecStart=, except that multiple command lines are allowed and the commands are executed one after the other, serially. If any of those commands (not prefixed with "-") fail, the rest are not executed and the unit is considered failed.

Restart=on-failure

    Configures whether the service shall be restarted when the service process exits, is killed, or a timeout is reached. If set to on-failure, the service will be restarted when the process exits with a non-zero exit code, is terminated by a signal (including on core dump, but excluding the aforementioned four signals), when an operation (such as service reload) times out, and when the configured watchdog timeout is triggered.

Commands are considered to have failed if they return non-zero exit codes, so by setting ExecStartPre= to something that proves your interface is up, you can be sure your service will require it.

Some examples:

ExecStartPre=/usr/bin/ping -c 1 ${SAN_IP}
ExecStartPre=/usr/sbin/iscsiadm -m session

I personally like the iscsiadm variant for your use case. If there are iSCSI connections, the return value is 0; otherwise it returns 21 (which would cause the service to fail). The ping variant can work for a larger variety of uses, but I would say in most cases you may want to find a more suitable command to check network status. You could even try using ssh, if you have keys set up, to check things on another host.

The point is that ExecStartPre= can let you make your service fail based on any commands. Just check to make sure that any command you use returns non-zero exit codes when you want it to (for example, cat-ing an empty file returns 0, whereas cat-ing a non-existent file returns 1).

After some consideration and the asker's comment, I would say the best way to define a complex condition for a service is to create another service for it to depend on.

Create a new service that uses the status-check command as its ExecStart=. Give it Restart=on-failure. Then make your original service Require and After it. The ExecStartPre= example above is sort of twisting its original purpose, which was to set things up for the service to run properly. It can still apply, and the knowledge is still useful, so I'm leaving it intact.
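To make the helper-service approach concrete, here is a minimal sketch. The unit name san-online.service and the check command /usr/local/bin/check-san are made up for illustration; substitute any command that exits non-zero until the interface/SAN is reachable (e.g. iscsiadm -m session):

    # /etc/systemd/system/san-online.service
    [Unit]
    Description=Wait until the SAN is reachable over the bond interface

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # poll until the (hypothetical) check command succeeds
    ExecStart=/bin/sh -c 'until /usr/local/bin/check-san; do sleep 2; done'

    [Install]
    WantedBy=multi-user.target

    # drop-in for the dependent service, e.g.
    # /etc/systemd/system/libvirtd.service.d/10-san.conf
    [Unit]
    Requires=san-online.service
    After=san-online.service

With Requires= plus After=, libvirtd is only started once san-online.service has completed successfully, regardless of which other interfaces happen to be up.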
_softwareengineering.100127
I have been studying Scala, but what I keep running into is heavily sugared syntax. I'm sure that will be great when I am an expert, but until then... not so much. Is there a command or a program that would add back in all/most of the code that was sugared away? Then I would be able to read the examples, for example.
How do I view Scala code without all the syntactic sugar?
syntax;scala
null
_codereview.173319
Is it possible to have an if condition with curly brackets and an else condition without brackets? Also, what is good practice for formatting it: should there be a new line after the else keyword?

if (!String.IsNullOrEmpty(CssClass))
{
    fuDocumentUploader.CssClass = CssClass;
    CssClass = null;
}
else
    txtDocumentUploadLink.Text = string.Empty;
If and else without curly brackets
c#;javascript;.net;asp.net
null
_unix.338753
I have the following data set about the SNP IDs:

POS        ID
78599583   rs987435
33395779   rs345783
189807684  rs955894
33907909   rs6088791
75664046   rs11180435
218890658  rs17571465
127630276  rs17011450
90919465   rs6919430

and a gene reference file:

genename  name          chrom  strand  txstart   txend
CDK1      NM_001786     chr10  +       62208217  62224616
CALB2     NM_001740     chr16  +       69950116  69981843
STK38     NM_007271     chr6   -       36569637  36623271
YWHAE     NM_006761     chr17  -       1194583   1250306
SYT1      NM_005639     chr12  +       77782579  78369919
ARHGAP22  NM_001347736  chr10  -       49452323  49534316
PRMT2     NM_001535     chr21  +       46879934  46909464
CELSR3    NM_001407     chr3   -       48648899  48675352

I'm trying to match the genes with the SNP locations, i.e. include the SNPs that have POS >= txstart and POS <= txend.

For example, I want a data set that has the following columns:

genename  SNPID  chrom  position  txstart  txend
how to map snps to ref gene file
linux
As far as I can see, your sample files do not contain any matches of the kind you describe.

If we modify the first file to

CHROM  POS        ID
chr7   78599583   rs987435
chr15  33395779   rs345783
chr1   189807684  rs955894
chr20  33907909   rs6088791
chrx   1234567    rsMadeUp
chr12  75664046   rs11180435
chr1   218890658  rs17571465
chr4   127630276  rs17011450
chr6   90919465   rs6919430

such that the made-up entry falls in the range of

genename  name          chrom  strand  txstart   txend
CDK1      NM_001786     chr10  +       62208217  62224616
CALB2     NM_001740     chr16  +       69950116  69981843
STK38     NM_007271     chr6   -       36569637  36623271
YWHAE     NM_006761     chr17  -       1194583   1250306
SYT1      NM_005639     chr12  +       77782579  78369919
ARHGAP22  NM_001347736  chr10  -       49452323  49534316
PRMT2     NM_001535     chr21  +       46879934  46909464
CELSR3    NM_001407     chr3   -       48648899  48675352

then

awk '
    NR == FNR && FNR > 1 { snp[$2] = $3; next }
    FNR > 1 {
        for (p in snp) { if (p >= $5 && p <= $6) print $1, snp[p], $3, p, $5, $6 }
    }
' snpsid generef

outputs

YWHAE rsMadeUp chr17 1234567 1194583 1250306
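One subtlety worth a note as an aside: awk array indices are strings, so the comparison p >= $5 can fall back to string comparison for some inputs. Forcing numeric context makes the intent explicit:

    for (p in snp) { if (p+0 >= $5 && p+0 <= $6) print $1, snp[p], $3, p, $5, $6 }

Adding 0 coerces the key to a number before comparing, which avoids surprises such as "9" comparing greater than "10".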
_cstheory.27347
Scott Aaronson said in the paper entitled "Why Philosophers Should Care About Computational Complexity" (please see ECCC Report TR11-108, section 7, pp. 25-31):

Following the work of Kearns and Valiant, we now know that many natural learning problems (as an example, inferring the rules of a regular or context-free language from random examples of grammatical and ungrammatical sentences) are computationally intractable.

My question is: which factors make the problem of inferring the grammar difficult? Is it the introduction of random examples of ungrammatical sentences? If so, what would happen if the condition "random examples of grammatical and ungrammatical sentences" were replaced with "random examples of grammatical sentences with probability p > 0 and random examples of ungrammatical sentences with probability 1-p"?
Which factors make the problem of inferring the grammar difficult?
cc.complexity theory;lg.learning;grammars;philosophy
null
_cs.23980
The principle (called a Löwenheim-Skolem theorem by Huth and Ryan) states:

Let $\phi$ be a sentence of predicate logic such that for any natural number $n \geq 1$, there is a model of $\phi$ with at least $n$ elements. Then $\phi$ has a model with infinitely many elements.

IMO, it basically states that if you can always name a number larger than my arbitrary natural number, then your model is infinite. What needs to be proven here? There are obviously no other options, even for a school kid.

PS The answers state that there is a difference between having an infinite amount of models and a single infinite model. But this is similarly stupid. First, I do not see whether I claim that I have a single infinite model or approach it by having all denumerable models. Secondly, it does not matter, since in any case you should have an infinite model in order to respond to any natural number.

Nevertheless, I have started to understand why people (mistakenly) ask me to differentiate between an infinite amount of models and models of infinite size. They fail to recognize that the principle "the fact that I can always name a number larger than yours implies that we have an infinite model/set", which is intuitive and is used to prove the overspill theorem, also implies that a model of infinite size exists. The set of models $A = \{M_k, M_l, M_m, \ldots\}$ has sizes $S = \{k, l, m, \ldots\}$ correspondingly. When you speak about the sizes of models, you basically speak of the numbers in $S$. When you say "a model of size n", you just say "n". Thus, we can forget about the set of models and speak only about $S$. Now, you say that whatever integer you have, the set $S$ contains a larger one. This basically means that $S$ contains an infinite number (i.e. $A$ contains an infinite model). What is to be proven here? In other words, what is the point of expanding $\phi$ with the infinite set $\{I_1, I_2, \ldots\}$ in the proof and applying the Compactness theorem? This says that there is an infinite model. But this is obvious even without it, right from the premise of the overspill principle.
What is the point of (Compactness theorem in the) Overspill principle?
sets;first order logic
null
_unix.308042
Before I start: I'm not asking for any code to be written, just to be enlightened about the behavior I am seeing. I have this snippet of code:

NOW=$(date +%H)
while [ true ]; do
    echo $NOW
done

I would have expected that when it is printed to the screen, the time would update, since I am storing the date command, with formatting, in the variable NOW; but instead all it does is keep printing the same value that the script was started with. Will someone enlighten me on why it does that?
Variable Behavior
shell script;date
null
_codereview.58869
I have a class House and a module Lockable. Locking and unlocking a House should reflect the real world, so you can't lock twice.

Do you think this is a good approach to using a module and raising an exception?

module Lockable
  attr_reader :locked

  def lock!
    raise StandardError, "Already locked" if @locked == true
    @locked = true
  end

  def unlock!
    raise StandardError, "Already unlocked" if @locked == false
    @locked = false
  end
end

class House
  include Lockable
end

Usage:

house = House.new
house.lock!
house.unlock!
house.unlock! # raises an exception
Good approach to raise an exception
ruby;locking;exception
I would define domain-specific error classes, like so:

module Lockable
  Error           = Class.new StandardError
  AlreadyLocked   = Class.new Error
  AlreadyUnlocked = Class.new Error

  def lock!
    raise AlreadyLocked if locked?
    @locked = true
  end

  def unlock!
    raise AlreadyUnlocked unless locked?
    @locked = false
  end

  def locked?
    # unless already defined, this assumes an
    # initial unlocked state (!!nil == false)
    !!@locked
  end
end

By having a common ancestor (Lockable::Error), users can simply catch that one if they don't care about the inner state of the lockable object:

begin
  house.lock!
  house.lock!
rescue Lockable::Error
  # meh...
rescue StandardError
  # $! needs handling
end

On the other hand, if I try to lock an already locked lock, the key just won't move (and the lock doesn't blow up in my face). So, a more realistic reflection of the world would be this implementation:

module Lockable
  def lock!
    @locked = true
  end

  def unlock!
    @locked = false
  end
end
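To illustrate the granularity this buys, a small usage sketch (it assumes House includes Lockable, as in the question):

house = House.new
begin
  house.lock!
  house.lock!
rescue Lockable::AlreadyLocked
  # the precise failure is known here, yet callers that only
  # rescue Lockable::Error (or StandardError) still catch it,
  # because AlreadyLocked < Error < StandardError
end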
_unix.164913
Is there a CLI utility to search your Gmail account? Perhaps by opening your default browser to gmail.com and running your search there?
CLI utility to search your Gmail account
email
null
_unix.25273
I am developing an X server implementation, and I want to make it as similar to the current one as possible. I read through the documentation, but I couldn't find anything specific. In particular, I'm trying to find a numbering scheme for windows. It seems to me that this is implementation-specific.

Either way, I found this concerning window IDs:

The most significant 11 bits of the XID indicate the client, leaving 21 bits for each client, giving each client 2^21 (= 2,097,152) XIDs.

I've read elsewhere that the maximum number of X clients is 255: here and here.

Is there any clear documentation on how windows should be numbered?
What is the max number of x-clients?
x11;window;x server
The Cygwin/X FAQ states that they use getdtablesize():

Cygwin/X queries getdtablesize() for the maximum number of client connections allowed; by default Cygwin returns 32 from getdtablesize(). Cygwin/X Server Test Series release Test44, released on 2001-08-15, changed the maximum number of clients from 32 to 1024 by passing the square of getdtablesize() to setdtablesize().

The Mac OS X X server source code has a hard definition in include/xorg/misc.h:

#define MAXCLIENTS 256

Some old Unixes and RHEL > 4 are able to set it at runtime:

-maxclients 64|128|256|512
    Set the maximum number of clients allowed to connect to the X server. Acceptable values are 64, 128, 256 or 512.

The X.org server source code, the VirtualBox X source code and some others share it. Of course, as it is free software, Debian and Red Hat can change it, and they have raised it to 512. So I guess you can take as a hint that it should be between 256 and 512 on all modern computers. As far as I know, the only way to know for sure is when you receive the "Cannot connect to X" error.

BTW, numbering of X clients uses 11 bits; numbering and max clients are different issues. You can see the windows of each client with xlsclients -l.
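As a quick aside, you can get a feel for how close a session is to whatever limit the server was built with by counting the clients currently connected:

    $ xlsclients | wc -l

since xlsclients prints one line per client application on the display.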
_codereview.31578
I've played with jQuery for some time now but have never written my own plugin.A question was asked: can I blur an image using jQuery? and I thought this to be a decent candidate to play with.Here's my code so far:(function ($) { $.fn.blurStuff = function (options) { var defaults = {blurRadius:2, deblurOnHover:false}; var settings = $.extend(defaults, options); $(this).wrap('<div data-blurimage />'); var blurContainers = $(this).closest('[data-blurimage]'); blurContainers.each(function () { var img = $(this).children(); $(this).css({ 'width': img.width(), 'height': img.height(), 'overflow': 'hidden', 'position': 'relative' }); var clone = img.clone(); clone.css({ 'opacity': 0.2, 'position': 'absolute' }); $(this).append(clone.clone().css({'left': +settings.blurRadius, 'top': +settings.blurRadius})); $(this).append(clone.clone().css({'left': -settings.blurRadius, 'top': +settings.blurRadius})); $(this).append(clone.clone().css({'left': +settings.blurRadius, 'top': -settings.blurRadius})); $(this).append(clone.clone().css({'left': -settings.blurRadius, 'top': -settings.blurRadius})); }); if (settings.deblurOnHover == true) { blurContainers.hover(function () { $(this).children('img:gt(0)').toggle(); }); } return blurContainers; };})(jQuery);In action: http://jsfiddle.net/gvee/xvvWj/Example usage:$('img').blurStuff({deblurOnHover: true, blurRadius: 2});My working logic is to wrap the selector in a parent container and then append 4 translucent clones to this, where each clone is positioned slightly off centre.This works pretty well so far and I'm pleased with my progress but I can conceive a couple of potential bugs that I wanted some opinions on!Is my approach reasonable? I realise that I'm appending an extra 4 elements to the DOM on each call which is not ideal (I could get away with using just two, laterally or diagonally, but I think 4 produces a better effect)...What to do if a user passes a fixed position element? This will break existing flow.Should I bother validating/sanity checking the parameter values? If so, how should I approach this? Previously I had a parameter called blurOpacity but I removed this because I realised that the wrong values (e.g. 1) effectively breaks things.
My first jQuery plugin - blurStuff()
javascript;jquery;plugin
null
_cstheory.16947
I guess the question is: does an 'infinite' number of patterns imply 'every' pattern?

For instance, if you could quickly calculate the decimal sequence of π, could you not (in theory, of course) come up with an algorithm to search that sequence for some pre-determined sequence? Then you could do this:

start = findInPi(sequence)

So sequence could, in theory, be a decimal representation of the movie The Life of Pi. The implication is that all digital knowledge (past, present and future) is bound up in irrational numbers (not just the group of irrational numbers, but each irrational number), and we just need to know the index to pull the data out.

Once you know the index and length of the data, you could simply pass this along:

playMovie(piSequence(start, length))

From an encryption standpoint, you could pass the (start, length) pair around, and the irrational number would be known only by the private key holder.

Am I off base here?
Do irrational numbers contain infinite/every patterns of sequences?
ds.algorithms;soft question;homomorphic encryption
null
_cs.48537
The Toffoli gate takes in three inputs and gives out three outputs, and is often referred to as the quantum AND gate.

It takes in a, b, c and gives out a, b, c XOR (a AND b).

Why does it do that, instead of just giving out a, b, (b AND c)? Or a, b, (a AND b)?
Why does the Toffoli gate output c XOR (a AND b) instead of just a AND b?
quantum computing
As the comments said, if it worked like I asked, it would be neither reversible nor a unitary matrix. Both things are required for quantum computing!
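To spell out the reversibility point with a concrete case: the map (a, b, c) -> (a, b, a AND b) sends both (1, 0, 0) and (1, 0, 1) to (1, 0, 0), so two distinct inputs collide and c cannot be recovered; the map is not injective, hence not invertible, hence not unitary. With the Toffoli output c XOR (a AND b), the gate is its own inverse: applying it twice yields c XOR (a AND b) XOR (a AND b) = c.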
_softwareengineering.200612
I am about to embark on a redesign of an application, one where querying the database is particularly annoying. I intend to redesign the database as much as possible, but the data shape cannot change too much.

The database has a main table and 10 other tables, each representing a record type. The main table has all of the data the record types have in common: Record ID, DateLogged, etc. Around 30-40 columns in total. The other tables are all completely different, as each record type is very different from another. They all have about 20-30 columns each.

The main table has a column called Type, which is an int. However, nowhere does it reference what record types the type numbers refer to. You can only figure it out by looking at the procedures. The main table has to join with each type table to allow searching. I have added an image representing what I am trying to describe.

Is there a better way to create these relationships? I would like to ditch the verbose stored procs used with ADO.NET and move to EF. I need to think of a better way of relating the data, so people in the future won't need to work out the relationships by scouring stored procedures for clues.
Database Design - Optimise Relationships
database;database design;sql server;relationships
Add a Type table:

TypeID  PK
Type    string

Change Type in Main to TypeID, making it a foreign key to the Type table's TypeID.

Where you go from here depends on how distinct each type is. If the only distinction is that the fields vary somewhat between types, and there are only a few types, you might get away with having a single Types table: add a TypeID to it, and put every field for every type in each record (with the understanding that some fields are going to be empty for every record).

If the types are very distinct, you can keep your current design, but add TypeID to each of the Type tables.
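A minimal T-SQL sketch of that lookup table (the names here are illustrative, not taken from your actual schema):

    CREATE TABLE RecordType (
        TypeID   int          NOT NULL PRIMARY KEY,
        TypeName nvarchar(50) NOT NULL
    );

    ALTER TABLE Main
        ADD CONSTRAINT FK_Main_RecordType
        FOREIGN KEY (TypeID) REFERENCES RecordType (TypeID);

Once the relationship exists as a real foreign key, Entity Framework can pick it up and expose it as a navigation property, so future readers no longer have to infer the type numbers from stored procedures.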
_codereview.40849
My question concerns the following Python code, which is already working. As far as I have seen, there are very elegant solutions for compacting code. Do you have any ideas on how to make the following code look smoother?

mom = [0., 0.13, 0.27, 0.53, 0.67]
strings = ['overview_files/root/file_' + str(e) for e in mom]
myfile = [np.loadtxt(s) for s in strings]

nbinp = len(mom)
nbinomega = len(myfile[0][:, 0])

x, y, z = (np.empty(nbinp*nbinomega) for i in range(3))

for i in range(nbinomega):
    for j in range(nbinp):
        i_new = i + j*nbinomega
        y[i_new] = myfile[j][i, 0] - 1.4
        x[i_new] = mom[j]
        z[i_new] = myfile[j][i, 1]
Prepare data for a contour plot with matplotlib
python;numpy;matplotlib
You may want to follow PEP 8 a bit more closely to learn how to write code that is easily understood by most Python developers. E.g. mom = [0.3, 0.13] instead of mom = [0.3,0.13], and four-space indentation.

Try to be more careful about variable names. I couldn't understand what most of them meant, which is probably because I don't know much about the code you're writing. But think about your readers (including you in three months) and wonder what the best ways are to convey information in your variable names. For example, 'myfile' suggests that this is not a collection. And 'my' doesn't provide any useful information.

There's a common idiom in Python to avoid dealing with ranges explicitly: enumerate().

for i, this in enumerate(myfile):
    for j, that in enumerate(myfile[i]):
        i_new = ...
        y[i_new] = ...
        x[i_new] = ...
        z[i_new] = ...

If you often deal with ranges, it will certainly help you at some point.
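Since the files form a rectangular grid here, the loops can also be eliminated with NumPy entirely; a sketch using the question's own variable names (it assumes every file has the same number of rows, which the original code already relies on):

    import numpy as np

    data = np.stack(myfile)            # shape (nbinp, nbinomega, 2)
    x = np.repeat(mom, nbinomega)      # momentum value for every row
    y = (data[:, :, 0] - 1.4).ravel()  # shifted first column, flattened
    z = data[:, :, 1].ravel()          # second column, flattened

The C-order ravel reproduces exactly the i + j*nbinomega layout of the original double loop.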
_codereview.28622
I am trying to return a value (a.tnAddress) from a custom class based on a lookup (a foreach loop). Depending on the type of transaction, I will need to do the foreach loop based on different properties (sExecID, iMsgSeqNum, or sClOrderID). I prefer not to have 3 different foreach loops, but I am not sure how else to re-write this. Keep in mind that the code is working fine; I just want to simplify it.

private TreeNode GetNodeAddress(cls_Transactions trPassedInTransaction)
{
    switch (trPassedInTransaction.sMessageType)
    {
        case "Q":
        case "8b":
            foreach (cls_Transactions a in cls_GlobalVariables.transList)
            {
                if (trPassedInTransaction.sExecID == a.sExecID)
                {
                    return a.tnAddress;
                }
            }
            break;
        case "3":
            foreach (cls_Transactions a in cls_GlobalVariables.transList)
            {
                if (trPassedInTransaction.iMsgSeqNum == a.iMsgSeqNum)
                {
                    return a.tnAddress;
                }
            }
            break;
        default:
            foreach (cls_Transactions a in cls_GlobalVariables.transList)
            {
                if (trPassedInTransaction.sClOrderID == a.sClOrderID)
                {
                    return a.tnAddress;
                }
            }
            break;
    }
    return null;
}
Returning a value based on a lookup
c#;lookup
This will add the use of LINQ to clean up the code the way you want:

private TreeNode GetNodeAddress(cls_Transactions trPassedInTransaction)
{
    // a predicate to pass to the FirstOrDefault method
    Func<cls_Transactions, Boolean> filter = null;

    switch (trPassedInTransaction.sMessageType)
    {
        case "Q":
        case "8b":
            filter = x => trPassedInTransaction.sExecID == x.sExecID;
            break;
        case "3":
            filter = x => trPassedInTransaction.iMsgSeqNum == x.iMsgSeqNum;
            break;
        default:
            filter = x => trPassedInTransaction.sClOrderID == x.sClOrderID;
            break;
    }

    cls_Transactions result = cls_GlobalVariables.transList.FirstOrDefault(filter);
    return result != null ? result.tnAddress : null;
}

As an explanation: the switch statement has just been converted to use a predicate, which is a function type that takes one parameter (in this case a cls_Transactions) and returns true/false.

The FirstOrDefault method is shorthand for the foreach loop and return: it iterates through the elements, using the predicate to determine whether each one meets the required condition; if none meets the condition, it returns a default value (in this case null).

You can also use the First method, which will throw an exception if nothing is found :)
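If the mapping from message type to matching key keeps growing, a dictionary of predicates removes the switch entirely; a sketch (it assumes sMessageType is a string, as the case labels suggest):

    private static readonly Dictionary<string, Func<cls_Transactions, cls_Transactions, bool>> Matchers =
        new Dictionary<string, Func<cls_Transactions, cls_Transactions, bool>>
        {
            { "Q",  (t, x) => t.sExecID == x.sExecID },
            { "8b", (t, x) => t.sExecID == x.sExecID },
            { "3",  (t, x) => t.iMsgSeqNum == x.iMsgSeqNum },
        };

    private TreeNode GetNodeAddress(cls_Transactions t)
    {
        Func<cls_Transactions, cls_Transactions, bool> match;
        if (!Matchers.TryGetValue(t.sMessageType, out match))
            match = (a, x) => a.sClOrderID == x.sClOrderID; // the default case

        cls_Transactions result = cls_GlobalVariables.transList.FirstOrDefault(x => match(t, x));
        return result != null ? result.tnAddress : null;
    }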
_webmaster.61461
If I have a domain like http://example.com/, which would be the best to use for every new project I create? Would it be a subdomain http://exampleprojectname.example.com/, a new domain http://exampleprojectname.com/, or a new folder http://example.com/exampleprojectname? Which one would be best, and good for SEO?
New domain vs Subdomain vs new Folder for each new project
seo
null
_cs.74142
I have been studying unification; especially nominal unification (paper) has my attention.

I read the theory and examples, but I am wondering what kinds of problems occur in unification.

For example, the following examples are from the nominal unification paper (I write distinct bound variables, so it is easier to see the solutions):

(1) $\lambda a. \lambda b. (X \, b) = \lambda c. \lambda d. (d \,X) $

(2) $\lambda a. \lambda b. (X \, b) = \lambda c. \lambda d. (d \, Y) $

(3) $\lambda a. \lambda b. (b \, X) = \lambda c. \lambda d. (d \, Y) $

I observed that each variable ($X$ or $Y$) occurs only once on one side of the equations. Should it always be that way? Can I write the following:

(4) $\lambda a. \lambda b. (X \, X) = \lambda c. \lambda d. (d \,X) $

(5) $\lambda a. \lambda b. (Y \, X) = \lambda c. \lambda d. (d \,X) $

(6) $\lambda a. \lambda b. (X \, X) = \lambda c. \lambda d. (X \,X) $

and are these possible unification problems?

I read many sources, but all of them presented unification problems in which a variable occurs only once on one side of an equation. I tried to find more examples, but could not find anything different.

I know that unification problems arise from logic programming, and so far I have not seen any logic programs that put $X$ twice in a term.

Anyway, I hope someone can clarify these points. Thanks in advance!
Are these examples of unification problems?
logic;unification
Yes, the variables can occur more than once in a term. Either way you end up with a system of equations. For plain unification, you can always have the unification variables be distinct and then add equations to unify them separately. That is, you can turn $\mathtt{p}(X,X) = \mathtt{q}(\mathtt{a},\mathtt{b})$ into $\mathtt{p}(X,Y) = \mathtt{q}(\mathtt{a},\mathtt{b})$ and $X = Y$.

Comparing predicates with duplicate variables comes up all the time in Prolog. For example, one of the most iconic Prolog programs is list append, usually written as:

append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

If you want to be very pedantic and say these programs don't illustrate terms with duplicate variables, then you can consider another iconic Prolog technique: difference lists. The most general difference list representation of a list $[1,2,3]$ is $[1,2,3|X]\setminus X$.
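For a concrete goal exercising a repeated variable: the first clause above, append([], Ys, Ys), already uses Ys twice, and a query such as

    ?- append(Xs, Xs, [a, b, a, b]).
    Xs = [a, b].

forces both halves of the list to unify with one and the same variable.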
_codereview.172995
I'm working on Project Euler problem 19, which reads as follows:

You are given the following information, but you may prefer to do some research for yourself.

1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.

How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?

Note that my title is slightly misleading because this isn't counting all of the Sundays in the 20th Century, just the ones that fall on the first day of the month.

Here's my code:

public int HowManySundays()
{
    // Have an array with the number of days in each month. For example, month 0 is January,
    // which has 31 days.
    int[] daysInEachMonth = new int[] { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };

    int currentYear = 1900; // First year in 1900

    // We could calculate the first Sunday in 1901 to save a little time
    // but that's not all *that* much of an optimization so it's not terribly important
    int currentDay = 7;
    int currentMonth = 0;
    int numberOfSundays = 0;

    while (currentYear < 2001)
    {
        // Add 7 each time so that we know that it's another Sunday
        currentDay += 7;

        // I don't particularly like the special reasoning for February for leap-year detection
        // We don't actually have to do separate logic for centuries because the only century we
        // care about is the year 2000, which we already know is evenly divisible by 400
        int daysInMonth = currentMonth == 1 ? ((currentYear % 4 == 0) ? 29 : 28) : daysInEachMonth[currentMonth];

        if (daysInEachMonth[currentMonth] < currentDay)
        {
            currentDay -= daysInEachMonth[currentMonth];
            currentMonth++; // Months are 0 - 11

            // See if we've wrapped around to a new year
            if (currentMonth >= 12)
            {
                currentMonth = 0;
                currentYear++;
            }

            // If day == 1, then it must be a Sunday on the first day of the month
            if (currentDay == 1 && currentYear > 1900)
            {
                numberOfSundays++;
            }
        }
    }

    return numberOfSundays;
}

I have a trivial unit test (not included here) proving that my method does, in fact, return the correct answer (171).

Does anyone have feedback on this (especially on its readability)? Is this decently efficient, or did I miss some optimizations?
Project Euler 19 (count Sundays in the 20th Century) with a while loop
c#;programming challenge;.net;datetime
null
_unix.264249
I am trying to get files having a size greater than 1k and with a txt extension; my code is as follows:

files=$(find foldername -size +1k -name \*.txt -exec {} \;)
for item in $files
do
    echo $item
done

But I am getting unexpected output, as given below. Please help!!!

DEST/sample - Copy - Copy.txt: line 1: Hello: command not found
DEST/sample - Copy - Copy.txt: line 2: This: command not found
DEST/sample - Copy - Copy.txt: line 3: In: command not found
DEST/sample - Copy - Copy.txt: line 4: $'\r': command not found
DEST/sample - Copy - Copy.txt: line 5: User: command not found
DEST/sample - Copy - Copy.txt: line 6: -s: command not found
DEST/sample - Copy - Copy.txt: line 7: -d: command not found
DEST/sample - Copy - Copy.txt: line 8: -t: command not found
DEST/sample - Copy - Copy.txt: line 9: $'\r': command not found
DEST/sample - Copy - Copy.txt: line 10: $'\r': command not found
DEST/sample - Copy - Copy.txt: line 11: Hello: command not found
DEST/sample - Copy - Copy.txt: line 12: This: command not found
DEST/sample - Copy - Copy.txt: line 13: In: command not found
DEST/sample - Copy - Copy.txt: line 14: $'\r': command not found
DEST/sample - Copy - Copy.txt: line 15: User: command not found
DEST/sample - Copy - Copy.txt: line 16: -s: command not found
DEST/sample - Copy - Copy.txt: line 17: -d: command not found
DEST/sample - Copy - Copy.txt: line 18: -t: command not found
DEST/sample - Copy - Copy.txt: line 19: $'\r': command not found
DEST/sample - Copy - Copy.txt: line 20: Hello: command not found
DEST/sample - Copy - Copy.txt: line 21: This: command not found
DEST/sample - Copy - Copy.txt: line 22: In: command not found
DEST/sample - Copy - Copy.txt: line 23: $'\r': command not found
DEST/sample - Copy - Copy.txt: line 24: User: command not found
DEST/sample - Copy - Copy.txt: line 25: -s: command not found
DEST/sample - Copy - Copy.txt: line 26: -d: command not found
DEST/sample - Copy - Copy.txt: line 27: -t: command not found
DEST/sample - Copy.txt: line 1: Hello: command not found
DEST/sample - Copy.txt: line 2: This: command not found
DEST/sample - Copy.txt: line 3: In: command not found
DEST/sample - Copy.txt: line 4: $'\r': command not found
DEST/sample - Copy.txt: line 5: User: command not found
DEST/sample - Copy.txt: line 6: -s: command not found
DEST/sample - Copy.txt: line 7: -d: command not found
DEST/sample - Copy.txt: line 8: -t: command not found
DEST/sample - Copy.txt: line 9: $'\r': command not found
DEST/sample - Copy.txt: line 10: $'\r': command not found
DEST/sample - Copy.txt: line 11: Hello: command not found
DEST/sample - Copy.txt: line 12: This: command not found
DEST/sample - Copy.txt: line 13: In: command not found
DEST/sample - Copy.txt: line 14: $'\r': command not found
DEST/sample - Copy.txt: line 15: User: command not found
DEST/sample - Copy.txt: line 16: -s: command not found
DEST/sample - Copy.txt: line 17: -d: command not found
DEST/sample - Copy.txt: line 18: -t: command not found
DEST/sample - Copy.txt: line 19: $'\r': command not found
DEST/sample - Copy.txt: line 20: Hello: command not found
DEST/sample - Copy.txt: line 21: This: command not found
DEST/sample - Copy.txt: line 22: In: command not found
DEST/sample - Copy.txt: line 23: $'\r': command not found
DEST/sample - Copy.txt: line 24: User: command not found
DEST/sample - Copy.txt: line 25: -s: command not found
DEST/sample - Copy.txt: line 26: -d: command not found
DEST/sample - Copy.txt: line 27: -t: command not found
Ignore spaces in for Loop while printing file name?
bash;shell script;filenames
Assuming -exec is not what you mean to do (the -exec option executes a file, it does not read it), the simple solution to print the filenames that match the find options is:

find foldername -size +1k -name \*.txt -print

If you need the names to be assigned to a variable, you need more.

To be able to deal with spaces in file names that result from the find command, the option -print0 is the usual solution:

find foldername -size +1k -name \*.txt -print0

However, reading the results into a bash variable is not easy. There is a long explanation on this excellent Greg's wiki page.

#!/bin/bash
unset a
while IFS= read -r -d $'\0' file; do
    a+=( "$file" )  # or however you want to process each file
done < <(find foldername -size +1k -name \*.txt -print0)

printf 'filename=%s\n' "${a[@]}"
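With bash 4.4 or newer, mapfile can absorb the NUL-delimited stream directly; a shorter variant of the same idea:

    mapfile -d '' a < <(find foldername -size +1k -name '*.txt' -print0)
    printf 'filename=%s\n' "${a[@]}"

The -d '' option tells mapfile to split records on NUL bytes, matching find's -print0.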
_unix.342153
find fails to list the contents of the /etc directory when invoked in the obvious way, and I'm not sure what the explanation is. find /etc just shows /etc, even though there are other files inside the directory.

$ find /etc
/etc

also

$ find /./etc
/./etc

and as root

$ sudo find /etc
/etc

However, I can see some files inside /etc when I run find /etc/.

$ find /etc/. | head
find: /etc/./cups/certs: Permission denied
/etc/.
/etc/./afpovertcp.cfg
/etc/./afpovertcp.cfg~orig
/etc/./aliases

Other commands such as ls show contents of /etc...

$ ls -1 /etc | head
afpovertcp.cfg
afpovertcp.cfg~orig
aliases
aliases.db

Is this expected behavior for find?
OS X: BSD `find /etc` prints just `/etc`
find;osx
On OSX, /etc is a symbolic link, and find won't traverse that as if it were a directory.For what it's worth, /tmp and /var also are symbolic links (pointing in each case to subdirectories of /private).You could use (see POSIX find) the -H or -L options to get something like your intention.
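For illustration (a sketch; the exact link target and listing will vary between macOS versions):

    $ ls -ld /etc
    lrwxr-xr-x  1 root  wheel  11 ...  /etc -> private/etc

    $ find -H /etc | head -3
    /etc
    /etc/afpovertcp.cfg
    /etc/aliases

-H makes find resolve a symbolic link only when it is named on the command line, while -L follows symlinks met anywhere during the traversal.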
_unix.125758
I am currently using Arch Linux. I am trying to print a PDF file from the command line; it is easier that way, as I need to print several ranges of pages.

Just for testing, I issue a command to print a text file:

lpr a.txt

And it prints OK. Now I want to print the PDF file:

lpr -P HP_LaserJet_2300_series -o page-ranges=80-81 -o media=a4 -o sides=two-sided-long-edge -o scaling=110 "Introductory and Intermediate Algebra - Aufmann 4th.pdf"

When I issue lpq, this is the result:

HP_LaserJet_2300_series is ready
Rank   Owner  Job  File(s)                          Total Size
1st    alber  321  Introductory and Intermediate A  32868352 bytes

So the file is in the queue but is not printing, and there are no processing jobs on the printer either.

I then access the CUPS web interface. I look at the state of the pending jobs, and the following message appears:

stopped Exception: /var/spool/cups/d00321-001 (file position 32580806): unknown token while reading object (PDF)
Can't print pdf file with lpr
printing;pdf;cups
null
_unix.143838
Like many people, I have several email accounts. Until now, I have been using Thunderbird for my main account, and either used the web interface for the other accounts, or used other email clients (Sylpheed, Balsa, ...). For some reason, I never liked the idea of having separate, independent accounts integrated in one email client, perhaps because of the added complexity and possibility of confusion. When I used three different email clients, I had three truly independent email accounts.

The only disadvantage is that the other (non-Thunderbird) email clients never worked as well as Thunderbird.

Now I am wondering whether there is a possibility to use three independent instances of Thunderbird, so that I don't have to use inferior email clients.

I know that when Thunderbird is already running, I cannot start another instance. Also, any additional instance would need its own (independent) config directory.

Is there any way to achieve this?

I am using Thunderbird (Icedove 24.6.0) on Debian Wheezy.

UPDATE: I have found this article on MozillaZine, which suggests using the -no-remote option and does not mention the option -new-instance at all. The man page lists both options, but does not explain what the difference is, whether -new-instance is implied when -no-remote is used, or whether they should both be used at the same time.

thunderbird -P profile_name -no-remote
thunderbird -P profile_name -new-instance

Side note: the referred-to article also says that:

Multiple instances is intended for debugging, so use it at your own risk

Well, I don't intend to use it for debugging; I want to use it for my work. What can possibly go wrong when using multiple instances? How can I mitigate that danger?
Using several independent instances of Thunderbird
email;thunderbird
You can start Thunderbird from the command line with the -P <profile> option to specify a different profile. Within the different profiles you have complete separation. IIRC, specifying a profile implies the -new-instance option when starting Thunderbird, but if not, just add it.

To create a new profile, start Thunderbird from the command line with:

thunderbird -ProfileManager -new-instance

On the other hand, have you tried using the IMAP protocol? This gives me completely different trees of folders, one for each account that I have, in (one) Thunderbird session. Unless I actively copy messages from one account to the other, everything stays separate; and as long as you close the tree of the account you are not working on, things should not be confusing.
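Putting that together, starting two fully independent instances could look like this (the profile names are made up; each profile must already exist):

    thunderbird -P work -no-remote &
    thunderbird -P personal -no-remote &

-no-remote stops each new process from handing control over to an already-running instance.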
_datascience.14538
I have downloaded and built the Octave library and it works fine. But I cannot call function minimizers like fminunc(), fmincg(), etc. to minimize my functions for performing logistic regression or for use in neural networks. Can these functions be accessed from C++? If yes, then how?
How can I use octave function minimizers in c++?
machine learning;neural network;logistic regression;optimization;octave
null
_codereview.92580
I have a QTreeView that is filled with data from the data_for_tree dictionary. Suppose that dictionary represents the most common purchases a person makes. I call it the source list.

Data shown in that list differs from data that is shown in the receiver list. Suppose a person inserted it via a special form. The goal is, by clicking twice, to add data from the source list to the receiver list in the proper format and in a proper way.

The goal is reached, but it seems to me in a very complicated way. First I get the selected items, then compare them to the data_for_tree dictionary and get the rest of the data (that is not shown), then make a new item that is a tuple of QStandardItems, then add it to the receiver list, then update a dictionary for the receiver list.

I am sure that there is a better way of performing it, but due to my beginner's level I couldn't apply it. I'm going to add a drag-and-drop option in the near future, and I think maybe there is a shared (common) way for adding items by double-clicking and by drag-and-drop from one list to another.

I ask for improvements, optimization and comments.

#!/usr/bin/env python -tt
# -*- coding: utf-8 -*-
#from PySide.QtGui import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
import sys

reload(sys)
sys.setdefaultencoding('utf8')

data_for_tree = {"tomato": {"color": "red", "ammount": "10", "note": "a note for tomato", "price": "0.8"},
                 "banana": {"color": "yellow", "ammount": "1", "note": "b note for banana", "price": ".6"},
                 "some fruit": {"color": "unknown", "ammount": "100", "note": "some text", "price": "2.1"}}

data_for_receiver = {"1": {"name": "milk", "price": "3.2", "note": "I love milk"},
                     "2": {"name": "coca-cola", "price": ".8", "note": "coke forever"}}


class ProxyModel(QSortFilterProxyModel):
    def __init__(self, parent=None):
        super(ProxyModel, self).__init__(parent)

    def lessThan(self, left, right):
        leftData = self.sourceModel().data(left)
        rightData = self.sourceModel().data(right)
        try:
            return float(leftData) < float(rightData)
        except ValueError:
            return leftData < rightData


class MainFrame(QWidget):
    def __init__(self):
        QWidget.__init__(self)

        self.MyTreeView = QTreeView()
        self.MyTreeViewModel = QStandardItemModel()
        self.MyTreeView.setModel(self.MyTreeViewModel)
        self.most_used_cat_header = ['Name', 'ammount', 'color']
        self.MyTreeViewModel.setHorizontalHeaderLabels(self.most_used_cat_header)
        self.MyTreeView.setSortingEnabled(True)
        self.MyTreeView_Fill()

        self.receiver_tree = QTreeView()
        self.receiver_model = QStandardItemModel()
        self.receiver_tree.setModel(self.receiver_model)
        self.receiver_tree_header = ['#', 'Name', 'price']
        self.receiver_model.setHorizontalHeaderLabels(self.receiver_tree_header)
        self.MyTreeView.doubleClicked.connect(self.addToReceiver)
        self.receiver_fill()

        MainWindow = QHBoxLayout(self)
        MainWindow.addWidget(self.MyTreeView)
        MainWindow.addWidget(self.receiver_tree)
        self.setLayout(MainWindow)

    def addToReceiver(self):
        indexes = self.MyTreeView.selectedIndexes()
        index_list = [i.data() for i in self.MyTreeView.selectedIndexes()]
        last_id = max(int(i) for i in data_for_receiver)
        for k in data_for_tree:
            v = data_for_tree[k]
            if [k, v["ammount"], v["color"]] == index_list:
                i = QStandardItem(str(last_id + 1))
                name = QStandardItem(k)
                price = QStandardItem(format(float(v["price"]), ".2f"))
                tooltip = v["note"]
                name.setToolTip(tooltip)
                item = (i, name, price)
                self.receiver_model.appendRow(item)
                upd = {"name": k, "price": v["price"], "note": v["note"]}
                data_for_receiver[str(last_id + 1)] = upd

    def MyTreeView_Fill(self):
        for k in data_for_tree:
            name = QStandardItem(k)
            ammount = QStandardItem(data_for_tree[k]["ammount"])
            note = QStandardItem(data_for_tree[k]["color"])
            name.setEditable(False)
            tooltip = "price " + format(float(data_for_tree[k]["price"]), ".2f") + "<br>"
            tooltip += data_for_tree[k]["note"]
            item = (name, ammount, note)
            name.setToolTip(tooltip)
            self.MyTreeViewModel.appendRow(item)

        self.MyTreeView.sortByColumn(1, Qt.DescendingOrder)
        proxyModel = ProxyModel(self)
        proxyModel.setSourceModel(self.MyTreeViewModel)
        self.MyTreeView.setModel(proxyModel)

        c = 0
        while c < len(self.most_used_cat_header):
            self.MyTreeView.resizeColumnToContents(c)
            c = c + 1

    def receiver_fill(self):
        for k in data_for_receiver:
            v = data_for_receiver[k]
            i = QStandardItem(k)
            name = QStandardItem(v["name"])
            price = QStandardItem(format(float(v["price"]), ".2f"))
            tooltip = v["note"]
            name.setToolTip(tooltip)
            item = (i, name, price)
            self.receiver_model.appendRow(item)

        c = 0
        while c < len(self.receiver_tree_header):
            self.receiver_tree.resizeColumnToContents(c)
            c = c + 1


if __name__ == "__main__":
    app = QApplication(sys.argv)
    main = MainFrame()
    main.show()
    main.move(app.desktop().screen().rect().center() - main.rect().center())
    sys.exit(app.exec_())
Adding data from QTreeView to QTreeView
python;beginner;pyqt
null
_unix.42065
I have installed the ncurses package from source, and now I have

$HOME/local/include/ncurses/curses.h
$HOME/local/include/ncurses/ncurses.h

on my filesystem. I have also set up the search paths so that

$ echo $C_INCLUDE_PATH
$HOME/local/include:
$ echo $CPLUS_INCLUDE_PATH
$HOME/local/include:

(I have edited the output of echo to replace the home path with $HOME)

However, when I ./configure another package, I get

checking ncurses.h usability... no
checking ncurses.h presence... no

What's the problem, such that the system cannot detect the curses installation?
ncurses.h is not found, even though it is on the search path
compiling;configure;ncurses
configure scripts produce a config.log file (in the same folder) which contains all the details of the tests they ran. They're not particularly easy to read, but open it up and search for "checking ncurses.h usability". Look at what went wrong with the small test program it tried to compile.

My guess is, it doesn't care about $C_INCLUDE_PATH and you'll need to pass it to the build system in a different manner. configure options (e.g. --includedir=$HOME/local/include) and $CFLAGS + $CXXFLAGS + $CPPFLAGS (adding -I$HOME/local/include) come to mind.
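For instance (a sketch; whether you also need the linker flag depends on where the ncurses libraries were installed):

    ./configure CPPFLAGS="-I$HOME/local/include" LDFLAGS="-L$HOME/local/lib"

Passing the variables as arguments to configure (rather than exporting them) has the nice property that config.status records them, so a later ./config.status --recheck reuses the same settings.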
_unix.27207
I have a D-Link DI-624 rev. D2 router. It is based around an Atheros AR2316A-001 chipset, and has 8MB RAM.

I opened the device to check which actual parts are used in it, and I can confirm it is indeed the AR2316A-001 chipset with a PSC A2V64S40CTP (8MB RAM). I couldn't locate the flash chip; the original firmware is 1MB in size, and I don't know if anything larger can be loaded onto the device. I was wondering if I could load OpenWrt on it, so I compiled OpenWrt with the AR231x chipset as the target. The compile process yielded these squashfs images:

openwrt-atheros-np25g-squashfs.bin
openwrt-atheros-ubnt2-pico2-squashfs.bin
openwrt-atheros-ubnt2-squashfs.bin
openwrt-atheros-ubnt5-squashfs.bin
openwrt-atheros-wpe53g-squashfs.bin

All those files are around 2.4MB to 2.5MB in size, which is far more than the firmware available from D-Link (di624revD_firmware_404.bin is around 1MB). I was wondering which file I should try to upload, if any.

On the DD-WRT page for supported devices this router is listed, as revision C, which uses the same chipset.

The DI-624 has an interesting emergency feature comparable to other D-Link products, like the DIR-600: when holding down the reset button while connecting power to the device, the router goes into an emergency restore mode. Then, when going to 192.168.0.1 with a browser, you can upload another firmware, no matter how badly bricked the router is.

In case anyone has succeeded in flashing an alternative OS onto a DI-624, I'd very much like to know how. There was some guy on the OpenWrt forums who claimed he could boot Linux on the DI-624, but he didn't really explain how he did it.

I wasn't sure whether this question belongs here or on electronics.SE.
D-Link DI-624 H/W ver. D: Flashing OpenWrt
embedded;openwrt;dd wrt
Until you determine what type and size of flash ROM is used in the device, you should not risk flashing it with anything other than dedicated firmware. Atheros chipsets are very common across a wide range of wireless devices, and the sole fact of using a particular chip does not guarantee that the entire device will work correctly with your firmware. The chipset is like a computer CPU plus some peripherals, but not necessarily all of them. And the system storage must be supported.

Edit: If you'd read carefully, you'd see that the page you linked to presents a list of incompatible devices. Since the DI-624 is listed there, it is definitely not supported by DD-WRT. This makes it almost certain that your custom OpenWrt image would not work either.
_unix.31322
Possible Duplicate: Redirecting stdout to a file you don't have write permission on

I'm trying to install Drupal according to the instructions given in this tutorial:
http://how-to.linuxcareer.com/how-to-install-drupal-7-on-ubuntu-linux
and am stuck on a step:

$ cd /etc/apache2/sites-available
$ sudo sed 's/www/www\/drupal/g' default > drupal
bash: drupal: Permission denied

The permissions for /var/www/drupal are set to 777.
permission denied when redirecting sudo sed output
permissions;sudo;io redirection
The tutorial does not use sudo and requires a root shell. You can get a root shell with sudo -i.

In case you prefer sudo: the redirection is handled by the shell and not by the sudo command, so you can't create a file in /etc/apache2/sites-available by redirecting the output as you did. According to the sudo manual, you should use a subshell like:

$ cd /etc/apache2/sites-available
$ sudo sh -c "sed 's/www/www\/drupal/g' default > drupal"
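Another common idiom for this situation is to keep the pipeline unprivileged and elevate only the write, via tee:

    $ sed 's/www/www\/drupal/g' default | sudo tee drupal > /dev/null

tee runs under sudo and opens the target file itself, so the shell's redirection permissions never come into play; the trailing > /dev/null merely silences tee's copy to stdout.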
_codereview.84081
I have written a program where the user inputs a regular expression and a replacement string, and the program will search through a set of files and do the replacements. The user is allowed to use backreferences in the replacement string (to refer to capture groups in the regex). The part of the program I would like you to consider is the subroutine substitute_regex_backref below, which searches an input string (for example the contents of a file) and does the replacements:

#! /usr/bin/env perl

use feature qw(say);
use strict;
use warnings;

my $old_str = '$1B<$ <hello> $> aba B<$ <kk> $>$1';
my $str = $old_str;
my $regex = qr'B<\$ <(.*?)> \$>';
my $replace = 'I<\$1$1\$1>';

(my $cnt, $str) = substitute_regex_backref( $str, $regex, $replace );
say "Old string: '$old_str'";
say "Number of replacements: $cnt";
say "New string: '$str'";
exit;

##
# ($num_substitutions, $new_str) =
#     substitute_regex_backref( $str, $regex, $replace )
##
# This subroutine is based on a stackoverflow.com answer
# by username Kent Fredric, see http://stackoverflow.com/a/392649/2173773
#
# Replace all occurrences of $regex in $str with $replace.
# Returns the number of replacements and the new string.
#
# The $regex input is assumed to be a regex-quoted string. For example:
#   my $regex = qr/A simple (\w+) example/;
#
# The replacement string $replace is allowed to have backreferences.
#   Example: $replace = "a $1 b"
# $1 is here treated as a backreference. It corresponds to capture group number 1
# in the $regex string.
#
# The replacement string $replace can also have backslash-escaped dollar signs.
#   Example: $replace = "\$1 a $1 b"
# Such escaped dollar signs should be replaced by a literal '$'
# (and not treated as a backreference).
#
# Note: This subroutine was written to avoid using the "ee" modifier technique:
#   $var =~ s/$find/$replace/ee;
# which has security risks if the $replace string comes from user input.
#
sub substitute_regex_backref {
    my ( $str, $regex, $replace ) = @_;

    # First obtain an array @m of matches
    my @m = $str =~ /$regex/g;
    if (@m == 0) {
        return (0, $str);
    }
    my $special_character_seq = "\x41\x42\x43\x44";

    # Remove any dollar signs from $str
    $str =~ s/\$/$special_character_seq/g;

    # If $regex contains escaped dollar signs (to be treated as
    # literal dollar signs), we need to replace them with
    # $special_character_seq since we have removed all
    # dollar signs in the previous line
    $regex =~ s/\\\$/$special_character_seq/g;

    # Do the replacement, but dollar signs in the $replace variable
    # are left as literal dollar signs
    $str =~ s/$regex/$replace/g;

    # Replace backslash-escaped dollar signs with the special string
    $str =~ s/\\\$/$special_character_seq/g;

    # Use reverse to cope with mixed one- and two-digit backreferences.
    # For example, $12 should be dealt with before $1, in order to avoid confusion.
    for ( reverse 0 .. $#m ) {
        my $n = $_ + 1;
        my $val = $m[$_];
        # Replace $n with the value of the capture group
        $str =~ s/\$$n/$val/g;
    }

    # Reinsert all literal dollar signs:
    $str =~ s/$special_character_seq/\$/g;

    return (@m + 0, $str);
}

Any comments and suggestions are appreciated. I am especially concerned with the use of the $special_character_seq variable: whether it is necessary, or how to avoid using it.
Regex substitution using a variable replacement string containing backreferences in Perl
perl
This is incredibly hackish and broken code. Input that contains the magic substitution string (ABCD) will lead to incorrect output, and no steps to guard against this were taken. Even worse, the documentation is misleading since it uses double-quoted strings.

If you perform some escape encoding, then you must also escape the escape mechanism whenever it occurs in the input. For example, in single-quoted or q()-style strings, the closing delimiter is a forbidden symbol and must be escaped with a backslash \. However, there must also be a way to escape the backslash itself, so that we can express both the strings ' via '\'' and \' via '\\\''. An alternative encoding in languages where two string literals cannot be directly adjacent is to use the delimiter as an escape character. If ' is the delimiter then '' cannot occur naturally except when denoting the empty string, so ' can be encoded as '''' and \' can be encoded as '\'''.

In our case, dollar symbols are forbidden and are escaped by the sequence ABCD, but that sequence itself is not escaped.

Instead of figuring out a proper way to escape this so that you can cobble together a half-solution using successive substitutions, let's take a step back and treat your substitution syntax as a serious language of its own. And languages get a parser. In particular, your language is a concatenation of three elements:

- literal parts that consist of anything that's not a backslash or a dollar sign,
- escaped characters that must at least contain $ via \$ and \ via \\,
- backrefs that are a dollar sign followed by a positive non-zero integer.

The point that backslashes need escaping as well follows directly from the discussion above.

The syntax for backrefs is problematic since there's no way to properly delimit the number. Assume that we want to surround capture group 1 with the letter a: we can use the replacement directive 'a$1a'. But assume we want to surround it with the digit 4: '4$14' would parse more sensibly as the digit 4 followed by the backref $14. Perl, following established shell syntax, offers a ${1} syntax for these cases, and we should as well. Your code tries to deal with this problem by looping through all possible backrefs starting with the largest number, but given ten backrefs we wouldn't be able to discern $10 from ${1}0!

Therefore, the backref syntax ought to be a dollar symbol followed by either a sequence of decimal digits not starting with zero, or else by such a sequence enclosed in matching curly braces. Clearly specifying it like this allows us to die when there is an unknown backref such as $666 when we only have three matches.

If we want to parse this substitution language in Perl, m/\G.../gc-style parsing is a viable option since the language is regular.

The below function takes a replacement string, parses it, and returns a function that, given a list of captures, returns the fully substituted string.

use Carp;
use Scalar::Util 'reftype';

sub parse_replacement {
    my ($replacement) = @_;
    my @tokens;  # literals are strings, backrefs are refs

    pos($replacement) = 0;
    while (pos($replacement) < length($replacement)) {
        # normal literals
        if ($replacement =~ /\G( [^\$\\]+ )/xgc) {
            # if the previous token was literal, concatenate rather than pushing
            if (@tokens and not defined reftype $tokens[-1]) {
                $tokens[-1] .= $1;
            }
            else {
                push @tokens, $1;
            }
        }
        # escapes
        elsif ($replacement =~ /\G [\\]/xgc) {
            if ($replacement =~ /\G ([\$\\])/xgc) {
                # if the previous token was literal, concatenate rather than pushing
                if (@tokens and not defined reftype $tokens[-1]) {
                    $tokens[-1] .= $1;
                }
                else {
                    push @tokens, $1;
                }
            }
            elsif ($replacement =~ /\G\z/xgc) {
                croak "Illegal trailing backslash";
            }
            else {
                $replacement =~ /\G (.)/smxgc;
                croak sprintf "Escape can only contain backslash or dollar sign, not U+%4X '%s'", ord $1, $1;
            }
        }
        # backrefs
        elsif ($replacement =~ /\G [\$]/xgc) {
            if ($replacement =~ /\G [{]/xgc) {
                if ($replacement =~ /\G( [1-9][0-9]* )/xgc) {
                    my $n = $1;
                    push @tokens, \($n - 1);
                    if ($replacement =~ /\G [}]/xgc) {
                        # all is OK
                    }
                    else {
                        croak "Expected closing curly brace for \${$n} identifier";
                    }
                }
                else {
                    croak 'Expected ${123} style numeric identifier inside ${...}';
                }
            }
            elsif ($replacement =~ /\G( [1-9][0-9]* )/xgc) {
                my $n = $1;
                push @tokens, \($n - 1);
            }
            else {
                croak 'Expected $123 or ${123} style number after dollar sign';
            }
        }
        else {
            croak sprintf "Illegal state: expected literal, escape or backref at position %d", pos($replacement);
        }
    }

    return sub {
        my ($captures) = @_;
        my $buffer = '';
        for my $token (@tokens) {
            if (reftype $token) {
                my $i = $$token;
                if ($i < @$captures) {
                    $buffer .= $captures->[$i];
                }
                else {
                    croak sprintf 'Unknown backref $%d; there are only %d captures', $$token, 0+@$captures;
                }
            }
            else {
                $buffer .= $token;
            }
        }
        return $buffer;
    };
}

Note that this code handles any eventuality and is therefore clearly correct, and tries to provide meaningful error messages. It could further be improved by including the exact position and context in the error messages.

There are various other problems with your code (I immediately see minor stylistic problems and another large-scale logical problem), but I'll leave those to another answer to tackle.

Your development would benefit from using a stringent set of test cases that handles edge cases and deliberately difficult inputs, in order to find out whether your code actually matches your intentions. The Test::More module is an excellent place to start testing with Perl.
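A quick usage sketch of parse_replacement (the sample strings are made up):

    use feature 'say';

    my $render = parse_replacement('I<\$1${1}\$1>');
    say $render->(['hello']);   # prints: I<$1hello$1>

The replacement is parsed once into a closure, which can then be applied to the captures of every match; unlike s///ee, no user-supplied text is ever evaluated as Perl code.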
_computergraphics.1850
I have a game with simple particles (basically dots) moving around the screen leaving a trail.

My ultimate goal is to be able to change the opacity of the solid black fading texture on each drawing call, so that the length of a particle trail (produced with an accumulation buffer) stays the same even though the particle movement is time-stepped.

The way I render my particles: I render the current frame based on the particles' positions; let's call this texture A. Then I take the previous frame and draw a solid black texture over it at a set opacity in order to fade it; let's call this texture B. Then I draw texture A over texture B. This way the particle leaves a trail based on where it has been. I am currently time-stepping the movement of the particles so that no matter how much latency there is, they always move at the same speed (in real-world time).

In essence, I want to be able to say that the particle trail will always be 100 pixels (given that the particle moves 60 pixels per second), and have it stay that way even if the frame rate drops. Currently, when the frame rate drops, the accumulation buffer is run fewer times, and thus the trail gets longer the more latency there is.

What I need to know is how exactly fading works, because I need to change the opacity of the black texture so that it fades more when the accumulation buffer runs fewer times per second, and fades less when it runs many times per second. Right now I don't really understand what happens to a pixel when a black texture with a specific opacity is drawn over it.

Here is some data I collected.

Data 1: This is a chart of the particle trails. The x axis is the FPS it was running at, and the y axis is how many pixels long the trail was. On the left it says what opacity the black texture used in the accumulation buffer was. You can find all the data, and some best-fit line formulas, here (on desmos). It is important to note I stopped counting once V was lower than 20, since the difference between colors starts getting really small. Some things I noticed: there seems to be either an exponential or a division relationship between the numbers. Also, there were segments for each frame before it was faded, and about the same number of them were visible no matter the FPS, for a given opacity of the black texture.

Data 2: I recreated what I thought was going on with a fading texture in paint.net. I made a 100x100 picture and colored it red. I then added a layer with one black pixel at an opacity of 25/255, then duplicated this and added another pixel, basically creating a gradient. Then I went through and labeled each pixel with its rounded darkness (in HSV color space), and labeled the differences between the darkness values. Unfortunately I didn't learn much from this; it just reaffirmed that the relationship is non-linear. I also noticed that if I applied the fade one more time, all the darkness values shifted to the adjacent square, so I could predict the darkness values if I knew how many times the fade had been applied.

So what happens when a semitransparent black texture is blended over another one? And if you are willing to put in some extra thought: what formula would give me the opacity the black texture needs to be in order to make the particle trail about 'N' pixels long, given the dt since the last frame?
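To make the pixel question concrete, here is a tiny numerical sketch (not from the original game; it assumes the fade pass is standard alpha blending, which the answer below confirms) of what repeated black-fade passes do to a single bright pixel. The 25/255 opacity matches the paint.net experiment described above.

using System;

// Simulates drawing black at a fixed opacity over one pixel, assuming
// standard alpha blending: result = 0 * a + pixel * (1 - a).
class FadeSim
{
    static void Main()
    {
        double alpha = 25.0 / 255.0;   // opacity of the black fade layer
        double pixel = 255.0;          // start from a fully bright pixel
        for (int pass = 1; pass <= 10; pass++)
        {
            pixel *= 1.0 - alpha;      // blending black just scales the pixel down
            Console.WriteLine($"after pass {pass}: {pixel:F1}");
        }
    }
}

Each pass multiplies the pixel by the same factor, which is exactly the exponential falloff visible in the collected data.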
Please help me understand what happens as an image is faded to black in order to time-step particle fading
opengl;transparency;particles
The basic equation for alpha blending is as follows:
$$ c_\text{final} = c_\text{source} \cdot \alpha + c_\text{dest} \cdot (1 - \alpha) $$
Here, $c_\text{source}$ is the color of the thing being blended, $c_\text{dest}$ is the background onto which you're blending it, and $\alpha$ is between 0 and 1.

In your case, $c_\text{source} = 0$ (black fading texture), so blending black $n$ times over a background reduces to
$$ c_\text{final} = c_\text{dest} \cdot (1 - \alpha)^n $$
which is indeed an exponential falloff, as you guessed. You can plug this formula in and get numbers pretty similar to those you posted (although when I did it, my results were off by one or two, maybe due to rounding differences; I'm not sure).

Regarding your problem of making the trails appear the same regardless of framerate, a simple approach is to run the accumulation buffer passes multiple times per frame if necessary, so that there's effectively a fixed timestep for the trail regardless of framerate. That will probably give better-looking results than attempting to fiddle with the alpha based on the framerate. It might be too slow, though.

A potentially more efficient approach would be to have the particles spawn a new trail particle every so often, where each trail particle stays fixed in position, with its alpha going from 100% to 0 over some length of time (and then it gets recycled). You have to re-render all the particles every frame, but it could end up being faster overall if the overhead of the full-screen blending passes for the accumulation buffer gets to be too high.

I should point out, though, that all of these approaches actually perform time-based fading, not distance-based as you asked for (100-pixel trails). If all the particles move at the same speed, you can just tweak the fade time to make the trails the desired length, but if different particles move at different speeds, then you would need to account for that when setting the fade times for their trails. There's also no trivial way to do that with the accumulation buffer approach, since it implicitly uses the same fade time for everything on the screen.
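If you do want to vary the alpha with the timestep anyway (bearing in mind the warning above that fixed-timestep passes will likely look better), the falloff formula can be inverted. Pick a residual brightness $r$ at which the trail counts as "gone" (an arbitrary threshold, since the exponential never reaches zero exactly) and a fade time $T$; the per-pass keep factor $k = 1 - \alpha$ must satisfy $k^{T/\Delta t} = r$, giving $\alpha = 1 - r^{\Delta t / T}$. A sketch of this with made-up numbers:

using System;

// Sketch: choose the fade quad's alpha each frame so a trail decays to a
// fixed residual after a fixed time, independent of frame rate.
// 'residual' is an arbitrary visibility threshold, not from the question.
class TrailFade
{
    static double FadeAlpha(double dt, double fadeTime, double residual = 0.01)
    {
        // Per-pass keep factor k must satisfy k^(fadeTime/dt) = residual,
        // so k = residual^(dt/fadeTime) and alpha = 1 - k.
        return 1.0 - Math.Pow(residual, dt / fadeTime);
    }

    static void Main()
    {
        // A 100 px trail at 60 px/s corresponds to a fade time of ~100/60 s.
        double fadeTime = 100.0 / 60.0;
        foreach (double fps in new[] { 30.0, 60.0, 144.0 })
            Console.WriteLine($"{fps} fps -> alpha = {FadeAlpha(1.0 / fps, fadeTime):F4}");
    }
}

As noted above, this is still time-based fading; the 100-pixel figure only holds for particles moving at the assumed 60 px/s.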
_cs.72663
I'm trying to prove that $L=\{w\#s : |w|=|s|, w \neq s\} \notin CFL$ using the pumping lemma. Suppose $L \in CFL$; then by the pumping lemma there exists a pumping length $p$ for $L$. I think $s = 0^{m}1^{p}\#1^{m}0^{p} \in L$ would be a good word to start with. I can write $s = uvxyz$, and I want $uv^{i}xy^{i}z\notin L$; I want to somehow force the pumped word either to not contain $\#$, or to have sides of unequal length. The first option can be achieved if I make sure $\#$ occurs in $v$ or $y$ and then just choose $i=0$. For the second option, I want to force $|v|\neq |y|$, and then every $i$ works. But I can't see how to handle the case where the partition avoids both of these.

Edit: I don't think this is a duplicate question. This language isn't context-free, as mentioned in the comments; the $\#$ character is what distinguishes it from the language suggested in the duplicate post.

I've since succeeded in proving it with the pumping lemma by choosing the word $w = 0^{p!+p}1^{p}\#0^{p}1^{p!+p}$ and pumping with $i=1+\frac{p!}{t}$ in the non-trivial case, where $w=uvxyz$ is a partition with $|v|=|y|=t$, $v$ inside the part left of $\#$ and $y$ inside the part right of it; this way $uv^{i}xy^{i}z$ has the form $w'\#s'$ with $w'=s'$.

Full proof (more rigorous than my own): Assume towards a contradiction that $L \in CFL$. Let $p > 0$ be the pumping constant for $L$ guaranteed by the pumping lemma for context-free languages. Consider the word $s = 0^{m}1^{p}\#0^{p}1^{m}$ where $m=p!+p$, so $s \in L$. Since $|s| > p$, by the pumping lemma there exists a representation $s = uvxyz$ such that $|vy| > 0$, $|vxy| \le p$, and $uv^{j}xy^{j}z \in L$ for each $j \ge 0$. We get a contradiction by cases:

1. If $v$ or $y$ contains $\#$: then for $j = 0$ we get that $uxz$ does not contain $\#$, so $uxz \notin L$, a contradiction.
2. If both $v$ and $y$ are to the left of $\#$: then for $j = 0$, $uxz$ is of the form $w\#x$ where $|w| < |x|$, so $uxz \notin L$.
3. If both $v$ and $y$ are to the right of $\#$: similar to the previous case.
4. If $v$ is left of $\#$, $y$ is right of it, and $|v| < |y|$: then for $j = 0$, $uxz$ is of the form $w\#x$ where $|w| > |x|$, so $uxz \notin L$.
5. If $v$ is left of $\#$, $y$ is right of it, and $|v| > |y|$: similar to the previous case.
6. If $v$ is left of $\#$, $y$ is right of it, and $|v| = |y|$: this is the most interesting case. Since $|vxy| \le p$, $v$ must be contained in the $1^{p}$ part of $s$, and $y$ in the $0^{p}$ part. So it holds that $v = 1^{k}$ and $y = 0^{k}$ for the same $1 \le k \le p$ (in fact, it must be that $k < p/2$). For each $j \ge 0$, it holds that $uv^{j+1}xy^{j+1}z = 0^{m}1^{p+jk}\#0^{p+jk}1^{m}$, so if it happens that $m = p + jk$, then $uv^{j+1}xy^{j+1}z \notin L$, a contradiction. To achieve this, we must take $j = (m-p)/k$, which is valid only if $m - p$ is divisible by $k$. Recall that we chose $m = p + p!$, so $m - p = p!$, and $p!$ is divisible by every $1 \le k \le p$, as required.
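As an added worked step, the arithmetic behind the choice $i = 1 + \frac{p!}{t}$ (with $t = |v| = |y|$, in the sub-case where $v$ lies in the $1^{p}$ block and $y$ in the $0^{p}$ block) spells out as:
$$ uv^{i}xy^{i}z = 0^{p!+p}\,1^{p+(i-1)t}\,\#\,0^{p+(i-1)t}\,1^{p!+p}, \qquad p+(i-1)t = p + \frac{p!}{t}\cdot t = p + p!, $$
so both halves become $0^{p!+p}1^{p!+p}$, the two sides are equal, and the pumped word leaves $L$. This is the same divisibility trick as in the last case of the full proof: every $1 \le t \le p$ divides $p!$.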
Why does $L=\{w\#s : |w|=|s|, w,s\in \{0,1\}^{*}, w \neq s \} \notin CFL$
context free;pumping lemma
null
_softwareengineering.208776
I have rewritten an open-source project from Java to Haxe, then compiled from Haxe to JavaScript, with a totally different UI. So the question is: is the code considered to be mine after rewriting it in another language in a closed-source project? Can I use it freely with no worries about the original copyrights?
Would copyrights drop if I re-write open source project into another language?
licensing;open source
No. It is derived from the original open-source project, thus a so-called derivative work, still protected by the original copyright.

"In copyright law, a derivative work is an expressive creation that includes major, copyright-protected elements of an original, previously created first work (the underlying work)... For copyright protection to attach to a later, allegedly derivative work, it must display some originality of its own. It cannot be a rote, uncreative variation on the earlier, underlying work. The latter work must contain sufficient new expression, over and above that embodied in the earlier work, for the latter work to satisfy copyright law's requirement of originality..."
_softwareengineering.182488
I would like to use a timer in my C# program, with millisecond accuracy, to keep a camera in sync with some events, shooting a picture every 250 ms (or 1/4 s; I might adjust it to even shorter intervals like 200 ms or 100 ms). The normal timer event can be used for this, but I wonder what the best way to do it would be.

Also, I think I should not write the whole capture routine inside the timer handler, but instead spin up another thread (multithreading) to process the image with some vision logic, as my vision logic takes about 1 second, so I would otherwise build up a queue.

If my vision algorithms take 1 second per thread, would this mean that on a multicore (12-core) PC such a thread would go to the next available free processor, or am I thinking about multitasking too simplistically?
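A minimal sketch of the design described above: the timer callback only grabs the frame, and the slow vision work is handed to the thread pool (CaptureImage and RunVisionLogic are hypothetical placeholders, not APIs from this post). One caveat: stock .NET timers are only accurate to the OS timer resolution (roughly 15 ms on Windows by default), which is fine for a 250 ms period but not true millisecond precision.

using System;
using System.Threading;
using System.Threading.Tasks;

// Timer fires every 250 ms on a thread-pool thread; the ~1 s vision job is
// queued via Task.Run so the timer callback returns immediately.
// CaptureImage/RunVisionLogic are stand-ins for the real camera/vision code.
class CaptureLoop
{
    static void Main()
    {
        using var timer = new Timer(_ =>
        {
            byte[] frame = CaptureImage();          // keep this part fast
            Task.Run(() => RunVisionLogic(frame));  // heavy work off the timer thread
        }, null, dueTime: 0, period: 250);

        Console.ReadLine();  // keep the process alive for the demo
    }

    static byte[] CaptureImage() => new byte[640 * 480];              // placeholder
    static void RunVisionLogic(byte[] frame) => Thread.Sleep(1000);   // simulated ~1 s job
}

Task.Run queues work on the .NET thread pool, which does spread runnable threads across available cores; at 4 captures per second against roughly 1 second of processing, about four vision tasks would be in flight at any time on a 12-core machine.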
Multitasking in C#
c#;multithreading;real time
null
_softwareengineering.84625
I have been considering using the Amazon cloud (EC2) for a small workflow application. In terms of power and storage, a SQL Server Express database will more than meet my needs. I have been cautioned against paying for just a Windows server instance and installing SQL Server Express myself, due mainly to security issues. Is it reasonable to expect there to be an Amazon Machine Image (a preconfigured image that you can load onto your instance) with SQL Server Express installed, where most of the server hardening has been taken care of?The wise course may be to just pay for the Windows + SQL Server instance so it is already configured for me, but it is quite a cost difference.
Amazon Cloud (EC2) w/SQL Server. Pay for SQL instance, or use an AMI w/SQL Server Express?
security;sql server;amazon ec2
null
_softwareengineering.274199
Our business works with truck drivers making pickups and deliveries of containers. The location of containers needs to be tracked. The drivers use mobile devices to generate a DriverReport (a log of their trip locations, times, container pickups/drop-offs, etc.). This currently works with a mostly CRUD design.

We wanted to maintain good SoC, so we built two services:

- DriverService
- InventoryService

We've had the DriverService take a dependency on the InventoryService: when a new DriverReport was inserted, it would generate DTOs (based on the report's data) to send to the InventoryService. We're not using CQRS yet, but we're considering transitioning to it.

I'm struggling to understand the relationship between the services, commands, and BLL, and I seem to have lots of questions:

1. Who would be responsible for calling the inventory service?
2. Would it be good design for a NewTripCommandHandler to generate and send an UpdateContainerInventory command?
3. Or would updating inventory for a new Trip be considered business logic? If yes, should a domain entity for Trip generate and send an UpdateContainer command?
4. Or should the mobile client generate both trip- and inventory-specific commands to send to the appropriate services?
5. Would it make more sense to take the parts specific to DriverReport inventory and merge them into the DriverService (so the dependency is broken)? Is that still good SoC? If we did this, at what layer should we start to think about the inventory? Would updating the inventory simply become a DB repository concern, or should there still be inventory-specific commands and/or business logic?

I'm getting lost in the details, and it seems the more I read on these topics, the more confused I get.
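For concreteness, the coupling being described might look roughly like this (a sketch; all names other than DriverService, InventoryService, and DriverReport are invented placeholders). The answer below shows how publish/subscribe removes the hard dependency.

using System;

// Sketch of the coupling described in the question: DriverService holds a
// direct reference to InventoryService and pushes DTOs to it on insert.
// Everything except the two service names and DriverReport is hypothetical.
record DriverReport(string ContainerId, string Location);
record ContainerInventoryDto(string ContainerId, string Location);

class InventoryService
{
    public void UpdateContainerInventory(ContainerInventoryDto dto) =>
        Console.WriteLine($"Inventory updated: {dto.ContainerId} at {dto.Location}");
}

class DriverService
{
    private readonly InventoryService _inventory;   // hard dependency on the other service

    public DriverService(InventoryService inventory) => _inventory = inventory;

    public void InsertDriverReport(DriverReport report)
    {
        // ...persist the report here, then push derived DTOs across.
        _inventory.UpdateContainerInventory(
            new ContainerInventoryDto(report.ContainerId, report.Location));
    }
}

class Demo
{
    static void Main() =>
        new DriverService(new InventoryService())
            .InsertDriverReport(new DriverReport("C-42", "Depot 7"));
}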
What layer generates commands for dependent services?
architecture;domain driven design;separation of concerns;cqrs
You could consider SOA if you want to, but as described you're only dealing with commands. If you add the publish/subscribe pattern (message queues, a service bus), your services can publish events/messages without having to know who the subscribers are; the subscribers all get the news and operate off the message contents.
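A toy sketch of that shape (a hand-rolled in-process bus purely for illustration; the event and handler names are hypothetical, and a real system would use an actual message broker or service bus library):

using System;
using System.Collections.Generic;

// Illustrative publish/subscribe: DriverService publishes a fact and never
// references InventoryService. All names here are invented for the example.
record DriverReportSubmitted(string ContainerId, string Location);

class EventBus
{
    private readonly Dictionary<Type, List<Action<object>>> _subs = new();

    public void Subscribe<T>(Action<T> handler)
    {
        if (!_subs.TryGetValue(typeof(T), out var list))
            _subs[typeof(T)] = list = new List<Action<object>>();
        list.Add(e => handler((T)e));
    }

    public void Publish<T>(T evt)
    {
        if (_subs.TryGetValue(typeof(T), out var list))
            foreach (var handle in list) handle(evt!);
    }
}

class Program
{
    static void Main()
    {
        var bus = new EventBus();

        // The inventory side subscribes; the driver side never calls it.
        bus.Subscribe<DriverReportSubmitted>(e =>
            Console.WriteLine($"Inventory: container {e.ContainerId} now at {e.Location}"));

        // The trip command handler records the trip, then publishes what happened.
        bus.Publish(new DriverReportSubmitted("C-42", "Depot 7"));
    }
}

This also suggests an answer to "who calls the inventory service": nobody does. The inventory side subscribes to facts the driver side publishes, so updating inventory stays inventory business logic rather than something the trip handler orchestrates.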