id | question | title | tags | accepted_answer
---|---|---|---|---|
_webmaster.53580 | I found the answer to this question from a few years ago, but a lot has changed with Google since then. Would a technique like the one described here still work? Given that the small sites are more than just a page and the sites have unique content? I am a web developer and have a client who wants to hire me to do this, but I'm not sure it's going to help or hurt him. | Multiple keyword sites for local SEO | local seo | null |
_unix.384371 | Input file looks something like this:chr1 1 G 300chr1 2 A 500chr1 3 C 200chr4 1 T 35chr4 2 G 400chr4 3 C 435chr4 4 A 223chr4 5 T 400chr4 6 G 300chr4 7 G 340chr4 8 C 400The actual file is too big to process, so I want to output a smaller file filtering by chromosome (column 1) and position (column 2) within a specific range.For example, I'm looking for a Linux command (sed, awk, grep, etc.) that will filter by chr4 from positions 3 to 7. The desired final output is:chr4 3 C 435chr4 4 A 223chr4 5 T 400chr4 6 G 300chr4 7 G 340I don't want to modify the original file. | Split a File into Rows Based on Column Values | linux;text processing;awk;sed;grep | A solution for a potentially unsorted input file:sort -k1,1 -k2,2n file | awk '$1=="chr4" && $2>2 && $2<8'The output:chr4 3 C 435chr4 4 A 223chr4 5 T 400chr4 6 G 300chr4 7 G 340If the input file is sorted it's enough to use:awk '$1=="chr4" && $2>2 && $2<8' file |
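The same filter can be sketched in Python for readers who prefer it; the helper name `filter_rows` is an illustration, not part of the accepted answer:

```python
def filter_rows(lines, chrom, lo, hi):
    """Keep rows whose first column equals `chrom` and whose second
    (integer) column falls in the inclusive range [lo, hi]."""
    out = []
    for line in lines:
        fields = line.split()
        if len(fields) >= 2 and fields[0] == chrom and lo <= int(fields[1]) <= hi:
            out.append(line)
    return out

data = [
    "chr1 1 G 300", "chr1 2 A 500", "chr1 3 C 200",
    "chr4 1 T 35",  "chr4 2 G 400", "chr4 3 C 435",
    "chr4 4 A 223", "chr4 5 T 400", "chr4 6 G 300",
    "chr4 7 G 340", "chr4 8 C 400",
]
print("\n".join(filter_rows(data, "chr4", 3, 7)))
```

Like the awk version, this never modifies the input; it only selects lines, so the output can be redirected to a new, smaller file.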
_unix.90657 | Sometimes I unplug my USB drive only to find out files were not written to it.I suppose the only way to ensure files are written to it is to right click on the USB drive on the desktop and then select un-mount, but sometimes I forget.What is the best way to ensure files are written to the USB drive instantly? This way I can remove the USB drive as soon as I notice the LED light on the USB drive stopped blinking.OS: CentOS 6 | How to remove a USB drive without worrying if it's been unmounted? | linux;mount;usb;usb drive | This is Gilles' answer, saving it here so it doesn't get lost.If you use the sync mount option on the removable drive, all writes are written to the disk immediately, so you won't lose data from not-yet-written files. It's a bad idea, but it does what you're asking, kind of.Note that sync does not guarantee that you won't lose data. Unmounting a removable drive also ensures that no application has a file open. If you don't unmount before unplugging, you won't notice if you have unsaved data until it's too late. Unmounting while a file is open also increases the chance of corruption, both at the filesystem level (the OS may have queued some operations until the file is closed) and at the application level (e.g. if the application puts a lock file, it won't be removed).Furthermore, sync is bad for the lifetime of the device. Without the sync option, the kernel reorders writes and writes them in batches. With the sync option, the kernel writes every sector in the order requested by the applications. On cheap flash media that doesn't reallocate sectors (meaning pretty much any older USB stick, I don't know if it's still true of recent ones), the repeated writes to the file allocation table on (V)FAT or to the journal on a typical modern filesystem can kill the stick pretty fast.Therefore I do not recommend using the sync mount option.On FAT filesystems, you can use the flush mount option.
This is intermediate between async (the default) and sync: with the flush option, the kernel flushes all writes as soon as the drive becomes idle, but it does not preserve the order of writes (so e.g. all writes to the FAT are merged). |
_softwareengineering.209156 | Unhandled exception termIn .NET Framework, unhandled exceptions are the exceptions which were not handled by the application itself, and result in a crash. In the case of a desktop application, it means that a window similar to this one is displayed:For a web application, it mostly means an HTTP 500.Under unhandled exceptions, I also include ones which are handled globally, i.e., in the case of a desktop application, the global handling which consists of displaying a custom window instead of the Windows default one.ContextWhen I work as a freelancer, I use my own in-house solution to gather unhandled exceptions from different sources (web apps and desktop apps). The gathered results are then displayed on a monitoring panel in real time as well as collected for future analysis (to be linked with a bug tracking software, etc.)Currently, I work in a company where I wouldn't be able to use my in-house solution to collect the exceptions (one of the reasons being that they won't accept sending all the exception messages to my servers).This company doesn't have any precise strategy for collecting unhandled exceptions. The only solution which was used before is both rudimentary and out of the question: it consists of sending every exception by e-mail.This means that for the new product I'm working on, we should develop a custom strategy for collecting unhandled exceptions.QuestionI can always do by hand the part which will save the exceptions to the database or a log and the part which will load them from the database, a log or Windows Events.I would like to avoid reinventing the wheel and use something which is already commonly used.What are my choices? How are unhandled exceptions usually collected and processed later?By the way:Are there any libraries which help collect those exceptions?Are there any software products which help analyze those exceptions? | We need a custom strategy for collecting unhandled application exceptions. What are our options? | c#;.net;exception handling | In short: It will depend on the .NET Framework version, as it might be handled differently in .NET 4.0 than in 2.0.A general rule of thumb is, when a runtime error occurs (Runtime Error Yellow Screen of Death - YSOD) on a web application in production, it is important to notify a developer and to log the error so that it may be diagnosed at a later point in time. There are several resources on this topic. For .NET 2.0 applications you may look at an overview post on how ASP.NET processes runtime errors and at one way to have custom code execute whenever an unhandled exception bubbles up to the ASP.NET runtime.Starting with .NET 4.0, developers get more flexibility via the Asynchronous Programming Model (APM pattern). More details are posted here.Prior to the .NET Framework 2.0, unhandled exceptions were largely ignored by the runtime. For example, if a work item queued to the ThreadPool threw an exception that went unhandled by that work item, the ThreadPool would eat that exception and continue on its merry way. Similarly, if a finalizer running on the finalizer thread threw an exception, the system would eat the exception and continue on executing other finalizers.There are different flavors of user-friendly exception-handling dialog strategies that you might look at for Windows Forms applications.In conclusion, there are some options depending on the Framework version that the .NET application currently runs on. |
_webapps.89348 | I wonder how to export the messages I have received and sent on LinkedIn. | How can I export my LinkedIn messages? | linkedin;data liberation | Under Privacy & Settings under the Account tab is a link to Request an archive of your dataDownload your LinkedIn data Did you know you can request an archive of your activity and data on LinkedIn anytime?Within minutes, you'll get the archived information that's fastest to compile including things like your messages, connections and imported contacts. We'll send you an email with a link where you can download it right away.You'll get an email with a link where you can download the second part of your data archive in about 24 hours. You'll also be able to access your archive by going to your settings, selecting the Account tab, and clicking Request an archive of your data. Want more details? Just visit our Help Center.Here's whats included Your data archive will contain the information LinkedIn has stored for you including your activity and account history, from who invited you to join, to the time of your latest login. For the full list, visit our Help Center.According to the help page, the available information includes all the messages in your Messages, Sent, and Archive folders. It also includes the messages in the Trash folder (if you haven't emptied it). Example of archive content: |
_unix.46940 | I have an overlay along with the default portage tree. Basically, I want to emerge only the updates from the main portage when I run emerge world -unD, and emerge the extra packages from the overlay explicitly.But now, each time I run emerge world, I get all the packages from both trees.I want to know the best way to enforce updates from one portage tree only. | Gentoo: How to update from one portage only? | gentoo;emerge | You should edit the files in /etc/portage/*In /etc/portage/profile/package.provided you list packages which you do not want to be updated or installed, e.g.dev-util/android-sdk-update-manager-20.0.3dev-java/icedtea-bin-7.2.2.1-r1dev-java/icedtea-bin-6.1.11.3-rIn the manuals you will find the rest:http://wiki.gentoo.org/wiki/Portagehttp://wiki.sabayon.org/index.php?title=En:HOWTO:_The_Complete_Portage_Guide |
_cstheory.11506 | I am interested in the following problem. One has two collections $Q$ and $T$ of strings, and a set $A$ of alignments of strings in $Q$ to strings in $T$. I want to find a subset $A'$ of $A$ that (i) involves each element of $Q$ only once and (ii) maximizes the total number of characters in elements $T$ that are covered by an alignment in $A'$.(To be precise: Let $a$ be an alignment in $A$. We can think of $a$ as a tuple $(q,t,f)$ where $q\in Q$, $t\in T$, and $f : \mathcal I_a \to \mathcal J_a$ is a bijection such that $f(i)=j$ if $a$ aligns the $i$th character of $q$ to the $j$th character of $t$. Here $\mathcal I_a \subset \{1,2,\dots,|q|\}$ and $\mathcal J_a \subset \{1,2,\dots,|t|\}$, where $|\cdot|$ means length. We can think of characters in elements of $T$ as pairs $(t,j)$ where $t\in T$ and $j\in\{1,2,\dots,|t|\}$. We say that $(t,j)$ is covered by an alignment in $A'$ if there exists $a=(q,t',f)\in A'$ such that $t=t'$ and $j\in\mathcal J_a$. (Note that there might exist several such $a\in A'$.) Let $\chi(t,j,A') = 1$ if such an $a\in A'$ exists, and 0 otherwise. We want to maximize the cardinality of $\{(t,j) : \chi(t,j,A')=1\}$ over $A'\subset A$ such that for all $q\in Q$, there exists at most one $a=(q',t,f)\in A'$ with $q=q'$.)My question: What is a name for this problem, if any? Thanks in advance! I guess that it is related to maximum matchings and maximum coverage, but it's not quite the same as either. | Name of the problem to find the maximum number of characters covered by a set of strings | cc.complexity theory;reference request;optimization | null |
_codereview.161398 | Count the numbers without consecutive repeated digitsAsha loves mathematics a lot. She always spends her time playing with digits. Suddenly, she is stuck with a hard problem, listed below. Help Asha to solve this magical problem of mathematics.Given a number N (using any standard input method), compute how many integers between 1 and N, inclusive, do not contain two consecutive identical digits. For example 1223 should not be counted, but 121 should be counted.Test Cases7 => 71000 => 8193456 => 256210000 => 7380The code snippet that I submitted for the above challenge is given belowint f(int n){ int i=1; int c=n; for(; i<=n; i++) { int m=-1; int x=i; while(x) { if (x % 10 == m) { c--; break; } m = x % 10; x /= 10; } } return c;}When I submitted the above code in the competition a few days ago I ranked first, but yesterday I was at positions 10-13, and now I am ranked 13. What was my mistake? What should I improve so that my rank will be first in competitive programming? | Count the numbers without consecutive repeated digits | algorithm;c;programming challenge | null |
_unix.167770 | I am using this script to count files in my directory and sub-directories:for i in $(find . -type d) ; do printf '%s\t' "$i" ; ( find "$i" -type f | wc -l ) ; doneThis script works fine. What I really want it to do is to print only the directories that contain more than 31 files. | Print only directories with more than 31 files | shell;find | null |
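Since this row carries no accepted answer, here is one hedged sketch of what the asker seems to be after, in Python rather than shell (function names are hypothetical). It mirrors the recursive count of `find "$i" -type f | wc -l` and keeps only directories over a threshold:

```python
import os

def count_files(directory):
    """Recursive file count, like `find "$directory" -type f | wc -l`."""
    return sum(len(files) for _, _, files in os.walk(directory))

def dirs_with_many_files(root, threshold=31):
    """Directories under `root` (root included) holding more than
    `threshold` files, counting files in subdirectories too."""
    return sorted(d for d, _, _ in os.walk(root) if count_files(d) > threshold)
```

An equivalent shell approach would wrap the existing loop body in a test on `wc -l`'s output, but the Python version sidesteps the word-splitting pitfalls of `for i in $(find ...)` with paths containing spaces.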
_datascience.10239 | NOTE : I'm not sure if this is the right forum for this question. If not, please advise.Context : I am collecting a huge amount of data using an android app that is placed on a vehicle. I collect the data at ~1 second intervals for about 2 hours, which gives almost 7200 data sets. These are the parameters :Timestamp (milliseconds)LatitudeLongitudeSpeedAccelerationNow I was looking at ways to simplify this data, as processing and rendering this many data points, especially on mobile devices, is not a good idea.EDIT : in answer to Spacedman's comment : I want to simplify the data because most of it is redundant. The data is collected every 1 second regardless of whether the values are constant or changing. Accuracy is not crucial as it's only used for displaying visual graphs on a website, drawing a polyline on a map, etc. So I want to keep only the minimum necessary data points required to reproduce the graph/line.While searching, I came across the Ramer-Douglas-Peucker algorithm and also found a library that implements this.The issue : I am a bit confused as to how to simplify this data. I could :Somehow simplify the entire thing by considering each data set as a 5D point, orGenerate three sets of arrays, namely : [Lat, Long, time], [Speed, time] and [acceleration, time].So my question is :Are there any advantages/disadvantages of using either of the two approaches?I was thinking that - as each metric will have different patterns of variation, will combining them reduce the efficiency of the whole simplification process?Or is it best to keep the metrics separate, so that each will be simplified to its maximum efficiency?I am a Java/Obj-C developer, normally active on SO, and I am not an expert on these things, so I would like to know what you guys think.Thanks in advance | Combining parameters for Douglas-Peucker Simplification | bigdata;dataset;data cleaning | null |
_unix.58011 | Recently I've been working with JS and I'm very enthusiastic about this language. I know that there is node.js for running JS at server side, but is there a shell that uses JS as a scripting language? If such thing exists, how usable & stable is it? | Is there a JavaScript shell? | shell;shell script;javascript | Does this look desirable to you?// Replace macros in each .js filecd('lib');ls('*.js').forEach(function(file) { sed('-i', 'BUILD_VERSION', 'v0.1.2', file); sed('-i', /.*REMOVE_THIS_LINE.*\n/, '', file); sed('-i', /.*REPLACE_LINE_WITH_MACRO.*\n/, cat('macro.js'), file);});cd('..');If so, ShellJS could be interesting, it's a portable (Windows included) implementation of Unix shell commands on top of the Node.js API.I'm unsure if this could be used as a full-featured login shell, though. (Maybe with some wrapping?)You could argue that it's not really a shell, but do you know TermKit? It's made of Node.js + Webkit, you could use JS to extend it (I guess); the shell language is still Bash(-ish). |
_codereview.98515 | I'm trying to print a binary tree in vertical strips.For instance, a tree like this: 8 / \ 6 10 / \ / \ 4 7 9 12 / \3 5Is printed out as:346 58 7 91012Here's how I'm doing it:Map<Integer,Set<Integer>> printStripsVertically(TreeNode root, Map<Integer,Set<Integer>> countMap, int dist) { if( root == null ) return countMap; Set<Integer> nodeSet = countMap.get(dist);// get list at current distance. if(nodeSet ==null) { // this dist hasnt been inspected yet, so create new list here. nodeSet= new HashSet<Integer>(); } nodeSet.add(root.data);// add current node to list. countMap.put(dist,nodeSet);// create mapping for current dist //recurse left and right. Map<Integer,Set<Integer>> leftMap = printStripsVertically(root.getLeft(),countMap,dist-1); Map<Integer,Set<Integer>> rightMap = printStripsVertically(root.getRight(),countMap,dist+1); Map<Integer,Set<Integer>> mergedMap= new HashMap<Integer,Set<Integer>>(); Iterator<Integer> itLeft = leftMap.keySet().iterator(); while(itLeft.hasNext()){ mergedMap.put(itLeft.next(),leftMap.get(itLeft.next())); } Iterator<Integer> itRight = rightMap.keySet().iterator(); while(itRight.hasNext()){ mergedMap.put(itRight.next(),rightMap.get(itRight.next())); } return mergedMap;//iterate over the map to get results.}Edit :TreeNode.java public class TreeNode { int data; private TreeNode left; private TreeNode right; public TreeNode getLeft() { return left; } public void setLeft(TreeNode left) { this.left = left; } public TreeNode getRight() { return right; } public void setRight(TreeNode right) { this.right = right; } public TreeNode(int data) { this.data=data; }}Call to the method Map<Integer,Set<Integer>> verticalStrips = new HashMap<Integer,Set<Integer>>(); verticalStrips =printStripsVertically(root,verticalStrips,0);//where root is the root of the tree.I've dry-tested the algorithm to work, but I'm not quite convinced that iterating over the leftMap and rightMap is the cleanest way to go about this, so I'm looking for any 
suggestions towards that in particular or the code in general.Any thoughts on its correctness are appreciated. I'm also looking for any other approaches that might be more efficient in dealing with the problem. | Printing a binary tree in vertical strips | java;algorithm;tree | null |
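One way to avoid iterating over and merging `leftMap` and `rightMap` at every level — the part the asker is unsure about — is to thread a single shared map through the recursion instead of building per-subtree maps. A Python sketch (the `Node` class is a stand-in for the question's `TreeNode`, not the original Java):

```python
from collections import defaultdict

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def vertical_strips(root):
    """Map horizontal distance -> set of node values.

    One shared dict is mutated during traversal, so no per-subtree
    merge step is needed: the left child lives at dist-1, the right
    child at dist+1."""
    strips = defaultdict(set)

    def walk(node, dist):
        if node is None:
            return
        strips[dist].add(node.value)
        walk(node.left, dist - 1)
        walk(node.right, dist + 1)

    walk(root, 0)
    return dict(strips)
```

Printing the sets ordered by distance (most negative first) reproduces the strip output from the question: `3`, `4`, `6 5`, `8 7 9`, `10`, `12`. The same single-map idea carries straight back to the Java version: pass `countMap` down and return it, dropping `mergedMap` and both iterators.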
_webmaster.79825 | I have a competitor (restaurant) that shows up very high in the local listings on Google and that has a lot of links from other local websites. It went out of business and the domain name is now up for grabs. If I buy this domain and redirect it to another restaurant's homepage, would there be a negative or positive SEO impact? What is the best way to optimize this domain? | Purchasing expired domain relevant to business | redirects;multiple domains;local seo | Restaurants and takeaways use local rankings, which is different to normal search listings; it uses NAP (Name, Address, Phone number) and many other factors to determine where your business is located and the intended local audience. So, unless you sell the same food and serve the same area, it's generally a bad idea. Google has wised up to people purchasing expired domains, and unless the domain is absolutely relevant to your own it's advisable not to get involved, as it would be considered a 'risk' or no improvement.If you really want to improve your rankings do the right research... building links nowadays for local businesses is the least thing you should concern yourself with...Also restaurants are easy to rank... simply serve the best food and the rest looks after itself because it starts a buzz locally, on forums, social media and so forth. |
_softwareengineering.180878 | I am working on a Python project that I want to release under the GPLv3 license. This project has one file that uses the BSD-licensed lxml library.Do I really need to put a reference to the BSD license in the file where I use the library? Or does the GPLv3 license of my project cover the license restrictions?Library: http://lxml.deLicenses: https://github.com/lxml/lxml/blob/master/doc/licenses/BSD.txthttp://www.gnu.org/licenses/gpl.txt[Edit]I think we're misunderstanding each other.I will not put the source code of the lxml library in my project. I will only reference functions from the library. If someone wants to use my project, he needs to download the lxml library separately.import lxml# do stuff with lxml...# do stuff# my codeThe library license is only necessary if I distribute the lxml source code with my project. Am I right? | GPLv3 Project with BSD Library | licensing;python;gpl;bsd license | null |
_vi.4233 | I heard someone had set up their editor to highlight Python raw strings as regular expressions, e.g. for Django:urls(r'^site1/(\d)*/new$', ...)Unfortunately my googling doesn't turn up anything. Is this possible in Vim? | Highlight python raw strings as regular expression | syntax highlighting;regular expression;filetype python | null |
_cs.65756 | I have a homework problem: Calculate the overall speedup of a system that spends 40% of its time in calculations with a processor upgrade that provides for 100% greater throughput.Which is a pretty straightforward calculation with Amdahl's Law$S = \frac{1}{(1-f)+(\frac{f}{k})} $$f$ = fraction of work performed by component = .40$k$ = the speedup of new component = 1.00$S$ = overall system speedupPlugging in my values I get$S = \frac{1}{(1-.4)+(\frac{.4}{1})} $$S = 1 $Which from my understanding means there is no speedup in the system. I am unsure if my calculation is wrong or my understanding of Amdahl's Law, because I would think this processor upgrade would've provided at least some system speedup.My book gives an example where $S = 1.22$ means a $22\%$ increase in speed, so I think I am interpreting the answer correctly, which implies I did my calculation wrong, but that also seems correct. | Understanding Amdahl's Law calculation | performance;program optimization | 100% greater throughput means a (local) speed-up by factor $k=2$. |
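The answer's point can be checked numerically; a small Python sketch of Amdahl's Law (the helper name is illustrative):

```python
def amdahl_speedup(f, k):
    """Overall speedup when a fraction f of the work is sped up by
    a local factor k: S = 1 / ((1 - f) + f / k)."""
    return 1.0 / ((1.0 - f) + f / k)

# The asker's mistake in one line: k = 1 means "no local speedup",
# while "100% greater throughput" means k = 2.
print(amdahl_speedup(0.4, 1))  # the asker's plug-in: 1.0, no speedup
print(amdahl_speedup(0.4, 2))  # the correct plug-in: 1.25, a 25% speedup
```

With $f = 0.4$ and $k = 2$ the overall speedup is $S = 1/(0.6 + 0.2) = 1.25$, i.e. a 25% system-wide improvement, consistent with the book's convention that $S = 1.22$ means a 22% increase.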
_codereview.172307 | Is this a good way to solve the quiz Chessboard from http://eloquentjavascript.net/02_program_structure.html ?Write a program that creates a string that represents an 8x8 grid, using newline characters to separate lines. At each position of the grid there is either a space or a # character. The characters should form a chess board.When you have a program that generates this pattern, define a variable size = 8 and change the program so that it works for any size, outputting a grid of the given width and height.This is my code: size = 10; grid = ""; for (var i = 1; i <= size; i++) { for (var j = 1; j <= size; j++) { if (i % 2 === 0) { grid += " #"; } else { grid += "# "; } } grid += "\n"; } console.log(grid) | Eloquent JavaScript chessboard | javascript;programming challenge;ascii art | Fun question; you should write a function that takes a parameter instead of just writing the code.A chessboard has lots of repetition, take a minute to ponder how String.repeat could make this code much simpler.Your indentation is not perfect, consider using a site like http://jsbeautifier.org/I am not a big fan of var within the loop, I would declare var up front.This is a possible solution that provides the right size of the board:function createChessboardString(size){ const line = ' #'.repeat( size ), even = line.substring(0,size), odd = line.substring(1,size+1); let out = ''; while(size--){ out = out + ((size % 2) ? odd: even ) + '\n'; } return out;}console.log(createChessboardString(8));You could consider for very large boards that the board in essence repeats odd + '\n' + even, so you could repeat that as well. The problem for me is that there are too many corner cases to consider. So personally I would go for the above for any board size < 1000. |
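The repeat-and-slice idea from the answer translates to other languages too; a Python sketch (not from the original answer) that builds one long alternating line and slices it at an offset of 0 or 1 per row:

```python
def chessboard(size):
    """Return a size x size board of alternating ' ' and '#' characters,
    one row per line, ending with a newline."""
    line = " #" * size  # long enough to slice at any offset
    rows = []
    for r in range(size):
        # shift the pattern by one character on alternating rows
        offset = r % 2
        rows.append(line[offset:offset + size])
    return "\n".join(rows) + "\n"

print(chessboard(8))
```

This also avoids the bug lurking in the question's code, where appending a two-character pair `size` times per row makes each row twice as wide as the board.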
_unix.319985 | Which is the minimum version of the linux-kernel implementing the system call nanosleep? (sys_nanosleep) | Minimum version for syscall nanosleep | linux kernel;syscalls | It was added in the late 1990s (in ncurses since February 1998). Mailing list comments by David Dawes a year earlier said at that point it was only available in Solaris.According to Linux IO mini HOWTO (December 1997), it was available in the 2.0.x kernels, and was added after the previous version of the HOWTO on March 30, 1997. I found a French translation of the manual page dated April 1997.From that, it seems it was added in April 1997, which would be 2.0.30 (see Linux Kernel Version History: 2.0 series kernels ). |
_unix.324093 | I have to count the number of values that are between 0 and 0.05 in column 11 of a dataset. How do I go about doing this? | Count number of values within a range in a specific column | linux;text processing;numeric data | null |
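Since this row carries no accepted answer, here is one hedged way to do the count in Python; an awk one-liner such as `awk '$11 >= 0 && $11 <= 0.05' file | wc -l` would be the more idiomatic Unix answer (the function name and skip-on-bad-row policy below are my own choices):

```python
def count_in_range(lines, column=11, lo=0.0, hi=0.05):
    """Count rows whose 1-based `column` holds a value in [lo, hi].
    Rows that are too short or non-numeric in that column are skipped."""
    count = 0
    for line in lines:
        fields = line.split()
        if len(fields) < column:
            continue
        try:
            value = float(fields[column - 1])
        except ValueError:
            continue
        if lo <= value <= hi:
            count += 1
    return count
```

Both endpoints are counted as inside the range here; drop the `=` from the comparisons if the bounds should be exclusive.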
_unix.307723 | I have a small AWS EC2 instance that is a fairly simple LAMP setup. I am experimenting with migrating the MySQL install to a remote AWS RDS instance; however, I wish to mask the remote IP so that the db looks local (future plans). For this my intention is to use mysql::router, which seems to do exactly what I want. I can have it listen on localhost:3306 and then route all the traffic to the MySQL host on the other box.My configuration for the router is very simple at the moment, and I have it listening on port 3307 so as to test without taking out the currently running MySQL instance.Rest of config is stock; this is my routing section[routing:aws]bind_address = 127.0.0.1:3307connect_timeout = 30destinations = XXXXXX.cwhpshcru9zi.eu-west-1.rds.amazonaws.commode = read-writeNow I have tried dumping a db and then connecting with the interactive agent via mysql::router and reimporting the data. The creates all work fine, and some small inserts, but when it gets to a substantial insert the connection drops (and VERY quickly, so not even hitting a second time-wise).I thought it might be the RDS instance, or some incompatibility with the dump, so I attempted exactly the same process but connecting directly to the remote MySQL instance and attempted the import again; no errors this time, all data created.I switched on debug logging in mysql router and see the following popping up:2016-09-03 21:59:41 DEBUG [7f6248b31700] Trying server XXXXXXX.cwhpshcru9zi.eu-west-1.rds.amazonaws.com:3306 (index 0)2016-09-03 21:59:41 DEBUG [7f6248b31700] [routing:aws] [127.0.0.1]:45420 - [52.49.XXX.XXX]:33062016-09-03 21:59:41 DEBUG [7f6248b31700] [routing:aws] Routing stopped (up:1465b;down:6638b)2016-09-03 21:59:42 DEBUG [7f6249332700] [routing:aws] Routing stopped (up:520b;down:401b)So it looks like for some reason mysql::router is hitting a problem? Has anyone else experienced this issue? Is there a fix?So far I have migrated one site, and the site seems to function fine through the router; just the import of data didn't work.Thanks | mysql::router losing connection during data import | mysql;aws | null |
_codereview.158131 | I am writing this question to get some advice on improving this Database Management library. I'll explain a little about it:Database Manager - DatabaseManager is the holder, it generates new connections when the NewConnection is called, it returns a new DatabaseConnection with the saved connection string.Database Connection - DatabaseConnection is a connection containing a new connection that's created on each call from DatabaseManager.Usage:using (var databaseConnection = Serber.GetDatabase().NewDatabaseConnection){ databaseConnection.SetQuery(SELECT * FROM `table` WHERE `enabled` = '1' ORDER BY `name` DESC;); databaseConnection.Open(); using (MySqlDataReader Reader = databaseConnection.ExecuteReader()) { while (Reader.Read()) { try { // do some work } catch (DatabaseException ex) { log.Error(Unable to load item for ID [ + Reader.GetInt32(id) + ], ex); } } }}DatabaseManager:internal sealed class DatabaseManager{ private readonly string _connectionString; public DatabaseManager() { var connectionString = new MySqlConnectionStringBuilder { ConnectionLifeTime = (60 * 5), ConnectionTimeout = 30, Database = Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.database), DefaultCommandTimeout = 120, Logging = false, MaximumPoolSize = uint.Parse(Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.pool_maxsize)), MinimumPoolSize = uint.Parse(Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.pool_minsize)), Password = Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.password), Pooling = Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.pooling) == 1, Port = uint.Parse(Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.port)), Server = Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.hostname), UseCompression = false, UserID = Hariak.HariakServer.Config.GetConfigValueByKey(database.mysql.username), }; _connectionString = connectionString.ToString(); } public bool 
ConnectionWorks() { try { using (var databaseConnection = NewDatabaseConnection) { databaseConnection.OpenConnection(); } return true; } catch (Exception) { return false; } } public DatabaseConnection NewDatabaseConnection => new DatabaseConnection(_connectionString);}DatabaseConnection:internal sealed class DatabaseConnection : IDisposable{ private static readonly ILogger Logger = LogManager.GetCurrentClassLogger(); private MySqlConnection _connection; private List<MySqlParameter> _parameters; private MySqlCommand _command; public DatabaseConnection(string connectionString) { _connection = new MySqlConnection(connectionString); _command = _connection.CreateCommand(); } public void OpenConnection() { if (_connection.State == ConnectionState.Open) { throw new InvalidOperationException(Connection already open.); } _connection.Open(); } public void AppendParameter(string key, object value) { if (_parameters == null) { _parameters = new List<MySqlParameter>(); } _parameters.Add(new MySqlParameter(key, value)); } public void SetQuery(string query) { _command.CommandText = query; } public int ExecuteNonQuery() { if (_parameters != null && _parameters.Count > 0) { _command.Parameters.AddRange(_parameters.ToArray()); } try { return _command.ExecuteNonQuery(); } catch (MySqlException e) { Logger.Error(e, Database error was logged.); return 0; } finally { _command.CommandText = string.Empty; _command.Parameters.Clear(); if (_parameters != null && _parameters.Count > 0) { _parameters.Clear(); } } } public int GetLastId() { try { return (int)_command.LastInsertedId; } catch (MySqlException e) { Logger.Error(e, Database error was logged.); return 0; } finally { _command.CommandText = string.Empty; } } public int ExecuteSingleInt() { try { if (_parameters != null && _parameters.Count > 0) { _command.Parameters.AddRange(_parameters.ToArray()); } return int.Parse(_command.ExecuteScalar().ToString()); } catch (MySqlException e) { Logger.Error(e, Database error was logged.); return 
0; } finally { _command.CommandText = string.Empty; _command.Parameters.Clear(); if (_parameters != null && _parameters.Count > 0) { _parameters.Clear(); } } } public bool TryExecuteSingleInt(out int value) { try { if (_parameters != null && _parameters.Count > 0) { _command.Parameters.AddRange(_parameters.ToArray()); } var scalar = _command.ExecuteScalar(); if (scalar == null) { value = 0; return false; } value = int.Parse(scalar.ToString()); return true; } catch (MySqlException e) { Logger.Error(e, Database error was logged.); value = 0; return false; } finally { _command.CommandText = string.Empty; _command.Parameters.Clear(); if (_parameters != null && _parameters.Count > 0) { _parameters.Clear(); } } } public MySqlDataReader ExecuteReader() { if (_parameters != null && _parameters.Count > 0) { _command.Parameters.AddRange(_parameters.ToArray()); } try { return _command.ExecuteReader(); } catch (MySqlException e) { Logger.Error(e, Database error was logged.); return null; } finally { _command.CommandText = string.Empty; _command.Parameters.Clear(); if (_parameters != null && _parameters.Count > 0) { _parameters.Clear(); } } } public DataSet ExecuteDataSet() { if (_parameters != null && _parameters.Count > 0) { _command.Parameters.AddRange(_parameters.ToArray()); } var dataSet = new DataSet(); try { using (var adapter = new MySqlDataAdapter(_command)) { adapter.Fill(dataSet); } return dataSet; } catch (MySqlException e) { Logger.Error(e, Database error was logged.); return null; } finally { _command.CommandText = string.Empty; _command.Parameters.Clear(); if (_parameters != null && _parameters.Count > 0) { _parameters.Clear(); } } } public DataTable ExecuteTable() { var dataSet = ExecuteDataSet(); return dataSet.Tables.Count > 0 ? dataSet.Tables[0] : null; } public DataRow ExecuteRow() { var dataTable = ExecuteTable(); return dataTable.Rows.Count > 0 ? 
dataTable.Rows[0] : null; } public void Dispose() { Dispose(true); } private void Dispose(bool disposing) { if (!disposing) { return; } if (_connection.State == ConnectionState.Open) { _connection.Close(); _connection = null; } if (_parameters != null) { _parameters.Clear(); _parameters = null; } if (_command != null) { _command.Dispose(); _command = null; } }} | MySQL database library with multiple connections | c#;mysql | Your code looks great to me at first glance. I have some tips for you that could increase the performance of your database manager:Go asyncI wouldn't say that the official ADO.NET MySQL connector is bad. It does pretty well what it has to, but let's admit it, async programming is not that rare today. You should take a look at one particular connector repo on Github, which is a fresh, clean and fully async ADO.NET MySQL connector that also supports .NET Core.Move the parameter and the command holder outside your DatabaseConnection class. You might not need a holder for parameters during every query. If you move those back to your connector (DatabaseManager) and just grab one when needed, you can spare some allocated space.Like this:DatabaseManager:public MySqlConnection CreateConnectionObject() => Activator.CreateInstance(typeof(MySqlConnection)) as MySqlConnection;public MySqlCommand CreateCommandObject() => Activator.CreateInstance(typeof(MySqlCommand)) as MySqlCommand;public MySqlParameter CreateParameterObject() => Activator.CreateInstance(typeof(MySqlParameter)) as MySqlParameter;RefactorOnly make methods and variables public, if they really must be exposed to the environment. Any internal method that shouldn't be called from the outside has to be private or protected.It's better, if you have only one DatabaseManager instance (which is the connector) and all the other are DatabaseConnection instances that link back to the DatabaseManager (which are the individual database interfaces and should only one per database exist). 
First you should create your DatabaseManager connector, then you create one DatabaseConnection for each database and if you would like to commit a query on that particular database, you can instruct the appropriate DatabaseConnection to do it.Break your code into sections depending on the MySQL commands. Instead of one global query handler, you would have different methods for different actions (Select, Insert, Delete etc.). It results in a much cleaner code. Also it's great, if you have one MySqlCommand builder method, that decides whether parameters are needed or not.See example below for what I mean:private MySqlCommand CreateSqlCommand(MySqlConnection Connection, string Sql, params object[] Args){ MySqlCommand SqlCommand = connector.CreateCommandObject(); // connector = DatabaseManager instance SqlCommand.Connection = Connection; SqlCommand.CommandText = Sql; SqlCommand.CommandTimeout = 300; if (Args.Length > 0) { MySqlParameter[] Params = new MySqlParameter[Args.Length]; for (var i = 0; i < Args.Length; i++) { MySqlParameter Param = connector.CreateParameterObject(); // connector = DatabaseManager instance Param.ParameterName = ; Param.Value = Args[i]; Params[i] = Param; } SqlCommand.Parameters.AddRange(Params); } return SqlCommand;}public async Task<MySqlDataReader> SelectAsync(string Sql, params object[] Args){ try { using (MySqlCommand Command = CreateSqlCommand(CreateConnection(), Sql, Args)) return await Command.ExecuteReaderAsync(); } catch (Exception ex) { Console.WriteLine(ex.ToString()); return null; }}Coding styleIf an if, using, while or for statement is followed by a single action, then scoping ({ and }) is not needed.Commenting is useful, when not overused. Personally, I dislike commenting everything, it makes my code messy and difficult to work with. If you name your variables after their purpose (as you did), comments are not that necessary, since the code speaks for itself. |
_unix.169521 | I have the following data from which I mean to extract only those lines which contain bb only. Not b or bbb or anything else just bb.abbabbbaabbccaabababbbbcNow when I use the following combination of commands-:cat file1 | grep bb[^b]I am getting the output as all the lines in my sample file-:abbabbbaabbccaabababbbbcThe expected I want is -:(The lines that contain only bb)abbaabbccWhat is the regular expression that can achieve this ?abbbabb is not valid. I am looking for lines that contain only bb and no other pattern of b. The line will contain only two, consecutive b and no other b characters at all. | regex - Searching for only character pairs | regular expression;patterns | I guess the most straight-forward way is:grep '^[^b]*bb[^b]*$' file1Btw, for commands like grep that accept a file name argument it's more efficient to dogrep '^[^b]*bb[^b]*$' file1orgrep '^[^b]*bb[^b]*$' < file1(the latter working if no file argument is supported, too)thancat file1 | grep '^[^b]*bb[^b]*$'and often more flexible. |
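A quick demonstration of the accepted pattern against the question's sample lines (a check I added for illustration; the file name `file1` is taken from the question):

```shell
# Build the sample file from the question and apply the accepted pattern:
# only lines whose b's form exactly one consecutive pair survive.
printf 'abba\nbbba\nabbcc\naabab\nabbbbc\n' > file1
grep '^[^b]*bb[^b]*$' file1
# prints:
# abba
# abbcc
```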
_unix.254761 | I have a python script$ cat ~/script.pyimport sysfrom lxml import etreefrom lxml.html import parsedoc = parse(sys.argv[1])title = doc.find('//title')title.text = span2.text.strip()print etree.tostring(doc)I can run the script on an individual file by issuing something like$ python script.py foo.html > new-foo.htmlMy problem is that I have a directory ~/webpage that contains hundreds of .html files scattered throughout sub-directories. I would like to run ~/script.py on all of these html files. How can I do this?I'm aware that I can list all the .html files under ~/webpage/ by issuing $ find ~/webpage/ -name *.htmlbut I'm not quite sure how to use this list to run my script on them. | How can I run this python script on all html files under a directory? | bash;shell script;scripting;python | null |
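No accepted answer was recorded for this question; for what it's worth, one common pattern (my own sketch, with an invented helper name) feeds every file that find locates to a command, writing each result next to the original, which is what the asker's `new-foo.html` naming suggests:

```shell
# process_html DIR CMD...: run CMD on every .html file under DIR, writing
# each result next to the original as new-<name>.html (hypothetical helper;
# the question would invoke it as: process_html ~/webpage python script.py).
# Excluding new-* both keeps reruns idempotent and stops find from picking
# up the files this loop itself creates.
process_html() {
    dir=$1; shift
    find "$dir" -name '*.html' ! -name 'new-*' | while IFS= read -r f; do
        "$@" "$f" > "$(dirname "$f")/new-$(basename "$f")"
    done
}
```

Note the line-by-line read breaks on file names containing newlines; for such names a `find ... -exec` form would be safer.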
_unix.344070 | I need some help with these text files. (fields are separated by commas)$ cat File1.seed389,0,390,1,391,0,392,0,393,0,SEED394,0,395,1,$ cat File2.seed223,0,224,1,225,0,226,1,227,0,SEED228,1,$ cat File3.seed55,0,56,0,SEED57,1,58,0,59,1,60,0,and the desired output would be:389,0,,223,0,,,,,0390,1,,224,1,,,,,2391,0,,225,0,,,,,0392,0,,226,1,,55,0,,1393,0,SEED,227,0,SEED,56,0,SEED,0394,0,,228,1,,57,1,,2395,1,,,,,58,0,,1,,,,,,59,1,,1,,,,,,60,0,,0As you can see the files are aligned by the pattern SEED, and then sum all the 2nd columns of the files horizontally adding the result in a last column. | compare and match multiple files by pattern | text processing;files;scripting | null |
_unix.33729 | Recently, I have been looking for a script to take this: This;Is;First;Line; and make it like: This Is First Line | Need help to make rows into column with awk or sed | linux;scripting | null
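No accepted answer was recorded; for reference, the simplest transformation (a sketch I added, not from the original thread) just translates the separators:

```shell
# Turn the ;-separated fields into one item per line; the trailing ';'
# produces an empty line, which sed then drops.
printf 'This;Is;First;Line;\n' | tr ';' '\n' | sed '/^$/d'
# prints:
# This
# Is
# First
# Line
```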
_unix.51826 | There's a new server that is assigned 5 IP addresses. I want to use Xen to run several VMs with various services. This is my first attempt to install Xen, I use this tutorial as a guideline. Stuck in the very beginning: they talk about replacing a single ip on an eth0 with a bridge br0. My server has 5: eth1 and 4 aliases eth1:1 .. eth1:4. How should the network config look like?bridge that replaces eth1 completely, and then 4 aliases added to the bridge?bridge can only replace a single IP out of the 5?Pardon my lame questions, first time in this forest. | Xen, bridge and multiple IPs on CentOS | centos;routing;xen;bridge | 1 advice with xen, if you decide to use classic bridge (vs ovs), set it manually as the scripts didn't get it right for me at first (with the single nic being locked out)something like this should get bridging to work:auto lo br0iface lo inet loopbackiface br0 inet static address 192.168.128.7 netmask 255.255.255.128 network 192.168.128.0 broadcast 192.168.128.127 gateway 192.168.128.126 dns-nameservers 172.16.2.200 bridge_ports eth1 bridge_stp off bridge_fd 0 #bridge_hello 2 #bridge_maxage 12iface eth1 inet manualNow on every guest os you will get an 'eth0' interface (rfr. bridge_fd=0), if you assign a ip address to that interface, it will be on the br0 bridge and will be able to do everything like the host can, given the fact that nothing is blocking it (netfilter etc)for completeness sake, then you edit /etc/sysctl.conf (assuming debian here,sry) and set this as it might be needed for your networknet.ipv4.conf.eth1.proxy_arp = 1net.bridge.bridge-nf-call-ip6tables = 0net.bridge.bridge-nf-call-iptables = 0net.bridge.bridge-nf-call-arptables = 0and do sysctl -p to commit them. This disables netfilter from intervening on the bridge. Alternatively you could use iptables to do this too. 
from top of my head, something like this (they might not all be needed), but since I don't use these, it's just to give an idea:iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPTiptables -I FORWARD -m physdev --physdev-in vif1.0 -j ACCEPTThat vif1.0 (or perhaps named a bit different) interface will be shown once your guest os is started, you can check the network on the host with the classic tools (ip, ifconfig etc). |
_unix.202588 | I run Gnome, which has pretty good support for my HiDPI screen. However, when I run QT apps I can't seem to find a way to scale the fonts. Is there a way to do this without installing a full version of KDE? | How can I set the default font size for all Qt5 apps? | configuration;fonts;qt | You can try this recipe from the archwikiQt5 applications can often be run at higher dpi by setting the QT_DEVICE_PIXEL_RATIO environment variable. Note that the variable has to be set to a whole integer, so setting it to 1.5 will not work.This can for instance be enabled by creating a file /etc/profile.d/qt-hidpi.shexport QT_DEVICE_PIXEL_RATIO=2And set the executable bit on it. |
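To try the setting on a single launch before editing /etc/profile.d, the variable can be set for one process only (the `env` command below is a stand-in I chose for demonstration; replace it with whichever Qt application you want to start):

```shell
# Set the variable for one process only; `env` just echoes its environment
# here, confirming the variable is in effect for that child process.
QT_DEVICE_PIXEL_RATIO=2 env | grep '^QT_DEVICE_PIXEL_RATIO='
# prints: QT_DEVICE_PIXEL_RATIO=2
```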
_unix.268835 | I am running Plesk on DebianSince i installed Plesk almost 6 months ago every time i restarted the server nginx would fail to start on boot and i would have to go and manually restart it. Now today i needed to restart the server again but this time i can't even manually restart nginx. I get this:Starting nginx (via systemctl): nginx.serviceJob for nginx.service failed.See 'systemctl status nginx.service' and 'journalctl -xn' for details.failed!systemctl status nginx.service returns: nginx.service - Startup script for nginx serviceLoaded: loaded (/lib/systemd/system/nginx.service; enabled)Active: failed (Result: exit-code) since Wed 2016-03-09 23:00:15 MST; 25min agoProcess: 4723 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)Process: 4720 ExecStartPre=/usr/bin/test $NGINX_ENABLED = yes (code=exited, status=0/SUCCESS)Mar 09 23:00:15 fineartschool.net nginx[4723]: nginx: the configuration file /etc/nginx/nginx.conf syntax is okMar 09 23:00:15 fineartschool.net nginx[4723]: nginx: [emerg] bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)Mar 09 23:00:15 fineartschool.net nginx[4723]: nginx: configuration file /etc/nginx/nginx.conf test failedMar 09 23:00:15 fineartschool.net systemd[1]: nginx.service: control process exited, code=exited status=1Mar 09 23:00:15 fineartschool.net systemd[1]: Failed to start Startup script for nginx service.Mar 09 23:00:15 fineartschool.net systemd[1]: Unit nginx.service entered failed state.And journal -xn reads-- Logs begin at Wed 2016-03-09 22:49:30 MST, end at Wed 2016-03-09 23:10:01 MST. 
--Mar 09 23:05:01 fineartschool.net CRON[6067]: pam_unix(cron:session): session closed for user rootMar 09 23:09:01 fineartschool.net CRON[7188]: pam_unix(cron:session): session opened for user root by (uid=0)Mar 09 23:09:01 fineartschool.net CRON[7189]: (root) CMD ( [ -x /usr/lib/php5/sessionclean ] && /usr/lib/php5/sessionclean)Mar 09 23:09:01 fineartschool.net CRON[7188]: pam_unix(cron:session): session closed for user rootMar 09 23:09:47 fineartschool.net CRON[4606]: pam_unix(cron:session): session closed for user rootMar 09 23:10:01 fineartschool.net CRON[7505]: pam_unix(cron:session): session opened for user root by (uid=0)Mar 09 23:10:01 fineartschool.net CRON[7506]: pam_unix(cron:session): session opened for user root by (uid=0)Mar 09 23:10:01 fineartschool.net CRON[7507]: (root) CMD (/opt/psa/admin/bin/php -dauto_prepend_file=sdk.php '/opt/psa/admin/plib/modules/magicspam/script|Mar 09 23:10:01 fineartschool.net CRON[7508]: (root) CMD (/opt/psa/admin/bin/php -dauto_prepend_file=sdk.php '/opt/psa/admin/plib/modules/plesk-mobile/scrMar 09 23:10:01 fineartschool.net CRON[7505]: pam_unix(cron:session): session closed for user rootand the nginx error log2016/03/09 22:28:57 [emerg] 952#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:31:14 [emerg] 2675#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:34:56 [emerg] 914#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:38:36 [emerg] 2670#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:39:26 [emerg] 941#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:42:17 [emerg] 2795#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:42:32 [emerg] 2912#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:46:17 [emerg] 4026#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested 
address)2016/03/09 22:46:26 [emerg] 4092#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:49:49 [emerg] 795#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 22:52:31 [emerg] 2517#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)2016/03/09 23:00:15 [emerg] 4723#0: bind() to 64.4.6.100:80 failed (99: Cannot assign requested address)Any help would be greatly appreciated!Thanks in advance! | nginx won't restart | debian;nginx;plesk | Well, this is embarrassing. After looking at the recurring pattern (99: Cannot assign requested address) I decided to look at the server's assigned IP address, and it turned out the server didn't pick up its static IP address and instead picked up a dynamic IP address. After correcting this I was able to restart nginx. Hopefully someone else will be able to benefit from this.
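A quick post-reboot check in the same spirit as the fix (my own sketch; 64.4.6.100 is the address from the error log): if the address nginx must bind is not assigned to any interface, bind() fails with errno 99 exactly as logged.

```shell
# Report whether an address is currently assigned to any interface.
check_addr() {
    ip -o addr show 2>/dev/null | grep -qF "$1" && echo assigned || echo missing
}
check_addr 64.4.6.100   # prints "missing" until the static IP is configured
```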
_cs.68241 | I honestly have no idea how to prove this; even though I can understand the background, could someone help me? | Why is DCFL not closed under kleene star? | context free;pushdown automata;kleene star | null
_unix.16958 | It's been a while since my web-browsing has really suited me. What I would really like is:A javascript-enabled web browser with a tab-based browsing system that can be controlled simultaneously using a console and a GUI.For example, I'd like to be able to...1) open a bunch of tabs2) go to the console and tell it something like 'copy the url open in each tab and write it, along with the html, to a file, for each open tab'in other words, I want to be able to browse with tabs at my leisure, and then write scripts that iterate over each tab. Does anything like that exist? | is there anything like this web browser in the debian/ubuntu repositories? | software rec;browser;web | I would suggest that uzbl is just the right ninja magic for this. It is a scriptable, console-controllable single-purpose browser. Being based on webkit, its rendering and javascript support is first-class, but it follows the Unix philosophy of doing one thing and doing it well while allowing other programs to push data in and out.There is a wrapper for it that adds support for a tab-like interface as well.
_softwareengineering.328520 | I have an excel like table with a should value and an is value for each day of a month:descrip. | | 01 | 02 | 03 | 04 |_______________________________________column 1 | should | 60 | 0 | 60 | 0 | | is | 60 | 0 | 60 | 60 |_______________________________________column 2 | should | 0 | 15 | 0 | 15 |column 3 | is | 0 | 0 | 0 | 15 |I need the values of this table for two purposes: Extract some statistics (total; should / is ratio; etc)Based on the values see if the actions (entry in column) are doneJust onceOnce every weekdaily periodicallyShould I get the statistics with SQL queries or calculate them with JavaScript based on the JSON response? What are the (dis)advantagesSome additional information:I'm using SpringBoot with JPA and a PostgreSQL database.Tables:ChartColumnAction (should / is values)Here is a part of my JSON response:columns: [{ id: 12, should: [{ id: 13, date: 1438552800000, min: 60 }], is: [] } }] | Statistics with SQL queries or in JavaScript | patterns and practices | null |
_webmaster.107535 | I am building a web app and trying to add two languages to the website. So I will make the same documents ending in -gr. Some examples of the files look like this:English Language paths:www.example.com/index.htmlwww.example.com/Blog.htmlGreek Language path:www.example.com/index-gr.htmlwww.example.com/Blog-gr.htmlIs it possible to rename the Greek files like the following:www.example.com/blog-gr.html/to something like:www.example.com/blog/greg: Remove the -gr of all the Greek documents and add /gr at the end.Also only for index file, example.com/index-gr.html should be example.com/gr instead.So i am asking for the .htaccess code to replace those greek file urls ending in -gr.html to /gr | Rename translated documents into ending language initials | htaccess;url;apache;url rewriting;translation | .htaccess: RewriteEngine on RewriteBase / RewriteRule ^(.*)\/gr$ $1-gr.html [NC,L]HTML:Add this inside the head tags of every greek file (replacing index-gr.html with your current file): <base href=https://gragop.herokuapp.com/index-gr.html>Change the URLs that link to the greek files to: filename.html/gr eg: <a href=index.html/gr>Greek file </a> |
_unix.353934 | I'm trying to create a simple SSH tunnel using OpenSSH.I have a VPS server listening on port 4444 for SSH.From my local Ubuntu machine, I wish to create the SSH tunnel to http://edition.cnn.com/.I use the following command:ssh -L 5050:edition.cnn.com:80 x.x.x.x -p 4444where x.x.x.x is my VPS IP.I then press Enter, and wait a couple of seconds.Instead of the tunnel being created, I'm logged in via SSH to my VPS.What is wrong with my command syntax? Everywhere I look I find that I invoke it correctly. | SSH Tunneling not working properly | ssh tunneling;openssh | null |
_unix.192385 | Due to historical reason, I am bound to use kernel 3.0 for my existing custom operating system.Now, I'm trying to use this OS onto new board, which requires radeon kernel module for native X driver to start the GUI.Problem is that, required radeon do not support the intended chipset board. But the same kernel driver of 3.12 do support the said chipset.How can I compile 3.12's (for argument) radeon kernel module against 3.0 ?[ One way is to replace source directory /usr/src/3.12/kernel/drivers/gpu/drm/radeon at /usr/src/3.0/kernel/drivers/gpu/drm/radeon. Though, I haven't tried this, will try it. ] | Kernel Module Upgrade | linux;radeon | null |
_cogsci.8650 | It is true that blood flows to wherever the brain is most activated and does fMRI measure the blood flow inside of the brain through oxygen content? | What does fMRI measure exactly | measurement;neuroimaging;fmri | As a slight modification of your statement: blood flow increases wherever activity in the brain increases. The type of fMRI that uses this principle is blood-oxygenation-level-dependent fMRI or BOLD fMRI. MRI in general detects signals by picking up proton signals from water molecules. This proton signal is basically caused by magnetizing the protons causing their spin to change. A subsequent powerful radiowave disrupts this spin and the following relaxation phase of the protons to the original state can be detected by MRI. Water, and hence protons are everywhere in the body, including the brain and the blood. Deoxygenated hemoglobin (hemoglobin without oxygen) in blood changes the proton signal in its immediate surroundings due to the magnetic properties of deoxyhemoglobin. This is caused by the fact that deoxygenated hemoglobin is paramagnetic and decreases the signal that protons release. In fact, it has been regarded as noise in structural MRI scans. Oxygenated hemoglobin does not have this property. Radiopaedia has a nice explanation as to exactly how the BOLD signal is used in BOLD fMRI, and I quote:When a specific region of the cortex increases its activity in response to a task, the extraction fraction of oxygen from the local capillaries leads to an initial drop in oxygenated haemoglobin [...]. Following a lag of 2-6 seconds, cerebral blood flow (CBF) increases, delivering a surplus of oxygenated haemoglobin, washing away deoxyhemoglobin. It is this large rebound in local tissue oxygenation which is imaged. So to sum up: brain activity increases the BOLD signal by picking up oxygen-changes after an increased blood flow to that specific part of the brain. So your statement that fMRI measures blood flow is technically incorrect. 
Doppler techniques can be used to measure the actual flow of blood. |
_cs.71674 | Most of the operations in a computer use floating point arithmetic.Put simply: why is a Floating Point Unit alone not sufficient? Can we do away with the ALU?Is the resource cost of FP operations alone the reason for this, outweighing the advantages provided by FP operations? | Why a separate ALU is needed, since any integer can be represented as floating point numbers? | floating point | null
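No accepted answer was recorded; one concrete observation worth adding (my own illustration, not from the original thread): the premise that any integer can be represented as a floating point number fails once integers exceed the significand width, which is one reason FP arithmetic cannot simply replace integer ALU arithmetic.

```python
# A double has a 53-bit significand, so not every integer survives a round
# trip through floating point; integer (ALU) arithmetic stays exact where
# FP arithmetic silently rounds.
n = 2**53
assert float(n) == n            # 2**53 is exactly representable
assert float(n + 1) == n        # 2**53 + 1 rounds back down to 2**53
print(int(float(n + 1)))        # → 9007199254740992
```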
_hardwarecs.1918 | Ok so I am going to be building a smart home using a small Raspberry Pi possibly, but I needed some recommendations! I hope this is the right place to post. But basically here is what I was thinking:Possibly use a relay switch to control the lights in my room, but use a transmitter and receiver to make it go on and off using the Raspberry Pi. But I am not sure exactly HOW to do it. What would the layout be, or the schematics?Thank you everyone for helping! | IoT Smart home recommendation - controlling lights | raspberry pi;smart device | null
_codereview.115591 | I've been told that using God objects at all is a Bad ThingIn object oriented languages, God objects know all, they control too much. I'm trying to build a game (or for the scope of this question a generic app with a GUI) and I'm using a Main object that holds all the other objects needed to make it all work.At the moment (and probably forevermore), my main.py module contains only functions that initialise the other modules in order to setup my app.The other modules setup are managers of different things and most inherit from a class ManagerBase which among other things can retrieve other managers and from that, the contents of said managers.#Import modulesimport pygameimport sys, osimport assets.config.config_managerimport assets.events.event_managerimport assets.events.ai_event_managerimport assets.font.font_managerimport assets.ui.subscription_managerimport assets.ui.screenimport assets.ui.keyboard_injectorimport assets.entity.entity_managerimport assets.databinCAPTION = Generic Applicationclass Main(object): def __init__(self): self.args = sys.argv self.debug = debug in self.args def init_databin(self): self.databin = assets.databin.Databin() def init_config_manager(self): self.config_manager = assets.config.config_manager.ConfigManager() self.fps_limit = self.config_manager[video_config,fps_limit] self.show_fps = self.config_manager[video_config, show_fps] if not self.show_fps: self.blit_fps = lambda: None def init_screen(self): pygame.init() self.screen = assets.ui.screen.Screen(pygame.display.set_mode(*self.config_manager.get_screen_properties()), pygame) pygame.display.set_caption(CAPTION) if self.config_manager[video_config][screen_properties][fullscreen]: pygame.mouse.set_visible(False) self.screen.blit(self.screen.old_im_load(os.path.join(assets, loading.png)), (0,0)) self.update_screen() self.clock = pygame.time.Clock() def init_event_manager(self): self.event_manager = assets.events.event_manager.EventManager() 
self.event_manager.add_events() def init_keyboard_injector(self): self.keyboard_injector = assets.ui.keyboard_injector.KeyboardInjector() def init_ai_event_manager(self): self.ai_event_manager = assets.events.ai_event_manager.AiEventManager() self.ai_event_manager.add_events() def init_entity_manager(self): self.entity_manager = assets.entity.entity_manager.EntityManager() def init_font_manager(self): self.fonts = assets.font.font_manager.FontManager() self.fonts.register_font(fps, verdana, 12) def init_subscription_manager(self): self.subscription_manager = assets.ui.subscription_manager.SubscriptionManager() self.subscription_manager.load_subscription() def run(self): while 1: self.keyboard_injector.run() self.event_manager.parse_events(pygame.event.get()) self.subscription_manager.run_subscription() self.clock.tick(self.fps_limit) self.blit_fps() self.update_screen() def blit_fps(self): try: count = int(self.clock.get_fps()) except OverflowError: count = Infinate? fps = self.fonts[fps].render(FPS: %s %(count), True, (255,255,255)) self.screen.blit(fps, (10, 30))#, no_scale = True) def update_screen(self): pygame.display.update(self.screen.blit_rects_old) pygame.display.update(self.screen.blit_rects) self.screen.blit_rects_old = self.screen.blit_rects self.screen.blit_rects = []def main(): global main_class main_class = Main() main_class.init_databin() main_class.init_config_manager() main_class.init_screen() main_class.init_event_manager() main_class.init_keyboard_injector() main_class.init_ai_event_manager() main_class.init_entity_manager() main_class.init_font_manager() main_class.init_subscription_manager() main_class.run()if __name__ == __main__: os.environ['SDL_VIDEO_CENTERED'] = '1' if debug in sys.argv: try: import cProfile as profile except ImportError: import profile profile.run('main()') else: main()ManagerBase uses something dreaded... 
import __main__ in order to get access to the main class.import __main__class ManagerBase(object): def get_main_class(self): return __main__.main_class def get_databin(self): return self.get_main_class().databin def get_pygame(self): return __main__.pygame def get_main_dict(self): return __main__.__dict__ def get_config_manager(self): return self.get_main_class().config_manager | How Godly does an object need to be before it becomes unholy? | python;object oriented;design patterns | According to Wiki a god object is an object that knows too much or does too much. The general problem I have with this is: What means too much? Your question is about searching an absolute statement as we already know we can only find absolute statements within very restricted areas (and that's what they all have in common) that are not related to reality. As soon as we deal with real world applications we have to deal with uncertainty. That is because we derive OO models from reality as we perceive it. And this can be error prone.To escape this dilemma in computer science some principles are discovered that lead to a step by step improvement of source code. They are called S.O.L.I.D. principles. If you violate a principle your source code becomes worse. So the target is to violate the principles as less as possible. So easy as I say it: The violation of these principles is an identification problem that sometimes becomes very very difficult.In the case of the so called God class the S of these principles is addressed: The single responsibility principle (SRP). It says that one code fragment (module, class, method) should only have one responsibility. BTW this is applicable to other programming paradigms as well. A God class seems to have at least more than one responsibility. That can be said for sure. Anything else is popular speech if someone says God class.So working with SRP your code will improved step by step by identifying violations of this principle and eliminate them. 
That is by consolidating redundant responsibility and vice versa by separating different responsibilities.But the whole thing only works if you identify the violation. And that's the core. To identify a violation you look for indicators:real redundant code fragmentsa bug that was not fixed everywhere because of code redundancya bug that was fixed but broke the application at another placelong classes, long methodsa lot of object local variablesdeep nestingobjects that do a lot of different things...I want to underline that these are only indicators. A long method, for example, can do only one thing: initialize a hashmap with key value pairs. Although you would have thought about solving this another way, it's not violating SRP.So if you have an indicator you can make a thought experiment to see if there is a violation. If you think you have redundant responsibilities then you should think about a new business requirement that changes one code fragment and ask yourself whether the other should change as well. This you should discuss with the business people. BTW consolidating responsibilities is much harder than separating them because this may break the application, as one redundant code fragment will be omitted.So back to your question: A class does not become holy or a god. It will become less godly by eliminating violations of SRP. Theoretically the class becomes godless when a 1:1 relationship of responsibility and code fragment is reached. But to ask when there are too many responsibilities does not make sense in the context of code quality. This will only be a matter of the cost to maintain the code. If it costs too much (for the business people) and the costs can be assigned to the SRP violation, then certainly there are too many responsibilities.
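To make the SRP discussion concrete in the question's language, here is a tiny illustrative sketch of my own (all names invented): one class that both formats and persists a report has two reasons to change; splitting it gives each code fragment exactly one responsibility.

```python
import io

class ReportFormatter:
    """Only knows how to turn data into text (one reason to change)."""
    def format(self, data):
        return "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))

class ReportWriter:
    """Only knows how to persist text (a separate reason to change)."""
    def __init__(self, sink):
        self.sink = sink  # any object with a write() method

    def write(self, text):
        self.sink.write(text)

# usage: the two responsibilities now vary independently
sink = io.StringIO()
ReportWriter(sink).write(ReportFormatter().format({"b": 2, "a": 1}))
print(sink.getvalue())  # → a: 1
                        #    b: 2
```

A new output target (file, socket, GUI) now touches only ReportWriter, and a new layout touches only ReportFormatter, which is the 1:1 relationship of responsibility and code fragment described above.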
_webmaster.46699 | I moved my wiki from http://jklatex.square7.de/wiki/doku.php/start to http://logicpuzzle.square7.de/start and now I want to redirect the URL with mod_rewrite. My .htaccess is as follows:# BEGIN WordPress<IfModule mod_rewrite.c>RewriteEngine OnRewriteBase /RewriteRule ^/wiki/doku.php/(.*)$ http://logicpuzzle.square7.de/$1 [R,NC,L]RewriteRule ^index\.php$ - [L]RewriteCond %{REQUEST_FILENAME} !-fRewriteCond %{REQUEST_FILENAME} !-dRewriteRule . /index.php [L]</IfModule># END WordPressI don't understand why it does not work :-(Any hints? | .htaccess redirect with mod_rewrite | htaccess;mod rewrite | The leading slash is evil! ;-) The slash is part of RewriteBase.Changing the RewriteRule toRewriteRule ^wiki/doku.php/(.*)$ http://logicpuzzle.square7.de/$1 [R=301,NC,L]works as desired.
_webapps.92926 | A user made a comment on a popular post and checked the 'notify on future comments' button. She now wants to turn those off or be removed from that. How can I turn off Wordpress new comment notifications for 1 specific user for 1 specific comment thread? | Turn off Wordpress new comment notifications for 1 specific user (not admin) | wordpress;comments | null |
_scicomp.19945 | I am trying to do a simple parallel sparse matrix vector multiplications using PETSC. My sparse matrix is a simple tridiagonal laplacian matrix, which is distributed over multiple processors using PETSC.My main question is that if I do the same operation by simply iterating over the vector and updating each valueA[i] = -1*A[i-1] + 2*A[i] + -1*A[i+1], it takes much lesser time than by using the PETSC SpMV. Why is it so? Or am I doing something wrong? | Sparse matrix vector product using PETSC | sparse;petsc;matrix;vector | I assume that you are comparing multiplication with an assembled PETSc matrix with your hand-coded matrix-free method. The latter may indeed be faster, but this could be because no entries of A need be loaded from memory. A more meaningful comparison might be to a matrix-free operator in PETSc (see Section 3.3 in the manual).Once you have an apples-to-apples comparison, you can try to determine if any remaining speed differences are due to PETSc overhead or not by using an optimized build (configure --with-debugging=0) and running your code with the -log_summary option to see timings of various operations. |
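For reference, the hand-coded update the asker describes is a matrix-free application of the tridiagonal Laplacian; a plain-Python sketch (my own, no PETSc involved) of that stencil, which touches no stored matrix entries at all:

```python
def laplacian_apply(x):
    """y = A @ x for the 1-D Laplacian stencil [-1, 2, -1], done matrix-free:
    no matrix entries are stored or loaded, which is part of why a hand-coded
    loop can outrun an assembled sparse MatMult for so simple an operator."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0      # zero (Dirichlet-style) boundary
        right = x[i + 1] if i < n - 1 else 0.0
        y[i] = -left + 2.0 * x[i] - right
    return y

print(laplacian_apply([1.0, 2.0, 3.0, 4.0, 5.0]))  # → [0.0, 0.0, 0.0, 0.0, 6.0]
```

Writing into a separate output vector matters here: updating A[i] in place, as the question's formula suggests, would reuse already-updated neighbour values.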
_webapps.8031 | Please note that my computer's time is set correctly. Dates and times are correct in all other applications including Google's services such as Google Docs and Google Calendar.However, messages in Gmail are always showing with a timestamp eight hours into the future. Occasionally, after repeated set/reset cycles in my account settings, I get the correct timestamp but when I log out and log back in timestamps are again eight hours into the future.I have already inspected the email headers, and the timestamp information (including time zones) is correct at each hop.There are a number of threads on Google's support forums regarding this and the one that is being monitored by the Google staff seems to be Wrong time posted on all my email - how to fix.Is anyone aware of a fix or a work-around or at least an explanation of why the timestamps are messed up? | Why is Gmail showing the wrong date/time for my messages? | gmail;time zone | We have a standing FAQ in our organisation that if you see any sort of timezone-related issues in Gmail your should enable the Sender Time Zone lab, reload, then disable the lab again (unless you actually want it). This seems to reset Gmail's timezone handling. We haven't yet got to the root cause yet (despite much back and forth with Google support), but we find that this resolves timezone issues most of the time. Give it a try. |
_webmaster.95790 | Google Webmaster Tools seems to be giving me an erroneous report. The Crawl Errors (Smart Phone tab) still shows a link to http://mypubguide.com/good-pubs/Blacko from http://mypubguide.com/good-pubs/blacko-in-pendle-district. I cannot find any such link when I view source or use WebKit to search for it. This has been going on for a while, but it comes back after being marked as fixed, and it's on the report as being detected today. Caching policy in the web.config is set to 30 minutes: | Google Webmaster Tools crawl errors reporting a link that does not exist | google search console;links;crawl errors;broken links | null |
_cs.27578 | The related and interesting fields of Information Theory, Turing Computability, Kolmogorov Complexity and Algorithmic Information Theory give definitions of algorithmically random numbers. An algorithmically random number is a number (in some encoding, usually binary) for which the shortest program (e.g. using a Turing Machine) to generate the number has the same length (number of bits) as the number itself. In this sense numbers like $\sqrt{e}$ or $\pi$ are not random, since well-known (mathematical) relations exist which in effect function as algorithms for these numbers. However, especially for $e$ and $\pi$ (which are transcendental numbers) it is known that they are defined by infinite power series, for example $e = \sum_{n=0}^\infty \frac{1}{n!}$. So even though a number which is the binary representation of $\sqrt{e}$ is not alg. random, a program would (still?) need the description of the (infinite) bits of the (transcendental) number $e$ itself. Can transcendental numbers (really) be compressed? Where is this argument wrong? UPDATE: Also note the fact that for almost all transcendental numbers, and irrational numbers in general, the frequency of digits is uniform (much like a random sequence). So its Shannon entropy should be equal to that of a random string; however the Kolmogorov Complexity, which is related to Shannon Entropy, would be different (as not alg. random). Thank you | Can a transcendental number like $e$ or $\pi$ be compressed as not algorithmically random? | turing machines;information theory;randomness;kolmogorov complexity;descriptive complexity | The problem is in your poor definition of algorithmically random number as applied to irrational numbers. In particular, "has the same length (number of bits) as the number itself" has no meaning if the number is of unbounded length. Your Wikipedia link gives better definitions, which don't have this problem. For example (paraphrasing formatting): "Kolmogorov complexity [...] can be thought of as a lower bound on the algorithmic compressibility of a finite sequence (of characters or binary digits). It assigns to each such sequence $w$ a natural number $K(w)$ that, intuitively, measures the minimum length of a computer program (written in some fixed programming language) that takes no input and will output $w$ when run. Given a natural number $c$ and a sequence $w$, we say that $w$ is $c$-incompressible if $K(w) \geq |w| - c$. An infinite sequence $S$ is Martin-Löf random if and only if there is a constant $c$ such that all of $S$'s finite prefixes are $c$-incompressible." This is a test passed by $\sqrt{e}$ by setting $c$ a bit larger than the program to generate $\sqrt{e}$ and including in it the length to generate. |
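The $c$-incompressibility definition quoted in this answer comes with a standard counting argument worth making concrete: there are fewer than $2^{n-c}$ programs shorter than $n-c$ bits, so at most a $2^{-c}$ fraction of $n$-bit strings can be compressed by $c$ or more bits. A small Python sketch of that bound (an illustration added here, not part of the original answer):

```python
# Counting argument: descriptions shorter than n - c bits number at most
# 2**(n - c) - 1, so the fraction of n-bit strings compressible by at
# least c bits is strictly below 2**(-c).
def max_compressible_fraction(n, c):
    total = 2 ** n                     # number of n-bit strings
    short_programs = 2 ** (n - c) - 1  # candidate shorter descriptions
    return short_programs / total

for c in (1, 4, 8):
    frac = max_compressible_fraction(32, c)
    assert frac < 2 ** -c
    print("c=%d: at most %.6f of 32-bit strings are compressible" % (c, frac))
```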
_softwareengineering.65989 | In the below sequence diagram, when the user has entered the username and password, I have to do the authentication. Now you can see two details, "valid details" and "invalid detail", in the diagram, which I will return when the user password matches and mismatches respectively. Now my big question is which one I have to draw first, "valid details" or "invalid detail"; how do I know which one will come first? | How to do this in Standard UML? | uml;diagrams;sequence | Neither. I would likely put them on different diagrams. The sequence diagram is supposed to represent a single flow of execution through your design, and these are two different flows. If you are worried now that you have to duplicate the diagram, and it would be more efficient to represent both, you would be right - but remember that your UML will not be compiled - it does not need to represent everything your program does. Ask yourself why you are drawing this. It is a tool to help describe (or explore) your design - you only need to diagram the things that you need to explore, and in many cases it is just the complex flows. |
_webapps.67172 | I was wondering if there was a way to set hot keys to easily switch between styles in a Google doc. Say, for example, I want to switch between a 1 inch and 2 inch right indent, but I don't want to have to use the ruler every time to do that, and instead just use a key combo to switch. Is there a way I can do that? If it requires writing a script, I can do that if I'm pointed in the right direction. | Google doc switch styles with hot keys | google documents;keyboard shortcuts | null |
_unix.341161 | I've created a software RAID 6 from five 4TB drives with mdadm --create /dev/md0 --chunk=256 --level=6 --raid-devices=5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1. Before that, I created partitions on each drive with the max size. 'fdisk -l' shows the output below. However, the overall size is only 6TB. With RAID 6 having 2 parity drives, shouldn't there be around 12TB?

Disk /dev/sda: 525.1 GB, 525112713216 bytes, 1025610768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt

#         Start          End    Size  Type            Name
 1     46139392     83888127    18G   Microsoft basic
 2      8390656     46139391    18G   Microsoft basic
 3     87033856   1025610734  447.6G  Linux LVM
 4     83888128     84936703   512M   BIOS boot parti
 5         2048      8390655     4G   Microsoft basic
 6     84936704     87033855     1G   Linux swap

Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sde: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdf: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/mapper/XSLocalEXT--b30a297a--410a--d586--640b--e10ac011aaf3-b30a297a--410a--d586--640b--e10ac011aaf3: 480.5 GB, 480537214976 bytes, 938549248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes | Software RAID too small | linux;partition;software raid | Your partitions are much smaller than the full disks:

/dev/sdc1               1  4294967295  2147483647+  ee  GPT

occupies only 4294967295 sectors (out of 7814037168), i.e. just under 2TiB. If you intend to use the full disks in a RAID array, I would suggest just using the whole disks without bothering with partitions.
First, zero out anything looking like an md superblock:

mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
mdadm --zero-superblock /dev/sdd
mdadm --zero-superblock /dev/sde
mdadm --zero-superblock /dev/sdf

Then create the array:

mdadm --create /dev/md0 --chunk=256 --level=6 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

If you want to allow replacing failing drives with drives with a slightly smaller number of sectors, you may want to leave some space free; you can do this with the --size= option, which takes a size (the amount of disk space to use) in kibibytes, e.g. in your case somewhere around 3,907,018,300KiB (your drives have 3,907,018,584KiB total space, of which 128KiB needs to be kept for the RAID superblock). |
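The arithmetic behind this answer is easy to check. The sketch below (an illustration added here; the byte counts are taken from the fdisk output in the question) shows why the array came out at roughly 6TB instead of the expected 12TB: each member partition was capped at the 32-bit MBR limit of 2**32 - 1 sectors, just under 2 TiB.

```python
SECTOR = 512  # logical sector size from the fdisk output

# MBR partition tables store the sector count in a 32-bit field, so a
# partition tops out at 2**32 - 1 sectors, i.e. just under 2 TiB.
mbr_cap_bytes = (2 ** 32 - 1) * SECTOR
assert mbr_cap_bytes < 2 * 1024 ** 4

# RAID 6 usable space is (n - 2) * member size.
n_disks = 5
full_disk_bytes = 4000787030016          # per-drive size from fdisk
usable = (n_disks - 2) * full_disk_bytes
print(round(usable / 1e12, 2))           # ~12.0 TB, as the asker expected

# With each member truncated to the ~2 TiB partition, only ~6.6 TB remain:
truncated = (n_disks - 2) * mbr_cap_bytes
print(round(truncated / 1e12, 2))        # ~6.6 TB, matching the observed size
```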
_unix.299421 | I have set up a few user systemd.timer(s). How do I make them start automatically? (Either on system start or once the user has logged into an X session.) After I restart the system (even though systemctl --user enable was run before the restart, i.e. it does not help) I have none running:

~$ systemctl --user enable {rsync_backup1,rsync_another_backup}.timer
~$ systemctl --user list-timers --all
0 timers listed.

Here are the commands I need to use to start them afterwards:

~$ systemctl --user start {rsync_backup1,rsync_another_backup}.timer
~$ systemctl --user list-timers --all
NEXT                          LEFT          LAST                          PASSED  UNIT
Sun 2016-07-31 13:26:45 CEST  1h 16min ago  Sun 2016-07-31 14:43:32 CEST  2s ago  rsync_backup1
Sun 2016-07-31 13:26:45 CEST  1h 16min ago  Sun 2016-07-31 14:43:32 CEST  2s ago  rsync_another_backup
2 timers listed.
~$

Here is an example of how the timers are currently configured. $HOME/.config/systemd/user/rsync_backup1.service:

[Unit]
Description=rsync --delete /home/USER data to NAS USER@NAS

[Service]
Type=simple
ExecStart=/home/USER/scripts/rsync_backup1.sh

$HOME/.config/systemd/user/rsync_backup1.timer:

[Unit]
Description=Runs every 12 minutes rsync --delete /home/USER data to NAS USER@NAS

[Timer]
OnBootSec=12min
AccuracySec=10min
OnCalendar=*:0/12
Unit=rsync_backup1.service

[Install]
WantedBy=multi-user.target

P.S. Yes, I know I can drop the commands which start my timers into .bashrc or .xinit or my window manager startup scripts. What I am asking is: is there a clean systemd way to define this to run after every restart (/login)? | How to start user systemd.timer(s) automatically? | configuration;systemd timer | null |
_unix.154919 | The other day I tried installing opencv-git from the AUR with makepkg on Arch Linux. Of course it pulls from the git repository, as the name indicates. This pulls 1GB. I am reading about making a shallow clone with git. When I look at the PKGBUILD file, using grep git PKGBUILD, I see:

pkgname=opencv-git
makedepends=('git' 'cmake' 'python2-numpy' 'mesa' 'eigen2')
provides=(${pkgname%-git})
conflicts=(${pkgname%-git})
source=(${pkgname%-git}::git+http://github.com/Itseez/opencv.git
cd ${srcdir}/${pkgname%-git}
git describe --long | sed -r 's/([^-]*-g)/r\1/;s/-/./g'
cd ${srcdir}/${pkgname%-git}
cd ${srcdir}/${pkgname%-git}
cd ${srcdir}/${pkgname%-git}
install -Dm644 LICENSE ${pkgdir}/usr/share/licenses/${pkgname%-git}/LICENSE

Is there a way to modify the recipe or the makepkg command to pull only a shallow clone (the latest version of the source is what I want) and not the full repository, to save space and bandwidth? Reading man 5 PKGBUILD doesn't provide the insight I'm looking for. Also looked quickly through the makepkg and pacman manpages - can't seem to find how to do that. | How to modify a PKGBUILD which uses git sources to pull only a shallow clone? | arch linux;git | This can be done by using a custom dlagent. I do not really understand Arch packaging or how the dlagents work, so I only have a hack answer, but it gets the job done. The idea is to modify the PKGBUILD to use a custom download agent. I modified the source

${pkgname%-git}::git+http://github.com/Itseez/opencv.git

into

${pkgname%-git}::mygit://opencv.git

and then defined a new dlagent called mygit which does a shallow clone. I did this by adding to the DLAGENTS array in /etc/makepkg.conf the following dlagent:

'mygit::/usr/bin/git clone --depth 1 http://github.com/Itseez/opencv.git'

My guess is you could probably define this download agent somewhere else, but I do not know how. Also notice that the repository that is being cloned is hard-coded into the command. Again, this can probably be avoided.
Finally, the download location is not what the PKGBUILD expects. To work around this, I simply move the repository after downloading it. I do this by adding

mv ${srcdir}/../mygit:/opencv.git ${srcdir}/../${pkgname%-git}

at the beginning of the pkgver function. I think the cleaner solution would be to figure out what the git+http dlagent is doing and redefine that temporarily. This should avoid all the hack aspects of the solution. |
_webmaster.90688 | I am currently working on a twitter clone. How do I make it like Tumblr where a subdomain is created for each user that creates a blog? I would plan to show the user feed there, with posts from the users he follows. | How do I develop a software product with multiple dynamic sub-domains? | subdomain;web applications | null |
_cs.23056 | Is there a way to take the intersection of two NPDAs? I can't seem to find anything that can make that happen, but it seems like the kind of thing that should be relatively trivial. | Intersection of two NPDAs | formal languages;automata;closure properties;pushdown automata | The intersection of two context-free languages can be non-context-free. The classical example is $$ \{ a^n b^n c^m : n,m \geq 0 \} \cap \{ a^m b^n c^n : n,m \geq 0 \} = \{ a^n b^n c^n : n \geq 0 \}. $$ So in general you cannot simulate the intersection of two NPDAs with an NPDA. |
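The classical example in this answer can be sanity-checked by brute force over short strings. The Python sketch below (an illustration added here, not part of the original answer) encodes membership in each language as a shape check plus length comparisons, and verifies that the intersection is exactly { a^n b^n c^n } for all strings up to length 6:

```python
import re
from itertools import product

def split_abc(w):
    """Return (i, j, k) if w = a^i b^j c^k, else None."""
    m = re.fullmatch(r"(a*)(b*)(c*)", w)
    return tuple(len(g) for g in m.groups()) if m else None

def in_L1(w):  # { a^n b^n c^m }
    p = split_abc(w)
    return p is not None and p[0] == p[1]

def in_L2(w):  # { a^m b^n c^n }
    p = split_abc(w)
    return p is not None and p[1] == p[2]

def in_intersection_target(w):  # { a^n b^n c^n }
    p = split_abc(w)
    return p is not None and p[0] == p[1] == p[2]

# Every string over {a, b, c} of length at most 6.
words = ["".join(t) for k in range(7) for t in product("abc", repeat=k)]
assert all((in_L1(w) and in_L2(w)) == in_intersection_target(w) for w in words)
print("checked", len(words), "strings")  # checked 1093 strings
```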
_softwareengineering.327857 | I would like to develop and release an SDK in an open fashion, via GitHub. That said, I would like to have a team of developers work on this and make commits, create issues and comments, etc. in a private fashion. It would be nice if I could push only the desired code to the public. The public section would also allow the public to do similar things to the private team. My current thinking is that I should have two repositories, one public and one private. Once happy with the work in private I can merge the branch into the public one. I guess it would be good to maintain a release branch in the private repo and only merge that onto the public repo. Is there a way that I can prevent the commit history going public? I'm sure there is a Git flag for that. Is this a suitable workflow? Thanks. | GitHub workflow for public/private codebase | git;github | null |
_unix.334813 | I'm trying to install puttygen on an Amazon Linux server. puttygen is provided by the putty package available in EPEL, but installation fails with it unable to find several required C libraries:

Error: Package: putty-0.63-7.el6.x86_64 (epel)
           Requires: libgtk-x11-2.0.so.0()(64bit)
Error: Package: putty-0.63-7.el6.x86_64 (epel)
           Requires: libatk-1.0.so.0()(64bit)
Error: Package: putty-0.63-7.el6.x86_64 (epel)
           Requires: libgdk_pixbuf-2.0.so.0()(64bit)
Error: Package: putty-0.63-7.el6.x86_64 (epel)
           Requires: libgdk-x11-2.0.so.0()(64bit)

Currently I'm installing putty on a CentOS 6 box, then copying the binary /usr/bin/puttygen onto the Amazon Linux box. This works for my use case, but I'm not keen on circumventing the package manager in this way. Is there a 'proper' way of doing things? | Proper way of installing puttygen on Amazon Linux | putty;puttygen;amazon linux | null |
_webmaster.107295 | On my WordPress blog in default mode I have a <canvas> where I draw a chart with ChartJS. I've just downloaded the AMP plugin and run the Google AMP Test tool, and it says: "Fix the following issue - Prohibited or invalid use of HTML Tag - The tag 'canvas' is disallowed." How do you go about fixing this? Is there any way to do JavaScript with AMP? | <canvas> for chartjs versus AMP | javascript;amp;canvas | You can't run your own JavaScript when using AMP, that defeats the purpose of AMP. Instead, you can iframe your external content using the amp-iframe element. Add the amp-iframe JS to the head:

<script async custom-element="amp-iframe" src="https://cdn.ampproject.org/v0/amp-iframe-0.1.js"></script>

Add the amp-iframe element where you want your iframe to go:

<amp-iframe width="200" height="100" sandbox="allow-scripts allow-same-origin" layout="responsive" src="https://example.com/"></amp-iframe>

A full guide to using amp-iframe is available at: https://www.ampproject.org/docs/guides/iframes |
_ai.156 | From Wikipedia:A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another.Mirror neurons are related to imitation learning, a very useful feature that is missing in current real-world A.I. implementations. Instead of learning from input-output examples (supervised learning) or from rewards (reinforcement learning), an agent with mirror neurons would be able to learn by simply observing other agents, translating their movements to its own coordinate system. What do we have on this subject regarding computational models? | Are there any computational models of mirror neurons? | neural networks;models | null |
_webapps.37908 | Is there a keyboard shortcut that moves Trello cards up or down in list order?I want to prioritize cards without dragging and dropping. | Shortcut key for moving cards up or down in Trello | trello;keyboard shortcuts;trello cards | null |
_softwareengineering.240729 | I'm using an offline application's JavaScript API and I'd like to know if I can use deferred objects to handle the callbacks. The API calls do not use HTTP; the calls are to and from the application's local database. The only way I've been able to display information is by using 'setTimeout' on subsequent calls, which I know is terrible! So I have a long list of callbacks and timeouts.

var jsObj = {};
var anotherObj = {};

//first async call
methodName(arg1, jsObj, callback);
function callback(result){
    jsObj["data"] = result;
}

//second async call
setTimeout(function(){
    methodName(arg1, anotherObj, callback2);
}, 200);
function callback2(result){
    jsObj["data"] = result;
}

//wait
setTimeout(function(){
    $("#content").html(JSON.stringify(jsObj));
}, 300);

Is there any way to refactor this? Any advice is appreciated. I've looked at the following post, but I'm not sure it would work. I'm aware I can create custom deferred objects (jQuery), but altering the API to use this or any other method of promise objects seems unrealistic. | How to handle asynchronous calls in an offline application | javascript;jquery;asynchronous programming | Deferred objects are indeed the way to go here. You don't have to change the API, just use a promise library like Q that can wrap it. It would look something like this:

promiseMethodName = Q.denodeify(methodName);

promiseMethodName(arg1, jsObj).then(function(result) {
    jsObj["data"] = result;
    return result;
}).then(function(result) {
    return promiseMethodName(arg1, anotherObj);
}).then(function(result) {
    jsObj["data"] = result;
    return result;
}).then(function(result) {
    $("#content").html(JSON.stringify(jsObj));
});

If your API doesn't use node-style callbacks, you might need to implement your own version of denodeify, which can be tricky, but the rest will be the same. |
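For readers not using Q, the mechanics of denodeify are small enough to sketch from scratch. Below is a deliberately tiny, synchronous Python model of the idea (an illustration added here; the names are invented and this is not the Q or jQuery API): wrap a function that takes a node-style callback(err, result) so that it returns a chainable promise-like object instead.

```python
# Minimal sketch of what "denodeify"/promisify does, transplanted to Python
# for illustration. Everything here is synchronous to keep the idea visible.

class Promise:
    """A tiny stand-in for a promise, just enough to show chaining."""
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def then(self, on_fulfilled):
        if self.error is not None:
            return self                      # skip handlers after a failure
        return Promise(value=on_fulfilled(self.value))

def denodeify(fn):
    """Turn fn(*args, callback(err, result)) into fn(*args) -> Promise."""
    def wrapped(*args):
        out = {}
        def callback(err, result):
            out["err"], out["result"] = err, result
        fn(*args, callback)                  # callback fires synchronously here
        return Promise(value=out["result"], error=out["err"])
    return wrapped

# A callback-style API shaped like the question's methodName(arg1, obj, callback):
def method_name(arg, obj, callback):
    callback(None, arg * 2)

promise_method = denodeify(method_name)
results = []
promise_method(21, {}).then(results.append)
print(results)  # [42]
```

A real promisify additionally has to cope with asynchronous callbacks and error propagation, which is the "tricky" part the answer alludes to.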
_unix.193673 | From http://pubs.opengroup.org/stage7tc1/basedefs/V1_chap12.html:

"Ellipses ( ... ) are used to denote that one or more occurrences of an operand are allowed. When an option or an operand followed by ellipses is enclosed in brackets, zero or more options or operands can be specified. The form:

utility_name [-g option_argument]...[operand...]

indicates that multiple occurrences of the option and its option-argument preceding the ellipses are valid, with semantics as indicated in the OPTIONS section of the utility. (See also Guideline 11 in Utility Syntax Guidelines.) The form:

utility_name -f option_argument [-f option_argument]... [operand...]

indicates that the -f option is required to appear at least once and may appear multiple times."

Are there differences between the order of bracket and ellipses? Do [something]... and [something...] both mean repeating zero or more times? Do something [something]... and something... both mean the same as repeating once or more times? | Usage of ellipsis in synopsis of command line arguments | man | null |
_softwareengineering.75460 | Let me preface this by saying that I understand that any advice I may receive is not to be taken as 100% correct; I am just looking for people's understanding of what this license is. I have been looking for a library that allows me to deal with archived compressed files (like zip files) and so far the best one I have found is DotNetZip. The only concern I have is that I am not familiar with the Microsoft Public License. While I plan to release a portion of my project (a web application platform) freely (MIT/BSD style), there are a few things. One is that I don't plan on actually releasing the source code, just the compiled project. Another thing is that I don't plan on releasing everything freely, only a subset of the application. Those are reasons why I stay away from (L)GPL code. Is this something allowed while using 3rd party libraries that are licensed under the Microsoft Public License? EDIT: The part about the Microsoft license that concerns me is Section 3(D), which says (full license here): "If you distribute any portion of the software in source code form, you may do so only under this license by including a complete copy of this license with your distribution. If you distribute any portion of the software in compiled or object code form, you may only do so under a license that complies with this license." I don't know what is meant by 'software'. My assumption would be that 'software' only refers to the library included under the license (being DotNetZip) and that it doesn't extend over to my code which includes the DotNetZip library. If that is the case then everything is fine, as I have no issues keeping the license for DotNetZip when releasing this project in compiled form while having my code under its own license. If 'software' also includes my code that includes the DotNetZip library then that would be an issue (as it would basically act like the GPL in the copyleft sense). | Microsoft Public License Question | licensing;ms pl | "I don't know what is meant by 'software'. My assumption would be that 'software' only refers to the library included under the license (being DotNetZip) and that it doesn't extend over to my code which includes the DotNetZip library." That's correct. The term "software" used in the license refers to the software that the license is about: DotNetZip. If you distribute any portion of DotNetZip, you must retain all copyright, patent, trademark, and attribution notices that are present in the software (3c). "If that is the case then everything is fine, as I have no issues keeping the license for DotNetZip when releasing this project in compiled form while having my code under its own license. If 'software' also includes my code that includes the DotNetZip library then that would be an issue (as it would basically act like the GPL in the copyleft sense)." The latter is not the case; MS-PL is not a reciprocal license. It only requires that, if the software you distribute contains MS-PL'ed parts, it must comply with the license requirements for the MS-PL'ed parts. As long as you don't give out any sources from DotNetZip, you do not even need to provide a copy of the license text, if I read the license correctly. |
_webmaster.11779 | I am thinking of creating a website where the content will be only useful for a max of one week. I am also assuming that I will be getting most of the traffic thru search engines basically Google. For important sites like stackoverflow the crawling happens multiple times a day. But for a new site with time-dependent content, is it possible to get Google to index the site more frequently. | How to make sure that Google indexes your site in less than a day? | search;google;seo;web crawlers;googlebot | null |
_unix.363865 | I am running motion, and so far everything works nicely; I can access the stream on port 8081 without problem. Is there a way to show the stream of the same webcam on multiple ports? If yes, how? I tried setting this up in motion.conf:

stream_port 8081
stream_port 8082

But only the last port is reachable. How can both be reachable? | Show Motion stream on multiple web pages | debian;webserver;camera;motion | null |
_unix.685 | So recently a Debian 5.0.5 installer offered me to have separate /usr, /home, /var and /tmp partitions (on one physical disk). What is the practical reason for this? I understand that /home can be advantageous to put on a separate partition, because user files can be encrypted separately, but why for anything else? | Why put things other than /home to a separate partition? | linux;partition | Minimizing loss: If /usr is on a separate partition, a damaged /usr does not mean that you cannot recover /etc. Security: / cannot always be ro (/root may need to be rw, etc.) but /usr can. It can be used to make as much as possible read-only. Using different filesystems: I may want to use a different filesystem for /tmp (not reliable but fast for many files) and /home (which has to be reliable). Similarly /var contains data while /usr does not, so /usr stability can be sacrificed, but not so much as /tmp. Duration of fsck: Smaller partitions mean that checking one is faster. Filling up of partitions was also mentioned, although another method of handling that is quotas. |
_codereview.30816 | Consider the following:

If myString = "abc" Or myString = "def" [...] Or myString = "xyz" Then

In C#, when myString == "abc" the rest of the conditions aren't evaluated. But because of how VB works, the entire expression needs to be evaluated, even if a match is found with the first comparison. Even worse:

If InStr(1, myString, "foo") > 0 Or InStr(1, myString, "bar") > 0 [...] Then

I hate to see these things in code I work with. So I came up with these functions a while ago, have been using them all over the place, and was wondering if anything could be done to make them even better. StringContains is used like If StringContains("this is a sample string", "string"):

Public Function StringContains(string_source, find_text, Optional ByVal caseSensitive As Boolean = False) As Boolean
    'String-typed local copies of passed parameter values:
    Dim find As String, src As String
    find = CStr(find_text)
    src = CStr(string_source)
    If caseSensitive Then
        StringContains = (InStr(1, src, find, vbBinaryCompare) <> 0)
    Else
        StringContains = (InStr(1, src, find, vbTextCompare) <> 0)
    End If
End Function

StringContainsAny works in a very similar way, but allows specifying any number of parameters, so it's used like If StringContainsAny("this is a sample string", False, "foo", "bar", "string"):

Public Function StringContainsAny(string_source, ByVal caseSensitive As Boolean, ParamArray find_strings()) As Boolean
    'String-typed local copies of passed parameter values:
    Dim find As String, src As String, i As Integer, found As Boolean
    src = CStr(string_source)
    For i = LBound(find_strings) To UBound(find_strings)
        find = CStr(find_strings(i))
        If caseSensitive Then
            found = (InStr(1, src, find, vbBinaryCompare) <> 0)
        Else
            found = (InStr(1, src, find, vbTextCompare) <> 0)
        End If
        If found Then Exit For
    Next
    StringContainsAny = found
End Function

StringMatchesAny will return True if any of the passed parameters exactly matches (case-sensitive) the string_source:

Public Function StringMatchesAny(string_source, ParamArray find_strings()) As Boolean
    'String-typed local copies of passed parameter values:
    Dim find As String, src As String, i As Integer, found As Boolean
    src = CStr(string_source)
    For i = LBound(find_strings) To UBound(find_strings)
        find = CStr(find_strings(i))
        found = (src = find)
        If found Then Exit For
    Next
    StringMatchesAny = found
End Function | A more readable InStr: StringContains | strings;vba;vb6 | My 2 cents: the first function seems fine; you could make it a little DRYer by just setting the compareMethod in your If statement and then having only 1 complicated line of logic. And if you are doing that, you might as well put the CStr's there.

Public Function StringContains(haystack, needle, Optional ByVal caseSensitive As Boolean = False) As Boolean
    Dim compareMethod As Integer
    If caseSensitive Then
        compareMethod = vbBinaryCompare
    Else
        compareMethod = vbTextCompare
    End If
    'Have you thought about Null?
    StringContains = (InStr(1, CStr(haystack), CStr(needle), compareMethod) <> 0)
End Function

Notice as well that I love the idea of searching for needles in haystacks; I stole that from PHP. For StringContainsAny, you are not using the code you wrote for StringContains, you repeat it. If you were to re-use the first function, you could do this:

Public Function StringContainsAny(haystack, ByVal caseSensitive As Boolean, ParamArray needles()) As Boolean
    Dim i As Integer
    For i = LBound(needles) To UBound(needles)
        If StringContains(CStr(haystack), CStr(needles(i)), caseSensitive) Then
            StringContainsAny = True
            Exit Function
        End If
    Next
    StringContainsAny = False 'Not really necessary, default is False..
End Function

For the last one I wanted you to consider passing values that you will convert as ByVal, since you are going to make a copy anyway of that variable.

Public Function StringMatchesAny(ByVal string_source, ParamArray potential_matches()) As Boolean
    string_source = CStr(string_source)
    ... 'That code taught me a new trick ;)
End Function |
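For comparison, the contains / contains-any idiom discussed in this review is close to a one-liner in other languages. A Python rendering (an illustrative addition, not part of the review; note that lower()-based case folding is only an approximation of VB's vbTextCompare, which is locale-aware):

```python
def string_contains(haystack, needle, case_sensitive=False):
    """Rough Python analogue of the reviewed StringContains."""
    if not case_sensitive:
        haystack, needle = haystack.lower(), needle.lower()
    return needle in haystack

def string_contains_any(haystack, case_sensitive, *needles):
    # Reuse the single-needle helper, as the review recommends.
    return any(string_contains(haystack, n, case_sensitive) for n in needles)

assert string_contains("this is a sample string", "STRING")
assert not string_contains("this is a sample string", "STRING", case_sensitive=True)
assert string_contains_any("this is a sample string", False, "foo", "bar", "string")
assert not string_contains_any("this is a sample string", False, "foo", "bar")
```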
_webapps.60650 | Someone hacked my friend's Facebook account and sent me messages which were indecent from his account. He also commented stuff which were highly indecent on my pictures. How do I find out who hacked his account? I asked my friend if it was him, but he said that his account had been hacked and thus couldn't do anything. I need to find the hacker of my friend's Facebook account and how do I do it? | How do I find the hacker of my friend's account? | facebook | null |
_codereview.33046 | So, I am pretty new to this game and am trying to understand JavaScript way better than I currently do. I have this block of code; if it is too long to read, then just skip to my question at the bottom...

function createCSSRule(selectorName, necessaryProperties) {
    // add class to control all divs
    var propertyNameBases, propertyPrefixes, propertyValues, propertySuffixes;
    var cssString = selectorName + "{\n";
    for (var i9 = 0; i9 < necessaryProperties.length; ++i9) {
        switch (selectorName) {
            case "." + options.allPictures:
                switch (necessaryProperties[i9]) {
                    case "position":
                        propertyNameBases = ["position"];
                        propertyPrefixes = [""],
                        propertyValues = ["absolute"],
                        propertySuffixes = [""];
                        break;
                    case "height":
                        propertyNameBases = ["height"];
                        propertyPrefixes = [""],
                        propertyValues = ["100%"],
                        propertySuffixes = [""];
                        break;
                    case "width":
                        propertyNameBases = ["width"];
                        propertyPrefixes = [""],
                        propertyValues = ["100%"],
                        propertySuffixes = [""];
                        break;
                    case "background":
                        propertyNameBases = ["background"];
                        propertyPrefixes = [""],
                        propertyValues = ["scroll", "#fff", "50% 50%", "no-repeat", "cover"],
                        propertySuffixes = ["-attachment", "-color", "-position", "-repeat", "-size"];
                        break;
                    case "transform":
                        propertyNameBases = ["transform"],
                        propertyPrefixes = ["", "-moz-", "-webkit-"],
                        propertyValues = [options.threeDOrigin, options.threeDStyle, "translate3d(" + options.translate3dpx + ")"],
                        propertySuffixes = ["-origin", "-style", ""];
                        break;
                    case "transition":
                        propertyNameBases = ["transition"],
                        propertyPrefixes = ["", "-webkit-"],
                        propertyValues = [options.transitionLength + "ms", options.transitionPath, "all"],
                        propertySuffixes = ["-duration", "-timing-function", "-property"]; // "-delay"
                        break;
                    default:
                        console.log("missing");
                        propertyNameBases = null;
                        propertyPrefixes = null;
                        propertyValues = null;
                        propertySuffixes = null;
                        break;
                }
                break;
            case "." + options.currentPic:
                switch (necessaryProperties[i9]) {
                    case "transform":
                        propertyNameBases = ["transform"],
                        propertyPrefixes = ["", "-moz-", "-webkit-"],
                        propertyValues = [options.threeDOrigin, "translate3d(0px, 0px, 0px)"],
                        propertySuffixes = ["-origin", ""];
                        break;
                    default:
                        console.log("missing");
                        propertyNameBases = null;
                        propertyPrefixes = null;
                        propertyValues = null;
                        propertySuffixes = null;
                        break;
                }
                break;
            case "." + options.currentPic + "." + options.picAfterCurrent:
                switch (necessaryProperties[i9]) {
                    case "transform":
                        propertyNameBases = ["transform"],
                        propertyPrefixes = ["", "-moz-", "-webkit-"],
                        propertyValues = [options.threeDOrigin, "translate3d(" + options.negativeTranslate3dpx + ")"],
                        propertySuffixes = ["-origin", ""];
                        break;
                    default:
                        console.log("missing");
                        propertyNameBases = null;
                        propertyPrefixes = null;
                        propertyValues = null;
                        propertySuffixes = null;
                        break;
                }
                break;
            default:
                console.log("wait a second");
                break;
        }
        // name the selector
        // iterate through properties
        for (i10 = 0; i10 < propertyNameBases.length; i10++) {
            // iterate through suffixes and value pairs
            for (var i11 = 0; i11 < propertyValues.length; i11++) {
                // iterate through prefixes
                if (propertyValues !== false) {
                    for (var i12 = 0; i12 < propertyPrefixes.length; i12++) {
                        cssString = cssString + "    " + propertyPrefixes[i12] + propertyNameBases[i10] + propertySuffixes[i11] + ": " + propertyValues[i11] + ";\n";
                    }
                }
            }
        }
    }
}

var forAllPictures = ["position", "height", "width", "background", "transition", "transform"];
var forCurrentPic = ["transform"];
var forpicAfterCurrent = ["transform"];
createCSSRule("." + options.allPictures, forAllPictures);
createCSSRule("." + options.currentPic, forCurrentPic);
createCSSRule("." + options.currentPic + "." + options.picAfterCurrent, forpicAfterCurrent);

Basically, what is going to happen is I am going to pass a string (which is a combination of variables) to the first parameter, and an array to the second. The first parameter acts as my class name, and the second parameter acts as my array of necessary CSS properties. I have included the output below so you can get a simple understanding of what I am going for.
Each array inside of the if statements is used by the i's in each for loop to output a string. Each switch statement sets a specific variable, and then 3 for-loops take over concatenating a very long string, which happens to be the CSS below.

.slideShowPics {
    position: absolute;
    height: 100%;
    width: 100%;
    background-attachment: scroll;
    background-color: #fff;
    background-position: 50% 50%;
    background-repeat: no-repeat;
    background-size: cover;
    transition-duration: 5000ms;
    -webkit-transition-duration: 5000ms;
    transition-timing-function: ease-in;
    -webkit-transition-timing-function: ease-in;
    transition-property: all;
    -webkit-transition-property: all;
    transform-origin: 0% 0%;
    -moz-transform-origin: 0% 0%;
    -webkit-transform-origin: 0% 0%;
    transform-style: flat;
    -moz-transform-style: flat;
    -webkit-transform-style: flat;
    transform: translate3d(-640px, 0px, 0px);
    -moz-transform: translate3d(-640px, 0px, 0px);
    -webkit-transform: translate3d(-640px, 0px, 0px);
}

.currentSlideShowPic {
    transform-origin: 0% 0%;
    -moz-transform-origin: 0% 0%;
    -webkit-transform-origin: 0% 0%;
    transform: translate3d(0px, 0px, 0px);
    -moz-transform: translate3d(0px, 0px, 0px);
    -webkit-transform: translate3d(0px, 0px, 0px);
}

.currentSlideShowPic.movingOut {
    transform-origin: 0% 0%;
    -moz-transform-origin: 0% 0%;
    -webkit-transform-origin: 0% 0%;
    transform: translate3d(640px, 0px, 0px);
    -moz-transform: translate3d(640px, 0px, 0px);
    -webkit-transform: translate3d(640px, 0px, 0px);
}

I would love for someone to suggest an easier way to do this. I do not feel like I am using this language correctly. If there is anyone out there who has a better idea than what I am currently using, I would love to hear it. Like I said, I am still learning. I feel like I should be able to do this with an object; I just have no idea what I am doing when it comes to objects.
If anyone has any articles that are written in clean everyday vernacular, or at least some really good examples, I would appreciate that; otherwise, your own examples/explanations would be most appreciated. If, of course, I am able to do this with an object... | create a really long string with javaScript more efficiently than this | javascript;jquery;css | null
_unix.16240 | I come from Windows, and I've been getting into Linux a little bit lately. Trying to make that my default OS for now. I've wanted to try out a couple different flavors of Linux. I spent probably a week getting Ubuntu to fully work correctly with drivers and all that, and that is what I'm running right now. What I want to do is wipe out my Ubuntu installation and try some Fedora 15. I also come from the Android world where you boot into recovery, do a complete backup of everything, wipe it, flash something else and play around with it, and if you don't like it you restore your backup from before.

Is there anything similar? That way, in case I don't stick with Fedora, I can reload my Ubuntu and not have to spend another week setting it up. | Completely Backing Up Linux Installation | backup;restore | You could use Clonezilla. It is a Linux LiveCD distribution created to make backup copies of full disks or partitions.

Download it, burn it to a CD, and boot the computer. After that you need to choose a source and a destination: when you want to back up the whole drive you need another drive to write the backup to, while when you choose to back up only one partition, your backup can be stored on another partition of the same HDD.

If you are not using any weird filesystems (it's probably ext3 or ext4, so it's OK), Clonezilla can back up only those parts of the partition that are really used, so the image is only as large as the data on your partition.

Clonezilla has an easy-to-use console interface and every option is well explained. If you want to restore a backup, you just boot Clonezilla again, choose the option to restore, and point it to where the backup is on the HDD.
_scicomp.3233 | I want to compare two floating point numbers for equality relative to a known absolute tolerance. However, this is inside an algorithm I wrote quite some time ago, and I believe the logic of that algorithm would get corrupted if the equality relation is not transitive.

Some false negatives are no problem, i.e. if two equal numbers compare unequal, all that will happen is that the algorithm will use a bit more time and memory. However, I now got input data preprocessed by another algorithm (to smooth out corners), but that algorithm added noise to every single straight line, which leads to memory consumption issues (> 4GB) during my algorithm.

I see essentially two options for how I could fix the issue:

I could try to remove the noise from the results of the preprocessing algorithm.
I could try to find a way to do tolerance-based equality comparison in a transitive way.

The first approach looks easier to me. I would basically have a fixed set of doubles and would need to pick a set of representatives such that every double in the set is within epsilon of a representative. The only idea I have for the second approach is to snap the values to a grid for the comparison. However, I vaguely remember having implemented such a grid-snapping approach before, but it broke down as soon as the (C++) compiler started to inline the corresponding code. I fixed this by moving the snapping code to a different translation unit, but later rewrote the code to make the snapping obsolete.

Question

Is it possible to do tolerance-based equality comparisons (in C++) without violating transitivity?
What is a good way to implement a noise removal algorithm? My approach would probably be to keep a sorted list of representatives, and look up each new double value by bisection in that list, leading to a $O(n \log n)$ runtime and $O(n)$ (additional) memory consumption for the noise removal algorithm.
| transitive floating point comparison with (absolute) tolerance | c++;floating point | If you have all the numbers up front, you can efficiently compute the transitive closure of tolerance-based comparison with a union-find structure. First loop through all pairs of nearby points (e.g., using a bounding box hierarchy), and for each pair within your tolerance mark them as merged in the union-find. Later on, compare points for equality by checking if they're in the same union-find component. |
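To make the recipe concrete, here is a minimal one-dimensional sketch (my own code; the class and method names are invented). Since plain doubles are one-dimensional, sorting the values and merging adjacent ones within the tolerance yields the same transitive closure as checking all nearby pairs, and so stands in for the bounding-box hierarchy:

```java
import java.util.*;

public class ToleranceClasses {
    // Union-find over the indices of the input values.
    static int[] parent;

    static int find(int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]]; // path halving
            x = parent[x];
        }
        return x;
    }

    static void union(int a, int b) {
        parent[find(a)] = find(b);
    }

    // Transitively merge every pair of values within eps of each other;
    // returns, for each index, a representative of its equivalence class.
    static int[] components(double[] values, double eps) {
        int n = values.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(values[a], values[b]));
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        // In sorted order it suffices to compare neighbours: if two values
        // are within eps, so is every value between them, so the chain of
        // neighbour unions produces the full transitive closure.
        for (int i = 1; i < n; i++) {
            if (values[order[i]] - values[order[i - 1]] <= eps) {
                union(order[i], order[i - 1]);
            }
        }
        int[] comp = new int[n];
        for (int i = 0; i < n; i++) comp[i] = find(i);
        return comp;
    }

    public static void main(String[] args) {
        double[] xs = {1.0, 1.05, 1.1, 5.0};
        int[] comp = components(xs, 0.06);
        // 1.0 ~ 1.05 ~ 1.1 fall into one class (by transitivity,
        // even though 1.0 and 1.1 differ by more than eps); 5.0 is alone.
        System.out.println(comp[0] == comp[1]);
        System.out.println(comp[1] == comp[2]);
        System.out.println(comp[0] != comp[3]);
    }
}
```

Note how this makes the price of transitivity visible: 1.0 and 1.1 end up "equal" even though they are further apart than the tolerance, which is exactly the trade-off the union-find closure makes.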
_unix.168863 | This sentence is from a Linux command's return,I can only thought it as 'statistics' but it is the noun form rather than the verb form.unable to stat ./config-2.6.32-431.el6.i686: No such file or directory Some files were modified! | What does 'stat' mean in this sentence? | stat | Unix, and by inheritance, Linux and *BSD, get the file status via one of the stat-related systems calls: stat(), fstat() and lstat(). I believe the original was stat(). The status in this case constitutes what we currently call metadata: information about the file, like ownership, permissions, sizes, access, modification and status change times, things like that.Whoever wrote the error message you quote (unable to stat) used the name of the Unix/Linux/*BSD system call as a verb. That would be consistent with a lot of the system calls, which have names like read, write, close, open. In the context of using and thinking about Unix system calls, using stat as a verb comes pretty naturally.So, to stat a file, is to get some or all of the file's metadata. |
_cs.1905 | How to prove that $\mathsf{NP}^A \neq \mathsf{coNP}^A$? I am just looking for such an oracle TM $M$ and a recursive language $L(M) = L$ for which this holds. I know the proof where you show that there is an oracle $A$ such that $\mathsf{P}^A \neq \mathsf{NP}^A$ and an oracle $A$ such that $\mathsf{P}^A = \mathsf{NP}^A$. I have a hint that I should find such an oracle $A$ by extending the proof of $\mathsf{P}^A \neq \mathsf{NP}^A$, but everywhere I search and read it is called obvious or straightforward, and I just do not see how to prove it at all. | An oracle to separate NP from coNP | complexity theory;relativization | As Max said, the modification is not difficult; I suggest that you do not read the rest of this answer yet and think about the problem a little bit more. There is only one part that needs modification, and remembering the definition of when a $\mathsf{coNP}$ machine accepts will help you fix that part. I will explain the required modification below, but first let's have a brief look at the original proof.

In the original proof $A=\bigcup_n A_n$ is built in steps, where at step $i$ we make sure that the $i$th machine in $\mathsf{P}$, $M_i$, doesn't decide the language $\{x \mid \exists y\in A \ |x|=|y| \}$ correctly. Note that the set is in $\mathsf{NP}^A$. We achieve this by simulating $M_i$, using the part of $A$ we have built, on a $0^m$ where $m$ is large enough (the string is longer than strings considered in previous steps). If $M_i$ accepts, we don't add anything; if it rejects, we add a string of length $m$ that $M_i$ doesn't query (such a string exists since there are exponentially many strings of length $m$ but $M_i$ cannot ask about all of them in polynomial time). We will not modify this part of $A$ in future steps (i.e. strings of length $m$ or less will stay the same). This makes sure that $M_i^A$ will not decide the language correctly and completes the proof.
Now, assume that the machines $M_i$ were in $\mathsf{coNP}$ in place of $\mathsf{P}$. We need to modify the proof to make sure that $M_i^A$ will not recognize $L$. If it is accepting, we keep $A$ as before and everything works fine as in the original proof. If it rejects, we need to add a string to the set to make sure it doesn't answer correctly. We can still simulate $M_i$ with the part of $A$ we have; the problem is that $M_i$ might query all strings of length $m$. Here the way a $\mathsf{coNP}$ machine works becomes important: it accepts if and only if all computation paths accept. Since it is rejecting in this case, there is a computation path that is rejecting. As long as we keep this path intact everything will work, so we only need to keep the answers to the queries on that path the same. The number of queries on this path is polynomial (since the machine runs in polynomial time), so there are strings of length $m$ that the path doesn't query about; just add one of them to $A$ and the rest of the proof works as before.

The steps are algorithmic, so the set $A$ is recursive (the essential part of the construction is being able to simulate machines, which can be done in, say, $\mathsf{DSpace}(n^{\omega(1)})$).
_unix.110397 | I'm using:

# uname -ro
FreeBSD 9.0-RELEASE-p3

And the latest ssldump:

# pkg_info | grep ssldump
ssldump-0.9b3_4     SSLv3/TLS network protocol analyzer

When I try starting it with decryption, I get the following error:

# ssldump -Xnd -i em0 port 8443 -k name.pem -p password
PCAP: syntax error

I've installed libpcap:

# pkg_info | grep libpcap
dnstop-20121017     Captures and analyzes DNS traffic (or analyzes libpcap dump
libpcap-1.4.0       Ubiquitous network traffic capture library

Found one reference about possible problems with some network interfaces:

Support is provided for only for Ethernet and loopback interfaces

So I tried to run ssldump with lo0:

# ssldump -Xnd -i lo0 port 8443 -k name.pem -p password
PCAP: syntax error

So, how can I run ssldump with packet decryption? Where is my mistake? | ssldump: PCAP: syntax error | networking;freebsd;ssl | null
_unix.383696 | I am trying to use apt on a network that is only intermittently connected to the Internet. The network has a local apt mirror and I have put the IP address of that mirror in all the entries in sources.list.

Unfortunately, when disconnected from the Internet there is an annoying delay in running apt commands. Investigating with tcpdump shows:

14:44:52.271437 IP 172.19.0.2.42208 > 8.8.8.8.domain: 180+ SRV? _http._tcp.172.19.0.1. (39)
14:44:57.277063 IP 172.19.0.2.42208 > 8.8.8.8.domain: 180+ SRV? _http._tcp.172.19.0.1. (39)
14:44:57.277160 IP 172.19.0.1 > 172.19.0.2: ICMP net 8.8.8.8 unreachable, length 75
14:45:02.286414 IP 172.19.0.2.42208 > 8.8.8.8.domain: 180+ SRV? _http._tcp.172.19.0.1. (39)
14:45:02.286504 IP 172.19.0.1 > 172.19.0.2: ICMP net 8.8.8.8 unreachable, length 75

Is there any way to stop apt doing this and just make it connect immediately to the local mirror? | stop apt looking for srv records | debian;apt | OK, I found the answer by reading the source code.

Add the following to /etc/apt/apt.conf (create it if it doesn't exist):

Acquire::EnableSrvRecords "false";
_unix.130627 | I have a certain shell (zsh) script which reads one character at a time and performs an action afterwards. In the shell, this is realized by read -k in a loop. I want to execute the script as a keyboard shortcut, without opening a shell.What is the easiest way to grab keyboard input for this? I could use dmenu if I wanted to read an entire string, but the script needs to be able to parse the characters one at a time.Thanks. | Grabbing keyboard control in shell script | shell script;x11;keyboard;input | null |
_cs.51144 | I'm interested in how fast SVMs can classify new data with $c \in \mathbb{N}_{\geq 2}$ classes and $n \in \mathbb{N}_{\geq 1}$ features.

Example for Neural Networks

For neural networks, this depends very much on the architecture. Supposing you only have one hidden layer with $3n$ neurons, you would have an $n:3n:c$ topology and hence

one multiplication of an $n$-dimensional vector with a matrix in $\mathbb{R}^{n \times 3n}$,
then a multiplication of a vector in $\mathbb{R}^{3n}$ with a matrix in $\mathbb{R}^{3n \times c}$,
and of course $3n+c$ applications of the activation functions.

Adding the biases is dominated by the matrix multiplications. This results in an overall complexity of $\mathcal{O}(n^2 \cdot c)$.

Question

I would be interested in a similar analysis of the classification complexity (NOT the training!) of SVMs, preferably with a reference to literature. | What is the complexity of classification with SVMs? | complexity theory;machine learning;svm | null
_cs.70076 | Consider strings $s \in \{0,1\}^*$. Define $c_1(s)$ to be the ones' complement of $s$; i.e., the string obtained from $s$ by inverting all of its bits. So, for example, $c_1(000111) = 111000$. Call a language $L \subset \{0,1\}^*$ ones' complement closed, or OCC, if $s' \in L$ $\iff$ $c_1(s') \in L$, provided $s'$ is not the empty string. Given these assumptions, I have a few questions.Does there exist an OCC language that is both Turing-recognizable and not decidable?Does there exist an OCC language $L$ with the property that both $L$ is not decidable and $\overline{L}$ is Turing-recognizable?Does there exist an OCC language $L$ that is such that neither it nor $\overline{L}$ is Turing-recognizable?I am having some trouble answering these questions because of how limited the alphabet $\{0,1\}$ is. I was thinking that perhaps this alphabet alone could be used to encode Turing machines, and from there we could talk about the well-known decidable/undecidable and recognizable/not-recognizable languages involving Turing machine encodings, but I'm not sure this is possible. | Decidability of languages containing bitstrings and their corresponding ones' complements | formal languages;turing machines;undecidability | Given a language $L$, you can create a language $L'$ which is equivalent in power and OCC:$$L' = \{ 0 x : x \in L \} \cup \{ 1 c_1(x) : x \in L \}.$$The two languages are recursively equivalent. This means that there is a computable reduction from $L$ to $L'$ and another one from $L'$ to $L$.Using this construction, you can now answer your own question. |
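As a quick check of the two claims in this construction (my own elaboration, not part of the original answer), complementing a word of $L'$ flips the marker bit and complements the payload:

```latex
% L' is ones' complement closed:
c_1(0x) = 1\,c_1(x), \qquad
0x \in L' \iff x \in L \iff 1\,c_1(x) \in L' .
% Recursive equivalence: x \mapsto 0x computably reduces L to L';
% conversely, stripping the first bit of a word in L' (and complementing
% the remainder when that bit is 1) reduces L' to L.
```

Since these reductions are computable, $L'$ inherits exactly the recognizability and decidability properties of $L$, which is what lets the standard undecidable and non-recognizable languages be turned into OCC witnesses.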
_unix.282134 | Scenario

I have two identical Lenovo (previously IBM) servers [xSeries 3250 M5 - model 5458EHM]. I've built Linux on server 1 and I want to be able to cold swap that hard drive to server 2. (This is so that I can build a specific Linux configuration and send it to a client for them to cold swap on the same hardware.)

Further information

It is a clean Linux install (Debian) on a fresh drive. Linux was installed from CD in UEFI mode. Once the install boots, here is the output I think is relevant:

# efibootmgr -v
BootCurrent: 0004
Timeout: 10 seconds
BootOrder: 0004,0000,0001,0002,0003
Boot0000* CD/DVD Rom    ACPI(a0341d0,0) PCI(1d,0) USB(0,0) USB(1,0)
Boot0001* Hard Disk 0   ACPI(a0341d0,0) PCI(1f,2) SATA(0,0,0) HD(1,800,100000,ab3dde4a-f8dd-420c-a103-53bbe95bc74f)
Boot0002* PXE Network   ACPI(a0341d0,0) PCI(1c,0) PCI(0,0) MAC(MAC(6cae8b5b6ae0,0)
Boot0003* Hard Disk 1   Vendor(0c588db8-6af4-11dd-a992-00197d890238,09)
Boot0004* debian        HD(1,800,100000,ab3dde4a-f8dd-420c-a103-53bbe95bc74f) File(\EFI\debian\grubx64.efi)

You can see that the Boot0004 debian installation is installed in UEFI mode.

Output from cat /etc/fstab:

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda2 during installation
UUID=8ac79015-aa86-4105-85dd-43e3e8761ed4 /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=4539-CB77  /boot/efi       vfat    umask=0077      0       1
# swap was on /dev/sda3 during installation
UUID=ddcc51da-f15a-4d36-b799-2fb00789e676 none            swap    sw              0       0

Edit: I've tried removing the UUID lines so that it points to the /dev/sda partitions instead; same problem.

I have reloaded the default boot firmware settings, and UEFI doesn't try to load BIOS legacy boot by default.

Output from # parted /dev/sda:

GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA ST2000NM0033 (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  538MB   537MB   fat32                 boot, esp
 2      538MB   1992GB  1991GB  ext4
 3      1992GB  2000GB  8418MB  linux-swap(v1)

Problem

I can't boot the HDD on server 2. On load, the UEFI messages state that it can't boot the 'debian' image.

What I have tried

I have tried doing the reverse: installing Linux on server 2 and cold swapping the disk to server 1, and I have the same problem. I have moved the HDD to a third machine (a desktop-grade PC) and it won't boot either.

What am I missing? | Can't boot when moving Linux installation from one server to another - UEFI | boot | null
_unix.137164 | It appears that new permissions on /etc/issue and /etc/motd are reverting back to the original even if we change them. This is on systems running RHEL 5 and RHEL 6. Is there any rc script which controls the permissions on /etc files? | Permissions changing on few files under /etc/ | permissions;rhel;etc | null |
_unix.167008 | A weird issue after we migrated some of our servers to a new DC. We assigned new IPs to all the machines and all the servers are working fine except one. It is not picking up the new IP address, and it's failing with the following message when I try to restart the network with the new IP, new GW and the new netmask in the ifcfg-eth0 file:

Bringing up loopback interface:  [OK]
Bringing up interface eth0:  Determining if ip address 10.80.3.2 is already in use for device eth0
Error, some other host already uses address 10.80.3.2  [FAILED]

After I added ARPCHECK=no in /etc/sysconfig/network-scripts/ifcfg-eth0, the eth0 comes up fine with the new IP address, but then we have no network access. The machine behaves as if the network is dead. We are 100% sure that the IP address 10.80.3.2 is not active on any other machine. I am not very good with networking. Can anyone shed some light on how I can fix this?

P.S.: It is a VMware VM. | Unable to start network, is already in use for device eth0, but its not | networking;rhel;arp | null
_webmaster.79655 | A couple of months ago I published on my blog an article with a title in the form what is a table (table is not really the word I used; I write the word here as an example). I had seen through Google Keyword Planner that this search term has very low competition and a decent volume.

However, when I search Google for this term I cannot find my post, even on the last page after 450 results. Apart from the first couple of Google pages with relevant results, all the other pages which outrank my post do not answer the particular question (what is a table) but have a great density of the word table in their content, just because they happen to make or sell tables.

On the other hand, when I put the search term in quotes (what is a table), I find my post in the 7th position of the first page!

My post's URL is in the form www.example.com/eating/what-is-a-table/ and the post is optimized with the Yoast SEO plugin (WordPress). The plugin reports that my keyword contains a stop word, namely the word a; however, I have to include a as it is part of the search term I'm targeting.

My site is 15 months old but I rank decently for other search terms. For example, for the term [table definition], my relevant post ranks on the 4th page. The posts [what is a table] and [table definition] were published on the same date.

In light of the above explanations, in a few words the question is: why do search results differ so much when this search term what is a ... is asked within quotes or not? | why search results differ so much when the search term [what is a ...] is asked within quotes or not | seo;google search | null
_unix.48798 | I am a newbie to Linux. I am trying to add a website on an Ubuntu Linux server, following this tutorial: http://www.hostly.com/hosting-info/build-website-using-apache-1658.html

Created a web configuration file in /etc/apache2/sites-available which looks like this:

<VirtualHost *:80>
# Basic setup
ServerName analys.ideometrics.se
DocumentRoot /home/micke/www/analys.ideometrics/

# Logfiles
ErrorLog /home/micke/wwwlogs/apache2/error.log
CustomLog /home/micke/wwwlogs/apache2/access.log combined
</VirtualHost>

Created an index.html in the location /home/micke/www/analys.ideometrics/ and restarted the Apache web server.

When I try to access the URL (analys.ideometrics.se), I get an internal error. I am not sure if I have to edit the hosts file. Can you please give me a clue? Thanks for your help! | adding a site to Apache2 Ubuntu Linux | linux;ubuntu | null
_codereview.28165 | How would I write a foreach statement that includes both of these statements together?

// clear textboxes
foreach (Control c in panel1.Controls.OfType<TextBox>())
{
    if (!string.IsNullOrEmpty(c.Text))
    {
        c.Text = "";
    }
}

// clear price label text
foreach (Control c in panel1.Controls.OfType<Label>())
{
    if ((string)c.Tag == "Clearable")
    {
        c.Text = "";
    }
}

| Foreach statement for two OfType<> | c#;winforms | You could write it in 1 loop but you'll still need separation of logic for each type of Control:

foreach (Control c in panel1.Controls)
{
    if (c is TextBox)
    {
        var tb = c as TextBox;
        if (!String.IsNullOrEmpty(tb.Text))
            tb.Text = "";
    }
    if (c is Label)
    {
        var l = c as Label;
        if (l.Tag != null && l.Tag.ToString() == "Clearable")
            l.Text = "";
    }
}
_codereview.110080 | I've got an Angular controller where I have two functions that are repeated inside two functions:

(function () {
    'use strict';

    angular
        .module( 'app.purchases.products' )
        .controller( 'ReadProductController', ReadProductController );

    ReadProductController.$inject = [ '$scope', 'ReadProductFactory' ];

    function ReadProductController( $scope, ReadProductFactory ) {
        /* jshint validthis: true */
        var vm = this;
        vm.products = {};
        vm.product = {};
        vm.getProductsList = getProductsList;
        vm.getProductDetails = getProductDetails;

        function getProductsList( columnOrder, sortOrder ) {
            var data = {
                columnOrder: columnOrder,
                sortOrder: sortOrder
            };
            ReadProductFactory.listProducts( data, success, fail );

            // The following are the callback funcs, but they repeat
            // for the getProductDetails func too; should I set them global?
            function success( products ) {
                vm.products = products;
            }

            function fail( error ) {
                console.log( error );
            }
        }

        function getProductDetails( id ) {
            var data = {
                id: id
            };
            ReadProductFactory.detailProduct( data, success, fail );

            function success( product ) {
                vm.product = product;
            }

            function fail( error ) {
                console.log( error );
            }
        }
    }
})();

The above is the controller of a Product Listing view. I've tried to move the REST logic to a factory. What I feel is that my code is not completely DRY because of the callback functions; maybe you'll understand better what I've tried to do with the factory code. Here it is:

(function () {
    'use strict';

    angular
        .module( 'app.purchases.products' )
        .factory( 'ReadProductFactory', ReadProductFactory );

    ReadProductFactory.$inject = [ 'Restangular' ];

    function ReadProductFactory( Restangular ) {
        return {
            detailProduct: detailProduct,
            listProducts: listProducts
        };

        function detailProduct( data, success, fail ) {
            Restangular
                .one( '/purchases/products/', data.id )
                .all( '/detail' )
                .getList()
                .then( success, fail );
        }

        function listProducts( data, success, fail ) {
            Restangular
                .all( '/purchases/products/list' )
                .getList()
                .then( success, fail );
        }
    }
})();

Some have told me to use promises instead of callbacks; how could I do that? Also, is it good practice to declare empty arrays as I've done? | Handling success and failure when retrieving product information | javascript;error handling;angular.js;controller;callback | null
_computergraphics.2237 | I did a quick investigation of the topic, but there doesn't seem to be a decent resource for finding related problems without digging into the latest CG papers (unlike CS problems, for which you can easily find a Wikipedia list).

By open problems I mean phenomena that still do not have a tractable solution/approximation that has found its way into CG, and better yet real-time CG. | What are the current open problems in Computer Graphics? | real time;physically based;render | null
_unix.76521 | After looking at the man page for ls on my system and searching Google, I see there IS a hacky way to use awk or perl to show octal permissions when using ls, but with bash is there anything more native?

Standard output of ls -alh:

$ ll
total 0
drwxr-xr-x   5 user  group   170B May 20 20:03 .
drwxr-xr-x  17 user  group   578B May 20 20:03 ..
-rw-r--r--   1 user  group     0B May 20 20:03 example
-rw-r--r--   1 user  group     0B May 20 20:03 example-1
-rw-r--r--   1 user  group     0B May 20 20:03 example-3

Desired output including octal representation of permissions:

$ ll
total 0
drwxr-xr-x  1775   5 user  group   170B May 20 20:03 .
drwxr-xr-x  1775  17 user  group   578B May 20 20:03 ..
-rw-r--r--  1644   1 user  group     0B May 20 20:03 example
-rw-r--r--  1644   1 user  group     0B May 20 20:03 example-1
-rw-r--r--  1644   1 user  group     0B May 20 20:03 example-3

(disclaimer: not sure if those octals are exactly right)

Reasoning

I am more familiar with the drwxr-xr-x notation for permissions, but sometimes when the dashes fall in odd places I might mis-read it at a quick glance. I'd like to see the octal equivalent as well.

Conversion Ability (question part 2)

I think a long time ago octal permissions might have been limited to 000 - 777, but in recent times there are some things like set-group-ID and sticky that have given us octals with 4 places like 1775. Is it possible to represent every possible permission in octal format? If it is not, then I'd better understand why bash's ls command doesn't seem to have this format. | How can I display octal notation of permissions with ls - and can octal represent all permissions?
| ls;coreutils | I also use stat to get an ls-like output, but I use a different approach to format the output: I use TAB as a delimiter (allows for easier parsing afterwards, if needed), format the time via stat and finally filter the output with numfmt (included in GNU coreutils >= 8.21, 2013-02-14) to get nice file sizes:

stat --printf="%A\t%a\t%h\t%U\t%G\t%s\t%.19y\t%n\n" * | numfmt --to=iec-i --field=6 --delimiter=' ' --suffix=B

Note the delimiter used for numfmt is also a Tab (to input it in the terminal, hit Ctrl+V then Tab).

This is what the output looks like:

drwxr-xr-x 755 2 don users 4.0KiB 2013-05-17 03:37:02 150905-adwaita-x-dark-light-1.3
drwxr-xr-x 755 8 don users 4.0KiB 2011-10-13 07:30:39 Adwaita Slim
drwxr-xr-x 755 3 don users 4.0KiB 2013-05-17 19:26:41 Away
drwxr-xr-x 755 5 don users 4.0KiB 2013-05-17 03:09:14 elementary
-rw-r--r-- 644 1 don users 539KiB 2013-05-10 00:32:14 gdm.jpg
-rw-r--r-- 644 1 don users 1.5MiB 2013-05-19 04:30:16 gnome-shell-3.8.2.tar.xz
drwxrwxr-x 775 4 don users 4.0KiB 2013-05-18 18:34:38 gnome-themes-standard-3.8.1
-rw-r--r-- 644 1 don users 3.7MiB 2013-05-18 18:30:06 gnome-themes-standard-3.8.1.tar.xz
drwxrwxr-x 775 17 don users 4.0KiB 2013-05-18 18:37:05 gtk+-3.8.2
-rw-r--r-- 644 1 don users 14MiB 2013-05-18 18:30:56 gtk+-3.8.2.tar.xz
drwxr-xr-x 755 13 don users 4.0KiB 2013-05-18 02:41:51 MediterraneanNight-2.02
-rw-r--r-- 644 1 don users 603B 2013-05-19 20:07:26 python-pytaglib.tar.gz
-rw-r--r-- 644 1 don users 442KiB 2013-05-19 00:33:27 Stripes.jpg

Note: as per cwd's comment, on OSX the coreutils commands are gstat and gnumfmt.
_unix.263434 | As a disclaimer, I have read related questions on this topic, but am still a bit confused about the situation I am seeing:

Understanding system load
and also:
Understanding top and load average

I am concerned about the load on one of my servers. When running htop, it displays that I have 40 cores. My load averages are 9.35, 9.58, 8.55. My initial thought was that this was high, but the processors installed in the server are:

INTEL XEON E5-2650V3 (2.3GHZ/10-CORE/25MB/105W) FIO PROCESSOR KIT
INTEL XEON E5-2650V3 (2.3GHZ/10-CORE/25MB/105W) PROCESSOR KIT

My confusion is that I am not sure why htop lists 40 cores, when I only have two 10-core processors.

2 questions:

If I have two 10-core processors (20 cores total), is a load of 10 reasonable?
Also, why would htop show 40 cores at the top? | System Load averages | linux;load;load average | A load of 10 is reasonable in this case. The rule of thumb is that you want your load average to be less than your total number of cores. The reason that you appear to have double the number of cores is hyper-threading. Here is an excerpt from the linked Wikipedia article:

For each processor core that is physically present, the operating system addresses two virtual or logical cores, and shares the workload between them when possible. The main function of hyper-threading is to increase the number of independent instructions in the pipeline; it takes advantage of superscalar architecture, in which multiple instructions operate on separate data in parallel. With HTT, one physical core appears as two processors to the operating system, which can use each core to schedule two processes at once. In addition, two or more processes can use the same resources: if resources for one process are not available, then another process can continue if its resources are available.
_webapps.35203 | I have an old YouTube account which I can't log into. But one user PMs me frequently and I want to block him just him.How can I do that? | How do I block a user on an old YouTube account? | youtube | null |
_cs.24355 | I'm studying binomial heaps in anticipation for my finals and the CLRS book tells me that insertion in a binomial heap takes $\Theta(\log n)$ time. So given an array of numbers it would take $\Theta(n\log n)$ time to convert it a a binomial heap. To me that seems a bit pessimistic and like a naive implementation. Does anyone know of a method/implementation that can convert an array of numbers to a binary heap in $\Theta(n)$ time? | Can we create binomial heaps in linear time? | data structures;efficiency;heaps;priority queues | Wikipedia claims that insertion takes $O(1)$ amortized time, and so converting an array of numbers into a binomial heap should indeed take time $O(n)$. This is also supported by these lecture notes, and probably mentioned in CLRS. |
_codereview.173710 | I have this piece of code that iterates two lists, one of participants in a round and another list with the bids made in a round, I need to compare both lists to determine which participants did not make a bid in the round: for (final InvestorModel participant : roundParticipants) { boolean investorNotFound = true; for (final AuctionBidModel bid : bids) { if (bid.getInvestor().equals(participant)) { investorNotFound = false; } } if (investorNotFound) { investorsWithNoBid.add(participant); } }As you can see, a flag is used to mark if a participant is not found in the list of bids made, said flag is used after the nested for finishes to determine if said participant should be added to the list of participants with no bids.I'm having coming up with a lambda expression that compares the contents of two lists of different objects, where one checks each of its objects against each of the objects' investor property in the other list.How can I convert this piece of code to a stream? Or is it one of the cases that cannot be converted? | Passing to a lambda expression a double nested for with internal conditionals | java;collections;stream | null |
_unix.107692 | Enabling history to display time via export HISTTIMEFORMAT='%F %T ' shows the times of the commands but .bash_history doesn't contain any times.Where does bash store the times the commands were executed?Are they always stored automatically? | Where does bash store the time commands were executed? | bash;command history;storage;timestamps | From the BASH_BUILTINS man page:If the HISTTIMEFORMAT variable is set, the time stamp information associated with each history entry is written to the history file, marked with the history comment character. When the history file is read, lines beginning with the history comment character followed immediately by a digit are interpreted as timestamps for the previous history line.So the information is stored in the history file only if HISTTIMEFORMAT is set.(Try history -a to append the currently in-memory history entries to your history file. You should now see comments with unix timestamps in there.) |
_unix.29775 | Is there a generic way to reset a PCI device in Linux from the command line? That is, cause the PCI bus to issue a reset command. | Reset a PCI Device in Linux | linux;pci | null |
_unix.344127 | I want to keep the /home directory in a folder on a disk partition other than the boot partition. Please note I said folder, not partition, meaning that I do not want to mount an entire partition as /home.Bad fstab entry: LABEL=G_Giant_257/common/home /home would be exactly what I want, if only such syntax would work.Actual (good) fstab:LABEL=G_Giant_257 /mnt/g auto nosuid,nodev,nofail,nobootwait,x-gvfs-show 0 0Now I need to get the commandmount /mnt/g/common/home /hometo execute before anything tries to access /home. Of course, I want all references to any user's /home/~ directory to access a sub-folder of /common/home on my G_Giant_257 partition.The kicker: my root partition is ext4, the G_Giant_257 partition is NTFS, so I don't see how a link could be made to work. I am running ubuntu 16.04. What do you recommend, please? | How do I mount a directory early as possible, at or just after fstab? | ubuntu;mount;directory;home | mount --bind your /home in /etc/fstab with/mnt/g/common/home /home none bind 0 0(See this question on ServerFault.)I have no idea how practical is to have /home on an NTFS filesystem. |
_unix.327480 | I have file called server.txtSuppose it has below servers , there could be more servers server1 server2server3server4how can I copy file (file.txt on all servers using scp command) at /tmp/ location . | Scp files to multiple server simountaneously | shell script;shell;scp | null |
_codereview.56054 | I have a class that implements Queue and draws values from other queues which may still be referenced outwith it. I want my method to draw values from the contained queues, using synchronized locks on them to ensure thread safety with other code that uses synchronized locks on the queues.The way I've tried to achieve this is by having my method loop through all values indefinitely, storing the value if it's the next one, updating the stored values if it reaches the same queue again and the variables the queues are being measured by have changed since the list iteration, and returning the next value of the queue if it's unchanged since the last time it was checked and evaluated to have the next value - my logic being that at that point, all queues have been checked and the currently stored value/queue was checked earliest and has been reconfirmed to be the next value.Is this the ideal way to create a thread-safe version of this method? Or would there be a better way?http://pastebin.com/TSkiJFh0Collection<Queue<T>> memberQueues;final Lambda<T, Comparable> keyGetter;public T get(boolean remove){ // workaround to ensure thread-safety when synchronized locks can't extend past the block they're declared in. // Work on a copy of memberQueues List<Queue<T>> memberQueuesCopy; synchronized(memberQueues) { memberQueuesCopy = new ArrayList<Queue<T>>(memberQueues); } // Declare variables and initialise with last member of memberQueuesCopy. // If it checks every variable and the last one is next, then I don't think I need to check again. Queue<T> nextQueue = memberQueuesCopy.get(memberQueuesCopy.size() - 1); T nextValue = nextQueue.peek(); Comparable nextValueComparable = keyGetter.getMember(nextValue); // Check all members of memberQueuesCopy. Find the lowest and hold the value until it gets to it again incase // any values have changed. 
When it gets back to the current lowest value, check whether the value it's being // sorted by has changed - and if it has, use its new values and run through all members again. If it hasn't, // return it. // I want a less contrived method. Suggestions that maintain thread safety? for(;;) { for(Queue<T> i : memberQueuesCopy) { synchronized(i) { T iValue = i.peek(); Comparable iComparable = keyGetter.getMember(iValue); if(i == nextQueue) { if(nextValue == iValue && nextValueComparable.equals(iComparable)) { if(remove) return nextQueue.remove(); else return iValue; } nextValue = nextQueue.peek(); nextValueComparable = keyGetter.getMember(nextValue); } else { if(iComparable.compareTo(nextValueComparable) < 0) { nextQueue = i; nextValue = iValue; nextValueComparable = iComparable; } } } } }} | Threadsafe get method on queue that draws values from other queues? | java;multithreading;queue | Code StyleJava Code Style puts the open-brace at the end of the line, not the start of the next line. For example, you have: if(i == nextQueue) {but that should be: if(i == nextQueue) {Variable conventionsi as a variable name is a great idea, if the variable is the control integer in a for loop. In your case, I presume it is short for 'item', or something, but, a Queue, being called i is unconventional.As it happens, the letter q is perfect as a substitute....Now, your nextQueue variable is actually the lastQueue odd.Function extractionWith synchronization, return-balues from methods are often a great help for readibility. 
Consider this code you have:// Work on a copy of memberQueuesList<Queue<T>> memberQueuesCopy;synchronized(memberQueues){ memberQueuesCopy = new ArrayList<Queue<T>>(memberQueues); }Which should really be written as:// Work on a copy of memberQueuesList<Queue<T>> memberQueuesCopy;synchronized(memberQueues) { memberQueuesCopy = new ArrayList<Queue<T>>(memberQueues);}would be even better if written as:private final List<Queue<T>> copyQueues() { synchronized(memberQueues) { return new ArrayList<Queue<T>>(memberQueues); }}and then:// Work on a copy of memberQueuesList<Queue<T>> memberQueuesCopy = copyQueues();BugsThere are three bugs I should point out:NoSuchElementException if memberQeues is empty:Queue nextQueue = memberQueuesCopy.get(memberQueuesCopy.size() - 1);(and bug 3) NullPointerException if any of the queues are empty (in some combinations) (one bug on iComparable, the other on nextValueComparable):T nextValue = nextQueue.peek();Comparable nextValueComparable = keyGetter.getMember(nextValue); .... T iValue = i.peek(); Comparable iComparable = keyGetter.getMember(iValue); .... if(nextValue == iValue && nextValueComparable.equals(iComparable)) .... if(iComparable.compareTo(nextValueComparable) < 0) |
_codereview.59538 | In this Data Explorer query I am trying to do the following:For each tag:Compute sum of answer scores in this tag (S)Compute count of answers in this tag (A)For each tag class (Bronze, Silver, Gold)add two columns:S divided by this class's score goalA divided by this class's answer goalI wanted to do this in the most general way possible, allowing more tag classes/goals to be added later. I came up with this:-- Predefined tag badge goals... TagBadges as ( select * from (values (1, 'Bronze', 100, 20), (2, 'Silver', 400, 80), (3, 'Gold', 1000, 200)) as Badge(Idx, Class, Score, Answers)),-- Progress per tag, per badge classTypeProgress as ( select RawData.TagName, format(iif(RawData.Score > TagBadges.Score, 1, cast(RawData.Score as float)/TagBadges.Score), '#0.#%') as Score, format(iif(RawData.Answers > TagBadges.Answers, 1, cast(RawData.Answers as float)/TagBadges.Answers), '#0.#%') as Answers, TagBadges.Class from RawData cross join TagBadges),-- Combine class & type columnsAllProgress as ( select TagName, Progress, Class+' '+Type as Category from TypeProgress unpivot (Progress for Type in (Score, Answers)) p) ...But in the end I still had to list all the cases (2 3 = 6) explicitly:select *from AllProgresspivot ( max(Progress) for Category in ([Bronze Score], [Bronze Answers], [Silver Score], [Silver Answers], [Gold Score], [Gold Answers])) qIs there a better way of doing this? | SQL query with dynamic unpivot+pivot for cross product | sql;sql server;stackexchange | null |
_codereview.55044 | Interview question from the interwebzYou have a set of envelopes of different widths and heights. One envelope can fit into another if and only if both the width and height of one envelope is greater than the width and height of the other envelope. What is the maximum number of envelopes can you russian doll?My implementation:# assuming no dupsdef max_russian_doll(enve): if not enve: return 0 enve.sort() max_global = 1 for j in xrange(len(enve) - 1): max_local = 1 for i in xrange(j, len(enve) - 1): if enve[i][1] < enve[i + 1][1] and enve[i][0] != enve[i + 1][0]: # @comment max_local += 1 max_global = max(max_global, max_local) return max_globalenvelopes = [(4,5), (6,7), (2,3)] max_russian_doll(envelopes)obviously this is \$O(n^2)\$. Right now I'm trying to figure out faster solution. Any tips? | Russian doll envelops | python;optimization;interview questions;complexity | null |
_unix.10588 | Possible Duplicate:Which run dialog I'm a unix noob, looking for a good replacement to Windows 7's start menu (pressing Windows key and typing Ch will bring up Chrome).I was told I can just press Alt-F2 to get a launcher, but it's a bit slow, and it doesn't seem to do auto-complete (at least not out of the box) | What's a quick Launcher app which will do auto-complete? | ubuntu | null |
_cs.51403 | I encountered some system of ~5000 random nodes connected by ~8000 non-hookean springs, with ~1300 nodes at the boundary fixed as the wall, the potential of the springs are of the form $dx*e^{(dx/a)}$ where $a$ is a constant and $dx$ the strain (displacement/original length) of the spring, I am using Monte Carlo method to find the energy-minimized configuration after I performed some perturbation, say, a simple shear or a isotropic expansion of the whole system.It seems that the conventional energy minimization schemes such as steepest Descent, or simulated annealing is not working as efficiently here as the case of linear situations, it always fail to converge to a satisfactorily balanced state.Could someone share your experiences in dealing with such non-linear situations?Thank you so much! | What is a proper way of solving a multibody nonlinear problem? | monte carlo | OK, I finally fixed this issue, the right thing to do in such non-linear situation is to use simulated annealing. I am implementing a gradient guided simulated annealing, which works pretty efficiently.Thanks for everyone who gave me suggestions and guidance to the right path!Have fun (mixed with a lot of frustrations) with modeling! |
_vi.2955 | I have three vertically split windows. I want the leftmost window to remain as it is, but move the two other windows from a vertical to a horizontal split. How can I achieve this?I want to get from----------------| b1 | b2 | b3 || | | || | | |----------------to----------------| b1 | b3 || |-------|| | b2 |----------------I can't figure out how to do this with the CTRL-W maps listed in :h window-moving. The only thing I could think of involves opening and closing windows, not moving them, and before I create a mapping or command for it I wanted to ask if there isn't a way to do it by window movement. Here's what I've got::spl - split middle window:b 3 - open the buffer from the rightmost window in the new splitCTRL-W+l - move cursor to rightmost windowCTRL-W+c - close current (rightmost) window | How can I move windows from a vertical split to a horizontal split? | split;vim windows | I don't know if it is the best way to do what you want, but you can accomplish this only with window movements by doing (start from the rightmost window b3):1 - CTRL-W+K - You'll have:----------------| b3 ||------|-------||b1 | b2 |----------------2 - Go to b1 with CTRL-W+j3 - CTRL-W+H to move b1 to the left.You should have the layout you want now. The only downside I see with this method is that size and position of b1 are changed temporarily during the movement. |
_codereview.142351 | I'm trying to make a random object spawning script in Unity.Bellow is the code, any suggestions for improvement / changes?I'm new to both Unity and C#.// Game objects and Transformspublic Transform playerTransform;public Transform[] obstaclePrefab;// spawn paramspublic float minYDicstane = 6.0f;public float maxYDistance = 11.0f;private float boxPositionY;public float minXDistance = 0.0f;public float maxXDistance = 3.0f;private float boxPositionX;private float minSpawnTime;private float maxSpawnTime;private float spawnTime;private float timeCounter;// Distancesprivate float playerDistance;private float boxDistance;private float ySpread;private void Start() { // Count sequence minSpawnTime = 3.0f; maxSpawnTime = 8.0f; // Count timer timeCounter = 0; spawnTime = Random.Range(minSpawnTime, maxSpawnTime);}private void Update() { // Count the random spawn time timeCounter += Time.deltaTime; Debug.Log (Spawn Time: + spawnTime + spawnCount: + timeCounter); if(timeCounter >= spawnTime) { // boxDistanceFromPlayer playerDistance = playerTransform.position.y; // Box spawn distance // X position boxPositionX = Random.Range(minXDistance, maxXDistance); boxPositionX = (boxPositionX-1)*2.0f; // Y Position boxPositionY = playerDistance + Random.Range(minYDicstane, maxYDistance); // Select box color int boxColor = Random.Range (0, 4); // Let the boxes awake!!! 
Instantiate (obstaclePrefab [boxColor], new Vector2 (boxPositionX, boxPositionY), Quaternion.identity); // Make new random spawn time spawnTime = Random.Range (minSpawnTime, maxSpawnTime); timeCounter = 0; }}Code after changes, using coroutines// Game objects and Transformspublic Transform playerTransform;public GameObject[] obstaclePrefab;// spawn paramspublic float minYDicstane = 6.0f;public float maxYDistance = 11.0f;public float minXDistance = 0.0f;public float maxXDistance = 3.0f;public float minSpawnTime = 2.0f;public float maxSpawnTime = 5.0f;public float spawnTime = 4.0f;IEnumerator SpawnBoxes() { while (true) { float boxPositionY; float boxPositionX; //Distances float playerDistance; // Player position playerDistance = playerTransform.position.y; // Box position boxPositionX = Random.Range(minXDistance, maxXDistance); boxPositionY = playerDistance + Random.Range(minYDicstane, maxYDistance); // Select box GameObject box = obstaclePrefab[Random.Range(0, obstaclePrefab.Length - 1)]; // Instantiate box Instantiate (box, new Vector2 (boxPositionX, boxPositionY), Quaternion.identity); // Coroutine random amount of time yield return new WaitForSeconds(Random.Range(minSpawnTime, maxSpawnTime)); } }private void Start() { StartCoroutine(SpawnBoxes());} | Object spawning script Unity | c#;unity3d | public Transform[] obstaclePrefab;That's wrong. Prefabs are of type GameObject, not Transform.private float minSpawnTime;private float maxSpawnTime;Why did you make these private? They should have been public with default values - just as all the others.private float boxPositionY;private float boxPositionX;// Distancesprivate float playerDistance;private float boxDistance;private float ySpread;These shouldn't even been private, but local to the Update() method, as their values are never reused.// Select box colorint boxColor = Random.Range(0, 4);This is very likely to break. 
Better read the array length instead of hard coding it (note the int overload of Random.Range excludes the max value, so pass the length itself):

// Select box color
int boxColor = Random.Range(0, obstaclePrefab.Length);

Or just get rid of boxColor altogether:

// Select box
GameObject box = obstaclePrefab[Random.Range(0, obstaclePrefab.Length)];

// Count the random spawn time
timeCounter += Time.deltaTime;
if(timeCounter >= spawnTime) {
    // Make new random spawn time
    spawnTime = Random.Range (minSpawnTime, maxSpawnTime);
    timeCounter = 0;
}

That's one way to do it. The cleaner method would have been to handle this in a Coroutine and then use WaitForSeconds. Right now, your code runs every single frame, without actually doing anything useful.

Debug.Log ("Spawn Time: " + spawnTime + " spawnCount: " + timeCounter);

Be careful when you log. Logging when something spawns? OK. But spamming a log entry every single frame? Waste of resources.

boxPositionX = (boxPositionX-1)*2.0f;

What is this line supposed to do? That should have been directly computed into minXDistance and maxXDistance, so this line is obsolete. If this is actually supposed to be C#, the whole script you posted should actually have been wrapped in a regular C# class:

using UnityEngine;
using System.Collections;

public class ScriptName : MonoBehaviour {
    // <---- Your stuff goes here
}

Did you just omit this when posting your code here, or did you actually write your scripts without it?
_unix.190751 | On my Ubuntu 14.04.2 server IPv4 goes offline several times per hour (one to four times I've seen, but at no particular minute per hour or so).My hoster insists that the problem is on the server-side and the fact that a Debian-based rescue system doesn't show the same symptoms makes me think they're right. However, the rescue system doesn't configure a global IPv6 address on any interface, like the installed Ubuntu system does.Routinely between one to four times an hour the (IPv4-based) SSH connection will drop due to too many timed out packets.When monitoring the server from another remote server ICMPv4 pings will either time out or the router will respond that the destination host isn't available (I routinely see both!). At the same time the ICMPv6 pings are totally unaffected.Also, when I use IPv6 to connect from that other remote host via SSH, that connection doesn't stall nor does the system appear to freeze or so (as I had suspected initially).The system and kernel logs indicate no issues either and it makes no difference whether I disable all firewall rules or leave the firewall turned on. I also had it running with logging enabled for all dropped packets to see whether I could correlate something there.No cron jobs are running at those offline times and it also doesn't happen at the same minute or so, indicating some regular cron job.I also narrowed another aspect of this down. When I ping (ICMPv4) from the host that shows the symptoms, loopback is not affected, eth0 is. This would suggest to me that it's not about IPv4 in general, but specific to the interface that corresponds to the one network card in the system.How can I proceed my troubleshooting from here? What would be the next step(s), given what I have done so far? Is there perhaps even a known bug that would correspond to the symptoms I see?NB: I have worked on diagnosing this for well over a month. So asking here, to me is kind of a last resort. 
Please request more details as needed and I will add them.What I have done so far:ping vs. ping6mtr from and to the server, my hoster doesn't deem the few lost packets anything irregularSSH connection via IPv4 and IPv6 respectivelytail-ed /var/log/kern.log, /var/log/syslog and /var/log/auth.log to see whether anything would show up during the offline periodflushed all firewall rules for IPv4 and IPv6 respectivelyalso simply enabled logging for dropping of packetsremoved several packages I suspected of being potential culpritsHere are the list of manually installed packages:# echo $(apt-mark showmanual)acl adduser aggregate apparmor apparmor-profiles apparmor-utils apt apt-cacher-ng apt-file apt-rdepends apt-utils base-files base-passwd bash bash-completion bash-static bridge-utils bsdutils btrfs-tools busybox-initramfs busybox-static bzip2 bzr ca-certificates cgmanager cgroup-bin cifs-utils colordiff coreutils cpio crda cron cron-apt cryptmount cryptsetup dash debconf debianutils debootstrap debsums dh-python dialog diffutils dnsutils dpkg dpkg-dev duplicity e2fslibs e2fsprogs ed etckeeper fakechroot fakeroot file findutils gcc-4.8-base gcc-4.9-base gdisk-noicu git git-svn gnupg gnutls-bin gpgv grep gzip haveged heirloom-mailx hostname htop ifupdown init-system-helpers initramfs-tools initramfs-tools-bin initscripts insserv iproute2 ipset iptables iputils-ping klibc-utils kmod kpartx less libacl1 libapt-inst1.5 libapt-pkg4.12 libattr1 libaudit-common libaudit1 libblkid1 libbz2-1.0 libc-bin libc6 libcap2 libcgmanager0 libck-connector0 libcomerr2 libdb5.3 libdbus-1-3 libdebconfclient0 libdrm2 libedit2 libevent-2.0-5 libexpat1 libffi6 libgcc1 libgdbm3 libgssapi-krb5-2 libjson-c2 libjson0 libk5crypto3 libkeyutils1 libklibc libkmod2 libkrb5-3 libkrb5support0 liblzma5 libmount1 libmpdec2 libncurses5 libncursesw5 libnih-dbus1 libnih1 libnl-3-200 libnl-genl-3-200 libpam-modules libpam-modules-bin libpam-mount libpam-runtime libpam-systemd libpam0g libpci3 libpcre3 
libplymouth2 libpng12-0 libprocps3 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib libpython3-stdlib libpython3.4-minimal libpython3.4-stdlib libreadline6 libselinux1 libsemanage-common libsemanage1 libsepol1 libslang2 libsqlite3-0 libss2 libssl1.0.0 libstdc++6 libtinfo5 libudev1 libui-dialog-perl libusb-0.1-4 libusb-1.0-0 libustr-1.0-1 libuuid1 libwrap0 linux-firmware linux-image-3.13.0-24-generic linux-image-extra-3.13.0-24-generic linux-image-generic localepurge locales logcheck logcheck-database login logrotate lsb-base lsb-release lshw lsof lxc lxc-templates make makedev man-db manpages manpages-dev mawk mc md5deep mdadm mercurial mime-support mlocate module-init-tools molly-guard mount mountall mtr-tiny multiarch-support ncurses-base ncurses-bin ndisc6 net-tools netcat-openbsd netsniff-ng nmap openntpd openssh-client openssh-server openssh-sftp-server p7zip-full p7zip-rar passwd pax pciutils perl perl-base perl-modules plymouth postfix procps psmisc pv python python-apt-common python-mako python-mechanize python-minimal python2.7 python2.7-minimal python3 python3-apt python3-minimal python3.4 python3.4-minimal readline-common reprepro resolvconf rsyslog sed sensible-utils sharutils smartmontools subversion sudo sysv-rc sysvinit-utils tar tcpdump tcptraceroute tmux traceroute tree tzdata ubuntu-keyring ucf udev uidmap unattended-upgrades unbound-host unrar unzip upstart usbutils util-linux vim-nox vnstat wget whois wireless-regdb xz-utils zerofree zip zlib1g zsh-doc zsh-static(Some of these come from the debootstrap process, of course.)The requested information:$ uname -a|sed 's/'$(hostname -f)'/foobar/g'Linux foobar 3.13.0-46-generic #79-Ubuntu SMP Tue Mar 10 20:06:50 UTC 2015 x86_64 x86_64 x86_64 GNU/LinuxI updated to a newer kernel (package linux-image-generic-lts-utopic):$ uname -a|sed 's/'$(hostname -f)'/foobar/g'Linux foobar 3.16.0-33-generic #44~14.04.1-Ubuntu SMP Fri Mar 13 10:33:29 UTC 2015 x86_64 x86_64 x86_64 GNU/LinuxThe sysctl -a output 
has been anonymized and put here.The command was (minus one sed to replace the name of an interface to _bridge):sudo sysctl -a|sed 's/'$(hostname -f)'/foobar/g;s/'$(hostname -s)'/foobar/g'|grep -Ev '^net\.ipv[46]\.(neigh|conf)\._[s]'|grep -v nf_logThere are overall three interfaces like _bridge all configured for IPv4 and IPv6 and only differing in IP addresses. However, they aren't currently in use. They are slated to be used for one LXC guest each.# lspci -s 06:00.0 -vv06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 02) Subsystem: Micro-Star International Co., Ltd. [MSI] X58 Pro-E Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 256 bytes Interrupt: pin A routed to IRQ 42 Region 0: I/O ports at e800 [size=256] Region 2: Memory at fbeff000 (64-bit, non-prefetchable) [size=4K] Region 4: Memory at f6ff0000 (64-bit, prefetchable) [size=64K] [virtual] Expansion ROM at fbe00000 [disabled] [size=128K] Capabilities: [40] Power Management version 3 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME- Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+ Address: 00000000fee00000 Data: 40c1 Capabilities: [70] Express (v1) Endpoint, MSI 01 DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop- MaxPayload 128 bytes, MaxReadReq 4096 bytes DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us ClockPM+ Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes 
Disabled- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- Capabilities: [b0] MSI-X: Enable- Count=2 Masked- Vector table: BAR=4 offset=00000000 PBA: BAR=4 offset=00000800 Capabilities: [d0] Vital Product Data Unknown small resource type 05, will not decode more. Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr+ BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn- Capabilities: [140 v1] Virtual Channel Caps: LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- Capabilities: [160 v1] Device Serial Number 01-00-00-00-68-4c-e0-00 Kernel driver in use: r8169# modinfo r8169filename: /lib/modules/3.16.0-33-generic/kernel/drivers/net/ethernet/realtek/r8169.kofirmware: rtl_nic/rtl8168g-3.fwfirmware: rtl_nic/rtl8168g-2.fwfirmware: rtl_nic/rtl8106e-2.fwfirmware: rtl_nic/rtl8106e-1.fwfirmware: rtl_nic/rtl8411-2.fwfirmware: rtl_nic/rtl8411-1.fwfirmware: rtl_nic/rtl8402-1.fwfirmware: rtl_nic/rtl8168f-2.fwfirmware: rtl_nic/rtl8168f-1.fwfirmware: rtl_nic/rtl8105e-1.fwfirmware: rtl_nic/rtl8168e-3.fwfirmware: rtl_nic/rtl8168e-2.fwfirmware: rtl_nic/rtl8168e-1.fwfirmware: rtl_nic/rtl8168d-2.fwfirmware: rtl_nic/rtl8168d-1.fwversion: 2.3LK-NAPIlicense: GPLdescription: RealTek RTL-8169 Gigabit Ethernet driverauthor: Realtek and the Linux 
r8169 crew <[email protected]>srcversion: D0E1934D763B6927E0CB4A4alias: pci:v00000001d00008168sv*sd00002410bc*sc*i*alias: pci:v00001737d00001032sv*sd00000024bc*sc*i*alias: pci:v000016ECd00000116sv*sd*bc*sc*i*alias: pci:v00001259d0000C107sv*sd*bc*sc*i*alias: pci:v00001186d00004302sv*sd*bc*sc*i*alias: pci:v00001186d00004300sv*sd*bc*sc*i*alias: pci:v00001186d00004300sv00001186sd00004B10bc*sc*i*alias: pci:v000010ECd00008169sv*sd*bc*sc*i*alias: pci:v000010ECd00008168sv*sd*bc*sc*i*alias: pci:v000010ECd00008167sv*sd*bc*sc*i*alias: pci:v000010ECd00008136sv*sd*bc*sc*i*alias: pci:v000010ECd00008129sv*sd*bc*sc*i*depends: miiintree: Yvermagic: 3.16.0-33-generic SMP mod_unload modversionssigner: Magrathea: Glacier signing keysig_key: 25:26:EE:FE:32:C9:58:B4:CD:85:CA:5F:BF:EB:ED:A1:75:D1:B2:18sig_hashalgo: sha512parm: use_dac:Enable PCI DAC. Unsafe on 32 bit PCI slot. (int)parm: debug:Debug verbosity level (0=none, ..., 16=all) (int) | IPv4 goes offline several times per hour on headless remote server, IPv6 unaffected | ubuntu;ipv6;ipv4;packet | null |
_unix.280492 | How do I perform an ascending sort of a word list, based upon how many syllables each word contains? Example Input:somethingimportantdogcatbookshelfExample Output:dogcatsomethingbookshelfimportant | How do I sort words by syllable count? | text processing;awk;sed;perl | This perl script builds a hash with words (read one per line from stdin, and/or from any filenames listed on the command line) as keys, and syllable counts as the values.Then it prints the hash keys, sorted by the syllable counts. #! /usr/bin/perluse strict;use Lingua::EN::Syllable;my %words = ();while(<>) { chomp; $words{$_} = syllable($_);};print join(\n,sort { $words{$a} <=> $words{$b} } keys(%words)), \n;Output:catdogbookshelfsomethingimportantIf you want to print the syllable count along with each word, change the last line to something like this:foreach my $word (sort { $words{$a} <=> $words{$b} } keys(%words)) { printf %2i: %s\n, $words{$word}, $word;};Output: 1: cat 1: dog 2: bookshelf 3: something 3: importantThis version highlights the fact that, as the module itself claims, it only estimates the syllable count. bookshelf is correctly counted as having only two syllables but something should also be two.I haven't examined the module code closely, but it's probably getting confused by the e after the m. In many (most?) words, that wouldn't be a silent e and would count as an extra syllable. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.