id | question | title | tags | accepted_answer |
---|---|---|---|---|
_webapps.60285 | Yesterday I deleted some messages from Facebook, and today I want to bring them back to my inbox. So I logged into Facebook, went to Messages, clicked on More -> Archived, and there I found the deleted messages. I clicked on one and from Action selected Unarchive. The selected message then disappeared from there and I thought it would be in the inbox. So I went to the inbox and looked there, but the message is not found. Why is that? UPDATE: I decided to test Facebook, and as part of that I just archived a message and tried to unarchive it. No problem occurred this time; the message successfully returned to the inbox. Now my question is: what happened to the previously unarchived message? Can I still get it back? | Cannot view 'Unarchived' messages back in the inbox in Facebook | facebook;facebook chat | null |
_vi.5126 | I would like to create a new scratch buffer in Vim script. I would like to use this buffer to output the result of the execution of a Scala script. I am creating the buffer with this function: function! ScratchBuffer() vnew setlocal nobuflisted buftype=nofile bufhidden=wipe noswapfile endfunction Then I call it as follows: let outputBuf = ScratchBuffer() outputBuf should contain the buffer number. However, this doesn't seem to work. I need the buffer number in order to then use the buffer in a Python script. | How to detect the buffer number of a new buffer | vimscript;buffers;vimscript python;scratch buffer;hidden buffers | Your function returns nothing, but you call it expecting the buffer number. This should work: function! ScratchBuffer() vnew setlocal nobuflisted buftype=nofile bufhidden=wipe noswapfile return bufnr('%') endfunction |
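A note on the Python side of the accepted answer: once ScratchBuffer() returns a number, Vim's embedded Python can look the buffer up by that number. A minimal sketch, assuming a Vim built with +python3 and the ScratchBuffer() function defined as above:

```python
# Run inside Vim, e.g. with :py3file; requires Vim compiled with +python3.
import vim

# Evaluate the Vimscript function and capture the buffer number it returns.
bufnr = int(vim.eval("ScratchBuffer()"))

# vim.buffers is indexed by buffer number; replace its contents with our output.
buf = vim.buffers[bufnr]
buf[:] = ["output of the scala script goes here"]
```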
_codereview.108722 | This is my second algorithm, and I will try to make it as simple as possible for you to understand how it works. It is pretty expensive and I'd like to make it more efficient. It works by splitting a square into 4 sides and then determining if an edge is within a side (which is a triangle). Then the collision can respond by using the collision direction as a tool to reduce velocity in the y or x axis. from vector import Vector from math_extra import Math from utilities import util import random class Collisions(): def Detect(me, ent): me_Pos = Vector.Add(me.GetPos(), [me.GetVelocity()[0], -me.GetVelocity()[1]]) # first entity with a predicted pos ent_pos = Vector.Add(ent.GetPos(), [ent.GetVelocity()[0], -ent.GetVelocity()[1]]) # second entity with a predicted pos y_max, y_min, x_max, x_min = me_Pos[1] + (me.Entity.h * 0.5), me_Pos[1] - (me.Entity.h * 0.5), me_Pos[0] + (me.Entity.w * 0.5), me_Pos[0] - (me.Entity.w * 0.5) # defining edge coordinates for the first entity y_max2, y_min2, x_min2, x_max2 = ent_pos[1] + (ent.Entity.h / 2), ent_pos[1] - (ent.Entity.h / 2), ent_pos[0] - (ent.Entity.w/2), ent_pos[0] + (ent.Entity.w/2) # defining edge coordinates for the second entity isColliding = ((x_max >= x_min2 and x_max <= x_max2) or (x_min <= x_max2 and x_min >= x_min2)) and ((y_min <= y_max2 and y_min >= y_min) or (y_max <= y_max2 and y_max >= y_min2)) # are the two entities intersecting at all? y_range = Math.Clamp((abs(me_Pos[0] - ent_pos[0])) / (0.5 * ent.Entity.w) * ent.Entity.h, 0, ent.Entity.h) * 0.5 # y range (refer to the picture) This defines the valid y coordinate range for the left and right edges y_range_2 = (y_range*0.5) # y range (refer to the picture) This defines the valid y coordinate range for the top and bottom edges left = (x_max >= x_min2 and x_max <= ent_pos[0]) and ((y_min <= ent_pos[1]+y_range and y_min >= ent_pos[1]-y_range) or (y_max <= ent_pos[1]+y_range and y_max >= ent_pos[1]-y_range)) # is something hitting me from the left right = (x_min <= x_max2 and x_min >= ent_pos[0]) and ((y_min <= ent_pos[1]+y_range and y_min >= ent_pos[1]-y_range) or (y_max <= ent_pos[1]+y_range and y_max >= ent_pos[1]-y_range)) # is something hitting me from the right top = ((x_max >= x_min2 and x_max <= x_max2) or (x_min <= x_max2 and x_min >= x_min2)) and ((y_min <= y_max2 and y_min >= ent_pos[1] + y_range_2) or (y_max <= y_max2 and y_max >= ent_pos[1] + y_range_2)) # is something hitting me from the top bottom = ((x_max >= x_min2 and x_max <= x_max2) or (x_min <= x_max2 and x_min >= x_min2)) and ((y_max >= y_min2 and y_max <= ent_pos[1] - y_range_2) or (y_min >= y_min2 and y_min <= ent_pos[1] - y_range_2)) # is something hitting me from the bottom Collisions.Response(me, ent, [isColliding, left, right, top, bottom]) # respond to the collision return isColliding, left, right, top, bottom # return data about the collision def Response(me, ent, physdata): isColliding, left, right, top, bottom = physdata[0], physdata[1], physdata[2], physdata[3], physdata[4] me_Pos = me.GetPos() ent_Pos = ent.GetPos() me_Velocity = me.GetVelocity() if left == True: me.SetVelocity([me_Velocity[0] * -0.2, me_Velocity[1]]) if right == True: me.SetVelocity([me_Velocity[0] * -0.2, me_Velocity[1]]) if top == True: me.SetVelocity([me_Velocity[0], me_Velocity[1] * -0.2]) if bottom == True: me.SetVelocity([me_Velocity[0], me_Velocity[1] * -0.2]) y_max, y_min, x_max, x_min = me_Pos[1] + (me.Entity.h * 0.5), me_Pos[1] - (me.Entity.h * 0.5), me_Pos[0] + (me.Entity.w * 0.5), me_Pos[0] - (me.Entity.w * 0.5) # again defining coordinates for the edges for x
in [x_max, x_min]: # looping through all edges and seeing if the distance between them and the center of entity two is less than the radius for y in [y_max, y_min]: colliding, byDistance = util.isInSphere([x,y], ent.GetPos(), ent.Entity.w * 0.5 ) if colliding: me.Entity.move_ip(Vector.Multiply(Vector.Normalize(Vector.Sub(me.GetRealPos(),ent.GetRealPos())), 1+byDistance)) # if so then move the entity in the other direction Collisions.Stuck_Response(me, ent) def Stuck_Response(me,ent): if Vector.Distance(me.GetRealPos(), ent.GetRealPos()) < me.Entity.w * 0.7: me.Entity.move_ip(random.randint(1,2), random.randint(1,2)) me.Entity.move_ip(Vector.Sub(me.GetRealPos(), ent.GetRealPos())) def Translate(table): # loops through all entities and checks for collision with all of them for k, v in enumerate(table): for k2, v2 in enumerate(table): ent_one = table[k] ent_two = table[k2] if ent_one != ent_two: # don't collide myself with myself Collisions.Detect(ent_one, ent_two) | Collision detection algorithm | python;performance;collision;pygame | 1. Review: There are no docstrings. What do these functions do? What arguments do they take? What do they return? The long lines mean that we can't read the code here without scrolling it. The Python style guide (PEP8) recommends sticking to 79 characters. In Python it's good practice to use object attributes instead of getter functions. Using me.pos and me.velocity would make the code easier to read. The code starts with objects called me and ent, computes positions me_Pos and ent_pos (consistent apart from the capitalization), but then goes on to compute x_min and x_min2. Later on there are ent_one and ent_two. It is hard to remember which of these goes with me and which with ent. More consistency in naming would help. The predicted position is computed like this: Vector.Add(me.GetPos(), [me.GetVelocity()[0], -me.GetVelocity()[1]]) Why is the y component of the velocity inverted? It would be better to store it the other way round so you could write: Vector.Add(me.GetPos(), me.GetVelocity()) It's wrong to add a position to a velocity: the dimensions don't match. Velocity is the rate of change of position: it needs to be multiplied by a timestep in order to get a change of position. Presumably your code happens to work because your velocities are measured per frame and so the timestep is always 1. But this is inflexible: it means that if you ever need to change your framerate then you have to change all the velocities. Better to measure velocities per second. The code would be much easier to read if the Vector class supported arithmetic operations (which is easy to do in Python using __add__, __mul__ and other special methods). You'd then be able to compute the predicted position like this: me.pos + me.velocity * timestep The code could make more use of vectors. If an object's size were stored as a vector (instead of a pair of attributes w and h) then you could compute the axis-aligned bounding box more simply, perhaps like this: halfsize = me.size / 2 bounding_box = me.pos - halfsize, me.pos + halfsize This code repeatedly computes each object's axis-aligned bounding box before each collision. This is a waste of effort: better to compute this information once per frame and remember it. The code in Stuck_Response relies on the objects having square bounding boxes (it only uses w). So why have h at all? The strategy you're following here is to move the objects and then test to see if they intersect in their updated positions.
This has a problem, which is that objects can pass through each other. Consider a timestep like this, with two objects in their initial positions and their movement vectors shown [figure omitted]. The updated positions of the objects don't intersect [figure omitted], but a look at the swept paths of the two objects shows that at some point during the timestep they must have collided [figure omitted]. See this answer on Stack Overflow for some advice on finding collisions between moving convex polygons. The code in Translate compares every pair of objects. This means that it won't be able to handle very many objects before it starts to slow down due to the \$O(n^2)\$ runtime. Better to use some kind of spatial lookup structure like a quadtree to quickly find candidate collisions. |
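To make the reviewer's last point concrete, here is a minimal sketch of a broad-phase uniform grid (a simpler cousin of the quadtree mentioned above). The entity attribute pos and the cell size are illustrative assumptions, not part of the original code:

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(entities, cell_size=64.0):
    """Bucket entities by grid cell and only pair up entities sharing a cell,
    avoiding the O(n^2) all-pairs loop of Translate. A full implementation
    would insert each bounding box into every cell it overlaps; this sketch
    uses the centre point only."""
    grid = defaultdict(list)
    for e in entities:
        cell = (int(e.pos[0] // cell_size), int(e.pos[1] // cell_size))
        grid[cell].append(e)
    pairs = set()
    for bucket in grid.values():
        pairs.update(combinations(bucket, 2))
    return pairs  # run the precise Detect() test only on these candidate pairs
```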
_unix.70612 | I am attempting to install Arch Linux (for the hundredth time) and I recently ran across another problem. I am trying to find a list of my partitions. In order to do this I enter gdisk. When I do this, however, it returns: Type device filename, or press <Enter> to exit: I have attempted entering gdisk /dev/disk1. When I do this I get the error: Problem opening /dev/disk1 for reading! Error is 2. However, I am still able to mount partitions when I know the partition. I am simply trying to get a list of my partitions so I can remember which ones they are. Any help understanding the problem would be useful. (Off-topic question: boot loaders do not need to be installed in the first partition of root, correct? Last time I installed it I put it in /boot, yet I was given an error.) | GPT: Type device filename, or press <Enter> to exit? | arch linux;partition | gdisk is throwing an error because /dev/disk1 is only an example, not a real block device. Use gdisk /dev/sda if you want to work on your first drive. gdisk is extraordinarily well documented on its author Rod Smith's site: Rod's Books. |
_unix.53869 | I need to use tsocks (to tunnel an ssh connection through an ssh tunnel to reach given machines), but there is a problem. There are two servers that have the same IP address (that I need to reach through ssh tunnels). So the config for this situation would look like this: cat /etc/tsocks.conf path { reaches = 10.1.1.2/255.255.255.255 server = 127.0.0.1 server_port = 3000 } path { reaches = 10.1.1.2/255.255.255.255 server = 127.0.0.1 server_port = 2000 } How can I resolve this problem? | Same target IP address when using tsocks | tsocks | You'll need two tsocks.conf files, one for each SOCKS server, and then use them as: TSOCKS_CONF_FILE=~/.tsocks-A.conf tsocks some-cmd 10.1.1.2 and TSOCKS_CONF_FILE=~/.tsocks-B.conf tsocks some-cmd 10.1.1.2 Alternatively, if the SOCKS servers support SOCKS4A or SOCKS5, you could use dante's socksify instead of tsocks and use host names if they have different host names on the remote ends (and specify which SOCKS server to use based on host names or domain names in dante.conf). |
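If the two invocations need to be scripted, the same TSOCKS_CONF_FILE trick can be wrapped in a few lines of Python; the config paths are the hypothetical ones from the answer:

```python
import os
import subprocess

def tsocks_run(cmd, conf_path):
    """Run a command under tsocks using a specific config file."""
    env = dict(os.environ, TSOCKS_CONF_FILE=conf_path)
    return subprocess.run(["tsocks", *cmd], env=env, check=True)

# Reach the 10.1.1.2 behind SOCKS server A, then the one behind server B:
# tsocks_run(["ssh", "10.1.1.2"], os.path.expanduser("~/.tsocks-A.conf"))
# tsocks_run(["ssh", "10.1.1.2"], os.path.expanduser("~/.tsocks-B.conf"))
```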
_hardwarecs.6151 | I'm looking for a router that could wirelessly bridge with my main wireless router and still work as a wireless access point, just like the Wireless #2 router is doing in the image below [figure omitted]. Is there a router on the market that could do that? After some research, I found this AC3200 router, but I'm not sure if it would be able to do what I described above. I assume it could be possible since it has 3 wifi interfaces, but I'm not sure if I could set it up that way. Any suggestions? | Wireless bridge with AP | wireless;router | null |
_unix.59203 | I would like vim to do this in order to have syntax highlighting as set up in vim (without the need for extra tools). So instead of using cat file | <some_sh_tool> I would use vim +some_opts +... +q file. The problem is that vim restores the previous screen upon exit, but using some remote access tools this didn't happen, so it was basically working as cat with syntax highlighting. So, is this possible? EDIT: Thinking more about this, I think this is a great thing to have. Apart from syntax highlighting, other features of vim could be used while displaying file content, like line numbers, white space, wrapping, etc., especially within scripts, and because vim is omnipresent. | Vim to print file on terminal and exit | vim | I found exactly what I needed in a package called vimpager. It ships with the vimcat utility. |
_softwareengineering.269548 | We are planning to build a travel website in which we will be integrating multiple APIs (e.g. DOTW, GTA, Expedia) for hotels. I initially tried to use MySQL, but since hotels involve huge amounts of data and may contain numerous one-to-many relationships with images, amenities and rooms, the search becomes very slow when we have data for around 200000 hotels. Even fetching all details for just one hotel may result in a JOIN query over at least four tables, and scanning over all hotel records. So we are planning to migrate our product schema to a NoSQL database to make our search as fast as possible. Also, sometimes we need to run certain schedulers on our database for eliminating duplicates and for updating the newly added hotels which are sent by our providers. Our tech stack is basically Java, J2EE along with Spring and Hibernate. I have read about MongoDB, Cassandra, Redis and ElasticSearch, but I am now confused about whether simply using these tools can optimize the website's search performance. If so, then what features differ between these tools that could help me make a determination? | Are NoSQL databases the best choice for more efficiently querying large amounts of data? | database design;mongodb;redis;cassandra;elasticsearch | null |
_webmaster.2746 | There are a lot of questions on StackOverflow relating to session security / session hijacking, but there doesn't seem to be a really good solution to the problem. The three most common suggestions are as follows: 1. Track the user's IP address as part of their $_SESSION data, and possibly invalidate a session if it changes. The downside is that lots of users have dynamic IP addresses, so you risk invalidating a user seemingly at random (from their perspective). 2. Same as 1, but using a User Agent. Two issues here: there may not be a UA to track, and they can change during browser upgrades, etc. 3. A second cookie, with a unique token. The problem here is that if an attacker gets a hold of the normal session cookie, they're very likely to be able to get a hold of your secondary token as well. So, with these three options it seems that IP address is the best option, since you're guaranteed to be passed one and it's independent of physical security (and if the user is physically compromised, you lose regardless). With that in mind, I have a couple of questions relating to IP address changes: How often would a user's IP address really change under normal conditions? I have DSL at home, with the usual dynamic IP concerns, and according to gmail my IP hasn't changed in days. AFAIK, this only really happens when the modem cycles anyway, right? That seems like a rare enough event that it might be OK to invalidate the session. I think I remember Jeff saying in one of the SO podcasts that they did something similar, though it was possibly for something else. The idea was that using the first two (I believe) octets of an IP address could be considered close enough in some circumstances. This allows a user to move around on the same ISP, but the system would notice if the user was suddenly in another ISP's range. Is this a viable tactic? | Variable IPs: How variable are they? Best practices for tracking | security;cookie;tracking;ip address | null |
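For question 2, the "first two octets" comparison is easy to express with the standard ipaddress module; treating the first two octets as a /16 prefix is an assumption that matches the description above:

```python
import ipaddress

def same_ip_range(ip_a, ip_b, prefix_len=16):
    """True when both addresses share the same /16, i.e. their first two
    octets match -- a rough 'still on the same ISP' heuristic."""
    network = ipaddress.ip_network(f"{ip_a}/{prefix_len}", strict=False)
    return ipaddress.ip_address(ip_b) in network

print(same_ip_range("203.0.113.7", "203.0.200.1"))   # True: both 203.0.x.x
print(same_ip_range("203.0.113.7", "198.51.100.1"))  # False: different /16
```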
_unix.250067 | I wanted to have a git openlast command which would look at the last commit, get a list of added or changed files, filter them to only the text files, and finally open them in an editor. So far I've done the following: git show --stat HEAD read -p "Open in Vim tabs? (Y/n) " -n 1 -r if [[ -z $REPLY || $REPLY =~ [Yy] ]]; then vim -p $(git diff --diff-filter=AM --ignore-submodules --name-only HEAD^) fi The downfall is that if I added or changed a binary file in the previous commit, then it will be opened by the editor (Vim in this case). Is there a way to take the list outputted by the git diff command and remove binary files? | How do I filter a list of files for text-only files? | bash;git | You can pipe to xargs and use grep -Il "" to filter out binary files: git diff --diff-filter=AM --ignore-submodules --name-only HEAD^ | \ xargs grep -Il "" Example git openfiles command: #!/bin/bash git show --stat HEAD files=($(git diff --diff-filter=AM --ignore-submodules --name-only HEAD^ | xargs grep -Il "")) read -p "Open ${#files[@]} files in Vim tabs? (Y/n) " -n 1 -r if [[ -z $REPLY || $REPLY =~ [Yy] ]]; then exec vim -p "${files[@]}" else exit 1 fi |
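For reference, grep -I essentially treats a file as binary when it finds NUL bytes near the start. A rough Python equivalent of that filter, in case you want the check without the xargs pipeline (an approximation of the heuristic, not grep's exact rule):

```python
def is_text_file(path, probe_size=1024):
    """Approximate grep -I's binary detection: NUL bytes in the first
    block of the file mark it as binary."""
    with open(path, "rb") as fh:
        return b"\x00" not in fh.read(probe_size)

# changed_paths: output of `git diff --name-only HEAD^`, one path per line
# text_files = [p for p in changed_paths if is_text_file(p)]
```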
_codereview.23284 | I have recently implemented my SQLite helper class, which supports keeping an in-memory SQLite database open so that it is not lost. Please review it and tell me if there is a coding problem, and tell me what to do to prevent/fix it. using System;using System.Collections.Generic;using System.Data;using System.Data.SQLite;using System.Diagnostics;using System.Globalization;using System.IO;using System.Linq;namespace SQLite{ public class SqLiteDatabase : IDisposable { private readonly SQLiteConnection _dbConnection; /// <summary> /// Default Constructor for SQLiteDatabase Class. /// </summary> public SqLiteDatabase() { _dbConnection = new SQLiteConnection("Data Source=default.s3db"); } /// <summary> /// Single Param Constructor to specify the datasource. /// </summary> /// <param name="datasource">The data source. Use ':memory:' for in memory database.</param> public SqLiteDatabase(String datasource) { _dbConnection = new SQLiteConnection(string.Format("Data Source={0}", datasource)); } /// <summary> /// Single Param Constructor for specifying advanced connection options. /// </summary> /// <param name="connectionOpts">A dictionary containing all desired options and their values.</param> public SqLiteDatabase(Dictionary<String, String> connectionOpts) { String str = connectionOpts.Aggregate("", (current, row) => current + String.Format("{0}={1}; ", row.Key, row.Value)); str = str.Trim().Substring(0, str.Length - 1); _dbConnection = new SQLiteConnection(str); } #region IDisposable Members public void Dispose() { if (_dbConnection != null) _dbConnection.Dispose(); GC.Collect(); GC.SuppressFinalize(this); } #endregion public bool OpenConnection() { try { if (_dbConnection.State == ConnectionState.Closed) _dbConnection.Open(); return true; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return false; } public bool CloseConnection() { try { _dbConnection.Close(); _dbConnection.Dispose(); } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return false; } /// <summary> /// Gets the specified table from the Database. /// </summary> /// <param name="sql">The table to retrieve from the database.</param> /// <returns>A DataTable containing the result set.</returns> public DataTable GetDataTable(string sql) { var table = new DataTable(); try { using (SQLiteTransaction transaction = _dbConnection.BeginTransaction()) { using (var cmd = new SQLiteCommand(_dbConnection) {Transaction = transaction, CommandText = sql}) { using (SQLiteDataReader reader = cmd.ExecuteReader()) { table.Load(reader); transaction.Commit(); } } } return table; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } finally { table.Dispose(); } return null; } /// <summary> /// Executes a NonQuery against the database. /// </summary> /// <param name="sql">The SQL to execute.</param> /// <returns>A double containing the time elapsed since the method has been executed.</returns> public double? ExecuteNonQuery(string sql) { Stopwatch s = Stopwatch.StartNew(); try { using (SQLiteTransaction transaction = _dbConnection.BeginTransaction()) { using (var cmd = new SQLiteCommand(_dbConnection) {Transaction = transaction}) { foreach (string line in new LineReader(() => new StringReader(sql))) { cmd.CommandText = line; cmd.ExecuteNonQuery(); } transaction.Commit(); } } s.Stop(); return s.Elapsed.TotalMinutes; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return null; } /// <summary> /// Gets a single value from the database.
/// </summary> /// <param name="sql">The SQL to execute.</param> /// <returns>Returns the value retrieved from the database.</returns> public string ExecuteScalar(string sql) { try { using (SQLiteTransaction transaction = _dbConnection.BeginTransaction()) { using (var cmd = new SQLiteCommand(_dbConnection) {Transaction = transaction, CommandText = sql}) { object value = cmd.ExecuteScalar(); transaction.Commit(); return value != null ? value.ToString() : ""; } } } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return null; } /// <summary> /// Updates specific rows in the database. /// </summary> /// <param name="tableName">The table to update.</param> /// <param name="data">A dictionary containing Column names and their new values.</param> /// <param name="where">The where clause for the update statement.</param> /// <returns>A boolean true or false to signify success or failure.</returns> public bool Update(String tableName, Dictionary<String, String> data, String where) { string vals = ""; if (data.Count >= 1) { vals = data.Aggregate(vals, (current, val) => current + String.Format(" {0} = '{1}',", val.Key.ToString(CultureInfo.InvariantCulture), val.Value.ToString(CultureInfo.InvariantCulture))); vals = vals.Substring(0, vals.Length - 1); } try { ExecuteNonQuery(String.Format("update {0} set {1} where {2};", tableName, vals, where)); return true; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return false; } /// <summary> /// Deletes specific rows in the database. /// </summary> /// <param name="tableName">The table from which to delete.</param> /// <param name="where">The where clause for the delete.</param> /// <returns>A boolean true or false to signify success or failure.</returns> public bool Delete(String tableName, String where) { try { ExecuteNonQuery(String.Format("delete from {0} where {1};", tableName, where)); return true; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return false; } /// <summary> /// Inserts new data to the database. /// </summary> /// <param name="tableName">The table into which the data will be inserted.</param> /// <param name="data">A dictionary containing Column names and data to be inserted.</param> /// <returns>A boolean true or false to signify success or failure.</returns> public bool Insert(String tableName, Dictionary<String, String> data) { string columns = ""; string values = ""; foreach (var val in data) { columns += String.Format(" {0},", val.Key); values += String.Format(" '{0}',", val.Value); } columns = columns.Substring(0, columns.Length - 1); values = values.Substring(0, values.Length - 1); try { ExecuteNonQuery(String.Format("insert into {0}({1}) values({2});", tableName, columns, values)); return true; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return false; } /// <summary> /// Wipes all the data from the database. /// </summary> /// <returns>A boolean true or false to signify success or failure.</returns> public bool WipeDatabase() { DataTable tables = null; try { tables = GetDataTable("select NAME from SQLITE_MASTER where type='table' order by NAME;"); foreach (DataRow table in tables.Rows) { WipeTable(table["NAME"].ToString()); } return true; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } finally { if (tables != null) tables.Dispose(); } return false; } /// <summary> /// Wipes all the data from the specified table.
/// </summary> /// <param name="table">The table to be wiped.</param> /// <returns>A boolean true or false to signify success or failure.</returns> public bool WipeTable(String table) { try { ExecuteNonQuery(String.Format("delete from {0};", table)); return true; } catch (Exception e) { Console.WriteLine("SQLite Exception : {0}", e.Message); } return false; } }} | SQLite helper class | c#;sqlite | null |
_unix.172296 | I'm currently experiencing an issue whilst trying to create a bash script that restarts the LXPanel so that the image I use for the background is updated (I'm trying to make a tool that keeps me focused on my current task, and this was the best solution I could come up with). I'm almost there, but it seems that the LXPanel ALWAYS crashes after the 6th restart. I've tried using lxpanelctl restart but it does not respond. On inspection of /var/log/apport.log, the error returned is that /usr/bin/lxpanel is blacklisted. A log-out then log-in brings it back up, and I am able to run the program using lxpanel on the command line, but it's obviously not using the config file. Is there a way to take this program out of the blacklist, and is there any way to find out why this is crashing only on the 6th attempt (I've tested this several times and it is always on the 6th)? Please let me know if you need more info; I've only been using Linux for the last few months so there's lots I don't know yet. | LX Panel crashing mysteriously after 6 restarts | lxde;lubuntu | null |
_webmaster.58001 | A simple (probably stupid) question: is it possible to show different pages for the www and non-www versions of a website? I mean, for example, www.example.com would show the content in /mainfolder and example.com would show the content in /secondfolder. I don't want to redirect; for instance, if the user goes to www.example.com I don't want him redirected to example.com/www. I just want to know how to show 2 different index.html files, one for the www version of the website and one for the version without www. Or, if possible, how can I create the www subdomain in cPanel (as it won't let me)? Any way to override this? UPDATE: I tried this .htaccess edit and it worked partially: RewriteCond %{HTTP_HOST} =www.example.com RewriteRule ^(.*)$ http://example.com/folder/ [P] It now shows the contents of example.com/folder when users access www.example.com. However, it doesn't load the CSS of the page. Any way to fix this? | How to show different pages for www and non-www versions | subdomain;no www | null |
_codereview.90553 | Utility to calculate the square root of a number. The method also accepts an epsilon value, which controls the precision. The epsilon value could range over any number, including zero. I am expecting a general code review, or a better way to write the code if there is one. /** * The class MathUtil contains methods for performing basic numeric operations such as the squareRoot function. Example output: Square root of 2.0: 1.4142135623746899 */public class MathUtil { /** * Returns the correctly rounded positive square root of a double value. * Special cases: If the argument is NaN or less than zero, then the result * is NaN. If the argument is positive zero or negative zero, then the * result is the same as the argument. Otherwise, the result is the double * value closest to the epsilon. * * Function takes a non-negative real number * * @param a double * @param epsilon double * @return double */ public static double squareRoot(final double a, final double epsilon) { if (a == 0) return a; else return internalSqrRoot(a, a / 2, epsilon); } /** * Method internalSqrRoot. * * @param a double * @param x double * @param epsilon double * @return double */ public static double internalSqrRoot(final double a, final double x, final double epsilon) { if (closeEnough(a, x, epsilon)) { return x; } else { return internalSqrRoot(a, betterGuess(a, x), epsilon); } } /* * return true if the current guess is close enough to the accepted value * Method closeEnough. * * @param a double * @param x double * @param epsilon double * @return boolean */ public static boolean closeEnough(double a, double x, double epsilon) { return (Math.abs(x - ((a / x) + x) / 2)) <= epsilon; } /* * perform a simple manipulation to get a better guess (one closer to the * actual square root) by averaging y with x/y * Method betterGuess. * @param a double * @param x double * @return double */ public static double betterGuess(double a, double x) { return ((a / x) + x) / 2; }} | Utility Method to find the Square root of a number | java;mathematics;reinventing the wheel;numerical methods | Duplicated logic: Avoid duplicated logic like this: public static boolean closeEnough(double a, double x, double epsilon) { return (Math.abs(x - ((a / x) + x) / 2)) <= epsilon;} public static double betterGuess(double a, double x) { return ((a / x) + x) / 2;} The closeEnough method includes the exact same logic as betterGuess. Make it a habit to look at duplicated code fragments with suspicion. If you eliminate the duplication, the code becomes: public static boolean closeEnough(double a, double x, double epsilon) { return (Math.abs(x - betterGuess(a, x))) <= epsilon;} public static double betterGuess(double a, double x) { return ((a / x) + x) / 2;} ... but then, does this actually make sense? Calling betterGuess from closeEnough? Not really. How can you check if a candidate square root is close enough? You cannot compare with the real square root, because that's the target unknown.
What you can compare with is the original number, which you should be able to get by squaring: public static boolean closeEnough(double a, double x, double epsilon) { return Math.abs(a - x * x) <= epsilon;} This implementation is more logical, and has another positive side effect: it makes it possible to clean up the squareRoot method with the suspicious 0-check: public static double squareRoot(final double a, final double epsilon) { if (a == 0) return a; else return internalSqrRoot(a, a / 2, epsilon);} The method can become simply: public static double squareRoot(final double a, final double epsilon) { return internalSqrRoot(a, a / 2, epsilon);} It would seem the 0-check was there to prevent a division by zero problem in Math.abs(x - ((a / x) + x) / 2)). Method visibility: The MathUtil class name suggests it's a utility class, but the public MathUtil.betterGuess method (to name just one) doesn't seem very useful. It's an implementation detail that should be hidden, so make it private. Question the other methods too, and change their visibility as appropriate. The variable names are quite poor. For example, in closeEnough it's impossible to tell if the target number is a or x. target and candidate might have been better names. Input validation: More than just commenting "Function takes a non-negative real number", it would be better to enforce that by throwing an IllegalArgumentException. Poor JavaDoc: This kind of JavaDoc is worse than no JavaDoc at all: /** * Method internalSqrRoot. * * @param a double * @param x double * @param epsilon double * @return double */ It's worse, because it tells nothing new about the method, but since it's there, I've read it, in vain. Minor things: It's recommended to use braces even with single-line if statements. Redundant parentheses in the expression (Math.abs(x - ((a / x) + x) / 2)). Perhaps instead of internalSqrRoot, squareRootHelper might be a better name. Or, since the method has a different signature than squareRoot, it can just as well be squareRoot (overloading). |
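Putting the reviewer's fixes together, a minimal Python sketch of the same Heron/Newton iteration with the corrected closeEnough test against the original number (note that with floating point, epsilon must be strictly positive or the loop may never terminate):

```python
def square_root(a, epsilon):
    """Heron's method: average the guess with a/x until the guess, squared,
    is within epsilon of the target. Requires a >= 0 and epsilon > 0."""
    if a < 0:
        raise ValueError("a must be non-negative")
    if a == 0:
        return 0.0
    x = a / 2.0
    while abs(a - x * x) > epsilon:
        x = (a / x + x) / 2.0  # betterGuess step
    return x

print(square_root(2.0, 1e-9))  # 1.4142135...
```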
_unix.124572 | I am using Fedora 20, and whenever a new line opens in the command line terminal, the cursor, which is a solid black rectangle, flashes on and off about ten times, then remains steady. I think I have read somewhere that I can do something useful during the flashing period, but I have forgotten what it was, or where to find the reference again; or am I just imagining it? Please can someone confirm or explain this? In response to @sim's query about the terminal emulator: [Harry@localhost ~]$ echo $TERM xterm-256color | Why does terminal cursor flash briefly? | fedora;xterm;cursor | I can't find anything about something useful you can do during that time (though some random undocumented feature would not surprise me). However, it seems that this behavior is to save energy (by not having to wake up the GPU and redraw the screen for each blink). See the related question, and the (rejected) GNOME bug. |
_codereview.123732 | I created an event-delegation-like concept similar to the one present in jQuery. AFAIK, event delegation is used to register an event for an element which is supposed to be added dynamically. So, in jQuery we do the following: $(document.body) .on('click', '.main', function(){ /* code here */ }); .on('click', 'div.main', function(){ /* code here */ }); .on('click', 'div#main.main', function(){ /* code here */ }); .on('click', 'div#main.main[attr=1]', function(){ /* code here */ }); So, instead of doing this event delegation in jQuery, I am doing it using pure JavaScript. Here is my code: var sb = (function () { var eventObject = {}; document.body.addEventListener('click', function(e){ console.dir(e.currentTarget); var el = e.target; var parent = el.parentElement; while(parent !== document.body){ for(var item in eventObject){ var elements = document.querySelectorAll(item); for(var i = 0, ilen = elements.length; i < ilen; i++){ if(el === elements[i]){ eventObject[item].call(el, e); return; } } } el = parent; parent = el.parentElement; } }, false); return { on: function(event, selector, callback){ eventObject[selector] = callback; return sb; } }})(); As you can see, in the above code, I have hard-coded the click event (which is fine for now). Now, I will use the above code like this: sb .on('click', '.main', function(){ /* code here */ }); .on('click', 'div.main', function(){ /* code here */ }); .on('click', 'div#main.main', function(){ /* code here */ }); .on('click', 'div#main.main[attr=1]', function(){ /* code here */ }); This is working fine, but I feel like I am overdoing things in the above-created plugin. | Event delegation without jQuery | javascript;reinventing the wheel;event handling;delegates | null |
_codereview.173210 | In the following Python implementation I have used color-coded vertices to implement Dijkstra's algorithm in order to handle negative edge weights. G16 = {'a':[('b',3),('c',2)], 'b':[('c',-2)], 'c':[('d',1)], 'd':[]} # the graph is a dictionary with the key as nodes and the value as a # list of tuples # each of those tuples represents an edge from the key vertex to the first # element of the tuple # the second element of the tuple is the weight of that edge. # so, 'a':[('b',3),('c',2)] means an edge from a to b with weight 3 # and another edge from a to c with weight 2. class minq: #min_queue implementation to pop the vertex with the minimum distance def __init__(self,dist): self.elms = [] def minq_len(self): return len(self.elms) def add(self,element): if element not in self.elms: self.elms.append(element) def min_pop(self,dist): min_cost = 99999999999 for v in self.elms: min_cost = min(min_cost,dist[v]) for key,cst in dist.items(): if cst == min_cost: if key in self.elms: v = key self.elms.remove(v) return v def modified_dijkstras(graph,n): cost = {} color = {} # color interpretation: w = white = unvisited, g = grey = to be processed, b = black = already processed for vertex in graph: cost[vertex] = 9999999 #setting cost of each vertex to a large number color[vertex] = 'w' #setting color of each vertex as 'w' q = minq(cost) q.add(n) cost[n] = 0 while q.minq_len() != 0: x=q.min_pop(cost) color[x] = 'g' for j,cost_j in graph[x]: temp = cost[j] cost[j] = min(cost[j],cost[x] + cost_j) if cost[j] < temp and color[j] == 'b': #if the cost varies even when the vertex is marked 'b' color[j] = 'w' #color the node as 'w' if color[j] != 'g': color[j] = 'g' q.add(j) #this can insert a vertex marked 'b' back into the queue. color[x] = 'b' return cost The following is what is returned when you run the code in the Python interpreter with the graph defined on top: >>>import dijkstra >>>G16 = {'a':[('b',3),('c',2)], 'b':[('c',-2)], 'c':[('d',1)], 'd':[]} >>>dijkstra.modified_dijkstras(G16,'a') {'a': 0, 'b': 3, 'c': 1, 'd': 2} Please let me know if this algorithm has a better runtime than Bellman-Ford, as I am not iterating through all the vertices repeatedly. Please also report your analysis of the run time for this algorithm, if any. | Dijkstra's Algorithm modified to take negative edge weights | python;algorithm;graph | null |
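Since no answer is recorded here, one efficiency note on the question itself: the minq class scans all elements (and the whole dist dict) on every pop. A standard alternative is a heapq-based queue with lazy deletion; a sketch, keeping the question's cost/queue-membership structures (this speeds up each pop but does not change the worst case that re-inserting 'b' vertices creates with negative edges):

```python
import heapq

def pop_min(heap, cost, in_queue):
    """Pop the queued vertex with the smallest current cost in O(log n),
    skipping stale heap entries left over from earlier cost updates."""
    while heap:
        c, v = heapq.heappop(heap)
        if v in in_queue and c == cost[v]:
            in_queue.discard(v)
            return v
    return None

# When relaxing an edge to j:
#   heapq.heappush(heap, (cost[j], j)); in_queue.add(j)
```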
_scicomp.16347 | Suppose I have an unstructured polygonal mesh system like so [figure omitted]: Each node $x$ has Cartesian coordinates $(x_1,x_2)$, so for a given node we can form matrices like this: $$J(x,y,z) = \left(\begin{array}{cc}y_1-x_1 & z_1-x_1 \\ y_2-x_2 & z_2-x_2 \end{array}\right) = (y-x,z-x)$$ If my mesh were structured, i.e. a mapped Cartesian grid so that $(x_1,x_2) = (x_1(\xi,\eta),x_2(\xi,\eta))$, then these matrices would be approximations to the Jacobian matrix of the map $(\xi,\eta)\mapsto x$. My question is, for an unstructured mesh like the one above, what are my $J(x,y,z)$ matrices approximations of? Is there a well-defined, underlying continuum mapping for an unstructured grid? | Jacobian matrices on unstructured grids: underlying map? | unstructured mesh | null |
_webmaster.101740 | The title is self-explanatory; darn, I have to type enough to meet the 30-character constraint. | How can Similarweb.com monitor traffic of one website they don't own? | traffic;web traffic | null |
_ai.1925 | My question is regarding standard densely-connected feed-forward neural networks with sigmoidal activation. I am studying Bayesian optimization for hyper-parameter selection for neural networks. There is no doubt that this is an effective method, but I just want to delve a little deeper into the maths. Question: Are neural networks Lipschitz functions? | Are FFNN (MLP) Lipschitz functions? | neural networks;optimization;math | I'm not an expert in this area, but it would appear to depend on the choice of activation function: e^x is not Lipschitz continuous. See "Analytic functions which are not Lipschitz continuous". tanh(x) is. That said, this paper appears to give some conditions (specifically for dynamic ANNs) under which networks with activation functions involving e^x can be Lipschitz continuous, so possibly the above is not the whole story. |
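To make the accepted answer concrete: tanh is 1-Lipschitz (|tanh'| <= 1) and the logistic sigmoid is 1/4-Lipschitz, so a feed-forward network built from them is Lipschitz, with constant bounded by the product of the weight matrices' spectral norms (scaled by the activation's constant per hidden layer). A small NumPy sketch computing that upper bound for illustrative random weights:

```python
import numpy as np

def lipschitz_upper_bound(weights, activation_constant=1.0):
    """Upper bound on the Lipschitz constant of x -> W_n s(...s(W_1 x)),
    where s is activation_constant-Lipschitz (1.0 for tanh, 0.25 for the
    logistic sigmoid): product of spectral norms times the activation
    constant once per hidden layer."""
    norms = [np.linalg.norm(W, 2) for W in weights]  # largest singular values
    return float(np.prod(norms)) * activation_constant ** (len(weights) - 1)

rng = np.random.default_rng(0)
layers = [rng.normal(size=(32, 64)), rng.normal(size=(16, 32)), rng.normal(size=(1, 16))]
print(lipschitz_upper_bound(layers))
```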
_softwareengineering.41678 | I'm developing a Driver Safety Monitoring System, which is a kind of small software that would be implemented inside a car with connections to a few cameras. What I want to know is: to implement this software inside the vehicle, what kind of computer can I use? | How to implement software for my vehicle | software;mobile;artificial intelligence | www.mp3car.com has a lot of resources and information on installing computers into cars. Of course they are usually using the computers for playing music, but a computer is a computer, right? It's up to you to choose what software you want to run on it. |
_softwareengineering.119825 | Lots of specialized mobile devices use Windows CE or Windows Mobile. I'm not talking about smart phones here -- I know that Windows Phone 7 is Microsoft's current technology of choice there. I'm talking about barcode readers, embedded devices, industry PDAs with specialized hardware, etc. -- the kind of devices (Example 1, Example 2) where Windows Phone Silverlight development is not an option (no P/Invoke to access the hardware, etc.). Since direct Compact Framework support has been dropped in Visual Studio 2010, the only option to develop for these devices currently is to use outdated development tools (VS 2008), which already start to cause trouble on modern machines (e.g. there's no supported way to make the Windows Mobile Device Emulator's network stack work on Windows 7). Thus, my question is: What are Microsoft's plans regarding these mobile devices? Will they allow native applications on Windows Phone, such that, for example, barcode reader drivers can be developed that can be accessed in Silverlight applications? Will they re-add native Compact Framework support to Visual Studio and just haven't found the time yet? Or will they leave this niche market? | What's Microsoft's strategy on Windows CE development? | .net;mobile;windows phone 7 | I think the roadmap of Windows CE is still clear, because there's Microsoft Windows CE 7, the successor of Windows CE 6.x. Frustratingly, they named the product officially as Microsoft Windows Embedded Compact 7, but they said that's to be in line with the Windows 7 releases, only now for embedded. http://www.microsoft.com/windowsembedded/en-us/develop/windows-embedded-compact-for-developers.aspx For Windows Embedded Compact 7: http://www.microsoft.com/windowsembedded/en-us/develop/windows-embedded-compact-7-for-developers.aspx I suggest you install VS 2008 for Windows CE 7, because VS 2008 supports Windows CE 5, CE 6 and CE 7 devices, and VS 2010 still can't support Windows CE devices. This is from the CE7 website: NEW: Developing with Windows Embedded Compact 7 (formerly CE) Just released, Windows Embedded Compact 7 (the next generation of Windows Embedded CE) is based on the power of the Windows 7 platform. Compact 7 is a real-time operating system for a wide range of small-footprint consumer and enterprise devices. Development tools like Platform Builder, a Visual Studio 2008 plug-in, provide an integrated development environment (IDE) that enables you to build applications and Windows Embedded CE operating system software in a familiar environment. And Windows CE 7 has little relation to Windows Phone 7; they only share the same code base of basic services with Windows 7. There's also no detailed information about Silverlight running on Windows CE 7, but I can assure you they support it: http://www.microsoft.com/windowsembedded/en-us/develop/windows-embedded-compact-7-user-interface-development-gui-design.aspx |
_unix.264702 | I have a list name.txt with these strings: Los Angeles, CA us1.vpn.goldenfrog.com Washington, DC us2.vpn.goldenfrog.com Austin, TX us3.vpn.goldenfrog.com Miami, FL us4.vpn.goldenfrog.com New York City, NY us5.vpn.goldenfrog.com Chicago, IL us6.vpn.goldenfrog.com San Francisco, CA us7.vpn.goldenfrog.com Amsterdam eu1.vpn.goldenfrog.com Copenhagen dk1.vpn.goldenfrog.com Stockholm se1.vpn.goldenfrog.com Hong Kong hk1.vpn.goldenfrog.com London uk1.vpn.goldenfrog.com Now I want to use sed to delete everything before *.vpn.goldenfrog.com (* = 3 chars). Output I want: hk1.vpn.goldenfrog.com dk1.vpn.goldenfrog.com etc. | sed remove space | command line;text processing | If you want a sed solution: sed 's/.*[[:blank:]]\([^[:blank:]]*\)$/\1/' file.txt The captured group (\1) will contain the portion of the line after the last space; we are using that in the replacement. Example: % sed 's/.*[[:blank:]]\([^[:blank:]]*\)$/\1/' file.txt us1.vpn.goldenfrog.com us2.vpn.goldenfrog.com us3.vpn.goldenfrog.com us4.vpn.goldenfrog.com us5.vpn.goldenfrog.com us6.vpn.goldenfrog.com us7.vpn.goldenfrog.com eu1.vpn.goldenfrog.com dk1.vpn.goldenfrog.com se1.vpn.goldenfrog.com hk1.vpn.goldenfrog.com uk1.vpn.goldenfrog.com grep can easily do this too: % grep -o '[^[:blank:]]*$' file.txt us1.vpn.goldenfrog.com us2.vpn.goldenfrog.com us3.vpn.goldenfrog.com us4.vpn.goldenfrog.com us5.vpn.goldenfrog.com us6.vpn.goldenfrog.com us7.vpn.goldenfrog.com eu1.vpn.goldenfrog.com dk1.vpn.goldenfrog.com se1.vpn.goldenfrog.com hk1.vpn.goldenfrog.com uk1.vpn.goldenfrog.com |
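The same "keep the last whitespace-separated field" extraction in Python, for comparison with the sed and grep one-liners above:

```python
with open("name.txt") as fh:
    for line in fh:
        fields = line.split()
        if fields:              # skip blank lines
            print(fields[-1])   # last field, e.g. us1.vpn.goldenfrog.com
```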
_unix.24026 | There is a directory A whose contents are changed frequently by other people. I have made a personal directory B where I keep all the files that have ever been in A. Currently I just occasionally run rsync to back the files up from A to B. However, I fear the possibility that some files will get added to A and then removed from A before I get the chance to copy them over to B. What is the best way to prevent this from occurring? Ideally I'd like to have my current backup script run every time the contents of A get changed. | How to run a command when a directory's contents are updated? | files;directory;backup;monitoring | If you have inotify-tools installed you can use inotifywait to trigger an action if a file or directory is written to: #!/bin/sh dir1=/path/to/A/ while inotifywait -qqre modify "$dir1"; do /run/backup/to/B; done Where the -qq switch is completely silent, -r is recursive (if needed) and -e is the event to monitor, in this case modify. From man inotifywait: modify: A watched file or a file within a watched directory was written to. |
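If you would rather drive the backup from Python than a shell loop, the third-party watchdog library (pip install watchdog) wraps inotify on Linux; a sketch, with the backup command path kept as the placeholder from the answer:

```python
import subprocess
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class BackupOnChange(FileSystemEventHandler):
    def on_modified(self, event):
        # Placeholder command; rapid edits will trigger this repeatedly,
        # so a real script should debounce or let rsync de-duplicate work.
        subprocess.run(["/run/backup/to/B"])

observer = Observer()
observer.schedule(BackupOnChange(), "/path/to/A", recursive=True)
observer.start()
try:
    observer.join()
except KeyboardInterrupt:
    observer.stop()
```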
_unix.273481 | The ability to start a phone call from the terminal is mentioned in this transcript of a Q&A session with the Canonical community team. So, did somebody figure out how to do it? The terminal app for Ubuntu Phone/Touch is a powerful tool. Edit: I tried to start dialer-app via the terminal (terminal app) on my BQ Aquaris E5 HD Ubuntu Edition [Ubuntu 15.04 (OTA-9.1)] with sudo dialer-app. As a result I get the following error: QXcbConnection: Could not connect to display. I tried to google this error, but I was not able to link the suggested solutions to my problem. | How to start a phone call from the terminal in Ubuntu Phone? | ubuntu;terminal | You will have to do it this way: ubuntu-app-launch dialer-app tel:///###-###-#### |
_reverseengineering.6368 | I have the following two lines: .... push 401150h call sub_401253 .... So, when I click on push 401150h, IDA Pro shows: seg0001 : 00401120 dword_401120 dd 6F662F3Ch, 3C3E746Eh, 3E702Fh, 253A4E52h, 54522073h dd 2073253Ah, 73253A55h, 253A5020h, 656C774h, 616223Dh, 72676B63h, 646E756Fh dd 0D73h, 7320703Ch, 335504h, 7265464h, 5484531h, 55E4ADEh, A585B5448h, .....(and so on) So, my first question would be: what is this? What can it be? My own results: the thing which I mentioned above is a string, because in the function sub_401253 they copy it using lstrcpy() into a buffer: ... lea eax, [esp+1FC + Buffer] ... mov edi, [esp+208+arg_0] push edi, push eax, call lstrcpy ... After that, in the next block, the content of the buffer (which is the hexadecimal numbers now) is XORed in a loop. I assume that they encrypt or decrypt it (but that is not so important for me right now). I only want to know what IDA Pro tries to depict with push 401150h, which represents the hexadecimal numbers. That's it. I hope you can help me. Best regards | PUSHing a lot of hexadecimal numbers | ida;assembly;hexadecimal | The data at 00401120 is ASCII-encoded text: 3C 2F 66 6F 6E 74 3E 3C 2F 70 3E 00 52 4E 3A 25 </font></p>.RN:% 73 20 52 54 3A 25 73 20 55 3A 25 73 20 50 3A 25 s RT:%s U:%s P:% 77 6C 65 00 3D 22 62 61 63 6B 67 72 6F 75 6E 64 wle.="background 73 0D 00 00 s... You can tell IDA to decode those bytes as text by clicking on the data at 00401120 and pressing the A key. |
_unix.266231 | I have two Debian Jessie servers. One is my home server that I use for personal/hobby stuff; the other is my development server for work. For argument's sake, let's say... Personal: 1.1.1.1 Development: 1.1.1.2 I have one domain; let's say it's example.com. Currently, Personal calls a dynamic DNS service every few seconds to tell the DDNS service (which is hosting my domain) what my external IP address is. From there, my router is set up to port-forward all requests at ports 21, 22, 80, and 3000 to Personal. I don't want to buy an external IP from my ISP, let alone request two, one for each server. Also, I would like this setup to be semi-portable, i.e. no matter what router it's connected to, as long as the port is open, it works. From a little bit of research I think the answer to my question is a reverse proxy. I've installed Pound on Personal. However, I have been unable to find a tutorial which is close enough to my situation to reverse-engineer, and have found the amount of example Pound configs and general documentation lacking. This is what I would like to have happen... 1) Router port-forwards on ports 21, 22, 80, and 3000 to Personal on those same ports. 2) Pound on Personal sends all requests for my domain to Development unless the subdomain is personal. In effect this would mean... personal.example.com -> Personal (1.1.1.1) *.example.com -> Development (1.1.1.2) Can this be accomplished using Pound? If so, what would I put in Pound's config file? | Forward ALL incoming HTTP requests to one of two servers based on subdomain? | dns;ip;port forwarding;domain;reverse proxy | Domain name dispatch is only available to protocols that include domain names, i.e. HTTP and (mostly) HTTPS. Other protocols (such as FTP, SSH) don't include any domain name; rather, the client software uses DNS to resolve a given domain name into an IP address, and then connects using it. So, the short answer would be no. Rather, you could set up your routing to present other, different external ports for routing the least used host, e.g. using ports 20021, 20022, 20080 and 23000. Or, you could go for a tunneling solution (VPN) to allow portable hosts to access the local network. |
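The reason HTTP (unlike SSH or FTP) can be dispatched by name is that every request carries a Host header. A toy Python sketch of the mechanism only; the backend mapping is hypothetical, and a real deployment would still use Pound or nginx to do the actual proxying:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = {"personal.example.com": "1.1.1.1"}  # everything else -> default
DEFAULT_BACKEND = "1.1.1.2"

class HostDispatcher(BaseHTTPRequestHandler):
    def do_GET(self):
        # The Host header is what a name-based reverse proxy keys on.
        host = (self.headers.get("Host") or "").split(":")[0]
        backend = BACKENDS.get(host, DEFAULT_BACKEND)
        # A real reverse proxy would forward the request to `backend` here;
        # this sketch just reports the routing decision.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"would proxy {host!r} to {backend}\n".encode())

HTTPServer(("", 8080), HostDispatcher).serve_forever()
```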
_unix.322636 | I'm trying to join Active Directory in Xubuntu 16.04 in an enterprise business environment, so I'll refer to my REALM as MY.EXAMPLE.CORP. My issue is: when I run net ads join -U Administrator it asks me for the password of the AD administrator account. I enter the password but it just sits there; it doesn't give an error or success message. The terminal just hangs there. I tried the kinit and klist commands and the result is: Ticket cache: FILE:/tmp/krb5cc_0 Default principal: [email protected] Valid starting Expires Service principal 11/11/16 09:58:40 11/11/16 19:58:40 krbtgt/[email protected] renew until 12/11/16 09:58:34 I've modified all the files as I've read: krb5.conf, smb.conf, nsswitch.conf | Kerberos net ads join doesn't respond | active directory;kerberos | null |
_unix.138651 | I'm trying to reflect traffic from the internet to an internal device that I only want to have local access. I have a host in my DMZ that I'm trying to DNAT traffic through to this internal device, and SNAT so no internet route is needed. In testing, I am able to DNAT/SNAT from a local computer to this proxy host and access the resources on the internal device. However, when accessing the port on my router, I can see the requests arriving at the proxy host via tcpdump, and I see them increment the iptables DNAT rule counter, yet no connection is made. Furthermore, local tests increment both the DNAT and SNAT rule counters, but external traffic only increments the DNAT counter. The proxy host was spun up for only this purpose and has no other services. There is one interface with two IPs, .254 and .253. Incoming traffic should come to .254 and be SNAT'd from .253 on its way to the internal device. Kernel IPv4 forwarding is also enabled. Below is my iptables config: # Generated by iptables-save v1.4.7 on Sun Jun 22 22:49:18 2014 *filter :INPUT ACCEPT [32:4832] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [14:1016] -A INPUT -p tcp -m tcp --dport 443 -j MARK --set-mark 7 -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -i admin -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -i admin -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT -A INPUT -i local -p tcp -m state --state NEW -m tcp --dport 5308 -j ACCEPT -A INPUT -i admin -p tcp -m state --state NEW -m tcp --dport 10050 -j ACCEPT -A FORWARD -d 10.254.254.1/32 -p tcp -m state --state NEW,RELATED,ESTABLISHED -m tcp --dport 443 -j ACCEPT -A FORWARD -j ACCEPT -A INPUT -j ACCEPT -A OUTPUT -j ACCEPT COMMIT # Completed on Sun Jun 22 22:49:18 2014 # Generated by iptables-save v1.4.7 on Sun Jun 22 22:49:18 2014 *mangle :PREROUTING ACCEPT [73:6562] :INPUT ACCEPT [33:4290] :FORWARD ACCEPT [18:972] :OUTPUT ACCEPT [18:1408] :POSTROUTING ACCEPT [27:1624] -A INPUT -s 173.214.161.60 -j MARK --set-xmark 0x6/0xffffffff -A FORWARD -s 173.214.161.60 -j MARK --set-xmark 0x5/0xffffffff -A POSTROUTING -s 173.214.161.60 -j MARK --set-xmark 0x4/0xffffffff -A PREROUTING -s 173.214.161.60 -j MARK --set-xmark 0x3/0xffffffff -A OUTPUT -s 173.214.161.60 -j MARK --set-xmark 0x2/0xffffffff COMMIT # Completed on Sun Jun 22 22:49:18 2014 # Generated by iptables-save v1.4.7 on Sun Jun 22 22:49:18 2014 *nat :PREROUTING ACCEPT [31:3139] :POSTROUTING ACCEPT [14:1016] :OUTPUT ACCEPT [14:1016] -A PREROUTING -d 10.254.254.254/32 -i dmz -j DNAT --to-destination 10.254.254.2 -A POSTROUTING -o dmz -j SNAT --to-source 10.254.254.253 COMMIT # Completed on Sun Jun 22 22:49:18 2014 | Packets do not cross iptables POSTROUTING when originating from an external IP | iptables;routing | null |
_webmaster.92767 | Wondering if having identical JSON-LD schema data (verified as correct by the Google Structured Data Testing Tool) on the index page, and the same JSON-LD schema data on all subpages, would create any sort of problem with the Google search engine? Could this be hurting my ranking? Webmaster Console recognizes the pages with it, and is not complaining. | JSON-LD Schema Data on Multiple Pages | seo;schema.org;json ld | null |
_codereview.149292 | I have gone through all the steps to optimize the code: deactivating ScreenUpdating, deactivating calculation, events, and page breaks; removing the unnecessary Selects; adding constants; etc. However, I don't think my algorithm for copying is the best one, even if it is probably the simplest of them all. For Each cell In currentRange If cell = vbNullString Then Exit For cell.Select cellValue = cell.Value If cellValue = "x" Then Sheets(sourceData).Activate 'ActiveCell.Offset(0, -Selection.Column + 1).Range("F1:L1").Select ActiveCell.Offset(0, -Selection.Column + 1).Range("F1:L1").Copy Sheets(pasteSheet).Activate Rows(destinationCell).Select Set destinationRange = ActiveCell.Offset(0, -Selection.Column + 1).Range("F1:L1") 'destinationRange.Select destinationRange.PasteSpecial xlPasteValues Rows(destinationCell).Range("H1").Cut Rows(destinationCell).Range("G1").Insert 'ActiveCell.Offset(0, -Selection.Column + 1).Range("F1:L1").Select destinationCell = destinationCell + 1 Worksheets(sourceData).Select End If Next I copy a range from a row into another sheet, while shifting values from the H to the G column. I did not manage to create this functionality with .Copy Destination:=, which I think is faster. What would be the best way of optimizing this code? | Copy/Paste of Range | excel;vba;performance | null |
_unix.308112 | Hi, I'm getting an issue when I run file on some .php files on an apache2 vhost. Here is the problem: # file *.php file1.php: PHP script, UTF-8 Unicode text, with very long lines file2.php: PHP script, UTF-8 Unicode text, with very long lines file3.php: HTML document, UTF-8 Unicode text, with very long lines Any ideas on why the system (RHEL) doesn't see file3.php as a PHP script? # head file3.php <?include("./some/files.php");$var="";$var = "select var, var from vars order by 2";$var = var($var,$var);while ($var = @var($var)){ var($var) ; $var .= "\"".$var."\", " ; I've changed <? to <?php but nothing has changed. php -v PHP 5.4.16 (cli) | file on php showing HTML document | files;php | The file utility uses different heuristics to determine the file type. It may be the case that file3.php has more HTML tags than the other two. However, the output of the file utility does not influence your system's operation (unless you are parsing the output, of course). In particular, it is not your system (RHEL) that treats this file as HTML. If it is a valid PHP file, php will execute the script as it should, independent of what file says. (Try php -l file3.php for a syntax check.) |
_codereview.63461 | I am designing a web application and a Windows service and want to use the unit of work + repository layer in conjunction with a service layer, and I am having some trouble putting it all together so that the client apps control the transaction of data with the unit of work. The unit of work has a collection of all repositories enrolled in the transaction, along with commit and rollback operations: public interface IUnitOfWork : IDisposable{ IRepository<T> Repository<T>() where T : class; void Commit(); void Rollback();} The generic repository has operations that will be performed on the data layer for a particular model (table): public interface IRepository<T> where T : class { IEnumerable<T> Get(Expression<Func<T, bool>> filter = null, IList<Expression<Func<T, object>>> includedProperties = null, IList<ISortCriteria<T>> sortCriterias = null); PaginatedList<T> GetPaged(Expression<Func<T, bool>> filter = null, IList<Expression<Func<T, object>>> includedProperties = null, PagingOptions<T> pagingOptions = null); T Find(Expression<Func<T, bool>> filter, IList<Expression<Func<T, object>>> includedProperties = null); void Add(T t); void Remove(T t); void Remove(Expression<Func<T, bool>> filter);} The concrete implementation of the unit of work uses Entity Framework under the hood (DbContext) to save the changes to the database, and a new instance of the DbContext class is created per unit of work. public class UnitOfWork : IUnitOfWork{ private IDictionary<Type, object> _repositories; private DataContext _dbContext; private bool _disposed; public UnitOfWork() { _repositories = new Dictionary<Type, object>(); _dbContext = new DataContext(); _disposed = false; } The repositories in the unit of work are created upon access if they don't exist in the current unit of work instance. The repository takes the DbContext as a constructor parameter so it can effectively work in the current unit of work. public class Repository<T> : IRepository<T> where T : class{ private readonly DataContext _dbContext; private readonly DbSet<T> _dbSet; #region Ctor public Repository(DataContext dbContext) { _dbContext = dbContext; _dbSet = _dbContext.Set<T>(); } #endregion I also have service classes that encapsulate business workflow logic and take their dependencies in the constructor. public class PortfolioRequestService : IPortfolioRequestService{ private IUnitOfWork _unitOfWork; private IPortfolioRequestFileParser _fileParser; private IConfigurationService _configurationService; private IDocumentStorageService _documentStorageService; #region Private Constants private const string PORTFOLIO_REQUEST_VALID_FILE_TYPES = "PortfolioRequestValidFileTypes"; #endregion #region Ctors public PortfolioRequestService(IUnitOfWork unitOfWork, IPortfolioRequestFileParser fileParser, IConfigurationService configurationService, IDocumentStorageService documentStorageService) { if (unitOfWork == null) { throw new ArgumentNullException("unitOfWork"); } if (fileParser == null) { throw new ArgumentNullException("fileParser"); } if (configurationService == null) { throw new ArgumentNullException("configurationService"); } if (documentStorageService == null) { throw new ArgumentNullException("configurationService"); } _unitOfWork = unitOfWork; _fileParser = fileParser; _configurationService = configurationService; _documentStorageService = documentStorageService; } #endregion The web application is an ASP.NET MVC app; the controller gets its dependencies injected in the constructor as well.
In this case the unit of work and service class are injected. The action performs an operation exposed by the service, such as creating a record in the repository and saving a file to a file server using a DocumentStorageService, and then the unit of work is committed in the controller action.public class PortfolioRequestCollectionController : BaseController{ IUnitOfWork _unitOfWork; IPortfolioRequestService _portfolioRequestService; IUserService _userService; #region Ctors public PortfolioRequestCollectionController(IUnitOfWork unitOfWork, IPortfolioRequestService portfolioRequestService, IUserService userService) { _unitOfWork = unitOfWork; _portfolioRequestService = portfolioRequestService; _userService = userService; } #endregion[HttpPost] [ValidateAntiForgeryToken] [HasPermissionAttribute(PermissionId.ManagePortfolioRequest)] public ActionResult Create(CreateViewModel viewModel) { if (ModelState.IsValid) { // validate file exists if (viewModel.File != null && viewModel.File.ContentLength > 0) { // TODO: ggomez - also add to CreatePortfolioRequestCollection method // see if file upload input control can be restricted to excel and csv // add additional info below control if (_portfolioRequestService.ValidatePortfolioRequestFileType(viewModel.File.FileName)) { try { // create new PortfolioRequestCollection instance _portfolioRequestService.CreatePortfolioRequestCollection(viewModel.File.FileName, viewModel.File.InputStream, viewModel.ReasonId, PortfolioRequestCollectionSourceId.InternalWebsiteUpload, viewModel.ReviewAllRequestsBeforeRelease, _userService.GetUserName()); _unitOfWork.Commit(); } catch (Exception ex) { ModelState.AddModelError(string.Empty, ex.Message); return View(viewModel); } return RedirectToAction(Index, null, null, The portfolio construction request was successfully submitted!, null); } else { ModelState.AddModelError(File, Only Excel and CSV formats are allowed); } } else { ModelState.AddModelError(File, A file with portfolio construction requests is required); } } IEnumerable<PortfolioRequestCollectionReason> portfolioRequestCollectionReasons = _unitOfWork.Repository<PortfolioRequestCollectionReason>().Get(); viewModel.Init(portfolioRequestCollectionReasons); return View(viewModel); }On the web application I am using Unity DI container to inject the same instance of the unit of work per HTTP request to all callers, so the controller class gets a new instance and then the service class that uses the unit of work gets the same instance as the controller. This way the service adds some records to the repository which is enrolled in a unit of work and can be committed by the client code in the controller.One question regarding the code and architecture described above. How can I get rid of the unit of work dependency at the service classes? Ideally I don't want the service class to have an instance of the unit of work because I don't want the service to commit the transaction, I just would like the service to have a reference to the repository it needs to work with, and let the controller (client code) commit the operation when it see fits.On to the Windows service application, I would like to be able to get a set of records with a single unit of work, say all records in pending status. Then I would like to loop through all those records and query the database to get each one individually and then check the status for each one during each loop because the status might have changed from the time I queried all to the time I want to operate on a single one. 
The problem I have right now is that my current architecture doesn't allow me to have multiple units of work for the same instance of the service.

public class ProcessPortfolioRequestsJob : JobBase
{
    IPortfolioRequestService _portfolioRequestService;

    public ProcessPortfolioRequestsJob(IPortfolioRequestService portfolioRequestService)
    {
        _portfolioRequestService = portfolioRequestService;
    }
}

The Job class above takes a service in the constructor as a dependency and again is resolved by Unity. The service instance that gets resolved and injected depends on a unit of work. I would like to perform two get operations on the service class, but because I am operating under the same instance of the unit of work, I can't achieve that.

For all of you gurus out there, do you have any suggestions on how I can re-architect my application (unit of work + repository + service classes) to achieve the goals above?

I intended to use the unit of work + repository patterns to enable testability of my service classes, but I am open to other design patterns that will make my code maintainable and testable at the same time while keeping separation of concerns.

Here's the DataContext class that inherits from EF's DbContext, where I declared my EF DbSets and configurations:

public class DataContext : DbContext
{
    public DataContext() : base("name=ArchSample")
    {
        Database.SetInitializer<DataContext>(new MigrateDatabaseToLatestVersion<DataContext, Configuration>());
        base.Configuration.ProxyCreationEnabled = false;
    }

    public DbSet<PortfolioRequestCollection> PortfolioRequestCollections { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Configurations.Add(new PortfolioRequestCollectionConfiguration());
        base.OnModelCreating(modelBuilder);
    }
} | Unit of work + repository + service layer with dependency injection | c#;design patterns;dependency injection;asp.net mvc;repository | null
_datascience.9528 | I've understood that SVMs are binary, linear classifiers (without the kernel trick). They have training data $(x_i, y_i)$ where $x_i$ is a vector and $y_i \in \{-1, 1\}$ is the class. As they are binary, linear classifiers, the task is to find a hyperplane which separates the data points with the label $-1$ from the data points with the label $+1$.

Assume for now that the data points are linearly separable and we don't need slack variables.

Now I've read that the training problem is the following optimization problem:

${\min_{w, b} \frac{1}{2} \|w\|^2}$

s.t. $y_i ( \langle w, x_i \rangle + b) \geq 1$

I think I got that minimizing $\|w\|^2$ means maximizing the margin (however, I don't understand why it is the square here. Would anything change if one tried to minimize $\|w\|$?).

I also understood that $y_i ( \langle w, x_i \rangle + b) \geq 0$ means that the model has to be correct on the training data. However, there is a $1$ and not a $0$. Why? | Where exactly does $\geq 1$ come from in SVMs optimization problem constraint? | machine learning;svm | First problem: minimizing $\|w\|$ or $\|w\|^2$.

It is correct that one wants to maximize the margin. This is actually done by maximizing $\frac{2}{\|w\|}$. This would be the correct way of doing it, but it is rather inconvenient. Let's first drop the $2$, as it is just a constant. Now if $\frac{1}{\|w\|}$ is maximal, $\|w\|$ will have to be as small as possible. We can thus find the identical solution by minimizing $\|w\|$.

$\|w\|$ can be calculated by $\sqrt{w^T w}$. As the square root is a monotonic function, any point $x$ which maximizes $\sqrt{f(x)}$ will also maximize $f(x)$. To find this point $x$ we thus don't have to calculate the square root and can minimize $w^T w = \|w\|^2$.

Finally, as we often have to calculate derivatives, we multiply the whole expression by a factor $\frac{1}{2}$. This is done very often because $\frac{d}{dx} x^2 = 2 x$ and thus $\frac{d}{dx} \frac{1}{2} x^2 = x$. This is how we end up with the problem: minimize $\frac{1}{2} \|w\|^2$.

tl;dr: yes, minimizing $\|w\|$ instead of $\frac{1}{2} \|w\|^2$ would work.

Second problem: $\geq 0$ or $\geq 1$.

As already stated in the question, $y_i \left( \langle w,x_i \rangle + b \right) \geq 0$ means that the point has to be on the correct side of the hyperplane. However, this isn't enough: we want the point to be at least as far away as the margin (then the point is a support vector), or even further away.

Remember the definition of the hyperplane, $\mathcal{H} = \{ x \mid \langle w,x \rangle + b = 0\}$.

This description however is not unique: if we scale $w$ and $b$ by a constant $c$, then we get an equivalent description of this hyperplane. To make sure our optimization algorithm doesn't just scale $w$ and $b$ by constant factors to get a higher margin, we define that the (functional) distance of a support vector from the hyperplane is always $1$, i.e. the margin is $\frac{1}{\|w\|}$. A support vector is thus characterized by $y_i \left( \langle w,x_i \rangle + b \right) = 1$. As already mentioned earlier, we want all points to be either a support vector, or even further away from the hyperplane. In training, we thus add the constraint $y_i \left( \langle w,x_i \rangle + b \right) \geq 1$, which ensures exactly that.

tl;dr: training points don't only need to be correct, they have to be on the margin or further away.
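For readers who like to see this numerically, below is a quick sanity check of the primal problem with a generic solver. It is only a toy sketch (it assumes NumPy and SciPy are installed; the data and variable names are made up), not how one would train a real SVM.

import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [-2.0, -2.0]])   # one training point per class
y = np.array([1.0, -1.0])

def objective(p):                           # p = (w1, w2, b)
    w = p[:2]
    return 0.5 * np.dot(w, w)               # the 1/2 ||w||^2 from above

# one inequality y_i(<w, x_i> + b) - 1 >= 0 per training point
cons = [{'type': 'ineq',
         'fun': lambda p, i=i: y[i] * (np.dot(p[:2], X[i]) + p[2]) - 1.0}
        for i in range(len(y))]

res = minimize(objective, x0=np.zeros(3), constraints=cons)
w, b = res.x[:2], res.x[2]
print(w, b)              # roughly [0.25, 0.25] and 0
print(y * (X @ w + b))   # both values come out as 1: each point sits on the margin

Dropping the square or the factor $\frac{1}{2}$ in objective leaves the minimizer unchanged, which is exactly the point of the first part of the answer above.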
_webapps.39069 | I'm trying to fire a webhook (HTTP request) from Zapier (an If This Then That-like service) when a new GitHub gist is posted, i.e. have a new gist as the trigger. Zapier has GitHub integration and supports webhooks, but sadly does not support gists. I know there is a workaround: create an RSS feed for the gists and use that as a trigger, but I'd prefer not having to do that. Is there a way I can do this with Zapier, or if not, are there any other web services/apps that have similar functionality? | An If-This-Then-That-like service to fire an HTTP request triggered by GitHub Gists | github;if this then that;zapier | Zapier co-founder here. For anyone else curious about how you might do this yourself, GitHub has really killer API docs which show how to use their API to read/create your gists. It's kind of annoying as you'll have to poll for new entries and compare them across time, but it isn't infeasible. Using a standard RSS reader like Google Reader is a simple solution, as you alluded to. Further, this is a great suggestion; in fact I spent the last 30 minutes adding support for this. It's live now. If you have a Zapier account already, you'll need to add your GitHub account to us again to catch the new gist scope. For convenience's sake I've even spun up a quick template that sends a POST whenever a new Gist is detected.
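For anyone who does want to roll the polling approach by hand, a bare-bones version looks something like this (a Python sketch using the requests library; the username, webhook URL, and polling interval are placeholders to adapt):

import requests
import time

seen = set()
while True:
    # list public gists for a user via the GitHub API
    r = requests.get("https://api.github.com/users/octocat/gists")
    r.raise_for_status()
    for gist in r.json():
        if gist["id"] not in seen:
            seen.add(gist["id"])
            # fire the webhook for each gist not seen before
            requests.post("https://example.com/hook", json={"gist": gist["html_url"]})
    time.sleep(60)  # poll gently to stay under GitHub's rate limits

Note this also "fires" once for every pre-existing gist on the first pass; a real version would persist the seen set between runs.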
_cstheory.32605 | I have a set of N items, each with a subset of those items it can be paired with; each pair has a weight. I'd like to choose pairs to maximize the total weight, subject to each item being in at most M pairs. I believe this can be seen as an instance of the Stable Fixtures Problem (itself a generalization of the Stable Roommates Problem), which seeks to find a stable matching if one exists. However, I don't really care if the matching is stable, and I want to find a high-weight matching in every case. It doesn't need to be optimal; approximate is fine. Are there any approximate solutions for this problem, or does this problem go by another name in another field? Approaches I can think of would be to randomly perturb the ranks (weights) until a stable matching is found, or perhaps to treat it as a linear programming problem. | Approximations for the Stable Fixtures Problem | ds.algorithms;co.combinatorics;approximation algorithms;matching;heuristics | Your problem is called maximum weighted simple b-matching, and it's solvable in strongly polynomial time. See this paper for instance.
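For moderate instance sizes there is also a very practical route: hand the integer program straight to a solver. Below is a sketch on toy data, assuming the PuLP library is available; it is the exact b-matching formulation, just without the polynomial-time guarantee of the combinatorial algorithms.

import pulp

edges = {("a", "b"): 3.0, ("a", "c"): 2.0, ("b", "c"): 1.0}  # pair -> weight
M = 1  # capacity: max pairs per item

prob = pulp.LpProblem("b_matching", pulp.LpMaximize)
x = {e: pulp.LpVariable("x_%s_%s" % e, cat="Binary") for e in edges}
prob += pulp.lpSum(w * x[e] for e, w in edges.items())      # total weight
for v in {u for e in edges for u in e}:                     # one capacity constraint per item
    prob += pulp.lpSum(x[e] for e in edges if v in e) <= M
prob.solve()
print([e for e in edges if x[e].value() == 1])  # -> [('a', 'b')] for M = 1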
_unix.161964 | Background: I'm investigating methods for encrypted storage on untrusted machines. My current setup uses sshfs to access a LUKS-encrypted image on the remote machine, which is decrypted locally and mounted as ext3. (If I were to use sshfs only, someone gaining access to the remote machine could see my data.) Here's my example setup:# On the local machine:sshfs remote:/home/crypt /home/cryptcryptsetup luksOpen /home/crypt/container.img containermount /dev/mapper/container /home/crypt-open# Place cleartext files in /home/crypt-open,# then reverse the above steps to unmount.I want to make this resilient against network failures. To do this, I'd like to understand what caching / buffering happens with this setup. Consider these two commands:dd if=/dev/random of=/home/crypt-open/test.dat bs=1000000 count=100dd if=/dev/random of=/home/crypt-open/test.dat bs=1000000 count=100 conv=fsyncThe first command returns very quickly, and I can see from the network traffic that the data is still being transmitted after the command has returned. The second command seems to wait until the data is finished transferring.Concrete questions: What guarantees does fsync() make under this setup? When fsync() returns, how far along these layers is the data guaranteed to be synced? And what can I do to guarantee that it gets synced all the way down to the remote machine's hard drive?--- /home/crypt-open on the local machine|| (ext3 fs)|--- /dev/mapper/container on the local machine|| (LUKS)|--- /home/crypt/container.img on the local machine|| (sshfs)|--- /home/crypt/container.img on the remote machine|| (ext3 fs)|--- hard drive on the remote machine | Meaning of fsync() in sshfs+LUKS setup | luks;sshfs;buffer | I'd assume the weakest link here is the SSHFS code -- the rest of the stuff is in kernel and pretty heavily used, so it's probably fine. I've never actually looked at any FUSE code before, so there could be something else going on that I've missed, but according to the SSHFS source code, SSHFS's implementation of fsync() doesn't do a whole bunch, it just calls flush() on the IO stream.static int sshfs_fsync(const char *path, int isdatasync, struct fuse_file_info *fi){ (void) isdatasync; return sshfs_flush(path, fi);}At sshfs.c:2551, we can see that sshfs_flush() function doesn't send any sort of sync command to the remote machine that enforces an fsync. I believe the sshfs.sync_write flag means wait for commands to go to the server before returning from write, not fsync on the server on every write because that second meaning would be very odd. Thus your fsync measurement is slower because it's bottlenecked by network speed, not remote disk speed.static int sshfs_flush(const char *path, struct fuse_file_info *fi){ int err; struct sshfs_file *sf = get_sshfs_file(fi); struct list_head write_reqs; struct list_head *curr_list; if (!sshfs_file_is_conn(sf)) return -EIO; if (sshfs.sync_write) return 0; (void) path; pthread_mutex_lock(&sshfs.lock); if (!list_empty(&sf->write_reqs)) { curr_list = sf->write_reqs.prev; list_del(&sf->write_reqs); list_init(&sf->write_reqs); list_add(&write_reqs, curr_list); while (!list_empty(&write_reqs)) pthread_cond_wait(&sf->write_finished, &sshfs.lock); } err = sf->write_error; sf->write_error = 0; pthread_mutex_unlock(&sshfs.lock); return err;}Note that it's possible that the remote SFTP implementation does actually fsync on writes, but I think that's actually not what's happening. 
According to an old draft of the SFTP standard (which is the best I can find) there is a way to specify this behavior:7.9. attrib-bits and attrib-bits-valid...SSH_FILEXFER_ATTR_FLAGS_SYNC When the file is modified, the changes are written synchronously to the disk.which would imply that this isn't the default (as it's faster to not fsync). According to that standards document there doesn't appear to be a way to request a fsync on the remote file, but it looks like OpenSSH supports this as an extension to SFTP/* SSH2_FXP_EXTENDED submessages */struct sftp_handler extended_handlers[] = { ... { fsync, [email protected], 0, process_extended_fsync, 1 }, ...};static voidprocess_extended_fsync(u_int32_t id){ int handle, fd, ret, status = SSH2_FX_OP_UNSUPPORTED; handle = get_handle(); debug3(request %u: fsync (handle %u), id, handle); verbose(fsync \%s\, handle_to_name(handle)); if ((fd = handle_to_fd(handle)) < 0) status = SSH2_FX_NO_SUCH_FILE; else if (handle_is_ok(handle, HANDLE_FILE)) { ret = fsync(fd); status = (ret == -1) ? errno_to_portable(errno) : SSH2_FX_OK; } send_status(id, status);}I doubt it'd be hard to query for that extension and properly support fsync in SSHFS, that seems a pretty reasonable thing to do. That said, I think it'd probably be easier to just use Linux's network block device support which I assume supports all this stuff properly (though I've never used it myself, so it could be horrible). |
_webapps.97931 | I'm new to GitHub and I've noticed that GitHub allows committing under any user's data (and submitting pull requests under my account using commits made with fake user data). For example, I am able to set my user.name and user.email and pretend that I'm another user, and GitHub will automatically link this user name to the original owner of the e-mail address, while actually that person didn't commit anything and didn't give any permission to commit using his/her personal data. I'm quite lost, because I have no idea how to prevent this. Not only can I use another's data, but people can also use mine. Can anyone please clarify this for me? | Commit under another user on GitHub | github | Each git commit contains author information as plain text (call it the committer or the author). This data is filled in from git config or from the command line at commit time and can be faked, because it cannot be verified in any way. Every git server accepts git objects (including commit objects) from all write-enabled registered users. Those registered users push their own work to the server (with the commit author matching their name) as well as the contributed work of anyone else, whose authorship is kept (the commit author still matching the real author). Having core developers accept commits from any contributor (sent as a pull request, or by email for example) and push that contributed work to the mainline is a common git workflow in open-source projects. So git does not so much permit the use of someone else's data as the use of their identity in the commit author fields, yes. Hooks can be set up server-side to reject those commits, but I have never heard of anyone trying such a bad idea, since it would deny the opportunity to keep the original author names.
_softwareengineering.355233 | I have a big legacy C++ project need to implement unit testing with google test framework.I have managed to mock a module B which is a A depends on. So successfully write a unit test. But my question is now B is mocked. So to unit test B we need another test executable. So is it okay to have more than one test executable build with different mocks?Is it okay with test/build server automation? | more than one google test executable? | c++ | null |
_softwareengineering.219755 | Working on a E-commerce solution where I need to handle checkout based on anonymous customer and as of now I am not able to think properly how best this can be implemented.Our ShoppingCart is being saved in database and and every update/ edit in ShoppingCart is being updated in database.Now I need to take care about creating an anonymous customer and than assign this cart to that customer so that add to cart and well checkout can be associated with this customer.Can anyone suggest me what can be the right way to go for this?Should I create one anonymous user in database and use it everytime a request for new customer (anonymous ) is being created.Place that user in current user session.Perform any operation on cart with respect to the current session | anonymous checkout | java;design;e commerce;session | null |
_softwareengineering.313712 | I am thinking about the concepts of a web-application in which users can upload files to the server. I have multiple questions about the storage of these files.Imagine a service with 10,000+ users and 20TB of uploaded data. What would be the better practice when it comes to storing the data on the server?Directly in the Database. This would probably be very nice because of automatic backups of the files when a backup of the database is created. However, I am worried it could drastically slow down the database access. As Files on the server. The better choice? Backups are more complicated but maybe the access to the files is faster?Also, when more and more users register and upload more and more files the disk capacity will decrease exponentially. Would you, at some point, suggestMake use of a 3rd party service provider to store the masses of data - how fast is the access here, can I download / upload at any time?Get more and more harddrives and build a huge RAID storage system? | Storing mass user-files | data;storage;file storage | null |
_softwareengineering.178551 | This is a follow up question to my original question. I'm thinking of going with generating diffs and storing those diffs in the database 'History' table. I'm using diff-match-patch library to generate what is called a 'patch'. On every save, I compare previous and new version and generate this patch. The patch could be used to generate a document at specific point in time. My dilemma is how to store this data. Should I:a Insert a new database record for every patch?b. Store these patches in javascript array and store that array in history table. So there is only one db History record for document with an array of all the patches.Concerns with:a. Too many db records generated. Will be slow and CPU intensive to query.b. Only one record. If record is somehow corrupted/deleted. Entire revision history is gone.I'm looking for suggestions, concerns with either approach. | Storing revisions of a document | database;database design;versioning | null |
_codereview.29269 | I'm trying to clean some incoming $_GET parameters. I've not done this before, so I'd love feedback.I'm especially concerned with the array. While all the other parameters control simple logic, the array will be saved into the database and potentially output to users.Please also feel free to point out any redundancies.<?php// int: only want a positive value// array: wont know what this could be but it could be anything urlencoded// string: three possible outcomes: foo, bar, fudge( isset( $_GET[val1] ) ? (bool) $val1 = true : (unset) $val1 );( isset( $_GET[val2] ) ? (int) $val2 = sanitize_absint( $_GET[val2] ) : (unset) $val2 );( isset( $_GET[val3] ) ? (array) $val3 = sanitize_array($_GET[val3]) : (unset) $val3 );( isset( $_GET[val4] ) ? (string) $val4 = sanitize_string($_GET[val4]) : (unset) $val4 );// further refine val4 stringval4_strip($val_4);// sanitizefunction gfahp_sanitize_absint( $int ) { $int = filter_var( $int, FILTER_SANITIZE_NUMBER_INT ); $int = abs( intval( $int ) ); // positive number return $int;}function gfahp_sanitize_string( $string ) { // is this enough to prevent anything icky? $string = filter_var( $string, FILTER_SANITIZE_STRING, FILTER_FLAG_STRIP_LOW ); return $string;}function gfahp_sanitize_array( $array ) { $array = array_walk_recursive( $array, gfahp_sanitize_string ); return $array;}function val4_strip($val_4){// strip down val4 if (!empty($val4){ switch ($val4) { case 'foo': $val4 = 'foo'; break; case 'bar': $val4 = 'bar'; break; case 'fudge': $val4 = 'fudge'; break; default: $val4 = (unset) $val4 break; } } return $val4;} | Am I sufficiently cleaning incoming $_GET parameters, especially in the array? | php;url | null |
_unix.206891 | I'm aware of how to audit for changes to the /etc/sysconfig/iptables file in CentOS/RHEL 6 and earlier, but how do I audit for changes made only to the running configuration? | Audit on changes to the running iptables configuration | linux;iptables;linux audit | The following auditctl rule should suffice:[root@vh-app2 audit]# auditctl -a exit,always -F arch=b64 -F a2=64 -S setsockopt -k iptablesChangeTesting the change:[root@vh-app2 audit]# iptables -A INPUT -j ACCEPT[root@vh-app2 audit]# ausearch -k iptablesChange----time->Mon Jun 1 15:46:45 2015type=CONFIG_CHANGE msg=audit(1433188005.842:122): auid=90328 ses=3 op=add rule key=iptablesChange list=4 res=1----time->Mon Jun 1 15:47:22 2015type=SYSCALL msg=audit(1433188042.907:123): arch=c000003e syscall=54 success=yes exit=0 a0=3 a1=0 a2=40 a3=7dff50 items=0 ppid=55654 pid=65141 auid=90328 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=3 comm=iptables exe=/sbin/iptables-multi-1.4.7 key=iptablesChangetype=NETFILTER_CFG msg=audit(1433188042.907:123): table=filter family=2 entries=6[root@vh-app2 audit]# ps -p 55654 PID TTY TIME CMD55654 pts/0 00:00:00 bash[root@vh-app2 audit]# tty/dev/pts/0[root@vh-app2 audit]# cat /proc/$$/loginuid90328[root@vh-app2 audit]#As you can see from the above output, after auditing for calls to setsockopt when optname is IPT_SO_SET_REPLACE (which is 64 decimal, 0x40 hex) it was able to log changes to the running iptables configuration.I was then able to catch the relevant audit information such as the the user's loginuid (since they would likely have sudo'd to root prior to updating the firewall) as well as the PID of the calling program. |
_codereview.107798 | I have a 'Membership' model that keeps track of a Member's membership. A membership can be updated by an admin (admin_add), or by the user (update). This code is working great, but I'd love some feedback on how it looks, sections that could be improved, etc.Membership Model: class Membership extends AppModel { /** * Validation rules * * @var array */ public $validate = array( 'membership_type_id' => array( 'notempty' => array( 'rule' => 'notBlank', 'message' => 'Membership Type Required', 'allowEmpty' => false, 'required' => true, ), 'validateMembership' => array( 'rule' => 'validateMembershipTypeModel', 'message' => 'Invalid Membership Type', ), ), 'updated_by_member_id' => array( 'validateMember' => array( 'rule' => 'validateMemberId', 'message' => 'Invalid Updated By Member ID', 'allowEmpty' => true, ), ), 'payment_id' => array( 'validatePayment' => array( 'rule' => 'validatePayment', 'message' => 'Invalid Payment ID', 'allowEmpty' => true, ), ), 'expires' => array( 'rule' => array('date', 'ymd'), 'message' => 'Please enter a valid expiration date', 'allowEmpty' => false, 'required' => true ), 'renewed' => array( 'rule' => array('date', 'ymd'), 'message' => 'Please enter a valid renewal date', 'allowEmpty' => true, ), ); /* * Validation function to make sure the membership type id is valid * * @return bool */ public function validateMembershipTypeModel($check) { $mTypeModel = ClassRegistry::init('MembershipType'); if($mTypeModel->validateMembershipType($this->data['Membership']['membership_type_id'])) { return true; } return false; } /* * Validation function to make sure the updated_by_member_id is valid (a valid member) * * @return bool */ public function validateMemberId($check) { if(! isset($this->data['Membership']['updated_by_member_id'])) { return false; } $mModel = ClassRegistry::init('Member'); $mModel->contain(); if($mModel->findById($this->data['Membership']['updated_by_member_id'])) { return true; } return false; } /* * Validation function to make sure the payment ID is valid * * UPDATE * * @return bool */ public function validatePayment($check) { return true; } /** * belongsTo associations * * @var type array */ public $belongsTo = array( 'Member', 'MembershipType' ); /* * Validate Membership Data * * return array membership data */ public function _validateMembershipData($data = array()) { // Validate 'corporate' flag if(isset($data['corporate']) && $data['corporate'] == 'on') { $data['corporate'] = 1; } // Validate 'international' flag if(isset($data['international']) && $data['international'] == 'on') { $data['international'] = 1; } return $data; } /* * Create new membership * Anytime a membership is updated, this function is called * * * return integer membershipID */ public function createNewMembership($data) { if(empty($data)) { return false; } // Validate data $membership_data = $this->_validateMembershipData($data); $this->set($membership_data); if ($this->validates()) { // Save data $this->create(); if(! $this->save($membership_data)) { throw new NotFoundException('Could not find save membership'); } // Assign the membership_id to the member if(! 
$this->assignMembershipToMember($this->id, $data['member_id'])) { throw new NotFoundException('Could not find save membership_id to member'); } // Return Membership ID return $this->id; } return false; } /* * Assign a membership id to a member * * param membership_id * param member id * return bool on success */ public function assignMembershipToMember($membership_id, $member_id) { $member = ClassRegistry::init('Member'); $member->id = $member_id; if($member->saveField('membership_id', $membership_id)) { return true; } return false; }}/** * Memberships Controller * * @property Membership $Membership * @property PaginatorComponent $Paginator * @property SessionComponent $Session */class MembershipsController extends AppController { public $uses = array('Member', 'Membership', 'MembershipType'); /** * admin_update method - Update a member's Membership * * @throws NotFoundException * @param integer $id * @return void */ public function admin_update($id = null) { // Validate the member id $this->Member->contain('Membership'); $member = $this->Member->findById($id); if (! $this->Member->exists($id)) { throw new NotFoundException(__('Invalid member')); } if ($this->request->is(array('post', 'put'))) { // Fill in some of the membership data manually $this->request->data['Membership']['updated_by_member_id'] = $this->Auth->user('id'); $this->request->data['Membership']['member_id'] = $id; $this->request->data['Membership']['renewed'] = date('Y-m-d'); $this->request->data['Membership']['created_by'] = 'MANUAL'; // Create the membership if ($this->Membership->createNewMembership($this->request->data['Membership'])) { $this->Session->setFlash(__('Membership Updated.'), 'success'); return $this->redirect(array('admin' => false, 'controller' => 'members', 'action' => 'view', $id)); } else { $this->Session->setFlash(__('There was an error updating the membership. Please try again.'), 'error'); } } else { $this->request->data = $this->Membership->find('first', array('conditions' => array('id' => $member['Member']['membership_id']))); } } /** * update method - Update a member's Membership * * @throws NotFoundException * @param integer $id * @return void */ public function update($membership_type_id = null) { // Validate the membership type id $this->MembershipType->contain(); $membership_type = $this->MembershipType->findById($membership_type_id); if(! $membership_type) { $this->Session->setFlash(Invalid Membership Type, 'error'); return $this->redirect('/'); } // Set the member ID to the person logged in $memberId = $this->Auth->user('id'); // If they are submitting if ($this->request->is('post')) { // Assume true for testing $payment = true; $international = true; if($payment === true) { // Build the membership data $membership_data = array( 'member_id' => $memberId, 'membership_type_id' => $membership_type_id, 'renewed' => date('Y-m-d'), 'expires' => date('Y-m-d', strtotime(+1 year)), ); // Create a new membership if($this->Membership->createNewMembership($membership_data)) { $this->Session->setFlash(Success! Your membership has been updated! Please make sure your address is correct below!, 'success'); return $this->redirect(array('controller' => 'members', 'action' => 'view')); } else { $this->Session->setFlash(__('There was an error updating the membership. Please try again.'), 'error'); } } } $this->set(compact('membership_type')); }} | CakePHP Membership Management | php;cakephp | null |
_reverseengineering.8017 | Say we have a Windows application which sends some packets over HTTPS. We need to extract the content of these packets (unencrypted, of course). There is no way to get hold of the server's private certificate, and a MitM attack doesn't work (the application uses some MitM defense). So decryption seems to be off the table. The only choice (I suppose) is to extract these packets from the application before they get encrypted. The application is well protected; it has no dependency on OpenSSL DLLs. However, we have a strong feeling that it uses OpenSSL (statically linked, and maybe the OpenSSL source was even modified before compiling/linking). Hooking a call to OpenSSL functions (like ssl_write()) is not simple, because the application's executable is packed and obfuscated. It also has debugging protection, but a stealth debugger which avoids this defense has already been found, so we can debug this application. However, the code, as seen during debugging, is a complete mess (obfuscated). Even the system DLLs loaded by this application are completely messed up. Here is an example of what the send() function from WS2_32.dll looks like while debugging this application. For reference, here is what it looks like in a normal (unprotected) application. So it's very hard to understand how the function arguments are passed; moreover, it looks like they can be passed in different ways (not sure, but it looks that way from debugging experiments). This seems to be quite a common task, since there are many Windows applications which use HTTPS and statically linked OpenSSL. Hopefully somebody has such experience and can share it. | Extracting HTTPS packets before encryption | debugging;dll injection;https protocol | null
_codereview.4960 | I'm trying to isolate a web service in its own class, and I plan to add a separate class for each web method there is in the web service. What I have so far works, but I have a nagging feeling that I've missed something (apart from the variable declarations omitted here; I didn't want to clog the page).

Web service instantiation class and its fault handler:

public class CfdWS {
    [Bindable]
    private var model:ModelLocator = ModelLocator.getInstance();

    public function loadWebService():WebService {
        var webService:WebService = new WebService();
        webService.wsdl = model.configXML.cfdwsWSDL;
        webService.addEventListener(FaultEvent.FAULT, onWebServiceFault);
        webService.loadWSDL();
        return webService;
    }

    private function onWebServiceFault(event:FaultEvent):void {
        var fault:Fault = event.fault;
        var message:String = "\ncodigo: " + fault.faultCode;
        message += "\nDetalle: " + fault.faultDetail;
        Alert.show("Error de webservice: " + message);
    }
}

The following is my web service method call class. I have written only what I think is the essential code for the question.

public class GeneratePDF extends CfdWS {
    public function generatePDF():void {
        webService = loadWebService();
        webService.addEventListener(LoadEvent.LOAD, doGeneratePDF);
    }

    private function doGeneratePDF(event:LoadEvent):void {
        webService.generatePDF.addEventListener(ResultEvent.RESULT, generatePDFResultHandler);
        webService.generatePDF(pdfData);
    }

    private function generatePDFResultHandler(event:ResultEvent):void {
        // After getting what I want, I remove the event listeners here.
    }
}

I'm trying to re-write an application that is already in production while in the testing phase (testing for the next version, I mean). | Actionscript web service and web method call classes | actionscript 3 | I don't see why you would put every method of the service in a separate class. A method is a function of the class. I imagine you wanted to decouple your code, but doing it this way you will force a lot of overhead:

- the service is instantiated for every 'method' called, and then, hopefully, garbage collected (as you remove event listeners and there are no more references to the service left)
- because of the above, the service is stateless; with time you may want to add some functionality like caching, but you'd need to change the whole code structure for that
- CfdWS is not a descriptive name; your way of decoupling code will force you to make three or even five times more classes than you normally would, so I would expect a hell on the file-naming level
- really, dividing into so many classes is not a good idea; you don't want to switch between files all the time; try to put related code in one class, and if it grows big, create some helper classes

I think you already understand the benefit of a good MVC implementation. Try Robotlegs, it really makes life easier: http://www.robotlegs.org/
_computerscience.4155 | I have a weird bug that I cannot figure out, and the Apple forums were not too helpful. I updated my MacBook Pro (Retina, early 2015) to the newest OS X, Sierra. My code ran fine before; now, when I call glfwCreateWindow(mWidth, mHeight, mTitle, nullptr, nullptr);, I get the following error:

> ERROR: Setting <private> as the first responder for window <private>, but it is in a different window ((null))! This would eventually crash when the view is freed. The first responder will be set to nil.

I have no idea why my program does not work, and yes, I have set GLFW to be forward-compatible.

P.S. computer info:
MacBook Pro (Retina 13 inch, early 2015)
CPU: 2.7 GHz Intel Core i5
GPU: Intel Iris Graphics 6100 1536 MB | Error when calling glfwCreateWindow() on OS X after updating to Sierra | opengl | null
_cs.66845 | I was reading a book on lambda calculi and encountered an equality-theory rule called the rule of weak extensionality, which is shown as follows:

$\frac{M \, = \, N}{\lambda x.M \, = \, \lambda x.N}$

Yes, it is obviously true. But why is it called the rule of weak extensionality? What is weak? What does extensionality mean here? I think the reason should be interesting. | what is weak extensionality in $\lambda$-calculus? | lambda calculus | There are different kinds of equality. Equality is said to be extensional when things are equal precisely when their parts are equal, so to speak:

- Equality of the elements of $A \times B$ is extensional if, for all $u, v : A \times B$, $u = v$ if and only if $\pi_1(u) = \pi_1(v)$ and $\pi_2(u) = \pi_2(v)$. Here $\pi_1$ and $\pi_2$ are the canonical projections. That is, $u$ and $v$ are equal if they have the same components (parts).
- Equality of elements of $A \to B$ is extensional if, for all $f, g : A \to B$, $f = g$ if and only if $f(a) = g(a)$ for all $a : A$. That is, $f$ and $g$ are equal if they have the same values (parts).
- Equality of sets is extensional if, for all sets $S$ and $T$, $S = T$ if and only if $x \in S \Leftrightarrow x \in T$ for all $x$. That is, $S$ and $T$ are equal if they have the same elements (parts).

Equality which is not extensional is called intensional. For instance, if by $A \to B$ we mean programs (pieces of code) which take inputs $A$ and produce outputs $B$, then equality of functions could simply mean equality of the code.

The rules of the $\lambda$-calculus do not specify whether equality is extensional or intensional. They allow for both possibilities. With this in mind we see that it is not entirely clear whether the $\xi$-rule (that's what it's called)

$$\frac{M = N}{\lambda x . M = \lambda x . N} \tag{$\xi$}$$

should be part of the $\lambda$-calculus. Normally it is, but we could envision a situation in which $\lambda x . M$ gets compiled differently from $\lambda x . N$ for some reason. So much for it being obviously true.

Personally, I would not call the $\xi$-rule weak extensionality, because that conveys the wrong idea. It is a congruence rule expressing the fact that $\lambda$-abstraction preserves equality. Another congruence rule is

$$\frac{M_1 = M_2 \qquad N_1 = N_2}{M_1 N_1 = M_2 N_2}.$$

If we insist, we can think of the $\xi$-rule as a weak form of extensionality in the sense that two $\lambda$-abstractions are equal if their bodies (parts) are equal.
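A footnote on why the name nevertheless stuck (this is how I read Barendregt's presentation, so treat it as a pointer rather than gospel): the $\xi$-rule is the "weak half" of extensionality in a precise sense. Together with the $\eta$-rule, $\lambda x . M x = M$ for $x \notin FV(M)$, it derives the full extensionality rule

$$\frac{M x = N x \qquad x \notin FV(M N)}{M = N} \ (\text{ext})$$

since from $M x = N x$ the $\xi$-rule gives $\lambda x . M x = \lambda x . N x$, and applying $\eta$ to both sides collapses this to $M = N$.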
_unix.200422 | It seems that the encryption provided by pdftk(1) is the level supported by Acrobat 5 (I added a password to a PDF file using pdftk then opened it in Acrobat Pro 9 and examined the security settings; Acrobat 5-compatible was the selected setting).However, I want to encrypt a PDF yet leave the metadata as clear text. It appears that jPDFTweak can do this (jPDF Tweak Documentation and scroll to the Encrypt/Sign section and the screenshot shows a checkbox for Do not encrypt metadata). (Edit: Yep, tried jPDFTweak and it does work.)I have a few dozen PDFs to process (repeatedly!) and I need a command line interface, so I'd prefer to use pdftk for this (the rest of my workflow uses pdftk).Any ideas? | How do I encrypt (password protect) a PDF without encrypting the metadata? | encryption;pdf;pdftk | null |
_softwareengineering.221265 | Are all scripting languages dynamically typed? I am using TCL. It is a scripting language, and it does not enforce or even allow type declaration of variables. It is instead a dynamically typed language with duck typing: the type of a variable is assumed by the interpreter according to the value assigned to it. I would really like to know whether there are scripting languages that are strictly/strongly typed. | Are all scripting languages dynamically typed? | scripting;dynamic typing;interpreters | null
_softwareengineering.137599 | Recently I had a discussion with a developer who mentioned that they routinely create and delete tables and columns while working on new features, and justified this by saying that it is normal when using an agile development process. As most of my background is in a waterfall development environment, I wonder if this is actually considered proper under agile development, or if it might be a sign of an underlying problem, either with the program architecture or with their following of the agile process. | Is continuous creation and deletion of tables a sign of an architectural flaw? | architecture;agile;database | It's becoming more and more apparent to me every day that "agile" is becoming a synonym for poorly thought-out, chaotic, hurried, and seat-of-your-pants. And none of those things are compatible with an Agile approach as I understand it. Having an effective and repeatable Agile process is not easy, and I don't believe that it inherently reduces the total amount of work to be done, even though it may very well lead to better products. If they've said that they don't have time to refactor the database, then they probably also don't have time to set up versioning and migration for the database. They probably haven't taken the time to create a suite of functional tests for it. All of those things are what I think of when I think of a solid Agile process that's headed for success. In the end, Agile is just a word. What you are doing day-to-day determines whether you'll be successful or not.
_codereview.66889 | I want to make sure that the code I have for encrypting and decrypting a serializable object makes sense and is proper. Does it look right to you? Here's what I have so far:

public static void encrypt(Serializable object, String path) throws IOException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeyException {
    try {
        // Length is 16 byte
        SecretKeySpec sks = new SecretKeySpec("MyDifficultPassw".getBytes(), "AES/ECB/PKCS5Padding");
        // Create cipher
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, sks);
        SealedObject sealedObject = new SealedObject(object, cipher);
        // Wrap the output stream
        CipherOutputStream cos = new CipherOutputStream(new BufferedOutputStream(new FileOutputStream(path)), cipher);
        ObjectOutputStream outputStream = new ObjectOutputStream(cos);
        outputStream.writeObject(sealedObject);
        outputStream.close();
    } catch (IllegalBlockSizeException e) {
        e.printStackTrace();
    }
}

public static void decrypt(Serializable object, String path) throws IOException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeyException {
    SecretKeySpec sks = new SecretKeySpec("MyDifficultPassw".getBytes(), "AES");
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.DECRYPT_MODE, sks);
    CipherInputStream cipherInputStream = new CipherInputStream(new BufferedInputStream(new FileInputStream(path)), cipher);
    ObjectInputStream inputStream = new ObjectInputStream(cipherInputStream);
    SealedObject sealedObject = null;
    try {
        sealedObject = (SealedObject) inputStream.readObject();
        TransferData td = (TransferData) sealedObject.getObject(cipher);
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (IllegalBlockSizeException e) {
        e.printStackTrace();
    } catch (BadPaddingException e) {
        e.printStackTrace();
    }
} | Encrypt and decrypt a serializable object | java;android;serialization;aes | Some obvious problems that jump out:

- Don't use duplicate string literals, like "MyDifficultPassw" and "AES/ECB/PKCS5Padding". Put them into constants and define them near the top.
- Don't e.printStackTrace(). It's considered bad practice.
- The encrypt and decrypt methods violate the single responsibility principle, because they encrypt/decrypt and at the same time do file I/O. Instead of writing to / reading from a filesystem path, it would be better to work with streams. That would also make them testable (see below).
- The decrypt method takes a Serializable object that's not used at all.
Also, the initialization SealedObject sealedObject = null; is pointless, as the variable is always assigned before use anyway.

It would be slightly better this way:

private static final byte[] key = "MyDifficultPassw".getBytes();
private static final String transformation = "AES/ECB/PKCS5Padding";
// the key algorithm must be plain "AES"; passing the whole transformation
// string to SecretKeySpec makes Cipher.init reject the key
private static final String keyAlgorithm = "AES";

public static void encrypt(Serializable object, OutputStream ostream) throws IOException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeyException {
    try {
        // Length is 16 byte
        SecretKeySpec sks = new SecretKeySpec(key, keyAlgorithm);
        // Create cipher
        Cipher cipher = Cipher.getInstance(transformation);
        cipher.init(Cipher.ENCRYPT_MODE, sks);
        SealedObject sealedObject = new SealedObject(object, cipher);
        // Wrap the output stream
        CipherOutputStream cos = new CipherOutputStream(ostream, cipher);
        ObjectOutputStream outputStream = new ObjectOutputStream(cos);
        outputStream.writeObject(sealedObject);
        outputStream.close();
    } catch (IllegalBlockSizeException e) {
        e.printStackTrace();
    }
}

public static Object decrypt(InputStream istream) throws IOException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeyException {
    SecretKeySpec sks = new SecretKeySpec(key, keyAlgorithm);
    Cipher cipher = Cipher.getInstance(transformation);
    cipher.init(Cipher.DECRYPT_MODE, sks);
    CipherInputStream cipherInputStream = new CipherInputStream(istream, cipher);
    ObjectInputStream inputStream = new ObjectInputStream(cipherInputStream);
    SealedObject sealedObject;
    try {
        sealedObject = (SealedObject) inputStream.readObject();
        return sealedObject.getObject(cipher);
    } catch (ClassNotFoundException | IllegalBlockSizeException | BadPaddingException e) {
        e.printStackTrace();
        return null;
    }
}

This is only slightly better; it still looks quite messy. But it has the big advantage that now this is testable, for example:

@Test
public void testEncryptDecryptString() throws InvalidKeyException, NoSuchAlgorithmException, NoSuchPaddingException, IOException {
    String orig = "hello";
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    encrypt(orig, baos);
    ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
    assertEquals(orig, decrypt(bais));
}

@Test
public void testEncryptDecryptPerson() throws InvalidKeyException, NoSuchAlgorithmException, NoSuchPaddingException, IOException {
    Person orig = new Person("Jack", 21);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    encrypt(orig, baos);
    ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
    assertEquals(orig, decrypt(bais));
}

static class Person implements Serializable {
    private static final long serialVersionUID = 0;
    private final String name;
    private final int age;

    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        Person person = (Person) o;
        if (age != person.age) {
            return false;
        }
        if (!name.equals(person.name)) {
            return false;
        }
        return true;
    }

    @Override
    public int hashCode() {
        int result = name.hashCode();
        result = 31 * result + age;
        return result;
    }
}
_opensource.5211 | Citing the Philosophy of the GNU project:

"Free software does not mean noncommercial. A free program must be available for commercial use, commercial development, and commercial distribution. Commercial development of free software is no longer unusual; such free commercial software is very important. You may have paid money to get copies of free software, or you may have obtained copies at no charge. But regardless of how you got your copies, you always have the freedom to copy and change the software, even to sell copies."

Free software does not mean non-commercial software, so software that can be shared for free can also be sold. Why isn't this definition contradictory? If they wanted, person A could take an open source project and sell it to some unsuspecting person B who knows nothing about free software. It seems that the definition above favors person A and somehow tricks person B, who unfortunately does not have the time to learn everything about free software. | Contradiction: Free software does not mean noncommercial | open source definition;free software definition | null
_codereview.20082 | I am working on a little browsergame project written in PHP, using PostgreSQL as the DBMS. I'm not really happy with the process that runs after a successful user login.

Some info:

- There are 3 different kinds of properties a game character can have: attributes, skills, talents.
- Each of these properties is a table in my database.
- Each of these properties is related to the character table in an extra table.

After the login was successful, I want to store both general information about these properties and the character-related values of them in the session (the first in 'game' and the second in 'user'). How I currently get the data:

[...]
$this->getIngameInfo();

// one account can have up to 4 characters
// each of the characters can have different values
foreach($_SESSION['user']['character'] as $key => $data){
    $_SESSION['user']['character'][$key]['attribute'] = $this->getAttributes($data['id']);
    $_SESSION['user']['character'][$key]['skill'] = $this->getSkills($data['id']);
    $_SESSION['user']['character'][$key]['talent'] = $this->getTalents($data['id']);
}
[...]

private function getIngameInfo(){
    $sql = "SELECT id, name, tag, description FROM attribute";
    if($this->db->query($sql, array())){
        while($row = $this->db->fetchAssoc()){
            $_SESSION['game']['attribute'][] = $row;
        }
    }
    $sql = "SELECT id, name, tag, description FROM skill";
    if($this->db->query($sql, array())){
        while($row = $this->db->fetchAssoc()){
            $_SESSION['game']['skill'][] = $row;
        }
    }
    $sql = "SELECT id, name, description FROM talent";
    if($this->db->query($sql, array())){
        while($row = $this->db->fetchAssoc()){
            $_SESSION['game']['talent'][] = $row;
        }
    }
}

private function getAttributes($charid){
    $sql = "SELECT attributeid, value FROM character_attribute WHERE characterid = $1 ORDER BY attributeid ASC";
    $attributes = array();
    if($this->db->query($sql, array($charid))){
        while($row = $this->db->fetchAssoc()){
            $attributes[] = $row;
        }
    }
    return $attributes;
}

private function getSkills($charid){
    $sql = "SELECT skillid, value FROM character_skill WHERE characterid = $1 ORDER BY skillid ASC";
    $skills = array();
    if($this->db->query($sql, array($charid))){
        while($row = $this->db->fetchAssoc()){
            $skills[] = $row;
        }
    }
    return $skills;
}

private function getTalents($charid){
    $sql = "SELECT talentid, value FROM character_talent WHERE characterid = $1 ORDER BY talentid ASC";
    $talents = array();
    if($this->db->query($sql, array($charid))){
        while($row = $this->db->fetchAssoc()){
            $talents[] = $row;
        }
    }
    return $talents;
}

I now wonder how I could merge these quite similar queries, because I'll need to fetch more information after this, and I don't like firing so many queries in one process. I thought about using prepared statements (I use a self-written pgsql PDO class), but I am not calling the same table multiple times (and the 'talent' table does not have exactly the same columns as the other two). I also considered creating one or two stored procedures that return all the needed data, but in that case I would not know how to assign such a bunch of data to the differently named session arrays. The methods shown belong to a login model and are called only once. I used the session array because the properties of a character should be shown in different ways (which would lead to caching) and used for calculations in different ways.
As I don't like firing queries against the DB to recalculate values that may not have changed, I didn't see a real alternative to sessions. Think of it like this:

- Fetch character properties once after login.
- Depending on the user's interactions, show (cached if not changed) or calculate (? if not changed) with these properties.
- Depending on the user's interactions, change these properties, update the DB, and update the session.

TODO:

- Encapsulate session data in another model.
- Use prepared queries for getAttributes, getSkills and getTalents. Merge them into one method. Move it to another model, as it will be needed not only when logging in, but also when characters interact with other characters (which I wasn't aware of).

I would like to know how I can reduce the queries and simplify the code/improve the performance of the script. | Browser game project | php;performance;session;postgresql | I agree with MECU about not storing everything in the session. Caching is probably the best way to go. Sessions are typically used for continuity between page loads, meaning you can store information like the character ID or login status, but the rest should be done differently. So, even though I'm about to start explaining how to better do what you are trying to accomplish with your sessions, I hope you will apply it to whatever new method you come up with.

Speaking of sessions: using an outside resource, such as sessions or cookies or POST, is a violation of the Law of Demeter (LoD). Simply put, this law suggests that your code, either method/function or class, not know more than is necessary to accomplish its task. Right now you have your entire class tightly coupled with the session. What if, as both MECU and I suggested, you wanted to move away from using the session? Then you'd have to rewrite this entire class. The better thing to do would be to write this class in such a way as to not be dependent upon it in the first place. You could instead return these same arrays to your main application to then apply them to the session array or even a cache. You could also inject any outside parameters you needed into your method's arguments to share initial values. Always try to write your code so that it is as reusable as possible.

Now, you can more easily get the character stats by following the Don't Repeat Yourself (DRY) Principle. As the name implies, your code should not repeat itself. So, instead of writing out that long array pointer multiple times, you can more easily, and cleanly, create a new array and merge the two when you are done. Additionally, you can abstract the $data[ 'id' ] to its own variable as well to make this even easier.

$stats = array();
foreach( $_SESSION[ 'user' ] [ 'character' ] AS $key => $data ) {
    $id = $data[ 'id' ];
    $stats[ $key ] = array(
        'attribute' => $this->getAttributes( $id ),
        'skill' => $this->getSkills( $id ),
        'talent' => $this->getTalents( $id )
    );
}
//use array_merge_recursive if you want to keep original $data array
array_merge( $_SESSION[ 'user' ] [ 'character' ], $stats);

Seeing as you use the same id to access the attributes, skills, and talents, it would make more sense to create a new method to get all three and return the results in an array, similar to the one above. This follows two core OOP principles, the first of which I already mentioned, DRY, and Single Responsibility. This new principle means that our methods should be responsible for just one thing.
If our methods are responsible for more than one task, then that makes them harder to reuse, and we typically end up repeating code to accomplish similar tasks, which violates the first principle again. It's a vicious cycle.

private function getStats( $id ) {
    return array(
        'attribute' => $this->getAttributes( $id ),
        'skill' => $this->getSkills( $id ),
        'talent' => $this->getTalents( $id )
    );
}

MECU mentioned something similar, but I think it should be elaborated. He expressed a dislike for using non-conditionals as a conditional statement. This is a good thing to be averse to. Complex conditionals should also be avoided, complex meaning nested parentheses, long lists of conditions, or even just a long condition. I don't see any of the latter, so I'll just cover the first. Both of these types of statements tend to cause issues with legibility, thus the need for abstracting the conditional to a variable. At first glance the first statement below is hard to read because the parentheses tend to run together. The second statement is a little better, but only because I added whitespace around the parentheses; this just happens to be my style for this very reason. The third abstracts this to avoid excessive nesting, making it even easier to read than the second, and allows for potential expansion should you want to use the $result later.

//a complex conditional
if($this->db->query($sql, array())){

//a complex conditional using whitespace
if( $this->db->query( $sql, array() ) ) {

//compared to...
$result = $this->db->query( $sql, array() );
if( $result ) {

I demonstrated DRY a couple of times above, so I'll leave this one to you. Your getIngameInfo() shows a more standard violation of DRY. It queries the database three times in a very similar way. The only thing that really changes is the SQL used and the portion of the session game array. I would suggest creating a new method to accomplish this for you. In fact, that new method can probably be reused for the getAttributes(), getSkills(), and getTalents() methods as well. I would use those three methods as a template and use the returned array to populate your session.

Hope this helps!
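P.S. For concreteness, the kind of shared fetch helper I have in mind might look roughly like this. It is only a sketch against your own $this->db wrapper, and the whitelist is there because prepared-statement placeholders cannot cover table or column names:

private function getCharacterValues( $table, $charid ) {
    // whitelist maps each allowed table to its id column, so no
    // user-controlled string ever reaches the SQL
    $allowed = array(
        'character_attribute' => 'attributeid',
        'character_skill'     => 'skillid',
        'character_talent'    => 'talentid',
    );
    if( !isset( $allowed[ $table ] ) ) {
        return array();
    }
    $idcol = $allowed[ $table ];
    $sql = "SELECT $idcol, value FROM $table WHERE characterid = $1 ORDER BY $idcol ASC";
    $rows = array();
    if( $this->db->query( $sql, array( $charid ) ) ) {
        while( $row = $this->db->fetchAssoc() ) {
            $rows[] = $row;
        }
    }
    return $rows;
}

Then getAttributes() reduces to return $this->getCharacterValues( 'character_attribute', $charid ); and likewise for the other two.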
_opensource.270 | If yes, what are the consequences of Open Source projects being discontinued, if it's done by a large organization? As per this post, the older version of the project can still be used under the same old open source license. Is there a way to make it so that the project can't be used under the old license? | Is it possible to close an open source project? | distribution;relicensing;proprietary code | Much would depend on the initial license chosen when creating the OS project. If the OSP was originally published under a copyleft license such as the GPL, then the answer is clearly no: they cannot continue development under a more restrictive license without violating the terms of the original license. A permissive license, such as Apache, allows the original publisher to effectively fork the project internally and abandon the open source version, making no more commits. However, if the project was ever used (or even downloaded) by someone, even deleting the 'authoritative' source repository will not stop it reappearing under a different guise.
_codereview.23907 | Please check my code below. I don't have any problems but I'm not aware how far the code will work, release and kill excel instances..try{ wBook = xCel.Workbooks.Open(excelfilepath); xCel.Visible = false; this.xCel.DisplayAlerts = false; wSheet = (Excel.Worksheet)wBook.Worksheets.get_Item(1); wSheet.Copy(Type.Missing, Type.Missing); wSheet = (Excel.Worksheet)wBook.Sheets[1]; wSheet.SaveAs(1.xls);}catch{}finally{ if (wBook != null) { try { wBook.Close(); } catch { } Thread.Sleep(500); } if (excelprocid > 0) { Process xcelp = Process.GetProcessById(excelprocid); xcelp.Kill(); } try { GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); } catch{} Marshal.FinalReleaseComObject(wSheet); wSheet = null; Marshal.FinalReleaseComObject(wBook); wBook = null; } | Excel instances: release and kill | c#;exception handling;excel | Since Excel runs via COM, it won't be released from memory until you remove all references to it. Your example (above) does a pretty good job, but after you say wBook.Close(), you should say wBook = null. Likewise Excel won't gracefully close-down while your xCel object refers to an instance of Excel.This article on CodeProject shows the recommended/industry-standard way of closing-down excel. ffwd down to Sections 13 and 14.http://www.codeproject.com/Articles/404688/Word-Excel-ActiveX-Controls-in-ASP-NET |
_webapps.52775 | Why can people, who are among my connections, see all my connections (even if they're not common to both), even though on privacy settings I selected only me to see the connections? Another buddy, one from my connections, also has this setting, and I cannot see her connections (can't click the number of connections). | People see my connections on LinkedIn, despite the setting set to only me | privacy;linkedin | null |
_softwareengineering.310272 | I'm working on a project which has different checklists (questions and answers) associated with an entity (Protocol). There is a business requirement that these questions can be altered in the future, and when a new entity is created it should be associated with the current checklist. Example: let's say there is a checklist with 21 questions (the actual questions are nested, with questions having other questions, but I believe this is out of the scope of this question). This would be version 1.0. Something changes, and now there are 22 questions and the version would be bumped up to 1.1. When a new Protocol is created, it needs to have a Checklist associated with it - the current Checklist. Simplified classes: class Checklist { String version List<ChecklistQuestion> checklistQuestions} class ChecklistAnswerSet { Checklist checklist List<ChecklistAnswer> checklistAnswer} class Protocol { ChecklistAnswerSet checklistAnswerSet ...} New Protocols are created within the ProtocolService; the child checklistAnswerSet is also created here but needs to refer to the current Checklist instance. We are working with a Grails backend, and it's extremely easy to get references to instances by their fields: Checklist checklist = Checklist.findByVersion('1.1') I could drop this in my ProtocolService to get the current instance, but I know this isn't a good idea. Any changes to this version would require code changes to the Service, and although I could avoid a redeploy (Grails magic), this feels completely wrong. Where do I store this 1.1? In a configuration file? In the database? Or am I completely wrong and my design needs a complete rework? Initially, I was storing this 1.1 data within a generic key/value table we have in the database called System_Property, but it just felt wrong. My gut reaction is to use a configuration file (there are other Checklists and therefore other current versions that would also go here), but a coworker is saying only environmental settings go in config files. | How to store the current version of an instance? Store reference to specific instance? | versioning;configuration;separation of concerns;storage;grails | null
_softwareengineering.246077 | I have an AppHarbor site where I need to do weekly updates to some of the data. I don't want to go the route of deploying an exe and adding an additional webworker to my site because those cost money. My thought is to add a web service/REST api service to the site so I can just call it, it will execute the batch job stuff, then when it's complete, return a custom status code, like success or failure. Behind the scenes, it would update a BatchLog table or something like that, then I could create a page/view where I could access the log details and see which batch processes did or did not run.So, that's how I'm thinking I want to implement, but I'm a little skeptical of the security around this. First of all, obviously I don't want ANYONE to be able to kick off these batch jobs just by going to my web service/rest api.To fix that, I'm thinking there are 2 different ways to do this, or a combination of both.1) Require some credentials and maybe an additional secret code in order to get them to actually kick off.2) Configure in a table when each batch job can actually run, and the frequency. That way, if a hacker does call my service/rest api. It will only be able to execute 1 time per hour/day/week/month/etc. So they could hammer the service, but each successive call would just return a failure, or something like that.One thing to note, I read someplace a couple months ago that there are cloud services out there that will schedule batch jobs like this for you. And the free one that I read about will do 1 service. Any more than 1 and you have to start paying for it. So for my example, I'd just create one service, and have it called multiple times per day/week, and let my BatchJob configuration table determine whether it actually needs to process or not.So, how horrible of an idea is this? What are some other approaches to accomplishing batch jobs in a cloud environment where they don't offer batch services. | How to execute batch jobs via webservices or rest api | cloud computing | This is quite normal and in many environments just the easiest way to do this. I have various batch jobs running that often enough don't do more than calling a REST API controller with curl (on the same machine as my web server).For the protection part you have many options. Simple user authentication is easy enough and should be safe if coded properly (and you can use a super complex password and totally weird user name if you like, same for the URL of the task), in addition you could limit requests to certain IP addresses, if the batch is running on the same machine then limit to localhost is as secure as you can get. If you can change settings of the web server there would be even more options, if not your code should be able to do most important things anyway.Also as you write you can limit the batch processing by querying the time and date. I do this anyway, since some jobs should not run every hour. So in my case there is only one controller called hourly and that decides what to do dependent on time. For example some heavy load image processing is done only once at night, one hour later some other heavy worker is running and during the day some simple data import runs every hour. |
_vi.11257 | This may seem a little nit-picky, but I like using the wildmenu to switch between buffers: I do :b and then hit tab until I get to the file I want. The problem is that sometimes, vim shows the entire file path instead of just the file. So instead of getting something nice likefoo.cpp bar.cpp foobar.cppI getfoo.cpp ~/Documents/programming/projects/my_project/src/bar.cpp foobar.cppWhich ANNOYS THE HELL out of me. Sometimes it happens, sometimes it doesn't. Deleting the buffer and reopening the file doesn't do anything; I have to restart vim in order for it to go away.Does anyone know why vim does this?EDIT: So, I haven't experienced this problem since I last created this post; however, just now the problem happened again, and I now know the situation in which it manifests. The situation is as follows: I use the 'quickfix' window for viewing compile errors. When I build my project (via :make) and there are errors, if the files that contain the errors are not currently buffered within vim, then the absolute path of the file is shown in the quickfix window and everywhere else for the rest of the vim session; even if I do :edit foo.cpp after the :make, it will still show the full path for the buffer.Deleting the buffer and doing another :edit doesn't fix it; vim shows the full path no matter what. The only remedy is to kill vim, open a new process, and to open the files containing the errors before calling :make.Very strange. Any ideas? | vim sometimes displays full path of file instead of just the filename? | buffers;path;wildmenu | null |
_softwareengineering.154931 | I just started using requirejs and I love it. I have one concern though. I've been compressing all my js files into one single file. Even with requirejs optimizer, I need to load module files from the server time to time and I'm concerned with it.Performance and user experience wise, which one is better? | one single compressed js file VS compressed requirejs module files | javascript;modules | Performance and user experience wise, which one is better?A single compressed file.It is a single connection, so the browser is free to download other assets.It is compressed, so it takes less time to transfer to the browser.Both mean that the page and the javascript run faster - this is better user experience and better performance.Win win. |
_unix.153281 | Shouldn't bridge (or a switch) be working without having an IP address? I believe I can have a bridge br0 setup with eth0 and eth1 as members both having no IP addresses. I can't understand why an address should be allocated to br0? | Why IP address for Linux Bridge which is layer 2 virtual device? | linux;networking;bridge | A bridge does not need an IP address to function. Without one it will just perform layer 2 switching, spanning tree protocol and filtering (if configured).An IP address is required if you want your bridge to take part in layer 3 routing of IP packets.As an example you can setup a bridge without an IP address in Debian/Ubuntu using the following in /etc/network/interfacesauto br0iface br0 inet manual bridge_ports eth0 eth1 |
_codereview.70268 | I have a package directory pkg with several classes that I would like to build into a convenient dict property.The structure of pkg/ looks like:pkg/base.py:class _MyBase(object): passpkg/foo.py:from .base import _MyBaseclass Foo(_MyBase): passAnd in pkg/__init__.py, it is a bit clunky, but once pkg is imported, a all_my_base_classes dict is built with a key of the class name, and value of the class object. The classes are all subclasses of pkg.base._MyBase.import osimport sysimport pkgutilimport base# I don't want to import foo, bar, or whatever other file is in pkg/all_my_base_classes = {}pkg_dir = os.path.dirname(__file__)for (module_loader, name, ispkg) in pkgutil.iter_modules([pkg_dir]): exec('import ' + name) pkg_name = __name__ + '.' + name obj = sys.modules[pkg_name] for dir_name in dir(obj): if dir_name.startswith('_'): continue dir_obj = getattr(obj, dir_name) if issubclass(dir_obj, base._MyBase): all_my_base_classes[dir_name] = dir_objRunning it from an interactive Python shell, one directory below pkg/:>>> import pkg>>> pkg.all_my_base_classes{'Foo': <class 'pkg.foo.Foo'>}So it works as expected, but pkg/__init__.py is pretty terrible looking. How can it be better? | List all classes in a package directory | python;modules;dynamic loading | Since the classes are all subclasses of _MyBase, they can be accessed via _MyBase.__subclasses__() after they have been imported:for (module_loader, name, ispkg) in pkgutil.iter_modules([pkg_dir]): importlib.import_module('.' + name, __package__)all_my_base_classes = {cls.__name__: cls for cls in base._MyBase.__subclasses__()}For importing the modules, I followed the advice of Nihathrael. |
_cs.43250 | The Problem: A high speed workstation has 64 bit words and 64 bit addresses with address resolution at the byte level. Assuming a direct mapped cache with 8192 64 byte lines, how many bits are in each of the following address fields for the cache? 1) byte 2) Index 3) Tag?I know that an address for specifying data within a cache is 64 bits. I know that an address for a cache has to have the byte, index, and tag field so byte + index + tag = 64The index field should take up 13 bits to account for the 8192 byte linesHow many bits would be in the byte field though? I know that a processor processes one word at a time and each word consists of 8 bytes. A 64 byte cache line would contain 8 words. Would this byte field need to identify each word or each byte itself. If it was byte itself, it be 6 bits but if it was word it be 3 bits.If I had to take a stab, I would say the byte field needs to be 3 bits to identify each word because it doesn't make sense for the processor to just process one byte. Can anyone confirm my suspicisions? | How many bits would be needed for the byte? | computer architecture;cpu cache;memory access | null |
_unix.223958 | Is it possible to access the aggregate menu, or system menu, located in the top right corner on the activities bar of the GNOME shell interface, with a keyboard shortcut? If not, can such a shortcut be created? | Access the GNOME shell aggregate menu per keyboard | keyboard shortcuts;gnome3;gnome shell | As far as I know there's no dedicated shortcut for the aggregate menu. You could use the ctrlalttab.js helper (also known as the accessibility switcher). Hit Ctrl+Alt+Tab:and select Top Bar, this will focus the first element on the top bar (that is, the Activities button). You then navigate with right arrow to the system tray and use the down arrow to open the menu...Not very convenient, I know, so here's a way to define a dedicated shortcut for the system menu:You can invoke gnome-shell evaluator via dbus and call the open() or toggle() methods on that particular shell element:gdbus call -e -d org.gnome.Shell -o /org/gnome/Shell -m org.gnome.Shell.Eval string:'Main.panel.statusArea.aggregateMenu.menu.toggle();'ordbus-send --session --type=method_call --dest=org.gnome.Shell /org/gnome/Shell org.gnome.Shell.Eval string:'Main.panel.statusArea.aggregateMenu.menu.open();'So, it's only a matter of going to Settings > Keyboard > Shortcuts and assign a shortcut to one of the above commands. |
_softwareengineering.42091 | I've recently started learning C++, and I enjoy it a lot.I've often read it's easier to write bad code in C++ than in most languages, and that it is a lot deeper than what it seems.As I'd like to avoid writing bad code, I was wondering what exactly I shouldn't do, and what I should do, to write good code in C++. | What should I know about C++? | c++ | The pitfallsThere are so many pitfalls in C++, that if you don't know them you will create very unstable code, with tons of memory leaks and buffer overruns. Compared to more modern languages with garbage collection, you must release all memory yourself. Also, the code is very low-level. There is nothing preventing you from overwriting your own program code (which has been exploited by many IE hacks).So the next you must learn are the programming practices that mitigate these risks, e.g. using smart pointers to handle freeing objects, wrapping byte arrays in classes handling the data, etc.I can recommend Scott Meyers' books Effective C++ and More Effective C++.Those books essentially taught me the beauty of C++. Note that these are not beginners books. They assume that you are already familiar with the language. |
_unix.255558 | I have a LeMaker HiKey development board. I purchased it for testing a couple of libraries on ARM64 cpu architecture. The board provides two Cortex-A53 processors, provides eight cores, and uses Linaro Linux:$ uname -aLinux hikey 3.18.0-linaro-hikey #1 SMP PREEMPT Mon Nov 30 00:11:03 UTC 2015aarch64 GNU/LinuxI observed the self tests are running a little slower than expected, so I'm mildly investigating it. I also noticed a cat of /proc/cpuinfo is returning something that does not look quite right, but I'm not sure if its cause for concern. It does not look quite right to me because I used to seeing cpu information present for each core (something like shown in Number of processors in /proc/cpuinfo).Does the output of /proc/cpuinfo indicate a problem with the board or its configuration? Or is this output expected with some dev boards?ARM Cortex A53 (octa-core):$ cat /proc/cpuinfo Processor : AArch64 Processor rev 3 (aarch64)processor : 0processor : 1processor : 2processor : 3processor : 4processor : 5processor : 6processor : 7Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 CPU implementer : 0x41CPU architecture: AArch64CPU variant : 0x0CPU part : 0xd03CPU revision : 3Hardware : HiKey Development Board | Understanding the output of /proc/cpuinfo | linux;cpu;arm | This is the expected output to Arm based processors. All Serialized cores are shown in list with line breaks instead of separated processors. Features are evaluated by cpuinfo code, and only show if all cores support them /* * Mismatched CPU features are a recipe for disaster. Don't even * pretend to support them. */ WARN_TAINT_ONCE(diff, TAINT_CPU_OUT_OF_SPEC, Unsupported CPU feature variation.);The other variables are:CPU implementer: Your code means ARM;CPU architecture: AArch64 means 64 bit ARM board:CPU variant : Indicates the variant number of the processor, or major revision. Yours is zero.CPU part: Part number. 0xd03 indicates Cortex-A53 processor.CPU revision: Indicates patch release or minor revision. 3, in your caseHardware : HiKey Development Board is self explanatoryIf you want to check your processor max clock, just type cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq. To check the current clock dmidecode | grep Current Speed should do the trick.Another thing that could impact the performance of your processor is the cpu governor you are using. Maybe setting the performance one could be better for your needs:cpupower frequency-set -g performanceDocumentation:Arm InformationHow to understand the information of my android processor [closed]arm64: restore bogomips information in /proc/cpuinfo |
_cstheory.16561 | Most of the algorithms for estimating the volume of a convex polyhedron $K \subset R^d$ assume the existence of an affine transform $T$ with the property that $$ B \subset TK \tilde{\subset}\ \sigma B$$where $B$ is the unit ball in $d$ dimensions, and $\sigma$ is $O(\sqrt{d})$. (Update: the $\tilde{\subset}$ indicates that the containment is true except for an $\epsilon$-fraction of $K$)The algorithms that I've seen for computing this transform are quite tricky. They require a bootstrap sampling process to extract a few points from inside $K$ which are then used to define the transformation.However, the fact that such a transformation exists is folklore, and my question was:Is there a simple algorithm (with possibly a weaker bound on $\sigma$) to compute the affine transform, given only a membership oracle for $K$ ? | On preprocessing a convex polyhedron prior to sampling | cg.comp geom;randomized algorithms;convex geometry | null |
_unix.312782 | I want to use this terminal theme on Linux Mint 18 Sarah Cinnamon 64-bit.https://github.com/ahmetsulek/flat-terminalWhen you open the flat.terminal on OSX it opens up a bash terminal with that theme. In Linux it opens in Firefox showing the code. Is there a way to make this work on Linux? | Flat UI terminal, works on OSX not on Linux | linux;terminal;theme | short: nolong: you could translate the file, but it happens to work as described on OSX because (a) the file-suffix tells OSX what it is and (b) the file-contents are an exported theme for OSX Terminal. For reference, this is the beginning of the file:<?xml version=1.0 encoding=UTF-8?><!DOCTYPE plist PUBLIC -//Apple//DTD PLIST 1.0//EN http://www.apple.com/DTDs/PropertyList-1.0.dtd><plist version=1.0><dict> <key>ANSIBlackColor</key> <data> YnBsaXN0MDDUAQIDBAUGFRZYJHZlcnNpb25YJG9iamVjdHNZJGFyY2hpdmVyVCR0b3AS AAGGoKMHCA9VJG51bGzTCQoLDA0OVU5TUkdCXE5TQ29sb3JTcGFjZVYkY2xhc3NPECYw LjE4MDM5MjE2MSAwLjIzOTIxNTcwMTggMC4zMTc2NDcwNjk3ABACgALSEBESE1okY2xh c3NuYW1lWCRjbGFzc2VzV05TQ29sb3KiEhRYTlNPYmplY3RfEA9OU0tleWVkQXJjaGl2 ZXLRFxhUcm9vdIABCBEaIy0yNztBSE5bYouNj5SfqLCzvM7R1gAAAAAAAAEBAAAAAAAA ABkAAAAAAAAAAAAAAAAAAADYFor any other system, the theme would not work (without preparation) because the settings mean something only to the program(s) which read it and know what it is. |
_unix.13573 | To create a tar file for a directory, the tar command with the create, verbose and file options can be typed thus: $ tar -cvf my.tar my_directory/ But it also works to do it this way: $ tar cvf my.tar my_directory/ That is, without the dash (-) preceding the options. Why would you ever pass a dash (-) to the option list? | Why use superfluous dash (-) to pass option flags to tar? | utilities;tar | There are several different patterns for options that have been used historically in UNIX applications. Several old ones, like tar, use a positional scheme: command options arguments - as for example tar uses: tar *something*f file operated on *paths of files to manipulate*. In a first attempt to avoid the confusion, tar and a few other programs with the old flags-arguments style allowed delimiting the flags with dashes, but most of us old guys simply ignored that. Some other commands have a more complicated command line syntax, like dd(1), which uses flags, equal signs, pathnames, arguments and a partridge in a pear tree, all with wild abandon. In BSD and later versions of unix, this had more or less converged to single-character flags marked with '-', but this began to present a couple of problems: the flags could be hard to remember, sometimes you actually wanted to use a name with '-', and especially with GNU tools, there began to be limitations imposed by the number of possible flags. So GNU tools added GNU long options like --output. Then Sun decided that the extra '-' was redundant and started using long-style flags with a single '-'. And that's how it came to be the mess it is now.
_cs.32972 | In literature, one can find many approximation algorithms for the multicommodity min cost flow problem or other variants of the standard single-commodity min cost flow problem. But are there FPTASs for the min cost flow problem?Possibly, there is no need for an FPTAS here since an optimal solution can be computed very fast (using double scaling or the enhanced capacity scaling algorithm, for example). But from a theoretical point of view, this would be interesting to know. | Are there FPTASs for the min cost flow problem? | algorithms;approximation;polynomial time;network flow | null |
_unix.126931 | We have two lists. A bigger A: A=`echo -e '1\n2\n3\n4\n5'`echo $A12345and a smaller B: B=`echo -e '1\n2\n3'`echo $B123Q: But we need a third list that contains all the elements of A, but doesn't have any of B, how do I do it in bash?echo $C45The numbers could be anything, from foo to 99, etc..UPDATE: It's working in the shell by hand, but it's strange because if I put it in a script, it doesn't works!cat a.txt A=$(seq 5)B=$(seq 3)comm -23 <(sort <<< $A) <(sort <<< $B)sh a.txt a.txt: line 3: syntax error near unexpected token `('a.txt: line 3: `comm -23 <(sort <<< $A) <(sort <<< $B)'doing it by hand it works..: A=$(seq 5)B=$(seq 3)comm -23 <(sort <<< $A) <(sort <<< $B)45Why? update on update: Need to use bash instead of sh :D | We need a C list that contains all the elements of A, but doesn't have any of B | text processing | The comm command is what you need:$ A=$(seq 5)$ B=$(seq 3)$ comm -23 <(sort <<< $A) <(sort <<< $B)45Here's a method that does not require the input to be sorted. This is a common idiom in awk that reads the first file into memory, and then does some filtering on the 2nd file based on the 1st. Let's try with randomized data$ A=$(seq 5 | sort -R); echo $A35124$ B=$(seq 3 | sort -R); echo $B213We expect the output to be 5 then 4:$ awk 'NR==FNR {b[$1]=1; next} !($1 in b) {print}' <(echo $B) <(echo $A)54 |
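For comparison, the same A-minus-B filtering is easy to express in Python; this is just an illustration alongside the comm/awk answers above, not a replacement for them:

A = ["1", "2", "3", "4", "5"]
B = ["1", "2", "3"]

excluded = set(B)                         # O(1) membership tests
C = [x for x in A if x not in excluded]   # keep only elements of A not in B
print(C)                                  # ['4', '5']

Like the awk idiom, this needs no sorting and preserves the original order of A, which comm cannot do.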
_codereview.135435 | I'm having a bit of trouble trying to find a more Rubyist way to achieve the following. Essentially, I want to try and iterate over every element e and apply e.method(n) for every \$n \in \text{array}\$, \$n \ne e\$. In order to determine whether or not \$n = e\$, I'll have to use an index comparison (really just test for reference equality as opposed to functional equality).arr = [413, 321, 654, 23, 11](0...arr.length).each do |outer_i| (0...arr.length).each do |inner_i| next if outer_i == inner_i arr[outer_i].apply arr[inner_i] endendThis reeks of Java/C++ and I can tell that this is not the Ruby way, but I can't seem to find an alternative. Any ideas to improve its Ruby-ness? I was thinking of Array#product but I'm not sure where to go from there. | Nesting loops on same array but skipping same element | ruby | Note that you are just doing a permutation of two elements from a set, and there is an abstraction in the core for that, Array#permutation(n):arr.permutation(2).each { |x, y| x.apply(y) } |
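For readers coming from other languages: the accepted Ruby one-liner has a direct Python analogue in itertools.permutations, sketched here with print standing in for the question's apply:

from itertools import permutations

arr = [413, 321, 654, 23, 11]
for x, y in permutations(arr, 2):
    print(x, y)   # every ordered pair drawn from distinct positions

Because permutations works on positions rather than values, two equal elements at different indices are still treated as distinct, matching the index-comparison intent in the question.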
_unix.200157 | I am trying to set up public key access to a couple of machines that I have a user account on.What I did:I used ssh-keygen to generate a key pair (without a passphrase) on my personal computer that I use to access the two machines in question.I appended the id_rsa.pub file thus created to the ~/.ssh/authorized_keys on both machines.This setup works fine and lets me SSH onto one machine. For the other machine, though, it still prompts me for my password. I tried using ssh -vvv and here are the relevant lines of output:debug1: Offering public key: /xxxx/xxxxx/.ssh/id_rsadebug3: send_pubkey_testdebug2: we sent a publickey packet, wait for replydebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug1: Trying private key: /xxxx/xxxx/.ssh/id_dsadebug3: no such identity: /xxxx/xxxx/.ssh/id_dsadebug2: we did not send a packet, disable methoddebug3: authmethod_lookup passworddebug3: remaining preferred: ,passwordI'm then prompted for my password and I can use it to log in normally.It's the same account on both machines, exported via an NIS map. The machine where authentication succeeds is the NIS server and the other one is the client. My home directories on both machines are not the same (no NFS mount or the like). These are the only differences I can think of that set the two machines apart.What can be going wrong here? | SSH key works on NIS server, fails on NIS client | ssh;nis | Make sure you put the private key on both systems (you don't mention that explicitly).Check permissions of your home directory, directories leading to your home directory, your .ssh directory and finally the private key file and authorized_keys. Nothing should be writeable by non-root outside your home dir. This is a check done by the ssh daemon, as too open permissions could mean that a third party places his own public key in your authorized_keys file, and using that can gain your privileges.As you have a system that's working you can compare the permissions / ownership with the non-working one. |
_unix.256208 | I'd like to do this, on OSX:alias rm=rm -IIn GNU rm, this means that rm will prompt if it's recursive or if it's deleting three or more files, but not if it's just deleting one or two files. However, OSX (Mavericks) rm doesn't support this.Is there a workaround so that rm will prompt, once, when deleting several files, but won't prompt for single files, or for every single file in mass deletes? | Workaround for missing rm -I option on OSX? | bash;osx;rm | null |
_webapps.106027 | I'm trying to accomplish the below input and output in Google Sheets.I was actually able to find a partial solution on Stack Exchange.This is the script I pulled, but it only functions on one column, whereas I need to function on two at the same time:function result(range) { delimiter = , targetColumn = 1 var output2 = []; for(var i=0, iLen=range.length; i<iLen; i++) { var s = range[i][targetColumn].split(delimiter); for(var j=0, jLen=s.length; j<jLen; j++) { var output1 = []; for(var k=0, kLen=range[0].length; k<kLen; k++) { if(k == targetColumn) { output1.push(s[j]); } else { output1.push(range[i][k]); } } output2.push(output1); } } return output2;} | Split comma separated cell data into rows while keeping surrounding row data | google spreadsheets;google apps script | I believe this will work for you:function myFunction(range) { delimiter = , ; targetColumn = 1; targetColumn2 = 2; var output2 = []; for(var i=0, iLen=range.length; i<iLen; i++) { var s = range[i][targetColumn].split(delimiter); var s2 = range[i][targetColumn2].split(delimiter); for(var j=0, jLen=s.length; j<jLen; j++) { var output1 = []; for(var k=0, kLen=range[0].length; k<kLen; k++) { if(k == targetColumn) { output1.push(s[j]); } else if (k == targetColumn2) { output1.push(s2[j]); } else { output1.push(range[i][k]); } } output2.push(output1); } } return output2; }The other option would be to more or less run your own function twice. Create another copy of it, this time setting targetColumn = 2. Run the first function on the cells you have, and then run the second function on what the first function returns. |
_unix.264819 | I have a batch file that runs a command, and one of those commands requires you to type in yes and hit enter.Is there any way that the batch file can do this? | How to make a batch file answer a user prompt | batch jobs | null |
_unix.26161 | I have 1000 gzipped files which I want to sort. Doing this sequentially, the procedure looks pretty straightforward: find . -name *.gz -exec zcat {} | sort > {}.txt \; Not sure that the code above works (please correct me if I made a mistake somewhere), but I hope you understand the idea. Anyway, I'd like to parallelize the ungzip/sort jobs in order to make the whole thing faster. Also, I don't want to see all 1000 processes running simultaneously. It would be great to have some bounded job queue (like BlockingQueue in Java or BlockingCollection in .NET) with configurable capacity. In this case only, say, 10 processes would run in parallel. Is it possible to do this in shell? | How to create a bounded queue for shell tasks? | shell;command line;parallel | A quick trip to Google reveals this interesting approach: http://pebblesinthesand.wordpress.com/2008/05/22/a-srcipt-for-running-processes-in-parallel-in-bash/
_reverseengineering.16088 | I have some data from a game which appears to possibly contain a checksum or CRC implementation. The game has different arenas with a share link for each team in the arena (and there are 4 teams). The game is web-based and I believe the link is calculated & verified on the server, not the game client.Here's some sample data from two arenas (spaces added for clarity):D2835718 BB30 C602 E874D2835718 BB30 D602 DB5FD2835718 BB30 E602 1AE8D2835718 BB30 F602 D231D202FA10 BB30 48B0 4B56D202FA10 BB30 58B0 08CAD202FA10 BB30 68B0 48FCD202FA10 BB30 78B0 3656The first 8 characters refer to the arena; the next 4 seem to be some sort of constant or padding (it's always BB30), and the next 4 characters signify the team. I'm assuming that the last 4 characters are a checksum of some kind to verify the integrity of the link.How would I go about reverse engineering the process used to calculate the checksum? | Determining checksum parameters | deobfuscation;crc | null |
_unix.103415 | I mounted a samba share using the smbmount command: $ sudo smbmount \\\\foo\\bar /mnt/bar -o user=tomWhen I create new files, they get created with the executable bit set for owner, group and world. For e.g. $ touch hello.txt $ ls -la hello.txt-rwxr-xr-x 1 root root 0 Dec 2 12:28 hello.txtThe same file when created on a NFS mounted share sets up correct permissions without any executable bit set. Why is this happening? How can it be fixed? | Why are files in a smbfs mounted share created with executable bit set? | permissions;cifs | NFS was invented in the Unix world and so understands traditional Unix permissions out of the box. (The ACL of modern unix systems are another matter, but recent implementations of NFS should cope with them.)Samba was invented in the IBM/Microsoft PC world, to exchange files with systems that had no permissions beyond read-only/read-write. It is now native to Windows. By default, Samba does not transmit Unix permissions. Depending on the configuration, either all files are marked executable (which is annoying) or all files (except directories) are marked non-executable (which is annoying).There are various extensions to the Samba/CIFS protocol that make it more suited for Unix use. Try enabling Unix extensions in the server configuration:[global]unix extensions = yes |
_cs.14749 | Assembly language is converted into machine language by an assembler. Why would a compiler convert a high-level language to assembly? Can't it directly convert from the high-level language to machine code? | Why do compilers produce assembly code? | compilers;code generation | Other reasons for compilers to produce assembly rather than proper machine code are: (1) the symbolic addresses used by assemblers, instead of hard-coded machine addresses, make code relocation much easier; (2) linking code may involve safety checks such as type-checking, and that's easier to do with symbolic names; (3) small changes in machine code are easier to accommodate by changing the assembler rather than the code generator.
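The first point (symbolic addresses make relocation easier) can be illustrated with a toy two-pass assembler; everything below, including the two-instruction "ISA", is invented purely for illustration:

def assemble(lines, base=0):
    labels, addr = {}, base
    for line in lines:              # pass 1: record each label's address
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            addr += 1
    out = []
    for line in lines:              # pass 2: resolve symbolic operands
        if line.endswith(":"):
            continue
        op, _, arg = line.partition(" ")
        out.append((op, labels.get(arg, arg or None)))
    return out

source = ["JMP end", "NOP", "end:"]
print(assemble(source, base=0))     # [('JMP', 2), ('NOP', None)]
print(assemble(source, base=100))   # [('JMP', 102), ('NOP', None)]

The compiler's output (source) never changes; only the assembler's second pass differs when the code is loaded at a different address, which is exactly the relocation benefit the first bullet describes.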
_unix.73123 | I've installed latest firefox linux-x86_64 from ftp.mozilla.com on a usb device and created a new profile file with the -P command. Unfortunately, the application does not recognize the flash plugin that is already installed on the operating system.How can I enable the flash plugin on the portable version? | Portable Firefox Linux | linux;firefox;adobe flash | How to Use Mozilla Firefox, Portable with flash pluginMake your firefox portable for Linux (all versions):Download the latest release of Firefox and unpack it on your usb device: http://ftp.mozilla.org/pub/mozilla.org/firefox/releases/Go to unpack_directory/firefox/browser/plugins (firefox 22+).Add a short link to your installed flash-plugin binary (libflasplayer.so). It's usually in /usr/lib64/flash-plugin/. Optionally: Download the UNIX version of the flash-plugin binary from adobe.com and copy it from the archive. Please remember: the flash-plugin is a binary file, no compilation process is needed!1. Copy the firefox directory to your portable device2. Create a simple shortcut:Here's my startup.sh that I have placed on my usb device ($PWD is the current directory (example: USB_DEVICE/firefox_x64).#!/bin/sh$PWD/firefox_x64/firefox -no-remote -profile$PWD/../.mozilla/firefox/YOUR_PROFILE_ID3. Run firefox with command line to create a new profile:You can create a new profile with the -P command as shown below.I've created my profile inside USB_DEVICE/.mozilla/firefox. You can set this path later. This is Mozilla's default folder skeletton for application settings (like seamonkey, thunderbird or B2G). To create a new profile run:[user@home]# cd /USB_DEVICE/firefox_x64[user@home firefox_x64]# ./firefox -no-remote -PFAQ: How to use the new USB profile with windows:For Windows just use the Portable Firefox from portableapps.com and run the same commands (step no. 3, simply add the -profile command to the executable .exe). |
_codereview.77150 | I've been programming Clojure for a little while and recently started learning Common Lisp. One of my favorite things about Clojure is the threading operator ->, which greatly simplifies long chains of nested function calls. Naturally I wanted to have this in Common Lisp.I found an implementation here:(defmacro -> (x &rest args) (destructuring-bind (form &rest more) args (cond (more `(-> (-> ,x ,form) ,@more)) ((and (consp form) (or (eq (car form) 'lambda) (eq (car form) 'function))) `(funcall ,form ,x)) ((consp form) `(,(car form) ,x ,@(cdr form))) (form `(,form ,x)) (t x))))This uses a recursive macro expansion; I've read that it's better to use iteration over recursion in CL when you can, so I wrote my own version:(defmacro -> (x &rest forms) (labels ((expand-form (x form) (if (consp form) (if (or (eq (car form) 'lambda) (eq (car form) 'function)) `(funcall ,form ,x) `(,(car form) ,x ,@(cdr form))) `(,form ,x)))) (do ((forms forms (cdr forms)) (x x (expand-form x (car forms)))) ((not forms) x))))I'm relatively new to Lisp so I don't know how to judge which way is better. Any comments, suggestions?EDIT: I knew something was fishy! You don't have to use any looping constructs at all, a plain old reduce will cut it:(defmacro -> (x &rest forms) (flet ((expand-form (x form) (cond ((atom form) `(,form ,x)) ((member (car form) '(lambda function)) `(funcall ,form ,x)) (t `(,(car form) ,x ,@(cdr form)))))) (reduce #'expand-form forms :initial-value x))) | Recursion vs. iteration in Lisp macro | beginner;lisp;common lisp | I wouldn't be so focused on iteration vs. recursion; use what isnecessary and convenient. For macros in general you should care aboutclarity of the macro and the generated code, performance of the macrocode itself comes way last in general.Now to add only two things to the discussion, using DO is notparticularly common because in most cases there are better options;personally I always have to look up the meaning of all the clauses ofDO, which is why I am not a fan of it. The other thing is to useCOND to be a bit more concise if your IF clauses allow for it.Since you now already have a good version of the first solution, thefollowing would be a way to do those things for the iterative solution:(defmacro -> (x &rest forms) (flet ((expand-form (x form) (cond ((atom form) `(,form ,x)) ((member (car form) '(lambda function)) `(funcall ,form ,x)) (T `(,(car form) ,x ,@(cdr form)))))) (loop for form in forms for y = (expand-form x form) then (expand-form y form) finally (return y))))Note that I've switched (not (consp x)) to (atom x); now the LOOP isless concise then the DO, but I'd argue that it's more obvious what ishappening with it, YMMV. |
_datascience.22070 | What I know so far in DCGAN is that a discriminator is trained using the labeled data (so maybe that occurs before training the generative model). Also, I know that there is a race between the generator and the discriminator, so maybe training occurs online. So I have some concerns here: How many outputs should the discriminator have (is it one output that describes the probability, e.g. P(x))? How do we choose its output when feeding fake data vs. real data? Is the discriminator trained before using it with the DCGAN, or is the training done online? (It is mentioned in the original paper, Generative Adversarial Nets, https://arxiv.org/pdf/1406.2661.pdf, that the whole network is trained using back propagation, hence I think it's online.) Any help is much appreciated!! | Training the Discriminative Model in Generative Adversarial Neural Network | unsupervised learning;gan | In normal GANs, there are no labels; the training is completely unsupervised. The role of the discriminator is to tell apart samples generated by the generator from those taken from the training dataset. The training dataset is just a bunch of images. The discriminator is trained to output 0 for data generated by the generator (i.e. fake data) and 1 for real data (so the discriminator has a single output). This should answer points 1 and 2. The training of the discriminator and the generator takes place alternately in a loop: first we train the discriminator, then the generator, then the discriminator again, etc. It is possible (and common) to train the discriminator a few times for each time we train the generator. This should answer point 3. It is also possible to use labels, but not in the way you were suggesting. When labels are used, we have Conditional GANs (https://arxiv.org/abs/1411.1784). In this case, the label is supplied as input to both the generator and the discriminator. The generator has to generate data that is associated with the supplied label. The discriminator has to tell apart fake data from real data, given the label.
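A heavily simplified sketch of the alternating loop the answer describes. PyTorch is an assumption here (the paper predates it), and the layer sizes, learning rates, and data_loader (standing for batches of real, flattened images) are all invented:

import torch
from torch import nn

D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

for real in data_loader:                      # real: batch of flattened images
    n = real.size(0)
    # discriminator step: push D(real) toward 1 and D(fake) toward 0
    fake = G(torch.randn(n, 64)).detach()     # no gradient into G on this step
    d_loss = bce(D(real), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: push D(G(z)) toward 1, i.e. try to fool D
    g_loss = bce(D(G(torch.randn(n, 64))), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Note how this matches the answer: D has a single sigmoid output, the targets are 1 for real and 0 for fake, and D and G are updated alternately inside one loop (repeating the D step several times per G step is a common variation).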
_webmaster.91957 | I have 3 Wordpress instances of the same site in different languages hosted in different regions - Ireland, Russia and Spain. The 3 sites are code identical - just the content is different.I'm planning to combine them a use a multilingual plugin instead and redirect the .ru and .es versions to the .ie site. I'm wondering:Is this a good idea? Are any multilingual plugins good enough?Is an search engine penalty likely?Do search engine rankings rank much better if each site ishosted in the respective country? e.g. .ru is hosted in Russia andnot Ireland. My main concern is a bit hit on the search rankings. | Multiple domains & one installation - SEO penalty? | seo;multilingual | null |
_unix.242861 | I have Debian 8.2 installed on a VirtualBox VM and I added the unstable (sid) repository to /etc/apt/sources.list without removing any other repositories and afterwards I ran apt-get update && apt-get upgrade && apt-get autoremove. Before this, I had Plasma 4 installed, I was hoping that adding this repository would give me Plasma 5, but instead I have no Plasma desktop installed at all, or so it seems. Whenever I run:apt-get install kde-fullI get the error:The following packages have unmet dependencies: kde-full : Depends: kde-plasma-desktop (>= 5:84) but it is not going to be installed Depends: kde-plasma-netbook (>= 5:84) but it is not going to be installed Depends: kdeartwork (>= 4:4.11.3) but it is not going to be installed Depends: kdenetwork (>= 4:4.11.3) but it is not going to be installed Depends: kdeutils (>= 4:4.11.3) but it is not going to be installed Depends: kdepim (>= 4:4.11.3) but it is not going to be installed Depends: kdeplasma-addons (>= 4:4.11.3) but it is not going to be installed Recommends: kde-standard (>= 5:84) but it is not going to be installed Recommends: kdewebdev (>= 4:4.11.3) but it is not going to be installedE: Unable to correct problems, you have held broken packages.whenever I try to install specific Plasma components using apt-get install like by running apt-get install kde-plasma-desktop I get similar errors, with the exact same final line (i.e., E: Unable to correct problems, you have held broken packages.).TroubleshootingAs far as troubleshooting goes, I have Googled E: Unable to correct problems, you have held broken packages. and found the questions Fix held broken packages on debian? and E: Unable to correct problems, you have held broken packages and tried:apt-get install -f kde-fullwhich returned the same error as running without the -f option. I also ran apt-get -f install and it just returned:Reading package lists... DoneBuilding dependency tree Reading state information... Done0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.I have also tried running:aptitude why-not kde-fullandaptitude why-not kde-plasma-desktopand both returned:Unable to find a reason to remove ...where ... is the package name provided after aptitude why-not. While apt-mark showhold returned no output whatsoever. | Broken KDE Plasma desktop after adding sid repository under Debian | debian;apt;plasma | null |
_codereview.26445 | I want to run a background job on my web server to do some database maintenance. I am looking at using APScheduler to do this.I am planning on running the below code in a separate process to my main web server. I don't really want to tie the code to my web server.Question: Is using While True pass at the end of a cron-like scheduler considered bad practice? How should it be done? (time.sleep()?)from apscheduler.scheduler import [email protected]_schedule(days=1)def tick(): # do a clean up jobwhile True: pass | Leaving an APScheduler in a while True loop | python | while True: pass will consume 100 % of one CPU which is not something you want. I'm not familiar with APScheduler, but a quick look into the docs reveals a daemonic option:Controls whether the scheduler thread is daemonic or not.If set to False, then the scheduler must be shut down explicitly when the program is about to finish, or it will prevent the program from terminating.If set to True, the scheduler will automatically terminate with the application, but may cause an exception to be raised on exit. |
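One low-cost way to keep the main thread alive, assuming the legacy apscheduler API used in the question, is to park it in time.sleep instead of spinning; sched stands for the Scheduler instance from the question's code, and the shutdown call is an assumption about that API version:

import time

sched.start()            # sched is the Scheduler() from the question
try:
    while True:
        time.sleep(60)   # parks the main thread; effectively zero CPU
except (KeyboardInterrupt, SystemExit):
    sched.shutdown()

The sleep interval is arbitrary: the scheduler runs its jobs on its own thread, so the main thread only needs to exist, not to wake up promptly.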
_unix.353994 | I created a shell script which I am running using nohup. This script runs various sql script in sequence but few in parallel also. I have the following statements in my script-echo exit | sqlplus -s ${username}/${pwd}@${DB} @close1.sqlecho exit | sqlplus -s ${username}/${pwd}@${DB} @close2.sqlecho exit | sqlplus -s ${username}/${pwd}@${DB} @insertPricing1.sql &pid1=$!echo exit | sqlplus -s ${username}/${pwd}@${DB} @insertPricing2.sql &pid2=$!echo Pricing Insert PIDs => ${pid1}, ${pid2}while [ `ps -p ${pid1},${pid2} | wc -l` > 1 ]dosleep 5doneecho exit | sqlplus -s ${username}/${pwd}@${DB} @insertPricing3.sqlThe intention is to run close1 -> close2 -> insertPricing1 & insertingPricing2 in parallel -> insertPricing3. where -> means in sequence.When I checked the result the next day (after sufficient time it should have been completed), I saw that the shell script was still running. Pricing1 and Pricing2 were done but Pricing3 didn't start. The processes for 1 and 2 had finished.ps -p 19105,19107 PID TTY TIME CMDThere is some problem in the while loop as when I run this in # ps -p 19105,19107 | wc -l1but this-# while [ `ps -p 19105,19107 | wc -l` > 1 ]> do> echo hello> donehellohellohellohellohellohellohellohellohellohellohellohellohellohellohello........ ctrl+Cso why this loop works when 1 is not greater than 1? What should be the solution? | While loop for checking active processes fails | bash;shell script;ps | Thanks to @steeldriver's comment for helping me out. It was a silly mistake from my side. > is considered as redirection operator inside [ ] (or most of the places in a shell script). The standard way to use is -gtFor comparing integers as per the answer in the link--eq #Is equal-ne #Is not equal-lt #Less than-le #Less than or equal-gt #Greater than-ge #Greater than or equal |
_unix.220771 | Part of my software issues various commands to open and view different file types. For instance I use atril for PDFs and eom for PNGs.However I have a slight problem with CSV files. I can open them with soffice calc <filepath> but each time it goes through the Import stage.Is there a way I can avoid this, to avoid the risk of users creating issues, as the format is consistent and the only separator I need to include is the comma ,?Thanks in advance. | Open CSV File And Go Straight To Spreadsheet | command line;linux mint;csv;libreoffice | A method to skip importing would be to convert the file to a format that can be read without importing - so for instance:soffice --headless --convert-to ods --outdir /tmp tblIssues.csvsoffice --view /tmp/tblIssues.odsrm /tmp/tblIssues.odsThis converts the file tblIssues.csv to a ODS spreadsheet, saves it to /tmp and opens it in Libreoffice. Once it has finished it removes the converted file (optional).The --view option opens the file as read-only, and also hides the GUI elements needed for editing, making LibreOffice more practical as a viewer.You could also use other formats, such as PDF (--convert-to pdf) and then you can use another viewer like atril.Note with I think the libreoffice convert command may use the settings used by the user last in the Importer, so if it is set to use a delimiter other than , it may not work.Also, you can modify the commands to...hide output:COMMAND > /dev/null 2>&1separate from the terminal:COMMAND & disown |
_softwareengineering.241152 | I have the following extension method: public static IEnumerable<T> Apply<T>( [NotNull] this IEnumerable<T> source, [NotNull] Action<T> action) where T : class { source.CheckArgumentNull(source); action.CheckArgumentNull(action); return source.ApplyIterator(action); } private static IEnumerable<T> ApplyIterator<T>(this IEnumerable<T> source, Action<T> action) where T : class { foreach (var item in source) { action(item); yield return item; } }It just applies an action to each item of the sequence before returning it.I was wondering if I should apply the Pure attribute (from Resharper annotations) to this method, and I can see arguments for and against it.Pros:strictly speaking, it is pure; just calling it on a sequence doesn't alter the sequence (it returns a new sequence) or make any observable state changecalling it without using the result is clearly a mistake, since it has no effect unless the sequence is enumerated, so I'd like Resharper to warn me if I do that.Cons:even though the Apply method itself is pure, enumerating the resulting sequence will make observable state changes (which is the point of the method). For instance, items.Apply(i => i.Count++) will change the values of the items every time it's enumerated. So applying the Pure attribute is probably misleading...What do you think? Should I apply the attribute or not? | Is this method pure? | c#;pure function | No it is not pure, because it has side effect. Concretely it is calling action on each item. Also, it is not threadsafe.The major property of pure functions is that it can be called any number of times and it never does anything else than return same value. Which is not your case. Also, being pure means you don't use anything else than the input parameters. This means it can be called from any thread at any time and not cause any unexpected behavior. Again, that is not case of your function.Also, you might be mistaken on one thing: function purity is not question of pros or cons. Even single doubt, that it can have side effect, is enough to make it not pure.Eric Lippert raises a good point. I'm going to use http://msdn.microsoft.com/en-us/library/dd264808(v=vs.110).aspx as part of my counter-argument. Especially line A pure method is allowed to modify objects that have been created after entry into the pure method.Lets say we create method like this:int Count<T>(IEnumerable<T> e){ var enumerator = e.GetEnumerator(); int count = 0; while (enumerator.MoveNext()) count ++; return count;}First, this assumes that GetEnumerator is pure too (I can't really find any source on that). If it is, then according to above rule, we can annotate this method with [Pure], because it only modifies instance that was created within the body itself. After that we can compose this and the ApplyIterator, which should result in pure function, right?Count(ApplyIterator(source, action));No. This composition is not pure, even when both Count and ApplyIterator are pure. But I might be building this argument on wrong premise. I think that the idea that instances created within the method are exempt from the purity rule is either wrong or at least not specific enough. |
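The deferred-side-effect trap the answer describes is not specific to C# iterators; here is the same shape as a Python generator, sketched as a cross-language illustration, where enumerating twice applies the action twice:

def apply(seq, action):
    for item in seq:
        action(item)     # the side effect fires at iteration time, not at call time
        yield item

counts = {"x": 0}
apply(["x"], lambda k: counts.update(x=counts["x"] + 1))          # does nothing by itself
for _ in range(2):
    list(apply(["x"], lambda k: counts.update(x=counts["x"] + 1)))
print(counts)            # {'x': 2}: enumerating twice ran the action twice

The bare call on the third-to-last line mirrors the question's "calling it without using the result is clearly a mistake": constructing the generator looks pure, but every later enumeration replays the observable state change.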
_codereview.165111 | I am just sharing the method that i am using to convert stacktrace to string. This stacktrace is then being used in AWS Lambda to log. There are two things i am mainly concerned aboutprivate String convertStackTrace(Throwable throwable){ StringWriter stringWriter = new StringWriter(); PrintWriter printWriter = new PrintWriter(stringWriter); String stackTrace; try { throwable.printStackTrace(printWriter); stackTrace = stringWriter.toString(); } catch(Exception e){ logSelf(Error converting exception. Simple exception message will be logged); stackTrace = throwable.getMessage(); } finally { try { stringWriter.flush(); stringWriter.close(); printWriter.flush(); printWriter.close(); } catch (Exception e){ logSelf(Error closing writers); } } return stackTrace;}Here is logSelf methodprivate void logSelf(String error){ lambdaLogger.log(formatMessage( this.getClass().getCanonicalName(), LOG_LEVEL_ERROR, error, null) );}Shall i be opening/closing those printwriters everytime an error is logged?Is it correct way to convert stacktrace to string? | Converting stacktrace to string | java;logging | Yes, you should be opening/closing the printwriters each time.Yes, conceptually it's a decent way to convert the stack trace..... but... there are concerns, and is there a better way?The most significant concern is that, if there's a problem writing to the PrintWriter, the call throwable.printStackTrace(printWriter); will fail, and the method will return a default value. This is not ideal.The reality, though, is that those methods can never fail because the IO is to a StringWriter, and there's no actual IO component. A failure there would be..... inconceivable. None of your exception handling can really happen... it's just not going to fail (the Titanic Logger).The issue is that your code is checking for impossible conditions in a way that's overkill.Still, using some Java 7 semantics, you can use try-with-resource functionality, and get a much more concise method:private static String convertStackTrace(Throwable throwable) { try (StringWriter sw = new StringWriter(); PrintWriter pw = new PrintWriter(sw)) { throwable.printStackTrace(pw); return sw.toString(); } catch (IOException ioe) { // can never really happen..... convert to unchecked exception throw new IllegalStateException(ioe); }} Note that the method is now static, and it does not need the finally section to clean up.You can see this running in ideone: https://ideone.com/rKj9mT |
_webmaster.55024 | My site was hacked about 6 months ago. I managed to finally get around to removing the malicious code, and Google removed the warning from my site. However, my traffic has not yet returned to where it was pre-hack. How long does it usually take for traffic to return after malware is removed? | site hacked but positions not back | google;domain hacks | null |
_unix.339687 | Premise:
I am using a Raspberry Pi 3 as an AP. I have added a USB-to-Ethernet adapter, and this is the configuration I have:

built-in Ethernet port as eth0 (WAN)
built-in WiFi interface as wlan0 (LAN, wireless)
USB-to-Ethernet adapter as eth1 (LAN, wired)

I have successfully bridged wlan0 and eth1 into a bridge, br0. Then I set up NAT to allow the devices on br0 to connect to the internet. All of this works.

Problem:
Now I would like to split the wired LAN, so that there is a virtual network (eth1:0) for trusted devices and another virtual network (eth1:1) for less trusted devices. The idea would be to add only eth1:0 to br0. This seems to work, but when I list the bridges, br0 seems to use eth1 directly, instead of the virtual interface eth1:0. In fact, if I try to create another bridge (br1) and add the other virtual network (eth1:1), I get an error saying that the interface is already in a bridge. So it seems that a virtual interface cannot be added to a bridge, only its parent.

Is this true? Is there some other way to do it?

This is the test script I am using:

function configure_firewall() {
    echo CONFIGURE FIREWALL START

    ####################### FORWARDING #####################
    # Enable IP forwarding
    echo 1 > /proc/sys/net/ipv4/ip_forward
    # Allow forwarding of traffic LAN -> WAN
    iptables -A FORWARD -i ${BRIDGE} -o ${WAN} -j ACCEPT
    # Allow traffic WAN -> LAN but only as reply to communication initiated from the LAN
    iptables -A FORWARD -i ${WAN} -o ${BRIDGE} -m state --state RELATED,ESTABLISHED -j ACCEPT
    # Drop anything else
    iptables -A FORWARD -j DROP

    ####################### MASQUERADING ########################
    # Do the NAT
    iptables -t nat -A POSTROUTING -o ${WAN} -j MASQUERADE

    ###################### INPUT #############################
    # Allow local connections
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -i ${BRIDGE} -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -i ${WAN} -j ACCEPT
    iptables -A INPUT -i ${WAN} -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A INPUT -j DROP

    ###################### OUTPUT #############################
    iptables -A OUTPUT -j ACCEPT

    echo CONFIGURE FIREWALL END
}

function teardown_bridge() {
    echo TEARDOWN BRIDGE START
    ifconfig ${BRIDGE} down
    brctl delif ${BRIDGE} ${LAN}:0
    brctl delif ${BRIDGE} ${WIFI}
    brctl delbr ${BRIDGE}
    echo TEARDOWN BRIDGE END
}

function configure_bridge() {
    echo CONFIGURE BRIDGE START
    brctl addbr ${BRIDGE}
    brctl addif ${BRIDGE} ${LAN}:0
    brctl addif ${BRIDGE} ${WIFI}
    ifconfig ${BRIDGE} up 192.168.10.1 netmask 255.255.255.0 broadcast 192.168.10.0
    echo CONFIGURE BRIDGE END
}

function configure_interfaces() {
    echo CONFIGURE INTERFACES START
    ifconfig ${LAN} up 0.0.0.1
    ifconfig ${LAN}:0 up 0.0.0.2
    ifconfig ${LAN}:1 up 0.0.0.3
    echo CONFIGURE INTERFACES END
}

function teardown_interfaces() {
    echo TEARDOWN INTERFACES START
    ifdown ${LAN}:1
    ifdown ${LAN}:0
    ifdown ${LAN}
    echo TEARDOWN INTERFACES END
}

function delayed_reset() {
    for i in `seq 15 -1 0`; do
        sleep 1
        echo ${i}
    done
    sync
    reboot
    exit
}

#test_network
#if [ $? -ne 0 ] ; then
    teardown_firewall
    teardown_bridge
    teardown_interfaces
    configure_interfaces
    configure_bridge
    configure_firewall
    #delayed_reset
#fi

After running the script, if I run ifconfig, it looks like the virtual networks exist:

eth1      Link encap:Ethernet  HWaddr 00:13:3b:62:11:f6
          inet addr:0.0.0.1  Bcast:255.255.255.255  Mask:0.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30712 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19110 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5261152 (5.0 MiB)  TX bytes:5355909 (5.1 MiB)

eth1:0    Link encap:Ethernet  HWaddr 00:13:3b:62:11:f6
          inet addr:0.0.0.2  Bcast:255.255.255.255  Mask:0.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1:1    Link encap:Ethernet  HWaddr 00:13:3b:62:11:f6
          inet addr:0.0.0.3  Bcast:255.255.255.255  Mask:0.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

But the entire eth1 appears to be in br0:

root@raspberrypi:/home/pi# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.00133b6211f6       no              eth1
                                                        wlan0

And this seems to confirm it:

root@raspberrypi:/home/pi# brctl addbr br1
root@raspberrypi:/home/pi# brctl addif br1 eth1:1
device eth1:1 is already a member of a bridge; can't enslave it to bridge br1.

Note: I did look at "Create and bridge virtual network interfaces in Linux", but it seems to be obsolete, as it refers to iproute2. | Linux: using a virtual network inside a bridge | linux;networking;bridge | You can't create the br0 and br1 bridges on one interface eth1, because eth1:0 and eth1:1 are the same interface eth1 with two different IP addresses.

You can create VLANs if your wired network and switch allow it. If you create two VLANs, eth1.10 and eth1.20, you will have two different interfaces, which can be used for the bridges br0 and br1.
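A hedged sketch of the VLAN route suggested above, using iproute2; the VLAN IDs 10 and 20 are assumptions, and the switch ports have to be configured to carry those tags:

# create two tagged sub-interfaces on top of eth1
ip link add link eth1 name eth1.10 type vlan id 10
ip link add link eth1 name eth1.20 type vlan id 20
ip link set eth1.10 up
ip link set eth1.20 up
# unlike eth1:0/eth1:1, each VLAN interface is a real interface
# and can be enslaved to its own bridge
brctl addif br0 eth1.10
brctl addbr br1
brctl addif br1 eth1.20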
_cs.47980 | I was reading the Wikipedia article about the von Neumann bottleneck. Surely there is some simple answer to this: why can we not read and write to the same address at the same time? We can if the addresses are different. | Why can we not read and write to the same address at the same time? | computer architecture | null
_unix.203386 | I have a question after reading about extended globs. After using shopt -s extglob, what is the difference between the following?

?(list): Matches zero or one occurrence of the given patterns.
*(list): Matches zero or more occurrences of the given patterns.
+(list): Matches one or more occurrences of the given patterns.
@(list): Matches one of the given patterns.

Yes, I have read the above descriptions that accompany them, but for practical purposes I can't see situations where people would prefer ?(list) over *(list). That is, I don't see any difference. I've tried the following:

$ ls
> test1.in test2.in test1.out test2.out
$ echo *(*.in)
> test1.in test2.in
$ echo ?(*.in)
> test1.in test2.in

From the description, I'd expect $ echo ?(*.in) to output test1.in only, but that does not appear to be the case. Thus, could anyone give an example where the type of extended glob used makes a difference?

Source: http://mywiki.wooledge.org/BashGuide/Patterns#Extended_Globs | Extended Glob: What is the difference in syntax between ?(list), *(list), +(list) and @(list) | bash;wildcards | $ shopt -s extglob
$ ls
abbc abc ac
$ echo a*(b)c
abbc abc ac
$ echo a+(b)c
abbc abc
$ echo a?(b)c
abc ac
$ echo a@(b)c
abc

The quantifier applies within each filename, not across the list of matches: a?(b)c matches ac (zero b's) and abc (one b), while a@(b)c matches only abc (exactly one b). That is also why ?(*.in) in the question still matched both .in files: each filename independently matched the zero-or-one pattern.
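A hedged follow-up using the question's own naming scheme, to show where the quantifiers diverge (the files here are created just for the demo):

$ touch test.in test1.in test11.in
$ echo test?(1).in
test.in test1.in
$ echo test+(1).in
test1.in test11.in
$ echo test@(1).in
test1.in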
_vi.9218 | For me it's very annoying that one command (:quit) serves two functions: closing a window and quitting Vim. I just want one command to close a window and another command to quit Vim. For example, a :q command to close the window and an :e command to quit/exit Vim. How do I create these shortcuts in my Vim configuration? | How to stop quitting vim but close windows? | key bindings | Personally I very much dislike having to type out the entire word close - its smallest abbreviation is still :clo.

To solve this, I created the following command in my vimrc:

command -nargs=0 C :close

This means I have a nice, quick command :C which is very similar to :q, but it only closes the current window rather than quitting Vim. I use it all the time.
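A hedged sketch of the remapping the asker describes. The abbreviation trick is a common vimrc idiom; making a plain :q close the window is a personal convention, and since :close refuses to close the last window, a real quit command (here :Q) is still needed:

" expand a lone :q into :close; leaves :q! and ranged commands alone
cnoreabbrev <expr> q (getcmdtype() ==# ':' && getcmdline() ==# 'q') ? 'close' : 'q'
" a capital Q to actually quit Vim
command! -nargs=0 Q qall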
_unix.157293 | Predictive Self-Healing is an OS feature that predicts and detects a fault in one of the system's components and automatically repairs it. MINIX, Solaris, and Linux on POWER all have this. But is it available in modern Linux distributions on the x86 platform? Or will it be? | Does Linux provide Predictive Self-Healing on x86? | linux;x86 | null
_cstheory.32549 | Carathéodory's theorem says that if a point $x$ of $R^d$ lies in the convex hull of a point set $P$, then there is a subset $P' \subseteq P$ consisting of $d + 1$ or fewer points such that $x$ can be expressed as a convex combination of $P'$.

A recent result by Barman (see paper) shows an approximate version of the above theorem. More precisely, given a set of points $P$ in the unit ball of the $\ell_p$ norm, $p \in [2,\infty)$, for every point $x$ in the convex hull of $P$ there exists an $\epsilon$-close point $x'$ (under the $\ell_p$ distance) that can be expressed as a convex combination of $O\left(\frac{p}{\epsilon^2}\right)$ points of $P$.

Now, my question is: does the above result imply (or have some connection with) some kind of dimensionality reduction for the points in the convex hull of $P$? It seems intuitive to me, although I don't have a formal proof of it: for any point $x$ inside the hull of $P$ there is a point, say $x'$, in a close neighborhood of $x$ that can be written as a convex combination of a constant number of points of $P$, which is in some sense a dimensionality reduction of $x'$. Please let me know whether I have put my question clearly.

Thanks. | Does the approximate Carathéodory theorem imply dimensionality reduction? | machine learning;computational geometry | The approximate Carathéodory theorem goes back to the 60s, and probably way earlier than that (it follows, for example, from the mistake bound of the perceptron algorithm analysis). As for the dimensionality reduction, the answer is no: the supporting subsets are different subsets, and their number is too large. In particular, the number of possible subsets is $n^{O(1/\epsilon^2)}$.

But there is some connection. Specifically, let $P$ be a set of $n$ points in high-dimensional Euclidean space of diameter $1$. Let $Q$ be an $\epsilon$-net in the convex hull of $P$. That is, any pair of points of $Q$ is at distance at least $\epsilon$ from each other, and every point of $CH(P)$ is within distance $\epsilon$ of some point of $Q$. Now, by the approximate Carathéodory theorem, we know that $|Q| = n^{O(1/\epsilon^2)}$.

Now, imagine that you do some experiment, and with probability half the experiment succeeds for half the points of $Q$ (or, more formally, half the pairs of $Q \times Q$, since we look at vectors formed by differences of points of $Q$). How many times do you have to repeat the experiment till all the points are served? Well, roughly $\log_2 |Q| = O(\epsilon^{-2} \log n)$, which is, surprise surprise, the target dimension in the JL lemma. This of course does not imply the JL lemma; it is somewhat of a coincidence, a wrong calculation that gives the right bounds...

There is a useful lesson here, however: a set of $n$ points in high dimensions induces roughly $n^{O(1/\epsilon^2)}$ points that are $\epsilon$-distinct, independent of the ambient dimension.
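A hedged sketch of why $O(1/\epsilon^2)$ points suffice in the Euclidean case $p = 2$ (the empirical-mean argument behind the perceptron-style bound mentioned above). Write $x = \sum_i \lambda_i p_i$ and sample $X_1, \dots, X_k$ i.i.d. with $\Pr[X_j = p_i] = \lambda_i$. Since the $X_j$ are independent with mean $x$,

$$\mathbb{E}\left\| \frac{1}{k} \sum_{j=1}^{k} X_j - x \right\|_2^2 \;=\; \frac{1}{k}\, \mathbb{E}\left\| X_1 - x \right\|_2^2 \;\le\; \frac{\operatorname{diam}(P)^2}{k},$$

so $k = O(1/\epsilon^2)$ samples give, in expectation, a convex combination of $k$ points of $P$ within distance $\epsilon$ of $x$, and some realization is at least as close as the expectation.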
_unix.154072 | In a previous question, I asked how to write a PKGBUILD to install a binary .deb package. The solution was to extract the contents of the .deb and copy the data to the Arch Linux package fakeroot, ${pkgdir}/.

That means that if the .deb contains a data.tar.gz with the binaries stored in a usr/lib directory, the process to install this package is (in the PKGBUILD):

package() {
    cd $srcdir
    tar -xvzf data.tar.gz
    install -dm755 ${pkgdir}/usr/lib
    cp -r -f ${srcdir}/usr/lib ${pkgdir}/
}

However, if I do that, the package is installed successfully, but I cannot run the binaries (written in Python). If I execute a binary installed in that way, it returns this error:

Cannot open self [path to executable] or file [path to executable].pkg

On the other hand, if I write the PKGBUILD in the wrong way, that is, copying the binaries directly to the system root during package():

cp -r -f ${srcdir}/usr/lib /

the programs work perfectly. Is there something I'm missing?

Here is the package. | Archlinux proper PKGBUILD: Python executable error | arch linux;python;makepkg | null
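No accepted answer is recorded, but one plausible culprit is worth sketching: the "Cannot open self" message is typical of self-extracting Python executables (PyInstaller-style builds), which keep an archive appended to the ELF binary. makepkg strips binaries by default, and stripping destroys that appended payload; copying straight to / bypasses makepkg entirely, which would explain why that route works. A hedged PKGBUILD sketch with stripping disabled; this is an assumption, not a confirmed diagnosis:

# disable the default strip step so the appended archive survives (assumption)
options=('!strip')

package() {
    cd "$srcdir"
    tar -xzf data.tar.gz
    install -dm755 "$pkgdir/usr/lib"
    # -a preserves modes and symlinks, unlike plain -r
    cp -a "$srcdir/usr/lib/." "$pkgdir/usr/lib/"
}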