Columns: id (string, 5-27 chars); question (string, 19-69.9k chars); title (string, 1-150 chars); tags (string, 1-118 chars); accepted_answer (string, 4-29.9k chars)
_unix.353456
I have done the following: chown'd it to a group, chmod 775, usermod. This is the status of the permissions: drwxrwxr-x 3 root www-data 4096 Mar 22 23:11 html. The groups my user is in: ubuntu : ubuntu adm dialout cdrom floppy sudo audio dip www-data video plugdev netdev lxd. But if I do touch test it just gives me a permission denied. Am I missing something? As a little background, I'm running an Ubuntu server on AWS, and I'm trying to give my Apache user write permission in the html folder.
File permissions changed but group user can't write in them
ubuntu;permissions;permission denied
null
_reverseengineering.5874
I'm debugging a process inside a VM via Olly, and occasionally exporting a section dump when needed and loading it on the host system for better analysis.Right now I'm looking at a dump of a certain code section that's referencing function calls in another, dynamically allocated, section. In the debugger I can of course see all the function calls, but in IDA all I have are calls to immediate addresses that don't exist.I'd like to be able to dump the referenced section and somehow bluntly attach it to the same .idb so IDA would be able to resolve the references for me.I couldn't find anything about it on google or when digging around the menus.Did I miss something or is this impossible or requires an addon? It's also possible for me to write an idapython script that defines and copies the section over, but I don't see any relevant API calls.Debugging via IDA and taking a full memory snapshot is a solution I'd like to not have to use; I enjoy using olly.
Adding another section to an idb file
ida
After loading the main dump into IDA, go to File > Load file > Additional binary file... in IDA's menu bar, select the dump of the dynamically allocated memory, and specify the dynamic allocation address as the Loading segment.
_softwareengineering.166059
I am starting a new job soon as a frontend developer. The App I would be working on is 100% Javascript on the client side. all the server returns is an index page that loads all the Javascript files needed by the app.Now here is the problem:The whole of the application is built around having functions wrapped to different namespaces. And from what I see, a simple function like rendering the HTML of a page can be accomplished by having a call to 2 or more functions across different namespace...My initial thought was this does not feel like the perfect solution and I can just envisage a lot of issues with maintaining the code and extending it down the line.Now I would soon start working on taking the project forward and would like to have suggestions on good case practices when it comes to writing and managing a relatively large amount of javascript code.
How to have a maintainable and manageable Javascript code base
javascript;functional programming;maintainability
This is a very good question, and a common problem in advancing JavaScript architecture. It sounds to me like you are describing a situation of tight coupling. Once functions become objectified, the tendency is to reference these wonderful objects directly, from object to object, across namespaces even. Because it is easy, right?

    var Object1 = {}, Object2 = {};
    Object1.somefunction = function(){
        // Tight coupling!!
        Object2.functionCall();
    };

It is easy, but these seemingly innocent hard references gang up to make you sad when you have to remove or replace objects. That happens a lot in JavaScript, so understanding tight coupling is key to making a JS codebase maintainable. Here are some other thoughts:

1 - If your objects are not already communicating by triggering and listening to events, they should be. This is the solution to hard references.
2 - Design Patterns. There are many challenges that have already been solved out there; the standardized, reusable solutions are design patterns. Understanding where the patterns are helps you focus on which solutions may make sense. One pattern for communicating across objects is called Publisher/Subscriber, or PubSub.
3 - These things help with maintainability: MVC with a router, templates with data binding, AJAX through Proxy or Delegate objects.
4 - Look for frameworks and libraries that solve the cross-browser problems for you. Don't re-invent wheels you don't need to know more about. Depending on your environment, some frameworks may become obvious choices.
5 - Think about enhancements and optimizations like build systems, minification, linting, TDD, etc.
6 - Also, most important of all: module loaders. Take a look at Require.js. It is a very nice way to break JS into modules and then load them all in an optimized way.

Hope that helps.
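To make the PubSub idea above concrete, here is a minimal sketch of an event bus in plain JavaScript; the names (EventBus, subscribe, publish) are illustrative, not from any particular library:

    // Minimal publish/subscribe bus: objects communicate through events,
    // never through direct references to each other.
    var EventBus = {
      topics: {},
      subscribe: function (topic, listener) {
        (this.topics[topic] = this.topics[topic] || []).push(listener);
      },
      publish: function (topic, data) {
        (this.topics[topic] || []).forEach(function (listener) {
          listener(data);
        });
      }
    };

    // The renderer no longer needs to know who asks for a page.
    EventBus.subscribe('page:render', function (data) {
      console.log('rendering', data.name);
    });
    EventBus.publish('page:render', { name: 'home' });

Swapping out either side later only requires changing which events are published or listened to, not hunting down hard references across namespaces.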
_webmaster.12457
While checking server logs, I have seen that often the Bing bot requests this URL. URL: /da-net/products/cadpac/cadpac_jis.shtmlBrowser: Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)Can anyone explain this?
Why does the Bing bot request this URL?
bing
null
_codereview.115871
Recently, I ventured into the realm of C++ programming. I have extensive knowledge in C and C#, but very basic knowledge of C++. I decided to build a brief TicTacToe example to test my knowledge.It works by typing any comma-separated value (i.e. 2,1) and decides whose turn it is by odds and evens. X always goes first.Problems: Does not claim a winner. There is no fail-safe for picking the same spot twice. I realize this, but I wanted the semantics over the algorithm. And I don't know of a sure-fire way to find a winner with a vector since I imagine a sea of nested if-statements.I would love to know what looks legal, and what I should never do again.Board.h#include <iostream>#include <array>#include <vector>#include <string>#include <iterator>class Board{ /* ** Protected boolean for checking for winner */ bool gameWon = false; /* ** Use odd or even to tell whos turn it is */ int turnCount = 0;public: /* ** Pair vector for locations of X's and O's */ std::vector<std::pair<int, int>> locations; bool GameWon(void){ return gameWon; } void DrawBoard(void); void NextTurn(void);private: int FindLocation(std::pair<int, int>);};Board.cpp#include Board.hconst std::string line = ------------- ;const std::string wall = | ;/*** Brute Force Drawing**** Iterate over the 9 squares and** decide whether or not an X belongs** there or if it's an O. Upon each** square, check if the pair-location** (x,y) matches an item in our vector*/void Board::DrawBoard(){ // Clear screen on windows systems, throws an error // on Unix and OS X system(CLS); std::cout << Tic-Tac-Toe Console << std::endl; std::cout << line << std::endl; for (int i = 0; i < 3; i++) { for (int j = 0; j < 3; j++) { bool tileOpen = true; std::cout << wall; if (locations.size() > 0) { for (std::pair<int, int> p : locations) { if (p.first == i && p.second == j) { tileOpen = false; if (FindLocation(p) % 2 == 0) { std::cout << X; } else { std::cout << O; } } } } else { std::cout << ; } if (tileOpen == true) std::cout << ; } std::cout << wall << std::endl; std::cout << line << std::endl; }}/*** Handles Board Turns**** Assume every even turn is X,** and every odd is O.*/void Board::NextTurn(){ std::string input; if (turnCount % 2 == 0) std::cout << X's turn: << std::endl; else std::cout << O's turn: << std::endl; std::cin >> input; locations.push_back(std::pair<int, int>(input[0] - '0' - 1, input[2] - '0' - 1)); turnCount++;}/*** Vector Find Function**** Returns the index of a found object** in the vector; 0 if otherwise*/int Board::FindLocation(std::pair<int, int> p){ std::vector<std::pair<int, int>>::iterator it; it = std::find(locations.begin(), locations.end(), p); return std::distance(locations.begin(), it);}Main.cpp#include <iostream>#include Board.hint main(){ Board mainBoard; /* ** Check if there is a winner each cycle */ while (!mainBoard.GameWon()) { /* ** Update Board */ mainBoard.DrawBoard(); /* ** Handle Players Turn */ mainBoard.NextTurn(); } /* ** Grab a random char so the ** game doesn't immediately exit */ getchar();}
Brief TicTacToe C++ example
c++;tic tac toe
null
_unix.275963
When we make swap on a regular physical drive, we just use fdisk to make a partition with type swap, using the swap code for that partition, followed by the mkswap and swapon commands. But in the case of LVM, we have to specify the partition type to be LVM using the LVM code, then just make a PV, VG and LV, and then run mkswap on that volume followed by swapon (please correct me if I am wrong). So my question is: why don't we need the partition to be of type swap in the LVM case? What is the logic behind this?
Making a swap partition using lvm
partition;lvm;swap
Linux pretty much ignores partition types, it cares more about the content on those partitions. So you don't need a swap partition type to use swap in Linux, and thus there is no issue with LVM not having partition types either. But you have to use the correct partition type to stop Windows from attempting to format your Linux data/swap partitions... it's also useful for humans to be able to tell what's what when just looking at the partition table.
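For completeness, a minimal sketch of the LVM route described in the question; the device names and the size are made up for the example:

    # create the physical volume, volume group and logical volume
    pvcreate /dev/sdb1
    vgcreate vg0 /dev/sdb1
    lvcreate -L 2G -n swap vg0

    # format and activate the swap area; no partition type is involved
    mkswap /dev/vg0/swap
    swapon /dev/vg0/swap

The only thing mkswap and swapon care about is the swap signature written on the device, not any partition type code.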
_scicomp.24300
I am trying to use a projection method that deals with the viscous effects implicitly to model flow around a cylinder. I'm having trouble figuring out what the boundary conditions should be, particularly on the inflow and outflow. I think we can consider the Stokes equations without loss of generality: $$ \mathbf{u}_t = \Delta \mathbf{u} - \nabla p $$$$ \nabla \cdot\mathbf{u} = 0$$If we discretize explicitly in time (ignoring the pressure term) we get:$$ \frac{\mathbf{u}^{*} - \mathbf{u}^n}{\delta} = \Delta\mathbf{u}^{n}$$This leads to:$$ \mathbf{u}^{n+1}=\mathbf{u}^* - \delta\nabla p^{n+1}$$Taking the divergence of this equation yields the pressure Poisson equation. The boundary conditions can be found by dotting with the normal:$$\nabla p^{n+1} \cdot\mathbf{n} = \frac{(\mathbf{u}^* - \mathbf{u}^{n+1})}{\delta}\cdot\mathbf{n}$$where $\mathbf{u}^*\cdot\mathbf{n}$ can be computed and $\mathbf{u}^{n+1}\cdot\mathbf{n}$ is given as a boundary condition.Now if we discretize implicitly in time we get:$$ \frac{\mathbf{u}^* - \mathbf{u}^n}{\delta} = \Delta \mathbf{u}^{*}$$This is the diffusion equation for $\mathbf{u}^*$, and requires boundary conditions on $\mathbf{u}^*$. The most obvious choice is setting $\mathbf{u}^* = \mathbf{u}^{n+1}$ on the boundary. However if we do that, for the pressure Poisson equation we end up with $\nabla p\cdot\mathbf{n} = 0$ everywhere (even at the inflow and outflow). This seems incorrect to me.What should the correct boundary conditions on the pressure Poisson equation be in this case?
Implicit projection method with inflow boundary conditions
fluid dynamics;projection
tl;dr: Reformulate the projection and avoid the need for boundary conditions on the pressure.

I think you are misinterpreting the projection scheme. In all formulations that I know, the pressure is never really computed. It rather goes like this:

1. Compute a tentative velocity $u^*$ approximating $u^{n+1}$.
2. Project $u^*$ onto the space of divergence-free functions to obtain $u^{n+1}$ by solving $$ \begin{bmatrix} I & \nabla \\ \nabla \cdot & 0 \end{bmatrix} \begin{bmatrix} u^{n+1} \\ \phi \end{bmatrix} = \begin{bmatrix} u^{*} \\ 0 \end{bmatrix} $$

Some remarks here:

- You may compute $u^*$ as you suggest (setting $\nabla p^{n+1}=0$). Typically, one rather uses a guess for the pressure gradient.
- In step 2 one computes the actual velocity, so the given boundary conditions apply there. However, the function $\phi$ is not the pressure, so it might be difficult to interpret boundary conditions that involve the pressure (like Navier conditions). There are ways to relate $\phi$ to the pressure; cf., e.g., [1].
- Furthermore, the system in step 2 can be brought into the pressure Poisson equation that you have, but at the expense of another spatial differentiation (taking the divergence) and the additional need for boundary conditions.

If you really want to compute the pressure, you can use the relation to $\phi$ (this is often sufficient for the next pressure-gradient guess) or solve the actual pressure Poisson equation derived from your equations once the velocity has been computed. There are some issues about the right boundary conditions, but also some good answers to that in [2].

[1] P. M. Gresho. On the theory of semi-implicit projection methods for viscous incompressible flow and its implementation via a finite element method that also introduces a nearly consistent mass matrix. I: Theory. Int. J. Numer. Methods Fluids, 11(5):587-620, 1990.
[2] P. M. Gresho and R. L. Sani. Incompressible flow and the finite element method. Vol. 2: Isothermal laminar flow. Wiley, Chichester, UK, 2000.
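As a supplementary sketch of the algebra in step 2, using only the notation above: applying $\nabla\cdot$ to the first block row $u^{n+1} + \nabla\phi = u^*$ and using $\nabla\cdot u^{n+1}=0$ gives $$ \Delta\phi = \nabla\cdot u^*, \qquad \nabla\phi\cdot n = (u^* - u^{n+1})\cdot n \quad \text{on the boundary}. $$ So if $u^*$ is forced to equal $u^{n+1}$ on the boundary, the Neumann data for $\phi$ vanish there; this matches the observation in the question, but it is a condition on $\phi$, not on the physical pressure.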
_codereview.129546
I am working on a genetic algorithm using the following code. The variable best in generateNewPopulation stores the best chromosomes from the previous generation and adds it without modification to the new generation. The problem is that if I allow crossover, the best changes it value to the best of the current generation.It prints different values of best just before and after crossover method. The code works fine without crossover, considering only mutation. Random rand = new Random(); int rand1,rand2,var; int numRuns=0; ArrayList<Pair> pop = new ArrayList<Pair>(); ArrayList<Pair> popClone = new ArrayList<Pair>(); boolean didRun=false; //Initialize population public void genPop (){ if(!didRun){ for(int j=0; j<12;j++){ StringBuffer x1 = new StringBuffer(); StringBuffer x2 = new StringBuffer(); for (int i=0; i<10;i++){ rand1 = rand.nextInt(4)%2; rand2 = rand.nextInt(4)%2; x1.append(rand1); x2.append(rand2);} Pair p = new Pair(x1,x2); pop.add(p); } System.out.println(Initial Population is ); for(int i=0; i<pop.size();i++){ Pair b = pop.get(i); System.out.print(b.getX() + + b.getY() + fitness value is + decodedVal(b)); System.out.println(); } didRun=true; Pair best = getMax(pop); System.out.println(best chromosome is + best.getX() + + best.getY() + fitness + decodedVal(best) ); System.out.println(Average fitness is + getAverage(pop) + \n); } generateNewPopulation(); }//For creating successive generationspublic void generateNewPopulation(){ numRuns++; popClone.clear(); Pair best = getMax(pop); popClone.add(0,best); popClone.addAll(rouletteWheel(pop)); ArrayList<Pair> Clone = new ArrayList<Pair>(popClone); popClone = crossover(popClone); //Crossover of strings //Mutation of strings for(int k=0; k<6; k++){ int randN = rand.nextInt(8)+1; if(checkMutate()){ popClone.remove(randN); popClone.add(mutate(Clone.get(randN))); } } popClone.add(best); pop.clear(); pop.addAll(popClone); System.out.println(New Population Generated : No - + numRuns ); for(int i=0; i<pop.size(); i++){ System.out.print(pop.get(i).getX() + + pop.get(i).getY() + \n);} System.out.println(best chromosome is + getMax(pop).getX() + + getMax(pop).getY() + fitness + decodedVal(getMax(pop)));System.out.println(Average fitness is + getAverage(pop) + \n); }The roulette wheel selection that enters 11 chromosomes to new generation. Elitism adds one best from previous generation afterwards (which is not happening if I allow crossover). public ArrayList<Pair> rouletteWheel (ArrayList<Pair> p){ ArrayList<Pair> popClone1 = new ArrayList<Pair>(); double [][] values = new double[p.size()+1][4]; double sum=0; double cumulative=0; double rands; values[0][0]=0; values[0][1]=0; values[0][2]=0; for(int i=0; i<p.size(); i++){values[i+1][0] = decodedVal(p.get(i));sum+= values[i+1][0];}for(int i=0; i<p.size(); i++){values[i+1][1] = (values[i+1][0]/sum);}for(int i=0; i<p.size(); i++){cumulative+= values[i+1][1];values[i+1][2] = cumulative;}label1: for(int i=0; i<(p.size()-2); i++){ rands = rand.nextDouble();for(int j=((values.length)-1); j>=0; j--){if(values[j][2]<= rands){ popClone1.add(p.get(j)); continue label1;}else; } }return popClone1;}Why is the best value changed after calling crossover? 
The crossover method isn't taking the chromosome pair at index no 0 still the value is getting altered and is set to the best of current generation.The method crossover is as follows:public ArrayList<Pair> crossover(ArrayList<Pair>p ){ int randNo1=1; int randNo2=1; for(int j=1; j<5; j++){ do{ randNo1 = rand.nextInt(8)+1; randNo2 = rand.nextInt(8)+1; } while(randNo1== randNo2); int random = rand.nextInt(8)+1; char swap; for(int i=random; i<10; i++){ swap = p.get(randNo1).getX().charAt(i); char y = p.get(randNo2).getX().charAt(i); p.get(randNo1).getX().setCharAt(i,y); p.get(randNo2).getX().setCharAt(i, swap); } for(int i=random; i<10; i++){ char swap2; swap2 = p.get(randNo1).getY().charAt(i); char y = p.get(randNo2).getY().charAt(i); p.get(randNo1).getY().setCharAt(i, y); p.get(randNo2).getY().setCharAt(i, swap2); } } return p;}This is the mutation method: public Pair mutate(Pair p){ StringBuffer newParentX = new StringBuffer(10); StringBuffer newParentY = new StringBuffer(10); StringBuffer x = p.getX(); StringBuffer y = p.getY(); for(int i=0; i<10;i++){ int randNo = rand.nextInt(2); char c = x.charAt(i); if(randNo==1){ if(c=='0') newParentX.append('1'); if(c=='1') newParentX.append('0'); } else newParentX.append(x.charAt(i)); } for(int i=0; i<10;i++){ int randNo = rand.nextInt(2); char c = y.charAt(i); if(randNo==1){ if(c=='0') newParentY.append('1'); if(c=='1') newParentY.append('0'); } else newParentY.append(y.charAt(i)); } Pair newPair = new Pair(newParentX, newParentY); return newPair; }This is getMax(), that returns the best chromosome of the population:public Pair getMax(ArrayList<Pair>p){ int maxIndex=0; double fitnessValue; double maxValue= decodedVal(p.get(maxIndex)); //decodedVal gets the fitness value of corresponding chromosome for(int i=0; i<p.size(); i++){ fitnessValue = decodedVal(p.get(i)); if(fitnessValue > maxValue){ maxValue=fitnessValue; maxIndex=i; } else; } return p.get(maxIndex); }This is the problem i am getting in the output.The best chromosome of previous generation is not added to new generation.New Population Generated : No - 150100111011 01100001010111001110 00100001010111001110 00100001010111001110 00100001011101110110 01101100100111001110 00100001010111001110 00100001011001001001 01100100000111000010 10011110100011001001 11111010011010011011 00100001010100111011 0110000101best chromosome is 0100111011 0110000101 fitness 0.6773401619818802Average fitness is 0.4868817692316813 New Population Generated : No - 160110011111 01100001011001101001 01100100000110011111 01100001010101001010 00100001010110011111 01100001010101001010 00100001010110011111 01100001010110011111 01100001010011100100 00011110011110011011 00010101000111110111 01100110100110011111 0110000101best chromosome is 0101001010 0010000101 fitness 0.6195407585437024Average fitness is 0.4824919712727356
Genetic algorithm in Java
java;genetic algorithm
null
_codereview.36647
I'm trying out a new approach to SASS stylesheets, which I'm hoping will make them more organizined, maintainable, and readable. I often feel like there is a thin line between code that is well structured and code that is entirely convoluted. I would appreciate any thoughts as to which side of the line this code falls. I don't want to tell you too much more about what these styles are intended to produce -- my hope is that the code will explain this for itself. Also note that this is part of a larger project, so don't worry about missing dependencies, etc.Questions for reviewHow would you make this code easier to read/maintain? Can you understand what these styles are trying to produce?Is the purpose of the mixins/placeholders clear?File structure:theme/sass/partials/ widget/ collapsable/ _appendicon.scss _closeall.scss _toggleswitch.scss collapsable.scss collapsablered.scss _button.scsscollapsable.scss/** * Collapsable widget. * * The widget has open and closed states. * The widget has a Toggle Switch, which is visible in * both open and closed states. * All other content is hidden in the closed state.*/@mixin setOpenState { &, &.state-open { @content; }}@mixin setClosedState { &.state-closed { @content; }}@mixin setToggleSwitchStyles { &>h1:first-child, .collapseableToggle { @content; }}@import collapsable/closeall;@import collapsable/appendicon;@import collapsable/toggleswitch;%collapsable { @include setOpenState { @include setToggleSwitchStyles { @extend %toggleSwitch; } } @include setClosedState { @extend %closeAllExceptToggle; @include setToggleSwitchStyles { @extend %toggleSwitchClosed; } }}collapsablered.scss@import collapsable;@import ../button;%collapsableRed { @extend %collapsable; @include setOpenState { @include setToggleSwitchStyles { @extend %buttonWithRedBg; } } @include setClosedState { @include setToggleSwitchStyles { @extend %buttonWithDarkBg; } }}collapsable/_closeall.scss%closeAllChildren { * { display: none; }}%closeAllExceptToggle { @extend %closeAllChildren; @include setToggleSwitchStyles { display: block; .icon-sprite { display: inline-block; } }}collapsable/_appendicon.scss@import compass/utilities/general/clearfix;@import ../../icon;@mixin appendIcon { @include pie-clearfix; .icon-sprite { margin-right: 5px; vertical-align: -3px; } &:after { content: ''; position: relative; top: 2px; float: right; @content; }}%withCloseIcon { @include appendIcon { @extend .icon-close; // defined in _icon.scss }}%withOpenIcon { @include appendIcon { @extend .icon-rChevronDk; // defined in _icon.scss top: 1px; }}collapsable/_toggleswitch.scss%toggleSwitch { cursor: pointer; @extend %withCloseIcon;}%toggleSwitchClosed { @extend %toggleSwitch; @extend %withOpenIcon;}partials/_button.scss@import typography;%buttonWithRedBg { @extend %textOnRedBg; // defined in _typography.scss cursor: pointer; &:hover { background-color: $redDk; } &:active { background-color: $black; }}%buttonWithDarkBg { @extend %textOnDarkBg; // defined in _typography.scss cursor: pointer; &:hover { background-color: #000; } &:active { background-color: $redDk; }}
SASS Code Structure / Readability
css;sass
Overall, your naming conventions are pretty good. I don't feel like I need to go look at mixins themselves to figure out what their purpose is. The extensive use of extends does concern me, since it can lead to larger CSS rather than smaller like you might expect (see: Mixin, @extend or (silent) class?).Your %textOnDarkBg and %textOnRedBg extend classes might be redundant. If you're not already using Compass, you might want to take a look at it. It offers a function as well as a mixin for setting a good contrasting color against your desired background color (see: http://compass-style.org/reference/compass/utilities/color/contrast/). Highly useful if your project is intended to be themed.Generally speaking, using colors for class names isn't very clear unless the content is about color (eg. a color wheel or a rainbow). What is red for? Is it for errors? Or maybe a call to action? The same thing goes for dark. Using inverted or closed might be better choices. If the site's design is already dark, a dark button probably doesn't make much sense.Your code only allows the user to have exactly 2 colors (default and red), which seems more limited than it needs to be. You could easily make it very flexible by making use of lists (or maps, which will be in the next version of Sass). Here's an example from my own project:// name dark light$dialog-help: #2E3192 #B9C2E1 !default; // purple$dialog-info: #005FB4 #BDE5F8 !default; // blue$dialog-success: #6F7D03 #DFE5B0 !default; // green$dialog-warning: #A0410D #EFBBA0 !default; // orange$dialog-error: #C41616 #F8AAAA !default; // red$dialog-attributes: ( help nth($dialog-help, 1) nth($dialog-help, 2) , info nth($dialog-info, 1) nth($dialog-info, 2) , success nth($dialog-success, 1) nth($dialog-success, 2) , warning nth($dialog-warning, 1) nth($dialog-warning, 2) , error nth($dialog-error, 1) nth($dialog-error, 2) ) !default;@each $a in $dialog-attributes { $name: nth($a, 1); $color: nth($a, 2); $bg: nth($a, 3); %dialog-colors.#{$name} { color: $color; background-color: $bg; } %dialog-colors-inverted.#{$name} { color: $bg; background-color: $color; } %badge-colors.#{$name} { background-color: $color; color: $background-color; } %button-colors.#{$name} { @include button($base: $bg) { @include button-text($color, inset); @include button-states; } } %button-colors-inverted.#{$name} { @include button($base: $color) { @include button-text($bg, inset); @include button-states; } } %button-colors-faded.#{$name} { @include button($base: fade($bg, 10%)) { color: #CCC; @include button-states; } }}In case you're wondering why I'm using multiple classes, I've setup a short demo: http://sassmeister.com/gist/7792677
_unix.278496
I am writing a script which accepts two arguments:

    #! /bin/bash
    eval for i in {$1..$2}; do echo $i; done

I run it like:

    $ ./myscript 0002 0010
    syntax error near unexpected token `do'

Why the error? I thought it might be because the loop should be grouped, but after replacing eval for i in {$1..$2}; do echo $i; done with eval { for i in {$1..$2}; do echo $i; done; }, the error remains.

Note: I hope to perform parameter expansion before brace expansion by using eval. The desired output of my example is 0002 0003 0004 0005 0006 0007 0008 0009 0010. (See Perform parameter expansion before brace expansion?)
Error when eval a for-loop
bash;shell;eval
null
_codereview.137994
This code is a server for websocket connections, it handles the low level stuff and delegates incoming messages to handler objects. I wanted performance to win in any trade off against maintenance costs or readability, but security should be as good as possible without becoming unfit for purpose.It works and I'm using it for a multiplayer shooter which is near finished, but I wanted to get a second opinion on its quality, as it is a core element of the network infrastructure of the game.I wanted it to be robust enough that bad input from a client won't allow the server to be crashed, and that bad code within handler objects is also unable to crash the server (its OK if those objects crash but this code should run indefinitely).It has to handle errors internally so that one bad actor doesn't bring down the server for everybody else - a delay of even a few milliseconds while the server recovers from bad input would reduce the user experience on the client. If a client sends bad input it should experience a silent failure.It also needs to compress its output to reduce traffic, so I chose a library to handle this, and it shouldn't (in theory) be possible to inject code via JSON as I've used the node library json-safe-parseuse strict;var websocket = { Server : require(websocket).server};var http = require(http);var jsonSafeParse = require(json-safe-parse);var lz_string = require(lz-string);/** * @class server.Server * @desc Game server, handles client connections and disconnections and delegates events to any message handlers it is given * @param {Number} port the port to run from */function Server(port) { var THIS = this; this.port = port; // list of currently connected clients (users) this.clients = {}; // array of objects which will respond to messages from clients this.messageHandlers = []; // build HTTP server this.server = http.createServer(); this.server.listen(this.port, function() { console.log((new Date()) + Server is listening on port + THIS.port); }); // build websocket server, which is attached to the HTTP server this.wsServer = new websocket.Server({httpServer: this.server}); // connect the server callbacks to this object this.__connectCallbacks();}/** * @method Server#getClients * @desc get the clients on this server * @returns {Object} a hash of the clients */Server.prototype.getClients = function(){ return this.clients;};/** * @method Server#getClient * @desc Get the client with the given ID, or null if none exists * @param {String} clientId * @returns {Client|null} client if one is found, null otherwise */Server.prototype.getClient = function(clientId) { // return null if no client was found if (!this.clients.hasOwnProperty(clientId)) { return null; } // if it was found, we can return the client return this.clients[clientId];};/** * @method Server#addMessageHandler * @desc Add a message handler - this object will be called when messages come through that the server doesnt handle internally * @param {Object} handler the handler object */Server.prototype.addMessageHandler = function(handler){ // guarantee that its not already attached for (var i = 0; i < this.messageHandlers.length; i++) { if (this.messageHandlers[i] == handler) { return; } } this.messageHandlers.push(handler);};/** * @method Server#serialize * @desc take this object and turn it into a string for transportation using lzw compression * @param {Object} object the object to be serialized, be sure it doesn't contain cycles * @return {String} a serialized string */Server.prototype.serialize = function(object) { var 
json = JSON.stringify(object); var lz = lz_string.compressToUTF16(json); return lz;};/** * @method Server#sendMessages * @desc Send a message * @param {Client|String} client who to send it to - either their ID or the actual client * @param {String} type message type * @param {Object} params object of all the parameters */Server.prototype.sendMessage = function(client, type, params) { if( typeof(client) == typeof ( )) { client = this.getClient(client); } var message = { type : type, params : params }; // encode as a string, utf format client.connection.sendUTF(this.serialize(message));};/** * @method Server#broadcastMessage * @desc Broadcast the given message to all clients * @param {String} type the type of message * @param {Object} params the parameters of the message */Server.prototype.broadcastMessage = function(type, params) { // send a message to each client for (var send_k in this.clients) { if (this.clients.hasOwnProperty(send_k)) { this.sendMessage(this.clients[send_k], type, params); } }};// Hook up all the callbacks that the server will requireServer.prototype.__connectCallbacks = function() { var THIS = this; this.wsServer.on('request', function(request) { return THIS.addNewClient(request); });};/** * @method Server#handleAuthentication * @desc handle an authentication message on the given client * @param client the client * @param params the parameters of auth message */Server.prototype.handleAuthentication = function(client, params) { // if they are already authenticated, ignore this if (client.authenticated) { return; } // the server then has to check the database to ensure the client was registered client.authenticated = true; // give a new nick client.nickname = params.requested_name + Math.round(Math.random() * 255); // now that they've authenticated, make the client permanent this.clients[client.clientId] = client; // send back the accepted string along with their new nickname, which may be different than what they wanted console.log(Connection accepted for client + client.clientId); // send the client's details, include times to allow the client to synchronise with the server var connectionAcceptedParams = { clientId : client.clientId, nickname : client.nickname, lastTime : client.lastTime, currentTime : client.currentTime }; // send this.sendMessage(client, 'CONNECTION_ACCEPTED', connectionAcceptedParams); // now delegate to message handlers for ( var i = 0; i < this.messageHandlers.length; i++) { try { this.messageHandlers[i].handleClientAuthentication(client); } catch(err) { console.error(err); console.error(err.stack); } }};/** * @method Server#handleNetworkMessage * @desc handle messages (other than authentication, which is seperate) on the given client * @param client the client the message came from * @param messageType the type of message * @param params the paramaters of the message */Server.prototype.handleNetworkMessage = function(client, messageType, params) { // until they are authenticated, ignore other types of message - silently fail if (!client.authenticated) { return; } // update the client's last seen time client.lastSeenTime = Date.now(); // response to ping includes the given send time, so the client can judge latency if (messageType === PING) { this.sendMessage(client, ACK, {sendTime : params.sendTime}); } // now delegate to message handlers for ( var i = 0; i < this.messageHandlers.length; i++) { try { this.messageHandlers[i].handleNetworkMessage(client, messageType, params); } catch(err) { console.error(err); console.error(err.stack); } }};/** * 
@method Server#onMessage * @desc called when Recieved a message from a client * @param client the client it came from * @param message the contents of the message */Server.prototype.onMessage = function(client, message) { // as far as the client is concerned, silently fail if the server had an error try { // accept only utf8 // silently fail from the client's perspecive if (message.type !== 'utf8') { return; } // safely parse the json - this library restricts certain things which may allow code injections message = jsonSafeParse(message.utf8Data); // make message types case insensitive var messageType = message.type.toUpperCase(); var messageParams = message.params; // specific type of message which the server will always handle itself, without delegating the handshaking if (messageType === CONNECTION_REQUEST) { this.handleAuthentication(client, messageParams); // for all other messages, process normally } else { // now route the message through this function this.handleNetworkMessage(client, messageType, messageParams); } } catch(err) { console.error(err); console.error(err.stack); return; }};/** * @method Server#closeConnection * @desc CLose the connection to the given client * @param client the client to disconnect */Server.prototype.closeConnection = function(client) { if (client.authenticated) { console.log(Disconnecting client + client.clientId); // now delegate to message handlers - all messages except authentication may be passed down to clients for ( var i = 0; i < this.messageHandlers.length; i++) { try { this.messageHandlers[i].handleClientDisconnect(client); } catch(err) { console.error(err); console.error(err.stack); } } this.broadcastMessage(CLIENT_DISCONNECT, {clientId : client.clientId}); // remove user from the list of connected clients delete this.clients[client.clientId]; }};/** * Called when a new client connects * @param request the data from the remote websocket */Server.prototype.addNewClient = function(request) { var THIS = this; // get a new client ID var time = new Date().getTime(); var clientId = (Math.random() * Math.pow(2, 32)) + _ + time % 1000; // accept connection var connection = request.accept(null, request.origin); // build the initial client object var client = { clientId : clientId, nickname : anonymous, authenticated : false, connection : connection, lastTime : Date.now(), currentTime : Date.now(), lastSeenTime : Date.now() }; // when a message is received, delegate to this function connection.on('message', function(message) { return THIS.onMessage(client, message); }); // when connection is closed, delegate to this function connection.on('close', function() { return THIS.closeConnection(client); });};// export public stuffexports.Server = Server;
Robust websocket server for use by a game running on node
javascript;game;node.js;networking;websocket
A bunch of small things:In .addNewClient(), you can change this:var time = new Date().getTime();to this:var time = Date.now();Then, later in that same function you can use that value rather than calling Date.now() three more times.In .sendMessage(), you can change this:if( typeof(client) == typeof ( ))to this:if( typeof(client) === string)You should avoid assigning to named argument variables like you are doing in .sendMessage() because this prevents some JS optimizations.When coining your clientID, you don't need to multiply at all (you said you cared about performance). You can just leave the random number in decimal form. You're just trying to have a random string so it's no big deal if you have a decimal point in it. If you really don't want a decimal, then you could just remove the decimal with a string replace rather than multiply. Also, why time % 1000? Why not just time (it's more unique without the %)?So, you can change this:var clientId = (Math.random() * Math.pow(2, 32)) + _ + time % 1000;to this:var clientId = Math.random() + _ + time;Your this.clients object would be simpler if it was a Map object because you can avoid all the hasOwnProperty() stuff and just use Map methods like .has(), .delete(), etc... It also has built in .forEach() iterator instead of your manual iteration.On your .closeConnection() method, you are only removing the client from the this.clients data structure if client.authentication is already true. I know it's supposed to be the case that those two operations are innately tied together, but why not remove it from this.clients no matter what? You don't want any chance of a memory leak here and it's not like some random attacker can send an unauthenticated message using your client object. The client object is uniquely associated with the socket.Change == to ===.In .addMessageHandler(), why don't you use .indexOf() to see if a handler is already in the array rather than do your own from scratch iteration?Bigger things to check on:Is your webSocket library safe from malformed packets?Is your webSocket library safe from DOS attacks with giant messages?Do you need rate limiting per connection to be safe from DOS attacks from a single connection?Do you know what happens if a client connection just silently disappears without an orderly TCP shut-down. Will your server eventually close the socket and remove the client object? Or do you need to check for inactive connections/clients and get rid of them?What happens if a single client who passes authentication, connects a zillion times?I don't understand your authentication step. The code you show doesn't actually do any authentication so the client gets into the this.clients map without passing anything and then you call some message handlers for authentication, but they don't have any return value to actually indicate failure.
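To illustrate the Map suggestion above, a small sketch; the method names on Server are the ones from the reviewed code, everything else is illustrative:

    // clients keyed by clientId; no hasOwnProperty checks needed
    this.clients = new Map();

    Server.prototype.getClient = function(clientId) {
      return this.clients.get(clientId) || null;
    };

    Server.prototype.broadcastMessage = function(type, params) {
      this.clients.forEach(function(client) {
        this.sendMessage(client, type, params);
      }, this);   // second argument sets `this` inside the callback
    };

    // adding and removing clients
    this.clients.set(client.clientId, client);
    this.clients.delete(client.clientId);

The Map also gives you this.clients.size for free, which is handy for the rate-limiting and connection-cap questions raised above.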
_unix.151325
When I open my .java file in vim, I could see a couple of lines prefixed with one / more ^I characters. It looks like tabs in Eclipse that has got converted into ^I.I would like to replace a single ^I into spaces with 4 characters.E.g^I^I^I^IList<History> rulePackagesHistory = result.getHistory();How can do that in vim editor?
How can I replace ^I into tab spaces in vim editor for .java file?
sed;vim;ed
Add these lines to your .vimrc:

    set tabstop=4
    set shiftwidth=4
    set expandtab

After that, each new tab character you enter will be changed to 4 spaces; existing tabs won't be. For those you must type:

    :retab

This will convert all existing tabs in the file to spaces. If you don't want to use retab, you can use perl to replace each tab with 4 spaces:

    perl -i.bak -pe 's/\t/    /g' file
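If you only want a one-off conversion in the current buffer, independent of the settings above, the substitute command works too (replacing each tab with four spaces):

    :%s/\t/    /g

Afterwards, :set list will display any remaining tabs as ^I so you can verify the result.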
_unix.215670
Code and outputs:

    apt-cache search adduser
    adduser - add and remove users and groups
    $ sudo apt-get install adduser
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    adduser is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    $ add
    add-apt-repository   addpart   addr2line
    $ which adduser
    $ echo $PATH
    /home/masi/bin:/sbin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

which I do not understand, since it says that I have adduser but it is not in the PATH, although I have most bin folders. There must be some other location where adduser is installed in Debian 8.1. Where is adduser installed in Debian 8.1?
Where does Debian 8.1 install adduser?
debian;software installation;path
When I'm looking for a tool, I first look to see if it is in my PATH:

    type adduser
    bash: type: adduser: not found

If it is not found, then I'll use apropos:

    apropos adduser
    adduser.conf (5)     - configuration file for adduser(8) and addgroup(8)
    adduser (8)          - add a user or group to the system

Section 8 is System administration commands and daemons, so I would look in /sbin or /usr/sbin. /sbin and /usr/sbin are left out of a normal user's PATH for security, and many of those commands require root privileges to run. Both are added to your path by /etc/sudoers when you prefix a command with sudo.
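Another quick check on a Debian system is to ask dpkg which files the package installed, for example:

    dpkg -L adduser | grep sbin

which should list /usr/sbin/adduser (among other entries), confirming why it does not appear in a normal user's PATH.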
_cstheory.27234
Let $S=s_1,\ldots,s_n$ be a sequence and $p$ be a permutation on the indices of $S$ such that $p$ sorts $S$.Define a sequence to be locally sorted with degree $k$ if $\forall s_i \in S |p(i) - i | \leq k$. How many locally sorted sequences of degree $k$ are there for $n$ elements? Hopefully, there will be a better approach than enumerate all possible sequences and check each one.I wasn't sure how to approach this problem, so I tried proofing results for various values of $k$. I solved this for $k=1$; it's just a Fibonacci sequence. $k=2$ is a lot more difficult to me. I keep overcounting.For handiness, here's some Python code to use when checking results:from itertools import permutationsdef check(seq, k): for i in range(0, len(seq)): index = i+1 if abs(seq[i] - index) > k: return False return Truen = 5k = 2orig = [ i+1 for i in range(0,n) ] # easier if we use sequences of integerscheck_count = 0for perm in permutations(orig): if check(perm,k): check_count += 1print(check_count)
Locally sorted sequences
ds.algorithms;sorting;dynamic programming
Your problem is solved in the paper Spheres of Permutations under theInfinity Norm — Permutations with limited displacement by Torleiv Klve.See also A002524 and other sequences linked there. I found Klve's paper by calculating the first few values of A002524 and finding it on the OEIS.
_cs.53402
Why ternary computers like Setun didn't become popular despite being cheaper and more reliable than binary computers, and also having important computational advantages? We could have had cheaper computers for everyone.Edit: The answers to question about binary system do not resolve my question, since Setun, as I understand it, was based on some sort of binary circuit which used 2-bit combinations to represent the three values, the fourth combination wasn't used. Thus an argument about non-binary circuits being non-reliable doesn't apply to Setun.
Why ternary computers like Setun didn't catch on?
computer architecture
null
_unix.109110
I am on OS X and I am rsyncing from an ext3 volume (which I read with osxfuse 2.6.2) to an HFS+ volume. The data I am backing up is ~500GB. Sometimes rsync gives the following message:

    file has vanished: '/path/to/file'

If I check the file path, I find that the file is listed there, but accessing it then gives no such file or directory. I suspect this is a problem with osxfuse. Sometimes if I run rsync again some more files are transferred, but I always get the same warning for other files. I assume my backup is incomplete; how can I solve this?
rsyncing from ext3 on mac: file has vanished
osx;rsync;fuse;hfs+
null
_softwareengineering.112642
It's possible to publish daily updates for your application on Google's Android Market, but I suspect that users don't like getting frequent updates. How can I determine when I should publish updates? Is there an optimum interval that balances delivering new features and bug fixes against not upsetting my user base?
What frequency of updates is acceptable for a mobile application?
software updates
While there can't some general best frequency for updates, here are some factors that you should look at when making your decision:Severity of the update. Fixes for critical bugs and security vulnerabilities should of course be pushed out as soon as possible. But if you only have minor changes, you might want to wait until you collected a few of those. User expectations. If the users expect updates bringing major enhancements, you might not want to push the same kind of update just to fix a typo. :-)Ease of applying the update. If the update applies itself in the background without needing any user interaction, I doubt anybody would mind frequent updates. If OTOH applying an update requires several interaction steps by the user, thus interrupting his workflow, frequent updates are quite annoying. E.g. almost every time I use NeoOffice, it tells me there is a new version available. But I would have to download and install it manually, taking me away from my original task. So I rarely updated, but was still annoyed every time.Likelihood to break something else with the update. If there are external plugins/extensions/whatever depending on you application, they should be aware and be able to depend on your update schedule. They also should have enough time to adjust to the changes you made before the update is applied. E.g. when Firefox recently changed to a more rapid update cycle, it broke many of the less-maintained extensions, because the developers couldn't keep up with the rapidly changing version numbers.Dependence on external factors/libraries/APIs/.... If the update depends on changes of external libraries, APIs, or (as Manfred Moser said) a server side application, you'll have to wait until those are finished.
_unix.139880
I'm currently in an environment with two wireless networks: OrganizationFoo, which is present pretty much everywhere I go, and OrganizationFooSubset, which is only present in a certain location.When possible, I'd like to connect to OrganizationFooSubset because it's a faster, more reliable network. If this is out of range, I'd like to connect to OrganizationFoo.How can I configure this from within Linux Mint 17 Cinnamon (64-bit)?I've found the network connections configuration dialog, but there are no Move Up or Move Down buttons, and drag-and-drop has no effect. In the actual /etc/NetworkManager/system-connections/ directory, each connection gets its own file, and I don't see a master list where I could reorder them.Thoughts?
How do I set the preferred wireless network in Linux Mint 17?
networking;linux mint;wifi
null
_unix.131622
I need to recursively find all files that contain a specific word, and if the word exists in a file, I need to find out the number of lines in that file. I have been trying to use grep but have not been successful so far.
Finding all files containing a word and then counting the number of lines
bash;text processing
null
_webapps.79282
My typical case is the following. When I need to check or do something in the future (let's say in a month), I add an appropriate event in Google Calendar. The problem is that it's inconvenient to check whether I missed a reminder recently: I don't want to examine each event every time. Currently I change the event color (to gray) to mark it completed. Are there any better ways?
Track completion of Google Calendar events
google apps;google calendar
null
_unix.295601
I have 2 Nodes GlusterFS setup on 2 Redhat 6.7 Servers. (GlusterFS versions are both 3.7.12) Then the NFS Server on localhost status on one Server shows n/a and Online N, while it is showing all fine on the another one.[root@webserver1 ~]# gluster volume status gv0Status of volume: gv0Gluster process TCP Port RDMA Port Online Pid------------------------------------------------------------------------------Brick gluster1:/glusterfs-data/brick 49152 0 Y 27149Brick gluster2:/glusterfs-data/brick 49152 0 Y 1677 NFS Server on localhost N/A N/A N N/A Self-heal Daemon on localhost N/A N/A Y 27176NFS Server on gluster2 2049 0 Y 1629 Self-heal Daemon on gluster2 N/A N/A Y 1638 Task Status of Volume gv0------------------------------------------------------------------------------There are no active volume tasksWhat services are required to be started (or) what seems to be missed out here please?
Redhat : gluster volume status shows NFS Server on localhost as N/A and Offline
centos;rhel;nfs;glusterfs
null
_unix.141167
ContextI have a directory of thousands of zip files that are dated in the form YYYYMMDD_hhmmss.zip and each about 300K. Within each zip file is about 400 xml files each about 3K.The problemI need to be able to search and find a given string within a date-range of the zip files.The current (albeit mediocre) solutionI have the following one-linerfind /home/mydir/ -type f | sort | \awk /xml_20140207_000016.zip/,/xml_20140207_235938.zip/ | \xargs -n 1 -P 10 zipgrep my search stringThe point of it is tolist all the files in my thousand-file directorysort this list of filesretrieve a range of files based on given dates (this awk command only prints lines after that first matched string and up to that second matched string)pass each line of the result which corresponds to a single file to zipgrepThe questionThis one-liner runs horribly slow, even with 10 processes on a 24-core machine. I believe it's slow because of the zipgrep command but I'm not wise enough to know how to improve it. I don't know if I should be, but I'm a little embarrassed that a colleague wrote a java tool that runs faster than this script. I'd like to reverse that if possible. Then, does anyone know how to make this command faster in this context? Or to improve any part of it at all?
Is there a way to make this one-liner faster?
bash;shell script;awk;grep;xargs
null
_unix.174496
I am trying to understand why ps does not behave the way I expect it to. From the man pages, the following command should display ppid and lstart, with items sorted by lstart order. However when I run the same command in three different terminals:First term:gauthier@sobel:~/ $ ps -o ppid -o lstart --sort=lstart PPID STARTED21142 Tue Dec 16 13:45:18 2014 3383 Mon Dec 15 15:40:35 2014Second term:gauthier@sobel:~/bin $ ps -o ppid -o lstart --sort=lstart PPID STARTED19595 Tue Dec 16 13:45:03 2014 3383 Mon Dec 15 14:49:14 2014Third term:gauthier@sobel:~ $ ps -o ppid -o lstart --sort=lstart PPID STARTED 3383 Tue Dec 16 13:39:05 201416357 Tue Dec 16 13:45:12 2014There are several things I don't understand here.items in terms 1 and 2 are sorted most recent first. Items in term 3 are sorted oldest first. Even considering alphabetical order the orders differ.PPID 3383 is the same for all three terms, but at different start times. It seems that several distinct PPID might be the same although they are different processes?System info:$ ps -Vprocps-ng version 3.3.9$ uname -aLinux sobel 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Sorting the output of `ps`
process;sort;ps
ps --sort=lstart doesn't actually sort by lstart. According to this serverfault comment:

lstart gives a full timestamp, but cannot be used as a sort key. start_time gives the usual 'time within the last 24 hours, date otherwise' column, and can be used as a sort key.

This is implicitly documented in ps's man pages, where lstart is not listed under the OBSOLETE SORT KEYS section, but start_time is. The source code also backs this up; see this definition starting on line 1506:

    /* Many of these are placeholders for unsupported options. */
    static const format_struct format_array[] = {
    /* code          header     print()     sort()         width need vendor flags */
    [...]
    {"lstart",       "STARTED", pr_lstart,  sr_nop,         24,   0,   XXX, ET|RIGHT},
    [...]
    {"start_time",   "START",   pr_stime,   sr_start_time,   5,   0,   LNx, ET|RIGHT},
    [...]
    };

Note sr_nop for lstart: there is no sort routine for it, while start_time has sr_start_time.

Edit: expanded on the explanation in the right answer, and removed the misleading part of the original one.
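So, assuming procps-ng as in the question, a version of the original command that still shows the full timestamp but actually sorts would be along these lines:

    ps -o ppid,lstart,start_time --sort=start_time

The lstart column is kept for display, while start_time does the ordering.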
_codereview.21537
I've written this module (using a tutorial on the web I can't find now to stop unusual requests from clients. It's working as I've tested it on a local system. Is the logic fine enough?Another problem is that it counts requests for non-aspx resources (images, css, ...), but it shouldn't. How can I filter request for aspx pages ?This is the module code:public class AntiDosModule : IHttpModule{ const string blockKey = IsBlocked; const string reqsKey = Requests; public void Dispose() { } public void Init(HttpApplication context) { context.BeginRequest += ValidateRequest; } private static void ValidateRequest(object sender, EventArgs e) { // get configs from web.config AntiDosModuleSettings setting = AntiDosModuleSettings.GetConfig(); int blockDuration = setting.IPBlockDuration; // time window in which request are counted e.g 1000 request in 1 minute int validationTimeWindow = setting.ValidationTimeWindow; // max requests count in specified time window e.g 1000 request in 1 minute int maxValidRequests = setting.MaxValidRequests; string masterKey = setting.MasterKey; HttpContextBase context = new HttpContextWrapper(HttpContext.Current); CacheManager cacheMgr = new CacheManager(context, masterKey); // is client IP blocked bool IsBlocked = (bool)cacheMgr.GetItem<Boolean>(blockKey); if (IsBlocked) { context.Response.End(); } // number of requests sent by client till now IPReqsHint hint = cacheMgr.GetItem<IPReqsHint>(reqsKey) ?? new IPReqsHint(); if (hint.HintCount > maxValidRequests) { // block client IP cacheMgr.AddItem(blockKey, true, blockDuration); context.Response.End(); } hint.HintCount++; if (hint.HintCount == 1) { cacheMgr.AddItem(reqsKey, hint, validationTimeWindow); } }}internal class IPReqsHint{ public int HintCount { get; set; } public IPReqsHint() { HintCount = 0; }}and this is the CacheManager class:public class CacheManager{ HttpContextBase context; string masterKey; public CacheManager(HttpContextBase context, string masterKey) { this.context = context; this.masterKey = masterKey; } public void AddItem(string key, object value, int duration) { string finalKey = GenerateFinalKey(key); context.Cache.Add( finalKey, value, null, DateTime.Now.AddSeconds(duration), System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Normal, null); } public T GetItem<T>(string key) { string finalKey = GenerateFinalKey(key); var obj = context.Cache[finalKey] ?? default(T); return (T)obj; } string GenerateFinalKey(string key) { return masterKey + - + context.Request.UserHostAddress + - + key; }}
Optimizing this AntiDos HttpModule
c#;asp.net
null
_unix.312253
All right, I have a problem. My UAP makes a DHCP request to my Linksys WRT54GL router, receives a response, but then skips it and takes its default IP, 192.168.1.20. Even if I try to change the IP manually through an SSH terminal, it stays for 10 seconds and then goes back to 192.168.1.20. I've already reset it multiple times and even flashed different ROMs via TFTP, but nothing works... Can someone help me please?
Unifi UAP DHCP problem
networking
null
_softwareengineering.120019
I'm trying to understand the difference between procedural languages like C and object-oriented languages like C++. I've never used C++, but I've been discussing with my friends on how to differentiate the two.I've been told C++ has object-oriented concepts as well as public and private modes for definition of variables: things C does not have. I've never had to use these for while developing programs in Visual Basic.NET: what are the benefits of these?I've also been told that if a variable is public, it can be accessed anywhere, but it's not clear how that's different from a global variable in a language like C. It's also not clear how a private variable differs from a local variable.Another thing I've heard is that, for security reasons, if a function needs to be accessed it should be inherited first. The use-case is that an administrator should only have as much rights as they need and not everything, but it seems a conditional would work as well:if ( login == admin) { // invoke the function}Why is this not ideal?Given that there seems to be a procedural way to do everything object-oriented, why should I care about object-oriented programming?
What's the benefit of object-oriented programming over procedural programming?
c++;object oriented;c;procedural
All answers so far have focused on the topic of your question as stated, which is what is the difference between c and c++. In reality, it sounds like you know what difference is, you just don't understand why you would need that difference. So then, other answers attempted to explain OO and encapsulation.I wanted to chime in with yet another answer, because based on the details of your question, I believe you need to take several steps back.You don't understand the purpose of C++ or OO, because to you, it seems that your application simply needs to store data. This data is stored in variables.Why would I want to make a variable inaccessible? Now I can't access it anymore! By making everything public, or better yet global, I can read data from anywhere and there are no problems. - And you are right, based on the scale of the projects you are currently writing, there are probably not that many problems (or there are, but you just haven't become aware of them yet).I think the fundamental question you really need to have answered is: Why would I ever want to hide data? If I do that, I can't work with it!And this is why:Let's say you start a new project, you open your text editor and you start writing functions. Every time you need to store something (to remember it for later), you create a variable. To make things simpler, you make your variables global.Your first version of your app runs great. Now you start adding more features. You have more functions, certain data you stored from before needs to be read from your new code. Other variables need to be modified. You keep writing more functions. What you may have noticed (or, if not, you absolutely will notice in the future) is, as your code gets bigger, it takes you longer and longer to add the next feature. And as your code gets bigger, it becomes harder and harder to add features without breaking something that used to work.Why?Because you need to remember what all your global variables are storing and you need to remember where all of them are being modified. And you need to remember which function is okay to call in what exact order and if you call them in a different order, you might get errors because your global variables aren't quite valid yet.Have you ever run into this?How big are your typical projects (lines of code)?Now imaging a project 5000 to 50000 times as big as yours. Also, there are multiple people working in it. How can everyone on the team remember (or even be aware of) what all those variables are doing?What I described above is an example of perfectly coupled code. And since the dawn of time (assuming time started Jan 1, 1970), human kind has been looking for ways to avoid these problems. The way you avoid them is by splitting up your code into systems, subsystems and components and limiting how many functions have access to any piece of data. If I have 5 integers and a string that represent some kind of state, would it be easier for me to work with this state if only 5 functions set/get the values? or if 100 functions set/get these same values?Even without OO languages (i.e. C), people have been working hard on isolating data from other data and creating clean separation boundaries between different parts of the code. When the project gets to a certain size, ease of programming becomes not, can I access variable X from function Y, but how do I make sure ONLY functions A, B, C and no one else is touching variable X.This is why OO concepts have been introduced and this is why they are so powerful. 
They allow you to hide your data from yourself and you want to do it on purpose, because the less code that sees that data, the less chance there is, that when you add the next feature, you will break something. This is the main purpose for the concepts of encapsulation and OO programming. They allow you to break our systems/subsystems down into even more granular boxes, to a point where, no matter how big the overall project is, a given set of variables may only be accessed by 50-200 lines of code and that's it! There's obviously much more to OO programming, but, in essence, this is why C++ gives you options of declaring data/functions as private, protected or public.The second greatest idea in OO is the concept of abstraction layers. Although procedural languages can also have abstractions, in C, a programmer must make a conscious effort to create such layers, but in C++, when you declare a class, you automatically create an abstraction layer (it's still up to you whether or not this abstraction will add or remove value). You should read/research more about abstraction layers and if you have more questions, I'm sure this forum will be more than happy to answer those as well.
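For illustration, here is a minimal C++ sketch of the encapsulation idea described above; the class and member names are hypothetical and not taken from the question:

#include <string>
#include <utility>

// Only these member functions can touch the state below; no other code
// in the program can silently read or corrupt it.
class Account {
public:
    explicit Account(std::string owner) : owner_(std::move(owner)), balance_(0.0) {}

    void deposit(double amount) {
        if (amount > 0.0) {
            balance_ += amount;   // the only place the balance can change
        }
    }

    double balance() const { return balance_; }

private:
    std::string owner_;  // hidden: callers cannot depend on how this is stored
    double balance_;     // hidden: reachable only through deposit()/balance()
};

Compared with a global variable, when something goes wrong with the balance there are only a few lines of code that could possibly be responsible.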
_reverseengineering.2846
I am trying to find the function that sends packets to the server in a game client. I have read many tuts about finding the SEND function. But they are not helpful in finding in my case.So i started as follows:I first attached the game client in ollydbg.Then found all the executable modules.Then opened the client.exe. Further i searched for all intermodular calls.Then I searched for the SEND function. I got Five SendMessage() functions.From this step I don't understand what to do further.
Finding send function for tcp packets in game client
ollydbg
null
_cstheory.23815
I am wondering if there is an algorithm that, given a sorted array, allows you to build a binary search tree in linear time?I am facing a problem where I have about 8 million elements in a file that need to be loaded into a BST so O(n) would be vastly preferable to O(n log n) if it's possible.
Pre order traversal of an array
tree;binary trees
null
_softwareengineering.335569
I'm creating a Node app. I have JavaScript files that include custom functions that make calls to external APIs (in this case Google APIs) I have JavaScript files in my node app that are related to setting up the web app and using these custom functions described aboveWhat would be a good name for these JavaScript files that make external API calls? Should I call it a wrapper or a service? (How would I visualize the separate responsibility layers of this project?) I'm trying to pick a proper name that is intuitive for other new comers to the project understand what files are responsible for what.
What do you call the layer of modules that call external APIs?
programming practices;web applications;api design;node.js
null
_softwareengineering.343357
For example, the SysInternals tool FileMon from years past has a kernel-mode driver whose source code is entirely in one 4,000-line file. The same is true for the first ping program ever written (~2,000 LOC).
Why are some C programs written in one huge source file?
design;c;source code
Using multiple files always requires additional administrative overhead. One has to set up a build script and/or makefile with separate compiling and linking stages, make sure the dependencies between the different files are managed correctly, write a zip script for easier distribution of the source code by email or download, and so on. Modern IDEs typically take over a lot of that burden, but I am pretty sure that at the time the first ping program was written, no such IDE was available. And for files as small as ~4,000 LOC, without an IDE that manages multiple files for you, the trade-off between that overhead and the benefits of using multiple files may well have led people to decide on the single-file approach.
_codereview.135824
I am new to Python (and coding in general) and after about a week of reading Thinking Like a Computer Scientist: Learning with Python I decided to try and build a version the classic guessing game. I added some extra features such as counting the number of guesses the user takes, and playing against a simulated computer player to make the program slightly more interesting. Also, the number of guesses the computer takes is based on the mean number of guesses needed to guess a number in a given range (which is logarithmic of base 2 for range n) and varies according to standard deviation.Any feedback on the structure of my code or the way I generate the number of guesses the computer takes would be much appreciated!# Number guessing game in Python# Taylor Wright# July 27 2016import randomdef get_number(level): #selects a random number in range depending on difficulty selected if level == e: number = random.randint(1,20) if level == m: number = random.randint(1,100) if level == h: number = random.randint(1,1000) elif level != e and level != m and level != h: print (Invalid input!) get_number() return numberdef select_level(): #prompts the user to select a difficulty to play on while True: level = str(input(Would you like to play on easy, medium, or hard? \n Type 'e' for easy, 'm' for medium, or 'h' for hard!\n)) if level != e and level != m and level != h: print(Invalid input!\n) if level == e or level == m or level == h: break return leveldef guess_number(level): #function that prompts the user to guess within range depending on chosen difficulty if level == e: guess = int(input(Guess a number between 1 and 20:\n)) if level == m: guess = int(input(Guess a number between 1 and 100:\n)) if level == h: guess = int(input(Guess a number between 1 and 1000:\n)) return guessdef check_guess(guess,number): #processes the guess and tells the user if it is too high, too low, or bang on if guess > number: print (your guess is too high! Try again! \n) if guess < number: print (your guess is too low! Try again! \n) if guess == number: print(\n{0} was the number!.format(number))def com_num_guesses(level): #function to get the number of guesses taken by the computer if level == e: com_guesses = round(random.normalvariate(3.7,1.1)) if level == m: com_guesses = round(random.normalvariate(5.8,1.319)) if level == h: com_guesses = round(random.normalvariate(8.99,1.37474)) print(The computer guessed the number in {0} guesses! Can you beat that?.format(com_guesses)) return com_guessesdef mainloop(): level = select_level() number = get_number(level) com_guesses = com_num_guesses(level) num_guesses = 0 while True: #tells program how to handle guesses after the first guess guess = guess_number(level) check_guess(guess,number) num_guesses += 1 if guess == number: print( You got it in {0} guesses..format(num_guesses)) if num_guesses == com_guesses: print(It took the computer {0} guesses too!\nIt's a tie!\n.format(com_guesses)) if num_guesses > com_guesses: print(It took the computer {0} guesses.\nThe computer wins!\n.format((com_guesses))) if num_guesses < com_guesses: print(It took the computer {0} guesses.\nYou win!\n.format(com_guesses)) play_again = str(input(To play again type 'yes'. To exit type 'no'. \n)) if play_again == yes: mainloop() if play_again == no: raise SystemExit(0) breakmainloop()
Beginning Python guessing game
python;beginner;python 3.x;number guessing game
You need to avoid duplicating code and use the if/elif/else logic a bit more.I've added an example of how your code could look to make it more extendable and cleanerimport randomclass Level: #make a level class that you can extend def __init__(self, difficulty, computer): self.difficulty = difficulty self.computer = computerleveldict = { #a dictionary to store your levels e : Level(20, (3.7, 1.1)), m : Level(100, (5.8, 1.319)), h : Level(1000, (8.99,1.37474)), }def get_number(level): return random.randint(1, leveldict[level].difficulty)def select_level(): print(Would you like to play on easy, medium, or hard?\nType 'e' for easy, 'm' for medium, or 'h' for hard!) level = str(input()) while level not in leveldict.keys(): #check for errors in select_level not in get_number print (Invalid input!) level = str(input(Type 'e' for easy, 'm' for medium, or 'h' for hard!)) return leveldef guess_number(level): print(Guess a number between 1 and {0}:\n.format(leveldict[level].difficulty)) # a try/except block to check if the user really gives a number # you could add a check to see if the number is in the given range (e.g. 1-20) try: n = int(input()) except ValueError: print(Invalid input!) n = guess_number(level) return ndef check_guess(guess, number): #use if, elif, else logic if guess > number: print (your guess is too high! Try again! \n) elif guess < number: print (your guess is too low! Try again! \n) else: print(\n{0} was the number!.format(number))def com_num_guesses(level): # the * in leveldict[level].computer is to unpack your tuple with the normalvariate range com_guesses = round(random.normalvariate(*leveldict[level].computer)) print(The computer guessed the number in {0} guesses! Can you beat that?.format(com_guesses)) return com_guessesdef mainloop(): level = select_level() number = get_number(level) com_guesses = com_num_guesses(level) guess = guess_number(level) check_guess(guess, number) num_guesses = 1 # use a statement for the while loop, it's cleaner in this case than while True: (...) break # and you have less duplicate code while guess != number: guess = guess_number(level) check_guess(guess,number) num_guesses += 1 print(You got it in {0} guesses..format(num_guesses)) print(It took the computer {0} guesses.format(com_guesses), end=) #use if/elif/else logic and remove the duplicate code if num_guesses > com_guesses: print(.\nThe computer wins!\n.format((com_guesses))) elif num_guesses < com_guesses: print(.\nYou win!\n.format(com_guesses)) else: print( too!\nIt's a tie!\n) #you dont need the if no because it will exit anyways if the input is not yes play_again = str(input(To play again type 'yes'. To exit type 'no'. \n)) if play_again == yes: mainloop()mainloop()
_vi.3694
I would like to go to the file I just edited last and next kind of like MRU plugins do.:bnext and :bprev works sometimes, but most often than not I just end up in some obscure file I don't remember editing and forced to fall back to MRU plugin.Is there a way to fix it?Ctrl-^ swaps between two last files. What is the best way to navigate between more?I understand it might be tricky but I agree to anything that can improve current :bn :bp behavior. The buffers I often see are totally out of place. Maybe there is a plugin that can keep track of the recent files and provide hooks so I can create mappings?Replying to comments cleared up my thoughts a bit. I believe what I want is to be able to move through files in order of latest saves. That way if I go back in history the order won't change until I save the file which then becomes last and make one step back to the file saved right before that, i.e. the one I've started from. Something like Ctrl-O Ctrl-I pair that switches files immediately without jumping around the current buffer. Sort of like u and U in netrw: u Change to recently-visited directory |netrw-u| U Change to subsequently-visited directory |netrw-U|
Is there a way to reliably go back and forth in file history
buffers
I wrote a little function to repeatedly hit CTRL-O for me, until the buffer changes.You can find it here. I mapped it to CTRL-U but you could override CTRL-O if you wanted to.function! GoBackToRecentBuffer() let startName = bufname('%') while 1 exe normal! \<c-o> let nowName = bufname('%') if nowName != startName break endif endwhileendfunctionnnoremap <silent> <C-U> :call GoBackToRecentBuffer()<Enter>You could probably write something similar for <C-I>.Issues:If there is no previous buffer, it will continue silently looping until you hit CTRL-C!Related::jumps lists the historical locations that CTRL-O will step back through.Vim's default CTRL-T is a good alternative to mashing CTRL-O, because it is coarser grained: it moves back through tag jumps only.
_unix.266091
I want to set up port forwarding with ssh like so:ssh [email protected] -L 5656:remoteserver:80 -Nand then run a curl command:curl http://localhost:5656/my/endpoint/I can accomplish this just fine using two commands, but how can I combine them into a single working command? I'm on OSX if that matters.
How do I set up ssh port forwarding and run a curl in a single command?
ssh;curl
Do you really need to do both of these things? Would it not be easier to run curl on the remote server and pull the result back without port forwarding, such as: ssh [email protected] curl http://remoteserver/my/endpoint/ -o - > result
_scicomp.16004
I have a function $f(k)$ (calculated in Maple) which is huge and stored in a variable called 'sum' on my drive (with the help of the 'save' command in Maple). Since the function is huge, Maple is unable to plot it and takes endless time. Thus, I want to read the variable 'sum' into Matlab and plot it. I am also unable to just copy-paste it, as the function is really big. I have searched around but am unable to find a solution. Can somebody help me out?
Maple stored variable to be read into Matlab
matlab;data storage;maple
You can use the CodeGeneration package to do this. It allows to translate to different languages, being Matlab one of those.A simple example here:with(CodeGeneration);suma := sum(sin(n*x)/factorial(n), n = 0 .. 10);thenMatlab(suma)with answercg3 = sin(x) + sin(0.2e1 * x) / 0.2e1 + sin(0.3e1 * x) / 0.6e1 + sin(0.4e1 * x) / 0.24e2 + sin(0.5e1 * x) / 0.120e3 + sin(0.6e1 * x) / 0.720e3 + sin(0.7e1 * x) / 0.5040e4 + sin(0.8e1 * x) / 0.40320e5 + sin(0.9e1 * x) / 0.362880e6 + sin(0.10e2 * x) / 0.3628800e7;Off course, you can store this in a string and then write it to a text file, or translate a Maple procedure to a Matlab function.
_unix.27684
I am looking for a script which we can pass as an argument to the pbrun command.Eg: Login: test1Passwd: xxxxxxxWelcome to Solaris 10 gcmsys01$ pbrun sysadmins safekshHere sysadmins is the group-name and safeksh is a script which will disable any harmful commands like rm, init 6, format etc etc., similarly, there should be a fullksh script which will allow full shell access to the server (can execute any root commands without any restriction). This script is to overcome any unwanted outages due to some harmful commands.Any suggestions are highly appreciated.
Power broker safe shell or full shell script
shell;shell script
null
_webmaster.34399
I am seeing the following oddity with IE7-10 on Windows Vista, 7, 8:When declaring font-family: serif; I am seeing an old bitmapped serif font that I can't identify (see screenshot below) instead of the expected font Times New Roman. I know it's an old bitmapped font because it displays aliased, without any font smoothing, with IE7-10 on Win Vista-8 (just like Courier on every version of Win).Screenshot:I would like to know (1) can anyone else confirm my research and (2) BONUS: which font is IE displaying?Notes: IE6 and IE7 on Win XP displays Times New Roman, as they should. It doesn't matter if font-family: serif; is declared in an external stylesheet or inline on the element. Quoting the CSS attribute makes no difference. Adding Unkown Font to the stack also makes no difference.New Screenshot: The answer from Jukka below is correct. Here is a new screenshot with Batang (not BatangChe) to illustrate. Hope this helps someone.
Unknown CSS font-family oddity with IE7-10 on Windows Vista, 7, 8
css;fonts;internet explorer;windows
I can confirm the observation, using IE 9 on Win 7. Checking in the IE settings (Tools > Internet settings > General > Fonts), I can see BatangChe mentioned as the font under "user defined" for normal text, and the font used for serif looks like Batang Che but has different spacing. Setting fonts there does not seem to change this; I guess those settings only matter if the author does not set a font family at all, even generically.Looks like the font is Batang. I suppose there is no way to change this (i.e. the mapping of serif to a specific font). So the practical conclusion is that using serif as a fallback font isn't a good idea. Or at least you should put some fonts like Times New Roman and Georgia before it, so that IE will use one of them instead of falling back to Batang.
_webapps.71185
When I log on to my Google account and have to enter a two-factor code, I get an option to trust this device for 30 days:Some time ago, this option trusted the device permanently - which is a lot more comfortable, at least for your main device. Is there still a way to permanently trust a device?
Trust a device for more than 30 days in Google Two-Factor Authentication
google;authentication;multi factor auth
null
_scicomp.11006
I am going to teach undergraduate students a course titled Introduction to Computer Programming, and I am a bit confused. In computational physics, scientists use C/C++, Python, Fortran, CUDA, etc., and this is the time to build the students' foundation. What should I use? I know you can learn a new programming language at any time in your life, but which is the wiser choice for teaching them all the basic programming concepts now, with OOP concepts later on?
What language should I use when teaching an undergraduate course in computer programming?
python;c++;computational physics;languages
First, if your undergraduates are like ours and had no prior introduction to computers, expect to spend some time teaching them how to use basic stuff like using a proper editor (i.e., not MSWord), the command line, etc.I think the answer somewhat depends on where you set the focus of your course (or what you are required to teach). For example: How relevant are the internal workings of the computer? Do you need classes and other advanced OOP structures? Do you want to teach them how to produce efficient programs or are you happy if they produce working programs at all? Also, do not forget that you most probably will need capable tutors.But now something to advantages and disadvantages of the languages, I am familiar with. Note that this is mainly from my experience as a computational physicist and some of this may depend on the particular field, workgroup, university, etc.PythonI generally recommend using Numpy from almost the very beginning and I am assuming it to be used in the following.Advantages:Its easy to learn and so is reading other peoples code (e.g., your example code, but also the students code for the tutors).Input and output (which should not be the focus of your course) can be fully covered by print, Numpys savetxt and loadtxt, and maybe sys.argv. It can be introduced on the fly and it does not eat much programming time.You do not need to deal with or only need to deal little with such details as number representation, memory management, data types. Thus its fast to program and you can focus on the actual algorithms.Its not a compiled language. This has two advantages: Students do not need to deal with a compiler and students can test stuff directly in the console without having to compile, restart and rerun the program. Relatedly, debugging is easier.There are easy-to-use libraries for almost everything.You do not need to learn additional script languages like shell scripts, Make, Gnuplot and so on all this can be done from Python.There are a lot of good tutorials (for free).Disadvantages:Its not compiled. Therefore Python programs may be drastically slower than compiled programs in some cases relevant to computational physics. In other cases, however, libraries (especially Numpy) can yield a comparable performance. Another way, to get good performances with Python is to write the relevant code snippets in another language likeC. Obviously you need to learn this language for this, but this can be done later and your time learning Python is not wasted.Its more difficult to teach such details as number representation, memory management, data types and their pitfalls, since they are somewhat obfuscated.C/C++Advantages:It is compiled and therefore its easier to produce efficient code.You are directly dealing with number representation, memory management, data types and thus it is more intuitive to teach these your students will get closer to what is really happening in their computer.There are libraries for basically everything but understanding and using a library takes some work.There is a relevant amount of existing code in C/C++ and thus students need to learn the language if they want to work with this code.If you already know C/C++, you can learn Python (for example) very fast.Disadvantages:It is compiled and your students have to deal with the compiler, the preprocessor, headers and so on. 
You would be surprised how much students fail at this step, even at the end of the semester.It is slower too learn and it takes longer to produce working code.Dealing with marginal stuff such as input and output takes some time as well in teaching as in programming. In C++, there is an extra syntax for input and output.Compiler and operating-system dependencies.You have to deal with the C/C++ confusion.Reading the code of others especially in C++ can be quite difficult due to the vast amount of syntax features.The main advantages of C++ over C (Classes, templates) should not be relevant for your course and are only becoming relevant for larger projects. Therefore I would choose C of the two, since it is more concise.OthersSome comments on the other languages:Fortran: This is still used by a lot of groups and there is a lot of legacy code, but you cannot get around dealing with the old standards and their huge limitations and pitfalls (a lot of people are still working with Fortran77). Also, it will be much harder to find tutorials, help on the Internet and so on.Matlab/Mathematica: All the problems of proprietary software. Consider in particular that your students are likely to collaborate with people who do not have access to this software and the ensuing problems.Cuda: This is only relevant for certain problems, if performance matters. Also, after all I know, you do not want to learn programming this way. Which is the standard workflow at least in our group.
_unix.68499
I used xev to find the key code for fn-f5 and came up with a little script to toggle the bluetooth on or off. My question is whether it's possible to bind fn-f5 (keycode 246) to my shell script (bttoggle) using, preferably, xmodmap.
Is it possible to bind a shell script to a key press
keyboard shortcuts;bind;bluetooth;xmodmap
null
_softwareengineering.228040
I am new to the MVVM pattern. I have a window which has 3 text boxes (Name, Address, Description), a save button, and a listview which displays the above fields. When the save button is clicked I want to save the fields into database as well as show the record in the listview.How do I design my Model, ViewModel for this interface ?
Model and ViewModel for View
c#;.net;wpf;mvvm
This is how I see (and do) it.In your case (which is rather simple), I would have a ViewModel holding a list of Models and a SelectedModel property. The view would have a form and a list of course. The form would be bound to the currently selected model. The View would be bound to the ViewModel and indirectly to the Model (the data in the list and the form). Your ViewModel would also take care of validating and saving the data to the database by calling a method on your repository (or the Context, which is basically a repository, if you're using EF).The Model is your business logic, the Domain Model. The objects here will have all the data relevant to the business and the methods to manipulate that data. The model should be designed to be as simple as possible while accomplishing the business needs. This means normalized relationships, decoupled design and so on. Make it as easy to maintain as possible, regardless of the view. This means the same Model can be reused in different applications (a desktop and a web app can use the same model). In your case this is an object with the Name, Address and Description. This is all you need to accomplish your goal.The View will not always be this simple. Sometimes you will need to aggregate data from multiple models, or do some other manipulation of the data just to show the data on a view. A report for instance, can have lots of data from lots of models. Other times, the view will really need to simplify a lot of complex things that are going on in the model, to give you a high level overview without too many details. In your case, the view is also a bit more complex than your domain model. You need a list to see all your domain models and a form to edit one at the same time. This is where the ViewModel comes in. The VM will be between the model and the view. It will wrap a model object (or multiple model objects), introduce new properties that are a combination of other properties in the model and so on. The ViewModel is the one that is designed in a way that it makes presentation easier. This means simple properties your view can bind to and similar things. No matter how complex your Model or your View is, your ViewModel sits in between and makes all the necessary transformations of the data that you want to display. The same thing but in reverse happens when a view is sending data back (form submission or other input). The ViewModel is the one that transforms the view data into something that your Model understands. In your case the ViewModel will be rather simple, holding just the list of Models and a Selected Model, but in more complex examples, it might do some calculations or whatnot to accomplish what the view needs.
_unix.356346
I have CentOS 6.7. I installed OpenJDK 1.8 with the following command.yum install java-1.8.0-openjdk-develAfter installing I executed the following two commands.export JAVA_HOME=/usr/jdk/jdk1.8.0_121export PATH=$JAVA_HOME/bin:$PATHBut when I type java -version I still see the following output. I do not see OpenJDK.java version 1.8.0_121 Java(TM) SE Runtime Environment (build 1.8.0_121-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)EDITI have posted a similar thread, regarding not finding 'javac' in the thread -bash: javac: command not found error after installing OpenJDK 1.7 In that thread I was not able to execute javac which was resolved and it was about OpenJDK 1.7 (not 1.8). But this thread is all about java -version not showing OpenJDK for OpenJDK 1.8.
java -version is not showing OpenJDK
centos;software installation;java
null
_codereview.165114
I've done this simple c++ assignment. The homework was Design a Tree class that allows insertion of nodes and visit of the graph.What do you think of the style/design I used? I chose to store in the STL container the pointers of sub trees. Is it memory efficient?Is it sufficiently readable?Did I chose the right STL container?Thanks a lot for any tips about problems or bad practices!What do you think of the style/design I used? I chose to store in the STL container the pointers of sub trees. Is it memory efficient?Is it sufficiently readable?Did I chose the right STL container?Thanks a lot for any tips about problems or bad practices!#include <set> #include <deque> #include <iostream> using namespace std;template < typename T >class Tree {struct compare { bool operator()(const Tree * t1, const Tree * t2) const { return t1 -> GetContent() < t2 -> GetContent(); }};typedef typename std::multiset < Tree * , typename Tree::compare > NodeSet;private: NodeSet children;T content;public: Tree& AppendNode(const T& node) { Tree *t = new Tree(node); AttachTree(t); return *t; }void Clear() { typename NodeSet::iterator it = children.begin(); while (children.end() != it) { children.erase( *it); delete *it; it++; }}const T& GetContent() const { return content;}Tree(const T& root) { content = root;}void AttachTree(Tree* t) { children.insert(t);}void Visit(std::deque <T>& exp) const { exp.push_back(content); typename NodeSet::iterator it = children.begin(); while (it != children.end()) { (*it) -> Visit(exp); it++; }}Tree() {}Tree(Tree & c) { c.DeepCopyTo(this);}T & operator = (const Tree & b) { b.DeepCopyTo(this);}~Tree() { Clear();}void DeepCopyTo(Tree* dest) const { dest -> content = content; typename NodeSet::iterator it = children.begin(); while (it != children.end()) { Tree* t = new Tree(); (*it)->DeepCopyTo(t); dest->AttachTree(t); it++; }} };https://ideone.com/62Ggwu
Simple Tree C++ implementation
c++;template
null
_webapps.66776
I added a Python (py) file in Google Drive. I cannot preview it (but the main trouble is that Google won't search in it) even if it is a UTF8 text file. I can add it with the txt extension, but I would prefer to keep it as it is. Can I force the preview? If not, can I change the type somewhere to make Google Drive understand that is just a plain text file?One partial solution is to add the python file with the supplementary txt extension (like <filename>.py.txt) and then remove it.
How to force a file with a different extension to preview as a text file in Google Drive
google drive
null
_unix.32420
I have a set of data in a text file (X,Y coordinates which are not sorted). I want to plot it using gnuplot and connect plotted points using lines.I tried:plot a.txt with linesbut it is connecting the first point to the second point and so on. I want it to just connect plotted points, not first to second, and so on.
Plotting in gnuplot
gnuplot
You will have to sort it before gnuplot reads it, to do what you want. gnuplot implicitly uses the order of data in the file as the information about connection between points. If the X coord is the coordinate you want to connect-the-dots by do this at the command line:sort -n +0 -1 a.txt > b.txtUse gnuplot to plot the contents of file b.txt. Sometimes a gnuplot command like this will help you see the data better:plot 'b.txt' using 1:2 with linespointsThat puts a visible mark (an X or triangle or something) at the actual (X,Y) pairs, as well as drawing lines between them.
_vi.4389
I'm looking to map Ctrl-N to lbvhe in normal mode. This should visually select the word under the cursor, and works fine unless the word is at the beginning of a line.Having investigated, I've found that – in a mapping – firing h when the cursor is in column 1 (which should simply do nothing) traps the cursor at the beginning of the line. Any following movement commands seem to be ignored; j, k, l, e, and w all do nothing.My mapping works perfectly on any word preceded by whitespace or punctuation, but not words preceded by the previous line's EOL.
Mapping to bh causes cursor to be trapped in first column
key bindings;cursor movement
Use viw to visually select the current word. iw is a text object for the inner word.I suggest you run vimtutor from the command line as well as look at :h quickref for more motions and text objects.
_cs.40811
We ran into a problem that was mentioned in an interview 2 days ago. Can you help us with any idea or hint?A sequence of $n$ people, $\langle\,p_1,p_2,\dotsc p_n\,\rangle$ enter a room. We want to find the index, $i$, of the tallest person in the room. We have one variable for saving that index, which is updated when we see a person whois taller than the maximum height so far. We want to calculate the average number of times our variable is updated. For simplicity, assume that $p$ is a permutation of $\{1, 2, \dotsc, n\}$Short answer: it is close to $\ln n$. (i.e: natural logarithm) How can one solve such a question?
Tallest Person Average Memory Updating?
algorithms;algorithm analysis;data structures;runtime analysis;discrete mathematics
Let $T(n)$ denote the total number of updates to the variable when $n$ people have entered the room. For example, with $n=3$ there will be $3!=6$ possible orders where the heights are $1, 2, 3$. Notice that once the height 3 person enters, no further updates will occur, so let's group the six possible arrangements by when the height 3 person enters:$$\begin{array}{ccc}\mathbf{3}21 & \mathbf{3}12 \\1\mathbf{3}2 & 2\mathbf{3}1 \\12\mathbf{3} & 21\mathbf{3} \\\end{array}$$For each of these arrangements, let's count the number of updates we have to make before we see the tallest person. In the two arrangements where the height-3 person arrives first, there are 0 earlier updates. In the second row above, there will be 1 update we have to make before the tallest person arrives and in the third row there will be either 1 or two updates before the tallest person arrives: arrangement $12\mathbf{3}$ will require 2 updates (one for the height-1 person and another for the height-2 person and the arrangement $21\mathbf{3}$ will require 1 update, for the height-2 person. In this $n=3$ case, then, we'll have $0+2+3$ total prior updates (0 in row 1, 2 in row 2, and 3 in row 3). To this add the $3!=6$ updates for the tallest person, giving us a total $T(3)=5+6=11$.Now let's look at the general case for $n$ people. As we did above, let's group the $n!$ possible arrangements by when the height-$n$ person arrived. Denote this tallest person by $X$ and the rest of the people by dots. As we did above, we'll defer counting the last update until later.We'll have these possibilities, each of which can happen in $(n-1)!$ ways:$$\begin{array}{cc}\text{arrival of }X & \text{arrangement}\\1 & X\circ\dotsc\circ \\2 & \circ X\circ\dotsc\circ \\3 & \circ\circ X\circ\dotsc\circ\\\dotsm & \dotsm \\n & \circ\dotsc\circ X\end{array}$$For each row above we'll count the total updates before the tallest person arrives: Obviously if the tallest person arrives first, we'll have no prior updates, so the first row above will give a total count of 0. Each of the following rows will look like this:$$\underbrace{\circ\dotsc\circ}_k\ X\ \underbrace{\circ\dotsc\circ}_{n-1-k}$$ The first set of $k$ dots can be filled with the heights $\{1, 2, \dotsc, n-1\}$ and there are $\binom{n-1}{k}$ ways to choose these sets, each of which will contribute $T(k)$ updates. The last set of $n-1-k$ dots can be filled with the remaining numbers in $(n-1-k)!$ ways. Thus each row in the table will contribute$$\binom{n-1}{k}T(k)(n-1-k)!$$to the total of prior updates. Adding these and including the $n!$ updates for the tallest person we have the recurrence relation$$\begin{align}T(n) &= n! 
+ \sum_{k=1}^{n-1}\binom{n-1}{k}T(k)(n-1-k)!\\ &= n!+ \sum_{k=1}^{n-1}\frac{(n-1)!}{k!(n-1-k)!}T(k)(n-1-k)!\\ &= n!+ \sum_{k=1}^{n-1}\frac{(n-1)!}{k!}T(k)\\ &= n!+ (n-1)!\sum_{k=1}^{n-1}\frac{T(k)}{k!}\\\end{align}$$Divide both sides by $n!$ to get$$\frac{T(n)}{n!} = 1 + \frac{1}{n}\sum_{k=1}^{n-1}\frac{T(k)}{k!}$$and remember that the average number of updates, $U(n)=T(n)/n!$ which gives us.$$U(n) = 1 + \frac{1}{n}\sum_{k=1}^{n-1}U(k)$$Now we'll find a simple closed form for this:$$\begin{align}U(n) &= 1+\frac{1}{n}\sum_{k=1}^{n-1}U(k)\\ &= 1+\frac{1}{n}\left(\sum_{k=1}^{n-2}U(k)\right)+\frac{1}{n}U(n-1) &\text{pulling out the last term}\\ &= 1+\frac{n-1}{n}\left(\frac{1}{n-1}\sum_{k=1}^{n-2}U(k)\right)+\frac{1}{n}U(n-1)\\ &= 1+\frac{n-1}{n}(U(n-1)-1)+\frac{1}{n}U(n-1) & \text{definition of }U(n-1)\\ &= U(n-1)\left(\frac{n-1}{n}+\frac{1}{n}\right)+\left(1-\frac{n-1}{n}\right)\\ &= U(n-1) + \frac{1}{n}\end{align}$$Whaddya know? With $U(0)=0$ we just have$$U(n) = 1 + \frac{1}{2}+\frac{1}{3}+\dotsm+\frac{1}{n} = H(n)$$the $n$-th harmonic number, as it's known and it's also known that $H(n)$ is asymptotic to $\ln n$.
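As a quick sanity check against the $n=3$ case worked out above: the formula gives $U(3)=1+\frac12+\frac13=\frac{11}{6}$, and indeed $T(3)/3! = 11/6$, matching the total $T(3)=11$ counted earlier.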
_unix.191514
I'm looking to rename multiple files with the same name, with exception to the end of their file names. I want to replace the differing parts with an incremental counter. E.g. With the following files,71116_123 71116_134 71116_113 71116_02371116_923 71116_103 71116_125 71116_223I want to rename them to 71116_1 71116_2 71116_3 71116_471116_5 71116_6 71116_7 71116_8I found a similar question here , but the given solution seems somewhat complicated! Is there a simpler solution?
Renaming multiple files; appending
rename
So, given the limited info you've given, let me state my assumptions which, if any are wrong, let me know and I/we can adjust the answer. The assumptions are critical to keeping the logic short and sweet, otherwise you do descend into 'overkill' mode.Assumption 1. The 'suffix' for your file is what you want changed, and the 'suffix' is the number(s) after the underscore, meaning the underscore ( _ ) is your separator.Assumption 2. There will only ever be exactly one underscore in the original file name - not two or more underscores, and none without underscoresAssumption 3. You want the 'prefix', which is the value before the separator (as defined in Assumption 1) to remain exactly the same.Assumption 4. Regardless of the original suffix, you want the new suffix to use the same separator (as defined in Assumption 1) and that the new suffix is incremented not by time stamp or any other value other than the sort order as it was prior to renaming (meaning, driven by the original alpha-numeric sort order of the file names).Assumption 5. You want it fairly 'simple', which is unfortunately completely arbitrary, but I will define it as 'no complicated for loops or while loops' and 'not too many weird sed or awk' commands.So, it is not a hard problem, but given the constraints listed, it does become and interesting challenge. Regardless, I still resorted to using a 'simple' for loop and a 'simple' awk:cnt=0for i in *; do let cnt=cnt+1 mv $i $(echo ${i}_${cnt} | awk -F_ '{print $1_$3}')doneIf there are other files in the same directory you don't want renamed, as was the case in my original testing (I was testing in the /tmp directory, which had other files I didn't want renamed), then filter with some part of the prefix, like so:cnt=0for i in 71116*; do let cnt=cnt+1 mv $i $(echo ${i}_${cnt} | awk -F_ '{print $1_$3}')doneI did look at the answers in your link - and I agree those are egregious overkill, as are many of the answers on stackexchange. However, most 'overkill' answers rely on fewer assumptions, so in theory, they are more 'versatile' and/or more 'portable' than what I offer here.
_softwareengineering.78176
On 26 May 2011, a new EU directive comes into force that users accessing websites should now be asked for permission to allow the website to store cookies containing information about them and their visit to the website.How have you have tackled this issue? Is there another way to handle this besides an opt-in prompt the first time a person visits my site?
How do I comply with the EU Cookie Directive?
legal;privacy;cookies
The exact answer depends on what country you are in - remember it is up to individual countries to implement directives so it will vary. In Britain, if you use cookies for sessions when users log-in or storing preferences, it seems to boil down to wait and see.http://www.torchbox.com/blog/eu-law-cookies-and-icoEDIT: This answer is now years old and isn't really correct any more! Also, if reading this now, remember GDPR comes in to force next year in the EU and so this question isn't even really relevant any more! Good luck!
_codereview.157448
I have been doing some searching into how to properly handle errors in C++. I have found that the two most common techniques are exceptions and enumerations. I have created a simple class to expand on the enumeration method. It's main features are boolean operator overloading to return if the object actually has an error attached to it, and the ability to set messages at the point where the error occured. It uses an enumeration so that default error messages can be set. The ones in my code are for my current program, and are meant to be examples. Here is my code:enum Err_Type { NO_ERR, EMPTY, INVALID_INPUT, OUT_OF_RANGE, INVALID_FUNC_NAME, INVALID_PARANS, INVALID_OPERATOR, DIV_ZERO, INVALID_BASE, FILE_IO_ERROR, CUSTOM}; class Error { public: // Constructors Error() { Type = NO_ERR; Message = ; } Error(Err_Type type) { Type = type; SetMessage(type); } Error(std::string msg) { Type = CUSTOM; Message = msg; } // Operator Overloading explicit operator bool() const { return Type != NO_ERR; } bool operator !() const { return Type == NO_ERR; } void operator ()(Err_Type type) { Type = type; SetMessage(type); } // Public Methods void ChangeMessage(std::string msg) { Type = CUSTOM; Message = msg; } std::string GetErrMessage() { return Message; } void DisplayMessage() { std::cout << Message << endl; } private: Err_Type Type; std::string Message; void SetMessage(Err_Type type) { switch (type) { case EMPTY: Message = Entry cannot be empty!; break; case INVALID_INPUT: Message = Entry contained invalid characters!; break; case OUT_OF_RANGE: Message = Entry contains a value that is out of range!; break; case INVALID_FUNC_NAME: Message = Entry contained invalid function name!; break; case INVALID_PARANS: Message = Entry contained a parantheses error!; break; case INVALID_OPERATOR: Message = Entry contained an operator error!; break; case DIV_ZERO: Message = Entry contained an attempt to divide by zero!; break; case INVALID_BASE: Message = Entry contained an invalid base!; break; case FILE_IO_ERROR: Message = File opening error!; break; case NO_ERR: Message = ; } } }; I know that it is very simple and basic, and maybe not very good (I am a novice and a student to C++), so I would like some suggestions for improvements. My main objective with this is to allow my functions to pass around error information easily, and also to construct the error messages at the point where the errors are found. All suggestions are appreciated.
Error-Handling Class
c++;beginner;c++11;error handling
I have found that the two most common techniques are exceptions and enumerationsThat works for small and short lived projects but quickly becomes a problem when the types of errors you have deal with keep increasing.enums are best when the enumerators are fixed for most part. For example, you can create a simple enum like below:enum Status {SUCCESS, FAILURE};The reasons for failure is ever expanding in real world projects. It's best to capture the different types of errors through simple class hierarchies. Example:struct Error{ virtual ~Error() {} virtual std::string getMessage() = 0;};Now, you can use a class that captures results of an operation or a function call. It depends on enum Status and Error as its member variables. Of course, you can add as many convenience functions as you see fit to it.struct Result{ Result() : s(SUCCESS), e(nullptr) {} Result(Error* in) : s(in == nullptr ? SUCCESS : FAILURE), e(in) {} operator bool () const { return (s == SUCCESS); } bool operator!() const { return (s != SUCCESS); } std::string getMessage() const { if ( nullptr == e ) { return ; } else { return e->getMessage(); } } Status s; std::shared_ptr<Error> e;};You can add different types of Errors by sub-classing Error. For example:struct Empty : Error{ std::string getMessage() { return Entry cannot be empty!; }};Now you have a framework where the basic objects are in place. The only things that will keep on increasing are the types of errors your application deals with. That's easy to do by sub-classing Error. Here's a small program that shows how they can be used:#include <iostream>#include <string>#include <memory>// Using a namespace helps with avoiding conflictsnamespace MyApp{ enum Status {SUCCESS, FAILURE}; struct Error { virtual ~Error() {} virtual std::string getMessage() = 0; }; struct Result { Result() : s(SUCCESS), e(nullptr) {} Result(Error* in) : s(in == nullptr ? SUCCESS : FAILURE), e(in) {} operator bool () const { return (s == SUCCESS); } bool operator!() const { return (s != SUCCESS); } std::string getMessage() const { if ( nullptr == e ) { return ; } else { return e->getMessage(); } } Status s; std::shared_ptr<Error> e; }; std::ostream& operator<<(std::ostream& out, Result const& r) { return (out << ( r ? SUCCESS : FAILURE) << << r.getMessage()); } struct Empty : Error { std::string getMessage() { return Entry cannot be empty!; } }; struct InvalidInput : Error { std::string getMessage() { return Entry contained invalid characters!; } }; struct OutOfRange : Error { std::string getMessage() { return Entry contains a value that is out of range!; } }; // Add other subtypes of Error corresponding to // INVALID_FUNC_NAME // INVALID_PARENS // INVALID_OPERATOR // DIV_ZERO // INVALID_BASE // FILE_IO_ERROR} int main(){ using namespace MyApp; Result r1; Result r2(new OutOfRange); std::cout << r1 << std::endl; std::cout << r2 << std::endl;}Output of the program:SUCCESS FAILURE Entry contains a value that is out of range!
_unix.210224
I did netstat -anto and got following result:Proto Recv-Q Send-Q Local Address Foreign Address State Timertcp 0 0 127.0.0.1:1169 127.0.0.1:40238 ESTABLISHED off (0.00/0/0)what this time off mean?does it mean keepalive is off?if yes then how to enable keep alive?
Meaning of 'netstat -anto' output
networking;tcp
This link describes, as best as is generally known, the meaning of the Timer field. off means the timer is not any of a number of things, including keepalive. So on that socket, keepalive is off.The thing that sets up the socket on port 1169 needs to enable keepalive. Since that's a totally different topic and has nothing to do with the title, I suggest closing this one off and starting another question about how to set keepalive on the program that is running on port 1169.
_codereview.64890
The problemConcurrentHashMap provides very weak consistency guarantees w.r.t iteration:guaranteed to traverse elements as they existed upon construction exactly once, and may (but are not guaranteed to) reflect any modifications subsequent to construction(note the may part).In my case, I have a concurrent map that I need to periodically back-up. It's very important that what I back-up be a consistent point-in-time representation of the map. Back-ups are few and far between, triggered on a schedule and never run at the same time. I ended up implementing my own map (only the methods I use, not a full-blown map impl) based on 2 underlying concurrent maps and a RW lock:notesthe map is expected to be big. VERY big. so probably no copying it. also - there's absolutely no guarantee that a copy-constructor call will land you a consistent copy - it uses iteration under the hood.Holder.java (used to return values out of closures/inner classes)public class Holder<T> { private T value; private boolean isEmpty = true; public T getValue() { return value; } public void setValue(T value) { this.value = value; isEmpty = false; } public boolean isEmpty() { return isEmpty; }}AutoCloseableLock.java (abuses AutoCloseable for locking, which I think makes for cleaner code)import java.util.concurrent.locks.Lock;public class AutoCloseableLock implements AutoCloseable{ private final Lock delegate; public AutoCloseableLock(Lock delegate) { this.delegate = delegate; } public AutoCloseableLock lock() { delegate.lock(); return this; } @Override public void close() { delegate.unlock(); }}SingleSnapshotMap.java - a (partial) map implementation to allows a single consistent snapshotimport java.util.Map;import java.util.concurrent.ConcurrentHashMap;import java.util.concurrent.locks.ReadWriteLock;import java.util.concurrent.locks.ReentrantReadWriteLock;import java.util.function.BiConsumer;public class SingleSnapshotMap<K,V>{ private ConcurrentHashMap<K,V> baseMap = new ConcurrentHashMap<>(); private ConcurrentHashMap<K,V> diffMap; private ReadWriteLock readWriteLock = new ReentrantReadWriteLock(); private AutoCloseableLock readLock = new AutoCloseableLock(readWriteLock.readLock()); private AutoCloseableLock writeLock = new AutoCloseableLock(readWriteLock.readLock()); private final Object DELETION_MARKER = new Object(); public V put(K key, V value) { try (AutoCloseableLock ignored = readLock.lock()){ if (diffMap != null) { final Holder<V> prevValueHolder = new Holder<>(); diffMap.compute(key, (k,v) -> { if (v == null) { //no previous mapping in diff. check in base prevValueHolder.setValue(baseMap.get(key)); } else if (v == DELETION_MARKER) { //was marked as deleted. means prev==null prevValueHolder.setValue(null); } else { prevValueHolder.setValue(v); } return value; //new value is arg either way }); return prevValueHolder.getValue(); } else { return baseMap.put(key, value); } } } public V get(K key) { try (AutoCloseableLock ignored = readLock.lock()){ if (diffMap != null) { final Holder<V> valueHolder = new Holder<>(); diffMap.compute(key, (k,v) -> { if (v == null) { //no value in diff. check base valueHolder.setValue(baseMap.get(key)); } else if (v == DELETION_MARKER) { //was marked as deleted. 
return null valueHolder.setValue(null); } else { //got a value valueHolder.setValue(v); } return v; //do not change the current mapping }); return valueHolder.getValue(); } else { return baseMap.get(key); } } } public V remove(K key) { try (AutoCloseableLock ignored = readLock.lock()){ if (diffMap != null) { final Holder<V> prevValueHolder = new Holder<>(); ((ConcurrentHashMap)diffMap).compute(key, (k,v) -> { if (v == null) { prevValueHolder.setValue(baseMap.get(key)); } else if (v == DELETION_MARKER) { prevValueHolder.setValue(null); } else { prevValueHolder.setValue((V) v); } return DELETION_MARKER; }); return prevValueHolder.getValue(); } return baseMap.remove(key); } } private void startSnapshot() { try (AutoCloseableLock ignored = writeLock.lock()){ if (diffMap != null) { throw new IllegalStateException(only a single snapshot at a time); } diffMap = new ConcurrentHashMap<>(); } } private void endSnapshot() { try (AutoCloseableLock ignored = writeLock.lock()){ if (diffMap == null) { throw new IllegalStateException(no snapshot active); } //nothing else active. flush diff back into base for (Map.Entry<K,V> e : diffMap.entrySet()) { if (e.getValue() == DELETION_MARKER) { baseMap.remove(e.getKey()); } else { baseMap.put(e.getKey(), e.getValue()); } } diffMap = null; } } public void snapshot(BiConsumer<? super K, ? super V> action) { startSnapshot(); try { baseMap.forEach(action); } finally { endSnapshot(); } }}Expected usageSingleSnapshotMap<Long, String> map = new SingleSnapshotMap<>();//place stuff into mapMap<Long, String> copyMap = new HashMap<>();map.snapshot((aLong, s) -> { //stream key-value pair to disk somewhere}); //meanwhile activity goes on in the background//copyMap now holds a consistent point-in-time copy of mapThings I'm concerned aboutCorrectness - above all. I have some tests around this class (horrible nightmare code full of locks threads sleeps and yields), but MT code is tricky.Elegance - if there's a library that does this that I've missed, or a simpler solution.Performance and concurrency - the process running inside the snapshot() method could be long. I want to continue map operations while its running in the background.Things I'm already aware ofJava 8 has StampedLock which performs better than ReadWriteLock. I do plan on switching.
Threadsafe HashMap with snapshot support
java;concurrency
The first answer I gave was based on the premise that the data could be 'cloned' out of the HashMap. The alternative way for processing the data, as suggested, is a form of serializing the data away to a slow target (disk, network, etc.). That serialization cannot happen while holding a lock on the source.The current implementation accomplishes this by 'freezing' the underlying datastore, and then dumping that datastore to the output. To keep the system available, it also creates a 'hold and store' mechanism that tracks changes made, and then, when the serialization is complete, it 'replays' the changes to the underlying store.The problems with that system are numerous:in the normal course of events, every get, remove, and put requires a 'readlock' to be held for that operation. This significantly reduces the amount of concurrency available for the store. Actually, now that I look at it, the implementation uses a ReadWriteLock, but there is a bug, and even the 'write-lock' is a readLock(), so there is no effective exclusion from the process. If the lock was implemented right though, it would still require a full lock when the 'replay' was performed.even when the system is not having a snapshot taken, the overhead is required to conditionally manage the data load. Using a Strategy Pattern we can improve that, by having one simple strategy that is used most of the time, and then a more complicated strategy that is only used when performing snapshots.Using the compute mechanism of ConcurrentHashMap is overly complicated, and has resulted in the degradation of generic typing to raw types, and is a problem.I believe the overall strategy of managing a 'diff' concept is, in the long run, the right strategy. Additionally, the concept of the Holder is good too.The problem with the global read-locks, and alternatively, the blocking write-lock while the data is flushed, is hard to overcome without introducing more granular locking.I have taken the liberty of re-implementing your code with some alternate schemes. Note that there is no global Lock mechanism. There is a single core AtomicReference which contains the current 'strategy'. The simple PassThrough strategy has almost no overhead, and will have no performance impact on the general use case.The more complicated LoggedThrough class extends the PassThrough strategy, but it logs all operations going though, and does not pass the values back, unless the recording is complete (the snapshot done). Once the snapshot is complete, the LoggedThrough class can fall-back to a strategy of handling each Holder independently as they are called (get, put, etc.), and a background process flushes any inactive values.The 'magic' in this granular locking is that each Holder is individually synchronized, and knows its own state. 
This state can be safely dumped to the backing store, and when it does, the Holder becomes a simple pass-through entity.import java.util.HashMap;import java.util.Iterator;import java.util.Map;import java.util.Random;import java.util.concurrent.ConcurrentHashMap;import java.util.concurrent.Exchanger;import java.util.concurrent.ExecutorService;import java.util.concurrent.Executors;import java.util.concurrent.TimeUnit;import java.util.concurrent.atomic.AtomicBoolean;import java.util.concurrent.atomic.AtomicReference;import java.util.function.BiConsumer;public class VersionedSnapshotMap<K,V> { private static class Holder<T> { private boolean live = true; private T value = null; } private class PassThrough { public V put(K key, V val) { return store.put(key,val); } public V remove(K key) { return store.remove(key); } public V get(K key) { return store.get(key); } } private class LoggedThrough extends PassThrough { private final ConcurrentHashMap<K, Holder<V>> diff = new ConcurrentHashMap<>(); private final AtomicBoolean record = new AtomicBoolean(true); @Override public V get(K key) { // no need to worry about recording things.... Holder<V> holder = diff.get(key); if (holder != null) { synchronized(holder) { if (holder.live) { return holder.value; } } } // no race condition, the get can safely get the old value even // if a new holder was created in the race. return store.get(key); } private V undercover(K key, V val, boolean remove) { if (!record.get()) { // recording complete for this logger. // either the Holder has: // 1. never been created // 2. created, but not yet written back // 3. created, and written back // 4. created, written back, and removed. final Holder<V> holder = diff.get(key); if (holder != null) { // 2, or 3. synchronized (holder) { if (holder.live) { V prev = holder.value; holder.value = null; // push back this Holder, and mark it dead. // subsequent calls will find it gone... store.put(key, val); holder.live = false; return prev; } } } // 1, 3, or 4. if (remove) { return store.remove(key); } return store.put(key, val); } // we are still recording... // optimistically create a new Holder. // we will have to discard this if another thread has already done one. Holder<V> nref = new Holder<>(); nref.value = store.get(key); // yes, put it on the queue even if the recording may have stopped. Holder<V> race = diff.putIfAbsent(key, nref); // holder becomes whatever instance was first registered for this key. Holder<V> holder = race == null ? nref : race; synchronized(holder) { if (holder.live) { V prev = holder.value; holder.value = val; if (!record.get()) { // we thought we were recording, but that // changed in a race condition. We push our value // back through to the source. 
holder.live = false; holder.value = null; diff.remove(key); if (remove) { store.remove(key); } else { store.put(key, val); } } return prev; } } if (remove) { return store.remove(key); } return store.put(key, val); } @Override public V put(K key, V val) { return undercover(key, val, false); } @Override public V remove(K key) { return undercover(key, null, true); } public void flush() { // OK, recordings are no longer applied record.set(false); while (!diff.isEmpty()) { Iterator<Map.Entry<K, Holder<V>>> it = diff.entrySet().iterator(); while (it.hasNext()) { Map.Entry<K, Holder<V>> entry = it.next(); Holder<V> holder = entry.getValue(); K key = entry.getKey(); synchronized (holder) { if (holder.live) { holder.live = false; if (holder.value != null) { store.put(key, holder.value); } else { store.remove(key); } holder.value = null; } } it.remove(); } } } } private final PassThrough simplepass = new PassThrough(); private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>(); private final AtomicReference<PassThrough> core = new AtomicReference<>(simplepass); public V get(K key) { return core.get().get(key); } public V put(K key, V val) { return core.get().put(key, val); } public V remove(K key) { return core.get().remove(key); } public void snapshot(BiConsumer<? super K, ? super V> action) { LoggedThrough logged = new LoggedThrough(); if (core.compareAndSet(simplepass, logged)) { try { store.forEach(action); } finally { logged.flush(); if (!core.compareAndSet(logged, simplepass)) { throw new IllegalStateException(Unable to restore the simple passthrough); } } } else { throw new IllegalStateException(Only one snapshot at a time, please); } }}I tested the above code using the folowing hacks...take a copy of your system properties using the snapshottake another copy, each time you dump an item though, use the Exchanger as an interlock, and randomly change something in the Map.ensure the original copy, and the concurrently modified copy are the same.ensure that the modifications made during the second snapshot are now shown in the third.Here's the test code (please don't hold it up to code review standards, it is a hack...):private static final <P,Q> void printPQ(P p, Q q, HashMap<P,Q> outsb) { outsb.put(p, q);}private static final void randomAte(VersionedSnapshotMap<String,String> smap, Exchanger<String> ex, final String term) { String[] keys = System.getProperties().stringPropertyNames().toArray(new String[0]); Random rand = new Random(); try { String token = Boo; while ((token = ex.exchange(token)) != term) { switch(rand.nextInt(10)) { case 0: case 1: // remove a key smap.remove(keys[rand.nextInt(keys.length)]); break; case 2: case 3: // add a new key smap.put( + System.nanoTime(), Modded); break; default: // modify an existing key. smap.put(keys[rand.nextInt(keys.length)], + System.currentTimeMillis()); } } } catch (InterruptedException ie) { ie.printStackTrace(); }}public static void main(String[] args) throws InterruptedException { final VersionedSnapshotMap<String, String> smap = new VersionedSnapshotMap<>(); System.getProperties().forEach((k,v) -> smap.put(String.valueOf(k), String.valueOf(v))); HashMap<String,String> base = new HashMap<>(); smap.snapshot((k,v) -> VersionedSnapshotMap.printPQ(k,v, base)); final Exchanger<String> exchanger = new Exchanger<>(); ExecutorService service = Executors.newCachedThreadPool(); // go through and randomize values. 
final String DONE = Done; service.execute(() -> randomAte(smap, exchanger, DONE)); HashMap<String,String> after = new HashMap<>(); smap.snapshot((k,v) -> { printPQ(k, v, after); try { exchanger.exchange(Hoo); } catch (InterruptedException ie) { } }); exchanger.exchange(DONE); service.shutdown(); service.awaitTermination(10, TimeUnit.SECONDS); System.out.println(base); System.out.println(after); System.out.println(base.equals(after)); HashMap<String,String> post = new HashMap<>(); smap.snapshot((k,v) -> VersionedSnapshotMap.printPQ(k,v, post)); System.out.println(post); System.out.println(base.equals(post));}
_webmaster.30214
Possible Duplicate: Services to monitor and report if a web site goes down? I've had an interesting day. I've been with a hosting company for 8+ years without a hitch. Today MySQL on my server failed for no apparent reason. I had no idea, so my site was down for 3 hours and of course I got emails from customers wondering what had happened. Not fun. What are some ways to ensure my site is always live? It would be great if I got a text message saying it's down. Are there any practical things one should do to make sure their site keeps performing?
How do I make sure my website is live all the time?
mysql
http://www.pingdom.com/ or http://newrelic.com are both really good services!
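If you are curious what such a monitoring service does under the hood, here is a minimal self-hosted sketch in Python (an illustration only, not a replacement for an external service, which also watches your site from outside your own network; the URL and interval are placeholders):

import time
import urllib.error
import urllib.request

SITE_URL = "https://example.com/"   # placeholder: the site to watch
CHECK_EVERY_SECONDS = 300           # placeholder: probe interval

def site_is_up(url, timeout=10):
    """Return True if the URL answers the request without an error."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    while True:
        if not site_is_up(SITE_URL):
            # In a real setup this is where you would call an email or
            # SMS gateway instead of just printing.
            print("ALERT: site appears to be down")
        time.sleep(CHECK_EVERY_SECONDS)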
_reverseengineering.3738
I'm using IDA Starter 6.5 on Linux (Debian Wheezy, 32-bit). I would like to perform batch analysis on a bunch of iOS apps with an IDAPython script. To do so, I use a command like the following to invoke the text-mode interface: $ ~/.ida-6.5/idal -A -SDump.py my_app However, it just flashes up for a second and quits before any analysis happens. The only thing I see on the last line of IDA is: The file is encrypted. The disassembly of it will be likely useless.Do you want to continue? ? -> N~o Does anyone know how to make it answer Yes so I can use the command line? Thanks.
Force IDA Starter 6.5 to disassemble an encrypted binary in autonomous mode
ida;idapython;automation
null
_cs.63013
I want to add a constraint to a convex program to guarantee that some matrix $A$ is positive semidefinite. How should I do it? The library I am working with can cope with linear/quadratic inequalities only. By definition, $A$ is positive semidefinite iff $\forall x \in \mathbb R^n : x^T A x \geq 0$, but this is a set of infinitely many constraints. So, my question is: how can I formulate it using finitely many constraints, using linear/quadratic inequalities only? Thanks in advance!
Positive Definiteness Constraint
optimization;linear algebra
A matrix $A$ is positive semidefinite if and only if there exists a matrix $V$ such that $$A = V^\top V.$$So, you can use the entries of $V$ as your unknowns, and express each entry of $A$ as a quadratic function of the unknowns. Whenever you want to use $A$, instead rewrite that equation in terms of the entries of $V$.
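As a small worked illustration (my own example, using a triangular $V$ in the style of a Cholesky factorization): for a $2\times 2$ matrix, write $$V=\begin{pmatrix}v_{11}&v_{12}\\0&v_{22}\end{pmatrix},\qquad A=V^\top V=\begin{pmatrix}v_{11}^2 & v_{11}v_{12}\\ v_{11}v_{12} & v_{12}^2+v_{22}^2\end{pmatrix}.$$ The constraints $a_{11}=v_{11}^2$, $a_{12}=a_{21}=v_{11}v_{12}$ and $a_{22}=v_{12}^2+v_{22}^2$ are quadratic equalities in the unknowns $v_{ij}$ (and an equality can be imposed as a pair of opposite inequalities), so they fit a solver that only accepts linear/quadratic constraints; $A$ is then positive semidefinite by construction.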
_webapps.44477
I want students to be able to search YouTube for educational videos in their home language. For example, if a Dutch student wants to search for a Dutch video on Napoleon, a Dutch result will probably not end up in the top 5 search results. Is there a way around this? I see that it is not possible to filter by language or region on YouTube, and the YouTube V3 APIs don't allow for this either (this is not an API question), but maybe some of you have come up with creative workarounds?
YouTube: How can a user filter search results by region or language?
youtube;search
null
_codereview.155781
The really stupid way to evaluate bezier curves is with recursion, which is \$O(2^n)\$, which can be lowered to \$O(n^2)\$ with memoization. De Casteljau's algorithm improves on this, and is still \$O(n^2)\$ but faster.I misinterpreted De Casteljau's and thought it was the Bernstein form:$$\sum_{i=0}^{n-1}{\binom{n}{i}p_i (1-t)^{n-i-1} t^i}$$Where there are \$n\$ control points including the start and end named \$p_0, p_1,p_2, \dots, p_{n-1}\$.Calculating the combination is normally \$O(n)\$ which would make this algorithm as a whole \$O(n^2)\$ since there are \$n\$ terms to sum, however, there is a cheat:$$\binom{n}{k}=\prod_{i=1}^{k}{\frac{n+1-i}{k}}$$We can store the result after each iteration, resulting in the entire sequence calculated in \$O(n)\$ time, making the algorithm as a whole \$O(n)\$.\$2\leq n\leq 5\$ is the general use case (2 and 4 probably actually), so it's hardcoded. For \$n\geq 6\$, which uses this algorithm, I've done a ridiculous number of optimizations.The smallest double increment (one mantissa bit) is \$2.2E{-16}\$, and based on some quick benchmarks with random data it never was more inaccurate than \$1E{-14}\$, so the inaccuracy is acceptable.For very large \$n\$ (hundreds, 1200 was where I first saw it) the combination function exceeded the max value for doubles, causing the algorithm to return NaN, whereas De Casteljau's, though much slower, returned what was more or less the correct value.The final code:public static double bezier2(double a,double b,double t){ //Total of 3 floating point operations // 1 2 3 return a+t*(b-a);}public static double bezier(double t,double... ds){ return bezier(ds,t);}public static double bezier(double[] ds,double t){ int count = ds.length; switch(count){ case 0:throw new IllegalArgumentException(Must have at least two items to interpolate between); case 1:return ds[0]; case 2:return bezier2(ds[0],ds[1],t); case 3:{ double a = ds[0]; double b = ds[1]; double c = ds[2]; double t1 = 1d - t; /* * Hardcoded for n=3 * Total of 8+1=9 floating point operations * 1 2 3 4 5 6 7 8 */ return (a*t1+2d*b*t)*t1+c*t*t; } case 4:{ double a = ds[0]; double b = ds[1]; double c = ds[2]; double d = ds[3]; double t1 = 1d - t; /* * Hardcoded for n=4 * Total of 13+1=9 floating point operations * 1 2 3 4 5 6 7 8 9 10 11 12 13 */ return (a*t1+3d*b*t)*t1*t1+(3d*c*t1+d*t)*t*t; } case 5:{ double a = ds[0]; double b = ds[1]; double c = ds[2]; double d = ds[3]; double e = ds[4]; double t1 = 1d - t; double i = t1*t1; double j = t*t; double k = t1*t; double l = 4d*k; /* * Hardcoded for n=5 * Total of 12+5=17 floating point operations * 1 2 3 4 5 6 7 8 9 10 11 12 */ return (a*i + b*l + 6d*c*j) * i + (d*l + e*j) * j; } case 6: case 7: case 8: case 9: case 10: case 11:return decasteljauBezier(ds,t); default:{ double t1 = 1d - t; int n1 = count - 1; int halfn = (n1>>1)+1; int halfn1 = halfn+1; double[] choose; if(count<29){ choose = new double[halfn1]; int[] chooseInt = chooseIntRange(n1,halfn); for(int i=0;i<halfn1;i++){ choose[i]=chooseInt[i]; } }else if(count<60){ choose = new double[halfn1]; long[] chooseLong = chooseLongRange(n1,halfn); for(int i=0;i<halfn1;i++){ choose[i]=chooseLong[i]; } }else{ choose = chooseDoubleRange(n1,halfn); } double[] terms = new double[count]; double power = 1d; for(int i=0;i<halfn;i++){ terms[i] = ds[i] * choose[i] * power; power *= t; } for(int i=halfn;i<count;i++){ terms[i] = ds[i] * choose[n1-i] * power; power *= t; } power = t1; for(int i=1;i<count;i++){ terms[n1-i] *= power; power *= t1; } double sum = 0d; for(double v:terms){ 
sum += v; } return sum; } }}public static double decasteljauBezier(double[] ds,double t){ int n = ds.length; double[] result = Arrays.copyOf(ds, n); for(int i=n-1;i>0;i--){ for(int j=0;j<i;j++){ result[j]+=(result[j+1]-result[j])*t; } } return result[0];}public static int[] chooseIntRange(int n,int k){ int n1 = n+1; int product = 1; int[] result = new int[k+1]; result[0] = 1; for(int i=1;i<=k;i++){ product = product*(n1-i)/i; result[i] = product; } return result;}public static long[] chooseLongRange(int n,int k){ int n1 = n+1; int product = 1; long[] result = new long[k+1]; result[0] = 1; for(int i=1;i<=k;i++){ product = product*(n1-i)/i; result[i] = product; } return result;}public static double[] chooseDoubleRange(int n,int k){ int n1 = n+1; double product = 1d; double[] result = new double[k+1]; result[0] = 1; for(int i=1;i<=k;i++){ product = (product * (n1-i)) / i; result[i] = product; } return result;}Notes:This was optimized by hand by me, and likely again by the compiler. It would explain how \$2\leq n\leq 5\$ was all roughly the same speed but \$n=6\$ then suddenly was \$\frac{1}{3}\$ of the speed.These numbers aren't randomly chosen. They're based on the max values for ints and longs. It might be possible to increase these a bit (and therefore improve the algorithm ever so slightly) but I like safety.Semi-in-place De Casteljau's with all the optimizations I could think of. It's pretty fast but still \$O(n^2)\$.De Casteljau's was faster for \$6\leq n\leq 11\$, so I thought of redirecting. But as it turns out, the extra cost of that single if statement and a redirect call meant that it was only worth redirecting for \$6\leq n\leq 9\$. Even that redirect might be gone in the future with more optimizing. It's not like the time isn't comparable for such a small \$n\$.Beating \$O(n)\$ is definitely impossible, but I'm certain there is a better algorithm out there, or if not, a better way to implement this one. If not even that, surely there are optimizations I've missed.
Bezier evaluation in O(n)
java;performance;algorithm
null
_webmaster.17583
After setting up a blog at blogspot.com I added the site to my Google Webmaster Tools account. I noticed an error: Restricted by robots.txt and looked a little into the matter. I found in the robots.txt of Blogspot that Google disallows the /search directory by default in order to avoid duplicate entries in their search engine. As tags can be found under /search/label/SOME_TAG, these are not indexed either. I run different sites; one in particular is an important e-commerce site for me. For each product, we use tags. Each tag leads to a separate page like /tags/tag1/ and lists all products that are linked to this tag. And this leads me to my question: Should I also block search and/or tag pages on my sites by using robots.txt? I feel Google might punish my PageRank/results for using this low-quality content. However, I think they are quite useful. We provide a short description of each tag and how the listed products can be used for the problem the tag describes. Moreover, users landing on tag pages have very high bounce rates (>90%), which is far above the average bounce rate. So, what is the best practice?
Search and Tags for robots.txt
seo;google;pagerank;robots.txt
I find a lot of pages in Google's search results from the various StackExchange sites that are the tag pages, most popular questions, etc. So it doesn't look like Google considers them to be low quality. I would leave them unblocked unless you see some sort of problem such as a Panda update having a very negative effect on your rankings. Otherwise these pages are a source of traffic, and although the bounce rate is high, you are receiving visitors who are going further into your site thanks to these pages being in the search results.
_webmaster.96569
I'm building a website for my client's hospice center. Right now, she wants to have a landing page for each city that the hospice serves, but for the content on those pages to be identical (like, adding a city name to the main landing page). The reason for this is because her hospice is located in only one city but will send people up to about 50 miles away (encompassing several counties and cities).The end idea is to be able to search Hospice in {This City} and for her website to show up in the search listing (even if not local). My problem is that I'm afraid that if I just make multiple duplicate pages Google and other search engines will penalize the website and it won't show up at all.Right now, my thought is to create pages for each county with the cities listed (/locations/{county}/ will contain a list of all of the cites served, plus content on that county, or an About Us page). I've considered making each city a page (locations/{county}/{city}), but there isn't any way to add that much unique content.My issue seems similar to this one: Multiple index pages on website for multiple locations, SEO no-no?(even in that she has seen this behavior in competing websites).Here is an example of a hospice center that has multiple location pages with identical information: https://www.heartlandhospice.com/find-an-agency/So my basic question is: is there a safe way to do this (maybe besides unique content) or should I convince her that this is too dangerous of a practice?
How should I make my website rank for multiple cities?
seo;google search;web development;local seo
null
_webapps.86130
I have an MSDN profile, with points, achievements, etc. How can I unlink this from the currently linked Live ID ([email protected]) and link it to my other Live ID ([email protected])?
How do you unlink your MSDN profile from your Microsoft Live ID?
microsoft;microsoft account
null
_opensource.5178
How can I easily determine the licenses of a project's dependencies? For example, for my GitHub repo, which includes multiple open-source packages.
Finding license dependencies from a GitHub repo
licensing;github
null
_webapps.69190
How do I send questions to select groups (e.g. animal rights and animal rescue) on Twitter?
How do I send a question to a select group of people on Twitter?
twitter
null
_codereview.115332
I'm trying to get back into programming by building an app I've had in mind for quite a while. I've created an SQLite database and have managed to get some data in it. I'm trying to display the data inside a Listview, and whilst it is displaying, I'm concerned I've made things more difficult for myself for making the Listview display the data in a more coherent fashion.For reference, this is the tutorial I followed for creating my database: http://hmkcode.com/android-simple-sqlite-database-tutorial/My question basically boils down to this: Have I done this the easiest way so far? If not, what do I need to do to fix it?I'm also not entirely sure I'm populating the listview correctly in the Games class as I'm doing it in onCreate. I plan on having the ListView refresh when I insert a new row.This is my SQLiteHelper class:package bassios.initiativetracker;import android.content.ContentValues;import android.content.Context;import android.database.Cursor;import android.database.sqlite.SQLiteDatabase;import android.database.sqlite.SQLiteOpenHelper;import android.util.Log;import java.util.LinkedList;import java.util.List;import bassios.initiativetracker.model.Game;import bassios.initiativetracker.model.Player;public class SQLiteHelper extends SQLiteOpenHelper {private static final int DATABASE_VERSION = 1;private static final String DATABASE_NAME = InitiativeTracker;public SQLiteHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION);}@Overridepublic void onCreate(SQLiteDatabase db) { String CREATE_PLAYER_TABLE = CREATE TABLE Player ( + playerId INTEGER PRIMARY KEY AUTOINCREMENT, + playerName TEXT, + characterName TEXT, + gameId INTEGER ); String CREATE_GAME_TABLE = CREATE TABLE Game ( + gameId INTEGER PRIMARY KEY AUTOINCREMENT, + gameName TEXT, + gameSystem TEXT ); db.execSQL(CREATE_PLAYER_TABLE); db.execSQL(CREATE_GAME_TABLE);}@Overridepublic void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL(DROP TABLE IF EXISTS Player, Game); this.onCreate(db);}/** * CRUD stuff follows *///Table Namesprivate static final String TABLE_PLAYER = Player;private static final String TABLE_GAME = Game;//Table Columns//Player Tableprivate static final String PLAYER_KEY_ID = playerId;private static final String PLAYER_KEY_PNAME = playerName;private static final String PLAYER_KEY_CNAME = characterName;private static final String PLAYER_KEY_GAMEID = gameId;//Game Tableprivate static final String GAME_KEY_ID = gameId;private static final String GAME_KEY_NAME = gameName;private static final String GAME_KEY_SYSTEM = gameSystem;//Columns Arrayprivate static final String[] PLAYER_COLUMNS = {PLAYER_KEY_ID, PLAYER_KEY_PNAME, PLAYER_KEY_CNAME, PLAYER_KEY_GAMEID};private static final String[] GAME_COLUMNS = {GAME_KEY_ID, GAME_KEY_NAME, GAME_KEY_SYSTEM};public void addGame(Game game) { Log.d(addGame, game.toString()); SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(GAME_KEY_NAME, game.getGameName()); values.put(GAME_KEY_SYSTEM, game.getGameSystem()); db.insert(TABLE_GAME, null, values); db.close();}public void addPlayer(Player player) { Log.d(addPlayer, player.toString()); SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(PLAYER_KEY_PNAME, player.getPlayerName()); values.put(PLAYER_KEY_CNAME, player.getCharacterName()); values.put(PLAYER_KEY_GAMEID, player.getGameId()); db.insert(TABLE_PLAYER, null, values); db.close();}public Game getGame(int id) { SQLiteDatabase db = 
this.getReadableDatabase(); Cursor cursor = db.query(TABLE_GAME, GAME_COLUMNS, gameId = ? , new String[]{String.valueOf(id)}, null, null, null, null); if (cursor != null) { cursor.moveToFirst(); } Game game = new Game(); game.setGameId(Integer.parseInt(cursor.getString(0))); game.setGameName(cursor.getString(1)); game.setGameSystem(cursor.getString(2)); Log.d(getGame( + id + ), game.toString()); return game;}public Player getPlayer(int id) { SQLiteDatabase db = this.getReadableDatabase(); Cursor cursor = db.query(TABLE_PLAYER, PLAYER_COLUMNS, playerId = ? , new String[]{String.valueOf(id)}, null, null, null, null); if (cursor != null) { cursor.moveToFirst(); } Player player = new Player(); player.setPlayerId(Integer.parseInt(cursor.getString(0))); player.setPlayerName(cursor.getString(1)); player.setCharacterName(cursor.getString(2)); player.setGameId(Integer.parseInt(cursor.getString(3))); Log.d(getPlayer( + id + ), player.toString()); return player;}public List<Game> getAllGames() { List<Game> games = new LinkedList<>(); String query = SELECT * FROM + TABLE_GAME; SQLiteDatabase db = getWritableDatabase(); Cursor cursor = db.rawQuery(query, null); Game game; if (cursor.moveToFirst()) { do { game = new Game(); //game.setGameId(Integer.parseInt(cursor.getString(0))); game.setGameName(cursor.getString(1)); game.setGameSystem(cursor.getString(2)); games.add(game); } while (cursor.moveToNext()); } Log.d(getAllGames(), games.toString()); return games;}public List<Player> getAllPlayers() { List<Player> players = new LinkedList<>(); String query = SELECT * FROM + TABLE_PLAYER; SQLiteDatabase db = getWritableDatabase(); Cursor cursor = db.rawQuery(query, null); Player player = null; if (cursor.moveToFirst()) { do { player = new Player(); player.setPlayerId(Integer.parseInt(cursor.getString(0))); player.setPlayerName(cursor.getString(1)); player.setCharacterName(cursor.getString(2)); player.setGameId(Integer.parseInt(cursor.getString(3))); players.add(player); } while (cursor.moveToNext()); } Log.d(getAllPlayers(), players.toString()); return players;}public int updateGame(Game game) { SQLiteDatabase db = getWritableDatabase(); ContentValues values = new ContentValues(); values.put(gameName, game.getGameName()); values.put(gameSystem, game.getGameSystem()); int i = db.update(TABLE_GAME, values, GAME_KEY_ID + = ?, new String[]{String.valueOf(game.getGameId())}); db.close(); return i;}public int updatePlayer(Player player) { SQLiteDatabase db = getWritableDatabase(); ContentValues values = new ContentValues(); values.put(playerName, player.getPlayerName()); values.put(characterName, player.getCharacterName()); values.put(gameId, player.getGameId()); int i = db.update(TABLE_PLAYER, values, PLAYER_KEY_ID + = ?, new String[]{String.valueOf(player.getPlayerId())}); db.close(); return i;}public void deleteGame(Game game) { SQLiteDatabase db = getWritableDatabase(); db.delete(TABLE_GAME, GAME_KEY_ID + = ?, new String[]{String.valueOf(game.getGameId())}); db.close(); Log.d(deleteGame , game.toString());}public void deletePlayer(Player player) { SQLiteDatabase db = getWritableDatabase(); db.delete(TABLE_PLAYER, PLAYER_KEY_ID + = ?, new String[]{String.valueOf(player.getPlayerId())}); db.close(); Log.d(deletePlayer , player.toString()); }}Game POJO class:package bassios.initiativetracker.model;public class Game {private int gameId;private String gameName;private String gameSystem;public Game() {}public Game(String gameName, String gameSystem) { super(); this.gameName = gameName; this.gameSystem = 
gameSystem;}public int getGameId(){ return gameId;}public void setGameId(int gameId){ this.gameId = gameId;}public String getGameName() { return gameName;}public String getGameSystem() { return gameSystem;}public void setGameName(String gameName) { this.gameName = gameName;}public void setGameSystem(String gameSystem) { this.gameSystem = gameSystem;}@Overridepublic String toString() { return gameName + : + gameSystem;}}Games class:package bassios.initiativetracker;import android.app.AlertDialog;import android.content.DialogInterface;import android.os.Bundle;import android.support.design.widget.FloatingActionButton;import android.support.design.widget.Snackbar;import android.support.v7.app.AppCompatActivity;import android.support.v7.widget.Toolbar;import android.view.LayoutInflater;import android.view.View;import android.widget.ArrayAdapter;import android.widget.Button;import android.widget.EditText;import android.widget.ListView;import java.util.List;import bassios.initiativetracker.model.Game;public class Games extends AppCompatActivity {EditText gameName;EditText gameSystem;@Overrideprotected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_game); Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar); setSupportActionBar(toolbar); ListView listContent = (ListView) findViewById(R.id.gamesListView); FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab); fab.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { showAddGameDialog(); } }); SQLiteHelper db = new SQLiteHelper(this); List<Game> game; game = db.getAllGames(); ArrayAdapter adapter = new ArrayAdapter(this, android.R.layout.simple_expandable_list_item_1, game); listContent.setAdapter(adapter); getSupportActionBar().setDisplayHomeAsUpEnabled(true);}public void saveGame(String gameName, String gameSystem) { SQLiteHelper db = new SQLiteHelper(this); db.addGame(new Game(gameName, gameSystem));}public void showAddGameDialog() { AlertDialog.Builder dialogBuilder = new AlertDialog.Builder(this); LayoutInflater inflater = this.getLayoutInflater(); final View dialogView = inflater.inflate(R.layout.save_game_dialog, null); dialogBuilder.setView(dialogView) .setTitle(Create Game) .setMessage(Enter game details) .setPositiveButton(getResources().getString(R.string.game_save), null) .setNegativeButton(getResources().getString(R.string.cancel), null); final AlertDialog b = dialogBuilder.create(); b.setOnShowListener(new DialogInterface.OnShowListener() { @Override public void onShow(DialogInterface dialog) { Button saveGame = b.getButton(AlertDialog.BUTTON_POSITIVE); saveGame.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { String gameName = ((EditText) dialogView.findViewById(R.id.gameNameField)).getText().toString(); String gameSystem = ((EditText) dialogView.findViewById(R.id.gameSystemField)).getText().toString(); if (gameName.isEmpty()) { Snackbar.make(view, Please enter a game name, Snackbar.LENGTH_LONG).setAction(Action, null).show(); } else { if (gameSystem.isEmpty()) { Snackbar.make(view, Please enter a game system, Snackbar.LENGTH_LONG).setAction(Action, null).show(); } else { saveGame(gameName, gameSystem); b.dismiss(); } } } }); } }); b.show();}}
Listview from SQLite
java;android
null
_softwareengineering.331168
I was hoping on a suggestion about whether forking or branching was better for this particular use case. I work with a professor and have created a jupyter notebook for some of our research. The notebook is hosted on a server on our cluster, and the professor makes periodic changes to the notebook including changes to some of the code etc.I on the other hand maintain the notebook code and am always adding new features and such.The challenge is keeping our changes in sync. So if she is working on some updates to the notebook and I am working on some updates--it is hard to keep the notebooks in sync.My thought was to create a fork of the notebook for her. Then I can pull in her changes through periodic pull requests. But I was not sure if trying to setup a separate branch for her would be better. In my mind, a branch is more for working on a feature and then merging it back into the original--not for a continual processes of change and synchronization.Any suggestions?
should I fork or branch in a use case where I and a collaborator make changes to jupyter notebooks
git;branching;forking
null
_softwareengineering.317049
I want to use a ShowMessageAsync method, but, at first sight, there is no obvious way to do ViewModel binding, even though there are already a certain number of answers and examples about this.Now, for a classical, standard Status TextBlock, <TextBlock x:Name=TbStatusMsg Text={Binding StatusMsg} VerticalAlignment=Bottom Height=20 TextAlignment=Center HorizontalAlignment=Stretch />I would have had the usual Property in the ViewModel:private string statusMsg;public string StatusMsg { get { return statusMsg; } set { statusMsg = value; OnPropertyChanged(() => StatusMsg); }}public void ClearMsg(){ StatusMsg = ;}and I think (from an abstract viewpoint) that it would make sense to keep it as such (can always make TextBlock Visibility=Hidden) because I believe that the way the message is shown it's only up to the View, so, in other words, I'm going to manage it through an UI event handler: public Window1(){ InitializeComponent(); DataContext = viewModel; DependencyPropertyDescriptor dp = DependencyPropertyDescriptor.FromProperty(TextBlock.TextProperty, typeof(TextBlock)); dp.AddValueChanged(TbStatusMsg, async (object a, EventArgs b) => { if (TbStatusMsg.Text.Length == 0) { return; } await this.ShowMessageAsync(Message, TbStatusMsg.Text); viewModel.ClearMsg(); });}Do you have any criticism or any conceptual reason why this wouldn't be fine?Well, my only reservation is that a light attached property (a simple text to do the binding) could be a cleaner replacement of the TextBox - unless it is really part of the View design.
MVVM approach to mahapps.metro Dialogs
c#;mvvm;metro
null
_cs.53909
I'm a third year computer science student. I'm working on a project Data-show touch screen In schools classrooms.I'll try to explain my problem as much as I can. The project has three main components; A computer, a Data-show and a webcam.The teacher will plug the data-show in the computer and the computer screen will appear on the wall of the classroom to all students.The main purpose of the project is to turn the image of the screen displayed on the classroom wall into interactive screen; when the teacher tabs with his finger on an image of a Button displayed on the wall, the webcam that is connected to the computer will capture the position of the teacher's finger and find his (x,y) coordinates for a reference point on the wall, and raise a click event in the related (x,y) position on the screen.The screen of the computer has two dimensions; Width->X and Height->Y. And for every point in the screen such as P, it could be located on the screen using two numbers (Px,Py), where Px is the distance between the point P and the left side of the screen, and Py is the distance between the point P and the top side of the screen. In other words, the reference for all points in the screen is the top left corner of the screen.The data-show will display an irregular Quadrilateral shape of the screen on the wall. the shapes will not be regular squares or rectangles due to the angle that the teacher puts the data-show in. What I'm asking for are the equations that will calculate the (x,y) point on the screen that represents the (x,y) tapped point on the wall.There is mainly four shapes the data-show may display on the wall. For each shape of them the only known things are the coordinates of the four angles(corners) of the quadrilateral shape.1. an optimal rectangleThe displayed image on wall has a very low chance to shape an optimal rectangle, but it's the basic shape that could be formed.Suppose that the red point P'(Px',Py') represents the coordinates of the place the teacher tapped on with his finger.To get the original (Px,Py) coordinates from the point (Px',Py') on the wall, I can do the following.Calculate the width->X' and the height->Y' of the displayed image by the law of distance between two points.find the ratio between X&X', and between Y&Y'. I'll call the first ratio rx and the second ratio ry.Multiply Px' by rx to get Px, and multiply Py' by ry to get Py. 2. an optimal trapezoidal.The displayed image on wall could also shape an optimal trapezoidal. I asked some of my friends from the applied mathematics college to help me to find the two equations to find the original coordinates of the point, and they did some calculations and came out with these two equations;In this shape To find X:and to find Y I can use the same way used in the rectangle; finding the ratio between Y and the height of the trapezoidal H.3. an irregular quadrilateral.My question is about this shape, the data-show in most times will shape an irregular shape. Imagine this shape likeNone of the shape's lines is vertical or horizontal, all lines may have different lengths and they may have different angels from each other.My question isI'm searching for equations that will find the original P(x,y) point of the point P'(X',Y'). Things I know are the coordinates of the points P1, P2, P3, P4, P'What are those equations? and how are they derived?
How to find the original coordinates of a point inside an irregular rectangle?
computational geometry
null
_codereview.143993
I make a 2D matrix class (where matrix elements are of type float) which so far can:Create a matrix of zeros of any size by typing Matrix2D myMatrix(n,m) where n is row size and m is column size (default is 1)Create a matrix based on an initializer list, for example Matrix2D myMatrix({{1,2},{3,4}}) creates the matrix$$\texttt{myMatrix} = \begin{bmatrix}1 & 2 \\ 3 & 4\end{bmatrix}$$Delete row i of a matrix by typing myMatrix.removeRow(i)Delete column i of a matrix by typing myMatrix.removeColumn(i)Concatenate two Matrix2D matrices A and B horizontally by typing A.horzcat(B)Concatenate two Matrix2D matrices A and B vertically by typing A.vertcat(B)Do matrix equality by overloading the = operatorAccess a matrix element by overloading the () operator, e.g. A(i,j)Do matrix addition by overloading the + operatorHere is my code:#ifndef _MATRIXCLASS2_HPP#define _MATRIXCLASS2_HPP// System includes#include <iostream>#include <vector>#include <cstdio>// Personal includes#include exceptionClass.hppusing namespace std;/******************************* * Matrix class: definition *******************************/typedef float type;typedef vector<vector<type> > vector2D;typedef vector<type> vector1D;class Matrix2D {private: vector2D _matrix; // the matrix itself, a two-dimensional vectorpublic: /* Constructors */ Matrix2D(size_t numRows=1, size_t numCols=1) : _matrix(vector2D(numRows, vector1D(numCols))) {} // zero matrix explicit Matrix2D(const initializer_list<initializer_list<type> > & matrixAsAList) { // matrix given by brace enclosed initializer list, e.g. {{1,2},{3,4}} _matrix.assign(matrixAsAList.begin(), matrixAsAList.end()); } Matrix2D(const Matrix2D & matrixToCopy) : _matrix(matrixToCopy._matrix) {} /* Getters */ vector2D fullMatrix() const { return _matrix; } // output the full matrix size_t numRows() const { return _matrix.size(); } size_t numColumns() const { return _matrix[0].size(); } void print() const; /* Setters */ void removeRow(size_t); void removeColumn(size_t); void horzcat(const Matrix2D &); // horizontal matrix concatenation void vertcat(const Matrix2D &); // vertical matrix concatenation /* Operator overloads */ type & operator () (size_t, size_t); Matrix2D & operator = (const Matrix2D &); Matrix2D & operator + (Matrix2D &);};// Print the whole matrixvoid Matrix2D::print() const { for (size_t i=0; i<numRows(); i++) { // iterate over rows printf([ ); for (size_t j=0; j<numColumns(); j++) { // iterate over columns printf(%.3f ,_matrix[i][j]); } printf(]\n); }}// Delete rowvoid Matrix2D::removeRow(size_t row) { if (row>=numRows()) { throw E(Row number for deletion is out of range, not going to delete anything); } else { _matrix.erase(_matrix.begin()+row); // delere row (NB: .erase() decrements both size and capacity) }}// Delete column number columNumbervoid Matrix2D::removeColumn(size_t column) { if (column>=numColumns()) { throw E(Column number for deletion is out of range, not going to delete anything); } else { for (size_t i=0; i<numRows(); i++) { // iterate over rows _matrix[i].erase(_matrix[i].begin()+column); // delete element in column } }}// Horizontally concatenate matrix with another matrix, matrix2void Matrix2D::horzcat(const Matrix2D & matrix2) { if (numRows() != matrix2.numRows()) { throw E(Row sizes do not correspond, cannot concatenate matrices!); } else { for (size_t i=0; i<numRows(); i++) { // reserve necessary space (reserve throws length_error exception if unable to do so) _matrix[i].reserve(_matrix[i].size()+matrix2._matrix[i].size()); // append matrix2 row i to 
end of matrix row i _matrix[i].insert(_matrix[i].end(), matrix2._matrix[i].begin(), matrix2._matrix[i].end()); } }}// Vertically concatenate matrix with another matrix, matrix2void Matrix2D::vertcat(const Matrix2D & matrix2) { if (numColumns() != matrix2.numColumns()) { throw E(Columnn sizes do not correspond, cannot concatenate matrices!); } else { // reserve necessary space (reserve throws length_error exception if unable to do so) _matrix.reserve(_matrix.size()+matrix2._matrix.size()); // append matrix2 to the bottom of matrix _matrix.insert(_matrix.end(), matrix2._matrix.begin(), matrix2._matrix.end()); }}// Overload (), get element at row and column of _matrixtype & Matrix2D::operator () (size_t row, size_t column) { return _matrix.at(row).at(column);}// Overload =Matrix2D & Matrix2D::operator = (const Matrix2D & rhs) { if (this != &rhs) { _matrix = rhs._matrix; } return *this;}// Overload + (matrix addition)Matrix2D & Matrix2D::operator + (Matrix2D & rhs) { // compute result = matrix + rhs static Matrix2D result; // initialize the result in static storage (safer & more efficient) result = *this; // copy matrix intro result if (rhs.numRows() != numRows() || rhs.numColumns() != numColumns()) { // throw error if rhs column or row size does not match matrix throw E(Row of column size mismatch, won't add matrix.); } // perform matrix addition for (size_t i=0; i<numRows(); i++) { for (size_t j=0; j<numColumns(); j++) { result(i,j) += rhs(i,j); } } return result;}#endif // _MATRIXCLASS2_HPPThe exceptionClass.hpp header is pretty simple:#ifndef _EXCEPTIONCLASS_H#define _EXCEPTIONCLASS_H#include <exception>class E: public std::exception {private: const char * message = nullptr; E(){}; // such a constructor not possible!public: explicit E(const char * s) throw() : message(s) {} const char * what() const throw() { return message; }};#endif // _EXCEPTIONCLASS_HThis is my first ever object-oriented project. I'm looking for any advice on how I can improve my code in terms of efficiency/readability/portability. Thank you!
C++ small 2D matrix class
c++;performance;object oriented;c++11;portability
null
_unix.254335
I have a folder with lots of subfolders that contain the same genre tag twice, in slightly different forms:Movie1 {Action}{Adventure}{Sci-Fi}{Thriller}{Science Fiction}/Movie2 {Action}{Adventure}{Thriller}{Science Fiction}/Movie3 {Action}{Adventure}{Thriller}{Sci-Fi}}/Movie4 {Action}{Adventure}{Thriller}/How do I unify these by deleting the part {Science Fiction} where {Sci-Fi} already exists, and renaming the folders that don't contain {Sci-Fi} but only {Science Fiction}? I would go for a for loop:for f in *; do if [ *{Science Fiction}* == $f ] && [ *{Sci-Fi}* == $f ]; then #delete the {Science Fiction} part else ... fi doneBut that doesn't seem very elegant. Is there a cleaner solution?
rename all files in a folder deleting duplicate string-parts
bash;rename;batch jobs
You can use sed to remove the duplicates from string:for f in *; do r=$(echo $f | sed -r s/(.*)(\{Sci-Fi\}|\{Science Fiction\})(.*)(\{Sci-Fi\}|\{Science Fiction\})(.*)/\1\2\3\5/g); echo $r;doneReplace echo $f with mv $f $r if you like the output.The above sed line will take the first matching word and remove the second, if you want to always priorize Sci-Fi over Science Fiction, even when only Science Fiction exists, you can do it in two steps:for f in *; do r=$(echo $f | sed s/{Science Fiction}/{Sci-Fi}/); s=$(echo $r | sed -r s/(.*)(\{Sci-Fi\})(.*)(\{Sci-Fi\})(.*)/\1\2\3\5/g); if [ $f != $s ]; then echo moving $f to $s fidone
_softwareengineering.234564
We follow pair programming in our company and always face the issue of balanced and effective pair rotation within the developers on stories.We follow a simple metrics in which every developer's name is mapped with every other developer and we mark the respective intersection whenever two developers are pairing. This is not working out well, we cannot track how much time a pair has spent pairing and people forget to update the metrics many times. Tracking the pair rotation is helpful because we want the project knowledge to be shared across the team, and not just one pair. So usually what happens is, whoever is pairing keeps pairing till the entire story is completed (given they have better context), and no body else knows about what is being done & if the story or a regression/production bug comes back, the same pair has to pick it up (leaving whatever they are currently doing), which is what creates a bottleneck.Are there any known metrics that can be used for tracking the pair rotations.
Pair Rotation in a team for effective pair programming
agile;pair programming;extreme programming
null
_webmaster.14872
I installed XAMPP this evening and none of my PHP pages will display. I am storing all my pages in the htdocs folder as directed, and calling them via http://localhost/index.php, but nothing appears. All systems are online. Is there anything else that should have been done before copying over my pages? I'm using a Mac. I've tried putting my files in another folder as well, but nothing displays! I get this error: Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0 Fatal error: Unknown: Failed opening required '/Applications/XAMPP/xamppfiles/htdocs/XXXXXXX/index.php' (include_path='.:/Applications/XAMPP/xamppfiles/lib/php:/Applications/XAMPP/xamppfiles/lib/php/pear') in Unknown on line 0 Any ideas? Very new to PHP and XAMPP!
Xampp not displaying PHP pages in htdocs due to failed to open stream: Permission denied
php;localhost;xampp
null
_unix.213022
I cannot find /etc/sysconfig/clock in redhat 7. Is there any equivalent file in redhat7??
Equivalent of /etc/sysconfig/clock in redhat 7
linux;rhel
null
_webmaster.101660
I have 200+ errors flagging in Search Console > Search Appearance > Structured DataThese are all hentry errors that say 'Missing:updated'.When testing live in the Structure Data Testing Tool, no errors appear.The code has not changed since the last detected date.I know that to fix this I would just add the updated property but, is Search Console unreliable/out of date that I can just ignore if all appears fine in the testing tool?
Discrepancy between Search Console and Structured Data Testing Tool
seo;google search console;structured data
null
_unix.251786
I have two XML file first one ~/tmp/test.xml second one /data/myuser/.mycontent/mytest.xml I want to add all of the content on the first XML file to line 35 in the second one. I tried the following but with no lucksed -n '35,~/tmp/test.xml`' /data/myuser/.mycontent/mytest.xml(cat /data/myuser/.mycontent/mytest.xml; echo) | sed '35r ~/tmp/test.xml'ed -s ~/tmp/test.xml <<< $'35r /data/myuser/.mycontent/mytest.xml\nw'Line 33 from second XML file line 34 is empty#the following tags contain employee locationXML tag in the first XML file<Location /mylocation> first Address second Address Mylocation XX/XX/XX/XX Myphone XXXXXXX</Location>What did I do wrong, please advise .Edit 1first XML ~/tmp/test.xml file contain only <Location /mylocation> first Address second Address Mylocation XX/XX/XX/XX Myphone XXXXXXX</Location>second XML /data/myuser/.mycontent/mytest.xml contain:NameVirtualHost *:XXXX<VirtualHost *:XXXX> ServerName AAAAAAAA# Manager comment 1# Manager comment 2# Manager comment 3#DocumentRoot /data/myuser/.mycontent/# support email [email protected]# started at 2010<employee /*> AllowOverride None</employee><Location /> mylocation Deny from all</Location><Location /icons/># employee info my employee info Allow from all</Location>DavLockDB /tmp/${APACHE_HOSTNAME}.DavLockDAVMinTimeout 5000LimitXMLRequestBody 0# This should be changed to whatever you set DocumentRoot to.## I need to add new tags here ##<Location /employee1> first Address second Address Mylocation XX/XX/XX/XX Myphone XXXXXXX</Location><Location /employee2> first Address second Address Mylocation XX/XX/XX/XX Myphone XXXXXXX</Location>## more tags same as above## then manager commentEdit 2second file /data/myuser/.mycontent/mytest.xml should be like: NameVirtualHost *:XXXX <VirtualHost *:XXXX> ServerName AAAAAAAA # Manager comment 1 # Manager comment 2 # Manager comment 3 # DocumentRoot /data/myuser/.mycontent/ # support email [email protected] # started at 2010 <employee /*> AllowOverride None </employee> <Location /> mylocation Deny from all </Location> <Location /icons/> # employee info my employee info Allow from all </Location> DavLockDB /tmp/${APACHE_HOSTNAME}.DavLock DAVMinTimeout 5000 LimitXMLRequestBody 0 # This should be changed to whatever you set DocumentRoot to. ## I need to add new tags here ## ## this tag from first file <Location /mylocation> first Address second Address Mylocation XX/XX/XX/XX Myphone XXXXXXX </Location> ## edit end <Location /employee1> first Address second Address Mylocation XX/XX/XX/XX Myphone XXXXXXX </Location> <Location /employee2> first Address second Address Mylocation XX/XX/XX/XX Myphone XXXXXXX </Location> ## more tags same as above ## then manager commentNote: ## this tag from first file and ## edit end to specify merge location location
Add content of XML file to another one using bash script
bash;xml
OK, so this isn't XML inserting into XML like I thought - if it was, the answer would be 'use a parser'. However it's not, you're just merging one text file into another. So I would break out the perl as I so often do:#!/usr/bin/env perluse strict;use warnings;open ( my $insert, '<', '~/tmp/test.xml' ) or die $!;open ( my $modify, '<', '/data/myuser/.mycontent/mytest.xml' ) or die $!; open ( my $output, '>', '/data/myuser/.mycontent/mytest.xml.new' ) or die $!; select $output; while ( <$modify> ) { if ( $. == 32 ) { print <$insert>; }; print; }This should do the trick - if you're after a one liner, then it can be condensed down to:perl -p -i.bak -e 'BEGIN { open ( $insert, <, shift ) } if ( $. == 32 ) { print <$insert> }' ~/tmp/test.xml /data/myuser/.mycontent/mytest.xmlNote $. is perl for current line number. You can apply a different sort of conditional if you prefer. Like whether a regex matches (which might be more appropriate, given config files tend to get lines inserted into them).
_softwareengineering.340392
I am wondering how the apps that allowa user to choose an item and, once the user has selected an item and checked out, give the retailer information about order that has been placed. For example, say a takeaway has an iOS app and customer has chosen fish and chips and placed an order. How does the takeaway know an order has been placed? If it's TCP IP then I guess we need to start a server on the takeaway's computer? Is that right? How can modify the menu without making any changes in app from developer side?I am looking for an answer about how things work in real world. Once I have the idea then developing it is a piece of cake.
How does an app send an order to a retailer? What happens under the hood?
android;ios;app
I am wondering how the retailer apps work which allow a user to choose an item and, once the user has selected and checked out, give the retailer information about the order that has been placed. Typically the app posts the selection to a server. The buzzwords you need to know to study this are Shopping Cart and E-Commerce. Many frameworks exist that would allow a fish-and-chips takeaway to add their menu and pictures to their existing code. TCP/IP is just one of many technologies at work here (I'm assuming no one uses UDP for this, but I could be wrong). 2) How can the retailer add/remove/modify the menu without making any changes in the app from the developer's side? The same way we've added our respective question and answer to softwareengineering.stackexchange.com. Code and content are separated. Everything we typed here doesn't end up in someone's source code. It just becomes data. The source code doesn't care about the contents of the data. It just needs to know how to find it and display it. That means the content (menu, pictures) can be added long after they're done writing the code. The app can download the latest content the same way a web browser would.
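As a rough sketch of what "the app posts the selection to a server" can look like on the wire (the endpoint URL and JSON field names here are made up for illustration; a real app would use whatever order API its e-commerce backend exposes):

import json
import urllib.request

# Hypothetical endpoint of the takeaway's backend.
ORDER_URL = "https://example.com/api/orders"

def place_order(customer_name, items):
    """Send the chosen items to the server as a JSON order."""
    payload = json.dumps({"customer": customer_name, "items": items}).encode("utf-8")
    request = urllib.request.Request(
        ORDER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # The server records the order and notifies the retailer, e.g. via
        # a dashboard, an order printer or a push notification.
        return json.load(response)

# place_order("Alice", [{"name": "fish and chips", "quantity": 1}])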
_cstheory.30978
Is that the same as saying the one will try to generate a higher-degree pseudo expectation functional by solving a SOS-program ? Or is there a difference between the two things? Or to take a different view, We needed to show that the projector to low-degree polynomials has bounded hypercontractive norm. We start off defining the projector $\mathcal{P}_d$ as the map,$$\mathcal{P}_d : ( \{ \pm \}^n \rightarrow \mathbb{R} ) \rightarrow ( \{ \pm \}^n \rightarrow \mathbb{R} )$$$$ f = \sum_{\alpha \subseteq [n] }\hat {f}_\alpha \chi_\alpha \rightarrow f' = \sum_{\vert \alpha\vert \leq d } \hat{f}_\alpha \chi_\alpha$$Where $\chi_\alpha = \prod_{i \in \alpha} x_i$Then we show that the over the space of such ''$n-$variate Fourier polynomials $f'$ with degree at most $d$, $\mathbb{E} [f'^4 ] \leq 9^d ( \mathbb{E} [ f'^2 ] )^2 $ (which is equivalent to showing that $\Vert \cal{P} \Vert_{2 \rightarrow 4 } \leq 9^d$ ) So in the above context is the choice of ``4 what quantifies the number of rounds of Lasserre hierarchy used? (the above is called a degree-4 SOS proof!) (as in you can run the SOS-program trying to optimize the hypercontractive norm of an operator only for as large a value of $x$(here $4$) as for which above kind of hypercontractive bounds can be established?) Since this hypercontractive bound on such projection operators is already proven as a theorem then what does the so-called Tensor-SDP algorithm achieve in terms of giving an efficient certificate?
What are multiple rounds of SOS/Lasserre hierarchy?
cc.complexity theory;approximation algorithms;approximation hardness;unique games conjecture
null
_unix.337960
I am using an embedded device with onboard storage (mmcblk0).The system is using UEFI (and GRUB), on mmcblk0 I have a GPT partition with 3 partition: root, configurations, swap.My command to boot is:linux /vmlinuz root=/dev/mmcblk0p1 net.ifnames=0 splashNow my problem is that when I set the quiet or loglevel param, it fails to boot and hangs up in a kernel panic. When I don't set one of those it boots perfectly. Root param is always the same.Full kernel panic log:
Kernel panic using logelevel or quiet
kernel;linux kernel;kernel panic
null
_unix.149111
This question was stimulated by asking the questionChromium browser does not allow setting the default paper size for Print to File, and also by a conversation with @Gilles on chat. As pointed out by @don_crissti, and as verified by me, changing the locale (at least LC_PAPER) makes a difference in what paper size is selected.I had never given much thought to what to select, and had always gone with en_US.UTF-8 because it seemed like a reasonable default choice.However, per @Gilles on chat (see conversation starting at http://chat.stackexchange.com/transcript/message/17017095#17017095). Extracts:Gilles: LC_PAPER defaults to $LANGGilles: You must have LANG=en_US.UTF-8. That's a bad idea: it sets LC_COLLATE and that's almost always a bad thingGilles: LC_COLLATE doesn't describe correct collation, it's too restrictive (it goes character by character) remove LANG and instead set LC_CTYPE and LC_PAPERGilles: plus LC_MESSAGES if you want messages in a language other than EnglishClearly, there are issues here I am not aware of, and I am sure many others are as well. So, what issues should you consider when setting locales, and how should you set them? I've always just run dpkg-reconfigure locales in Debian, and not thought twice about it.Specific question: Should I set my locale to en_IN.UTF-8? Are there any drawbacks of doing so?See also: Does (should) LC_COLLATE affect character ranges?
What should I set my locale to and what are the implications of doing so?
locale
Locale settings are user preferences that relate to your culture.Locale namesOn all current unix variants that I know of (but not on a few antiques), locale names follow the same pattern: An ISO 639-1 lowercase two-letter language code, or an ISO 639-2 three-letter language code if the language has no two-letter code. For example, en for English, de for German, ja for Japanese, uk for Ukrainian, ber for Berber, For many but not all languages, an underscore _ followed by an ISO 3166 uppercase two-letter country code. Thus: en_US for US English, en_UK for British English, fr_CA Canadian (Qubec) French, de_DE for German of Germany, de_AT for German of Austria, ja_JP for Japanese (of Japan), etc.Optionally, a dot . followed by the name of a character encoding such as UTF-8, ISO-8859-1, KOI8-U, GB2312, Big5, etc. With GNU libc at least (I don't know how widespread this is), case and punctuation is ignored in encoding names. For example, zh_CN.UTF-8 is Mandarin (simplified) Chinese encoded in UTF-8, while zh_CN is Mandarin Chinese encoded in GB2312, and zh_TW is Taiwanese (traditional) Chinese encoded in Big5.Optionally, an at sign @ followed by the name of a variant. The meaning of variants is locale-dependent. For example, many European countries have an @euro locale variant where the currency sign is and where the encoding is one that includes this character (ISO 8859-15 or ISO 8859-16), as opposed to the unadorned variant with the older currency sign. For example, en_IE (English, Ireland) uses the latin1 (ISO 8859-1) encoding and as the currency symbol while en_IE@euro uses the latin9 (ISO 8859-15) encoding and as the currency symbol.In addition, there are two locale names that exist on all unix-like system: C and POSIX. These names are synonymous and mean computerese, i.e. default settings that are appropriate for data that is parsed by a computer program.Locale settingsThe following locale categories are defined by POSIX:LC_CTYPE: the character set used by terminal applications: classification data (which characters are letters, punctuation, spaces, invalid, etc.) and case conversion. Text utilities typically heed LC_CTYPE to determine character boundaries.LC_COLLATE: collation (i.e. sorting) order. This setting is of very limited use for several reasons:Most languages have intricate rules that depend on what is being sorted (e.g. dictionary words and proper names might not use the same order) and cannot be expressed by LC_COLLATE.There are few applications where proper sort order matters which are performed by software that uses locale settings. For example, word processors store the language and encoding of a file in the file itself (otherwise the file wouldn't be processed correctly on a system with different locale settings) and don't care about the locale settings specified by the environment.LC_COLLATE can have nasty side effects, in particular because it causes the sort order A < a < B < , which makes between A and Z include the lowercase letters a through y. In particular, very common regular expressions like [A-Z] break some applications.LC_MESSAGES: the language of informational and error messages.LC_NUMERIC: number formatting: decimal and thousands separator.Many applications hard-code . as a decimal separator. This makes LC_NUMERIC not very useful and potentially dangerous:Even if you set it, you'll still see the default format pretty often.You're likely to get into a situation where one application produces locale-dependent output and another application expects . 
to be the decimal point, or , to be a field separator.LC_MONETARY: like LC_NUMERIC, but for amounts of local currency.Very few applications use this.LC_TIME: date and time formatting: weekday and month names, 12 or 24-hour clock, order of date parts, punctuation, etc.GNU libc, which you'll find on non-embedded Linux, defines additional locale categories:LC_PAPER: the default paper size (defined by height and width).LC_NAME, LC_ADDRESS, LC_TELEPHONE, LC_MEASUREMENT, LC_IDENTIFICATION: I don't know of any application that uses these.Environment variablesApplications that use locale settings determine them from environment variables.Then the value of the LANG environment variable is used unless overridden by another setting. If LANG is not set, the default locale is C.The LC_xxx names can be used as environment variables.If LC_ALL is set, then all other values are ignored; this is primarily useful to set LC_ALL=C run applications that need to produce the same output regardless of where they are run.In addition, GNU libc uses LANGUAGE to define fallbacks for LC_MESSAGES (e.g. LANGUAGE=fr_BE:fr_FR:en to prefer Belgian French, or if unavailable France French, or if unavailable English).Installing localesLocale data can be large, so some distributions don't ship them in a usable form and instead require an additional installation step.On Debian, to install locales, run dpkg-reconfigure locales and select from the list in the dialog box, or edit /etc/locale.gen and then run locale-gen.On Ubuntu, to install locales, run locale-gen with the names of the locales as arguments.You can define your own locale.RecommendationThe useful settings are:Set LC_CTYPE to the language and encoding that you encode your text files in. Ensure that your terminals use that encoding.For most languages, only the encoding matters. There are a few exceptions; for example, an uppercase i is I in most languages but in Turkish (tr_TR).Set LC_MESSAGES to the language that you want to see messages in.Set LC_PAPER to en_US if you want US Letter to be the default paper size and just about anything else (e.g. en_GB) if you want A4.Optionally, set LC_TIME to your favorite time format.As explained above, avoid setting LC_COLLATE and LC_NUMERIC. If you use LANG, explicitly override these two categories by setting them to C.
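To see a few of these categories from a program's point of view, here is a small sketch using Python's standard locale module (my own illustration; it assumes a Python 3 interpreter and, for the last step, that a locale such as de_DE.UTF-8 has been generated on the machine):

import locale
import time

# The empty string means "use whatever LANG / LC_* the environment says";
# fall back to the C locale if the requested locale is not installed.
try:
    locale.setlocale(locale.LC_ALL, "")
except locale.Error:
    locale.setlocale(locale.LC_ALL, "C")

# LC_NUMERIC governs the decimal point and thousands separator...
print(locale.format_string("%.2f", 1234567.89, grouping=True))

# ...and LC_TIME governs weekday/month names and date layout.
print(time.strftime("%A %d %B %Y"))

# A single category can be switched on its own, e.g. German dates only.
try:
    locale.setlocale(locale.LC_TIME, "de_DE.UTF-8")
    print(time.strftime("%A %d %B %Y"))
except locale.Error:
    pass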
_webapps.75180
I'd like to put the share links to my files in a spreadsheet to make it easier to share them. Is it possible to get the links for all files without going to each file individually?
Get share link of multiple files in Google Drive
google drive
null
_unix.226776
Are basic system administrator utilities such as useradd or adduser standardized? If so, where can I find the specs? (POSIX doesn't seem to encompass those, but I might need to take a better look).
Are basic system administrator utilities such as useradd or adduser standardized?
administration;standard
No, these utilities are not standardized. A quick look through the useradd(8) manual on RHEL6 versus OpenBSD reveals that while there are similarities, various flags differ in purpose. For a broader view, http://bhami.com/rosetta.html lists under managing users a variety of different commands, depending on the particular flavour of unix.
_unix.120368
Some applications simulate a virtual USB or CD Rom drive as if a USB drive is attached to the computer.Is there any configuration or application that provides a virtual USB drive, not for the the operating system itself, but for other equipments which accept USB drive, through a USB port.So I'll have a virtual hard disk (e.g. a *.vdi file) in the computer, which is connected, through a USB socket, as a USB drive to some other equipment (e.g. a cell phone or a laptop).
Make a computer act as a virtual USB device for other equipments
usb
You would need to add a USB Device/Peripheral controller to the computer, as opposed to the USB Host Controller they tend to come with.Something like this: https://www.maximintegrated.com/en/products/interface/controllers-expanders/MAX3420E.htmlUnfortunately, you'd have to find a way to wire it onto your motherboard. Technically, it can be done. Practically, you'd have to redesign the motherboard to include it. You might be lucky enough to find an SPI or I2C bus exposed somewhere on your motherboard to allow you to add it, but they're usually wired directly into whatever they're being used for unless you're using a dev board or single-board computer with exposed GPIO and other ports such as a Raspberry Pi.The other option would be a USB On-the-Go Controller. Motherboards designed for embedded and portable devices tend to have a USB OTG (On-the-go) contoller, which can function as either a Host or Device controller. For example, the aforementioned Raspberry Pi has an On-the-Go Controller, but on all models except the Pi Zero that gets rewired to a host port or an onboard USB hub denying the use of USB device functionality. The BeagleBone Black has an OTG port.That's not all though - once you've got the hardware, you'd also need the software. Linux has some useful kernel USB Gadget drivers (USB gadget is another term for USB peripheral/device) such as g_serial and g_ethernet that allow you to plug your device into another computer and be visible as a serial or ethernet-over-USB device (there are others for exposing a device as mass storage, which allow you to use a file as a block device and expose the computer as a mass storage gadget). The BeagleBone Black tends to come with this enabled by default, so you can simply plug it into your PC over USB and see it as a networked device - and I believe it also appears as a mass storage device by using a composite driver (which allows it to appear as multiple USB device types over a single connection.) The Pi Zero can use these, but does not by default. For Windows or other OSes, you'd probably have to write that device driver yourself.So, theoretically, you can do it. You can tear down your desktop PC, try and find an unused compatible bus on the motherboard somewhere (most likely some unused pins on a controller IC), or a way to extend an internal I2C or SPI bus, or something you can tear out and replace, and solder a USB OTG or device controller chip onto it. Then you can install Linux and use a gadget driver, or write your own for another OS. Practically, unless you're a top-notch electronics engineer, you're not going to be able to do it. At least, not until someone comes out with that elusive adapter with a device or OTG port on it that plugs into a USB port (theoretically, that could be done with a microcontroller such an Arduino wired to a pair of USB device controller ICs), and writes the drivers to run it.
_webapps.8989
I am totally aware of this question; however, that isn't a very elegant solution and it seems quite buggy or even outdated, judging from the comments. I heard that there was a Greasemonkey/user script that would achieve the same goal: removing all of my own status updates from my Facebook profile (but not, for example, what other people have posted to it). I could not find it - has anybody got an idea? In other words: would it be possible for me to write my own Facebook app that achieves that goal, i.e. by using the Facebook API?
Script or App to remove own Facebook status updates
facebook;greasemonkey
You can use an app like Exfoliate, available on Android phones, that can delete everything you've posted on friends' walls, including comments and likes, as well as your own wall. It cleans out photo galleries too. You can also set the age of the content you want deleted. Search for Exfoliate in the Android Marketplace to find it, or go to: https://market.android.com/details?id=com.worb.android.exfoliate
_codereview.135632
I have been assigned to develop a feature that filters a conversation. For example, I want to filter the conversation by user id and then export it to either JSON or a text file. In this case, I created a class that handles the filters like this:

import java.util.ArrayList;
import java.util.List;

/**
 * Represents the filter that operates the filter functions for messages
 *
 * @author Muhammad
 */
public class Filter {

    String filterType;
    String argumentValue;

    //Constructor that takes a parameter of filter type and argument value.
    public Filter(String filterType, String argumentValue) {
        this.filterType = filterType;
        this.argumentValue = argumentValue;
    }

    //Method that filters a conversation by a specific user and returns the filtered conversation.
    public static Conversation filterByUser(Conversation conversation, String specificUser) {
        List<Message> messageList = new ArrayList<>();
        //Filter by user id
        for (Message message : conversation.messages) {
            if (message.senderId.equals(specificUser)) {
                messageList.add(message);
                Conversation filteredConversation = new Conversation(conversation.name, messageList);
                conversation = filteredConversation;
            }
        }
        return conversation;
    }

    //Method that filters a conversation that contains a specific keyword and returns the filtered conversation.
    public static Conversation filterByWord(Conversation conversation, String specificWord) {
        List<Message> messageList = new ArrayList<>();
        //Filter by keyword
        for (Message message : conversation.messages) {
            if (message.content.contains(specificWord)) {
                messageList.add(message);
                Conversation filteredConversation = new Conversation(conversation.name, messageList);
                conversation = filteredConversation;
            }
        }
        return conversation;
    }

    //Method that hides a specific word in a conversation.
    public static Conversation hideWord(Conversation conversation, String specificWord) {
        List<Message> messageList = new ArrayList<>();
        //Redact the keyword
        for (Message message : conversation.messages) {
            if (message.content.contains(specificWord)) {
                message.content = message.content.replaceAll(specificWord, "*redacted*");
                messageList.add(message);
                Conversation filteredConversation = new Conversation(conversation.name, messageList);
                conversation = filteredConversation;
            }
        }
        return conversation;
    }
}

In another class, I used it inside a method called filter like this:

private void filter(Filter filter, Conversation conversation, String outputFilePath) throws Exception {
    String filterType = filter.filterType;          //used to get the type of filter
    String argumentValue = filter.argumentValue;

    //Filterers
    switch (filterType) {
        case "filteruser":
            conversation = Filter.filterByUser(conversation, argumentValue);
            this.writeConversation(conversation, outputFilePath);
            break;
        case "filterword":
            conversation = Filter.filterByWord(conversation, argumentValue);
            this.writeConversation(conversation, outputFilePath);
            break;
        case "hideword":
            conversation = Filter.hideWord(conversation, argumentValue);
            this.writeConversation(conversation, outputFilePath);
            break;
        default:
            this.writeConversation(conversation, outputFilePath);
            break;
    }
}

This code works, but I'd like feedback on any ways that I can improve it, as I am just a graduate.
Filtering conversation by filter type and value
java
I want to mention one thing especially: you're using magic strings to choose your filter types. Instead of doing that (because it's brittle), you are probably better off with an enum:

public enum FilterType {
    USER,
    WORD,
    HIDE_WORD
}

Your code would have to adjust a little, but to give a short look into the filter method:

private void filter(Filter filter, Conversation conversation, String outputFilePath) {
    switch (filter.filterType) {
        case USER:
            conversation = Filter.filterByUser(conversation, filter.argumentValue);
            writeConversation(conversation, outputFilePath);
            break;
        case WORD:
            conversation = Filter.filterByWord(conversation, filter.argumentValue);
            writeConversation(conversation, outputFilePath);
            break;
        // ...

This exposes another small improvement possibility in your code. In every case you call this.writeConversation with exactly the same arguments, so you can move that call outside the switch block:

switch (filter.filterType) {
    // ...
}
this.writeConversation(conversation, outputFilePath);

One last thing I want to recommend is using Path instead of String to refer to outputFilePath. This makes it blatantly obvious that you're actually referring to a file. Strings are... not paths.
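Pulling the review's suggestions together, a self-contained sketch might look like the following. Message, Conversation and writeConversation are reduced to minimal stand-ins for the question's classes, everything else is illustrative rather than a drop-in replacement, and Java 11+ is assumed for Path.of and List.of:

import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class FilterSketch {

    // Minimal stand-ins for the question's Message and Conversation types.
    static class Message {
        String senderId;
        String content;
        Message(String senderId, String content) { this.senderId = senderId; this.content = content; }
    }

    static class Conversation {
        String name;
        List<Message> messages;
        Conversation(String name, List<Message> messages) { this.name = name; this.messages = messages; }
    }

    enum FilterType { USER, WORD, HIDE_WORD }

    static Conversation filterByUser(Conversation c, String user) {
        List<Message> kept = new ArrayList<>();
        for (Message m : c.messages) {
            if (m.senderId.equals(user)) {
                kept.add(m);
            }
        }
        return new Conversation(c.name, kept);
    }

    static Conversation filterByWord(Conversation c, String word) {
        List<Message> kept = new ArrayList<>();
        for (Message m : c.messages) {
            if (m.content.contains(word)) {
                kept.add(m);
            }
        }
        return new Conversation(c.name, kept);
    }

    // The switch only selects the transformation; writing happens once, afterwards.
    static void filter(FilterType type, String argumentValue, Conversation conversation, Path outputFilePath) {
        switch (type) {
            case USER:
                conversation = filterByUser(conversation, argumentValue);
                break;
            case WORD:
                conversation = filterByWord(conversation, argumentValue);
                break;
            default:
                break;
        }
        writeConversation(conversation, outputFilePath);
    }

    // Placeholder for the question's writeConversation method.
    static void writeConversation(Conversation c, Path out) {
        System.out.println("writing " + c.messages.size() + " messages to " + out);
    }

    public static void main(String[] args) {
        Conversation demo = new Conversation("demo", new ArrayList<>(List.of(
                new Message("alice", "hello world"),
                new Message("bob", "hello alice"))));
        filter(FilterType.USER, "alice", demo, Path.of("out.txt"));
    }
}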
_unix.256607
I've seen several answers hinting at creating a partition in memory, copying the contents of an SD card into that partition, and then booting an operating system (Linux) from that memory partition. What boot loader would I use for something like this, and where can I find documentation on setting it up?
Load system from SD Card into Memory and then Boot from Memory
boot loader;sd card
The bootloader is not involved at all; this task is usually performed by the Linux kernel after it gets loaded into memory from the SD card by a bootloader which is itself located on the SD card. The modern way of booting into memory requires you to write a custom initramfs script that detects the media Linux is booted from (since bootloaders do not provide such useful information, although some of them can certainly detect the media they boot from), opens its filesystem in read-only mode, allocates tmpfs space for the future root filesystem, copies everything from the media to it, and then just switches root and executes /sbin/init from there. You can find a good example here - a script which detects where to find the media to copy from - and you will need to create the initramfs image, usually by hand; see this script for some key instructions. If you do not know how initramfs works, you should check out some good background material first: consider reading Documentation/filesystems/ramfs-rootfs-initramfs.txt as well as Linux From Scratch - About initramfs, and google "linux initramfs".
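As a rough illustration of the copy-to-tmpfs step described above, a busybox-based initramfs /init might contain something along these lines. The device names, tmpfs size and use of busybox applets are assumptions, and a real script also needs to load drivers and handle errors:

#!/bin/sh
# Sketch of an initramfs /init that copies the SD card root into RAM.
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev

mkdir -p /mnt/sdcard /mnt/ram
mount -o ro /dev/mmcblk0p2 /mnt/sdcard      # assumed root partition on the SD card
mount -t tmpfs -o size=1G tmpfs /mnt/ram    # RAM-backed root; size is only an example
cp -a /mnt/sdcard/. /mnt/ram/               # copy the whole root filesystem into tmpfs
umount /mnt/sdcard

exec switch_root /mnt/ram /sbin/init        # hand over to the real init, now running from RAM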
_unix.78987
I have a bash script that should return the XML response for AWS EC2 regions, but I am getting an XML error response:

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

Is the method for generating the signature for the EC2 web query correct? Here's the code:

#!/bin/bash
dt=$(date +%FT%TZ | sed 's/:/%3A/g')
echo $dt
q="GET
ec2.amazonaws.com
/
Action=DescribeRegions&AWSAccessKeyId=<aws access key>&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=$dt&Version=2013-02-01"
sig=$(echo -en "$q" | openssl dgst -sha256 -hmac "<aws secret key>" -binary | openssl enc -base64)
echo "the signature is $sig"
curl --get --data-urlencode "DATA" "https://ec2.amazonaws.com/?Action=DescribeRegions&AWSAccessKeyId=<aws access key>&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=$dt&Version=2013-02-01&Signature=$sig"
echo -e "\n\n finished"

P.S.: the AWS access key and AWS secret key are removed for security reasons; they work very well.
AWS XML response error signature does not match
bash;amazon ec2
null
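For what it's worth, with AWS Signature Version 2 the string to sign must consist of the HTTP verb, the host, the path and the query parameters sorted by byte order, each part on its own line, and the resulting base64 signature then has to be URL-encoded when appended to the request. A sketch of that shape, reusing the question's placeholders (untested, shown only to illustrate the canonical-string layout):

# Sketch only: SigV2 string-to-sign with parameters sorted by byte order.
# <aws access key>/<aws secret key> are placeholders as in the question.
dt=$(date -u +%FT%TZ | sed 's/:/%3A/g')
params="AWSAccessKeyId=<aws access key>&Action=DescribeRegions&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=$dt&Version=2013-02-01"
string_to_sign="GET
ec2.amazonaws.com
/
$params"
sig=$(printf '%s' "$string_to_sign" | openssl dgst -sha256 -hmac "<aws secret key>" -binary | openssl enc -base64)
# --data-urlencode takes care of encoding the +, / and = characters in the signature
curl --get "https://ec2.amazonaws.com/?$params" --data-urlencode "Signature=$sig"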
_unix.131004
I want to change my old FC4 client to a new Fedora 19 client, but I need to keep all the same OpenVPN files and the same IP address. Can I use the scp command to copy all of this configuration? Thanks!
Change OpenVPN client, can I use the same server configurations using scp?
scp;openvpn;fedora
Normally it should work, but since the version of the OpenVPN client on Fedora Core 4 is pretty old, you might encounter some inconsistencies regarding option names and usage. Yes, scp can be used to copy the OpenVPN client configs and certificates to the new Fedora 19 client.
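For example, assuming the configuration lives in the usual /etc/openvpn directory on both machines (adjust the paths and hostname if your setup differs), the copy could look like this:

# Run on the new Fedora 19 machine; pulls configs, keys and certificates
# from the old FC4 client while preserving permissions and timestamps.
scp -rp root@old-fc4-client:/etc/openvpn/ /etc/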
_unix.110579
When I look at the properties of an image, I can see the date the photo was taken under "Date Taken". When I edit the images (with a proprietary program), this data gets lost. How can I rename the image files before editing to include this date (preferably in ISO format, for sorting by name)?
renaming images to include creation date in name
rename;images
You can do this with exiftool. From the man page:

exiftool '-FileName<CreateDate' -d %Y%m%d_%H%M%S%%-c.%%e dir

    Rename all images in dir according to the CreateDate date and time, adding a copy number with leading '-' if the file already exists (%-c), and preserving the original file extension (%e). Note the extra '%' necessary to escape the filename codes (%c and %e) in the date format string.

The example format should get you ISO format filenames. Include the time to make sure you can handle multiple images per day.
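If your camera fills in DateTimeOriginal rather than CreateDate (the "Date Taken" property usually maps to DateTimeOriginal), the same approach works; the dash-separated layout below is just one possible ISO-flavoured format:

exiftool '-FileName<DateTimeOriginal' -d %Y-%m-%d_%H%M%S%%-c.%%e dir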
_unix.231778
My iptables -L:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             nginx                tcp dpt:http
ACCEPT     tcp  --  anywhere             nginx                tcp dpt:https

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc
ACCEPT     tcp  --  anywhere             anywhere

Also:

$ cat /proc/sys/net/ipv4/ip_forward
1

I have several VMs in 192.168.122.0/24; one of them is the nginx VM receiving ports 80 and 443. All networking works properly except when the VMs request :80 and :443 from each other, even when doing so via an FQDN (which should land on the nginx VM).
iptables forward from host to guest interferes with vm-vm communication
iptables;kvm
null
_softwareengineering.271776
Since quite a few dynamic programming languages have duck typing, and they can also open up and modify class or instance methods at any time (like Ruby and Python):

Question 1) What's the need for a class in a dynamic language? Why are such languages designed to use a class as some kind of template, instead of doing it the prototype way and just using an object?

Also, JavaScript is prototype-based, but CoffeeScript (the enhanced version of JavaScript) chooses the class-based way. The same goes for Lua (prototype-based) and MoonScript (class-based). In addition, there's class in ES 6. So:

Question 2) Does this suggest that if you try to improve a prototype-based language, among other things, you should change it to class-based? If not, why is it designed that way?
Why would many duck-typed dynamic programming languages use a class-based approach instead of prototype-based OOP?
object oriented;programming languages;duck typing;dynamic languages
Question 1) What's the need for a class in a dynamic language? Why is the language designed that way, to use a class as some kind of template instead of doing it the prototype way and just using an object?

The very first OO language (even though it wasn't called OO), Simula, didn't have inheritance. Inheritance was added in Simula-67, and it was based on classes.

Around the same time, Alan Kay started working on his idea of a new programming paradigm, which he later named object-orientation. He really liked inheritance and wanted to have it in his language, but he also really disliked classes. However, he couldn't come up with a way to have inheritance without classes, and so he decided that he disliked classes more than he liked inheritance, and designed the first version of Smalltalk, Smalltalk-72, without classes and thus without inheritance.

A couple of months later, Dan Ingalls came up with a design for classes where the classes themselves were objects, namely instances of metaclasses. Alan Kay found this design slightly less appalling than the older ones, so Smalltalk-74 was designed with classes and with inheritance based on classes.

After Smalltalk-74, Alan Kay felt that Smalltalk was moving in the wrong direction and didn't actually represent what OO was all about, and he proposed that the team abandon Smalltalk and start fresh, but he was outvoted. Thus followed Smalltalk-76, Smalltalk-80 (the first version of Smalltalk to be released to researchers), and finally Smalltalk-80 V2.0 (the first version to be released commercially, and the version which became the basis for ANSI Smalltalk).

Since Simula-67 and Smalltalk-80 are considered the grandparents of all OO languages, almost all languages that followed blindly copied the design of classes and inheritance based on classes. A couple of years later, when other ideas like inheritance based on mixins instead of classes, and delegation based on objects instead of inheritance based on classes, surfaced, class-based inheritance had already become too entrenched. Interestingly enough, Alan Kay's current language is based on prototype delegation.
_webapps.101106
We would like to know where our clicks are coming from when a customer signs up. How can we do this?
How can we get the referring URL value in a Cognito form?
cognito forms
null
_opensource.115
The prime example is io.js as a fork of node.js, where the main difference (aside from the obvious advancement in versioning and being more up to date) is that io.js has an open governance model, which node.js does not. What does that mean exactly? How are the two different?
What is the Open Governance Model? How is it different from other models?
untagged
The reason that io.js forked from node.js in the first place was that those originally involved in the forked project wanted the community using the fork to be able to give feedback on the design and add improvements. In the words of team member Mikeal Rogers:

    We've been working with Joyent since July to try and move the project to a structure where the contributors and community can step in and effectively solve the problems facing Node.

    . . .

    In my opinion, the best way to move Node forward is to get the community organized around solving problems and putting out releases, so that's what we're doing.

Open-source governance has been applied to a wide range of things, just as the open source movement has spread to many fields.
_softwareengineering.288100
I am using wxHaskell to create a simple GUI that has typical components like buttons, panels, etc. When some of these components perform an action (like a callback), the overall status of the application can change. To keep the status, I am using an IORef as a sort of pointer to a generic data structure holding all the properties of the status itself. However, using an IORef as a sort of top-level mutable state is generally not considered a good solution, based on https://wiki.haskell.org/Top_level_mutable_state. It might be better to use the State/StateT monad. wxHaskell is a binding to an object-oriented library (wxWidgets), and using a State monad is hard unless it's hooked into the main event loop thread. What is the best way to manage a generic GUI status with Haskell in a functional programming way?
How to manage the state in a GUI app with Haskell
haskell;monad
null
_reverseengineering.6104
Being rather new to the concept of RE, I wanted to try to take a look at the assembly code in a DLL that I know exports some functions. First, I used this tool - http://www.nirsoft.net/utils/dll_export_viewer.html - to obtain a list of exports within said DLL. These are some of the functions:

GI_Call              0x100590a7  0x000590a7  2 (0x2)  mydll.dll  I:\test\mydll.dll  Exported Function
GI_CleanReturnStack  0x10058eae  0x00058eae  3 (0x3)  mydll.dll  I:\test\mydll.dll  Exported Function
GI_Cmd_Argc          0x10058bd4  0x00058bd4  4 (0x4)  mydll.dll  I:\test\mydll.dll  Exported Function
GI_Cmd_Argc_sv       0x10059593  0x00059593  5 (0x5)  mydll.dll  I:\test\mydll.dll  Exported Function

When I load the DLL up in OllyDbg, however, and browse to any of these addresses, I get instructions that don't really resemble the beginning of a function, for example for GI_Call:

100590A7   10E9        ADC CL,CH
100590A9   CE          INTO
100590AA   FC          CLD
100590AB   FFFF        ???                          ; Unknown command
100590AD   FF75 10     PUSH DWORD PTR SS:[EBP+10]
100590B0   8D45 FC     LEA EAX,DWORD PTR SS:[EBP-4]
100590B3   50          PUSH EAX
100590B4   57          PUSH EDI

What's even more puzzling is that once I scroll up/down, the code actually changes - there's no

100590A7   10E9        ADC CL,CH

anymore; it changes to a completely different instruction, and that address is gone.

Am I doing something wrong? Or is the DLL possibly encrypted? Though if it is, how could DLL Export Viewer dump the exports so easily?
Viewing an exported DLL function in OllyDbg - garbage code
disassembly;ollydbg;dll;functions
Your library might get loaded at a location that's completely different from the one it wants to be loaded at (i.e. the address in the header), due to ASLR.

Also, when loading a DLL, OllyDbg doesn't load the DLL directly; instead, it uses loaddll.exe. Which means it starts that executable, but the breakpoint it sets is hit before loaddll has a chance to, well, load the DLL.

Try the following:

Set a breakpoint on LoadLibraryA: right click in the CPU window - Go To - Expression - LoadLibraryA - press F2;
Repeat the same with LoadLibraryW (the A version should be sufficient, just to make sure);
Run the program;
Once your breakpoint is hit, press CTRL-F9 (execute till return);
If your DLL depends on others, you'll hit one of your breakpoints again; else you'll hit the breakpoint at the RET instruction. Don't worry, in either case your DLL will be loaded;
Use View -> Memory or View -> Executable Modules to learn where your DLL was actually loaded. This may be the same address that DLL Export Viewer shows you, but often it will be different (address conflicts between two DLLs, in which case one has to be relocated, or ASLR, as above);
Only if the addresses match: Right Click -> Go To -> Expression -> 0x12345678 or whatever address you want to see;
No matter if they match or not: Right Click -> Go To -> Expression -> (function name) will scroll to that function.

The reason for your 'disappearing' instruction is that it's in the middle of another instruction. Consider this function start:

10001280 >  53            PUSH EBX
10001281    56            PUSH ESI
10001282    57            PUSH EDI
10001283    8B7C24 10     MOV EDI,DWORD PTR SS:[ESP+10]
10001287    8BF1          MOV ESI,ECX
10001289    3BF7          CMP ESI,EDI
1000128B    0F94C3        SETE BL
1000128E    84DB          TEST BL,BL
10001290    75 32         JNZ SHORT 100012C4

The byte at 10001284, 0x7C, is part of the instruction at 10001283. But if you disassemble from 10001284:

10001284    7C 24           JL SHORT 100012AA
10001286    108B F13BF70F   ADC BYTE PTR DS:[EBX+FF73BF1],CL
1000128C    94              XCHG EAX,ESP
1000128D    C3              RETN
1000128E    84DB            TEST BL,BL
10001290    75 32           JNZ SHORT 100012C4

The wrong bytes get interpreted as instructions. Once you scroll up a few rows, OllyDbg syncs correctly again - and shows the 'real' instructions.
_unix.73041
I am trying to set up some automation scripts to set up a Linux environment. I would like to enable remote desktop sharing without the user having to actually use the GUI to do so. My plan is to write a script that, if possible, edits some file to do this automatically. I am using Fedora 16 with GNOME. I want to achieve the following: http://docs.fedoraproject.org/en-US/Fedora/13/html/User_Guide/chap-User_Guide-Sharing_your_desktop.html Any tips on what file to edit would be greatly appreciated.
Enable remote desktop for Gnome from command line?
fedora;gnome;vnc;remote desktop
If I understand you right, you want to share GNOME (or another environment) remotely as it is. Then the easiest way to achieve this is to use x11vnc. It shares the real X11 server as it is after the user has logged in:

x11vnc -display :0

Or, if you want the VNC server to run after login, you can automate it with this script:

#!/bin/bash
/usr/bin/x11vnc -nap -wait 50 -noxdamage -passwd PASSWORD -display :0 -forever -o /var/log/x11vnc.log -bg

You can place this script in the startup programs in GNOME, so that it is run automatically when the user logs in. Please note that this script is not secure, as the PASSWORD value is clearly visible to anyone who can read the file, and anyone knowing the password can connect to the VNC session (the password in this case is the up-to-8-character word asked for when you connect remotely). If you want a more secure connection, search for how to do VNC SSH tunnelling.
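Since the goal is to enable this without touching the GUI, one scriptable way to register the command as a startup program is to drop a .desktop file into the user's autostart directory. The paths below are illustrative, USER is a placeholder for the real login name, and -rfbauth with a hashed password file is used instead of a plain-text -passwd:

# Run from your provisioning script as the target user.
mkdir -p ~/.config/autostart ~/.vnc

# Store a hashed VNC password once, non-interactively.
x11vnc -storepasswd 'PASSWORD' ~/.vnc/passwd

cat > ~/.config/autostart/x11vnc.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=x11vnc
Exec=/usr/bin/x11vnc -nap -wait 50 -noxdamage -rfbauth /home/USER/.vnc/passwd -display :0 -forever -o /var/log/x11vnc.log -bg
X-GNOME-Autostart-enabled=true
EOF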
_webmaster.18266
I am designing a website for a photographer. Obviously I compressed the images so that they would load faster and not use up all of the visitors' data cap. (There are quite a few GBs of pictures, so that would just about take all of my bandwidth for a month if I were to browse all the albums.) The photographer was worried that having pixelated or low-resolution images on her website would create an unprofessional image for herself.

How much quality can I sacrifice against professionalism? Does it indeed reflect on the photographer if the pictures are in low resolution, or will people simply understand the nature of the web?
Photography website: How much quality to sacrifice for file size?
load time;photo
In my experience, this is more a balance between load speed and photo quality. Given that I'm a professional photographer myself, I understand this difficult challenge. I feel that the internet has come a long way, and there are now some great techniques and tools that you can utilize to avoid these issues. None of these techniques is specific to a photography website.

Regarding your first question: "How much quality can I sacrifice against professionalism?" In my experience, you can generally find a balance by playing with the settings of a JPEG image. Quality settings of 70-85 will be sufficient. Additionally, using 72 DPI is sufficient to minimize file size.

Your second question: "Does it indeed reflect on the photographer if the pictures are in low resolution, or will people simply understand the nature of the web?" Yes, any pixelation and/or poor quality of an image will reflect poorly if the photos aren't very high quality. A photographer's work is solely dependent on visual quality and attention to visual detail.

Techniques to consider

I would avoid extreme compression if at all possible. The internet provides lots of great techniques to improve load:

Provide an excellent user experience: Giving your visitors rich feedback will reduce frustration and will keep them at your website longer. Consider UI-related techniques such as utilizing a loader; interactive image zoom also makes a lot of sense.

Dynamic image caching: Utilize website software that generates different sizes of images for the user ahead of time. If you're using a LAMP stack, PHP offers two great libraries (GD & ImageMagick), and there are lots of great scripts that can do this for you.

Utilize a CDN: Consider architectural (server) setups. Loading static images from a CDN or a different network altogether will give you a quicker load time.

Consider JavaScript lazy loading: There are lots of JavaScript plugins that provide lazy-loading of images; they wait to load an image until it's absolutely necessary.

Utilize an image-resizing service: There's a great thread right below this one about this very topic! Smush.It or similar lossless image shrinking with an API.

Finally, review the YUI performance guide: Yahoo offers an awesome guide on performance. Some of these tips are mentioned above already, but this guide is pretty comprehensive.
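To illustrate the "generate different sizes ahead of time" idea, a minimal PHP GD sketch could look like the following. The file names and the 1200 px / quality 80 numbers are arbitrary choices, and a real script would add error handling, EXIF rotation and caching:

<?php
// Resize a JPEG to a maximum width and re-save it at a web-friendly quality.
function makeWebVersion($srcPath, $dstPath, $maxWidth = 1200, $quality = 80) {
    $src = imagecreatefromjpeg($srcPath);
    $w = imagesx($src);
    $h = imagesy($src);

    if ($w > $maxWidth) {
        $newW = $maxWidth;
        $newH = (int) round($h * $maxWidth / $w);
        $dst = imagecreatetruecolor($newW, $newH);
        imagecopyresampled($dst, $src, 0, 0, 0, 0, $newW, $newH, $w, $h);
    } else {
        $dst = $src; // already small enough, just recompress
    }

    imagejpeg($dst, $dstPath, $quality);
    imagedestroy($dst);
}

makeWebVersion('gallery/original.jpg', 'gallery/web/original.jpg');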
_codereview.153321
I'm practising with graphs and trying to solve the problem of calculating the minimum number of flight segments, applying breadth-first search. The code is working, but I think it's not clean. Can anyone suggest how I can refactor it to make it cleaner?

import sys


def distance(adj, s, t):
    n = len(adj)
    queue = []
    visited = set()
    path = []
    queue.append([s])
    dist = 0
    while (len(queue) > 0):
        path = queue.pop(0)
        last_vertex = path[-1]
        if last_vertex == t:
            # print(path)
            dist = len(path) - 1
        elif last_vertex not in visited:
            for w in adj[last_vertex]:
                new_path = list(path)
                new_path.append((w))
                queue.append(new_path)
            visited.add(last_vertex)
    if dist != 0:
        return dist
    else:
        return -1


if __name__ == '__main__':
    input = sys.stdin.read()
    data = list(map(int, input.split()))
    n, m = data[0:2]
    data = data[2:]
    edges = list(zip(data[0:(2 * m):2], data[1:(2 * m):2]))
    adj = [[] for _ in range(n)]
    for (a, b) in edges:
        adj[a - 1].append(b - 1)
        adj[b - 1].append(a - 1)
    s, t = data[2 * m] - 1, data[2 * m + 1] - 1
    print(distance(adj, s, t))

I represent the graph in the following way. The first line contains non-negative integers n and m, the number of vertices and the number of edges respectively. The vertices are always numbered from 1 to n. Each of the following m lines defines an edge in the format "u v", where 1 <= u, v <= n are the endpoints of the edge:

4 5
2 1
4 3
1 4
2 4
3 2
1 3

The last two digits stand for the two vertices we need to find a path between.
Minimum number of flight segments using breadth-first search
python;graph;breadth first search
null
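One possible refactoring of the search described in the question - a sketch only, keeping the same adjacency-list input - is to store distances instead of whole paths, use collections.deque for O(1) pops, and mark vertices as visited when they are enqueued so the loop can stop the first time the target is reached:

from collections import deque


def distance(adj, s, t):
    """Return the minimum number of edges between s and t, or -1 if unreachable."""
    if s == t:
        return 0
    dist = [-1] * len(adj)
    dist[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                if v == t:
                    return dist[v]
                queue.append(v)
    return -1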
_reverseengineering.15306
A few days ago I asked here about the IOCTL codes necessary for using the functionality of a Windows driver from a user-mode app. When I want to read some MSR, DeviceIoControl works well and returns a nonzero value. But a write attempt causes a BSoD with code 0x0000003B (executing a routine that transitions from non-privileged code to privileged code). Also, reading 100 bytes from 0x00000000 causes a BSoD with code 0x00000050 (invalid system memory has been referenced). I use RWEverything to view the memory dump and MSR state, so the MSR numbers are correct and the memory at 0x00000000 is not empty. Here is the crash dump displayed in WinDbg:

PAGE_FAULT_IN_NONPAGED_AREA (50)
Invalid system memory was referenced. This cannot be protected by try-except, it must be protected by a Probe. Typically the address is just plain bad or it is pointing at freed memory.
Arguments:
Arg1: fffff80802a33f28, memory referenced.
Arg2: 0000000000000000, value 0 = read operation, 1 = write operation.
Arg3: fffff800028a8c62, If non-zero, the instruction address which referenced the bad memory address.
Arg4: 0000000000000005, (reserved)

Debugging Details:
------------------
Page ec4b not present in the dump file. Type .hh dbgerr004 for details

READ_ADDRESS: fffff80802a33f28
FAULTING_IP: nt!MiInsertCachedPte+82
fffff800`028a8c62 48    dec eax
MM_INTERNAL_CODE: 5
DEFAULT_BUCKET_ID: WIN7_DRIVER_FAULT
BUGCHECK_STR: 0x50
CURRENT_IRQL: 0
ANALYSIS_VERSION: 6.3.9600.17336 (debuggers(dbg).150226-1500) x86fre
LAST_CONTROL_TRANSFER: from 0000000000000000 to 0000000000000000
STACK_TEXT: 00000000 00000000 00000000 00000000 00000000 0x0
STACK_COMMAND: .bugcheck ; kb
FOLLOWUP_IP: nt!MiInsertCachedPte+82
fffff800`028a8c62 48    dec eax
SYMBOL_NAME: nt!MiInsertCachedPte+82
FOLLOWUP_NAME: MachineOwner
DEBUG_FLR_IMAGE_TIMESTAMP: 0
IMAGE_VERSION: 6.1.7601.17514
IMAGE_NAME: Unknown_Image
BUCKET_ID: INVALID_KERNEL_CONTEXT
MODULE_NAME: Unknown_Module
FAILURE_BUCKET_ID: INVALID_KERNEL_CONTEXT
ANALYSIS_SOURCE: KM
FAILURE_ID_HASH_STRING: km:invalid_kernel_context
FAILURE_ID_HASH: {ef5f68ed-c19c-e34b-48ec-8a37cd6f3937}

Please point out my stupid mistakes and explain how to resolve the described problems properly. Thank you for any answers.
BSOD (3b) DeviceIOControl and x64 Windows driver
windows;driver
null
_unix.44757
These two questions are driving me crazy, and I don't have much expertise with ssh (but I suspect it only has to do with redirection). The questions are:

You want to pass multiple lines of input from a file called abc.txt to the ssh command. Complete the command required to do this:

$ssh _ _ abc.txt

(that is only two characters; a detailed explanation would be helpful)

AND

You want to pass multiple lines of input from a file called Remote.txt to ssh, but all leading tabs in the subsequent input should be stripped. Complete the command to do this:

$ssh _ _ _ Remote.txt
ssh input from text file
bash;ssh;shell script;io redirection
To pass input from a local file to ssh, you should use input redirection like this:

ssh user@server < abc.txt

Are you sure the _ must really be a single character? In that case this is possible if x is configured in ~/.ssh/config as an alias for some user@host:

ssh x < abc.txt

I cannot answer Q2 because I don't really understand it. I suppose Remote.txt is a file on the remote side, in which case the command should be of the form:

ssh user@server bash < Remote.txt

...but this does not fit the problem description with _ _ _, and of course to remove the leading tabs something more would be necessary, like:

ssh user@server bash < <(sed -e 's/^[ ]*//' Remote.txt)

In other words, this does NOT answer the second question. I hope it helps you understand redirection when used with ssh anyway.

EDIT

After reading the Q another time: since it says "passing multiple lines of input to ssh", this suggests that we have to use redirection to ssh again, in which case the file must be local:

ssh user@server < <(sed 's/^[ ]*//' Remote.txt)

But again, I don't think this qualifies as an answer in the form ssh _ _ _ Remote.txt
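For what the second exercise is probably getting at: in POSIX shells the <<- form of a here-document strips leading tab characters from the input lines before they reach the command, which matches the "leading tabs should be stripped" wording. Whether that is the intended answer is an assumption; the delimiter word and commands below are arbitrary, and the indented lines must be indented with real tab characters:

ssh user@server <<- 'END'
	hostname
	uptime
END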
_unix.377715
I have Oracle 11.2.0.4 RAC on OEL 6.4 at my workplace. I created one more single-instance database, named preprod, for our developers. The system takes an RMAN backup every night and copies the backup pieces to the preprod side, where a restore/recover process runs. Sometimes a backup piece does not arrive completely, probably because of a network blackout, and then preprod can't restore and recover, so on those days preprod stays in a closed status. My question is: how can I be sure my RMAN backup pieces reach preprod intact? I mean, how can I guarantee the backup pieces are copied completely, byte for byte? Is there any control mechanism for this operation? For example, something in the copy process that checks the pieces byte for byte, and if anything is wrong, the copy starts over until everything has reached preprod.
Oracle duplicate database copied backup checker
linux;oracle linux;oracle database;oracle
null
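One common, Oracle-agnostic way to get that guarantee at the file-transfer level is to copy the backup pieces with rsync and verify them with checksums. The paths, host name and .bkp extension below are placeholders:

# Copy new backup pieces; --partial resumes interrupted transfers,
# --checksum makes rsync verify file contents rather than just size/mtime.
rsync -av --partial --checksum /backup/rman/ oracle@preprod:/backup/rman/

# Independently verify: generate checksums on the source...
cd /backup/rman && sha256sum *.bkp > /tmp/rman.sha256
# ...and check them on the destination before starting the restore/recover.
scp /tmp/rman.sha256 oracle@preprod:/backup/rman/
ssh oracle@preprod 'cd /backup/rman && sha256sum -c rman.sha256'

RMAN itself can also validate the pieces before a real restore is attempted (for example with RESTORE DATABASE VALIDATE), which catches corrupt or truncated files at the Oracle level.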
_softwareengineering.279998
I have been basically out of the programming world for about 10 years, with only a bit of dabbling here and there: small Java utilities, one large Access database I wrote for someone, and some VBA macros. Now I've come back with a project I would like to work on, and I'm very confused about the whole new web-programming scene. I am trying to write a web-based multi-user database, and I'm not so familiar with the technologies used.

So I figured that I should go with what I'm familiar with. I know a bit of PHP and lots of SQL, so that's a good start for the backend. And I hate CSS and JavaScript, so I'll try to use a standard Java desktop application for the front end (which is anyway more appropriate for this database than a web-based front end). I don't know C# at all and am not willing to learn it just for this project, though I am curious to learn it eventually.

I've spent many hours on it, and I've familiarized myself with Jackson for JSON processing, and even made a fancy command queue that sends requests to the server and receives responses on a separate thread and deals nicely with errors.

But I'm finding the whole process quite tedious. For every little table I make in the database, I need to make a Swing JFrame, then I have to connect the user interface with the underlying Java class that holds the data, then I have to make the requests that put the data into the appropriate JSON, which often involves a little fiddling with Jackson annotations, then I have to write the PHP that takes the data and turns it into an SQL query. And the same for the other direction: I have to make a request to the server that asks for the data, I need to write PHP to make the appropriate SELECT queries and pack the result into JSON (at least json_encode does that quite nicely, though it's complicated if there are one-to-many relationships involved), get the information back into my Java class, and then get that displayed in the GUI. And all along the way data has to be validated and errors dealt with.

I feel like this is way too much work for such a simple thing. I'm used to Access, where you just make a query, and then displaying the results and allowing the user to edit them is just a matter of running a wizard and moving the controls around a bit. And I feel like a lot of what I've done already with my command-processing queue is probably re-inventing the wheel - every web-based application needs something like that.

Am I missing something?
Java front-end, PHP/MySQL back end methodology
java;php;database;tools;front end
Since you are not developing a website or a web application, but a desktop application which stores its data on a server, there are indeed a few layers you can skip.

A common approach in this situation is to use web services. When a service should be lightweight and interoperable (that is, usable with ease from virtually every programming language), the service can take the form of REST. The drawback is that not every server-side language makes it possible to create REST services painlessly, without writing too much code.

Another form a web service can take is SOAP. In .NET Framework, this was a de facto standard option for web services for a long time, although recently there has been a major shift towards REST; for instance, SharePoint itself relies more and more on REST instead of the previously ubiquitous WCF services. I could imagine that the situation is similar with Java and PHP.

The benefit of SOAP is that you probably don't have to write any code at all. In .NET Framework, you declare your web service interfaces server-side and let the framework generate the WSDL (the detailed schema of the service) and process the requests and the responses. Then you import the service client-side, without the need to write a single line of code. The drawback of SOAP is that it is usually much heavier compared to REST. Responses are also usually larger, which impacts the bandwidth.

In your case, consider the web service as an interface to your database. Since you don't want anyone to be able to select everything from your users' table or delete all your tables, the web service is a way to decide who can access what. This is different from simply giving access to an SQL database with a fine-grained configuration of permissions, so that a given user can access and do only a very limited set of things. With a web service, you can also sanitize inputs, control how many resources are accessed by a user, use a cache to boost performance, access resources outside the database, etc.

Of course, you can even skip the step of writing the service interface if the only thing you need is to bind the service to the database. In .NET Framework, WCF Data Services are used for that (the project was originally called Microsoft Astoria; search for this name if you want articles which are not too technical). I'm sure Java and PHP have something similar.
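As a flavour of what the REST option can look like on the PHP side, here is a deliberately minimal endpoint sketch. The table, column and parameter names are made up, and a real service would add authentication, validation and proper routing:

<?php
// GET /customers.php?id=42  ->  one customer as JSON
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret');
$stmt = $pdo->prepare('SELECT id, name, email FROM customers WHERE id = ?');
$stmt->execute([(int)($_GET['id'] ?? 0)]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row === false) {
    http_response_code(404);
    echo json_encode(['error' => 'not found']);
} else {
    echo json_encode($row);
}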
_webapps.88462
Is it possible to post a photo on Facebook that links to a website? Can this only be done with Facebook ads?
How to post photo with website clickthrough on Facebook?
facebook
null
_webapps.88937
So I have this spreadsheet I use to control a content funnel. I've come up with a few formulas to use something other than a pivot table. But in order to get the trends and alarms I'm looking for, I need to start recording this data daily... automatically.

This is the spreadsheet:

I'm looking for a script that would add a column between columns B and C, setting the current date in cell C1, and filling each cell below with the value corresponding to each formula in each cell. Here's the list of formulas I'm using:

C2: =counta(Check!$J:$J)
C3: =COUNTIF(Check!$L:$L,Validation!$A$6)
C4: =COUNTIF(Check!$L:$L,Validation!$A$7)
C5: =ARRAYFORMULA(countif(Check!$L:$L&Check!$O:$O,Validation!$A$5&) )
C6: =countif(Check!$O:$O,Validation!$A$1)
C7: =ARRAYFORMULA(countif(Check!L:L&Check!O:O,Validation!A5&Validation!A2) )
C8: =ARRAYFORMULA(countif(Check!O:O&Check!Q:Q,Validation!A1&) )
C9: =countif(Check!Q:Q,Validation!A1)
C10: =countif(Check!Q:Q,Validation!A2)
C11: =countif(Check!T:T,Validation!A1)

So far I've only achieved this much:

function recordHistory() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName("Approval Funnel");
  var source = sheet.getRange("C2:C11");
  var values = source.getValues();
  values[0][0] = new Date();
  sheet.insertColumns(3);

And that's the frontier of my coding knowledge. Any ideas?
Automatically record daily values in a new column
google spreadsheets;google apps script;formulas
Your draft is pretty good, but there is a design flaw: if you insert the values between B and C, then the column with formulas will become D. So, the next time the script tries to get data, it will be looking in the wrong place. Simply put, the source of the data you are recording (i.e., the column with formulas) should stay in the same place. You can put the historical data to the right of it. Like this:

function recordHistory() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName("Approval Funnel");
  var source = sheet.getRange("C2:C11");
  var values = source.getValues();
  values = [[new Date()]].concat(values);  // prepending the date to values
  sheet.insertColumnAfter(3);              // inserting AFTER column C
  SpreadsheetApp.flush();
  sheet.getRange("D1:D11").setValues(values);
}

I put SpreadsheetApp.flush(); in to make sure that the previous change (namely, inserting a column) is indeed made before the script puts the data in with setValues. Also, you had an error in values[0][0] = new Date(); -- this command would overwrite the 0th element of the array (namely, the content of C2) with the date. You wanted to prepend the date, which is what I did by creating a new array with one element, [[new Date()]], and concatenating values to it.
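Since the goal is to record the values daily and automatically, it may also be worth installing a time-driven trigger for recordHistory. A small sketch, to be run once from the script editor (the 6 a.m. hour is an arbitrary choice):

function installDailyTrigger() {
  // Runs recordHistory once a day, around 6 a.m. in the script's time zone.
  ScriptApp.newTrigger("recordHistory")
      .timeBased()
      .everyDays(1)
      .atHour(6)
      .create();
}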