id | question | title | tags | accepted_answer
---|---|---|---|---
_webmaster.23402 | So I've been assigned to take a look at our SEO (an area I have some, but not amazing, competence in), and the first thing I noticed is that our robots.txt file says the following: `# go away` `User-agent: *` `Disallow: /` Now, I'm pretty competent at reading computer code, and as far as I can tell, this says ALL spiders shouldn't look at ANYTHING in the root directory or below. Am I reading this correctly? Because that just seems insane. | Is this robots.txt file really preventing all crawling of our website? I'm trying to find out why our SEO is so poor | seo;robots.txt | Maybe someone didn't want to pay for spider traffic? Regardless, you are reading it correctly. As http://www.robotstxt.org/robotstxt.html puts it: "Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called The Robots Exclusion Protocol. It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds: `User-agent: *` `Disallow: /` The `User-agent: *` means this section applies to all robots. The `Disallow: /` tells the robot that it should not visit any pages on the site."
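If the fix is simply to allow crawling again, an empty Disallow does it; a quick shell check (hypothetical domain) confirms what crawlers actually see:

```sh
# fetch the live file exactly as a crawler would
curl -s https://www.example.com/robots.txt

# permissive replacement: an empty Disallow allows everything
printf 'User-agent: *\nDisallow:\n' > robots.txt
```

After deploying, search engines re-crawl on their own schedule, so ranking recovery is not instant.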
_unix.223843 | I know I am a noob... As far as I know, Linux can only install packages from its repository list, and there are plenty of repositories out there. After you have installed/updated the repository list, you can install the package you want. But I have no idea how to find a repository that carries the specific build of a package I want to install. For example, I want to install php55u, not php55w or whatever. Where/how can I find a repository that contains php55u so I can have yum install it? Correct me if I am wrong; an explanation for dummies would be nice :) | How to find and install a specific package by command? | centos;package management;yum;repository | null
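There is no accepted answer here, but one hedged pointer: the "u" suffix in php55u is the naming convention of the IUS repository, and yum can report which enabled repository carries a given name:

```sh
# ask every enabled repository which package provides the name
yum provides 'php55u*'

# once the repository that ships it is installed and enabled
yum install php55u
```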
_unix.153318 | I need to store a specific number of spaces in a variable. I have tried this: i=0; result=; space() { n=$1; i=0; while [[ $i != $n ]]; do result=$result ; ((i+=1)); done; }; f=first; s=last; space 5; echo $f$result$s. The result is "firstlast", but I expected 5 space characters between "first" and "last". How can I do this correctly? | Storing whitespace in a shell script variable | shell script;variable;whitespace | Use double quotes ("") in the echo command: echo "$f$result$s". This is because echo interprets the variables as arguments; with multiple arguments echo prints all of them with a single space between. See this as an example: user@host:~$ echo this    is a test prints "this is a test", while user@host:~$ echo "this    is a test" prints "this    is a test". In the first one, there are 4 arguments: execve("/bin/echo", ["echo", "this", "is", "a", "test"], [/* 21 vars */]) = 0; in the second one, it's only one: execve("/bin/echo", ["echo", "this    is a test"], [/* 21 vars */]) = 0.
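A loop-free way to build the padding, with the quoting the answer recommends, might look like this (the %*s field width works in bash's printf):

```sh
f=first
s=last
result=$(printf '%*s' 5 '')   # expands to exactly five spaces
echo "$f$result$s"            # the quotes stop word splitting from eating them
```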
_unix.267418 | I became curious when I was diffing two large (>326MB) files, and noticed that the second run took much less time than the first. This was frustrating, since I was trying to time the second run, to see how long the diff took. :)The man page doesn't mention a cache, and searching for 'diff cache' seems to flood me with results for the git diff subcommand, which is not what I'm interested in learning about. So, my question is:Why did the second run of diff largeFile1 largeFile2 take so much less time than the first? Where can I find more information? | How does the diff command handle caching? | shell;command;diff;cache | diff isn't doing any caching. The OS is. If you are using Linux, you can flush the disk buffers and cache. See How do you empty the buffers and cache on a Linux system? |
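The caching effect is easy to measure by evicting the page cache between runs (Linux, needs root); this is only a timing sketch:

```sh
time diff largeFile1 largeFile2   # first run: blocks read from disk
time diff largeFile1 largeFile2   # second run: served from the page cache

sync
echo 3 > /proc/sys/vm/drop_caches # evict page cache, dentries and inodes
time diff largeFile1 largeFile2   # slow again, confirming it was the cache
```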
_softwareengineering.340649 | A Map (or HashMap) takes constant time for insert, remove and retrieve, while all other data structures I know so far do not take constant time; their time for the above operations depends on the size of the input. So why do we ever need the other data structures? Isn't HashMap a universal data structure? | Why do we need data structures other than HashMap | data structures;hashing | A hash map has constant time for most operations only on average; worst-case complexity is much higher. A list or a tree can guarantee fast (even if not constant) time in every case, even for maliciously crafted input. Element lookup in an array is usually a single CPU instruction; a hash map adds a lot of overhead over that, even if both have constant average complexity. A hash map doesn't keep any order of elements: inserting an element at a particular position requires rebuilding the whole hash map, and iterating over elements in a particular order requires converting to a different data structure. A hash map has a much higher memory footprint than a tightly packed array or a carefully designed linked list, and it usually has much worse cache locality. Finally, not all data types are easily hashable.
_codereview.100822 | In the past I have always called the repositories directly from the controller, but that is a bad practice and now I am implementing a Business Layer to my project.Would I have two UnitOfWorks? One for the Service and then one for the repositories?BaseController:public class BaseController : Controller{ protected IUnitOfWorkService UnitOfWorkService; private readonly IEmployeeService _employeeService; protected BaseController(IUnitOfWorkService unitOfWorkService) { UnitOfWorkService = unitOfWorkService; _employeeService = unitOfWorkService.EmployeeService; }}HomeController:public class HomeController : BaseController{ private readonly IEmployeeService _employeeService; public HomeController(IUnitOfWorkService unitOfWorkService) : base(unitOfWorkService) { _employeeService = unitOfWorkService.EmployeeService; } public ActionResult Index() { var emp = _employeeService.GetEmployee(employeeId); ... } ...}EmployeeService:public class EmployeeService : IEmployeeService{ private IEmployeeRepository _employeeRepo; public EmployeeService(IUnitOfWork unitOfWork) { _employeeRepo = unitOfWork.EmployeeRepository; } public Employee GetEmployee(int employeeId) { return _employeeRepo.GetEmployee(employeeId); } ...}EmployeeRepositorypublic class EfEmployeeRepository : EfRepository, IEmployeeRepository{ public EfEmployeeRepository(SqlContext context) : base(context) { } public Employee GetEmployee(int employeeId) { var employee = Context.Employees .SingleOrDefault(e => e.EmployeeId == employeeId); return employee != null ? employee.ToDomain() : null; }}Unit of WorkIUnitOfWorkService:public interface IUnitOfWorkService : IDisposable{ IEmployeeService EmployeeService { get; }}UnitOfWorkService:public class UnitOfWorkService : IUnitOfWorkService{ private readonly IUnitOfWork _unitOfWork; private EmployeeService _employeeService; public UnitOfWorkService() { _unitOfWork = new EfUnitOfWork(); } public IEmployeeService EmployeeService { get { return _employeeService ?? (_employeeService = new EmployeeService(_unitOfWork)); } } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (disposing) _unitOfWork.Dispose(); }}IUnitOfWork:public interface IUnitOfWork : IDisposable{ IEmployeeRepository EmployeeRepository { get; }}UnitOfWork:public class EfUnitOfWork : IUnitOfWork{ private readonly SqlContext _context; private EfEmployeeRepository _employeeRepository; public IEmployeeRepository EmployeeRepository { get { return _employeeRepository ?? 
(_employeeRepository = new EfEmployeeRepository(_context)); } } public EfUnitOfWork() { _context = new SqlContext(); } public void Save() { _context.SaveChanges(); } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (disposing) _context.Dispose(); }}Register Dependenciesbuilder.RegisterType<EfUnitOfWork>().As<IUnitOfWork>().InstancePerRequest();builder.RegisterType<UnitOfWorkService>().As<IUnitOfWorkService>().InstancePerRequest(); | Configuring MVC 5 project with service layer and DI and UOW | c#;dependency injection;asp.net mvc 5;autofac | public class BaseController : Controller{ protected IUnitOfWorkService UnitOfWorkService; private readonly IEmployeeService _employeeService; protected BaseController(IUnitOfWorkService unitOfWorkService) { UnitOfWorkService = unitOfWorkService; _employeeService = unitOfWorkService.EmployeeService; }}public class HomeController : BaseController{ private readonly IEmployeeService _employeeService; public HomeController(IUnitOfWorkService unitOfWorkService) : base(unitOfWorkService) { _employeeService = unitOfWorkService.EmployeeService; } public ActionResult Index() { var emp = _employeeService.GetEmployee(employeeId); ... } ...} I don't see a reason here to have private readonly IEmployeeService _employeeService; in the HomeController. It would be better to make the _employeeService variable of the BaseController protected so it can be used by the objects which inherits the BaseController. Why is the protected IUnitOfWorkService UnitOfWorkService; not readonly ? Style Using abbreviations or naming variables is bad practice. Does it hurt to name the variable IEmployeeRepository _employeeRepo like IEmployeeRepository _employeeRepository ? It does not hurt to use braces {} for single if statements but will make the code less error prone. I would like to encourage you to always use them. |
_unix.71462 | I have the followin directory structure:base/ files/ archives/ scripts/I want a script to run from scripts/, compress files that match results.*.log in files/ into a gzipped tar archive in archives/.I'm trying the following command:tar czfC ../archives/archive.tar.gz ../files results.*.logBut I gettar: results.*.log: Cannot stat: No such file or directorytar: Exiting with failure status due to previous errorsWhile tar czfC ../archives/archive.tar.gz ../files results.a.logworks as expected. Also tar czf ../archives/archive.tar.gz ../files/results.*.logworks the way I would like, except it adds the prefix files/ to the file and also emits a warning:tar: Removing leading `../' from member namesSo my conclusion is that tar globbing doesn't work properly when using the -C option. Any advice on how I make this work in a simple manner? | How to make tar globbing work with the 'change directory' option | wildcards;tar | Write it the more portable way:(cd ../files && tar cf - results.*.log) | gzip -9 > ../archives/archive.tar.gz |
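Putting the accepted pattern next to a verification step:

```sh
(cd ../files && tar cf - results.*.log) | gzip -9 > ../archives/archive.tar.gz

# members should be listed without any files/ or ../ prefix
tar tzf ../archives/archive.tar.gz
```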
_codereview.75424 | I am coding a 2D board game. I let the user choose the difficulty level in the beginning, which influences the skills points of the player. Here is the code I wrote for the Player class:package model;public class Player { // VARIABLES --------------------------- private Position position; private int score = 0; private int stepsLeft; private int fightingSkill; private int jokingSkill; private int visionScope = 2; private String skillChoice; // CONSTRUCTOR -------------------------- public Player(Position position, int difficultyLevel) { this.position = position; switch(difficultyLevel) { case 1: this.stepsLeft = 150; this.fightingSkill = 5; this.jokingSkill = 5; case 2: this.stepsLeft = 150; this.fightingSkill = 2; this.jokingSkill = 2; case 3: this.stepsLeft = 100; this.fightingSkill = 2; this.jokingSkill = 2; case 4: this.stepsLeft = 10; this.fightingSkill = 1; this.jokingSkill = 1; } } public Player(int stepsLeft, int fightingSkill, int jokingSkill) { this.stepsLeft = stepsLeft; this.fightingSkill = fightingSkill; this.jokingSkill = jokingSkill; this.position = new Position(1,1); this.score = 0; } // METHODS ------------------------------ public void move(Position destination) { setPosition(destination); stepsLeft -= 1; } public void increaseScore(int bonus) { score += bonus; } public void increaseStepsLeft(int bonus) { stepsLeft+= bonus; } public void increaseFightingSkill(int bonus) { fightingSkill += bonus; } public void increaseJokingSkill(int bonus) { jokingSkill += bonus; } // GETTERS public Position getPosition() { return position; } public int getXPosition() { return position.getX(); } public int getYPosition() { return position.getY(); } public int getScore() { return score; } public int getStepsLeft() { return stepsLeft; } public int getFightingSkill() { return fightingSkill; } public int getJokingSkill() { return jokingSkill; } public String getSkillChoice() { return skillChoice; } public int getVisionScope() { return visionScope; } // SETTERS public void setPosition(Position position) { this.position = position; } public void setSkillChoice(int choice) { switch(choice) { case 0: this.skillChoice = joke; break; case 1: this.skillChoice = fight; break; case 2: this.skillChoice = magic; break; default: ; } } public String toString() { return play \t+position+\t+stepsLeft+\t+jokingSkill+\t+score+\n; }}My questionsI chose to use a switch statement in the constructor, to create my player with features according to the difficulty level. I have the feeling that is maybe not the good practice to have such a switch in a constructor but can't really say why, and don't find a better way. Is there a better way to do this? Any other comments about that class? I've programmed scientific code but am quite new to OOP and Java. Any comment is welcome. | Create player according to chosen difficulty | java;game | null |
_unix.70827 | I've installed CentOS 6.3. However, eth0 does not show up on ifconfig.So I figured out that I need to install alx Ethernet driver from here.Since my internet is not working, as ethernet drivers are not installed, I've installed Development Tools from the DVD.Steps performed:./scripts/driver-select alxOn 'make' I get this error. http://paste.ubuntu.com/5669506/How do I solve this issue? Would be grateful for any kind of help. Thanks. | Unable to install Atheros AR8161 Ethernet controller driver for centOS 6 | linux;centos;drivers;linux kernel;ethernet | null |
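No accepted answer; a common cause of such make failures for out-of-tree drivers is headers that don't match the running kernel, so a hedged first step (CentOS package names assumed) is:

```sh
# the build needs a toolchain plus headers for exactly this kernel
yum install gcc kernel-devel-$(uname -r)

cd compat-drivers-*        # directory name assumed from the download
./scripts/driver-select alx
make
```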
_webapps.36933 | Someone shared a folder with me on Dropbox and I can't see this folder in my Dropbox folder at home; I have to access it through the Dropbox website. I thought it was supposed to be visible on every PC that has access to the folder, so people can just copy and paste the files from that folder to their own computer. Or is it normal that I have to access it through the website? | Can't see a folder people shared with me in my own Dropbox folder | dropbox | null
_unix.293017 | I have two programs, which I call program1 and program2, that together perform one main task in two steps. When program1 starts running it pipes its results to program2, which produces the final result. Because both programs must run continuously, I implemented a systemd service for this; here is the content of the /lib/systemd/system/servers.service file: [Unit]Description=Start serversAfter=network.target[Service]Type=simpleExecStart=/usr/local/bin/servers >> /var/log/servers.logRestart=alwaysTimeoutStartSec=100User=rootExecStartPre=killall -w -q program1 && killall -w -q program2User=rootExecStopPost=killall -w -q program1 && killall -w -q program2User=root[Install]WantedBy=multi-user.target When the service runs, systemd creates two processes to carry out the task; here you can see the status: sudo service servers status # result: servers.service - Start servers Loaded: loaded (/lib/systemd/system/servers.service; enabled) Active: active (running) since Wed 2016-06-29 21:00:27 UTC; 2h 5min ago Main PID: 11942 (server) CGroup: /system.slice/servers.service 11942 program1 11944 program2 When one of the programs crashes, the service stays in its running state, which is not logically correct because the main task has stopped. I would like to know whether it is possible to handle such a situation with systemd services, and if so, how. | Systemd: stop the main process (or the service) when a subprocess crashes | linux;debian;systemd;cgroups | null
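One hedged workaround, since the question has no accepted answer: systemd does not run ExecStart through a shell, so the `>> /var/log/servers.log` part is passed as arguments instead of redirecting output; a wrapper script can do the piping and also make the unit fail when either program dies. A minimal sketch (path hypothetical):

```sh
#!/bin/bash
# /usr/local/bin/servers-wrapper
set -o pipefail        # a crash of program1 is reflected in the exit status
program1 | program2    # program2 exits on EOF once program1 is gone
```

With ExecStart pointing at this wrapper and Type=simple, the unit leaves the running state as soon as the pipeline ends, and Restart=always restarts the pair together.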
_datascience.6749 | I have a function: m = a*n + 2. The range of n is 1 to 100 and the range of a is 1 to 5. My question: is it possible to plot a figure where the x-axis is n and the y-axis is a, and that also shows the value of m? | plotting graph in R: show 3 values in one graph | r | null
_codereview.2978 | Following is the code which finds the social network of a friend (i.e. friends of friends and so on). Friends definition is, ff W1 is friend of W2, then there should be Levenshtein distance equal to 1. It is working fine with a smaller dictionary, but is taking a lot of time with a bigger dictionary.Need some code review and advice.#include <stdio.h>#include <string>#include <vector>#include <queue>#include <fstream> #include <iostream> #include <sstream>#include <iterator>#include <algorithm>#include <set>class BkTree { public: BkTree(); ~BkTree(); void insert(std::string m_item); void get_friends(std::string center, std::deque<std::string>& friends); private: size_t EditDistance( const std::string &s, const std::string &t ); struct Node { std::string m_item; size_t m_distToParent; Node *m_firstChild; Node *m_nextSibling; Node(std::string x, size_t dist); bool visited; ~Node(); }; Node *m_root; int m_size; protected:};BkTree::BkTree() { m_root = NULL; m_size = 0;}BkTree::~BkTree() { if( m_root ) delete m_root; }BkTree::Node::Node(std::string x, size_t dist) { m_item = x; m_distToParent = dist; m_firstChild = m_nextSibling = NULL; visited = false;}BkTree::Node::~Node() { if( m_firstChild ) delete m_firstChild; if( m_nextSibling ) delete m_nextSibling;}void BkTree::insert(std::string m_item) { if( !m_root ){ m_size = 1; m_root = new Node(m_item, -1); return; } Node *t = m_root; while( true ) { size_t d = EditDistance( t->m_item, m_item ); if( !d ) return; Node *ch = t->m_firstChild; while( ch ) { if( ch->m_distToParent == d ) { t = ch; break; } ch = ch->m_nextSibling; } if( !ch ) { Node *newChild = new Node(m_item, d); newChild->m_nextSibling = t->m_firstChild; t->m_firstChild = newChild; m_size++; break; } }}size_t BkTree::EditDistance( const std::string &left, const std::string &right ) { size_t asize = left.size(); size_t bsize = right.size(); std::vector<size_t> prevrow(bsize+1); std::vector<size_t> thisrow(bsize+1); for(size_t i = 0; i <= bsize; i++) prevrow[i] = i; for(size_t i = 1; i <= asize; i ++) { thisrow[0] = i; for(size_t j = 1; j <= bsize; j++) { thisrow[j] = std::min(prevrow[j-1] + size_t(left[i-1] != right[j-1]), 1 + std::min(prevrow[j],thisrow[j-1]) ); } std::swap(thisrow,prevrow); } return prevrow[bsize];}void BkTree::get_friends(std::string center, std::deque<std::string>& flv) { if( !m_root ) return ; std::queue< Node* > q; q.push( m_root ); while( !q.empty() ) { Node *t = q.front(); q.pop(); if ( !t ) continue; size_t d = EditDistance( t->m_item, center ); if( d == 1 ) { if ( t->visited == false ) { flv.push_back(t->m_item); t->visited = true; } } Node *ch = t->m_firstChild; q.push(ch); while( ch ) { if( ch->m_distToParent >= 1 ) q.push(ch); ch = ch->m_nextSibling; } } return;}int main( int argc, char **argv ) { BkTree *pDictionary = new BkTree(); std::ifstream dictFile(word.list); std::string line; if (dictFile.is_open()) { while (! dictFile.eof() ) { std::getline (dictFile,line); if ( line.size()) { pDictionary->insert(line); } } dictFile.close(); } std::deque<std::string> flq; pDictionary->get_friends(aa, flq); int counter = 0; while ( !flq.empty()) { counter++; std::string nf = flq.front(); flq.pop_front(); pDictionary->get_friends(nf, flq); } std::cout << counter << std::endl; return 0;} | Finding social network of a friend | c++;algorithm;edit distance | null |
_codereview.28609 | I've got a class with a property TotalCost, which is calculated by some simple math of two other properties, CostOfFood and NumberOfPeople. The code works the way I want it to, but I was wondering if this is a satisfactory method in the long run of application development, a bad idea to have one property that depends on another all together (I'm pretty sure this is the case, but sometimes it makes sense to), or if the informed reader would deem it acceptable. Helpful hints are in the comments.class DinnerParty{ private int numberOfPeople; public int NumberOfPeople { get { return numberOfPeople; } set { numberOfPeople = value; //TotalCost property is updated when more people are added to the party TotalCost =CalculateFoodCost(value); } } private decimal totalCost; public decimal TotalCost { get { return totalCost; } private set { totalCost = value; } } private decimal costOfFood; public decimal CostOfFood { get { return costOfFood; } set { costOfFood = value; //TotalCost property is updated when CostOfFood changes //directly below line was my initial idea //TotalCost = value * NumberOfPeople was my initial thought //before overloading the CalculateFoodCost method //this calls the CalculateFoodCost version that takes a decimal TotalCost = CalculateFoodCost(value); } } private decimal CalculateFoodCost(int costOfFood) { //the int coming in as a parameter is the 'value' of the NumberOfPeople property return this.costOfFood * NumberOfPeople; } private decimal CalculateFoodCost(decimal costOfFood) { return this.costOfFood * NumberOfPeople; } public DinnerParty() { //so food cost is never 0 CostOfFood = 10; }}TestingDinnerParty d = new DinnerParty();d.NumberOfPeople = 1; Console.WriteLine(d.TotalCost);//output = 10d.CostOfFood = 2;Console.WriteLine(d.TotalCost);//CostOfFood changed, output =2d.NumberOfPeople = 2;Console.WriteLine(d.TotalCost);//output=4;d.CostOfFood = 3;Console.WriteLine(d.TotalCost); //output =6 | When one property is calculated from another | c# | The C# property model allows external classes to inspect (or set) a given member as though it were a public 'field', and the implementation details are left to the property's accessor and mutator. In your case, you want to expose TotalCost and hide the implementation details about how it is derived. And your code reflects best practices. Following the comment from Clockwork-Muse, your implementation can be made more elegant by... public decimal TotalCost { get { return CostOfFood * NumberOfPeople; } }This avoids the calculation penalty for setting either of the calculation ingredients and performs the calculation only when called upon to do so. It's also a bit more readable and transparent. In this particular case, there's no need for an asymmetric mutator, so it's been removed. |
_datascience.9036 | Most of us want to build a recommendation engine that is as accurate as possible; however, an experienced chief data scientist believes a practical machine learning algorithm should randomly shuffle the results, deliberately returning non-optimal results. This is known as Result Dithering (slide 15 at http://cikm2013.org/slides/ted.pptx). I understand how to do it (adding a small fixed Gaussian noise), but why would we do it when it makes the measured performance worse? | Result Dithering? Why randomly shuffle results? | machine learning | null
_vi.4150 | I am trying to change a text file with data in 'document' form into unnormalized csv. The data are a list of hymn authors ('document header') and for each author a list of one or more hymns they have written ('document line'). The specific operation I am struggling with is taking each 'document header' and prepending it to the one or more 'document lines' that follow it. Example data: I want to turn this: Beskow Natanael / 140 Ack saliga dag / 399 Tränger i dolda djupen ner / 478 Ditt verk är stort / 479 Kärlek av höjden / Bexell Göran / 103 Herren lever våga tro det / Bèze Théodore de / 283 Lovsjung nu alla länder Gud / 360 Såsom hjorten ivrigt längtar into: Beskow Natanael,140,Ack saliga dag / Beskow Natanael,399,Tränger i dolda djupen ner / Beskow Natanael,478,Ditt verk är stort / Beskow Natanael,479,Kärlek av höjden / Bexell Göran,103,Herren lever våga tro det / Bèze Théodore de,283,Lovsjung nu alla länder Gud / Bèze Théodore de,360,Såsom hjorten ivrigt längtar. The specific change that I am struggling with is taking the 'document header' and prepending it to each 'document line', so I am leaving the other changes (removing the header, comma between hymn number and hymn title, etc.) to one side for the moment. Question: I could write a function to do this change, but I find that I am collecting quite a number of ad hoc functions for different editing operations, so I would prefer to learn how to do it with a global command, or, if it is not a suitable operation to perform with global, to understand why. How can I do this operation with global; and, if I can't, why? What I have tried: I have tried using the :global command to execute different combinations of normal commands. 1) Yank, move, put: g/^\S/normal! y$jPa, which I take to mean: for each line beginning with non-whitespace, yank to end-of-line into the unnamed register, move down one line and put before from the unnamed register, append a comma. This works, but only for each first 'document line'. The result is: Beskow Natanael / Beskow Natanael, 140 Ack saliga dag / 399 Tränger i dolda djupen ner / 478 Ditt verk är stort / 479 Kärlek av höjden / Bexell Göran / Bexell Göran, 103 Herren lever våga tro det / Bèze Théodore de / Bèze Théodore de, 283 Lovsjung nu alla länder Gud / 360 Såsom hjorten ivrigt längtar. To repeat the put operation on each 'document line' I think I would need to know how many there are and put in a loop, and I don't see an easy way to do that without writing a custom function. 2) Yank, visually select, substitute: To avoid having to count I thought I could instead visually select the 'document lines' and insert the 'document header' with a substitute over the visual selection: g/^\S/normal! y$v/^\S^M:s/^\s/\=@" . "," . submatch(0) which I intend to mean the following: for each line beginning with non-whitespace, yank to the end of line, visually select until the next line beginning with non-whitespace, then substitute over the visual selection, matching beginning-of-line + whitespace and substituting the contents of the unnamed register + a comma + the entire match (i.e., the whitespace). This command does not work at all. It does not change the buffer at all. It leaves me in visual mode, with the last character on the first line selected. Hitting /<Up> to look at the last search pattern shows /^\S^M:s/^\s/\=@" . "," . submatch(0). This tells me that the ^M does not execute the search as I had hoped; rather, everything following / is interpreted as part of the pattern. Checking the unnamed register after running the command, it contains the last 'document header' in the buffer. This tells me that it goes through the buffer and performs the yank operation. If, however, I type out the normal mode commands manually, they work as expected. | How can I use the `global` command to prepend 'document headers' to 'document lines'? | cut copy paste;repeated commands | null
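For comparison, outside vim the same transformation is a short awk filter; this is a swapped-in tool, not an answer to the :global question itself (file name assumed):

```sh
awk '/^[^ \t]/ { header = $0; next }   # remember each document header
     { sub(/^[ \t]+/, "")              # drop the indent
       sub(/[ \t]+/, ",")              # comma between hymn number and title
       print header "," $0 }' hymns.txt
```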
_cstheory.30952 | One can bound the Rademacher average $R_n(A)$ of a finite set of vectors $A\subseteq\{0,1\}^n$ using Massart's Finite Lemma:$$R_n(A)\le \max_{a\in A}\|a\|\frac{\sqrt{2\ln|A|}}{n}$$where $\|\cdot\|$ is the Euclidean norm.Then, using Sauer's Lemma, one can obtain $$R_n(A)\le C\max_{a\in A}\|a\|\sqrt{\frac{V\ln\frac{n}{V}}{n}},$$where $V$ is the (empirical) VC-dimension.Using chaining and a bound on covering numbers, one can get rid of the logarithmic factor and obtain$$R_n(A)\le C'\sqrt{\frac{V}{n}}.$$Looking at the proof that uses chaining, I can't seem to find a way to have $\max_{a\in A}\|a\|$ in the second bound. Is it even possible?It may not change much in theory, but it does in practice, and (in my opinion), it intuitively makes sense that the bound should depend on it. | Bounding Rademacher Averages, with and without chaining | machine learning;pr.probability;lg.learning;vc dimension | Converting the comment to an answer: See the notes here: cs.cornell.edu/~sridharan/dudley.pdfwith the dependence on $\sup_f \hat E[f^2]$ |
_unix.106219 | I have a text file abc.txt and its contents are: /lag/cnn/org/one.txt /lag/cnn/org/two.txt /lag/cnn/org/three.txtIf I use:$ tar -cvf allfiles.tar -T abc.txtI'm getting the tar of files in the list. Similarly is it possible to copy those files in abc.txt to a folder?I tried this: $ cp --files-from test1.txt ./FolderBut it is not working. | Copy files from a list to a folder | shell;file copy | null |
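With GNU coreutils/findutils the list can feed cp directly; cp -t and xargs -d are GNU extensions, and the list is assumed to hold one path per line:

```sh
# trim leading blanks, then copy every listed file into ./Folder
sed 's/^[[:space:]]*//' abc.txt | xargs -d '\n' cp -t ./Folder
```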
_unix.316918 | I run physics analysis jobs on a computer grid. Sometimes the jobs go wrong and I have to kill them (one by one), which is painful. Can you please suggest how I can select only the numeric values (748736838 and so on) from this output: $ ps -W mhaque -748736838 0 W mhaque -748736879 0 W mhaque -748737079 0 W mhaque -748737185 0 W mhaque -748737276 0 W (and hundreds of lines like this). I tried a few sed/awk/grep commands (from Stack Exchange) but could not separate the numeric values. Is there a command which can select the numeric values and also place 'kill' in front of them? For example something like this (piping): ps -W | awk/sed/grep (what_to_use) | (some_command to place kill) > file.list which would give me the following (in file.list): kill 748736838 kill 748736879 ...and so on. Then I can simply copy and paste it in the grid shell to kill all offending/long-waiting jobs. | How to select job ids using grep/sed/awk | shell;sed;awk;grep | null
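One hedged one-liner, assuming the job ids are the only digit runs of six or more in the output:

```sh
# extract each long number and prefix it with kill
ps -W | grep -oE '[0-9]{6,}' | sed 's/^/kill /' > file.list

sh file.list   # run the generated commands once the list looks right
```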
_softwareengineering.347952 | Here are a couple of examples from a simple game based on two players putting X and O on a 2D array. The first method should return true if element field[y][x] is OK: public boolean yxInField(int y, int x, char[][] field) { return field != null && y >= 0 && x >= 0 && y < field.length && field[y] != null && x < field[y].length;} The second should return true if at least one of the calls checking the win condition for a specific direction returned true: public boolean checkWin(int y, int x, char[][] field) { return //E-W checkDirection(y, x, 0, -1, field) //S-N || checkDirection(y, x, -1, 0, field) //SE-NW || checkDirection(y, x, -1, -1, field) //SW-NE || checkDirection(y, x, -1, 1, field);} But I'm not certain if this style is OK. If not, what would be a better way to write those methods? | Are returns with large statements at the top of a method a good style? | coding style | null
_webmaster.98531 | How do I do URL masking using .htaccess? I've tried a lot of different things from different websites, but they usually just redirect me to a different page or cause a 500 internal server error. I want to take this URL: http://www.example.com/profile/index?id=$1 and mask it to look like: http://www.example.com/profile/$1 Thank you in advance! | .htaccess URL masking | masking | null
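No accepted answer; a minimal mod_rewrite sketch that maps the pretty URL internally, without a redirect, could be (assuming profile/index is a real handler):

```
RewriteEngine On
# skip the real endpoint itself, otherwise the rule would loop
RewriteCond %{REQUEST_URI} !^/profile/index
RewriteRule ^profile/([^/]+)$ /profile/index?id=$1 [L,QSA]
```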
_unix.150260 | I have a process that generates output mostly in lexicographically sorted order according to a (timestamp) field, but occasionally the lines will be output in the wrong order:2014-08-14 15:42:02.019220203 ok2014-08-14 15:42:03.523164367 ok2014-08-14 15:42:04.525655832 ok2014-08-14 15:42:06.523324269 ok2014-08-14 15:42:05.930966407 oops2014-08-14 15:42:07.643347946 ok2014-08-14 15:42:07.567283110 oopsHow can I identify each location where the data are unsorted?Expected output (or similar):2014-08-14 15:42:05.930966407 oops2014-08-14 15:42:07.567283110 oopsI need a solution that works as the data are generated (e.g. in a pipeline); it's less useful if it only operates on complete files. sort --check would be ideal but it only outputs the first point of disorder; I need a full listing. | Identify lines that are out of order | text processing;sort;verification | awk 'NR>1 && $0 < last; {last=$0}'Prints the lines that sort before the preceding line. The $0 is to force lexical comparison (on the output of seq 10 it would spot 10 as sorting before 9). |
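The accepted one-liner also works on a live stream, which matches the as-the-data-are-generated requirement (file name assumed):

```sh
# print every line that sorts before its predecessor, as data arrive
tail -f results.log | awk 'NR>1 && $0 < last; {last=$0}'
```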
_unix.270071 | I have a large tree with many pdf files in it. I want to delete the pdf files in this tree, but only those pdf files in subfolders named rules/. There are other types of files inside rules/. The rules/ subfolders have no further subfolders. For example, I have this tree (everything below 'source'): source/ A/ rules/*.pdf, *.txt, *.c, etc.. etc/ B/ keep_this.pdf rules/*.pdf whatever/ C/ D/ rules/*.pdf something/ and so on. There are pdf files all over the place, but I only want to delete the pdf files which are in folders called rules/ and no other place. I think I need to use: cd source; find / -type d -name rules -print0 | xargs -0 <<<rm *.pdf?? now what?>>> But I am not sure what to do after getting the list of all subfolders named rules/. Any help is appreciated. On Linux Mint. | How to delete all files with a specific extension in specifically named folders in a large tree? | files;find | I would execute a find inside another find. For example, I would execute this command line in order to list the files that would be removed: $ find /path/to/source -type d -name 'rules' -exec find '{}' -mindepth 1 -maxdepth 1 -type f -iname '*.pdf' -print ';' Then, after checking the list, I would execute: $ find /path/to/source -type d -name 'rules' -exec find '{}' -mindepth 1 -maxdepth 1 -type f -iname '*.pdf' -print -delete ';'
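Since the rules/ directories have no subdirectories, a single find with -path does the same job (-path is case-sensitive; the answer's -iname would also catch *.PDF):

```sh
find /path/to/source -type f -path '*/rules/*.pdf' -print    # dry run first
find /path/to/source -type f -path '*/rules/*.pdf' -delete
```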
_codereview.1903 | I have a class which is responsible for checking whether a website is up or down. I've made it to run asynchronously, but I would like critique if this is a good/bad way or if there is another better way to do it.class UrlChecker { private readonly IValidationCondition _condition; private readonly Dictionary<string, UrlStatus> _status = new Dictionary<string, UrlStatus>(); private readonly object _lock = new object(); public UrlChecker(IValidationCondition condition) { _condition = condition; } public void CheckRange(IEnumerable<string> urls, Action<Dictionary<string, UrlStatus>> callback) { var options = new ParallelOptions {MaxDegreeOfParallelism = 5}; Parallel.ForEach(urls, options, Check); callback(_status); } private void Check(string url) { Console.WriteLine(Checking + url); var req = (HttpWebRequest) WebRequest.Create(url); req.Timeout = 10 * 10000; // 10 seconds HttpWebResponse resp; try { resp = (HttpWebResponse)req.GetResponse(); } catch(WebException ex) { // We got an exception, consider it as down lock (_lock) _status.Add(url, UrlStatus.Down); return; } if(resp.StatusCode != HttpStatusCode.OK) { lock (_lock) _status.Add(url, UrlStatus.Down); return; } using(var reader = new StreamReader(resp.GetResponseStream())) { // Check for empty response var html = reader.ReadToEnd(); if(string.IsNullOrEmpty(html)) { lock(_lock) { _status.Add(url, UrlStatus.Down); } } // Validate against condition if(!_condition.IsValid(html)) { lock(_lock) { _status.Add(url, UrlStatus.Down); } return; } } // We reached the end without problems, it's a valid url lock(_lock) { _status.Add(url, UrlStatus.OK); } }}It's called like so:checker.CheckRange(urls, status => { if(status.Any(x => x.Value == UrlStatus.Down)) EmailFailing(message);});The second parameter is obviously a callback that's invoked when all checks are done.Am I locking correctly?Is this an acceptable way of doing it? Checking with Fiddler proves that it's working correctly, but is there a better way? | Asynchronous website monitor | c#;asynchronous | I don't see how this runs asynchronously. callback(_status); will still only be called after all processing is finished, parallel or not. In this case this makes the callback rather redundant, as it does the same as a simple return value.If you want to make it asynchronous you'll need to store the callback as a member variable, but return immediately in CheckRange. Start the actual processing on a separate thread by e.g. using a BackgroundWorker. Once this execution completes, you can call the stored callback with the result. |
_unix.222968 | I have a running dovecot with:$ dovecot -n# 2.2.18: /etc/dovecot/dovecot.conf# OS: Linux 4.1.0-x86_64-linode59 x86_64 Fedora release 22 (Twenty Two) ext4auth_debug = yesauth_mechanisms = plain login digest-md5 cram-md5auth_verbose = yesauth_verbose_passwords = yesdefault_internal_user = rootimap_client_workarounds = delay-newmail tb-extra-mailbox-sepmail_debug = yesmail_location = maildir:/home/vmail/%d/%n/Maildirmaildir_very_dirty_syncs = yesmbox_write_locks = fcntlnamespace { inbox = yes location = prefix = INBOX. separator = . type = private}namespace inbox { location = mailbox Drafts { special_use = \Drafts } mailbox Junk { special_use = \Junk } mailbox Sent { special_use = \Sent } mailbox Sent Messages { special_use = \Sent } mailbox Trash { special_use = \Trash } prefix =}passdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql}postmaster_address = pmatosprotocols = imapquota_full_tempfail = yesservice auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix } unix_listener auth-master { mode = 0600 user = vmail } user = $default_internal_user}ssl = requiredssl_cert = </etc/pki/dovecot/certs/dovecot.pemssl_key = </etc/pki/dovecot/private/dovecot.pemuserdb { args = uid=5000 gid=5000 home=/home/vmail/%d/%n allow_all_users=yes driver = static}protocol lda { auth_socket_path = /var/run/dovecot/auth-master deliver_log_format = msgid=%m: %$ log_path = /home/vmail/dovecot-deliver.log}protocol imap { mail_max_userip_connections = 100}I started to filter my email with imapfilter running on the same host as dovecot. I would therefore like to deliver email to a folder called PreINBOX so that imapfilter then sorts the email and ends up delivering only the useful email to INBOX.How can I change the name of the inbox dovecot delivers to? | Delivering email to Maildir PreINBOX | email;dovecot | null |
_unix.143820 | What is the best way to monitor running services and ports on a system with systemd? My intent is to be alerted when a service stops running or when a new service starts listening on a new port (possibly a security breach). Currently I have a script that uses netstat to see the open ports and compare them with what is expected. Is there some sort of utility I should be using? Also, can systemd alert me, or should I just use my script executed via cron? I have munin installed and it does monitor things, but as for alerts, I don't see it having the capability I describe. | monitor running services and ports | monitoring;systemd;munin | null
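No accepted answer; a cron-able sketch combining systemctl with a saved socket baseline (service names and paths are placeholders):

```sh
#!/bin/sh
for svc in sshd nginx postfix; do
    systemctl is-active --quiet "$svc" || echo "DOWN: $svc"
done

# compare today's listening sockets against a known-good baseline
ss -tln | tail -n +2 | awk '{print $4}' | sort > /tmp/ports.now
diff /etc/ports.baseline /tmp/ports.now || echo "port set changed"
```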
_codereview.14553 | I've written my first class to interact with a database. I was hoping I could get some feedback on the design of my class and areas that I need to improve on. Is there anything I'm doing below that is considered a bad habit that I should break now?public IEnumerable<string> ReturnSingleSetting(int settingCode) interacts with a normalized table to populate combo boxes based on the setting value passed to it (for example, a user code of 20 is a user, sending that to this method would return all users (to fill the combobox).public void InsertHealthIndicator(string workflowType, string workflowEvent, int times, string workflowSummary) interacts with a stored procedure to write a workflow error type into another normalized table.public DataView DisplayHealthIndicator(DateTime startDate, DateTime endDate) uses another stored procedure to return the workflow error types between specific dates.Note: Although this seems like I likely shouldn't use stored procedures in some areas here, I've done so so I can base some SSRS reports off the same stored procedures (so a bug fixed in one area is a bug fixed in both).using System;using System.Collections.Generic;using System.Data;using System.Globalization;using System.Data.SqlClient;using System.Windows;namespace QIC.RE.SupportBox{ internal class DatabaseHandle { /// <summary> /// Class used when interacting with the database /// </summary> public string GetConnectionString() { // todo: Integrate into settings.xml return Data Source=FINALLYWINDOWS7\\TESTING;Initial Catalog=Testing;Integrated Security=true; } public IEnumerable<string> ReturnSingleSetting(int settingCode) { var returnList = new List<string>(); string queryString = select setting_main + from [marlin].[support_config] + where config_code = + settingCode.ToString(CultureInfo.InvariantCulture) + and setting_active = 1 + order by setting_main; using (var connection = new SqlConnection(GetConnectionString())) { var command = new SqlCommand(queryString, connection); try { connection.Open(); using (SqlDataReader reader = command.ExecuteReader()) { while (reader.Read()) { returnList.Add(reader[0].ToString()); } reader.Close(); } } catch (Exception ex) { MessageBox.Show(ex.ToString()); throw; } connection.Close(); } return returnList; } public void InsertHealthIndicator(string workflowType, string workflowEvent, int times, string workflowSummary) { string queryString = EXEC [marlin].[support_add_workflow_indicator] + @workflow_type = @workflowType, + @workflow_event = @workflowEvent, + @event_count = @eventCount, + @event_summary = @eventSummary; using (var connection = new SqlConnection(GetConnectionString())) { try { connection.Open(); using(var cmd = new SqlCommand(queryString, connection)) { cmd.Parameters.AddWithValue(@workflowType, workflowType); cmd.Parameters.AddWithValue(@workflowEvent, workflowEvent); cmd.Parameters.AddWithValue(@eventCount, times); cmd.Parameters.AddWithValue(@eventSummary, workflowSummary); cmd.CommandType = CommandType.Text; cmd.ExecuteNonQuery(); } connection.Close(); } catch(SqlException ex) { string msg = Insert Error: ; msg += ex.Message; throw new Exception(msg); } } } public DataView DisplayHealthIndicator(DateTime startDate, DateTime endDate) { string queryString = [marlin].[support_retrieve_workflow_history]; using (SqlConnection connection = new SqlConnection(GetConnectionString())) { using (var cmd = new SqlCommand(queryString, connection)) { connection.Open(); cmd.CommandType = CommandType.StoredProcedure; 
cmd.Parameters.AddWithValue(date_from, startDate.Date); cmd.Parameters.AddWithValue(date_to, endDate.Date); var reader = cmd.ExecuteReader(); var dt = new DataTable(); dt.Load(reader); connection.Close(); return dt.DefaultView; } } } }} | Interacting with a database | c#;sql;ado.net | I believe it's much better to use some ORM together with LINQ, rather than writing raw SQL. It means more errors are checked at compile time, it will help you avoid some common mistakes and it will make your code much shorter.I would also always use parametrized SQL queries and never concatenate them by hand. You do use them most of the time, and in the one case where you don't, there is no danger of SQL injection, because the parameter is an integer, but I still think it's better to use parameters everywhere. (I think it may also make your query faster thanks to caching, but I'm not completely sure about that.)Also, you shouldn't throw Exception, you should create a custom class that inherits from Exception. And, if possible, include the original exception as inner exception, to make debugging the original source of the error easier. |
_unix.209968 | All I was able to find out about the %gs register is that it seems to be a free-to-use register on >32-bit x86 architectures. It seems that a gs_change is executed before any system call. Can someone point me to documentation on what this register is used for? I assume it's a register used for kernel-/user-mode switches. The background of my question is that I am trying to understand a kernel stack trace and what exactly happened. The stack trace was produced by the flush process that reached the /proc/sys/kernel/hung_task_timeout_secs timeout. | What is the register %gs used for? | linux kernel;x86 | It seems %gs is reserved for GCC's stack protection feature on the x86 Linux kernel with CONFIG_CC_STACKPROTECTOR enabled, in order to set up stack canaries. You can see some explanation in arch/x86/include/asm/stackprotector.h.
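Whether that option is enabled for the running kernel can be checked from the shipped config:

```sh
grep STACKPROTECTOR /boot/config-$(uname -r)
```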
_unix.92532 | Assume a script runs on boot as root. From this script I want to start tcpsvd -E 0 515 lpd. I want tcpsvd to run as an unprivileged user. But it requires root privileges to bind to the port 515. How can I achieve this? Further I have to use busybox tcpsvd:tcpsvdtcpsvd [-hEv] [-c N] [-C N[:MSG]] [-b N] [-u USER] [-l NAME] IP PORT PROGCreate TCP socket, bind to IP:PORT and listen for incoming connection.Run PROG for each connection. IP IP to listen on. '0' = all PORT Port to listen on PROG [ARGS] Program to run -l NAME Local hostname (else looks up local hostname in DNS) -u USER[:GRP] Change to user/group after bind -c N Handle up to N connections simultaneously -b N Allow a backlog of approximately N TCP SYNs -C N[:MSG] Allow only up to N connections from the same IP New connections from this IP address are closed immediately. MSG is written to the peer before close -h Look up peer's hostname -E Do not set up environment variables -v Verbose | How do I get tcpsvd to drop its root privileges? | root;not root user | You need to have the program bind to the port while running as root, and then switch to your unprivileged user. tcpsvd offers the -u option for doing this: -u user[:group] drop permissions. Switch user ID to users UID, and group ID to users primary GID after creating and binding to the socket. If user is followed by a colon and a group name, the group ID is switched to the GID of group instead. All supplementary groups are removed. |
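Dropped into the boot script, that might look like this (lp as the unprivileged user is an assumption):

```sh
# bind to port 515 as root, then switch to lp:lp before serving
tcpsvd -E -u lp 0 515 lpd &
```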
_unix.2600 | My organization is running an exchange server 2007 with MAPI disabled for security reasons. How do I connect with evolution? When I connect using the Microsoft Exchange option I get the error The Exchange server is not compatible with Exchange Connector.The server is running Exchange 5.5. Exchange Connector supports Microsoft Exchange 2000 and 2003 only.If I use the Exchange MAPI option I getAuthentication failed. MapiLogonProvider:MAPI_E_NETWORK_ERRORWhich appears to be a network timeout, which confirms that administrators have MAPI turned off. | Evolution and Exchange Server 2007 without MAPI | email;evolution;exchange | As far as I know, this is not possible, at least if you want a reasonably stable solution. Which would, at this point, also exclude the Exchange MAPI option, even if it were available. |
_unix.110903 | I am trying to backup a remote server to my machine. I am trying something likessh user@ip dd if=/dev/sda | dd of=~/backup.imgBut that obviously doesn't work. Other variants that don't work.ssh user@ip sudo dd if=/dev/sda | dd of=~/backup.imgssh user@ip -t sudo dd if=/dev/sda | dd of=~/backup.imgI have public key authentication set up. Note that even after compression, the remote machine can not hold its own backup. What do I do?(Note in the long run I want to try and put this in an automatic script, but I just want a backup for now.)Note: I should mention that I don't want to just back up the files (like with rsync) but to have a complete image that I can just drop on a new hard-drive should this one go belly up with little hassle. | SSH remote backup script | ssh;backup;dd | Something like this should work:ssh user@ip sudo -S dd if=/dev/sda > backup.img You don't need to pipe to dd, you can just redirect the output into a file. |
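A variant that compresses on the remote side shrinks the transfer; note that sudo may still prompt for a password unless NOPASSWD or ssh -t is arranged:

```sh
ssh user@ip "sudo dd if=/dev/sda bs=1M | gzip -c" > backup.img.gz

# restore later
gunzip -c backup.img.gz | ssh user@ip "sudo dd of=/dev/sda bs=1M"
```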
_unix.280770 | This is on a Macintosh but it's still a unix command. I am running some computers on a network that have file sharing turned off for security, so the only way to connect is via ssh. I need to look at what applications are installed in the computers' /applications folder so we can push out a few things. What I would normally type in the terminal would be: ssh [email protected]; (password); cd /applications; ls -l; then this shows me all the applications installed in that folder. Is there any way to put that query into a .sh file to automate this? Like a .bat on Windows where you can just double-click it and it runs. | .sh query - running an ssh task | shell script;ssh;terminal | Use public key authentication. On the source host run this only once: ssh-keygen -t rsa # press ENTER at every prompt; then ssh-copy-id myname@somehost. That's all; after that you'll be able to ssh without a password. Coming to your question, use the command below: ssh [email protected] 'ls -l /applications'
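Wrapped into a double-clickable script: on a Mac, a file with a .command extension opens in Terminal when double-clicked (host list and user name are placeholders):

```sh
#!/bin/sh
# list_apps.command -- make it executable once: chmod +x list_apps.command
for host in 192.168.1.10 192.168.1.11; do
    echo "== $host =="
    ssh "user@$host" 'ls -l /applications'
done
```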
_scicomp.7868 | I would like to find the optimal set $\{ x_i \}$ given $L$ and $\{ a_i \}$ that minimizes the problem below. My first thought was to use linear programming. Is there a transformation that makes it possible, or do I need a more general optimization technique? $$ \min_{x_i} \left[ 2\sum_i | x_i | + \max_i | x_i + a_i | \right]$$ $$ \mathrm{s.t.} \quad \sum_i (x_i + a_i) \le L$$ | Constrained optimization with max and absolute values in objective function | optimization;linear programming | You haven't told us what set $i$ ranges over, so I'll just assume $i=1, 2, \ldots, n$. A standard trick in LP formulations of problems with absolute values is to introduce auxiliary variables and constraints, with the basic idea that $\min |x|$ is equivalent to $\min t$ subject to $t \geq x$, $t \geq -x$. Applying that idea to your problem, introduce auxiliary variables $t_{i}$, $i=1, 2, \ldots, n$, and $s$. Then formulate the problem as: $\min 2 \sum_{i=1}^{n} t_{i} + s$ subject to $t_{i} \geq x_{i}$, $i=1, 2, \ldots, n$; $t_{i} \geq -x_{i}$, $i=1, 2, \ldots, n$; $s \geq x_{i}+a_{i}$, $i=1, 2, \ldots, n$; $s \geq -(x_{i}+a_{i})$, $i=1, 2, \ldots, n$; $\sum_{i=1}^{n} (x_{i}+a_{i}) \leq L$.
_softwareengineering.218194 | When considering the disk space of a storage medium, the computer or operating system will normally represent it in powers of 1024: a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". Using the binary definition of gigabyte would artificially inflate the byte count of a device, making drive-makers pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or simply pack in 1 terabyte and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.) | What is the advantage to using a factor of 1024 instead of 1000 for disk size units? | history;conventions | The reason is that any organized file system placed on a drive must be able to uniquely identify each spot on the drive, and those addresses are stored in binary format because, well, we are using binary computers, not, say, analog computers. So representing all the addresses on a disk most compactly requires a certain minimum number of bits, and file directories are basically wasted overhead space, so why not make them as small as possible? Thus you actually see manufacturers targeting their physical drive platters toward certain powers of two, because more space than that would be wasted unless they changed the length of the directory addresses. It is a convenience and an optimization for getting the most usable, addressable space given the two considerations of addressing and physical platter size. Remember there is a table taking up nearly 2% of the disk, and all that table does is show where on the disk each file starts, what the name of the file is, etc., so this table can be made smaller by being smart about the number of bits used for each file's address. If you make the table addresses two bits longer but only make the drive platter twice as large, you're wasting most of that second bit you added, so why not target the physical platter manufacturing to match the bits you have available for addresses? And then, of course, humans wanted some way to understand the magnitude of these numbers in our digital (powers-of-ten fingers) world; thus the closest round decimal unit is 1000. At this point the whole thing is pretty moot because drive capacities are so huge that we don't worry quite as much about the overhead of addressing, but at one point it was important.
_codereview.40865 | I've written a Python client for a new NoSQL database as a service product called Orchestrate.io. The client is very straightforward and minimal. It uses the requests library, making the underlying code even more streamlined. I've been using the service as part of the private beta. However, Orchestrate.io went live to the public today. They have added a few new features to the API and I would like to include them in the Python client. With these updates, I'm considering other design choices as well.I am relatively new to writing such an API and I would like to get feedback on the current design and perhaps suggestions for how it can be improved. Personally, I like where it is now because it is super simple/minimal. That said, I am open to making changes that will make the code most useful to the community.Below are few things that I'm considering in the next round of updates:Putting the client and each service (Key/Value, Search, Events, ect...) into their own classesImplementing optional success and error callbacksProviding an asynchronous option (currently requests are blocking)Improved error handlingHere is the code in its current state (also available here):'''A minimal implementation of an Orchestrate.io client'''import requests# Settingslogging = Falseauth = ('YOUR API KEY HERE', '')root = 'https://api.orchestrate.io/v0/'header = {'Content-Type':'application/json'}# Authdef set_auth(api_key): global auth auth = (api_key, '')# Collectionsdef delete_collection(collection): ''' Deletes an entire collection ''' return delete(root + collection + '?force=true') # Key/Valuedef format_key_value_url(collection, key): ''' Returns the url for key/value queries ''' return root + '%s/%s' % (collection, key) def get_key_value(collection, key): ''' Returns the value associated with the supplied key ''' return get(format_key_value_url(collection, key))def put_key_value(collection, key, data): ''' Sets the value for the supplied key ''' return put(format_key_value_url(collection, key), data)def delete_key_value(collection, key): ''' Deletes a key value pair ''' return delete(format_key_value_url(collection, key))# Searchdef format_search_query(properties = None, terms = None, fragments = None): ''' propertes - dict: {'Genre' : 'jazz'} terms - list, tuple: ['Monk', 'Mingus'] fragments - list, tuple: ['bari', 'sax', 'contra'] ''' def formatter(items, pattern=None): result = '' for i in range(0, len(items)): item = items[i] if pattern: result += pattern % item else: result += item if i < len(items) - 1: result += ' AND ' return result query = '' if properties: query += formatter(properties.items(), '%s:%s') if terms: if properties: query += ' AND ' query += formatter(terms) if fragments: if properties or terms: query += ' AND ' query += formatter(fragments, '*%s*') return querydef format_event_search(span, start, end, start_inclusive = True, end_inclusive = True): ''' Formats a query string for event searches. 
Example output: Year:[1999 TO 2013} span - string: YEAR, TIME start - string: beginning date or time end - string: ending date or time start_inclusive - boolean: whether or not to include start end_inclusive - boolean: whether or not to include end ''' result = span + ':' result += '[' if start_inclusive else '{' result += start + ' TO ' + end result += ']' if end_inclusive else '}' return result def search(collection, query): ''' Searches supplied collection with the supplied query ''' return get(root + %s/?query=%s % (collection, query))# Eventsdef format_event_url(collection, key, event_type): ''' Returns the base url for events ''' return root + '%s/%s/events/%s' % (collection, key, event_type)def get_event(collection, key, event_type, start='', end=''): ''' Returns an event ''' return get(format_event_url(collection, key, event_type) + '?start=%s&end=%s' % (start, end))def put_event(collection, key, event_type, time_stamp, data): ''' Sets an event ''' return put(format_event_url(collection, key, event_type) + '?timestamp=%s' % (time_stamp), data)def delete_event(collection, key, event_type, start='', end=''): ''' Delets an event ''' return delete(format_event_url(collection, key, event_type) + '?start=%s&end=%s' % (start, end))# Graphdef format_graph_url(collection, key, relation): ''' Returns the base url for a graph ''' return root + '%s/%s/relations/%s/' % (collection, key, relation)def get_graph(collection, key, relation): ''' Returns a graph retlationship ''' return get(format_graph_url(collection, key, relation))def put_graph(collection, key, relation, to_collection, to_key): ''' Sets a graph relationship ''' return put(format_graph_url(collection, key, relation) + ('%s/%s') % (to_collection, to_key))def delete_graph(collection, key, relation): ''' Deletes a graph relationship ''' return delete(format_graph_url(collection, key, relation))'''Convenience methods used by client for generic, get, put and delete. '''def get(url): log('GET', url) return requests.get(url, headers=header, auth=auth) def put(url, data=None): log('PUT', url) return requests.put(url, headers=header, auth=auth, data=data)def delete(url): log('DEL', url) return requests.delete(url, auth=auth)def log(op, url): if logging: print '[Orchestrate.io] :: %s :: %s' % (op, url.replace(root, )) | Orchestrate.io Client API Design | python;api | This is a very thin layer around the Orchestrate.io database. By thin I mean that it provides no abstraction and no mapping of concepts between the Orchestrate and Python worlds. Without your module, someone might have written a sequence of operations like this:import requestsvalue = requests.get(root + collection_name + '/' + key, headers=header, auth=auth)requests.delete(root + collection_name + '/' + key, headers=header, auth=auth)but with your module they can write it like this:import orchestrateorchestrate.auth = authorchestrate.root = rootvalue = orchestrate.get(collection_name, key)orchestrate.delete(collection_name, key)which you have to admit is not much of an improvement. All you've done is factor out a bit of boilerplate, which any Python programmer could easily have done for themselves.What you should do is figure out some way to map concepts back and forth between the Orchestrate and Python worlds. 
For example, a key-value store is very like a Python dictionary, so wouldn't it be nice to be able to write the above sequence of operations like this:
import orchestrate
conn = orchestrate.Connection(root, api_key)
collection = conn.Collection(collection_name)
value = collection[key]
del collection[key]
The advantage of this kind of approach is not just that it results in shorter code, but that it interoperates with other Python functions. For example, you'd be able to write:
sorted(data, key=collection.__getitem__)
or:
'{product} has {stock_count} items.'.format_map(collection)
By using the requests module, you require all your users to install that module. If you are trying to write something for general use, you should strive to use only features from Python's standard library. Even if requests is easier to use than urllib.request, a bit of inconvenience for you could save a lot of inconvenience for your users if it would enable them to run your code on a vanilla Python installation.
There doesn't seem to be any attention to security or validation. You should strive to make your interface robust against erroneous or malicious data. Some examples I spotted:
What if root doesn't end with a /? It would be safer to use urllib.parse.urljoin instead of string concatenation.
What if collection or key contains a / or a ? or a %? You might consider using urllib.parse.quote_plus.
Instead of appending ?force=true, why not use the requests module's params interface?
Similarly for ?query=%s. Using the params interface would ensure that the query is properly encoded. format_search_query and format_event_search look vulnerable to code injection attacks.
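To make the dictionary idea concrete, here is a minimal sketch of what such a wrapper could look like. The Connection/Collection split, the 404-means-missing-key assumption and the method names are illustrative choices of mine, not something the Orchestrate API dictates (and I keep requests for brevity, despite the stdlib advice above):
import requests
from urllib.parse import quote_plus, urljoin

class Collection:
    def __init__(self, root, name, auth):
        # urljoin per the advice above; root is expected to end with '/'
        self.base = urljoin(root, quote_plus(name)) + '/'
        self.auth = auth

    def _url(self, key):
        # quote_plus guards against '/', '?' and '%' inside keys
        return self.base + quote_plus(key)

    def __getitem__(self, key):
        response = requests.get(self._url(key), auth=self.auth)
        if response.status_code == 404:   # assumed: 404 for a missing key
            raise KeyError(key)
        return response.json()

    def __setitem__(self, key, value):
        requests.put(self._url(key), json=value, auth=self.auth)

    def __delitem__(self, key):
        requests.delete(self._url(key), auth=self.auth)
With that in place, value = collection[key], del collection[key] and sorted(data, key=collection.__getitem__) all read exactly like the dictionary operations sketched above.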
_codereview.27038 | I have a very simple Pong game that I've built in Java. The code is quite long so I've decided to focus this question on the collision that occurs with the ball and the bat and also the effects in the game. I'm using ACM Graphics package to learn Java so most of the methods are from that package. I want to know how I can improve this checking process and if the way I revert the speed and direction is efficient. I'm also open to any suggestions for the game.//Setting up variablesstatic final int WAIT = 50;static final int MV_AMT = 20;static final int BATWIDTH = 120;static final int BATHEIGHT = 20;static final int WINDOWX = 400;static final int WINDOWY = 400;static final int BALLRADIUS = 10;private int batX = 150, batY = 400; //Starting positionsprivate int ballX = 160, ballY = 370;private int ballSpeedX = 2; //the ball speed on the X axisprivate int ballSpeedY = -9; //the ball speed on the Y axispublic void run(){ //... Stuff that runs before the game, ie. draw the sceen etc. int currentTime = 0; //Do all our stuff here while(continueGame){ //Pause dat loop pause(WAIT); currentTime = currentTime + WAIT; //Up the speed every 5 seconds if (currentTime % 5000 == 0) { if(ballSpeedY>0) ballSpeedY += 2; else ballSpeedY -= 2; if(ballSpeedX>0) ballSpeedX += 2; else ballSpeedX -= 2; } //Move the ball ballX=ballX+ballSpeedX; ballY=ballY+ballSpeedY; ball.setLocation(ballX, ballY); //Check checkCollisions(); } //... Stuff that gets done after game over}public void checkCollisions(){ //This method is quite long so I won't be posting it all //Just the part the calls the collision method //Get the bounds GRectangle batBounds = bat.getBounds(); GRectangle ballBounds = ball.getBounds(); //Where is the ball? ballX = (int)ball.getX(); ballY = (int)ball.getY(); //Where is the bat? batX = (int)bat.getX(); batY = (int)bat.getY(); //Did the bat touch the ball? if(batBounds.intersects(ballBounds)){ batCollision(); }}public void batCollision(){ if( ballX+BALLRADIUS > batX+(BATWIDTH/2) ){ //Which side of the bat? //Which direction is the ball traveling when it hits? if(ballSpeedX >> 31 !=0){ ballSpeedX = ballSpeedX * -1; } else { ballSpeedX = ballSpeedX; } } else { if(ballSpeedX >> 31 !=0){ ballSpeedX = ballSpeedX; } else { ballSpeedX = -ballSpeedX; } } ballSpeedY = -ballSpeedY; //Adjust Y speed} | Java Pong Game Collision between bat and ball | java;game;collision | null |
_webmaster.69551 | I have a requirement where I need to host large mp3 files on a web server. File size may vary from 1MB - 120MB (most of the files are more than 20MB).
With the large sizes in mind, I can't host these files on the same web server, as that can slow down the web server and degrade the performance of several websites hosted on the same server.
I would appreciate it if someone could tell me which is the best paid service to host these files for streaming and downloading. | I need to host large mp3 files & link them on website for download | web hosting;cloud hosting | null
_unix.320 | I want to make my Fedora Linux capable of the following:
Use Linux as a complete development platform, without requiring any other OS installation, but still be able to build and test programs for different platforms.
Completely replace a Windows machine for all the other work, e.g. Office, Paint, Remote Desktop, etc.
Can you suggest open source projects and tools for achieving the above objectives? | Linux as a complete development platform? | linux;opensource projects | You can easily do cross-platform development whether you are a systems programmer, a web developer or a desktop application developer. If you are into systems, then any utilities and/or drivers you write for Linux are likely to work well for other *nix with very minimal modifications. Provided that you write standard C code and don't use too many system-specific calls, they may even be easy to port to Windows.
If you are a desktop application dev, you can target GTK, Qt or wxWidgets and your app will likely work well across the 3 major platforms today (*nix, Windows, Mac). Again, keep system-specific calls to a minimum or isolate them into a wrapper library that's going to be system specific. You can also target a virtual machine like the JVM and/or CLR, which will allow applications to work across the board.
If you are a web dev, then you are likely to run into too many different alternatives to choose from. I prefer a little web server called Cherokee, and I develop and run ASP.NET (Mono) and Django apps that run on it and use a PgSQL backend.
So the conclusion is that cross-platform development in Linux can be done, provided that you can compile the code on the target platform and keep that in mind while writing your code, or if you target a VM. The other point is that you may run into The Paradox of Choice and not know what to use. For that, read my answer to the second question below.
As to the second question, the best resource I have found is called Open Source Alternatives. This web site lists out commercial software and their open source alternatives. Almost all the alternatives run on Linux and FreeBSD.
_unix.175166 | I have been hammering my head on the wall for the past two days trying to figure out a way to combine two keys to execute a function with xbindkeys.
To make a long story short, I have a Chromebook C720 running openSUSE, and as you may already know, Chromebooks don't have function keys, but they do have hotkeys instead. Since I have already mapped each hotkey with xbindkeys to do what it is supposed to do, my goal now is to combine Ctrl + hotkey to emulate the Fn key.
I can map a single key with xbindkeys and xmodmap to act as an Fn key by using
xmodmap -e 'keycode 72 = F6'
however I can't seem to be able to map two keys to act as such.
Here's the key-code output for Ctrl and F6:
caino@chromebook:~> xbindkeys -k
Press combination of keys or/and click under the window.
You can use one of the two lines after "NoCommand" in $HOME/.xbindkeysrc to bind a key.
(Scheme function)
m:0x0 + c:72
F6
caino@chromebook:~> xbindkeys -k
Press combination of keys or/and click under the window.
You can use one of the two lines after "NoCommand" in $HOME/.xbindkeysrc to bind a key.
(Scheme function)
m:0x4 + c:37
Control + Control_L | How do I combine two keys to act as FN key with xbindkeys? | linux;keyboard shortcuts;opensuse;key mapping;xbindkeys | null
_unix.9816 | So I'm a complete newb when it comes to enterprise-level Linux distros, and Linux servers in general.
I know my way around most Linux desktops, but I'm going to be setting up a small Linux server that multiple people would be able to terminal into (probably through SSH or PuTTY).
How would I go about doing this (storing the users/passwords and such)? And is there a good FREE distro to do this? I was looking at Ubuntu Server; I was gonna do CentOS but I'm a little bit iffy as their latest release is taking a LONGGG time. (We use Red Hat Enterprise 5.3 at work... but obviously I can't afford that lol)
Thanks all.
Edit: Also, how do you make names for the server, so instead of 164.25.252.35 (or whatever IP, I just made that one up) it could be something like tron.dev.sauron.com or something... (yeah, I'm a newb) | Setting a Multi-Terminal Linux Server | linux;terminal;multiuser | First of all, your users should not be using passwords to log in to SSH, and should be using keys+passphrases, unless you absolutely must use passwords for some reason. For general information on how to set up SSH, I would look into specific information for setting up SSH on whatever distribution you end up choosing (most of them will have a tutorial on their site), or just google for "How to set up SSH".
Ubuntu Server is an excellent server distribution (which powers extremely high-traffic servers, such as those that Wikipedia runs on) and has packages for everything you'd need to do this (openssh-server, etc.). They also have very regular releases, so if you're worried about slow release cycles, this will not be a problem.
As far as how names like tron.dev.sauron.com get converted to IP addresses, this is known as domain name resolution. If you are trying to set up a remote server for people to log in to, you're going to need to register a domain name and either (a) run a DNS server yourself, or (b) use a DNS service that will route it to the proper IP. (See this for more info: http://www.boutell.com/newfaq/creating/domainathome.html). The latter is likely a much better option.
_codereview.173728 | I am trying to implement the adjacency list representation of a graph.
Here is the link to the code: http://ide.geeksforgeeks.org/6wjVCw
Now the problem that I am facing is that the double pointer does not seem to work. Why is it happening? I am particularly sceptical about this part of the code:
struct node** adjList(int V, int E)
{
    struct node ** arr = (struct node **)malloc(sizeof(struct node *)*V);
    int u,v;
    for(int i = 0; i < V; i++)
        arr[i]->next = NULL;
What I want to do is to creat | Graph implementation? | graph | null
_cstheory.1956 | Computational geometry is an area I find pretty interesting, and I'd like to devote about a month or two to a project that will introduce me to this and help me learn key concepts.
What is a good way to approach this, and what are the key concepts I should be sure I'm introduced to? | What is a really good problem to get your hands dirty in computational-geometry? | soft question;cg.comp geom | To mix Suresh V.'s and Dave C.'s suggestions, it might be fun to try to gain experimental evidence on an unsolved problem by implementing the necessary algorithms. For example, it is now known that the Delaunay triangulation is not a ($\pi$/2)-spanner [Prosenjit Bose, Luc Devroye, Maarten Löffler, Jack Snoeyink, Vishal Verma: The spanning ratio of the Delaunay triangulation is greater than $\pi$/2. CCCG 2009: 165-167.]
You could implement a Delaunay triangulation algorithm, and shortest paths, and try to determine experimentally what the true spanning ratio might be.
Or, more challenging, try to compute the combinatorial complexity of the Voronoi diagram of lines in $\mathbb{R}^3$, another unsolved problem (and in the list that Suresh mentions as Problem 3.)
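If you want to try that experiment, a rough starting point could look like the sketch below -- a back-of-the-envelope estimate on random point sets, assuming the scipy and networkx libraries are available, not a tuned implementation:
import itertools
import networkx as nx
import numpy as np
from scipy.spatial import Delaunay

def spanning_ratio(points):
    # Build the Delaunay triangulation as a weighted graph.
    tri = Delaunay(points)
    g = nx.Graph()
    for simplex in tri.simplices:
        for i, j in itertools.combinations(simplex, 2):
            g.add_edge(i, j, weight=np.linalg.norm(points[i] - points[j]))
    # Compare graph shortest-path distances against Euclidean distances.
    dist = dict(nx.all_pairs_dijkstra_path_length(g))
    return max(dist[i][j] / np.linalg.norm(points[i] - points[j])
               for i, j in itertools.combinations(range(len(points)), 2))

print(spanning_ratio(np.random.rand(200, 2)))
Random instances will mostly stay well below the worst case, so the interesting part of the project is searching for point configurations that push the ratio up.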
_unix.107041 | I need to make a copy of my home directory and place that copy in the same home directory. The exit code of the command must be 0. Currently, my home directory does not contain any other directories.
Is there a better way than the following? (pwd is the home directory)
mkdir /tmp/temp && cp * /tmp/temp && mv /tmp/temp . | How to copy the home directory in the home directory? | file copy;home | Call rsync and exclude the directory where you're putting the copy.
cd
mkdir copy
rsync -a --exclude=copy . copy
Copying * excludes dot files (files whose name begins with a .), which are common and important in a home directory.
_vi.11387 | Saw this in a .vimrc file:
"====[ Ensure autodoc'd plugins are supported ]===========
runtime plugin/_autodoc.vim
Any idea what autodoc'd plugins are and how they work? | What are autodoc'd plugins? | vimrc | null
_webmaster.106940 | .dj domains appear to suddenly not resolve almost anywhere (lots of complaints incoming from users on .dj websites' social media) i.e. http://www.dj http://plug.dj https://click.djGoing to any .dj domain results in DNS_PROBE_FINISHED_NXDOMAINWhat's happening? When & how will this be solved? | ALL Djibouti (.dj) domains suddenly no longer being resolved on most devices/DNS (DNS_PROBE_FINISHED_NXDOMAIN) | domains;domain registration;top level domains;nameserver;dns servers | null |
_codereview.36841 | This week's review challenge is a poker hand evaluator. I started by enumerating the possible hands:public enum PokerHands{ Pair, TwoPair, ThreeOfKind, Straight, Flush, FullHouse, FourOfKind, StraightFlush, RoyalFlush}Then I thought I was going to need cards... and cards have a suit...public enum CardSuit{ Hearts, Diamonds, Clubs, Spades}...and a nominal value:public enum PlayingCardNominalValue{ Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten, Jack, Queen, King, Ace}So I had enough of a concept to formally define a PlayingCard:public class PlayingCard { public CardSuit Suit { get; private set; } public PlayingCardNominalValue NominalValue { get; private set; } public PlayingCard(CardSuit suit, PlayingCardNominalValue nominalValue) { Suit = suit; NominalValue = nominalValue; }}At this point I had everything I need to write my actual Poker hand evaluator - because the last time I implemented this (like, 10 years ago) it was in VB6 and that I'm now spoiled with .net, I decided to leverage LINQ here:public class PokerGame{ private readonly IDictionary<PokerHands, Func<IEnumerable<PlayingCard>, bool>> _rules; public IDictionary<PokerHands, Func<IEnumerable<PlayingCard>, bool>> Rules { get { return _rules; } } public PokerGame() { // overly verbose for readability Func<IEnumerable<PlayingCard>, bool> hasPair = cards => cards.GroupBy(card => card.NominalValue) .Count(group => group.Count() == 2) == 1; Func<IEnumerable<PlayingCard>, bool> isPair = cards => cards.GroupBy(card => card.NominalValue) .Count(group => group.Count() == 3) == 0 && hasPair(cards); Func<IEnumerable<PlayingCard>, bool> isTwoPair = cards => cards.GroupBy(card => card.NominalValue) .Count(group => group.Count() >= 2) == 2; Func<IEnumerable<PlayingCard>, bool> isStraight = cards => cards.GroupBy(card => card.NominalValue) .Count() == cards.Count() && cards.Max(card => (int) card.NominalValue) - cards.Min(card => (int) card.NominalValue) == 4; Func<IEnumerable<PlayingCard>, bool> hasThreeOfKind = cards => cards.GroupBy(card => card.NominalValue) .Any(group => group.Count() == 3); Func<IEnumerable<PlayingCard>, bool> isThreeOfKind = cards => hasThreeOfKind(cards) && !hasPair(cards); Func<IEnumerable<PlayingCard>, bool> isFlush = cards => cards.GroupBy(card => card.Suit).Count() == 1; Func<IEnumerable<PlayingCard>, bool> isFourOfKind = cards => cards.GroupBy(card => card.NominalValue) .Any(group => group.Count() == 4); Func<IEnumerable<PlayingCard>, bool> isFullHouse = cards => hasPair(cards) && hasThreeOfKind(cards); Func<IEnumerable<PlayingCard>, bool> hasStraightFlush = cards =>isFlush(cards) && isStraight(cards); Func<IEnumerable<PlayingCard>, bool> isRoyalFlush = cards => cards.Min(card => (int)card.NominalValue) == (int)PlayingCardNominalValue.Ten && hasStraightFlush(cards); Func<IEnumerable<PlayingCard>, bool> isStraightFlush = cards => hasStraightFlush(cards) && !isRoyalFlush(cards); _rules = new Dictionary<PokerHands, Func<IEnumerable<PlayingCard>, bool>> { { PokerHands.Pair, isPair }, { PokerHands.TwoPair, isTwoPair }, { PokerHands.ThreeOfKind, isThreeOfKind }, { PokerHands.Straight, isStraight }, { PokerHands.Flush, isFlush }, { PokerHands.FullHouse, isFullHouse }, { PokerHands.FourOfKind, isFourOfKind }, { PokerHands.StraightFlush, isStraightFlush }, { PokerHands.RoyalFlush, isRoyalFlush } }; }} | Poker Hand Evaluator Challenge | c#;linq;weekend challenge;playing cards | public enum PokerHandsThis type should be called in singular (PokerHand). 
When you have a variable of this type, it represents a single hand, not some collection of hands. Your other enums are named correctly in this regard.
public enum CardSuit
public enum PlayingCardNominalValue
You should be consistent. Either start both types with PlayingCard or both with Card. I prefer the former, because this is a playing card library; there is not much chance of confusion with credit cards or other kinds of cards.
private readonly IDictionary<PokerHands, Func<IEnumerable<PlayingCard>, bool>> _rules;
public IDictionary<PokerHands, Func<IEnumerable<PlayingCard>, bool>> Rules { get { return _rules; } }
I would use an auto-property with a private setter (like you did in PlayingCard) here. It won't enforce the readonly constraint, but I think shorter code is worth that here.
Also, this is pretty dangerous code: any user of this class can modify the dictionary. If you're using .Net 4.5, you could change the type to IReadOnlyDictionary (and if you wanted to avoid modifying by casting back to IDictionary, also wrap it in ReadOnlyDictionary).
One more thing: I question whether IDictionary is actually the right type here. I believe that the common operation would be to find the hand for a collection of cards, not finding out whether a given hand matches the cards.
// overly verbose for readability
I agree that the lambdas are overly verbose, but I'm not sure it actually helps readability. What I don't like the most is all of the GroupBy() repetition. What you could do is to create an intermediate data structure that would contain the groups by NominalValue and anything else you need, and then use that in your lambdas.
Func<IEnumerable<PlayingCard>, bool> isStraight = cards => cards.GroupBy(card => card.NominalValue).Count() == cards.Count() && cards.Max(card => (int) card.NominalValue) - cards.Min(card => (int) card.NominalValue) == 4;
What is the cards.Count() supposed to mean? Don't there always have to be 5 cards? The second part of this lambda seems to indicate that.
_unix.281024 | Eclipse scrolling happens with unbearable jitter, whereas other X applications (e.g. Google Chrome) behave more smoothly, what could be the cause? Why would Eclipse be different from other apps?$ lsb_release -aDistributor ID: DebianDescription: Debian GNU/Linux testing-updates (sid)Release: testing-updatesCodename: sid$ gnome-shell --versionGNOME Shell 3.18.1$ uname -aLinux thinkpad 4.5.0-1-amd64 #1 SMP Debian 4.5.1-1 (2016-04-14) x86_64 GNU/Linux$ Xorg -versionX.Org X Server 1.18.3Release Date: 2016-04-04$ cat .eclipseproductversion=4.5.2I istalled TrackPoint support for backports kernels in Jessie or later using these instructions: https://wiki.debian.org/InstallingDebianOn/Thinkpad/Trackpoint (namely, I created the recommended /usr/share/X11/xorg.conf.d/20-thinkpad.conf file, which allows both vertical and horizontal scrolling) | Jittery ThinkPad TrackPoint scrolling in Debian / X / GNOME using Eclipse Mars | debian;gnome;thinkpad;eclipse | null |
_unix.185282 | There are various servers with various OS, we cannot touch the PS1 of them. There is a sysadmin notebook, having CentOS 6.5/GNOME, so using gnome-terminal. We log in via SSH to the servers in gnome-terminal. Question: How can we modify the given gnome-terminals name (can have several tabs) to the remote machines name? It is working sometimes.. on ex.: Ubuntu servers, but looks like not working by default on older servers.. | How to set gnome-terminal to the remove server name without touching PS1? | ssh;gnome terminal | null |
_unix.89216 | We define idle based on how screen savers in Linux define it.
I found this tool called xautolock. I tested it like this:
/usr/X11R6/bin/xautolock -time 1 -locker "notify-send test"
I placed this in /etc/rc.d/rc.local, but for some reason it was not working and I couldn't debug it.
Someone said to place it in .bash_profile. I found this file and placed it in there, but now my GUI won't start.
Because this command is a forever command, it always listens once executed. It never stops listening in order to determine idleness, so this means it cannot go into .bash_profile.
I do not know how to place it into /etc/rc.d/rc.local, so where can it go if it cannot go into these files?
Perhaps there is a way to modify it so it can go into /etc/rc.d/rc.local? Perhaps something like:
DISPLAY=:0.0 /usr/X11R6/bin/xautolock -time 1 -locker "notify-send test"
Would that work?
I'm on CentOS and GNOME. | How to shut down Linux if idle for 30+ minutes? | startup;shutdown | You can't place it in rc.local because it will require a running X session, and rc.local is usually executed before or during starting X. Also, the DISPLAY variable would have to be set, as you already figured out correctly.
If you want to place it in your .bash_profile then just put a & at the end to run it in the background.
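If the end goal really is the shutdown from the title rather than a screen locker, one alternative sketch -- assuming the xprintidle utility is installed and the script is started from within the X session, so DISPLAY is set -- is to poll the idle time yourself:
import subprocess
import time

THIRTY_MINUTES_MS = 30 * 60 * 1000

while True:
    # xprintidle prints the X idle time in milliseconds.
    idle_ms = int(subprocess.check_output(['xprintidle']).decode())
    if idle_ms >= THIRTY_MINUTES_MS:
        subprocess.call(['sudo', 'shutdown', '-h', 'now'])
    time.sleep(60)
Started with a trailing & from a session autostart script, this gives the 30-minute behavior without touching rc.local at all.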
_softwareengineering.134920 | While some platforms in some languages already address this issue, I would like to keep this semi-language-agnostic and to focus on patterns associated with this issue.
I have a data model that contains FirstName, MiddleName, and LastName (to keep things simple). First and Last names are required and may have other rules to ensure their validity. Middle name is optional.
By default, the model is empty and thus, invalid.
When a value changes, an event can be triggered to validate the field that changed.
My question is what patterns can be used to best manage the state of the object since, based on the scenario, I'm doing field-level validation. Should I move to model-level validation on field change, which would address the state issue, or is there an alternate way? | Patterns for Maintaining Model State in Real Time | design patterns;language agnostic | This is a common problem in data validation; when entering new data into the system, before any information is entered by the user, the default working state of the object (mostly nulls) is an invalid state for virtually all other processes of your system.
Validation is always context-dependent. If you have, for instance, a business requirement that all entered people records must have at least a first and last name, then neither field-level validation (which would only be run when setting a field, and so would not catch a failure to set the other field), nor model-level validation (which, if invoked real-time, would return a validation error for this rule when entering the first name because the last name is invalid) would work all the time when implemented naively.
Instead, your domain model has to be made intelligent enough to know when it is OK to be in an inconsistent state, and when it is not. Usually, this is accomplished by providing some sort of scope or context identifier into your validation routines:
When entering a single field, you can, and should, only validate rules that define the behavior of that single field. You simply cannot require the user to enter a last name when they are attempting to specify a first name in a new record, and vice versa.
At certain points, you may know that a subset of your object should now be in a consistent state. This may happen when filling out a multi-page form and attempting to continue to the next page; at that point, you may validate rules that involve one or more fields on the current page. This includes all field-level validations, but also additional multi-field validations, such as making sure date ranges composed of a start and end date are valid (end date after start date, for instance), and that a person has both a first and last name. Sometimes, if one page's data depends on data in a previous page, you may be able to make these validations as well, but in that case those validations should not prevent a person going backwards to fix a mistake; it should only prevent the user continuing now that there is an obvious inconsistency. Understand that allowing real-time validation of values within a page or across multiple pages may introduce undesirable coupling; the validation rules must incorporate logic that is dependent on the structure of the View layer.
Finally, when persisting an object (or retrieving it from persistence for use behind the scenes), it must be wholly consistent.
That includes all field validations, all page-level validations, and additionally all rules involving data spread across multiple pages.
Rules that the model must meet at one of these levels may prevent proper execution of the program when run at a lower level. It is often possible to include the scope or level of validation in a suite of rules that can be run with one method call, thus allowing the rule to pass if the data is consistent enough for a particular level of validation. However, doing so often couples the domain (or controller) to a very specific View, making the design brittle; if you want to move a field to a different page of the View, the validation routines back in the controller or data model may have to change to reflect this. There may not be a good way around this if you want to validate each rule as soon as possible.
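To sketch the scope idea in code (Python here purely for brevity -- the pattern is language agnostic, and FIELD/PAGE/PERSIST are invented names for the validation levels discussed above):
FIELD, PAGE, PERSIST = 1, 2, 3

RULES = [
    # Field level: a first name, once entered, may not be blank.
    (FIELD, lambda m: m.first_name is None or m.first_name.strip() != ''),
    # Page level: leaving the name page requires both first and last name.
    (PAGE, lambda m: bool(m.first_name and m.last_name)),
    # Persistence level: the whole record must be consistent before saving.
    (PERSIST, lambda m: m.middle_name is None or isinstance(m.middle_name, str)),
]

def broken_rules(model, level):
    # Run only the rules whose scope applies at this point in the workflow.
    return [rule for scope, rule in RULES if scope <= level and not rule(model)]
A field-changed event calls broken_rules(model, FIELD), page navigation calls it with PAGE, and the save path calls it with PERSIST, so a half-entered record is never rejected for rules it is not yet required to satisfy.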
_codereview.33548 | I have recently written this Minesweeper game in Python:import randomclass Cell(object): def __init__(self, is_mine, is_visible=False, is_flagged=False): self.is_mine = is_mine self.is_visible = is_visible self.is_flagged = is_flagged def show(self): self.is_visible = True def flag(self): self.is_flagged = not self.is_flagged def place_mine(self): self.is_mine = Trueclass Board(tuple): def __init__(self, tup): super().__init__() self.is_playing = True def __str__(self): board_string = (Mines: + str(self.remaining_mines) + \n + .join([str(i) for i in range(len(self))])) for (row_id, row) in enumerate(self): board_string += \n + str(row_id) + for (col_id, cell) in enumerate(row): if cell.is_visible: if cell.is_mine: board_string += M else: board_string += str(self.count_surrounding(row_id, col_id)) elif cell.is_flagged: board_string += F else: board_string += X board_string += + str(row_id) board_string += \n + .join([str(i) for i in range(len(self))]) return board_string def show(self, row_id, col_id): if not self[row_id][col_id].is_visible: self[row_id][col_id].show() if (self[row_id][col_id].is_mine and not self[row_id][col_id].is_flagged): self.is_playing = False elif self.count_surrounding(row_id, col_id) == 0: [self.show(surr_row, surr_col) for (surr_row, surr_col) in self.get_neighbours(row_id, col_id) if self.is_in_range(surr_row, surr_col)] def flag(self, row_id, col_id): if not self[row_id][col_id].is_visible: self[row_id][col_id].flag() else: print(Cannot add flag, cell already visible.) def place_mine(self, row_id, col_id): self[row_id][col_id].place_mine() def count_surrounding(self, row_id, col_id): count = 0 for (surr_row, surr_col) in self.get_neighbours(row_id, col_id): if (self.is_in_range(surr_row, surr_col) and self[surr_row][surr_col].is_mine): count += 1 return count def get_neighbours(self, row_id, col_id): SURROUNDING = ((-1, -1), (-1, 0), (-1, 1), (0 , -1), (0 , 1), (1 , -1), (1 , 0), (1 , 1)) neighbours = [] for (surr_row, surr_col) in SURROUNDING: neighbours.append((row_id + surr_row, col_id + surr_col)) return neighbours def is_in_range(self, row_id, col_id): return 0 <= row_id < len(self) and 0 <= col_id < len(self) @property def remaining_mines(self): remaining = 0 for row in self: for cell in row: if cell.is_mine: remaining += 1 if cell.is_flagged: remaining -= 1 return remaining @property def is_solved(self): for row in self: for cell in row: if not(cell.is_visible or cell.is_flagged): return False return Truedef create_board(size, mines): board = Board(tuple([tuple([Cell(False) for i in range(size)]) for j in range(size)])) available_pos = list(range((size-1) * (size-1))) for i in range(mines): new_pos = random.choice(available_pos) available_pos.remove(new_pos) (row_id, col_id) = (new_pos % 9, new_pos // 9) board.place_mine(row_id, col_id) return boarddef get_move(board): INSTRUCTIONS = (First, enter the column, followed by the row. To add or remove a flag, add \f\ after the row (for example, 64f would place a flag on the 6th column, 4th row). Enter your move: ) move = input(Enter your move (for help enter \H\): ) if move == H: move = input(INSTRUCTIONS) while not is_valid(move, board): move = input(Invalid input. 
Enter your move (for help enter \H\): ) if move == H: move = input(INSTRUCTIONS) return (int(move[1]), int(move[0]), True if move[-1] == f else False)def is_valid(move_input, board): if move_input == H or (len(move_input) not in (2, 3) or not move_input[:1].isdigit() or int(move_input[0]) not in range(len(board)) or int(move_input[1]) not in range(len(board))): return False if len(move_input) == 3 and move_input[2] != f: return False return Truedef main(): SIZE = 10 MINES = 9 board = create_board(SIZE, MINES) print(board) while board.is_playing and not board.is_solved: (row_id, col_id, is_flag) = get_move(board) if not is_flag: board.show(row_id, col_id) else: board.flag(row_id, col_id) print(board) if board.is_solved: print(Well done! You solved the board!) else: print(Uh oh! You blew up!)if __name__ == __main__: main()I am currently aware that when counting remaining_mines, the property function runs each time, whereas it would have been more efficient only adding and subtracting from the mine count when a mine or flag is placed.When I implemented this, however, it looked quite messy, so I chose readability over performance. Was this the right decision? I also think the for a in b: for c in a: which are repeated throughout the code could be cleaned up. Finally, I wasn't sure whether to use a list comprehension or a normal loop in the final elif of Board.show().Have you got any answers to my questions, or any general tips on how to improve the performance or readability? | Minesweeper in Python | python;game;minesweeper | null |
_unix.209534 | Linux Mint 17.1 (MATE) running on HP G250 Laptop and older HP desktops. It's just me and the dog at home and I like to run the computer all day, but it keeps returning to the login screen after a few minutes of inactivity. Typing the long secure password all day gets tiring and I'd like to at least lengthen the time, or even stop the timeout alltogether. | How to avoid system timeout to login screen when I make a cup of tea? | linux mint;mate;timeout;screensaver;screen lock | null |
_scicomp.8769 | I have a need to get a variable value from another process rank of which I know. This happens in context of a parallel solver of a A x = b equation, for which process with rank 0 knows matrix A, and the other processes put some values (i,j) from this matrix into the matrix of a parallel solver data type (Petsc's Mat). This means that these other processes walk through the range delegated to them, calculate i and j, retrieve ij's element, and call MatSetValue. There is no way to avoid such way - it is a finite differencing method and the parent process has variable values in central points and in neighbouring points (left,right,top,bottom) For example the same happens in ex13F90.F in petsc library examples.The problem is that I don't know a proper MPI subroutine to retrieve the a_central, a_top, a_bottom, ..., values from the parent processes. Right now I tried to broadcast them (MPI_Bcast) but this means that each process has the entire matrix and runs out of memory.Per the answer below, the essence of this question is How do I do one-sized MPI communication?... | mpi retrieve a variable value from process with known rank to process that made an mpi_something call | petsc;mpi | null |
_unix.45198 | I used the uuid command from the uuid-1.6.2-8.fc17.x86_64 package to generate version 1 UUIDs. The man page said that the default is to use the real MAC address of the host, but when I decoded the generated UUID, it is using the local multicast address. uuid v 1 shows:5fc2d464-e1f8-11e1-9c3d-ff8beec65651Decoding with uuid -d 5fc2d464-e1f8-11e1-9c3d-ff8beec65651 shows:encode: STR: c7ee12de-e1f7-11e1-99f1-53d638ec6296 SIV: 265752520555487307909286258714002350742decode: variant: DCE 1.1, ISO/IEC 11578:1996 version: 1 (time and node based) content: time: 2012-08-09 07:56:52.526563.0 UTC clock: 6641 (usually random) node: 53:d6:38:ec:62:96 (local multicast)How can I make it use my actual MAC address, and my time zone (Asia/Tehran, not UTC)? | UUID based on global MAC address | mac address;uuid | The reason it's not using your actual MAC address is because the code is poorly written. The mac_address function in uuid_mac.c has this block of code: if ((s = socket(PF_INET, SOCK_DGRAM, 0)) < 0) return FALSE; sprintf(ifr.ifr_name, eth0); if (ioctl(s, SIOCGIFHWADDR, &ifr) < 0) { close(s); return FALSE; }It's looking for the MAC address of the eth0 interface, and silently falling back to a randomly-generated local multicast address if it can't find it. If your network interface is called eth1 or wlan0 or anything else, it fails to find it.I would consider this a bug in the software. It should use the MAC address of the hardware interface corresponding to the current default route, and let the user specify an alternate interface if desired. I'd recommend reporting that upstream.Regarding timezone: the UUID doesn't store the timezone. The time information in the UUID is stored as UTC time, and so that's how uuid -d displays it. An enhancement to the uuid program might be to provide an option to display times according to the local timezone when decoding -- but either way, that info doesn't get stored inside the UUID itself. |
_codereview.127612 | I was recently tasked with architecting a user/profile object in Objective-C and wanted the ability to access a static instance from the class level, similar in style to the way Parse manages their current user (a static variable on a class, not a singleton). With Parse, I can call a class method to return the current user object if a user is logged in, or nil if there is no current user session with [PFUser currentUser].I architected my class like so:User.h@interface User : NSObject/// Returns the current user if logged in, or nil if logged out+ (User *)currentUser;/// Sets the current user, call this on login+ (void)setCurrentUser:(NSDictionary *)userInfo;/// Removes the current user, call this on logout+ (void)removeCurrentUser;// Getters/// Returns the user's name if logged in, nil if logged out@property (nonatomic, readonly, strong) NSString *name;@endUser.m@interface User ()@property (nonatomic, readwrite, strong) NSString *name;@property (nonatomic, strong) NSString *address;@end@implementation User#pragma mark - Static Variablesstatic User *currentUser;#pragma mark - Static Object Setters and Getters+ (User *)currentUser{ return currentUser;}+ (void)setCurrentUser:(NSDictionary *)userInfo{ currentUser = [[User alloc] initWithDictionary:userInfo];}+ (void)removeCurrentUser{ currentUser = nil;}#pragma init- (instancetype)initWithDictionary:(NSDictionary *)userInfo{ self = [super init]; if (self) { NSString *name = userInfo[@name]; NSString *address = userInfo[@address]; if (!name || !address ) { return nil; } _name = name; _address = address; } return self;}@endThis works well. I have a single static User instance which can be non-nil if a user is logged in, and nil if the user is logged out. I can access this object from anywhere with [User currentUser] without a singleton.Now as an additional challenge I'm trying to translate this type of pattern into Swift. Here is my fist pass:public class User { // Private Properties private var name: String? private var address: String? // Private instance of User private static var user: User? // Class Functions public class func currentUser() -> User? { return self.user } public class func setCurrentUser(userInfo: [String : AnyObject]) { self.user = User(dictionary: userInfo) } public class func removeCurrentUser() { self.user = nil } // Instance methods public func getName() -> String? { return self.name } // Private private convenience init?(dictionary: [String : AnyObject]) { self.init() guard let name = dictionary[name] as? String else { return nil } guard let address = dictionary[address] as? String else { return nil } self.name = name self.address = address }}This works but doesn't feel Swifty enough.Questions:Is there a way to expose a public getter on a private property (sort of like a property redeclaration through a class extension in Objective-C)? Or do I need to write an additional accessor.Does declaring my private User object as private static ensure that it can only be accessed on the class level and not on the instance level? Is this variable really static like it would be in C or Objective-C?Can a User setter and getter be added/modified to remove the need for setCurrentUser and removeCurrentUser?I am open to optimizations in both my Swift and Objective-C code. | Current user as a class-level property in Objective-C and Swift | object oriented;objective c;swift;static | null |
_reverseengineering.13286 | I have an IDAPython script x.py which takes some arguments, which prevents me from simply using alt + F7 and selecting my script.How can I execute this script within IDA Pro and specify the arguments for the script? | Executing an IDAPython script with arguments within IDA Pro | ida;idapython;idapro plugins | Naturally, the best way would be editing the script and have it ask the user for those parameters. IDA has quite a few ways of doing that. You could use one or several of the many idc.Ask* functions. Such as: AskYN, AskLong, AskSelector, AskFunction, AskFile and others. Sometimes when multiple input parameters are needd, it becomes inconvenient to ask for many speciif values, you could then create a full blown dialog instead.You could create a new process using popen or something similar, but I can't say I recommend doing that.If depends on how the python script you're trying to execute is implemented, but you're probably better off trying to include/import it in one pythonic way or another.Importing a protected moduleIf the script is properly written, it probably wraps any execution functionality with an if __name__ == __main__ clause, protecting such cases as executing when imported. If that's the case, simply import it with an import modulename and then call its main/whatever.Importing a sys.argv moduleIf the module directly uses sys.argv and you cannot/would not prevent it from doing so, you can mock your sys.argv before importing the module. Simply doing something like the following should work:sys.argv = ['./script.py', 'command', 'parameter1', 'parameter2', 'optional']import scriptCalling execfile of the fileIf neither of those approaches works for you, you can always directly call execfile and completely control the context in which the python script is executed. You should read the documentation of execfile and eval here and here, respectively. |
_webmaster.81038 | We're having issues with spam filters with our emails. They're not being received by our clients about half of the time. We decided to make sure the SPF and DKIM are correctly set.Suppose I have a hosting with some external domains purchased and linked here.Now, suppose that we're externalising the email management to Google Apps, where we have the main domain as the only company domain, but are also using secondary domains from Gmail, and directly contacting the hosting SMTP server. Others are simply domains purchased that act as an alias of the main domain (though inside the GMail accounts they aren't considered as Alias).With that in mind, I'm having a huge trouble making this work. This is the current situation:Emails sent with the main domain are correctly authenticated.Emails sent with hosting-stmp handled might not be authenticated, but seem to work well.Emails sent with aliased accounts are sent, but via main domain.I've set include:_spf.google.com as a SPF in the hosting panel, per domain. But to which servers should I add the DKIM from Google Apps? To the main domain, all of them? I've set it to the main domain and it seems to work as always. | Where to place the SPF records and Google Apps DKIM on a multidomain website? | email;multiple domains;google apps;spf;dkim | null |
_cogsci.8640 | What physiological changes are seen in the brain when a person is experiencing frustration? What effects do these changes have on learning?Optional background:I'm trying to figure out an exploration schedule (exploration = increasing the noise in the action selection neural population) for Hierarchical Reinforcement Learning and I'm wondering if there's an easy way to base it biologically. | Physiological mapping of frustration | neurobiology;learning;emotion;physiology | null |
_softwareengineering.123146 | I'm starting a hobby project and I'm in the middle of designing its architecture. I would like to make my program plugin-based (never done anything like that before), to make it extensible. Now I'm trying to grasp how such an architecture is created conceptually.Now, this wikipedia article about Plugins says that the host application is supposed to provide a protocol service (among other things) to establish how data is exchanged with the plugin. I don't really understand this tidbit, what does that mean? What kind of data actually needs to be exchanged?EDIT: Just to be clear, I'm not looking for implementation specifics, but for a clear explanation of the mechanism, that the wikipedia article presents. As it stands now, I have no idea what the purpose of a protocol in a plugin-based application is.I figured, the plugin could just fetch relevant data from outside the application on its own and present it to the user, without directly involving the application. What would be the purpose of establishing a protocol for data exchange?It's a desktop application, which will be written mainly in Java, thus object-oriented. The application at its core mainly provides an interface for plugins to register itself in the application and a plugin-manager, which interacts with the plugin. | Designing a plugin-based architecture - what is a protocol service supposed to provide to a plugin? | architecture;plugins;protocol | In my view, plugins are sharing the same address space (& process) as the receiving application and involve dynamic linking.In Linux parlance, I think that a plugin is a dynamically loaded shared object which is dlopen-ed by the application.Then the application has to define what dlsym-ed symbols are expected by the application in the plugin, and how they interact with (i.e. how they are called by) the application.A concrete exemple is given by Gcc plugins (I'm working on MELT, a high-level domain specific language to extend GCC, implemented as a [meta-] plugin); as the document explains, it define some set of conventions that you would call a protocol serviceAdded:So the plugin protocol is the set of conventions (and associated API and names) defining how the plugin is installed, and which plugin's functions (and names) are expected, in what order and conditions they are invoked by the application, and which application data and API the plugin can access (and modify). |
_unix.289932 | I found this expression for a homework assignment that will print all lines containing a vowel (a, e, i, o, or u) followed by a single character followed by the same vowel again. Thus, it will find eve or adam but not vera. The expression works correctly but I am looking for someone who can explain what each part does so I can further understand how it works. | Can anyone explain this expression piece by piece please? grep '\([aeiou]\).\1' | grep | null |
_unix.186750 | I shrunk my root partition and it seems nice. But I am thinking about overwriting now at least the most important files from the backup copy (external drive, rsync, weekly backup) in order to be sure that none of my files got corrupted during the shrinking. That is probably a waste of time (and perhaps it may result in more fragmentation). I can check that the files are OK after moving them during the shrinking by means of a CRC comparison with those in the backup (e.g. with md5sum), as one user kindly says in his answer.
But specifically I would like a short explanation of the algorithm that GNU Parted uses in order to ensure that no data corruption happens while moving information from one sector of the disk to another, prior to the shrinking of the partition. Is there such an algorithm, or does the program copy bytes blindly? I would like to read a simple explanation. | Why can I rest assured that GNU Parted has not corrupted a single bit after shrinking my partition? | partition;gparted;parted;corruption;integrity | Why can I rest assured that GNU Parted has not corrupted a single bit after shrinking my partition?
You can't; in fact, the gparted man page clearly says (under NOTES): "Editing partitions has the potential to cause LOSS of DATA. ... You are advised to BACKUP your DATA before using the gparted application."
Reboot your system after resizing the partition and run fsck. If it doesn't find any errors then the operation was successful and the data is intact.
There have been issues in the past with gparted corrupting data when resizing partitions even though it wasn't reporting any error (e.g. see this thread on their forum and the warning linked there).
When resizing, (g)parted only moves the END position of partition NUMBER. It does not modify any filesystem present in the partition. Underneath, gparted uses fs-specific tools to grow/shrink the filesystem.
You can get detailed information for each operation, as per the online manual: "To view more information, click Details. The application displays more details about operations. To view more information about the steps in each operation, click the arrow button beside each step."
Let's see what it actually does when shrinking an ext4 partition (skipping the calibrate & fsck steps):
shrink file system 00:00:02 ( SUCCESS )
resize2fs -p /dev/sdd1 409600K
Resizing the filesystem on /dev/sdd1 to 409600 (1k) blocks.
Begin pass 3 (max = 63)
Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/sdd1 is now 409600 (1k) blocks long.
resize2fs 1.42.12 (29-Aug-2014)
As you can see, gparted does nothing itself: it just calls resize2fs -p with the specified device and new size as arguments. If you're interested in the algorithm you could look at resize2fs.c. In short:
Resizing a filesystem consists of the following phases:
1. Adjust superblock and write out new parts of the inode table.
2. Determine blocks which need to be relocated, and copy the contents of blocks from their old locations to the new ones.
3. Scan the inode table, doing the following:
a. If blocks have been moved, update the block pointers in the inodes and indirect blocks to point at the new block locations.
b. If parts of the inode table need to be evacuated, copy inodes from their old locations to their new ones.
c. If (b) needs to be done, note which blocks contain directory information, since we will need to update the directory information.
4. Update the directory blocks with the new inode locations.
5.
Move the inode tables, if necessary.
Filesystem resizing should be a safe operation, as per one of the authors, Ted Ts'o: "resize2fs is designed not to corrupt data even if someone hits the Big Red switch while it is operating. That was an explicit design goal." But like all code, it isn't bug-free.
Once the filesystem resize is done, gparted shrinks the partition:
shrink partition from 500.00 MiB to 400.00 MiB 00:00:00 ( SUCCESS )
old start: 2048
old end: 1026047
old size: 1024000 (500.00 MiB)
new start: 2048
new end: 821247
new size: 819200 (400.00 MiB)
Bottom line: always back up your data before altering partitions/filesystems, and run fsck after making the changes.
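And if, like the asker, you want belt-and-braces verification against the rsync backup afterwards, a checksum walk is easy to script. A rough sketch -- the /home and /mnt/backup/home paths are placeholders, and md5 over whole files is fine for corruption checks, though you should point it at data directories rather than / itself:
import hashlib
import os

def tree_checksums(root):
    # Map each file's path (relative to root) to its md5 digest.
    sums = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                sums[os.path.relpath(path, root)] = hashlib.md5(f.read()).hexdigest()
    return sums

live = tree_checksums('/home')
backup = tree_checksums('/mnt/backup/home')
for path in sorted(set(live) | set(backup)):
    if live.get(path) != backup.get(path):
        print('MISMATCH:', path)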
_unix.223232 | I'm doing a bash script to backup my computer to a local server. I need to compress the archives but I can't find a way to make this if condition work with an ssh command inside:if [ ssh [email protected] '$(ls -d /snapshots/$(date -v -7d +%Y%m%d)* 2> /dev/null | wc -l) != 0' ]then ssh [email protected] tar -czf $ARCHIVES_DIR/$YESTERDAY.tar.gz $SNAPSHOT_DIR/$YESTERDAY* \ && rm -rf $SNAPSHOT_DIR/$YESTERDAY*fiI've got a Too many arguments (inside the if) error.What am I doing wrong? | If condition with ssh command inside | shell script;test | I'd suggest you simplify the construct and give the next person reading the code a chance to see what's going on. Your main issue is that you seem to be confusing your indirect execution $( ... ) with [ ... ] as a test operator. Apologies if I've misunderstood the flow, but I think this is what you intend:# Count files on the remote system and confirm that there is at least oneDATE7=$(date -v -7d '+%Y%m%d')NFILES=$(ssh [email protected] ls -d '/snapshots/$DATE7'* 2> /dev/null | wc -l)# If the ssh worked and we have found files then archive themif [ $? -eq 0 && 0 -lt $NFILES ]then # Archive the files ssh [email protected] tar -czf '$ARCHIVES_DIR/$YESTERDAY.tar.gz' '$SNAPSHOT_DIR/$YESTERDAY'* && rm -rf '$SNAPSHOT_DIR/$YESTERDAY' fiThis supposes that ARCHIVES_DIR, SNAPSHOT_DIR and YESTERDAY are defined locally elsewhere in your script.Remember that ... will interpolate variables' values immediately, whereas '...' will treat text such as $WIDGET as a literal seven character string starting with a dollar symbol. This is important to note given I have got sequences like '...' and ' ... ' in this code. |
_unix.206633 | Assume you can install something on a system because you have sudo rights to do so, but only have sudo rights for the installer. In that case it is fairly easy to create a package that installs a binary owned by root that has the setuid bit set during installation, and have that binary execute any command that you feed it, as root. This makes it insecure to allow limited sudo access for any given user to a package that can arbitrarily change change permissions. The other obvious (IMO) security hole is that a package can update the /etc/sudoers file and grant the user all kind of additional rights.As far as I know apt-get nor yum have an option that you can set, or check how they are invoked, that causes what is installed in the normal, default, locations, but in a limited way (e.g. not overwriting already available files, or not setting setuid bits). Did I miss something and does installation with such restrictions exists? Is it available in other installers? Or are there other known workarounds that would make such restrictions ineffective (and implementing them a waste of time)? | Disallow `apt-get`, `yum` to install setuid binaries when itself run via sudo | software installation;sudo;account restrictions | This is probably doable with an SELinux policy (and probably not doable without SELinux or other a security module that can confine root), but it's pointless.As you note, a package could declare that it installs /etc/sudoers. Even if you make an ad hoc rule to somehow prevent that, the package could drop a file in /etc/sudoers.d. Or it could drop a file in /etc/profile.d, to be read the next time any user logs in. Or it could add a service that's started by root at boot time. The list goes on and on; it's unmanageable, and even if you caught the problematic cases, you'd have prevented so many packages from installing that you might as well not bother (for example, that facility wouldn't allow most security updates). Another thing the package could do is to install a program that you'd be tricked into using later (for example, if you forbid write access to /bin altogether, it could install /usr/local/bin/ls) and which injects a backdoor via your account the next time you invoke the program. To prevent a package installation from injecting a potential security hole, you need to either restrict the installation to trusted packages, or to make sure you never use the installed packages.Basically, if you don't trust a user, then you can't let them install arbitrary packages on your system. Let them install software in their home directory if they need something that isn't in the distribution.If you want to give an untrusted user the ability to install more packages (from a predefined list of sources that you approve as safe) or upgrade existing packages on the main system, that can be safe, but you need to take precautions, in particular to disable interaction during the installation. See Is it safe for my ssh user to be given passwordless sudo for `apt-get update` and `apt-get upgrade`? for some ideas about apt-get upgrade.Under recent Linux versions (kernel 3.8), any user can start a user namespace in which they have user ID 0. This basically allows a user to install their own distribution in their own directory. |
_codereview.146616 | This is my first time using classes objects and functions. What can I improve on?#include <iostream>#include <conio.h>using namespace std;class functions{public: void Body(){ cout << :: Welcome to Taylor's CALCULATOR! :: << endl; } int Addition(int x, int y){ int ans = x + y; return ans; } int Subtraction(int x, int y){ int ans = x - y; return ans; } int Multiplication(int x, int y){ int ans = x * y; return ans; } int Division(int x, int y){ int ans = x / y; return ans; }};int main(){int func;int x, y;functions key; //Objectkey.Body(); //Objectcout << What function do you want to use? << endl;cout << 1 - Addition << endl;cout << 2 - Subtraction << endl;cout << 3 - Multiplication << endl;cout << 4 - Division << endl;cout << Input: << endl;cin >> func;cout << endl;switch(func){ case 1: //Addition cout << **ADDITION** << endl; cout << Please enter first number: << endl; cin >> x; cout << Please enter second number: << endl; cin >> y; cout << x << + << y << = ; cout << key.Addition(x, y); break; case 2: //Subtraction cout << **SUBTRACTION** << endl; cout << Please enter first number: << endl; cin >> x; cout << Please enter second number: << endl; cin >> y; cout << x << - << y << = ; cout << key.Subtraction(x, y); break; case 3: //Multiplication cout << **MULTIPLICATION** << endl; cout << Please enter first number: << endl; cin >> x; cout << Please enter second number: << endl; cin >> y; cout << x << x << y << = ; cout << key.Multiplication(x, y); break; case 4: //Division cout << **DIVISION** << endl; cout << Please enter first number: << endl; cin >> x; cout << Please enter second number: << endl; cin >> y; cout << x << / << y << = ; cout << key.Division(x, y); break; default: cout << Invalid Input...; break;}} | C++ calculator using classes | c++;beginner;object oriented;calculator | null |
_reverseengineering.11741 | I have disassembled the exe in IDA 6.1 and I think I found a hand full of text files and was wondering how to go about dialing into the addresses and extracting the data. Here is what I foundI know how to code a bit in C and .net and thought maybe it would be possible with guidance. Thanks in advance | Can I extract .txt files from .exe if I know their addresses? | ida;c# | null |
_unix.203401 | I have recently installed CentOS (into a machine with only one hard drive) and I would like to know how to Partition the main hard drive into two.As it is a fresh install there is no data to lose and I am using linux rescue from a live CD 50GB[/dev/sda ] 25GB 25G[/dev/sda1][/dev/sda2]These are bogus numbers at the moment and I doubt the result will be what I expect but anything close or any ideas would be really great | CentOS 6 Partitioning Root Drive | centos;partition;hard disk;split | null |
_unix.349146 | Question 1: are the following rules equal?iptables -t raw -A PREROUTING -p tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROPiptables -t raw -A PREROUTING -p tcp --tcp-flags ALL NONE -j DROPQuestion 2: are the following rules equal?iptables -t raw -A PREROUTING -p tcp --tcp-flags FIN,SYN FIN,SYN -j DROPiptables -t raw -A PREROUTING -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROPI'm new to iptables and I'm a bit confused because some tutorials suggest to use those four rules. | iptables --tcp-flags | iptables | null |
_unix.388032 | Trying to dual-boot Linux with windows 10. I formatted my flash drive with Rufus, then copied the 18.2 64-bit Linux mint Cinnamon file to the root of the drive. When I try to boot from the drive, it opens FreeDOS, and won't boot to the GUI. | Linux Mint Cinnamon won't boot to GUI, only FreeDOS | linux mint;linux kernel;gui;freedos | null |
_codereview.78357 | I wrote the following that spawns n threads, uses them to process a queue of jobs, then returns a result. As well as any general suggestions, I'd like feedback on the following: How safe is SyncQueue? Previously, it went into a deadlock on inexpensive tasks, but I changed the notify to notifyAll, and I haven't had any problems since. I'd still like it looked at though. Is there a better way to delay execution of the jobs? I'm using an implicit to make the delay method available, but it would be nice to have it be completely implicit. Does the JVM prevent the stdout from interleaving? I remember back from C++ that outputting over different threads at once ended up creating a mess of interleaved text, but this doesn't. For the sole purpose of testing, is outputting text from several threads at once in any way harmful? SyncQueue.scala - A theoretically thread-safe, mutable FIFO queue:package threadPoolimport scala.collection.mutable.Queueclass SyncQueue[A] { private val q: Queue[A] = new Queue def nQueued: Int = synchronized { q.length } def available: Boolean = synchronized { nQueued > 0 } def pop: A = synchronized { if (!available) { wait; pop } else q.dequeue } def push(x: A) = synchronized { q enqueue x; notifyAll } def toList: List[A] = synchronized { q.toList } def clearQ = synchronized { q.clear }}JobQ.scala - A wrapper over 2 SyncQueues that helps with adding jobs/collecting results:package threadPoolimport scala.concurrent._class JobQ[Result] { type Job = () => Result type PossibleResult = Either[Throwable,Result] private val workQ = new SyncQueue[Job] private val resQ = new SyncQueue[PossibleResult] //So it knows when all jobs are finished private var runningJobs = 0 //Waits until all started jobs are finished def waitForJobsToFinish(checkDelayMS: Int) = while(!allJobsFinished) Thread sleep checkDelayMS def allJobsFinished: Boolean = synchronized { runningJobs == 0 } def jobsAvailable: Boolean = workQ.available def resultsAvailable: Boolean = resQ.available def giveJob(job: Job) = synchronized { workQ push job runningJobs += 1 } def giveJobs(jobs: Seq[Job]) = jobs map (giveJob(_)) def getJob: Job = workQ.pop def giveResult(result: PossibleResult) = { resQ push result runningJobs -= 1 } def getResults: List[PossibleResult] = { val xs = resQ.toList resQ.clearQ xs }}Worker.scala - The Runnable used by each thread. 
It forms an infinite loop of taking a job, processing it, and queuing the result:package threadPoolclass Worker[Result](jobQ: JobQ[Result]) extends Runnable { def run = while (true) { val job = jobQ.getJob //blocks until a job is made available val result: Either[Throwable,Result] = try { Right( job() ) //Long computation } catch { case e: Throwable => Left(e) } jobQ giveResult result }}Timer.scala - Used to assist the timing in the test:package threadPoolimport java.util.Datecase class Timer(startTime: Long = new Date().getTime) { private def curMs: Long = new Date().getTime def restart: Timer = Timer(curMs) def stop: Long = curMs - startTime def lap: (Long, Timer) = { val curTime = curMs (curTime - startTime,Timer(curTime)) }}object Timer { def timeBlock(body: => Unit): Long = { val t = Timer() body t.stop }}ThreadPool.scala - Spawns the threads, and manages the queues:package threadPoolimport java.lang.Runtime._import scala.util.Random._object Implicits { implicit class delayCall[A](body: => A) { def delay: (() => A) = () => body }}class ThreadPool[Result](nThreads: Int) { //By default, it spawns 1 thread per available processor def this() = this(Runtime.getRuntime.availableProcessors) type Job = () => Result type PossibleResult = Either[Throwable,Result] val jobQ: JobQ[Result] = new JobQ val threads = 1 to nThreads map { _=> new Thread( new Worker(jobQ) ) } def start = threads map (_.start) def giveJob(job: Job) = jobQ giveJob job def giveJobs(jobs: Seq[Job]) = jobs map (giveJob(_)) def getResultsIfDone: Option[List[PossibleResult]] = if(jobQ.jobsAvailable) None else Some(jobQ.getResults) def waitForResults: List[PossibleResult] = { jobQ waitForJobsToFinish 500 jobQ.getResults }}The Main - Just a sample case:object ThreadPoolTest extends App { import Implicits._ val nThreads = 4 val nJobs = 10 val pool = new ThreadPool[Long](nThreads) //Returns the time taken to execute, to be summed and compared later def expensiveLong(id: Int): Long = Timer.timeBlock { val s = scala.util.Random.nextInt(20000) println(s"Starting expensive task $id: ${s / 60000.0} minutes") Thread.sleep(s) println(s"\tEnding $id: Started ${s / 60000.0} minutes ago") } val jobs: List[() => Long] = (1 to nJobs).toList map { id => expensiveLong(id).delay } pool.giveJobs(jobs) pool.start var rs: List[Either[Throwable,Long]] = Nil //timeBlock will time all the executions to compare against val realTime:Long = Timer.timeBlock { rs = pool.waitForResults } //Print results println(rs) //For this test, I'm having it crash on an error, because any exceptions would invalidate the results (sum of times taken) val checkedResults: List[Long] = rs map { case Left(e) => throw e case Right(r) => r } val sum = checkedResults. foldLeft(0L)(_+_) println(s"Done:\n\tTotal Time Needed:\t${sum / 60000.0} minutes\n\tTime Spent:\t\t\t${realTime / 60000.0} minutes") println(((sum * 1.0) / realTime) + "x faster") println((realTime / nThreads / 1000.0) + " seconds per thread")}Since posting this, I've noticed a couple of things: giveResult in JobQ isn't synchronized, which I believe is the cause of a deadlock problem I noticed when running inexpensive tasks (not actually a deadlock, but for the better part of today, that's what I was trying to diagnose). I saw a post that mentioned it's good practice not to lock on this, so in JobQ and SyncQueue, I switched to using a separate lock object (defined as class Lock). I haven't changed the above code though. | A Simple Thread Pool in Scala | multithreading;scala | null
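For comparison, here is the same producer/worker/result-queue shape built from Python's standard library, where queue.Queue provides the blocking pop that SyncQueue implements by hand; the squared-number jobs are trivial placeholders:

import queue
import threading

jobs, results = queue.Queue(), queue.Queue()

def worker():
    while True:
        job = jobs.get()                 # blocks until a job is available
        try:
            results.put(("ok", job()))   # mirrors Right(job())
        except Exception as e:
            results.put(("err", e))      # mirrors Left(e)
        finally:
            jobs.task_done()

for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

for n in range(10):
    jobs.put(lambda n=n: n * n)

jobs.join()                              # wait for all started jobs to finish
print(sorted(results.get()[1] for _ in range(10)))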
_webapps.99245 | Back in the day I was using EndNote to document references; e.g., if Wikipedia was consulted, it was possible to add a link to EndNote and the program was able to sort the references. At the moment I have to update the references manually every time a new reference is added. Discussion: this Q&A was found, although it does not solve the issue; an issue was reported using the send feedback button as described in one of its answers. | Equivalent of EndNote in Google Docs in order to document references? | google documents | null
_softwareengineering.311813 | I am writing a small app that manages a couple of recipes. I have a MySQL database that is used by my data persistence layer. I need some kind of id in my business objects representing the recipes to use my persistence layer. Currently I am just calling SELECT LAST_INSERT_ID() on my database after I insert a new recipe, and then assign that id. In a project at my part-time job we use a different approach: we generate the ids in our application before inserting and then use them from there. I think my current approach is better, because I only generate an int (the other application uses an id-generator that factors in time etc.), which should be faster for lookups, and I can scale the system for multiple applications (because I can put the insert and last_insert_id calls into the same transaction) and I can easily roll back if something goes wrong while executing the transaction. Is my reasoning right? Or is there something I'm missing? | Generate id in application or use database generated one? | design;mysql;dao | "Currently I am just calling SELECT LAST_INSERT_ID() on my database": This may give unwanted effects when your application is used by several users at the same time. (Another user could have inserted a new record in the table between the intended insert and the request for the id.) Unless you perform this request within the same database transaction. Most persistence layers will return the generated ID to use in your application. You should look up how this works in your chosen environment. Generating an ID in your application can have a similar effect if the last ID given out is requested from the database, incremented in software and then used to insert a new database record. However, if the ID generated in the application is guaranteed to be unique (more or less) by some mechanism, this is an acceptable approach. Assuming both approaches are safe for multi-user environments, both approaches can be acceptable. The points you name as advantages of your approach can also be applied to the approach of generating an ID. Both approaches can scale and both approaches can be rolled back in the database. An advantage the latter approach might have over the first is that it is less dependent on the specific database engine. Thus if a different database engine is chosen at a later point, it may be easier to migrate the software to it (for instance because of differing syntax to request the last inserted ID).
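To make the answer's first point concrete in code: most drivers report the generated id for the exact insert you executed on your own connection, so no other user's insert can interleave. A minimal sketch with Python's sqlite3; the recipe schema is invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recipe (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

with conn:  # one transaction for the whole operation
    cur = conn.execute("INSERT INTO recipe (name) VALUES (?)", ("pancakes",))
    new_id = cur.lastrowid  # the id generated by this insert, not a separate query

print(new_id)  # 1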
_softwareengineering.143181 | I've seen this in a lot of IDEs (even in the most popular and heavily used ones, like Visual Studio): if you want to watch a variable's value, you have to manually type its name in the Watches section of the debugger. Why can't there just be a list of all of them with checkboxes next to them? The developer can then just check the box next to the one he wants to watch and that's it. Variables with identical names can probably be numbered in some way (for example a, b, x(1), x(2), c, etc.). I've seen some exceptions to this (NetBeans or BlueJ), but there are exceptions to everything, right? Maybe it's a stupid question, maybe not, but I've always wondered why this is so. | Why do you have to manually type variable names while debugging? | ide;debugging | I've actually never seen an IDE (haven't worked with Visual Studio though) where the debugger didn't have a view that shows you all the variables of the current stack frame. A watch expression view is provided separately because it allows you to have complex expressions (that may include method calls as well as variables) computed automatically.
_cogsci.14157 | When training fine motor skills, are identically set-up practice sessions ideal, or, like machine learning, does adding noise/variability to the practice sessions increase skill acquisition? | Effect of variability during motor skill training | cognitive psychology;learning;motor | According to "Motor Skills Are Strengthened through Reconsolidation" (available through SciHub), adding variability to practice sessions increases learning speed. In the paper, patients were directed to move a cursor on a screen to certain targets via pinch force. Patients whose pinch force mapping was modified during each trial ended up learning faster and performing better on the original pinch task.
_unix.287512 | I am having an odd problem when I boot. It brings me to recovery mode and tells me to press Ctrl-D or enter my root password. When I enter the root password, I mount my /dev/mapper/sdc1_crypt, which is my /home drive. If I log out of the root shell it then launches lightdm and I can sign into my user account as if nothing happened. How can I fix it so I don't have to do this each time I boot? I'm using Debian. | Linux boots to recovery mode, mounting /home and logging out brings up lightdm | mount;login | null
_softwareengineering.287292 | I have some source code I want to release but I'm unsure of the best license to put on it. From my research, such licenses as MIT and GNU are horrible as they offer little to no protection towards crediting the original authors nor controlling distribution. GPL seems to provide some protection, but users are still allowed to modify the code and redistribute it as their own, and I don't want that. What I want is relatively simple from my standpoint. User/Licensee can: - Use source code - Compile source code - Modify source code for personal use User/Licensee MUST (if I [Author] allow distribution upon request): - Credit original author - Credit original hosting site - Link original hosting site's page with the code - NEVER claim or alter any credits, licenses, copyrights, etc, etc User/Licensee cannot: - Distribute source code/Distribute it without author's written consent - Modify source code and release it - Create derivative works with source code - Use source code in other software - Remove/Alter copyrights, credits, licenses, etc, etc. - Sell source code - Sue/hold liable the original author in ANY way shape or form for anything (standard legal disclaimer and disclosure agreement - similar to MIT and others) This may seem unreasonably restrictive, but it's mostly to protect credits, as I've had numerous people use my code in the past and claim it as their own or alter credits/copyrights or post it on sites that I don't want my work on. I've looked up some licenses that seem correct to implement, but from what I can see there are problems. MS-RSL - Restricts a lot of the clauses I have, but the user can't use the source code, or can they? As it says, it's just reference material, yet they can use it for debugging/etc only. Can this license (or any) be slightly tweaked in its terms/clauses? No License - Just a copyright, but this seems like an oxymoron in some degrees; what notice or other conditions does it specify so the user/licensee knows what's allowed/not? Can I specify my own clauses? Is this legal? Wouldn't this be tantamount to writing my own license? I found a site called Binpress where you can create your own licenses (or it seems so), but are these enforceable? There are clauses about payments and such which seem to contradict conditions. Licensees are allowed to distribute the code even when one selects "No distribution", so the conditions seem to be negligible. Anyone use this before? Anyone know of a license that will satisfy the above requirements, or any advice? P.S. I did read other topics on the matter, but found most to be circumstantial or very vague on some matters. I read an article by Jeff Atwood, but the article "Pick a License, Any License" seemed to just regurgitate everything I already know or found out via other sites. It doesn't offer any deep in-detail information, explain each license in-depth, or explain anything related to altering/using licenses. He compared licenses to other licenses, which is useless because if I don't know what license X is then comparing to license Y is about as useful as speaking Chinese to me. Any advice, or if anybody can answer the above questions, would be deeply appreciated. If you need me to give more detail then please let me know, but I think I have explained things well enough. 
:) | Licensing - Restrictive Open Source License OR Custom License | licensing;open source;legal;source code;reference | The basic idea behind open-source licensing is that anyone who has (legally) obtained a copy of the source code also has the right to make modifications and the right to distribute the modified or original work. The main difference between open-source licenses is in what rights you must give away when re-distributing the work. With one or two exceptions, all licenses require that at least the copyright and license statements must be kept intact. At most, you may add your own copyright statement if you made modifications [1]. The exceptions are when the work is placed in the public domain or an equivalent license (like CC0). Even a very permissive license like MIT requires that the license and copyright remain intact. If people don't respect the license terms, then it is possible to take legal action against them for violating your Intellectual Property rights. At the very least, you can request them to re-instate your copyright and/or license. [1] Copyright statements could be removed if it can be proven that all contributions by that copyright holder have been removed from the code. If you don't like that basic principle behind open source, that everyone can share the code, then you should use a proprietary/closed-source license. These licenses are typically specific for a particular product or manufacturer and are not easy to re-use by someone else. Your best option is to have a lawyer write a license that exactly fits your desires.
_codereview.167460 | I have written simple code for Runge-Kutta fourth-order integration to solve a system of ordinary differential equations and parallelized it using OpenMP. I don't know if it is the best we can do for maximum performance of the code with little effort. I need all values of x to be returned, so I kept the values at all steps. I also create threads in each time step and parallelized in position, i.e. the pragma omp parallel is inside the loop over time. Here is my try: (link to gitlab repository)//RHS for a system of equationsvoid xprsys(const int n,const vector<double>& x, vector<double>& f){ /* * n : number of equations * x : value in each time step * f : RHS of equations dx/dt = f */ double sum1=0; #pragma omp parallel for reduction(+:sum1) for (int i=0; i<n; i++){ sum1 = 0; for(int j=0; j<n; j++) sum1 += sin(x[j]-x[i]); f[i] = M_PI + 2.0 * sum1; }}void SolveRK4(const int n, double h,vector<double> x, vector<vector<double>>& x_vec, int nstep, vector<double>& times){ /* * times : vector contains the time 0 : t_final step dt * x_vec : [nstep by N] 2 Dimensional vector * dim1 : defined as typedef vector<double> dim1 */ times[0] = 0.0; dim1 y(n); dim1 f1(n),f2(n),f3(n),f4(n); double half_h = 0.5 * h; double h_sixth = h/6.0; // x_vec[nstepxN] for (int i=0; i<n; i++) x_vec[0][i] = x[i]; for (int k=1; k<nstep; k++){ times[k] = k*h; xprsys(n,x,f1); #pragma omp parallel for for(int i=0; i<n; i++) y[i] = x[i] + half_h * f1[i]; #pragma omp master xprsys(n,y,f2); #pragma omp barrier #pragma omp for for(int i=0; i<n; i++) y[i] = x[i] + half_h * f2[i]; #pragma omp master xprsys(n,y,f3); #pragma omp barrier #pragma omp for for(int i=0; i<n; i++) y[i] = x[i] + h * f3[i]; #pragma omp master xprsys(n,y,f4); #pragma omp barrier #pragma omp for for(int j=0; j<n; j++) { x[j] = x[j] + h_sixth * (f1[j] + f4[j] + 2.0 * (f2[j] + f3[j])); x_vec[k][j] = x[j]; } }} | Runge-Kutta fourth order integration | c++;performance;numerical methods;openmp | null
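For readers following the algorithm rather than the OpenMP details, this is the classical RK4 step in plain single-threaded Python; it is a sketch of the general form, so f takes (t, x) even though the question's system is autonomous:

def rk4_step(f, t, x, h):
    # f(t, x) returns dx/dt as a list; x is the current state vector.
    k1 = f(t, x)
    k2 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
    k3 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
    k4 = f(t + h, [xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Example: dx/dt = -x, whose exact solution is exp(-t).
print(rk4_step(lambda t, x: [-xi for xi in x], 0.0, [1.0], 0.1))  # ~[0.90483741...]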
_codereview.103786 | Summary: I am experimenting with a plugin pattern for a generic Python application (not necessarily web, desktop, or console) which would allow packages dropped into a plugin folder to be used according to the contract they would need to follow. In my case, this contract is simply to have a function called do_plugin_stuff(). I'd like the pattern to make sense for a system that sells plugins like the plugin store in WordPress. "Minimal Python plugin mechanism" is a decent question (despite being 4 years old) with some very good discussion about Django (which I haven't used) and how it allows for a plugin to be installed anywhere via pip. I'd see that as a phase two, because it seems like a pip-based plugin pattern is (sweeping generalization, probably not always true) most valuable for a purely free (as in money) plugin store. If free (as in open source) plugins are sold for money in a store, it seems that pip would be a poor choice for installation because, while people might pay for something they're about to get the source code for and use freely / redistribute, they might be unlikely to pay / donate for something they've already installed. Project Structure / Code: It's also on GitHub under my same username (PaluMacil) and I made a release tag of v1.0.0 to freeze the repo at the code shown below.app/plugins/blog/__init__.pydef do_plugin_stuff(): print("I'm a blog!")app/plugins/toaster/__init__.pydef do_plugin_stuff(): print("I'm a toaster!")app/plugins/__init__.py(empty)app/__init__.pyfrom importlib import import_modulefrom os import path, listdirdef create_app(): app = Application() plugin_dir = path.join(path.dirname(__file__), 'plugins') import_string_list = [''.join(['.plugins.', d]) for d in listdir(plugin_dir) if path.isdir(path.join(plugin_dir, d)) and not d.startswith('__')] print(str(len(import_string_list)) + " imports to do...") for import_string in import_string_list: module = import_module(import_string, __package__) app.plugins.update({module.__name__.split('.')[2]: module}) print(str(len(app.plugins)) + " plugins in the app") return appclass Application: def __init__(self): self.plugins = {}The line not d.startswith('__') eliminated my __pycache__ dir from PyCharm.run.pyfrom app import create_appfrom pprint import PrettyPrinterapp = create_app()app.plugins['toaster'].do_plugin_stuff()printer = PrettyPrinter(indent=4)printer.pprint(app.plugins.__repr__())Points for Review: I'm new enough to Python (very new, but coming from a decent C# background, and I read PEP 8 before attempting this) that I've never written a Python 2 application. I think my method of importing requires Python 3.3 or 3.4, though I'm not certain. Commentary on this might be nice. Ways of making this code accessible to earlier versions of Python seem to be messy; they involve conditional imports and such, which are verbose and ugly. If there is a trick or two that would make my code better for different versions of Python with minimal cruft, that would be great to see. Am I missing anything that makes my code much more verbose than it should be? For instance, I'm iterating twice through the directories--once to make a list of packages, and again to make my dictionary. Would it be cleaner to make both parts one loop? 
The one-loop alternative seems verbose, but there could be further improvements, perhaps:# Alternative to current code which uses a single loop:for d in listdir(plugin_dir): if path.isdir(path.join(plugin_dir, d)) and not d.startswith('__'): module = import_module(''.join(['.plugins.', d]), __package__) app.plugins.update({module.__name__.split('.')[2]: module})Is module.__name__.split('.')[2] a fragile way to get the value for my plugin dictionary? Would [-1] be a better index to use on the result of the split? I'm having trouble understanding why I might choose to use pkgutil.iter_modules instead of my approach, but I'm wondering if there might be some benefit. It seems to be based on importlib since Python 3.3 (PEP 302). Would the only difference be that I wouldn't pull in a folder that doesn't have an __init__.py inside it to make it a package? | Plugin Pattern for Generic Python Application | python;plugin | You shouldn't call str on the int returned from len; instead use str.format. "{} plugins in the app".format(len(app.plugins)) Format will coerce the int to a string implicitly and is neater to read. Also, you're calling repr backwards. The whole point of an object having a __repr__ function is that it allows an object to be passed to repr(). So you could change app.plugins.__repr__() to repr(app.plugins).
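The formatting point from the answer, runnable on its own; the plugins dict is a stand-in:

plugins = {"blog": object(), "toaster": object()}
print("{} plugins in the app".format(len(plugins)))   # format coerces the int
print(f"{len(plugins)} plugins in the app")           # f-string equivalent, Python 3.6+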
_codereview.55983 | I've written an asynchronous retry method as an answer for this question. I'd like to get your opinion of the implementation and whether there are better ways to implement this. You could also implement this with async-await but I thought this would be a more efficient implementation.public static Task RetryAsync(Func<bool> retryFunc, CancellationToken cancellationToken, int retryInterval){ var tcs = new TaskCompletionSource<object>(); var timer = new Timer((state) => { var taskCompletionSource = (TaskCompletionSource<object>)state; if (!taskCompletionSource.Task.IsCompleted) { try { if (cancellationToken.IsCancellationRequested) { taskCompletionSource.SetException(new TaskCanceledException("RetryAsync cancelled")); } else if (retryFunc()) { taskCompletionSource.SetResult(null); } } catch (Exception ex) { taskCompletionSource.SetException(ex); } } }, tcs, 0, retryInterval); //// Once the task is complete, dispose of the timer so it doesn't keep firing. tcs.Task.ContinueWith(t => timer.Dispose(), CancellationToken.None, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default); return tcs.Task;} | Asynchronous retry method | c#;task parallel library;async await;rags to riches | When performance matters, don't guess, measure. When it doesn't matter (which is 97 % of the time according to some), write code that is readable and maintainable. Why do you think a small increase in efficiency (most likely less than 1 ms) would matter here, when the retry interval is probably going to be hundreds of milliseconds or more (and can't effectively be less than 15 ms)? Func<bool> retryFunc: Consider adding another overload that allows you to retry async functions (i.e. Func<Task<bool>> retryFunc). CancellationToken cancellationToken: It might make sense to make this an optional parameter; some users might not need cancellation. int retryInterval: I think it would be better to use TimeSpan here; that way both your code and the code of your users become clearer. If you want to keep using int, document very clearly the unit used, possibly by even renaming the parameter to something like retryIntervalMs. You don't want to be the next Mars Climate Orbiter. When the CancellationToken is canceled, why do you wait for the timer tick to cancel the returned Task? You could use Register() to make sure the Task is canceled as soon as the CancellationToken is. if (!taskCompletionSource.Task.IsCompleted): What's the purpose of this check? If it's because you're worried that retryFunc might run longer than retryInterval, then I think your logic is flawed. When I have an operation that takes 1 minute to run and I ask for retry after 5 seconds, I probably don't want to have 12 instances of the operation running at the same time. You could achieve this by using Change(). And even when you want this behavior, you should probably also switch to using the TrySet versions of the TaskCompletionSource methods, to avoid unnecessary exceptions. if (cancellationToken.IsCancellationRequested){ taskCompletionSource.SetException(new TaskCanceledException("RetryAsync cancelled"));}This way, the Task will be in the Faulted state. 
To get it to the correct Canceled state, use SetCanceled(). //// Once the task is complete, dispose of the timer so it doesn't keep firing.tcs.Task.ContinueWith(t => timer.Dispose(), CancellationToken.None, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default); This closure is important also because it keeps the Timer rooted, so it prevents it from being GCed prematurely. I would expand the comment to explain that. (Also, why are you using four slashes for a comment? Two are enough.)
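An analogous retry loop in Python's asyncio, sketching the shape the answer recommends: an explicit interval parameter, cooperative waiting, and cancellation that propagates naturally. retry_async and flaky are invented names for the illustration:

import asyncio

async def retry_async(retry_func, interval_seconds):
    while True:
        if retry_func():
            return
        # Raises CancelledError immediately if the task is cancelled.
        await asyncio.sleep(interval_seconds)

async def main():
    attempts = []
    def flaky():
        attempts.append(1)
        return len(attempts) >= 3
    await asyncio.wait_for(retry_async(flaky, 0.01), timeout=1.0)
    print("succeeded after", len(attempts), "attempts")

asyncio.run(main())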
_hardwarecs.456 | I'm looking for an HDTV capable of playing HEVC h.265 encoded videos from a USB/pen drive. Most HDTVs support the h.264 standard, but I couldn't find any playing the h.265 standard. Moreover, it would be better if it has the below features (not necessary, but better if they are present): multiple USB ports, multiple HDMI ports, LED display, Ethernet / WiFi support, 4K resolution class, 32 inches or higher, 3D. My price range is at most $1800 and it should not be an Android TV. | HDTV with HEVC (h.265 / x.265) support? | television;hdtv | null
_unix.28967 | What libraries can be used for 3D graphics on Linux? Are there big differences for 3D graphics programming between Linux and Windows? I found out about DirectX and OpenGL by searching, but I'm not sure whether these are graphics libraries. | Linux 3D graphic libraries | linux;graphics;opengl | null
_unix.298663 | I have a file that looks like the following:TITLE Protein in water t= 0.00000REMARK THIS IS A SIMULATION BOXATOM 1 N SER A 107 20.799 63.728 25.985 1.00 0.00 NATOM 2 H1 SER A 107 21.658 64.259 25.980 1.00 0.00 H This is a very large file: 1.6G and a little over 20 million lines. I would like to get the lines that do not start with ATOM and end with H and save them into another file. What would be the most efficient way to do this? | Extract lines from a large file that do not end with H into another file | text processing | Based on the clarification from the comments,sed -n '/^ATOM.*H$/!p' input > outputwill remove (not print) lines that start with ATOM and end with H from the file named input and print the rest of the lines into the file named output. The sed syntax goes, from left to right: -n -- don't print lines by default; /^ATOM.*H$/ -- look for lines that start with ATOM, followed by any number of characters, ending ($) with H; !p -- print lines that don't match the above pattern. A sample input file of:TITLE Protein in water t= 0.00000REMARK THIS IS A SIMULATION BOXATOM 1 N SER A 107 20.799 63.728 25.985 1.00 0.00 NATOM 2 H1 SER A 107 21.658 64.259 25.980 1.00 0.00 HTITLE Protein in water t= 0.00000HREMARK THIS IS A SIMULATION BOXHATOM 1 N SER A 107 20.799 63.728 25.985 1.00 0.00 NATOM 2 H1 SER A 107 21.658 64.259 25.980 1.00 0.00 Hresults in:TITLE Protein in water t= 0.00000REMARK THIS IS A SIMULATION BOXATOM 1 N SER A 107 20.799 63.728 25.985 1.00 0.00 NTITLE Protein in water t= 0.00000HREMARK THIS IS A SIMULATION BOXHATOM 1 N SER A 107 20.799 63.728 25.985 1.00 0.00 NA more direct sed syntax would be:sed '/^ATOM.*H$/d' input > outputwhich says: (print lines by default) search for lines that start with ATOM and end with H; delete (don't print) those lines
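If sed is not to taste, the same filter is easy to express in Python; "input" and "output" are the placeholder filenames used in the answer:

import re

pattern = re.compile(r"^ATOM.*H$")  # starts with ATOM, ends with H

with open("input") as src, open("output", "w") as dst:
    for line in src:
        if not pattern.match(line.rstrip("\n")):
            dst.write(line)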
_webapps.101161 | Having trouble with the Date function. I'm using the form to confirm a date range and specific dates within that range. I see that Date Range is an option, but I'm not sure how to assign Date Values and Min/Max Values. I also want a date field that will allow users to input multiple specific dates using the calendar icon to the right of the date field. Are these things doable? | Cognito Forms: Date Range and Specific Dates | cognito forms | null
_codereview.162934 | I had a technical test with a simple CRUD application where I used an n-layered architecture as explained in the Patterns In Action book that I bought. However, after delivering it, one piece of their feedback was the following: "DbContext lifetime is completely wrong." (literally) I will copy the relevant files in this question, because I want to learn what I did wrong and whether the product I bought just has conceptual problems. So, in my DataAccess class library I have this:namespace DataObjects{ // abstract factory interface. Creates data access objects. // ** GoF Design Pattern: Factory. public interface IDaoFactory { //Product Dao interface that must be implemented by each provider IProductDao ProductDao { get; } //Color Dao interface that must be implemented by each provider IColorDao ColorDao { get; } //Size Dao interface that must be implemented by each provider ISizeDao SizeDao { get; } //Category Dao interface that must be implemented by each provider ICategoryDao CategoryDao { get; } //File Dao interface that must be implemented by each provider IFileDao FileDao { get; } //File Error Dao that must be implemented by each interface IFileErrorDao FileErrorDao { get; } }}Then I also have this interface:using BusinessObjects;using System.Collections.Generic;namespace DataObjects{ public interface ICategoryDao { //Gets a list of categories List<Category> GetCategories(); //Inserts one category void InsertCategory(Category category); //To verify if category exists bool CategoryExists(string category); //Get Category by name Category GetCategoryByName(string category); }}And now, in the EntityFramework namespace I have the following implementations:namespace DataObjects.EntityFramework{ // Data access object factory // ** Factory Pattern public class DaoFactory : IDaoFactory { public IProductDao ProductDao => new ProductDao(); public IColorDao ColorDao => new ColorDao(); public ISizeDao SizeDao => new SizeDao(); public ICategoryDao CategoryDao => new CategoryDao(); public IFileDao FileDao => new FileDao(); public IFileErrorDao FileErrorDao => new FileErrorDao(); }}CategoryDao implementation:using AutoMapper;using System.Collections.Generic;using System.Linq;using BusinessObjects;namespace DataObjects.EntityFramework{ // Data access object for Category // ** DAO Pattern public class CategoryDao : ICategoryDao { /// <summary> /// Inserts category into database /// </summary> /// <param name="category"></param> public void InsertCategory(Category category) { using (var context = new ExamContext()) { Mapper.Initialize(cfg => cfg.CreateMap<Category, CategoryEntity>()); var entity = Mapper.Map<Category, CategoryEntity>(category); context.CategoryEntities.Add(entity); context.SaveChanges(); // update business object with new id category.Id = entity.Id; } } /// <summary> /// Gets all categories from database /// </summary> /// <returns>Returns a list of Category</returns> public List<Category> GetCategories() { using (var context = new ExamContext()) { Mapper.Initialize(cfg => cfg.CreateMap<CategoryEntity, Category>()); var categories = context.CategoryEntities.ToList(); return Mapper.Map<List<CategoryEntity>, List<Category>>(categories); } } /// <summary> /// Verifies if one category name exists /// </summary> /// <param name="category">Category name</param> /// <returns>Returns true if exists</returns> public bool CategoryExists(string category) { using (var context = new ExamContext()) { return context.CategoryEntities.Any(x => x.CategoryName == category); } } /// <summary> /// Gets category by name /// </summary> /// <param name="categoryName">category name</param> /// <returns>Category</returns> public Category GetCategoryByName(string categoryName) { using (var context = new ExamContext()) { Mapper.Initialize(cfg => cfg.CreateMap<CategoryEntity, Category>()); var category = context.CategoryEntities.FirstOrDefault(x => x.CategoryName == categoryName); return Mapper.Map<CategoryEntity, Category>(category); } } }} | Simple CRUD application with n layered architecture | c#;entity framework | The problem is that Entity Framework's DbContext is really a unit of work. By newing them up inside each method you lose the ability to do several interesting things in the same transaction/unit of work in an easy way. You should be able to have multiple repositories (Dao in your parlance) using the same context/unit of work. Your code is also incredibly coupled to EF, and you're violating the dependency inversion principle. This is the best write-up of DbContext lifetime that I've ever read: Managing DbContext the right way with Entity Framework 6: an in-depth guide. Although you're not actually using any of the 3 main patterns discussed. If that Mapper is AutoMapper, that's also the wrong place to be configuring it, and it is a conflation of concerns.
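A sketch of the answer's point, in Python for brevity: the unit of work is created once per business operation and injected into every repository that takes part, so a single commit covers the whole operation instead of each method constructing its own context. All class names here are invented for the illustration:

class UnitOfWork:
    def __enter__(self):
        self.changes = []                    # stands in for the DbContext change tracker
        return self
    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            print("commit", self.changes)    # one SaveChanges for the whole operation

class CategoryRepository:
    def __init__(self, uow):
        self.uow = uow                       # injected, never constructed inside a method
    def insert(self, name):
        self.uow.changes.append(("category", name))

class ProductRepository:
    def __init__(self, uow):
        self.uow = uow
    def insert(self, name):
        self.uow.changes.append(("product", name))

with UnitOfWork() as uow:                    # one transaction spans both repositories
    CategoryRepository(uow).insert("tools")
    ProductRepository(uow).insert("hammer")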
_unix.9451 | Is it possible to boot Linux without an initrd.img? I am planning to add the default drivers as part of the kernel itself and avoid initrd completely. What are the modules that should be made part of the kernel instead of loadable modules? | Booting without initrd | linux;kernel;boot;kernel modules;initrd | It is, unless your root volume is on an LVM, on a dmcrypt partition, or otherwise requires commands to be run before it can be accessed. I haven't used an initrd on my server in years. You need at a minimum these modules built in: the drivers for whatever controller your root volume disk lives on; the drivers necessary to get to that, like PCI, PCIe support, USB support, etc.; the modules that run the filesystem mounted on it. It's also a very good idea to build in your network card drivers as well. I've found that lspci/lsmod can help you here: from your currently running kernel, look at what's there and use the make menuconfig search option before compiling to find where to enable the modules.
_reverseengineering.5839 | I have wanted to get into the art of reverse engineering for quite some time, so I took a look at a few online lessons (such as opensecuritytraining.info) and also got my hands on IDA Pro. Obviously, since this is a complex topic, I was overwhelmed by registers, pointers, instructions, et cetera. I know assembler and C fairly well; it's just (like I said earlier) a topic where you have to learn a lot. Now to my actual question: I have downloaded a CrackMe program and started debugging it. Basically the objective is to find a key which you then have to enter into a textbox in the program. I found the key-checking function fairly easily and identified some logic, but I can't wrap my head around how the program actually compares the string. My C skills are much greater than my assembler skills, so I decided to get some pseudocode printed. The problem is that the pseudocode is pretty messy (I'm guessing that's because of compiler optimizations) and I basically can't understand what this piece of code is supposed to do. Here is the code (sorry it's a bit long):int __usercall TSDIAppForm_Button1Click<eax>(int a1<eax>, int a2<ebx>, int a3<edi>, int a4<esi>){ int v4; // ebx@1 int v5; // esi@1 int v6; // eax@1 signed int v7; // eax@3 signed int v8; // edx@3 int v9; // ebx@3 int v11; // edx@12 int v12; // [sp-24h] [bp-34h]@1 int (*v13)(); // [sp-20h] [bp-30h]@1 int *v14; // [sp-1Ch] [bp-2Ch]@1 int v15; // [sp-18h] [bp-28h]@1 int (*v16)(); // [sp-14h] [bp-24h]@1 int *v17; // [sp-10h] [bp-20h]@1 int v18; // [sp-Ch] [bp-1Ch]@1 int v19; // [sp-8h] [bp-18h]@1 int v20; // [sp-4h] [bp-14h]@1 int v21; // [sp+0h] [bp-10h]@1 int v22; // [sp+4h] [bp-Ch]@1 int v23; // [sp+8h] [bp-8h]@2 void (__fastcall *v24)(int); // [sp+Ch] [bp-4h]@7 int v25; // [sp+10h] [bp+0h]@1 v22 = 0; v21 = 0; v20 = a2; v19 = a4; v18 = a3; v5 = a1; v17 = &v25; v16 = loc_45C6FD; v15 = *MK_FP(__FS__, 0); *MK_FP(__FS__, 0) = &v15; JUMPOUT(Controls__TControl__GetTextLen(*(_DWORD *)(a1 + 872)), 0xFu, *(unsigned int *)j); v6 = Controls__TControl__GetTextLen(*(_DWORD *)(a1 + 872)); System____linkproc___DynArraySetLength(v6); System____linkproc___DynArraySetLength(664); v14 = &v25; v13 = loc_45C699; v12 = *MK_FP(__FS__, 0); *MK_FP(__FS__, 0) = &v12; v4 = 0; do { Controls__TControl__GetText(*(_DWORD *)(v5 + 872), &v21, v12); *(_DWORD *)(v23 + 4 * v4) = *(_BYTE *)(v21 + v4 - 1); ++v4; } while ( v4 != 16 ); v8 = 1; v9 = 0; v7 = 4585384; do { if ( v8 == 16 ) v8 = 1; *(_BYTE *)(v22 + v9++) = *(_BYTE *)v7++ ^ *(_BYTE *)(v23 + 4 * v8++); } while ( v9 != 665 ); v24 = (void (__fastcall *)(int))v22; // I know the key to success lies here, but I can't figure out what this if is supposed to do if ( *(_BYTE *)v22 != 96 || *(_BYTE *)(v22 + 4) != 208 || *(_BYTE *)(v22 + 9) ) MessageBoxA_0(0, "Invalid Key", "Error", 0); else v24(v22); *MK_FP(__FS__, 0) = v12; v11 = v15; *MK_FP(__FS__, 0) = v15; System____linkproc___LStrClr(&v21, v11, 4572932); System____linkproc___DynArrayClear(&v22, off_45C568); return System____linkproc___DynArrayClear(&v23, off_45C548);}I would appreciate it if someone could give me advice on how to deobfuscate this piece of code. Are there any plugins available to do this? Is there a special technique that I can use to figure it out? Would it be better to look at the assembler code instead of the pseudocode? I especially wonder where these weird constants (like 872, for example) come from. Answers would be highly appreciated. 
| Deobfuscating IDA Pseudocode | ida;decompilation;deobfuscation | IDA Pro is no magic tool to automatically decompile binaries to their source code. The decompiler output should not be relied on every time (as compiling leads to loss of information), although IDA boasts of the finest decompiler available. Instead, focus on the disassembly listing. For specific parts, you can use the decompiler output as your reference. Deobfuscation is a multi-step process. First try to understand the usages of variables, structures, etc., and then give them more easy-to-understand names reflecting their purpose. In many cases you can understand a variable's purpose by just noting how it is used in function calls. For example, Vcl.Controls.TControl.GetTextLen returns the length of the control's text. That means that among the parameters passed, one must be a pointer to the TControl. You can use this information to rename variables. In the case of VCL binaries, Interactive Delphi Reconstructor (IDR) will give you more easy-to-understand disassembly, as it is geared for that purpose. IDR also has a somewhat limited decompilation capability. For a better understanding of IDA Pro and its myriad of features, I would recommend going through these two books: The IDA Pro Book and Reverse Engineering Code with IDA Pro
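On one plausible reading, the second do/while in the pseudocode XOR-decodes a 665-byte blob (the constant 4585384 would be its address in the binary) against bytes 1..15 of the textbox input, which the first loop stored as 32-bit slots in v23. Re-expressed in Python under that assumption:

def decode(blob, key_bytes):
    # blob: the 665 bytes at address 4585384; key_bytes: the 16 input characters.
    out = bytearray(665)
    k = 1
    for i in range(665):
        if k == 16:
            k = 1                        # key indices cycle 1..15; index 0 is never used
        out[i] = blob[i] ^ key_bytes[k]
        k += 1
    return out

# The final if then requires out[0] == 0x60 (96), out[4] == 0xD0 (208) and
# out[9] == 0 before jumping into the decoded buffer via v24(v22).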
_unix.35761 | How can I write a script that basically just runs pkill -HUP inetd? I want to restart inetd via a script so I can schedule it to run at a particular time. I tried to write it myself, but I'm getting a Hangup error. | How to pkill from a script? | bash;shell;command line;shell script;dash | null |
_unix.200031 | I'm using CentOS 7 minimal. I've installed acpid and the daemon is running. When I hit the power button, I get the following in /var/log/messages: May 2 18:52:53 localhost systemd-logind: Power key pressed.May 2 18:52:53 localhost systemd: SELinux policy denies access. And in /var/log/audit/audit.log:type=USER_AVC msg=audit(1430589539.562:468): pid=815 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { send_msg } for msgtype=method_call interface=org.freedesktop.DBus.Properties member=Get dest=org.freedesktop.systemd1 spid=4177 tpid=1 scontext=system_u:system_r:apmd_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=dbus exe=/usr/bin/dbus-daemon sauid=81 hostname=? addr=? terminal=?'type=USER_AVC msg=audit(1430589539.571:469): pid=815 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { send_msg } for msgtype=method_call interface=org.freedesktop.DBus.Properties member=Get dest=org.freedesktop.systemd1 spid=4182 tpid=1 scontext=system_u:system_r:apmd_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=dbus exe=/usr/bin/dbus-daemon sauid=81 hostname=? addr=? terminal=?'type=USER_AVC msg=audit(1430589539.586:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc: denied { start } for auid=-1 uid=0 gid=0 path=/usr/lib/systemd/system/poweroff.target scontext=system_u:system_r:apmd_t:s0 tcontext=system_u:object_r:power_unit_file_t:s0 tclass=service exe=/usr/lib/systemd/systemd sauid=0 hostname=? addr=? terminal=?'Piping that through audit2why gives the following output:type=USER_AVC msg=audit(1430589539.562:468): pid=815 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { send_msg } for msgtype=method_call interface=org.freedesktop.DBus.Properties member=Get dest=org.freedesktop.systemd1 spid=4177 tpid=1 scontext=system_u:system_r:apmd_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=dbus exe=/usr/bin/dbus-daemon sauid=81 hostname=? addr=? terminal=?' Was caused by: Missing type enforcement (TE) allow rule. You can use audit2allow to generate a loadable module to allow this access.type=USER_AVC msg=audit(1430589539.571:469): pid=815 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc: denied { send_msg } for msgtype=method_call interface=org.freedesktop.DBus.Properties member=Get dest=org.freedesktop.systemd1 spid=4182 tpid=1 scontext=system_u:system_r:apmd_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=dbus exe=/usr/bin/dbus-daemon sauid=81 hostname=? addr=? terminal=?' Was caused by: Missing type enforcement (TE) allow rule. You can use audit2allow to generate a loadable module to allow this access.type=USER_AVC msg=audit(1430589539.586:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc: denied { start } for auid=-1 uid=0 gid=0 path=/usr/lib/systemd/system/poweroff.target scontext=system_u:system_r:apmd_t:s0 tcontext=system_u:object_r:power_unit_file_t:s0 tclass=service exe=/usr/lib/systemd/systemd sauid=0 hostname=? addr=? terminal=?' Was caused by: Missing type enforcement (TE) allow rule. You can use audit2allow to generate a loadable module to allow this access. And finally, piping the audit to audit2allow -lar gives me:require { type power_unit_file_t; type init_t; type apmd_t; class dbus send_msg; class service start;}#============= apmd_t ==============allow apmd_t init_t:dbus send_msg;allow apmd_t power_unit_file_t:service start; I'm not sure what to do next. How can I get from the output above to an active SELinux policy? | Where to put the SELinux policy to allow acpid to shut down the system? | security;selinux;acpid | null
_webapps.98999 | I can access the photos that I sent through the Hangouts album, but what about the photos received from my friend through Hangouts? Where will these photos be stored? Is it not possible to access a received photo other than by downloading it? | Photos received through Hangouts | google hangouts | null
_unix.287545 | Any quick ideas on how to write a program that extends the terminal's basic functionality? I want to do everything the terminal does, but additionally do some custom processing on whatever the user types on my terminal derivative. | Extending a Linux terminal program | bash;terminal;gnome terminal | null
_softwareengineering.244348 | I was creating and discussing a class diagram with a partner of mine. To simplify things, I've modified the real domain we're working on and made up the following diagram: Basically, a company works on constructions that are quite different from each other but are still constructions. Note I've added one field for each class, but there should be many more. Now, I thought this was the way to go, but my partner told me that if new construction classes appear in the future we would have to modify the Company class, which is correct. So the new proposed class diagram would be this: Now I've been wondering: Should the fact that in no place of the application will there be mixed lists of planes and bridges affect the design in any way? When we have to list only planes for a company, how are we supposed to distinguish them from the other elements in the list without checking for their class names? Related to the previous question, is it correct to assume that this type of diagram should be high-level, and this is something that shouldn't matter at this stage but rather be thought about and decided at implementation time? Any comment will be appreciated. | What alternative is better to diagram this scenario? | object oriented;inheritance;class diagram | First and foremost, the models are highly dependent on the business domain, so, since you have changed it, it might be that my answer fits the question but not your real domain. 1. Depends on what the relationship means. An inventory of company or construction projects currently active would be examples where the relationship with AbstractProject would be understandable. 2. I would do so. Of course, your language might make it difficult and you might find it easier to add a type attribute to AbstractProject. (Not good, because it breaks the open/closed principle.) Also, outside of 1 & 2, I think that Plane and Bridge have so few things in common that probably you should skip the abstract class altogether. Of course, that would complicate the model, since now you either have two relationships with Company (bad for the open/closed principle, if you end up also producing Car), or you have to add another middle class (ConstructionProject), which is what I would do. 3. No, it is not high level, free to change. It is the data that you pass to the code-monkeys so they do their job, and the data that you pass to other teams that must develop SW to use your system, so you have to stick with it. That does not forbid you from modifying it if you see that it does not really adapt to your needs, but those modifications in the code must be approved, reflected in the UML, and communicated. Depending on your project management philosophy, they would be interpreted as mistakes or refinements of your original model.
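To make point 3 concrete, a Python sketch of the middle-class variant: Company only knows ConstructionProject, so adding Car later touches neither class, and filtering the list for planes (question 2) becomes a type check rather than a class-name comparison. The field names are invented:

class ConstructionProject:
    pass

class Plane(ConstructionProject):
    def __init__(self, wingspan):
        self.wingspan = wingspan

class Bridge(ConstructionProject):
    def __init__(self, span_length):
        self.span_length = span_length

class Company:
    def __init__(self):
        self.projects = []   # only ever a list of ConstructionProject
    def planes(self):
        return [p for p in self.projects if isinstance(p, Plane)]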
_codereview.68999 | I'm going to be open-sourcing some code. I don't need help with the code; I just want to make sure my code is readable and my comments make sense. I have a knack for the esoteric. This code is to control GE G35 Christmas lights using a Netduino controller. Due to the .NET overhead, I've written a custom lower-level driver compiled into the firmware. Can you follow my code? This is on an embedded processor, which is why I'm doing a lot of bit-shifting. It's easier to write this way. public Int32[,] getData() { int maxBulbs = getMaxBulbs(); // Using an abnormal form of bitpacking here to make the C loop very tight and efficient. // The first address of the array is the bulb position on all strings. // The second address of the array is actually the corresponding bulb information for that bit. // Within the array, we store a 32-bit int. Each bit in this int represents a G35 strand/string // So at data[0, 0] we have a 32 bit int; this int represents the first bulb on all strings, first data information bit for up to 32 strands. Int32[,] data = new Int32[getMaxBulbs(), 26]; // number of bulbs on a strand, 26 bits of bulb info foreach (G35String gstring in Strings) { for (short c_bulb = 1; c_bulb < maxBulbs; c_bulb++) { // This is a bit of a shortcut, because we know that G35's just pass the information down // the pipe and due to the way we are sending data in parallel, if one string has // more bulbs than another, we just send fake data to the non-existent bulb on that string G35Bulb gbulb = (c_bulb < gstring.bulbs.Length ? gstring.bulbs[c_bulb] : new G35Bulb(0, 0)); // bulb address data[c_bulb, 0] = (c_bulb & 0x20) << gstring.StringPinAddress; data[c_bulb, 1] = (c_bulb & 0x10) << gstring.StringPinAddress; data[c_bulb, 2] = (c_bulb & 0x08) << gstring.StringPinAddress; data[c_bulb, 3] = (c_bulb & 0x04) << gstring.StringPinAddress; data[c_bulb, 4] = (c_bulb & 0x02) << gstring.StringPinAddress; data[c_bulb, 5] = (c_bulb & 0x01) << gstring.StringPinAddress; // bulb brightness data[c_bulb, 6] = (gbulb.BulbBrightness & 0x80) << gstring.StringPinAddress; data[c_bulb, 7] = (gbulb.BulbBrightness & 0x40) << gstring.StringPinAddress; data[c_bulb, 8] = (gbulb.BulbBrightness & 0x20) << gstring.StringPinAddress; data[c_bulb, 9] = (gbulb.BulbBrightness & 0x10) << gstring.StringPinAddress; data[c_bulb, 10] = (gbulb.BulbBrightness & 0x08) << gstring.StringPinAddress; data[c_bulb, 11] = (gbulb.BulbBrightness & 0x04) << gstring.StringPinAddress; data[c_bulb, 12] = (gbulb.BulbBrightness & 0x02) << gstring.StringPinAddress; data[c_bulb, 13] = (gbulb.BulbBrightness & 0x01) << gstring.StringPinAddress; // Blue data[c_bulb, 14] = (gbulb.BulbColor >> 8 & 0x8) << gstring.StringPinAddress; data[c_bulb, 15] = (gbulb.BulbColor >> 8 & 0x4) << gstring.StringPinAddress; data[c_bulb, 16] = (gbulb.BulbColor >> 8 & 0x2) << gstring.StringPinAddress; data[c_bulb, 17] = (gbulb.BulbColor >> 8 & 0x1) << gstring.StringPinAddress; // Green data[c_bulb, 18] = (gbulb.BulbColor >> 4 & 0x8) << gstring.StringPinAddress; data[c_bulb, 19] = (gbulb.BulbColor >> 4 & 0x4) << gstring.StringPinAddress; data[c_bulb, 20] = (gbulb.BulbColor >> 4 & 0x2) << gstring.StringPinAddress; data[c_bulb, 21] = (gbulb.BulbColor >> 4 & 0x1) << gstring.StringPinAddress; // Red data[c_bulb, 22] = (gbulb.BulbColor & 0x8) << gstring.StringPinAddress; data[c_bulb, 23] = (gbulb.BulbColor & 0x4) << gstring.StringPinAddress; data[c_bulb, 24] = (gbulb.BulbColor & 0x2) << gstring.StringPinAddress; data[c_bulb, 25] = (gbulb.BulbColor & 0x1) << gstring.StringPinAddress; } } return data; } In the C++ driver, I can blast all the registers I need like this in parallel:// LED AddresssendBits(data[i][0]);sendBits(data[i][1]);sendBits(data[i][2]);sendBits(data[i][3]);sendBits(data[i][4]);sendBits(data[i][5]);void sendBits(uint16_t gpioPins, uint32_t data){ *_BSRRL = gpioPins; delayMicroseconds(DELAYSHORT); // 10us *_BSRRH = gpioPins; delayMicroseconds(DELAYSHORT); // 10us *_BSRRL = ~data; delayMicroseconds(DELAYSHORT); // 10us *_BSRRL = gpioPins;} | Embedded C# bitpacked arrays to low-level STM32F4 driver for GE G35 RGB LED Christmas tree light | c#;.net;bitwise;embedded;device driver | null
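The packing scheme the question describes, reduced to Python so the layout is easier to see: bit k of each stored word carries the current information bit for strand k, which is what lets a single 32-bit register write clock every strand at once. The example strand values are invented:

def pack_bit(strand_bits):
    """strand_bits[k] is 0 or 1 for strand k; returns the packed 32-bit word."""
    word = 0
    for strand, bit in enumerate(strand_bits):
        word |= (bit & 1) << strand
    return word

# Strands 0 and 2 send a 1 for this bit position, strand 1 sends a 0:
print(bin(pack_bit([1, 0, 1])))  # 0b101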
_webmaster.53287 | I'm planning to add an XML Sitemap to a client's existing website. According to this, it will help define the difference between the root website (aimed at the UK) and the US website (aimed at the United States) and a few other languages/locales. Is it enough to add only this to the sitemap, or will Google punish me for not adding all pages in there? The content changes quite often and we don't have a way to deal with updating the XML regularly at this point. Also, the existing content is well indexed on Google already; it's just the concern about multiple pages in English that's behind this. | Do all pages have to be added to XML Sitemaps? | seo;xml sitemap;language;hreflang | Google's John Mueller has answered the question "should I include every single page of my blog in the Sitemap (including tag pages and the date-based archives) or just the important ones?": "It's always a good idea for your XML Sitemap file to include all pages which you want to have indexed." While he says that it is a good idea, it shouldn't be necessary. Google uses sitemaps primarily for URL discovery. If Googlebot can discover URLs by crawling your website, those URLs wouldn't have to be in a sitemap. URLs excluded from the sitemap wouldn't get any of the other side benefits such as: recognizing preferred URLs for canonicalization; being included in the indexed URL count in Webmaster Tools (WMT); getting prioritized in the list of crawl errors in WMT
_opensource.2224 | For my OSS project, is it a good idea to release the brand assets under a CC license? I want to make it easy for people to use the logo for blogs/articles talking about the product, and derivatives for forks of the project, but I also want to keep some restriction on it so it doesn't look like the official project is endorsing another branch, and so an unofficial forum/paid service/etc. that is clearly not associated with the project can't pass as official. It seems a trademark with usage guidelines would be best, but it's unrealistic to trademark a name for a project whose success I cannot guarantee. I could, in theory, keep the copyright and license it under another license (any recommendations?) that allows the uses specified above. Is this a good idea? I technically did derive the logo from a CC0 public domain image, but it has very little resemblance (it's a common icon, but the trademark identity is in the exact colors and shapes). I changed the shape, orientation, colors, and outlines; I think it's unique enough for copyright protection. | How should I license my project's logo? | licensing;license recommendation;trademark;logos | null
_webmaster.21298 | I started to develop my own web design using a grid framework (960 CSS Framework) and noticed that most other famous CSS grid frameworks also use 940/960px as the maximum page width. Some of them have online generators where you can calculate and generate the same framework but for a different width. Can you tell me why they suggest 960px as the default? And more importantly: why is everything measured in pixels rather than pt, cm, % or any other CSS units? Edit: Isn't it better to use 'in' as a CSS unit and be sure that on every screen (computer, smartphone) it will have the same size? P.S. Some other grid CSS frameworks: Blueprint, YUI 2: Grids CSS, Bootstrap, Skeleton | Most CSS Grid frameworks use pixel as css units, why? | css;web development;grid;website design | null
_webapps.43208 | I created a Google Spreadsheet file which I want to share with my coworkers so each one has his own copy. So far the only way I found is to share the file; however, this doesn't work for me because each coworker has his own information, and the information used in the file is not for collaboration. Is there a way to send a copy of the file so that each one has his own copy, like the way you would send an Excel file over email so that everyone would have their own copy? Thanks. | Share individual copies of a Google spreadsheet | google drive;google spreadsheets;google apps | null
_webapps.91066 | I am looking for a formula to generate a moving average of the last two weeks OR the last 10 data points (whichever produces more data points), conditional upon the presence of data in two other columns. Example: I want to calculate the average of column $K for the past two weeks (from today's date) OR the past ten data points (whichever is a larger data set) when column $G="HenkkyG" and column $U="LAN". Effectively I want Player HenkkyG's average over the past two weeks or 10 games (data points). I am currently using this formula for the overall average: =IFERROR(AVERAGEIF($G:$G,AF2,$K:$K)) where AF2 = the player name I am drawing data for. | Conditional moving average: the last two weeks or over 10 data points | google spreadsheets | This can be done with a few filter commands. To filter by columns G and U: =filter(B2:K, (G2:G = "HenkkyG") * (U2:U = "LAN")) (Here, multiplication is logical, meaning AND). To filter the scores by either within 14 days or among the last 10, the condition would be: =filter(K2:K, (B2:B >= today()-14) + (rank(B2:B, B2:B, false) <= 10)) Here + is logical OR, and the rank is in descending order, picking the 10 largest entries from the date column. It remains to combine these. In the interest of maintainability, it may be best to do things separately (perhaps on another sheet): apply the first filter, and then use the second on its output. But it's possible to do everything in one formula, it just looks scary: the first filter is applied to each column appearing in the second filter. =filter(filter(K2:K, (G2:G = "HenkkyG") * (U2:U = "LAN")), (filter(B2:B, (G2:G = "HenkkyG") * (U2:U = "LAN")) >= today()-14) + (rank(filter(B2:B, (G2:G = "HenkkyG") * (U2:U = "LAN")), filter(B2:B, (G2:G = "HenkkyG") * (U2:U = "LAN")), false) <= 10)) This is not the kind of formula that I would want to deal with in a spreadsheet inherited from someone else.
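For cross-checking the sheet formulas offline, the same selection in plain Python: keep the HenkkyG/LAN rows, then average either those within 14 days or the 10 most recent, whichever set is larger. Rows are assumed to be (date, player, score, mode) tuples:

from datetime import timedelta

def recent_average(rows, player, mode, today):
    rows = [r for r in rows if r[1] == player and r[3] == mode]
    rows.sort(key=lambda r: r[0], reverse=True)        # newest first
    within_two_weeks = [r for r in rows if r[0] >= today - timedelta(days=14)]
    chosen = within_two_weeks if len(within_two_weeks) > 10 else rows[:10]
    scores = [r[2] for r in chosen]
    return sum(scores) / len(scores) if scores else None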
_codereview.82628 | I'm looking for some help on how I can optimize adding multiple data attribute tags to elements, and really, any feedback at all.

Background

The client uses an analytics tool through a tag management application (Ensighten) that picks up data attributes when links are clicked. I'm adding attributes when the DOM is ready to provide them with more information about what people are clicking, where they are clicking, etc.

init.js

Here is an example of my init.js file (Ensighten wraps this in an IIFE):

// global namespace
window.analytics = window.analytics || {};
window.analytics.heatmapping = window.analytics.heatmapping || {};

window.analytics.heatmapping.header = {
    logo: function () {
        var $this = jQuery(this),
            name = $this.closest('ul.navL2').prev().text(),
            type = $this.attr('alt'),
            title = $this.attr('title');

        window.analytics.utilities.setDataAttributes($this, {
            'region': 'header',
            'name': name,
            'type': type,
            'title': title,
            'index': '1'
        });
    }
    // ... more below
};

// initializing
jQuery('.top a').each(window.analytics.heatmapping.header.logo);

utilities.js

I have another custom JavaScript tag that houses all of the utility functions that we can reuse. This is where the setDataAttributes function is kept. Here is the setDataAttributes function with its supporting functions.

/**
 * Set data attributes on an element
 * @param {object} element A jQuery object, typically we'll pass jQuery(this).
 * @param {object} dataAttributes The data attributes we wish to set
 */
window.analytics.utilities.setDataAttributes = function (element, dataAttributes) {
    var util = window.analytics.utilities,
        dataAnalyticsTagAttributes;

    if (util.hasDataAnalyticsTag(element)) {
        dataAnalyticsTagAttributes = util.parseDataAnalyticsTag(element);
        // merge objects
        $.extend(dataAttributes, dataAnalyticsTagAttributes);
    }

    dataAttributes = util.prefixAndTrimProperties(dataAttributes);
    element.attr(dataAttributes);
};

/**
 * Prefixes the incoming object's keys with 'data-' and trims the object's values
 * @param {object} dataAttributes
 * @return {object} dataAttributesWithPrefix
 */
window.analytics.utilities.prefixAndTrimProperties = function (dataAttributes) {
    var util = window.analytics.utilities,
        dataAttributesWithPrefix = {},
        dataKeyWithPrefix,
        dataKey,
        dataValue;

    for (dataKey in dataAttributes) {
        if (dataAttributes.hasOwnProperty(dataKey)) {
            // prefix key with data- and trim value
            dataKeyWithPrefix = util.addPrefixToKey(dataKey);
            dataValue = jQuery.trim(dataAttributes[dataKey]);
            // store the prefixed and cleaned property on dataAttributesWithPrefix
            dataAttributesWithPrefix[dataKeyWithPrefix] = dataValue;
        }
    }
    return dataAttributesWithPrefix;
};

/**
 * Determines if the input element has the data-analyticstag attribute
 * @param {object} element jQuery(this)
 * @return {Boolean}
 */
window.analytics.utilities.hasDataAnalyticsTag = function (element) {
    return element.is('[data-analyticstag]');
};

/**
 * Adds the 'data-' prefix to the input string
 * @param {string} key The object's key it is currently iterating on.
 * @return {string}
 */
window.analytics.utilities.addPrefixToKey = function (key) {
    return 'data-' + key;
};

/**
 * Parses the data-analytics attribute on the element
 * @param {object} element A jQuery object, typically we'll pass jQuery(this).
 * @return {object} An object with the properties index, linktype and cmpgrp
 */
window.analytics.utilities.parseDataAnalyticsTag = function (element) {
    var dataAnalyticsAttributeArray = element.attr('data-analyticstag').split('_');
    return {
        'index': dataAnalyticsAttributeArray[4].match(/\d$/),
        'type': dataAnalyticsAttributeArray.splice(0, 4).join(':'),
        'region': dataAnalyticsAttributeArray[3]
    };
};

Let me explain what the setDataAttributes function does:

- it takes two arguments: element and dataAttributes (an object)
- it checks to see if the element has a data-analyticstag attribute (some links have one that we can get values from)
- if the element does have the data-analyticstag, then we parse it and return an object (see parseDataAnalyticsTag) and merge it with the original dataAttributes object
- next, we take the dataAttributes object and pass it into another function, prefixAndTrimProperties, where we prefix each key with 'data-' and trim each value; this function returns an object
- we take the returned object and pass it into element.attr(dataAttributes), which sets the data attributes for that specific element

Questions

I'm currently reading Clean Code by Robert C. Martin, and I'm attempting to apply some of his practices around naming and functions - haven't made it to the rest of the book yet.

How does my naming look?

I'm a little lost on the prefixAndTrimProperties function. In his book he states that you only want the function to do one thing, and my function is doing two - at least.

Am I splitting up my functions in a way that makes them more testable? For example, is it really necessary to have a function like hasDataAnalyticsTag that just returns true or false? How granular should I be getting? Is it overkill?

Any other advice? | Add multiple data attributes to elements using jQuery | javascript;jquery | I'm a little lost on the prefixAndTrimProperties function. In [Martin's] book he states that you only want the function to do one thing, and my function is doing two - at least.

It's true that a function should ideally only do one thing. But what constitutes one thing is somewhat debatable. For instance, if you instead call your function prepareProperties then its one thing is to, well, prepare a properties object. That's an operation that counts as one thing in your context. Yes, it entails both trimming values and prefixing keys, but that's an implementation detail.

Am I splitting up my functions in a way that makes them more testable? For example, is it really necessary to have a function like hasDataAnalyticsTag that just returns true or false? How granular should I be getting? Is it overkill?

Probably a little overkill, yes. But I'm more concerned about its use - or lack thereof. In parseDataAnalyticsTag you don't use it, meaning you may get an exception: element.attr('data-analyticstag').split('_') will fail if the attribute doesn't exist, since attr() will return undefined, which you can't split.

In fact, I'd say it'd be easier to simply call parseDataAnalyticsTag and have it return null or an empty object if there's no attribute to parse. Right now, you've split it into checking and parsing, but - as far as I can tell - you only need to check if you want to parse. And if you want to parse, you need to check. So that's one thing.

So, how granular should it be? Enough to keep the code DRY. If you find yourself repeating something, extract it into a function.
Conversely, combine dependent/sequential steps into a function, and call that one thing.

By the way, there's a hint that you may be too diligent in splitting things up. The comments for addPrefixToKey say

// @param {string} key The object's key it is currently iterating on.

Who said anything about an object? Or iteration? Or a key, for that matter? The function just takes an argument - any argument, really - and prepends data- to it. That's it. Its name and comments indicate that it was intended for or extracted from a very specific context, but the function itself really doesn't care. But if its intended use case is so specific, it probably shouldn't be a separate function at all.

As to the code itself:

You're exposing all your functions in the window.analytics.utilities object, though you seem to only use one: setDataAttributes. So that's your API; the rest is - viewed from the outside - implementation details.

I'm also not a big fan of the parseDataAnalyticsTag function. For one, its @return comment lies: the object does not contain index, linktype and cmpgrp properties - it contains index, type and region. Boo.

It's also fragile and fairly tricky to follow, despite its short length. As mentioned, you assume that the attribute exists when you call split, but after that, you also assume that there are at least 8 elements in the resulting array. And the use of splice instead of slice makes it hard to keep track of indices, and requires things to happen in the right order. I.e. the region value is actually index 7 - not 3 - in the original array, so it only works because you've used splice.

Lastly, setDataAttributes has side effects: you're modifying the dataAttributes object you're given. It's OK for the usage you've got right now, since you're not keeping a reference to the object on the caller's side, but it's icky nonetheless.

Suggestions:

I'd consider making this a jQuery plugin. You're depending on jQuery anyway. Something like this, perhaps (note: incomplete implementation):

// get any existing attributes from the `data-analyticstag` attribute (if present)
function analyticsTagAttributes(element) {
    // ... see current implementation, and all the stuff above ...
}

// Prefixes keys, and trims values
function prepareAttributes(object) {
    var key, prepared = {};
    for (key in object) {
        if (object.hasOwnProperty(key)) {
            prepared['data-' + key] = $.trim(object[key]);
        }
    }
    return prepared;
}

// extend jQuery
$.fn.extend({
    setAnalyticsAttributes: function (attributes) {
        return this.each(function () {
            var prepared = prepareAttributes(attributes),
                existing = analyticsTagAttributes(this) || {},
                merged = $.extend(existing, prepared);
            $(this).attr(merged);
        });
    }
});

With that, you can simply call

$(elementOrSelector).setAnalyticsAttributes({
    region: 'header',
    name: name,
    type: type,
    title: title,
    index: '1'
});

Or, if you want to mimic jQuery, you make it an analytics function, so you can use it much like .attr():

$(elementOrSelector).analytics()    // returns existing values
$(elementOrSelector).analytics(obj) // sets values
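For completeness, here is one way the elided analyticsTagAttributes helper could be filled in. This is only a sketch: it adapts the question's parseDataAnalyticsTag, adds the null guard recommended above, and the slice-based indices are my assumption rather than the answerer's code.

// Hypothetical completion (not part of the original answer): returns null
// when the attribute is missing or malformed, so the caller's
// `analyticsTagAttributes(this) || {}` falls back to an empty object.
function analyticsTagAttributes(element) {
    var raw = $(element).attr('data-analyticstag'),
        parts;
    if (!raw) {
        return null; // no attribute to parse
    }
    parts = raw.split('_');
    if (parts.length < 8) {
        return null; // unexpected format; treat it as absent
    }
    return {
        index: parts[4].match(/\d$/),      // same regex as the question's code
        type: parts.slice(0, 4).join(':'), // slice, so parts is never mutated
        region: parts[7]                   // index 7 of the original array
    };
}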
_unix.83773 | I want to play the game Aquaria on 64-bit Debian Wheezy. The installation went OK, but when trying to play the game I get these errors:

ALSA lib conf.c:3314:(snd_config_hooks_call) Cannot open shared library libasound_module_conf_pulse.so
ALSA lib control.c:951:(snd_ctl_open_noupdate) Invalid CTL hw:0
AL lib: alsa.c:1000: control open (0): No such file or directory
Message: SDL_GL_LoadLibrary Error: Failed loading libGL.so.1

I have added 32-bit compatibility with dpkg --add-architecture i386, and I think the required libraries are present on the system, since typing locate libasound_module_conf_pulse.so yields:

/usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_conf_pulse.so

and locate libGL.so.1:

/etc/alternatives/glx--libGL.so.1-x86_64-linux-gnu
/usr/lib/mesa-diverted/i386-linux-gnu/libGL.so.1
/usr/lib/mesa-diverted/i386-linux-gnu/libGL.so.1.2
/usr/lib/mesa-diverted/x86_64-linux-gnu/libGL.so.1
/usr/lib/mesa-diverted/x86_64-linux-gnu/libGL.so.1.2
/usr/lib/x86_64-linux-gnu/libGL.so.1
/usr/lib/x86_64-linux-gnu/fglrx/fglrx-libGL.so.1.2
/usr/lib/x86_64-linux-gnu/fglrx/libGL.so.1

However, it seems that Debian is ignoring them. What can I do to play Aquaria?

EDIT 1: ldd aquaria

linux-gate.so.1 => (0xf77e1000)
libSDL-1.2.so.0 => /opt/Aquaria/./libSDL-1.2.so.0 (0xf7748000)
libopenal.so.1 => /opt/Aquaria/./libopenal.so.1 (0xf76fa000)
libstdc++.so.6 => /opt/Aquaria/./libstdc++.so.6 (0xf760d000)
libm.so.6 => /lib/i386-linux-gnu/i686/cmov/libm.so.6 (0xf75c3000)
libgcc_s.so.1 => /opt/Aquaria/./libgcc_s.so.1 (0xf75b8000)
libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xf7455000)
libdl.so.2 => /lib/i386-linux-gnu/i686/cmov/libdl.so.2 (0xf7451000)
libpthread.so.0 => /lib/i386-linux-gnu/i686/cmov/libpthread.so.0 (0xf7437000)
librt.so.1 => /lib/i386-linux-gnu/i686/cmov/librt.so.1 (0xf742e000)
/lib/ld-linux.so.2 (0xf77e2000) | Running a 32-bit application on 64-bit Debian Wheezy: missing libraries | 64bit;debian;multiarch | It seems you're missing the 32-bit libraries (/usr/lib/x86_64-linux-gnu contains 64-bit libraries).

Now, let's figure out which packages you need for your libraries:

$ dpkg -S /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_conf_pulse.so
libasound2-plugins:amd64: /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_conf_pulse.so
$ dpkg -S /usr/lib/x86_64-linux-gnu/libGL.so.1
libgl1-mesa-glx:amd64: /usr/lib/x86_64-linux-gnu/libGL.so.1

So you need the 32-bit versions of these packages:

# apt-get install libasound2-plugins:i386 libgl1-mesa-glx:i386

In general, before you can install any 32-bit libraries, you must add the i386 architecture to dpkg:

# dpkg --add-architecture i386
# apt-get update

Update

Since the above didn't solve the libGL.so.1 issue, and it seems from your ldd output that Aquaria can see all its required libraries, I googled the libGL.so.1 error message and two things came up. Please try the following 2 solutions in order:

1. As explained here, try symlinking libGL.so.1:

ln -sv /usr/lib/i386-linux-gnu/libGL.so.1.2 /usr/lib/libGL.so.1

Note that I modified the paths from the answer I linked to so that they're relevant to Debian instead.

2. The answer here suggests that you need to install libgl1-mesa-glx:i386 (which you've already done) plus libgl1-mesa-dri:i386 (which is what I'm suggesting you try next).

Update: What finally worked

apt-get purge libgl1-mesa-glx:i386
apt-get install libgl1-mesa-glx:i386
ln -s /usr/lib/mesa-diverted/i386-linux-gnu/libGL.so.1 /usr/lib/i386-linux-gnu/
_cs.57482 | Let L ⊆ {0, 1}*. Then:

1) If all proper subsets of L are regular, is L regular?
2) If all finite subsets of L are regular, is L regular?
3) If a proper subset of L is not regular, is L non-regular?

I am not sure which of the above are true. I think 2) is true because any finite subset can be accepted by a DFA. Are 1) and 3) always true? If not, I am not able to come up with counterexamples. | Subsets and proper subsets of a regular language | finite automata | null
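A worked sketch for claim 2), added for clarity; this reasoning is mine, not from the original post. Every finite language is regular, so the hypothesis of 2) is satisfied by every language $L$, and 2) would therefore make every language regular. A standard counterexample: take $L = \{\, 0^n 1^n : n \ge 0 \,\}$. Every finite subset of $L$ is finite, hence regular, yet $L$ itself is not regular (by the pumping lemma). So "all finite subsets regular" does not imply "$L$ regular".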
_reverseengineering.8369 | I'm currently reversing a function which looks like the following:

.text:0040383F 8D 04 BF             lea     eax, [edi+edi*4]
.text:00403842 6A 14                push    20
.text:00403844 C1 E0 03             shl     eax, 3
.text:00403847 99                   cdq
.text:00403848 59                   pop     ecx
.text:00403849 F7 F9                idiv    ecx
.text:0040384B 03 45 08             add     eax, [ebp+arg_0]
.text:0040384E 8A 84 30 C8 31 00+   mov     al, [eax+esi+31C8h]
.text:00403855 32 C3                xor     al, bl
.text:00403857 88 84 3E 28 27 00+   mov     [esi+edi+2728h], al
.text:0040385E 47                   inc     edi
.text:0040385F 81 FF 07 0B 00 00    cmp     edi, 0B07h
.text:00403865 75 D8                jnz     short loc_40

Since I don't have any clue what's going on there, I wanted to debug this part with OllyDbg. I want to understand what's in al and bl, and the result of xor al, bl, for all 0B07h iterations of the loop.

I just saw that Immunity provides some sort of scripting functionality. Is it possible to achieve this with a simple Python script in Immunity? Maybe there are other ways with OllyDbg?

I just want something like:

If EIP == 403855 then print al, bl
Else go_ahead | How to efficiently debug loops with OllyDbg/Immunity | ollydbg;debugging;immunity debugger;xor | No scripting required.

In OllyDbg's disassembly window, left-click on the line .text:00403855 32 C3 xor al, bl to select it, then right-click on the selected line and choose Breakpoint > Conditional log....

In the breakpoint dialog box that opens up, use the following options:

[screenshot of the Conditional log breakpoint dialog]

Press OK, run the program, and every time .text:00403855 32 C3 xor al, bl is executed, OllyDbg will print the values of al and bl to the log window.