_webapps.108998
I just clicked on the Confirm button in the Requests feature and got the following message: "Facebook is for connecting and sharing with your real life friends. If you are sending friend requests to people you don't know, you may be disabled."

I also tried clicking Accept on this page: https://www.facebook.com/friends/requests and got this error message: "You are sending friend requests that may be considered abusive."

Why did I get this error on accepting friend requests? I didn't send any requests; I accepted 1,500-2,000 requests on the same day, after which the error message appeared. I can still send requests and delete requests, but I can't accept requests. The total number of friends in my friends list is currently about 2,002.
How can I accept Facebook friend requests up to the maximum friend limit?
facebook;facebook friend request
null
_cogsci.1398
To what extent does cooperative versus competitive learning influence personality development or even pathological behaviors? If these activities need to be narrowed down to a specific category, I'm more interested in gaming patterns or gamified activities used in education. Still, I would prefer a broad answer if there is one. Ideally, these would include sports, games of any type, common activities, and anything else that can be compared from the cooperative vs. competitive point of view.

So far, my searches are only returning results that focus on the effectiveness of such learning strategies, but not on the imprinting effect that they may have on the individual, especially if short-lived. If particular research or studies cannot be found, I can settle for an answer explaining hypotheses on how these would work towards building personality and behaviors.

The question arises from trying to draw a parallel with AI and general game playing, where cooperative/competitive behaviors can be trained through heuristics. These heuristics and the balance between cooperation and competitiveness (for mixed activities) usually display a personality, a strategy that depends on the coder or designer, unless the algorithm is self-taught. At that point, I would like to see whether the behaviors shown by self-taught algorithms can parallel in any way the nuances of human strategies, but I don't know what to expect of those in humans at all - hence the question.
How do cooperative vs. competitive activities impact the learning patterns of an individual?
social psychology;learning;gamification;game theory
null
_codereview.158371
I have decided to create a discount Pokemon battle simulator using a module called guizero. It's basically a wrapper for Tkinter that is easier to learn. Recently, I started to learn OOP and thought that this project would be good practice for my new-found skills. Someone in my computing group looked at my code and told me it was terrible, so I thought I'd get someone else's opinion.

Edit

I just want to point out that I've been trained to comment everything I code to a standard that a beginner could understand.

This is the code (you will have to download guizero to make it work):

```python
# Import guizero for the GUI and random
import guizero as gui
import random

# Create an empty list for the Animals
actives = []

# Create an immutable list (tuple) for default animals
DEFAULT_ATTRS = (('Cow',     100, 10, 15, 4),
                 ('Chicken',  40, 50, 40, 5),
                 ('Duck',     45, 35, 70, 2))

# Create a function that checks if all values in a list are equal
def all_equal(iterable):
    for i in range(len(iterable)):
        if iterable[i-1] != iterable[i]:
            return False
    return True

# Create an Animal class for the animals
class Animal:
    # Create an immutable value so that another values can't change the string
    # Make a string that can be formated and evaluated
    POWERSUM = 'int((({1}+{2})*{3})/{0})'

    def __init__(self, name=None, strength=None, speed=None, skill=None, age=None):
        assign = all_equal([name, strength, speed, skill, age]) and name == None
        self.assigned = not assign
        if assign:
            return None
        self.optimum = 50
        self.name = name.title()
        attr = [strength, speed, skill]
        while sum(attr) > random.randint(180, 220):
            # If the sum is greater than 220 (or less)
            # Change the max and the min values
            attr[attr.index(max(attr))] -= random.randint(1, 9)
            attr[attr.index(min(attr))] += random.randint(1, 5)
        self.strength, self.speed, self.skill = attr
        self.fitness = 100
        self.attr = attr[:]
        self.active = True
        # Create a list with the values [number of battles, battles lost]
        self.battles = [0, 0]
        self.age = age
        self.power = 0

    def __repr__(self):
        # Create the display string
        # Name. Stats: Strength, Spped, Skill and Age
        attr = self.attr[:] + [self.age]
        return '{}. Statistics: {},{},{} and {}'.format(self.name, *attr)

    def returnPower(self):
        # Get the power. The optimum age is 50
        # Effectively create a parabola effect on the age
        if self.age > 50:
            age = self.optimum / (101 - self.age)
        else:
            age = self.optimum / self.age
        self.power = eval(self.POWERSUM.format(age, *self.attr))
        return self.power

# Add the three default values
for attr in DEFAULT_ATTRS:
    actives.append(Animal(*attr))

class BattleWindow(gui.App):
    # Create a class that creates a GUI,
    # avoiding the need for global variables; they're all attributes
    def __init__(self, *animals):
        super().__init__(title='Animals Battles', layout='grid')
        # Create the function so that if the window is closed,
        # it automatically opens the menu window
        self.on_close(self.cancel)
        texts = [[], []]
        for i, person in enumerate(['Animal Selected', 'Opponent']):
            texts[i].append(person)
            for cate in ['Strength', 'Skill', 'Speed', 'Age', 'Fitness', 'Power']:
                texts[i].append(cate)
        buttons = ((self.power,    'Power',     [0, 0]),
                   (self.opponent, 'Opponent',  [1, 0]),
                   (self.battle,   'Battle',    [2, 0]),
                   (self.firstaid, 'First aid', [3, 0]))
        for func, text, grid_xy in buttons:
            self.aidbtn = gui.PushButton(self, func, text=text, grid=grid_xy)
        self.animals = list(animals)
        # Create 2 'empty' animals that can't do anything
        # just in case the user tries to do something
        self.chosen = Animal()
        self.opponent = Animal()
        self.displays = [[], []]
        self.options = ['None']
        for animal in animals:
            self.options.append(animal.name)
        # Create a Combo to choose the animal
        self.combo = gui.Combo(self, self.options, command=self.disp, grid=[0, 2])
        for i, text in enumerate(texts):
            for x, tx in enumerate(text):
                pos = [[x], [x]]
                if i % 2 == 0:
                    pos[0].append(1)
                    pos[1].append(2)
                else:
                    pos[0].append(3)
                    pos[1].append(4)
                gui.Text(self, text=tx + ': ', grid=pos[0], align='left')
                if tx != 'Animal Selected':
                    self.displays[i].append(gui.Text(self, grid=pos[1]))
        # Display the GUI so that everything shows up
        self.display()

    def battle(self):
        fitness = 'fitness'
        if not (hasattr(self.chosen, fitness) or hasattr(self.opponent, fitness)):
            gui.warn('No opponent!', 'You need an opponent!')
            return
        # Decrease the fitnesses of the animals by 75% of the value
        self.opponent.fitness *= 0.75
        self.chosen.fitness *= 0.75
        # Add 1 to the number of battles
        self.chosen.battles[0] += 1
        self.opponent.battles[0] += 1
        # If power has not yet been calculated,
        # return so that the battle never happens
        if self.displays[0][-1].get() == 'N/A':
            return
        if self.opponent.power > self.chosen.power:
            winner = self.opponent
            self.chosen.fitness *= 0.75
            self.chosen.battles[1] += 1
        else:
            winner = self.chosen
            self.opponent.fitness *= 0.75
            self.chosen.battles[1] += 1
        gui.info('The winner is...', 'The Winner is ... {}'.format(winner.name))
        # Set the fitness display to the fitness to 2d.p.
        self.displays[0][-2].set(round(self.chosen.fitness, 2))
        self.displays[1][-2].set(round(self.opponent.fitness, 2))
        # Check if either fitness is less than 1 as
        # 0 can never be reached
        if self.opponent.fitness < 1 or self.chosen.fitness < 1:
            if self.opponent.fitness < 1:
                self.opponent.active = False
                name = self.chosen.name
                popname = self.opponent.name
                x = 1
            if self.chosen.fitness < 1:
                self.chosen.active = False
                name = 'None'
                popname = self.chosen.name
                x = 0
            # Clear the displays if the fitnesses are less than 1
            for disp in self.displays[x]:
                disp.clear()
            # Remove the name from the dropdown options
            # then destroy the combo and create a new one
            # The new combo is then set to either the current
            # animal or None if the user animal faints
            self.options.remove(popname)
            self.combo.destroy()
            self.combo = gui.Combo(self, self.options, grid=[0, 2])
            self.combo.set(name)
            # Get rid of the Animal object from the animals so that
            # the random opponent can't be one of the fainted ones
            actives.pop([i.name for i in actives].index(popname))

    def cancel(self):
        # Go back to the menu system
        self.destroy()
        Menu()

    def disp(self, _):
        # If the combo is None, set the displays to N/A
        if self.combo.get() == 'None':
            for disp in self.displays[0]:
                disp.set('N/A')
        self.chosen = self.animals[self.options.index(self.combo.get()) - 1]
        # Create a copy of the attr attribute of self.chosen.attr
        # Next add the age and the fitness to the list
        attrs = self.chosen.attr[:]
        attrs.append(self.chosen.age)
        attrs.append(self.chosen.fitness)
        # Next change the displays to the
        # appropriate values
        for i in range(len(attrs)):
            self.displays[0][i].set(attrs[i])
        # Finally set the 'Power' display to N/A
        self.displays[0][-1].set('N/A')

    def firstaid(self):
        # Create a function that allows self.chosen to get more fitness
        if self.chosen.battles[0] == 0:
            return
        # Check if the battle win percentage is high enough to get first aid
        if 100 * (self.chosen.battles[1] / self.chosen.battles[0]) > 60:
            if self.chosen.fitness > 50:
                amount = 100 - self.chosen.fitness
            else:
                amount = 50
            self.chosen.fitness += amount
            self.displays[0][-2].set(round(self.chosen.fitness, 2))
            # Make the button disabled so that it can't be pressed again
            self.aidbtn.config(state=gui.DISABLED)
        else:
            gui.warn('Too many losses', 'You haven\'t won enough battles!')

    def opponent(self):
        # Randomly choose an enemy. While that enemy
        # is the same as the 'chosen', choose again
        value = random.choice(actives)
        while value == self.chosen:
            value = random.choice(actives)
        self.opponent = value
        # Create a copy of the opponent attrs
        # Then add the age, fitness and name
        attrs = self.opponent.attr[:]
        attrs.append(self.opponent.age)
        attrs.append(self.opponent.fitness)
        attrs.insert(0, self.opponent.name)
        # Add the displays for the opponent
        for i in range(len(attrs)):
            self.displays[1][i].set(attrs[i])
        self.displays[1][-1].set('N/A')

    def power(self):
        # Set the text to the power. Doesn't need
        # the value to be assigned; happens in the returnPower() function
        if self.chosen.assigned:
            self.displays[0][-1].set(self.chosen.returnPower())
        if self.opponent.assigned:
            self.displays[1][-1].set(self.opponent.returnPower())

# Create the default window that creates
# a menu system
class Menu(gui.App):
    def __init__(self):
        super().__init__(title='Menu System', layout='grid', height=300)
        gui.Text(self, text='Please choose an option', grid=[0, 0])
        # Create a 2d tuple containing the infos
        options = (('Add new animal', self.addNew, [1, 0]),
                   ('Battle!!!',      self.battle, [2, 0]),
                   ('Delete animal',  self.delete, [3, 0]))
        # Create a list containing the names of the
        # animals for leter
        self.names = [i.name for i in actives]
        # Create the buttons for the options
        for text, func, grid_xy in options:
            gui.PushButton(self, func, text=text, grid=grid_xy)
        # Display all the widgets from the GUI
        self.display()

    def clear(self):
        # Clear the texts used
        for text in self.text:
            text.destroy()
        # Clear the entries used
        for ent in self.entries:
            ent.destroy()
        # Clear and delete the 2 buttons
        self.btn.destroy()
        self.cancel.destroy()
        del self.btn, self.cancel

    def addAnimal(self):
        # Create a list of the gotten values
        self.got = []
        for i in range(len(self.entries)):
            self.got.append(self.entries[i].get())
        if self.got[0] == '':
            gui.error('Name', 'Please provide a name')
            return
        # Add the animal to the actives values
        actives.append(Animal(*self.got))
        gui.info('Animal Added', '{} added!'.format(self.got[0]))
        # Clear the widgets
        self.clear()

    def addNew(self):
        # Create a tuple containg the Text widget information
        texts = (('strength', [1, 2], [1, 1]),
                 ('speed',    [2, 2], [2, 1]),
                 ('skill',    [3, 2], [3, 1]),
                 ('age',      [4, 2], [4, 1]))
        entries = []
        text = []
        text.append(gui.Text(self, text='Enter animal name: ', grid=[0, 1]))
        entries.append(gui.TextBox(self, grid=[0, 2], width=25))
        # Create the Text widgets and the Slider widgets
        for t, sc, tc in texts:
            text.append(gui.Text(self, text='Enter animal ' + t + ': ', grid=tc))
            entries.append(gui.Slider(self, start=1, end=100, grid=sc))
        # Create copies of the entries and text lists
        self.entries = entries[:]
        self.text = text[:]
        # Create the 2 buttons for submitting and cancelling
        self.btn = gui.PushButton(self, self.addAnimal, text='Submit', grid=[6, 1])
        self.cancel = gui.PushButton(self, self.clear, text='Cancel', grid=[6, 2])

    def battle(self):
        # Destroy menu window and open BattleWindow
        self.destroy()
        BattleWindow(*actives)

    def deleteOne(self):
        # If the combo for deletion does not equal None
        # pop the name from actives and give a info window
        if self.todelete.get() != 'None':
            index = self.names.index(self.todelete.get())
            delete = actives.pop(index).name
            gui.info('Deleted', '{} has been deleted!'.format(delete))
            self.names.pop(index)
        # Destroy the button and Combo
        self.todelete.destroy()
        self.bn.destroy()

    def delete(self):
        # Create a combo for the animal and a 'delete' button
        self.todelete = gui.Combo(self, ['None'] + self.names, grid=[0, 1])
        self.bn = gui.PushButton(self, self.deleteOne, text='Delete', grid=[1, 1])

# Initialize the menu window to start
Menu()
```
Discount Pokemon Battles
python;beginner;python 3.x;battle simulation;pokemon
```python
# Import guizero for the GUI and random
import guizero as gui
import random
```

This, like many of your comments, is exactly the kind of comment you should never write. It just says what the code does. We can see what the code does. Use comments, where absolutely necessary, to explain why it does it. You've added that "I've been trained to comment everything I code to a standard the a beginner could understand it", but I don't think commenting everything helps in that respect, compared to well-written code with sensible variable names and good docstrings that will appear in the help. Also you should group imports, standard library first, per the style guide:

```python
import random

import guizero as gui
```

```python
# Create an immutable list (tuple) for default animals
DEFAULT_ATTRS = (('Cow',     100, 10, 15, 4),
                 ('Chicken',  40, 50, 40, 5),
                 ('Duck',     45, 35, 70, 2))
```

A minor thing, but a tuple is not just an immutable list; see e.g. What's the difference between lists and tuples? Also, I wouldn't align values like that; use a single space after each comma, otherwise if you add another item with a longer name or value you have to realign everything.

```python
# Create a function that checks if all values in a list are equal
def all_equal(iterable):
```

This comment is redundant again and, worse, inconsistent with the code. Evidently you've realised that this functionality works fine with non-list iterables and renamed the argument, but you haven't updated the comment. Also when you're describing modules, classes and functions you should do so with docstrings, not just comments:

```python
def all_equal(iterable):
    """Whether all of the values in the iterable are equal."""
```

This makes them useful to IDEs, documentation generators, etc.

```python
assign = all_equal([name,strength,speed,skill,age]) and name == None
self.assigned = not assign
if assign:
    return None
```

This is bad; sorry, no two ways about it. If this is intended as validation of the inputs, so at least one of the values must be provided, it should look something like:

```python
if all(item is None for item in [name, strength, speed, skill, age]):
    raise ValueError('at least one of the inputs must be provided')
```

You could use any instead of all to mean all of the inputs must be provided, but in that case why provide default parameter values at all? Note the comparison with None by identity (is) not equality (==); it's a singleton.

Your initialisation in general seems too long and complex. Specifically, I would extract this:

```python
attr = [strength, speed, skill]
while sum(attr) > random.randint(180, 220):
    # If the sum is greater than 220 (or less)
    # Change the max and the min values
    attr[attr.index(max(attr))] -= random.randint(1, 9)
    attr[attr.index(min(attr))] += random.randint(1, 5)
self.strength, self.speed, self.skill = attr
```

To be:

```python
self.strength, self.speed, self.skill = self._adjust_attr_values(strength, speed, skill)
```

Again, this method would have a docstring explaining why this is necessary. I wouldn't store self.attr; that duplicates existing information, and risks updating one but not the other. If it's really needed, it should be a calculated and ideally read-only property:

```python
@property
def attr(self):
    return [self.strength, self.speed, self.skill]
```

```python
# Create a list with the values [number of battles, battles lost]
self.battles = [0, 0]
```

One problem with this is that the list doesn't actually keep that context with it. You will find yourself writing battles, lost = thing.battles, even when you only want one of them, and then you mix up the order one time and you've got a really tricky bug to track down. Why not have two attributes?

```python
self.battles_won = 0
self.battles_lost = 0
```

You could add another property for the total. Or create a new object entirely to just hold wins and losses with the total, but that's probably overkill. In general also I would first assign all of the parameters, then do all of the initialisation from fixed values. This means that the reader can get the context of the parameters out of their head as early as possible.

```python
def __repr__(self):
    # Create the display string
    # Name. Stats: Strength, Spped, Skill and Age
    attr = self.attr[:] + [self.age]
    return '{}. Statistics: {},{},{} and {}'.format(self.name, *attr)
```

Per the data model, __repr__ should "...look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form <...some useful description...> should be returned." Your method does neither, so should be called __str__.

```python
def returnPower(self):
    # Get the power. The optimum age is 50
    # Effectively create a parabola effect on the age
    if self.age > 50:
        age = self.optimum / (101 - self.age)
    else:
        age = self.optimum / self.age
    self.power = eval(self.POWERSUM.format(age, *self.attr))
    return self.power
```

Based on all of the above:

```python
@property
def power(self):
    """Calculate the power. The optimum age is 50."""
    age = 50 / ((101 - self.age) if self.age > 50 else self.age)
    return int(((self.strength + self.speed) * self.skill) / age)
```

This is far less cryptic than your formulation; I'm not sure what risk you were trying to mitigate with the POWERSUM, and if all animals have the same optimum why make it an instance attribute?

```python
# Create an empty list for the Animals
actives = []

# Create an immutable list (tuple) for default animals
DEFAULT_ATTRS = (('Cow',     100, 10, 15, 4),
                 ('Chicken',  40, 50, 40, 5),
                 ('Duck',     45, 35, 70, 2))
...
# Add the three default values
for attr in DEFAULT_ATTRS:
    actives.append(Animal(*attr))
```

This all seems a bit odd looking back. Why not define the class then just do:

```python
actives = [
    Animal('Cow', 100, 10, 15, 4),
    Animal('Chicken', 40, 50, 40, 5),
    Animal('Duck', 45, 35, 70, 2),
]
```
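Pulling those suggestions together, here is a runnable sketch of what the refactored Animal could look like. This is only an illustration of the review's points, not code from either post: the `_adjust_attr_values` name and the `any(...)`-based "all inputs required" validation are choices I made for the sketch.

```python
import random


class Animal:
    def __init__(self, name=None, strength=None, speed=None, skill=None, age=None):
        # Illustrative choice: require every input (the review's `any` variant)
        if any(item is None for item in [name, strength, speed, skill, age]):
            raise ValueError('all of the inputs must be provided')
        self.name = name.title()
        self.strength, self.speed, self.skill = self._adjust_attr_values(
            strength, speed, skill)
        self.age = age
        self.fitness = 100
        self.battles_won = 0
        self.battles_lost = 0

    @staticmethod
    def _adjust_attr_values(strength, speed, skill):
        """Rebalance the stats until their sum falls under a random cap."""
        attr = [strength, speed, skill]
        while sum(attr) > random.randint(180, 220):
            attr[attr.index(max(attr))] -= random.randint(1, 9)
            attr[attr.index(min(attr))] += random.randint(1, 5)
        return attr

    @property
    def attr(self):
        # Read-only view derived from the three real attributes
        return [self.strength, self.speed, self.skill]

    @property
    def power(self):
        """Calculate the power. The optimum age is 50."""
        age = 50 / ((101 - self.age) if self.age > 50 else self.age)
        return int(((self.strength + self.speed) * self.skill) / age)

    def __str__(self):
        return '{}. Statistics: {},{},{} and {}'.format(
            self.name, self.strength, self.speed, self.skill, self.age)


cow = Animal('Cow', 100, 10, 15, 4)
print(cow)        # Cow. Statistics: 100,10,15 and 4
print(cow.power)  # 132 (Cow's stats sum to 125, so they are never rebalanced)
```

Note that `power` and `attr` are now computed on demand, so there is no stored copy to fall out of sync.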
_unix.345528
I read the man pages on both, and they seem to be interchangeable and to be doing the same job. So can someone explain when I should use partx, and when kpartx?
What's the difference between partx and kpartx?
linux
null
_unix.232828
My router currently provides the NAT to all the PCs and the Ubuntu desktop (with some server functions) in the network. I want to use the Ubuntu system as a proper firewall, but it only has one Ethernet interface. As such, I envision the following to get it running:

            Ubuntu/firewall    router/WAN      DHCP
    IP      192.168.1.1        192.168.1.10    192.168.1.*
    GW      192.168.1.10
    WAN IP                     192.168.1.1

Can I expect everything to work fine if I statically configure my Ubuntu system and router as I described? Will it be fine to use the single physical interface to handle INPUT and FORWARD? Do I need to do things like create virtual interfaces?
Route incoming and outgoing on same interface
firewall;networkmanager;network interface
I've accomplished what I described in the question. Here's the Ubuntu configuration that allowed me to do so:

$ sudoedit /etc/network/interfaces

```
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```

(Actually br0 in my files, from bridging a VM to the physical LAN, but I've replaced it with the more generic eth0.)

$ sudoedit /etc/sysctl.conf

```
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
```

Basically, sysctl.conf's packet-forwarding setting is what, I believe, allowed my setup to work. I've confirmed that the setup indeed worked by seeing traceroutes go through 192.168.0.10 before 192.168.0.1, and the firewall rules configured on Ubuntu actually filtered the traffic as intended.
_cs.80083
Is there a non-regular language that satisfies the pumping lemma for regular languages? I didn't find one.

Edit: From what I understood, the pumping lemma does not prove that a language is regular, because there are non-regular languages that satisfy the pumping lemma. Thanks.
Non-regular language and pumping lemma
regular languages;pumping lemma
null
_unix.89767
I am looking for zsh functionality to expand disk labels into mount points.

Example: I have a disk with label DISK-LABEL1 mounted on /run/media/god/DISK-LABEL1.

Is there a plugin which expands input like cat //DISK-LA<Tab> to cat /run/media/god/DISK-LABEL1? (// was chosen as an example prefix to trigger that type of autocompletion.)
Zsh completion for mounts (/run/media/DISK-LABEL)?
mount;zsh;autocomplete
null
_codereview.121959
I would like to create a regex that will validate that a string is an equation made up of single digits and either the * or + operator, and is no longer than 100 characters. So these would be valid:

1+2*3*8+0
9
9*9

And these would not be valid:

1++1
12+12*25
1+47++1
11

I came up with the regex below to accomplish this:

^(\d{1}[\+\*]{1}){0,99}\d$

Which appears to work, but I'm curious if there is a cleaner way to accomplish this. Any suggestions, or is this about as clean as it gets? It is saved here if you would like to play with it.
Regex for matching expression that consists of single digit numbers and operators
python;regex
While your regex appears to work, it will fail the condition that your expression should not exceed 100 characters. With the bound of {0,99} on the 2-character pattern \d{1}[\*\+]{1}, you are already allowing the repeated part alone to reach 198 characters.

Using the {1} quantifier is just redundant.

No need for escaping inside a character set. The only things needing a leading backslash (\) inside a character class are ] and ^, where the caret only needs it when it is the first character inside.

Your expressions will not reach the 100-character mark, unless you allow the + to act as a unary operator.

Therefore, the following pattern is the simplest approach (imo):

^(?:\d[*+]){0,49}\d$

which is only the tiniest bit modified from your original expression. You can check the pattern in action here on the following expressions:

1+2*3*8+0
9
9*9
1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+5*8
1++1
12+12*25
1+47++1
11
1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+1*2+5*8+9
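Since the question is tagged python, the suggested pattern can also be sanity-checked with the re module. A small sketch (the helper name is mine; re.fullmatch already anchors the match, so the explicit ^ and $ can be dropped):

```python
import re

# Pattern from the answer: up to 50 single digits joined by
# '*' or '+', giving a maximum expression length of 99 characters.
PATTERN = re.compile(r'(?:\d[*+]){0,49}\d')

def is_valid(expression):
    """True if expression is single digits joined by * or +, <= 99 chars."""
    return PATTERN.fullmatch(expression) is not None

print(is_valid('1+2*3*8+0'))       # True
print(is_valid('1++1'))            # False: doubled operator
print(is_valid('12+12*25'))        # False: multi-digit numbers
print(is_valid('1*2+' * 24 + '5*8'))        # True: 99 characters
print(is_valid('1*2+' * 24 + '5*8+9'))      # False: 101 characters
```

Building the long test strings with string repetition makes the length boundary easy to verify programmatically.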
_softwareengineering.322820
In many of my personal and professional projects there inevitably comes a moment where we have to weigh the pros and cons of integrating third parties versus developing a home-grown solution. I've always been a fervent user of open source resources, and I have faced programmers I respect who were following the other path. The arguments are complex, and I often get lost in what the pros and cons of using third parties in general are.

As an example, one of the projects I work on implements a message broker with a custom protocol. I am fairly sure it could have been implemented using a third-party AMQP message broker. That would reduce the code base and, although I can't be sure yet, may enhance overall performance and stability.

I'm still new to the project and probably missing something, but isn't this a case of reinventing the wheel, or are there pitfalls in using third parties that I am not aware of? I am looking for a general guideline for borderline cases where both options (using a third party or developing in-house) are realistic.
Are there pitfalls to consider when integrating with third parties?
open source;third party libraries
null
_reverseengineering.11175
I have read that HBGary's FastDump Pro (FDPro) can capture kernel dumps and include the page file contents.

Although I'm not sure if the tool is still available commercially (it's not listed on the countertack.com webpage), I'd like to know whether the file format created by FastDump Pro is compatible with WinDbg or if I need other tools (HBGary/Countertack tools) to analyze it.

If they are compatible, I see some benefit in having the page file contents included in the dump, since that would e.g. give the possibility of debugging a .NET application from a kernel dump, which is usually not possible since parts of the virtual memory have been paged out.
Are HBGary FastDump Pro dumps compatible with WinDbg?
debugging;windbg;dumping
null
_softwareengineering.111920
Consider the formal definition:

f(n) = O(g(n))

Why is it not:

f(n) = O(f(n)) or f(n) = O(c*f(n))

since, for Big O analysis, f(n) = 2n and g(n) = n are identical? I am confused by the definition using another function g.

Update

Why isn't the definition simply:

f(n) <= c*abs(g(n))

What does the formal O(g(x)) add to the definition? It seems like it overcomplicates things.
Why is the formal definition of Big O notation formulated as such?
algorithms;computer science;complexity;big o
This is an extremely weird definition and is actually new to me. The symbols, as defined by Bachmann and Landau, are not defined like that. Unfortunately, the German Wikipedia is the only source I can find for exactly this as of now, but I suppose you can see how it is defined without much translation. (Please note: the French Wikipedia has a similar definition, which I suppose basically states the same, although I think it is incorrect, given that f(n) is something completely different than f.)

As I explained in response to a different question, O means "the order of", and thus O(g) is actually the set of all functions that have the same order as g. It makes sense only to say:

f is the same order as g (or more explicitly: the order of f is the order of g), which is O(f) = O(g)

f is in the order of g, which translates to f ∈ O(g)

So for the sake of nitpicking (which is the fun part of formalization), one can say the definition you criticize is indeed wrong.
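The set-membership reading (f ∈ O(g) because some constant c bounds f by c·|g| from some point on) can be illustrated numerically. This is a toy finite check of my own, not a proof — Big O is an asymptotic statement, so checking a finite range only provides evidence for particular witness constants:

```python
def dominated(f, g, c, n0, n_max=10_000):
    """Check f(n) <= c*|g(n)| for all n0 <= n <= n_max (finite evidence only)."""
    return all(f(n) <= c * abs(g(n)) for n in range(n0, n_max + 1))

# f(n) = 2n is in O(n): witness constants c = 2, n0 = 1
print(dominated(lambda n: 2 * n, lambda n: n, c=2, n0=1))        # True

# ...and n is in O(2n) too, which is why O(n) and O(2n) are the same order
print(dominated(lambda n: n, lambda n: 2 * n, c=1, n0=1))        # True

# but n^2 is not bounded by 100*n once n exceeds 100
print(dominated(lambda n: n * n, lambda n: n, c=100, n0=101))    # False
```

The first two checks show the symmetry that makes O(f) = O(g) for f(n) = 2n and g(n) = n, which is exactly why writing f(n) = O(c*f(n)) would add nothing over f(n) = O(g(n)).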
_softwareengineering.255110
We started following the Agile Scrum methodology and have completed about 10 sprints. One observation I have is that not everyone in the team takes up the responsibility of completing tasks and user stories by themselves; every time, they have to be instructed or allotted some tasks. Also, the estimates they give are not very agile. There is always a need for someone to look after all the user stories, their completion status, tasks that are yet to be done, etc., and to allot them to team members who are not occupied.

I also feel that we would be able to deliver faster if we had one person who would create tasks and assign them to people along with deadlines to complete them (a project manager role). And this is what I feel is missing in Agile Scrum. Given that the team is not taking up tasks and not risking tight estimates, what alternatives can we look at? Or are there any provisions in Agile Scrum to speed things up?
Alternatives for Agile Scrum
project management;agile;scrum;team
If you only looked at agile because you were expecting increased productivity, be aware that agile (/scrum) is not a silver bullet. Yes, self-empowered teams can become more productive, but they need help. So get a coach.

Agile is like playing chess: it takes 30 minutes to explain the rules, and after that you can start playing. But it takes years to reach a reasonable level as a chess player.
_cs.46899
I have a pretty good handle on what recursive and recursively enumerable languages mean with respect to Turing machines, and how they relate to one another, from my algorithms class. What I don't understand is how these languages relate to the computability of problems, and whether these languages correspond to problems or something else. I'm missing the bridge between the theory and the practical application of theory this abstract; could somebody bridge the gap?

In particular, what does the recursive nature of a language (its being recursive or r.e.) tell us about the problem being considered?
Between languages and problems
formal languages;turing machines;recursion theory
null
_unix.297151
I'm not able to create a hotspot in order to share my wifi connection. I use Linux Mint and I want to connect my phone to WiFi through my laptop's hotspot.
Share WIFI creating hotspot on Linux Mint
linux;wifi;wifi hotspot
null
_unix.119031
I have a sample directory shared out with Samba which all users should have read/write access to. I would like to prevent some of these users from deleting any files (even the ones they create). This is mostly to prevent accidental deletions. How can I ensure that some users have the ability to delete files while others do not?

Things I've tried:

Sticky bit +t: this still allows users to delete their own files. Not desired.

(Samba) create mode = 555: this prevents all deletions. I want some users to still be able to delete files.
How can I prevent some users from deleting files in samba?
debian;permissions;samba
null
_codereview.52441
After profiling my application, it turns out that a single method is taking 3 minutes to run, which is about a third of the total runtime. The method deletes approx. 400,000 rows from each table (PROCESSED_CVA and PROCESSED_DVA).

The code executing the queries:

```java
public final static String DELETE_CVA = "delete from PROCESSED_CVA where RUN_ID = ?";
public final static String DELETE_DVA = "delete from PROCESSED_DVA where RUN_ID = ?";

public void purge(Run run) throws HibernateException {
    Session session = null;
    if (session == null) {
        session = sessionFactory.openSession();
    }
    Transaction t = session.beginTransaction();
    try {
        SQLQuery query = session.createSQLQuery(DELETE_CVA);
        query.setLong(0, run.getRunId());
        query.executeUpdate();

        query = session.createSQLQuery(DELETE_DVA);
        query.setLong(0, run.getRunId());
        query.executeUpdate();
        t.commit();
    } catch (HibernateException he) {
        logger.error("Failed to purge processed cva and dva for run: " + run.getRunId(), he);
        t.rollback();
        throw he;
    }
}
```

Both tables have the same structure.

```sql
CREATE TABLE PROCESSED_CVA (
    DEAL_ID VARCHAR2(23 BYTE),
    NTT_ID  VARCHAR2(10 BYTE),
    CVA     FLOAT(126),
    RUN_ID  NUMBER(10,0)
);

ALTER TABLE PROCESSED_CVA ADD CONSTRAINT PK_CVA PRIMARY KEY (DEAL_ID, RUN_ID);
```

There is an index on the primary key. The execution plan:

```
OPERATION          OBJECT_NAME     OPTIONS     COST
DELETE STATEMENT                               100582
|_ DELETE          PROCESSED_CVA
   |_ INDEX        PK_CVA          SKIP SCAN   100582
      |_ Access Predicates
         |_ RUN_ID=100
      |_ Filter Predicates
         |_ RUN_ID=100
```

Can I speed this up?

UPDATE: The DBMS is Oracle.
Slow delete query on table with composite index
java;performance;sql;oracle
Oracle uses the following strategy when deleting data. It:

 - identifies the rows that need to be deleted (it does use your PRIMARY KEY index to check the RUN_ID value, but because RUN_ID is not the first column in the index it needs to 'skip' values in the index)
 - deletes the record in the online version of the data
 - writes a physical record to the transaction / redo log to record the values that were deleted

Oracle works on a per-block basis for its redo log:

    A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database

Each time you change a record, the block it is stored in is changed, and the difference is recorded in the redo log. The number of blocks you change is a key factor in determining the amount of redo-log work that you do. The number of blocks is closely related to the amount of data you are changing, and to the way that the data is distributed.

If you are deleting a bunch of records that are all stored really close to each other, then the chances are that the number of blocks affected will be small. If the records are scattered across many blocks, then the number of blocks affected is high.

Based on the key you have specified for your data (DEAL_ID, RUN_ID), it appears to me that your data for a specific RUN_ID will be scattered all over the database.

This means that, each time you delete the data, you are actually inserting 400,000 redo-log entries, modifying 400,000 blocks of storage (let's say 8K each, so that's about 3GB of IO), and generally working the system quite hard.

So, apart from the basic problem of deleting 400K records, inserting 400K redo entries, and writing all that data to disk, what else could it be?

Locks will likely need to be escalated. Oracle will start by trying to lock the records one at a time, but will quickly find that the lock management requires a bigger lock strategy, so it may replace the row locks with block locks, and then finally escalate the block locks to a full table lock. In itself, this is not a significant performance problem, but it is a problem if anyone else is running anything against the table: the lock escalation will have to wait until all other locks on the table are serviced. Only then will it gain exclusive access.

Ways to improve the performance would be:

 - monitor the database; confirm that IO is a real problem
 - monitor the lock strategies: are there significant lock-wait situations?
 - reduce the logging requirements: physically order the data in the same order as the RUN_ID. You can 'cluster' the data or, in Oracle terms, use an index-organized table. Far fewer blocks will change with this.
 - improve the log-device performance - put your log files on an SSD?
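To make the suggestion to physically order the data by RUN_ID concrete, here is an untested sketch in Oracle SQL. The index name IX_CVA_RUN is made up, and the index-organized variant changes the primary key column order, so check that it still fits your other access patterns before adopting it:

```sql
-- Option 1: an index that leads with RUN_ID, so all rows for one run
-- sit together in the index (a range scan instead of the skip scan)
CREATE INDEX IX_CVA_RUN ON PROCESSED_CVA (RUN_ID, DEAL_ID);

-- Option 2: an index-organized table stores the rows themselves in key
-- order, so a delete by RUN_ID touches far fewer blocks
CREATE TABLE PROCESSED_CVA (
    DEAL_ID VARCHAR2(23 BYTE),
    NTT_ID  VARCHAR2(10 BYTE),
    CVA     FLOAT(126),
    RUN_ID  NUMBER(10,0),
    CONSTRAINT PK_CVA PRIMARY KEY (RUN_ID, DEAL_ID)
) ORGANIZATION INDEX;
```

Either way, the goal is the same: the rows for one RUN_ID end up in a small, contiguous set of blocks, so the delete generates far less redo.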
_unix.194098
How can I grep a specific word in a single string that contains repetitions? For example:

Apple_1 Apple_1_Test Juice_2 Juice_2_H

If I use grep -Eo 'Apple_1' I get two results (because Apple_1 appears twice in the original string). But what if I want to match only the exact word Apple_1 and not Apple_1_Test, or Juice_2 and not Juice_2_H?
Grep a specific word in a single string with repetitions
bash;grep
Add a word boundary assertion:

grep -Eo '\bApple_1\b'
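A quick way to see the difference, using the sample string from the question (a small sketch, runnable in any POSIX shell with grep -E support for \b):

```shell
s='Apple_1 Apple_1_Test Juice_2 Juice_2_H'

# without boundaries: also matches inside Apple_1_Test -> two hits
printf '%s\n' "$s" | grep -Eo 'Apple_1'

# with \b: "_" is a word character, so there is no boundary between
# "Apple_1" and "_Test" and that token is skipped -> one hit
printf '%s\n' "$s" | grep -Eo '\bApple_1\b'
```

The same pattern works for the other tokens, e.g. grep -Eo '\bJuice_2\b'.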
_webmaster.7396
My site has 30,000 visits with 86,000 page views. If the bounce rate is 54%, does that mean only 15,000 users generated the 86,000 page views, or did all 30,000 visits generate the 86,000 page views? My website is www.cricandcric.com. Even though I have 86,000 page views, my site's Alexa ranking is still getting worse day by day. How do I control that?
How do bounce rate and page views change Alexa rankings?
google;alexa
null
_unix.347595
I have one SSD and one HDD in my PC. The SSD runs Windows 7; I want to set up Ubuntu as dual boot on the HDD. I downloaded the WUBI.exe for 16.04 and installed it to my HDD. After the first startup I got some errors like "root file system not defined" after it successfully booted to the main Ubuntu screen. I googled a bit and it seemed like (I was using both the HDD and SSD prior to that in Windows) my HDD was in NTFS and I needed to set up Ext4 first of all. So I downloaded Partition Wizard and changed the HDD from NTFS to Ext4. Upon booting Ubuntu I get the error "error starting windows for file \ubuntu\winboot\wubildr.bmr". I can't boot Ubuntu; Windows 7 works fine; my HDD is not visible in Windows 7 anymore. How can I fix that? I don't know a lot about system settings, but the fact that my HDD is not visible under Windows should be OK, because Windows can't read Ext4. How can I repair my HDD though, if it isn't visible under Windows anymore?

EDIT: I formatted the HDD again with Partition Wizard as Ext4 and made a bootable USB stick with Ubuntu on it. After booting the stick to install Ubuntu on the HDD, it doesn't find the HDD in the installation menu and I can only select the SSD where my Windows is installed. I don't want to partition the SSD. What's wrong here?
Problems on setting up Ubuntu as dual boot
linux;ubuntu;windows;ext4
null
_unix.26727
As part of a larger autocomplete function I'm writing, I want to use compgen to generate a list of files. I read the bash manual entries for compgen and complete, and from there I assumed that the option -G "*" would be the solution. I could not get it to work, though: the list of files in the current directory was shown regardless of my input, i.e.:

$ cmd <Tab>
aa  bb  cc
$ cmd a<Tab>
aa  bb  cc
$ cmd aa<Tab>
aa  bb  cc

Therefore, I tried to debug this by using complete, which supports the same options as compgen, but I got the same result:

$ complete -G "*" cmd
$ cmd a<Tab>
aa  bb  cc

I also tried complete -o filenames, but this doesn't work either.
autocomplete filenames using compgen
bash;autocomplete;compgen
I found the answer myself: I have to use the -A action option:

compgen -o filenames -A file ...
complete -o filenames -A file
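A minimal, non-interactive way to see why -A file behaves differently from -G "*" (the sandbox directory and file names below are made up; run under bash, since compgen is a bash builtin):

```shell
# sandbox with a few files
dir=$(mktemp -d)
cd "$dir" && touch aa ab bb

# -G '*' expands the glob and ignores the word being completed:
compgen -G '*' -- a        # prints aa, ab and bb

# -A file filters the candidates against the word being completed:
compgen -A file -- a       # prints only aa and ab
```

In a completion function you would typically assign the result to COMPREPLY, e.g. COMPREPLY=( $(compgen -A file -- "$cur") ), and register it with complete -o filenames -F _yourfunc cmd.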
_webapps.82539
my team uses a private Google+ Community for updates. If I need to call in sick, I would have to navigate to the community and post. The last time I tried, it continuously failed on my Galaxy S3, though my gmail worked fine. Is there a way to send an email to the Community and my message be added as a post?Note: I did read this post (Can I create a post in a Google+ Community direct from Gmail?), and the answer was not determined and seems that the post was abandoned as an argument was beginning. To clarify, I want to know how to use email to send a post to Google + Community.Thanks in advance for any tips that make this happen. I am fairly certain this is possible as I am able to respond to other members' posts directly from my inbox.
How to use email to send a post to Google+ Community
google plus communities
null
_softwareengineering.351538
We have a master system storing customer data. Data is replicated to client systems (channels) at night. During the day, data can be updated by users/customers on the master as well as on the clients. I need all data to be kept in sync between master and clients, preferably in real time (within minutes). What pattern should I be looking at? CQRS/ES?

Some notes:

 - I don't control the master, so I can't implement a broadcast solution on that end.
 - Master data can be set in real time using web services, but can only be bulk-read once a day.
 - When updates come from certain clients, I need to do some processing on other clients.
 - <1000 messages a day.
 - MS/.NET environment.
Keeping customer data in sync between master and multiple clients
patterns and practices;data replication
null
_softwareengineering.244751
Is telnet just a simple socket connection? I usually have a difficult time with networking, so I use some code from the internet to help me out, but I can't seem to find a library for Telnet in Objective-C. The closest thing I've found is CocoaAsyncSocket. I was wondering: is telnet just a plain socket connection? Do I just create a socket to the server and send the commands?
How does Telnet work?
objective c;networking
Telnet is a bit more than just plain socket connections, but in many cases, just opening a socket to the server and sending the commands will do the trick anyway. See the wikipedia page for more details and links to the RFCs.
_unix.359346
Basically I created an alias:

alias 1='python /root/sqlmap-dev/sqlmap.py'

When I type 1 it works excellently, but if I open another terminal and type 1 again, it doesn't recognize the alias! So how do I make the alias available everywhere?
How to make alias work in other terminals
bash;shell;alias
null
_unix.36146
This is my command:

echo Test | sed -f <(sed -e 's/.*/s,&,gI/' mydic)

The file mydic contains 2 columns delimited by commas (,):

a,AlphabetA
.
.
.
e,AlphabetE
.
.
s,AlphabetS
.
t,AlphabetT
test,testedd
.
.
zebra,zebraaaa

The expected result is testedd, but I get AlphabetTAlphabetEAlphabetSAlphabetT.
sed substitution matches too many inputs
sed
echo Test | sed -f <(sed 's/\(.*\),\(.*\)/s,\\<\1\\>,\2,gI/' mydic)

\< and \> indicate the start and end of a word, respectively.
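A runnable sketch of the whole pipeline, with a reduced three-entry dictionary standing in for the question's mydic (GNU sed assumed, for \<, \> and the I flag; run under bash because of the process substitution):

```shell
# reduced dictionary (subset of the one in the question)
printf '%s\n' 'a,AlphabetA' 't,AlphabetT' 'test,testedd' > mydic

# each "from,to" line becomes the sed command: s,\<from\>,to,gI
# so only whole words are replaced, case-insensitively
echo Test | sed -f <(sed 's/\(.*\),\(.*\)/s,\\<\1\\>,\2,gI/' mydic)
# -> testedd
```

Note that only the test,testedd rule fires: "Test" contains the letters a, t etc., but never as standalone words, so the single-letter rules no longer match.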
_webmaster.55525
Our site is receiving page views with strange browser locales. The most recent, on Sunday, included the following:

vi_VN
vi_VI
zh_SG
as_AS
bn_BN
mr_MR
kn_KN
or_OR
ml_ML
pa_PA
pa_IN
pa_PK
ta_TA
te_TE

The UA string is:

Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20100101 Firefox/13.0

We can't see any malicious input being attempted, but we don't currently support any locales besides en_GB and en_US. The IP is located in Arizona. Has anyone experienced this before? If so, what was the motivation behind it? Is this something I should be concerned about?
Why would someone try viewing the site with unsupported languages?
language
Firefox has the ability to send many languages. This feature is designed such that somebody can specify which language(s) they understand and the web server can choose which to display.In Firefox this is available under Preferences -> Content -> Languages -> Choose. It appears that in this case a user has added a ton of languages here. Probably ones that they don't actually speak. I can do the same thing if I want to:
_softwareengineering.341557
I have a finished Chinese-English dictionary website as a personal project and it works pretty well. I'm using Node/Express with node-dirty, which is a barebones file-based JSON database that loads everything into memory on start-up. As I keep adding more features, though, I have to continuously update my database-generation scripts (tedious string manipulation), and server start-up load times are a couple of minutes. I also later need to implement paging. I'm interested in potentially moving to a more typical database (MongoDB, for instance). Since everything is in memory with node-dirty, I just loop through the entire thing, check exact matches or edit distance, and I'm good to go. Even with 120,000 entries it takes less than 50ms (for one user, of course). How does one search with a typical database? I'm not sure how I would narrow down the result set with Chinese/pinyin/English matches in a query, or whether I would have to store all the results in memory like I do now and parse through them on the service side. I'm not really interested in how Google searches the internet (nor have I studied advanced algorithms), just a simple, sensible solution for a hobbyist.
Where does search query logic go? Database or Service code?
database;search
null
_unix.110015
I've been reading a tutorial that shows how to set the background image that will be displayed behind the GRUB2 boot options menu. However, I am concerned that the text might not be visible against the image I've chosen. How can I preview what the screen will look like, without having to restart the computer?
How can I preview the GRUB2 boot screen?
grub2;dual boot;images
The easiest way is to use grub-emu. On Debian-based systems, this can be installed with:

sudo apt-get install grub-emu

Once you have installed it, you can run it to preview your grub setup:

sudo grub-emu
_softwareengineering.271540
I am writing an angular application, and I'm wondering how much client side memory to use.I'm currently working on a scenario where there are 2 dropdowns. The second will load new values depending on the selection of the first. I'm thinking the max # of total records in the 2nd dropdown would be around 2000-3000 items, each being around 2k each. Each selection would display probably 10-15 items of the 2000-3000.Should I load the entire array into memory and parse the selected values from there, or should I read from the server every time the first dropdown changes?I know for a desktop this wouldn't be a big deal. But we support phones and tablets, and I'm not sure how much memory to worry about with these devices.
Any advice on how much browser memory to use?
memory usage
What you're describing is called a cascading dropdown. It's commonly used by car websites to get Year, Make and Model.I've seen a lot of these sites do an AJAX/JSON round trip for the sub-combos. There's a bit of a lag if you do this, unless it happens before the user opens the second dropdown. On a phone, I think you should probably do that instead of loading all of the items. Phone users are already used to things happening a bit more slowly.In any case, make sure you can get the server to send only the 20 bytes per entry that you need for the dropdown. If you can't get it to do that, then taking the hit for all 2000 complete objects is probably out of the question (that's 4 megabytes, just for one page).
_unix.147198
Let's say I have to perform these actions on an input file:

 - extract the nth field from a line starting with a given pattern (in the example: the 2nd field of the line starting with the pattern 'name')
 - print the field content at the beginning of every following line, while the line does not start with the selected pattern
 - when a new line matching the pattern is found, repeat steps 1 and 2

I'm currently doing this using Python, but it would be better to use something light and fast from the command line (like awk, for example).

Sample input:

name NAME_A
inf field_A1
name NAME_B
inf field_B1
inf field_B2

Expected output:

name NAME_A
NAME_A inf field_A1
name NAME_B
NAME_B inf field_B1
NAME_B inf field_B2
Patterns and file processing
awk
This can be a way to do it. Note the format may vary depending on the field separators you indicate - those you can define with FS and OFS:

$ awk -v n=2 '/^name/ {a=$(n); print; next} {print a, $0}' file
name NAME_A
NAME_A inf field_A1
name NAME_B
NAME_B inf field_B1
NAME_B inf field_B2

Explanation

 - -v n=2 defines the field number to copy when the pattern is found.
 - /^name/ {a=$(n); print; next}: if the line starts with the given pattern, store the given field and print the line.
 - {print a, $0}: otherwise, print the current line with the stored value first.

You can generalize the pattern part into something like:

awk -v n=2 -v pat=name '$1==pat {a=$(n); print; next} {print a, $0}' file
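The generalized variant can be checked end to end like this (the file name input is arbitrary):

```shell
# recreate the sample input from the question
cat > input <<'EOF'
name NAME_A
inf field_A1
name NAME_B
inf field_B1
inf field_B2
EOF

# both the pattern and the field number are now parameters
awk -v n=2 -v pat=name '$1==pat {a=$(n); print; next} {print a, $0}' input
```

This prints the same five lines as the /^name/ version.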
_cs.60652
Atomic exchange instruction is given as follows:

void exchange (int *a, int *b)
{
    int temp;
    temp = *b;
    *b = *a;
    *a = temp;
}

Now consider the solution to the critical section problem based on the above instruction:

 1  int const n = /* number of processes */;
 2  int lock = 0;
 3  void P(int i)
 4  {
 5      int key = 1;           // intent to obtain lock
 6      while (true)
 7      {
 8          do exchange (&key, &lock)
 9          while (key != 0);  // if lock wasn't free
10          /* critical section */;
11          lock = 0;          // release lock, unblock
12                             // other processes
13          /* remainder */;
14      }
15  }
16  void main()
17  {
18      lock = 0;
19      parbegin (P(1), P(2), ..., P(n));
20  }

Now I want to analyse two properties of a critical section solution for this algorithm: bounded waiting and progress. People online define them in many ways. For example: 1, 2. One way is as explained here:

Progress: means the process will eventually do some work.
Bounded waiting: means that the process will eventually gain control of the processor.

However I feel these are incorrect (Q.1 Am I wrong?), as this would imply that lack of bounded waiting results in lack of progress, essentially suggesting the two requirements are one and the same (Q.2 Or is it like that only?).

Galvin et al. defined these requirements more verbosely in their book, considering a process with the following structure:

do {
    // entry section (implementing some locking mechanism)
    // critical section
    // exit section (implementing some unlocking mechanism)
    // remainder section
} while (true);

Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

Bounded waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Now with the help of these definitions I want to know whether bounded waiting and progress are ensured or not.

I feel bounded waiting is not ensured in the above code, as process P1 can enter its critical section any number of times before P2 can enter its critical section. This can be seen in the table below:

| Step# | P1      | P2                |
|-------|---------|-------------------|
| 1     | Line 5  |                   |
| 2     | Line 6  |                   |
| 3     | Line 8  |                   |
| 4     | Line 9  |                   |
| 5     |         | Line 5            |
| 6     |         | Line 6            |
| 7     |         | Line 8            |
| 8     |         | Line 9 //spinwait |
| 9     | Line 10 |                   |
| 10    | Line 11 |                   |
| 11    | Line 13 |                   |
| 12    | Line 5  |                   |
| 13    | Line 6  |                   |
| 14    | Line 8  |                   |
| 15    | Line 9  |                   |
| 16    |         | Line 8            |
| 17    |         | Line 9 //spinwait |

However, this means that P1 can execute forever without letting P2 enter its critical section at all. If we consider that progress means the process will eventually do some work (as stated in the non-book definition above), then this also means that progress is not ensured.

However, I am trying to deduce whether this is indeed the case with Galvin's definition as well. But before that I want to interpret Galvin's definition of progress as follows:

If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section (in other words, processes executing in their entry and exit sections) can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

Now in the table above, at step 10, no process is executing in its critical section. Process 1 executes line 11, lock = 0;, which is essentially its exit section. Thus we can say that the decision to let process 2 enter its critical section is taken in the exit section of process 1. Also, in step 14, process 1 executed line 8, do exchange (&key, &lock), which is essentially the entry section. Thus we can say that the decision not to let process 2 enter its critical section is taken in the entry section of process 1.
(Q.3) With this interpretation, should we say that progress is ensured?

I feel tangled in words here. Or I might be giving unnecessary importance to the concept of progress. Nevertheless, I need to know the truth about the progress requirement.

PS: The same confusion occurred to me while dealing with the attempts to produce Dekker's algorithm made by Stallings in his book, as I explained in comments to this answer.
Bounded waiting and progress requirements of critical section problem solution based on exchange instruction
operating systems;concurrency;critical section
null
_cs.65943
In a flow network, the exact edge capacities are not known. The range of capacity for each edge is known. You can get the exact edge capacity by paying a cost for each query.Calculate max flow with min query cost.
Ford-Fulkerson Algo variation
network flow;ford fulkerson
null
_softwareengineering.351299
So I'm trying to understand how microservices are set up in a language-agnostic manner, for purely experimental purposes. For the sake of a more concrete example: how would a microservice architecture work in a Node.js CRUD server? Does a simple CRUD server even benefit from microservices? What kinds of things commonly get delegated to a microservice? Is a microservice the same as a module in a program, or does it have a completely separate process? How do microservices communicate with the main server - is it something like UNIX sockets?
What is the common practice for implementing a microservice architecture?
language agnostic;microservices;crud
Allow me to be as straightforward with my answers as you are with your questions.

How would a microservice architecture work in a Node.js CRUD server? Does a simple CRUD server even benefit from microservices?

This is the wrong question. The microservices architecture is not a technical answer to a technical question. It's rather a technical strategy for organizational needs. Don't ask what microservices can do for your actual application. Ask what they can do for your company. For your business. They provide the company with the capacity to deliver new services to customers quickly and directly. To adapt your business to the changes of the market. In a constantly changing world, that can be an invaluable capacity. However, it's not for free. It might interest you to take a look at the trade-offs.

What kinds of things commonly get delegated to a microservice?

Business capabilities. Often referred to as bounded contexts. This is a very broad and complex subject. For further references, search for: microservices decomposition strategies.

Is a microservice the same as a module in a program, or does it have a completely separate process?

They are completely separate processes. Microservices are independent in almost all senses.

How do microservices communicate with the main server? Is it something like UNIX sockets?

There are no central components (servers) in the microservices architecture. That goes totally against its nature. As @scriptin commented, microservices are stand-alone applications. Small applications working together. A microservice is both client and server at the same time. Allow me to make a naive comparison: the microservices philosophy is cooperativism. They work like a soccer team. Microservices (players) cooperate with each other for a greater good.
_unix.287779
If I am running an 'unsupported' version of Linux, which is based on Debian, is there any way that I can still get the updates from debian systems as they are released? OR am I stuck waiting on the developers to release patches for the Operating system which I am running ?Thanks,
Debian Security Updates
debian;security;upgrade
null
_softwareengineering.225145
First, I want to say this seems to be a neglected question/area, so if this question needs improvement, help me make this a great question that can benefit others!

In my experience, there are two sides to an application - the task side (where the users interact with the application and its objects, and where the domain model lives) and the reporting side, where users get data based on what happens on the task side. On the task side, it's clear that an application with a rich domain model should have business logic in the domain model, and the database should be used mostly for persistence. Separation of concerns, every book is written about it, we know what to do, awesome.

What about the reporting side? Are data warehouses acceptable, or are they bad design because they incorporate business logic in the database and in the very data itself? In order to aggregate the data from the database into data warehouse data, you must have applied business logic and rules to the data, and that logic and those rules didn't come from your domain model - they came from your data-aggregating processes. Is that wrong?

I work on large financial and project management applications where the business logic is extensive. When reporting on this data, I will often have a LOT of aggregation to do to pull the information required for the report/dashboard, and the aggregations have a lot of business logic in them. For performance's sake, I have been doing it with highly aggregated tables and stored procedures.

As an example, let's say a report/dashboard is needed to show a list of active projects (imagine 10,000 projects). Each project will need a set of metrics shown with it, for example:

 - total budget
 - effort to date
 - burn rate
 - budget exhaustion date at current burn rate
 - etc.

Each of these involves a lot of business logic. And I'm not just talking about multiplying numbers or some simple logic.
I'm talking about: in order to get the budget, you have to apply a rate sheet with 500 different rates, one for each employee's time (on some projects; others have a multiplier), applying expenses and any appropriate markup, etc. The logic is extensive. It took a lot of aggregating and query tuning to get this data in a reasonable amount of time for the client.

Should this be run through the domain first? What about performance? Even with straight SQL queries, I'm barely getting this data fast enough for the client to display in a reasonable amount of time. I can't imagine trying to get this data to the client fast enough if I am rehydrating all these domain objects, mixing, matching and aggregating their data in the application layer, or trying to aggregate the data in the application.

It seems in these cases that SQL is good at crunching data, so why not use it? But then you have business logic outside your domain model, and any change to the business logic will have to be made in both your domain model and your reporting aggregation schemes.

I'm really at a loss for how to design the reporting/dashboard part of any application with respect to domain-driven design and good practices.

I added the MVC tag because MVC is the design flavor du jour and I am using it in my current design, but I can't figure out how the reporting data fits into this type of application.

I'm looking for any help in this area - books, design patterns, keywords to google, articles, anything. I can't find any information on this topic.
Best practice or design patterns for retrieval of data for reporting and dashboards in a domain-rich application
mvc;domain driven design;enterprise architecture;reporting;data warehouse
null
_unix.237123
The GNU Coreutils manual for mv says:

    If a destination file exists but is normally unwritable, standard input is a terminal, and the -f or --force option is not given, mv prompts the user for whether to replace the file. (You might own the file, or have write permission on its directory.) If the response is not affirmative, the file is skipped.

However, the version of mv I am using (GNU coreutils 8.21 on Ubuntu 14.04.3 LTS) exhibits unexpected behaviour:

$ which mv
/bin/mv
$ ls -l
total 0
$ echo foo > 1; chmod -w 1; cp 1 2; ls -l | cut -d' ' -f 1-5,9
-r-x------ 1 me me 4 1
-r-x------ 1 me me 4 2
$ echo bar > 2
-bash: 2: Permission denied
$ mv 1 2
$ ls -l | cut -d' ' -f 1-5,9
-r-x------ 1 me me 4 2

Based upon the manual excerpt quoted above, I would have expected the mv 1 2 command to have prompted the user before overwriting file 2. Is there a bug in my version of mv, or a bug in my understanding? If the latter, then what does the manual mean?
mv overwrites read-only file without prompting
shell;mv;coreutils
null
_unix.327845
How do I use regexp/pattern searching in gunzipped files? For instance, let's use:

/usr/share/doc/linux-image-4.8.0-1-amd64$ zcat changelog.gz | less

The way I do it now: when reading the contents via less, I use / to locate the name or whatever term I search for, but this doesn't work/scale well if the name/term is repeated many times. I also tried:

/usr/share/doc/linux-image-4.8.0-1-amd64$ zcat changelog.gz | grep $search-term | less

I do get the names/search term, but without the surrounding context, such as the date and other things. Is there a way to get the search term highlighted, even if it is duplicated n number of times, while reading the changelog.gz? An example of what I mean: https://gist.github.com/shirishag75/e1238c16d2d372c4cfc3f62e25da335a

As can be seen, I do get the search term/regexp, but without the date-time context it is and can be somewhat meaningless, without knowing when the changes happened.
How to do Regexp/pattern-searching in gunzipped files?
debian;regular expression;gzip
null
_webapps.21337
I really like Github's impact graph about my Open Source project.I would like to include it in a presentation, so I would like to download it as an image.QUESTION: How to export this graph to an image?Anything smarter than taking hundreds of screenshots and assembling them? That's the lame method I used last year to produce the image below (out-of-date):
Export Github impact graph to image
github
If you don't need the textual labels (which are drawn as <p> overlays), you could just right-click the graph in Firefox and choose Save Image As... (It's a <canvas> element and Chrome doesn't yet offer a save option for those).If you do need the date labels, I'd suggest looking for a Firefox screenshot extension capable of grabbing a complete screenshot of a scrolling sub-element in one go. (I know they exist for <iframe> elements, so being able to do it for a scrolled <div> isn't outside the realm of possibility)
_unix.46485
I'm finding a lot of conflicting information out there, and as of yet haven't found anyone trying to pull together all of the components that I'm trying to use, so I'm hoping someone who understands SSDs, encrypted LVM and so on can stop by and help out.

Basically, my system is a laptop with:

/dev/sda: 32 GB SSD
/dev/sdb: 256 GB SSD
/dev/sdc: 1000 GB HD

Generally my Linux installs consist of three partitions:

~50 MB /boot
large /home
~30 GB everything else

So effectively I'd like:

/dev/sda1 -> /boot
/dev/sda2 -> /
/dev/sdb1 -> /home
/dev/sdc1 -> /swap
/dev/sdc2 -> /mnt/storage

The catch is I'd like to encrypt all of this (except for /boot and /mnt/storage, which can stay unencrypted). I've read that when encrypting SSDs there can be issues with things like TRIM, that ideally I'd want to use ext4 with some particular options set, that I must be very careful with partition alignment, and some just claim that encrypted LVM really doesn't play well with SSDs and I should just use EncFS or eCryptfs (although people seem unclear and/or polarized on whether these should be used to encrypt mount-at-boot partitions like / and /home). Is there any canonical information on this?
LVM + LUKS + SSD + Gentoo -- making it all work together
gentoo;lvm;encryption;ssd
I've been running btrfs on top of dm-crypt for a while now. Since btrfs is a multi-device-capable and dynamic (grow, shrink, etc.) filesystem, I don't really need the LVM layer for my purposes.

Other than that, use a recent enough dm-crypt that has the --allow-discards capability, a 3.1+ kernel, and a filesystem that also supports discards (btrfs, ext*, ...).

Some stuff to read through while doing all this:

https://code.google.com/p/cryptsetup/wiki/Cryptsetup140 (--allow-discards)
http://thread.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/4075/
http://asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html (Milan Broz, dm-crypt developer)

I'll update with more links over time as I find them in my bookmark abyss :>

I have not benchmarked my setup particularly much. For me it behaves more than adequately, i.e. still worlds ahead of an HDD. I don't know for now exactly what state my SSD is in, or whether the multi-layer discard system is really working 100%. What's important is that I have enough performance, with enough of a security model to fend off the higher-probability issues, such as forgetting the device somewhere, the device being stolen by random people, etc.

So, as for finding out exactly how much my SSD lifespan has possibly shortened, how much performance has degraded because of the discard system not working 100% correctly for TRIM, or how much dm-crypt discards weaken its inherent security - I have not been able to gather information to warrant giving these questions a high priority. One of the reasons I'm writing this answer is that perhaps I'm wrong about too much of it, and putting this out here is currently the optimal way for me to find out.
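For reference, a sketch of how the discard option has to be enabled at each layer on a typical setup (device names and mount points here are hypothetical; check the security caveats in the links above before enabling it):

```
# /etc/crypttab - "discard" makes dm-crypt pass TRIM through to the SSD
# (needs kernel 3.1+ and a cryptsetup with --allow-discards support)
home_crypt  /dev/sdb1  none  luks,discard

# /etc/fstab - the filesystem must issue discards too: either the
# "discard" mount option, or run fstrim periodically instead
/dev/mapper/home_crypt  /home  ext4  defaults,discard  0  2
```

The point is that TRIM only reaches the device if every layer in the stack (filesystem, dm-crypt, and LVM if present) is configured to pass it down.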
_softwareengineering.123461
Here's a question for me that really makes me wonder every time I start designing and developing a thick-client desktop application.I'm a .NET developer, and with my forms (WinForms mostly, but the platform should be unimportant) I also choose the startup position to be Center. The reason I do this is because I can't stand randomness, and as a user myself I don't like when windows jump all over my monitor.So my question for you, is what is the best way to go about this? Center an application startup window or have it randomly placed? And why?
Window Position on Startup
.net;windows
What I like to do is save the last size and position of the application window (at app shutdown) as a user setting. Then, when the application is restarted, I restore the last used size and position.

If you do this, be careful of the spurious settings you'll get if the app is minimized at the time it is shut down (e.g. by a forced shutdown).

If I don't have a user setting to go by, I like to centre the window horizontally, and vertically I like it about 1/3 of the way down from the top. This has a nicer feel to it than dead centre vertically. Purely an aesthetic choice on my part.
_unix.254359
I am trying to clarify my understanding of terminals here. A terminal is actually a device (keyboard + monitor). When in CLI mode, the input from your keyboard goes directly to the shell and is also displayed on the monitor. Meanwhile, when using GUI mode, you have to open a terminal emulator program to interact with the shell. The input from your keyboard goes to the terminal emulator program and is also displayed in the terminal emulator window on the monitor. The input does not go directly to the shell; the terminal emulator program relays the input from your keyboard to the shell. The terminal emulator program communicates with the shell using a pseudo-terminal. There is no terminal emulator program involved when you go straight to the CLI from boot. Please comment and correct me if anything is wrong with my understanding.

Update: I read back through "The TTY demystified". I think what I should ask is the difference between a text terminal (booting straight to text mode) and a GUI terminal, because I thought terminal = text terminal and terminal emulator = GUI terminal (e.g. GNOME Terminal), which is wrong. From the answers given before this update, a user in text mode is actually using a terminal emulator program (user space) too, like in GUI mode. May I know whether it is the TTY program? I ask because I found a TTY process when running the command 'ps aux'. I never knew there was a terminal emulator program involved in text mode too (not referring to the terminal emulator in kernel space).

Update2: I read "Linux console". According to it, text mode is the console, while the terminal software in GUI mode is a terminal emulator. Well, that makes sense, and it matches my earlier understanding. However, according to the diagram from "The TTY demystified", the terminal emulator is in kernel space instead of user space. Interestingly, that diagram refers to text mode.
Terminal vs Terminal emulator
terminal;terminal emulator
null
_unix.288738
I'm building a bash script that uses wget to GET information from a server using a REST api. I'm using getopts to parse options given to the script and then using an if statement to redirect the script correctly based on the options given. The if goes to the main body of the script (ie the wget call), the elif prints the help menu, and the else prints an error message. However my elif appears to be acting as an else statement. When I run:

>./jira -h

I get the proper response, i.e. the help menu:

----------jira options----------
Required:
-d [data/issueID]
-u [username] -> [username] is your JIRA username
-p [password] -> [password] is your JIRA password

Optional:
-q -> quiet, i.e. no output to console
-h -> help menu

However, when I run something that should give me the error message I get the help menu instead:

>./jira -u jsimmons

----------jira options----------
Required:
-d [data/issueID]
-u [username] -> [username] is your JIRA username
-p [password] -> [password] is your JIRA password

Optional:
-q -> quiet, i.e. no output to console
-h -> help menu

My script is below:

#!/bin/bash

#using getopts to parse options
while getopts ":hqd:u:p:" opt; do
    case $opt in
        h) help=true ;;
        q) quiet=true ;;
        d) data=$OPTARG ;;
        u) username=$OPTARG ;;
        p) password=$OPTARG ;;
        \?) echo "Invalid option: -$OPTARG" >&2 ;;
        :) echo "Option -$OPTARG requires an argument." >&2 ;;
    esac
done

#check if required options have been set
if [[ -n $data && -n $username && -n $password ]]; then
    wget -q --http-user=$username --http-passwd=$password --header="Content-Type: application/json" [URI]

    #placing issue info into variable
    response=$(< $data)

    #using heredoc to run python script
    #python script uses regular expressions to find the value of the field
    #customfield_10701 ie the branch version
    output=$(python - <<EOF
import re
matchObj = re.search(r'(?<=customfield_10701:).*(?=,customfield_10702)', '$response', re.I)
if(matchObj):
    print(matchObj.group())
EOF
)

    #writes branch version in .txt file
    echo $output>branchversion.txt

    #prints the branch version if the quiet option hasn't been set
    if [ -z $quiet ]; then
        echo "-------------------------------------------"
        echo
        echo "The branch version for issue $data is:"
        cat branchversion.txt
        echo
    fi

    #removes file that wget creates containing all data members for the issue
    rm $data
elif [ -n $help ]; then #if help option has been set
    echo
    echo "----------jira options----------"
    echo "Required:"
    echo "-d [data/issueID]"
    echo "-u [username] -> [username] is your JIRA username"
    echo "-p [password] -> [password] is your JIRA password"
    echo
    echo "Optional:"
    echo "-q -> quiet, i.e. no output to console"
    echo "-h -> help menu"
    echo
    #http GET data members for issue
else #if not all required options or help have been set
    echo "Error: Missing argument(s)"
    echo "Usage: ./jira [option(s)] -d [data] -u [username] -p [password]"
    echo
    echo "Try: ./jira -h for more options"
fi
Why is my elif being treated as an else statement in my bash script?
bash;shell script;test
The -n option checks if a string is non-zero length.

if [ ... ]; then    #posix compliant condition tests
if [[ ... ]]; then  #extended condition tests

It seems the extended condition tests work differently than the posix ones.

> if [ -n $unsetVar ];then echo yes ; fi
yes
>
> if [ -n "$unsetVar" ];then echo yes ; fi
>
> if [[ -n $unsetVar ]];then echo yes ; fi
>

Either use the extended conditions [[ ... ]] for both, or wrap your variable in quotation marks. Currently your elif statement is always true.
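A minimal, self-contained sketch of the same quoting pitfall, using a stand-in help variable like the one in the question (run it with sh or bash):

```shell
#!/bin/sh
help=""   # simulating the case where -h was never passed

# Unquoted: the empty variable disappears during word splitting,
# leaving `[ -n ]`, which is the one-argument form of test and is
# always true because the string "-n" itself is non-empty.
if [ -n $help ]; then r1=yes; else r1=no; fi

# Quoted: test really receives an empty string, so -n is false.
if [ -n "$help" ]; then r2=yes; else r2=no; fi

echo "unquoted=$r1 quoted=$r2"
```

So writing the branch as elif [ -n "$help" ] (or keeping [[ ... ]] as in the if branch) lets the script fall through to the else branch as intended.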
_codereview.171125
I have written the code for a linked list in C++. It has a function list_search(int) which returns the node of the given number. list_insertfront(int) inserts the first element. list_insert(int) inserts an element. list_delete(int) deletes an element after searching for its node.

#include <iostream>

class double_list{
    struct Node{
        int data;
        struct Node *next; //next will hold the address and *next will show value at that address
        struct Node *prev; //prev will hold the address and *prev will show value at that address
    } *head;

public:
    double_list(){
        head=NULL; // all address in head are initialised to NULL
    }
    void list_insert(int);
    void list_insertfront(int);
    struct Node *list_search(int n){
        Node *node =new Node();
        node=head;
        while(node!=NULL){
            if(node->data==n)
                return node;
            node=node->next;
        }
        std::cout<<"No such element in the list \n";
    }
    void list_delete(int);
    void display();
};

void double_list::list_insertfront(int n){
    Node *node=new Node();
    node->data=n;
    node->next=head;
    head=node;
    node->prev=NULL;
}

void double_list::list_insert(int n){
    Node *node =new Node();
    Node *temp =new Node();
    node->data=n;
    node->next=NULL;
    temp=head;
    while(temp){
        if(temp->next==NULL){
            temp->next=node;
            break;
        }
        temp=temp->next;
    }
}

void double_list::list_delete(int n){
    Node *node=list_search(n); //to search node of the given number
    Node *temp=new Node();
    temp=head;
    if(temp==node){
        head=temp->next;
    }
    while(node!=NULL){
        if(temp->next==node)
            temp->next=node->next;
        temp=temp->next;
        return;
    }
}

void double_list::display(){
    Node *node=new Node();
    node =head;
    while(node!=NULL){
        std::cout<<node->data<<" ";
        node=node->next;
    }
}

int main(){
    double_list list1;
    list1.list_insertfront(5);
    list1.list_insert(1);
    list1.list_insert(6);
    list1.list_insert(7);
    list1.display();
    list1.list_delete(1);
    std::cout<<"\n";
    list1.display();
    std::cout<<"\n";
    return 0;
}

What should I do to improve this code?
C++: Doubly linked list
c++;linked list
null
_unix.302185
I am using CentOS via VNC and the vim editor, where the color scheme looks something like this:

What is the name of this color scheme? Where can I get its properties?

In Windows 7, I am using MobaXterm and want to use the same color scheme shown above. Using the Settings->Configuration->Terminal->Default color scheme->Customize option, how can I configure the above color scheme in MobaXterm?
Customize MobaXterm color scheme
shell;vim;colors
null
_unix.258753
I have a data file that looks like:

1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 . . .
1 0 4 4 3 1 2 0 0 0 3 1 1 2 1 1 1 1 1 1 0 1 1 3 . . .
0 0 0 0 0 0 0 3 3 1 1 2 3 2 1 2 2 3 1 2 3 1 2 2 . . . ...

First I want to insert a space between each group of 5 identical values, keeping each group of 5 identical numbers together in one column by looking at the first row, and then I do not want any space within those groups of characters.

First step:

1 1 1 1 1  1 1 1 1 1  1  2 2 2 2 2  2 2 2  3 3 3 3 3 . . .
1 0 4 4 3  1 2 0 0 0  3  1 1 2 1 1  1 1 1  1 0 1 1 3 . . .
0 0 0 0 0  0 0 3 3 1  1  2 3 2 1 2  2 3 1  2 3 1 2 2 . . . ...

Second step (output):

11111 11111 1 22222 222 33333 . . .
10443 12000 3 11211 111 10113 . . .
00000 00331 1 23212 231 23122 . . . ...

Meanwhile, in my real data, which is huge, I may want to try different group sizes, so I need the script to be flexible. Any suggestions?
How to group a bunch of rows based on information from the first row?
shell;sed;awk;perl
Other variant with awk

awk '
  NR==1{
    for(i=2;i<=NF;i++){
      count++
      if($(i-1)!=$i || count>4){
        D[i]=1
        count=0
      }
    }
  }
  {
    for(i in D)
      $i=" "$i
    print
  }
  ' OFS= data.file >new.file

And sed

sed -re '
  s/ +//g;s/^/\n/
  ' -f <(
  sed -r '
    s/(. )\1*/s_\\n(&)_\n/g
    s/\S /./g
    s/\n\s*/\\1 \\n_\n/g
    s/\\n[^\n]*\n$/ \\n__/
    1q
    ' data.file
  ) -e '
  s/\S{5}/& /g
  ' data.file >new.file
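Since the asker wants to try different group sizes, here is a hedged variant of the same awk idea with the size passed in as a parameter; group_cols is a made-up wrapper name, not a standard tool:

```shell
# group_cols N FILE — join columns into blocks of at most N equal values,
# using the first row to decide where the blocks end (sketch)
group_cols() {
  awk -v n="$1" '
    NR==1 {                        # scan the first row for break points
      cnt = 0
      for (i = 2; i <= NF; i++) {
        cnt++
        # start a new block when the value changes or n columns are used
        if ($(i-1) != $i || cnt > n-1) { brk[i] = 1; cnt = 0 }
      }
    }
    {                              # re-print every row with those breaks
      line = $1
      for (i = 2; i <= NF; i++) line = line ((i in brk) ? " " : "") $i
      print line
    }' "$2"
}
```

For example, group_cols 5 data.file prints each row with spaces only at the block boundaries decided by the first row.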
_unix.10646
There's a built-in Unix command repeat whose first argument is the number of times to repeat a command, where the command (with any arguments) is specified by the remaining arguments to repeat.

For example,

% repeat 100 echo I will not automate this punishment.

will echo the given string 100 times and then stop.

I'd like a similar command (let's call it forever) that works similarly except the first argument is the number of seconds to pause between repeats, and it repeats forever. For example,

% forever 5 echo This will get echoed every 5 seconds forever and ever.

I thought I'd ask if such a thing exists before I write it. I know it's like a 2-line Perl or Python script, but maybe there's a more standard way to do this. If not, feel free to post a solution in your favorite scripting language, Rosetta Stone style.

PS: Maybe a better way to do this would be to generalize repeat to take both the number of times to repeat (with -1 meaning infinity) and the number of seconds to sleep between repeats. The above examples would then become:

% repeat 100 0 echo I will not automate this punishment.
% repeat -1 5 echo This will get echoed every 5 seconds forever.
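For what it's worth, repeat is a csh/tcsh builtin rather than a universal command, and I'm not aware of a standard forever. A small POSIX-shell sketch of the generalized form from the postscript (repeat_every and forever are invented names):

```shell
# repeat_every COUNT DELAY CMD...  — run CMD every DELAY seconds,
# COUNT times; a COUNT of -1 means repeat forever (sketch)
repeat_every() {
  count=$1 delay=$2
  shift 2
  while [ "$count" -ne 0 ]; do
    "$@"
    if [ "$count" -gt 0 ]; then
      count=$((count - 1))        # -1 is never decremented, so it loops forever
    fi
    if [ "$count" -ne 0 ]; then
      sleep "$delay"              # no trailing sleep after the last run
    fi
  done
}

# forever DELAY CMD... is then just the unbounded case
forever() {
  delay=$1
  shift
  repeat_every -1 "$delay" "$@"
}
```

Usage matches the examples in the question, e.g. forever 5 echo hello, or repeat_every 100 0 echo hello for a bounded run with no pause.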
Repeat a Unix command every x seconds forever
command line;scripting
null
_unix.350459
After creating a few snapshots in a lxd container using lxc snapshot I cannot find a way to list those snapshots. lxc list lists only containers, not the snapshots of each container. How can I list the names of all snapshots of a container? Thanks.
List snapshots of a lxd container
lxd
You can list the snapshots for a container named example with:

lxc info example --verbose
_softwareengineering.291672
I'm new to Node and JavaScript (well, asynchronous programming in general) and I noticed when I was working on a project that the following code is a circular pattern, and that these are bad practice for the obvious reason that the module might not have loaded yet (and the example code is throwing errors because of that).

Here's my code:

Main module

var module2 = require('./module2');

var data = 'data';
module2.fetchStuff(data);

Module2

var module3 = require('./module3');

var cleanDataArray = [];

function fetchStuff(data){
    // Fetches stuff based on data
    module3.cleanStuff(data);
}

function takeStuffBack(data){
    cleanData.push(data);
}

module.exports = {
    fetchStuff: fetchStuff,
    takeStuffBack: takeStuffBack,
    cleanData: cleanDataArray
};

Module3

var module2 = require('./module2');

function cleanStuff(data){
    // Clean data from needless stuff
    module2.takeStuffBack(data); // I get a TypeError here because `module2` is yet to fully load.
}

module.exports = {
    cleanStuff: cleanStuff
};

The XY

What this structure is supposed to do is have the start module call a fetching function in module2; the fetching function needs to wash the data in the 3rd module before taking it back and providing it for whatever wants to export it. So I suppose the XY is that I need to get data from a 3rd party API, then clean the data of the things it contains that I don't want, and then make that clean version available to the rest of the application.

What other ways are there to do this in a better manner, without a circular pattern such as this, which is broken because module2 won't load before module3 tries to call it?
How to avoid circular patterns in Node?
design patterns;javascript;node.js
null
_softwareengineering.138520
I am contemplating paying a software consulting firm to provide my company with some enhancements to a piece of software that is licensed under the Eclipse Public License (EPL). I'm wondering what rights we will have to what they produce, and whether they can tie us to paying them royalties forever.What rights we will have to modify and redistribute what they provide us?Can they insist on a royalty payment when we distribute it to 3rd parties?Before I start negotiating, I need to understand what we're getting - and ideally get this sort of thing explicitly agreed in the contract.I know I should consult a lawyer.
What are the restrictions on derived works of EPL licensed software?
licensing
null
_unix.98921
I've noticed this when trying to watch movies on that laptop running eOS. After 10 minutes or so the display is turned down. I've looked for settings against this and found the following:Power setting: put the computer to sleep: I set that to 'Never'. But it couldn't be this setting, my problem being that the display is shut, not that the computer is put to sleep.Brightness and lock: Brightness: Turn screen off when inactive for: set that to 'Never'. That should be it but it does not work.   Because I'd experienced a similar issue with GUI settings for display not being followed in another Ubuntu based distro - Xfce - reported here - I imagined also that a screensaver setting was the matter. I've found a situation similar to that and tried that solution. Only that, unlike in Xfce, now a gnome-screensaver was installed but without accessible GUI settings for it. So, it looked like a certain blank-screen screensaver was active in the background. To get a GUI for screensaver I installed xscreensaver. When starting that I was prompted that gnome-screensaver was already running and asked to shut it down. Said yes and then disabled screensaver in Xscreensaver.  Afterwards I also uninstalled gnome-screensaver, but the same problem would still reappear.
Display shuts down while watching a movie after 10 minutes no matter the settings in Elementary OS
gui;display settings;screensaver;display;elementary os
Background

There are 2 solutions that were determined for this particular problem. The 1st involved launching xscreensaver, and disabling it so that no screensaver is configured. The 2nd method involved completely disabling the screensaver in X altogether, through the use of the xset command.

Solution #1

A solution with a narrow scope (by cipricus) is that of adding a fourth step to those included in the answer.

Install xscreensaver
Remove gnome-screensaver
Set Xscreensaver NOT to use any screensaver ('Disable screensaver')
Add xscreensaver to the startup programs list. The command to add is:

xscreensaver -no-splash

This solution was suggested by the fact that this message appeared when starting xscreensaver before adding the fourth step:

[screenshot of the xscreensaver message]

Further instructions came from this source.

NOTE: To add a program to the startup list in eOS, go to System Settings > Startup Applications > Add

Solution #2

A solution with a wider scope by slm:

xset

Check to see what the xset setting is for screen blanking as well.
You can check using this command:

$ xset q

We're specifically interested in this section of the output from the above command:

$ xset q
...
Screen Saver:
  prefer blanking:  yes    allow exposures:  yes
  timeout:  600    cycle:  600
...

Disabling screensaver

You can change these settings like this:

$ xset s off
$ xset s noblank

Confirm by running xset q again:

$ xset q
...
Screen Saver:
  prefer blanking:  no    allow exposures:  yes
  timeout:  0    cycle:  600
...

DPMS

You might also need to disable power management as well, that's the DPMS settings in the xset q output:

$ xset q
...
DPMS (Energy Star):
  Standby: 0    Suspend: 0    Off: 0
  DPMS is Enabled
  Monitor is On
...

Disable it like so:

$ xset -dpms

Confirm:

$ xset q
...
DPMS (Energy Star):
  Standby: 0    Suspend: 0    Off: 0
  DPMS is Disabled
...

Re-enabling features

You can re-enable these features at any time with these commands:

$ xset s blank    # blanking screensaver
$ xset s 600 600  # five minute interval
$ xset +dpms      # enable power management

Confirming changes:

$ xset q
...
Screen Saver:
  prefer blanking:  yes    allow exposures:  yes
  timeout:  600    cycle:  600
...
DPMS (Energy Star):
  Standby: 0    Suspend: 0    Off: 0
  DPMS is Enabled
  Monitor is On
...
_codereview.24891
I use this code to Load and Insert data to a table using a DataGridView in a C# windows application.

SqlCommand sCommand;
SqlDataAdapter sAdapter;
SqlCommandBuilder sBuilder;
DataSet sDs;
DataTable sTable;

private void form1_Load(object sender, EventArgs e)
{
    string connectionString = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\Database1.mdf;Integrated Security=True;User Instance=True";
    string sql = "SELECT * FROM mytable";
    SqlConnection connection = new SqlConnection(connectionString);
    connection.Open();
    sCommand = new SqlCommand(sql, connection);
    sAdapter = new SqlDataAdapter(sCommand);
    sBuilder = new SqlCommandBuilder(sAdapter);
    sDs = new DataSet();
    sAdapter.Fill(sDs, "mytable");
    sTable = sDs.Tables["mytable"];
    connection.Close();
    dataGridView1.DataSource = sDs.Tables["mytable"];
    dataGridView1.ReadOnly = true;
    save_btn.Enabled = false;
    dataGridView1.SelectionMode = DataGridViewSelectionMode.FullRowSelect;
}

private void new_btn_Click(object sender, EventArgs e)
{
    dataGridView1.ReadOnly = false;
    save_btn.Enabled = true;
    new_btn.Enabled = false;
    delete_btn.Enabled = false;
}

private void delete_btn_Click(object sender, EventArgs e)
{
    if (MessageBox.Show("Are you sure?", "Delete", MessageBoxButtons.YesNo) == DialogResult.Yes)
    {
        dataGridView1.Rows.RemoveAt(dataGridView1.SelectedRows[0].Index);
        sAdapter.Update(sTable);
    }
}

private void save_btn_Click(object sender, EventArgs e)
{
    sAdapter.Update(sTable);
    dataGridView1.ReadOnly = true;
    save_btn.Enabled = false;
    new_btn.Enabled = true;
    delete_btn.Enabled = true;
}

It's ok and works, but when I try to work with a query that has a condition then no rows are added to the DataGrid and MyTable anymore:

sql = "SELECT * FROM mytable where col2 = 1";
Insert to datagridview when SELECT query has WHERE condition
c#;.net;sql;winforms
null
_unix.319236
I have a problem. I want to make a bash script that writes data to a new column every time I run the script. For example, every week I check how many files I have in each folder:

find /home/user/admin/stuff/ -mtime -7 | wc -l >> results.xls
find /home/user/admin/old/ -mtime -7 | wc -l >> results.xls

I run the script every Monday, but I don't want to overwrite the data. I need the new data to go in a new column. For example:

Week1 Week2 Week3 ...
    2     3     5
    1     2     3
New column every time I run script
text processing;text formatting
#!/bin/bash
output_file=/tmp/results.xls
[ ! -f ${output_file} ] && echo -e "\n\n\n" > ${output_file}
stuff_count=$(find /home/user/admin/stuff/ -mtime -7 | wc -l)
old_count=$(find /home/user/admin/old/ -mtime -7 | wc -l)
now=$(date +%y%m%d)
sed -i "1 s/$/\t$now/" ${output_file}
sed -i "2 s/$/\t$stuff_count/" ${output_file}
sed -i "3 s/$/\t$old_count/" ${output_file}
_unix.148835
I am specifically looking for is dynamic formatting of output. In every terminal emulator I can remember having used in Linux, when some program prints to the screen, the output gets formatted to fit to the terminal window so that longer lines will wrap around. If I then change the width of the window, the previous wrapped around formatting still remains.On OSX, Terminal.app acts differently. The text is still formatted for the current size of window just as on Linux terminal emulators. However if I re-size the window, the text is automatically reformatted to match the new dimensions.This is super useful when, after the running a utility, I realize that I didn't make the window wide enough to show all the output clearly. On an especially slow running utility, it can be frustrating to need to run everything all over again only to get better formatting. I could redirect the output to a program like less, view or gview. However this just feels like too much work to do every time I run a utility that might not format well with the current window dimensions. Also, as far as I know less doesn't support bash style text coloration.Does anyone know of a Linux terminal emulator that has this behavior? It doesn't need to be out of the box behavior; I am willing to monkey with configuration settings to get something like this working. I have already poked around a number of terminal emulators on Linux to see if they support this, but I don't really have the time to try every single one of them. There are just too many! If truly no program exists that does this, is it because no one is trying to create this behavior? Is there some technical limitation on Linux in specific that does not allow this (don't see how this could be the case)?
Dynamic text wrapping of terminal output
terminal
null
_softwareengineering.332554
I have a node.js server for an API that is split between controllers and models (there is a router which is autopopulated at runtime). So for example here is a classic endpoint for fetching config data:

controller

import config from '../models/config';

let routes = {
    '/v1/config/:key': {
        get: async function (next) {
            let value = await config.get(this.params.key);
            this.body = {
                key: this.params.key,
                value: value
            };
            return;
        }
    }
}

model

import datastore from 'nedb-promise';
import conf from '../../../config';
import mkdirp from 'mkdirp';

mkdirp(conf.baseDir + '/data/')

const db = new datastore(config.baseDir + '/data/config.db');

const config = {
    get: async (key) => {
        const res = await db.find({key: key});
        return {
            key: res[0].key,
            value: res[0].value
        };
    },
    getValue: async (key, defVal) => {
        return config.get(key).value || defVal;
    },
    insert: async (key, value) => {
        doc = {
            key: key,
            value: value
        };
        return db.insert(doc);
    }
}

So for example the model transforms the database's data into sendable data. What it does here is very basic: it simply creates a new object with the intended keys, thereby removing the _id key. The getKey function is a shorthand intended internally to fetch config keys in the backend. (Please keep in mind this code might contain some errors; I haven't tested it yet.)

But the transformation could be more complex, from validating input values, to pre-processing request results, to populating 1-1 or 1-to-many relations if the database won't do it automatically. So who should do those kinds of operations? The model or the controller? In my case, if the model is doing it the code would lose its reusability, because in my example the _id field is removed, and maybe some internal backend code would need it. I would then need to create another function, which would be a little more complicated. Furthermore, my controller would be small in terms of lines of code compared to the model.
Plus, this would increase coupling between the datastore and the app, and if I wanted to provide different database providers according to the user's needs I would need to duplicate the transform code.

On the other hand, if the controller is doing it I would lose all my transformation steps or input validations.

What about error handling? Here is an example of an expected return body (in my tests, which I wrote before writing the API) in case of a duplicated entry in the controller:

{
  success: false,
  status: 400,
  data: {
    error_message: 'A config entry already exists with this key',
    error_code: 'EDUPENTRY'
  }
}

Who should generate this object? The controller or the model?
When splitting a Node.JS server between model and controllers, who should transform the data for the database to understand?
architecture;api;node.js;model;controller
null
_webapps.97389
On Quora, the author of an answer may delete any comment left on their answer. Is there a way for a user to retrieve their comment on an answer that the answer's author deleted?
Is there a way to retrieve a comment on an answer that the answerer deleted on Quora?
quora
No, as of now there is no way to retrieve a comment if the answer's author has deleted it.
_unix.58049
I would like to be able to use the sudo command in a chroot environment. I start the chroot as follows:

chroot /debian-squeeze /bin/bash

Now I'm logged in as root in the chroot. I can do su user to log in as a user named user. Now, sudo does not work:

user@HD:/$ sudo ls
sudo: must be setuid root

Some diagnostics:

user@HD:/$ which sudo
/usr/bin/sudo
user@HD:/$ ls -al /usr/bin/sudo
-rwsr-xr-x 2 root root 143884 May 23 2012 /usr/bin/sudo
user@HD:/$ ls -aln /usr/bin/sudo
-rwsr-xr-x 2 0 0 143884 May 23 2012 /usr/bin/sudo
root@HD:/# cat /etc/sudoers
Defaults env_reset
root ALL=(ALL) ALL
user ALL=(ALL) ALL
%sudo ALL=(ALL) ALL

As root, I can execute sudo without error.

Can anyone explain to me why sudo (or setuid) does not work like this?
Sudo does not work in chroot
linux;sudo;chroot
My guess is that /debian-squeeze is on a separate filesystem mounted without defaults or suid. The kernel will ignore the setuid bit on filesystems mounted without suid (defaults implies suid). To fix it:mount -o remount,suid /debian-squeeze
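To confirm that diagnosis, you can inspect the mount options of the filesystem holding the chroot, e.g. with findmnt -n -o OPTIONS --target /debian-squeeze on systems that have findmnt. A small helper (has_opt is a made-up name) makes the check scriptable:

```shell
# has_opt OPTLIST NAME — true if the comma-separated mount option
# list OPTLIST contains exactly the option NAME
has_opt() {
  case ",$1," in
    *",$2,"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# hypothetical usage against the chroot's filesystem:
#   opts=$(findmnt -n -o OPTIONS --target /debian-squeeze)
#   has_opt "$opts" nosuid && echo "setuid bits are ignored here"
```

The comma padding avoids false matches on options that merely contain the name as a substring (e.g. nosuid vs a hypothetical nosuidx).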
_softwareengineering.128888
When I began to use parser combinators my first reaction was a sense of liberation from what felt like an artificial distinction between parsing and lexing. All of a sudden everything was just parsing!However, I recently came across this posting on codereview.stackexchange illustrating someone reinstating this distinction. At first I thought this was very silly of them, but then the fact that functions exist in Parsec to support this behavior leads me to question myself.What are the advantages/disadvantages to parsing over an already lexed stream in parser combinators?
Are separate parsing and lexing passes good practice with parser combinators?
parsing;lexer;parser combinator
null
_unix.147516
In pycharm, there is an option to upload changes to a remote svn repository. However, it does not ask for password. How do I provide it?
subversion not working with pycharm
python;subversion
null
_softwareengineering.158217
I'm working on a project in which I'm considering using a hybrid of interfaces and composition as a single thing.

What I mean by this is having a containee class be used as a front for functionality implemented in a container class, where the container exposes the containee as a public property.

Example (pseudocode):

class Visibility(lambda doShow, lambda doHide, lambda isVisible)
    public method Show() {...}
    public method Hide() {...}
    public property IsVisible
    public event Shown
    public event Hidden

class SomeClassWithVisibility
    private member visibility = new Visibility(doShow, doHide, isVisible)
    public property Visibility with get() = visibility
    private method doShow() {...}
    private method doHide() {...}
    private method isVisible() {...}

There are three reasons I'm considering this:

The language in which I'm working (F#) has some annoyances w.r.t. implementing interfaces the way I need to (unless I'm missing something) and this will help avoid a lot of boilerplate code.
The containee classes could really be considered properties of the container class(es); i.e. there seems to be a fairly strong has-a relationship.
The containee classes will likely implement code which would have been pretty much the same when implemented in all the container classes, so why not do it once in one place? In the above example, this would include managing and emitting the Shown/Hidden events.

Does anyone see any issues with this Composiface/Intersition method, or know of a better way?

EDIT 2012.07.26 - It seems a little background information is warranted:

Where I work, we have a bunch of application front-ends that have limited access to system resources -- they need access to these resources to fully function. To remedy this we have a back-end application that can access the needed resources, with which the front-ends can communicate.
(There is an API written for the front-ends for accessing back-end functionality as though it were part of the front-end.)

The back-end program is out of date and its functionality is incomplete. It has made the transition from company to company a couple of times and we can't even compile it anymore. So I'm trying to rewrite it in my spare time.

I'm trying to update things to make a nice(r) interface/API for the front-ends (while allowing for backwards compatibility with older front-ends), hopefully something full of OOPy goodness. The thing is, I don't want to write the front-end API after I've written pretty much the same code in F# for implementing the back-end; so, what I'm planning on doing is applying attributes to classes/methods/properties that I would like to have code for in the API, then generating this code from the F# assembly using reflection.

The method outlined in this question is a possible alternative I'm considering instead of implementing straight interfaces on the classes in F#, because they're kind of a bear: in order to access something of an interface that has been implemented in a class, you have to explicitly cast an instance of that class to the interface type. This would make things painful when getting calls from the front-ends. If you don't want to have to do this, you have to call out all of the interface's methods/properties again in the class, outside of the interface implementation (which is separate from regular class members), and call the implementation's members. This is basically repeating the same code, which is what I'm trying to avoid!
Is this Hybrid of Interface / Composition kosher?
interfaces;composition
null
_unix.30531
I just realised that I can move a file that I do not own and don't have write permissions on. I have write permissions to the directory, so I am guessing that is why I could move it, but in this instance, is there any way of protecting the source file? The permissions for the file are as follows:

cgi-bin> ls -al
drwxrwxrwx 3 voyager endeavor   512 Feb  1 10:45 .
drwxrwxrwx 6 voyager endeavor   512 Feb  1 09:38 ..
-rwxr-xr-x 1 voyager endeavor 22374 Feb  1 10:45 webvoyage_link.cgi
cgi-bin> whoami
moorc
cgi-bin> groups
lrsn endeavor
cgi-bin> rm webvoyage_link.cgi
rm: webvoyage_link.cgi: override protection 755 (yes/no)? yes

This last one is a big surprise to me too. How can I delete a file that I don't have access to? There is obviously something I'm missing.
mv file without write permission to the source file
permissions;files;directory;rename;rm
Move (mv) is essentially an attribute-preserving copy followed by a deletion (rm), as far as permissions are concerned.1 Unlinking or removing a file means removing its directory entry from its containing directory. You are writing to the directory, not the file itself, hence no write permissions are necessary on the file. Most systems support the semantics of the sticky bit on directories (chmod +t dir/), which when set only allows file owners to remove files within that directory. Setting the sticky bit on cgi-bin/ would mean moorc can no longer unlink files in cgi-bin that belong to voyager.1 In general, when the destination is in the same filesystem as the source, there is no physical copy. Instead, a new link is made to the file in the destination directory, but the same general concept still holds that the file itself does not change.For more reading, look at this article explains how file and directory permissions (including the sticky bit) affect system calls.PostscriptI ran across an amusing analogy I really liked in a comment by @JorgWMittag on another question on this site.ExcerptIt is identical to how an actual, real-life directory works, which is why it's called directory, and not, for example, folder, which would behave quite differently. If I want to delete someone from my phone directory, I don't go to her house and kill her, I simply take a pen and strike through her number. IOW: I need write access to the directory, and no access to her. The analogy does break down a bit if you try to stretch it, because there's no effective way to describe the situation where the filesystem implementation automatically frees a file's disk blocks once the number of directory entries pointing to it drops to zero and all of its open handles are closed.
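For illustration, the sticky-bit setup described above can be reproduced on a scratch directory (the path is arbitrary, created with mktemp here):

```shell
# create a world-writable directory and set the sticky bit on it,
# the same combination /tmp uses (mode 1777)
dir=$(mktemp -d)
chmod 1777 "$dir"
ls -ld "$dir"   # the trailing "t" in the mode string shows the sticky bit

# with the bit set, only a file's owner (plus the directory owner and
# root) may unlink or rename entries inside, even though the directory
# itself is world-writable
```

In the question's scenario, voyager running chmod +t on cgi-bin/ would stop moorc from removing or moving webvoyage_link.cgi.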
_unix.340715
What is the origin of the letter d in dmesg?Wikipedia says:dmesg (display message or driver message)No reference are given for this assertion - it could just as well be debug message.What is the etymology of dmesg?
What does the 'd' mean in dmesg?
history;dmesg
No reference are given for this assertion - it could just as well be debug message.Not all the messages are for debugging. Some are purely informational. From the dmesg manual page (emphasis added):The default action is to display all messages from the kernel ring buffer.Update: Alternatively, d is from diagnostic. See: Why is dmesg called dmesg?.
_unix.115304
I need a notifier such that:

It is possible to send messages from one machine to another without any password (like in the case of notify-send). Correct me if I'm wrong.
It closes only when a user clicks on its cross button.

I found Dunst while searching. It needs these basic packages:

dbus
libxinerama
libxft
libxss
libxdg-basedir

out of which I'm not able to get libxdg-basedir installed on my system. I tried searching for it, but there aren't any packages available for CentOS.

Question: is it possible to compile and install Dunst (or a notifier) on CentOS? If so, how?
Dunst notifier on CentOS
linux;notifications
null
_webapps.40864
How can I have integrated translation of Facebook posts from English to another language in my browser? I have seen that in some cases, Facebook adds a Translate link, but not in my non-English account.
How can I get my Facebook posts translated from English to another language?
facebook
null
_codereview.18978
In SQLAlchemy there is no direct way to do an INSERT ... SELECT. If you want to do it without using raw SQL in several places of your code, you can create a custom SQL compilation. There is an example of how to do INSERT ... SELECT in the SA documentation. That example doesn't support columns in the INSERT part of the statement, something like INSERT INTO table (col1, col2) .... I've modified the example to support that: either a table (INSERT INTO table (SELECT ...)) or columns (INSERT INTO table (col1, col2) (SELECT ...)). Please have a look and comment :)

from sqlalchemy.sql.expression import Executable, ClauseElement
from sqlalchemy.ext.compiler import compiles

class InsertFromSelect(Executable, ClauseElement):
    def __init__(self, insert_spec, select):
        self.insert_spec = insert_spec
        self.select = select

@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
    if type(element.insert_spec) == list:
        columns = []
        for column in element.insert_spec:
            if element.insert_spec[0].table != column.table:
                raise Exception("Insert columns must belong to the same table")
            columns.append(compiler.process(column, asfrom=True))
        table = compiler.process(element.insert_spec[0].table)
        columns = ", ".join(columns)
        sql = "INSERT INTO %s (%s) (%s)" % (
            table, columns, compiler.process(element.select))
    else:
        sql = "INSERT INTO %s (%s)" % (
            compiler.process(element.insert_spec, asfrom=True),
            compiler.process(element.select))
    return sql

Example of its use with columns:

InsertFromSelect([dst_table.c.col2, dst_table.c.col1], select([src_table.c.col1, src_table.c.col1]))

Example of its use only with a table:

InsertFromSelect(dst_table, select([src_table]))

This works for me, but I want to hear other opinions.
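For reference, the two SQL forms this custom clause is meant to emit can be sketched directly with Python's stdlib sqlite3 (no SQLAlchemy involved; table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (col1 TEXT, col2 TEXT)")
cur.execute("CREATE TABLE dst (col1 TEXT, col2 TEXT)")
cur.executemany("INSERT INTO src VALUES (?, ?)", [("a", "b"), ("c", "d")])

# Whole-table form: INSERT INTO dst SELECT ...
cur.execute("INSERT INTO dst SELECT col1, col2 FROM src")

# Column-list form: INSERT INTO dst (col2, col1) SELECT ...
# (note the target columns can be reordered relative to the SELECT)
cur.execute("INSERT INTO dst (col2, col1) SELECT col1, col2 FROM src")

print(cur.execute("SELECT COUNT(*) FROM dst").fetchone()[0])  # 4
conn.commit()
```

The column-list form is exactly what the list branch of the compiled clause produces.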
SQLAlchemy - InsertFromSelect with columns support
python;sql
null
_codereview.984
I have been using a code pattern for recursive database actions in my applications. I create two class objects for each database table: a singular one (e.g. Agent) holding a single record with all field definitions, and a plural one (e.g. Agents) for database actions on those records, like select, insert, delete, update, etc. I find the pattern easy to use. But as time goes on, I find it somewhat laborious to define the same database action functions in different classes, differing only in data type. How can I improve this and avoid defining them again and again?

Sample code of a class file representing the class definition:

Imports EssenceDBLayer

Public Class Booking

#Region "Constants"
    Public Shared _Pre As String = "bk01"
    Public Shared _Table As String = "bookings"
#End Region

#Region "Instance Variables"
    Private _UIN As Integer = 0
    Private _Title As String = ""
    Private _Email As String = ""
    Private _contactPerson As String = ""
    Private _Telephone As String = ""
    Private _Mobile As String = ""
    Private _Address As String = ""
    Private _LastBalance As Double = 0
#End Region

#Region "Constructor"
    Public Sub New()
        'Do nothing as all private variables have been initiated
    End Sub

    Public Sub New(ByVal DataRow As DataRow)
        _UIN = CInt(DataRow.Item(_Pre & "UIN"))
        _Title = CStr(DataRow.Item(_Pre & "Title"))
        _Email = CStr(DataRow.Item(_Pre & "Email"))
        _contactPerson = CStr(DataRow.Item(_Pre & "contact_person"))
        _Telephone = CStr(DataRow.Item(_Pre & "Telephone"))
        _Mobile = CStr(DataRow.Item(_Pre & "Mobile"))
        _Address = CStr(DataRow.Item(_Pre & "Address"))
        _LastBalance = CDbl(DataRow.Item(_Pre & "Last_Balance"))
    End Sub
#End Region

#Region "Properties"
    Public Property UIN() As Integer
        Get
            Return _UIN
        End Get
        Set(ByVal value As Integer)
            _UIN = value
        End Set
    End Property

    Public Property Title() As String
        Get
            Return _Title
        End Get
        Set(ByVal value As String)
            _Title = value
        End Set
    End Property

    Public Property Email() As String
        Get
            Return _Email
        End Get
        Set(ByVal value As String)
            _Email = value
        End Set
    End Property

    Public Property ContactPerson() As String
        Get
            Return _contactPerson
        End Get
        Set(ByVal value As String)
            _contactPerson = value
        End Set
    End Property

    Public Property Telephone() As String
        Get
            Return _Telephone
        End Get
        Set(ByVal value As String)
            _Telephone = value
        End Set
    End Property

    Public Property Mobile() As String
        Get
            Return _Mobile
        End Get
        Set(ByVal value As String)
            _Mobile = value
        End Set
    End Property

    Public Property Address() As String
        Get
            Return _Address
        End Get
        Set(ByVal value As String)
            _Address = value
        End Set
    End Property

    Public Property LastBalance() As Double
        Get
            Return _LastBalance
        End Get
        Set(ByVal value As Double)
            _LastBalance = value
        End Set
    End Property
#End Region

#Region "Methods"
    Public Sub [Get](ByRef DataRow As DataRow)
        DataRow(_Pre & "Title") = _Title
        DataRow(_Pre & "Email") = _Email
        DataRow(_Pre & "Contact_person") = _contactPerson
        DataRow(_Pre & "Telephone") = _Telephone
        DataRow(_Pre & "Mobile") = _Mobile
        DataRow(_Pre & "Address") = _Address
        DataRow(_Pre & "last_balance") = _LastBalance
    End Sub
#End Region

End Class

Public Class Bookings
    Inherits DBLayer

#Region "Constants"
    Public Shared _Pre As String = "bk01"
    Public Shared _Table As String = "bookings"
#End Region

#Region "Standard Methods"
    Public Shared Function GetData() As List(Of Booking)
        Dim QueryString As String = String.Format("SELECT * FROM {0}{1} ORDER BY {0}UIN;", _Pre, _Table)
        Dim Dataset As DataSet = New DataSet()
        Dim DataList As List(Of Booking) = New List(Of Booking)
        Try
            Dataset = Query(QueryString)
            For Each DataRow As DataRow In Dataset.Tables(0).Rows
                DataList.Add(New Booking(DataRow))
            Next
        Catch ex As Exception
            DataList = Nothing
            SystemErrors.Create(New SystemError(ex.Message, ex.StackTrace))
        End Try
        Return DataList
    End Function

    Public Shared Function GetData(ByVal uin As String) As Booking
        Dim QueryString As String = String.Format("SELECT * FROM {0}{1} WHERE {0}uin = {2};", _Pre, _Table, uin)
        Dim Dataset As DataSet = New DataSet()
        Dim Data As Booking = New Booking()
        Try
            Dataset = Query(QueryString)
            If Dataset.Tables(0).Rows.Count = 1 Then
                Data = New Booking(Dataset.Tables(0).Rows(0))
            Else
                Data = Nothing
            End If
        Catch ex As Exception
            Data = Nothing
            SystemErrors.Create(New SystemError(ex.Message, ex.StackTrace))
        End Try
        Return Data
    End Function

    Public Shared Function Create(ByVal Data As Booking) As Boolean
        Dim QueryString As String = String.Format("SELECT * FROM {0}{1} WHERE {0}uin = Null;", _Pre, _Table)
        Dim Dataset As DataSet = New DataSet()
        Dim Datarow As DataRow
        Dim Result As Boolean = False
        Try
            Dataset = Query(QueryString)
            If Dataset.Tables(0).Rows.Count = 0 Then
                Datarow = Dataset.Tables(0).NewRow()
                Data.Get(Datarow)
                Dataset.Tables(0).Rows.Add(Datarow)
                Result = UpdateDB(QueryString, Dataset)
            Else
                Result = False
            End If
        Catch ex As Exception
            Result = False
            SystemErrors.Create(New SystemError(ex.Message, ex.StackTrace))
        End Try
        Return Result
    End Function

    Public Shared Function Update(ByVal Data As Booking) As Boolean
        Dim QueryString As String = String.Format("SELECT * FROM {0}{1} WHERE {0}uin = {2};", _Pre, _Table, Data.UIN)
        Dim Dataset As DataSet = New DataSet()
        Dim Result As Boolean = False
        Dim DataRow As DataRow = Nothing
        Try
            Dataset = Query(QueryString)
            If Dataset.Tables(0).Rows.Count = 1 Then
                DataRow = Dataset.Tables(0).Rows(0)
                Data.Get(DataRow)
                Result = UpdateDB(QueryString, Dataset)
            Else
                Result = False
            End If
        Catch ex As Exception
            Result = False
            SystemErrors.Create(New SystemError(ex.Message, ex.StackTrace))
        End Try
        Return Result
    End Function

    Public Shared Function UpdateBulk(ByRef DataList As List(Of Booking)) As Boolean
        Dim Result As Boolean = False
        Try
            For Each Data As Booking In DataList
                Update(Data)
            Next
            Result = True
        Catch ex As Exception
            SystemErrors.Create(New SystemError(ex.Message, ex.StackTrace))
        End Try
        Return Result
    End Function

    Public Shared Function FillGrid() As List(Of Booking)
        Return GetData()
    End Function
#End Region

End Class
Recursive database actions
object oriented;.net;database;vb.net
What you're talking about is called object-relational mapping.You could do this, but it will be a fair amount of effort. Luckily many people have run into this same question before, answered it and open-sourced that solution. I suggest looking at using one of those solutions.nHibernate is just one example but is a popular and mature solution.Edit: More accurately, object-relational mapping is mapping fields to columns, objects to tables and object relationships to table relationships, so it does exactly what you want and (optionally) much more.
_unix.66195
Does sane have a technical definition in a unix / linux context?I mean in situations such as this:checking whether build environment is sane... yes
Definition of sane
terminology
null
_unix.134037
I'm starting to use btrfs. I want to be able to snapshot certain directories but do not want to create sub-volumes. Is this possible?
btrfs snapshots without subvolumes?
linux;debian;filesystems;btrfs;snapshot
null
_codereview.47956
Please verify security from SQL injection attacks.

homepage.php

<html>
<head></head>
<body>
<ul id="list">
  <li><h3><a href="search.php?name=women-top">tops</a></h3></li>
  <li><h3><a href="#">suits</a></h3></li>
  <li><h3><a href="#">jeans</a></h3></li>
  <li><h3><a href="search.php?name=women">more</a></h3></li>
</ul>
</body>
</html>

second.php

<?php
$mysqli = new mysqli('localhost', 'root', '', 'shop');
if (mysqli_connect_errno()) {
    echo "Connection Failed: " . mysqli_connect_errno();
}
?>
<html>
<head></head>
<body>
<?php
session_start();
$lcSearchVal = $_GET['name'];
//echo "hi";
$lcSearcharr = explode("-", $lcSearchVal);
$result = count($lcSearchVal);
//echo $result;
$parts = array();
$parts1 = array();
foreach ($lcSearcharr as $lcSearchWord) {
    $parts[] = '`PNAME` LIKE "%' . $lcSearchWord . '%"';
    $parts1[] = '`TAGS` LIKE "%' . $lcSearchWord . '%"';
    //$parts[] = '`CATEGORY` LIKE "%' . $lcSearchWord . '%"';
}
$stmt = $mysqli->prepare("SELECT * FROM xml where (" . implode('AND', :name) . ")");
$stmt->bind_Param(':name', $parts);
$list = array();
if ($stmt->execute()) {
    while ($row = $stmt->fetch()) {
        $list[] = $row;
    }
}
$stmt->close();
$mysqli->close();
foreach ($list as $array) {
?>
    <div class="image"><img src="<?php echo $array['IMAGEURL'] ?>" width="200px" height="200px"/></a>
<?php
}
?>
</div>
</body>
</html>

When I click on a link in homepage.php, it will search the xml table for the products related to the clicked link. Please verify whether the SQL statement is secured from a Google bot's attack and whether it's handling the data securely or not.
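The code above splices user input straight into the LIKE clauses of the SQL text, which is the classic injection vector. As a general illustration of the safer shape (a hypothetical sketch using Python's stdlib sqlite3 rather than PHP/mysqli, with invented table and column names), the dynamic AND-ed LIKE conditions can be built from placeholders so that user input never becomes part of the SQL string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE xml (PNAME TEXT, IMAGEURL TEXT)")
cur.executemany("INSERT INTO xml VALUES (?, ?)",
                [("women top red", "a.jpg"), ("men shirt", "b.jpg")])

search = "women-top"  # e.g. the ?name= query parameter
words = search.split("-")

# One placeholder per word; the SQL text itself contains no user data.
clause = " AND ".join("PNAME LIKE ?" for _ in words)
params = ["%" + w + "%" for w in words]

cur.execute("SELECT IMAGEURL FROM xml WHERE " + clause, params)
rows = cur.fetchall()
print(rows)  # [('a.jpg',)]
```

Only the number of placeholders depends on the input, never the text around them; the driver handles all quoting.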
Is this shopping site safe from SQL injection attacks?
php;sql;mysql;security
null
_softwareengineering.103285
Possible Duplicates:
How do managers know if a person is a good or a bad programmer?
How to recognize a good programmer?

For the record, I am a programmer myself, and I still do coding. We are not doing your-just-another-CRUD-app; instead we are working on CAD apps. The nature of software development makes it really hard to gauge a programmer's worth. How can you tell whether a programmer is good or not-so-good?

All the programmers who work with me work on different parts of the application, and how difficult it is to get those parts working is only known to the person who spends the most time on them, in this case the programmers themselves. I, as an outsider, am not able to fully appreciate the amount of sweat, ingenuity, and effort they put into solving those problems, precisely because I don't have a chance to do the same job. This gives me a hard time when I evaluate them. How do I know programmer A is really great at solving the problem at hand, and therefore I can throw him a bigger, harder task? And how do I know programmer B is just working hard, but not working smart?

How can I evaluate and compensate programmers fairly?
How can you tell good programmers from the average one?
management
null
_codereview.123240
I want to generate statistical reports, and I have many different where clauses, so the function became long. I did much refactoring, but it is not enough. Can someone suggest techniques that would make the function short and easily readable?

public static function allRejUserByProv($prov = '', $gender = '', $dist = '', $ttc = '')
{
    if (!empty($gender) && !empty($prov) && !empty($dist)) {
        return self::where('decision', '3')->where('gender', $gender)->where('p_province', $prov)->where('p_district', $dist)->count();
    }
    if (!empty($prov) && !empty($dist)) {
        return self::where('decision', '3')->where('p_province', $prov)->where('p_district', $dist)->count();
    }
    if (!empty($prov) && !empty($gender)) {
        return self::where('decision', '3')->where('p_province', $prov)->where('gender', $gender)->count();
    }
    if (!empty($gender) && !empty($ttc)) {
        return self::where('decision', '3')->where('ttc_name', $ttc)->where('gender', $gender)->count();
    }
    if (!empty($ttc)) {
        return self::where('decision', '3')->where('ttc_name', $ttc)->count();
    }
    if (!empty($gender)) {
        return self::where('decision', '3')->where('gender', $gender)->count();
    }
    if (!empty($prov)) {
        return self::where('decision', '3')->where('p_province', $prov)->count();
    }
    return self::where('decision', '3')->count();
}

I am calling the function like this:

public function resultTTC($prov, $dist, $ttc)
{
    return [
        'rejected' => number_format(self::allRejUserByProv($prov, '', $dist, $ttc)),
        'rejected_male' => number_format(self::allRejUserByProv($prov, '1', $dist, $ttc)),
        'rejected_female' => number_format(self::allRejUserByProv($prov, '2', $dist, $ttc)),
    ];
}
Generating statistical reports
php;laravel
If you just want to apply the where for every non-empty value, you can do the following:

public static function allRejUserByProv($prov = '', $gender = '', $dist = '', $ttc = '')
{
    $fields = [
        'gender' => $gender,
        'p_district' => $dist,
        'p_province' => $prov,
        'ttc_name' => $ttc
    ];

    $result = self::where('decision', '3');

    foreach ($fields as $attr => $value) {
        if (! empty($value)) {
            $result = $result->where($attr, $value);
        }
    }

    return $result->count();
}

This way you will remove the multiple ifs.

If you will always use number_format to format the return value, why not just add it to the function:

return number_format($result->count());

Hope this helps :)
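The keep-only-non-empty-filters idea in this answer is language-agnostic. A hypothetical sketch of the same pattern in Python (stdlib sqlite3, invented table and column names), building one condition per non-empty filter just like the chained ->where() calls:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (decision TEXT, gender TEXT, p_province TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?, ?)",
                [("3", "1", "Kabul"), ("3", "2", "Kabul"), ("1", "1", "Herat")])

def count_rejected(**filters):
    # Start from the fixed condition, then add one clause per
    # non-empty filter value, mirroring the chained where() calls.
    clauses = ["decision = ?"]
    params = ["3"]
    for column, value in filters.items():
        if value:  # skip empty-string / None filters
            clauses.append(column + " = ?")
            params.append(value)
    sql = "SELECT COUNT(*) FROM users WHERE " + " AND ".join(clauses)
    return cur.execute(sql, params).fetchone()[0]

print(count_rejected(gender="", p_province="Kabul"))   # 2
print(count_rejected(gender="1", p_province="Kabul"))  # 1
```

Note the column names are fixed in the code and only the values are parameterized, so the dynamic clause stays injection-safe.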
_webmaster.68223
A year has gone missing from my domains. What can I do to get it back?

Here is the complete history (I have used abc.com and xyz.in as the domain names, as I do not want to disclose my own domain names):

abc.com and xyz.in were registered in November 2012 via a reseller of Webiq.
In November 2013, I was notified about the expiration of these two domains. When I contacted my reseller explaining that I would like to transfer the domains to GoDaddy, he told me that I had to renew them in order to transfer. So abc.com and xyz.in were renewed in November 2013 via the same reseller of Webiq.
I then started the transfers via GoDaddy, to whom I paid a minimal fee (and they even offered 1 additional year for each domain on the renewal).
On 17th November 2013, abc.com got transferred from Webiq to GoDaddy. The records showed it was valid till 11/05/2015.
On 18th January 2014, xyz.in got transferred from Webiq to GoDaddy. The records showed it was valid till 11/3/2016.

Two weeks ago from today, when I logged into my cPanel it notified me that my domain was expiring soon and that I should renew it. This was surprising, because it's supposed to be valid till 11/05/2015, but both my domains now seemed to show one year less! On contacting GoDaddy, they requested that I contact my old registrar, as the missing year must have been because of them.

When I submitted a support request to Webiq asking whether they cancelled it, they replied:

Your domain abc.com has been transferred away from us on 17-11-2013 and the domain xyz.in was transferred away from us on 18-01-2014. There are no order cancellation actions placed. If you have any billing related issues kindly contact your parent reseller.

GoDaddy has now made me aware of something called the 45-day rule, which clearly states that I should get a refund for the renewal, as the old registrar (Webiq) would have gained from this renewal regardless of whether they made a refund or not! I found the details of this in this link >> Transfer of Recently Renewed Domains
What happens when my domain provider cancels order after domain transfer?
domains;dns;domain registration;domain registrar;domain transfer
null
_datascience.904
I need to generate periodic (daily, monthly) web analytics dashboard reports. They will be static and don't require interaction, so imagine a PDF file as the target output. The reports will mix tables and charts (mainly sparkline and bullet graphs created with ggplot2). Think Stephen Few/Perceptual Edge style dashboards (example image omitted), but applied to web analytics. Any suggestions on what packages to use for creating these dashboard reports? My first intuition is to use R Markdown and knitr, but perhaps you've found a better solution. I can't seem to find rich examples of dashboards generated from R.
What do you use to generate a dashboard in R?
r;visualization
null
_datascience.15798
I have done several machine learning projects, but all of them have been connected to traditional machine learning (predictions, classifications, etc.). I have currently been offered a project to finish in less than 6 months.

The idea is to develop/improve a pre-existing piece of software. The software takes the image of a molecule from an advanced microscope and then tries to highlight the cell line in red; sometimes the software also takes the background or the lines of other cells as the highlighted part, and thus the user has to manually edit and trim such mistakes. The idea is to make the software learn from the user's edits and behavior over time.

One thing I want to know is whether such a 6-month project is realistic for someone with no background in image processing and pattern recognition. Or is it going to be terribly difficult because I have only had experience with data-oriented/statistical machine learning?

My other question is: what types of concepts/topics should I dig deep into to learn the fundamentals for carrying out this project?
What knowledge should I gain for developing a supervised image processing software that learns how to edit photos based on past behavior?
machine learning;deep learning;image recognition
From the description of your problem, you need both computer vision and deep learning for a task like that. It is going to be extremely difficult, but you are more at an advantage than anyone else, given that you have a strong statistical and machine learning background. You don't have to worry a lot about the image processing part, as there are libraries that will do that for you; you can look into PIL for that. The hard part is the learning-from-the-edits part.

A simpler way to solve this problem would be to focus on image processing and pick out the cell line clearly. The other way would be to train a convnet on a large collection of labelled images of the cell line, so that it is able to identify it in any picture. I do think it is quite a hard problem to solve, but do give it a try. Cheers. All the best.
_codereview.159351
I have implemented, as follows, a class applying the singleton pattern to get a single global access point to a database. I intend to provide a thread-safe implementation.

using System.Data.SqlClient;

public sealed class Database
{
    private static volatile SqlConnection instance;
    private static object syncRoot = new object();

    private const string connectionString = "Data Source=ServerName;"
        + "Initial Catalog=DataBaseName;"
        + "User id=UserName;"
        + "Password=Secret;";

    private Database() { }

    public static SqlConnection Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                        instance = new SqlConnection(connectionString);
                }
            }
            return instance;
        }
    }
}
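For comparison with the C# above, the same double-checked locking control flow can be sketched in Python. This is only an illustration of the pattern's shape (Python has no volatile, and the GIL changes the memory-model considerations), with a plain object standing in for the SqlConnection:

```python
import threading

class Database:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        # First check without the lock (fast path for the common case).
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have created it
                # between the first check and acquiring the lock.
                if cls._instance is None:
                    cls._instance = object()  # stand-in for the connection
        return cls._instance

# Hammer it from several threads; all must observe the same object.
results = []
threads = [threading.Thread(target=lambda: results.append(Database.instance()))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(set(id(r) for r in results)))  # 1
```

The two null checks are the essence of the pattern: the outer one avoids locking on every access, the inner one closes the race window.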
Singleton implementation of a database connection
c#;thread safety;singleton
null
_webapps.14859
I want to remove (not change) my Facebook username, so that my profile page should be accessible from my ID number. Is this even possible?
How to remove Facebook username and return to profile ID?
facebook;username;profile;delete
null
_unix.344840
I want to forward all the locally generated traffic on a dummy interface to a ppp interface. Since the PPP interface is dynamic (it comes up and goes down based on connected devices), my process binds to the dummy interface and sends traffic through it.

I have created a separate routing table with the following rules:

ip rule add oif dummy0 table rt_dummy
ip rule add from <dummy0-ip> table rt_dummy
ip rule add fwmark 100 table rt_dummy

The default route of that routing table is through the ppp interface:

ip route add default dev ppp0 table rt_dummy

and:

iptables -t nat -A POSTROUTING -s <dummy-interface-ip> -o ppp0 -j MASQUERADE
iptables -t raw -A OUTPUT -s <dummy-interface-ip> -j MARK --set-mark 100

But the packets are still NOT going through the ppp0 interface.
Outgoing traffic over dummy interface to ppp interface
iptables
null
_unix.44639
A few years ago I added this repository to my sources.list:

http://www.deb-multimedia.org/

because it contained packages like acroread or flash player, which were either missing or out of date in the official repos. However, now I have just realized that some of the packages from that repository are broken, e.g. mencoder. Hence a few questions:

How can I find out which packages are installed from this particular repository?
How can I make this repository lower priority, so that only the packages I want are automatically installed/upgraded from there?

EDIT: I edited the /etc/apt/preferences file as someone suggested:

grzes:/home/ga# cat /etc/apt/preferences
Package: *
Pin: release a=testing
Pin-Priority: 700

Package: *
Pin: release a=stable
Pin-Priority: 600

Package: *
Pin: release a=unstable
Pin-Priority: 50

Package: *
Pin: origin deb-multimedia.org/
Pin-Priority: 50

but it didn't seem to work (note that I downgraded this package manually):

grzes:/home/ga# apt-cache policy mencoder
mencoder:
  Installed: 2:1.0~rc4.dfsg1+svn34540-1+b2
  Candidate: 3:1.1-dmo5
  Version table:
     3:1.1-dmo5 0
         50 http://www.deb-multimedia.org/ unstable/main i386 Packages
        700 http://www.deb-multimedia.org/ testing/main i386 Packages
 *** 2:1.0~rc4.dfsg1+svn34540-1+b2 0
         50 http://ftp.uk.debian.org/debian/ unstable/main i386 Packages
        700 http://ftp.uk.debian.org/debian/ testing/main i386 Packages
        100 /var/lib/dpkg/status
     2:1.0~rc3++final.dfsg1-1 0
        600 http://ftp.uk.debian.org/debian/ stable/main i386 Packages
Managing unofficial repositories on a Debian system
debian;package management;apt;repository
It turns out that you can't have both the origin and release clauses at the same time. Every repository provides a label though, which can be used for filtering. In my case the correct /etc/apt/preferences file looks like this:

Package: acroread acroread-data acroread-debian-files acroread-dictionary acroread-dictionary-en acroread-escript acroread-fonts-jpn acroread-l10n acroread-l10n-en acroread-plugin-speech acroread-plugins cinelerra flashplayer-mozilla mozilla-acroread w32codecs
Pin: release a=testing,l=Unofficial Multimedia Packages
Pin-Priority: 550

Package: acroread cinelerra flashplayer-mozilla mozilla-acroread w32codecs
Pin: release a=stable,l=Unofficial Multimedia Packages
Pin-Priority: 500

Package: *
Pin: origin www.deb-multimedia.org
Pin-Priority: 50

Package: *
Pin: release a=testing
Pin-Priority: 700

Package: *
Pin: release a=stable
Pin-Priority: 600

Package: *
Pin: release a=unstable
Pin-Priority: 50

To get the list of all available labels you need to run:

apt-cache policy

without specifying a package name.
_unix.371762
hping is available on Alpine. https://pkgs.alpinelinux.org/contents?branch=edge&name=hping3&arch=x86&repo=testingHowever when I tried to install it, I'm getting the following error message.localhost:~$ apk search -v hpinglocalhost:~$ sudo apk search -v hpinglocalhost:~$ localhost:~$ sudo apk add hpingERROR: unsatisfiable constraints: hping (missing): required by: world[hping]localhost:~$ localhost:~$ sudo apk add hping2ERROR: unsatisfiable constraints: hping2 (missing): required by: world[hping2]localhost:~$ localhost:~$ sudo apk add hping3ERROR: unsatisfiable constraints: hping3 (missing): required by: world[hping3]localhost:~$ I don't have this problem on other packages such as tcpdump.
Alpine Linux unable to install hping; ERROR: unsatisfiable constraints
software installation;alpine linux
hping3 is in the testing repository.

# apk add hping3 --update-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing

You can also add this repository to /etc/apk/repositories.
_webmaster.35858
There is a similar question here, but the solution does not work in Apache for our site. I'm trying to remove multiple trailing slashes from URLs on our site. I found some .htaccess code that seems to work:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} ^(.*)//(.*)$
RewriteRule . %1/%2 [R=301,L]

This rule removes multiple slashes from anywhere in the URL, so:

http://www.mysite.com/category/accessories////

becomes:

http://www.mysite.com/category/accessories/

However, it redirects once for every extra slash. So:

http://www.mysite.com/category/accessories///////
301 redirects to
http://www.mysite.com/category/accessories//////
301 redirects to
http://www.mysite.com/category/accessories/////
301 redirects to
http://www.mysite.com/category/accessories////
301 redirects to
http://www.mysite.com/category/accessories///
301 redirects to
http://www.mysite.com/category/accessories//
301 redirects to
http://www.mysite.com/category/accessories/

Is it possible to rewrite this rule so that it does it all in a single 301 redirect?

Also, the above directive does not work at the root level of our site: http://www.mysite.com///// does not redirect, but it should.
Remove multiple trailing slashes in a single 301 in .htaccess?
htaccess;apache;301 redirect;trailing slash
If the slashes may only occur at the end of the URL, you may use this:

RewriteCond %{REQUEST_URI} ^(.*?)(?:/){2,}$
RewriteRule . %1/ [R=301,L]
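Why this needs only one redirect can be checked outside Apache: the lazy capture takes everything before the run of trailing slashes, and the {2,} quantifier swallows the whole run in a single match. A small Python check of the equivalent regex (a sketch of the matching behavior only, not of Apache's rewrite engine):

```python
import re

# Same pattern shape as the RewriteCond above.
pattern = re.compile(r"^(.*?)/{2,}$")

def collapse(url_path):
    # One substitution removes the entire run of trailing slashes,
    # mirroring the single 301 the rule issues.
    m = pattern.match(url_path)
    return m.group(1) + "/" if m else url_path

print(collapse("/category/accessories///////"))  # /category/accessories/
print(collapse("/category/accessories/"))        # unchanged, no double slash
print(collapse("//////"))                        # /
```

A single trailing slash does not match (the quantifier requires at least two), so already-canonical URLs are left alone and no redirect loop is possible.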
_codereview.15485
I have developed a prototype framework (MVVM) where a control on the form is bound to a model property using a naming convention, and its UI behavior is controlled using custom attributes.

As of now the control name is divided into two parts: a 3-character prefix and then the name of the property in the model to bind to. For example, the txtFirstName textbox is bound to FirstName in the model. During construction/load, all controls are looped through:

BaseEdit baseEdit = (BaseEdit) control;
baseEdit.DataBindings.Add("EditValue", viewModelBindingSource, baseEdit.Name.Remove(0, 3), true, DataSourceUpdateMode.OnPropertyChanged);

There are some other attributes, such as [ReadOnly] and [Unbound], which are used to control the UI behavior:

baseEdit.Properties.ReadOnly = Util.GetReadOnlyAttributeValue(control.Name.Remove(0, 3), viewModel.GetType());

I am thinking of doing the looping the other way around, i.e. over the model properties, using a Bound attribute: Bound[ControlName, PropertyName]

Bound["txtFirstName", "EditValue"]
FirstName

All the dropdown-type controls (combobox, dropdown, checkeddropdown, etc.) are autofilled by using a 'Key' from their Tag property:

if (control.GetType() == typeof(LookUpEdit) && !string.IsNullOrEmpty(Convert.ToString(control.Tag))) //Exact match
{
    LookUpEdit lookUpEdit = (LookUpEdit)control;
    DataBinding.InitializeLookUpEdit(lookUpEdit, lookUpEdit.Tag.ToString());
}

I took the biggest data-entry form, with around 45 controls, and tested the framework. Is the above approach suitable for a big project? Any suggestions for improving the framework?
Automatic Databinding of controls to Model
c#;winforms;.net 2.0
IMHO Anything that relies on controls' Tag property is fishy. In 15 years of VB4-5-6/VBA and then WinForms development, every single time I saw the Tag property assigned, there was a better way to solve the problem. I think using that property more often than not violates the principle of least surprise (POLS), because it's not typical to put anything in there - if anything it means a control is begging to be derived from that control's class, and featured with the relevant properties - which is obviously too much trouble to be worth the while.That's my rant against using the Tag property. That said it looks rather clean, much cleaner than this similar question, but it suffers from the same issue: WinForms applications are better off with the Model-View-Presenter pattern. [...] if you want to simulate WPF behavior in WinForms, the real thing will be much less trouble, and leave you with much cleaner code.Sorry if that's not what you wanted to hear...For a big project I'd seriously consider either MVP or MVVM with WPF.
_vi.8704
I have the following autocmd to search for function definitions in C files:

autocmd Filetype c,cpp execute "nnoremap ]m /\\v^[^=]*(([a-zA-Z_][a-zA-Z_0-9]+)|(operator .*))\\(.*\\)( const|:)?( \\{)?$\<cr>:nohl\<cr>"

As I have incsearch set, it will highlight all the matches, which is why I run :nohl at the end. However, the search fails if there are no matches found, and the :nohl is not executed in this case. This would be fine if the search didn't fail and throw an error when it finds nothing, but instead just didn't move the cursor (which it should).
Don't fail if pattern is not found
search;error
null
_unix.161736
How can we remove only the numbers that are 0 to 9 characters long from a file? I mean the lines that match this pattern.

Example of removable lines:

cat input.txt
112342311383728472323

Example of lines that shouldn't be removed:

cat input.txt
1a1245d458565438753b395923827495Hx
How to remove only 0-9 character long numbers from a file?
bash;sed;perl
Using sed:

sed -i.bak -e '/^[0-9]\{1,9\}$/d' file

Using perl:

perl -i.bak -nle 'print unless /^[0-9]{1,9}$/' file
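The selection logic of both one-liners, sketched in Python for clarity (the sample lines are invented, modeled on the question's examples): a line is dropped only when it consists entirely of one to nine digits, so a ten-digit number survives.

```python
import re

lines = ["1", "1234", "231", "1383728472", "1a", "1245d45", "5923827495Hx"]

# fullmatch mirrors the ^...$ anchors of the sed/perl patterns.
keep = [ln for ln in lines if not re.fullmatch(r"[0-9]{1,9}", ln)]
print(keep)  # ['1383728472', '1a', '1245d45', '5923827495Hx']
```

Note that "1383728472" is kept because it has ten digits; widen the quantifier (e.g. {1,10}) if such lines should also be removed.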
_cs.13372
I've constructed an algorithm that solves the 3SUM problem in $O(n\lg n) + \frac{n^2}{4}$ time. I'm new to algorithms and was wondering: how good is my running time? Googling didn't help. Thanks.
Computing 3SUM problem in $O(n\lg n) + \frac{n^2}{4}$ time
algorithms;algorithm analysis;time complexity
null
_webmaster.72812
It's very easy to see how much of your traffic is from Search, how much is from Social, how much is from Direct, and so on (screenshot omitted). We can tell that 11.03% is from Organic Search, and after setting the date range to another period the number became 15.55%. That is growing.

But I can't see the percentage trend for this. Is it possible to see it directly, rather than dumping out all the data and putting each day's number into a new Excel or Google Spreadsheet to figure out the trend?

What I expected is something like this (mock-up image omitted). Thanks.
How to see the trend of the percentage of organic search in all acquisition channels in Google Analytics?
google analytics
I don't know of a way of seeing the percentage over time, but you can get the absolute numbers over time. Like other compare-stats-over-time views, Google Analytics hides it under motion charts.

1. Choose the date range you are interested in.
2. Navigate to Acquisition -> All Traffic -> Channels.
3. Click on motion charts (the icon with three black circles, top right of the graph).
4. Change the metric of the graph from % new sessions to sessions (sideways drop-down to the left of the graph).
5. Change the graph to a line chart (the small gray line-chart icon in a tab over the graph, as opposed to the black line-chart icon next to the motion-charts icon).
_unix.213717
I start interactive terminals using a farm (LSF). These terminals get opened on some random host and get closed automatically after 15 days. I want to save the current shell environment (env, current directory, aliases, shell history, ...) in a file. I would like to save the settings periodically, or on the 14th day. Then, when the current shell terminates, I will relaunch a new terminal and restore the same environment from the saved file. This will help in getting almost the same shell again.
csh: How to save the shell environment (env, current dir, shell history) in a file and set it on another shell?
shell;csh
null
_unix.384119
In my script I create a directory and need to execute subsequent commands within that directory. The below script creates the directory, but the next command it invokes (repo init) does not get executed within that directory.

mkcdir ()
{
    echo "creating directory $1"
    mkdir -p -- ~/$1 && cd -P -- ~/$1
}

mkcdir $1
repo init -u [email protected]:P0/manifest.git -b refs/tags/$1
repo sync
Change directory to execute a script
bash;shell script;cwd
null
_unix.19672
I am not looking for a how-to on creating a repo (createrepo) or using yum. I want to understand how they work together. I want to know what files yum looks at and why, and what those files contain. I want to understand the structure of the repo and its files. I want to understand how it all works together. I have read many how-tos; I am looking for a more conceptual understanding. I am working with CentOS 6 32-bit.
How does createrepo work? How does yum parse its files? A conceptual explanation
centos;yum;rpm
Createrepo creates some informational files that the yum tool can use while fetching data from a repository. The files are filelists.xml, repomd.xml, etc. The tutorial below explains the complete working of yum: How does YUM work?
_unix.174543
I have opened the webcam for capturing using OpenCV in C++. Then I stopped the program using CTRL+Z. The webcam could not turn off, because releasing it was not defined in the program. And I cannot start my program again, because the capture program is still using the webcam and it is busy.

Error:

libv4l2: error setting pixformat: Device or resource busy
HIGHGUI ERROR: libv4l unable to ioctl S_FMT...

I found the process id using lsof | grep libv4l2:

capture 5591 mylove mem REG 8,8 52584 1737777 /usr/lib64/libv4l2.so.0.0.0

and tried to close the capture using kill 5591 and also pkill capture, as both a normal user and root. But the camera LED is still turned on and my program cannot start.

What is the fastest and best method to release/close the camera?
release/close capture of camera
camera;v4l;opencv
null
_scicomp.12897
What is a good direct method to compute the spectral decomposition / Schur decomposition / singular value decomposition of a symmetric matrix?

Direct means as in LU decomposition, Cholesky decomposition, or Golub's SVD algorithm, in contrast to iterative methods. You can, of course, apply the aforementioned algorithm of Golub, but I guess that would be breaking a butterfly on a wheel.
Spectral decomposition of symmetric matrix
linear algebra;dense matrix
In addition to the QR algorithm, the divide and conquer method is also worth mentioning. It is applicable to symmetric tridiagonal matrices, but any symmetric matrix can be reduced to such a form via the Lanczos method. It hinges on the observation that a tridiagonal matrix is, up to a rank-1 perturbation, a block diagonal matrix. One can then find the eigendecomposition of the sub-blocks (in parallel!) and glue them back together using a clever trick. Of course, finding the eigenvalues of the blocks must be done with the QR method, as Federico alludes to.

However, you've asked for a direct method -- both QR and divide and conquer are iterative methods. Well, there is no direct method akin to the LU decomposition to find the eigendecomposition of a matrix of dimension 5 or greater; there are only iterative methods. If such a direct method existed, one could find the zeros of an arbitrarily high-degree polynomial as an algebraic function of the coefficients, which Galois told us is impossible. The Golub-Kahan SVD algorithm is also iterative.
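The rank-1 split at the heart of divide and conquer can be written down explicitly. A sketch of the standard formulation (Cuppen's scheme), splitting a symmetric tridiagonal $T$ after row $m$, where $b_m$ is the off-diagonal entry at the split:

```latex
T =
\begin{pmatrix}
\hat{T}_1 & 0 \\
0 & \hat{T}_2
\end{pmatrix}
+ b_m \, v v^{\mathsf{T}},
\qquad v = e_m + e_{m+1},
```

where $\hat{T}_1$ is the leading block with its last diagonal entry reduced by $b_m$, and $\hat{T}_2$ is the trailing block with its first diagonal entry reduced by $b_m$. The eigendecompositions of the two blocks can be computed independently, and the eigenvalues of $T$ are then recovered by solving the secular equation of the rank-1 update.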
_unix.105610
I am trying to understand the Linux block layer, so I am writing a blog about it: http://www.linuxintro.org/wiki/blktrAce. When calling blktrace like this:

blktrace -d /dev/sdg -o - | blkparse -i -

I see e.g. the output

8,96 4 695 430.080106382 2356 I N 0 (00 ..) [kworker/4:2]
8,96 3 29 430.082179440 53 D N 0 (00 ..) [ksoftirqd/3]

I do not understand what this means. According to the man page of blkparse, there is an RWBS field (containing R for read, W for write, B for barrier, D for discard or S for sync). With some experimenting I found out it is the 7th column. However, it contains N. What does that mean? Where can I find information about what it means?
how does blktrace work?
linux;monitoring;block device
null
_unix.231588
Is it possible to downgrade from 7.9 to 7.8 on a Debian VM?
Debian: Downgrade from 7.9 to 7.8
debian;version
null
_webapps.2461
What are some good web tools to help me format my code for blogs? I'd like to be able to copy & paste my code into a textbox/textarea and have the web tool format it nicely for various popular blogging sites.

Features: color, indentation, line numbers, etc.
Languages: C/C++/C#, VB.NET, XML/HTML/XAML, Ruby, etc.
Blogging sites: Blogger, WordPress, etc.
What are some good Web Tools to help me format my code for blogs?
blog;formatting
GitHub's gist service has a rather neat embedding tool and recognises loads of different languages. I've used this to embed code snippets occasionally. Also, anything based around GeSHi works great; I've had good experiences with the CodeColorer plugin for WordPress.
_softwareengineering.334072
I have been having quite a time getting this to work reliably for hundreds of thousands of terms and potentially millions of pages per source, and ETL-ing the resulting data into a database in an automated fashion. I need to run the tasks in Mesos on a repeating schedule. The required languages are Scala/Java. For acquisition, I need to parse JavaScript, render data from Ajax, work with tracking cookies, etc. in order to scrape the sites. I've been working on an open source tool to do this as well. I discovered and have created an extremely simple API surrounding Selenium for this task, with serializable configuration for distribution. The tool is plug and play for a WebDriver. However, the crawls constantly run into trouble in that they always hang, despite being isolated fairly well and stripped down from one another (by specifying cache locations, minimizing the cache size, not downloading images, etc.).

Errors range from PhantomJS returning a cleanup error and failing to continue, to a general hang in ChromeDriver despite not running out of memory according to VisualVM. In fact, the highest memory use has been 25% and CPU use 50%, using 3-5 individual child processes.

Should I be running each term in a container? How do I make WebDriver reliable over a period of weeks or months? Is there an equally generic alternative?
How to make a webdriver run reliably in Selenium?
java;scala;selenium;web scraping;selenium webdriver
null
_softwareengineering.279005
I am trying to serialize a Message object using ObjectOutputStream, take the byte[] output of the serializer, encrypt it using an encryption tool, and then try to de-serialize it and cast it as an object. It gives an error - invalid stream header. Is it not possible to modify the output stream after serializing and then de-serialize it? Excuse me if I am doing something atrocious; I am a novice to Java.

Code for the serializer - serialMsg():

ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream out = new ObjectOutputStream(bos);
out.writeObject(this);
bytes = bos.toByteArray();
out.close();

The output of the serializer is then used as below:

byte[] data = this.serialMsg(); // This is where I serialize the Message object
byte[] out = util.Encrypt(data, fromsk, to.toString(), param); // This is where I encrypt the byte[] data and obtain an encrypted byte[] out

The object is then de-serialized as:

ByteArrayInputStream in = new ByteArrayInputStream(out);
ObjectInputStream objin = new ObjectInputStream(in);
msg = (Message) objin.readObject(); // This is where I cast the output of readObject into the type Message - the original type of the object that was serialized

I hope the objective behind this is clear. I am trying to modify the output stream and then trying to cast it back into the original type.
Serialization - can the output from a serializer be modified and then de-serialized
java;serialization
The serialization API makes a specific promise: If you save the bytes created by a serialize() call, you can later feed them into a deserialize() call, and your object will be reconstructed. That is all it does.You are doing something different: you are feeding a different stream of data to the deserializer. That isn't in its job description, so unsurprisingly it fails. It doesn't matter that the new data stream is intimately related to the original one; all that matters is that it isn't the same one.To achieve what you want, you would have to understand how the serialization process works and custom-craft your transformation so that the reverse operation would succeed. That is technically possible, but it would mean that you basically have to do all the work that the library is supposed to do for you. At that point, there is no longer much point in using it at all, and you're better off using a different mechanism that serializes and encrypts data the way you want to.Luckily, there are solutions that do both. See https://stackoverflow.com/questions/16950833/is-there-an-easy-way-to-encrypt-a-java-object for an example of a previous question about similar requirements.
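To make this concrete, here is a minimal, self-contained round trip. A simple XOR cipher stands in for the asker's encryption tool (the real util.Encrypt is not shown in the question, so the xor helper below is purely illustrative): the point is that you must decrypt back to the exact bytes ObjectOutputStream produced before calling readObject.

```java
import java.io.*;

// Sketch: serialize -> "encrypt" -> "decrypt" -> deserialize.
// The xor() helper is a stand-in for a real encryption tool; any
// scheme works, as long as decryption restores the exact byte
// stream that ObjectOutputStream produced.
public class RoundTrip {

    static byte[] xor(byte[] data, byte key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key);
        }
        return out;
    }

    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        String message = "hello";
        byte[] plain = serialize(message);
        byte[] cipher = xor(plain, (byte) 0x5A); // "encrypt"

        // Feeding `cipher` straight into deserialize() is what the
        // question does: the stream no longer begins with the magic
        // header bytes, so ObjectInputStream reports
        // "invalid stream header".

        byte[] recovered = xor(cipher, (byte) 0x5A); // decrypt first
        String back = (String) deserialize(recovered);
        System.out.println(back); // prints "hello"
    }
}
```

Swapping in a real cipher only changes the xor calls; the serialize/decrypt ordering is what matters.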
_codereview.105743
I have a string which I want to check for characters other than those in this list: {"1", "2", "3", "4", "5", "6", "7", "8", "9", "0", ".", "i"}. However, this check gets run several thousand times a second. I am therefore looking for the most efficient way of performing this check. What I have tried so far:

If Not System.Text.RegularExpressions.Regex.IsMatch(Input, "[^0-9\.i]") Then

This is equivalent to the code below, which I have also tried:

Imports System
Imports System.Linq

Public Module Module1
    Public Sub Main()
        Console.WriteLine(IsValidString())
    End Sub

    Private Function IsValidString() As Integer
        'Dim Input As String = "Hello World" 'This is Invalid
        'Dim Input As String = "13.4i+4" 'This is Invalid
        Dim Input As String = "13.4i" 'This is valid
        If Function()
               Dim IsValid As Boolean = True
               For Each Character In Input
                   If Not {"1", "2", "3", "4", "5", "6", "7", "8", "9", "0", ".", "i"}.Contains(Character) Then
                       IsValid = False
                   End If
               Next
               Return IsValid
           End Function() Then
            Return -1
        Else
            'Do some other stuff
            Return 1
        End If
    End Function
End Module

I am looking for optimal performance, possibly at the cost of readability, as I am willing to write pages on how the code works and what it does if only I can speed up the execution of my app!
Searching string for character not in array
performance;vb.net
In this case (and many others), when it comes to performance, there's really nothing that beats the old trusty For Next and Select Case statements.

Private Function IsValid(input As String) As Boolean
    Dim index As Integer
    Dim length As Integer = (input.Length - 1)
    For index = 0 To length
        Select Case input.Chars(index)
            Case "0"c, "1"c, "2"c, "3"c, "4"c, "5"c, "6"c, "7"c, "8"c, "9"c, "i"c, "."c
                Continue For
            Case Else
                Return False
        End Select
    Next
    Return True
End Function

I've run a test, as seen in this fiddle (Release build - Any CPU) with 10000 iterations, and this is the result:

{ Name = IsValid1, Repetitions = 10000, Result = False, Time = 0,7359 }
{ Name = IsValid2, Repetitions = 10000, Result = False, Time = 58,787 }
{ Name = IsValid3, Repetitions = 10000, Result = False, Time = 106,5004 }
{ Name = IsValid1, Repetitions = 10000, Result = True, Time = 0,4382 }
{ Name = IsValid2, Repetitions = 10000, Result = True, Time = 42,1742 }
{ Name = IsValid3, Repetitions = 10000, Result = True, Time = 65,9497 }

Private Function IsValid2(input As String) As Boolean
    Dim validChars = {"1"c, "2"c, "3"c, "4"c, "5"c, "6"c, "7"c, "8"c, "9"c, "0"c, "."c, "i"c}
    Return input.All(Function(c) validChars.Contains(c))
End Function

Private Function IsValid3(input As String) As Boolean
    Dim IsValid As Boolean = True
    For Each Character In input
        If Not {"1"c, "2"c, "3"c, "4"c, "5"c, "6"c, "7"c, "8"c, "9"c, "0"c, "."c, "i"c}.Contains(Character) Then
            IsValid = False
        End If
    Next
    Return IsValid
End Function
_cs.10081
I'm working on some exercises regarding graph theory and complexity. Now I'm asked to give an algorithm that computes the transposed graph $G^T$ of $G$, given the adjacency matrix of $G$. So basically I just have to give an algorithm to transpose an $N \times N$ matrix.

My first thought was to loop through all rows and columns and simply swap the values at each position $M[i,j]$, giving a complexity of $O(n^2)$. But I immediately realized there's no need to swap more than once, so I can skip a column every time; e.g., when I've iterated over row $i$, there's no need to start the iteration of the next row at column $i$, but rather at column $i + 1$.

This is all well and good, but how do I determine the complexity of this? When I think about a concrete example, for instance a 6x6 matrix, this leads to $6 + 5 + 4 + 3 + 2 + 1$ swaps (disregarding the fact that position $[i,i]$ is always in the right position if you want to transpose an $N \times N$ matrix, so we could skip that as well).

This looks a lot like the well-known arithmetic series which simplifies to $n^2$, which leads me to think this is also $O(n^2)$. There are actually $n^2/2$ swaps needed, but by convention the leading constants may be ignored, so this still leads to $O(n^2)$. Skipping the $[i,i]$ swaps leads to $n^2/2 - n$ swaps, which is still $O(n^2)$, but with less work.

Some clarification would be awesome :)
What is the complexity of this matrix transposition?
graph theory;time complexity;algorithm analysis;linear algebra;adjacency matrix
The sequence you (correctly) identified sums up to $\frac{n(n-1)}{2}=\Theta(n^2)$, which gives the runtime you were looking for.

I'm not sure what you are referring to by leading constants. Do you mean you can ignore e.g. $1+2+3$? Sure, but that will only subtract $6$ from the complexity, which is clearly meaningless. On the other hand, you cannot ignore $f(n)$ "leading constants" for any $f(n)=\omega(1)$, so there is really no point in ignoring them altogether.
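Spelling the sum out term by term, using the standard arithmetic-series formula:

```latex
\sum_{k=1}^{n-1} k \;=\; \frac{n(n-1)}{2} \;=\; \frac{n^2}{2} - \frac{n}{2} \;=\; \Theta(n^2)
```

Dropping the constant factor $\tfrac{1}{2}$ and the lower-order term $\tfrac{n}{2}$ leaves $\Theta(n^2)$: skipping already-swapped entries roughly halves the work, but does not change the asymptotic class.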
_cstheory.36564
What's stopping GHC from translating Haskell into a concatenative programming language such as combinatory logic and then simply using stack allocation for everything? According to Wikipedia, the translation from lambda calculus to combinatory logic is trivial, and concatenative programming languages can rely solely on a stack for memory allocation. Is it feasible to do this translation and thus eliminate garbage collection for languages such as Haskell and OCaml? Are there downsides to doing this?

EDIT: moved here https://stackoverflow.com/questions/39440412/why-do-functional-programming-languages-require-garbage-collection
Why do functional programming languages require garbage collection?
functional programming;haskell
null
_codereview.48537
Problem:

\$n!\$ means \$n \times (n-1) \times \ldots \times 3 \times 2 \times 1\$

For example, \$10! = 10 \times 9 \times \ldots \times 3 \times 2 \times 1 = 3628800\$, and the sum of the digits in the number \$10!\$ is \$3 + 6 + 2 + 8 + 8 + 0 + 0 = 27\$.

Find the sum of the digits in the number \$100!\$.

My solution in Clojure:

(reduce + (map (fn [x] (Integer. (str x))) (seq (str (apply *' (range 1 101))))))

Questions:

Is there a way to avoid the *' in the factorial bit? (apply *' (range 1 101))

I converted the result of the factorial to a string, then to a sequence, and then mapped an Integer conversion over a string conversion of each element. Surely there must be a way to simplify this?
Project Euler #20 solution in Clojure
clojure;programming challenge
Your first question: you could make range return a list of bigints, and reduce over it:

(reduce * (range (bigint 1) 101))

Your second question:

- You don't have to explicitly use seq; Clojure will automatically treat your string as a seq.
- You don't have to use the full-blown string-to-number converter; you could, for example, use int to get the char code: (map #(- (int %) (int \0)) "1234")

For other ways of getting digits of a number, check out this thread.
_cstheory.11302
Is there any software package allowing decomposition of unitaries from $U(2^n)$ into quantum circuits over a predefined universal gate set?
Software package for decomposing quantum circuits
quantum computing;software
null
_unix.316802
I've successfully been able to configure the latest Firefox (source) without errors. All the required dependencies are in place (i.e. GCC 4.9.2 via devtoolset-3, Python 2.7, Yasm, libffi 3.2.1, and so on). When I run ./mach build it also successfully configures and starts making the binaries... then after about 24 minutes it chokes on:

24:40.15 /home/osboxes/firefox-50.0b7/gfx/thebes/gfxFontconfigFonts.cpp: In member function virtual already_AddRefed<gfxFont> gfxPangoFontGroup::FindFontForChar(uint32_t, uint32_t, uint32_t, gfxFontGroup::Script, gfxFont*, uint8_t*):
24:40.15 /home/osboxes/firefox-50.0b7/gfx/thebes/gfxFontconfigFonts.cpp:1628:66: error: g_unicode_script_from_iso15924 was not declared in this scope
24:40.15 (const PangoScript)g_unicode_script_from_iso15924(scriptTag);
24:40.15 ^

The pertinent part being:

g_unicode_script_from_iso15924 was not declared in this scope

I searched online for this error first, and the only reference to it is a fixed bug in v52 (ref), which isn't even in the sources repo at this time. This isn't a bug.

How do I compile Firefox 50 for a system using GLibc 2.12?

Solved: I discovered that g_unicode_script_from_iso15924 is a new symbol in GLib 2.30 (ref). GLib needs to be updated to at least version 2.30.
Compiling Firefox 50 under GLibc 2.12
centos;compiling;firefox;glibc;glib
That's not a symbol in glibc, it's a symbol in GLib. If you build and install GLib 2.30 or later, you should be able to build Firefox 50.