Columns: id (string, 5–27 chars) · question (string, 19–69.9k chars) · title (string, 1–150 chars) · tags (string, 1–118 chars) · accepted_answer (string, 4–29.9k chars)
_softwareengineering.247262
I'm writing a method, and depending on a config field I need to change where I get my data from. This results in me having to write code that looks like this:

    List<string> result = new List<string>();
    if (configField)
    {
        result.Add(fieldA);
    }
    else
    {
        result.Add(...BusinessLogic...);
    }

I'll have to write that if statement many, many times, so of course I want to turn it into a method instead so I can just write result.Add(newMethod(configField, fieldA, dataForBusinessLogicToParse)). This method would be useless to anyone not writing the specific method I'm writing, so does it make sense to declare it as a separate private method, or can I just declare it inline as a delegate like this:

    Func<Enum, int, int> newMethod = (configField, fieldA, dataForBusinessLogicToParse) => ...businessLogic...

I'm worried declaring it inline might make the code more difficult to understand, but I think it makes the class cleaner.
Is it good practice to declare a function inline?
c#;programming practices
As far as I know, declaring a helper method as a lambda like this is not commonly done in C#, so I would advise against doing it unless you have a good reason. Good reasons include:

- The lambda could do something a separate method can't, like closing over a local variable or using an anonymous type.
- Others in your team agree with you that this is a good practice.
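The closure point is easiest to see with a toy example; the following is a minimal Python sketch of the same idea (the names and the stand-in "business logic" are made up), since the mechanics are the same for a C# lambda capturing a local:

    def build_result(config_field, field_a, raw_data):
        # stand-in for the real business logic, computed from local data
        business_default = len(raw_data)

        def pick():
            # inline helper: it closes over business_default and config_field,
            # which a separate top-level private function could not see
            # without extra parameters
            return field_a if config_field else business_default

        return [pick()]

    print(build_result(True, 42, "abc"))    # [42]
    print(build_result(False, 42, "abc"))   # [3]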
_cs.68003
In the Alloy Tutorial they denote the reflexive transitive closure with a Kleene star, saying that it admits zero or more steps at that position:

    // File system is connected
    fact { FSObject in Root.*contents }

"In Alloy, in can be read as 'subset of' (among other things). The operator * denotes reflexive transitive closure. Thus, this fact says that the set of all file system objects is a subset of everything reachable from the Root by following the contents relation zero or more times."

Reflexive Transitive Closure *: "In Alloy, *bar denotes the reflexive transitive closure of bar. It is equivalent to (iden + ^bar), where ^ is the (non-reflexive) transitive closure operator."

Can you explain the closure and star operators such that it becomes obvious that they are identical?
Reflexive transitive closure = (zero or more) Kleene star?
terminology;closure properties;kleene star;transitivity
null
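Both operators unfold to the same union of iterated joins, which is why "reflexive transitive closure" and "zero or more steps" coincide. Writing $R^{0} = \mathrm{iden}$ and $R^{n+1} = R . R^{n}$ (relational join), a standard identity is
$$ {^{*}}R \;=\; \mathrm{iden} + {^{\wedge}}R \;=\; \bigcup_{n\ge 0} R^{n}, \qquad {^{\wedge}}R \;=\; \bigcup_{n\ge 1} R^{n}, $$
and the Kleene star on languages has exactly the same shape, $L^{*} = \bigcup_{n\ge 0} L^{n}$ with $L^{0} = \{\varepsilon\}$: the $n = 0$ term (iden, respectively $\varepsilon$) is the "zero steps" case, and dropping it gives the non-reflexive closure ($^{\wedge}$, respectively $L^{+}$).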
_webmaster.60546
I have two different versions of my site: a desktop version and a mobile-optimised version. That is, for the same URL, the server renders different HTML for different user agents. I had been using the Vary header for this scheme, as recommended by Google. However, now I want to move the mobile website to a single-page application. I want to know: if Google stops seeing anything on my mobile web version, but the desktop version continues to work as it is, how would the search rank be impacted, given that the mobile web gets more traffic than the desktop version? And how would the Vary header come into play?
How will search rankings get impacted if I move my mobile website to a single page application?
seo;google search
null
_webapps.86417
So I know how to get Google Spreadsheets to automate dates down a column: write a date, left-click on the crosshair in the bottom right-hand corner, drag down and let go. The thing I'm hoping to do, though, is have, say, ten (or any given amount of) rows down a column with today's date, then ten with tomorrow's date, and so on down indefinitely. It's for inputting sales data. If it's not possible, thanks anyway.
Date automation down columns
google spreadsheets;date
null
_webmaster.7093
I am currently a CS student and an aspiring programmer/web developer. I am wondering whether it is worth taking the time to master html and css to make websites when these CMS services/wysiwyg editors (wordpress, squarespace) seem to be becoming more and more functional. Does anyone think these publishing services might eventually make the need to design websites from raw code unnecessary? If not, please explain why. If designing a website eventually becomes as simple as using Photoshop I would much rather invest my time in programming languages.
html/css vs CMS
html;css;cms
I can't imagine myself using WYSIWYG for CSS and HTML. If you want to learn to DESIGN, you've got to know the 'backend', 'messy' part. WYSIWYG is OK if you are not building something robust. But definitely invest your time in programming languages; that's the engine.
_unix.106215
Found the line \+::::::/bin/bash in my /etc/passwd, which looks strange to me. What does that mean? Has my computer been hacked?
What does '\+::::::/bin/bash' in /etc/passwd mean?
users;nis;nsswitch
The answer lies in the nsswitch.conf(5) man page:Interaction with +/- syntax (compat mode)Linux libc5 without NYS does not have the name service switch but does allow the user some policy control. In /etc/passwd you could have entries of the form +user or +@netgroup (include the specified user from the NIS passwd map), -user or -@netgroup (exclude the specified user), and + (include every user, except the excluded ones, from the NIS passwd map).You can override certain passwd fields for a particular user from the NIS passwd map by using the extended form of +user:::::: in /etc/passwd. Non-empty fields override information in the NIS passwd map.Since most people only put a + at the end of /etc/passwd to include everything from NIS, the switch provides a faster alternative for this case (passwd: files nis) which doesnt require the single + entry in /etc/passwd, /etc/group, and /etc/shadow. If this is not sufficient, the NSS compat service provides full +/- semantics. By default, the source is nis, but this may be overridden by specifying nisplus as source for the pseudo-databases passwd_compat, group_compat and shadow_compat. These pseudo-databases are only available in GNU C Library.Assuming that your /etc/nsswitch.conf contains passwd: compat, I believe that that line means include all NIS users, but override the login shell to /bin/bash.
_unix.256541
I am trying to use the motion software (but I could use just about any other Linux software) in order to log some manual work via snapshots. So imagine I want to make a stop-motion film with my LEGO(r) toys (just an example, but for better understanding), and my webcam should take one snapshot when my hands get out of the way. There are tons of docs on the Internet for achieving the opposite (record when movement is detected), but none this way:

- wait until there is movement in the field of view
- movement detected, so wait until the movement stops
- movement stops, so take a snapshot.

Is this possible with motion, cheese, or other webcam software?
using motion to trigger snapshot when movement stops
camera;snapshot;motion
null
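In case an alternative to motion is acceptable (the question allows any other Linux software), here is a minimal Python/OpenCV sketch of exactly that logic: wait for movement, then save a frame once no movement has been seen for a while. The camera index, thresholds and timing below are placeholder guesses that need tuning:

    import time
    import cv2

    cap = cv2.VideoCapture(0)          # first webcam
    quiet_seconds = 2.0                # how long "no movement" must last
    motion_threshold = 5000            # changed pixels that count as movement

    prev_gray = None
    saw_motion = False
    last_motion_time = 0.0
    shot = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        if prev_gray is not None:
            diff = cv2.absdiff(prev_gray, gray)
            mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
            if cv2.countNonZero(mask) > motion_threshold:
                saw_motion = True                  # hands are in the frame
                last_motion_time = time.time()
            elif saw_motion and time.time() - last_motion_time > quiet_seconds:
                cv2.imwrite("snapshot_%04d.png" % shot, frame)   # movement ended
                shot += 1
                saw_motion = False                 # wait for the next burst
        prev_gray = gray

    cap.release()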
_webmaster.99490
I've seen several research lab/university websites that have a webpage for each researcher (with a kind of CV + links to personal content, etc.). The URLs of these pages follow the pattern www.example.com/~name (e.g. www.example.com/~doe and www.example.com/~mustermann for John Doe and Erika Mustermann, respectively). My question is: why is there a ~ before the name? Is it related to the GNU/Linux ~ home folder? Is there a convention for that?
Why do URLs of personal pages use the pattern: /~name?
url
null
_vi.10031
I often find scrolling a full page too disorienting, half a page too much, but a quarter page is just right. I currently do it just by holding down the arrow keys.How do I scroll 25% of the page down and up easily?
Scroll a quarter (25%) of the screen up or down
cursor movement;scrolling
Maybe Ctrl-D and Ctrl-U could be what you are looking for. By default they move half a screen. From :h CTRL-D: "Scroll window Downwards in the buffer. The number of lines comes from the 'scroll' option (default: half a screen). If [count] given, first set 'scroll' option to [count]." Which means that the first time you want to scroll in a window you can do XX Ctrl-D, where XX is 25% of the number of lines in your window. As that sets 'scroll' to the XX value, you can then use Ctrl-D and Ctrl-U to move 25% of the screen. Also, I think :h scrolling might be interesting for you.

Edit: And here is another solution with a function and some mappings to add to your vimrc:

    function! ScrollQuarter(move)
        let height = winheight(0)
        if a:move == 'up'
            let key = "^Y"
        else
            let key = "^E"
        endif
        execute 'normal! ' . height/4 . key
    endfunction

    nnoremap <silent> <up> :call ScrollQuarter('up')<CR>
    nnoremap <silent> <down> :call ScrollQuarter('down')<CR>

The function gets the height of the current window and, according to its parameter, scrolls the screen up or down by one quarter of that height.

Important note: on the lines let key = "^Y" and let key = "^E", you have to enter ^Y and ^E manually. To do so, use the key combinations Ctrl-V Ctrl-Y and Ctrl-V Ctrl-E. If you simply copy these lines, Vim will understand the command as the literal characters ^ followed by Y, whereas what we want is for Vim to use the keycode ^Y, which represents the code sent by the terminal when you press Ctrl-Y.

The mappings call the function, the first one to go up and the second one to go down. Of course, you can change <up> and <down> to some other keys if you want to keep the default behaviour of your arrow keys.
_cs.78043
In C++ a simple function like

    int id_int(int x){return x;}

has type id_int :: int -> int. A class template like

    template<class T>
    class List<T>{...};

has kind List :: * -> *. But what is the type or kind of a function template like

    template<class T>
    T id(T x){return x;}

Could it be id :: \x. x -> x? Does this even make sense?
Type of a function template?
type theory
A function like T id(T x){return x;} is actually a (parametrically) polymorphic function, and there are many frameworks that allow assigning a type to such beasts.One of the most popular frameworks to talk about such things is system F, which allows expressing the type of the above statement as: $$\forall T:*.\ T\rightarrow T$$This requires the ability to quantify over types to form other types, which is the main feature of system F. Note that because id is a value, it's type has itself a kind, that is$$\forall T:*.\ T\rightarrow T\ \ :\ \ * $$in keeping with the analogy with List, whose kind is, as you noted, $*\rightarrow *$, which makes it a type constructor.Note also that the List type constructor is not a term in system F, for that you need to go further out to system ${F}_\omega$, which was designed in part to study how polymorphism and type constructors may interact.
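To make the System F reading concrete, the template can be seen as a term that takes a type argument explicitly; instantiating it recovers the ordinary monomorphic function (standard System F notation, not C++ syntax):
$$ \mathsf{id} \;=\; \Lambda T{:}*.\; \lambda x{:}T.\; x \;\;:\;\; \forall T{:}*.\; T \to T, \qquad \mathsf{id}\,[\mathsf{int}] \;=\; \lambda x{:}\mathsf{int}.\; x \;\;:\;\; \mathsf{int} \to \mathsf{int}. $$
Calling id(5) in C++ corresponds to the type application id[int] followed by ordinary application to 5, with the compiler inferring the type argument.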
_unix.10855
Is it possible to create a guest account in Linux? by guest account I mean an account that does not require a password to log in graphically.I want this account for when people come over and are like can I use your computer to check my email. Then I don't have to worry about them snooping my stuff.I realize that some of this may require doing stuff specific to the login manager, since I wouldn't be surprised that this is a common problem, it'd be best to include instructions for xdm, kdm, and gdm and any other login managers that I haven't listed.
Can I create a local only guest account?
security;pam;account restrictions;login manager
null
_codereview.29699
I would love it if someone could give me some suggestions for these 2 graph search functions. I am new to scala and would love to get some insight into making these more idiomatic. type Vertex=Int type Graph=Map[Vertex,List[Vertex]] val g: Graph=Map(1 -> List(2,4), 2-> List(1,3), 3-> List(2,4), 4-> List(1,3)) //example graph meant to represent // 1---2 // | | // 4---3//I want this to return results in the different layers that it finds them (hence the list of list of vertex) def BFS(start: Vertex, g: Graph): List[List[Vertex]]={ val visited=List(start) val result=List(List(start)) def BFS0(elems: List[Vertex],result: List[List[Vertex]], visited: List[Vertex]): List[List[Vertex]]={ val newNeighbors=elems.flatMap(g(_)).filterNot(visited.contains).distinct if(newNeighbors.isEmpty) result else BFS0(newNeighbors, newNeighbors :: result, visited ++ newNeighbors) } BFS0(List(start),result,visited).reverse }//I would really appreciate some input on DFS, I have the feeling there is a way to do this sans var. def DFS(start: Vertex, g: Graph): List[Vertex]={ var visited=List(start) var result=List(start) def DFS0(start: Vertex): Unit={ for(n<-g(start); if !visited.contains(n)){ visited=n :: visited result=n :: result DFS0(n) }} DFS0(start) result.reverse } //some examplesscala> BFS(1,g)res84: List[List[Vertex]] = List(List(1), List(2, 4), List(3))scala> BFS(2,g)res85: List[List[Vertex]] = List(List(2), List(1, 3), List(4))scala> DFS(1,g)res86: List[Vertex] = List(1, 2, 3, 4)scala> DFS(3,g)res87: List[Vertex] = List(3, 2, 1, 4)
BFS and DFS in Scala
scala;graph
OK, so I'm going to start with your DFS method. You're right - you should be able to do it without those vars in the outer function. You should be able to work out why - after all, you have vals in the outer layer of your BFS method. Why? Because your BFS uses a recursive helper function, so the vals are only used once (and could be discarded).So your DFS function should really use recursion, but I suspect you may have rejected recursion because you couldn't see how visited would be properly preserved as a recursive function popped back and forth. The answer is foldLeft.def DFS(start: Vertex, g: Graph): List[Vertex] = { def DFS0(v: Vertex, visited: List[Vertex]): List[Vertex] = { if (visited.contains(v)) visited else { val neighbours:List[Vertex] = g(v) filterNot visited.contains neighbours.foldLeft(v :: visited)((b,a) => DFS0(a,b)) } } DFS0(start,List()).reverse} I don't have space here to explain foldLeft, if you've never encountered it - maybe Matt Malone's blog post will help. You can rewrite almost anything with foldLeft, although it isn't always a good idea. Definitely the right thing to do here, though. Notice that I completely dropped your result var since visited is the result, the way you are doing this.My version of your DFS method is entirely functional, which is how Scala really wants to be used. Note also the lack of braces and brackets in val neighbours:List[Vertex] = g(v) filterNot visited.containsIt can be written val neighbours:List[Vertex] = g(v).filterNot(visited.contains)but the Scala style is to omit the brackets and braces except where essential.Your BFS method is similarly over-populated. I've slimmed it down a little without altering the basic way it works:def BFS(start: Vertex, g: Graph): List[List[Vertex]] = { def BFS0(elems: List[Vertex],visited: List[List[Vertex]]): List[List[Vertex]] = { val newNeighbors = elems.flatMap(g(_)).filterNot(visited.flatten.contains).distinct if (newNeighbors.isEmpty) visited else BFS0(newNeighbors, newNeighbors :: visited) } BFS0(List(start),List(List(start))).reverse} It still gives the same results.The other big point to make is that while Scala is a functional language it is also an Object Oriented language. Those DFS and BFS methods should belong to a graph object, preferably at least derived from a generic class. 
Something like this:class Graph[T] { type Vertex = T type GraphMap = Map[Vertex,List[Vertex]] var g:GraphMap = Map() def BFS(start: Vertex): List[List[Vertex]] = { def BFS0(elems: List[Vertex],visited: List[List[Vertex]]): List[List[Vertex]] = { val newNeighbors = elems.flatMap(g(_)).filterNot(visited.flatten.contains).distinct if (newNeighbors.isEmpty) visited else BFS0(newNeighbors, newNeighbors :: visited) } BFS0(List(start),List(List(start))).reverse } def DFS(start: Vertex): List[Vertex] = { def DFS0(v: Vertex, visited: List[Vertex]): List[Vertex] = { if (visited.contains(v)) visited else { val neighbours:List[Vertex] = g(v) filterNot visited.contains neighbours.foldLeft(v :: visited)((b,a) => DFS0(a,b)) } } DFS0(start,List()).reverse }}And then you could do this:scala> var intGraph = new Graph[Int]scala> intGraph.g = Map(1 -> List(2,4), 2-> List(1,3), 3-> List(2,4), 4-> List(1,3))scala> intGraph.BFS(1)res2: List[List[Int]] = List(List(1), List(2, 4), List(3))scala> intGraph.BFS(2)res3: List[List[Int]] = List(List(2), List(1, 3), List(4))scala> intGraph.DFS(3)res4: List[Int] = List(3, 2, 1, 4)or this:scala> var sGraph = new Graph[String]scala> sGraph.g = Map(Apple -> List (Banana,Pear,Grape), Banana -> List(Apple,Plum), Pear -> List(Apple,Plum), Grape -> List(Apple,Plum), Plum -> List (Banana,Pear,Grape))scala> sGraph.BFS(Apple)res6: List[List[java.lang.String]] = List(List(Apple), List(Banana, Pear, Grape), List(Plum))
_unix.117582
People! I'm trying to install a little PDF presentation program (https://github.com/TrilbyWhite/Slider). When trying to do make I get this:

    slider.h:9:21: fatal error: poppler.h: No such file or directory

If I go to slider.h and change the #include <poppler.h> to #include </usr/include/poppler/glib/poppler.h>, then I get:

    /usr/include/poppler/glib/poppler.h:22:25: fatal error: glib-object.h: No such file or directory
     #include <glib-object.h>

So maybe someone could help me with this. Is it just unsatisfied dependencies, or what?
Poppler.h fatal error while installing Slider from git on Tanglu (Debian)
debian;make
On wheezy I getroot@orwell:/home/faheem# apt-file search poppler.hemscripten: /usr/share/emscripten/tests/poppler/glib/poppler.hemscripten: /usr/share/emscripten/tests/poppler/glib/reference/html/poppler-poppler.htmlemscripten-doc: /usr/share/emscripten/demos/poppler.htmllibpoppler-glib-dev: /usr/include/poppler/glib/poppler.hlibpoppler-glib-dev: /usr/share/doc/libpoppler-glib-dev/html/poppler/poppler-poppler.htmllibpoppler-glib-doc: /usr/share/gtk-doc/html/poppler/poppler-poppler.htmlDo you have libpoppler-glib-dev or similar installed?Also, did you really mean<#include </usr/include/poppler/glib/poppler.h>? I think you want something like#include <poppler/glib/poppler.h>
_cstheory.22455
A colleague of mine recently interviewed for a software engineering job, and he was given a problem regarding unique identifier creation and testing for validation.So, the problem is: if a generated unique-identifier, let's say an order id provided online by an ecommerce site, is provided to a customer, and when the customer attempts to lookup the order, they have inadvertently transposed two characters, how to quickly test that the id is invalid, and how to create an id such that the transposition of two characters does not represent another valid id.I want to know what class of problem is this (not in the complexity sense but categorically) and what are general methods that attempt to solve it. Looking for variations on the theme of unique identifier and invalidation on google has not produced interesting results. I am hoping someone here might lead me in the right direction to learn more about this kind of problem.I hope I have found the right forum for posing the question, and apologies if I have not.
Unique Identifier Creation and Invalidation
string matching
Looks like some data integrity check stuff. For example, adding something like a CRC16 (4 hex digits) to the identifier will allow you to filter out IDs with typos.
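For what it's worth, a classical way to get the "transposing two characters never yields another valid id" property for numeric IDs is a prime-modulus weighted check digit, as in ISBN-10. The sketch below is illustrative Python, not a specific library, and is only one of several possible schemes:

    def with_check_digit(digits: str) -> str:
        """Append an ISBN-10-style mod-11 check character to a numeric ID.

        With weights that are all distinct and non-zero modulo the prime 11,
        every single-digit typo and every transposition of two digits
        (adjacent or not) changes the check character, so the mutated string
        never validates.  Limited to 9-digit bodies so the weights stay distinct.
        """
        if not digits.isdigit() or len(digits) > 9:
            raise ValueError("expected at most 9 decimal digits")
        total = sum(w * int(d) for w, d in enumerate(digits, start=1))
        check = total % 11
        return digits + ("X" if check == 10 else str(check))

    def is_valid(code: str) -> bool:
        if len(code) < 2:
            return False
        body, check = code[:-1], code[-1]
        try:
            return with_check_digit(body)[-1] == check
        except ValueError:
            return False

    order_id = with_check_digit("273845")                            # -> "2738458"
    assert is_valid(order_id)
    swapped = order_id[0] + order_id[2] + order_id[1] + order_id[3:] # transpose two chars
    assert not is_valid(swapped)

For alphanumeric identifiers the same idea is usually applied with a larger prime modulus, or with schemes such as the Damm algorithm; the prime modulus is what guarantees that a single transposition always changes the check character.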
_scicomp.24293
I've recently finished an introductory course on the finite element method from a more mathematical perspective (following Brenner and Scott) and we were introduced to the finite element mass matrix in elliptic problems as the matrix arising from terms without a derivative. For example, a one-dimensional Helmholtz type equation with the form$$-u(x)'' + au(x) = f(x), \quad 0 < x < 1, \quad a>0\\ u(0) = u(1) = 0$$has a corresponding weak formulation that requires us to find $u$ such that$$\int_0^1 u' v'dx + a\int_0^1uv dx = \int_0^1fvdx \quad \forall v \in H^1_0$$where $H^1_0 = \{v \in H^1 : v(0) = v(1) = 0\}$.Choosing $S \subset H^1_0$ to be a conforming finite dimensional subset with a basis $\{ \phi_i \}_{i=1}^N$, and saying $u = \sum_{j = 0}^N u_j \phi_j $, we get the linear problem$$(\pmb{K} + a \pmb{M})U = F $$where $K_{ij} = \int_0^1 \phi_i' \phi_j' dx$ is the stiffness matrix and $M_{ij} = \int_0^1 \phi_i \phi_j dx$ is the mass matrix. The finite element method typically proceeds by choosing $S$ to be the space of piecewise polynomials for example. This formulation extends naturally to higher dimensions.From this previous post: How to formulate lumped mass matrix in FEM, there are various ways to lump the mass matrix. For example, by summing the off-diagonal terms: $M_{ii} = \sum M_{ij}$.My question is what is the justification of this? Is there mathematical reasoning why this should give a consistent method? Is there a way to quantify the error introduced by doing this? I've seen an explanation that justifies mass matrix lumping in the context of mechanics where this assumption implies that the mass of the system is concentrated at discrete points, but how does this generalize to more general elliptic PDE problems?
Effects of Lumping Mass Matrix
finite element;pde;matrix
null
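As a concrete illustration of row-sum lumping (a standard 1D result for linear elements of length $h$, not taken from the post above): the consistent element mass matrix and its lumped counterpart are
$$ M^{e} \;=\; \frac{h}{6}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \quad\longrightarrow\quad M^{e}_{\text{lumped}} \;=\; \frac{h}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, $$
which is exactly what one obtains by evaluating $\int \phi_i \phi_j \, dx$ with the trapezoidal (nodal) quadrature rule. Viewing lumping as an under-integration of the same bilinear form is one standard route to quantifying the error it introduces; for linear elements the quadrature error is typically small enough that the usual convergence order is retained, though this should be checked for the specific element and norm of interest.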
_codereview.148961
Inputs are two sorted lists (of the same length) of numbers. I would like to return a new merged, sorted new list. I have written the following code, but most lines are used to deal with the end case. I am wondering if there is any better way to write it.def merge(array1,array2): result = [0]*len(array1)*2 i = 0 # left index j = 0 # right index for k in range(0,len(result)): # when none of the pointer is at the end of the list if i != len(array1)-1 and j != len(array2)-1: if array1[i] < array2[j]: result[k] = array1[i] i = i + 1 elif array1[i] > array2[j]: result[k] = array2[j] j = j + 1 # the following codes are used to deal with the end cases. # possible to write it more compactly? elif i == len(array1)-1: if j > len(array2)-1: result[-1] = array1[-1] return result elif array1[i] < array2[j]: result[k] = array1[i] result[k+1:] = array2[j:] return result else: result[k] = array2[j] j = j + 1 elif j == len(array2)-1: if i > len(array1)-1: result[-1] = array2[-1] elif array2[j] < array1[i]: result[k] = array2[j] result[(k+1):] = array1[i:] return result else: result[k] = array1[i] i = i + 1 return result
Merge two sorted lists of numbers
python;algorithm;reinventing the wheel;mergesort
If you only want to support same-length array, you should do so explicitly, either by returning and empty list or an error codeIt's harder to read if you have to go back in the code to check what i, j and k mean.I find it's better to remove the comment and rename the variables to a more significant name:left_index = 0right_index = 0for result_index in range(0,len(result)):This means maybe you could also rename array1 and array2 to left_array and right_arrayIf you keep using the result of a function, just store it. Also, the length of the two arrays is supposed to be the same, so no need to make a distinction between len(array1) and len(array2)This check is easier to read if you invert it, leaving this as the else case.Something like: # the following codes are used to deal with the end cases. # possible to write it more compactly? if left_index == len(left_array)-1: [...] elif right_index == len(right_index)-1: [...] else: if left_array[left_index] < right_index[right_index]: result[merged_index] = left_array[left_index] left_index = left_index + 1 elif left_array[left_index] > right_index[right_index]: result[merged_index] = right_index[right_index] right_index = right_index + 1return resultBut, as @Simon said, you don't need all that code, because you're putting a lot of restrictions on the input data. They have to be the same length and the have to be sorted. Something like this should also work:def merge(left_array, right_array): if (len(left_array) != len(right_array)): return [] array_length = len(left_array) result_length = array_length * 2 result = [0] * result_length left_index = 0 right_index = 0 for result_index in range(0, result_length): if (left_index < array_length) and (left_array[left_index] <= right_array[right_index]): result[result_index] = left_array[left_index] result[result_index+1:] = right_array[right_index:] left_index += 1 elif (right_index < array_length): result[result_index] = right_array[right_index] right_index += 1 return result
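For comparison, here is a minimal sketch of the unrestricted two-pointer merge (no same-length assumption); the slicing at the end is what removes all the special end-of-list cases from the original code:

    import heapq

    def merge(left, right):
        """Two-pointer merge of two already-sorted lists of any lengths."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        result.extend(left[i:])    # at most one of these two is non-empty
        result.extend(right[j:])
        return result

    assert merge([1, 3, 5], [2, 4, 6, 8]) == [1, 2, 3, 4, 5, 6, 8]
    # the standard library equivalent, returning an iterator:
    assert list(heapq.merge([1, 3, 5], [2, 4, 6, 8])) == [1, 2, 3, 4, 5, 6, 8]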
_softwareengineering.338328
TL;DR - What criteria should you use to decide whether to 'do micro services'?I lead a team of developers and one of them insists that we adopt a micro services approach to architecture. I was hesitant at first, because I had been coding under a rock for several years and never knew that micro services was a thing.I began to warm up to the idea but I still don't think that a micro services approach is warranted in our case. We won't ever be servicing millions of users and there's only 5 of us so it's not like we're going to have full teams dedicated to such fine-grained services.We have a web based network management portal that we build and maintain. There are number of other applications that handle different things like VOIP call billing, Netflow collection, SNMP based usage collection etc. I wouldn't call these micro services as they're a bit more coarse than the fine-grained responsibilities that micro services appear to have.Should all dev teams everywhere 'do' micro services? If not, how do you decide whether micro services are appropriate for your environment?
How to decide whether to adopt a micro services approach
microservices
If not, how do you decide whether micro services are appropriate for your environment?Simply via pain. It sounds unusual, but is from my perspective a valid indicator, that something is going wrong.If you look at the reasons, why microservices are all the rage, there is a historical dimension to it which plays a big part.Usually succesful projects go like this:1) Start with a prototype2) Flesh out prototype3) Get business going4) Enormous growth which results in 4a) A big number of features are cranked out4b) Codebase growth beyond control5) PAIN starts: scheduling of deployments become a nightmare, dependend subsystems could not be deployed separately6) RELIEF Microservices FTWDividing the whole codebase in easy deployable components.The question you are asking is a good indicator, that you are not experiencing pain on such a level, that it would be necessary to move to microservices.Doing microservices is not without a price. Your system will definitively increase in terms of complexity. When you have a monolith, your world is plain simple: Call a method, do stuff, get results after a precalculatable amount of timeWhen you are dealing with microservices, you jump right into the mud of distributed systems: Call me maybe Things which are certain in a monolith, become uncertain in a microservice world.The reason, why the microservice approach was chosen by many big companies is simple: dealing with the problems of distributed systems was simpler than scaling their monolith. Of course: from an architectural point of view, a bunch of separated units looks cleaner (on paper) than a hairball of a monolith.I lead a team of developers and one of them insists that we adopt a micro services approach to architecture.I would ask him what would change (better or worse) in your concrete scenario.We won't ever be servicing millions of users and there's only 5 of us so it's not like we're going to have full teams dedicated to such fine-grained services.I do not see a (direct) problem here. Splitting up your codebase into separate deployable parts has nothing to do with team size. The codebase as such would be nearly the same. If your team handles the codebase now, it should be possible to do it after the migration. What is necessary, besides splitting up the codebase, is: educating your team in terms of how to deal with problems of distributed systems. This is an investment to make.We won't ever be servicing millions of users and there's only 5 of us so it's not like we're going to have full teams dedicated to such fine-grained services.I wouldn't call these micro services as they're a bit more coarse than the fine-grained responsibilities that micro services appear to have.Microservices have nothing to do with millions of users - though with problems of deploying a codebase facing a million of users. More: Despite the term micro, the services must not only be 100 Lines long or so - which is one, but not the only reason for calling it micro.I like the term focussed service much more. That's what it is: in terms of separation of concerns such a service deals with one topic.tl;drIf you do not have any problem running your current system, you shouldn't make a switch.
_unix.134170
I am using a laptop and Putty on a Windows system.When I connect to my Debian Squeeze server in Bash environment, I can use the Pos1/Home or End (at the numlock part of the keyboard) to navigate through the commandline I am just writing.However, when I create a subshell using screen, I cannot use Pos1/Home or End anymore. Pressing Num-Lock does not help.
No numlock in screen?
bash;debian;gnu screen;putty;numlock
I have found the reason why it didn't work.In the PuTTy configuration I had to change the session settings as follows:Connection -> Data -> Terminal details -> Terminal-type stringThe value was: xtermI changed it to linuxNow I can use the Home+End keys in Bash and in Screen as well.echo $TERM will show linux outside screen and screen.linux inside screen.
_webmaster.81665
What is the correct usage of using the Brand schema from schema.org?I have a website that sells clothes. I have a brands page where I list all the brands for which I have products on my website, for example like Levi, Calvin Klein, etc. If you click on any of these names I take you to the brand details page (on my website) that lists products that I have for sale on my website for that brand. A link would look something like thiswww.example.com/brands (list)www.example.com/brands/levi (details)www.example.com/brand/calvin-klein (details)Given my scenario above, on the brand details page, do I have to use the Brand schema from schema.org? Or should I just use it for my own brand, namely my website? This is what I currently have:<div class=container> <div itemscope itemtype=http://schema.org/Brand> <h1 itemprop=name>Calvin Klein</h1> <p itemprop=description>blah blah blah</p> </div></div>If I were to include a URL, do I need to link it to my URL on my website, or to the brand's website?
Correct usage of Brand from schema.org
seo;html;html5;microdata
What is the correct usage of using the brand schema from schema.org?There is not one correct usage it depends on what you want to convey.If you want to say something about a brand, you can use Schema.orgs Brand type.The Product type has the property brand, which takes a Brand item as value. This would allow you to reference the Brand from each of your Product items, for example by using Microdatas itemref attribute:<div itemprop=brand itemscope itemtype=http://schema.org/Brand id=brand-ck> <h1 itemprop=name>Calvin Klein</h1> <p itemprop=description>blah blah blah</p></div><article itemscope itemtype=http://schema.org/Product itemref=brand-ck> <!-- product 1 --></article><article itemscope itemtype=http://schema.org/Product itemref=brand-ck> <!-- product 2 --></article>If I were to include a url, do I need to link it to my url on my website, or to the band's website?The url property takes the URL of the item. This does not have to be the items official website (if it has one at all). On your site, you could specify the URL of your page about this brand.If you want to link to the brands official website, you could use the sameAs property (bold emphasis mine):URL of a reference Web page that unambiguously indicates the item's identity. E.g. the URL of the item's Wikipedia page, Freebase page, or official website.
_unix.212871
Based on hostapd, I m building a captive portal.- My Linux Machine provides a Wifi access.- iPad's and Android clients-tablets connect this Wifi.Generally, any client OS check if a url is reachable, if not : client OS states it is captive, and displays a popup browser window. Popup is used for login, presentation or else.Id like to display such a popup, to present my machine's service.But I dont get it. I ve avoided the net forward though. All connexions are redirected in the machine localhost website.Why dont I get such a popup ? How to get it ? How/Where should I implement it on my localhost ?// link to something in the same context: https://bugzilla.mozilla.org/show_bug.cgi?id=562917Captive portal [HostApd] detection by the browser?when popup show happens, how its content is defined ? You see what I mean ? For instance, a restaurant captive portal asks for your secret number on your note, where this page is stored ? how the OS know the URL to display in the popup ? That s really my quest
Captive portal detection, popup implementation?
linux;wifi;io redirection;authentication;hostapd
null
_cogsci.15330
As mentioned in this answer, it's possible to generate an fMRI BOLD signal from neurotransmitter consumption. What equation would be appropriate for this use?
Generate fMRI from neurotransmitter consumption
theoretical neuroscience;fmri
The simplest equation for getting a BOLD signal from neurotransmitter that I could find was in Tracing Problem Solving in Real Time: fMRI Analysisof the Subject-paced Tower of Hanoi, which itself references many other publications where it was used:$$H(t)= m \times(t/s)^a\times e^{-(t/s)}$$The parameters $s$, $a$ and $m$ don't have an explicit meaning. Heuristically, from the text:$m$ is the magnitude of the response and $s$ is a time scale. The function peaks at time $a \times s$. The parameter a determines the shape of the function such that the larger $a$ is the more narrowly the function will be distributed around its peak.So it seems you have to fit it to some previous data in the area you're trying to generate data from before it can be used.
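For concreteness, a small Python/NumPy sketch of evaluating that response function; the parameter values here are arbitrary placeholders, since in the cited paper they are fitted to data:

    import numpy as np

    def bold_response(t, m=1.0, s=1.0, a=6.0):
        """H(t) = m * (t/s)**a * exp(-t/s); placeholder parameters, normally fitted."""
        t = np.asarray(t, dtype=float)
        return m * (t / s) ** a * np.exp(-(t / s))

    t = np.linspace(0, 30, 301)     # seconds
    h = bold_response(t)            # peaks at t = a * s, i.e. 6 s with these values

The peak at $t = a \times s$ matches the description quoted above.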
_codereview.150845
I have to implement a double linked list as an exercise for a further education.There are three interfaces which have to be implemented:IValueElementpackage schnittstellen; // schnittstellen == interfacespublic interface IValueElement{ public String getName(); public void setName (String paramName); public int getValue() ; public void setValue(int paramValue);}IListElementpackage schnittstellen; public interface IListElement{ public IValueElement getValueElement(); public void setValueElement(IValueElement value); public IListElement getPredecessor(); public void setPredecessor (IListElement predecessor); public IListElement getSuccessor(); public void setSuccessor (IListElement successor);}IListpackage schnittstellen; public interface IList{ public IListElement getHead ( ) ; public void insertAtTheEnd(IValueElement value); public void insertAtPos(int pos , IValueElement value); public IValueElement getElementAt(int position); public int getFirstPosOf(IValueElement value); public void deleteFirstOf(IValueElement value); public void deleteAllOf(IValueElement value); public boolean member (IValueElement value); public void reverse(); public String toString();}Requirements for the implementation of IList:Has a default constructor.Head of the list isn't allowed to become null.A dummy element has to used as 0th element of the list.The predecessor reference of the the head has to point to the last element of the list.Here are my implementations of the interfaces:Class IValueElementpackage implementierung;import schnittstellen.IValueElement;public class ValueElement implements IValueElement{ private String name; private int value; public ValueElement(String name, int value) { if (name == null) { this.name = ; } else { this.name = name; } this.value = value; } public String getName() { return this.name; } public void setName(String paramName) { if (name == null) { this.name = ; } else { this.name = paramName; } } public int getValue() { return this.value; } public void setValue(int paramValue) { this.value = paramValue; } public String toString() { return Name : + this.name + , + Value : + this.value; }}Class IListElementpackage implementierung;import schnittstellen.IListElement;import schnittstellen.IValueElement;public class ListElement implements IListElement{ private IValueElement valueElement; private IListElement predecessor; private IListElement successor; public ListElement(IValueElement value) { if (value == null) { value = new ValueElement(, 0); } this.valueElement = value; this.predecessor = null; this.successor = null; } public IValueElement getValueElement() { return this.valueElement; } public void setValueElement(IValueElement value) { if (value == null) { value = new ValueElement(, 0); } else { this.valueElement = value; } } public IListElement getPredecessor() { return this.predecessor; } public void setPredecessor (IListElement predecessor) { this.predecessor = predecessor; } public IListElement getSuccessor() { return this.successor; } public void setSuccessor(IListElement successor) { this.successor = successor; }}Class Listpackage implementierung;import schnittstellen.IList;import schnittstellen.IListElement;import schnittstellen.IValueElement;public class List implements IList{ private IListElement head; private IListElement end; private int length; public List() { this.head = new ListElement(new ValueElement(Dummy, 0)); this.end = this.head; this.length = 1; } public IListElement getHead() { return this.head; } private ListElement createListElement(IValueElement value) { if 
(value == null) { return new ListElement(new ValueElement(, 0)); } else { return new ListElement(value); } } public void insertAtTheEnd(IValueElement value) { ListElement newElement = createListElement(value); IListElement currentEnd = this.end; currentEnd.setSuccessor(newElement); newElement.setPredecessor(currentEnd); this.end = newElement; this.length++; } @Override public void insertAtPos(int pos , IValueElement value) { ListElement newElement = createListElement(value); if (pos <= 1) { newElement.setSuccessor(this.head.getSuccessor()); newElement.setPredecessor(this.head); this.head.setSuccessor(newElement); } else if (pos > this.length) { newElement.setSuccessor(null); newElement.setPredecessor(this.end); this.end = newElement; } else { IListElement currentElement = this.head; for (int i = 1; i <= pos; i++) { currentElement = currentElement.getSuccessor(); if (i == pos) { IListElement predecessor = currentElement.getPredecessor(); newElement.setPredecessor(predecessor); newElement.setSuccessor(currentElement); predecessor.setSuccessor(newElement); currentElement.setPredecessor(newElement); break; } } } this.length++; } public IValueElement getElementAt(int position) { if (position <= 0 || position > this.length) { return null; } else if (position == 1) { return this.head.getSuccessor().getValueElement(); } else { IListElement ret = this.head; for (int i = 1; i < position; i++) { ret = ret.getSuccessor(); } return ret.getSuccessor().getValueElement(); } } public int getFirstPosOf(IValueElement value) { IListElement currentElement = this.head; int i = 1; while ((currentElement = currentElement.getSuccessor()) != null) { IValueElement currentValueElement = currentElement.getValueElement(); if (value == currentValueElement) { return i; } i++; } return -1; } public void deleteFirstOf(IValueElement value) { IListElement currentElement = this.head; while ((currentElement = currentElement.getSuccessor()) != null) { IValueElement currentValueElement = currentElement.getValueElement(); if (value == currentValueElement) { IListElement predecessor = currentElement.getPredecessor(); IListElement successor = currentElement.getSuccessor(); predecessor.setSuccessor(successor); // Successor? => Then it is NOT the last element in the list. if (successor != null) { successor.setPredecessor(predecessor); } else { this.end = predecessor; // In case it's the last element in the list it becomes the new end. 
} this.length--; return; } } } public void deleteAllOf( IValueElement value) { IListElement currentElement = this.head.getSuccessor(); while (currentElement != null) { IValueElement currentValueElement = currentElement.getValueElement(); if (value == currentValueElement) { IListElement predecessor = currentElement.getPredecessor(); IListElement successor = currentElement.getSuccessor(); predecessor.setSuccessor(successor); if (successor != null) { successor.setPredecessor(predecessor); } else { this.end = predecessor; } currentElement = successor; this.length--; } else { currentElement = currentElement.getSuccessor(); } } } public boolean member (IValueElement value) { IListElement currentElement = this.head; while ((currentElement = currentElement.getSuccessor()) != null) { IValueElement currentValueElement = currentElement.getValueElement(); if (value == currentValueElement) { return true; } } return false; } public void reverse() { IListElement currentElement = this.head.getSuccessor(); IListElement currentNext = currentElement; IListElement currentFirst = currentElement; while (currentNext != null) { currentNext = currentElement.getSuccessor(); if (this.getHead() == currentElement.getPredecessor()) { currentElement.setSuccessor(null); currentElement.setPredecessor(currentNext); } else if (currentNext != null) { currentElement.setSuccessor(currentElement.getPredecessor()); currentElement.setPredecessor(currentNext); } else { currentElement.setSuccessor(currentElement.getPredecessor()); currentElement.setPredecessor(this.head); } currentElement = currentNext; } this.head.setSuccessor(this.end); this.head.setPredecessor(currentFirst); this.end = currentFirst; } @Override public String toString() { IListElement currentElement = this.head; String ret = Head: + this.head.getValueElement().getName() + , + this.head.getValueElement().getValue() + \n; while ((currentElement = currentElement.getSuccessor()) != null) { IValueElement currentValueElement = currentElement.getValueElement(); ret += currentValueElement.getName() + , + currentValueElement.getValue() + \n; } return ret + End: + this.end.getValueElement().getName() + , + this.end.getValueElement().getValue() + \n; }}Moreover I have made (voluntarily) a test Class. 
For trying out what I got so far.package implementierung;import schnittstellen.*;public class ListTest{ public static void main (String[] args) { IList list = new List(); IValueElement data01 = new ValueElement(K1, 10); IValueElement data02 = new ValueElement(K2, 20); IValueElement data03 = new ValueElement(K3, 30); IValueElement data04 = new ValueElement(K4, 40); IValueElement data05 = new ValueElement(K5, 50); list.insertAtTheEnd(data01); list.insertAtTheEnd(data02); list.insertAtTheEnd(data03); list.insertAtTheEnd(data04); list.insertAtTheEnd(data05); System.out.println(list.toString()); // Testing reverse() list.reverse(); System.out.println(After reverse --- \n + list.toString()); // Testing getHead() System.out.println( Name of head element: + list.getHead().getValueElement().getName() + \n); // Testing getElementAt() System.out.println(At 2: + list.getElementAt(2).getName()); System.out.println(At 3: + list.getElementAt(3).getName()); System.out.println(At 5: + list.getElementAt(5).getName()); // Testing insertAtPos() IValueElement atPosN = new ValueElement(A-B, 99); list.insertAtPos(3, atPosN); // Testing insertAtTheEnd() IValueElement atTheEnd = new ValueElement(X-Y-Z, 100); list.insertAtTheEnd(atTheEnd); // Testing getElementAt() after additional insert System.out.println(After additional insert : ); System.out.println(At 2: + list.getElementAt(2).getName()); System.out.println(At 3: + list.getElementAt(3).getName()); System.out.println(At 5: + list.getElementAt(5).getName()); // Testing getFirstPosOf System.out.println(Element found at : + list.getFirstPosOf(data03)); System.out.println(Element found at : + list.getFirstPosOf(atPosN)); IValueElement test1 = new ValueElement(D-E-F, 10); System.out.println(Element found at : + list.getFirstPosOf(test1) + \n); System.out.println(list.toString()); // Testing member() IValueElement notMember = new ValueElement(x-y, 12); System.out.println(list.member(atPosN)); System.out.println(list.member(notMember)); System.out.println(list.member(data03)); // Testing deleteFirstOf() System.out.println(\nTrying to delete K3 - \n); list.deleteFirstOf(data03); System.out.println(list.toString()); // Testing deleteAllOf() System.out.println(\nTrying to delete all of K2 - \n); list.insertAtTheEnd(data02); // Add data02 a second time. System.out.println(list.toString()); list.deleteAllOf(data02); System.out.println(list.toString()); }}I should mention that I've tried to implement everything based upon what I've understood in the corresponding lecture. I avoided to lookup the internet. Instead figured out everything myself to become more confident with these data structures.I seems to work alright. But I'm sure there are flaws. Perhaps even errors. So therefore: All hints, comments and suggestions concerning improvements highly welcomed.
Java: Double Linked List which uses a sentinel node as zero element
java;linked list
Advice 1: a bugYou reversal operation will enter an infinite loop on empty list. In order to remedy this, write public void reverse() { if (length == 1) { // Otherwise, on empty list infinite loop. return; } IListElement currentElement = this.head.getSuccessor(); IListElement currentNext = currentElement; IListElement currentFirst = currentElement; ...}Advice 2Also, it is kind of funny you count the sentinel element in your length. Better design was ignoring it and start counting only the actual elements. Furthermore, what you call element is actual is called list node. Advice 3Prepending an I to interface names is a C# convention, not a Java convention.Advice 4You can be more clear in your code by simply swapping the element/node data instead of restructuring the entire list:public void reverseV2() { IListElement element1 = head.getSuccessor(); IListElement element2 = end; while (head != end) { String tmpString = element1.getValueElement().getName(); element1.getValueElement().setName(element2.getValueElement().getName()); element2.getValueElement().setName(tmpString); int tmpInt = element1.getValueElement().getValue(); element1.getValueElement().setValue(element2.getValueElement().getValue()); element2.getValueElement().setValue(tmpInt); element1 = element1.getSuccessor(); if (element1 == element2) { return; } element2 = element2.getPredecessor(); if (element2 == element1) { return; } }}In overall, your code is pretty clear and well written.
_unix.328308
I am happily limiting upload speed by port - but really want to limit download by process.It seems iptables did have functionality for matching and marking packets by process in the form of --pid-owner or --cmd-owner - but both have now been removed?$ iptables -m owner --help...owner match options:[!] --uid-owner userid[-userid] Match local UID[!] --gid-owner groupid[-groupid] Match local GID[!] --socket-exists Match if socket existsseems there are options to match by user or group, but not process.I am aware of trickle, and wondershaper - but neither allow shaping of an already running process
How can I limit Download bandwidth of an existing process? (iptables, tc, ?)
linux;networking;iptables;tc;packet
null
_unix.309560
I'm trying to join a Ubuntu 16.04 to a Windows domain (active directory) using realmd + sssd. Basically I was following this post which worked pretty well and I was able to join my server and could successfully authenticate as AD user. However there are two pieces missing in the integration:Register server's hostname in DNSUse sssd-sudo for user authorizationRegister server's hostname in DNS As mentioned I successfully join the AD by using realm join --user=dpr MYDOMAIN.INT --install=/:root@ip-172-28-5-174 ~ # realm listmydomain.int type: kerberos realm-name: MYDOMAIN.INT domain-name: mydomain.int configured: kerberos-member server-software: active-directory client-software: sssd required-package: sssd-tools required-package: sssd required-package: libnss-sss required-package: libpam-sss required-package: adcli required-package: samba-common-bin login-formats: %[email protected] login-policy: allow-realm-loginsHowever, dispite the successful join, my server is not known to the other machines in the domain using its hostname ip-172-28-5-174.mydomain.int. I found this documentation that mentions a dyndns_update setting in the sssd.conf file.As I'm using realm. The sssd configuration is generated automatically by issuing the join command. The generated config file looks like this:[sssd]domains = mydomain.intconfig_file_version = 2services = nss, pam[domain/mydomain.int]ad_domain = mydomain.intkrb5_realm = MYDOMAIN.INTrealmd_tags = manages-system joined-with-adclicache_credentials = Trueid_provider = adkrb5_store_password_if_offline = Truedefault_shell = /bin/bashldap_id_mapping = Trueuse_fully_qualified_names = Truefallback_homedir = /home/%u@%daccess_provider = adThat is I somehow need to add dyndns_update = True to this generated file. But how?Use sssd-sudo for user authorization Additionally I want to make sssd to read my sudo configuration from AD. I think this can be achieved using sssd-sudo but this needs to be enabled/configured in the sssd.conf file as well by adding sudo to the sssd services and use sudo_provider = ldap for my domain. Again I'm not able to figure out how to do this with realm.Basically I want my generated config file to look like this:[sssd]domains = mydomain.intconfig_file_version = 2services = nss, pam, sudo[domain/mydomain.int]id_provider = adaccess_provider = adsudo_provider = ldapad_domain = mydomain.intkrb5_realm = MYDOMAIN.INTrealmd_tags = manages-system joined-with-adclicache_credentials = Truekrb5_store_password_if_offline = Truedefault_shell = /bin/bashldap_id_mapping = Trueuse_fully_qualified_names = Truefallback_homedir = /home/%u@%dAny ideas on how this can be achieved?
Configure SSSD (sudo and dyndns_update) with realmd
ubuntu;active directory;sssd
Sadly there doesn't seem to be an option to add custom configuration parameters to the sssd.conf file generated by realmd.I had to adjust the generated config to contain my needed settings after joining the domain with realm join and restart sssd (service restart sssd) for the settings to take effect.
_webapps.21028
For some time now, I've happily been using bitly with a custom domain to shorten URLs on Twitter, but recently Twitter has decided to start shortening my already shortened URLs.Will this affect my statistics on bitly?
Twitter shortens my already shortened URLs. Will this affect my bitly statistics?
twitter;bit.ly
Yes*, it will. Consider this tweet of mine which was shortened to ( http://u.sbhat.me/rwa550 ) & which twitter wrapped with the t.co URL ( http://t.co/X3V8Hhsp ) & posted to my timeline. Checking on the stats confirmed that the referrer was t.co.*I believe, the reason there are couple of twitter.com & hootsuite.com referrers comes down to the way clients handle the API - twitter API provides both the t.co & the long URL(in this case, the shortened URL). If the client shows & sends request directly to the long URL, the obviously, the referrer won't be altered. However, twitter webpage shows the long URL but is actually linked to the short one and hence the referrer remains the short, i.e., t.co one.
_unix.126083
I recently discovered that my MidnightCommander takes around 40 seconds for every startup and the same goes for McEdit.I access my machine only via ssh and of course I'm not logging in as root, only to prevent the questions.I did an strace and it puts out two system calls that take around 20 seconds:poll([{fd =3, events=POLLIN}], 1, -1) = 1 ([{fd=3, revents=POLLIN}])select(5, [4], NULL, NULL, NULL) = 1 (in [4])Unfortunately I don't have the heck of a clue what these calls are, any hints or help would be appreciated, thanks in advance!Update:If I do a sudo mc it works as usual, only with my accout it does take that long.Solution:The solution is simple, I had X11 forwarding in Putty enabled, after deactivating it everything works like a charm. Strange, but anyway, it works again.Thanks for your answers!
MC and MCedit take long time to start
mc;strace
null
_softwareengineering.287995
I am writing a program, where in the beginning of the execution, I am instantiating a number of classifier objects using parameters stored in some files. I later use those classifiers in multiple objects. My question is: how should an object which uses a classifier obtain this classifier object?The objects which use the classifiers do not even exist at the initialization time of the program and they are far away from the initialization class, so even if they existed, passing the classifiers through multiple classes is a code smell.
Deserializing an object at beginning of program which is used (much) later
object oriented design
null
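One common pattern (a sketch, not the only answer to the question above): build the classifiers once into a registry object at startup and inject that registry, or the individual classifier, through the constructors of the objects created later, rather than threading each classifier through every intermediate class. All names below are made up for illustration:

    class ThresholdClassifier:
        """Stand-in for a real classifier deserialised from a parameter file."""
        def __init__(self, threshold):
            self.threshold = threshold
        def predict(self, value):
            return value > self.threshold

    class ClassifierRegistry:
        """Built once at start-up, then handed to whoever needs a classifier."""
        def __init__(self):
            self._classifiers = {}
        def register(self, name, classifier):
            self._classifiers[name] = classifier
        def get(self, name):
            return self._classifiers[name]

    class Scorer:
        """A much later consumer: receives its dependency via the constructor."""
        def __init__(self, registry):
            self._clf = registry.get("spam")
        def score(self, value):
            return self._clf.predict(value)

    registry = ClassifierRegistry()
    registry.register("spam", ThresholdClassifier(threshold=0.5))  # params would come from files
    print(Scorer(registry).score(0.9))   # True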
_unix.245155
I've already searched for it on Google and Stack Exchange but didn't find anything. Is there anything for Linux (specifically Kali Linux) for scrolling with the scroll (middle) mouse button? I mean, you know, like in Windows: when I click the scroll mouse button while over a scroll view, an icon appears, and when I move my mouse pointer up it scrolls up; when down, it scrolls down.
Linux Scroll Extension
kali linux;scrolling
null
_scicomp.22031
How does the pseudo inverse of a full column rank matrix change if I rescale a single row?In more detail the problem is the following:We have a fixed matrix $V$ with linear independent columns and lots of matrices $D_i$ of the form$$D_i = \begin{pmatrix}d_i & 0 \\0 & E_n \end{pmatrix}$$where $E_n$ is the identity matrix. All matrices are real and have nice condition numbers. So far so sweet.We know the pseudoinverse $V^+=(V^*V)^{-1} V^*$ of $V$ and the job is to compute all the pseudo inverses $$(D_i \cdot V)^+$$in a fast way. Any ideas how to exploit the structure of the problem here? For example if the problem was the other way round, algebra of pseudo inveses would allow to break it down to inverting a single number and multiplying a single row:$$(V \cdot D_i )^+ = D_i^{-1} \cdot V^+$$In my applications $V$ is of small size, like $4 \times 3$ for example.
Pseudoinverse of perturbed matrix
linear algebra;performance;least squares
null
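A sketch of one way to exploit the structure (standard rank-one update algebra; this assumes $(V^*V)^{-1}$ is available, as it appears in the given formula for $V^+$): since $D_i^2 = I + (d_i^2 - 1)\,e_1 e_1^*$,
$$ (D_i V)^+ \;=\; \bigl(V^* D_i^2 V\bigr)^{-1} V^* D_i, \qquad V^* D_i^2 V \;=\; V^*V + (d_i^2 - 1)\, v_1 v_1^*, $$
where $v_1^*$ is the first row of $V$. The Sherman-Morrison formula then gives the new inverse from the old one with $O(k^2)$ work per $D_i$ (here $k = 3$):
$$ \bigl(V^*V + \alpha\, v_1 v_1^*\bigr)^{-1} \;=\; (V^*V)^{-1} - \frac{\alpha\,(V^*V)^{-1} v_1 v_1^* (V^*V)^{-1}}{1 + \alpha\, v_1^* (V^*V)^{-1} v_1}, \qquad \alpha = d_i^2 - 1, $$
and $V^* D_i$ is just $V^*$ with its first column scaled by $d_i$.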
_webmaster.100065
I am about to move my sites to a VPS (Ubuntu 16, Apache, with Webmin VHM). On Webmin.com I came across an article that says: "WARNING: Running Webmin under Apache is almost never necessary unless you are on a very low-memory system that is already running Apache. Doing so will make Webmin slower, break some features and force use of the old ugly UI." Is it only bad phrasing? Because I can't think of any reason why Webmin would warn people not to use Webmin and Apache together: almost any Webmin user I know uses Apache (or Nginx) as their server software.
Does Webmin (a free VHM) recommend not running it with Apache?
apache;webmin
It can indeed be considered bad wording on their side, even though it is technically correct: "Running Webmin under Apache", not "Running Webmin with/and Apache". This refers to whether Webmin is run with its own HTTP server on port 10000 or served through Apache. It is in no way saying that Webmin and Apache should not be run at the same time.
_webapps.35357
Trello supports referencing one card from the description and comments of another card by using the referenced card Short ID. (More info here.)The only ways I currently know of detecting a card Short ID is:Entering into the card and looking at the Card # legend at the bottom of the right column.Hovering with the mouse over the card and looking at the last part of the URL. It might say something like /22, so in that case the short ID will be 22.However, both of those methods are quite uncomfortable when you're writing the description/comment for another card.I wonder, is there any way of having Trello autocomplete/suggest the Short ID for a card, or help with the referencing?Thanks!
Autocomplete/suggest for short IDs in Trello?
trello
When you are entering info on a card, type # then whatever your search term for the card is and it should start matching cards for you.For example: #fix should find all cards that have fix in their titles for you.With this, you don't have to know the shortcode.
_cs.49009
Is the jury still out on this or do we now know which of the above mentioned ways of randomizing Quick Sort is the most optimum as far as average case running time (averaged over all possible input arrays, with all permutations of the numbers being equally likely) is concerned?Or perhaps, has a case been made for the assertion that a generalization is not possible?
Quick Sort: Randomized Pivot vs Median of 3/'Ninther' Pivot vs Uniform Shuffle of Input
algorithms;sorting;efficiency
The asymptotic expected running time of quicksort is $\Theta(n \log n)$: this is true for all three pivot methods you mention.Wikipedia says that the expected number of comparisons is approximately $1.386 n \log n$ when using a random pivot, and approximately $1.188 n \log n$ when using median-of-three pivot. There's some experimental evidence that the number of comparisons might be about $1.094n \log n$ when using a ninther pivot for large arrays, median-of-three for medium-sized arrays, and single element for small arrays. See the following research paper:Jon L. Bentley, M. Douglas McIlroy, Engineering a Sort Function. Software Practice and Experience, 23(11):1249-1265, Nov 1993.(This paper is cited in the Wikipedia article I mentioned above.)I'm not familiar with the uniform shuffle pivot selection method. It sounds equivalent to choosing a random element and using that as the pivot.
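For reference, a compact Python sketch of the median-of-three variant being discussed (in-place, Hoare-style partitioning); swapping the pivot = ... line for pivot = a[random.randrange(lo, hi + 1)] gives the random-pivot variant, which is how the strategies are usually compared empirically:

    import random

    def quicksort(a, lo=0, hi=None):
        """In-place quicksort with a median-of-three pivot (Hoare-style partition)."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        mid = (lo + hi) // 2
        pivot = sorted((a[lo], a[mid], a[hi]))[1]   # median of three sample elements
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        quicksort(a, lo, j)
        quicksort(a, i, hi)

    data = [random.randrange(1000) for _ in range(100)]
    quicksort(data)
    assert data == sorted(data)

The uniform-shuffle strategy would instead call random.shuffle(data) once up front and then always pick a fixed position (say a[lo]) as the pivot; in expectation this behaves like the random-pivot variant.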
_cs.10227
I am trying to solve a recurrence using the substitution method. The recurrence relation is: $$T(n)=4T(n/2)+n^2$$ My guess is that $T(n)$ is $\Theta(n^2\log n)$ (and I am sure about it because of the master theorem), and to find an upper bound, I use induction. I tried to show that $T(n)\le cn^2\log n$ but that did not work; I got $T(n)\le cn^2\log n+n^2$. I then tried to show that, if $T(n)\le c_1 n^2\log n-c_2 n^2$, then it is also $\mathcal O(n^2\log n)$, but that also did not work and I got $T(n)\le c_1n^2\log(n/2)-c_2 n^2+n^2$. What trick can I use to show that? Thanks.
Solving $T(n)=4T(n/2)+n^2$
asymptotics;recurrence relation
null
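A worked step for what the post is driving at (my sketch, assuming the intended bound is $\Theta(n^2\log n)$, which is what the master theorem gives here): with base-2 logarithms, expanding $\log(n/2)=\log n-1$ already closes the plain induction, with no subtracted term needed: $$T(n)\le 4c\Big(\frac{n}{2}\Big)^2\log\frac{n}{2}+n^2 = cn^2\log n - cn^2 + n^2 \le cn^2\log n \quad\text{for every } c\ge 1.$$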
_unix.96534
Is it possible to disable Bash's autocompletion for a specific command only?Use case: For obvious reasons, I would like to disable autocompletion for the rm command when I'm root. It would also be a terrible pain if I disabled autocompletion altogether, so I'd like to remove it for rm only.Can this be done at all, preferably without hacking /etc/bash_completion and friends?
Disable Bash autocompletion for a specific command only
bash;autocomplete
You can do this easily by setting rm's completion to an empty wordlist: complete -W "" rm. Set it in /root/.bashrc if you only want it to apply to root.
_unix.3261
I've been messing around with my system too much and messed something up. I'm new to Ubuntu, but have been using linux on servers for a few years. I'm not sure of the correct terminology so I'm including screen shots to explain what is going on.First, system specs:Ubuntu 10.4 LTS x64 Lucid Core i7-970 Nvidia GTX 480 Dual Screen with Twinview Nvidia proprietary dev driver 260.24 (64-bit)Now what I screwed up:First major customization was ppa:goehle/goehle-ppa customizations for keeping evolution open after closing the main window. That worked fine until I started messing with getting hibernate working.I never got hibernate working even after installing linux-generic-tuxonice; it gave a warning about usb09 not stopping. The only things that I have in USB are a keyboard and mouse.Then I started getting the error:Trying to fix this, I reinstalled the Evolution customizations. The error persists and now the panel is messed up as well. I'm not getting the application icons, the menu with the chat status, or the shutdown/restart/lock screen menu.This is what it should look like:But this is what I'm getting now:How do I get my icons back?EDIT: I found how to get my application icons back.Right-click on panelAdd to Panel ...Notification Area.I still have not figured out what the chat bubble menu and power menu are called.
Gnome panel missing application icons, chat bubble menu, and power menu
ubuntu;gnome;notifications;gnome panel
The Power thingy and the user chat bubble thingy are both the same applet called Indicator Applet Session.
_unix.58324
I know that .profile / .bash_profile are loaded when a terminal session is started, either on the local machine or through SSH. Are there any files that are loaded/called when the session terminates? Reason: I have the .profile set to log the date and IP address that connects to a terminal session for a specific user.
Are any files loaded when a terminal session terminates?
bash;shell;ubuntu;exit
~/.bash_logout is executed by bash when a login shell exits. You can also get IP address and date details using lastlog; did you try that? Note that ~/.bash_logout will not run when the session is killed (kill -9 $$) or otherwise closed forcefully.
_webmaster.18001
Is a liquid layout (everything in % and font-size in 'em') good, or is creating multiple CSS files for different resolutions or browsers with the help of JavaScript better? I am an aspiring web designer and want to create universally accessible web pages.
Is a liquid layout or multiple CSS files better for universal accessibility of web pages?
html;css;website design
null
_unix.266309
I have a server at home that I use as a NAS and some other services. The server has Debian Jessie on it, with 4x 4 TB harddrives in RAID5. I use this server to store all my home data, movies, games, etc. About 75% of it is filled.I learned about AIDE some time ago after checking my cron reports, with aide giving an error:run-parts: /etc/cron.daily/aide exited with return code 1/etc/cron.daily/tripwire:### Error: File could not be opened.### Filename: /var/lib/tripwire/myhostname.twd### No such file or directory### Exiting...run-parts: /etc/cron.daily/tripwire exited with return code 8So to initialize my cron database, I executed the command: sudo aideinit according to this tutorial. However, this command has been running for the last two days!!!I noticed that it's scanning the whole server including my whole RAID array! This I learned because it gave stdout messages related to the data in my RAID. Part of them are the following:/raidarray/Games/UT2004/Help/BallisticFiles/Render_PistolP.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_M290P.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_FP9A5Pickups.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_NRP57P.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_M290S.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_R78S.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_A42S.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_MRT6Clip.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_EKS43S.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/BallisticStripe2.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_M50Clip.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/BallisticGoldLogo.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_Rockets.jpg mtime in future/raidarray/Games/UT2004/Help/BallisticFiles/Render_M925S.jpg mtime in futureSo what's going on? I'm quite new to AIDE, but I would like to understand how it works. Should it really take that long? Does it make sense for it to scan my whole RAID array? How would you manage this?
AIDE is taking forever to initialize
cron;aide
It turns out the solution is to exclude the RAID array's directory. This is how to exclude it: just add this line to the AIDE config file to exclude the folder /raidarray: !/raidarray/.*
_codereview.82920
I am using .Net Identity 2.0 with Entity Framework 6.0. I have a Person class inheriting IdentityUser. I have a Teacher (has additional Title property) and Student (has additional StudentNumber property) class inheriting from the Person class. I also have their corresponding roles in the database: Student and Instructor (or teacher).I wonder if the code below is the most efficient way of creating users with specific roles, it seems to be a redundant but I could not figure a shorter way: gEchoLuDBContext _db = new gEchoLuDBContext(); var userStore = new UserStore<IdentityUser>(_db); var userManager = new UserManager<IdentityUser>(userStore); foreach (ListItem role in rb_Roles.Items) { if (role.Selected) { var user = new Person() { UserName = txt_Username.Text, FirstName = txt_Firstname.Text, LastName = txt_Lastname.Text, Email = txt_Email.Text }; if (role.Text == Student) { user = new Student() { StudentNumber = , UserName = txt_Username.Text, FirstName = txt_Firstname.Text, LastName = txt_Lastname.Text, Email = txt_Email.Text }; } else if (role.Text == Instructor) { user = new Teacher() { Title = , UserName = txt_Username.Text, FirstName = txt_Firstname.Text, LastName = txt_Lastname.Text, Email = txt_Email.Text }; } IdentityResult result = userManager.Create(user, txt_Password.Text); if (result.Succeeded) { IdentityResult result2 = userManager.AddToRole(user.Id, role.Text); if (!result2.Succeeded) { lbl_Result.Text = The user created successfully. But, the selected roles are not assigned!; } } } }
Creating users with different roles using .Net Identity 2.0 within Entity Framework
c#;entity framework;inheritance
null
_cstheory.1944
Given a DAG with $|V| = n$ and has $s$ sources, we have to present subgraphs such that each subgraph has approximately $k_1=\sqrt{s}$ sources and approximately $k_2=\sqrt{n}$ nodes.(Note: Approximately means that each subgraph contains $\lceil \sqrt{n}\rceil$ or $\lfloor \sqrt{n} \rfloor$ nodes and covers $\lceil \sqrt{s}\rceil$ or $\lfloor \sqrt{s} \rfloor$ sourses of the original graph. All sources of the original graph have to be covered by some subgraph, so there has to be $\lceil \sqrt{s}\rceil$ or $\lfloor \sqrt{s} \rfloor$ subgraphs.)Assume following about the graph G(V,E):We try to solve the problem forgraphs in which such partitionexists - if the partition doesn'texist it can be stated that it isimpossible to create partitionAll the graph's node will have $\forall v \in V\ $ in_deg(v)=2 or in_deg(v)=1Let's define the height of the DAG to be the maximum path length from some source to some sink.The subgraphs have following requirements:We require that all subgraphsgenerated will have the same height(max length of longest path)Nodes of each subgraph should bereachable from the sources withinthat subgraph, using nodes of thatsubgraph as intermediate nodes.Moreover, the intersection of eachpair of node sets (of subgraphs)must be empty.In the following picture, you can see an example of a right partition (assume that each edge in the graph is directed upwards).There are 36 nodes and 8 sources [#10,11,12,13,20,21,22,23] in the example. So each subgraph should have 6 nodes and 2 or 3 sources.Do you have idea for algorithm?Thank you very much
DAG partitioning to subgraphs
ds.algorithms;graph theory;graph algorithms;directed acyclic graph;clustering
null
_unix.205023
I have versions 3.16 and 4.0 of linux-image package installed. During login I can select which kernel I want to boot in the advanced options menu item. However, when I install a DKMS module it is compiled only for the newer version:Setting up fglrx-modules-dkms (1:14.12-1) ...Loading new fglrx-14.12 DKMS files...Building only for 4.0.0-1-amd64Relevant packages (linux-headers, linux-kbuild, linux-compiler-gcc) are installed for 3.16 too.Why does the package not get compiled for the old kernel image? Can I configure something so it is compiled?
How do I compile DKMS module for multiple kernel image versions in Debian?
debian;compiling;kernel modules;dkms
null
_cs.26405
Do there exist two computable functions, a and b, such that every computable function can be constructed from a finite series of a's and b's under function composition? E.g., take the series a,b,a,b,b,a,a,a, whose composition is the function ababbaaa ( =a(b(a(b(b(a(a(a(x)))))))) ); this function is the one described by the series a,b,a,b,b,a,a,a. I want to know whether every program can be described by such a series. If such functions exist, can you give an example of a and b? Thanks.
Two functions which can create any computable function by composing?
computability;turing completeness
If such functions existed, they would constitute a computable enumeration of all computable functions, which is impossible for the following reason. Suppose you had a computable enumeration $f_i$ of all computable functions. The function $g\colon i \mapsto f_i(i) + 1$ is then computable, but by definition $g \neq f_i$ for all $i$.
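A sketch of the glossed step, assuming $a$ and $b$ are total: list all words $w_0,w_1,w_2,\dots$ over the alphabet $\{a,b\}$ in length-lexicographic order and let $f_i$ be the composition read off from $w_i$. Both the listing and the evaluation $(i,x)\mapsto f_i(x)$ are computable, so if every computable function were such a composition, the $f_i$ would form a computable enumeration and $g(i)=f_i(i)+1$ diagonalizes against it exactly as above.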
_unix.330323
I have a problem thatI can not solve even after scanning the Web. I trust in your help.I have a text file that contains several strings of different lengths.https: //insidemiamitatto.com/gugwywgifuw ';https://insidemiamitatto.com/gugyiwyeiuiuweyiweyi ';https://insidemiamitatto.com/gugyiipi9uuuppopi ';I need to eliminate with Applescript or Terminal the last 3 characters, i.ee ';I tried it with sed, but my invocation eliminates the characters only from the longer strings, leaving the others with 3 characters.Is there a way to eliminate the final 3 characters in each string?I also have a second question:Always with sed I can remove strings e.g:sed -i.bak -e '1,200d; 1874,2842d'This virtually eliminates a part of the initial and final text.In the rest of the files, I string groups that alternate every 18 strings, and I would like to erase 17 in each group, for example:1-18 19-37 38-55.I would keep the strings 1 19 38.Is sed or other feasible? I am using BBEdit, but every time I have to count manually, and it is exhausting when editing many files.
Sed and BBEdit HTML
sed;osx
null
_webmaster.56637
I'm looking for a white-hat method of getting Google to show a local site search result as a Google search result. For clarification:There is a page filled with names of certain people. Each of the names is linked to the local site search engine. So clicking on David jones would go to mysite.com/?q=david+jones. I want Google to show up the aforementioned link mysite.com/?q=david+jones as a search result if something like mysite david jones is queried. There is an obstacle I need to avoid:There are more than 450 people names (or links) on theaforementioned page. I've heard that having more than say 150 linkson the page is bad for SEO. In addition to those names there areother links to various other pages. i.e main menu, footer links,latest article links etc. (it's a Joomla system.)What I want to try:My solution to this is use a robots tag to index only content and not links. But I'm still stumped how to show the site search result as a Google Search result.Will adding all these local site search engine links in sitemap help?
How to get Google to show local site search engine results?
seo;search engines;site search;search results
You do not want to show your local site search results to Google to be indexed. First of all, as John Conde stated, Google doesn't necessarily want site search results in the index and, frankly, you don't want to display a huge page of links to Google as it will appear spammy to the algorithm under nearly all circumstances.As far as the authoritative source than John was unable to locate, I think there are several that serve. The first is an old Matt Cutts post from 2007 that mostly describes the problem but does quote Vanessa Fox responding to a question on Webmaster Help thusly:Typically, web search results dont add value to users, and since our core goal is to provide the best search results possible, we generally exclude search results from our web search index. (Not all URLs that contains things like /results or /search are search results, of course.)Cutts then goes on to point to the quality guidelines on the official Webmaster Guidelines support page that was modified to include the following bullet:Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines.Cutts further states in the 2007 post:its still good to clarify that Google does reserve the right to take action to reduce search results (and proxied copies of websites) in our own search results.So by now it should be abundantly clear that this is a practice to avoid and has been for quite some time (more than 6 years at the time of this answer).In case you need more proof, there is a post on Search Engine Land from September of 2013 featuring a video of Matt Cutts basically answering the same question as he did in 2007 but also adds a link to the Automatically generated content article on Webmaster Tools Help that basically restates all of the above, only much more succinctly than I have done.tl;drDon't do this.What you should be doing instead is making sure you have actual content pages for David Jones et al and have those in your sitemap so Google will index them. A local site search is really just another navigational tool for your users once they are on the site...it is not a destination for inbounds.
_unix.330176
I have 5 files called file1, file2, file3, file4, and file5. I am attempting to run the following command: echo contents >> file{1,2,3,4,5}. I get the following error when I run this command: -bash: file{1,2,3,4,5}: ambiguous redirect. My goal is to echo some text to multiple files in one command. How can I achieve this? Thanks in advance.
Bash Brace Shell Expansion Fail
bash;io redirection;echo;brace expansion
null
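A redirection target has to expand to a single word, which is why bash reports an ambiguous redirect; the usual shell fix is tee -a, but as a hedged illustration of the goal itself, here is the same append done in Python (filenames taken from the question):

text = "contents\n"
for name in ["file1", "file2", "file3", "file4", "file5"]:
    with open(name, "a") as f:    # "a" appends, matching >>
        f.write(text)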
_unix.82261
I just created a TrueCrypt hidden (plausible deniability) volume. I used a 32GB USB flash drive for the outer volume, and made the inner volume 20GB. Does this mean that I can safely add another ~12GB of data to the outer volume without corrupting the hidden inner volume? I'd like to store my bank statements on the outer one, so I should never need more than 1GB. When creating the inner volume, it said that it would be better to make it smaller to allow for more storage space in the outer volume, but when I finished making the inner volume, it said not to modify the outer volume under any circumstances.
Editing Truecrypt Hidden Volume's Outer Volume?
encryption;privacy;truecrypt
null
_opensource.2819
Background: In the audio processing world, most programs to compose and mix music in (collectively called DAW's from now on) are commercial and closed source. These programs can extend their functionality using some common plug-in specifications.There is one common 'free' and cross-platform specification that is widely supported (Steinberg's VST specification). There are several others, most commonly Apple's AudioUnit and Avid's RTAS- and AAX specifications. AudioUnit, RTAS and AAX are locked in to the companies' proprietary DAW platforms (Logic and Pro Tools).For whatever reason these specifications were developed, they are nearly completely identical. However, Pro Tools and Logic refuse/don't support loading plugins from any open specifications except their own.These specifications are so identical that creating a wrapper around all of these specifications is trivial and possible. This, then, has created a mess in which developers of audio plug-ins must separately distribute a lot of permutations of their plugins, but it has become common practice.Situation: I created a meta-plugin/wrapper, that allows to identify itself as any format (specification) and load any format (such that you can load VSTs in Logic, for instance). The plugin is free and licensed under the GPL.It works completely on its own. But it can, at run-time optionally load any end-user provided plug-in, or, optionally, save an emulated 'copy' of the loaded plug-in disguised as any other format. This last feature allows the unsupported format to be loaded seamlessly in any other locked-in proprietary host (but under the hood, it is still wrapped through my plug-in, just statically (not it terms of linking) and invisibly).The question is, whether this is in violation of the GPL, when the end-user provided library loaded in my GPL program is not GPL-compatible (proprietary or closed source, for instance).Notice that, the GPL program Audacity allows the same functionality - it can for instance load any VST-plugin, that may or may not be proprietary. I can even, through creative audio system routing, simulate the exact situation using a project in Audacity that can be routed through something like Logic, emulating the wrapped plug-in situation.I guess the question can be boiled down to: Can GPL hosts support loading of optionally provided non-GPL plug-ins in this specific situation, where the GPL host imitates the non-GPL plug-in in what effectively seems like one plug-in?
Hosting (potentially) non-GPL plugins
gpl;plugins
IMHO your question boils down to: can a piece of GPL-licensed code load arbitrary code under non-GPL or other licenses assuming it does not know about any of this other code ahead of time?The closest thing that comes to mind would be an OS user space such as the Linux user space. Linux does not know anything ahead of time about your program. Does its GPL license extend to your program? Since this can be a grey area for some, Linus made it clear that the GPL does not extend to user space programs.I think the same context applies here. For the sake of clarity if you want to allow or disallow the loading of non-GPL-licensed plugins by your framework, you should make this explicit such that there is no source of confusion for your users. An explicit GPL exception would be the thing I would do if it was for me.
_codereview.47638
I've tried this problem on Codeforces. Given a set of string patterns, all of which are the same length, and which consist of literal lowercase ASCII characters and ? wildcards, find a pattern that satisfies them all.I want to optimize and simplify it further and make it a more elegant solution, if possible.import sysdef main(): n = int(sys.stdin.readline()) t = sys.stdin.readlines() l = len(t[0]) result = '' if n==1: result = t[0].replace('?', 'x') else: for y in range(0,l-1): let = t[0][y] for x in range(1,n): if let == '?' and t[x][y]!= '?': let = t[x][y] if t[x][y] != let and t[x][y] != '?': result += '?' break elif x == n-1: if let == '?': result += 'x' else: result += let print resultmain()
String pattern riddle
python;programming challenge
(This answer is in Python 3. Feel free to make the necessary adjustments to run it on 2.x)IntroductionThe review posted by janos underscores the need for good names, following coding conventions. The post suggests incremental improvements, which is a step in the right direction.To make more radical improvements to your code, you need to recognize the deeper structural problems, which arise because you're not using the full flexibility of Python and because you aren't using the right data structure to simplify your algorithm.Handling the inputThe only responsibility of main()should be to collect and sanitize the input and subsequently output the result:def main(): pattern_count = int(sys.stdin.readline()) patterns = itertools.islice(sys.stdin, pattern_count) result = intersect_patterns(p.strip() for p in patterns) print(result)The calculation should be kept separate, in the intersect_patterns function.Join instead of +=It would be more elegant to separate the concatenation of the resulting string from the calculation of its contents. You can achieve this by using Python's yield keyword to create a generator whose elements can be joined like so:def intersect_patterns(lines): return ''.join(_intersect_patterns(lines))Iterating the right wayYou are making your algorithm a lot more complex by iterating over the input in the traditional line-by-line fashion when you are in fact interested in examining one column at a time. The solution is to think of the lines as rows and the characters as columns in a matrix. To iterate over the columns instead of the rows, transpose it using the built-in zip function with the * operator, as shown in this answer on StackOverflow.def _intersect_patterns(lines, wildcard='?', fill='x'): for column in zip(*lines): literals = {char for char in column if char != wildcard} if not literals: yield fill elif len(literals) == 1: yield literals.pop() else: yield wildcardThe right data structure for the jobWow, where did all the code go? It turns out that there is a data structure which can do most of the work for you: the set (which we create using a set comprehension, or {...}), because for each column, we only need to examine the unique literals, disregarding any wildcards, to determine what to put in the intersecting pattern we are calculating.There are only three possible casesThe column we are examining contains...No literals only wildcards, so we need to insert a literal (for instance x) into the output.Exactly one unique literal, so we need to insert that literal into the output.More than one unique literal, so we need to insert a wildcard into the output.We simply yield the correct character on every iteration. Our caller can then take care of assembling the result string and printing it.ConclusionBefore racing to implement a solution, think about the algorithms and data structures that might help you. For example, iterate over the data in a way that makes sense and use what the standard library has to offer.Analyze the possible scenarios by writing them down before coding them, which will help you discover the simplest solution.Separate your concerns.If you need to write if statements for special cases, you might be doing it wrong.
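A quick usage sketch for the functions above (the example strings are mine, not from the challenge input):

patterns = ["a?c", "abc", "a?d"]
print(intersect_patterns(patterns))        # -> "ab?"
print(intersect_patterns(["??", "??"]))    # -> "xx": wildcard-only columns get the fill character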
_softwareengineering.252625
I am working in an organisation with 11 scrum teams developing on the same code base. Currently, all development is done in trunk, and at the end of a sprint everything MUST be releasable, or it must be backed out (an arduous process) as a release is cut.My opinion is that while work in a sprint is (ideally) 'done' at the end of the sprint, this doesn't necessarily mean ready for release. You may be at a point where you do not have a Minimum Viable Product to release, but have some stories complete. If a story is not done, or the story is only part of a larger feature, it should be easy to keep this separate from the release ready code. At the moment it is all backed out, the release cut is taken, then it is checked back in. A huge waste of time!We use continuous integration, which is the main argument for everyone developing on trunk. Occasionally teams use a team branch, but this is currently frowned upon.I've been considering simply having a 'dev' and 'release' branch, and pushing features to release when they are an MVP, or having a branch for each story or feature, but this is very admin heavy in TFS. Have other people dealt with similar issues in the past, and what are your thoughts on the best way forward? Unfortunately, we are tied to TFS in the short term.
Managing 'done' but not releasable code in TFS
agile;scrum;team foundation server;large scale project
Looks like you are completely borked with that lack of branching. At the very least you should be developing on a Dev branch and merging completed code onto Main when your code is working and 'releasable'. This would stop the stupidity of reverting committed work if you failed to meet your deadline and re-committing it afterward. The days of using VSS are long gone!!Every team should have their own dev branch, but I can understand if you all want to work on a single Dev branch. CI should be applied to both Dev and Main branches, and extra analysis on Main too - we used to put some very long running static analysis, doc generation and testing on Main that would have slowed Dev down too much. Microsoft recommends using a Main branch (or trunk) with Dev branches and Release branches in their TFS model. (the old docs are here, though they say they are outdated.. but neglect to link to their current views)
_unix.14034
Can I just dd an Ubuntu 11.04 mini.iso to a USB flash drive and boot from it? Or what am I missing?
How can I make a bootable USB flash drive?
linux;dd
You should be able to dd if=linux.iso of=/dev/sdx, where x is the letter for your USB device. Don't use /dev/sdx1, just /dev/sdx. It has worked for me (not with Ubuntu, though). Beware that this will destroy any data previously on the flashdrive.
_unix.243829
I thought this was easy even for a beginner like me, but I'm stuck - piping a text file like this: cat file1.txt | sed '/^[0-9].*[0-9]$/d' > file2.txtThis regex catches the lines in a text editor, and it works when I use it to delete all blank lines in the same file, so no problem with (Linux/Windows) newline format I guess.I wonder why this does not delete those lines, or how this can be done otherwise?
Delete lines beginning and ending with a digit
sed;tr
null
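In case it helps to cross-check the pattern outside of sed, a Python sketch of the same filter; the optional trailing carriage return is an assumption about CRLF line endings, which is a common reason a ^...$ pattern matches in an editor but not on the command line:

import re

pattern = re.compile(r'^[0-9].*[0-9]\r?$')    # same expression as in the question, tolerating a trailing CR
with open('file1.txt') as src, open('file2.txt', 'w') as dst:
    for line in src:
        if not pattern.match(line.rstrip('\n')):
            dst.write(line)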
_unix.102051
I type this: export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk so that I can access that directory by typing cd $JAVA_HOME but every time I close and open the terminal I have to do this again and again. Is there a way of saving this? I did some research but am not understanding how you could add it to the bash_profile.I'm on the latest Fedora. please explain as basic as you can as I'm a complete newbie! :)
exported variable disappears when I open a new terminal
bash;environment variables
You need to add your export line in /your/home/directory/.bashrc, which is the Bash initialization file sourced when you start an interactive shell.If you're using the GUI to edit the file, you should note that its name begins with a . so it's hidden in the GUI by default. To make it visible, assuming you're using Nautilus, you can press CTRL+H. If you're using some other file manager, look in its documentation for how you can show hidden files.Simply edit your .bashrc and append your export line at its end. This should work when you open and close the terminal and should also be persistent across reboots.
_codereview.15349
After a week of searching and testing each approach via Stopwatch, I came to this method using the fastest way possible to capture screen into a bitmap and then to a byte[].Is it possible to make it any faster using parallel features or any idea I have not taken into account? (As I am a newbie, 4 months of self learning.)I mixed two or three versions of the copying function (portion of screen to memory then convert captured/crop into byte[]). I might have left unnecessary lines of code, I would like to refine it (if and where needed).unsafe public static Bitmap NatUnsfBtmp(IntPtr hWnd, Size Ms){ Stopwatch swCap2Byte = new Stopwatch(); swCap2Byte.Start(); WINDOWINFO winInfo = new WINDOWINFO(); bool ret = GetWindowInfo(hWnd, ref winInfo); if (!ret) { return null; } int height = Ms.Height; int width = Ms.Width; if (height == 0 || width == 0) return null; Graphics frmGraphics = Graphics.FromHwnd(hWnd); IntPtr hDC = GetWindowDC(hWnd); //gets the entire window //IntPtr hDC = frmGraphics.GetHdc(); -- gets the client area, no menu bars, etc.. System.Drawing.Bitmap tmpBitmap = new System.Drawing.Bitmap(width, height, frmGraphics); Bitmap bitmap = (Bitmap)Clipboard.GetDataObject().GetData(DataFormats.Bitmap); Graphics bmGraphics = Graphics.FromImage(tmpBitmap); IntPtr bmHdc = bmGraphics.GetHdc(); BitBlt(bmHdc, 0, 0, width, height, hDC, 0, 0, TernaryRasterOperations.SRCCOPY); swCap2Byte.Stop(); string swCopiedFF = swCap2Byte.Elapsed.ToString().Remove(0, 5); swCap2Byte.Restart(); #region <<=========== CopytoMem->ByteArr ============>> BitmapData bData = tmpBitmap.LockBits(new Rectangle(new Point(), Ms), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb); MyForm1.MyT.Cap.TestBigCapturedBtmp = tmpBitmap; // number of bytes in the bitmap int byteCount = bData.Stride * tmpBitmap.Height; byte[] bmpBytes = new byte[byteCount]; // Copy the locked bytes from memory Marshal.Copy(bData.Scan0, bmpBytes, 0, byteCount); byte[] OrgArr = bmpBytes;//File.ReadAllBytes(testFcompScr.bmp); // don't forget to unlock the bitmap!! swCap2Byte.Stop(); string SwFCFscr = swCap2Byte.Elapsed.ToString().Remove(0, 5); System.IO.File.WriteAllBytes(MyForm1.AHItemsInitialDir + testBig4BenchViaChaos.bar, OrgArr); System.Windows.Forms.MessageBox.Show( Copied @ +swCopiedFF + Environment.NewLine+Converted @ + SwFCFscr); btmp.UnlockBits(bData); if(System.IO.File.ReadAllBytes(MyForm1.AHItemsInitialDir + testBig4BenchViaChaos.bar)== OrgArr) System.Windows.Forms.MessageBox.Show(OK); else System.Windows.Forms.MessageBox.Show(Not same); if (BigOrsmall == Big) { MyT.Cap.TestBigCapturedBtmp = btmp; MyT.CapSave.TestBigCaptSavedAsBar = OrgArr; File.WriteAllBytes(AHItemsInitialDir + testBig4BenchViaChaos.bar, OrgArr); } else if (BigOrsmall == Small) { MyT.Cap.TestSmallCapturedBtmp = btmp; File.WriteAllBytes(AHItemsInitialDir + testSmall4BenchViaChaos.bar, OrgArr); MyT.CapSave.TestSmallCaptSavedAsBar = OrgArr; } TestedCap_DoPutInPicBox(PicBox_CopiedFromScreen); #endregion bmGraphics.ReleaseHdc(bmHdc); ReleaseDC(hWnd, hDC); return tmpBitmap;}
Comparing screen captures using unsafe / API calls
c#;performance;image
null
_unix.284149
I am trying to extract the compilation date of a binary using a Linux command (or from C++, which would be fine too). I am using: stat -c %z ./myProgram.bin However, if I copy myProgram.bin to another place, via ssh for example, the stat command is basically giving me the date of the copy. How can I get the real compilation date? Thanks.
Get compilation date
linux;command line;c++
Thomas Dickey's answer addresses the issue in general, for any (ELF) binary. Given the way your question's phrased, you might find the __DATE__ and __TIME__ predefined macros useful; they allow the compilation date and time to be referred to within a program (so a program knows its own compilation date and time).Here's a quick example:#include <stdio.h>int main(int argc, char **argv) { printf(This program was compiled on %s at %s.\n, __DATE__, __TIME__); return 0;}
_webapps.106050
I have a google sheet with 1300 lines. I want to enable users to see the list and be able to enter filter text, so only matching records will be displayed.I can send the link to the sheet, but then when a user enters a filter, all other users will see it, and it will override their filter.
How do I allow users to see and filter a data list without allowing them to edit the sheet?
google spreadsheets
null
_unix.304671
Background: I have a FreeNas box with a boot SSD and a 2x 3TB HDD. I know only enough linux and FreeNas to get me in trouble and must have gotten it up and running a while ago. I transferred data to the drive (somehow) and backed it up to CrashPlan (since disappeared). I moved the box to the garage to get it out of the middle of the floor and forgot about it.Recently, I went to retrieve data off the hard drive by pulling it out of the box and putting it in my Windows box. The drive was seen by disk management with two partitions, but I was unable to assign a drive letter (disk1). Starting to panic, I grabbed the other drive and put it in the Windows box to find that Windows did see it and assign it a drive letter, but it was empty (disk2).I cloned the drive that I couldn't mount (disk1) to the drive Windows could mount (disk2) so I could go about recovering the partition. I loaded up easeus to recover the gpt partition and found that it said invalid ZFS file system. I grabbed the SSD from the FreeNas box, put it in the computer I'm working on and booted FreeNas. I was able to get in and saw the FreeNas saw a pool, but it stated that 2.7TB were empty, which is not right.Here is what I know. If I copied the original data to the FreeNas pool, it would have been setup for disk1 to be mirrored to disk2, so I don't think I destroyed any parity information during the clone. I don't think disk2 had any data, unless the partition was damaged and it stated it was empty when it wasn't. I have the original FreeNas box, but at this point, I don't remember which SATA port each drive was plugged in to (if that makes a difference). I REALLY would like to get this data as it is pictures of my wedding and when we were dating. If I need to leave this to a professional, please recommend someone and tell me what I need to tell them (is my zfs file system invalid?).
Invalid ZFS file system has no data
zfs;freenas
null
_unix.168611
I'm using a Dell Latitude notebook with Xubuntu, which is working really well. But there's this one issue: the missing context menu key, which is usually between Ctrl and AltGr. I already found a way to press the right key, but it only opens the context menu at my cursor's position and not, for example, for the focused text in Firefox. Is there some way to open the context menu for the selected item? (Like the menu key on every normal keyboard would do.) Cheers.
Bind right click context menu to key
keyboard shortcuts;xfce;menu
null
_unix.303298
I recently got GPG setup on my Mac:brew install gpg;brew install gpg-agent;And generated a key pair with a passphrase.I added use-agent to my ~/.gnupg/gpg.conf and allow-preset-passphrase to ~/.gnupg/gpg-agent.confI successfully decrypted a file using:gpg --use-agent --output example.txt --decrypt example.gpgwhich prompted me to enter my private key passphrase. The trouble is, when decrypting subsequent files, gpg-agent again prompts me for this passphrase.Currently, my passphrase is a really long string which is near impossible to type each time. I would like gpg to behave like ssh-agent wherein the passphrase is stored securely and remembered forever (even between sessions).I understand that this might decrease security if my laptop was comprised, but this inconvenience would probably deter me from using gpg all together.I'm not sure if:default-cache-ttl 31536000max-cache-ttl 31536000are the options I'm looking for to store between reboots There's sadly no man entry for gpg-agent.How can I make gpg/gpg-agent remember my private key passphrase forever?
Make GPG Agent Permanently Store Passphrase
gpg;gpg agent
null
_unix.348327
When using the terminal tool ip, there are a number of flags for every interface. Example: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue What is the meaning of M-DOWN? What command should be used to bring it up or down?
Using ip, what does M-DOWN mean?
terminal;ip;x86;interface
null
_cs.45404
The Bitcoin-solution can be described as [...] a solution to the double-spending problem using a peer-to-peer network. (official Bitcoin paper, PDF, abstract, first page).Now I wonder if a similar technology can be used to collect, send and receive other data with the goal to create unique content. That is data that you either have or you don't have (like a physical object).A follow-up question is if these objects can be only created by a central instance but once distributed there is no control from this central instance anymore. Like real money: It gets minted but once it is public you don't have to register every cash transaction with the your state.One application could be a digital trading card game where you either have a card or not. You can either trade new cards directly with other humans or you get (buy) new ones from the creator who holds the monopoly over creating and releasing cards.Did I miss anything in the crypto-currency tech that prevents this scenario?
Can systems that prevent double-spending (e.g. crypto-currencies) be used to attach other unique data?
cryptography;computer games;peer to peer
null
_unix.249467
I have created a script to check if I have installed Node, Npm, Bower and Susy but when I execute it I get an error which I can not solve.This is the script: isInstalled(){ command -v $1 >/dev/null 2>&1 || command -v $2 >/dev/null 2>&1 || { echo >&2 I require $1 but it's not installed. Aborting.; return false;} }installNode() { if [[ !isInstalled('node', 'nodejs') ]]; then echo Node is not installed. Installing... curl https://www.npmjs.org/install.sh | sh fi}installBower(){ if [[ !isInstalled('npm') ]]; then echo Npm is not installed. Installing... curl -L https://npmjs.org/install.sh | sh else echo Npm is installed. Checcking Bower... if [[ !isInstalled('bower') ]]; then echo Bower is not installed. Installing... npm install -g bower fi}installSusy(){ if [[ !isInstalled('npm') ]]; then echo Npm is not installed. Installing... curl -L https://npmjs.org/install.sh | sh else echo Npm is installed. Checcking Bower... if [[ !isInstalled('bower') ]]; then echo Susy is not installed. Installing... npm install susy fi}This is the error message:begin.sh: 6: begin.sh: Syntax error: ( unexpected (expecting then)I know this question is quite stupid and that's because my lack of experience on bash scripting. I also tried googling before posting but I guess the error is so basic that I can't find an answer. Thanks for everything.
Can not execute bash script (unexpected element '(' )
shell;scripting;npm
Functions in bash are called just like commands, and not like functions in other languages. Instead of isInstalled('node', 'nodejs'), do: isInstalled 'node' 'nodejs' And the if condition would look like: if ! isInstalled 'node' 'nodejs'; then ...
_softwareengineering.164606
I have been noticing for a long time on Stack Overflow that most users recommend using PDO instead of mysql_*, because PDO is more secure than mysql_*. But my question is: will websites which are already running with mysql_* stop working? Or what exactly does deprecating mean here? So should we never have used mysql_*? From which PHP version is it deprecated?
Is mysql_* deprecated after PDO was introduced?
php;mysql;deprecation
See PHP.net page FAQ. It answers your question and gives migration advice.Your code won't suddenly stop working unless when PHP remove the functionality, you upgrade your PHP version. The FAQ page advice recommends you write new code using one of the alternatives. If it's not a massive job, it could be worth considering switching.. that depends on your project though.
_codereview.115988
The below script will compare a set of arrays according to similarities between their key's values. For example, if the first 4 keys values of an array are equal to another array's first 4 keys values, they are equal and consists a cluster. Here is my code:<?php$arrays = [array('a'=>1, 'b'=>2, 'c'=>3, 'd'=>4),array('a'=>1, 'b'=>2, 'c'=>3, 'd'=>4),array('a'=>1, 'b'=>2, 'c'=>3, 'd'=>4),array('a'=>1, 'b'=>2, 'c'=>4, 'd'=>3),];$result = [];//get the keys of a sub-array that is inside $arrays, to be used later$keys = array_keys($arrays[0]);for($i=0; $i < sizeof($arrays); $i++){ $sa = array(); // to store similar arrays indexes for($k=$i+1; $k < sizeof($arrays); $k++){ $similar = false; //compare the values of keys in the two arrays. Just compare the first 4 keys (as the user's desire) for($j=0; $j < 4; $j++){ //check if the values are similar, if they are, assign $similar to true, and assign $j=3 to end the loop, (a bit of laziness here) ($similar = $arrays[$i][$keys[$j]] == $arrays[$k][$keys[$j]] ? true : false) ? null : ($j=3); } // check if the key (which represents an index in $arrays) is in $sa or not, if not, push it. $similar ? (in_array($i, $sa) ? null : array_push($sa, $i) && in_array($k, $sa) ? null : array_push($sa, $k)) : null; //if $similar is true, make $i jumps to the $k index (saving time) $similar ? $i=$k : null; } //if $sa not empty, push it to $result empty($sa) ? null : ($result[] = $sa);}/* // at this stage, $result includes all the similar arrays// so we need another loop to push the unique arrays to $result// just check if an index of $arrays is in an sub-array of $result, if not, push it as an array of one record */for($j=0; $j < sizeof($arrays); $j++){ $f = false; for($i=0; $i < sizeof($result); $i++){ in_array($j, $result[$i]) ? $f = true : null; } if(!$f){ $sa = array(); array_push($sa, $j); array_push($result, $sa); }}If the result was as follows:array(2) { [0]=> array(3) { [0]=> int(0) [1]=> int(1) [2]=> int(2) }, [1]=> array(1) { [0]=> int(3) } }this means that $arrays has two clusters of sub-arrays, where $arrays[0], $arrays[1], and $arrays[2] are similar (cluster 1), then $arrays[3] is unique (cluster 2).Does this code have vulnerabilities? Could it be optimized?
Cluster arrays according to similarity of key values
php;array;clustering
null
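The reviewed code is PHP, but since the question asks about optimization: the pairwise comparison loops can be replaced by grouping on the first four values used as a dictionary key, which is linear in the number of arrays. A language-agnostic sketch of that idea in Python (my illustration, not a drop-in replacement):

from collections import defaultdict

def cluster(arrays, num_keys=4):
    # Group arrays whose first num_keys values (in key order) are equal;
    # returns lists of indexes into arrays, analogous to the $result structure.
    groups = defaultdict(list)
    for index, row in enumerate(arrays):
        signature = tuple(list(row.values())[:num_keys])
        groups[signature].append(index)
    return list(groups.values())

data = [
    {'a': 1, 'b': 2, 'c': 3, 'd': 4},
    {'a': 1, 'b': 2, 'c': 3, 'd': 4},
    {'a': 1, 'b': 2, 'c': 3, 'd': 4},
    {'a': 1, 'b': 2, 'c': 4, 'd': 3},
]
print(cluster(data))    # -> [[0, 1, 2], [3]]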
_codereview.139007
I built this web app to present a random group of questions for quizzes and tests. The page opens with random questions. Clicking anywhere shows (only) the spinner div. Clicking anywhere again brings up new questions.One thing is bothering me, though. I have succeeded in moving all the JS out of the body except for this:<div id=click onclick=location.reload();>I've tried using this in the script section, but it hasn't worked for me:document.getElementById('click').onclick = location.reload();Looks like it should do the same thing, but it doesn't, so I'm out of ideas. Other feedback is welcome, too.jsFiddle<!DOCTYPE html><html><head> <meta charset=utf-8> <title>Random Test Questions</title> <!-- Mobile viewport--> <meta name=viewport content=width=device-width, height=device-height,initial-scale=1.0, user-scalable=no> <script language=javascript> // change questions here -- in quotes, comma separated function setUP() { var questionSets = [ [Set 1 Question 1, Set 1 Question 2, Set 1 Question 3, Set 1 Question 4, Set 1 Question 5], [Set 2 Question 1, Set 2 Question 2, Set 2 Question 3, Set 2 Question 4, Set 2 Question 5], [Set 3 Question 1, Set 3 Question 2, Set 3 Question 3, Set 3 Question 4, Set 3 Question 5], [Set 4 Question 1, Set 4 Question 2, Set 4 Question 4, Set 4 Question 4, Set 4 Question 5], [Set 5 Question 1, Set 5 Question 2, Set 5 Question 5, Set 5 Question 4, Set 5 Question 5] ]; for (var setIndex = 0; setIndex < questionSets.length; ++setIndex) { var questionSet = questionSets[setIndex]; var questionIndex = Math.floor(Math.random() * questionSet.length); var question = questionSet[questionIndex]; var selector = '#questions div:nth-child(' + (setIndex + 1).toString() + ')'; document.querySelector(selector).innerHTML = question; //alternative method follows -- comment out above two lines, uncomment below two lines //var setId = 'set_' + (setIndex + 1).toString(); //document.getElementById(setId).innerHTML = question; } } function showQuestions() { document.getElementById('spinner').style.display = none; document.getElementById('click').style.display = none; document.getElementById('questions').style.display = block; } function showSpinner() { document.getElementById('questions').style.display = none; document.getElementById('click').style.display = block; document.getElementById('spinner').style.display = block; } function startTimer(duration, display) { var timer = duration, minutes, seconds; setInterval(function() { minutes = parseInt(timer / 60, 10); seconds = parseInt(timer % 60, 10); minutes = minutes < 10 ? 0 + minutes : minutes; seconds = seconds < 10 ? 
0 + seconds : seconds; display.textContent = minutes + : + seconds; if (--timer < 0) { timer = 0; document.getElementById('time').style.backgroundColor = red; } }, 1000); } window.onload = function() { setUP(); showQuestions(); var minutesLeft = 239, //Change to minutes you need -- counted in seconds -- minus one second display = document.querySelector('#time'); startTimer(minutesLeft, display); document.getElementById('questions').onclick = setUP; document.getElementById('questions').onclick = showSpinner; }; </script> <style> #questions div { font-family: Arial, Helvetica, sans-serif; font-size: 7vh; margin-top: 6vh; border: 1px solid gray; padding: 1vh; width: 100%; } .questions { background-color: #ececff; } .time { background-color: #4cdc4c; text-align: center; } #spinner { height: 30vw; width: 30vw; position: absolute; top: 12vh; margin-left: 35vw; overflow: hidden; -webkit-animation: rotation .6s infinite linear; -moz-animation: rotation .6s infinite linear; -o-animation: rotation .6s infinite linear; animation: rotation .6s infinite linear; border-left: 3vw solid #ececff; border-right: 3vw solid #ececff; border-bottom: 3vw solid #ececff; border-top: 3vw solid #4cdc4c; ; border-radius: 100%; } @-webkit-keyframes rotation { from { -webkit-transform: rotate(0deg); } to { -webkit-transform: rotate(359deg); } } @-moz-keyframes rotation { from { -moz-transform: rotate(0deg); } to { -moz-transform: rotate(359deg); } } @-o-keyframes rotation { from { -o-transform: rotate(0deg); } to { -o-transform: rotate(359deg); } } @keyframes rotation { from { transform: rotate(0deg); } to { transform: rotate(359deg); } } #click { height: 100vh; width: 100vw; } </style></head><body> <div id=questions> <div id=set_1 class=questions>First question</div> <div id=set_2 class=questions>Second question</div> <div id=set_3 class=questions>Third question</div> <div id=set_4 class=questions>Fourth question</div> <div id=set_5 class=questions>Fifth question</div> <div id=time class=time>04:00</div> </div> <div id=click onclick=location.reload();> <div id=spinner></div> </div></body></html>
Displaying quiz questions in a web app
javascript;beginner;quiz
null
_codereview.48
I am currently developing a custom CMS being built on top of Codeigniter and was wondering if you can spot any flaws in my page fetching model code. The page fetching model is not entirely complete but the main functionality for retrieving a page is done, as well as retrieving modules assigned to a page (a module is really just a widget).Can this model be better in some parts, perhaps in relation to the joins I am doing? although not really joins, but multiple queries to pull out bits of related info the pages like modules and media.<?php class Mpages extends CI_Model { public function __construct() { parent::__construct(); } public function fetch_page($page_slug = 'home') { $db = $this->db; $query = $db ->where('page_status', 1) ->where('page_slug', strtolower($page_slug)) ->get('pages') ->result_array(); $page_id = $query[0]['id']; $query['modules'] = $db ->select('modules.module_name, modules.module_slug, modules.id moduleid') ->where('page_id', $page_id) ->join('pages_modules lpm', 'moduleid = lpm.module_id') ->order_by('module_order', 'asc') ->get('modules') ->result_array(); /*$query['media'] = $db ->select('lucifer_media.media_file_name, lucifer_media.media_file_extension, lucifer_media.media_directory') ->join('lucifer_pages_media', 'lucifer_pages_media.page_id = '.$page_id.'') ->get('lucifer_media') ->result_array();*/ if ($query) { return $query; } else { return false; } } public function fetch_navigation() { $result = $this->db->order_by(nav_order, asc)->where('published', 1)->get('navigation')->result_array(); return $result; } public function fetch_layout($id) { $result = $this->db->where('id', $id)->get('layouts')->result_array(); return $result[0]; } }?>
Critique My Codeigniter Custom CMS Pages Model
php;codeigniter;mvc
aaaah, a CodeIgniter fella :-)I'm just working on a CI project myself and already implemented some of the optimization you could use for your CMS as well... so let's have a look:for as little overhead as possible, try implementing lazy-loading of your files (libraries, models...)for caching purposes, you can use KHCache - a library that allows you to cache parts of the website instead of full pageinstead of always doing $this->db->..., you can create a helper function, for instance function _db() and then simply do _db()->where...also, you can optionally create a helper function to give you the results array automatically, so ->result_array() will not be neccessary anymore: function res() {} ... $query = res(_db()->where...);now, for the code :-)$query = $db ->where('page_status', 1) ->where('page_slug', strtolower($page_slug)) ->get('pages') ->result_array();$page_id = $query[0]['id'];here, you seem to be selecting all values from DB, while in need of a single first ID - try limiting number of results or this will create overhead in your database$db->where...->limit(1);the second query could probably use a LEFT JOIN instead of a regular JOIN, although I leave it to you to decide (the JOIN approach might not list everything you need)$db-select...->join('pages_modules lpm', 'moduleid = lpm.module_id', 'left')I guess that's all... just remember to put correct indexes on your SQL fields and use the EXPLAIN statement to check for bottlenecksgood luck!
_scicomp.26504
I'm studying the Fisker-KPP equation on the line (and in $]0, 100[$ numerically):$$\partial_t u = \Delta_{xx} u + u(1-u)$$I notice a behavior I don't understand with a smooth initial condition $u_0$ that has the following form:$$u_0(x) =\left\{\begin{aligned}&1 \quad \mbox{if} \quad |x-50| < 10 \\&\exp( 1/3^2 - 1/||x-50|-13|^2)( 1 - \exp( -1/||x-50|-10|^2)) \quad \mbox{if} \quad 10 < |x-50| < 13 \\&0 \quad \mbox{if} \quad |x-50| > 13\end{aligned}\right.$$I'm using as a numerical scheme the Strang splitting, and a code taken from here:solve a scalar diffusion-reaction equation: phi_t = kappa phi_{xx} + (1/tau) R(phi)using operator splitting, with implicit diffusionM. Zingale#from __future__ import print_functionimport numpy as npfrom scipy import linalgfrom scipy.integrate import ode#import sysimport matplotlib.pyplot as pltdef frhs(t, phi, tau): reaction ODE righthand side return 0.25*phi*(1.0 - phi)/taudef jac(t, phi): return Nonedef react(gr, phi, tau, dt): react phi through timestep dt phinew = gr.scratch_array() for i in range(gr.ilo, gr.ihi+1): r = ode(frhs,jac).set_integrator(vode, method=adams, with_jacobian=False) r.set_initial_value(phi[i], 0.0).set_f_params(tau) r.integrate(r.t+dt) phinew[i] = r.y[0] return phinewdef diffuse(gr, phi, kappa, dt): diffuse phi implicitly (C-N) through timestep dt phinew = gr.scratch_array() alpha = kappa*dt/gr.dx**2 # create the RHS of the matrix R = phi[gr.ilo:gr.ihi+1] + \ 0.5*alpha*( phi[gr.ilo-1:gr.ihi] - 2.0*phi[gr.ilo :gr.ihi+1] + phi[gr.ilo+1:gr.ihi+2]) # create the diagonal, d+1 and d-1 parts of the matrix d = (1.0 + alpha)*np.ones(gr.nx) u = -0.5*alpha*np.ones(gr.nx) u[0] = 0.0 l = -0.5*alpha*np.ones(gr.nx) l[gr.nx-1] = 0.0 # set the boundary conditions by changing the matrix elements # homogeneous neumann d[0] = 1.0 + 0.5*alpha d[gr.nx-1] = 1.0 + 0.5*alpha # dirichlet #d[0] = 1.0 + 1.5*alpha #R[0] += alpha*0.0 #d[gr.nx-1] = 1.0 + 1.5*alpha #R[gr.nx-1] += alpha*0.0 # solve A = np.matrix([u,d,l]) phinew[gr.ilo:gr.ihi+1] = linalg.solve_banded((1,1), A, R) return phinewdef est_dt(gr, kappa, tau): estimate the timestep # use the proported flame speed s = np.sqrt(kappa/tau) dt = gr.dx/s return dtclass Grid(object): def __init__(self, nx, ng=1, xmin=0.0, xmax=1.0, vars=None): grid class initialization self.nx = nx self.ng = ng self.xmin = xmin self.xmax = xmax self.dx = (xmax - xmin)/nx self.x = (np.arange(nx+2*ng) + 0.5 - ng)*self.dx + xmin self.ilo = ng self.ihi = ng+nx-1 self.data = {} for v in vars: self.data[v] = np.zeros((2*ng+nx), dtype=np.float64) def fillBC(self, var): if not var in self.data.keys(): sys.exit(invalid variable) vp = self.data[var] # Neumann BCs vp[0:self.ilo+1] = vp[self.ilo] vp[self.ihi+1:] = vp[self.ihi] def scratch_array(self): return np.zeros((2*self.ng+self.nx), dtype=np.float64) def initialize(self): initial condition phi = self.data[phi] length1 = 10. length2 = 13. epsilon = length2 - length1 phi[:] = np.maximum( \ np.exp( 1./epsilon**2 - 1./(np.abs(np.abs(self.x-50.)-length2))**2) * \ ( 1. 
- np.exp( -1./(np.abs(np.abs(self.x-50.)-length1))**2)) * \ ( self.x > 50.-length2 ) * \ ( self.x < 50.+length2 ) \ , \ ( self.x >= 50.-length1 ) * \ ( self.x <= 50.+length1 ) \ )def interpolate(x, phi, phipt): find the x position corresponding to phipt idx = (np.where(phi >= 0.5))[0][0] xs = np.array([x[idx-1], x[idx], x[idx+1]]) phis = np.array([phi[idx-1], phi[idx], phi[idx+1]]) xpos = 0.0 for m in range(len(phis)): # create Lagrange basis polynomial for point m l = None n = 0 for n in range(len(phis)): if n == m: continue if l == None: l = (phipt - phis[n])/(phis[m] - phis[n]) else: l *= (phipt - phis[n])/(phis[m] - phis[n]) xpos += xs[m]*l return xposdef evolve(nx, kappa, tau, tmax, dovis=1, return_initial=0): the main evolution loop. Evolve phi_t = kappa phi_{xx} + (1/tau) R(phi) from t = 0 to tmax # create the grid gr = Grid(nx, ng=1, xmin = 0.0, xmax=100.0, vars=[phi, phi1, phi2]) # pointers to the data at various stages phi = gr.data[phi] phi1 = gr.data[phi1] phi2 = gr.data[phi2] # initialize gr.initialize() phi_init = phi.copy() # runtime plotting if dovis == 1: plt.ion() t = 0.0 while t < tmax: dt = est_dt(gr, kappa, tau) if t + dt > tmax: dt = tmax - t # react for dt/2 phi1[:] = react(gr, phi, tau, dt/2) gr.fillBC(phi1) # diffuse for dt phi2[:] = diffuse(gr, phi1, kappa, dt) gr.fillBC(phi2) # react for dt/2 -- this is the updated solution phi[:] = react(gr, phi2, tau, dt/2) gr.fillBC(phi) t += dt if dovis == 1: plt.clf() plt.plot(gr.x, phi) plt.grid() plt.xlim(gr.xmin,gr.xmax) plt.ylim(0.0,1.0) plt.title(Reaction-Diffusion, $t = {:3.2f}$.format(t)) plt.draw() plt.pause(0.1) if return_initial == 1: return phi, gr.x, phi_init else: return phi, gr.xkappa = 1.0tau = 0.25nx = 256tmax1 = 1.0phi1, x1 = evolve(nx, kappa, tau, tmax1)As far as I can tell, the initial condition being of class $\cal{C}^{\infty}$, and $1$ being stable, the solution should remain $1$ where the initial condition is $1$. But this is not what I observe.Is this a numerical artefact?
Growing error from a smooth initial condition for Fisher KPP equation
parabolic pde;operator splitting
null
_softwareengineering.269653
I have an iOS app that uses the VLCKit framework for its video player function. Today I got an email from the creator of VLC, and the email states this: According to the LGPLv2.1 VLCKit and libvlc are licensed under, I hearby request the source code for our libraries. Of course I want to comply with this request, but what I'm not entirely sure about is what I should comply with. He wants the source code to his own libraries. Obviously he would already have the source code to his own library, so I'm assuming there's some other purpose in him asking this. Is it to see if I have modified/changed it in any way? I haven't, so do I zip up the source files that I downloaded from his website last year and send them to him in an email? My app already has a link to the source code on VLC's website in the help/about section. I thought this was sufficient.
How to comply with LGPL 2.1 source-code request?
licensing;legal;lgpl
Your question appears to have two parts:

1. How to comply with an LGPL-backed source request.
2. Why the authors of a library you included would request their own source.

Source distribution mechanics

The first question is pretty mechanical and fairly straightforward. Namely: tar / zip up the files that were used and send them to the requestor. It makes no difference who the requesting person is. You provide the source, as requested.

If you were providing the source via FTP, you could verify the FTP repository was working and have them retrieve the source from there. It's possible that not all variations of the GPL1 licenses will support that approach. The safest approach for distribution or conveyance is directly sending the source.

Rationale of request from library author

Part of providing Free software (that's Free as in Freedom) means following up and making sure that downstream consumers of the Free software are also complying with the terms of the license.

It's one thing to put up an FTP link or provide a disclaimer of "source available upon request". But it's another level to actually verify that the FTP links do provide the source or that the source is actually available when requested.

It sounds like the creator of the library you used wanted to verify that you were complying with the terms of the license. They (obviously) didn't need their own source code back. They may have been concerned that you made modifications without re-releasing them, too. Given the size of VLCKit, I don't think that was the case. The most likely answer then is that they want to make sure you're complying with the terms of the *GPL1 licensing that was used.

And based upon your follow-up comment:

I asked and he replied. Just zip up the source that I used and send it to him. No problem.

It sounds like they were making sure that you were doing your part in the Free software movement.

1 I'm writing under the presumption that other packages were used that were also either LGPL, AGPL, or GPL licensed.
_softwareengineering.10672
Ever since my very first programming class in high school, I've been hearing that string operations are slower, i.e. more costly, than the mythical "average" operation. What makes them so slow? (This question is left intentionally broad.)
Why are strings so slow?
computer science;strings
The average operation takes place on primitives. But even in languages where strings are treated as primitives, they're still arrays under the hood, and doing anything involving the whole string takes O(N) time, where N is the length of the string.

For example, adding two numbers generally takes 2-4 ASM instructions. Concatenating (adding) two strings requires a new memory allocation and either one or two string copies, involving the entire string.

Certain language factors can make it worse. In C, for example, a string is simply a pointer to a null-terminated array of characters. This means that you don't know how long it is, so there's no way to optimize a string-copying loop with fast move operations; you need to copy one character at a time so you can test each byte for the null terminator.
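To make the O(N) claim concrete, here is a small illustrative timing sketch in Python (my own addition; the exact numbers depend on the runtime, but the trend is what matters): a single concatenation of two strings takes time proportional to their length, while adding two integers stays constant.

import timeit

int_time = timeit.timeit("a + b", setup="a, b = 12345, 67890", number=100000)
print("int addition: %.4f s for 100k adds" % int_time)

for n in (10**5, 10**6, 10**7):
    # Concatenation must allocate a new buffer and copy both inputs: O(N).
    t = timeit.timeit("a + b", setup="a = 'x' * %d; b = 'y' * %d" % (n, n), number=100)
    print("concat of two %8d-char strings: %8.1f us each" % (n, t / 100 * 1e6))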
_codereview.57878
Here is a short and simple Ajax method that returns True or False if an entity exists in a database via a stored procedure that returns just Y or N (the details of this entity and database are not relevant to my question though). This is the first time I've used the C# using() statement, and was wondering if anyone would be kind enough to review this and give me feedback.

[WebMethod]
public string ValidateEntity(string EntityType, string EntityName)
{
    string connstr = (from c in Companys
                      where c.Name.Equals(company, StringComparison.OrdinalIgnoreCase)
                      select c.ConnectionString).FirstOrDefault();

    if (connstr == null)
    {
        return "False";
    }

    using (SqlConnection conn = new SqlConnection(connstr))
    {
        using (SqlDataAdapter da = new SqlDataAdapter())
        {
            using (da.SelectCommand = new SqlCommand("ValidateEntity", conn))
            {
                da.SelectCommand.CommandType = CommandType.StoredProcedure;
                da.SelectCommand.Parameters.AddWithValue("@EntityType", EntityType);
                da.SelectCommand.Parameters.AddWithValue("@EntityName", EntityName);

                using (DataSet ds = new DataSet())
                {
                    da.Fill(ds, "result_name");
                    DataTable dt = ds.Tables["result_name"];
                    if (dt.Rows.Count > 0)
                    {
                        if (dt.Rows[0]["Valid"].ToString() == "Y")
                        {
                            return "True";
                        }
                    }
                }
            }
        }
    }

    return "False";
}
Determining if an entity exists in a database via a stored procedure
c#;ajax;validation;stored procedure
null
_unix.325360
Question: /var is mounted twice; is this normal for a system that uses Docker? df -m | grep var only shows it mounted at /var. Versions are: RHEL 7.2 Maipo, docker-engine-1.12.1-1.el7.centos.x86_64 and docker-engine-selinux-1.12.1-1.el7.centos.noarch.

UPDATE: Maybe this is normal? Can someone confirm? From the OS perspective, the same filesystem mounted read-write twice doesn't look right to me.

https://github.com/docker/docker/issues/16884

This should not be an issue. /var/lib/docker/devicemapper is a bind mount onto itself.
Is it normal to have duplicate /var mount if using docker?
rhel;mount;docker;xfs
null
_cstheory.14153
I know that it's impossible to decide $\beta$-equivalence for the untyped lambda calculus. Quoting Barendregt, H. P. The Lambda Calculus: Its Syntax and Semantics. North Holland, Amsterdam (1984):

If A and B are disjoint, nonempty sets of lambda terms which are closed under equality, then A and B are recursively inseparable. It follows that if A is a nontrivial set of lambda terms closed under equality, then A is not recursive. So, we cannot decide the problem M=x? for any particular M. Also, it follows that Lambda has no recursive models.

If we have a normalizing system, such as System F, then we can decide $\beta$-equivalence from outside by reducing the two given terms and comparing whether their normal forms are the same or not. However, can we do it from inside? Is there a System F combinator $E$ such that for two combinators $M$ and $N$ we have $E M N = \mbox{true}$ if $M$ and $N$ have the same normal form, and $E M N = \mbox{false}$ otherwise? Or can this be done at least for some $M$s, i.e. can we construct a combinator $E_M$ such that $E_M N$ is true iff $N\equiv_\beta M$? If not, why?
Is it possible to decide $\beta$-equivalence within System F (or another normalizing typed -calculus)?
lo.logic;computability;lambda calculus;normalization;decidability
No, it's not possible. Consider the following two inhabitants of the type $(A \to B) \to (A \to B)$. $$\begin{array}{l}M = \lambda f.\;f \\N = \lambda f.\;\lambda a.\; f\;a\end{array}$$These are distinct $\beta$-normal forms, but cannot be distinguished by a lambda-term, since $N$ is an $\eta$-expansion of $M$, and $\eta$-expansion preserves observational equivalence in a pure typed lambda calculus. Cody asked what happens if we mod out by $\eta$-equivalence, also. The answer is still negative, because of parametricity. Consider the following two terms at the type $(\forall \alpha.\;\alpha \to \alpha) \to (\forall \alpha.\;\alpha \to \alpha)$:$$\begin{array}{lcl}M & = & \lambda f:(\forall \alpha.\;\alpha \to \alpha).\;\Lambda \alpha.\lambda x:\alpha.\;f \;[\forall \alpha.\;\alpha \to \alpha]\;(\Lambda \beta.\lambda y:\beta.\;y)\;[\alpha]\;x\\N & = & \lambda f:(\forall \alpha.\;\alpha \to \alpha).\;\Lambda \alpha.\lambda x:\alpha.\;f\;[\alpha]\;x\end{array}$$They are distinct $\beta$-normal, $\eta$-long form, but are observationally equivalent. In fact, all functions of this type are equivalent, since $\forall \alpha.\;\alpha \to \alpha$ is the encoding of the unit type, and so all functions of the type $(\forall \alpha.\;\alpha \to \alpha) \to (\forall \alpha.\;\alpha \to \alpha)$ must be extensionally equivalent.
_unix.362459
My problem

If I transfer files with rsync to tape or disk (USB, e-SATA, FireWire), Linux hangs, and there is no way to recover other than the power button (a brutal shutdown!). sysrq-trigger doesn't work, SSH doesn't answer, and the keyboard and screen take no input.

I have an Asus M5A97 R2.0 board with 16 GB of Crucial RAM. I've ordered a couple of other RAM sticks from Kingston.

In your opinion, can this be a hardware problem or a RAM problem? The ramtest program reports no errors. I also tried this solution, but it hangs anyway. What's your opinion?

I forgot to mention: the hang happens on every transfer, especially with big files (over 10 GB). I tried different kernel versions; same problem.
Linux hangs on transfer with rsync; can it be a RAM error?
freeze;ram;panic;hang
I changed the RAM and now it works fine. A transfer of over 2 TB completed without a panic, so the problem was my old RAM.
_unix.26000
I am running Cygwin 1.7 on Win7 Pro x64, and I can query my Ubuntu 10.04 LTS server just fine.

XWin.exe -clipboard -once -rootless -nodecoration -notrayicon -query $IP_ADDRESS

I recently installed Ubuntu 11.10 with XFCE desktop on another machine, and I cannot connect to this one.

Of course, I enabled TCP and XDMCP in LightDM using /etc/lightdm/lightdm.conf

[SeatDefaults]
# ...
xserver-allow-tcp=true

[XDMCPServer]
enabled=true

and I think the fact that I can connect using my Xubuntu 11.10 laptop proves that it works.

X -query $IP_ADDRESS :1

Xwin fails to connect, while logging something like:

[333305.324] XDMCP fatal error: Session failed Failed to connect to display :0
[333305.324]
[333305.324] Server terminated with error (1). Closing log file.

Today I updated Cygwin.

CYGWIN_NT-6.1-WOW64 1.7.9(0.237/5/3) 2011-03-29 10:10

Still doesn't work. Does anyone have a clue as to what 'feature' the new and improved LightDM or Xserver has that I forgot to take into account?

Oh and did I mention the exact same Cygwin/Xwin connects to Ubuntu 10.04 just fine, using the same command line (different IP of course)?
How to Cygwin Xwin -query an Ubuntu 11.10 Xserver?
ubuntu;cygwin;x server
I don't know what the guys over at Cygwin/X are doing to make this fail. And I don't know why I cannot find any help or even mention of similar trouble anywhere in this galaxy that is within the reach of Google. I believe I am not the only one using the software, so the lack of help puzzles me.

But let me provide a solution to my own question; I discovered that VCXsrv.exe is some kind of Cygwin/X clone in a way.

http://sourceforge.net/projects/vcxsrv/

VcXsrv Windows X-server based on the xorg git sources (like xming or cygwin's xwin), but compiled with Visual C++ 2010.

It works almost the same, except you need to add the -from [ip-address] command line option. No idea why. But it works:

vcxsrv.exe -clipboard -once -rootless -nodecoration -notrayicon -query [target hostname or ip] -from [current (local) ip]

Tested with both Xubuntu and xubuntu-desktop on Ubuntu. (XFCE)
_unix.38408
I'm using Linux Debian Squeeze and I already have Compiz installed. I want to stop Metacity from starting automatically at boot, and instead have Compiz start automatically.
How to make Compiz start automatically?
debian;compiz;metacity
Change the gconf key with

gconftool-2 --type string --set /desktop/gnome/session/required_components/windowmanager compiz

You can go back to the default Gnome Metacity window manager with

gconftool-2 --type string --set /desktop/gnome/session/required_components/windowmanager gnome-wm

If this fails, you can simply add compiz --replace to your startup applications. Name the entry what you want, give it whatever description you want, but make the command

compiz --replace

Source: http://wiki.debian.org/Compiz#Start_compiz_instead_of_the_default_Gnome_Window_Manager
_unix.107072
I am using Oracle VirtualBox and I have installed CentOS 6.4. How can I connect remotely to CentOS with WinSCP from Windows? There are fields like hostname in WinSCP, but I don't know what to write.
how to connect centos from windows remotely?
centos;remote
null
_codereview.36300
I'm trying to find all the 3, 4, 5, and 6 letter words given 6 letters. I am finding them by comparing every combination of the 6 letters to an ArrayList of words called sortedDictionary. I have worked on the code a good bit to get it to this point.I tested how many six letter words are checked and got 720 which is good because 6*5*4*3*2*1=720 which means I am not checking any words twice. I can't make it faster by getting rid of duplicate checks because I have already gotten rid of them all. Can I still make this faster?Note that sortedDictionary only contains about 27 hundred words. for(int l1 = 0; l1 < 6; l1++){ for(int l2 = 0; l2 < 6; l2++){ if(l2 != l1) for(int l3 = 0; l3 < 6; l3++){ if(l3 != l1 && l3 != l2){ if(sortedDictionary.contains(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3])) anagram_words.add(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3]); for(int l4 = 0; l4 < 6; l4++){ if(l4 != l1 && l4 != l2 && l4 != l3){ if(sortedDictionary.contains(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3]+anagramCharacters[l4])) anagram_words.add(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3]+anagramCharacters[l4]); for(int l5 = 0; l5 < 6; l5++){ if(l5 != l1 && l5 != l2 && l5 != l3 && l5 != l4){ if(sortedDictionary.contains(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3]+anagramCharacters[l4]+anagramCharacters[l5])) anagram_words.add(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3]+anagramCharacters[l4]+anagramCharacters[l5]); for(int l6 = 0; l6 < 6; l6++){ if(l6 != l1 && l6 != l2 && l6 != l3 && l6 != l4 && l6 != l5) if(sortedDictionary.contains(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3]+anagramCharacters[l4]+anagramCharacters[l5]+anagramCharacters[l6])) anagram_words.add(+anagramCharacters[l1]+anagramCharacters[l2]+anagramCharacters[l3]+anagramCharacters[l4]+anagramCharacters[l5]+anagramCharacters[l6]); } } } } } } } }}My solution, still probably not the best written code but it reduce the loading time from about 1-2 seconds to nearly instant (no noticeable wait time; didn't actually test how long it was).for(int i = 0; i < sortedDictionary.size(); i++){ for(int index = 0; index < anagram.length(); index++) anagramCharacters[index] = anagram.charAt(index); forloop: for(int i2 = 0; i2 < sortedDictionary.get(i).length(); i2++){ for(int i3 = 0; i3 < anagramCharacters.length; i3++){ if(sortedDictionary.get(i).charAt(i2) == anagramCharacters[i3]){ anagramCharacters[i3] = 0; break; } else if(i3 == anagramCharacters.length-1) break forloop; } if(i2 == sortedDictionary.get(i).length()-1) anagram_words.add(sortedDictionary.get(i)); }}
Finding words of different lengths given six letters
java;array;strings;combinatorics
If the number of words in the dictionary is small, then you might be better off turning the code around: go over the words in the dictionary and check whether each word can be built from the 6 letters.

Disregarding that, you recreate the string several times; it would be more efficient to keep the prefix in a char array:

char[] prefix = new char[6];
for (int l1 = 0; l1 < 6; l1++) {
    prefix[0] = anagramCharacters[l1];
    for (int l2 = 0; l2 < 6; l2++)
        if (l2 != l1) {
            prefix[1] = anagramCharacters[l2];
            // ... and so on for the deeper loops
        }
}

Then you can create the string with new String(prefix, 0, 3) (replace the 3 with the length).

Otherwise, recursion to the rescue:

List<String> createAnagrams(char[] chars, SortedSet<String> sortedDictionary) {
    List<String> result = new ArrayList<String>();
    fillListWithAnagrams(result, sortedDictionary, chars, 0);
    return result;
}

void fillListWithAnagrams(List<String> result, SortedSet<String> sortedDictionary,
                          char[] chars, int charIndex) {
    if (charIndex >= 3) {
        String resultString = new String(chars, 0, charIndex);
        if (sortedDictionary.contains(resultString))
            result.add(resultString);
    }
    if (charIndex >= chars.length)
        return; // end of the line
    for (int i = charIndex; i < chars.length; i++) {
        // swap the current character into position charIndex
        char t = chars[i];
        chars[i] = chars[charIndex];
        chars[charIndex] = t;
        fillListWithAnagrams(result, sortedDictionary, chars, charIndex + 1);
        // revert the char array for the next step
        chars[charIndex] = chars[i];
        chars[i] = t;
    }
}
_unix.227195
I'm using Debian GNU/Linux 7.8 and would like to have PDFXCview.exe as the standard application to open pdfs. Opening a pdf using a small executable file including#!/bin/bashwine /foldername/PDFXCview.exe $1works fine. However, I would like to set up Open with.. properly so that whenever I double-click on a pdf it opens with PDFXCview. Passing this executable seems not to work. How to solve this?
How to create a custom command in Linux for a wine pdf application?
bash;pdf;wine
null
_softwareengineering.50034
Ok, I almost lost a job offer because I didn't have enough experience as an enterprise software engineer.

I've been a programmer for over 16 years, and the last 12-14 professionally, at companies big and small.

So this made me think of this question: What's the difference between a software engineer and an enterprise software engineer?

Is there really a difference between software architecture and enterprise architecture?

BTW: I try to do what every other GOOD software programmer does, like architecture, tdd, SDLC, etc.
Enterprise VS Regular corporate developer
.net;enterprise architecture
Rick, I think big companies inherently don't like jacks-of-all-trades. You say you do everything. In a small company, we want people who can do everything. Those people are more valuable because they can wear multiple hats.

In an enterprise environment, there is clear job separation. They don't want people who wear many hats. They want people who focus on one thing and one thing only, and who excel at doing just that one thing.

I personally prefer the excitement of not knowing what hat I'll need to wear that day. That's my preference. Other people may prefer the structure and stability of knowing exactly what they're going to work on that day.

I believe that the company's main concern is that you may not stick around because the job is different from what you're used to. In these interviews, I believe it's important to find a way to demonstrate that you seek this type of job and understand the differences from the work you've done before. It may be best to focus only on the strengths that apply to the job description. Tailor your resume and your questions to fit the job. Make sure you are prepared to give answers that tell the interviewers what they want to hear. Most importantly, make sure you actually want to work in this environment and that what you're saying really reflects your desired career path.
_codereview.99006
The existing design of class DList and DListNode is taken. The main criteria is to do successive updates in \$O(1)\$ time.Part III (3 points)Implement a lockable doubly-linked list ADT: a list in which any node can be locked. A locked node can never be removed from its list. Any attempt to remove a locked node has no effect (not even an error message). Your locked list classes should be in the list package alongside DList and DListNode. First, define a LockDListNode class that extends DListNode and carries information about whether it has been locked. LockDListNode's are not locked when they are first created. Your LockDListNode constructor(s) should call a DListNode constructor to avoid code duplication.Second, define a LockDList class that extends DList and includes an additional method public void lockNode(DListNode node) { ... } that permanently locks node.Your LockDList class should override just enough methods to ensure that(1) LockDListNode's are always used in LockDList's (instead of DListNode's), and(2) locked nodes cannot be removed from a list.WARNING: To override a method, you must write a new method in the subclass with EXACTLY the same prototype. You cant change a parameters type to a subclass. Overriding wont work if you do that.Your overriding methods should include calls to the overridden superclass methods whenever it makes sense to do so. Unnecessary code duplication will be penalized.Solution/* DListNode.java */package cs61b.homework4;/** * A DListNode is a node in a DList (doubly-linked list). */public class DListNode { /** * item references the item stored in the current node. prev references the * previous node in the DList. next references the next node in the DList. * * DO NOT CHANGE THE FOLLOWING FIELD DECLARATIONS. */ public Object item; private DListNode prev; private DListNode next; /** * DListNode() constructor. * * @param i * the item to store in the node. * @param p * the node previous to this node. * @param n * the node following this node. */ DListNode(Object i, DListNode p, DListNode n) { item = i; setPrev(p); setNext(n); } DListNode getNext() { return next; } void setNext(DListNode next) { this.next = next; } DListNode getPrev() { return prev; } void setPrev(DListNode prev) { this.prev = prev; }}/* DList.java */package cs61b.homework4;/** * A DList is a mutable doubly-linked list ADT. Its implementation is * circularly-linked and employs a sentinel (dummy) node at the sentinel * of the list. * * DO NOT CHANGE ANY METHOD PROTOTYPES IN THIS FILE. */public class DList { /** * sentinel references the sentinel node. * size is the number of items in the list. (The sentinel node does not * store an item.) * * DO NOT CHANGE THE FOLLOWING FIELD DECLARATIONS. */ protected DListNode sentinel; protected int size; /* DList invariants: * 1) sentinel != null. * 2) For any DListNode x in a DList, x.next != null. * 3) For any DListNode x in a DList, x.prev != null. * 4) For any DListNode x in a DList, if x.next == y, then y.prev == x. * 5) For any DListNode x in a DList, if x.prev == y, then y.next == x. * 6) size is the number of DListNodes, NOT COUNTING the sentinel, * that can be accessed from the sentinel (sentinel) by a sequence of * next references. */ /** * newNode() calls the DListNode constructor. Use this class to allocate * new DListNodes rather than calling the DListNode constructor directly. * That way, only this method needs to be overridden if a subclass of DList * wants to use a different kind of node. * @param item the item to store in the node. 
* @param prev the node previous to this node. * @param next the node following this node. */ protected DListNode newNode(Object item, DListNode prev, DListNode next) { return new DListNode(item, prev, next); } /** * DList() constructor for an empty DList. */ public DList() { this.sentinel = this.newNode(null,null,null); this.sentinel.setNext(sentinel); this.sentinel.setPrev(sentinel); } /** * isEmpty() returns true if this DList is empty, false otherwise. * @return true if this DList is empty, false otherwise. * Performance: runs in O(1) time. */ public boolean isEmpty() { return size == 0; } /** * length() returns the length of this DList. * @return the length of this DList. * Performance: runs in O(1) time. */ public int length() { return size; } /** * insertFront() inserts an item at the front of this DList. * @param item is the item to be inserted. * Performance: runs in O(1) time. */ public void insertFront(Object item) { DListNode node = this.newNode(item, this.sentinel, this.sentinel.getNext()); node.getNext().setPrev(node); this.sentinel.setNext(node); this.size++; } /** * insertBack() inserts an item at the back of this DList. * @param item is the item to be inserted. * Performance: runs in O(1) time. */ public void insertBack(Object item) { DListNode node = this.newNode(item, this.sentinel.getPrev(), this.sentinel); this.sentinel.setPrev(node); node.getPrev().setNext(node); this.size++; } /** * front() returns the node at the front of this DList. If the DList is * empty, return null. * * Do NOT return the sentinel under any circumstances! * * @return the node at the front of this DList. * Performance: runs in O(1) time. */ public DListNode front() { if (this.sentinel.getNext() == sentinel){ return null; }else{ return this.sentinel.getNext(); } } /** * back() returns the node at the back of this DList. If the DList is * empty, return null. * * Do NOT return the sentinel under any circumstances! * * @return the node at the back of this DList. * Performance: runs in O(1) time. */ public DListNode back() { if(this.sentinel.getPrev() == sentinel){ return null; }else{ return this.sentinel.getPrev(); } } /** * next() returns the node following node in this DList. If node is * null, or node is the last node in this DList, return null. * * Do NOT return the sentinel under any circumstances! * * @param node the node whose successor is sought. * @return the node following node. * Performance: runs in O(1) time. */ public DListNode next(DListNode node) { if ((node == null) || (node.getNext() == this.sentinel)){ return null; }else{ return node.getNext(); } } /** * prev() returns the node prior to node in this DList. If node is * null, or node is the first node in this DList, return null. * * Do NOT return the sentinel under any circumstances! * * @param node the node whose predecessor is sought. * @return the node prior to node. * Performance: runs in O(1) time. */ public DListNode prev(DListNode node) { if ((node == null) || (node.getPrev() == this.sentinel)){ return null; }else{ return node.getPrev(); } } /** * insertAfter() inserts an item in this DList immediately following node. * If node is null, do nothing. * @param item the item to be inserted. * @param node the node to insert the item after. * Performance: runs in O(1) time. 
*/ public void insertAfter(Object item, DListNode node) { if (node == null){ return; }else{ DListNode newNode = this.newNode(item, node, node.getNext()); node.getNext().setPrev(newNode); node.setNext(newNode); } this.size++; } /** * insertBefore() inserts an item in this DList immediately before node. * If node is null, do nothing. * @param item the item to be inserted. * @param node the node to insert the item before. * Performance: runs in O(1) time. */ public void insertBefore(Object item, DListNode node) { if (node == null){ return; }else{ DListNode newNode = this.newNode(item, node.getPrev(), node); node.getPrev().setNext(newNode); node.setPrev(newNode); this.size++; } } /** * remove() removes node from this DList. If node is null, do nothing. * Performance: runs in O(1) time. */ public void remove(DListNode node) { if(node == null){ return; }else{ node.item = null; node.getPrev().setNext(node.getNext()); node.getNext().setPrev(node.getPrev()); this.size--; } } /** * toString() returns a String representation of this DList. * * DO NOT CHANGE THIS METHOD. * * @return a String representation of this DList. * Performance: runs in O(n) time, where n is the length of the list. */ public String toString() { String result = [ ; DListNode current = sentinel.getNext(); while (current != sentinel) { result = result + current.item + ; current = current.getNext(); } return result + ]; }}/* LockDListNode.java */package cs61b.homework4;public class LockDListNode extends DListNode{ protected boolean lock; protected LockDListNode(Object i, DListNode p, DListNode n){ super(i, p, n); this.lock = false; }}/* LockDList.java */package cs61b.homework4;public class LockDList extends DList { /** * newNode() calls the LockDListNode constructor. Use this method to * allocate new LockDListNodes rather than calling the LockDListNode * constructor directly. * * @param item * the item to store in the node. * @param prev * the node previous to this node. * @param next * the node following this node. */ protected LockDListNode newNode(Object item, DListNode prev, DListNode next) { return new LockDListNode(item, prev, next); } /** * LockDList() constructor for an empty LockDList. */ public LockDList() { super(); } /** * remove() removes node from this DList. If node is null, do nothing. * Performance: runs in O(1) time. */ public void remove(DListNode node) { if (node == null) { return; } else if (((LockDListNode)node).lock == true) { return; } else { node.item = null; node.getPrev().setNext(node.getNext()); node.getNext().setPrev(node.getPrev()); this.size--; } } public void lockNode(DListNode node) { if(node == null){ return; }else{ ((LockDListNode)node).lock = true; } }}With the given skeleton code for DList and DListNode here:Assume that a user passes a node that is part of the correct list. This is out of scope here.Access specifier for class/method/constructor can be improved (if required).Can I avoid typecasting in overriding the remove method of the LockDList class?Can I avoid typecasting in the lockNode method of the LockDListclass?Note: The package name is cs61b.homework4 instead of list.
Lockable linked list
java;object oriented;linked list;inheritance
null
_softwareengineering.298267
I have a site that lists several items per page. When the user clicks, he can see each item's prices (a list of about 10-20). They are not visible at first because the page would be very long. I want to tag them with the schema.org Offer microdata structure. How can I make this tagging SEO-friendly? Is it only possible if each product has its own page? Items are dynamic and change every week, which is why I figured it's not good to have a dedicated page for each (and also for user experience).
How to mark up structured data that is visible upon click with microdata
seo
null
_unix.144377
I have Windows and want to install encrypted Ubuntu (Home + Swap + System); it's giving me that option during installation when I'm choosing partitions. So my question is: will I be able to boot both Windows and Ubuntu when I encrypt Ubuntu, or will it break the dual boot?
Dual Boot and encrypting Linux?
dual boot;encryption
null
_softwareengineering.278778
In this benchmark, the suite takes 4 times longer to complete with ES6 promises compared to Bluebird promises, and uses 3.6 times as much memory.

How can a JavaScript library be so much faster and lighter than v8's native implementation written in C? Bluebird promises have exactly the same API as native ES6 promises (plus a bunch of extra utility methods).

Is the native implementation just badly written, or is there some other aspect to this that I'm missing?
Why are native ES6 promises slower and more memory-intensive than bluebird?
javascript;performance;io.js
Bluebird author here.

The V8 promises implementation is written in JavaScript, not C. All JavaScript (including V8's own) is compiled to native code. Additionally, user-written JavaScript is optimized, if possible (and worth it), before being compiled to native code. A promises implementation is something that would not benefit much or at all from being written in C; in fact it would only make it slower, because all you are doing is manipulating JavaScript objects and communication.

The V8 implementation simply isn't as optimized as Bluebird. For instance, it allocates arrays for promises' handlers. This takes a lot of memory when each promise also has to allocate a couple of arrays (the benchmark creates roughly 80k promises overall, so that's 160k unused arrays allocated). In reality, 99.99% of use cases never branch a promise more than once, so optimizing for this common case gains huge memory usage improvements.

Even if V8 implemented the same optimizations as Bluebird, it would still be hindered by the specification. The benchmark has to use new Promise (an anti-pattern in Bluebird) as there is no other way to create a root promise in ES6. new Promise is an extremely slow way of creating a promise: first the executor function allocates a closure, and secondly it is passed 2 separate closures as arguments. That's 3 closures allocated per promise, but a closure is already a more expensive object than an optimized promise.

Bluebird can use promisify, which enables lots of optimizations and is a much more convenient way of consuming callback APIs, and it enables conversion of whole modules into promise-based modules in one line (promisifyAll(require('redis'));).
_webmaster.52389
I keep reading everywhere that if you have a multilanguage site, where the same page appears in, say, French and English, then this is considered duplicate content by Google. It is written that using a canonical link is the solution, but I do not understand how to use it in this case. Should I:

1. Choose either the French URL or the English URL to be the canonical (main) one, and place the canonical link there? If so, how do I decide which of the two URLs must be canonical? Both languages are important to me and I want the content under both languages to be indexed by Google and served to the user, depending on the language in which he searches.

2. Or should I place a canonical link on both the French and English URLs? If so, then I do not understand the meaning of using the canonical link. In this case would both URLs be indexed, and are both of them considered important by Google and not duplicates?

Also I read that link rel="alternate" can be used to indicate to Google that, for example, the French URL is the French-language equivalent of the English page. This makes sense and I understand how to use such links, but how are they combined with canonical links? Should I define both the canonical URL AND specify rel="alternate" on both URLs? Could someone help me clarify this, because I'm stuck and can't seem to find a good-enough explanation in other sources.
Multi language site - use of canonical link and link rel=alternate
seo;canonical url;multilingual
null
_codereview.112846
I'm quite new to threading primitives in C# and was hoping you might be able to suggest improvements to this. I need to ensure that the XXX call below happens within the calling thread (XXX is a foreign call into a thread-unsafe library), so I used a queue here. It seems a bit like there should be a better primitive for this. Maybe delegates are applicable somehow? I don't understand delegates.I also have to wonder if I've gotten this whole scheme right in the first place! Maybe there's a deadlock I'm not seeing. Threading is so tricky.As an additional restriction, it's very important that this works on .NET 3.5. public void RunProc(AutoResetEvent killSubProc) { using (Process process = new Process()) { var timeout = 8000; var channel = new Queue<string> {}; process.StartInfo.FileName = blah.exe; process.StartInfo.Arguments = @stuff; process.StartInfo.UseShellExecute = false; process.StartInfo.RedirectStandardOutput = true; process.StartInfo.RedirectStandardError = true; process.EnableRaisingEvents = true; using (AutoResetEvent channelWaitHandle = new AutoResetEvent(false)) { process.OutputDataReceived += (sender, e) => { if (e.Data != null) { lock (channel) { channel.Enqueue(STDOUT); channel.Enqueue(e.Data); } channelWaitHandle.Set(); } }; process.ErrorDataReceived += (sender, e) => { if (e.Data != null) { lock (channel) { channel.Enqueue(STDERR); channel.Enqueue(e.Data); } channelWaitHandle.Set(); } }; process.Exited += (sender, e) => { lock (channel) { channel.Enqueue(EXIT); } channelWaitHandle.Set(); }; process.Start(); process.BeginOutputReadLine(); process.BeginErrorReadLine(); bool running = true; while (running) { int idx = WaitHandle.WaitAny(new WaitHandle[] {killSubProc, channelWaitHandle}); if (idx == 0) { process.Kill(); running = false; } else { lock (channel) { while (channel.Count > 0) { var item = channel.Dequeue(); XXX(item); if (item == EXIT) { running = false; } } } } } } } }
Ensuring that events raised by system.diagnostics process class happen in the parent thread
c#;multithreading
null
_datascience.19479
Problem description

I have a data set of about 10000 patients in a study. For each patient, I have a list of various measurements. Some information is scalar data (e.g. age), some information is a time series of measurements, and some other information can even be a bitmap. The individual record itself can be quite thick (10 kB to 10 MB). The data is to be processed in practically two steps:

1. Preprocessing at the level of individual records (patients), i.e. extracting some features from the raw data and storing them, calculating some slopes in time series, etc. All of this can be done at the individual level and it can very easily be distributed.

2. On top of the preprocessed data (extracted features), I will need to calculate some aggregated things such as e.g. average age, but also run some machine learning tasks. (A rough sketch of this two-step shape is given below.)

The question

Obviously, this is very suitable to be addressed in Apache Spark (or any map-reduce architecture). At the most general level, my question is: what is the most appropriate NoSQL database for this situation?

So far, I have considered two basic options:

- MongoDB - to take advantage of the document-oriented storage where everything is in the same place. However, I am not sure about the performance on the larger binary data (pictures, time series).
- Cassandra - this may have better storage of binary data, but joins will be necessary (even if optimized by indexing all data by patient id).
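To make the two-step shape above concrete, here is a rough, database-agnostic Python sketch (the field names and the feature function are made up by me; the point is only that step 1 is embarrassingly parallel per record and step 2 aggregates the small extracted features):

from multiprocessing import Pool
from statistics import mean

def extract_features(record):
    # Step 1: runs independently per patient record, so it distributes trivially.
    series = record.get("blood_pressure", [])
    slope = (series[-1] - series[0]) / (len(series) - 1) if len(series) > 1 else 0.0
    return {"age": record["age"], "bp_slope": slope}

def aggregate(features):
    # Step 2: works only on the small, preprocessed feature records.
    return {
        "avg_age": mean(f["age"] for f in features),
        "avg_bp_slope": mean(f["bp_slope"] for f in features),
    }

if __name__ == "__main__":
    # Toy records standing in for the ~10000 real ones.
    records = [
        {"age": 54, "blood_pressure": [130, 128, 125]},
        {"age": 61, "blood_pressure": [140, 142]},
    ]
    with Pool() as pool:
        features = pool.map(extract_features, records)
    print(aggregate(features))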
Data representation (NoSQL database?) for a medical study
machine learning;nosql;mongodb
null
_unix.373954
How can I achieve to add some metadata (tags) to audio files (mp3),so that mplayer will play music by specific tags or their combination?I don't mean ID3 tags, but similar tags as I append to this question.e.g. there will be commands like$ tagmusic jazz, good_background, slow file.mp3$ mplayer --by-tag jazz+good_backgroundThe player may be another than mplayer, but I prefer mplayer.And I really prefer command-line application.
Mplayer playing music by tag
mplayer;music;file metadata;music player;tagging
null
_webapps.12636
Whenever I click full screen on YouTube, it changes the quality settings from 240p to 480p. I prefer to control the quality setting myself, and not have it influenced by the full screen button.

Is there a way to stop this? A Greasemonkey script perhaps?
Stop YouTube from changing my quality settings on full screen
youtube
Officially you can't and there are a lot of requests for that in Google Forums.

One of the Google Employees says:

There are a few different options if you login, then choose settings in the drop down where you select quality (right under the video player):

- I have a slow connection. Never play higher-quality video.
- Always choose the best option for me based on my player size.
- Always play HD when switching to fullscreen (when available)

You can try one of the scripts to find the one that suits you best.
_webapps.80182
The majority of the search results for "download periscope" lead to websites allowing me to download the app. "download periscope stream" is a much better search term that actually led me to two solutions, but neither of them is convenient.

1) Tried, and it works: http://www.quora.com/How-do-I-save-my-Periscope-video-or-broadcast-to-my-phone/answer/Andrew-Leyden - but that requires that the phone is connected all the time (for the whole 50 minutes of the stream).

2) Didn't try, should work: recording the screen + using http://www.instructables.com/id/How-to-record-audio-from-your-computer-using-Quick/ to record audio (again, 50 minutes of waiting).

I also tried the technique described here - https://stackoverflow.com/questions/30073330/downloading-storing-periscope-live-streaming-broadcasts - just by observing the network requests and populating a list of URLs to download via wget:

https://replay.periscope.tv/ ... ==/chunk_1.ts
https://replay.periscope.tv/ ... ==/chunk_2.ts

But that naive version fails - they check a cookie... Maybe I should use PhantomJS to accept the cookie and then download the chunks? (A rough sketch of what I have in mind follows below.) Before I start digging into that - maybe there is a straightforward, off the bat, ready-to-go solution I'm missing?

The question is generic and applies to any Periscope video.

Bonus question - what if the video is mine - can I retroactively save it to the camera roll from the Periscope app? (Assuming the autosave broadcast option was turned off.)
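For reference, this is roughly what I mean by downloading the chunk list directly - a Python sketch, where the cookie name/value and the chunk URL list file are purely hypothetical placeholders (I have not verified what Periscope's replay endpoint actually expects):

import requests

# Hypothetical values: the real cookie name/value would be copied from a
# browser session, and chunk_urls.txt from the network inspector.
COOKIES = {"session": "PASTE_VALUE_FROM_BROWSER"}

with open("chunk_urls.txt") as f:
    chunk_urls = [line.strip() for line in f if line.strip()]

session = requests.Session()

with open("replay.ts", "wb") as out:
    for url in chunk_urls:
        resp = session.get(url, cookies=COOKIES, timeout=30)
        resp.raise_for_status()
        out.write(resp.content)

# The resulting replay.ts could then be remuxed (e.g. with ffmpeg) if needed.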
How to (most efficiently) download video from Periscope?
video;download;video streaming;streaming
null
_unix.244211
Problem

Hi. I'm from Debian Land. I've used OpenSUSE before, but never on my own systems. I'm now attempting to understand it better as we have an application in development which will run on OpenSUSE.

OpenSUSE has a 'Tumbleweed', 'Factory', and a 'Leap'. There is very little clear and concise information on the official OpenSUSE website describing the differences between these. The information is jumbled, mixed, poorly written and frustrating. (The OpenSUSE Wikipedia article appears outdated as well.)

Question

What is the difference between these various OpenSUSE releases/flavors?
What is the difference between the many OpenSUSE flavors?
linux;opensuse
null
_webapps.25681
Every other image I see hotlinked from imgur returns a 403 - Forbidden error.

If I copy and paste the link into my browser the image will load. Or if I delete the initial i. in the URL, the image loads. The image will not load if it's used in a bbcode style tag, or if I right click on it and choose open in a new tab.

Do you know the cause or a fix?

These are a couple of examples that you'll probably be able to see, but I can't unless I do one of those above mentioned actions:

http://i.imgur.com/MLGyL.png
http://i.imgur.com/dwvwq.jpg
How to remedy imgur 403 forbidden errors?
imgur
I found out that imgur has an outright ban on the site for some reason. No one knows why.

The workaround is to use the site's https connection, then the images will load. We guess imgur didn't ban https://site.com.
_unix.387735
Frist: I'm new to linux. I'm using debian 9.1.0 lxde 64 bits.I gave up trying to change the resolution when I got black screen at the login screen, just the terminal mode (Ctrl+Alt+F1) works. I tried the commom cvt -> xrandr --newmode -> xrandr --addmode -> xrandr output I get an error at the addmode step, and tried with gft too, but the same. also tried to edit the xorg.conf file, it was when I got this issue. I actually have access to my ext4 partition throuth windows 10 (using ExtFS).My driver version is 384.69, my gpu is gtx 750ti, my monitor is XP911AW. I got this data from it's EDID (all from the same file, at one run):DumpEDID v1.06Copyright (c) 2006 - 2017 Nir SoferWeb site: http://www.nirsoft.netActive : NoRegistry Key : DISPLAY\PEB038F\1&8713bca&0&UID0Monitor Name : XP911AWSerial Number : 0708500665431Manufacture Week : 37 / 2007ManufacturerID : 41536 (0xA240)ProductID : 911 (0x038F)Serial Number (Numeric) : 16843009 (0x01010101)EDID Version : 1.3Display Gamma : 2.20Vertical Frequency : 56 - 76 HzHorizontal Frequency : 30 - 81 KHzMaximum Image Size : 41 X 26 cm (19.1 Inch)Maximum Resolution : 1440 X 900Support Standby Mode : YesSupport Suspend Mode : YesSupport Low-Power Mode : YesSupport Default GTF : NoDigital : NoSupported Display Modes : 720 X 400 70 Hz 640 X 480 60 Hz 640 X 480 72 Hz 640 X 480 75 Hz 800 X 600 56 Hz 800 X 600 60 Hz 800 X 600 72 Hz 800 X 600 75 Hz1024 X 768 60 Hz1024 X 768 70 Hz1024 X 768 75 Hz1280 X 960 60 Hz1280 X 960 75 Hz1440 X 900 60 Hz1440 X 900 75 Hz1280 X 1024 60 Hz1280 X 1024 75 HzActive : NoRegistry Key : DISPLAY\PEB038F\4&2c0f5421&0&UID16843008Monitor Name : XP911AWSerial Number : 0708500665431Manufacture Week : 37 / 2007ManufacturerID : 41536 (0xA240)ProductID : 911 (0x038F)Serial Number (Numeric) : 16843009 (0x01010101)EDID Version : 1.3Display Gamma : 2.20Vertical Frequency : 56 - 76 HzHorizontal Frequency : 30 - 81 KHzMaximum Image Size : 41 X 26 cm (19.1 Inch)Maximum Resolution : 1440 X 900Support Standby Mode : YesSupport Suspend Mode : YesSupport Low-Power Mode : YesSupport Default GTF : NoDigital : NoSupported Display Modes : 720 X 400 70 Hz 640 X 480 60 Hz 640 X 480 72 Hz 640 X 480 75 Hz 800 X 600 56 Hz 800 X 600 60 Hz 800 X 600 72 Hz 800 X 600 75 Hz1024 X 768 60 Hz1024 X 768 70 Hz1024 X 768 75 Hz1280 X 960 60 Hz1280 X 960 75 Hz1440 X 900 60 Hz1440 X 900 75 Hz1280 X 1024 60 Hz1280 X 1024 75 HzJust a little off-topic, do you know if this driver (downloaded from geforce.com, which is a .run file) includes cuda too?EDIT: I tried to edit the xorg.conf file from windows, no sucess, and now the screen don't become black anymore, but don't start the graphical interface at all. I still can use the CTRL+Alt+F1
How to set a custom resolution with nvidia drivers installed?
nvidia;resolution
null
_webmaster.69617
I run a website where I have 3 AdSense units and 2 ad units from another ad network.

The OTHER ad network sent me a DFP AdExchange invitation, saying that by signing up, they would be able to send more ads to my ad units and increase my revenue.

The invitation URL looks like this:

https://www.google.com/adxseller/participant-registration?invitation=[LONG CODE]

Is it okay to sign up for AdExchange when I am already using AdSense on my site? Is there any official statement from Google about its policies on this topic? I did come across this doc which explains how AdExchange works.
Can I use AdSense and AdExchange enabled networks together on a website
google adsense;google adsense policies;doubleclick ad exchange
null
_unix.350079
I saw Dual boot - Installed arch and windows entry disappeared on grub and I have the same/similar problem. I have Grub and it shows only Debian setup and not MS-Windows. I also tried the following but without success - [$] sudo grub-install /dev/sda [sudo] password for shirish: Installing for i386-pc platform. Installation finished. No error reported.Then - [$] sudo grub-mkconfig -o /boot/grub/grub.cfg Generating grub configuration file ...Found background image: /usr/share/images/desktop-base/desktop-grub.pngD000001: cmpversions a='0:4.9.0-2-amd64' b='0:4.9.0-1-amd64' r=1Found linux image: /boot/vmlinuz-4.9.0-2-amd64Found initrd image: /boot/initrd.img-4.9.0-2-amd64Found linux image: /boot/vmlinuz-4.9.0-1-amd64Found initrd image: /boot/initrd.img-4.9.0-1-amd64Found memtest86+ image: /boot/memtest86+.binFound memtest86+ multiboot image: /boot/memtest86+_multiboot.binFound GRUB Invaders image: /boot/invaders.execdoneThe above tells me it isn't able to find the MS-Windows partition. Here's the output from parted -l -l: ATA ST1000DM003-9YN1 (scsi)Disk /dev/sda: 1000GBSector size (logical/physical): 512B/4096BPartition Table: msdosDisk Flags: Number Start End Size Type File system Flags 1 32.3kB 52.4GB 52.4GB primary ntfs 2 52.4GB 1000GB 948GB extended lba 5 52.4GB 105GB 52.4GB logical ntfs 6 105GB 305GB 200GB logical ext4 boot 7 305GB 405GB 100GB logical ext4 8 405GB 995GB 590GB logical ext4 9 995GB 1000GB 5348MB logical linux-swap(v1)Model: Seagate BUP Slim BK (scsi)Disk /dev/sdb: 2000GBSector size (logical/physical): 512B/4096BPartition Table: msdosDisk Flags: Number Start End Size Type File system Flags 1 1049kB 2000GB 2000GB primary ntfsand then lsblk output - [$] sudo lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINTsda sda1 ntfs WIN xxxxxxxxxxxxxxxxxxxx sda2 sda5 ntfs Data xxxxxxxxxxxxxxxxxxxx sda6 ext4 xxxxxxxxxxxxxxxxxxxx /sda7 ext4 xxxxxxxxxxxxxxxxxxxx /homesda8 ext4 xxxxxxxxxxxxxxxxxxxx/datasda9 swap xxxxxxxxxxxxxxxxxxxx [SWAP]sdb iso9660 ISOIMAGE 2015-06-04-16-30-00-00 sdb1 ntfs Seagate-Slim-Backup xxxxxxxxxxxxxxxxxxxx /media/shirish/Seagate-Slim-Backupsr0 I haven't shared UUID info. for safety and privacy concerns. My /boot/grub/grub.cfg makes no mention of any MS-Windows [$] cat [$]How do I get the MS-Windows again on the menu ?I even tried osprober but no avail :([$] cat /usr/share/doc/os-prober/READMEI even tried os-prober readme -$ sudo cat /usr/share/doc/os-prober | grep $I even tried the README but to no avail, from the README 0 Tests that require the partition to be mounted can be placed in 30 /usr/lib/os-probes/mounted/. These tests are passed the following 31 parameters: partition, mount point, filesystem. 
$ sudo mount /dev/sda1 /usr/lib/os-probes/mounted/and tried things like - [$] sudo os-prober partition /dev/sda1 /usr/lib/os-probes/mounted/ [sudo] password for shirish: [$]Then I ran os-prober as sudo - [$] sudo os-proberand then ran - [shirish@debian] - [/boot] - [10119][$] sudo grub-mkconfig -o /boot/grub/grub.cfg Generating grub configuration file ...Found background image: /usr/share/images/desktop-base/desktop-grub.pngFound linux image: /boot/vmlinuz-4.9.0-2-amd64Found initrd image: /boot/initrd.img-4.9.0-2-amd64Found memtest86+ image: /boot/memtest86+.binFound memtest86+ multiboot image: /boot/memtest86+_multiboot.binFound GRUB Invaders image: /boot/invaders.execdoneAs can be seen it doesn't find the MS-Windows partition, is it lost forever or there may be a way out ?Sadly had to unmount it :([$] sudo umount /usr/lib/os-probes/mounted/[$] All out of ideas, it seems that Windows bootloader is all shot otherwise we should have had some output ?This is how it looks in /etc/grub.d/40_custom after GAD3R's sharing -#!/bin/sh exec tail -n +3 $0# This file provides an easy way to add custom menu entries. Simply type the# menu entries you want to add after this comment. Be careful not to change# the 'exec tail' line above.menuentry Windows { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' chainloader +1 }After putting GAD3R's suggestions I get -[$] cat /etc/default/grub | grep GRUB_DISABLE_OS_PROBER 11 GRUB_DISABLE_OS_PROBER=falseand running update-grub I get the following -[$] sudo update-grub Generating grub configuration file ...Found background image: /usr/share/images/desktop-base/desktop-grub.pngFound linux image: /boot/vmlinuz-4.9.0-2-amd64Found initrd image: /boot/initrd.img-4.9.0-2-amd64Found memtest86+ image: /boot/memtest86+.binFound memtest86+ multiboot image: /boot/memtest86+_multiboot.binFound GRUB Invaders image: /boot/invaders.execNo change, so something is still amiss :(
How to use os-prober to find MS-Windows boot data?
debian;dual boot;grub;mbr
null
_codereview.120052
I've made the following VBA script to analyse text recurrence in a huge batch of descriptions.For a small part of the batch the code run smoothly, but when I include everything it tends to loose control, get stuck and both Excel and VBE freeze.What I did to avoid this (at least most of the times), is to include temporisation (DoEvents) and use the Immediate Window to show that the code is still alive :If Int(i / 1000) = i / 1000 Then Debug.Print iElse If Int(i / 100) = i / 100 Then DoEvents Else End IfEnd IfI guess there are better ways to handle that kind of behavior in VBA, but I don't know.Here is the full code, that is probably improvable :Sub test_usedW()Dim A()A = get_most_used_words_array(An_Array, 1, True)End SubFunction get_most_used_words_array(ByVal ArrayToAnalyse As Variant, Optional ByVal ColumnToAnalyse As Integer = 1, Optional OutputToNewSheet As Boolean = False) As VariantDim A() As String, _ wb As Workbook, _ wS As Worksheet, _ Dic As Scripting.Dictionary, _ DicItm As Variant, _ NbMaxWords As Integer, _ TpStr As String, _ Results() As Variant, _ DicItm2 As Object, _ R(), _ iA As Long, _ i As Long, _ j As Long, _ k As Long, _ c As RangeSet wb = ThisWorkbookSet Dic = CreateObject(Scripting.Dictionary)Dic.CompareMode = TextCompareNbMaxWords = 5'--1--Balayage du tableauFor iA = LBound(ArrayToAnalyse, 1) To UBound(ArrayToAnalyse, 1) If ArrayToAnalyse(iA, ColumnToAnalyse) <> vbNullString Then '--2--Uniformisation des descriptions pour plus de conformit ArrayToAnalyse(iA, ColumnToAnalyse) = CleanStr(ArrayToAnalyse(iA, ColumnToAnalyse)) A = Split(ArrayToAnalyse(iA, ColumnToAnalyse), ) DoEvents '--1--Ajout mots simples For i = LBound(A) To UBound(A) TpStr = CleanStr(A(i)) If Len(TpStr) > 3 Then If Not Dic.exists(TpStr) Then Dic.Add TpStr, TpStr Else DoEvents End If Else End If Next i '--1--Ajout expressions (plusieurs mots) If NbMaxWords < 10 Then For i = LBound(A) To UBound(A) For k = 2 To NbMaxWords j = 0 TpStr = vbNullString Do While j <= k And i + j <= UBound(A) TpStr = TpStr & & CleanStr(A(i + j)) j = j + 1 Loop TpStr = CleanStr(TpStr) If Len(TpStr) > 3 Then If Not Dic.exists(TpStr) Then Dic.Add TpStr, TpStr Else DoEvents End If Else DoEvents End If Next k Next i End If Else End IfNext iA'Results = Application.Transpose(Dic.Items) ReDim Results(Dic.Count - 1) For i = 0 To Dic.Count - 1 Results(i) = Dic.Items(i) If Int(i / 1000) = i / 1000 Then Debug.Print i Else If Int(i / 100) = i / 100 Then DoEvents Else End If End IfNext iReDim R(1 To UBound(Results), 3)Debug.Print UBound(Results) : & UBound(Results)For i = 1 To UBound(Results) R(i, 0) = Results(i) ', 1) R(i, 2) = Len(R(i, 0)) For iA = LBound(ArrayToAnalyse, 1) To UBound(ArrayToAnalyse, 1) If ArrayToAnalyse(iA, ColumnToAnalyse) <> vbNullString Then 'Affinage du compatge? Exclusif? instr( & search & )? 
If InStr(1, ArrayToAnalyse(iA, ColumnToAnalyse), R(i, 0)) Then R(i, 1) = R(i, 1) + 1 If InStr(1, ArrayToAnalyse(iA, ColumnToAnalyse), & R(i, 0) & ) Then R(i, 3) = R(i, 3) + 1 Else End If Next iA If Int(i / 1000) = i / 1000 Then Debug.Print i Else If Int(i / 100) = i / 100 Then DoEvents Else End If End IfNext iDoEventsIf OutputToNewSheet Then Set wS = wb.Worksheets.Add wS.Activate 'ws.Range(A1).Resize(UBound(R, 1), UBound(R, 2)).Value = R For i = LBound(R, 1) To UBound(R, 1) For j = LBound(R, 2) To UBound(R, 2) If InStr(1, R(i, j), =) Then wS.Cells(i + 1, j + 1) = ' & R(i, j) Else wS.Cells(i + 1, j + 1) = R(i, j) End If Next j Next i DoEventsElseEnd IfDoEventsget_most_used_words_array = REnd FunctionAnd the functions to simplify text :Function CleanStr(ByVal TheString As String) Dim SpA() As String Dim SpB() As String Dim i As Integer Const AccChars = | - | -|- |-| / | /|/ | . | .|. | , | ,|, | ) | )|) | ( | (|( |=| | | Const RegChars = | | | | |/|/|/|.|.|.|,|,|,|)|)|)|(|(|(|'=| | | SpA = Split(AccChars, |) SpB = Split(RegChars, |) For i = LBound(SpA) To UBound(SpA) TheString = Replace(TheString, SpA(i), SpB(i)) Next i CleanStr = StripAccent(Trim(Trim(TheString)))End FunctionFunction StripAccent(ByVal TheString As String) Dim A As String * 1 Dim B As String * 1 Dim i As Integer Const AccChars = Const RegChars = aaaaaaceeeeiiiidnooooouuuuyySZszYAAAAAACEEEEIIIIDNOOOOOUUUUY For i = 1 To Len(AccChars) A = Mid(AccChars, i, 1) B = Mid(RegChars, i, 1) TheString = Replace(TheString, A, B) Next i StripAccent = TheStringEnd Function
Code to analyse text get stuck if too much data
vba;error handling;excel;time limit exceeded
First:Simple speed-enhancementsThe 3 lowest hanging fruit in the VBA performance garden are Application.ScreenUpdating = FalseApplication.EnableEvents = FalseApplication.Calculation = xlCalculationManualPersonally, I have the following standard Methods for dealing with those:Option ExplicitPublic varScreenUpdating As BooleanPublic varEnableEvents As BooleanPublic varCalculation As XlCalculationPublic Sub StoreApplicationSettings() varScreenUpdating = Application.ScreenUpdating varEnableEvents = Application.EnableEvents varCalculation = Application.CalculationEnd SubPublic Sub DisableApplicationSettings() Application.ScreenUpdating = False Application.EnableEvents = False Application.Calculation = xlCalculationManualEnd SubPublic Sub RestoreApplicationSettings() Application.ScreenUpdating = varScreenUpdating Application.EnableEvents = varEnableEvents Application.Calculation = varCalculationEnd SubWhich will return the settings to whatever they were before your sub runs. But, if you really want to do it properly, this question is a much better implementation.And now, in rough order of when I encounter things in your code, these are my thoughts:Your interruption check could be a lot betterIf Int(i / 1000) = i / 1000 Then Debug.Print iElse If Int(i / 100) = i / 100 Then DoEvents Else End IfEnd IfPersonally, I prefer Mod() as in If i Mod 100 = 0 Then ...Also, did you intend for i to call DoEvents every 100 iterations except for every 1000th iteration?If not, it should be:If i Mod 100 = 0 Then DoEvents If i Mod 1000 = 0 Then Debug.Print iEnd IfOn this note i is not a very useful thing to Debug.print. If somebody else runs your program (or if you have more than one thing to print to the immediate window) then it's going to be very difficult to figure out what is going on. I recommend something like: Debug.Print [Name of procedure / loop / some other descriptor] - Iteration Counter: & iSince it's in a For Loop, you already know how many iterations it should run for, so you should probably include that as well.This:For i = 0 To Dic.Count - 1 Results(i) = Dic.Items(i) If Int(i / 1000) = i / 1000 Then Debug.Print i Else If Int(i / 100) = i / 100 Then DoEvents Else End If End IfNext iThen Becomes:For i = 0 To Dic.Count - 1 Results(i) = Dic.Items(i) If i Mod 100 = 0 Then DoEvents If i Mod 1000 = 0 Then Debug.Print Copy Dic to Results Array - Iteration Counter: & i & / & Dic.Count - 1 End IfNext iAnd rather than seeing this in your immediate window:1000 2000 3000 4000 You'll seeCopy Dic to Results Array - Iteration Counter: 1000 / 4192 Copy Dic to Results Array - Iteration Counter: 2000 / 4192 Copy Dic to Results Array - Iteration Counter: 3000 / 4192 Copy Dic to Results Array - Iteration Counter: 4000 / 4192 Much more useful.Be ExplicitSub is not Sub it is actually (implicitly) Public SubSame with Function --> Public FunctionAnd Dim A --> Dim A As Variant Methods should be Public or PrivateVariables should have an explicit type (even if that type is intended to be Variant). 
You do at least appear to be declaring your variables, so +1 for that.Don't abuse the _ operator.Dim A() As String, _ wb As Workbook, _ wS As Worksheet, _ Dic As Scripting.Dictionary, _ DicItm As Variant, _ NbMaxWords As Integer, _ TpStr As String, _ Results() As Variant, _ DicItm2 As Object, _ R(), _ iA As Long, _ i As Long, _ j As Long, _ k As Long, _ c As Range Why do you want all these declarations on the same line?Just declare them separately like so:Dim A() As StringDim wb As WorkbookDim ws As WorksheetDim Dic As Scripting.DictionaryDim DicItm As VariantDim NBMaxWords as Integeretc.Now, you don't have to spend precious development time fiddling around with alignments and the inevitable missing / mis-typed _s that will crop up.Good naming is really, really importantTo quote developers far more experienced than I:There are only three hard things in computer science: cache invalidation, off-by-one errors and naming things.Good names should be Clear, Concise and Unambiguous.Variables should sound like what they are. ArrayToAnalyse is a good name. It is the array this function needs to analyse. Awesome. TpStr is not. I haven't got the faintest idea what this thing is or what it's meant to represent. I just spent a minute looking for it in your code to try and figure it out and I've still got no idea what it really is, except that it invariably gets cleaned and then added to your dictionary.A() and R() are particularly bad. I know they're arrays (due to their declaration) but I've got no idea what they're meant to be used for. When I see A = Split(ArrayToAnalyse(iA, ColumnToAnalyse), ) in your code, how am I meant to know that it should be A and not R?Whereas if A was called, say, splitString and R was called resultsStorage then it's much easier to spot errors. (I don't actually know what R should be called, your names make it difficult to figure out what's actually going on and why).Also,Standard VBA Naming conventions have camelCase for local variables, and PascalCase only for sub/function names and Module/Global Variables. This allows you to tell at a glance if the variable you're looking at is local to your procedure, or coming from somewhere else.So:Dim localScope as VariantPrivate ModuleScope as VariantPublic GlobalScope as VariantPublic/Private Const CONSTANT_VALUE as String = This value never changesPublic Sub DoThisThing (ByRef firstParameter as Variant)following standard conventions is good because it allows other developers to easily read and understand your code.
_unix.378888
I just installed CentOS 7 using the minimal install, but no prompt is shown on the display past the bootloader. I removed rhgb quiet, but still nothing. The display reports a signal, but it is just black. There is a discrete GPU installed, but the BIOS is configured to start the IGP first; when I attach a display to the discrete GPU it gets no signal, unlike the IGP. I found a few things saying it might have to do with Secure Boot, but I'm pretty sure I have that disabled.

Mobo: MSI B250M PRO-VD
CPU: Celeron G3930
CentOS 7 minimal install: display has a signal but no prompt
centos
null
_webapps.8215
I have a Gmail contact whom I don't want to see me online, but I still want my other contacts to see me online. This is similar to the stealth setting in Yahoo. How can I do that with Gmail/Google Talk?

[Edit] Thanks to help from a user, blocking the contact, as officially described here, is the answer:

Blocking someone will prevent him or her from talking to you, and vice-versa. Blocked users can't see when you're signed in to Google Talk, and you won't see their status in your Friends list, either. If you decide you'd like to communicate with someone you've blocked, just unblock them.
How to appear offline to a specific Google contact in your chat list?
google contacts
Click the "Video & More" drop-down when you hover over the contact, then select Block. You will always appear in their list as offline.

(Edit: In Google Talk, hover over the contact and you get a down arrow. Click "Block (name)".)
_softwareengineering.334474
Shared Access Signature (SAS) is a delegated access mechanism available for Azure resources and accounts. Based on the documentation here - https://azure.microsoft.com/en-in/documentation/articles/storage-dotnet-shared-access-signature-part-1/ - it is clear that a SAS is never linked to a user principal, which makes it vulnerable to repudiation and defeats the very meaning of delegated access. Please help me understand the rationale behind the way SAS is implemented in Azure.
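For illustration only, here is a toy Python sketch (not Azure's actual SAS format or signing algorithm) of why a signed, bearer-style URL cannot identify the caller:

    import hmac
    import hashlib
    import base64
    from datetime import datetime, timedelta

    # Toy model only: the real Azure string-to-sign and query parameters differ.
    account_key = b'secret-account-key-for-demo-only'

    def make_toy_sas(resource: str, permissions: str, expiry: str) -> str:
        # Note what gets signed: resource, permissions, expiry... and no user principal.
        string_to_sign = f"{resource}\n{permissions}\n{expiry}"
        sig = hmac.new(account_key, string_to_sign.encode(), hashlib.sha256).digest()
        return f"{resource}?perm={permissions}&exp={expiry}&sig={base64.b64encode(sig).decode()}"

    expiry = (datetime.utcnow() + timedelta(hours=1)).strftime('%Y-%m-%dT%H:%MZ')
    print(make_toy_sas('/container/blob.txt', 'r', expiry))

The service can verify that the signature was produced by someone holding the account key, but nothing in the token says who is presenting it, which is where the repudiation concern comes from.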
How can Azure's implementation of SAS be called a delegated access mechanism?
security;azure
null
_unix.83055
Using iotop, I found out that [flush-8:0] is doing a lot of hard disk activity, which explains the annoying slowdowns. However, I can't figure out the actual cause of this flushing. No other process seems to be doing inordinate amounts of IO, and you can't strace a kernel task.

I've also enabled IO debugging (via echo 1 >/proc/sys/vm/block_dump), but it gives me block numbers on a JFS filesystem, and I have no idea how to translate those into filenames (just try a web search to see what you get).

I do have a suspect: a Java process that seems to cause hard disk activity whenever I interact with it. But strace -eread,write,open,stat -fp $PID shows very little activity. Are there any other syscalls that could cause problems? Any other ideas?
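For reference, here is a rough Python sketch of one way to summarise block_dump output from the kernel log (this assumes the commonly seen "name(pid): WRITE block N on device" message format, which can vary by kernel version):

    import re
    import subprocess
    from collections import Counter

    # block_dump messages usually look like:
    #   kjournald(181): WRITE block 3242 on sda3
    #   bash(2762): dirtied inode 118 (somefile) on sda3
    # Tally them per process so the heaviest writers stand out.
    pattern = re.compile(r'(\S+)\((\d+)\): (WRITE|dirtied inode)')

    # Reading the kernel log may require root on some systems.
    dmesg = subprocess.run(['dmesg'], capture_output=True, text=True).stdout

    writers = Counter()
    for line in dmesg.splitlines():
        match = pattern.search(line)
        if match:
            writers[f'{match.group(1)}({match.group(2)})'] += 1

    for proc, count in writers.most_common(10):
        print(f'{count:6d}  {proc}')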
Find out what [flush-8:0] is writing
linux;filesystems;logs;hard disk;io
null
_unix.203086
I have a file with thousands of lines that start with:

>Miriam132_38138 Otu32555|1

I need to remove 'Miriam*********' so that each line begins with:

>Otu32555|1

The first string of characters is always a combination of the word 'Miriam' and a set of 7, 8, or 9 characters. I played around with sed without much success.
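For illustration, here is a minimal Python sketch of the same transformation (the input/output file names are placeholders, and it assumes the prefix is always ">Miriam" plus 7-9 non-space characters followed by whitespace):

    import re

    # Matches a leading ">Miriam" tag: "Miriam" plus 7-9 non-space
    # characters, followed by whitespace, e.g. ">Miriam132_38138 ".
    PREFIX = re.compile(r'^>Miriam\S{7,9}\s+')

    # File names are assumptions for the sketch.
    with open('input.fasta') as src, open('output.fasta', 'w') as dst:
        for line in src:
            # Replace the matched prefix with a bare ">", leaving other lines untouched.
            dst.write(PREFIX.sub('>', line))

The same anchored pattern is what a sed expression would need to capture: the literal ">Miriam", a bounded run of non-space characters, and the trailing space.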
How to remove a string of characters after and before a specific character?
text processing;sed
null
_datascience.778
I read in this post, Is the R language suitable for Big Data, that big data constitutes 5TB. While it does a good job of providing information about the feasibility of working with this type of data in R, it provides very little information about Python. I was wondering whether Python can work with this much data as well.
Is Python suitable for big data?
bigdata;python
To clarify, I feel like the original question referenced by the OP probably isn't the best fit for an SO-type format, but I will certainly represent Python in this particular case.

Let me just start by saying that, regardless of your data size, Python shouldn't be your limiting factor. In fact, there are just a couple of main issues that you're going to run into when dealing with large datasets:

Reading data into memory - This is by far the most common issue faced in the world of big data. Basically, you can't read in more data than you have memory (RAM) for. The best way to fix this is by making atomic operations on your data instead of trying to read everything in at once.

Storing data - This is actually just another form of the earlier issue; by the time you get up to about 1TB, you start having to look elsewhere for storage. AWS S3 is the most common resource, and Python has the fantastic boto library to facilitate dealing with large pieces of data.

Network latency - Moving data around between different services is going to be your bottleneck. There's not a huge amount you can do to fix this, other than trying to pick co-located resources and plugging into the wall.
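To make the "atomic operations instead of reading everything at once" point concrete, here is a rough sketch using pandas' chunked CSV reading (the file name, column name, and chunk size are placeholders; tune the chunk size to your memory budget):

    import pandas as pd

    total = 0
    row_count = 0

    # Stream the file in 1-million-row chunks instead of loading it whole;
    # each chunk is an ordinary DataFrame that fits comfortably in RAM.
    for chunk in pd.read_csv('huge_file.csv', chunksize=1_000_000):
        total += chunk['amount'].sum()   # 'amount' is a hypothetical column
        row_count += len(chunk)

    print(f"rows: {row_count}, mean amount: {total / row_count:.2f}")

The same idea applies beyond pandas: process a bounded chunk, fold it into a running aggregate, discard it, and repeat, whether the source is a local file, a database cursor, or objects pulled from S3 one at a time.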