Dataset columns: text (string, 454 to 608k chars), url (string, 17 to 896 chars), dump (string, 9 to 15 chars), source (string, 1 class), word_count (int64, 101 to 114k), flesch_reading_ease (float64, 50 to 104).
I posted this as a comment on Hacker News yesterday, but didn't receive any responses. I hope I might get some comments here, in the more PL oriented community.

---------------

The handling of the this argument is of crucial importance in the design of programming languages with first-class functions that feature some kind of objects, i.e. data with attached behavior (call them functions or methods). (I use call as in f(x) and invoke as in o.m().)

There are basically 3 possibilities that I see:

Lua is an example of a language that uses this approach:

table.method1(arg1, arg2, arg3)
table:method2(arg2, arg3, arg4)

Here, method1 is called with 3 parameters (without the this parameter), but method2 is called with 4 (the first parameter is the this object, table). This is undoubtedly the approach with the most efficient execution, but it places a huge burden on the programmer, as one has to distinguish methods from ordinary functions both at the call site and at the definition (if you attach a normal function to a table and invoke it as a method, you have a problem if the function doesn't expect the first parameter to be this). Also, this approach is the opposite of the one found in traditional OOP languages such as Java and C++.

Python is an example of a language that uses this approach:

f = obj.method
f(arg1, arg2)

In this case, the function f is called with the first parameter being self, the Pythonic version of this. In Python, the self parameter (Pythonic this) is declared explicitly at method definition, but I believe that it could also work if it were implicit. This approach is perhaps the most sensible approach from the programmer's point of view, where methods "just work" - you can call them using the normal dot syntax, and you don't need to worry when using methods as first-class functions - they will remember the object they were attached to. However, a naive implementation of this technique is quite slow, as the function must be bound to the this object every time it is accessed. It is possible to invoke (access and call at the same time) a method without this overhead (e.g. use some object's internal dictionary to check whether it has a method with the appropriate name, and call it, passing this as the first parameter), but not when accessing attributes - one must check whether the attribute is a method, and bind it to this if it is. However, this argument is irrelevant in a language that supports arbitrary getters/setters (which seems to be the direction that new OO languages are headed in). Also, it complicates matters with ad-hoc objects:

o = {}
f = (arg1, arg2, arg3) -> arg1 + arg2 + arg3
o.m = f

When we invoke the method m of the object o, do we pass this as the first parameter (when the function f might not expect it) or not (losing the OO aspect)? Python conveniently avoids this matter by not supporting ad-hoc objects out-of-the-box.
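(A concrete sketch of approach 2 in present-day Python - the Greeter class and names are only illustrative, not from the original post. It shows that each attribute access builds a fresh bound-method object carrying the receiver along, which is exactly the per-access binding cost mentioned above.)

class Greeter:
    def __init__(self, name):
        self.name = name
    def greet(self, suffix):
        return "hello " + self.name + suffix

g = Greeter("world")
f = g.greet                    # attribute access creates a new bound method
print(f("!"))                  # hello world! -- f remembers g
print(Greeter.greet(g, "?"))   # hello world? -- the plain-function view, self passed explicitly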
Javascript is an example of a language that uses this approach:

f = (arg1, arg2) -> print this, arg1 + arg2
f(1, 2)    // prints "undefined, 3"
o.m = f
o.m(1, 2)  // prints "o, 3"

Here, this is an implicit, undeclared parameter of every function, and is passed at every call/invocation (if a function is called like in the second line, this is set to (i) the global object (window) in the browser, or to (ii) undefined in strict mode). Here, there is some overhead at every unbound function call, as an extra parameter (undefined) is passed, and it is not clear how this overhead can be avoided (except by avoiding plain function calls, and sticking to OOP). Ad-hoc object creation is simple, as functions/methods have a simple way of knowing whether they were called (without a this parameter) or invoked (with a this parameter). However, care must be taken when using methods as first class values:

f = o.m          // f will have the this parameter undefined
g = o.m.bind(o)  // g will have o passed as the this parameter

A syntax extension would be nice, e.g. g = bind o.m, but it might be ambiguous. In my experience, this situation is not encountered very frequently, so the trade-off is acceptable.

I see all these possibilities lacking, but tend to consider the third solution as the most favorable, mainly because it supports elegant and unambiguous ad-hoc object construction. Maybe we could combine solutions 2 and 3, e.g. by using getters that set this on method objects automagically, but to keep ad-hoc objects simple, an implicit this seems to be a must, which means we cannot avoid the plain function call overhead... I'm sure I'm missing a very simple, straightforward alternative, and I would be extremely grateful for any comments or constructive criticism. Edit: formatting.

I think when you want to compare an explicit and an implicit method, a good way would be to define a rigorous translation of the more implicit into the more explicit technique. In your case, it would be a translation from solutions 2 and 3 to solution 1 (or possibly a different explicit presentation). It's very useful when your description of the implicit behavior is otherwise informal and prone to confusion ("implicit, undeclared parameter"?), and it's even better if your translation exhibits the performance trade-off of each solution.

On a more personal note, I think that programmers should make a distinction between functions and methods: if you consider something a method (whether it was initially defined as a method or was independently defined and attached later on in a dynamic language), it *should* have the state of the object in its execution context (by way of an explicit 'this' parameter or some more implicit mechanism). If it was not designed with this object state in mind, it does not make sense to transparently consider it a method; but you could define a wrapper that takes a "non-self-conscious" function (sorry for the bad pun) and returns a "self-conscious" function that just ignores its object context. So I would choose an approach similar to point 2, accepting `o.m = f` only if `f` does indeed take that context parameter.
The solution of point 3 looks like an "everyone lose" perspective to me: in the common case of "pure function", you have an useless undefined parameter (which is quite bad from a semantics point of view), and if you want to transform it into a method you have to use an additional explicit `bind` operation in any case (while with the point 2 approach you need to transform only the function that are not directly compatible with the "method call convention"). More generally I would avoid anything like "oh it does not really make sense but let's say that it actually means this" when you have the opportunity to be explicit (possibly using some nice syntactic sugar or designing some inference later). But that depend on your target public and expected usage: large-scale programs, scripts, something in between? If you want an object or class to have both "methods" and "static functions", then you get something which is similar to your point 1. This looks very reasonable to me : `table.foo(..)` should be thought as a method call on instance `table`, and `table:foo(..)` as a static function/method call on *class* `table`. In C++ you also have a syntactic distinction at call sites between static and non-static methods. Finally, I think the distinction of point 1 (: or .) can also answer you question about point 2 : you could write `o.m = f` to bind a self-conscious function `f`, and `o:m = f` to bind a non-self-conscious function. You can also explain that by saying that method call are just sugared function calls, ie. `o.m = f` translates to `o:m = fun(*params) -> f(o, *params)`, or that static methods are actually usual methods wrapped into a context-ignored transform, ie. `o:m = f` translates into `o.m = fun(o, *params) -> f(params)`. I would make whatever choice is the more implementation-sensible and efficient giving the expected common use among the target users of your programming language. The first choice seems more minimalistic and thus better to me, but you really can't tell, eg. if you target a virtual machine that has fast instructions for this-passing calls. > It's very useful when your description of the implicit behavior is otherwise informal and prone to confusion I assumed basic familiarity with Lua, Python and Javascript. My bad, I probably should clarify the OP so that everyone would understand exactly what each approach is. > But that depend on your target public and expected usage: large-scale programs, scripts, something in between? Yes, I should have emphasized my goal/the expected usage of the language: a scripting/rapid prototyping language that tries to stay out of the programmer's way as much as possible. If you want to write large-scale applications, you can always enhance your dynamic program with static types/contracts/optimized classes/etc later. > If you want an object or class to have both "methods" and "static functions" Aren't static functions simply the methods of the class? If you have first-class classes (which seems to be the case in most class-based dynamic OO languages), then a class is just an object, and can thus have methods. A more interesting case are first-class modules. Then the dot-notation is used simply to denote namespaces, and methods are really just functions that need no this context. > I think the distinction of point 1 (: or .) Actually, I'm not considering option 1 at all, I only included it to cover all existing approaches (that I know of). Option 1 seems the terrible from many perspectives. 
The most obvious are the usability issues - the user has to know what kind of "callable object" f is to be able to use it. Also, I consider it an insult to my intelligence and humanity whenever I have to tell the computer something that it should know / could figure out by itself. In most cases, only one of the possibilities is correct, and using the other one would results in wrong results or runtime errors. Also, it is wrong from an engineering point of view. As I mentioned before, one has to know whether f is a function or a method, and this information might be far away from where you use f. Furthermore, changing the nature of f would mean you have to edit a lot of code (that calls f) in a lot of different places. The aspect at which point 1 excels is runtime efficiency. However, chasing runtime efficiency at the expense of programmer productivity or, god forbid, correctness, is a path that leads us back to C or even assembly. And Lua's excellent tracing JIT implementation LuaJit has shown us that efficiency can be achieved even in a very dynamic language. All in all, thank you for your response. You raise interesting points, and while I didn't reply to most of them, I took them into account. I feel I'm getting closer to a solution that would be acceptable to me and my goals. I prefer the following way of doing it: For every message "X" understood by an object (in the Smalltalk sense), there is a procedure also called X, that sends that message. Schematically: (define X (lambda (object . args) (send-message object "X" args))) This means you can use such generic procedures as ordinary procedures (because they are ordinary procedures): ;; Map X across list of two newly created objects (map X (list (make-object) (make-object)))) Note that there is no "this" - I find "this" to be a Rube Goldberg device. P.S. This approach has long been used in Lisp and related languages. Note that this "reversal of point of view" is much older than smalltalk, as it is for example the idea underlying the well-known equivalence of a finite vector space and its bidual in a linear algebra: a vector (data, here "message") can also be seen as the second-order function that take a linear function (here "object") and pass it the data. More generally, in all systems where you have a notion of "data" and a notion of "computation", and are reasonably observational, you can transition freely between "the data" and "the computation of passing this data to a computation". Actually this could be seen as a definition of being "reasonably observational". This is probably, as everything else in the world, related in some way to the Yoneda Lemma. That is an idea that I considered as well, but I dropped it because I believe it brings more problems than advantages. The main problem with this approach is that the method name has to be available in the current namespace. This could easily result in namespace pollution, or alternatively every name would have to be defined in advance. Also, I'm not sure how this would combine with ad-hoc objects that are created dynamically. Maybe a better solution is to use some special syntax sugar for this case, such as _.x. _.x map(_.tostring, [obj1, obj2]) Now, I know that I (and thus this reply) am influenced by my programming experience, which has mostly been in Python/Javascript. I should study Lisp and its descendants more, maybe I get some ideas how to unify both approaches. 
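(For what it's worth, the _.x sugar proposed above can already be emulated in Python with a tiny helper object; this is only a sketch and the _Accessor name is invented, but it shows the idea of turning a method name into an ordinary first-class function.)

class _Accessor:
    def __getattr__(self, name):
        # _.name is a plain function: it invokes .name on its first argument
        return lambda obj, *args: getattr(obj, name)(*args)

_ = _Accessor()
print(list(map(_.upper, ["obj1", "obj2"])))   # ['OBJ1', 'OBJ2']

The standard library's operator.methodcaller covers the same ground when the method name is fixed in advance.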
Actually, the fact that procedures live in a namespace is an advantage, instead of looking up methods dynamically by name at runtime. Sometimes you may need that flexibility, but for nearly all code you don't. In languages like Ruby it's a problem that methods can't live in namespaces. There was some talk about providing a mechanism to make methods live in namespaces to allow different parties to provide different methods with the same name without fear of names clashing. Namespaces already do this for all values (which includes classes). That you need a different system for methods is telling. Sure, you do lose the ability to create new methods with new names at runtime, but this is good riddance. Meta programming with macros works much better than meta programming by monkey patching. Once you use function call syntax for methods, you realize that the first argument isn't special, and that you could dispatch on any of the arguments. This leads to multiple dispatch or even to predicate dispatch. It is entirely possible that I am completely wrong about this :) Could you give a concrete scenario that you have in mind where this brings more problems than advantages? Actually, the fact that procedures live in a namespace is an advantage Yes. Although the interactions of module systems and objects can be confusing. (E.g. if you have a module system that allows renaming of imported identifiers, then you can have generic procedures with names different from the messages they send... Oh, and whether message names should be simple strings or (module, message) pairs themselves is another complicated topic. Previous discussions: Should method names be (module-) scoped?, Namespaces for methods?, How important is language support for namespace management?.) Sure, you do lose the ability to create new methods with new names at runtime Well, you can always use SEND-MESSAGE for those. (Or if you have EVAL, define new generic functions at runtime.) I'm also among those who think the best answer for 'this' is to get rid of it - and model self-reference explicitly when we need it. If 'this' is an explicit argument to each 'method' that needs it, we have a lot more freedom to model open recursion, develop alternative object models, control reentrant computations, and prevent looping constructs in cases we don't want them. We can separate the concepts of corecursion and fixpoint. Similarly, I would say that objects should generally not construct their own dependencies. I.e. if you need an Integer, ask for one in your constructor. This is a valuable separation of concerns, e.g. with respect to persistence, testing, debugging. Encapsulation can be separated from the 'object' concept. When I was developing a language based on actors model, I used a separate 'configuration' concept, which also handled the fixpoint if desired. (There are also a lot of optimization advantages to constructing objects in one big declarative configuration.) open recursion, develop alternative object models, control reentrant computations, and prevent looping constructs in cases we don't want them. We can separate the concepts of corecursion and fixpoint. open recursion, develop alternative object models, control reentrant computations, and prevent looping constructs in cases we don't want them. We can separate the concepts of corecursion and fixpoint. Unfortunately, I'm not familiar with these concepts and how the presence of implicit this affects them. Could you please provide some references or examples? 
class test(object): foo = 2 bar = lambda x:x o = test() print o.foo # prints '2' print o.bar(2) # error... got 2 arguments instead of 1 If objects can house regular values, they should also be able to house function values without assuming they are methods. This illustrates why Lua has the dot and the colon (as in the example in the OP). The Lua user has to explicitly distinguish between functions stored in the instance and methods. I would argue that. Lua's scheme is sort of reasonable but I don't like it either, and Python's scheme is too much of a hack for my liking. If object methods are going to seem closed over the object, then they should actually be closed over the object (hooking up self in methods could have been something the class does when an object is created). Another design similar to the one I'm using is to have dot be a special reverse application that uses the type to resolve overload selection. So instead of obj.foo being a lookup inside obj, it's just an application of foo to obj. If there are multiple foo's in scope the type of obj is used to disambiguate them. Or you could just write foo(obj) and if needed manually qualify foo.. this is not the programmer's task, its the compiler's task (or VM for dynamic languages). This argues for a more implicit method of making the instance values available in the functions environment so that the compiler can make that decision. I couldn't agree more. Maybe function objects could have some property that tells whether the function needs a context (this) or not. This could easily be detected at function definition (if this is a keyword that cannot be used for anything else). Except in presence of eval and similar constructs. eval Then, when an object is created or it's attribute is assigned a function value, it could detect what kind of function we're using, and how it should be treated. ...but it's the default type metaclass that modifies bar into a method. You can work around this problem by using a decorator: type bar class Test(object): foo = 2 bar = staticmethod(lambda x:x) Or if you want to go fancy: def DontTouchMyLambdas(base, name, dict): def isLambda(v): aLambda = lambda: None return isinstance(v, type(aLambda)) and v.__name__== aLambda.__name__ for k in dict: if isLambda(dict[k]): dict[k] = staticmethod(dict[k]) return type(base, name, dict) class Test: __metaclass__ = DontTouchMyLambdas foo = 2 bar = lambda x: x Thanks for the explanation. I was thinking the magic was on the lookup side, which would have been much worse. It still seems a little weird to me that Python chooses to interpret every function that occurs in a class as a method. It seems to me that rather than having a 'staticmethod' type that doesn't get hooked up, it would have been cleaner to have a 'method' type that does. I was thinking the magic was on the lookup side Until now, I've never considered that anything else could be possible. Thanks, you gave me a great idea. It seems to me that rather than having a 'staticmethod' type that doesn't get hooked up, it would have been cleaner to have a 'method' type that does. Why not have the compiler figure that out by itself? If it requires the use of the name this for the context, then figuring out whether a function needs the context or not is a simple source code analysis. 'this' is an interesting concept and although I had it in the initial version of my language Babel-17, I removed it from it for a while, only to put it in later again. 
I think now 'this' just works in Babel-17 as you would expect it to work. It basically uses approach 2 (automatic method binding), but there is no problem with ad-hoc objects and so on: val o = {} val f = (arg1, arg2, arg3) => arg1 + arg2 + arg3 o.m = f just assigns f to o.m, so the following holds: o.m (3, 4, 5) == 12 'this' doesn't play a role here, because it can only be used in an object definition, like: val o = object def m (a, b, c) = this + a + b + c def plus_ x = x end The code val o = {} o.m = this would be rejected as illegal. val o = {} val f = (arg1, arg2, arg3) => arg1 + arg2 + arg3 o.m = f o.m (3, 4, 5) == 12 val o = object def m (a, b, c) = this + a + b + c def plus_ x = x end val o = {} o.m = this This is certainly a step in the right direction. The only thing left is to allow the user to be able to define methods outside of class/object definitions and add them to objects laters: f = (x) -> print this, x obj = {} obj.m = f obj.m(1) // prints "obj, 1" Edit: Of course, my approach complicates things a little bit. One source of a lot of bugs in Javascript is the following situation: o1 = { x = 1 f = () -> g = () -> print this.x o2 = {x = 2} o2.m = g } If this is bound at function creation time, then o2.m() == 1. However, this is inconsistent with the above, so maybe this should be bound at method invocation time, which would result in o2.m() == 2. o2.m() == 1 o2.m() == 2 If using approach (2) (this is bound at function invocation time), then to access the "previous" this, we could use one of the following approaches: g = (this = this2) -> print this.x, this2.x // prints 1, 2, as this2 is the self-reference // now, and this is resolved lexically or this2 = this g = () -> print this2.x, this.x // prints 1, 2 This is indeed a very complicated problem (pun intended). The `this' reference as provided in most OO languages, dynamic or otherwise, is simply broken. Starting with the fact that it is a single keyword, which is not that great when you can nest objects/classes/methods. In JavaScript, for example, that is a frequent source of (sometimes subtle and hard to spot) errors. Only few languages (e.g. Scala) do the right thing and allow to bind a self variable per object instead, with proper lexical scoping. (However, Scala still makes an annoying distinction between functions and methods.) Of course, that prevents not just accidental, but also intentional self-capture, i.e. invoking a method on a different object than it was defined with. But in my experience the latter is almost always a horribly bogus thing to do anyway, and interferes badly with encapsulation. Better make the object argument explicit if you need it to vary. Edit: for clarity, the only sane semantics I see is the following: let o1 = {(self) x = 1; m = () -> self.x} o1.m() == 1 let m1 = o1.m; m1() == 1 let o2 = {(self) x = 2; m = o1.m} o2.m() == 1 let o3 = {(self) x = 3; i = {(self') x = 4; f = () -> self.x; g = () -> self'.x}} o3.i.f() == 3 o3.i.g() == 4 Note that all methods are ordinary functions here, and capture `self' (which is an ordinary variable) simply through their lexical context. It's the good old objects-as-records-of-closures model. 
You can still program "first-class methods" easily: let mm = (self) -> () -> self.x * 10 o1.m := mm(o1); o1.m() == 10 o2.m := mm(o2); o2.m() == 20 You should like Babel-17 then, it works exactly as you described it: val o1 = object def x = 0; def m = this.x end #assert o1.m == 0 o1.x = 1 #assert o1.m == 1 val o1 = object def x = 0; def m = this.x end #assert o1.m == 0 o1.x = 1 #assert o1.m == 1 val m1 = o1.m #assert m1 == 1 val o2 = object def x = 2; def m = o1.m end #assert o2.m == 1 val o3 = object def self = this def x = 3 def i = object def x = 4; def f = self.x; def g = this.x end end #assert o3.i.f == 3 #assert o3.i.g == 4 I think it is not a problem that 'this' is bound to the inner most context, but find it rather convenient. If you need to address 'this' from an outer context, just bind 'this' in this outer context to another name ('self' in the above). That's great, the only thing I'm missing here is implicitness. Now, of course implicitness could be a bad thing, but in my limited experience a little bit of implicitness in just the right places can be a good thing, especially in scripting/rapid prototyping languages, which are kind-of my focus. So, I would rewrite your examples like this: o1 = {x = 1, m = () -> this.x} // this is the default self-reference o1.m() == 1 m1 = o1.m; m1() == 1 o2 = {x = 2, m = o1.m} // equivalent to {x = 2, m = m1} o2.m() == 1 o3 = { x = 3 i = { this = this2 // just a convenient notation for rebinging // the self-reference (and unbinds this // in this lexical context) x = 4 f = () -> this.x // refers to o3 g = () -> this2.x } } o3.i.f() == 3 o3.i.g() == 4 Do you think this is too confusing, or do you "get it" immediately and could get used to programming like this? Maybe not "confusing", but in my experience, implicit `this' is rather error-prone in languages where you frequently nest constructs that bind it. See JavaScript. Also, to be honest, I find your self binding notation far from optimal. It looks like a property definition, which would suggest that (1) it defines `this', not `this2', and (2) it defines it as a property of i, not a lexically scoped variable in the object body. Well, reconsidering it, I see that it really was an awful idea. It certainly breaks the useful principle of least surprise... I have to find another way. As far as the implicit/explicit goes... What I'm trying to achieve is to make programming easier - faster, less tedious, less boring, less repetitive, more natural. I really hate it when I have to do something as a programmer/user that could probably be done by the compiler, or possibly by the library designer. That is why I dislike Lua's explicit invocation syntax, and Python's obligatory explicit declaration of the self parameter. Now, I do realize that there are situations that these more explicit approaches turn out to be very useful and much better than implicit, but in most situations, they only consume keystrokes. Personally, I find Javasctipt's this almost good - it allows rapid prototyping in many situations, and only misbehaves in a handful of situations. However, for someone not familiar to JS, these situations are very tricky, and the behaviors very unexpected. I'd like to find some combination of features, where in most situations, the programmers would not have to be explicit, and the implicit behaviors would be the intuitive ones. Some situations would still require explicitness.
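(To make the objects-as-records-of-closures model discussed above concrete, here is a small Python sketch - make_point and the field names are invented, not from the thread - showing an explicit, lexically scoped self and why re-attaching a method does not re-bind it.)

def make_point(x):
    self = {}
    self["x"] = x
    self["m"] = lambda: self["x"] * 10   # closes over `self` lexically
    return self

o1 = make_point(1)
m1 = o1["m"]
print(m1())        # 10 -- detaching the method keeps the object
o2 = make_point(2)
o2["m"] = o1["m"]  # re-attaching does not re-bind...
print(o2["m"]())   # ...still 10, it reads o1's state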
http://lambda-the-ultimate.org/node/4328
CC-MAIN-2018-39
refinedweb
4,755
62.07
When you use Breadcrumbs extension is expected to link all the previous page tree, but you only get a link to the main page. Reproducible: Always Steps to Reproduce: 1.Create a new page (ej.Wiki) 2.Create a subpage for the previous one (ej.Wiki:Help) 3.Add the Breadcrumbs extension in both pages 4.Save your changes Actual Results: In Wiki page you get a link to the main page (Main > Wiki) but in the second one the Breadcrumbs extension does not work, you get (Main >Wiki:Help) instead of (Main > Wiki > Help) Expected Results: You should get (Main > Wiki > Help) with links to each page instead of (Main > Wiki:Help). taking bug, will investigate when Bug 311823 is complete. How does this depend on the advanced syntax bug? For the specific "Wiki:Help" case, I suspect it's because "Wiki" is a namespace in mediawiki, which means that it gets special treatment in the database. Sorry, it's not a namespace, it's taken as an interwiki link. Deb: do you think we need all the default interwiki entries here? Seems like some of them ("Wiki" and "RFC", at least) are more likely to cause confusion than to be used for links to their respective sites. We tend to prefer templates to interwiki stuff anyway, in most cases, too. Oh, but of course the example in the comment uses "Wiki:", while the actual problem as linked from URL uses "MDC:", meaning that the example doesn't actually describe the same bug. Ahem! MDC: is a namespace issue, almost certainly. Using Wiki: will likely slam up against interwiki issues at some point, though perhaps not in the breadcrumbs path. Regarding the "Wiki" thing -- could someone post/send me the full interwiki links list? I suspect we can get rid of the vast majority of non-MDC related links. Might as well clean that up as well. I filed bug 339668 on the interwiki-cleanup issue. I set it as depends because what I am doing in Bug 311823 will change (slightly) the way Breadcrumbs finds its parent. I suspect that it might fix this, but am not sure yet. If you intend to fix this sooner, or think my depend marking is wrong, feel free to remove the dependancy. After that bug is fixed, one can use <breadcrumbs>[[MDC]]</breadcrumbs> to get correct breadcrumbs on [[MDC:Help]]. Do we really need anything else, given that we don't use namespaces that much? (Nickolay: it's not clear from bug 311823 _what_ the new syntax or behaviour will be, though I still hope that someone will say so there, and soon.) This doesn't require that advanced syntax to fix, necessarily, and there doesn't seem to be any activity in the advanced syntax bug to indicate that this would indeed be fixed by it, so I'm removing the dep. WFM with the switch to deki, since we don't have the breadcrumbs extension anymore.
https://bugzilla.mozilla.org/show_bug.cgi?id=339653
CC-MAIN-2016-22
refinedweb
497
71.34
Greatest Common Divisor

Let a and b be integers, not both zero. The largest integer d such that d exactly divides both a and b is called the greatest common divisor of a and b. The GCD of the two integers a and b is denoted as gcd(a, b).

The Process

This implementation follows Euclid's Algorithm. The gcd function for two integers works as shown below. First we divide a by b and get the remainder r. If r is 0 then b is the GCD, as it exactly divides a and obviously itself; otherwise r might be the GCD, in which case r should divide b exactly. To test for this, in the next iteration the new value of a is the old value of b, and the new value of b is the value of r from the previous iteration. Thus the process continues while r is not 0. When r is zero, b exactly divides a, and thus b is the GCD of the initial two integers. The process to find the GCD of two integers is as below:

INTEGER a, b, r
WHILE TRUE , run loop forever
  r = a % b
  IF r = 0 THEN
    BREAK from loop
  ENDIF
  a = b
  b = r
ENDWHILE

Now b is the GCD of the two initial integers.

This process is used to determine the GCD of n integers. Let there be n integers a_1, a_2, ..., a_n. Taking g_2 = gcd(a_1, a_2), the GCD of (a_1, a_2, a_3) will be gcd(g_2, a_3), and the GCD of the first i integers is g_i = gcd(g_(i-1), a_i); thus for n integers the GCD is g_n. This is implemented by taking one variable to hold the intermediate GCD values and another for the input numbers. The intermediate GCD value is set to 0, and then the next input number and the current intermediate GCD result are processed to find the next GCD. This is performed iteratively: at the ith iteration we find the GCD of the first i integers by g_i = gcd(g_(i-1), a_i), with g_0 = 0 initially.

Sourcecode

#include <stdio.h>
#define TRUE 1

int gcd (int a, int b);

int main (void)
{
  int g = 0, a, i = 0;

  printf ("\nFinding GCD of n integers");
  printf ("\nEnter numbers serially (\"0\" to end)\n");

  while (TRUE)
  {
    printf ("Enter Number [%d] :", i++);
    scanf ("%d", &a);

    /* 0 to end */
    if (a == 0)
      break;

    /* a is used to hold the temporary value of *
     * gcd at ith iteration                     */
    g = gcd (g, a);
  }

  printf ("\nGCD is %d\n", g);
  return 0;
}

int gcd (int a, int b)
{
  int rem;

  if (b == 0)
    return a;

  while (TRUE)
  {
    rem = a % b;
    if (rem == 0)
      break;
    a = b;
    b = rem;
  }
  return b;
}

g is initialized to 0, which is used to hold the intermediate GCD results. The integers are input inside the loop, the GCD of the currently input integer a with the old value of g is found, and g is updated with this result, which reflects the GCD of all the integers input so far. When 0 is input the loop is terminated. The gcd function is coded following the exact structure of the algorithm.

Output

Run 1: input = 35, 95, 215, 65

Finding GCD of n integers
Enter numbers serially ("0" to end )
Enter Number [1] :35
Enter Number [2] :95
Enter Number [3] :215
Enter Number [4] :65
Enter Number [5] :0

GCD is 5

Run 2: input = 221, 39, 4199, 247, 91

Finding GCD of n integers
Enter numbers serially ("0" to end )
Enter Number [1] :221
Enter Number [2] :39
Enter Number [3] :4199
Enter Number [4] :247
Enter Number [5] :91
Enter Number [6] :0

GCD is 13

Resources

Check more about GCD/HCF/GCF in wikipedia:

Update
27.03.2011 : Code slightly changed. gcd () function now safe from division by 0
07.05.2011 : Code updated, slight modification in iterative function.
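As a quick sanity check of the iteration (not part of the original write-up), here is the fold for Run 1, using the fact that gcd(0, a) = a for the initial step:

g = gcd(0, 35)  = 35
g = gcd(35, 95) = 5     (95 = 2*35 + 25, 35 = 1*25 + 10, 25 = 2*10 + 5, 10 = 2*5 + 0)
g = gcd(5, 215) = 5     (215 = 43*5)
g = gcd(5, 65)  = 5     (65 = 13*5)

which matches the printed result "GCD is 5".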
https://phoxis.org/2011/02/27/gcd-of-n-integers/
CC-MAIN-2019-18
refinedweb
638
63.02
Connecting the dots. Spanning trees have a wide variety of applications in GIS... or did. I have been finishing writings on distance and geometry and I almost forgot this one. The principle is simple. Connect the dots, so that no line (edge) overlaps, but only connects at a point. In practice, you want the tree to produce a minimal length/distance output. There are a variety of named forms, each with their subtle nuance and application, but my favorite is Prim's... only because I sortof understand it. I am not sure if this is his implementation exactly, but it is close enough. So I post it here for those that want to try it. Besides, Prim's can be used to produce mazes, a far more useful application of dot connection.

My favorite code header to cover imports and array printing and some bare-bones graphing:

import sys
import numpy as np
import matplotlib.pyplot as plt
from textwrap import dedent, indent

ft = {'bool': lambda x: repr(x.astype('int32')),
      'float': '{: 0.1f}'.format}
np.set_printoptions(edgeitems=10, linewidth=100, precision=2,
                    suppress=True, threshold=120, formatter=ft)
np.ma.masked_print_option.set_display('-')
script = sys.argv[0]

Now the ending of the script where the actual calls are performed and an explanation of what is happening occurs.

# ---------------------------------------------------------------------
if __name__ == "__main__":
    """Main section... """
    #print("Script... {}".format(script))
    # ---- Take a few points to get you started ----
    a = np.array([[0, 0], [0, 8], [10, 8], [10, 0], [3, 4], [7, 4]])
    idx = np.lexsort((a[:, 1], a[:, 0]))  # sort X, then Y
    a_srt = a[idx, :]                 # slice the sorted array
    d = _e_dist(a_srt)                # determine the square form distances
    pairs = mst(d)                    # get the orig-dest pairs for the mst
    plot_mst(a_srt, pairs)            # a little plot
    o_d = connect(a_srt, d, pairs)    # produce an o-d structured array

Now the rest is just filler. The code defs are given below.

def _e_dist(a):
    """Return a 2D square-form euclidean distance matrix. For other
    :  dimensions, use e_dist in ein_geom.py"""
    b = a.reshape(np.prod(a.shape[:-1]), 1, a.shape[-1])
    diff = a - b
    d = np.sqrt(np.einsum('ijk,ijk->ij', diff, diff)).squeeze()
    #d = np.triu(d)
    return d


def mst(W, copy_W=True):
    """Determine the minimum spanning tree for a set of points represented
    :  by their inter-point distances... ie their 'W'eights
    :Requires:
    :--------
    :  W - edge weights (distance, time) for a set of points. W needs to be
    :      a square array or a np.triu perhaps
    :Returns:
    :-------
    :  pairs - the pair of nodes that form the edges
    """
    if copy_W:
        W = W.copy()
    if W.shape[0] != W.shape[1]:
        raise ValueError("W needs to be square matrix of edge weights")
    Np = W.shape[0]
    pairs = []
    pnts_seen = [0]  # Add the first point
    n_seen = 1
    # exclude self connections by assigning inf to the diagonal
    diag = np.arange(Np)
    W[diag, diag] = np.inf
    #
    while n_seen != Np:
        new_edge = np.argmin(W[pnts_seen], axis=None)
        new_edge = divmod(new_edge, Np)
        new_edge = [pnts_seen[new_edge[0]], new_edge[1]]
        pairs.append(new_edge)
        pnts_seen.append(new_edge[1])
        W[pnts_seen, new_edge[1]] = np.inf
        W[new_edge[1], pnts_seen] = np.inf
        n_seen += 1
    return np.vstack(pairs)


def plot_mst(a, pairs):
    """plot minimum spanning tree test """
    plt.scatter(a[:, 0], a[:, 1])
    ax = plt.axes()
    ax.set_aspect('equal')
    for pair in pairs:
        i, j = pair
        plt.plot([a[i, 0], a[j, 0]], [a[i, 1], a[j, 1]], c='r')
    lbl = np.arange(len(a))
    for label, xpt, ypt in zip(lbl, a[:, 0], a[:, 1]):
        plt.annotate(label, xy=(xpt, ypt), xytext=(2, 2), size=8,
                     textcoords='offset points', ha='left', va='bottom')
    plt.show()
    plt.close()


def connect(a, dist_arr, edges):
    """Return the full spanning tree, with points, connections and distance
    : a - point array
    : dist - distance array, from _e_dist
    : edge - edges, from mst
    """
    p_f = edges[:, 0]
    p_t = edges[:, 1]
    d = dist_arr[p_f, p_t]
    n = p_f.shape[0]
    dt = [('Orig', '<i4'), ('Dest', 'i4'), ('Dist', '<f8')]
    out = np.zeros((n,), dtype=dt)
    out['Orig'] = p_f
    out['Dest'] = p_t
    out['Dist'] = d
    return out

The output from the sample points is hardly exciting. But you can see the possibilities for the other set. This one is for 100 points, with a minimum spacing of 3 within a 100x100 unit square. Sprightly solution even on an iThingy using python 3.5.

Now on to maze creation .....
https://geonet.esri.com/blogs/dan_patterson/2017/01
CC-MAIN-2017-30
refinedweb
721
69.38
Hello,

This is the second version of the patch previously posted.

v1 --> v2 : Extended the guard code to cover the byte exchange case as well following opinion of Will Deacon. Checkpatch has been run and issues were taken care of.

From: Sarbojit Ganguly <[email protected]>
Date: Thu, 3 Sep 2015 13:00:27 +0530
Subject: [PATCHv2] ARM: Add support for half-word atomic exchange

Since support for half-word atomic exchange was not there and Qspinlock on ARM requires it, modified __xchg() to add support for that as well. ARMv6 and lower does not support ldrex{b,h} so, added a guard code to prevent build breaks.

Signed-off-by: Sarbojit Ganguly <[email protected]>
---
 arch/arm/include/asm/cmpxchg.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 916a274..a53cbeb 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -39,6 +39,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 	switch (size) {
 #if __LINUX_ARM_ARCH__ >= 6
+#if !defined(CONFIG_CPU_V6)
 	case 1:
 		asm volatile("@ __xchg1\n"
 		"1:	ldrexb	%0, [%3]\n"
@@ -49,6 +50,22 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
+
+	/*
+	 * Half-word atomic exchange, required
+	 * for Qspinlock support on ARM.
+	 */
+	case 2:
+		asm volatile("@ __xchg2\n"
+		"1:	ldrexh	%0, [%3]\n"
+		"	strexh	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+#endif
 	case 4:
 		asm volatile("@ __xchg4\n"
 		"1:	ldrex	%0, [%3]\n"
--
Regards,
Sarbojit

------- Original Message -------
Sender : Sarbojit Ganguly <[email protected]> Technical Lead / SRI-Bangalore-AP Systems 1 / Samsung Electronics
Date : Aug 20, 2015 19:55 (GMT+05:30)
Title : Re: Re: Re: [PATCH] arm: Adding support for atomic half word exchange

>> My apologies, the e-mail editor was not configured properly.
>> CC'ed to relevant maintainers and reposting once again with proper formatting.
>>
>> Since 16 bit half word exchange was not there and MCS based qspinlock
>> by Waiman's xchg_tail() requires an atomic exchange on a half word,
>> here is a small modification to __xchg() code to support the exchange.
>> ARMv6 and lower does not have support for LDREXH, so we need to make
>> sure things do not break when we're compiling on ARMv6.
>>
>> Signed-off-by: Sarbojit Ganguly
>>
>> ---
>> arch/arm/include/asm/cmpxchg.h | 18 ++++++++++++++++++
>> 1 file changed, 18 insertions(+)
>>
>> diff --git a/arch/arm/include/asm/cmpxchg.h
>> b/arch/arm/include/asm/cmpxchg.h index 1692a05..547101d 100644
>> --- a/arch/arm/include/asm/cmpxchg.h
>> +++ b/arch/arm/include/asm/cmpxchg.h
>> @@ -50,6 +50,24 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>>  : "r" (x), "r" (ptr)
>>  : "memory", "cc");
>>  break;
>> +#if !defined (CONFIG_CPU_V6)
>> + /*
>> + * Halfword exclusive exchange
>> + * This is new implementation as qspinlock
>> + * wants 16 bit atomic CAS.
>> + * This is not supported on ARMv6.
>> + */

> I don't think you need this comment. We don't use qspinlock on arch/arm/.

Yes, till date mainline ARM does not support but I've ported Qspinlock on ARM hence I think that comment might be required.

>> + case 2:
>> + asm volatile("@ __xchg2 "
>> + "1: ldrexh %0, [%3] "
>> + " strexh %1, %2, [%3] "
>> + " teq %1, #0 "
>> + " bne 1b"
>> + : "=&r" (ret), "=&r" (tmp)
>> + : "r" (x), "r" (ptr)
>> + : "memory", "cc");
>> + break;
>> +#endif
>> case 4:
>> asm volatile("@ __xchg4 "
>> "1: ldrex %0, [%3] "

> We have the same issue with the byte exclusives, so I think you need to extend the guard you're adding to cover that case too (which is a bug in current mainline).

Ok, I will work on this and release a v2 soon.

> Will

- Sarbojit

----------------------------------------------------------------------+
The Tao lies beyond Yin and Yang. It is silent and still as a pool of water. |
It does not seek fame, therefore nobody knows its presence. |
It does not seek fortune, for it is complete within itself. |
It exists beyond space and time. |
----------------------------------------------------------------------+
http://lkml.org/lkml/2015/9/3/649
CC-MAIN-2018-30
refinedweb
662
63.49
Hi All,

I am trying to return a count of all of the words in a multi-line text field using a "Calculated Text Field". Is there an in-built function to do this? The ".length" function returns the character count for the contents of the custom field. ".size" does not appear to work on the same custom field.

If there is not an in-built way to do this, I was trying to loop through each character in the field looking for ' ' or '\n' which would indicate a new word, however this is error prone, e.g. newlines appended to the end of the text.

Appreciate any guidance. Regards

Hi,

Not sure what version of JIRA you are using and what add-ons you have installed, but as far as I know, this isn't possible out of the box. We've done a similar thing using the ScriptRunner add-on. We simply created a Script Field and have the following code in it:

import com.atlassian.jira.component.ComponentAccessor

def customFieldManager = ComponentAccessor.getCustomFieldManager()
def cField = customFieldManager.getCustomFieldObjectByName('fieldName')
String cFieldValue = issue.getCustomFieldValue(cField)
if (cFieldValue) {
    return cFieldValue.split().size()
}
return 0
https://community.atlassian.com/t5/Jira-questions/Count-Words-in-a-multi-line-Custom-Text-Field/qaq-p/1004001
CC-MAIN-2020-24
refinedweb
208
51.04
Bindstorming and JavaFX Performance

It is within our nature, even in the most infinitesimal way, to leave our mark on this world before we exit it. I'd like to coin the following term, heretofore unseen in the JavaFX space, and submit it as my humble contribution to the human collective:

bindstorm \'bïndstorm\ (noun): condition where a multitude of JavaFX bind recalculations severely hampers interactive performance

Yeah, I know, using the word you wish to define inside its definition is bad, but there is precedent for this: (1) Fancy-schmancy, hoity-toity college dictionaries do it all the time. (2) Mathematicians and computer scientists call this recursion: that mysterious concept which developers use to impress others of their programming prowess.

Don't get me wrong, JavaFX binding is incredibly powerful. Heck, we dedicated a whole chapter to it in our new book JavaFX: Developing Rich Internet Applications. But binding does come with a price, and like most anything else, over-consumption can lead to abuse.

Consider this use case: you've got a JavaFX application with dozens or maybe even hundreds of Nodes that are part of the scenegraph. Each of the Nodes is ultimately sized and positioned in proportion to height and width instance variables that are passed on down. If you define width and height at startup and have no interest in a resizeable interface, then you stand a good chance of avoiding the use of many bind expressions. The one potential twist here is that if you're sincerely interested in a non-resizeable application, but want it to consume the entire screen, what do you do? As screens come in all shapes and sizes, you may not know what the resolution is at start time. JavaFX has an elegant solution for this which uses binding.

Here's a simple application which defines a Rectangle and Circle that fill the entire screen. You can click anywhere within the Circle to exit the application. Notice the number of binds required to get this to work.

import javafx.stage.*;
import javafx.scene.*;
import javafx.scene.shape.*;
import javafx.scene.paint.*;
import javafx.scene.input.*;

function run() : Void {
    var stage: Stage = Stage {
        fullScreen: true
        scene: Scene {
            content: [
                Rectangle {
                    width: bind stage.width
                    height: bind stage.height
                    fill: Color.BLUE
                }
                Circle {
                    centerX: bind stage.width / 2
                    centerY: bind stage.height / 2
                    radius: bind if (stage.width < stage.height)
                                 then stage.width / 2
                                 else stage.height / 2
                    fill: Color.RED
                    onMouseClicked: function(me: MouseEvent) {
                        FX.exit();
                    }
                }
            ]
        }
    }
}

Imagine what this would look like if you had lots of complex custom components with many more dependencies on height and width. In addition to the potential performance impact, this could be error-prone and cumbersome to code. To avoid the over-usage of binding and the potential for a bindstorm, applications of this sort could be re-written as follows:

import javafx.stage.*;
import javafx.scene.*;
import javafx.scene.shape.*;
import javafx.scene.paint.*;
import javafx.scene.input.*;

function run() : Void {
    var AWTtoolkit = java.awt.Toolkit.getDefaultToolkit ();
    var screenSizeFromAWT = AWTtoolkit.getScreenSize ();
    Stage {
        fullScreen: true
        scene: Scene {
            content: [
                Rectangle {
                    width: screenSizeFromAWT.width
                    height: screenSizeFromAWT.height
                    fill: Color.BLUE
                }
                Circle {
                    centerX: screenSizeFromAWT.width / 2
                    centerY: screenSizeFromAWT.height / 2
                    radius: if (screenSizeFromAWT.width < screenSizeFromAWT.height)
                            then screenSizeFromAWT.width / 2
                            else screenSizeFromAWT.height / 2
                    fill: Color.RED
                    onMouseClicked: function(me: MouseEvent) {
                        FX.exit();
                    }
                }
            ]
        }
    }
}

We achieve the same effect as the first example by first making a call to a method in the java.awt.Toolkit package. With this information we can statically define our scenegraph without the use of binding. There is one caveat to this solution. As the AWT (Abstract Window Toolkit) is an integral part of Java SE, this code should be portable across all JavaFX desktops. However, if you wish to deploy a JavaFX Mobile solution, the AWT calls would likely change. Is there a mechanism that might work across both models?

As a final thought, while we're on this theme of coining terms, my compadres Jim Clarke and Eric Bruno, co-authors of the aforementioned JavaFX book, jokingly asked what word could be used to describe this scenario: "Condition where binds lead to binds that leads back to the original bind, ending up in a Stack fault?"

BindQuake? BindTsunami? Bindless? BindSpin? BindHole (BlackHole)? BindPit?
http://www.informit.com/articles/article.aspx?p=1353604
CC-MAIN-2017-13
refinedweb
710
65.83
Simon Richter wrote:

Robert,

Ok, here it is. This patch changes AC_LIBTOOL_PROG_COMPILER_PIC so that it only appends -DPIC to the default "C" tag and the CXX tag for C++. I would also like to deprecate -DPIC in the 1.5 release to make it clear we intend to do away with it. I would also like to ask anyone who does depend on this to let us know when/where/why & how, so we can add a section to the documentation on how to modify code to not need -DPIC.

Inline assembler is compiler dependent anyway. So for inline assembler the correct syntax is

#if defined(__GNUC__) && !defined(__PIC__) && defined(__i386)
asm( /* Non-PIC asm implementation */ )
#else
/* C implementation */
#endif

or, respectively

#if defined(__GNUC__) && defined(__i386)
asm( /* PIC asm implementation */ )
#else
/* C implementation */
#endif

Perhaps there should also be a small comment why #ifndef __PIC__ is not enough.

If you have compiler-independent code that you want to conditionally compile depending on the PIC setting, you have a real problem now, since there is no longer a standardized preprocessor symbol, and you cannot work around that. It may be worth investigating whether glibc has such portions (they don't use libtool, but set -DPIC for themselves when compiling the shared library; there are lots of #ifdef PIC in the code) and whether this warrants a check in configure whether we need to set -D__PIC__ in order to get that feature back.

Yes, it should be portable still. btw, once upon a time I had severe problems with porting an application trying to make it into a library - the software did use PIC as one of its symbols, a #define PIC struct picture. That's actually quite natural for the original programmer - PIC is short, it is written in big caps for being a define, and it does not use a "__" prefix that is used for "reserved" symbols. But we need to care about a PIC define somewhere, as I do have some code too that depends on recognizing a sharedlib mode (library, plugin, whatever). That's only slightly different from the original meaning of PIC, in that some platforms do not need / care for position-independent code for its --enabled-shared plugins.

Currently, I wonder if it would be okay to just replace the current definition of -DPIC with -D__PIC__. Well, ye know, people don't actually read change infos when upgrading. With a -D__PIC__ showing up on the command line, that might make things easier - and some software with lots of PIC-use around can still make up a compatibility define in some header...

#if defined __PIC__ && !defined PIC
#define PIC
#endif

proposal: don't just delete -DPIC for C/CXX, replace it with -D__PIC__
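(One possible shape for that configure-time probe - purely a sketch, not an actual libtool patch; the cache variable name and the use of $lt_prog_compiler_pic here are illustrative assumptions:)

# configure.ac fragment: does the compiler's PIC flag already define __PIC__?
save_CFLAGS=$CFLAGS
CFLAGS="$CFLAGS $lt_prog_compiler_pic"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[
#ifndef __PIC__
# error __PIC__ not predefined
#endif
]], [[]])],
  [lt_cv_pic_defines_pic=yes],
  [lt_cv_pic_defines_pic=no])
CFLAGS=$save_CFLAGS
# Only add -D__PIC__ ourselves when the compiler does not provide it.
test "$lt_cv_pic_defines_pic" = no &&
  lt_prog_compiler_pic="$lt_prog_compiler_pic -D__PIC__"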
http://lists.gnu.org/archive/html/libtool/2003-01/msg00054.html
CC-MAIN-2014-41
refinedweb
458
57.71
Due Wednesday Jan 29, 11pm Worth: 13% of your final grade In the first part of this lab you’ll assemble the mobility subsystems on your robot: the front drive wheels, the rear omni wheel, the bottom body plate, the battery, and the org (LLP). For now, the robot will receive remote commands by a USB tether to a PC. In the second part of the lab you’ll write code for the org to implement forward and inverse velocity and position kinematic control. This will be based on a library of supporting code we have prepared for the org called the OHMM Monitor. Land R. -, red= +) to the labelled screw terminals on the front of the org board. The left motor terminals are labelled -l+and the right motor terminals are labelled -r+. It is very important to connect these wires correctly—trace them back to the left and right motors to be sure you got it right. Please read about battery charging and safety here. These instructions assume you have already configured your machine and made SVN checkouts as described in lab 0. robotics/ohmm-sw/llp/mottestrun maketo build the motor test program. make program) Be careful when the drive wheels are turning. They can be surprisingly strong and fast. Keep fingers, hair, and dangling clothing away from the spinning wheels and the motor backshafts (with the encoder discs). These instructions assume you have already configured your machine and made SVN checkouts as described in lab 0. Follow similar instructions to copy the lab 1 framework as for lab 0, but change l0 to l1 in the instructions (do not delete your old l0 directory). Please do not edit the provided files, in part because we may need to post updates to them. Any edits you make in them will be overwritten if/when you re-copy them to get our updates. You will need to modify the provided main.c file, at least to include one or more .h files and some function calls to your own code, as described below. By limiting your edits to main.c to just these lines, you will have an easier time if we need to distribute a new version of main.c. robotics/ohmm-sw/llp/monitorrun make libto build the OHMM monitor library libohmm.a. This is a library of C functions that we have developed to run on the org and to manage many of the robot’s functions. In this lab you will be adding functions to the monitor, but it already includes many built-in functions that you should get to know first. You only need to re-run this make command if we update the provided monitor library code (which we will do at least once, to distribute the solution code for this lab). Then in robotics/gN/l1 run to build the monitor executableto build the monitor executable > make > make program ohmm.hexand to upload it to the org. This tacks on an entry point (the main()function) to the monitor library code. The entry point calls some of the library functions to start it up. /dev/ttyACM1. See the comments in robotics/ohmm-sw/llp/monitor/minirc.acm1and robotics/ohmm-sw/llp/monitor/kermit.scrfor details. Once minicom or kermit is connected, reset the org and you should see a banner and prompt. Follow the instructions to get a command list. Explore the commands. Make sure your robot is up on a jackstand if you try any of the motor commands. Use your editor to study the sourcecode for the OHMM monitor library. Read the documentation for each module in robotics/ohmm-sw/llp/monitor/ohmm/*.h. There is both general documentation at the top of the file and documentation for each public macro and function further down. 
Also (possibly first) read the entry point code in robotics/gN/l1/main.c. As you develop your code for this lab, please create one or more new foo.c files in robotics/gN/l1 and write your code there. Each should also have a matching foo.h file which is included both in the foo.c file and in main.c (use the syntax #include "foo.h", i.e. use double quotes, not angle brackets). The makefile should not need to be modified: it is pre-configured to find the Pololu AVR Library (installed in a system location) and the OHMM monitor library in ../../ohmm-sw/llp/monitor. It also will automatically compile and link all .c files you create in robotics/gN/l1. cmdmodule in the monitor, and how monitor shell commands are registered and implemented in the other modules. Implement a new monitor command dgvw (drive get ) which takes no arguments and which returns two floating point numbers. The first return is the current robot forward velocity in mm/sec and the second is the current CCW angular velocity in rad/sec. These should be calculated from the results of motGetVel() using the velocity FK transformation. You will also need to supply the wheel radius and baseline (distance between the wheel contact points on the ground). You can either add monitor commands to set/get these interactively (start them out with reasonable hardcoded defaults), or just use #define constants. If you study the code carefully, you may notice that we use fixed point math internally in the mot module; you should not need to do this. Just use float math as you usually would. It will work but you should realize that all floating point operations are implemented by avr-gcc as software subroutines, and as such will be much slower than integer operations. There should be plenty of cycles in this context. You may find the HLP_DBG() macro in the hlp module helpful. Search through the provided monitor code for some examples of its use (you may also find the script find-in-c-source.sh handy). Implement a new monitor command dsvw (drive set ) which takes two floating point arguments, and in the same units as above, and which makes a corresponding call to motSetVelCmd() after applying the velocity IK transformation. Implement a new monitor command dsvl (drive set ) which takes two floating point arguments and . is the forward speed command in mm/s as above. is the desired turning radius in mm. Figure out the transformation from to and then continue with a call to your handling code. Please make the arguments have the same semantics as the two arguments to the iRobot Create drive command, which is documented on page 9 of the Create Open Interface v2. The special cases specified for the Create with should be respected. Note that in the standard two’s complement binary representation , (there appears to be a missing minus sign before 32768 in the iRobot document), and where the values are in hexadecimal. Please treat the additional special case the same as case . Implement a new monitor foreground task that implements incremental odometry as developed in Lecture 3. Register the task to run at 20Hz. Assume the robot pose starts at , i.e. aligned to world frame. Use units of mm for and radians for . Each time the task runs, use motGetPos() to get the new wheel rotation amounts, subtract their values from the last update, then update accordingly. Implement a new monitor command dgp (drive get pose) that returns the most recently calculated as three floating point numbers. 
Note that you will not need to worry about the handler for the dgp command running concurrently with the periodic odometry update task — they are both run in the foreground, and foreground tasks never interrupt each other. Implement a new monitor command drp (drive reset pose) that resets the pose (x, y, θ). Implement a new monitor command df (drive forward) that takes one floating point argument: the number of millimeters to drive forward (or backward, if negative). Upon receipt of the first df command, the robot should start moving in the indicated direction. One option is to hardcode a reasonable travel speed, say 100mm/s. Another option is to implement trapezoid velocity profiling as explained in lecture 4. The command handler must not block while the motion completes, but instead should register a periodic task to check for completion, and then return quickly so that other foreground tasks can run. You can also modify your odometry task so that it additionally checks for completion of the current drive command. Of course, the desired position will not exactly be reached; you will need to define a reasonable tolerance interval within which the drive is considered complete. If another df command is received while one is ongoing, the new command should be queued. Implement a queue that can store up to at least 32 drive commands. (You may wish to study the code in the task and cmd modules which uses arrays of structs to store other kinds of data; one possible in-memory layout is also sketched at the end of this section.) When a drive command completes, the next one, if any, should be popped off the queue and begin execution. Implement a new monitor command dgs (drive get status) that returns one int, which is the number of commands currently waiting in the drive command queue, if any, plus one if a command is currently executing. Thus, dgs should return zero iff (if and only if) no command is executing and none are queued. Implement a new monitor command dp (drive pause) that pauses the current drive command, if any. Also implement a companion command dup (drive unpause) that resumes driving if paused. Make sure you can run these commands even if the queue is empty. If a df or dt command is issued while a pause is in effect then the command should still be added to the queue (as long as the queue is not full). That way you can enqueue a “batch” of commands before the robot starts to move. Implement a new monitor command dri (drive reinit) that stops any ongoing drive command, if any, and then drops it and any other queued drive commands. Implement a new monitor command dt (drive turn) that takes one floating point argument: a turn-in-place angle in CCW radians. The execution semantics of dt should parallel those of df. Note that there should be a single queue for both the df and dt commands, so that you can queue up a mix of commands to drive in e.g. a 500mm square. We provided a file drive-square-500mm.txt in g0/l1 containing such a sequence. See sequences-howto.txt in the same directory for instructions on how to upload it using minicom. Devise two experiments: one to help you calibrate your estimated value for the wheel radius, and another to help you calibrate the estimated value for the baseline. Use these to try to optimize the accuracy of drive sequences. For example, try driving in a square or triangle pattern where the robot should ideally return to its original pose. (The calibration experiments should actually be simpler, e.g. just one df for the wheel radius calibration and one dt for the baseline calibration.)
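The drive command queue could be laid out as a small ring buffer of structs, in the spirit of the arrays-of-structs pattern used in the task and cmd modules. This is only a sketch: the names, the struct fields, and the fixed depth of 32 are assumptions, not part of the provided monitor API, and the pause/execute flags shown would be driven by your dp/dup handlers and completion task.

#include <stdint.h>

#define DRIVE_QUEUE_LEN 32

typedef enum { DRIVE_FWD, DRIVE_TURN } DriveCmdType;

typedef struct {
  DriveCmdType type;   /* df or dt */
  float amount;        /* mm for df, CCW radians for dt */
} DriveCmd;

static DriveCmd driveQueue[DRIVE_QUEUE_LEN];
static uint8_t driveHead = 0, driveTail = 0, driveCount = 0;
static uint8_t driveExecuting = 0;  /* 1 while a command is active */
static uint8_t drivePaused = 0;     /* set by dp, cleared by dup   */

/* enqueue a command; returns 0 if the queue is full */
uint8_t driveEnqueue(DriveCmdType type, float amount) {
  if (driveCount >= DRIVE_QUEUE_LEN) return 0;
  driveQueue[driveTail].type = type;
  driveQueue[driveTail].amount = amount;
  driveTail = (uint8_t)((driveTail + 1) % DRIVE_QUEUE_LEN);
  driveCount++;
  return 1;
}

/* pop the next command; returns 0 if the queue is empty */
uint8_t driveDequeue(DriveCmd *out) {
  if (driveCount == 0) return 0;
  *out = driveQueue[driveHead];
  driveHead = (uint8_t)((driveHead + 1) % DRIVE_QUEUE_LEN);
  driveCount--;
  return 1;
}

/* what dgs would report: queued commands, plus one if one is executing */
uint8_t driveGetStatus(void) { return (uint8_t)(driveCount + (driveExecuting ? 1 : 0)); }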
To a much lesser extent, we will also consider the accuracy of your odometry and drive sequences. We will not consider the robot speed as long as it is not absurdly slow (e.g. it should be able to drive a 1m square pattern in no more than ~2 minutes).
http://www.ccs.neu.edu/course/cs4610/LAB1/LAB1.html
CC-MAIN-2014-42
refinedweb
1,894
63.7
1. Under which component must I open a message if I have problems with BW with MaxDB? Open a message under the BW-SYS-DB-SDB component. 2. When are versions of BW taken out of maintenance? You can find out the maintenance durations for BW versions in the SAP Support Portal under: Maintenance & Services - > Maintenance Offering - > Overview Maintenance Durations - > SAP BW 3. On which SAP Basis versions are the BW systems integrated? BW System 3.0 - SAP Basis 6.20 BW System 3.1 - SAP Basis 6.20 BW-System 3.5 - SAP Basis 6.40 All BW systems should use at least MaxDB Version 7.5.00 Build 18. 4. Where can I find information about the heterogeneous system copy of a BW system? You can find the documentation in SAPNet under the alias 'instguides', service.sap.com/instguides 1.) Path for BW 3.5: SAP NetWeaver -> Release 04 -> Installation SAP Web AS -> SAP Web AS 6.40 and Related Documentation -> Homogeneous and Heterogeneous System Copy See also the BW supplementary 771209. 2.) Path for BW 3.0/3.1: SAP Components -> SAP Web Application Server -> Release 6.20 System Copy for SAP Systems Based on Web AS 6.20 See also the BW supplementary 777024. After a heterogeneous system migration, see also 931333. 5. What can cause performance problems in the BW system? The causes can be both in the database and in the application. a) Causes in the database Inaccurate table statistics increase the reading load on the database because the optimizer can no longer access the tables in the most favorable way. Consequently, the data cache hit rate deteriorates. The statistics are therefore estimated using a sample rate. If this sample rate is not set correctly for large tables, this leads to inaccurate statistics being generated. For large tables, the sample rate must be set to 10%. For more information, see 808060. Database parameters that have been set incorrectly can also impair the performance of the BW system. This contains detailed information on this problem. When you configure the BW database, you should pay particular attention to the disk configuration (data volumes). The size of the data cache and the correct setting of the JOIN parameters have a critical effect on the performance of a BW system. This note contains tips and references to other sources of detailed information on the configuration. Missing indexes may result in performance problems. See 930768 and 931333. b) Causes in the application If indexes or aggregates are missing, this will impair the response time of your BW system. The selected data model can also be responsible for poor response times. 6. What should I be aware of when data modeling my BW system? a) InfoCube The selective characteristics in an InfoCube should be defined as a line item dimension, if possible. You should define characteristics that have many attributes but uneven distribution as a line item. b) Aggregate If possible, you should create flat aggregates. An aggregate is only useful for queries if the data can be aggregated to a great extent. 7. What should I be aware of when setting up structures? You should be aware of the following points when setting up structures: - the data is transferred using INSERT....SELECT - Large datasets are processed in blocks. - You can set the block size in transaction RSCUSTV8. The default value for the block size is 10 million. You should only change this default value if aggregates with a low level of aggregation are defined on large cubes. 
- The parallel creation of several aggregates can be inefficient because this may result in data cache displacements. Indexes are created after the aggregates are filled. If more than one CREATE index is active at the same time for MaxDB, only one CREATE index can be processed in parallel with several server tasks. 8. Are there special parameter recommendations for a BW system with MaxDB? See 767635 (MaxDB 7.4) and 814704 (MaxDB 7.6) for general parameter recommendations for MaxDB. See 901377 for special recommendations for BW systems. See also 514907 for information about settings for the RSADMIN parameters. For MaxDb, you should set the RSADMIN parameters as follows: SPLIT_DATAMART_TABL_THRES <= 64 (MaxDB Version 7.5) SPLIT_QUERY_TABL_THRES <= 64 (MaxDB Version 7.5) SPLIT_DATAMART_TABL_THRES <= 254 (MaxDB Version 7.6) SPLIT_QUERY_TABL_THRES <= 254 MaxDB Version 7.6) 9. Is the data cache hit rate important for analyzing the performance of a BW system? The data cache hit rate alone is not enough to analyze the performance of a BW system. You must also consider the absolute I/O accesses (transaction DB50). In addition, the area that is occupied temporarily should not exceed the size of the data cache. 10. What should I know about parallel access to joins? You activate the parallel index using the OPTIMIZE_JOIN_PARALLEL_SERVERS parameter (see 901377). The parallel importing of index blocks can greatly improve the performance for join transfers that have to be processed using an index. Several server tasks are activated when index blocks are imported in parallel. In the B* tree of an index, the corresponding primary key list of the records for the Basis table is stored for each index entry. The primary key list is no longer processed sequentially using a server task; instead, it can be processed in parallel using several server tasks. For example: Server task 1 reads the first key entry and retrieves the corresponding data page of the Basis table in the data cache; server task 2 reads the second key entry and so on. 11. What are the naming conventions for BW tables? BW tables created by SAP have the namespace /BIO. BW tables created by customers have the namespace /BIC. 12. What types of tables are contained in the BW system? A BW system contains fact tables, dimension tables, master data tables and hierarchical tables. 13. Are there temporary objects in BW? Yes, see 449891 for more information. 14. How will I be able to recognize the table type? After the /BIO or /BIC prefix, fact tables begin with 'F' (the data is not compressed) or 'E' (the data is compressed), for example: /BIC/FZBCS_BC01. Dimension tables begin with 'D' after the prefix, for example: /BIC/DZBCS_BC017. After the prefix, master data tables begin with 'S', 'X' (time-independent), 'Q' (time-dependent), 'Y' (time-dependent) or 'P' (time-independent), for example: /BI0/SCS_VERSION. Hierarchy tables begin with 'H' after the prefix, for example: /BIC/HZBCS_FIK. 15. Is there a naming convention for alias table names? Yes. BW contains very complex table names. Aliases are provided for the tables to define the analysis of the commands more clearly. The alias 'F' is assigned for fact tables. For dimension tables, the alias starts with 'D' followed by the following dimension identifiers: - 'P' for the package dimension (for example, alias 'DP') - 'T' for the time dimension (alias 'DT') - 'U' for the unit dimension (alias 'DU') - Numbers are used for user-defined dimensions [1-9, a, b, c, d] (for example, D1, D9, DA, and so on.) 
For the master data tables, the aliases begin with 'S' followed by a number, for example, alias 'S34' for the /BIC/SZBCS_FIKO master data table. 16. Do fact tables have a user-defined primary key? Up to Version 7.5, no. Up to this version, fact tables have a SYSKEY rather than a user-defined primary key. The primary key (and consequently the secondary indices) would become too large because the key would have to be created from the fields of all the dimension tables. In addition, fact tables ('F') that are not compressed cannot contain duplicate records - these would be rejected with a 'Duplicate key' error if a primary key were defined. As of the MaxDB BW Feature Pack (Version 7.6), it makes sense to create a primary key in fact tables. In this version, the primary key is created from a time dimension and the SYSKEY, not from the fields of all the dimension tables. For more information, see 1040431 FAQ: MaxDB BW Feature Pack. 17. The fact tables in my BW system with MaxDB 7.5 mistakenly have a user-defined primary key. As a result, the 'Duplicate key' error occurs when I add new records. What can I do to correct this? After you convert tables in the SAP ABAP Dictionary (transaction SE11 or SE14), fact tables are mistakenly given a primary key. In the BW system, you are not allowed to convert tables using the SAP ABAP Dictionary; instead, you must use the BW functions to convert these tables (create cube and copy data). You can use 897165 to correct this problem or you can import the relevant Support Package to avoid it in the future. 18. What is the maximum number of dimensions that I can create for a fact table? You can define a maximum of 16 dimensions. 19. How is the relationship between the fact table and its dimension tables formed? The fact table contains all key fields of the associated dimension tables in its table definition. the following naming convention here: The key fields of the dimension tables in the fact table begin with the 'KEY_' prefix followed by the short form of the name of the fact table, and end with an identifier for the dimension. For example, the 'P' indicator means package dimension, 'T' is the time dimension, 'U' is the unit dimension, or a sequential number may be used to specify other dimensions. For example: The following dimension tables have been assigned to the /BIC/FZBCS_BC01 fact table: /BIC/DZBCS_BC01P, /BIC/DZBCS_BC01T, /BIC/DZBCS_BC0U, /BIC/DZBCS_BC011, /BIC/DZBCS_BC012, /BIC/DZBCS_BC013, /BIC/DZBCS_BC015, /BIC/DZBCS_BC016, /BIC/DZBCS_BC017, /BIC/DZBCS_BC018. The column names are created as follows: KEY_ZBCS_BC01P: represents the reference to the package dimension using the DIMID lead column of the /BIC/DZBCS_BC01P package dimension. The same applies to the following columns: KEY_ZBCS_BC01T, KEY_ZBCS_BC01U, KEY_ZBCS_BC011, KEY_ZBCS_BC012, KEY_ZBCS_BC013, KEY_ZBCS_BC015, KEY_ZBCS_BC016, KEY_ZBCS_BC017, KEY_ZBCS_BC018. The relationship of the tables to each other is defined in the JOIN condition of the query. Single indexes are defined on all dimension fields of the fact table. Special feature of aggregates: They have names that are partly the same names as those in the corresponding fact table. 20. How is the relationship between the dimension table and its master data tables formed? The columns of the dimension table that form a relationship to the master data table are defined as key fields in the master data table. 
To ensure that the assignment of master data tables to the dimension table is immediately apparent for a performance analysis, the dimension table is sorted and stored in the FROM clause, followed by its master data tables. For example: SELECT .....FROM /BIC/FZBCS_BC01 " "F" -> Fact table /BIC/DZBCS_BC01T " "DT" , -> Dimension table /BI0/SFISCVARNT " "S44" , -> Master data table /BI0/SFISCYEAR " "S45" , -> Master data table For example, the S44 master data table has the FISCVARNT column as a key field. The S45 master data table has the FISCVARNT and FISCYEAR columns as its keys (columns that are contained in the dimension table as SID_0<NAME>, for example, SID_0FISCVARNT and SID_0FISCYEAR). The relationship of the tables to each other is defined in the JOIN condition of the query. 21. What is a line item dimension? Line item dimensions have only one attribute. This means that an interim step using the dimension table is not necessary when the fact table accesses the master data table. 22. How do I know whether I can optimize a query in the database? Create a SQL trace of the query that causes performance problems. You can use the SELECT statement from the SQL trace to execute an EXPLAIN either directly in the SQL trace or in the SQL Studio. The EXPLAIN may display a selection strategy where a very high number of dimension tables are processed before the fact table is accessed. This is only useful if just one record is accessed (EXPLAIN JOIN) when the dimension tables are accessed. However, this is rarely the case with large BW systems. Therefore, you could say that if more than 3 dimensions are accessed before the fact table, this query can be optimized over 2 or (at most) 3 suitable columns (characteristics) on the fact table with a multiple index. 23. Is it useful to create a multiple index on the fact table using the fields of different dimensions? If several restrictions on different dimensions are relatively unselective in queries, a multiple index on the corresponding dimension fields can improve the performance. A multiple index should generally be defined on two or a maximum of three dimension fields. The data in the fact table is sorted (by the SYSKEY) and therefore stored according to the sequence of inserted records. Since the data is loaded on time into the BW system, records with time characteristics are close to each other, which means that (when data is read using a time characteristic) you can expect a lot of hits on a data page - this in turn means that fewer pages have to be loaded into the data cache. The MaxDB optimizer does not have this information. It can therefore be more beneficial to select data using time characteristics, even if other characteristics are more selective. You can optimize access by creating a suitable multiple index on the fact table that contains a time characteristic and the selective characteristic. 24. In which column sequence should I create multiple indexes on fact tables? The sequence of the indexed columns in the index is a crucial factor if you want to create a multiple index on a fact table. You should therefore consider the following when choosing the sequence: - if the OPTIMIZE_JOIN_PARALLEL_SERVERS parameter is activated (>0), you should place the more selective column in the index in front of the time characteristic. This is necessary because of the way that the PARALLEL_SERVERS parameter works (see above in this ). 
- if the parallel access is not activated for the join (OPTIMIZE_JOIN_PARALLEL_SERVERS=0), you should place the time characteristic in the index in front of the more selective column. - if you want to create a multiple index on two characteristics and one of the two characteristics is restricted with a BETWEEN, the column where the BETWEEN qualification is specified should appear at the back in the index. 25. When can secondary indexes be counterproductive on fact tables? The data in fact tables is stored in chronological order (due to the SYSKEY). Fact tables are usually very large and therefore cannot be loaded completely into the data cache. If a time period is read in a query, it may be useful to create an index using less selective columns that contain time data (for example, an index using columns that contain periods or fiscal years). However, the optimizer program will only choose this type of index if it assumes that a more selective index does not exist. If an index of more selective columns exists for this query but the index has no chronological reference, the optimizer will select this index because it does not know anything about the favorable storage format of the index that contains time-related data. As a result, more pages might have to be loaded into the cache than the number of pages loaded when (from where the optimizer is concerned) the least favorable index is used. You should bear this in mind when creating indexes. If you do notice this behavior, you should not deactivate any standard indexes without first consulting MaxDB Development Support. A better course of action in this case would be to consider whether a suitable multiple index can achieve the required effect on the fact table. 26. What are SID tables? SID is an abbreviation of the German "Stammdaten-ID" (which means master data ID in English). SID tables are master data tables. 27. What must I be aware of with master data tables? Fields in master data tables are usually not indexed. If selective filter conditions are defined on master data fields in a query, these fields must be indexed. Otherwise the database optimizer ignores the restrictions. Therefore, when analyzing the performance of your MaxDB BW system, you should check whether you can create suitable SINGLE indexes to solve the performance problem. The qualification of a selective column of a master data table in the WHERE clause, which can considerably limit the result set, is a crucial factor in deciding whether you should create a SINGLE index. 28. What does the /*+ SHORT_SUM_VALUES */ hint mean in BW queries? This hint is not relevant for the strategy and therefore does not have any effect on the query processing sequence. This hint ensures that, when the totals are being formed using the columns, the actual length of a numeric field is used for the calculation rather than the maximum field length. For more information, see 723020. In relation to the hints, see also s 652096 and 832544. 29. What is the significance of the hint /*+ QUERYREWRITE_OP */ in BW queries? As of MaxDB Version 7.6 and New York (BW 7.10) this hint activates the use of QUERYREWRITE in queries, including with the parameter setting OPTIMIZE_QUERYREWITE < > OPERATOR. For additional information, see 832544. 30. Where can I find information about updating the statistics in the BW environment? See s 927882, 808060 and 797667. 31. Can I prevent temporary views from being deleted when a performance analysis is being carried out? 
Temporary views that no longer exist after the SQL statement is processed are also partially created in the BW environment and therefore cannot be used for a performance analysis (EXPLAIN). You can prevent temporary views from being deleted by making an entry in the RSADMIN table. See 373738. As an alternative to the procedure described in 373738, you can also create a SQL trace (transaction ST05). This SQL trace contains the CREATE view statement of the temporary views. You can create this view in SQL Studio and then execute the Explain on the corresponding SELECT. 32. How can I delete temporary objects that stop by mistake in the BW system? You can run report SAP_DROP_TMPTABLES to remove temporary objects. (See 449891). 33. How can I solve performance problems with temporary /BI0/06... SID tables? In older Support Packages, temporary SID tables with the /BI0/060... naming convention do not have primary keys. With a BW Query, this can cause the database optimizer to calculate an unfavorable table sequence. For more information, see 759936. 34. Is it useful to check whether indexes are missing after the transfer from the fact table to the dimension table? No, it actually makes no sense to carry out a more detailed check because the corresponding indexes are already created here. This means that you do not have to check the join conditions of fact tables to the dimension tables in the WHERE clause more closely. 35. Where do indexes usually go missing in the BW environment with MaxDB? Indexes are usually missing from master data tables if a selective column that can significantly restrict the result set was qualified in the WHERE clause. Since there is no relation between dimension tables, it can be useful to create a suitable multiple index on the fact table if two restrictions on different dimension tables were specified in the WHERE clause. Get More Questions and Answers with Explanation related to SAP MaxDB at SAP BW Forum. MaxDB works well on the other SAP systems, but not with BW. Both the 7.5 and the 7.6 with the "BW pack" includes bugs and malfunctioning. If you are in doubt, please call any SAP BW teacher asking them about MaxDB. Either they will recommend you not to use it, or you will realize they didn't even knew BW existed on MaxDB. /Lasse
https://www.stechies.com/faq-business-information-warehouse-system-with-maxdb/
CC-MAIN-2017-26
refinedweb
3,295
55.84
This patch implements the analysis state classes needed for sparse data-flow analysis and implements a dead-code analysis using those states to determine liveness of blocks, control-flow edges, region predecessors, and function callsites. Depends on D126751 Thanks for adding these comment blocks, can you merge these into the base commit? comoment blocks _H clang-format add code blocks I feel like "lattice" is a generic term and shouldn't be limited to sparse analyses. Can we rename this to AbstractSparseLattice? I don't think we have time to discuss/address this, so just a comment here: The concepts of pessimistic/optimistic fixpoint and known/optimistic value are really uncommon, and we probably should get rid of it... As discussed earlier today, can we remove markOptimisticFixpoint() since it's not used? (I personally think that the pessimistic value should never change.) know knownPredecessors? Note that there's inconsistency in the naming style. GenericProgramPointBase<...> : GenericProgramPoint Lattice<...> : AbstractLattice Can we converge to a single naming style? Can we move this to a separate file, maybe DeadCodeAnalysis.{h,cpp}? Likewise we probably need a ConstantPropagationAnalysis.{h,cpp}. DeadCodeAnalysis is not a sparse analysis, right? I personally think that it's hard to classify it as either sparse or dense, and it does not inherit from SparseDataFlowAnalysis in the next patch. Even if it is a sparse analysis, I still think it's worthwhile to create a dedicated file for it. SG These concepts came from the original SCCP paper that developed the idea of an optimistic analysis. The idea of an optimistic value is, for an analysis, given its current, limited view of the IR, it is the best guess the analysis can provide to the state. If later, a conflicting value is discovered, it falls back to a value that is guaranteed to be true. It's possible this concept is tied too much to SCCP (e.g. in integer range analysis, the pessimistic fixpoint is actually (-inf, +inf), and so having a "known" and "optimistic" value isn't needed). Yes. It's not used anywhere and it wasn't used in many places in the previous framework too. This naming style is consistent with the rest of MLIR. Roughly speaking: review comments Looks fairly good, a lot seems to be adapted from other places so much of it isn't too surprising. Drop the braces here. We should move these into a nested namespace, we shouldn't pollute mlir with such generally named things as these. Will note that the name "Predecessor" here was quite confusing because I expected this to be about Blocks, not operations. Can you clarify further what it means to be a "predecessor" here? You could also use std::exchange here to drop a line. +1 here! I would also start a new DataFlowAnalysis/ directory to hold these things. nit: Ifs with nested fors need braces. Split SparseDataFlowAnalysis.h/cpp into DataFlow/ConstantPropagationAnalysis DataFlow/DeadCodeAnalysis DataFlow/SparseAnalysis Do we need all of these includes? Can you forward declare instead? trim includes
https://reviews.llvm.org/D127064?id=436580
CC-MAIN-2022-40
refinedweb
505
55.95
OK, I just have one more question. So i decided to not try to confuse myself by making all of these complicated scripts (for my level understanding of the C++ languge anyway) so i decided to make a simpler script, here it is: #include <iostream> using namespace std; int main () { int x; cout << "what is x?" << endl; cin >> x >> endl; if (x == 2) cout << "you entered a two!" << endl; } else { cout << "you entered a one!" << endl; cin.get(); return 0; } ------------------------------------------------------------------ So i tried compiling and running this script, and it highlighted the "cin >> x >> endl;" line (line 8) and the error message was "no match for 'operator>>' in '(&std::cin)->std::basic_istream<_CharT, _Traits>::operator>> [with _CharT = char, _Traits = std::char_traits<char>](((int&)(&x))) >> std::endl' ". Could someone tell me what this means? I have no idea at all why this wouldn't compile. BTW, i'm using the compiling program Dev-C++
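For comparison, a version of the fragment that compiles might look like the sketch below. The two changes are removing the >> endl (endl is an output-stream manipulator, so istream has no matching operator>>, which is what the error message is complaining about) and balancing the braces around the if/else.

#include <iostream>
using namespace std;

int main()
{
    int x;
    cout << "what is x?" << endl;
    cin >> x;                       // no endl here: endl only works with output streams
    if (x == 2) {
        cout << "you entered a two!" << endl;
    } else {
        cout << "you entered a one!" << endl;
    }
    cin.get();
    return 0;
}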
http://www.hackthissite.org/forums/viewtopic.php?f=102&t=3565&p=51506
CC-MAIN-2017-17
refinedweb
153
70.73
Introduction This post follows on from the previous post on Gibbs sampling in various languages. In that post a simple Gibbs sampler was implemented in various languages, and speeds were compared. It was seen that R is very slow for iterative simulation algorithms characteristic of MCMC methods such as the Gibbs sampler. Statically typed languages such as C/C++ and Java were seen to be fastest for this type of algorithm. Since many statisticians like to use R for most of their work, there is natural interest in the possibility of extending R by calling simulation algorithms written in other languages. It turns out to be straightforward to call C, C++ and Java from within R, so this post will look at how this can be done, and exactly how fast the different options turn out to be. The post draws heavily on my previous posts on calling C from R and calling Java from R, as well as Dirk Eddelbuettel’s post on calling C++ from R, and it may be helpful to consult these posts for further details. Languages R We will start with the simple pure R version of the Gibbs sampler, and use this as our point of reference for understanding the benefits of re-coding in other languages. The background to the problem was given in the previous post and so won’t be repeated here. The code can be given as follows: gibbs<-function(N=50000,thin=1000) { mat=matrix(0,ncol=2,nrow=N) x=0 y=0 for (i in 1:N) { for (j in 1:thin) { x=rgamma(1,3,y*y+4) y=rnorm(1,1/(x+1),1/sqrt(2*x+2)) } mat[i,]=c(x,y) } names(mat)=c("x","y") mat } This code works perfectly, but is very slow. It takes 458.9 seconds on my very fast laptop (details given in previous post). C Let us now see how we can introduce a new function, gibbsC into R, which works in exactly the same way as gibbs, but actually calls on compiled C code to do all of the work. First we need the C code in a file called gibbs.c: #include <stdio.h> #include <math.h> #include <stdlib.h> #include <R.h> #include <Rmath.h> void gibbs(); } This can be compiled with R CMD SHLIB gibbs.c. We can load it into R and wrap it up so that it is easy to use with the following code: dyn.load(file.path(".",paste("gibbs",.Platform$dynlib.ext,sep=""))) gibbsC<-function(n=50000,thin=1000) { tmp=.C("gibbs",as.integer(n),as.integer(thin), x=as.double(1:n),y=as.double(1:n)) mat=cbind(tmp$x,tmp$y) colnames(mat)=c("x","y") mat } The new function gibbsC works just like gibbs, but takes just 12.1 seconds to run. This is roughly 40 times faster than the pure R version, which is a big deal. Note that using the R inline package, it is possible to directly inline the C code into the R source code. We can do this with the following R code: require(inline) code='();' gibbsCin<-cfunction(sig=signature(Np="integer",thinp="integer",xvec="numeric",yvec="numeric"),body=code,includes="#include <Rmath.h>",language="C",convention=".C") gibbsCinline<-function(n=50000,thin=1000) { tmp=gibbsCin(n,thin,rep(0,n),rep(0,n)) mat=cbind(tmp$x,tmp$y) colnames(mat)=c("x","y") mat } This runs at the same speed as the code compiled separately, and is arguably a bit cleaner in this case. Personally I’m not a big fan of inlining code unless it is something really very simple. If there is one thing that we have learned from the murky world of web development, it is that little good comes from mixing up different languages in the same source code file! C++ We can also inline C++ code into R using the inline and Rcpp packages. 
The code below originates from Sanjog Misra, and was discussed in the post by Dirk Eddelbuettel mentioned at the start of this post. require(Rcpp) require(inline) gibbscode = ' int N = as<int>(n); int thn = as<int>(thin); int i,j; RNGScope scope; NumericVector xs(N),ys(N); double x=0; double y=0; for (i=0;i<N;i++) { for (j=0;j<thn;j++) { x = ::Rf_rgamma(3.0,1.0/(y*y+4)); y= ::Rf_rnorm(1.0/(x+1),1.0/sqrt(2*x+2)); } xs(i) = x; ys(i) = y; } return Rcpp::DataFrame::create( Named("x")= xs, Named("y") = ys); ' RcppGibbsFn <- cxxfunction( signature(n="int", thin = "int"), gibbscode, plugin="Rcpp") RcppGibbs <- function(N=50000,thin=1000) { RcppGibbsFn(N,thin) } This version of the sampler runs in 12.4 seconds, just a little bit slower than the C version. Java It is also quite straightforward to call Java code from within R using the rJava package. The following code import java.util.*; import cern.jet.random.tdouble.*; import cern.jet.random.tdouble.engine.*; class GibbsR {(2*x+2)); } mat[0][i]=x; mat[1][i]=y; } return mat; } } can be compiled with javac GibbsR.java (assuming that Parallel COLT is in the classpath), and wrapped up from within an R session with library(rJava) .jinit() obj=.jnew("GibbsR") gibbsJ<-function(N=50000,thin=1000,seed=trunc(runif(1)*1e6)) { result=.jcall(obj,"[[D","gibbs",as.integer(N),as.integer(thin),as.integer(seed)) mat=sapply(result,.jevalArray) colnames(mat)=c("x","y") mat } This code runs in 10.7 seconds. Yes, that’s correct. Yes, the Java code is faster than both the C and C++ code! This really goes to show that Java is now an excellent option for numerically intensive work such as this. However, before any C/C++ enthusiasts go apoplectic, I should explain why Java turns out to be faster here, as the comparison is not quite fair… In the C and C++ code, use was made of the internal R random number generation routines, which are relatively slow compared to many modern numerical library implementations. In the Java code, I used Parallel COLT for random number generation, as it isn’t straightforward to call the R generators from Java code. It turns out that the COLT generators are faster than the R generators, and that is why Java turns out to be faster here… C+GSL Of course we do not have to use the R random number generators within our C code. For example, we could instead call on the GSL generators, using the following code: #include <stdio.h> #include <math.h> #include <stdlib.h> #include <gsl/gsl_rng.h> #include <gsl/gsl_randist.h> #include <R.h> void gibbsGSL(2*x+2)); } xvec[i]=x; yvec[i]=y; } } It can be compiled with R CMD SHLIB -lgsl -lgslcblas gibbsGSL.c, and then called as for the regular C version. This runs in 8.0 seconds, which is noticeably faster than the Java code, but probably not “enough” faster to make it an important factor to consider in language choice. Summary In this post I’ve shown that it is relatively straightforward to call code written in C, C++ or Java from within R, and that this can give very significant performance gains relative to pure R code. All of the options give fairly similar performance gains. I showed that in the case of this particular example, the “obvious” Java code is actually slightly faster than the “obvious” C or C++ code, and explained why, and how to make the C version slightly faster by using the GSL. The post by Dirk shows how to call the GSL generators from the C++ version, which I haven’t replicated here. 
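For reference, a body for the gibbs C function that is consistent with the .C("gibbs", ...) wrapper shown earlier (two int pointers followed by the two output vectors, using R's RNG state and the Rmath rgamma/rnorm calls) would look roughly like this. It is a sketch rather than the exact original listing, and the argument names are assumptions; note that the Rmath rgamma takes shape and scale, hence the 1/(y*y+4).

#include <math.h>
#include <R.h>
#include <Rmath.h>

void gibbs(int *Np, int *thinp, double *xvec, double *yvec)
{
  int i, j;
  int N = *Np, thin = *thinp;
  double x = 0, y = 0;
  GetRNGstate();                    /* take over R's RNG state from C */
  for (i = 0; i < N; i++) {
    for (j = 0; j < thin; j++) {
      x = rgamma(3.0, 1.0 / (y * y + 4));
      y = rnorm(1.0 / (x + 1), 1.0 / sqrt(2 * x + 2));
    }
    xvec[i] = x;
    yvec[i] = y;
  }
  PutRNGstate();                    /* hand the RNG state back to R */
}

The gibbsGSL variant follows the same structure, swapping the Rmath draws for GSL's gamma and Gaussian generators.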
Tags: API, C, calling, COLT, extending, faster, Gibbs, GSL, inline, Java, MCMC, parallel, Parallel COLT, R, Rcpp, rJava, rstats, sampling, speed 01/08/2011 at 08:34 | Hi Darren, Very nice post. I’m glad you reversed the Java-biased bit of the analysis! It wasn’t really fair to compare the R functions with parallel colt. I interpret your observation of 8.0 s for C vs. 10.7 s for Java differently. I think that for tasks which can be highly CPU intensive, which could be running for days or weeks, speed is extremely important. The fact that the Java code takes 34% more CPU time than the faster C equivalent could be very important, depending on your application. 4 weeks runtime for java implementation versus 3 weeks runtime for C for instance. It seems that C/C++ has everything: speed, more beautiful code, portability. Java’s only slight advantage (easy OS portability) is eliminated by the elegant R packaging system in this context. Cheers, CONOR. 30/12/2011 at 15:44 | […] Faster Gibbs sampling MCMC from within R: How to call MCMC code written in C, C++ and Java from R, with timing […] 28/02/2012 at 16:26 | […] Two writeups by Darren Wilkinson on empirical evaluations of R, Java, Python, Scala and C for MCMC based learning, in particular using Gibbs Sampling. […] 12/02/2013 at 04:37 | […] Two writeups by Darren Wilkinson on empirical evaluations of R, Java, Python, Scala and C for MCMC based learning, in particular for Gibbs Sampling (a type of MCMC algorithm). […] 06/06/2014 at 07:51 | how to install gibbs? can anyone please tell me?
https://darrenjw.wordpress.com/2011/07/31/faster-gibbs-sampling-mcmc-from-within-r/
CC-MAIN-2015-14
refinedweb
1,562
63.09
Testing with PHPUnit This feature is supported in the Ultimate edition only. IntelliJ IDEA supports unit testing of PHP applications through integration with the PHPUnit tool. Before you start - Make sure the PHP plugin is installed and enabled. The plugin is not bundled with IntelliJ IDEA, but it can be installed from the JetBrains plugin repository as described in Installing, Updating and Uninstalling Repository Plugins and Enabling and Disabling Plugins. - Make sure the PHP interpreter is configured in IntelliJ IDEA on the PHP page, as described in Configuring Local PHP Interpreters and Configuring Remote PHP Interpreters. Where do I get PHPUnit from? Option 1: Download phpunit.phar Download phpunit.phar as described on PHPUnit Official website and save it on your computer: - To get full coding assistance in addition to simply running PHPUnit tests, store phpunit.pharunder the root of the project where PHPUnit will be later used. - If you only need to run PHPUnit tests and you do not need any coding assistance, you can save phpunit.pharoutside the project. Option 2: Use Composer - On the context menu of composer.json, choose Composer | Manage Dependencies. - In the Manage Composer Dependencies Dialog that opens, select the phpunit/phpunit Composer command line options during installation, see. - Click Install. How do I integrate PHPUnit with IntelliJ IDEA in a project? Step 1: Choose the type of configuration for PHPUnit or deployment server to use To use PHPUnit with a remote PHP interpreter or a web server, choose one of the configurations from the dialog box that opens. - Choose a remote PHP interpreter: - Choose a deployment configuration: Step 3: Specify the PHPUnit installation type In the right-hand pane, choose one of the methods: - Option 1: Run PHPUnit downloaded via Composer Specify the path to the autoload.phpfile in the vendorfolder. See PHPUnit Installation via Composer and Composer for details. - Option 2: Run PHPUnit from phpunit.phar Download phpunit.phar, save the archive in the project root folder, and specify the path to it. When you click , IntelliJ IDEA detects and displays the PHPUnit version. - Option 3: Run PHPUnit from PEAR Pear should be configured as an include path. The PHPUnit installation procedure depends on the operating system you use and your system settings. Please, refer to the PHPUnit installation instructions for information on installing and configuring this tool. Step 4 (optional): Specify the default configuration file In the Test Runner area, appoint the configuration XML file to use for launching and executing scenarios. By default, PHPUnit looks for a phpunit.xml configuration file in the project root folder or in the config folder. You can appoint a custom configuration file. You can also type the path to a bootstrap file to have PHP script always executed before launching tests. In the text box, specify the location of the script. Type the path manually or click and select the desired folder in the dialog that opens. How do I generate a PHPUnit test for a class? Step 1: Open the Generate PHPUnit Test dialog In the Project view, select the PHP class to create unit tests for, e.g. MyPHPClass as shown in the image below, and choose New | PHPUnit | PHPUnit Test on the context menu of the selection. Step 2: Configure test generation In the Generate PHPUnit Test dialog, specify the following: - The fully qualified name of the class to be tested, this name will be used to propose the Test Class Name. To use completion, press Ctrl+Space. 
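Step 4 above refers to a phpunit.xml configuration file and an optional bootstrap script. As a rough illustration for a Composer-based project (the testsuite name, the tests directory, and the vendor/autoload.php bootstrap path are assumptions about your layout, not values IntelliJ IDEA requires), such a file might look like:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="vendor/autoload.php" colors="true">
    <testsuites>
        <testsuite name="unit">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
</phpunit>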
- The name of the test class. IntelliJ IDEA automatically composes the name from the production class name as follows: <production class>Test.php. The test class name is displayed in the Name text box of the Test Class area. - The folder for the test class, by default the folder where the production class is stored. To specify another folder, click next to the Directory text box and choose the relevant folder. - generate a test for a PHP class defined among others within a PHP file? Step 1: Open the Generate PHPUnit Test dialog In the file, select the class to generate the test for and choose Go To | Test on the context menu, then choose Create New Test in the pop-up list. Step 2: Configure test generation In the Generate PHPUnit Test dialog, proceed as described above: specify the name of the production class, the name of the test class, the name of the test file, and the folder for the test file. Specify the namespace the test class will belong to. IntelliJ IDEA completes the namespace automatically based on the specified directory and displays the generated value in the Namespace text box. When the test is ready, navigate back to the production class by choosing Navigate | Go to Test Subject. For details, see Navigating Between Test and Test Subject. How do I generate a PHPUnit test method? - Open the required test class in the editor, position the cursor anywhere inside the class definition, and choose Generate on the context menu, then choose PHPUnit Test Method from the Generate pop-up list. - Set up the test fixture, that is, have IntelliJ IDEA generate stubs for the code that emulates the required environment before test start and returns the original environment after the test is over: On the context menu, choose Generate | Override method, then choose SetUp or TearDown in the Choose methods to override dialog that opens. For more details, see Fixtures on the PHPUnit Official website. How do I run and debug PHPUnit tests? You can run and debug single tests as well as tests from entire files and folders. IntelliJ IDEA creates a run/debug configuration with the default settings and launches the tests. You can later save this configuration for further re-use. Option 1: To run or debug a single test Open the test file in the editor, right-click the call of the test and choose Run '<test_name>' or Debug '<test_name>' on the context menu. Option 2: To run or debug tests from a file In the Project view, select the file with the tests to run and choose Run '<file_name>' or Debug '<file_name>' on the context menu. Option 3: To run or debug tests from a folder In the Project view, select the folder with the tests to run and choose Run '<folder_name>' or Debug '<folder_name>' on the context menu. Option 4: To save an automatically generated default configuration After a test session is over, choose Save <default_test_configuration_name> on the context menu of the test, test file, or folder. Option 5: To run or debug tests through a previously saved run/debug configuration Choose the required PHPUnit configuration from the list on the tool bar and click or . Option 6: To create a custom run/debug configuration - In the Project view, select the file or folder with the tests to run and choose Create run configuration on the context menu. Alternatively, choose Run | Edit Configurations on the main menu, then click and choose PHPUnit from the list. - In the Run/Debug Configuration: PHPUnit dialog that opens, specify the test scope and (optionally) test runner options. 
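Put together, the generated test class with SetUp/TearDown stubs for the MyPHPClass example ends up looking roughly like the sketch below. This is illustrative rather than the exact code IntelliJ IDEA emits: it assumes a namespaced PHPUnit 6+ install (older versions extend PHPUnit_Framework_TestCase instead, and newer ones require a void return type on setUp/tearDown), and the assertion is just a placeholder.

<?php
use PHPUnit\Framework\TestCase;

class MyPHPClassTest extends TestCase
{
    /** @var MyPHPClass */
    protected $subject;

    protected function setUp()
    {
        $this->subject = new MyPHPClass();   // build the fixture before each test
    }

    protected function tearDown()
    {
        $this->subject = null;               // restore the original environment
    }

    public function testSomething()
    {
        $this->assertInstanceOf(MyPHPClass::class, $this->subject);
    }
}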
How do I monitor test results? IntelliJ IDEA shows the PHPUnit output and test results in the Test Runner tab. How do I run PHPUnit tests automatically? You can configure IntelliJ IDEA to re-run tests automatically when the affected code is changed. This option is configured per run/debug configuration and can be applied to a test, a test file, or a folder depending on the test scope specified in this run/debug configuration. To configure re-running tests automatically - Run the tests. - In the Test Runner tab, press the toggle button on the toolbar. - Optionally, set the time delay for launching the tests upon the changes in the code.
https://www.jetbrains.com/help/idea/2017.2/testing-with-phpunit.html
CC-MAIN-2019-43
refinedweb
1,281
54.02
Introduction NumPy (Numerical Python) is an open-source library for the Python programming language. It is used for scientific computing and working with arrays. Apart from its multidimensional array object, it also provides high-level functioning tools for working with arrays. In this tutorial, you will learn how to install NumPy. Prerequisites - Access to a terminal window/command line - A user account with sudo privileges - Python installed on your system Installing NumPy You can follow the steps outlined below and use the commands on most Linux, Mac, or Windows systems. Any irregularities in commands are noted along with instructions on how to modify them to your needs. Step 1: Check Python Version Before you can install NumPy, you need to know which Python version you have. This programming language comes preinstalled on most operating systems (except Windows; you will need to install Python on Windows manually). Most likely, you have Python 2 or Python 3 installed, or even both versions. To check whether you have Python 2, run the command: python -V The output should give you a version number. To see if you have Python 3 on your system, enter the following in the terminal window: python3 -V In the example below, you can see both versions of Python are present. If these commands do not work on your system, take a look at this article on How To Check Python Version In Linux, Mac, & Windows. Note: If you need help installing a newer version of Python, refer to one of our installation guides – How to Install Python on CentOS 8, How to Install Python 3.7 on Ubuntu, or How to Install Python 3 on Windows. Step 2: Install Pip The easiest way to install NumPy is by using Pip. Pip a package manager for installing and managing Python software packages. Unlike Python, Pip does not come preinstalled on most operating systems. Therefore, you need to set up the package manager that corresponds to the version of Python you have. If you have both versions of Python, install both Pip versions as well. The commands below use the apt utility as we are installing on Ubuntu for the purposes of this article. Install Pip (for Python 2) by running: sudo apt install python-pip If you need Pip for Python 3, use the command: sudo apt install python3-pip Important: Depending on the operating system you are using, follow the instructions in one of our Pip installation guides: Finally, verify you have successfully installed Pip by typing pip -V and/or pip3 -V in the terminal window. Step 3: Install NumPy With Pip set up, you can use its command line for installing NumPy. Install NumPy with Python 2 by typing: pip install numpy Pip downloads the NumPy package and notifies you it has been successfully installed. To install NumPy with the package manager for Python 3, run: pip3 install numpy As this is a newer version of Python, the Numpy version also differs as you can see in the image below. Note: The commands are the same for all operating systems except for Fedora. If you are working on this OS, the command to install NumPy with Python 3 is: python3 -m pip install numpy. Step 4: Verify NumPy Installation Use the show command to verify whether NumPy is now part of you Python packages: pip show numpy And for Pip3 type: pip3 show numpy The output should confirm you have NumPy, which version you are using, as well as where the package is stored. Step 5: Import the NumPy Package After installing NumPy you can import the package and set an alias for it. 
To do so, move to the python prompt by typing one of the following commands: python python3 Once you are in the python or python3 prompt you can import the new package and add an alias for it (in the example below it is np): import numpy as np Upgrading NumPy If you already have NumPy and want to upgrade to the latest version, for Pip2 use the command: pip install --upgrade numpy If using Pip3, run the following command: pip3 install --upgrade numpy Conclusion By following this guide, you should have successfully installed NumPy on your system. Check out our introduction tutorial on Python Pandas, an open-source Python library primarily used for data analysis, which is built on top of the NumPy package and is compatible with a wide array of existing modules. The collection of tools in the Pandas package is an essential resource for preparing, transforming, and aggregating data in Python. For more Python package tutorials, check out our other KB articles such as Best Python IDEs and more!
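As a quick end-to-end check that the interpreter and the package line up, you can run a few lines in the prompt you just opened (the exact version number will of course differ on your system):

import numpy as np

print(np.__version__)            # should match the version reported by pip show

a = np.arange(6).reshape(2, 3)   # a small 2x3 array: [[0 1 2], [3 4 5]]
print(a)
print(a.mean(), a.sum())         # 2.5 15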
https://phoenixnap.es/kb/install-numpy
CC-MAIN-2022-33
refinedweb
781
63.83
Hacking on CloudStack with Flask Hacking on CloudStack with Flask Join the DZone community and get the full member experience.Join For Free Discover a centralized approach to monitor your virtual infrastructure, on-premise IT environment, and cloud infrastructure – all on a single platform. Curator's Note: The content of this article was written by Sebastien Goasguen on the Build a Cloud blog. The problem with middleware/backend work is that nobody sees what you do and since I am terrible at graphics/design and know nothing of User Interface principles I am pretty much stuck in the dark. So today I invested couple hours on the Tuesday Silly CloudStack Hack. It is made of Flask, a good read, Twitter Bootstrap and some stolen code from CloudMonkey Flask is a terrific web microframework for Python. Of course I like Python so I think Flask is terrific. It is also great because you can use it to design clean REST services. Bootstrap is en vogue these days, and CloudMonkey, also written in Python is the new CloudStack command line interface boasting some cool features, like auto-completion, interactive shell and so on. I will keep it short, grab requester.py from the CloudMonkey source tree, it will help you make API calls to a CloudStack instance. Create a simple script with flask, and set your CloudStack endpoint variables, plus the keys (I am using DevCloud): import requester from flask import Flask, url_for, render_template, request app = Flask(__name__)' path='/client/api' host='localhost' port='8080' protocol='http' Setup a route that you will use to trigger a call to CloudStack. Something like this: @app.route('/users') def listusers(): response, error = requester.make_request('listUsers',{},None,host,port,apikey,secretkey,protocol,path) resp=json.loads(str(response)) return render_template('users.html',users=resp['listusersresponse']) Now Download Bootstrap and stick it in a static directory in your Flask application. Then create html template files using the jinja2 syntax, something like this: {% extends "base.html" %} {% block content %} {% if users %} {{ users }} {% else %} Hello World! {% endif %} {% endblock content %} sebmini:templates sebastiengoasguen$ Now run the app with Python and hit and bang, you just got yourself your quarter end Bonus....! Well not quite, but that's a start :) Joke aside, this should be the start of a fun Google Summer of Code project, I will put it on github if there is interest. Also if you want another silly hack, push requester.py to your android phone and using SL4a you can make calls to CloudStack from your phone...Silly CloudStack Wednesday hack anyone ? Learn how to auto-discover your containers and monitor their performance, capture Docker host and container metrics to allocate host resources, and provision containers. Published at DZone with permission of Mark Hinkle , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
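For completeness, a consolidated, runnable version of the fragments above might look like the following. The API keys are placeholders you would copy from your own CloudStack account, the users.html template is the Bootstrap-flavoured Jinja2 template shown earlier, and the app.run() entry point is assumed rather than quoted from the original script.

import json
import requester                      # grabbed from the CloudMonkey source tree
from flask import Flask, render_template

app = Flask(__name__)

apikey = 'REPLACE_WITH_YOUR_API_KEY'
secretkey = 'REPLACE_WITH_YOUR_SECRET_KEY'
path = '/client/api'
host = 'localhost'
port = '8080'
protocol = 'http'

@app.route('/users')
def listusers():
    # same call pattern as in the post: requester signs and sends the API request
    response, error = requester.make_request('listUsers', {}, None, host, port,
                                             apikey, secretkey, protocol, path)
    resp = json.loads(str(response))
    return render_template('users.html', users=resp['listusersresponse'])

if __name__ == '__main__':
    app.run(debug=True)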
https://dzone.com/articles/hacking-cloudstack-flask
CC-MAIN-2019-04
refinedweb
484
55.03
Macro processing in C/C++ The following code fragment produces the correct result sometimes and sometimes it does not. Explain what is happening and why. #include <stdio.h> #define max(a,b) ((a) > (b) ? (a): (b)) void main(void) { int i = 0; int j = 0; int a = 3; int b = 4; int c = 6; int d = 5; i = max(a,b); j = max(c,d); printf("i = %i, j = %i\n",i,j); i = max(a++, b++); printf("a = %i, b = %i\n",a,b); j = max(c++,d++); printf("c = %i, d=%i\n",c,d); return; } Solution Preview The given code fragment will produce unexpected results in the case of i = max(a++, b++); and j = max(c++,d++); Things would have worked as expected if max(...) were a function call. However, it is a macro call, and thus the given code fragments become the following after the preprocessing step during compilation. i = ((a++) > (b++) ? (a++) : (b++)); and j = ((c++) > (d++) ? (c++) : (d++)); Before these code fragments execute, (a,b) : (3,4) (c,d) : ... Solution Summary Response first points out the code fragments that will produce unexpected results, and then goes on explaining in complete detail (and step-by-step manner), why it is so.
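To see the effect concretely, the expanded statements can be compiled on their own. In the sketch below the comments trace the evaluation: the "winning" operand of the conditional is evaluated (and therefore incremented) a second time, which is exactly the surprise the question is about.

#include <stdio.h>

int main(void)
{
    int a = 3, b = 4, c = 6, d = 5, i, j;

    /* what the preprocessor substitutes for i = max(a++, b++); */
    i = ((a++) > (b++) ? (a++) : (b++));   /* 3 > 4 is false, so b++ runs twice */

    /* what the preprocessor substitutes for j = max(c++, d++); */
    j = ((c++) > (d++) ? (c++) : (d++));   /* 6 > 5 is true, so c++ runs twice  */

    printf("i=%i j=%i a=%i b=%i c=%i d=%i\n", i, j, a, b, c, d);
    /* prints i=5 j=7 a=4 b=6 c=8 d=6 */
    return 0;
}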
https://brainmass.com/computer-science/c/macro-processing-in-c-c-468303
CC-MAIN-2017-17
refinedweb
199
71.24
The samples are split into two parts: those that target .Net 4 and those that target .Net 4.5. The latter use the async/await keywords introduced in Visual Studio 2012 which makes programming with asynchronous tasks much simpler. Many of the samples use self host for starting ASP.NET Web API (that is, they run outside ASP.NET). These samples all use the local TCP port 50231 to listen for HTTP traffic. In order to start listening on that port you have to register an HTTP namespace with http.sys (or alternatively run Visual Studio as Administrator). You register the namespace from an elevated command prompt like this: - netsh http add urlacl url=http://+:50231/ user=%USERDOMAIN%\%USERNAME% For details, see how to register an http namespace with http.sys. Getting the Samples You can either clone the samples repository using the command (see Using Git with CodePlex for details): - git clone or download the samples by selecting the Download link: The former (i.e. using git) is best if you want to stay up to date with respect to the samples. Running the Samples The samples can all be run by hitting F5 in Visual Studio but some may need additional information such as an application key. The first time you run a sample, it will download the various NuGet packages and ask you to confirm their installation for that project. Note: When downloading the NuGet packages you may see a compilation warning about missing NuGet packages. In that case do as the message says and check Allow NuGet to download missing packages during build under Package Manager in Options. Please file issues on the samples as well as ideas for other samples. Let us know what you think; what you would like to see, and how this can be a useful collection of samples. Have fun! Henrik Thanks for another informative site. Where else could I get that type of information written in such an ideal way like this post. This post is really goregious. Hi Henrik, Can you please upload samples about how to authenticate using various kinds of authentication (OAuth, FormsAuth, OpenId, WIndows Integration, etc) in ASP.NET Web API? It would really be very helpful. Thanks Mel, yes, authentication samples is a great idea — will see what we can do. Hi Henrik, Can you please show an example how to post a collection ie. List<Object> to web api POST using httpclient Neon, If you look at the JsonUploadSample then it does something very similar — just using a single Contact object. If you just change Contact to List<Contact> and upload a list instead then it will work. Is that what you are looking for? Henrik Grreat job. The samples I have gone through so have given me a good insight into how httpClient works. Auth Sample will be great especially using httpClient. (There are already many Javacsript samples for WinLive, Google, Facebook) the link or "download the samples by selecting the Download link:", just above the picture is incorrect. It is going to aspnetwebstack instead of aspnet. Matthew, Thanks for pointing out the broken link — fixed! Henrik
Thanks It would be interesting to see samples of self-hosting the Web API in a Windows service application and also self-hosting using HTTPS and certificates. This would be very helpful. Thanks for the samples! hi when i try to buid the solution it gives me the following error Henrik, testing fw 4 WorldBankSample and I get this exception thrown from contentTask: Type 'Newtonsoft.Json.Linq.JToken'. mgmoody, I suspect you may have old bits. If you try the RTM version then there should be no problems. Thank you Good day! When start one of the "Web API" samples the start page has error: "This operation requires IIS integrated pipeline mode." Does anyone knows what might be the reason. I just download the source code and restore the nuget packages. Thanks, Stefan Hi: I am trying to run the Web Api Basic Authentication sample (aspnet.codeplex.com/…/latest) – the instructions say: To run the sample, first start the server; then run the client and press ENTER. For more information about the samples, please see go.microsoft.com/fwlink 1. How to I start the server 2. Once the server is running, how do i run the sample? Thank you. Roger
How to retrieve the process start time (or uptime) in python in Linux? I only know, I can call "ps -p my_process_id -f" and then parse the output. But it is not cool.

If you are doing it from within the python program you're trying to measure, you could do something like this:

import time

# at the beginning of the script
startTime = time.time()

# ...

def getUptime():
    """
    Returns the number of seconds since the program started.
    """
    # do return startTime if you just want the process start time
    return time.time() - startTime

Otherwise, you have no choice but to parse ps or go into /proc/pid.

A nice bashy way of getting the elapsed time is:

ps -eo pid,etime | grep $YOUR_PID | awk '{print $2}'

This will only print the elapsed time in the following format, so it should be quite easy to parse:

days-HH:MM:SS

(if it's been running for less than a day, it's just HH:MM:SS)

The start time is available like this:

ps -eo pid,stime | grep $YOUR_PID | awk '{print $2}'

Unfortunately, if your process didn't start today, this will only give you the date that it started, rather than the time. The best way of doing this is to get the elapsed time and the current time and just do a bit of math. The following is a python script that takes a PID as an argument and does the above for you, printing out the start date and time of the process:

import sys
import datetime
import time
import subprocess

# call like this: python startTime.py $PID
pid = sys.argv[1]

proc = subprocess.Popen(['ps', '-eo', 'pid,etime'], stdout=subprocess.PIPE)

# get data from stdout
proc.wait()
results = proc.stdout.readlines()

# parse data (should only be one)
for result in results:
    try:
        result.strip()
        if result.split()[0] == pid:
            pidInfo = result.split()[1]
            # stop after the first one we find
            break
    except IndexError:
        pass  # ignore it
else:
    # didn't find one
    print "Process PID", pid, "doesn't seem to exist!"
    sys.exit(0)

pidInfo = [result.split()[1] for result in results if result.split()[0] == pid][0]
pidInfo = pidInfo.partition("-")

if pidInfo[1] == '-':
    # there is a day
    days = int(pidInfo[0])
    rest = pidInfo[2].split(":")
    hours = int(rest[0])
    minutes = int(rest[1])
    seconds = int(rest[2])
else:
    days = 0
    rest = pidInfo[0].split(":")
    if len(rest) == 3:
        hours = int(rest[0])
        minutes = int(rest[1])
        seconds = int(rest[2])
    elif len(rest) == 2:
        hours = 0
        minutes = int(rest[0])
        seconds = int(rest[1])
    else:
        hours = 0
        minutes = 0
        seconds = int(rest[0])

# get the start time
secondsSinceStart = days*24*3600 + hours*3600 + minutes*60 + seconds

# unix time (in seconds) of start
startTime = time.time() - secondsSinceStart

# final result
print "Process started on",
print datetime.datetime.fromtimestamp(startTime).strftime("%a %b %d at %I:%M:%S %p")
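The answer above mentions /proc but does not show it. Reading /proc directly avoids spawning ps at all; the following is a rough sketch (not part of the original answer) that assumes the usual Linux /proc layout, where field 22 of /proc/<pid>/stat is the process start time in clock ticks since boot:

import os
import time

def process_start_time(pid):
    """Return the start time of `pid` as a Unix timestamp (Linux only)."""
    # /proc/<pid>/stat: field 2 (the command name) may contain spaces,
    # so split only after the closing ')'; field 22 is then at index 19.
    with open('/proc/%d/stat' % pid) as f:
        stat = f.read()
    fields = stat[stat.rindex(')') + 2:].split()
    start_ticks = float(fields[19])  # clock ticks since boot

    # Boot time, in seconds since the epoch, comes from the btime line of /proc/stat.
    boot_time = None
    with open('/proc/stat') as f:
        for line in f:
            if line.startswith('btime'):
                boot_time = float(line.split()[1])
                break

    ticks_per_second = os.sysconf('SC_CLK_TCK')
    return boot_time + start_ticks / ticks_per_second

def process_uptime(pid):
    """Return how many seconds `pid` has been running."""
    return time.time() - process_start_time(pid)

If installing a third-party package is acceptable, psutil wraps the same data portably: psutil.Process(pid).create_time() returns the start time as a Unix timestamp.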
httpsclient-particle-xguan (community library) Summary A library that helps photon boards talk https to web servers Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. Overview Designing this with IOT in mind. Most webservers willing to collect data from devices prefer talking https. Securing communication to-and-from smaller footprint devices (like particle.io Photon etc.) and standard web servers doing interesting stuff with this data (glowfi.sh, thingspeak.com, etc) is the goal. There is a hard limitation you hit with https and that dictates that you have about 50kB of meemory allocated for it. This is because of huge Certificate chains that get shipped from https servers. In all this, lack of a readily available plug and play httpsclient brings us here. Integrating MatrixSSL with a standard TCP client seemed like a good starting point. Any feedback (especially critical) and contributions are welcome! Building it with Web IDE - Import the httpsclient-particle - Add it to the application you are currently working on - (Or) use one of the examples in the examples/ directory - If you are using timeapi-test, you should be able to compile and run it readily - If you are using glowfish-post-test, you will need the authentication credentials from glowfish to be able to make the post. Once you have them follow the instructions in the example file to carefully fill them in. Building it locally with spark firmware Assuming you are comfortable using spark firmware library located: - Clone the httpsclient-particle - Copy the contents of the firmware directory into the firmware/user/src/ directory of the spark firmware library. - Pick one of the files in the examples directory. And copy it into user/src directory. - Modify the first line of the example to remove the path prefix (needed only for web IDE): #include "httpsclient-particle.h" - Delete/relocate the examples folder (local build won't succeed if it's left there) - Goto firmware/main/ of the spark firmware directory again. And build it (Again, instructions for this are at) - don't forget PLATFORM=photon Current State What's here is a semi-stable working httpsclient that can make requests from the particle photon board to webservers running https. The client implementation is simple, and as of now can handle 1 connection at any given time. Other features (for now) are: Adhere to security - TLS 1.2 - RSA 2048bit key length - Slow is OK, but secure is a must! - Ramping this up to 4096bit key length ought to be tested, and this may just consume a whole lot of memory (AGAIN, especially the server certificates). Writing the certificate chains to flash is an option. Small memory footprint: - Client only - Single session - No Client Authentication - Static memory allocation License: GPL, as matrixSSL-open library is under GPL. A few important changes from MatrixSSL: - Make the ssl structure static, as we are just using a single session. - Header file compatibility with particle.io build system. This means adjusting the include paths (this needs to be fixed). - Keep SSL in-out buffers static. TODO: - Add and test Elliptic curve support (This will take up a larger footprint) - Find a better way to seed entropy. Currently takes the last 8 bits of the system microsecond counter. - Add a feature to generate header files from RSA keys, etc. 
After this remove samplecerts from the repository (?) - Inspect all dynamic memory allocations and check for memory leaks (all psMallocs) - Add tests!! MatrixSSL tests are heavy handed. Need to carefully go through these and add the ones needed. - Find a better way to include header files - A memory pool implementation (if needed), especially to give back the obscene amount of memory SSL Certificates consume. - The only way to currently print and trace info on the particle.io's photon is by using Serial (written in c++). This is a bit painful if the rest of your library is in C, necessary '.h' file needs to be wrapped with extern C wrappers to get it to build correctly. - Last but no way the least, a thorough security AUDIT. - Decide on keeping this repository in sync with MatrixSSL-open. This isn't trivial as keeping up with Photon/Arduino/MatrixSSL build systems maybe be a handful. Browse Library Files
Mastering phpMyAdmin 3.1 for Effective MySQL Management — Save 50% Increase your MySQL productivity and control by discovering the real power of phpMyAdmin 3.1. Magic methods For starters, let's take a look at the magic methods PHP provides. We will first go over the non-overloading methods. __construct and __destruct class SomeClass { public function __construct() { } public function __destruct() { } } The most common magic method in PHP is __construct. In fact, you might not even have thought of it as a magic method at all, as it's so common. __construct is the class constructor method, which gets called when you instantiate a new object using the new keyword, and any parameters used will get passed to __construct. $obj = new SomeClass(); __destruct is __construct's "pair". It is a class destructor, which is rarely used in PHP, but still it is good to know about its existence. It gets called when your object falls out of scope or is garbage collected. function someFunc() { $obj = new SomeClass(); //when the function ends, $obj falls out of scope and SomeClass __destruct is called } someFunc(); If you make the constructor private or protected, it means that the class cannot be instantiated, except inside a method of the same class. You can use this to your advantage, for example to create a singleton. __clone class SomeClass { public $someValue; public function __clone() { $clone = new SomeClass(); $clone->someValue = $this->someValue; return $clone; } } The __clone method is called when you use PHP's clone keyword, and is used to create a clone of the object. The purpose is that by implementing __clone, you can define a way to copy objects. $obj1 = new SomeClass(); $obj1->someValue = 1; $obj2 = clone $obj1; echo $obj2->someValue; //echos 1 Important: __clone is not the same as =. If you use = to assign an object to another variable, the other variable will still refer to the same object as the first one! If you use the clone keyword, the purpose is to return a new object with similar state as the original. Consider the following: $obj1 = new SomeClass(); $obj1->someValue = 1; $obj2 = $obj1; $obj3 = clone $obj1; $obj1->someValue = 2; What are the values of the someValue property in $obj2 and $obj3 now? As we have used the assign operator to create $obj2, it refers to the same object as $obj1, thus $obj2->someValue is 2. When creating $obj3, we have used the clone keyword, so the __clone method was called. As __clone creates a new instance, $obj3->someValue is still the same as it was when we cloned $obj1: 1. If you want to disable cloning, you can make __clone private or protected. __toString class SomeClass { public function __toString() { return 'someclass'; } } The __toString method is called when PHP needs to convert class instances into strings, for example when echoing: $obj = new SomeClass(); echo $obj; //will output 'someclass' This can be a useful example to help you identify objects or when creating lists. If we have a user object, we could define a __toString method which outputs the user's first and last names, and when we want to create a list of users, we could simply echo the objects themselves. __sleep and __wakeup class SomeClass { private $_someVar; public function __sleep() { return array('_someVar'); } public function __wakeup() { } } These two methods are used with PHP's serializer: __sleep is called with serialize(), __wakeup is called with unserialize(). Note that you will need to return an array of the class variables you want to save from __sleep. 
That's why the example class returns an array with _someVar in it: Without it, the variable will not get serialized. $obj = new SomeClass(); $serialized = serialize($obj); //__sleep was called unserialize($serialized); //__wakeup was called You typically won't need to implement __sleep and __wakeup, as the default implementation will serialize classes correctly. However, in some special cases it can be useful. For example, if your class stores a reference to a PDO object, you will need to implement __sleep, as PDO objects cannot be serialized. As with most other methods, you can make __sleep private or protected to stop serialization. Alternatively, you can throw an exception, which may be a better idea as you can provide a more meaningful error message. An alternative to __sleep and __wakeup is the Serializable interface. However, as its behavior is different from these two methods, the interface is outside the scope of this article. You can find info on it in the PHP manual. __set_state class SomeClass { public $someVar; public static function __set_state($state) { $obj = new SomeClass(); $obj->someVar = $state['someVar']; return $obj; } } This method is called in code created by var_export. It gets an array as its parameter, which contains a key and value for each of the class variables, and it must return an instance of the class. $obj = new SomeClass(); $obj->someVar = 'my value'; var_export($obj); This code will output something along the lines of: SomeClass::__set_state(array('someVar'=>'my value')); Note that var_export will also export private and protected variables of the class, so they too will be in the array. Overloading methods Now that we've gone through all the non-overloading methods, we can move to the overloading ones. If you define an overloading magic method, they all have some behavior that's important to know before using them. They only apply to methods and variables that are inaccessible: - methods or variables that do not exist at all - variables which are outside the scope Basically, this means that if you have a public member foo, overloading methods will not get called when you attempt to access it. If you attempt to access member bar, which does not exist, they will. Also, if you declare a private/protected variable $hiddenVar, and attempt to access it outside the class' own methods, the overloading methods will get called. This applies to classes which inherit the original— any attempt to access the parent's private variables will result in overloading method calls. __call class SomeClass { public function __call($method, $parameters) { } } This magic method is called when the code attempts to call a method which does not exist. It takes two parameters: the method name that was being called, and any parameters that were passed in the call. $obj = new SomeClass(); $obj->missingMethod('Hello'); //__call is called, with 'missingMethod' as the first parameter and //array('Hello') as the second parameter This can be used to implement all kinds of useful things. 
For example, you can use __call to create automatic getter methods for variables in your class: class GetterClass { private $_data = array( 'foo' => 'bar', 'bar' => 'foo' ); public function __call($method, $params) { if(substr($method, 0, 3) == 'get') { //Change the latter part of the method name to lowercase //so that it matches the keys in the data array return $this->_data[strtolower(substr($method, 3))]; } } } $obj = new GetterClass(); echo $obj->getBar(); //output: foo You could also use this to emulate mixins. By storing mixin classes inside a variable in the class, you could use __call to check each of them for a method that's not in the main class. If you implement __call, any call that attempts to use a method which does not exist will go into it. You should always throw an exception in the end of __call if the method was not handled. This will help prevent bugs that can occur, if you mistype a method name and it goes into call which could silently ignore it. If you, for some reason, want to create a method with a PHP reserved word as its name, you can fake one using __call. For example, normally you can't have a method called "function" or "class", but with __call it's possible. __get and __set class SomeClass { public function __get($name) { } public function __set($name, $value) { } } The __get and __set pair is called when attempting to read or write inaccessible variables, for example: $obj = new SomeClass(); $obj->badVar = 'hello'; //__set is called with 'badVar' as first parameter and 'hello' as second parameter echo $obj->otherBadVar; //__get is called with 'otherBadVar' as the first parameter __get works similar to __call, in that you can return a value from it. If you don't return a value from __get, the value is assumed to be null. These two can be used to, for example, creating read-only variables, or C#-style properties which run some code when being read or written. class PropertyClass { private $_foo; public function setFoo($value) { //some code here $this->_foo = $value; } public function getFoo() { return $foo; } public function __set($name, $value) { $setter = 'set' . ucfirst($name); $this->$setter($value); } public function __get($name) { $getter = 'get' . ucfirst($name); return $this->$getter(); } } $obj = new PropertyClass(); //Since foo does not exist, __set is called, and //set then calls setFoo which can run additional code $obj->foo = 'something'; Like __call, it's important that you throw an exception if you don't handle a variable in __get or __set. Again, this will help prevent bugs that are caused by misspelled variable names. __unset and __isset class SomeClass { public function __isset($name) { } public function __unset($name) { } } If you want to use __get and __set, it's often useful to also implement __unset and __isset. They are called with unset and isset respectively: $obj = new SomeClass(); isset($obj->someVar); //calls __isset with 'someVar' unset($obj->otherVar); //calls __unset with 'otherVar' Words of warning about magic methods While it's possible to do fun things with magic methods, like making setting variables actually call methods, and even completely nonsensical things like making unset ($obj->foo) echo the value of $foo, keep in mind that while they are useful, it's easy to go overboard and make something that's more difficult to maintain because it's confusing. Magic functions This topic should probably be just called "magic function", since at the moment there is only one. 
__autoload You can define a function called __autoload to implement a default autoloader. Normally you would need to call spl_autoload_register to register any autoloader functions or methods, but not with __autoload. function __autoload($class) { require_once $class . '.php'; } $obj = new SomeClass(); //assuming SomeClass isn't defined anywhere in the code, //the __autoload function is now called with 'SomeClass' as its parameter If you have more complex autoloading logic, you may want to just implement it as a class, similar to Zend_Loader in the Zend Framework, and then register the autoloader method manually with spl_autoload_register or a special method of the class. Magic constants Magic constants are special predefined constants in PHP. Unlike other predefined constants, magic constants have a different value depending on where you use them. All of them use a similar naming style: __NAME__ - that is, two underscores, name in upper case, and then two more underscores. Here are all the magic constants: You can use __FUNCTION__ inside class methods. In this case, it will return just the method's name, unlike __METHOD__, which will always return the name with class name prefixed. These constants are mainly useful for using as details in error message, and to assist in debugging. However, there are a few other uses as well. Here's an example of throwing an exception with magic constants in the message: throw new Exception('Error in file ' . __FILE__. 'on line ' . __LINE__); It's worth noting that exception backtraces already come with the file and line. __FILE__ can be used to get the current script's directory: dirname(__FILE__); With __CLASS__, it's possible to determine a parent class' name: class Parent { public function someMethod() { echo __CLASS__; } } class Child extends Parent { } $obj = new Child(); $obj->someMethod(); //Will output: Parent Finally, the more special __COMPILER_HALT_OFFSET__ can only be used when a file contains a call to __halt_compiler(). This is a special PHP function, which will end the compiling of the file at the point where the function is called. Any data after this will be ignored. You could use this, for example, to store some additional data. You can then read this data by reading the current file, and looking at __COMPILER_HALT_OFFSET__. The usage of this approach is outside the scope of this article, so check the PHP manual for more details. PHP 5.3 magic features Lastly, let us check out what new magic methods and constants PHP 5.3 brings to the table! New magic methods in PHP 5.3 PHP 5.3 adds two new magic methods: __invoke and __callStatic. class NewMethodsClass { public static function __callStatic($method, $parameters) { } public function __invoke($parameters) { } } __callStatic is the same as __call, except it's used in static context: NewMethodsClass::someStaticMethod('Hi'); //calls __callStatic with 'someStaticMethod' and array('Hi') As with __call, remember to throw an exception if you don't handle some method in __callStatic, so that you prevent any bugs that are caused by mistyped method names. __invoke is a more interesting: It gets called when you use a class instance like a function $obj = new NewMethodsClass(); $obj('Hi'); //calls __invoke with 'Hi' This could be useful for implementing the command pattern, or to create something similar as the Runnable or Callable interfaces in Java. 
New magic constants in PHP 5.3 PHP 5.3 adds two new magic constants: __DIR__ and __NAMESPACE__ __DIR__ is the same as calling dirname(__FILE__) __NAMESPACE__ contains the current namespace, which is a new feature of PHP 5.3. New magic functions in PHP 5.3 PHP 5.3 does not add any new magic functions. Summary PHP contains magic methods, magic constants and a magic function, which are special features of the PHP language and can be used for various purposes. They are some of the features in PHP that makes it stand out from the crowd, as not many others provide similar capabilities, and as such, any serious PHP programmer should know them. About the Author : Jani Hartikainen Jani Hartikainen is a finnish web-developer. He has been programming in PHP for over 6 years, and is also skilled in various other technologies such as JavaScript, C# and Python. Visit his programming blog at. Books From Packt Post new comment
As promised, we’re publishing a general ReSharper 5.0 overview, elaborating on its feature set. Please keep in mind that this is a preliminary document. The general picture will stay unchanged, but local amendments cannot be ruled out at this point, and many user interface items will probably change. Features - External Sources A solution is not limited to sources included in your projects, but also contains sources that were used to build your libraries. Some companies publish parts of their sources using the Source Server feature of debug information files (PDB). This is the same technology that Microsoft uses to provide access to source code for parts of a new project, and the source code is full of [your favorite code smell here]. Please, make ReSharper analyze and fix it!”. Fortunately, ReSharper 5 can address this demand. You can set up your own code patterns, search for them, replace them, include them in code analysis, and even use quick-fixes to replace! Building patterns and enforcing good practices has never been this easy. Corporate and team policies, your own frameworks, favorite open source libraries and tools — you can cover them all. - Project Refactorings and Dependencies View Once you’ve gotten used to smart, automated refactorings that ReSharper provides, you can’t think of doing them manually anymore. In this release, we extend ReSharper’s coverage to bring you several refactorings for project structure. With ReSharper 5, you can move files and folders between projects; synchronize namespaces to folder structure in any scope – as large as your solution; safely delete obsolete subsystems without going type by type; and split a file with lots of types created from usages into their own dedicated files – in one go. We have also added a special project dependencies view to help you track down excessive dependencies between projects and eliminate them. As an early ReSharper 5 user said, “I’m no longer afraid of restructuring my project. I just go and do it whenever I feel it’s right”. - Call Tracking Find usages, find usages, find usages. Formerly, attempting to track call sequences in code could end up with lost context, lots of Find Results windows and, ultimately, frustration. With ReSharper 5, you can inspect an entire call sequence in a single window, in a simple and straightforward manner. Stuck in unfamiliar code? ReSharper’s code inspecting tools for the rescue! - Value Tracking Value Tracking gives working with resources by providing a full stack of features for ResX files and resource usages in C# and VB.NET code, as well as nowadays appear at a great speed, of using these features and improve their support for-rate IntelliSense experience, and the new version gives even more. We have added automatic completion for enum members and boolean values, made automatic triggering smarter, and greatly improved instantly switch between several code spots. Inspections - Solution-Wide Warnings and Suggestions We have received a lot of positive feedback from our users regarding solution-wide error analysis, which allows you to immediately see compilation errors in the whole solution. In ReSharper 5, we took this technology to a new level by adding warnings and suggestions to the list. Now you can browse code smells that ReSharper finds across your solution and quickly improve the offers to perform the conversion automatically, to make the developer’s intent crystal clear. 
- Use IEnumerable Where Possible With the power of LINQ, IEnumerable is more than enough to pass a collection of values. So why restrict yourself with a superb configurable formatter for XML files. Pingback: Twitter Trackbacks for JetBrains .NET Tools Blog » Blog Archive » ReSharper 5.0 Overview [jetbrains.com] on Topsy.com I was wondering if R# 5.0 will provide a feature to automatically generate documentation for methods, classes and assemblies; much like GhostDoc? Pingback: ReSharper 5.0 Overview - Laurent Kempé Brian, we don’t have plans to put GhostDoc functionality in ReSharper. what versions of visual studio will it support? when does the “free upgrade period” start for R# 5.0? so if someone buys 4.5 now will they get upgraded for free? or are they better off waiting? Pingback: Resharper 5.0! Hurrah! - Technicals and Technicalities Lotta cool stuff. I gotta say, I would love to have had that “delete unused code” feature the past three days, as I was trying to boil our million-line code base down to a simple repro case for a vendor’s bug. But I’m skeptical of the “auto-humps without Shift” idea, unless it’s gotten a lot better from its current incarnation. We’re migrating our code base to C# from Delphi, where the convention is that all type names start with T… TStrings, TMatcher, etc.; but our shop’s naming convention for automated test fixtures is to leave off the extra T: TestStrings, TestMatcher, etc. If I hit Ctrl+N and type “TMatcher”, ReSharper always defaults to “TestMatcher” instead of the exact match I typed. I’m finally learning that, if I don’t use Shift, and just type “taccount”, ReSharper will find what I actually type… now you’re taking that away? Please tell me you guys are at least paying attention to the bug reports I (and probably others) have already written up about usability problems with the current match algorithm. hurray! i was waiting for the new version. when are you planning to deliver it for purchasing? Pingback: Reflective Perspective - Chris Alcock » The Morning Brew #453 I hope Resharper has better support for xaml and workflow xoml. Features sound exciting and can’t wait for the EAP! Great work guys! Thanks! Wow, it looks like there is some really cool stuff on the way. I can’t wait to try it out! Keith, R# 5 will support VS2008 and VS2010. At to purchase timing, we’re planning a “buy R# 4.5, upgrade to 5.0 for free” campaign very soon, presumably by the time 5.0 EAP is open, so my guess is that you should wait just a little bit. פורומים, We’re hoping to release 5.0 to the general public before NY, so you’ll be able to buy it pretty soon. cecildt, As far as I know, there are plans to support XOML files for 5.0. As to XAML improvements, there are some but not too many. Pingback: New and Notable 381 : Sam Gentile's Blog (if (DeveloperTask == Communication && OS == Windows) Joe, Here’s your request, and it turned out we’ve already fixed the issue. It should now work the way you prefer No support for F# in VS 2010? Sweet. Here’s hoping you also fix the two major bugs I’ve been running into: * All the crashes I’ve been reporting around Safe Delete (and code deletion in general) the past few days. JIRA shows no sign that these will ever be worked on. There are dozens of JIRA issues, all marked “closed as duplicate”, and ending in a circular loop — all the “duplicate of X” chains lead to either RSRP-101785 or RSRP-101786, and those two are closed as duplicates of *each other*! And there’s no “fixed in version” on any of them. 
So as far as I can tell, this bug will never even get looked at, and any new reports get marked as duplicates and fall into the same black hole. * The “Color Color” problem (property with the same name as its type — RSRP-83171). Andrey told me by e-mail that the “Color Color” problem was “by design at the moment and it’s not easy to fix”, which surprises me since it’s just removing a duplicate from the list. The “Color Color” problem is my absolute #1 annoyance when I’m using ReSharper (more so since it *used* to work in older versions). Little things matter — this problem is the reason I currently wouldn’t recommend ReSharper to anyone. (Fix the Color Color problem, and I’ll be singing ReSharper’s praises from the rooftops again!) Pingback: DotNetShoutout Damian, Sad but true, no F# support yet. On a positive side, we’re still hoping to provide it in a future version. Do you plan for R# to support code contracts and/or PEX? Thanks for the good overview! Pingback: progg.ru Im wondering the same as Yuri: Do you plan for R# to support code contracts and/or PEX? We could put code contracts practice (ex: null test) with Structured Patterns? thanks Can’t wait to try this version out. I really like the foreach refactorings and the lowercase intellisense matching CamelCase. Pingback: ReSharper 5 – and I thought 4 was good » Lab49 Blog Wow, another great feature set added to ReSharper! I’ve been using it since version 1.5 and looking forward again to another release to this great productivity tool. I already love R#5 and I can hardly wait to get into my hands. I’ve been a user since the early days and can’t wait to see the new features, but I have to agree with the earlier commenter who said commented on the “* The “Color Color” problem (property with the same name as its type — RSRP-83171). This is really a horrible usability problem. Is there no way just to prefer the property over the type in the selection list? I’m excited about R#, but quite honestly, my enthusiasm is guarded. You guys dropped the ball on VS2010b1 support for EAP and made a lot of promises you didn’t keep. Those of us who dove into the early support were strung along for a long time with a poorly-performing nightly build. I hope this time it is different! Looks like some awesome new features coming in 5. The Upgrade to Linq is damn nifty. One thing I’m wondering about is the Layout Members xml definition. Has it been made easier to import, export and edit it. Right now just being in a multiline textbox (that has major scroll problems) is cumbersome. A simple, export/import dialog would be good. Even better than that would be some editing feedback, syntax highlighting or even VS integration. Eh, Tom, the current support for VS2010b1 is already really great. “ReSharper for Visual Studio 2010 (Preview) Nightly Builds” have been available since July and I am using Resharper on VS2010 all day long, with great pleasure – and a few VS restarts :). I’m usually the first to whine, but VS2010 support from JetBrains has been exceptional so far! Will Resharper 5.0 support .NET Ria Services? Tom, I can understand your frustration with VS2010 and ReSharper preview, but we were not going or promising release quality build for VS2010 beta 1. It is beta 1, with unstable API which has 99% chances to change in beta 2 and probably even in RTM. Thus, it would be waste of time to continue improving the build, knowing that all that would go straight to recycle bin. There were other reasons I can’t talk about. 
ReSharper 5 EAP, which will open soon will not have an option to install for VS2010 for a reason. When VS2010 Beta 2 will come out, we will start providing builds for this version as soon as we are ready. Pingback: Бюро находок #8 | С кодом по жизни. when will possible support refactoring for entity framework? if i rename an association in a edms file, i have to track the changes in the project, especially difficult is i use the Include(“Association”) in the code Pingback: Weekly Links #75 | GrantPalin.com I have been using resharper since version 1, and saw performance and stability get worse with each release. I completely stopped using it at version 4.5, because in spite of promises to work on performance it was still terrible. I now have a paid license for 4.5 that is unused because the software is so unstable and I had to uninstall.. Pingback: Andrew Veresov blog | Resharper 5.0 First Public Build is now available Pingback: JetBrains .NET Tools Blog » Blog Archive » Welcome to ReSharper 5.0 EAP! Pingback: JetBrains .NET Tools Blog » Blog Archive » ReSharper 5.0 Overview « Tech Notes Although I do still use (and like) R#, I have to agree with Mark Jones: please take some time to work on performance and memory usage before adding more features. Hi R# Team, I was also wondering about support for Code Contracts and/or Pex? Will there be a feature such as “suggesting” pre-/post-conditions? Is there anything special regarding the Parallel Task Library that will be part of .NET 4.0 ‘s mscorlib? In conjunction with the Patterns (Structured Search) feature, do you provide some templates for common patterns (searches)? Looking forward to use the new version on VS2010! Great to be able to get access to version 5. I personally haven’t experienced any issues with performance. Pingback: Interesting Finds: 2009 10.14 ~ 10.21 - gOODiDEA.NET @orangy: “When VS2010 Beta 2 will come out, we will start providing builds for this version as soon as we are ready.” That’s nice and I look forward to it, but what you need to understand is that there are a lot of us who purchased Resharper licenses and who are now using VS2010 for daily work. Because you effectively have no support for VS2010, neither with release versions nor with EAP, please don’t take too long to at least put something out there that is usable. I don’t personally care if it is 4.5- or 5.0-level; I just need something soon. @Tom: what feature do you specifically miss VS2010 doesn’t have? just tried beta2 and I feel there’s no need to slow my computer down with this plugin, as VS2010 is already good at refactoring… probably you can enlighten me what features I’ll gain, as many of them seems to complement each other (call to windows) Please do something about performance. Performance even in 4.5.1 is absolutely terrible. Pingback: mookid on code » R# FTW @coder: Resharper is much more than what VS2010 has built-in. Check out the web pages on the Jetbrains site that describe what Resharper does. @coder: The main new feature that Resharper brings to VS2010 is the ability to have it crawl so slowly that it is completely useless. The constant popup error messages are just a nice bonus. Top three new features needed for v5.0: 1) Speed it up. 2) Improve the performance. 3) Make it run faster. After that, you can put in the other features in any order you choose @CSharpDev: I’ll have to admit, I’m actually enjoying *not* having Resharper loaded in beta 2 since the built-in Intellisense is so much faster. But I do miss a lot of the helpful features. 
I hope that Jetbrains works hard on getting the performance right. It would be nice if they could eliminate the trade-off between having it slow down Visual Studio and having the extra functionality. That is always the pain-point in using Resharper. @Tom: Here is my experience with intellisense: No Resharper: Type some characters, accept a suggestion, type, accept, type, accept. With Resharper: Type some characters, sit back and wait for those characters you just typed to appear on the screen, accept a suggestion, type, wait for what you typed to appear on the screen… @CSharpDev You’ve been only able to try out ReSharper preview builds for VS2010. They’ve not been intended for production use. You should at least wait for EAP builds for VS2010 (should be available soon) before you can judge. As for intelliSense, try increasing the value in ReSharper | Options | Environment | IntelliSense | Completion Behavior | Automatically show completion list in…. Helps in most cases. @Jura Gorohovsky: I am talking about using ReSharper with VS2005 and VS2008, large projects. I have no reason to believe it would magically be better with VS2010. Resharper works fine on tiny projects in VS2005 or VS2008, so probably would work fine on tiny projects in VS2010. Your suggestion about automatically showing completion list has been suggested in forums before, but does no good whatsoever. Even turning intellisense completely off in Resharper doesn’t help. Here is No Resharper, VS intellisense: Type some characters, accept a suggestion, type, accept, type, accept. Here is With Resharper on a large project, intellisense turned completely off in Resharper, and every single recommendation stated time and time again in the forums about how to make Resharper more responsive: Type some characters, wait for what you typed to appear on the screen, type some more, oops I overran the keyboard buffer and lost some of what I just typed waiting for it to appear on the screen, get frustrated, type some more and wait for what you typed to appear on the screen. any support of C++? @jmowla Sorry, no support for C++ My Experience with Resharper - I came back after working in java 4 many years to use visual studio 2008 and was impressed I thought finally microsoft got an IDE right this just worked. - I then went to help a friend do something in VS 2008 and it was terrible unintuitive etc etc. I then realised exactly why resharper is so awesome it just does what it is supposed 2 do. I have now been using it for the last 9 months and would not program without it. This goes for our entire team of 12 Developers. This is the one peace of software that none of us would work without. - I have never experienced problems with it slowing my machine down (Although we do all program on top of the range machines). Resharper is by far my favorite piece of software ever. I have just started working with VS2010 and resharper and so far so good Brett, Thanks for your feedback! Quite relieving to know we’re not slowing everyone’s machines down No, you’re not slowing everyone down. We were getting some painful slowdowns on our now 3- or 4-year-old development machines (older processors, only 2 GB of RAM), but when we upgraded machines half a year ago, we got 64-bit OSes and 8 GB of RAM — and now the speed is fine. Opening the solution is still pretty slow, but we don’t do that very often — though it is still a pain point whenever we update from svn and someone else modified the project, and we have to reload it. 
But apart from that, and the occasional bug that slows something down, the speed has been pretty good on our new hardware. It sounds like the minimum machine requirements for using Resharper on large projects needs to be stated as much higher than what is needed for Visual Studio without Resharper. About those numbered bookmarks (finally!!!), what will the key combination be? I’m crossing my fingers it’ll be Ctrl+Shift+ to set and Ctrl+ to go… It ate my less than and greater than signs; I meant Ctrl+Shift+[number] and Ctrl+[number] Peter, Exactly! Ctrl+Shift+number to set, Ctrl+number to go, in both keymaps. That also means that the traditional “Ctrl+8″ to switch analysis on/off in current file will move to another shortcut (hard to say which exactly yet). Sweet! I could never get Visual Studio’s bookmarks to behave in a way that was comprehensible by mere mortals, much less actually usable… I’m looking forward to having Delphi-style bookmarks again! Pingback: Five reasons to install Resharper 5 today - Helper Code increase speed decrease memory usage no more features needed thanks a lot JeroenH Says nice words: “please take some time to work on performance and memory usage before adding more features” Pingback: ReSharper 5.0 Nightly Build – In Action | Łukasz Gąsior - Blog Pingback: The .NET Aficionado | ReSharper 5.0 Overview I accept the fact that Resharper MUST incur some slowdown in order to do its tremendous work. But a decent hardware with 2 cores and 4 gigs of RAM lets it work without problems. However there may be ways to mitigate that slowdowns – and here you R# devs may listen. Once I worked in a C project with Eclipse IDE. I had simultaneously opened the Linux kernel and OpenSolaris kernel (Eclipse people say it’s not a big project). It took some time to create an index with all symbols, but after that “Find usages”, “Find in files”, “Go to declaration” worked instantly, and I don’t know how many other great features too. This is an example, I’m pretty sure that at some level inside R# there are things that could be cached or precomputed and saved or anything. So maybe it’s worth creating some more temporary files with indexes and I don’t know, some intermediate representation of the source, to be really fast? Maybe some clever data structures to cache Intellisense would do the trick? I’d be certainly slower and my software would have been more buggy without Resharper. Version 5 introduces my favorite feature – a Linq converter, which will help my software to be even simpler, quicker written and looking more functional style, easier to maintain and easier to parallelize (thanks to .NET 4). @ Mark Jones, JeroenH, Aleksey, 5.0 is planned as a feature-focused release. We have just recently released 4.5 focused on performance and memory usage. The next performance-enhancing iteration will hopefully take place some time after 5.0, no earlier. Not necessary a separate performance-specific release but there’s some work to be done. @ Max, I’m glad you’re pretty happy with R# with decent hardware. I secretly wish more people had decent hardware. Caching and intermediate representation are there of course. Unfortunately, they don’t immediately cure everything. Memory is a big problem. To the extent that I may need to remove resharper from the mix. I constantly get ‘Out of memory’ exceptions in Resharper. Not necessarily r#’s fault but my symptoms are that things get gradually worse, then suddenly unworkable. All I can do is restart VS. 
After doing so memory in use is down to 1.5GB and 1.9GB after Resharper has finished its startup exercises. I can’t physically add any more memory to my PC without getting a new 64bit machine. I have a dual core PC with 4GB RAM – maybe Max should come and work on our 57 project solution (and growing) with his whizzy machine and see how he feels after a few hours work. The alternative to ditching Resharper is to split the solution but it is a great shame that we would have to go to this extent. On the status bar when memory usage is listed as 240MB or above I know I will soon get Resharper reporting out of memory and will have to restart Visual Studio. It is a real PIA. OK – Found that there is some additional things to do if you hit out of memory exceptions frequently. I’ve followed the instructions in this link which is for a wrapper to devenv.exe which causes the memory allocation algorithm ofVisual Studio to be altered so that memory fragmentation is better managed. Well done JetBrains! I also followed the instructions in the link on that page to Steven Harman’s blog where he details how you can get Visual Stuido to *actually* use all the memory on your machine if you have 4GB of RAM. These made a HUGE IMPROVEMENT. But make sure you backup first before changing anything – a colleague didn’t have quite so plain a sailing as I had with these changes. Pingback: popcyclical - Resharper 5 Beta Impressions Hello, Will it be possible to remove the resharper scrollbar in this version? Please, make it optional! Cheers, André Carlucci @andrecarlucci What do you call ReSharper scrollbar?
I have written Python code to retrieve a list of unique values from several rows with several attributes (in the example below, there are 5 attributes). The problem with this code is that the output field has only the result for the first row. It copies it to the rest of the fields. Here is the code:

import fme
import fmeobjects

index = 0
list_yes = []

# Template Function interface:
# When using this function, make sure its name is set as the value of
# the 'Class or Function to Process Features' transformer parameter
def compare_attributes(feature):
    global list_yes
    global index
    s1 = feature.getAttribute('Att_1')
    s2 = feature.getAttribute('Att_2')
    s3 = feature.getAttribute('Att_3')
    s4 = feature.getAttribute('Att_4')
    s5 = feature.getAttribute('Att_5')
    myList_1 = [s1, s2, s3, s4, s5]
    for i in myList_1:
        myList_2 = myList_1[index:]
        if i != '':
            for j in myList_2:
                if i == j:
                    list_yes.append(i)
                else:
                    pass
        index = index + 1
    list_set = set(list_yes)
    unique_list = list(list_set)
    var_def = str(unique_list)
    feature.setAttribute('NEWATT', var_def)

I have been looking for another built-in Python function and I found the following: list(set(...)), so I implemented it in the code like this:

import fme
import fmeobjects

# Template Function interface:
# When using this function, make sure its name is set as the value of
# the 'Class or Function to Process Features' transformer parameter
def compare_attributes(feature):
    s1 = feature.getAttribute('Att_1')
    s2 = feature.getAttribute('Att_2')
    s3 = feature.getAttribute('Att_3')
    s4 = feature.getAttribute('Att_4')
    s5 = feature.getAttribute('Att_5')
    myList_1 = [s1, s2, s3, s4, s5]
    unique_list = list(set(myList_1))
    feature.setAttribute('NEWATT', unique_list)

This code doesn't work; it shows the following error:

2019-04-01 17:50:54| 0.5| 0.0|ERROR |Python Exception <TypeError>: Failure to convert list unicode values to native UTF-8 values.
2019-04-01 17:50:54| 0.5| 0.0|ERROR |Error encountered while calling function `compare_attributes'
2019-04-01 17:50:54| 0.5| 0.0|FATAL |PythonCaller_31(PythonFactory): PythonFactory failed to process feature
2019-04-01 17:50:54| 0.5| 0.0|ERROR |A fatal error has occurred. Check the logfile above for details

Is it possible to correct this?

You are using the function interface, which processes each feature independently. If you want a unique set of values across all features, you need to use the class interface.

Do you really need to do this in Python? The DuplicateFilter transformer is built for this task.
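To make the class-interface suggestion concrete, here is a rough sketch. It is not from the original answer; it assumes FME's standard PythonCaller class hooks (input() called once per feature, close() once at the end, self.pyoutput() to emit features) and reuses the Att_1 ... Att_5 attribute names from the question:

import fme
import fmeobjects

class UniqueValueCollector(object):
    def __init__(self):
        self.values = set()
        self.features = []

    def input(self, feature):
        # Collect the five attribute values from every feature that passes through.
        for name in ('Att_1', 'Att_2', 'Att_3', 'Att_4', 'Att_5'):
            value = feature.getAttribute(name)
            if value not in (None, ''):
                self.values.add(value)
        # Hold the features back until all values have been seen.
        self.features.append(feature)

    def close(self):
        # Join the unique values into one string; passing a plain string to
        # setAttribute() sidesteps the unicode-list conversion error above.
        unique = ','.join(sorted(self.values))
        for feature in self.features:
            feature.setAttribute('NEWATT', unique)
            self.pyoutput(feature)

If Python isn't actually required, the DuplicateFilter route the answer mentions avoids all of this.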
Hi guys, I've been studying type conversions and have some problems when running a simple display program. It prints out what I think might be some garbage and I do not get why. Can you take a look and tell me where I should look for an error? Thanks.

Code:
#include <stdio.h>

main()
{
    char c = '\1';
    short int s = 2;
    int i = -3;
    long int m = 5;
    float f = 6.5;
    double d = 7.5;

    printf("%c %hd %d %ld %f %f\n", c, s, i, m, f, d);
    printf("%d %ld %f %ld %ld %d\n", c * i, s + m, f / c, d / s, f - d, (int) f);
    /* (int, long int, float, double, double, int) */
    return 0;
}

The results I am getting are:

:) 2 -3 5 6.500000 7.500000
-3 7 6.500000 0 1074659328 0

It looks to me as if something does not get printed out...
Before we finish up with testing, we're going to make two more changes to make our tests more useful and less likely to fall over in the future. First, just testing that the rendered.state.forks.length property is equal to 30 is a good start, but it would be nice to make sure that all 30 of those got rendered correctly by React. Each fork is rendered in our code using a <p> tag, so you might think we could write something like this in the last test: const forks = TestUtils.scryRenderedDOMComponentsWithTag(rendered, 'p'); expect(forks.length).toEqual(30); Sadly, that won't work: Jest will find 31 <p> tags in the page and fail the test. This happens because our page already has one <p> tag on there showing our breadcrumbs, so we have the 30 <p> tags from the forks plus one more from the breadcrumbs. There are a few solutions here. Option 1: remove the breadcrumbs. This would work, but means giving up a nice feature of our app. Option 2: render the commits, forks and pulls using a different tag name, such as <li>. This would also work, and doesn't require losing a feature, so this is certainly possible. But there's a third option, and it's the one we'll be using here: scryRenderedDOMComponentsWithClass(). This lets you find all tags based on their CSS class name rather than their tag name. This class name doesn't actually need any style information attached to it, so all it takes is to adjust the renderCommits(), renderForks(), and renderPulls() methods of our Detail component from this: src/pages/Detail.js return (<p key={index}>… …to this: src/pages/Detail.js return (<p key={index}… Back in the test code, we can now use scryRenderedDOMComponentsWithClass() to pull out exactly the things we mean, regardless of whether they are <p>, <li> or anything else: __tests__/Detail-test.js const forks = TestUtils.scryRenderedDOMComponentsWithClass(rendered, 'github'); expect(forks.length).toEqual(30); There's just one more thing we're going to do before we're finished with testing, which is to take a cold, hard look at this line: __tests__/Detail-test.js rendered.setState({mode: 'forks', forks: testData}); This is another example of code that works great but is still less than ideal. This time it's because we're breaking the fourth wall of object-oriented programming: our test is forcing a new state on our component rather than making a method call. If in the future you update the Detail component so that setting the forks state also calls some other code, you'll have to copy that code into your test too, which is messy and hard to maintain. The correct solution here is to use an OOP technique called encapsulation, which means our test shouldn't try to peek into and adjust the internals of our Detail component. Right now all our tests do exactly that: they read and write the state freely, which isn't very flexible going forward. I'm going to fix one of these with you right now, but you can fix the others yourself if you want to. We need a new method in the Detail component that updates the component state. This can then be called by our test to inject the saved JSON cleanly rather than by forcing a state. Realistically all we need is to move one line of code out of the fetchFeed() method and wrap it into its own method. Find this line: src/pages/Detail.js this.setState({ [type]: response.body }); That uses a computed property name along with the response body from SuperAgent in order to update our component state. 
We're going to make that a new method called saveFeed(), which will take the type and contents of the feed as its parameters: src/pages/Detail.js saveFeed(type, contents) { this.setState({ [type]: contents }); } You can now call that straight from the fetchFeed() method: src/pages/Detail.js if (!error && response) { this.saveFeed(type, response.body); } else { console.log(`Error fetching ${type}.`, error); } If you've made the correct changes, the two methods should look like this: src/pages/Detail.js fetchFeed(type) { if (this.props.params.repo === '') { // empty repo name – bail out! return; } const baseURL = ''; ajax.get(`${baseURL}/${this.props.params.repo}/${type}`) .end((error, response) => { if (!error && response) { this.saveFeed(type, response.body); } else { console.log(`Error fetching ${type}.`, error); } } ); } saveFeed(type, contents) { this.setState({ [type]: contents }); } With that new saveFeed() method in place, we can update the fifth test to use it rather than forcing a state on the component: __tests__/Detail-test.js it('fetches forks from a local source', () => { const rendered = TestUtils.renderIntoDocument( <Detail params={{repo: ''}} /> ); const testData = require('./forks.json'); rendered.saveFeed('forks', testData); rendered.selectMode('forks'); const forks = TestUtils.scryRenderedDOMComponentsWithClass(rendered, 'github'); expect(forks.length).toEqual(30); }); That shows you the technique of encapsulating your component's state behind a method call, which will make your code much more maintainable in the future. Yes, it's extra work in the short term, but it will pay off when you aren't up at 3am trying to debug an obscure problem! I'm not going to go through and adjust the rest of the tests, because that's just pointless repetition – you're welcome to consider it homework if you want to!
New Tool Reveals Internet Passwords 140 wiredmikey writes "A new password cracking tool released today instantly reveals cached passwords to websites in Microsoft Internet Explorer, and mailbox and identity passwords in all versions of Microsoft Outlook Express, Outlook, Windows Mail, and Windows Live Mail." Prettier Tool, Old Exploit (Score:5, Insightful) When you click "remember my password" the browser stores it in a semi-obfuscated way. Yes, it encrypts it but it must also put the key it uses to encrypt your password on your hard drive somewhere. Since your browser is not also a rootkit, any application you run on your box can access everything your browser can write. Therefore you need only spend the time to figure out where the encryption key is being stored and what kind of encryption the browser is employing to encrypt your password. When your mail client or chat client are remembering your passwords, it's no different. We could have a lengthy debate about whether 'remember your password' should be allowed but apparently the majority of users are okay with it considering the convenience it grants them. If they use the same machine to surf malicious websites, this makes it easier for malware to steal the passwords than a complex keylogging system A few simple lines of code later and you too can write your own command line password discovery tool. Slap a seksi user interface on that and apparently you can sell it for $49. Re:Prettier Tool, Old Exploit (Score:5, Informative) Re:Prettier Tool, Old Exploit (Score:5, Funny) Re:Prettier Tool, Old Exploit (Score:5, Funny) Re: (Score:2) WTF? I have the same password on my atmosphere shield! Re: (Score:1) Thank you for pressing the self destruct button. Re:Prettier Tool, Old Exploit (Score:5, Interesting)!). :-) Re: (Score:2) You keep going on and on about people going on and on about something. That's funny. Re: (Score:1, Offtopic) Re: (Score:2) Mr. Balmer? Re: (Score:3, Funny) Re: (Score:1, Redundant) If I enter mine, all you'll get is asterisks. Watch: ******* Re:Prettier Tool, Old Exploit (Score:5, Funny) [bash.org]. Re: (Score:1) Re: (Score:2, Funny) Re: (Score:1, Insightful) And this is what Windows does. The CryptProtectData API uses a key that is itself encrypted with (data derived from) the user's password. So you can only access the cached passwords if the user is logged on or you know the password. Is that supposed to be PRAISING that boneheaded scheme? Re:Prettier Tool, Old Exploit (Score:4, Interesting) Well, the Windows scheme only protects your password from malicious software if you never log in at all; once you're logged in any program can pull the passwords, even if you never load the browser. Firefox can only give up master password protected passwords if you launch the browser and provide the master password. And an extension exists to configure the Firefox password manager to "forget" the master password (which is never actually stored, but you know what I mean) after a few minutes, limiting the window of vulnerability further. Beyond that, if you've got truly malicious software actively running on your computer at all times (not just some website that gets brief read access through an exploit), you're hosed no matter what. Even if you never use a password manager, they can read the password as you type it into the browser; it might take more time than decrypting a password store and forwarding the data in bulk, but it's just as effective over the long haul. 
It's a trade off between window of vulnerability, scale of breach, and hassle. No manager at all is a hassle (to remember all usernames and passwords), but it's the most secure, since you can only lose one password at a time, with narrow windows of vulnerability. Password managers mean the scale of breach potential increases (you can lose them all at once). Firefox with a master password narrows the window of vulnerability relative to IE, and the extension that re-locks the store narrows it further, at the cost of needing to remember and type the password store password. I consider it a reasonable trade-off, given that I'm not going to remember the user name and password for every site I visit. Even if I wanted to use the same one everywhere (and I don't, because then one site breach means I lose everything), differing username and password requirements make that impossible, and frankly, my memory isn't good enough to track login info for fifty odd websites, including a dozen I visit only once or twice a year. Re: (Score:2) Re: (Score:2) My girlfriend does it first thing after installing Firefox on every machine she's ever owned (and she's not particularly computer savvy; she's a Re: (Score:2) You could also look at LastPass - [lastpass.com] - which works very well across Windows/Mac/Linux, Firefox, Chrome, Safari, etc, and on many mobile phones as well. Quite well designed and mature, and can be used offline though it's a browser addon, and syncs your password data to/from the cloud automatically, but also supports export to various formats if the cloud goes away. Now has a feature to manage non-browser passwords as well. Re: (Score:2) 12345? That's amazing! I've got the same combination on my luggage... Re: (Score:2) Using this a browser can store what it needs in a secure way. Access to each and every item is controlled by ACLs that you can tweak to your heart's content. Re: (Score:2) Apple offers the Keychain APIs for secure storage of identity items as well. Using this a browser can store what it needs in a secure way. Access to each and every item is controlled by ACLs that you can tweak to your heart's content. And we all know that with the excellent security synergy between users and application developers, the result of having freely tweakable security settings that default to moderate strength inevitably tends towards most users finding their own optimal balance of security and convenience that never leaves anyone at significant risk. What, you haven't noticed that? I'm SHOCKED! Snark aside: YES, Apple provides a strong toolkit and default behaviors (in Safari and elsewhere) that set a reasonably secure norm Re: (Score:2) Which is why I like Seamonkey's ability to secure the password store with a password of its own so that you're not simply relying on security through obscurity. Re: (Score:2, Informative) Except the first time you want to access the password store in each session, you present your password that "unlocks" the password store, then THAT password is persisted for the remainder of the session. So, either way, if you visit a malicious website the chances are your password store is in a vulnerable state (the password store is open for business, and the password is available somewhere). In both the Seamonkey/Firefox and Microsoft cases, the password store is vulnerable once it's logged in. The on Re: (Score:3, Informative) Re: (Score:2, Insightful) Not to mention that for the open source browsers you can probably just look to see where it stores those keys. 
This is not a knock against the system, or even the approach, but just an observation. Assuming the tool is just using the associated "Remember my password" functionality, then this is a non-story and people could get it without the tool. Heck, in Firefox, and I believe Chrome, you can view your stored passwords in plain text using the built-in password manager. Re: (Score:2) Firefox doesn't even attempt to hide it: Preferences -> Security -> Saved Passwords -> Show Passwords. Re: (Score:3, Informative) If you assign a master password that changes for you a bit; it won't show them without you entering the master password, twice IIRC. Re: (Score:1, Interesting) Perhaps this needs a rethink on filesystem security? I'm thinking a desktop OS wherein each application is assigned a directory/folder on installation, and is only able to access its own folder a per user generic 'documents' folder, and a per user, application specific configuration folder. There'd be some costs to that - developers would have to compile against APIs and libraries rather than importing them in from the system at runtime. This would make individual programs larger and increase maintenance req Re: (Score:2) On OS X, the keychain is stored encrypted. When you log in, the keychain daemon runs and, if your keychain password matches your login password, decrypts the store into RAM. Individual passwords can only be accessed by other apps via RPC to this daemon. This RPC uses Mach ports, which allow the process on the other end to be identified. Access to individual passwords must be specifically granted (on a one-off or permanent basis) to apps, although any app can access all passwords that it created. If the Re: Interface (Score:2) I wanna see the Skeksi interface! The Dark Crystal (1982) [imdb.com] Re: (Score:2) Re: (Score:1) Re: (Score:1) Slashvertisment if EVER I saw one. (Score:5, Interesting) Check out [nirsoft.net] for password recovery tools, for free, that have been available for ages. Re: (Score:2) Re: (Score:2) My point is valid and still stands. The tools I linked to are EXACTLY the same. Re: (Score:2) You definitely didn't RTFA, or understand the summary. It's a locally run program that reveals passwords for the sites you visit to the person who runs the program. Re: (Score:1) Nowhere in TFA it says anything about an expoit that reveals your password to a website you visit. It mentions that it reveals passwords that are cached. From the F. Re: (Score:2) Re: (Score:1) The OP is actually agreeing with you, dude. Read again. I agree, this is nothing new. Nirsoft tools are great, I've been using them for ages. Time to make a donation. Re: (Score:1) [blogspot.com]. New? I don't think so. (Score:3, Funny) vs OS X keychain? (Score:2) Is it also $49 safe? Thanks Depends (Score:3, Interesting) Anything that just stores passwords for automatic login, and doesn't require any user interaction, is not secure from something like this. Reason is if a program, like say Thunderbird, can get your e-mail password to hand off to the server, well then another program can too. It is stored in some easily reversible form. However, if the program itself needs a password to access the password store, then it should be secure provided a good password is used. 
The reason is that it uses that password to encrypt th Re: (Score:2) The default keychain file is the login keychain, typically opened on login by the user's login password (although the password for this keychain can instead be different from a user’s login password, adding security at the expense of some convenience). ... The keychain file(s) stores a variety of data fields including a title, URL, notes and password. Only the password is encrypted and it is encrypted with Triple DES. Title is Inaccurate (Score:5, Informative) Re: (Score:2) Re: (Score:2) Actually, to be honest, when I first saw the headline, I thought to myself, "When asked to stop revealing people's passwords, the tool put his oakleys on, popped his collar, and then nah "Nah, bro," before walking away. No that is incorrect (Score:2) Windows passwords are stored using non-reversible encryption be default. For Vista and 7, they are stored only using the HTLMv2 hash by default, which is extremely secure. For XP passwords under 14 characters it does store the LM has as well by default, which can generally be cracked with only a little effort as it is not secure. What this tool does is reveal saved passwords in programs. That is not hard to do. Any password you save for a remote system must, by definition, be stored using some sort of revers Bah... (Score:1) Re: (Score:1) Re: (Score:2) No. Your saved browser passwords are only secure if the browser provides (properly implemented) password protection for the saved passwords. i.e. The passwords are encrypted with a key, which is encrypted with a password that the browser requires you to enter before it will allow access to your saved passwords. Heh (Score:5, Interesting). This was how I discovered the password for our dial-up internet back when I was in middle school in the mid-90's. My mom entered the password, and usually waited until it connected...but one time she slipped up, and left before it connected. I hit "cancel", and sure enough the password was still there, just blocked by asterisks. Thanks to "Revelation", I got it and was able to log in during the middle of the night, chatting it up on Yahoo and working on my Angelfire web page. Ah, memories... Re: (Score:2, Funny) wtf? I almost have the exact same story... Re: (Score:1) Re: (Score:2) My mother went for the low-tech solution to keeping my brother and I off the internet when she wasn't around - taking the power cord to the PC with her. Suffice to say, they don't call them kettle cords for nothing ;) Re: (Score:2) Re: (Score:2) It's been a long time, but I'm 99.9% sure that was it! Re: (Score:1) What kind of mutant alien monster are you?? Re: (Score:3, Informative) The kind who had found his step-dad's "collection", and didn't need crappy mid-90's Internet video for his fapping ;-) Re: (Score:2). In that version of Windows, a password edit control just had a password style set on it and you could effectively disable that with some simple Windows API calls. Worse, you could just WM_GETTEXT and get the password out in plaintext without changing the style. Re: (Score:2) That's an odd way to misspell "masturbating furiously". Re: (Score:1) Years ago I once lost the password for my dial-up internet, and it was easier to make a 'modem tap' to recover it than it was to dig into the binaries and extract the encrypted password from the dialup networking glop I used back then. I just soldered on a third 'listen only' tap connector on my modem cable and intercepted the password as it was sent out to the modem. 
Re: (Score:2) <code> #include "stdafx.h" int ReadOtherProcess (HWND hwnd, void *address, void *buf, unsigned len) { unsigned long pid; HANDLE process; GetWindowThreadProcessId ( hwnd, &pid ); process = OpenProcess (PROCESS_VM_OPERATION|PROCESS_VM_READ| Sigh. (Score:5, Interesting) This isn't anything like Cain & Abel or 1000+ other tools did before for OVER TEN FSCKING YEARS. If slashdot ever posts "news" from sites like securityweek again I might cancel my newsletter subscription. Tip: security knowledge comes from security related blogs/forums (ie. hackers), not "news" websites which place more product placement than news. Requesting delete because that VB.NET tool doesn't deserve the bandwidth it will cost. Re: (Score:2) Tip: A large number of stories on Slashdot are product placement. It has been this way since, to my recollection, the series of stories on They Might be Giants. It was probably going on before that and I just didn't recognize. Those seemed like the first slashvertisements that made no real effort to disguise themselves. Slashdot is good for its user submitted content. There are still some really good, really informative discussions going on involving people who really know the subjects, that can't be found a Re: (Score:1) Requesting delete because that VB.NET tool doesn't deserve the bandwidth it will cost. In their defense, the core logic is written in C#. Re: (Score:2) Re: (Score:2) Re: (Score:2) Passwords (Score:3, Funny) And it's for this reason that I write all my passwords down on the back of my hand. I've already addressed the problem of them washing off by using using permanent marker. And not bathing. Re: (Score:2) Which is this? (Score:5, Insightful) Is this an alert or an advert? ;) Re: (Score:2) An Adlert? Re: (Score:2) Ask your doctor if Adlert is right for you. Nice. I appreciated that. Well, ok (Score:1) in Microsoft Internet Explorer, mailbox and identity passwords in all versions of Microsoft Outlook Express, Outlook, Windows Mail and Windows Live Mail." ...But how does this effect me? Re: (Score:2) That would be an interesting question, if you didn't actually mean affect. Re: (Score:3, Funny) I think it effected his post. Solve the problem (Score:2) Re:Solve the problem (Score:5, Funny) Firefox password security (Score:4, Informative) Firefox offers an option to use a [user-supplied] master password to encrypt/decrypt password data. If a Firefox user enables that functionality, then Firefox would not [by my guess] be vulnerable to an exploit strategy such as the one employed by this cracking product (which relies on rule-based keys instead of a user-supplied key). Firefox passwords may, however, be vulnerable to other cracking strategies. Here are some more details about how Firefox stores passwords. [luxsci.com] Site seems to be down (Score:1) I'm glad they finally figured this out. (Score:4, Funny) Re: (Score:1) I was beginning to think IE cache was unbreakable... How does one break something that is already broken? Naw, just kidding. Shocked (Score:1, Troll) I am shocked, shocked to find a security flaw in Microsoft Internet Explorer. New Tool Reveals Internet Passwords (Score:1) all your password belong to us PR (Score:1) and it doesnt work with LINUX?!?!?! (Score:1) I am outraged! Why doesn't this work on Linux? Its always the same... people think that FOSS is not that important blablabla... </tong-in-cheek> "Remember my password" is inherently insecure. (Score:2) Any "remember my password" feature in any app is inherently insecure. 
Whenever I write such a feature, I encrypt the saved password, but I understand that this will only defeat wannabe crackers whose level of sophistication is limited to running strings on cache files. Any cracker worth their salt will reverse-engineer the encryption used by the app. It's for this reason that I never enable "remember my password" where important passwords are involved. 1995 wants its news back (Score:2) Yawn. LSA secrets aren't particularly. Why not write stories about those who build things rather than give valuable Slashdot electrons to breaking stuff? Boring. My wife needs a tool like this. (Score:2)
https://it.slashdot.org/story/10/07/01/1239234/new-tool-reveals-internet-passwords
CC-MAIN-2016-30
refinedweb
3,146
62.27
Filtering means preserving certain favored signal frequencies while simultaneously suppressing others. At first, this may seem as easy as simply removing all the offending frequencies in the Fourier transform and keeping the rest, but, as we will see, practical considerations prohibit this. There are many, many approaches to filtering, and here we focus on the popular finite impulse response (FIR) filters. As the name suggests, these filters have no feedback loops, which means that they stop producing output when the input runs out. These are very popular in practice, with blazing-fast on-chip implementations and easy-to-understand, flexible design specifications. This section introduces the main concepts of FIR filter design.

Finite Impulse Response (FIR) filters have the following form:

$$ y_n = \sum_{k=0}^{M-1} h_k x_{n-k} $$

with real input $ x_n $ and real output $ y_n $. These are called finite impulse response because they stop after running out of input (i.e. there is no feedback to keep this going indefinitely on its own). These are also sometimes called moving average filters or all-zero filters. The word taps is used for $ M $, so a 10-tap filter has $ M=10 $ coefficients.

For example, given the two-tap filter $ h_0 = h_1 = 1/2 $, we have

$$ y_n = x_{n}/2 + x_{n-1}/2 $$

For example, for input $ x_n=1 \hspace{.5em} \forall n \ge 0 $, $ y_n = 1 \hspace{.5em} \forall n \ge 1 $. Note that we lose one sample in filling the filter for $ n=0 $, which means we have to wait one sample for a valid filter output. This is the filter's transient state. As another example, for input $ x_n= \exp \left( j\pi n \right) \hspace{.5em} \forall n\ge 0 $, then $ y_n=0 \hspace{.5em} \forall n \ge 1 $. These two cases show that this moving average has eliminated the highest frequency signal ($ \omega=\pi $) and preserved the lowest frequency signal ($ \omega=0 $).

Let's analyze this filter using the tools in the scipy.signal module.

from __future__ import division
from scipy import signal

# Plotting helpers (subplots, log10, angle, pi, ...) assume the notebook's pylab environment.
fig, axs = subplots(2,1,sharex=True)
subplots_adjust( hspace = .2 )
fig.set_size_inches((5,5))

ax=axs[0]
w,h=signal.freqz([1/2., 1/2.],1) # Compute frequency response
ax.plot(w,20*log10(abs(h)))
ax.set_ylabel(r"$20 \log_{10} |H(\omega)| $",fontsize=18)
ax.grid()

ax=axs[1]
ax.plot(w,angle(h)/pi*180)
ax.set_xlabel(r'$\omega$ (radians/s)',fontsize=18)
ax.set_ylabel(r"$\phi $ (deg)",fontsize=18)
ax.set_xlim(xmax = pi)
ax.grid()

# fig.savefig('[email protected]', bbox_inches='tight', dpi=300)

In the figure above, the top plot shows the amplitude response of the filter (in dB) and the bottom plot shows the phase response in degrees. At $\omega=0 $, we have $|H(\omega=0)|=1 $ (i.e. unity gain), which says that our moving average filter does not change the amplitude of signals at $ \omega=0 $. We observed this earlier with $x_n=1 $ that produced $y_n=1 $. When we consider the other extreme with $\omega= \pi $, we have $|H(\omega=\pi)|=0$, which we observed earlier for $ x_n= \exp \left( j\pi n \right) \hspace{.5em} \forall \hspace{.5em} n \ge 0 $. Thus, signals at $ \omega=\pi $ are completely zeroed out by the filter. Now, let's consider a signal halfway between these two extremes:

$$ x_n= \exp \left( j\pi n/2 \right) \hspace{.5em} \forall \hspace{.5em} n \ge 0 $$

The following figure shows the filter's corresponding output.

Ns=30 # length of input sequence
n= arange(Ns) # sample index
x = cos(arange(Ns)*pi/2.)
y= signal.lfilter([1/2.,1/2.],1,x)

fig,ax = subplots(1,1)
fig.set_size_inches(10,3)
ax.stem(n,x,label='input',basefmt='b-')
ax.plot(n,x,':')
ax.stem(n[1:],y[:-1],markerfmt='ro',linefmt='r-',label='output')
ax.plot(n[1:],y[:-1],'r:')
ax.set_xlim(xmin=-1.1)
ax.set_ylim(ymin=-1.1,ymax=1.1)
ax.set_xlabel("n",fontsize=18)
ax.legend(loc=0)
ax.set_xticks(n)
ax.set_ylabel("amplitude",fontsize=18);

# fig.savefig('[email protected]', bbox_inches='tight', dpi=300)

The figure above shows the input/output time-domain response of the filter when $\omega=\pi/2 $. At this point, $ |H(\omega)|^2 = 1/2 $, meaning the signal energy has been cut in half, which is shown by the corresponding lower amplitude of the output signal. The signal phase has also been shifted by 45 degrees. To see this, note that the input signal repeats every four samples (360 degrees). The graph above shows that the signal phase has shifted -45 degrees, which is equivalent to a shift of one-half sample. Note that signal.lfilter automatically inserts a zero initial condition, so we had to drop that one incomplete output point to line up the plots. By interpreting the magnitude/phase plots above, we can reconcile the filter's input/output behavior in the time domain.

What happens when we lengthen our moving average filter to consider averaging over eight samples instead of two?

from matplotlib import gridspec

fig=figure()
fig.set_size_inches((10,5))
gs = gridspec.GridSpec(2,2)
gs.update( wspace=0.5, hspace=0.5)

ax = fig.add_subplot(subplot(gs[0,0]))
ma_length = 8 # moving average filter length
w,h=signal.freqz(ones(ma_length)/ma_length,1)
ax.plot(w,20*log10(abs(h)))
ax.set_ylabel(r"$ 20 \log_{10}|H(\omega)| $",fontsize=18)
ax.set_xlabel(r"$\omega$",fontsize=18)
ax.vlines(pi/3,-25,0,linestyles=':',color='r',lw=3.)
ax.set_ylim(ymin=-25)
ax.grid()

ax = fig.add_subplot(subplot(gs[0,1]))
ax.plot(w,angle(h)/pi*180)
ax.set_xlabel(r'$\omega$ (radians/s)',fontsize=18)
ax.set_ylabel(r"$\phi $ (deg)",fontsize=16)
ax.set_xlim(xmax = pi)
ax.set_ylim(ymin=-180,ymax=180)
ax.vlines(pi/3,-180,180,linestyles=':',color='r',lw=3.)
ax.grid()

ax = fig.add_subplot(subplot(gs[1,:]))
Ns=30
n= arange(Ns)
x = cos(arange(Ns)*pi/3.)
y= signal.lfilter(ones(ma_length)/ma_length,1,x)
ax.stem(n,x,label='input',basefmt='b-')
ax.plot(n,x,':')
ax.stem(n[ma_length-1:],y[:-ma_length+1],markerfmt='ro',linefmt='r-',label='output')
ax.plot(n[ma_length-1:],y[:-ma_length+1],'r:')
ax.set_xlim(xmin=-1.1)
ax.set_ylim(ymin=-1.1,ymax=1.1)
ax.set_xlabel("n",fontsize=18)
ax.set_xticks(n)
ax.legend(loc=0)
ax.set_ylabel("amplitude",fontsize=18);

# fig.savefig('[email protected]', bbox_inches='tight', dpi=300)

The figure above shows the magnitude and phase responses of the longer moving average filter. The zig-zag lines of the phase plot are due to the phase wrapping around at the 180 degree mark. The bottom plot shows the input/output sequences. Note that the output is delayed by the length of the filter. Because the frequency of the input signal is $ 2\pi/6 $, its period is $T=6$ samples; that is, the input signal repeats every six samples. According to the phase plot above, the phase at this discrete frequency is approximately 30 degrees plus the 180 degree jump, so the output sequence is shifted over by half a sample ($ 30/360 = 0.5/6 $) plus three samples (half the six-sample period, $ 180/360=1/2 $).
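To make the reading of these plots concrete, we can evaluate the eight-tap filter's frequency response at the marked frequency and print the numbers the plots are showing. This is only a quick check, written with explicit imports rather than the notebook's pylab environment; the grid size of 1024 points is an arbitrary choice.

import numpy as np
from scipy import signal

ma_length = 8                                 # the same eight-tap moving average as above
h = np.ones(ma_length)/ma_length
w, H = signal.freqz(h, 1, worN=1024)          # dense frequency grid on [0, pi)
idx = np.argmin(np.abs(w - np.pi/3))          # grid point closest to the input frequency

print(20*np.log10(np.abs(H[idx])))            # gain in dB at w = pi/3
print(np.angle(H[idx])*180/np.pi)             # phase in degrees at w = pi/3

These two numbers should match the values read off the magnitude and phase plots at the dotted red marker.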
Note that the magnitude plot shows multiple lobes and dips at discrete frequencies where the output is zeroed out by the filter. Thus, even though we just lengthened our moving average filter by a few samples, we have encountered much more complicated amplitude and phase behavior. We need to assemble the right tools to understand this problem in general.

So far in these pages, we have considered samples of the Fourier transform at discrete frequencies ($ \omega = \frac{2\pi}{N} k $). Now we want to consider the Fourier transform of the discrete input at continuous frequency, defined as the following:

$$ H(\omega) = \sum_{n\in \mathbb{Z}} h_n \exp \left( -j\omega n \right) $$

Note that this is periodic, $H(\omega)=H(\omega+2\pi)$.

The discrete convolution of infinite sequences $ x_n $ and $ h_n $ is defined as

$$ y_n = \sum_{k\in \mathbb{Z}} x_k^* h_{n-k} $$

where the asterisk superscript denotes complex conjugation. If we have a finite filter length of $ M $ ($ h_n =0, \forall n \notin \{0,1,...,M-1\} $), then the filter output reduces to

$$ y_n = \sum_{k=0}^{M-1} h_k x_{n-k} $$

Note this is closely related to, but not the same as, the circular convolution we have already discussed because there is no wrap-around. However, because it is very efficient to compute this using a DFT, we need to relate these two versions of convolution. If $x_n$ is nonzero for $ P $ samples, then the output $ y_n $ is non-zero only for $ P+M-1 $ samples. Thus, if we zero-pad each sequence out to this length, take the DFT, multiply the DFTs, and then invert the DFTs, we obtain the result of this non-circular convolution. Let's code this up below using our last example.

h=ones(ma_length)/ma_length # filter sequence
yc=fft.ifft(fft.fft(h,len(x)+len(h)-1)*np.conj(fft.fft(x,len(x)+len(h)-1))).real

fig,ax=subplots()
fig.set_size_inches((10,2))
ax.plot(n,yc[ma_length-1:],'o-',label='DFT method')
ax.plot(n,y,label='lfilter')
ax.set_title('DFT method vs. signal.lfilter',fontsize=18)
ax.set_xlabel('n',fontsize=18)
ax.set_ylabel('amplitude',fontsize=18)
ax.legend(loc=0)
ax.set_xticks(n);

# fig.savefig('[email protected]', bbox_inches='tight', dpi=300)

The figure above compares the filter output sequence computed using the DFT and signal.lfilter. The only difference is the transient startup section of $ M-1 $ samples where the taps of the filter have not yet filled out.
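The zero-padding argument is easy to sanity-check numerically: for any two finite sequences, multiplying their length $ P+M-1 $ DFTs and inverting gives the same result as direct non-circular convolution. Here is a small self-contained sketch (plain numpy, with example sequences chosen arbitrarily rather than the exact variables from the cell above):

import numpy as np

x = np.cos(np.arange(30)*np.pi/3.)     # example input, P = 30 samples
h = np.ones(8)/8.                      # example eight-tap filter, M = 8
N = len(x) + len(h) - 1                # P + M - 1

y_fft = np.fft.ifft(np.fft.fft(x, N)*np.fft.fft(h, N)).real
y_direct = np.convolve(x, h)           # direct linear convolution, also length P + M - 1

print(np.allclose(y_fft, y_direct))    # True: zero-padded DFT product equals linear convolution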
This technique is fine for processing blocks of convenient size, and there are many other methods (e.g. overlap-add) to compute this using different blocks that patch together the output while dealing with these transient effects. By keeping track of the summation indices, it is not hard to show that

$$ Y(\omega) = H(\omega) X(\omega) $$

where $H(\omega)$ is called the transfer function or the frequency response of the filter $ h_n $. This product of transforms is much easier to work with than convolution and allows us to understand filter performance through the properties of $ H(\omega) $. In our last example, by simply increasing the length of the moving average filter, we obtained many more zeros in $ H(\omega) $. Because our filters produce real outputs $ y_n $ given real inputs $ x_n $, the zeros of $ H(\omega) $ must come in complex conjugate pairs. To analyze this, we need to generalize the Fourier transform to the z-transform.

The filter's z-transform is defined as the following:

$$ H(z) = \sum_n h_n z^{-n} $$

The Fourier transform is a special case of the z-transform evaluated on the unit circle ($ z=\exp(j\omega) $), but $z$ more generally spans the entire complex plane. Thus, to understand how our moving average filter removes frequencies, we need to compute the complex roots of the z-transform of $ h_n $,

$$ H(z) = \sum_n h_n z^{-n} $$

This notation emphasizes the transfer function as a polynomial of the complex variable $z$. Thus, for our eight-tap moving average filter, we have

$$ H(z) = \sum_{n=0}^{M-1} h_n z^{-n} = \frac{1}{8} \sum_{n=0}^7 z^{-n} =\frac{1}{8} (1+z)(1+z^2)(1+z^4)/z^7 $$

Thus, the first zero occurs when $z=-1$, or when $ \exp(j\omega) = -1 \Rightarrow \omega=\pi$. The next pair of zeros occurs when $z= \pm j$, which corresponds to $ \omega = \pm \pi/2 $. Finally, the last four zeros are for $\omega=\pm \pi/4$ and $\omega=\pm 3\pi/4 $. Notice that any filter with this $ z+1 $ term will eliminate the $ \omega=\pi $ (highest) frequency. Likewise, a $ z-1 $ term means that the filter zeros out $ \omega=0 $. In general, the roots of the z-transform do not lie on the unit circle. One way to understand FIR filter design is as the judicious placement of these zeros in the complex plane so that the shape of the resulting transfer function $ H(z) $ evaluated on the unit circle satisfies our design specifications.

We need a special case of the Fourier transform as a tool for our analysis. When the input sequence is symmetric,

$$ x_n = x_{-n} $$

the Fourier transform is real-valued (i.e. zero-phase):

$$ H(\omega)= x_0+\sum_{n \gt 0} 2 x_n \cos\left(\omega n \right) $$

When the input is anti-symmetric,

$$ x_n = -x_{-n} $$

$$ H(\omega)= -j\sum_{n \gt 0} 2 x_n \sin\left(\omega n \right) $$

the Fourier transform is purely imaginary (phase $ = \pm\pi/2 $). Note that $ x_n = -x_{-n} $ for $ n=0 $ means that $ x_0=0 $ in this case.

By changing the indexing in our first moving average filter example from $ h_0 = h_1 = 1/2$ to $ h_{-1} = h_1 =1/2$, we have symmetry around zero with the resulting Fourier transform

$$ H(\omega) = \frac{1}{2} \exp \left( j\omega \right) + \frac{1}{2} \exp\left( -j\omega \right) = \cos(\omega) $$

which is a real function of frequency (with zero phase).
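The zero-phase claim for the re-indexed two-tap filter is easy to verify numerically from the definition of $ H(\omega) $. A brief self-contained sketch (plain numpy, with an arbitrary frequency grid):

import numpy as np

w = np.linspace(-np.pi, np.pi, 201)     # arbitrary frequency grid
n = np.array([-1, 1])                   # indices, symmetric about zero
h = np.array([0.5, 0.5])                # h_{-1} = h_{1} = 1/2
H = (h[:, None]*np.exp(-1j*np.outer(n, w))).sum(axis=0)

print(np.allclose(H.imag, 0))           # True: purely real, i.e. zero phase
print(np.allclose(H.real, np.cos(w)))   # True: matches cos(w)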
While this is nice theoretically, it is not possible practically because it requires future knowledge of the input sequence, as shown below:

$$ y_n = \sum_{k} h_k x_{n-k} = h_{-1} x_{n+1}+h_{1} x_{n-1} = \left( x_{n+1}+x_{n-1} \right)/2 $$

which shows that $ y_n $ depends on the future value $ x_{n+1} $. This is what non-causal means, and we must omit this kind of symmetry about zero from our class of admissible filter coefficients. However, we can scoot the point of symmetry to the center of the sequence at the cost of introducing a linear phase factor, $ \exp\left(-j\omega (M-1)/2 \right) $.

Filters with linear phase do not distort the input phase across frequency. This means that all frequency components of the signal emerge at the other end of the filter with the same delay, in the same order they entered it; this is the concept of group delay. Otherwise, it would be very hard to retrieve any information embedded in the signal's phase in later processing. Thus, we can build linear phase causal filters with symmetric coefficients,

$$ h_n = h_{M-1-n} $$

or anti-symmetric coefficients,

$$ h_n = -h_{M-1-n} $$

by putting the point of symmetry at $ (M-1)/2 $. Note that this symmetry means that efficient hardware implementations can re-use stored filter coefficients.

Now that we know how to build linear phase filters with symmetric or anti-symmetric coefficients and enforce causality by centering the point of symmetry, we can collect these facts and examine the resulting consequences. Given

$$ h_n = \pm h_{M-1-n} $$

with $ h_n = 0 \hspace{0.5em} \forall n \ge M $ and $ \forall n \lt 0 $, the z-transform for even $ M $ becomes

$$ H(z) = \sum_{n=0}^{M-1} z^{-n} h_n = h_0 + h_1 z^{-1} +\ldots + h_{M-1}z^{-M+1}= z^{-(M-1)/2} \sum_{n=0}^{M/2-1} h_n \left( z^{(M-1-2 n)/2} \pm z^{-(M-1-2 n)/2}\right) $$

Likewise, for odd $ M $,

$$ H(z) = z^{-(M-1)/2} \left\lbrace h_{(M-1)/2}+ \sum_{n=0}^{(M-3)/2} h_n \left( z^{(M-1-2 n)/2} \pm z^{-(M-1-2 n)/2}\right) \right\rbrace$$

By substituting $ 1/z $ and multiplying both sides by $ z^{-(M-1)} $, we obtain

$$ z^{-(M-1)}H(z^{-1}) = \pm H(z) $$

This equation shows that if $z$ is a root, then so is $1/z$, and since we want a real-valued impulse response, complex roots must appear in conjugate pairs. Thus, if $z_1$ is a complex-valued root, then $ z_1^* $ is also a root, and so are $1/z_1$ and $ 1/z_1^* $. In other words, one complex root generates four roots. This means that having $ M $ taps on the filter does not imply $M$ independent choices of the filter's roots or of the filter's coefficients. The symmetry conditions reduce the number of degrees of freedom available in the design.

We can evaluate these results on the unit circle when $ h_n = +h_{M-1-n} $ to obtain the following,

$$ H(\omega) = H_{re}(\omega)\exp \left( -j\omega(M-1)/2 \right) $$

where $ H_{re}(\omega)$ is a real-valued function that can be written as

$$ H_{re}(\omega) = 2 \sum_{n=0}^{(M/2)-1} h_n \cos \left( \omega \frac{M-1-2 n}{2} \right) $$

for even $ M $ and as

$$ H_{re}(\omega) = h_{(M-1)/2}+ 2\sum_{n=0}^{(M-3)/2} h_n \cos \left( \omega \frac{M-1-2 n}{2} \right) $$

for odd $ M $. Similar results follow when $ h_n = -h_{M-1-n} $. For $ M $ even, we have

$$ H_{re}(\omega) = 2 \sum_{n=0}^{M/2-1} h_n \sin \left( \omega \frac{M-1-2 n}{2} \right) $$

and for odd $M$,
$$ H_{re}(\omega) = 2 \sum_{n=0}^{(M-3)/2} h_n \sin \left( \omega \frac{M-1-2 n}{2} \right) $$

By narrowing our focus to $ H_{re}(\omega)$ and separating out the linear-phase part, we can formulate design techniques that focus solely on this real-valued function, as we will see later with Parks-McClellan FIR design.

The table below shows the number of independent filter coefficients that must be specified for a FIR filter in each case. The design problem is finding the coefficients that satisfy a filter's specifications. Picking any of the cases shown in the table depends on the application. For example, for odd $ M $ and the anti-symmetric case, $ H_{re}(\omega=0)= H_{re}(\omega=\pi)= 0 $, so this would be a bad choice for high or low pass filters. Because many books refer to the items in this table as type-I through type-IV filters, I'm including this terminology in the table below.

Type I: symmetric ($ h_n = h_{M-1-n} $), odd $ M $, $ (M+1)/2 $ independent coefficients.
Type II: symmetric, even $ M $, $ M/2 $ independent coefficients.
Type III: anti-symmetric ($ h_n = -h_{M-1-n} $), odd $ M $, $ (M-1)/2 $ independent coefficients.
Type IV: anti-symmetric, even $ M $, $ M/2 $ independent coefficients.

We can use these results to reconsider our earlier result for the two-tap moving average filter, for which $ M=2 $ and $ h_0 = 1/2 $. Then,

$$ H_{re}(\omega) = \cos \left( \omega/2 \right) $$

with phase,

$$ \exp \left( -j\omega(M-1)/2 \right) = \exp \left( -j\omega/2 \right)$$

which equals $ \exp (-j\pi/4) $ when $ \omega = \pi/2 $, as we observed numerically earlier.

In this section, we began our work with FIR filters by considering the concepts of linear phase, symmetry, and causality. By defining FIR filter coefficients symmetrically, we were able to enforce both causality and linear phase. We introduced the continuous-frequency version of the Fourier transform and its relationship to the Discrete Fourier Transform (DFT), and demonstrated how circular convolution via the DFT can be used to compute the non-circular convolution that the filter actually implements. Then, we introduced the z-transform as a more general tool than the Fourier transform for understanding the role of zeros in filter design. All this led us to conditions on the filter coefficients that satisfy our practical requirements of linear phase (no phase distortions across frequency) and causality (no future knowledge of inputs). Finally, we considered the mathematical properties of FIR filters that apply to any design.

Sadly, all this work is exactly backwards, because all our examples so far started with a set of filter coefficients ($ h_n $) and then drew conclusions, numerically and analytically, about their consequences. In a real situation, we start with a desired filter specification and then (by various means) come up with the corresponding filter coefficients. Our next section explores this topic.

As usual, the corresponding IPython notebook for this post is available for download here.

%qtconsole
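As a closing numerical cross-check of the zero-placement discussion, we can compute the roots of the eight-tap moving average polynomial directly and confirm that they sit on the unit circle at the frequencies where the magnitude response dipped to zero. A short sketch (plain numpy):

import numpy as np

# Zeros of the eight-tap moving average: roots of 1 + z^{-1} + ... + z^{-7},
# i.e. of the polynomial z^7 + z^6 + ... + 1.
zeros = np.roots(np.ones(8))

print(np.allclose(np.abs(zeros), 1))      # True: all zeros lie on the unit circle
print(np.sort(np.angle(zeros)/np.pi))     # angles in units of pi: -3/4, -1/2, -1/4, 1/4, 1/2, 3/4, 1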
https://nbviewer.jupyter.org/github/unpingco/Python-for-Signal-Processing/blob/master/Filtering.ipynb
CC-MAIN-2018-39
refinedweb
3,455
56.55
A programmer I respect said that in C code, #if and #ifdef should be avoided wherever possible. Why should #ifdef be avoided in .c files?

Hard to maintain. Better to use interfaces to abstract platform-specific code than to abuse conditional compilation by scattering #ifdefs all over your implementation. E.g. the following is not nice:

void foo() {
#ifdef WIN32
    // do Windows stuff
#else
    // do Posix stuff
#endif
    // do general stuff
}

Instead, have files foo_w32.c and foo_psx.c with

foo_w32.c:

void foo() {
    // windows implementation
}

foo_psx.c:

void foo() {
    // posix implementation
}

foo.h:

void foo(); // common interface

Then have two makefiles [1]: Makefile.win and Makefile.psx, with each compiling the appropriate .c file and linking against the right object.

Minor amendment: if foo()'s implementation depends on some code that appears on all platforms, e.g. common_stuff() [2], simply call that in your foo() implementations.

common.h:

void common_stuff(); // May be implemented in common.c, or maybe has multiple
                     // implementations in common_{A, B, ...} for platforms
                     // { A, B, ... }. Irrelevant.

foo_{w32, psx}.c:

void foo() {
    // Win32/Posix implementation
    // Stuff ...
    if (bar) {
        common_stuff();
    }
}

While you may be repeating a function call to common_stuff(), you can't parameterize your definition of foo() per platform unless it follows a very specific pattern. Generally, platform differences require completely different implementations and don't follow such patterns.

[1] The build scripts need not use make at all, such as if you use Visual Studio, CMake, Scons, etc.
[2] Or common_stuff() itself actually has multiple implementations, varying per platform.
https://codedump.io/share/1pEW75ao0mke/1/why-should-ifdef-be-avoided-in-c-files
CC-MAIN-2017-30
refinedweb
227
53.07
My program is supposed to replace all sets of 4 spaces with a tab, but it only seems to be editing the first line and then skipping the rest of the lines. "work_string" is defined via another function which takes an entire file's input and saves it into one string. I just can't get my replacing function to act on multiple lines instead of just the first one. Here's my replacing function; does anyone have an idea of how to get it to act on every line of "work_string"?

def space_to_tab(work_string):
    s_to_tab = 4
    whitespace = work_string[ : len(work_string) - len(work_string.lstrip())]
    whitespace = whitespace.replace(REAL_SPACE * s_to_tab, REAL_TAB)
    whitespace = whitespace.lstrip(REAL_SPACE)
    result = whitespace + (work_string.lstrip())
    return result
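A minimal sketch of one way to apply the same leading-whitespace logic to every line, assuming REAL_SPACE and REAL_TAB are the ' ' and '\t' constants defined elsewhere in the program; splitting on line endings and rejoining is just one possible approach:

REAL_SPACE = ' '
REAL_TAB = '\t'

def space_to_tab_lines(work_string):
    converted = []
    for line in work_string.splitlines(True):            # True keeps the line endings
        stripped = line.lstrip(REAL_SPACE)
        indent = line[:len(line) - len(stripped)]         # leading spaces of this line only
        # Replace each run of 4 spaces with a tab; leftover spaces (fewer than 4)
        # are dropped, mirroring the original function's lstrip.
        indent = indent.replace(REAL_SPACE * 4, REAL_TAB).lstrip(REAL_SPACE)
        converted.append(indent + stripped)
    return ''.join(converted)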
https://www.daniweb.com/programming/software-development/threads/487126/python-program-only-acts-on-the-first-line-of-string
CC-MAIN-2018-30
refinedweb
116
76.52
On Sat, Mar 24, 2007 at 06:00:24PM +0800, Wei Shen wrote:
> 2) Let the root fs server judge which server port to return on a
> specific name query.
> 3) Modify hurd_file_name_lookup function in the C lib. If necessary,
> replace the default server name with the name of an overriding server
> before querying the root fs.

Note that both of these approaches are really specific single-purpose implementations of local namespaces. (The former server-side, the latter client-side.) I'm not saying this is bad or anything -- just an interesting observation.

For comparison, did you evaluate the straightforward approach of checking environment variables in libc *before* the name lookup? I.e. instead of diverting the name lookup of the default location, first check whether a different location is requested, and look up this one instead?

While this would require adaptations in libc for every server individually, the changes should be rather small and obvious, so I guess it might still be an interesting alternative to the more generic but much more complicated namespace approach... This would also avoid any possible performance implications, as the path for normal filesystem lookups wouldn't be altered.

What do you think?

Note that the exec server needs special handling, because it's not invoked from the application process directly, but from the filesystem server on behalf of the application. There is no way to override it per-application directly (you could only override *all* execs on a specific filesystem); thus the forwarding done by the default exec server instead.

For most other servers, such special handling shouldn't be necessary, though.

-antrik-

_______________________________________________
Bug-hurd mailing list
address@hidden
http://lists.gnu.org/archive/html/bug-hurd/2007-03/msg00128.html
CC-MAIN-2015-06
refinedweb
277
53.71
I want to add editable grid tables, with a button called "new table", in Vue. I haven't found any example of this. The user should then be able to get and post data from/into my MySQL database. How can I do this and what do I have to prepare? Thankful for any reply.

data: new DataManager({
    url: "Home/DataSource",
    updateUrl: "Home/Update",
    insertUrl: "Home/Insert",
    removeUrl: "Home/Delete",
    adaptor: new UrlAdaptor
}),

Another question: where should I use this code?

public ActionResult DataSource(DataManager dm)
{
    var DataSource = OrderRepository.GetAllRecords();
    DataResult result = new DataResult();
    result.result = DataSource.Skip(dm.Skip).Take(dm.Take).ToList();
    result.count = result.result.Count;
    return Json(result, JsonRequestBehavior.AllowGet);
}

public class DataResult
{
    public List<EditableOrder> result { get; set; }
    public int count { get; set; }
}

And when I want to perform an insert operation on the server side, how and where should I use this example?

public ActionResult Insert(EditableOrder value)
{
    //Insert record in database
}
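A minimal sketch of one way the insert action could be filled in. OrderRepository.Add is a hypothetical helper (mirroring the OrderRepository.GetAllRecords() call above) standing in for whatever data-access code actually writes to the MySQL database, and echoing the inserted record back as JSON is one common convention for endpoints driven by the UrlAdaptor's insertUrl:

public ActionResult Insert(EditableOrder value)
{
    // Hypothetical repository call; replace with your actual MySQL data access
    // (Entity Framework, Dapper, a raw MySqlConnection, etc.).
    OrderRepository.Add(value);

    // Echo the inserted record back so the grid can show the new row.
    return Json(value, JsonRequestBehavior.AllowGet);
}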
https://www.syncfusion.com/forums/144416/get-post-data-into-mysql-database
CC-MAIN-2021-04
refinedweb
194
51.55
Build Portable Mp3 Player 117 Greenpiece/Toasty writes: "Build your own portable MP3 player around 8000-9000 Yen. Uses 32 Megabyte flash media cards its the ultimate in geek. The link can be found here with circuit diagrams and pictures of the finished product. The kit can also be bought, but not from that page; another company is manufacturing it in Japan. The board seems quite easy to manufacture. " Re:Another site (Score:1) Then there's the cost of a PROM blower to write data to the GAL (Generic Array Logic - it's an EPROM type of technology, similar to the way data is stored by your PC's BIOS etc). PROM blowers are seriously expensive (several hundreds of dollars last time I looked). So while the Soundbastard looks like a really professional attempt and I applaud their effort and skill, I think that this is not a project for everyone. However, if you can get hold of equipment from your friendly local electronics geek or your high school/university then this may be a really interesting project to make. Re:And the cheapest car mp3 player is... (Score:1) What about those low power FM transmitters? The kind you get with those CD changers that you don't have to wire up to your stereo, but tune to a certain frequency and listen to the CD. Does anyone know where to get those/how much they cost? Re:Mp3 player (Score:1) Re:Noisy circuitry! (Score:1) Patching "some shit" into the circuit will do nothing to reduce the noise, but may make your circuit smell bad :-) But seriously, this is a different sort of noise to normal audio noise. The noise exists intrinsically in the Veroboard circuit (due to uneccessarily long copper tracks picking up radio frequency signals, noise on the power and ground lines, parasitic capacticance etc). Your Dolby chip would be subjected to the same noise as the rest of the circuit and so wouldn't help much, and may even make things worse. (Dolby is horrible anyway, it really spoils the vibrancy of a recording IMHO.) Good circuit design and construction is your only hope to reduce noise. Why use memory chips? Why not an HD? (Score:1) Sure, it'd be more expensive that 32 mb ram, and it'd be more fragile - but I'm not trying to make a ToughPlayer(tm). Radio Shack and doing it yourself. (Score:1) But does anyone know of a site that just lists the parts I could buy at say Radio Shack or somewhere similiar and also has the schemata/instructions for it? (My soldering/machine assembly skills have declined since the last time I had to pull out the old Odyssey II and repair it for use). Re:HI SLASH DOT IS INSECURE (Score:1) Re:HI SLASH DOT IS INSECURE (Score:1) Altoids (Score:1) "What are the three words guaranteed to humiliate men everywhere? Re:Noisy circuitry! (Score:1) Ummm... Isn't dolby a noise reduction method which requires a specific Dolby encoding method? It isn't just a filter, IIRC -- it is some fancy-schmancy encoding scheme that allows encoded music to be played without the decode, but allows playback through Dolby-licensed processing to be better. Re:Interesting But... (Score:1) 1.) it's less sensitive to shock 2.) (related to 1) less likely to crap out randomly (I've heard from others w/this problem) 3.) if there's any alternative to using SmartMedia cards Re:Exchange rate (Score:1) Re:Looks pretty easy. (Score:1) -russ Re:Interesting But... (Score:1) -russ SMD's on perfboard?!? (Score:1) How about a USB? (Score:1) There's an IDE to USB [allusb.com] adapter that would let you use your existing CDROM. Re:Where's the link to the KIT?! 
(Score:1) -- BeDevId 15453 - Download BeOS R5 Lite [be.com] free! Re:Some Assembly Required (Score:1) Well, this is the first design I have seen and I like it for one reason. It gives me a starting point. In my work, especially my hobby stuff, I feel like people will accuse me of working for a Japanese conglomerate in that I am really go at taking someone else's stuff and getting what I need, but not inventing it on my own. So, a design like this gives me a start on my own portable that meets my requirements. That is worthwhile. While it may be nothing more than a Rio clone, I do not have Rio schematics. I have schematics for this and the basic problem solved. Plus, building it might be just plain fun. Herb Re:I'm Off to RadioShack! (Score:1) While reading the article, I was imagining using an empty Penguin Mints tin for the case. I've got a ton of 'em lying around. Re:Where's the link to the KIT?! (Score:1) Hah! I knew about this thing over a week ago. The shop does have a URL, and I've put it in at the bottom of this post, but it ain't gonna do you any good for two reasons: 1) You almost have to be able to order in Japanese (AFAIK, Wakamatsu doesn't take orders in English). 2) These kits are back-ordered for more than a month in advance; good luck at actually getting your hands on one. I'll probably drop by the store (it's in the Akihabara district of Tokyo) either tomorrow or the day after, but it looks pretty hopeless. OK, here's the URL: CL ICK HERE [wakamatsu-net.com] (Be warned, the above link is in Japanese.) Re:Where's the link to the KIT?! (Score:1) Since some people have been asking for a translation of the page, here you go: Construction kits Product number: 9301001000239 Maker: Wakamatsu Tsusho Product name/model: WAKA-MP3 Ver1.1 Notes: A kit to build your own pocket-size MP3 player. We are very sorry, but any orders made at the present time cannot be filled for more than one month. Price: 9800 yen A kit to build an MP3 player for leading-edge digital audio. We're proud of the fact that it sounds better than prebuilt products from corporate manufacturers. By using a new deice, it can be run on one AA (AAA) battery. MP3 decoder (MAS3507D) D/A converter (DAC3550A) MCU (AT90S8515) Printed circuit board (84x64mm) Smartmedia socket A complete kit that includes MCU firmware and programming tools for PCs. To build this kit, you will need an AT-compatible PC, a tester, a narrow-pointed soldering iron, tweezers and solderwick. It is aimed at people who have experience with embedded MPU (PICs, etc.) devices, and who can handle soldering (technical term for parts that are soldered directly to the circuit board - can't remember the English term, dammit) parts. PLEASE NOTE: The photograph is only an assembled example. ( larger photo is available here [wakamatsu-net.com]. The Smartmedia, battery, headphones and case are not included. [I've editied out some pointless stuff here] Because of demand, shipping is behind schedule. Any orders received at the present time will take more than a month to ship. We apologize for the inconvenience. We will provide estimates for completed kits, modules, etc., depending on the amount required. [That's about it. The text box at the bottom is for the number of kits you want to order.] Re: Noisy circuitry! (Score:1) Re:Yippeee.....love the cover design (Score:1) Hack value (Score:1) I was dismayed when I saw the 'why bother' responses to what is IMHO one of the coolest projects I've seen lately. 
Seems like lately money and market share, rather than hacking, has ruled --Kevin =-=-= Sakura-chan! (Score:1) I want that one just because it has Sakura Kinomoto from Card Captor Sakura on it! "Hoee!" (I'm a sucker for cute...) Also noticed that they had some Megumi Hayashibara and Kikuko Inoue songs on their playlist (the monitor picture). They not only made an MP3 player, they actually use it for good music! ^_^) -- 32 MB ? Forget it (Score:1) What I do want is a portable player that can play mp3s from a cd - or from some other cheap and spacious media. If I wanted a device that uses a media that can only hold 10 songs, I'd buy a regular CD player. I don't know how easy it will be to get the kit... (Score:1) Good news - they take Visa and Mastercard, bad news - there is no mention about shipping outside of Japan anywhere on their site. They list shipping costs for delivery inside Japan, but unless you speak Japanese and ask REALLY nicely (not to mention pay through the nose for shipping) you aren't going to be able to order the kit any time soon. You mean something like this ?? (Score:1) "As seen on Slashdot" Does this thing actually work? (Score:1) Rio (Score:1) MP3 Players: wave of future? (Score:1) There are several on the market currently but I have yet to find one that would make me want to go out and buy it. The only thing keeping me from going out and building my own is my meager electronics skills and the fact that I wouldn't have a warranty. I wonder if this creation will cause a 'cottage industry' utilizing the 'net for distribution. I think I just came up with my first million dollar idea. Engrish (Score:1) Re:HI SLASH DOT IS INSECURE (Score:1) - store the login into the cookie string - append the hashed password to the string - append the client IP to the string - append an expiration time (makes user session mortal) - append a hash result of the previous string with a secret key (known only from Slashdot). This "sign" the cookie and ensure Slashdot is the original author of it - crypt the whole string (just to be 100% safe) Unfortunately there are few site with such high level of security... Re:Noisy circuitry! (Score:1) The only analog part of the entire design is chip outs 5 and 7 from the DAC and those go directly (well, via some resistors, but still fairly directly) to the headphones. (NB: volume is apparently performed on the digital data stream, and not as a post processing step by the DAC -- although I could be misinterpreting the pins on the dac) Mind you, he did mention that he had high THD, which might be due using the veroboard. Or have I misunderstood everything (IANAEE (electrical engineer)). Re:Smartmedia vs. Compactflash (Score:1) they were nice, but never made it to the public. thinner (height), but wider than a standard 2.5" floppy. Too bad i got pissed and decided to put the one in a tree, and accidently pull the other one out during a write operation.... Re:HI SLASH DOT IS INSECURE (Score:1) Re:HI SLASH DOT IS INSECURE (Score:1) Re:Trademark namespace collision (Score:1) My old school logo looked pretty similar to the Timberland logo, tree and lines under. Timberland sued us and won. So now we have a new logo. How there could possibly be confusion between a school and Timberland is anyone's guess. This guy is hardware and software hacker (Score:1) But this guy is serious hardware/software hacker. Look at his site in English. He has many FSW (Free Softwares). Though, FSW in Japanese tends to mean not-GPL but free-to-use. 
But most of these FSW auther opens source code and if asked with reasonable tone, they agree to GPL. Japanese FSW sometimes tends to imply non-commercial use only but usually not explicit enough to prevent modified version used for commercial use. Also sometimes restrictive about modification in fear of distribution of virus-contaminated software to disgrage original author. WAKAMATSU is a small shop in Tokyo-Akihabara where they sell kits. These kits are usually just parts with generic PC board (Sometimes no specific pattern and you are required to connect parts running thin wires in between). Complete circuit is posed. If you are in US, you will have easier time to get parts by your self and those IC's from device manufactures as sample for free or minimul cost. Tough part is getting small quantities os special socket etc. Nice thing about Wakamatsu is they have detailed circuits and all small parts and socket in one package. But not much more. Some of the kit I did required QFP 1.0 mm pitch hand soldering to PC board. Not for faint hearted. Anyway, his bootloader seems to have nice Japanese-Anime chick photo. Maybe not politically correct in US but typically geeky japanese stuff. Check it out. Re:And the cheapest car mp3 player is... (Score:1) I got mine (not good quality, but audio wise my cds still sound better than fm radio) for about US$15 at Fry's (an electronics supermarket in California, mostly). Runs on 2 AA batteries. Re:Engrish (Score:1) Re:And the cheapest car mp3 player is... (Score:1) The other kind is the versatile but cheap kind. These either run on AAs or plug into a cig lighter. PLug it into the headphone jack on your discman, and viola! A low power FM transmitter. Just tune your radio to the right frequency, and then fine tune the transmitter. These are very narrow band, so changes in air density will affect your transmission enough for you to need to retune every time you use it. The Good kind is $50 from PartsExpress (don't remember the exact url for the item). The other kind is about $15 from lots of places (Fry's Radioshack, Best Buy, etc). --Jeff Re:And the cheapest car mp3 player is... (Score:1) <signature> "No food or beverage in computer lab". Hmm. I think they mean to say "Don't Spill". Re:I don't know how easy it will be to get the kit (Score:1) 8,000-9,000 Yen = $75-$84 US (Score:1) ----- Digital Camera / MP3 Player combo yet? (Score:1) Is there a digital camera out yet which can use it's compact flash card for MP3's??? It seems like such an OBVIOUS perfect match. Re:Need this for my hooptie (Score:1) Seems like what you're asking for. Re:Link for kit! (Score:1) 1. WARNING: The picture is an example of the product (Implying that what you actually recieve might be different). 2.Due to the popularity of this kit the date of delivery will be more than a month from when you place your order. There is also some interesting commentary on the possible uses of the kit and a mention that smart media, batteries, headphones and the case are all not included. Have a good night. Feel free to email me at [email protected] if you need J->E translations in the future (especially technical translation). Re:Looks pretty easy. (Score:1) (and I have two left paws - I also have two right paws...think about it) I won't, though. My time is too vaulable a comodity at the moment, it'd cost me less to work an extra hour and then buy a Rio clone off E-Bay. On the other hand, 20 years ago I'd be headed over to Nutron Electronics with a shopping list. 
(which is exactly how my time got so valuable in the first place) Meow! Re:Car MP3 players... (Score:1) btw, did you ever make it to the technomart while you were in seoul? 8 floors of electronic gadgets, mmm...they have some mp3 players there too, along with pirate psx cds and the like. btw, i keep trying to find the electronics discrict but can't find out which subway line leads there. could you give me directions from, say, seoul station or dongdaemun? Re: Stupid time saving device (Score:1) Car MP3 players... (Score:1) In this case, I'm pretty sure you actually put the MP3 player itself right into the cassette player! I think the turning of the rotors in the cassette player triggered the player to start playing. Of course the player also had a headphone jack, and little embedded buttons to use just as a walkman type device. Awesome stuff. Re:Car MP3 players... (Score:1) From the way you're talking, it sounds like you're in Seoul now, so maybe thse photos I took will help you: Re:Smartmedia vs. Compactflash (Score:1) I have built my own home and car mp3 players from older computer parts scrapped by the company I work for. I've built them out of a pentium 75 and 120. Got 'em in a metal box about the size of a car amp, and have in fact attached the car mp3 player to the amp in my car. IRman plus a bit of cabling gives me remote control. Now all I have to do is figure out how to get input to my player from an IDE cdrom drive which out of necessity will be about 4.5 feet away. (a Pioneer slot-load cdrom.) For now I just stick an IDE hard drive in the thing. Anybody got any ideas here? Re:Smartmedia vs. Compactflash (Score:1) I want more! (Score:1) Ahh, having different options is such a great thing! Exactly one year ago, the German computer magazin c't featured a Do-It-Yourself-MP3-Player, developed by some students from the university of Aachen. More info can be found here [heise.de], but it's in German. Re:Mp3 player (Score:1) Re:Some Assembly Required (Score:1) I wish HeathKit was still around, my Amp from them still works. Oh, well, maybe I could tweak the balance a bit, but it works and was fun to build. -d Re:Looks pretty easy. (Score:1) 1) buy Radio Shack soldering iron. 2) ZAP chip with cheap .45 cal tip. 3) realize that soldering irons that are too hot and conduct a charge killed the chip. 4) start over with proper tools. -d Re: Noisy circuitry! (Score:1) Peter Allen Re:Argh... (Score:1) Ironic that he's converting DC to AC to DC. It'd be more elegant to just condition the DC, but I don't know of any cheaper way to do it than with a cheapie PC power supply. *sigh* Re:HI SLASH DOT IS INSECURE (Score:1) If you go here: It sends you here:.)(.document.location='http:/ Which then shows you your usernum and password Re:Interesting But... (Score:1) Depends on your definition of valuable (Score:1) Trademark namespace collision (Score:1) Re:Playing MP3 from a CD (Score:1) [email protected] Thanks! Re:Link for kit! - Poor Translation! (Score:1) This reads: ManufacturerMerchandise nameRemarksPriceStockQuanityPurchase Young Pine Tree TradeWAKA-MP3 Ver1.1Kit to build your own MP3 player.9,800 small amountAmount to buyAdd to cart image? h ead=1&detail_kit_930100100023939 This reads Model number, 9301001000239 ....too much for my rusty Japanese ... Manufacture's name, Young Pine Tree Trade Merchandise name WAKA-MP3, Ver1.1 Notes: Cost: 9,800yen MP3 decoder, MAS3507D DA conveter, DAC3550A MCU AT90S8515 Size 84x64mm Play, Pause, Next, Prev, Stop MCU can communicate with a PC by... 
Memory Kits, 64MB, 32MB, 16MB, 8MB I hope this helps. Re:Exchange rate (Score:1) Informative = +4 Karma Bad Pun = -2 Karma Total = +2 Karma... kwsNI Re:Micro hard drives, maybe? (Score:1) I know of two big problems with using hard drives for this kind of application. (I think that) Hard drives use more power than RAM-esque storage. More power use -> less battery life -> less play time -> less fun. They're much better than they used to be, but hard drives are still not suited for shaky environments. Think how much shaking the unit would get in your pocket as you walk? Try stairs or jogging. I think people want their music to be very portable, so it will have to last a long time and take a beating. --- Dammit, my mom is not a Karma whore! Re:Sakura-chan! (Score:1) Re:Mp3 player (Score:1) Use the preview button, Luke. Noisy circuitry! (Score:2) I hope that the MP3 player kit comes complete with a properly fabricated and well designed printed circuit board. Of course, you can always design your own PCB using shareware software (I don't know if there is any GPL'ed stuff to do PCB design), but it takes a fair bit of skill to do a good PCB design and you'll need to know someone who can etch the PCB for you. PCB etching equipment isn't cheap! But all this talk of MP3 players and electronics is pretty dull. I'm far more excited that Stone Cold Steve Austin is back at the WWF Pay Per View this Sunday. Austin 3:16 is back, I can't wait! Oh, man! Re:HI SLASH DOT IS INSECURE (Score:2) something like: If so, it does say that this method of logging in is *very* insecure. PaIA should make a kit out of this... (Score:2) It'd rock. I'm off to spec out components. Where's the link to the KIT?! (Score:2) I want to build this thing, but it'd be nice if the company that's making the kit had a URL... It sure looks like the overall set of hardware is the same set that the vendors of all the MP3 players are using. It seems likely to me that this is not merely "similar" hardware, but really is the same hardware. And if that be the case, once the MPAA foists "copy control clients" on the industry, those clients will be happy to update the "firmware" on the MP3 players, whether they're boxed units from Sony or Panasonic, or a custom job that you built yourself. Not that it'shoul affect my Diamond Rio; I use the open source "SnowBlind Alliance" interface software, and merely upload files. Re:Looks pretty easy. (Score:2) Re:Exchange rate (Score:2) That will get you in the range within a few dollars. Re:/.ers complaing about a hack. How sad. (Score:2) Part of "Hack Value" is creating something that either isn't available elsewhere, or being able to put something together for far less than buying it at the shops. It's not about re-inventing the wheel. If you just want to say that you built it with your own hands you're not a hacker, you're an enthusiest. Same thing if you're just putting together something that's a slightly higher quality. Now, if you take a cheap old Rio and solder in 128MB of RAM that you salvaged from something else cheaply, you're a hacker. Re:Smartmedia vs. Compactflash (Score:2) CompactFlash is much easier to handle than SmartMedia... I'm the kind of person that scratches CDs easily, and I'd be scared to have those (relatively) delicate SmartMedia cards. Can anyone here adapt this hack ("hack this hack"?) to be able to use CompactFlash? Plus, there are more applications for CompactFlash (The TRGPro [trgpro.com] for example) that would offset the cost of an IBM MicroDrive. 
Could this control a Hard drive as well? It'd be nice to be able to make your own EMPEG [empeg.com] type device.. Throw on your own LCD and one of these monsters [buy.com] and you're set. 75 Gigs of MP3 storage. Is there a better way to do this than with these schematics? Re:Some Assembly Required (Score:2) Go buy a rio, with solidly soldered circuit boards hidden away beneath a nice shiny black case; a mass-produced masterpiece designed by some faceless intrepid entity toiling away in a forgotten corporate cubicle. You will gain an excellent warranty, a pretty cardboard box, and a nice pair of cheap earphones in the process. But if you take as much joy from the melting of metal as you do from the music itself, if you dream of harnassing the secrets of the universe for your own personal pleasure, then this kind of thing is the only option. Nothing my mother can buy at Wal-Mart will be as exciting or as interesting as something I piece together out of scraps of metal, a broken Walkman and a radio shack chip. I had no interest whatsoever in portable mp3 players until building my own became a possibility. I don't yet have the skill to design one of these myself- but I can solder, and I think I can read a schematic well enough to put this thing together. I can probably even modify this to do other neat things- and in doing that, I will learn a great deal. Or maybe you already know everything there is to know about electrical engineering. Maybe you can design one of these in your sleep, and that is why this doesn't excite you. If that's the case, do it. And then put up a web page and show me how- cuz I am excited. Rev Neh Re:HI SLASH DOT IS INSECURE (Score:2) Re:Sakura-chan! (Score:2) Damn straight Re:Some Assembly Required (Score:2) IMnsHO, I'll learn more than the ~$50-100 dollars I'll save over buying a ready-made MP3 player and that's more than a fair tradeoff. I reckon we all should have just used existing computers and operating systems 'cuz they're just another clone that provide the same functionality. Dave Smartmedia vs. Compactflash (Score:2) Why do I care? Well, because, I don't see IBM being able to squeeze their 340 MB Microdrive [ibm.com] into a Smartmedia form factor anytime soon. Other than that, what a cool project! This is the stuff Slashdot outta have more of! Where can I get mas3507d? (Score:2) Argh... (Score:2) I'm starting out with an M590 motherboard from PCWare. True, PCWare doesn't make the highest quality mobos in the world, but my experience is that if you get one that works, it tends to keep working. This mobo has onboard sound and video (linux supported!), and so will allow me to place it in a relatively flat case (no cards to worry about). My friend has developed a library to interface with 3-line text LCDs, so that they can display menus while selecting and audio meters while playing; it's open source (all of his stuff is), and you can find it at I was originally thinking of using hard disk storage to avoid swapping media all the time, but since hard disks of sufficient durability are not available at a reasonable size/price ration, I'm going with a CD-ROM for now. One CD-R will hold FAR more than a Rio...I can put Linux on a small solid-state hard disk, and I'm set! For power, an adaptor from car-DC to computer-AC is not terribly expensive. Playing MP3 from a CD (Score:2) Looks pretty easy. (Score:2) I'm Off to RadioShack! (Score:2) Building this seems like a great project. I am gonna give this one a go!. 
I am gonna try to tweak the case design though (see if I can't cram it into an empty smoke pack, or can of Spam) That would rock eh? the Spam Brand MP3 Player? Hey, If you can get a Linux Server in a Pizza Box, then why not a Spam MP3 Player! If anyone knows of any similar projects, I'd sure like to know (click here to mail me) [mailto] Lotteries are a tax for people who suck at math. Some Assembly Required (Score:3) (Is it me, or do others detect a "glut" of designs that are almost identical to the Rio?) I don't see much point in having to integrate my own "Rio" when it provides no more functionality. If the design provided some reasonable way of storing 1GB of data, cheaply, that sure would be interesting. But another "me too!" design integrating together a synthesizer, some simple CPU/DSP, parallel interface, and a 32MB chunk of "flash" memory just does not excite. Re:Noisy circuitry! (Score:3) There is absolutely nothing wrong with using veroboard for a prototype, or even for a finished design! Keep your decoupling caps very close (on top or under if you can) and keep the DAC and amp far away from the DSP and processor and you'll be fine. As someone who's built lots of low-noise (0.1mV sensitivity) and/or medium speed (40MHz) equipment on protoboard first, I know what I'm talking about. You can always throw copper shield up around the sensitive components and keep the power supplies clean with carefully selected bypass caps and even lowish resistances. Or get fancy and use ferrite. If it is true veroboard (with tracks, as opposed to the board-only stuff I like), you can just rip off the copper you don't need and down goes all your sensitivity issues. True you haven't got a ground plane but if you can keep everything encased in grounded metal sheild you're flying high. where can i get a cheap laptop? (Score:3) #---------------------------- $mrp=~s/mrp/elite god/g; Interesting But... (Score:3) Picked mine up at an online auction site for about $50... They always have a buttload of em. Re:Exchange rate (Score:3) -- "And is the Tao in the DOS for a personal computer?" Link for kit! (Score:3) And the cheapest car mp3 player is... (Score:4) My car came with a pretty good casette player, so I didn't want to replace it. Instead, I got a casette adapter from Future Shop, that plugs into the line out of any cd player/sound card. I also bought a lighter power adapter for my old Thinkpad laptop. The total cost for the adapters and laptop was around $600CAD, which is pretty steep. But I get to play mp3s in my car off my mp3 cds, and have a laptop that is usefull for something other than just that. So IMHO, this is the best solution to having a car mp3 player. Feel free to disagree though... Exchange rate (Score:5) Today 108 yen = 1 USD. So 8000-9000 yen = $74-$83 -- Have Exchange users? Want to run Linux? Can't afford OpenMail? Another site (Score:5) Soundbastard [go.to] /.ers complaing about a hack. How sad. (Score:5) "Oh, I can get a Rio for the same amount." Pah. A pox on you and your like. Whatever happened to pure HACK VALUE? Sorry, but building the equivalent of a commercial machine for fun is neat, fun, and educational. Go buy your little Rio and leave the real hackers be. Dave
https://slashdot.org/story/00/04/28/0852256/build-portable-mp3-player
CC-MAIN-2017-47
refinedweb
5,191
73.78
Here are my files I have so far and here is my intent w/them. I am currently taking a c programming course at a university. I am trying to create a header file and a .c file that will make a function usable in the main function. We have been instructed to do it in this format. I am trying to just get it to work and am having some problems. What i'm trying to do w/this program. I have my main section of the program and I'm trying to call the function readValue, which reads in a number. I do this three times and then I add the numbers together in the main function and print the result. //error: 'undefined reference to readValue'//error: 'undefined reference to readValue'Code://example.c #include <stdio.h> #include <stdlib.h> #include "avg.h" int main(int argc, char *argv[]) { int n1, n2, n3, sum = 0; double avg; n1 = readValue(); n2 = readValue(); n3 = readValue(); sum = n1 + n2 + n3; system("PAUSE"); return 0; } //avg.h #ifndef AVG_H #define AVG_H #include<stdio.h> int readvalue(); #endif //avgfunction.c #include "avg.h" int readValue() { int num; printf("please enter an int > 0"); scanf("%i", &num); while(num <=0) { printf("please enter an int > 0"); scanf("%i", &num); } } return num; If anyone could help me get this to work that would be awesome. I can not for the life of me find out why. I've read online and read the book, which is using a different style for creating functions.
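A likely explanation, offered as a sketch rather than a definitive diagnosis: the "undefined reference to readValue" linker error usually means avgfunction.c is not being compiled and linked together with example.c. There are two smaller problems on top of that: the header declares readvalue() with a lowercase v while the definition and the calls use readValue (C identifiers are case-sensitive), and in avgfunction.c a stray closing brace ends the function before return num;, leaving the return outside any function. Corrected versions of the two supporting files might look like this:

/* avg.h -- note the capital V so the prototype matches the definition */
#ifndef AVG_H
#define AVG_H
int readValue(void);
#endif

/* avgfunction.c */
#include <stdio.h>
#include "avg.h"

int readValue(void)
{
    int num;
    printf("please enter an int > 0: ");
    scanf("%i", &num);
    while (num <= 0) {
        printf("please enter an int > 0: ");
        scanf("%i", &num);
    }
    return num;   /* the return belongs inside the function body */
}

Then build both translation units together, for example with gcc: gcc example.c avgfunction.c -o example (in an IDE, make sure both .c files are added to the same project). With both files in the same build, the linker can resolve readValue and the program should link.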
http://cboard.cprogramming.com/c-programming/112013-simple-question-creating-fucntions.html
CC-MAIN-2016-07
refinedweb
258
76.52
Hi, guys, I have been working on this short program, it is a simple conversion program from feet/inches to metric system. Everything seems fine, apart from one if statement. There are two, the same if blocks, the only difference is the values in the condition and the sign. Why the first if skips the whole 1-11 inches and then the table prints startign from 5 feet, instead of 4'6"??? Thanks.Thanks.Code:#define FEET_TO_CM 30.48 #define INCH_TO_CM 2.54 float conversion(int, int); main() { int feet, inch; float converted = 0; for( feet = 4; feet <= 6; feet++ ) { for( inch = 0; inch <=11; inch++ ) { /* that's the two ifs that I am talking about */ if( feet == 4 && inch <= 5 ) break; if( feet == 6 && inch >= 6 ) break; converted = conversion(feet, inch); printf("%2d\'%3d\" %7.2fcm\n", feet, inch, converted); } } return 0; } float conversion( int feet, int inch ) { return (FEET_TO_CM * feet + INCH_TO_CM * inch); }
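A hedged reading of what happens: break abandons the whole inner loop, so on the very first pass (feet == 4, inch == 0) the condition inch <= 5 is already true and the entire inch loop for 4 feet is skipped, which is why the table starts at 5'0". Replacing that first break with continue skips only the individual inch values. A sketch of the adjusted loop (the second test can stay as it is, since nothing after 6'5" is wanted anyway):

/* sketch: only the first test changes; `continue` skips a single
   inch value instead of abandoning the whole inner loop */
for (feet = 4; feet <= 6; feet++) {
    for (inch = 0; inch <= 11; inch++) {
        if (feet == 4 && inch <= 5)
            continue;            /* skip 4'0" .. 4'5", keep going */
        if (feet == 6 && inch >= 6)
            break;               /* stop after 6'5" */
        converted = conversion(feet, inch);
        printf("%2d\'%3d\" %7.2fcm\n", feet, inch, converted);
    }
}

With that change the table should run from 4'6" through 6'5".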
http://cboard.cprogramming.com/c-programming/58522-why-if-break-skips-false-vals-here.html
CC-MAIN-2015-32
refinedweb
154
79.19
Hi everyone, Nice to meet you guys. I am a newcomer here :) Recently I encountered a strange problem while developing a piece of software using a SAMD21G18A + Atmel Studio 7 + ASF3, for my company. I was trying to send data to a PC (let's say inside the infinite while loop we send 64 bytes each time, then delay 20 ms, then repeat), and on the PC side I have host software that parses the received data. When I tried to measure the sending time I found that it is not stable. In the code below I created a simple project just focusing on this problem, hope that makes sense:

#include <asf.h>

unsigned long Timer1 = 0;
unsigned long elapsedTimer1 = 0;
uint8_t DataBuffer[64] = {0xEE};

uint8_t crc8Maxim(const uint8_t data[], uint8_t dataLen)
{
    uint8_t len = dataLen;
    uint8_t crc = 0x00;
    while (len--)
    {
        uint8_t extract = *data++;
        for (uint8_t tempI = 8; tempI; tempI--)
        {
            uint8_t sum = (crc ^ extract) & 0x01;
            crc >>= 1;
            if (sum)
            {
                crc ^= 0x8C;
            }
            extract >>= 1;
        }
    }
    return crc;
}

int main (void)
{
    //irq_initialize_vectors();
    //cpu_irq_enable();
    system_init();
    udc_start();
    delay_init();
    /* Insert application code here, after the board has been initialized. */
    while(1)
    {
        DataBuffer[1] = (elapsedTimer1 >> 16) & 0xFF;
        DataBuffer[2] = (elapsedTimer1 >> 8) & 0xFF;
        DataBuffer[3] = elapsedTimer1 & 0xFF;
        DataBuffer[63] = crc8Maxim(DataBuffer, 63);
        Timer1 = SysTick->VAL;
        udi_cdc_write_buf(DataBuffer, 64);
        elapsedTimer1 = Timer1 - SysTick->VAL;
        delay_ms(20);
    }
}

Here is the measurement result: normally the time taken to send a package of 64 bytes is around 24 microseconds, but sometimes it takes almost double that time to transfer the same data. However, when I tried to do the same thing with the Arduino framework (measuring the time with micros()), I got a relatively stable result, with the measured time always between 22-25 microseconds. The ASF version that I am using is 3.49.1; for the delay routines I use the SysTick configuration. Can anyone help me with this problem? Sorry for any confusion, my English is not so good. Kind regards, Xin Please see Tip #1 in my signature, below, for how to post source code - not as images. Sorry! The original post is modified. Thanks for the information.
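Two things may be worth checking, offered as possibilities rather than a confirmed diagnosis. First, SysTick on this part is a 24-bit down-counter that reloads from SysTick->LOAD, so Timer1 - SysTick->VAL is only meaningful if no reload happened inside the measured interval; a wrap shows up as an occasional wildly different reading. Second, USB CDC transfers are paced by the host, so udi_cdc_write_buf() can simply block a little longer when it has to wait for the next poll of the endpoint; in that case the jitter comes from USB scheduling rather than from this code. A wrap-aware measurement sketch (the helper name and the single-wrap assumption are mine, not from ASF):

/* Sketch: wrap-aware elapsed-tick measurement on the 24-bit SysTick
   down-counter. Assumes SysTick->LOAD is left at whatever delay_init()
   configured and that at most one reload occurs during the interval. */
static inline uint32_t systick_elapsed(uint32_t start_val)
{
    uint32_t now  = SysTick->VAL;            /* counts down */
    uint32_t load = SysTick->LOAD + 1;
    return (start_val >= now) ? (start_val - now)
                              : (start_val + load - now);
}

/* usage around the write: */
uint32_t t0 = SysTick->VAL;
udi_cdc_write_buf(DataBuffer, 64);
uint32_t ticks = systick_elapsed(t0);        /* convert with the CPU clock */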
https://www.avrfreaks.net/forum/atmel-asf-cdc-transmission-speed-not-stable
CC-MAIN-2022-33
refinedweb
368
57.71
This article appears in the Third Party Products and Tools section. Articles in this section are for the members only and must not be used to promote or advertise products in any way, shape or form. Please report any spam or advertising. This article is intended to introduce IBM Worklight and its integration with WCF. IBM Worklight is an open source application development platform for smartphones and tablets. Applications developed using Worklight can be seamlessly deployed onto Android, Iphone, Ipad, Blackberry, etc. devices. To learn more about the features and benefits of the IBM Worklight platform visit here. Tutorial PDFs related to IBM Worklight can be found here. To download IBM Worklight framework and to learn about its installation visit here. Prerequisites: Reader must be familiar with WCF restful services and related hosting concepts. IBM Worklight framework provides a complete package of solutions for mobile application development and deployment. Worklight framework provides various approaches for developing mobile browser application as well as mobile native application development. In this article, we would be looking at how to use Worklight framework for mobile native application development, using one of its approaches called Hybrid applications. A mobile application developed using IBM's Worklight can be easily deployed to run on more that one mobile environments such as Android, IPhone, etc. In this demonstration I would be showcasing development of a simple native app using Worklight's hybrid application (web) approach. Here is the summary of steps involved in the demo application development. Step 1: Create a simple Restful service If you are new to WCF or to restful service then you may find this article useful. You may consider reading further after going through the knowledge base article. In order to get started, let us quickly write a simple restful service that we would be later on consuming using Worklight. Before that let us create a simple class library named RestFulServiceDemo and add a class called Employee to the library. Details of the class are shown below. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace RestFulServiceDemo { /// <summary> /// Employee details /// </summary> public class Employee { public string FirstName { get; set; } public string LastName { get; set; } } } Let us now create a simple service contract that would allow clients to fetch the list of all employees. Details of the contract are shown below. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceModel; using System.ServiceModel.Web; namespace RestFulServiceDemo { [ServiceContract] public interface IEmployeeService { [OperationContract] [WebInvoke(UriTemplate="/Employees", Method="GET", ResponseFormat= WebMessageFormat.Json)] List<Employee> GetAllEmployees(); } } We would now create an implementation for IEmployeeService contract. Let us call this service as EmployeeServiceImplementation. In the constructor of EmployeeServiceImplementation service we would initialize a list of employees with some sample data and then make this list accessible to the outside world. Details of EmployeeServiceImplementation are shown below. 
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceModel; namespace RestFulServiceDemo { public class EmployeeServiceImplementation: IEmployeeService { private List<Employee> employees; public EmployeeServiceImplementation() { employees = new List<Employee>(); employees.Add(new Employee() { FirstName = "Pankaj", LastName = "Sharma" }); employees.Add(new Employee() { FirstName = "Om", LastName = "Sharma" }); employees.Add(new Employee() { FirstName = "Pranav", LastName = "Sharma" }); employees.Add(new Employee() { FirstName = "Ankush", LastName = "Sharma" }); } public List<Employee> GetAllEmployees() { return employees; } } } Now we have the service contract and its implementation ready. Final step in simple web service creation is to host this service. Create a console application called EmployeeServiceHosting and add a reference of the class library just created above. Add a class called ServiceHoster, with a main method in it. We would now write hosting related code in the main method. Details are shown below. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceModel; using System.ServiceModel.Web; using System.ServiceModel.Description; namespace EmployeeServiceHosting { public class ServiceHoster { static void Main(string[] args) { WebServiceHost serviceHost = new WebServiceHost(typeof(RestFulServiceDemo.EmployeeServiceImplementation), new Uri("")); ServiceEndpoint endPoint = serviceHost.AddServiceEndpoint(typeof(RestFulServiceDemo.IEmployeeService), new WebHttpBinding(), ""); ServiceDebugBehavior behavior = serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>(); behavior.HttpHelpPageEnabled = false; serviceHost.Open(); Console.WriteLine("Service is up and running"); Console.WriteLine("Press enter to quit "); Console.ReadLine(); serviceHost.Close(); } } } Build the console application and then run it either by double clicking the exe or by using Visual Studio.You must see the service running as shown below. /> What is a Worklight adapter? As described by IBM, an adapter is a transport layer used by worklight platform to connect to different back end systems. The over all idea of the adapter is that a mobile application requests the adapter to fetch some resource. Adapter in turn communicates with underlying backend systems and enquires for the resource. Backend system then returns the requested resource to the adapter which the adapter finally hand overs to the mobile application. /> As of now IBM offers threes type of adapters HTTP Adapter Http adapter is used to send and receive HTTP requests.It can be used to communicate with both restful and SOAP services. More details about Http adapter here. SQL Adapter As the name suggests, Sql Adapter is used to communicate with the databases.More details about the Sql adapter here. Cast Iron Adapter Cast iron adapter is used in more complex scenarios. More details about the cast iron adapter here. In this article we would be using a HTTP adapter to consume previously created employee service. More about adapters can be found here. Before you go any further you need to be familiar with the basic concepts related to Worklight project creation and Worklight project folder structure. If you are not already familiar with these, don't worry you can find those here. I hope that by now you have downloaded Worklight framework and have done the necessary settings on the Eclipse IDE to get started. If not then you may refer here. 
As discussed earlier HTTP adapter is used to send and receive http requests. Following are the steps involved in creation of a http adapter: 1) Open the Eclipse IDE and create a new Worklight project. Name the project as WCFConsumer. This how your Project Explorer should look after creation of the project. 2) In the Project Explorer, right click the adapters folder and select New -> Worklight Adapter as shown below /> 3) In the New Worklight Adapter popup window, select Project Name, Adapter Type and enter Adapter Name as shown below and then click on the finish button. After successful creation of the adapter a new subfolder gets generated under the 'adapters' folder by name ConsumeWcfRestAdapter. This sub folder by default has two important files one xml and another javascript file. If you have time then it is better to get familiar with the details of these files, please refer the link here. Now we need to create a procedure that would call the restful service that we had created earlier. Creation of a procedure invloves two steps: A procedure is declared in the xml file that automatically gets generated in the newly created subfolder of our adapter. In our case we would be declaring the procedure in ConsumeWcfRestAdapter.xml. Double click the xml file to declare the procedure. Using the design view of the popped up Adapter Editor, mention below shown details for the connection policy node. Now, using the remove button remove existing procedure declarations if any. Then click the add button and enter procedure related details as shown below. Now we are done with the procedure declaration part. Next step is to define the procedure. Open the ConsumeWcfRestAdapter-impl.js. Remove all existing content and paste the code snippet shown below. function getEmployees() { var path = 'Employees'; var input = { method : 'get', returnedContentType : 'json', path : path }; return WL.Server.invokeHttp(input); } In the above function 'Employees' refers to the last part of the url. WL.Server.invokeHttp(input) is responsible for sending appropriate http request to the service and for receiving the response. Here we are done with the procedure declaration and definition. Now before invoking this procedure we need to build the adapter and depoly the code to the Worklight server. In order to do this right click the subfolder under the 'adapters' folder and follow RunAs -> Deploy Adapter as shown below. Now we are ready to invoke the adapter. Before invoking the procedure ensure that the web service is running. In order to invoke the adapter right click the subfolder in the adapters folder and select RunAs -> Invoke Worklight Procedure as shown below. In the pop up window select Project Name, Adapter Name and Procedure Name and then click the run button as shown below. If you have followed all the steps so far correctly then the json data fetched from the service should get displayed in the editor as shown below. Congratulations! you have just managed to create your first worklight procedure. It is that simple! We have reached our last step of the demonstration. Now we would be displaying the received data in the mobile application's UI. In the Project Explorer double click the WCFConsumer.html node located under apps -> WCFConsumer -> Common -> js and open the file. Clear existing contents and paste the snippet shown below. 
<!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0" /> <title>WCFConsumer</title> <link rel="shortcut icon" href="images/favicon.png" /> <link rel="apple-touch-icon" href="images/apple-touch-icon.png" /> <link rel="stylesheet" href="css/WCFConsumer.css" /> </head> <body id="content" style="display: none"> <div id="invokeResult"></div> <script src="js/initOptions.js"></script> <script src="js/WCFConsumer.js"></script> <script src="js/messages.js"></script> </body> </html> Note that in the above snippet we have added a div, with its id set to 'invokeResult'. We would be appending the data fetched from the web service to this div. Now open the WCFConsumer.js file. Clear existing contents and paste the snippet shown below. // Worklight comes with the jQuery framework bundled inside. If you do not want to use it, please comment out the line below. window.$ = window.jQuery = WLJQ; function wlCommonInit() { GetEmployeeData(); } function GetEmployeeData() { var invocationData = { adapter : 'ConsumeWcfRestAdapter', procedure : 'getEmployees' }; WL.Client.invokeProcedure(invocationData, { onSuccess : handleSuccess, onFailure : handleFailure, }); } function handleSuccess(result) { var httpStatusCode = result.status; var div = $("#invokeResult"); if (200 == httpStatusCode) { var invocationResult = result.invocationResult; var isSuccessful = invocationResult.isSuccessful; if (true == isSuccessful) { var result = invocationResult.array; for ( var i = 0; i < result.length; i++) { div.append('FirstName: ' + result[i].FirstName); div.append(", "); div.append('LastName: ' + result[i].LastName); div.append('<br>'); } } else { div.append("Request Failed!"); } } else { div.append("Request Failed!"); } } function handleFailure(result) { var div = $("#invokeResult"); div.append("Request Failed!"); } The wlCommonInit function in the above snippet gets automatically called after the page load. Any code that you want to call immediately after the page load can be placed here. We would be placing GetEmployeeData method here. In the GetEmployeeData function we are calling the previously delcared adapter procedure getEmployees. In the invokeProcedure method we have mentioned two handlers one for success scenario and the other for failure. In the handleFailure function we simple display an error message. Important functionality is in the handleSuccess function. In this function we are receiving the data from the web service in the result variable. Then we check for the status code and if everything is ok we simply append the contents to the div. Now right click the WCFConsumer folder under the apps folder and select New -> Worklight Environment. This pops up a window. Select Project Name, Application name and check boxes as shown below and click finish. We are done! Now in order to deploy the code, right click WCFConsumer folder and select Run As -> Build All and Deploy. Once done with the deployment open this url in the browser. You do not need to be connected with the internet to access this page. This web page is called Worklight Console. Click the preview button next to each environment and see how the application would look like. After clicking preview, web page show below gets displyed. Here we have managed to create an app for Android, Iphone and Ipad using IBM Worklight and WCF. I would be more than glad to answer any queries pertaining this.
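One hedged suggestion when wiring this up: confirm that the WCF endpoint returns JSON on its own before involving the adapter. The article does not show the base URI passed to WebServiceHost, so the address in the sketch below is purely a placeholder assumption:

// Hypothetical smoke test for the REST endpoint, independent of Worklight.
// The base address is an assumption - substitute whatever URI you passed
// to WebServiceHost in ServiceHoster.Main().
using System;
using System.Net;

class EndpointSmokeTest
{
    static void Main()
    {
        using (WebClient client = new WebClient())
        {
            string json = client.DownloadString(
                "http://localhost:8000/Employees");  // placeholder address
            Console.WriteLine(json);  // expect a JSON array of employees
        }
    }
}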
http://www.codeproject.com/Articles/480888/Mobile-Development-using-IBM-Worklight-and-WCF?fid=1804496&df=90&mpp=25&noise=3&prof=False&sort=Position&view=Normal&spc=Relaxed
CC-MAIN-2014-42
refinedweb
2,040
51.24
Clear and Simple (and fast too!) solution in Clear category for Hypercube by BrianMcleod

from collections import defaultdict

def hypercube(grid):
    """
    Detect whether there is a path that spells Hypercube in the grid.

    Since True or False is the output, only the tails of successful paths
    need to be retained between iterative tests. No test for used letters
    is needed because all of the letters in 'Hypercube' are unique except
    'e', occurring at positions 3 and 8. Since the two occurrences are in
    odd and even positions and movement is limited to up/down/left/right,
    the two 'e's cannot be the same cell. Simply check for all matches
    with the current letter in "Hypercube" and retain only those that are
    adjacent to at least one of the tails. If at the end there are any
    tails left, the result is True.
    """
    word = "hypercube"
    cells = defaultdict(list)
    for x, row in enumerate(grid):
        for y, c in enumerate(row):
            cells[c.lower()].append((x, y))
    tails = cells[word[0]]
    for c in word[1:]:
        tails = [cell for cell in cells[c]
                 if any([sum([abs(a - b) for a, b in zip(cell, pt)]) == 1
                         for pt in tails])]
    return bool(tails)

Sept. 27, 2018
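A small usage sketch (the grids are invented for illustration and are not part of the original solution). In the first grid the word snakes right along the top row, down, left along the middle row, down, and right along the bottom row:

print(hypercube(["hyp",
                 "cre",
                 "ube"]))   # True
print(hypercube(["abc",
                 "def",
                 "ghi"]))   # False - there is no 'y' anywhere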
https://py.checkio.org/mission/hypercube/publications/BrianMcleod/python-3/clear-and-simple-and-fast-too/share/bc8e90516bacbab57248f9c7bd67c2ff/
CC-MAIN-2022-21
refinedweb
221
57.91
Implementation status: to be implemented

Synopsis

#include <sys/stat.h>
int chmod(const char *path, mode_t mode);

#include <fcntl.h>
int fchmodat(int fd, const char *path, mode_t mode, int flag);

Description

The chmod() function modifies file mode bits according to the mode parameter. The fchmodat() function is equivalent to chmod() unless path is a relative path, in which case the file to be changed is resolved relative to the directory associated with the file descriptor fd instead of the current working directory. If the access mode of the open file description associated with the file descriptor is not O_SEARCH, the function checks whether directory searches are permitted using the current permissions of the directory underlying the file descriptor. If the access mode is O_SEARCH, the function does not perform the check.

Values for flag are constructed by a bitwise-inclusive OR of the following flags (<fcntl.h>):

AT_SYMLINK_NOFOLLOW - if path names a symbolic link, then the mode of the symbolic link is changed.
AT_FDCWD - for this value of the fd parameter of fchmodat() the current working directory is used. If flag is also zero, the behavior shall be identical to that of chmod().

Arguments:

path - a pointer to the file path.
mode - the required file mode bit sequence.
fd - a file descriptor of the directory.
flag - flags defining the function behavior as mentioned above.

Return value

On success the functions return 0; otherwise they return -1 and set errno.

Errors

[EACCES] - search permission is denied on a component of the path prefix, or the access mode of the open file description associated with fd is not O_SEARCH and the permissions of the directory underlying fd do not permit directory searches.
[EPERM] - the effective user ID does not match the owner of the file and the process does not have appropriate privileges.
[EROFS] - the named file resides on a read-only file system.
[EINTR] - a signal was caught during execution of the function.
[EINVAL] - the value of the mode argument is invalid, or the value of the flag argument is invalid.
[EOPNOTSUPP] - the AT_SYMLINK_NOFOLLOW bit is set in the flag argument, path names a symbolic link, and the system does not support changing the mode of a symbolic link.
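For reference, a sketch of standard POSIX usage that this interface is intended to match (the file name is just a placeholder):

/* Sketch of standard POSIX usage of chmod() and fchmodat(). */
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* make "notes.txt" readable/writable by the owner, readable by others */
    if (chmod("notes.txt", S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH) == -1)
        perror("chmod");

    /* same change, but resolving the name relative to the current
       working directory via the AT_FDCWD pseudo-descriptor */
    if (fchmodat(AT_FDCWD, "notes.txt", 0644, 0) == -1)
        perror("fchmodat");

    return 0;
}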
http://phoenix-rtos.com/documentation/libphoenix/posix/chmod
CC-MAIN-2020-34
refinedweb
358
53.61
In my previous articles we discussed the advantages of Kotlin and created a first application in Kotlin using the Reddit API. Today we discuss the disadvantages of Kotlin. Please treat this as personal opinion, and comment if you have solutions for the problems listed below. Although Kotlin is great, it's not perfect. Here are a few aspects of the language that I'm not in love with.

1. No namespaces

Kotlin allows you to define functions and properties at the top level of your files. That's a great feature, but it can cause some confusion when combined with the fact that all top level declarations are referenced unqualified from Kotlin. This can sometimes make it difficult to tell what a function is when reading one of its usages. For example, if you define a top level function:

fun foo() {...}

You will call that function as foo(). If you have a function with the same name in a different package, it's not obvious from looking at the call site which function is being called. You can fully qualify the name of the function with the entire name of the package that it's defined in, but given Java's convention of very deep package names, that's not ideal.

Recommended : Why Kotlin Anko is better than Java XML for Android?

One workaround is to approximate a namespace by using a singleton object class.

object FooActions {
    fun foo() {...}
}

That allows you to refer to the function as FooActions.foo() if you're only calling the functions from Kotlin, but it's not as pretty if you have Java code that needs to call that function. From Java, you have to refer to the function as FooActions.INSTANCE.foo(), which is certainly not ideal. You can avoid the INSTANCE step by annotating your function with @JvmStatic, which is about the best you can do currently. That's not a big deal, but it's some boilerplate that wouldn't be necessary if Kotlin had namespaces.

2. No static modifier

Following on the previous point, Kotlin has unusual handling of static function and property declarations that are called from Java. It's not bad, but it feels dirtier than necessary. For example, the Android View class defines some static constants like View.VISIBLE and static methods like View.inflate:

public class View {
    public static final int VISIBLE = 0x00000000;
    public static final int INVISIBLE = 0x00000004;
    public static View inflate(Context context, int resource) {...}
}

The declaration is simple. In contrast, here's the equivalent Kotlin:

class View {
    companion object {
        @JvmField val VISIBLE: Int = 0x00000000
        @JvmField val INVISIBLE: Int = 0x00000004
        @JvmStatic fun inflate(context: Context, resource: Int) {...}
    }
}

Although the Kotlin version isn't terrible, it's more verbose than I would normally expect from the language.

See this : Create dynamic layout in Android

If you skip the annotations, then Java code will have to use awful syntax to refer to your fields:

// With annotations:
View.VISIBLE;
// Without annotations:
View.Companion.getVISIBLE();

It feels odd that there are no better ways to create static functions and properties. I know that companion objects are real objects and can do stuff like implement interfaces, but that doesn't feel like a compelling enough use case to completely replace normal static declarations.

3. Automatic conversion of Java to Kotlin

This was the first topic in the list of things I like about Kotlin, and it works well. But because it works so well 80% of the time, many of the cases where it fails can be frustrating. Javadocs are often mangled, especially any paragraphs that wrap lines.
Static fields and methods are converted to plain declarations on the companion object, which breaks any Java code that previously called them unless you manually add @JvmField or @JvmStatic, respectively. All of these problems will certainly get fixed as the Kotlin team has more time to work on the converter, so I'm optimistic in this case.

4. Required property accessor syntax

Kotlin has the great syntactic sugar called "property accessor syntax" that allows you to call JavaBeans-style getters and setters as if they were a Kotlin property. So for example, you can call the Activity.getContext() method by writing activity.context instead of writing the whole method name. If you use the actual method call in Kotlin, you will get a lint warning telling you to use the property syntax instead. That's definitely a nice feature, but there are a few cases where method names start with the word "get" but you don't want to use the property syntax. One common case is with Java's atomic classes. If you have a val i = AtomicInteger(), you might want to call i.getAndIncrement(). But Kotlin wants you to call i.andIncrement. That's clearly not an improvement.

You should also see : How to use Json or Gson in Kotlin?

You can annotate every call site with @Suppress("UsePropertyAccessSyntax"), but that's ugly. It would be much better if there was a way to annotate functions you write with a similar annotation that would tell the linter that the function shouldn't be treated like a property.

5. Method count

Writing code in Kotlin will certainly reduce the number of lines of code in your project. But it will also probably increase the method count of the compiled code, which is of course a drawback if you're using Android. There are a number of reasons for that, but one of the larger contributors is the way Kotlin implements properties. Unlike Java, the Kotlin language doesn't provide any way to define a bare field. All val and var declarations instead create a property. This has the advantage of allowing you to add a get or set definition to a property whenever you want without breaking code that references the property. That's a great feature that removes the need to write defensive getters and setters in the way that you often do in Java. The flip side is that every such property compiles to a generated getter (and a setter, for a var) in addition to its backing field, which is where the extra methods come from.

And finally, here are two design decisions that the Kotlin team made that I strongly disagree with, and that I don't expect to change in the future.

6. SAM conversion and Unit returning lambdas

This one is a really baffling design decision. One of the best features of Kotlin is the way it embraces lambda functions. If you have a Java function that takes a SAM interface as a parameter (an interface with a Single Abstract Method):

public void registerCallback(View.OnClickListener r)

You can call it by passing a plain lambda from either Kotlin or Java:

// Java
registerCallback(() -> { /** do stuff */ })
// Kotlin
registerCallback { /** do stuff */ }

This is great. But trying to define a similar method in Kotlin is inexplicably harder. The direct translation is called the same from Java, but requires an explicit type when called from Kotlin:

fun registerCallback(r: View.OnClickListener)

// Kotlin. Note that parentheses are required now.
registerCallback(View.OnClickListener { /** do stuff */ })

That's annoying to have to type out, especially if you convert some Java code to Kotlin and find out that it breaks existing Kotlin code.
The idiomatic way to define that function in Kotlin would be with a function type:

fun registerCallback(r: () -> Unit)

That allows the nice function call syntax in Kotlin, but since all Kotlin functions are required to return a value, it makes calling the function from Java much worse. You have to explicitly return Unit from Java lambdas, so expression lambdas are no longer possible:

registerCallback(() -> {
    /** do stuff */
    return Unit.INSTANCE;
})

If you're writing a library in Kotlin, there isn't any good way to write a method with a function parameter that is ideal to call from both Java and Kotlin.

Do you know : How to use Lambda in Kotlin?

Hopefully the Kotlin designers change their mind and allow SAM conversions for functions defined in Kotlin in the future, but I'm not optimistic.

7. Closed by default

Every downside to Kotlin I've talked about so far is mostly a small syntax detail that is not quite as clean as I'd like, but isn't a big deal overall. But there's one design decision that is going to cause a huge amount of pain in the future: all classes and functions in Kotlin are closed by default. It's a design decision pushed by Effective Java, and it might sound nice in theory, but it's an obviously bad choice to anyone who's had to use a buggy or incomplete third-party library.

8. Conclusion

Kotlin is overall a great language. It is much less verbose than Java, and has an excellent standard library that removes the need to use a lot of the libraries that make Java life bearable. Converting an app from Java to Kotlin is made much easier thanks to automated syntax conversion, and the result is almost always an improvement. If you're an Android developer, you owe it to yourself to give it a try and share your experience with us via comment.

Share your thoughts
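One development worth noting since this article was written: Kotlin 1.4 added fun interface declarations, which give SAM types defined in Kotlin the same lambda-friendly call syntax complained about in point 6. A sketch (the names are illustrative, not from the article):

// Sketch: a Kotlin-defined SAM type using `fun interface` (Kotlin 1.4+).
fun interface ClickCallback {
    fun onClick()
}

fun registerCallback(r: ClickCallback) = r.onClick()

fun main() {
    registerCallback { println("clicked") }   // a plain lambda now works from Kotlin
}

Because ClickCallback is still an ordinary interface on the JVM, Java callers can keep passing a plain lambda as well.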
http://www.tellmehow.co/disadvantage-of-kotlin/
CC-MAIN-2020-24
refinedweb
1,493
61.56
Printing a document programmatically can be quite involved. Using the ReportPrinting library presented here, you'll be able to print reports with multiple sections with very little code. ReportPrinting The reports are comprised of plain text sections (such as the title "Birthdays", and the other paragraphs) and grids of data from a database (more specifically, from a DataView object). Lines and images (using the .NET Image class) and boxes (like the CSS box model) are also supported. The framework presented can easily be extended to handle many other primitives. DataView Image I do not plan to keep this page up-to-date with the latest code and documentation. This will just be a quick presentation of what you can do and the classes involved. For the latest version, see here. However, this project is not actively being developed, so use at your own risk. This was all written for the .NET 1.1 Framework, and I know a lot of new namespaces were added in .NET 2.0 that may make a lot of this obsolete. It solved a problem I had at the time. This section will take you through the process of using the ReportPrinting library, step by step. The first step is to create a DataTable and DataView that serves as the source of data. The following code will create a table of some famous birthdays: DataTable public static DataView GetDataView() { DataTable dt = new DataTable("People"); dt.Columns.Add("FirstName", typeof(string)); dt.Columns.Add("LastName", typeof(string)); dt.Columns.Add("Birthdate", typeof(DateTime)); dt.Rows.Add(new Object[] {"Theodore", "Roosevelt", new DateTime(1858, 11, 27)}); dt.Rows.Add(new Object[] {"Winston", "Churchill", new DateTime(1874, 11, 30)}); dt.Rows.Add(new Object[] {"Pablo", "Picasso", new DateTime(1881, 10, 25)}); dt.Rows.Add(new Object[] {"Charlie", "Chaplin", new DateTime(1889, 4, 16)}); dt.Rows.Add(new Object[] {"Steven", "Spielberg", new DateTime(1946, 12, 18)}); dt.Rows.Add(new Object[] {"Bart", "Simpson", new DateTime(1987, 4, 19)}); dt.Rows.Add(new Object[] {"Louis", "Armstrong", new DateTime(1901, 8, 4)}); dt.Rows.Add(new Object[] {"Igor", "Stravinski", new DateTime(1882, 6, 17)}); dt.Rows.Add(new Object[] {"Bill", "Gates", new DateTime(1955, 10, 28)}); dt.Rows.Add(new Object[] {"Albert", "Einstein", new DateTime(1879, 3, 14)}); dt.Rows.Add(new Object[] {"Marilyn", "Monroe", new DateTime(1927, 6, 1)}); dt.Rows.Add(new Object[] {"Mother", "Teresa", new DateTime(1910, 8, 27)}); DataView dv = dt.DefaultView; return dv; } This function will return a DataView for a table of a dozen famous individuals and their birthdays. The ReportPrinting.IReportMaker interface is used to create objects that setup a ReportDocument. That is, an object that implements IReportMaker will contain all the code necessary to make a report. For this example, it is a class called SampleReportMaker1. It has one method to implement: ReportPrinting.IReportMaker ReportDocument IReportMaker SampleReportMaker1 public void MakeDocument(ReportDocument reportDocument) { Let's take a look at the implementation of this method step-by-step. First, it is a good idea to reset the TextStyle class. TextStyle provides global styles for the formatting of text (such as heading, normal paragraphs, headers, footers, etc.) Since the scope of this class is application wide, it should be reset to a known state at the beginning of report generation. TextStyle TextStyle.ResetStyles(); Next, setup the default margins for the document, if desired. 
// Setup default margins for the document (units of 1/100 inch) reportDocument.DefaultPageSettings.Margins.Top = 50; reportDocument.DefaultPageSettings.Margins.Bottom = 50; reportDocument.DefaultPageSettings.Margins.Left = 75; reportDocument.DefaultPageSettings.Margins.Right = 75; As mentioned, the TextStyle class has several static, global styles that can be applied to different text blocks. These styles can each be customized. We'll change some fonts and colors just to show what's possible. // Setup the global TextStyles TextStyle.Heading1.FontFamily = new FontFamily("Comic Sans MS"); TextStyle.Heading1.Brush = Brushes.DarkBlue; TextStyle.Heading1.SizeDelta = 5.0f; TextStyle.TableHeader.Brush = Brushes.White; TextStyle.TableHeader.BackgroundBrush = Brushes.DarkBlue; TextStyle.TableRow.BackgroundBrush = Brushes.AntiqueWhite; TextStyle.Normal.Size = 12.0f; // Add some white-space to the page. By adding a 1/10 inch margin // to the bottom of every line, quite a bit of white space will be added TextStyle.Normal.MarginBottom = 0.1f; Using our method defined earlier, we'll get a dataview and set the default sort based on the values setup in a GUI. (Note, this is a hack. Better encapsulation should be used to isolate the dialog, defined later, from this class.) dataview // create a data table and a default view from it. DataView dv = GetDataView(); // set the sort on the data view if (myPrintDialog.cmbOrderBy.SelectedItem != null) { string str = myPrintDialog.cmbOrderBy.SelectedItem.ToString(); if (myPrintDialog.chkReverse.Checked) { str += " DESC"; } dv.Sort = str; } The next step is creating an instance of the ReportPrinting.ReportBuilder class. This object is used to simplify the task of piecing together text, data, and the container sections that they go into (these classes for text, data and sections are described in more detail later in this article). ReportPrinting.ReportBuilder // create a builder to help with putting the table together. ReportBuilder builder = new ReportBuilder(reportDocument); Creating a header and footer are quite easy with the builder class's five overloaded functions. The one below creates a simple header with text on the left side (Birthdays Report) and on the right side (page #). The footer has the date centered. // Add a simple page header and footer that is the same on all pages. builder.AddPageHeader("Birthdays Report", String.Empty, "page %p"); builder.AddPageFooter(String.Empty, DateTime.Now.ToLongDateString(), String.Empty); Now the real fun begins: we start a vertical, linear layout because every section from here should be added below the preceding section. builder.StartLinearLayout(Direction.Vertical); Now add two text sections. The first section added will be a heading (as defined by TextStyle.Heading1). The second section is just normal text (as defined by the TextStyle.Normal). TextStyle.Heading1 TextStyle.Normal // Add text sections builder.AddTextSection("Birthdays", TextStyle.Heading1); builder.AddTextSection("The following are various birthdays of people " + "who are considered important in history."); Next, we add a section of a data table. The first line adds a data section with a visible header row. Then three column descriptors are added. These are added in the order that the columns are displayed. That is, LastName will be the first column, followed by FirstName, followed by Birthdate. Birthdate The first parameter passed to AddColumn is the name of the column in the underlying DataTable. 
The second parameter is the string as printed in the header row. The last three parameters describe the widths used. A max-width can be specified in inches. Optionally, the width can be auto-sized based on the header row and/or the data rows. In this case, with false being passed, no auto-sizing is performed. AddColumn string false // Add a data section, then add columns builder.AddDataSection(dv, true); builder.AddColumn ("LastName", "Last Name", 1.5f, false, false); builder.AddColumn ("FirstName", "First Name", 1.5f, false, false); builder.AddColumn ("Birthdate", "Birthdate", 3.0f, false, false); We set a format expression for the last column added (the date column). These format expressions are identical to those used by String.Format. This makes the date show up in long format. String.Format // Set the format expression to this string. builder.CurrentColumn.FormatExpression = "{0:D}"; And the very last thing is to finish the LinearLayout that was started earlier. LinearLayout builder.FinishLinearLayout(); } There are only a handful of controls on the following form: a label, a combo box, a check box, and a usercontrol from ReportPrinting namespace called PrintControl. This control has the four buttons you see at the bottom of the form. PrintControl This form also has an instance of the ReportPrinting.ReportDocument class. This is a subclass of System.Drawing.Printing.PrintDocument. If you create the above form in a designer, here is the constructor required to create a new ReportDocument object. ReportPrinting.ReportDocument System.Drawing.Printing.PrintDocument private ReportDocument reportDocument; public ReportPrinting.PrintControl PrintControls; public System.Windows.Forms.ComboBox cmbOrderBy; public System.Windows.Forms.CheckBox chkReverse; public SamplePrintDialog1() { InitializeComponent(); this.reportDocument = new ReportDocument(); this.PrintControls.Document = reportDocument; SampleReportMaker1 reportMaker = new SampleReportMaker1(this); this.reportDocument.ReportMaker = reportMaker; this.cmbOrderBy.Items.Clear(); this.cmbOrderBy.Items.Add("FirstName"); this.cmbOrderBy.Items.Add("LastName"); this.cmbOrderBy.Items.Add("Birthdate"); } In this constructor, an instance of ReportDocument is created. This instance is assigned to the PrintControls.Document property. A SampleReportMaker1 object (defined above) is then instantiated and assigned to the ReportDocument's ReportMaker property. The final bit of the constructor simply sets up the ComboBox. PrintControls.Document ReportMaker ComboBox The above code prints a fairly simple document. Just to note, you can use the standard PrintDialog, PrintPreview, and PageSettings dialogs (without using the PrintControls usercontrol). PrintDialog PrintPreview PageSettings PrintControls This entire sample can be found in the download of the ReportPrinting library, along with many other tests that I have created (most are boring to look at, but test various random settings). There are several classes introduced into the ReportPrinting namespace. They work together for the printing of the above report (in addition to all the .NET Framework base classes that are used). Here is a quasi-UML diagram that shows the relationship between these classes. An open triangle is generalization (i.e. it points to the super-class in the inheritance chain). The black diamonds are composite (i.e. show that one class instantiates members of another class). The dashed-lines are dependency (i.e. it uses the class). 
ReportDocument extends from PrintDocument and is customized for printing reports from one or more tables of data. A ReportDocument object is the top-level container for all the sections that make up the report. (This consists of a header, body, and footer.) PrintDocument The ReportDocument's main job is printing, which occurs when the Print() method is called of the base class. The Print() method iterates through all the ReportSections making up the document, printing each one. Print() ReportSections The strategy design pattern is employed for formatting the report. An object implementing IReportMaker may be associated with the ReportDocument. This IReportMaker object is application specific and knows how to create a report based on application state and user settings. This object would be responsible for creating sections, associating DataViews, and applying any required styles through use of the TextStyle class. It will generally use the ReportBuilder class to assist with the complexity of building a report. DataViews ReportBuilder ReportSection is an abstract class that represents a printable section of a report. There are several subclasses of ReportSection, including ReportSectionText (which represents a string of text) and ReportSectionData (which represents a printable DataView). There are also container sections (which derive from SectionContainer class, which in turn derives from ReportSection). These containers hold child ReportSection objects (also known as subsections) to be printed. Let’s take a quick look at how this might work with an example. ReportSection ReportSectionText ReportSectionData SectionContainer In the sample report shown at the top of this article, there is a paragraph of text followed by a table of data. (There are actually two paragraphs of text, one of which is a heading. Plus there is a page header, but we'll ignore all that for now.) We would create a ReportSectionText object to print the paragraph of text and a ReportSectionData object to print the table of data. To add both of these ReportSections to the ReportDocument, we must create a container. We would create a LinearSections container to hold these two sections. This container is then made the body of the ReportDocument. When the document is printed, the section container will first print the ReportSectionText, and then below that, it will print the ReportSectionData. Simply printing each section below the preceding one will result in the finished report. But there are many other ways to set up these classes. LinearSections This abstract class defines a container of sections. There are two types provided with the framework: LinearSections and LayeredSections. LayeredSections The LinearSections class is a subclass of SectionContainer, which is a subclass of ReportSection. Therefore, the LinearSections can be thought of as "a printable section of a report." However, it is also a container of one or more sections. As its name implies, it lays sections out linearly -- that is, in a row or in a column. A property named Direction specifies if this container will layout sections going down the page (typical) or across the page (not as typical). Direction The LayeredSections class is also a subclass of SectionContainer, which is a subclass of ReportSection. Therefore, the LayeredSections can be thought of as "a printable section of a report." It is also a container of one or more sections. The child sections of a LayeredSections object are all painted on top of one another (creating layers). 
The first section added to a LayeredSections object is the bottom layer. Subsequent ReportSection objects added to the LayeredSections object will be shown on top of each other. The ReportSectionText prints a string to the page. Two public properties are used to setup this section. Text is used to specify the string to print. TextStyle, described later, sets the font, color, alignment and other properties for how the text is printed. public It is interesting to note that the string specified for this section can be just one word, or many paragraphs of text. The ReportSectionData prints a table of data. It uses a DataView object (from the .NET System.Data namespace) as the source of data. It then uses a series of ReportDataColumns to provide the formatting details. These ReportDataColumns are similar to the DataGridColumnStyle class. System.Data ReportDataColumns DataGridColumnStyle The ReportDataColumn provides the necessary information for formatting data for a column of a report. For every column to be presented within a section of data, a new ReportDataColumn object is instantiated and added to the ReportSection. At a minimum, each column describes a source field from the DataSource (that is, a column name from the DataView) and a maximum width on the page. ReportDataColumn DataSource The ReportDataColumn can be setup with its own unique TextStyle for both header and normal rows. Therefore, each column's data can be formatted differently (e.g. an important column could be bold and red). The TextStyle is also used to set the horizontal alignment (justification). The TextStyle class allows styles and fonts to be added to text selectively, allowing default styles to be used when not explicitly set. All styles (except for the static TextStyle.Normal) have another style as their "default" style. Until a property is set (like bold, underline, size, font family, etc), a TextStyle object always uses the corresponding value from its default (or parent) style. For example, a new style can be defined using Normal as its default, but setting bold. Normal bold TextStyle paragraphStyle = new TextStyle(TextStyle.Normal); paragraphStyle.Bold = true; It will have all the same properties as TextStyle.Normal, except it will be bold. A later change to Normal (such as below) will have the effect of increasing the size of both styles (Normal and paragraphStyle). paragraphStyle TextStyle.Normal.Size += 1.0f ReportBuilder assists with the building of a report. This class is the main interface between your code and the ReportPrinting library. In many cases, you will never explicitly create any of the above objects. Instead, the ReportBuilder will create them for you. To instantiate a ReportBuilder, you must provide the ReportDocument to be built. Then you can call its various Add… methods to sequentially add pieces to a report document. Add We've already seen an example of using ReportBuilder above. IReportMaker is an interface used to implement the strategy design pattern. An object that implements IReportMaker can be added to a ReportDocument. When the document is about to be printed, it automatically calls the single method MakeDocument(). The above example shows an implementation of that method to print a one-page report. MakeDocument() For example, you could have an application that can print either detailed reports or a shorter overview. The logic to make each of these reports would be located in separate classes. Each class would implement the IReportMaker interface. 
Your print dialog could have a "Print What" combo box to allow the user to select the type of report, and use the selection in the combo box to associate the correct implementation of IReportMaker with the ReportDocument.

That summarizes the ReportPrinting library.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/Articles/4934/Printing-Reports-in-NET?msg=2912730
I'm trying to make a deck of cards to use in a blackjack game but I'm having problems.

    #include <stdio.h>
    #include <math.h>
    #include <string.h>

    main(){
        int i;
        typedef struct {
            int rank, suit;
        } cardtype;
        cardtype deck[52];
        cardtype hand[5];
        char *rank[13] = {"ace", "Duece", "3", "4", "5", "6", "7", "8", "10", "Jack", "Queen", "King"};
        char *suit[4] = {"Clubs", "Diamonds", "Hearts", "Spades"};

        for(i=1; i < 52; i++){                    //this is the part that i'm having trouble with
            rank[deck[i].rank] = rank[i%13-1];    //I didn't really know how to do it so i just
            suit[deck[i].suit] = suit[i/13];}     //experimented, probably gibberish lol

        //for(i=0; i < 52 ; i++)
        //{print("%s of %s", rank[deck[i].rank], suit[deck[i].rank]);};

        return 0;
    }

any help would be much appreciated thanks!!
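For reference, one common way to write that loop, assuming the goal is for each deck entry to store indices into the rank and suit name arrays (rather than assigning back into those arrays), is sketched below. Note that the rank list in the post also skips "9", so it only has 12 of the 13 names.

    #include <stdio.h>

    int main(void)
    {
        typedef struct { int rank, suit; } cardtype;
        cardtype deck[52];
        /* 13 rank names and 4 suit names */
        const char *rank[13] = {"Ace", "2", "3", "4", "5", "6", "7",
                                "8", "9", "10", "Jack", "Queen", "King"};
        const char *suit[4] = {"Clubs", "Diamonds", "Hearts", "Spades"};
        int i;

        for (i = 0; i < 52; i++) {
            deck[i].rank = i % 13;   /* store an index into rank[] */
            deck[i].suit = i / 13;   /* store an index into suit[] */
        }
        for (i = 0; i < 52; i++)
            printf("%s of %s\n", rank[deck[i].rank], suit[deck[i].suit]);
        return 0;
    }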
https://www.daniweb.com/programming/software-development/threads/357092/help-with-structures-building-a-deck-of-cards
Basic Theory

1. Place the starting node s on the top of the stack.
2. If the stack is empty, return failure and stop.
3. If the element on the stack is the goal node g, return success and stop. Otherwise,
4. Remove and expand the first element, and place the children at the top of the stack.
5. Return to step 2.

Complexity

The depth-first search is preferred over the breadth-first when the search tree is known to have a plentiful number of goals. The time complexity of the depth-first tree search is the same as that for breadth-first, O(b^d). It is less demanding in space requirements, however, since only the path from the starting node to the current node needs to be stored. Therefore, if the depth cutoff is d, the space complexity is just O(d).

error in line 121. should be if(k==required)

Error corrected ... thanks for your help.

oh you are right.. the first allocation is actually not needed, it is just to check whether memory is available or not. You can check it using new also. So you can simply omit the memory allocation using malloc. Maybe it's better to comment that portion and address it in the comments as an alternative way of memory allocation, for the sake of clarity for beginners. Also, in the isConnected function, when you do x-1 and y-1, kindly make sure that the result lies within the bounds of the array size and doesn't become negative.

Hey, but you malloc it to test if there's enough memory, then don't free it. So what have you tested? Maybe there was enough memory but there isn't now! You could use placement new with the malloc'ed memory - but better just catch the out of memory exception from the new if it happens!

Hi, I added the following edges:

    g.addEdge(1, 3);
    g.addEdge(1, 5);
    g.addEdge(2, 4);
    g.addEdge(2, 5);
    g.addEdge(3, 6);
    g.addEdge(4, 6);
    g.addEdge(4, 7);
    g.addEdge(5, 7);
    g.addEdge(5, 8);
    g.addEdge(6, 9);
    g.addEdge(6, 10);
    g.addEdge(7, 9);
    g.addEdge(8, 9);
    g.addEdge(8, 10);

and performed DFS with g.DFS(1, 8); But my program crashes.. Can you tell me why this is happening? Thanks in advance..

    for (i = n; i >= 0 ; --i)
        if (isConnected(k, i) && !visited[i]) {
            s.push(i);
            visited[i] = true;
        }

you are visiting every unvisited adjacent of the vertex k in order... this is breadth first, not depth first.. in depth first, we go down the tree (for instance, here we would have had to recursively call dfs(i, reqd))...

also, in the constructor itself, instead of using the nested for loop to set all values to 0, can't we do it in the above for loop itself as:

    for (int i = 0; i < n; i++) {
        A[i] = new int[n];
        // set this row to 0's
        memset(A[i], 0, n);
    }

Combination of these two lines gives a memory leak:

    113 | bool *visited = new bool[n+1];
    ....
    120 | if(x == required) return;

There is some mistake in concept:

    for (i = n; i >= 0 ; --i)
        if (isConnected(k, i) && !visited[i]) {
            s.push(i);
            visited[i] = true;
        }

Depth first traversal will need recursion; here the code is using the method of breadth first search. This seems to be BFS rather than DFS.

Implementation using stack STL:

/* Algorithm
1. Place the starting node s on the top of the stack.
2. If the stack is empty, return failure and stop.
3. If the element on the stack is goal node g, return success and stop. Otherwise,
4. Remove and expand the first element, and place the children at the top of the stack.
5. Return to step 2.
*/ #include #include #include #include using namespace std; struct node { int info; node (int i=0; i s; bool *vis = new bool[n+1]; for(int i=0; i<=n; i++) vis[i] = false; s.push(x); vis[x] = true; if(x == req) return; cout<<"Depth first Search starting from vertex"; cout<=0; --i) if(isConnected(k, i) && !vis[i]) { s.push(i); vis[i] = true; } } cout<<endl; delete [] vis; } int main() { graph g(8); g.addEdge(1,2); g.addEdge(1,3); g.addEdge(1,4); g.addEdge(2,5); g.addEdge(2,6); g.addEdge(4,7); g.addEdge(4,8); g.DFS(1,4); return 0; } hai i getting the error while not expanded online functions in searching techniques techniques program using c++(linear search,binary search) compiler dosent show the output... screen is displayed just 1 second and dissaper plz help me output does not show on the compiler.. output appear for few seconds and then disappear..
https://www.programming-techniques.com/2012/07/depth-first-search-in-c-algorithm-and.html
GlassFish v3 Gem Updated
By arungupta on Jan 04, 2008

Pramod published an updated JRuby Gem for GlassFish v3. Download the gem here. Here are the updates from last time:

- The Gem is now smaller - 2.4 MB instead of 2.9 MB (approx 20% smaller).
- The Gem is now using the latest v3 codebase, including Grizzly JRuby module 1.6.1.
- 2 Rails instances are created instead of the default one. So now 2 requests can be invoked in parallel and they both will be served instead of returning a blank page for the pending request. This explains/resolves the issue reported here.
- file bugs if you see any other issues.

And of course, you can BYOG (Build Your Own Gem :)

Either way, once the Gem is installed then go ahead and use it as described below:

- Create a template app as: jruby -S rails hello
- Create a controller and view as: cd hello; jruby script/generate controller say hello
- Edit the controller as: vi app/controllers/say_controller.rb and add the following variable in the hello helper method: @hello_string = "v3 Gem is getting polished!"
- Edit the view as: vi app/views/say/hello.rhtml and add the following string at the bottom: <%= @hello_string %>
- And deploy the application on the GlassFish v3 gem as: cd ..; jruby -S glassfish_rails hello

That's it! The following output in the console confirms successful start of the Gem:

    Jan 4, 2008 3:35:52 PM com.sun.grizzly.standalone.StaticResourcesAdapter <init>
    INFO: New Servicing page from: /Users/arungupta/testbed/r126

The application is deployed and shows the output in the browser as shown below. And you can try more advanced applications like Mephisto as described here.

Technorati: ruby jruby glassfish v3 gem

Posted by Arun Gupta's Blog on January 30, 2008 at 02:31 PM PST #

Posted by Arun Gupta's Blog on February 07, 2008 at 08:33 PM PST #

Hi Arun, I can't get glassfish running multiple concurrent requests. Try this please:

    rails concurrent
    cd concurrent
    script/generate controller test test

edit app/controllers/test.rb

    class TestController < ApplicationController
      def test
        sleep 10
        @value = Time.now
      end
    end

edit app/views/test/test.test.html.erb

    <%= @value %>

edit config/database.yml

    production:
      adapter: jdbcsqlite3
      database: db/production.sqlite3
      pool: 5
      timeout: 5000

Now launch the glassfish gem in production mode:

    jruby -S glassfish -e production -n 4 --runtimes-min 4 --runtimes-max 4

Now I try to (concurrently) get the page 4 times: I receive the results queued (the last result after 40 seconds). It seems that the glassfish gem cannot handle multiple concurrent requests on multiple separate instances of jruby... What do you think about this issue? Maybe I'm doing something wrong?

Posted by Tex on August 23, 2009 at 07:39 PM PDT #
https://blogs.oracle.com/arungupta/entry/glassfish_v3_gem_updated
/* * mem.c - memory management * *. * */ #include "zsh.mdh" #include "mem. To be sure that they are freed at appropriate times one should call pushheap() before one starts using heaps and popheap() after that (when the memory allocated on the heaps since the last pushheap() isn't needed anymore). pushheap() saves the states of all currently allocated heaps and popheap() resets them to the last state saved and destroys the information about that state. If you called pushheap() and about the heap states (i.e. the heaps are like after the call to pushheap() and you have to call popheap some time later). Memory allocated in this way does not have to be freed explicitly; it will all be freed when the pool is destroyed. In fact, attempting to free this memory may result in a core dump.. hrealloc(char *p, size_t old, size_t new) is an optimisation with a similar interface to realloc(). Typically the new size will be larger than the old one, since there is no gain in shrinking the allocation (indeed, that will confused hrealloc() since it will forget that the unused space once belonged to this pointer). However, new == 0 is a special case; then if we had to allocate a special heap for this memory it is freed at that point. */ #if defined(HAVE_SYS_MMAN_H) && defined(HAVE_MMAP) && defined(HAVE_MUNMAP) #include <sys/mman.h> #if defined(MAP_ANONYMOUS) && defined(MAP_PRIVATE) #define USE_MMAP 1 #define MMAP_FLAGS (MAP_ANONYMOUS | MAP_PRIVATE) #endif #endif #ifdef ZSH_MEM_WARNING # ifndef DEBUG # define DEBUG 1 # endif #endif #if defined(ZSH_MEM) && defined(ZSH_MEM_DEBUG) static int h_m[1025], h_push, h_pop, h_free; #endif /* Make sure we align to the longest fundamental type. */ union mem_align { zlong l; double d; }; #define H_ISIZE sizeof(union mem_align) #define HEAPSIZE (16384 - H_ISIZE) /* Memory available for user data in default arena size */ #define HEAP_ARENA_SIZE (HEAPSIZE - sizeof(struct heap)) #define HEAPFREE (16384 - H_ISIZE) /* Memory available for user data in heap h */ #define ARENA_SIZEOF(h) ((h)->size - sizeof(struct heap)) /* list of zsh heaps */ static Heap heaps; /* a heap with free space, not always correct (it will be the last heap * if that was newly allocated but it may also be another one) */ static Heap fheap; /* Use new heaps from now on. This returns the old heap-list. */ /**/ mod_export Heap new_heaps(void) { Heap h; queue_signals(); h = heaps; fheap = heaps = NULL; unqueue_signals(); return h; } /* Re-install the old heaps again, freeing the new ones. */ /**/ mod_export void old_heaps(Heap old) { Heap h, n; queue_signals(); for (h = heaps; h; h = n) { n = h->next; DPUTS(h->sp, "BUG: old_heaps() with pushed heaps"); #ifdef USE_MMAP munmap((void *) h, h->size); #else zfree(h, HEAPSIZE); #endif } heaps = old; fheap = NULL; unqueue_signals(); } /* Temporarily switch to other heaps (or back again). 
*/ /**/ mod_export Heap switch_heaps(Heap new) { Heap h; queue_signals(); h = heaps; heaps = new; fheap = NULL; unqueue_signals(); return h; } /* save states of zsh heaps */ /**/ mod_export void pushheap(void) { Heap h; Heapstack hs; queue_signals(); #if defined(ZSH_MEM) && defined(ZSH_MEM_DEBUG) h_push++; #endif for (h = heaps; h; h = h->next) { DPUTS(!h->used, "BUG: empty heap"); hs = (Heapstack) zalloc(sizeof(*hs)); hs->next = h->sp; h->sp = hs; hs->used = h->used; } unqueue_signals(); } /* reset heaps to previous state */ /**/ mod_export void freeheap(void) { Heap h, hn, hl = NULL; queue_signals(); #if defined(ZSH_MEM) && defined(ZSH_MEM_DEBUG) h_free++; #endif fheap = NULL; for (h = heaps; h; h = hn) { hn = h->next; if (h->sp) { #ifdef ZSH_MEM_DEBUG memset(arena(h) + h->sp->used, 0xff, h->used - h->sp->used); #endif h->used = h->sp->used; if (!fheap && h->used < ARENA_SIZEOF(h)) fheap = h; hl = h; } else { #ifdef USE_MMAP munmap((void *) h, h->size); #else zfree(h, HEAPSIZE); #endif } } if (hl) hl->next = NULL; else heaps = NULL; unqueue_signals(); } /* reset heap to previous state and destroy state information */ /**/ mod_export void popheap(void) { Heap h, hn, hl = NULL; Heapstack hs; queue_signals(); #if defined(ZSH_MEM) && defined(ZSH_MEM_DEBUG) h_pop++; #endif fheap = NULL; for (h = heaps; h; h = hn) { hn = h->next; if ((hs = h->sp)) { h->sp = hs->next; #ifdef ZSH_MEM_DEBUG memset(arena(h) + hs->used, 0xff, h->used - hs->used); #endif h->used = hs->used; if (!fheap && h->used < ARENA_SIZEOF(h)) fheap = h; zfree(hs, sizeof(*hs)); hl = h; } else { #ifdef USE_MMAP munmap((void *) h, h->size); #else zfree(h, HEAPSIZE); #endif } } if (hl) hl->next = NULL; else heaps = NULL; unqueue_signals(); } #ifdef USE_MMAP /* * Utility function to allocate a heap area of at least *n bytes. * *n will be rounded up to the next page boundary. */ static Heap mmap_heap_alloc(size_t *n) { Heap h; static size_t pgsz = 0; if (!pgsz) { #ifdef _SC_PAGESIZE pgsz = sysconf(_SC_PAGESIZE); /* SVR4 */ #else # ifdef _SC_PAGE_SIZE pgsz = sysconf(_SC_PAGE_SIZE); /* HPUX */ # else pgsz = getpagesize(); # endif #endif pgsz--; } *n = (*n + pgsz) & ~pgsz; h = (Heap) mmap(NULL, *n, PROT_READ | PROT_WRITE, MMAP_FLAGS, -1, 0); if (h == ((Heap) -1)) { zerr("fatal error: out of heap memory"); exit(1); } return h; } #endif /* check whether a pointer is within a memory pool */ /**/ mod_export void * zheapptr(void *p) { Heap h; queue_signals(); for (h = heaps; h; h = h->next) if ((char *)p >= arena(h) && (char *)p + H_ISIZE < arena(h) + ARENA_SIZEOF(h)) break; unqueue_signals(); return (h ? p : 0); } /* allocate memory from the current memory pool */ /**/ mod_export void * zhalloc(size_t size) { Heap h; size_t n; size = (size + H_ISIZE - 1) & ~(H_ISIZE - 1); queue_signals(); #if defined(ZSH_MEM) && defined(ZSH_MEM_DEBUG) h_m[size < (1024 * H_ISIZE) ? (size / H_ISIZE) : 1024]++; #endif /* find a heap with enough free space */ for (h = ((fheap && ARENA_SIZEOF(fheap) >= (size + fheap->used)) ? fheap : heaps); h; h = h->next) { if (ARENA_SIZEOF(h) >= (n = size + h->used)) { void *ret; h->used = n; ret = arena(h) + n - size; unqueue_signals(); return ret; } } { Heap hp; /* not found, allocate new heap */ #if defined(ZSH_MEM) && !defined(USE_MMAP) static int called = 0; void *foo = called ? (void *)malloc(HEAPFREE) : NULL; /* tricky, see above */ #endif n = HEAP_ARENA_SIZE > size ? 
HEAPSIZE : size + sizeof(*h); for (hp = NULL, h = heaps; h; hp = h, h = h->next); #ifdef USE_MMAP h = mmap_heap_alloc(&n); #else h = (Heap) zalloc(n); #endif #if defined(ZSH_MEM) && !defined(USE_MMAP) if (called) zfree(foo, HEAPFREE); called = 1; #endif h->size = n; h->used = size; h->next = NULL; h->sp = NULL; if (hp) hp->next = h; else heaps = h; fheap = h; unqueue_signals(); return arena(h); } } /**/ mod_export void * hrealloc(char *p, size_t old, size_t new) { Heap h, ph; old = (old + H_ISIZE - 1) & ~(H_ISIZE - 1); new = (new + H_ISIZE - 1) & ~(H_ISIZE - 1); if (old == new) return p; if (!old && !p) return zhalloc(new); /* find the heap with p */ queue_signals(); for (h = heaps, ph = NULL; h; ph = h, h = h->next) if (p >= arena(h) && p < arena(h) + ARENA_SIZEOF(h)) break; DPUTS(!h, "BUG: hrealloc() called for non-heap memory."); DPUTS(h->sp && arena(h) + h->sp->used > p, "BUG: hrealloc() wants to realloc pushed memory"); /* * If the end of the old chunk is before the used pointer, * more memory has been zhalloc'ed afterwards. * We can't tell if that's still in use, obviously, since * that's the whole point of heap memory. * We have no choice other than to grab some more memory * somewhere else and copy in the old stuff. */ if (p + old < arena(h) + h->used) { if (new > old) { char *ptr = (char *) zhalloc(new); memcpy(ptr, p, old); #ifdef ZSH_MEM_DEBUG memset(p, 0xff, old); #endif unqueue_signals(); return ptr; } else { unqueue_signals(); return new ? p : NULL; } } DPUTS(p + old != arena(h) + h->used, "BUG: hrealloc more than allocated"); /* * We now know there's nothing afterwards in the heap, now see if * there's nothing before. Then we can reallocate the whole thing. * Otherwise, we need to keep the stuff at the start of the heap, * then allocate a new one too; this is handled below. (This will * guarantee we occupy a full heap next time round, provided we * don't use the heap for anything else.) */ if (p == arena(h)) { /* * Zero new seems to be a special case saying we've finished * with the specially reallocated memory, see scanner() in glob.c. */ if (!new) { if (ph) ph->next = h->next; else heaps = h->next; fheap = NULL; #ifdef USE_MMAP munmap((void *) h, h->size); #else zfree(h, HEAPSIZE); #endif unqueue_signals(); return NULL; } if (new > ARENA_SIZEOF(h)) { /* * Not enough memory in this heap. Allocate a new * one of sufficient size. * * To avoid this happening too often, allocate * chunks in multiples of HEAPSIZE. * (Historical note: there didn't used to be any * point in this since we didn't consistently record * the allocated size of the heap, but now we do.) */ size_t n = (new + sizeof(*h) + HEAPSIZE); n -= n % HEAPSIZE; fheap = NULL; #ifdef USE_MMAP { /* * I don't know any easy portable way of requesting * a mmap'd segment be extended, so simply allocate * a new one and copy. */ Heap hnew; hnew = mmap_heap_alloc(&n); /* Copy the entire heap, header (with next pointer) included */ memcpy(hnew, h, h->size); munmap((void *)h, h->size); h = hnew; } #else h = (Heap) realloc(h, n); #endif h->size = n; if (ph) ph->next = h; else heaps = h; } h->used = new; unqueue_signals(); return arena(h); } #ifndef USE_MMAP DPUTS(h->used > ARENA_SIZEOF(h), "BUG: hrealloc at invalid address"); #endif if (h->used + (new - old) <= ARENA_SIZEOF(h)) { h->used += new - old; unqueue_signals(); return p; } else { char *t = zhalloc(new); memcpy(t, p, old > new ? 
new : old); h->used -= old; #ifdef ZSH_MEM_DEBUG memset(p, 0xff, old); #endif unqueue_signals(); return t; } } /* allocate memory from the current memory pool and clear it */ /**/ mod_export void * hcalloc(size_t size) { void *ptr; ptr = zhalloc(size); memset(ptr, 0, size); return ptr; } /* allocate permanent memory */ /**/ mod_export void * zalloc(size_t size) { void *ptr; if (!size) size = 1; queue_signals(); if (!(ptr = (void *) malloc(size))) { zerr("fatal error: out of memory"); exit(1); } unqueue_signals(); return ptr; } /**/ mod_export void * zshcalloc(size_t size) { void *ptr; if (!size) size = 1; queue_signals(); if (!(ptr = (void *) malloc(size))) { zerr("fatal error: out of memory"); exit(1); } unqueue_signals(); memset(ptr, 0, size); return ptr; } /* This front-end to realloc is used to make sure we have a realloc * * that conforms to POSIX realloc. Older realloc's can fail if * * passed a NULL pointer, but POSIX realloc should handle this. A * * better solution would be for configure to check if realloc is * * POSIX compliant, but I'm not sure how to do that. */ /**/ mod_export void * zrealloc(void *ptr, size_t size) { queue_signals(); if (ptr) { if (size) { /* Do normal realloc */ if (!(ptr = (void *) realloc(ptr, size))) { zerr("fatal error: out of memory"); exit(1); } unqueue_signals(); return ptr; } else /* If ptr is not NULL, but size is zero, * * then object pointed to is freed. */ free(ptr); ptr = NULL; } else { /* If ptr is NULL, then behave like malloc */ ptr = malloc(size); } unqueue_signals(); return ptr; } /**/ #ifdef ZSH_MEM /* Below is a simple segment oriented memory allocator for systems on which it is better than the system's one. Memory is given in blocks aligned to an integer multiple of sizeof(union mem_align), which will probably be 64-bit as it is the longer of zlong or double. Each block is preceded by a header which contains the length of the data part (in bytes). In allocated blocks only this field of the structure m_hdr is senseful. In free blocks the second field (next) is a pointer to the next free segment on the free list. On top of this simple allocator there is a second allocator for small chunks of data. It should be both faster and less space-consuming than using the normal segment mechanism for such blocks. For the first M_NSMALL-1 possible sizes memory is allocated in arrays that can hold M_SNUM blocks. Each array is stored in one segment of the main allocator. In these segments the third field of the header structure (free) contains a pointer to the first free block in the array. The last field (used) gives the number of already used blocks in the array. If the macro name ZSH_MEM_DEBUG is defined, some information about the memory usage is stored. This information can than be viewed by calling the builtin `mem' (which is only available if ZSH_MEM_DEBUG is set). If ZSH_MEM_WARNING is defined, error messages are printed in case of errors. If ZSH_SECURE_FREE is defined, free() checks if the given address is really one that was returned by malloc(), it ignores it if it wasn't (printing an error message if ZSH_MEM_WARNING is also defined). 
*/ #if !defined(__hpux) && !defined(DGUX) && !defined(__osf__) # if defined(_BSD) # ifndef HAVE_BRK_PROTO extern int brk _((caddr_t)); # endif # ifndef HAVE_SBRK_PROTO extern caddr_t sbrk _((int)); # endif # else # ifndef HAVE_BRK_PROTO extern int brk _((void *)); # endif # ifndef HAVE_SBRK_PROTO extern void *sbrk _((int)); # endif # endif #endif #if defined(_BSD) && !defined(STDC_HEADERS) # define FREE_RET_T int # define FREE_ARG_T char * # define FREE_DO_RET # define MALLOC_RET_T char * # define MALLOC_ARG_T size_t #else # define FREE_RET_T void # define FREE_ARG_T void * # define MALLOC_RET_T void * # define MALLOC_ARG_T size_t #endif /* structure for building free list in blocks holding small blocks */ struct m_shdr { struct m_shdr *next; /* next one on free list */ #ifdef PAD_64_BIT /* dummy to make this 64-bit aligned */ struct m_shdr *dummy; #endif }; struct m_hdr { zlong len; /* length of memory block */ #if defined(PAD_64_BIT) && !defined(ZSH_64_BIT_TYPE) /* either 1 or 2 zlong's, whichever makes up 64 bits. */ zlong dummy1; #endif struct m_hdr *next; /* if free: next on free list if block of small blocks: next one with small blocks of same size*/ struct m_shdr *free; /* if block of small blocks: free list */ zlong used; /* if block of small blocks: number of used blocks */ #if defined(PAD_64_BIT) && !defined(ZSH_64_BIT_TYPE) zlong dummy2; #endif }; /* alignment for memory blocks */ #define M_ALIGN (sizeof(union mem_align)) /* length of memory header, length of first field of memory header and minimal size of a block left free (if we allocate memory and take a block from the free list that is larger than needed, it must have at least M_MIN extra bytes to be splitted; if it has, the rest is put on the free list) */ #define M_HSIZE (sizeof(struct m_hdr)) #if defined(PAD_64_BIT) && !defined(ZSH_64_BIT_TYPE) # define M_ISIZE (2*sizeof(zlong)) #else # define M_ISIZE (sizeof(zlong)) #endif #define M_MIN (2 * M_ISIZE) /* M_FREE is the number of bytes that have to be free before memory is * given back to the system * M_KEEP is the number of bytes that will be kept when memory is given * back; note that this has to be less than M_FREE * M_ALLOC is the number of extra bytes to request from the system */ #define M_FREE 32768 #define M_KEEP 16384 #define M_ALLOC M_KEEP /* a pointer to the last free block, a pointer to the free list (the blocks on this list are kept in order - lowest address first) */ static struct m_hdr *m_lfree, *m_free; /* system's pagesize */ static long m_pgsz = 0; /* the highest and the lowest valid memory addresses, kept for fast validity checks in free() and to find out if and when we can give memory back to the system */ static char *m_high, *m_low; /* Management of blocks for small blocks: Such blocks are kept in lists (one list for each of the sizes that are allocated in such blocks). The lists are stored in the m_small array. M_SIDX() calculates the index into this array for a given size. M_SNUM is the size (in small blocks) of such blocks. M_SLEN() calculates the size of the small blocks held in a memory block, given a pointer to the header of it. M_SBLEN() gives the size of a memory block that can hold an array of small blocks, given the size of these small blocks. M_BSLEN() calculates the size of the small blocks held in a memory block, given the length of that block (including the header of the memory block. M_NSMALL is the number of possible block sizes that small blocks should be used for. 
*/ #define M_SIDX(S) ((S) / M_ISIZE) #define M_SNUM 128 #define M_SLEN(M) ((M)->len / M_SNUM) #if defined(PAD_64_BIT) && !defined(ZSH_64_BIT_TYPE) /* Include the dummy in the alignment */ #define M_SBLEN(S) ((S) * M_SNUM + sizeof(struct m_shdr *) + \ 2*sizeof(zlong) + sizeof(struct m_hdr *)) #define M_BSLEN(S) (((S) - sizeof(struct m_shdr *) - \ 2*sizeof(zlong) - sizeof(struct m_hdr *)) / M_SNUM) #else #define M_SBLEN(S) ((S) * M_SNUM + sizeof(struct m_shdr *) + \ sizeof(zlong) + sizeof(struct m_hdr *)) #define M_BSLEN(S) (((S) - sizeof(struct m_shdr *) - \ sizeof(zlong) - sizeof(struct m_hdr *)) / M_SNUM) #endif #define M_NSMALL 8 static struct m_hdr *m_small[M_NSMALL]; #ifdef ZSH_MEM_DEBUG static int m_s = 0, m_b = 0; static int m_m[1025], m_f[1025]; static struct m_hdr *m_l; #endif /* ZSH_MEM_DEBUG */ MALLOC_RET_T malloc(MALLOC_ARG_T size) { struct m_hdr *m, *mp, *mt; long n, s, os = 0; #ifndef USE_MMAP struct heap *h, *hp, *hf = NULL, *hfp = NULL; #endif /* some systems want malloc to return the highest valid address plus one if it is called with an argument of zero. TODO: really? Suppose we allocate more memory, so that this is now in bounds, then a more rational application that thinks it can free() anything it malloc'ed, even of zero length, calls free for it? Aren't we in big trouble? Wouldn't it be safer just to allocate some memory anyway? If the above comment is really correct, then at least we need to check in free() if we're freeing memory at m_high. */ if (!size) #if 1 size = 1; #else return (MALLOC_RET_T) m_high; #endif queue_signals(); /* just queue signals rather than handling them */ /* first call, get page size */ if (!m_pgsz) { #ifdef _SC_PAGESIZE m_pgsz = sysconf(_SC_PAGESIZE); /* SVR4 */ #else # ifdef _SC_PAGE_SIZE m_pgsz = sysconf(_SC_PAGE_SIZE); /* HPUX */ # else m_pgsz = getpagesize(); # endif #endif m_free = m_lfree = NULL; } size = (size + M_ALIGN - 1) & ~(M_ALIGN - 1); /* Do we need a small block? 
*/ if ((s = M_SIDX(size)) && s < M_NSMALL) { /* yep, find a memory block with free small blocks of the appropriate size (if we find it in this list, this means that it has room for at least one more small block) */ for (mp = NULL, m = m_small[s]; m && !m->free; mp = m, m = m->next); if (m) { /* we found one */ struct m_shdr *sh = m->free; m->free = sh->next; m->used++; /* if all small blocks in this block are allocated, the block is put at the end of the list blocks with small blocks of this size (i.e., we try to keep blocks with free blocks at the beginning of the list, to make the search faster) */ if (m->used == M_SNUM && m->next) { for (mt = m; mt->next; mt = mt->next); mt->next = m; if (mp) mp->next = m->next; else m_small[s] = m->next; m->next = NULL; } #ifdef ZSH_MEM_DEBUG m_m[size / M_ISIZE]++; #endif unqueue_signals(); return (MALLOC_RET_T) sh; } /* we still want a small block but there were no block with a free small block of the requested size; so we use the real allocation routine to allocate a block for small blocks of this size */ os = size; size = M_SBLEN(size); } else s = 0; /* search the free list for an block of at least the requested size */ for (mp = NULL, m = m_free; m && m->len < size; mp = m, m = m->next); #ifndef USE_MMAP /* if there is an empty zsh heap at a lower address we steal it and take the memory from it, putting the rest on the free list (remember that the blocks on the free list are ordered) */ for (hp = NULL, h = heaps; h; hp = h, h = h->next) if (!h->used && (!hf || h < hf) && (!m || ((char *)m) > ((char *)h))) hf = h, hfp = hp; if (hf) { /* we found such a heap */ Heapstack hso, hsn; /* delete structures on the list holding the heap states */ for (hso = hf->sp; hso; hso = hsn) { hsn = hso->next; zfree(hso, sizeof(*hso)); } /* take it from the list of heaps */ if (hfp) hfp->next = hf->next; else heaps = hf->next; /* now we simply free it and than search the free list again */ zfree(hf, HEAPSIZE); for (mp = NULL, m = m_free; m && m->len < size; mp = m, m = m->next); } #endif if (!m) { long nal; /* no matching free block was found, we have to request new memory from the system */ n = (size + M_HSIZE + M_ALLOC + m_pgsz - 1) & ~(m_pgsz - 1); if (((char *)(m = (struct m_hdr *)sbrk(n))) == ((char *)-1)) { DPUTS1(1, "MEM: allocation error at sbrk, size %L.", n); unqueue_signals(); return NULL; } if ((nal = ((long)(char *)m) & (M_ALIGN-1))) { if ((char *)sbrk(M_ALIGN - nal) == (char *)-1) { DPUTS(1, "MEM: allocation error at sbrk."); unqueue_signals(); return NULL; } m = (struct m_hdr *) ((char *)m + (M_ALIGN - nal)); } /* set m_low, for the check in free() */ if (!m_low) m_low = (char *)m; #ifdef ZSH_MEM_DEBUG m_s += n; if (!m_l) m_l = m; #endif /* save new highest address */ m_high = ((char *)m) + n; /* initialize header */ m->len = n - M_ISIZE; m->next = NULL; /* put it on the free list and set m_lfree pointing to it */ if ((mp = m_lfree)) m_lfree->next = m; m_lfree = m; } if ((n = m->len - size) > M_MIN) { /* the block we want to use has more than M_MIN bytes plus the number of bytes that were requested; we split it in two and leave the rest on the free list */ struct m_hdr *mtt = (struct m_hdr *)(((char *)m) + M_ISIZE + size); mtt->len = n - M_ISIZE; mtt->next = m->next; m->len = size; /* put the rest on the list */ if (m_lfree == m) m_lfree = mtt; if (mp) mp->next = mtt; else m_free = mtt; } else if (mp) { /* the block we found wasn't the first one on the free list */ if (m == m_lfree) m_lfree = mp; mp->next = m->next; } else { /* it was the first one */ 
m_free = m->next; if (m == m_lfree) m_lfree = m_free; } if (s) { /* we are allocating a block that should hold small blocks */ struct m_shdr *sh, *shn; /* build the free list in this block and set `used' filed */ m->free = sh = (struct m_shdr *)(((char *)m) + sizeof(struct m_hdr) + os); for (n = M_SNUM - 2; n--; sh = shn) shn = sh->next = sh + s; sh->next = NULL; m->used = 1; /* put the block on the list of blocks holding small blocks if this size */ m->next = m_small[s]; m_small[s] = m; #ifdef ZSH_MEM_DEBUG m_m[os / M_ISIZE]++; #endif unqueue_signals(); return (MALLOC_RET_T) (((char *)m) + sizeof(struct m_hdr)); } #ifdef ZSH_MEM_DEBUG m_m[m->len < (1024 * M_ISIZE) ? (m->len / M_ISIZE) : 1024]++; #endif unqueue_signals(); return (MALLOC_RET_T) & m->next; } /* this is an internal free(); the second argument may, but need not hold the size of the block the first argument is pointing to; if it is the right size of this block, freeing it will be faster, though; the value 0 for this parameter means: `don't know' */ /**/ mod_export void zfree(void *p, int sz) { struct m_hdr *m = (struct m_hdr *)(((char *)p) - M_ISIZE), *mp, *mt = NULL; int i; # ifdef DEBUG int osz = sz; # endif #ifdef ZSH_SECURE_FREE sz = 0; #else sz = (sz + M_ALIGN - 1) & ~(M_ALIGN - 1); #endif if (!p) return; /* first a simple check if the given address is valid */ if (((char *)p) < m_low || ((char *)p) > m_high || ((long)p) & (M_ALIGN - 1)) { DPUTS(1, "BUG: attempt to free storage at invalid address"); return; } queue_signals(); fr_rec: if ((i = sz / M_ISIZE) < M_NSMALL || !sz) /* if the given sizes says that it is a small block, find the memory block holding it; we search all blocks with blocks of at least the given size; if the size parameter is zero, this means, that all blocks are searched */ for (; i < M_NSMALL; i++) { for (mp = NULL, mt = m_small[i]; mt && (((char *)mt) > ((char *)p) || (((char *)mt) + mt->len) < ((char *)p)); mp = mt, mt = mt->next); if (mt) { /* we found the block holding the small block */ struct m_shdr *sh = (struct m_shdr *)p; #ifdef ZSH_SECURE_FREE struct m_shdr *sh2; /* check if the given address is equal to the address of the first small block plus an integer multiple of the block size */ if ((((char *)p) - (((char *)mt) + sizeof(struct m_hdr))) % M_BSLEN(mt->len)) { DPUTS(1, "BUG: attempt to free storage at invalid address"); unqueue_signals(); return; } /* check, if the address is on the (block-intern) free list */ for (sh2 = mt->free; sh2; sh2 = sh2->next) if (((char *)p) == ((char *)sh2)) { DPUTS(1, "BUG: attempt to free already free storage"); unqueue_signals(); return; } #endif DPUTS(M_BSLEN(mt->len) < osz, "BUG: attempt to free more than allocated."); #ifdef ZSH_MEM_DEBUG m_f[M_BSLEN(mt->len) / M_ISIZE]++; memset(sh, 0xff, M_BSLEN(mt->len)); #endif /* put the block onto the free list */ sh->next = mt->free; mt->free = sh; if (--mt->used) { /* if there are still used blocks in this block, we put it at the beginning of the list with blocks holding small blocks of the same size (since we know that there is at least one free block in it, this will make allocation of small blocks faster; it also guarantees that long living memory blocks are preferred over younger ones */ if (mp) { mp->next = mt->next; mt->next = m_small[i]; m_small[i] = mt; } unqueue_signals(); return; } /* if there are no more used small blocks in this block, we free the whole block */ if (mp) mp->next = mt->next; else m_small[i] = mt->next; m = mt; p = (void *) & m->next; break; } else if (sz) { /* if we didn't find a block 
and a size was given, try it again as if no size were given */ sz = 0; goto fr_rec; } } #ifdef ZSH_MEM_DEBUG if (!mt) m_f[m->len < (1024 * M_ISIZE) ? (m->len / M_ISIZE) : 1024]++; #endif #ifdef ZSH_SECURE_FREE /* search all memory blocks, if one of them is at the given address */ for (mt = (struct m_hdr *)m_low; ((char *)mt) < m_high; mt = (struct m_hdr *)(((char *)mt) + M_ISIZE + mt->len)) if (((char *)p) == ((char *)&mt->next)) break; /* no block was found at the given address */ if (((char *)mt) >= m_high) { DPUTS(1, "BUG: attempt to free storage at invalid address"); unqueue_signals(); return; } #endif /* see if the block is on the free list */ for (mp = NULL, mt = m_free; mt && mt < m; mp = mt, mt = mt->next); if (m == mt) { /* it is, ouch! */ DPUTS(1, "BUG: attempt to free already free storage"); unqueue_signals(); return; } DPUTS(m->len < osz, "BUG: attempt to free more than allocated"); #ifdef ZSH_MEM_DEBUG memset(p, 0xff, m->len); #endif if (mt && ((char *)mt) == (((char *)m) + M_ISIZE + m->len)) { /* the block after the one we are freeing is free, we put them together */ m->len += mt->len + M_ISIZE; m->next = mt->next; if (mt == m_lfree) m_lfree = m; } else m->next = mt; if (mp && ((char *)m) == (((char *)mp) + M_ISIZE + mp->len)) { /* the block before the one we are freeing is free, we put them together */ mp->len += m->len + M_ISIZE; mp->next = m->next; if (m == m_lfree) m_lfree = mp; } else if (mp) /* otherwise, we just put it on the free list */ mp->next = m; else { m_free = m; if (!m_lfree) m_lfree = m_free; } /* if the block we have just freed was at the end of the process heap and now there is more than one page size of memory, we can give it back to the system (and we do it ;-) */ if ((((char *)m_lfree) + M_ISIZE + m_lfree->len) == m_high && m_lfree->len >= m_pgsz + M_MIN + M_FREE) { long n = (m_lfree->len - M_MIN - M_KEEP) & ~(m_pgsz - 1); m_lfree->len -= n; #ifdef HAVE_BRK if (brk(m_high -= n) == -1) { #else m_high -= n; if (sbrk(-n) == (void *)-1) { #endif /* HAVE_BRK */ DPUTS(1, "MEM: allocation error at brk."); } #ifdef ZSH_MEM_DEBUG m_b += n; #endif } unqueue_signals(); } FREE_RET_T free(FREE_ARG_T p) { zfree(p, 0); /* 0 means: size is unknown */ #ifdef FREE_DO_RET return 0; ) { struct m_hdr *m = (struct m_hdr *)(((char *)p) - M_ISIZE), *mp, *mt; char *r; int i, l = 0; /* some system..., see above */ if (!p && size) return (MALLOC_RET_T) malloc(size); /* and some systems even do this... */ if (!p || !size) return (MALLOC_RET_T) p; queue_signals(); /* just queue signals caught rather than handling them */ /* check if we are reallocating a small block, if we do, we have to compute the size of the block from the sort of block it is in */ for (i = 0; i < M_NSMALL; i++) { for (mp = NULL, mt = m_small[i]; mt && (((char *)mt) > ((char *)p) || (((char *)mt) + mt->len) < ((char *)p)); mp = mt, mt = mt->next); if (mt) { l = M_BSLEN(mt->len); break; } } if (!l) /* otherwise the size of the block is in the memory just before the given address */ l = m->len; /* now allocate the new block, copy the old contents, and free the old block */ r = malloc(size); memcpy(r, (char *)p, (size > l) ? 
l : size); free(p); unqueue_signals(); return (MALLOC_RET_T) r; } MALLOC_RET_T calloc(MALLOC_ARG_T n, MALLOC_ARG_T size) { long l; char *r; if (!(l = n * size)) return (MALLOC_RET_T) m_high; r = malloc(l); memset(r, 0, l); return (MALLOC_RET_T) r; } #ifdef ZSH_MEM_DEBUG /**/ int bin_mem(char *name, char **argv, Options ops, int func) { int i, ii, fi, ui, j; struct m_hdr *m, *mf, *ms; char *b, *c, buf[40]; long u = 0, f = 0, to, cu; queue_signals(); if (OPT_ISSET(ops,'v')) { printf("The lower and the upper addresses of the heap. Diff gives\n"); printf("the difference between them, i.e. the size of the heap.\n\n"); } printf("low mem %ld\t high mem %ld\t diff %ld\n", (long)m_l, (long)m_high, (long)(m_high - ((char *)m_l))); if (OPT_ISSET(ops,'v')) { printf("\nThe number of bytes that were allocated using sbrk() and\n"); printf("the number of bytes that were given back to the system\n"); printf("via brk().\n"); } printf("\nsbrk %d\tbrk %d\n", m_s, m_b); if (OPT_ISSET(ops,'v')) { printf("\nInformation about the sizes that were allocated or freed.\n"); printf("For each size that were used the number of mallocs and\n"); printf("frees is shown. Diff gives the difference between these\n"); printf("values, i.e. the number of blocks of that size that is\n"); printf("currently allocated. Total is the product of size and diff,\n"); printf("i.e. the number of bytes that are allocated for blocks of\n"); printf("this size. The last field gives the accumulated number of\n"); printf("bytes for all sizes.\n"); } printf("\nsize\tmalloc\tfree\tdiff\ttotal\tcum\n"); for (i = 0, cu = 0; i < 1024; i++) if (m_m[i] || m_f[i]) { to = (long) i * M_ISIZE * (m_m[i] - m_f[i]); printf("%ld\t%d\t%d\t%d\t%ld\t%ld\n", (long)i * M_ISIZE, m_m[i], m_f[i], m_m[i] - m_f[i], to, (cu += to)); } if (m_m[i] || m_f[i]) printf("big\t%d\t%d\t%d\n", m_m[i], m_f[i], m_m[i] - m_f[i]); if (OPT_ISSET(ops,'v')) { printf("\nThe list of memory blocks. For each block the following\n"); printf("information is shown:\n\n"); printf("num\tthe number of this block\n"); printf("tnum\tlike num but counted separately for used and free\n"); printf("\tblocks\n"); printf("addr\tthe address of this block\n"); printf("len\tthe length of the block\n"); printf("state\tthe state of this block, this can be:\n"); printf("\t used\tthis block is used for one big block\n"); printf("\t free\tthis block is free\n"); printf("\t small\tthis block is used for an array of small blocks\n"); printf("cum\tthe accumulated sizes of the blocks, counted\n"); printf("\tseparately for used and free blocks\n"); printf("\nFor blocks holding small blocks the number of free\n"); printf("blocks, the number of used blocks and the size of the\n"); printf("blocks is shown. For otherwise used blocks the first few\n"); printf("bytes are shown as an ASCII dump.\n"); } printf("\nblock list:\nnum\ttnum\taddr\t\tlen\tstate\tcum\n"); for (m = m_l, mf = m_free, ii = fi = ui = 1; ((char *)m) < m_high; m = (struct m_hdr *)(((char *)m) + M_ISIZE + m->len), ii++) { for (j = 0, ms = NULL; j < M_NSMALL && !ms; j++) for (ms = m_small[j]; ms; ms = ms->next) if (ms == m) break; if (m == mf) buf[0] = '\0'; else if (m == ms) sprintf(buf, "%ld %ld %ld", (long)(M_SNUM - ms->used), (long)ms->used, (long)(m->len - sizeof(struct m_hdr)) / M_SNUM + 1); else { for (i = 0, b = buf, c = (char *)&m->next; i < 20 && i < m->len; i++, c++) *b++ = (*c >= ' ' && *c < 127) ? *c : '.'; *b = '\0'; } printf("%d\t%d\t%ld\t%ld\t%s\t%ld\t%s\n", ii, (m == mf) ? fi++ : ui++, (long)m, (long)m->len, (m == mf) ? 
"free" : ((m == ms) ? "small" : "used"), (m == mf) ? (f += m->len) : (u += m->len), buf); if (m == mf) mf = mf->next; } if (OPT_ISSET(ops,'v')) { printf("\nHere is some information about the small blocks used.\n"); printf("For each size the arrays with the number of free and the\n"); printf("number of used blocks are shown.\n"); } printf("\nsmall blocks:\nsize\tblocks (free/used)\n"); for (i = 0; i < M_NSMALL; i++) if (m_small[i]) { printf("%ld\t", (long)i * M_ISIZE); for (ii = 0, m = m_small[i]; m; m = m->next) { printf("(%ld/%ld) ", (long)(M_SNUM - m->used), (long)m->used); if (!((++ii) & 7)) printf("\n\t"); } putchar('\n'); } if (OPT_ISSET(ops,'v')) { printf("\n\nBelow is some information about the allocation\n"); printf("behaviour of the zsh heaps. First the number of times\n"); printf("pushheap(), popheap(), and freeheap() were called.\n"); } printf("\nzsh heaps:\n\n"); printf("push %d\tpop %d\tfree %d\n\n", h_push, h_pop, h_free); if (OPT_ISSET(ops,'v')) { printf("\nThe next list shows for several sizes the number of times\n"); printf("memory of this size were taken from heaps.\n\n"); } printf("size\tmalloc\ttotal\n"); for (i = 0; i < 1024; i++) if (h_m[i]) printf("%ld\t%d\t%ld\n", (long)i * H_ISIZE, h_m[i], (long)i * H_ISIZE * h_m[i]); if (h_m[1024]) printf("big\t%d\n", h_m[1024]); unqueue_signals(); return 0; } #endif /**/ #else /* not ZSH_MEM */ /**/ mod_export void zfree(void *p, UNUSED(int sz)) { if (p) free(p); } /**/ mod_export void zsfree(char *p) { if (p) free(p); } /**/ #endif
http://opensource.apple.com/source/zsh/zsh-55/zsh/Src/mem.c
Data items Intro Data items is a way to document all OSM metadata like keys and tags in every language on this wiki in a structured way, useful to both humans and tools. - Tools, such as iD editor and Taginfo are now able to get tag information without complex and error-prone parsing of the wiki markup. Eventually the data may include tag suggestions, validation rules, common pitfalls, presets, and more. - Data consumers are able to get structured metadata to help process main OSM database. - This wiki can now show data as info cards and tables, without information duplication and complicated template hackery. - All metadata can be analyzed using Sophox queries (see query examples). This page documents how to store structured tag metadata on this wiki using data items provided by the Wikibase extension - the same software that runs Wikidata (initial discussion). This project's goal is NOT to replace the primary tag storage for the OSM database, nor to use opaque IDs instead of the human readable key=value strings to tag features. We are only trying to improve metadata documentation, making it more useful to various tools. How can I help? - Add tag descriptions and translations. See the following 3 minute video. - Add descriptions and translations - Most used keys without description in any language - Show the most used keys that have not been translated to a given language (edit query and change the language code to run for your language) - Community and content - Set up a wiki portal, possibly similar to Wikidata's community portal (but simpler), where community can: - propose new properties - write guidelines/docs - discuss Wikibase data structures - Create Lua modules to generate tag tables, such as {{Template:Bridge:movable}}, {{Map Features:highway}}, or {{Template:Religions}}. - Implementation note: Wikibase only links Tags to the corresponding Key, but Keys do not list all possible Tags. To generate a table, we must have a list of items somewhere. We could create a new WB key property that lists all tags, and use a bot to maintain it, or we could list all needed tags as a template parameter, e.g. for highway, {{...|motorway|trunk|primary|secondary|...}}. List as a template parameter does not need to be localized, and it could specify proper ordering of items (not available in WB). Lua code would use mw.wikibase.getEntityIdForTitle("Key:highway=motorway")to find the right data. - Technical - Add Wikibase support to external tools. Simple usage: get key/tag localized description. Complex usage: allow user to add missing or even edit description, especially when user is creating a new key. - Port simple validation rules, e.g. regex-based, to use Wikibase data. - Help parse various tables of tag data. Even if you can only generate plain files with data, user:Yurik can quickly import them. - tasks in progress - Change {{RelationDescription}} to get data from the Wikibase, similar to {{KeyDescription}} is. (being worked on by @Yurik) - done! Add helper templates, e.g. {{O|Q2}} (link to tag (Q2)), {{Label|Q2}} (label of the tag (Q2)). See also Wikidata's Q, label, and other similar templates. Ideally we should have exactly the same functionality, except that we may need to have different template names.Thanks @Teester!!! Create {{Desc|Q2}} (description of the tag (Q2)) templateThanks @Teester!!! Change {{KeyDescription}} and {{ValueDescription}} to get data from the Wikibase. (@Yurik) Tag Keys Each OSM Key is stored as a separate page in the Item namespace. 
For example, see bridge:movable (Q104) that describes a bridge:movable=*: Tag values For keys like Key:highway, there is a list of the well-known values such as highway=residential, highway=service, highway=footway. These values are stored similarly to keys. See bridge:movable=bascule (Q888) that describes a bridge:movable=bascule. See all items that link to bridge:movable. Tags may also use use on nodes (P33), use on ways (P34), use on areas (P35), use on relations (P36), image (P28), group (P25), status (P6), value validation regex (P13), documentation wiki pages (P31). See their description in Tag Key section above. Relations Similar to keys and tags, here is an example restriction relation (Q16054) copied from Relation:restriction. Relations may also use image (P28), group (P25), status (P6), documentation wiki pages (P31). See their description in the Tag Key section above. Relation Roles Members of the relation could be labeled with "roles", e.g. "inner" and "outer" ways in the multipolygon relation. Each role for each relation type has its own data item. Example for boundary=admin_centre (Q16060). Relation member roles may also use use on nodes (P33), use on ways (P34), use on areas (P35), use on relations (P36), image (P28), group (P25), status (P6), documentation wiki pages (P31). See their description in the Tag Key section above. Storing Geographical Differences A phone booth looks very different depending on the geographical region, e.g. a country. To indicate that an image, or any other value of the data item is specific to a location, use limited to region (P48) qualifier with a geographical region item. A geographical region item is a data item with the instance of (P2) = geographic region (Q19531), and it contains a geographic code (P49) property set to one or more country codes. The limited to region (P48) qualifier should eventually replace the limited to language (P26). Storing Locale Differences Most translated Key:... and Tag:... pages tend to have mismatching parameters like status, group, or the types of elements it should be used on. While some were deliberate results after a careful local community evaluation (see noexit (Q501)), many other cases are simply stale and need to be fixed, or possibly removed from the template's parameters to let it use the underlying data item. All locale differences are stored using limited to language (P26). The value with no qualifiers is the default. It should have the preferred rank, but it is OK to keep normal rank when there are no other values for the property. All language-specific values must use limited to language (P26) and have normal rank. Each value must be used only once, possibly with multiple qualifier values (e.g. a property access:lhv (Q33) can have only one is allowed (Q8000) and one is prohibited (Q8001)). Each language qualifier can only be used once for the whole property. Language must not be listed if it is the same as the default. If there is no value without qualifiers, it means that the default is not set (e.g. English page has no onRelation= parameter). 
Meta item There are many data items which are neither a Key nor a Tag: - OSM Concepts - element (Q9), key (Q7), tag (Q2), status (Q11), group (Q12) - Statuses of type status (Q11)) - de facto (Q13), in use (Q14), approved (Q15), rejected (Q16), voting (Q17), draft (Q18), abandoned (Q19), proposed (Q20), obsolete (Q5060), deprecated (Q5061), discardable (Q7550) - Statuses of type element status (Q8010) - is allowed (Q8000), is prohibited (Q8001) - Special - OSM concept (Q10), sandbox item (Q2761) Item Creation Process A bot has created all significantly used keys and tags, and will continue creating these items when they are detected in the OSM database (taginfo API) or on the wiki. The bot will: - create an item for any key with 10+ usages if it matches ^[a-z0-9]+([-:_\.][a-z0-9]+)*$, or for any 1000+ usages regardless of the key syntax (see talk page) - set item's label to be the same as the key - set item's description from the corresponding wiki page's info card (if available, from all languages) - set used-by, recommended tags, implies, and any other easy-to-figure-out data from the info cards. - will NOT update any fields modified by a user, e.g. if description in FR has been changed by a user, it should not be changed by the bot. Eventually, it would be better for OSM tools (iD, JOSM, ...) to ask the user for the metadata, and use MW API to create new items. API access and querying - The easiest way for an external tool to get all the data about a key is to use this API call: - - Use languagesto filter labels and descriptions to the needed languages. - Add &format=json&formatversion=2to get the actual JSON instead of HTML. - Due to MediaWiki limitations, the titlesvalue should be ("Key:" + key).replace('_', ' ').trim(). Use permanent key ID (P16) to get the actual format of the key. Make sure to get the "preferred" value, just in case more than one value is present. - Use Sophox to query metadata. There are some metadata-specific examples. Quality Control There are several additional extensions designed to validate Wikibase data, and find items that do not pass validation. Installing such capabilities may not be done in the first deployment stage. Limitations - Wikibase's "Commons File" properties do not yet support files stored on this wiki. Instead, we use a regular string property to store the image name, and use a gadget (see your preferences) to show strings as images. - The sitelink in the upper right corner does not show whether the Tag:* or a Key:* page exists or not. - All sitelinks must use spaces instead of underscores. API sitelink search does not work otherwise. See permanent key ID (P16) and permanent tag ID (P19) for the correct value. Note that regular Mediawiki Key:* and Tag:* pages have the same issue, and use a special hack to change the title. - MediaWiki removes spaces/underscores from the key, so Key:_abc_ would become Key: abc. There are no way to have two items with sitelinks Key:_abc and Key:_abc_ -- they are treated as the same, and fail. See also - sandbox item (Q2761) is a sandbox item - feel free to make any changes to it. - All available properties - technical site configuration details - Wikibase Registry has an entry for OSM Wiki as Item Q26. - OSM Semantic Network
https://wiki.openstreetmap.org/wiki/OpenStreetMap:Wikibase
appindicator ignores menu entries after having sent the menu to the indicator Bug Description Impact: some indicator menus are incomplete Test Case: see comment #8 Regression potential: Check that your menus still work correctly, the change is only impacting gtk2 and renaming from a signal that was dropped to an existant one, it shouldn't create any issue ------- On Saucy, the glipper appindicator shows but none of its menu items show up when left or right-clicking on it. $ glipper SHARED_DATA_DIR: /usr/share/glipper Binding shortcut <Ctrl><Alt>c to popup glipper Changed process name to: glipper /usr/lib/ self. (glipper:20249): LIBDBUSMENU- ProblemType: Bug DistroRelease: Ubuntu 13.10 Package: glipper 2.4-3 ProcVersionSign Uname: Linux 3.10.0-4-generic x86_64 ApportVersion: 2.11-0ubuntu1 Architecture: amd64 Date: Mon Jul 22 18:17:16 2013 InstallationDate: Installed on 2013-06-14 (38 days ago) InstallationMedia: Ubuntu-GNOME 13.10 "Saucy Salamander" - Alpha amd64 (20130613) MarkForUpload: True PackageArchitec SourcePackage: glipper UpgradeStatus: No upgrade log present (probably fresh install) Related branches - PS Jenkins bot (community): Approve (continuous-integration) on 2013-11-04 - Ted Gould (community): Approve on 2013-11-04 - Diff: 35 lines (+0/-8)1 file modifiedlibdbusmenu-gtk/parser.c (+0/-8) - PS Jenkins bot (community): Approve (continuous-integration) on 2013-11-04 - Ted Gould (community): Approve on 2013-11-04 - Diff: 35 lines (+0/-8)1 file modifiedlibdbusmenu-gtk/parser.c (+0/-8) Status changed to 'Confirmed' because the bug affects multiple users. Identical problem with blueman-applet: __load_plugin (/usr/lib/ /usr/lib/ self. _________ Status changed to 'Confirmed' because the bug affects multiple users. Actually it looks a lot like python-appindicator is broken. Following code demonstrates the bug: #!/usr/bin/env python import gtk import appindicator m = gtk.Menu() m.append( a = appindicator. a.set_status( a.set_menu(m) gtk.main() otoh, indicator- Please ignore previous example code. It looks like what is happening is that glipper and blueman are trying to add menu entries after sending the menu to the indicator, which doesn't work any more in Saucy. The following code will display both menu items in 12.04 but in Saucy only "One" is displayed: #!/usr/bin/env python import gtk import appindicator m = gtk.Menu() i = gtk.MenuItem('One') i.show() m.append(i) a = appindicator. a.set_status( a.set_menu(m) j = gtk.MenuItem('Two') j.show() m.append(j) gtk.main() Status changed to 'Confirmed' because the bug affects multiple users. Has anyone found any sort of workaround for this bug to get indicators working until there's a fix? I've been using "xfce4-panel". You can have it float, or dock it to the side of the screen. It also does other stuff (task switcher, etc), but I just removed all that and put the indicator area widget on it. As long as you start it after Unity it works fine. (The last one to start takes control of the notification area.) JFI: for Glipper, GlipIt is a good alternative. The blueman-applet didn't work for me as explained above. I 'autoremoved' the python- I went further and autoremoved python-gconf, which was removing python-gnome2-doc too, but no other packages. Now everything ist fine for me. If there are other users who have python-appindicator and python-gconf installed, but no other applications depending on them, this might be a temporary solution until the real problem is solved. I've get this bug with own gtk application after upgrade to 13.10. 
This API change/bug seems to affect all indicators that change their menus after creating and showing them for the first time. The feedindicator (https:/ I think it's a problem in python-appindicator or even in glib. Nagstamon (http:// For application developers, I think a work-around to this problem might be simply to call indicator. Calling set_menu() again does not work. The message about "child-added" actually comes from libdbusmenu - none of the other involved packages contain this string at all. I just noticed that libdbusmenu produces the following warning while building: l /usr/bin/vapigen --library= Dbusmenu- < ^ Dbusmenu- < ^ Dbusmenu- < ^ Dbusmenu- < ^ Dbusmenu- < ^ Generation succeeded - 5 warning(s) However, I think this is a red herring because removing menu items works as expected - which means the "child-removed" signal which is also mentioned in the warning is working Ok. I have tried downgrading all the libappindicator and libdbusmenu packages down to the raring versions and the bug is still present, which means it must be a problem with gtk/glib. I don't know if this helps, but if you install blueman from the debian tree it works just fine, just the ubuntu tree one that messes. Some progress: In libdbusmenu/ http:// - "child-added", - G_CALLBACK (child_added_cb), +#ifdef HAVE_GTK3 + "insert", +#else + "child-added" +#endif + G_CALLBACK (item_inserted_cb), - "child-removed", - G_CALLBACK (child_removed_cb), + "remove", + G_CALLBACK (item_removed_cb), Notice in particular that "child-added" has a check which produces a different result depending on gtk2/gtk3, but "child-removed" was simple replaced by "remove". If I change "child-added" to "something-else", the error message produced changes: ./test- However, if I change "child-added" to "insert" (so that it matches the GTK3 version, as with the "remove" signal) then the test case segfaults. Ok, got it working. It turns out that there are some more GTK3 tests in the same file which modify the argument list for the item_inserted_cb. It is necessary to remove all the tests so that only the GTK3 versions are present regardless of whether we are actually using GTK3 or not. Failure to do this caused the callback to be called with the wrong args, causing the segfault. What I still don't understand: * Why "insert" was special cased, but "remove" was not. * How did this code ever work before. * What changed to make it stop working. PPA available shortly: https:/ Use at your own risk. I DO NOT understand why this works :( So, libdbusmenu cannot be built in a PPA because of https:/ If you want to test, you can build my changes like this: sudo apt-get build-dep libdbusmenu wget https:/ wget https:/ wget https:/ dpkg-source -x libdbusmenu*.dsc cd libdbusmenu* dpkg-buildpackage This is the commit where the "insert" signal was backported from GTK3 to GTK2: https:/ This explains why the fix works. However, it still doesn't explain what happened to "child-added". Hey Lars, could you have a look to this issue? I think you had a look at those GTK patches in saucy... did we drop some as not-needed? @seb128: thanks for that clue: you are right, and this fills in all the pieces. In raring the "child-added" signal is added by the Ubuntu specific patch 072_indicator_ In saucy this patch is gone because upstream added the equivalent "insert" signal, but libdbusmenu was not updated to match the new signal name. Thank for the work. Did the new signal got backported to the old gtk? Or should we patch gtk as well? 
Yes, the new signal was backported and it is in saucy already- that is why my fix worked. What I missed is that "child-added" was an Ubuntu-specific thing, which is why I could never find it in the source - it's simply gone in saucy. See the merge proposal I just submitted for the tl;dr summary of all this. Fix committed into lp:libdbusmenu at revision 462, scheduled for release in libdbusmenu, milestone Unknown Fix committed into lp:libdbusmenu/13.10 at revision 462, scheduled for release in libdbusmenu, milestone Unknown Hello Jeremy, or anyone else affected, Accepted into saucy-proposed. The package will build now and be available in a few hours in the -proposed repository.:/ Commit Log for Tue Nov 12 18:26:58 2013 Upgraded the following packages: gir1.2- gir1.2- libdbusmenu-glib4 (12.10. libdbusmenu-gtk3-4 (12.10. libdbusmenu-gtk4 (12.10. My issue was resolved by this proposed package. Thank you! This bug was fixed in the package libdbusmenu - 12.10.3+ --------------- libdbusmenu (12.10. [ Robert Bruce Park ] * Use "insert" signal instead of "child-added". (LP: #1203888) [ Alistair Buxton ] * Use "insert" signal instead of "child-added". (LP: #1203888) [ Ubuntu daily release ] * Automatic snapshot from revision 462 -- Ubuntu daily release <email address hidden> Mon, 04 Nov 2013 16:46:50 +0000 The verification of the Stable Release Update for libdbusmenudbusmenu - 12.10.3+ --------------- libdbusmenu (12.10. [ Alistair Buxton ] * Use "insert" signal instead of "child-added" (LP: #1203888) In recent Gtk+2 versions, the "insert" signal has been backported from Gtk+3. This replaces the "child-added" signal, which was carried in an Ubuntu-specific patch and was dropped in Saucy. . (LP: #1203888) [ Ubuntu daily release ] * Automatic snapshot from revision 462 -- Ubuntu daily release <email address hidden> Mon, 25 Nov 2013 03:55:53 +0000 klipper doesn't work either, although the terminal give no error messages.
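Until the libdbusmenu fix landed, the only behaviour the thread confirms as working is populating the menu completely before the first set_menu() call. A sketch of structuring application code that way follows; it is my illustration, not from the report, and whether handing a freshly built menu to set_menu() again works on affected versions is not confirmed by the thread:

def build_menu(labels):
    # Create and fully populate the menu before it is handed to the indicator,
    # so that no m.append() calls happen after set_menu().
    menu = gtk.Menu()
    for text in labels:
        item = gtk.MenuItem(text)
        item.show()
        menu.append(item)
    return menu

# m = build_menu(['One', 'Two'])
# a.set_menu(m)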
https://bugs.launchpad.net/glipper/+bug/1203888
CC-MAIN-2019-47
refinedweb
1,481
55.64
Model binding JSON POSTs in ASP.NET Core I, which you could previously do in ASP.NET 4/MVC 5. In this post, I am going to show what to do if you are converting a project to ASP.NET Core and you discover your JSON POSTs aren't working. I'll demonstrate the differences between MVC 5 model binding and MVC Core model binding, highlighting the differences between the two, and how to setup your controllers for your project, depending on the data you expect. TL;DR: Add the [FromBody]attribute to the parameter in your ASP.NET Core controller action Where did my data go? Imagine you have created a shiny new ASP.NET core project which you are using to rewrite an existing ASP.NET 4 app (only for sensible reasons of course!) You copy and paste your old WebApi controller in to your .NET Core Controller, clean up the namespaces, test out the GET action and all seems to be working well. Note: In ASP.NET 4, although the MVC and WebApi pipelines behave very similarly, they are completely separate. Therefore you have separate ApiControllerand Controllerclasses for WebApi and Mvc respectively (and all the associated namespace confusion). In ASP.NET Core, the pipelines have all been merged and there is only the single Controllerclass. As your GET request is working, you know the majority of your pipeline, for example routing, is probably configured correctly. You even submit a test form, which sends a POST to the controller and receives the JSON values it sent back. All looking good. As the final piece of the puzzle, you test sending an AJAX POST with the data as JSON, and it all falls apart - you receive a 200 OK, but all the properties on your object are empty. But why? What is Model Binding? Before we can go into details of what is happening here, we need to have a basic understanding of model binding. Model binding is the process whereby the MVC or WebApi pipeline takes the raw HTTP request and converts that into the arguments for an action method invocation on a controller. So for example, consider the following WebApi controller and Person class: public class PersonController : ApiController { [HttpPost] public Person Index(Person person) { return person; } } public class Person { public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } } We can see that there is a single action method on the controller, a POST action, which takes a single parameter - an instance of the Person class. The controller then just echoes that object out, back to the response. So where does the Person parameter come from? Model binding to the rescue! There are a number of different places the model binders can look for data in order to hydrate the person object. The model binders are highly extensible, and allow custom implementations, but common bindings include: - Route values - navigating to a route such as {controller}/{action}/{id}will allow binding to an idparameter - Querystrings - If you have passed variables as querystring parameters such as ?FirstName=Andrew, then the FirstNameparameter can be bound. - Body - If you send data in the body of the post, this can be bound to the Person object - Header - You can also bind to HTTP header values, though this is less common. So you can see there are a number of ways to send data to the server and have the model binder automatically create the correct method parameter for you. Some require explcit configuration, while others you get for free. 
For example Route values and querystring parameters are always bound, and for complex types (i.e. not primitives like string or int) the body is also bound. It is important to note that if the model binders fail to bind the parameters for some reason, they will not throw an error, instead you will receive a default object, with none of the properties set, which is the behaviour we showed earlier. How it works in ASP.NET 4 To play with what's going on here I created two projects, one using ASP.NET 4 and the other using the latest ASP.NET Core (so very nearly RC2). You can find them on github here and here. In the ASP.NET WebApi project, there is a simple controller which takes a Person object and simply returns the object back as I showed in the previous section. On a simple web page, we then make POSTs (using jQuery for convenience), sending requests either x-www-form-urlencoded (as you would get from a normal form POST) or as JSON. //form encoded data var dataType = 'application/x-www-form-urlencoded; charset=utf-8'; var data = $('form').serialize(); //JSON data var dataType = 'application/json; charset=utf-8'; var data = { FirstName: 'Andrew', LastName: 'Lock', Age: 31 } console.log('Submitting form...'); $.ajax({ type: 'POST', url: '/Person/Index', dataType: 'json', contentType: dataType, data: data, success: function(result) { console.log('Data received: '); console.log(result); } }); This will create an HTTP request for the form encoded POST similar to (elided for brevity): POST /api/Person/UnProtected HTTP/1.1 Host: localhost:5000 Accept: application/json, text/javascript, */*; q=0.01 Content-Type: application/x-www-form-urlencoded; charset=UTF-8 FirstName=Andrew&LastName=Lock&Age=31 and for the JSON post: POST /api/Person/UnProtected HTTP/1.1 Host: localhost:5000 Accept: application/json, text/javascript, */*; q=0.01 Content-Type: application/x-www-form-urlencoded; charset=UTF-8 {"FirstName":"Andrew","LastName":"Lock","Age":"31"} Sending these two POSTs elicits the following console response: In both cases the controller has bound to the body of the HTTP request, and the parameters we sent were returned back to us, without us having to do anything declarative. The model binders do all the magic for us. Note that although I've been working with a WebApi controller, the MVC controller model binders behave the same in this example, and would bind both The new way in ASP.NET Core So, moving on to ASP.NET Core, we create a similar controller, using the same Person class as a parameter as before: public class PersonController : Controller { [HttpPost] public IActionResult Index(Person person){ return Json(person); } } Using the same HTTP requests as previously, we see the following console output, where the x-www-url-formencoded POST is bound correctly, but the JSON POST is not. In order to bind the JSON correctly in ASP.NET Core, you must modify your action to include the attribute [FromBody] on the parameter. This tells the framework to use the content-type header of the request to decide which of the configured IInputFormatters to use for model binding. By default, when you call AddMvc() in Startup.cs, a JSON formatter, JsonInputFormatter, is automatically configured, but you can add additional formatters if you need to, for example to bind XML to an object. 
With that in mind, our new controller looks as follows: public class PersonController : Controller { [HttpPost] public IActionResult Index([FromBody] Person person){ return Json(person); } } And our JSON POST now works like magic again! So just always include [FromBody]? So if you were thinking you can just always use [FromBody] in your methods, hold your horses. Lets see what happens when you hit your new endpoint with a x-www-url-formencoded request: Oh dear. In this case, we have specifically told the ModelBinder to bind the body of the post, which is FirstName=Andrew&LastName=Lock&Age=31, using an IInputFormatter. Unfortunately, the JSON formatter is the only formatter we have and that doesn't match our content type, so we get a 415 error response. In order to specifically bind to the form parameters we can either remove the FromBody attribute or add the alternative FromForm attribute, both of which will allow our form data to be bound but again will prevent the JSON binding correctly. But what if I need to bind both data types? In some cases you may need to be able to bind both types of data to an action. In that case, you're a little bit stuck, as it won't be possible to have the same end point receive two different sets of data. Instead you will need to create two different action methods which can specifically bind the data you need to send, and then delegate the processing call to a common method: public class PersonController : Controller { //This action at /Person/Index can bind form data [HttpPost] public IActionResult Index(Person person){ return DoSomething(person); } //This action at /Person/IndexFromBody can bind JSON [HttpPost] public IActionResult IndexFromBody([FromBody] Person person){ return DoSomething(person); } private IActionResult DoSomething(Person person){ // do something with the person here // ... return Json(person); } } You may find it inconvenient to have to use two different routes for essentially the same action. Unfortunately, routes are obviously mapped to actions before model binding has occurred, so the model binder cannot be used as a discriminator. If you try to map the two above actions to the same route you will get an error saying Request matched multiple actions resulting in ambiguity. It may be possible to create a custom route to call the appropriate action based on header values, but in all likelihood that will just be more effort than it's worth! Why the change? So why has this all changed? Wasn't it simpler and easier the old way? Well, maybe, though there are a number of gotchas to watch out for, particularly when POSTing primitive types. The main reason, according to Damian Edwards at the community standup, is for security reasons, in particular cross-site request forgery (CSRF) prevention. I will do a later post on anti-CSRF in ASP.NET Core, but in essence, when model binding can occur from multiple different sources, as it did in ASP.NET 4, the resulting stack is not secure by default. I confess I haven't got my head around exactly why that is yet or how it could be exploited, but I presume it is related to identifying your anti-CSRF FormToken when you are getting your data from multiple sources. Summary In short, if your model binding isn't working properly, make sure it's trying to bind from the right part of your request and you have registered the appropriate formatters. If it's JSON binding you're doing, adding [FromBody] to your parameters should do the trick!
http://andrewlock.net/model-binding-json-posts-in-asp-net-core/
CC-MAIN-2016-50
refinedweb
1,740
53.1
Why Avoid Exception
Posted by codingsense on March 16, 2010
Hi,
Last week I was optimizing a module, and as I was working I found that avoiding exceptions can save a lot of fruitful time. So I made the sample below to see how much time we can save by avoiding an exception. In the sample below, one method just iterates and another method throws an exception.
using System;
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
            sw.Start();
            loopProper();
            sw.Stop();
            Console.WriteLine("Looping Time : " + sw.ElapsedMilliseconds);
            sw.Reset();
            sw.Start();
            ThrowException();
            sw.Stop();
            Console.WriteLine("Exception not handled Time : " + sw.ElapsedMilliseconds);
            Console.Read();
        }
        static void loopProper()
        {
            // Busy loop used as the baseline for comparison.
            for (long Index = 0; Index < 350000; Index++)
            {
            }
        }
        static void ThrowException()
        {
            long Temp = 0;
            long Zero = 0;
            try
            {
                // Integer division by zero throws, and the exception derives
                // from ArithmeticException.
                Temp = Temp / Zero;
            }
            catch (ArithmeticException)
            {
            }
        }
    }
}
After running the above sample, we find that the time taken by just one exception is roughly the time taken by 350,000 iterations. So if we can avoid exceptions, we can do more fruitful work in the same time. I don't mean that exception handling should never be used, but try to avoid it as much as possible. Where you know a line might throw, use an if condition instead and see how the performance increases. As for my output: the module had an algorithm that would take 80-85 sec to complete, and after optimization it takes 2 sec. Changes in the logic of the algorithm and in the exception handling increased the performance drastically.
Happy learning,
Codingsense 🙂
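The same trade-off is easy to demonstrate outside C#. As a rough illustration (not from the original post), here is a comparable micro-benchmark in Python that times a raised-and-caught exception against a simple pre-check; the absolute numbers depend on the machine, but the exception path is consistently the slower one:

import timeit

def divide_with_exception(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return None

def divide_with_check(a, b):
    # Avoid the exception entirely by checking the denominator first.
    return a / b if b != 0 else None

n = 1000000
t_exc = timeit.timeit(lambda: divide_with_exception(1, 0), number=n)
t_chk = timeit.timeit(lambda: divide_with_check(1, 0), number=n)
print("exception path: %.3fs, pre-check path: %.3fs" % (t_exc, t_chk))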
https://codingsense.wordpress.com/2010/03/16/why-avoid-exception/
CC-MAIN-2018-26
refinedweb
264
55.95
I want to create 2 CSV files. I have 2 arrays in one function; I am looping through them and calling another function to write each one into a CSV file, so it should create 2 CSV files.

import time
import datetime
import csv

time_value = time.time()

def csv_write(s):
    print s
    f3 = open("import_" + str(int(time_value)) + ".csv", 'wt')
    writer = csv.writer(f3, delimiter=',', lineterminator='\n', quoting=csv.QUOTE_ALL)
    writer.writerow(s)
    f3.close()

def process_array():
    a = [["a", "b"], ["s", "v"]]
    for s in a:
        csv_write(s)

process_array()

You need to add the argument into the file name:

f3 = open("import_" + "".join(s) + "_" + str(int(time_value)) + ".csv", 'wt')

In this case you will have two files (with "ab" and "sv" in the names) if a contains two elements, as in your example. Here we concatenate all the items of s, so it is not the best solution (the result of this concatenation could be the same for different elements of the list a). So I'd recommend this solution instead: in the for loop you can count the elements:

idx = 0
for s in a:
    csv_write(s, idx)
    idx = idx + 1

In this case you need to extend your function csv_write (add one more argument):

def csv_write(s, idx):
    print s
    f3 = open("import_" + str(idx) + "_" + str(int(time_value)) + ".csv", 'wt')
    ...
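Putting the accepted suggestion together, a corrected version of the whole script might look like this; enumerate() is a small tidy-up over counting idx by hand, and it is otherwise the same approach as the answer above (it runs under Python 2 or 3):

import csv
import time

time_value = int(time.time())

def csv_write(row, idx):
    # One file per row, made unique by the row's position in the list.
    filename = "import_%d_%d.csv" % (idx, time_value)
    with open(filename, 'wt') as f:
        writer = csv.writer(f, delimiter=',', lineterminator='\n',
                            quoting=csv.QUOTE_ALL)
        writer.writerow(row)

def process_array():
    rows = [["a", "b"], ["s", "v"]]
    for idx, row in enumerate(rows):
        csv_write(row, idx)

process_array()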
https://codedump.io/share/w5fdELwR3ohd/1/how-to-create-2-csv-files-when-using-recursion
CC-MAIN-2017-39
refinedweb
227
64.64
Chen, Kernel: 2.4.26 (I observe the same with 2.4.25) Distribution: Debian (straight SARGE, and pymol was acquired with apt-get, so no funny compiling business here) Python: 2.3 (2.4 is also there, but only for testing purposes) I've noticed on other mailing lists, that people have reported similar error messages with other packages. This is how I found the original solutions. I just wanted to report that it also occurs with Pymol. Perhaps you are right, perhaps it's a python problem, but I'm not prepared to roll back to 2.2 to test it. > What Linux are you using? Seems it's not a problem on RedHat linux, I am > running both 7.3 and EL-AS, with 2.4.20 and 2.4.21 kernel respectively, > and the exact same NVIDIA version without any problem at all. > Although we are using python2.2, looks like you are using python2.3, maybe > that's the problem? > > Best > Chen Robert, According to what I've managed to learn from googling around a bit, this problem doesn't occur on 2.6.x kernels. > I'm running Fedora Core 2 with: > (II) Module glx: compiled for 4.0.2, module version = 1.0.6111 > > And: > 2.6.8-1.521 #1 Mon Aug 16 09:01:18 EDT 2004 i686 athlon i386 GNU/Linux > > With no problems whatsover... > > Robert Well, I'm glad to hear this problem isn't as endemic for Pymol users as I had feared. What surprised me was that it happened on a fresh Debian install. The solutions I had posted earlier *do* solve the problem (they did for me), so hopefully the information will be useful to some people who encounter it. Regards, Peter _.--'"`'--._ _.--'"`'--._ _.--'"`'--._ _ -:`.'|`|"':-. '-:`.'|`|"':-. '-:`.'|`|"':-. '.` : '. | | | |'. '. | | | |'. '. | | | |'. '.: '. '.| | | | '. '.| | | | '. '.| | | | '. '. '. `.:_ | :_.' '. `.:_ | :_.' '. `.:_ | :_.' '. `. `-..,..-' `-..,..-' `-..,..-' ` Laboratory of Dr. Didier Picard University of Geneva Department of Cell Biology Scinces III 30, Quai Ernest-Ansermet 1211 Geneva 4 Switzerland Tel: +41 22 379 3254 _.--'"`'--._ _.--'"`'--._ _.--'"`'--._ _ -:`.'|`|"':-. '-:`.'|`|"':-. '-:`.'|`|"':-. '.` : '. | | | |'. '. | | | |'. '. | | | |'. '.: '. '.| | | | '. '.| | | | '. '.| | | | '. '. '. `.:_ | :_.' '. `.:_ | :_.' '. `.:_ | :_.' '. `. `-..,..-' `-..,..-' `-..,..-' ` There is one another solution. You can install Nvidia linux drivert with "-force-tls=classic" option. The command line loks like this : > NVIDIA-Linux-x86-1.0-5336-pkg1.run -force-tls=classic cheers Vladimir Message: 7 Date: Wed, 13 Oct 2004 18:12:11 +0200 From: Peter.Dudek@... To: pymol-users@... Subject: [PyMOL] Warning: Nvidia 6111 drivers for linux Hello. I did a scan (albeit brief) of the mailing list for this issue, but didn't find it, so I decided to post it FYI. I wanted to mention a serious issue I just discovered after installing the new nvidia drivers for linux (specifically regarding the "NVIDIA-Linux-x86-1.0-6111-pkg1.run" driver, though I imagine the same is true for all other 6111 drivers), on a 2.4.x kernel, as well as the solution. It may be important to make a note of this somewhere in the FAQ or installation instructions (or perhaps it would be easy to fix in one of the pymol scripts). PROBLEM: ======== Running Pymol gives the following message: Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/pymol/__init__.py", line 90, in ? import pymol File "/usr/lib/python2.3/site-packages/pymol/__init__.py", line 306, in ? import _cmd ImportError: libnvidia-tls.so.1: cannot handle TLS data SOLUTIONS: ========== There are 2 solutions to this problem. 
(1) either rename /usr/lib/tls to something else, or delete it entirely. Pymol then works perfectly ok. However, since this is akin to "If it hurts, cut it off", it might break something down the line. Do note though, that the files contained within this dir are also in /usr/lib (not a symbolic link). (2) edit /etc/default/nvidia-glx (if it exists, for me it didn't), and set USE_TLS to 0. Reboot. Surely there must be another editable config file somewhere with this option, but I haven't found it. According to a thread on PyKDE, "The tls (thread local storage) stuff only works if you are running a tls-enabled glibc on a 2.6 kernel, and when installing nvidia-glx, you are normally asked by debconf on what to use." This suggests anyone running the new nvidia drivers on a 2.4.x kernel may encounter this problem. Admittedly, I haven't tested them on a 2.6.x kernel, so I can't be sure the problem doesn't occur there either. Hope this helps someone. Cheers, FYI for people with this problem. The EASIEST solution is the same that people are using for ut2004 and doom3 in linux oddly enough. Just run: LD_PRELOAD=/usr/lib/libGL.so pymol - Charlie On Tue, Nov 02, 2004 at 01:59:56PM -0500, Charles Moad wrote: > FYI for people with this problem. The EASIEST solution is the same that > people are using for ut2004 and doom3 in linux oddly enough. Just run: > > LD_PRELOAD=/usr/lib/libGL.so pymol I read on the O list that a new set of drivers were made available Nov 5. It'll be interesting to hear how they fare with PyMOL, and if any difficulties get cleared up by them. -- D. Joe Anderson, Asst. Sci. 2252 Molecular Biology Bldg. BBMB Research Computing Support bbsupport@...
http://sourceforge.net/p/pymol/mailman/pymol-users/thread/[email protected]/
CC-MAIN-2015-06
refinedweb
891
76.72
You can schedule inspection scans of your content using Cloud Data Loss Prevention (DLP)'s job trigger feature. Job triggers are events that automate running Cloud DLP jobs to scan Google Cloud storage repositories (Cloud Storage, BigQuery, and Datastore). Before you begin This quickstart assumes that you already have a storage repository in mind that you want to scan. If not, consider scanning one of the available BigQuery public datasets. - DLP API. Open Cloud DLP To access Cloud DLP in the Cloud Console: Alternatively, do the following: - In the Cloud Console, if the navigation menu isn't visible, click the navigation button in the upper-left corner of the page. - Point to Security, and then click Data Loss Prevention. The main Cloud DLP page opens. Create a new job trigger and choose input data To create a job trigger in Cloud DLP: In the Cloud Console, open Cloud DLP. From the Create menu, choose Job or job trigger. Alternatively, click the following button: On the Create job or job trigger page, first enter a name for the job. You can use letters, numbers, and hyphens. Next, from the Storage type menu, choose what kind of repository stores the data you want to scan—Cloud Storage, BigQuery, or Datastore: - For Cloud Storage, either enter the URL of the bucket you want to scan, or choose Include/exclude from the Location type menu, and then click Browse to navigate to the bucket or subfolder you want to scan. Select the Scan folder recursively checkbox to scan the specified directory and all contained directories. Leave it unselected to scan only the specified directory and no deeper. - For BigQuery, enter the identifiers for the project, dataset, and table that you want to scan. - For Datastore, enter the identifiers for the project, namespace, and kind that you want to scan. Once you're finished specifying the data location and any advanced configuration details, click Continue. Configure detection parameters The Configure detection section is where you specify the types of sensitive data you want to scan for. For this quickstart, leave these sections to their default values. This will cause Cloud DLP to scan a portion of the data repository you've specified (50% of all files in Cloud Storage; up to 1,000 rows in BigQuery) for all of the basic built-in information types (infoTypes). For detailed information about the settings in this section, see Configure detection in "Creating Cloud DLP jobs and job triggers." Add post-scan actions The Add actions section is where you specify actions for Cloud DLP to take with the results of the inspection scan after it has completed. In this step, you will choose to save the inspection results to a new BigQuery table. For a detailed explanation of each option, see Add actions in "Creating and scheduling Cloud DLP inspection jobs." Click the BigQuery toggle. As shown in the following screenshot, in the Project ID field, type your project identifier. In the Dataset ID field, type the name you've given your dataset. Leave the Table ID field blank so that Cloud DLP creates a new table. When you're done, click Continue For more information about actions, see the Actions conceptual topic. Set a schedule The Schedule section is where you tell Cloud DLP how often you want it to kick off the job trigger and run the job you've just specified. Choose Create a trigger to run the job on a periodic schedule from the menu. The default value for how often the job runs is 24 hours. 
You can change this to any value between 1 and 60 days, specifying the span in hours, days, or weeks. Select the Limit scans to only new content added or modified after previous scans are completed checkbox to only scan content that is new since the last scan. Be aware that this only applies to content added since the storage repository was last scanned by this job trigger's spawned jobs. Click Continue. Review the job trigger The Review section contains a JSON-formatted summary of the job settings you just specified. Click Create to create the job trigger. Run the job trigger and view results Once you create the job trigger, the Trigger details page appears. To trigger a job immediately, click Run now at the top of the screen. Jobs that have been triggered by this job trigger are listed in the Triggered jobs section of the details page. After the job trigger you created has run once, select the job by clicking its name beneath the Name column. The Job details page lists the job's findings first, followed by information about what was scanned for. If you chose to save results to BigQuery, on the Trigger details page, click View findings in BigQuery. Within the dataset you specified, Cloud DLP has created a new table with the results of the scan. (If Cloud DLP didn't find any matches to your search criteria, no new table will be present.) Clean up To avoid incurring charges to your Google Cloud account for the resources used in this page, follow these steps. the job trigger If you created the job trigger in an existing project that you want to keep: If necessary, choose the name of the project in which you created a job trigger from the menu at the top of the Cloud Console. Then open Cloud DLP in the Cloud Console. Click the Job triggers tab. The console displays a list of all job triggers for the current project. In the Actions column for the job trigger you want to delete, click the more actions menu (displayed as three dots arranged vertically) , and then click Delete. Alternatively, from the list of job triggers, click the name of the job you want to delete. On the job trigger's detail page, click Delete. What's next - Learn more about creating inspection jobs and job triggers, using either Cloud DLP in the Cloud Console, the Cloud DLP API, or client libraries in several programming languages: Creating and scheduling Cloud DLP inspection jobs.
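The same trigger can also be created programmatically. Below is a rough sketch using the google-cloud-dlp Python client; the project, bucket and dataset identifiers are placeholders, and the exact field names and request shape follow my reading of the v2 client, so verify them against the current library documentation before relying on this:

from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # placeholder project ID

job_trigger = {
    "display_name": "quickstart-trigger",
    "inspect_job": {
        "storage_config": {
            "cloud_storage_options": {
                "file_set": {"url": "gs://my-bucket/**"},  # placeholder bucket
                "files_limit_percent": 50,
            }
        },
        # Save findings to a new BigQuery table in the given dataset.
        "actions": [{
            "save_findings": {
                "output_config": {
                    "table": {"project_id": "my-project",
                              "dataset_id": "my_dataset"}
                }
            }
        }],
    },
    # Run every 24 hours; the duration is expressed in seconds.
    "triggers": [{"schedule": {"recurrence_period_duration": {"seconds": 24 * 60 * 60}}}],
    "status": "HEALTHY",
}

response = client.create_job_trigger(
    request={"parent": parent, "job_trigger": job_trigger}
)
print("Created trigger:", response.name)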
https://cloud.google.com/dlp/docs/quickstart-create-job-trigger?hl=ca&skip_cache=true
CC-MAIN-2021-43
refinedweb
1,018
69.62
For context, I am trying to move the flap on an airplane wing using a GUI Slider in Unity 2017.3.0f3. Ideally I'm going to have the slider start in the middle, and you can drag the dot left for "down" a certain amount of degrees, and right for "up" a certain amount of degrees. So far I have hit walls in all of my attempts of just proof of concept for scripting this. My setup I have in the scene is: - a slider named "LeftFlap_Slider", - a C# script titled "MoveFlaps.cs", - GameObject titled "LeftWingFlap_Pivot" that the C# script is attached to. public GameObject LeftFlap; public void RotateFlap() { float sliderValue = GetComponent<Slider>().value; LeftFlap.transform.rotation = Quaternion.Euler(sliderValue * 360, 0, 90); } While this doesn't show any errors in the editor, when I move the slider in Unity I get the error ArgumentException: GetComponent requires that the requested component 'Slider' derives from MonoBehaviour or Component or is an interface. And further down it directly calls out the line that float sliderValue = GetComponent<Slider>().value; is on. So I'm assuming that it's something to do with <Slider>? I have also gone down the route of Transforms and Vector3 eulerAngles. Specifically: ArgumentException: GetComponent requires that the requested component 'Slider' derives from MonoBehaviour or Component or is an interface. float sliderValue = GetComponent<Slider>().value; <Slider> public void LeftFlapSlider(float newValue) { Vector3 pos = LeftFlap.transform.eulerAngles; pos.y = newValue; LeftFlap.transform.position = pos; } When I connect it to the slider (and select the Dynamic float version of LeftFlapSlider), the LeftWingFlap_Pivot GameObject teleports to a different location the moment the slider begins to update. A note on its teleport location, this is not world center or GameObject center, as those are both located in front of the airplane, and this is teleporting somewhere above and behind the airplane. The slider does move the GameObject pos and neg when I go left and right, but it's on an axis, and I want to rotate it on an axis. So this was my final attempt: public float speed; public void AdjustSpeed(float newSpeed) { transform.Rotate(speed, 0, 0); speed = newSpeed; } This has the GameObject spin, only while the slider is moving, and it spins faster or slower depending on where I am currently moving the slider. But it's rotating it, which is better than a linear movement on an axis. So that's where I am at now and am at a loss. It would seem that my first example would be the best bet, as that thread stated that it worked just great for what they were using it for, and that's near identical to what I'm doing, but of course it doesn't work. Come on guys. 484 people following this and not even a comment? I'm seriously still working on this and have gotten zero progress. Answer by TreyH · Mar 12, 2018 at 04:09 PM When using Unity's UI elements, there's usually an event listener you can use instead of manually checking for changed values or assigning rotations each frame regardless of change. This script assumes that your slider goes from 0 to 1 and that you want to extend that to 0 to 360, as you had in your question. 
using UnityEngine; using UnityEngine.UI; public class RotateWithSlider : MonoBehaviour { // Assign in the inspector public GameObject objectToRotate; public Slider slider; // Preserve the original and current orientation private float previousValue; void Awake () { // Assign a callback for when this slider changes this.slider.onValueChanged.AddListener (this.OnSliderChanged); // And current value this.previousValue = this.slider.value; } void OnSliderChanged (float value) { // How much we've changed float delta = value - this.previousValue; this.objectToRotate.transform.Rotate (Vector3.right * delta * 360); // Set our previous value for the next change this.previousValue = value; } } Which will look like: Alright, this does work fantastically and does exactly what I have been looking for. But a new problem has presented itself within this solution. When the slider updates, the wing flap that moves, will teleport 90degrees counter-clockwise, and then work as intended. Even when I move the slider back to its original position, the wing flap stays in the off 90degree rotated position like so: But it does rotate exactly how you would expect an airplane flat to move, except that it's off the vehicle by 90degrees. Its original rotational position is: 0, -14.597, 0. But when I set that back to 0, 0, 0, it's nothing like what's in the screenshot's right image. Looking at the inspector of the GameObject while in play mode, the rotation Y-value gets set to -90degrees. The original response was just taking your (X,0,0) vector and applying it as the object's euler angles. I just updated the answer to only rotate around the given axis without assigning other euler components. Also, which axis is that wing flap supposed to rotate around? I assumed X, but the script will need to be adjusted depending on how your objects are set up. Alright, those changes fix the sudden -90degree change. Final (I hope) problem that's presented itself in this. And to be honest I don't know if it should be in this thread or another new one. But when I rotate an object using the rotate tool, click off that object than back onto it, my rotation sphere (visual showing the x,y,z axis) has reset itself back to the default position. Even though the values in the inspector are not back to 0, 0, 0. And if I type in rotational values in the inspector, the sphere does not change. I've restarted Unity but this problem persists. edit: And even more odd, when I add a child to a parent object, the parent object's positional center changes to that of the child. And when I remove the child, the parent's positional center returns to what it originally was. I've been doing this for the entire project and this has never happened79 People are following this question. Rotate an Instantiated GameObject a set amount from another instantiated GameObject? 1 Answer Distribute terrain in zones 3 Answers Multiple Cars not working 1 Answer Moving gameobject with animation according to my script 0 Answers How to access a game object from a Prefab? 0 Answers
https://answers.unity.com/questions/1477968/rotate-gameobject-with-gui-slider.html?sort=oldest
CC-MAIN-2020-34
refinedweb
1,063
61.87
- NAME - SYNOPSIS - DESCRIPTION - ROLE COMPOSITION - IMPORTED SUBROUTINES - SUBROUTINES - METHODS - CAVEATS - SEE ALSO - AUTHOR - CONTRIBUTORS - LICENSE NAME Role::Tiny - Roles: a nouvelle cuisine portion size slice of Moose SYNOPSIS package Some::Role; use Role::Tiny; sub foo { ... } sub bar { ... } around baz => sub { ... }; 1; elsewhere package Some::Class; use Role::Tiny::With; # bar gets imported, but not foo with 'Some::Role'; sub foo { ... } # baz is wrapped in the around modifier by Class::Method::Modifiers sub baz { ... } 1; If you wanted attributes as well, look at Moo::Role. DESCRIPTION. A method inherited by a class gets overridden by the role's method of the same name, though.. ROLE METHODS All subs created after importing Role::Tiny will be considered methods to be composed. For example: package MyRole; use List::Util qw(min); sub mysub { } use Role::Tiny; use List::Util qw(max); sub mymethod { } In this role, max and mymethod will be included when composing MyRole, and min and mysub will not. For additional control, namespace::clean can be used to exclude undesired subs from roles. IMPORTED SUBROUTINES requires requires qw(foo bar); Declares a list of methods that must be defined to compose role. with with 'Some::Role1'; with 'Some::Role1', 'Some::Role2'; Composes another role into the current role (or class via Role::Tiny::With). If you have conflicts and want to resolve them in favour of Some::Role1 you can instead write: with 'Some::Role1'; with 'Some::Role2'; If you have conflicts and want to resolve different conflicts in favour of different roles, please refactor your codebase. before before foo => sub { ... }; See "before. around around foo => sub { ... }; See "around. after after foo => sub { ... }; See "after. Strict and Warnings In addition to importing subroutines, using Role::Tiny applies strict and warnings to the caller. SUBROUTINES does_role if (Role::Tiny::does_role($foo, 'Some::Role')) { ... } Returns true if class has been composed with role. This subroutine is also installed as ->does on any class a Role::Tiny is composed into unless that class already has an ->does method, so if ($foo->does('Some::Role')) { ... } will work for classes but to test a role, one must use ::does_role directly. Additionally, Role::Tiny will override the standard Perl DOES method for your class. However, if any class in your class' inheritance hierarchy provides DOES, then Role::Tiny will not override it. METHODS make_role Role::Tiny->make_role('Some::Role'); Makes a package into a role, but does not export any subs into it. apply_roles_to_package Role::Tiny->apply_roles_to_package( 'Some::Package', 'Some::Role', 'Some::Other::Role' ); Composes role with package. See also Role::Tiny::With. apply_roles_to_object Role::Tiny->apply_roles_to_object($foo, qw(Some::Role1 Some::Role2)); Composes roles in order into object directly. Object is reblessed into the resulting class. Note that the object's methods get overridden by the role's ones with the same names. create_class_with_roles Role::Tiny->create_class_with_roles('Some::Base', qw(Some::Role1 Some::Role2)); Creates a new class based on base, with the roles composed into it in order. New class is returned. is_role Role::Tiny->is_role('Some::Role1') Returns true if the given package is a role. CAVEATS On perl 5.8.8 and earlier, applying a role to an object won't apply any overloads from the role to other copies of the object. 
On perl 5.16 and earlier, applying a role to a class won't apply any overloads from the role to any existing instances of the class. SEE ALSO Role::Tiny is the attribute-less subset of Moo::Role; Moo::Role is a meta-protocol-less subset of the king of role systems, Moose::Role. Ovid's Role::Basic provides roles with a similar scope, but without method modifiers, and having some extra [email protected]> Copyright (c) 2010-2012 the Role::Tiny "AUTHOR" and "CONTRIBUTORS" as listed above. LICENSE This library is free software and may be distributed under the same terms as perl itself.
https://web-stage.metacpan.org/pod/Role::Tiny
CC-MAIN-2021-21
refinedweb
647
56.86
don't understand is why infix is so important. The only advantage of it I can think of is familiarity; in all other respects it's inferior to prefix and suffix notations. IMVHO, of course, but I think this opinion has a solid basis in reality ;) It's a mistake to ignore the customer Posted Dec 6, 2012 16:10 UTC (Thu) by renox (subscriber, #23785) [Link] I agree but familiarity is very important. So IMHO one interesting middle path for language designers is to respect the familiarity by using infix for "math" expressions and then use prefix or suffix (one or the other not both as C does) for everything else. Nimrod ( ) is a bit like this: it has no postfix operators, only prefix and infix. Posted Dec 6, 2012 16:28 UTC (Thu) by david.a.wheeler (guest, #72896) [Link] The only advantage of (infix) I can think of is familiarity. Are you going to change all schools and math books, worldwide, to use infix? No? Infix is not going away. And most humans do prefer the familiar. A programming language is not just for the computer, it's also for the humans. Humans can learn to use prefix, but most humans prefer to use a notation similar to what they've used for 10 or more years. The Fortran developers figured out how to do infix years ago, it's time for Lisp implementations to catch up to the first version of Fortran :-). Posted Dec 6, 2012 21:50 UTC (Thu) by dakas (guest, #88146) [Link]. Posted Dec 7, 2012 5:43 UTC (Fri) by bronson (subscriber, #4806) [Link] Don't get me wrong, my HP 48 and RPN rocketed me through 4 years of EE. But most of the other pre-Es in my classes used TI-8x and Casio FX-xxx, I assume partly because they didn't require a tutorial to do the simplest things. So, be careful rolling out the "most popular among X" argument. Not surprisingly, that almost always supports the infix crowd. HP calculators Posted Dec 7, 2012 15:05 UTC (Fri) by david.a.wheeler (guest, #72896) [Link] However, almost no one else I know of can even USE it, nor are they interested in learning how. If I offered the calculator to them, they'd say no thank you, and get a calculator that supports infix instead. Most people will immediately reject something that doesn't support infix today. It's time to support modern expectations. Posted Dec 6, 2012 17:18 UTC (Thu) by etienne (subscriber, #25256) [Link] Just to add that the main disadvantage of infix is that you have to introduce operator precedence - and having "*" priority higher than "-" but only when "*" is used as multiply (and not content-of) and only when "-" is the substraction operator (and not a negative number). Then you overload (in C++) those operator and cannot change the priority of those operators... Language grammar is complex for infix... None of those problem exist with: prefix "(+ 3 (* 4 2))" or postfix "(3 (4 2)* )+" but infix "3 + 2 * 4" can be really complex (when overloading and not managing numbers). Posted Dec 6, 2012 18:27 UTC (Thu) by vonbrand (subscriber, #4458) [Link] "Precedence" is a red herring: The precedence of +-*/ is fixed, you can easily place "all others" in one (or two) categories). Yes, C went overboard with its 13 levels; APL went overboard the other way (all operators left associative with the same precedence). Come on, parsing infix (precedence and all) is ridiculously easy. A nice, top-down parser for C is described in Fraser and Hanson's book on LCC. 
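As an aside on the "parsing infix (precedence and all) is ridiculously easy" claim above, a minimal precedence-climbing parser really does fit in a handful of lines. This Python sketch is an editorial illustration, not part of the original thread:

import re

PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def tokenize(s):
    return re.findall(r'\d+|[-+*/()]', s)

def parse(tokens, min_prec=1):
    # Precedence climbing: parse an atom, then greedily absorb operators
    # whose precedence is at least min_prec.
    node = parse_atom(tokens)
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        rhs = parse(tokens, PREC[op] + 1)
        node = (op, node, rhs)
    return node

def parse_atom(tokens):
    tok = tokens.pop(0)
    if tok == '(':
        node = parse(tokens)
        tokens.pop(0)  # discard the closing ')'
        return node
    return int(tok)

print(parse(tokenize("3 + 2 * 4")))  # ('+', 3, ('*', 2, 4))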
Posted Dec 7, 2012 7:23 UTC (Fri) by mathstuf (subscriber, #69389) [Link] Posted Dec 7, 2012 9:39 UTC (Fri) by ekj (guest, #1524) [Link] How do you know what will happen with: a + b [custom] c * d Means: (a+b) [custom] (c * d) Or: ((a+b) [custom] c) * d a + (b [custom] c) * d Posted Dec 7, 2012 16:18 UTC (Fri) by mathstuf (subscriber, #69389) [Link] Posted Dec 7, 2012 10:12 UTC (Fri) by etienne (subscriber, #25256) [Link] It is not the parsing which is a problem, it is the description of functions. In pre/post-fix notation, each operator is a function, so you have those functions - thinking in C: number + ( number, number, ...); number * ( number, number, ...); boolean = ( number, number, ...); boolean < ( number, number, ...); boolean && ( boolean, boolean, ...); There is nothing special at all about these functions, compared to any other functions like boolean print ( ... ); I do not see how you can define a single and simple function type when you use infix, so that in a complex class tree you can derive a class and replace a random function with a simple "addition" or "greater_than". Maybe define a priority for each and every functions? Can this priority change at run-time? at instantiation time? Note that I am using infix for my programming, so I deal with it... Note also that I do not want to teach infix to someone writing from right to left, nor do I want to translate mathematics text into these kind of languages... Posted Dec 6, 2012 20:37 UTC (Thu) by neilbrown (subscriber, #359) [Link] I was thinking exactly the reverse - a significant *advantage* of infix is that it allows the use of precedence to reduce bracket-noise. Some care is needed in choose the precedence levels of course but it isn't hard if you apply care. I really do not want to try to write (let along read) if a + 2 < b*3+1 and c & 4 == 4 in anything but precedence-aware infix notation. Posted Dec 6, 2012 22:08 UTC (Thu) by dakas (guest, #88146) [Link] I really do not want to try to write (let along read) if a + 2 < b*3+1 and c & 4 == 4 in anything but precedence-aware infix notation. if a + 2 < b*3+1 and c & 4 == 4 (if (and (< (+ a 2) (+ (* b 3) 1)) (= (logand c 4) 4)) ...) Posted Dec 6, 2012 22:22 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] Posted Dec 6, 2012 22:48 UTC (Thu) by dakas (guest, #88146) [Link] Posted Dec 6, 2012 23:09 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] Posted Dec 6, 2012 23:54 UTC (Thu) by nybble41 (subscriber, #55106) [Link] > if (a + 2 < b * 3 + 1 && (c & 4) == 4) {...} At 56 characters, not counting the ellipsis, the Scheme example fits easily into one line. Personally, I would probably have split it across two lines in either language, but to each his own. Granted, the corrected C version is only 42 characters, but that is offset by the need to remember the precedence of each operator, and you just demonstrated how difficult that can be. The fully parenthesized C version > if (((a + 2) < ((b * 3) + 1)) && ((c & 4) == 4)) {...} is 52 characters, which isn't much shorter (or more readable) than the Scheme code. Posted Dec 7, 2012 0:08 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] So we realistically have: "if ((a+2 < b*3+1) && ((c&4) == 4)))" or 27 non-whitespace symbols. That's more compact and easier to understand. And most people remember precedence rules of logical operators, so we have: "if (a+2 < b*3+1 && (c&4) == 4)" or 23 symbols. Also, infix order has nice feature - it allows me to group relevant operations with whitespaces, with little "graphical" overhead. 
Posted Dec 8, 2012 9:57 UTC (Sat) by nix (subscriber, #2304) [Link] everybody remembers THAT precedence rule Posted Dec 8, 2012 13:51 UTC (Sat) by paulj (subscriber, #341) [Link] Posted Dec 12, 2012 13:45 UTC (Wed) by HelloWorld (guest, #56129) [Link] Posted Dec 13, 2012 11:39 UTC (Thu) by dakas (guest, #88146) [Link] The solution to that isn't dumbing down the language but firing people who don't have half a clue. Posted Dec 13, 2012 11:55 UTC (Thu) by mpr22 (subscriber, #60784) [Link] Posted Dec 13, 2012 17:26 UTC (Thu) by HelloWorld (guest, #56129) [Link] > Using infix in Scheme is like Yodish Pidgin English. Sort of defeats the original purpose of human and computer sharing a language for talking about code rather than humans expressing themselves to one another. No, it doesn't. What makes Lisp Lisp is the ability to easily represent a program in the language's primary data structure and manipulate it. Infix syntax doesn't change that, it just makes things more readable. Again, you were told this multiple times, so I don't even know why I repeat it again. It's really a waste of time to argue with a stick-in-the-mud like you. Posted Dec 13, 2012 18:38 UTC (Thu) by viro (subscriber, #7872) [Link] Posted Dec 7, 2012 0:22 UTC (Fri) by neilbrown (subscriber, #359) [Link] Sorry, but what language did you think I was using? Given that I used "and" and did not include () around the condition of the 'if', it certainly wasn't C. (It is missing a ':' at the end - sorry about that). Much as I like C, it clearly got some precedence issues wrong. More modern languages do a much better job. Posted Dec 7, 2012 7:21 UTC (Fri) by mathstuf (subscriber, #69389) [Link] C alternative tokens Posted Dec 7, 2012 16:42 UTC (Fri) by dtlin (✭ supporter ✭, #36537) [Link] #include <iso646.h> Precedence Posted Dec 7, 2012 17:16 UTC (Fri) by david.a.wheeler (guest, #72896) [Link] Poster said: Just to add that the main disadvantage of infix is that you have to introduce operator precedence No you don't. Infix means the operator is between the operands, that's all. Precedence can a lot of create problems, and the SRFI-105 has a very simple solution: No built-in precedence. You can still have precedence, though. If you use multiple operations that require precedence, the whole expression is changed to "($nfx$ ...)". You can then define "$nfx$" to do whatever you want. SRFI-105 contains a detailed rationale about its approach to precedence. In practice, {3 - {4 * 5}} isn't hard to read at all, and sure beats (- 3 (* 4 5)). Posted Dec 6, 2012 18:33 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link] this attitude (that familiarity doesn't matter) is a major problem infecting current Linux userspace development. Gnome, Systemd, Unity, Wayland, (KDE4 to a much lesser degree) are all doing things that assume that familiarity with the existing stuff doesn't matter and that anyone who objects is just a stick-in-the-mud who needs to get with the times. It's hard to overstate the value of familiarity. Even if the existing tools are inefficient, if they are familiar to people, it can be really bad to change them. outside of software, one common example is the DORVAK vs QWERTY keyboard debate. Some people argue that DORVAK is significantly better (I'm not one of them by the way), but QWERTY remains dominant, even in things like phone on-screen keyboards, for the simple reason that it's familiar to people. 
Posted Dec 7, 2012 17:48 UTC (Fri) by sorpigal (subscriber, #36106) [Link] maybe not only familiarity Posted Dec 7, 2012 20:57 UTC (Fri) by tpo (subscriber, #25713) [Link] I'd like to propose that infix (maybe as well as prefix) as a way of thinking and understanding may be rooted in the human brain itself. I've just read the "DCI manifesto" on Artima, where the authors are arguing that humans are understanding in terms of things and behaviors. Consequently they propose to model "things" and "behaviors" separately. As a functional language Scheme would be on the outmost "behavior" side of the possible spectrum. Under the above stated theory it would be a compliment to the human brain that it *is* able to manipulate a symbolic problem representation that is very much focused on only one side of "things and behaviors". On the other hand it would hint to why prefix notation is hard to handle for humans - it supposedly simply doesn't match the "natural" way a brain works. I'm completely ignorant about that topic and respective research and as such could be completely wrong, however I think we should not stop our thinking at the relatively trivial "familiarity" argument but ask whether the problem could be rooted deeper than that. [1] "The DCI Architecture: A New Vision of Object-Oriented Programming" Posted Dec 10, 2012 8:38 UTC (Mon) by jezuch (subscriber, #52988) [Link] At best it's hard to handle for speakers of SVO languages. It's like arguing that Arabic (a VSO language, equivalent to prefix notation) or Japanese (a SOV language, equivalent to suffix notation) are "hard to handle" and "don't match the natural way a brain works". Try telling that to the Arabs and Japanese ;) Posted Dec 10, 2012 9:24 UTC (Mon) by tpo (subscriber, #25713) [Link] (do (another_do (yet_another_do (and_more_do (and_still_more_do arg arg aka verb1 verb2 verb3 verb4 verb5 subject1 object 1 subject2 object2 object2a I honestly do not know, but I'd guess even a VSO language will not work that way as opposed to Lisp'ish languages? Posted Dec 10, 2012 17:06 UTC (Mon) by nybble41 (subscriber, #55106) [Link] Deeply nested expressions are a problem in any language, moreso in natural languages which are not traditionally formatted to highlight the subexpressions. If you find yourself writing such expressions in Scheme, you may want to look into refactoring the code, or at least taking advantage of the "nest" macro to flatten the expression. An example: (do arg1 (another_do (yet_another_do (and_more_do arg2 (and_still_more_do arg3 arg4))))) is equivalent to: (nest [(do arg1) (another-do) (yet-another-do) (and-more-do arg2) (and-still-more-do arg3)] arg4) Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/528212/
CC-MAIN-2013-20
refinedweb
2,315
58.01
by Radu Raicea How — and why — you should use Python Generators Generators have been an important part of Python ever since they were introduced with PEP 255. Generator functions allow you to declare a function that behaves like an iterator. They allow programmers to make an iterator in a fast, easy, and clean way. What’s an iterator, you may ask?. An iterator is defined by a class that implements the Iterator Protocol. This protocol looks for two methods within the class: __iter__ and Whoa, step back. Why would you even want to make iterators? Saving memory space Iterators don’t. Let’s = 1 def __iter__(self): return self def __next__(self): self.number += 1 if self.number >= self.max: raise StopIteration elif check_prime(self.number): return self.number else: return self.__next__() Primes is instantiated with a maximum value. If the next prime is greater or equal than the max, the iterator will raise a StopIteration exception, which ends the iterator. When we request the next element in the iterator, it will increment number by 1 and check if it’s a prime number. If it’s not, it will call __next__ again until number is prime. Once it is, the iterator returns the number. By using an iterator, we’re not creating a list of prime numbers in our memory. Instead, we’re generating the next prime number every time we request for it. Let’s try it out: primes = Primes(100000000000) print(primes) for x in primes: print(x) --------- <__main__.Primes object at 0x1021834a8>235711... Every iteration of the Primes object calls __next__ to generate the next prime number. Iterators can only be iterated over once. If you try to iterate over primes again, no value will be returned. It will behave like an empty list. Now that we know what iterators are and how to make one, we’ll move on to generators. Generators Recall that generator functions allow us to create iterators in a more simple fashion.. If we transform our Primes iterator into a generator, it’ll look like this: def Primes(max): number = 1 while number < max: number += 1 if check_prime(number): yield number primes = Primes(100000000000) print(primes) for x in primes: print(x) --------- <generator object Primes at 0x10214de08>235711... Now that’s pretty pythonic! Can we do better? Yes! We can use Generator Expressions, introduced with PEP 289. This is the list comprehension equivalent of generators. It works exactly in the same way as a list comprehension, but the expression is surrounded with () as opposed to []. The following expression can replace our generator function above: primes = (i for i in range(2, 100000000000) if check_prime(i)) print(primes) for x in primes: print(x) --------- <generator object <genexpr> at 0x101868e08>235711... This is the beauty of generators in Python. In summary… - Generators allow you to create iterators in a very pythonic manner. - Iterators allow lazy evaluation, only generating the next element of an iterable object when requested. This is useful for very large data sets. - Iterators and generators can only be iterated over once. - Generator Functions are better than Iterators. - Generator Expressions are better than Iterators (for simple cases only). You can also check out my explanation of how I used Python to find interesting people to follow on Medium. For more updates, follow me on Twitter.
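The extract above is garbled where the article defines its classes: the class Primes header, its __init__, and the check_prime helper are missing. A reconstruction consistent with the surrounding text follows; the body of check_prime is my assumption (a naive trial-division test), since it is not shown in this extract:

def check_prime(number):
    # Assumed helper: naive trial division up to the square root.
    if number < 2:
        return False
    for divisor in range(2, int(number ** 0.5) + 1):
        if number % divisor == 0:
            return False
    return True

class Primes:
    def __init__(self, max):
        self.max = max
        self.number = 1

    def __iter__(self):
        return self

    def __next__(self):
        self.number += 1
        if self.number >= self.max:
            raise StopIteration
        elif check_prime(self.number):
            return self.number
        else:
            return self.__next__()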
https://www.freecodecamp.org/news/how-and-why-you-should-use-python-generators-f6fb56650888/
CC-MAIN-2019-43
refinedweb
555
59.5
Re: event/delegate question - newbie - From: Adityanand Pasumarthi <AdityanandPasumarthi@xxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Tue, 24 Oct 2006 23:01:02 -0700 Hi Dev, I meant that .Net do not have direct communication classes for named pipe based IPC. Like for instance it has Socket classes. I _did not_ mean that IPC channel in .Net remoting do not support named pipes. Hope this clarifies my point-1 in my original reply. -- Regards, Aditya.P "Dave Sexton" wrote: Hi Adityanand,. There are several other ways to achieve this. 1. Using an IPC mechanism like named pipes. But unfortunately .Net do not have managed classes for named pipes. IpcChannel is the .NET named pipes implemenation. ChannelName corresponds to the name of the pipe. Choosing a Channel on MSDN: -- Dave Sexton "Adityanand Pasumarthi" <AdityanandPasumarthi@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:51F850AD-7540-4260-99B1-05D27A368D7C@xxxxxxxxxxxxxxxx Hi, Firstly delegates and events cannot be used for communication between two processes (.Net applications). There are several other ways to achieve this. 1. Using an IPC mechanism like named pipes. But unfortunately .Net do not have managed classes for named pipes. 2. Using .Net Remoting. You can create a server object in the .Net applciation that should receive data and create the proxy object in the .Net application that should send data. The client and server can communicate across processes or machines using different channels (IPC, TCP, HTTP) and data formats (Binary and SOAP) Resources: A. B. C. 3. You can use MSMQ through System.Messagin namespace. This can be simpler to use but requires that MSMQ is installed on your Windows OS. MSMQ is an Windows OS component but ensure that it is installed properly. Resources: A. Hope this helps. -- Regards, Aditya.P "mdauria@xxxxxxxxxxx" wrote: I am familiar with using events and delegates in an application. I am working on a new project in work where I have two applications that are running on the same box and I need to send a little bit of data from one of them to the other. I was thinking about using events to do this. I would actually just be passing little bits of data. I am unfamiliar with how (or even if I can do it) I would actually do this. Can anyone point me to an article or some sample code somewhere that might give me a start? - References: - event/delegate question - newbie - From: mdauria - Re: event/delegate question - newbie - From: Dave Sexton - Prev by Date: Re: event/delegate question - newbie - Next by Date: Re: Absolute location of control - Previous by thread: Re: event/delegate question - newbie - Next by thread: Re: event/delegate question - newbie - Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2006-10/msg04034.html
crawl-002
refinedweb
439
66.74
How to dynamically change security role without logging out
rickkw, May 6, 2005 6:47 PM

Hi, I have a custom LoginModule derived from some sample code. There, I create a SimpleGroup, "Roles", to store security roles used by my web applications. This "Roles" group is then set into the Subject's principals. I also have a group of web applications that are hooked together with Single Sign-On, with each web application requiring a different security role that can be found in the "Roles" group. Everything works as expected.

Now, I am trying to allow a user to change the security roles within a web application session. I use SecurityAssociation.getSubject() to get back the current Subject. From the Subject, I retrieved the Group, "Roles". Finally, to this "Roles" Group, I added additional security roles in the form of Principals. However, I notice that Tomcat returns status 403, Forbidden, for resources that are protected by security roles that I newly added within a user session. It does not seem to look into what's in the "Roles" Group anymore until I log out the current user and log back in.

Does Tomcat keep its own cache of a user's security roles separate from what's stored in the Subject? Does this have anything to do with SSO? What am I missing? Thanks, -- Rick

1. Re: How to dynamically change security role without logging out
Scott Stark, May 7, 2005 10:14 AM (in response to rickkw)

There is no support for refreshing an existing login's associated roles without reauthenticating.

2. Re: How to dynamically change security role without logging out
rickkw, May 9, 2005 4:51 PM (in response to rickkw)

Thanks for the quick response Scott. I am wondering if JBossGenericPrincipal can be made a public class. The security roles I have are stored in my callerPrincipal. I am experimenting with subclassing JBossSecurityMgrRealm and, in the getCachedPrincipal method, returning a subclass of JBossGenericPrincipal that overrides getRoles() and hasRole(). I am currently stuck at JBossGenericPrincipal being a package-private class that I cannot subclass. Thanks, -- Rick

3. Re: How to dynamically change security role without logging out
rickkw, May 9, 2005 8:22 PM (in response to rickkw)

Scott, one more note: I tested the approach of subclassing JBossSecurityMgrRealm as mentioned above. It works well. I had to recompile JBoss to make JBossGenericPrincipal a public class, along with making a few of its methods public. Would you please make this class public? I am using JBoss 4.0.1.

P.S. I would need:
public class JBossGenericPrincipal
public JBossGenericPrincipal(...)
public Principal getAuthPrincipal()
public Principal getCallerPrincipal()
public Object getCredentials()
public Subject getSubject()

Thanks, -- Rick

4. Re: How to dynamically change security role without logging out
Scott Stark, May 10, 2005 8:02 AM (in response to rickkw)

I don't think I want to support that level of integration as it's too tightly coupled to the implementation. There should be some type of refresh capability for the user roles. Create a feature request in JIRA with your changes and I'll see how this can be supported without requiring subclassing and access to the user representation.

5. Re: How to dynamically change security role without logging out
Anil Saldanha, Nov 7, 2007 2:43 PM (in response to rickkw)

Look at the JIRA issue: The workaround is in: For JBoss 5 going forward, we may solve this in a better way than the proposed workaround.

6. Re: How to dynamically change security role without logging out
Marc, Mar 3, 2008 8:46 AM (in response to rickkw)

We were able to finally work around this issue without resorting to turning off all authentication caching in 4.2.2GA. First I flush the authentication cache for the user who needs their roles refreshed. Then I use the new WebAuthentication class that Anil added (see:) to log the user out and programmatically log them right back in. Anil, do you see any drawbacks to this approach? Hope this helps! -Marc
https://developer.jboss.org/message/135815
Python alternatives for PHP functions

Do you know a Python replacement for PHP's Strings? Write it!

Note: Unlike the two other syntaxes, variables and escape sequences for special characters will not be expanded when they occur in single quoted strings.

Since PHP 5.3, this limitation is valid only for heredocs containing variables.

Example #1 Invalid example

<?php
class foo {
    public $bar = <<<EOT
bar
EOT;
}
?>

The above example will output: My name is "MyName". I am printing some Foo. Now, I am printing some Bar2. This should print a capital 'A': A

It's also possible to use the heredoc syntax to pass data to function arguments:

Example #3 Heredoc in arguments example

<?php
var_dump(array(<<<EOD
foobar!
EOD
));
?>

Note: Heredoc support was added in PHP 4.

A nowdoc is identified with the same <<< sequence used for heredocs, but the identifier which follows is enclosed in single quotes, e.g. <<<'EOT'. All the rules for heredoc identifiers also apply to nowdoc identifiers, especially those regarding the appearance of the closing identifier.

Example #4 Nowdoc string quoting example

<?php
$str = <<<'EOD'
Example of string
spanning multiple lines
using nowdoc syntax.
EOD;
?>

Note: Unlike heredocs, nowdocs can be used in any static data context. The typical example is initializing class members or constants:

Example #5 Static data example

<?php
class foo {
    public $bar = <<<'EOT'
bar
EOT;
}
?>

<?php
$beer = 'Heineken';
echo "$beer's taste is great"; // works; "'" is an invalid character for variable names
echo "He drank some $beers";   // won't work; 's' is a valid character for variable names but the variable is "$beer"
echo "He drank some ${beer}s"; // works

error_reporting(E_ALL);
$fruits = array('strawberry' => 'red', 'banana' => 'yellow');
// Works, but note that this works differently outside a string
echo "A banana is $fruits[banana].";
// Works
echo "A banana is {$fruits['banana']}.";
// Works, but PHP looks for a constant named banana first, as described below.
echo "A banana is {$fruits[banana]}.";
// Won't work, use braces. This results in a parse error.
echo "A banana is $fruits['banana'].";
// Works
echo "A banana is " . $fruits['banana'] . ".";
// Works
echo "This square is $square->width meters broad.";
// Won't work. For a solution, see the complex syntax.
echo "This square is $square->width00 centimeters broad.";
?>

For anything more complex, you should use the complex syntax. This isn't called complex because the syntax is complex, but because it allows for the use of complex expressions. In fact, any value in the namespace can be included in a string with this syntax. Simply write the expression the same way as it would have appeared outside the string, and then wrap it in { and }. Since { cannot be escaped, this syntax will only be recognised when the $ immediately follows the {. Use {\$ to get a literal {$. Some examples to make it clear:

<?php
// Show all errors
error_reporting(E_ALL);

$great = 'fantastic';

// Won't work, outputs: This is { fantastic}
echo "This is { $great}";

// Works, outputs: This is fantastic
echo "This is {$great}";
echo "This is ${great}";

// Works
echo "This square is {$square->width}00 centimeters broad.";
?>

Note: Functions and method calls inside {$} work since PHP 5.

Characters within strings may be accessed and modified by specifying the zero-based offset of the desired character after the string using square array brackets, as in $str[42]. Think of a string as an array of characters for this purpose.
Note: Strings may also be accessed using braces, as in $str{42}, for the same purpose. However, this syntax is deprecated as of PHP 6. Use square brackets instead.

The string will be evaluated as a float if it contains any of the characters '.', 'e', or 'E'. Otherwise, it will be evaluated as an integer:

<?php
echo "\$foo==$foo; type is " . gettype($foo) . "<br />\n";
?>

Do not expect to get the code of one character by converting it to integer, as is done in C. Use the ord() and chr() functions to convert between ASCII codes and characters.
http://www.php2python.com/wiki/language.types.string/
I got a moral question from an author of programming language textbooks the other day requesting my opinions on whether or not beginner programmers should be taught how to use arrays. Rather than actually answer that question, I gave him a long list of my opinions about arrays, how I use arrays, how we expect arrays to be used in the future, and so on. This gets a bit long, but like Pascal, I didn't have time to make it shorter.

Let me start by saying when you definitely should not use arrays, and then wax more philosophical about the future of modern programming and the role of the array in the coming world.

You probably should not return an array as the value of a public method or property, particularly when the information content of the array is logically immutable. Let me give you an example of where we got that horridly wrong in a very visible way in the framework. If you take a look at the documentation for System.Type, you'll find that just looking at the method descriptions gives one a sense of existential dread. One sees a whole lot of sentences like "Returns an array of Type objects that represent the constraints on the current generic type parameter." Almost every method on System.Type returns an array, it seems. Now think about how that must be implemented. When you call, say, GetConstructors() on typeof(string), the implementation cannot possibly do this, as sensible as it seems:

public class Type
{
    private ConstructorInfo[] ctorInfos;
    public ConstructorInfo[] GetConstructors()
    {
        if (ctorInfos == null)
            ctorInfos = GoGetConstructorInfosFromMetadata();
        return ctorInfos;
    }
}

Why? Because now the caller can take that array and replace the contents of it with whatever they please. Returning an array means that you have to make a fresh copy of the array every time you return it. You get called a hundred times, you'd better make a hundred array instances, no matter how large they are. It's a performance nightmare – particularly if, like me, you are considering using reflection to build a compiler. Do you have any idea how many times a second I try to get type information out of reflection? Not nearly as many times as I could; every time I do it's another freakin' array allocation!

The framework's designers were not foolish people; unfortunately, we did not have generic types in .NET 1.0. Clearly the sensible thing now for GetConstructors() to return is IList<ConstructorInfo>. You can build yourself a nice read-only collection object once, and then just pass out references to it as much as you want.

What is the root cause of this malaise? It is simple to state: The caller is requesting values. The callee fulfills the request by handing back variables. An array is a collection of variables. The caller doesn't want variables, but it'll take them if that's the only way to get the values. But in this case, as in most cases, neither the callee nor the caller wants those variables to ever vary. Why on earth is the callee passing back variables then? Variables vary. Therefore, a fresh, different variable must be passed back every time, so that if it does vary, nothing bad happens to anyone else who has requested the same values.

If you are writing such an API, wrap the array in a ReadOnlyCollection<T> and return an IEnumerable<T> or an IList<T> or something, but not an array. (And of course, do not simply cast the array to IEnumerable<T> and think you're done! That is still passing out variables; the caller can simply cast back to array! Only pass out an array if it is wrapped up by a read-only object.)
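For what it's worth, here is a minimal sketch of the "wrap once, hand out references" approach described above. The class and helper names are invented for illustration (this is not the real System.Type), and the stub simply reuses reflection to obtain something to wrap:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Reflection;

public class CachedTypeInfo
{
    private ReadOnlyCollection<ConstructorInfo> ctorInfos;

    public IList<ConstructorInfo> GetConstructors()
    {
        // Build the read-only wrapper exactly once...
        if (ctorInfos == null)
            ctorInfos = new ReadOnlyCollection<ConstructorInfo>(LoadConstructorsFromMetadata());

        // ...then return the same reference on every call. Callers cannot cast it
        // back to the underlying array, and no per-call copy is needed.
        return ctorInfos;
    }

    // Hypothetical stand-in for reading constructor metadata directly.
    private ConstructorInfo[] LoadConstructorsFromMetadata()
    {
        return typeof(string).GetConstructors();
    }
}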
That's the situation at present. What are the implications of array characteristics for the future of programming and programming languages?

Parallelism Problems

The physics aspects of Moore's so-called "Law" are failing, as they eventually must. Clock speeds have stopped increasing, transistor density has stopped increasing. The laws of thermodynamics and the Uncertainty Principle are seeing to that. But manufacturing costs per chip are still falling, which means that our only hope of Moore's "Law" continuing to hold over the coming decades is to cram more and more processors into each box. We're going to need programming languages that allow mere mortals to write code that is parallelizable to multiple cores.

Side-effecting change is the enemy of parallelization. Parallelizing in a world with observable side effects means locks, and locks means choosing between implementing lock ordering and dealing with random crashes or deadlocks. Lock ordering requires global knowledge of the program. Programs are becoming increasingly complex, to the point where one person cannot reasonably and confidently have global knowledge. Indeed, we prefer programming languages to have the property that programs in them can be understood by understanding one part at a time, not having to swallow the whole thing in one gulp. Therefore we tools providers need to create ways for people to program effectively without causing observable side effects.

Of all the sort of "basic" types, arrays most strongly work against this goal. An array's whole purpose is to be a mass of mutable state. Mutable state is hard for both humans and compilers to reason about. It will be hard for us to write compilers in the future that generate performant multi-core programs if developers use a lot of arrays.

Now, one might reasonably point out that List<T> is a mass of mutable state too. But at least one could create a threadsafe list class, or an immutable list class, or a list class that has transactional integrity, or uses some form of isolation or whatever. We have an extensibility model for lists because lists are classes. We have no ability to make an "immutable array". Arrays are what they are and they're never going to change.

Conceptual Problems

We want C# to be a language in which one can draw a line between code that implements a mechanism and code that implements a policy. The "C" programming language is all about mechanisms. It lays bare almost exactly what the processor is actually doing, providing only the thinnest abstraction over the memory model. And though we want you to be able to write programs like that in C#, most of the time people should be writing code in the "policy" realm. That is, code that emphasizes what the code is supposed to do, not how it does it. Coding which is more declarative than imperative, coding which avoids side effects, coding which emphasizes algorithms and purposes over mechanisms, that kind of coding is the future in a world of parallelism. (And you'll note that LINQ is designed to be declarative, strongly abstract away from mechanisms, and be free of side effects.)

Arrays work against all of these factors. Arrays demand imperative code, arrays are all about side effects, arrays make you write code which emphasizes how the code works, not what the code is doing or why it is doing it. Arrays make optimizing for things like "swapping two values" easy, but destroy the larger ability to optimize for parallelism.
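To make the imperative-versus-declarative contrast concrete, here is a small illustrative sketch; the data and the threshold are invented for the example, and the LINQ form is simply one way of stating the same query without visible indexing or mutation:

using System;
using System.Collections.Generic;
using System.Linq;

static class DeclarativeVsImperative
{
    static void Main()
    {
        int[] temperatures = { 72, 81, 68, 90, 77 };   // invented sample data

        // Imperative, array-flavoured style: spell out *how*, with indexing and mutation.
        var hot = new List<int>();
        for (int i = 0; i < temperatures.Length; i++)
        {
            if (temperatures[i] > 75)
                hot.Add(temperatures[i]);
        }
        hot.Sort();

        // Declarative LINQ style: state *what* is wanted; no visible indexing or side
        // effects, which leaves the implementation free to reorder or parallelise work.
        IEnumerable<int> hotDeclarative = temperatures.Where(t => t > 75).OrderBy(t => t);

        Console.WriteLine(string.Join(", ", hot));
        Console.WriteLine(string.Join(", ", hotDeclarative));
    }
}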
Practical Problems

And finally, given that arrays are mutable by design, the way an array restricts that mutability is deeply weird. All the contents of the collection are mutable, but the size is fixed. What is up with that? Does that solve a problem anyone actually has? For this reason alone I do almost no programming with arrays anymore. Arrays simply do not model any problem that I have at all well – I rarely need a collection which has the rather contradictory properties of being completely mutable, and at the same time, fixed in size. If I want to mutate a collection it is almost always to add something to it or remove something from it, not to change what value an index maps to.

We have a class or interface for everything I need. If I need a sequence I'll use IEnumerable<T>, if I need a mapping from contiguous numbers to data I'll use a List<T>, if I need a mapping across arbitrary data I'll use a Dictionary<K,V>, if I need a set I'll use a HashSet<T>. I simply don't need arrays for anything, so I almost never use them. They don't solve a problem I have better than the other tools at my disposal.

Pedagogic Problems

It is important that beginning programmers understand arrays; it is an important and widely used concept. But it is also important to me that they understand the weaknesses and shortcomings of arrays. In almost every case, there is a better tool to use than an array. The difficulty is, pedagogically, that it is hard to discuss the merits of those tools without already having down concepts like classes, interfaces, generics, asymptotic performance, query expressions, and so on. It's a hard problem for the writer and for the teacher. Fortunately, for me, it's not a problem that I personally have to solve.

Very interesting point of view! Your arguments against arrays sound very strong and I personally agree with them. It will be interesting if there is anyone who can advocate poor arrays :)

Instead of GetConstructors() returning IList<ConstructorInfo>, I would prefer the framework actually define a readonly list interface such as IReadOnlyList<T> and return an instance of that. Returning IList<T> for a readonly list is a bad idea because you're not actually returning an IList<T>. You're returning an object that is kind of an IList<T> since it can't fulfill several of the methods.

Though probably not what you want, IEnumerable<T> is effectively a read-only list.

How strange, I was tackling this very issue myself just this afternoon. I wanted to use the array because the code DOM serializer can persist them in nice ways, however, I was aware of the mutable nature of their contents. I managed to solve my serializer problem by having a constructor take IEnumerable<string>, provide the array to consumers as a read-only IList<string> property, and have my type converter wrap the IList in a List<string> and use ToArray with an instance descriptor to my IEnumerable<string> constructor. This way the code DOM serializer persists nicely (rather than persist an IEnumerable which it insists on handling as a resource blob), but my constructor supports more collections and my array contents are immutable for the lifetime of the constructed object. Of course, I could've worked on writing a nicer serializer, but this route was simpler. Having been caught out by arrays in many of the ways you mention, I whole-heartedly concur with you.

@Jonathan, It's more of a sequence than a list though.
List's have several properties that set them apart from sequences. Namely O(1) random access and O(1) size calculation. In this situation we're converting from an array to a new data structure. It seems more natural to go with a list since we already have all of the elements grouped together. Then again, if reflection could be done more efficiently "on demand" then a sequence would potentially be better. I like how you bring the topic of arrays back into the more general topic of mutable data. I've been writing lots of Python the last two years, and over that time my style has morphed into almost exclusivly using immutable types. Most of the time, I even use a tuple as a collection instead of an array simple because it's immutable. Clearly, a library writer cannot safely pass out references to internal mutable data. I also like how you steer the topic of mutability into the topic of concurrency since, as you describe, they are so closely related. Again, I've found that over the last few years my style of programming has morphed, not just from mutable data to immutable data, but from thread-based concurrnecy to actor-based concurrency. You describe a good solution for C# to address readonly/immutable data, but I've yet to see a good solution in C# for concurrency. Of course, if you addressed THAT in this post, it would have become quite a long post! Still, I'm very curious to see what future versions of C# do to address concurrency. It's an incredibly important topic and no one except the Erlang people seem to have taken it head-on, and everyone seems to think they are crazy for one reason or anothoer. Will we be seeing a "threads considered somewhat harmful" post soon? arrays are good for image processing and science applications: each cell represents a physical object or an atom or a pixel. for images, the size really is fixed but the pixels change. that said, I don't know of other places where they are a good model, and your article is generally spot on. The problem with returning arrays from methods is actually a subset of a more general problem: returning references to private data. The very same argument you make against returning arrays can also be made against returning another object. Also, I don't think that it is bad to return an array casted to IEnumerable<T>. You argue that the caller can simply cast it back -- true. But even if we wrap it in another object, the caller can still access it; if not through reflection, then through unsafe memory operations or whatever. If the caller insists on breaking your code, he will be able to do so regardless. I like arrays for some things. For example, String.Split(). I use that frequently, and then I modify the contents of individual indexes frequently. If Split() returned ReadOnlyCollection<string> I'd end up converting that to a string array, frequently. Likewise, any time you're working with fixed-length linear data, arrays are appropriate. However, I fully agree with the occasional scenario where arrays were used only because generics weren't available, such as in the scenario you demonstrated with, which quite honestly System.Reflection is one of few places I've ever been really annoyed that .net gave me an array. That said, my reasons were quite the opposite from yours. I wanted *more* mutability. For example, GetProperties returns an array and say I wanted to loop through the list and remove the items that didn't match certain criteria, such as checking for certain custom attributes. 
In both cases of ReadOnllyCollection<PropertyInfo> and PropertyInfo[], I have to convert to a List<PropertyInfo> to do that. Quite a pain. So in other words, no one can be happy. I completely agree with Jared. Having an interface that is essentially IEnumerable<T>+this[int] {get;} (IIndexedEnumerable?) woud be the best solution for this problem. Also this interface would solve the future problem of result variance. Surely this issue would be best addressed by adding a read-only array at the CLR level, or even better a recursive read-only tag for arbitrary types? Then we could have C++ style const-ness as well... I second the request for an expanded interface for read only collections (but perhaps we should take this to the BCL team instead). The more I read on this blog, the more I'm looking forward to PDC. Keep them coming Eric! Superb post. Thanks for that. Story of Engineering. The very foundations and thought asssumptions can crumble at any time. It is ironic how unsuited Arrays are for anything. Most of the time I use arrays in situations where I mean "Block of information for you to look at, not touch it". Indeed that construct is utterly unsuited. Will it not be possible in the future to state that an array should be read-only? Or is it impossible for some known unknown ? Or is it even a silly question because I still don't get it? ;) To Jon Davis: If you need to apply a certain criteria on the list content, you should apply a predicate on it (i.e. myList.Where(...)). In general, if you need to change the [object] stream content, produce a new stream with correct content. This will comply with goals of this article. Basically that's what LINQ does. Kosta My fingers are crossed that CLR embraces Const at some point in the future. Const arrays the enforce immutability at compile-time would be wonderful. Care to share the details of "...considering using reflection to build a compiler..."? ReadOnlyCollection<T> return references to object. The object's members can be changed. We just can't add, remove or change a reference to the ReadOnlyCollection. So returning this collection still does not guarantee 100% readonly collection I mean Add, Remove or change the reference of objects already inside the collection. @Thomas, As I'm sure Eric would say (because he has so many times before), you cannot stop someone accessing private data if they have full security access; in that case you have to assume that they have access to all of the memory in the process. What matters is protecting data that is being passed to code that does NOT have full security access, in particular code that cannot use reflection. In that case, accessing private data inside your class would not be possible, but casting your return value from an IEnumerable to an array would be perfectly legal, and would give that untrusted code mutable access to your private data. @chris The CLR doesn't have a comprehensive 'const' semantic like C++, and in order to maintain interoperability with other programming languages, including C#, it probably never will. That said, you can take and appy attributes to just about everything, including return values. You can invent 'const', or use an existing attribute for that purpose. You could then extend each programming language to require it, and add runtime checks on the IL of callers to ensure they don't perform non-const modifications to const return values, references, and argumenst. 
And you could extend existing languaged -- writing a C# compiler that issues errors if you violate const correctness. But that's a lot of work. @Eric I appreciate that you'd like to return read-only-proxies in many cases instead of arrays, especially in 'getters'. But Joe Duffy pointed out at (link) that proxies, such as enumerators, are often more expensive than they seem on the current JITter, more expensive than such proxies are in compiled languages (i'm thinking of vector::const_iterator, for instance). "In almost every case, there is a better tool to use than an array" -- i'd agree. It's unfortunate that there's a perf penalty to those better tools, but often it can be justified by the benifits (in product reliability, reduced development costs, bugs found earlier) of proxies. #aaron Indeed. Attributes are the best you can do in C#. If you were writing your own language what you'd probably want to do is use an optional type modifier rather than an attribute; in C++, adding "constness" changes the _signature_ of a method. In the CLR, adding an attribute does not change a signature but adding an optional type modifier does. (This is how managed C++ does constness.) > Care to share the details of "...considering using reflection to build a compiler..."? Sure. csc foo.cs /r:bar.dll Somehow the compiler has to get type information out of bar.dll in order to compile foo.cs. Reflection seems like a good candidate. Today the compiler uses the unmanaged metadata reader interfaces, but I would like to write any new tools we come up with in C#, which means that suddenly System.Reflection becomes a reasonable choice. Not the only choice, but a reasonable one. "transistor density has stopped increasing" Do you mean will stop increasing in the future? Aren't Intel and AMD are both talking about new process generations with higher transistor density. For instance: (link) and (link) 45 nm is now shipping and 32nm and 22nm are both planned. On the issue of arrays, their use can dramatically increase performance in multi-processor, shared memory architectures. They give you locality. You allocate them all at once, so the memory is contiguous. This means cache hits are far more likely with array access. If you are designing producer/consumer queues in C++, you can align the queues to start on a processor cache line, and control the enqueue and dequeue processes such that the cache lines do not "ping pong" between processors. So yes, that solves a problem that I had. Arrays are generally preferred over linked structures when performance is a concern. Here is a typical reference from processor optimization manual: ." What I'm saying is that I've found arrays to be a critical tool to solve performance issues and of course you have to be careful to use them correctly. OK, so lets suppose that transistor density in the very highest end machines quadruples in the next generation. Does anyone seriously believe that it will ever quadruple again? Maybe density is still increasing, but the end is nigh. And as for performance -- sure, there are plenty of applications where performance is gated on minimizing cache misses. If careful testing of your program against user-focused metrics indicates that cache misses are your hot spot AND your performance is unacceptable then sure, consider using an array. But understand that doing so comes with costs of its own, namely, having a whole pile of mutable state which must be carefully protected from misuse. 
@russelh: If you need minimize cache misses, usually it's enough back your IList by an array - this will bring in the needed memory management optimization without sacrificing the interface elegance. I know this wasn't the focus of your post, but doesn't Moore's Law often taper off from time to time, only to kick back into action via some massive technological leap? It happened with transistors coming from vacuum tubes, and it'll probably happen again with one of the newer areas of research, like quantum chips or DNA or whatever. Maybe in the literal sense, transistor density won't continue to increase forever, but computing power in general probably will, and not just through the addition of more cores. On topic: as others have commented, it seems that 9 times out of 10, what I really need is a read-only, indexed sequence. I always think, hey, let's make things as general as possible by using an IEnumerable<T>, and then realize later on that I need indexed access, and have to change it to an IList<T>, which never feels quite right because its "immutability" is just in the implementation. And using an actual ReadOnlyCollection<T> instance never feels quite right either, I don't know why. 'Course I realized while writing this that it would be trivially easy to write an IReadOnlyList<T>, a wrapper class, and a couple of extension methods for conversion; don't know why this never occurred to me before. Maybe because it's a stupid idea and I haven't yet realized *that*. :-) Moore's Law is badly named. It really ought to be "Moore's Observation"; just because a particular economic trend has held for a few years does not make it a law of nature. It is the case that every time a particular technology has been tapped out, a new technology has come along to replace it. Similarly, every time we've gotten close to running out of oil, either new reserves have been discovered, or new technologies for extending existing reserves have been found. It might well be the case that future technologies open up massive new vistas in cheap computational power. And it might be the case that all the oil we'll ever need is available for cheap in some place that we've not looked for it yet. But there is no law of physics that is going ensure that either case comes to pass, and it would therefore be unwise to assume that this will happen. If new magical technologies are invented, great. I'm pro that. But I'm going to make the conservative assumption that refinements of old technology are what we've got to work with now and going forward. Or, look at it another way. I cannot possibly design tools for an unknown future that is radically different from today. I could guess, and likely be wrong, or I could design tools for the most likely future. > Will we be seeing a "threads considered somewhat harmful" post soon? No, because I already wrote it in 2004: I was probably overreacting, or focusing on "using" an array vs. "handing out" an array. I guess StringBuilder is a decent example of encapsulating an array in a useful way. @Aaron G: When I was in graduate school in the 1991-1992 time frame, optical computing was supposed the next big thing. No one (at least not my professors) thought CMOS technology would get as far as it has. I'm not disagreeing with Eric. Clearly, the trend now is in the direction of more and more cores (threads) per processor with constant or decreasing throughput per thread. Who says arrays must be mutable? The point is mute if they are not. 
Apart from C# being so deficient with constness (which no matter what you do at runtime will never bring the point back home: compile-time checks and optimisations that are slowly catching up in compilers), it is very deficient with poor-man sequence abstraction. There is no way to do reverse iteration, there is ugliness of ReadOnlyCollection, there are huge, huge perf hits across the board because 'managed' brings 'overhead'. There are hacks on foreach compilation time and time again, there are hacks on generic lists in VM implementation and IL. Seems to me the entire posts has gone off on the tangent of critising something CLR adopted in memory management only to lose itself in another area of non-coherency. Proper sequences (compile-time, meta-programming and more), and vectors are brilliant, mutable or not. Arrays are just a representation you can swap however you wish. That is the real problem, the CLR, not the array type. While here, the 'idioms' you mentioned for C#, they are more deficient that people are willing to admit. Here we are again fighting the runtime against something we losing control over. So wait for VM to catch up? I am in complete agreement with russelh and especially on that long and trivial readlines example, two blog entries back. That classic against an entire lecture in complication and IEnumerable not cutting it with all the candy: while ((line = reader.ReadLine()) != null) simply does it, and no idiom of C# can do it more elegant. If you take it into proper sequence world: while ( std::getline ... or cont<T> sequence( (std::istream_iterator< T >( T ) ), std::istream_iterator< T >() ); Than you can even reverse it in one liner: std::reverse, transform it into anything, and do all that with any type or anything at COMPILE-TIME.. Whoaa, C# looks like 1960s tech in comparison. There is some serious compile-time problems, that move into runtime, then there are patches to compilers, and finally we get LINQ. The problem is that LINQ code is around 50% slower than simply generic lists, and lists are slower than arrays and so it goes on an on. The lesson I learned 20 years back is that if something isn't optimal, it isn't the type, it is the tech or abstraction over it causing the problem, usually a performance and in this instance it is a clear, inferior semantic confusion of 'sequence' in CLR. Aren't arrays actually quite good when it comes to parallelism? They're really easy to divide into chunks processed by different threads, unlike linear data structures such as lists. As you point out, they're not mutable (only the elements are mutable), which helps there (or rather, it doesn't hurt). Your comments about metadata are also shifting the blame for CLI's lack of an equivalent to const& onto arrays, which hardly seems fair. On the other hand, I think you've ignored some of the *actual* pedagogical problems such as jagged vs. rectangular arrays. In the spirit of "considered harmful", I consider your considerations harmful! I think you forget to mention another, more important, point: having a readonly collection of references to objects won't save you from modifying objects. Thus IList<T> won't solve the problem you're mentioning! You either have to copy all objects (or use value types), or wrap all objects your array or collection contains, in a "readonlifier" wrapper. But the best thing is to design your objects (and it doesn't matter if you return them as items of an array or an collection) so that you have different interfaces for different uses. 
And C++'s const array modifier perfectly solves the problem you're mentioning. My 5c, I might be too sleepy to miss the whole point :) (I've recently replaced LINQ queries with direct array addressing in array to get 5x performance boost. At the same time the interface between the algorithm and the data store has remained the same. But I with all hands for your arguments regarding paralellization. Just arrays is not the key reason of why some algorithms are hard to parallel) Developers in my team are encouraged (with threats of violence) to return interface types wherever possible, using the minimum interface required by the consumer, e.g. IEnumerable<T> if iteration is required, or IList<T> if indexed access is necessary. Ree: With the aforementioned move into many-core processors, the ability to parallelize is becoming more important than single-thread performance, and this is where LINQ will come into its own, especially with the PFX extensions. Roman: ConstructorInfo is an immutable class, in which instance the example is valid. Personally, I tend toward immutable classes for everything except business objects. And as for Moore's Law: what about memristors? "It is important that beginning programmers understand arrays;" It seems to me that one of the main reasons for this was so that they could then quickly understand strings. But that day is long gone. In a world where strings are now first class citizens of their own, it should be possible to leave arrays as a concept to be introduced at the same time as other more appropriate collections. It's much more important to understand the difference between O(1) and O(n), I agree :) In this carnival there're a lot of software design/patterns and frameworks a bit of SOA, UML, DSL @danyel: That may be, but in that case you're still talking about something that should be wrapped in something like an Image class, and not exposed to public view. @ree: It is generally more important to have code that is more maintainable and more amenable to reason than it is to have code that is more performant. The fastest code serves no purpose if it's simply the fastest to produce an incorrect result, or the fastest to befuddle. I'll take the CLR's abstractions any day. @kfarmer, Part of Ree's point was that, if the CLR supported const and non-mutable arrays, you wouldn't need an abstraction to expose array-like data, and you could have the best of both worlds, maintainability and performance. Which is a perfectly valid point. Whether the gains from supporting const and immutability in the CLR would be worth the effort is another question entirely. I say CLR devs hate arrays because CLR does not efficiently deal with arrays. Also, CLR memory management makes using arrays inefficient. Ok, simple example. A string. In a language like C++, a string is an array of chars. Why is array of chars so handy? Because I can modify the chars inside the array and avoid HUGE performance costs of creating new strings over and over and over when doing inline textual processing. I wrote a simple app to process newsgroup posts and push them into SQL Server in C#. Due to string manipulations, I was burning huge amounts of CPU in GC collecting strings and StringBuilders. In C++, I could read the data from NNTP directly into a byte array, then rip through the byte array and do fixups and scan for bad things, and in some cases where there were no fixups pass the byte array directly to SQL Server for upload. Zero allocs, zero memcpys in primary code path. 
Eventually I redesigned my C# code to do the same thing but it was painfully difficult to do. You can't just cast a byte array to char * in C#. Overall the design compromises the CLR made are sensible. These design choices have handicapped arrays in C#, let's be honest about this at least.

In C++ char arrays have probably caused 10,000 more huge horrible disasters than they have solved.

Most of the generic collections dealing in atomic list-type structures internally use arrays (with the exception of SortedDictionary, which uses a tree, and LinkedList, which deals with references to succeeding and preceding LinkedListNodes, and a few others). Now, unless you specify (if you are able to) an initial capacity, adding items dynamically, whether using Add() or the indexing property of the collection you are using, will trigger an allocation of a new array, and an Array.Copy(), when the item you are currently attempting to add would result in a new size that exceeds the current internal size of the collection. This can affect performance. However, what's even worse is that all of the adding/setting mechanisms in these collections trigger a check to determine whether the item is already in the internal array (usually using a binary search). Furthermore, this check forces invocations of methods in helper classes that aid in comparison and equality-calculations - this process results in pushing classes and methods onto the call stack. A lot of overhead, and ALL of these contribute to performance strain. If I have a collection and know certain things at runtime, such as how big it will be, that items I'm adding are new, etc., I choose an array to avoid all the performance stresses and overhead. I like to have a great granularity of control over how I deal in collections, so although I do agree with most of your points, I'd still be hard-pressed to pick a List over an array when I don't need to do more than basic array manipulation.

On another topic about arrays, I really wish that the framework would allow us to be able to create an array with all elements initialized to a particular value of our choosing at creation time. For example, if I want an array of, say, 100 bools, where all elements should be set to true, I have to instantiate the array, then initialize it with a loop; it would be really helpful to just be able to write "bool[] arr = new bool[100](true);", or something like that.
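As an aside on that last wish: there is no dedicated array syntax for this, but one workaround available since .NET 3.5 is to build the filled sequence with LINQ and materialise it. A small sketch (the sizes and values here are arbitrary):

using System;
using System.Linq;

static class FilledArrays
{
    static void Main()
    {
        // A 100-element bool[] with every element set to true.
        bool[] flags = Enumerable.Repeat(true, 100).ToArray();

        // The same idiom works for any element type or value.
        int[] sentinels = Enumerable.Repeat(-1, 16).ToArray();

        Console.WriteLine(flags.All(f => f));  // True
        Console.WriteLine(sentinels[0]);       // -1
    }
}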
In such cases, the following works: struct ConstArray<T> { public ConstArray(T[] values) { this.values = values; } public T this[int index] { get { return values[index]; } } public int Length { get { return values.Length; } } T[] values; } (It would be even better if the jitter could optimise index access in loops as it does for system arrays.) BTW, here's yet another (forlorn, I know) plea for the C++ const modifier, which, as has been pointed out, would fix this issue. Would an empty array be safe to return? Very recently on a project, I was having significant issues with System.IO.Directory.GetFiles, in which In general I'm beginning to come to this view. Arrays don't express programmer intent in the fast majority of cases. Is there any chance that array variance could get fixed in future versions of the framework? My understanding is that becuase of it, the runtime has to type check all array operations. I could see this being a huge win in the cases that I really need an array (like image manipulation), and for all of the framework classes that are using arrays under the hood (such as List<T> I presume). Perhaps the JIT is already smart enough to avoid the penalty in most cases. Just pointing out that the C++ const modifier is an illusion - it does not solve any issues regarding immutability whatsoever, courtesy of const_cast<>. So if the CLR team ever decides to add a const modifier, I'm hoping that they will not make the same mistake that the C++ standards committe did... I was just noticing that arrays get nice syntax help in C#: var a = new[] { 1, 2 }; int[] b = { 1, 2 }; while other collections have to work a little harder: var c = new Dictionary<string, int> { {"a", 1}, {"b", 2} }; // OK Dictionary<string, int> d = { { "a", 1 }, { "b", 2 } }; // error CS0622: Can only use array initializer expressions to assign to array types. Try using a new expression instead. Too bad. Hey, PowerShell can do it: $myHash = @{ "a" = 1; "b" = 2 } @Jay F# can do it succintly as well let d = [("a",1);("b",2)] |> Map.of_list Your comments do not apply to value types, so as long as it is an array of say int, string, KeyValuePair<TKey, TValue> we are fine. A while back, Eric Lippert talked about arrays being somewhat harmful .  The reason being, if you QUOTE:You probably should not return an array as the value of a public method or property, particularly when the information content of the array is logically immutable. Let me give you an example of where we got that horridly wrong in a very visible A bit of light reading while you digest your turkey sandwiches… Fabulous Adventures In Coding : Arrays Interesting discussion. As someone who moved from Algol 68 to Pascal to C to C++ to VB6 to VB.NET to Perl to Ruby to C++ with CLR I think I've seen a few programming paradigms in my time. I've also seen my share of dogma 'thou shalt not do this' or 'do this or else'. Dogma is dangerous in real life and while programming. The important thing is to understand why certain mechanisms are 'good' or 'bad' or even 'dangerous' and then decide what's appropriate for your application. As I'm sure you all do I try to use the right tool for the job. A quick test script that needs to run on Windows and Linux is easily written in Ruby. A telescope control program that wakes up every second to see if a new event needs to be recorded works well with VB.NET. Now I'm working on astronomical image processing software. This software needs to zip through images consisting of millions of floats or ints. 
I use C++ for that with classes that hide mapping from col/row coordinates to array indexes. Because I use inline classes I get the benefit of immutability of the array and encapsulation in general while still maintaining a semblance of performance. Another poster mentioned that dividing arrays in sections makes good sense for multi-core optimization. I agree with this, for my application anyway. Divide and conquer strategies will work well in this case. Anyway, I'm not ready yet to give up on arrays for storing large amounts of sequential data. It seems silly to store 6 million pixel values in a linked list. But it does make sense to protect that chunk of data with a class that can be unit tested. I find in c# I only use arrays for speed. I use them for cross-referencing data, or anywhere performance is important. Before I read this blog entry I was asking a colleague, does anybody use arrays in c#? Because you're right, there's usually better tools available for what you're trying to do. @Rod: I feel your pain, having been involved in the design and development of scientific, DSP software from back in the 80's. I appreciate C# and .NET and the managed framework for a lot of things and I do feel that it's easier to develop applications now than it was back in MFC days and certainly than in the Good Ol' Win16 days. But there are application domains where an array is simply the best structure to hold large amounts of data for processing. I cannot imagine working with any kind of DSP problem, e.g. signal smoothing, peak detection and integration for scientific analysis; numerical analysis, or low-level high-performance graphics where manipulating the frame buffer(s) is critical, without using arrays. That said, all of the points brought out seem valid to me and I'll certainly keep them in mind when writing code that doesn't fall into these categories. The big question for some applications is actually not whether or not the mutability of the array becomes an issue due to misuse, but whether or not the mutability becomes a *compiler* issue during threading. In other words, can the mutability of an array influence multi-threading access speed because the compiler wronfully deems it necessary to perform synchronization? This becomes quite important in cases where the threads are rather large but you *know* that they will only perform read operations, and the question is whether or not the compiler and the JIT will be able to recognize this and ditch synchronization. I have been unable to find any answers to this so far, but i plan to do some testing in the near future. Mutability of arrays aside, I tend to use my own IReadonlyCollection interface by way of an adapter for the ReadonlyCollection. You can get my code from
http://blogs.msdn.com/ericlippert/archive/2008/09/22/arrays-considered-somewhat-harmful.aspx
A value class that defines the style for filling a path.

#include <Wt/WBrush>

Detailed Description

A value class that defines the style for filling a path. A brush defines the properties of how areas (the interior of shapes) are filled. A brush is defined either as a solid color or a gradient.

A WBrush is JavaScript exposable. If a WBrush is JavaScript bound, it can be accessed in your custom JavaScript code through its handle's jsRef(). At the moment, only the color() property is exposed, e.g. a brush with the color WColor(10,20,30,255) will be represented in JavaScript as:

Member function documentation:

- Creates a solid brush of a given color. Creates a solid brush with the indicated color.
- Creates a solid brush with a standard color. Creates a solid brush with the indicated color.
- Returns a JavaScript representation of the value of this object. Implements Wt::WJavaScriptExposableObject.
- Comparison operator. Returns true if the brushes are different.
- Comparison operator. Returns true if the brushes are exactly the same.
- Sets the brush color. If the current style is a gradient style, then it is reset to SolidPattern.
- Sets the brush gradient. This also sets the style to GradientPattern.
- Returns the fill style.
https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WBrush.html
WRITE(2)                    Linux Programmer's Manual                    WRITE(2)

write - write to a file descriptor

#include <unistd.h>

ssize_t write(int fd, const void *buf, size_t count);

write() writes up to count bytes from the buffer starting at buf to the file referred to by the file descriptor fd.

On success, the number of bytes written is returned. On error, -1 is returned, and errno is set to indicate the cause of the error.

Note that a successful write() may transfer fewer than count bytes. Such partial writes can occur for various reasons; for example, because there was insufficient space on the disk device to write all of the requested bytes, or because a blocked write() to a socket, pipe, or similar was interrupted by a signal handler after it had transferred some, but before it had transferred all of the requested bytes. In the event of a partial write, the caller can make another write() call to transfer the remaining bytes. The subsequent call will either transfer further bytes or may result in an error (e.g., if the disk is now full).

This error may relate to the write-back of data written by an earlier write(), which may have been issued to a different file descriptor on the same file. Since Linux 4.13, errors from write-back come with a promise that they may be reported by subsequent write() requests, and will be reported by a subsequent fsync(2) (whether or not they were also reported by write()). Some errors might be delayed until a future write(), fsync(2), or even close(2).

An error return value while performing write() using direct I/O does not mean the entire write has failed. Partial data may be written and the data at the file offset on which the write() was attempted should be considered inconsistent.

close(2), fcntl(2), fsync(2), ioctl(2), lseek(2), open(2), pwrite(2), read(2), select(2), writev(2), fwrite(3)

This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Linux                              2019-10-10                            WRITE(2)

Pages that refer to this page: pv(1), strace(1), telnet-probe(1), close(2), epoll_ctl(2), eventfd(2), fcntl(2), fsync(2), sync(2), syscalls(2), aio_error(3), aio_return(3), aio_write(3), curs_print(3x), dbopen(3), fclose(3), fflush(3), fgetc(3), fopen(3), fread(3), gets(3), libexpect(3), mkfifo(3), mpool(3), puts(3), stdio(3), xdr(3), xfsctl(3), dsp56k(4), fuse(4), lirc(4), st(4), proc(5), systemd.exec(5), aio(7), cgroups(7), cpuset(7), epoll(7), fanotify(7), inode(7), inotify(7), pipe(7), sched(7), signal(7), signal-safety(7), socket(7), spufs(7), system_data_types(7), tcp(7), time_namespaces(7), udp(7), vsock(7), x25(7), netsniff-ng(8), xfs_io(8)
https://man7.org/linux/man-pages/man2/write.2.html
Some context: I'm working on a Grails web application that integrates with our internal business administration systems. It runs on a Tomcat web server and it connects to a DB2 UDB database. Now, like any proper enterprise development effort, we've set up multiple environments to facilitate development and testing separately from our stable production systems. In each environment, we've got a copy of the DB2 database, an instance of the business admin system's executables, an instance of Tomcat running some build of our web app, and a copy of any other systems or objects that are part of the whole ecosystem. In addition, in some environments, we've actually got more than one instance of Tomcat. In production, we run the web-facing applications on a separate instance from the web applications used internally, for security. And we've got an extra Tomcat instance exposed to the internet that runs against the user acceptance testing database, on which we can demo a beta of our web app for a select group of customers.

Problem: So, given that we've got all of these different copies of our website, even if you know which URL and port you've navigated to, it can still be pretty easy to get confused about which system you're interacting with at any given moment. Except for the address, the screens look identical. Imagine having one session open into the production system where you're helping an end user and another one where you're currently testing code. And then you enter some test data into the production system by accident. D'oh! Before we even had the web app in production, users started requesting a visible indication on each page of the website showing which environment they were looking at.

Solution: We really had two platforms that we needed to identify for users: which instance of the database they were connected to and which instance of Tomcat and build of the web application they were logged into. On DB2 UDB, you can retrieve the database name using this statement: values current server. So it was easy enough to code up a method in Groovy to fetch the db name:

static String getEnvironmentName() {
    def sql = your connection string here
    String env = new String()
    try {
        env = (String) sql.firstRow("values current server")[0]
    } catch (SQLException e) {
        // This is a db2 statement. May fail if run on other platforms.
        env = null
    }
    return env
}

To identify the Tomcat instance, we had to be a bit more creative. Most of our Tomcat instances run on the same host, and coding up some kind of matrix that would correlate a port number to an instance was something I had no inclination to have to maintain. And there's nothing that I know of built into Tomcat that applies a name to a server, other than a hostname. Tomcat does, however, allow you to add your own environment variables in its conf/context.xml file. So I decided upon a convention of adding an environment variable to each Tomcat instance specifying its name, and got our sysadmins to agree to it.

<Context>
    ...
    <Environment name="appServerInstance" value="DEVELOPMENT" type="java.lang.String" over ...
</Context>

With an appServerInstance value assigned in each instance, we could use the following code to retrieve the instance name. At our sysadmin's suggestion, we also decided not to assign a name to the production instance, and I wrote my code so that if no appServerInstance value is found, to show nothing on the web page. So, on every instance except the one our end users see, we've got an environment name at the top of the page.
Our end users see nothing, since the system name wouldn't be relevant to them anyway.

import javax.naming.InitialContext
import javax.naming.Context
import javax.naming.NameNotFoundException
...
static String getAppServerInstanceName() {
    Context env = (Context) new InitialContext().lookup("java:comp/env")
    String name = new String()
    try {
        name = (String) env.lookup("appServerInstance")
    } catch (NameNotFoundException e) {
        /* If the appServerInstance property has not been set in Tomcat, then fail gracefully. */
        name = null
    }
    return name
}

Now, to make the solution complete, I added some lazy initialization methods to our login controller that cache both names as session variables.

import javax.servlet.http.HttpSession
import org.springframework.web.context.request.RequestContextHolder

class SysUtil {

    /**
     * Retrieves the database environment name and caches the
     * value on the session context.
     */
    static String getEnvironmentName() {
        String env = new String()
        HttpSession session = RequestContextHolder.currentRequestAttributes().getSession()
        if (!session.environment) {
            env = Sys.getEnvironmentName()
            session.environment = env
        } else {
            env = session.environment
        }
        return env
    }

    /**
     * Retrieves the app server instance name and caches the
     * value on the session context.
     */
    static String getAppServerInstanceName() {
        String env = new String()
        HttpSession session = RequestContextHolder.currentRequestAttributes().getSession()
        if (!session.appServer) {
            env = Sys.appServerInstanceName()
            session.appServer = env
        } else {
            env = session.appServer
        }
        return env
    }
}

All that was left was to show the names somewhere in the main web page template. In a Grails application, that's easily accomplished by editing grails-app/views/layouts/main.gsp. At a co-worker's suggestion (thanks, Andy), I added the names to the page's <title> tag, so that the environment name would show up in the taskbar button name of a minimized window. I also added it to a <div> at the top of the page:

<%@ page ...
<head>
    <g:set
    <g:set
    ...
    <title>
        <g:if${appServer}:
        </g:if>
        <g:layoutTitle
    </title>
    ...
</head>
<body>
    <g:if
    <div class="env">App Server: ${appServer} / Environment: ${environment}</div>
    </g:if>
    ...
</body>
https://geekcredential.wordpress.com/tag/best-practices/
Jul 02, 2014 11:09 AM|zielony|LINK

I have a project with DAL, BLL and Web layers. Where should I execute the query using ToList()? Right now I execute the query using ToList() in the controller - is that OK? For example - from the beginning:

DAL - my method in the NotesRepository class:

public IQueryable<Notes> GetAllNotes()
{
    return context.Notes.Include(x => x.Comments).OrderByDescending(x => x.CreateDate);
}

BLL - my method in the NotesService class:

public IEnumerable<Notes> GetNotes()
{
    return _unitOfWork.NotesRepository.GetAllNotes();
}

Web - my action in the controller (execution of the query using ToList()):

public ActionResult Index()
{
    var notes = _notesService.GetNotes().ToList();
    return View(notes);
}

Jul 02, 2014 12:37 PM|AidyF|LINK

People will differ in their opinions, but you should return a List from your repository. Your repository is supposed to abstract your data storage from the business layer, and if you return IQueryable then you're not doing this. If (in theory) you wanted to move to a data store that didn't support IQueryable then you wouldn't be able to. It also means your controller needs a connection to your data, as it is only when ToList (or similar) is called that the command is actually executed, so your repository isn't acting like a repository at all. So if you wanted to put your business layer, or your data layer, behind a service etc. you wouldn't be able to, as the controller\BLL and DAL are strongly coupled to each other.

Jul 08, 2014 12:22 PM|jammycakes|LINK

I'd beg to differ. Decoupling your DAL from your business layer and controller sounds like a nice idea in theory, but in practice it causes more problems than it solves. Either you'll lock yourself out of a whole lot of features of your ORM that you need for performance optimisation and other purposes (lazy versus eager loading, caching, transaction management, adding interception into your DAL to handle cross-cutting concerns, streaming tens of millions of objects taking up several gigabytes, and so on) or else you'll end up building something so complex and convoluted that it's riddled with bugs. As for swapping out your data source for something else, in most cases that's just architecture astronaut territory. Ditto putting your data layer or business layer behind a service. Effectively, you'll be optimising for problems that you're not likely to face at the expense of problems that you are.

Jul 08, 2014 01:34 PM|AidyF|LINK

If you need to keep transactions running all the way to your view then I'd say you have some architecture problems yourself. The only real thing you're "locking yourself out of" in terms of ORM is probably lazy loading...but again, if your presentation makes the decisions on when your data layer should employ lazy loading then you have some architecture problems anyway. It seems you might be of the opinion that exposing your data layer to your presentation is better to do simply because it is "easy" rather than thinking about proper separation of concerns. I was addressing the question of repositories specifically; I stand by the view that any class that returns a query isn't a repository, it is a query builder. You already have LinqToSQL and EF as query builders, why do you need your own query builder layer in front? Also if you think that separating business from data in case layers and tiers and technology need to be moved around in future is "architecture astronaut territory" then that is simply a reflection on the projects you have worked on yourself and nothing more.
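For the original question, here is a minimal sketch of the list-returning shape AidyF is describing, reusing the entity, context and field names from the post above (whether the repository returns List<Notes>, IReadOnlyList<Notes> or IEnumerable<Notes> is a separate choice):

// DAL - NotesRepository: run the query here and hand back data rather than a live query.
public List<Notes> GetAllNotes()
{
    return context.Notes
                  .Include(x => x.Comments)
                  .OrderByDescending(x => x.CreateDate)
                  .ToList();   // the SQL executes inside the repository
}

// BLL - NotesService: unchanged, except the data it passes along is already materialised.
public IEnumerable<Notes> GetNotes()
{
    return _unitOfWork.NotesRepository.GetAllNotes();
}

// Web - controller: no ToList() here; the query already ran in the DAL.
public ActionResult Index()
{
    return View(_notesService.GetNotes());
}

The trade-off debated in the rest of this thread applies: once the repository calls ToList(), callers can no longer shape the query (paging, Include, deferred filtering) without adding parameters or further repository methods.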
Jul 09, 2014 04:53 AM | jammycakes

"All the way through to your view" is a straw man -- I'm not suggesting that, though I could see times when you may want to access your transaction manager in your controllers. You're also locking yourself out of a lot more than just lazy loading in terms of ORM features -- as I said above, you're making a rod for your own back if you ever need to implement cross-cutting concerns when saving or updating entities, or caching, or iterating through a very large data set that doesn't all fit in memory. There are also a whole raft of other problems, some of them quite subtle, that you're likely to come across too. Oren Eini, aka Ayende Rahien, goes into this in more detail in his post, "The false myth of encapsulating data access in the DAL." The point I'm making is that true separation of concerns here is nowhere near as straightforward as you think. You're dealing with a complex set of abstractions, and since all abstractions are leaky, you're just stacking up problems for yourself.

Jul 09, 2014 05:27 AM | AidyF

None of your arguments seem to hold any particular weight. In terms of cutting myself out of things... you haven't given any concrete examples of what I'm cutting myself out of that I haven't already addressed. I don't see where caching comes into it either, and disconnecting your back end from your front end doesn't stop you dealing with large data sets too big to fit in memory. It stops you returning them to the front end... but that's a good thing. Again I'm having this nagging doubt that your liking for this approach is one of laziness rather than anything else. As long as you properly design things you shouldn't have any issues. Maybe you don't fully get what I'm describing, as the link you posted doesn't really have much relevance to what I'm saying... I'm talking about decoupling the repository clients from the underlying data access, not abstracting the data access. When dealing with these issues it often comes down to accepting the lesser of the evils, as it is rare that any architecture has no downsides at all. Seeing as you dismiss off-hand any cons to your solution as "astronaut territory", and can only really find invented cons in other solutions (that you may have got the wrong end of the stick over), maybe you should spend less time taking other people's articles out of context and more time programming in the real world :)

Jul 10, 2014 10:55 AM | jammycakes

AidyF: "In terms of cutting myself out of things... you haven't given any concrete examples of what I'm cutting myself out of that I haven't already addressed."

You haven't adequately addressed lazy loading for starters. This is a very real problem of which you were quite dismissive: there are times when you will need to switch lazy loading on or off for the same query in different contexts depending on the size and scope of the data being returned, otherwise you'll end up crucifying performance either with the select n+1 problem or by grabbing too much unnecessary data. This is very often a decision that can only be made in your business layer rather than your repository, and if you're returning an IList from your repository rather than an IDbSet (and hence don't have access to Entity Framework's Include() method), you're not going to be able to do that.

AidyF: "Maybe you don't fully get what I'm describing, as the link you posted doesn't really have much relevance to what I'm saying."

On the contrary, it's very relevant. The point Ayende is making is that your DAL is a much more complex abstraction than you think, and if you take that together with what Joel Spolsky said about every abstraction being leaky, the take-away is that the neat separation of concerns you're advocating simply isn't going to be as neat as you think it is, and if you try to take advantage of the benefits that your separation of concerns claims to offer, you'll find that it simply doesn't work.

AidyF: "I'm talking about decoupling the repository clients from the underlying data access, not abstracting the data access."

I presume that means you're talking about having different front ends interfacing onto your business layer, rather than your business layer interfacing to different data sources? If so, that's not what you said in your previous reply, where you talked about moving from one type of data source to another. If not, then what exactly do you mean? In either case, you're still going to run up against the problems that Ayende described in his post -- you're dealing with a complex and leaky abstraction, and things aren't going to be anywhere near as tidy as you think they are. You will sooner or later come across places where either your separation of concerns will get in the way, or else it will fail to offer the benefits that you expect it to offer because the abstractions are too leaky in ways that you didn't take into account. See Ayende's post again for some concrete examples.

AidyF: "Again I'm having this nagging doubt that your liking for this approach is one of laziness rather than anything else."

No, it's one of experience. Experience of having inherited multiple codebases written with a tidy BLL/BOL/DAL architecture whose performance could only be measured in geologic eras. Experience of "best practices" that got in the way without providing any real benefit whatsoever. Experience of anaemic business "logic" layers that didn't have any logic in them whatsoever but just shunted data between one anaemic data model and another identical anaemic data model in a different namespace. Experience of seeing other frameworks in other languages (e.g. Python/Django, Ruby on Rails, etc.) make a much better job of architecting their solutions. Experience of spinning up a pet project of my own at least partly to experiment with what works best and what doesn't. In any case, don't be so dismissive of favouring simpler ("lazier") options. To quote Edsger W. Dijkstra, "Simplicity is prerequisite for reliability" -- more complex approaches carry an increased risk of introducing bugs, so any extra effort involved needs to be able to justify itself.

Jul 10, 2014 11:49 AM | AidyF

jammycakes: "You haven't adequately addressed lazy loading for starters."

I have: I said it isn't really possible. I also went on to say that if you need it, your code or user interface isn't well designed. If you really need it you can still get around the issue with subsequent calls to the repo. That's all your ORM is doing under the covers anyway. Not as easy as using an ORM that supports lazy loading, but I have admitted that from the start. The number of times I've ever needed to use lazy loading is about zero, so having to throw a little more code around for the times I do need it doesn't unduly concern me. I'll certainly take that issue over having to couple my presentation layer to my database.

jammycakes: "..."

I don't see how using a repo for this would be too much of a hassle, but again it's not a situation I've ever had to address so I'm not going to give it a lot of thought. I'm certainly not going to let one thing you had problems doing with NHibernate a few years ago make me couple my presentation layer to my database.

jammycakes: "On the contrary, it's very relevant. The point Ayende is making is that your DAL is a much more complex abstraction than you think."

And the point I've already made is that I'm not talking about abstracting the data layer, I'm simply saying it should not be tightly coupled to the other layers. Who is fighting straw men now?

jammycakes: "In either case though, you're still going to run up against the problems that Ayende described in his post -- you're dealing with a complex and leaky abstraction."

See above. As I've already said, if you want to tightly couple your layers together due to the benefits it supplies then that's up to you. If you want to dismiss the issues that will cause when layers need to move then that's up to you too. If doing something simple like moving where you call ToList means I get the benefits of any native ORM features (apart from lazy loading), and gives me a more flexible architecture that makes it less of a headache to move things around, then that's just what I'll do :) Also, again as already said, we're discussing what makes something a repository and what a repository should do. The argument about whether you should use a repo in the first place is an entirely different thread.

Jul 10, 2014 12:37 PM | jammycakes

There's one point where I'm not sure where you're coming from here:

AidyF: "I'm not talking about abstracting the data layer, I'm simply saying it should not be tightly coupled to the other layers."

As far as I understand it, these are, for all practical purposes considered here, two different ways of saying exactly the same thing. Or am I missing something here? If so, what, exactly, do you understand the difference to be? In particular, what aspects of the difference mean that Ayende's and Joel's analyses do not apply, and why?

Jul 12, 2014 04:48 AM | jammycakes

AidyF: "As I've already said, if you want to tightly couple your layers together due to the benefits it supplies then that's up to you. If you want to dismiss the issues that will cause when layers need to move then that's up to you too."

And as I've already said, my point is not that you shouldn't decouple your layers in the way you're suggesting, but that you can't. Yes, you may implement something that looks like a nice neat separation of concerns, but if you actually try to swap out one data source for another in the way you've suggested, you'll quickly find that you almost certainly haven't separated your concerns out as well as you think you have. You have to consider behaviour as well as just method signatures and interface implementations, and that is where it all gets messy. The fact of the matter is that unless the projects you are working on are all very simple, you will need to control lazy loading, access transactions, shape queries, handle cross-cutting concerns, batch updates and deletes, manage concurrency, and do a whole lot of other things like that in your business layer that require direct access to more advanced features of your ORM. Take query shaping for instance -- the most obvious (and simple) example here is filtering, paging and sorting data for display in a table.
If you insist on putting ToList() in your repository, you're either going to end up with terrible performance (because you're returning masses and masses of data that you don't need) or else you're effectively moving your business logic into your repository, and you'll end up with an anaemic business layer that doesn't do anything except shunt data from one anaemic domain model to another identical anaemic domain model -- and if you're doing that, then what's the point of having a separate business layer in the first place?

So to answer the OP's question, the best place to put ToList() is in your business layer, if in fact you use ToList() at all. It shouldn't be automatic: it really needs to be treated on a case-by-case basis.

As for moving the layers or swapping out your data source, I've cried "architecture astronaut" here because these are scenarios that I've seen repeated parrot-fashion time and time again over the past twelve or so years that I've been working with .NET, with little or no discussion as to whether (a) you're actually going to have to face them in the first place (you usually aren't), or (b) the "best practices" that you're following actually facilitate them in reality without introducing more problems than they solve (they usually don't). In actual fact, when you do need to scale out, it's never as simple in practice as shifting your presentation layer onto a separate server and sticking a service in between. You usually have to adopt other approaches such as caching, or sharding, or moving whole chunks of functionality into a separate application altogether, or switching from a traditional layered architecture to a CQRS/Event Sourcing one.
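To illustrate the conclusion of the thread, a sketch of doing the query shaping in the business layer and only calling ToList() once the result set has been narrowed might look like this (the method name and paging parameters are invented for the example; the repository is assumed to still return IQueryable<Notes> as in the original question):

    // The BLL shapes the query (page/filter/sort) and only then materialises it.
    public List<Notes> GetNotesPage(int pageIndex, int pageSize)
    {
        return _unitOfWork.NotesRepository.GetAllNotes() // IQueryable<Notes>, already ordered
            .Skip(pageIndex * pageSize)
            .Take(pageSize)
            .ToList(); // the query is executed here, in the BLL
    }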
Bit by bit, I'm going to build a Python tool to scrape a Windows system disk image for common forensic artefacts and build a CSV timeline from the evidence gathered. In this first post, I'll parse and add the data stored in Windows Prefetch files.

On my recent SANS course on Windows forensics I learnt about all kinds of forensic artefacts that can be retrieved from Windows systems to determine what the user was doing, which applications they were running, which files they were opening, and much more. All the while, I was wondering whether it would be possible to develop a Python tool to grab common forensic artefacts from a Windows disk image and automatically generate a forensic timeline. Now things have settled down a bit, I'm going to start building one. It would be overwhelming to take on every artefact at once, so I'm going to take a modular approach and build in one artefact at a time, beginning with a relatively easy one: Windows Prefetch file data.

What is Prefetch and how does it help our investigation?

Prefetch is one of the ways Microsoft has attempted to speed up your Windows experience. Basically, when you first run an application, Windows will store data about it in a PF file in the directory C:\Windows\Prefetch. These files' names will be the executable's name followed by a dash and a hash of its location – something like CHROME.EXE-CCF9F3F5.pf. How does this help a forensic investigator? Well, the file created and file modified times of these PF files are set to the times the program was first and last run. Furthermore, multiple files with the same name could indicate that multiple versions of the program have been run, or that identical files were run from different directories on the system.

Setting things up

To successfully parse the information contained in the Prefetch folder, we'll need to import a few libraries. We'll get to what exactly each of these is used for a little later.

    import os, time, csv, operator

    timeline_csv = open("timeline.csv", "a")
    windows_drive = raw_input("Enter Windows drive letter: ")
    prefetch_directory = windows_drive + ":\Windows\Prefetch\\"
    print "Prefetch directory is %s." % prefetch_directory

We'll also open a CSV file to save our forensic timeline entries to, and ask the investigator which drive the Windows directory sits on. This means it will be possible to use the tool if a forensic image of a drive is mounted on a non-standard drive letter on a system.

Iterating through .pf files and getting program names

Now we know where the Prefetch folder is, we need to navigate to it, get a list of the files inside, and determine which we'd like to pay attention to based on their extension.

    prefetch_files = os.listdir(prefetch_directory)

    for pf_file in prefetch_files:
        if pf_file[-2:] == "pf":
            full_path = prefetch_directory + pf_file

To achieve this, I've used a loop with a nested conditional statement. First, the os library's listdir function is used to get a list of files in the Prefetch directory. We then iterate over each file, asking whether the last two characters in its name are "pf". If that test is successful, we proceed. We also save the file's full path to the full_path variable. We'll use this in a moment.

Extracting program name and first and last executed times

It's time to get the information we need from each Prefetch file – namely the application's name and the first and last times it was executed on the system.
    app_name = pf_file[:-12]
    first_executed = os.path.getctime(full_path)
    first_executed = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(first_executed))
    last_executed = os.path.getmtime(full_path)
    last_executed = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(last_executed))

The application name can be retrieved by simply trimming the directory hash from the end of the Prefetch filename. For the first executed time, I used the os library's getctime function to retrieve the timestamp and then the time library's strftime function to convert it to a readable format. This process is repeated with getmtime to get the last executed time.

Writing the results to a CSV timeline

To add this information to the forensic timeline we'll need to create comma-delimited lines to add to a CSV. I'm going to create two entries per file – one for the program's first execution and one for its most recent – but I'll include all the available information in each.

    first_executed_line = first_executed + "," + app_name + "," + first_executed + "," + last_executed + "," + "Program first executed" + "," + "Prefetch - " + pf_file + "\n"
    last_executed_line = last_executed + "," + app_name + "," + first_executed + "," + last_executed + "," + "Program last executed" + "," + "Prefetch - " + pf_file + "\n"
    timeline_csv.write(first_executed_line)
    timeline_csv.write(last_executed_line)

    timeline_csv.close()

There's not much to this other than stringing all of our data together into a single variable with commas between each field, then appending this line to the CSV file. Then we close it.

Sorting the CSV timeline by date and time

A forensic timeline isn't much use if it's not in chronological order. Our final step (for now) is to take the data and sort it according to the timestamp in the first column.

    with open("timeline.csv") as f:
        timeline_csv = csv.reader(f, delimiter=",")
        sorted_timeline = sorted(timeline_csv, key=operator.itemgetter(0), reverse=True)

    with open("timeline.csv", "wb") as f:
        fileWriter = csv.writer(f, delimiter=",")
        header_row = "Artefact timestamp", "Filename", "First executed", "Last executed", "Action", "Source"
        fileWriter.writerow(header_row)
        for row in sorted_timeline:
            fileWriter.writerow(row)

There are two elements at play here. First, I reopen the timeline file and use the sorted function to reorder it according to the first column, where the timestamp is stored. Then I use fileWriter to add a header row and overwrite the CSV file with the sorted data.

The output

The result is a CSV file that clearly shows the times at which the Prefetch data shows each application ran, whether that's the first time it was run or the most recent. For my system, this stretches all the way back to when I finished building my PC in June. Now that the Prefetch data is in our CSV timeline, it's time to turn our attention to the next type of forensic artefact. I'll be exploring how to add other Windows artefacts – including those stored in the registry – to the timeline in future posts.
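One small aside on the name-trimming step above: the [:-12] slice relies on the hash portion always being eight characters plus the dash and the .pf extension, which holds for standard Prefetch files. A slightly more defensive variant (not part of the original script) would split on the last dash instead:

    # "CHROME.EXE-CCF9F3F5.pf" -> "CHROME.EXE"
    def prefetch_app_name(pf_file):
        return pf_file[:-3].rsplit("-", 1)[0]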
We welcome back Aaron Elder, one of our favorite CRM MVP guest bloggers, with some helpful tips from the field.

A month or so back, I was having lunch with the SDK documentation team and they asked me "does anyone really use the helpers?", to which I responded "absolutely". I then proceeded to agree to writing a blog post on the topic, and that is the basis for today's post.

First off, what are the "helpers" and why do so few people seem to know about them? They have been in the SDK for quite some time and they are not very hard to use. I think the problem is that people may not know how to put them to good use on their projects. I think the first thing to do is to explain what the helpers are and what they can do for you. The SDK does a pretty good job of explaining what these little guys can do. The SDK then goes on to enumerate and describe all of the classes and tools that have been provided… there is a "Readme.doc" file in the helper folder with instructions on how to integrate them into your project.

So with all this great documentation, why does it seem that so few developers make use of these? While I have many theories, let's keep this post to the facts… the facts on how Ascentium has made use of them, the facts about what makes these guys so useful, as well as a bit of background and history on the topic.

The root of the problem is that the Web Services Description Language (WSDL) does not support describing client- or consumer-side logic - the kind of logic you would use to build constructors, indexers, accessors and methods that could execute on the client prior to serialization and transport of the message. This means that for CRM types that need to be constructed before they can be set, you have to write code that looks something like this:

    asc_entity myEntity = new asc_entity();
    myEntity.asc_number = new CrmNumber();
    myEntity.asc_number.Value = 12;

WSDL is able to describe what an "asc_entity" and a "CrmNumber" are, but it can't define a constructor on the type CrmNumber that takes in an integer and sets its Value property. This and many other similar situations are where the "helpers" come in.

CRM 3.0, which was in the .NET 1.x time period, provided a simple set of utility classes to make the code above a little bit easier. This looked something like this:

    asc_entity myEntity = new asc_entity();
    myEntity.asc_number = Helper.CreateCrmNumber(12);

Obviously an improvement, especially for more complex types like Lookups. With the move to CRM 4.0 and the progression to .NET 2.0, partial types provide a much more elegant solution in the CRM 4.0 SDK. The type helpers now look something like this:

    asc_entity myEntity = new asc_entity();
    myEntity.asc_number = new CrmNumber(12);

This new approach in CRM 4.0 is much better than previously, and the time-saving benefits that can be found by using the helpers provided by Microsoft and writing a set of your own can be tremendous… assuming your team knows about them. The SDK does a pretty good job of discussing all of the helpers available, and several of them are real favorites of mine.

The SDK also provides a "Full Sample" that demonstrates how to use the helpers. This is located here: \SDK\Server\FullSample\UsingHelpers

How Ascentium uses the Helpers

To give everyone an idea of how we use these helpers:
We, as a standard, have something we call "the standard build tree". This is a basic, clean CRM project that starts out with vanilla WSDL proxy .cs files with the namespace Ascentium.Crm.Common.Sdk.CrmServiceSdk. These are then checked into our standard tree under \Common\Sdk. We then also include a copy of the "helpers" (\Common\Extensions), along with our own helpers that exist in the same namespace, using the same partial-type method. Here is an example of an extension we made to the CrmService to make use of the auto-impersonator.

\Common\Extensions\CrmService.cs

    using System;
    using System.Collections.Generic;
    using System.Text;

    namespace Ascentium.Crm.Common.Sdk.CrmServiceSdk
    {
        public partial class CrmService
        {
            public BusinessEntityCollection RetrieveMultiple(QueryBase query, bool useImpersonator)
            {
                if (useImpersonator)
                {
                    using (new Microsoft.Crm.Sdk.CrmImpersonator())
                    {
                        return this.RetrieveMultiple(query);
                    }
                }
                else
                {
                    return this.RetrieveMultiple(query);
                }
            }
        }
    }

The provided helpers are definitely useful and, when integrated into our default build system, they are seamlessly available for use and provide a great deal of benefit at an exceptionally low cost.

Since we include a vanilla WSDL in our build tree, we also include a handy script to regenerate these from a VPC. This allows developers to quickly re-build the WSDL proxies as they make customizations and check in their project-specific WSDL proxies. The code for this is as follows:

\common\sdk\regenerate_proxies.cmd

    @echo off
    cls
    echo.
    echo NOTICE: This script generates WSDL Proxies for the DEFAULT ORGANIZATION on the site
    echo  - If this is a multi-organization box, you will need to update this script
    echo    to point to the proper organization
    echo ================================================================================================
    echo Also note, this script regenerates all proxies, however under normal circumstances only the
    echo CrmService Proxy will change.
    echo.
    pause

    echo Generating CrmService SDK for default organization
    wsdl.exe /out:CrmService.cs /namespace:Ascentium.Crm.Common.Sdk.CrmServiceSdk

    echo Generating Metadata SDK
    wsdl.exe /out:MetadataService.cs /namespace:Ascentium.Crm.Common.Sdk.MetadataServiceSdk

    echo Generating CrmDiscoverService SDK
    wsdl.exe /out:CrmDiscoveryService.cs /namespace:Ascentium.Crm.Common.Sdk.CrmDiscoveryServiceSdk

Enjoy,
CRM MVP Aaron Elder

Disclaimer: This posting is provided "AS IS" with no warranties, and confers no rights.

Comments

These helpers have been immensely helpful in CRM 3.0, but I find that the helpers don't help in Plugins, as you have no partial classes to bolt them onto - no web reference. I would appreciate it if you have any ideas as to how the helpers could be rewritten to not use partial classes, but be standalone helpers (why didn't MS think to include the code by default in their Microsoft.Crm.Sdk namespace!?).

In addition to the stand-alone .cs files that you can add to your projects, the helper code is also built into the Microsoft.Crm.Sdk.dll that you use in Plug-ins. This includes the type helpers, the QE helpers and the Dynamic entity helpers.
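As a footnote for readers who have not looked inside the type helpers: the partial-class trick that makes new CrmNumber(12) possible looks roughly like the sketch below. This is an illustration of the technique rather than the actual helper source shipped in the SDK:

    // Illustrative only: a partial class adds a convenience constructor to the
    // WSDL-generated CrmNumber type without editing the generated proxy file.
    namespace Ascentium.Crm.Common.Sdk.CrmServiceSdk
    {
        public partial class CrmNumber
        {
            // Keep a parameterless constructor, since declaring any constructor
            // suppresses the implicit default one.
            public CrmNumber()
            {
            }

            public CrmNumber(int value)
            {
                this.Value = value;
            }
        }
    }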
Is there a built-in that removes duplicates from a list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:

    def uniq(input):
        output = []
        for x in input:
            if x not in output:
                output.append(x)
        return output

(Thanks to unwind for that code sample.) But I'd like to avail myself of a built-in or a more Pythonic idiom if possible. Related question: In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique while preserving order?

Here you have some alternatives. Fastest one:

    def f7(seq):
        seen = set()
        seen_add = seen.add
        return [x for x in seq if not (x in seen or seen_add(x))]

Why assign seen.add to seen_add instead of just calling seen.add? Python is a dynamic language, and resolving seen.add each iteration is more costly than resolving a local variable. seen.add could have changed between iterations, and the runtime isn't smart enough to rule that out. To play it safe, it has to check the object each time. If you plan on using this function a lot on the same dataset, perhaps you would be better off with an ordered set: O(1) insertion, deletion and member-check per operation. (Small additional note: seen.add() always returns None, so the or above is there only as a way to attempt a set update, and not as an integral part of the logical test.)

Edit 2016: As Raymond pointed out, in Python 3.5+ where OrderedDict is implemented in C, the list comprehension approach will be slower than OrderedDict (unless you actually need the list at the end - and even then, only if the input is very short). So the best solution for 3.5+ is OrderedDict.

Important Edit 2015: As @abarnert notes, the more_itertools library (pip install more_itertools) contains a unique_everseen function that is built to solve this problem without any unreadable (not seen.add) mutations in list comprehensions. This is also the fastest solution:

    >>> from more_itertools import unique_everseen
    >>> items = [1, 2, 0, 1, 3, 2]
    >>> list(unique_everseen(items))
    [1, 2, 0, 3]

Just one simple library import and no hacks. It comes from an implementation of the itertools unique_everseen recipe, which is reproduced at the end of this answer.

In Python 2.7+, the accepted common idiom for this (which works but isn't optimized for speed; I would now use unique_everseen) uses collections.OrderedDict. Runtime: O(N).

    >>> from collections import OrderedDict
    >>> items = [1, 2, 0, 1, 3, 2]
    >>> list(OrderedDict.fromkeys(items))
    [1, 2, 0, 3]

This looks much nicer than:

    seen = set()
    [x for x in seq if x not in seen and not seen.add(x)]

and doesn't utilize the ugly hack:

    not seen.add(x)

which relies on the fact that set.add is an in-place method that always returns None, so not None evaluates to True. Note however that the hack solution is faster in raw speed, though it has the same runtime complexity O(N).
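For reference, the itertools unique_everseen recipe mentioned above is essentially as given in the itertools documentation (Python 3 form; in Python 2 the import is ifilterfalse):

    from itertools import filterfalse

    def unique_everseen(iterable, key=None):
        "List unique elements, preserving order. Remember all elements ever seen."
        seen = set()
        seen_add = seen.add
        if key is None:
            for element in filterfalse(seen.__contains__, iterable):
                seen_add(element)
                yield element
        else:
            for element in iterable:
                k = key(element)
                if k not in seen:
                    seen_add(k)
                    yield element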
5 Things to know about Images in React Native

The more I work with React Native images, the more I find them tricky. I wrote about Image Caching without Expo, and with Expo. In React Native Sketch Elements and React Native Fiber, I'm using react-native-expo-image-cache. You can see it in action below. react-native-expo-image-cache is new and fits well in my projects, but it might not be flexible enough yet to fit your requirements. If it doesn't, please let me know, and please find below a list of five things you ought to know about images in React Native.

Don't use prefetch()

React Native provides Image.prefetch out of the box, which can be useful to pre-load images, but in some cases it is somewhat inadequate. You really need the image to be present locally in order to avoid any flickering effect when loading it. It also allows you to have more control over the cache of images. In the example below, you see the difference between using the Image.prefetch image cache and serving the image locally. You might also need to display your image in a different component than <Image>. For instance, in the example below we use <SVGImage>, which only works for local paths and/or data URIs.

Use ExpoKit

So you need to store images locally. If you use Expo already, there is a FileSystem library out of the box. If your project is detached, the react-native-fetch-blob library seems to have lost its main contributor. Last time I used it, it had some issues. It might be worthwhile to add ExpoKit as a dependency of your project even if you are "detached" from the get-go. ExpoKit ships many other great components, including BlurView, which can be natively animated in case you also want to implement progressive image loading.

Dealing with concurrency on Android

Many components might display the same image at the same time, and doing a FileSystem.exists(localURI) operation on Android will return true even if the file download is not finished. This means that you need to implement an observer pattern to download each image only once and be notified when the image download has finished. In react-native-expo-image-cache, this is what the API looks like:

    import {CacheManager} from "react-native-expo-image-cache";

    // Remote URI
    const {uri} = this.props;
    CacheManager.cache(uri, localURI => this.setState({ uri: localURI }));

Below is the implementation of the observer:

    static async cache(uri: string, listener: Listener): Promise<void> {
        const {path, exists} = await getCacheEntry(uri);
        // If the image is already downloading, we just listen
        if (isDownloading(uri)) {
            addListener(uri, listener);
        // If it's not downloading and it exists, we serve it
        } else if (exists) {
            listener(path);
        // Else, we download the image and notify everyone when done
        } else {
            addListener(uri, listener);
            await FileSystem.downloadAsync(uri, path);
            notifyAll(uri, path);
            unsubscribe(uri);
        }
    }

Progressive image loading

Every time an image URI is stored in the database, store its base64 preview with it. This will allow you to load the image super smoothly. Below is an example:

    {
        preview: "data:image/jpeg;base64,..."
    }

Now you can immediately display a blurred version of the preview and decrease the blur to 0 when the full version of the image is loaded. On iOS, the <BlurView> from ExpoKit conveniently supports the native animation driver. On Android, however, it doesn't, so we fall back to an animated opacity view.
I made a pull request to BlurView.android.js in order to test the waters and find out whether the team would be open to having an implementation of <BlurView> that is more symmetrical on both platforms. In the meantime, you can implement your own. This is how it looks in react-native-expo-image-cache:

    // intensity is an Animated.Value
    const opacity = intensity.interpolate({
        inputRange: [0, 100],
        outputRange: [0, 1]
    });

    {
        Platform.OS === "ios" && (
            <AnimatedBlurView tint="dark" style={computedStyle} {...{intensity}} />
        )
    }
    {
        Platform.OS === "android" && (
            <Animated.View
                style={[computedStyle, { backgroundColor: "rgba(0, 0, 0, 0.5)", opacity }]}
            />
        )
    }

There is a serious bug in <Image>

This one. You might be tempted to set a URI in the state of your image component: set it to the preview data URI and then to the full image when loaded. The problem is, sometimes the image won't refresh. You can use key to force its refresh, but it will flicker. The solution? Superimpose the full image on top of the preview. Again, this is how it looks in react-native-expo-image-cache:

    {
        // Show the preview if it exists
        hasPreview && (
            <RNImage
                source={{ uri: preview }}
                resizeMode="cover"
                style={computedStyle}
            />
        )
    }
    {
        // If the image is loaded, we show it on top;
        // this.onLoadEnd is used to start the deblurring animation
        (uri && uri !== preview) && (
            <RNImage
                source={{ uri }}
                resizeMode="cover"
                style={computedStyle}
                onLoadEnd={this.onLoadEnd}
            />
        )
    }

That's all, folks! We will never be done talking about images in React Native. This is my top-5 list. Am I missing some items? Please let me know, and in the meantime, Happy Hacking 🎉

Looking for beautiful UI kits? I'm implementing all screens and components from Sketch Elements in React Native. You can get it here. I'm also posting live coding sessions and tutorials on my YouTube channel.
java.lang.Object javax.ide.net.VirtualFileSystemHelper public class VirtualFileSystemHelper The VirtualFileSystemHelper class specifies the VirtualFileSystem operations that may have scheme-specific handling. By default, the VirtualFileSystem delegates its operations to VirtualFileSystemHelper. However, a subclass of VirtualFileSystemHelper can be registered with the VirtualFileSystem to handle the VirtualFileSystem operations for a particular scheme. A helper class is registered through the VirtualFileSystem.registerHelper(String, VirtualFileSystemHelper) method. Special implementation note: classes that extend VirtualFileSystemHelper must be completely thread-safe because a single instance of each registered helper is held by the VirtualFileSystem and reused for all threads. VirtualFileSystem protected VirtualFileSystemHelper() public java.net.URI canonicalize(java.net.URI uri) throws java.io.IOException URI, if one is available. The default implementation just returns the specified URI as-is. java.io.IOException public boolean canRead(java.net.URI uri) URI. trueif and only if the specified URIpoints to a resource that exists and can be read by the application; falseotherwise. public boolean canWrite(java.net.URI uri) URI. trueif and only if the specified URIpoints to a file that exists and the application is allowed to write to the file; falseotherwise. public boolean canCreate(java.net.URI uri) URI. This method tests that all components of the path can be created. If the resource pointed by the URIis read-only, this method returns false. trueif the resource at the specified URIexists or can be created; falseotherwise. public boolean isValid(java.net.URI uri) URIis valid. If the resource pointed by the URIexists the method returns true. If the resource does not exist, the method tests that all components of the path can be created. trueif the URIis valid. public java.net.URI convertSuffix(java.net.URI uri, java.lang.String oldSuffix, java.lang.String newSuffix) URIand checks if its Object.toString()representation ends with the specified oldSuffix. If it does, the suffix is replaced with newSuffix. Both suffix parameters must include the leading dot ('.') if the dot is part of the suffix. If the specified URIdoes not end with the oldSuffix, then the newSuffixis simply appended to the end of the original URI. public boolean delete(java.net.URI uri) URI.). The default implementation simply returns false without doing anything. trueif and only if the file or directory is successfully deleted; falseotherwise. public java.net.URI ensureSuffix(java.net.URI uri, java.lang.String suffix) URIends with the specified suffix. The suffix does not necessarily have to start with a ".", so if a leading "." is required, the specified suffix must contain it -- e.g. ".java", ".class". If the URI already ends in the specified suffix, then the URI itself is returned. Otherwise, a new URI is created with the the specified suffix appended to the original URI's path part, and the new URI is returned. The default implementation first checks with hasSuffix(URI, String) to see if the URI already ends with the specified suffix. If not, the suffix is simply appended to the path part of the URI, and the new URI is returned. URI, based on the specified URI, whose path part ends with the specified suffix. java.lang.NullPointerException- if either the specified URIor suffixis null. The caller is responsible for checking that they are not null. 
public boolean equals(java.net.URI uri1, java.net.URI uri2) URIobjects to determine whether they point to the same resource. This method returns trueif the URIs point to the same resource and returns falseif the URIs do not point to the same resource. This method and all subclass implementations can assume that both URI parameters are not null. The VirtualFileSystem.equals(URI, URI) method is responsible for checking that the two URIs are not null. It can also be assumed that both URI parameters have the same scheme and that the scheme is appropriate for this VirtualFileSystemHelper. This determination is also the responsibility of VirtualFileSystem.equals(URI, URI). The default implementation for this method delegates to URI.equals(Object). public boolean exists(java.net.URI uri) URIlocation currently exists. The test for existence only checks the actual location and does not check any in-memory caches. The default implementation simply returns falsewithout doing anything. trueif and only if a resource already exists at the specified URIlocation; falseotherwise. public java.lang.String getFileName(java.net.URI uri) URI, not including any scheme, authority, directory path, query, or fragment. This simply returns the simple filename. For example, if you pass in an URIwhose string representation is: the returned value is "the returned value is " scheme://userinfo@host:1010/dir1/dir2/file.ext?query#fragment file.ext" (without the quotes). URI. This value should only be used for display purposes and not for opening streams or otherwise trying to locate the document. public long getLength(java.net.URI uri) URIpoints to. If the length cannot be determined, -1is returned. The default implementation returns -1. URI. public java.lang.String getName(java.net.URI uri) URI, not including any scheme, authority, directory path, file extension, query, or fragment. This simply returns the simple filename. For example, if you pass in an URIwhose string representation is: the returned value is "the returned value is " scheme://userinfo@host:1010/dir1/dir2/file.ext1.ext2?query#fragment file" (without the quotes). The returned file name should only be used for display purposes and not for opening streams or otherwise trying to locate the resource indicated by the URI. The default implementation first calls getFileName(URI) to get the file name part. Then all characters starting with the first occurrence of '.' are removed. The remaining string is then returned. public java.net.URI getParent(java.net.URI uri) URIrepresenting the parent directory of the specified URI. If there is no parent directory, then nullis returned. The default implementation returns the value of invoking uri.resolve( ".." ). public java.lang.String getPath(java.net.URI uri) URI. The returned string is acceptable to use in one of the URIFactorymethods that takes a path. The default implementation delegates to URI.getPath(). public java.lang.String getPathNoExt(java.net.URI uri) URIwithout the last file extension. To clarify, the following examples demonstrate the different cases: The default implementation gets the path from getPath(URI)and then trims off all of the characters beginning with the last "." in the path, if and only if the last "." comes after the last "/" in the path. If the last "." comes before the last "/" or if there is no "." at all, then the entire path is returned. public java.lang.String getPlatformPathName(java.net.URI uri) URI; the returned string should be considered acceptable for users to read. 
In general, the returned string should omit as many parts of the URIas possible. For the "file" scheme, therefore, the platform pathname should just be the pathname alone (no scheme) using the appropriate file separator character for the current platform. For other schemes, it may be necessary to reformat the URIstring into a more human-readable form. That decision is left to each VirtualFileSystemHelperimplementor. The default implementation returns uri.toString(). If the URI is null, the empty string is returned. URIin platform-dependent notation. This value should only be used for display purposes and not for opening streams or otherwise trying to locate the document. public java.lang.String getSuffix(java.net.URI uri) URI, boolean hasSuffix(java.net.URI uri, java.lang.String suffix) trueif the path part of the URIends with the given suffixString. The suffix can be any String and doesn't necessarily have to be one that begins with a dot ('.'). If you are trying to test whether the path part of the URIends with a particular file extension, then the suffixparameter must begin with a '.' character to ensure that you get the right return value. public boolean isBaseURIFor(java.net.URI uri1, java.net.URI uri2) trueif uri1represents a a directory and uri2points to a location within uri1's directory tree. public boolean isDirectory(java.net.URI uri) URIis a directory. The default implementation always returns false. trueif and only if the location indicated by the URIexists and is a directory; falseotherwise. public boolean isDirectoryPath(java.net.URI uri) URIrepresents a directory path. The directory path specified by the URIneed not exist. This method is intended to be a higher performance version of the isDirectory(URI) method. Implementations of this method should attempt to ascertain whether the specified URI represents a directory path by simply examining the URI itself. Time consuming i/o operations should be avoided. The default implementation returns true if the path part of the URI ends with a '/' and the query and ref parts of the URI are null. trueif the location indicated by the URIrepresents a directory path; the directory path need not exist. public boolean isHidden(java.net.URI uri) URIis a hidden file. The exact definition of hidden is scheme-dependent and possibly system-dependent. On UNIX systems, a file is considered to be hidden if its name begins with a period character ('.'). On Win32 systems, a file is considered to be hidden if it has been marked as such in the file system. The default implementation always returns false. public boolean isReadOnly(java.net.URI uri) trueif the resource is read-only. A return value of falsemeans that trying to get an OutputStreamor trying to write to an OutputStreambased on the URIwill cause an IOException to be thrown. If the read-only status cannot be determined for some reason, this method returns true. The default implementation always returns true. This means that all resources are considered read-only unless a scheme-specific VirtualFileSystemHelper is registered for the specified URI and is able to determine that the resource underlying the specified URI is not read-only. public boolean isRegularFile(java.net.URI uri) URIis a regular file. A regular is a file that is not a directory and, in addition, satisfies other system-dependent criteria. The default implementation returns the value of exists( uri ) && !isDirectory( uri ). trueif and only if the resource indicated by the URIexists and is a normal file. 
public long lastModified(java.net.URI uri) URI. The returned longis the number of milliseconds since the epoch (00:00:00 GMT Jan 1, 1970). If no timestamp is available or if the URIpassed in is null, -1is returned. The default implementation returns -1. URI. public java.net.URI[] list(java.net.URI uri) URIs naming files and directories in the directory indicated by the URI. If the specified URIdoes not represent a directory, then this method returns null. Otherwise, an array of URIs is returned, one for each file or directory in the directory. URIs representing the directory itself or its parent are not included in the result. There is no guarantee that the URIs will occur in any particular order. The default implementation always returns an empty URI array. URIs naming the files and directories in the directory indicated by the URI. The array will be empty if the directory is empty. Returns nullif the URIdoes not represent a directory or if an I/O error occurs. public java.net.URI[] list(java.net.URI uri, URIFilter filter) URIs naming files and directories in the directory indicated by the URI; the specified URIFilteris applied to determine which URIs will be returned. If the specified URIdoes not represent a directory, then this method returns null. Otherwise, an array of URIs is returned, one for each file or directory in the directory that is accepted by the specified filter. URIs representing the directory itself or its parent are not included in the result. There is no guarantee that the URIs will occur in any particular order. If the specified URIFilter is null then no filtering behavior is done. The default implementation calls list(URI) first and then applies the URIFilter to the resulting list. URIs naming the files and directories in the directory indicated by the URIthat are accepted by the specified URIFilter. The array will be empty if the directory is empty. Returns nullif the URIdoes not represent a directory or if an I/O error occurs. public java.net.URI[] listRoots() nullor an empty URI array. If the returned array is not empty, then each URI contained in it must represent a directory and must not be null. The default implementation always returns null. public boolean mkdir(java.net.URI uri) URI. The default implementation always returns false. trueif and only if the directory was created; falseotherwise. public boolean mkdirs(java.net.URI uri) URIincluding java.net.URI createTempFile(java.lang.String prefix, java.lang.String suffix, java.net.URI URIto the temporary file. java.io.IOException public java.io.InputStream openInputStream(java.net.URI uri) throws java.io.IOException InputStreamon the specified URI. The default implementation throws UnknownServiceException. InputStreamassociated with the URI. java.io.FileNotFoundException- if the resource at the specified URI does not exist. java.io.IOException- if an I/O error occurs when trying to open the InputStream. java.net.UnknownServiceException- (a runtime exception) if the scheme does not support opening an InputStream. public java.io.OutputStream openOutputStream(java.net.URI uri) throws java.io.IOException OutputStreamon the URI. If the file does not exist, the file should be created. If the directory path to the file does not exist, all necessary directories should be created. The default implementation throws UnknownServiceException. uri- An OutputStreamis opened on the given URI. The operation is scheme-dependent. OutputStreamassociated with the URI. 
java.io.IOException- if an I/O error occurs when trying to open the OutputStream. java.net.UnknownServiceException- (a runtime exception) if the scheme does not support opening an OutputStream. public boolean renameTo(java.net.URI oldURI, java.net.URI newURI) URIto the name indicated by the second URI. The default implementation simply returns false without doing anything. If either URI parameter is null or if both of the specified URI parameters refer to the same resource, then the rename is not attempted and failure is returned. If the specified URI parameters do not have the same scheme, then the VirtualFileSystem handles the rename by first copying the resource to the destination with VirtualFileSystem.copy(URI, URI) and then deleting the original resource with VirtualFileSystem.delete(URI); if either operation fails, then failure is returned. Otherwise, the scheme helper is called to perform the actual rename operation. Scheme helper implementations may therefore assume that both URI parameters are not null, do not refer to the same resource, and have the same scheme. If the original URI refers to a nonexistent resource, then the scheme helper implementations should return failure. It is left up to the scheme helper implementations to decide whether to overwrite the destination or return failure if the destination URI refers to an existing resource. oldURI- the URIof the original resource newURI- the desired URIfor the renamed resource trueif and only if the resource is successfully renamed; falseotherwise. VirtualFileSystem.renameTo(URI, URI) public boolean setLastModified(java.net.URI uri, long time) URIto the time specified by time. The time is specified in the number of milliseconds since the epoch (00:00:00 GMT Jan 1, 1970). The return value indicates whether or not the setting of the timestamp succeeded. The default implementation always returns false without doing anything. public boolean setReadOnly(java.net.URI uri, boolean readOnly) URIaccording to the specified readOnlyflag. The return value indicates whether or not the setting of the read-only flag succeeded. The default implementation always returns false without doing anything. public java.lang.String toDisplayString(java.net.URI uri) URI. The default implementation delegates to URI.toString(). public java.lang.String toRelativeSpec(java.net.URI uri, java.net.URI base) uriparameter as the URIwhose relative URI reference is to be determined and the baseparameter as the URIthat serves as the base document for the uripararmeter. If it is not possible to produce a relative URI reference because the two URIs are too different, then a full, absolute reference for the uriparameter is returned. Whatever value is returned by this method, it can be used in conjunction with the base URI to reconstruct the fully-qualified URI by using one of the URI constructors that takes a context URI plus a String spec (i.e. the String returned by this method). Both the uri and base parameters should point to documents and be absolute URIs. Specifically, the base parameter does not need to be modified to represent the base directory if the base parameter already points to a document that is in the directory to which the uri parameter will be made relative. This relationship between uri and base is exactly how relative references are treated within HTML documents. Relative references in an HTML page are resolved against the HTML page's base URI. The base URI is the HTML page itself, not the directory that contains it. 
If either the uri or base parameter needs to represent a directory rather than a file, they must end with a "/" in the path part of the URI, such as: The algorithm used by this method to determine the relative reference closely follows the recommendations made in RFC 2396. The following steps are performed, in order, to determine the relative reference: Fileso that, for example, on Win32 the comparison is case-insensitive, whereas on Unix the comparison is case-sensitive. When the scheme is not "file", comparison is always case-sensitive. URI(except for the document name itself), then a "../" sequence is prepended to the resulting relative path for each base path element that was not consumed while matching path elements. uriwere consumed, then those path elements are appended to the resulting relative path as well. If the first remaining path element in uricontains a ':' character and there is no "../" sequence was prepended to the relative reference, then a "./" sequence is prepended to prevent the ':' character from being interpreted as a scheme delimiter (this is a special case in RFC 2396). uriare not appended. This method is implemented using the template method design pattern, so it is possible for subclasses to override just part of the algorithm in order to handle scheme-specific details. public java.lang.String toRelativeSpec(java.net.URI uri, java.net.URI base, boolean mustConsumeBase) toRelativeSpec(URI, URI)that has a flag that indicates whether the base URIshould be fully consumed in the process of calculating the relative spec. If mustConsumeBase is true, then this method will return a non- null relative spec if and only if the base URI was fully consumed in the process of calculating the relative spec. Otherwise, if any part of the base URI remained, then this method returns null. If mustConsumeBase is false, then this method will return a non- null relative spec regardless of how much of the base URI is consumed during the determination. public java.net.URI getBaseParent(java.net.URI uri, java.lang.String relativeSpec) urishould be absolute and point to a directory. It must end with a "/" in the path part of the URI, such as: If theIf the uridoes not end with a "/", it will be assumed that the uripoints to a document. The document name will then be stripped in order to determine the parent directory. The relativeSpecparameter should be a relative path. If the relativeSpecdoes not end with a "/", it will be assumed that the relativeSpecpoints to a document. The document name will then be stripped in order to determine the parent directory. For example, if the uripoints to: and theand the relativeSpecis: The returned value would be:The returned value would be: dir2/dir3 If theIf the relativeSpecpath elements are not fully contained in the last part of the uripath the value returned is the uri itself if the uri path ends with a "/" or the uri parent otherwise. public java.net.URL toURL(java.net.URI uri) throws java.net.MalformedURLException URLfrom an URI. This method just calls the URI.toURL()method. java.net.MalformedURLException protected boolean haveSameUserInfo(java.net.URI uri1, java.net.URI uri2) trueif the URIs user infos are equal. protected boolean haveSameHost(java.net.URI uri1, java.net.URI uri2) trueif the URIs hosts are equal. protected boolean haveSamePath(java.net.URI uri1, java.net.URI uri2) trueif the URIs paths are equal. protected boolean haveSameQuery(java.net.URI uri1, java.net.URI uri2) trueif the URIs queries are equal. 
protected boolean haveSameRef(java.net.URI uri1, java.net.URI uri2) trueif the URIs refs are equal. protected boolean haveSamePort(java.net.URI uri1, java.net.URI uri2) trueif the URIs ports are equal. protected final boolean areEqual(java.lang.String s1, java.lang.String s2) protected boolean haveSameScheme(java.net.URI uri, java.net.URI base) toRelativeSpec(URI, URI)method, which uses the template method design pattern. By default, the uri and base parameters must have identical schemes as a prerequisite to being able to produce a relative URI spec. protected boolean haveSameAuthority(java.net.URI uri, java.net.URI base) toRelativeSpec(URI, URI)method, which uses the template method design pattern. The "authority" part is a combination of the user info, hostname, and port number. The full syntax in the URI string is: It may appear in anIt may appear in an userinfo@hostname:port URIsuch as: The authority part may beThe authority part may be null, if the URIscheme does not require one. By default, the uri and base parameters must have identical authority strings as a prerequisite to being able to produce a relative URI spec. protected boolean appendRelativePath(java.net.URI uri, java.net.URI base, java.lang.StringBuffer relativeURI, boolean mustConsumeBase) toRelativeSpec(URI, URI)method, which uses the template method design pattern. trueif the entire base URIwas consumed in the process of determining the relative path; falseotherwise (i.e. not all of the base URIwas consumed). protected boolean areEqualPathElems(java.lang.String uriElem, java.lang.String baseElem) appendRelativePath(URI, URI, StringBuffer, boolean)method, which uses the template method design pattern. The two Strings that are passed in represent elements of the path parts of the uri and base parameters that are passed into appendRelativePath(URI, URI, StringBuffer, boolean). By default, path elements are compared exactly in a case-sensitive manner using regular String comparison.
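To tie the pieces together, here is a rough sketch of what a scheme-specific helper and its registration might look like. Only VirtualFileSystemHelper, its overridable methods, and the registerHelper method named in the class description are taken from the documentation above; everything else (class name, scheme, behaviour) is illustrative:

    // Sketch of a helper for a hypothetical read-only "zip" scheme.
    // Per the class description, implementations must be completely thread-safe.
    public class ZipSchemeHelper extends javax.ide.net.VirtualFileSystemHelper
    {
        @Override
        public boolean canWrite(java.net.URI uri)
        {
            return false; // resources under this scheme are never writable
        }

        @Override
        public boolean isReadOnly(java.net.URI uri)
        {
            return true;
        }
    }

    // Registration for the scheme, via the method named in the class description
    // (check the exact signature against the VirtualFileSystem API):
    //   VirtualFileSystem.registerHelper("zip", new ZipSchemeHelper());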
http://docs.oracle.com/cd/E35521_01/apirefs.111230/e17493/javax/ide/net/VirtualFileSystemHelper.html
I want to get a list of the items inside a folder created by my user in AGOL, using the ArcGIS API for Python. So far I have found this:

import sys
from arcgis import gis
import os

sourceURL = ""
sourceAdmin = "adminUser"
sourcePassword = "thisIsNotMyPassword"

clientAcronym = "Sales"
source = gis.GIS(sourceURL, sourceAdmin, sourcePassword)
existingItems = source.content.search('title:"{0}*" '.format(clientAcronym))

This code lists all the items whose title starts with "Sales", but I know there is a folder called "Sales-Test" that holds some of those items, and I want to know which items are inside that folder. If I get the list of my folders:

me = source.users.me
me.folders
for folder in me.folders:
    print(folder['title'])
    me.items(folder['title'])

I can see the folder "Sales-Test". In the Item documentation I cannot see how to get the folder information. Is there a way to relate the information in existingItems with me.folders?
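One possible way to relate the two, using only the calls already shown above, is to match items by id, since the folder listing also returns Item objects. A rough, untested sketch, with the folder title assumed to be "Sales-Test":

me = source.users.me
folder_items = me.items(folder="Sales-Test")        # items stored in that folder
folder_ids = {item.id for item in folder_items}

# Which of the search results live in the "Sales-Test" folder?
items_in_folder = [item for item in existingItems if item.id in folder_ids]
for item in items_in_folder:
    print(item.title, item.id)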
https://community.esri.com/t5/arcgis-api-for-python-questions/how-to-search-the-folder-s-items-in-arcgis-on-line/td-p/825809
I have used a macro to locate some functions into SPIFlash, like this:

__TEXT_EXT(Flash3, systick_delay)
unsigned int systick_delay()
{
    return (unsigned)spifi_func;
}

I call this function in main(), and boot from internal flash bank A. The first time I download the software, LPCXpresso auto-resets after the download completes and the software runs OK. But when I push the reset button on the board, it crashes. I am using the LPC4357 board by Embedded Artists.

I presume that you are following the techniques shown in: Configuring projects to span multiple flash devices? It would be worth checking the map file generated by the linker to make sure things have been placed as you expect. Also double-check your debug log to make sure that both the internal flash and the external flash are being programmed: The Debug Log. But more likely the problem is that your code is not setting up the pin muxing and clock for the SPIFI interface (which will be done by the flash driver when you debug).

Regards,
LPCXpresso Support
https://community.nxp.com/thread/436407
It is over 5 weeks since I submitted this report and nearly 3 weeks since I asked for an update. Please provide an update on the status of this issue. Have you confirmed the issue? Do you intend to fix it in this release of Visual Studio (2010)?

I wish to use codecvt_utf8 to write UTF-8 data to a std::wofstream file in an MFC C++ application. It compiles and works OK in the Release configuration, but fails to compile in the Debug configuration. A workaround is to delete the following lines, which are always added by the MFC project wizard:

#ifdef _DEBUG
#define new DEBUG_NEW
#endif

However, I do not wish to do this in my large MFC application. It seems that the 'new' operator in std::codecvt_utf8 conflicts with the MFC 'DEBUG_NEW' operator.

Thanks for reporting this bug. We've fixed it, and the fix will be available in VC11. According to the Standard, macroizing keywords when including Standard Library headers triggers undefined behavior, and VC11 will emit a hard #error when it detects this. However, macroizing "new" is unfortunately very common, so we've added special guards to all C++ Standard Library headers that will grant them immunity to macroized "new". If you have any further questions, feel free to E-mail me at [email protected].

Stephan T. Lavavej
Visual C++ Libraries Developer
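For context, the usage pattern in question is roughly the following (a minimal sketch, not the reporter's exact code). In a Debug MFC build, where the wizard-generated lines above macroize new to DEBUG_NEW, this kind of translation unit failed to compile:

#include <fstream>
#include <locale>
#include <codecvt>

// In an MFC source file the project wizard has already emitted:
//   #ifdef _DEBUG
//   #define new DEBUG_NEW
//   #endif
// so every "new" below is macro-expanded in Debug builds.

void WriteUtf8File()
{
    std::wofstream out(L"output.txt");
    // Imbue a UTF-8 conversion facet; the locale takes ownership of the facet.
    out.imbue(std::locale(out.getloc(), new std::codecvt_utf8<wchar_t>()));
    out << L"some wide text\n";
}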
https://connect.microsoft.com/VisualStudio/feedback/details/683483
Edge computing describes the movement of computation away from cloud data centers so that it can be closer to instruments, sensors and actuators where it will be run on “small” embedded computers or nearby “micro-datacenters”. The primary reason to do this is to avoid the network latency in cases where responding to a local event is time critical. This is clearly the case for robots such as autonomous vehicles, but it is also true of controlling many scientific or industrial apparatuses. In other cases, privacy concerns can prohibit sending the data over an external network. We have now entered the age where advances in machine learning has made it possible to infer much more knowledge from a collection of the sensors than was possible a decade ago. The question we address here is how much deep computational analysis can be moved to the edge and how much of it must remain in the cloud where greater computational resources are available. The cloud has been where the tech companies have stored and analyze data. These same tech companies, in partnership with the academic research community, have used that data to drive a revolution in machine learning. The result has been amazing advances in natural language translation, voice recognition, image analysis and smart digital assistants like Seri, Cortona and Alexa. Our phones and smart speakers like Amazon Echo operate in close connection with the cloud. This is clearly the case when the user’s query requires a back-end search engine or database, but it is also true of the speech understanding task. In the case of Amazon’s Echo, the keyword “Alexa” starts a recording and the recorded message is sent to the Amazon cloud for speech recognition and semantic analysis. Google cloud, AWS, Azure, Alibaba, Tencent, Baidu and other public clouds all have on-line machine learning services that can be accessed via APIs from client devises. While the cloud business is growing and maturing at an increasingly rapid rate, edge computing has emerged as a very hot topic. There now are two annual research conferences on the subject: the IEEE Service Society International conference on Edge computing and the ACM IEEE Symposium on Edge computing. Mahadev Satyanarayanan from CMU, in a keynote at the 2017 ACM IEEE Symposium and in the article “The Emergence of Edge Computing” IEEE Computer, Vol. 50, No. 1, January 2017, argues very strongly in favor of a concept called a cloudlet which is a server system very near or collocated with edge devices under its control. He observes that applications like augmented reality require real-time data analysis and feedback to be usable. For example, the Microsoft Hololens mixed reality system integrates a powerful 32bit Intel processor with a special graphics and sensor processor. Charlie Catlett and Peter Beckman from Argonne National Lab have created a very powerful Edge computing platform called Waggle (as part of the Array of Things project) that consists of a custom system management board for keep-alive services and a powerful ODROID multicore processor and a package of instruments that measure Carbon Monoxide, Hydrogen Sulphide, Nitrogen Dioxide, Ozone, Sulfur Dioxide, Air Particles, Physical Shock/Vibration, Magnetic Field, Infrared Light, Ultraviolet Intensity, RMS Sound Level and a video camera. For privacy reasons the Waggle vision processing must be done completely on the device so that no personal identifying information goes over the network. 
Real time computer vision tasks are among the AI challenges that are frequently needed at the edge. The specific tasks range in complexity from simple object tracking to face and object recognition. In addition to Hololens and Waggle there are several other small platforms designed to support computer vision at the edge. As shown in Figure 1, these include the humble Raspberry Pi with camera, the Google vision kit and the AWS DeepLens.

Figure 1. From the left: a Raspberry Pi with an attached camera, the ANL Waggle array, the Google AIY vision kit and the AWS DeepLens.

The Pi system is, by far, the least capable, with a quad core ARMv7 processor and 1 GB memory. The Google vision kit has a Raspberry Pi Zero W (single core ARMv7 with 512MB memory), but the real power lies in the Google VisionBonnet, which uses a version of the Movidius Myriad 2 vision processing chip with 12 vector processing units and a dual core RISC CPU. The VisionBonnet runs TensorFlow from a collection of pretrained models. DeepLens has a 4 megapixel camera, 8 GB memory, 16 GB storage and an Intel Atom processor with a Gen9 graphics engine, and it supports models built with Amazon SageMaker; it is pre-configured to run TensorFlow and Apache MXNet.

As we stated above, many applications that run on the edge must rely on the cloud, if only for storing data to be analyzed off-line. Others, such as many of our phone apps and smart speakers, use the cloud for backend computation and search. It may be helpful to think of the computational capability of edge devices and the cloud as a single continuum of computational space, and an application as an entity that has components distributed over both ends. In fact, depending upon the circumstances, parts of the computation may migrate from the cloud to the device or back to optimize performance. As illustrated in Figure 2, AWS Greengrass accomplishes some of this by allowing you to move Lambda "serverless" functions from the cloud to the device to form a network of long running functions that can interact with instruments and securely invoke AWS services.

Figure 2. AWS Greengrass allows us to push Lambda functions from the cloud to the device and for these functions to communicate seamlessly with the cloud and other functions in other devices. (Figure from )

The Google vision kit is not available yet and DeepLens will ship later in the spring, and we will review them when they arrive. Here we will focus on a few simple experiments with the Raspberry Pi and return to these other devices in a later post.

Deep Learning Models and the Raspberry Pi 3

In a previous post we looked at several computer vision tasks that used the Pi in collaboration with cloud services. These included simple object tracking and doing optical character recognition and search for information about book covers seen in an image. In the following paragraphs we will focus on the more complex task of recognizing objects in images, and we will try to understand the limitations and advantages of using the cloud as the backend computational resource.

As a benchmark for our experiments we use the Apache MXNet deep learning kit with a model based on the resnet 152-layer neural network, which was trained on a collection of over 10 million images and over 11 thousand labels. We have packaged MXNet with this model into a Docker container, dbgannon/mxnet, which we have used for these experiments. (The details of the Python code in the container are in the appendix to this blog.)
Note: If you want to run this container, and if you have Docker and Jupyter installed, you can easily test the model with pictures of your own. Just download the Jupyter notebook send-to-mxnet-container.ipynb and follow the instructions there.

How fast can we do the image analysis (in image frames per second)? Running the full resnet-152 model on an installed version of MXNet on more capable machines (a Mac mini and the AWS Deep Learning AMI on a c5.4xlarge, no GPU) yields an average performance of about 0.7 frames/sec. Doing the same experiment on the same machines, but using the Docker container and a local version of the Jupyter notebook driver, we see the performance degrade a bit to an average of about 0.69 frames/sec (on the benchmark set of images described in the next paragraph). With a GPU one should be able to go about 10 times faster.

For the timing tests we used a set of 20 images from the internet that we grabbed and reduced so they average about 25KB in size. These are stored on the edge device. Loading one of these images takes about the same amount of time as grabbing a frame from the camera and reducing it to the same size. Two of the images from the benchmark set and the analysis output are shown in Figure 3 below.

Figure 3. Two of the sample images together with the output analysis and call time.

How can we go faster on the Pi 3? We are also able to install MXNet on the Pi 3, but it is a non-trivial task, as you must build it from the source. Deployment details are here; however, the resnet 152 model is too large for the 1 GB memory of the Pi 3, so we need to find another approach. The obvious answer is to use a much smaller model such as the Inception 21 layer network, which has a model database of only 23MB (vs 310MB for resnet 152), but it has only 1000 classes vs the 11,000 of the full resnet 152. We installed TensorFlow on the Pi 3. (There are excellent examples of using it for image analysis and recognition provided by Matthew Rubashkin of Silicon Valley Data Science.) We ran the TensorFlow Inception_2015_12_05 model, which fit in memory on the Pi. Unfortunately, it was only able to reach 0.48 frames per second on the same image set described above.

To solve this, we need to go to the cloud. In a manner similar to the Greengrass model, we will have the Pi 3 sample the camera, downsize the image and send it to the cloud for execution. To test it we ran the MXNet container on a VM in AWS and pointed the Pi camera at various scenes. The results are shown in Figure 4.

Figure 4. The result for the toy dinosaur as it is logged into AWS DynamoDB. The bottom two images show only the description string.

The output of the model gives us the likelihood of various labels. In a rather simple-minded effort to be more conversational, we translate the likelihood results as follows. If a label X is more than 75% likely, the container returns a value of "This certainly looks like a X". If the likelihood is less than 35%, it returns "I think this is an X, but I am not sure" (the code is below). We look at the top 5 likely labels and they are listed in order.

The Pi device pushes jpeg images to AWS S3 as a blob. It then pushes the metadata about the image (a blob name and time stamp) to the AWS Simple Queue Service. We modified the MXNet container to wait for something to land in the queue. When this happens, it takes the image metadata, pulls the image from S3, does the analysis and finally stores the result in an AWS DynamoDB table.
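The device-side code isn't shown in the post; as a rough sketch of the S3-plus-SQS handoff just described (using boto3, with made-up bucket and queue names):

import time
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "pi-image-blobs"                                                    # hypothetical bucket
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-queue"   # hypothetical queue

def push_frame(jpeg_path):
    # Name the blob after the capture time so the consumer can correlate results.
    blob_name = "frame-{}.jpg".format(int(time.time() * 1000))
    s3.upload_file(jpeg_path, BUCKET, blob_name)
    # The metadata is just the blob name and a timestamp, as described above.
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody="{},{}".format(blob_name, time.time()))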
However, we can only go as fast as we can push the images and metadata to the cloud from the Pi device. With repeated tries we can achieve 6 frames/sec. To speed up the analysis to match this input stream, we spun up a set of analyzers using the AWS Elastic Container Service (ECS). The final configuration is shown in Figure 5.

Figure 5. The full Pi 3 to cloud image recognition architecture. (The test dataset is shown in the tiny pictures in S3.)

To conduct the experiments, we included a time stamp from the edge device with the image metadata. When the MXNet container puts the result in the DynamoDB table, it includes another timestamp. This allows us to compute the total time from image capture to result storage for each image in the stream. If the device sends the entire collection as fast as possible, then the difference between the earliest recorded time stamp and the most recent gives us a good measure of how long it takes to complete the entire group.

While the Pi device was able to fill S3 and the queue at 6 frames a second, having only one MXNet container instance meant that the total throughput was only about 0.4 frames/sec. The servers used to host the container are relatively small. However, using ECS it is trivial to boost the number of servers and instances. Because the container instance is so large, only one instance can fit on each of the 8 GB servers. However, as shown in Figure 6, we were able to match the device sending throughput with 16 servers/instances. At this point messages in the queue were being consumed as fast as they were arriving. Using a more powerful device (a laptop with a Core i7 processor) to send the images, we were able to boost the input end up to just over 7 frames per second, and that was matched with 20 servers/instances.

Figure 6. Throughput in frames/second measured from the Pi device to the final results in the DynamoDB instance. In the 20 instance case, a faster Core i7 laptop was used to send the images.

Final Thoughts

This exercise does not fully explore the utility of AI methods deployed at the edge or between the edge and the cloud. Clearly this type of full object recognition at real-time frame rates is only possible if the edge device has sophisticated accelerator hardware. On the other hand, there are many simple machine learning models that can be used for more limited applications. Object motion tracking is one good example. This can be done in real time, typically by comparing a frame to a previous one and looking for the differences. Suppose you need to invoke fire suppression when a fire is detected. It would not be hard to build a very simple network that can recognize fire but not simple movement of ordinary objects. Such a network could be invoked whenever movement is detected, and if it is fire the appropriate signal can be issued. Face detection and recognition is possible with the right camera. This was done with the Microsoft Xbox One, and it is now part of the Apple iPhone X.

There are, of course, limits to how much we want our devices to see and analyze what we are doing. On the other hand, it is clear that advances in automated scene analysis and "understanding" are moving very fast. Driverless cars are here now and will be commonplace in a few years. Relatively "smart" robots of various types are under development. It is essential that we understand how the role of these machines in society can benefit the human condition, along the lines of the open letter from many AI experts.
Notes about the MXNet container. The code is based on a standard example of using MXNet to load a model and invoke it. To initialize the model, the container first loads the model files into the root file system. That part is not shown here. The files are full-resnet-152-0000.params (310MB), full-resnet-152-symbols.json (200KB) and full-synset.txt (300KB). Once loaded into memory, the full network is well over 2GB and the container requires over 4GB. Following the load, the model is initialized.

import mxnet as mx

# 1) Load the pretrained model data
with open('full-synset.txt', 'r') as f:
    synsets = [l.rstrip() for l in f]
sym, arg_params, aux_params = mx.model.load_checkpoint('full-resnet-152', 0)

# 2) Build a model from the data
mod = mx.mod.Module(symbol=sym, context=mx.gpu())
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params)

The function used for the prediction is very standard. It takes three parameters: the image object, the model and synsets (the picture labels). The image is modified to fit the network and then fed to the forward end. The output is a NumPy array, which is sorted and the top five results are returned.

def predict(img, mod, synsets):
    img = cv2.resize(img, (224, 224))
    img = np.swapaxes(img, 0, 2)
    img = np.swapaxes(img, 1, 2)
    img = img[np.newaxis, :]
    mod.forward(Batch([mx.nd.array(img)]))
    prob = mod.get_outputs()[0].asnumpy()
    prob = np.squeeze(prob)
    a = np.argsort(prob)[::-1]
    result = []
    for i in a[0:5]:
        result.append([prob[i], synsets[i][synsets[i].find(' '):]])
    return result

The container runs as a web service on port 8050 using the Python "Bottle" package. When it receives a web POST message to "call_predict", it invokes the call_predict function below. The image is passed as a jpeg attachment, which is extracted with the aid of the request object. It is saved in a temporary file and then read by the OpenCV read function. Unfortunately there was no way to avoid the save followed by read because of limitations of the API; however, we measured the cost of this step and it was less than 1% of the total time of the invocation. The result of the predict function is a two dimensional array, with each row consisting of a probability and the associated label. The call returns the most likely labels as shown below.

@route('/call_predict', method='POST')
def call_predict():
    t0 = time.time()
    result = ''
    request.files.get('file').save('yyyy.jpg', 'wb')
    image = cv2.cvtColor(cv2.imread('yyyy.jpg'), cv2.COLOR_BGR2RGB)
    t1 = time.time()
    result = predict(image, mod, synsets)
    t2 = time.time()
    answer = "i think this is a "+result[0][1]+" or it may be a "+result[1][1]
    if result[0][0] < 0.3:
        answer = answer + ", but i am not sure about this."
    if result[0][0] > 0.6:
        answer = "I see a "+result[0][1]+"."
    if result[0][0] > 0.75:
        answer = "This certainly looks like a "+result[0][1]+"."
    answer = answer + " \n total-call-time=" + str(t2-t0)
    return(answer)

run(host='0.0.0.0', port=8050)

The version of the MXNet container used in the ECS experiment replaces the Bottle code and call_predict with a loop that polls the message queue, pulls a blob from S3 and pushes the result to DynamoDB.
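That polling variant isn't listed in the post. A rough boto3 sketch of the loop described above might look like this (queue, bucket and table names are placeholders; it reuses the mod, synsets and predict() defined above):

import time
import boto3
import cv2

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("image-results")    # hypothetical table name

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-queue"   # hypothetical
BUCKET = "pi-image-blobs"                                                    # hypothetical

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        blob_name, sent_at = msg["Body"].split(",")
        s3.download_file(BUCKET, blob_name, "frame.jpg")
        image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
        result = predict(image, mod, synsets)                # same predict() as above
        table.put_item(Item={"blob": blob_name,
                             "sent_at": sent_at,
                             "finished_at": str(time.time()),
                             "best_label": result[0][1]})
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])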
https://esciencegroup.com/2017/12/11/
If you're developing any type of IoT product, inevitably you'll need some type of mobile app. While there are easy ways, they're not for production use. In this tutorial, we'll talk about the basics of Particle app development. You'll learn about some of the many app frameworks you can take advantage of. Plus there's libraries, tricks, and tools along the way to make your life a lot easier. App Frameworks Sometimes it's dang near irritating to program multiple applications natively. You see, Swift (or Objective C 🤮) and Java aren't terrible at first glance (well, maybe except for Obj-C 🤮). But when you're resource constrained, you have to figure out a new game plan. That's where App Frameworks come in. These frameworks allow an app developer to write, build and test cross platform apps. In some cases, the frameworks convert your app into native code. That means that they run as fast and as well as one written in Swift or Java. I did the research and as of January 2020, here are some of the most supported frameworks: The list goes on for days. I've used a few of these frameworks in the past. I've built a Meteor app which (surprisingly) worked. In the end I had to pick one though. What did I go with? NativeScript. For the most part, NativeScript's documentation and on-boarding experience is fantastic. Not only can you preview your app inside an emulator but you can load it directly to your phone too! One of the cool things about NativeScript is that it supports TypeScript. TypeScript is a superset of JavaScript with some extra wiz-bang features. Unlike other languages, JavaScript technically has no types. If you've done any Particle development you likely know what a type is. We're talking about int, String, float and more. i.e. they're directives to to make sure your JavaScript code stays consistent. NativeScript is also compatible with most major JavaScript web frameworks. This includes Vue.Js and Angular. I've only noticed one major drawback thus far: the mobile preview mode ( tns preview command) does not pay well with native libraries. If you have some native platform specific libraries, you'll have to use the emulator or a device (if you have one). If you're gung-ho and you want to build multiple apps in their respective languages, the more power to you. There is an advantage over the above frameworks: tried and true Particle SDKs. Available Libraries & SDKs Particle has gone out of their way to make app development a little easier. This is thanks to the massive development work that has gone into their own SDKs. Yup, gone are the days you have to write manual HTTP request handlers. Here's a link to both the iOS and Android SDKs: Though we won't be covering them here, they reflect all the potential calls that you can make using the Cloud API. Speaking of Cloud API, Particle has also developed a Node.js library as well. As you can imagine, you can use this for your server side code or JavaScript based app frameworks. Sadly, it doesn't work with NativeScript. Frameworks that use a WebView should be more compatible. In the case of this tutorial, we'll be mostly focusing on the Cloud API. This way you have a good understanding of the overall system. It may seem intimidating but if you do it right, you'll get the hang of it real fast. Making API Calls In NativeScript you can't use libraries like [request](). (Which happens to be the library Particle's very own DMC used in the CLI — DMC if you're reading this, Hi!) You'll have to use the provided HTTP module. 
If you scroll all the way to the bottom of that page, you'll see a fully fledged POST example. I'll reproduce it here, but with some Particle-specific changes:

// Create form post data
var data = new FormData();
data.append("name", "update");
data.append("data", "It's hammer time!");
data.append("private", "true");
data.append("access_token", _token);

// Configure the httpModule
return httpModule
  .request({
    url: ``,
    method: "POST",
    content: data
  })
  .then(
    response => {
      const result = response.content.toJSON();
      console.log(result);
    },
    e => {
      if (e) console.log(e);
    }
  );

The above is an example of what's equivalent to Particle.publish in DeviceOS. Let's break down the parts.

First of all, one of the main gotchas of Particle's Web API is the data format. I first expected that they use JSON, but I was sorely wrong. After actually reading the documentation I realized that most POST requests are actually application/x-www-form-urlencoded. That means when you submit data, it's the equivalent of hitting the submit button on an HTML form. Fortunately, there is an easy way to assemble form data in Node/JavaScript: we can use the FormData() object.

Take a look at the above. There should be some familiar parameter names in the data.append calls. "name" refers to the name of the event you're publishing to. "data" refers to the string formatted data that you're publishing. "private" dictates whether or not you want to broadcast this data to the whole Particle world, or just your little corner of it. "access_token" is a token that you can generate in order to make these API calls. Without a token, though, you're dead in the water.

Getting a Token

Where do we get this elusive access_token? At first I had no idea. I created an OAuth user and secret in the console. That led to a dead end. I fiddled around with different API calls and settings. Nothing. Then it hit me like a ton of bricks. There's an access_token attached to the curl request on every device page! Open up any device and click the little console button near Events. A popup with instructions and a URL will appear. Copy the text after access_token=. That is your access_token! See below:

You can use this token to make calls to the Particle API. This can be to subscribe, publish, write to a function, read variables and more.

Through the command line

That's nice and everything, but how the heck can you programmatically generate one? One way is with the command line. particle token create is the name of the command you need to know about. When you run it, you'll be prompted to log in. (Also enter your Authenticator code if you use one.) Then the command line will spit out a shiny new access_token you can use with the API!

Through the API itself

If you couldn't guess, particle token create is a frontend to a raw API call. You can make these API calls directly too. Here's what it looks like in NativeScript:

// Create form post data
var data = new FormData();
data.append("username", "jaredwolff");
data.append("password", "this is not my password");
data.append("grant_type", "password");
data.append("client_name", "user");
data.append("client_secret", "client_secret_here");

// Configure the httpModule
return httpModule
  .request({
    url: ``,
    method: "POST",
    content: data
  })
  .then(
    response => {
      const result = response.content.toJSON();
      console.log(result);
    },
    e => {
      if (e) console.log(e);
    }
  );

This call may get more complicated, mostly if you have two-factor authorization set up. It's well worth it when you figure it all out.
After all, no one wants to manually create auth tokens if they don't have to!

Now you're ready to write to and read from your devices. There's one thing, though, that may trip you up. Subscribing to events can be troublesome with a regular HTTP client. So much so that if you try to do it with NativeScript's HTTP client, it will lock up and never return. Luckily there is a way to handle these special HTTP calls.

Server Sent What?

Server Sent Events (SSE for short) is an HTTP/S subscription functionality. It allows you to connect to an SSE endpoint and continuously listen for updates. It's a similar web technology to what companies use for push notifications. It does require some extra functionality under the hood though...

SSE Library

After much head scratching and searching I stumbled upon nativescript-sse. It looked simple enough that I could start using it immediately. More problems arose when I tried to use it though.

First, it turns out you can't use the library in tns preview mode. The alternative is to use tns run ios --emulator or use tns run ios with your iPhone connected to your computer. The non-emulator command will automatically deliver your prototype app. Side note: I had already set up my phone in Xcode. You may have to do this yourself before tns run ios is able to find and deploy to your phone.

Secondly, once I got the library working, I noticed I would get some very nasty errors. The errors seemed to happen whenever a new message from Particle came along. It turns out the underlying Swift library for iOS had fixed this last year. So I took it upon myself to figure out how to upgrade the NativeScript plugin. I'll save you the time and say that it can be a pain and there is a learning curve! Fortunately, after some hacking I got something working. More instructions on how to compile the plugin are in the README. Alternatively, you can download a pre-built one on the Release page of the repository. Download the .tgz file to wherever you like. Then, you can add it using tns plugin add. The full command looks like this:

tns plugin add path/to/plugin/file.tgz

You can check to make sure the library is installed by running tns plugin list:

jaredwolff$ tns plugin list
Dependencies:
┌─────────────────────┬───────────────────────────────────────────────────────────────────────────────────┐
│ Plugin              │ Version                                                                           │
│ @nativescript/theme │ ~2.2.1                                                                            │
│ nativescript-sse    │ file:../../Downloads/nativescript-sse/publish/package/nativescript-sse-4.0.3.tgz │
│ tns-core-modules    │ ~6.3.0                                                                            │
└─────────────────────┴───────────────────────────────────────────────────────────────────────────────────┘
Dev Dependencies:
┌──────────────────────────┬─────────┐
│ Plugin                   │ Version │
│ nativescript-dev-webpack │ ~1.4.0  │
│ typescript               │ ~3.5.3  │
└──────────────────────────┴─────────┘
NOTE: If you want to check the dependencies of installed plugin use npm view <pluginName> grep dependencies
If you want to check the dev dependencies of installed plugin use npm view <pluginName> grep devDependencies

Once installed, invoking the library takes a few steps. Here's an example:

import { SSE } from "nativescript-sse";

sse = new SSE("<your access token>", {});

// Add event listener
sse.addEventListener("blob");

// Add callback
sse.events.on("onMessage", data => {
  // TODO: do stuff with your event data here!
  console.log(data);
});

// Connect if not already
sse.connect();

First you need to import and create an instance of the library. When you create the instance, you will have to enter the URL that you want to use.
In this case we'll be doing the equivalent of Particle.subscribe(). It should look something similar to the above:<your event name>?access_token=<your access token>. Replace <your event name> and <your access token> with the name of your event and your freshly created token! Then you set up the library to listen for the event you care about. In this case blob is the event I most care about. Then make sure you configure a callback! That way you can get access to the data when blob does come along. I've made a TODO note where you can access said data. Finally, you can connect using the .connect() method. If you don't connect, SSE will not open a session and you'll get no data from Particle. Placement of the code is up to you but from the examples it looks like within the constructor() of your model is a good place.() Other Examples If you're curious how to use SSE in other places I have another great example: Particle's CLI. Particle uses the [request]() library to handle SSE events in the app. Whenever you call particle subscribe blob it invokes a getStreamEvent further inside the code. You can check it out here. The request library has more information on streaming here. More resources This is but the tip of the iceberg when it comes to connecting with Particle's API. Particle has some great documentation (as always) you can check out. Here are some important links: Conclusion In this post we've talked about app frameworks, NativeScript, NativeScript plugins and Server Sent Events. Plus all the Particle related things so you can connect your NativeScript app to Particle's API. I hope you've found this quick tutorial useful. If you have any questions feel free to leave a comment or send me a message. Also be sure to check out my newly released guide. It has content just like this all about Particle's ecosystem. Until next time! This post was originally from
https://www.freecodecamp.org/news/how-to-develop-particle-iot-apps-using-nativescript/
To define a new type or class, first declare it, and then define its methods and fields. Declare a class using the class keyword. The complete syntax is as follows:

[attributes] [access-modifiers] class identifier [:base-class] {class-body}

Attributes are covered in Chapter 8; access modifiers are discussed in the next section. (Typically, your classes will use the keyword public as an access modifier.) The identifier is the name of the class that you provide. The optional base-class is discussed in Chapter 5. The member definitions that make up the class-body are enclosed by open and closed curly braces ({}).

In C#, everything happens within a class. For instance, some of the examples in Chapter 3 make use of a class named Tester:

public class Tester
{
    public static int Main( )
    {
        //...
    }
}

So far, we've not instantiated any instances of that class; that is, we haven't created any Tester objects. What is the difference between a class and an instance of that class? To answer that question, start with the distinction between the type int and a variable of type int. Thus, while you would write:

int myInteger = 5;

you would not write:

int = 5;

You can't assign a value to a type; instead, you assign the value to an object of that type (in this case, a variable of type int).

When you declare a new class, you define the properties of all objects of that class, as well as their behaviors. For example, if you are creating a windowing environment, you might want to create screen widgets (more commonly known as controls in Windows programming) to simplify user interaction with your application. One control of interest might be a list box, which is very useful for presenting a list of choices to the user and enabling the user to select from the list. List boxes have a variety of characteristics: for example, height, width, location, and text color. Programmers have also come to expect certain behaviors of list boxes: they can be opened, closed, sorted, and so on.

Object-oriented programming allows you to create a new type, ListBox, which encapsulates these characteristics and capabilities. Such a class might have member variables named height, width, location, and text_color, and member methods named sort( ), add( ), remove( ), etc. You can't assign data to the ListBox type. Instead you must first create an object of that type, as in the following code snippet:

ListBox myListBox;

Once you create an instance of ListBox, you can assign data to its fields.

Now consider a class to keep track of and display the time of day. The internal state of the class must be able to represent the current year, month, date, hour, minute, and second. You probably would also like the class to display the time in a variety of formats. You might implement such a class by defining a single method and six variables, as shown in Example 4-1.

using System;
public class Time
{
    // private variables
    int Year;
    int Month;
    int Date;
    int Hour;
    int Minute;
    int Second;

    // public methods
    public void DisplayCurrentTime( )
    {
        Console.WriteLine("stub for DisplayCurrentTime");
    }
}

public class Tester
{
    static void Main( )
    {
        Time t = new Time( );
        t.DisplayCurrentTime( );
    }
}

The only method declared within the Time class definition is DisplayCurrentTime( ). The body of the method is defined within the class definition itself.
Unlike other languages (such as C++), C# does not require that methods be declared before they are defined, nor does the language support placing its declarations into one file and code into another. (C# has no header files.) All C# methods are defined inline, as shown in Example 4-1 with DisplayCurrentTime( ).

The DisplayCurrentTime( ) method is defined to return void; that is, it will not return a value to a method that invokes it. For now, the body of this method has been "stubbed out." The Time class definition ends with the declaration of a number of member variables: Year, Month, Date, Hour, Minute, and Second.

After the closing brace, a second class, Tester, is defined. Tester contains our now familiar Main( ) method. In Main( ), an instance of Time is created and its address is assigned to object t. Because t is an instance of Time, Main( ) can make use of the DisplayCurrentTime( ) method available with objects of that type and call it to display the time:

t.DisplayCurrentTime( );

An access modifier determines which class methods (including methods of other classes) can see and use a member variable or method within a class. Table 4-1 summarizes the C# access modifiers.

It is generally desirable to designate the member variables of a class as private. This means that only member methods of that class can access their value. Because private is the default accessibility level, you do not need to make it explicit, but I recommend that you do so. Thus, in Example 4-1, the declarations of member variables should have been written as follows:

// private variables
private int Year;
private int Month;
private int Date;
private int Hour;
private int Minute;
private int Second;

Class Tester and method DisplayCurrentTime( ) are both declared public so that any other class can make use of them.

Methods can take any number of parameters.[1] The parameter list follows the method name and is encased in parentheses, with each parameter preceded by its type. For example, the following declaration defines a method named MyMethod( ), which returns void (that is, which returns no value at all) and which takes two parameters: an integer and a button.

[1] The terms "argument" and "parameter" are often used interchangeably, though some programmers insist on differentiating between the argument declaration and the parameters passed in when the method is invoked.

void MyMethod (int firstParam, button secondParam)
{
    // ...
}

Within the body of the method, the parameters act as local variables, as if you had declared them in the body of the method and initialized them with the values passed in. Example 4-2 illustrates how you pass values into a method, in this case values of type int and float.

using System;

public class MyClass
{
    public void SomeMethod(int firstParam, float secondParam)
    {
        Console.WriteLine("Here are the parameters received: {0}, {1}",
            firstParam, secondParam);
    }
}

public class Tester
{
    static void Main( )
    {
        int howManyPeople = 5;
        float pi = 3.14f;
        MyClass mc = new MyClass( );
        mc.SomeMethod(howManyPeople, pi);
    }
}

The method SomeMethod( ) takes an int and a float and displays them using Console.WriteLine( ). The parameters, which are named firstParam and secondParam, are treated as local variables within SomeMethod( ). In the calling method (Main), two local variables (howManyPeople and pi) are created and initialized. These variables are passed as the parameters to SomeMethod( ). The compiler maps howManyPeople to firstParam and pi to secondParam, based on their relative positions in the parameter list.
http://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+4.+Classes+and+Objects/4.1+Defining+Classes/
The algorithms presented are intended for use with large maps, and where computation time of some appreciable fraction of a second is tolerable. There are faster algorithms, but they involve solving the intersection and union of arbitrary polygons; the reasons for rejecting this approach are discussed below. Consider a region, such as a cave, dungeon or city, viewed from above, such that floors are shown by areas and walls by lines. Floors are always enclosed by some set of walls. Put another way, all maps are bounded by a continuous wall. We will ignore other map features; for our purposes, the only “important” features are walls and floors. Lights exist at points, and have a given area of effect, defined by a radius. Everything inside this radius that faces the light is considered lit. Lights can have overlapping areas. Walls cast shadows, making the walls and floors behind them unlit (there is no provision for light attenuating – getting dimmer over distance – things are either lit or unlit). The observer also exists at a point. The problem is to quickly discover what he can see, where the area seen is defined as any point he has line of sight to (that is, those walls and floors not occluded by other walls), that are also lit. For efficiency reasons, the observer’s vision also has a maximum radius. Anything outside that radius is not visible, even if it is lit and unobstructed. The algorithm can function without this limit, the limitation is an optimization. In addition to knowing what areas are currently visible, it is desirable to know what areas have been previously visible, and to be able to display those areas differently. An example is given in Figure 1. The observer is the red hollow circle, and standing at the intersection of three passageways. White solid circles are light (the observer, in this case, is carrying one). Dark green is used for walls and floors that have not been seen yet. Blue walls and medium grey floors mark areas that have been seen previously. White walls and light grey floor denote what’s currently visible. Note that while the observer’s own light doesn’t reach all the way down that west passageway, there’s a light to the south that illuminates part of it, giving a disjoint area of visibility. To the southeast, there’s another light that helps light that southeast corridor, so the whole corridor is visible. Finally, to the northeast, there’s a small window in the wall, allowing the observer to illuminate, and see, a small amount of the room to the east. Figure 1 - A simple example of a map In this technique, the floor is broken into small fragments (triangles in this case, because they are easy to render), which serve as the algorithm’s basic unit of area. When determining what part of the floor is visible, I’m really asking what set of triangles is visible. If the centroid (average of the vertices) of a triangle is visible, the whole triangle is considered visible. This has the side effect of making the bounds of the visibility area slightly jagged, as you can see in the disjoint area down the west corridor. In my own application, this is acceptable, but blending of adjacent triangles could be done in order to get a smooth gradient between visible and invisible areas. Because walls divide up floor area and walls can run at all sorts of angles, cutting the floor into small triangles often results in additional, smaller, triangles being generated. All told, a map can contain millions of floor triangles and hundreds of walls. 
For the curious, figure 2 shows the triangles generated for a small section of the map: Figure 2 - You are in a maze of twisty triangles, all largely the same For lighting (and seeing) the walls themselves, I do something similar – walls are broken into segments, and the center point of each segment is checked to see if it is visible. If it is visible, that whole segment is visible. For this reason, segments are kept short and walls can generate thousands of them in a large map. Because of this, when it comes time to determine which parts of walls and floors are not visible it may be necessary to evaluate millions of points for the floor and thousands of points for wall segments. Conceptually, they all need to be evaluated against every wall to determine if line of sight exists from the observer, and that process has to be repeated for each light as well. Clearly, a brute force approach will not work in reasonable time. The goal is to move the observer to a new point, or move a light to a new point (often both, since the observer often carries a light), and know as quickly as possible what areas of floor and segments of walls can be seen. Comparing possibly millions of points against hundreds or thousands of walls and doing a line of sight calculation – essentially calculating the intersection of two line segments, for one for a line of vision and one for a wall – isn’t acceptably fast. It turns out that lighting and vision can be handled by the same algorithm, since they are both occluded by walls in the same way. They can both be represented by casting rays out from a given point, and stopping the rays when they hit a wall. If there’s no wall along that ray, the ray is cut short by a distance limit instead (this illustrates another difficulty with using polygon intersections – polygons don’t have curved sides, and approximating them with short straight lines increases the cost of the intersection test). Figure 3 - A "polygon" of light, with curved parts Since there can be multiple lights, unions of polygons would be required: Figure 4 - Union of two lit areas Since we’ve defined visibility as areas that are both lit and within line of sight of the observer, the intersection of polygons representing lit areas (itself a union) and the polygon representing the area of sight represent the visible areas. Figure 5 repeats the original example, with yellow lines roughly delineating the lit area, and red lines bounding the area of sight. Areas within both are the visible areas. Figure 5 - Vision and Light compute Visibility The result of the intersection is a (potentially empty) set of polygons. To maintain a history of what’s been visible, a union of the previously visible areas and the currently visible area is also required. The combination can create polygons with holes, and for complex maps, a large number of sides. Figure 6 shows the result of a wall, a number of square columns, and a short walk along the north side of the wall by the observer, carrying a light. The union of previously visible areas is shown in darker grey1. Figure 6 - Columns, a Wall, and A Messy Polygon Union Doing polygon union and intersection is complex. Naïve implementations of these algorithms run into problems with boundary conditions and, in complex cases, floating point accuracy. There are packages available that solve these problems, and deal elegantly with disjoint polygons and holes, but they are available under restricted license2. 
I wanted an unencumbered solution, and was willing to trade off some amount of runtime to get it. But a brute force computation of every floor point vs. every obstructing wall is unacceptable. What is needed is an efficient way to evaluate the many floor and wall points for visibility. The basic approach amounts to describing the shadows cast by walls. Since each point in the floor has to be tested against these shadows, the algorithms focus on making it as inexpensive as possible to determine if any given point is within a shadow, without loss of accuracy. Since “as inexpensive as possible” is still too expensive, given the sheer number of points to consider, the approach also includes determining which walls are already covered by other walls (in effect, we work to discard walls which don’t change anything we care about.) Where walls cannot be discarded, the algorithm attempts to determine what parts of walls contribute to meaningful shadows, and which parts are irrelevant. Note that while I talk about shadows here, everything also applies to occluding the observer’s view; since, as noted, a wall stops a line of sight in exactly the same way that it cuts short a ray of light. Finally, I discuss the critical optimizations that make the approach fast enough to use on large and complex maps. We start by making all points are relative to the location of the light in question; in other words everything is translated so that the light is at the origin. This translation drops terms out of many formulas and provides significant savings. The first step is to identify when a wall casts a shadow over a point – efficiently. The simplest way to do this is to take the endpoints of a wall and arrange them so that the first point is clockwise from the second point, as seen from the origin. If they are counterclockwise, swap the points. If they are collinear with the light, throw out that wall, because it can’t cast a shadow. I will refer to the endpoints as the left and right endpoints, with the understanding that this is from the perspective of the origin. A 2D cross product3 (from the origin, to the start point, to the end point), reveals both the edge-on case and the clockwise or counterclockwise winding of the end points. Here’s an example: Figure 7 - Walls that do, and don't, matter In figure 7, line C is edge on to the light and casts no shadow of its own, so it gets dropped. B isn’t edge on, so we keep it, with BA as its right endpoint and BC as the left (as seen from the light at the origin). D is also a wall that matters, and DC becomes the right endpoint, while DE becomes the left. In some applications, in which walls always form continuous loops, A and E can also be dropped because they represent hidden surfaces. The same rules that apply to 3D surface removal apply here – in a closed figure, walls that face away from the origin can be dropped without harm. However, this algorithm works even if walls don’t form closed figures. Now, cast a ray out from the origin though the right endpoint of a given wall – use B as an example, and cast a ray through BA. Note that any point that is in shadow happens to be to the left of this line (so are many other points, but the point is that all the shadowed ones are). Calculating “to the left of” is cheap: it’s the 2D cross product from the origin, to the right endpoint of the wall, to the point in question; for example, point X in Figure 7. 
The cross product gives a value that’s negative on one side of the wall, positive on the other side, and zero if the point in question is on the line. Repeat for the ray from the origin to the left end of the line segment, BC. All shadowed points are to the right of this line, which is determined by another 2D cross product. All that remains is to determine if the point is behind the wall or in front of the wall, with respect to the center. This is yet another 2D cross product, this time from the wall’s right endpoint, to the left end point, to the point in question. Point X in Figure 7 would pass all three of these tests, so it is in B’s shadow. All told, at most three cross products (six multiplies and three subtracts, and three comparisons with 0), tell if a point is shadowed by a wall. In many cases, a single cross product will prove that a point is not shadowed by a wall. But that still leaves the problem of comparing many, many thousands of points against hundreds of walls. Having established an algorithm to test a point against a wall, we now need to find ways to minimize how often we have to use it. Any wall we can cull results in hundreds of thousands of fewer operations! So a first pass at culling is simply to remove any wall which is outside the radius of the light by creating a bounding square around the origin with the “radius” of the light, and a bounding rectangle from each wall’s endpoints. If these don’t overlap, that wall can’t affect lighting, and is discarded. Figure 8 - Using rectangles and overlap to discard walls In Figure 8, F’s rectangle doesn’t overlap the light’s rectangle, so F gets discarded. H and G overlap and so are kept – H is a mistake because it’s not really in the circle of light that matters, but this is a very cheap test that discards most walls in a large map very quickly, and that’s what we want for now. In applications where walls form loops or tight groups, the entire set can be given a single bounding rectangle, allowing whole groups of walls to be culled by a single overlap test. Whatever walls are left might cast shadows. For each, we calculate the squared distance between the origin and the nearest part of the wall. This is slightly messy, since “nearest” could be either endpoint, or a point somewhere between. Given these distances, sort the list of walls so that the closest rise to the top. In the case of ties, there is generally some advantage in letting the longer wall sort closer to the top. This will usually put the walls casting the largest shadows near the top of the list. This helps performance considerably, but nothing breaks if the list isn’t perfectly sorted. In figure 8, G would be judged closer, by virtue of the northernmost endpoint. H’s closest point, near the middle of H, is further off. Once we’ve dropped all the obviously uninvolved walls and sorted the rest by distance, it’s time to walk through the list of walls, adding them to the (initially empty) set of walls that cast shadows. If a wall turns out to be occluded by other walls in this phase, we cull it. Usually, anyway - in the interest of speed, the algorithm settles for discarding most such walls, but can miss cases. In practice, it misses few cases, so I have not troubled to improve this phase’s culling method. To explain how this culling is done, we must introduce some concepts. Each wall generates a pair of rays, both starting from the light (origin) and one through each endpoint. 
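As a rough illustration of the per-wall point test just described (a sketch in Python, not the author's code; the exact sign conventions depend on your coordinate system and on how the endpoints were wound):

def cross(ox, oy, ax, ay, bx, by):
    # 2D cross product of (a - o) and (b - o).
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def in_shadow(right, left, px, py):
    # The light is at the origin; "right" and "left" are the wall endpoints,
    # arranged so the first is clockwise from the second as seen from the light.
    rx, ry = right
    lx, ly = left
    # 1) shadowed points are to the left of the ray through the right endpoint
    if cross(0.0, 0.0, rx, ry, px, py) <= 0:
        return False
    # 2) ... and to the right of the ray through the left endpoint
    if cross(0.0, 0.0, lx, ly, px, py) >= 0:
        return False
    # 3) ... and behind the wall rather than between the wall and the light
    return cross(rx, ry, lx, ly, px, py) < 0

# Example: a wall due east of the light, from (5, -1) to (5, 1).
# in_shadow((5, -1), (5, 1), 10, 0) is True; in_shadow((5, -1), (5, 1), 2, 0) is False.

Most points fail on the first comparison, which is exactly why it pays to cull whole walls before this test ever runs.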
As noted before, points to the right of the “left” ray, and to the left of the “right” ray, bound the possible shadow. However, some of that area might be already shadowed by another wall – one wall can partially cover another. In fact, all of that area might be shadowed by other walls – the current wall might be totally occluded. If it’s only partially occluded, what we want to do is “narrow” its rays, pushing the left ray further right and/or the right ray further left, to account for the existing, known shadows in this area. The reason for this is that we don’t want to compare points against multiple walls if we don’t have to, and we have to compare a point against any wall if it lies between the wall’s left and right rays. The narrower that angle becomes, the fewer points have to be checked against that wall. So when we take in a new wall, the first thing we do is look at the two line segments between the origin and the wall’s endpoints (each in turn). If that line segment intersects a wall we already accepted, then the intersected wall casts a shadow we probably care about in regard to the current wall. The interesting point here is that we don’t care where the intersection occurs. An example will show why: Figure 9 - Intersecting origin-to-endpoint with other walls Assume the current wall, W, is half-hidden by an already accepted, closer one, S. Assume that it’s the left half of W that’s covered by the nearer wall, as in the example above. The way we discover this is by checking the line segment from the origin to left endpoint of W, against the already-accepted walls. It can intersect several. Once we find an intersection, we know immediately that the intersected wall, S, is going to cover at least part of W, and on the left side (ignore the case where the S’s left endpoint and S’s right endpoint are collinear with the origin – it doesn’t change anything). Notice that we don’t care where S gets intersected. So what do we do? We replace W’s left endpoint ray with S’s right endpoint ray. In effect, we push the left endpoint ray of W to the right. Having done that, we check to see if we’ve pushed it so far to the right that it is now at or past W’s own right ray. If so, S completely occludes W and we discard W immediately. If not, we’ve made W “narrower”. In our example, W survived and its shadow got narrower, as marked in grey. Figure 10 - Pushing W's left ray We keep doing this, looking for other walls to play the part of S that intersect W’s left or right end rays. When they do, we update (“nudge”) W’s left (or right) ray by replacing it with S’s right (or left) ray. If the endpoint rays of W meet or cross during this “nudging” process, W is discarded. Figure 11 - Losing W In the example above, S has pushed W’s left ray, and J has pushed W’s right ray. A 2D cross product tells us that left and right rays have gotten past each other in the process, so W is judged to be completely occluded, and gets dropped. Otherwise, it survives, with a potentially much narrowed shadow, and it adds it to the set of kept walls. Note that if J had been longer to the left, it could have intersected both W’s red lines, and it would have pushed both W’s left and right rays by itself, forcing them to cross. This makes sense; it would have completely occluded W all by itself and we’d expect it to cause W to drop out. Note that this algorithm doesn’t notice the case where a short, close wall casts a shadow over the middle of a long wall a little further off. 
In this case, both walls end up passing this check, and the long wall doesn’t get its rays changed (the only way to do that would be to split the long wall into two pieces and narrow them individually). This isn’t much of a problem in practice, because when it comes time to check points, we will again check the walls in order of distance from the origin, so the smaller, closer wall is likely to be checked first. Points it shadows won’t have to be checked again, so for those points the longer wall never needs to be checked at all. There are unusual cases where points do end up in redundant checks, but they are unusual enough not to be much of a runtime problem. As we work through the list of candidates, we are generally working outward from the origin, so it’s not uncommon for more and more walls to end up discarded because they are completely occluded. This helps keep this part of the algorithm quick. It remains to find a good way to detect intersections of line segments. We don’t want round-off problems (this might report false intersections or miss real ones, causing annoying issues), and we don’t care where the intersection itself actually occurs. It turns out that a reasonable way to do this is to take W’s two points, and each potential S’s two points, and arrange them into a quadrilateral. We take the 4 points in this order: S’s left, W’s left, S’s right, W’s right. If the line segments cross, the four points in order form a convex polygon. If they don’t, it is concave. An example serves: Figure 12 - Detecting intersections R and S cross, so the (green) polygon formed by the 4 endpoints, is a convex kite shape. R and L don’t cross, so the resulting (orange) polygon isn’t convex; it’s not even simple. It turns out that there is a fair amount of misinformation about testing for convex polygons on the ‘Net. Talk about counting changes in sign sounds interesting (and cheap), but I’ve yet to see an implementation of this that works in all cases, including horizontal and vertical lines. What I’ve ended up with is more expensive but gets all cases, even if two points in the quad happen to be coincident. I calculate the 4 2D cross products, going around the quad (in either order). If they are all negative OR they are all positive, it’s convex. Anything else is concave or worse. While not cheap (up to 8 multiplies and quite a few subtracts), we can stop as soon as we get a difference in sign. On hardware that can do floating subtracts in parallel, this is not too bad in cost. By itself, that’s enough to discard unneeded walls in most cases, and minimize the scope of influence of the surviving walls. Just with what we have, it’s possible to determine if points are occluded by walls. But we’d still like it faster. We always want things faster, that’s why we buy computers. Making it faster Make sure all we just discussed makes sense to you, because we’re about to add some complications. There are four optimizations that can be applied to all this, unrelated to each other. 1. In my maps, walls (except doors) always touch one other wall at their endpoint. (They are really wall surfaces - just as in a 3D model, all the surface polygons are just that, surfaces, always touching neighbor surfaces along edges.) This leads to an optimization, though it is a little fussy to apply. An example serves best. Imagine you’re a light at the origin, and over at x=5 there’s a wall stretching from (5,0) to (5,5), running north and south. It casts an obvious shadow to the east. We’ll call that wall A. 
But imagine that at (5,5) there's another wall, running to (4,6), diagonally to the northwest. Call it B. Figure 13 - Extending A's influence A has the usual left and right rays: the right ray passes through (5,0), the left through (5,5). B has its own rays, with a right ray at (5,5) and a left ray at (4,6). Between them, they cast a single, joined shadow, wider than the shadows they cast individually. The shape of the shadow is complicated, but it's worth noticing that for any point behind the line of A (that is, with x > 5, noted in green), that point is in shadow if it is between A's right ray and B's left ray. This is because B extends A, and (important point) it extends it by turning somewhat towards the light, not away. (It would also work if A and B were collinear, but in my system, collinear walls that share an endpoint become a single wall). There are also points B shadows that have x < 5, but when we are just considering A, it's fair to say that we can apply B's left bound instead of A's left bound when asking about points that are behind the line A makes. A's ability to screen things, given in light grey, has effectively been extended by the dark grey area. I don't take advantage of this when it comes to considering which points are in shadow, because all it does is increase the number of points that are candidates for testing for any given wall, and that doesn't help. However, I do take advantage of this when determining what walls occlude other walls. I do this by keeping two sets of vector pairs for each wall. The ones I've been calling left and right are the "inner" vectors, named because they tend to move inward, and their goal is to get pushed closer together by other walls, ideally to cross. But there is also a pair of left and right vectors I call the outer pair. They start at the endpoints of the wall like the inner ones do, but they try to grow outward. They grow outward when 1) I find the wall that shares an endpoint and 2) this other wall (in my scheme there can only be one) does not bend away from the light. This is an easy check to make - it's another 2D cross product, from A's left endpoint, to A's right, to B's right. If that comes up counterclockwise, A's outer right vector gets replaced by a copy of B's outer right vector (as long as that's an improvement - it's important to check that you're really pushing the outer right vector more to the right.) And note the trick affects both A and B. If B extends A's right outer vector, then A is a great candidate for extending B's left outer vector. Applied carefully, the extra reach this gives walls helps discard distant walls very quickly in cases where there are long runs of joined walls. I find that for most maps, the difference this makes is not large, and given the work I put into getting it right, I might not have done it if I'd realized how little it helps most maps.

2. When considering points, it's important not to waste time testing any given point against a wall that can't possibly shadow it. Each point, after all, has three tests it has to pass, per wall: is it to the right of the left vector, to the left of the right vector, and is it behind the line of the wall? On average, half of all points are going to pass that first test, for most walls. That means that a fair amount of the time, the second test is going to be needed for points that are not remotely candidates. And given huge numbers of points, that's unacceptable.
Figure 14 - The futility of any one test Here, P is to the right of the left endpoint-origin line, so it’s a candidate for being shadowed by the wall. But so are Q and R, and they clearly aren’t going to be shadowed. It would be helpful, then, if we could only test a point against the walls that have a good shot of shadowing it. Trigonometric solutions suggest themselves, but trig functions are much too expensive. What I do is create a list of “aggregate wedges.” Each wall’s shadow, called a wedge, is compared with the other wedges of the other walls. If they overlap, they are added to the same set of wedges, and I keep track of the leftmost and rightmost ray among everything in the same set. Figure 15 - Creating sets of shaodw wedges Of course, if you’re in the square room without a door (never mind how you got in), you end up with all the walls in the same set, and the rays that bound the set end up “enclosing” every point on the map! So this trick is useless in these kinds of maps. But in maps of towns, with many freestanding buildings and hence many independent wedges, you can often get whole groups of walls into a number of disjoint sets, and since each set has an enclosing pair of rays that covers all the walls in the set, you can test any given point against the set’s rays: if it’s not between them, you don’t have to test any of the walls in that set. This sounds pretty, but it can be maddening to get right. You can have two independent sets, and then find a wall that belongs in them both, effectively merging two sets into one big one. Figure 16 - Combining sets You end up doing a certain amount of set management. I have a sloppy and dirty way to do this which is reasonably fast, but it’s not pretty. Another difficulty is knowing when a wall’s vectors overlap an existing set’s bounds. There are several cases. A wall’s vectors could be completely inside the set’s bounds, in which case the wall just becomes a member of the set and the set’s bounds don’t change. Or it can overlap to the left, joining the set and pushing the set’s left vector. Or it can overlap on the right. Or it can overlap on both sides, pushing both set vectors outward. Or it can be disjoint to that set. Keep in mind that a set can have rays at an obtuse angle, enclosing quite a bit of area. It’s surprisingly hard to diagnose all these cases properly. The algorithm is difficult to describe, but the source code is at the end of this document. When all is said and done, this optimization is very worthwhile for some classes of maps that would otherwise hammer the point test algorithm with a lot of pointless tests. But it was not much fun to write. 3. This is my favorite optimization because it’s simple and it tends to help a great deal in certain, specific cases. When I’m gathering walls, I’m computing distance (actually, distance squared – no square roots were used in any of this) between the light and each wall. Along the way I keep track of the minimum such distance I see. If, for example, the closest distance to any wall is 50 units, then any point that is closer to the origin than 50 units can never be in shadow and doesn’t have to be checked against any wall. (Of course, if the light’s actual radius of effect is smaller than this distance, that value is used instead). Figure 17 - Identifying points unaffected by walls If the light is close to a wall, this optimization saves very little. If it’s not, this can put hundreds or even thousands of points into “lit” status very quickly indeed. 
In order to avoid edge cases, I subtract a small amount from the minimum wall distance after I compute it, so there’s no question of points right at a wall being considered lit unduly. 4. This is my second favorite optimization because it’s simple, dirty and very effective. When I generate floor triangles, I use an algorithm that more or less sweeps though in strips. Because triangles are small, and generated such some neighboring triangles are adjacent in the list of triangles, odds are often very high that if a triangle is shadowed by a wall, the next triangle to consider is going to be shadowed by the same wall. Figure 18 - Neighbors often suffer the same fate So the best optimization of all is to remember which wall last shadowed a triangle, and test the next triangle against that one first, always. After all, if a triangle is shadowed by any wall, it doesn’t have to be tested against any other; we don’t care about finding the “best” shadow, we just want to quickly find one that works. If the light happens to be close to a wall (which ruins optimization 3), this one can be very powerful. W, here, is likely to shadow about half the map. One final trick – not exactly an optimization – has to do with the fact that this code runs on a dual core processor. I cut the triangle list in about half and give each half to a separate thread to run though. (Each thread has its own copy of the pointer used in optimization 4, so they don’t interfere with each other in any way.) This trick doesn’t always cut the runtime in half – it’s not uncommon for one thread to get saddled with most or all of the cases that require more checks – but it helps. Other speedups involve not using STL or boost, and sticking to old fashion arrays of data structures – heresy in some C++ circles, but the speed gains are worth any purist’s complaints. What’s left is trivial. Each floor triangle, and short wall segment, have a set of bits, one for each possible light, and one for the viewer. If the object is not in shadow, the appropriate light’s bit is set in that object. If any such bit is set, the object is lit. There is also a bit for the observer, which as noted uses the exact same algorithm. If the object is lit, that algorithm is then run for that point for the observer, and if it comes up un-occluded, it is marked visible (and also marked “was once visible”, because I need to keep a history of what has already been seen). Moving a light is a matter of clearing all bits that correspond to that light in all the objects, and then “recasting” the light from its new location. In my application, most lights don’t move frequently, so not much of that happens. My favorite acid test at the moment is an open town map with 3 million floor triangles and almost 7000 walls. With ambient light turned on (which means everything is automatically considered lit, so everything has to be checked for “visible to observer”), and a vision limit of 400 units (so about 336,000 possible triangles in range), my worst case compute times are about 0.75 seconds on a dual core 2Ghz Intel processor. Typical times are a more acceptable 0.4 sec or so. Kinder maps (underground caves, tight packed cities, open fields with few obstructions) manage times of well under 0.1 sec. 
Some examples of my implementation: Figure 19 - Room in underground city Figure 20 - Ambient light, buildings and rock outcrops Figure 21 - Limited lights, windows and doors Notice that lights in buildings shine out windows, and also enable peeks inside the building from outside, forming a number of disjoint areas of visibility. Code Listing What follows gives the general sense of the algorithms’ implementation. Do note that the code is not compiler-ready: support classes like Point, Wall, WallSeg and SimpleTriangle are not provided, but their implementation is reasonably obvious. This code is released freely and for any purpose, commercial or private – it’s free and I don’t care what happens, nor do I need to be told. It is also without any warranty or promise of fitness, obviously. It works in my application as far as I know, and with some adjustment, may work in yours. The comments will show some of the battles that occurred in getting it to work. The code may contain optimizations I didn’t discuss above. enum Clockness {Straight, Clockwise, Counterclockwise}; enum Facing {Colinear, Inside, Outside}; static inline Clockness clocknessOrigin(const Point& p1, const Point& P2) { const float a = p1.x * P2.y - P2.x * p1.y; if (a > 0) return Counterclockwise; // aka left if (a < 0) return Clockwise; return Straight; } static inline bool clocknessOriginIsCounterClockwise(const Point& p1, const Point& P2) { return p1.x * P2.y - P2.x * p1.y > 0; } static inline bool clocknessOriginIsClockwise(const Point& p1, const Point& P2) { return p1.x * P2.y - P2.x * p1.y < 0; } class LineSegment { public: Point begin_; Point vector_; //begin_+vector_ is the end point inline LineSegment(const Point& begin, const Point& end) : begin_(begin), vector_(end - begin) {} inline const Point& begin() const {return begin_;} inline Point end() const {return begin_ + vector_;} inline LineSegment(){} //We don't care *where* they intersect and we want to avoid divides and round off surprises. //So we don't attempt to solve the equations and check bounds. //We form a quadilateral with AB and CD, in ACBD order. This is a convex kite shape if the // segments cross. Anything else isn't a convex shape. If endpoints touch, we get a triangle, //which will be declared convex, which works for us. //Tripe about changes in sign in deltas at vertex didn't work. //life improves if a faster way is found to do this, but it has to be accurate. 
bool doTheyIntersect(const LineSegment &m) const { Point p[4]; p[0] = begin(); p[1] = m.begin(); p[2] = end(); p[3] = m.end(); unsigned char flag = 0; { float z = (p[1].x - p[0].x) * (p[2].y - p[1].y) - (p[1].y - p[0].y) * (p[2].x - p[1].x); if (z > 0) flag = 2; else if (z < 0) flag = 1; } { float z = (p[2].x - p[1].x) * (p[3].y - p[2].y) - (p[2].y - p[1].y) * (p[3].x - p[2].x); if (z > 0) flag |= 2; else if (z < 0) flag |= 1; if (flag == 3) return false; } { float z = (p[3].x - p[2].x) * (p[0].y - p[3].y) - (p[3].y - p[2].y) * (p[0].x - p[3].x); if (z > 0) flag |= 2; else if (z < 0) flag |= 1; if (flag == 3) return false; } { float z = (p[0].x - p[3].x) * (p[1].y - p[0].y) - (p[0].y - p[3].y) * (p[1].x - p[0].x); if (z > 0) flag |= 2; else if (z < 0) flag |= 1; } return flag != 3; } inline void set(const Point& begin, const Point& end) { begin_ = begin; vector_ = end - begin; } inline void setOriginAndVector(const Point& begin, const Point& v) { begin_ = begin; vector_ = v; } /* Given this Line, starting from begin_ and moving towards end, then turning towards P2, is the turn clockwise, counterclockwise, or straight? Note: for a counterclockwise polygon of which this segment is a side, Clockwise means P2 would "light the outer side" and Counterclockwise means P2 would "light the inner side". Straight means colinear. */ inline Clockness clockness(const Point& P2) const { const float a = vector_.x * (P2.y - begin_.y) - (P2.x - begin_.x) * vector_.y; if (a > 0) return Counterclockwise; // aka left if (a < 0) return Clockwise; return Straight; } inline bool clocknessIsClockwise(const Point& P2) const { return vector_.x * (P2.y - begin_.y) - (P2.x - begin_.x) * vector_.y < 0; } //relative to origin inline Clockness myClockness() const {return clocknessOrigin(begin(), end());} inline bool clockOK() const {return myClockness() == Counterclockwise;} //is clockOK(), this is true if p and center are on opposide sides of me //if p is on the line, this returns false inline bool outside(const Point p) const { return clockness(p) == Clockwise; } inline bool outsideOrColinear(const Point p) const { return clockness(p) != Counterclockwise; } void print() const { begin().print(); printf(" to "); end().print(); } }; class Wall; /* A wedge is a line segment that denotes a wall, and two rays from the center, that denote the relevant left and right bound that matter when looking at this wall. Initially, the left and right bound are set from the wall's endpoints, as those are the edges of the shadow. But walls in front of (eg, centerward) of this wall might occlude the end points, and we detect that when we add this wedge. If it happens, we use the occluding wall's endpoints to nudge our own shadow rays. The idea is to minimise the shadow bounds of any given wall by cutting away areas that are already occluded by closer walls. That way, a given point to test can often avoid being tested against multiple, overlapping areas. More important, if we nudge the effective left and right rays for this wall until they meet or pass each other, that means this wall is completely occluded, and we can discard it entirely, which is the holy grail of this code. Fewer walls means faster code. For any point that's between the effective left and right rays of a given wall, the next question is if it's behind the wall. If it is, it's definitively occluded and we don't need to test it any more. Otherwise, on to the next wall. 
*/ class AggregateWedge; enum VectorComparison {ColinearWithVector, RightOfVector, OppositeVector, LeftOfVector}; static VectorComparison compareVectors(const Point& reference, const Point& point) { switch (clocknessOrigin(reference, point)) { case Clockwise: return RightOfVector; case Counterclockwise: return LeftOfVector; } if (reference.dot(point) > 0) return ColinearWithVector; return OppositeVector; } class LittleTree { public: enum WhichVec {Left2, Right1, Right2} whichVector; //(sort tie-breaker), right must be greater than left const Point* position; //vector end LittleTree* greater; //that is, further around to the right LittleTree* lesser; //that is, less far around to the right LittleTree() {greater = lesser = NULL;} void readTree(WhichVec* at, int* ip) { if (lesser) lesser->readTree(at, ip); at[*ip] = whichVector; ++*ip; if (greater) greater->readTree(at, ip); } //walk the tree in order, filling an array void readTree(WhichVec* at) { int i = 0; readTree(at, &i); } }; class VectorPair { public: Point leftVector; Point rightVector; bool acute; VectorPair() {} VectorPair(const Point& left, const Point& right) { leftVector = left; rightVector = right; acute = true; } bool isAllEncompassing() const {return leftVector.x == 0 && leftVector.y == 0;} void set(const Point& left, const Point& right) { leftVector = left; rightVector = right; acute = clocknessOrigin(leftVector, rightVector) == Clockwise; } void setKnownAcute(const Point& left, const Point& right) { leftVector = left; rightVector = right; acute = true; } void setAllEncompassing() { acute = false; leftVector = rightVector = Point(0,0); } bool isIn(const Point p) const { if (acute) return clocknessOrigin( leftVector, p) != Counterclockwise && clocknessOrigin(rightVector, p) != Clockwise; //this accepts all points if leftVector == 0,0 return clocknessOrigin( leftVector, p) != Counterclockwise || clocknessOrigin(rightVector, p) != Clockwise; } //true if we adopted the pair into ourselves. False if disjoint. bool update(const VectorPair& v) { /* I might completely enclose him - that means no change I might be completely disjoint - that means no change, but work elsewhere He might enclose all of me - I take on his bounds We could overlap; I take on some of his bounds -- We figure this by starting at L1 and moving clockwise, hitting (in some order) R2, L2 and R1. Those 3 can appear in any order as we move clockwise, and some can be colinear (in which case, we pretend a convenient order). Where L1 and R1 are the bounds we want to update, we have 6 cases: L1 L2 R1 R2 - new bounds are L1 R2 (ie, update our R) L1 L2 R2 R1 - no change, L1 R1 already encloses L2 R2 L1 R1 L2 R2 - the pairs are disjoint, no change, but a new pair has to be managed L1 R1 R2 L2 - new bounds are L2 R2; it swallowed us (update both) L1 R2 L2 R1 - all encompassing; set bounds both to 0,0 L1 R2 R1 L2 - new bounds are L2 R1 (ie, update our L) If any two rays are colinear, sort them so that left comes first, then right. If 2 lefts or 2 rights, order doesn't really matter. The left/right case does because we want L1 R1 L2 R2, where R1=L2, to be processed as L1 L2 R1 R2 (update R, not disjoint) */ //special cases - if we already have the whole circle, update doesn't do anything if (isAllEncompassing()) return true; //v is part of this wedge (everything is) //if we're being updated by a full circle... if (v.isAllEncompassing()) { setAllEncompassing(); //we become one return true; } /* Now we just need to identify which order the 3 other lines are in, relative to L1. 
Not so easy since we don't want to resort to arctan or anything else that risks any roundoff. But clockness from L1 puts them either Clockwise (sooner), or Straight (use dot product to see if same as L1 or after Clockwise), or CounterClockwise (later). Within that, we can use clockness between points to sort between them. */ //get the points R1, L2 and R2 listed so we can sort them by how far around to the right // they are from L1 LittleTree list[3]; //order we add them in here doesn't matter list[0].whichVector = LittleTree::Right1; list[0].position = &this->rightVector; list[1].whichVector = LittleTree::Left2; list[1].position = &v.leftVector; list[2].whichVector = LittleTree::Right2; list[2].position = &v.rightVector; //[0] will be top of tree; add in 1 & 2 under it somewhere for (int i = 1; i < 3; ++i) { LittleTree* at = &list[0]; do { bool IisGreater = list[i].whichVector > at->whichVector; //default if nothing else works VectorComparison L1ToAt = compareVectors(leftVector, *at->position); VectorComparison L1ToI = compareVectors(leftVector, *list[i].position); if (L1ToI < L1ToAt) IisGreater = false; else if (L1ToI > L1ToAt) IisGreater = true; else { if (L1ToI != OppositeVector && L1ToI != ColinearWithVector) { //they are in the same general half circle, so this works switch (clocknessOrigin(*at->position, *list[i].position)) { case Clockwise: IisGreater = true; break; case Counterclockwise: IisGreater = false; break; } } } //now we know where [i] goes (unless something else is there) if (IisGreater) { if (at->greater == NULL) { at->greater = &list[i]; break; //done searching for [I]'s place } at = at->greater; continue; } if (at->lesser == NULL) { at->lesser = &list[i]; break; //done searching for [I]'s place } at = at->lesser; continue; } while (true); } //we have a tree with proper order. Read out the vector ids LittleTree::WhichVec sortedList[3]; list[0].readTree(sortedList); unsigned int caseId = (sortedList[0] << 2) | sortedList[1]; //form ids into a key. Two is enough to be unique switch (caseId) { case (LittleTree::Left2 << 2) | LittleTree::Right2: //L1 L2 R2 R1 return true; //no change, we just adopt it case (LittleTree::Right1 << 2) | LittleTree::Left2: //L1 R1 L2 R2 return false; //disjoint! case (LittleTree::Right1 << 2) | LittleTree::Right2: //L1 R1 R2 L2 *this = v; return true; //we take on his bounds case (LittleTree::Right2 << 2) | LittleTree::Left2: //L1 R2 L2 R1 setAllEncompassing(); return true; //now we have everything case (LittleTree::Left2 << 2) | LittleTree::Right1: //L1 L2 R1 R2 rightVector = v.rightVector; break; default: //(LittleTree::Right2 << 2) | LittleTree::Right1: //L1 R2 R1 L2 leftVector = v.leftVector; break; } //we need to fix acute acute = clocknessOrigin(leftVector, rightVector) == Clockwise; return true; } }; class Wedge { public: //all points relative to center LineSegment wall; //begin is the clockwise, right hand direction Point leftSideVector; //ray from center to this defines left or "end" side Wedge* leftSidePoker; //if I'm updated, who did it Point rightSideVector; //ray from center to this defines left or "end" side Wedge* rightSidePoker; //if I'm updated, who did it Wall* source; //original Wall of .wall VectorPair outVectors; float nearestDistance; //how close wall gets to origin (squared) AggregateWedge* myAggregate; //what am I part of? 
inline Wedge(): source(NULL), leftSidePoker(NULL), rightSidePoker(NULL), myAggregate(NULL) {} void setInitialVectors() { leftSidePoker = rightSidePoker = NULL; rightSideVector = wall.begin(); leftSideVector = wall.end(); outVectors.setKnownAcute(wall.end(), wall.begin()); } inline bool testOccluded(const Point p, const float distSq) const //relative to center { if (distSq < nearestDistance) return false; //it cannot if (clocknessOriginIsCounterClockwise(leftSideVector, p)) return false; //not mine if (clocknessOriginIsClockwise(rightSideVector, p)) return false; //not mine return wall.outside(p); //on the outside } inline bool testOccludedOuter(const Point p, const float distSq) const //relative to center { if (distSq < nearestDistance) //this helps a surprising amount in at least Enya return false; //it cannot return wall.outside(p) && outVectors.isIn(p); //on the outside } inline bool nudgeLeftVector(Wedge* wedge) { /* So. wedge occludes at least part of my wall, on the left side. It might actually be the case of an adjacent wall to my left. If so, my end() is his begin(). And if so, I can change HIS rightSideVectorOut to my right (begin) point, assuming my begin point is forward of (or on) his wall. That means he can help kill other walls better. */ if (wedge->wall.begin() == wall.end() && !wedge->wall.outside(wall.begin())) //is it legal? { outVectors.update(VectorPair(wedge->wall.end(), wedge->wall.begin())); wedge->outVectors.update(VectorPair(wall.end(), wall.begin())); } //turning this on drives the final wedge down, but not very much bool okToDoOut = true; bool improved = false; do { if (wall.outside(wedge->wall.begin())) break; //illegal move, stop here if (clocknessOrigin(leftSideVector, wedge->wall.begin()) == Clockwise) { leftSideVector = wedge->wall.begin(); leftSidePoker = wedge; improved = true; } if (okToDoOut) { okToDoOut = !wall.outside(wedge->wall.end()); if (okToDoOut) outVectors.update(VectorPair(wedge->wall.end(), wall.begin())); } wedge = wedge->rightSidePoker; } while (wedge); return improved; } inline bool nudgeRightVector(Wedge* wedge) { /* So. wedge occludes at least part of my wall, on the right side. It might actually be the case of an adjacent wall to my right. If so, my begin() is his end(). And if so, I can change HIS leftSideVectorOut to my left (end() point, assuming my begin point is forward of (or on) his wall. That means he can help kill other walls better. */ if (wedge->wall.end() == wall.begin() && !wedge->wall.outside(wall.end())) //is it legal? 
{ outVectors.update(VectorPair(wedge->wall.end(), wedge->wall.begin())); wedge->outVectors.update(VectorPair(wall.end(), wall.begin())); } //turning this on drives the final wedge count down, but not very much bool okToDoOut = true; bool improved = false; do { if (wall.outside(wedge->wall.end())) return improved; //illegal move if (clocknessOrigin(rightSideVector, wedge->wall.end()) == Counterclockwise) { rightSideVector = wedge->wall.end(); rightSidePoker = wedge; improved = true; } if (okToDoOut) { okToDoOut = !wall.outside(wedge->wall.begin()); if (okToDoOut) outVectors.update(VectorPair(wall.end(), wedge->wall.begin())); } wedge = wedge->leftSidePoker; } while (wedge); return improved; } }; class AggregateWedge { public: VectorPair vectors; AggregateWedge* nowOwnedBy; bool dead; AggregateWedge() : nowOwnedBy(NULL), dead(false) {} bool isIn(const Point& p) const { return vectors.isIn(p); } bool isAllEncompassing() const {return vectors.leftVector.x == 0 && vectors.leftVector.y == 0;} void init(Wedge* w) { vectors.setKnownAcute(w->leftSideVector, w->rightSideVector); w->myAggregate = this; nowOwnedBy = NULL; dead = false; } //true if it caused a merge bool testAndAdd(Wedge* w) { if (dead) //was I redirected? return false; //then I don't do anything if (!vectors.update(VectorPair(w->wall.end(), w->wall.begin()))) return false; //disjoint AggregateWedge* previousAggregate = w->myAggregate; w->myAggregate = this; //now I belong to this if (previousAggregate != NULL) //then it's a merge { vectors.update(previousAggregate->vectors); //That means we have to redirect that to this assert(previousAggregate->nowOwnedBy == NULL); previousAggregate->nowOwnedBy = this; previousAggregate->dead = true; return true; } return false; } }; class AggregateWedgeSet { public: int at; int firstValid; AggregateWedge agList[8192]; float minDistanceSq; float maxDistanceSq; AggregateWedgeSet() : minDistanceSq(0), maxDistanceSq(FLT_MAX) {} void add(int numberWedges, Wedge* wedgeList) { at = 0; for (int j = 0; j < numberWedges; ++j) { Wedge* w = wedgeList + j; w->myAggregate = NULL; //none yet bool mergesHappened = false; for (int i = 0; i < at; ++i) mergesHappened |= agList[i].testAndAdd(w); if (mergesHappened) { //some number of aggregates got merged into w->myAggregate //We need to do fixups on the wedges' pointers for (int k = 0; k < j; ++k) { AggregateWedge* in = wedgeList[k].myAggregate; if (in->nowOwnedBy) //do you need an update? { in = in->nowOwnedBy; while (in->nowOwnedBy) //any more? in = in->nowOwnedBy; wedgeList[k].myAggregate = in; } } for (int k = 0; k < at; ++k) agList[k].nowOwnedBy = NULL; } if (w->myAggregate == NULL) //time to start a new one { agList[at++].init(w); } } // all wedges in minDistanceSq = FLT_MAX; for (int j = 0; j < numberWedges; ++j) { //get nearest approach float ds = wedgeList[j].nearestDistance; if (ds < minDistanceSq) minDistanceSq = ds; } minDistanceSq -= 0.25f; //fear roundoff - pull this is a little firstValid = 0; for (int i = 0; i < at; ++i) if (!agList[i].dead) { firstValid = i; #if 0 // Not sure this is working? Maybe relates to using L to change bounds? //if this is the only valid wedge and it is all-encompassing, then we can //walk all the wedges and find the furthest away point (which will be some //wall endpoint). Anything beyond that cannot be in bounds. 
if (agList[i].isAllEncompassing()) { maxDistanceSq = 0; for (int j = 0; j < numberWedges; ++j) { float ds = wedgeList[j].wall.begin().dotSelf(); if (ds > maxDistanceSq) maxDistanceSq = ds; ds = wedgeList[j].wall.end().dotSelf(); if (ds > maxDistanceSq) maxDistanceSq = ds; } } #endif break; } } const AggregateWedge* whichAggregateWedge(const Point p) const { for (int i = firstValid; i < at; ++i) { if (agList[i].dead) continue; if (agList[i].isIn(p)) { return agList + i; } } return NULL; } }; //#define UsingOuter //this slows us down. Do not use. #ifdef UsingOuter #define TheTest testOccludedOuter #else #define TheTest testOccluded #endif class AreaOfView { public: Point center; float radiusSquared; int numberWedges; BoundingRect bounds; Wedge wedges[8192]; //VERY experimental AggregateWedgeSet ags; inline AreaOfView(const Point& center_, const float radius) : center(center_), radiusSquared(radius * radius), numberWedges(0) { bounds.set(center, radius); addWalls(); } void changeTo(const Point& center_, const float radius) { center = center_; radiusSquared = radius * radius; bounds.set(center, radius); numberWedges = 0; addWalls(); } void recompute() //rebuild the wedges, with existing center and radius { bounds.set(center, sqrtf(radiusSquared)); numberWedges = 0; addWalls(); } inline bool isIn(Point p) const { p -= center; const float distSq = p.dotSelf(); if (distSq >= radiusSquared) return false; for (int i = 0; i < numberWedges; ++i) { if (wedges[i].TheTest(p, distSq)) return false; } return true; } /* On the theory that the wedge that rejected your last point has a higher than average chance of rejecting your next one, let the calling thread provide space to maintain the index of the last hit */ inline bool isInWithCheat(Point p, int* hack) const { p -= center; const float distSq = p.dotSelf(); if (distSq >= radiusSquared) return false; if (distSq < ags.minDistanceSq) return true; //this range is always unencumbered by walls if (distSq > ags.maxDistanceSq) //not working. Why? return false; if (numberWedges == 0) return true; //no boundaries //try whatever worked last time, first. It will tend to win again if (wedges[*hack].TheTest(p, distSq)) { return false; } #define UseAgg #define UseAggP #ifdef UseAgg const AggregateWedge* whichHasMe = ags.whichAggregateWedge(p); if (whichHasMe == NULL) return true; //can't be occluded! 
#endif //try everything else for (int i = 0; i < *hack; ++i) { #ifdef UseAggP #ifdef UseAgg if (wedges[i].myAggregate != whichHasMe) continue; #endif #endif if (wedges[i].TheTest(p, distSq)) { *hack = i; //remember what worked for next time return false; } } for (int i = *hack + 1; i < numberWedges ; ++i) { #ifdef UseAggP #ifdef UseAgg //does seem to help speed, but don't work yet if (wedges[i].myAggregate != whichHasMe) continue; #endif #endif if (wedges[i].TheTest(p, distSq)) { *hack = i; //remember what worked for next time return false; } } return true; } inline bool isInWithWallExclusion(Point p, const Wall* excludeWall) const { p -= center; const float distSq = p.dotSelf(); if (distSq >= radiusSquared) return false; for (int i = 0; i < numberWedges; ++i) { if (wedges[i].source == excludeWall)//this one doesn't count continue; if (wedges[i].TheTest(p, distSq )) return false; } return true; } void addWall(Wall* w, const float nearestDistance); void addWalls(); }; class AreaRef { public: AreaOfView* a; AreaRef() {a = NULL;} void set(const Point& p, float radius) { if (a == NULL) a = new AreaOfView(p, radius); else a->changeTo(p, radius); } ~AreaRef() {delete a;} void empty() {delete a; a = NULL;} AreaOfView* operator->() const {return a;} }; class WallSet { public: int length; int at; WallAndDist* list; WallSet() { at = 0; length = 2038; list = (WallAndDist*)malloc(length * sizeof(*list)); } ~WallSet() {free(list);} void add(Wall* w, const float distSq) { if (at >= length) { length *= 2; list = (WallAndDist*)realloc(list, length * sizeof(*list)); } list[at].wall = w; const LineSeg* s = w->getSeg(); list[at].lenSq = s->p[0].distanceSq(s->p[1]); list[at++].distSq = distSq; } inline void sortByCloseness() { qsort(list, at, sizeof *list, cmpWallDist); } }; void AreaOfView::addWall(Wall* w, const float nearestDistance) { if (numberWedges >= NUMOF(wedges)) return; //we are screwed const LineSeg* seg = w->getSeg(); Point w1 = seg->p[0] - center; Point w2 = seg->p[1] - center; LineSegment* wallSeg = &wedges[numberWedges].wall; switch (clocknessOrigin(w1, w2)) { case Clockwise: wallSeg->set(w2, w1); break; case Counterclockwise: wallSeg->set(w1, w2); break; default: return; //uninteresting, edge on } wedges[numberWedges].setInitialVectors(); //set left and right vectors from wall const LineSegment right(Point(0,0), wallSeg->begin()); const LineSegment left(Point(0,0), wallSeg->end()); //now we start trimming for (int i = 0; i < numberWedges; ++i) { //if this occludes both begin and it, it occludes the wall if (wedges[i].testOccludedOuter(wallSeg->begin(), wedges[numberWedges].nearestDistance) && wedges[i].testOccludedOuter(wallSeg->end(), wedges[numberWedges].nearestDistance)) return; bool changed = false; //test right side if (wedges[i].wall.doTheyIntersect(right)) { changed = wedges[numberWedges].nudgeRightVector(wedges + i); } //test left side if (wedges[i].wall.doTheyIntersect(left)) { changed |= wedges[numberWedges].nudgeLeftVector(wedges + i); } if (changed) { if (wedges[numberWedges].rightSidePoker && wedges[numberWedges].rightSidePoker == wedges[numberWedges].leftSidePoker) return; //cheap test for some total occlusion cases if ( //simplify LineSegment(Point(0,0), wedges[numberWedges].rightSideVector).clockness( wedges[numberWedges].leftSideVector) != Counterclockwise) { return; //occluded } } } //we have a keeper wedges[numberWedges].nearestDistance = nearestDistance; wedges[numberWedges].source = w; ++numberWedges; } void AreaOfView::addWalls() { //get the set of walls that can 
occlude. WallSet relevant; int initialRun = run1IsMapEdge? 2 : 1; for (int run = initialRun; run < wallRuns; ++run) { const WallRun* currentLoop = &wallLists[run]; //does this loop overlap our area? if (!currentLoop->bounds.overlapNonzero(bounds)) continue; //not an interesting loop, nothing in it can occlude //run 1 is the outer loop; we care about walls facing in. //subsequent runs are inner loops, and we care about walls facing out const Facing relevantFacing = run==1? Inside : Outside; //some walls in this loop may cast shadows. Here we go looking for them. for (int wall = 0; wall < currentLoop->wallCount; ++wall) { Wall* currentWall = currentLoop->list[wall]; //We don't currently have walls that are transparent (those are actually doors), but we could someday if (currentWall->isTransparent()) continue; //toss windows const LineSeg* currentSeg = currentWall->getSeg(); //We need to reject walls that are colinear with our rectangle bounds. //That's important because we don't want to deal with walls that *overlap* //any polygon sides; that complicates the intersection code. Walls don't // overlap other walls, and we will discard edge-on walls, so walls // don't overlap shadow lines; but without this they could overlap the // original bounding rectangle (the polygon-edge-of-last-resort). // We do have to consider overlap with creating shadows. // //Since we're looking at vertical and horisontal lines, which are pretty common, // we can also quickly discard those which are outside the rectangle, as well as // colinear with it. the wall faces away from the center point, or is edge on, it // doesn't cast a shadow, so boot it. (Getting rid of edge-on stuff // avoids walls that overlap shadow lines). if (currentSeg->facing(center) != relevantFacing) continue; //faces away (or edge-on) //We still could be dealing with an angled wall that's entirely out of range - // and anyway we want to know the distances from the center to the line segment's // nearest point, so we can sort. //Getting the distance to a segment requires work. or at the radius of interest, this wall can't matter. if (distSq >= radiusSquared) continue; //out of reach //Need to keep this one relevant.add(currentWall, distSq); } } //add doors, too. They don't have loops or bounding rectangles, and it's important to // get the right seg. Skip transparent ones. const WallRun* currentLoop = &wallLists[0]; //some walls in this loop may cast shadows. Here we go looking for them. for (int wall = 0; wall < currentLoop->wallCount; ++wall) { Wall* currentWall = currentLoop->list[wall]; if (currentWall->isTransparent()) continue; //toss windows const LineSeg* currentSeg = currentWall->getSeg(); //Horisontal and vertical lines are common, and easy to test for out of bounds. //That saves a more expensive distance check. (currentSeg->facing(center) == Straight) continue; //kill edge on walls the radius of interest, this wall can't matter. if (distSq > radiusSquared) continue; //out of reach //Need to keep this one relevant.add(currentWall, distSq); } //sort by nearness; nearer ones should be done first, as that might make more walls //identifiably irrelevant. 
relevant.sortByCloseness(); //relevant.print(); //now, "do" each wall for (int i = 0; i < relevant.at; ++i) { addWall(relevant.list[i].wall, relevant.list[i].distSq); } //build the aggregate wedge list ags.add(numberWedges, wedges); #if 0 if (center == lastAtD) { char buf[256]; sprintf(buf, "%d wedges in set", numberWedges); MessageBox(gHwnd, buf, "Wedge Count", MB_OK); } #endif } static AreaRef areaOfView; static Number visionMaxDistance; static BoundingRect inView; static Point viewPoint; static unsigned char* changedTriangleFlags; static unsigned char changedLightFlags[NUMOF(lights)]; static bool inComputeVisible = false; //multithreaded, we do triangles i..lim-1 static inline void doVisibleWork(int i, const int lim) { //if the only lights on are at the viewpoint (and no superlight), and the //lights all stay within the visiion limit (they usually do) then we don't // have to do anything but copy the "lit" flag to the "you're visible" state. //That's a huge win; we do't have to consult the area of view bool soloLightRulesVisibility = !superLight; if (soloLightRulesVisibility) { for (int k = 0; k < NUMOF(lights); ++k) if (lights[k].on) { if (lights[k].p != lastAtD || lights[k].range() > vmd) { soloLightRulesVisibility = false; break; } } } if (soloLightRulesVisibility) { //what's visible here is simply what's lit. We just copy the lit flag, no math needed. for (; i < lim; ++i) { if (simpleTriList[i]->setVisiblity( simpleTriList[i]->lit() )) { unsigned char v = 0x4 | (simpleTriList[i]->wasVisible << 1) | (simpleTriList[i]->isVisible & 1); changedTriangleFlags[i] = v; } } return; } int lookupHack[3] = {0,0,0}; for (; i < lim; ++i) { //we get a huge win from not calculating lightOfSight to unlit things if (simpleTriList[i]->setVisiblity( simpleTriList[i]->lit() && areaOfView->isInWithCheat(simpleTriList[i]->center, lookupHack) )) { unsigned char v = 0x4 | (simpleTriList[i]->wasVisible << 1) | (simpleTriList[i]->isVisible & 1); changedTriangleFlags[i] = v; } } } //End 1To give a sense of the algorithm’s performance, computing the currently visible area in figure 5, and combining it with the previous area, took 0.003 seconds on a dual core 2Ghz processor. However, keep mind that figure 5 represents very small and simple map (containing less than 100,000 triangles). 2GPC, PolyBoolean and LEDA are among these packages. Some discussion of the problems of runtime, runspace, and accuracy can be found at. Boost’s GTL is promising, but it forces the user of integers for coordinates, and the implementation is still evolving. 3Formally speaking, there isn’t a cross product defined in 2D; they only work in 3 or 7 dimensions. What’s referred to here is the z value computed as part of a 3D cross product, according to u.x * v.y - v.x * u.y.
http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/walls-and-shadows-in-2d-r2711?st=30
in reply to Re^4: Install Math::GSL fail in thread Install Math::GSL fail

I get the same errors on Mac OS X 10.8.2 and gcc 4.6.3:

    xs/Roots_wrap.1.15.c:3283:7: error: void function '_wrap_gsl_root_fdfsolver_fdf_set' should not return a value [-Wreturn-type]
          return (gsl_nan());
          ^      ~~~~~~~~~~~
    ...
    xs/Roots_wrap.1.15.c:3840:7: error: void function '_wrap_gsl_root_fdfsolver_set' should not return a value [-Wreturn-type]
          return (gsl_nan());
          ^      ~~~~~~~~~~~

This thread is a bit old. I wonder if anybody has made any progress on this?

Progress such as?

Such as a fix or workaround

Workaround? You don't mean so the module can install with failing tests, do you? Well, the bug was reported ( rt://Math-GSL ) ... :)

XSRETURN: Return from XSUB, indicating number of items on the stack. This is usually handled by "xsubpp".

    void XSRETURN(int nitems)
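One possible direction, offered here only as a hedged sketch rather than a verified fix for Math::GSL: the diagnostic is about SWIG-generated wrapper functions that are declared void yet return a value, so editing just the flagged lines of the generated file to a bare `return;` should at least let compilation proceed (whether the test suite then passes is a separate question). A throwaway Perl script along these lines could do the edit; the file name and line numbers are taken from the error output above and will differ between Math::GSL / GSL versions.

```perl
use strict;
use warnings;

# Hypothetical workaround sketch (untested): drop the discarded return value
# on the two lines the compiler flagged in the generated SWIG wrapper.
my $file      = 'xs/Roots_wrap.1.15.c';
my %bad_lines = (3283 => 1, 3840 => 1);   # lines reported by the compiler

open my $in,  '<', $file         or die "open $file: $!";
open my $out, '>', "$file.fixed" or die "open $file.fixed: $!";
while (my $line = <$in>) {
    if ($bad_lines{$.} && $line =~ /return \(gsl_nan\(\)\);/) {
        # the enclosing _wrap_* function is void, so the value is ignored anyway
        $line =~ s/return \(gsl_nan\(\)\);/return;/;
    }
    print {$out} $line;
}
close $in;
close $out;
# review $file.fixed, then move it over $file and re-run the build
```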
http://www.perlmonks.org/?node_id=1015116
I have just started learning Java and I need a random number generator. Can anyone help.

Sophie

Another great java forum:

No offense meant, but last time I checked, this is a C, C++, C# board. Use the sun forums, please.

And this is a GD board, where anything that doesn't apply to the other boards can be discussed. General Discussions. (Hmm, we don't ever discuss things here, do we?)

Not a java help board. If we were to be discussing the pros and cons of java or something to that effect, then I wouldn't have said what I did, but she is asking for help on the General Discussions board. Heh, we should get a general programming board for help on any languages that aren't C, C++, C#.

Ehh, who cares? Let's not make this a *****ing match (made towards both of us, not just you).

>I have just started learning Java and I need a random number generator. Can anyone help.

Java has several ways to get a random number, the following being the easiest. :)

    import java.util.*;
    import java.io.*;

    class RandTest
    {
        public static void main ( String args[] )
        {
            int i;

            // Seed from the current system time.
            Random rand = new Random();

            // Get a random number from 1 - 100.
            i = rand.nextInt ( 100 ) + 1;

            System.out.println ( "Your random number is: " + i );
        }
    }

-Prelude
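For completeness, a small variation on Prelude's example, added here as an illustrative sketch rather than part of the original thread: java.util.Random can also be constructed with an explicit seed, which makes the sequence repeat from run to run and is handy while learning or debugging.

```java
import java.util.*;

class SeededRandTest
{
    public static void main ( String args[] )
    {
        // A fixed seed makes the "random" sequence reproducible.
        Random rand = new Random ( 12345L );

        for ( int n = 0; n < 5; n++ )
        {
            int i = rand.nextInt ( 100 ) + 1;   // 1 - 100, as in the example above
            System.out.println ( "Random number " + ( n + 1 ) + ": " + i );
        }
    }
}
```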
https://cboard.cprogramming.com/brief-history-cprogramming-com/23591-java-question-printable-thread.html
Create PCR primers with optimal lengths, tms, gc%s and free energies

Project description

primers

This is a tool for creating PCR primers. It has an emphasis on DNA assembly and makes it easy to add sequences to the end of PCR fragments. This is part of overlap extension polymerase chain reaction and of preparing unstandardized DNA sequences for Gibson assembly and Golden Gate cloning.

primers quickly creates pairs with optimized lengths, Tms, GC ratios, secondary structures (minimum free energies) and without off-target binding sites. Each returned primer has two tms: "tm", the melting temperature for the portion of the primer that binds to the template sequence, and "tm_total", the melting temperature for the entire primer with additional sequence added to its 5' end.

Unlike the most used alternative, Primer3, primers has a permissive MIT license and support for adding sequence to the 5' ends of primers.

Installation

    pip install primers

Usage

Python

    from primers import primers

    # add enzyme recognition sequences to FWD and REV primers: BsaI, BpiI
    fwd, rev = primers("AATGAGACAATAGCACACACAGCTAGGTCAGCATACGAAA", add_fwd="GGTCTC", add_rev="GAAGAC")
    print(fwd.fwd)       # True
    print(fwd.seq)       # GGTCTCAATGAGACAATAGCACACACA; 5' to 3'
    print(fwd.tm)        # 62.4; melting temp
    print(fwd.tm_total)  # 68.6; melting temp with added seq (GGTCTC)
    print(fwd.dg)        # -1.86; minimum free energy of the secondary structure

    # add from a range of sequence to the FWD primer: [5, 12] bp
    add_fwd = "GGATCGAGCTTGA"
    fwd, rev = primers("AATGAGACAATAGCACACACAGCTAGGTCAGCATACGAAA", add_fwd=add_fwd, add_fwd_len=(5, 12))
    print(fwd.seq)       # AGCTTGAAATGAGACAATAGCACACACAGC
    print(fwd.tm)        # 62.2
    print(fwd.tm_total)  # 70.0

CLI

    $ primers AATGAGACAATAGCACACACAGCTAGGTCAGCATACGAAA -f GGTCTC -r GAAGAC
      dir    tm   ttm     dg   pen  seq
      FWD  62.4  68.6  -1.86  5.43  GGTCTCAATGAGACAATAGCACACACA
      REV  62.8  67.4      0   4.8  GAAGACTTTCGTATGCTGACCTAG

    $ primers --help
    usage: primers [-h] [-f SEQ] [-fl INT INT] [-r SEQ] [-rl INT INT] [-t SEQ] [--version] SEQ

    Create PCR primers for a DNA sequence.

    Logs the FWD and REV primer with columns: dir, tm, ttm, dg, pen, seq

    Where:
    dir = FWD or REV.
    tm  = Melting temperature of the annealing/binding part of the primer (Celsius).
    ttm = The total melting temperature of the primer with added seq (Celsius).
    dg  = The minimum free energy of the primer's secondary structure (kcal/mol).
    pen = The primer's penalty score. Lower is better.
    seq = The sequence of the primer in the 5' to the 3' direction.

    positional arguments:
      SEQ          DNA sequence

    optional arguments:
      -h, --help   show this help message and exit
      -f SEQ       additional sequence to add to FWD primer (5' to 3')
      -fl INT INT  space separated min-max range for the length to add from '-f' (5' to 3')
      -r SEQ       additional sequence to add to REV primer (5' to 3')
      -rl INT INT  space separated min-max range for the length to add from '-r' (5' to 3')
      -t SEQ       sequence to check for offtargets binding sites
      --version    show program's version number and exit

Algorithm

Creating primers for a DNA sequence is non-trivial because it's multi-objective optimization. Ideally, pairs of primers for PCR amplification would have similar tms, GC ratios close to 0.5, high minimum free energies (dg), and a lack of off-target binding sites. In primers, like Primer3, this is accomplished with a linear function that penalizes undesired characteristics. The primer pair with the lowest combined penalty is created.
Scoring

The penalty for each possible primer, p, is calculated as:

    PENALTY(p) =
        abs(p.tm - opt_tm) * penalty_tm +
        abs(p.gc - opt_gc) * penalty_gc +
        abs(len(p) - opt_len) * penalty_len +
        abs(p.tm - p.pair.tm) * penalty_tm_diff +
        abs(p.dg) * penalty_dg +
        p.offtargets * penalty_offtarget

Each of the optimal (opt_*) and penalty (penalty_*) parameters is adjustable through the primers.primers() function. The defaults are below.

    opt_tm: float = 62.0
    opt_gc: float = 0.5
    opt_len: int = 22
    penalty_tm: float = 1.0
    penalty_gc: float = 3.0
    penalty_len: float = 1.0
    penalty_tm_diff: float = 1.0
    penalty_dg: float = 2.0
    penalty_offtarget: float = 20.0

Off-targets

Off-targets are defined as a subsequence within one mismatch of the last 10bp of a primer's 3' end. This is experimentally supported by:

Wu, J. H., Hong, P. Y., & Liu, W. T. (2009). Quantitative effects of position and type of single mismatch on single base primer extension. Journal of Microbiological Methods, 77(3), 267-275.

By default, primers are checked for off-targets within the seq parameter passed to primers.primers(seq). But the primers can be checked against another sequence if it's passed through the offtarget_check argument. This is useful when PCR'ing a subsequence of a larger DNA sequence; for example: a plasmid.

    seq = "AATGAGACAATAGCACACACAGCTAGGTCAGCATACGAAA"
    parent = "ggaattacgtAATGAGACAATAGCACACACAGCTAGGTCAGCATACGAAAggaccagttacagga"

    # primers are checked for offtargets in `parent`
    fwd, rev = primers(seq, offtarget_check=parent)
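To make the scoring formula above concrete, here is a small illustrative Python transcription of it with the stated defaults. This is not the package's internal code, just a sketch; the primer attributes (tm, gc, length, dg, offtargets, and the paired primer's tm) are assumed inputs.

```python
def penalty(
    tm: float,          # melting temp of the binding region (Celsius)
    gc: float,          # GC ratio, 0..1
    length: int,        # primer length in bp
    pair_tm: float,     # melting temp of the paired primer
    dg: float,          # minimum free energy of secondary structure (kcal/mol)
    offtargets: int,    # number of off-target binding sites
    opt_tm: float = 62.0,
    opt_gc: float = 0.5,
    opt_len: int = 22,
    penalty_tm: float = 1.0,
    penalty_gc: float = 3.0,
    penalty_len: float = 1.0,
    penalty_tm_diff: float = 1.0,
    penalty_dg: float = 2.0,
    penalty_offtarget: float = 20.0,
) -> float:
    """Linear penalty as described in the README: lower is better."""
    return (
        abs(tm - opt_tm) * penalty_tm
        + abs(gc - opt_gc) * penalty_gc
        + abs(length - opt_len) * penalty_len
        + abs(tm - pair_tm) * penalty_tm_diff
        + abs(dg) * penalty_dg
        + offtargets * penalty_offtarget
    )

# illustrative call with made-up primer attributes:
print(penalty(tm=62.4, gc=0.44, length=27, pair_tm=62.8, dg=-1.86, offtargets=0))
```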
https://pypi.org/project/primers/
package javax.transaction.xa;

/**
 * The XAException is thrown by the Resource Manager (RM) to inform the
 * Transaction Manager of an error encountered by the involved transaction.
 */
public class XAException extends java.lang.Exception {

    /**
     * The error code with which to create the SystemException.
     *
     * @serial The error code for the exception.
     */
    public int errorCode;

    /**
     * Create an XAException.
     */
    public XAException()
    {
        super();
    }

    /**
     * Create an XAException with a given string.
     *
     * @param s The <code>String</code> object containing the exception message.
     */
    public XAException(String s)
    {
        super(s);
    }

    /**
     * Create an XAException with a given error code.
     *
     * @param errcode The error code identifying the exception.
     */
    public XAException(int errcode)
    {
        super();
        errorCode = errcode;
    }

    /** The inclusive lower bound of the rollback codes. */
    public final static int XA_RBBASE = 100;

    /** Indicates that the rollback was caused by an unspecified reason. */
    public final static int XA_RBROLLBACK = XA_RBBASE;

    /** Indicates that the rollback was caused by a communication failure. */
    public final static int XA_RBCOMMFAIL = XA_RBBASE + 1;

    /** A deadlock was detected. */
    public final static int XA_RBDEADLOCK = XA_RBBASE + 2;

    /** A condition that violates the integrity of the resource was detected. */
    public final static int XA_RBINTEGRITY = XA_RBBASE + 3;

    /** The resource manager rolled back the transaction branch for a reason not on this list. */
    public final static int XA_RBOTHER = XA_RBBASE + 4;

    /** A protocol error occurred in the resource manager. */
    public final static int XA_RBPROTO = XA_RBBASE + 5;

    /** A transaction branch took too long. */
    public final static int XA_RBTIMEOUT = XA_RBBASE + 6;

    /** May retry the transaction branch. */
    public final static int XA_RBTRANSIENT = XA_RBBASE + 7;

    /** The inclusive upper bound of the rollback error code. */
    public final static int XA_RBEND = XA_RBTRANSIENT;

    /** Resumption must occur where the suspension occurred. */
    public final static int XA_NOMIGRATE = 9;

    /** The transaction branch may have been heuristically completed. */
    public final static int XA_HEURHAZ = 8;

    /** The transaction branch has been heuristically committed. */
    public final static int XA_HEURCOM = 7;

    /** The transaction branch has been heuristically rolled back. */
    public final static int XA_HEURRB = 6;

    /** The transaction branch has been heuristically committed and rolled back. */
    public final static int XA_HEURMIX = 5;

    /** Routine returned with no effect and may be reissued. */
    public final static int XA_RETRY = 4;

    /** The transaction branch was read-only and has been committed. */
    public final static int XA_RDONLY = 3;

    /** There is an asynchronous operation already outstanding. */
    public final static int XAER_ASYNC = -2;

    /** A resource manager error has occurred in the transaction branch. */
    public final static int XAER_RMERR = -3;

    /** The XID is not valid. */
    public final static int XAER_NOTA = -4;

    /** Invalid arguments were given. */
    public final static int XAER_INVAL = -5;

    /** Routine was invoked in an improper context. */
    public final static int XAER_PROTO = -6;

    /** Resource manager is unavailable. */
    public final static int XAER_RMFAIL = -7;

    /** The XID already exists. */
    public final static int XAER_DUPID = -8;

    /** The resource manager is doing work outside a global transaction. */
    public final static int XAER_OUTSIDE = -9;
}
http://kickjava.com/src/javax/transaction/xa/XAException.java.htm
@fork decorator with try/except on different platforms, not executing except clause

I am having problems using the @fork decorator with a try & except clause. On SageMathCell the piece of code runs fine, whereas on both Jupyter and CoCalc it doesn't throw the exception clause properly. On CoCalc it didn't recognise the @fork decorator at all at first, but from sage.all import * (idea from question: "@fork decorator not recognized in script") seemed to help.

Piece of code:

    from sage.all import *  # for CoCalc

    a_0, a_1 = var('a_0,a_1'); s = [a_0, a_1]
    equations = [69*a_0 + 4556 == 69*a_0 + 63*a_1,
                 69*a_1 - 3350 == -67*a_0 + 57*a_1,
                 63*a_0 - 3876 == -1542,
                 63*a_1 + 2850 == 7406]

    try:
        @fork(timeout=0.1, verbose=True)  # use e.g. 0.1 and 10
        def DirectSolution():
            sage_solution = solve(equations, s, solution_dict=True)
            print('Solves in time ,', 'sage_solution:', sage_solution)
            return sage_solution
        sage_solution = DirectSolution()[0]
    except KeyboardInterrupt:
        sage_solution = []
        print('Takes too long , ', 'sage_solution:', sage_solution)

    print('Execute the rest of the code, ', 'sage_solution:', sage_solution)

Running the @fork decorator without try/except with 0.1 seconds I got the KeyboardInterrupt error, that's why I used it in the except clause. Shouldn't this exception usually be "raised when the user hits the interrupt key"?

As mentioned above, on SageMathCell the code works as intended: with 10 seconds it computes the solution and prints out the text. For 0.1 seconds it sets sage_solution = [] and prints out the text. However, both on Jupyter and CoCalc it doesn't use the exception properly for 0.1 seconds. I get the following message:

    Killing subprocess 1346 with input ((), {}) which took too long
    Execute the rest of the code, sage_solution: N

Meaning it didn't execute the except clause:

    sage_solution = []
    print('Takes too long , ', 'sage_solution:', sage_solution)

I am not sure what to look for here, because it is working on one platform. A simple try/except example worked fine. Another idea was that the KeyboardInterrupt could be the problem, but removing it didn't change anything. I am new to Sage/Python, so there probably is a simple solution but I am happy for any help given. Unfortunately my karma was insufficient to publish links.
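A hedged note on what the stray "N" suggests, offered as a sketch rather than a confirmed diagnosis: when a @fork-decorated call hits its timeout, Sage kills the subprocess and, on Jupyter and CoCalc at least, appears to hand back a plain string (typically something like 'NO DATA (timed out)') instead of raising KeyboardInterrupt in the calling process. Indexing that string with [0] yields 'N', which is exactly what the output above shows. If that is what is happening, testing the type of the returned value should behave the same on all three platforms, without relying on an exception. A sketch, where the exact sentinel text is an assumption that may vary between Sage versions:

```python
from sage.all import *

a_0, a_1 = var('a_0,a_1'); s = [a_0, a_1]
equations = [69*a_0 + 4556 == 69*a_0 + 63*a_1,
             69*a_1 - 3350 == -67*a_0 + 57*a_1,
             63*a_0 - 3876 == -1542,
             63*a_1 + 2850 == 7406]

@fork(timeout=0.1, verbose=True)  # use e.g. 0.1 and 10
def DirectSolution():
    return solve(equations, s, solution_dict=True)

result = DirectSolution()
if isinstance(result, str):          # e.g. 'NO DATA (timed out)' -- assumed timeout sentinel
    sage_solution = []
    print('Takes too long ,', 'sage_solution:', sage_solution)
else:
    sage_solution = result[0]
    print('Solves in time ,', 'sage_solution:', sage_solution)

print('Execute the rest of the code,', 'sage_solution:', sage_solution)
```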
https://ask.sagemath.org/question/50912/fork-decorator-with-tryexcept-on-different-platforms-not-executing-except-clause/
CC-MAIN-2021-17
refinedweb
365
53.51
Strongly typed HTML templates with FSharp without a framework

Or do the mundane tasks quickly, and move on to more interesting stuff!

Have you ever had to generate HTML without a web framework? In most cases HTML generation is tied to web frameworks. Usually you have tooling, examples, tutorials, and all the bells and whistles. If you are using a web framework…

But what if you want to generate HTML in a non-web project? For example, send email notifications from a stand-alone service. Or you want to declare some templates once in a non-web project and reuse those in multiple projects?

One obvious way on .NET would be to use Razor, the default view engine for ASP.NET. But if you ever tried to use it in a non-ASP.NET project, you find it kind of painful: you have to reference the whole web stack, hack around to get syntax highlighting and IntelliSense working, and add additional build steps to precompile and type-check your templates. Along the way you will find some dead or undead projects that don't work without Visual Studio, or don't work on .NET Core. Not a nice experience…

There are other view/template engines for .NET, for example dotliquid, but honestly, I didn't want to learn another template engine and depend on another package. Or fall back to manual string replacement. Yeah, that's a good idea too…

But, is there a solution? Yes! Enter Paket single file references and the Giraffe ViewEngine!

The what?
- Paket is a dependency manager for .NET projects that, among other nice features, has the concept of a single file reference: you can declare a dependency just as a file, for example from a GitHub project.
- Giraffe is an F# web framework; its default view engine is just a single file with no additional dependencies. And since views are just regular F# code, it just works: no additional tooling or build steps needed!

All this means you can just reference Giraffe's ViewEngine in your project and use it! No additional dependencies, no framework restrictions, no hacking around or extra build steps! Just use it, create the views, and move on to more interesting things in your life! :)

How to do it?

Create a new F# console project:

dotnet new console -lang F# -o src/GiraffeHtmlService

Add paket

Download the latest paket bootstrapper and save it to the .paket folder. Follow the instructions here about setting up Paket.

Add GiraffeViewEngine

Add GiraffeViewEngine to your paket.dependencies file (the last line), and create src/GiraffeHtmlService/paket.references with a reference to GiraffeViewEngine.fs (both files are sketched at the end of this post). At this point, just check that everything is ok: run Paket: Install in VS Code, and after the install is completed, run the project:

dotnet run --project .\src\GiraffeHtmlService\

And you should see Hello World from F#! in the console!

Creating your first HTML template

Add a new file called HtmlTemplates.fs in src/GiraffeHtmlService. For now it only takes a single string parameter, but you can use anything: it's just F# code. Use records, helper functions, etc. Add the file to your fsproj above your Program.fs, and call it from Program.fs (see the sketch at the end of this post). Run your project again, and you should see glorious HTML output!

> dotnet run --project .\src\GiraffeHtmlService\
The rendered html document:
<!DOCTYPE html>
<html><body><div>Hello World!</div></body></html>

So, a recap

Well, we are done, it's that simple! You can create HTML in a sane way without a whole web framework!
- Strongly typed - Syntax highlight and intellisense works - No additional build step, no project file hacking - No dependency on a whole web stack, just add a single file! And it just works! Spend less time fighting with tooling, and more time on solving actual problems! The code is up on github. Gotchas? Well, obviously, namespaces would collide, if you reference this project in a project that actually uses Giraffe. Haven’t tried, but there are ways to disambiguate types and namespaces in .NET… And thank you! The original authors of the code in SuaveIO: Florian Verdonck for porting it to Giraffe All contributors of Giraffe, Paket, Ionide, and all other tools! The whole FSharp community!
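The file listings the walkthrough above refers to did not survive in this copy of the post, so here is a minimal reconstruction. The GitHub path of GiraffeViewEngine.fs, the module name Giraffe.GiraffeViewEngine, and renderHtmlDocument are assumptions based on the Giraffe 1.x layout of that era; check the Giraffe repository for the exact path before copying.

// paket.dependencies - the "last line" mentioned above (single file reference from GitHub;
// the exact path inside the Giraffe repository is an assumption):
//   github giraffe-fsharp/Giraffe src/Giraffe/GiraffeViewEngine.fs

// src/GiraffeHtmlService/paket.references:
//   File: GiraffeViewEngine.fs

// src/GiraffeHtmlService/HtmlTemplates.fs
module HtmlTemplates

open Giraffe.GiraffeViewEngine

/// A template is just an F# function that returns an XmlNode tree.
let hello (greeting : string) =
    html [] [
        body [] [
            div [] [ str greeting ]
        ]
    ]

// src/GiraffeHtmlService/Program.fs (shown as comments so this block stays one file):
// [<EntryPoint>]
// let main _ =
//     HtmlTemplates.hello "Hello World!"
//     |> Giraffe.GiraffeViewEngine.renderHtmlDocument
//     |> printfn "The rendered html document:\n%s"
//     0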
https://medium.com/@akoslukacs42/strongly-typed-html-templates-with-fsharp-without-a-framework-9971575d7fb4
CC-MAIN-2018-51
refinedweb
697
67.25
Automatic transformation of XML namespaces/Future directions Specifying a transformer with source/target being several namespaces treated as if it would be one NS. (Example: Dublin Core and dcterms: namespaces.) Ability to restrict to certain elements/attributes of a namespace instead specifying the whole namespace. Non-XML output formats. (For these only entire document transformers can be applied.) We can also use non-XML input formats. XProc: An XML Pipeline Language Can we process several namespaces at once when the transformations (with different source namespaces) have exactly equal precedence? (Hm, this cannot be done if the transformers have different kinds. Should we enable concurrent processing of several namespaces when both their precedence and their order kind is the same?) Should we point one or several processors (one for each NS) for these multiple-namespaces transformers? (The transformers should be of the same order kind.) The option of interactive choosing order of transformers. It should be customizable for a user. It can be done as a user-specified set of RDF files. (Note that values in user-specified files should take precedence over other RDF files.) There should be a (finite or infinite) mapping from a URL to several URLs when we downloading them. Should we introduce “composite” scripts (consisting of several transformations sequentially)? First, they would badly interact with searching transformation path. Second, it looks like a cart ahead of horse that in this case we define a script through transformations (not vice versa). Mentioning this, are there weighty enough arguments to add such a construct? We should formally describe and use XML Grouping. Some combinations of grouping and order kinds of transformers make no sense. Require to give a warning in such situations. How grouping should limit arbitrary choice one of several enriched scripts of the same singular precedence? (It is a rather difficult problem. Please leave your comments.) What to do with namespaced attributes? We can associate precedences with namespaces and make it the default precedence of the transformers associated with the namespace in question. Option to stop transformation if a document does not validate. We should publish in SoftwareX.
https://en.wikiversity.org/wiki/Automatic_transformation_of_XML_namespaces/Future_directions
CC-MAIN-2017-17
refinedweb
352
59.8
Data.CSS Contents Description This library implements a domain-specific language for cascading style sheets as used in web pages. It allows you to specify your style sheets in regular Haskell syntax and gives you all the additional power of server-side document generation. Synopsis - module Data.CSS.Build - module Data.CSS.Render - module Data.CSS.Types Tutorial Style sheets are values of type CSS. This type denotes style sheets, including for different media and is itself a monoid, so that you can build your style sheets either in a chunk-by-chunk fashion or using a writer monad. The latter is the recommended way, because it gives you access to a large library of predefined properties with additional type safety. It is recommended to enable the OverloadedStrings extension for this library and is assumed for the rest of this tutorial. Style properties are usually specified by using the predefined property writers: display InlineDisplay direction LeftToRight float LeftFloat If a property is not pre-defined you can set it by using setProp or its infix variant $=. setProp "font-family" ("times" :: PropValue) "text-decoration" $= ("underline" :: PropValue) The type signatures are necessary because the property value is actually polymorphic and needs to be an instance of ToPropValue. There are many predefined instances: "z-index" $= 15 "margin" $= ["1em", "2px"] These values will render as 15 and 1em 2px respectively. Selectors and media In order to specify properties you first need to establish media types and selectors. The simplest way to do this is to use onAll, onMedia and stylesheet :: Writer CSS () stylesheet = onAll . select ["p"] $ do lineHeight . Just $ _Cm # 1 zIndex Nothing borderStyle . LeftEdge $ SolidBorder This will render as the following stylesheet: p { line-height: 10mm; z-index: auto; border-left-style: solid; } To restrict the media to which the stylesheet applies just use onMedia instead of onAll: onMedia ["print"] . select ["p"] $ ... This will render as: @media print { p { /* ... */ } } Often it is convenient to specify properties for elements below the current selection. You can use the below combinator to do this: onAll . select ["p"] $ do lineHeight . Just $ _Cm # 1 zIndex Nothing below ["em"] $ do margin . Edges $ [_Em # 0.2, _Ex # 1] padding . Edges $ [_Em # 0.1, _Ex # 0.5] The inner block specifies properties for p em, so the above will render as the following stylesheet: p { line-height: 10mm; z-index: auto; } p em { margin: 0.2em 1ex; padding: 0.1em 0.5ex; } You can also specify properties for multiple selectors simultaneously: onAll . select ["html", "body"] $ do margin . Edges $ [zeroLen] padding . Edges $ [_Em # 1, _Ex # 2] below ["a", "p"] $ do backgroundColor black color limegreen This renders as the following stylesheet: html, body { margin: 0; padding: 1em 2ex; } html a, body a, html p, body p { background-color: #000; color: #32cd32; } Rendering To render a stylesheet you can use fromCSS, renderCSS or renderCSST. All of these will give you a Builder. You can then use combinators like toByteString or toByteStringIO to turn it into a ByteString, send it to a client or write it to a file. 
The lowest level function is fromCSS, which will take a CSS value and give you a Builder:

fromCSS :: CSS -> Builder

The most convenient way to write your stylesheets is to use a writer monad, in which case you would use one of these functions instead, depending on the shape of your monad:

renderCSS :: Writer CSS a -> Builder
renderCSST :: (Monad m) => WriterT CSS m a -> m Builder

The following example prints the stylesheet to stdout, assuming stylesheet is of type Writer CSS ():

import qualified Data.ByteString as B
toByteStringIO B.putStr . renderCSS $ stylesheet

Lengths

For convenience lengths can and should be specified by using predefined prisms like _Cm (see HasLength):

lineHeight $ Just (_Cm # 1)

This will render as line-height: 10mm. All compatible lengths are saved and rendered in a canonical unit. For example centimeters, millimeters, inches, picas and points are all rendered as their correspondingly scaled millimeter lengths, i.e. _In # 1 will render as 25.4mm. For convenience there are also two ways to specify percental lengths. The lengths _Factor # 1.5 and _Percent # 150 are equivalent and both render as 150%. There are also two special lengths, zeroLen and autoLen, which render as 0 and auto respectively.

Colors

Colors are specified by using the Colour and AlphaColour types from the colour library. They are rendered as either #rgb, #rrggbb or rgba(r,g,b,a) depending on what color you specify and whether it is fully opaque. The following renders as border-left-color: #0f0:

import Data.Colour.Names
borderColor (LeftEdge lime)

The colour library gives you both correct color space handling, sensible operators for mixing colors and a large library of predefined colors in the Data.Colour.Names module. To mix two colors you can use blend for mixing or over for alpha blending:

blend 0.5 lime red
(lime `withOpacity` 0.5) `over` black

Colors are all rendered in the (non-linear) sRGB color space, as this is the assumed color space by most user agents and screens.

Edge-oriented properties

Many CSS properties are edge-oriented, for example margin, padding and borderColor. This library provides a unified interface to these properties through the Edge type. Examples:

margin . BottomEdge $ _Mm # 1  -- margin-bottom: 1mm
margin . LeftEdge $ _Mm # 1    -- margin-left: 1mm
margin . RightEdge $ _Mm # 1   -- margin-right: 1mm
margin . TopEdge $ _Mm # 1     -- margin-top: 1mm

To set all edges through the margin property just use the Edges constructor:

margin . Edges $ [_Mm # 2, _Mm # 1]  -- margin: 2mm 1mm
margin . Edges $ [_Mm # 5]           -- margin: 5mm

You can also use the usual monadic combinators:

mapM_ margin [LeftEdge (_Mm # 3), RightEdge (_Mm # 4)]

Imports

To import an external stylesheet you can use importFrom or importUrl. The former allows you to specify raw URLs:

importFrom "screen" "/style/screen.css"

In web frameworks like Happstack you would usually work on top of a MonadRoute-based monad (like RouteT) as defined in the web-routes library.
In this case the importUrl function allows you to use your type-safe URLs conveniently: importUrl "all" (Style "screen") To import a stylesheet for multiple media types, just use the import functions multiple times for the same URL: mapM_ (`importFrom` "/style/screen.css") ["screen", "print"] This will render as: @import url("/style/screen.css") print, screen; Miscellaneous Important properties To set the !important tag on a property you can use the important function: important (display InlineDisplay) This renders as: display: inline !important; Inheriting To inherit a property value use inherit. You have to spell out the property name in this case: inherit "display" This renders as: display: inherit; Optimizations The underlying representation is a straightforward list of selectors and properties. There may be closer or more efficient representations, but all of them need to make some compromises along the way. Consider the following stylesheet: p { border-color: #0f0; border-color: rgba(0, 255, 0, 1); } You can optimize this to a single property, but which one? It depends on whether the user agent supports CSS level 3, so we would need to make assumptions. We also can't use a map of properties to a list of values. Consider the following stylesheet: p { border-bottom-color: #0f0; border-color: #f00; } The order of the properties does matter here, so we need to preserve it. This rules out a map from properties to lists of values. The final question is: Can we use a map from selectors to property lists? As it turns out no, because CSS specificity does not always apply: a:link { /* ... */ } a:blah { /* ... */ } These two selectors have the same specificity. CSS allows unsupported pseudo-classes and user agents must ignore them, so in edge cases the order can matter. Other than media types CSS does not seem to exhibit any commutativity, so we use regular lists. Also since most authors use mostly hand-written stylesheets with little property overlap the list representation is usually faster anyway, so this choice seems to be sensible. The only optimization performed by this library is the minified output it produces. Pretty-printing is currently not supported. Reexports module Data.CSS.Build module Data.CSS.Render module Data.CSS.Types
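Pulling the fragments above together, a complete program might look as follows. This is a sketch rather than something taken from the package docs: the imports for toByteStringIO and the # review operator are assumptions (blaze-builder and lens respectively), so adjust them to whatever your build actually uses; if Data.CSS already re-exports #, drop the lens import.

{-# LANGUAGE OverloadedStrings #-}
module Main (main) where

import Control.Lens ((#))                        -- assumption: review operator for the prisms
import Control.Monad.Writer (Writer)
import qualified Data.ByteString as B
import Blaze.ByteString.Builder (toByteStringIO) -- assumption: blaze-builder
import Data.CSS
import Data.Colour.Names (black, limegreen)

-- A small stylesheet built from the combinators shown above.
stylesheet :: Writer CSS ()
stylesheet = do
    importFrom "print" "/style/print.css"
    onAll . select ["html", "body"] $ do
        margin  . Edges $ [zeroLen]
        padding . Edges $ [_Em # 1, _Ex # 2]
        below ["a"] $ do
            backgroundColor black
            color limegreen

-- Render the stylesheet and write the minified output to stdout.
main :: IO ()
main = toByteStringIO B.putStr (renderCSS stylesheet)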
https://hackage.haskell.org/package/cascading-0.1.0/docs/Data-CSS.html
CC-MAIN-2016-26
refinedweb
1,354
55.95
ASP. Before we start coding I want to make some notes about my solution: - this solution is pretty new and it is sure that one can improve the code provided here, - using this solution I’m trying to generalize in-place reporting for MVC applications, - also I’m trying to keep my views as clean as possible – chart definitions are not small if you have more complex charts or if you want very nice looking charts. If you are not familiar with ASP.NET Chart control then please read my blog posting<asp:Chart>. You find there simple introduction to this free control and also all necessary links. Solution overview What we are trying to build here is shown on the following diagram. Our main goal is to avoid using ASP.NET forms elements in our MVC line user interface. Also we want to generalize reporting support so we have one interface for all reports. I introduce here pretty simple report. If you have more complex reports then you can extend reporting interface shown below later. You may also find useful to reorganize outputing system. I am using here simple works-for-me or works-for-prototyping solution. I want to focus on point and let’s try not to lose it. Reporting interface Before we create our first report let’s define interface for it. This interface must define all basic actions we need to do with reports: - set source with report data, - bind data to report, - write report image to some (output) stream. This is my reporting interface. Note that DataSource has only setter – it is because I don’t have currently need to ask data from chart. I only provide data to it. public interface IReportControl : IDisposable { void DataBind(); object DataSource { set; } void SaveChartImage(Stream stream); } As you can see this interface is pretty thin. I am sure that it will grow in the future when need for more complex reports appears. Sample report Let’s define one report for testing purposes. Add web user control called MyReport.ascx to Reports folder of your web application and drag ASP.NET Chart control on it. Here is definition of my control. <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="LastEnquiriesChart.ascx.cs" Inherits="ReportingApp.Web.Reports.LastEnquiriesChart" %> assembly="System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" namespace="System.Web.UI.DataVisualization.Charting" tagprefix="asp" %> <asp:Chart <series> <asp:Series </asp:Series> </series> <chartareas> <asp:ChartArea <AxisX IntervalAutoMode="VariableCount" IntervalOffsetType="Days" IntervalType="Days" IsLabelAutoFit="False" IsStartedFromZero="False"> <MajorGrid Interval="Auto" IntervalOffsetType="Days" IntervalType="Days" /> </AxisX> </asp:ChartArea> </chartareas> <Titles> <asp:Title </asp:Title> </Titles> </asp:Chart> If you look at the definition and consider it as simple one you understand why I don’t want this mark-up to be there in my views. This is just one simple report. But consider for a moment three complex reports. 90% of my view will be one huge report definition then and I will miss all the good things that views have. As I want this report to be interfaced with my reporting mechanism I make it implement IReportControl interface. Code-behind of my control is as follows. 
public partial class LastEnquiriesChart : UserControl, IReportControl { public object DataSource { set { Chart1.DataSource = value; } } public override void DataBind() { base.DataBind(); Chart1.DataBind(); } public void SaveChartImage(Stream stream) { Chart1.SaveImage(stream); } } All other user controls we are using for reporting must also implement IReportControl interface. This leads us to one interesting finding – we don’t have to host only chart control in our user controls, we have to host there whatever ASP.NET forms control we need for reporting. We can also create wrapper controls that get report image from some other external source (let’s say we have some COM component that is able to return reports as images). Creating loader Now we have sample report control and interface we can use to provide data and catch output of report. It is time to create meeting place for two worlds: ASP.NET forms and ASP.NET MVC framework. I created class called ReportLoader. The name of this class is good enough for me because it tells me that this is the integration point between two worlds. Let’s look at loader implementation now. public static class ChartLoader { public static void SaveChartImage(string controlLocation, IEnumerable data, Stream stream) { using (var page = new Page()) using (var control = (IReportControl) page.LoadControl("~/Reports/" + controlLocation)) { control.DataSource = data; control.DataBind(); control.SaveChartImage(stream); } } } Loader does one trick: it doesn’t render the control. It only runs as long as report is written to stream and then it disposes user control and temporary page instance to avoid all other actions they may take. I made ChartLoader and SaveChartImage methods as static because I don’t need hell load of classes and super-cool architecture right now. Creating controller action We are almost there… Let’s create now controller action that returns chart image. As I am prototyping my application I use very robust controller action. You may be more polite coders and I strongly suggest you to read Bia Securities blog posting BinaryResult for Asp.Net MVC. You can find BinaryResult implementation also from MVC Contrib project. [HttpGet] public ActionResult GetChart() { var repository = Resolver.Resolve<IPriceEnquiryRepository>(); var enquiries = repository.ListPriceEnquiries(); var data = from p in enquiries group p by p.Date.Date into g select new { Date = g.Key, Count = g.Count() }; Response.Clear(); Response.ContentType = "image/png"; ChartLoader.SaveChartImage( "LastEnquiriesChart.ascx", data, Response.OutputStream); Response.End(); return null; // have to return something } Now we have controller action that asks data from somewhere, prepares it for report and asks report as image from ChartLoader. Before outputing the report Response is cleared and content type is set to PNG. After writing image to response output stream the response is ended immediately to avoid any mark-up that may be written there otherwise. As you can see I created special automatic objects for reporting. If you look at my report definition you can see that x-axis is bound to Date and y-axis to Count. You can also prepare reporting data in some near-DAL class methods and then slide this data through controller to report. The choice is yours. Adding report to view <img src="<%= Url.Action("GetChart")%>" alt="Last week enquiries" title="Last week enquiries" /> Output of report is shown on right. I don’t have much data here and my report is not very nice but it works. 
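As an aside that is not from the original article (see also the comment further down about MVC 2's built-in File action result), a tidier variant of the GetChart action avoids writing to Response directly by returning a FileContentResult; ChartLoader and the data preparation stay exactly as above.

// Hypothetical variant of GetChart for ASP.NET MVC 2 and later, using the File() helper.
// Requires: using System.IO; using System.Linq;
[HttpGet]
public ActionResult GetChartAsFile()
{
    var repository = Resolver.Resolve<IPriceEnquiryRepository>();
    var enquiries = repository.ListPriceEnquiries();

    var data = from p in enquiries
               group p by p.Date.Date into g
               select new { Date = g.Key, Count = g.Count() };

    using (var stream = new MemoryStream())
    {
        // Let the loader draw the chart into the memory stream ...
        ChartLoader.SaveChartImage("LastEnquiriesChart.ascx", data, stream);

        // ... and hand the bytes back as a proper ActionResult instead of
        // clearing and ending the response by hand.
        return File(stream.ToArray(), "image/png");
    }
}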
Now, if you are not too tired or bored, it is time to make your chart very nice and show it to your boss or customer. Conclusion Mixing forms and MVC worlds of ASP.NET doesn’t always have to end up with hard mess. In this posting I showed you how to add simple but pretty generic reports support to your ASP.NET MVC application. Due to good interfacing we achieved separation between forms and MVC templates and linking reports to views is very-very simple. Of course, code and interfaces represented here are not production-ready examples. But they give you right direction and you can always improve design of my solution. My point was to illustrate how to mix MVC and forms world in 17 thoughts on “ASP.NET MVC: Creating reports using Chart control” Nice article. Didnt know that we can use the UserControl can be used in this way to do a dirty work and save the chart image. Thanks for the article :) This sounds great for a stateless implementation. But doesn’t using an image handler give you caching capabilities? I guess I am confused as to why this way is better than using an image handler? Is the image loaded asynchronously with the page? Also, does IIS cache this image as you can when using an HTTP ImageHandler? Hi Chuck! Sorry for late answer. The solution provided here is just to give you some idea how to mix separate ASP.NET frameworks so this mix doesn’t happen in same files and modules. Of course, you can use this code as base and add caching if necessary etc. You can also use image handler – no problem. Using image handler and caching is more like optimization topic and I wasn’t sure if it is good idea to cover this topic too as this posting is long enough. This post is wonderful. It is just what I want. Thank you. I also would like to export the chart to excel using OpenXML. Do you have an example to share? Thanks for feedback, Mei :) Currently I have no good example for charting in Excel. As soon as I invent something I will write about it. Could you put a complete project code on the page so I can download and try it? You can take source here: You can open files in browser and take source using copy and paste Thanks for the tutorial. It’s very helpful in being able to switch out chart images on the fly. But what I found with this solution is that you can’t have any client-side interactivity with it (because it’s an image). For example, if I want to show a tooltip of the (x,y) of the position when the mouse hovers over it… can’t do that with an image. In this case I guess I’ll have to use control in an iframe or something…. so I can both switch it out on the fly and have client-side interactivity… I’ll try it. Thanks for the example. This is the only line I’m having trouble with Resolver.Resolve 1. Revsolver is unknown 2. IPriceEnquiryRepository is unknown Thus am I missing an Interface ? I’m using 3.5 sp1 mvc2 Thanks T Resolver and IPriceEnquiryRepository are just example types. Resolver is shortcut method to some IoC/DI container that returns you object by interface. You can play it around if you need or you can use some IoC/DI container like Unity, StructureMap, NInject etc. MVC 2 and above includes the File Action Result so the GetChart Controller action could be modified as follows: var ms = new MemoryStream(); ChartLoader.SaveChartImage( “LastEnquiriesChart.ascx”, data, ms); return File(ms.GetBuffer(), “image/png”); Great post thanks. Hi, Thanks for a great article!!… I’ve gotten somewhere but I’d love to see the project if you have a sample. Thanks! 
Pingback:ASP.NET MVC 3 Beta: Built-in support for charts | Gunnar Peipman - Programming Blog Pingback:ASP.NET MVC: Creating user configurable charts | Gunnar Peipman - Programming Blog Hi, my friend, thanks a lot for this great article , help me a lot. Thanks.
https://gunnarpeipman.com/asp-net-mvc-creating-reports-using-chart-control/
CC-MAIN-2022-40
refinedweb
1,780
58.08
This is the mail archive of the cygwin-announce mailing list for the Cygwin project. Hi Cygwin friends and users, I just released Cygwin 1.7.32-1. This release comes with a few bug fixes in header files and a slightly improved /proc/cpuinfo output, but otherwise it concentrates on a new feature which isn't readily user visible. The new feature requires a new GCC which will be released by Jon_Y shorty. This combination, cygwin 1.7.32 / gcc 4.8.3-3, also allows to handle C++11 thread local objects for the first time. What's. Bug Fixes --------- - Decorate attribute names with __, for namespace safety in various header files. Addresses: - Fix sys/file.h for using in C++ code. Addresses: To install 32-bit Cygwin use To install 64 bit Cygwin use The 64 bit Cygwin distribution doesn't yet come with as many packages as the 32 bit version,. Have fun, Corinna -- Corinna Vinschen Please, send mails regarding Cygwin to Cygwin Project Co-Leader cygwin AT cygwin DOT com Red Hat
https://cygwin.com/ml/cygwin-announce/2014-08/msg00020.html
CC-MAIN-2017-39
refinedweb
177
66.44
HOWTO: M5Stack with GPS, GSM and LoRa, all at the same time I have gotten all three radio boards to work with the M5Stack at the same time. This was a little more involved than it needs to be because documentation is either missing, not all in the same place or (in one case) apparently wrong. I'll use this post to document what I did and I have added a sketch that tests for complete functionality of the boards. I'll also add whatever else I know about the boards, including links to the datasheets of the used modules. In the process of doing this I figured out which pins to use to talk to which boards. In the black boxes with each board are lists of the pins used for each board (sometimes selectable with solder bridges, sometimes hardwired). In brackets are the available pins for that function via solder bridges on the board. Note that either the first or the only option for each pin has a little (sometimes invisible) trace shorting out the solder bridge, so the first option is already connected as you get the board. If you want to disconnect it, you'll have to break that trace by carefully scratching the circuit in-between the two pads of that solder bridge. In case you are low on GPIOs, the text below also lists which wires can be left unconnected for each board. LoRa MOSI 23 MISO 19 SCK 18 CS 5 (5 ) RST 26 (26) INT 36 (36) In the case of the LoRa module, I'm using all the default pins, so there's no need to solder any bridges or scrape any wires. I bought a LoRa board and only then realised that LoRa is more fun if you have two boards (I had two M5Stack main units already). So I bought another one and got a slightly different unit. The picture shows the unit I first got on the bottom, and the second one on top. As you can see the new unit has a connector for an external antenna as well as an internal one. In the picture you see the pigtail for the internal antenna connected, it came with the external antenna jack connected. After taking this picture I taped down the other pigtail with a little square of duck tape so it doesn't touch anything important. The M5Stack LoRa uses the RA-02 radio module by AI-Thinker (datasheet) that is itself built around the Semtech SX-1278 transceiver chip (datasheet). If you actually start playing with the LoRa module, you might also want to read this article that GoJimmyPi wrote on it. Also note for your information that the following information is in the M5Stack documentation on the website as of Jun 20, 2018: This seems to be correcting something that is wrong elsewhere but it is actually wrong itself: RST is GPIO 26 and INT is 36. I measured it on both my boards with a multimeter, RST is really pin 26 and it would have to be because on the ESP32, GPIO 36 is input only so it could never drive the RST pin. People that want to save on available GPIO pins could leave off the reset pin. In that case, just pass -1 as the reset pin number, the second argument of LoRa.setPins in the M5LoRa library. If a pin is provided, all the library does is pull it low briefly once when you call LoRa.begin(). It doesn't seem to affect all units, but in order to prevent a problem where the screen stays white, pull up pin 5, the CS pin of the LoRa, before initialising the m5 library. This is done as follows: pinMode(5,OUTPUT); digitalWrite(5,HIGH); m5.begin(): GSM (SIM800L) TXD 16 (16) RXD 17 (17) RST 2 (!) 
(5) Rx and Tx can use the default pins for UART2, but as you can see in the LoRa section above, we have already used the pin used to reset the GSM (GPIO5) as the LoRa ChipSelect pin, so we have to choose something else here. Unfortunately the solder jumpers do not allow for any other choice, so I carefully scratched out the hidden trace between the two pads of the 5 solder bridge and soldered in a little wire from the GSM side of that bridge across to a free pin, choosing GPIO2. Note that you want to unscrew the boards from the plastic frames with a little hex screwdriver before soldering wires in. The M5Stack SIM800L GSM board has a SIMCom "Core Board" module on it (the board with the red solder mask) that itself sports a SIM holder and a SIMCom SIM800L module (datasheet). This is a quad band GSM with GPRS, but no EDGE, 3G, LTE or anything else fancy. People that want to save on available GPIO pins could again leave off the reset pin: unless your code pulls it low it is not used. GPS TXD 13 (16, 3, 13) RXD 15 (!) (17, 1, 5) PPS 34 (34, 35, 36) The PPS signal provides one pulse per second, not very interesting unless you want to do very precise timing. I've already used the 17/16 default for the GSM above. So that means I started by carefully scraping away the traces between the solder bridges marked 16 and 17, checking with a multimeter that the connections are really gone. (Note that if you don't scrape away the traces inside the solder bridges, you will connect UART2 and UART1, leading to all sorts of surprisingly hard to debug grief.) I put the Tx at 13, which is a simple solder bridge, but the only solder bridge choices for Rx are 17 (UART2 default, in use by the GSM above), 1 (in use by the USB serial that talks to your computer) and 5 which is the default ChipSelect pin for the LoRa. So I put in a wire again, this time to GPIO15. The GPS module used is a u-blox NEO-M8 (datasheet). People that want to save on available GPIO pins could again leave off the reset pin, as well as the RX pin of the GPS module. After all: if you are happy with the NMEA data that the module provides by default, you don't need to talk to it. And you can also very safely leave off the PPS pin, if your code is not using this one pulse per second signal. Testing it all Now comes the time to test if all connections to each of the modules work. I have written BoardTest.ino, which should show you (in actually readable type) if your hardware is detected. You can set any pins you haven't hooked up to -1, the test program will then skip all the non-applicable tests. #include <M5Stack.h> #include <M5LoRa.h> HardwareSerial GPS(1); HardwareSerial GSM(2); // Change these if your boards are wired differently // // Any pins not hooked up can be set to -1 to skip tests // const int LORA_CS = 5; const int LORA_RES = 26; const int LORA_INT = 36; const int GPS_RX = 15; const int GPS_TX = 13; const int GPS_PPS = 34; const int GSM_RX = 17; const int GSM_TX = 16; const int GSM_RES = 2; // The bytes to be sent to the GPS to request the version info packet back. 
const uint8_t UBX_MON_VER[] PROGMEM = { 0xb5, 0x62, 0x0a, 0x04, 0x00, 0x00, 0x0e, 0x34 }; void setup() { String response; unsigned long started, last_ati, last_time, time_before_last; char resp_buf[50]; unsigned int i, pps_state; if (LORA_CS != -1) { pinMode(LORA_CS, OUTPUT); digitalWrite(LORA_CS, HIGH); } m5.begin(): M5.Lcd.setTextSize(2); // Test LoRa ConsoleOutput ("Testing LoRa ..."); if (LORA_CS != -1 && LORA_INT != -1) { LoRa.setPins(LORA_CS, LORA_RES, LORA_INT); if (LoRa.begin(433E6)) { ConsoleOutput(" Init Succeeded"); if (LORA_RES != -1) { LoRa.setPins(LORA_CS, -1, LORA_INT); pinMode(LORA_RES, OUTPUT); digitalWrite(LORA_RES, LOW); if (LoRa.begin(433E6)) { ConsoleOutput(" Reset line does not work"); } else { ConsoleOutput(" Reset line works"); } digitalWrite(LORA_RES, HIGH); } else { ConsoleOutput (" Skipping reset test"); } } else { ConsoleOutput(" Hardware Not Found"); } } else { ConsoleOutput(" Skipping test"); } ConsoleOutput(""); // Test GPS ConsoleOutput ("Testing GPS ..."); if (GPS_TX != -1 ) { GPS.begin(9600, SERIAL_8N1, GPS_TX, GPS_RX); GPS.setTimeout(1000); started = millis(); while (true) { response = GPS.readStringUntil(13); response.replace("\n", ""); if (response.substring(0,2) == "$G") { ConsoleOutput(" NMEA data detected"); if (GPS_RX != -1) { GPS.setTimeout(500); GPS.flush(); GPS.write(UBX_MON_VER, sizeof(UBX_MON_VER)); GPS.readStringUntil(0xb5); GPS.readBytes(resp_buf, 50); if (resp_buf[0] == 'b') { response = ""; for (i = 35; i < 44 ; i++) response = response + resp_buf[i]; ConsoleOutput(" Found: HW " + String(response)); } else { ConsoleOutput(" No response. RX ok?"); } } else { ConsoleOutput(" Skipping bidir test"); } break; } if (millis() - started > 2000) { ConsoleOutput(" No NMEA data found"); break; } } GPS.end(); } else { ConsoleOutput(" Skipping serial tests"); } if (GPS_PPS != -1) { pinMode(GPS_PPS, INPUT); pps_state = digitalRead(GPS_PPS); started = millis(); while (true) { if (digitalRead(GPS_PPS) != pps_state) { if (millis() - time_before_last > 900 && millis() - time_before_last < 1100) { ConsoleOutput(" PPS alive"); break; } pps_state = digitalRead(GPS_PPS); time_before_last = last_time; last_time = millis(); } if (millis() - started > 3000) { ConsoleOutput(" PPS not detected"); break; } } } else { ConsoleOutput(" Skipping PPS test"); } ConsoleOutput(""); // GSM testing ConsoleOutput("Testing GSM ..."); if (GSM_TX != -1 && GSM_RX != -1) { if (GSM_RES != -1) { pinMode(GSM_RES, OUTPUT); digitalWrite(GSM_RES, HIGH); } GSM.begin(9600, SERIAL_8N1, GSM_TX, GSM_RX); GSM.setTimeout(200); GSM.println("ATI"); started = millis(); last_ati = millis(); while (true) { // send "ATI" every second in case the modem just woke up if (int(last_ati / 1000) < int(millis() / 1000)) { GSM.println("ATI"); last_ati = millis(); } response = GSM.readStringUntil(13); response.replace("\n", ""); if (response.length() > 6) { ConsoleOutput(" Found: " + String(response)); if (GSM_RES != -1) { started = millis(); GSM.println("ATI"); digitalWrite(GSM_RES, LOW); while (true) { response = GSM.readStringUntil(13); response.replace("\n", ""); if (response.length() > 6) { ConsoleOutput(" Reset line broken"); break; } if (millis() - started > 2000) { ConsoleOutput(" Reset line works"); break; } } digitalWrite(GSM_RES, HIGH); } else { ConsoleOutput(" Skipping reset test"); } break; } if (millis() - started > 4000) { ConsoleOutput(" No response to ATI"); break; } } GSM.end(); } else { ConsoleOutput(" Skipping tests"); } M5.Lcd.setCursor(45,220); M5.Lcd.print("REDO"); 
M5.Lcd.setCursor(235,220); M5.Lcd.print("OFF"); } void loop() { M5.update(); if (M5.BtnC.wasPressed()) { M5.setWakeupButton(BUTTON_C_PIN); M5.powerOFF(); } if (M5.BtnA.wasPressed()) { M5.Lcd.clear(); M5.Lcd.setCursor(0,0); setup(); } } void ConsoleOutput(String message) { Serial.println(message); M5.Lcd.println(message); } Et voila: I've decided to update the original post to make it clearer in some parts, add some more detailed information as well as links to the datasheets of the modules. For the time being I'll keep updating this so we have a single reference point for information about these modules. Please let me know if you can think of other important information about any of these modules and I'll add it. Eventually there should probably be a more formal reference manual for this project. Ii just recently ordered these stacks and i guess you save me a ton of work. thanks for taking the efforts of sharing. I've updated the BoardTest.ino code, text and images to reflect that it now tests all the wires to and from each board separately. So for instance, it sends a query to the GPS to see if the RX line is also hooked up, tests the GPS PPS signal and sees if the reset lines to the LoRa and GSM modules actually work. The text also documents which lines can be safely left off if you are low on GPIO lines. Just used your code to test with a LoRa module. I just have the black M5 Stack and the LoRa module, but the screen goes white when I have the LoRa module attached? @kieran-osborne With just the M5Stack and the LoRa module? Try pushing it on a little harder first. Try to see if there are any obvious shorts involving the MISO, MOSI or SCK lines, because that would cause the screen to not work. (Can you verify that it's still running by maybe printing something to the serial port?) If that doesn't work maybe try cutting the traces between the solder pads one by one to see if it's one of those wires that the M5Stack doesn't like. But I think your SPI bus is shorted. Rop Hi Rop, I have tried pushing a little harder on it but didn't change anything. On the serial port i'm getting: "Testing LoRa ... Init Succeeded Reset line works" I have had issues with my M5 Stack keep popping up on windows with USB device not recognised, so wonder if somehow they are related, is a bit strange Hi rop, When I run your code the screen is completely white, but If i press the A button (button far left), it looks like it refreshes the screen and then the text is displayed?, weirder and weirder haha I've updated the post and the demo code inside it to show the fix of pulling up the CS pin on the LoRa board before initialising the screen to prevent the screen from staying white.
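To pull the LoRa pin table and the white-screen workaround above into one place, here is a minimal LoRa-only bring-up sketch. It is not from the original post, but it uses only calls that appear in BoardTest.ino; note that it is M5.begin(); with a semicolon, and that you can pass -1 as the second argument of LoRa.setPins if you left the reset pin unconnected.

#include <M5Stack.h>
#include <M5LoRa.h>

// Pin assignments from the LoRa table above (all defaults, no solder bridges changed).
const int LORA_CS  = 5;
const int LORA_RES = 26;   // use -1 if the reset pin is not hooked up
const int LORA_INT = 36;

void setup() {
  // Work around the white-screen issue: pull the LoRa CS pin high
  // before initialising the M5 library.
  pinMode(LORA_CS, OUTPUT);
  digitalWrite(LORA_CS, HIGH);
  M5.begin();

  LoRa.setPins(LORA_CS, LORA_RES, LORA_INT);
  if (LoRa.begin(433E6)) {
    M5.Lcd.println("LoRa init OK");
  } else {
    M5.Lcd.println("LoRa not found");
  }
}

void loop() {
  M5.update();
}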
http://forum.m5stack.com/topic/239/howto-m5stack-with-gps-gsm-and-lora-all-at-the-same-time/5
CC-MAIN-2018-34
refinedweb
2,155
69.21
Beginner here so keep that in mind! We have a data source that can provide me with SQL output in tab-delimited form that contains fields + attributes, one of which is the vector info stored as a WKT string. I am able to bring the vector info through into a new feature class easily enough with FromWKT, but I also need to bring some of the fields + values along at the same time. Code I have for bringing just the vectors:

import arcpy
from arcpy import env

env.workspace = "C:/Student/clswkt"
inTable = "f94fd2128655d74d9788356402dc0434.csv"
arcpy.TableToTable_conversion(inTable, "clswkt.gdb", "<feature class name>")

# Define field holding WKT coords
field1 = "shape_vector_new"
# Define field holding unique identifier (can be PVID or OBJECTID if you didn't include PVID)
field2 = "OBJECTID"

# Redefine inTable to new table created
inTable = "clswkt.gdb/<tablename>"

featureList = []
cursor = arcpy.SearchCursor(inTable)
row = cursor.next()
while row:
    WKT = row.getValue(field1)
    temp = arcpy.FromWKT(WKT, "")
    featureList.append(temp)
    row = cursor.next()

# Copy featureList to a feature class
arcpy.CopyFeatures_management(featureList, "clswkt.gdb/<fcname>")

Thanks in advance.

Suggest converting the SQL output into a CSV file, then joining that CSV file to your feature class created from your current script.
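Not part of the original thread: one way to carry attribute values across together with the geometry is to read the WKT and the fields with a search cursor and write them with an insert cursor, using the SHAPE@WKT token. The extra field names below are hypothetical, and the target feature class is assumed to exist already with matching fields (for example created with CreateFeatureclass_management plus AddField_management).

import arcpy

arcpy.env.workspace = "C:/Student/clswkt"

in_table = "clswkt.gdb/<tablename>"      # table produced by TableToTable_conversion
out_fc   = "clswkt.gdb/<fcname>"         # pre-created feature class with matching fields

wkt_field   = "shape_vector_new"
attr_fields = ["PVID", "SOME_ATTRIBUTE"]    # hypothetical attribute fields to carry over

# Read WKT + attributes from the table and write geometry + attributes to the
# feature class; "SHAPE@WKT" lets the insert cursor accept a WKT string directly.
with arcpy.da.SearchCursor(in_table, [wkt_field] + attr_fields) as s_cur:
    with arcpy.da.InsertCursor(out_fc, ["SHAPE@WKT"] + attr_fields) as i_cur:
        for row in s_cur:
            i_cur.insertRow((row[0],) + tuple(row[1:]))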
https://community.esri.com/thread/194417-utilize-fromwkt-but-add-fields-also
CC-MAIN-2020-40
refinedweb
200
60.31
A wildcard type is denoted by a question mark, as in <?>. For a generic type, a wildcard type is what an Object type is for a raw type. We can assign any generic of known type to a generic of wildcard type. Here is the sample code: // MyBag of String type MyBag<String> stringMyBag = new MyBag<String>("Hi"); // You can assign a MyBag<String> to MyBag<?> type MyBag<?> wildCardMyBag = stringMyBag; The question mark in a wildcard generic type (e.g., <?>) denotes an unknown type. When you declare a parameterized type using a wildcard as a parameter type, it means that it does not know about its type. MyBag<?> unknownMyBag = new MyBag<String>("Hello"); We express the upper bound of a wildcard as <? extends T> Here, T is a type. <? extends T> means anything that is of type T or its subclass is acceptable. For example, the upper bound can be the Number type. If we pass any other type, which is a subclass of the Number type, it is fine. However, anything that is not a Number type or its subclass type should be rejected at compile time. Using the upper bound as Number, we can define the method as class MyBag<T> { private T ref;/*from w ww. j a v a 2 s . co m*/ public MyBag(T ref) { this.ref = ref; } public T get() { return ref; } public void set(T a) { this.ref = a; } } public class Main { public static double sum(MyBag<? extends Number> n1, MyBag<? extends Number> n2) { Number num1 = n1.get(); Number num2 = n2.get(); double sum = num1.doubleValue() + num2.doubleValue(); return sum; } } No matter what you pass for n1 and n2, they will always be assignment-compatible with Number since the compiler ensure that the parameters passed to the sum() method follow the rules specified in its declaration of <? extends Number>. Specifying a lower-bound wildcard is the opposite of specifying an upper-bound wildcard. The syntax for using a lower-bound wildcard is <? super T>, which means "anything that is a supertype of T." class MyBag<T> { private T ref;/*from w w w. j ava 2 s .co m*/ public MyBag(T ref) { this.ref = ref; } public T get() { return ref; } public void set(T a) { this.ref = a; } } public class Main { public static <T> void copy(MyBag<T> source, MyBag<? super T> dest) { T value = source.get(); dest.set(value); } }
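A short usage sketch (not part of the tutorial) that exercises both bounds; it assumes the sum() and copy() methods shown above live in the same Main class alongside MyBag:

public class WildcardDemo {
    public static void main(String[] args) {
        MyBag<Integer> intBag = new MyBag<>(3);
        MyBag<Double> doubleBag = new MyBag<>(4.5);

        // Upper bound: MyBag<Integer> and MyBag<Double> both match MyBag<? extends Number>.
        double total = Main.sum(intBag, doubleBag);
        System.out.println(total);            // 7.5

        // Lower bound: copy from MyBag<Integer> into MyBag<Number>,
        // because Number is a supertype of Integer.
        MyBag<Number> numberBag = new MyBag<>(0);
        Main.copy(intBag, numberBag);
        System.out.println(numberBag.get());  // 3
    }
}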
http://www.java2s.com/Tutorials/Java/Java_Object_Oriented_Design/0360__Java_Generic_Constraints.htm
CC-MAIN-2018-09
refinedweb
399
68.36
for connected embedded systems QNX® Software Development Platform 6.4.1: Release Notes Date of this edition: May 25, 2009 Target OS: This development platform produces software that's compatible with targets that are running QNX® Neutrino® 6.4.1. Host OS: You can install this package as a self-hosted QNX Neutrino® development system, or on one of the following development hosts: - Microsoft Windows Vista, Vista 64-bit, XP SP2 or SP3, or 2000 SP4 - Linux Red Hat Enterprise Workstation 4.0 or 5.0, Red Hat Enterprise Server 5.1 64-bit, Red Hat Fedora 10, Ubuntu 8.04 LTS or 8.10, or SUSE 11 You can also install the QNX Neutrino RTOS as a virtual machine on VMware Workstation 6.5, VMware Player 2.5 and Microsoft VirtualPC 2007. We provide a VMware image of a runtime installation of QNX Neutrino; for more information, see “Using the VMware image of a QNX Neutrino runtime system,” below. If you find problems with any virtualization environment, please post your findings in one of the forums on our Foundry27 community website. Contents - What's the QNX Software Development Platform? - What's new in QNX Neutrino 6.4.1? - Migrating from earlier releases - Kernel - Installing, booting, and licensing - Core networking - Filesystems - Graphics - Libraries and header files - Drivers - Documentation - What's new: I/O devices - Security - Web browsers - Using the VMware image of a QNX Neutrino runtime system - What's new in the QNX Momentics Tool Suite? - Discontinued items - Experimental items - Known issues - - Getting started with the documentation - Technical support - List of fixes area. What's the QNX Software Development Platform? The QNX Software Development Platform bundles the QNX Momentics Tool Suite, the QNX Neutrino RTOS, or both, as shown below: The QNX Momentics Tool Suite includes the following: We no longer include the IDE on self-hosted QNX Neutrino systems. What's new in QNX Neutrino 6.4.1? The key improvements in 6.4.1 include: - Core Operating System: - support for ARM Cortex A8 (ARMv7 architecture) - support for Freescale e500 Core Signal Processing Extensions (SPE) — for more information, see QNX support for using Freescale e500 SPE in the QNX Neutrino technotes. - defragmentation of physical memory - We once again support MIPSBE and MIPSLE targets. (Ref# 64313) - We no longer support the PowerPC 900 series of processors, so we've deprecated procnto-900 and procnto-900-smp. (Ref# 68225, 68230) - Networking: - BIND9 support - SSH support - Filesystems: - Read-only support for NTFS and Apple HFS+ - Graphics: - Composition Window Manager (io-winmgr) - support for Intel x86 chipsets For more details, see the following sections: - Migrating from earlier releases - Kernel - Installing, booting, and licensing - Core networking - Filesystems - Graphics - Libraries and header files - Drivers - Documentation - What's new: I/O devices - Security - Web browsers - Using the VMware image of a QNX Neutrino runtime system Migrating from earlier releases - Binaries created with QNX Momentics 6.3 should be compatible with 6.4.1, but note the following: - The same C++ ABI change between gcc 2.95.3 and 3.3.5 exists between 2.95.3 and 4.2.4. Older C++ binaries linked against libcpp.so.3 and libstc++.so.5 will still work because we are shipping those legacy C++ libraries in 6.4.1. You can't link old C++ libraries into new C++ programs. - The QNX Momentics Tool Suite on Windows and Linux includes version 4.5 of the IDE. 
For information about migrating to the latest IDE, see the Migrating from Earlier Releases appendix of the IDE User's Guide. - You can now get Board Support Packages from our website. For information about building BSPs from earlier releases, see the on our Foundry27 community website. - All 6.3.x driver binaries should be compatible with 6.4.1, except audio (deva-*), block I/O (devb-*), and graphics (devg-*) drivers. The audio and block I/O drivers should compile on 6.4.1 with minor code changes. The 6.4.1 graphics drivers run on top of io-display instead of on Photon. - You need to recompile ATAPI drivers that are BSP-specific for 6.4 (the driver, io-blk, and the filesystems need to be in sync). - Serial drivers are statically linked, so there's no issue running binaries from 6.4.1 on 6.3.x. If you want to compile a 6.4.1 serial driver on 6.3.x, you'll need the 6.4.1 versions of libio-char.a, <io-char.h>, and dcmd_chr.h. - 6.3.x USB drivers should be compatible with 6.4.1. What's new: Kernel - Just as in a disk filesystem, physical memory can become fragmented over time, reducing performance and — in some cases — preventing applications that require contiguous memory from running. To remedy this, the memory manager can now defragment physical memory. This feature is enabled by default; to disable it, specify the -m~d option to procnto. For more information, see “Defragmenting physical memory” in the Process Manager chapter of the System Architecture guide. (Ref# 16405, 64365) - We've reduced the jitter in the interrupt-to-thread latency, especially on x86 platforms. - Qnet doesn't fully support communication between a big-endian machine and a little-endian machine. However, it does work between machines of different processor types (e.g. ARMLE, x86) that are of the same endian-ness. If you require cross-endian networking with Qnet, please contact your QNX sales representative. - We've corrected two separate bugs in the implementation of shared global memory mappings on ARM targets. These global mappings are placed at virtual addresses that are visible to all processes, so access protection is implemented by updating the MMU domain field in the L1 page table descriptors for a mapping to allow access only to the process with the specified domain. The two bugs were: - When variable page size support was enabled, the conversion between 4 KB/64 KB pages to a 1 MB page mapping didn't set the domain field correctly. - When a new global mapping was created at a virtual address that had previously contained a global mapping, the domain field wasn't being set correctly. Both these problems resulted in intermittent crashes (SIGSEGV) in applications that mapped global memory because the incorrect domain field in the L1 page table descriptor would prevent access by the application. (Ref# 60487) - If you exec() a program with a relative path, and you're using Qnet, the kernel no longer leaks small amounts of memory, and Qnet no longer leaks some resources. (Ref# 62502) What's new: Installing, booting, and licensing - The diskboot utility now has a -u option that you can use to override the default options passed to io-usb. (Ref# 52436, 64623; Ticket ID 80346) - If you add a license in text mode (e.g. on the console or via a telnet session) using finstall -l, you can now properly activate. (Ref# 62147) New utilities include: - applypatch - Install and uninstall QNX patches. (Ref# 67885) - showlicense - Display the type of QNX license that's currently active. 
(Ref# 62227) What's new: Core networking - We now support BIND 9. The new utilities and files include: - dig - DNS domain information groper lookup utility - dnssec-keygen - DNSSEC key-generation tool - dnssec-signzone - DNSSEC zone-signing tool - host - DNS lookup utility - lwresd - Lightweight resolver daemon - named-checkconf - Tool for checking the syntax of a named configuration file - named-checkzone - Tool for checking a zone file - named-compilezone - Tool for converting a zone file - nsupdate - Dynamic DNS update utility - rndc - Name server control utility - rndc-confgen - Key-generation tool for rndc - rndc.conf - Configuration file for rndc We've updated the following: - named - /etc/named.conf The named-xfer ancillary agent for inbound zone transfers is now obsolete. (Ref# 63686, 65079) - The setkey utility now supports the 3des-deriv encryption algorithm. (Ref# 60289, 64367) - We now use version 5.0 of ftp. (Ref# 65060, 65106) - We now use the latest version of the tftp client, which supports a new -e option and port argument, and several new commands: - blksize blk-size - tout - tsize By specifying a larger block size, you can transfer files that are larger than 32 MB. (Ref# 54779, 64021) - The Core Networking Stack documentation now includes a technote on converting io-net drivers into native io-pkt drivers. (Ref# 58620) - We ship the SSH suite, but it isn't currently documented. For now, refer to the NetBSD documentation: - When slinger is executing a CGI script and setting the environment variables to be passed to the script, the environment variable SERVER_ROOT is now correctly set to the directory specified by HTTPD_SCRIPTALIAS. (Ref# 20795) What's new: Filesystems - chkfsys - There's a new -x option that makes chkfsys exit with detailed error codes. If you don't specify this option, an exit status of zero (as before) doesn't indicate that no problems were found with the filesystems. It merely indicates that no irrecoverable errors internal to the chkfsys utility were encountered. For more information, see the Utilities Reference. (Ref# 32176, 61888) - fs-mac.so - This shared object provides read-only support for the Apple Macintosh HFS (Hierarchical File System) and HFS Plus. (Ref# 64406) - fs-nt.so - This shared object provides read-only support for the Windows NT filesystem. (Ref# 64405) -. (Ref# 65320) This filesystem also has a new vcd=num option that you can use to set the number of raw VCD 2352-byte deblocking buffers. (Ref# 15071, 67230) For more information, see the Utilities Reference. - fs-cd.so - We've deprecated this shared object in favor of fs-udf.so, but we'll continue to ship it. (Ref# 64992) What's new: Graphics - pwm - We've reworked the Photon window frames, in order to reduce the CPU load and improve the speed of redrawing. If you prefer the older look, do the following before starting Photon: export PH_WFRAME_STYLE=wframe_updated.so (Ref# 66257) - Composition Manager - A hardware-independent layer of abstraction that encompasses all aspects of window management, such as window creation, realization, and destruction, as well as content updates inside the Human Machine Interface (HMI). The composition manager is compliant with OpenKode and OpenGL ES. For more information, see the Composition Manager Developer's Guide. - Pt_ARG_HIGHLIGHT_ROUNDNESS - If you create a widget, and you set Pt_ARG_HIGHLIGHT_ROUNDNESS to a nonzero value, the widget's gradient fill is now bypassed. 
(Ref# 24980) What's new: Libraries and header files - calloc(), malloc(), realloc() - The MALLOC_OPTIONS environment variable now controls how these functions(). (Ref# 64337, 64338) - chdir(), chroot() - We've fixed a bug in chdir() that made chroot() not work properly. (Ref# 21089) - endfsent(), getfsent(), getfsfile(), getfsspec(), setfsent() - These functions work with the /etc/fstab file; you can use them to get information about the filesystem that this file describes. (Ref# 65331, 65399) - getpwent_r() - This is a reentrant version of getpwent(). Both these functions get the next entry from the password database. (Ref# 65137, 65324) - mmap() - You can now create more than one writeable mapping to a file. (Ref# 29565, 65582) - resmgr_handle_grow(), resmgr_iofuncs(), resmgr_ocb() - We now provide public versions (with names that don't start with an underscore) of these functions, which you should use instead of _resmgr_handle_grow(), _resmgr_iofuncs(), and _resmgr_ocb(). Note that resmgr_iofuncs() and resmgr_ocb() both take only one argument, a pointer to a resmgr_context_t structure. (Ref# 12297) - _resmgr_io_func() - This function is for our internal use only; use resmgr_iofuncs() instead. (Ref# 12297) - posix_spawnattr_setcred(), posix_spawnattr_getcred() - These new functions let you set or get the credentials, including the user and group IDs, to be used for spawned processes. (Ref# 61079) What's new: Drivers What's new: Block-oriented drivers (devb-*) New drivers: - devb-ahci - Driver for AHCI SATA interfaces (QNX Neutrino) We've also addressed the following: - devb-eide - This driver no longer auto-detects RAID controllers, so as to avoid corrupting RAID configurations. Use the jumper on your card to disable RAID mode. (Ref# 60231) What's new: Graphics drivers (devg-*) The new drivers include: - devg-poulsbo.so - Graphics driver for Intel Poulsbo chipsets We've also addressed the following: - Switching from devg-matroxg.so to devg-svga.so via modeswitching no longer causes io-graphics and/or io-display to fault. (Ref# 61152) What's new: Network drivers (devn-*, devnp-*) The new drivers include: - devnp-e1000.so - Driver for Intel Gigabit Ethernet controllers (Ref# 66074, 67003) (it will support TSO in a future release) The enumerators currently start devnp-e1000.so for Intel Gigabit Ethernet controllers, but you might want to switch to devnp-i82544.so, depending on your hardware. - devnp-rtl8169.so - Driver for Realtek 8169 Gigabit Ethernet controllers; replaces devn-rtl8169.so. (Ref# 67108, 67238) We've also addressed the following: - devnp-i82544.so - We've corrected the default values for the irq_threshold and transmit options in the usage message and documentation. This driver doesn't support the promiscuous option. (Ref# 62229) What's new: Documentation - The Processes chapter of the QNX Neutrino Programmer's Guide now has a section on using the /proc/pid/as files to examine and control processes and threads. (Ref# 17722) - The documentation now explains that if you send with the “nc” variant of MsgSend*(), then when the server replies, you're placed at the front of the ready queue, rather than at the end. Note that your timeslice isn't replenished; for example, if you had already used half of your timeslice before sending, then you will still only have half a timeslice left before being eligible for preemption. This change lets MsgSendnc() to behave in a similar fashion to a non-blocking kernel call, without penalizing the caller by forfeiting its timeslice. 
(Ref# 62293) - We've moved the chapter on writing a resource manager out of the QNX Neutrino Programmer's Guide and expanded it into its own book. (Ref# 42688) - The QNX Neutrino Programmer's Guide now has a chapter on the kernel's concept of time, and the effects on delays and timers. (Ref# 25817, 41038) - The I2C (Inter-Integrated Circuit) Framework technote no longer refers to a multithreaded resource manager. (Ref# 49900) - The SPI (Serial Peripheral Interface) Framework technote now includes correct information about what the spi_read(), spi_write(), spi_xchange(), spi_cmdread(), and spi_dma_xchange() functions return. If successful, these functions return the number of bytes read, written, or exchanged. (Ref# 59977; Ticket ID 84471) - The System Architecture guide and the QNX Neutrino Library Reference now say that. (Ref# 61585) - While most of our books have had individual indexes for quite a while, the IDE's Help system now combines them all into one very large index. (Ref# 58509) - The IDE's help system now correctly displays all of the 6.4.1 documentation. (Ref# 63072) What's new: I/O devices - The io-usb subsystem now correctly detects USB mice and keyboards on HP XW6600 workstations. (Ref# 61680) What's new: Security This release addresses the following security concerns: - We've fixed a problem that allowed non-kernel processes to access arbitrary memory. (Ref# 67251) - MsgSend*v*() and MsgReceive*v*() now indicate an error of EOVERFLOW instead of overflowing a buffer if the sum of the user's IOV lengths exceeds INT_MAX. (Ref# 62575) - We've corrected a problem associated with using mmap() to create a mapping beyond the current end of the file. (This is legal according to POSIX, but you'll get a SIGBUS if you actually try to touch the page). When the kernel frees the internal structures for the memory-mapped file, it now correctly removes the physical memory that was allocated for the potential file data. (Ref# 64718; Security Focus 33352) - We've corrected the safer_name_suffix() function in GNU tar so as to prevent a buffer overflow. (Ref# 63070; CVE-2007-4476) - We've fixed the handling of too large ICMPv6 messages so as to prevent a denial of service. (Ref# 62468; NetBSD-SA2008-015 CVE-2008-3530) - We've corrected a problem with the handling of long messages in ftpd that could have aided attackers in executing CSRF attacks when, for example, using a web browser to access ftp servers. (Ref# 62463; NetBSD-SA2008-014 CVE-2008-4247) - We've addressed a bug in range checking in pppoe that could have allowed a malicious packet to access memory outside of the allocated buffer and cause a crash. (Ref# 62455; NetBSD-SA2008-015 CVE-2008-3584) - We've incorporated an update for OpenSSH that fixes a weakness and a vulnerability that could have been exploited by malicious local users to bypass certain security restrictions and to disclose sensitive information. (Ref# 57565; Security Update OpenSSH (from Secunia.com) SA29939 SA29522 SA29602) - We've verified that our version of telnet isn't vulnerable to an identified issue with the handling of login and ENVIRON. (Ref# 46283; VU#467577) - Long executable names can no longer cause a buffer overflow in procnto. (Ref# 38199) - We've verified that our login never calls system(); doing so gives an attacker a chance to get a root shell. (Ref# 38023) What's new: Web browsers - Web Browser Engine — a high-performance embeddable web browser that's based on the WebKit open source engine. 
We ship a native Advanced Graphics version of the browser with basic functionality; your Photon or Flash application can add windowing and a control interface (e.g. address bar and navigation). For more information, see the Web Browser Engine Developer's Guide. Using the VMware image of a QNX Neutrino runtime system We provide a VMware image of a QNX Neutrino runtime system in target/QNX_Eval_RT.zip on the installation DVD; it's also available in the Download area of our website. To install this image, do the following: - Extract the VMware target from the DVD. For example, on Windows, open a Windows Explorer window, double-click on the target\QNX_Eval_RT.zip file, and then drag the QNX_Eval_RT folder to some location on your hard drive (e.g. My Documents). - To launch, either: - Start VMware Player, browse to where you saved the QNX_Eval_RT folder, and then choose QNX_Eval_RT.vmx. Or: - Navigate to where you saved the QNX_Eval_RT folder, and then double-click the VMware configuration file, QNX_Eval_RT.vmx. - When VMware Player displays a dialog saying that the virtual machine was moved, select Create and click OK. If you find problems with any virtualization environment, please post your findings in one of the forums on our Foundry27 community website. What's new in the QNX Momentics Tool Suite? The changes to the QNX Momentics Tool Suite include the following: - Command-line tools: - GCC and GDB updates - support for ARMv7 - support for E500 SPE - We now ship full debugging information and map files for all binaries. - Integrated Development Environment: - Eclipse 3.4.x and CDT 5.0 - Mudflap Visualization - Xray-type functionality in the System Profiler For more details, see the following sections: - Compiler, tools, and utilities - What's new: Debugging information for shipped binaries - Integrated Development Environment - A word about coexistence What's new: Compiler, tools, and utilities The QNX Momentics Tool Suite 6.4.1 includes the following versions of the compiler and tools: - GCC 4.3 tool chain - GDB 6.8 - Binutils 2.19 - Mudflap Other changes include: - gcc - The sizes of symbols in MIPS binaries are now correct. (Ref# 25938) - On MIPS, exceptions in shared libraries now work as expected instead of generating a SIGILL or other signal. (Ref# 23355) - gdb - On PPC targets, ntoppc-gdb uses hardware watchpoints when they're available. If hardware watchpoints aren't available, gdb uses software watchpoints. The kernel has support for hardware watchpoints only on BookE. (Ref# 13133, 21293) - If you have two breakpoints exactly one instruction apart, the second breakpoint now correctly stops the debugger. (Ref# 20833) - pidin - The pidin syspage command now displays the CPU-dependent, mdriver, and pminfo sections of the system page. (Ref# 62406, 67058) - slogger - We've corrected a processing error that occurred when buffers wrapped, so garbage no longer appears in the system log file. (Ref# 62025, 60653) What's new: Debugging information for shipped binaries Starting with QNX SDP 6.4.1, we're generating binaries with debugging information (-g) and map files. With few exceptions, all binaries are available with debugging information, but this data as well as the .ident information are stripped and stored in a separate binaryName.sym file. These files are linked together, so gdb understands where to find the symbol data. There's now no need for a separate debug version of all the binaries. 
There's a separate tar file containing all the .sym files that will untarred alongside each binary. This file is available in the Download area of our website, as well as in the debugging_info directory on the QNX SDP 6.4.1 DVD. What it means to you: - The target binaries are now stripped. - The target binaries don't have any SRCVERSION information in them (i.e. use -s won't work). - All Neutrino binaries are built with -g (i.e. full debug). - We now produce linker map files for all Neutrino binaries. - The full debug symbols for a binary called some_binary (along with the SRCVERSION information) are stored in a file called some_binary-buildid.sym. - The binary and its associated symbol file are “linked” so gdb knows how to find the symbols. These *-buildid.sym files are in CPU-specific tar files. The usage of these tar files is straightforward. Suppose you want to debug the ls command for x86. You could just add the entire set of debug files to your target (you need to be root, of course): - Get the target-x86-debug-date.tgz file. - cd $QNX_TARGET - tar -zxf path/target-x86-debug-date.tgz Then ntox86-gdb $QNX_TARGET/bin/ls would load the debugging symbols from the .sym file automatically. Since this is the full debugging information, you can point gdb at the location of source for ls. You don't have to extract the entire tar file. In fact all that matters is that the .sym file be in the same directory as the binary. So you could simply copy $QNX_TARGET/x86/bin/ls and the x86/bin/ls-*.sym file (from the debug tar file) to your current directory, and then run gdb there. In order to get a list of the source files used to build a binary (e.g. to determine the associated licensing), use the .sym file instead of the binary. So continuing with the example above, to get a list of the source files used in building ls, type: use -s $QNX_TARGET/x86/bin/ls-*.sym What's new: Integrated Development Environment - The QNX Momentics Tool Suite 6.4.1 includes version 4.6 of the IDE; its new features include: - Eclipse 3.4 - CDT 5.0 - GCC Mudflap visualization - improvements to the System Profiler - the ability to import projects (including BSPs) and browse source packages from our Foundry27 community website - more information about CPU usage in the System Information perspective - We no longer ship the Neutrino-hosted version of the IDE. (Ref# 60706) - The QNX Source Package and BSP wizard replaces separate wizards for importing QNX source packages and BSPs. (Ref# 67407, 67409) This version of the IDE includes the following fixes for previously reported issues: - The Code Coverage tooling now works properly for applications (e.g servers) that don't exit. (Ref# 60457) - In the Target File System Navigator for a target booted from .boot, you can now drag the file .boot from a project and drop it into the / directory in the Target File System view. (Ref# 46882; Ticket ID 77863) - The C/C++ search now works properly for macros. (Ref# 23596) - The IDE's System Information perspective's Malloc Info view is now updated correctly when Memory Analysis Tooling is active on the same process. (Ref# 28712) - The APS CPU Usage view now correctly starts and ends the graph line for CPU usage; the initial value can't be zero. (Ref# 42656, 45780) - In the System Builder, if you add a host directory by specifying a root of target/x86 and a location of / in the image, you can now save the build file, and the root is correctly added. 
(Ref# 62063) For more information, see the What's New in the IDE appendix of the IDE User's Guide. A word about coexistence. QWinCfg for Windows hosts. qconfig utility for non-Windows hosts If you're using the command-line tools, use the qconfig utility to configure your machine to use a specific version of Neutrino: - If you run it without any options, qconfig lists the versions that are installed on your machine. - If you specify the -e option, you can set up the environment for building software for a specific version of the OS. For example, if you're using the Korn shell (ksh), you can configure your machine like this: eval `qconfig -n "QNX Software Development Platform 6.4.1" -e` (For information about coexistence and the IDE, see the “Version coexistence” section in the IDE Concepts chapter of the IDE User's Guide.) Discontinued items - We no longer support the PowerPC 900 series of processors, so we've deprecated procnto-900 and procnto-900-smp. (Ref# 68225, 68230) - We no longer ship the Neutrino-hosted version of the IDE. (Ref# 60706, 61193) - We plan to deprecate rtelnet in a future release. (Ref# 61851, 61852) - We no longer ship the following binaries: - sin — use pidin instead. (Ref# 66906, 66908) - devn-rtl8169.so — use devnp-rtl8169.so instead. (Ref# 67108, 67238) Experimental items The experimental items in QNX SDP 6.4.1 are: - asynchronous messaging - asynchronous message queues - adaptive partitioning memory allocator Known issues QNX SDP 6.4.1 contains known issues in these areas: - Known issues: Installing and uninstalling - In order to install QNX SDP on Linux or Windows, the QNX license file must be writable by everyone. If the installer stops and warns you that this file isn't writable, you can make it so as follows: - On Windows, right-click on C:\Program Files\QNX Software Systems\license and choose Properties. Make sure that the “Read-only” attribute isn't checked, click Apply, and then click OK. - On Linux, type the following: chmod a+rw /etc/qnx/license/licenses (Ref# 62419) - On some Linux distributions, the QNX SDP 6.4.1 installer incorrectly displays accented “e” characters as a square box in the French text of the “Language” section of the license agreements. (Ref# 61721) Workaround: To display the text correctly, open a web browser and view the license agreement .txt files located in base_dir/install/qnxsdp/6.4.1, where base_dir is where you installed SDP. - If you install SDP 6.4.1 on a Windows or Linux system that already has QNX Momentics 6.3.2, the installer tells you that it's modifying 6.3.2 to support coexistence with 6.4.1. Here are the details: - Windows - The 6.4.1 installer checks to see if the cleanup utility, QNXWinCleanup.exe, is present under the 6.3.2 host directory (typically C:\QNX632\host) and moves it to C:\Program Files. If you uninstall 6.4.1, the uninstaller checks to see if 6.3.2 is present. If so, it moves the cleanup utility back to its original location. - Linux - The 6.4.1 installer replaces uninstaller.bin in 632_base_dir with a script that launches the 6.3.2 uninstaller with a special option: ./uninstaller.bin -W beanDeleteConfigDir.active="False" If you uninstall 6.4.1, the uninstaller checks to see if 6.3.2 is present. If so, it restores uninstaller.bin. (Ref# 58784, 60037) - If you've installed both QNX Momentics 6.3.2 and the QNX Software Development Platform 6.4.1 on Linux or Windows, and you then uninstall 6.3.2, the value of the QNX_CONFIGURATION environment variable will be incorrect, and you won't be able to use 6.4.1.
(Ref# 58784) Workaround: Remove the extra qconfig string from the value of QNX_CONFIGURATION. - If you install 6.4.1, and you then install 6.3.2, you need to do the following: - On Windows, after installing 6.3.2 over 6.4.1, make sure to move the cleanup utility QNXWinCleanup.exe from $QNX_HOST to C:\WINDOWS. - On Linux, when you run the 6.3.2 uninstaller, use the following command-line arguments to leave the 6.4.1 installation unaffected: qnx632_base_dir/_uninstall/qnx632/uninstaller.bin -W beanDeleteConfigDir.active="False" (Ref# 56879) - VMware ESX (or a VMware workstation using SCSI disks) doesn't present an EIDE interface to the guest OS. It offers only a default of an LSI Logic SCSI 320 device (which we don't support) and a second selectable option of a BusLogic 946C device as a PCI device. It does let you boot off an IDE CDROM, but won't let you install to an IDE disk; even if the real physical storage is an IDE device, VMware presents it virtually as one of the two aforementioned devices. (Ref# 51509) Workaround: In order to install Neutrino on a VMware VM using the BusLogic SCSI controller emulation, you must first apply a driver update. We've included this update on the installation media: - Boot from the installation CD. - On seeing the initial “Press space for options” message, press the space bar. - Choose to apply the driver update. - Follow the instructions on the screen using /fs/cd0 (i.e. the installation media) as the source. For more information on installing driver updates, see “Updating disk drivers” in the Controlling How Neutrino Starts chapter of the QNX Neutrino User's Guide. - If you install QNX Software Development Platform 6.4.1 on Windows using a third-party windows explorer (e.g. Total Commander), the installer doesn't display the Activation window once the installation is complete. (Ref# 59359) Workaround: Open the QNX SDP Activation dialog by selectingfrom the Start menu, or by entering the following at the command prompt: drive:\Program Files\QNX Software Systems\bin\qnxactivate -a - The installer launches a web browser at the end of installation to display a landing page on the QNX website. On some versions of Linux, the installer can't launch the browser, but sometimes only if you already have QNX Momentics 6.3.2 installed. (Ref# 61494) Workaround: Launch a web browser and go to. Known issues: Kernel - procnto -) - We've observed some memory corruption for uncacheable memory with the Renesas BigSur (SH7751) board. It might be a problem with the hardware. (Ref# 27741) - Some calls to mmap() with MAP_ANON or MAP_LAZY may be slower with QNX SDP 6.4.1 than with earlier releases on certain platforms. The difference is more pronounced for small sizes (e.g. 4 KB). For larger sizes (more than 32 KB), performance may be the same or better with 6.4. It might take longer to start applications and create threads. In part, this is due to the virtual memory manager's more complete data structures. (Ref# 27831) - If you're in a directory on a remote machine, and you pipe the output of a command to xargs, and you redirect the output to a file, you get a “cannot fork” error. For example: cd /net/remote_machine/tmp find . -type f | xargs grep FAIL > report.txt /bin/sh: cannot fork - try again It seems to be a problem with permissions. Piping the output of xargs to less works. (Ref# 29834) Workaround: Log in as root. - If you have multiple memory, 30045) - Some older versions of VMware may show signs of instability. 
For example, you might get kernel faults that don't occur on real machines or with VMware 6.5. You might also see messages such as “The CPU has been disabled by the guest operating system.” (Ref# 57058) - procnto and the underlying filesystem may become deadlocked when you use read/write memory-mapped files with multiple threads, under the following circumstances: - If you have multiple, 29380) - procnto-smp - On SMP systems, the functions that lock mutexes — such as pthread_mutex_lock() and pthread_cond_wait() (when the thread is woken up by a pthread_cond_signal() or pthread_cond_broadcast()) — can unblock threads in the wrong order, which can cause a priority inversion. (Ref# 24522) Known issues: Libraries and header files - asyncmsg_*() - Asynchronous messaging doesn't work correctly on multiprocessor systems. (Ref# 57260) - fpemu.so.2 - This library causes some problems on x86 targets if it's compiled with gcc 4 with optimization above -O0. To avoid these problems, we've compiled the DLL with -O0 optimization for x86. (Ref# 55883) - Time zones - QNX Neutrino uses a nonstandard method of defining time zones that's difficult to keep up to date. We plan to replace it in a later release. (Ref# 44425) - libxml.a - This library is a Neutrino interface to version 1.1 of the Expat XML Parser, but it isn't documented. For information about this library, see <xmlparse.h> and <xmltok.h> in $QNX_TARGET/usr/include. (Ref# 56140) Known issues: Filesystems - io-fs-media - On ARM platforms, you can't use an io-fs-media share to store a directory structure with more than 16 MB of data. (Ref# 56601) - chkfsys - If you send a SIGTERM or SIGKILL signal to a devb-* driver, chkfsys might subsequently find errors on the filesystem. (Ref# 48741, 48764, 48765) - fs-ext2.so - If you try to delete a linked file or directory in a Linux Ext2 filesystem, you get a “Corrupted file system detected” error. (Ref# 50264) Known issues: Startup - If you have a PC-compatible system with a BIOS, and the system has 4 GB or more of memory, you should specify the -x option to startup-bios. This option enables extended addressing, which lets you access physical addresses above 4 GB. We'll turn this option on by default in a future release. (Ref# 61758) - A situation exists on PowerPC-based boards with less than 256 MB of RAM whereby a machine check can be received. This is due to a speculative load (for a branch not taken) from a memory address beyond the extent of physical RAM but within the first 256 MB window. This condition has been detected only on a Freescale MGT5200 Lite with 64 MB of memory and only during the execution of a specific sequence of regression and benchmark tests on the 6.4 kernel. The possibility of occurrence has existed in all previous Neutrino releases; however, to the best of our knowledge, no such failures have been reported. (Ref# 28335) Known issues: Adaptive partitioning - The usage message for the ap utility refers to memory partitioning, which isn't implemented in 6.4.1. You can get an unsupported version of memory partitioning from Foundry 27. (Ref# 58195) - Overload detection isn't implemented. - Adaptive partitioning isn't supported on the 386 and 486 x86 processors, due to the missing timebase counter on those processors. (Ref# 28080) - SCHED_RR threads might not round-robin in partitions whose portion of the averaging window is smaller than one timeslice.
For example, when the timeslice is 4 ms (the default) and the adaptive partitioning scheduler's window size is 100 ms (the default), then SCHED_RR threads in a 4% partition may not round-robin correctly. (Ref# 28035) - If you use adaptive partitioning and bound multiprocessing (BMP), some combinations of budgets might not be met. (Ref# 29408) - Threads in a zero-budget partition should run only when all other nonzero-budget partitions are idle. However, on SMP machines, zero-budget partitions may incorrectly run when some other partitions are demanding time. However, at all times, all partitions' minimum budgets are still guaranteed, and zero-budget partitions will not run if all nonzero-budget partitions are ready to run. (Ref# 29434) - On ARM targets, the 10 window and 100 window averages, as reported by the aps show -v command, are sometimes garbled. However, these have no effect on scheduling. (Ref# 27552) Known issues: Booting - The combination of F1 and F4 for diskboot (Safe Mode, Don't mount filesystems) doesn't work. (Ref# 21876) Workaround: Press F5 to start the debug shell; it simply starts fesh just after mounting the filesystems. If you want to run a consistency check a filesystem, run /sbin/chkfsys after the shell starts. - If you install QNX Neutrino on a system that uses the Intel Express Q35 chipset, the OS won't boot. The ITE EIDE interface on this board isn't supported. (Ref# 61188) Workaround: Run the driver in PIO mode. - If you install QNX SDP 6.4.1 directly on a USB drive, it might not boot automatically. (Ref# 61707) Workaround: Replace the partition boot loader. From a working system, run: dloader /dev/part pc2 where part is the device name of the partition you need to boot. - Fujitsu Coral cards don't support text mode, so on x86 systems, you need two video cards: one to use in text mode, and one for the Coral card. - QNX Neutrino might not boot on machines with an ICH6 chipset with the hard drive on SATA, and a CD drive on EIDE. The OS detects the SATA device and then hangs on EIDE detection. (Ref# 40446). - Some Sony VAIO laptops don't assign an interrupt to a USB device; when you're booting Neutrino, you'll see some “InterruptAttachEvent failed” messages. (Ref# 41237) Workaround: Contact Technical Support to get a customized utility that enables the interrupts. - On some Intel 3.2GHz D945G systems, the USB bus is reset by the host after rebooting, while the host is addressing the device. (Ref# 51935) Workaround: Disable legacy USB support in the BIOS. - The Dell Latitude D830 fails to boot QNX Neutrino 6.4.1 from USB mass-storage devices. (Ref# 61688) - The bootable version of QNX Neutrino on the CD doesn't include the documentation, in order to reduce the space requirements. Known issues: BSPs and DDKs - Don't use a BSP with the QNX SDP 6.4.1 unless the BSP's release notes state that you should. Please contact us for more information. - The code for some previously released BSPs accesses a field in the internal data structure for mutex attributes instead of using a function call as it should. This will cause errors when you compile the code with this version of QNX Neutrino. (Ref# 52361) Workaround: Look for this line in the source code: mattr.flags = PTHREAD_RECURSIVE_ENABLE; and change it to this: pthread_mutexattr_setrecursive( &mattr, PTHREAD_RECURSIVE_ENABLE); - On Neutrino hosts, running the default uninstall script uninstalls both the binary and source BSP packages. 
(Ref# 18894) - SH4 binaries linked with QNX Neutrino 6.2.1 or earlier generate an “illegal instruction” error on exiting when you run them on SH4A targets. Renesas changed the nop opcode between SH4 and SH4A, and some of our process-initialization files (inserted at link time) used the old form. This was fixed in QNX Neutrino 6.3.0. (Ref# 24701) Workaround: Relink any SH4 binaries that you linked with QNX Neutrino 6.2.1 or earlier. Known issues: Compiler and tools - gcc - On most platforms, the gcc options -fpic and -fPIC are synonymous, but on PPC they're different and incompatible. We chose -fpic over -fPIC for performance reasons, and our PPC OS is built that way. (Ref# 21947) Workaround: If you see problems (such as relocation-truncation errors) at link time when building shared objects, consider splitting your shared object into multiple shared objects. - ksh, pipe - There's a problem with interactions between pipe and ksh on SH targets. If you use multiple pipes in a command line under ksh, all output can be lost. For example: # uname -a QNX renesas_sh7785 6.4.0 2008/09/26-04:27:12EDT SDK_7785 shle # uname -a | grep renesas QNX renesas_sh7785 6.4.0 2008/09/26-04:27:12EDT SDK_7785 shle # uname -a | grep shle QNX renesas_sh7785 6.4.0 2008/09/26-04:27:12EDT SDK_7785 shle # uname -a | grep renesas | grep shle # (Ref# 62242) - ld - Between the 2.10.1 version of the GNU linker in QNX Momentics 6.2.1 and the 2.12.1 version in QNX Momentics 6.3.0, a bug was fixed in the handling of relocation addends for SH targets. As a result of this fix, SH startup binaries (e.g. startup-systemh) that were created prior to QNX Momentics 6.3.0 won't work correctly if included in a boot image generated in QNX Momentics 6.3 or later. Workaround: Rebuild the startup binary using QNX Momentics 6.4.1. The resulting startup will work with 6.2.1, 6.3, and 6.4.1. - pidin - The pidin mem command doesn't display the correct amount of memory if it exceeds 2^31 − 1 bytes. (Ref# 63642) - qcp - The qcp utility works only on x86 platforms. (Ref# 9500) Known issues: Device drivers - You might see a message such as “Range check failed (MEM) - Dev 1b - Vend 168c - Class 20000 - Addr 0 - Size 10000” in the system log, but you can ignore it. - We haven't fully tested the following drivers: - deva-ctrl-ess1938.so - deva-ctrl-geode.so - deva-ctrl-nmg6.so - deva-ctrl-sb.so - deva-ctrl-via8233.so - devc-serzscc - devg-flat.so - devg-geode.so - devg-sis630.so - devh-touchintl.so - devi-dyna - devi-semtech - devn-el509.so - devn-pegasus.so - devn-rtl8150.so - devn-smc9000.so - devp-pccard (Ref# 61821) Audio device drivers (deva-*) - Audio drivers included in BSPs that were released before QNX SDP 6.4.1 are incompatible with 6.4.1. If you try to start them, you'll get some errors about unresolved symbols. If you have the source code, and you try to recompile it using gcc 4.2, you'll get some compile errors. (Ref# 59692) Workaround: Relink the driver binaries on Neutrino 6.4.1. For updated source code, see Foundry 27 on our website, or contact Technical Support. - The graphics drivers run at a higher priority than applications, but they shouldn't run at a higher priority than the audio, or else breaks in the audio occur. (Ref# 4026) Workaround: Use the on command to adjust the priorities of the audio and graphics drivers. Block-oriented drivers (devb-*) - devb-* - High-priority threads can be starved by lower-priority ones via devb-* because the filesystem uses sleepon_*() functions, which don't inherit priorities.
(Ref# 2109) - devb-adpu320 - Reading DVD-RAM causes devb-adpu320 to become blocked on a CONDVAR. (Ref# 19772) - devb-doc, devb-doc3, dformat, dformat3 - The Disk On Chip drivers were provided by the vendor. If you run use -i on them, the state is given as Experimental. (Ref# 23101) - devb-aha8 - You can't restart this driver on IBM PPC405 boards. (Ref# 16018) - devb-eide - DMA modes don't work on these drives: - Hitachi-LG Data Storage DVD WRITABLE/CD-RW DRIVE, ROM VER.E111, May 2006 - Toshiba Samsung Storage Technology TS-H352C/DELH, DE02, May 2006 (Ref# 41600) Character drivers (devc-*) - devc-con - On x86 systems, the devc-con console manager doesn't work correctly when you're using a USB keyboard. (Ref# 62053) Workaround: Use the devc-con-hid console manager instead. Graphics drivers (devg-*) - The devg-smi5xx.so driver faults or deadlocks in multicard setups. (Ref# 59790, 60369) - The flash-ph player (part of the QNX Aviage HMI Suite) doesn't work with video cards that don't provide a linearly accessible frame buffer; this includes the Fijitsu Carmine graphics card. (Ref# 61783) - If you use devg-radeon.so with two cards that have the same vendor and device IDs, the driver fails. (Ref# 61528) - The devg-radeon.so driver doesn't work properly on DVI-equipped monitors. If you're using an ATI Radeon PCI-Express Vendor ID 0x1002, and device ID 0x5B60, you may experience GUI failure during mode switching. (Ref# 41905) Workaround: Use the devg-svga.so or devg-vesabios.so graphics driver instead, or manually edit /etc/system/config/display.conf to find a display mode that works with devg-radeon.so. - If you use direct mode with the devg-i830.so driver, some images may not appear on the display. (Ref# 61008) - If you use devg-vesabios.so on a Dell 830, the system won't reset when you shut it down while in graphics mode. (Ref# 57168) Workaround: Use phgrafx to change the driver to devg-i830.so. You can also avoid the problem by exiting to text mode, and then typing shutdown at the command prompt. - All graphics drivers hang while trapping on (discontinued) Abit IS-20 (865GV) motherboards because of an issue in the BIOS. (Ref# 39626) Workaround: Use the onboard graphics controller instead. If you set the onboard display as the primary controller, any installed PCI graphics cards will still be detected / trapped. - The planar YUV overlay format doesn't work properly in the devg-radeon.so driver. (Ref# 29014) - The -h and -w options of the phgrafx utility are no longer used. (Ref# 62431) Human interface device drivers (devh-*) - If you press several keys at once on some Microsoft keyboards, the keyboard doesn't produce any indication when you release the keys. As a result, the input driver thinks you're still holding the keys down. For more information, see. (Ref# 40611) - Autorepeat doesn't currently work on ViewSonic 10191 USB keyboards. (Ref# 41118) - Pressing the space bar on a ViewSonic 10191 USB keyboard when the system displays the “Press space bar to input boot options” message doesn't work. You get the menu only after the EIDE enumeration is done. If you also have a Microsoft USB mouse connected, you get a “devh-usb.so - Unable to attach to USB device 1 (10)” message. (Ref# 41122) Network drivers (devn-*, devnp-*) - devn-asix.so - This driver doesn't support the 1000 MB/s interface of the Linksys Gigabit USB Adapter (model no. USB1000). (Ref# 38115) Workaround: Force the driver to use speed and duplex settings that it supports (10 and 100 Mbit/s). 
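For example, you might start the driver with the media settings given explicitly; the option names below are an assumption, so check the devn-asix.so entry in the Utilities Reference for the exact syntax:
io-net -d asix speed=100,duplex=1 -ptcpip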
- If you use devn-asix.so with devu-uhci.so in a VMware session, and you remove the USB dongle, io-usb fails with a segmentation fault. (Ref# 61765) - devn-i82544.so - If you use more than 64 Tx descriptors, the PPC version of the driver is inoperative. (Ref# 22848) Workaround: We've temporarily changed the PPC version of this driver to use 64 Tx descriptors by default (on other targets, the default is 128). This may result in lost packets for high-throughput transmit operations. - devn-micrel8841.so - This driver supports only PCI versions of the Micrel 8841 (1 port) or 8842 (2 port) Ethernet controllers. (Ref# 67333) - devn-pcnet.so - This driver doesn't support Fiber PCNET cards with the AM79C971KC chip. (Ref# 12497) - We don't recommend that you use devn-pcnet.so on the BCM1250 platform, because it can lose receive interrupts. (Ref# 29714) - devn-pegasus.so, devn-rtl8150.so - Slaying io-net with the devn-pegasus.so and devn-rtl8150.so drivers isn't always successful. (Ref# 28602) Workaround: Use kill -9 instead to kill io-net. - devn-rtl8150.so - On the SH platform, the lan= option gets overridden. (Ref# 29285) Workaround: Fully specify the vendor ID, device ID, bus number, and device number to the driver when starting (e.g. vid=0x0bda,did=0x8150,busnum=1,devnum=2,lan=2). - Multicast and promiscuous modes for the rtl8150 driver aren't currently supported. (Ref# 29352) - devn-tigon3.so - This network driver on the Dell PowerEdge 850 board will run only up to 100 Mbit/s, and not 1000 Mbit/s. Other boards work well at 1000 Mbit/s. (Ref# 39355) - devnp-axe.so - The devnp-axe.so USB-Ethernet dongle sometimes drops packets when used on slow systems or with UHCI. (Ref# 62088) Workaround: If you encounter problems with this driver, use the io-net driver devn-asix.so instead. - The version of this driver that we ship is compiled for x86 only. If you want to use it on other platforms, download the source code for it from Foundry27. (Ref# 61983) - devnp-bcm43xx.so - It isn't currently possible to unmount devnp-bcm43xx.so drivers using the ifconfig ... destroy command. We'll add this capability in the next revision of the driver. (Ref# 61710) - devnp-i82544.so - The io-pkt driver for the Intel i82544 doesn't support dual-port cards (did=0x1010). (Ref# 44299) USB drivers (devu-*) - When you remove a USB device, an error message of the form: Unable to find remove id ### may appear on the console. This error message comes from the enumerator and is not an indication of an error condition. (Ref# 61971) - The PCI-USBNEC101-5P-1 controller card won't allow at least the following boards to boot: SystemH, EDOSK7780, BCM1x80. (Ref# 29496) Flash filesystems & embedding - In previous versions of this software, a program that called umount() without the _MOUNT_FORCE flag would behave as if the flag was provided (i.e. the flash filesystem would be unconditionally unmounted regardless of any operations either pending or in progress on the filesystem).. - libfs-flash3 loses blocks with ftruncate(). (Ref# 25132) - There's a memory leak of approximately 1 KB when you unmount a raw flash filesystem partition (e.g. dev/fsxpy). (Ref# 23643) - ETFS: file creation doesn't update parent directory times as POSIX requires. (Ref# 23243) - There's currently no flash probe utility. (Ref# 23136) - fs-flash3 doesn't have an iofdinfo() handler for mtree tests. 
(Ref# 18432) - If you create a 255-character filename using the 1.1.0 flash library (libfs-flash3) and the flash filesystem is subsequently mounted using an earlier version of libfs-flash3, the filename won't appear in the filesystem, but it is still present (i.e. if the filesystem is subsequently mounted again with the 1.1.0 libfs-flash3, the filename will reappear). This behavior applies only to forward compatibility whereby an older flash filesystem library is used to mount a newer filesystem. Backward compatibility (the ability of the new filesystem library to mount older filesystems) isn't affected. - During a power failure, the flash filesystem can be corrupted if the NOR device's power supply is in the indeterminate state. The solution is to design the hardware so that the NOR flash device enters RESET the moment the power supply drops below the proper operating range. (Ref# 24679) Known issues: IDE The IDE contains the following known issues: - General - Application Profiler perspective - System Profiler perspective - Memory Analysis perspective - System Information perspective - System Builder perspective - C/C++ development - Team/CVS - Debugging See also the list of host-specific issues, later in these notes. Known issues: General - We pre-index our documentation, but the first time you search for anything in the IDE's Help system, it has to combine the indexes (and index any Eclipse documents that haven't been indexed). This should take less than a minute. - If you install software updates (via), the wizard may list some secondary update sites. You install updates from such sites at your own risk. (Ref# 66909) - If you change the name of a library project name in the IDE, references to the library in application projects that use the library aren't automatically updated. (Ref# 45758; Ticket ID 75995) - You might see a message like this when you start the IDE: Subscription License Expired - Your QNX License could not be obtained, some QNX functionality will be disabled. This message appears only if QNX Software Systems has a contract with you to support server-based licenses. It indicates one of the following: - You need to configure your license server. - More users are trying to use the IDE at the same time than there are available licenses for. (Ref# 51688) - If you choose, and you already have the latest version of qconn, the updater simply exits without telling you that your qconn is up to date. (Ref# 50207) - The CDT Class browser appears to function normally; however, the information is not always accurate due to CDT indexer issues. (Ref# 26736) - The IDE has problems when projects are located in a remote (network drive) directory, and the network isn't reachable. (Ref# 11719) Workaround: Close any project that you created on a remote drive before disconnecting the drive. - The Eclipse editor doesn't behave correctly on very long lines (more than 4500 characters). At the end of a long line, the cursor doesn't position itself properly between characters, selections and changes are very slow, and the column number is reported incorrectly. (Ref# 29586, 21053; Eclipse-CDT PR 68116) - Pressing F1 for context-sensitive help doesn't always give you much information. (Ref# 21034) - When you're setting up a Run or Debug launch configuration, you can't use pathnames that are relative to the workspace for the local path to shared libraries. 
(Ref# 41476) - The progress bar in the Target Filesystem Navigator gives inaccurate status information when performing a large task, such as copying a large file from your target to your workspace. (Ref# 45567) - If the IDE can't open the browser to display the documentation, it may give you an error message of:) Known issues: Application Profiler perspective - The Application Profiler can allocate CPU time to the wrong line if you're profiling code that has profiling and debugging information, and if you linked against a static library that doesn't have profiling and debugging information. (Ref# 21024) Workaround: Build everything with debugging information, or use -gdwarf-2 instead of -gstabs. - The Application Profiler's sampling information shows “unknown” function names for MIPSLE and MIPSBE targets. There is no workaround at this time. (Ref# 24510) Known issues: System Profiler perspective - The System Profiler requires a minimum color depth of 16 bits; otherwise, the timelines might appear to be blank. (Ref# 23763) - The System Profiler can take a very long time to load and parse a .kev generated by a target system that's running Neutrino 6.3.0 SP1. The parsing is much faster for .kev files from a system with a later version of Neutrino. (Ref# 27221) - In the System Profiler's Timeline view, toggling the priority and event labels has no effect. (Ref# 42076) Workaround: For priority labels, you need to generate the log file in wide mode. - The text at top of the Summary page isn't displayed. (Ref# 45814) Workaround: Close and then reopen the System Profiler perspective to restore the summary information. Known issues: Memory Analysis perspective - While performing traces (and the traces are not empty), if all of the trace events that begin with 0xb do not have source information, a problem has occurred with the tracking of events from the shared library when the application ended. (Ref# 44617) - When you request the termination of the Memory Analysis service, it might take longer than expected. (Ref# 46228) - In the Memory Analysis perspective, if a function causes a buffer overflow, memory leaks aren't detected correctly. (Ref# 42312) - You might see some allocations take place before your application's main() function starts. This is normal; some of the system libraries allocate space as they're initialized. (Ref# 29698) - If a process is not running as the root on the target machine, the Attach mode will not function properly. (Ref# 44762) Workaround: Run the process as the root. If the process is launched using qconn, then qconn should be run as root. - The Memory Analysis Tool does not work properly when more than one IDE client connects to it. (Ref# 21819) Workaround: Use a unique file for MAT output (device or filesystem). - The Memory Analysis Tool changes the behavior of a program that uses fork. (Ref# 29032). - During a running session, if a running process exits, it usually terminates quickly; however, if the process receives a signal, it might take up to several minutes for the change in the session state to complete. (Ref# 46431) Known issues: System Information perspective - In the System Information perspective, if you use continuous logging and the currently selected process ends, the IDE loses focus. 
(Ref# 41630) - Occasionally, in the System Information perspective for the APS (adaptive partitioning scheduler) view, the Partition Child Process and Threads information aren't available if the target is slow, or if you're using QNX Momentics 6.3.0 SP3 or the SP2 to SP3 upgrade. (Ref# 40651) - The APS view in the System Information perspective doesn't show all of the bankruptcy information, such as when the last bankruptcy occurred. (Ref# 40057) - In the APS view's CPU Usage and Critical Time Usage panes, newly added partitions display with the same color as some of the existing ones, and partitions aren't cleared when you switch targets. (Ref# 40626) Known issues: System Builder perspective - In the global properties dialog, if you disable the QNX System Builder console option “Always clear before each build,” then the build doesn't happen, so no image is created, and no output is shown in the console at all. (Ref# 45597, 57986) - The System Builder doesn't currently have a way for you to enable the kernel dumper. (Ref# 56269) - If you have an application project that has a dependency on a shared library project, if you clean the application project, then open the Project Settings dialog and change an option (e.g., enable Code Coverage), after you click OK and click Yes to rebuild, the build process will fail with the following error:) - When you import some executable files into a project, they lose their executable permissions. (Ref# 12618) - In the System Builder, having the same binary and directory name in the overrides can result in an incorrect path for the binary. (Ref# 40287) - If you disable the system builder console preference Always clear before each build (, and select ), the System Builder can't perform a build. (Ref# 45597) Workaround: Don't disable the setting Always clear before each build. - For a System Builder project, if you select the Clean Project option, the make clean defined in the makefile isn't properly invoked. (Ref# 45628) Known issues: C/C++ development - If the Makefile for a regular C++ make project uses implicit rules such as the following:++ - Assigning a hotkey to the Rebuild Project action has no effect (the hotkey doesn't work). This is a bug in Eclipse 3.0; see bug 99193 at. (Ref# 25616) - The C++ Class Browser is currently experimental. (Ref# 37369) - The IDE sometimes ignores an explicit build request for a project, such as those invoked by using the Build Project entry in the C/C++ Project view's right-click menu, if the project uses an externally built library. (Ref# 20966) Workaround: Use an explicit target in the make command. Known issues: Team/CVS - If you check out an existing PhAB project from CVS, the IDE starts PhAB before checking the project out. PhAB creates some directories, and then the CVS checkout says that the files already exist. After telling the IDE to replace the files with the ones from CVS, PhAB still thinks that it's the old set of widgets (empty project). (Ref# 18405) Workaround: To check out an existing PhAB project from CVS: - Select an import source from. - Click Next. - In the Checkout Project from CVS Repository window, select an existing repository location, or create a new location. If you choose an existing location, select Use existing repository location. - Click Next. - In the Select Module window, enter a module name, or choose an existing module by selecting Use an existing module, and then browse the modules in the repository. - Click Next. 
- In the Check Out As window, select Check out as a project in the workspace. - Click Next. - In the Check Out As window, if not already checked, select Use default workspace location. - Click Next. - In the Select tag window, click Finish. - If you check a project out from CVS by using the New Project wizard, and you choose only one CPU with both debug and release versions, then when the wizard is done, the debug variant is always unchecked for SH, PPC, and x86. In addition, while the project is being checked out, a message displays indicating “The file has been changed on the file system, do you want to load the changes?”, but it doesn't indicate which file was changed. (Ref# 25422, Eclipse ref# 102659) Known issues: Debugging - Breakpoints set in .gdbinit don't show up in the user interface. (Ref# 55810) Known issues: Documentation - Additional documentation is required to complete the topics for the following areas in the IDE User's Guide: - the fast method of launching a debug session - debugging a DLL/shared library - attaching to a running process with the debugger (Ref# 42437) - Invoke the command for installing patches as applypatch, not applypatch.py as indicated in the Utilities Reference. (Ref# 68568) Known issues: Instant Device Activation - The default startup [image=] memory location as specified in buildfiles for Biscayne boards causes a memory error when using a minidriver. You get a “cannot remove all sysram” error message. (Ref# 23632) Workaround: Change the memory location to 8c004000 from 8c002000 (e.g. [image=0x8c004000]). - We've provided a mini serial port driver for the Freescale Lite5200B, but not for the Freescale Media5200b. (Ref# 40572) Workaround: If you need a mini serial port driver for the Media5200b, you can modify the one for the Lite5200B. You need to change the interrupt number to 68 and use PSC6, GPIO6. - The test-case code for the mini serial port drivers on the Renesas Biscayne and TI OSK5912 boards have the baud rate set to 14400. (Ref# 40570) Workaround: You need change the baud rate to the correct one below: - Renesas Biscayne - int baud=57600; - TI OSK5912 - int baud=115200; - In the current mini serial port driver test case for the Renesas Biscayne, when the real serial driver attaches to the interrupt, it disables Rx and Tx interrupts. It might lose some data between these two stages. (Ref# 40514). Known issues: I/O devices - io-usb - In some cases when unmounting DLLs and running the usb utility at the same time, some memory allocated by io-usb isn't freed. This is a rare situation. (Ref# 21716) - The io-usb server crashes if you repeatedly mount and umount and plug and unplug devices attached to the port. (Ref# 21556) - io-usb might crash with a SIGSEGV when you shut down the system and you don't have any USB devices inserted. (Ref# 29495) Known issues: Licensing - If a disk is full (on any host) when you execute QNX Momentics-licensed components, you might not get a meaningful message to alert you of the problem. Instead, you may be requested to type in your license key (but doing so doesn't rectify the issue). Check your disk and free up some space if necessary. We'll provide a clearer message in a future release. (Ref# 21116) Known issues: Multimedia - soundfile.so, soundfile_noph.so - Loading these legacy plugins causes a SIGBUS error. (Ref# 21707) Workaround: Use the -ae option to procnto to enable alignment fault emulation. - Multimedia TDK 1.0.1 - QNX SDP 6.4.1 doesn't support Multimedia v1.x. 
Applications that depend on Multimedia v1.x will no longer resolve symbols or execute under 6.4. For multimedia support, you need to install the QNX Aviage Multimedia Suite. (Ref# 55967) Known issues: Networking - devnp-* - Native io-pkt and ported BSD drivers don't put entries into the /dev/io-net namespace, so a waitfor command for such an entry won't work properly in buildfiles or scripts. (Ref# 67527) Workaround: Use the if_up -p command instead. For example, instead of waitfor /dev/io-net/en0, use if_up -p en0. - When using the m_pkthdr_csum_data member in network driver code, make sure that you use only the bottom 16 bits. The top 16 bits may contain undefined data. (Ref# 44622) - ifconfig - The commands: ifconfig iface_name scan ifconfig iface_name up don't work individually for WiFi drivers. (Ref# 61246) Workaround: Combine the commands: ifconfig iface_name up scan - lsm-qnet.so - The only supported bind= options for Qnet are bind=ethernet_interface and bind=ip. Other values for bind=X are still accepted (that is, no error is given), but Qnet may not work with them if the specified ethernet_interface doesn't appear. (Ref# 58234) - Qnet currently expects all packets to be received and sent as a single contiguous buffer. This can be a problem if you're using jumbo packets. (Ref# 47828) - This module has many options to alter how Qnet functions (e.g. timeouts, retries, and idle times). You shouldn't use these options unless you're trying to overcome an issue related to your environment. Qnet is optimized to function with its default settings. (Ref# 21298) - Once Qnet has a domain, you can't set Qnet to not use a domain; you can only change the domain. (Ref# 38802) - Qnet treats the _CS_DOMAIN configuration string differently if it's undefined or set to a NULL string. If it's undefined, Qnet uses a domain of .net.intra; if it's set to a NULL string, Qnet applies that as the domain (for example hostname.). (Ref# 19676) - fs-cifs - The behavior of pwrite() isn't consistent with that on other Neutrino filesystems. If you open a file with O_APPEND, the offset supplied to pwrite() doesn't override O_APPEND. The data is written to the end of the file, regardless of the file offset supplied. (Ref# 38576) - If you unlink() a file on a CIFS mount point, any open file descriptors for that file become invalid. (Ref# 38574) - fs-cifs doesn't support POSIX file-locking functions. (Ref# 38570) - fs-cifs incorrectly sets an errno of EPERM instead of EBADF if you attempt to write to a file opened as O_RDONLY or O_ACCMODE. (Ref# 38565) - If a component of a pathname supplied to a function isn't a directory, fs-cifs should return ENOTDIR. It currently returns ENOENT. (Ref# 38564) - PATH_MAX for CIFS (and thus fs-cifs) isn't 1024 as in POSIX. This is set by both Windows and the CIFS specification. The pathname length can be up to 255 characters. (Ref# 38566) - fs-nfs2 - fs-nfs2 doesn't support files larger than 2 GB. (Ref# 39060) - fs-nfs2 doesn't correctly implement the options -w size=n and -w number=n as described in the fs-nfs2 usage message. Don't use them. (Ref# 39031) - If a path ends in a slash, it must be a directory. When accessing a link with a trailing slash, fs-nfs2 immediately returns EINVAL, instead of resolving the link and reporting errors such as EPERM (permission denied) or ENOTDIR (not a directory) before returning EINVAL (invalid argument). This behavior was seen as an optimization to reduce network traffic, because this kind of file access will ultimately fail.
Strict POSIX behavior will be added in a future release. (Ref# 20877) - fs-nfs2 doesn't support a -B option greater than 8096. (Ref# 39022) - fs-nfs2, fs-nfs3 - fs-nfs2 lets you modify the on-disk binary file of an executable that is executing. It should return an error with errno set to EBUSY. (Ref# 38563) - The NFS clients don't distinguish between a pathname ending or not ending in / when passed as the argument to mkdir(). (Ref# 38484) - NFS is a connectionless protocol. If a server stops responding to the NFS client, it continues to try to reach the server to complete an operation until the server becomes available, or the user stops the operation. While the fs-nfs2 and fs-nfs3 clients are trying to reach the server, NFS operations are blocked until they're successful. This isn't an issue if the client is talking only to one server, but if an fs-nfs2 process has mounted multiple servers, the blocked operation also block the client's ability to talk to the other servers. (Ref# 39084) Workaround: Start separate client (fs-nfs2, fs-nfs3) processes for each server you wish to mount. - gns - A gns daemon can't act as both a client and server at the same time. If a local service is registered with a GNS client, the client can forward that information to redundant or backup servers; a server can't forward the information. (Ref# 21037) - Currently, GNS (name_attach()) isn't compatible with the resource manager framework. (Ref# 20062) Workaround: Your resource manager must handle the raw QNX messages until this is corrected. - io-pkt* - The stack might send zero-length mbufs to a driver for transmission. (Ref# 44621) Workaround: Drivers must accommodate for this by checking the length of the data in the mbuf and ignoring the mbuf if the length is zero. - The way in which the SIOCGIFCONF ioctl() command was used in our io-net code was incorrect but it worked. We've changed the implementation, but applications that use the old method will no longer work. For more information, see the Migrating from io-net appendix of the Core Networking User's Guide. (Ref# 58035) - nfsd - nfsd lets you access files only up to 16 subdirectory levels deep within the directory exported in the /etc/exports file. Deeper directory levels and files aren't accessible. (Ref# 40104) - pppd - If you use pppd with a serial port, io-pkt may become reply blocked. (Ref# 50977) Workaround: Disable hardware flow control by clearing ihflow and ohflow: stty -ihflow -ohflow < /dev/ser1 - PPPOE - The LCP timeout has decreased from 3 seconds in io-net to 1 second in io-pkt, so some connections might get dropped. (Ref# 54799) Workaround: You can increase the timeout to 3 seconds by using pppoectl: pppoectl pppeo0 lcp-timeout=3000 - SSH suite - The /var/chroot/sshd directory should be owned by root:root with permissions of 0755, not 0775. (Ref# 68628, 68640) Workaround: Log in as root, and then correct the permissions: chmod 0755 /var/chroot/sshd/ - TCP/IP v4 (now part of io-pkt) - If a packet is smaller than the minimum Ethernet packet size, the packet may be padded with random data, rather than zeroes. (Ref# 21460) - The TCP/IP stack doesn't maintain the statistics for outbound packets over VLAN interfaces. (Ref# 16684) - The TCP/IP stack doesn't maintain the statistics for the number of input and output bytes or packets if the packets are forwarded via the fast-forward feature. (Ref# 23041) - The TCP/IP stack doesn't maintain proper interface statistics for the link speed. 
(Ref# 27015) - If the default UDP socket receive-buffer size is set near its limit (for example sysctl -w net.inet.udp.recvspace=240000), UDP-based sockets become unreliable. (Ref# 27386) - TCP/IP v4 and v 6 (now part of io-pkt) - If a packet is smaller than the minimum Ethernet packet size, the packet may be padded with random data, rather than zeroes. (Ref# 21460) Known issues: Graphics - pwmopts - The Using the Photon microGUI chapter of the Neutrino User's Guide and the entry for pwmopts in the Utilities Reference refer to wframe_update.so, but the name is actually wframe_updated.so (with a second “d”). (Ref# 65310) - Arcs, Bézier curves, and ellipses - The plotting of these curves isn't very precise, resulting in lines that aren't very smooth. The default line joint is a miter, which doesn't produce good results for these curves, and — contrary to what the documentation says — the setting of the line joint is ignored if you use stroked arcs or stroked ellipses. Using PgSetStrokeJoin(Pg_BEVEL_JOIN) may give smoother results once this issue is resolved. (Ref# 58185, 58277) - libimg - An internal structure exchanged between libimg and the image codecs changed after we released version 2.0 of the Advanced Graphics TDK. Applications compiled and linked statically against the earlier versions of imglib.a might not function properly, especially while decoding JPEG images. (Ref# 48003, 56557) Workaround: Relink the applications against the newer version of imglib.a. - Blitting - The Advanced Graphics libffb library doesn't support blitting to a PAL8 surface from a non-PAL8 surface. Therefore, Pg_IMAGE_PALETTE_BYTE offscreen contexts can be blitted to and from only offscreen contexts of the same type. (Ref# 20391) - Dual-head displays - On systems with a dual-head display and screens set up to display different portions of the logical desktop, some convenience functions — such as PtFileSelection() and PtNotice() — are always constrained to the first screen. (Ref# 59614) - phs-to-pcl - This print filter can have a segmentation fault when running with the -Q2 quality setting. (Ref# 60885) - We don't currently ship phs-to-pcl for PPCBE; it will be provided in a forthcoming release. (Ref# 61378) - Starting Photon - When you start Photon, you might see a “Cannot attach mouse input report (error code2)” message on the console. It's a benign message that you can ignore. (Ref# 29662) - PhAB - If you can't find the icon for a minimized module, use the Show Module Tree item from the Window menu to locate it. (Ref# 60529) - The template editor currently lets you delete or rename the widget template, but doesn't provide a way to restore the default value. (Ref# 21969) - If you're using bash as your shell in Neutrino, PhAB doesn't populate the list of targets when you try to build an application. (Ref# 22850) - PhAB for Windows - Applications created by one user can't be built another user. (Ref# 61333) Workaround: Use Windows Security Properties to set the file permissions to give other users access. - If you start PhAB through the IDE on a Windows XP host, several security-alert dialogs are displayed, because the Windows XP firewall detects the background TCP/IP communication between the PhAB application and the Photon server. (Ref# 22282) Workaround: Configure Windows to unblock. Once you've done this, Windows won't display the security warnings when you restart PhAB. - If you try to copy and paste something in PhAB when running it as a nonadministrator user, you get an error message:. 
- You can't save a PhAB project in a path that contains spaces (e.g C:\Documents and Settings\some_user\my_phab_app). (Ref# 39883) - $HOME/.ph/phapps - Note that this file, which contains a list of the applications that you want Photon to launch automatically when it starts, must be executable. (Ref# 22196) - helpviewer - If you aren't using ksh or sh as your login shell, the environment variables that the helpviewer uses aren't initialized. (Ref# 27250) Workaround: Set QNX_HELP_HOME_PAGE to /usr/qnx641/target/qnx6/usr/help/product/momentics/bookset.html, and QNX_HELP_PATH to /usr/qnx641/target/qnx6/usr/help/product (assuming you installed QNX Momentics in the default location). - The helpviewer might not display all the images in a document. (Ref# 57766) Workaround: Click the Options button, click the Others tab, and then increase the size of the image cache. Alternatively, you can view the documentation in a web browser. - PgSetLayerArg() - Pg_LAYER_ARG_EDGE_MODE (for indicating how a layer should behave if the source viewport is larger than the extent of the source data) isn't currently implemented. (Ref# 52431) Known issues: Runtime kit - The sample 641-complete.txt file lists the files associated with the Mozilla Firefox browser under the relative directory pathname opt/mozilla, but they should be under the absolute path /opt/mozilla. (Ref# 68311) For information about creating a runtime QNX Neutrino system, see the How to create a Runtime Kit from the QNX Software Development Platform technote in the installed documentation. Known issues: System Analysis Toolkit - If you run tracelogger in direct map mode on an ARM LE system, you might encounter some increased latency in your tasks. (Ref# 51550) Workaround: Use nondirect map mode instead. Known issues: Host-specific QNX Neutrino self-hosted - Installing on USB drives - If you're installing directly to a USB drive, note the following: - When the installer asks you to choose the type of the partition, choose one of the QNX 4 types (t77, 78, or 79). The Power-Safe (fs-qnx6.so) filesystem can't guarantee that the filesystem is power-safe on devices that don't support synchronizing. - If the system doesn't boot automatically, replace the partition boot loader; from a working system, run: dloader /dev/part pc2 where part is the device name of the partition you need to boot. (Ref# 61707) - /x86 symbolic link - Self-hosted Neutrino systems are missing the /x86 symbolic link to the / directory. (Ref# 68593) Workaround: Log in as root, and then do the following: cd / ln -s . x86 Windows hosts - devg-vmware.so - VirtualPC and VMware require a Windows session that's operating with 32-bit graphics. (Ref# 60669, 68050) - IDE - The following issues apply to the Windows-hosted version of the IDE: - You can't do rename, delete, or move operations when the System Profiler editor is open. The editor maintains an open file pointer to the log file that it's working with; as long as that file is open, under Windows FAT32 filesystems, no modification can occur. (Ref# 28561) - If the IDE window spans two monitors, and you lock and then unlock your computer, the window is restored to be the size of one monitor. This is a general problem on Windows. (Ref# 28653) - When you do a build, stdout and stderr sometimes overlap, resulting in misleading error and warning messages. This is a general problem on Windows. 
(Ref# 15106) - Vista and Phindows or PhAB - On Vista, Phindows and PhAB seem to interfere with the gadget toolbar; the sidebar flickers and appears and disappears very rapidly and often, slowing down the system. (Ref# 62277) Workaround: This may be related to Phindows and PhAB's use of Direct3D double buffering, which is redundant when the Vista Aero compositing is enabled. To disable double buffering: - For Phindows, use the Double Buffering Method menu option in the Connect dialog, or pass the -d0 command-line option. - For PhAB, set the PHINDOWSOPTS environment variable to -d0. - Documentation - On Linux and Windows, you might need to install some international fonts in order to display the Multilingual Input documentation. (Ref# 61950) - echo.exe - On Windows, the QNX-provided echo.exe interprets the Windows \ separator as an escape character. As a result, environment variable settings won't work if you use \ as a path separator; use / instead. (Ref# 19924) - MAKEFLAGS - Microsoft Visual Studio also uses the MAKEFLAGS environment variable, but in a much different way than QNX Neutrino does. The result is that Microsoft Visual Studio no longer works after you've installed QNX Momentics. Workaround: If you want to work with Microsoft Visual Studio (MSVS) after installing QNX Momentics on the same system, do the following: - Open a command window and run cmd. - Type set. - Find the value for MAKEFLAGS and save it. - Type set MAKEFLAGS. - Do your MSVS work. - To work with QNX Momentics again, type: set MAKEFLAGS=saved_makeflags_value - sh, ksh - Because of the way that the MSYS versions of the shell manipulate the environment variable, PATH doesn't appear to include $QNX_HOST/usr/bin, but it does. (Ref# 59412) - User Account Control (UAC) - Windows Vista includes a new account policy, called User Account Control (UAC), that will impact various administrative features, such as being able to create and store files in a temporary directory (tmp). This directory is used by various applications, such as CVS and the split command, and various Photon applications. To successfully run these applications on a computer running Windows Vista, you must have administrator privileges and disable UAC. (Ref# 44027) Workaround: To disable UAC on your Windows Vista configuration: - Launch MSCONFIG by from the Run menu. (When you click on the launch button — the one with the windows logo commonly in the bottom left corner — the Run menu is an editable text bar with the string Start Search. Type msconfig and then press Enter.) - Click on the Tools tab. Scroll down until you find Disable UAC, and then click that line. - Press the Launch button. - A cmd window will open. When the command is done, you can close the window. - Close msconfig, and then restart your computer. You can reenable User Access Control by selecting the Enable UAC line and then clicking the Launch button. - BSPs - For Windows XP, the location that the BSP file for the IDE installs into is $QNX_CONFIGURATION/qconfig_directory, and it is write-protected for a default user. Consequently, subsequent updates to BSPs aren't permitted in this location because of permission errors under Windows Vista configurations if the user doesn't have administrative permissions. (Ref# 44668) Workaround: Modify your user permissions. For instructions about changing these permissions, see the steps in the workaround for the problem (Ref# 44027) above. 
- QNX utilities - The Windows installation includes various executables that have the same name as some QNX utilities, such as find, sort, and split. By default, Windows places the path to the Windows executables at the beginning of the Windows PATH environment variable, and the QNX executables appear afterward. This means that when you run these utilities from the command line, instead of using the QNX version, the PATH variable uses the Windows version. (Ref# 44457) Workaround: If you want to use the QNX utilities for find, sort, and split from a command prompt or shell prompt, specify a fully qualified path to any of the QNX executables. - ctags - The current version of ctags is 5.5.4, and the documentation included with QNX doesn't accurately describe the features for this version. (Ref# 44457) Workaround: See the detailed documentation at. - For Windows Vista, you receive the following error messages two times when using ctags because the Windows sort is being used instead of the sort utility included in QNX Momentics: Input file specified ctags: cannot sort tag file : No error The ctags utility still generates tag files; however, they won't be sorted. (Ref# 43530) Workaround: Manually call the QNX sort on the tags file. Linux hosts - Installers - The installers can't update the Gnome menu on some distributions of Linux. (Ref# 48770) - On some distributions of Linux, you might not be able to activate QNX SDP 6.4.1 immediately after installing it. (Ref# 59063) Workaround: Log out and back in again before trying to activate. - On Linux and Windows, you might need to install some international fonts in order to display the Multilingual Input documentation. (Ref# 61950) - Activation - On some distributions of Linux, you might not be able to activate QNX SDP 6.4.1 immediately after installing it. (Ref# 59063) Workaround: Log out and back in again before trying to activate. - On some distributions, the activation dialog doesn't appear automatically. (Ref# 68599) Workaround: Log out and back in again, and then start the activation program manually: /etc/qnx/bin/qnxactivate -a - IDE - The IDE won't run on some versions of Linux, such as Open SUSE 11 and Ubuntu 8.04. This is a bug in Eclipse; see bug 213194 at. (Ref# 66351, 66760) Workaround: Do the following: - On Ubuntu, type: sudo apt-get install xulrunner - Edit $QNX_HOST/usr/qde/eclipse/qde.ini and add this line after the -vmargs line: -Dorg.eclipse.swt.browser.XULRunnerPath=/usr/lib/xulrunner/xulrunner Known issues: Web browsers - Bon Echo - When you download files, the browser says that it saves the files on your desktop, but it actually saves them in your home directory. (Ref# 59830) - For the Send Link... command in the File menu to work (and for mailto: links on web pages in general), you need to set the network.protocol-handler.app.mailto configuration string. To set this string, type about:config in the Bon Echo address bar. You should set this string to be the full path to an executable that will start the desired email program. The first parameter passed is always the mailto: URL. (Ref# 59434) - If you place the Bon Echo window so that the bottom is outside the lower part of the screen, and you then scroll down for example with the wheel, the page isn't refreshed correctly. (Ref# 61837) Getting started with the documentation After you've installed QNX SDP 6.4.1, you'll find an extensive set of online documentation in HTML format. 
You can read it in the Integrated Development Environment's help system on Linux and Windows development hosts; on self-hosted QNX Neutrino systems, you can read it in the Photon helpviewer, or you can use a web browser to display: ${QNX_TARGET}/usr/help/product/momentics/bookset.html This “roadmap” page contains links to the various HTML booksets that accompany the OS (e.g. System Architecture, Programmer's Guide, Library Reference, Utilities Reference, etc.). Technical support To obtain technical support for any QNX product, visit the Support + Services area on our website (). You'll find a wide range of support options, including community forums. List of fixes The problems fixed in QNX SDP 6.4.1 include the following:
http://www.qnx.com/developers/docs/6.4.1/momentics/release_notes/rel_6.4.1.html#New_Momentics
crawl-003
refinedweb
14,108
55.44
I'd like to summarize a few known issues with VS/VWD intellisense and respective solutions or workarounds. Symptom: No markup validation and no code intellisense. When you type invalid element or attribute, red squiggly line does not appear. No C#/VB code intellisense or when you add server control, such as asp:button, code intellisense does not pick it up. When you open document outline window (View | Other Windows | Document Outline), it stays empty no matter how long you wait. Possible cause: some application keeps broadcasting messages to all windows breaking idle loop. Visual Studio runs several tasks at idle time. Idle time is when application message queue is empty. If messages keep coming, idle loop may never run. Idle tasks include HTML parsing, validation and document outline building. Code intellisense gets updated partially on idle, partially in a background thread. We have identified at least one application that breaks idle loop: PowerDVD Launcher. I have also seen reports that Visual Studio idle loop was not running correctly in XP on MacBook Pro under Apple Boot Camp, at least in early versions. According to this report, some video drivers may also exhibit the problem. New Microsoft Laser Mouse 6000 may be causing the problem as well. Solution: try disabling resident applications one by one until the problem goes away. This way you can figure out which app breaks the idle loop. Try upgrading video driver. Try switching to a different mouse, preferably wired and see if the issue goes away. Symptom: No intellisense for Atlas/Ajax controls. Markup inside UpdatePanel may get mangled after switch from Design view. Cause: change of Microsoft Ajax namespace from atlas to asp. Change in ASP.NET Ajax assembly location. Solution: See here: Symptom: You keep getting error "__o is not declared" in the Error list Solution: see here: Symptom: intellisense errors when using "</script>" string in C# or VB script block. Solution: see here Symptom: validation errors in ASP.NET controls in content page. Errors go away if you keep master page open. Solution: install Service Pack 1 or keep master page open.
http://blogs.msdn.com/b/mikhailarkhipov/archive/2006/12/25/repairing-visual-web-developer-intellisense-issues.aspx
CC-MAIN-2014-41
refinedweb
350
67.96
Ketil Malde wrote: > On Sun, 2007-07-01 at 12:45 +0200, apfelmus wrote: > >> Here's an (admittedly crazy) approach > > Why is it so crazy? The orthogonality issues with the different ways of > breaking up lists (split/break/span/take/drop), and the multitude of > possible predicates (either too complicated, or too specific) has always > been an annoyance to me. I thought your solution was quite nice! One problem is that you have to use drop (first 2) instead of drop 2 now. This can be remedied with some type-class hackery. Another problem is that performance will suffer a bit with the general approach. So, the specialized versions are likely to be kept around anyway. (Type classes can help with specialization, too.) Other than that, the general approach to drop & friends is not so crazy. But I don't like my implementation, so let's build a better one: One problem of the implementation is that I think it doesn't handle nicely the different semantics of dropPrefix compared to drop or dropWhile : whereas the latter don't fail on a premature end of the list, the dropPrefix version should fail. (The question whether it should fail with an error or with Nothing can be delegated by providing different variants of drop). The solution comes automatically when pondering what a Dropper really is: it's a *parser*. In other words, drop & friends are just functions that parse the beginning of a string and return how much has been parsed. Put differently, their feature is to ignore the "AST" resulting from a parse. type Dropper a = Parser a () -- token type a, result type () Here, I don't mean the usual (s -> (a,s)) parsers, but an implementation that fits the stream-like nature of our dropper: either a determinstic data Parser c r = Get (c -> Dropper c r) | Result r | Fail or a non-deterministic parser data Parser c a = Get (c -> Dropper c r) | Result r (Dropper c r) | Fail The latter are, of course, Koen Classen's parallel parsing processes (). Now, which ones to choose? With deterministic parsers, we loose the normal behavior of drop and dropWhile to accept lists that are too small. Thus, we choose non-deterministic parsers and implement drop with a "maximum munch" behavior -- drop as much as we can parse, but not more drop :: Dropper a -> [a] -> [a] drop p xs = case drop' p xs of Nothing -> error "drop: parse failed" Just xs -> xs where drop' Fail _ = Nothing drop' (Result _ p) xs = drop' p xs `mplus` Just xs drop' (Get f) (x:xs) = drop' (f x) xs drop' (Get _) [] = Nothing Here, the second equation of drop tries to drop more but jumps back via Maybe's `mplus` if that fails. With the usual Monad and MonadPlus instances for Parser c a, we can now write -- take while the condition is satisfied while :: (a -> Bool) -> Dropper a while = many' . satisfy where many' p = return () `mplus` p >> many' -- accept the first n characters or less first :: Int -> Dropper a first 0 = return () first n = return () `mplus` (get >> first (n-1)) -- parse a given String prefix :: Eq a => [a] -> Dropper a prefix [] = eaten prefix (x:xs) = get >>= \c -> if c == x then prefix xs else mzero By returning successes early, while and first accept an unexpected end of input. An alternative version of first that complains when not enough characters are available to drop would be exactly :: Int -> Dropper a exactly 0 = return () exactly n = get >> exactly (n-1) or exactly n = sequence_ (replicate n get) Regards, apfelmus
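To make the intended call style concrete, here are a few illustrative uses of the combinators defined in the post (assuming its Parser/Dropper machinery, a suitable satisfy, and the maximum-munch drop are in scope; the expected results are our reading of the semantics, not output quoted from the thread):

-- import Prelude hiding (drop)   -- the generalized drop shadows the Prelude one
example1 = drop (first 2)        "haskell"   -- expected: "skell"
example2 = drop (while (== ' ')) "  indent"  -- expected: "indent"
example3 = drop (prefix "foo")   "foobar"    -- expected: "bar"
example4 = drop (exactly 3)      "ab"        -- expected: error "drop: parse failed"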
http://www.haskell.org/pipermail/libraries/2007-July/007714.html
CC-MAIN-2014-35
refinedweb
595
60.89
I think that we may need a thread local variable to handle parallel. This would mean some deep messing with the Property handling. Peter On Friday 17 October 2003 17:57, Jose Alberto Fernandez wrote: > > From: peter reilly [mailto:[email protected]] > > > > I would rather have Jose's idea of a <local-property/> > > task. > > > > This could be used outside of macrodef. > > > > The only problem is the implementation. > > Indeed, there is an easy implementation but will not solve the > case of <parallel>, because the local definition would really be a > temporary global one: > > public class LocalProperty extends Sequential { > private String property; > private String oldValue; > > public setName(String i_property){property = i_property;} > > public void execute() { > if (property == null) throw new BuildException("name attribute is > mandatory"); > try { > oldValue = getProject().getProperty(property); > getProject().setProperty(property, null); // This may need changes > to core > super.execute(); > } > finally { > // This is using the deprecated setProperty method > // that actually changes the properties even if set > getProject().setProperty(property, oldValue); > } > } > } > > Here we just change the property value on the project frame, for the > duration > of the task. And put the old value back before we leave. > > The problem with this simple implementation is that all the parallel > branches > will see the change, which is exactly what we were trying to avoid. To > do it > right, we would need to create a new execution frame that would be use > in the > "super" call. > > But if we do that (which is like what <ant> or <antcall> do), what > happens > if the user defines properties other than the local-property inside the > code? > Somehow, we would need to find them and propagate them back to the frame > above > upon exit. > > <local-property > <property name="y" value="myY"/> > <local-property> > <echo message="${y}"/> > > [echo] myY > > Doable, but not that easy anymore. > > What do you guys think? > > Jose Alberto > > > Peter > > > > On Friday 17 October 2003 17:02, Shatzer, Larry wrote: > > > Maybe allow <attribute> have another <attribute> that > > > > allows it to be > > > > > undeclared with default or passed in, so we can set it inside the > > > macrodef, such as "newcurrent". > > > > > > Example: > > > > > > <macrodef name="recursive"> > > > <attribute name="until"/> > > > <attribute name="current"/> > > > <attribute name="method"/> > > > <attribute name="newcurrent" staticscope="true"/> > > > ... > > > </macrodef> > > > > > > Then when you call "recursive" such as this: <recursive until="10" > > > it won't die that you did not pass > > > "newcurrent". > > > > > > the "staticscope" attribute name could be something else, or > > > scope="static" and a list of others that could be used. > > > > > > -- Larry > > > > > > > -----Original Message----- > > > > From: peter reilly [mailto:[email protected]] > > > > Sent: Friday, October 17, 2003 12:43 AM > > > > To: Ant Developers List > > > > Subject: Re: Macrodef and parallel in a recursive situation > > > > > > > > > > > > The parellel would cause grief. > > > > The problem is not the attribute, I think > > > > but the "newcurrent" global "variable". > > > > > > > > On using macrodef, I have noticed that > > > > it would be really cool to have a static scopped > > > > variable. > > > > > > > > Peter > > > > --------------------------------------------------------------------- > > > > >
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200310.mbox/%[email protected]%3E
CC-MAIN-2021-17
refinedweb
472
53.81
src/test/couette-gotm.c Turbulent Couette flow See the GOTM web site and section 12.1.1 of the GOTM manual. “The Couette scenario is the most basic of all GOTM scenarios. It represents a shallow (10 m deep), unstratified layer of fluid above a flat bottom that is driven by a constant surface stress in the x-direction. Earth’s rotation is ignored. This flow is often referred to as turbulent Couette flow. After the onset of the surface stress, a thin turbulent near-surface layer is generated that rapidly entrains into the non-turbulent deeper parts of the water column. The solution at the end of the simulation, when the problem has become fully stationary, is shown in the figure below.” Results set term SVG size 600,300 set xlabel 'u (m/s)' set ylabel 'z (m)' plot [0:1][-10:0]'log' u 2:1 w l t '' lw 2 set xlabel 'ν_t (m^2/s)' plot [0:0.05][-10:0]'log' u 3:1 w l t '' lw 2 References #include "grid/multigrid1D.h" #include "layered/hydro.h" #include "layered/gotm.h" int main() { G = 9.81; N = 1; nl = 100; DT = 20; size (100e3); periodic (right); We use the k-\epsilon model. turbulence_turb_method = turbulence_first_order; turbulence_tke_method = turbulence_tke_keps; turbulence_len_scale_method = turbulence_generic_eq; The bottom roughness leng scale (of GOTM) needs to be adjusted. meanflow_z0s_min = 0.003; meanflow_h0b = 0.1; The surface wind stress is constant. Initial conditions Ten metre deep, constant layer thicknesses. Outputs 24 hours is enough to reach a stationary profile. The turbulent diffusivity \nu_t computed by GOTM is accessed through the C interface of the corresponding Fortran field.
http://basilisk.fr/src/test/couette-gotm.c
CC-MAIN-2020-24
refinedweb
274
59.3
trace_vnlogf() Insert a user string trace event, using a variable argument list Synopsis: #include <sys/neutrino.h> #include <sys/trace.h> int trace_vnlogf( int code, int max, const char *fmt, va_list arglist ); Arguments: - code - The event code, which must be in the range from _NTO_TRACE_USERFIRST through _NTO_TRACE_USERLAST. - max - The maximum length of the string to include in the event, in bytes, or 0 for no limit. - fmt - A printf()-style formatting string. - arglist - A variable argument list of the items required by the fmt string. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The trace_vnlogf() function inserts a user string trace event, formatting the string from fmt and the arguments in arglist.
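Since trace_vnlogf() takes a va_list, its typical use is inside your own variadic wrapper. A minimal sketch (the wrapper name and the choice of event code are ours, not part of the QNX API):

#include <stdarg.h>
#include <sys/neutrino.h>
#include <sys/trace.h>

/* Hypothetical wrapper: emit a formatted user trace event. */
static int my_trace(const char *fmt, ...)
{
    va_list ap;
    int rc;

    va_start(ap, fmt);
    /* First user event code; 0 means no limit on the formatted string length. */
    rc = trace_vnlogf(_NTO_TRACE_USERFIRST, 0, fmt, ap);
    va_end(ap);
    return rc;
}

/* Example call: my_trace("frame %d took %d us", frame, usec); */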
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/t/trace_vnlogf.html
CC-MAIN-2020-34
refinedweb
108
68.16
Resource File Generator (Resgen.exe) files. Resgen.exe converts files by wrapping the methods implemented by the following four classes: ResourceReader Class Reads a .resources file. ResourceWriter Class Creates a .resources file from specified resources. ResXResourceReader Class Reads a .resx file. ResXResourceWriter Class Creates a .resx file from specified resources. Note that a .resx file created by the ResXResourceWriter class cannot be used directly by a .NET Framework application. Before adding this file to your application, run it through Resgen.exe to convert it to a .resources file. For more information about implementing these classes in your code, see their respective reference topics. In order for Resgen.exe to be able to parse your input, it is critical that your text and .resx files follow the correct format. Text (.txt or .restext) files can only contain string resources. String resources are useful if you are writing an application that must have strings translated into several languages. For example, you can easily regionalize menu strings by using the appropriate string resource. Resgen.exe reads text files containing name/value pairs, where the name is a string that describes the resource and the value is the resource string itself. You must specify each name/value pair on a separate line as follows: Note that empty strings are permitted in text files. For example: A text file must be saved with UTF-8 or Unicode encoding unless it contains only characters in the plain Roman alphabet, without diacritical marks such as the cedilla, umlaut, and tilde. For example, Resgen.exe removes extended ANSI characters when it processes a text file that does not have UTF-8 or Unicode encoding. Resgen.exe checks the text file for duplicate resource names. If the text file contains duplicate resource names, Resgen.exe will emit a warning and ignore the duplicate names. For more details on the text file format, see Resources in Text File Format. actually see the binary form of an embedded object (a picture for example) when this binary information is a part of the resource manifest. Just as with text files, you can open a .resx file with a text editor (such as Notepad or Microsoft Word) and write, parse, and manipulate the contents. Note that in order to do this, a good knowledge of XML tags and the .resx file structure is required. For more details on the .resx file format, see Resources in .Resx File Format. In order to create a .resources file containing embedded nonstring objects, you must either use Resgen.exe to convert a .resx file containing objects or add the object resources to your file directly from code, using the methods provided by the ResourceWriter Class. If you use Resgen.exe to convert a .resources file containing objects. The .NET Framework version 2.0 supports strongly typed resources. Strongly typed resource support encapsulates access to resources by creating classes that contain a set of static read-only (get) properties, thus providing an alternative way to consume resources, rather than using the methods of the ResourceManager class directly. The basic functionality is provided by the /str command line Resource File Generator (Resgen.exe) tool allows you to create .resources files as well as strongly typed wrappers to access those .resources files. When you create a strongly typed wrapper, the name of your .resources file must match the namespace and class name (for example, MyNamespace.MyClass.resources) of the generated code. 
However, the Resource File Generator (Resgen.exe) tool allows you to specify options that produce a .resources file with an incompatible name. To work around this behavior, rename incompatibly named output files after the Resource File Generator (Resgen.exe) tool generates them. When you are finished creating .resources files with Resgen.exe, use the Assembly Linker (Al.exe) to either embed the resources in a runtime binary executable or compile them into satellite assemblies. The following command, with no options specified, displays the command syntax and options for Resgen.exe. The following command reads the name/value pairs in myResources.txt and writes a binary resources file named myResources.resources. Because the output file name is not specified explicitly, it receives the same name as the input file by default. The following command reads the name/value pairs in myResources.restext and writes a binary resources file named yourResources.resources. The following command reads an XML-based input file myResources.resx and writes a binary resources file named myResources.resources. The following command reads a binary resources file myResources.resources and writes an XML-based output file named myResources.resx. The following.
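The concrete command lines this passage refers to did not survive extraction; for the text-file format, the source means literally one name=value pair per line (for example Greeting=Hello, and a line such as EmptyMessage= for an empty string). A hedged reconstruction of the commands described above (file names follow the text; the output name defaults to the input name when omitted, and the exact option syntax should be checked against the Resgen.exe reference):

resgen myResources.txt
resgen myResources.restext yourResources.resources
resgen myResources.resx myResources.resources
resgen myResources.resources myResources.resx
resgen /str:csharp MyNamespace.MyClass.resx    (strongly typed wrapper; see the /str option documentation for the full argument list)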
http://msdn.microsoft.com/en-us/library/8ef159de-b660-4bec-9213-c3fbc4d1c6f4(v=vs.90)
CC-MAIN-2014-10
refinedweb
763
61.12
I have a csv file ready to load into my python code, however, I want to load it into the following format: data = [[A,B,C,D], [A,B,C,D], [A,B,C,D], ] data = np.array(data) The simplest solution is just: import numpy as np data = np.loadtxt("myfile.csv") As long as the data is convertible into float and has an equal number of columns on each row, this works. If the data is not convertible into float in some column, you may write your own converters for it. Please see the numpy.loadtxt documentation. It is really very flexible.
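One caveat worth adding to the answer: np.loadtxt splits on whitespace by default, so a genuinely comma-separated file needs an explicit delimiter. A short sketch (the file name and four-column layout are taken from the question and are otherwise hypothetical):

import numpy as np

# Comma-separated values, four columns per row as in the question.
data = np.loadtxt("myfile.csv", delimiter=",")

# If some rows are messy (header line, missing fields), np.genfromtxt is more
# forgiving: it can skip the header and fill missing entries with NaN.
data = np.genfromtxt("myfile.csv", delimiter=",", skip_header=1)

print(data.shape)   # (n_rows, 4)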
https://codedump.io/share/LShrIkHgOPOG/1/how-to-load-a-csv-into-ipython-notebook
CC-MAIN-2017-04
refinedweb
104
64.91
Security Hardening Hi, Looking for tips and references in order to collect a set of good practices to secure our LoPy devices and code (our devices will end up being installed in the public domain) Whilst we are aware there is only so much we can do to protect them when the attacker has physical access, it seems there isn't much in terms of documented best practices to achieve this. Perhaps if we can collect them here we can get it up in a wiki page at docs.pycom.io The key areas I am interested in are as follows: WiFi, changing the default WPA password, hidden network option ? Telnet/FTP passwords (and disabling the services) Options to only store/load precompiled python byte code Options to block access to the alternate boot modes. I've finally gotten around to doing a bit more, so I have a utility now on the device we run to "harden" them aside from setting some parameters in our code to disable debugging does two things. pycom.wifi_on_boot(False) (our project doesn't need the wifi radio at all and it better than halves the power consumption on the L01 we found) Then it writes the following into boot.py import os os.dupterm(None) Which disables the REPL Console, leaving only the initial boot messages generated by the ESP. The only easy way back in then is to short the pins on the module to perform a clean boot. - daniel administrators Hello Wayne, Thanks for this useful post. I think we need a set of extra functions to disable the safeboot option after deploying a firmware in production, that way you can avoid people from getting easy access to the device. Adding another function to permanently change the default AP SSID and PW is also a good idea. We'll work on that. Cheers, Daniel
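For the other items on the original list (changing the default AP credentials and disabling or re-crediting the Telnet/FTP server), the kind of code that can go in boot.py on Pycom firmware is sketched below. The exact API should be checked against the Pycom docs; the SSID and passwords shown are placeholders.

import pycom
from network import WLAN, Server

# Radio off at boot if the project doesn't need it (as described above).
pycom.wifi_on_boot(False)

# If the AP is needed instead, at least replace the factory SSID/WPA2 key:
# wlan = WLAN(mode=WLAN.AP, ssid='my-device', auth=(WLAN.WPA2, 'a-long-unique-key'))

# Disable the built-in Telnet/FTP server, or re-enable it with non-default credentials.
server = Server()
server.deinit()
# server.init(login=('admin', 'another-strong-password'), timeout=600)

import os
os.dupterm(None)   # disable the REPL console, as in the post above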
https://forum.pycom.io/topic/2284/security-hardening
CC-MAIN-2018-22
refinedweb
313
64.95
Good day to whoever reads this OS: Linux Mint 10 Julia IDE: Codeblocks 10.05 with GNU GCC as compiler The program is working when in one single file but an error surfaces when it is divided into separate files namely: program.c, mylib.h and create.c I removed other functions so I could experiment with only one but still andandCode: void create(PTR_NODE *HEAD) does not do the trick in the header file..does not do the trick in the header file..Code: void create(PTR_NODE *) Small portion of the program: Any response will be appreciated.Any response will be appreciated.Code: // Code in program.c #include <stdio.h> #include <stdlib.h> #include <string.h> #include <mylib.h> //my user-defined header struct node //this is the struct of a node { int data; struct list *link; }; typedef struct node NODE; //NODE is declared with a format of struct node containing elements data and link typedef NODE *PTR_NODE; //this declaration confuses me... //the declared NODE is used as a type for another declaration but of pointer //PTR_NDOE works fine in declaring the head and tail for the linked list int main() { PTR_NODE head = NULL; //declare and initialize head //Other code create(&head); //in order to create a node, function must know what is head } // Code in mylib.h void create(PTR_NODE *HEAD) //this (pass by reference?) does not work when linked in separate files (program.c, mylib.c and create.c) //error says: expected ')' before '*' token // Code in create.c void create(PTR_NDOE *HEAD){ PTR_NODE local_variable_head; //declare local_variable_head = *HEAD; //copy contents //create a node }
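The "expected ')' before '*' token" error appears because mylib.h uses PTR_NODE before any declaration of it is visible: the struct and typedefs live in program.c, so neither the header nor create.c ever sees them. The usual fix is to move the type definitions into the header, guard it, and include it from every .c file. A sketch, keeping the poster's names (the include-guard name is arbitrary):

/* mylib.h */
#ifndef MYLIB_H
#define MYLIB_H

struct node {
    int data;
    struct node *link;    /* note: struct node, not struct list */
};

typedef struct node NODE;
typedef NODE *PTR_NODE;

void create(PTR_NODE *HEAD);   /* the prototype can now use the typedef */

#endif

/* program.c and create.c then both start with:  #include "mylib.h"  */

Note also that a header sitting in the project directory is normally included as #include "mylib.h" rather than #include <mylib.h>, unless its directory has been added to the compiler's include path.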
http://cboard.cprogramming.com/c-programming/147868-linked-list-function-struct-ptr-parameter-header-file-error-printable-thread.html
CC-MAIN-2015-06
refinedweb
265
67.04
- Original author: brain in new VAT - Article link: from=weixin&utm_ campaign=client_ share&wxshare_ count=1×tamp=1638613017&app=news_ article&utm_ source=weixin&utm_ medium=toutiao_ ios&use_ new_ style=1&req_ id=2021120418165701015015608914163D60&share_ token=01C2E436-7124-40A6-B871-E04C0656CE78&group_ id=7028498316396839432 Monte Carlo simulation (or probabilistic simulation) is a technology used to understand the impact of financial sector risks and uncertainties, project management, costs and other predictive machine learning models. Risk analysis is part of almost every decision we make, because we often face uncertainty, fuzziness and variability in life. In addition, even if we have unprecedented information, we can't accurately predict the future. Monte Carlo simulation enables us to see all possible results and evaluate the risk impact, thus allowing better decision-making under uncertainty. In this article, we will understand the Monte Carlo simulation method through five different examples. The code for this tutorial can be found on Github and Google Colab. 1. Coin flip example The probability of tossing an unbiased coin face up is 1 / 2. However, is there any way we can prove it experimentally? In this example, we will use the Monte Carlo method to simulate 5000 coin tosses to find out why the probability of facing up is always 1 / 2. If we flip this coin many, many times, we can achieve higher accuracy. ## Import library import random import numpy as np import matplotlib.pyplot as plt ## Coin function def coin_filp(): return random.randint(0,1) # Generate 01 random number ## Monte Carlo Simulation prob_list1 = [] # An empty list stores probability values def monte_carlo(n): """ n Is the number of simulations """ results = 0 for i in range(n): flip_result = coin_filp() results =results + flip_result # Calculating probability value prob_value = results/(i+1) # append the probability values to the list prob_list1.append(prob_value) # plot the results plt.axhline(y = 0.5,color = 'red',linestyle='-') plt.xlabel('Iterations') plt.ylabel('Probability') plt.plot(prob_list1) return results/n # calling the function answer = monte_carlo(5000) print('Final value:',answer) Final value: 0.4968 As shown in the figure above, after 5000 iterations, the probability of positive upward is 0.502. Therefore, this is how we use Monte Carlo simulation to find the probability value. 2. Estimates Π (pi) To estimate the value of PI, we need the square and the area of the circle. To find these areas, we will randomly place points on the surface and calculate the points that fall in the circle and the points that fall in the square. This will give us an estimate of their area. Therefore, we will use the point count as the area instead of the size of the actual area. 
## Import library import turtle import random import matplotlib.pyplot as plt import math ## Draw point # to visualize the random points mypen = turtle.Turtle() mypen.hideturtle() mypen.speed(0) # drawing a square mypen.up() mypen.setposition(-100,-100) mypen.down() mypen.fd(200) mypen.left(90) mypen.fd(200) mypen.left(90) mypen.fd(200) mypen.left(90) mypen.fd(200) mypen.left(90) # drawing a circle mypen.up() mypen.setposition(0,-100) mypen.down() mypen.circle(100) # Initialization data # to count the points inside and outside the circle in_circle = 0 out_circle = 0 # to store the values of the PI pi_values = [] # Main function # running for 5 times for i in range(5): for j in range(50): # generate random numbers x = random.randrange(-100,100) y = random.randrange(-100,100) # check if the number lies outside the circle if (x**2+y**2>100**2): mypen.color('black') mypen.up() mypen.goto(x,y) mypen.down() mypen.dot() out_circle = out_circle + 1 else: mypen.color('red') mypen.up() mypen.goto(x,y) mypen.down() mypen.dot() in_circle = in_circle + 1 # calculating the value of PI pi = 4.0 * in_circle/(in_circle + out_circle) # append the values of PI in list pi_values.append(pi) # Calculating the errors avg_pi_errors = [abs(math.pi - pi) for pi in pi_values] # Print the final values of PI for each iterations print(pi_values[-1]) 3.12 2.96 2.8533333333333335 3.0 3.04 # Draw image # Plot the PI values plt.axhline(y=math.pi,color='g',linestyle='-') plt.plot(pi_values) plt.xlabel('Iterations') plt.ylabel('Values of PI') plt.show() plt.axhline(y = 0.0,color = 'g',linestyle='-') plt.plot(avg_pi_errors) plt.xlabel('Iterations') plt.ylabel('Error') plt.show() Monty Hall problem Suppose in a game program, you can choose one of three doors: behind a door is a car; Behind the other doors are goats. You pick a door, for example, door 1. The host knows what is behind the door. Open another door, for example, door 3. There is a goat. The host then asks you: do you want to stick to your choice or choose another door? Is it good for you to change doors? According to the probability, it's good for us to change the door. Let's understand: Initially, the probability of obtaining a car (P) was the same for all three doors (P = 1/3). Now suppose the contestant chooses door 1. Then the host opened the third door. There was a goat behind the door. Next, the host asked the contestants if they wanted to change doors? We'll see why it's better to change doors: In the above figure, we can see that after the host opens door 3, the probability of owning a car at both doors increases to 2 / 3. Now we know that the probability of goat on the third door and car on the second door has increased to 2 / 3. Therefore, it is more advantageous to change the door. 
## Import library import random import matplotlib.pyplot as plt ## Initialization data # 1 = car # 2 = goat doors = ['goat','goat','car'] switch_win_probability = [] stick_win_probability = [] plt.axhline(y = 0.66666,color = 'r',linestyle='-') plt.axhline(y = 0.33333,color = 'g',linestyle='-') <matplotlib.lines.Line2D at 0x205f5578648> import random import matplotlib.pyplot as plt ## Initialization data # 1 = car # 2 = goat doors = ['goat','goat','car'] switch_win_probability = [] stick_win_probability = [] # monte carlo simulation def monte_carlo(n): # calculating switch and stick wins: switch_wins = 0 stick_wins = 0 for i in range(n): # randomly placing the car and goats behind three doors random.shuffle(doors) # contestant's choice k = random.randrange(2) # If the contestant doesn't get car if doors[k] != 'car': switch_wins += 1 else: stick_wins += 1 # updatating the list values switch_win_probability.append(switch_wins/(i+1)) stick_win_probability.append(stick_wins/(i+1)) # plotting the data plt.plot(switch_win_probability) plt.plot(stick_win_probability) # print the probability values print('Winning probability if you always switch',switch_win_probability[-1]) print('Winning probability if you always stick to your original choice:',stick_win_probability[-1]) [the external chain picture transfer fails. The source station may have an anti-theft chain mechanism. It is recommended to save the picture and upload it directly (img-q0oAMCrF-1638621844579)(output_30_0.png)] monte_carlo(1000) plt.axhline(y = 0.66666,color = 'r',linestyle='-') plt.axhline(y = 0.33333,color = 'g',linestyle='-') Winning probability if you always switch 0.655 Winning probability if you always stick to your original choice: 0.345 <matplotlib.lines.Line2D at 0x205f40976c8> 4. Buffon's need problem Buffon, a French aristocrat, issued the following questions in 1777: Suppose we throw a short needle on a piece of scribing paper - how likely is it that the needle just crosses the line? The probability depends on the distance between the lines on the paper (d), the length of the needle we throw (l), or more precisely, the ratio l/d. For this example, we can consider the needle l ≤ D. In short, our purpose is to assume that the needle cannot cross two different lines at the same time. Surprisingly, the answer to this question involves PI. import random import matplotlib.pyplot as plt import math def monte_carlo(runs,needles,n_lenth,b_width): # Empty list to store pi values: pi_values = [] # Horizontal line for actual value of PI plt.axhline(y=math.pi,color = 'r',linestyle='-') # For all runs: for i in range(runs): # Initialize number of hits as 0 nhits = 0 # For all needles for j in range(needles): # We will find the distance from the nearest vertical line # min = 0,max = b_width/2 x = random.uniform(0,b_width/2.0) # The theta value will be from 0 to pi/2 theta = random.uniform(0,math.pi/2) # checking if the needle crosses the line or not xtip = x - (n_lenth/2.0)*math.cos(theta) if xtip < 0 : nhits += 1 # Going with the formula numerator = 2.0 * n_lenth * needles denominator = b_width * nhits # Append the final value of pi pi_values.append((numerator/denominator)) # Final pi value after all iterations print(pi_values[-1]) # Plotting the graph: plt.plot(pi_values) # Total number of runs runs = 100 # Total number of needles needles = 100000 # Length of needle n_lenth = 2 # space between 2 verical lines b_width = 2 # Calling the main function monte_carlo(runs,needles,n_lenth,b_width) 3.153728495513821 5. 
Why do dealers always win? How do casinos make money? The trick is simple - "the more you play, the more the casino makes. Let's see how it works through a simple Monte Carlo simulation example. Considering a hypothetical game, players must choose a chip from a bag of chips. Rules: - There are 1 ~ 100 chips in the bag. - Users can bet on an even or odd number of chips. - In this game, 10 and 11 are special numbers. If we bet on an even number, 10 will count as an odd number, and if we bet on an odd number, 11 will count as an even number. - If we bet on an even number and we get 10, then we lose. - If we bet on an odd number and we get 11, we lose. If we bet on an odd number, our probability of winning is 49 / 100. The dealer's probability of winning is 51 / 100. Therefore, for an odd number bet, the dealer's profit is = 51 / 100 - 49 / 100 = 200 / 10000 = 0.02 = 2% If we bet on an even number, the user's probability of winning is 49 / 100. The dealer's probability of winning is 51 / 100. Therefore, for an even number bet, the dealer's profit is = 51 / 100-49 / 100 = 200 / 10000 = 0.02 = 2% In short, for every $1 bet, $0.02 belongs to the dealer. In contrast, the minimum profit of roulette dealer is 2.5%. Therefore, we are sure that you are more likely to win in a hypothetical game than roulette. import random import matplotlib.pyplot as plt # Place your bet: # User can choose even or odd number choice = input("Don you want to bet on Even number or odd number\n") # For even if choice =='Even': def pickNote(): # get random number between 1-100 note = random.randint(1,100) # check for our game conditions # Notice that 10 isn't considered as even number if note%2 != 0 or note == 10: return False elif note%2 ==0: return True # For odd elif choice =='Odd': def pickNote(): # get random number between 1-100 note = random.randint(1,100) # check for our game conditions # Notice that 10 isn't considered as even number if note%2 == 0 or note == 11: return False elif note%2 == 1: return True Don you want to bet on Even number or odd number Odd # Main function def play(total_money ,bet_money,total_plays): num_of_plays = [] money = [] # start with play number 1 play = 1 for play in range(total_plays): # Win if pickNote(): # Add the money to our funds total_money = total_money + bet_money # append the play number num_of_plays.append(play) # append the new fund amout money.append(total_money) # Lose if pickNote(): # Add the money to our funds total_money = total_money - bet_money # append the play number num_of_plays.append(play) # append the new fund amout money.append(total_money) # Plot the data plt.ylabel('Player Money in $') plt.xlabel('Number of bets') plt.plot(num_of_plays,money) final_funds = [] # Final values after all the iterations final_funds.append(money[-1]) return( final_funds) # Run 10 time # Modify the number of times here for i in range(1000): ending_fund = play(10000,100,50) print(ending_fund) print(sum(ending_fund)) # print the money the player ends with print('The player started with $10,000') print('The player left with $',str(sum(ending_fund)/len(ending_fund))) [10400] 10400 The player started with $10,000 The player left with $ 10400.0
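As a footnote to the pi-estimation example earlier in the article: the same estimate can be written much more compactly (and runs far faster) with vectorized NumPy instead of a per-point loop. A brief sketch:

import numpy as np

n = 1_000_000
x = np.random.uniform(-1.0, 1.0, n)
y = np.random.uniform(-1.0, 1.0, n)

inside = (x**2 + y**2) <= 1.0      # points falling inside the unit circle
pi_estimate = 4.0 * inside.mean()  # circle/square area ratio is pi/4

print(pi_estimate)                 # approaches 3.14159... as n grows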
https://programmer.group/python-based-example-of-monte-carlo-simulation.html
CC-MAIN-2022-27
refinedweb
2,043
56.96
Main This server works with the same clients as seen for the synchronous server, here we deal just with the server. All the work is delegated to the Server class, whose constructor gets a reference to the application ASIO I/O context. namespace ba = boost::asio; // ... Server server(io); io.run();Server The server ctor initializes its own acceptor on the ASIO I/O context on the endpoint specifying the TCP IPv4 protocol and the port chosen, then it calls its private member method accept(): using ba::ip::tcp; // ... const uint16_t ECHO_PORT = 50'014; // ... class Server { private: tcp::acceptor acceptor_; void accept() { acceptor_.async_accept([this](bs::error_code ec, tcp::socket socket) // 1 { if (!ec) { std::make_shared<Session>(std::move(socket))->read(); // 2 } accept(); // 3 }); } public: Server(ba::io_context& io) : acceptor_(io, tcp::endpoint(tcp::v4(), ECHO_PORT)) { accept(); } };1. The handler passed to async_accept() is a lambda that gets as parameters an error code that lets us know if the connection from the client has been accepted correctly, and the socket eventually created to support the connection itself. 2. A beautiful and perplexing line. We create a shared_ptr smart pointer to a Session created from an rvalue reference to the socket received as parameter, and call on it its read() method. However this anonymous variable exits its definition block on the next line, so its life is in danger - better see what is going on in read(). Actually, we are in danger, too. If something weird happens in this session object, we don't have any way to do anything about it. 3. As soon as a Session object is created, a new call to accept() is issued, and so the server puts itself in wait for a new client connection. Session As we have seen just above, we should expect some clever trick from Session, especially in its read() method. Thinking better about it, it is not a big surprise seeing that its superclass is enable_shared_from_this: class Session : public std::enable_shared_from_this<Session> { private: tcp::socket socket_; char data_[MAX_LEN]; // ... public: Session(tcp::socket socket) : socket_(std::move(socket)) {} // 1 void read() // 2 { std::shared_ptr<Session> self{ shared_from_this() }; // 3 socket_.async_read_some(ba::buffer(data_), [self](bs::error_code ec, std::size_t len) { // 4 if (!ec) { self->write(len); } }); } };1. The ctor gets in the socket that we have seen was created by the acceptor and moved in; in its turn, the constructor moves it to its data member. 2. The apparently short-lived Session object created by the handler of async_accept() calls this method. 3. A new shared_ptr is created from this! Actually, being such, it is the same shared_ptr that we have seen in the calling handler, just with its use counter increased. However, our object is still not safe, we need to keep it alive until the complete read-write cycle between client and server is completed. 4. We read asynchronously some bytes from the client. To better see the effect, I have set the size of the data buffer to a silly low value. But the more interesting part here is the handler passed to async_read_some(). Notice that in the capture clause of the lambda we pass self, the shared pointer from this. So our object is safe till the end of the read. So far so good.
Just remember to ensure the object doesn't get invalidated during the writing process: void write(std::size_t len) { std::shared_ptr<Session> self{ shared_from_this() }; ba::async_write(socket_, ba::buffer(data_, len), [self](bs::error_code ec, std::size_t) { if (!ec) { self->read(); } }); }Same as in read(), we ensure "this" stays alive by creating a shared pointer from it and passing it to the async_write() handler. As required, when the read-write cycle terminates, "this" has no more live references. Bye, bye, session. I have pushed my C++ source file to GitHub. And here is the link to the original example from Boost ASIO.
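The post mentions that this server works with the same clients used for the synchronous version; for completeness, here is a minimal synchronous echo client sketch matching the port used above. This is our sketch under that assumption, not the author's original client code.

// Minimal synchronous echo client (assumed counterpart to the server above).
#include <boost/asio.hpp>
#include <iostream>
#include <string>
#include <vector>

namespace ba = boost::asio;
using ba::ip::tcp;

int main()
{
    ba::io_context io;
    tcp::socket socket{ io };
    tcp::resolver resolver{ io };
    ba::connect(socket, resolver.resolve("localhost", "50014"));

    std::string message = "Hello, echo!";
    ba::write(socket, ba::buffer(message));            // send the whole message

    std::vector<char> reply(message.size());
    ba::read(socket, ba::buffer(reply));               // read back exactly as many bytes
    std::cout.write(reply.data(), reply.size());
    std::cout << '\n';
}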
http://thisthread.blogspot.com/2018/03/boost-asio-echo-tcp-asynchronous-server.html
CC-MAIN-2018-43
refinedweb
644
64.41