Dataset columns: text (string, 454 to 608k chars), url (string, 17 to 896 chars), dump (string, 9 to 15 chars), source (string, 1 class), word_count (int64, 101 to 114k), flesch_reading_ease (float64, 50 to 104).
So my assignment is to go a little farther with this mortgage calculator. By the way, thank you for all the help thus far. I am trying to get this right, but can't seem to get the math right. Here is the assignment: Write the program as a procedural C++ program. Calculate and display the mortgage payment amount using the amount of the mortgage, the term of the mortgage, and the interest rate of the mortgage as input by the user. I have the code built and it compiles, but halfway through the list it shows "0", and I don't know where I am messing that up. Can you please help!! Thanks

#include <math.h>
#include <iostream>
using namespace std;

int main()
{
    // defines variables
    double principle = 0.0;
    double interest = 0.0;
    double years = 0.0;
    float total = 0.0;
    char quit;

    do {
        do {
            // user input
            cout << "Enter interest rate:";
            cin >> interest;
            cout << endl;
            if (!cin) {
                cout << "\nInvalid Interest...Please enter valid interest rate ";
                interest = 0;
                cin.clear();
                cin.ignore(512, '\n');
            }
            cout << "You entered: " << interest << "%" << endl;
        } while (interest < 0.001);

        do {
            cout << "Enter Total years: ";
            cin >> years;
            cout << endl;
            if (!cin) {
                cout << "\nInvalid Years...Please enter valid years ";
                years = 0;
                cin.clear();
                cin.ignore(512, '\n');
            }
            cout << "You entered: " << years << " years" << endl;
        } while (years < 0.001);

        do {
            cout << "Enter Loan amount: $";
            cin >> principle;
            cout << endl;
            cout << "\n";
            if (!cin) {
                cout << "\nInvalid loan amount...Please enter valid loan amount ";
                principle = 0;
                cin.clear();
                cin.ignore(512, '\n');
            }
            cout << "You entered: $" << principle << endl;
        } while (principle < 0.001);

        double monthlyInterest = (interest / 100) / 12;
        int term = years * 12;
        total = (principle * monthlyInterest) / (1 - pow(1 + monthlyInterest, -term)); // total for mortgage
        //double total = principle*(interest/1200)/(1-pow(1+(interest/1200),-1*years*12)); // amortization formula
        double loanAmnt = principle;
        cout << "For a $" << loanAmnt << " loan your payment is $" << total << endl << endl;

        double monthlyInt = principle * (interest / 1200);
        double principalPaid = total - monthlyInt;
        double balance = principle - total;
        for (double i = 0; i < years * 12; i++) {
            double monthlyInt = balance * (interest / 1200);
            double principalPaid = total - monthlyInt;
            if (balance < 1.00)
                balance = 0.0;     //CHANGED
            else
                balance -= total;  //CHANGED
            cout << " New loan balance for month " << i + 1 << " is " << balance << endl << endl;
            cout << " Your interest paid for this month is " << monthlyInt << endl << endl;
            cout << " Your Principal paid for this month is " << principalPaid << endl;
            cout << "\n";
            cout << "To continue please press enter button.";
            getchar(); // pauses the program and allows you to view the output
        }

        // user can continue in a loop or quit
        // output
        cout << "\n";
        cout << "To continue press C then enter.\n";
        cout << "If you wish to quit press Q then enter.\n";
        // user input
        cin >> quit;
        cout << "\n";
    } while ((quit != 'q') && (quit != 'Q'));

    cout << "Thank you for trying my simple mortgage calculator\n";
    return 0;
}
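(As an aside, not quoted from the thread: the line computing total is based on the standard fixed-rate amortization formula,

    M = \frac{P \, r}{1 - (1 + r)^{-n}}

where P is the loan principal, r is the monthly interest rate (annual rate / 100 / 12) and n is the number of monthly payments (years * 12).)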
https://www.daniweb.com/programming/software-development/threads/213136/c-mortgage-calculator-frustrasions
CC-MAIN-2017-26
refinedweb
453
58.48
Opened 12 years ago
Closed 11 years ago

#14262 closed New feature (fixed)

Helper for "get_something as varname" template tag pattern

Description

It's a common pattern to write a template tag that fetches some data and sticks it in a context variable. Currently writing these tags requires writing a full-blown parsing function and Node class, which is mostly boilerplate except for the render() method. It would be good to have a template tag helper for writing tags that follow this pattern. (Similar to simple_tag, except allowing for updating the context.)

Attachments (3)

Change History (14)

comment:1 Changed 12 years ago by

comment:2 follow-up: 3 Changed 12 years ago by

I'm keen to write a patch for this, but first I'd like to be sure of the intent with this decorator. The name "assignment_tag" has been suggested in (1). Now which API do you anticipate? Is it one of the following:

    {% my_tag arg1 arg2 %}

    @register.assignment_tag
    def my_tag(arg1, arg2, ...):
        ...
        return {
            'key1': value1,
            'key2': value2,
        }  # These keys/values will be added to the context

or:

    {% my_tag arg1 arg2 as blah %}

    @register.assignment_tag
    def my_tag(arg1, arg2, ...):
        ...
        return value  # This will be set to the variable 'blah' in the context

or else? Also, it might be useful to have access to the context itself (at least in read-only mode) from within that tag. If you agree, then what would the API be? (1)

comment:3 Changed 12 years ago by

    Now which API do you anticipate?

Definitely the latter ({% my_tag arg1 arg2 as blah %}), with the varname explicitly specified in the template. The other encourages a pattern where the keys to update in the context are fixed in the tag, which makes the tag less flexible and the template less readable. Of course it's possible to have those keys passed in, but it makes it more work to do things right and easier to do them wrong.

    Also, it might be useful to have access to the context itself (at least in read-only mode) from within that tag. If you agree, then what would the API be?

Now that both inclusion_tag and simple_tag have takes_context, I see no reason not to carry that consistency to this shortcut as well, and have a takes_context arg that works the same way. I know this leaves room for people to do weird and confusing things where they update the context themselves in the tag code, and then also return something for the automatic assignment_tag update; but on the whole I think the value of consistency with the other shortcuts outweighs that minor concern.

I'd also note that with the #14908 fix, it's now possible to write a simple_tag like this:

    @register.simple_tag(takes_context=True)
    def get_foo(context, _as, varname):
        # ... fetch some data ...
        context[varname] = ...
        return ''

Which achieves the same thing as assignment_tag, although using it in a template requires unsightly quoting of both "as" and the target varname: {% get_foo "as" "bar" %}. I'm still in favor of adding assignment_tag, to make the syntax nicer in the template and make the technique more explicit.

comment:4 Changed 12 years ago by

I totally agree there should be an explicit and designer-friendly way of assigning values, so I too am in favour of introducing this new decorator. My only concern now is that sometimes you may want to assign multiple values after executing some expensive calculations (e.g. heavy database queries), so it would be great if something like the following could be achieved:

    {% get_values arg1 arg2 as var1 var2 %}

    @register.assignment_tag
    def get_values(arg1, arg2):
        ... expensive calculations ...
        return (foo, bar)  # foo would be assigned to 'var1' and bar to 'var2'

If multiple variables were specified after the 'as' then the tag would be expected to return a list or tuple. If only one variable were specified, then whatever is returned by the tag would be assigned to that variable (a single value or a list or whatever). If this idea is rejected, then at least one will still be able to do the following:

    {% get_values arg1 arg2 as var %} {# Tag returns a list #}
    {{ var.0 }} and {{ var.1 }} ...

less explicit but no big deal. What do you think?

comment:5 Changed 12 years ago by

The multiple-assignment version is a poor tradeoff of adding complexity to support an edge case, when just assigning a list (or a dict for better template readability) works just fine. And if you have complex needs, you write a Node. The simple shortcuts are for simple cases.

comment:6 Changed 12 years ago by

comment:7 Changed 11 years ago by

comment:8 Changed 11 years ago by

Changed 11 years ago by

comment:9 Changed 11 years ago by

I've attached a patch which implements this new feature and includes tests and doc. Any feedback welcome ;)

Changed 11 years ago by

Updated patch to current trunk

Changed 11 years ago by

Small doc fixes

Per design conversation with Jannis, Malcolm, and Russell at sprint, marking as Accepted and superseding #1105.
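(As an illustrative sketch rather than code from the ticket: a tag written with the assignment_tag decorator discussed here, in the {% ... as varname %} form favoured in comment 3. The app and the Entry model are hypothetical, used only for the example.)

    from django import template
    from myapp.models import Entry  # hypothetical model, for illustration only

    register = template.Library()

    @register.assignment_tag
    def get_latest_entries(count):
        # Fetch some data; the return value is bound to the variable named after 'as'.
        return Entry.objects.order_by('-published')[:count]

Template usage:

    {% get_latest_entries 5 as latest %}
    {% for entry in latest %}{{ entry.title }}{% endfor %}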
https://code.djangoproject.com/ticket/14262
CC-MAIN-2022-40
refinedweb
846
60.85
1. Competitive price
2. Fast and safe
3. Door to door service
4. Insurance
5. Best service

Hi Friends: Sincerely wish you always a nice day! This is Kelly from Xiamen Realjet International Logistics Co., Ltd. (Xiamen, China). We are a global and professional freight forwarder approved by the national trading ministry. We are located in Xiamen, and we have built a complete and good service network in the main ports of China and even the world. We always provide professional sea cargo and air freight service from China to all over the world at competitive prices. We have established good relationships with many shipping lines. We are now the agent of: COSCO, CSCL, MSK, OOCL, CMA, CSAV, ANL, SITC, MSC, NCL, HANJIN, UASC, EMC, ZIM, PIL, APL and so on.

Our main services
1. Freight Forwarding (Sea and Air), FOB & EXW Pick-Up and Door Deliveries (door to door, door to port, port to port, port to door, DDU & DDP)
2. Logistics Solution Design
3. Bonded Warehousing
4. Freight Brokerage, Charter
5. Full container & bulk cargos
6. CFS & CY Management
7. Inland Transportation
8. Warehousing
9. Consolidation
10. Custom Clearance
11. Documentation
12. Insurance
13. Other value-added services, logistics services, packing

Management Philosophy
Realjet regards implementation and performance as its main management concept. This concept is based on the existing business model; therefore, the company's return on investment (ROI) depends on structural improvement, attention to corporate responsibility and modern business management.

Service Concept
Commercial logistics is a non-asset logistics service supplier (4PL) product of supply chain, logistics, transportation management and advisory services. We focus on assisting customers to reduce cost and optimize the overall layout and supply chain operation. We provide a wealth of professional experience in logistics resources, and offer a solution project tailored to each customer's unique business need.

Why we are out of the ordinary
We offer specific solution projects to obtain positive results. We do not make any insignificant promises. We guarantee to provide a variety of quality management and excellent service to meet the client's needs. We track and report at all times, so you can know the progress and apply our resources in your daily business. We have a high-quality and professional team dedicated to the ongoing project. Our business is built on the basis of strong relationships. We will work with your employees, suppliers and customers to maintain good relations. We will continue to improve our business, to ensure that you will be satisfied with our service.

If you want to enquire about rates, please kindly let me know the following information:
1. Description of the goods: names, weight, volume, measurement
2. POD
3. POL
4. Delivery date
5. Suppliers' details
6. Volume per month

Once I get the information above, I can offer your rate exactly and quickly.

Xiamen Realjet International Logistics CO., LTD
Kelly Wang
Tel: +86-592-5619356
Mobile: +86-13959213174
http://www.alibaba.com/product-detail/Professional-sea-freight-logistics-from-Xiamen_1560279944.html
CC-MAIN-2014-15
refinedweb
483
50.84
Haskell programming tips/Discussion

About

This page is meant for discussions about ThingsToAvoid, as consensus seems to be difficult to reach, and it'd be nice if newbies wouldn't bump into all kinds of Holy Wars during their first day of using Haskell ;) You may want to add your name to your comments to make it easier to refer to your words and ridicule you in public.

Flame Away

Avoid recursion

Many times explicit recursion is the fastest way to implement a loop. e.g.

 loop 0 _ acc = acc
 loop i v acc = ...

Using HOFs is more elegant, but makes it harder to reason about space usage; also, explicit recursion does not make the code hard to read - just explicit about what it is doing. -- EinarKarttunen

I disagree with this. Sometimes explicit recursion is simpler to design, but I don't see how it makes space usage any easier to reason about and can see how it makes it harder. By using combinators you only have to know the properties of the combinator to know how it behaves, whereas I have to reanalyze each explicitly implemented function. StackOverflow gives a good example of this for stack usage and folds. As far as being "faster", I have no idea what the basis for that is; most likely GHC would inline into the recursive version anyways, and using higher-order list combinators makes deforesting easier. At any rate, if using combinators makes it easier to correctly implement the function, then that should be the overriding concern. -- DerekElkins

I read lots of code with recursion -- and it was hard to read, because it is hard to retrieve the data flow from it. -- HenningThielemann

IMO explicit recursion usually does make code harder to read, as it's trying to do two things at once: recursing and performing the actual work it's supposed to do. Phrases like OnceAndOnlyOnce and SeparationOfConcerns come to mind. However, the concern about efficiency is (partly) justified. HOFs defined for certain recursion patterns often need additional care to achieve the same performance as functions using explicit recursion. As an example, in the following code, two sum functions are defined using two equivalent left folds, but only one of the folds is exported. Due to various peculiarities of GHC's strictness analyzer, simplifier etc., the call from main to mysum_2 works, yet the call to mysum_1 fails with a stack overflow.

 module Foo (myfoldl_1, mysum_1, mysum_2) where

 -- exported
 myfoldl_1 f z xs = fold z xs
   where fold z []     = z
         fold z (x:xs) = fold (f z x) xs

 -- not exported
 myfoldl_2 f z xs = fold z xs
   where fold z []     = z
         fold z (x:xs) = fold (f z x) xs

 mysum_1 = myfoldl_1 (+) 0
 mysum_2 = myfoldl_2 (+) 0

 module Main where

 import Foo

 xs = [1..1000*1000]

 main = do
   print (mysum_2 xs)
   print (mysum_1 xs)

(Results might differ for your particular GHC version, of course...) -- RemiTurk

GHC made "broken" code work. As covered in StackOverflow, foldl is simply not tail-recursive in a non-strict language. Writing out mysum would still be broken. The problem here isn't the use of a HOF, but simply the use of a non-tail-recursive function. The only "care" needed here is not relying on compiler optimizations (the code doesn't work in my version of GHC) or the care needed when relying on compiler optimizations. Heck, the potential failure of inlining (and subsequent optimizations following from it) could be handled by restating recursion combinator definitions in each module that uses them; this would still be better than explicit recursion which essentially restates the definition for each expression that uses it.
-- DerekElkins

Here is a demonstration of the problem - with the classic sum as the problem. Of course microbenchmarking makes little sense, but it tells us a little bit about which combinator should be used.

 import Data.List
 import System

 sum' :: Int -> Int -> Int
 sum' 0 n = sum [1..n]
 sum' 1 n = foldl (\a e -> a+e) 0 [1..n]
 sum' 2 n = foldl (\a e -> let v = a+e in v `seq` v) 0 [1..n]
 sum' 3 n = foldr (\a e -> a+e) 0 [1..n]
 sum' 4 n = foldr (\a e -> let v = a+e in v `seq` v) 0 [1..n]
 sum' 5 n = foldl' (\a e -> a+e) 0 [1..n]
 sum' 6 n = foldl' (\a e -> let v = a+e in v `seq` v) 0 [1..n]
 sum' 7 n = loop n 0
   where loop 0 acc = acc
         loop n acc = loop (n-1) (n+acc)
 sum' 8 n = loop n 0
   where loop 0 acc = acc
         loop n acc = loop (n-1) $! n+acc

 main = do
   [v,n] <- getArgs
   print $ sum' (read v) (read n)

When executing with n = 1000000 it produces the following results:
* seq does not affect performance - as expected.
* foldr overflows the stack - as expected.
* explicit loop takes 0.006s
* foldl takes 0.040s
* foldl' takes 0.080s

In this case the "correct" choice would be foldl' - ten times slower than explicit recursion. This is not to say that using a fold would not be better for most code. Just that it can have subtle evil effects in inner loops. -- EinarKarttunen

This is ridiculous. The "explicit recursion" version is not the explicit recursion version of the foldl' version. Here is another set of programs and the results I get:

 import Data.List
 import System

 paraNat :: (Int -> a -> a) -> a -> Int -> a
 paraNat s = fold
   where fold z 0 = z
         fold z n = (fold $! s n z) (n-1)

 localFoldl' c = fold
   where fold n []     = n
         fold n (x:xs) = (fold $! c n x) xs

 sumFoldl' :: Int -> Int
 sumFoldl' n = foldl' (+) 0 [1..n]

 sumLocalFoldl' :: Int -> Int
 sumLocalFoldl' n = localFoldl' (+) 0 [1..n]

 sumParaNat :: Int -> Int
 sumParaNat n = paraNat (+) 0 n

 sumRecursionNat :: Int -> Int
 sumRecursionNat n = loop n 0
   where loop 0 acc = acc
         loop n acc = loop (n-1) $! n+acc

 sumRecursionList :: Int -> Int
 sumRecursionList n = loop [1..n] 0
   where loop []     acc = acc
         loop (n:ns) acc = loop ns $! n+acc

 main = do
   [v,n] <- getArgs
   case v of
     "1" -> print (sumFoldl' (read n))
     "2" -> print (sumLocalFoldl' (read n))
     "3" -> print (sumParaNat (read n))
     "4" -> print (sumRecursionNat (read n))
     "5" -> print (sumRecursionList (read n))

(best but typical real times according to time, of a few trials each)
* sumFoldl' takes 2.872s
* sumLocalFoldl' takes 1.683s
* sumParaNat takes 0.212s
* sumRecursionNat takes 0.213s
* sumRecursionList takes 1.669s

sumLocalFoldl' and sumRecursionList were practically identical in performance, and sumParaNat and sumRecursionNat were practically identical in performance. All that's demonstrated is the cost of detouring through lists (and the cost of module boundaries, I guess). -- DerekElkins

n+k patterns

n+k patterns are similar to the definition of infix functions, thus they make it harder to understand patterns. (Why I hate n+k)

So far I have seen only one rule for Good Coding Practice in Haskell: Do Not Use n+k Patterns. I hope someone can give some directions, how to avoid known pitfalls (especially Space Leaks). -- On the haskell mailing list

The most natural definition of many functions on the natural numbers is by induction, a fact that can very nicely be expressed with the (n+1)-pattern notation. Also, (n+k)-patterns are unlikely to produce space leaks, since if anything, they make the function stricter. The possible ambiguities don't seem to appear in real code. --ThomasJäger
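(An illustrative aside, not part of the original exchange: a function defined by induction using an (n+1)-pattern, together with the guard-based form it roughly desugars to.)

 fact 0     = 1
 fact (n+1) = (n+1) * fact n

 -- roughly the same function without the sugar
 fact' 0 = 1
 fact' m | m >= 1 = m * fact' (m - 1)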
If natural numbers were defined as PeanoNumbers then pattern matching on successors would be straightforward. This would be fairly slow and space consuming; that's why natural numbers are not implemented this way. They are implemented using binary numbers and no attempt is made to simulate the behaviour of Natural (e.g. laziness). Thus I wouldn't state that 3 matches the pattern 2+1. -- HenningThielemann

Laziness/Strictness isn't really an argument in this situation, since when using a strict natural type, e.g.

 data Nat = Zero | Succ !Nat

pattern matching on Nat behaves exactly like n+1 patterns. -- ThomasJaeger

n+k patterns also apply to negative numbers - don't they? Yes, I see the analogy, but in the current implementation it's nothing but sugar. -- HenningThielemann

No, they don't. `let f (n+2) = n in f 1` is a runtime error. -- DerekElkins

But translating it into pattern matching is impossible, thus it must be a static error. -- HenningThielemann

Use syntactic sugar wisely

I have to say, I strongly disagree with most of what is said in this section. First of all, the claim "Syntactic extensions make source code processors complicated and error prone. But they don't help to make programs safer (like type checks and contracts) or easier to maintain (like modularization and scoping)." is obviously wrong. There certainly are applications of syntactic sugar that make programs easier to read, therefore easier to understand, easier to maintain, and safer, as you are more likely to spot bugs.

- My statement was: Don't use syntactic sugar by default because you believe it makes your program more readable automatically (I've read lots of code by programmers who seem to believe that), but use syntactic sugar if (and only if) it makes the program more readable. Syntactic sugar is only a matter of readability, not of safety in the sense of scoping and type checking. If I accidentally introduce inconsistencies into my code, the name resolver or the type checker will report problems, but not the de-sugarizer. -- HenningThielemann

ad. right sections: the claim was "Infix notation is problematic for both human readers and source code formatters." No, infix notation isn't problematic for human readers, it enables them to read the code faster in many cases.

- ... if he knows the precedences, which is only true for (some) of the predefined operators. I guess you don't know all of the precedences of the Prelude operators. I also use infix operations like (+) and (:) but I'm very concerned with introducing lots of new operator symbols and `f` notation. -- HenningThielemann

- Introducing new operators should definitely not be done carelessly (then again, one shouldn't be careless in programming anyway), and operator precedences might be better defined as a partial order (e.g. there is an order between (+) and (*), and between (&&) and (||), but not between (+) and (&&)). Other proposals for replacing the current left/right associative + precedence system do exist. However, doing away with infix operators entirely appears to me to practically render combinator libraries unusable, which would make Haskell a lot less attractive. -- RemiTurk

- The nice thing about precedences in Haskell is that it's often not necessary to know them exactly in order to use them.
If the types of your operators are sufficiently general, or sufficiently distinct, only the right way to parse them will lead to type-checking code, so you can just give it a shot without parentheses and hopefully remember the precedence the next time you're in a similar situation. -- ThomasJaeger

(..., though it is not the most popular one. -- HenningThielemann

- You do indeed have a point there: it's indeed an extension of (.), which I incorrectly denied. However, as it's not the extension, and AFAIK not even the most used extension, I consider the name ...

Finally, there is no reason why one should expect a tool that processes Haskell code not to be aware of Haskell 98's syntax. All mentioned syntactic extensions (list comprehension, guards, sections, infix stuff) fall under this category and can be used without any bad conscience. Sorry for having become so "religious". -- ThomasJaeger

I agree. -- CaleGibbard

- If you want a good example of unnecessary sugar, take this one:

 tuples :: Int -> [a] -> [[a]]
 tuples 0 _ = return []
 tuples (r+1) xs = do
   y:ys <- tails xs
   (y:) `fmap` tuples r ys

- Why is infix [...]? You rewrote my code just with sugar, but the structure which must be understood remained the same. Sorry, I don't find it easier to understand. Maybe people who believe in a common notation rather than trying to understand the meaning are happy with the sugar. -- HenningThielemann

- The pattern m >>= \x -> f x is exactly the reason the do-notation was introduced, so each time I write something like this, I replace it with a do notation for the following reasons: it is definitely the more common style (nobody is using the m >>= \x -> ... style these days), so it is much more likely to be understood faster (at least for myself); the do notation expresses nicely that monadic (in this case nondeterministic) effects are taking place; and finally it is much easier to make changes to the code if it's in do-form (e.g. add additional guards). Of course you CAN do the same changes in >>=-style, too, after all there is a straightforward translation (although complicated by the fact that you have to check if pattern matchings are exhaustive), but I'm not the kind of guy who does all kinds of verbose translation in his head just because he wants to stay away from syntactic sugar.

- I disagree with arguments like "nobody is using ...". What does it tell about the quality of a technique? I write here to give reasons against too much syntactic sugar rather than to record today's habits of some programmers. -- HenningThielemann

- You are further criticizing that I am using [...] is defined by recursion on an Int and only uses the case of the predecessor, so this is a classical example for (n+1)-patterns. Note that the LHSs in your implementation are overlapped, so a reader might need more time to figure out what is going on in your implementation (I admit the effect is small, but this is a very tiny example).

- If no one else objects, I'd like to put my implementation back on the main page, possibly with a less controversial comment. --ThomasJaeger

- My argument is that the syntactically sugared version may be readable like the unsugared version, but it does not improve the readability, thus it should be avoided. Sugar shouldn't be the default; it shouldn't be used just because it exists. That's the basic opinion where we disagree. Btw. I find the do-notation in connection with the List monad very confusing because it looks imperative and it suggests that first something is chosen from the list and then it is returned. -- HenningThielemann
- While it may not be more readable for you, it is for me, for the reasons I'm getting tired of stating. Also, your opinions on the do-notation seem very strange to me. If we have monads - a way to unify different notions of effects - why make the artificial distinction between IO/State effects and more general ones again? The do-notation expresses in which order the effects are happening - that's the same for a list and an IO monad. However, a distinction between commutative and non-commutative monads would make sense, but unfortunately, there's no way to prove the commutativity of a monad statically in Haskell.

There are still issues that aren't implemented in GHC which belong to the Haskell 98 standard and which are of more importance, I think, such as mutually recursive modules and some special cases of polymorphic mutual function recursion. So I don't vote for wasting time on syntactic sugar when real enhancements are deferred by it. If I were to write a Haskell code processor I would certainly spare myself the trouble of supporting guards and (n+k) patterns. I'm also fed up with the similar situation in HTML with its tons of extensions and the buggy code which is accepted by most browsers (which is also a sort of unofficial extension) - there is simply no fun in processing HTML code. Not to mention C++. By the way, I'd like to have a real function for if-then-else.

Personally, I like the explicit `then` and `else` and find that they help when reading code to separate where the break is between the sections. It's not that I necessarily disagree with the inclusion of such a function, it is an easy one to write in any case, but I think that some sugar in the form of a few extra words to mark important points in common structures is useful. Human languages have many such words, and their presence makes reading or listening much easier. - CaleGibbard

- Other people seem to have problems with this special syntax, too. And they propose even more special syntax to solve the problem. -- HenningThielemann
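(An illustrative aside, not from the original discussion: the kind of "real function" for if-then-else being asked for above is usually written along these lines.)

 if' :: Bool -> a -> a -> a
 if' True  x _ = x
 if' False _ y = y

 -- usage: if' (x > 0) "positive" "not positive"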
https://wiki.haskell.org/index.php?title=Haskell_programming_tips/Discussion&direction=next&oldid=7069&printable=yes
CC-MAIN-2021-43
refinedweb
2,743
60.24
Agenda See also: IRC log <scribe> scribe: various <Daniel> our discussion focuses on what properties are requrired and mapping these with ontology <Daniel> wonsuk: if you made an initial draft, please upload it to the wiki page colm: we have developed an API since many customers asked for it ... will demonstrate general and complex interaction and an demonstration purpose application ... we have 5 entities: reference to page, title like "obama vs. ..." ... .summary, content, ... ... document specific info like domain, channel (e.g. "you tube") ... tags from the youtube object itself ... language, media duration / width / heigth ... embedd tag, but not for all medias ... a publish date and a user friendly date (e.g. "9 days ago") raphael: you have your own XML Schema for that? colm: yes ... example with obama video, search query was "joe meets obama" raphael: do you keep track of folksomonies in the metadata? colm: no, we use only what is in the feed what we get. raphael: most of values for this element are specific to your format, e.g. date is not an ISO date colm: yes, we just developed our own format raphael: if you need to exchange the metadata your need to tell people that you use not the ISO date colm: we can make transformations in the API that solves that problem ... a more complex example: for each object we have a macro layer with information about the transaction, e..g languages that appear in the results ... they have their own categories, application specific ones, e.g. their own title / link / publish date / ... ... s/more complex example/more complex, application specific example/ ... in the application they have a web page that makes use of these categories for searching raphael: in the metadata you had English and "en" colm: they did the merging, we put both in and it is up to them to do the merging ... a more complex application: a video with transcription, and the user can click on the transcription and the audio of the video jumps to that passage ... the application can also do major scence detection daniel: that is automatic speech recognition? colm: yes plh: how long does processing of a video take? colm: 1:1, it is roughly a lot of plugins which run in paralell ... and this is not a function of the resolution size ... video and transcript interaction is flash+javascript ... for text video alignment we use ctm raphael: the metadata comes from the schema you showed? colm: yes ... we also can export the schema. CTM has (also) an XML serialization silvia: the 5 top elements? colm: datestring / date / summary / title / reference, plus content which can have whatever you want ... e.g. media duration, media width, meida height, media format, media type string, embed tag looking at raphael: ontology is not exactly like an object model ... since you have inferences ... e.g. if you have a new class in an ontology, you might get changes in the model, because of the OWL semantics ... that is why you don't have "getter" and "setter" models for everything in the ontology, but different design patterns ... there are examples of removing a property in the ontology ... you cannot remove the relation because after reclassification a class might appear again ... an ontology has basic rules to be consistent ... you apply these basic rules when you change the ontology to make sure that the ontology is consistent again ... main design decisions of this API is to use the Command design pattern with a visitor pattern ... so you do not have a get / set method, but a visitor pattern. 
If you make a change then that might influence the whole ontology ... I will give some examples. <Daniel> raphael: imagine we have an ontology as the output of our group. Somebody wants to add metadata. <Daniel> raphael: dataFactory will give you individuals , from that you built the triples ... AddAxioms might trigger more changes in the ontology model veronique: so it is an API for changing ontologies, but it can also be used for querying instances raphael: so in summary, you have inferences behind, and these make sure that the overall model is always consistent colm: we have only one method, no set values ... we have a giant search application, that's it ... so what we have is a subportion of what people want to achieve here plh: set capabilities are the second step ... one of the target is an API for browsers, they are not interested in "set" raphael: for authoring, you might need the set capability plh: will show the demo of the use case I have in mind ... in my service I have an URI as an input, from that RDF is generated ... this uses an exif library, which generates the RDF. ... it extracts all kinds of metadata (EXIF, XMP, ...) rapahel: it is easy to embedd metadata in an image, but not easy with a video ... what will our API do about that? plh: that is where the ontology comes into play ... I mean basically a set of terms ... if we have have a set of terms like "getAuthor", the browser can see what format is available and can get the information ... in this use case I assume that metadata is in the image itself rapahel: for video that is much harder felix: we had an proposal to link to external media information from the <video> tag plh: HTML people currently prefer to look only to media internal information silvia: I heard that the external information will be discussed in a later step ... we should not restrict us with these options plh shows API example, very simple, getTitle, getAuthor, getRights, silvia: we had in mind to have only one method with a query string, so that it is extensible plh: at the end we need both silvia: at the end we need to identify the property name larry: or constants to identify the property name silvia: yes larry: adobe is very interested in metadata ... XMP is very important for us ... XMP properties and media specific formats are being aligned, in a consortium called metadata working group ... they have published a deliverable on photo so far, video is to come ... they are concerned not so much on the web, but how to deal with metadata through production / assembly steps ... e.g. when a product is assembled form various parts: how do you take care of the meta data ... many people look only at the metadata of the final product ... we are taking steps before into account, including transcoding ... such capabilities are now being released as part of products which started shipping recently ... there is a transition between metadata and data, e.g. chapter marks which become part of the data on the DVD ... that is part of the processing changes, how to make the transition ... some of the specs of the metadata working group, e.g. metadata for video spec, are not public ... one level of the toolkit is public, additional utilities are not yet released, including some documentation plh: did you work on XMP embedding in existing video formats? larry: difference in video formats about whether you can add metadata or not ... there are issues like one product is aware of XMP, another one only of EXIF ... 
the metadata WG has worked on how to deal with such "creative workflow" issue silvia: we need to deal with both situations, final and "in process" metadata larry: most metadata is a question of opinion ... most opinions change ... if you built semantic web way of metadata, keep in mind that provenance is here and might change ... history of changing and provenance of metadata are matters of fact <nessy> looking at XMP larry: if you do external metadata you need to care about syncronization ... the XMP version is new, tracks, frame rates, different kinds of annotation and markers were added ... they also analyze other formats felix: if XMP is available is , is it always the first choice in provenance? larry: look at the metadata consortium deliverable, the conflict resolution mechanism raphael: if there are two sources of metadata which contradict each other, should not the application decide? larry: consumers do not want to decide, software should do that for them ... results are heuristics, results might be different for authors vs. time stamps ... a generic solution would still be complicated ... issue of merging metadata from different sources is very difficult ... seems to be more like a research project like a standards activity ... XMP has three parts: XMP model, some schemas which can be extended, and how metadata is embedded ... if you want a standard ontology look at part two ... most of the document is generic for any type of media daniel: what is your opinion on our WG? larrry: we support this activity, we encourage you to go ahead ... we were not able to submit XMP so far ... technically there should be no reason for you not to use XMP felix: so we could just use part 2 of XMP here? larry: yes, but I need to make sure plh: need to be aware that XMP refers to various other formats like dublin core larry: XMP takes various parts of dublin core plh: one way would be to take the core of XMP part 2, use if for the ontology and the API, and we are done larry: there is a c++ library for reading and manipulating XMP ... there is also javascript API for working with XMP ... and there is action script based separate model ... so that you can built standalone applications plh: you did not try to align XMP to other metadata? larry: that is part of the metadata consortium work ... and it is part of part 3 of XMP silvia: still confused about legal side . Is XMP copyright by adobe or also metadata WG? larry: not sure if that is a problem ... for a W3C member submission plh: with a member submission you give your ... we might ask Adobe to give up parts of part 2 ... W3C is not interested in all parts of XMP, e.g. "how to use XMP in flash" ... so adobe has the problem that in that case XMP would need to be splitted across organizations larry: you could make a normative reference to (parts of) XMP ... I would not spend a lot of time on the license issue doug: currently no standard for getting width and height in normal javascript APIs ... I wanted an API for just that information ... and have developed a spec for that ... we want to be able to get at information, two kinds of metadata: intrinsic and extrinsic meta data ... e.g. width and height is intrinsic, but the creator is extrinsic <chaals> -> draft from end of last year doug: we need authors need this for inserting information in web pages or more complex information ... having the browser vendors early on will allow very early feedback ... otherwise browser vendors will not implement your work ... 
the draft above is a very generic way of getting any meta data ... so just a method "getMetadata" ... there should be some mapping between keywords ... working on API first and later ontology is important larry: I think there is a real potential of mismatch in cross discussion ... if you think intrinsic properties like width / height, vs. descriptitve information like transcription ... I am not sure if the same API is useful for both ... there is the problem of having different vocabularies, e.g. resolution in terms of width / heigth, versus ratio ... media is updated independently of people who access it doug: in terms of conflating intrinsic and extrinsic: ... it is very different to say "who is the author?" ... but if you make a generic API it does not matter what kind of information you have ... the line between intrinsic and extrinsic is blurry, see e.g. EXIF information felix: is the browser perspective brought in by MS in the metadata WG? larry: not sure discussion on how much the API should differentiate between various kinds of metadata larry: usage of namespaces in XMP might be a problem for querying XMP metadata doug: I was mainly concerned with a web browser API larry: what are you use cases? ... examples of web applications that want to use your API doug: see flickr, it would be nice to put tags into a picture ... if I have embedded captioning break renato presents PLING Felix: currently we are considering having just a slot for expressing rights renato: understand, when you need some more information about policies in your framework, please come back to us. We are looking at general frameworks for policy languages rigo: if you want to describe "who has already seen this?" you might run into policy issues which are addressed by PLING ... you (media annotations WG) provides answers on "how is meta data attached to video?", PLING is concerned with the question "how to attach policy information?" ... we created PLING because many companies do not know how to use various languages together ... in the outcome of the media annotation WG we need a way to use an URI to reference to a policy description looking at rights management schema from XMP XMP spec, page 31 raphael: why two field for machine readable and human readable fields? ... the user agent can choose to follow the link and choose what he wants rigo: two options: have a type argument with the content type, or two fields jan: you need to give the author the freedom which choose ... you want to be able to attach rights to e.g. media you are distributing discussion on wheter a mechanism specifically for attaching a policy to medias needs to introduced or not <plh> raphael: what is the main interest of Samsung? daniel: technically several interests ... resolution for watching web videos on TV is no good ... comparing PC and laptop, TV has no good interaction between user and TV ... no scrolling or mouse ... so the question is how to make the video on the web useful for TV ... we think about a video recommendation system ... that can help users to find videos ... having a common ontology will help search and recommendation ... we are used to have a bookmark tool on the TV. But having an URI based mechanism (from the fragment WG) will be helpful for us wonsuk: requirements doc is not finished yet ... mainly basic sceleton of the document ... as result of yesterdays discussion wonsuk describes the document wonsuk: how to link between sec. 2 and sec. 3? veronique: subsections of sec. 3 can contain tables with links to sec. 
2 schedule plan: veronique will edit the document, will send it around within the WG for another internal review <Daniel> before official publication, working group will have recurculation process for reviewing and modifying the existing document. 2 weeks or a bit longer felix will send XML later to veronique <scribe> ACTION: Felix to get CVS accounts for veronique and wonsuk [recorded in] <trackbot> Created ACTION-24 - Get CVS accounts for veronique and wonsuk [on Felix Sasaki - due 2008-10-31]. <Daniel> requirements section willl be subsection of each use case section. e.g., Video Use Case (Req1, Req2, Req3, etc...) <plh> ACTION: Felix to review XMP basic schema [recorded in] <trackbot> Created ACTION-25 - Review XMP basic schema [on Felix Sasaki - due 2008-10-31]. <plh> ACTION: Thierry to review XMP Dublin Core schema [recorded in] <trackbot> Created ACTION-26 - Review XMP Dublin Core schema [on Thierry Michel - due 2008-10-31]. <plh> ACTION: Felix to review XMP Rights Management schema [recorded in] <trackbot> Created ACTION-27 - Review XMP Rights Management schema [on Felix Sasaki - due 2008-10-31]. <plh> ACTION: Veronique to review XMP Media Management schema [recorded in] <trackbot> Sorry, couldn't find user - Veronique <plh> trackbot-ng, status <plh> ACTION: Véronique to review XMP Media Management schema [recorded in] <trackbot> Created ACTION-28 - Review XMP Media Management schema [on Véronique Malaisé - due 2008-10-31]. <plh> ACTION: Felix to tell Wonsuk to review XMP Basic Job Ticket schema and Paged-Text schema [recorded in] <trackbot> Created ACTION-29 - Tell Wonsuk to review XMP Basic Job Ticket schema and Paged-Text schema [on Felix Sasaki - due 2008-10-31]. <scribe> ACTION: wonsuk to review XMP Basic Job Ticket schema and Paged-Text schema [recorded in] <trackbot> Sorry, couldn't find user - wonsuk <plh> trackbot-ng, reload <plh> trackbot-ng, status <plh> ACTION: Joakim to review XMP Dynamic Media schema [recorded in] <trackbot> Created ACTION-30 - Review XMP Dynamic Media schema [on Joakim Söderberg - due 2008-10-31]. raphael: tom baker should be the person to contact <scribe> ACTION: Felix to contact tom baker about dc liaison [recorded in] <trackbot> Created ACTION-31 - Contact tom baker about dc liaison [on Felix Sasaki - due 2008-10-31]. <raphael> Dublin Core draft: <scribe> ACTION: Daniel to make a liaison with MPEG using information from the XP homepage [recorded in] <trackbot> Sorry, couldn't find user - Daniel <Daniel> spark3 <scribe> ACTION: Soohong to make a liaison with MPEG using information from the XP homepage [recorded in] <trackbot> Created ACTION-32 - Make a liaison with MPEG using information from the XP homepage [on Soohong Daniel Park - due 2008-10-31]. <raphael> XG MPEG Liaison document: <raphael> More generally, go to raphael: for IPTC, they are following our work, we will get informal comments EBU covered by Jean-Pierre <scribe> ACTION: Felix to evaluate if contact to IPTV Japan is valuable [recorded in] <trackbot> Created ACTION-33 - Evaluate if contact to IPTV Japan is valuable [on Felix Sasaki - due 2008-10-31]. <scribe> ACTION: Soohong to contact Open IPTV forum [recorded in] <trackbot> Created ACTION-34 - Contact Open IPTV forum [on Soohong Daniel Park - due 2008-10-31]. <scribe> ACTION: Joakim to check contacts to OMA [recorded in] <trackbot> Created ACTION-35 - Check contacts to OMA [on Joakim Söderberg - due 2008-10-31]. 
close ACTION-16 <trackbot> ACTION-16 Confirm that there should be a presentation about IMM closed close ACTION-18 <trackbot> ACTION-18 And others to elaborate on the top-down approach about use case closed close ACTION-19 <trackbot> ACTION-19 And others to look on the draft agenda for the TPAC meeting closed
http://www.w3.org/2008/10/24-mediaann-minutes.html
crawl-002
refinedweb
3,023
62.27
Here is a very simple segment of code. What I want is that when I click on the button "First Button", the window becomes empty. What actually happens is the window just becomes unresponsive. What could be the problem and what could be a solution??

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

/**
 *
 * @author Muhammad Anas
 */
public class ClearingJframe {

    private static JFrame mainWindow;

    public static void main(String[] args) {
        mainWindow = new JFrame( "Clearing a JFrame" );
        mainWindow.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
        JButton button1 = new JButton( "First Button" );
        mainWindow.add( button1 );
        button1.addActionListener( new ActionListener() {
            public void actionPerformed( ActionEvent eve ) {
                mainWindow.removeAll();
            }
        } );
        mainWindow.pack();
        mainWindow.setLocation( 500, 500 );
        mainWindow.setVisible( true );
    }
}

Currently I do not want to be concerned with issues like the Event Dispatch Thread etc., at least for today, because I have not studied multithreading yet (actually, all discussions of this issue that I found via Google are mixed up with discussions of those topics). So, if a quick and dirty solution is possible then, for now, I would prefer that. Thanks!!
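(A sketch of one commonly suggested direction, not an answer taken from this thread: calling removeAll() on the JFrame itself also removes its root pane, which tends to leave the window in the broken state described above. Clearing the content pane instead, then validating and repainting, avoids that.)

        button1.addActionListener( new ActionListener() {
            public void actionPerformed( ActionEvent eve ) {
                mainWindow.getContentPane().removeAll(); // clear the components, not the root pane
                mainWindow.getContentPane().validate();  // lay out the now-empty pane
                mainWindow.repaint();                    // repaint the frame
            }
        } );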
https://www.daniweb.com/programming/software-development/threads/422183/removing-all-components-from-jframe-application-hangs
CC-MAIN-2018-30
refinedweb
181
52.05
A previous article in this series titled 'Introducing NVIDIA's CUDA' covered the basics of the NVIDIA CUDA device architecture. This article covers parallel programming using CUDA C with sequential and parallel implementations of a vector addition program.

Parallel programming and general-purpose GPU computing are some of the hottest trends in computer science today due to the decreased prices of multi-core systems and the increase in compute efficiency. Various parallel programming languages like OpenCL and CUDA have been developed and evaluated over the years. This article will cover the basics of CUDA C, invoking kernels, threads and blocks with a vector addition program, and it aims to give you an insight into beginning programming on your CUDA device.

A few important points before we begin. To use CUDA on your system, you will need to have the following installed:
1. CUDA-capable GPU hardware
2. A supported version of Linux with a GCC compiler and toolchain
3. NVIDIA CUDA toolkit and drivers

It is presumed that if you already have a CUDA device within your system, you probably have the latest toolkit and drivers installed and configured correctly. In case you do not have the NVIDIA CUDA drivers configured or you have recently upgraded your hardware with a CUDA device, you could follow the simple steps given below to configure your device.

1. Download the toolkit from the NVIDIA website, available at no cost from:. Select the correct product release depending on your operating system preferences. This download contains an all-in-one package, which includes the CUDA toolkit, SDK code samples and the required drivers. After downloading, follow the steps in the NVIDIA guide to install the drivers, CUDA samples and the toolkit, at:. This guide will help you set up the complete environment on the system.

In this article, I will cover the serial and parallel versions of a vector addition program. Once you have understood the basics of the parallel vector addition program, you could use the concepts pretty well in parallelising other algorithms as per your requirements.

First, let's write a simple C program to perform vector addition of two arrays. Open your favourite editor and write a simple vector addition code that looks like what's shown below:

#include <stdio.h>
#include <stdlib.h>

static const int N = 100;

// Add the vectors and store the result in vector C
void vector_add(int *a, int *b, int *c)
{
    int i = 0;
    for (i = 1; i <= N; i++) {
        c[i] = a[i] + b[i];
    }
}

int main()
{
    int a[N], b[N], c[N];
    int i = 0;

    // Initialize the vectors with values from 1 to 100 and its double in another array
    for (i = 1; i <= N; i++) {
        a[i] = i;
        b[i] = 2 * i;
    }

    // Call function vector_add to compute the result
    vector_add(a, b, c);

    // Print the resultant array
    for (i = 1; i <= N; i++) {
        printf("%d %d %d\n\n", a[i], b[i], c[i]);
    }
    system("pause");
    return 0;
}

This very simple serial vector addition program creates two arrays of integer values, and adds them using the vector_add function. Compile the code using the following command:

gcc sequential_vector.c -o sequential

Run it using the command given below:

./sequential

In the above program, the processor runs each task sequentially, one after the other. Looping in the above program works sequentially, as it starts with the first index and computes consequentially till the last index, and then exits the program. It is a single-threaded execution. Now, CUDA gives us the functionality to perform the same operation in parallel.
What it does essentially is offload the data parallel sections to the GPU device and send the result back after computation. In what follows, you will get an insight into launching kernels, writing device and host code, and performing the same serial vector addition program given above, in parallel. Here is a simple vector addition code in CUDA.

#include <cuda.h>
#include <stdio.h>

#define N 100
#define numThread 1   // in this example we keep one thread in one block
#define numBlock 100  // in this example we use 100 blocks

__global__ void vector_add( int *a, int *b, int *c )
{
    // keep track of the index
    int tid = blockIdx.x;
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid = tid + numBlock; // shift by the total number of blocks, i.e. 100 in our case
    }
}

int main( void )
{
    int *a, *b, *c;
    int *dev_a, *dev_b, *dev_c;

    // allocate the memory on the CPU
    a = (int*)malloc( N * sizeof(int) );
    b = (int*)malloc( N * sizeof(int) );
    c = (int*)malloc( N * sizeof(int) );

    // Initialize the vectors with values from 1 to 100 and its double in another array
    for (int i = 0; i < N; i++) {
        a[i] = i;
        b[i] = 2 * i;
    }

    // allocate the memory on the GPU
    cudaMalloc( (void**)&dev_a, N * sizeof(int) );
    cudaMalloc( (void**)&dev_b, N * sizeof(int) );
    cudaMalloc( (void**)&dev_c, N * sizeof(int) );

    // copy the arrays 'a' and 'b' to the GPU
    cudaMemcpy( dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice );
    cudaMemcpy( dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice );

    vector_add<<<numBlock,numThread>>>( dev_a, dev_b, dev_c );

    // copy the array 'c' back from the GPU to the CPU
    cudaMemcpy( c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost );

    // Print the results
    for (int i = 0; i < N; i++) {
        printf("%d %d %d \n\n", a[i], b[i], c[i]);
    }

    // free the memory we allocated on the CPU
    free( a );
    free( b );
    free( c );

    // free the memory we allocated on the GPU
    cudaFree( dev_a );
    cudaFree( dev_b );
    cudaFree( dev_c );
    return 0;
}

Let's begin with analysing each part of the code, and then compile the code to get our results. From the previous article (Part 1 of this series), you know that CUDA programs execute in two places: the host (your CPU) and the device (GPU). You might be a bit surprised by the fact that writing the device code is much simpler than writing the CPU host code. Hence, let's begin with analysing the device code first.

Since we have 100 array values in this code, to simplify things, let's have 100 blocks (kernels) running simultaneously, where each kernel runs a single thread. Hence, let's set numThread to 1 and numBlock to 100, and use these variables later while calling the device from the host. The device code is:

__global__ void vector_add(int *a, int *b, int *c)
{
    // keep track of the index
    int tid = blockIdx.x;
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid = tid + numBlock; // shift by the total number of blocks, i.e. 100 in our case
    }
}

As shown above, add a __global__ qualifier to the function vector_add. Notice that there are very few changes in the function vector_add between the serial and parallel sections. The __global__ qualifier indicates that this is a device function that would be called from the host. blockIdx is a built-in CUDA runtime variable, which is a three-component vector to identify threads in a one, two and three dimension index. Imagine a block as a 3-D matrix; to access the different components in this vector, use blockIdx.x, blockIdx.y and blockIdx.z. In this code, we will be using 100 blocks with a single thread on every grid, which will be seen while we analyse the host code, and hence we use only blockIdx.x, which returns the current block number. The condition while tid < N checks that the bounds for array computation have not been reached and computes the array sum taking the block value as an index, i.e., tid. Add numBlock to the tid value to shift the index by the number of blocks, as each block would be computing just one array index, and we have 100 blocks for 100 array indexes. This explanation pretty much sums up the device code for the program.
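(An aside, not part of the article's listing: when more than one thread per block is used, the index is usually computed from both the block and thread IDs, and the loop strides by the total number of launched threads. A minimal sketch of that pattern, assuming the same N as above:)

__global__ void vector_add(int *a, int *b, int *c)
{
    // global index of this thread across all blocks
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid += blockDim.x * gridDim.x; // stride by the total number of threads launched
    }
}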
Now move on to the host code, which prepares the GPU for execution and invokes the kernel. It works by allocating memory on the GPU and CPU, transferring the input vectors to the GPU, launching the kernel and transferring the result back to the host (CPU).

int main( void )
{
    int *a, *b, *c;
    int *dev_a, *dev_b, *dev_c;

    // allocate the memory on the CPU
    a = (int*)malloc( N * sizeof(int) );
    b = (int*)malloc( N * sizeof(int) );
    c = (int*)malloc( N * sizeof(int) );

    // fill the arrays 'a' and 'b' on the CPU
    for (int i = 0; i < N; i++) {
        a[i] = i;
        b[i] = 2 * i;
    }

    // allocate the memory on the GPU
    cudaMalloc( (void**)&dev_a, N * sizeof(int) );
    cudaMalloc( (void**)&dev_b, N * sizeof(int) );
    cudaMalloc( (void**)&dev_c, N * sizeof(int) );

    // copy the arrays 'a' and 'b' to the GPU
    cudaMemcpy( dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice );
    cudaMemcpy( dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice );

As in the above code, similar to the allocation in C, variables a, b and c are allocated memory on the CPU. cudaMalloc() is a standard sub-routine of the CUDA API to allocate memory on the device. It works similar to the C malloc() function we used earlier. Since you cannot modify the memory allocated to the device from the host directly in CUDA, cudaMemcpy() is used to transfer data to the device. This method takes a pointer to the local memory, a pointer to the GPU memory being copied to, the number of bytes that will be copied, and a flag which determines the direction of the memory transfer, respectively.

vector_add<<<numBlock,numThread>>>( dev_a, dev_b, dev_c );

This line is a call to the device kernel from the host to execute the function on the device. It is similar to the serial function call with some additional code. Blocks are organised in three dimensional grids and threads are organised in three dimensional blocks. numBlock and numThread are passed as arguments to let the device know about the structure to be adopted for computation.

cudaMemcpy( c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost );

This copies the resultant data back from the GPU to the host. As you can see, it is similar to the cudaMemcpy() we used above, with the last argument changed to cudaMemcpyDeviceToHost to indicate that the data transfer is from device to host. The rest of the code is pretty much self-explanatory, except that you use cudaFree() to free the memory allocated on the GPU.

Now that you understand the code well, compile it to verify your results. Save the code as parallel_vector.cu, and type the following command in the terminal:

nvcc parallel_vector.cu -o parallel

Now, to execute the code, run the following command:

./parallel

If you have followed this guide correctly, it will print the vectors 'a' and 'b' along with their addition result 'c'. This would verify that the first GPU code which you wrote has worked correctly. Still confused? Well, have a look at Figure 2, which will give you a clear understanding of how things work in parallel, in this case.

Performance

You may wonder how the GPU code can perform about 100 times faster than the CPU code, since we have created 100 blocks that are executing in parallel. This is not the case, since there is an overhead involved in copying data between the CPU and the GPU and copying the resultant data back to the CPU. Hence, CUDA is generally used for computing algorithms that are significantly data intensive, as it would then make sense to spend some time for data transfer. GPUs are, therefore, generally known as data intensive computational devices. As a next step, you could try programming matrix addition on the GPU in parallel to get a good grip on kernels, threads and parallel execution. Your best companion for this would be the links and the books mentioned in the 'References' section at the end of this article.
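(An aside, not from the article's listing: one way to see the overhead discussed above for yourself is to time the kernel separately from the memory copies using CUDA events. A rough sketch, assuming the same dev_a, dev_b and dev_c buffers as in the listing:)

cudaEvent_t start, stop;
float ms = 0.0f;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, 0);
vector_add<<<numBlock,numThread>>>( dev_a, dev_b, dev_c );
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);

cudaEventElapsedTime(&ms, start, stop); // kernel time only; cudaMemcpy transfers are timed separately
printf("kernel time: %f ms\n", ms);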
I also recommend you visit the NVIDIA website and documentation, as it will give you a good idea about the power of CUDA if you are not already impressed by what this simple GPU device on your laptop can do. Next up in this series, I might cover an advanced CUDA program with multiple threads and blocks on a grid, and analyse the running time of the code. I will follow it up with a discussion on OpenACC and other simpler parallel programming models that have come up recently. Till then, start thinking of algorithms in parallel. The world of parallel computing is here to stay! References [1] ‘CUDA C Programming Guide’ by NVIDIA; [2] ‘CUDA Application Design and Development’ by Rob Farber [3] ‘Programming Massively Parallel Processors’ by David B. Kirk and Wen-mei W. Hwu
http://www.opensourceforu.com/2013/12/heterogeneous-parallel-programming-dive-world-cuda/
CC-MAIN-2014-15
refinedweb
1,959
58.11
question where should i put the help file ? i got only build , dist ,src ,nbproject at the project file . Where's the res folder? Great Work This will help for lot of people around the world on behalf of every one I thank U. edit and save file Hi, thank u for this..=D But, is there any way that I can edit the txt files and save the new file with the modification? Looking 4ward for your response i like your tutorials i am a computer science student i just discover your site and so far it has benefited me immensly thank you guys write to internal file Hi, Thanks dear sundeep kumar suman, have you any code to write to an internal file(in the resource folder) Thanks alot bye Res folder not in netbeans6.8 where is res folder in netbeans 6.8 when i am running this on my mobile it is not dispaling any J2ME Read File J2ME Read File In this J2ME application, we are going to read the specified file. This example shows you how to read the data of the specified file. To implement this type j2me question j2me question write a j2me program that accepts two integer values from the user through a form and display the result of multiplication on a new screen. Please visit the following link: j2me j2me What is JAD file what is necesary of that Hi Friend, Please visit the following link: JAD File Thanks Hi Friend, Please visit the following link: JAD File Thanks | J2ME Timer MIDlet | J2ME RMS Sorting | J2ME Read File | J2ME... Platform Micro Edition | MIDlet Lifecycle J2ME | jad and properties file... | J2ME RMS Read Write | J2ME Frame Animation | J2ME Cookies   j2me database question j2me database question **Is there any possibility to install a database into the mobile. If possible how can i connect it through midlet(j2me)** pls help me read a file read a file read a file byte by byte import java.io.File... static void main(String[] args) { File file = new File("D://Try.txt"); try { FileInputStream fin = new FileInputStream(file); byte
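The last snippet above ("read a file byte by byte") is cut off. For reference, here is a minimal, self-contained Java version of the same idea; the file path is the placeholder used in the snippet and the class name is made up for the example.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class ReadFileByteByByte {
    public static void main(String[] args) {
        File file = new File("D://Try.txt"); // placeholder path from the snippet above
        try (FileInputStream fin = new FileInputStream(file)) {
            int b;
            // read() returns the next byte, or -1 at end of file
            while ((b = fin.read()) != -1) {
                System.out.print((char) b);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}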
http://www.roseindia.net/tutorialhelp/allcomments/155970
CC-MAIN-2014-49
refinedweb
353
68.1
How we like our code.

This is the style guide for the cpp-netlib project. We strive for consistency throughout the codebase to make it easier for developers and users to understand the code. The style guide is not exhaustive and is subject to change based on community agreement. If you have questions about or clarifications to the contents of the style guide, get involved in the project discussion.

We enforce the formatting rules prescribed by the Google Style Guide as implemented by the clang-format tool.

We follow the naming conventions used by the standard library; macros are in all caps, prefixed with NETWORK, and have words separated by underscores.

We use all the facilities afforded us by C++. In particular, the most current C++ standard is the one we code against. We organize the whole library in namespaces, and where appropriate as macros and static class members, with implementation details kept in an impl namespace.
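As a quick illustration of the conventions above, here is what a header following them might look like; the identifiers are invented for the example and are not part of the library.

// ALL_CAPS macro, prefixed with NETWORK and words separated by underscores
#define NETWORK_MAX_REDIRECTS 10

namespace network {
namespace impl {
// implementation detail, kept out of the public interface
struct connection_state;
}  // namespace impl

// standard-library-style lower_case names for types and functions
class message_base {
 public:
  void set_body(const char* body);
};

void swap(message_base& a, message_base& b);
}  // namespace network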
https://cpp-netlib.org/style-guide.html
CC-MAIN-2019-09
refinedweb
141
67.25
Decision followed by traversing simple IF..AND..AND….THEN logic down the nodes. The root node (the first decision node) partitions the data based on the most influential feature partitioning. There are 2 measures for this, Gini Impurity and Entropy. Entropy The root node (the first decision node) partitions the data using the feature that provides the most information gain. Information gain tells us how important a given attribute of the feature vectors is. It is calculated as: $\text{Information Gain} = \text{entropy(parent)} – \text{[average entropy(children)]} $ Where entropy is a common measure of target class impurity, given as: $ Entropy = \Sigma_i – p_i \log_2 p_i $ where i is each of the target classes. Gini Impurity Gini Impurity is another measure of impurity and is calculated as follows: $ Gini = 1 – \Sigma_i p_i^2 $ Gini impurity is computationally faster as it doesn’t require calculating logarithmic functions, though in reality which of the two methods is used rarely makes too much of a difference. Predicting Survival in the Titanic Data Set We’ll be using a decision tree to make predictions about the Titanic data set from Kaggle. This data set provides information on the Titanic passengers and can be used to predict whether a passenger survived or not. import pandas as pd df = pd.read_csv('data/titanic.csv', index_col='PassengerId') print(df.head()) Survived Pclass \ PassengerId 1 0 3 2 1 1 3 1 3 4 1 1 5 0 3 Name Sex Age \ PassengerId 1 Braund, Mr. Owen Harris male 22.0 2 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 3 Heikkinen, Miss. Laina female 26.0 4 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 5 Allen, Mr. William Henry male 35.0 SibSp Parch Ticket Fare Cabin Embarked PassengerId 1 1 0 A/5 21171 7.2500 NaN S 2 1 0 PC 17599 71.2833 C85 C 3 0 0 STON/O2. 3101282 7.9250 NaN S 4 1 0 113803 53.1000 C123 S 5 0 0 373450 8.0500 NaN S We will be using Pclass, Sex, Age, SibSp (Siblings aboard), Parch (Parents/children aboard), and Fare to predict whether a passenger survived. df = df[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Survived']] We need to convert ‘Sex’ into an integer value of 0 or 1. df['Sex'] = df['Sex'].map({'male': 0, 'female': 1}) We will also drop any rows with missing values. df = df.dropna() X = df.drop('Survived', axis=1) y = df['Survived'] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) from sklearn import tree model = tree.DecisionTreeClassifier() Let’s take a look at our model’s attributes model') Defining some of the attributes like max_depth, max_leaf_nodes, min_impurity_split, and min_samples_leaf can help prevent overfitting the model to the training data. First we fit our model using our training data. model.fit(X_train, y_train)') Then we score the predicted output from model on our test data against our ground truth test data. y_predict = model.predict(X_test) from sklearn.metrics import accuracy_score accuracy_score(y_test, y_predict) 0.83240223463687146 We see an accuracy score of ~83.2%, which is significantly better than 50/50 guessing. Let’s also take a look at our confusion matrix: from sklearn.metrics import confusion_matrix pd.DataFrame( confusion_matrix(y_test, y_predict), columns=['Predicted Not Survival', 'Predicted Survival'], index=['True Not Survival', 'True Survival'] ) If we have graphviz installed, we can export our decision tree so we can explore the decision and leaf nodes. 
tree.export_graphviz(model.tree_, out_file='tree.dot', feature_names=X.columns)

We can then convert this dot file to a png file.

from subprocess import call
call(['dot', '-T', 'png', 'tree.dot', '-o', 'tree.png'])

0

We can then view our tree, which looks like this:

The root node, with the most information gain, tells us that the biggest factor in determining survival is Sex.

If we zoom in on some of the leaf nodes, we can follow some of the decisions down. We have already zoomed into the part of the decision tree that describes males, with a ticket lower than first class, that are under the age of 10.

The impurity is the measure given at the top by Gini, the samples are the number of observations remaining to classify, and the value is how many samples are in class 0 (did not survive) and how many are in class 1 (survived).

Let's follow this part of the tree down; the nodes to the left are True and the nodes to the right are False:

- We see that we have 19 observations left to classify: 9 did not survive and 10 did.
- From this point the most information gain comes from how many siblings (SibSp) were aboard.
  A. 9 out of the 10 samples with less than 2.5 siblings survived.
  B. This leaves 10 observations: 9 did not survive and 1 did.
- 6 of these children that had only one parent (Parch) aboard did not survive.
- None of the children aged > 3.5 survived.
- Of the 2 remaining children, the one with > 4.5 siblings did not survive.
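To tie the walkthrough back to the impurity formulas given at the start, here is a small hand computation for the node above with 19 samples (9 did not survive, 10 survived); it is only an illustration and not part of the original notebook.

import numpy as np

# class counts at the node: 9 did not survive, 10 survived
counts = np.array([9, 10])
p = counts / counts.sum()

gini = 1 - np.sum(p ** 2)          # 1 - (9/19)^2 - (10/19)^2 ≈ 0.499
entropy = -np.sum(p * np.log2(p))  # ≈ 0.998 bits

print("Gini impurity:", round(gini, 3))
print("Entropy:", round(entropy, 3))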
https://benalexkeen.com/decision-tree-classifier-in-python-using-scikit-learn/
CC-MAIN-2021-21
refinedweb
892
66.13
Direct upload to s3 with cors September 21, 2012 . Coding . Comments Tags: aws javascript rails Tweet EDIT Everything detailed in this article has been wrapped up in this gem, you should give it a look ! Anyway, I still advise you to read this article as it will probably help you how everything works ! Preface Since beginning of september, Amazon added CORS support to S3. As this is quite recent, there are not yet a lot of documentation and tutorials about how to set eveything up and running for your app. Furthermore, this jQuery plugin is awesome, mainly for the progress bar handling, but sadly the example in the wiki is obsolete. If somehow you're working with heroku you might have already faced the 30s limit on each requests. There are some alternatives, such as the extension of the great carrier wave gem, carrierwave direct. I gave it a quick look, but I found it quite crappy, as it forces you to change your carrier wave settings (removing the store_dir method, really ?) and it only works for a single file. So I thought it would be better to handle upload manually for big files, and rely on vanilla carrier_wave for my other small uploads. I found other interesting examples but they all lacked important things, and none of them worked out of the box, hence this short guide. This tutorial is inspired by that post and that one. Setup your bucket First you'll need to setup your bucket to enable CORS under certain conditions. <CORSConfiguration> <CORSRule> <AllowedOrigin>*</AllowedOrigin> <AllowedMethod>GET</AllowedMethod> <AllowedMethod>POST</AllowedMethod> <AllowedMethod>PUT</AllowedMethod> <AllowedHeader>*</AllowedHeader> </CORSRule> </CORSConfiguration> Of course those settings are only for development purpose, you'll probably want to restrict the Allowed Origin rule to your domain only. Documentation about those settings is quite good. Setup your server In order to send your files to s3, you have to include a set of options as described in the official doc here and there One solution would be to directly write the content of all those variables in the form, so it's ready to be submitted, but I believe that most of those value should not be written in the DOM. So we'll create a new route we'll use to fetch those data. This example is written with Rails, but writing the same for another framework should be really simple MyApp::Application.routes.draw do resources :signed_url, only: :index end Now that we have our new route, let's create the controller which will send back our data to the s3 form class SignedUrlsController < ApplicationController def index render json: { policy: s3_upload_policy_document, signature: s3_upload_signature, key: "uploads/#{SecureRandom.uuid}/#{params[:doc][:title]}", success_action_redirect: "/" } end private # generate the policy document that amazon is expecting. def s3_upload_policy_document Base64.encode64( { expiration: 30.minutes.from_now.utc.strftime('%Y-%m-%dT%H:%M:%S.000Z'), conditions: [ { bucket: ENV['S3_BUCKET'] }, { acl: 'public-read' }, ["starts-with", "$key", "uploads/"], { success_action_status: '201' } ] }.to_json ).gsub(/\n|\r/, '') end # sign our request by Base64 encoding the policy document. 
def s3_upload_signature Base64.encode64( OpenSSL::HMAC.digest( OpenSSL::Digest::Digest.new('sha1'), ENV['AWS_SECRET_KEY_ID'], s3_upload_policy_document ) ).gsub(/\n/, '') end end The policy and signature method are stolen from the linked blog posts above with one exception, I had to include the "starts-width" constraint, otherwise s3 was yelling 403 to me. Everything else is quite straight forward, there's just a small detail to consider if you set the acl to 'private', but more on that later. One last detail, the key value is actually the path of your file on your bucket, so set it to whatever you want but be sure it matches the constraint you set in the policy. Here we're using params[:doc][:file] to read the name of the file we're about to upload. We'll see more about that when setting the javascript. That's basically everything we have to do on the server side Add the jQueryFileUpload files Next you'll have to add the jQueryFileUpload files. The plugins ships with a lof of files, but I found most of them useless, so here is the list vendor/jquery.ui.widget jquery.fileupload Setup the javascript client side Now let's setup jQueryFileUpload to send the correct data to s3 Based on what we did on the server, the workflow will be composed of 2 requests, first, it's going to fetch the needed data from our server, then send everything to s3. Here is the form I'm using, the order of parameter is important. %form(action="{ENV['S3_BUCKET']}.s3.amazonaws.com" method="post" enctype="multipart/form-data" class='direct-upload') %input{type: :hidden, name: :key} %input{type: :hidden, name: "AWSAccessKeyId", value: ENV['AWS_ACCESS_KEY_ID']} %input{type: :hidden, name: :acl, value: 'public-read'} %input{type: :hidden, name: :policy} %input{type: :hidden, name: :signature} %input{type: :hidden, name: :success_action_status, value: "201"} %input{type: :file, name: :file } - # You can recognize some bootstrap markup here :) .progress.progress-striped.active .bar $(function() { $('.direct-upload').each(function() { var form = $(this) $(this).fileupload({ url: form.attr('action'), type: 'POST', autoUpload: true, dataType: 'xml', // This is really important as s3 gives us back the url of the file in a XML document add: function (event, data) { $.ajax({ url: "/signed_urls", type: 'GET', dataType: 'json', data: {doc: {title: data.files[0].name}}, // send the file name to the server so it can generate the key param async: false, success: function(data) { // Now that we have our data, we update the form so it contains all // the needed data to sign the request form.find('input[name=key]').val(data.key) form.find('input[name=policy]').val(data.policy) form.find('input[name=signature]').val(data.signature) } }) data.submit(); }, send: function(e, data) { $('.progress').fadeIn(); }, progress: function(e, data){ // This is what makes everything really cool, thanks to that callback // you can now update the progress bar based on the upload progress var percent = Math.round((e.loaded / e.total) * 100) $('.bar').css('width', percent + '%') }, fail: function(e, data) { console.log('fail') }, success: function(data) { // Here we get the file url on s3 in an xml doc var url = $(data).find('Location').text() $('#real_file_url').val(url) // Update the real input in the other form }, done: function (event, data) { $('.progress').fadeOut(300, function() { $('.bar').css('width', 0) }) }, }) }) }) So quick explanation about what's going on here : The add callback allows us to fetch the missing data before the upload. 
Once we have the data, we simply insert them in the form The send and done callbacks are only used for UX purpose, they show and hide the progress bar when needed. The real magic is the progress callback as it gives you the current progress of the upload in the event argument. In my example, this form sits next to a 'real' rails form which is used to save an object which has amongst its attributes a file_url, linked to the "big file" we just uploaded. So once the upload is done I fill the 'real' field so my object is correctly created with the good url without having to handle extra things. After submitting the real form my object is saved with the URL of the file uploaded on S3. If you're uploading public files, you're good to go, everything's perfect. But if you're uploading private file (this is set with the acl params), you still have a last thing to handle. Indeed the url itself is not enough, if you try accessing it, you'll face some ugly xml like that. The solution I used was to use the aws gem which provides a great method : AWS::S3Object#url_for. With that method, you can get an authorized url for the desired duration with your bucket name and the key (the path of your file in the bucket) of your file So my custom url accessor looked something like this : def url parent_url = super # If the url is nil, there's no need to look in the bucket for it return nil if parent_url.nil? # This will give you the last part of the URL, the 'key' params you need # but it's URL encoded, so you'll need to decode it object_key = parent_url.split(/\//).last AWS::S3::S3Object.url_for( CGI::unescape(object_key), ENV['S3_BUCKET'], use_ssl: true) end This involves some weird handling with the CGI::unescape, and there's probably a better way to achieve this, but this is one way to do it, and it works fine. Live example I'll set up a live example running on heroku, on which you'll be able to upload files in more than 30s coming soon Finally ! The demo if finally here : and code source can be found here : EDIT I changed every access to AWS variables (BUCKET, SECRET_KEY and ACCESS_KEY) by using environment variables. By doing so you don't have to put the variables directly in your files, but you just have to set correctly the variables : export S3_BUCKET=<YOUR BUCKET> export AWS_ACCESS_KEY_ID=<YOUR KEY> export AWS_SECRET_KEY_ID=<YOUR SECRET KEY> When deploying on heroku you just have to set the variables with heroku config:add AWS_ACCESS_KEY_ID=<YOUR KEY> --app <YOUR APP> ... Feel free to comment on the post but keep it clean and on topic.blog comments powered by Disqus Feel free to comment on the post but keep it clean and on topic.blog comments powered by Disqus
http://pjambet.github.io/blog/direct-upload-to-s3/
CC-MAIN-2016-22
refinedweb
1,567
58.52
int menu ( char *menu_name ) char *menu_name; // string containing menu name Synopsis #include "silver.h" The menu function causes the specified custom menu to be loaded, and allows the user to make a menu selection from it. A detailed description of the construction of custom menus, and the use of this function appears in Custom Menus. Parameters menu_name is a null-terminated string containing the base name of the menu to be loaded, which will be appended with ".CMF", to form the file name. Return Value menu returns the number associated with the menu item if a selection was successfully made. Otherwise, if Esc was pressed, menu returns 0. If a control key or function key was pressed, the predefined variable ich is set to the value of the key pressed, and menu returns 1. Help URL:
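The page gives no example, so the sketch below shows one plausible way to call menu() based on the description above; the menu base name and the way individual item numbers are handled are invented for the illustration.

#include "silver.h"

/* Hypothetical usage: load REPORTS.CMF and act on the user's choice. */
void run_reports_menu(void)
{
    int choice = menu("REPORTS");

    if (choice == 0) {
        /* Esc was pressed - no selection was made */
        return;
    }
    if (choice == 1) {
        /* a control or function key was pressed; its value is in ich */
        return;
    }
    /* otherwise 'choice' is the number associated with the selected item */
}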
http://silverscreen.com/menu.htm
CC-MAIN-2021-21
refinedweb
136
62.88
#include <KServiceAction> Detailed Description Represents an action in a .desktop file Actions are defined with the config key Actions in the [Desktop Entry] group, followed by one group per action, as per the desktop entry standard. - See also - KService::actions Definition at line 36 of file kserviceaction.h. Constructor & Destructor Documentation Creates a KServiceAction. Normally you don't have to do this, KService creates the actions when parsing the .desktop file. Definition at line 44 of file kserviceaction.cpp. Needed for operator>> Definition at line 39 of file kserviceaction.cpp. Destroys a KServiceAction. Definition at line 51 of file kserviceaction.cpp. Copy constructor. Definition at line 55 of file kserviceaction.cpp. Member Function Documentation - Returns - the action's internal data. Definition at line 66 of file kserviceaction.cpp. - Returns - the action's exec command, as defined by the Exec key in the desktop action group Definition at line 91 of file kserviceaction.cpp. - Returns - the action's icon, as defined by the Icon key in the desktop action group Definition at line 86 of file kserviceaction.cpp. Returns whether the action is a separator. This is true when the Actions key contains "_SEPARATOR_". Definition at line 101 of file kserviceaction.cpp. - Returns - the action's internal name For instance Actions=Setup;... and the group [Desktop Action Setup] define an action with the name "Setup". Definition at line 76 of file kserviceaction.cpp. Returns whether the action should be suppressed in menus. This is useful for having actions with a known name that the code looks for explicitly, like Setup and Root for kscreensaver actions, and which should not appear in popup menus. - Returns - true to suppress this service Definition at line 96 of file kserviceaction.cpp. Assignment operator. Definition at line 60 of file kserviceaction.cpp. Sets the action's internal data to the given userData. Definition at line 71 of file kserviceaction.cpp. - Returns - the action's text, as defined by the Name key in the desktop action group Definition at line 81 of file kserviceaction.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2019 The KDE developers. Generated on Thu Apr 18 2019 02:45:55 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
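The class reference above contains no usage example. The sketch below shows how the documented accessors might be used together with KService::actions() (referenced in the "See also" note); it assumes a valid KService::Ptr obtained elsewhere and is an illustration only.

#include <KService>
#include <KServiceAction>
#include <QDebug>

// List the desktop actions that KService parsed from a .desktop file.
void listActions(const KService::Ptr &service)
{
    const QList<KServiceAction> actions = service->actions();
    for (const KServiceAction &action : actions) {
        if (action.isSeparator() || action.noDisplay())
            continue; // skip separators and actions hidden from menus

        qDebug() << action.name()   // internal name, e.g. "Setup"
                 << action.text()   // Name key of the desktop action group
                 << action.icon()   // Icon key
                 << action.exec();  // Exec key
    }
}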
https://api.kde.org/frameworks/kservice/html/classKServiceAction.html
CC-MAIN-2019-26
refinedweb
384
51.04
sys/unix The sys/unix package provides access to the raw system call interface of the underlying operating system. See: Porting Go to a new architecture/OS combination or adding syscalls, types, or constants to an existing architecture/OS pair requires some manual effort; however, there are tools that automate much of the process. There are currently two ways we generate the necessary files. We are currently migrating the build system to use containers so the builds are reproducible. This is being done on an OS-by-OS basis. Please update this documentation as components of the build system change. GOOS != "linux") The old build system generates the Go files based on the C header files present on your system. This means that files for a given GOOS/GOARCH pair must be generated on a system with that OS and architecture. This also means that the generated code can differ from system to system, based on differences in the header files. To avoid this, if you are using the old build system, only generate the Go files on an installation with unmodified header files. It is also important to keep track of which version of the OS the files were generated from (ex. Darwin 14 vs Darwin 15). This makes it easier to track the progress of changes and have each OS upgrade correspond to a single change. To build the files for your current OS and architecture, make sure GOOS and GOARCH are set correctly and run mkall.sh. This will generate the files for your specific system. Running mkall.sh -n shows the commands that will be run. Requirements: bash, go GOOS == "linux") The new build system uses a Docker container to generate the go files directly from source checkouts of the kernel and various system libraries. This means that on any platform that supports Docker, all the files using the new build system can be generated at once, and generated files will not change based on what the person running the scripts has installed on their computer. The OS specific files for the new build system are located in the ${GOOS} directory, and the build is coordinated by the ${GOOS}/mkall.go program. When the kernel or system library updates, modify the Dockerfile at ${GOOS}/Dockerfile to checkout the new release of the source. To build all the files under the new build system, you must be on an amd64/Linux system and have your GOOS and GOARCH set accordingly. Running mkall.sh will then generate all of the files for all of the GOOS/GOARCH pairs in the new build system. Running mkall.sh -n shows the commands that will be run. Requirements: bash, go, docker This section describes the various files used in the code generation process. It also contains instructions on how to modify these files to add a new architecture/OS or to add additional syscalls, types, or constants. Note that if you are using the new build system, the scripts/programs cannot be called normally. They must be called from within the docker container. The hand-written assembly file at asm_${GOOS}_${GOARCH}.s implements system call dispatch. There are three entry points: func Syscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr) func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr) func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr) The first and second are the standard ones; they differ only in how many arguments can be passed to the kernel. The third is for low-level use by the ForkExec wrapper. Unlike the first two, it does not call into the scheduler to let it know that a system call is running. 
When porting Go to a new architecture/OS, this file must be implemented for each GOOS/GOARCH pair. Mksysnum is a Go program located at ${GOOS}/mksysnum.go (or mksysnum_${GOOS}.go for the old system). This program takes in a list of header files containing the syscall number declarations and parses them to produce the corresponding list of Go numeric constants. See zsysnum_${GOOS}_${GOARCH}.go for the generated constants. Adding new syscall numbers is mostly done by running the build on a sufficiently new installation of the target OS (or updating the source checkouts for the new build system). However, depending on the OS, you may need to update the parsing in mksysnum. The syscall.go, syscall_${GOOS}.go, syscall_${GOOS}_${GOARCH}.go are hand-written Go files which implement system calls (for unix, the specific OS, or the specific OS/Architecture pair respectively) that need special handling and list //sys comments giving prototypes for ones that can be generated. The mksyscall.go program takes the //sys and //sysnb comments and converts them into syscalls. This requires the name of the prototype in the comment to match a syscall number in the zsysnum_${GOOS}_${GOARCH}.go file. The function prototype can be exported (capitalized) or not. Adding a new syscall often just requires adding a new //sys function prototype with the desired arguments and a capitalized name so it is exported. However, if you want the interface to the syscall to be different, often one will make an unexported //sys prototype, and then write a custom wrapper in syscall_${GOOS}.go. For each OS, there is a hand-written Go file at ${GOOS}/types.go (or types_${GOOS}.go on the old system). This file includes standard C headers and creates Go type aliases to the corresponding C types. The file is then fed through godef to get the Go compatible definitions. Finally, the generated code is fed though mkpost.go to format the code correctly and remove any hidden or private identifiers. This cleaned-up code is written to ztypes_${GOOS}_${GOARCH}.go. The hardest part about preparing this file is figuring out which headers to include and which symbols need to be #defined to get the actual data structures that pass through to the kernel system calls. Some C libraries preset alternate versions for binary compatibility and translate them on the way in and out of system calls, but there is almost always a #define that can get the real ones. See types_darwin.go and linux/types.go for examples. To add a new type, add in the necessary include statement at the top of the file (if it is not already there) and add in a type alias line. Note that if your type is significantly different on different architectures, you may need some #if/#elif macros in your include statements. This script is used to generate the system‘s various constants. This doesn’t just include the error numbers and error strings, but also the signal numbers and a wide variety of miscellaneous constants. The constants come from the list of include files in the includes_${uname} variable. A regex then picks out the desired #define statements, and generates the corresponding Go constants. The error numbers and strings are generated from #include <errno.h>, and the signal numbers and strings are generated from #include <signal.h>. All of these constants are written to zerrors_${GOOS}_${GOARCH}.go via a C program, _errors.c, which prints out all the constants. To add a constant, add the header that includes it to the appropriate variable. 
Then, edit the regex (if necessary) to match the desired constant. Avoid making the regex too broad to avoid matching unintended constants. This program is used to extract duplicate const, func, and type declarations from the generated architecture-specific files listed below, and merge these into a common file for each OS. The merge is performed in the following steps: zerrors_${GOOS}_${GOARCH}.go A file containing all of the system's generated error numbers, error strings, signal numbers, and constants. Generated by mkerrors.sh (see above). zsyscall_${GOOS}_${GOARCH}.go A file containing all the generated syscalls for a specific GOOS and GOARCH. Generated by mksyscall.go (see above). zsysnum_${GOOS}_${GOARCH}.go A list of numeric constants for all the syscall number of the specific GOOS and GOARCH. Generated by mksysnum (see above). ztypes_${GOOS}_${GOARCH}.go A file containing Go types for passing into (or returning from) syscalls. Generated by godefs and the types file (see above).
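As a concrete illustration of the //sys prototypes described above: a line such as the one below in syscall_linux.go is turned by mksyscall.go into a generated wrapper in zsyscall_linux_${GOARCH}.go, which user code then calls like any other function. The specific syscall chosen here (Chdir) is only an example; check the actual source files for the exact prototypes.

// In syscall_linux.go (prototype comment picked up by mksyscall.go):
//
//	//sys	Chdir(path string) (err error)
//
// User code then simply calls the exported wrapper:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Chdir("/tmp"); err != nil {
		fmt.Println("chdir failed:", err)
	}
}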
https://go.googlesource.com/sys/+/d3039528d8ac77ce0bb04ffba446619b12674a8b/unix/
CC-MAIN-2021-49
refinedweb
1,389
64.3
On Sat, Sep 10, 2011 at 8:11 PM, Nobody <nobody at nowhere.com> wrote: > I suspect that the one-to-one correspondence between classes and .class > files is mostly technical (e.g. Java's security model). The one-to-one > correspondence between class files and source files could probably be > relaxed, but at the expense of complicating the IDE and toolchain. One class per object file isn't a problem - you can always .jar your classes if the proliferation of small files bothers you, and then it's just a different way of indexing the mound of code. One class per source file complicates the human's view in order to simplify the tools'. Not sure that's really worthwhile. > I never saw it as a problem, given that Java is fundamentally class-based: > there are no global variables or functions, only classes. Yeah... of course you can easily simulate globals with static members in a dedicated class, but it's slower. THIS, though, is where Java's security model comes in - you can assign security X to Globals1.class and security Y to Globals2.class, rather than trying to juggle security issues in a monolithic "globals" namespace. IMHO it's not worth the hassle, though. I'd rather just have globals. ChrisA
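To make the "globals via static members in a dedicated class" remark concrete, here is a minimal Java sketch; the class and field names are invented for the example.

// A dedicated holder class whose static members act as globals, so a
// security policy can be attached to Globals1.class on its own.
public final class Globals1 {
    public static int requestCount = 0;
    public static String appName = "demo";

    private Globals1() {} // not meant to be instantiated
}

class Consumer {
    void touch() {
        Globals1.requestCount++;              // used like a global variable
        System.out.println(Globals1.appName);
    }
}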
https://mail.python.org/pipermail/python-list/2011-September/612149.html
CC-MAIN-2019-35
refinedweb
214
67.55
Toolkit which overloads 'print(…)' and 'input()' to redirect them to a web page.

Project description

term2web: terminal in a web page (Python version)

This library is like termcolor, but with all the formatting possibilities of CSS. Install it (pip install term2web), import it (from term2web import *), and all print(…) and input(…) calls will be redirected to a web page.

You can also launch: git clone, cd term2web-python, python3 main.py (or directly python3 Basic.py or python3 WithCSS.py). Live demonstration:

There are three other functions available.

set_property(name, value) applies the CSS property of name name and value value. Example:

set_property("font-style", "italic")

set_properties(properties) applies the CSS properties stored in properties, which is a dictionary whose keys are property names and whose values are the corresponding property values. Example:

set_properties({
    "text-decoration-line": "line-through",
    "text-decoration-style": "wavy",
    "text-decoration-color": "red"
})

reset_properties() removes all the CSS properties set by the above functions.

Basic.py is an example with calls to print(…) and input(…), but without CSS formatting. WithCSS.py shows how CSS is used to format the displayed text. There is also a stub for this library.

Unlike other programs based on the Atlas toolkit, on which this library is built, it is not possible to simultaneously launch two or more instances of a program based on the term2web library. This is intentional, in order to keep the library simple to use, as it is mainly intended for beginners.

This project is based on the Atlas toolkit. Other projects using this toolkit are also available.
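Putting the calls described above together, a minimal script might look like this; the prompt text and CSS properties are arbitrary examples.

from term2web import *

set_property("font-style", "italic")
print("Welcome to term2web!")

reset_properties()
name = input("What is your name? ")

set_properties({
    "text-decoration-line": "underline",
    "text-decoration-color": "red"
})
print("Hello, " + name + "!")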
https://pypi.org/project/term2web/
CC-MAIN-2020-05
refinedweb
285
57.37
Note The I/O registry is only meant to be used directly by users who want to define their own custom readers/writers. Users who want to find out more about what built-in formats are supported by Table by default should see Reading and writing Table objects. No built-in formats are currently defined for NDData, but this will be added in future). The I/O registry is a sub-module used to define the readers/writers available for the Table and NDData classes. The following example demonstrates how to create a reader for the Table class. First, we can create a highly simplistic FITS reader which just reads the data as a structured array: from astropy.table import Table def fits_table_reader(filename, hdu=1): from astropy.io import fits data = fits.open(filename)[hdu].data return Table(data) and then register it: from astropy.io import registry registry.register_reader('fits', Table, fits_table_reader) Reader functions can take any arguments except format (since this is reserved for read()) and should return an instance of the class specified as the second argument of register_reader (Table in the above case.) We can then read in a FITS table with: t = Table.read('catalog.fits', format='fits') In practice, it would be nice to have the read method automatically identify that this file was a FITS file, so we can construct a function that can recognize FITS files, which we refer to here as an identifier function. An identifier function should take three arguments: the first should be a string which indicates whether the identifier is being called from read or write, and the second and third are the positional and keyword arguments passed to Table.read respectively (and are therefore a list and a dictionary). We can write a simplistic function that only looks at filenames (but in practice, this function could even look at the first few bytes of the file for example). The only requirement is that it return a boolean indicating whether the input matches that expected for the format: def fits_identify(origin, args, kwargs): return isinstance(args[0], basestring) and \ args[0].lower().split('.')[-1] in ['fits', 'fit'] Note Identifier functions should be prepared for arbitrary input - in particular, the first argument may not be a filename or file object, so it should not assume that this is the case. We then register this identifier function: registry.register_identifier('fits', Table, fits_identify) And we can then do: t = Table.read('catalog.fits') If multiple formats match the current input, then an exception is raised, and similarly if no format matches the current input. In that case, the format should be explicitly given with the format= keyword argument. Similarly, it is possible to create custom writers. To go with our simplistic FITS reader above, we can write a simplistic FITS writer: def fits_table_writer(table, filename, clobber=False): import numpy as np from astropy.io import fits fits.writeto(filename, np.array(table), clobber=clobber) We then register the writer: io_registry.register_writer('fits', Table, fits_table_writer) And we can then write the file out to a FITS file: t.write('catalog_new.fits', format='fits') If we have registered the identifier as above, we can simply do: t.write('catalog_new.fits')
http://docs.astropy.org/en/v0.2.1/io/registry.html
CC-MAIN-2016-44
refinedweb
538
53.92
chmod(2)                  BSD System Calls Manual                  chmod(2)

NAME
     chmod, fchmod -- change mode of file

SYNOPSIS
     #include <sys/types.h>
     #include <sys/stat.h>

     int chmod(const char *path, mode_t mode);
     int fchmod(int fildes, mode_t mode);

DESCRIPTION
     The function chmod() sets the file permission bits of the file specified by the pathname path to mode. Fchmod() sets the permission bits of the specified file descriptor fildes. Chmod() verifies that the process owner (user) either owns the file specified by path (or fildes), or is the super-user. A mode is created from or'd permission bit masks defined in <sys/stat.h>.

     The ISVTX (the sticky bit) indicates to the system which executable files are shareable (the default) and the system maintains the program text of the files in the swap area. The sticky bit may only be set by the super-user on shareable executable files.

     The chmod() system call will fail and the file mode will be unchanged if:

     [EACCES]   Search permission is denied for a component of the path prefix.
     [EFAULT]   Path points outside the process's allocated address space.
     [EINTR]    Its execution was interrupted by a signal.
     [EROFS]    The named file resides on a read-only file system.

     fchmod() will fail if:

     [EBADF]    fildes is not a valid file descriptor.
     [EINVAL]   fildes refers to a socket, not to a file.
     [EINVAL]   mode is not a valid file mode.
     [EINTR]    Its execution was interrupted by a signal.
     [EIO]      An I/O error occurred while reading from or writing to the file system.
     [EPERM]    The effective user ID does not match the owner of the file and the effective user ID is not the super-user.
     [EROFS]    The file resides on a read-only file system.

LEGACY SYNOPSIS
     #include <sys/types.h>
     #include <sys/stat.h>

     The include file <sys/types.h> is necessary.

SEE ALSO
     chmod(1), chown(2), open(2), stat(2), compat(5), sticky(8)

STANDARDS
     The chmod() function is expected to conform to IEEE Std 1003.1-1988 (``POSIX.1'').

HISTORY
     The fchmod() function call appeared in 4.2BSD.

4th Berkeley Distribution        June 4, 1993        4th Berkeley Distribution
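The page has no EXAMPLES section; typical usage looks like the following sketch, where the path and permission bits are arbitrary.

#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
    /* give the owner read/write access and everyone else read-only access */
    if (chmod("/tmp/example.txt", S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH) == -1) {
        perror("chmod");
        return 1;
    }
    return 0;
}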
http://manpagez.com/man/2/chmod/
CC-MAIN-2018-34
refinedweb
350
67.04
Siva, anteater is not very close to commons to my knowledge, as sebb pointed out somewhat quickly. I looked a bit at the documentation and saw they have jelly support as well which provides much of your expectations. The foreach tag, however, seems to be theirs. So, if you allow, here's a jelly answer, all in jelly namespace, which is quite close to java itself, I suppose this was your intent of posting to this list. This programme reads the lines from "path-to- file" and outputs them to output... you should be able to do something useful out of it provided you can embed this into anteater. <?xml version="1.0" encoding="utf-8" ?> <j:jelly xmlns: <j:new <j:arg </j:new> <j:new <j:arg </j:new> <j:set <j:while Here is a line: ${line} <j:set </j:while> </j:jelly> Here I have only used the core tag-library of jelly... there's a whole lot more you can do with other tag-libraries. The util tag library (see and examples at) might actually do the job easier provided you do not have zillions of URLs. paul Le 07-juil.-08 à 22:12, Kadamban, Sivasankari a écrit : > Hi, > > I am trying to use anteater for verifying various URLs in our > application using foreach. > > <target name="verify"> > <httpRequest group="server" path="${url}" method="GET"> > <match> > <responseCode value="200"/> > </match> > </httpRequest> > </target> > > <foreach list="url1,url2,url3,url4" target="verify" param="url"> </ > foreach> > > Is there any way that I can specify the list of URLs in a file and > read them through one by one in the "for loop"? > > Thanks, > Siva >
http://mail-archives.apache.org/mod_mbox/commons-dev/200807.mbox/%[email protected]%3E
CC-MAIN-2016-26
refinedweb
276
67.99
""" What is this register used for? Hmm.. I'll just rename it to veryuniquename, do a textual search, and find all references! Ok.. Waiting for the search to end.. any minute now.. Done! Now I just need to understand which of the search result is relevant to the current usage frame of the register. Shouldn't be too hard, right? """ If this happened to you (perhaps more than once), you are in for a treat! Just Shift-X, and your troubles will go away! You may also re(g)name the register in the usage frame. Just Shift-N, and follow instructions! Also - instead of changing the types of all the usages to a certain type, just Shift-T once. Note: Sometimes there is already another plugin using Shift-T. Remove that plugin - you never used it before anyway :-). Prerequisites This plugin uses sark to interact with the IDA scripts in a comfortable way, and cachetools to cache the frame scan which makes this a whole of a lot faster. [For python2] pip install sark pip install cachetools [For python3] If using python3 variant of IDA, you should instead run: pip3 install -U git+ pip3 install cachetools Clone the repo git clone Plugin installation The sark codebase offers many plugins. One of them is: We recommend copying it to your plugins directory and then run IDA once with administrator privilages (so it can create the plugins.list files). After doing so, you can add new plugins by adding the path to them to one of the plugins.list files created (eg. one is created in the cfg folder of IDA) Now, add to one of the plugins.list files: FULLPATH\oregami\oregami_plugin.py FULLPATH\oregami\regname_plugin.py FULLPATH\oregami\typeregter_plugin.py Restart IDA, and the plugins should work. Alternatively: Copy all files (including internal oregami folder, excluding setup.py) to the IDA plugins directory. Use as script Besides being used as plugins, oregami can be used also to write your own scripts! For this, you should first install using included setup.py file. Meaning that you should call: 'python setup.py develop', and from then on you may use the internal classes and functions. Note that we recommend using 'develop' and not 'install', so that if you pull a new version of oregami, it will work out of the box. For example: -- script.py -- def find_func_usage(func_ea, reg='r0'): """ Find and print all usages of a register, including the information of the specific operands it is in, and what operation it does in the operand. 
""" import oregami rf = oregami.RegFrame(func_ea, reg) for insn in rf.get_instructions(): print('Addr:{:x}'.format(insn.ea)) for opnd in insn.operands: if opnd.uf_is_external: continue print('--opnd_idx:{} - {}'.format(opnd.n, oregami.UsageBits(opnd.op_flags))) Scanning the usage frame Let's assume the following sequence of opcodes: ROM:01000010 e_lis r10, 0x4004 # 0x40040000 # Load Immediate ShiftedROM:01000014 e_add16i r10, r10, 0x1337 # 0x40041337 # Add ImmediateROM:01000020 se_mr r30, r31 # Move RegisterROM:01000022 cmplw r11, r10 # Compare Logical WordROM:01000026 se_bge loc_1000036 # Branch if greater than or equalROM:01000028ROM:01000028 loc_1000028: # CODE XREF: sub_0100000+144↓jROM:01000028 e_stmw r30, 0(r11) # Store Multiple WordROM:0100002C e_add16i r11, r11, 8 # Add ImmediateROM:01000030 cmplw r11, r10 # Compare Logical WordROM:01000034 se_blt loc_1000028 # Branch if less thanROM:01000036ROM:01000036 loc_1000036: # CODE XREF: sub_0100000+136↑jROM:01000036 e_add16i r10, r10, 8 # Add ImmediateROM:0100003A e_li r11, 0 # Load Immediate If we scan the usage frame of the r10 register, starting from the address 01000022, we will find three types of usages included in the usage frame. 1. Init This will include the instructions which initialize the value of the register. We may want to include only the last instruction that changed the register value (address 01000014 in the example), or a sequence of operations used to set the initial value of the register (addresses 01000010 and 01000014 in the example). The sequence of operations used in the register initialization may be called an "init stage". You may choose to support an init stage, or not, depending on the parameter init_stage_bool in the RegFrame initialization. 2. Pure This will include the instructions which use the value of the register, and do not change it in any way. These correspond to lines 01000022 and 01000030 in the example. 3. Break This will include the instructions which use the value of the register, but then change it's value. These instructions may be seen as included in two distinct usage frames - the one leading to them, and the one originating from them. This corresponds to line 01000036. 4. Out Break When scanning the usage register, getting to an init operation, or a break operation will cause us to stop scanning in a certain direction. But, we may also stop the scan because of instructions outside the usage frame. For example, scanning the usage frame of the r11 register starting from the address 01000030 will stop on line 0100003A. Classes and Functions RegFrame This is the basic class used in oregami. By initializing it on an address and specific register, it will scan the usage frame of the register, and will create an UFIntruction for all the relevant instructions. get_instruction - get the instruction from the given address get_instructions - a generator, returning the instructions in the usage frame. You may also ask for specific subsets of the used instructions: get_init_instructions - get only instructions of the init type get_pure_instructions - get only instructions of the pure type get_break_instructions - get only instructions of the break type get_nobreak_instructions - get only instructions which are not of the break type (ie. init + pure) get_noinit_instructions - get only instructions which are not of the init type (ie. 
get_outbreak_instructions - get only instructions of the out break type

By default this class will cache the results of the scan, and prevent itself from rescanning the same usage frame. This means that requesting the RegFrame of any instruction that was a part of the usage frame (specifically of the init + pure types - not breaks, because starting a scan on them should return the usage frame originating from them) will return the same pre-calculated RegFrame instance. In order to force a rescan, use the force flag when initializing the class.

RFInstruction

This is the class returned by the RegFrame, representing an instruction in the usage frame. This class inherits from the sark Instruction class, and as such supports the same methods. One main difference is that instead of containing an operands array of the sark Operand class, it will contain an array of RFOperand instances. This class also contains methods to understand the instruction type (init, pure, break, outbreak), and the operation bits (read, write, explicit, and different types of implicit).

RFOperand

This is the class in the operands array inside a specific RFInstruction. This class inherits from the sark Operand class, and as such supports the same methods. In addition to the sark operations, it contains methods to get the operation bits (read, write, explicit, and different types of implicit), and to know if the operand is actually part of the usage frame (useful to know which operand in a break type instruction is part of the usage frame).

RegInstruction

This is a class used to analyze a specific instruction. It inherits from the sark Instruction class, and as such supports the same methods. One main difference is that instead of containing an operands array of the sark Operand class, it will contain an array of RegOperand instances.

RegOperand

This is a class used to analyze a specific operand. It inherits from the sark Operand class, and as such supports the same methods.
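As a small companion to the script shown earlier, the typed accessors described above can be used to summarize a usage frame; the function address and register name below are placeholders.

import oregami

def summarize_usage(func_ea, reg='r10'):
    # scan the usage frame once; results are cached as described above
    rf = oregami.RegFrame(func_ea, reg)

    for insn in rf.get_init_instructions():
        print('init      at {:x}'.format(insn.ea))

    for insn in rf.get_pure_instructions():
        print('pure      at {:x}'.format(insn.ea))

    for insn in rf.get_break_instructions():
        print('break     at {:x}'.format(insn.ea))

    for insn in rf.get_outbreak_instructions():
        print('out break at {:x}'.format(insn.ea))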
https://amp.kitploit.com/2020/10/oregami-ida-plugins-and-scripts-for.html
CC-MAIN-2022-27
refinedweb
1,259
61.26
Introduction

A while ago, Tomasz introduced Kotlin development on Android. To remind you: Kotlin is a new programming language developed by JetBrains, the company behind one of the most popular Java IDEs, IntelliJ IDEA. Like Java, Kotlin is a general-purpose language. Since it compiles to Java Virtual Machine (JVM) bytecode, it can be used side-by-side with Java, and it doesn't come with a performance overhead.

In this article, I will cover the top 10 useful features to boost your Android development.

Note: at the time of writing this article, the actual versions were Android Studio 2.1.1 and Kotlin 1.0.2.

Kotlin Setup

Since Kotlin is developed by JetBrains, it is well-supported in both Android Studio and IntelliJ. The first step is to install the Kotlin plugin. After successfully doing so, new actions will be available for converting your Java code to Kotlin. The two new options are:

- Create a new Android project and set up Kotlin in the project.
- Add Kotlin support to an existing Android project.

To learn how to create a new Android project, check the official step by step guide. To add Kotlin support to a newly created or an existing project, open the find action dialog using Command + Shift + A on Mac or Ctrl + Shift + A on Windows/Linux, and invoke the Configure Kotlin in Project action.

To create a new Kotlin class, select:

File > New > Kotlin file/class, or
File > New > Kotlin activity

Alternatively, you can create a Java class and convert it to Kotlin using the action mentioned above. Remember, you can use it to convert any class, interface, enum or annotation, and this is an easy way to compare Java to Kotlin code.

Another useful element that saves a lot of typing is Kotlin Android Extensions. To use them, you have to apply another plugin in your module build.gradle file:

apply plugin: 'kotlin-android-extensions'

Caveat: if you are using the Kotlin plugin action to set up your project, it will put the following code in your top-level build.gradle file:

buildscript {
    ext.kotlin_version = '1.0.2'

    repositories {
        jcenter()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

This will cause the extension not to work. To fix that, simply copy that code to each of the project modules in which you wish to use Kotlin.

If you set up everything correctly, you should be able to run and test your application the same way you would in a standard Android project, but now using Kotlin.

Saving Time with Kotlin

So, let's start by describing some key aspects of the Kotlin language and providing tips on how you can save time by using it instead of Java.

Feature #1: Static Layout Import

One of the most common pieces of boilerplate code in Android is using the findViewById() function to obtain references to your views in Activities or Fragments. There are solutions, such as the Butterknife library, that save some typing, but Kotlin takes this another step by allowing you to import all references to views from the layout with one import.

For example, consider the following activity XML layout:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="co.ikust.kotlintest.MainActivity">

    <TextView
        android:id="@+id/helloWorldTextView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</RelativeLayout>

And the accompanying activity code:

package co.ikust.kotlintest

import android.os.Bundle
import android.support.v7.app.AppCompatActivity
import kotlinx.android.synthetic.main.activity_main.*

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        helloWorldTextView.text = "Hello World!"
    }
}

To get the references for all the views in the layout with a defined ID, use the Kotlin Android Extensions plugin applied above.
Remember to type in this import statement:

import kotlinx.android.synthetic.main.activity_main.*

Note that you don't need to write semicolons at the end of lines in Kotlin, because they are optional. The TextView from the layout is imported as a TextView instance with the name equal to the ID of the view. Don't be confused by the syntax used to set the label:

helloWorldTextView.text = "Hello World!"

We will cover that shortly.

Caveats:

- Make sure you import the correct layout, otherwise the imported View references will have a null value.
- When using fragments, make sure the imported View references are used after the onCreateView() function call. Import the layout in the onCreateView() function and use the View references to set up the UI in onViewCreated(). The references won't be assigned before the onCreateView() method has finished.

Feature #2: Writing POJO Classes with Kotlin

Something that will save you the most time with Kotlin is writing the POJO (Plain Old Java Object) classes used to hold data, for example, the request and response bodies of a RESTful API. In applications that rely on a RESTful API, there will be many classes like that. In Kotlin, much is done for you, and the syntax is concise. For example, consider the following class in Java:

public class User {
    private String firstName;
    private String lastName;

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

When working with Kotlin, you don't have to write the public keyword. By default, everything has public scope. For example, if you want to declare a class, you simply write:

class MyClass {
}

The equivalent of the Java code above in Kotlin:

class User {
    var firstName: String? = null
    var lastName: String? = null
}

Well, that saves a lot of typing, doesn't it? Let's walk through the Kotlin code.

When defining variables in Kotlin, there are two options:

- Mutable variables, defined by the var keyword.
- Immutable variables, defined by the val keyword.

The next thing to note is that the syntax differs a bit from Java; first you declare the variable name and then follow with the type. Also, by default, properties are non-null types, meaning that they can't accept a null value. To define a variable that accepts a null value, a question mark must be added after the type. We will talk about this and null-safety in Kotlin later.

Another important thing to note is that Kotlin doesn't have the ability to declare fields for a class; only properties can be defined. So, in this case, firstName and lastName are properties that have been assigned default getter/setter methods. As mentioned, in Kotlin, they are both public by default.

Custom accessors can be written, for example:

class User {
    var firstName: String? = null
    var lastName: String? = null

    val fullName: String?
        get() = firstName + " " + lastName
}

From the outside, when it comes to syntax, properties behave like public fields in Java:

val userName = user.firstName
user.firstName = "John"

Note that the new property fullName is read-only (defined by the val keyword) and has a custom getter; it simply appends the first and last name.

All properties in Kotlin must be assigned when declared or in a constructor. There are some cases when that isn't convenient; for example, for properties that will be initialized via dependency injection. In that case, a lateinit modifier can be used. Here is an example:

class MyClass {
    lateinit var firstName : String;

    fun inject() {
        firstName = "John";
    }
}

More details about properties can be found in the official documentation.

Feature #3: Class Inheritance and Constructors

Kotlin has a more concise syntax when it comes to constructors, as well.
Constructors

Kotlin classes have a primary constructor and one or more secondary constructors. An example of defining a primary constructor:

class User constructor(firstName: String, lastName: String) {
}

The primary constructor goes after the class name in the class definition. If the primary constructor doesn't have any annotations or visibility modifiers, the constructor keyword can be omitted:

class Person(firstName: String) {
}

Note that a primary constructor cannot have any code; any initialization must be done in the init code block:

class Person(firstName: String) {
    init {
        // perform primary constructor initialization here
    }
}

Furthermore, a primary constructor can be used to define and initialize properties:

class User(var firstName: String, var lastName: String) {
    // ...
}

Just like regular ones, properties defined from a primary constructor can be immutable (val) or mutable (var).

Classes may have secondary constructors as well; the syntax for defining one is as follows:

class Person(val name: String) {
    constructor(name: String, parent: Person) : this(name) {
        parent.children.add(this)
    }
}

Note that every secondary constructor must delegate to the primary constructor. This is similar to Java, which uses the this keyword:

class User(val firstName: String, val lastName: String) {
    constructor(firstName: String) : this(firstName, "") {
        // ...
    }
}

When instantiating classes, note that Kotlin doesn't have the new keyword that Java does. To instantiate the aforementioned User class, use:

val user = User("John", "Doe")

Introducing Inheritance

In Kotlin, all classes extend from Any, which is similar to Object in Java. By default, classes are closed, like final classes in Java. So, in order to extend a class, it has to be declared as open or abstract:

open class User(val firstName: String, val lastName: String)

class Administrator(firstName: String, lastName: String) : User(firstName, lastName)

Note that you have to delegate to the constructor of the extended class, which is similar to calling the super() method in the constructor of a new class in Java.

For more details about classes, check the official documentation.

Feature #4: Lambda Expressions

Lambda expressions, introduced with Java 8, are one of its favorite features. However, things are not so bright on Android, as it still only supports Java 7, and it looks like Java 8 won't be supported anytime soon. So, workarounds, such as Retrolambda, bring lambda expressions to Android. With Kotlin, no additional libraries or workarounds are required.

Functions in Kotlin

Let's start by quickly going over the function syntax in Kotlin:

fun add(x: Int, y: Int) : Int {
    return x + y
}

The declared return type can be omitted when the function body is a single expression, fun add(x: Int, y: Int) = x + y, and in that case the inferred return type is still Int. It's worth repeating that everything in Kotlin is an object, extended from Any, and there are no primitive types.

An argument of the function can have a default value, for example:

fun add(x: Int, y: Int = 1) : Int {
    return x + y;
}

In that case, the add() function can be invoked by passing only the x argument. The equivalent Java code would be:

int add(int x) {
    return add(x, 1);
}

int add(int x, int y) {
    return x + y;
}

Another nice thing when calling a function is that named arguments can be used. For example:

add(y = 12, x = 5)

For more details about functions, check the official documentation.

Using Lambda Expressions in Kotlin

Lambda expressions in Kotlin can be viewed as anonymous functions in Java, but with a more concise syntax.
Using Lambda Expressions in Kotlin

Lambda expressions in Kotlin can be viewed as anonymous functions in Java, but with a more concise syntax. As an example, let's show how to implement a click listener in Java and Kotlin.

In Java:

view.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        Toast.makeText(v.getContext(), "Clicked on view", Toast.LENGTH_SHORT).show();
    }
});

In Kotlin:

view.setOnClickListener({ view -> toast("Click") })

Wow! Just one line of code! We can see that the lambda expression is surrounded by curly braces. Parameters are declared first, and the body goes after the -> sign. With the click listener, the type of the view parameter isn't specified since it can be inferred. The body is simply a call to a toast() helper function for showing a toast message.

Also, if parameters aren't used, we can leave them out:

view.setOnClickListener({ toast("Click") })

Kotlin optimizes calls into Java libraries: any Java method that accepts an interface with a single method as an argument can be called with a lambda instead of an instance of that interface. Furthermore, if the lambda is the last argument, it can be moved out of the parentheses:

view.setOnClickListener() { toast("Click") }

Finally, if the lambda is the only argument, the parentheses can be left out:

view.setOnClickListener { toast("Click") }

For more information, check the Kotlin for Android Developers book by Antonio Leiva and the official documentation.

Extension Functions

Kotlin, similar to C#, provides the ability to extend existing classes with new functionality by using extension functions. For example, an extension function that calculates the MD5 hash of a String:

fun String.md5(): ByteArray {
    val digester = MessageDigest.getInstance("MD5")
    digester.update(this.toByteArray(Charset.defaultCharset()))
    return digester.digest()
}

Note that the function name is preceded by the name of the extended class (in this case, String), and that the instance of the extended class is available via the this keyword. Extension functions are the equivalent of Java utility functions. An equivalent utility function in Java, for example one that converts a String to a number, would look like:

public static int toNumber(String instance) {
    return Integer.valueOf(instance);
}

The Java function must be placed in a utility class. What that means is that extension functions don't modify the original extended class, but are a convenient way of writing utility methods.

Feature #5: Null-safety

One of the things you struggle with the most in Java is probably NullPointerException. Null-safety is a feature that has been integrated into the Kotlin language and is so implicit you usually won't have to worry about it. The official documentation states that the only possible causes of NullPointerExceptions are:

- An explicit call to throw NullPointerException.
- Using the !! operator (which I will explain later).
- External Java code.
- If a lateinit property is accessed in the constructor before it is initialized, an UninitializedPropertyAccessException will be thrown.

By default, all variables and properties in Kotlin are considered non-null (unable to hold a null value) if they are not explicitly declared as nullable. As already mentioned, to define a variable to accept a null value, a question mark must be added after the type. For example:

val number: Int? = null

However, note that the following code won't compile:

val number: Int? = null
number.toString()

This is because the compiler performs null checks. To compile, a null check must be added:

val number: Int? = null
if (number != null) {
    number.toString()
}

This code will compile successfully.
What Kotlin does in the background, in this case, is that number becomes non-null (Int instead of Int?) inside the if block.

The null check can be simplified using the safe call operator (?.):

val number: Int? = null
number?.toString()

The second line will be executed only if the number is not null. You can even use the famous Elvis operator (?:):

val number: Int? = null
val stringNumber = number?.toString() ?: "Number is null"

If the expression on the left of ?: is not null, it is evaluated and returned. Otherwise, the result of the expression on the right is returned. Another neat thing is that you can use throw or return on the right-hand side of the Elvis operator since they are expressions in Kotlin. For example:

fun sendMailToUser(user: User) {
    val email = user.email ?: throw IllegalArgumentException("User email is null")
    // ...
}

The !! Operator

If you want a NullPointerException thrown the same way as in Java, you can do that with the !! operator. The following code will throw a NullPointerException:

val number: Int? = null
number!!.toString()

Casting

Casting is done by using the as keyword:

val x: String = y as String

This is considered "unsafe" casting, as it will throw a ClassCastException if the cast is not possible, just as Java does. There is a "safe" cast operator that returns null instead of throwing an exception:

val x: String? = y as? String

For more details on casting, check the Type Checks and Casts section of the official documentation, and for more details on null safety, check the Null-Safety section.

lateinit Properties

There is a case in which using lateinit properties can cause an exception similar to a NullPointerException. Consider the following class:

class InitTest {
    lateinit var s: String

    init {
        val len = this.s.length
    }
}

This code will compile without warning. However, as soon as an instance of InitTest is created, an UninitializedPropertyAccessException will be thrown because the property s is accessed before it is initialized.

Feature #6: Function with()

The function with() is useful and comes with the Kotlin standard library. It can be used to save some typing if you need to access many properties of an object. For example:

with(helloWorldTextView) {
    text = "Hello World!"
    visibility = View.VISIBLE
}

It receives an object and an extension function (a lambda with receiver) as parameters. The code block (in the curly braces) is a lambda expression in which the object passed as the first parameter acts as the receiver, so its members can be accessed directly.

Feature #7: Operator Overloading

With Kotlin, custom implementations can be provided for a predefined set of operators. To implement an operator, a member function or an extension function with the given name must be provided. For example, to implement the multiplication operator, a member function or extension function with the name times(argument) must be provided:

operator fun String.times(b: Int): String {
    val buffer = StringBuffer()
    for (i in 1..b) {
        buffer.append(this)
    }
    return buffer.toString()
}

The example above shows an implementation of the binary * operator on String. For example, the following expression will assign the value "TestTestTestTest" to the newString variable:

val newString = "Test" * 4

Since extension functions can be used, operator behavior can be added even to classes you don't control. This is a double-edged sword and should be used with caution. For a list of function names for all operators that can be overloaded, check the official documentation.
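To show the same mechanism on a type you own, here is a minimal sketch (the Point class is hypothetical and used only for illustration) that overloads the binary + operator as a member function:

data class Point(val x: Int, val y: Int) {
    // 'p1 + p2' on two Points now calls this function
    operator fun plus(other: Point): Point {
        return Point(x + other.x, y + other.y)
    }
}

val sum = Point(1, 2) + Point(3, 4)   // Point(x=4, y=6)

Declaring the operator as a member function keeps the behavior next to the class definition, whereas the extension-based approach shown above is the only option for classes you cannot modify, such as String.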
Another big difference compared to Java is the == and != operators. The operator == translates to:

a?.equals(b) ?: (b === null)

While the operator != translates to:

!(a?.equals(b) ?: (b === null))

What that means is that using == doesn't perform an identity check as in Java (comparing whether two references point to the same instance), but behaves the same way as the equals() method along with null checks. To perform an identity check, the === and !== operators must be used in Kotlin.

Feature #8: Delegated Properties

Certain properties share some common behaviors. For instance:

- Lazy-initialized properties that are initialized upon first access.
- Properties that implement Observable in the Observer pattern.
- Properties that are stored in a map instead of as separate fields.

To make cases like this easier to implement, Kotlin supports Delegated Properties:

class SomeClass {
    var p: String by Delegate()
}

This means that the getter and setter functions for the property p are handled by an instance of another class, Delegate. An example of a delegate for a String property:

class Delegate {
    operator fun getValue(thisRef: Any?, property: KProperty<*>): String {
        return "$thisRef, thank you for delegating '${property.name}' to me!"
    }

    operator fun setValue(thisRef: Any?, property: KProperty<*>, value: String) {
        println("$value has been assigned to '${property.name}' in $thisRef.")
    }
}

The example above prints a message when the property is assigned or read. Delegates can be created for both mutable (var) and read-only (val) properties.

For a read-only property, the getValue method must be implemented. It takes two parameters (taken from the official documentation):

- receiver - must be the same as, or a supertype of, the property owner (for extension properties, it is the type being extended).
- metadata - must be of type KProperty<*> or its supertype.

This function must return the same type as the property, or its subtype.

For a mutable property, a delegate additionally has to provide a function named setValue that takes the following parameters:

- receiver - same as for getValue().
- metadata - same as for getValue().
- new value - must be of the same type as the property or its supertype.

There are a few standard delegates that come with Kotlin that cover the most common situations:

- Lazy
- Observable
- Vetoable

Lazy

Lazy is a standard delegate that takes a lambda expression as a parameter. The lambda expression passed is executed the first time the getValue() method is called, and its result is cached for subsequent accesses. By default, the evaluation of lazy properties is synchronized. If you are not concerned with multi-threading, you can use lazy(LazyThreadSafetyMode.NONE) { ... } to get extra performance.

Observable

Delegates.observable() is for properties that should behave as Observables in the Observer pattern. It accepts two parameters: the initial value and a handler function that has three arguments (the property, the old value, and the new value). The given lambda expression will be executed every time the setValue() method is called:

class User {
    var email: String by Delegates.observable("") { prop, old, new ->
        // handle the change from old to new value
    }
}
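Since the Lazy description above has no snippet, here is a minimal sketch (the class, property names, and values are hypothetical) showing the lazy and observable delegates in an ordinary class:

import kotlin.properties.Delegates

class SessionManager {
    // The lambda runs only on the first access; the result is then cached
    val config: Map<String, String> by lazy {
        println("Loading configuration...")
        mapOf("theme" to "dark")
    }

    // The handler runs on every assignment to the property
    var userName: String by Delegates.observable("guest") { _, old, new ->
        println("userName changed from $old to $new")
    }
}

Reading config twice prints "Loading configuration..." only once, while every assignment to userName triggers the handler.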
Vetoable

This standard delegate is a special kind of Observable that lets you decide whether a new value assigned to a property will be stored or not. It can be used to check some conditions before assigning a value. As with Delegates.observable(), it accepts two parameters: the initial value and a function. The difference is that the function returns a Boolean value. If it returns true, the new value assigned to the property will be stored; otherwise, it is discarded.

var positiveNumber = Delegates.vetoable(0) { d, old, new ->
    new >= 0
}

The given example will store only non-negative numbers that are assigned to the property. For more details, check the official documentation.

Feature #9: Mapping an Object to a Map

A common use case is to store the values of properties inside a map. This often happens in applications that work with RESTful APIs and parse JSON objects. In this case, a map instance can be used as the delegate for a delegated property. An example from the official documentation:

class User(val map: Map<String, Any?>) {
    val name: String by map
    val age: Int by map
}

In this example, User has a primary constructor that takes a map. The two properties will take the values from the map under the keys that are equal to the property names:

val user = User(mapOf(
    "name" to "John Doe",
    "age" to 25
))

The name property of the new user instance will be assigned the value "John Doe" and the age property the value 25. This works for var properties in combination with MutableMap as well:

class MutableUser(val map: MutableMap<String, Any?>) {
    var name: String by map
    var age: Int by map
}

Feature #10: Collections and Functional Operations

With the support for lambdas in Kotlin, collections can be leveraged to a new level.

First of all, Kotlin distinguishes between mutable and immutable collections. For example, there are two versions of the Iterable interface:

- Iterable
- MutableIterable

The same goes for the Collection, List, Set, and Map interfaces.

For example, the any operation returns true if at least one element matches the given predicate:

val list = listOf(1, 2, 3, 4, 5, 6)
assertTrue(list.any { it % 2 == 0 })

For an extensive list of functional operations that can be done on collections, check this blog post.

Conclusion

We have just scratched the surface of what Kotlin offers. For those interested in further reading and learning more, check:

- Antonio Leiva's Kotlin blog posts and book.
- Official documentation and tutorials from JetBrains.

To sum up, Kotlin offers you the ability to save time when writing native Android applications by using an intuitive and concise syntax. It is still a young programming language, but in my opinion, it is now stable enough to be used for building production apps.

The benefits of using Kotlin:

- Support by Android Studio is seamless and excellent.
- It is easy to convert an existing Java project to Kotlin.
- Java and Kotlin code may coexist in the same project.
- There is no speed overhead in the application.

The downsides:

- Kotlin will add its libraries to the generated .apk, so the final .apk size will be about 300KB larger.
- If abused, operator overloading can lead to unreadable code.
- The IDE and autocomplete behave a little slower when working with Kotlin than they do with pure Java Android projects.
- Compilation times can be a bit longer.
https://www.toptal.com/android/kotlin-boost-android-development
Teabagger waves Tenther flag in support of Rep. Matt Shea. (Fuse) It’s not so surprising to see a Republican introduce far-right-wing legislation, but it is a little stunning to see the entire Republican caucus embrace the fringe constitutional theories of the Tenther movement, and with so little thought or hesitation. As I’ve previously reported, two-thirds of the House Republican caucus has already signed on to bills sporting stock, Tentherist boilerplate, and on Wednesday they attempted a procedural motion to move two of these bills to the floor for a vote without hearings or debate: HB 2669, which would have exempted Washington from national health care reform, and HB 2708, which would have declared null and void any federal greenhouse gas or fuel economy regulations. The motions failed on a party-line vote, with every single House Republican voting in favor. That’s just plain crazy, but what’s crazier still is that far from being a mere symbolic gesture, or ill-conceived effort at political gamesmanship, Republican legislators are eager to defend these measures on fringe Tentherist grounds, as Republican Minority Whip Rep. Bill Hinkle (R-13) recently did in an interview with Publicola: “Have you heard of the 10th Amendment?” Rep. Hinkle begins when asked to explain the bill. (Answer: Yes. That’d be state’s rights.) Hinkle, the Republican minority whip, says the health care bill is a federal power grab that violates the 10th Amendment “because it would be a national system, preventing states from having our own system … and this kind of stuff is driving people crazy. People in my district are furious.” Hinkle says, “It’s time for the states to excercise the power to remind the federal government of constitutional restrictions on their power.” Yeah, well, good point, except that Hinkle’s interpretation of the 10th Amendment flies in the face of 220 years of Supreme Court rulings. And Hinkle is not the only one. Back in November, Rep. Matt Shea (R-Greenacres) wrote a prominent post on the tentherist website, the Tenth Amendment Center, apparently outlining the WSRP’s 2010 legislative agenda, entitled “Resist DC: A Step-by-Step Plan for Freedom,” in which he makes the rather blunt assertion:. That not only represents a rather dubious interpretation of the Constitution, it also appears to be an every-state-for-itself call for dissolving the union. No wonder at least one of the teabaggers at yesterday’s sparsely attended rally waved a Confederate flag in support of Rep. Shea’s agenda. Really, read Shea’s post, for regardless of how wacky and fringe you think his constitutional theory might be, it reveals a dangerous political strategy that argues for states to act in defiance of both federal law and the federal courts. When teabaggers like Shea and Hinkle argue for what they call the “nullification doctrine,” they essentially argue for the dissolution of the union as we know it, for the power of this doctrine comes not from legal theory, but from the simple belief that if enough states were to defy Congress and the President, Congress and the President would be powerless to do much about it. This isn’t the doctrine of constitutional scholars. It is the doctrine of rebels. As House Speaker Pro Tem Jeff Morris (D-Mount Vernon) succinctly put it in a recent press release: “We want to lead the state out of recession. They want to lead the state out of the country.” Rep. Morris’s snark would be funnier, if it weren’t apparently true. When did the Paultards take control of the WSRP? 
Actually, I think even Dr. Paul might think these guys are going too far. With so few Republicans in the house of representatives it isn’t difficult to conceive all of them voting for these bills. Re: Confederate flag comment Goldy, please pick up a history book before you regurgitate your fourth-grade public school education. The civil war was about power and trade. It was not about slavery. The emancipation proclamation only applied to slaves in the southern “rebel” states, not in the northern states. The southern residents didn’t go to war and give up their lives for slaves, they did it for freedom. For the right to trade with whomever they saw fit, and to live their lives the way they saw best. The only good thing to come out of the civil war was the early release of slaves, but for the north, that was just an added bonus. Government-controlled education has morphed the civil war into a fairy tale about slavery. Things are rarely that black and white. You should know better. @3 Perhaps you should read a book first. Quick, what was every single major political battle about prior to the Civil War (Dred Scott trial, 3/5ths Compromise, Bleeding Kansas, etc etc)? Slavery. Only an idiot or a liar would try to claim the the Civil War was not about slavery. @3 PS One of the main reasons the Confederacy lost the Civil War was because of the notion of “state’s rights”. When Jefferson Davis attempted a draft, for instance, Georgia’s governor told him that his state would not participate. Other states refused to send Richmond money. The Confederacy was doomed since it had a much less centralized war effort than the United States. Also, that is not actually a Confederate Flag, at least not one that was used in land battles. It is not one of the three Confederate national flags nor. If you’re doing a media event you need to control what the media will see and report on. Freaking amateurs! Forget the arguments and look at the flag. It deliberately resembles the Confederate flag that Klansmen used to wave. It’s a secessionist flag that stands for treason and racism. The old guy probably with the flag probably doesn’t have a clue what all of this means. Someone told him “Obamacare” means “communizing America” and he swallowed it whole. Notice the expression of total uncomprehension on his face. Basically what these people are saying is if they don’t like the laws that Congresses passes, they’re going to break the law. It’s almost time to call out the National Guard. F**KING Republicans are just jokes now a days. I’ll wager that EVERY SINGLE one of these crazy Republican wingnuts lunatics has a parent or grandparent to gets F**KING social security and Medicare. I’d wager NONE OF THEM turn it down on ‘principle’. Let one of these child minded idiot Republicans get on TV and tell the public we’re shutdown down Medicare and social security…see how that goes over. Sure they’re big evil Federal programs (like the permanent standing full time Army that these wingnuts LIKE which our founding fathers never wanted….sigh)…but they’re really well liked big Federal programs. When did Republicans go from William F. Buckley to Glen Beck? I’m embarrassed FOR the few sane Republicans left….they must be just shaking their heads in disbelief that this is what’s left of their party anymore. Lola @3, the Civil War not about slavery? I suggest you read the Confederate Constitution which builds in the institution of negro slavery to the point of forbidding any law to eliminate it (Article I, Sec. 9). 
Furthermore, it required any new state joining the confederation to allow slavery throughout its territory. Any more historical fiction you want to share with us?? I love how the Tea Party nuts picked the Washington Center for The Performing Arts to hold their Truckers for Jesus rally. Yeah – the Center is owned by the state – supported by local and state taxpayers – and survives on grants from organizations like the NEA. Seriously – do these barely literate talk radio addicts have a clue? Couldn’t they have just walked the talk, and held that rally at an R.V. Park in Yelm?? Rabbit: given the uneducated yahoos who make up the Tea Bagger movement, can’t you see how treason and racism could easily be confused with patriotism? Did any of our local “media” bother to report on what the Republican caucus is up to? And they wonder why no reads/watches them anymore. Tea Party meets reality in Detroit: DETROIT —.” So-called “tea parties” have become popular forums for conservatives to vent over government tax policies, the economic stimulus packages, bank bailouts and the health care overhaul. But a call to protest the auto show by a Virginia-based group, the National Taxpayers Union, was opposed by some Michigan conservatives, who said their economically battered state needs the jobs. Interesting going ons in the National Tea Party. They say they’ve signed Sarah Palin, Michelle Bachman, and Marsha Blackburn as speakers at their “National Convention” in Nashville in February, and upon being questioned Palin has said she is waiving her usual $100,000 speaker’s fee. But the price tag of $549.00 per person to attend the convention (food, lodging, travel, etc. are extra) has some within the Tea Party movement questioning where the money is going. It turns out a good chunk is going into the pockets of Judson Philips, a Tennessee attorney who practices DWI and personal injury cases, has a history of financial troubles, and decided he wants to harness the Tea Party movement so he doesn’t have to practice law anymore. After getting volunteers to work incredible hours setting up the organization and website for the movement, he filed the organization as a for-profit corporation with himself as the sole owner, and with the “contributions” going into his wife’s PayPal account. When the web designer initially said he would need at least $180,000 to set up the type of system he described, he talked him into doing a simple “one page” website for free, and then by making many multiple requests for changes, ended up with a social networking site which over which he claimed ownership and which he planned to use to compete with Facebook. You can read the web designer’s story here The web designer is still a conservative idealist, but is outraged that this “grassroots movement” is being captured by someone he considers a profiteer. I’ve studied a lot about the Civil War. It’s part of my history hobby. If you get into discussion forums about the Civil War, there’s always somebody who will claim the war wasn’t about slavery. They will point out that at the beginning Lincoln said that if he could preserve the Union while keeping the slaves, he would choose to preserve the Union. They will point out that during the early years of the war, slavery was still allowed in the Union state of Maryland, in Washington D.C., and in the border states of Kentucky and Missouri. 
They will point out that the Emancipation Proclomation was a war powers device which only freed slaves in Confederate-held territory – the very places where it couldn’t be enforced (at that time). But saying the dispute between the North and the South came down to a dispute between the perceived unfariness of tariffs and ideological philosophies of federalism ignore the really big elephant in the room. The big issue between the North and the South was slavery, and the expansion of slavery into the new territories in the west. The reason why the election of Lincoln set off the firestorm of seccessions among southern states was precisely because they believed he would act in a manner consistent with his previously-stated beliefs – that a Union could continue if divided between slave-holding and free states. In short, the North wanted to preserve the Union – a Union without slaves. The South wanted to preserve States Rights – the right to own slaves. All the other statements made by each side are so much camoflauge over that main point. Remove slavery from the equation, and the North and South didn’t have that much to argue about – certainly not enough to justify seccession and civil war, costing hundreds of thousands of American lives. I’m not supporting them. This is a question. I thought that actually are very few SCOTUS decisions based in either the 9th or 10th amendments. Nobody has really ever challenged Federal Law on those grounds. Am I wrong, as IANAL Re 10 “Shutting down” social security and medicare/medicaid would be a great idea, over time. Those, like my father, who’ve been taxed for it for 40 years have a reasonable expectation of collecting on it that no sane person would deny. But a progressive elimination of those programs over the next 2 decadedes with decreasing benefits over the time period would be workable and fair. Young people starting out would be exponentially better off with private retirement accounts. The rate of return on your social security ‘investment’ is pitiful by any normal standard. Young to middle age people who purchase health insurance for their old age can buy at very reasonable rates. Those who choose not to made their choices and should live with the consequences. So should those who choose not to save for retirement. If that means living with your daughter in law, maybe you should have saved some money. Shh… Nobody tell Shea that we already have both federal and regional cap and trade programs. Or that people are making big money off cap and trade. The Rs left in the legislature are, with few exceptions, crazy wingnuts. They ARE the teabaggers. They ARE crazy, twisted human beings — and (in my memory) have always been crazy, twisted human beings. They just stand out more now because their craziness is not filtered / controlled by more sensible Rs. Anyone who has half a brain and some scruples is in the D “big” tent (big tents have their own problems) or is sitting on the sidelines. And they have nothing to lose. But my biggest fear is that “we” (not the folks who read HA) will be complacent because of the evident craziness. The MA-Sen (Coakely – Teddy’s seat!) race is a nailbiter for this reason. Complacency will lose WA-03, and could severely impact D majorities in our legislature and congress. In any case, Goldy, thanks for the attention you bring to this. And make sure he doesn’t find out about the Western Climate Initiative @21 The non-nutters Republican’s seem to be hanging out in NP places like city councils. 
If you click on the link to “Shea’s Post” what you’ll see is boilerplate militia stuff. Which raises the question of whether Rep. Shea himself is affiliated with the militia movement and may even be a member of a militia group. These groups are generally considered to be hate groups, and are tracked by civil rights organizations. There is quite a bit of information about the militia movement at this website: Many of the comments posted on Shea’s website are from the usual suspects: Tax resisters, militiamen, sovereign citizens, and secessionists. Here’s a sample comment by a poster named Dean Jamison: “We have not filed a tax form since 2002 and never will! yes they started billing us in the year 2008,But if they ever take me to court, I will state my case on SHOW ME THE LAW,as there is no LAW.My subjestion” Assuming Mr. Jamison is required to file a federal income tax return (not everyone is), and further assuming that what he says is true, then he is breaking the law and the IRS not only will assess and collect back taxes and penalties, but he could be subject to criminal prosecution as well. And what he is exhorting the other readers of Mr. Shea’s website to do is literally a crime. Re 21 Actually no. Democrats are the party of making the planet safe for the Lily Livered Toad, but uninhabitable to humans. Democrats are the party of fear. Obama won based on economic fear. He pushes through incredible invasions of basic economic rights on the same basis. Democrats are the party of partisanship. Where are Republicans in the talks over his disastrous health care plans? Democrats are the party of theft. Those who work must pay the housing costs, child care costs, food costs and now medical costs for those who won’t. Not can’t. Won’t. Great ‘big tent’ you folks have going. All the worst elements of American politics under one ideological banner! @19: That was a breathtakingly, pathetically ignorant post. Thankfully anyone who wasn’t just completely blotto the last 8 years or so (or just the last couple for that matter) has been completely immunized against that bullshit. @3 “The civil war was about power and trade. It was not about slavery.” Yes and no. Technically, you’re correct, because Lincoln fought the war to preserve the union, and was even willing to let the South keep slavery if they would come back into the union. But to say the war wasn’t about slavery is to completely ignore the context. Slavery was the issue that provoked the Civil War. “The emancipation proclamation only applied to slaves in the southern ‘rebel’ states, not in the northern states.” That is also technically correct; but as there were no northern slave states, the EP applied to all states that had slavery. “The southern residents didn’t go to war and give up their lives for slaves, they did it for freedom.” It’s disingenuous to argue they didn’t go to war for the “freedom” to own slaves, because they had all the other freedoms that people in the northern states had, and the war occurred at a time when the federal government was much smaller, much less intrusive in local affairs, and federalism was much stronger in both sentiment and practice. “Big government” in today’s terms had not yet come along. This was a time when there was no income tax, no programs, and federal regulation to speak of. “For the right to trade with whomever they saw fit, and to live their lives the way they saw best.” What trade issues were there, other than tariffs? The Civil War was not fundamentally about trade. 
It was about fundamental cultural and economic differences between North and South. The North was industrialized; the South had an agrarian economy that depended on the cheap labor provided by slaves. The abolitionist movement threatened the economic survival of the South. They went to war to preserve their livelihood. “The only good thing to come out of the civil war was the early release of slaves, but for the north, that was just an added bonus.” This evinces a shallow understanding of history. Slavery was a dying institution that, without the war, eventually would have withered on the vine by its own accord. It’s extremely unlikely the practice of slavery would have survived into the 20th century if there had been no Civil War, no Emanicipation Proclamation, and no 13th and 14th Amendments. What the Civil War really did was catapult America out of the agrarian age into the industrial age. The war was immediately followed by the building of the railroads, the settlement of the West, and the advent of mass industrialization. Preserving the union created a superpower that dominated the history of the century that followed. The Civil War also created modern warfare. Trench warfare, machineguns, and submarines wee all used for the first time in this war. Both the offensive and defensive tactics of the First World War were born on the battlefields of the Civil War. The Civil War has been called by historians the first “modern war.” It has also been called a “railroad war” because railroads were a critical and dominant strategic factor in the war; but in a real sense, the railroads were only the instrument of what really was the invention of modern logistics and mass troop movements. The Civil War also was the last big cavalry war, and marked the beginning of the end of the cavalry age, which lingered on for a while in the Plains Wars against the Indians but horsed soldiers would never again be a major tactical factor in wars between nations. Much technology also emerged from the Civil War. @24: Here’s another great site (Orcinus -happens to be local) which is great for following the militia / teabaggers. It’s run by Dave Niewert (author of several great books on the subject) who also posts at C&L on this stuff. America is an inherently violent nation that has a violent history. It has always had highly contentious politics. This kind of stuff from the right is just talk — so far. But inflammatory rhetoric is potentially dangerous because it can incite physical violence. Sometimes it is a symptom, not a cause, of political turmoil but in the present case I think this kind of rhetoric is playing an inciting role in our society. re 26 Some people would argue on substantive grounds. But knowing that there really is no argument keeps this from happening, so personal abuse is all Zotz has. Bravo. Re 29 American politics have always been contentious, true. I personally think that the outlet in speech and written expresion keeps the US from the more violent expressions of political differenc. And I don’t know that the US is any more or less violent than any other nation historically speaking. In fact I’d say we’re less. @3 Your comment has no relation to Goldy’s comment. WTF??? Sorry Zotz. In the interest of educating an opponent so that their arguments might be worth countering here’s a primer. You could say that in fact Social Security was a good investment compared to other private means of retirement savings. Wait, you couldn’t, because that wouldn’t be true. 
You could say that no-one ought to have a responsibility for creating their own financial security including retirement savings and some form of health insurance. This is dodgy ethically and morally, but that should be okay with. This concludes lesson one in basic argumentative strategy. You’re welcome. “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” So just exactly what part of this do you consider bullshit? The plain reading of this is a constraint on Federal overreaching, like say the Feds forcing me to commit commerce with another private party (buying health insurance) or face jail time. The calculated use of the phrase “teabagger” is also a homophobic slur and you supposed liberals should be aghast at the impropriety. Here’s a teabagger. He’s wearing teabags. He’s a deluded member of an insane cult. End of story. Nothing to do with anyone’s sexuality – gay, straight or indifferent. @35 What’s homophobic about it? Can straight people not tea bag? @35 delbert 01/15/2010 at 6:09 pm, The calculated use of the phrase teabagger started with teabaggers. The fact that teabaggers were too stupid to find out about the other meaning of the term they chose to label themselves with is part and parcel to routine teabagger stupidity. Be it calling yourself a teabagger, carrying the KKK flag to the capitol steps, or supporting the Tentherist agenda, teabaggers are routinely shown to be ignorant f**ks. @38 But, by not doing research and ignoring their own use of the word the can pretend to be victims. Yes, those poor, suburban white guys are victims of those evil liburals who hate America and want the terrorists to win. They’re victims of the liberal media, while trying to stand up to the liberal-fascists, (oxy-moron) who drink white wine, (I prefer red) and drive hybrids and volvos (I drive a Ford- Ford owns Volvo, btw). Liberuls that don’t know a thing about running a business, like say, Google, Microsoft, REI, K2, Apple, Pretty much the whole bio-tech industry, and that according to George Will make on average about 6% more than conservatives. And so on and so forth. Why does Goldy have a picture of somebody waving Strom Thurmond’s underpants? @19 Lost, you’re becoming my new Puddy. Really easy target for making fun. You’re 100% right. The rate of return for Social Security is fairly low. It’s a low risk/low yield investment. Let’s let all us youngin’s opt out and fend for ourselves and phase it out. That would have worked out awesomely for anyone who put their finances in the hands of Bernie Madoff. That works awesomely for those who reached retirement age in the last half of the 2000’s as their lifeling investments dropped 35%-50% of value right about the time you need to start selling them and living off them. That would have worked awesomely for anyone who tied their investments to real estate and didn’t pull out by 2007. Oh, and you want to know how best to completely crash the U.S. economy? Have every American all their retirement investments out as the next down cycle begins. (Have you ever seriously considered economic theory as it relates to private, non-governmental, non-institutional investors?) Investments are cyclicle. If all you have to retire on are your private investments, pray to whatever God in whom you believe that the market is up when you get to age 65-70. If you reach that age in a down cycle? Sucks to be you. That, in a nutshell, is how your post boils down. 
@19 Complete fool: Lost is up to her usual mindless comments that ignore facts. Comments like these make me realize how stupid this person really is: Yeah, after the stock market crash retirees would have nothing to live on…..great idea, idiot! Wow, I guess if you repeat stupid stuff enough times it will make it true? Is that how the puny little mind of Lost works? @41: czech sorry I did not read your comment beofer lambasting the fool lost for the same ideas that you raked her over the coals for…. Yes, it’s true. Investment is a poor idea, and no one should save atr all. My mistake. Not. Ever considered learning to read English Zotz or Notcorrectorright? It is the language used in the US and might be helpful to you. A diversified retirement plan would have stood up under the last crisis. And yes, retirees are pulling out their savings, but here’s the thing. Not all at once. They do it over a period of decades. And you don’t answer the fundamental question. Who decided the government was in charge of making my retirement possible? Who made it okay for them to steal 15% of my income to pay for someone else’ retirement? Well intentioned isn’t always right. But maybe the two of you will learn how to utilise some elementary form of reasoned thought. Maybe. BTW Zotz, I realize you’re counting on the government to do everything from paying your doctor bills to wiping your…sorry. But private low risk/low yield investments do exist. Social Security isn’t the only one out there. Just thought you should know, as your financial planner seems to have told you nothing of any use. And nevercorrectorright, I realize you’re on the recieving end of the proposed looting, but basic morals shouldn’t be completely beyond you. If you didn’t earn it it doesn’t belong to you. Is that clear enough even for the terminally misty mind of a liberal? FDR and the United States Congress. Upheld by the Supremes in, Helvering v. Davis, Steward Machine Company v. Davis, & Flemming v. Nestor. @19. lostinaseaofblue 01/15/2010 at 5:11 pm, Young people starting out would be exponentially better off if they were not staring in the face of enormously high unemployment rates as a result of the Bush recession caused by the glibertarian Randroid deregulation and securitization of real estate mortgages in the United States which led to reckless and unsustainable lending practices and the complete meltdown of the banking and finance system. But by all means, tell us more about your glibertarian Randroid fantasies. @46 It looks like you can opt out of SS for religious reasons. & #4361 Roger Rabbit Quiz This quiz is a departure from the usual Roger Rabbit Quiz. It doesn’t have a right or wrong answer. It’s more like a Rorschach test. Which do you trust more? [ ] 1. Banks [ ] 2. FDIC Re 48 You know what Mike? Your success or failure in life is your choice, not my responsibility. I realize this takes some explaining to a liberal unused to logical thought, so let’s start at the beginning. When a young person starts out in a career or a trade they have choices. They can work hard for the money paid them or they can serve time at work. With the pay they recieve they can buy nice cars and vacations, or they can save and invest until they can properly afford those things. It is not the responsibility of your fellow citizens if you choose poorly. It is not the responsibility of the government. It is yours. If you don’t like the results of your choices you have the right to try better next time. 
You have the responsibility to live with the consequences of your actions. Being a liberal, I realize you were never taught this, and this seems strange to you. I realize that to you and your friends wealth and hard work are mysteries beyond your comprehension whose sources seem lost in impenetrable haze. But it is true, nonetheless. And expecting such behavior of citizens is the only way to build a decent culture. BTW Mike, The unemployment rate is less than 8% isn’t it. I mean Obama promised that if we spent federal money “saving or creating jobs” at a cost of nearly a trillion dollars unemployment wouldn’t exceed that number, so it must be below it, right? This from the fool who can’t figger out one Mike Rogers from another Mike Rogers. Lostinaseaofblue, keep the faith brotha, checkshisASSdailyforfissures has issues with facts. Then he tries to play nice Yes lostinaseaofblue provides factual presentations. @53 It would seem to me that when the size of your economy is tied to the price of oil you’d have a smaller economy @$80 a barrel of oil than you would @ $40. While I’m not super thrilled with Obama I do think that he was a better choice than McCain and is moving things in the right direction. @34 I guarantee the provision you quote doesn’t make health care reform unconstitutional. That’ll be $250, please. Or, if you wish, you can hire Mr. Shea and pay him $250,000 to litigate the issue — and find out the same thing. MikeDumbScout tells another whopper… Keep da progressive playbook warm in your fingers fool. @51 What, no none of the above? I encountered a few wingnut lawyers during my career as a judge. Instead of presenting evidence and arguing legal points that worked in their clients’ favor, they put up fireworks — quoting the Constitution and so forth. These shows clearly were intended for consumption by their clients, and were not a serious attempt to persuade the judge (me). They did it to make the client believe the lawyer was worth what he was charging them; when, in fact, the lawyer hurt the client’s case, and the client would have been better off arguing his own case. For example, one of these lawyers appeared before me in a child support adjustment case. He spent 45 minutes arguing the child support law was unconstitutional. Unfortunately, he didn’t present any evidence of his client’s earnings or other facts I could have used to reduce his client’s child support payment, so I entered what amounted to a default decision — the state got what it asked for because I had no facts on which to base a different decision. And the client, who undoubtedly was billed hundreds of dollars for this performance, almost certainly didn’t understand that the lawyer blew his case. This lawyer now sits on our state supreme court, and while there, has engaged in a pattern of ethically questionable conduct. I won’t mention names; you know who I mean. I don’t know anything about Mr. Shea, and I’m not making any insinuations or suggestions about how he practices law. He may be a very good and very effective lawyer, for all I know. All I’m saying is, you don’t win cases or get good results for clients with the kind of histrionics I saw in Mr. Shea’s “tenther” arguments. I hope, for the sake of his paying clients, that he has his feet on firmer ground when he’s representing his clients’ vital personal interests and getting paid for it. You mention oil prices frequently, probably with good cause. We are at war in the Middle East only because a steady supply of oil is necessary for our economy. 
We couldn’t power our industry, homes or vehicles without oil. We make political deals with dictators and tryants so that we can keep the supply of oil flowing. However the last I read all the alternative sources of energy put together would supply 15%of our needs just in the US, so we’re kind of stuck with it for now. And Obama is an educated man who could have drawn on virtually any source of information he wanted to find all this out. Before making promises he couldn’t keep. But I agree. The last 3 presidential elections have been choices between the devil and the deep blue sea for voters. @52 Lost is stuck in a 1930s mentality. Civilized, and yes, liberal, society has determined that making each generation care for the dotage of the previous is preferable to armies of homeless geriatrics depending on private charities for food. (Did you skip the great depression in your conservative education, Lost?) Quick, name me a first world society that doesn’t have some form of pension or social security paid by the general taxation of the citizenry? Choose poorly is what you fall back on. It’s all about your personal choice. I guess it is. In addition to choosing your field of work, every single American should also choose to be as educated in the investment field as a professional financial planner. Then they won’t choose to trust a professional (Madoff, Milken, Keating et all) who turn out to be common criminals. Madoff’s investors made a bad choice, effff ’em. Let them get in line at the soup kitchen. Ever stop to ponder why your brand of conservatism is doomed? @31 “I personally think that the outlet in speech and written expresion keeps the US from the more violent expressions of political differenc.” I agree with this, to a point. Blowing off steam works, to an extent, like the safety valve on a pressure cooker. It can have a cathartic effect. But this works only to a point. When rhetoric incites, it can become a cause, not a preventative, of violence. “And I don’t know that the US is any more or less violent than any other nation historically speaking. In fact I’d say we’re less.” The U.S. has one of the highest violent crime rates in the world. The U.S. has a history of lynchings and Indian genocide. It has, at times, been a warmonger country. It’s true we haven’t had a Hitler or Pol Pot — yet — but that could change. Human nature is the same everywhere, and given the right conditions, there’s no reason why Americans wouldn’t do what humans in other societies have done. @33 You make 3 major points in this comment, and every one of them is a falsehood. “You could say that in fact Social Security was a good investment compared to other private means of retirement savings. Wait, you couldn’t, because that wouldn’t be true.” Social security has never defaulted a payment. It has paid every cent of promised benefits on time since it began. How many private pensions are being bailed out by the government, with pensioners getting cents on the dollar? No corporate pension can be relied on anymore; not when there are incentives to defund them so they can be dumped on the government. “You could say that no-one ought to have a responsibility for creating their own financial security including retirement savings and some form of health insurance. This is dodgy ethically and morally, but that should be okay with you.” You could say that, but no one ever has. Social Security has always been intended to supplement pensions and individual retirement savings. 
It was not designed to be, and is not promoted as, a person’s only retirement income. Social Security doesn’t prevent people from working in jobs with pensions, saving in IRAs and 401(k)s, or acquiring stocks and other investments. In fact, most people.” Diversification was totally ineffective to prevent losses in this recession, because asset values went down across the board. If there is one thing this market decline has taught investors, it is that diversification doesn’t guarantee safety. @34 I guarantee that clause doesn’t make the health care bill unconstitutional. That’ll be $250, please. Or, you can pay Mr. Shea $250,000 to litigate it for you, and get the same result. @35 If teabaggers don’t want to be called that, they should stop using the term. @36 Hey, Americans love a circus! 19, 42 — A young person who invested his retirement savings in an S&P 500 index fund 10 years ago would have lost money. When Bush was trying to sell “private investment accounts” to the public, he used two data sets to make his arguments. The returns he said savers would get from private investment accounts were based on a best-case economic growth scenario that most economists said was unrealistic and wouldn’t be attained. His warnings about the future solvency of Social Security were based on a worst-case economic growth scenario that most economists said was unduly pessimistic. He could not make his case using the same data for both private investment accounts and Social Security. @44 “Yes, it’s true. Investment is a poor idea, and no one should save atr all. My mistake.” No one said that, or anything remotely close to it. “A diversified retirement plan would have stood up under the last crisis.” That is patently false. People with diversified investment portfolios suffered major declines. @44 (continued) “Who made it okay for them to steal 15% of my income to pay for someone else’ retirement?” This is also a falsehood. No one is “stealing” your FICA contributions. If you live a normal lifespan, you’ll get back more than you paid in. That’s because Social Security benefits are pegged to rising prosperity and living standards. In addition, Social Security is more than retirement savings; your FICA taxes also pay for disability and survivor insurance. The truth is, you can’t find a similar combination of insurance and retirement annuity with equivalent benefits in the private sector at anything remotely close to the same cost. @46 “If you didn’t earn it it doesn’t belong to you.” Well, that argument also could be applied to the public education, public roads and infrastructure, government tax breaks and subsidies, and all the other public investments that make your job and income possible. And it obviously could be applied to argue that no one should be allowed to inherit anything, because inheritances are unearned. And, as a point of law, this assertion is another falsehood. Government has the power to tax. When government taxes me, and uses the money to pay a benefit to you (e.g., food stamps, unemployment benefits, social security, veteran’s benefits, or whatever), the benefit becomes your private property when you receive it, and in every practical and legal sense is “yours.” Whether you did anything to earn it is irrelevant. @58 As I said, there is no right or wrong answer to that quiz — only revealing answers. @61 Actually, before Social Security, children were expected to, and did, support their parents in old age. 
The Baby Boomers were the first generation to be freed of this financial burden by Social Security. Lost @52, I guess those tens of thousands of Haitians who just got crushed in an earthquake made some awfully bad choices. Roger @ 59 As you know, too many clients want their lawyers to simply be “aggressive.” They want their lawyers to scream and yell; results be damned. Some lawyers decide that they can be that screamer and yeller. Screaming at a high hourly rate pays, apparently, the rent. I can’t do that. On a different note, I know which Justice of our state’s Supreme Court you’re talking about–he would be the Ron Paul of our Court. I am prohibited from demeaning our judiciary, as you know. I would say about him that he does not even begin to compare to another member of our Supreme Court. The mix of her arrogance with her lack of understanding produces an existential picture that Sartre would have written a novel (a regular-sized novel) about. Re 73 You are an intelligent man, whether I agree with your political worldview or not. Therefore I assume you know the difference between the asian Tsunami or the Haitian earthquake and finances. I know for a certainty that the market will rise and fall and can plan accordingly. I know that I may get sick, or lose a job and can plan accordingly. I can’t control all the aspects of my financial life, but I can certainly plan for most of the bad things that could happen. Should the infrastructure of Seattle be destroyed by an earthquake it would be illogical to expect people to help themselves find water or splint a broken arm. I can’t rebuild the streets or the electrical grid to my neighborhood. I can’t replace the mother or son or spouse lost in the wreckage. These are the legitimate province of government. The aid and assistance given by others are the decent response to the suffering of a fellow human being. You ought to be ashamed of yourself for using the grief and pain of the Haitian people to score cheap political points. Rabbit in general 10 years span doesn’t, for most, accurately reflect the investment lifespan of their retirement accounts. If you were using a 30 year data line that might be more relevant. As for taking my money in taxes to give to the personal needs of another citizen this is simple theft. I pay taxes for roads, civil protection, schools and national defense among other things, as these are commonly enjoyed by all citizens. They are not the redistribution of wealth on misbegotten philosphical grounds. The inheritance argument is specious. What’s mine is mine to distribute as I see fit, with a will being the instrument of that distribution. The person who died paid the taxes, not the person who inherited the money. So, if one earns a million dollars, he should pay taxes, but if he inherits it, he shouldn’t pay any? Grow up. What’s mine is mine is Walter Sobchek’s argument. re 76: You’re out of your leagur here, negro. There are rules! @75 “I know that I may get sick, or lose a job and can plan accordingly.” But Rabbit has a salient point about the diversity of SSI services. Say you’re a recent college graduate. You just signed on with a Fortune 500 and in a few short years of hard work you can smell that promotion and six figure salary. You’ve enrolled in their 401k and you’ve even managed to squirrel some money into an IRA while paying your student loans. You’re responsible and making ALL the right choices. Then one day an elk wanders into your path as you travel the sea to sky. 
Or a drunk leaving a bar runs you over in a crosswalk. Or someone at your office is negligent and the copier that’s being delivered tips over and pins you to the floor. Should have planned for that. Your vision of the world, your money was stolen from you. To that paralyzed 23 year-old? F.U. Sorry. Hope your church can find you a closet to live in. Or you could just do your conservative duty and roll your wheelchair around and fend for yourself until you die from malnourishment, the elements or hell just stop being a burden on society and off yourself. Yep. Must be soul-fulfilling to be your brand of conservative. @52 lostinaseaofblue 01/15/2010 at 8:52 pm, Please. Your inability to understand simple Aristotelian logic constructs has already been well documented. But since you brought up what people are taught as a justification of ignorance, let me help educate you. Choice does not equal responsibility. Try some self education. See if it helps. All said, we are not hopeful. Re 76 The person who died already paid taxes on that million prior to his death. This inheritance thing is a side not of Rabbits, but somehoow in the ‘wealth is evil’ mentality of the left being left money is particularly evil. Why is that? Re 79 In the optomistic hope that some rudimentary form of basic reason can be attained by you, I’ll try to correct some of your misapprehensions. By the way, make the most of one mis-type if you need to for your low self esteem. That replacement of argument with personal attack is very common with leftists, oddly enough. Must be something to do with subliminal messaging on MSNBC. If I say ‘your apple is not my orange’ does it make it easier for you? I know words like ‘choice’ and ‘responsibility’ scare liberals. Maybe with less emotionally freighted words you can see the negation in the sentence which makes your choice the opposite of my responsibility. Liberals and progressives think they practice compassion by ameliorating the lot of the less fortunate. Maybe in terms of the present moment this is true. But in the long term that should be the focus of public policy the very opposite is what you practice. You take from those who earn and work and plan part of the fruit of that planning and give it to those who don’t do any of those things. How is this compassionate to the one person in this equation deserving of compassion, the productive man? You rob the public coffers, borrowing against the tax income from your children and their children after them to pay for poor choices now. How is this compassionate to your children and grandchildren? You take from people the full value of being alive. In drinking to the dregs the bitterness of loss and error I learn lessons I would never have learned had someone taken the consequences of my choices away. I look back on choices good and bad, consequences good and bad and remember most clearly the bad. I try not to repeat those errors that resulted in time, resources and effort lost in repairing the damage of my mistakes. How in heavens’ name is the theft of this basic human experience ever capable of being construed as compassion? Worst, you create a culture that passes on values to the next generation. Are these values thrift, hard work, responsibility, the joy of struggling through until by merit and effort you make it to the other side? No, of course not. They are laziness, reliance on others for the most basic of your needs and the inability to stay your course once any obstacle appears in your path. 
That last is the Roosevelt generation. That is the Johnson generation that believed that government was the cure and a full and accomplished life the disease. Don’t claim you are compassionate, my friends. You may be in the short run, but it isn’t the planting of the seeds that matters, it’s the harvest. Lostinaseaofblue, When someone mentions Haiti, ask them how much they are sending to relief agencies to help the survivors? Ask them will they leave their cushy environment and go help for a week or two or are they just using Haiti for political arguments? To say Haitians died over poor choices makes one wonder about their motives in life. This from the fool who calls polls he doesn’t like outliers. Lostinaseaofblue, You are doing well here. There is room for helping others as well. There is even a moral imperative to do so, Individually and by ones’ own volition. Charity is a fundamental of Islam, I’m told. It is a commandment of Christ. I can’t speak to Eastern religions, but athiests and agnostics realize that helping others is the right thing to do, individually. I have helped and been helped and was glad to do the one and grateful for the other. For the government to take my money and redistribute it doesn’t meet my obligation to my fellow man. It is involuntary and I don’t get to know even where the money is going. It isn’t efficient either. Instead of me giving a dollar to my neigbor out of work the government takes my dollar and spends 30 cents of it giving it to that neighbor. He gets 70 cents and some otherwise useless beauracrat gets a job. I realize the value of helping others, and do so through my church and with my time and money. I’ve been helped in my career and in my personal life, though not monetarily, and am deeply grateful for the assistance. I just think governmnet is the wrong vehicle for delivering such help. Lostinaseaofblue, Liberals always want to use other peoples monies to “ameliorate the lot of the less fortunate.” Puddy saw this on this blog when Katrina hit. No personal compassion, no opening their wallets and sending money to relief agencies; only yelling and screaming at Mike Brown and GWBush. @81 lostinaseaofblue 01/16/2010 at 7:50 am, Excellent and very coherent writing Lost! Even more good writing! Your ability to string together words in a willy-nilly incoherent style reminiscent of half term Governor Palin inspires me to follow your advice; if only one could understand it. Re86 Ah, I see the confusion. My mistake. Once you become a big boy and start the 7th grade, little Mikey, maybe the big words the grown-ups use will make sense! Won’t that be exciting Mikey? LIAR!! employed yet? You’re talking about Iraq right? “How is this compassionate to the one person in this equation deserving of compassion, the productive man?” You just keep piling on the morally bankrupt statements don’t you. Those born with severe handicaps that by their nature prevent them from being productive. Not worthy of compassion. An orphaned child? Not deserving of compassion. 89 – You got your “work” cut out for you. That is if your “job” is name-calling the unemployed. It’ll work just great for your crowd come November. Heckuva job! @92 unemployed is one thing.. unemployed and fucking off all day on the internet is another…… Even Raygun talked about tax money going to the “truly needy”. Herbert Hoover harangued charities to “give more” to relieve the incredible deprivation during the Depression. The role of government is to make war, run prisons, give land to railroads.. 
In the long run everything will work out. But in the long run we’re all dead. Re 90 The subject was entitlement spending and the philosophy that leads to it. It was not military, although you can’t seem to talk about anything at all without reference to Iraq. Have you got an answer to the meat of the post, or are you just going to pick at the garnish? Re 91 I’ve consistently referred to those who won’t work, not those who can’t. If this was unclear I apologize. But, like YLB you don’t argue the basic points. I assume that this is because you know there to be no counter argument. 95 – You’re not worth it. You’re boring me to tears.. Re 92 Unemployment is 8% right? That’s what Obama promised it would peak at if we spent 1 trillion of taxpayer money on his stimulus program. hey lost dude: your philosophical notions that choice responsibility etc. mean all social programs and liberalism drives the economy downward are simply disproven by the gdp per capita rates of the nations of europe, canada, japan, etc. any honest and knowledgeable accounting of the income levels and welfare levels (I mean longevity, housing, how much food people eat, how many are hungry, how many go to college, etc.) in those nations shows they do far, far better than our nation does, and this becuase idiotic ignorant morons like you stand in the way of progress, spouting cliches with no reference to what works in the real world. freedom and choice led to the near calamity of the economy in teh financial crisis….as well as the great depression. freedom and choice mean in the USA half the population doesn’t get well fed well educated or stay healthy….this means half of assets are not well tended…this means the economy is smaller than it would be otherwise….. so your notions and philosophy are bad for everyone. the examples you pick of roads being a govt. responsibility is very telling. YOU ADMIT TO LIKING SOCIALISM in road building. thus in reality, you admit it’s okay when….it’s okay. there’s no difference between road building and getting people fed, housed, educated and productive, we all benefit, and you can’t explain why if road building works, and if the other social programs work, there’s any distinction. At this point you likely response is predictable, it will be to lie about the statistics and point to some european nation that has ten percent unemployment. but the massive mountain of facts shows that all those nations with social democratic programs or nationalized health care are either pretty much as rich as us on a per capita gdp basis, with FAR higher levels of social equity and without the GRINDING POVERTY of 20% of our population, or, like Canada, have in the last decades reached total parity with us. So your entire “philosophy” just crumbles. The Canadian socialized health care did NOT destroy the productive urges or abilities or competitiveness of that nation, and same with japan germany france switzerland sweden etc. etc. etc. The conservative response to all this is to lie, lie lie about facts and I fully expect you to do so, too, because it will just be too painful for you to tell the truth and say “oops, you’re right. I’ve been believing hogwash my entire life, how could I be so moronically stupid?” Anotehr common conservative response is to say none of this data counts because “We pay for their defense!” A good reason for us not too, wouldn’t you say? As for the involuntary nature of taxes your argument is with democracy and our founding fathers themselves who created a govt. 
TO TAX YOU and me and that was a specific failure under art. of confed. that they specifically rememdied. so yes, it’s theft, robbery, double taxation, whatever you want to call it…but it’s also LEGITIMATE under our constitutional structure so if you are against it you’re basically a traitor to the ideals of the founders. They even put in those words about the general welfare, too, I might add. So to sum up: we have a individualist and collectivist nature, both; the social democratic model of high taxes and lots of programs works purty darn well; conservative claims it leads to national bankruptcy are a pack of lies uttered by a pack of morons and idiots; and those nations also have plenty ‘o responsibility and initiative to keep the capitalist engines humming. Your theory would only be true if Canada Japan and Europe were all like Russia under communism, or Cuba today. They’re not. That’s your big, big big fat lie. You ought to be ashamed of your lies, or your ignorance. 98 – I like this guy.. Re 98 Many words to miss the point entirely. Bravo. To correct some of your more glaring misconceptions though- I made no claim as to how welfare spending affects economies. The argument was entirely humanist in nature. The first half of your post is off point completely as it deals entirely with (erroneous) economic impact statistics. I mentioned what most people call the commons in one post, and made the distinction you apparently lack the wit to comprehend. Taxes are legitimate, of course. They are levied to pay for those things that are constitutionally mandated for government to perform. You can call this socialism if you like. It would betray your complete lack of understaning of the term, but that’s your problem. Transportation infrastructure, civil defense, national security and such are within the umbrella of the commons. We collectively enjoy the use of those things. Taking a dollar from me to give to you isn’t a commons item. An individual enjoys the use of that specific money. That is theft. I realize you’re too lost in the dream world of progressivism to understand any of this, but planting a mental seed can sometimes help the delusional, eventually. Re 98 I keep mentioning this and somehow no progressive has rebutted it. Because they can’t. In Canada your much vaunted health care system is a failure. Canadian workmans comp won’t use it. It’s slow, ineffecient and costs more in care due to long waits that make recuperation longer. In the words of the pro-socialist journalist who talked about it-A wait of up to 6 months for injury care means that recuperation times go up by a factor of as much as 6 times. Workmans Comp has gone to a private clinic system where care can be delivered quickly and cost effectively. This is unfair to those in the system who must endure the long waits. It’s unfair to the doctors who can’t get jobs at the private clinics which pay more because they’re not good enough. It’s just unfair damn it! After this diatribe the fool of a Canadian journalist goes on to say that he can’t understand why we don’t adopt the socialist system! So learn your facts before typing. Re 99 I’m sure. He’s incoherent and spouts nonsensical numbers that appear to support your position without you needing to think. Handy. @95 Go back to your ECON 101. Full employment (zero unemployment) is not desirable under the American capitalist system. There MUST be a willing and able work force ready to be hired by new business should the need arise. 
This is the economic policy of the nation in which you live. The fiscal policy of the past 50+ years of the United States strives to create unemployment levels between 3-4% So tell me, if 3% of the working eligible are BY DESIGN unable to find work, are they worthy of compassion? Did they make poor choices that keep them from working? You know, there are pure free-market, 100% de-regulated economies in this world. Living under such a system might suit you. Enjoy your new life in Somalia. There, you can make as much as you can grab and no one will take taxes from you. Where is this un-taxed, un-socialized first world free market paradise you’re looking for? (I hear the ghost of a dead political philosophy wispering to you. To borrow a phrase, “America, love it or leave it!” I’m curious about the lack of basic English comprehension among the HA crowd. Is this an isolated thing, or leftists in general who can’t understand our natal tongue? So, to teach you some econ 101, you don’t need “3% of the working eligible are BY DESIGN unable to find work.” Plenty of lazy folks who won’t work until the rent is 3 months overdue and the eviction notice is on the door. No design needed. That is simple “I hate wealthy people” rhetoric common to the far left. Had you read my posts you would have seen a clear admission numerous times of a legitimate role for a taxing government. Since you so clearly hate this country I might suggest to you- To borrow a phrase, “America, love it or leave it!” @96…HAHAHAHAHAHHAHHAHAHAHHA…YLB gets his ass owned – and when he cant think of a response – he says “im bored”…. YLB keeps on proving why he is HA’s biggest loser. 105 – You lack historical context. In fact the only context you have is the Limbaugh show. Our lost soul also believes that homosexuals are by definition mentally ill even though the psychiatric professions abandoned that canard ages ago. He also proclaimed his Obama derangement syndrome on the day Obama was elected. Obama has continued the policies of Bush in many areas much to progressive disappointment. Does the lost fool have anything postive to say about that? Obviously not. Just the same old knee jerk right wing foolishness. So I’m pretty much bored with just about anything he has to say. I pity you right wing fools because your collective asses are “owned” by the likes of a Limbaugh whose fat ass in turn is owned by some pretty rich people. The only thing Limbaugh ever supported Clinton on was NAFTA. Need I say more? I heard “Slimebaugh” on the radio the other day. I like that one.. Re 106 Logical error 1- Straw man-I can’t argue with the reasoned position my opponent has taken so will attack some element of him instead to try to deflect attention from my lack of argument. But I’ll bite. Just because the likes of you are afraid of their beliefs don’t assume I am. If I tell my friends I think I’m a hedgehog who was unfortunate enough to have been born human they would be rightly concerned about me. Homosexuality is analgous. It isn’t dangerous to the sufferer or anyone else so isn’t my business. But it is a mental illness. Logical error 2- Some conservatives listen to and believe Rush Limbaugh. He is a conservative. He listens to and believes Rush Limbaugh. I don’t, never have. See, your error was in your assumptions made without support about what I or anyone else on the right believes about Limbaugh. This concludes todays lesson in logic. Have a nice day. 108 – You pretty much ape Limbaugh.. pretty much the same attitude. 
Yawwnnn… Logic is wasted on the likes of you. YOU”RE JUST NOT WORTH IT.. OK??? Get over yourself. Like your fellow travellers here – we’re just not that into you… 110 – That tirade was really “logical”.. ylb owned again….. Lost… If you continue to simply parrot the conservative talking head / teabag rhetoric, WE will continue to ignore it. Your distaste for appology is perfectly in keeping with a two year old’s temper tantrum. “If we’re an arrogant nation, they’ll resent us; if we’re a humble nation, but strong, they’ll welcome us.” Dubya. I’ll paraphrase another thing I read, might have been Franken. You love America like a two year old loves their mother. Absolutely and without question. I love America like an adult. I see that it has faults and makes mistakes but I love it anyway and work to make the relationship better. So to sum up your position, your grasp of American economic policy and history is limited. You believe that only productive people are worthy of compassion. You believe taxation is theft. You believe that the institutionally imposed unemployment rate is just lazy people. You believe that because someone substantively argues against your economic theory you can just change the subject and expect us to follow. Confronted with substantive argument, you go to the last refuges of the poor debater, spelling and grammar. @114….I agree: lets say “im sorry”…and then we keep every single fucking penny of foreign aid and use it at home instead of throwing it down the toilet of 2nd and 3rd world nations. I like that idea….”im sorry – and your own your fucking own – no more foreign aid, no more free food shipments, no more anything” have a nice day, world..lets see how well you do on your own. 115 – Bodies are stacked in the street in Port au Prince and the right wing equates a modicum of aid with “meals on wheels”.. We spend an inadequate amount on roads and bridges and other things to put some people back to work, things we’ve long needed and put off doing because of right wing ideology and by any measure not nearly enough to boot and the right wing calls that “pork”. Wars of choice? That’s just fine. Sure sucks to be a right winger. RE “So to sum up your position, your grasp of American economic policy and history is limited.” I’m neither historian or economist, but I’m not ignorant either, whatever your opinion of the matter is. “You believe that only productive people are worthy of compassion. You believe taxation is theft.” No, and there are ESL courses to give you the reading comprehension you so clearly need. “You believe that the institutionally imposed unemployment rate is just lazy people.” No, there is no such beastie. There are the truly unfortunate and the lazy. Those terminally unemployed and able bodies are the latter. “You believe that because someone substantively argues against your economic theory you can just change the subject and expect us to follow.” When? I’ve been trying to get you folks to engage on Obamas’ lack of integrity for a month. No dice and never an answer. I’ve been trying to get a direct answer to the basic worthlessness of universal healthcare for months. No answer to any direct statements of how these systems realy work. You folks don’t like a fact, you simply ignore it. Changing the subject is a liberal trick, not a conservative one. You don’t like the argument? Just say ‘fuck you’ and that’ll do the trick! 
“Confronted with substantive argument, you go to the last refuges of the poor debater, spelling and grammar.” I’m waiting for any substantive argument of any kind. Still waiting. Still waiting. ylb arschloch, Produce multiple links from your HA Backup database where HA Libtardos said they helped in Katrina with their own money. We’ll wait. @75 It is fruitless to debate with you not because you’re a condescending fuck, although you are, but because you stupidly assume you know what other people are thinking when you don’t pay even cursory attention to what they’re saying. These are your two overwhelming traits: You talk down to people, and you try to put words in their mouths that they never spoke. This comes through in nearly all your posts. To this must be added that, when you try to support your arguments with factual statements, you’re also often a liar. Now, as for the specific content of #75, if society chooses to take care of its weakest or most unfortunate members by taxing all of its members, then you have the same obligation as everyone else to pay those taxes in return for the privilege of living in that society and enjoying its benefits. In this country, we make those decisions by the principle of majority rule. As I’ve tried to point out before, as a business person you enjoy many benefits provided by taxpayers. Your business couldn’t exist if the public didn’t educate your workers to read and perform basic math. It couldn’t exist if the public didn’t provide roads for you to get men and materials to job sites. It couldn’t exist without many of the other things paid for by taxes. As a citizen, you don’t get to pick and choose which taxes you’re willing to pay, or what your taxes pay for, except at the ballot box when you vote for the people who make those decisions. That applies to you exactly as it applies to the rest of us. Finally, if you don’t like the idea of paying money for benefits that are ultimately given to other people, then don’t buy home, car, or business insurance; because that’s exactly the principle that insurance operates on — the many pool resources to provide benefits for a few so that all are protected against life’s risks. @80 “The person who died already paid taxes on that million prior to his death.” You are either hugely ignorant or a huge liar. Most multimillion-dollar estates consist largely of untaxed capital gains. The heirs get a basis step-up, which means those gains will never be subjected to capital gains tax. Without an inheritance tax, that income would go completely untaxed. Here’s how it works. Let’s say Joe Blow bought property outside Everett for $150,000 in 1955. The city has grown and the property is now prime commercial property worth $5,000,000. But Joe never sold it, so no taxes have been paid on the $4,850,000 gain. Then Joe dies and leaves the property to his son, Joe Jr., who immediately sells it for $5,000,000. How much of Joe Jr.’s capital gain is taxable? If you said $4,850,000, you’re wrong. The correct answer is $0. The reason is because when Joe Jr. inherited the property his basis for tax purposes was automatically bumped up to its current value of $5,000,000. Therefore, his capital gain is zero. In fact, if Joe Jr. sells the property in 2010, the inheritance tax on Joe’s estate is zero, too, so Joe Jr. is getting a $4,850,000 tax-free windfall. Only the original $150,000 was ever taxed. The $4,850,000 capital gain has not been, and never will be, taxed. 
That’s pretty goddam insulting to a wage earner doing dangerous and backbreaking work (e.g., installing a new roof on one of “lost’s” rebuilds) who gets only a $3,650 personal exemption and $5,700 standard deduction. Lost, you can transfer $3,500,000 of capital gains to your heirs without either one of you ever paying any income, capital gains, or inheritance taxes on it. For you to complain about your heirs having to pay inheritance taxes on amounts above that is, shall we say, whiny. But hey, I’m willing to make a deal. I’ll support abolishing the inheritance tax if you’ll support abolishing the basis step-up and subjecting inheritances to capital gains taxes. Deal? @108 “But it is a mental illness.” Do yourself a favor and read the American Psychiatric Association’s position statement on homosexuality before popping off on a subject you know nothing about, so you won’t look like an idiot. ” … the American Psychiatric Association … has maintained, since 1973, that homosexuality per se, is not a mental disorder.” @115 You’re not real familiar with where your food and almost all your consumer products come from. Most of THEM would do better than US in a situation like that. Welcome to the global economy. ‘O and I hope you’re not a fan of coffee or chocolate. Most of our aid goes to Israel and Egypt. Believe me, there are things we need to cut. Like most of our overseas bases and our gun running business (most of the guns are made overseas as well). Foreign aid should probably be mostly teacher, nurses, doctors and such teaching their third world counterparts. That’s what we did prior to WII and people liked us much better then. Rabbit, I tend to glance over most of lost’s posts, so I didn’t see him proclaiming homosexuality as “a mental illness.” Wow, he’s more mental than I thought he was. He actually thinks of himself as an intellectual. He abandons any thread, however, when thinking is required. I suspect lost is a divorced, aging, jobless, male whose kids don’t much care for him. That’s my guess. @121 Welcome to 1915, Rog. @110 You are a dunce without peer among the trolls on this blog, and that’s a bar so low it takes a gopher to get under it. This is what you said: “Obama had the nerve to travel around the world apologizing for the US for everything from the fall of the Roman Empire to the shortage of milk for breakfast cereal. Admitting error is one thing. In a dangerous world showing signs of weakness is quite another.” By contrast, this is what The Economist said: .” (Emphasis added.) The Economist is a highly respected magazine — Bill Gates says it’s his first choice for news. Frankly, I’m glad Obama is president, and I’m equally glad you’re not. @115 I like your idea, since the biggest chunk of our foreign aid goes to Israel and I think Israel is a bully. @117 “but I’m not ignorant either” You continually prove otherwise. @117 “I’ve been trying to get a direct answer to the basic worthlessness of universal healthcare for months.” Here’s your direct answer: It takes a real shit to believe some people should do without health care. @123 “I tend to glance over most of lost’s posts” Trust me, you’re not missing anything worth reading. @124 “Welcome to 1915, Rog.” I was thinking he’s more like 1692 or thereabouts. Wow Herr Goebbels Dumb Bunny… Projection 101. Puddy’s critique of your “This is how it works” series is one of your stupidly assume you know what other people are thinking diatribes. 
Better yet, this thread has many of your stupidly assume you know what other people are thinking diarrhea commentaries now! @131 Well, let’s see. We have 3 persistent trolls who elevate stupidity to an art form, each in his own unique way. First there’s Mr. Cynical, who sounds like a broken record, but whose fortitude you can’t help but admire; he has endured on HA for years despite all the deserved abuse we heap on him. Then there’s “lost,” who raves like a fool, but is impervious to our slings and arrows because he has armored himself with the arrogance of the ignorant dolt who knows absolutely nothing of the subjects on which he expounds. And finally, here’s our good friend puddy, who like Klown is still here too, although the long months of verbal combat on HA have reduced him to a babbling idiot who writes in tongues. You’d better stand well clear of him, because his head is going to explode any day now. @39 Actually, I’m not all that impressed by how REI is run these days. Unwilling to trust their members to nominate directors, candidates for the board of directors are now chosen by — the board of directors. With not even one seat on the board reserved for a popular candidate. This self-perpetuating cabal has turned REI into a virtual corporate monarchy in which the members are merely customers; it can hardly be considered a member-controlled cooperative anymore. I’m not saying it’s badly run, but it’s even the most obtuse corporations have better governing structures than this. It used to be — I don’t know if it still is — that a vintage REI membership numbers was quite the status symbol. For what it’s worth, my REI number dates to about 1960 and is in the low five digits. In the entire history of The Co-Op, there have been only about 35,000 members before me. For those of you unfamiliar with REI (do such people exist outside the Amazon rainforest?), the list of past and present REI members now numbers in the plural millions. So, all you wingnuts with 7-digit REI numbers greedily waiting for your dividend coupons, please bow down and kiss my cute cottontail when I come hopping by. @52 “They can work hard for the money paid them or they can serve time at work.” Generally speaking, the more you pay people the harder they work for you; and when cheap labor conservatives insist on paying third world wages, they can expect to get what they paid for. @81 Here’s a man drowning in a sea of wingnut platitudes, all of them false. “Lost” is striving to become the biggest HA liar of all time. And that’s no small feat. what exactly is so backwater about the 10th? It’s no surprise that liberals hate the constitution (to include the 10th amendment), but to take pride in that fact? Well, that’s another level of ignorance I can’t necessarily understand. …now back to regularly scheduled Roger Rabbit chat room activity
http://horsesass.org/has-the-wsrp-embraced-the-tentherist-agenda/
CC-MAIN-2021-10
refinedweb
13,046
64.3
Agenda See also: IRC log -> Accepted. -> Accepted. Norm, let's meet next on 13 December isntead Mohamed gives likely regrets for 13 December -> 1. Resolved 2. Why all these steps? Norm: I don't really have any sympathy for their position Some answers: pipelines are easier to understand, streaming is likely to be easier in some cases, ... Henry: I think it's worth emphasizing that we anticipate that a significant user community will be processing very large inputs through very simple pipeliens and having to pay the cost of an XSLT runtime for this is not very attractive Norm: What do we say about the fact that we don't guarantee streamabilty Henry: No, that's a QoI issue. Norm: Ok, let's start there. Anyone want to add anything else? Richard: The question about streaming is more specific. Henry: It's very easy to detect a very small subset that's streamable, but that subset covers a lot of use cases. Richard: If you want to do some analysis, you can stream up to a point and then build a tree which is less practical for XSLT 3. Parallel executions Norm: I don't understand the comment. Henry: I don't understand the third sentence. ... The classic case is a document styled with a stylesheet generated from the same document. What's the problem? Richard: I think maybe they just don't understand that we're saying you have to make it work. Norm: I think we should ask them to clarify. Richard: The statement in the spec about "the order determined by the connections" might be being taken too strong. Henry: Something like, "in an order consistent with the connections"... 4. resolved 5. resolved. 6. resolved <scribe> ACTION: Norm to reply [recorded in] -> Agreed: we'll give them unreferencable names -> Mohamed: I don't find any context where it would be confusing Norm: It's not possible on pipelines anymore, it's only on other compound steps Richard: It's only the defaulted output ports of subpipelines. The only reason for allowing them is to save people from making up names in simple, linear pipelines. ... I think if they want to make reference to names out-of-order then they should have to declare the port. Mohamed: I'm ok with that. It'll be defaulted on both ends usually. Henry: I think his message has been overtaken. ... Our discussion at the plenary clarified the issues a lot and moved us forward. ... It would be good to get Alex's input. But we could talk about the last three paragraphs. Richard: The main point of Henry's message is that there's a proof that it can be made to work. ... Henry's description is based on the approach that I'm taking which does several passes. Norm: I suggest we leave this open for a week to get Alex's input. Henry: We need to say something about reentrancy as well as circularity Richard: I don't think the term reentrant is generally understood to mean what you mean here ... It's the diamond pattern: a imports b and c and b and c both import d Norm: Yeah, the editor will have to figure out how to express that. Poor sod. -> Norm: I think they're all LEIRIs and that's all we need to say Henry: I think we should say that. Norm: I agree. I'll see if that satisfies Alex. -> Norm: There was some pushback on the places where I put names, so I think we probably need to discuss fragids and MIME types again Ricahrd: I think there's value in having the names independent of the fragid question, it improves error reporting. <ht> Here's the only place I can find which discusses fragids: Henry: If we do this, then we have to address the question of uniqueness. 
Do we make these things unique like their sisters and cousins or not? ... We have a general purpose mechanism for naming bits of XML syntax, it's xml:id. ... I'd like to set this aside from the question of our own XPointer scheme for a moment. ... It feels a little bit like we have a hammer in our hands so everything looks like a nail. ... The most bizarre aspect of this is the fact that there's no discussion of the name attribute in some of those places, like p:declare-step. ... If you give a pipeline library a name attribute, people are going to think you can refer to it by name. ... We don't really expect that to mean anything. ... Users are going to think there's some deep complexity there but there isn't, there's barely any there there. Richard: You took the names out, Norm, but they still get names like !3 Norm: Yes, but if we undo the fragid decision... Richard: Like I said, I use the names to report errors and I use the ! names if they don't have any. Henry: So what's the question? Richard: Well, there's no doubt that all steps have to have names because they have ports. ... So it's the things that don't have ports: pipeline-library, catch, when, otherwise, ... Henry: My compromise position is, I'm a little nervous, but I could live with putting names on the schizophrenic contstructs. Richard: It's the things inside try and inside choose except group. Henry: Like I said, I could live with names there; it doesn't really conflict with the object model. ... It's declare-step that I think shouldn't have one. Richard: That's funny, I was going to go the other way. It seems that declare-step should be like pipeline in this regard. Norm: We do have this weirdness with namespace and name in pipeline library. Henry: I agree, pipeline and declare-step should have the same naming structure. Norm: Name doesn't work then because they're NCNames. Richard: Why would you ever want to have a single pipeline library taht declare steps in separate namespaces? Norm: Because you aggregated them after you wrote them over time; I don't see how the library should have a bearing on the names. Richard: Do we allow any steps to be in no namespace. Norm: Yes, though it's not clear that we meant it to be that way. <MoZ> no namespace is impossible because of ignorable-prefix Norm: So where are we? Henry: I'm prepared to float the following pair of changes: <MoZ> <foo:step xmlns:...</foo:step> Henry: Reinstate optional names on when/otherwise/catch and remove name from pipeline and namespace from pipeline-library Norm: You can't remove name from pipeline, that's how the steps refer to its ports Henry: No, they use the local-name of the type attribute Norm: Uhm. My initial reaction is "eew" but maybe I need to think about it some more. Henry: Having to write name="foo" type="my:foo" is just hopelessly confusing. Richard: The reason that pipeline is like this is because if its usual schizophrenia ... You're allowed to have an unnamed pipeline at the moment. ... And an untyped one. Scribe lost Richard's thread Henry: We'll never see names and types that are different Norm: I don't agree, I might name all my pipelines 'main' irrespective of their type Henry: So what I said before with a small modification: 1. Remove name from declare-step and pipeline-library 2. Add name to when/otherwise/catch 3. Remove namespace from pipeline-library 4. Remvoe the magic about name/namespace for defaulted types in a pipeline library 5. 
Make type required on a pipeline in a pipeline library. Norm: So the editor should give that a whack? Agreed. None. Adjourned
http://www.w3.org/XML/XProc/2007/11/29-minutes
CC-MAIN-2014-42
refinedweb
1,303
72.76
Hi, I recently started testing a Nextion TFT display with my pyboard. I am able to send and receive simple instructions through UART(3). A raspberry pi is connected via uos.dupterm(UART(6)), mainly for REPL and file transfers. Occasionally new programming - .tft files - needs to be uploaded to the display. This can be done easily from the raspberry pi to the display with a USB to TTL adapter using the python code found here: ... nextion.py. However, it would be nice to avoid all the plugging and unplugging and use a UART to UART pass through instead. There is an example of USB to UART pass through here: ... rough.html Not sure how to tailor this to my needs. Any advice will be appreciated - with examples even more so. Thanks all.

Pyboard - UART to UART pass through for Nextion
You shouldn't need to change the pass through function at all. Just open a UART rather than USB.

Re: Pyboard - UART to UART pass through for Nextion
I have exchanged USB for UART as suggested. I am able to send and receive simple messages - like set display brightness or get buttons pressed. My excitement was short lived however. When I try to upload a .tft file, things hang very quickly.

Output:

Code:
pi@raspberrypi:~ $ python /home/pi/DAPLC_PY/Screen/nextion2.py /home/pi/DAPLC_PY/Screen/Controlfreaks-7.tft /dev/ttyUSB1
Trying with baudrate: 9600...
Trying with baudrate: 115200...
Connected with baudrate: 115200...
Status: #���comok
Touchscreen: yes
Model: NX8048T050_011R
Firmware version: 99
MCU code: 61488
Serial: E466A0018F3E3822
Flash size: 167772
Downloading, 0.1%...()
Could not transfer file
pi@raspberrypi:~ $

Pyboard firmware: MicroPython v1.9.3-558-ga60efa82 on 2018-04-23; PYBv1.1 with STM32F405RG

Feels like the timing is off. I suspect my naive rework of the USB to UART pass through example is faulty.

Code:
import pyb
import select
from pyb import UART

uart1 = UART(1)
uart1.init(115200, timeout=0)
uart3 = UART(3)
uart3.init(115200, timeout=0)

while True:
    select.select([uart1, uart3], [], [])  # select line seems to make no difference
    if uart3.any():
        uart1.write(uart3.read(256))
    if uart1.any():
        uart3.write(uart1.read(256))

Re: Pyboard - UART to UART pass through for Nextion (pythoncoder)
I would try something along these lines (note I haven't actually tested this). If you want to use uselect I suggest you read this part of the docs. However in this simple example I don't think it brings much to the party.

Code:
import pyb
from pyb import UART

uart1 = UART(1)
uart1.init(115200, read_buf_len = 256)
uart3 = UART(3)
uart3.init(115200, read_buf_len = 256)  # Try higher values if necessary

while True:
    nrx = uart3.any()  # Count available characters
    if nrx:
        uart1.write(uart3.read(nrx))  # Only read what's available: it should never time out
    nrx = uart1.any()
    if nrx:
        uart3.write(uart1.read(nrx))

Peter Hinch

Re: Pyboard - UART to UART pass through for Nextion
Success - I can now communicate remotely with my Pyboard and Nextion display (an ESP8266 controls communications, as well as a few i/o pins on the Pyboard, including reset). The Pyboard can boot up in one of two ways: dupterm enabled, or UART to UART pass through enabled. When booted into dupterm mode, the Pyboard interacts directly with the TFT display.
Code:
# boot.py -- run on boot-up
# i/o A15 switches boot modes from dupterm to UART6 <> UART3 pass through for screen programming.
import pyb
from pyb import UART
import uos

sp = pyb.Pin('A15', pyb.Pin.IN)  # Activate input A15 (red led) connected to ESP8266 i/o 14
pyb.delay(10)

if sp.value() == 1:
    uart6 = UART(6)
    uart6.init(115200, read_buf_len = 256)
    uart3 = UART(3)
    uart3.init(115200, read_buf_len = 256)  # Try higher values if necessary
    while True:
        nrx = uart3.any()  # Count available characters
        if nrx:
            uart6.write(uart3.read(nrx))  # Only read what's available: it should never time out
        nrx = uart6.any()
        if nrx:
            uart3.write(uart6.read(nrx))
else:
    uart6 = UART(6)
    uart6.init(115200)
    uos.dupterm(uart6)  # duplicate repl on UART(6)

pyb.main('main.py')  # main script to run after this one
pyb.usb_mode('CDC')  # act as a serial (CDC) and not a storage device (MSC)

All help was appreciated. Many thanks pythoncoder.
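A note on the design: both working loops above poll the UARTs continuously, which is fine for a dedicated pass-through box but spins the CPU even when both links are quiet. Below is a minimal, untested sketch of an event-driven variant using MicroPython's select.poll; it is not from this thread, and the UART numbers and buffer sizes are simply the ones used above.

Code:
import select
from pyb import UART

uart3 = UART(3)
uart3.init(115200, read_buf_len=256)
uart6 = UART(6)
uart6.init(115200, read_buf_len=256)

poller = select.poll()
poller.register(uart3, select.POLLIN)
poller.register(uart6, select.POLLIN)

while True:
    # poll() with no timeout blocks until at least one UART has data
    for obj, event in poller.poll():
        if obj is uart3:
            uart6.write(uart3.read(uart3.any()))
        else:
            uart3.write(uart6.read(uart6.any()))

Whether poll() is worth it here is debatable (as pythoncoder notes, it adds little for two UARTs), but it avoids busy-waiting when the link is idle.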
https://forum.micropython.org/viewtopic.php?t=4930&p=28511
CC-MAIN-2018-47
refinedweb
756
69.58
Opened 8 years ago Closed 8 years ago Last modified 5 years ago

#8072 closed (duplicate)

NFA: Valid fieldsets are marked as 'error'.

Description

If more than one field is placed on one line in the NFA, this line of fields is marked with the 'errors' class and e.g. this fieldset cannot be collapsed. This is because of the following lines in django/contrib/admin/options.py (around line 92):

def errors(self):
    return mark_safe(u'\n'.join([self.form[f].errors.as_ul() for f in self.fields]))

For more than one field, this code generates '\n'. Now look at django/contrib/admin/templates/admin/includes/fieldset.html (line 5):

<div class="form-row{% if line.errors %} errors{% endif %} {% for field in line %}{{ field.field.name }} {% endfor %} ">

The "\n" is True in this context, so the "errors" class is illegally associated with this "div". This issue can be easily fixed by the following patch:

Index: django/contrib/admin/options.py
===================================================================
--- django/contrib/admin/options.py (revision 8168)
+++ django/contrib/admin/options.py (working copy)
@@ -89,7 +89,7 @@
             yield AdminField(self.form, field, is_first=(i == 0))

     def errors(self):
-        return mark_safe(u'\n'.join([self.form[f].errors.as_ul() for f in self.fields]))
+        return mark_safe(u'\n'.join([self.form[f].errors.as_ul() for f in self.fields]).strip())

 class AdminField(object):
     def __init__(self, form, field, is_first):

Change History (2)

comment:1 Changed 8 years ago by Karen Tracey <kmtracey@…>
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to duplicate
- Status changed from new to closed

Already reported in #5631, which has the same patch.

comment:2 Changed 5 years ago by jacob
- milestone 1.0 beta deleted

Milestone 1.0 beta deleted
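A quick illustration of the truthiness problem this ticket describes (not part of the ticket or its patch; the variable names are invented): joining the error lists of several clean fields with '\n' yields a whitespace-only string, which is still truthy in the template's {% if line.errors %} check.

errors_per_field = ['', '', '']           # three valid fields, no error markup
joined = u'\n'.join(errors_per_field)     # '\n\n' -- non-empty, hence truthy
assert bool(joined) is True               # so the 'errors' class would be added
assert bool(joined.strip()) is False      # .strip() reduces it to '', and the check passes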
https://code.djangoproject.com/ticket/8072
CC-MAIN-2016-30
refinedweb
287
52.56
Description

To reduce # socket connections, it would be useful for the new producer to close socket connections that are idle. We can introduce a new producer config for the idle time.

Issue Links
- depends upon KAFKA-1928 Move kafka.network over to using the network classes in org.apache.kafka.common.network - Resolved
- is duplicated by KAFKA-1941 Timeout connections in the clients - Resolved

Activity

Hi, I noticed that the dependencies are done and I will resume this task. The task contributions had been: - a fix - unit test(s). As far as the fix is concerned, I noticed that it is already fixed in the current Selector, namely the lruConnections is a LinkedHashMap with accessOrder=true. This was the only fix needed, and I am 100% convinced that the fix is already done. I already have a unit test too, I will try to put a patch here this week. Just wanted to mention that the old connections should be closed by the kafka installations using the new reusable network code. Thanks Nicu

nicu marasoiu, thanks for the patch. We are changing SocketServer to reuse Selector right now in KAFKA-1928. Once that's done, the idle connection logic will be moved into Selector and should be easier to test since Selector supports mock time. That patch is almost ready. Perhaps you can wait until it's committed and submit a new patch.

Hi Jun Rao, Neha Narkhede, I added a test, please review. The patch has 2 variations (latest 2 patches), explained at point 2 below, while the latest implements 1' below.
1. I wanted to sleep on MockTime, but here we actually need to physically wait at least one epoll/select cycle. Since I have put 10ms idle time & it works, mocked time would not bring benefits, i.e. only the select time needs to be waited over.
1'. Because of potentially large & not deterministically bounded select times, I implemented a mechanism to try a few times, waiting 50% more time every time.
2. Seems to work with low (10ms) idle timeout for all current test methods. However, I attach a patch with a separate test class for this (and yet another utils class for reuse), to isolate configuration between groups of test methods.
3. Shall I do a multiple connections test?
I will do unit tests tomorrow / the day after. The fix should be ok otherwise, and ready to be pushed on trunk and 0.8.2. I will announce when done with units. On Tue, Dec 30, 2014 at 12:21 AM, Neha Narkhede (JIRA) <[email protected]>

nicu marasoiu, Jun Rao This is marked for 0.8.2. Is anyone working or planning to work on this?

This is already in 0.8.2 so we should incorporate the follow-ups there as well I think.

Nicu, Thanks for the patch. Do you think it's easy to add a unit test on Processor?

good catch nicu marasoiu. +1 on your change

Fixed it - I had mistakenly deleted at some point the fact that the linked hash map needs to be in access order. I tested with your scenario and it looks ok now.

Indeed, I can reproduce this. I did see an instance where no exception was thrown by the producer but still the broker mentioned a new connection being listened to, suggesting a close took place. However, checking with required-acks 0 I can see that after some time the connection does not close anymore.

Nicu, I was doing some manual testing of this feature. What I observed is that sometimes, the idle connections are not closed. The following was what I did.
1. Configure a small connections.max.idle.ms = 10000.
2. start ZK and Kafka broker
3.
start a console consumer bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic1 --from-beginning 4. start a console producer and type in sth every 15 secs or so. bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic1 --request-required-acks 1 What I observed was that initially, the producer connections kept getting killed by the broker correctly after being idle for 10 secs. The next producer send would hit an IOException and trigger a resend. However, after typing in 10 or so messages, at some point, no idle connections were killed by the broker any more and the producer send always succeeded. +1 on your latest patch. I'm leaning towards accepting the patch since the test above points to an issue that seems unrelated to the patch. nicu marasoiu, it will be great if you can follow Jun's suggestion to reproduce the issue. Then file a JIRA to track it. I'm guessing killing idle connections shouldn't lead to data loss. Do you think you can reproduce that data loss issue in 1 out of your 7 tests? With ack=1 and retries, this shouldn't happen. Perhaps it's useful to enable the trace logging in the producer to see what's exactly happening there. Could you also do the same test by enabling the new producer in console producer? attached, renamed time and for the "initial/reset value of the nextIdleCheck", i just inlined the function, the code is more clear like this i think Thanks for the updated patch. Overall, looks great. Few comments - 1. Can you rename initialNextIdleCloseCheckTimeValue to nextIdleCloseCheckTimeValue? 2. It will be easier to understand the code if we rename currentTime to currentTimeNanos. Indeed, ack=1 solves it for most times but not for all: - in 6 of 7 tests it gets a reset by peer and a socket timeout on fetch meta, than re connects and sends message. - in one test, after leaving one night the laptop, I entered: sdfgsdfgdsfg --> that never returned, no exception, nothing at all reported aaaaaaaaaaa aaaaaaaaaaa ff ff The "ok" flow, which reproduces most of the time with ack=1 is (sometimes with just one of the 2 expcetions): gffhgfhgfjfgjhfhjfgjhf [2014-09-18 08:22:35,057] WARN Failed to send producer request with correlation id 43 to broker 0 with data for partitions [topi,0] (kafka.producer.async.DefaultEventHandler) java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) .. at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44) [2014-09-18 08:22:36,663] WARN Fetching topic metadata with correlation id 44 for topics [Set(topi)] from broker [id:0,host:localhost,port:9092] failed (kafka.client.ClientUtils$) java.net.SocketTimeoutException at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:226) .. [2014-09-18 08:22:36,664] ERROR fetching topic metadata for topics [Set(topi)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed (kafka.utils.Utils$) kafka.common.KafkaException: fetching topic metadata for topics [Set(topi)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:71) .. Caused by: java.net.SocketTimeoutException at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:226) .. 
kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29) at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108) at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:74) at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:71) at kafka.producer.SyncProducer.send(SyncProducer.scala:112) at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:57) ... 12 more gffhgfhgfjfgjhfhjfgjhf Interesting. The data loss may have to do with ack=0, which is the default in console producer. Could you try ack=1? in fact, this is something that needs fixing in the producer(s) anyway, but the issue is with the currently deployed producers. One of the main reasons to go with a broker side close of the idle connections was that it is easier to redeploy brokers then producers. But if this is indeed a bug in the producer(s) as I reproduced, those producers would need redeploy. So moving this to the producer side as a configuration may again be an option on the table. here is a time line: he -> produced he -> consumed [ wait beyond timeout here, connection got closed underneath by the other side] [2014-09-17 15:02:28,689] INFO Got user-level KeeperException when processing sessionid:0x148837ce1800001 type:setData cxid:0x24 zxid:0xec txntype:-1 reqpath:n/a Error Path:/consumers/console-consumer-87959/offsets/topi/0 Error:KeeperErrorCode = NoNode for /consumers/console-consumer-87959/offsets/topi/0 (org.apache.zookeeper.server.PrepRequestProcessor) [2014-09-17 15:02:28,691] INFO Got user-level KeeperException when processing sessionid:0x148837ce1800001 type:create cxid:0x25 zxid:0xed txntype:-1 reqpath:n/a Error Path:/consumers/console-consumer-87959/offsets Error:KeeperErrorCode = NoNode for /consumers/console-consumer-87959/offsets (org.apache.zookeeper.server.PrepRequestProcessor) dddddddddddddd --> produce attempt (never retried, or never reached the broker or at least never reached the consumer) [ many seconds wait, to see if the message is being retried, apparently not, even though the default retry is 3 times] wwwwwwwwwwwwwwwww --> new attempt (immediattely I see the message below with the stack trace, and reconnect + retry is instantly sucesfull) [2014-09-17 15:03:12,599] WARN Failed to send producer request with correlation id 9) at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:72) at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:71) at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:102) at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102) at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102) at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:101) at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101) at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101) at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) at kafka.producer.SyncProducer.send(SyncProducer.scala:100) at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255) at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106) at 
kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100) kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100) at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72) at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104) at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87) at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67) at scala.collection.immutable.Stream.foreach(Stream.scala:547) at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66) at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44) [2014-09-17 15:03:12,712] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor) wwwwwwwwwwwwwwwww re-attached fixed patch, but we may have a blocker to the whole solution on the broker side, pls see comment above/below (first message after disconnect is lost on the client used in console-prod) Hi, Unfortunately the client used in console-producer is not very robust with respect to disconnections, as will detail below. Is this the "old" scala producer, and can we hope for a resilient behaviour that I can test with the new java producer? More specifically, the connection is closed from the broker side, but the producer is unaware of this. The first message after the close is lost (and is not retried later). The second message sees the broken channel, outputs the exception below, and reconnects and is succesfully retried, I can see it consumed. [2014-09-17 12:44:12,009] WARN Failed to send producer request with correlation id 15) Attached patch: every select iteration, zero or one connections are closed for being idle for too long. The units pass well, but For the moment I am blocked by: ./kafka-console-producer.sh Error: Could not find or load main class kafka.tools.ConsoleProducer Hi, I have understood what you say and I agree it is highly unintuitive and we should change that. I just saw you propose a solution which included a precomputation of the time to close, and it was bit confusion, looked like an attempt of micro optimization. I have not made any patch yet, I waited for feedback from Neha too, but I will do the patch today: it looks ok to me the idea of closing at most one old connection per selector iteration. So the solution will look more like the previous patch, but instead of traversing n+1 entries to close n old connections, it will just pick the oldest and check if it is time to close. For #1, the way Neha and me discussed, and the way you understood it works (for the latest patch), is that an old connection is taken into consideration for close only when a new connection is being opened up (or activity exists on an existing connection too). But this will no longer be the case. Nicu, On #2, I wasn't worried about any performance optimization. My concern is mostly on testing and ease of understanding. Since removeEldestEntry is only called on update, you can't test the logic on a single connection to the broker. It's a bit weird that if there is only a single idle connection, that connection is never killed. But as soon as a second connection is added, the idle connection will be killed. For the user's perspective, it's simpler to understand how idle connections are killed if they are not tied to # of connection. 
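To make the approach in the preceding comments concrete - an access-ordered map where only the oldest entry is examined, and at most one connection is closed per select iteration - here is a rough sketch. It is written in Python purely for illustration (the actual patch is Scala/Java against SocketServer and Selector), and every name in it is invented rather than taken from the Kafka code base:

import time
from collections import OrderedDict

MAX_IDLE_SECONDS = 600.0          # stands in for connections.max.idle.ms

lru = OrderedDict()               # connection -> last-activity time, oldest entry first

def record_activity(conn):
    # Called on every read from a connection; mimics LinkedHashMap with accessOrder=true.
    lru[conn] = time.monotonic()
    lru.move_to_end(conn)

def maybe_close_oldest(close_fn):
    # Called once per select iteration: look only at the oldest entry (O(1))
    # and close at most one connection if it has been idle too long.
    if not lru:
        return
    conn, last_active = next(iter(lru.items()))
    if time.monotonic() - last_active > MAX_IDLE_SECONDS:
        del lru[conn]
        close_fn(conn)

The trade-off debated in the surrounding comments is where this check should live: an explicit per-iteration check like maybe_close_oldest() expires connections even on an otherwise quiet broker, whereas relying on removeEldestEntry() only fires when a new entry is inserted, so a completely idle server never drops anything.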
Also, could you explain how you fixed #1 in the latest patch? It wasn't obvious to me. Thanks for the patch, nicu marasoiu! Looks good overall. Few review comments - 1. Do we really need connectionsLruTimeout in addition to connectionsMaxIdleMs? It seems to me that we are translating the idle connection timeout plugged in by the user to 1000000x times more than what is configured. That's probably why Jun saw the behavior he reported earlier. 2. I don't really share Jun's concern in #2 and we can state that more clearly in the comment that describes the new config in KafkaConfig. Connections that are idle for more than connections.max.idle.ms may get killed. I don't think the users particularly care about a hard guarantee of their connections getting killed here. So the simplicity of this approach is well justified. 3. I do think that adding a produce and fetch test where the connections get killed will be great Neha Narkhede Hi, can you also check the new idea? It is consistent with my initial approach and solves the potential overhead of closing too many connections on a single iteration. Hi, I am not completely sure I fully understood your solution in point 2: Do you mean to close at most one connection per iteration, right? This is ok, the worst case scenario is closing 100K old connections in 10 hours, one per select. On storing the time to close in a local variable, the access of the oldest entry every iteration is O(1) super cheap so I would skip this optimization. Thanks for the latest patch. I was trying to do some local testing. The following are my observations. 1. I first started a local ZK and broker (setting connections.max.idle.ms 10secs). I then started a console-producer and a console-consumer. Then, I typed in sth in console-producer every 15 secs. However, I don't see the producer connection gets killed. I added sth instrumentation. It doesn't seem that removeEldestEntry() is called on every fetch request. 2. As I was debugging this, I realized that it's kind of weird to kill idle connections only when there is another non-idle connection. This makes debugging harder since one can't just test this out with a single connection. It's much simpler to understand if the idle connection can just be killed after the connection idle time, independent of other connections to the broker. To address the concern of closing many sockets in one iteration of the selector, we can calculate the time that a socket entry is expected to be killed (this is the access time of the oldest entry + maxIdleTime, or maxIdleTime if no entry exists). When that time comes during the iteration of the selector, we can just check the oldest entry and see if it needs to be closed. 3. It would be good to check if our clients (especially the producer, both old and new) can handle a closed idle connection properly. For example, when detecting an already closed socket, the producer should be able to resend the message and therefore we shouldn't see any data loss. I am sorry, Yes, that was the intent! I will write unit tests from now on to avoid such slips. Moreover, the removeEldestEntry will return false all the time, because it keeps the responsability of mutating the map for itself, as part of calling the close method. Attached the patch, tests pass. Looking at the patch again, in removeEldestEntry(), shouldn't we close the socket for eldest if the entry is to be removed? Right now, it seems that we only remove the entry from LRU w/o actually closing the idle socket connection. Patch updated. 
Configurable max idleness of a connection since the last read on it. On creating new N connections, the server will be Closing at most N idle connections too, if they are idle for more than the mentioned threshold, default 10 minutes. Thanks for the patch. The following should be * 1000000, right? private val connectionsLruTimeout: Long = connectionsMaxIdleMs * 1000 Neha Narkhede Hi, I implemented our discussion and applied Jun Rao suggestions, can you check and perhaps commit it if looks good? Hope for more tasks like this, do you have any suggestions? Nicu, Similar to producers, consumers just issue fetch requests. The SocketServer first reads the fetch request from the network and then writes the fetch response to the network once the fetch request is served by the broker. So, there is a 1-to-1 mapping btw reads and writes and writes typically happen within a second after the reads. Hi, Thank you, for 2. I agree for producers but I am not sure if the same SocketServer is used to serve consumers as well, and in this case, for consumers, the read/write ratio may be well in favor of writes making it risky perhaps to account just the reads? Thanks for the patch. Looks good to me overall. Some minor comments below. 1. Could we make connectionsLruTimeout a broker side configuration? 2. Do we need to insert the key to lruConnections in write()? It seems to me doing that in read() (for incoming requests) is enough. 3. The patch doesn't seem to apply for me. Could you rebase? git apply -p0 ~/Downloads/ KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch /Users/jrao/Downloads/ KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch:13: trailing whitespace. import java.util /Users/jrao/Downloads/ KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch:21: trailing whitespace. import java.util.Map.Entry /Users/jrao/Downloads/ KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch:30: trailing whitespace. private val connectionsLruTimeout: Long = TimeUnit.MINUTES.toNanos(10) /Users/jrao/Downloads/ KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch:31: trailing whitespace. private var currentTime: Long = SystemTime.nanoseconds /Users/jrao/Downloads/ KAFKA-1282_Disconnect_idle_socket_connection_in_Selector.patch:32: trailing whitespace. private val lruConnections = new util.LinkedHashMap[SelectionKey, Long](100, .75F, true) { error: patch failed: core/src/main/scala/kafka/network/SocketServer.scala:16 error: core/src/main/scala/kafka/network/SocketServer.scala: patch does not apply After discussion with Neha, we agreed that using the removeEldestEntry approach works better in the sense that avoids disruption caused by potentially many connections being up for close at once, and evens out that overhead. The disadvantage remains that an inactive server will not close connections but seems less than the advantage of closing overhead leveling and of performance plus of not traversing and of not polling the oldest entry. Hi, I am sorry, but traversing will be limited to the connections that will actually be expired, so there is no traversing of non-expiring connections (please see the detailed example below). I do agree on the other hand that there will be a polling on the first entry until it expires, but this is how we can implement the requirement exactly as intended (expiration taking into account just time as per stated "stale connections" issue, not connection count or activity as well), and it can be done every 1000 selects. 
If we want to protect brokers from becoming zombies, this is a different concern I feel. However, I completely agree that we can do the LRU limiting as well to avoid zombeing (as part of this jira or another one). Both mechanisms to expire can be at work and solve both problems with no overhead in doing so (there would just be 2 contexts in which an evict+close would be performed, if we do not count the evict done in a normal close call). Jun Rao, Jay Kreps, what do you think? Say the server hold 100K connections. Say 100 connections are not used in the last 10 minutes. What the program does (or I will make sure it does) is just iterate through the first 101 connections, the first 100 will be expired and it will stop at number 101. I think this is an exact achievement of expected behavior of the jira task, as intended, and there is no performance penalty to that really! I will rewrite with a loop /(tail-)recursive function, to check the first entry, and if stale call close (which also does a remove on the map anyways), and retry the next entry. This would be to avoid copying of the first 100 selectionKeys as well as to avoid any overhead/eagerness in map function. My suggestion was not just to address the performance concern which is somewhat of an issue nevertheless. The motivation was that there is an upper bound on the number of open connections you can support on the broker. That number is the # of open file handles configured on the box. Since that number is known anyway, you probably would want to configure your server so that the connections never exceed a certain percentage of that upper limit. Currently, if the server runs out of open file handles, it effectively stays alive, but is unable to serve any data and becomes a 'zombie'. But a downside of the expiration based on the connection count is that it doesn't necessarily achieve the goal of expiring really old connections. Instead it tries to solve the problem of preventing the broker from running out of available file handles, in which case we probably need a fairer strategy for expiring connections. Thinking more, I think it might be sufficient to override removeEldestEntry and check if the oldest entry is older than the threshold and let the map remove it. If the oldest entry is not above the threshold, traversing the map doesn't buy you anything. The downside is that if no new activity takes place on any of the connections all of a sudden, the server wouldn't proactively drop all connections, which is less of a concern. The advantage is that you will still get the same benefit of expiring older connections and it removes the need to traverse. To make the ~O(1) cost of "traversing" more clear, typically only the first element in the linked list is accessed, and it will typically be used in the last 10 minutes, and in this case nothing happens anymore. Of course, this is if the low volume topics do not generate many connections, which they won't, with this cleaning up in place. And I am checking now that map() and the rest are lazy, or else for sure I can make so that only the relevant "prefix/first" part of the collection is iterated, typically first element only. Traversing is quite cheap (it is traversing a linked list underneath, and only a prefix of it) and can be done every 1000 selects. 
The intent of your suggestion is to optimize, I understand, but the effects is a different behavior as I feel it (changes the expiration by time and switches it to an expiration by connection count), and to a low performance benefit (I think traversing is much cheaper than blocking close on each channel, that would happen either way). The idea of limited connection count can be used complementary to the existing traversing, but if you mean to take out the traversing every n selects, that changes the expiration by time and switches it to an expiration by connection count - is it an agreed requirements change with Jun Rao? I must warn that it is dangerous in my view to configure a maximum connection count per broker, because in event many brokers go down, and many clients need to use the system, this connection thrashing would not help anybody, and be a worse effect than not having this connection expiration at all, in such a scenario, relevant to a highly available system. Took a look at the patch. How about the following - 1. Limit the LRU cache size to the number of active connections that should be supported by the Kafka server. I'm guessing this should be a config. 2. Override removeEldestEntry to evict the oldest entry if the cache size exceeds the configured number of LRU connections. That way, we don't have to traverse the map several times in the main loop, which can be expensive. Thanks for picking this up nicu marasoiu. Assigning to myself for review. I attached a first version of the patch. I am still thinking on any other implications, but wanted to share a first draft to collect some feedback already. Thanks Hi, I will spend up to 4 hours per day the next week (11-15 august), when I have this time. So I would like to keep this nice task. My estimate, I will have a first working solution to put up for review in ~3 days, so Thursday. Does that sound good? Hey Nicolae Marasoiu, are you actively working on this patch yet? If not, do you mind if we have someone else pick it up? Beautiful, I can't wait to work this out, so I take this to code right? The goal is just to reduce server connection count. In our environment there might be a single Kafka producer in each process we run publishing to a small Kafka cluster (say ~20 servers). However there are tens of thousands of client processes. Connections can end up going unused when leadership migrates and we should eventually close these out rather than retaining them indefinitely. As you say it is not critical as the server seems to do a good job of dealing with high connection counts, but it seems like a good thing to do. I agree that doing this on the server might be better. This does mean it is possible that the server will attempt to close the socket while the client is attempting to send something. But if the timeout is 10 mins, it is unlikely that this will happen often (i.e. if nothing was sent in the last 10 mins, it will not likely happen in the 0.5 ms it takes to do the close). The advantage of doing it on the server is that it will work for all clients. This change would be in core/.../kafka/network/SocketServer.scala. The only gotcha is that we likely need to avoid iterating over all connections to avoid latency impact (there could be 100k connections). One way to do this would be to use java.util.LinkedHashMap to implement an LRU hash map of the SelectionKeys, and access this every time the selection key comes up in a select operation. (There are a ton of details in LinkedHashMap--needs to be "access order", etc). 
Then every 5-10 select loop iterations we would iterate the map expiring connections until we come to a connection that doesn't need expiring, then stop. Right, the limitation is more critical on the client side of a client-server connection due to port count limitation, and/or socket/file count restrictions of the client env. On the other hand, the brokers could close the connections too on such condition, rather than relying on the clients(producers) to protect it. However, what is any other reason to reduce the socket connections count? To make the NIO select lighter on the server, on a lesser number of connections? I think epoll is quite relaxed on this. I would like to work on this, but also understand the original problem(s) / concern(s) to see if we can also see any more suitable solutions to the particular concern? nicu marasoiu, yes, the actual problem is now fixed in trunk. We just need to add a unit test. I created a followup jira KAFKA-2661 for that. Resolving this jira.
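To make the mechanism Jay and Jun describe above easier to follow (an access-ordered map of connections, checked once per select iteration, closing at most the single oldest idle entry), here is an illustrative sketch in Python. It is not Kafka's actual Scala SocketServer code; the class and method names are invented for the example and only show the shape of the idea.

import time
from collections import OrderedDict

class IdleConnectionReaper:
    # Emulates an access-ordered LinkedHashMap: key -> last-activity timestamp.
    def __init__(self, max_idle_seconds=600):
        self.max_idle = max_idle_seconds
        self.lru = OrderedDict()

    def touch(self, key):
        # Record read activity on a connection; move it to the "most recent" end.
        self.lru[key] = time.monotonic()
        self.lru.move_to_end(key)

    def remove(self, key):
        # Called when a connection is closed for any other reason.
        self.lru.pop(key, None)

    def maybe_close_oldest(self, close_fn):
        # Close at most one connection per call, and only if the oldest
        # entry has been idle longer than the limit.
        if not self.lru:
            return
        key, last_seen = next(iter(self.lru.items()))
        if time.monotonic() - last_seen > self.max_idle:
            del self.lru[key]
            close_fn(key)

In a real select loop you would call touch() on every read event and maybe_close_oldest() once per iteration, which keeps the per-iteration cost at O(1) as argued in the thread.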
https://issues.apache.org/jira/browse/KAFKA-1282
CC-MAIN-2017-51
refinedweb
4,844
55.95
Outlook Import Wizard 1.5.8
User Review: 10 (1 times)
File size: 1628K
Platform: Any Platform
License: Shareware
Price: $14.95
Downloads: 22
Date added: 2009-07-05
Publisher: OutlookImport.com
Category: Utilities
http://wareseeker.com/Utilities/outlook-import-wizard-1.5.8.zip/4cf8ae5a8
CC-MAIN-2015-48
refinedweb
254
53.88
.'" GSM (Score:1) 5? (Score:3, Insightful) Shouldn't we concentrate on developing 4G first? Re: (Score:2, Funny) It's one louder Re: (Score:2) It's one louder I’m waiting for 11G. Re: (Score:2) Shouldn't we concentrate on developing 4G first? Why? It's been in deployment since 2009. 5G is the next natural step. Re: (Score:1) He's referring to the actual 4G standards, which still have not been met anywhere. The lax 4G advertising standard was accepted because every cell provider unanimously agreed that the existent 4G standards were impossible. This will just end up another advertising standard, I doubt there will be any change in speed beyond what you can expect with a slightly better data stream compression (which will use up more battery life to decode). Re:5? (Score:5, Insightful) 4G is LTE. What some carriers in the US did was sell HSDPA as 4G, but in Europe that has mostly been advertised as 3.5G. Re: (Score:2) Yup, I'm on a 3G contract (specifically not 4G) and I can get HSDPA, so its definitely not considered 4G here. Re: (Score:2) Re: (Score:2) 4G is LTE. What some carriers in the US did was sell HSDPA as 4G, but in Europe that has mostly been advertised as 3.5G. LTE is 3.9G. What the carriers did was declare it 4G. LTE Advanced was going to be the first 4th Generation mobile technolgy but thanks to marketers co-opting the term, the generation numbers are now meaningless. I may as well announce my farts now produce 6G speed. It would be just as accurate.? Re: (Score:2) Who are they? Re: (Score:2) Them there.. Re. Well with 4G you can use your monthly data cap in five minutes [pcpro.co.uk]. Many people look forward to the time when it will only take seconds. Re: (Score:2) What is this "monthly data cap" of which you're speaking? Re: (Score:2) While it is "technically" possible, chances are, you (yes you) cannot. Congestion, distance from Cell Tower, slow network/choke points etc. I'm a Network administrator and we run (currently) a 5000 node network across a Gig link, and only average something like 25% network saturation, during PEAK hours. While we have spike traffic that hits the max bandwidth, they are very very temporary. Chances are, you'll never hit a server capable of filling your gig link. The closest thing I've seen filling the link is a Re: (Score:2) However, if the techniques used are suitably clever, technology that can be used to demonstrate impressive-but-irrelevant peak speeds is likely also of use to provide endurable speeds to ever Re: (Score:1) God forbid you Google it. "If 5G appears, and reflects these prognoses, the major difference from a user point of view between 4G and 5G techniques must be something else than increased peak bit rate; for example higher number of simultaneously connected devices, higher system spectral efficiency (data volume per area unit), lower battery consumption, lower outage probability (better coverage), high bit rates in larger portions of the coverage area, lower latencies, higher number of supported devices, lower Re: (Score:2) God forbid you Google it God forbid you google "5G wireless" and not know the difference between the cellular definition of 5G and WiFi 5GHz, often referred to as 5G. Saw that namespace collision coming a mile away. It's going to be one of those things that causes confusion with PHBs for the next decade. They should have added another letter or something e.g. 5GX. Re: (Score:2) 5GHz WiFi should *never* have been called 5G by anyone But it was, and not because of marketing, just because it was convenient. 
The people writing mobile wifi standards should never have used "G" in the first place. It's a bullshit marketing name no matter which camp uses it. It stands for "Generation" without even telling you what its a generation of. It's a particularly retarded form of devolutionary e-bonics. 5G for mobile phone data connections will win this naming war. That we can agree on. They like to spend lots of money on advertisements, and it will work, because nobody cares that much about defending it for 5GHz Re: (Score:1) > Research is great, I'm just not thinking there is much practical that will come of this. I have a suspicion this is not about mobile phones. The german goverment has plans to deliver "Broadband" to the largets part of the population. But they shy away from the cost of digging trenches for cables for every little town and village in rural areas and then also connect each house with fibre or at least awesome-quality copper. Therefore, I assume they plan on doing it via mobile technology and hope they can o Re: (Score:2) I wouldn't mind seeing 5G add additional security. Since it would likely require a new type of SIM card, now is the time to a couple security features: 1: Ability to store data on a SIM card in a secure manner. For storing Google Authenticator info, PGP/gpg private keys, tetetc... a SIM card would be perfect because it has protection against brute force built in (PIN/PUK). If changing phones, I'd not have to worry about backing up or generating new authenticator codes. 2: Similar to #1, except allow SD-l Re: (Score:2) Re: (Score:2) Because, you know, we never ever actually create new things that are larger and need more bandwidth to transfer. re: standard definition vs high definition. If the market creates a capability, there will be someone or something that will seek to fill that capability. You can stream HD video now over 4G, how much more bandwidth can you sell and for what? Latency improvements might be a good thing, but by my reading 5G is targeting 100X the bandwidth of 4G. Even if they manage 10X, I'm here to tell you that they are going to have to find spectrum space for this. Physics require it. The only spectrum available that makes sense is higher frequencies than where we are now with cell phones. But the reason we don't use this spectrum now is because solid state devices that Re: (Score:2) Re: (Score:2) For ground based devices that don't move, fiber is cheaper and faster. Trust me on this, 5G is largely useless if the only point is bandwidth... Not saying they won't try to sell it, but the bandwidth required for 10X 4G will put you into spectrum you simply cannot afford in mobile devices due to power budgets and semiconductor costs. So it's really about the spectrum space being unavailable and what they can get being expensive to use. Re: (Score:2) Re: (Score:2) I beg to differ. We have highly directional antennas being used in urban areas now and cell sizes that are getting pretty small around congested areas. So we are already doing what you say will fix this. Putting in more cell towers and antennas does help, but in urban areas where 5G would likely be put first is already subdivided pretty small. But when you up the data rates, you have to increase the bandwidth and/or power. Physics demands it. So if you are already maxing out your licensed spectrum doin Re: (Score:2) Are you talking about something like the Artemis pCell system [artemis.com]? Their claim is that every device essentially gets its own 5/10/20 MHz of spectrum. 
Will be interesting to see if it actually works as well as it's being hyped. Ethernet syndrome (Score:4, Interesting) Mod parent down! (Score:2, Insightful) Mod parent down, for he is shortsighted and hates technology. Bigot! Re: (Score:1) Absolutely! If we only did things based on "need", life would blow. Re: (Score:2) Re:Ethernet syndrome (Score:4, Insightful) Sure there will be some niche applications, Feeding the last mile is not a niche application. In cities, it's reasonable to wire everyone up. In bumfuck, not so much. I live in bumfuck. I have three ISP options available to me, all of the WISPs. All of them crap. I am now on the one which is least crap. A notable percentage of Americans are in the same boat, or one which is indistinguishable from a distance. Population density is low enough and sectored antennas directional enough for wireless to be a last-mile solution for the USA. On the other hand, it'll still require microcells... Re: (Score:2) Re:Ethernet syndrome (Score:4, Insightful) For a cabled connection to your desktop, GB ethernet is probably more than you will ever need. No, it's just more than you can currently envisage using. What about streaming 3D interactive entertainment? The bandwidth requirements of such things are rather high, beyond what is practical now (and we also don't have all the other hardware required yet) but it's still reasonable to consider how to provide that. Expanding capacity has an additional benefit in urban areas: sharing of capacity between multiple users becomes easier. Maybe you live out in the sticks, but lots of people don't, and lots of them want fast internet. Re: (Score:2). Home Ethernet will be fast enough when it can keep up with my 10 disk RAID 0. I may not transfer large volumes of data over my Ethernet on a regular basis, but when I do, I want it done ASAP. Re: (Score:2) "GB ethernet is probably more than you will ever need. " I've heard this line before. On 10MB, 100MB, and now 1000MB connections. Yeah, we'll be filling int 10GB networks soon enough. If you build it, they will fill it. Re: (Score:3) I shall dub this network "The world barely keeping up with demand." More like "demand barely keeping up with offering". The truth is, consumers don't want to upgrade to the latest and greatest shit every 6 months in an economic slump. It's like super high definition TV or Blue-Ray discs: people aren't finished investing in the previous generation technology that a new one comes along. Not to mention, the contents - movies and TV shows - are still shit, and people aren't interested in high definition shit anymo Re: (Score:2) True innovators on Slashdot... (Score:5, Insightful) It seems a significant number of the readers here would rather say "64kb is all the memory anyone will ever need", because they are too lazy to try and think rather than just knock any and every innovation mentioned on Slashdot. As far as 5G - "why" the answer is use (consumption) will always expand to fill capacity. The question is not WHY the question that needs to be answered is how can we put that additional capacity to use. Re:True innovators on Slashdot... (Score:4, Insightful) "Why" should we invest in a technology that we don't know "how" to put to use? And, believe me, it's us that are investing in it. My mobile service provider keeps telling me about 4G. Says it's wonderful. Say's I'm ready to go. Except I don't have a 4G handset and have no intention of really getting one. 
Because it costs a lot more and does nothing that mine doesn't already do, just slightly faster (in theory). Coverage isn't there. Cost is too much (still measured in fucking megabytes). No real advantage over 3G at the moment. So my question is not "how" at all - I can name a million ways we *could* use 5G. Like I could name a million ways we *could* use 4G. Or 3G. Or EDGE, GPRS or any number of other technologies before it. Fact is, we still don't really do them. The problem is not "how". My question is "why". Why would I touch something that's likely to be commercially exploited to the hilt to my disadvantage and which I, honestly, hardly ever use? Sure it's cool to check GMail on the go. I've RDP'd in and fixed servers from a smartphone. It's useful. But it's not a killer application of the technology because I've been able to do that (maybe not quite so fast) since the GPRS days. And yes, you "can" video-stream etc. now Fact is, it all costs money and not everyone will pay you to watch Gravity in 4K on their 2" mobile screen (especially not if they're already paying £40 a month for 4G, and you want more for 5G to recoup your investment costs). Why deploy a technology "just because" it's supposed-progress? Isn't that what left us with all kinds of dead-end hardware and initiatives / technologies that never really took off (3DTV)? Why not use what we've got and get the most out of even 3G as it stands (because, ultimately, we certainly don't do that in the UK)? Let's use what we have to its limits, and be clever, and get better value out of those BILLIONS of pounds worth of 3G/4G licenses before we start jumping on the 5G bandwagon "just because". Hell, I'd infinitely rather have 3G everywhere at the max capable speed (which is surprisingly high!) than even a single base station with 5G. And if consumption expands to fill capacity, the opposite is true - we will squeeze every byte we can out of technologies if they are the upper limit. Re: (Score:1) No real advantage over 3G at the moment. Should I get off your lawn now, or can I wait till later? Hell, I'd infinitely rather have 3G everywhere at the max capable speed (which is surprisingly high!) than even a single base station with 5G. Replace 3G in this sentence with 4G and I'd be happy to agree with you. In my experience 3G wasn't fast enough. It wasn't fast enough when they first turned it on, it wasn't fast enough when they started replacing it with LTE. Re: (Score:2) still measured in fucking megabytes Storage is measured in megabytes, network is measured in megabits/second. In real-world usage mbps still plenty for most activities, assuming you are actually getting it. 10mbps is enough for an office full of people to use for email, web, voip, et cetera without noticing any issues. The problem is oversubscription of the wireless network so that although you can theoretically get hundreds of mbps, you are actually getting hundreds of kbps, and then only in bursts. Re: (Score:2) Obviously don't know that UK mobile providers put a data cap on - in Megabytes Per Month. You can have all the speed in the world, but it's useless if you can go over your limit (especially if you go to another European country) in a matter of seconds. Re: (Score:2) Obviously you don't know that all retail services measure caps in megabytes and that they have nothing at all to do with technology. It's like complaining about other people having fast cars because you live on a dirt road. Too soon! (Score:2) Did you have to say "collaborate"? 
We need HD Voice (Score:2, Funny) Great, so I can have 4K video instead of wimpy 1080p, but voice calls will still be barely intelligible. Priortize latency and power consumption (Score:1) ahem 5g (Score:2) Cameron's Spectrum Auction (Score:1) David Cameron couldn't give a shit about 5G. All he cares about is that sweet, sweet spectrum auction money for his government to spend. Huh (Score:2) By then Sprint will be up to smoke signals (Score:1) I mean seriously, compared to the rest of the world, we in the US may as well be reading about quantum teleportation. Lol (Score:1)
http://news.slashdot.org/story/14/03/10/1224214/uk-and-germany-to-collaborate-on-5g
CC-MAIN-2015-14
refinedweb
2,689
71.85
This question already has an answer here: Class.Class vs Namespace.Class for top level general use class libraries? 7 answers

What you have are two different things. In your first scenario (the class example) you have an internal class with 3 nested private classes. In your second scenario (the namespace example) you have 3 internal, independent classes with no nesting. If the classes should only be used within StructuralCase, use the first example; otherwise, if the classes are independent and have no relationship, then the namespace is the way forward.

I would just use a namespace, because you don't need all the overhead of a class. A class has more structure, variables, and methods, and offers layers of inheritance, but if you don't need them, don't use a class.

Generally, you want to use a namespace, if only because it enables using statements - otherwise you have to refer to the class through its full nested path (except inside the parent class itself, of course). Thus in case 1, an outside reference would have to say StructuralCase.Structure s = ... instead of using StructuralCase; // ... Structure s = ... Functionally, the only real reasons to make a nested class are when it must stay internal (for some reason) or when it is a small struct (used for the results of a specific query).
http://m.dlxedu.com/m/askdetail/3/fc7dd0ee4f38ea0cb44fbc1dc20f247b.html
CC-MAIN-2018-30
refinedweb
200
55.03
I'm an extreme beginner with Python (this game is the first time I have ever seen it) so please bear with me. Creating a simple dice game. Place a bet. Three dice are rolled, if any are the same value you lose. If you win, you get double your bet. I've gotten the majority of the game programmed but I cannot figure out how to input the bet data from my xhtml form into the final results.

import cgi

form = cgi.FieldStorage()
nombre = form.getvalue("nombre")
bet = form.getvalue("bet")

# print HTTP/HTML headers
print """Content-type: text/html

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">
<html><head>
<title>Dice Game</title>
</head><body>
"""

# print HTML body using form data
import random

die = random.randint(1,6)
die2 = random.randint(1,6)
die3 = random.randint(1,6)

bet = int( form.getvalue("bet") )

print "<p>Thanks for playing, "+ nombre +". Your roll:</p>"
print "<p> %i %i %i.</p>" % (die, die2, die3)

if die == die2:
    print "<p>You lose $%i.</p>" %i (bet)
if die2 == die3:
    print "<p>You lose $%i.</p>" %i (bet)
if die == die3:
    print "<p>You lose $%i.</p>" %i (bet)
else:
    print "<p>You win $&i.</p>" %i(bet*2)

print """</body></html>"""

When the game is played, it comes up with the error "NameError: name 'i' is not defined". I'm sure there's a very simple error I'm making but it's brought my progress to an absolute standstill. I also have to substitute images for the text numbers and am unsure of how to proceed with that. Any help would be MUCH appreciated. And for reference, this is how the game should function.
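No answer appears in the thread above, but the traceback points at the formatting calls: in Python 2, % is the string-formatting operator, so writing "..." %i (bet) makes the interpreter look up a name called i and try to call it, which is exactly the reported NameError. The separate if statements also mean the else only pairs with the last comparison, so a win line can print even after a loss. Below is a suggested correction of just the result section, assuming the rest of the script stays as posted; it is not taken from the original thread.

# Corrected result section only (Python 2, matching the original script).
# '%' alone is the formatting operator; the stray 'i' after it was the bug.
bet = int(form.getvalue("bet"))

print "<p>Thanks for playing, %s. Your roll:</p>" % nombre
print "<p> %i %i %i.</p>" % (die, die2, die3)

if die == die2 or die2 == die3 or die == die3:
    print "<p>You lose $%i.</p>" % bet
else:
    print "<p>You win $%i.</p>" % (bet * 2)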
https://www.daniweb.com/programming/software-development/threads/275076/help-with-a-python-dice-game
CC-MAIN-2018-43
refinedweb
285
79.06
All opinions expressed here constitute my (Jeremy D. Miller's) personal opinion, and do not necessarily represent the opinion of any other organization or person, including (but not limited to) my fellow employees, my employer, its clients or their agents. I was perusing the session list for ALT.NET Canada this morning (sorry I couldn't make it guys) and saw this session title that reflects something I've had quite a few conversations about this year. Building extensible frameworks leveraging framework consumer selectable IoC containers. As long as we're going to work in static typed languages, it's a major advantage to achieve extensibility and pluggability by utilizing an existing Inversion of Control container. I'll make the statement that starting with an existing IoC tool makes framework construction far simpler and cheaper than it was before. The problem with that is that there are multiple IoC containers out there, and tying your framework to any of those containers is bound to irritate a big swath of people. When we were deciding between Ruby on Rails, the ASP.Net MVC, and MonoRail for our project architecture in June, I dismissed MonoRail in small part because I didn't want to be forced to use Windsor as our IoC tool (for some reason). We're perfectly content with the MVC framework at the moment*, but after reading through the MVC code and the way that they pass around dependencies through the myriad SomethingContext objects, I think the MVC team could streamline their own code quite a bit by using an IoC tool to resolve dependencies. Now, that brings up a contentious issue, which container would they use? The answer is that they can't use any specific container. When I was building the initial code that has became Fluent NHibernate there were a couple places where I needed to use an IoC container to resolve NHibernate objects and deal with configuration. I obviously chose StructureMap and buried a couple hard references to ObjectFactory.GetInstance<T>() in the code. The coupling to StructureMap doesn't particularly concern me, but when the code first went public, Nate Kohari (author of Ninject) lamented on Twitter that the tool was coupled to StructureMap. So, to make it safe for both Nate and myself to use the IoC of our choice, what I'd really like to see is a common interface to represent the very baseline functionality of an IoC container so that we could build and release frameworks that utilize an IoC container without forcing a particular choice of tool onto the development teams. A couple people are already going down the path of abstracting an IoC container with one off solutions. Off the top of my head I can think of at least three: All of these frameworks have a more or less similar interface for the container, then an adapter for most of the major IoC tools. I'd like to propose, and I'm not the first, that a new Service Locator abstraction be added to the .Net Framework itself as a first class citizen and publicized a little bit so framework builders can take advantage of a common interface. I'd like to start a new petition to make this little, itty bitty work be a part of the MEF project within Microsoft. All I think it needs to do is provide an interface with: [Advertisement] Hear, hear!!! Amen to that. If MS rejects the idea, or drags their feet, would a neutral open source project or organization providing a set of interfaces serve as a stand-in? Any chance something like that could gain critical mass? 
I think that in addition to IoC there are a small handful of other interfaces that could provide framework or infrastructure tool developers with the same benefit, like logging. Absolutely, Jeremy, well said. While I naturally want to use Ninject, I'm happy using other DI frameworks as well. I just don't think that infrastructure libraries should be opinionated, because you might end up being forced to put two DI frameworks in the same project. If this doesn't end up in MEF, maybe we should consider something like the AOP Alliance? I'd be willing to add it to Ninject, even if it means creating an external dependency. As long as the dependency was tiny, it would be worth it. That way framework developers could work against the common abstraction rather than creating their own custom abstractions each time. @All, I think I should have said that the MEF team is aware of the issue and is seriously considering doing just this. I just want to say it out loud and get some support for the idea so that it's a high enough priority for them to do it. Isn't someone going to make the ridiculous "we already have a locator, and it's IServiceProvider" claim? @Jimmy, Not good enough, not by half. IServiceProvider only gives you a tiny bit. There is a lot that we need to create first. Note, MonoRail has no dependency on Windsor That was my next note, we just completed a project that used StructureMap w/ MonoRail, just fine. When you're back in town, I can show you what we came up with. @Ayende Ha, that was *supposed* to be a joke. I had a colleague that would bring IServiceProvider up constantly as some kind of locator panacea. I think he works at Arby's now... Thanks for taking notice of the session! The session started out small, with just three of us, and finished with seven or eight. We didn't have anyone on-hand with direct experience with developing a framework with this kind of IoC dependency-and-yet-abstraction, so the core nugget for the session was reasonably addressed* and out of the way relatively early. That did, however, afford us the opportunity to expand the discussions out into a number of interesting directions more or less related to the original subject, in the end touching on everything from AOP to introducing IoC to teams to selling TDD and BDD to management above and team members below. You can check out my rough notes from the session, written up from memory during the two sessions since the IoC session this morning, here: If anyone has any comments or questions, I'll try to keep an eye on this post for a while, but if I miss anything feel free to mail me at [email protected]. * As for addressing the core subject during the session, the lowest common denominator (at least, the one that this framework needs) is captured in the notes (I think ;) but summarized thusly: - Service Locator, with support for resolution of both default and named instances of a given type - ctor DI with auto-wiring - a convention-based scanner, either built in or created while integrating the container with the application framework, that supports the conventions we will define. (It will be a basic pattern along the lines of "If IFoo refines IComponent, a discovered class Foo that implements IFoo is registered as the default implementation of IFoo, where any IFoo-implementing XFoo, YFoo, etc. are registered as named IFoo implementations called "X", "Y", etc. respectively. 
Shouldn't at least one of those signatures be object GetInstance(Type t, string name); There are many situations when I don't know the type at compile time. Also, we might need... object GetInstance(string name)? For instance, when doing interop with a DLR language, I might not have a type *at all* that I can pass. @Haacked - We do need a get that is by some kind of name key, and that one is already the list I posted as "Service Locator, with support for resolution of both default and named instances..." As for passing the type as an argument, I'm not entirely against your suggestion but do really wonder how it is you would expect to interact with the located service if you didn't have at least know some type information available. I tend to work entirely in static typing territory, however, so I would certainly understand that the requirements over in dynamic territory might be more than a bit different. ;) Pingback from ASP.NET MVC Archived Blog Posts, Page 1 @Jeremy M, Jeremy G, et al: I would push strongly for the whole "if you follow DI seriously, you won't need service locator stuff" At least in Fluent NHibernate, the ObjectFactory.GetInstance<> calls were all unnecessary and served only to keep some of the c'tors cleaner for speedier development and use (primarily for testing utility stuff where DIP may not have been as important). As soon as we inverted all the dependencies, pulling StructureMap out was a breeze. Likewise, for ASP.NET MVC, all the *Context classes can be done away with easily or with minor work. Locating the controller would be tricky without service location, but I think I'd rather just have an IControllerLocator implementation registered in the cloud rather than the current mechanism of subclassing and overriding methods. I'd rather ASP.NET MVC say: "You must have implementations of the following 10 interfaces registered in the container cloud in order for dependency injection to work". By default, ASP.NET might use Unity, or even a static container of some sort that has everything hard coded. MVCContrib could ship with a standard StructrueMap Registry files, Windsor Facilities (or whatever the equivalent is) and whatever the Ninject and Unity equivalents are. These Registries/Facilities/etc would be pre-configured with all the required ASP.NET MVC dependencies. If @Haacked/Gu/MSFT were serious about this and interested, I'd be willing to do a spike test on this to prove it but I won't waste my time if it's not in the cards for MSFT to consider. I know that nServiceBus has an abstract over a container (IBuilder) and then an adapter to Spring and Windsor. I think that in the Rhino Repository there is something similar. I'd suggest that, similar to Common.Logging we all just pick one and go with it. I mean, it's one project with one interface in it. And to Chad, opening up new windows in a smart client does require service locator. @Chad - I'm afraid I have to disagree. First of all, at the very least one needs a way to bootstrap their way into the application and some kind of Service Locator is almost surely going to find its way in there at some point. Also, you'll need Service Locator for those times where you need need a clean way to select, at runtime, from multiple available implementations of a service interface. Finally, there will always be times that you want access to something without the overhead of injecting it up front (e.g. 
cases where instantiated object count is higher but where a particular service or set of services are used through those objects less frequently.) While I definitely think it is great to minimize the number of points in your application where you work with a Service Locator, there are too many scenarios where it is needed to be able to do away with it entirely. @Chad again - to be clear, I am in general agreement with the rest of your comment. :) I think it's great that we standardise a basic interface for an IoC container; however I do have a couple of points. 1. I second the need for the following non-generic equivalents: object GetInstance(Type t) object GetInstance(Type t, string name) object GetAllInstances(Type t) 2. What are the semantics for GetAllInstances<T>()? Does it return all components registered as 'T' or all components that are a subclass of 'T" or implement 'T' (if T is an interface)? I think it needs to be well-defined, or it may not make it that easy to plug in different containers. 3. I think it is worth considering exposing a minimum set of Register* methods. Example: I'm writing a simple command-line utility that uses the IoC container interface, and based on certain command-line options, allows me to configure which implementation of the INotificationService is used. I want to programatically register either EmailNotification or ConsoleNotification with the container depending on the option specified. I also don't want to have a separate / external configuration file with my utility. 4. Attributes... I know this is a point of contention, and many will argue they do not want to litter their objects with [Dependency], etc; however, to standardize on a few might be worth considering. They allow the developer to declare their intent in the same code as the class, making it easier to understand the external dependencies from the perspective of a 3rd party. @Jeremy in a dynamic scenario, I might duck-type the returned object to a type I do know about. I can't statically pass the type I know about because the object I want doesn't implement it. Or, I might have information about the type I want from somewhere else and want to call methods on it dynamically (for example, if I pass the object to an IronRuby layer and call it from there. @Haacked - I hear ya. Exposing both flavors is completely sensible. Pingback from Dew Drop - August 17, 2008 | Alvin Ashcraft's Morning Dew "As long as we're going to work in static typed languages, it's a major advantage to achieve extensibility and pluggability by utilizing an existing Inversion of Control container. " Is it really static-typed languages that demand containers for IOC? Isn't it just languages without first-class/higher-order functions? Phil: I understand what you're talking about re: the DLR and GetInstance(string), but I don't know if it has an actual implementation in any of the popular DI frameworks. I know for certain that Ninject wouldn't know the first thing about how to resolve that, since it doesn't use string keys in the first place. :) I'm willing to add functionality to Ninject to support this abstraction layer, but I think we should just focus on making it the lowest common denominator. Then again, if Ninject is the only framework that doesn't support it, I'll bring it up to par. :) Rather than agreeing on a common base interface wouldn't be better instead of agreeing on a common fluent IoC API containing enough extensibiliy points so that we can cater for all situations ? 
WhenResolvingType<IFoo>().ResolveWith<Foo>().AsSingleton(); WhenResolvingType<IBar>().ResolveWith<Bar>().WhenCallingAssemblyIs(assembly).. etc etc I have probably absolutely no idea as to what I am talking about (honest) but I don't think Jeremy's approach to agree on a very basic IoC interface will allow us to go anywhere (no offense!). @Daniel, I couldn't disagree more, and I argued a long time with the Prism guys about doing just that. Going with a least common denominator approach for service location is low hanging fruit and would provide enough swappability for the framework things we mentioned above. After configuration time the only things you generally use are the 2-3 "give me this" methods. Coming up with a least common denominator approach for registering services effectively castrates all of the IoC containers, because this is where all of them have individual capabilities that set them apart. @Nate & @Haacked, StructureMap wouldn't allow that either (always have to know the service type upfront), but you can do that very thing with Spring.Net. I always thought it was a design flaw, but hey, whatever floats your boat. Jeremy, how do you handle the case where your framework requires registration of components for its internal use? I automatically enroll all the default components of OpenRasta when the framework is loaded, so that whatever IoC is in use they'll receive a registration for those components to resolve them when needed. The behavior is transparent to the user but lets them override this at any time. I've found that for my needs, covering registration was done through a couple of methods that were indeed the lowest common denominator of all containers, but that is only used for my framework. Users are still encouraged to do their registrations using whatever container-specific API they are used to. Is that so different from the resolve mechanism for framework writers? Sebastien, I'd make the registration abstraction be something coarse grained like: IBootstrapper.Bootstrap() and let users use the native registration. In your solution you're barely scratching the surface of what the tools provide for registration capabilities. I personally wouldn't be willing to accept that as a consumer of your framework. I'm not that wild about the "some configuration over here in form A" and some over there in form B. @Jeremy & Sebastien - In my specific case, I expect we'll be imposing convention scanner requirements instead of required registration mechanism support but both options are certainly be appropriate in different situations. This is why I'm tempted to agree that the absolute lowest common denominator is on the instance retrieval side of things and would not include any particular style or set of registration support requirements. This would still leave the possibility of each framework implementer layering on an additional set of requirements around registration (and/or convention scanner) functionality depending on the needs and style of the framework (and consuming applications) in question. Jeremy D: My mistake, I misunderstood your post which, for a good reason, doesn't mention registration. Having had nasty surprises with the multiple overloads of component registration with CastleWindsor I was here hoping for something better. Next time I will use StructureMap I swear. It might help your cause that the author of Windsor has joined the MEF team. hammett.castleproject.org No petitions! Just to update the world, we're listening and talking about options. 
I forget to say, i think this is a great idea! From a naming perspective I like IServiceLocator in that there is nothing particularly DIish about the suggested interface. The main thing is providing a clean pluggable mechanism for accessing services. If that mechanism happens to be a DI container great, but as a user, why do I care. I don't believe the interface should use generics AT ALL. As pointed out, there's times you must supply the type at runtime, not compile time. So you must have versions that take a Type parameter. So why not have both? Because it's not a very DRY solution. Far better to leave the generic versions out of the interface and provide extension methods for this. I'd also consider using IServiceProvider here. public interface INamedServiceProvider : IServiceProvider { GetService(Type type, string name); GetAllIServices(string name); } public static class ServiceProviderExtensions public static T GetService<T>(this IServiceProvider self) { return (T)self.GetService(typeof(T)); } // not showing all extensions for brevity Hi Jeremy, I couldn't agree more myself! I'm currently in the process of rewriting LinFu's own IoC container, so this is of a huge interest to me as well. FWIW, I think the basic service locator interface should support the basic functions: -Type Registration -Determining whether or not the locator can instantiate the service type -A GetService method of some sort that can instantiate the particular service type in question I think the lowest common denominator approach would be best since no two IoC frameworks are exactly alike, and from the end user standpoint, a common service locator interface would make it easy for people to try different IoC frameworks without having to do a massive rewrite of their code. It is by no means perfect, of course, but I'll certainly agree to implementing a common interface once all the other IoC container devs agree to do the same thing. Pingback from Zen and the art of Castle maintenance » Blog Archive » Integrating MonoRail with your favorite IoC container Announcing: The IServiceLocator interface Today we launched an exciting project on CodePlex, namely the Common Service Locator library . What is Pingback from Weekly Links #21 | GrantPalin.com Common Service Locator You said: "I owe P&P a reference implementation of Prism with StructureMap. I'm getting there" Jeremy, I need to either get what you have, or build this myself. I'd much rather get what you have! Any idea when this may be completed? @Jeff, David Mohundro is doing one, but I don't know where it's at. I might do something January-ish for my own purposes that I will release. I betcha that my recommendation is going to end up being to bypass the Prism bootstrapping a bit though. Artykuł opisuje propozycję implementacji zasady Inversion of Control (IoC) w systemach OLTP zbudowanych One of the goals of Prism was to avoid coupling too closely with any particular Dependency Injection AutoGen for Castle In this post, I will share some of the best practices/guideline in developing ASP.NET MVC
http://codebetter.com/blogs/jeremy.miller/archive/2008/08/16/it-s-time-for-ioc-container-detente.aspx
crawl-002
refinedweb
3,517
59.33
You can subscribe to this list here. Showing 8 results of 8 Hello, I have the following tables: class Package(SQLObject): name = StringCol(alternateID=True, length=255) [...] class TrackerPackage(SQLObject): [...] info = ForeignKey('Package') tracker = ForeignKey('Tracker') bugs = RelatedJoin('Bug') class Bug(SQLObject): [...] packages = RelatedJoin('TrackerPackage') status = RelatedJoin('BugStatus') class BugStatus(SQLObject): name = StringCol(alternateID=True, length=40) bugs = RelatedJoin('Bug') class Tracker(SQLObject): [...] packages = MultipleJoin('TrackerPackage') flags = MultipleJoin('CustomFlagDefinition') At the moment, i'm doing something like that : tracker.get(1) for package in tracker.packages: print package.info.name [...] for bug in package.bugs: print bug.bugNumber for status in bug.status: print status.name I wonder if there is a way to do a *JOIN* or someting else for avoiding too much SQL requests. I can't figure out how i could do that with sqlobject. Regards, Arnaud Fontaine On Sun, Aug 20, 2006 at 03:00:17PM +0200, Arnaud Fontaine wrote: > Thanks a lot. Last question: is there a way to have a dict with column > name as key directly instead of a tuple? Only with .select() - there is a method sqlmeta.asDict() - because sqlmeta knows the names and the order of the columns. sqlbuilder.Select() knows the names but the names could collide: Select([Table1.q.name, Table2.q.name]) - what keys you would want in the dictionary? And when you run connection.query() (or .queryAll) Select has been forgotten already - there is only a query string, so Select cannot build the dictionary. Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. >>>>> "Oleg" == Oleg Broytmann <phd@...> writes: Oleg> from sqlobject.sqlbuilder import Select Oleg> connection.queryAll(connection.sqlrepr(Select(...))) Hey, Thanks a lot. Last question: is there a way to have a dict with column name as key directly instead of a tuple? Regards, Arnaud Fontaine On Sun, Aug 20, 2006 at 01:24:30PM +0200, Arnaud Fontaine wrote: > is there a way to avoid excess overhead of the RDBMS. from sqlobject.sqlbuilder import Select connection.queryAll(connection.sqlrepr(Select(...))) Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. >>>>> "Oleg" == Oleg Broytmann <phd@...> writes: Oleg> When one does .select() on a table SQLObject only fetches Oleg> columns for that table. Hello, I understand. But is there a way to avoid excess overhead of the RDBMS. I mean, i have a table package with its associated bugs associated using a RelatedJoin, the bugs are related to a status table. So, for each package, there are about 6 SQL requests for each package. With about 300 packages, it is really annoying. Any idea for optimizing this kind of things ? Regards, Arnaud Fontaine On Sat, Aug 19, 2006 at 04:27:33PM -0400, Jorge Vargas wrote: > I personally don't know SO internals Now it's good time to start learning! > for the others I'll like to clarify where the bug is, is this a SO > issue or a TG issue? I am afrais it is someone else's job. I am not going to install and learn TurboGears just to understand a bug report. This is a job for those who run TG and report bugs. Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. On Sat, Aug 19, 2006 at 09:10:37PM +0100, Sean O'Donnell wrote: > I get 71 passed , and 100 failed when I run py.test on them. I run the entire test case after aplying an every patch. 
I run the test case 4 times - two for trunk, and two for 0.7-bugfix branch; these two runs are with SQLite and Postgres backends. Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. On Sat, Aug 19, 2006 at 10:16:54PM +0200, Arnaud Fontaine wrote: > list(model.TrackerPackage.select(join=LEFTJOIN(None,model.CustomFlagData))) > 1/Select : SELECT tracker_package.id, tracker_package.lock_date, > tracker_package.lock_period, tracker_package.done_by_date, > tracker_package.info_id, tracker_package.locked_by_id, > tracker_package.done_by_id, tracker_package.tracker_id FROM > tracker_package LEFT JOIN custom_flag_data WHERE 1 = 1 > > It doesn't do a SELECT * at all. .select() only selects columns for its table; TrackerPackage in this case. > It would be great > if it is possible to have all the informations of the JOIN table. It wouldn't. Where should SQLObject put this information? class MyTable(SQLObject): name = StringCol() age = IntCol() I have declared a table with two columns. If I join the table with another table - where SQLObject should put additional columns? There are no definitions for them. Even worse - if I join the table with itself (using Alias) - how do I distinguish columns form one table from the columns from the same joined table? When one does .select() on a table SQLObject only fetches columns for that table. Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN.
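To make Oleg's suggestion concrete for the original poster: the idea is to stop walking the joins object-by-object and instead issue one explicit query through the low-level connection. The fragment below is a sketch assembled only from the calls mentioned in this thread (sqlbuilder.Select, connection.sqlrepr, connection.queryAll); connection is assumed to be the poster's open SQLObject connection, the selected columns are just an example, and the join/WHERE terms for the tracker/package/bug/status tables would still need to be filled in.

from sqlobject.sqlbuilder import Select

# Build one explicit query instead of one query per object traversal.
query = Select([Package.q.name, Bug.q.bugNumber])   # add join/where terms here
sql = connection.sqlrepr(query)                      # render the Select to SQL
rows = connection.queryAll(sql)                      # one round trip; tuples back
for name, bug_number in rows:
    print name, bug_number                           # Python 2 era, as in the thread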
http://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/?viewmonth=200608&viewday=20
CC-MAIN-2016-07
refinedweb
801
61.02
The need is a 4.7K Ohm or 10K Ohm resistor. In this tutorial, I’ll show you how to connect the DS18B20 to your Raspberry Pi and display the temperature readings on the SSH terminal or an LCD display. Parts used in this tutorial: Digital Temperature Sensors vs. Analog Temperature Sensors Digital temperature sensors like the DS18B20 differ from analog thermistors in several important ways. In thermistors, changes in temperature cause changes in the resistance of a ceramic or polymer semiconducting material. Usually, the thermistor is set up in a voltage divider, and the voltage is measured between the thermistor and a known resistor. The voltage measurement is converted to resistance and then converted to a temperature value by the microcontroller. Digital temperature sensors are typically silicon based integrated circuits. Most contain the temperature sensor, an analog to digital converter (ADC), memory to temporarily store the temperature readings, and an interface that allows communication between the sensor and a microcontroller. Unlike analog temperature sensors, calculations are performed by the sensor, and the output is an actual temperature value. About the DS18B20 The DS18B20 communicates with the “One-Wire” communication protocol, a proprietary serial communication protocol that uses only one wire to transmit the temperature readings to the microcontroller. The DS18B20 can be operated in what is known as parasite power mode. Normally the DS18B20 needs three wires for operation: the Vcc, ground, and data wires. In parasite mode, only the ground and data lines are used, and power is supplied through the data line. The DS18B20 also has an alarm function that can be configured to output a signal when the temperature crosses a high or low threshold that’s set by the user. A 64 bit ROM stores the device’s unique serial code. This 64 bit address allows a microcontroller to receive temperature data from a virtually unlimited number of sensors at the same pin. The address tells the microcontroller which sensor a particular temperature value is coming from. Technical Specifications - -55°C to 125°C range - 3.0V to 5.0V operating voltage - 750 ms sampling - 0.5°C (9 bit); 0.25°C (10 bit); 0.125°C (11 bit); 0.0625°C (12 bit) resolution - 64 bit unique address - One-Wire communication protocol For more details on timing, configuring parasite power, and setting the alarm, see the datasheet: You can also watch the video version of this tutorial here: Connect the DS18B20 to the Raspberry Pi The DS18B20 has three separate pins for ground, data, and Vcc: Wiring for SSH Terminal Output Follow this wiring diagram to output the temperature to an SSH terminal: R1: 4.7K Ohm or 10K Ohm resistor Wiring for LCD Output Follow this diagram to output the temperature readings to an LCD: R1: 4.7K Ohm or 10K Ohm resistor For more information about using an LCD on the Raspberry Pi, check out our tutorial Raspberry Pi LCD Set Up and Programming in Python. Enable the One-Wire Interface We’ll need to enable the One-Wire interface before the Pi can receive data from the sensor. Once you’ve connected the DS18B20, power up your Pi and log in, then follow these steps to enable the One-Wire interface: 1. At the command prompt, enter sudo nano /boot/config.txt, then add this to the bottom of the file: dtoverlay=w1-gpio 2. Exit Nano, and reboot the Pi with sudo reboot 3. Log in to the Pi again, and at the command prompt enter sudo modprobe w1- gpio 4. Then enter sudo modprobe w1-therm 5. 
Change directories to the /sys/bus/w1/devices directory by entering cd /sys/bus/w1/devices 6. Now enter ls to list the devices: In my case, 28-000006637696 w1_bus_master1 is displayed. 7. Now enter cd 28-XXXXXXXXXXXX (change the X’s to your own address) For example, in my case I would enter cd 28-000006637696 8. Enter cat w1_slave which will show the raw temperature reading output by the sensor: Here the temperature reading is t=28625, which means a temperature of 28.625 degrees Celsius. 9. Enter cd to return to the root directory That’s all that’s required to set up the one wire interface. Now you can run one of the programs below to output the temperature to an SSH terminal or to an LCD… Programming the Temperature Sensor The examples below are written in Python. If this is your first time running a Python program, check out our tutorial How to Write and Run a Python Program on the Raspberry Pi to see how to save and run Python files. Temperature Output to SSH Terminal This is a basic Python program that will output the temperature readings in Fahrenheit and Celsius to your SSH terminal:) Temperature Output to an LCD We’ll be using a Python library called RPLCD to drive the LCD. The RPLCD library can be installed from the Python Package Index, or PIP. PIP might already be installed on your Pi, but if not, enter this at the command prompt to install it: sudo apt-get install python-pip After you get PIP installed, install the RPLCD library by entering: sudo pip install RPLCD Once you have the library installed, you can run this program to output the temperature to an LCD display: import os import glob import time from RPLCD import CharLCD lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23]) #CELSIUS CALCULATION def read_temp_c(): lines = read_temp_raw() while lines[0].strip()[-3:] != 'YES': time.sleep(0.2) lines = read_temp_raw() equals_pos = lines[1].find('t='):] != 'YES': time.sleep(0.2) lines = read_temp_raw() equals_pos = lines[1].find('t=')") That should just about wrap it up! Hope you found it useful. Be sure to subscribe if you’d like to get an email each time we publish a new tutorial. Feel free to share it if you know someone else that would like it… And as always, let us know in the comments if you have any problems setting it up! Hi there! How can I do this with three sensors? Three DS18b20 and print it on the lcd? i have the same problem, did you find a solution back then? or do you remember the generall idea? My first attempt at using more than 1 sensor. import os import glob import time # os.system(‘modprobe w1-gpio’) # os.system(‘modprobe w1-therm’) base_dir = ‘/sys/bus/w1/devices/’ device_folder1 = glob.glob(base_dir + ’28*’)[0] device_folder2 = glob.glob(base_dir + ’28*’)[1] device_file1 = device_folder1 + ‘/w1_slave’ device_file2 = device_folder2 + ‘/w1_slave’ def read_temp_raw(): f1 = open(device_file1, ‘r’) f2 = open(device_file2, ‘r’) lines1 = f1.readlines() f1.close() lines2 = f2.readlines() f2.close() return (lines1, lines2) def calculate_temp(raw): “””Accepts raw output of DS18b20 sensor and returns a tuple of temp in C and F. 
“”” equals_pos = raw[1].find(‘t=’) if equals_pos != -1: temp_string = raw[1][equals_pos+2:] temp_c = float(temp_string) / 1000.0 temp_f = temp_c * 9.0 / 5.0 + 32.0 return (temp_c, temp_f) def read_temp(): “”” “”” # wait for a valid result lines1, lines2 = read_temp_raw() while lines1[0].strip()[-3:] != ‘YES’: time.sleep(0.2) lines1, lines2 = read_temp_raw() temp1 = calculate_temp(lines1) temp2 = calculate_temp(lines2) return (temp1, temp2) while True: print(read_temp()) time.sleep(1) Apologies, the editor wiped out my whitespace above. I hope there is enough information for you to fix it. … temp1 = calculate_temp(lines1) temp2 = calculate_temp(lines2) return (temp1, temp2) Is that inside the function? The indents aren’t clear. 6, in lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23]) File “/usr/local/lib/python2.7/dist-packages/RPLCD/_init__.py”, line 14, in __init_ super(CharLCD, self).__init__(*args, **kwargs) File “/usr/local/lib/python2.7/dist-packages/RPLCD/gpio.py”, line 95, in _init_ ‘must be either GPIO.BOARD or GPIO.BCM’ % numbering_mode) ValueError: Invalid GPIO numbering mode: numbering_mode=None, must be either GPIO.BOARD or GPIO.BCM=’):] != ‘YES’: time.sleep(0.2) lines = read_temp_raw() equals_pos = lines[1].find(‘t=’)”) I hope you did indent this code properly. :D Python is a programing language which requires indents instead of brackets to know what is a code block. I can report that the reason there is no indentation is that this site removes whitspace from comments.* hello, on your comment i am trying to wire multiple sensors and i do get different adresses but only separately. It only seems to work when i connect the data Pin of the sensor with GPIO Pin 7 of the Pi and doesnt work for any other pin of the pi. Do you know how i can get simoultaniously readings of 2 temp sensors ? should i only use the 7th Pin of the Pi and wire them serial?? Hello and thank you for the very helpful blog! I need for my project to read 4 temperature sensors and im using this specific one. Everything works great for the one sensor but i cant make it work for more than one. I am using an other Pin other than the Pin 7 of the Pi to read the temp data but it doesnt work. Why is it necessary to use the 7 Pin (GPIO 4 (GPCLK0)) to read the sensor and it deosnt work for other pins. Or should i alter the configurations to work with other Pins. Thank you in advance! Hi Aimilios, Please clarify. You have your sensors connected in a ‘daisy-chain’ to 1 GPIO pin, OR, you are connecting each to its own pin. Hi, I did the same with 3 sensors, I chained them together and it worked on GPIO 4 (GPCLK0), but it didn’t on any other pin, fofr example GPIO 5 (GPCLK1) or GPIO 6 (GPLCK2). Does anybody know why is this? Is there a way to make multiple sensors work on other pins? Hi, I have connected 30 sensors. All signal pins are connected to one pin of the RaspberryPi. See schematic on beginning of this article (WIRING FOR SSH TERMINAL OUTPUT). Others 2 connections of series of DS18B20 are 3V3 and 0V. Works o.k. Regards Herman Can I clarify something the thing that looks like cmd, is that on your computer? plus with this setup, can I upload the data to a website from the raspberry pi ? ls 00-400000000000 00-c00000000000 w1_bus_master1 Hi there. Any tips why I got back this address instead of correct format 28-XXXXXXXXXXXX ? Thank you. If you don’t see 28-* (like: 00-4* or 00-c*) you likely don’t have a resistor or the wrong resistor in use. 
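For anyone trying to reconstruct the multi-sensor code above after the comment form stripped its indentation, here is a minimal sketch of the same idea with the whitespace restored. It assumes the sensors share the single data pin from the tutorial wiring (GPIO 4) and discovers the device IDs from /sys/bus/w1/devices instead of hard-coding them:

import glob
import time

base_dir = '/sys/bus/w1/devices/'
# Every DS18B20 shows up as a folder whose name starts with "28-"
device_files = [folder + '/w1_slave' for folder in glob.glob(base_dir + '28*')]

def read_raw(device_file):
    with open(device_file, 'r') as f:
        return f.readlines()

def read_temp(device_file):
    lines = read_raw(device_file)
    # Wait until the CRC line ends in "YES" before trusting the reading
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_raw(device_file)
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_c = float(lines[1][equals_pos + 2:]) / 1000.0
        temp_f = temp_c * 9.0 / 5.0 + 32.0
        return temp_c, temp_f

while True:
    for device_file in device_files:
        print(device_file, read_temp(device_file))
    time.sleep(1)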
pi@raspberrypi:~ $ sudo python temp.py Traceback (most recent call last): File “temp.py”, line 6, in lcd = CharLCD(cols=16, rows=2, pin_rs=37, pin_e=35, pins_data=[33, 31, 29, 23]) TypeError: this constructor takes no arguments I’m getting this everytime I type sudo python temp.py Could you please help Was looking at the DS18B20 with a meter of cable to hang out a window or something. Any changes or updates since the signals will be going through a meter of wire instead of 2cm? The cable is still three prong so the wiring should be the same. Hi Shaun, this page talks about wires up to 30m so 1m is not a problem. I have an Arduino with 2m length + 5 DS18B20 (up the side of the hot water tank). Great tutorial, and looks very good to what I’m working on. How ever, I have a problem with the Pythoncode (I have programmed quite a bit in Java, but I’m not as good in Python, that’s why I don’t quite understand what’s going on here). The readings go on quite fine for a bit, I think for some few minutes, then it crashes, it blames: while lines[0].strip()[-3:] != ‘YES’: with the message IndexERROR: list index out of range Does someone know what’s happened, or anyone experienced the same? Hi, I am new to Python so cant actually give you an answer. I get the same error. I added a counter in the loop to see if it crashes at the same point but it doesn’t its completely random. I am guessing its because it can’t ‘see’ the sensor but why? I have no idea. If I stop the routine and restart it immediately it runs fine for a while. very frustrating a I’m trying to use the sensor to log temperature over time. Hi. I think it could be a power supply issue. Since my last reply I have connected my sensors to an external power supply, instead of using the pi supplies and I have had it running now for 2 days without the error. nice place to start but I would load modules at startup # echo -e “w1-gpio\nw1-therm” >> /etc/modules-load.d/modules.conf and read temp as user without sudo # grep -r “t=” /sys/bus/w1/devices/28-*/w1_slave | cut -d = -f2 | sed -e “s/\(..\)\(…\)/\1.\2/” edit, grep -r “t=” /sys/bus/w1/devices/28-*/w1_slave | cut -d = -f2 | sed -e “s/.\{3\}$/.&/” Thank you for that very informative tutorial. I’m familiar with the one-wire DS18B20 temperature sensor . I use it with the Arduino to measure the temperature when fermenting my homebrewed beer. I’m new to Raspberry and Python . I want to use the Raspberry so I can measure temperature and gravity using a BLE gravity sensor.
https://www.circuitbasics.com/raspberry-pi-ds18b20-temperature-sensor-tutorial/
CC-MAIN-2021-39
refinedweb
2,249
64.91
Issues Insert deleted object problem with after flush listeners My guess is this problem is related to: Failing test case: import sqlalchemy as sa dns = 'sqlite:///:memory:' engine = sa.create_engine(dns) engine.echo = True connection = engine.connect() Base = sa.ext.declarative.declarative_base() class ModelA(Base): __tablename__ = 'a' id = sa.Column(sa.Integer, autoincrement=True, primary_key=True) name = sa.Column(sa.Unicode(255), nullable=False) class ModelB(Base): __tablename__ = 'b' id = sa.Column(sa.Integer, autoincrement=True, primary_key=True) name = sa.Column(sa.Unicode(255), nullable=False) Base.metadata.create_all(connection) Session = sa.orm.sessionmaker(bind=connection) session = Session() @sa.event.listens_for(sa.orm.session.Session, 'after_flush') def after_flush(session, flush_context): for obj in session: if not isinstance(obj, ModelA): continue b = session.query(ModelB).get(obj.id) if not b: b = ModelB(id=obj.id, name=u'b') session.add(b) else: b.name = u'updated b!' a = ModelA(name=u'A') session.add(a) session.flush() session.delete(a) session.flush() session.add(ModelA(id=a.id, name=u'A')) session.commit() b = session.query(ModelB).first() assert b.name == u'updated b!' This also throws a warning (which I think it should not throw): SAWarning: Attribute history events accumulated on 1 previously clean instances within inner-flush event handlers have been reset, and will not result in database updates. Consider using set_committed_value() within inner-flush event handlers to avoid this warning. Actually the problem isn't with insert deleted object at all. It seems the problem is more simpler and within after_flush listener handling. Updated test case: the issue is that in after_flush() there, the flush is done, and now the Session is about to sweep through all of its state and reset everything to "pristine", meaning, it's going to blow away your change there. after_flush() can't be used to establish changes like that, they get erased. so the warning there is exactly how we responded to the last time someone reported this as an issue :). basically you're reporting the fix for the bug, as the bug :) There's an easy way to do this kind of thing that avoids any kind of problem and i dont otherwise see any change to be made, so just going to close this with the recipe below: if you still want to suggest a change to the behavior here then just reopen. thanks! yeah in that second case, the B is part of the flush context already, so when you say "b.name = 'x'", that change just gets blown away, but we don't have any way to know that the particular change was after the flush, as opposed to beforehand. I guess a more bulletproof way to catch it would be to place some kind of "guard" on identity_map._modified within the post-flush period. Seems a little overkill to me though anytime I work with other libraries' extension points, things just either work or they don't... yah there's not a good way to get a "guard" in there without adding a whole lot more overhead to the ORM, when a state that already has changes is modified again, it doesn't check into anything in order to save on overhead. just accessing the state's owner session involves a weakref. Ok, thanks a lot Mike! This clarifies the issue for me. I've been re-implementing SQLAlchemy-Continuum's transaction handling. I have a nice plugin architecture coming up for it and banged my head on the wall with this one.
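For anyone landing here with the same symptom, one pattern that sidesteps the problem — not necessarily the exact recipe referred to in the closing comments — is to stop mutating ORM state inside after_flush and instead emit the secondary writes as Core statements, which do not depend on the flush that has already finished. A rough sketch against the ModelA/ModelB test case above:

import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, 'after_flush')
def after_flush(session, flush_context):
    for obj in session.new:
        if not isinstance(obj, ModelA):
            continue
        # obj.id is already populated at this point, so it can be mirrored into ModelB.
        # A Core INSERT goes straight to the current transaction and leaves no pending
        # ORM state behind for the already-finished flush, so no warning is raised.
        session.execute(
            ModelB.__table__.insert().values(id=obj.id, name=u'b')
        )
        # Updates to existing rows can be emitted the same way with
        # ModelB.__table__.update().where(...).values(...)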
https://bitbucket.org/zzzeek/sqlalchemy/issues/2983/insert-deleted-object-problem-with-after
CC-MAIN-2016-36
refinedweb
587
58.79
Hi Guys, Welcome to Proto Coders Point. In this tutorial we will discuss what an instance variable in Java is, with an example.

Instance Variable Java definition

An instance variable is declared in a class just outside any method, constructor, or block of code. A Java instance variable gets created when an object of the class is created and destroyed when the object is destroyed. We all know the class definition: an object is an instance of a class, and every object has its own copy of the instance variables.

Points to remember about Java instance variables

- Instance variables can be accessed only by creating an object.
- There is no need to set a value for an instance variable: by default "0" is set for numbers, "false" for the boolean datatype, and "null" for object references.
- They are non-static variables.
- They are declared inside the class, just outside any method, constructor, or block.
- When an object is created using the "new" keyword, its instance variables are created as well; when the object is destroyed, the instance variables are destroyed with it.
- An instance variable can be initialized through the object name, as shown below.

Person p1 = new Person();
p1.name = "RAJAT PALANKAR"; // initializing the variable

Instance Variable Java Example

class Person{
    // instance variables of the class
    String name;
    int age;
}

public class Main {
    public static void main(String[] args) {
        // creating person1 object
        Person p1 = new Person();
        p1.name = "RAJAT PALANKAR"; // initializing value
        p1.age = 25;

        // creating person2 object
        Person p2 = new Person();
        p2.name = "RAM PALANKAR";
        p2.age = 25;

        // display data of person 2
        System.out.println("Data of person 2");
        System.out.println(p2.name);
        System.out.println(p2.age);
    }
}

In the Java code above, the class Person has two instance variables, name and age. I have created two objects, p1 and p2, and each object has its own copy of the instance variable values.

output

Data of person 2
RAM PALANKAR
25

Introduction to java programming
Types of variable in java
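As a final illustration of the default values mentioned above, reading an instance variable before assigning anything to it returns the type's default. The class and field names below are made up purely for the demonstration:

class Defaults {
    int count;        // numeric instance variable
    boolean active;   // boolean instance variable
    String label;     // object-reference instance variable
}

public class DefaultsDemo {
    public static void main(String[] args) {
        Defaults d = new Defaults();
        System.out.println(d.count);  // prints 0
        System.out.println(d.active); // prints false
        System.out.println(d.label);  // prints null
    }
}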
https://protocoderspoint.com/instance-variable-in-java-with-java-program-example/
CC-MAIN-2021-21
refinedweb
336
55.64
Matrix Product Operator.

#include <operator.h>

Matrix Product Operators are always kept entirely in RAM, as they tend not to be too big (even without symmetries at most just L³ (~ 2 GB for 512 sites) and typically 100 × L (~ 1 MB for 512 sites)).

The order in which operators are applied to a state differs from the usual convention. That is:

State s;
s *= op1;
s *= op2;

first applies op1 and then op2 and is equivalent to

s = (s * op1);
s = (s * op2);

which in turn is equivalent to

t = op1;
t *= op2;
s *= t;

and

s = s * (op1 * op2);

whereas in linear algebra one would expect the opposite ( \( \hat B \hat A \) first applying \( \hat A \), then \( \hat B \)). If we tried the other convention, State *= Operator would either have to be removed or stick out, and Operator =* State unfortunately doesn't exist.
https://syten.eu/docs/classsyten_1_1MPS_1_1Operator.html
CC-MAIN-2019-22
refinedweb
144
64.34
uncle: please upload -bin packages so these packages can be merged into the properly named -bin package bases. loathingkernel: feel free to upload them yourself too, as this has been getting ignored for half a year. Once the -bin packages are uploaded (I don't care who does it), please submit a merge request for this package (and the -qt one) to be merged into the -bin package(s). ... Once the peazip-gtk2 and peazip-qt packages have been renamed to -bin, this will open up the namespace for the -build packages to be moved into their place. Latest version is 6.6.0() peazip-6.6.0.LINUX.GTK2.tgz SHA256: 70e76700171dbb2f518710b7b95b388fc8e1374e7da666997a5c8ee28e324d73062aa70aca98538075d8902900c41c49b6e6dcfc05a01632a53a21c2e71fb42e SHA512: 4778575473883d75e025eb2792fe6725ad27ba066d861d3f67b8bac2b5841568
https://aur.archlinux.org/packages/peazip-gtk2/?comments=all
CC-MAIN-2018-26
refinedweb
115
70.94
Back button and shopping cart contentsCoen Damen Apr 4, 2011 6:49 AM Hi, I have a very simple scenario but I can't get it to work. Looked for some days now on forums and articles but I didn't find a solution (call me stupid :) So here's the scenario: Page 1 : categories , user selects a category, navigate to page 2 Page 2 : products, populated based on selected category, user adds a product, the shopping cart contents are updated, navigate to page 2 Page 2 again: user can add another product. shopping cart now shows 2 products. Now, when I press the back button, I navigate back to page 2, but the shopping cart is emptying the products. At least, the SFSB shopping cart contains the products but the page is not refreshed correctly. It just shows the previous states. How can I get this to work? I have JSF navigation which all work fine, but the back button issue is breaking my brain. Thanks for you help, Coen 1. Re: Back button and shopping cart contentsLeo van den berg Apr 4, 2011 8:00 AM (in response to Coen Damen) Hi, have you tried solving this with an action on page entry (defined in pages.xml) ? Leo 2. Re: Back button and shopping cart contentsFatih Alpay Apr 4, 2011 8:15 AM (in response to Coen Damen) It looks like you have problem with Bijection, Is your beans Conversation Scoped ? Well you said , SFSB which i understand is , State Full Session Bean, you can use you seam debug page, to see what object are stored in your conversation context. 3. Re: Back button and shopping cart contentsCoen Damen Apr 4, 2011 8:50 AM (in response to Coen Damen) Hi, I do have products.pages.xml files. <?xml version="1.0" encoding="UTF-8"?> <page xmlns="" xmlns: <!-- <header name="Cache-Control" value="no-cache, no-store, max-age=0, must-revalidate" /> --> <begin-conversation <navigation from- <rule if- <!-- <end-conversation/> --> <redirect view- </rule> <redirect /> </navigation> </page> I removed the end-conversation during tweaking. Here is my productSelection controller. 
@Name("prodSelCon") @Scope(ScopeType.SESSION) public class ProductSelectionController { @In @Out ShoppingCart shoppingCart; Set<Product> products = new HashSet<Product>(); @Logger private Log log; public String addProduct(Product product) { shoppingCart.addProduct(product); products.add(product); log.info("-------------------- adding new product " + product.getName() + " -------------------- "+products.size()); return "success"; } public ShoppingCart getShoppingCart() { log.info("-------------------- GETTING CARD -------------------- "); return shoppingCart; } } Here is my SFSB cart @Stateful @Name("shoppingCart") @Scope(ScopeType.SESSION) @AutoCreate public class ShoppingCartImpl implements ShoppingCart { @Logger private Log log; private List<Product> products = new ArrayList<Product>(); private Consumer consumer; @PostConstruct public void initialize() { log.info("------------------------- SHOPPINGCART CREATED ----------------------------"); //products = new ArrayList<Product>(); } @Override public String addProduct(Product product) { products.add(product); log.info("------------------------- ADDED PRODUCT TO CART ----------------------------" + products.size()); return "success"; } @Override public void removeProduct(Product product) { products.remove(product); } @Override public void setConsumer(Consumer consumer) { this.consumer = consumer; } @Override public Consumer getConsumer() { return this.consumer; } @Remove @Override public void remove() { } @Destroy @Override public void destroy() { //products = null; //consumer = null; } @Override public List<Product> getProducts() { return products; } @Override public int getTotalProducts(){ return products.size(); } } Hope this sheds some light. Thanks for your help, Coen 4. Re: Back button and shopping cart contentsCoen Damen Apr 4, 2011 8:54 AM (in response to Coen Damen) I also added this entry to the main pages.xml. <page view- <navigation from- <rule if- <!-- <end-conversation/> --> <redirect view- </rule> <redirect /> </navigation> </page> 5. Re: Back button and shopping cart contentsLeo van den berg Apr 4, 2011 9:22 AM (in response to Coen Damen) Hi, some simple guessing, because we don't have the page.. It seems to work, because your beans are in the sessions scope, which lasts until the user log-out/disconnects. In your ProductSelectionController you have a getShoppingCard-method, so it seems that you've incluuded an additional layer on top of the SFSB; Nice when you use Spring or Struts or whatever other action layers, but not necessary when you have Seam. Add all the needed functionallity in your SFSB. Additionally it means that the autocreateis not necessary anymore, because you call the SFSB directly in your view (which autocreates your beans). make it a simple conversation, so change the SFSB to that scope (or just remove the annotation, because it's default). Add a init-method on the SFSB, which is called when you start with your page. Add the action element to your page, and the init-method will called when you first open the page. Consider changing everything to simple POJO (unless you have specific requirements), because the Seam handling of a POJO saves you an additional Interface. Leo 6. Re: Back button and shopping cart contentsCoen Damen Apr 4, 2011 9:33 AM (in response to Coen Damen) Hi Leo, thanks for your reply. Yes indeed a lot is not according to standard, but you are saying it seems to work? 
I mean, when I add two products to the cart, and then press the back button, the selectedproducts show no entry because the app is back to the previous page without reloading/refreshing. So it never reloads the shopping cart entries to the page. I see that in the log. Here is the page <ui:define <ui:debug <div id="Container"> <div id="TopMenu"> <ul> <li>ProductPage</li> <li>Adress</li> <li>Enzo</li> <li> <h:outputLabel </li> </ul> </div> <div id="Outer"> <div id="Header"> <div id="Logo"> <div id="LogoContainer"></div> </div> </div> <div id="Menu"> <ul> <li>Home</li> <li>Adress</li> <li>Enzo</li> </ul> </div> </div> </div> </ui:define> <ui:define <div id="Wrapper"> <h:form <rich:dataList <h:commandLink <br /> </rich:dataList> </h:form> <h:form <h:outputLabel <rich:panel <f:facet <h:outputText</h:outputText> </f:facet> <rich:dataGrid <rich:panel <f:facet <h:outputText</h:outputText> </f:facet> <h:panelGrid <h:panelGroup> <h:outputText</h:outputText> <h:outputText <br /> <h:outputText</h:outputText> <h:outputText <br /> <h:commandLink <br /> </h:panelGroup> <h:graphicImage </h:panelGrid> </rich:panel> <f:facet <br /> <h:commandLink </f:facet> </rich:dataGrid> </rich:panel> </h:form> <h:form <h:outputText <br /> <rich:dataList <h:outputText <br /> </rich:dataList> </h:form> </div> </ui:define> So when I press the back button the shoppingCart.products is not executed. Thanks for your help, Coen 7. Re: Back button and shopping cart contentsLeo van den berg Apr 4, 2011 9:46 AM (in response to Coen Damen) Hi, try adding the action method to your page. I can see in your page that you have additional beans as repositories, so I assume that you have your persistency there. Keep in mind that Seam injection ONLY works when methods are directly called. So calling a method in a second bean (sucg as the getShoppingCard-method is not useful. As stated, get rid of the unneccesary layers; the SFSB works directly as a controller, so normally there is no need to have additional action layers. Leo. 8. Re: Back button and shopping cart contentsCoen Damen Apr 4, 2011 10:03 AM (in response to Coen Damen) Hi Leo, I do have the action in my page AND in the products.pages.xml. The action is triggered correctly I know I have not programmed very tidy (yet) but technically everything works, injection etc works fine. Only the back button problem exists. When I do the flowagain, the shopping cart shows the correct product, so - categories, select category - products page : add product to cart, add another product , right column shows two products in the cart - press back button, now products are shown but right columns shows no selected products (i.e. the page is not refreshed) - select another category, those products are shown, and the shopping cart entries are there again! So the problem is, why does the back button not refresh the page with the selected shopping cart entries? Thanks for your help, Coenos 9. Re: Back button and shopping cart contentsLeo van den berg Apr 4, 2011 10:15 AM (in response to Coen Damen) Hi, we're talking about two different things now. You mention the result of an action in the navigation-rule. I mean the addition of an action when you createthe view. This is an attribute in the page(s) tag. Examples: <page view- <action execute="#{doSomething.init}" if="#{ifNeeded}"/> </page> or <page view- Whne you add an action to your page definition, you're sure that something gets executed BEFORE rendering anything. Leo 10. 
Re: Back button and shopping cart contentsMartin Frey Apr 4, 2011 10:34 AM (in response to Coen Damen) Hi This sounds for me like a very basiccaching issue. Did you check if the browser refetches the page when the back button is used? Normal behavior here is that the browser just displays the cached page. I had a very similar issue with ajax rendered stuff on the page. The browser shows just the initially loaded page, which contains none of the ajax rendered parts. I think you use always standard h:commandbuttons? Then i could guess that the url looks always the same if you do not propagate the conversation id. If the url does not change from request to request i'm not sure if all browsers update the cache properly. As a last resort you could always rerender your shopping cart with an ajax call through the body onload function. This one is called also on the back button usage, even if the page is coming from the cache. As long as your backing bean is in the correct state there should be no issue with that. 11. Re: Back button and shopping cart contentsCoen Damen Apr 4, 2011 10:55 AM (in response to Coen Damen) Hi Martin, indeed, I am wondering, is it a cache issue or my own misconfiguration of my app??? Because when I navigate all the way back to the categories, the cart's products are shown. Only when navigating the productspages I have this problem. But I am reading in a lot of articles that Seam should be able to handle this; handling the correct state using the back button. Thanks, Coenos 12. Re: Back button and shopping cart contentsLeo van den berg Apr 4, 2011 11:07 AM (in response to Coen Damen) Hi Cone, Personally I think you have an application program as well as a configuration problem. Seam handles the use of the back-button transparantly, and as long as you configure everything as it should be it should work in a stateful and an statlesss environment. However your pages files miss the extra settings, so I assume you have the application handling everything. The second issue is the Scope of your beans. It may work in the prsent context, but you're missing Seam's perfect handling of conversations (also necessary to handle also the Back-button). My advice I can give you at the moment is that you re-consider things and go through the docs on the matter. Leo 13. Re: Back button and shopping cart contentsCoen Damen Apr 4, 2011 11:19 AM (in response to Coen Damen) Hi Leo, I changed the scope of my beans and added some method to pages.xml. I added this to pages.xml <page view- But why should I initialize something that is already initialized, like the Shopping Cart. The Cart contains the correct objects but it is not rerendered. You say that Seam handles the back button transparantly, but the docs don't say how to configure this. I thought I had everything as it should in the products.pages.xml and pages.xml but apparently not. So that's why I posted my question. Thanks, Coenos 14. Re: Back button and shopping cart contentsLeo van den berg Apr 4, 2011 11:30 AM (in response to Coen Damen) Hi, from the manual: 8.1.2. Seam and the back button . etc. etc. It's just there (my 2.1.2. version of the doc), Leo
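For completeness, the caching angle Martin raises above usually comes down to the Cache-Control header that is still commented out in the products.pages.xml posted earlier, combined with a page action so something runs before rendering. A sketch of what that page entry could look like with the header re-enabled is below — Seam 2 syntax, and the initProducts method name is only a placeholder for whatever refreshes the cart view:

<page view-id="/products.xhtml" action="#{prodSelCon.initProducts}">
    <!-- stops the browser from replaying a stale cached page on Back -->
    <header name="Cache-Control" value="no-cache, no-store, max-age=0, must-revalidate" />
    <begin-conversation join="true" />
    <navigation from-action="#{prodSelCon.addProduct(product)}">
        <redirect view-id="/products.xhtml" />
    </navigation>
</page>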
https://developer.jboss.org/thread/193911
CC-MAIN-2018-39
refinedweb
2,005
62.17
function export relaxng-schema-type function compile-schema value string source schema-input Use compile-schema to compile a RELAX NG schema specified in XML format, so that you may validate XML input against it. Compiling the schema also serves to validate the schema itself. If the schema is invalid and cannot be compiled, compile-schema throws the compile-error exception. In this example we compile the schema "my-schema" and validate XML input against it, reporting any errors along the way: import "omrelaxng.xmd" prefixed by relaxng. process local relaxng.relaxng-schema-type example-schema set example-schema to relaxng.compile-schema file "example-schema.rng" do xml-parse scan file "example-input.xml" do markup-parse relaxng.validated #content against example-schema output "%c" done done element #implied output "%c"
http://developers.omnimark.com/docs/html/function/1949.htm
CC-MAIN-2017-34
refinedweb
132
53.07
Please see for the details about this release, including the steps to update. This release is now available as part of Visual Studio 2017 version 15.5 and in the Stable updater channels for Visual Studio 2017 for Mac and Visual Studio 2015 Tools for Xamarin. Please file a quick bug report using the "15.5 Release" new bug form if you see any suspicious behavior in these versions that you wish to report. Comment in this forum thread if your question is not related to a suspicious behavior after download, but instead one of the following: Please look at the attached image. I have tried multiple times, they are all just end up with 'failed'. Please help, there are works I need to deliver! After installing the latest update my Xamarin Installation ist completely broken. See attached screenshots. Reinstall has no effect. Please fix asap, it is business critical to us. Where can i get an older version (4.8.x.x) than the current version as an offline installer package? (.msi) The update seems to have cleared my Android SDK Location setting, but it wont let me re-set it through the Options window. It shows the red X next to the box even after filling it in, and errors "Please fix all errors before saving" when trying to save any Xamarin options. Same problem happened here I did a quick double-check locally to test the specificity of this download issue, and the downloads succeeded in my test environment. As I understand it, the most likely cause of a download failure for specific users may be related to geographical location. If you continue to experience download failures in Visual Studio 2017 for Mac, please report the issue via the Help > Report a Problem menu item in the IDE for further investigation. Thanks! After updating I can't deploy to iOS from my Mac, I get a build error : Error: No iOS signing identities match the specified provisioning profile ##### I work with Visual Studio for Mac for 6 months now, no previous update broke this, only the new one. please advise. Thank you in advance. Roy This appears to be a case of the old unfortunate upstream sensitivity of the Visual Studio extension mechanism (particularly in Visual Studio 2015). Some possible steps that can help resolve this: Try running devenv /setupin a Developer Command Prompt with administrative privileges. If that is unsuccessful, try clearing Visual Studio's MEF component cache. Related: If those steps are unsuccessful, please report a bug using the "15.5 Release" new bug form for further investigation by the Xamarin team. I was able to work around this by downloading the Xamarin.iOS package directly based on a link I found with Google, but I can't find direct links for other stuffs, are they listed somewhere?. The Mono package will be available soon on. The Visual Studio for Mac package is not listed anywhere (the Visual Studio for Mac installer is listed, but that might run into the same issue as the updater when it downloads the individual package). I have direct-messaged you the links for the current versions of these 2 packages. The Xamarin Profiler download is available from the Xamarin Profiler release notes. That message is accurate. Unfortunately by coincidence Apple released iOS 11.2 and Xcode 9.2 within effectively the same business day as the Visual Studio 2017 version 15.5 release. The Xamarin team is proceeding as usual to test for compatibility and publish updated versions of Xamarin.iOS and the Visual Studio Tools for Xamarin to align with the new version of Xcode. 
You can watch the Xamarin.iOS release notes or the Xamarin team release blog for additional updates about availability of iOS 11.2 and Xcode 9.2 essential compatibility. Please file a quick bug report using the "15.5 Release" new bug form if you see any suspicious behavior in these versions that you wish to report. Thanks! I don’t want to open a bug before I’m sure it’s a bug. Roy. Since updating Xcode and Visual Studio Mac, I'm getting the following error every time I try to build: Error. XCode is now v9.2, Visual Studio Mac is v7.3 My iPhone and iPhone simulators are all on 11.2, and the only SDK version available in Visual Studio is 11.2 as well. Any suggestions? open a .xib or .storyboard file failed See the comment that is 3 before yours in this thread. To reiterate, unfortunately (but not unlike Visual Studio), Apple does not provide pre-notification of their releases, so the current Visual Studio and Xamarin versions released on Monday have not yet been updated with iOS 11.2 and Xcode 9.2 compatibility (that Apple released within effectively the same business day as the new Visual Studio and Xamarin versions). For a possible workaround see also: @BrendanZagaeski The Android builds of my Xamarin Forms PCL now fail consistently with 50+ errors such as 'Android' namespace doesn't exist in Xamarin.Forms.Platform. I can delete bin & obj files - the solution build once and then fails. Xamarin.IOs seems fine for the moment Is there any documentation on how to downgrade VS for Mac to say the Nov 2017 release? I've lost a whole day trying to remedy the situation and think its best I try and downgrade. Do I need uninstall everthing first? After updating to the latest Xamarin version, I receive the following error when trying to build my project in VS 2015: 2: error: Error: String types not allowed (at 'versionCode' with value ''). Any insights on this would be greatly appreciated. Thanks. My understanding is that at the moment, the Visual Studio for Mac product group has not chosen to provide any particular documented downgrade process. I did find a couple UserVoice feature requests that look roughly applicable to the topic (admittedly for Windows I think, but Visual Studio for Mac would quite possibly follow the Windows precedent if something changed for Windows): And possibly also: I'll direct-message you individual package download links aligned with the November versions since those do by chance exist with the current way things get installed. In general, installing those packages in-place over the newer packages can work OK, but it is not a tested scenario. Here is my story for those who experiences unexpected behaviors after updating / upgrading VS for Mac. I have been working on an internal app for a long time, since Xamarin Studio days. I created a solution in those days and never recreated the solution from the scratch because it was always opened in XS / VS after every update / upgrade till the last one. After receiving 15.5 Feature Upgrade I received "Project.sln is not recognized as a VS solution" or something similar message while I was opening the solution. That drove me crazy In that crazy mood, I renamed the project's folder from Project to Project.old. I created new Project folder and an empty solution. I started to add old projects / libraries into the new solution. While I was doing that, I converted all PCL library projects to .NET Standard libraries. It took a few hours and it was painful. Really painful. Now, I am happy All those weird issues are gone. 
I feel VS for Mac is stable now. It is easy to say but I think it is worth giving a try. Similar problem here. Although the SDK location is correctly set in the Xamarin options, VS now thinks it's not. I can't start emulators, the SDK manager and so on so I'm basically not able to work at the moment. Even creating new projects doesn't work since they are completely emtpy. I found that if I loaded the Visual Studio Installer, selected More, and then Repair.... All my Android issues were then fixed... Does the problem with mono android package is solved with version 4.8.0.754 with vs 2015? I'm getting the infamous MT0026 error. I did launch Xcode 9.2 which seems to work but I'm still only getting the "Generic Simulator" and can't launch a simulator. Thanks for the info. Yeah, that helped me too. @BrendanZagaeski, could you please send me those download links, too? I'm afraid my demo at a conference on Wednesday depends on this downgrade. Sure thing. I've sent them over now. Yet another traditional Xamarin update which broke everything again. And again. Bravo! Now I can't launch my Android project. ClassNotFoundException just beacuse I have a class derived from Application class. Nothing helps. Clean, rebuild, app uninstall, restart, woodoo with settings... Nothing. Thank you for this. Is there a way to rollback this nightmare to the previous version in VS2017 without going through the hell? It sounds like you might be running into the issue described in: There is an issue where the Visual Studio Installer (for Visual Studio 2017) is apparently interpreting a change in requirements from "required" to "optional" for the Android SDK component as a signal to uninstall that component in certain update scenarios. Re-selecting the components in the Visual Studio Installer to re-install them has been reported to resolve the issue for several users. The Xamarin team is coordinating with the Visual Studio Installer team to set that component back to required for now until the issue with the Visual Studio Installer uninstall behavior can be investigated. will lead you to my.visualstudio.com, which offers an installer for Visual Studio 2017 version 15.0 in accordance with the Visual Studio servicing policy. Beyond that, the UserVoice links from my earlier comment might be relevant to your question. Thanks for the quick reply, Brendan. I'm pretty sure my problem is not related to the installer issue. Xamarin.Android is there and works. However, when I add a class to the project which derives from the Application class, app no longer launches -- ClassNotFoundException. This is pretty serious issue given that all my projects have their own Application-derived class. bugzilla.xamarin.com/show_bug.cgi?id=61014 Impossible to work and no feedback from Xamarin team for several days. I have this same issue with 15.5.1 in 2017. I can't build my Android project now. The AndroidManifest.xml in the project has a number but the one generated in the obj folder does not. Update: Since my original post, I found that the generated manifest file was only missing a number when I had the version number in the AndroidManifest.xml set to 0. Once I changed this number to something else (ex: 1), the generated file would have the correct version number and my project would build. I hope that this will help you until a proper fix can be put in place. I can build on simulator but can't build app on real phone , error showed The app has been terminated. 
Failed to Stop app: An error occured on client IDB480754 while executing a reply for topic xvs/idb/4.8.0.754/stop-app The app has been terminated. Thanks @pecrevier ! That workaround did it for me. It sucks that that broke all of a sudden. Yeah it's annoying isn't it... I can only guess Xamarin use the community as a whole as their QA team (evident by the amount of bugs that always open up from users straight after a release)... All we do is install, compile, and everything is broken.... Luckily there are some awesome users who are able to resolve stuff straight away and post in these forums... The VS repair solution that somebody posted saved me big time... And it was never an official Xamarin response to do it.... As above I guess they just do not have the man power to QA and respond to issues ASAP... @GVx Are you saying simple VS repair rolls the current version back? Given the fact almost each Xamarin release turns into a nightmare for me, I always keep the latest working .vsix files. This time I applied that to have the previous Xamarin.Android alongside the latest Xamarin.iOS. I didn't need to roll back... My Android stopped working altogether and the repair (VS Installer -> More -> Repair) fixed that.... So I am on the current version and everything seems to be OK at the moment... Android and iOS working... Thanks for the feedback. I can't speak much to the backstory for the particular issues that have been observed in this release (apart from the bit of information about the Visual Studio Installer Android SDK uninstallation issue from earlier in the thread). It is perhaps of interest to note that each item that the Xamarin team fixes in a service release does go through a follow-up internal discussion. Just as a little reminder, note that as mentioned in the first post in the thread this forum thread is aimed at the following purpose: Whereas for suspicious behaviors after download it is recommended: In particular, discussion and voting on DeveloperCommunity items reported via the Report a Problem feature in Visual Studio 2017 and Visual Studio for Mac is recommended for any suspicious behaviors you might see after download in one of those IDEs. Dedicated tracking items per issue helps keep the conversation well focused for each issue, and allows the engineers with the closest knowledge to monitor them as appropriate. Thanks! My team of developers is getting the same issue
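For anyone else hitting the empty-versionCode build failure mentioned a few posts up, the workaround amounts to making sure the manifest carries a non-zero value — the package name below is just a placeholder:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app"
          android:versionCode="1"
          android:versionName="1.0">
    <!-- versionCode="0" was being dropped from the generated manifest; any non-zero value builds -->
</manifest>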
https://forums.xamarin.com/discussion/111390/current-release-15-5-feature-release/p1
CC-MAIN-2018-43
refinedweb
2,264
65.12
Borrow cookies from your browser's authenticated session for use in Python scripts.

NB: Use pip and python instead of pip3 and python3 if you're still on Python 2 and using pycookiecheat < v0.4.0. pycookiecheat >= v0.4.0 requires Python 3.5+, and may soon go to 3.6+.

python3 -m pip install pycookiecheat

See #12. Chrome is now using a few different keyrings to store your Chrome Safe Storage password, instead of a hard-coded password. Pycookiecheat doesn't work with most of these so far, and to be honest my enthusiasm for adding support for ones I don't use is limited. However, users have contributed code that seems to work with some of the recent Ubuntu desktops. To get it working, you may have to sudo apt-get install libsecret-1-dev python-gi python3-gi, and if you're installing into a virtualenv (highly recommended), you need to use the --system-site-packages flag to get access to the necessary libraries. Alternatively, some users have suggested running Chrome with the --password-store=basic or --use-mock-keychain flags.

git clone
cd pycookiecheat
python3 -m venv .venv
./.venv/bin/python -m pip install -e .[dev]

from pycookiecheat import chrome_cookies
import requests

url = ''

# Uses Chrome's default cookies filepath by default
cookies = chrome_cookies(url)
r = requests.get(url, cookies=cookies)

Use the cookie_file keyword-argument to specify a different filepath for the cookies-file:

chrome_cookies(url, cookie_file='/abspath/to/cookies')

Keep in mind that pycookiecheat defaults to looking for cookies for Chromium, not Google Chrome, so if you're using the latter, you'll need to manually specify something like "/home/username/.config/google-chrome/Default/Cookies" as your cookie_file.

I don't use Windows or have a PC, so I won't be adding support myself. Feel free to make a PR :)

cryptography module on OS X (pycookiecheat < v0.4.0): If you're getting this error and using Homebrew, then you need to follow the instructions for Building cryptography on OS X and export LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" and try again.

cryptography module on Linux: Please check the official cryptography docs. On some systems (e.g. Ubuntu), you may need to do something like sudo apt-get install build-essential libssl-dev libffi-dev python-dev prior to installing with pip.

On KDE, Chrome defaults to using KDE's own keyring, KWallet. For pycookiecheat to support KWallet the dbus-python package must be installed.

python -m pip install git+[email protected]
https://xscode.com/n8henrie/pycookiecheat
CC-MAIN-2021-49
refinedweb
425
56.96
In this day and age Enterprise Mobility is a real thing! And SAP are doing their bit in helping Enterprises by providing the tools (read SCP and its suite of services) to make mobile Apps that integrate with their SAP backends seamlessly. Once the basic app is developed, there is always a requirement to add bells and whistles to it so that it can be made even more user friendly. This blog will look at one such aspect which is Push Notifications to an iOS device. In our scenario we have a SAP Patient Engagement system hosted on the SAP Cloud Platform. We have two major players in this whole solution. The main focus is on the Patient who uses a mobile (iOS) app to track their wellbeing. The other player is the Clinician who monitors the Patients’ wellbeing on a Fiori App. The Patient App is built using the SAP SDK for iOS. The Patients are supposed to receive Notifications at regular intervals which serve as reminders to track their physical activity/statistics like temperature, bowel movement or physical state like mood, drowsiness etc. Here is a list of Components that form part of this solution – - Figure 1 Here is a bit more detail on what happens in each step of the diagram above. Step 1 – Obtaining Device Token - User(Patient) opens iOS app and is prompted to allow Notifications. This is a very familiar screen as shown in figure 2. Figure 2 - If the Patient agrees, the iOS App receives a device token from APNS. Step 2 – Send Device Token to SAP HANA - The App sends the phone’s device token to SAP HANA along with the Patient’s internal ID - This happens by the app calling an xsodata service which is exposed via the SAP Cloud Platform. The service receives the call and creates an entry in DB which registers this device token against the patient’s ID. Step 3 – Obtain Device Registration id from SAP Cloud Platform Mobile Services - The xsodata service above also calls an API to register the device in SCP Mobile Services. - The API provided by SCP mobile services has a similar url/endpoint to this –<your namespace>/Connections - The response of the API call above contains what is called a registration id in SCP Mobile services. This registration id is linked to the device token passed in the request. Figure 3 shows the general Navigation of how to get to the Registration Page in “Mobile Services” on the SAP Cloud Platform. It also highlights a table entry at the bottom of the page which shows this registration id. Figure 4 shows the device token (highlighted) registered against a particular SCP Mobile services registration id. - This registration id is stored in the HANA DB against the Patient and is used in step 4. Figure 3 Figure 4 Step 4 – Call notification API - A batch job in HANA(.xsjob) that runs at regular intervals (in this case daily at 9 am) . Here is a sample of the code in the xsjob file which calls an xsjs called sendNotifications. { "description": "Send notifications", "action": "<packagepath>:sendNotification.xsjs::sendNotifications", "schedules": [ { "description": "Send notification at 9.00 AM", "xscron": "* * * * 9 0 0" } ] } - This xsjs service calls an SCP Mobile services API to send Notifications. 
The url/endpoint of this service will be something like this – - The json payload of the request looks something like this { "notification": { "alert": "{\"title\": \"Time to check in with your daily plan\",\"subtitle\": \"\",\"body\": \"Press and hold here to view details for your treatment plan\"}", "badge": 1, "sound": "default", "customParameters": { "apns.category": "dailyCheckIn", "apns.mutable-content": 1, "apns.thread-id": "check-in-7-02-2019" } }, "registrations": [ notifSchedule.CONNECTION_ID ] } The category, mutable-content and thread-id fields of the payload are used by the app to perform different actions on receipt of the notification like showing a section of the app on 3-D Touch or showing an Image etc. In this case, a 3-d touch expand the notification below (figure 5) and shows details about the Patient’s treatment plan. Figure 5 - The urls of these APIs are available in the APIs tab when you register your application on SAP Cloud Platform Mobile services., the how-to of which is not covered in this blog as it has been covered elsewhere. Steps 5 and 6 – Send Push notification to iOS App One the SAP Cloud Platform Mobile services receives the “Notification” API request it forwards the content to the Apple Push Notification Service along with the iOS device token which in turn send it to the iOS App using that token. This completes the whole circle. Hope you found the blog useful. In case of details of code behind the iOS App sending the device token to HANA or for any part of the process in HANA do let me know in the comments section and I will be happy to provide more code snippets. Thanks you sir very helpful blog! But sir i have some issue like where i can write the XSJOB file also where you made the object of APN. and sir i just have one backend this backend connected with the SAP odata with hana and my ios app also connected same way . so i just want to know how can i get the messege immidietely when backend user push the data in the hana table. Hi Himanshu Instead of starting the Job (xsjob) you could call the xsjs (that sends the notification) itself on successful update of the record in the HANA table.
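For readers who asked about the xsjs side: below is a bare-bones sketch of what the sendNotifications call can look like. The destination name, package path, and the Mobile Services endpoint path are placeholders that depend on your SCP account and application, and the payload simply mirrors the JSON shown above:

// sendNotification.xsjs - illustrative only; destination and endpoint path are assumptions
function sendNotifications() {
    // HTTP destination pointing at your SCP Mobile Services host, maintained in the XS admin UI
    var destination = $.net.http.readDestination("my.package", "mobileServices");
    var client = new $.net.http.Client();

    var payload = {
        notification: {
            alert: JSON.stringify({
                title: "Time to check in with your daily plan",
                subtitle: "",
                body: "Press and hold here to view details for your treatment plan"
            }),
            badge: 1,
            sound: "default",
            customParameters: {
                "apns.category": "dailyCheckIn",
                "apns.mutable-content": 1,
                "apns.thread-id": "check-in-7-02-2019"
            }
        },
        registrations: ["<registration id stored against the patient>"]
    };

    var request = new $.web.WebRequest($.net.http.POST, "<mobile services notification endpoint path>");
    request.headers.set("Content-Type", "application/json");
    request.setBody(JSON.stringify(payload));

    client.request(request, destination);
    var response = client.getResponse();
    return response.status; // a 2xx status is expected on success
}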
https://blogs.sap.com/2019/02/24/apple-push-notifications-using-scp-mobile-services-and-hana/
CC-MAIN-2021-10
refinedweb
925
60.14
On Tue, 2004-10-26 at 01:16, Grant? See the GPL FAQ at : <quote from above URL>. </quote> Thus, if you combine your code with GPLed code using py2exe, the GPL clearly applies. If you import GPLed Python code into the same namespace as your code, the GPL clearly applies. If your code calls a GPLed Python module via an "intimate" RPC mechanism like PyRO, then the GPL may apply (but it is not so clear-cut). If your code calls GPLed code via a less intimate mechanism, like XML-RPC or HTTP, the GPL clearly doesn't apply. If your code and GPL code are just on the same disc, the GPL clearly doesn't apply.
https://mail.python.org/pipermail/python-list/2004-October/274256.html
CC-MAIN-2014-15
refinedweb
118
80.72
Hi, Could someone assist me with a code snippet to get the exact date/time a trade was placed? I would like to use that in a TimeSpan interval to close the trade after a number of days. I have attached a code snippet showing how I'm starting to work on it, but I need to replace the hard-coded date with the actual trade time, i.e. the time the initial trade was placed. Any help is always appreciated; I keep getting closer to having my first algo completed. Thanks! -John

using System;

class Program
{
    static void Main()
    {
        string tradeDate = "02-06-2018"; // Instead of 2/6/18 I need the trade date/time here
        DateTime startDate = DateTime.Parse(tradeDate);
        DateTime now = DateTime.Now;

        TimeSpan elapsed = now.Subtract(startDate);
        double daysAgo = elapsed.TotalDays;

        Console.WriteLine("{0} was {1} days ago", tradeDate, daysAgo.ToString("0"));
    }
}
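One way this kind of timed exit is commonly handled inside a LEAN/QuantConnect algorithm is to record the algorithm time when the entry order fills and compare it against the current time on each data slice. The sketch below is illustrative only — the symbol, the holding period, and the _entryTime field are assumptions, so double-check the order-event properties against the current API docs:

using System;
using QuantConnect.Algorithm;
using QuantConnect.Data;
using QuantConnect.Orders;

public class HoldForDaysAlgorithm : QCAlgorithm
{
    private DateTime? _entryTime;       // when the entry order filled (assumes one position at a time)
    private readonly int _holdDays = 5; // close the trade after this many days (illustrative value)
    private Symbol _symbol;

    public override void Initialize()
    {
        SetStartDate(2018, 1, 1);
        SetCash(100000);
        _symbol = AddEquity("SPY", Resolution.Daily).Symbol;
    }

    public override void OnData(Slice data)
    {
        if (!Portfolio.Invested)
        {
            SetHoldings(_symbol, 1.0);
        }
        else if (_entryTime.HasValue && (Time - _entryTime.Value).TotalDays >= _holdDays)
        {
            // Close the trade once the holding period has elapsed
            Liquidate(_symbol);
        }
    }

    public override void OnOrderEvent(OrderEvent orderEvent)
    {
        if (orderEvent.Status != OrderStatus.Filled) return;

        if (Portfolio[_symbol].Invested)
        {
            // Entry fill: remember when the position was opened (Time is the algorithm clock).
            if (_entryTime == null) _entryTime = Time;
        }
        else
        {
            // Exit fill: position is flat again, clear the timestamp.
            _entryTime = null;
        }
    }
}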
https://www.quantconnect.com/forum/discussion/3370/calculate-time-since-was-trade-placed/
CC-MAIN-2021-17
refinedweb
147
65.32
Bund. We’ll be covering: For comparing technical competencies, we have picked up React Facebook Pixel as a library and a very basic React app as a sample to benchmark each of these bundlers. This comparison is not to establish a single winner from amongst these great tools; rather, it is to help you more easily make your decision. All of these bundlers are definitely great tools managed by great people, and they are all super awesome in one way or another. To all the maintainers, contributors, sponsors, and backers, cheers 🍻 Configurations Configuring a bundle has been one of the most cursed yet most sophisticated areas in the frontend world. For small-scale applications, one might feel this should be very straightforward. Still, as the application’s size grows, we need more sophisticated configurations to keep our apps efficient and performant. We have witnessed many debates among developers about how tedious it is to configure a modern-day tech stack for a small app. These debates and the common patterns subsequently adopted by a majority of the community have led many bundlers to offer zero-config solutions. Though it’s claimed by almost all of these bundlers, being zero-config is not possible for any of them. It is more about being quickly configurable and keeping the configuration guides as comfortable as possible. All of these bundlers have their reds and blues in this area. Here, we are sharing configs for generating distribution packages for React Facebook Pixel. It will give you a glimpse of how it looks like for each of these bundlers. webpack const path = require('path'); const TerserPlugin = require('terser-webpack-plugin'); module.exports = { entry: ['./src/index.js'], output: { path: path.join(__dirname, 'dist'), filename: 'fb-pixel-webpack.js', libraryTarget: 'umd', library: 'ReactPixel', }, module: { rules: [ { use: 'babel-loader', test: /\.js$/, exclude: /node_modules/, }, ], }, resolve: { extensions: ['.js'], }, optimization: { minimize: true, minimizer: [ new TerserPlugin({ terserOptions: { warnings: false, compress: { comparisons: false, }, parse: {}, mangle: true, output: { comments: false, ascii_only: true, }, }, parallel: true, cache: true, sourceMap: true, }), ], nodeEnv: 'production', sideEffects: true, }, }; Rollup import babel from '@rollup/plugin-babel'; import { nodeResolve } from '@rollup/plugin-node-resolve'; import { terser } from 'rollup-plugin-terser'; import filesize from 'rollup-plugin-filesize'; import progress from 'rollup-plugin-progress'; import visualizer from 'rollup-plugin-visualizer'; export default { input: 'src/index.js', output: [ { file: 'dist/fb-pixel.js', format: 'cjs', name: 'ReactPixel', exports: 'named', }, ], plugins: [ terser(), babel({ babelHelpers: 'bundled' }), nodeResolve(), // All of following are just for beautification, not required for bundling purpose progress(), visualizer(), filesize(), ], }; Parcel.js We didn’t need any configs for Parcel, as the default configs were enough to handle our library. 
Here is the command we used: bash "bundle:parcel": "parcel build src/index.js --experimental-scope-hoisting --out-file fb-pixel-parcel.js", Here is my conclusion for this: - webpack still requires us to use ES5 syntax, which makes it a little problematic - Rollup has simpler syntax and looks ideal for managing libraries - Parcel v2 is coming up with configuration file support with awesome default configs to extend for sophisticated apps 1️⃣ Rollup 2️⃣ Parcel 3️⃣ Webpack Features To stay competent for new and more sophisticated web apps, each of these bundlers offers all the features required by most of the modern apps. The web.dev team recently launched a new initiative called Tooling.Report with the goal of making it easy to select the right tools for your next project by directly comparing their feature sets. Where bundlers are concerned, the team compared them across six dimensions and 61 feature tests. This report gives us great insight into what all of these bundlers are offering. Here we have summarized the results of these tests. Code splitting By code splitting, we mean to extract common dependencies or modules in a shared bundle and ensure that only the code required for the page is downloaded and executed. Code splitting is a crucial aspect of keeping large-scale applications efficient. The web.dev team evaluated each bundler against eight criteria. The results are below. Results: 1️⃣ Rollup [6/8] 2️⃣ Webpack [4/8] 3️⃣ Parcel [3.5/8] None of these bundlers can split modules based on exports used by other bundles. But besides that, Rollup stands on top, as it passes all other tests. Hashing To keep app load time lower, resources should be cached and reused on the client side after they have been downloaded once. To invalidate a resource’s cache, the resource name can be changed. This change can be done by associating a version identifier with the resource’s name. Build tools can generate version identifiers based on the content of the file. If the file contents change, it will have a new version ID; otherwise, it stays the same, resulting in the client reusing the cached result. To avoid excessive cache invalidation, bundlers have to ensure an invalidation “cascade” is implemented properly. This means every updated JS and non-JS asset should have a new hash, and all JS bundles referencing that asset need to be updated to reference the new URL — thus, updated content and a new hash for the JS referencing that asset, and so on. The bundlers were compared on 10 different caching criteria. Results: 1️⃣ Parcel [8.5/10] 2️⃣ Webpack [8/10] 3️⃣ Rollup [6/10] Parcel stands on top here as it beats webpack with a really impressive feature: the bundle hashes based on the final compiled code, which means changes in comments will not impact bundle hashes. Non-JavaScript resources Web apps are not just about JavaScript; they include many other resources, including rich content, fonts, serialized data, and HTML and CSS. In recent times, we have seen JS emerge as a central point that holds and places all of these assets. Though JS doesn’t allow for importing these non-JS assets, bundlers have now made it possible. Keeping in mind the code splitting and hashing features, handling these assets becomes more complicated. Bundlers consider applications as a graph. It handles each resource as a node connected with all other resources that it imports. This makes it easier to modify resource URLs after hashing and usage-based transformations like namespacing in CSS. 
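To make the non-JavaScript story concrete, here is roughly what importing such assets from application code looks like. This is illustrative only: webpack needs matching loaders configured, while Rollup and Parcel rely on plugins or built-in handling, and the JSON key used below is made up.

// The bundler treats each of these as a node in the module graph,
// rewrites the emitted URLs after hashing, and can inline or copy the files.
import './styles/app.css';             // CSS becomes a dependency of this module
import logoUrl from './img/logo.svg';  // resolves to the final, hashed asset URL
import strings from './i18n/en.json';  // serialized data imported as a parsed object

const img = document.createElement('img');
img.src = logoUrl;
img.alt = strings.logoAlt;
document.body.appendChild(img);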
For this feature category, the bundlers were compared across 16 criteria. Results: 1️⃣ Webpack [15.5/16] 2️⃣ Rollup [15/16] 3️⃣ Parcel [9.5/16] When it comes to handling resources, Parcel is way behind in the race. Rollup and webpack remain toe to toe as both now offer almost everything required to bundle non-JS resources. Output module format Modern browsers now support ECMAScript Modules (ESM), but supporting older browser versions means we have to transform our JS into CommonJS. There were just three criteria for this section. Results: 1️⃣ Rollup [3/3] 2️⃣ Webpack [2/3] 2️⃣ Parcel [2/3] Rollup takes a lead here as neither of the others can generate ESM bundles. Transformations A significant impetus for adopting bundlers in modern applications was the transformation of code and assets. Some of these transformations are general purpose, e.g., compression, minification, etc., while others are geared toward a specific set of assets. These transformations usually aim at supporting different versions of browsers and optimizations. The web.dev team identified seven criteria for comparing the bundlers’ transformations capabilities. Results: 1️⃣ Webpack [6/7] 1️⃣ Rollup [6/7] 3️⃣ Parcel [4.5/7] Though neither webpack nor Rollup can eliminate dead code from dynamically imported modules, these two passed all other tests, including Brotli compression support. Benchmarking Web bundlers today aren’t just used for creating production builds. Rather, our day-to-day development depends heavily upon their performance. As mentioned earlier, we created a small React application to benchmark bundling speed and the size of the bundles generated. These benchmarks were performed on: MacBook Pro (15-inch, 2018) | 2.2 GHz 6-Core Intel Core i7 | 16 GB 2400 MHz DDR4 | Radeon Pro 555X 4 GB, Intel UHD Graphics 630 1536 MB Bundling speeds For application development, webpack 4 is a clear winner here, with the fastest build time for both dev and prod environments. Parcel takes a big leap for library bundling in almost half the time as webpack. Build size As far as size is concerned, Rollup has the lead here, closely followed by Parcel v2. Please help make this benchmark better by sharing your results in the comments section or opening an issue in our repository. Documentation webpack has been one of the most cursed libraries for its complexity, but its documentation has improved over the past few years. A number of developers have been sharing their experiences, and many resources are available to learn about webpack’s complexities. Certain features are still undocumented, and most of them are required for real advanced use cases. Rollup has good documentation, and there are a good number of resources available to learn it in depth. You might find some difficulty in selecting plugins, as most of them are not official. Nevertheless, it is a go-to solution for library developers, as official and active plugins are enough to cover most use cases. Parcel v2 is still in beta, and documentation is a work in progress. Since it has set up standards for onboarding plugins, this will help as it progresses. Plugins and ecosystem There isn’t much to compare when it comes to plugins. Plugins for most common use cases are available for all the bundlers, but the quality of each may vary a lot. webpack has a large number of official plugins, which makes the selection easy and quick. Rollup has a lot of community plugins, both actively maintained and stalled. One has to put in some effort to test and decide what works best for them. 
Parcel had a unique mechanism for plugins with v1, wherein you don’t have to configure plugins at all — just install them and get them running. With v2, there is a configuration setup under development and will give more power for sophisticated use cases. Conclusion Whether you’re a new or a seasoned frontend dev, you will have probably heard debates about bundlers — or joined in on some yourself. webpack is praised for its flexibility yet cursed for its complex. Rollup is considered excellent for libraries. Parcel has made a big impact and could very well be making a bigger one once v2 is out of beta. What to select? As we said earlier, it depends upon your set of requirements. I hope this comparison will help in making the decision easier for you. Honorable mentions - Snowpack is new in town but is making reasonable grounds for the future. - Poi is a human-friendly wrapper about webpack. This bundler is somewhere between Parcel and webpack. - Pax, a Rust-based bundler, promises to deliver higher speed..
https://blog.logrocket.com/benchmarking-bundlers-2020-rollup-parcel-webpack/
CC-MAIN-2021-04
refinedweb
1,796
54.93
Given: public class Item { int id; int price; public Item (int id, int price) { this.id = id; this.price = price; } public String toString() { return id + " : " + price; } } and the code fragment: List<Item> inventory = Arrays.asList(new Item(1, 10), new Item(2, 15), new Item(3, 20)); Item item = inventory.stream() .reduce(new Item(4, 0), (x, y) -> { x.price += y.price; return new Item(x.id, y.price);}); inventory.add(item); inventory.stream() .parallel() .reduce((x, y) -> x.price > y.price ? x : y) .ifPresent(System.out::println); What is the result? A. 4 : 45 B. 4 : 0 C. 4 : 20 D. 1 : 10 2 : 15 3 : 20 4 : 45 E. The program prints nothing The correct answer is E. Since we see no filter(), and reduce()’s logic does guarantee a returned value, ifPresent() should have something to work with thus pushing us towards choosing among options A though D. The program itself, however, is a bit too complex; just take a look at the first reduction, which returns an Item whose id and price fields actually come from different objects; this is a good indicator of a potential comperr or RTE. And indeed this is our case because any List created with the Arrays.asList() method is structurally immutable; even the OCA-related Nailing 1Z0-808 mentioned this very fact on multiple occasions. As a result, the code throws an UnsupportedOperationException when trying to add a newly created element to inventory. The wording of option E is, technically speaking, correct because it’s the JVM itself rather than the program who prints the exception message. Populating a structurally modifiable inventory leads to printing 4 : 20: class Item { int id; int price; public Item (int id, int price) { this.id = id; this.price = price; } public String toString() { return id + " : " + price; } } class Test{ public static void main(String[] args) { // List<Item> inventory = Arrays.asList(new Item(1, 10), // new Item(2, 15), // new Item(3, 20)); List<Item> inventory = new ArrayList<>(); inventory.add(new Item(1, 10)); inventory.add(new Item(2, 15)); inventory.add(new Item(3, 20)); Item item = inventory.stream() .reduce(new Item(4, 0), (x, y) -> { x.price += y.price; return new Item(x.id, y.price);}); inventory.add(item); inventory.stream() .parallel() .reduce((x, y) -> x.price > y.price ? x : y) .ifPresent(System.out::println); // 4 : 20 } } Would you like to see why? Okay, let’s analyze how the inventory’s stream is being operated on here. Probably, the biggest challenge is getting a solid grasp of how the reduce() method actually works. The java.util.stream.Stream interface defines three reduce() methods for ordinary reduction: and two more collect() methods for the so-called mutable reduction: The differences between the two reduction approaches are not important for our exam; still, it might be a good idea to watch Angelika Langer’s presentation on this subject. Okay, our Problem uses the second version of reduce(), the one with an identity arg. From the package java.util.stream description, section Reduction operations: “The identity element is both an initial seed value for the reduction and a default result if there are no input elements”. Reductions always operate on each and every element of the stream in question. 
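For reference, the overloads being referred to have these signatures (as declared in java.util.stream.Stream):

Optional<T> reduce(BinaryOperator<T> accumulator)
T reduce(T identity, BinaryOperator<T> accumulator)
<U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner)

<R> R collect(Supplier<R> supplier, BiConsumer<R, ? super T> accumulator, BiConsumer<R, R> combiner)
<R, A> R collect(Collector<? super T, A, R> collector)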
In our case reduce() first takes identity (i.e., the object created by new Item(4,0)) and the starting element of the stream (i.e., new Item(1,10)), does its thing by using these two objects (the lambda expression calls them x and y, respectively), and then returns a single element (i.e., new Item(x.id, y.price)). In other words, two elements are reduced to one. Finally, the reduction’s result gets assigned to the item variable. Then reduce() grabs identity again, looks at the next element in the stream (i.e., new Item(2, 15)), reduces this pair to another Item object according to the lambda expression’s logic, and assigns this object to item. After that the process is repeated for the last time because this particular stream contains only three elements. Please note that our reduce() does not do its thing like a true blackbox, performing everything inside itself and only after that producing some final result. No, the method returns a result for each pair consisting of identity and every successive element until the stream is exhausted. Illustration: final List<Item> reduced = new ArrayList<>(); List<Item> list = Arrays.asList(new Item(1,1), new Item(2,2), new Item(3,3)); list.stream() .reduce(new Item(4,0), (x,y) -> { Item item = new Item(x.id, y.price); reduced.add(item); return item; }); System.out.println(reduced); // [4 : 1, 4 : 2, 4 : 3] The above example populates reduced with each returned result. As we can see, the x object is always the same (identity), and y corresponds to the elements in list, one after another. By the way, we practically reproduced how the collect() method works, and even demonstrated the pitfalls of mutability, something that Angelika Langer talks about in her presentation. Also please note the final modifier on reduced: lambdas may reference only final or effectively final variables when these vars have been declared outside the lambda expression. Actually, in our code this final modifier isn’t even necessary because we don’t change reference to the reduced object; we only change its contents by adding more and more new elements. Just keep this point in mind, OK? because there’s a question on the exam that’ll ask you about this… Fine; now that we know how reduce() works, it’s time to take a look at the last block of stream operations in our Problem 32: inventory.stream() .parallel() .reduce((x, y) -> x.price > y.price ? x : y) .ifPresent(System.out::println); Here we have the first version of reduce(), the one with a BinaryOperator arg. Since this time there’s no guaranteed value to be returned (no default identity), the return type is an Optional, which explains why we see ifPresent(), instead of, say, forEach(). BinaryOperator means that the functional method of this interface (that is, apply()) takes two args of the same type, does something to them – or maybe with them, or maybe even doesn’t touch them at all, – and then returns some value of the exact same type; that’s why it is called operator. Functions, on the other hand, take in and return different types. It was just a reminder, alright? So, our reduce() compares two Item objects and returns an Optional wrapped around the Item object whose price is higher, and then ifPresent() prints this Optional’s contents. Which we can even read thanks to the overridden toString() in the Item class.
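As a side note (this goes beyond what the question asks for), the same max-by-price reduction can be written more directly with max() and a comparator, which also returns an Optional<Item>:

inventory.stream()
         .max(Comparator.comparingInt(i -> i.price))
         .ifPresent(System.out::println);

With the modifiable list above, both 3 : 20 and 4 : 20 share the highest price, so be aware that this version may pick a different one of the tied elements than the x.price > y.price ? x : y reduction does.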
http://igor.host/index.php/2017/08/05/ocp-question-33-explanation/
CC-MAIN-2017-39
refinedweb
1,120
56.25
PowerShell Scriptomatic Have you ever found yourself thinking, “I wonder when the Scripting Guys are going to write a PowerShell Scriptomatic”? Well, you can stop wondering: the Scripting Guys will probably never write a PowerShell Scriptomatic, a utility that would make it a snap to create WMI scripts using Windows PowerShell. Is that because the Scripting Guys don’t believe such a tool would be useful? Heck no; we think a PowerShell Scriptomatic would be incredibly useful. So then why aren’t we going to write such a tool? One reason and one reason only: Ed Wilson has already written a PowerShell Scriptomatic for us. For those of you who don’t know the prolific Mr. Wilson, Ed is the author of about half-a-zillion books for Microsoft Press, including Microsoft Windows PowerShell Step-by-Step and the forthcoming Windows PowerShell Scripting Guide. In conjunction with his latest book, Ed has also put together the Windows PowerShell Scriptomatic, a scripting utility you can download from here. Ah, good question: what exactly is a Windows PowerShell Scriptomatic? Well, when you first load the PowerShell Scriptomatic you see a window very similar to this: Don’t let the small size and the clean, crisp interface fool you; Ed has packed quite a bit of power into this little package. For example, do this: click the dropdown list labeled WMI Namespace. When you do that, and in a matter of seconds, you should see all the WMI namespaces available on the local computer: Select the namespace root\CIMV2. Now click the dropdown labeled WMI Class. When you do that, you should see a list of all the dynamic WMI classes found in the root\CIMV2 namespace: Now pick a class; for example, choose Win32_BIOS. The moment you make a selection, the Scriptomatic writes a Windows PowerShell script designed to return all the information that the Win32_BIOS class can return: Nice, huh? But it gets even nicer. Click the Run button and the PowerShell Scriptomatic will start an instance of Windows PowerShell and run your script for you: See? We told you it would get even nicer. That’s the basic idea behind the PowerShell Scriptomatic: it makes it easy to create, run, and save WMI scripts written in Windows PowerShell. Of course, what would a PowerShell Scriptomatic be without some additional options? For example, if you click the first icon on the toolbar you can do such things have the data saved to a text file, an XML file, or a CSV (comma-separated values) file. Here’s another example of what the Scriptomatic can do. Click the option Use All Properties and the Scriptomatic will return all the properties of the WMI class. (By default, PowerShell typically returns only selected properties of a WMI class.) What’s that? You say you’d like to run this script against a remote computer? That’s fine; just type the name of that computer (or its IP address) into the text box labeled Target Computers and then click Refresh Script. Want to run this script against multiple computers? Again, no problem: just type the names of each computer into the text box, making sure to separate the computer names with commas (and with no spaces between those commas): Etc. Oh, and try this: click the third icon on the toolbar and select Display Class Properties, then click Refresh Script. Now take a look at your Scriptomatic window: in addition to writing a script for you, it also displays the properties of the selected WMI class: In other words, you now have an easy-to-use WMI browser. Very cool. 
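The script that appears in the lower pane is ordinary WMI code. The exact text the Scriptomatic generates may differ, but it is essentially equivalent to something like this (the computer names are placeholders):

$computers = "atl-ws-001","atl-ws-002"
foreach ($computer in $computers) {
    Get-WmiObject -Class Win32_BIOS -ComputerName $computer
}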
But don’t just take our word for it; download the PowerShell Scriptomatic and give it a try. Will this prove to be the best thing you’ve ever done in your life? Well, maybe. But, at the very least, it should be one of the 4 or 5 best things you’ve ever done in your life.
https://technet.microsoft.com/en-us/library/ff730935
CC-MAIN-2015-14
refinedweb
663
69.21
This is the 31st in a series of guest posts by Microsoft MVPs; this one comes from Visual Basic MVP Kathleen Dollard. Thanks, Kathleen! ETW takes responsibility for tracing control, doesn’t require you to restart your application, manages complexities like rolling buffers, and is blazingly, blazingly fast. Your application becomes one of many available providers in a rich and holistic tracing stream, so you can get an integrated view of what’s happening on the box. And, when EventSource targets a friendly listener like ETW, tracing data is strongly typed. For example, a trace might include an entry named “PrimaryKey” typed as an integer or Guid. Behind the scenes, EventSource accomplishes this via a manifest (XML you never see) that explains to other tools how to interpret the data. The design of EventSource encourages action-based tracing with its accompanying IntelliSense support. In action-based tracing, a specific method call indicates what occurred: LoadFile…(…), OpenConnection…(…), AccessPrimaryKey…(…), LoadJsonData…(…). In addition to predictable switches like severity level and keywords, there’s a common pattern of actions, the most important of which is Start/Stop, such as LoadFileStart() and LoadFileStop(). EventSource was introduced in .NET 4.5, so why do I think it’s so important in relation to .NET 4.5.1? The initial version of EventSource had an Achilles heel: common errors caused silent failures in tracing. Silent failure in the trace can lead to misinterpretation, and since you’re often using traces in time-critical scenarios (a euphemism for stuff just hit the fan), that can be a disaster. The new .NET 4.5.1 version of EventSource solves this problem by communicating failure in two new ways, without raising an exception. An exception in tracing would either crash your application, or would slow it down as you avoided crashing the app. If the .NET 4.5.1 version of EventSource encounters an issue and can’t output a trace entry, it outputs a zero event trace entry. EventSource identifies individual trace events via an integer, and by convention your EventIds start with one. Starting in .NET 4.5.1, if tracing fails, EventSource sends a trace entry with an EventId of zero and information about the failure. So, you can know that there’s a problem and avoid misinterpreting your traces. This EventSource class shows how easy it is to make the kind of mistakes that lead to zero event failures:

[EventSource(Name = "KadGen-ETWPreMan-TestEventSource")]
public class TestEventSource : EventSource
{
    // illustrates problems, do not copy and use
    public void LoadFileStart(string FileName)
    {
        WriteEvent(2, FileName);
    }

    [Event(1)]
    public void AccessByPrimaryKey(int PrimaryKey, string TableName)
    {
        if (IsEnabled()) WriteEvent(1, TableName, PrimaryKey);
    }
}

There are at least three things here that will cause tracing to fail. Since the first method does not specify its EventId via an attribute, the default is assigned based on its position in the file, so it has an EventId of 1. Since this doesn’t match the first value passed in its WriteEvent call, tracing fails. The second method fails for two reasons. Its specified EventId conflicts with the implicit EventId of the LoadFileStart method, so it is non-unique. But even if this is fixed, tracing fails because the parameters do not match between the method declaration and the WriteEvent call. The difficulty in guaranteeing that these kinds of problems do not exist in your EventSource code is the reason I think the new diagnostics for EventSource are so important, particularly the zero event trace entries.
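For contrast, here is a reworked version of that class with the mistakes corrected: explicit, unique event IDs that match the values passed to WriteEvent, and arguments passed in the same order as the method parameters. (This is our sketch, not code from Kathleen's post.)

[EventSource(Name = "KadGen-ETWPreMan-TestEventSource")]
public class TestEventSource : EventSource
{
    [Event(1)]
    public void LoadFileStart(string FileName)
    {
        if (IsEnabled()) WriteEvent(1, FileName);
    }

    [Event(2)]
    public void AccessByPrimaryKey(int PrimaryKey, string TableName)
    {
        if (IsEnabled()) WriteEvent(2, PrimaryKey, TableName);
    }
}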
Zero events are likely to become a well-known pattern, with ETW consumers blaring on about the failure. But for now, keep an eye out for these trace entries. The other scenario in which the earlier version of EventSource allowed silent failures was the EventSource constructor. Within the constructor, there’s a good chance that target listeners like ETW are not yet correctly hooked up, so the event zero approach could fail because no one was listening. The EventSource class gets a ConstructorException property in.NET 4.5.1. This property will contain any exception thrown by the constructor, with a null value indicating correct construction. The EventSource class in .NET 4.5.1 also includes a number of new methods and properties that include “ActivityId” in the name. The primary purpose of these features is to lay groundwork for future tools. As a complex operation is performed, many players may be involved – multiple threads or servers contributing to a single logical operation. ActivityId related features allow correlation of these actions – and the great news is that someone else can generally take responsibility for keeping it all straight! The new EventSource features in .NET 4.5.1 give you a solid tracing foundation and ETW makes it fast enough to turn on in production. Things popping up in the surrounding space - such as the NuGet version, TraceEvent, WPA support in the Windows 8.1 ADK, and SLAB - indicate how exciting the next few years will be for diagnostics. Look for more information on Vance Morrison's blog and coming up on my blog..
http://blogs.msdn.com/b/microsoft_press/archive/2013/10/31/from-the-mvps-eventsource-improvements-include-better-diagnostics-for-diagnostics.aspx
CC-MAIN-2014-15
refinedweb
832
60.75
MySQL Shell 8.0 (part of MySQL 8.0) MySQL Shell's JSON import utility util.importJSON(), introduced in MySQL Shell 8.0.13, enables you to import JSON documents from a file (or FIFO special file) or standard input to a MySQL Server collection or relational table. The utility checks that the supplied JSON documents are well-formed and inserts them into the target database, removing the need to use multiple INSERT statements or write scripts to achieve this task. From MySQL Shell 8.0.14, the import utility can. You can import the JSON documents to an existing table or collection or to a new one created for the import. If the target table or collection does not exist in the specified database, it is automatically created by the utility, using a default collection or table structure. The default collection is created by calling the createCollection() function from a schema object. The default table is created as follows: CREATE TABLE `dbname`.`tablename` ( target_column JSON, id INTEGER AUTO_INCREMENT PRIMARY KEY ) CHARSET utf8mb4 ENGINE=InnoDB; The default collection name or table name is the name of the supplied import file (without the file extension), and the default target_column name is doc. To convert JSON extensions for BSON types into MySQL types, you must specify the convertBsonTypes option. The JSON import utility requires an existing X Protocol connection to the server. The utility cannot operate over a classic MySQL protocol connection. In the MySQL Shell API, the JSON import utility is a function of the util global object, and has the following signature: importJSON (path, options) path is a string specifying the file path for the file containing the JSON documents to be imported. This can be a file written to disk, or a FIFO special file (named pipe). Standard input can only be imported with the --import command line invocation of the utility. options is a dictionary of import options that can be omitted if it is empty. (Before MySQL 8.0.14, the dictionary was required.) The following options are available to specify where and how the JSON documents are imported: schema: " db_name" The name of the target database. If you omit this option, MySQL Shell attempts to identify and use the schema name in use for the current session, as specified in a URI-like connection string, \use command, or MySQL Shell option. If the schema name is not specified and cannot be identified from the session, an error is returned. collection: " collection_name" The name of the target collection. This is an alternative to specifying a table and column. If the collection does not exist, the utility creates it. If you specify none of the collection, table, or tableColumn options, the utility defaults to using or creating a target collection with the name of the supplied import file (without the file extension). table: " table_name" The name of the target table. This is an alternative to specifying a collection. If the table does not exist, the utility creates it. tableColumn: " column_name" The name of the column in the target table to which the JSON documents are imported. The specified column must be present in the table if the table already exists. If you specify the table option but omit the tableColumn option, the default column name doc is used. If you specify the tableColumn option but omit the table option, the name of the supplied import file (without the file extension) is used as the table name. 
convertBsonTypes: true Recognizes and converts BSON data types that are represented using extensions to the JSON format. The default for this option is false. When you specify convertBsonTypes: true, each represented BSON type is converted to an identical or compatible MySQL representation, and the data value is imported using that representation. Additional options are available to control the mapping and conversion for specific BSON data types; for a list of these control options and the default type conversions, see Section 7.2.3, “Conversions for representations of BSON data types”. The convertBsonOid option must also be set to true, which is that option's default setting when you specify convertBsonTypes: true. If you import documents with JSON extensions for BSON types and do not use convertBsonTypes: true, the documents are imported in the same way as they are represented in the input file, as embedded JSON documents. convertBsonOid: true Recognizes and converts MongoDB ObjectIDs, which are a 12-byte BSON type used as an _id value for documents, represented in MongoDB Extended JSON strict mode. The default for this option is the value of the convertBsonTypes option, so if that option is set to true, MongoDB ObjectIDs are automatically also converted. When importing data from MongoDB, convertBsonOid must always be set to true if you do not convert the BSON types, because MySQL Server requires the _id value to be converted to the varbinary(32) type. extractOidTime: " field_name" Recognizes and extracts the timestamp value that is contained in a MongoDB ObjectID in the _id field for a document, and places it into a separate field in the imported data. extractOidTime names the field in the document that contains the timestamp. The timestamp is the first 4 bytes of the ObjectID, which remains unchanged. convertBsonOid: true must be set to use this option, which is the default when convertBsonTypes is set to true. The following examples import the JSON documents in the file /tmp/products.json to the products collection in the mydb database: mysql-js> util.importJson("/tmp/products.json", {schema: "mydb", collection: "products"}) mysql-py> util.import_json("/tmp/products.json", {"schema": "mydb", "collection": "products"}) The following example has no options specified, so the dictionary is omitted. mydb is the active schema for the MySQL Shell session. The utility therefore imports the JSON documents in the file /tmp/stores.json to a collection named stores in the mydb database: mysql-js> \use mydbmysql-js> util.importJson("/tmp/stores.json") The following example imports the JSON documents in the file /europe/regions.json to the column jsondata in a relational table named regions in the mydb database. BSON data types that are represented in the documents by JSON extensions are converted to a MySQL representation: mysql-js> util.importJson("/europe/regions.json", {schema: "mydb", table: "regions", tableColumn: "jsondata", convertBsonTypes: true}); The following example carries out the same import but without converting the JSON representations of the BSON data types to MySQL representations. 
However, the MongoDB ObjectIDs in the documents are converted as required by MySQL, and their timestamps are also extracted: mysql-js> util.importJson("/europe/regions.json", {schema: "mydb", table: "regions", tableColumn: "jsondata", convertBsonOid: true, extractOidTime: "idTime"}); When the import is complete, or if the import is stopped partway by the user with Ctrl+C or by an error, a message is returned to the user showing the number of successfully imported JSON documents, and any applicable error message. The function itself returns void, or an exception in case of an error. The JSON import utility can also be invoked from the command line. Two alternative formats are available for the command line invocation. You can use the mysqlsh command interface, which accepts input only from a file (or FIFO special file), or the --import command, which accepts input from standard input or a file.
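For example, piping documents in on standard input looks roughly like this; the connection URI, schema, and target collection name are placeholders, and the exact argument order for --import is described in the MySQL Shell manual:

cat /tmp/products.json | mysqlsh mysqlx://user@localhost/mydb --import - products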
https://docs.oracle.com/cd/E17952_01/mysql-shell-8.0-en/mysql-shell-utilities-json.html
CC-MAIN-2020-45
refinedweb
1,213
53.92
Migrations

Migrations are like a version control system for your database. Each migration defines a change to the database and how to undo it. By modifying your database through migrations, you create a consistent, testable, and shareable way to evolve your databases over time.

// An example migration.
struct MyMigration: Migration {
    func prepare(on database: Database) -> EventLoopFuture<Void> {
        // Make a change to the database.
    }

    func revert(on database: Database) -> EventLoopFuture<Void> {
        // Undo the change made in `prepare`, if possible.
    }
}

If you're using async/await you should implement the AsyncMigration protocol:

struct MyMigration: AsyncMigration {
    func prepare(on database: Database) async throws {
        // Make a change to the database.
    }

    func revert(on database: Database) async throws {
        // Undo the change made in `prepare`, if possible.
    }
}

The prepare method is where you make changes to the supplied Database. These could be changes to the database schema like adding or removing a table or collection, field, or constraint. They could also modify the database content, like creating new model instances, updating field values, or doing cleanup. The revert method is where you undo these changes, if possible. Being able to undo migrations can make prototyping and testing easier. They also give you a backup plan if a deploy to production doesn't go as planned.

Migrations are registered to your application using app.migrations.

import Fluent
import Vapor

app.migrations.add(MyMigration())

You can add a migration to a specific database using the to parameter, otherwise the default database will be used.

app.migrations.add(MyMigration(), to: .myDatabase)

Migrations should be listed in order of dependency. For example, if MigrationB depends on MigrationA, it should be added to app.migrations second.

Migrate

To migrate your database, run the migrate command.

vapor run migrate

You can also run this command through Xcode. The migrate command will check the database to see if any new migrations have been registered since it was last run. If there are new migrations, it will ask for a confirmation before running them.

Revert

To undo a migration on your database, run migrate with the --revert flag.

vapor run migrate --revert

The command will check the database to see which batch of migrations was last run and ask for a confirmation before reverting them.

Auto Migrate

If you would like migrations to run automatically before running other commands, you can pass the --auto-migrate flag.

vapor run serve --auto-migrate

You can also do this programmatically.

try app.autoMigrate().wait()

// or

try await app.autoMigrate()

Both of these options exist for reverting as well: --auto-revert and app.autoRevert().

Next Steps

Take a look at the schema builder and query builder guides for more information about what to put inside your migrations.
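As a concrete illustration, a migration that creates a table with Fluent's schema builder might look like the following sketch (the "todos" schema and its fields are invented for the example; see the schema builder guide for the real API surface):

struct CreateTodo: AsyncMigration {
    func prepare(on database: Database) async throws {
        // Build the "todos" table with an id plus two required fields.
        try await database.schema("todos")
            .id()
            .field("title", .string, .required)
            .field("done", .bool, .required)
            .create()
    }

    func revert(on database: Database) async throws {
        // Dropping the table undoes everything prepare() created.
        try await database.schema("todos").delete()
    }
}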
https://docs.vapor.codes/fluent/migration/
CC-MAIN-2022-21
refinedweb
449
57.06
As we start on our journey of discovery, let's do something easy, such as changing our desktop background depending on what the weather's doing. There are already a number of desktop apps and widgets that provide this service, so this data should be freely available on the internet somewhere. Sure enough, a quick search for "weather API data" provides a whole host of links. Our criteria are: the API must be easy to understand, cover as much of the world as possible and have some reasonable documentation. This rules out a couple of promising sources, mostly on the documentation front - yes, it's possible to spend your time and effort making sense of the data or how the API works, but why would you if someone else provides both the data and a guide on how to use it? After much fiddling about, it seems that Yahoo's weather service fits our needs well. It's pretty straightforward, and has enough documentation to get us started without much effort. A bit of digging around yields, which provides plenty of detail and some examples of how to use the service. Bonus points to Yahoo! Determining location information is as simple as using Yahoo's online service and reading the URL it provides. The Yahoo method of working is to append a location identifier to the end of a URL. The service will then provide an RSS feed of the weather data for that area. This is useful in some respects, because it means we can try it out without actually writing any code. It should also be easy enough to find the location code - the documentation suggests just going to the main weather page, typing in our city and taking a good look at the URL it takes us to. "Bath, GB" takes us to, so our location code is UKXX0637. For some reason, this comes up as Avon Park in Yahoo (perhaps that's the weather station's name?). That oddness aside, we now have our location information. The instructions also say that we can append a value to get temperatures in Fahrenheit or Celsius, so we've chosen to add the Celsius option u=c to our URL. We can now test this URL to see that it works, still without writing any code. A good browser, such as Firefox, will make a fair effort at rendering the RSS feed for us, so all we need to do is enter the URL to direct us to the feed:. We can check that our submitted URL produces an RSS feed in Firefox, which does a good job of rendering it too. Okay, so now that we know the information we want is available as a convenient RSS feed, how can we get that into a Python script and decode it? If you open the above URL in Firefox and choose View > Page Source from the menu, you will see that there's quite a lot of data there, plus all the headers and so on. We could construct a parser that would take this raw information and spit out the bits that we want - but, as is usually the case, someone has already done it. There's a library module for Python called Feedparser that will build a Python object out of an RSS stream for us to fiddle with. You're best off installing this module through your usual package manager, since things can get a bit messy otherwise. You can, of course, write scripts and applications to crunch web data in pretty much any language, so why are we using Python over C#, for instance? There are several very good reasons.
Python is a simple and straightforward language that is easy to write code in and, more importantly perhaps, easy to understand when you read code. It has excellent capabilities for using and manipulating text (which a lot of our data is going to be), it is cross-platform and there are a huge number of helper libraries available for all sorts of web services and protocols. Using Python and a few libraries, you should be able to bash out a working application or script in no time. With Feedparser installed, it may finally be time for some coding. Run Python from a shell first, so we can see what we're dealing with. You will be taken to the interactive Python shell where we can start by entering: >>> import feedparser >>>>> data = feedparser.parse(url) >>> data The result of this last line is a long spewing chunk of characters, which represents the feed. Fortunately, although it looks like a collection of random terms strung together with a lot of brackets, it is in fact a reasonably well-structured object. To prove it, try entering this in your Python shell: >>>for x in data : ... print x ... feed status version encoding bozo headers etag href namespaces entries This simple loop runs through the objects contained in the data object. One of the great things about Python is that it is very easy to manipulate objects, and even get them to tell you a little bit about themselves. For example, here we might want to know exactly what type of object we're dealing with: >>>type(data) <class 'feedparser.FeedParserDict'> Well, that helps us out a little. You'll probably have come across a dictionary object if you've used Python before - it simply stores data as key = value pairs. The objects we got listed out before are the keys in this case. If we browse through the Feedparser documentation, we'll get a bit more background on what exactly these keys hold, since they're common for the Feedparser object. The important one for our purposes is the entries key. This contains a Python list object of the individual feed entries, which form the actual content of the RSS feed. Python lists start with an index of 0, so to reference the object of the first entry, we would use: >>> data.entries[0] {'updated': u'Wed, 1 Apr 2009 12:50 am BST', 'yweather_condition': u'', 'updated_parsed': ... >>> for x in data.entries[0] ... print x ... updated yweather_condition updated_parsed links title summary_detail geo_lat summary guidislink title_detail link geo_long yweather_forecast id Once again we have a dictionary object with key and value pairs. This time they're defined by the XML structure of the feed itself, so there is no module blueprint for this object, although the items are documented on the Yahoo site. After a bit of investigation, it seems that the summary is going to be the most useful to us, because it contains the current conditions including the temperature. The slight problem is that the data we need isn't nicely contained in just a single field. This particular slice of the feed is formatted as HTML for rendering on a web page, but we just want the text. There's no place to hide from regular expressions (also known as regexes) - these will become increasingly necessary in your quest to mash up the world. A regular expression is just a sort of supercharged way of doing search and replace, and although it looks like an arcane form of glyph-based art, it isn't that difficult to understand. In this case, all we want to do is remove all those unpleasant HTML tags. 
Although we could probably extract the data we want without going to this trouble, it will become useful later on if we want to adapt this script to handle different sources. We can import the Python re module (it's included as one of the standard libraries, so there's no need to download anything this time) and put it to work on that text. We don't have the space in this tutorial to explain in detail how regular expressions work, but check out the Regular Expressions box below for more information. Regular expressions crop up all over the place. They can be pretty simple, or fiendishly complex. In short, a regular expression is a group of symbols that represents a particular grouping of characters in a string. There are also special characters, such as full stops that can match with any character. In addition, there are lists, groups and even operators, so matches can be grouped together - you can probably come up with a regular expression that covers all the bases for searching. When you're starting out with building patterns, it's best to use some sort of tool to help you check that you're matching patterns correctly - a single misplaced character is all it takes to cause a catastrophe! One of the better tools is the online regex builder at. Paste in some sample text and test out your pattern-matching skills. You may also like to visit the regex documentation at. Regular expressions aren't anyone's idea of fun, but using the service will help. For our expression, we want to match anything enclosed in the 'greater than' and 'less than' signs used as markers for HTML tags. This is pretty easy; the expression would be a <, followed by a pattern for any character, repeated any number of times, then a .+? and > to end. The +? match is the same as +*, but it's a lazy match, meaning it will match the shortest valid string, which is what we want - everything is between <>, so we'd be left with nothing otherwise. >>>summary = data.entries[0].summary >>>import re >>>>>temp = re.sub(pattern,'',summary) >>>temp u'\nCurrent Conditions:\nHaze, 13 C\nForecast:\nThu - Rain. High: 14 Low: 9\nFri - Light Rain. High: 12 Low: 7\n\n Full Forecast at Yahoo! Weather\n(provided by The Weather Channel)' So, now we have the text without the HTML tags, but it still has line breaks. We could use the standard string module to split this into a list, but as we already have the re module loaded, we might as well use that. To search for the new line character, we'll have to tell Python that we want to use a raw string value by placing an r at the front of the string. >>> temp = re.split(r'\n',temp) >>> temp [u'', u'Current Conditions:', u'Haze, 13 C', u'Forecast:', u'Thu - Rain. High: 14 Low: 9', u'Fri - Light Rain. High: 12 Low: 7', u'', u'Full Forecast at Yahoo! Weather', u'(provided by The Weather Channel)'] >>>temp[2] u'Haze, 13 C' As you can see, we have made the assumption that the third element of the resulting list will contain the string we want. To extract the temperature from this, a further regex could be used to match just numbers in that string. >>>temp = re.findall(u'[0-9]+',temp[2])[0] >>>temp u'13' >>>>>> temp = int(temp) >>> temp 13 A final step would be to use Python's built-in type conversion to change the string containing the temperature value into an integer for easy comparison. In a real-world script we might combine some of these stages for efficiency, but this is a fairly low overhead application and it does gain some clarity from being broken down into steps. 
Now all that remains is to fulfil our initial promise of changing the desktop background depending on the weather. How will we change the background? Well, we'll just have Python call an external command to do it. For a Gnome backdrop, you can change the image by calling the command gconftool-2, which sets the environment variable that holds the location of the desktop wallpaper filename. That's all we need to do. However, because we will have several variations, it makes sense to turn this into a function. A simple function isn't going to tax your brain too much. In Python there's a simple statement to define the function and its parameters, followed by indented text. And yes, you can even construct these within the interactive shell:

>>> def change_wallpaper(filename):
...     cmd = string.join(["gconftool-2 -s /desktop/gnome/background/picture_filename -t string \"",filename,"\""],'')
...     os.system(cmd)
...
>>> change_wallpaper('plop.jpg')

The first line of the function proper constructs a shell command, while invoking the os.system call executes it. When we call the function, the environment variable is changed and the new image is loaded. The filename we have used here is just a dummy - in reality, it would be a good idea to store your images somewhere accessible in your home folder, such as a directory called weather, and name them something easier to match up with the conditions. With Gnome, you can now use SVG images as wallpaper, which means you can create some nice crisp, scalable graphics for your desktop. Now, we're not sure about you, but we're thinking there should be five images - freezing, cold, mild, warmish and hot. Please adjust for your geographical location accordingly, but we're going to assign these to the following temperature ranges: below 0C is freezing, 0-8 is cold, 9-15 is mild, 16-25 is warmish and above that is hot. In some languages, you might have a case/switch construction to deal with this. Python doesn't have one, so we'll just have to use a bank of if/elif/else statements as follows (the image filenames are examples - use whatever you named your five wallpapers):

>>> if temp < 0:
...     change_wallpaper('freezing.svg')
... elif temp <= 8:
...     change_wallpaper('cold.svg')
... elif temp <= 15:
...     change_wallpaper('mild.svg')
... elif temp <= 25:
...     change_wallpaper('warmish.svg')
... else:
...     change_wallpaper('hot.svg')
...

If we now wrap that all up in a script, you'll get something like this. All you need to do is supply the images.

#!/usr/bin/python
# -*- coding: utf-8 -*-
import feedparser,re,os,string

def change_wallpaper(filename):
    cmd = string.join(["gconftool-2 -s /desktop/gnome/background/picture_filename -t string \"",filename,"\""],'')
    os.system(cmd)

# the Yahoo forecast RSS URL for your location code goes here
url = ""
data = feedparser.parse(url)

# extract the summary from the data
summary = data.entries[0].summary
temp = re.split(r'\n',re.sub('<.+?>','',summary))
temp = int(re.findall('[0-9]+',temp[2])[0])

# pick a wallpaper to match the temperature (filenames are examples)
if temp < 0:
    change_wallpaper('freezing.svg')
elif temp <= 8:
    change_wallpaper('cold.svg')
elif temp <= 15:
    change_wallpaper('mild.svg')
elif temp <= 25:
    change_wallpaper('warmish.svg')
else:
    change_wallpaper('hot.svg')

Of course, this is just a little script and not a full-blown application by any stretch of the imagination, but it could be the basis of one. What we've achieved here is taking data from one place on the web and placing it automatically into our desktop context. We've seen how a simple RSS feed works and how to manipulate objects in Python. There were some scary regular expressions, and we saw Python hooking into the OS calls to execute external commands. These are all things we can build on as we explore the world of web services and bend them to our will. You could easily extend this little script, maybe by allowing user selection of the location, or by turning the whole thing into an applet. At the moment, it will fail if it can't reach the internet, which isn't ideal, but we'll learn more tricks for that in future tutorials. Be there! Who wouldn't like a lovely SVG-based wallpaper of pretty snowflakes? This can be yours!
We've established that a number of web applications exist that can serve up helpful data to us. But you don't really expect it to be that easy, do you? There are several different protocols or ways in which they like to provide this data, and in some cases (such as Flickr) they actually provide more than one. To further confuse things, they are sometimes used inconsistently across sites. In the coming months we'll work our way through the top protocols, so stay tuned!

First published in Linux Format magazine

Your comments

SVG Wallpapers!? Garrett - July 1, 2009 @ 4:32pm I want a snowflake one!

retired harry xray - July 4, 2009 @ 7:58pm How about a foggy drizzle look... Working on this project has been interesting and a challenge It would help if the graphics were available for down load This would be helpful following the programming Harry

Brrrrrrrrriliant. Fair weather Penguin - July 23, 2009 @ 8:37pm Very interesting and clever, gives me some great ideas. Just a pity it ain't in Perl.
http://www.tuxradar.com/content/code-project-use-weather-wallpapers
CC-MAIN-2017-09
refinedweb
2,700
69.92
… because... One typo (maybe): "We can visualize what the object hierarchies would look like at runtime with the following diagram." Here you mean "at design time", right? Question.. I have a base class in App_Code that extends the UI.Page class, and so I created a .master page, and on the content page, I changed BOTH the Inherits attribute of the aspx to the new extended base class name, AND the class declaration (class Default3 : MyExtendedPageClass) but I get: "Make sure that the class defined in this code file matches the 'inherits' attribute, and that it extends the correct base class (e.g. Page or UserControl" Should I NOT change the class declaration? and one more while I got you interested... Can you use multiple user controls programatically on a Master page itself? (not the content page) and what is the correct way to do it?? I hope you can help me out. Thanks!! GREAT article BTW Make sure you also add the CodeFileBaseClass attribute in your @ Page directive and point it to that base class. Yes, you can programatically use user controls on a master page. I wouldn't treat the master page any different than a regular Page, all of the same examples apply. Good Article. I've been fumbling with findcontrol and masterpage's for a while. I decided to use the property get set instead, much cleaner. I'm curious though and forgive me if I missed this or if this is a stupid question but is there a way to expose all of the controls properties as opposed to just the specific property, when you call it? Thanks, Jon -- I completely and thourgh;y hate the Name Mangling when using master pages. It makes it damn near impossible... or highly impractical to create sanely named CSS outside of using themes. This becomes twice as bad when you have developers who do the code, and designers who do the CSS, and you have a predefined Id naming conventions. The Id the designers see in the .aspx source isn't what you get in the output, obviously makeing CSS mistmatch the expectations. It's why we're not using master pages at all. I can't think of an easy way to expose all of the controls as properties (without writing all the code, that is). Perhaps some sort of macro or VS add-in could do that by generating code. Chris: I understand your problem. I guess it is a trade-off the ASP.NET team made (easier for postback resolution, harder for css and javascript). Perhaps it's something they could provide a workaround for in the future. Hope this helps, Scott Nice, thorough, well thought out. One of the things I appreciate about your tech writing style is that you just don't say, "this is how you do it..." You give different examples of how it can be accomplished, and then revisit each scenario and point out the pros and cons of each and expanding on the best direction to go. That style is nothing short of value added! Many thanks for your continued efforts in contributing to edumacation of us knuckleheads out here! :) Well even if I can't define the entire control it's still a better solution then working with findcontrol. I was pleased to find out I could issue a Atlas UpdatePanel Update command within the property get set routine to show the new changes, since a have a update panel in my master page. It works like a charm. I agree with Ryan about your tech writting style. Keep up the good work!!! 
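For readers skimming the thread: the "property get set" approach mentioned above usually looks something like this (the control, property, and path names are only examples, not lifted from the article):

// In the master page's code-behind, wrap the control in a public property
public string FooterText
{
    get { return FooterLabel.Text; }
    set { FooterLabel.Text = value; }
}

// In the content page's markup, get a strongly typed Master reference
<%@ MasterType VirtualPath="~/Master1.master" %>

// Then the content page can set it without FindControl
Master.FooterText = "Produced by the master page";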
I have read your great article and i have question for you :) if you can please help me., i try to implement you code esp "Master Page To Content Page Interaction" i have setup as you have described in your web site and i always get "null" value here: i'm working VS2005: protected void SendEmailButton_Click(object sender, System.EventArgs e) { SendEmailEventArgs eventArgs = new SendEmailEventArgs(this.EmailAddressBox.Text); if (SendEmail != null) <<always null value { SendEmail(this, eventArgs); } } if you want to see the whole page, please let me know and ii can post here or email you whichever is easy for you., hope to hear from you soon. thanks Content Page To Master Page Interaction? thank you The event will be null until someone subscribes to the event. Typically the page would subscribe to the event (SendEmail += new SendEmailEventHandler(...)) I find it's behavior so annoying I created a simple routine to find any control within any other control. This uses recursion to return the FIRST control matching the supplied name. If the control is not found it simply returns null, instead of raising an exception. Let me know what you think. public static Control Find(Control C, String ControlName) { if (C.ID == ControlName) return C; foreach (Control c in C.Controls) { Control cntrl = Find(c, ControlName); if (cntrl != null) return cntrl; } return null; } Thanks! Quick Question- Any idea on how to create 'Menu round shaped multiline Tabs'in webforms (Asp.net 2.0) similar to 'Tabcontrol' now available in winforms (.net 2.0). Any available references? My Email- [email protected] Thanks Vani [email protected] BaseMaster footer = new BaseMaster(); footer.FooterText = "My Footer"; in the contentpage in Page_PreInit, or page load or in a LinkButton1_Click, bombs. Any idea what I am missing? I can access and set other properties in the Base Class, no prob but the setting I lifted from your article that starts with “protected Label FooterLabel;” fails. Thanks again for a great article I will be working on it for a while. Ed Your article is very impressive. I have a question though. My case is as follows: I use a master page which has 4 main ContentPlaceHolders (header, footer, subNavigagion and Main content) The content pages (child pages of this Master Page) will, at Page_PreRender event load different user controls into this place holders, based on a selected language (up to 7 different languages) and reading from a global resource file (items.resx, items.fr.resx, etc.). What happens is that the user controls, such as dropdowns or checkboxes, buttons etc… trigger the autoPostBack event but do not go into the actual Click event. For instance: a dropdown will look like this inside the UserControl: Protected Sub lstAttributeName_SelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles lstAttributeName.SelectedIndexChanged End Sub But at runtime, it goes into the pageLoad event for that UserControl and it NEVER reaches SelectedIndexChanged method. HOW can I make this dropdown to actually go into this method? I’ve been spinning my wheels on this for a while now. Please help me if you can. Much appreciated. Diego Beltrán You asked for comments about mixing VB and C#? I have to say, I found it annoying. Trevor Any suggestions? This would be awesome because I could dynamically swap out the MasterPage including the parent of all nested MasterPages. Thanks, Tyler Great article - thanks for taking the time to wrote it. 
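On the question above about swapping the master page dynamically: the master has to be chosen before the page's Init phase, so the usual pattern is to set MasterPageFile in Page_PreInit (the file name and the condition here are placeholders):

protected void Page_PreInit(object sender, EventArgs e)
{
    // Pick a different master before the page and master merge their control trees
    if (Request.QueryString["layout"] == "alt")
    {
        this.MasterPageFile = "~/AlternateSite.master";
    }
}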
I have a particular problem which I don't seem to be able to solve so I was hoping you could offer some insight. I have a common baseclass called TemplatePage which I use for all my pages which is derived from System.Web.UI.Page. Each page in my site in turn derives from TemplatePage. How can I reference my MasterPage from the TemplatePage class? I get an error at present which states that the master page doesn't exist. I assume that this is because my base class is in the App_Code directory and so the MasterPage class isn't available. Any advice would be very appreciated I tried to read it all in one go, but my brain popped, almost as if I'd been struck. I vow to return, and bring my friends, rough it up a bit, and send it on its way with no lunch money. James: You've identified the problem. Because App_Code compiles before any .aspx and .master files the base page class won't see any master page types. You could define a base class for your master page, too, and the base page class could talk to the master through that base class (an interface also works). We've decided instead to use Web Application Projects instead because of these problems. Thanks again for your help protected new MyMaster Master { get { return (MyMaster)Master; } } This had been working great but fails in the nested scenario. I can set the MasterType in the page file but I liked doing this in one spot in the base class for all pages. By the way this caused a nasty stack overflow. Your discussion of using HttpModules to hook into events from an ASP.Net page was very thorough and well organized. I'm a repeat customer to your blog. Keep up the good work. Pryia: I'd have to see some code. If you could email me a sample that demonstrates the problem, that would work best. Thanks, Priya Gr8 article!!! I was just trying to figure out a better way of including JS scripts in content pages, your article helped me in figuring that out & understanding the limitations of master pages. -Vik I have been racking my brains out on this for days... I have added a menu control to a master page. The only issue I have is when you click on menu items which have navigateURL assigned (as in site maps) the page changes to the URL specified, and you can not tell which menu item was clicked. I had no problem with this when I used LinkButtons for navigation (on every page) since I could always access them from various pages and change their color. Of course, I only care about this behavior at the static menu level only. I am using a dynamic menu based upon the contents of a SQL SiteMap table. Any help or suggestions would surely be well appreciated. Thanks. P.S. Why do I see no examples of anyone trying to manipulate the menu control on a master page? I am having *major* difficulties using the FindControl to access a Button WebControl. I'm using nested master pages, so I've got the following objects I'm working with: phMain - the ContentPlaceHolderID on my main Master Page, titled ABSOLUTE_main.master phAdminContent - the ContentPlaceHolderID on my nested Master Page, titled ABSOLUTE_admin.master btnAdjust - the Button WebControl that resides on my content page The issue centers around my nested master pages. I did as you mentioned by calling everything out in a single line of code; I used the following (in C#): WebControl button = (WebControl)Page.Master.FindControl( "phMain" ).FindControl("phAdminContent").FindControl("btnAdjust"); This throws an unhandled NullPointerException, which is on the button "btnAdjust". 
The button *does* exist, as this line of code (using the hairy munging naming convention) will, in fact, find my button: WebControl button = (WebControl)Page.FindControl("ctl00$ctl00$phMain$phAdminContent$btnAdjust"); That works...as ugly as it is, but I cannot work this way. I've got other controls that I need to "Find" on the Page, and this guessing game of what the control will ultimately be named is both maddening and impossible. I decided to simplify my code for grins (try the ol' outward-in trick) by simply seeing what would return from this statement: ContentPlaceHolder content = (ContentPlaceHolder)Page.Master.FindControl( "phMain" ); Guess what I get??? A null value! Why would I receive a null value on that line of code??? My expectation was to receive an object that I could dissect and eventually work my way into. A null value isn't acceptable. I decided to go the other way, thinking that maybe the nested master page resolves first, as you suggest in the article. I did the following, by itself: ContentPlaceHolder content = (ContentPlaceHolder)Page.Master.FindControl( "phAdminContent" ); Any guesses what value came back?? A null. I'm at a serious loss here. What can possibly be going wrong? I can send you anything you need...I'll send it. :) I am in an extremely dire situation here. I cannot continue with my project until I get this worked through. Why did I ever attempt to use Master Pages???? Thank you for the article (it was very informative) but it didn't spend enough time on the perils of nested pages. Or, maybe it did, and I just need to stay away from master pages entirely. :) J.P. [email protected] J.P: That does seem like odd behavior. If you want to send me a sample that compiles and demonstrates the problem and I can probably take a look: scott @ OdeToCode.com. After contemplating hara-kari (j/k), I continued to poke and prod at this problem. I thought back to something you said ("the page will resolve from the inside out"). My nested page (ABSOLUTE_admin.master) would resolve itself and dump all of its constituent controls into its contentplaceholder, then the main master page (ABSOLUTE_main.master) would resolve, and all of the controls from the nested master page would be dumped into the outer-most master page. THEN, the outer-most master page would dump all of ITS controls into the content page....and away I go. After thinking about this a bit, I wondered what would return if I tried the following code: Object obj = Page.Master; Lo and behold, it was a reference to my nested master page (ABSOLUTE_admin.master). This made sense to me, based on what you said about the resolution timing of all elements. O.k., so I thought a bit....what happens here: Object obj = Page.Master.MASTER; And, as expected, I got a reference to the outer-most master page (ABSOLUTE_main.master). O.k., so now, with all fingers and toes crossed, I tried the following code and prayed and prayed: WebControl button = (WebControl)Page.Master.MASTER.FindControl( "phMain" ).FindControl("phAdminContent").FindControl("btnAdjust"); And it worked!!! I think the main thing I have to keep in mind when working with nested master pages is that you ALWAYS have to move to the outer-most level...get a reference to the outer-most ContentPlaceHolder first, then drill into it to get what you need. So, in my case, I needed the master OF THE MASTER to begin my search. And, it makes total sense...you need to start with the FindControl() method at the page level, then drill into each container to get what you are after. 
I just need to keep in mind that the ContentPlaceHolders in the Master Page environment are the outer-most containers...always. :) <insert huge sigh of relief> I sincerely thank you for writing this article, articulating the pitfalls of Master Pages. My Professional ASP.NET 2.0 book from WROX is very well-written, but only introduces you to Master Pages. I was able to get my pages working, but the issues I've been facing have been long and difficult. I believe your article will come in handy down the road to side-step these landmines. Thanks so much! J.P. Great article, and timely too. I will be back. In the article, you mention using a HttpModule to set the Theme for all pages. Couldn't the same be done in the global.asax in the PreRequestHandlerExecute event? This would save having to configure the web.config for the application. Thoughts? Steve That's true, it's just I've seen global.asax file grow out of control with little bits of functionality here and there, so proceed with caution :) Consider this code: MasterPage 1: <asp:ContentPlaceHolder </asp:ContentPlaceHolder> Some Html... <asp:ContentPlaceHolder </asp:ContentPlaceHolder> MasterPage 2: <asp:ContentPlaceHolder </asp:ContentPlaceHolder> Some Html... <asp:ContentPlaceHolder </asp:ContentPlaceHolder> Page: <asp:Content </asp:Content> <asp:Content </asp:Content> <asp:Content </asp:Content> This will issue an error saying that the content is pointing to a cph that doesn't exist (which is a logical error). However, I'm building multibranded sites that will need such an approach! I tried to find ways to catch the error and ignore it, also, tried, in a HttpModule, to find the Content that doesn't have an equivillent CPH to remove it from the page but that didn't work! The problem is that you can't catch the Content because it doesn't really exist! Do you have any solution or an alternative approach (not by using IF ELSE) for this problem? Regards, Adam Best "Get under the hood" Master-Mechanic article on MasterPages on the net. Good work. Appreciate it! Your guide touches on sharing masterpages across multiple apps. Your first suggestion seems the most appropriate although I cannot get it to work. Just a "Masters/Master1.master" file does not exist error. I created the virtual directory for the web app using the masterpage app through the IIS admin. Is that what you meant? Thanks In the meantime, I would be most grateful if you could answer a question that I posted to a newsgroup by did not receive a satisfactory answer: groups.google.com/.../eb862a9635f5ad3e The only solution i could think of was to pass the server rootpath into a javascript variable and then use this in the javascript file... <script language=JavaScript> var rootPath = '<%= ResolveClientUrl(".") %>' + '/'; </script> I added this to the MasterPage. I have a question for you. I have a master page which takes care of tracking page views etc. I then have a content page which varies based on the query string. I want to have the code in the master called but cache the page content as neccessary. I can cache the content of the page in a user control and that is fine. What I can't (yet) do is somehow cache the html header section (ie meta tags for keywords, description, and page title), without caching the WHOLE page. Right now the first request has the updated header but all subsequent (cached) pages revert to the master page's version of the header. Do you have any idea how I could fix this? 
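For the global.asax suggestion a few comments up (setting the Theme in PreRequestHandlerExecute instead of a dedicated HttpModule), a minimal sketch could look like the following; the theme name is hard-coded here only for illustration, and in practice you would read it from configuration.

// In Global.asax: the equivalent of the HttpModule approach, without
// registering anything in web.config.
void Application_PreRequestHandlerExecute(object sender, EventArgs e)
{
    // Only pages have a Theme; other handlers are ignored.
    System.Web.UI.Page page = HttpContext.Current.CurrentHandler as System.Web.UI.Page;
    if (page != null)
    {
        page.PreInit += delegate
        {
            page.Theme = "StandardTheme"; // hypothetical theme name
        };
    }
}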
Thanks John thank you very much Fabiano Arruda [email protected] i am having one master page and 3 content pages are there.In the content page1 i am giving refernce in the aspx page of remaing 2 pages.Its working fine but "The issue is since each page is compiled to a different assembly it does not know the referes and hence cannot reference it." what the next step i had to proceed? plz clarify ASAP. thks Great article, now I actually understand at least a little bit of masterpages! Meta tags is the only trouble we've had with Master pages, which was an oversight on Microsoft's part. Anyway, I tried to implement as you've outlined in the article, to no avail. Here are my steps: 1) Create a new Class file in App_Code that inherits from Page (I've called this MetaPage) 2) That class contains your code and seems to compile OK 3) I then changed an individual ASPX codebehind page to inherit from that new MetaP MetaPage instead. I did not touch the Master file, as your example also did not seem to touch the Master file. Any guidance is appreciated. :-) That is the Visual Studio validation being overly zealous. The project should still compile and run, it's only Visual Studio that complains because it doesn't know about the property. Let me know, No, it neither compiled nor ran. Does the .master also have to inherit from the MetaPage class or just the Page class? Do you have a working sample project that implements this perchance? I have an alternate solution that doesn't do page directives instead using a Page_Load that calls the Master, but my graphics designers don't do/understand code-behind code, but could deal with page directive codes. They are the ones frequently tagging these, so your solution is the most elegant... if it would work. Thanks! :-) ARe you using code-beside files (CodeFile="")? If so, use a CodeFileBaseClass attribute also. The CodeFileBaseClass attribute will contain the name of your base class. I'll be posting a sample tonight or tommorow, stay tuned. I need to find out what page has been loaded as a child of the master page. Reason, I have a few menus that are on the master page which are in panel containers and I only wish to make certain panels visible depending on which pages are selected. So, I need to check from the master page what child page is active. Thanks in advance. 1 js file location itself has to be resolved in sub-folders 2 url of dialog called in a javascript function requirs resolution (2) can be resolved as follows in code-behind. if (!Page.ClientScript.IsStartupScriptRegistered("JavascriptSiteNamePrefix")) Page.ClientScript.RegisterStartupScript(typeof(Page), "JavascriptSiteNamePrefix", @"<script language='javascript'> var rootPath = '" + Request.ApplicationPath + "/'; </script>"); Thanks in advance Great article. Two quesitons. 1. any thoughts on how to get a background image into a div tag on a master page? (that works from aspx pages in subdirectories). 2. How to you format your code on your blog? Thanks! -Peter 1) I'd set the background image using CSS and keep the image and style sheet in the theme directory. 2) I use Jeff Atwood's code formatting macro: I'm not quite understanding what you mean by placing the image in the theme directory. I put the jpg in my directory called "MyProject/App_Themes/ThemeNormal" and made my background in the css this: <div id="header_image" style="background: transparent url(bg_header.jpg) repeat-x scroll 0% 0%;cursor: pointer;"> I'm still not getting the image. Sorry for being a little thick. 
-Peter A web form with a div like: <div class="myDiv"> .... </div> and the web form sets Theme="SomeTheme" in the @ Page directive. Inside of the SomeTheme directory is a .css file with the following: .myDiv { background-image: url('example.jpg'); } The .jpg file is also inside the themes directory. This does work for me as the browser will request the .jpg file relative to where it fetches the style sheet from, so even with web forms in different sub directories the path is valid. Let me know if that works / makes sense. Getting a little fuzzy here late at night. I have blogged a tidy solution to including javascript files in a consistent manner. extraview.co.uk/... thanks for a great article! When implementing the HttpModule i found that my login page crashed. This page is a stand alone page and does not belong to any MasterPage. I resolved it by doing a null check: Page page = sender as Page; if ((page != null) && (page.Master != null)) { ... } Just thought I should share this... /Arne (Sweden) Thanks, it helped. One note: in Javascript opened windows, the way to address controls on the opener also changes; I didn't figured it out yet though. Great article! I have some problems though with my master page. The master page has a backgroung image and a login.aspx as the startup content page. When i ran the project, the image is not displayed in the startup page (login.aspx) but when login is successful and control is transferred to my default.aspx the image is displayed. In my web.config the authentication mode="forms". Whenever i change this to "windows" the project works fine. How can i display images in the master page when the "authentication=forms" in web.config ? Hope you can help me. Thanks. Thanks a million! @Ricky: Check this post by ScottGu: weblogs.asp.net/.../437027.aspx Great article! The best one on this topic. I have a question that was not covered in it though. In one of the content pages I have the following (simplified) setup - runat="server" omitted, etc.: <asp:Content <asp:TextBox </asp:Content> <asp:Content <asp:GridView <asp:ObjectDataSource ...> <SelectParameters> <asp:ControlParameter ControlID='<%= GetControlID("data", "tb") %>' .../> </SelectParameters> </asp:ObjectDataSource> </asp:Content> The problem is that GetControlID method is never called in this scenario and request is failing. Obviously, the ControlID parameter for the ControlParameter control is set before... I've been banging on this problem for quite some time and couldn't find a solution. Any help would be greatly appreciated. It would be also very interesting to hear your recommendations regarding inter-contentPlaceholder controls interactions. Thanks again for the superb article! Michael I am using a custom control dervied from compositecontrol class. I have a button in this control which fires an event. When I use this custom control in a normal aspx page it works fine, but in master page the event is not getting fired. Could you help? Error 1 Error parsing attribute 'metakeywords': Type 'System.Web.UI.Page' does not have a public property named 'metakeywords'. c:\inetpub\wwwroot\showMachines2.aspx 1 What can I do to fix this ?? Thanks, Mario We put all this time into figuring out how to use something that is supposed to save us time. You have to ask yourself: was it worth it? Perhaps another excellent article called "Master Pages: Use them? or Avoid them?", would be in order to address the type of site it is worth using them on. Thanks! 
ED I am trying to use the "Define a custom SendEmail event, and let each page subscribe to the event" but having problems to get it to work. Is it possible to display the entire sample here for downloads? I have the following code in my Content's page but keep getting this error message "No overload for 'EmailReport' matches delegate 'System.EventHandler'" . Can you please tell me what I am doing wrong? Thanks a lot! --------------------------------------------- protected void Page_Init(object sender, System.EventArgs e) { Master.SendEmail += new System.EventHandler(this.EmailReport); } protected void EmailReport(object sender, classLibrary.SendEmailEventArgs e) { string address = e.ToAddress; // do work } Meta tags is the only trouble we've had with Master pages. Anyway, I tried to implement as you've outlined in the article, to no avail. Here are my steps: 1) Create a new Class file in App_Code that inherits from Page (I've called this BasePage) 2) That class contains your code and seems to compile OK 3) I then changed the Default.aspx codebehind page to inherit from that new BaseP BasePage instead. I did not touch the Master file, as your example also did not seem to touch the Master file. I tried the CodeFileBaseClass attribute as well but that gave me even more errors. Any guidance is appreciated. :-) I've been testing and it would appear so. This means that currently written applications that you plan on 'wiring up' to a Master Page can no longer use server-based forms that get even simple postbacks (i.e. Request["dropdownlist1"];). In my testing, I did something this simple and it works without master pages, but once connected (and setting the MasterType virtual path), the code no longer retrieves the data. Thoughts? So now I would like to push a button on the web user control that is on the first tab and have the second tab appear or become active. I need a way to access the webtab control from the web user control that is on the first tab. This may not be a master page problem at all and if not, I appologize. Thank you. Great article on master pages. We use masterpages in .net 2.0 and also use dreamweaver templates. This is so our customers use macromedia contribute to edit pages. This works great however the customer has the ability to change the title tag within contribute. we have a content control within ht e head to set the title tag. However asp.net always adds an empty title tag on the page as well so we end up with 2 title tags. Is it possible to turn off the title in the page directive or remove it when the page is rendered? Thanks @ Heather: That is not really a master page problem, but I'd look at having the control raise an event to the page, and the page will then change the visibility on the tabs. @ Bill: You have to be careful with using syntax like Request.Form["xyz"], because names can change. It would be better to access the drop down list through the field. @ Patrick: I'd need to see some code. I believe you sent me an example in email :) Very informative article and thanks for that. I have a question on Sharing the master page with multiple applications. As you explained in your article, we are implementing with IIS sharing approach. With that approach, I am getting two types of errors 1.When we use MasterType in the aspx page, it added public get property in the corresponding designer file which is by design. But it did not recognize the masterpage type and so the build is failing. Of course the reason was, the application dll can not find the masterpage type. 
One way to solve this copy the master page project dll into this application bin directory. But whenever we make changes to Masterpages, we need to copy the dll back to application bin direcotry which is big maintenance issue in our case. Did you go through this? 2.Application content page could not find the masterpagefile and so source mode of the page is complaining as "masterpage file not found" Do you think it is VS 2005 issue? Ram In some cases I skip the strongly typed MasterPage property. Instead, I define an interface in a class library and reference the class lib from both the master page project and the web site project. The master page can inherit from the interface and implement its members. The content pages cast the MasterPage property to this interface type and invoke the properties. <head runat="server"> ... <meta name="author" content="<%= MASTER_COMPANY_NAME %>" /> <meta name="copyright" content="<%= MASTER_COMPANY_NAME %> - <%= CurrentYear %>" /> ... </head> When a page renders, they look like <meta name="author" content="<%= MASTER_COMPANY_NAME %>" /> <meta name="copyright" content="<%= MASTER_COMPANY_NAME %> - <%= CurrentYear %>" /> After i read your article i created a master page implemengting the following interface: public interface IMasterForm { string MasterFieldText { get; } } after implementing the above interface in masterpage: ------------------- public partial class MyMaster : System.Web.UI.MasterPage, IMasterForm { protected void Page_Load(object sender, EventArgs e) { if (ContentPlaceHolder1.TemplateControl.Page.PreviousPage != null) { IMasterForm form = ContentPlaceHolder1.TemplateControl.Page.PreviousPage.Master as IMasterForm; if (form != null) { if (!string.IsNullOrEmpty(form.MasterFieldText)) { //here i will set the label which is supposed to be in all the pages setLabel = form.MasterFieldText; } } } } public string setLabel { set { lblResult.Text = value; } get { return lblResult.Text; } } #region IMasterForm Members public string MasterFieldText { get { return setLabel; } } #endregion } ------------------------------ I want to set the master page values only once in page1 and maintain the same master page values for the remaining pages(page2,page3,page4...) page1.aspx.cs: public partial class Page1: System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void Button1_Click(object sender, EventArgs e) { ((MyMaster)Page.Master).setLabel = txtBox.Text; } } page1.aspx: <%@ Page Language="C#" MasterPageFile="~/MyMaster.master" AutoEventWireup="true" CodeFile="Page1.aspx.cs" Inherits="Page1" Title="Untitled Page" %> <asp:Content <asp:Label runat=server ID=myLableResult</asp:Label> <asp:Button </asp:Content> --------------- After i implemented this logic i could able to browse through into different pages with postbackurl setting in each page control to a different page. Is this a good idea or is there any other way to implement? as i don't want to implement the interface in each child web form and carry the values along with the child web pages. Not sure I follow the question. ScottGu expanded on the nested master tricks here: weblogs.asp.net/.../430382.aspx. @ Tom: I would probably add those tags using code behind. @ Sree: I would probably let the master page maintain its state in the database, or in Session. This keeps the content page and master page a bit more seperated. I am developing an application using ASP.NET. I have a form and would like to include a form within it. Any help will be appreciated. 
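A bare-bones sketch of the shared-interface arrangement described above (the interface, label and page names are invented for illustration):

// In the shared class library, referenced by both the master page project
// and the web site project:
public interface IHeaderMaster
{
    string HeaderText { get; set; }
}

// In the master page's code-behind:
public partial class SiteMaster : System.Web.UI.MasterPage, IHeaderMaster
{
    public string HeaderText
    {
        get { return lblHeader.Text; }   // lblHeader is a Label on the master
        set { lblHeader.Text = value; }
    }
}

// In a content page, without a strongly typed MasterType directive:
protected void Page_Load(object sender, EventArgs e)
{
    IHeaderMaster header = Master as IHeaderMaster;
    if (header != null)
    {
        header.HeaderText = "Reports";
    }
}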
I find myself very frustrated by the difficulties in using javascript libraries with master pages. I don't really understand the methods posted above for javascript inclusion, so I tried one of the methods you suggested, with no success I'm afraid. I have a questions.aspx page and a questions.js library in the root directory of my web app. The master page is in a master directory ie: ~/master/template1.master. I can't get VS 2005 to debug javascript written into the aspx page so I have been using included js files for the debugging as well as the efficiency of grouping functionality. Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load I think this should work, but it doesn't.... Dim si As HtmlGenericControl = New HtmlGenericControl() si.TagName = "script" si.Attributes("runat") = "server" si.Attributes.Add("type", "text/javascript") si.Attributes.Add("src", "~/Questions.js") Master.Page.Header.Controls.Add(si) Any ideas or suggestions would be great. thanks for your time David Master Pages cannot access Session? Myth. The base master page has 2 user control and i conent placeholder. my problem is the button click event is not getting fired in the page. It's not at all doing post back. I'm just starting to use master pages and have put all my navigation buttons there. I'm trying to reset them using the following code structure but can't find any buttons. I tried getting hold of the contentplaceholder, but can't loop through it. protected void Reset_Buttons() { foreach(Control ctrl in Page.Master.Controls) { if(ctrl is Button) { Button btn = (Button)ctrl; MessageBox.Show(btn.Text); } } } thanks, Graham @Saravana: That is a scenario that should work find. I can't say more without seeing the code. @Graham: MessageBox won't work on the server. It pops up a windows message box that nobody would be able to see. Also, the buttons are probably inside a form control. Set trace="true" in your @ Page directive and see what the button's parent control is. Loop through the parent control's Controls collection. Hope that makes sense. great article, I find it very useful, 10X. I have request can someone please translate the following VB code to C# <%@ Master Language="VB" %> <script runat="server"> Public Event SendEmail As SendEmailEventHandler Protected Sub SendEmailButton_Click(ByVal sender As Object, _ ByVal e As System.EventArgs) Dim eventArgs As New SendEmailEventArgs(EmailAddressBox.Text) RaiseEvent SendEmail(Me, eventArgs) End Sub </script> <script runat="server"> Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) AddHandler Master.SendEmail, AddressOf EmailReport End Sub Protected Sub EmailReport(ByVal sender As Object, ByVal e As SendEmailEventArgs) Dim address As String = e.ToAddress ' do work End Sub </script> I cannot understand what are the orders: RaiseEvent SendEmail(Me, eventArgs) and AddHandler Master.SendEmail, AddressOf EmailReport what are the parallel functions in C# that I have to wirte down? and again 10X Keren Thank you very much for such a nice article. I wish I should have read it before submitting my project. But any how I have got a great help I was stuck for running a javascript function and I finally found that I should use ctl00_ContentPlaceHolder1_Label1 in place of label1 in content page. Can you please tell what should be used for master Page Controls????? 
“Dynamic Server Controls, Events in content pages that also utilize a MasterPage…” Basically, for this test I have some dynamic server controls, a dropdown, button and textbox for example… They are all created in the page_init of the content page as are the event handlers… Pretty standard stuff, when there is no MasterPage the events are raised as anticipated, when a plain jane MasterPage is added, the dynamic events for said dynamic server controls are not raised… Can’t seem to make it fly… Can you shed any light on the subject, possibly a blog article with a functional sample? Thanks in advance. I was wondering if someone would help me with an issue that's driving me insane. I have a menu control that works fine on an aspx page. However, when I move the control to a Master page, it behaves differently. Specifically, the when you hover over a link in the Menu control, the entire <td> background should change color. When the control is on the Master page, only the background around the actual text changes. As far as I can tell, the only difference in the resulting HTML is in the class attribute. e.g, class="Header1_Menu1_1 TopNavLink Header1_Menu1_3" (when it's on the page) class="ctl00_Header1_Menu1_1 TopNavLink ctl00_Header1_Menu1_3" (when it's on the Master page) Anyone know why? My designer won't let this slide by! Great article! I’m developing a CMS like application, where I would like to use a MasterPage for the “outer design” and a single Content page, which dynamic builds the “inner content” for the page, loading UserControls and “injecting” these into the Content Page. What I have is this: .aspx file <%@ Page Language="C#" AutoEventWireup="true" CodeFile="mhaDynContent.aspx.cs" Inherits="mhaDynContent" Title="Untitled Page" %> .aspx.cs file protected void Page_PreInit(object sender, EventArgs e) { this.AppRelativeVirtualPath = "~/mhaDefault.aspx"; this.MasterPageFile = "~/MasterPage.master"; base.AddContentTemplate("ContentPlaceHolder1", new TestContent()); } public class TestContent : System.Web.UI.ITemplate { void System.Web.UI.ITemplate.InstantiateIn(Control container) { container.Controls.Add(new LiteralControl("Hello World")); //Control toAdd = LoadControl("mhaCalendar.ascx"); //container.Controls.Add(toAdd); //UserControl a = new UserControl(); //a.LoadControl("mhaCalendar.ascx"); //container.Controls.Add(a); } } I can easily add a new control (like e.g. the LiteralControl to the “container”, but what I really would like to do is load (“inject”) a UserControl into the container and I can’t make this work (I’ve commented out some of my code – attempts :) I hope you understand my problem and any help would be appreciated. Best regards, Michael Holm Andersen Does anyone know how I would take this example below and change it to inherit from the base master page class defined below? ----------------------------------------------------------------------- File: default.master ----------------------------------------------------------------------- <%@ Master Language="C#" AutoEventWireup="true" CodeFile="default.master.cs" Inherits="Resources_default" %> ... ----------------------------------------------------------------------- File: default.master.cs ----------------------------------------------------------------------- using System.Collections; ... 
public partial class Resources_default : System.Web.UI.MasterPage { protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { Trace.Warn("MADE IT"); } } } ----------------------------------------------------------------------- File: BaseMasterPage.cs (located in App_Code) ----------------------------------------------------------------------- using System; ... public abstract class BaseMasterPage : MasterPage { //NOTHING HERE YET } ----------------------------------------------------------------------- I cannot figure out how to "wire" it up so that my events in the default.master.cs are handled. I appreciate any suggestions. Kevin Any assistance would be great... Master Page: MasterPage.master Content Page: detail.aspx Form View Control: FVDetail HyperLink Control: mapLink I have tried many ways but couldnt solve the problem. I will be thankful if somebody can solve my problem I was wondering to call an event from master page. your article helped me out. You are doing great job. Cheers, Satish Kacham I want to change a label in my master page. Thanks to your article I have manage to do that. The problem is that I have to do that in every page, is there a way to optimize that? Thanks I am still having a little trouble. I built a Master page, build a separate Header control, added the control to the master page. Now on other pages, I would like to access and change the label on the Header control. Is there a way to do this? -smc Question: I want to disable back button effect of browser in my asp.net application.but i want to do it in a particular content page of a master page. can you please suggest a solution.? i have a registration flow like 7 to 8 pages....patient inputs data and when it clicks on next updatedataholder() method calls which saves the content of that page and put it in session. now the complex part i want to implement a breadcrumb thing in master page and i should be able to navigate from 1st to for example 5th page. Questions is how will i know that what was previous page? and previous page updatedataholder() should be called? the worst approach is i call this method on every page unload event? what do you say? Thank u in advance. i have master page name :sitemaster.master page:default2.aspx page:default3.aspx control on master :treeview on form load of Default2.aspx i am changing the label1.text of masterpage.But when i used to go to default3.aspx using tree view then the label1.text of master page is showing empty. how i can show same text of label1 in default3.aspx also. If cak give some ideas it will be great help. thanks and regards, lokesh) So i want that at runtime grid should be visible on whole page and on selecting grid column panel should be visible. But currently at page load my grid is visble on half page and half remain vacant. so please can u help me i will be very thankful to u Great article about master pages. I did as you instructed. I added a MasterPageModule class, BasePage class, updated the Webconfig the MyMasterPageModule in httpModules. When I compiled the code I received the following errorss in my BasePAge Class Error 1 'BasePage' does not contain a definition for 'PreInit' and no extension method 'PreInit' accepting a first argument of type 'BasePage' could be found (are you missing a using directive or an assembly reference?) 
C:\Users\sstacel\Documents\Visual Studio 2008\WebSites\WAP_Web4\App_Code\BasePage.cs 20 14 C:\...\WAP_Web4\ Error 2 The name 'MasterPageFile' does not exist in the current context C:\Users\sstacel\Documents\Visual Studio 2008\WebSites\WAP_Web4\App_Code\BasePage.cs 25 9 C:\...\WAP_Web4\ I am not sure what I did wrong. Can you help me resolve it. Thanks I HAVE A WEBSITE WITH A MASTERPAGE AND 2 LINK IN MASTERPAGE AND A DEFAULT.ASPX THAT USEING MASTERPAGE FOR CONTENT. WHEN I CLICK ON THE LINKS WHOLE MASTERPAGE REFRESHING BUT I WANT JUST CONTENT PART REFRESHING WOULD YOU HELP ME? BEST REGARDS You are really superb .... You have given nice information. I am trying to use controls of the content page in the Javascript of the master page. I am not sure is there any way like to use it directly like ctl000_contentplaceholder_contentctlname which didnt work for me. But my problem is in my master page I don't have BODY tag because it is in content pages and Index page. So in Codebehind of Master page I am getting value of the control of the content page and can assign in the Hiddenfield. But then i can not use that field in the javascript of the master page. Any help will be appreciated. Thanks, Deepa I have some dropdown lists (company,region,country etc..)which are used in different web forms. I have a master page too..Now i wanna create a reusable web control for all dropdowns (which should be part of the presentation layer). And i tried like this- i created one ascx file with a dropdownlist which binds all regions[Region name] from database. And i added ddlRegion.Items.Insert(0,"--Select--"); in Databind().In a new aspx file with a button the control it works fine.ie.When nothing is selected (--Select--) it will alert as 'Select a Region',and if data is selected it will return the correct selectedValue. Now, my problem is that the javascript alert is not working when i use the control with the aspx page where i have used master page reference[i wrote it in ascx page].Or how can get the first item ie.--Select-- from ascx to aspx??? Just help me out from this... Am using visual studio 2010-C#,sql server 2008 isn't finding the right control ID when used in a master page - the master page changes the control IDs on the client. 
My aspx page look like this: <%@ Page Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="Company.aspx.cs" Inherits="Basicdata_Company" %> <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="cc1" %> <%@ Register TagPrefix="Dropdown" TagName="Region" Src="~/Usercontrol/Dropdown.ascx"%> <asp:Content </asp:Content> <asp:Content <script type="text/javascript"> function validate() { var objname = document.getElementById("<%=txtCompanyname.ClientID %>") if (objname.value == "") { alert("Value is required"); objname.focus(); return false; } var objurl = document.getElementById("<%=txtUrl.ClientID %>") if (objurl.value == "") { alert("Value is required"); objurl.focus(); return false; } /// I tried this type code for ddlRegion[user control] but it returns error //// </script> <asp:UpdatePanel <ContentTemplate> ---Something-- <Dropdown:Region </Dropdown:Region> --Something-- </ContentTemplate> </asp:UpdatePanel> </asp:Content> And this is my ascx: <%@ Control Language="C#" AutoEventWireup="true" CodeFile="Dropdown.ascx.cs" Inherits="Usercontrol_Dropdown" %> <script type="text/javascript"> function validate() { var itoken = document.getElementById('<%=ddlRegion.ClientID%>').options[document.getElementById('<%=ddlRegion.ClientID%>').selectedIndex].value; if (itoken == "--Select--") { alert("Select a Region"); return false; } } /// is there any way to pass value of itoken to aspx and make use of that??? /// </script> <asp:DropDownList </asp:DropDownList> The javascript written in ascx works fine when i make use of it with an aspx which have no reference to master page. What u said is correct-because when i tried to use that id(ddlRegion),it returns error.....ie. object is undefined or null. Please make me clear with these concepts.. Any solution??? Excellent article... Finally i got the solution... Thanks......... can u help me in showing a user friendly message while throwing exceptions--I am using exception handling application block--how can i do that using replace handler?? I have used, exceptionMessage="An error has occured, contact admin". Can i have this in an alert box. Am using web application C#, .net 4.0 If possible plz help me out......
https://odetocode.com/blogs/scott/archive/2006/04/12/the-masterpage-article-i-thought-id-never-finish.aspx
CC-MAIN-2019-18
refinedweb
7,659
66.74
Here is a question by Vince that many beginners may encounter, on namespaces and references and setting up a project. Although it may seem confusing the first time through, it is very simple to handle once you know how. Question: Can you point me to material, blog or forum that discusses how to insert Code Regions from the SDK material into existing code? I'm learning to code so any basic level information that you can provide me would be a big help. I figured out how to do the Hello World tutorial located at the beginning of the Revit 2010 API user Manual, but now I would like to expand on the information by utilizing the additional code regions supplied in the User Manual but I continue to get errors. I'm sure it's basic in nature but because of my limited experience difficulty is around every corner. Again, if you could supply me with a simple tutorial showing how to insert a code region into existing code would be very helpful. Answer: I cannot really say anything special about copying source code snippets from one project to another. I just use copy and paste in any old editor, actually. In .NET, you just need to ensure that all required references and using statements are in place. Maybe that is causing the errors you see. The 'using' statements at the head of each module specify namespaces that can be used without explicitly specifying the namespace each time, so that you can write 'Element' instead of the full class name 'Autodesk.Revit.Element': using System; using System.Collections.Generic; using System.Diagnostics; using Autodesk.Revit; using Autodesk.Revit.Elements; The references provide the definitions of the namespaces, and are actually .NET assemblies, i.e. DLL files that need to exist on you system and on the system executing your plug-in. They are loaded by the .NET framework when your plug-in is loaded: Adding the required references to a project is demonstrated by the Revit DevTV recording. The entire Revit API resides in one single assembly, RevitAPI.dll, which includes all the Revit namespaces, so that is simple. Which class resides in which namespace is documented by the Revit API help file RevitAPI.chm, which is included with the Revit SDK: In general, all you have to do when you copy code from one project to another is ensure that the required references are loaded, and then add appropriate using statements, unless the code you copied uses the full name of every class it references.
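As a small illustration of the point about namespaces (using the 2009-era namespaces quoted in the post; later releases moved some of these classes), the two methods below are equivalent once RevitAPI.dll has been added as a reference:

using Autodesk.Revit;
using Autodesk.Revit.Elements;

public class NamespaceDemo
{
    // 'Element' resolves through the using directives above.
    public string DescribeWithUsing(Element e)
    {
        return e.Name;
    }

    // Without the using directives, every reference needs the full class name.
    public string DescribeFullyQualified(Autodesk.Revit.Element e)
    {
        return e.Name;
    }
}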
http://thebuildingcoder.typepad.com/blog/2009/10/namespaces.html
CC-MAIN-2015-22
refinedweb
427
54.83
scala-macro-debug

Scala macros to make debugging easier. Comes in two flavors, DebugMacros, as described on the blog (see introduction below), and an enhanced version DebugConsole.

DebugMacros example:

class Test {
  import com.softwaremill.debug.DebugMacros._

  val v1 = 10

  def test() {
    val v2 = 20
    debug("Values in test", v1, v2)
  }
}

Should print:

Values in test, Test.this.v1 = 10, v2 = 20

DebugConsole example:

class Test {
  import com.softwaremill.debug.DebugConsole._

  val v1 = 10

  def test() {
    val v2 = 20
    debug("Values in test", v1, v2)
  }
}

Should print:

|D| Values in test, Test.this.v1 = 10, v2 = 20

And:

class Test {
  import com.softwaremill.debug.DebugConsole._

  val v1 = 10

  def test() {
    val v2 = 20
    debugReport("Values in test", v1, v2)
  }
}

Should print:

|D| Values in test
|D| Test.this.v1 = 10
|D| v2 = 20

Features

- Two Modes: debug (dynamic single line debugging message) and debugReport (title and variables report)
- Can be disabled at compile time (the debugging code is removed from the final .class files)
- Can be used to print the current source code file name and line
- Really easy to use (there are only two methods)

Introduction

We all use println messages to debug our code and check the execution flow. And we quickly end up with things like this:

println("Values in test, Test.this.v1 = " + v1 + ", v2 = " + v2)

And this is only for two variables. It can quickly grow ugly, and hunting down all the println lines lost in the middle of the code ends up being nightmarish.

This project is the brainchild of a tutorial to learn to code Scala Macros. See the blog: "Starting with Scala Macros: a short tutorial".

Getting the Project: SBT

To use in your project, add the following dependency:

"com.softwaremill.scalamacrodebug" %% "macros" % "0.3" // scala 2.10
"com.softwaremill.scalamacrodebug" %% "macros" % "0.4.1" // scala 2.11, 2.12 and 2.13

Getting the Project: Maven

To use in your project, add the following dependency:

<dependency>
  <groupId>com.softwaremill.scalamacrodebug</groupId>
  <artifactId>macros_2.11</artifactId>
  <version>0.4</version>
</dependency>

Usage

Always include:

import com.softwaremill.debug.DebugMacros._

Or:

import com.softwaremill.debug.DebugConsole._

You can also extend the DebugMacros or DebugConsole traits in your util-object/package-object, if you have such an object which you frequently import (then debug will be easily available without additional imports).

The great strength of the debug methods is the ability to create a label for an expression to be debugged:

debug(a + b)

prints:

|D| a.+(b) = 30

You can combine as many as you want:

debug(a + b, c, 7 + 3)

prints:

|D| a.+(b) = 30, c = 14, 7.+(3) = 10

You can also mix in as many constant literals (typically a String) as you want. They will be left untouched:

debug(a + b, "which should be different from", 7 + 3)

prints:

|D| a.+(b) = 30, which should be different from, 7.+(3)

The debugReport method prints the debugged expressions with one expression per line:

debugReport(a + b, c, 7 + 3)

prints:

|D| a.+(b) = 30
|D| c = 14
|D| 7.+(3) = 10

With an optional title as the first parameter:

debugReport("And the set of vars is:", a + b, c, 7 + 3)

prints:

|D| And the set of vars is:
|D| a.+(b) = 30
|D| c = 14
|D| 7.+(3) = 10

And finally, if you do:

debug() or debugReport()

you will get a debug message that reports the source file and line the call is placed in.
Disabling

One of the strengths of the library is that it can be disabled at compile time. If you disable it, all the debug and debugReport calls are literally removed from the generated code, so you no longer need to hunt down all the println statements lost in the middle of the code. To do this, you can set the environment variable enable_debug_messages to false (it is considered to be true by default). You can also send it as a system property (the system property takes precedence over the environment variable).
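As an example, assuming an sbt build on a Unix-like shell, a release build with the messages compiled away might be run like this (the property name comes from the paragraph above; how you pass it depends on your build setup):

# disable debug messages through the environment variable
export enable_debug_messages=false
sbt clean compile

# or pass it as a system property, which takes precedence
sbt -Denable_debug_messages=false clean compile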
https://index-dev.scala-lang.org/adamw/scala-macro-debug
CC-MAIN-2022-40
refinedweb
680
65.32
Hi Gabel, I have used an MVC controller to make the API request in the current project instead of the ApiController class, and I am able to get cart info for the anonymous user as well. Please try the steps below.

Step 1: Add the below references to the library project:

using EPiServer.Commerce.Order;
using Mediachase.Commerce.Customers;
using Mediachase.Commerce.Orders;

Step 2: Inject an IOrderRepository instance to load the cart:

private readonly IOrderRepository _orderRepository;

var cart = _orderRepository.LoadOrCreateCart<ICart>(CustomerContext.Current.CurrentContactId, "Default");

Hope it will work for you!

Hi Gabel, We have added some settings at the handler level in web.config which allow the MVC controller to be called similarly to an ApiController. We are using the same project for the frontend (React) and backend (MVC controllers) to communicate, but in your case you will need to send the data needed for retrieving the cart details in the request to the endpoint if the previous solution is not working.

Hi! I need to get some user info in a commerce project in an ApiController class. This is in a library project, so I will not know what kind of a provider a consuming project will have implemented. If I understand it correctly, I need to get CustomerContact to get commerce user data. The problem is I cannot use the current user ID, because it is WebApi. I can get EPiServerProfile, because I have userName in the request payload, but what about anonymous users? I would want to get the ICart of those users; will that be possible?
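Pulling the two answers above together, a minimal sketch of a controller action built on that advice might look like the following. The controller name, route and the "Default" cart name are assumptions for illustration, not something confirmed in the thread.

using System.Web.Mvc;
using EPiServer.Commerce.Order;
using EPiServer.ServiceLocation;
using Mediachase.Commerce.Customers;

public class CartApiController : Controller
{
    // Constructor injection is preferable if your setup supports it; the
    // service locator just keeps the sketch short.
    private readonly IOrderRepository _orderRepository =
        ServiceLocator.Current.GetInstance<IOrderRepository>();

    [HttpGet]
    public ActionResult CurrentCart()
    {
        // Works for anonymous visitors too, because CurrentContactId is
        // resolved from the (possibly anonymous) current contact.
        var cart = _orderRepository.LoadOrCreateCart<ICart>(
            CustomerContext.Current.CurrentContactId, "Default");

        return Json(new { cartName = cart.Name }, JsonRequestBehavior.AllowGet);
    }
}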
https://world.optimizely.com/forum/developer-forum/Commerce/Thread-Container/2020/3/get-custommercontext-in-webapi/
CC-MAIN-2021-39
refinedweb
243
56.86
morris6, mansaxel and 11 Guests are viewing this topic. LMAO I am tempted at the moment Enjoyment = Zero so far. Just disappointment. Decided “no more bloody heathkits” lol Postage to US is $22 I’ll sleep on it and let you know when I’ve had my morning coffee $25 on slAmazonGlue a giant "Edison" blub on it and sell it on Etsy for $600. mnem Dammitt NewEgg... while looking over the MB for any visible faults I discover that it is not a new MB, but a repackaged customer return (or maybe some reviewer's preview board reinjected back into the stream); it never had the socket cover and the SSD heatsink has imprints on it from someone else's build, and I can see thermal paste residue on the CPU socket, where my build has only ever had the stock cooler on it with its printed-on compound. Looking back over everything, I notice that neither the box or baggie inside was actually sealed; and that the barcode label has been ripped off and a generic one from a portable labeler slapped on the box.Duck me... it's going back and I'm gonna buy from someone else. mnemSuck a duck, Murphy. I couldn't resist... I bought the hundred buck 485 on ebay and it arrived today. Largest pièce of equipment I own right now. TEA worthy indeed! Well I've got a bundle of hell on my hands with this Heathkit VTVM. The original "builder" (fuckerizer) did a poor job of this one. Some highlights:Got my new 12AU7 delivered by hand So out goes the CHunts capacitor...Then I smell a turd, so I've stripped the board out of it for repair / rekitting ... Ugh look at that wonky ass soldering and also the wax spooge over everything. Yeah thanks for that whoever decided to try and seal the damn pot with it ffs.And what do I find underneath. Yes the numpty didn't even trim the effing leads on the cap and look at that awful shit soldering So this one is being rekitted properly Well that was a productive evening. I think. EVERY damn resistor in that VTVM other than the trimmers and the 22M ones had drifted way out of spec. Also the caps in it were cracked and looking iffy so I have pulled and chucked them Also whoever soldered the light bulb holder on the board lifted both of the pads off and it was danging. I have removed ALL of the resistors and stuffed with ones I have in stock. Need to order some more. Have used 0.8mm thick CIF single-sided FR4 to make two new pads for the light bulb holder "manhattan style" and eviscerated all the wax gunk over everything. So off to RS ... project shelved for a few days now.Edit: Here's what remains of the board at this time (Attachment Link) THE BITCH LIVES. While researching possible compatibility issues with my particular combination of CPU/RAM/MB (Even though my exact RAM is listed on the MB's QVL), I was led back to the MB support page where I discovered a new BIOS released JUST YESTERDAY. In desperation, I Q-FLASHED it again... and it booted to BIOS. The aggravating part is it still refuses to boot with XMP enabled; so RAM just operates at slowest JEDEC speed of 2667. I suspect the main difference is just that prior BIOS versions defaulted to XMP ENABLED, while this version doesn't. I think I'm going to accept this small victory and go to bed.mnemSufficient unto the day is the evil thereof. Where the precision range resistors on the switches OK? Quote from: Specmaster on August 02, 2019, 06:30:58 amWhere the precision range resistors on the switches OK? 
I thought so initially but after testing them, the 9.1 ohm one is burned (this is relatively normal) and some of the divider contacts are pretty bad.@med: I'm going to persist with this one and see how far I get. I'll let you know if I give up again ... I'm not sure you want this one though as the transformer is 240V only. The UK model is slightly different to the US one as well. Also it's a rust bucket inside. Has been kept in a damp shed by the looks. Gotcha makes sense. I may strip it for parts to sell TBH. I'm looking at it again today and I don't fancy it. Meter movement is yours if I do that. It will cost a lot less to send if that's the only bit you need. I was wondering just why med wanted it, surely as Heathkit is an American company these must be cheap as cheap as chips over there surely? There is about 8 on Ebay USA right now although some of them are looking for stupid money IMO.
https://www.eevblog.com/forum/testgear/test-equipment-anonymous-(tea)-group-therapy-thread/36050/
CC-MAIN-2019-43
refinedweb
832
81.63
/*
 * RFC2822OutputStream.java
 * Copyright (C) 2002 The Free Software Foundation
 */

package gnu.mail.util;

import java.io.*;

/**
 * An output stream that ensures that lines of characters in the body are
 * limited to 998 octets (excluding CRLF).
 *
 * This is required by RFC 2822, section 2.3.
 *
 * In order to conform to further requirements of RFC 2822 the underlying
 * stream must be a CRLFOutputStream.
 *
 * @author <a HREF="mailto:[email protected]">Chris Burdess</a>
 */
public class RFC2822OutputStream
  extends FilterOutputStream
{

  /** The CR octet. */
  public final static int CR = 13;

  /** The LF octet. */
  public final static int LF = 10;

  /** The maximum allowed size of a line */
  public final static int MaxLineLength = 998;

  /** The number of bytes in the line. */
  protected int count;

  /**
   * Constructs an RFC2822 output stream
   * connected to the specified CRLF output stream.
   *
   * @param out the underlying CRLFOutputStream
   */
  public RFC2822OutputStream(CRLFOutputStream out)
  {
    super(out);
    count = 0;
  }

  /**
   * Writes a character to the underlying stream.
   *
   * @param ch Description of Parameter
   * @exception IOException if an I/O error occurred
   */
  public void write(int ch)
    throws IOException
  {
    if (ch == CR || ch == LF) {
      out.write(ch);
      count = 0;
    }
    else {
      if (count > MaxLineLength) {
        out.write(CR);
        out.write(LF);
        count = 0;
      }
      out.write(ch);
      count++;
    }
  }

}
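A short usage note, not part of the original listing: the class is meant to sit on top of a CRLFOutputStream, which in turn wraps the real destination stream. Assuming CRLFOutputStream takes an OutputStream in its constructor (as its name and this class's constructor suggest), typical use might look like:

import java.io.FileOutputStream;
import java.io.OutputStream;
import gnu.mail.util.CRLFOutputStream;
import gnu.mail.util.RFC2822OutputStream;

public class Rfc2822Demo
{
  public static void main(String[] args) throws Exception
  {
    OutputStream file = new FileOutputStream("message.txt");

    // RFC2822OutputStream must wrap a CRLFOutputStream, per the javadoc above.
    RFC2822OutputStream out = new RFC2822OutputStream(new CRLFOutputStream(file));

    byte[] body = "Hello, world\r\n".getBytes("US-ASCII");
    for (byte b : body)
      out.write(b);

    out.flush();
    out.close();
  }
}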
http://kickjava.com/src/gnu/mail/util/RFC2822OutputStream.java.htm
CC-MAIN-2017-30
refinedweb
273
66.33
Parminder Sohal thinks this is interesting:

The import statement simply adds the package to the search path when hunting for names. So no true dependency is created by such imports, and they therefore serve to keep our modules less coupled.

From - Chapter 17: Smells and Heuristics - from Clean Code - Publisher: Prentice Hall - Released: August 2008

Note: Reading Python OO, we learned that a wildcard import (import something.*) is never a good idea, because you never know what you are importing into your namespace; you can end up importing classes you do not want. Even worse, suppose you import both a and b this way, and both a and b have some compute() function; you no longer know which one is being called. Beyond that, you can get other unexpected behaviors because of this. Modern editors like Eclipse can collapse all the imports and check whether each import is valid.
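A small, made-up Python illustration of the collision the note describes (module and function names are hypothetical):

# a.py
def compute():
    return "a.compute"

# b.py
def compute():
    return "b.compute"

# main.py
from a import *
from b import *

# Whichever module was star-imported last silently wins:
print(compute())            # prints "b.compute", which may not be what was intended

# Importing the modules by name keeps the dependency explicit:
import a, b
print(a.compute(), b.compute())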
https://www.safaribooksonline.com/a/clean-code/10360360/
CC-MAIN-2018-30
refinedweb
162
75.81
[FIXED] [CLOSED] BaseEvent type is null

I use component and not source. Also please read the javadocs for component event / button event etc.

Component

Not sure if you followed my original post, so here is some code:

public class Foo extends Window implements Listener{
    public Foo(){
        newProjectButton = new Button("New Project");
        newProjectButton.addListener(Events.Select, this);
        .... rest of code
    }

    public void handleEvent(BaseEvent te) {
        if(te.source == null){
            // this should not be null
        }
    }
}

If I check the instance of BaseEvent as an instanceof ButtonEvent then I can cast and get the be.button as the source of the event. The assumption is that BaseEvent.source should not be null in the base class. Thanks Scooter

The javadocs tell you that not all fields are set. Code:

if(be instanceof ComponentEvent) { source = be.component }

source

I understand the challenges associated with a base class and subclasses that are appropriate for the specific "Component" and the corresponding subclass event. In the case of BaseEvent the source should be a common or global attribute of all subclasses. If source can be null in BaseEvent and has only three attributes, then source probably shouldn't be in BaseEvent. It doesn't make sense to have source as an object in BaseEvent and not have it set as the source of the event.

And I can only suggest you again to read the javadocs. They tell you what fields are set.

Response

I can only reply that having docs that explain bad design or a bug is OK, but that doesn't excuse having code that can be improved to follow the original intent of a base class in the design. No need to reply!

We did a change that makes source filled every time.
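A sketch of the workaround hinted at in the thread, for GXT versions where BaseEvent.source is not populated: fall back to the concrete event types, which do carry the originating widget. Only the field names (source, button, component) and newProjectButton come from the posts above; the rest is illustrative.

public void handleEvent(BaseEvent be) {
    Object source = be.source;

    // On builds where source is null, the specific event types still know
    // which widget raised the event.
    if (source == null && be instanceof ButtonEvent) {
        source = ((ButtonEvent) be).button;
    } else if (source == null && be instanceof ComponentEvent) {
        source = ((ComponentEvent) be).component;
    }

    if (source == newProjectButton) {
        // handle the "New Project" click
    }
}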
http://www.sencha.com/forum/showthread.php?61814-FIXED-CLOSED-BaseEvent-type-is-null&s=be272cd2fb29a179ec04584ecf8495a4&p=311202
CC-MAIN-2014-42
refinedweb
298
73.27
1.1 Glossary This document uses the following terms:. ASCII: The American Standard Code for Information Interchange (ASCII) is an 8-bit character-encoding scheme based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text. ASCII refers to a single 8-bit ASCII character or an array of 8-bit ASCII characters with the high bit of each character set to zero.. Distributed File System (DFS): A file system that logically groups physical shared folders located on different servers by transparently connecting them to one or more hierarchical namespaces. DFS also provides fault-tolerance and load-sharing capabilities.. Fid: A 16-bit value that the Server Message Block (SMB) server uses to represent an opened file, named pipe, printer, or device. A Fid is returned by an SMB server in response to a client request to open or create a file, named pipe, printer, or device. The SMB server guarantees that the Fid value returned is unique for a given SMB connection until the SMB connection is closed, at which time the Fid value can be reused. The Fid is used by the SMB client in subsequent SMB commands to identify the opened file, named pipe, printer, or device. file:. file. guest account: A security account available to users who do not have an account on the computer.. that provides connectionless datagram delivery of messages. See [IPX]. datagram service: An implementation of NetBIOS services in a datagram environment as specified in [RFC1001] section 17.): An authentication protocol that is based on a challenge-response sequence for authentication.., and..B transport: Any protocol that acts as a transport layer for the SMB Protocol..
https://msdn.microsoft.com/en-us/library/ee441810.aspx
CC-MAIN-2019-04
refinedweb
282
53.1
CONCEPTS USED: Greedy algorithm.

DIFFICULTY LEVEL: Medium.

PROBLEM STATEMENT (SIMPLIFIED): PrepBuddy is given N activities with their start and finish times. The task is to select the maximum number of activities that can be performed by PrepBuddy, assuming that he can only work on a single activity at a time. See original problem statement here.

For Example:

Input:
2
5
1 3 4 6 8
3 5 13 14 15
6
3 5 6 7 8 9
5 7 8 9 10 12

Output:
3
4

In the first test case, the activities present at index 0, 1 and 3 can be completed. In the second test case, the activities present at index 0, 1, 3 and 4 can be completed.

OBSERVATION: The activity selection problem is one of the most frequently asked problems and also holds great significance when it comes to implementing the greedy algorithm. The activity selection problem is a combinatorial optimization problem. It is also known as the Interval Scheduling Maximization Problem (ISMP), which is a special type of the more general Interval Scheduling problem.

SOLVING APPROACH: What does greedy say? It says that, at every step, we can make the choice that looks best at the moment, and we get the optimal solution of the complete problem. Here, the greedy choice is to always pick the next activity whose finish time is least among the remaining activities and whose start time is more than or equal to the finish time of the previously selected activity. We can sort the activities according to their finishing time so that we always consider the activity with the minimum finishing time next.

1. Sort the activities according to their finish time.
2. Select the first activity and set the counter to 1.
3. Now iterate over the entire array and keep comparing the selected finish time with the current start time.
4. If the start time is greater than or equal to the selected finish time, increment the counter by 1 and change the value of the selected finish time to the current finish time.

Time Complexity:

When activities are sorted by their finish time: O(N)
When activities are not sorted by their finish time: O(N log N), due to the complexity of sorting.

SOLUTIONS:

#include <bits/stdc++.h>
using namespace std;

int main()
{
  int t; cin >> t;
  while (t--)
  {
    int n; cin >> n;
    vector<pair<int,int>> v(n);
    for (int i = 0; i < n; i++)
    {
      int st;
      cin >> st;
      v[i].second = st;
    }
    for (int i = 0; i < n; i++)
    {
      int end;
      cin >> end;
      v[i].first = end;
    }
    sort(v.begin(), v.end());
    int ans = 1;
    int st = v[0].first;
    for (int i = 1; i < n; i++)
    {
      if (v[i].second >= st)
      {
        ans++;
        st = v[i].first;
      }
    }
    cout << ans << "\n";
  }
  return 0;
}

import java.util.*;
import java.lang.*;
import java.io.*;

class ActivitySelection
{
  public static void printMaxActivities(int s[], int f[], int n)
  {
    int i, j;
    int count = 1;
    i = 0;
    for (j = 1; j < n; j++)
    {
      if (s[j] >= f[i])
      {
        count = count + 1;
        i = j;
      }
    }
    System.out.println(count);
  }

  public static void main(String[] args)
  {
    Scanner sc = new Scanner(System.in);
    int t = sc.nextInt();
    while (t-- > 0)
    {
      int n = sc.nextInt();
      int f[] = new int[n];
      int s[] = new int[n];
      for (int i = 0; i < n; i++)
      {
        s[i] = sc.nextInt();
      }
      for (int i = 0; i < n; i++)
      {
        f[i] = sc.nextInt();
      }
      printMaxActivities(s, f, n);
    }
  }
}
https://www.prepbytes.com/blog/greedy-algo-interview-coding/activity-selection-problem/
CC-MAIN-2021-39
refinedweb
572
55.74
See also the article about localizing an ASP.NET system here. See also my video about localizing an ASP.NET WPF windows below are a single XAML window file that uses different resource files to change the Window Title, the Labels and the Button text at runtime. There are a few ways to implement localization into a WPF program. In this example I will use RESX files, as it is the approach I like the best. After you have created your new WPF project, open the Resources.resx file and add the below. You will find the Resources.resx file Properties directory inside your solution explorer. - Create a Button – ButtonSubmit, where the text will be Submit. - Create a Label – LabelAddress, where the text will be Address. - Create a Label – LabelFirstName, where the text to be First Name. - Create a Label – LabelLastName, where the text will be Last Name. - Set the title of the window to User Information. The Resources.resx is your default Resource file. You will need to create 3 additional Resources files for this example, one for English, German and Chinese. All 3 need to be moved into the Properties directory and named like in the below picture. Resources.de-DE.resx file will look like this: Resource.en-US.resx will look like this: Resource.zh-CN.resx will look like this: It is very important to make these resource files Public. There is a drop-down list at the top of the window when you are editing the file. Make sure it shows Public. This approach will not work is the access modifier is set to Internal. Within the Window tag of your XAML window file, add the following line of code. xmlns:properties="clr-namespace:YourAssemblyName.Properties" For each control within our grid (Button, Labels and Window Title) we will not set the Title or Content values to static text. Instead we will set it to use the resources file value. …Title="{x:Static properties:Resources.WindowTitle}"… <Label Content="{x:Static properties:Resources.LabelFirstName}"… <Button Content="{x:Static properties:Resources.ButtonSubmit}"… I don’t like to hardcode values into the my source code. Simply, because if the value needs to change you have to make the code change, rebuild the components and redeploy the program. To avoid this, I always place settings in the App.config or the Web.Config files when there is a chance they would/could change. This goes for setting the culture in this example. I add the below into the App.config file. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Culture" value="de-DE" /> </appSettings> </configuration> If you try to run this example on your computer, you can change the value to de-DE, en-EN or zh-CN to see the windows change language. If you want, you can also modify the project and add a new resource file for another language, then modify the value to the language you just created. The last step is to set the culture of the program. You do this by adding the below code to the MainWindow() method, before the InitializeComponent() method is executed. Properties.Resources.Culture = new CultureInfo(ConfigurationManager.AppSettings["Culture"]); You will need to add System.Configuration to your references in order to access the ConfigurationManager to function. I have found that simply adding the namespace to the class file with using is not enough.
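Putting the pieces above together, the window's code-behind ends up looking roughly like this (YourAssemblyName and the window class name are placeholders, just as in the article's fragments):

using System.Configuration;
using System.Globalization;
using System.Windows;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        // Read the culture from App.config and apply it before the XAML
        // (and its x:Static resource lookups) is initialized.
        Properties.Resources.Culture =
            new CultureInfo(ConfigurationManager.AppSettings["Culture"]);

        InitializeComponent();
    }
}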
https://www.thebestcsharpprogrammerintheworld.com/2018/04/26/localizing-a-wpf-program-using-c/
CC-MAIN-2021-39
refinedweb
569
59.6
> > Umm No. There is a menu item in the debug build but there is no code to implement this. This did not make the feature list by the time of the feature freeze.
> * Text to Table
> * Text wrapping around Images
We have this in 2.2. What we have in 2.3 is the "tightly" wrapped text around images with transparency.
> * Grammar Checking (Plugin)
> * Math Support (Plugin)
> * GNOME Office Chart Embedding (Plugin, experimental)
> * OpenDocument Support (Plugin, experimental)
We have substantial improvements to MS Word import, RTF import/export, WP import and HTML/LaTeX export.
Cheers
Martin
> > AbiWord v2.3.0 is parallel installable with AbiWord v2.2 so users can try it out
> > without disturbing their stable AbiWord 2.2 version. We are very much interested
> > in any bug you may find. Please report these to .
> > While we encourage people to try out the new snapshot, please be aware that it is a
> > development snapshot and is not expected to be stable in any sort of way.
> > Availability:
> > More information:
> > Enjoy!
> > The AbiWord Development Team
Wed May 11 00:48:56 2005
This archive was generated by hypermail 2.1.8 : Wed May 11 2005 - 00:48:56 CEST
http://www.abisource.com/mailinglists/abiword-user/2005/May/0019.html
CC-MAIN-2016-07
refinedweb
196
67.65
. Changed in version 3.3: The struct_time type was extended to provide the tm_gmtoff and tm_zone attributes when platform supports corresponding struct tm members. Use the following functions to convert between time representations: The module defines the following functions and data items: string of the following form: 'Sun Jun 20 23:21:05 1993'. If t is not provided, the current time as returned by localtime() is used. Locale information is not used by asctime().. Return the resolution (precision) of the specified clock clk_id. Availability: Unix. New in version 3.3. Return the time of the specified clock clk_id. Availability: Unix. New in version 3.3. Set the time of the specified clock clk_id. Availability: Unix. New in version 3.3. The Solaris OS has a CLOCK_HIGHRES timer that attempts to use an optimal hardware source, and may give close to nanosecond resolution. CLOCK_HIGHRES is the nonadjustable, high-resolution clock. Availability: Solaris. New in version 3.3. Clock that cannot be set and represents monotonic time since some unspecified starting point. Availability: Unix. New in version 3.3. Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments. Availability: Linux 2.6.28 or later. New in version 3.3. High-resolution per-process timer from the CPU. Availability: Unix. New in version 3.3. System-wide real-time clock. Setting this clock requires appropriate privileges. Availability: Unix. New in version 3.3. Thread-specific CPU-time clock. Availability: Unix. New in version 3.3.(). Nonzero if a DST timezone is defined. Get information on the specified clock as a namespace object. Supported clock names and the corresponding functions to read their value are: The result has the following attributes: New in version 3.3.. Like gmtime() but converts to local time. If secs is not provided or None, the current time as returned by time() is used. The dst flag is set to 1 when DST applies to the given time.. Availability: Windows, Mac OS X, Linux, FreeBSD, OpenBSD, Solaris. New in version 3.3. 3.3: tm_gmtoff and tm_zone attributes are available on platforms with C library supporting the corresponding fields in struct tm.
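As a quick illustration of the functions described above, the snippet below shows typical calls on Python 3.3 or later. The POSIX clock functions are only available on Unix, which is why they are guarded; the exact resolution values printed will vary by platform.

```python
import time

# Human-readable timestamp for the current local time
print(time.asctime(time.localtime()))        # e.g. 'Sun Jun 20 23:21:05 1993'

# Inspect a clock's properties as a namespace object (new in 3.3)
print(time.get_clock_info('monotonic'))      # adjustable, implementation, monotonic, resolution

# Unix only: resolution and current value of CLOCK_MONOTONIC
if hasattr(time, 'clock_gettime'):
    print(time.clock_getres(time.CLOCK_MONOTONIC))
    print(time.clock_gettime(time.CLOCK_MONOTONIC))
```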
https://wingware.com/psupport/python-manual/3.3/library/time.html
CC-MAIN-2015-35
refinedweb
365
61.93
.3: Numbers, Words, Sounds, and Pictures About This Page Questions Answered: Could we learn some Scala now, please? How do I use numbers in a Scala program? What about text? What’s a convenient way to experiment with individual commands? Topics: Working in the REPL environment. Some Scala basics: numbers, arithmetic operations, strings of characters, the println command. Accessing packages with import. Sounds and images. What Will I Do? Alternate between reading and practising. Rough Estimate of Workload:? A bit over an hour. Points Available: A13. Related Projects: GoodStuff, which you should already have from the previous chapter. This chapter, and nearly all upcoming ones, also require O1Library.) Notes: This chapter makes occasional use of sound, so speakers or headphones are recommended. They aren’t strictly necessary, though. Where Are We? Now that you have an overview of programming from the previous chapters, let’s start building some concrete skills one small step at a time. This is the plan: - Between this chapter and Chapter 1.6, you’ll learn to use selected programming techniques and understand the concepts behind them. You’ll program by giving individual instructions to the computer, one by one. You’ll use various techniques as “separate pieces”: they don’t form an entire application yet, but they’ll be useful when implementing just about any application program. - In Chapters 1.7 and 1.8, you’ll learn to create commands of your own by combining existing ones. - From Chapter 2.1 onwards, we’ll put the pieces together and construct application programs. The techniques that you’ll have learned by then will be useful as we do this. One Plus One Practically all programs require at least a bit of elementary arithmetic: addition, subtraction, multiplication, and/or division. For instance, the GoodStuff application from the previous chapter divides numbers to determine value-for-money figures, and Pong constantly computes new coordinates for the paddles and the ball while the game is running. Scala, like other programming languages, gives us tools for computing with numbers. For instance, you can issue this Scala command to instruct the computer to compute one plus one: 1 + 1 Simple. But where should you write such a command? One option would be to create an entire program that makes use of this command somehow and save that program in a file. But there’s another option that is often more convenient as we try out individual Scala commands: The REPL In the Eclipse menu, choose Alt+F11. Try it! When the REPL is launched, Eclipse asks you which project you’d like to associate with the new REPL session. Select GoodStuff. You’ll be greeted with a view that looks like this: The REPL appears at the bottom of the Eclipse view under the heading Scala Interpreter. You can type Scala code at the <type an expression> prompt. Press Ctrl+Enter to execute the code that you typed in. The REPL reports the results above the prompt. This view is good for interactive experimentation. “REPL” is short for the words read–evaluate–print loop, which convey the basic idea: - read: You can type in bits of Scala that the REPL receives as input, or “reads”. - evaluate: The REPL runs your code as soon as it receives it. In the case of arithmetic, for instance, it performs the given calculation to produce a result. - print: The REPL reports the results of evaluation onscreen. - loop: This interaction between the user and the REPL keeps repeating as long as the user likes. 
Find the prompt at the bottom edge of the REPL. Enter 1 + 1, for example, and press Ctrl+Enter. The output will be something like this: 1 + 1res0: Int = 2 This the Scala REPL’s way of informing you that the result of the computation is two. res0 comes from the word result and means “result number 0”, that is, the first result obtained during this REPL session. Here, as in many programming contexts, numbers run from zero upwards. Int is the Scala name for integer numbers. In Scala, as in many programming contexts, many terms are based on English words. Ask the REPL to compute some more values. Each result gets reported after the previous ones and labeled with a larger number: 2 + 5res1: Int = 7 1 - 5res2: Int = -4 2 * 10res3: Int = 20 15 / 3res4: Int = 5 Feel free to experiment with other arithmetic operations. A practical hint Hold down the Ctrl key and press the up and down arrows on your keyboard (on a Mac, Ctrl+Cmd plus the arrows). As you see, you can access and rerun the commands you issued previously during the same REPL session. This is often very convenient. The REPL vs. a whole program In the previous chapter, we looked at the GoodStuff application. It was stored in files, which is why it was easy for us to run and rerun the entire app at will. The acts of writing the program code and running it were unmistakeably separate from each other. Notice how working in the REPL has a different character. The commands you enter there don’t get permanently stored anywhere. In the REPL, the same separation between writing and running a program doesn’t exist: each command is executed as soon as you finish typing it in. On Expressions The examples above were extremely simple but they already involve a number of fundamental programming concepts that’ll be essential to understand as we move forward. The most important of these concepts is the expression (lauseke). For our purposes, it’s not necessary to define the concept formally. This will suffice: an expression, in programming, is a piece of code that has a value (arvo). So far, we’ve used arithmetic expressions, which get their values from mathetical computations. For example, 1 + 1 is an expression; its value is the integer two. Below is one more example in the REPL and a few other key concepts explained in terms of the example. (Reminder: highlight the parts of code associated with a boxed explanation by hovering the mouse cursor over the box.) 50 * 30res5: Int = 1500 Intis one of the data types defined as part of the Scala language. It’s perfectly possible to enter just a literal as an input to the REPL, as shown below. The evaluation of such inputs couldn’t be much simpler: the literal’s value is exactly what it says on the tin: 123res6: Int = 123 Already in this chapter — and increasingly in subsequent ones — you’ll see data types other than Int, operators other than those for basic math, and expressions more complex than literal and arithmetic ones. Spaces around operators The spaces that surround the operators in our examples are not mandatory and don’t affect the values of the arithmetic expressions. Spaces do, however, make the code a bit easier to read (once we start working with larger amounts of code). Using them is part of good programming practice. Dividing Numbers Scala’s arithmetic is largely familiar from mathematics. However, there are some things that may surprise you. Experiment with division. Try these, for instance: 15 / 5res7: Int = 3 15 / 7res8: Int = 2 19 / 5res9: Int = 3 What if I want to round numbers “right”? 
Scala’s way of handling integers is common in programming languages. It’s perhaps surprising how often it’s convenient that integer division works as it does. There are certainly tools for your other rounding needs. We’ll run into them later (e.g., in Chapter 4.5). Operator Precedence and Brackets Some operators have precedence over others. As far as arithmetic operators are concerned, the rules should be familiar from mathematics. Multiplication is evaluated before addition, for instance: 1 + 2 * 3res10: Int = 7 5 * 4 + 3 * 2 + 1res11: Int = 27 You can use round brackets to influence evaluation order. This, too, has a familiar feel: (1 + 2) * 3res12: Int = 9 5 * ((4 + 3) * 2 + 1)res13: Int = 75 A parenthetical warning In school math, you may have used a different notation where one set of brackets appears inside another. For instance, you may have written square brackets around regular round brackets just for clarity of reading; each kind of bracket had the same meaning. That won’t work in Scala code. In Scala, round brackets are used for grouping expressions (and for a few other things). Square or curly brackets won’t do for that purpose. Those other brackets have other uses, which we’ll get to in due time. In this matter, Scala is a typical programming language. It’s common in programming that different kinds of brackets have different meanings. On Decimals What about decimal numbers? In many ways, they work as you might expect. In Scala, like in most English-speaking countries, the dot is used as the decimal mark: 4.5res14: Double = 4.5 2.12 * 3.2205res15: Double = 6.82746 4.0res16: Double = 4.0 As you can tell from the above, “decimal numbers” go by the name Double in Scala. The type is oddly named for historical reasons. When you use values of type Double, division works more like how you might expect. Dividing a Double by a Double yields a result that is also a Double: 999.0 / 1000.0res17: Double = 0.999 Combining numerical types You can use Double values and Int values in the same expression: 29 / 10res18: Int = 2 29.0 / 10res19: Double = 2.9 29 / 10.0res20: Double = 2.9 30.0 / 10.0res21: Double = 3.0 If one of the operands is an integer and one is a decimal number, the result is a decimal number. That applies even if the quotient is equal to an integer. What about more complex expressions that contain different kinds of numbers? Say, (10 / 6) * (2.0 + 3) / 4. Here, the operators and parentheses determine the order of evaluating the subexpressions; that order determines the type of each value produced as an intermediate result. Can you work out the value of that expression? To check your thinking, see this interactive animation: Computer memory and the animations in this ebook The memory of a computer stores potentially vast amounts of data in the form of bits. As a program runs, it can reserve parts of this memory for various purposes. Each part of memory has its own unique address, a running number of sorts, that identifies that particular memory location. The bit-level details of how computer memory works are unimportant in this introductory course. Nonetheless, even you the beginner (we presume) will greatly benefit from having a general idea of what gets stored in memory and when. This is why we use animated diagrams, like the one above, that depict the effects of a program run on the contents of memory. The “areas” in these diagrams correspond to parts of computer memory that have been reserved for different purposes. 
The first animated example was extremely simple, and you might have understood the expression just fine without it. Even so, it’s a good idea to get familiar with this way of representing program runs, since we’ll be using similar animations to illustrate phenomena that are considerably more complex. Strings of Characters Besides numbers, most programs need to manipulate text. Let’s try it: "Hi"res22: String = Hi "Hey, there's an echo!"res23: String = Hey, there's an echo! As the REPL tells us, chunks of text or strings (merkkijono) are represented in Scala by the String type. A string literal is a string written into program code as is (cf. the Int and Double literals above). String literals go in double quotation marks as shown. (Thinking back to the previous chapter, you may recall another string literal: the quoted bit in GoodStuff that contained a typo.) Forget the quotation marks, and you’re liable to get an error message, because the REPL will then try to interpret the text you wrote as a Scala command: Hi<console>:8: error: not found: value Hi Hi ^ A Command for Printing The REPL reports the value and type of each expression, which is a useful service. But suppose we wish to output something other than the familiar litany of resX: SomeType = value? Scala’s println command lets us tailor the output to what we want. println("Greetings!")Greetings! println(123)123 println(-5.2)-5.2 Even though these commands are fairly self-evident, we can spot several features of Scala that are worthy of our attention: printlnis short for print line. printlncommand is given a single parameter expression that details what the command should print out. (The word “print” may lead the non-programmer’s thoughts to the kinds of printers that churn out paper. But programmers also use this word about outputting text onscreen.) A parameter expression doesn’t have to be simply a literal. The following animation portrays the stages of executing the command println("Programming" + (100 - 99)). As you saw, execution progressed “from inside the brackets outwards”. Later on, you’ll see how this rule of thumb holds in many situations: the parameter expressions inside the round brackets get evaluated first, producing parameter values (parametriarvo) that are then passed to the command, which uses them as it executes. A Stringed Instrument We can use strings of characters to represent text, but strings are useful for representing other things as well. Such as music. Here is a print command that outputs the first couple of bars of Ukko Nooa (“Uncle Noah”), a very simple song traditionally taught to Finnish children as they start to learn the piano, as evidenced on YouTube. In our program, we represent notes as letters that correspond to the keys of a piano as shown in the picture above. println("cccedddf")cccedddf That was none too exciting. It would be more rewarding to play the notes than just printing them out. Let’s pick up another command. Unlike println, which is part of the Scala’s basic toolkit, the command we use next, o1.play, has been designed for the needs of this very course. This command will bring some life to programs such as the string-manipulating examples in this chapter. (The play command is defined in O1Library. If you didn’t import that project into your workspace in Chapter 1.2, do it now and relaunch your REPL.) Try the following. o1.play("cccedddf") The command produces no visible printout. Instead, it plays the notes listed in its String parameter on a virtual piano. 
Did you have sounds enabled on your computer? Also try passing other strings as parameters to o1.play. For instance, you can include spaces, which o1.play interprets as pauses: o1.play("cccdeee f ffe e") And hyphens, which o1.play interprets as longer notes: o1.play("cccdeee-f-ffe-e") Sound in O1 Here and there during O1, we’ll make use of sound, as we just did. Obviously, this will work only if you have a sounds enabled on your computer. If you study among other people, please take a pair of headphones along. If you don’t have access to headphones, you can use println instead of o1.play. Parts of this ebook will be duller for it, but you won’t endanger your course performance. We don’t require O1 students to have musical ability. A note about notes o1.play uses “European” names for notes. This means that what is known as a B note in many English-speaking countries, is known as H instead. Here’s an example: o1.play("cdefgah") The character b (or alternatively ♭) denotes flat notes. This plays an E-flat, for instance: o1.play("eb") Strings attached We can combine strings with the plus operator. When it comes to numbers, the operator stands for addition, but it has a different meaning when strings are involved: "cat" + "fish"res24: String = catfish "REPL with" + "out" + " a cause"res25: String = REPL without a cause Notice that spaces inside the quotation marks do affect the result. Of course, the same operator works fine even in a parameter expression: println("Uncle Noah, Uncle Noah," + " was an upright man.")Uncle Noah, Uncle Noah, was an upright man. o1.play("cccedddf" + "eeddc---") String multiplication Another way to change octaves In addition to supporting < and > as just described, o1.play lets you mark an octave number for any individual note. You can use the numbers from 0 to 9; the default octave is number 5. The following two commands have the same effect, for instance: o1.play(">cd<<e")o1.play("c6d6e4") This is unimportant as such, since it’s specific to a particular music-playing command created for the purposes of this course. Still, this can be convenient to know if you choose to mess around with o1.play just for fun. Pictures and Packages GoodStuff marks the favorite experience with a picture of a grinning face. Pong displays the paddles as pictures of rectangles and the ball as a picture of a circle. Programs manipulate pictures. Displaying a picture Let’s use Scala to load an image from a network address. Try it. o1.Pic("")res26: o1.gui.Pic = Picis the name of a data type for representing images and it’s spelled in upper case just as Stringand Intare. o1indicates that we’re again using a tool created for this course. Not to worry, though: as you use these tools, you are sure to pick up general principles that are useful outside O1 as well. Picthat holds image data loaded from the net. Where did that get us? A particular image is now stored in your computer’s memory, but we didn’t get to see it. One easy way to display an image is o1.show, which works for pictures much like o1.play did for sounds. See below for an example. We’ll come across other ways of displaying images later on. o1.show(o1.Pic("")) The image appears in a separate little window in the vicinity of the top-left corner of your screen. Click it or press Esc to close the window. In the preceding example, we loaded the picture from the net. You can also load an image from a file stored on your computer’s disk drive. 
Like so: o1.show(o1.Pic("d:/example/folder/mypic.png")) On the other hand, it’s often not necessary to write the full path. If the image file is located within an active project, there is a simpler way. Assuming you selected GoodStuff when you launched the REPL, the following will also work, since face.png is a file within that project. o1.show(o1.Pic("face.png")) o1 and other packages You’ve seen that some of the tools that we use are located in a package called o1. When we use those tools, we need to mention in the Scala program that we’re accessing that package. There are a number of reasons why programming tools are placed in different packages. One of the main ones is to avoid name clashes: when building a larger program, perhaps as a collaborative effort, or when using tools built by others, you easily end up with the same name being used for different things in different places. For instance, the names play or show might mean something else in a context other than o1. A solution is to place the tools with the same name in different packages. Then we can use the package names to indicate which tool we use. A downside of this approach is that repeatedly typing in the package name is tiresome and can make the code harder to read. We can mitigate the downside with the import command. Let’s get to know it before we get back to working with pictures. Convenient use of tools with import When we plan to repeatedly use a command in package o1 — say, o1.show — we can first indicate this intention: import o1.showimport o1.show This import command means, roughly: “Wherever it says show below, it refers to the show command that’s defined in package o1.” The import command isn’t an expression and doesn’t have a value. The REPL acknowledges it by simply repeating the command text back at us. Having issued the import, we can now work with pictures a bit more conveniently: show(o1.Pic(""))show(o1.Pic("face.png")) showwith o1. Pic, though. If we had also given the command import o1.Picearlier, we could have omitted this prefix, too. We could type in a separate import command for each of the tools in package o1 that we intend to use. But instead, we’ll often use this handy notation to import the entire contents of the package in one go: import o1._import o1._ _has various context-dependent meanings in Scala, but generally speaking it means something like “any” or “all”. Now we can use any tool in the package with no hassle: play("cdefg")show(Pic(""))show(Pic("face.png")) Fair enough, the imports didn’t save all that many keystrokes yet, but they will, as we’ll soon use more tools from the package. In later chapters, we’ll be importing many other tools from many other packages. Colored shapes The Pong game is an example of a program whose graphics are based on familiar geometric shapes. Our package provides tools that make it easy to define such shapes in different colors. Let’s play with these tools a bit. Let’s start with colors. Assuming the command import o1._ has been issued, you can refer to common colors simply by their English names: Blueres27: o1.gui.Color = Blue Greenres28: o1.gui.Color = Green DarkGreenres29: o1.gui.Color = DarkGreen Color. Let’s use the circle command to create a blue circle: circle(200, Blue)res30: o1.gui.Pic = circle-shape Picdata type that already featured in earlier examples. Any value of type Pic is a valid parameter for show. 
Instead of an image from the net or your local drive, we can show a circle whose attributes we define in the Scala code: show(circle(200, Blue)) Experiment with other colors and shapes. For instance, here are a couple of rectangles and an isosceles triangle: show(rectangle(200, 500, Green))show(rectangle(500, 200, Red))show(triangle(150, 200, Orange)) About the REPL Perhaps you have wondered whether the REPL is a tool for students/beginners only rather than for serious professionals? No, it isn’t. Many professionals, too, use a REPL for experimentation and testing. Besides, even serious professionals are students as they familiarize themselves with new programming languages and program components created by others. Complex program components can be used in the REPL, including the components of specific applications. Even though we haven’t practiced coding much yet, you can already easily use the REPL to experiment with a component of an existing program. Try the following if you want. Launching the GoodStuff GUI in the REPL In the previous chapter, you used the menu in Eclipse to launch the GoodStuff GUI. It’s also possible to display the GUI window by entering the following commands in the REPL. import o1.goodstuff.gui.CategoryDisplayWindowimport o1.goodstuff.gui.CategoryDisplayWindow import o1.goodstuff.Categoryimport o1.goodstuff.Category new CategoryDisplayWindow(new Category("Hotel", "night"))... Here’s a brief explanation: This is how we tell the computer to create a new category of experiences for hotels (or whatever you choose, if you replace the string literals with something else) and a new GUI window that enables you to record new experiences. To accomplish all this, we use tools defined in the packages o1.goodstuff and o1.goodstuff.gui. The exact meaning of the example code will be clearer to you after the first couple of weeks of O1. Summary of Key Points - You can combine integers ( Int), decimal numbers ( Double), and arithmetic operators to form arithmetic expressions. - An expression is a piece of code that has a value. To evaluate an expression is to determine the expression’s value. - You can use the type Stringto form expressions whose values are strings of characters, such as text. - You can use the printlncommand to output strings or other values. - The importcommand comes in handy when you want to use the tools contained in a particular package, such as the o1package that we often use in O1. - The o1package gives us tools for playing notes represented as strings ( play) and working with images ( Pic, show, circle, Red, etc.). - The REPL environment is convenient for experimenting with new programming techniques, testing, and learning. - During the first weeks of O1, you’ll find out how the basic data types, expressions, and operations from this chapter can be used for building GoodStuff or another application program. - Links to the glossary: REPL; expression, value, to evaluate, literal; operator; data type; string; to print; parameter expression, parameter value (or argument). Below is a diagram that is intended to clarify some of the most important concepts we have covered and their main relationships. We’ll add to this diagram later. Learn the concepts in this chapter! Many of the terms and concepts introduced in this chapter are central to O1. Not simply for their own sake, but because you can use these concepts to make sense of programming phenomena. 
Knowing the terminology will help you both read this ebook and communicate with other programmers (such as your student pair or the teaching assistants). Pay particular attention to the concepts in the diagram above. But it’s even more important that... ... you get a feel for programming in practice. The REPL is great for this. Don’t hesitate to try things in the REPL, including things that aren’t specifically suggested in these. *.
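If you want something to try beyond the chapter's own examples, here is one possible REPL session that combines the tools introduced above. It assumes the O1Library project is available, exactly as in the chapter; the particular melody and shapes are just made-up experiments.

```scala
import o1._

// An arithmetic expression mixing Int and Double values:
(1 + 2 * 3) / 2.0

// Build a longer melody string with + and play it:
val melody = "cccedddf" + "eeddc---"
play(melody)

// Show a couple of shapes in different colors:
show(circle(100, Red))
show(rectangle(300, 100, Blue))
```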
https://plus.cs.aalto.fi/o1/2018/w01/ch03/
CC-MAIN-2020-24
refinedweb
4,305
65.93
In Tkinter, graphical images are displayed by creating independent PhotoImage or BitmapImage objects, and then attaching those image objects to other widgets via image attribute settings. Buttons, labels, canvases, text, and menus can display images by associating prebuilt image objects in this way. To illustrate, Example 9-36 throws a picture up on a button. gifdir = "../gifs/" from Tkinter import * win = Tk( ) igm = PhotoImage(file=gifdir+"ora-pp.gif") Button(win, image=igm).pack( ) win.mainloop( ) I could try to come up with a simpler example, but it would be toughall this script does is make a Tkinter PhotoImage object for a GIF file stored in another directory, and associate it with a Button widget's image option. The result is captured in Figure 9-36. PhotoImage and its cousin, BitmapImage, essentially load graphics files and allow those graphics to be attached to other kinds of widgets. To open a picture file, pass its name to the file attribute of these image objects. Canvas widgetsgeneral drawing surfaces discussed in more detail later in this tourcan display pictures too; Example 9-37 renders Figure 9-37. gifdir = "../gifs/" from Tkinter import * win = Tk( ) img = PhotoImage(file=gifdir+"ora-lp.gif") can = Canvas(win) can.pack(fill=BOTH) can.create_image(2, 2, image=img, anchor=NW) # x, y coordinates win.mainloop( ) Buttons are automatically sized to fit an associated photo, but canvases are not (because you can add objects to a canvas, as we'll see in Chapter 10). To make a canvas fit the picture, size it according to the width and height methods of image objects, as in Example 9-38. This version will make the canvas smaller or larger than its default size as needed, lets you pass in a photo file's name on the command line, and can be used as a simple image viewer utility. The visual effect of this script is captured in Figure 9-38. gifdir = "../gifs/" from sys import argv from Tkinter import * filename = (len(argv) > 1 and argv[1]) or 'ora-lp.gif' # name on cmdline? win = Tk( ) img = PhotoImage(file=gifdir+filename) can = Canvas(win) can.pack(fill=BOTH) can.config(width=img.width(), height=img.height( )) # size to img size can.create_image(2, 2, image=img, anchor=NW) win.mainloop( ) And that's all there is to it. In Chapter 10, we'll see images show up in a Menu, other Canvas examples, and the image-friendly Text widget. In later chapters, we'll find them in an image slideshow (PyView), in a paint program (PyDraw), on clocks (PyClock), and so on. It's easy to add graphics to GUIs in Python/Tkinter. Once you start using photos in earnest, though, you're likely to run into two tricky bits which I want to warn you about here: Supported file types At present, the PhotoImage widget only supports GIF, PPM, and PGM graphic file formats, and BitmapImage supports X Windows-style .xbm bitmap files. This may be expanded in future releases, and you can convert photos in other formats to these supported formats, of course. But as we'll see later in this chapter, it's easy to support additional image types with the PIL open source extension toolkit. Hold on to your photos Unlike all other Tkinter widgets, an image is utterly lost if the corresponding Python image object is garbage collected. That means you must retain an explicit reference to image objects for as long as your program needs them (e.g., assign them to a long-lived variable name or data structure component). 
Python does not automatically keep a reference to the image, even if it is linked to other GUI components for display; moreover, image destructor methods erase the image from memory. We saw earlier that Tkinter variables can behave oddly when reclaimed too, but the effect is much worse and more likely to happen with images. This may change in future Python releases (though there are good reasons for not retaining big image files in memory indefinitely); for now, though, images are a "use it or lose it" widget. I tried to come up with an image demo for this section that was both fun and useful. I settled for the fun part. Example 9-39 displays a button that changes its image at random each time it is pressed. from Tkinter import * # get base widget set from glob import glob # filename expansion list import demoCheck # attach checkbutton demo to me import random # pick a picture at random gifdir = '../gifs/' # where to look for GIF files def draw( ): name, photo = random.choice(images) lbl.config(text=name) pix.config(image=photo) root=Tk( ) lbl = Label(root, text="none", bg='blue', fg='red') pix = Button(root, text="Press me", command=draw, bg='white') lbl.pack(fill=BOTH) pix.pack(pady=10) demoCheck.Demo(root, relief=SUNKEN, bd=2).pack(fill=BOTH) files = glob(gifdir + "*.gif") # GIFs for now images = map((lambda x: (x, PhotoImage(file=x))), files) # load and hold print files root.mainloop( ) This code uses a handful of built-in tools from the Python library: The Python glob module we met earlier in the book gives a list of all files ending in .gif in a directory; in other words, all GIF files stored there. The Python random module is used to select a random GIF from files in the directory: random.choice picks and returns an item from a list at random. To change the image displayed (and the GIF file's name in a label at the top of the window), the script simply calls the widget config method with new option settings; changing on the fly like this changes the widget's display. Just for fun, this script also attaches an instance of the demoCheck check button demo bar, which in turn attaches an instance of the Quitter button we wrote earlier. This is an artificial example, of course, but again it demonstrates the power of component class attachment at work. Notice how this script builds and holds on to all images in its images list. The map here applies a PhotoImage constructor call to every .gif file in the photo directory, producing a list of (file,image) tuples that is saved in a global variable (a list comprehension [(x, PhotoImage(file=x)) for x in files] would do the same). Remember, this guarantees that image objects won't be garbage collected as long as the program is running. Figure 9-39 shows this script in action on Windows. Although it may not be obvious in this grayscale book, the name of the GIF file being displayed is shown in red text in the blue label at the top of this window. This program's window grows and shrinks automatically when larger and smaller GIF files are displayed; Figure 9-40 shows it randomly picking a taller photo globbed from the image directory. And finally, Figure 9-41 captures this script's GUI displaying one of the wider GIFs, selected completely at random from the photo file directory.[*] [*] This particular image appeared as a banner ad on developer-related web sites such as slashdot.com when the book Learning Python was first published. It generated enough of a backlash from Perl zealots that O'Reilly eventually pulled the ad altogether. 
Which is why, of course, it appears in this book.
While we're playing, let's recode this script as a class in case we ever want to attach or customize it later (it could happen). It's mostly a matter of indenting and adding self before global variable names, as shown in Example 9-40.
from Tkinter import *                # get base widget set
from glob import glob                # filename expansion list
import demoCheck                     # attach check button example to me
import random                        # pick a picture at random
gifdir = '../gifs/'                  # default dir to load GIF files
class ButtonPicsDemo(Frame):
    def __init__(self, gifdir=gifdir, parent=None):
        Frame.__init__(self, parent)
        self.pack()
        self.lbl = Label(self, text="none", bg='blue', fg='red')
        self.pix = Button(self, text="Press me", command=self.draw, bg='white')
        self.lbl.pack(fill=BOTH)
        self.pix.pack(pady=10)
        demoCheck.Demo(self, relief=SUNKEN, bd=2).pack(fill=BOTH)
        files = glob(gifdir + "*.gif")
        self.images = map(lambda x: (x, PhotoImage(file=x)), files)
        print files
    def draw(self):
        name, photo = random.choice(self.images)
        self.lbl.config(text=name)
        self.pix.config(image=photo)
if __name__ == '__main__':
    ButtonPicsDemo().mainloop()
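The "hold on to your photos" warning above is easy to demonstrate. The sketch below, written in the same Python 2 style Tkinter as the book's examples, keeps the PhotoImage in a long-lived name so it survives garbage collection; the file name is just a placeholder.

```python
from Tkinter import *

root = Tk()
photo = PhotoImage(file='../gifs/ora-pp.gif')   # keep a long-lived reference!
Label(root, image=photo).pack()
# If 'photo' were only a local variable inside a function that had already
# returned, the image object could be reclaimed and the label would go blank.
root.mainloop()
```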
https://flylib.com/books/en/2.726.1.93/1/
CC-MAIN-2019-43
refinedweb
1,465
63.19
by Zoran Horvat Jan 06, 2014
We can discover that a given number is prime by failing to prove the opposite, i.e. by verifying that there is no eligible value which divides it without remainder. The trick in this task is to properly identify eligible divisors. These should be numbers greater than one which are not equal to the number tested. Further on, there is no point in taking into account values greater than N because none of them could possibly divide N. So, by now we have constrained candidates to the set {2, 3, ..., N-1}.
Now suppose that there is such a value k which is the smallest number which divides N:
N = k * m, for some integer m
This implies that m is also a divisor of N. But the fact that k is the smallest divisor implies that m cannot be less than k. From these facts we can derive one important conclusion:
k * k <= k * m = N, i.e. k <= sqrt(N)
This has significantly reduced the problem. But we can make a step further. There is no point in trying to divide N with even values of k, except the trivial candidate 2. This is because if N is not divisible by 2, then there is no chance that any other even number could divide it. The same logic goes with number 3. If 3 doesn't divide N, then no other multiple of 3 can divide N either. This leads to an interesting conclusion about possible divisors of N: the only viable candidates that could divide N are values 2, 3 and odd numbers around multiples of 6:
6*i - 1 and 6*i + 1, for i = 1, 2, 3, ...
This rules out roughly two out of three candidates not exceeding the square root of N. And here is the pseudocode which solves the problem:
function IsPrime(n)
begin
    result = false
    if n <= 3 then
        result = true
    else if n mod 2 <> 0 AND n mod 3 <> 0 then
    begin
        k = 5
        step = 2
        while k * k <= n AND n mod k <> 0
        begin
            k = k + step
            step = 6 - step
        end
        if n = k OR n mod k <> 0 then
            result = true
    end
    return result
end
The following C# code is a console application which implements the IsPrime function to test whether its argument is prime or not:
using System;

namespace PrimeNumber
{
    public class Program
    {
        static bool IsPrime(int n)
        {
            bool result = false;
            if (n <= 3)
            {
                result = true;
            }
            else if (n % 2 != 0 && n % 3 != 0)
            {
                int k = 5;
                int step = 2;
                while (k * k <= n && n % k != 0)
                {
                    k = k + step;
                    step = 6 - step;
                }
                if (n == k || n % k != 0)
                    result = true;
            }
            return result;
        }

        static void Main(string[] args)
        {
            while (true)
            {
                Console.Write("Enter number (zero to exit): ");
                int n = int.Parse(Console.ReadLine());
                if (n <= 0)
                    break;
                if (IsPrime(n))
                    Console.WriteLine("Number {0} is prime.", n);
                else
                    Console.WriteLine("Number {0} is not prime.", n);
            }
        }
    }
}
When the application above is run, it produces the following output:
Enter number (zero to exit): 2
Number 2 is prime.
Enter number (zero to exit): 3
Number 3 is prime.
Enter number (zero to exit): 17
Number 17 is prime.
Enter number (zero to exit): 18
Number 18 is not prime.
Enter number (zero to exit): 143
Number 143 is not prime.
Enter number (zero to exit): 64657551
Number 64657551 is not prime.
Enter number (zero to exit): 64657553
Number 64657553 is prime.
Enter number .
http://codinghelmet.com/exercises/prime-testing
CC-MAIN-2019-04
refinedweb
549
71.95
On Fri, Jun 08, 2012 at 03:02:53PM -0700, Andrew Morton wrote:
> On Sat, 9 Jun 2012 00:41:03 +0300
> "Kirill A. Shutemov" <[email protected]> wrote:
>
> > There's no reason to call rcu_barrier() on every deactivate_locked_super().
> > We only need to make sure that all delayed rcu free inodes are flushed
> > before we destroy related cache.
> >
> > Removing rcu_barrier() from deactivate_locked_super() affects some
> > fast paths. E.g. on my machine exit_group() of a last process in IPC
> > namespace takes 0.07538s. rcu_barrier() takes 0.05188s of that time.
>
> What an unpleasant patch. Is final-process-exiting-ipc-namespace a
> sufficiently high-frequency operation to justify the change?
>
> I don't really understand what's going on here. Are you saying that
> there is some filesystem against which we run deactivate_locked_super()
> during exit_group(), and that this filesystem doesn't use rcu-freeing
> of inodes? The description needs this level of detail, please.

I think the rcu_barrier() is in wrong place. We need it to safely destroy
inode cache. deactivate_locked_super() is part of umount() path, but all
filesystems I've checked have inode cache for whole filesystem, not
per-mount.

> (kmem_cache_destroy() already has an rcu_barrier(). Can we do away
> with the private rcu games in the vfs and switch to
> SLAB_DESTROY_BY_RCU?)

--
 Kirill A. Shutemov
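For context, the change under discussion amounts to dropping the barrier from the generic umount path and letting each filesystem flush the delayed RCU-freed inodes itself, right before its inode cache is destroyed. A typical per-filesystem fragment would look roughly like the sketch below; this is illustrative, not quoted from the actual patch series.

```c
static void destroy_inodecache(void)
{
	/*
	 * Make sure all delayed rcu free inodes are flushed before we
	 * destroy the inode cache.
	 */
	rcu_barrier();
	kmem_cache_destroy(inode_cachep);
}
```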
http://lkml.org/lkml/2012/6/8/543
CC-MAIN-2015-40
refinedweb
218
51.75
I have been having trouble getting my program to print out individual elements of a list. I have been given a number of tips and suggestions that have helped me a lot, however I am still not getting the desired output. First I need to show the code that I am working on. What I am currently using is an extension class that handles printing the list onto the screen. using UnityEngine; using System.Collections; using System.Collections.Generic; public static class Extensions { /// <summary> /// Converts a list to a string /// </summary> public static string ConvertToString<T>(this List<T> list) { var output = string.Empty; foreach (var item in list) { output += item.ToString() + ","; } return output; } } This is the list that I want to use: public static List<string> task1Gesture = new List<string>(new[] { "Tap", "Grab", "Grip" }); This is how I am using the list and the extension class string[] gestureOptions = { Cube_DemoPhase.task1Gesture.ConvertToString<string>(), Cube_DemoPhase.task1Gesture.ConvertToString<string>(), Cube_DemoPhase.task1Gesture.ConvertToString<string>() }; This diagram will show what I am getting on the screen: Can anyone help me figure out what I need to do in order to get this right? If you initialize a generic list, there's no need to use new[] inside. It's enough to write: new[] new List<string> { "Tap", "Grab", "Grip" } Answer by ArkaneX · Jan 04, 2014 at 12:24 AM Your extension method outputs all list elements as a single string, while you need to get separate elements from it. When you create gestureOptions array, you just pass to it three the same elements. The easiest solution is to just fill the gestureOptions variable with proper values: gestureOptions string[] gestureOptions = { "Tap", "Grab", "Grip" }; But in case when you have to store these values in Cube_DemoPhase class as a generic list, then you can do: Cube_DemoPhase string[] gestureOptions = Cube_DemoPhase.task1Gesture.ToArray(); There's no need for your extension class at all... EDIT, that I hope will finish the line of "array of strings problem" After reading all the comments + all your other questions related to the same problem, my solution is below. Please note that it required LINQ, so add using System.Linq; on top of your class: using System.Linq; string[] gestureOptions = { "a", "b", "c", "d", "e" }; int[] indexes = Enumerable.Range(0, gestureOptions.Length).ToArray(); for (int i = indexes.Length - 1; i > 0; i--) { var r = Random.Range(0, i); var tmp = indexes[i]; indexes[i] = indexes[r]; indexes[r] = tmp; } var index = 0; gestureOptions = gestureOptions.OrderBy(x => indexes[index++]).ToArray(); Now, in indexes array, you have the positions of your randomized strings... And here's my advice: next time please describe your problem fully in one question, and don't be afraid that it is long. You decided to share your problem and code piece by piece, and that caused a lot of confusion and lost time. Believe me - if the first question related to this problem were detailed, you'd have your answer the day you posted it. Please go through all the previous question and close them all. And don't forget to reward people who tried to help by upvoting their answers/comments. This. Upvoted. @ArkaneX: I get errors when I do string[] gestureOptions = Cube_DemoPhase.task1Gesture.ToArray(); string[] gestureOptions = Cube_DemoPhase.task1Gesture.ToArray(); @HappyMoo - do you ever sleep? 
;) You can also do this:
string[] gestureOptions = { Cube_DemoPhase.task1Gesture[0], Cube_DemoPhase.task1Gesture[1], Cube_DemoPhase.task1Gesture[2] };
ArkaneX, I sleep between the seconds
@HappyMoo - you forgot task1Gesture in your comment with indexes.
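If the goal of the follow-up code in the answer is simply to shuffle the options, a plain in-place Fisher–Yates swap over the string array is a simpler alternative to building a separate index array and sorting with LINQ. This is only a sketch using UnityEngine.Random, not code from the accepted answer:

```csharp
string[] gestureOptions = { "a", "b", "c", "d", "e" };

// Fisher-Yates shuffle: after the loop the elements are in random order.
for (int i = gestureOptions.Length - 1; i > 0; i--)
{
    int r = UnityEngine.Random.Range(0, i + 1);   // int overload: upper bound exclusive
    string tmp = gestureOptions[i];
    gestureOptions[i] = gestureOptions[r];
    gestureOptions[r] = tmp;
}
```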
https://answers.unity.com/questions/608625/trouble-with-my-string-list.html
CC-MAIN-2020-05
refinedweb
621
54.73
A Simple CString Tokenizer
As the name suggests, this is a simple class to extract tokens from a CString. I wrote this class because during the course of my final year project at University I needed a simple way to extract 'tokens' from a CString. This is what I came up with. It's probably not the best or most efficient way to accomplish the task - but it works, which is the main thing.
- First include the CToken header file in your program:
#include "Token.h"
- Then you are free to create and use an instance of CToken. For example:
CString str = "A B C D";
CString newTok;
CToken tok(str);
tok.SetToken(" ");
while(tok.MoreTokens())
{
    newTok = tok.GetToken();
}
Improvements: If people can suggest any improvements or there are bugs, please drop me an email
Download demo project - 15 KB
Date Posted: March 3, 1999
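The article links to a demo project rather than listing the class itself, so here is a hypothetical sketch of what a CToken-like class could look like using MFC's CString. The method names follow the usage example above, but the implementation details are guesses, not the original code.

```cpp
#include <afx.h>   // CString (MFC)

class CToken
{
public:
    explicit CToken(const CString& str) : m_str(str), m_pos(0), m_sep(_T(" ")) {}

    void SetToken(const CString& sep) { m_sep = sep; }

    BOOL MoreTokens() const { return m_pos < m_str.GetLength(); }

    CString GetToken()
    {
        int end = m_str.Find(m_sep, m_pos);        // next separator, or -1
        if (end == -1) end = m_str.GetLength();
        CString tok = m_str.Mid(m_pos, end - m_pos);
        m_pos = end + m_sep.GetLength();           // skip past the separator
        if (m_pos > m_str.GetLength()) m_pos = m_str.GetLength();
        return tok;
    }

private:
    CString m_str;   // the string being tokenized
    int     m_pos;   // current scan position
    CString m_sep;   // separator set by SetToken()
};
```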
http://www.developer.com/net/cplus/article.php/633611/A-Simple-CString-Tokenizer.htm
CC-MAIN-2016-50
refinedweb
143
72.97
Hi Zolt=E1n,. Hope this helps, benny Hi, I am trying to port my application from Linux to MinGW. I found out that I cannot get my sources that include <sql.h> to compile by themselves. I found out that the problem is that sql.h is not enough by itself under MinGW. In the search for the missing type definitions and constant defines, I found that I have to #include <ansidecl> (because of its "#define CONST const" line and <windef.h> (because of its many typedefs) before <sql.h>. Another this is that under MinGW I have to test for -lodbc32 instead of -lodbc and use the first if found. But the second part cannot be reached for reasons below. Here's the (I think correct) configure.in script for testing the needs of the compilation of sql.h. AC_MSG_CHECKING(whether <sql.h> needs <ansidecl.h> and <windef.h>) AC_TRY_COMPILE([#include <sql.h>], [], sql_h_needs_windef_h=no, AC_TRY_COMPILE([#include <ansidecl.h> #include <windef.h> #include <sql.h>], [], sql_h_needs_windef_h=yes, AC_MSG_ERROR([don't know how to compile <sql.h>]) )) AC_MSG_RESULT([$sql_h_needs_windef_h]) if test $sql_h_needs_windef_h = yes; then AC_DEFINE_UNQUOTED(NEED_WINDEF_H, 1) fi But it still does not gives me the correct result. It stops with "error: don't know how to compile <sql.h>" This is the part from config.log that shows the error: In file included from c:/mingw/include/sql.h:13, from configure:5416: c:/mingw/include/sqltypes.h:24: parse error before ';' token configure:5411: $? = 1 configure: failed program was: | #line 5389 "configure" | /* confdefs.h. */ | | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define PACKAGE "sms_oe" | #define VERSION "0.1" | #define STDC_HEADERS 1 | #define GETTEXT_PACKAGE "sms_oe" | _LOCALE_H 1 | #define HAVE_BIND_TEXTDOMAIN_CODESET 1 | #define HAVE_GETTEXT 1 | #define ENABLE_NLS 1 | /* end confdefs.h. */ | #include <ansidecl.h> | #include <windef.h> | #include <sql.h> | int | main () | { | | ; | return 0; | } configure:5424: error: don't know how to compile <sql.h> The 24th line of sqltypes.h is: typedef PVOID PTR; I don't know what to do with this line, PVOID is declared in winnt.h that is #included by windef.h. "Type PTR" should be defined by this exact line. I tried to rewrite the line as the spaces may not be what they seem. Same problem happened, diff -u did not show any differences between the original sqltypes.h and the edited one. Now what? I am trying to copy the unixODBC headers from my Linux into my MinGW installation to see if it makes any difference... Best regards, Zoltán Benjamin Riefenstahl wrote: >Hi Zoltán, > > . > Thanks, you really helped me. Find my configure.in script attached that show my (ehm) misery and salvation. :-) The script does the following: 1. checks whether sql.h and sqlext.h are present. 2a. If the system is Cygwin or MinGW, these are performed: - checks whether the compiler supports -mms-bitfields (gcc-3+) - if not, checks for -fnative-struct (gcc-2.x) - according to the above, sets CFLAGS, also sets -mwindows so I don't get a console window under NT/200/XP. - checks whether compilation of sql.h needs definition of WIN32_LEAN_AND_MEAN and inclusion of windows.h. - according to this check, it tries to find SQLAllocHandle() in odbc32.dll, quits if not found. 2b. If the system is anything else, e.g. Linux, or other POSIX compatible, it checks for SQLAllocHandle() in libodbc.so, unixODBC assumed. 
This script helped me to compile and link my GTK2/ODBC application on Windows. I also did what this link said: Unfortunately the MS link for ODBC35IN.EXE is not there anymore but Google helped. ;-) Best regards and thanks for all the help! Zoltán Hi Zolt=E1n, Boszormenyi Zoltan <zboszor@...> writes: > Find my configure.in script attached that show my (ehm) misery and > salvation. :-) Sorry, I forgot to mention this one: There is an ODBC.m4 in the TclODBC project, see, . I'm not sure where this originaly comes from, according to copyright it's part FSF and part Ajuba (now ActiveState (?)). It covers common Unix versions of ODBC and ODBC on Windows. It doesn't cover ODBC on MacOSX (yet). Maybe you can get some more ideas from that. so long, benny
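Expressed as plain C, the portability trick the final configure test probes for boils down to the include order below. NEED_WINDOWS_H is a made-up macro name standing in for whatever the configure script actually defines on MinGW/Cygwin; the rest uses only standard ODBC calls.

```c
/* Hypothetical result of the configure test described above. */
#ifdef NEED_WINDOWS_H            /* defined by configure on MinGW/Cygwin */
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif

#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    /* Link against -lodbc32 on Windows, -lodbc with unixODBC. */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    return 0;
}
```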
http://sourceforge.net/p/mingw/mailman/message/14720880/
CC-MAIN-2014-49
refinedweb
694
70.9
server is your primary server for acme.com? Exchange or Linux? From your post I concluded it's the Exchange box. If it were your Linux box than there would be no reason not to route your mails directly to Linux instead of Exchange. If I'm correct about this - do not make changes to your Exchange address space setup. Let's move on. When I asked about namespace of the Linux box, I didn't mean the domain in server fqdn (linux.acme.com) which you don't have to change. I was referring to SMTP domain name (sorry but I'm not too familiar with Linux terminology). I was hoping it would be possible to setup the Linux SMTP server to accept mail for both acme.com (if necessary) but also something like acme.ticket. That way you could create a mail recipient in Linux with e-mail address [email protected] (and also - but only if necessary [email protected]). If that is possible in Linux I'll guide you on what to do in Exchange and Active directory. It consists of two main steps: 1) Create an Exchange enabled contact in Active Directory with primary mail address [email protected]. 2) Create a SMTP send connector in Exchange Management that will route all e-mail for acme.ticket domain to linux.acme.com as smart host. Creating an Exchange enabled contact in Active Directory (this is for W2K3 server DC). 1) Open Start > Programs > Administrative Tools > Active Directory Users and Computers. 2) Choose and appropriate AD container (OU) or create a new one. 3) Right click on the white space in the right side window and choose New > Contact. 4) On the first screen enter the name and descriptive display name for the contact and then click Next. 5) On the second screen leave the check box in Create an Exchange e-mail address field, make sure that the Alias field contains only text "support" and then click Modify. 6) Choose SMTP adress in the list and click OK. 7) Enter [email protected] in the E-mail address field on General tab of Internet Address Properties and then click OK. This will be a primary e-mail address for this contact. 8) The address will be displayed in E-mail field of New Object - Contact screen. Click Next. 9) Click Finish. This finishes the creation of new contact. 10) To check your newly created contact, right click it and choose properties. The primary e-mail address ([email protected]) should be displayed in E-mail field on tab General in contact properties. 11) If you click on the E-mail Addresses tab you will probably see an empty list at this poin but thats normal since Exchange probably didn't run the recipient policy update yet. For now just check there's a check in the Automatically update e-mail addresses based on recipient policies field. Close the contact properties window. Now you'll have to create a new SMTP send connector in Exchange Server to deliver any mail destined for acme.ticket domain to linux.acme.com. 1) Open Start > Programs > Microsoft Exhange > System Manager. 2) In management console right click the Connectors container and choose New > SMTP connector. 3) On the General tab enter the name for your new connector, and choose to route all mail through this connector to smart host linux.acme.com. Add your Exchange virtual SMTP server to the list of local bridgeheads. 4) On the Address Space tab add new address space for domain acme.ticket with cost of 1. Click OK to create the connector. 
(Later on if you'll experience any problems with delivery to acme.ticket domain, you may try to go to the properties of your standard SMTP connector and change the cost of default SMTP * namespace on that connector to a number greater than 1.) 5) Your new connector should appear in the list of connectors. Close the Exchange management console. Now return to Active Directory Users and Computers and open your contact properties again and navigate to E-mail Addresses tab. Depending on your recipient policies you should now see e-mail addresses listed but they might be different than in my example screenshot. (If there are no adresses visible you can close the contact properties, reopen Exchange System Manager, navigate to Recipients > Recipient Policies container and then right click each recipient policy in the right window and choose Apply this policy now... This should ensure the e-mail addresses on your contact are created). The following 2 addresses must be visible on your contact: 1) [email protected] (should be bolded which means this is the primary e-mail address). 2) [email protected] (not bolded - not primary). If these addresses are not there your recipient policy may be set up in a way that doesn't work as you need to. To correct that - remove the check from the Automatically update e-mail addresses based on recipient policies field and manually add those two SMTP addresses. Make sure that you set the [email protected] as the primary address. This is it. Try sending the e-mail to [email protected]. It should be delivered to your Exchange server and automatically routed to your Linux box. What happens is the following: 1) Exchange server receives the e-mail for [email protected]. 2) It resolves to the address of the Support contact. 3) It determines that the primary address for Support contact is [email protected] and forwards accordingly. 4) The message is routed through the newly created SMTP connector (responsible for acme.ticket) to your Linux box which is set up as a smart host in that connector. And that's it. It all hinges on the possibility that you can set up your Linux box to receive mail for acme.ticket domain and create a recipient there for your ticketing system with the address [email protected]. Good luck and regards, Tomislav 1) Do you want to store the e-mail in Exchange mailbox too or just forward it to Linux box? 2) Can you set up a namespace different than amce.com on your Linux SMTP server? Regards, Tomislav 2) No, I have to use the same namespace on the linux box. If managing Active Directory using Windows Powershell® is making you feel like you stepped back in time, you are not alone. For nearly 20 years, AD admins around the world have used one tool for day-to-day AD management: Hyena. Discover why. [1] You need to configure Exchange so that it isn't authoritative for the shared SMTP address space (@acme.com in your example). If acme.com is your primary address space then you may need to add a phantom address space as your default (you can't configure Exchange so that it isn't authoritative for your default address space). When Exchange receives email for a non-authoritative address space, if the recipient isn't in Active Directory then it will attempt to deliver the message to a system that it. This needs to be your Linux box. You can do this in a number of ways. The most suitable for your circumstances will probably be to configure the Exchange SMTP virtual server to forward all unresolved recipient email to a specific host. 
Everything you need to know is right here... Hope this helps, Dave If during the test process you run into Exchange errors saying that the e-mail can't be relayed for acme.ticket domain, then go to the properties of your new SMTP send connector (Support Ticketing on Linux Box) in Exchange System Manager and place a check mark in Allow messages to be relayed to these domains field on the Address Space tab. Regards, Tomislav I will be testing it out today hopefully (if other people cooperate with me) and will award points asap. Thanks again!
https://www.experts-exchange.com/questions/26618370/Configuring-Exchange-2003-server-to-route-mail-to-Linux-box.html
CC-MAIN-2018-17
refinedweb
1,329
65.93
program code is written using javascript (not sure, an example below for defining an integer) .... so how can i write the port manipulation commands such as PORT, PIN and DDR? import muvium.compatibility.arduino.*; static final int voltone = 2; static final int volttwo = 4; static final int voltthree = 7; static final int voltfour = 8; static final int voltfive = 9; static final int voltsix = 10; static final int voltseven = 12; public void setup() { DDRD = DDRD | B11111100; DDRB = B00001111; } public void loop() { PORTD = PORTD |B11111100; delay(500); PORTB = B00001111; delay(500); }} We should not have to Google for links to the product you are using and guess whether this is the right one. You're asking about how to use an emulator. You assume we know which one, but you haven't told us (until somebody else suggested which one it might be). You imply that you expect it to provide a runtime environment similar to the Arduino one, although it is clearly NOT using the Arduino hardware, runtime software or even the same programming language, and you've declined to explain how this relates to Arduino. My conclusion: it's nothing to do with Arduino. If you want to know how to use the emulator, go ask the people who supply and support it.
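For comparison, the register writes in the sketch above map onto the standard Arduino C++ API roughly as follows. This is only a sketch, and it assumes an Uno-class board where DDRD/PORTD bits 2-7 are digital pins 2-7 and DDRB/PORTB bits 0-3 are digital pins 8-11; it is not specific to any emulator:

// Rough Arduino C++ equivalent of the register-level code above.
// DDRD |= B11111100 configures digital pins 2-7 as outputs;
// DDRB = B00001111 configures digital pins 8-11 as outputs.
const byte pins[] = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11};

void setup() {
  for (byte i = 0; i < sizeof(pins); i++) {
    pinMode(pins[i], OUTPUT);        // equivalent of setting the DDRx bits
  }
}

void loop() {
  for (byte i = 0; i < sizeof(pins); i++) {
    digitalWrite(pins[i], HIGH);     // equivalent of the PORTD/PORTB writes
  }
  delay(500);
}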
http://forum.arduino.cc/index.php?topic=147148.0;prev_next=next
CC-MAIN-2015-22
refinedweb
248
66.37
Tuples A tuple is a grouping of unnamed but ordered values, possibly of different types. Tuples can either be reference types or structs. Syntax (element, ... , element) struct(element, ... ,element ) Remarks Each element in the previous syntax can be any valid F# expression. Examples Examples of tuples include pairs, triples, and so on, of the same or different types. Some examples are illustrated in the following code. (1, 2) // Triple of strings. ("one", "two", "three") // Tuple of generic types. (a, b) // Tuple that has mixed types. ("one", 1, 2.0) // Tuple of integer expressions. (a + 1, b + 1) // Struct Tuple of floats struct (1.025f, 1.5f) Obtaining Individual Values You can use pattern matching to access and assign names for tuple elements, as shown in the following code. let print tuple1 = match tuple1 with | (a, b) -> printfn "Pair %A %A" a b You can also deconstruct a tuple via pattern matching outside of a match expression via let binding: let (a, b) = (1, 2) // Or as a struct let struct (c, d) = struct (1, 2) Or you can pattern match on tuples as inputs to functions: let getDistance ((x1,y1): float*float) ((x2,y2): float*float) = // Note the ability to work on individual elements (x1*x2 - y1*y2) |> abs |> sqrt If you need only one element of the tuple, the wildcard character (the underscore) can be used to avoid creating a new name for a value that you do not need. let (a, _) = (1, 2) Copying elements from a reference tuple into a struct tuple is also simple: // Create a reference tuple let (a, b) = (1, 2) // Construct a struct tuple from it let struct (c, d) = struct (a, b) The functions fst and snd (reference tuples only) sum a b = a + b let addTen = sum 10 let result = addTen 95 // Result is 105. Using a tuple as the parameter disables currying. For more information, see "Partial Application of Arguments" in Functions. Interoperation with C# Tuples C# 7.0 introduced tuples to the language. Tuples in C# are structs, and are equivalent to struct tuples in F#. If you need to interoperate with C#, you must use struct tuples. This is easy to do. For example, imagine you have to pass a tuple to a C# class and then consume its result, which is also a tuple: namespace CSharpTupleInterop { public static class Example { public static (int, int) AddOneToXAndY((int x, int y) a) => (a.x + 1, a.y + 1); } } In your F# code, you can then pass a struct tuple as the parameter and consume the result as a struct tuple. open TupleInterop let struct (newX, newY) = Example.AddOneToXAndY(struct (1, 2)) // newX is now 2, and newY is now 3 Converting between Reference Tuples and Struct Tuples Because Reference Tuples and Struct Tuples have a completely different underlying representation, they are not implicitly convertible. That is, code such as the following won't compile: // Will not compile! let (a, b) = struct (1, 2) // Will not compile! let struct (c, d) = (1, 2) // Won't compile! let f(t: struct(int*int)): int*int = t You must pattern match on one tuple and construct the other with the constituent parts. For example: // Pattern match on the result. let (a, b) = (1, 2) // Construct a new tuple from the parts you pattern matched on. let struct (c, d) = struct (a, b) Compiled Form of Reference Tuples This section explains the form of tuples when they're compiled. The information here isn't necessary to read unless you are targeting .NET Framework 3.5 or lower. Tuples are compiled into objects of one of several generic types, all named System. 
Compiled Form of Struct Tuples Struct tuples (for example, struct (x, y)) are fundamentally different from reference tuples. They are compiled into the ValueTuple type, overloaded by arity (the number of type parameters). They are equivalent to C# 7.0 Tuples and Visual Basic 2017 Tuples, and interoperate bidirectionally.
https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/tuples
CC-MAIN-2019-35
refinedweb
662
64.61
After some discussion I want to propose a meta data facility to specify arbitrary parameters to any ANT objects. The need for that appeared when I started to write a parallel executor. Often (mostly with test targets) it is necessary to specify that some targets should not be executed in parallel. One of the ways of specifying mutexes is using global properties, but it is inconvenient. The better way is to mark each target with a list of associated mutexes, but adding yet another parameter into ANT core just for one executor does not make sense. Instead meta data tags could be used. For example: -- one level naming schema <?META mutexes="test"?> <target name="aaa-test"/> -- two level naming schema <?MUTEX names="test"?> <target name="aaa-test"/> The meta data could be accessed via additional Project API: public MetaData getMetaData(Object o); // using identity map What do you think about using any namespace that starts with "meta:"? For example: <project xmlns: <target name="test1" thr: </project> I personally have no objections. :)
https://bz.apache.org/bugzilla/show_bug.cgi?id=33243
CC-MAIN-2020-05
refinedweb
171
54.93
Note also that you have to use the gcnew keyword to instantiate the managed type into the managed heap; this type will then be marked collectable at the gcroot<T> destructor. Here's a sample:

#include <vcclr.h>
#using <mscorlib.dll>

using namespace System;

class Unmanaged
{
private:
    gcroot<String ^> _myString;
public:
    Unmanaged()
    {
        _myString = gcnew String("");
    }
    int GetHashCode()
    {
        return (_myString->GetHashCode());
    }
};

First, let's make a managed class called MessageBoxShower in C#:

Talking about collections, the conversion is also up to you: for instance ArrayList or List<> to std::vector<>, and back.
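As a rough sketch of that collection case (the names are assumed for illustration, not code from the original article): a managed List<int>^ rooted through gcroot can be copied element by element into a std::vector<int> whenever the native side needs a native copy.

#include <vcclr.h>
#include <vector>
#using <mscorlib.dll>

using namespace System;
using namespace System::Collections::Generic;

class UnmanagedHolder
{
private:
    gcroot<List<int> ^> _managedList;   // managed collection rooted for use from native code
public:
    UnmanagedHolder()
    {
        _managedList = gcnew List<int>();
    }
    void Add(int value)
    {
        _managedList->Add(value);
    }
    std::vector<int> ToVector()
    {
        List<int> ^list = _managedList;  // unwrap the gcroot to a handle
        std::vector<int> result;
        result.reserve(list->Count);
        for (int i = 0; i < list->Count; ++i)
            result.push_back(list[i]);   // copy each element into the native vector
        return result;
    }
};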
http://www.codeproject.com/KB/mcpp/ijw_unmanaged.aspx
crawl-002
refinedweb
103
57.87
The reference count is the number of times an object is referenced by a variable. If an object isn't referenced anymore, it can be safely removed from memory; what's the use of an object nobody cares for anyway? This article shows you how to count the number of references to a Python object.

What's the Current Reference Count of a Python Object?
Question: How to get the current reference count of a Python object?
Answer: You can get the current reference count with the sys module's getrefcount() method. For example, to get the number of times variable x is referenced, run sys.getrefcount(x). Here's the general strategy for an arbitrary object:

import sys
print(sys.getrefcount(object))

The following code snippet shows an example of an object x that has a reference count of 13. After creating a new reference y to the object x, the reference count increases by one to 14.

import sys
x = 42
print(sys.getrefcount(x)) # 13
y = x
print(sys.getrefcount(x)) # 14

You may not have expected that the initial reference count is 13. So, why is that? Let's see.

Why is the Reference Count Higher Than Expected?
Reason 1: Implicit Helper Variables
Python is a high-level programming language. It provides you many convenience functions and memory management. You don't have to care about the concrete memory locations of your objects, and Python automatically performs garbage collection, that is, removing unused objects from memory. To do all of this work, Python relies on its own structures. Among those are helper variables that may refer to an object you've created. All of this happens under the hood, so even if you don't refer to an object in your code, Python may still have created many references to that object. That's why the reference count can be higher than expected.

Reason 2: Temporary Reference in getrefcount()
Note that by calling the function getrefcount(x), you create an additional temporary reference to the object x. This alone increases the reference counter by one.
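To see both effects at once, here is a small follow-up snippet (the exact counts can vary slightly between CPython versions, so treat the numbers as typical rather than guaranteed):

import sys

x = object()               # a fresh object that almost nothing else refers to
print(sys.getrefcount(x))  # typically 2: the variable x plus the temporary argument reference

y = x                      # add one more reference
print(sys.getrefcount(x))  # typically 3

del y                      # drop it again
print(sys.getrefcount(x))  # back to 2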
https://blog.finxter.com/how-to-get-the-current-reference-count-of-an-object-in-python/
CC-MAIN-2022-21
refinedweb
347
57.27
Groovy reduces typing to a second-class citizen, and I think it is very reasonable to use that as an excuse to eliminate the traditional type-cast syntax. In a dynamically-typed language, it is very difficult to know what is a type and what is a variable, so any syntax that allows one to be mistaken for the other is bad for both readability and parsing.

I propose we add "as" as an operator which will be used in place of the traditional type-cast operator, and which can be overloaded on the class for doing things like automatic type conversions.

x = new InputStream()
y = x as Reader

In the above example, InputStream would define an asReader() method, which would be used by the compiler to do the type conversion. The compiler itself does the type cast, of course.

As "as" is an infix operator that has specific operand requirements, we eliminate the ambiguities of what is a type and what isn't. Consider the following code:

def method( x ) { y = (Test)+x }

Whether that is a type cast or an addition is less-than-trivial to figure out during compilation. Further, if Test is a class in the local package, it might be totally ambiguous to the casual reader, too. The same code with the new syntax is much more obvious:

def method( x ) { y = x as Test }

The problem gets worse if the class name starts with a lower-case letter and proper scoping is in effect:

b = SomeBuilder() b { p( (test)+a ) }

Is that an addition of two variables, a type cast, or the addition of the result of the delegate's getTest() accessor and a? It is utterly impossible to say until runtime, if proper scoping is in effect.

For complex cases, "as" is mainly neutral on readability (it reduces parenthesis counts, but calls less attention to the purpose of the code). Compare:

x = ((MyClass)list.get(0)).doStuff()

to:

x = (list.get(0) as MyClass).doStuff()

A further benefit of this change is that if we allow class names to be overridden by local variables, type-casting will still be possible:

import scripts.myscript
def method( x ) { myscript = new myscript() y = +x as myscript }

Maybe not advisable, but all unambiguous. From the viewpoint of preserving optionals, changing to "as" is important as it eliminates an overload for parenthesis. As optional parenthesis on method calls are considered by many to be an important feature of the language, eliminating ambiguity is important. Consider:

println (MyClass)a

or, worse:

println (MyClass)+a

The parser is entirely at a loss with these, and will usually guess wrong. This then means rewriting of the AST by later phases, once the problem has been identified. That said, we still can't eliminate:

println (a)+y

but some reduction in ambiguity is better than none. -- Chris Poirier
http://docs.codehaus.org/exportword?pageId=2422
CC-MAIN-2014-52
refinedweb
559
50.26
A const member function is not supposed to modify the data members. However, it can modify a data member if that member is a reference. Why can the reference data member be operated on even when the member function is constant? It is shown below with two classes, Bird and BirdHouse; BirdHouse composites two Birds, one held as an object and the other as a reference.

#include <iostream>
#include <string>
#include <sstream>
using namespace std;

class Bird
{
    int nb;
public:
    Bird(int n) : nb(n) { }
    int get() const { return nb; }   // accessor used by BirdHouse below
    void set(int i) { nb = i; }
};

class BirdHouse
{
    Bird& rb;
    Bird b;
    int bhn;
public:
    BirdHouse(Bird& b1, Bird& b2) : rb(b1), b(b2)
    {
        bhn = b1.get();
    }
    void set(int a) const   // constant member function
    {
        rb.set(a);    // why? the reference data member can be assigned a new value
        //b.set(a);   // this one cannot be operated on
    }
};

int main()
{
}

The rb.set(a) line shows the reference can be operated on in a constant member function. Why? The b.set(a) line cannot be operated on, which follows the constant member function rules. Thanks in advance.
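A minimal sketch that shows the same distinction with a pointer next to the reference (the names here are made up for illustration). Inside a const member function the const applies to the members themselves; a reference cannot be reseated anyway, and a pointer member becomes T* const rather than const T*, so the object being referred to is not itself treated as const.

#include <iostream>

struct Target { int value = 0; };

class Holder
{
    Target& ref;   // reference member
    Target* ptr;   // pointer member
    Target  obj;   // object member
public:
    Holder(Target& t) : ref(t), ptr(&t) { }

    void poke() const
    {
        ref.value = 1;     // OK: const does not propagate to the referred-to object
        ptr->value = 2;    // OK: ptr is treated as Target* const here, not const Target*
        // obj.value = 3;  // error: obj itself is const inside a const member function
        // ptr = nullptr;  // error: the pointer member itself is const here
    }
};

int main()
{
    Target t;
    Holder h(t);
    h.poke();
    std::cout << t.value << '\n';   // prints 2
}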
https://www.daniweb.com/programming/software-development/threads/213290/why-can-the-reference-data-member-be-assigned-in-constant-member-function
CC-MAIN-2018-43
refinedweb
167
56.35
#include <iostream>
#include <iomanip>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    //set variables
    string search, file, line;
    ifstream fin;
    ofstream fout;

    fin.open("/export/home/wyatt/public_html/!Data/shakespeare.txt");
    fout.open("s_jmsaline.txt");

    cout << endl;
    fout << endl;

    cout << "Please enter a word or phrase to be searched for: ";
    cin >> search;

    do
    {
        cout << search << endl;
        getline(fin, search);
    } while (!fin.eof());

    fin.close();

    cout << endl;
    fout << endl;

    //close outputting file
    fout.close();

    return 0;
}

i feel like i am right on the verge of having this thing done... this is what i have to do but i cant quite figure out how to get it to read the lines where there is a certain name. all i get is it reading out the entire file.
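One way the read loop could be restructured (a sketch only; it reuses the variables already declared above): keep the user's search term in search, read each file line into line, and only echo the lines whose text contains the term.

    // Sketch: search the file line by line without overwriting the search term
    while (getline(fin, line))                     // read one full line at a time
    {
        if (line.find(search) != string::npos)     // does this line contain the term?
        {
            cout << line << endl;                  // show the matching line
            fout << line << endl;                  // and write it to the output file
        }
    }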
http://www.dreamincode.net/forums/topic/97006-reading-specific-lines/
CC-MAIN-2017-22
refinedweb
127
78.08
in reply to Re: RFC US Region Module in thread RFC US Region Module If I leave it OO don't I avoid the issue of the function name collision you mention? While I agree there is no current or compelling reason for this module to be OO it does seem to work easily enough to provide the functionality I need. Would I see benefits from it being non-OO? The DATA read has been moved outside of the new since that was the wrong spot for it and subsequent object creations would have resulted in an empty hash ref being assigned to $self. Of the names I felt the Geography namespace would be best, but I am still struggling with the Census::Regions / Regions::Census issue for these reasons: If I leave it OO don't I avoid the issue of the function name collision you mention? Perl uses packages to implement OO but packages don't necessarily imply OO. Packages provide namespaces that help avoid function and variable name collisions. Those functions don't necessarily have to be written in an OO fashion, though. Makeshifts last the longest.
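A bare-bones sketch of the non-OO shape being discussed (the package name and data are placeholders, not the actual RFC module): a package that simply exports a lookup function still gets its own namespace, and no constructor is involved.

package Geography::USCensus::Regions;   # placeholder name
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = qw(region_of);

# trimmed example data
my %region_for_state = ( ME => 'Northeast', GA => 'South', CA => 'West' );

sub region_of {
    my ($state) = @_;
    return $region_for_state{ uc $state };
}

1;

# caller:
#   use Geography::USCensus::Regions qw(region_of);
#   print region_of('ca');   # West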
http://www.perlmonks.org/?node_id=271808
CC-MAIN-2015-32
refinedweb
231
57.2
"logfmt" is the name for a key value logging convention we've adopted at Heroku. This library is for both converting lines in logfmt format to objects and for logging objects to a stream in logfmt format. It provides a logfmt parser, logfmt stringifier, a logging facility, and both streaming and non-streaming body parsers for express and restify. You should use this library if you're trying to write structured logs or if you're consuming them (especially if you're writing a logplex drain). npm install logfmt The logfmt module is a singleton that works directly from require. var logfmt = ;logfmt;// 'foo=bar'logfmt;// {foo: 'bar'} It is also a constructor function, so you can use new logfmt to create a new logfmt that you can configure differently. var logfmt2 = ;// replace our stringify with JSON'slogfmt2stringify = JSONstringify// now we log JSON!logfmt2// {"foo":"bar"}// and the original logfmt is untouchedlogfmt// foo=bar accepts lines on STDIN and converts them to json > echo "foo=bar a=14 baz=\"hello kitty\" cool%story=bro f %^asdf" | logfmt { "foo": "bar", "a": 14, "baz": "hello kitty", "cool%story": "bro", "f": true, "%^asdf": true } accepts JSON on STDIN and converts them to logfmt > echo '{ "foo": "bar", "a": 14, "baz": "hello kitty", \ "cool%story": "bro", "f": true, "%^asdf": true }' | logfmt -r foo=bar a=14 baz="hello kitty" cool%story=bro f=true %^asdf=true round trips for free! > echo "foo=bar a=14 baz=\"hello kitty\" cool%story=bro f %^asdf" | logfmt | logfmt -r | logfmt { "foo": "bar", "a": 14, "baz": "hello kitty", "cool%story": "bro", "f": true, "%^asdf": true } Serialize an object to logfmt format logfmt.stringify(object) Serializes a single object. logfmt//> 'foo=bar a=14 baz="hello kitty"' Parse a line in logfmt format logfmt.parse(string) logfmt//> { "foo": "bar", "a": '14', "baz": "hello kitty", "cool%story": "bro", "f": true, "%^asdf": true, "code" : "H12" } The only conversions are from the strings true and false to their proper boolean counterparts. We cannot arbitrarily convert numbers because that will drop precision for numbers that require more than 32 bits to represent them. Put this in your pipe and smoke it. logfmt.streamParser() Creates a streaming parser that will automatically split and parse incoming lines and emit javascript objects. Stream in from STDIN processstdin Or pipe from an HTTP request req logfmt.streamStringify([options]) Pipe objects into the stream and it will write logfmt. You can customize the delimiter via the options object, which defaults to \n (newlines). var {if!line return;this}processstdin Example command line of parsing logfmt and echoing objects to STDOUT: var logfmt = ;var through = ;processstdin Example HTTP request parsing logfmt and echoing objects to STDOUT: var http = ;var logfmt = ;var through = ;http; // streamingapp;// bufferingapp; logfmt.bodyParserStream([opts]) Valid Options: contentType: defaults to application/logplex-1 If you use the logfmt.bodyParserStream() for a body parser, you will have a req.body that is a readable stream. Pipes FTW: var app = ;var http = ;var through = ;var logfmt = ;app;apphttp; Or you can just use the readable event: var app = ;var http = ;var logfmt = ;app;// req.body is now a Readable Streamapphttp; logfmt.bodyParser([opts]) Valid Options: contentType: defaults to application/logplex-1 If you use the logfmt.bodyParser() for a body parser, you will have a req.body that is an array of objects. 
var logfmt = ;app;// req.body is now an array of objectsapphttp; test it: curl -X POST --header 'Content-Type: application/logplex-1' -d "foo=bar a=14 baz=\"hello kitty\" cool%story=bro f %^asdf" Log an object to logfmt.stream (defaults to STDOUT) Uses the logfmt.stringify function to write the result to logfmt.stream logfmt//=> foo=bar a=14 baz="hello kitty"//> undefined logfmt.log(object, [stream]) Defaults to logging to process.stdout logfmt//=> foo=bar a=14 baz="hello kitty" logfmt.log() Accepts as 2nd argument anything that responds to write(string) var logfmt = ;logfmt//=> foo=bar a=14 baz="hello kitty" Overwrite the default global location by setting logfmt.stream var logfmt = ;logfmtstream = processstderrlogfmt//=> foo=bar a=14 baz="hello kitty" You can have multiple, isolated logfmts by using new. var logfmt = ;var errorLogger = ;errorLoggerstream = processstderrlogfmt;//=> hello=stdouterrorLogger;//=> hello=stderr logfmt.namespace(object) Returns a new logfmt with object's data included in every log call. var logfmt = ;logfmt//=> app=logfmt foo=bar a=14 baz="hello kitty"logfmt//=> app=logfmtlogfmt//=> app=logfmt hello=world logfmt.time([label]) Log how long something takes. Returns a new logfmt with elapsed milliseconds included in every log call. label: optional name for the milliseconds key. defaults to: elapsed=<milliseconds>ms var timer = logfmt;timer;//=> elapsed=1ms String label changes the key to <string>=<milliseconds>ms var timer = logfmt;timer;//=> time=1mstimer;//=> time=2ms If you'd like to include data, just chain a call to namespace. var timer = logfmt;timer;//=> time=1ms foo=bartimer;//=> time=2ms foo=bar logfmt.error(error) Accepts a Javascript Error object and converts it to logfmt format. It will print up to logfmt.maxErrorLines lines. var logfmt = ;logfmt;//=> at=error id=12345 message="test error"//=> at=error id=12345 line=0 trace="Error: test error"//=> ... app;//=> ip=127.0.0.1 time=2013-08-05T20:50:19.216Z method=POST path=/logs status=200 content_length=337 content_type=application/logplex-1 elapsed=4ms logfmt.requestLogger([options], [formatter(req, res)]) If no formatter is supplied it will default to logfmt.requestLogger.commonFormatter which is based on having similiar fields to the Apache Common Log format. Valid Options: immediate: log before call to next()(ie: before the request finishes) elapsed: renames the elapsedkey to a key of your choice when in non-immediate mode Defaults to immediate: true and elapsed: 'elapsed' app;//=> method=POST app;//=> request.method=POST request.time=12ms formatter(req, res) A formatter takes the request and response and returns a JSON object for logfmt.log app;//=> method=POST elapsed=4ms It's always possible to piggyback on top of the commonFormatter app;//=> ip=127.0.0.1 time=2013-08-05T20:50:19.216Z foo=bar elapsed=4ms Pull Requests welcome. > npm test MIT
https://www.npmjs.com/package/logfmt
CC-MAIN-2017-43
refinedweb
1,015
55.95
- Application Settings - Accessing and Saving Data - Collections - Web Content - Syndicated Content - Streams, Buffers, and Byte Arrays - Compressing Data - Encrypting and Signing Data - Web Services - Summary Streams, Buffers, and Byte Arrays The traditional method for reading data from a file, website, or other source in .NET is to use a stream. A stream enables you to transfer data into a data structure that you can read and manipulate. Streams may also provide the ability to transfer the contents of a data structure back to the stream to write it. Some streams support seeking, finding a position within a stream the same way you might skip ahead to a different scene on a DVD movie. Streams are commonly written to byte arrays. The byte array is the preferred way to reference binary data in .NET. It can be used to manipulate data like the contents of a file or the pixels that make up the bitmap for an image. Many of the stream classes in .NET support converting a byte array to a stream or reading streams into a byte array. You can also convert other types into a byte array using the BitConverter class. The following example converts a 64-bit integer to an array of 8 bytes (8 bytes x 8 bits = 64 bits) and then back again: var bigNumber = 4523452345234523455L; var bytes = BitConverter.GetBytes(bigNumber); var copyOfBigNumber = BitConverter.ToInt64(bytes, 0); Debug.Assert(bigNumber == copyOfBigNumber); The Windows Runtime introduces the concept of an IBuffer that behaves like a cross between a byte array and a stream. The interface itself only provides two members: a Capacity property (the maximum number of bytes that the buffer can hold) and a Length property (the number of bytes currently being used by the buffer). Many operations in the Windows Runtime either consume or produce an instance of IBuffer. It is easy to convert between streams, byte arrays, and buffers. The methods to copy a stream into a byte array or send a byte array into a stream already exist as part of the .NET Framework. The WindowsRuntimeBufferExtensions class provides additional facilities for converting between buffers and byte arrays. It exists in the System.Runtime.InteropServices.WindowsRuntime namespace. It provides another set of extension methods including AsBuffer (cast a Byte[] instance to an IBuffer), AsStream (cast an IBuffer instance to a Stream), and ToArray (cast an IBuffer instance to a Byte[] instance).
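A short sketch of those conversions (the byte values are arbitrary):

using System;
using System.IO;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.Storage.Streams;

public static class BufferConversions
{
    public static void Demo()
    {
        byte[] bytes = { 1, 2, 3, 4 };

        // Byte array to IBuffer
        IBuffer buffer = bytes.AsBuffer();

        // IBuffer to Stream (a stream view over the same data)
        Stream stream = buffer.AsStream();

        // IBuffer back to a new byte array copy
        byte[] copy = buffer.ToArray();

        Console.WriteLine(buffer.Length);   // 4
        Console.WriteLine(stream.Length);   // 4
        Console.WriteLine(copy.Length);     // 4
    }
}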
http://www.informit.com/articles/article.aspx?p=1969708&seqNum=6
CC-MAIN-2020-10
refinedweb
398
53.61
While there is a sample market data file included in the source code package, I've made additional market and security event data files availabled for download at for those who wish to experiment further. Added explicit instructions for swapping the 'SparkAPI' project over to use the 32-bit version of the native spark.dll file. market object model. The article is aimed at intermediate to advanced developers who wish to gain an understanding of basic financial market data processing. I recommend that those who are already familiar with trading terminology skip ahead to the Market Data section. The article is structured as follows: The following section explains the basic terms and concepts related to trading and market data structures. It is framed in terms of the equities (stock) market, but generally applies to most trading markets (e.g. derivatives, commodities, etc.). A trade occurs when a seller agrees to transfer ownership of a specified quantity of stock to a buyer at a specified price. How do buyers and sellers meet? They use a centralised market place called a stock market. People come together and then announce their desire to buy or sell a specific stock; I want to buy 500 shares in BHP for $35.00, I want to sell 2,000 shares in RIO for $65.34. These are called orders. Buy orders are also referred to as bid orders. Sell orders are also referred to as ask or offer orders. When a buy order's price is equal to or higher than the lowest priced sell order currently available, a trade occurs. When a sell order's price is equal to or lower than the highest priced buy order currently available, a trade occurs. This process is also known as a match, because the buy and sell prices must match or cross over for a trade to occur. What happens if an order is submitted to the market, but it does not match? It is entered into a list of orders called the order book. The order will remain there until the trader cancels it, or it expires (e.g. end of day if it is a day order). Some orders expire immediately after matching against what is available on the order book. These are known as 'immediate or cancel' (IOC) or 'fill and kill' (FAK) orders. These orders are never entered into the order book regardless of whether they match or not. be at the top of the book). It's like standing in a queue: you must be at the head of the line to get served. Orders in the order book are ordered by price-time priority. This means that orders are sorted first by price and then by time of submission. The highest priority buy order will be the buy order with the highest price that was submitted first. The highest priority sell order will be the sell order with the lowest price that was submitted first. Here is an example order book showing two queues of orders in price-time priority: one queue for the buy (bid) orders and one queue for the sell (ask) orders. The time refers to the submission time of each order: The bid (buy) orders and ask (sell) orders are arranged from top to bottom in priority. The ask orders are orders are ordered by time (earliest first) because they are the same price. Notice that the bid order for 100 shares is higher in priority than the order for 19 shares, even though it was submitted later, because it has a higher price. I will now step through a trade match using the example order book shown above. Consider that a sell order for 150 shares at 23.34 is submitted into the market (the short-hand for this order would be: Sell [email protected]). 
The new order is marked in yellow: Notice the bid and ask prices are now crossed. After each order is submitted, the exchange checks for crossed prices and then performs the required number of matches to return the market to an uncrossed state. In this case, the exchange matches the 150 sell against the 100 buy at 23.34, generating a trade for 100 shares at 23.34 (100@23.34). The remaining 50 share sell order is now the highest priority ask order in the book: Often traders are not interested in the entire order book of a stock, but just the highest bid and the lowest ask prices currently in the market, and the quantity available at those prices. All this information together is known as a market quote. When a market is in continuous trading (not closed or in an auction state), the bid price must always be lower than the ask price. If they were crossed, a trade would occur. The difference between the lowest ask and highest bid price is called the spread. For example, tick sizes on the Australian market are: There are several different 'grades' of market data. Data quality is determined by its granularity and its detail. Granularity refers to the observational time interval of the data. Snapshot observations record a particular moment in time (for example daily closing prices, or market quotes for each minute of the day). Event-based observations are recorded each time a relevant field changes (e.g. trade update, change to the order book). Event-level data sets are superior to snapshot data sets because a snapshot view can always be derived from event data, however the inverse does not hold true. If they are so much better, why is it that all data sets are not supplied in event form? A few reasons. They are big to store, harder to code against, they are slower to process and not everyone needs this level of detail. Detail refers to what information is contained in the data set. There are three levels of market data detail: trades, quotes and depth. Trade & quote updates combined together are often referred to as Level 1 (L1) data. Depth updates are referred to as Level 2 (L2) data. The simplest level of detail occurs in the form of trade prices (e.g. daily closing prices). These are widely available and are often used by retail investors to select a stock to buy and hold for a longer investment period (e.g. months, years). Below is a sample of daily closing prices for BHP traded on the ASX:
Date Open High Low Close Volume
2013-02-05 37.80 37.94 37.68 37.92 5683782
2013-02-04 37.38 37.61 37.33 37.48 6140610
2013-02-03 37.50 37.64 37.42 37.62 6676410
2013-02-02 37.30 37.30 37.04 37.17 6936594
2013-02-01 37.25 37.27 36.95 37.10 13737522
2013-01-31 36.90 37.22 36.82 37.16 7174644
2013-01-30 37.00 37.15 36.86 37.06 9143136
2013-01-29 36.54 36.85 36.50 36.58 5569151
A big step up from daily closing prices is intraday trade history (also known as tick data), which contains a series of records detailing every trade that occurred for a stock.
A good trade data set will contain the following fields: Below is a sample of intraday trade records for FMG (Fortescue Metals Group) traded on the ASX for the 30th Sep 2011: Date Time Symbol Exch Price Quantity Type 20110930 11:14:24.475 FMG ASX 4.62 1000 20110930 11:14:24.475 FMG ASX 4.62 5000 XT 20110930 11:14:24.475 FMG ASX 4.62 249 20110930 11:14:24.477 FMG ASX 4.62 25722 20110930 11:14:24.480 FMG ASX 4.62 1518 XT 20110930 11:14:24.482 FMG ASX 4.62 113 XT 20110930 11:14:25.046 FMG ASX 4.62 2702 NOTE: The 'XT' flag indicates that the trade was a crossing. This occurs when the same broker executes both sides of the trade (e.g. one client is buying through the broker and another is selling). Intraday trade records are often used in retail trading software for intraday charting and technical analysis. Sometimes people will attempt to backtest a trading strategy (backtest means to evaluate performance using historical data) using trade records. Don't do this if you are testing intraday strategies, you're results will be useless because the trade price won't always reflect the actual price you could buy or sell for at that time. The charts displayed by Google Finance for a security are generated using intraday trade records (with the trade time on the x-axis and trade price on the y-axis). The next step up in market data detail is the inclusion of market quote updates. A good market quote data set will contain a record of the following fields every time there is a change: Some quote data sets will provide these additional fields, which are useful for certain types of analytics, but not particularly relevant for backtesting: Below is a sample of real-time quote update events for NAB (National Australia Bank) on the 31st Oct 2012: Date Time Update Symbol Exch Side Price Quantity 2012-10-31 13:51:13.784 QUOTE NAB ASX Bid 25.81 15007 2012-10-31 13:51:14.615 QUOTE NAB ASX Bid 25.82 10 2012-10-31 13:51:14.633 QUOTE NAB ASX Bid 25.81 13623 2012-10-31 13:51:14.684 QUOTE NAB ASX Ask 25.82 2500 2012-10-31 13:52:09.168 QUOTE NAB ASX Bid 25.80 12223 2012-10-31 13:52:09.173 QUOTE NAB ASX Ask 25.81 1278 2012-10-31 13:52:39.750 QUOTE NAB ASX Ask 25.80 136 2012-10-31 13:52:39.754 QUOTE NAB ASX Bid 25.79 12656 2012-10-31 13:54:20.870 QUOTE NAB ASX Ask 25.81 10375 2012-10-31 13:54:20.878 QUOTE NAB ASX Bid 25.80 1098 A good quality market quote dataset is all that is required for back-testing intraday trading strategies, as long as you are only planning to trade against the best market price rather than post orders in the order book and waiting for fills (putting aside issues of data feed latency, execution latency and other more advanced topics for now). The final level of market data detail is the inclusion of market depth updates. Depth updates contain a record of every change to every order in the order book for a particular security. Depth data sets can often be limited. Some depth data sets only provide an aggregated price view, the total quantity and number of orders at each price level. Others will only provide the first few price levels. A good depth update data set will contain a record of the following fields every time one of them changes: Market depth updates are required for accurately back-testing intraday trading strategies where you are submitting orders that will enter the order book queue rather than executing immediately at the best market price. Market data comes in two forms: live & historical. A live market data feed is required for trading. 
Historical data sets are used for analysis and back-testing. Historical daily closing prices are publicly available for free from a variety of sources (such as Google Finance). Most data and trading software vendors can provide historical intraday trade data for a specified time window (e.g. 6 months). For example, you can access recent historical daily closing prices for BHP (BHP Billiton) from Google Finance. Live intraday trade data can also be accessed on the internet for free, but it is often delayed (20 minutes is standard) to prevent users from trading with it. Non-delayed live intraday trade data should be available through any trading software vendor for a modest price. All good trading software vendors will provide live quote and trade data via their user interface. Some higher-quality vendors will provide quote and trade (Level 1) intraday data live via an API (e.g. Interactive Brokers). Historical Level 1 data can be harder to acquire but is available through some vendors. Live depth updates (Level 2) are very commonly accessible via the trading software user interface using the security depth view, but you don't see the update event details, just an up to date version of the order book. However, it is very rare to be able to access this data via an API. Historical Level 2 update records are virtually impossible to acquire as a retail investor, and are generally only kept by research institutions (e.g. SIRCA) or privately recorded by trading institutions (e.g. investment banks, market makers, high-frequency trading (HFT) groups) for their own use. The figure below displays the basic flow of market data processing: This code example is concerned with the first two layers; the receiving of events and the processing into an object model that can be used by higher-level processes. The following sections of the code example are structured as follows: While different data feeds will have their own format (e.g. overlayed C structures via an API, FIX like messages in fixed-length strings), they all contain a similar set of information. An Australian company named Iguana2 have created an interesting product called the Spark API which provides a programmatically accessible event stream that is essentially equivalent to what it receives from the exchange. It supports access to data feeds for the Australian and New Zealand equities, warrants and option markets. The event stream includes a real-time feed for: trade updates (L1), quote updates (L1), depth updates (L2), exchange news, market state changes (e.g. pre-open, open, auction, close, etc.), and quote base changes (e.g. ex-dividend). Here is a link to the specifications: One of the useful things about the API is that it standardises and interleaves the market data feeds from different exchanges into a unified view of the market. For example, the ASX market data interface comes in via a C-based component called Trade OI which is based off Genium INET technology from the NASDAQ. Australia's secondary exchange, Chi-X, provides a market data feed via fixed length string stream that used tag-value combinations in a semi-FIX like structure. For those interested in learning about coding against an exchange market data feed, the Spark API provides the closest equivalent I've encountered that is available to retail investors. With the extensions I've written in the Spark API SDK, it can run historical data files in off-line mode without requiring a connection to the Spark servers. 
If you are interested in seeing what an institutional grade market data feeds look like, here are some links to institutional vendor and exchange trade feed specifications I've compiled: Data Vendors Exchanges The Spark API SDK is a C# component I've written to provide easy access to the Spark API, and smooth over the quirks that come from accessing a native-C component via .NET. In addition, it includes the classes required to process and represent the event-feed in a form that is useful for higher-level logic such as trades, orders, order depth and securities. The SparkApi C# component contains three primary namespaces: Data Market Common I'll refer to specific classes from these namespaces in the following sections as we examine the concepts related to market data processing. The code in the Spark API SDK will be used as the example of how to access and process a market data feed. UPDATE: While there is a sample market data file included in the source code package, I've made additional market and security event data files availabled for download here for those who wish to experiment further. IMPORTANT: While the SparkAPI component in the SDK references the 'Spark.Net.dll' .NET library to access the Spark API, 'Spark.Net.dll' is actually an interop wrapper to a C library called 'spark.dll'. As the C library is not a COM object, it cannot be referenced directly. There are 32-bit and 64-bit versions of the spark.dll included in the download, however the solution is setup by default to utilise the 64-bit version. If you are running on a 32-bit machine, please follow the instructions below to swap the SparkAPI project over to use the 32-bit version. INSTRUCTIONS: In the Spark API project: In this code example, we will be processing market data updates from the Spark.Event structure supported by the Spark API. Spark.Event Below is a class diagram showing the Spark.Event struct in its C# form (rather than native C form): The following fields are relevant to all message types: The other fields are only relevant to some event types: Stock market related applications often perform comparison operations on prices, for example, comparing an aggressive market order price against a limit price in the order book to determine if a trade has occurred. As prices are expressed in dollars and cents, prices are normally represented in code as a floating point number (float or double). However, comparisons using floating point numbers are error prone (e.g. 36.0400000001 != 36.04) and yield unpredictable results. To avoid this issue, stock market related applications often internally convert prices into an integer format. Not only does this ensure accurate comparison operations, but it also reduces the memory footprint and speeds up comparisons (integer operations are faster on a CPU than floating point operations). The sub-dollar section of prices are retained in an integer format by multiplying the price by a scaling factor. For example, the price 34.25 would become 342500 with a scaling factor of 10,000. Four decimal places are sufficient to represent the valid range of tick prices in the Australian market, as the smallest price would be a mid-point trade on a sub $0.10 stock with a tick of 0.001 (e.g. Bid=0.081 Ask=0.082 Mid=0.0815). An accuracy of four decimal places works for market prices, however calculation of values such as average execution price on an order are better stored in a double or decimal type as they do not require accurate comparisons. 
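As an illustration of that scaling approach (a sketch only; this is not code from the SDK):

// Convert between display prices and scaled integer prices (scaling factor of 10,000).
public static class PriceScaling
{
    public const int Factor = 10000;

    public static int ToScaled(decimal price)
    {
        // e.g. 34.25m -> 342500
        return (int)decimal.Round(price * Factor);
    }

    public static decimal ToPrice(int scaled)
    {
        // e.g. 342500 -> 34.25m
        return (decimal)scaled / Factor;
    }
}

// Comparisons then become exact integer comparisons, for example:
// if (aggressiveBuyPriceScaled >= bestAskScaled) { /* a trade has occurred */ }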
The following steps are required are required to process a Spark data feed: The Spark API SDK does a lot of this work automatically for you. Here is some example code that replays a stock data file, processes the events in a security and writes the trade and quote updates to console: public void Main() { //Create an event feed using replay from file var replayManager = new SparkAPI.Data.ApiEventFeedReplay(@"Data\TestData\AHD_Event_20120426.txt"); //Create a security to receive the event feed var security = new SparkAPI.Market.Security("AHD"); replayManager.AddSecurity(security); //Add event handlers to write each trade and quote update to console security.OnTradeUpdate += (sender, args) => Console.WriteLine("TRADE\t" + args.Value.ToString()); security.OnQuoteUpdate += (sender, args) => Console.WriteLine("QUOTE\t" + args.Value.ToString()); //Initiate the event replay replayManager.Execute(); } Let's drill down into this and see what is happening. The first step is to read the events from the market data file. Obviously, this step is not required when you are connected to a market data server that is delivering the events via an API. Here are the event feed structures available in the SDK shown in a class diagram: The ApiSecurityEventFeed and ApiMarketEventFeed classes are used when receiving live market data. As we are replaying from file, we'll use the ApiEventFeedReplay class. When the ApiEventFeedReplay.Execute() method is called, it starts streaming lines from the event file and parsing them into the required data struct: ApiSecurityEventFeed ApiMarketEventFeed ApiEventFeedReplay ApiEventFeedReplay.Execute() public override void Execute() { var reader = new SparkAPI.Data.ApiEventReaderWriter(); reader.StreamFromFile(FileName, EventRecieved); } The ApiEventReaderWriter class contains all the logic required to read and write Spark events to file. We stream the events from file one at a time rather than reading them all at once into memory because loading every event from the exchange into memory before processing will generate an out-of-memory exception on a 32-bit build. It is also much faster. ApiEventReaderWriter Each line read from file is parsed into a Spark.Event struct using the SparkAPI.Data.ApiEventReaderWriter.Parse() method and then passed to the event processing method specified in the StreamFromFile command. In the case of a replay from file, this will be the ApiEventFeedReplay.EventReceived() method, which in turn calls the ApiEventFeedBase.RaiseEvent() method. The ApiEventFeedBase.RaiseEvent() method is where the replay and live event feed code paths align. All feed associated classes (ApiEventFeedReplay, ApiMarketEventFeed, ApiSecurityEventFeed) inherit from ApiEventFeedBase. SparkAPI.Data.ApiEventReaderWriter.Parse() ApiEventFeedReplay.EventReceived() ApiEventFeedBase.RaiseEvent() ApiEventFeedBase Let have a look and see what it does: internal void RaiseEvent(EventFeedArgs eventFeedArgs) { //Raise direct event if feed handler is assigned if (OnEvent != null) OnEvent(this, eventFeedArgs); //Raise event for security if in the dictionary Security security; if (Securities.TryGetValue(eventFeedArgs.Symbol, out security)) { security.EventReceived(this, eventFeedArgs); } } The EventFeedArgs contains a reference to the Spark.Event struct, a time-stamp for the event, and a symbol and exchange identifier. 
The ApiEventFeedBase class supports two mechanisms to propagate events: EventFeedArgs OnEvent EventReceived() The Security dictionary lookup allows the event feed to pipe event updates to the correct security and ignore the rest. Security In order to make this example easier to understand and step through via debugging, I've kept the entire market data processing sequence in a single thread. In practice, different tasks such as market data event processing and analytics are often allocated to different threads or different processes. This is a topic for a separate article. If you're interested, there is an event replay feed in the SDK using a multi-threaded implementation called SparkAPI.Data.ApiEventFeedThreadedReplay. This implements a producer-consumer pattern where the producer thread streams the events from file, parses them into structs and adds them to a concurrent queue. The consumer thread dequeues the event using a blocking collection and performs further processing. SparkAPI.Data.ApiEventFeedThreadedReplay So how do we represent all this market data in a useful object model? We need a Security class. It contains the following properties: Symbol MarketState Trades OrderBooks The classes associated with representing the security object model are shown below: The most complex area of the Security class relates to updating order depth in the LimitOrderBook class. We need to maintain the current order depth for each venue we receive data from. In Australia, there are two trading venues: the Australian Stock Exchange (ASX) and Chi-X Australia (CXA). As the Security class receives events for both venues, we store multiple LimitOrderBook classes in the OrderBooks dictionary, using the exchange ID (e.g. ASX, CXA) as the dictionary key. LimitOrderBook A LimitOrderBook class contains two LimitOrderList classes (Bid and Ask), which represent the bid and ask order queues. The LimitOrderList is a wrapper for a List<LimitOrder> generic collection. The LimitOrder class contains the detail for each order currently in the queue. The bid (buy) queue is sorted by price-time priority with the highest priced order at the top of the queue. The ask (sell) queue is sorted by price-time priority with the lowest priced order at the top of the queue. LimitOrderList Bid Ask List<LimitOrder> <L LimitOrder Once events reach the Security object, they need to be interpreted to update the Security data objects and fields. 
Here is the method that processes the events: internal void EventReceived(object sender, EventFeedArgs eventFeedArgs) { //Process event Spark.Event eventItem = eventFeedArgs.Event; switch (eventItem.Type) { //Depth update case Spark.EVENT_NEW_DEPTH: case Spark.EVENT_AMEND_DEPTH: case Spark.EVENT_DELETE_DEPTH: //Check if exchange order book exists and create if it doesn't LimitOrderBook orderBook; if (!OrderBooks.TryGetValue(eventFeedArgs.Exchange, out orderBook)) { orderBook = new LimitOrderBook(eventFeedArgs.Symbol, eventFeedArgs.Exchange); OrderBooks.Add(eventFeedArgs.Exchange, orderBook); } //Submit update to appropriate exchange order book orderBook.SubmitEvent(eventItem); if (OnDepthUpdate != null) OnDepthUpdate(this, new GenericEventArgs<LimitOrderBook>(eventFeedArgs.TimeStamp, orderBook)); break; //Trade update case Spark.EVENT_TRADE: //Create and store trade record Trade trade = eventItem.ToTrade(eventFeedArgs.Symbol, eventFeedArgs.Exchange, eventFeedArgs.TimeStamp); Trades.Add(trade); if (OnTradeUpdate != null) OnTradeUpdate(this, new GenericEventArgs<Trade>(eventFeedArgs.TimeStamp, trade)); break; //Trade cancel case Spark.EVENT_CANCEL_TRADE: //Find original trade in trade record and delete Trade cancelledTrade = eventItem.ToTrade(eventFeedArgs.TimeStamp); Trade originalTrade = Trades.Find(x => (x.TimeStamp == cancelledTrade.TimeStamp && x.Price == cancelledTrade.Price && x.Volume == cancelledTrade.Volume)); if (originalTrade != null) Trades.Remove(originalTrade); break; //Market state update case Spark.EVENT_STATE_CHANGE: State = ApiFunctions.ConvertToMarketState(eventItem.State); if (OnMarketStateUpdate != null) OnMarketStateUpdate(this, new GenericEventArgs<MarketState>(eventFeedArgs.TimeStamp, State)); break; //Market quote update (change to best market bid-ask prices) case Spark.EVENT_QUOTE: if (OnQuoteUpdate != null) { LimitOrderBook depth = OrderBooks[eventFeedArgs.Exchange]; MarketQuote quote = new MarketQuote(eventFeedArgs.Symbol, eventFeedArgs.Exchange, depth.BidPrice, depth.AskPrice, eventFeedArgs.TimeStamp); OnQuoteUpdate(this, new GenericEventArgs<MarketQuote>(eventFeedArgs.TimeStamp, quote)); } break; default: break; } } Trades, quotes and market state updates only require the conversion of the information in the event struct into an C# equivalent object, and then update the relevant property (for market state and quote) or list (for trades). Updating the limit order book entries is more complex, so we'll examine that in detail. In the LimitOrderBook.SubmitEvent() method, we determine whether we should add it to the Bid or Ask queue: LimitOrderBook.SubmitEvent() public void SubmitEvent(Spark.Event eventItem) { LimitOrderList list = (ApiFunctions.GetMarketSide(eventItem.Flags) == MarketSide.Bid) ? Bid : Ask; lock (_lock) { list.SubmitEvent(eventItem); } } A lock is used when submiting an event as the limit order book queues may be traversed by other threads that require the information. 
Once we have a reference to the correct LimitOrderList object, its SubmitEvent() method is called: SubmitEvent() public void SubmitEvent(Spark.Event eventItem) { switch (eventItem.Type) { case Spark.EVENT_NEW_DEPTH: //ENTER LimitOrder order = eventItem.ToLimitOrder(); if (Count == 0) { Add(order); } else { Insert(eventItem.Position - 1, order); } break; case Spark.EVENT_AMEND_DEPTH: //AMEND this[eventItem.Position - 1].Volume = (int)eventItem.Volume; break; case Spark.EVENT_DELETE_DEPTH: //DELETE RemoveAt(eventItem.Position - 1); break; default: break; } } The Position field in the event struct is the key to determining where the action should occur. For ENTER orders, it provides the insertion position, and for AMEND or DELETE orders, it provides a reference to the correct order. Note that Position uses a base 1 rather than base 0 reference point. Some data feeds may not provide a position value, but a unique order identifier for a depth update. In this case, you will need to determine the correct location of the order based on time-price priority rules for ENTER orders, and use the order ID via a hashtable lookup to locate orders when amending or deleting. There are many topics in this article I feel should be discussed in more detail, such as synchronising event processing across multiple threads and the impact that latency has on processing market data for backtesting. There is also the question of what you do with the market data, covering areas such as metrics, trading strategies and the complex area of order state management. However, I'm hopeful that I have provided an introduction to the concepts and data structures involved, and given some code examples and sample market data to those interested in experimenting further. Please feel free to post any comments, questions or suggestions you may have. Version 1.0 - 27-Feb-2013 - Initial versionVersion 1.1 - 04-Mar-2013 - Minor editing, added links to additional event data files This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) spark.dll Unable to load DLL 'spark.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E) Trust you doing great! We have a requirement where we want to broadcast data feed to different subscribers. We want to do unicast and multicast both. As we understand UDP / UDT protocol will serve the best. However, we are not sure on design / architecture. Do we need to setup P2P connection between each client and our server (In other words single dedicated port for each subscriber) or we can connect multiple subscriber on single port without any delay and loss of data ? Do we need to implement any message queue system? If single port can be serviced to multiple subscribers then ideally how many subscribers per port ? Any comments, knowledge sharing on above would be great. Or if you can share any link ? 
//Stop existing event feed if required if (_eventFeed != null) Stop(); //Start new event feed using _eventFeed = new ApiSecurityEventFeed(symbol, exchange); //Create stock to process events _security = new Security(symbol); _eventFeed.AddSecurity(_security); securityMarketView.Security = _security; //Initiate replay on separate thread _eventFeedThread = new Thread(_eventFeed.Execute); _eventFeedThread.Start(); public void ExecuteTest_Market() { //Create market event feed var feedManager = new SparkAPI.Data.Markets.ApiMarketEventFeed("ASX"); //Add securties that you want to monitor List<string> symbols = File.ReadLines(@"C:\Temp\ASX200.txt").ToList(); symbols.ForEach(security => feedManager.AddSecurity(security)); //Initiate feed feedManager.Execute(); } [TestFixture] public class ApiSecurityEventFeedTest { private ConcurrentQueue<string> _output; [Test] public void ExecuteTest_MultiSecurity_Verbose() { //Read symbol list from file IEnumerable<string> symbols = File.ReadLines(@"C:\Temp\ASX100.txt"); //Initiate each event feed in separate thread _output = new ConcurrentQueue<string>(); Task.WaitAll(symbols.Select(symbol => Task.Factory.StartNew(() => InitiateFeed(symbol, "ASX"))).ToArray()); //Write output to file var builder = new StringBuilder(); string line; while (_output.TryDequeue(out line)) { builder.AppendLine(line); } File.WriteAllText(@"C:\Temp\MarketData.txt", builder.ToString()); } private void InitiateFeed(string symbol, string exchange) { //Create a security event feed var feedManager = new SparkAPI.Data.Securities.ApiSecurityEventFeed(symbol, exchange); Security security = feedManager.AddSecurity(symbol); //Add event handlers to write each trade and quote update to console security.OnTradeUpdate += (sender, args) => _output.Enqueue(string.Format("{0}\tTRADE\t{1}", symbol, args.Value.ToString())); security.OnQuoteUpdate += (sender, args) => _output.Enqueue(string.Format("{0}\tQUOTE\t{1}", symbol, args.Value.ToString())); //Initiate the event feed feedManager.Execute(); } } private static List<LimitOrderBookPriceLevel> GeneratePriceLevels(IEnumerable<LimitOrder> orderList, int maximumPriceLevels) { List<LimitOrderBookPriceLevel> result = new List<LimitOrderBookPriceLevel>(); LimitOrderBookPriceLevel priceLevel = null;// note here, priceLevel is null int currentPrice = 0; int levelIndex = 0; foreach (LimitOrder order in orderList) { //Update price level if (order.Price != currentPrice) { levelIndex++; if (priceLevel != null) result.Add(priceLevel);// First time priceLevel will be null, so priceLevel will not be added to result List. 
            priceLevel = new LimitOrderBookPriceLevel();
            priceLevel.Price = order.Price;
            currentPrice = order.Price;
        }
        priceLevel.Count++;
        priceLevel.Volume += order.Volume;

        //Exit if maximum price level is reached
        if (levelIndex > maximumPriceLevels) break;
    }
    return result;
}

private static List<LimitOrderBookPriceLevel> GeneratePriceLevels(IEnumerable<LimitOrder> orderList, int maximumPriceLevels)
{
    List<LimitOrderBookPriceLevel> result = new List<LimitOrderBookPriceLevel>();
    LimitOrderBookPriceLevel priceLevel = null;
    int currentPrice = 0;
    int levelIndex = 0;
    foreach (LimitOrder order in orderList)
    {
        //Update price level
        if (order.Price != currentPrice)
        {
            levelIndex++;
            if (priceLevel != null) result.Add(priceLevel);
            priceLevel = new LimitOrderBookPriceLevel();
            priceLevel.Price = order.Price;
            currentPrice = order.Price;
        }
        priceLevel.Count++;
        priceLevel.Volume += order.Volume;

        //Exit if maximum price level is reached
        if ((levelIndex > maximumPriceLevels) && (maximumPriceLevels != -1)) break;
    }

    //Add current price level if exists
    if (priceLevel != null) result.Add(priceLevel);

    return result;
}

"BadImageFormatException" error occurred in this position. >> Spark.Init(); Please help me to solve this problem. Thank you.
CC-MAIN-2016-22
refinedweb
5,232
56.96
service that can be used not only by components running in the same process as local service, but activities and services, running in different processes, can bind to it and send and receive data. When we implement a bound service we have always to extend Service class but we have to override onBind method too. This method returns an object that implements IBinder, that can be used to interact with the service. There are three way we can create a bound service: - Extending IBinder interface - Using Messenger - Using AIDL In this post we want to analyze how to create a Android service with Messenger. Using this method, we can create a service that can be used by components in different processes. In this case, we use Handler and Message to exchange data between service and other components. Implementing bound service with Messenger Service based on Messenger can communicate with other components in different processes, known as Inter Process Communication (IPC), without using AIDL. To implement a service like this we need: - A service handler: this component handles incoming requests from clients that interact with the service itself. - A Messenger: this class is used to create an object implementing IBinderinterface so that a client can interact with the service. So let’s implement the Service. As example we can suppose we want to create a service that receives a string and converts it in upper-case and returns the result to the client. So as first thing, we create a class that implements Service: public class ConvertService extends Service { .. } As told before, we need an Handler to implement incoming request from clients so, we can create an inner class like this: class ConvertHanlder extends Handler { @Override public void handleMessage(Message msg) { // This is the action int msgType = msg.what; switch(msgType) { case TO_UPPER_CASE: { try { // Incoming data String data = msg.getData().getString("data"); Message resp = Message.obtain(null, TO_UPPER_CASE_RESPONSE); Bundle bResp = new Bundle(); bResp.putString("respData", data.toUpperCase()); resp.setData(bResp); msg.replyTo.send(resp); } catch (RemoteException e) { e.printStackTrace(); } break; } default: super.handleMessage(msg); } } In handleMessage we start handling the incoming requests. The first thing we have to do it “decode” the type of request we are handling. We can use for this purpose the what attribute of Message class. Depending on its value we perform different operations: in our case we just convert in upper case a string. We pass the string value a Bundle attached to the Message, so that at line 12 we get the value. We have to send a response to the client, so we create another Message (line 13) that holds the response and attach a new Bundle holding the converted string (line 14-15). At line 16 we send the message back to the client (we will see it later). So in this way, we created our request handler but we have to create an IBinder instance so that a client can use our service. To do it, we need a Messenger: public class ConvertService extends Service { ... private Messenger msg = new Messenger(new ConvertHanlder());; ... @Override public IBinder onBind(Intent arg0) { return msg.getBinder(); } } At line 3, a new instance of Messenger class is created passing the Handler we discussed before. At line 6 we override the onBind method and return an instance of IBinder interface. Our Service is ready. 
At the end we define it in Manifest.xml: <service android: Notice we used android:process so that the Service runs in a different process from the client. Android Service client Now we have to implement a client that binds to the service and send data to it. We can suppose that the client is an Activity that allows the user to insert a string that has to be converted in uppercase. The activity calls bindService method to bind to the service created before. When we bind to a “remote” service, using bindService method, we need to provide a callback methods so that we get notified when the bind process is completed and we can “use” the service. We have to create a class that implements ServiceConnection to receive this notification: .. private ServiceConnection sConn; private Messenger messenger; .. @Override protected void onCreate(Bundle savedInstanceState) { // Service Connection to handle system callbacks sConn = new ServiceConnection() { @Override public void onServiceDisconnected(ComponentName name) { messenger = null; } @Override public void onServiceConnected(ComponentName name, IBinder service) { // We are conntected to the service messenger = new Messenger(service); } }; ... // We bind to the service bindService(new Intent(this, ConvertService.class), sConn, Context.BIND_AUTO_CREATE); .. } At line 8, we create an instance of ServiceConnection override its methods. At line 18 we create a Messanger that we use, later, to get the IBinder instance so that we can send the messages to our service. Finally at line 24 we bind to the service specifying the service class and the callback interface Now we need a “receiving” handler to manage the service response: // This class handles the Service response class ResponseHandler extends Handler { @Override public void handleMessage(Message msg) { int respCode = msg.what; switch (respCode) { case ConvertService.TO_UPPER_CASE_RESPONSE: { result = msg.getData().getString("respData"); txtResult.setText(result); } } } } This handler behaves like the one in the service implementation, it extracts the response from the Bundle attached to the Message and show the result to the user at line 11. The last thing we have to cover is sending from the Activity to the Service the string that has to be converted. In this case we can suppose we have a Button in our interface that when user clicks it, the Activity sends the data: btn.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { String val = edt.getText().toString(); Message msg = Message .obtain(null, ConvertService.TO_UPPER_CASE); msg.replyTo = new Messenger(new ResponseHandler()); // We pass the value Bundle b = new Bundle(); b.putString("data", val); msg.setData(b); try { messenger.send(msg); } catch (RemoteException e) { e.printStackTrace(); } } }); Please notice that at line 9 we set the reply Messenger that will be used the the service when it has to send back the response.
http://www.javacodegeeks.com/2014/01/android-bound-service-ipc-with-messenger.html
CC-MAIN-2014-42
refinedweb
1,001
55.03
Log in as superuser on the machine designated to be the root master server. The examples in these steps use rootmaster as the root master server and doc.com. as the root domain. Check the root master server's domain name. Use the domainname command to make sure the root master server is using the correct domain name. The domainname command returns a workstation's current domaindirectory. If the name is not correct, change it. (Do not include a trailing dot with the domainname command. The domainname command is not an NIS+ command, so it does not follow the NIS+ conventions of a trailing dot.) The above example changes the domain name of the root master server from strange.domain to doc.com. When changing or establishing a domain name, make sure that it has at least two elements; for example, doc.com instead of doc. The final element should end in either an Internet organizational name (such as .com) or a geographical identifier (such as .jp or .uk). Check the root master server's switch-configuration file. Make sure the root master server is using the NIS+ version of the nsswitch.conf file, even if it will run in NIS-compatibility mode. This step ensures that the primary source of information for the root master are NIS+ tables. This command displays the current nsswitch.conf file. The primary name service referenced by this file should be nisplus. If the root master server's configuration file does not use nisplus as the primary name service, exchange it for one that does, as explained in "Selecting a Different Configuration File". Optionally, configure the Diffie-Hellman key length. If you are using DES authentication, you can elect to increase the Diffie-Hellman key length from the default 192 bits. For example, to allow both 640 and 192-bit keys type the following: If you made any changes at all to the nsswitch.conf file, stop and restart the nscd daemon. Because nscd caches the contents of the nsswitch.conf file, it is necessary to stop and restart nscd after any change to the switch file. Complete instructions are provided in Chapter 1, Setting Up the Name Service Switch. Now kill and restart keyserv, as shown below. Clean out leftover NIS+ material and processes. If the workstation you are working on was previously used as an NIS+ server or client, remove any files that might exist in /var/nis and kill the cache manager, if it is still running. In this example, a cold-start file and a directory cache file still exist in /var/nis: This step makes sure files left in /var/nis or directory objects stored by the cache manager are completely erased so they do not conflict with the new information generated during this configuration process. If you have stored any admin scripts in /var/nis, you might want to consider temporarily storing them elsewhere, until you finish setting up the root domain. Kill server daemons If the workstation you are working on was previously used as an NIS+ server, check to see if rpc.nisd or rpc.nispasswdd is running. If either of these daemons is running, kill them. Name the root domain's admin group. Although you won't actually create the admin group until Step 16, you must identify it now. Identifying it now ensures that the root domain's org_dir directory object, groups_dir directory object, and all its table objects are assigned the proper default group when they are created in Step 14. To name the admin group, set the value of the environment variable NIS_GROUP to the name of the root domain's admin group. 
Here are two examples, one for csh users, and one for sh/ksh users. They both set NIS_GROUP to admin.doc.com.. For C Shell For Bourne or Korn Shell Create the root directory and initialize the root master server. This step creates the first object in the namespace--the root directory--and converts the workstation into the root master server. Use the nisinit -r command, as shown below. (This is the only instance in which you will create a domain's directory object and initialize its master server in one step. In fact, nisinit -r performs an automatic nismkdir for the root directory. In any case, except the root master, these two processes are performed as separate tasks.) A UNIX directory with the name /var/nis/data is created. Within the /var/nis directory is a file named root.object. This is not the root directory object; it is a file that NIS+ uses to describe the root of the namespace for internal purposes. The NIS+ root directory object is created in Step 11 or Step 12. In subsequent steps, other files are added beneath the directory created in this step. Although you can verify the existence of these files by looking directly into the UNIX directory, NIS+ provides more appropriate commands. They are called out where applicable in the following steps. Do not rename the /var/nis or /var/nis/data directories or any of the files in these directories that were created by nisinit or any of the other NIS+ configuration procedures. In Solaris Release 2.4 and earlier, the /var/nis directory contained two files named hostname. It also contained a subdirectory named /var/nis/hostname. In Solaris Release 2.5, the two files are named trans.log and data.dict, and the subdirectory is named /var/nis/data. In Solaris Release 2.5, the content of the files has also version of rpc.nisd. Therefore, you should not rename either the directories or the files. [NIS-Compatibility only] Start the NIS+ daemon with -Y. Perform this step only if you are setting up the root domain in NIS-compatibility mode; if setting up a standard NIS+ domain, perform Step 12 instead. This step includes instructions for supporting the DNS forwarding capabilities of NIS clients. Substep a starts the NIS+ daemon in NIS-compatibility mode. Substep b makes sure that when the server is rebooted, the NIS+ daemon restarts in NIS-compatibility mode. After substep b of Step b, go to Step 14. Use rpc.nisd with the -Y, -B, and -S 0 options. The -Y option invokes an interface that answers NIS requests in addition to NIS+ requests. The -B option supports DNS forwarding. The -S 0 flag sets the server's security level to 0, which is required at this point for bootstrapping. Because no cred table exists yet, no NIS+ principals can have credentials; if you used a higher security level, you would be locked out of the server. Edit the /etc/init.d/rpc file. Search for the string EMULYP="Y" in the /etc/init.d/rpc file. Uncomment the line and, to retain DNS forwarding capabilities, add the -B flag. An rpc file with DNS forwarding contains: An rpc file without DNS forwarding contains: If you do not need to retain DNS forwarding capabilities, uncomment the line but do not add the -B flag. [Standard NIS+ only] Start the NIS+ daemon. Use the rpc.nisd and be sure to add the -S 0 flag. The -S 0 flag sets the server's security level to 0, which is required at this point for bootstrapping. Because no cred table exists yet, no NIS+ principals can have credentials, and if used with a higher security level, you would be locked out of the server. 
Verify that the root objects have been properly created. As a result of Step 11 or Step 12, your namespace should now have: A root directory object (root.dir) A root master server (rootmaster) running the NIS+ daemon (rpc.nisd) A cold start file for the master server (NIS_COLD_START) A transaction log file (trans.log) A table dictionary file (data.dict). The root directory object is stored in the directory created in Step 10. Use the ls command to verify that it is there. At this point, the root directory is empty; in other words, it has no subdirectories. You can verify this by using the nisls command. However, it has several object properties, which you can examine using niscat -o: Notice that the root directory object provides full (read, modify, create, and destroy) rights to both the owner and the group, while providing only read access to the world and nobody classes. (If your directory object does not provide these rights, you can change them using the nischmod command.) To verify that the NIS+ daemon is running, use the ps command. The root domain's NIS_COLD_START file, which contains the Internet address (and, eventually, public keys) of the root master server, is placed in /var/nis. Although there is no NIS+ command that you can use to examine its contents, its contents are loaded into the server's directory cache (NIS_SHARED_DIRCACHE). You can examine those contents with the /usr/lib/nis/nisshowcache command. Also created are a transaction log file (trans.log) and a dictionary file (data.dict). The transaction log of a master server stores all the transactions performed by the master server and all its replicas since the last update. You can examine its contents by using the nislog command. The dictionary file is used by NIS+ for internal purposes; it is of no interest to an administrator. Create the root domain's subdirectories and tables. This step adds the org_dir and groups_dir directories, and the NIS+ tables, beneath the root directory object. Use the nissetup utility. For an NIS-compatible domain, be sure to include the -Y flag. Here are examples for both versions: For standard NIS+ only NIS-compatible only Each object added by the utility is listed in the output: The -Y option creates the same tables and subdirectories as for a standard NIS+ domain, but assigns read rights to the passwd table to the nobody class so that requests from NIS clients, which are unauthenticated, can access the encrypted password in that column. Recall that when you examined the contents of the root directory with nisls (in Step 12), it was empty. Now, however, it has two subdirectories. You can examine the object properties of the subdirectories and tables by using the niscat -o command. You can also use the niscat option without a flag to examine the information in the tables, although at this point they are empty. Create DES credentials for the root master server. The root master server requires DES credentials so that its own requests can be authenticated. To create those credentials, use the nisaddcred command, as shown below. When prompted, enter the server's root password. If you enter a password that is different from the server's root password, you receive a warning message and a prompt to repeat the password: If you persist and retype the same password, NIS+ will still create the credential. The new password will be stored in /etc/.rootkey and be used by the keyserver when it starts up. 
To give the keyserver the new password right away, run keylogin -r, as described in the credentials chapter of Solaris Naming Administration Guide. If you decide to use your login password after all, press Control-c and start the sequence over. If you were to retype your login password as encouraged by the server, you would get an error message designed for another purpose, but which in this instance could be confusing. As a result of this step, the root server's private and public keys are stored in the root domain's cred table (cred.org_dir.doc.com.) and its secret key is stored in /etc/.rootkey. You can verify the existence of its credentials in the cred table by using the niscat command. Since the default domain name is doc.com., you don't have to enter the cred table's fully qualified name; the org_dir suffix is enough. You can locate the root master's credential by looking for its secure RPC netname. Create the root domain's admin group. This step creates the admin group named in Step 9. Use the nisgrpadm command with the -c option. The example below creates the admin.doc.com. group. This step only creates the group--it does not identify its members. That is done in Step 17. To observe the object properties of the group, use niscat -o, but be sure to append groups_dir in the group's name. Add the root master to the root domain's admin group. Since at this point the root master server is the only NIS+ principal that has DES credentials, it is the only member you should add to the admin group. Use the nisgrpadm command again, but with the -a option. The first argument is the group name, the second is the name of the root master server. This example adds rootmaster. doc.com. to the doc.com domain. To verify that the root master is indeed a member of the group, use the nisgrpadm command with the -l option (see the groups chapter of Solaris Naming Administration Guide). With group-related commands such as nisgrpadm, you don't have to include the groups_dir subdirectory in the name. You need to include that directory with commands like niscat because they are designed to work on NIS+ objects in general. The group-related commands are "targeted" at the groups_dir subdirectory. Update the root domain's public keys. the directories chapter of Solaris Naming Administration Guide.) To propagate the root master server's public key from the root domain's cred table to those three directory objects, use the /usr/lib/nis/nisupdkeys utility for each directory object. After each instance, you will receive a confirmation message such as this one: If you look in any of those directories (use niscat -o), you can find one or more entries like the following in the public key field: Start the NIS+ cache manager. The cache manager maintains a local cache of location information for an NIS+ client (in this case, the root master server). It obtains its initial set of information from the client's cold-start file (created in Step 11 or Step 12), and downloads it into a file named NIS_SHARED_DIRCACHE in /var/nis. To start the cache manager, enter the nis_cachemgr command as shown below. After the cache manager has been started, you have to restart it only if you have explicitly killed it. You don't have to restart it if you reboot, since the NIS_COLD_START file in /var/nis starts it automatically when the client is rebooted. For more information about the NIS+ cache manager, see the directories chapter of Solaris Naming Administration Guide. Restart the NIS+ daemon with security level 2. 
Now that the root master server has DES credentials and the root directory object has a copy of the root master's public key, you can restart the root master with security level 2. First kill the existing daemon, then restart with security level 2. For a standard NIS+ domain only: For an NIS-compatible root domain, be sure to use the -Y (and -B) flags: For a NIS-compatible NIS+ domain: Since security level 2 is the default, you don't need to use an -S 2 flag. Operational networks with actual users should always be run at security level 2. Security levels 0 and 1 are for configuration and testing purposes only. Do not run an operational network at level 0 or 1. Add your LOCAL credentials to the root domain. Because you don't have access rights to the root domain's cred table, you must perform this operation as superuser. In addition, the root master's /etc/passwd file must contain an entry for you. Use the nisaddcred command with the -p and -P flags as shown below. The principal-name consists of the administrator's login name and domain name. This example adds a LOCAL credential for an administrator with a UID of 11177 and an NIS+ principal name of topadmin.doc.com. For more information about the nisaddcred command, see the credentials chapter of Solaris Naming Administration Guide. Add your DES credentials to the root domain. Use the nisaddcred command again, but with the following syntax: The secure-RPC-netname consists of the prefix unix followed by your UID, the symbol @, and your domain name, but without a trailing dot. The principal-name is the same as for LOCAL credentials: your login name followed by your domain name, with a trailing dot. If, after entering your login password, you get a password that differs from the login password warning, yet the password you entered is your correct login password, ignore the error message. The message appears because NIS+ cannot read the protected /etc/shadow file that stores the password, as expected. The message would not have appeared if you had no user password information stored in the /etc/passwd file. Add credentials for the other administrators. Add the credentials, both LOCAL and DES, of the other administrators who will work in the root domain. You can do this in the following ways. An easy way to create temporary credentials for the other administrators is to use Solstice AdminSuite (if you have it available) running in NIS+ mode. A second way is to ask them to add their own credentials. However, they will have to do this as superuser. Here is an example that adds credentials for an administrator with a UID of 33355 and a principal name of miyoko.doc.com. A third way is for you to create temporary credentials for the other administrators, using dummy passwords. (Note that the other administrator, in this example miyoko, must have an entry in the NIS+ passwd table. If no such entry exists, you must first create one with nistbladm. The example below includes that step.) In this case, the first instance of nisaddent populates the passwd table--except for the password column. The second instance populates the shadow column. Each administrator can later change his or her network password using the chkey command. The credentials chapter of Solaris Naming Administration Guide describes how to do this. Add yourself and other administrators to the root domain's admin group. You don.doc.com. group: Allocate sufficient swap space to accommodate NIS+ tables. Swap space should be double the size of the maximum size of rpc.nisd. 
To determine how much memory rpc.nisd is using, issue the following command: rpc.nisd will under certain circumstances fork a copy of itself. If there is not enough memory, rpc.nisd fails. You can also calculate the memory and swap space requirements for NIS+ tables. For example, if you have 180,000 users and 180,000 hosts in your NIS+ tables, those two tables occupy approximately 190 Mbytes of memory. When you add credentials for 180,000 users and 180,000 hosts, the cred table has 540,000 entries (one entry for each local user credential, one entry for each DES user credential, and one entry for each host). The cred table occupies approximately 285 Mbytes of memory. In this example, rpc.nisd occupies at least 190 Mbytes + 285 Mbytes = 475 Mbytes of memory. So, you will require at least 1 Gbyte swap space. You will also want at least 500 Mbytes of memory to hold rpc.nisd entirely in memory.
http://docs.oracle.com/cd/E19455-01/806-1386/c3root-27799/index.html
CC-MAIN-2015-27
refinedweb
3,224
65.12
§Comet sockets §Using chunked responses to create Comet sockets A good use for Chunked responses is to create Comet sockets. A Comet socket is just a chunked text/html response containing only <script> elements. At each chunk we write a <script> tag that is immediately executed by the web browser. This way we can send events live to the web browser from the server: for each message, wrap it into a <script> tag that calls a JavaScript callback function, and writes it to the chunked response. Let’s write a first proof-of-concept: an enumerator that generates <script> tags that each call the browser console.log JavaScript function: def comet = Action { val events = Enumerator( """<script>console.log('kiki')</script>""", """<script>console.log('foo')</script>""", """<script>console.log('bar')</script>""" ) Ok.chunked(events).as(HTML) } If you run this action from a web browser, you will see the three events logged in the browser console. We can write this in a better way by using play.api.libs.iteratee.Enumeratee that is just an adapter to transform an Enumerator[A] into another Enumerator[B]. Let’s use it to wrap standard messages into the <script> tags: import play.twirl.api.Html // Transform a String message into an Html script tag val toCometMessage = Enumeratee.map[String] { data => Html("""<script>console.log('""" + data + """')</script>""") } def comet = Action { val events = Enumerator("kiki", "foo", "bar") Ok.chunked(events &> toCometMessage) } Tip: Writing events &> toCometMessageis just another way of writing events.through(toCometMessage) §Using the play.api.libs.Comet helper We provide a Comet helper to handle these Comet chunked streams that do almost the same stuff that we just wrote. Note: Actually it does more, like pushing an initial blank buffer data for browser compatibility, and it supports both String and JSON messages. It can also be extended via type classes to support more message types. Let’s just rewrite the previous example to use it: def comet = Action { val events = Enumerator("kiki", "foo", "bar") Ok.chunked(events &> Comet(callback = "console.log")) } §The forever iframe technique The standard technique to write a Comet socket is to load an infinite chunked comet response in an HTML iframe and to specify a callback calling the parent frame: def comet = Action { val events = Enumerator("kiki", "foo", "bar") Ok.chunked(events &> Comet(callback = "parent.cometMessage")) } With an HTML page like: <script type="text/javascript"> var cometMessage = function(event) { console.log('Received event: ' + event) } </script> <iframe src="/comet"></iframe> Next: WebSockets
https://www.playframework.com/documentation/2.4.2/ScalaComet
CC-MAIN-2021-43
refinedweb
411
56.76
Blender 3D: Blending Into Python/Custom datablock properties Contents Adding support for Per Datablock Properties Proposal[edit] A Draft, proposing the addition of user defined, per datablock properties. This means that any blender datatype with an ID should be able to have properties assigned to it. Object/Mesh/Camera/Metaballs/Lamp/Lattice/Text/IPO/Screen/Scene/World/Wave - This does not include faces or verts for the moment. This may be added later but would require further work and is outside the scope of this proposal. There are some areas I do not know enough about and request others to expand. I encourage others who may benefit from this, to read through this page and see if there are any improvements that could be made (edit, give examples and discuss :) ) - Ideasman Discussion[edit] Message from Ton About the property system, what needs to be sorted out is; - how it influences the UI - how does hierarchical work precisely? (also for UI access) - is there a way to default to property sets? Means; you can set custom properties to be added by default for yafray, crystalspace, etc. Would save a lot of hassle, and would create good accessible UIs that way. - I'm busy and haven't time for all these now, - Ideasman Reed Hedges says: In response to Ton's questions above, here's what I think: In the UI, you need to add something in some pane that's exactly like the existing gamelogic properties (name, value, type {string,bool,int,float,vector,etc.}). This could be a new window type instead of the game logic window or it could be in the buttons window with the Object-level operations, or it could be part of the Outliner view. From what I read below, it looks like the hierarchical naming is primarily a naming convention. This could be used by the UI to display as a tree, or they could just be a standard list with the "hierarchies" just in the name format. To me, default properties are not useful. However, they could be a separate mode of the same gui used for individual objects. +-CUSTOM PROPERTIES -------------------------------------+ | | | Selected Object: ME:Suzanne01 (Defauts) | <-- (Defaults) is a two-state toggle button | | to show default props instead of sel. object | Name: Type: Value: Del: | | [Example.Prop....] (String) [This is my stri...] (X) | <-- Name is text field. Type is multivalue | [Another.example.] (Bool ) (true) (X) | dropdown. Value depends on type. | (New) | <-- (New) creates a new empty property above it. +--------------------------------------------------------+ I would also request that all object types be able to have properties, including the Scene/World. (Yep, Scene & World will since they have ID's - Ideasman42) Applications[edit] - Annotations Comments about a mesh, object or material could be stored as a link to a text block or a string. Other metadata might also be added for different projects. - Render Settings Extra Properties for external rendering engines, may of these could be added to the material data, and global render settings could be assigned to the Scene datablock. This could be a step towards better renderman intergration! - Game Engine Many game engine specific settings such as Level Of Detail for objects. Different pose states for figures. could be stored in these settings. - Blender CAD Many attributes could be added for use in the BlenderCAD project. 
(please expand) - Simulation There is a gap to be filled in the simulation industry, there are a number of people already using blender in simulation, however there is no easy way to add settings for Simulation output (Namely OpenFlight Properties) - Extended datas some internal functionalities may need to store values, while not wanting to impose the added weight to all datablocks, even those who don't need it - Comment: Allow users to mess with data that's accessed by internal functions as well is not a good idea. If it's useful for internals at all (I can't think of a situation where I wouldn't prefer another implementation) user data and Blender data should be separated or access restricted (read-only for Blender stuff). -- tbleicher - Answer to comment : it is not because you use properties mechanism that it must be user editable or even viewable. that case is useful when you need broad datas added in very few cases like a tiny percent of edges on a whole mesh - (by Ideasman) Guys, Id like to resolve this, feel free to disagree, but a line has to be drawn as to the functionality this project covers. - 1) Users must be able to edit properties directly, there will be an interface for this. Blenders internal tools can manage properties also but there will be a panel just for datablock properties. You could argue that letting the user touch them introduces possible errors in the data, but the advantages of allowing the user edit the properties far outweigh the effort of making python-scripts/blender correctly handle a variety of data. - 2)Properties will not be assigned to edges, bezier points, faces, indervidual bones etc. This is out of the scope of this project. If there is a need for pre mesh face properties for e.g., we can address this as a separate addition. for now only data with an ID struct will have custom properties. - Blender BVH Motion Capture Joint names could be assigned as properties so that duplicate object names would not conflict between different motion capture rigs. At the moment I am making all joint names unique because there is no good way to set the objects goint name. (Ideasman) Defaults[edit] For default settings a text file could be used in the home directory called .Bprops Existing files files are .B.blend for the default new file and .Bfs for a list of default dirs in the fileselector. The .Bprops text file would store properties that would be shown by default (without manually adding to the datablock) DataBlockType:PropertyType:PropertyName=DefaultValue - Would be the following format. Object:bool:CrystalSpace.staticObject=1 Material:Text:RenderMan.Shader="" Scene:int:RenderMan.Quality=5 Lamp:bool:OpenFlight.LightString=0 NOTE from the perspective of b2cs (crystalspace exporter), not only defaults would be useful, but also minimum, maximum and possibly even a description for each property. Example Usage[edit] Here are projects that may directly benefit from properties, appretiate input from authors and expansion on how this might benefit their project. Does this proposal furfill your needs? - Blight (openflight compatibility- storing openflight spesific notes) - Make Human(make human, per mesh humen settings...) - Brad Blender/Radiance Exporter (material settings...) 
- material: save description as text string; save dependencies on other definitions ("modifiers") - mesh: use props for "special" export objects instead of meshobj names - lamps: strore reference to luminaire data and used data source (db, fs) - scene: render options and post process information (filters, image conversion) - CrystalSpace OpenSource Game engine. - RenderMan Adding RenderMan support to Blender. - shader: specify shader name and parameters for RenderMan shaders (surface, displacement...) - primitives: provide hints to RenderMan exporter for processing specific primitives. - scene: provide hints to RenderMan exporter for scene level rendering control. Add your project here Property Structure[edit] Properties will be stored in a hierarchy. each datablock can any number of property folders (I'm proposing no root level properties to enforce better design) A plugin that used properties would have a folder in the root for all its settings- This would stop it from getting messy in the UI, and keep namespace separate for each plugin. Example structure. python can wrap the C data and access as a dict or as a class we may allow both to coexist ob.prop.foo or ob.prop['foo'] - Ideasman42), object.prop.metadata.annotation = 'This is a building' # string object.prop.metadata.timestamp = sys.time() # float object.prop.crystalSpace.lod = Mathutils.Vector(0,50) # vec object.prop.crystalSpace.shadowMap.used = True # bool object.prop.crystalSpace.shadowMap.size = 512 # int Property Types[edit] Here are a list of property types that could be used by plugins. - bool - Int - Float - String May be limited in size, for large multiline texts a text block could be linked. - Vector (2D, 3D, 4D) - Datablock link Would allow you to link to another datablock- text/object/mesh/curve etc. - Arrays of all previous values (not a requirement, since a folder of properties could be used, could be useful) User Interface[edit] The OOps view can be extended to have a panel that would allow editing of each datablocks properties. This could also be viewable in the outliner, however OOps view is better suited because it shows each datablock only once. and shows all datablocks at the same time. Python API Interface[edit] - Comment 1: For scripting more important than accessing and editing the whole set of properties will be the access to individual props by name/hierarchy. For BPy I'd like to have an implementation of __getitem__/__setitem__ (like in dictionaries). Examples: - amesh.prop["Radiance"]["export"]["smooth"] = True # set the property, bool - var= amesh.prop["Radiance"]["export"]["smooth"] # get the property - amesh.prop["Radiance"]["export"].keys() # get a list of sub properties. ["smooth", "emit_light"] - del amesh.prop["Radiance"]["export"]["smooth"] KeyError should be raised whenever a non-existent key is specified (or an existing key is converted to "container"). How to check this and react is the job of the script. - Comment 2: To import/export the property set a universal way to serialize the data would be wellcome. Currently we have to use the Python standard library to store/read runtime data from the Registry/text objects/text files. A builtin way to create/parse text strings from/to Python objects could help script writers to load and save persistent data. That way the whole properties set could be stored as a string attribute of the object. -- tbleicher (comment: Python's pickle module provides this, though I don't know how this might interact with Blender objects, etc. 
-- sapir) Internal Implementation[edit] Any Blender Data with an ID would be able to have a property. (please expand and add to how this may be implemented - in Blenders DNA) User data already sort of exists for objects?[edit] Sorry, I just came upon this page searching the web trying to figure out how to add user data to objects in Blender. Couldn't find anything in my searches, but finally figured out some stuff from the API docs. Try these in Blender's Python interactive console: cube = bpy.data.objects["Cube"] cube.addProperty("myUserDataName", 1, 'INT') dir(cube.getProperty("myUserDataName")) var = cube.getProperty("myUserDataName").data cube.getProperty("myUserDataName").setData(2) cube.getAllProperties() cube.removeProperty("myUserDataName")
https://en.wikibooks.org/wiki/Blender_3D:_Blending_Into_Python/Custom_datablock_properties
CC-MAIN-2019-13
refinedweb
1,761
54.42
I read that this is a tricky question because an applet is run in the browser. But I would like my applet window to always maintain the same size. (Right now working with Eclipse I can slide the size of the window.) For the moment I only do this: public class myJApplet extends JApplet{ public void init() { this.setSize(800, 480); } } this.setResizable(false) Set the size of the applet in the HTML. E.G. <html> <body> <applet code="myJApplet" width="800" height="480"> </applet> </body> </html> The applet will still be resizable when the HTML is loaded in the AppletViewer, but that is not relevant to deployment.
https://codedump.io/share/oTLrqnRkVZAH/1/japplet-setting-the-size-of-the-frame
CC-MAIN-2017-13
refinedweb
108
73.88
0 def translate(response): """Translates an English word into Pig Latin.""" # Initial lists and strings vowels = ["a", "e", "i", "o", "u", "A", "E", "I", "O", "U", "y", "Y"] consonants = ["b", "c", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "q", "r", "s", "t", "v", "w", "x", "z", "B", "C", "D", "F", "G", "H", "J", "K", "L", "M", "N", "P", "Q", "R", "S", "T", "V", "W", "X", "Z"] consonant_string = "" response = response.split() for word in response: for i in vowels: if word[0] == i: word += "way" break for x in range(len(word)): if word[x] in vowels: break for i in consonants: if word[x] == i: consonant_string += i word = word[len(consonant_string):] word += consonant_string word += "ay" return response An assignment for school was to make a program which would take an English word and translate it to the fake language of Pig Latin. I'm now trying to make it so that it can translate an entire sentence, one word at a time. Because of this, a lot of stuff comes copy and pasted from the original and may not make sense. However, I can not figure out what's wrong here. It just returns the list of words unaltered.
https://www.daniweb.com/programming/software-development/threads/184030/pig-latin-translator
CC-MAIN-2017-09
refinedweb
202
69.15
How to send data to characteristic over BLE I could use some help. I can connect to the service and characteristic, been able to receive data from another device I'm using. What I don't know is how to send over data to the device that should receive data. Is there a command for that? I'm using Windows 7 and Atom. I've been looking at the documentation for the LoPy and it should be characteristic.write that sends the data. # Handler for data Received (Working on Send) def characteristic_callback(char): #v = int.from_bytes(char.value(), "little") #print("Data: {}".format(chr(v))) i = 25 characteristic.write(i) characteristic.callback(trigger=Bluetooth.CHAR_NOTIFY_EVENT, handler=None, arg=None)``` - jmarcelino last edited by @codemaniac64 Umm I'm not sure, do you have any debug visibility on your PSoC code? Seems like it's rejecting something. Can you see if the BLE connection is still up when you try the .write(b'x0f')? It really helps to have a BLE sniffer like this So I've tried both using bytes([4])and payload = struct.pack("<H", 2048)and I get this error everytime. What am I doing wrong? Is it because I'm not doing anything in my characteristic_callback? Also I did change charto characteristicjust as Seb said. bytes([]) really expects an array of one byte integer to convert, if you try to covert something larger than one byte you'll get an error. To do it properly you should use struct.pack functions() where you can specify the byte order, for example to get the byte representation of 2048 as a little endian 16 bit short you''d use: import struct payload = struct.pack("<H", 2048) characteristic.write(payload) For Strings you can use payload_str = mystring.encode('ascii')(or Unicode, etc) Finally you can simple concatenate bytes with +e.g. payload + payload_str @jmarcelino Thank you so much! This has helped a lot. Is it also possible that I store it in a variable and print it out, that way I can enter numbers and letters I want to send over. bytes([value]) @codemaniac64 Yes that should do it. As explained on the documentation "For now it only accepts bytes object representing the value to be written." so you need to covert your integer to a bytes type with (for example to write one byte with the number 4): bytes([4]) Should it be like this? Also what does the error I get mean? If the PSoC 4 is your GATT Server and you’ve already connected and found the characteristic it should just be a matter of doing characteristic.write(value) But do this outside of the callback function. Just use the characteristic you found via service.characteristics() @jmarcelino My LoPy would be my GATT Client that I will send to my PSoC 4 BLE that will be my GATT Server. Another thing, in your trigger you have Bluetooth.CHAR_READ_EVENT, but the documentation says that it can must be Bluetooth.CHAR_NOTIFY_EVENT. So if I want to write something wouldn't it be: char.callback(trigger=Bluetooth.CHAR_WRITE_EVENT, handler=char_cb_handler) I already have the connection established to the desired characteristic, so that step is not needed. I also already know that the characteristic has a writable bit set because I've already programmed it that way and done tests. I have tried the characteristic.write(value)aswell but it doesn't work. The only step I want to know is how to send something to the GATT server. - jmarcelino last edited by Can you clarify what is your GATT Server and Client in your setup? 
If the LoPy is your GATT Server you'd trigger on a CHAR_READ_EVENT in order to send data to the remote device def char_cb_handler(chr): global some_number return some_number char.callback(trigger=Bluetooth.CHAR_READ_EVENT, handler=char_cb_handler) So firstly you need to find the desired characteristic from the device using code like: from network import Bluetooth import time bluetooth = Bluetooth() bluetooth.start_scan(10) while True: adv = bluetooth.get_adv() if adv and binascii.hexlify(adv.mac) == 'XXXXXXXXXX': try: print('1') conn = bluetooth.connect(adv.mac) except: bluetooth.start_scan(5) continue break wanted_char = None services = conn.services() for service in services: print("Service: {}".format(service.uuid())) if service.uuid() == b'XXXXXXXXXXXXX': characteristics = service.characteristics() for characteristic in characteristics: print("Characteristic: {}".format(characteristic.uuid())) if characteristic.uuid() == b'XXXXXXXXXXXX': wanted_char = characteristic You will then need to check the wanted_char.properties()to see if it has the writable bit set, if it does you can then do wanted_char.write(b'x0f') Are you sending to another LoPy or a third party device? I want to send something over BLE from LoPy. That is all. A simple 'Hello' or a value that I can enter in putty. What exactly are you trying to achieve? @seb Do I need to change every char to characteristic? Nothing changed when I changed the char in the characteristic_callback In the code you have posted the callback parameter is called charnot characteristicso please adjust your code accordingly.
https://forum.pycom.io/topic/2826/how-to-send-data-to-characteristic-over-ble
CC-MAIN-2021-43
refinedweb
839
59.9
As we seen in our last post, “What is Git ?” , git works for local as well as remote development. In this post, we will try to prepare local development machine with git, so that we can create our local git and start working on it. Git can be installed on Ubuntu using below command, $ sudo apt-get update $ sudo apt-get install git The above command first updates your machine’s source repository and then goes on to install git using command “apt-get install git” . Note, we have installed this on Ubuntu 18.04, but if you are installing git on lower versions of Ubuntu, if above command didn’t worked, you can also try following command, $ sudo apt-get install git-core Once, the above installation is successful, it should add “git” command to your development machine, details of which can be checked as, $ which git /usr/bin/git $ git --help usage: git [--version] [--help] [-C ] [-c = ] [--exec-path[= ]] [--html-path] [--man-path] [--info-path] [-p | --paginate | --no-pager] [--no-replace-objects] [--bare] [--git-dir= ] [--work-tree= ] [--namespace= ] [ ] checkout Switch branches or restore working tree files 'git help -a' and 'git help -g' list available subcommands and some concept guides. See 'git help ' or 'git help ' to read about a specific subcommand or concept.
https://www.lynxbee.com/how-to-install-git-on-ubuntu/
CC-MAIN-2020-24
refinedweb
216
51.41
Lab Exercise 9: Inheritance The purpose of this lab is to give you practice in creating a base class and many child classes that inherit the methods and fields of a base class. In addition, we'll be modifying the L-system class to enable it to use rules that include multiple possible replacement strings and choose them stochastically (randomly). The other piece we'll be implementing this week is handling rules with multiple possible replacements. Adding this capability will enable you to draw yet more complex L-system structures where each tree is unique. We'll continue to use material from the ABOP book. Tasks This week we'll modify the lsystem.py and interpreter.py files and create a new shape.py file that implements shapes other than trees. Please follow the naming conventions for classes and methods. We'll provide some test code that expects particular classes, methods, and parameter orderings. - Create a new working folder. Copy your lsystem.py and interpreter.py files from your prior assignment (version 2). This week we are writing version 3 of both files. Label them as version 3. - The change to the Lsystem class is to enable it to use rules with multiple possible replacements. During replacement, the particular replacement string is chosen randomly. There are a number of methods requiring modification: __init__, __str__, read, replace, and addRule. In addition, you'll need to import the random package.' ] In the __init__ method, change the initialization of the self.rules field to be an empty dictionary instead of an empty list. - In the addRule method we need to redo the function. rule argument, then the rule variable will be: [ 'F', 'F[+F]', 'F[-F]', 'F[+F][-F]' ] The symbol is the first item in the list (rule[0]) and the set of replacements is the remainder of the list (rule[1:]). To avoid problems, we want to copy the elements of rule[1:] into a new list and then assign that list to the dictionary entry with the symbol as the key. The algorithm is below. def addRule( self, rule ): # set a local variable to the empty list # for each replacement string in the rule (2nd element on) # append the element to the local list # set the dictionary entry for the key (rule[0]) to be the new list - In the replace function - The last two changes are in the read method. First, we have to remember to initialize self.rules to an empty dictionary (I did say this duplicate code would be a pain). Second, in the call to addRule we should pass in all but the first item of the words list: words[1:] (if you don't already do this). When you're done with these changes, try running one of the following L-systems with just one or two iterations. Do it twice and see if you get the same string both times. - Make three changes to the Interpreter class. First, just after the class Interpreter: created the class variable, add the following three lines to the beginning of the __init__ method. Note that we have to use the class name to access a class variable. if Interpreter.initialized: return Interpreter.initialized = True Now the init function will not recreate the turtle window if it already exists. Second, add two cases to draw string. For the character '{' call turtle.fill(True), and for the character '}' call turtle.fill(False). Third, if you don't already have it, create a method color(self, c) that sets the turtle's color to the value in c. When you're done, download and run the following test function. 
- We're now going to start building a system that makes use of our interpreter class to build shapes that are not necessarily the output of L-systems. Just as we made functions in the second project to draw shapes, now we're going to make classes that create shape objects. Create a new file shape.py. In it, create a parent class called Shape. Then define an init method with the following definition. def __init__(self, distance = 100, angle = 90, color = (0, 0, 0), istring = '' ): # create a field called distance and assign it distance # create a field called angle and assign it angle # create a field called color and assign it color # create a Interpreter object # have the Interpreter object place the turtle at (xpos, ypos, orientation) # have the Interpreter object set the turtle color # have the Interpreter, # color, an angle of 90,. Once you have finished the lab, go ahead and get started on project 9.
http://cs.colby.edu/courses/S10/cs151-labs/labs/lab09/
CC-MAIN-2017-51
refinedweb
766
73.68
#include <perfmon/pfmlib.h> int pfm_get_event_name(unsigned int e, char *name, size_tmaxlen); int pfm_get_full_event_name(pfmlib_event_t *ev, char *name, size_tmaxlen); int pfm_get_event_mask_name(unsigned int e, unsigned int mask, char *name, size_tmaxlen); int pfm_get_event_code(unsigned int e, int *code); int pfm_get_event_mask_code(unsigned int e, unsigned int mask, int *code); int pfm_get_event_code_counter(unsigned int e, unsigned int cnt, int *code); int pfm_get_event_counters(int e, pfmlib_regmask_t counters); int pfm_get_num_events(unsigned int *count); int pfm_get_max_event_name_len(size_t *len); int pfm_get_event_description(unsigned int ev, char **str); int pfm_get_event_mask_description(unsigned int ev, unsigned int mask, char **str); The pfm_get_full_event_name function returns in name the event name given the full event description in ev. The description contains the event code in ev->event and optional unit masks descriptors in ev->unit_masks. The maxlen argument indicates the maximum length of the buffer provided for name. If more than maxlen-1 characters are needed to represent the event, an error is returned. In case unit masks are provided, the final event name string is structured as: event_name:unit_masks1[:unit_masks2]. Event names and unit masks names are returned in all upper case. The pfm_get_event_code function returns the event code in code given its opaque descriptor e. On some PMU models, the code associated with an event is different based on the counter it is programmed into. The pfm_get_event_code_counter function is used to retrieve the event code in code when the event e is programmed into counter cnt. The counter index cnt must correspond to of a counting PMD register. Given an opaque event descriptor e, the pfm_get_event_counters function returns in counters a bitmask of type pfmlib_regmask_t where each bit set represents a PMU config register which can be used to program this event. The bitmask must be accessed using accessor macros defined by the library. It is possible to list all existing events for the detected host PMU using accessor functions as the full table of events is not accessible to the applications. The index of the first event is always zero, then using pfm_get_num_events you get the total number of events. Event descriptors are contiguous therefore a simple loop will allow complete scanning. The typical scan loop is constructed as follows: unsigned int i, count; char name[256]; pfm_get_num_events(&count); for(i=0;i < count; i++) { pfm_get_event_name(i, name, 256); printf("%s\n", name); } The pfm_get_num_events function returns in count the total number of events supported by the host PMU. The former pfm_get_first_event has been deprecated. You can simply initialize your variable to 0 to point to the first event. The former pfm_get_next_event has been deprecated. You need to retrieve the total number of events for the host PMU and then increment your loop variable until you reach that count. The pfm_get_max_event_name_len function returns in len the maximum length in bytes for the name of the events or its unit masks, if any, available on one PMU implementation. The value excludes the string termination character ('\0'). The pfm_get_event_description function returns in str the description string associated with the event specified in ev. The description is returned into a buffer that is allocated to hold the entire description text. 
It is the responsibility of the caller to free the buffer, when it is no longer needed, by calling the free(3) function.

The pfm_get_event_mask_code function must be used to retrieve the actual unit mask value, given an event descriptor in e and a unit mask descriptor in mask. The value is returned in code.

The pfm_get_event_mask_name function must be used to retrieve the name associated with a unit mask specified in mask for event e. The name is returned in the buffer specified in name. The maximum size of the buffer must be specified in maxlen.

The pfm_get_event_mask_description function returns in str the description string associated with the unit mask specified in mask for the event specified in ev. The description is returned into a buffer that is allocated to hold the entire description text. It is the responsibility of the caller to free the buffer, when it is no longer needed, by calling the free(3) function.
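To show how these accessors compose in practice, here is a small sketch (not from the manual) that prints every event's name and description and frees each description buffer. It assumes the library has already been successfully initialized with pfm_initialize(), and that comparing the return code against PFMLIB_SUCCESS matches your library version.

/* Sketch only: assumes pfm_initialize() has already succeeded. */
#include <stdio.h>
#include <stdlib.h>
#include <perfmon/pfmlib.h>

void list_event_descriptions(void)
{
    unsigned int i, count;
    char name[256];
    char *desc;

    pfm_get_num_events(&count);
    for (i = 0; i < count; i++) {
        pfm_get_event_name(i, name, sizeof(name));
        if (pfm_get_event_description(i, &desc) == PFMLIB_SUCCESS) {
            printf("%s: %s\n", name, desc);
            free(desc);   /* the caller owns the description buffer */
        }
    }
}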
http://www.makelinux.net/man/3/P/pfm_get_event_mask_description
CC-MAIN-2015-40
refinedweb
672
51.68
Change Page Title in React

Change the page title when the user routes to a new page.

A Great Tool

There are many ways to get the page title to change and some great tools to assist you. There is a particularly great tool that will do that and a bit more. It's called react-helmet. It can set anything that occurs in the head of the document. It has a great declarative API because it's exposed as just another component.

Using react-helmet

To use this feat of software engineering, we'll install it:

npm install react-helmet

And then use it inside of one or more of our components. Let's say that we have a home page and an about page. We might have a top-level component as the entry point for each page. We would put a call to react-helmet in each of these components, assigning the respective titles:

import { Helmet } from 'react-helmet'

const Home = _ =>
  <div>
    <Helmet>
      <title>Muffins R Us</title>
    </Helmet>
    Welcome to muffin land. It's breakfast time!
  </div>

const About = _ =>
  <div>
    <Helmet>
      <title>About | Muffins R Us</title>
    </Helmet>
    How we got into muffins.
  </div>

Now when your router changes components from Home to About or vice versa, the Helmet will engage and change your site's title.

Depending on your site design, you could change the site title even when the URL hasn't changed. Since the title is set when a component is rendered, all you'd need to do is render a component with a different declaration inside it, and the site title would change.

How else do you handle updating the page title in a React app? Or have you found other interesting uses for react-helmet?
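For instance (a hypothetical component, not from the original post), the title can depend on props, so rendering the same route with different data still updates the title:

const Muffin = ({ name }) =>
  <div>
    <Helmet>
      <title>{`${name} | Muffins R Us`}</title>
    </Helmet>
    Details about the {name} muffin.
  </div>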
https://jaketrent.com/post/change-page-title-in-react/
CC-MAIN-2022-40
refinedweb
300
74.08
Happy New Year! Thought I’d start the new year with some more off-road fun on how to interact with the diagnostic components in VS. People have often asked me “what is the format for a VSP file John”. The answer is “it depends”. That sounds a bit glib, but realistically, the VSP is too raw for most people to get any value out of. It contains the blobs of data that we have read at profile time, and varies in format depending on the profile type used. It has not done any analysis of what the data means and does not contain any symbolic information (i.e. no names of functions or classes). To make sense of it, you’d really have to duplicate our analysis engine, which seems kinda pointless.

The analysis basically has two major outputs:
1) An ADO.Net dataset with the aggregate tables like function, module, type
2) A calltree that gets represented in the call tree view

In future we hope to build a real extensibility and automation model that is fully supported, but for the moment let me mirror what I did with code coverage in previous articles, but in a much shorter way. Note, this is using VS2008 bits. This was not available in VS2005; we essentially re-architected the engine in VS2008 to set us up to provide richer extensibility.

Here’s a code snippet to read a VSP and print the methods from the method table.

using System;
using System.Collections.Generic;
using System.Text;

// There are actually 4 references you will need to add to
// your project that includes this code:
// VSPerfPresentation, VSPerfAnalysis, VSPerfData, VSPerfReader
using VSPerfPresentation;
using VSPerfAnalysis;
using VSPerfReader;

namespace VSPMethodDump
{
    class VSPMethodDump
    {
        string _filename;
        ProfileDataProvider _pdp;

        // A simple method dumper program that dumps profiler data for all the
        // methods/functions in a VSP file
        static void Main(string[] args)
        {
            VSPMethodDump vmd = new VSPMethodDump(args[0]);
            vmd.Dump();
        }

        VSPMethodDump(string filename)
        {
            _filename = filename;

            // The ProfileDataProvider class is the class that allows
            // reading and analysis of profiler data in VSP files.
            // First parameter is a reghive for when PDP is working inside VS
            ProfileDataProvider.Initialize(null);
            _pdp = new ProfileDataProvider();
        }

        void Dump()
        {
            // The option store gives a variety of ways for analysis
            // to be performed on the target file. We'll keep it to a simple
            // Function summary
            VSPerfAnalysis.OptionStore ops = new VSPerfAnalysis.OptionStore();
            ops.Clear();
            ops.FunctionSummary = true;

            // Noise reduction is the folding you see in the VS2008 profiler
            _pdp.EnableNoiseReduction = false;

            // Read the file and analyze. There are certain operations that can
            // be performed after the open step, which is basically a read of the
            // index of the file
            _pdp.OpenFile(_filename, ops);
            _pdp.Analyze();

            // A ProfileDataViewer is a class that allows access to the analyzed data,
            // in this case the Function summary, which is the same as the Function view
            // in VS2008. They are pre-populated with a set of default visible columns
            ProfileDataViewer pdv = _pdp.GetViewer("Function");
            WriteHeaders(pdv);

            // A flat provider mimics the flat table view as seen in the profiler. Others
            // are available when the view is more tree like, e.g. CallTree
            IFlatProfileViewer flat = pdv as IFlatProfileViewer;

            // Each IProfileRow has the data for a given method/function in this example
            IList<IProfileRow> rows = flat.GetRows();
            foreach (IProfileRow row in rows)
            {
                WriteRow(row, pdv);
            }

            // Close is needed to ensure proper closure of symbol files and files
            _pdp.Close();
        }

        void WriteHeaders(ProfileDataViewer viewer)
        {
            // As well as visible columns you can inspect AvailableColumns to see
            // what could be written here. If you want to write said columns, they
            // need to be turned visible by updating the "Visible" property
            for (int i = 0; i < viewer.VisibleColumns.Count; i++)
            {
                string colString = viewer.VisibleColumns[i].MediumName;
                Console.Write(colString + ",");
            }
            Console.WriteLine();
        }

        void WriteRow(IProfileRow row, ProfileDataViewer viewer)
        {
            for (int i = 0; i < viewer.VisibleColumns.Count; i++)
            {
                string colString = viewer.GetText(row, i);
                Console.Write(colString + ",");
            }
            Console.WriteLine();
        }
    }
}

Sadly I’m not good enough at blogging to have this as an attachment. Next time out on the ramps, I’ll show some other areas of the API that can provide more insight.

Enjoy!

John
https://blogs.msdn.microsoft.com/ms_joc/2008/01/02/more-off-road-monster-truck-madness-how-to-read-a-vs-profiler-vsp-file-programmatically/
CC-MAIN-2016-44
refinedweb
687
55.44
Hello, I am a newbie to Quantopian. I am just writing some very simple code to try it out. The code looks like this:

def initialize(context):
    context.dat = {}

def handle_data(context, data):
    full = data[symbol('SPY')]
    date = str(full.datetime)[:10]
    context.dat[date] = [full.open_price, full.high, full.low, full.close_price]
    log.info(context.dat)
    #log.info(context.dat['2015-06-04'])

I am just trying to get the data every minute and append it to a dictionary which is indexed by time, so that I can use the data to compute my indicators. I ran it for 3 days with daily data settings. At the end of the backtest the dictionary looks like this:

{'2015-06-08': [209.64, 209.83, 208.39, 208.47], '2015-06-04': [211.07, 211.78, 209.75, 210.22], '2015-06-05': [209.95, 210.58, 208.98, 209.79]}

The problem now is that I am not able to access the dictionary. If I say

context.dat['2015-06-04']

it gives me a key error for 2015-06-04. Why is it giving the key error even if there is a key? I just created the same kind of dictionary in my own computer's Python command prompt and I am able to access the dictionary using keys, but it's not working as expected in Quantopian. I am unable to figure out the problem here. Please assist me to solve this. Thanks for any help.
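One possible explanation (a guess, not a confirmed answer) is that the commented-out lookup runs on every bar, including bars processed before '2015-06-04' has been inserted, so the very first access raises KeyError. A minimal sketch of a guarded lookup:

def handle_data(context, data):
    full = data[symbol('SPY')]
    date = str(full.datetime)[:10]
    context.dat[date] = [full.open_price, full.high, full.low, full.close_price]

    key = '2015-06-04'
    # Only read the entry once that bar has actually been stored.
    if key in context.dat:
        log.info(context.dat[key])
    # or: log.info(context.dat.get(key))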
https://www.quantopian.com/posts/unable-to-acess-the-dictionary-with-time-key-values
CC-MAIN-2018-51
refinedweb
241
68.67
At this point you might be thinking that we have to spend a lot of time working out from the mouse position which Ellipse had been clicked on - well you don't have to because of the way WPF routed events work. The MouseDown event is actually raised on the Ellipse that the user clicked on and, because the Ellipse doesn't handle it, it bubbles up to the Grid which does handle it. The point is that the OriginalSource property of the event Args contains a reference to the object that was clicked on, i.e the Ellipse in question, so we don't have to work out which Ellipse is involved as we are told this as part of the event. We have to check that the source of the event is indeed an Ellipse but apart from this we can simply cast to Ellipse and update. The only problem is that we don't want to update the Ellipse - we want to update the cell in the array that corresponds to the Ellipse and let databinding update the Ellipse automatically. All we have to do to find out which cell in the array is bound to the Ellipse is examine its DataContext and cast to a cell: private void grid1_MouseDown( object sender, MouseButtonEventArgs e){ if (e.OriginalSource is Ellipse) { Ellipse temp=((Ellipse)e.OriginalSource); Cell C = (Cell)temp.DataContext; C.state = !C.state; }} An alternative would have been to implement a two-way data binding and simply update the Ellipses Opacity property but sometimes a direct approach is easier. At this point you can now run the program and attempt to draw on the Grid - but nothing happens. The problem is that while the initial setting of the cell's state was transferred to the Ellipses when everything was being set up, there is nothing to trigger the Ellipse's property update when the cell's state changes. So although you are changing the cell's state when you click on an Ellipse, this isn't being transferred to the Ellipse's bound property. The solution is very simple - you have to implement the INotifyPropertyChanged interface. You first need to add: using System.ComponentModel; to the start of the program and to the Cell class: class Cell: INotifyPropertyChanged To implement the interface you have to define an event: public event PropertyChangedEventHandler PropertyChanged; which has to be raised when the property is changed. To do this we need to modify the set modifier functions: set{ _state = value; if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs("state")); };} The if statement checks to see whether another object has subscribed to the event. If it has it raises the event with parameters set to the object that the property belongs to and the name of the property that has changed. Notice that once added to a class the same event can be used to signal a change on any property. Now if you run the program you can click on an Ellipse (or where an Ellipse is hidden) and it will change its state. A starting pattern>
http://www.i-programmer.info/projects/38/992-life-in-wpf.html?start=2
CC-MAIN-2015-14
refinedweb
515
65.66
The following code is just an example of a problem I having with structures and accessing the structure members. This program contains two structures that are to be part of a larger program for a project, but I just wanted to play with them to see how I would use them. The code for the typedef's and structures was the suggested code that was supplied in my text book, and since it is not the way I would have gone about it I wanted to test it first. Anyway the code compiles fine - but upon running it the program fails while attempting to store the input at the specified location in memory. I get the feeling that this is a real fundamental error.. stupid brain! #include <stdio.h> #include <stdlib.h> typedef struct node NODE; typedef struct list DLL; struct node { int *data; NODE *prior; NODE *next; }; struct list { long int count; NODE *head; NODE *tail; }; int main(void) { DLL *foobar; foobar = (DLL *) malloc(sizeof(DLL)); if(foobar == NULL) { printf("Could not allocate memory."); exit(0); } printf("Enter a value: "); scanf("%d", &foobar->head->data); return 0; }
https://cboard.cprogramming.com/c-programming/10203-accessing-structure-members.html
CC-MAIN-2017-13
refinedweb
188
66.17
Results 1 to 4 of 4 - xr600Guestnewbie Q - how to wait for keystroke in gcc/xcode ?? Hi ... Is there a worthy GCC alternative to the "getche" command in win/dos c++, I've been searching the web for 3 hours now, and can't really find any solution :/. The thing is, that I wan't my program to wait for a keystroke (like cin, but without pressing "return")... is there some easy way to do that ??. Newbie thanX. Torben. - xr600Guest Originally Posted by shadov . It really should'nt be that hard... should it ??, and please remember that I'm a complete newbie to C++ (last time I did programming Z80 Assembler code was the hottest ). Regards. Torben. - binkleyGuest Originally Posted by xr600 Code: #include <curses.h> int main() { int c; initscr(); c = getch(); printf("\r\nGot character: %c.\r\n", c); return 0; } Thread Information Users Browsing this Thread There are currently 1 users browsing this thread. (0 members and 1 guests) Similar Threads Xcode & gccBy namu in forum macOS - Operating SystemReplies: 2Last Post: 07-13-2012, 01:15 AM GCC and xcode.By macosuser in forum macOS - Apps and GamesReplies: 2Last Post: 06-28-2010, 01:51 AM Installing gcc on Mac OSX without re-installing XCodeBy jigglypuff in forum macOS - Development and DarwinReplies: 0Last Post: 02-26-2010, 01:16 AM C Program Trouble Compiling w/GCC, Trouble Running in XcodeBy ebeccarayray in forum macOS - Apps and GamesReplies: 1Last Post: 09-28-2009, 10:41 PM XCode - C - GCC optionsBy federicog in forum macOS - Development and DarwinReplies: 1Last Post: 08-21-2007, 03:17 PM
http://www.mac-forums.com/forums/os-x-development-darwin/4930-newbie-q-how-wait-keystroke-gcc-xcode.html
CC-MAIN-2018-05
refinedweb
267
59.84