text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
12 February 2010 23:59 [Source: ICIS news] LONDON (ICIS news)--European purified terephthalic acid (PTA) contract prices have increased by €40/tonne ($55/tonne) in January compared with December, in line with upstream paraxylene (PX) price movements, sources said on Friday. “We got the PX pass-through. January was very good because of demand, because of good base demand for the time of year,” said a producer, echoing comments from another producer. Absolute numbers were recorded between €690-720/tonne FD (free delivered) NWE (northwest ?xml:namespace> A customer said it was having to pay over €730/tonne for the month of January and recorded its acceptance of the PX pass-through for that month. Producers had been targeting a €10/tonne premium over the cost of PX. But because downstream polyethylene terephthalate (PET) producers endured greater-than-expected January increases in the price of PET’s other raw material, monoethylene glycol (MEG), a PTA producer said it decided against imposing the additional €10/tonne. “PET health is my health, so I need to take care of my customers,” said the producer. PX prices rose by €60/tonne to €800/tonne in January and rolled over in February. “[PTA] customers should not be running with huge inventories, because some thought that prices would drop in March,” said the second producer. PET plants were indeed running well in Producers of PET were attempting to raise their February prices by €50-80/tonne over January in order to compensate for failing to pass through the full upstream costs and to reflect further hikes in February. Sources viewed MEG as balanced to tight. MEG prices soared by €125/tonne in January to €785/tonne, and February looked set to increase further, according to industry players. February PTA discussions have yet to get under way. ($1 = €0.73) For more on PET, PTA, PX
http://www.icis.com/Articles/2010/02/12/9334461/europe-january-pta-increases-by-40tonne-due-to-px.html
CC-MAIN-2013-48
refinedweb
311
60.35
Python Programming, news on the Voidspace Python Projects and all things techie. Decorators I used decorators for the first time a couple of days ago (at work) [1]. I was pleased (and surprised), that not only did I get the syntax right without looking it up, but it worked first time [2]. Hooray, bonus points for me. It was a simple decorator for test methods in our home-brew unittest framework, it just prints the function name when it's called. This was so we could see exactly which test was freezing. """Prints the function (or method name) when called.""" name = func.func_name def decorated(*args, **keywargs): print name return func(*args, **keywargs) return decorated We're now part-way through swapping all our tests over to the Python UnitTest framework. So this code isn't needed, but it was fun to play. The biggest hassle in changing over, is that our framework followed the test parameter conventions of the Java unit test framework that the Resolver boss was familiar with. Tests had the following parameter order : message, expected, actual first, second, message Corrected from earlier, thanks to reader comments which are correct. I thought the Python framework owed its heritage to the common Java unit-test framework ? Oh well, four-hundred tests converted, only about twelve-hundred more to go. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2006-05-18 13:45:42 | | New IronPython Home IronPython, has a new home... almost. IronPython on Codeplex There's not much there yet. So is Codeplex Microsoft's answer to sourceforge ? Update In Michael Swanson's Blog, he quotes : CodePlex is an online software development environment for open and shared source developers to create, host and manage projects throughout the project lifecycle. It has been written from the ground up in C# using .NET 2.0 technology with Team Foundation Server on the back end. So, not that unlike sourceforge hey... In totally unrelated news, it looks like there are at least two ways of running Mac OS X on an ordinary PC : Maybe I don't need to buy a Mac mini after all, and can just build my own PC... Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2006-05-16 13:57:33 | | Categories: General Programming, Computers, Python, Hacking, IronPython Archives This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License. Counter...
http://www.voidspace.org.uk/python/weblog/arch_d7_2006_05_13.shtml
CC-MAIN-2016-36
refinedweb
411
73.47
What a great thread about another good outing by the IJC. Thank you for sharing.. <?xml:namespace prefix = o ns = "urn:schemas-microsoft-com<o:p></o:p><o:p></o:p> Tilla Jogian also finds mention in the epic love poem Heer Ranjha of Waris Shah. Ranjha, the story's protagonist, who when spending his time on the rebound, sublimating his love & passion in the spiritual world, came here for consolation and got his ears ringed here as was the tradition of Guru Goraknath's followers. Tilla Jogian comprises a complex of Hindu mandirs housing at least three baths and a network of waterworks with at least two minor dams. Tilla Jogian can be seen from districts of Mandi Bahauddin, Gujrat, <?xml:namespace prefix = st1<ST1:PJhelum</ST1:P</st1:place>. From its height of 3200 feet, you can see a panorama unparalleled in <st1:country-region w:<st1:place w:Pakistan</st1:place></st1:country-region>.. Source: Wikipedia Tilla Jogian Page @SK... thanks for detail :).... lovely pictures by all... Yup, we were too freaked to spend another few hour bumping down the track, so to get things interesting me and Lazzaz just raced down as fast as we could.... he did a decent job keeping up Had to stop time and again for NN to catch up :P:P But made it down to Rohtas in just over two hours I think. We were almost done with tea by the time the Mitsus caught up :P:P:P @SKGood Info, im resident of Rohtas, your pics are good too:) Hi to all,dear friends sorry for posting a late reply bcz i was not getting through.First of all thanks a lots to all for the appreciation of my tikka skills and liking the stuff. @NN very good writeup from your side.@Suhaib kiani thanks for the pictures and the knwledge about the TILLA JOGIAN really impressed.@Amir bhai nice and a very proffesional style of photography. Overall it was a good package as a whole enjoyed a lot specially weather and also the company of all the friends. @Hamid Chaudhry(PISTOLS) thankyou very much for spending such a long time with the(IJC)family.Hope you enjoyed the whole journy with all of us. Rgards ZEB CHAUDHRY. @ SK, Its a Jeep thing, you won't catch it.......................i.e if they are powered by Toyota engines! PS; Zabardast photoghraphy, captions n details.............truely appreciated. @NN koi bach ke na jane payeeee. Look like an enjoyable historical tour... Lovely Pics Suhaib Kiani Aamir bhai nice pics sir... Missed the great Blackop BBQ Excellent Pictures (Y) Excellent Pictures, SK, Aamir bhai & Green Jeep. What a place to be...Very good naration of the trip nn bhai...SK your information about Rohtas Fort and Tilla is superb Its only because of IJC's forums, I have come across so many new places. Inspiring stuff indeed...Will take my Mini Jero thr definately. (H) This is nice the most active Club of pakwheels is no doubt considered IJC @Libra! Your appreciation of my photography is always very enouraging for me! I do hope that someday, I will be able to spare some cash (from my cars and jeeps!) to invest in my equipment a bit more as I need better lenses Many thanks! Thanks for the kind words! The write -ups on Rohtas and Tilla are actually chaapaas from Wikipedia! I put it on Rohtas one but need to mention the source on the Tilla post! The well known Hadith encourages us to go to Cheen if we have to to search for knowledge. We should thank our lucky stars that Wikipedia saves us the journey on most occasions. Otherwise we would have been left with silly excuses like the road is still blocked Anyways long live Wiki..... pedia, leaks and all Thank you Sir! We missed you on the trip!
https://www.pakwheels.com/forums/t/ijc-visits-tilla-joogian-on-sunday-5-th-june-2011/149707?page=9
CC-MAIN-2017-04
refinedweb
650
75.2
So far in this series, we've discussed object-oriented programming in general, and the OOP principle of cohesion. In this article, we'll look at the principle of coupling and how it helps in game development. Note: Although this tutorial is written using Java, you should be able to use the same techniques and concepts in almost any game development environment. What Is Coupling? Coupling is the principle of "separation of concerns". This means that one object doesn't directly change or modify the state or behavior of another object. Coupling looks at the relationship between objects and how closely connected they are. A Relations Diagram is a great way to visualise the connections between objects. In such a diagram, boxes represent objects and arrows represent a connection between two objects where one object can directly affect another object. A relations diagram A good example of coupling is HTML and CSS. Before CSS, HTML was used for both markup and presentation. This created bloated code that was hard to change and difficult to maintain. With the advent of CSS, HTML became used just for markup, and CSS took over for presentation. This made the code fairly clean and easily changeable. The concerns of presentation and markup were separated. Why Is Coupling Helpful? Objects that are independent from one another and do not directly modify the state of other objects are said to be loosely coupled. Loose coupling lets the code be more flexible, more changeable, and easier to work with. A loosely coupled system Objects that rely on other objects and can modify the states of other objects are said to be tightly coupled. Tight coupling creates situations where modifying the code of one object also requires changing the code of other objects (also known as a ripple effect). Tightly coupled code is also harder to reuse because it can't be separated. A tightly coupled system A common phrase you'll hear is "strive for low coupling and high cohesion". This phrase is a helpful reminder that we should strive for code that separates tasks and doesn't rely heavily on each other. Thus, low (or loose) coupling is generally good, while high (or tight) coupling is generally bad. How to Apply It Asteroids First, lets look at the objects of Asteroids and how they are connected. Recall that the objects are a ship, an asteroid, a flying saucer, and a bullet. How are these objects related or connected to each other? In Asteroids, a ship can fire a bullet, a bullet can hit an asteroid and a flying saucer, and an asteroid and a flying saucer can hit the ship. Our relations diagram then looks as follows: As you can see the objects are all pretty well interrelated. Because of this, we have to be careful of how we write the code, otherwise we will end up with a tightly coupled system. Lets take for example the ship firing a bullet. If the ship were to create a bullet object, keep track of its position, and then modify the asteroid when the bullet hits, our system would be very tightly coupled. Instead, the ship should create a bullet object, but not worry about it after that point. Another class would be responsible for keeping track of the bullet's position as well as what happens when a bullet hits an asteroid. With an intermediary class in between our relationships, the diagram would look as follows: This relations diagram looks a lot better and creates a very loosely coupled system. In this system, if we were to add an object, such as a meteor, we could easily do so without having to change how the ship or bullet objects function - we'd just let our intermediary class take care of it all. Tetris Since there is only one object in Tetris, the Tetrimino, coupling isn't really a problem as there cannot be any relationships with other objects. Pac-Man For Pac-Man, tight coupling could occur when Pac-Man eats a power pellet. When a power pellet is eaten, Pac-Man can then eat ghosts and the ghost's state changes to eatable. If the Pac-Man object were to track when it ate a power pellet and then initiate the change state of each ghost, the system would be tightly coupled. Again, another class is required to keep track of this information and process it so that our objects can be loosely coupled. Below is an example of how an intermediary class could track Pac-Man eating a power pellet. /** * Intermediary Class that keeps track of game events */ public class Game { ... if (Pac-Man.eats(Power_Pellet)) { Ghosts.changeState(); } ... } Conclusion Coupling is the principle of reducing how objects directly affect the states and behaviors of other objects. Coupling helps to create code that is easier to read as well as easier to change. In the next Quick Tip, we'll discuss the principle of encapsulation and why it helps with creating maintainable code. Follow us on Twitter, Facebook, or Google+ to keep up to date with the latest posts. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://gamedevelopment.tutsplus.com/tutorials/quick-tip-the-oop-principle-of-coupling--gamedev-1935
CC-MAIN-2017-47
refinedweb
872
62.38
Python math.hypot() Method Example Find the hypotenuse of a right-angled triangle where perpendicular and base are known: import math #set perpendicular and base parendicular = 10 base = 5 #print the hypotenuse of a right-angled triangle print(math.hypot(parendicular, base)) Definition and Usage The math.hypot() method returns the Euclidean norm. The Euclidian norm is the distance from the origin to the coordinates given. Prior Python 3.8, this method was used only to find the hypotenuse of a right-angled triangle: sqrt(x*x + y*y). From Python 3.8, this method is used to calculate the Euclidean norm as well. For n-dimensional cases, the coordinates passed are assumed to be like (x1, x2, x3, ..., xn). So Euclidean length from the origin is calculated by sqrt(x1*x1 + x2*x2 +x3*x3 .... xn*xn). Syntax Parameter Values Technical Details More Examples Example Find the Euclidean norm for the given points: import math #print the Euclidean norm for the given points print(math.hypot(10, 2, 4, 13)) print(math.hypot(4, 7, 8)) print(math.hypot(12, 14))
https://www.w3schools.com/python/ref_math_hypot.asp
CC-MAIN-2020-45
refinedweb
182
59.5
zhttp_client(3) CZMQ Manual - CZMQ/4.1.1 Name zhttp_client - Class for provides a simple http client Synopsis // This is a draft class, and may change without notice. It is disabled in // stable builds by default. If you use this in applications, please ask // for it to be pushed to stable state. Use --enable-drafts to enable. #ifdef CZMQ_BUILD_DRAFT_API // *** Draft method, for development use, may change without warning *** // Create a new http client CZMQ_EXPORT zhttp_client_t * zhttp_client_new (bool verbose); // *** Draft method, for development use, may change without warning *** // Destroy an http client CZMQ_EXPORT void zhttp_client_destroy (zhttp_client_t **self_p); // *** Draft method, for development use, may change without warning *** // Self test of this class. CZMQ_EXPORT void zhttp_client_test (bool verbose); #endif // CZMQ_BUILD_DRAFT_API Please add '@interface' section in './../src/zhttp_client.c'. Description zhttp_client - provides a simple http client Please add @discuss section in ./../src/zhttp_client.c. Example From zhttp_client_test method zhttp_client_t *self = zhttp_client_new (verbose); assert (self); zhttp_request_t* request = zhttp_request_new (); zhttp_request_set_url (request, url); zhttp_request_set_method (request, "GET"); int rc = zhttp_request_send (request, self, /*timeout*/ 10000, /* user args*/ NULL, NULL); assert (rc == 0); void *user_arg; void *user_arg2; zhttp_response_t* response = zhttp_response_new (); rc = zhttp_response_recv (response, self, &user_arg, &user_arg2); assert (rc == 0); assert (streq (zhttp_response_content (response), "Hello World!")); zhttp_client_destroy (&self); zhttp_request_destroy (&request); zhttp_response_destroy (&response);.
http://czmq.zeromq.org/czmq4-1:zhttp-client
CC-MAIN-2022-33
refinedweb
201
51.99
WE_Frontend::MainCommon - common methods for all WE_Frontend::Main* modules Do not use this module at its own!!! Just consult the methods. Note that all methods are loaded into the WE_Frontend::Main namespace. Use the appropriate publish method according to the WEsiteinfo::Staging config member livetransport. May return a hash reference with following members: List reference of published directories. List reference of published files. Options to publish: Be verbose if set to true. Reference to an array with additional directories to be published. Reference to an array with additional files to be published. livetransport may be any of the standard ones: rsync, ftp, ftp-md5sync, rdist, or rdist-ssh. For custom methods, use either of the following: custom:method_name Where method_name has to be a method in the WE_Frontend::Main namespace and already loaded in This will cause to load a module with the name WE_Frontend::Publish::basename (with uppercase basename) and call a method publish_basename (lowercase). This will case to require the module (based on the package name of the method) and call this method. XXX This method is not used XXX. Use the appropriate search indexer method according to the WEsiteinfo::SearchEngine config member searchindexer. searchindexer may take any of the following standard values: htdig or oosearch. Checks recursively all links from -url (which may be a scalar or an array reference), or for all languages homepages. By default, the language homepages should be in $c->paths->rooturl . "/html/" . $lang . "/" . "home.html" but the last part ("home.html") can be changed by the -indexhtml argument. Slaven Rezic - [email protected] WE_Frontend::Main, WE_Frontend::Main2.
http://search.cpan.org/~srezic/WE_Framework-0.097_03/lib/WE_Frontend/MainCommon.pm
CC-MAIN-2015-11
refinedweb
265
59.6
At 10:46 PM 10/7/2009 +0200, M.-A. Lemburg wrote: >P.J. Eby wrote: > > At 07:27 PM 10/7/2009 +0200, M.-A. Lemburg wrote: > >> Having more competition will also help, e.g. ActiveState's PyPM looks > >> promising (provided they choose to open-source it) and then there's > >> pip. > > > > Note that both PyPM and pip use setuptools as an important piece of > > their implementation (as does buildout), so they are technically the > > competition of easy_install, rather than setuptools per se. > > > > IOW, putting setuptools in the stdlib wouldn't be declaring a victor in > > the installation tools competition, it'd simply be providing > > infrastructure for (present and future) tools to build on. > >I'm sure that some implementation of some of the concepts of >setuptools will end up in the stdlib - in a well-integrated and >distutils compatible form. > >Perhaps we can even find a way to remove the need for .pth files >and long sys.path lists :-) Setuptools doesn't *require* either of those now. Installing in flat, distutils-compatible format has been supported for years, and .pth files are only needed for good namespace package support. But PEP 382 makes namespace packages possible without .pth files, so even that could go away.
https://mail.python.org/pipermail/python-dev/2009-October/092739.html
CC-MAIN-2016-50
refinedweb
207
67.25
Chapter 3 Writing Python Programs in Maya Author Seth Gibson Project Develop functions for a texture processing framework Example Files Synopsis This chapter introduces the fundamentals of creating Python programs by focusing on the development of functions. It presents a number of properties of functions including arguments and return values. The chapter also makes comparisons with MEL to introduce keywords and concepts for iteration, conditional statements, and exception handling when working with Maya commands. Readers will develop a basic texture processing framework throughout the chapter, which they could customize to use in their own work. Resources Python Programming Style Guide Lambda Forms Other Notes None Errata for the First Edition On p. 105, the process_diffuse() function contains a line reading shaders = maya.cmds.listConnections('%s.outColor'%'file1', destination=True,), which should instead read shaders = maya.cmds.listConnections('%s.outColor'%'file_node', destination=True). Hi Seth, You might already know this, but I’ll mention it, in case it hasn’t come to your attention. There is a tiny error, on page 92, step 12. Running the “process_all_textures(**kwargs)” function won’t return the results printed in the book, on page 93. Unless we remove the “_” from “prefix = concrete_”(bottom of page 92, 6th printed line up), or add an “elif”, to the process_all_textures() definition, similar to what I have written below: elif pre[-1] == ‘_’: pre = pre I know it’s such a minor nitpick. The whole book is so solid so far that when I ran into it, I was rewriting the function multiple times to make sure that I hadn’t missed anything. Lastly, I just to say, that this book has been an fantastic learning resource so far. Looking forward to working through the rest! -Jeffrey M. Hi Jeffrey, I just retested the example myself and still get the same results as those printed in the book. What results do you get? Wow, excellent book! It would be useful to have all the code in the book to copy paste while testing, but typing practice is good too. This is minor, but on P.73, step 20, I get different results. Where you state ‘my_file1’, I get ‘my_texture’: def process_all_textures(**kwargs): # Set defaults for dictionary entries, so def won’t choke if they are missing pre = kwargs.setdefault(‘prefix’, ‘my_’) texture = kwargs.setdefault(‘texture_node’) # Default value will be: None print (‘%s%s’%(pre, texture)) # Calling with no keywords does nothing process_all_textures() # my_None # calling with an unhandled keyword produces no error: process_all_textures(file_name=’default.jpg’) # my_None # specifying the ‘texture_node’ key will process file1: process_all_textures(texture_node= ‘texture’) # my_texture Thanks for getting in touch, Mitch! I’m glad you’be been enjoying the book so far. The problem in this case is that this chapter is basically a big, run-on example. In this case, the variable texturewas defined in step 6. Sorry for the confusion! Same page – missing ‘ ‘: arg_dict = { ‘prefix’:’grass_’, ‘texture_node’:tx1 } # NameError: name ‘tx1’ is not defined # Likewise, in this case, the variable tx1was defined in step 16. Thanks for the quick response Adam! My bad, I didn’t start following with code until P.72. Duh. Thanks so much for writing this. I’ve been using Python in Maya for about 2 years – without a good understanding of what’s going on under the hood. The book is really expanding my understanding of the language and toolset. Hi Adam: I ran into another problem. When working w/the code on P91-2, I get an error from line 17 of the def: When we have: newTextures = [‘nothing’,’persp’,cmds.shadingNode(‘file’,asTexture = 1)] newTextures = process_all_textures3(prefix = ‘concrete_’, texture_nodes = newTextures) Maya throws an error on line 17 when it tries to get nodeType on the string ‘nothing’: cmds.nodeType(‘nothing’)# Error: No object matches name: nothing Then if we take ‘nothing’ out of the list: newTextures = [‘persp’,cmds.shadingNode(‘file’,asTexture = 1)] Maya throws an error when it tries to rename the perspective camera: # File “”, line 19, in process_all_textures3 # RuntimeError: Can’t rename a read only node. # Ignore my last post – I found an error in my function. Sorry about that…. Hi I had an an error writing code from page 84 : new_textures=process_all_textures( texture_nodes=textures, prefix=’dirt_’ ) print(new_textures) # Error: AttributeError: ‘str’ object has no attribute ‘texture’ # What could be wrong ? Hi Hans, I’d be willing to bet it’s a punctuation error in your method definition. In step 1, p. 83, I would wager you have a period instead of a comma in this part: ‘%s%s’%(pre, texture) Hi guys. Fantastic book and incredibly helpful. I was just wondering if I could clarify something though? It’s more my understanding than your writing though. On page 72, after we declare the function as process_all_textures(*args) you say that “In this implementation, the function assumes that the data at position 0 corresponds to a prefix and that all further arguments in the slice are texture names.” And that the function requires one positional argument for the function to execute. I was just wondering if you could clarify that a little? Surely as we have just used (*args) the function will just put everything into a tuple and see it all as the same thing? No matter what we put into the function, even the list underneath would work? Sorry for my muddled explanation. . . Hi Paddy, This particular code example is just a basic demonstration of variable-length argument lists, so unfortunately it’s a little vague and abstract since it doesn’t really get built on further. The main point with the section you cited was to emphasize that there must be at least one positional argument since index 0 is used explicitly in the method body, which itself was intended to be doing something like “print the prefix followed by a tuple of all of texture names.” Since there’s no actual functionality in there, it will of course execute as long as you pass in something. Why is there so much white space at the side of the book? All the code has been crunched into a small column that rolls over to the next page and it is hard to know where the indents are supposed to be. I keep getting syntax errors and indent errors because of this. Is there a version of any of the scripts I can make comparison to as it is very frustrating having to keep going through it line by line to check for errors. I am using notepad++ and the script editor. Neither of which are very useful at pointing out where errors are. Content is good, but presentation is proving to be the hold up with this learning curve Hi Malcolm, I’m sorry to hear you’ve been having issues. Unfortunately you’ve discovered one of the hardest problems to deal with when writing a book about Python. Final versions of any of the scripts which span multiple pages in the book should appear in the files repository on this site (), which will hopefully help you follow along more easily. Managed to get formatting right for the code on pages 100-102, but what is the result of the code up to the generation of this line?: texture_nodes = maya.cmds.ls(type = ‘file’); I tried running all the code from page 100 -the above line on page 102 in an empty scene, but nothing happened. I thought it was to return 3 empty lists if no nodes can be found. Confused Hi Malcolm, You may just not be seeing the results if you are executing in Maya’s Script Editor. Try testing it by doing something like this: result = process_all_textures() print(result) hello, i was wondering about this code. def process_all_textures(**kwargs): pre = kwargs.setefault(‘prefix’, ‘My_’) texture= kwargs.setdefault(‘texture_node’) ## Here is the strange part return maya.cmds.rename(texture, ‘%s%s’%(pre,texture)) ## I run the function but i get an error. process_all_textures(texture_node = texture) NameError: Name ‘texture’ is not defined if i run : process_all_textures(texture_node = ‘war’) RuntimeError: No object matches name I don’t understand why running the ‘print’ works but as soon as i add the rename function and as to be return it goes all wrong :-)…this is Chapter 3, Page 75 thanks so much. ps: great book can’t wait to go throught it all Hi Olivier, This is actually an editing miscalculation on my part. Basically, the examples in each chapter are written assuming you sit down and work through the whole thing in one session. In this case, the assumption is that you have a variable named texturein your __main__ namespace, whose value is the name of some file node in your scene (assigned in step 6 on p. 67). If you don’t have that variable defined, you’ll get a NameError when you try to access it. If you instead pass a string literal (e.g., ‘war’) instead of a variable, then the function will proceed, and the rename command will look for a node whose name matches that literal value. Hi There, So far the book is fantastic. But I could really do with some further explanation on using % within formatting strings. For example: def process_all_textures(texture_node, prefix): print(‘Processed %s%s’%(prefix, texture_node)); I just don’t understand why they are being used and how? I really hope someone can answer me as it’s really bugging me. Thanks very much and hope to hear back! Hi Danny! Glad the book is helping you so far. If you’re having trouble with the string formatting in the book, you may look at this section of the Python documentation: That said, while we use the % operator throughout the text mostly due to space constraints of printing code, we strongly encourage use of str.format() instead. You can read about it here: Happy programming! Hi Seth, I’m having a hard time to understand a part of the function on page 97. Could you explain the part of: if not(maya.cmds.ls(texture) and maya.cmds.nodeType(texture)==’file’): I don’t understand the ls() command and the nodeType() command. Thanks you very much and hope to hear back:)! Hi there! Did you work through chapter 1? Some of these things are covered there. You should also check out the Python Command Reference (linked on chapter 1 page). please What is the meaning of this sentence status, texture = process_diffuse(name, out_dir) Seems this hasn’t been adressed yet: In the “textures.py” supplied on this page there’s an error in line 18, it’s not > print(‘Processing texture %s’%node) but as printed correctly in the book > print(‘Processing texture %s’%name) And there’s another one in the same script, maybe: In line 89 it says > ‘%s.outColor’%’file1’, which produces an error. This is the same in the book so it’s probably just me doing something wrong, though I’d say this can’t possibly work because in line 74 it says > if ‘_diff’ in file_node: return True” I consequently renamed the texture-file in my scene (before launching the script, of course) to “file1_diff” and changed line 89 to > ‘%s.outColor’%file_node, This way everything works perfectly and the script compiles without errors.. what am I not getting? It looks like you in fact found a couple of mistakes. I’m surprised no one else noticed them yet! I have updated the file on the site and added a note in the errata section of this page. Hi, I think I’ve found a small error on page 101, last line. I’m not sure if my copy of the book is 1st or 2nd edition… I report it as well 🙂 The print command is missin a “s” for formatting the string. print(‘Failed to process %’%name); instead of print(‘Failed to process %s’%name); By the way, the textures.py file available for download is correct 🙂 Hello I’m having a problem with the example of renaming texture files to have the prefix’dirt_’. It prints all 3 of the file nodes correctly but it only renames one file (‘file1’ to ‘dirt_file1’)… I was just wondering what I was doing wrong. Here is the Process: import maya.cmds; def process_all_textures(**kwargs): pre = kwargs.setdefault(‘prefix’, ‘my_’); textures = kwargs.setdefault(‘texture_nodes’); new_texture_names = []; for texture in textures: new_texture_names.append( maya.cmds.rename(texture,’%s%s’%(pre, texture)));return new_texture_names; #new scene with 3 file nodes maya.cmds.file(new=True, f=True); textures = []; for i in range(3): textures.append( maya.cmds.shadingNode(‘file’, asTexture=True)); print(textures); #pass to process_all_textures() and print new_textures = process_all_textures(texture_nodes=textures, prefix=’dirt_’); print(new_textures); [u’file1′, u’file2′, u’file3′] [u’dirt_file1′] —– Thank You import maya.cmds nodes = maya.cmds.ls(type=”transform*”) print(nodes) Does not works…Why? Hi! Why do you have the asterisk inside the quotes? The problem here is that there is no node type ‘transform*’ but there is a node type ‘transform’ Hey, firstly great book. It’s really amazing how interesting this hole thing can be. So I have a problem on page 92 and I can’t find a solution. So the whole ‘process_all_textures’ script seems to work but at the end it doesn’t output ‘After: [u’concrete_file2’]’ it just prints the same as the page before it: ‘After: None’. My full script is: import maya.cmds def process_all_textures(**kwargs): pre = kwargs.setdefault(‘prefix’) if(isinstance(pre, str) or isinstance(pre, unicode)): if not pre[-1] == ‘_’: pre += ‘_’ else: pre = ” textures = kwargs.setdefault(‘texture_nodes’) new_texture_names = [] if (isinstance(textures, list) or isinstance(textures, tuple)): for texture in textures: if not (maya.cmds.ls(texture) and maya.cmds.nodeType(texture)==’file’): continue new_texture_names.append(maya.cmds.rename(texture,’%s%s’%(pre, texture))) return new_texture_names else: maya.cmds.error(‘No texture nodes speficied’) new_textures = [‘nothing’,’persp’,maya.cmds.shadingNode(‘file’, asTexture=True)] print(‘Before: %s’%new_textures) new_textures = process_all_textures(texture_nodes=new_textures, prefix=’concrete_’) print(‘After: %s’%new_textures) Hi I’m having trouble with the changes to the code on pg 84-85 Here is what I have import maya.cmds as cmds def process_all_textures(**kwargs): pre = kwargs.setdefault(‘prefix’,’my_’) texture = kwargs.setdefault(‘texture_nodes’) new_texture_names = [] if (isinstance(textures,list) or isinstance(textures, tuple)): for texture in textures: new_texture_names.append( cmds.rename(texture,’%s%s’%(pre,texture))) return new_texture_names else: cmds.error(‘No texture nodes specified’) cmds.file(new=True,force=True) textures= [] for i in range(3): textures.append(cmds.shadingNode(‘file’,asTexture=True)) print(textures) new_textures [0]= process_all_textures( prefix =’mud_’) print(new_textures) I don’t get the error ‘No textures specified’ when I run this code Any ideas?
http://www.maya-python.com/chapter-3/
CC-MAIN-2020-24
refinedweb
2,444
55.95
Christine Dodrill - Blog - Contact - Gallery - Resume - Talks - Signal Boost - Feeds | GraphViz - When Then Zen Author's Note: this was intended to be documentation for a service that never ended up being implemented. It was going to help Derpibooru convert its existing markup to Markdown. This never happened. This program listens on port 5000 and serves an unchecked-path web handler that converts Derpibooru Textile via HTML into Markdown, using a two-step process. The first step is to have SimpleTextile emit a HTML AST of the comment. The second is to have Pandoc turn that HTML into Markdown. This is intended to be helpful during Derpi's migration from Textile. The following pragma tells the compiler to automagically tease string literals into whatever type they need to be. For more information on this, see this page. {-# LANGUAGE OverloadedStrings #-} module Main where In order to accomplish our task, we need to import some libraries. import Data.String.Conv (toS) import Network.Wai import Network.HTTP.Simple import Network.HTTP.Types import Network.Wai.Handler.Warp (run) import System.Environment (lookupEnv) import Text.Pandoc import Text.Pandoc.Error (PandocError, handleError) getEnvDefault queries an environment variable, returning a default value if it is unset. getEnvDefault :: String -> String -> IO String getEnvDefault name default' = do envvar <- lookupEnv name case envvar of Nothing -> pure default' Just x -> pure x htmlToMarkdown uses Pandoc to convert a HTML input string into the equivalent Markdown. The Either type is used here in place of raising an exception. htmlToMarkdown :: String -> Either PandocError String htmlToMarkdown inp = do let corpus = readHtml def inp case corpus of Left x -> Left x Right x -> pure $ writeMarkdown def x Now we are getting into the meat of the situation. This is the main Application. toMarkdown :: Application First, let's use a guard to ensure that we are only accepting requests. If the request is not a POST request, return HTTP error code 405. toMarkdown req respond | requestMethod req /= methodPost = respond $ responseLBS status405 [("Content-Type", "text/plain")] "Not allowed" Otherwise, this is a POST request, so we should: htmlToMarkdown We use http-conduit to contact the Sinatra app. | otherwise = do body <- requestBody req targetHost <- getEnvDefault "TARGET_SERVER" "" remoteRequest' <- parseRequest ("POST " ++ targetHost ++ "/textile/html") The ($) operator is a synonym for calling functions. It is defined in the Prelude as f $ x = f x and is mainly used for omitting parentheses. Here it is used to combine HTTP request settings into one big request. Additionally we use a custom [Manager][manager] to avoid any issues with request timeouts, as those are not important for the scope of this tool. let settings = defaultManagerSettings { managerResponseTimeout = Nothing } manager <- newManager settings let remoteRequest = setRequestBodyLBS (toS body) $ setRequestManager manager $ remoteRequest' Now it is time to send off the request and unpack the response. response <- httpLBS remoteRequest If the sinatra app failed to deal with this properly for some reason, report its error as text/plain and return 400. if getResponseStatusCode response /= 200 then respond $ responseLBS status400 [("Content-Type", "text/plain")] $ toS $ getResponseBody response else do let rbody = toS $ getResponseBody response Convert the result body into Markdown. If there is an error, respond with a 400 and the contents of that error. let mbody = htmlToMarkdown rbody case mbody of Left x -> respond $ responseLBS status400 [("Content-Type", "text/plain")] $ toS $ show x Right x -> do respond $ responseLBS status200 [("Content-Type", "text/markdown")] $ toS x Now we bootstrap it all by running the toMarkdown Application on port 5000. No other code is needed. main :: IO () main = run 5000 toMarkdown This article was posted on M02 08 2017..
https://christine.website/blog/textile-to-markdown-literate-haskell-2017-02-08
CC-MAIN-2021-17
refinedweb
590
55.74
. Advertisements Here’s my Python interactive session, with one (typo, error) removed. If you run my version of the program, you’ll see that there are two unique solutions, only one of which divides { 1..16 } into subsets of equal size. Both solutions have the same sums though. Interesting. Ah. That teaches me to code before breakfast. I have a small error in the program above: the line which reads val = i should be “val = i + 1 ;”. When that change is made, it finds the single, unique solution to the problem. My hint that something was wrong was that the sum of the numbers was 60. The sum of the numbers from 1 to 16 should be 136, and since they are divided into two partitions, they should each sum to 68. Since Michi and kernelbob already posted some Python, here’s a Common Lisp solution: and here’s a Haskell one: In both, I calculated the sums of the integers, squares of integers, and cubes of integers in {1, …, 16} via formulae for the first three types of figurate numbers. Here’s a ruby version … This problem can be formulated as a (binary) integer programming problem. For example, using GMPL I just found the “snoob” function over here. That led to this somewhat golfed C solution. Basically, snoob(n) returns the next integer after n with the same number of bits set. If we use a bit mask to denote the members of the first partition, we can use snoob() to iterate through the partitions directly. for (i = 0xFF; i <= 0xFF00; i = snoob(i)) { …} // 0xFF is the first partition, {1, 2, 3, 4, 5, 6, 7, 8}. // 0xFF00 is the last partition, {9, 10, 11, 12, 13, 14, 15, 16}. snoob() should be blazingly fast since it's about 7 arithmetic operations and no memory references. #include <stdio.h> #include <string.h> unsigned snoob(unsigned x) { unsigned smallest = x & -x; unsigned ripple = x + smallest; unsigned ones = x ^ ripple; ones = (ones >> 2) / smallest; return ripple | ones; } int main() { unsigned i, j, k, l; for (i = 0xFF; i <= 0xFF00; i = snoob(i)) { unsigned sums[2][3] = { { 0, 0, 0 }, { 0, 0, 0 } }; for (j = 0; j < 16; j++) for (k = 0, l = 1; k < 3; k++) sums[!(i & 1 << j)][k] += l *= j + 1; if (!memcmp(sums[0], sums[1], sizeof sums[0])) { for (j = 0; j < 16; j++) if (i & 1 << j) printf("%d ", j + 1); printf("\n"); } } return 0; } Oops. Formatted. A haskell one-liner in ghci: And a little sorter solution with list comprehension: @neez Nice Haskell solution. You can also do it without precalculating the constants. @Jan Van lent Wow, beautiful soution, especially using the i=0 case to partition into equal sized sets! @neez Well, it is only a small change from your solution. I used the same trick in the GMPL solution. Note that the constraint that the sets be of equal size is actually redundant. You can try with [1..3], [2..3] and even just 3 and they all give the same solution. Create 2 (random) partitions, and swap between them, making sure the difference is less. Keep doing this until we get a difference of 0. package com.algorithm; import java.util.Arrays; public class SameSumSquareCube { private static int[] set1; private static int[] set2; public static void main(String[] args) { set1 = new int[] {1, 2, 3, 4, 5, 6, 7, 8}; set2 = new int[] {9, 10, 11, 12, 13, 14, 15, 16}; recursive(7); System.out.println(“completed”); } private static void recursive(int count) { for (int i=count; i>=0; i–) { for (int j=7; j>=0; j–) { int temp = set1[i]; set1[i] = set2[j]; set2[j] = temp; if (isCubeSumEqual(set1, set2) && isSquareSumEqual(set1, set2) && isSumEqual(set1, set2)) { System.out.println(Arrays.toString(set1)); System.out.println(Arrays.toString(set2)); } } recursive(–count); } } private static boolean isCubeSumEqual(int[] set1, int[] set2) { int set1CubeSum = 0; for (int number : set1) { set1CubeSum += number * number * number; } int set2CubeSum = 0; for (int number : set2) { set2CubeSum += number * number * number; } return set1CubeSum == set2CubeSum; } private static boolean isSquareSumEqual(int[] set1, int[] set2) { int set1SquareSum = 0; for (int number : set1) { set1SquareSum += number * number; } int set2SquareSum = 0; for (int number : set2) { set2SquareSum += number * number; } return set1SquareSum == set2SquareSum; } private static boolean isSumEqual(int[] set1, int[] set2) { int set1Sum = 0; for (int number : set1) { set1Sum += number; } int set2Sum = 0; for (int number : set2) { set2Sum += number; } return set1Sum == set2Sum; } } […] the weekend, I was amusing myself with this problem from the always entertaining Programming Praxis. The problem is to partition the set {1, 2, […]
https://programmingpraxis.com/2011/11/15/phil-harveys-puzzle/?like=1&_wpnonce=274c59a080
CC-MAIN-2017-13
refinedweb
758
60.85
#include <deal.II/grid/intergrid_map.h> This class provides a map between two grids which are derived from the same coarse grid. For each cell iterator of the source map, it provides the respective cell iterator on the destination map, through its operator []. Usually, the two grids will be refined differently. Then, the value returned for an iterator on the source grid will be either: Keys for this map are all cells on the source grid, whether active or not. For example, consider these two one-dimensional grids: * Grid 1: * x--x--x-----x-----------x * 1 2 3 4 * * Grid 2: * x-----x-----x-----x-----x * 1 2 3 4 * (Cell numbers are only given as an example and will not correspond to real cell iterator's indices.) The mapping from grid 1 to grid 2 will then be as follows: * Cell on grid 1 Cell on grid 2 * 1 ------------------> 1 * 2 ------------------> 1 * 3 ------------------> 2 * 4 ------------------> mother cell of cells 3 and 4 * (a non-active cell, not shown here) * Besides the mappings shown here, the non-active cells on grid 1 are also valid keys. For example, the mapping for the mother cell of cells 1 and 2 on the first grid will point to cell 1 on the second grid. Note that this class could in principle be based on the C++ std::map<Key,Value> data type. Instead, it uses another data format which is more effective both in terms of computing time for access as well as with regard to memory consumption. In practice, use of this class is as follows: Note that the template parameters to this class have to be given as InterGridMap<DoFHandler<2> >, which here is DoFHandler (and could equally well be Triangulation or PersistentTriangulation). Definition at line 114 of file intergrid_map.h. Typedef to the iterator type of the grid class under consideration. Definition at line 120 of file intergrid_map.h. Constructor setting the class name arguments in the SmartPointer members. Definition at line 37 of file intergrid_map.cc. Create the mapping between the two grids. Definition at line 46 of file intergrid_map.cc. Access operator: give a cell on the source grid and receive the respective cell on the other grid, or if that does not exist, the most refined cell of which the source cell would be created if it were further refined. Definition at line 157 of file intergrid_map.cc. Delete all data of this class. Definition at line 174 of file intergrid_map.cc. Return a reference to the source grid. Definition at line 185 of file intergrid_map.cc. Return a reference to the destination grid. Definition at line 194 of file intergrid_map.cc. Determine an estimate for the memory consumption (in bytes) of this object. Definition at line 203 of file intergrid_map.cc. Set the mapping for the pair of cells given. These shall match in level of refinement and all other properties. Definition at line 102 of file intergrid_map.cc. Set the value of the key src_cell to dst_cell. Do so as well for all the children and their children of src_cell. This function is used for cells which are more refined on src_grid than on dst_grid; then all values of the hierarchy of cells and their children point to one cell on the dst_grid. Definition at line 141 of file intergrid_map actual data. Hold one iterator for each cell on each level. Definition at line 182 of file intergrid_map.h. Store a pointer to the source grid. Definition at line 187 of file intergrid_map.h. Likewise for the destination grid. Definition at line 192 of file intergrid_map.h.
https://dealii.org/developer/doxygen/deal.II/classInterGridMap.html
CC-MAIN-2021-10
refinedweb
603
65.52
update your webhook URL every time you start a new ngrok session. In this article I’m going to show you how to fully automate ngrok, by incorporating it into your Python application. The application will create an ngrok tunnel and update the Twilio webhook with the assigned URL automatically! Tutorial requirements To follow this tutorial you need the following components: - Python 3.6 or newer. If your operating system does not provide a Python interpreter, you can go to python.org to download an installer. - A phone with SMS support. - A Twilio account. If you are new to Twilio create a free account now. If you use this link you’ll receive $10 in credit when you upgrade to a paid account (review the features and limitations of a free Twilio account). Configuration Let’s start by creating a directory where our project will live: $ mkdir twilio-pyngrok $ cd twilio-pyngrok Following best practices, we are now going to create a virtual environment where we will install our Python dependencies. If you are using a Unix or MacOS system, open a terminal and enter the following commands to do the tasks described above: $ python3 -m venv venv $ source venv/bin/activate (venv) $ _ For those of you following the tutorial on Windows, enter the following commands in a command prompt window: $ python -m venv venv $ venv\Scripts\activate (venv) $ _ The pyngrok package The secret tool that makes it possible to automate the ngrok workflow is the pyngrok package, a Python wrapper to the ngrok command line tool. You can install pyngrok into your project’s virtual environment using the pip installer: (venv) $ pip install pyngrok The package automatically downloads the ngrok client the first time you use it, so you do not need to download or install anything else besides this. After installation, you’ll have the ngrok command installed in your virtualenv’s bin directory ready to be used, either directly from the command line as you may have done before, or from Python code, which is much more interesting and fun, as you will see below. In the command line, you can run ngrok http 5000 to start a tunnel to the local application running on port 5000, in the same way you would if you downloaded the ngrok client directly. But when using pyngrok you also have the option to configure your tunnel programmatically. The equivalent using Python code is: from pyngrok import ngrok url = ngrok.connect(5000).public_url The value returned by the ngrok.connect().public_url expression is the randomly generated URL that is tunneled to your local port. You can try the above code in a Python shell if you like, but you’ll have an opportunity to apply this technique on a real project later in this tutorial. Setting Twilio webhooks programmatically Once we have the temporary ngrok URL assigned, the next step is to set the appropriate webhook for our application. Normally we do this by logging in to the Twilio Console, going to the Phone Numbers dashboard and finally copy/pasting the webhook URL into the SMS or Voice webhook field. To eliminate the tedious manual work involved in this we are going to use the Twilio API to set the SMS and/or Voice webhooks. If your application does not use the Twilio API already, you will need to install the Twilio Helper Library: (venv) $ pip install twilio You will also need to have your Twilio Account SID and Auth Token set as environment variables. To obtain these credentials log in to your Twilio Console and then select “Settings” in the left sidebar. Scroll down until you see the “API Credentials” section. Two environment variables with the names TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN must be defined for the Twilio client to be able to authenticate. You can set these variables in any way you like. In the next section we’ll use a .env file to configure these variables. Once the variables are in the environment, you can change the SMS and voice webhooks for a phone number associated with your account using Python code as follows: from twilio.rest import Client client = Client() # set the SMS webhook client.incoming_phone_numbers.list(phone_number=NUMBER)[0].update( sms_url=SMS_URL) # set the voice webhook client.incoming_phone_numbers.list(phone_number=NUMBER)[0].update( voice_url=VOICE_URL) # set both webhooks together! client.incoming_phone_numbers.list(phone_number=NUMBER)[0].update( sms_url=SMS_URL, voice_url=VOICE_URL) The client object automatically imports the Twilio credentials from the environment variables mentioned above. If the variables are missing or incorrectly set, the client won’t be able to call the Twilio service. The list() method that we call on the incoming_phone_numbers resource performs a search of our target phone number. The return of the search is a list, but because we are looking for an exact number we can directly use the first entry, on which we call the update() method to set our webhooks. The update() method accepts a few more arguments that are related to webhooks. Here is the complete list: sms_url: the SMS webhook. sms_method: the HTTP method to use when invoking the SMS webhook. Defaults to sms_fallback_url: the SMS fallback webhook. sms_fallback_method: the HTTP method to use when invoking the SMS fallback webhook. Defaults to voice_url: the voice webhook. voice_method: the HTTP method to use when invoking the voice webhook. Defaults to voice_fallback_url: the voice fallback webhook. voice_fallback_method: the HTTP method to use when invoking the voice fallback webhook. Defaults to voice_receive_mode: either voiceor voice. status_callback: the status webhook. status_callback_method: the method to use when invoking the status webhook. Defaults to A complete example To help you incorporate these techniques into your application, I’m going to show you a complete example. Make sure your current directory is the twilio-pyngrok one you created above, and also that your virtual environment is activated. Install the following Python dependencies into the virtual environment: (venv) $ pip install twilio python-dotenv pyngrok flask Create a .env file (note the leading dot) where you can store credentials and application settings: TWILIO_ACCOUNT_SID=<your-account-sid> TWILIO_AUTH_TOKEN=<your-auth-token> TWILIO_PHONE_NUMBER=<your-twilio-phone-number> For the phone number, use the canonical E.164 format. For a number in the United States, the format is +1aaabbbcccc, where (aaa) is the area code, and bbb-cccc is the local phone number. For this example we are going to use a small bot that replies to all incoming messages with a greeting. You can copy the following code in a file called bot.py: import os from dotenv import load_dotenv from flask import Flask, request from twilio.twiml.messaging_response import MessagingResponse load_dotenv() app = Flask(__name__) @app.route('/bot', methods=['POST']) def bot(): user = request.values.get('From', '') resp = MessagingResponse() resp.message(f'Hello, {user}, thank you for your message!') return str(resp) def start_ngrok(): from twilio.rest import Client from pyngrok import ngrok url = ngrok.connect(5000).public_url print(' * Tunnel URL:', url) client = Client() client.incoming_phone_numbers.list( phone_number=os.environ.get('TWILIO_PHONE_NUMBER'))[0].update( sms_url=url + '/bot') if __name__ == '__main__': if os.environ.get('WERKZEUG_RUN_MAIN') != 'true': start_ngrok() app.run(debug=True) Let’s review what this application does. Right after the imports, we call the load_dotenv() function. This function comes from the python-dotenv package, and as its name implies, it looks for a .env file and loads any variables it finds in it into the environment. The app instance is the Flask application instance. The bot() function is our SMS webhook handler. This function is associated with the /bot URL, which is going to be configured as the SMS webhook that Twilio will call whenever the Twilio phone number receives a message. Since this application is intended as a simple example, the only logic in this bot is to send a greeting as a response. The start_ngrok() function uses the techniques we have learned in this article to automate the use of ngrok. First we set up a tunnel on port 5000, which is the port on which the Flask application will listen for requests locally. Once we have the ngrok URL, we instantiate a Twilio client and use it to update the SMS webhook URL as shown in the previous section. Note how we use the TWILIO_PHONE_NUMBER environment variable to select the phone number we want to configure. The webhook URL that we are configuring has two parts. The root URL is the temporary URL generated by ngrok. We then append the path to our bot route, which is /bot. The last four lines of the script have the startup code for the application. Normally we just call the app.run() function from Flask to get the server started, but now we want to set up the ngrok tunnel before we do that. Instead of simply calling start_ngrok() before app.run(), I added a check on the WERKZEUG_RUN_MAIN environment variable, which is used by the Flask reloader subsystem. If the reloader isn’t used, then this variable isn’t defined so start_ngrok() will be called every time. If the reloader is used, this condition will be true when this code executes in the parent reloader process, but not on the child process that gets recycled with every reload. This ensures that the tunnel URL stays the same as the server reloads. Ready to give this a try? Make sure you have your credentials and your phone number in the .env file, and then run the bot.py script: (venv) $ python bot.py * Tunnel URL: * Serving Flask app "bot" (lazy loading) * Environment: development * Debug mode: on * Running on (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 337-709-012 And now send an SMS to your Twilio phone number to confirm that the bot’s webhook was automatically configured. When you stop the server with Ctrl+C the ngrok tunnel will be deleted. Conclusion In this tutorial we have implemented a convenient workflow that uses the pyngrok package to attach a public URL to your Twilio webhooks during development with no manual configuration needed. I hope this technique makes you more productive when working on your Twilio apps! Miguel Grinberg is a Python Developer for Technical Content at Twilio. Reach out to him at mgrinberg [at] twilio [dot] com if you have a cool Python project you’d like to share on this blog!
https://www.twilio.com/blog/automating-ngrok-python-twilio-applications-pyngrok
CC-MAIN-2021-10
refinedweb
1,721
63.9
I have a program which loads many fxml files when executed. The application will be finished in a short time, and loading the application just takes too long. There are many fxml files (20+) and all these fxml files are loaded with Java code. There will be a point that the application is finished and ready for use, but all files will be loaded with every execution of the program. Can the fxml files only be compiled once, because they won't be changed when finished? The java code will of course be compiled once, it's only the fxml files. The application now takes 25 seconds to start, with 14 seconds loading the fxml. Is there a way to make all this faster? EDIT #1: Are there any tools that are provided for free and that make the execution of applications (Java) much faster? Or is the execution time merely dependent on the way the program is written? Which design patterns could help fastening up the execution time of your application? EDIT #2: The following code will explain my problem in a nutshell: package main; ....... ....... public class MainClass { .... .... List<InformationController> controllerList; public mainClass(List<InformationControllers> controllerList) { this.controllerList = otherClass.getControllerList(); loadFXMLFiles(); } public void loadFXMLFiles() { for(InformationController controller : controllerList) { controller.loadFXML(); } doOtherStuff(); } .... } On How to Speed up FXML Performance I'll make this answer community wiki, if anybody else has further ideas, please edit it and add them. FXMLLoader.load()but should instead create an instance of FXMLLoader and reuse that one using the instance loader.load()method. The reason is that the FXML-Loader caches, e.g. class lookups, and hence is faster on loading the next FXML-File. FXMLLoader.load()documentation in JavaFX 8 includes a reference to FXML templates, but I don't think that was ever implemented, so maybe it won't help you. Reflection is why FXML is Slow I think the key reason the Java 8 FXML implementation is slow is that it relies so heavily on reflection and reflection is slow as described in the reflection tutorial: Because reflection involves types that are dynamically resolved, certain Java virtual machine optimizations can not be performed. Consequently, reflective operations have slower performance than their non-reflective counterparts, and should be avoided in sections of code which are called frequently in performance-sensitive applications.
https://codedump.io/share/fdYr7OwXnYv8/1/is-there-a-way-to-compile--load-fxml-files-faster-and-only-one-time-and-not-at-every-restart-of-an-application
CC-MAIN-2017-04
refinedweb
388
54.63
C# gives you the capability to build controls for both Windows and Web applications. We're going to do that in this chapter, starting with Windows custom controls, called user controls. Later in the chapter we'll see how to create custom controls for Web applications, called Web user controls. Using Visual Studio .NET, you can create your own user controls for use in Windows forms. For example, you might want such a control to display a day planner or a mortgage amortization calculator. Building reusable user controls lets you avoid the tedium of rebuilding your day planner or mortgage calculator in multiple applicationsyou just need to drop your user control onto the appropriate form. At design time, user controls appear much like mini-Windows forms, and you can add standard Windows forms controls (such as buttons in a mortgage calculator) to them to create a composite control. Or you can draw the appearance of the control yourself using its Paint event. You can also make use of the user control's built-in events, like the Click event, to support your events in your user control. We'll create an example user control now that will support a custom property, method, and event, just as you'd expect a control to do. To follow along, choose File, New Project to open the New Project dialog box that you see in Figure 11.1. This time, select the Windows Control Library item, naming this new project ch11_01, as you see in Figure 11.1 (note that you can also add a new user control to an existing application by choosing Project, Add User Control). The Paint event occurs when an object, such as a control or a form, needs to be (re)drawn. In Windows applications, the Paint event handler is passed a System.Windows.Forms.PaintEventArgs object named e , and you can access a Graphics object as e.Graphics . The Graphics class, one of the largest of the FCL classes, is used for drawing. For example, to draw a rectangle, call e.Graphics.DrawRectangle ; to draw a line, call e.Graphics.DrawLine ; to draw an image, call e.Graphics.DrawImage ; to draw a filled polygon, call e.Graphics.FillPolygon ; and so on. Clicking the OK button in the New Project dialog box creates and opens a new user control in the IDE, as you see in Figure 11.2. As you can see in the figure, the new user control is rectangular, just like a Windows form, and in fact it acts much like a mini-Windows form. In this example, we're going to use a Windows label control to cover most of the user control, so add that label to the user control now as you see in the figure. We'll start coding this user control by adding a new property to it; in this case, we'll add a new property named DisplayColor that will set the color of the label in the middle of the user control. When you create a control for our new user control's class, assigning a value to the control's DisplayColor property will set the color of the control. The control's color is determined by setting the background color of the label in the center of the control. To implement the DisplayColor property, open the user control's code in a code designer now. As you can see in the code designer, the new user control class, UserControl1 , is based on the System.Windows.Forms.UserControl class: public class UserControl1 : System.Windows.Forms.UserControl { private System.Windows.Forms.Label label1; . . . } We can create the new DisplayColor property like any other property, using get and set accessor methods . In this case, DisplayColor is going to be the color displayed by the label in this user control. To set a color, we use the label's BackColor property (which takes objects of the System.Drawing.Color class) this way: public class UserControl1 : System.Windows.Forms.UserControl { private System.Windows.Forms.Label label1; #region Component Designer generated code . . . public Color DisplayColor { get { return label1.BackColor; } set { label1.BackColor = value; } } } This code implements the DisplayColor property in the user control; now you can assign or retrieve System.Drawing.Color objects using this property. Before seeing this property at work, we'll add a new method to our control as well. You can add methods to user controls as easily as you can add properties. As with other objects, you just add the code for the new method to the user control's class. To make that method accessible outside the control, make it a public method. In this example, we'll add a method named DrawText , which will display the text you pass to it in the label control. That's easy enough to write; here's what that method looks like in the new user control's code: public class UserControl1 : System.Windows.Forms.UserControl { private System.Windows.Forms.Label label1; #region Component Designer generated code . . . public Color DisplayColor { get { return label1.BackColor; } set { label1.BackColor = value; } } public void DrawText(string text) { label1.Text = text; } } That adds a new method to our user control. And you can also add events, coming up next . You add events to user controls as we saw in Chapter 4, "Handling Inheritance and Delegates"you just use a delegate and the event statement to create a new event. In this example, we'll add an event named NewText to our user control, which will occur when the text in the control changes. As we've written the user control, the only way of changing the text in the control is with the DrawText method, so we'll fire the NewText event in that method's code: public class UserControl1 : System.Windows.Forms.UserControl { private System.Windows.Forms.Label label1; #region Component); } } At this point, we've given our new user control a property, DisplayColor , a method, DrawText , and an event, NewText . With these custom items, our user control is ready to use, just as you'd use any other Windows control. In order to make our user control available to other Windows projects, it has to be compiled into .DLL form (actually a .NET assembly with the extension .DLL). To compile it, choose Build, Build Solution in the IDE. After you do, you can use this new user control in other projects by adding a reference to the control in the other project. To see that at work, we'll create a new Windows application project now, grouping it with our user control by adding that new application to the current solution in the IDE. IDE solutions can hold multiple projects; our current solution only holds the user control project, ch11_01, but you can choose File, Add Project, New Project to add a new Windows application to this solution. Name that new application ch11_02, as you see in Figure 11.3. This new Windows application will display our user control at runtime. Because you can't run a user control directly, you have to make the new Windows application, ch11_02, the startup project for the current solution. You do that by selecting that project in the Solution Explorer, right-clicking it, and then choosing Set as Startup Project. (Alternatively, select the project and choose Project, Set as Startup Project from the IDE's main menu system.) We'll need to add our new user control to the Windows application's main form. The IDE makes it easy to do that when you add a reference to our user control, which will make our user control appear in the toolbox like any other Windows control. To add a reference to the user control, ch11_01, in the Windows application's toolbox, right-click the ch11_02 application's References item in the Solution Explorer and choose Add Reference, opening the Add Reference dialog box you see in Figure 11.4. To add a reference to the UserControls project, click the Projects tab and double-click the UserControls item, which adds that a reference to the Selected Components box at the bottom of the dialog box. Finally, click OK. This adds the user control to the toolbox's My User Controls tab, as you see in Figure 11.5. (In earlier versions of Visual Studio .NET, the user control is added to the Windows Forms tab.) To add a user control to the main form in the ch11_02 Windows application, just drag the control from the toolbox, creating a control named userControl11 (the first object of the UserControl1 class). Note also that the properties window displays the properties of the new user control, including the custom DisplayColor property, as you see in Figure 11.5. Because we've made the type of that property System.Drawing.Color , the IDE will display drop-down lists of colors you can select for the DisplayColor property, just as it does for any property that takes System.Drawing.Color objects (such as the BackColor property of most controls). In this case, we've selected aquamarine for the DisplayColor property, which appears in the label in the center of our user control, as you can see in Figure 11.5 (in stunning black and white). You can also call the methods of our new user control, userControl11 , such as the DrawText method. To let the user call this method, add a new button with the caption Click Me! to the Windows application, ch11_02, and add this code to the button's Click event handler (you might note that as you add the code to call DrawText , the IDE's IntelliSense facility will list the type of the data you pass to DrawText , just as it would for any method built into a control you're working with): private void button1_Click(object sender, System.EventArgs e) { userControl11.DrawText("User Controls!"); } When the user clicks the Click Me! button, the text "User Controls!" is passed to the user control's DrawText method, which displays that text in the label in the user control, as you can see in Figure 11.6. When you change the text in the label in the user control by calling the DrawText method, the control's NewText event fires. You can add code to that event as you can any control event. Simply select the user control in the IDE, click the lightning button in the properties window to see its events, and double-click the NewText event to open its event handler in a code designer. In this case, we will make the NewText event handler for userControl11 to display the new text in a message box this way: private void userControl11_NewText(object UserControl1, string text) { MessageBox.Show("New text: " + text); } Now when you run the Windows application and click the Click Me! button, the NewText event occurs and that event's handler displays the new text in the message box we've added to the Windows application. You can see the results in Figure 11.7. That's how to make use of a user control from another application in the same solution. But what if you want to use a user control in a project not in the same solution? In that case, you can still use a reference to the user control's .DLL file, ch11_01.dll. This time, the user control won't appear in the new project's toolbox by default. To add a user control to a Windows form, you can create a ch11_01.UserControl1 object in code, and then add an event handler and display the control like this: ch11_01.UserControl1 uc1; private void Form1_Load(object sender, System.EventArgs e) { uc1 = new ch11_01.UserControl1(); uc1.DisplayColor = System.Drawing.Color.Aquamarine; uc1.Top = 100; uc1.Left = 100; uc1.NewText += new ch11_01.UserControl1.NewTextDelegate(uc1_NewText); Controls.Add(uc1); } private void uc1_NewText(object sender, string text) { MessageBox.Show(text); } private void button1_Click(object sender, System.EventArgs e) { uc1.DrawText("User Controls!"); } Alternatively, you can add the user control to the toolbox if you take a few extra steps. To do that, select Tools, Add/Remove Toolbox Items in the IDE, and then select the .NET Framework Components tab. Browse to and select ch11_01.dll, make sure the check box for ch11_01.dll is checked in the .NET Framework Components tab, and then click OK. The user control should appear at the bottom of the toolbox, ready for use in Windows applications like any other control. You can publish your user control in this way; just distribute its .DLL file and programmers can add it to their IDE installation's toolbox using this technique. The idea behind user controls is simple: code re-use. The idea is that you write it once and use it many times. But what about working with earlier components also designed for code re-useCOM components and ActiveX controls of the kind developed with previous versions of Visual Studio? Can you use them in Visual Studio .NET? When you try to use or import these items, the main issue is one of security, something that Microsoft is desperately trying to shore up. COM components are not by nature safe ( especially because they can use pointers extensively) in the same way that .NET components are, and often have to be substantially rewritten to fit in with .NET. Although it's possible to rewrite COM components for use in .NET, my experience is that you're usually better off rewriting the component's functionality using .NET code in the first place. On the other hand, ActiveX .OCX controlsdesigned for uses that included the Internetwere automatically made much more secure, which means that it is fairly easy to import such controls into the .NET IDE. To import an ActiveX control, select Tools, Add/Remove Toolbox Items, but this time select the COM Components tab in the Customize Toolbox dialog box, not the .NET Framework Components tab. Browse to the ActiveX control's .OCX file, make sure its check box is checked in the COM Components tab, and click OK, which adds the ActiveX control to the toolbox. (Alternatively, you can use the ActiveX importing utility, AxImp.exe, which comes with Visual Studio .NET, to import an ActiveX control into .NET this way: AxImp controlname .ocx . After you do, you can add the ActiveX control to the toolbox using the .NET Framework Components tab, not the COM Components tab, in the Customize Toolbox dialog box.) As with much else that motivated .NET, the issue here was security, and it turns out to be lucky that ActiveX controls were made as secure as they were, making it easy to add them to .NET applications. That's it for our discussion on user controls, which you use in Windows applications. The other available type of user control is the Web user control, which you use in Web applications; they're coming up next.
https://flylib.com/books/en/1.254.1.120/1/
CC-MAIN-2022-05
refinedweb
2,485
64.61
Ext Designer implementation question Ext Designer implementation question I've finally gotten my license for Designer and built my first UI with it today and noticed that everything that comes out is an extension of the component that I selected in the designer. Is that the preferred method to build a UI? All of my hand-coded stuff doesn't do it that way because the examples are not built that way either for the most part. I just want to make sure I'm producing good code. - Join Date - Mar 2007 - Location - Baltimore, MD. - 1,745 - Vote Rating - 7 Generally speaking, yes, creating your own sub-classes in their own .js files is the standard way that Ext recommends building a large & complex project. Our next big feature is what's call "Promote to Class", which will allow you to save/share custom created components in the Designer for reuse, among other things. The examples are very basic to show the components off, but not necessarily showing a good standard for organizing your project code. Small things are fine to just configure and use, but larger projects it is recommended to create sub-classes as good practice to promote component-oriented design and reuseability/extensibility. Thanks for the explanation Jarred! I wish I had known that a long time ago now that I am several months into a pretty large project. I'm sure I could've read it somewhere...oh well. There will be future projects for me to do it that way. - Join Date - Mar 2007 - Location - Baltimore, MD. - 1,745 - Vote Rating - 7 You're welcome. Being able to wrap up your behavior, event handlers, etc. inside of a custom sub-class is invaluable for well-structured javascript code. You can fudge it with using your own objects or singletons to "scope" or "namespace" your behavior, but that quickly becomes hard to maintain. Similar Threads Noob Ext Designer question : building an app with Ext.windowsBy dcoan in forum Ext Designer: Help & DiscussionReplies: 10Last Post: 14 Dec 2011, 2:13 AM Ext Designer QuestionBy imegai in forum Ext Designer: Help & DiscussionReplies: 2Last Post: 12 May 2010, 7:35 AM Ext Designer: How to add multiple radio buttons with designer?By lelapinblanc in forum Ext Designer: Help & DiscussionReplies: 1Last Post: 26 Mar 2010, 1:38 PM Impressions on Ext Designer - question for the conference attendees.By [email protected] in forum Community DiscussionReplies: 23Last Post: 15 Aug 2009, 5:25 AM
http://www.sencha.com/forum/showthread.php?100049-Ext-Designer-implementation-question&s=0f20390c09bd7f19731247aada8dc885&p=470629
CC-MAIN-2013-48
refinedweb
413
62.88
I am probably the only person even trying to use the ReST interface, so I doubt I'll get much help. But I am stuck, so I'll give it a shot. Basically I am running the "push" example, but I am rewriting the PostOrder part of it to actually use ReST instead of JMS. (It is sad in general that the ReST examples are mostly just JMS. But I digress.) The first thing is that when I go to the URL hosting the queue in FireFox I get this: XML Parsing Error: prefix not bound to a namespace Location: Line Number 1, Column 37:<queue><name>jms.queue.orders</name><atom:link<atom:link<atom:link<atom:link</queue> The first thing we see here is that "link" is preceded with "atom:" and that is why FF won't parse it nicely. How can that be fixed? I mean, I sure would like it if FF would draw the XML nicely like it does with other XML. My group uses Jersey for everything, so I am trying to do it with Jersey instead of with RestEasy. When I hit that URL with jersey it returns a 200 response, but there is nothing in the headers and nothing in the links. Could this be related to the namespace issue FF is seeing? Do you have to somehow tell Jersey to use a specific namespace? What namespace is that? How can you tell it? The last thing that is confusing me is that the bit of ReST code in the example uses "msg-push-consumers" as the link name. This is also what the docs say to use. But if you look at what FF printed above the rel= just says "push-consumers". How does that work? Thanks for any suggestions. I hate that when people follow up to their own message. I don't know what I did, but this much is working now. It is returning the headers. And they do start with "msg-". Still don't understand why, or what namespace is being used here. If you have any answer to that, please share.
https://developer.jboss.org/thread/167996
CC-MAIN-2018-17
refinedweb
356
83.86
I'm planning to design a program such that any number entered will be divided by all the numbers less than it and if any of the division gives 0 as the reminder,it is not a prime.Here is my attempt so far and iam not able to proceed after that.. #include <stdio.h> #include <conio.h> #include <math.h> void main() {long int n,i,k; clrscr(); printf("Please enter any number.\n"); scanf("%ld" , &n); for (i=2;i<=n-1;i=i+1) { k=n%i;} if (k==0) printf("It is not a prime.\n"); else printf("It is a prime.\n"); getch(); }
https://www.daniweb.com/programming/software-development/threads/295196/program-to-check-if-any-number-entered-is-a-prime-number
CC-MAIN-2018-05
refinedweb
108
84.78
from brightway2 import * Create a project for this notebook: projects.set_current("databases demo") Before going any further, please read about what a database is in Brightway2. One of the nicest ideas in Brightway2 is that data is just data, and not hidden behind a database and object-relational mapping (ORM). This makes it very easy to do data manipulation, and to add new fields to datasets. Let's define an example database: db = Database("example") example_data = { ("example", "A"): { "name": "A", "exchanges": [{ "amount": 1.0, "input": ("example", "B"), "type": "technosphere" }], 'unit': 'kilogram', 'location': 'here', 'categories': ("very", "interesting") }, ("example", "B"): { "name": "B", "exchanges": [], 'unit': 'microgram', 'location': 'there', 'categories': ('quite', 'boring') } } This is quite a simple example - two activities, one of whom has no inputs. In fact, this example dataset has only a few fields. Actually, there are no required fields* in datasets, only some suggested ones, and general guidelines on how to use those suggested fields. It's like not wearing underwear - Brightway2 gives you the freedom to do it, but most people you are interacting with would prefer that you didn't. * If you are using Activity proxies, then the name field is required. Let's talk a bit about the fields in example_data: name: This one is pretty easy :) exchanges: This is a list of inputs and outputs, like how much energy or material is needed for this dataset's production. Every exchangeis an intput, unless it has the value type = "production". A productionexchange defines how much of the output is produced. Most of the time, you can ignore this, as the default value is one - this is what we do in the example data. However, sometimes it is useful to define a dataset that produces more. See also: What happens with a non-unitary production amount in LCA?. db.write(example_data) Writing activities to SQLite3 database: 0% 100% [##] | ETA[sec]: 0.000 Total time elapsed: 0.017 sec Title: Writing activities to SQLite3 database: Started: 07/15/2016 11:53:39 Finished: 07/15/2016 11:53:39 Total time elapsed: 0.017 sec CPU %: 114.000000 Memory %: 0.350535 db.random() 'A' (kilogram, here, ('very', 'interesting')) num_exchanges = [(activity, len(activity.exchanges())) for activity in db] num_exchanges [('B' (microgram, there, ('quite', 'boring')), 0), ('A' (kilogram, here, ('very', 'interesting')), 1)] db.search("*") ['B' (microgram, there, ('quite', 'boring')), 'A' (kilogram, here, ('very', 'interesting'))] del databases[db.name]
http://nbviewer.jupyter.org/urls/bitbucket.org/cmutel/brightway2/raw/default/notebooks/Databases.ipynb
CC-MAIN-2018-51
refinedweb
396
56.15
CodePlexProject Hosting for Open Source Software Hi, when i try to build orchard i recive the following error: The type or namespace name "Blog" does not exist in the namespace 'Orchard' (are you missing an assembly reference?) Can you help me ? Thanks Where did you get the source code from? If you've cloned the repository are you sure you got the entire set of changes? i have download orchard from codeplex i try to run orchard in webmatrix and no problem but in vs2010 no. Try running a Build->Clean->Rebuild. If that does not work, make sure you've actually downloaded the source release and build. I've done this multiple times and I never had any problems Nothing i still have the same problem, The strange thing is that it works with webmatrix. do you know if VS2010 must have some specific dependencies? Try the latest version download at. I've got it this morning and ran the build without any issues. Are you opening the full Orchard.sln file (which should have a whole load of different projects in it), or just the Orchard.Web site folder? i download the link below and i open orchard.web.1.1.30\orchard site folder. Orchard.Web.1.1.30.zip Orchard.Web is not Orchard.Source. You want Orchard.Source for vs2010 Or clone the repository: With version 1.3, I open a working project in VS2010 (from WebMatrix). When VS builds it, first off there's an error from the Markdown module (a guy on StackOverflow had the same thing). So I remove Markdown and build again... and it comes up with the same error that youngfox got in June: "The type or namespace name "Blog" does not exist in the namespace 'Orchard' (are you missing an assembly reference?)" If I manage to fix that, how many more errors are to come? Sigh... I shouldn't have to deal with this sort of thing. I just want to fill my CMS with content. I'd happily stick with WebMatrix, except for the fact that it doesn't have Source Code Control support... that's why I moved to VS. But Orchard has build errors in VS. Well, you may want to take your WebMatrix comments to the people who are actually building it. They have a forum. If you want to compile from Visual Studio, you need the full source code. The smaler WebPI package does not require compilation. In fact, it will not compile if you try, as you've discovered. Quoting the release notes: If you just want to use Orchard and don't care about the source code, Orchard.Web.1.3.9.zip is what you want to use, preferably through the WebPI instructions. If you want to take a look at the source code, OrchardSource.1.3.9.zip is fine. If you want to setup a development environment for patch or module development, you should clone the repository by following the instructions here: Thanks Bertrand. Actually I meant "Orchard" when I said "WebMatrix" not being ready, but I've removed that comment now. Actually my experience with Orchard CMS is currently better than with Umbraco or DNN. There's a mismatch here... WebMatrix is a good starting point, and it has that enticing button to open the website in VS2010. So it's not surprising that someone like me would click it. I don't particularly care about the source code... I just want an environment that has SVN. Also it's my understanding that MS want new developers to start with WebMatrix and gravitate to VS. So it would seem to me that you would save a lot of scratched heads and logged discussions here if the Orchard "WebPI" package did build ok in VS... is that possible? Or, alternatively, there should be a way for you to signal in your smaller WebPI package that "it does not build in VS" so that WebMatrix knows not to display that button. I note that there's nothing in your quote from the release notes that says "it will not compile" so I hope you will understand my confusion. Yeah, I agree, I hate that WebMatrix button. I must have answered that question about 50 times already because of it. No, the webpi package will never build. Removing the csproj file should improve things: it should never have been there in the first place. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://orchard.codeplex.com/discussions/260288
CC-MAIN-2017-39
refinedweb
779
75.71
- Author: - dek - Posted: - August 25, 2009 - Language: - Python - Version: - 1.1 - admin generic export csv action - Score: - 7 (after 7 ratings) A generic admin action to export selected objects as csv file. The csv file contains a first line with header information build from the models field names followed by the actual data rows. Access is limited to staff users. Requires django-1.1. Usage: Add the code to your project, e.g. a file called actions.py in the project root. Register the action in your apps admin.py: from myproject.actions import export_as_csv class MyAdmin(admin.ModelAdmin): actions = [export_as_csv] # Please login first before commenting.
https://djangosnippets.org/snippets/1697/
CC-MAIN-2016-40
refinedweb
106
62.54
New tools for automating end-to-end tests for VS Code extensions . How it works VS Code is based on Electron, so it is a web application. Hence, the idea was to automate the tests using the Selenium WebDriver. For that purpose, we needed to: - Download the appropriate version of ChromeDriver, which meant knowing the version of Chromium packaged inside the Electron browser our VS Code uses. - Add ChromeDriver to our PATH. - Choose the appropriate VS Code binary (which is different in every OS). - Set up our VS Code to run the tests properly. We cannot, for instance, use the native title bar. - Download another instance of VS Code just for testing. (We do not want to mess up the instance of VS Code we actually use.) - Build our extension. - Install the extension into the new instance. Finally, we were all set to begin writing our tests, but Figure 1 shows what we would have had to sift through in order to push a button and open a view: That’s 15 layers of block elements just to find an icon representing a view container, which is quite a tall order to find a simple element. You can imagine what the rest of the DOM looks like.But enough scare tactics, we are here to make testing exciting. Almost as exciting as coding itself, because we are turning testing into coding. Let’s see how easy all of this becomes once we employ the vscode-extension-testerframework. Making it simple To demonstrate, we will take an extension and create end-to-end tests for it using our framework. As a first step, I like to use something simple, like the helloworldsample extension from Microsoft’s extension samples repo. This extension contributes a new command called Hello Worldthat shows a notification saying Hello World!Now we need to write tests to verify that the command works properly. Getting the dependencies First, we need to get the necessary dependencies. To start, we need the Extension Tester itself, along with the test framework it integrates into: Mocha. We can get both from the npmregistry: $ npm install --save-dev vscode-extension-tester mocha @types/mocha I will also use Chai for assertions. You can use whichever assertion library you like: $ npm install --save-dev chai @types/chai Setting up the test Now that we have our dependencies installed, we can start putting all the pieces together. Let us start by creating a test file. Our test files will rest in the src/ui-testfolder, but you can use any path that is covered by your tsconfig, because we will write our tests in TypeScript just like the rest of the extension. Let’s go ahead and create the folder we chose and create a test file inside. I will call mine helloworld-test.ts. Our file structure should now look like Figure 3: Next, we need a way to launch our tests. For that purpose, we create a new script in our package.jsonfile. Let’s call our new script ui-test, and use the CLI that comes with the Extension Tester, called extest. For this demo, we want to use the default configuration with the latest version of VS Code, the default settings, and the default storage location (which we will come back to momentarily). We also want to perform all of the setup and then run our tests within a single command. For that purpose, we can use the setup-and-runcommand that takes the path to our test files as an argument in the form of a glob. Note that we cannot use the original .tsfiles to launch the tests. Instead, we need to use the compiled .jsfiles, which in this case are located in the out/folder. The script will then look something like this: "ui-test": "extest setup-and-run out/ui-test/*.js" It is also important to compile our tests before attempting to run them, which we can do along with the rest of the code. For that purpose, this extension has a compile script we can use. The final script will then look like this: "ui-test": "npm run compile && extest setup-and-run out/ui-test/*.js" Setting up the build Now is the time to talk about the importance of the storage folder I mentioned earlier. This is where the framework stores everything it needs for the tests to run, including a fresh instance of VS Code, the ChromeDriver binary, and potentially screenshots from failed tests. It is imperative to exclude this folder from compilation and vscepackaging. Otherwise, you are bound to run into build errors. We also recommend adding the storage folder into your .gitignorefile. By default, this folder is called test-resources, and is created in the root of your extension repository. First, let us exclude the folder from compilation. We need to open the tsconfig.jsonfile and add the storage folder into the "exclude"array. This is what my tsconfig now looks like: { "compilerOptions": { "module": "commonjs", "target": "es6", "outDir": "out", "sourceMap": true, "strict": true, "rootDir": "src" }, "exclude": ["node_modules", ".vscode-test", "test-resources"] } With that code, our extension should not run into build errors with the folder present. Next, we need to make sure the folder is not included in the final .vsixfile when we package the extension. For that purpose, we can utilize the .vscodeignorefile. Let’s go ahead and create one in the root of our repository if it doesn’t already exist. Then, put the folder into it just like we would with .gitignore, as shown in Figure 4: With these three simple steps completed, we are ready to dive into writing our tests. If you wish to get additional information about the test setup, check out the framework’s wiki. Writing the tests Remember that dreadful screenshot from the VS Code DOM? If you are familiar with WebDriver testing, you know how tedious it can become when the element structure is that complex. Introducing page objects Luckily, we do not need to bother ourselves with the DOM now. The Extension Tester framework brings us a comprehensive Page Object API. Each type of component in VS Code is represented by a particular typescript class and can be manipulated by a set of easy-to-understand methods. We recommend going through the page objects quick guide to get an understanding of what each object represents in the browser. Additionally, each object extends the vanilla WebDriver’s WebElement, so you can use plain WebDriver code to your heart’s desire. Back to the test at hand First, we need to create a test suite and a test case using the Mocha BDD format. The first step of our test case is to execute the command Hello World. For that purpose, we can use the Workbenchclass and its executeCommandmethod. Our test file now looks a bit like this: import { Workbench } from 'vscode-extension-tester'; describe('Hello World Example UI Tests', () => { it('Command shows a notification with the correct text', async () => { const workbench = new Workbench(); await workbench.executeCommand('Hello World'); }); }); Simple, isn’t it? Now, we need to assert that the correct notification has appeared. This command will take time to execute and display the result, so we cannot do this assertion straight away. Therefore, we use WebDriver to wait for the notification to appear. For that, we need a suitable wait condition. Our wait condition needs to view the currently displayed notifications and return the notification that matches our needs. In this case, the notification would be one that contains, say, the text Hello. If no such condition is found, do not return anything (return undefined). This way, the wait will terminate once the first truthy value is returned: async function notificationExists(text: string): Promise<Notification | undefined> { const notifications = await new Workbench().getNotifications(); for (const notification of notifications) { const message = await notification.getMessage(); if (message.indexOf(text) >= 0) { return notification; } } } With this condition set up, we now start waiting. To do this, we need a reference to the underlying WebDriver instance. We can get that reference from the VSBrowserobject, which is the entry point to the Extension Tester API. We will use the beforefunction to initialize the WebDriver instance before the tests run by adding the following lines to the beginning of our suite: let driver: WebDriver; before(() => { driver = VSBrowser.instance.driver; }); Initiating the wait is now as simple as this: const notification = await driver.wait(() => { return notificationExists('Hello'); }, 2000) as Notification; Note the cast at the end. Our wait condition may return undefined, and we need to work with a Notificationobject. The last step is to assert that our notification has the correct attributes by checking if the notification has the correct text, and is of an infotype. Using Chai’s expectto accomplish this task looks like this: expect(await notification.getMessage()).equals('Hello World!'); expect(await notification.getType()).equals(NotificationType.Info); At this point, our first test is finished. The whole test file should look as follows: import { Workbench, Notification, WebDriver, VSBrowser, NotificationType } from 'vscode-extension-tester'; import { expect } from 'chai'; describe('Hello World Example UI Tests', () => { let driver: WebDriver; before(() => { driver = VSBrowser.instance.driver; }); it('Command shows a notification with the correct text', async () => { const workbench = new Workbench(); await workbench.executeCommand('Hello World'); const notification = await driver.wait(() => { return notificationExists('Hello'); }, 2000) as Notification; expect(await notification.getMessage()).equals('Hello World!'); expect(await notification.getType()).equals(NotificationType.Info); }); }); async function notificationExists(text: string): Promise<Notification | undefined> { const notifications = await new Workbench().getNotifications(); for (const notification of notifications) { const message = await notification.getMessage(); if (message.indexOf(text) >= 0) { return notification; } } } Launching the tests All that is left now is to launch our tests. To do that, we can head to our favorite terminal and launch the script we created during the setup phase: $ npm run ui-test Now we can watch as the tooling runs the setup for us automatically: Our test run was a success: We verified our extension’s feature works. And best of all, we do not need to do all of this work manually anymore. Learning more If you wish to learn more about using the Extension Tester, be sure to visit the GitHub repository or the npm registry page. The wiki, in particular, might be of interest. To find detailed descriptions of all the steps we have gone through in this article, see the links below: Interested in the sample project we used in this article? Check out its code in the sample projects section, complete with commented tests. We also already have a few working test suites for real extensions (not just example ones). Feel free to take a look for inspiration: - The Apache Camel extension test suite. - Fuse tooling’s UI test tooling, extending the Extension Tester. - Extension Tester’s own test suite, which covers almost every available page object. If you would like to get involved, check out the Contributor’s guide. We are always happy to see your feedback and suggestions, or indeed your code contributions.
https://developers.redhat.com/blog/2019/11/18/new-tools-for-automating-end-to-end-tests-for-vs-code-extensions/
CC-MAIN-2021-21
refinedweb
1,835
55.95
To use the component from VB .NET you have to go through the same steps. First you have to add a reference to the object select Add Reference, next click Browse and navigate to the Debug directory of the MyWinRTComp project and select the MyWinRTComp.winmd file. You also have to add a reference due to a glitch in the pre-beta software platform.winmd that you will find in C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\. You can also add Imports MyWinRTComp to the start of the program to avoid having to quote fully qualified names. Once this is done you can use the component as if it was a standard Basic object. First create an instance: Dim MyObject As New WinRTComponent Again this creates a COM object behind the scenes. You can use its properties, methods and events as if it was a standard VB object: MyObject.PropertyA = 1Dim temp As Integer = MyObject.PropertyAtemp = MyObject.Method(1)AddHandler MyObject.someEvent, AddressOf Myhandler All you have to do to make use of the component from JavaScript is, once again, load a reference. Right click on References in the Solution explorer and brows to the Debug directory of the MyWinRTComp project and select the MyWinRTComp.winmd file. After this the WinRT component looks just like a standard JavaScript object. However there are some slight differences from the way it all works in the other languages. The first is that there is no JavaScript equivalent of "using" or "import" - you have to use a fully qualified name. For example to create an instance of the object you have to use: var MyObject = new MyWinRTComp.WinRTComponent(); The second difference is the JavaScript enforces a "camel case" style on method and properties. That is even if you have created a method with the name MethodOne it will be changed to methodOne i.e the first character is always lower case. With this difference taken into account everything works in the same way: var temp = MyObject.propertyA;temp = MyObject.method(1);MyObject.someEvent = function EventHandler(){}; Except at the moment it doesn't actually work. In the current build you get an error when you try to create the object. It does work if you add the WinRT component project to the solution and add the reference directly. Presumably this is a short lived bug that will be fixed in the next release. Interestingly using the component from C++ is probably the most difficult thing to do with the current state of the software - ignore the lack of Intellisense prompting and even some initial error messages. First to use the component you have to add a reference. Right click on the projects name and select References. When the dialog box appears select Add New Reference and navigate to the Debug directory of the MyWinRTComp project and select the MyWinRTComp.winmd file. You can also add using namespace MyWinRTComp; if you want to avoid typing fully qualified names. Once you have done this, there is no need to add a reference to platform.winmd in this case you can start to use the component. First we need to instantiate an object: WinRTComponent^ MyObject = ref new WinRTComponent(); Notice the use of ^ and ref new to create a reference and a new COM object. Once we have the reference to the COM object it can be used, with some slight modifications, as if it was a native object. For example: MyObject->PropertyA=1;int temp=MyObject->PropertyA;temp=MyObject->Method(1);MyObject->someEvent += ref new SomeEventHandler( [](int i){}); The only slightly strange instruction is the use of the lambda to set the event handler. The Lambda is simply [](int i){}; which defines a dummy i.e. empty function with the signature (int) returning void. Lambdas were introduced recently but you can't use them with managed code i.e. they are not managed code lambdas but this isn't managed code - ignore any error messages as it does work with COM objects. The most important thing to realize is that you don't need to know any of the inner workings of the WinRT component to make use of it. WinRT objects are COM objects but their primary interface is IInspectable rather than IUnknown - however IInspetable inherits from IUnknown so you really do have a COM object to work with. WinRT objects are created by an activation factory rather than an object factory. An activation factory has to implement the IActivationFactory interface and the compiler generates one for you automatically. You can use a range of low level functions and templates that have been included in WinRT but it is probably better to leave the compiler to do the job. It also uses QueryInterface to retrieve the interface it needs and AddRef and Release to enable the use count to be maintained. The compiler automatically generates the _IWinRTComponentPublicNonVirtuals interface to provide access to your custom methods. For most of the time you can ignore the fact that you are working with COM but there is one small place where the mechanism breaks the surface. You can use whatever data types you like in the internal workings of your component but data types that are going to be placed in the public interface used to call your object have to be COM datatypes. In many cases the data types are the same but where there is a need new COM data types have been introduced in WinRT. For example, the COM and C++ native string data type are different. If you need to use a string as a return type, parameter or property then it has to be a WinRT string i.e. Platform::String: Platform::String^ MyString="Hello World"; and there are conversion methods to and from standard C++ strings. For example std::MyStdString=MyString->Date(); and Platform::String^ MyCOMString= ref new Platform::String( MyStdString.c_str()); As well as simple COM types, WinRT also has COM equivalents of more complex data structures, such as collections. This juggling of data types is about the only real complication in creating a WinRT component. Yes, this really is COM and the point is that when you instantiate a WinRT object a lot goes on behind the scenes. For the most part this is best where it stays!
http://i-programmer.info/programming/winrt/3172-building-and-using-winrt-components.html?start=1
CC-MAIN-2015-32
refinedweb
1,050
53.41
Subject: Re: [OMPI users] MPI_CANCEL From: Richard Treumann (treumann_at_[hidden]) Date: 2008-04-15 22:48:10 Hi slimtimmy I have been involved in several of the MPI Forum's discussions of how MPI_Cancel should work and I agree with your interpretation of the standard. By my reading of the standard, the MPI_Wait must not hang and the cancel must succeed. Making an MPI implementation work exactly as the standard describes may have performance implications and MPI_Cancel is rarely used so as a practical matter an MPI implementation may chose to fudge the letter of the law for better performance. There also may be people who would argue you and I have misread the standard and I am happy to follow up (off line if they wish) with anyone who interprets the standard differently. The MPI Forum is working on MPI 2.2 right now and if there is something that needs fixing in the MPI standard, now is the time to get a resolution. Regards -_at_[hidden] wrote on 04/15/2008 03:14:39 PM: > I encountered some problems when using MPI_CANCEL. I call > Request::Cancel followed by Request::Wait to ensure that the request has > been cancelled. However Request::Wait does not return when I send bigger > messages. The following code should reproduce this behaviour: > > #include "mpi.h" > #include <iostream> > > using namespace std; > > enum Tags > { > TAG_UNMATCHED1, > TAG_UNMATCHED2 > }; > > int main() > { > MPI::Init(); > > const int rank = MPI::COMM_WORLD.Get_rank(); > const int numProcesses = MPI::COMM_WORLD.Get_size(); > const int masterRank = 0; > > if (rank == masterRank) > { > cout << "master" << endl; > const int numSlaves = numProcesses - 1; > for(int i = 0; i < numSlaves; ++i) > { > const int slaveRank = i + 1; > int buffer; > MPI::COMM_WORLD.Recv(&buffer, 1, MPI::INT, slaveRank, > TAG_UNMATCHED1); > } > > } > else > { > cout << "slave " << rank << endl; > //const int size = 1; > const int size = 10000; > int buffer[size]; > MPI::Request request = MPI::COMM_WORLD.Isend(buffer, size, > MPI::INT, > masterRank, TAG_UNMATCHED2); > > cout << "slave ("<< rank<<"): sent data" << endl; > > request.Cancel(); > > cout << "slave ("<< rank<<"): cancel issued" << endl; > > request.Wait(); > > cout << "slave ("<< rank<<"): finished" << endl; > } > > > MPI::Finalize(); > > return 0; > } > > > If I set size to 1, everything works as expected, the slave process > finishes execution. However if I use a bigger buffer (in this case > 10000) the wait blocks forever. That's the output of the program when > run with two processes: > > master > slave 1 > slave (1): sent data > slave (1): cancel issued > > > Have I misinterpreted the standard? Or does Request::Wait block until > the message is delievered? > > > _______________________________________________ > users mailing list > users_at_[hidden] >
http://www.open-mpi.org/community/lists/users/2008/04/5398.php
CC-MAIN-2014-10
refinedweb
411
53.41
>> Palindrome Substring Queries in C++ In this tutorial, we need to solve palindrome substring queries of the given string. Solving palindrome substring queries is far more complex than solving regular queries in C++. It requires a far more complex code and logic. In this tutorial, we are provided string str and Q number of substring[L...R] queries, each with two values L and R. we aim to write a program that will solve Queries to determine whether or not substring[L...R] is a palindrome. We must decide whether or not the substring formed within the range L to R is a palindrome to solve each query. For example − Let's input "abbbabaaaba" as our input string. The queries were [3, 13], [3, 11], [5, 8], [8, 12] It is necessary to determine whether the substring is a plaindrome A palindrome is "abaaabaaaba" (3, 13) . It is not possible to write "baaa" as a palindrome [3, 11]. As in [5, 8]: "aaab" cannot be a palindrome. There is a palindrome in "baaab" ([3, 12]). Approach to Find the Solution Naive Method Here, we must find a palindrome by checking whether the substring is from index range L to R. Therefore, we need to check all the substring queries one by one and determine whether or not they are palindromes. As there are Q queries and each query takes 0(N) time to answer. It takes 0(Q.N) time in the worst case. Example #include <bits/stdc++.h> using namespace std; int isPallindrome(string str){ int i, length; int flag = 0; length = str.length(); for(i=0;i < length ;i++){ if(str[i] != str[length-i-1]) { flag = 1; break; } } if (flag==1) return 1; return 0; } void solveAllQueries(string str, int Q, int query[][2]){ for(int i = 0; i < Q; i++){ isPallindrome(str.substr(query[i][0] - 1, query[i][1] - 1))? cout<<"Palindrome\n":cout<<"Not palindrome!\n"; } } int main() { string str = "abccbeba"; int Q = 3; int query[Q][2] = {{3, 5}, {5, 7}, {2, 1}}; solveAllQueries(str, Q, query); return 0; } Output Palindrome Palindrome Not palindrome! Dynamic Programming Method Using a dynamic programming approach to solve the problem is an efficient option. To solve, we'll need to make a DP array, which is a two-dimensional array that contains a boolean value indicating if the substring[i...j] is a palindrome for DP[i][j]. This DP matrix will be created, and all L-R values for each query will be checked. Example #include <bits/stdc++.h> using namespace std; void computeDP(int DP[][50], string str){ int length = str.size(); int i, j; for (i = 0; i < length; i++) { for (j = 0; j < length; j++) DP[i][j] = 0; } for (j = 1; j <= length; j++) { for (i = 0; i <= length - j; i++) { if (j <= 2) { if (str[i] == str[i + j - 1]) DP[i][i + j - 1] = 1; } else if (str[i] == str[i + j - 1]) DP[i][i + j - 1] = DP[i + 1][i + j - 2]; } } } void solveAllQueries(string str, int Q, int query[][2]){ int DP[50][50]; computeDP(DP, str); for(int i = 0; i < Q; i++){ DP[query[i][0] - 1][query[i][1] - 1]?cout <<"not palindrome!\n":cout<<"palindrome!\n"; } } int main() { string str = "abccbeba"; int Q = 3; int query[Q][2] = {{3, 5}, {5, 7}, {2, 1}}; solveAllQueries(str, Q, query); return 0; } Output palindrome! not palindrome! palindrome! Conclusion In this tutorial, we learned how to solve palindrome substring queries along with the c++ code. We can also write this code in java, python, and other languages. This code was one of the most complex and lengthy codes. Palindrome queries are harder than regular substring queries, and they require very accurate logic. We hope you find this tutorial helpful. - Related Questions & Answers - Queries to check if substring[L…R] is palindrome or not in C++ Program - Can Make Palindrome from Substring in Python - C++ code to find palindrome string whose substring is S - C++ program to find string with palindrome substring whose length is at most k - Palindrome Partitioning - Entering MySQL Queries - Replace substring with another substring C++ - Substring in C# - Substring in C++ - Palindrome in Python: How to check a number is palindrome? - Palindrome program in Java. - Palindrome Number in Python - Valid Palindrome in Python - Palindrome Partitioning in C++ - Prime Palindrome in C++
https://www.tutorialspoint.com/palindrome-substring-queries-in-cplusplus
CC-MAIN-2022-33
refinedweb
728
72.46
IPython-enabled pdb Project description IPython pdb Use ipdb exports functions to access the IPython debugger, which features tab completion, syntax highlighting, better tracebacks, better introspection with the same interface as the pdb module. Example usage: import ipdb ipdb.set_trace() ipdb.set_trace(context=5) # will show five lines of code # instead of the default three lines. Issues with stdout Some tools, like nose fiddle with stdout. Until ipdb==0.9.4, we tried to guess when we should also fiddle with stdout to support those tools. However, all strategies tried until 0.9.4 have proven brittle. If you use nose or another tool that fiddles with stdout, you should explicitely ask for stdout fiddling by using ipdb like this import ipdb ipdb.sset_trace() ipdb.spm() from ipdb import slaunch_ipdb_on_exception with slaunch_ipdb_on_exception(): [...].12.2 (2019-07-30) - Avoid emitting term-title bytes [steinnes] 0.12.1 (2019-07-26) - Fix –help [native-api] 0.12 (2019-03-20) - Drop support for Python 3.3.x [bmw] - Stop deprecation warnings from being raised when IPython >= 5.1 is used. Support for IPython < 5.1 has been dropped. [bmw] 0.11 (2018-02-15) - Simplify loading IPython and getting information from it. Drop support for python 2.6 Drop support for IPython < 5.0.0 [takluyver] 0.10.3 (2017-04-22) - For users using python 2.6, do not install IPython >= 2.0.0. And for users using python 2.7, do not install IPython >= 6.0.0. [vphilippon] - Drop support for python 3.2. [vphilippon] - Command line usage consistent with pdb - Add argument commands [zvodd] 0.10.2 (2017-01-25) - Ask IPython which debugger class to use. Closes [gnebehay, JBKahn] - ipdb.set_trace() does not ignore context arg anymore. Closes. [gnebehay, Garrett-R] 0.10.1 (2016-06-14) - Support IPython 5.0. [ngoldbaum] 0.10.0 (2016-04-29) - Stop trying to magically guess when stdout needs to be captured. Like needed by nose. Rather, provide a set of function that can be called explicitely when needed. [gotcha] - drop support of IPython before 0.10.2 0.9.4 (2016-04-29) - Fix Restart error when using python -m ipdb Closes. [gotcha] 0.9.3 (2016-04-15) - Don’t require users to pass a traceback to post_mortem. [Wilfred] 0.9.2 (2016-04-15) - Closes. [gotcha] 0.9.1 (2016-04-12) - Reset sys.modules['__main__'] to original value. Closes [gotcha] - Fix support of IPython versions 0.x [asivokon] 0.9.0 (2016-02-22) - Switch to revised BSD license (with approval of all contributors). Closes [lebedov, gotcha] 0.8.3 (2016-02-17) - Don’t pass sys.argv to IPython for configuration. [emulbreh] 0.8.2 (2016-02-15) - Fix lexical comparison for version numbers. [sas23] - Allow configuring how many lines of code context are displayed by set_trace [JamshedVesuna] - If an instance of IPython is already running its configuration will be loaded. [IxDay] 0.8.1 (2015-06-03) - Make Nose support less invasive. Closes Closes [blink1073, gotcha] - Fix for post_mortem in context manager. Closes [omergertel]. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/ipdb/
CC-MAIN-2019-35
refinedweb
536
71.41
__skb_unlink, skb_unlink − remove an sk_buff from its list #include <linux/skbuff.h> void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list); void skb_unlink(struct sk_buff *skb); The skb_unlink function is a wrapper for __skb_unlink. __skb_unlink removes skb from its sk_buff_head. It decrements the list qlen pointer, and cleanly detaches the sk_buff from its queue. This function should always be used instead of performing this task manually, as it provides a clean, standardized way of manipulating an sk_buff_head, and provides interrupt disabling (see NOTES below.) Most users will not call __skb_unlink directly, as it requires that two arguments be supplied and does not provide any interrupt handling. skb_unlink determines the list from which skb should be detached, and disables interrupts. None. It is important to note the difference between not only __skb_unlink and skb_unlink,_dequeue(9), skb_insert(9), skb_queue_head(9), skb_queue_tail(9) /usr/src/linux/net/core/skbuff.c /usr/src/linux/net/ipv4/af_inet.c /usr/src/linux/net/ipv4/ip_output.c /usr/src/linux/net/ipv4/tcp.c Cyrus Durgin <cider AT speakeasy DOT org>
http://man.sourcentral.org/MGA3/9+skb_unlink
CC-MAIN-2018-39
refinedweb
175
55.44
Oct 18, 2011 08:45 AM|yenni104|LINK Hi, I am using MVC 3 unobtrusive client side validation and I need to implement some custom validation rules for my DateTime field. I got ExpiryDate and Storagedate, where Expirydate should be later or equal to today's date. Also, the Expirydate should not be earlier than the StorageDate. However, I got no idea how to implement the custom validation. I got the partial class to customize the validation in my Model like this: [MetadataType(typeof(FoodMetaData))] public partial class FOOD { [Bind(Exclude = "FoodID")] public class FoodMetaData { [ScaffoldColumn(false)] public int FoodID { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "Please enter a name")] public object FoodName { get; set; } public object StorageDate { get; set; } public object ExpiryDate { get; set; } } } Any idea how can I declare a custom validation for the DateTime field?? I googled a lot but still no able to find similar solution... Please help... Oct 18, 2011 09:49 AM|francesco abbruzzese|LINK Use the DateTimeInput and the DateRangeAttribute of my Mvc Controls Toolkit. You can define both the previous constraint, on the client side and on the server side, easily by placing two DateRange attributes on the ExpiryDate each for each constraint and by adding a new property called ToDay to your ViewModel (it has just a get returning today). User would not be able to enter a date is nort allowed at all. You have also the option, when the user choose a storageDate, to change expiryDate automatically to make the constraint holds instead of limiting the value of ExpiryDate. The example application here show an example of use. Other examples of the use of the DateTimeInput are...pratically in almost all tutorials. Oct 18, 2011 10:32 AM|yenni104|LINK Hi, can you give me some example?? Because I need something like [DateRange(SMaximum = "2012-1-1", SMinimum = "2008-1-1")] but if for ExpiryDate, i only need the minimum part which is Today's date && StorageDate. I not really understand how to do this since there is no example available there.. Sorry and thanks for your kind help.. Oct 18, 2011 02:04 PM|francesco abbruzzese|LINK There are examples....you have to see the ViewModel used in the page of the software distribution. Anyway: 1) define a property you call Today (for example): [Milestone] public DateTime Today { get{ return DateTime.Now; } } please notoce the milestone attribute, this inform the engine that the proprty will not be renderedi in the page; now you write: [DateTange(DynamicMinimum='Today')] [DateTange(DynamicMinimum='StorageDate')] public DateTime ExpiryDate{get; set;} This way when user will introduce a wrong date the date will be automatically corrected to match the constraint. if you use [DateTange(DynamicMinimum='StorageDate', RangeAction = RangeAction.Propagate)] instead when user will introduce e data that is smaller than ExpiryDate ExpiriDate will be decreased in the Html page to keep the constraint valid. Th minimum you have define are called Dinamic because they depends on the value of other properties. I suggest to addr also some fixed minimum and fixed maximum, say 1900 and 2100 or less, do force the date to be in an interval tha male sense for your application in any case: [DateTange(DynamicMinimum='StorageDate', SMaximum = "2000-1-1", SMinimum = "2100-1-1")] You can add as many date range attribute you like to the same property. Oct 19, 2011 12:01 AM|yenni104|LINK Hi, really thanks for your detailed answer. But I still cant it work.. Do I need to add anything else (in script or etc..) for the DateRange validation to work? Currently even I put my expirydate to < today's date, nothing happen (no validation message, no auto-correct). I could not find the problem.. I will show my code here: [MetadataType(typeof(FoodMetaData))] public partial class FOOD { [Bind(Exclude = "FoodID")] public class FoodMetaData { [ScaffoldColumn(false)] public int FoodID { get; set; } [DateRange(DynamicMinimum="Today", ErrorMessage="Test")] [DateRange(DynamicMinimum="StorageDate", ErrorMessage="Test storage date")] public DateTime ExpiryDate { get; set; } } [MileStone] public DateTime Today { get { return DateTime.Now; } } } The Form: @using (Html.BeginForm("CreateFood", "Stock", FormMethod.Post, new { Expiry Date </div> <div class="editor-field"> @Html.EditorFor(model => model.ExpiryDate, new { @class = "expirydate" }) @Html.ValidationMessageFor(model => model.ExpiryDate) </div> } Validation for other field do works, just for date i get nothing.. Please help... Really appreciate it.... Oct 19, 2011 08:21 AM|francesco abbruzzese|LINK 1) The autorrect behaviour happens just if you use the DateTimeInput of the Mvc Controls Toolkit: otherwise, normal unobtrusive validation is performed and when the field is invalid the usual validation message is shown. 2) For everything to work properly you have to install the Mvc Controls Toolkit since it has custom Validation and Metadata provider. It is available throught Nuget (Mvc3ControlsToolkit or Mvc2ControlsToolkit), or you can install it manually as explained here: Oct 19, 2011 08:32 AM|yenni104|LINK Hi, I did install it.. but I still cant get any validation message.. is it a must to use Html.DateTimeFor for the validation to work? If yes, do you have any idea how to assign a class to Html.DateTimeFor helper like what I did for the EditorFor? <div class="editor-field"> @Html.EditorFor(model => model.ExpiryDate, new { @class = "expirydate" }) @Html.ValidationMessageFor(model => model.ExpiryDate) </div> And is it any additional steps needed because I am using a datepicker? Really thanks for your help.... Oct 19, 2011 10:39 AM|francesco abbruzzese|LINK In my previous post I have not noticed you use editorFor....It is not compulsory to use DateTimeFor...but you can't use editorfor on the single datetime value for the reasons I explain below. In any case I strongly suggest to use DateTimeFor...it offers you more options. DateTimeFor returns an object you can use to render separately the date part and the time part. You can style both with separate css. The date part can be rendered with three dropdowns(Date method) or with a jQuery picker(DateCalendar method). Both methods have an html attributes dictionary new Dictionary<object, string> {{"class", "expirydate"}}. However if you use the jquery picker the html attributes will apply just to the textbox the DatePicker will be attached to. To style the picker you have to customiza a jQuery UI css file Now I can explain why the editorfor cannot be used: the point is that bot fields that need to be compared need to be in the same model, otherwise there is no way for the mvc engine to read the value to compare. This means both the field Today and ExpiryDate MUST be in the same model because the value of Today must be read and inserted in the validationattributes of ExpiryDate....Now if you use EditorFor on the single ExpiryDate property behind the scene Mvc creates a new model containing just this field ExpiryDate and pass it to a new View....so it is impossible to read the value of Today from there :) THERE IS NO WAY TO OVERCOME THIS LIMIT, that apply to all conditional validation attributes, since it depends on how the core data structure of the Mvc engine were designed. However the constraint on Storagedate should work, because it is resolved on the client side, since the Storagedate filed is rendered in the View (at least I have understood ...this way) Oct 19, 2011 04:35 PM|francesco abbruzzese|LINK See the example of use of the DateTimeFor here: BinariesWithSimpleExamples Basically: @{var DT = DateTimeFor(m => m.ExpiryDate, dateInCalendar : true)} @DT.DateCalendar( inLine: false, calendarOptions: new CalendarOptions { ChangeYear=true, ChangeMonth=true, }, containerHtmlAttributes: new Dictionary<string, object> {{"class", "expirydate"}}) There are various calendar option you can choose....I put the ones I like more I have chosen to use the jQuery picker, so you have to include the relateive js files, and css Oct 20, 2011 12:28 AM|francesco abbruzzese|LINK Sorry I have written that code directly in the post without compiling it..and in a hurry...so I forgot the Html before the DateTimeFor and a ; This should work: @{var DT = Html.DateTimeFor(m => m.ExpiryDate, dateInCalendar : true);} @DT.DateCalendar( inLine: false, calendarOptions: new CalendarOptions { ChangeYear=true, ChangeMonth=true, }, containerHtmlAttributes: new Dictionary<string, object> {{"class", "expirydate"}}) Oct 20, 2011 07:59 AM|francesco abbruzzese|LINK The point is that you showed me just the MetaDataType...so I don't know too much of your model. Does your ExpiryDate is a DateTime? ie a nullable DateTime ? In such a case you have to furnish a default date to be used as initial date when the Date is null as shown below: @{var DT = Html.DateTimeFor(m => m.ExpiryDate, dateInCalendar : true, emptyDate: DateTime.Now);} Obviously you can set emptyDate to whatever you like not necessaruly to DateTime.Now. Don't forget to include also the js and css files needed by the jQuery datepicker. Finally, either you render the StorageDate in yiour page with a DateTimeInput, or apply to it the Milestone attribute as you have done with the Today property. Oct 21, 2011 08:27 AM|francesco abbruzzese|LINK Pls motice, you will see no error message, but the user will simply unable to insert a wrong date, because any wrong date will be ccorrected to be withing the constraints. Pls prepare a selfcontained example with no db and send it to me through the contact form of my blog(it has the option to attach files). Oct 23, 2011 01:48 PM|Young Yang - MSFT|LINK Hi yenni You can write the code as follow: public sealed class ExpiryDateAttribute : ValidationAttribute { public override bool IsValid(object value) { DateTime expiryDate = (DateTime)value; return (expiryDate >= DateTime.Now); } } public sealed class StoragedateAttribute : ValidationAttribute { public string expiryDateProperty { get; set; } public override bool IsValid(object value) { string expiryDateString = HttpContext.Current.Request[expiryDateProperty]; DateTime storagedate = (DateTime)value; DateTime expiryDate = DateTime.Parse(expiryDateString); return expiryDate < storagedate; } } And in your view model: [ExpiryDate] public DateTime ExpiryDate { get; set; } [Storagedate(expiryDateProperty = "ExpiryDate")] public DateTime Storagedate { get; set; } Hope this helpful Regards Young Yang Oct 23, 2011 02:06 PM|yenni104|LINK Hi, amazingly it works! Really thanks a lot for your help... I got one more question, do you have any idea how I can validate both the datefield to ensure it is a valid Datetime in a specific format? Because even with the datepicker, user can input any string/numbers in the textfield and click submit.. so I want to validate those manually entered value to ensure it is a DateTime which match the format of my datepicker's Date declared as: $('#expirydate').datepicker({ constrainInput: true, minDate: 0, dateFormat: 'D, dd M yy' }); And one of the datetime form field with the format: <div class="editor-label"> Expiry Date </div> <div class="editor-field"> @Html.TextBox("ExpiryDate", String.Format("{0:ddd, d MMM yyyy}", DateTime.Now), new { id = "expirydate" }) @Html.ValidationMessageFor(model => model.ExpiryDate) </div> Really thanks a lot for your help! Oct 24, 2011 02:26 AM|Young Yang - MSFT|LINK Hi yenni, You just need to add a regularexpression attribute in your model, you can do it like this: [ExpiryDate] [RegularExpression(@"^(19|20)dd([- /.])(0[1-9]|1[012])2(0[1-9]|[12][0-9]|3[01])$”)] public DateTime ExpiryDate { get; set; } You can change the expression to what you want. Hope this helpful Regards Young Yang Oct 24, 2011 03:54 AM|Young Yang - MSFT|LINK Please find the expression from here:. If you still not found it. Please create a new thread about regular expression. Hope this helpful Regards Young Yang 20 replies Last post Oct 24, 2011 03:54 AM by Young Yang - MSFT
http://forums.asp.net/t/1731351.aspx?Custom+DateTime+Validation+for+MVC+3+unobtrusive+client+side+validation
CC-MAIN-2014-49
refinedweb
1,927
54.02
Tabstops Tabstops are highlighted placeholders for inserting content in snippets and templates. Tabstops with the same number are linked, and will mirror the content inserted in the first one. When tabstops are present in the buffer, the tab key moves the cursor through the tabstops instead of inserting a whitespace tab. Creating Tabstops Tabstops are set in snippets and templates using a similar syntax to bracketed interpolation shortcuts. The simplest type just provides a placeholder for inserting text: [[%tabstop]] Adding a colon followed by any string provides a default value (or just a helpful name): [[%tabstop:myValue]] Numbered tabstops are linked together in the buffer. Text replacements in one numbered tabstop will be replicated in tabstops with the same number. Linked tabstops should use the same default string or omit the default string once it's been defined in the first linked tabstop: print <<"[[%tabstop1:EOT]]"; [[%tabstop:TextHere]] [[%tabstop1]] Tabstops can be nested within other tabstops: def [[%tabstop1:method]](self[[%tabstop2:, ARGS]]):[[%tabstop: [[%tabstop1]].__init__(self[[%tabstop2]])]] [[%tabstop:pass]] Using Tabstops When the snippet or template is inserted or loaded in the buffer, the tabstops will show the default text (i.e. specified after the colon) highlighted in the buffer. Hitting the Tab key cycles the cursor through the tabstops in the order they appear in the snippet or template (not in numbered order). With the cursor on a tabstop, typing overwrites the default value (if defined). Hitting Tab again inserts the default value and moves to the next tabstop. Examples This "blank" PHP function snippet uses numbered tabstops. This is what it looks like when defined in the snippet: /* * function [[%tabstop1:name]] * @param [[%tabstop2:arg]] */ function [[tabstop1:name]]($[[%tabstop2:arg]]) { [[%tabstop]]; } When inserted in the buffer, it will look like this: /* * function name * @param $arg; */ function name($arg) { } The "name" tabstop is the first place Tab will stop. It gets replaced with the function name which is propagated through the rest of the snippet when Tab is hit again. The "arg" tabstop is the next place Tab stops, and is treated the same way. The unnamed tabstop is invisible in the buffer, but hitting Tab again will move the cursor to its position. Here's another example of a short snippet for HTML: <[[%tabstop1:div]]>[[%s]]</[[%tabstop1:div]]> The current selection is wrapped in an element (a "div" by default). <div>[[%s]]</div> Hitting Tab after insertion will select the first "div" for replacement. If it is replaced, the closing tag will be changed as you type. If it is not replaced, "div" will be used as the element type.
http://docs.activestate.com/komodo/6.0/tabstops.html
CC-MAIN-2015-06
refinedweb
431
60.35
Hi, I'm trying to count the number of operations it takes a certain sorting algorighm (in this case a shell sort) to sort arrays of different sizes (numbers in the arrays are randomly generated). Here is my code: The problem:The problem:Code:#include <iostream> #include <string> #include <cmath> #include <fstream> #include <iomanip> #include <stdlib.h> #include <time.h> using namespace std; void fill_array(int A[], int size); int shell_sort(int A[], int size); void print_array(int A[], int size); int main() { const int size = 10000; int A [size]; int B [21] = {1,2,3,4,5,6,7,8,9,10,20,30,40,70,100,200,300,500,1000,2000,5000}; int new_size = 0; srand ((unsigned int)time((time_t)0)); for (int i = 0; i < 21; i++) { new_size = B[i]; fill_array(A, new_size); shell_sort(A, new_size); //int count = shell_sort(A, new_size); //cout<<count<<endl; } return 0; } int shell_sort(int A[], int size) { int count = 0; int i, j, increment, temp; increment = size / 2; while (increment > 0) { for (i = increment; i < size; i++) { j = i; count++; temp = A[i]; count++; while ((j >= increment) && (A[j-increment] > temp)) { A[j] = A[j - increment]; count++; j = j - increment; count++; } A[j] = temp; count++; } if (increment == 2) increment = 1 && count++; else increment = (int) (increment / 2.2) && count++; } cout.width(4); cout<<size<<": "; cout<<count<<endl; return count; } void fill_array(int A[], int size) { int first_rand = 0; while(first_rand < size) { A[first_rand] = rand() % 1000; first_rand++; } } If i print the output of the number of operations (count) in the for loop in main, i get results such as these (The left column is the size of the array, and the right column is how many operations it takes to sort the array): Now, if i print count and the end of the sorting function, i get results such as these:Now, if i print count and the end of the sorting function, i get results such as these:Code:1: 0 2: 3 3: 6 4: 16 5: 22 6: 25 7: 31 8: 34 9: 40 10: 43 20: 88 30: 133 40: 178 70: 313 100: 448 200: 898 300: 1348 500: 2248 1000: 4498 2000: 8998 5000: 22498 So, when i print count in the sorting function, the numbers are alot higer than when count is printed in the for loop in main. So, i'm not sure what is going on, and which count is correct.So, when i print count in the sorting function, the numbers are alot higer than when count is printed in the for loop in main. So, i'm not sure what is going on, and which count is correct.Code:1: 0 2: 3 3: 10 4: 18 5: 30 6: 39 7: 43 8: 56 9: 70 10: 61 20: 220 30: 427 40: 692 70: 1735 100: 3730 200: 15362 300: 31148 500: 85822 1000: 338354 2000: 1341068 5000: 8419422 *Note, when i change where i output count (in main or in the sorting function) i comment out the other count.
https://cboard.cprogramming.com/cplusplus-programming/98152-counting-sorting-operations.html
CC-MAIN-2017-51
refinedweb
510
51.38
Some say, the best way to learn is by example. In this post we will grab the 'Each' library by ThoughtWorks, explain why it's useful, how it was designed (macros, implicits and other cool Scala stuff) and try to reuse them in our code. The goal is to learn by exploring Each and pick up some Scala ideas along the way. by Patryk Jażdżewski June 23, 2016 Tags : Scala Macro In this post we will discuss Each. Each allows Scala users to write imperative syntax, which is later on translated into scalaz monadic expressions. In a way, it’s adding syntactic sugar on top of ordinary Scala. Simplicity and the way it extends ordinary code caught my attention, so I’ve decided to dig deeper. This post will answer some basic questions about Each - how to use it, why it works and how to use the same principles in your code. Hopefully, we will demystify Scala macros along the way. Each allows you to write operations on values enclosed in Monads in a plain, imperative syntax. In a sense it’s similar to await from Python on C#, but without being limited to Futures. It helps you to eliminate as much boilerplate code as possible and focus on the business logic. Although it might sound complex in theory it’s straightforward in practice. I’m sure you will get the idea after looking at these examples. You got the idea - import each, wrap your computation in monadic[TheMonadYouAreUsing] { ... } and access raw values with .each inside the code block. Neat. You can find more examples and use cases in the project’s README. We will focus now on inspecting Each and figuring out how to use similar techniques in our code. The short answer is “Scala magic”, but if you are a curious developer you probably won’t be satisfied with this answer. The (slightly) longer answer won’t come as a surprise to a seasoned Scala Developer - combining implicits, pimp my object pattern and Scala macros. The in-depth answer goes like this … The import that we used before import com.thoughtworks.each.Monadic._ brings us few things in scope. One of them is an implicit conversion which is able to transform any Monad ( F[A], where F is monad and A is the underlying type) into a ‘Pimp’ class EachOps. The class looks simple enough We see our familiar each method here, but you might be surprised that it’s undefined at this point. The secret here is that each isn’t part of the final code. It makes sure the types do match and serves as a marker. Later on the block passed into monadic will be rewritten and each will be substituted with real calls. We will see that in a second. The real thing happens inside the monadic[X] call. It takes the type information and body you passed in and expands a macro Macro is the place where our example becomes much more convoluted. Each covers many cases that we didn’t mention here, so we will drop some details to keep it understandable. For the sake of simplicity, we will use the Future addition example from the first paragraph. The high level objective of this macro is to extract information about the code we need to transform and then build a new tree with ordinary scalaz inside. We start of with our apply method in MacroImplementation. Here c is the whitebox context provided by scala, monad is the monad we are using in the current block ( Future in our example) and body is what we would like to transform. It will roughly correspond to our code Now, that we have all the raw data in place, we can extract type information using scala.reflect.api and build a custom MonadicTransformer class to transform the plain code into it’s final form. Scala represents abstract syntax trees as convenient case classes, so we can use plain pattern matching to traverse and transform the code recursively. We will not analyze all the cases here, but we will get back to it in the next section. The point of all those transformations it to simplify each expressions into plain scalaz calls like bind invocation Above you can see reflection API in action. To explain the code in a few words Apply indicates that a function should be called, Select and TermName help us to choose what code to call exactly, Function allows us to create a new function, List … that’s an ordinary list to provide arguments. I encourage you to explore the documentation to learn more. At the end of the expansion process we should be left with a code like this It might look hard with all the $ and brackets, but you can see the pattern here - nested calls leading to a simple Int sum at the end. The tree that we have built can now be used as a subtraction for our original block. No each calls are needed. That’s it. In the next section we will try to use the same pattern on our own. We took a sneak peak under the hood to get some understanding what is going on, now let’s take a look how much we really did understand. We will try to implement a very simple macro on our own. We will call it with withLogging. This macro will accept a generic function and print the return type before proceeding, so you can debug your code more easily. Note that in order for macros to work they need to be in a separate unit of compilation - they cannot be created in the same project/module where you use them. I decided to stick to the previous example and drop the code straight into the codebase of Each, but you might want to put it into a sbt subproject. Let’s get started with some dummy code we want to log: As you can see we have our withLogging macro-based function and we pass an anonymous () => Int function into it. withLogging takes a function and returns a new one. This new function does a simple trick: calls the argument, saves the result, prints it and returns it back. To help us with debugging we will also print the line number. Fairly straightforward stuff. The code is as simple as the description: We start of with defining the external interface. It’s worth pointing out that Scala macros are typesafe. You cannot (or shouldn’t) cheat the type system. If the generated code doesn’t type-check, compilation will fail. That’s a good thing. Let’s walk through this code quickly. It’s based around the same principles as original Each. We have a method that take the Context of our macro transformations and a Tree of the code that we want to transform. The following code is building and expression and an AST. To do that we use helper classes provided to us by Scala. They might look intimidating at the beginning, but if you get a better understanding of what they do it becomes simple. What’s cool about them is that they are ordinary case classes and all the usual tricks apply - easy instance creation (as above), destructing in pattern matching and so on. {...}. val foo = .... It requires a name and value. It can be given modifiers an explicit type. Quite neat, isn’t it? Not that complicated either. Unfortunately, a bit verbose. With Scala 2.11.X we have a new more natural way of building ASTs - quasiquotes. They allow you to write code as a String with the q interpolator to add Trees into the mix. It’s easier to explain with an example: That’s even better! In this post we have learnt a bit about Each as a tool, how it’s built and how can we use the same tricks in our code. We have played a bit with macros and shown that there’s no magic in Scala. But, most importantly, we shown that digging into an unknown code might be a excellent way to learn. by Patryk Jażdżewski June 23, 2016 Tags : Scala Macro
https://blog.scalac.io/2016/06/23/each_lib_macros.html
CC-MAIN-2018-26
refinedweb
1,369
72.16
QML button's palette with enabled property Hello I am using the raw of buttons to perform some actions and at the same time to represent the current state of the project. So I use palette.buttonto set the current color. When the button has been clicked by the user it should go to disabled state for the action running time. So I do enabled = false. All works as expected except that the button visually stays in the disabled state (text is faded) even after I re-enable it ( enabled = true) though it is, in fact, active now (I can click on it and the action will propagate). It seems like non-default palettes aren't compatible with enabledproperty. Is it a known behavior or I'm missing something? I am using PySide2. QML imports are: import QtQuick 2.12 import QtQuick.Controls 2.12 import QtQuick.Layouts 1.12 import QtGraphicalEffects 1.12 import QtQuick.Dialogs 1.3 as QtDialogs import Qt.labs.platform 1.1 as QtLabs import ProjectListItem 1.0 import Settings 1.0 OK, I guess I should just set buttonTextproperty explicitly when manipulating button's palette so the text always will be in the desired state. Not very elegant but anyway... Probably can use Statecomponent to group both changes in background and text under one thing. I don't know. But: gives... something. But also, if you dig into source you could probably know what you are looking to seek (I don't know exactly your situation but you can find out). I.e. for me ( QML ) I'd start digging and find out how things are, places things like: "\Qt<version><kit>\qml\QtQuick\Controls.2\Button.qml" for example: background: Rectangle { implicitWidth: 100 implicitHeight: 40 visible: !control.flat || control.down || control.checked || control.highlighted color: Color.blend(control.checked || control.highlighted ? control.palette.dark : control.palette.button, control.palette.mid, control.down ? 0.5 : 0.0) border.color: control.palette.highlight border.width: control.visualFocus ? 2 : 0 } and if I'd used Material theme see@ "\Qt<version><kit>\qml\QtQuick\Controls.2\Material\Button.qml" for example: color: !control.enabled ? control.Material.buttonDisabledColor : control.highlighted ? control.Material.highlightedButtonColor : control.Material.buttonColor Anyhow, point is, you might find just a theme might do it - possibly not. In which case, you can always overwrite the background delegate for fine tuning yourself. Or any other section that doesn't quite do it for you. OK, I guess I should just set buttonTextproperty explicitly when manipulating button's palette so the text always will be in the desired state. Not very elegant but anyway... Probably can use Statecomponent to group both changes in background and text under one thing.
https://forum.qt.io/topic/112777/qml-button-s-palette-with-enabled-property/2
CC-MAIN-2022-33
refinedweb
452
51.34
i want to do nested sorting . I have a course object which has a set of applications .Applications have attributes like time and priority. Now i want to sort them according to the priority first and within priority i want to sort them by time. For example, given this class (public fields only for brevity): public class Job { public int prio; public int timeElapsed; } you might implement sorting by time using the static sort(List, Comparator) method in the java.util.Collections class. Here, an anonymous inner class is created to implemented the Comparator for "Job". This is sometimes referred to as an alternative to function pointers (since Java does not have those). public void sortByTime() { AbstractList<Job> list = new ArrayList<Job>(); //add some items Collections.sort(list, new Comparator<Job>() { public int compare(Job j1, Job j2) { return j1.timeElapsed - j2.timeElapsed; } }); } Mind the contract model of the compare() method:
https://codedump.io/share/xcGseRmZ56Pu/1/sorting-objects-in-java
CC-MAIN-2017-34
refinedweb
151
57.27
Created attachment 28405 [details] testcase to trigger the ICE Running the attached testcase with the current Debian's gcc-snapshot gfortran I get: $ cat m1.f95 module solver_2D_m use adv_m type :: solver_2D_t class(adv_t), pointer :: adv contains end type contains subroutine solver_2D_solve(this) class(solver_2D_t) :: this end subroutine end module $ cat m2.f95 module adv_m type, abstract :: adv_t contains procedure(op_2D_i), deferred :: op_2D end type abstract interface subroutine op_2D_i(this) import :: adv_t class(adv_t) :: this end subroutine end interface end module $ cat m12.f95 #include "m1.f95" #include "m2.f95" $ cat m21.f95 #include "m2.f95" #include "m1.f95" program m21 end $ cat trigger.sh #!/bin/bash /usr/lib/gcc-snapshot/bin/gfortran -cpp m21.f95 /usr/lib/gcc-snapshot/bin/gfortran -cpp m12.f95 $ ./trigger.sh m1.f95:10:0: internal compiler error: in gfc_create_module_variable, at fortran/trans-decl.c:4013 end subroutine ^ Please submit a full bug report, with preprocessed source if appropriate. See <> for instructions. Huh, the procedure to reproduce the bug is rather strange. However, I can confirm the error (with 4.7 and trunk). I could even reduce it a bit more: > cat m1.f95 module solver_2D_m use adv_m class(adv_t), pointer :: adv end module > cat m2.f95 module adv_m type :: adv_t end type end module > cat m12.f95 #include "m1.f95" #include "m2.f95" > cat trigger.sh #!/bin/bash gfortran-4.8 -c m2.f95 gfortran-4.8 -cpp -c m12.f95 The error on the second line only happens if the module file of 'adv_m' is present (created by the first line). The example seems very 'constructed' to me. Does it have any practical relevance to you? Hi, I've came across it after simply switching the order of module definitions in a file (i.e. no preprocessor - I've used the preprocessor to simplify the test case). I would then answer: definitely YES! - fixing it might save someone a lot of time. Due to the ICE, and due the fact that the presence of the .mod file influences gfortran's behaviour here, figuring out what's wrong is really tricky and time consuming. Sylwester One way to get rid of the error is to simply remove the assert that causes it (which was already constrained by Paul for PR43450). However, I'm not sure if that's justified. Index: gcc/fortran/trans-decl.c =================================================================== --- gcc/fortran/trans-decl.c (revision 192392) +++ gcc/fortran/trans-decl.c (working copy) @@ -4006,15 +4006,6 @@ gfc_create_module_variable (gfc_symbol * sym) decl = sym->backend_decl; gcc_assert (sym->ns->proc_name->attr.flavor == FL_MODULE); - /* -fwhole-file mixes up the contexts so these asserts are unnecessary. */ - if (!(gfc_option.flag_whole_file && sym->attr.use_assoc)) - { - gcc_assert (TYPE_CONTEXT (decl) == NULL_TREE - || TYPE_CONTEXT (decl) == sym->ns->proc_name->backend_decl); - gcc_assert (DECL_CONTEXT (TYPE_STUB_DECL (decl)) == NULL_TREE - || DECL_CONTEXT (TYPE_STUB_DECL (decl)) - == sym->ns->proc_name->backend_decl); - } TYPE_CONTEXT (decl) = sym->ns->proc_name->backend_decl; DECL_CONTEXT (TYPE_STUB_DECL (decl)) = sym->ns->proc_name->backend_decl; gfc_module_add_decl (cur_module, TYPE_STUB_DECL (decl)); (In reply to comment #3) > One way to get rid of the error is to simply remove the assert that causes it > (which was already constrained by Paul for PR43450). However, I'm not sure if > that's justified. At least it does not introduce any regressions in the testsuite.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=54880
CC-MAIN-2015-27
refinedweb
529
52.56
A. Before this security fix you could use this feature out of the box and there was no control over the criteria that were passed by the user. Information outside of an admin user’s permission could leak. In order to avoid this a new method called lookup_allowed has been added to the ModelAdmin. By default this method restricts the lookup to what is declared in list_filter or date_hierarchy. As with most things in Django, you can easily override this function in the subclass of ModelAdmin to allow for additional lookups. For the purpose of this blog post, we’ll leave that as an exercise for the reader, but see this blog post by Chris Adams for more details. The Basics Let’s assume we have an app in our project called newsroom that is used to manage published content. A peek into models.py shows us: class Article(models.Model): teaser = models.CharField(max_length=200) source = models.CharField(max_length=5) section = models.ForeignKey("Section") categories = models.ForeignKey("Category") update_date = models.DateTimeField() is_published = models.BooleanField(default=False) class Section(models.Model): name = models.CharField(max_length=50) slug = models.SlugField() class Category(models.Model): name = models.CharField(max_length=100) slug = models.SlugField() After hooking your models into the admin, you can filter them simply by appending a query string to the admin change list URL. For example: /admin/newsroom/article/?teaser__isnull=False You can even chain filters, using the & between each one: /admin/newsroom/article/?teaser__isnull=False&source;__exact=GR This is not all you can also do join tables to narrow down your choices: /admin/newsroom/article/?categories__slug__exact=politics&section;__slug__exact=news datetime is also supported and there is very little suprised in the syntax: /admin/newsroom/article/?updated_date__gt=2010-09-01 boolean fields are a bit more tricky because you need to actually pass an integer 0 (False) 1(True) /admin/newsroom/article/?is_published__exact=1 These are just a few examples, but you can create any number of complex filters by following Django’s documentation on database queries. Admin integration Being able to dynamically craft the URLs to build the query is useful but limited to people that are “in the know.” While working on this article, Martin brought to my attention that these links can easily be integrated into the django.contrib.admin interface by overriding the admin template. The change_list.html template has its own block: {% block filters %} {% if cl.has_filters %} <div id="changelist-filter"> <h2>{% trans 'Filter' %}</h2> {% for spec in cl.filter_specs %}{% admin_list_filter cl spec %}{% endfor %} </div> {% endif %} {% endblock %} Due to the current structure of the admin templates, you’ll need to maintain some boiler-plate code1. You can minimize that, however, thanks to block.super. {% block filters %} {{ block.super }} <div id="changelist-filter"> <h2>{% trans 'Custom Filters' %}</h2> <!-- list of filter links --> </div> {% endblock %} See the Django documentation about overriding admin templates for more information. Note: In order to avoid hard-coding the URL you can reverse admin URLs from django.core.urlresolver import reverse reverse('admin:%s_%s_changelist' % ('newsroom', 'article')) The snippet above will return something like : /admin/newsroom/article/ If you want to use this in a template you can take advantage of the {% url %} tag. Conclusion Even though the Django admin does not currently provide a nice UI for adding custom filters, you can still take advantage of this hidden gem. They provide your users with easily bookmarked links to the information they care about. You could also use this technique to create a dashboard or reports page. 1Ticket #5833 aims to make this process even easier in the future
https://lincolnloop.com/blog/2011/jan/11/custom-filters-django-admin/
CC-MAIN-2017-04
refinedweb
599
51.14
Is there anyone else about to read this, as fascinated as I am by simulations? A simulation of a game, weather, aerodynamic behavior, or any other action, has intrinsically the same goal - take a peek into the future and see how things will perform in real life. These simulations all require an extent of calculations and variables to take into account, but also a model that will elegantly run through them. "Aren't you describing Machine Learning? Isn't it also predicting the future with complicated maths and a model that no one really understands?", you ask. Yeah, I guess that ML fits into this description, but that's not the kind of predictions that I will be writing here. As an Engineer, I like to think that I build things. Some digital, some physical. Some work, some don't. So, for this problem, I'll use a carpentry workshop as my example (because, you know, carpenters build things that mostly work). Let's say that in our imaginary carpentry we have the following working stations: - Cut wood - Polish wood - Paint wood - Varnish and protective coating - Assembly In this workshop, not all orders have the same processing. A chair may not need a painting or protective coating, but a kitchen table might. Or it can be a supplier for that Swedish company that doesn't assemble their products cough cough. Also, a given workstation can receive orders from various different stations. Imagine a directed graph like the one below: If we were about to score a big deal with a major client, but we don't know if we can handle the request in time, given that we have already so many orders in the shop, how can we decide on that? Simulation Time! Assuming that we know how much time each process takes, and the types of processing each order needs, we can verify when all of our registered orders, together with the new one, will be completed. I know these calculations could be done by hand, (yeah yeah) but bear with me on this one. Note that I'm not a Production Engineer, don't take my advice on production management. It's just for the sake of the blog post and the use for simulations in our lives. Looking at workstations as if they were picking an order from the pile/queue, "doing their thing" and sending it to the next step, we can think of them as completely individual and independent actors from each other, inside the simulation engine. They simply pick an order from the queue and put it in the next one after the work is done. If nothing is waiting in line, the workstation stops until a new one appears. It can also be the starting or finishing point of an order. Quick intro to Elixir and Actors Continuing with the same thought, we identify some Actor Model programming characteristics. In response to a message it receives, an actor can: - make local decisions; - create more actors; - send more messages; - determine how to respond to the received message. Actors may modify their private state, but can only affect each other indirectly through messaging. An actor is also a lightweight entity, unlike Operating System processes or threads. One of the selling points of Elixir is its support for concurrency. The concurrency model relies on Actors. So, implementing the initial idea, where each workstation is an actor part of the simulation, should not be a problem. Different types of simulations There are two different types of simulation, Discrete Event Simulation (DES) and Continuous Simulation (CS). DES: It models the operation of a system as a (discrete) sequence of events in time. CES: the system state is changed continuously over time based on a set of differential equations defining the rates of change of state variables. In our carpentry workshop, events can be treated as discrete. So in our simulation, time "hops" because events are instantaneous (jump straight to finish processing time) and do not occur in a continuous form. This way, the clock skips to the next event moment, each time an event is completed, as the simulation proceeds; Step by step Guess the boring talk is over and it's now time to put the ideas into work. Here, I'll use GenServers, as workstations. With the Erlang and GenServer characteristics, a process has a FIFO "mailbox", ensuring the ordering of the messages received by each workstation. If process A sends messages x and y, by this order, to process B, we know that it'll receive them by the same order. # Process A send(B, x) send(B, y) # Process B receive() # => x receive() # => y Though, if we have a process C that also sends a message w to B in the meantime, we don't have guarantees in what order it'll be received. Only that x will be first then y. # Process A send(B, x) send(B, y) # Process C send(B, w) # Process B receive() # => w || x receive() # => w || x || y receive() # => w || y Everyone that has already dealt with concurrency problems like this one, with several actors swapping messages, can smell a problem here... We need to have some kind of synchronization between processes. Order of messages from different actors/workstations matters in our simulation. That's why I'm introducing the Simulation Manager. We must ensure that the Manager only fast forwards the clock when it has all the updated values from the actors affected by some event in the past, keeping "timed events" from overlapping. This process will ensure synchronization and message order. The existence of a centralized actor simplifies this problem and potentially increases efficiency due to less metadata and extra messages in a peer-to-peer case. A bit of code 1. To start our simulation, we start a new process (actor). This will be the Simulation Manager and is responsible for spawning an actor for each workstation that composes our workshop's pipeline; # Simulation Manager code def start_link do actors_pids = CarpentryWorkshop.Workstations.get_all() |> Map.new(fn place -> start_place_actor(place) end) GenServer.start_link( __MODULE__, %{ pids: actors_pids, clock: Timex.now(), events: [], awaiting: [] }, name: SimManager ) end defp start_place_actor(place) do {:ok, pid} = Pipe.SimPlace.start_link(place) {place.id, pid} end In the code above, we have the Simulation Manager starting all the other actors and storing their pids to identify each one in the rest of the distributed algorithm. Note that there were other alternatives to this in Elixir, but this one is probably the easiest. Events represent messages received from each place and awaiting are actors that the Manager should wait for before advancing the simulation time. 2. Each of these actors will collect the list of orders in their queue, and sort them by the delivery date. Announcing to the manager when the next event will occur. The next event is calculated by the processing time of the first package in the queue; # Simulation Place code def start_link(place_id) do events = get_events_list GenServer.start_link( __MODULE__, %{ self_info: place_id, pids: %{}, events: events, } ) next = hd(events) events |> hd |> finish_processing_timestamp send(SimManager, {:next_event, next, place_id} end def get_events_list do place_id |> CarpentryWorkshop. Orders.list_for_place(place_id) |> CarpentryWorkhop. Orders.sort_by_delivery_time(events) end def finish_processing_timestamp do # Based on the type of order # Calculate end processing time end 3. When the manager gets all announcements, orders them, and advances clock time to the next event, warning the actor where the event will occur; def handle_info({:next_event, event, sender}, state) do ordered = insert_ordered(state.events, event, sender) awaiting = state.awaiting -- [sender] {rest_events, next_step} = send_next_event(ordered, awaiting) ... end # we have all answers def send_next_event(events, []) do next_event = get_first_from_queue(events) send(next_event.place_id, {:advance, event}) end # missing answers from actors def send_next_event(events, awaiting) do {events, nil} end 4. Then, the respective workstation Actor receives a message from the manager and completes the "event" (package processed). If the package is concluded, it will update the database value for the expected delivery time. Else, it will send a message with the package to the next workstation (actor). It also warns the manager, that it needs to wait for the next event update from itself and the next workstation; def handle_info({:advance, event}, state) do # check if this is the last place for this order case tl(event.rest_path) do [] -> # Update DB Value update_prediction(event) path -> warn_next_workstation(state, event) warn_manager(state, event) end ... end 5. The intervening workstation actor will recalculate the next event time and announce it to the simulation manager ( nil in case that there is no package in the queue), to continue the simulation; This part is analogous to Step 2. 6. The simulation stops when the manager doesn’t have more "next events" in the queue. if rest_events == [] && next_step == nil do # Finished Simulation stop_workstation_actors(pids) {:stop, :normal, []} end In the code snippets above, the parts for calculating timestamps and the next events are omitted, but they account for a major part of this simulation and synchronization between actors. After glueing every missing part of this simulation, our workshop is now ready to answer when an order (newly placed or not) will be ready. Wrapping up We demonstrated some of the advantages of using Elixir for distributed algorithms/problems. However, use this kind of distributed approach with caution, as it can introduce all sorts of bugs. And believe me, you won't like debugging distributed stuff... Hope I got you a little bit more interested in simulations in general and taking advantage of the actor model to solve exquisite problems. Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/finiam/simulations-with-elixir-and-the-actor-model-2lmf
CC-MAIN-2022-33
refinedweb
1,586
60.65
Pytrack i2c causes system to freeze Hi, I'm trying to connect a GPy to a Pytrack, check GPS and battery, run some other code, then turn the pytrack and gpy off. Half of the time I'm getting I2c errors, garbage GPS output and battery readings, but those I can try/catch. The problem I can't wrap my head around are random freezes when initializing the pytrack. Those cause the entire board to cease function with only a reset returning it to working condition. This is the code that causes the freeze. I try to initialize as shown in the documentation with an added try / catch / repeat if there are i2c errors which is the case most of the time. This is ~10s after starting, since waiting to init i2c helped for others. I also tried to reinstall the pytrack firmware, gpy firmware and tried a wipy and another gpy instead, all with the code freshly uploaded. def initPyCoProc(): global py pydone = False # is the init successful and done? while not pydone: try: print("try") py = Pytrack() # init happens here, freeze also. pydone = True print("pytrack init success") except Exception as e: # try to catch whatever causes the connection to fail print(e) time.sleep(2) # try again after 2 seconds py.gps_state(gps=True) # unrelated function added to pycoproc lib to turn gps on / off independently. No effect on freeze behavior if removed The only thing that can stop the freeze is a manual reset or one through a WDT. I'd like to not have to add WDT code through my entire codebase. Are there any known solutions for this apart from the somewhat hacky watchdog timer reset? Is there any known reason for this behavior? Ideally, I'd like my code to run reliably... This post is deleted! @crumble Thanks, it seems to be more stable with the new firmware. Pybytes is a whole new rabbit hole of problems though. I might try @andreas firmware later. It would be really nice to have more documentation from pycom regarding the versions and differences, as well as all functions of their various libraries while we're at it. This is not the latest one. You will get 1.20.1(r1) only if you choose the pybytes option. Version 1.20.0.x is based on IDF 3.1, version 1.20.1 on IDF 3.2. IDF is the development environment from Xtensa for its ESP32 microcontroler. Version 3.2 fixed a lot of issues regarding i2c. With 1.20.0.x I got a lot of garbage from the L76 as well. You will get 1.20.1 from pycom only when you select pybytes. No account needed. Not all users liked the pybyte behaviour and compiled their own firmware. @andreas hosts some of them. @crumble the lastest firmware I see in the pycom firmware update tool (dev builds enabled or not) is 1.20.0rc13 I don't see a newer version here either. My Pytrack is on firmware 0.0.8 which seems to be the latest. Did you flash from a file or would this be due to differences between lopy and gpy? Since version 1.20.1(r1)? my communication problems between LoPy and pytrack are gone. There is only one left, which seems to be related to the L76. Init fails sometimes. In this case I have to reboot the L76. Cause you cannot communicate with the chip, you have to cut the power supply . Either manually or by calling the pytracks deepsleep method with GPS parameter.
https://forum.pycom.io/topic/5595/pytrack-i2c-causes-system-to-freeze
CC-MAIN-2020-40
refinedweb
597
76.82
C Programming/C Reference/stdarg.h stdarg.h is a header in the C standard library of the C programming language that allows functions to accept an indefinite number of arguments.[1] It provides facilities for stepping through a list of function arguments of unknown number and type. C++ provides this functionality in the header cstdarg; the C header, though permitted, is deprecated in C++. The contents of stdarg.h are typically used in variadic functions, though they may be used in other functions (for example, vprintf) called by variadic functions. Contents Declaring variadic functions[edit] Variadic functions are functions which may take a variable number of arguments and are declared with an ellipsis in place of the last parameter. An example of such a function is printf. A typical declaration is int check(int a, double b, ...); Variadic functions must have at least one named parameter, so, for instance, char *wrong(...); is not allowed in C. (In C++, such a declaration is permitted, but not very useful.) In C, a comma must precede the ellipsis; in C++, it is optional. Defining variadic functions[edit] The same syntax is used in a definition: long func(char, double, int, ...); long func(char a, double b, int c, ...) { /* ... */ } An ellipsis may also appear in old-style function definitions: long func(); long func(a, b, c, ...) char a; double b; { /* ... */ } stdarg.h types[edit] stdarg.h macros[edit] Accessing the arguments[edit] To access the unnamed arguments, one must declare a variable of type va_list in the variadic function. The macro va_start is then called with two arguments: the first is the variable declared of the type va_list, the second is the name of the last named parameter of the function. After this, each invocation of the va_arg macro yields the next argument. macro invocation va_copy(va2, va1) copies va1 into va2. There is no mechanism defined for determining the number or types of the unnamed arguments passed to the function. The function is simply required to know or determine this somehow, the means of which vary. Common conventions include: - Use of a printfor scanf-like format string with embedded specifiers that indicate argument types. - A sentinel value at the end of the variadic arguments. - A count argument indicating the number of variadic arguments. Type safety[edit] Some C implementations provide C extensions that allow the compiler to check for the proper use of format strings and sentinels. Barring these extensions, the compiler usually cannot check whether the unnamed arguments passed are of the type the function expects, or convert them to the required type. Therefore, care should be taken to ensure correctness in this regard, since undefined behavior results if the types do not match. For example, if passing a null pointer, one should not write simply NULL (which may be defined as 0) but cast to the appropriate pointer type. Another consideration is the default argument promotions applied to the unnamed arguments. A float will automatically be promoted to a double. Likewise, arguments of types narrower than an int will be promoted to int or unsigned int. The function receiving the unnamed arguments must expect the promoted type. GCC has an extension that checks the passed arguments: - format(archetype, string-index, first-to-check) - The format attribute specifies that a function takes printf, scanf, strftimeor strfmonstyle arguments which should be type-checked against a format string. For example, the declaration:causes the compiler to check the arguments in calls tocauses the compiler to check the arguments in calls toextern int my_printf (void *my_object, const char *my_format, ...) __attribute__ ((format (printf, 2, 3))); my_printffor consistency with the printfstyle format string argument my_format. Example[edit] #include <stdio.h> #include <stdarg.h> /* print all non-negative args one at a time; all args are assumed to be of int type */ void printargs(int arg1, ...) { va_list ap; int i; va_start(ap, arg1); for (i = arg1; i >= 0; i = va_arg(ap, int)) printf("%d ", i); va_end(ap); putchar('\n'); } int main(void) { printargs(5, 2, 14, 84, 97, 15, 24, 48, -1); printargs(84, 51, -1); printargs(-1); printargs(1, -1); return 0; } This program yields the output: 5 2 14 84 97 15 24 48 84 51 1 To call other var args functions from within your function (such as sprintf) you need to use the var arg version of the function (vsprintf in this example): void MyPrintf(const char* format, ...) { va_list args; char buffer[BUFSIZ]; va_start(args,format); vsprintf (buffer, format, args ); FlushFunnyStream(buffer); va_end(args); } varargs.h[edit] POSIX defines the legacy header varargs.h, which dates from before the standardization of C and provides functionality similar to stdarg.h. This header is not part of ISO C. The file, as defined in the second version of the Single UNIX Specification, simply contains all of the functionality of C89 stdarg.h, with the exceptions that: it cannot be used in standard C new-style definitions; you may choose not to have a given argument (standard C requires at least one argument); and the way it works is different—in standard C, one would write: #include <stdarg.h> int summate(int n, ...) { va_list ap; int i = 0; va_start(ap, n); for (; n; n--) i += va_arg(ap, int); va_end(ap); return i; } or with old-style function definitions: #include <stdarg.h> int summate(n, ...) int n; { /* ... */ } and call with summate(0); summate(1, 2); summate(4, 9, 2, 3, 2); With varargs.h, the function would be: #include <varargs.h> summate(n, va_alist) va_dcl /* no semicolon here! */ { va_list ap; int i = 0; va_start(ap); for (; n; n--) i += va_arg(ap, int); va_end(ap); return i; } and is called the same way. varargs.h requires old-style function definitions because of the way the implementation works.[2] References[edit] - ↑ "IEEE Std 1003.1 stdarg.h".. Retrieved 2009-07-04. - ↑ "Single UNIX Specification varargs.h".. Retrieved 2007-08-01.
https://en.wikibooks.org/wiki/C_Programming/C_Reference/stdarg.h
CC-MAIN-2014-10
refinedweb
981
55.74
Code is far easier to maintain if it uses real names to identify variables. A real name is one that describes what the variable represents rather than what type it is. Avoid using arbitrary or generic words (including abbreviations of them) on their own in the code. Some commonly seen naming mistakes are shown below. : private static final int HOUR_CONSTANT = 1000 * 60 * 60; private static final int ORDERS_CONSTANT = 40; : The word CONSTANT by itself does not convey more information to the user about what the constant represents. Final variables are constants by definition, so it is redundant information. Use the name of what the constant represents instead, in upper case. : private static final int MILLISECONDS_PER_HOUR = 1000 * 60 * 60; private static final int MAX_ORDER_ITEMS = 40; : Variables too can have unneccessarily long names that add little value: public int updateStatus(String nameVariable, int statusVariable) { : return resultVar; } Since all variables are already variables, drop the word “Variable” and use the real name instead, even if it is only a temporary variable or simple string. public int updateStatus(String itemName, int recordStatus) { : return lastUpdateResult; } Avoid using the word 'type' on its own because it does not convey enough information about what kind of type it is. In some databases it is also a reserved word, so java object-to-database mapping will be confusing. public class Order { private int type; private Date date; : } In the above case, the word 'order' should be included to clarify type. Also, a field called 'date' is too vague; dates relate to an event that happens rather than to the order itself. So, the example is better updated as below: public class Order { private int orderType; private Date submitDate; private Date paymentDate; private Date deliveryDate; : } By convention, 'input' or 'in' is OK to use if it is in a self-contained stream reader loop where it represents the input stream. But if the input reference is being remembered or passed around, it is better to state what kind of input it is. The same applies for output. State what kind of input or output it is if it adds some readability. private Label parseForLabel(InputStream input1, File input2) { : } Sure, the Javadoc can give clearer information or the calling code can be analyzed to find what kind of values are passed in. But the java method is more readable when the InputStream is given a real name. private Label parseForLabel(InputStream orderFileInputStream, File addressXmlInputFile) { : } The count keyword is seen often without neccesary qualification. : if (wasRecordCreated(returnMessage)) { count++; } if (isErrror(returnMessage)) { count2++; } else { count3 += returnMessage.getItemWeight(); } : Using the count keyword in a very simple and obvious loop is reasonable, but in all other cases use more explicit language to explain what kind of count it is. : if (wasNewRecordCreated(returnMessage)) { createdRecordCount++; } if (isErrror(returnMessage)) { errorCount++; } else { // Hey, not a count after all - it's a total! totalOrderWeight += returnMessage.getItemWeight(); } :
http://javagoodways.com/name_real_Use_real_names.html
CC-MAIN-2021-21
refinedweb
477
50.16
Python gotta go faster Crossposted from pepijndevos.nl While discussing the disappointing performance of my Futhark DCT on my “retro GPU” (Nvidia NVS 4200M) with Troels Henriksen, it came up that the Python backend has quite some calling overhead. Futhark can compile high-level functional code to very fast OpenCL, but Futhark is meant to be embedded in larger programs. So it provides a host library in C and Python that set up the GPU, transfer the memory, and run the code. It turns out the the Python backend based on PyOpenCL is quite a bit slower at this than the C backend. I wondered why the Python backend did not use the C one via FFI, and Troels mentioned that someone had done this for a specific program and saw modest performance gains. However, this does require a working compiler and OpenCL installation, rather than just a pip install PyOpenCL, so he argued that PyOpenCL is the easiest solution for the average data scientist. I figured I might be able to write a generic wrapper for the generated C code by feeding the generated header directly to CFFI. That worked on the first try, so that was nice. The hard part was writing a generic, yet efficient and Pythonic wrapper around the CFFI module. The first proof of concept required quite a few fragile hacks (pattern matching on function names and relying on the type and number of arguments to infer other things) But it worked! My DCT ran over twice as fast. Then, Troels, helpful as always, modified the generated code to reduce the number of required hacks. He then proceeded to port some of the demos and benchmarks, request some features, and contribute Python 2 support. futhark-pycffi now supports all Futhark types on both Python 2 and 3, resulting in speedups of anywhere between 20% and 100% compared to the PyOpenCL backend. Programs that make many short calls benefit a lot, while programs that call large, long-running code benefit very little. The OpenCL code that runs is the same, only the calling overhead is reduced. One interesting change suggested by Troels is to not automatically convert Futhark to Python types. For my use case I just wanted to take a Numpy array, pass it to Futhark, and get a Numpy array back. But for a lot of other programs, the Futhark types are passed between functions unchanged, so not copying them between the GPU and CPU saves a lot of time. There is even a compatibility shim that lets you use futhark-ffi with existing PyOpenCL code by merely changing the imports. An example of this can be seen here. After installing Futhark, you can simply get my library with pip. (working OpenCL required): pip install futhark-ffi Usage is as follows. First generate a C library, and build a Python binding for it: futhark-opencl --library test.fut build_futhark_ffi test From there you can import both the CFFI-generated module and the library to run your Futhark code even faster! import numpy as np import _test from futhark_ffi import Futhark test = Futhark(_test) res = test.test3(np.arange(10)) test.from_futhark(res)
https://futhark-lang.org/blog/2018-07-05-python-gotta-go-faster.html
CC-MAIN-2020-16
refinedweb
528
69.11
On Monday 09 April 2018 03:24:14 Christoph Hellwig wrote: > On Mon, Apr 09, 2018 at 12:10:09PM +0200, Pali Rohár wrote: > > Another example: > > > > fd = open("/a") > > link("/a", "/b") > > unlink("/a") > > > > Calling funlink for fd should unlink "/b" or it should fail? > > It should fail, as '/a' doesn't refer to name that is visible in the > namespace. > > > And another example: > > > > fd = open("/a") > > rename("/a", "/b") > > > > What should funlink do for fd now? > > remove the directory entry refering to '/b' as that is what fd refers > to. Advertising Why it should differ in these two cases? Calling /bin/ln /a /b followed by /bin/rm /a results in the same state as calling /bin/mv /a /b. This is something which works in POSIX systems. I think it is strange that new possible funlink call would work only if external applications uses /bin/mv and would fail if /bin/ln and /bin/rm are used. This is reason why I suggested two parameters funlink, it takes fd for unlinking and pathname which must contain same inode as fd. So when you call it with fd+"/b" it unlink "/b" without failing. -- Pali Rohár [email protected]
https://www.mail-archive.com/[email protected]/msg1659450.html
CC-MAIN-2018-17
refinedweb
201
71.65
Purpose WildFly provides EJB client API project as well as remote-naming project for invoking on remote objects exposed via JNDI. This article explains which approach to use when and what the differences and scope of each of these projects is. History Previous versions of JBoss AS (versions < WildFly 8) used JNP project () as the JNDI naming implementation. Developers of client applications of previous versions of JBoss AS will be familiar with the jnp:// PROVIDER_URL URL they used to use in their applications for communicating with the JNDI server on the JBoss server. Starting WildFly 8, the JNP project is not used. Neither on the server side nor on the client side. The client side of the JNP project has now been replaced by jboss-remote-naming project (). There were various reasons why the JNP client was replaced by jboss-remote-naming project. One of them was the JNP project did not allow fine grained security configurations while communicating with the JNDI server. The jboss-remote-naming project is backed by the jboss-remoting project () which allows much more and better control over security. Overview Now that we know that for remote client JNDI communication with WildFly 8 requires jboss-remote-naming project, let's quickly see what the code looks like. Client code relying on jndi.properties in classpath As you can see, there's not much here in terms of code. We first create a InitialContext () which as per the API will look for a jndi.properties in the classpath of the application. We'll see what our jndi.properties looks like, later. Once the InitialContext is created, we just use it to do a lookup on a JNDI name which we know is bound on the server side. We'll come back to the details of this lookup string in a while. Let's now see what the jndi.properties in our client classpath looks like: Those 2 properties are important for jboss-remote-naming project to be used for communicating with the WildFly server. The first property tells the JNDI API which initial context factory to use. In this case we are pointing it to the InitailContextFactory class supplied by the jboss-remote-naming project. The other property is the PROVIDER_URL. Developers familiar with previous JBoss AS versions would remember that they used jnp://localhost:1099 (just an example). In WildFly, the URI protocol scheme for jboss-remote-naming project is remote://. The rest of the PROVIDER_URL part is the server hostname or IP and the port on which the remoting connector is exposed on the server side. By default the http-remoting connector port in WildFly 8 is 8080. That's what we have used in our example. The hostname we have used is localhost but that should point to the server IP or hostname where the server is running. So we saw how to setup the JNDI properties in the jndi.properties file. The JNDI API also allows you to pass these properties to the constructor of the InitialContext class (please check the javadoc of that class for more details). Let's quickly see what the code would look like: That's it! You can see that the values that we pass to those properties are the same as what we did via the jndi.properties. It's upto the client application to decide which approach they want to follow. How does remoting naming work We have so far had an overview of how the client code looks like when using the jboss-remote-naming (henceforth referred to as remote-naming - too tired of typing jboss-remote-naming everytime ) project. Let's now have a brief look at how the remote-naming project internally establishes the communication with the server and allows JNDI operations from the client side. Like previously mentioned, remote-naming internally uses jboss-remoting project. When the client code creates an InitialContext backed by the org.jboss.naming.remote.client.InitialContextFactory class, the org.jboss.naming.remote.client.InitialContextFactory internally looks for the PROVIDER_URL (and other) properties that are applicable for that context (doesn't matter whether it comes from the jndi.properties file or whether passed explicitly to the constructor of the InitialContext). Once it identifies the server and port to connect to, the remote-naming project internally sets up a connection using the jboss-remoting APIs with the remoting connector which is exposed on that port. We previously mentioned that remote-naming, backed by jboss-remoting project, has increased support for security configurations. Starting WildFly 8, every service including the http remoting connector (which listens by default on port 8080), is secured (see for details). This means that when trying to do JNDI operations like a lookup, the client has to pass appropriate user credentials. In our examples so far we haven't passed any username/pass or any other credentials while creating the InitialContext. That was just to keep the examples simple. But let's now take the code a step further and see one of the ways how we pass the user credentials. Let's at the moment just assume that the remoting connector on port 8080 is accessible to a user named "peter" whose password is expected to be "lois". The code is similar to our previous example, except that we now have added 2 additional properties that are passed to the InitialContext constructor. The first is which passes the username (peter in this case) and the second is which passes the password (lois in this case). Of course the same properties can be configured in the jndi.properties file (read the javadoc of the Context class for appropriate properties to be used in the jndi.properties). This is one way of passing the security credentials for JNDI communication with WildFly. There are some other ways to do this too. But we won't go into those details here for two reasons. One, it's outside the scope of this article and two (which is kind of the real reason) I haven't looked fully at the remote-naming implementation details to see what other ways are allowed. JNDI operations allowed using remote-naming project So far we have mainly concentrated on how the naming context is created and what it internally does when an instance is created. Let's now take this one step further and see what kind of operations are allowed for a JNDI context backed by the remote-naming project. The JNDI Context has various methods that are exposed for JNDI operations. One important thing to note in case of remote-naming project is that, the project's scope is to allow a client to communicate with the JNDI backend exposed by the server. As such, the remote-naming project does not support many of the methods that are exposed by the javax.naming.Context class. The remote-naming project only supports the read-only kind of methods (like the lookup() method) and does not support any write kind of methods (like the bind() method). The client applications are expected to use the remote-naming project mainly for lookups of JNDI objects. Neither WildFly 8 nor remote-naming project allows writing/binding to the JNDI server from a remote application. Pre-requisites of remotely accessible JNDI objects On the server side, the JNDI can contain numerous objects that are bound to it. However, not all of those are exposed remotely. The two conditions that are to be satisfied by the objects bound to JNDI, to be remotely accessible are: 1) Such objects should be bound under the java:jboss/exported/ namespace. For example, java:jboss/exported/foo/bar 2) Objects bound to the java:jboss/exported/ namespace are expected to be serializable. This allows the objects to be sent over the wire to the remote clients Both these conditions are important and are required for the objects to be remotely accessible via JNDI. JNDI lookup strings for remote clients backed by the remote-naming project In our examples, so far, we have been consistently using "foo/bar" as the JNDI name to lookup from a remote client using the remote-naming project. There's a bit more to understand about the JNDI name and how it maps to the JNDI name that's bound on the server side. First of all, the JNDI names used while using the remote-naming project are always relative to the java:jboss/exported/ namespace. So in our examples, we are using "foo/bar" JNDI name for the lookup, that actually is (internally) "java:jboss/exported/foo/bar". The remote-naming project expects it to always be relative to the "java:jboss/exported/" namespace. Once connected with the server side, the remote-naming project will lookup for "foo/bar" JNDI name under the "java:jboss/exported/" namespace of the server. How does remote-naming project implementation transfer the JNDI objects to the clients When a lookup is done on a JNDI string, the remote-naming implementation internally uses the connection to the remoting connector (which it has established based on the properties that were passed to the InitialContext) to communicate with the server. On the server side, the implementation then looks for the JNDI name under the java:jboss/exported/ namespace. Assuming that the JNDI name is available, under that namespace, the remote-naming implementation then passes over the object bound at that address to the client. This is where the requirement about the JNDI object being serializable comes into picture. remote-naming project internally uses jboss-marshalling project to marshal the JNDI object over to the client. On the client side the remote-naming implementation then unmarshalles the object and returns it to the client application. So literally, each lookup backed by the remote-naming project entails a server side communication/interaction and then marshalling/unmarshalling of the object graph. This is very important to remember. We'll come back to this later, to see why this is important when it comes to using EJB client API project for doing EJB lookups (EJB invocations from a remote client using JNDI) as against using remote-naming project for doing the same thing. Summary That pretty much covers whatever is important to know, in the remote-naming project, for a typical client application. Don't close the browser yet though, since we haven't yet come to the part of EJB invocations from a remote client using the remote-naming project. In fact, the motivation behind writing this article was to explain why not to use remote-naming project (in most cases) for doing EJB invocations against WildFly server. Those of you who don't have client applications doing remote EJB invocations, can just skip the rest of this article if you aren't interested in those details. Remote EJB invocations backed by the remote-naming project In previous sections of this article we saw that whatever is exposed in the java:jboss/exported/ namespace is accessible remotely to the client applications under the relative JNDI name. Some of you might already have started thinking about exposing remote views of EJBs under that namespace. It's important to note that WildFly server side already by default exposes the remote views of a EJB under the java:jboss/exported/ namespace (although it isn't logged in the server logs). So assuming your server side application has the following stateless bean: Then the "Foo" remote view is exposed under the java:jboss/exported/ namespace under the following JNDI name scheme (which is similar to that mandated by EJB3.1 spec for java:global/ namespace):[app-name] app-name/module-name/bean-name!bean-interface where, app-name = the name of the .ear (without the .ear suffix) or the application name configured via application.xml deployment descriptor. If the application isn't packaged in a .ear then there will be no app-name part to the JNDI string. module-name = the name of the .jar or .war (without the .jar/.war suffix) in which the bean is deployed or the module-name configured in web.xml/ejb-jar.xml of the deployment. The module name is mandatory part in the JNDI string. bean-name = the name of the bean which by default is the simple name of the bean implementation class. Of course it can be overridden either by using the "name" attribute of the bean definining annotation (@Stateless(name="blah") in this case) or even the ejb-jar.xml deployment descriptor. bean-interface = the fully qualified class name of the interface being exposed by the bean. So in our example above, let's assume the bean is packaged in a myejbmodule.jar which is within a myapp.ear. So the JNDI name for the Foo remote view under the java:jboss/exported/ namespace would be: java:jboss/exported/myapp/myejbmodule/FooBean!org.myapp.ejb.Foo That's where WildFly will automatically expose the remote views of the EJBs under the java:jboss/exported/ namespace, in addition to the java:global/ java:app/ java:module/ namespaces mandated by the EJB 3.1 spec. So the next logical question would be, are these remote views of EJBs accessible and invokable using the remote-naming project on the client application. The answer is yes! Let's quickly see the client code for invoking our FooBean. Again, let's just use "peter" and "lois" as username/pass for connecting to the remoting connector. As you can see, most of the code is similar to what we have been seeing so far for setting up a JNDI context backed by the remote-naming project. The only parts that change are: 1) An additional "jboss.naming.client.ejb.context" property that is added to the properties passed to the InitialContext constructor. 2) The JNDI name used for the lookup 3) And subsequently the invocation on the bean interface returned by the lookup. Let's see what the "jboss.naming.client.ejb.context" does. In WildFly, remote access/invocations on EJBs is facilitated by the JBoss specific EJB client API, which is a project on its own. So no matter, what mechanism you use (remote-naming or core EJB client API), the invocations are ultimately routed through the EJB client API project. In this case too, the remote-naming internally uses EJB client API to handle EJB invocations. From a EJB client API project perspective, for successful communication with the server, the project expects a EJBClientContext backed by (atleast one) EJBReceiver(s). The EJBReceiver is responsible for handling the EJB invocations. One type of a EJBReceiver is a RemotingConnectionEJBReceiver which internally uses jboss-remoting project to communicate with the remote server to handle the EJB invocations. Such a EJBReceiver expects a connection backed by the jboss-remoting project. Of course to be able to connect to the server, such a EJBReceiver would have to know the server address, port, security credentials and other similar parameters. If you were using the core EJB client API, then you would have configured all these properties via the jboss-ejb-client.properties or via programatic API usage as explained here EJB invocations from a remote client using JNDI. But in the example above, we are using remote-naming project and are not directly interacting with the EJB client API project. If you look closely at what's being passed, via the JNDI properties, to the remote-naming project and if you remember the details that we explained in a previous section about how the remote-naming project establishes a connection to the remote server, you'll realize that these properties are indeed the same as what the RemotingConnectionEJBReceiver would expect to be able to establish the connection to the server. Now this is where the "jboss.naming.client.ejb.context" property comes into picture. When this is set to true and passed to the InitialContext creation (either via jndi.properties or via the constructor of that class), the remote-naming project internally will do whatever is necessary to setup a EJBClientContext, containing a RemotingConnectionEJBReceiver which is created using the same remoting connection that is created by and being used by remote-naming project for its own JNDI communication usage. So effectively, the InitialContext creation via the remote-naming project has now internally triggered the creation of a EJBClientContext containing a EJBReceiver capable of handling the EJB invocations (remember, no remote EJB invocations are possible without the presence of a EJBClientContext containing a EJBReceiver which can handle the EJB). So we now know the importance of the "jboss.naming.client.ejb.context" property and its usage. Let's move on the next part in that code, the JNDI name. Notice that we have used the JNDI name relative to the java:jboss/exported/ namespace while doing the lookup. And since we know that the Foo view is exposed on that JNDI name, we cast the returned object back to the Foo interface. Remember that we earlier explained how each lookup via remote-naming triggers a server side communication and a marshalling/unmarshalling process. This applies for EJB views too. In fact, the remote-naming project has no clue (since that's not in the scope of that project to know) whether it's an EJB or some random object. Once the unmarshalled object is returned (which actually is a proxy to the bean), the rest is straightforward, we just invoke on that returned object. Now since the remote-naming implementation has done the necessary setup for the EJBClientContext (due to the presence of "jboss.naming.client.ejb.context" property), the invocation on that proxy will internally use the EJBClientContext (the proxy is smart enough to do that) to interact with the server and return back the result. We won't go into the details of how the EJB client API handles the communication/invocation. Long story short, using the remote-naming project for doing remote EJB invocations against WildFly is possible! Why use the EJB client API approach then? I can guess that some of you might already question why/when would one use the EJB client API style lookups as explained in the EJB invocations from a remote client using JNDI article instead of just using (what appears to be a simpler) remote-naming style lookups. Before we answer that, let's understand a bit about the EJB client project. The EJB client project was implemented keeping in mind various optimizations and features that would be possible for handling remote invocations. One such optimization was to avoid doing unnecessary server side communication(s) which would typically involve network calls, marshalling/unmarshalling etc... The easiest place where this optimization can be applied, is to the EJB lookup. Consider the following code (let's ignore how the context is created): Now foo/bar JNDI name could potentially point to any type of object on the server side. The jndi name itself won't have the type/semantic information of the object bound to that name on the server side. If the context was setup using the remote-naming project (like we have seen earlier in our examples), then the only way for remote-naming to return an object for that lookup operation is to communicate with the server and marshal/unmarshal the object bound on the server side. And that's exactly what it does (remember, we explained this earlier). The EJB client API project on the other hand optimizes this lookup. In order to do so, it expects the client application to let it know that a EJB is being looked up. It does this, by expecting the client application to use the JNDI name of the format "ejb:" namespace and also expecting the client application to setup the JNDI context by passing the "org.jboss.ejb.client.naming" value for the Context.URL_PKG_PREFIXES property. Example: More details about such code can be found here EJB invocations from a remote client using JNDI When a client application looks up anything under the "ejb:" namespace, it is a clear indication (for the EJB client API project) to know that the client is looking up an EJB. That's where it steps in to do the necessary optimizations that might be applicable. So unlike, in the case of remote-naming project (which has no clue about the semantics of the object being looked up), the EJB client API project does not trigger a server side communication or a marshal/unmarshal process when you do lookup for a remote view of a stateless bean (it's important to note that we have specifically mentioned stateless bean here, we'll come to that later). Instead, the EJB client API just returns a java.lang.reflect.Proxy instance of the remote view type that's being looked up. This not just saves a network call, marshalling/unmarshalling step but it also means that you can create an EJB proxy even when the server isn't up yet. Later on, when the invocation on the proxy happens, the EJB client API does communicate with the server to carry out the invocation. Is the lookup optimization applicable for all bean types? In the previous section we (intentionally) mentioned that the lookup optimization by the EJB client API project happens for stateless beans. This kind of optimization is not possible for stateful beans because in case of stateful beans, a lookup is expected to create a session for that stateful bean and for session creation we do have to communicate with the server since the server is responsible for creating that session. That's exactly why the EJB client API project expects the JNDI name lookup string for stateful beans to include the "?stateful" string at the end of the JNDI name: Notice the use of "?stateful" in that JNDI name. See EJB invocations from a remote client using JNDI for more details about such lookup. The presence of "?stateful" in the JNDI name lookup string is a directive to the EJB client API to let it know that a stateful bean is being looked up and it's necessary to communicate with the server and create a session during that lookup. So as you can see, we have managed to optimize certain operations by using the EJB client API for EJB lookup/invocation as against using the remote-naming project. There are other EJB client API implementation details (and probably more might be added) which are superior when it is used for remote EJB invocations in client applications as against remote-naming project which doesn't have the intelligence to carry out such optimizations for EJB invocations. That's why the remote-naming project for remote EJB invocations is considered "deprecated". Note that if you want to use remote-naming for looking up and invoking on non-EJB remote objects then you are free to do so. In fact, that's why that project has been provided. You can even use the remote-naming project for EJB invocations (like we just saw), if you are fine with not wanting the optimizations that the EJB client API can do for you or if you have other restrictions that force you to use that project. Restrictions for EJB's If the remote-naming is used there are some restrictions as there is no full support of the ejb-client features. - No loadbalancing, if the URL conatains multiple "remote://" servers there is no loadbalancing, the first available server will be used and only in case it is not longer available there will be a failover to the next available one. - No cluster support. As a cluster needs to be defined in the jboss-ejb-client.properties this feature can not be used and there is no cluster node added - No client side interceptor. The EJBContext.getCurrent() can not be used and it is not possible to add a client interceptor - No UserTransaction support - All proxies become invalid if .close() for the related Initalcontext is invoked, or the InitialContext is not longer referenced and gets garbage-collected. In this case the underlying EJBContext is destroyed and the conections are closed. - It is not possible to use remote-naming if the client is an application deployed on another JBoss instance 5 Commentscomments.show.hide Jun 21, 2012 Kiran Anantha Hello, Are EJB invocations via JNDI allowed from remote server instances as well? Jul 29, 2013 Jan Martiska Hi, of course yes. See this article: Apr 12, 2017 valsaraj viswanathan Apr 12, 2017 valsaraj viswanathan Apr 12, 2017 valsaraj viswanathan
https://docs.jboss.org/author/display/WFLY8/Remote+EJB+invocations+via+JNDI+-+EJB+client+API+or+remote-naming+project
CC-MAIN-2018-13
refinedweb
4,085
52.39
Opened 9 years ago Closed 9 years ago Last modified 5 years ago #7285 closed (fixed) inspectdb outputs invalid python variable when it encounters a dash Description (last modified by ) When inspectdb encounters a field with a dash, it outputs it as a variable name which is invalid in python. For instance, when i ran it, I got the line: buy-back_amount = models.FloatField(null=True, blank=True) which I had to manually change to: buy_back_amount = models.FloatField(null=True, blank=True, db_column="buy-back") This was found in the SVN version of django with a mysql database. Attachments (1) Change History (10) comment:1 Changed 9 years ago by Changed 9 years ago by comment:2 Changed 9 years ago by I attached a patch. Are there any other characters that are allowed in a SQL table name that aren't in a Python variable name? comment:3 Changed 9 years ago by Pretty much anything is allowed. It might even start with a digit. What about python keywords? import re, keyword att_name = re.sub(r'[^a-z0-9_]', '_', row[0].lower()) if keyword.iskeyword(att_name): att_name += '_' elif att_name[0].isdigit(): att_name = '_' + att_name And this won't be enough. You would have to prevent clashes too to make it bulletproof (with other fields as well as Model methods and python __magic__). I don't think that's worth it. Stick to dash replacement if that's a common issue. And if you use perl code in your column names you will want to hack the output anyway. comment:4 Changed 9 years ago by comment:5 Changed 9 years ago by comment:6 Changed 9 years ago by comment:7 Changed 9 years ago by comment:8 Changed 9 years ago by comment:9 Changed 5 years ago by Milestone 1.0 deleted Replaces table name dashes with underscores.
https://code.djangoproject.com/ticket/7285
CC-MAIN-2017-09
refinedweb
313
65.93
From: Gennadiy Rozental (gennadiy.rozental_at_[hidden]) Date: 2004-07-13 14:24:57 > IMO, there's nothing wrong with using namespace std; in local > scope. Here's a fix: There is already workaround on top of the file dealing with gcc 3.3. I wonder why is that gcc does not put namely wcscmp into namespace std or why BOOST_NO_STDC_NAMESPACE is not defined? We need to figure out what is going on and then upddate above mentioned workaround. Gennadiy. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/07/67961.php
CC-MAIN-2021-49
refinedweb
101
70.19
Post your Comment Convert a String into an Integer Data Convert a String into an Integer Data In this section you will learn to convert a string type of data to integer type data. Converting string to integer and integer to string is a basic task in java programming language because Convert a String into an Integer Data Convert a String into an Integer Data  ... the functionality to convert the string type data into an integer type data. Code... is used to convert an integer type data into a string type data. Here Convert an Integer into a String Convert an Integer into a String In this section, you will learn to convert an integer... convert Integer number to string number. toString(): The toString() method integer to string integer to string i have to develop a program which convert integer into character from 1 to 10,000 if we input 1 then it give output 'one' n so on till 'ten thousand' kindly give me the codes plzplz sir........ Have retreive integer data from database retreive integer data from database i made a table named result... as an integer(rollno) and the marks associated with that must be displayed.i write...=lalit&database=mydb"); String url="select marks from student where rollno Convert Hexadecimal number into Integer to convert hexadecimal data into integer. The java.lang package provides the functionally to convert the hexadecimal data into an integer type data. Code Description: The following program takes an integer type data at the console Integer exception in java Integer exception in java The integer class is a wrapper for integer value... want to store an integer in a hash table, you have to "wrap" an Integer Convert Integer to Float Convert Integer to Float In this section, you will learn to convert an integer type data... into a float type data. Code Description: This program takes an integer number Convert Character into Integer helps you to convert the character data into an integer type data. It defines... Convert Character into Integer  ... into a integer. The java.lang package convert provides the facility to convert javascript integer to string conversion javascript integer to string conversion How to Javascript Integer to String Conversion <script type="text/javascript">..., this works result = a + b + c; printWithType(result); //123 string Conversion of String to Integer Conversion of String to Integer public class Test { public static void main(String[] args) { int countA = 0; int countB = 0...'; chr++) { (int) String "count"+chr = 0 Convert integer type data into binary, octal and hexadecimal Convert integer type data into binary, octal and hexadecimal... will learn how to convert an integer type data into binary, octal and hexadecimal... the string representation of unsigned integer value that will represent the data Convert Decimal to Integer Convert Decimal to Integer  ... into an integer. The java.lang package provides the functionality to convert... in converting a decimal type data into an integer. The parseInt() method gets Convert Integer to Double Convert Integer to Double In this section, you will learn to convert an integer... for converting an integer type data into a double. Code Description: This program helps find the given input is integer or string find the given input is integer or string simple coding for to check the given input value is integer or no.If the given value is integer display "Value is integer" otherwise "Value is not a integer"? class.   Converting a String to Integer in Java how to convert a String value to Integer in Java? The easiest way to convert... if it is not able to convert the supplied string into Integer, so you should... tutorial of "How to convert String to Integer in Java?": Here i want display integer number in a string of statement i want display integer number in a string of statement i want display integer number in a string of statement PHP Integer Data Type Learn PHP Integer Data type: In PHP integers can be decimal which is a normal... data type is platform dependent. Usually the size of integer is 2 million in 32... more than the supported range of integer then it would be converted to float data Java Integer class or integer value in String format. Converting primitive types into the corresponding... Java Integer class Java provides wrapper classes corresponding to each primitive data types String String write a program using string it should replace 'c'char to integer number as 1 in whole source How To Read Integer From Command Line In Java How To Read Integer From Command Line In Java In this section we will discuss about how an integer can be read through the command line. In Java all the inputs are treated as a String. So, input for other than the String is required get integer at run time and then convert using the function Integer.parseInt(variable). Here is the example code: String s =request.getParameter("myvariable"); Integer i...get integer at run time how to get integer value at run time in j2ee Reverse integer array program Reverse integer array program Been tasked with the following question: Write a method that accepts an integer array and returns a new array... a; } public static void main(String[] args) { int a [] = {2,4,6,8 Integer display problem Integer display problem class Bean{ int n1,n2; public Bean(){ } public Bean(int n1, int n2){ this.n1=n1; this.n2=n2; } public static void main(String arg[]){ Bean b1=new Bean(010,50); System.out.println convertig biginteger value into integer class variable into integer like BigInteger p = BigInteger.valueOf(10); now i want to convert p value into int . I searched so many times on google to solve... ConvertBigIntegerToInt { public static void main(String[] args Java integer to string Java integer to string Many times we need to convert a number to a string to be able to operate on the value in its string form. Converting numeric values to Strings with Java Initializing Integer Variable issue? Initializing Integer Variable issue? My program is supposed to take... to..." then take that phrase and convert it to its actual phone number. My problem is my integer variables (number0 - number6). It is forcing me to initialize Convert an Integer type object to a String object Convert an Integer type object to a String object In this section you will learn to convert the Integer type object to a String object using the Integer.toBinaryString Convert Float to Integer Convert Float to Integer In this section, we will learn to convert a float type data into an integer. Code Description: In this program, first of all we declare a float Post your Comment
http://www.roseindia.net/discussion/18796-Convert-a-String-into-an-Integer-Data.html
CC-MAIN-2016-07
refinedweb
1,115
52.7
I have a form mailer which I validate using Javascript. When a required text field is left empty or inappropriately filled the program places a message in the field and highlights it - setting the focus to the field at the same time. With Windows Explorer and Opera this works fine. The page window is repositioned to show the focused box. With Firefox no repositioning takes place. I therefore included an anchor in the hope that this would do the trick. Again, this worked for other browsers but not for Firefox. Here is the relevant code:- if (form.textarea.value.indexOf(" ") == - 1) { document.form.textarea.focus(); document.form.textarea.value = "Message to user"; document.form.textarea.select(); window.location.hash="ancrname"; return (false); } Is there a way of effecting the repositioning in Firefox? Any help would be greatly appreciated. There are currently 1 users browsing this thread. (0 members and 1 guests) Forum Rules
http://www.webdeveloper.com/forum/showthread.php?211211.html
CC-MAIN-2018-13
refinedweb
153
61.43
Text pictures of repair work, warranty inquiries, or pricing requests, all integrated into existing service and CRM. Repairs in a snap ID items on the go iCracked wants to streamline repair. Customers send MMS images of repair items to technicians, improving resolution accuracy.View Customer StoryTalk to Sales Fetch wants text to power their business. Text descriptions or MMS photos for quotes and available options on products.Talk to Sales Familiarity for Visual Estimates while increasing the read rate Sending MMS pictures makes describing an inquiry easier than verbal or text-only descriptions. Simplify a support agent or technician's process with photos and images. Reduce diagnosis time and cost while improving customer satisfaction with rich images. All the components needed to build nimble Visual Estimates Choose US and Canadian numbers to receive and send images. Create a five-digit short code to messages. Use Image Upload from any customer's phone. Use existing applications to send and receive images quickly. Employ two-way messaging to receive customer responses with personalized messages using an existing CRM. Pay $0.01 per to send and receive MMs images with no upfront fees or hidden costs. Simple code to build Visual Estimates onto a flexible and easy Twilio API from flask import Flask import twilio.twiml app = Flask(__name__) @app.route("/", methods=['GET', 'POST']) def hello_monkey(): """Respond to incoming calls with a simple text message.""" resp = twilio.twiml.Response() resp.message("Thanks! We'll get back to you with an estimate.") return str(resp) if __name__ == "__main__": app.run(debug=True) Create modern communications with free accounts and advice.
https://www.twilio.com/use-cases/visual-estimates
CC-MAIN-2017-26
refinedweb
266
50.53
Uploading files This is quite simple. HTML has a form input element of type "file" used to select the file. The form element looks like this <input type="file" name="upload_file"/>The "name" is the variable name that TurboGears will pass into the controller function. There are several ways to send a form to the server. File uploads must be sent using the "multipart/form-data" encoding, which is not the default. Here's an example form <form action="" method="POST" enctype="multipart/form-data"> filename: <input type="file" name="upload_file"/><br /> <input type="submit" value="Upload"/> </form>The form will be POSTed to the url "do_upload" and encoded with the right content-type. There are two form elements, one of type "file" and the other of type "submit". The "file" input will send the upload file as the form variable "upload_file". The "upload_file" variable will reference a FieldStorage object. It's most important attribute is 'file', which contains the text of the uploaded document. Another possibly useful attribute is 'filename', which is the name from the server. People sometimes use this to help figure out the file type, by looking at the extension. Here's an example controller showing this in action @expose() def do_upload(self, upload_file): return ("You sent %r, which contains %d bytes" % (upload_file.filename, len(upload_file.file.read()))) When I tried it on a file I got: You sent 'cv.txt', which contains 3315 bytes To upload two files, use two "file" input elements.
http://www.dalkescientific.com/writings/NBN/file_uploads.html
CC-MAIN-2018-34
refinedweb
248
56.35
A history lesson on the Xamarin.Mac target frameworks and their new names All .NET applications depend upon classes and methods from the Base Class Library (BCL) as part of their implementation — Strings, System.Console, System.IO.Path, System.Net.Http.HttpClient and so on. When targeting many platforms, such as Xamarin.iOS or “desktop” console, the choice of which BCL library to link against is generally fixed and there is not much to consider. With Xamarin.Mac things are not so simple, as it ships with two different supported “Target Frameworks” and also allows unsupported linking against the system mono as well. To understand how there came to be three options and why they are being renamed, a bit of a history lesson will be needed first. Fall of 2014 - Xamarin.Mac 1.10 was released which included a preview of the new “unified” profile. This unified profile allowed: - Fixing a number of systemic issues preventing 64-bit support - The unification of namespaces which were causing significant pain in sharing code between Xamarin.Mac and Xamarin.iOS projects - Many backwards incompatible API improvements Unlike classic, which built against the installed system mono, a copy of the BCL was included inside the Xamarin.Mac framework for unified. This reduced the need to upgrade mono and Xamarin.Mac in lockstep. Since this migration from classic to unified was API break, this BCL was based upon a modified version of the one powering Xamarin.iOS. It was significantly slimmer and since it did not include System.Configuration it was safe to enable linking as an option — which reduces final application size significantly by stripping dead code from the embedded libraries and the application itself. Shortly after 1.10 went out, forum posts and bug reports came back describing the pain users were hitting when trying to port their applications to unified. Many applications depended on nuget packages or other 3rd party libraries that were built against the normal “desktop” BCL. Convincing library maintainers to add an additional build, against this new unified mac target, across the entire nuget ecosystem turned out to be a “boil the ocean” level task. A short term bandaid was added, allowing linking to the system mono (like classic did), but we knew that for 2.0 we needed a better story, and fast. It needed to be as compatible as possible with the wide array of nuget packages customers were using, but reducing the library surface was still a major goal. Some corners of the mono class library only had partial implementations or often had breaking changes. Removing them from the “supported” BCL helped prevent customers from depending on one of these libraries, running into breakages after updating, and then feeling justifiably frustrated. The Xamarin.Mac 4.5 (XM 4.5) target framework was the result. It trimmed down the standard “desktop” net_4_5 mono profile, and did some creative bending of the truth in msbuild to convince everyone it was really the desktop BCL everyone knew and loved. It wasn’t a perfect solution, and the bending of the truth came back and caused a number of issues that needed stamping out, but overall it was a success. Once we had the XM 4.5 target framework, though, we needed a name for the originally designed one. Since it was powered by the same bits as Xamarin.iOS naming it “Mobile” seemed to make sense. Since Xamarin.Mac 2.0, the imperfections of the two names have become more and more clear. Customers tend to shy away from mobile when they see this dialog, as they are “building a desktop application, why would then need mobile”. XM 4.5 now supports assemblies with .NET 4.6 APIs, and the name will increasingly be out of sync with reality as time goes on. The mono profile it was based upon is now named net_4_x for a reason. Naming is one of the great two hard problems in computer science (along with cache invalidation and off by one errors) and it took multiple long discussions and e-mail threads to settle on the least bad suggested names. The planned new names and descriptions are: - Modern (Optimized profile also powering Xamarin.iOS) replace Mobile. - Full (Extended desktop API compatibility) replaces XM 4.5. As the adoption of netstandard continues to spread through the nuget community, it is expected that many applications will be able to transition to the Modern target framework. The Full target framework will still be available, however, when the extended library surface of Full (such as System.Configuration) is required.
https://medium.com/@donblas/a-history-lesson-on-the-xamarin-mac-target-frameworks-and-their-new-names-473d2731d887
CC-MAIN-2017-13
refinedweb
763
56.05
A pure Dart library for Mixpanel analytics. Add this to your package's pubspec.yaml file: dependencies: pure_mixpanel: ^1.0.6 You can install packages from the command line: with Flutter: $ flutter packages get Alternatively, your editor might support flutter packages get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:pure_mixpanel/pure_mixpanel.dart'; We analyzed this package on Jan 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: Flutter References Flutter, and has no conflicting libraries. Format lib/pure_mixpanel.dart. Run flutter format to format lib/pure_mixpanel.dart. Support latest stable Dart SDK in pubspec.yaml. (-20 points) The SDK constraint in pubspec.yaml doesn't allow the latest stable Dart SDK release. pure_mixpanel.dart.
https://pub.dartlang.org/packages/pure_mixpanel
CC-MAIN-2019-04
refinedweb
136
52.76
Reading env variables from a Tauri App “Build smaller, faster, and more secure desktop applications with a web frontend” is the promise made by Tauri. And indeed, it is a great Electron replacement. But being in its first days (the beta has just been released!) a bit of documentation is still missing, and on the internet there aren’t many examples on how to write code. Tauri is very light, as highlighted by these benchmarks. Tauri is very light, as highlighted by these benchmarks. Haven’t you started with Tauri yet? Go and read the intro doc! One thing that took me a bit of time to figure out, is how to read environment variables from the frontend of the application (JavaScript / Typescript or whichever framework you are using). The Command As usual, when you know the solution the problem seems easy. In this case, we just need an old trick: delegating! In particular, we will use the Command API to spawn a sub-process ( printenv) and reading the returned value. The code is quite straightforward, and it is synchronous: import { Command } from "@tauri-apps/api/shell"; async readEnvVariable(variableName: string): Promise<string> { const commandResult = await new Command( "printenv", variableName ).execute(); if (commandResult.code !== 0) { throw new Error(commandResult.stderr); } return commandResult.stdout; } The example is in Typescript, but can be easily transformed in standard JavaScript. The Command API is quite powerful, so you can read its documentation to adapt the code to your needs. Requirements There are another couple of things you should consider: first, you need to have installed in your frontend environment the @tauri-apps/api package. You also need to enable your app to execute commands. Since Tauri puts a strong emphasis on the security (and rightly so!), your app is by default sandboxed. To being able to use the Command API, you need to enable the shell.execute API. It should be as easy as setting tauri.allowlist.shell.execute to true in your tauri.json config file. Tauri is very nice, and I really hope it will conquer the desktops and replace Electron, since it is lighter, faster, and safer! Questions, comments, feedback, critics, suggestions on how to improve my English? Reach me on Twitter (@rpadovani93) or drop me an email at [email protected]. Ciao, R.
https://rpadovani.com/read-env-tauri
CC-MAIN-2022-40
refinedweb
382
67.25
Grab image from URL Above is the sample result of grabbing an image from URL to display in an ImageView control. Let’s roll! Create new Android Project Project Name: DownloadImageDisplay Build Target: Android 2.3.3 Application Name: Download Image Display Package Name: pete.android.study Create Activity: MainActivity Min SDK: 10 A – The layout Pretty much simple: + One text view to display the URL of the image, if + One image view to display the <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <TextView android: <ImageView android: </LinearLayout> B – Coding package pete.android.study; import java.io.InputStream; import java.net.URL; import android.app.Activity; import android.graphics.drawable.Drawable; import android.os.Bundle; import android.widget.ImageView; import android.widget.TextView; public class MainActivity extends Activity { // declare internal using controls private TextView txtUrl; private ImageView imgView; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); // set url text String url = ""; txtUrl = (TextView)findViewById(R.id.txtUrl); txtUrl.setText(url); // load image view control imgView =(ImageView)findViewById(R.id.imgView); // grab image to display try { imgView.setImageDrawable(grabImageFromUrl(url)); } catch(Exception e) { txtUrl.setText("Error: Exception"); } } private Drawable grabImageFromUrl(String url) throws Exception { return Drawable.createFromStream((InputStream)new URL(url).getContent(), "src"); } } C – Note – In order to make application to be able to connect to Internet and download files, you need to set uses-permission. Open file: AndroidManifest.xml and add this line: <uses-permission android:</uses-permission> – The above code is not the best implementation – Due to just giving an example of grabbing image, so I don’t care what kind of exception it might catch. You can find it yourselves, it’s pretty much easy to figure out. D – What I Learned? – Know how to get an image from an URL (without caching file) E – Final Words – The code seems to explain itself, and is very simple. – Don’t mind to write a comment whatever you like to ask, to know, or to suggest, recommend. – Hope you enjoy it! Cheers, Pete Houston i am run the code same as given above and app install in emulator successfully but image never load in emulator it show error : Exception It is going to generate network on main thread exception. Put the image fetching from url part inside an asynctask to avoid the exception. not working codee….. Hi tried running your code . I am unable to load the image . The line Drawable.createFromStream((InputStream)new URL(url).getContent(), “src”) throws an exception . I have added internt permision in manifest and internet is also working. Thanks in support of sharing such a nice thought, piece of writing is nice, thats why i have read it completely Incredible quest there. What happened after? Take care! I’m not sure exactly why but this website is loading extremely slow for me. Is anyone else having this issue or is it a problem on my end? I’ll check back later and see if the problem still exists. It works,thanks dude… Can I scale the image with its height and width fit to the screen? Thanks It doesn’t work with SDK versions 11 and above, any idea why? thx Great………… Finally a tight code for this url thingee, Great tut. how about setImgeUri with image… I have some problems that is how to get a field in if we don’t have their webservice? Thank in adv!! Great!!!Lv u guys:D Hi, The stuff works but is there any way in which we can increase the height and width of the image, that we are getting? I tried setting that in the xml layout file, but that seems not working Resize the image after downloading, using Hi xjaphx, Wonderful tutorial. Is there any way that I could save the image in the url. I would like to grab the image and save it onto a database for later use. Thanks. Here you are, buddy: Enjoy with my technical blog 🙂
https://xjaphx.wordpress.com/2011/06/10/grab-image-from-url/
CC-MAIN-2017-13
refinedweb
664
59.3
Writing Addons This is a complete guide on how to create addons for Storybook. Storybook BasicsStorybook Basics Before we begin, we need to learn a bit about how Storybook works. Basically, Storybook has a Manager App and a Preview Area. Manager App is the client side UI for Storybook. Preview Area is the place where the story is rendered. Usually the Preview Area is an iframe. When you select a story from the Manager App, the relevant story is rendered inside the Preview Area. As shown in the above image, there’s a communication channel that the Manager App and Preview Area use to communicate with each other. CapabilitiesCapabilities With an addon, you can add more functionality to Storybook. Here are a few things you could do: - Add a panel to Storybook (like Action Logger). - Add a tool to Storybook (like zoom or grid). - Add a tab to Storybook (like notes). - Interact/communicate with the iframe/manager. - Interact/communicate with other addons. - Change storybook’s state using it’s APIs. - Navigating. - Register keyboard shortcuts (coming soon). With this, you can write some pretty cool addons. Look at our Addon gallery to have a look at some sample addons. Getting StartedGetting Started Let’s write a simplistic addon for Storybook. We’ll add some metadata to the story, which the addon can then use. Add simple metadataAdd simple metadata We write a story for our addon like this: import React from 'react'; import { storiesOf } from '@storybook/react'; import Button from './Button'; storiesOf('Button', module).add('with text', () => <Button>Hello Button</Button>, { myAddon: { data: 'this data is passed to the addon', }, }); Add a panelAdd a panel We write an addon that responds to a change in story selection like so: // register.js import React from 'react'; import { STORY_CHANGED } from '@storybook/core-events'; import addons, { types } from '@storybook/addons'; const ADDON_ID = 'myaddon'; const PARAM_KEY = 'myAddon'; const PANEL_ID = `${ADDON_ID}/panel`; class MyPanel extends React.Component { state = { value: '' }; componentDidMount() { const { api } = this.props; api.on(STORY_CHANGED, this.onStoryChange); } componentWillUnmount() { const { api } = this.props; api.off(STORY_CHANGED, this.onStoryChange); } onStoryChange = id => { const { api } = this.props; const params = api.getParameters(id, PARAM_KEY); if (params && !params.disable) { const value = params.data; this.setState({ value }); } else { this.setState({ value: undefined }); } }; render() { const { value } = this.state; const { active } = this.props; return active ? <div>{value}</div> : null; } } addons.register(ADDON_ID, api => { const render = ({ active }) => <MyPanel api={api} active={active} />; const title = 'My Addon'; addons.add(PANEL_ID, { type: types.PANEL, title, render, }); }); register the addonregister the addon Then create an addons.js inside the Storybook config directory and add the following content to it. import 'path/to/register.js'; Now restart/rebuild storybook and the addon should show up! When changing stories, the addon’s onStoryChange method will be invoked with the new storyId. Note:Note: If you get an error similar to: ModuleParseError: Module parse failed: Unexpected token (92:22) You may need an appropriate loader to handle this file type. | var value = this.state.value; | var active = this.props.active; > return active ? <div>{value}</div> : null; | } | }]); It is likely because you do not have a .babelrc file or do not have it configured with the correct presets { "presets": ["@babel/preset-env", "@babel/preset-react"] } A more complex addonA more complex addon If we want to create a more complex addon, one that wraps the component being rendered for example, there are a few more steps. Essentially you can start communicating from and to the manager using the storybook API. Now we need to create two files, register.js and index.js,. register.js will be loaded by the manager (the outer frame) and index.js will be loaded in the iframe/preview. If you want your addon to be framework agnostic, THIS is the file where you need to be careful about that. Creating a decoratorCreating a decorator Let’s add the following content to the index.js. It will expose a decorator called withFoo which we use the .addDecorator() API to decorate all our stories. The @storybook/addons package contains a makeDecorator function which we can easily use to create such a decorator: import React from 'react'; import addons, { makeDecorator } from '@storybook/addons'; export default makeDecorator({ name: 'withFoo', parameterName: 'foo', // This means don't run this decorator if the notes decorator is not set skipIfNoParametersOrOptions: true, wrapper: (getStory, context, { parameters }) => { const channel = addons.getChannel(); // Our simple API above simply sets the notes parameter to a string, // which we send to the channel channel.emit('foo/doSomeAction', parameters); // we can also add subscriptions here using channel.on('eventName', callback); return getStory(context); } }) In this case, our component can access something called the channel. It lets us communicate with the panel (in the manager). It has a NodeJS EventEmitter compatible API. In the above case, it will emit the notes’ text to the channel, so our panel can listen to it. Then add the following code to the register.js. Notice how the storybook API itself has .on(), .off() and .emit() methods just like the EventEmitter. import React from 'react'; import addons from '@storybook/addons'; import { STORY_CHANGED } from '@storybook/core-events'; class MyPanel extends React.Component { onSomeAction = text => { // do something with the passed data }; onStoryChange = id => { // do something with the new selected storyId }; componentDidMount() { const { api } = this.props; api.on('foo/doSomeAction', this.onSomeAction); api.on(STORY_CHANGED, this.onStoryChange); } componentWillUnmount() { const { api } = this.props; api.off('foo/doSomeAction', this.onSomeAction); api.off(STORY_CHANGED, this.onStoryChange); } render() { const { active } = this.props; return active ? <div /> : null; } } // Register the addon with a unique name. addons.register('MYADDON', api => { // Also need to set a unique name to the panel. addons.addPanel('MYADDON/panel', { title: 'My Addon', render: ({ active, key }) => <MyPanel key={key} api={api} active={active} />, }); }); It will register our addon and add a panel. In this case, the panel represents a React component called MyPanel. That component has access to the storybook API. Then it will listen for events. You can listen for core storybook events, event by other addons or custom events created by index.js. Have a look at the above annotated code. In this example, we are only sending messages from the Preview Area to the Manager App (our panel). But we can do it the other way around as well. It also listens to another event, called onStory, in the storybook API, which fires when the user selects a story. We use that event to clear the previous notes when selecting a story. Multiple addons can be loaded, but only a single panel can be shown, the render function will receive an active prop, which is true if the addon is shown. It is up to you to decide if this mean your component must be unmounted, or just visually hidden. This allows you to keep state but unmount expensive renderings. Using the complex addonUsing the complex addon Add the register.js to your addons.js file. Then you need to start using the decorator: import React from 'react'; import { storiesOf } from '@storybook/react'; import withFoo from 'path/to/index.js'; import Button from './Button'; storiesOf('Button', module) .addDecorator(withFoo) .add('with text', () => <Button>Hello Button</Button>, { foo: { data: 'awesome', }, }); Styling your addonStyling your addon We use emotion for styling, AND we provide a theme which can be set by the user! We highly recommend you also use emotion to style your components for storybook, but it’s not a requirement. You can use inline styles or another css-in-js lib. You can receive the theme as a prop by using the withTheme hoc from emotion. Read more about theming. Re-using existing componentsRe-using existing components Wouldn’t it be awesome if we provided you with some common used components you could use to build out your own addon quickly and fit in right away? Good news! WE DO! We publish most of storybook’s UI components as a package: @storybook/components. You can check them out in our storybook (pretty meta right?). Addon APIAddon API Here we’ve only used a few functionalities of our Addon API. You can learn more about the complete API here. PackagingPackaging You can package this addon into a NPM module very easily. As an example, have a look at this package. In addition to moving the above code to a NPM module, we’ve set react and @storybook/addons as peer dependencies. Local DevelopmentLocal Development When you are developing your addon as a package, you can’t use npm link to add it to your project. Instead add your package as a local dependency into your package.json as shown below: { "dependencies": { "@storybook/addon-notes": "" } } Package MaintenancePackage Maintenance Your packaged Storybook addon needs to be written in ES5. If you are using ES6, then you need to transpile it.
https://storybook.js.org/docs/addons/writing-addons/
CC-MAIN-2019-26
refinedweb
1,458
50.63
tag:blogger.com,1999:blog-6907873803403737979.post8599472448509282628..comments2018-11-09T14:47:20.548+00:00Comments on C# Bits: Dynamic Data and Field Templates - A Second Advanced FieldTemplate ***UPDATED***[email protected]:blogger.com,1999:blog-6907873803403737979.post-80765554982436414762010-12-08T14:48:28.057+00:002010-12-08T14:48:28.057+00:00I think you need to look at the old Ajax History o...I think you need to look at the old <a href="" rel="nofollow">Ajax History</a> or <a href="" rel="nofollow">Filter History for .Net 4</a><br />SteveSteve, Question : when i switch to List Mode from a ...Hi, <br />Question : when i switch to List Mode from a Edit mode, the Cascade filter return to standard Value "All". <br />Any help [email protected]:blogger.com,1999:blog-6907873803403737979.post-43246616385167935462010-05-27T14:12:10.643+01:002010-05-27T14:12:10.643+01:00Hi Steve, i'm trying to use your Cascade Field...Hi Steve, i'm trying to use your Cascade FieldTemplate with EF but...<br /><br />"Unable to create a constant value of type '%TableName%'. Only primitive types ('such as Int32, String, and Guid') are supported in this context."<br /><br /... <br /><br />Can you help me?ArtCava there, if you go to the last article in this se...Hi there, if you go to the last <a href="" rel="nofollow">article</a> in this series the download is there.<br /><br />Steve :DSteve Code?????Download [email protected]:blogger.com,1999:blog-6907873803403737979.post-37625540317495348492009-02-20T10:56:00.000+00:002009-02-20T10:56:00.000+00:00Sorry Azamat, Ive had no luck with that as yet, wh...Sorry Azamat, Ive had no luck with that as yet, what Ive done is create custom cascading field templates. also have a look at my article "Dynamic Data – Cascading FieldTemplates"()<BR/><BR/>Steve :DSteve, can you help how we can fix issue with Edi...Please, can you help how we can fix issue with Edit mode. <BR/>The cascade controls do not reflect the field values (e.g. "reset" themselves).<BR/><BR/>[email protected]:blogger.com,1999:blog-6907873803403737979.post-55410806741928602632008-12-10T21:08:00.000+00:002008-12-10T21:08:00.000+00:00Yep that's a TODO: ;-)Hopefully I'll get around to...Yep that's a TODO: ;-)<BR/><BR/>Hopefully I'll get around to it soon.<BR/><BR/>Steve :DSteve article!One question: when I switch to "...Excellent article!<BR/>One question: when I switch to "Edit" mode, the cascade controls do not reflect the field values (e.g. "reset" themselves).<BR/>Is there a way to make them behave in a more natural manner?<BR/>Thanks in advance!Cristobal Galleguillos Katz I know all about the Mode property in DD this ...<BR/>Steve :DSteve"there is no way of Dynamic Data knowing what Fiel..."there is no way of Dynamic Data knowing what FieldTemplate to use in Read-Only mode."<BR/><BR/>Actually, there is a way to tell.<BR/>FieldTemplayUserControl has a property "Mode".<BR/><BR/>namespace System.Web.UI.WebControls<BR/>{<BR/> // Summary:<BR/> // Represents the different data-entry modes for a data-bound control or a particular<BR/> // field in ASP.NET Dynamic Data.<BR/> public enum DataBoundControlMode<BR/> {<BR/> // Summary:<BR/> // Represents the display mode, which prevents the user from modifying the values<BR/> // of a record or a data field.<BR/> ReadOnly = 0,<BR/> //<BR/> // Summary:<BR/> // Represents the edit mode, which enables users to update the values of an<BR/> // existing record or data field.<BR/> Edit = 1,<BR/> //<BR/> // Summary:<BR/> // Represents the insert mode, which enables users to enter values for a new<BR/> // record or data field.<BR/> Insert = 2,<BR/> }<BR/>}<BR/><BR/><BR/>Here is a snippet of what i'm using in my project:<BR/>Visible = (Mode != DataBoundControlMode.Insert);[email protected]
http://csharpbits.notaclue.net/feeds/8599472448509282628/comments/default
CC-MAIN-2019-35
refinedweb
648
50.84
This is a best practice I developed when working on Vue projects with a large code base. These tips will help you develop more efficient code that is easier to maintain and share. In my freelance career this year, I had the opportunity to work on some large Vue applications. The project I'm talking about has more than 12 Vuex stores, a large number of components (sometimes hundreds) and many views (pages). In fact, it was a very meaningful experience for me because I found many interesting patterns to make the code extensible. I also had to fix some of the mistakes that led to the famous spaghetti code problem. 🍝 So today, I'm going to share 10 best practices with you. If you want to deal with a large number of code bases, I suggest you follow these best practices. 1. Use slot s to make components easier to understand and more powerful I recently wrote an article about some important things you need to know about slots in Vue.js. It highlights how slots make your components more reusable and easier to maintain, and why you use them. 🧐 But what does this have to do with the large Vue.js project? One picture is worth a thousand words, so I will draw a picture for you. This is the first time I regret not using them. One day, I just need to create a pop-up window. At first glance, there is nothing really complicated, just including title, description and some buttons. So all I have to do is treat everything as an attribute. Finally, I used three properties to customize the component and issue an event when people click the button. Very simple! 😅 However, as the project developed, the team asked us to display many other new contents: form fields, different buttons (depending on which page to display), cards, footers and lists. I found that if I continue to use properties to expand this component, it seems to be OK. But God, 😩 I was wrong! The component quickly became too complex to understand because it contained countless subcomponents, used too many properties, and emitted a large number of events. 🌋 I've experienced a terrible situation where when you make a change somewhere, it eventually destroys the rest of the content on another page in some way. I made a Frankenstein monster, not a maintainable component! 🤖 However, it would have been better if I had relied on slots from the beginning. Finally, I refactor everything to provide this widget. Easy to maintain, faster to understand and more scalable! view is that, based on experience, projects built by developers who know when to use slots do have a great impact on their future maintainability. This reduces the number of events emitted, makes the code easier to understand, and provides greater flexibility when displaying any required components internally. ⚠️ As a rule of thumb, remember that you should start using slots from this point when you finally copy the properties of a child component in its parent component. 2. Organize your Vuex storage correctly Usually, new Vue.js developers start learning Vuex because they stumble upon the following two problems: - They either need to access the data of a given component from another component in the tree structure that is actually too far away, or - They need the data to survive the destruction of the component. That's when they create the first Vuex storage, understand the modules, and start organizing in the application. 💡 The problem is that the first mock exam module has no single mode to follow. But, 👆🏼 I strongly recommend that you consider how to organize them. As far as I know, most developers like to organize them by function. For example: - Verification Code - Inbox - set up As far as I am concerned, I find it easier to understand when they are organized according to the data model they extract from the API. For example: - Number of users - ranks - Message content - Widget - article Which one you choose is up to you. The only thing to remember is that well-organized Vuex storage will make the team more productive in the long run. It will also make it easier for newcomers to focus your ideas around your code base when they join your team. 3. Use Vuex Actions to call API and submit data Most, if not all, of my API calls take place in my vuex actions. You might wonder: why is it better to call here? 🤨 Just because most of them extract the data I need to submit in the vuex store. In addition, they provide encapsulation and reusability that I really like. There are other reasons why I do this: - If I need to get the front page of the article in two different places (such as blog and front page), I can call the appropriate scheduler with the correct parameters. The data will be extracted, submitted and returned without duplicate code except for the scheduler call. - If I need to create some logic to avoid extracting it when extracting the first page, I can do it in one place. In addition to reducing the load on the server, I am confident that it can be used anywhere. - I can track most of my Mixpanel events in these vuex actions, making the analysis code base really easy to maintain. I do have some applications where all Mixpanel calls are made separately in the operation. When I don't have to know what to track, what not to track, and when to send, 😂 How much happiness this way of working will bring to me. Mixpanel is a data tracking and analysis company that allows developers to track various user behaviors, such as the number of pages viewed by users, iPhone application analysis, Facebook application interaction, and Email analysis. A buried point analysis tool similar to Firebase. 4. Use mapState, mapGetters, mapMutations and mapAction to simplify the code base When you only need to access state / getters or call action/mutation inside a component, you usually do not need to create multiple calculated properties or methods. Using mapState, mapGetters, mapMutations and mapActions can help you shorten your code, simplify it by grouping, and master the whole situation from one place in your storage module. // need on these handy helpers is available in the official Vuex documentation. 🤩 5. Use API factory I usually like to create a helper for this.$api that can be called anywhere to get the API endpoint. At the root of the project, I have a folder containing all the classes (see one below). api ├── auth.js ├── notifications.js └── teams.js Each node groups all endpoints of its category. This is how I use plug-ins to initialize this pattern in my Nuxt application (which is very similar to the process in a standard Vue application). //); }; JavaScript for export default $axios => ({ forgotPassword(email) { return $axios.$post("/auth/password/forgot", { email }); }, login(email, password) { return $axios.$post("/auth/login", { email, password }); }, logout() { return $axios.$get("/auth/logout"); }, register(payload) { return $axios.$post("/auth/register", payload); } }); JavaScript for Now, I can simply call them in my component or Vuex operation, as shown below: export default { methods: { onSubmit() { try { this.$api.auth.login(this.email, this.password); } catch (error) { console.error(error); } } } }; JavaScript for 6. Use $config to access your environment variables (especially useful in templates) Your project may have some global configuration variables defined in some files: config ├── development.json └── production.json I like to access them quickly through the this.$config assistant, especially when I'm in a template. As usual, extending Vue objects is easy: // NPM import Vue from "vue"; // PROJECT: COMMONS import development from "@/config/development.json"; import production from "@/config/production.json"; if (process.env.NODE_ENV === "production") { Vue.prototype.$config = Object.freeze(production); } else { Vue.prototype.$config = Object.freeze(development); } 7. Follow a convention to write submission notes As the project evolves, you will need to periodically browse the submission history of components. If your team does not follow the same Convention to write their submission instructions, it will be difficult to understand the behavior of each team member. I always use and recommend the Angular commit message guidelines. In every project I work on, I follow it, and in many cases, other team members will soon find it better to follow it. Following these guidelines leads to more readable messages, making it easier to track submissions when viewing project history. In short, this is how it works: git commit -am "<type>(<scope>): <subject>" # Here are some samples git commit -am "docs(changelog): update changelog to beta.5" git commit -am "fix(release): need to depend on latest rxjs and zone.js" 8. Always freeze the version of the software package when producing the project I know... All software packages should follow semantic version control rules. But the reality is that some of them are not. 😅 In order to avoid damaging the whole project because one of your dependencies wakes up in the middle of the night, locking the versions of all software packages will reduce your morning work pressure. 😇 Its meaning is simple: avoid using versions starting with ^: { " } } 9. Use the Vue virtual scroll bar when displaying large amounts of data When you need to display many rows in a given page or need to cycle through a large amount of data, you may have noticed that the page is rendered quickly. To solve this problem, you can use Vue virtual scoller. npm install vue-virtual-scroller It renders only the visible items in the list and reuses components and dom elements to make them as efficient as possible. It's really easy to use and smooth! ✨ <template> <RecycleScroller class="scroller" : <div class="user"> {{ item.name }} </div> </RecycleScroller> </template> HTML for 10. Track the size of third-party packages When many people work on the same project, if no one pays attention to them, the number of installed packages will increase rapidly, which is incredible. To prevent your application from slowing down (especially when the mobile network slows down), I used the import fee package in Visual Studio Code. In this way, I can directly see the size of the imported module library from the editor, and see what happens when the imported module library is too large. For example, in a recent project, the entire lodash library was imported (about 24kB after compression). The problem is that only one method, cloneDeep, is used in the project. After identifying this problem in the imported fee package, we solved the problem in the following ways: npm remove lodash npm install lodash.clonedeep Then you can import the clonedeep function where needed: import cloneDeep from "lodash.clonedeep"; JavaScript for For further optimization, you can also use the Webpack Bundle Analyzer package to visualize the size of the Webpack output file through an interactive scalable tree view. Do you have any other best practices when dealing with large Vue code bases? Please let me know in the comments below or contact me on twitter @ rifki Nada. 🤠 About the author Nada Rifki Nada is a JavaScript developer who likes to use UI components to create interfaces with excellent UX. She specializes in Vue.js and likes to share anything that can help her front-end Web developers. Nada is also involved in digital marketing, dance and Chinese. Article: Yujiaao come from: Translated from:
https://programmer.help/blogs/6194e33f98ffe.html
CC-MAIN-2021-49
refinedweb
1,898
62.78
Member Since 3 Years Ago 2,870clark6595 left a reply on Base Table Or View Not Found: 1146 Grrrrrrrrrrrrrrrr!!!!! Helps when one adds the table to the same database that the application is reading from !!!!!! kclark6595 left a reply on Base Table Or View Not Found: 1146 @Christopher - Tried that already as an off chance. Its generating the correct table name even without, its just that for some reason, it cant see it in the databse kclark6595 started a new conversation Base Table Or View Not Found: 1146 Just added a new table to my database: unittypes My migration file (create_unittypes_table.php) is: <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateUnittypesTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('unittypes', function (Blueprint $table) { $table->integer('id'); $table->string('description',25); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('unittypes'); } } My Models file (Unittype.php) is: <?php namespace App\Models; use Illuminate\Database\Eloquent\Model; class Unittype extends Model { /** * The attributes that are mass assignable. * * @var array */ protected $fillable = [ 'id', 'description', ]; /** * The attributes excluded from the model's JSON form. * * @var array */ protected $hidden = [ ]; } The error I get is: SQLSTATE[42S02]: Base table or view not found: 1146 Table 't4r.unittypes' doesn't exist (SQL: select * from `unittypes` order by `description` asc) I have verified that the table unittypes exists and is populated in the database (I can even run the sql code generated in the error just fine) For whatever reason, the application is not seeing the table. It reads from teh other tables in the application just fine, just not that one. I am feeling that I have left out a line of code somewhere, but am clueless as to where. Keith kclark6595 started a new conversation RESOLVED: CURL Sorry, not sure how to remove the question, no sooner than I added it I found my type. kclark6595 started a new conversation Using Logged In Windows Username For Auth Hello all. Trying to accomplish what seems like a relatively simple task, but is beating me up pretty good. I have a Laravel 5.2 application that I would like to be able to authenticate a person to based on the username of the person logged into the computer. Kind of a modified single sign on...once they log into windows, then based on groups they belong to dictate the access they have to the application. I have already used adLDAP successfully to query our Active Directory servers for group information about a given user, I just need a way to retrieve the username of the person currently logged into the P.C. I know this can be done through .Net applications, but am hoping there is a way to also do this via a Laravel application. Thanks in advance for any help. Keith kclark6595 left a reply on Use Illuminate\Support\Facades\Input; Vs Use Input In Laravel 5.2 @premsaurav Thanks for that information. Helps a lot. Still relatively new to the Laravel scene, so little things like that kinda toss me a curve. kclark6595 started a new conversation Use Illuminate\Support\Facades\Input; Vs Use Input In Laravel 5.2 I am trying to figure out why, in Laravel 5.2, I need to use use Illuminate\Support\Facades\Input; in my controllers where I have several Laravel 5.1 projects that only have use Input; Any light on the subject? kclark6595 started a new conversation Using Sass/gulp On Laravel 5.2 (Windows) Without The Bloat Is there a way to use sass with Laravel 5.2 on windows without having to deal with the enormous bloat of nodejs?? (I installed nodejs, npm --global gulp and then npm install, and my laravel install went from approx 6000 files to almost 80,000 !!!! the node_modules folder went CRAZY !!) The only thing I want to use is gulp, but can one carve it and only it out? Thanks, Keith kclark6595 started a new conversation Gulp And Deduping Just learned the usage of gulp and Elixir. Love the flexibility it gives me in my CSS creation. Question I have surrounds concatenation of several scss files. Is there a way to dedupe during gulp? For example: first.scss contains: html { color: red; } body { font-size: 1em; } second.scss contains: body { font-size: 2em; } I would like the resulting .css file to be: html { color: red; } body { font-size: 2em; } instead of html { color: red; } body { font-size: 1em; } body { font-size: 2em; } Thanks for any advice kclark6595 started a new conversation Composer Breaks When Moving A View To A Subfolder More of a 'wonder why' than something broken since I can leave files where they currently are. In my main view, I @include a header view. the header view file resides in the same directory as my main view. I have a composer set up to load an array variable with data when the header view is called. And it works great. However, if I move my header to a partials folder located in the same directory as the main, and change the @include to point to the new location, when I run the application, it tells me I have an undefined variable in the header file. I have looked through what code I code find (or more remember where was) and I dont see anything that specifically references paths for the view in the composer. Is there somewhere I need to look to specifically tell the composers where the view is located? kclark6595 started a new conversation Fail On Reference To Specific Field Have a controller function: public function pageedit($id) { $page = Pages::where('pgeID','=',121)->get(); return view('admin.pageedit')->with('page',$page); } And a simple view: @extends('admin') @section('content') @include('/partials/adminpageheader') Edit Page { { $page } } @endsection When I run, I get: Intranet Web Control Panel (Pages) Admin Home - Admin Home Page Edit Page [{"pgeID":"121","pgeActive":"0","pgeContent":"1","pgeMainLevel":"6","pgeLevel":"1","pgeParent":"6","pgeOrder":"3","pgeTitle":"Employee Assistance Program","pgeHeader":"Null","pgeBody":Employee Assistance Program that all employees can access 24\/7. The website, has an amazing amount of information available. If you are looking for training, articles, resources and more, check them out. The use","pgeNav":null,"pgeContents":"Null","pgeCallOut":"Null","pgeEditor":"Null","pgeTitleDRAFT":"Null","pgeTemplateDRAFT":null,"pgeNavDRAFT":null,"pgeContentsDRAFT":"Null","pgeCallOutDRAFT":"Null"}] If I ty to callout JUST the page title (pgeTitle) in the view using {{ $page->pgeTitle }} I get undefined property. If I use {{ $page['pgeTitle'] }} I get an undefined index. I know its got to be a simple step I am missing, but dang if I can figure it out kclark6595 left a reply on Changing Required Fields On Login Page I did find that near the bottom of that same file there was a section that had 'email' hard coded like 'password' is here. After some playing around, I found that as long as I name my table field 'password', I could use whatever other field I wanted to authenticate against simply by changing the hard coding of the 'email' field at the bottom of the file. Unfortunately, there are too many places the 'password' was coded into sections like Guard to make it easy to change that. So, I changed the field name in my database from 'usrPassword' to 'password' and I am now able to authenticate using 'usrID' and 'password'. Thanks for pointing me in the right direction. kclark6595 started a new conversation Changing Required Fields On Login Page I have a table other than Users that I want to use for authentication. I updated my config\auth.php file to point to the new table and use the new model: <?php return [ 'driver' => 'database', 'model' => App\Models\tblUsers::class, 'table' => 'tblUsers', ]; Then I changed my login view: <form class="form-horizontal" role="form" method="POST" action="{{ url('/auth/login') }}"> <input type="hidden" name="_token" value="{{ csrf_token() }}"> <div class="form-group"> <label class="col-md-4 control-label">User ID</label> <div class="col-md-6"> <input type="text" class="form-control" name="usrID" value="{{ old('usrID') }}"> </div> </div> <div class="form-group"> <label class="col-md-4 control-label">Password</label> <div class="col-md-6"> <input type="password" class="form-control" name="usrPassword"> </div> </div> <div class="form-group"> <div class="col-md-6 col-md-offset-4"> <button type="submit" class="btn btn-primary">Login</button> </div> </div> </form> I changed email to usrID and password to usrPassword. I have finally got it to accept that the fields have been filled out, but now I am getting the message that it cant authenticate the user. Is there any documentation that explains modifiying the built in Auth methods to use different fields or am I better off building the login authorizations from scratch? kclark6595 left a reply on Laravel 5 On IIS 7.5 Figured it out. Missed the line to add IUSR with full control to the storage folder, not IIS_IUSRS Doh! kclark6595 started a new conversation Laravel 5 On IIS 7.5 Yes, I know it seems like this is a dead horse, and that it shouldn't be this hard. Then I remembered I was working with Microsoft products. I currently have been developing an application off line, using the php built in server. Ready to deploy to our IIS server. Search Laracasts, found several posts on doing so, but no matter what I try, I just get a plain while blank screen. No errors. No nothing. Only time I go any error was a RouteColletion error when the document root was set to c:\inpetpub\wwwroot\public and I tried opening v-webserver/public. Every other combo I tried, including adding teh index.php file, ended in a blank screen. The one thing that does differ from the instructions was that I did not "install" laravel using composer in my wwwroot folder, I copied my development folder over and changed the .env file accordingly. I dont think this should have caused any issues, but I am very new at Laravel. I have: I am out of options to try Below is my web.config file: <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <rewrite> <rules> <rule name="Imported Rule 1" stopProcessing="true"> <match url="^(.*)/$" ignoreCase="false" /> <conditions logicalGrouping="MatchAll"> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" ignoreCase="false" negate="true" /> </conditions> <action type="Redirect" url="/{R:1}" redirectType="Permanent" /> </rule> <rule name="Imported Rule 2" stopProcessing="true"> <match url="^" ignoreCase="false" /> <conditions logicalGrouping="MatchAll"> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" ignoreCase="false" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsFile" ignoreCase="false" negate="true" /> </conditions> <action type="Rewrite" url="index.php" /> </rule> </rules> </rewrite> </system.webServer> </configuration> kclark6595 left a reply on IIS7 Laravel 5 Ok, at my wits end. Have a Laravel 5 project I have been building offline, using the built in PHP server to test. Now I want to move to an IIS 7.5 server. Followed all the instructions above and all I get is a blank white screen. I have verified all my PHP settings via phpinfo. I have added the IIS_IUSR account giving full control to the storage folder I have tried my root folder as c:\inetpub\wwwroot and c:\inetpub\wwwroot\public I have verified index.php is my default doc I even tried deleting my web.config and importing using url_rewrite of IIS, no luck. The one place I differ from the instructions is the installation of Laravel. Instead of installing via composer, I am copying the main folder from my development location and altering the .env file accordingly. I was able to get one error message and that was error in RouteCollection.php when I had the folder root set to my public folder and I tried to use v-webserver/public as my URL. I wouldnt think it should be this hard to do, but then I remembered I am dealing with Microsoft. Any suggestions of where to check of what I missed? Thanks! kclark6595 left a reply on Fetching Data From A View I found that what I needed to use was a composer to load a collection on the view call kclark6595 left a reply on Fetching Data From A View What I have (very abbreviated) is: main.blade.php <html> @include('header') <body> @section('content') @stop </body> </html> home.blade.php @extends('main') @section('content') <h1>Home</h1> @endsection Now from the controller, I call the home.blade.php view Can I fetch the data thats needed in the header view from INSIDE the header file rather than having to pass the data from every view I call? Or do I need to simply create a global function that I call in every controller before I call a view? kclark6595 started a new conversation Fetching Data From A View Learning about Laravel. Know how to create a controller that builds an array of data from an Eloquent model, pass that array to a view and display it. What I am trying to do is create a main view that has the header and footer information all pages use, then use a child view to display the body. The main blade file has an include for a file that builds a dynamic navigation bar. How to I go about retrieving data from within the include file the main blade file that's called by a child? kclark6595 left a reply on Multiple Calls To Same Model Within A Single View Getting closer! in my Controller now, I have: $level1Navs = tblPages::where('pgeParent','=',0)->orderBy('pgeOrder','ASC')->orderBy('pgeTitle','ASC')->get(); var_dump($level1Navs); foreach($level1Navs as $level1Nav) { $navbar = array(); $navbar['pgeID'] = $level1Nav->pgeID; $navbar['pgeTitle'] = $level1Nav->pgeTitle; $navbar['pgeContent'] = $level1Nav->pgeContent; $navbar['pgeMainLevel'] = $level1Nav->pgeMainLevel; $navbar['level2'] = array(); $level2Navs = tblPages::where('pgeParent','=',$level1Nav->pgeID)->where('pgeActive','=',1)->orderBy('pgeOrder','ASC')->orderBy('pgeTitle','ASC')->get(); foreach($level2Navs as $level2Nav) { $navbar['level2']['pgeID'] = $level2Nav->pgeID; $navbar['level2']['pgeTitle'] = $level2Nav->pgeTitle; $navbar['level2']['pgeContent'] = $level2Nav->pgeContent; $navbar['level2']['pgeMainLevel'] = $level2Nav->pgeMainLevel; $navbar['level2']['level3'] = array(); $level3Navs = tblPages::where('pgeParent', '=', $level2Nav->pgeID)->where('pgeActive', '=', 1)->orderBy('pgeOrder', 'ASC')->get(); foreach ($level3Navs as $level3Nav) { $navbar['level2']['level3']['pgeID'] = $level3Nav->pgeID; $navbar['level2']['level3']['pgeTitle'] = $level3Nav->pgeTitle; $navbar['level2']['level3']['pgeContent'] = $level3Nav->pgeContent; $navbar['level2']['level3']['pgeMainLevel'] = $level3Nav->pgeMainLevel; } } } var_dump($navbar); There are 9 items off the first level, but when it hits the var_dump, I only get this: array (size=5) 'pgeID' => string '10' (length=2) 'pgeTitle' => string 'News' (length=4) 'pgeContent' => string '0' (length=1) 'pgeMainLevel' => string '10' (length=2) 'level2' => array (size=5) 'pgeID' => string '50' (length=2) 'pgeTitle' => string 'News Stories' (length=12) 'pgeContent' => string '1' (length=1) 'pgeMainLevel' => string '10' (length=2) 'level3' => array (size=0) empty Its like its only running through the foreach one time. Am I missing something simple? kclark6595 left a reply on Multiple Calls To Same Model Within A Single View That was my issue. I was brain blocking on how to get code such as the old straight line programming that you called when the output required it, to the new MVC style to where your data is fetched beforehand and then handed off to the view. Its not impossible to teach old dogs new tricks, we just take a bit (albeit quite a bit) longer. Thanks for the great responses. Cant wait to try them out tonight kclark6595 left a reply on Multiple Calls To Same Model Within A Single View I think I understand that. I would run the code in the controller, adding each level to the collection, then pass the collection to the view. I will give that a spin! Thanks kclark6595 started a new conversation Multiple Calls To Same Model Within A Single View Love the concept of MVC programming but very old school top down programmer here. I follow all the concepts of passing a model result set to a view for display, but when it comes to multiple calls and multiple recordsets within one view, this is whats tripping me up. I current have a simple webpage that has a dynamically built menu bar based on items in a database table. Currently, I use nested foreach statements to build my structure like so: $sql = 'select pgeContent, pgeTitle, pgeMainLevel, pgeID from tblPages where pgeParent=0 order by pgeOrder, pgeTitle'; $qryLevel1Nav = $conn->query($sql); $recLevel1Nav = $qryLevel1Nav->fetchAll(PDO::FETCH_ASSOC); if(!empty($recLevel1Nav)) { echo '<ul id="nav">'; foreach($recLevel1Nav as $rowLevel1Nav) { // // CODE TO WRITE <li> ENTRIES --- BLOCK REMOVED FOR BRIEVITY // // LEVEL 2 NAV STARTS $sql = 'select pgeContent, pgeTitle, pgeMainLevel, pgeID from tblPages where pgeParent='.$rowLevel1Nav['pgeID'].' and pgeActive=1 ORDER BY pgeOrder, pgeTitle'; $qryLevel2Nav = $conn->query($sql); $recLevel2Nav = $qryLevel2Nav->fetchAll(PDO::FETCH_ASSOC); if(!empty($recLevel2Nav)) { echo '<ul>'; foreach($recLevel2Nav as $rowLevel2Nav) { // // CODE TO WRITE <li> ENTRIES --- BLOCK REMOVED FOR BRIEVITY // // LEVEL 3 NAV STARTS $sql = 'select pgeTitle, pgeMainLevel, pgeID from tblPages where pgeParent='.$rowLevel2Nav['pgeID'].' and pgeActive=1 order by pgeOrder asc'; $qryLevel3Nav = $conn->query($sql); $recLevel3Nav = $qryLevel3Nav->fetchAll(PDO::FETCH_ASSOC); if(!empty($recLevel3Nav)) { echo '<ul>'; foreach($recLevel3Nav as $rowLevel3Nav) { // // CODE TO WRITE <li> ENTRIES --- BLOCK REMOVED FOR BRIEVITY // } echo '</ul>'; } $qryLevel3Nav->closeCursor(); // LEVEL 3 NAV ENDS } echo '</ul>'; } $qryLevel2Nav->closeCursor(); // LEVEL 2 NAV ENDS } } echo '</ul>'; } $qryLevel1Nav->closeCursor(); // LEVEL 1 NAV ENDS I am at a loss how I would do this using the MVC framework approach Any suggestions or suggested readings would be greatly appreciated. Keith kclark6595 started a new conversation 'tinker' Flavors Using git bash as my shell. I call tinker and get a screen much different than the one in the Eloquent 101 video. I am getting the Psy Shell. Where everything seems to work as indicated, the editing features are much different: no up arrow to repeat last command, no insert function for correcting typing errors, etc. What shell is tinker using in the video??
https://laracasts.com/@kclark6595
CC-MAIN-2019-35
refinedweb
2,971
52.19
An efficient implementation of integer sets. Since many function names (but not the type name) clash with Prelude names, this module is usually imported qualified, e.g. import Data.IntSet (IntSet) import qualified Data.IntSet as IntSet(min(n,W)). The expression (split x set) is a pair (set1,set2) where set1 comprises the elements of set less than x and set2 comprises the elements of set greater than x. split 3 (fromList [1..5]) == (fromList [1,2], fromList [4,5]) O(min(n,W)). Delete and find the minimal element. deleteFindMin set = (findMin set, deleteMin set) O(min(n,W)). Delete and find the maximal element. deleteFindMax set = (findMax set, deleteMax set)
https://downloads.haskell.org/~ghc/6.12.3/docs/html/libraries/containers-0.3.0.0/Data-IntSet.html
CC-MAIN-2017-09
refinedweb
113
71
15 February 2010 By clicking Submit, you accept the Adobe Terms of Use. Note: This article was created based on Flash Builder 4 beta. Minor changes in the description and code may be necessary before it can be applied to Flash Builder 4. For many types of games, the user experience depends on how many pixels you can have on the screen and how fast you can move them around. When animating large numbers of DisplayObject objects, such as MovieClip or Sprite objects, Adobe Flash Player may not perform well enough for effective game play. Flash Player must traverse the display object tree and compute the rendering output for each vector-based DisplayObject. This eats up CPU cycles and can be a real bottleneck, especially on lower-end machines. For games with many on-screen animations that can be pre-rendered into a bitmap, a technique known as blitting can provide a solution. Blitting is not the answer to every performance issue but it does enable smooth, consistent animation frame rates across a wide range of machines. The term blitting comes from the BitBLT routine created for Xerox Alto computers. BitBLT, pronounced "bit blit," stands for bit-block (image) transfer, a technique that takes several bitmaps and combines them into one bitmap. In Flash Player it is faster to copy bitmap pixels into one rendered bitmap than to render each DisplayObject separately. In this article, I describe the software blitting technique and provide sample code so you can apply it in ActionScript. A game is made up of graphical assets, for example a car on a racetrack or a tree in a forest. For this article these assets will be bitmaps. A group of bitmaps put together in a single image file is called a sprite sheet. For example, a sprite sheet might contain all the frames for an animation of a character walking. The term is derived from sprite, which, in the computer graphics world, is an image or animation integrated into a larger scene. Although the blitting technique can use various sources of bitmap data, this article will focus on sprite sheets. A sprite sheet can be a combination of all kinds of bitmaps of varying sizes. Assembling all the graphical assets in one (or a few) large image files reduces load time (it's faster to open and read one large file that contains 100 frames than it is to open and read 100 smaller files) and provides compression benefits. Typically sprite sheets will hold similarly sized bitmaps that form a sequence or animation around a particular game asset. For example, the sprite sheet used in this tutorial is made up of five columns and four rows of 40 x 40 pixel tiles, each containing a brown collector (see Figure 1). Before you can try out the example code, you'll need to complete these steps to set up the project in Flash Builder 4: You now have all of the sample code in Flash Builder 4, and you'll be able to run all of the examples. You can embed images in ActionScript through the use of the Embed metadata tag. (Read Embedding metadata with Flash for more information.) Once they're embedded, you can create an instance of the class and attach it to the display list, as in ActionScriptBlittingPart1.as. ActionScriptBlittingPart1.as package { import flash.display.Sprite; [SWF(width=480, height=320, frameRate=24, backgroundColor=0xE2E2E2)] public class ActionScriptBlittingPart1 extends Sprite { public function ActionScriptBlittingPart1() { addChild(new BrownCollector()); } [Embed(source="spritesheets/browncollector.png")] public var BrownCollector:Class; } } To run the first example, follow these steps: As a second step, you'll use the Bitmap and BitmapData Flash Player APIs to copy one tile (or frame) from the sprite sheet onto the screen. This is done by using the BitmapData.copyPixels() method, which copies the pixels of the input bitmap data onto the bitmap instance that is making the call. Central to blitting in ActionScript, the copyPixels() method also provides parameters to define the input bitmap region to be copied as well as how to define and merge alpha pixels. ActionScriptBlittingPart2.as package { import flash.display.Bitmap; import flash.display.BitmapData; import flash.display.Sprite; import flash.geom.Point; import flash.geom.Rectangle; [SWF(width=480, height=320, frameRate=24, backgroundColor=0xE2E2E2)] public class ActionScriptBlittingPart2 extends Sprite { public function ActionScriptBlittingPart2() { // Create input bitmap instance spritesheet = (new BrownCollector() as Bitmap).bitmapData; // Add a Bitmap to the display list that will copyPixels() to. canvas = new BitmapData(480, 320, true, 0xFFFFFF); addChild(new Bitmap(canvas)); rect = new Rectangle(0, 0, 40,40); // 1st Tile //** Section 1 ** // // rect = new Rectangle(40, 0, 40, 40); // 2nd Tile // rect = new Rectangle(80, 0, 40, 40); // 3rd Tile // ... // rect = new Rectangle(160, 120, 40, 40); // 20th Tile canvas.copyPixels(spritesheet, rect, new Point(0, 0)); //** END Section 1 **/ /** Section 2 ** // for (var i:int = 0; i < 20; i++) { rect.x = (i % 5) * 40; rect.y = int(i / 5) * 40; canvas.copyPixels(spritesheet, rect, new Point(i*10, 0)); // Section 3: // canvas.copyPixels(spritesheet, rect, // new Point(i*10, 0), null, null, true); } //** END Section 2 **/ } [Embed(source="spritesheets/browncollector.png")] public var BrownCollector:Class; public var canvas:BitmapData; public var spritesheet:BitmapData; public var rect:Rectangle; } } To see this in action, run ActionScriptBlittingPart2.as by right-clicking it in the Package Explorer and selecting Run Application. You'll see the first tile from the sprite sheet displayed in your browser (see Figure 2). Now that you can draw one tile, why not draw all of the tiles? To draw them all, follow these steps: The Section 2 code uses a for loop to draw each tile offset horizontally 10 pixels from the previous tile (see Figure 3). That doesn't look quite right, and this is where the alpha parameters of BitmapData.copyPixels() come into play. The last three parameters ( alphaBitmapData, alphaPoint, and mergeAlpha) provide different ways of handling alpha regions. Since the sprite sheet PNG already has alpha data inside the image, you don't need alphaBitMapData or alphaPoint. You simply need to turn on alpha merging by setting the last parameter, mergeAlpha, to true. To make this change: copyPixels()in Section 2: canvas.copyPixels(spritesheet, rect, new Point(i*10, 0)); copyPixels()in Section 3: canvas.copyPixels(spritesheet, rect, new Point(i*10, 0), null, null, true); Now that you know how to display the bitmap data from the sprite sheet, you're ready to animate it. The code in ActionScriptBlittingPart3.as animates the brown collector image and moves it to the location of any mouse click on the Stage. The basic idea is to use a timer that can be based on Flash Player frames, a timer, or a combination of the two. A typical approach is to use a combination of the ENTER_FRAME event and a call to getTimer() to control the speed of the animations across various computer environments. The code below is an excerpt from the ActionScript class ActionScriptBlittingPart3.as (line 62): /** * Handles the timer */ private function enterFrameHandler(event:Event):void { tickPosition = int((getTimer() % 1000) / framePeriod); if (tickLastPosition != tickPosition) { tickLastPosition = tickPosition; canvas.lock(); canvas.fillRect(canvasClearRect, 0x000000); render(); canvas.unlock(); } } The enterFrameHandler() method is fired on each ENTER_FRAME event. The code determines the number of seconds that have elapsed since the SWF started and divides this number by framePeriod, the period at which the game designer wants to render animation. Then if this value differs from the previous rendering frame event, the canvas is cleared and the animation is rendered. The timing in Flash Player (sometimes described as an elastic race track) can vary and the arrival of the ENTER_FRAME event can fluctuate significantly across different machines. Combining the event and timing check enables smooth animation at a consistent rate, even if a machine is running faster than the SWF frame rate. Also if the SWF frame rate is increased, the animation frame rate can be kept at a consistent lower rate, allowing the CPU to process other logic without rendering each frame. Either way, you'll need to strike a balance between rendering consistency and CPU utilization needs. If you push the frame rate too high, slower machines may be unable to handle it. When you run the ActionScriptBlittingPart3.as application, you should see a small circle rotating around the brown collector. The Flash Player MouseEvent.MOUSE_UP event provides the x and y coordinates on the Stage of the mouse click. Using the mouse click coordinates along with the collector's current x and y position, you can move the image to a different position on the canvas with each rendering. This gives the user the ability to move it around. The code below is an excerpt from the ActionScript class ActionScriptBlittingPart3.as (line 79) showing how the animation is moved around with blitting: /** * Render any bitmap data. */ private function render():void { rect.x = (currentTile % 5) * 40; rect.y = int(currentTile / 5) * 40; collectorX += (destX-collectorX-20)/5; collectorY += (destY-collectorY-30)/5; canvas.copyPixels(spritesheet, rect, new Point(collectorX, collectorY), null, null, true); currentTile = ++currentTile % 20; } /** * Used to move the animation around. */ private function mouseUpHandler(event:MouseEvent):void { destX = event.stageX; destY = event.stageY; } } The mouseUpHandler() method stores the x and y coordinates of the mouse click as the destination. Later the render() method determines a nonlinear delta between the collector's current position and the destination position. The delta is added back into the current position for a new position, which is then provided to the copyPixels() method as the blitting position. If you click in the Stage near the running animation, the collector will move towards the location of your click. The final example integrates multiple sprite sheets on a single Stage. In the previous example, you needed to store information about where to place the bitmap data in order to move the animation. When combining animations, you will need to keep track of more than just position. In addition to position, you may need to maintain and process depth level (to determine what bitmap is shown on top when two bitmaps overlap), animation state changes, differing animation frame rates, collision detection, and more. In ActionScriptBlittingPart4.as, the collector will be on the lowest depth level. Randomly created colored gels fall from the top of the screen. If a gel collides with a collector, a third animation is run to show the gel melting on the collector. You probably have created animations in Flash that involve placing multiple objects on the Stage. You may have also used the mx.effects package to move or rotate an object. If objects overlapped, the z-index (depth in the display list) determined the order in which objects stacked up. While blitting, there is only one object being displayed: the destination bitmap. Since you are handling the rendering yourself, you will also need to keep track of the depth level of your objects and make sure everything is copied in the correct order. To keep the falling colored gel animations on top of the collector, you must manage the order of blitting. In the render() method in ActionScriptBlittingPart4.as, all gel blitting (line 138 and 144) is done after the collector has been blitted (line 110). Each colored gel is created at a random time in the enterFrameHandler() method. When a colored gel is created, createGel() sets its initial properties including a random x position, a zero y position, the default state, a zero meltFrame, and a unique name. The gel is then saves in gels, a Dictionary instance that the render() method loops through: /** * Create a gel */ private function createGel():void { var gel:Object = new Object(); gel.posX = ((Math.random() * 0xffffff) % 280) + 20 gel.posY = 0; gel.state = "animate"; gel.meltFrame = 0; gel.name = "gel" + gelCount++; gels[gel.name] = gel; } The logic for moving the colored gel and changing to a melt state is located in the render() method: /** * Render any bitmap data. */ private function render():void { rect.x = (currentTile % 5) * 40; rect.y = int(currentTile / 5) * 40; collectorX += (destX-collectorX-20)/5; collectorY += (destY-collectorY-30)/5; canvas.copyPixels(spritesheetCollector, rect, new Point(collectorX, collectorY), null, null, true); // Render Gel at half the frame rate, to slow it down if (currentTile % 2 == 1) { rect.x = ((currentTile-1) % 5) * 40; rect.y = int((currentTile-1) / 5) * 40; } for each (var gel:Object in gels) { // Hit Check 5 px within Y and X if (Math.abs(gel.posY - collectorY + 6) < 14 && Math.abs(gel.posX - collectorX) < 10) { gel.state = "melt"; } if (gel.state == "melt") { // Clear out if done melting if (gel.meltFrame < 20) gel.meltFrame++; else { delete gels[gel.name]; continue; } rect.x = (gel.meltFrame % 5) * 40; rect.y = int(gel.meltFrame / 5) * 40; canvas.copyPixels(spritesheetGelMelt, rect, new Point(collectorX-1, collectorY-12), null, null, true); continue; } else { canvas.copyPixels(spritesheetGel, rect, new Point(gel.posX, gel.posY), null, null, true); } gel.posY += 3; if (gel.posY > 320) { delete gels[gel.name]; continue; } } currentTile = ++currentTile % 20; } The gels are animated at half the rate of the collector by resetting the rect value with currentTile-1 for odd numbered frames. Next, the code loops over all colored gels on the screen and checks for a collision with the collector. If there is a collision, it changes the state to "melt". In this state, render() uses the colored gel melt sprite sheet and runs off its own meltFrame count for 20 frames. Also when a collision has occurred the collector's x and y position are used for the melting animation, so it will follow the collector if it moves away. If there is no collision, the gel is moved down by three pixels. When its y position goes past 320, it is removed. As you see, with blitting you have to handle all of the animation logic. In this article, you've learned how to create a software blitting engine. You can use these techniques in other Flash Player rendering scenarios. You may want to explore different methods of creating source bitmap data used in the blitting process. For example, you can convert DisplayObject animation frames into a Bitmap and cache them into an array. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
http://www.adobe.com/devnet/flex/articles/actionscript_blitting.html?devcon=f8
CC-MAIN-2015-32
refinedweb
2,374
57.06
Hi, I have this following method, wanting to add a string to a std stream the function toString() returns a pointer to a string. the question is, when I add it to a stream, does it create a copy or add the current pointer, and therefore, what is the correct time to release the allocated memory of this pointer? I just don't get this whole streaming thing... Code://Edge.h class Edge { char* toString() const; }; #endif //EdgeNode.h class EdgeNode { friend std::ostream& operator << (ostream& out,const EdgeNode &eNode); }; //EdgeNode.cpp #include EdgeNode.h ostream& operator << (ostream &out, const EdgeNode &eNode) { char *p=eNode.m_edge.toString(); return out << p; delete [] p; } //EdgeList.h class EdgeList { friend std::ostream& operator << (ostream& out,EdgeList eList &); }
http://cboard.cprogramming.com/cplusplus-programming/115652-ostream.html
CC-MAIN-2015-14
refinedweb
124
67.55
Ulrich von Zadow wrote: > Diego Biurrun schrieb: >>. > > Extern "C" is, __STDC_CONSTANT_MACROS isn't standard C++ stuff. It's not that simple. C99 section 7.18.4, footnote 220 says C++ implementations should define these macros only when __STDC_CONSTANT_MACROS is defined before <stdint.h> is included. The latest C++ draft standard (N2134), section 18.3 talks about integer typedefs: 18.3 Integer types 18.3.1 Header <cstdint> synopsis [...] The header also defines numerous macros of the form: INT[FAST LEAST]{8 16 32 64}_MIN [U]INT[FAST LEAST]{8 16 32 64}_MAX INT{MAX PTR}_MIN [U]INT{MAX PTR}_MAX {PTRDIFF SIG_ATOMIC WCHAR WINT}{_MAX _MIN} SIZE_MAX plus function macros of the form: [U]INT{8 16 32 64 MAX}_C The header defines all functions, types, and macros the same as C99 subclause 7.18. 18.3.2 The header <stdint.h> The header behaves as if it includes the header <cstdint>, and provides sufficient using declarations to declare in the global namespace all type names defined in the header <cstdint>. This doesn't explicitly mention __STDC_CONSTANT_MACROS, although it could be inferred from the "same as C99" language to be required. A request for clarification on this matter has apparently been accepted: To be honest, that doesn't make things much clearer to me. > From my perspective as a C++ coder, an FAQ entry for > __STDC_CONSTANT_MACROS would be enough. I'm glad you can accept that. -- M?ns Rullg?rd mans at mansr.com
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-May/035351.html
CC-MAIN-2014-42
refinedweb
248
59.5
Hello, I am not a very experinced BGP in Junos system and would require some help. In my case, I need to export some usally IP prefixes to my provider and also need to export some /32 IPs to get the provider to blackhole them. As the provider only provide one Peer IP to accept both the IP prefixes and the /32 IPs, so that I will need to export 2 policies to them, one is for the IP prefixes, and another for the /32 IPs. Then the problem is once I add both the policies in the export term, I found that only the policy in the first place will aonnonce to the provider. Here is the config from my MX router, MX80# show protocols bgp group uplink_1 type external; neighbor 64.x.x.1 { description "[peer-as: 111 cust: 64.x.x.x/30 peer: 64.x.x.x/30 key: ]"; local-address 64.x.x.2; import [ no-default no-rfc1918-dsua bgp-in ]; export [ no-rfc1918-dsua bgp-out provider_blackhole ]; peer-as 111; } [edit] You see above, that bgp-out will export the IP such as many /24 prefixes from my AS, and the provider_blackhole will export /32 IPs for blackhole. If I get the export term order as bgp-out before provider_blackhole,the IP prefixes such as /24 will annonce to the provider, if I get the export term order as provider_blackhole before bgp-out ,the /32 IPs will annonce to the provider for blackhole. Which I will need both the IP prefixes and the /32 IPs annonce to the provider. But seems now only the export will work only with the first policy in the export policies order. Anybody know how to get them both work at the same time? Thank you very much. Can you include the policy statements you are using ? I'm guessing you are using a reject at the end of both policies, if so. After the reject the other policy will not be used. What I would do is to include a Blackhole term in the policy just like @rsuraj suggested Yes, there are reject command at each end of the policy so break to active the 2nd policy. Finally, I setup a new policy to including all terms as rsuraj said. Thank you all. You are welcome 🙂 Using a policy chain like that on import/export works, but it will stop processing as soon as it matches the first terminating action (accept / reject). If a terminating action is found then all processing on that route stops, it doesn't go to the next policy. If you want to use a policy chain like that, you need to make sure that the policies earlier in the chain don't have a default action set. Then you need to make sure that the last policy in the chain has the proper default action you want. If you don't set one then the default BGP behavior will apply. They way you configured the policies with a default reject is what you would do when you are using a policy as a subroutine (calling a policy from within a policy). When used as a subroutine you absolutely must be sure that you set a default action or the subroutine will almost certainly pick up more than you wanted. I initially built our configurations around the policy chain method. Over time I found that people would stick things into an existing policy in a chain that was only necesary for one peer, but because the policy was used on multiple peers it started to make a mess of what was being sent to the different peers. I've since started shifting to using policy subroutines, with a unique policy per peer that simply references the subroutine policies. If someone needs to update something for a peer then they should simpy update that peer's policy, not the subroutine that affects multiple peers. Cheers! -Chad Please share some example configuration for multiple FBF for single IP address (Single VLAN)
https://community.juniper.net/answers/communities/community-home/digestviewer/viewthread?GroupId=25&MID=70292&CommunityKey=18c17e96-c010-4653-84e4-f21341a8f208&tab=digestviewer
CC-MAIN-2021-10
refinedweb
676
68.1
Hello, is there a way to fill a histogram with the FillRandom method if the function involved is not as simple as in the example below? I am thinking of something like : def func(x, y, p1, p2 , p3 …, pn): return z = … where func is given to a TF2 object xy = TF2( ‘xy’, ‘func’, x0, x1, y0, y1) Do you have any suggestions? Example of working code used above in the text: # Upper and lower bounds of the 2D histogram and the 2D function: x0 = 0. x1 = 100. y0 = 0. y1 = 100. # Simple 2D function to fill the histogram : xy = TF2( ‘xy’, ‘(x*y)’, x0, x1, y0, y1) # Create and fill histogram : bin_no = 11 h3 = TH2D(“h3”, “fill the boxes”, bin_no, x0, x1, bin_no, y0, y1) n_entries = 1000 h3.FillRandom(“xy”, n_entries)
https://root-forum.cern.ch/t/pyroot-fillrandom-method-with-complicated-function/27029
CC-MAIN-2022-33
refinedweb
132
64.58
I am trying to solve this one at the moment : Program is working pretty well as you can see here : I think, I wrote the code pretty fine. For example if I set limits as 13-100, it gives correct result. 13-100 >> As you can see, 97 is the really longest chain starter. But when I put limit as 1000000, it says "longest : 35655" while answer is 837799. I don't really get it. For example i run it 291-333, It says 291 but as you can see, longest one is 327 with 144. Is it about my code or Is it something else? Code here: #include <stdio.h> #include <math.h> int main() { int i,j=13,chaincounter=1,longest=1; while(j<1000000){ i=j; chaincounter=1; // Pseudo code: // while i is not 1: // i = i divided by 2 when i is even // i = i multiplied by 3 plus 1 when i is odd // increment chain counter if(chaincounter > longest){longest=j;} j=j+2; } printf("longest chain starter is %d",longest); getch(); } Waiting here for your help, thanks from now! And also, I wonder when will I start making some 2D games or some graphical stuff because I don't understand, I am solving math problems all day and there isn't anything about graphics, I don't know how to go out of MSDOS command console. This post has been edited by Skydiver: 10 February 2014 - 07:25 PM Reason for edit:: Renamed title to be more descriptive
http://www.dreamincode.net/forums/topic/339746-collatz-conjecture-project-euler/
CC-MAIN-2016-30
refinedweb
254
77.67
Recent Notes Displaying keyword search results 1 - 9.... Sample code for UDP client and server in C. The server simply echos back the client message. The client stays in "receive" loop to demonstrate the connection-less nature of the UDP protocol. Server code: #include <stdio.h> #include <stdlib.h> #incl...Client code: #include <stdio.h> #include <stdlib.h> #incl...Try it out: Start the server with: ./udpserver 8888Send message to server: ./udpclient localhost 8888 "Hi, it's me! "Server console displays: Received from 127.0.0.1:41776: Hi, it's me!UDP is connectionless. Send a message from a second client to the first client: $ ./udpclient localhost 41776 "From client 2"First client console displays: $ ./udpclient localhost 8888 "Hi, it's me! " Re...It should be noted that the distinction between client and server is blurry. The only major... A simple single threaded echo server: #include <stdio.h> #include <stdlib.h> #incl... ...... A simple socket client in C. #include <stdio.h> #include <stdlib.h> #incl......
http://www.xinotes.net/notes/keywords/code/int/port/
CC-MAIN-2015-18
refinedweb
166
73.54
Fast ASP.NET Websites, authored by Dean Hume, enables you to learn different ways to enhance the performance of ASP.NET and MVC applications. The book is divided into 12 chapters spread across 3 parts (defining performance, General performance best practices and ASP.NET-specific techniques). The first chapter examines the need for optimization along with a coverage of various impact scenarios such as business, search engine ranking, mobile user and environmental. The author examines the steps required to optimize a website, with a performance cycle diagram, besides pointing out where you need to optimize. He points out core factors such as profile, identity, implementation and monitoring which will be discussed in the upcoming chapters of the book. Hume briefly examines the concept of front-end optimization, which he considers as a golden performance rule. He also suggests developers optimize the back-end code as it will play an important role in high performance applications. In chapter 2, you will learn the basics of HTTP, the concept of empty and primed caches, in addition to the various tools to interpret waterfall charts and diagrams. He specifically points out the use of the F12 developer hot key. Towards the end of this chapter, the author outlines the 14 golden rules for faster front-end performance, which were originally coined by Steve Souders, author of High Performance Web Sites. For instance, rule 3 advises you to add an “expires” header and rule 9 prompts you to reduce DNS lookups. The book than moves to look at general performance best practices. Chapter 3 examines the concept of compression and its various types such as Gzip, Deflate, and SDCH. The author has covered the need for compression, its pros and cons and the use of Accept-Encoding. The chapter also examines the steps required to add compression via IIS and its web.config file. Hume also discusses some of the other methods such as the use of a few NuGet packages and custom-written libraries. However, he doesn't provide specific details about these libraries and strongly advises you not to use any of them. Chapter 4 talks about HTTP, IIS caching and the related web.config settings in addition to a few other considerations such as file renaming for instant cache updates. Towards the end of this chapter, the author examines the concept of output caching using a sample application with the help of comprehensive explanations and relevant source code. Chapter 5 examines the concept of minification and bundling with special reference to the new features in ASP.NET 4.5. It also includes coverage of the steps required to utilize bundling in ASP.NET MVC and Web Forms with the help of screenshots and supported source codes. In chapter 6, you will learn where to position CSS and JavaScript to achieve the best performance with the help of meaningful diagrams. Hume examines rendering issues which might occur when the order of styles and scripts is modified, along with coverage of HTML5 browser support, asynchronous JavaScript, web workers and application caches using both MVC and Web Forms applications. The author has discussed both online and command line image optimization tools such as Smush.it, Kraken and pngcrush in Chapter 7. You will also learn the use of the Image Optimizer Visual Studio extension, data URIs and the role of image dimensions in the performance of applications, with the help of relevant HTML tags. The next chapter discusses the use of ETags and the relevant steps to remove them in ASP.NET Web Forms and MVC applications. The author displays the results of the Yahoo! YSlow performance score before and after removing ETags with the help of real time screenshots. Chapter 9 examines the concept of Content Delivery Networks and domain sharding. The chapter offers a comparison of results before and after the activation of CDN. It also provides a list of reliable CDN providers. The final section of the book examines ASP.NET-specific techniques. Chapter 10 and 11 provide a detailed overview of the various steps required to tweak the performance of ASP.NET MVC and Web Forms applications. The author discusses the following concepts: - Release mode and debug mode - Use of favicon and code profiler - web.config settings - Difference between Response.Redirect and Server.Transfer The last chapter provides a brief overview of server side caching, System.Runtime.Caching namespace and distributed caching. Hume also demonstrates the steps required to apply caching to the sample Surf Store application. The author has uploaded two different versions of the project created for the purpose of this book that enable developers to assess the performance of the application before and after the application of optimization. The book includes two appendices that examine the steps required to set up a local server with IIS and additional resource links mentioned throughout the book. Fast ASP.NET Websites will be useful for those developers who would like to optimize ASP.NET applications for enhanced performance. In short, the book is a ready reckoner for emerging .NET developers. I would also though consider it a must read for all ASP.NET developers since visitors are more inclined to stay on your site if it loads faster. You can download a free sample chapter of the book, and purchase a copy from Manning Publications. InfoQ had a chat with Hume to find out more about the book. InfoQ: What prompted you to write Fast ASP.NET Websites? After visiting conferences such as Velocity, which are aimed at web performance, I realized that the .NET community was in need of a back to basics guide on web performance for ASP.NET applications. This book offers a chapter by chapter guide to improving the performance of your application. Whether you are a seasoned developer or just starting out - the fundamentals covered in this book will help you build fast websites. InfoQ: In Chapter 2, you talk about caches. Does it play a crucial role in ASP.NET? Chapter 2 runs through the basic tools and skills that you need to know in order to start analyzing your website. One of the areas that it covers is the basics of HTTP, including HTTP caching, which is an important part of the web, not just for ASP.NET developers, but for anyone developing for web in any language! InfoQ: You talk about performance rules in the book. Is it necessary for a developer to follow them? I think that most developers will at some point in their career come across a slow website that needs a bit of improvement in order to speed it up. By simply following performance rules as you develop your website, you ensure that you have a speedy website right from the start. Everybody loves a fast website! InfoQ: Can you share with us the use of compression in ASP.NET applications? Internet Information Services (IIS) has some great built in support for compression, it simply needs to be enabled in your application. There are so many sites that are out there that are running without compression, and with a few simple configuration steps it can make such a big difference to the performance of your site. There is some research that states that every day more than 99 human years are wasted because we spend our time waiting for web pages with uncompressed content. That’s an astonishing fact! InfoQ: Can you share the difference between Minification and Bundling? Minification can be applied to both CSS and JavaScript files and it simply removes and shortens any unnecessarily long code. Bundling CSS & JavaScript is an effective way to reduce the number of HTTP requests that a web page needs to make. By combining all JavaScript files into a single script, and similarly combining all CSS into a single Stylesheet you can severely reduce the number of HTTP requests that your web page makes. However, when you combine Minification and Bundling you get a double performance boost. ASP.NET has some great built in features that will automatically minify and bundle your CSS and Scripts for you. InfoQ: In Chapter 8, you talk about ETags. In this context, how will a developer know the purpose of the generated tag code? ETags are unique strings that are sent back in the HTTP response that help the browser identify and validate the browser cache. If used incorrectly they can actually be less efficient and in this book we explore ways of tweaking your application so that it performs at its best. InfoQ: How does the use of CDN improve performance of websites? A content delivery network (CDN) serves content to users with high availability and high performance. CDNs are distributed throughout the world and distribute your content to servers that are closest to your user. This means that if your user is in Mumbai, they get served from a server closest to Mumbai, instead of travelling half way around the world to a server in Europe, for example. This reduced distance greatly reduces network latency, which can make a big difference in improving the performance of your website. There is a great website called CDN planet that offers a list of affordable CDN providers. I recommend checking them out. InfoQ: In chapter 10, you talk about favicons. What are the potential issues which might occur if favicons are not used properly? In a high traffic ASP.NET application, you may find that by forgetting to add a favicon, you get the famous “The controller for path /favicon.ico does not implement IController” error appearing in your error logs. Most browsers will look for a favicon by default, and by not adding one, these unnecessary 404 errors cause disk reads and extra computation. While you may not notice this effect on a smaller website, a website with more traffic will take a hit in performance. InfoQ: Are you aware of any other book which competes with your book? There is a great book called High Performance Websites by Steve Souders. It provides practical tips and essential knowledge for developers. I wouldn’t say it’s competition but a must read for anyone who is interested in web development! About the Book Author Dean Hume is the author of the book Fast ASP.Net Websites, and the popular MVC HTML5 Toolkit. An avid blogger and contributor to various tech and coding websites, he brings a strong desire to constantly build his skillset and help do the same for those around him by writing, presenting, or training on various subjects. Community comments
https://www.infoq.com/articles/fast-aspnet-websites-review-interview-dean-hume/?itm_source=articles_about_clustering&itm_medium=link&itm_campaign=clustering
CC-MAIN-2019-51
refinedweb
1,757
63.29
Think of a house, any house, and if you do, it will probably be a bit blank, a bit unfilled in. You may think of a shape, be it one story, two story, or more, but the details won't have been fleshed out. You may have pictured the windows, but not if they had curtains, or the colour or pattern of the curtains. This, in simple terms, is called modeling. We do it all the time we can imagine something in basic shapes and forms, and we know what it means. Sure, we can fill in the details later, but most of the time, we like to the think that the details are already there. That we know what they are, but we just didn't bother to expand on them because we know what a house is. When we come to computers though, we see something different. Our whole perception of computers and the way they work is not based on models of the same variety. When it comes to computers, everything has to be fleshed out to the nth degree. They need to be exact, to know everything about everything; otherwise, they won't work, and the slightest not knowing of something is a bug, a fault in the program that needs correcting. Sure, we have arrays, which are models of a sort, and hashtables and dictionaries which, if we are lucky, may allow us to link up two related pieces of data together. Each piece of data is still unique. Even when we look at relational databases, the data is still separated into chunks, with each chunk, say, a company, being linked to other data, on companies, by what it is the company does. While the data itself is possibly very complex, the data type, i.e., company, is still a relatively simple data type in that it is a single block of data that represents a single item. Of course, for demonstration purposes, I over generalise and simplify, but the reason for this is to show the thinking behind the creation of Nets, which attempts to deal with complex data types. Or to be more precise, strings of data types, and of course, the perfect example for dealing with strings of data types is to deal with strings themselves, which is why the example provided shows how Nets deal with words and then sentences and then paragraphs. Of course, as with any data structure, it is up to the individual whether they have any use for it, and I wouldn't have thought of it unless I had a specific use in mind. A Net gets its name from the diamond pattern style of fisherman's net, although this is not entirely accurate as we don't complete the diamond shape. A basic Net node looks like this: It largely is an expansion on the tree data structure, with a pointer to the forward node, and nodes to the left and right that are based on object comparisons, with the node forward of the current node to the left or up in the picture, being less than the data in the forward node itself, and the data in the right nodes being greater than the data within the forward node. Of course, things are made more complicated by the fact that the left and right nodes are actually lists of nodes. So, without getting into code just yet, I'll try to explain the way it works. If you have three strings "ABC", "ABB", and "ABF", and you add them to the Net, you will get a Net that looks like: If you add them in the order listed above and if you were to add the "ABF" string first, then "C" and "B" would both be to the left or up on the picture. It should be added here that if you were to add the string "AB" at this point, it would disappear as the simple Net does not allow duplicate strings. The "AB" string would simply merge in with the other strings and be forgotten. However, if you add two more strings "ABBE" and "ABAF", the Net will look like this: At this point, the original string "ABB" has been completely overwritten, and will no longer be returned when you print the contents of the Net. As can be seen, the lists to the left and right (above and below in the image) are sorted automatically as the Net is built. This is now an integral part of how the Nets work (see the code discussion), but it is clear that the Nets would work faster without this, although the primary goal in the development of this project has been to get the Nets data structure working correctly rather than getting it to work fast. The best way to show exactly how the Net works is to run through the demonstration project and to use it with the aim of not only seeing how it works but understanding what can be achieved through using the Nets data structure. The initial and main demonstration deals with text files. This is pretty rough and ready code, and the output depends pretty much entirely on the editing of the file. I mention this because included in the project is a text copy of Charles Darwin's Origin of the Species, which is used throughout the main part of the demonstration, and which does contain a few editing errors. The tab for the book demo looks like: There are three types to this demo: the words, sentences, and the paragraphs demo. As the names suggest, the idea is that you select a book and the Net is built up from either the words, the sentences, or the paragraphs contained in the book. It should be noted that the more complex the Net, the longer it will take the computer to build it, so the words demo will build relatively fast, while the sentences demo will take a while, and if you are going to run the paragraphs demo, then bring a book or don't surf the news pages until after you've started it. As it is the quickest, the words demo is always set as the default on start up. The bottom half of the tab deals with the search options that we shall look at shortly. First, start up the demo by selecting the File button at the top: and selecting a text file. The program will then read the file and build the Net. The output will look like: You'll have to scroll down for the more meaningful items, but as you can see: 6512 items to choose from. There are two types of search that can be run on the Net, and these are a search on data strings within the Net that start with a given string, which is on the left, and a search for a data string that contains the given items. The search starting with is on the left of the tab. If we enter the letter o in the text box and press the left most search button: we get all the individual words in the Net, starting with the letter o. There are 161. If we move to the right side of the tab and enter oo into the text box there, we can run the search for all the words containing the letters o and o: Here, we get the 482 words in the book that contain the values o and o. Note here that the default behaviour is to return any item that contains the values o and o, and if you want to run a search for all the items that contain the values o and o together, you need to specify this by clicking the sequential checkbox. There are 81. The sentences and the paragraphs demos work in exactly the same way; they are just dealing with much more complicated data. The data demo is much simpler in construction in that it is simply provided to show the Nets working using a custom or more advanced data type: The Insert data button builds a Net of random data using a class that has an integer and a string as members. These are then created in random length strings of data and inserted into the Net. You can click on the Insert Data button as many times as you feel like. The search options allow for the entering of a single integer value and text to go with it. The valid range is from 0 - 9 for the integer, and one of "up", "down", "left", "right", "forward", "backward" for the text. Note that they are all lower case, and that they don't have any intrinsic meaning within the program, they are simply strings. If you click the Insert Data button a few times, you will get something that looks like this: And a search through the generated data for the value 5 and the text "forward" will give something like: 5 forward Print out for item 0 Value = 3 String = forward Value = 7 String = left Value = 4 String = right Value = 3 String = down Value = 8 String = down Value = 0 String = down Value = 5 String = forward Value = 5 String = up Value = 6 String = down Value = 3 String = backward Value = 1 String = down Value = 3 String = right Value = 4 String = down Value = 6 String = right Value = 1 String = up Value = 5 String = forward Value = 3 String = forward Value = 0 String = down Value = 8 String = left Print out for item 1 Value = 6 String = backward Value = 4 String = down Value = 5 String = forward Print out for item 2 Value = 6 String = up Value = 8 String = forward Value = 3 String = right Value = 0 String = forward Value = 5 String = right Value = 5 String = forward Value = 3 String = forward Print out for item 3 Value = 7 String = backward Value = 6 String = backward Value = 3 String = forward Value = 4 String = right Value = 8 String = forward Value = 8 String = right Value = 4 String = backward Value = 3 String = down Value = 5 String = forward Value = 4 String = left Value = 5 String = left Value = 5 String = backward Value = 6 String = left Value = 0 String = backward Value = 2 String = forward Value = 0 String = backward Print out for item 4 Value = 8 String = right Value = 6 String = down Value = 5 String = left Value = 2 String = right Value = 9 String = forward Value = 3 String = down Value = 6 String = forward Value = 8 String = up Value = 6 String = down Value = 4 String = right Value = 3 String = down Value = 5 String = forward Note that the search mechanism here is for all the strings containing the value entered; so in this example, we get no values that start with the entered data. The SimpleNet class is defined as: SimpleNet public class SimpleNet< T > where T : IComparer< T > which is a simple generic class definition that uses a class that implements the IComparer interface. This class can be made up of any data that is required by the user, with the only requirement being that it must implement the IComparer Compare method. This is the sole requirement to using the SimpleNet class. So the class we use for the book example is defined as: IComparer IComparer Compare public class StringData : IComparer< StringData > { private string strData; public string Data { get { return strData; } set { strData = value; } } public StringData( string data ) { strData = data; } public StringData() { strData = null; } public int Compare( StringData dataOne, StringData dataTwo ) { return String.Compare( dataOne.Data.ToLower(), dataTwo.Data.ToLower() ); } public override string ToString() { return strData; } } As you can see, there is nothing special about this class at all, except the Compare function which calls the string Compare function, which probably calls the IComparable CompareTo function. This then is the technical complexity needed to use classes in SimpleNet. A Compare function that calculates if the data in the objects is equal, returning a zero if they are, or a -1 if the first object passed is smaller than the second object, and a 1 if the first object is greater than the second object. Compare IComparable CompareTo As we are dealing with complex data strings or collections, the data must be entered into the Net as lists of data, and this is done in the mainform.cs file when we create the strings of data for the data demo with the code: List< DemoData > newList = new List< DemoData >(); if( demoNet == null ) demoNet = new SimpleNet< DemoData >(); int nNumberOfItems = rand.Random( 20 ); int nPiecesOfData = 0; List< string > stringCol = new List< string >(); stringCol.Add( "up" ); stringCol.Add( "down" ); stringCol.Add( "left" ); stringCol.Add( "right" ); stringCol.Add( "forward" ); stringCol.Add( "backward" ); for( int i=0; i<nNumberOfItems; i++ ) { newList.Clear(); nPiecesOfData = rand.Random( 20 ); for( int j=0; j<nPiecesOfData; j++ ) { newList.Add( new DemoData( rand.Random( 10 ), stringCol[ rand.Random( stringCol.Count ) ] ) ); } demoNet.InsertList( newList ); } I use this example as it is very simple in that all we do is create a list, define a few variables, and then build it before adding to the Net with the InsertList function. It is also possible to use the in-built facilities of the class which contains a list that can hold the data before inserting it into the Net. This is done through the AddToList( T data ) function and then calling the InsertList() function, though you must remember, you should use this technique to call ClearList() before starting a new list. InsertList AddToList( T data ) InsertList() ClearList() To perform a search, you use the code: List< StringData > list = new List< StringData >(); for( int i=0; i<strWorking.Length; i++ ) { list.Add( new StringData( strWorking[ i ].ToString() ) ); } List< List< StringData > > resultsList = wordNet.FindListsStartingWith( list ); which builds a list to search for and then calls the search function. In this example, I am using the code to search for a word at the start of a string from the wordNet example in mainform.cs. The SimpleNet class is 1400 lines long, so I will give more of an overview of its basic operations here and let those who are interested to go through the code. The class itself consists of 14 public functions; of these, 4 functions are for inserting data, these being the already mentioned AddToList, ClearList, and the overloaded InsertList. Two functions return lists of lists of data in the Net, these being the NetPrintOut function which returns the data as it gets it, and the GetSortedList which returns the same data but sorts it first. There are then two IsListPresent functions which return a boolean value for the given list. There are two overloads for FindListsStartingWith and five overloads for FindListsContaining. AddToList ClearList NetPrintOut GetSortedList IsListPresent FindListsStartingWith FindListsContaining To start with, all data in the SimpleNet is stored and manipulated as part of the SimpleNetNode class. This is your typical data type node that contains the lists and the points to the node in front and the node behind or the parent node. It is defined as: SimpleNetNode public class SimpleNetNode< T > where T : IComparer< T > and contains the data: private SimpleNetNode< T > parentNode; private SimpleNetNode< T > forwardNode; private List< SimpleNetNode< T > > leftList; private List< SimpleNetNode< T > > rightList; private T data; private bool bIsBase; private bool bIsRightBase; private bool bIsLeftBase; As you can see, the first four variables point to either other nodes or lists of other nodes. The data variable is the class that is added to the node, and the Base variables are used as quick checks to see if the lists are used. data There is no reason for a user of the Net class to ever create a Net node object. In fact, this was originally private, but as we are using Generics, the compiler was never quite sure if there was an error or not. So, it was simpler and cleaner to have it as a separate class. In the SimpleNet class, there are two conversion functions: private List< SimpleNetNode< T > > ConvertListToNodes( List< T > dataList ) which convert the input lists to a list of SimpleNetNodes and the functions: private List< T > ConvertList( List< SimpleNetNode< T > > list ) and private List< List< T > > ConvertList( List< List< SimpleNetNode< T > > > list ) which convert the lists back before they are returned to the calling code. As you have seen, the lists of data are inserted using the InsertList function which is responsible for building the Net. This function starts by taking the list as a parameter: private void InsertList( List< SimpleNetNode< T > > insertionList ) and then cycles through the list, dealing with one item at a time: for( int i=0; i<insertionList.Count; i++ ) { SimpleNetNode< T > nnNewData = new SimpleNetNode< T >( insertionList[ i ].Data ); } If the list is empty, the data is made to be the root: if( TrueRoot.ForwardNode == null ) { TrueRoot.ForwardNode = nnNewData; CurrentNode = TrueRoot.ForwardNode; continue; } If there is already data in the list, we check the next item which is the forward of the current node. And if this is null, we add the data at this point. If at this point the CurrentNode forward is not null, we have to do a comparison and decide where to place the item, so the main block of the InsertList code looks like: null CurrentNode nnTempNode = CurrentNode; int nResult = nnNewData.Data.Compare( nnNewData.Data, CurrentNode.ForwardNode.Data ); if( nResult == 0 ) { CurrentNode = CurrentNode.ForwardNode; } else if( nResult == -1 ) { } else { } This is where data is overwritten, because if the data we are adding is the same as the data in the list, we do nothing with it, but simply move forward and go on to the next item in the list. If the data is not equal, whether it be larger or greater, then we must continue checking it, so for demonstration purposes, we'll assume that the Compare function returned -1. bool bFound = false; for( int n=0; n<CurrentNode.LeftListCount(); n++ ) { nnTempNode = CurrentNode.GetLeftNodeAt( n ); if( nnTempNode.Data.Compare( nnTempNode.Data, nnNewData.Data ) == 0 ) { CurrentNode = nnTempNode; bFound = true; break; } } if( bFound == false ) { SimpleNetNode< T > insertionNode = new SimpleNetNode< T >( nnNewData.Data ); insertionNode.ParentNode = CurrentNode; CurrentNode.AddLeftNode( insertionNode ); nnSortNode = CurrentNode; CurrentNode.IsBase = true; CurrentNode.IsLeftBase = true; CurrentNode = insertionNode; bSortLeftList = true; } We cycle through all the elements in the chosen list, and if it is present, set the current pointer to the node in the list and continue from there and we move to the next item in the insert list. If the node is not present in the list, we add it to the list. The main searching functions in the SimpleNet class are: public List< List< T > > FindListsStartingWith( List< T > dataList ) public List< List< T > > FindListsStartingWith() These two functions, as with their companion FindListsContaining functions, search through the SimpleNet and return the results as a list of lists for the data. This is done by converting the Net into a list of lists internally and then searching through the lists. There are other ways to do this, but with a data structure as complicated as a Net, it is easy to tie yourself into knots when trying to move through searching for specific items, so it was felt that this is a solution viable for a simple iteration of the Nets data structure. This article is an introduction to the ideas and the mechanisms of the Net data structure, and as such, with the exception of the insertion code, the techniques, especially for the search, are of a more simplistic nature than they are in a full Nets structure. Despite this, it is hoped that this introduction has given an idea of the power that Nets can bring to a project. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) General News Suggestion Question Bug Answer Joke Rant Admin Man throws away trove of Bitcoin worth $7.5 million
http://www.codeproject.com/Articles/20477/Introducing-Nets?fid=459279&df=90&mpp=10&sort=Position&spc=None&tid=2229712
CC-MAIN-2013-48
refinedweb
3,296
62.21
bool repeatedSubstringPattern(string& s) { return (s+s).find(s,1) < s.size(); } Since neither string::find nor std::strstr specify complexity, the algorithm is up to whatever their implementation is. (e.g., O(N) time and space if using KMP, where N = s.size()) Why condition return (s+s).find(s,1) < s.size() is equivalent to substring repetition? Proof: Let N = s.size() and L := (s+s).find(s,1), actually we can prove that the following 2 statements are equivalent: 0 < L < N; N%L == 0and s[i] == s[i%L]is true for any iin [0, N). (which means s.substr(0,L)is the repetitive substring) Consider function char f(int i) { return s[i%N]; }, obviously it has a period N. "1 => 2": From condition 1, we have for any i in [0,N) s[i] == (s+s)[i+L] == s[(i+L)%N], which means Lis also a positive period of function f. Note that N == L*(N/L)+N%L, so we have f(i) == f(i+N) == f(i+L*(N/L)+N%L) == f(i+N%L), which means N%Lis also a period of f. Note that N%L < Lbut L := (s+s).find(s,1)is the minimum positive period of function f, so we must have N%L == 0. Note that i == L*(i/L)+i%L, so we have s[i] == f(i) == f(L*(i/L)+i%L) == f(i%L) == s[i%L], so condition 2 is obtained. "2=>1": If condition 2 holds, for any i in [0,N), note that N%L == 0, we have (s+s)[i+L] == s[(i+L)%N] == s[((i+L)%N)%L] == s[(i+L)%L] == s[i], which means (s+s).substr(L,N) == s, so condition 1 is obtained.
https://discuss.leetcode.com/topic/72712/1-line-c-solution-return-s-s-find-s-1-s-size-with-proof
CC-MAIN-2017-47
refinedweb
304
76.32
Aside from applying that killer layout and visual design to your project, stylesheets need to be organized in a way that effectively communicates markup and style relationships and allows for quick and easy modification. A good measure of a well architected front-end is how easy it is for a new team member to jump into your project or for a new team to it take over. If your methods for organization and selector naming aren’t consistent and easily communicated to or understood by others, you’re probably creating a lot of code debt and certainly creating a higher than necessary barrier to entry for newcomers to the project. This is how we do it Thanks to the partials feature in css preprocessors like Sass, less.js, and Stylus, we can organize our stylesheets better than ever before. We work with Sass because it comes built into rails and it’s awesome. From the very start, all the apps we build come with a gem we built called Flutie. Flutie provides a stylesheet reset and most importantly, a method that adds a class name to the body element of every page. The body class name is made from the name of the controller and action responsible for generating that page. This allows us to target specific pages by using the unique body class and it also helps us semantically model our stylesheets directory after our views directory. So if we have a Clips controller with an Edit action and we have an associated view for that action called edit.html.erb in views/clips/ then the body element in that view would get a class of .clips clips-edit and in our stylesheets directory we should create a sass partial called _clips_edit.scss, if we are going to be applying styles to that view. Instead of namespacing the .scss files you could group your style sheets in folders (ex: clips/_edit.scss) the same way the rails views are organized. Organizing your styles in this way makes it much easier to find the styles you’re looking for when working between the browser’s inspector and your text editor. Variables, Mixins, Extends, etc So what about all the variables, mixins, extends, and other great stuff you can do with Sass and most other preprocessors, where do they go? We have found that organizing our variables, extends, mixins, animations and general base styles (default styles for things like typography, links, etc) in partials to be very useful. These partials are usually namespaced with _base or kept in a folder base/. So in a typical project I will have _base.scss, _base_variables.scss, _base_mixins.scss etc. Shared components Often times we will build components of an application to be modular. So to keep the styles for these components modular as well, we create “shared” partials. If in your application you have a header component that maintains all or the majority of its styles across multiple views in your app, create a partial called _shared_header.scss or put _header.scss into a “shared” folder. If you have a view, clips_index.html.erb for example, in your app that is sharing the header with another view ( clips_edit.html.erb lets say) but needs a slight tweak, target the header component in the _clips_index.scss stylesheet partial and make the clips_index.html.erb view specific tweaks there. Generating the css Now that we have our styles logically organized we create a file, often called application.scss or screen.scss, in which we import all of our partials with the @import directive (ex: @import “clips_edit.scss”;) to render the single stylesheet our app will use. The order in which you import these files is important as the import will render the contents of the imported files in the order they are imported and these are cascading styles sheets after all, so make sure you are importing your variables and base styles at the top. Stay Fresh As a consulting shop, one of our goals is to deliver clean code that our clients can continue to easily work with and grow after we are done working on the project. From the front-end perspective, an important piece of making that happen is keeping our markup and styles clearly organized by constantly using agreed upon methods. All code should look like it was written by one person, regardless of the actual number of contributors. Trying to work in a large application that has no clear reasoning for the structure of its front-end can feel like battling a kraken with a countless number of limbs. So even if you will be the only person ever working on a project, your future self will certainly appreciate the efforts you make now to keep your styles organized, modular and easily modified.
https://robots.thoughtbot.com/style-sheet-swag-architecting-your-applications-styles
CC-MAIN-2015-11
refinedweb
802
61.26
Hi! Is there a way to set the default local working path If someone opens the project, he has to set the working path, but I want to set a default path! regards Simon But where can I define such a workspace template Ok thanks, I will try that. I cannot find, where I can select the existing workspace. If I go to File --> Source Control--> "Open From Source Control" --> Select my Project, than I cannot set any other destination workspace ! G'day, You can view your workspaces using "File->Source Control->Workspaces..." or by selecting "Workspaces..." in Workspace combo box in Source Control Explorer. When you perform "Open from Source Control", you need to select solution or project from source control tree, and from Workspace combo box in Destination one of the workspaces available. I think the issue you might be having may be related to the fact that when you view TFS projects in "Open from Source Control" dialog (that is list in Look In is list of TFS projects and in combo box is selected TFS server root node), the workspaces combo is disabled. You need to select some path under TFS project - then combo box will become enabled. Hope that helps. Yours truly, No that was not the problem, I wanted to set a default Local Path on the server .... but I think it is only possibly with the command tool ! I think you misunderstand the concept. In order to get files from source control to local folder, you need to specify source (source control path) and destination (local disk path). In TFS there is concept of workspace, which essentially entity that holds all such mappings for specific user and workstation. Therefore, in order to define such mapping, you need to select (after creating one if needed) workspace, and specify server and local paths for mapping. As far as I can see, you want the user to be able to open Visual Studio, and perform Open from SC so that default local path will already be specified, meaning that for specific user the mapping must be already set. That can be done by creating one workspace (with desired local path mapping for the project server path) for some user and then duplicating the workspace definitions (i.e. mappings) - that is creation workspace for another user at other workstation using existing workspace as a template (that's what Richard post details). See "Working with Team Foundation Source Control" (especially, "Working with Source Control Workspaces" and "Getting a Local Copy of Files from the Source Control Server") for further information. Mhm, ok thanks a lot. So I have to make a default namespace and copy that workspace. See message from Richard Berg. Thanks a lot! Regards
http://databaseforum.info/30/500024.aspx
CC-MAIN-2017-26
refinedweb
458
70.02
Buttonwood Caught short Returns may have improved but hedge funds still face a lot of problems THE hedge-fund industry is making money again. Figures from Hedge Fund Research (HFR), a consultancy, show that the average fund gained 5.2% in May, the best monthly performance since 2000. Several sectors have earned double-digit returns so far this year. The rebound comes after a traumatic 2008 for the industry, during which the average fund lost nearly 19% and there was a wave of redemption demands from investors. Assets under management have fallen by around $600 billion from their peak, leaving the industry with just over $1.3 trillion to look after. Nearly 1,500 funds were liquidated last year, the population of the entire industry back in 1993. Although some new funds have been created, HFR reckons there were just 8,860 funds at the end of March, down from 10,096 in 2007. This Darwinian process may have helped those managers who survived. A smaller industry means there is less competition for profitable opportunities. Work by Bill Fung and Narayan Naik of the London Business School shows that one of the best periods for hedge-fund outperformance occurred in the aftermath of the collapse of Long-Term Capital Management in 1998. Then, as recently, market prices moved erratically, creating anomalies to exploit. Taking advantage of those anomalies proved difficult at the height of the credit crunch. Hedge-fund managers needed to hold cash to meet redemptions. In addition, prime brokers (the main providers of finance to the sector) were restricting the amount that managers could borrow. Now JPMorgan, a leading prime broker, is saying that managers are willing and able to borrow again. Not on the scale seen in 2006, admittedly, but enough to take advantage of rising markets. Even though returns are recovering, the industry is in a much weakened position. Clients are now suspicious of the fees that hedge funds could command in their glory days. A memo from CalPERS, a huge Californian pension fund, said it would “no longer partner with managers whose fee structures result in a clear misalignment of interest between managers and investors.” In practice, that means phasing performance fees over longer periods and demanding that such fees are only generated when the manager exceeds a hurdle rate of return (not zero, as was often the case). In addition, many investors are now demanding that hedge funds run assets in “managed accounts”, where their money is held separately and the holdings are transparent. In the boom years, hedge-fund managers could afford to refuse such requests, which they tended to regard as too much hassle. A further advantage of managed accounts, from the investors' point of view, is that it is easy for them to withdraw their money. Many investors are smarting from their experiences in 2008, when hedge-fund managers abruptly imposed “gates” to restrict withdrawals. Those funds that did resort to gates are still being punished by investors this year. Figures from HFR show that investors withdrew around $104 billion in the first quarter of the year, which was more than 7% of the industry's total assets. The bit of the industry that is suffering most is the funds-of-funds sector. It claims, for an additional fee, to be able to put together a portfolio of the best hedge-fund managers. But this claim was rather dented by the exposure of several leading groups to Bernie Madoff. Funds-of-funds are struggling in 2009 as well: their returns are lagging five percentage points behind the average for the industry. Funds-of-funds are especially vulnerable to an investor squeeze. The investors who have proved most disillusioned with the hedge-fund industry have been high-net-worth individuals, who believed all the talk about “absolute returns” and were dismayed by last year's double-digit losses. That means the industry will become more focused on institutional investors like pension funds and endowments. But these investors can develop their own expertise in picking the best funds or may opt for model-based “clones” that aim to produce hedge-fund-like returns at lower cost. To add to the industry's problems, it faces renewed interest from regulators. Managers are particularly exercised by a draft European directive which will impose limits on leverage and greater requirements for transparency, which could limit returns. Among other things, the new rules also require prime brokers to be resident in the same jurisdiction as their clients, a mindless piece of bureaucracy. From being masters of the universe two years ago, hedge-fund managers now fear becoming servants of Brussels. Excerpts from the print edition & blogs » Editor's Highlights, The World This Week, and more »
http://www.economist.com/node/13832269
CC-MAIN-2014-35
refinedweb
788
61.16
[ ] Alex Karasulu updated DIREVE-137: --------------------------------- Fix Version: 0.9 > Problems with mixed-case in suffix > ---------------------------------- > > Key: DIREVE-137 > URL: > Project: Directory Server > Type: Bug > Components: jndi-provider, jdbm database, server main > Versions: 0.8 > Reporter: Endi S. Dewata > Assignee: Alex Karasulu > Priority: Minor > Fix For: 0.9 > Attachments: mixed-case.patch > > The server currently has some problems with suffixes that contain mixed cases, e.g. dc=VergeNet, dc=com. > 1. Add > ------ > In ServerContextFactory.java line #630, JdbmDatabase is initialized with un-normalized suffix. > Database db = new JdbmDatabase( upSuffix, wkdir ); > But in JdbmDatabase.java line #673, when adding an entry to the database, the suffix is compared with the normalized dn of the new entry. > if ( dn.equals( suffix ) ) > This is causing the add operation to fail. > 2. Search > --------- > In RootNexus.java line #203, the suffix is being normalized during registration. > backends.put( backend.getSuffix( true ).toString(), backend ); > However, in RootNexus.java line #556, the dn used to look up the backend is not normalized. > return ( ContextPartition ) backends.get( clonedDn.toString() ); > This is causing the search operation to fail. -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: - If you want more information on JIRA, or have a bug to report see:
http://mail-archives.apache.org/mod_mbox/directory-dev/200503.mbox/%[email protected]%3E
CC-MAIN-2017-17
refinedweb
211
52.97
We are about to switch to a new forum software. Until then we have removed the registration on this forum. Hi everyone! I have run into a problem with some code. The code is made so when the user presses A, S or D it will load one of three files assigned for each letter. However, when I press play it comes up with the error: "expecting RPAREN, found ';' syntax error, maybe a missing perenthesis?" for line 32. does anyone know what the problem is here? Thanks :) import processing.video.*; String[] one = { "ag1.mov", "ag2.mov", "ag3.mov", "am1.mov", "am2.mov", "am3.mov", "tran1.mov", "tran2.mov", "tran3.mov" }; Movie myMovie; int a = 0; float md = 0; float mt = 0; void setup() { size(400, 400, P2D); } void draw() { image(myMovie, 0, 0, width, height); } void keyPressed() { if (key == 'a' || key == 'A') { myMovie = new Movie(this, int(random(1, 3)); myMovie.play(); } else if (key == 's' || key == 'S') { myMovie = new Movie(this, int(random(4, 6)); myMovie.play(); } else if (key == 'd' || key == 'D') { myMovie = new Movie(this, int(random(7, 9)); myMovie.play(); } else { myMovie.stop(); } } Answers Line #32 -> Missing an extra closing parens. Tip: Hit CTRL+T inside PDE. O:-) You need to add a right bracket before the semicolon. Change it into this myMovie = new Movie(this, int(random(1, 3))); The same goes for lines 38 & 44 as well. thank you so much for your fast reply GoToLoop & Alex_Pr ! I added an extra right bracket to lines 32, 38 & 44. After I click play the error comes up "The constructor "Movie(ThisWork, int)" does not exist. I am quite new to processing so I'm wondering if this is because I haven't yet put the video files into the sketch? Also do you know if there's similiar codes out there that does the same thing as this code does? Thank you again. :) A very nasty bug that time to time shows up is saving the sketch using the same name as another class. Sorry, I'm not sure what you mean by "using the same name as another class". I'm a noob when it comes to processing. Thanks :) The error comes up because instead of a string you give an int to Movie(parent, filename) If the name of the file is a number you need to convert the int(random()) to a String This is easily done by replacing myMovie = new Movie(this, int(random(1, 3));to myMovie = new Movie(this, ""+int(random(1, 3)); Thank you Alex, that seemed to work. However now there is an error on line 24 image(myMovie, 0, 0, width, height); with NullPointerException is this something to do with the size of the video it is trying to load? That has to do with the fact that you access myMovie without it having a value. You should edit the code so that image(); command is executed only if you have pressed a button, like this: This got me further. So now my error is now on line 35 "Could not load movie file 1" the file names are exactly the same as they are in the sketch, I apologise for the amount of questions I'm asking here but I really appreciate your help. :) What type is your file, .mov or something else? Is the video located in the sketch's data folder? In the IDE go to "Sketch" select "Add file..." and add the movie file you want, see if that works There's only three actual video files in the sketch at the moment which is 1.mp4, 2.mp4 and 3.mov all three are in the sketchbook folder under data (assuming the sketchbook folder is the file that saves as the same name as the sketch with a data folder inside) and I tried replacing each video by going to "Sketch" select and "Add file". Try playing 3.mov to check whether the filetype is the problem Tried each file individually in the sketch and it still comes up with the error "Could not load movie file 3" I'm not sure what the problem is (Deleted duplicate question) It seems that I have managed to get the code to call a video and play it however it only plays sounds and the rest of the screen is white for each video and when I press "A" to go onto the next video they overlay each other and don't stop playing. #35 myMovie = new Movie(this, int(random(1, 3))+".mp4"); and Add Int(random(1,3)) -> 1 or 2 not 3 Thank you! this worked perfectly. I appreciate everyone helping out with this. You're all awesome. :)
https://forum.processing.org/two/discussion/14377/random-video-select-code
CC-MAIN-2021-39
refinedweb
788
80.51
You need to sign in to do that Don't have an account? Calling Apex Rest custom web service from Apex code Hi everyone, I am a newbie to FDC (so please correct me if I am wrong anywhere) . I have gone thorugh Apex REST API and found that we can create our custom web services in Apex and expose them as REST services using Apex REST. But in most of the references it is given that we can use any programming language of our choice ( i.e. from outside FDC) to invoke this custom WS via REST API. But I want to invoke the same using Apex code (i.e., from inside FDC) via REST API. Any help is highly appreciated You beat me, Simon :-) Vikash - try adding the following line after req.setEndpoint(url); Simon - it's a bit odd that it would complain about API being disabled, rather than a bad or missing token :-/ All Answers Hi Vikash, I assume that you're trying to call an Apex REST service in one org (let's call it the 'target' org) from another (let's call it the 'source' org), since if the caller and service were the same org, you could just call the function directly :-) Here's some sample code for the target org - it's very simple indeed: It's a little bit more involved for the source org, since the source app needs to authenticate to the target org. All of the following instructions apply to the source org... First, add and your target org instance URL (e.g. or) as Remote Sites (Setup | Administration Setup | Security Controls | Remote Site Settings). Next, create a remote access application (Setup | App Setup | Develop | Remote Access) - you can just give it a dummy as the callback URL. Now you'll need to copy JSONObject from to a new Apex Class in your source org. Finally, here's the sample code for the source org: You will need to insert your remote access app consumer key/secret and target org username/password in the appropriate places in the restTest() method. What that source app code does is to authenticate to the target org via OAuth username/password flow, then call the REST service and return the result. If you execute RestTest.restTest('Vikash'); in the system log, you should see Hello Vikash in the debug output. Hi Pat, I am very much thankful for your help, I am giving you my code so that you can get better Idea of wat I am doing My target org code: and my source org code is: which on cliking submit button should give me 'Hi, You have invoked getservice1 created using Apex Rest and exposed using REST API'.......... Instead of which I am getting '[{"message":"API is disabled for this User","errorCode":"API_CURRENTLY_DISABLED"}]' I have already added '' in Remote Sites (Setup | Administration Setup | Security Controls | Remote Site Settings). Plz correct me if there is any flaw/mistake/wrong in the way I am doing this??? Does your user in the source org have API access? The error indicates that it does not. Hi Vikash, As Daniel says, the error message gives the clue - check that your user has API access. Are you using one org here, or two? Cheers, Pat Hi, Yes I already have the API enabled and I am using same target and source org for this example. Since I am using a GET method , It should work if I open is a browser but am getting a 403 error when I do this. You are not sending an Authorization: OAuth {someToken} header to authenticate your request. Yes I am not sending an Authorization: OAuth {someToken} header to authenticate my request. But is that neccessary??? You beat me, Simon :-) Vikash - try adding the following line after req.setEndpoint(url); Simon - it's a bit odd that it would complain about API being disabled, rather than a bad or missing token :-/ Yes - it's very necessary - all calls to the API [1] must carry a token. [1] Well, nearly all - the 'root' /services/data call doesn't. You can call it without a token and it will tell you what API versions the REST service supports: Take a bow Pat, Awesome , it worksssssssssssssssssssssssssssssssssssssss.................. Can you just explain me about this a little bit??? Sure - as I mentioned, every API call must have a token representing an authenticated user. The REST API accepts an OAuth token (pretty much a session ID) in the 'Authorization' HTTP header (see for more detail). In Apex Code, UserInfo.getSessionId() returns you a session ID that you can then add as a header to your outbound call. Now... Why are you using the REST API to call into your own org? Why not just call getservice1.postRestMethod()? BTW - this mechanism WILL NOT WORK for calls from one org to another. For that you will need to obtain a token valid for the target org, which is much more tricky. I posted a username/password mechanism earlier in this thread. You can also do OAuth web server flow (see wiki.developerforce.com/index.php/Digging_Deeper_into_OAuth_2.0_on_Force.com) with code similar to - actually, that's code to do OAuth against Facebook, but the principle is the same :-) Yeah Pat, when I try to call it from another org I am getting the following error System.CalloutException: Unauthorized endpoint, please check Setup->Security->Remote site settings. endpoint = Class.getservice2.getNameMethod: line 19, column 29 Class.getservice2.submit: line 8, column 17 External entry point Can you tell what to add in my code to get it authenticated As the error message indicates, you need login to the web app, goto setup -> security -> remote sites and add new remote site for that url. It has already been added in the remote site settings, I am worrying about authenticated code for my web app when I access it from another org You have to add the target org host to the remote sites in the source org. You might have it the other way round... I have added '' as target host in remote site settings, but I am getting the error as " [{"message":"Could not find a match for URL /GetService","errorCode":"NOT_FOUND"}]". What is to be added in remote site '' or '' Just Which end did you move to your new org - source or target? source Hmm - not sure. If you've made no other changes, you're passing a source org session ID to the target org in the Authentication header, you're confusing the hell out of things. I'm not even sure web server OAuth flow is going to work here, since all the cookies may end up in the same domain. Let me try it out on my DE org(s) and get back to you... I apologise for messing up things. This code is working fine if both source and target are in same org. I need to know what the neccessary changes to be made in above code if the source is moved to another org I just got this working. It's pretty tricky, because you have to get a token for a user in the target org. You can't do this using login.salesforce.com, since you'll likely just get a token for your source org, which won't work. So... Step 1 - you need to set up 'My Domain' in the target org - Setup | Administration Setup | Company Profile | My Domain. Note that this takes a few hours to take effect. You'll end up with a domain name of the form yourdomain-developer-edition.my.salesforce.com. This is the key to getting org->org OAuth working - you need two distinct domain names. Step 2. Here is the code for the source org. The target org is unchanged (apart from setting up My Domain): Controller Page - I called it 'RestCall' Step 3. Create a remote access application (Setup | App Setup | Develop | Remote Access) in the source org with your Visualforce page URL (e.g.) as the callback URL. Edit the controller to set the client ID and client secret values (consumer key and consumer secret in the remote access app page). Now when you go to the page, you may be prompted to log in (depending on whether or not you have a session in the target org), then you should be prompted to allow the remote access app to access your data. After that, the REST call should take place and you'll see the result :-) The code is a little horrible, since it puts the whole oauth response in a cookie, but it works :-) A better solution is to generate a random string for the cookie and save the oauth response in a custom object record, indexed by the cookie. You can see how that works in the latest Facebook toolkit code at - look at FacebookObject.cls. BTW - it works equally well with a 'site' at the source org - then you only have to login to the target org. Hi Pat, I am trying the way you showed but unable to get the web page for logging in and it shows the error as http 400 bad request. I am totally stuck at this point My code goes as below: Hi Vikash, You are still using UserInfo.getSessionId() - this won't work when you call between orgs - you have to use a token from the target org - the login() method gets this and puts it in a cookie. Your getNameMethod should look like this: Cheers, Pat Hi Pat, I have made the neccessary changes in getNameMethod. But according to the code, as soon as the visual page loads, it should redirect me to where i can give my credentials. But I am geeting HTTP 400 Bad Request and the URL was Hi Pat, The Error it sowing is "redirect url mismatch", but as far as I know , we have followed your code correctly, Really stuck at this point Hi Vikash, Double check that the callback URL you set in the remote access app is identical to the one in your code - Also - I notice some corruption in the URL you posted - 'redirec%E2%80%8Bt_uri' should be 'redirect_uri'. Also, in the source, you seem to have some corruption in the highlighted line: That '&redirect_uri' seems to have some non-printable characters between '&redirec' and 't_uri'. If you retype that string, and make sure the URLs match, it should work. Cheers, Pat Hi Pat, The error is exaclty what you said, now its working very fine. We are done with it. Im very much thankful to you, you really helped us alot, Can you just tell me whether this OAuth authentication is one time authentication or not? Because when I open the page for the first time it is going to loginUrl as expected, & on subsequent access of that page , its directly giving the visual force page. But when I cloase the browser window and open again, it again taking me to loginUrl page, So I think this is not one time authentication, Can you guide about this plz The authentication is session-based, just like logging in to login.salesforce.com. If you keep your browser open, you can keep going to pages within salesforce.com until your session expires (by default, I think this is 2 hours). Similarly for this integration here. By the way - the code I posted is 'proof of concept' - it would need some tidying up for production. In particular, there is no error handling. To be robust, you will need to handle a 401 response (invalid or expired token) for the REST call. If you get a 401, you need to repeat the login process to get a new access token. Hi Vikash, Access tokens follow the same rules as regular Salesforce session IDs - they have an idle timeout that is set in Setup | Administration Setup | Session Settings | Session Timeout. The default session timeout is 2 hours, but an org admin can set it to any of a variety of values from 15 minutes to 8 hours. See§ion=Security for more details. Cheers, Pat Hi Pat, I have a small idea, If I store the access token in a database at source org and use that for further communications btw source & target, do I still need to authorize when I access the target webservice from another machine? I mean, can the same accesstoken be used(until it doesnt get expired) from different machines to invoke target service?? Suppose user A logged into machine A with his own credentials and invoked the target web service ans as a result got accesstoken. Now can this accesstoken be used by another/same user on different machine, say machine B??? My intention is to authorize the target webservice only once(that is for the first time) and use that accesstoken from anywhere with out any need to authorize again. Hi Vikash, Yes, I think that would work - token expiration is your only problem. You know, most of this stuff, you can just try it out quicker than writing a question here, and certainly quicker than waiting for a reply :smileywink: Cheers, Pat Hi Vikash, I've just been reminded that you receive a long-lived 'refresh token' as part of the response in the web server flow. You can store this persistently and use it to obtain a new access token when the old one expires. See my article for a deeper discussion of refresh tokens and their use. Cheers, Pat Hi Pat, As far as I know, storing the accesstoken in the database is not a better solution as far as security is concerned. Is there any way to implement this using OpenSSO?? Hi Vikash, You would store the refresh token (not the access token) in the database for a given user. Why would this not be secure? You can lock security down so it is only accessible to the relevant user. I don't think OpenSSO really helps, since you have one salesforce org calling another. Cheers, Pat But is it not accessable even to administrators of salesforce??? I am looking for a very secured way of implementing OAuth(using this token) & also not making the user to give the credentials of target again & again(this is why I have stored it in db) Hi Pat, I am trying to explain my requirement in a better way here................. As explained here , I have installed OpenAM in my machine which is acting as a identity provider. when I give the credentials to log into OpenAM , using SAML assertion I am able to login to salesforce without again giving Salesforce credentials. Now using the same SAML assertion(or token) I wanna invoke the service of target org from source org. The reason for this is , if I store the accesstoken in the cookie , it gets invalid once I close the browser. If I store accesstoken in db , It can be compromised(I am not so happy with this even if its safe).............. So I think this can be done in 2 ways(may be both are not possible) 1. Use this SAML assertion as accesstoken to invoke target org from source using Oauth (this may not possible as accesstoken is being given by authorizaton server) 2. Any other way to use this SAML assertion as token with out using OAuth concept. I need your ideas on this & Pat, your guidance is really putting me in the driving seat........... I am really thankful to you, Hi Vikash, If you have a SAML assertion issued by an IdP that your Salesforce org trusts, your app can exchange it for an OAuth token - see Cheers, Pat "I assume that you're trying to call an Apex REST service in one org from another since if the caller and service were the same org, you could just call the function directly" Not true. Salesforce is exposing functionality via its REST API (like chatter @mentions) that are not available from within Apex. In my client's case, this is exactly what we'd like to do (that along with some reporting stuffs). tggagne - you're right, and in that case (calling REST APIs in the same org) it's very straightforward - see I'd found that earlier, but I don't want to call it from Javascript. I want to talk to my own org from Apex. tggagne - that is possible - vikash posted code to do exactly that earlier in this thread - Hi Vikas, how did you start writing sourchis code , can you write this one in apexclasses. and how did you got source code url. Please give me stepwise information Thanks, Sarvesh. HttpRequest req = new HttpRequest(); HttpResponse res = new HttpResponse(); Http http = new Http(); //String endPoint1 = instanceURL + '/services/apexrest/Insurance'; req.setEndpoint(''); req.setMethod('POST'); req.setHeader('Content-Type', 'application/json; charset=UTF-8'); req.setHeader('Accept', 'application/json'); req.setHeader('Authorization', 'OAuth'+sessionId); req.setBody('{"Name" : "New1"}'); Please let me know if i m missing something or not. this is my exception Visualforce ErrorHelp for this Page System.CalloutException: Invalid HTTP method: post Error is in expression '{!getResult}' in component <apex:commandButton> in page wq: Class.getservice21.getResult: line 19, column 1 Class.getservice21.getResult: line 19, column 1 thanks in advance. @Pat Patterson, I used your code for connecting one salesforce Org to another using Connected App. My code as Follows: public class RestTest { public String restTest { get; set; } private static JSONObject oauthLogin(String loginUri, String clientId, String clientSecret, String username, String password) { HttpRequest req = new HttpRequest(); req.setMethod('GET'); req.setEndpoint(loginUri+'/services/oauth2/token'); req.setBody('grant_type=password' + '&client_id=' + clientId + '&client_secret=' + clientSecret + '&username=' + EncodingUtil.urlEncode(username, 'UTF-8') + '&password=' + EncodingUtil.urlEncode(password, 'UTF-8')); Http http = new Http(); HTTPResponse res = http.send(req); System.Debug('*****************************'+res); System.debug('BODY: '+res.getBody()); System.debug('STATUS:'+res.getStatus()); System.debug('STATUS_CODE:'+res.getStatusCode()); return new JSONObject(res.getBody()); } public static String restTest(String name) { JSONObject oauth = oauthLogin('<a href= "" target="_blank"></a>', '3MVG9Y6d_Btp4xp6GjWvM6Vqowzgn3PxnMwVPSeWRuVClTo..VJYYOIeHxVdyMpoeDeGeDC8MzTFwaPSLBoAU', '4075257097069349419', '[email protected]', '*****'); String accessToken = oauth.getValue('access_token').str, instanceUrl = oauth.getValue('instance_url').str; HttpRequest req = new HttpRequest(); req.setMethod('GET'); req.setEndpoint(instanceUrl+'/services/apexrest/Account35?name='+name); req.setHeader('Authorization', 'OAuth '+accessToken); Http http = new Http(); HTTPResponse res = http.send(req); System.debug('BODY: '+res.getBody()); System.debug('STATUS:'+res.getStatus()); System.debug('STATUS_CODE:'+res.getStatusCode()); return res.getBody(); } } I am getting error Like: System.CalloutException: no protocol: <a href= "";</a>/services/oauth2/token Class.prabhaks.RestTest.oauthLogin: line 18, column 1 Class.prabhaks.RestTest.restTest: line 28, column 1 How to Resolve this.. Any one can help . Thanks in advance... I've also added my domain name "" in Remote site setting.I'm getting "URL No Longer Exists" error whenever I'm executing the VF page.Please find the below codes. Target code : @RestResource(urlMapping='/GetService/*') global with sharing class getservice1 { @HttpGet global static String getRestMethod() { RestRequest req = RestContext.request; RestResponse res = RestContext.response; String name = req.params.get('name'); return 'Hello '+name+', you have just invoked a custom Apex REST web service exposed using REST API' ; } } Source Code : public abstract class OAuthRestController { static String clientId = '3MVG9Y6d_Btp4xp7FzYaVu5vF77qEk0y_iGvHzdmXyk71gTL75PCoi5rOmM7f_kMqP9THR3Y7iXsRl9i_DwZg'; // Set this in step 3 static String clientSecret = '4692916739247047542'; // Set this in step 3 static String redirectUri = '<a href="" target="_blank" rel="nofollow"></a>'; // YOUR PAGE URL IN THE SOURCE ORG static String loginUrl = '<a href="" target="_blank" rel="nofollow"></a>'; // YOUR MY DOMAIN URL IN THE TARGET ORG static String cookieName = 'oauth'; public PageReference login() { // Get a URL for the page without any query params String url = ApexPages.currentPage().getUrl().split('\\?')[0]; System.debug('url is '+url); String oauth = (ApexPages.currentPage().getCookies().get(cookieName) != null ) ? ApexPages.currentPage().getCookies().get(cookieName).getValue() : null; if (oauth != null) { // TODO - Check for expired token } System.debug('oauth='+oauth); if (oauth != null) { // All done return null; } // If we get here we have no token PageReference pageRef; if (! ApexPages.currentPage().getParameters().containsKey('code')) { // Initial step of OAuth - redirect to OAuth service System.debug('OAuth Step 1'); String authuri = loginUrl+'/services/oauth2/authorize?'+ 'response_type=code&client_id='+clientId+'&redirect_uri='+redirectUri; pageRef = new PageReference(authuri); } else { // Second step of OAuth - get token from OAuth service String code = ApexPages.currentPage().getParameters().get('code'); System.debug('OAuth Step 2 - code:'+code); String body = 'grant_type=authorization_code&client_id='+clientId+ '&redirect_uri='+redirectUri+'&client_secret='+clientSecret+ '&code='+code; System.debug('body is:'+body); HttpRequest req = new HttpRequest(); req.setEndpoint(loginUrl+'/services/oauth2/token'); req.setMethod('POST'); req.setBody(body); Http h = new Http(); HttpResponse res = h.send(req); String resp = res.getBody(); System.debug('FINAL RESP IS:'+EncodingUtil.urlDecode(resp, 'UTF-8')); ApexPages.currentPage().setCookies(new Cookie[]{new Cookie(cookieName, res.getBody(), null,-1,false)}); // Come back to this page without the code param // This makes things simpler later if you end up doing DML here to<br> // save the token somewhere<br> pageRef = new PageReference(url); pageRef.setRedirect(true); } return pageRef; } public static String getRestTest() { String oauth = (ApexPages.currentPage().getCookies().get(cookieName) != null ) ? ApexPages.currentPage().getCookies().get(cookieName).getValue() : null; JSONObject oauthObj = new JSONObject( new JSONObject.JSONTokener(oauth)); String accessToken = oauthObj.getValue('access_token').str, instanceUrl = oauthObj.getValue('instance_url').str; HttpRequest req = new HttpRequest(); req.setMethod('GET'); req.setEndpoint(instanceUrl+'/services/apexrest/superpat/TestRest?name=Pat'); req.setHeader('Authorization', 'OAuth '+accessToken); Http http = new Http(); HTTPResponse res = http.send(req); System.debug('BODY: '+res.getBody()); System.debug('STATUS:'+res.getStatus()); System.debug('STATUS_CODE:'+res.getStatusCode()); return res.getBody(); } } Page : <apex:page <h1>Rest Test</h1> getRestTest says "{!restTest}" </apex:page> Please share your valuable thoughts and kindly reply me, what went wrong and how would I fix it? I was able to set up a sandbox to call itself. Let's call this one SB1. I set up the Remote Site and the Connected App and was able to have it call out to itself which is great. But then I next tried to move all of my code to a new sandbox (Let's call it SB2). I wanted that new sandbox to try and connect to itself. So, I set up a Remote Site & Connected App, but I always get an authentication error. Then, I tried this next: In SB2 I created a Remote Site with the endpoint of SB1 so that I could try calling SB1 from SB2. Also I used the client Id and client Secret of SB1. I used my username/password for Sb2 and it worked. I am really confused because I didn't think I could use the url, clientId, clientSecret for SB1 with the credentials of SB2. And further, I don't know why I can't use credentials of SB2 with the url, clientId, clientSecrent of SB2. I'm wondering if I only need 1 Connected App for all my Sandboxes? Any help would be apprectiated. Thanks, Chris Could you please help me in this problem , In fact , i need to get the authorization code automatically in the controller without redirecting to the authorization URL to hget the code (using pagereference). Is there a manner to do it? In fact from an enduser side , when clicking on a button he is automatically connected without being interrupted with the link to get the code. Thank you for your help, this helped me to call a Webservice as callout in another Webservice in same org. Thank you for posting solution. @ Pat Patterson This will help you for sure.Here rest api integration is performed by taking example of two salesforce system.
https://developer.salesforce.com/forums?id=906F000000099zbIAA
CC-MAIN-2020-34
refinedweb
3,921
63.09
Win a copy of Barcodes with iOS this week in the iOS forum or Core Java for the Impatient in the Java 8 forum! Parul Sweetie wrote: 2. class Game { 3. static String s = "-"; 4. String s2 = "s2"; 5. Game(String arg) { s += arg; } 6. } 7. public class Go extends Game { 8. Go() { super(s2); } 9. { s += "i "; } 10. public static void main(String[] args) { 11. new Go(); 12. System.out.println(s); 13. } 14. static { s += "sb "; } 15. } The problem is that we can,t use s2 in call to super as the instance has not been created yet...Please let me know how to compile this code.
http://www.coderanch.com/t/605575/java-programmer-SCJP/certification/Instance-variable
CC-MAIN-2015-11
refinedweb
109
89.55
Namespaces and autoloading are not topics that are usually discussed when it comes to working with WordPress plugins. Some of this has to do with the community that's around it, some of this has to do with the versions of PHP that WordPress supports, and some of it simply has to do with the fact that not many people are talking about it. And that's okay, to an extent. Neither namespaces nor autoloading are topics that you absolutely need to use to create plugins. They can, however, provide a better way to organize and structure your code as well as cut down on the number of require, require_once, include, or include_once statements that your plugins use.. Before We Begin To get started, you're going to need the following tools: -. Once you've got all of that in place, let's get started on building a plugin. Note that if any of the above seems new to you, then don't hesitate to review any of my previous tutorials on my profile page. Furthermore, you can follow me on my blog and/or Twitter at @tommcfarlin where I talk about software development in the context of WordPress. With that said, let's get started. What We're Going to Build In this series, we're going to be building a simple plugin primarily to demonstrate how namespaces and autoloading work in PHP. But to do that, it always helps to apply the concepts in a practical way. To that end, we'll be building a plugin that makes it easy to load stylesheets and JavaScript styles in our plugin, and that displays a meta box that prompts the user with a question to help them brainstorm something about which to blog. No, this isn't something that you'd likely submit to the WordPress Plugin Repository, nor is it something that you'll likely use outside of this particular demo. But remember, the purpose of this series is to demonstrate how namespaces and autoloading work. And it's through this example that we're going to do just that. Building the Plugin If you've followed any of my previous tutorials, then you know one of the things I like to do is to plan out what we're going to build before we jump into writing any code. So for the first iteration of this plugin, this is what we know we're going to need to do: - Define a bootstrap file for starting the plugin. - Set up a directory for all of the files that will render the meta box. - Create a directory for housing the class that will load our dependencies. - Prepare the stylesheets and JavaScript for our plugin. It seems straightforward, right? If not, no worries. I'm going to walk you through the entire process complete with code, comments, screenshots, and explanations. Let's get started. Creating the Plugin Directory From the outset, we know that we're going to need a file that serves as the bootstrap for the plugin. We also know that we're going to need a directory for the administrative functionality. Let's go ahead and create that now: Obviously, we have a single empty file and an admin directory. Let's go ahead and set up this plugin so that it shows up within the context of the WordPress Plugin activation screen. To do this, we need to add the following block of code to the top of the plugin 0.1.0 * @package tutsplus_namespace_demo * * @wordpress-plugin * Plugin Name: Tuts+ Namespace Demo * Plugin URI: * Description: Learn how to use Namespaces and Autoloading in WordPress. * Version: 0.1.0 * Author: Tom McFarlin * Author URI: * License: GPL-2.0+ * License URI: */ Then, when you navigate to the WordPress Plugin Page in the administration area, you should see it show up in your list of plugins. If you opt to activate it, nothing will happen since we haven't written any code. At this point, we can go ahead and begin defining the class that will render our meta box on the Add New Post page. Adding a Meta Box This part of the tutorial will assume that you're somewhat versed in creating meta boxes. If not, don't hesitate to review the concepts in this series and then return to this part once done. First, let's create a file called class-meta-box-display.php in the admin directory of our plugin. The code should include the following. Be sure to review the comments to make sure you understand everything that this class is responsible for doing. < { /** * Renders a single string in the context of the meta box to which this * Display belongs. */ public function render() { echo 'This is the meta box.'; } } From the code above, you should be able to ascertain that this class will be responsible for displaying the content inside the meta box. For now, however, we simply have it echoing a statement for the view. We'll change this later in the tutorial. Next, we need to introduce a class that represents the meta box itself. So create a class-meta-box.php file in the admin directory of our plugin. Here's the code for doing exactly that. Again, review the code and then I'll explain what's happening below the class: <?php /** * Represents a meta box to be displayed within the 'Add New Post' page. */ /** * Represents a meta box to be displayed within the 'Add New Post' page. * * The class maintains a reference to a display object responsible for * displaying whatever content is rendered within the display. */ class Meta_Box { /** * A reference to the Meta Box Display. * * @access private * @var Meta_Box_Display */ private $display; /** * Initializes this class by setting its display property equal to that of * the incoming object. * * @param Meta_Box_Display $display Displays the contents of this meta box. */ public function __construct( $display ) { $this->display = $display; } /** * Registers this meta box with WordPress. * * Defines a meta box that will render inspirational questions at the top * of the sidebar of the 'Add New Post' page in order to help prompt * bloggers with something to write about when they begin drafting a post. */ public function init() { add_meta_box( 'tutsplus-post-questions', 'Inspiration Questions', array( $this->display, 'render' ), 'post', 'side', 'high' ); } } This class maintains a single attribute which is a reference to its display. This means that this class is responsible for defining the meta box (which, in turn, calls on the display object to render the message). The display is maintained as a private property set in the constructor. The meta box isn't actually defined until the init method is called (which we'll see in the plugin's bootstrap later in the tutorial). At this point, we have everything we need to display a rudimentary meta box on the Add New Post Page. But first, we need to set up our plugin's bootstrap. In previous tutorials, I've done this a lot so I'm going to include just the code that's required (since I've defined the header above). I've added comments, but I'll also make sure to explain what's happening after the code. This is specifically relevant because our autoloader will eventually negate the need for some of what you're going to see. <?php // If this file is accessed directory, then abort. if ( ! defined( 'WPINC' ) ) { die; } // Include the files for rendering the display. include_once( 'admin/class-meta-box.php' ); include_once( 'admin/class-meta-box-display.php' ); add_action( 'plugins_loaded', 'tutsplus_namespace_demo' ); /** * Starts the plugin by initializing the meta box, its display, and then * sets the plugin in motion. */ function tutsplus_namespace_demo() { $meta_box = new Meta_Box( new Meta_Box_Display() ); $meta_box->init(); } First, we make sure that this file can't be accessed directly and it can only be run by WordPress itself. Next, we include_once the classes we've created thus far. Next, we instantiate the Meta_Box and pass it an instance of the Meta_Box_Display in its constructor. Finally, we call the init method that resides in the Meta_Box class. Assuming all goes well, we should be able to activate the plugin and see the meta box on an Add New Post page (or, really, an Update Post page, as well). At this point, we have a functioning plugin, but it doesn't really do anything other than create a meta box and display a string of text. Let's at least get it to display some inspirational quotes and display a random one each time the page is loaded. Displaying Inspiration Quotes First, we need to find a text file of inspiration quotes. Luckily, the Internet provides a plethora of these that we can use in our project (and they are freely available). To that end, I've created a data subdirectory in admin that I'm using to house my questions.txt file. Next, we're going to need to create a class that will: - Open the file. - Read a random line into a string. - Close the file. - Return the string to the caller. Let's go ahead and create that class now. Because this is a utility and it's going to be used in the administrative side of the plugin, let's create a util subdirectory in admin. Next, let's create a file called class-question-reader.php. We'll specify the code for this class in a moment, but return to the plugin's bootstrap file and remember to include the file. The resulting code should look like this: <?php // Include the files for rendering the display. include_once( 'admin/class-meta-box.php' ); include_once( 'admin/class-meta-box-display.php' ); include_once( 'admin/util/class-question-reader.php' ); As you can see, the number of files we're having to manually include is getting longer. Imagine if we were working on a large plugin! Nevertheless, we'll come back to this later in the series. For now, let's turn our attention back to the question reader. The code for the class should look like this: <?php /** * Reads the contents of a specified file and returns a random line from the * file. */ /** * Reads the contents of a specified file and returns a random line from the * file. * * This class is used to populate the contents of the meta box with questions * that prompt the user for ideas about which to write. * * Note this class is only for demo purposes. It has no error handling and * assumes the specified file always exists. */ class Question_Reader { /** * Retrieves a question from the specified file. * * @param string $filename The path to the file that contains the question. * @return string $question A single question from the specified file. */ public function get_question_from_file( $filename ) { $question = ''; $file_handle = $this->open( $filename ); $question = $this->get_random_question( $file_handle, $filename ); $this->close( $file_handle ); return $question; } /** * Opens the file for reading and returns the resource to the file. * * @access private * @param string $filename The path to the file that contains the question. * @return resource A resource to the file. */ private function open( $filename ) { return fopen( $filename, 'r' ); } /** * Closes the file that was read. * * @access private * @param string $file_handle The resource to the file that was read. */ private function close( $file_handle ) { fclose( $file_handle ); } /** * Opens the file for reading and returns the resource to the file. * * @access private * @param string $file_handle The resource to the file that was read. * @param string $filename The path to the file containing the question. * @return string $question The question to display in the meta box. */ private function get_random_question( $file_handle, $filename ) { $questions = fread( $file_handle, filesize( $filename ) ); $questions = explode( "\n", $questions ); // Look for a question until an empty string is no longer returned. $question = $questions[ rand( 0, 75 ) ]; while ( empty( $question ) ) { $question = $questions[ rand( 0, 75 ) ]; } return $question; } } Notice that the code for this is relatively straightforward, but if you're not familiar with some of the basic file operations in PHP, here's what we're doing: - We're opening the file using fopen, which will grant us a resource to read the file. - Next, we're reading the contents of the file and then taking each line of the file and writing it to an index of an array. - After that, we pick a random number from the array of questions and return it to the method that's calling it. If it returns an empty string, we look again until a question is located. - Then, we close the resource to the file. Ultimately, to use this class, you simply need to instantiate it, know the path to a file full of questions, and then call the get_question_from_file method. Note: This class does not do any error handling. That's a standard practice when working with files. For example, what should we do if the file doesn't exist? What should we do if it's not formatted correctly, or what if we fail to close the resource? All of these are good questions, but they are outside the scope of this tutorial. All of this information can be found in the PHP manual (and perhaps some other tutorials across the Envato Tuts+ network). For now, though, we're concerned with reading a file that we know exists, and we're concerned with displaying the results in a meta box. What We Have So Far At this point, we can begin putting everything together. Assuming we've done everything correctly, we should be able to pass an instance of the Question_Reader to the Meta_Box_Display, ask for a question, and then display it in the meta box. First, let's update the bootstrap file: <?php function tutsplus_namespace_demo() { $meta_box = new Meta_Box( new Meta_Box_Display( new Question_Reader() ) ); $meta_box->init(); } In the code above, notice that the Meta_Box_Display now accepts an instance of the question reader into its constructor. This implies we'll need to introduce a new property, which we'll do now: < { /** * A reference to the object responsible for retrieving a question to display. * * @access private * @var Question_Reader $question_reader */ private $question_reader; /** * Initializes the class by setting the question reader property. * * @param Question_Reader $question_reader The object for retrieving a question. */ public function __construct( $question_reader ) { $this->question_reader = $question_reader; } /** * Renders a single string in the context of the meta box to which this * Display belongs. */ public function render() { $file = dirname( __FILE__ ) . '/data/questions.txt'; $question = $this->question_reader->get_question_from_file( $file ); echo wp_kses( $question ); } } Notice that this file uses the path to the questions that file added to the data subdirectory. Aside from the path being hard-coded, this class is also dependent on an instance of the Question_Reader. For the purposes of the demo that we're working toward (namely, namespaces and autoloading), that's okay. In a future project, we'd like the projects to have less coupling among themselves. Perhaps this will be a topic for a future tutorial. The primary takeaway from the code above, though, is that the Meta_Box_Display class can now display a question to the user. Furthermore, notice the use of wp_kses for the sake of sanitizing the data before presenting it to the user. Refreshing the Add New Post page should present you with an image like this: And if you refresh the page then you can see new questions being loaded. Where Do We Go From Here? Obviously, we've yet to actually tackle the topics of namespaces and autoloading, but that's okay! It's important that we lay the foundation for a plugin that doesn't use them. That way, when we do implement them, we can see the benefits they carry. Furthermore, we've still got some additional work to do: We need to introduce JavaScript and CSS and an assets loader. This will allow us to get an even broader picture of how packaging our files into namespaces is beneficial. Remember, you can find all of my previous tutorials on my profile page, and you can follow me on my blog or on Twitter.. With that said, we'll have a working version of the plugin ready to download starting in the next tutorial in this series. If you'd like to use the code in the above tutorial, don't hesitate to try and do so. Furthermore, feel free to ask any questions in the comments. Resources Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/using-namespaces-and-autoloading-in-wordpress-plugins-part-1--cms-27157
CC-MAIN-2020-29
refinedweb
2,715
71.95
This fortune telling machine is a nod to the old Zoltar machines found at circuses and arcades. It's quick and easy to build and can accurately predict the future 50% of the time! Modulo Gizmo Kit has everything you need for this project (except basic supplies like paper and tape). About Modulo This project is built using Modulo, a modular electronics system that makes it easy to assemble electronic devices. To learn more about Modulo in general, check out modulo.co. Physical Construction Download and print out the Zoltar enclosure, pointer and servo mount. It's best to use thick paper. You could also glue copy paper to cardstock for better rigidity. Cut out a holes for the knob and servo. You may wish to use an x-acto knife for this, though scissors will work in a pinch. Cut slits at the corners (on the short bold lines) and use tape to make a box. Cut out the black square from the servo mount. Using tape, mount the servo so that the shaft barely protrudes through the top of the enclosure. With the servo's shaft turned all the way counter-clockwise align the servo horn with the topmost answer line. Tape the pointer to the servo horn. Connect the servo to the modulo controller with the orange signal wire next to the "0". Put the knob through the hole and attach the knob cap. Your Zoltar is ready to be programmed! Software We'll use the Arduino app to control Zoltar. If you haven't used Modulo with Arduino before, check out the getting started tutorials here. If you'd like to skip ahead and get Zoltar working quickly, you can just download the run the Zoltar.ino file below. The code first includes the Modulo, Wire, and Servo header files and declares objects for the Servo and Knob. #include "Modulo.h" #include "Wire.h" #include "Servo.h" KnobModulo knob; Servo servo; Next, define a callback function that gets executed any time the knob is pressed. The function will move the servo in a sinusoidal pattern for 3 seconds. While it's moving, it will change the color of the knob in a rainbow pattern. void onButtonPress(KnobModulo &k) { # First get the current time in milliseconds. long startTime = millis(); # Continually do this until 3 seconds (3000ms) past the start time while (millis() < startTime + 3000) { # Figure out the servo angle, which ranges between 0 and 180 # in a sinusoidal pattern based on the current time. servo.write(90*(1 + sin(millis()/400.0))); # Set the hue of the knob based on the current time too. knob.setHSV((millis() % 500)/500.0, 1, 1); } } The setup function attaches the Servo to pin 0 and registers the callback function. void setup() { servo.attach(0); knob.setPositionChangeCallback(onPositionChanged); knob.setButtonPressCallback(onButtonPress); } The loop function calls Modulo.loop() (so that button press events can be handled) and pulses the Knob's LED. void loop() { Modulo.loop(); knob.setHSV(0, 1, sin(millis()/1000.0)); } Upload your code to the Modulo controller and your Zoltar is ready to predict the future!
https://www.hackster.io/ekt/diy-zoltar-a-fortune-telling-machine-406404?ref=platform&ref_id=12765_trending___&offset=1
CC-MAIN-2018-26
refinedweb
518
67.15
The combined size of all attached files for a job is limited to 4 GB. By default, a worker on a Windows® operating system is installed as a service running as LocalSystem, so it does not have access to mapped network drives. Often a network is configured to not allow services running as LocalSystem to access UNC or mapped network shares. In this case, you must run the mjs service under a different user with rights to log on as a service. See the section Set the User (MATLAB Parallel Server) in the MATLAB® Parallel Server™ System Administrator's Guide. If a worker cannot find the task function, it returns the error message Error using ==> feval Undefined command/function 'function_name'. The worker that ran the task did not have access to the function function_name. One solution is to make sure the location of the function’s file, function_name.m, is included in the job’s AdditionalPaths property. Another solution is to transfer the function file to the worker by adding function_name.m to the AttachedFiles property of the job. If a worker cannot save or load a file, you might see the error messages ??? Error using ==> save Unable to write file myfile.mat: permission denied. ??? Error using ==> load Unable to read file myfile.mat: No such file or directory. In determining the cause of this error, consider the following questions: What is the worker’s current folder? Can the worker find the file or folder? What user is the worker running as? Does the worker have permission to read or write the file in question? A job or task might get stuck in the queued state. To investigate the cause of this problem, look for the scheduler’s logs: Platform LSF® schedulers might send emails with error messages. Microsoft® Windows HPC Server (including CCS), LSF®, PBS Pro®, and TORQUE save output messages in a debug log. See the getDebugLog reference page. If using a generic scheduler, make sure the submit function redirects error messages to a log file. Possible causes of the problem are: job storage location. The storage location might not be accessible to all the worker nodes, or the user that MATLAB runs as does not have permission to read/write the job files. If using a generic scheduler: The environment variable PARALLEL_SERVER_DECODE_FUNCTION was not defined before the MATLAB worker started. The decode function was not on the worker’s path. If your job returned no results (i.e., fetchOutputs(job) returns an empty cell array), it is probable that the job failed and some of its tasks have their Error properties set. You can use the following code to identify tasks with error messages: errmsgs = get(yourjob.Tasks, {'ErrorMessage'}); nonempty = ~cellfun(@isempty, errmsgs); celldisp(errmsgs(nonempty)); This code displays the nonempty error messages of the tasks found in the job object yourjob. If you are using a supported third-party scheduler, you can use the getDebugLog function to read the debug log from the scheduler for a particular job or task. For example, find the failed job on your LSF scheduler, and read its debug log: c = parcluster('my_lsf_profile') failedjob = findJob(c, 'State', 'failed'); message = getDebugLog(c, failedjob(1)) For testing connectivity between the client machine and the machines of your compute cluster, you can use Admin Center. For more information about Admin Center, including how to start it and how to test connectivity, see Start Admin Center (MATLAB Parallel Server) and Test Connectivity (MATLAB Parallel Server). Detailed instructions for other methods of diagnosing connection problems between the client and MATLAB Job Scheduler can be found in some of the Bug Reports listed on the MathWorks Web site. The following sections can help you identify the general nature of some connection problems. If you cannot locate or connect to your MATLAB Job Scheduler with parcluster, the most likely reasons for this failure are: The MATLAB Job Scheduler is currently not running. Firewalls do not allow traffic from the client to the MATLAB Job Scheduler. The client and the MATLAB Job Scheduler are not running the same version of the software. The client and the MATLAB Job Scheduler cannot resolve each other’s short hostnames. The MATLAB Job Scheduler is using a nondefault BASE_PORT setting as defined in the mjs_def file, and the Host property in the cluster profile does not specify this port. If a warning message says that the MATLAB Job Scheduler cannot open a TCP connection to the client computer, the most likely reasons for this are The example code for generic schedulers with non-shared file systems contacts an sftp server to handle the file transfer to and from the cluster’s file system. This use of sftp is subject to all the normal sftp vulnerabilities. One problem that can occur results in an error message similar to this: Caused by: Error using ==> RemoteClusterAccess>RemoteClusterAccess.waitForChoreToFinishOrError at 780 The following errors occurred in the com.mathworks.toolbox.distcomp.clusteraccess.UploadFilesChore: Could not send Job3.common.mat for job 3: One of your shell's init files contains a command that is writing to stdout, interfering with sftp. Access help com.mathworks.toolbox.distcomp.remote.spi.plugin.SftpExtraBytesFromShellException: One of your shell's init files contains a command that is writing to stdout, interfering with sftp. Find and wrap the command with a conditional test, such as if ($?TERM != 0) then if ("$TERM" != "dumb") then /your command/ endif endif : 4: Received message is too long: 1718579037 The telling symptom is the phrase " Received message is too long:" followed by a very large number. The sftp server starts a shell, usually bash or tcsh, to set your standard read and write permissions appropriately before transferring files. The server initializes the shell in the standard way, calling files like .bashrc and .cshrc. This problem happens if your shell emits text to standard out when it starts. That text is transferred back to the sftp client running inside MATLAB, and is interpreted as the size of the sftp server's response message. To work around this error, locate the shell startup file code that is emitting the text, and either remove it or bracket it within if statements to see if the sftp server is starting the shell: if ($?TERM != 0) then if ("$TERM" != "dumb") then /your command/ endif endif You can test this outside of MATLAB with a standard UNIX or Windows sftp command-line client before trying again in MATLAB. If the problem is not fixed, the error message persists: > sftp yourSubmitMachine Connecting to yourSubmitMachine... Received message too long 1718579042 If the problem is fixed, you should see: > sftp yourSubmitMachine Connecting to yourSubmitMachine...
https://fr.mathworks.com/help/parallel-computing/troubleshooting-and-debugging.html
CC-MAIN-2020-05
refinedweb
1,107
54.42
If you are starting to create a library of reuseable PHP functions, you will soon encounter some typical problems. For example, you will experience name clashes if you start mixing your own components with that of other developers: sooner or later some foreign function will have the same name as one of yours or will use a global variable that you are using, too. But you may even experience problems if you are using only selfmade components. Imagine for example a set of functions that is manipulating a database link. Example 1 shows such a set of functions which shares a common variable named $Link_ID. Example 1: A hypothetical set of functions for accessing a database. <?php $Link_ID = 0; // ID of current database link $Query_ID = 0; // ID of currently active query $error = 0; // current database error state function connect() { … } function query() { … } function next_record() { … } function num_rows() { … } ?> This is going to be a problem as soon as you have a page that needs two concurrently active queries, because these queries would fight for the global variables. If you had PHP pointer or reference types, you could call connect(), query() and next_record() with the with references to the appropriate variables. But in this case you would have gained nothing, because you would be back to dealing with $Link_IDs and $Query_IDs yourself. PHP offers a different approach to solve this problem: You may group a number of variables and functions together into a package and name that package. The package itself uses no names in your global namespace. You may then create copies of the packages and insert them under any variable name into your global namespace, much like you can mount disks anywhere in a directory hierarchy. Creating a package of variables and functions is called “declaring a class” in PHP and mounting a package copy in your namespace is called “creating an object as an instance of a class”. Example 2 shows how a class is defined using the “class” keyword and how objects are created using the “new” operator. Compare this to the definition shown in Example 1 and see how they match one to one. Example 2: Definition of a class DB_MiniSQL with call properties of Example 1. Creation of two object instances $db1 and $db2 of that class. <?php class DB_MiniSQL { var $Link_ID = 0; // ID of current database link var $Query_ID = 0; // ID of currently active query var $error = 0; // current database error state function connect() { … } function query() { … } function next_record() { … } function num_rows() { … } } $db1 = new DB_MiniSQL; $db2 = new DB_MiniSQL; ?> A declaration of a class does not use names in the global variable namespace - the class declaration only establishes a plan how to build DB_MiniSQL variables, but does not actually build such variables. PHP now knows what makes up a DB_MiniSQL object, if it were asked to make one. We ask PHP to make such objects using the “new” operator and name them by assigning them to a variable. We can have multiple objects of the same type under different names – $db1 and $db2 in our example. Unlike the situation in Example 1, this does not lead to name clashes, because both variables differ in their “pathnames” (Remember the disk mountpoint analogy!): $db1->Link_ID and $db2->Link_ID are obviously different variables. With function calls it is the same: $db1->query() sends a query via one link, $db2->query() via the other link. For library developers this is an important feature, since it allows us to encapsulate the definition of our functionality in a non-intrusive way. We leave it to the user of our functions to decide how many copies of them are needed and under what name. For users of such a library it is easy to handle this: They just have to get used to choose an appropriate name for the imported functions (for example by writing “$db1 = new DB_MiniSQL”) and then always use the functions under that name prefix (for example by writing “$db1->query()”). But that’s the view from the outside, from the users side of the code. From the inside it is a little bit different. Why? Imagine the query() function wanted to check the value of $Link_ID. It would have to know its own name, because it would have to decide whether to access $db1->Link_ID or $db2->Link_ID or another, completely different object. That would be quite inconvenient to code. All local object variables and functions are available under the prefix $this instead, independent of the actual name of the object. So in our case, query() could simply access its own Link_ID as $this->Link_ID and call its own connect() function as $this->connect(). Note that the variable name is “this->Link_ID” and thus it is written as “$this->Link_ID”, not as “$this->$Link_ID”. This is a very common beginners error. {mospagebreak title=A database access class as example} We will be coding a class DB_Sql for access to a MySQL database as an example. [1] Our class shall have variables $Host, $Database, $User and $Password, which define the server to connect and contain all necessary information to log on to the database server. The result of that logon will be a $Link_ID, which must be saved by the class, too. Queries to the database will either produce a result set referenced by a $Query_ID or error codes, which will be kept in $Error and $Errno for the error text and number respectively. While we read the result set of the query, we will keep the current row in a hash named $Record and we will keep the current row number in $Row. To do all this, our class will have to create the variables show in Example 3 – the functions working with these variables are still missing, though. Example 3: Definition and explaination of all variables used in DB_Sql. class DB_Sql { var $Host = “”; // Hostname of our MySQL server. var $Database = “”; // Logical database name on that server. var $User = “”; // User und Password for login. var $Password = “”; var $Link_ID = 0; // Result of mysql_connect(). var $Query_ID = 0; // Result of most recent mysql_query(). var $Record = array(); // current mysql_fetch_array()-result. var $Row; // current row number. var $Errno = 0; // error state of query… var $Error = “”; // insert functions here. } To be able to work with this class you will have to add at least code that establishes a database connection. This can fail, if the server is not reachable, the database is not present or username and password are wrong. The class must have a way to signal an error and stop the program. We define a function halt(), which prints an error message and stops the program. We also define a function connect(), which tries to get hold of a valid $Link_ID. The code is shown in Example 4. Example 4: The functions halt() and connect() are to be added to the class DB_Sql.. function halt($msg) { printf(“</td></tr></table><b> Database error:</b> %s<br>n”, $msg); printf(“<b>MySQL Error</b>: %s (%s)<br>n”, $this->Errno, $this->Error); die(“Session halted.”); } function connect() { if ( 0 == $this->Link_ID ) { $this->Link_ID=mysql_connect($this->Host, $this->User, $this->Password); if (!$this->Link_ID) { $this->halt(“Link-ID == false, connect failed”); } if (!mysql_query(sprintf(“use %s”,$this->Database),$this->Link_ID)) { $this->halt(“cannot use database “.$this->Database); } } } The connect() function tests for a valid link. If this is not the case, we try to establish such a link using the values of $Host, $User and $Password. If this fails, we signal an error and stop processing, otherwise we try to change the current database to $Database employing a MySQL “use” command.{mospagebreak title=Extending classes} Our class is now useable, albeit not productively. We are able to establish a database link, but without query() and next_record() functions we are unable to make use of it. We are going to use the class anyway, to show how to set it up for a production environment. Example 5 shows a workable, but inconvenient method how you could deploy the class. Example 5: A non-recommended method to configure and deploy DB_Sql. <?php // The include file contains the definition of DB_Sql. require(“db_mysql.inc”); // $db is our database object. $db = new DB_Sql; // Overwrite the connection parameters as needed. $db->Host = “localhost”; $db->User = “kris”; $db->Password = “” $db->Database = “sampleserv”; // Try to connect to the database server process. $db->connect(); ?> This is not a recommended method to configure a class for use, because after creation of the object you have to set up all variables within that object manually and you have to do it over and over on each page where you are using it. It would be much nicer if we were able to define a class that is just like DB_Sql, but with different connect parameters. In fact we can easily do this: We can extend any given class and base the definition of a new class on any single existing class. Example 6 shows the definition of a class DB_Sample, which performs exactly the same connect as Example 5. Example 6b shows how to use this class. Example 6: Definition of a new class DB_Sample, based on DB_Sql // DB_Sample is just like DB_Sql, only different.. class DB_Sample extends DB_Sql { var $Host = “localhost”; var $User = “kris”; var $Password = “”; var $Database = “sampleserv”; }class DB_Sample extends DB_Sql { var $Host = “localhost”; var $User = “kris”; var $Password = “”; var $Database = “sampleserv”; } Example 6b: Using DB_Sample. // This include file contains the definiton of DB_Sql require(“db_mysql.inc”); // This include file contains the definition of DB_Sample require(“local.inc”); // Create a database connection object $db = new DB_Sample; // Connection to database… $db->connect(); DB_Sample is not empty, but contains exactly the same variables and functions as DB_Sql, although these are not written down explicitly in the class definition. The magic is in the class definition: DB_Sample extends DB_Sql, that is, DB_Sample starts as a simple copy of DB_Sql. Within the class body of DB_Sample certain definitions of DB_Sql are overwritten, specifically we redefine the database connection parameters. On use of DB_Sample as shown in Example 6b, the database connection will be created using these redefined parameters. Unlike Example 5 we do not have to mention these parameters on each page, but DB_Sample “automatically” knows the appropriate parameters and does the right thing. If we had to change the connection parameters, we could do so by editing a single file, local.inc. This is very convenient, especially in larger projects. {mospagebreak title=Queries and query results} Example 7: Adding the functions query(), next_record() and seek() to DB_Sql. function query($Query_String) { $this->connect(); # printf(“Debug: query = %s<br>n”, $Query_String); $this->Query_ID = mysql_query($Query_String,$this->Link_ID); $this->Row = 0; $this->Errno = mysql_errno(); $this->Error = mysql_error(); if (!$this->Query_ID) { $this->halt(“Invalid SQL: “.$Query_String); } return $this->Query_ID; } function next_record() { $this->Record = mysql_fetch_array($this->Query_ID); $this->Row += 1; $this->Errno = mysql_errno(); $this->Error = mysql_error(); $stat = is_array($this->Record); if (!$stat) { mysql_free_result($this->Query_ID); $this->Query_ID = 0; } return $stat; } function seek($pos) { $status = mysql_data_seek($this->Query_ID, $pos); if ($status) $this->Row = $pos; return; } Example 7 adds three functions to our DB_Sql class which make the class actually useful: Finally we are able to make use of the database link for sending queries to the database and retrieving the results. For this purpose, query() calls connect() internally to create the database link. This saves you a manual call to connect if you are using the database class later in your pages. If you activate the disabled printf() statement within query(), you get a list of all queries on a page as they are made on the page. This is very useful for debugging your SQL and to get a feeling how expensive the creation of a certain page actually is. When you send a query, a new $Query_ID is being generated and the current row number is reset to zero. After that we check the error state to see if the query was legal. If not, we raise an error and halt the program. If the query was legal, we return the query id to the caller. The next_record() function can be used to retrieve the query result. The function reads the current result row, increments the row counter and checks for errors. If the result set has been read completely, we call mysql_free_result() to save application memory. next_record() returns “true” as long as there are still result records so that you may use the function as the condition of a while()-loop. Using seek() you may move within the current result set and read a single result multiple times (unless it has been freed) or skip certain records at the beginning of the result set. Example 8 shows how to use query() and next_record() to get data from a table. Example 8: Query to the table ad_customers within the database sampleserv. <?php require(“db_mysql.inc”); // DB_Sql require(“local.inc”); // DB_Sample $db = new DB_Sample; $query = “select name, graphics, link, desc from ad_customers”; $db->query($query); ?> <html> <body bgcolor=”#ffffff”> <table border=1 bgcolor=”#eeeeee”> <tr> <th>ID</th> <th>Graphics</th> <th>Link</th> <th>Desc</th> </tr> <?php while($db->next_record()): ?> <tr> <td><?php print $db->Record["name"] ?></td> <td><?php print $db->Record["graphics"] ?></td> <td><?php print $db->Record["link"] ?></td> <td><?php print $db->Record["desc"] ?></td> </tr> <?php endwhile ?> </table> </body> </html> Example 8b: Definition of table ad_customers. CREATE TABLE ad_customers ( id int(11) DEFAULT ’0′ NOT NULL auto_increment, name varchar(127) DEFAULT ” NOT NULL, graphics varchar(127) DEFAULT ” NOT NULL, link varchar(127) DEFAULT ” NOT NULL, desc varchar(127) DEFAULT ” NOT NULL, PRIMARY KEY (id), KEY name (name), ); CREATE TABLE banner_rotate ( pos int(11) DEFAULT ’0′ NOT NULL, ); Many webservers keep rotating banner ads at the top or bottom of their pages. These banners are present as GIF images with known path names. In our sample database we keep a table named ad_customers, which lists information about each banner. We keep a banner name, the pathname to the GIF image on disk, a link target that is to be activated when the banner is clicked and a description text for the images alt-attribute. Example 8 shows, how to read this table using the DB_Sample class. We are generating an HTML table with all banner names and related data. Example 8b shows the database table definitions involved. The second table, banner_rotate, contains just a single row with a single column with the currently active ad banner number. The rotation program uses this information to control the banner rotation. The actual banner rotation program (Example 9) is just a single function banner_rotate(), which does nothing more than incrementing the pos counter from the banner_rotate table and produces the appropriate image tag. The locking shown in that function is specific to MySQL (MySQL does not do proper transactions). The function is pretty linear: It locks the banner_rotate table and updates the counter using an SQL update statement. After that it uses a SQL select statement to read that counter value and unlocks the table. Using the counter value corrected modulo the number of actual ad customers the appropriate customer data from the ad_customers table is selected and an image tag is created which is embedded into a link. We do not directly jump into the customers presentation using this link, but we refer the user to another local program which registers the click and additional data about the users browser. It is that second program which generates a Location header to redirect the user to the final external destination. This is the only way to measure the efficiency of a banner and to get provable data for the customer. Example 9: Function banner_rotate() to rotate banner ads. <?php function banner_rotate() { global $db; // Assumes that a global object with that name exists. $max_ads = 4; // CONFIGURE ME! $db->query(“lock tables banner_rotate”); // Set lock. $db->query(“update banner_rotate set pos = pos + 1″); // Increment counter. $db->query(“select pos from banner_rotate”); // Read counter. $db->next_record(); $pos = $db->Record["pos"]; $db->query(“unlock tables”); // Drop lock. // Find matching customer (mod $max_ads). $query = sprintf(“select * from ad_customers where id = ‘%s’”, $pos % $max_ads); $db->query($query); $db->next_record(); // Link and Image generation printf(“<a href=”jump.php3?name=%s”> <img src=”%s” alt=”%s” width=468 height=60 border=0></a>”, $this->Record["name"], $this->Record["graphics"], $this->Record["desc"]); } ?> The jump.php3 script is not shown. It receives a parameter name identifying the clicked banner. Using that parameter the script can extract the link information from ad_customers and create a Location header to redirect the users browser to the final location. Also, it does record the click in another table named banner. Example 10: More functions for DB_Sql <?php function num_rows() { return mysql_num_rows($this->Query_ID); } function num_fields() { return mysql_num_fields($this->Query_ID); } function f($Name) { return $this->Record[$Name]; } function p($Name) { print $this->Record[$Name]; } function affected_rows() { return @mysql_affected_rows($this->Link_ID); } ?> To complete DB_Sql, we add the functions shown in Example 10. These are intended to ease access to query results: num_rows() and num_fields() return the width and height of the result set. The functions f() and p() are useful to access single result set values. And affected_rows() can be used to test the success of SQL insert or update statments. The class DB_Sql is a basic building part of PHPLIB The version of DB_Sql from PHPLIB contains some additional functions which are not relevant for this tutorial.
http://www.devshed.com/c/a/PHP/Accessing-Databases-with-Class/
CC-MAIN-2014-35
refinedweb
2,893
62.58
Java Power Tools 2.7.0 Original Release Note for January 24, 2008 (2.7.0): The most important innovation in JPT 2.7.0 is the ability to use the Java Power Framework in both applications as in the past with JPF and now in applets with JPFApplet. In addition, the core reflection technology has been implemented in a public class JPFPane that may be used to investigate the behavior of suitable methods defined for any class even the library classes supplied by Java. Class TileBox has been changed so that it may be selected with the mouse even it is has no Paintable object within it. This convention makes it easier to use TileBox is simple interactive applications and games. Many of the changes to JPT in Fall 2007 were designed to prepare for the improvements to the Java Power Framework. It is therefore worthwhile to review the release notes on the JPT 2.6.0 site. Release Note for January 25, 2008 (2.7.0a): In the initial posting of January 24, 2008 one applet failed due to the fact that JPT used getDeclaredMethods in class Class. It turns out that even asking for all declared methods causes a Java security exception in an applet deployed in a browser. Why this should be so is something of a puzzle since you have not yet called any methods. It is also the case that the same code works locally in AppletViewer so as a test program AppletViewer cannot be trusted. The classes JPFHelper and JPFPane have been modified to work around this security issue. Now the method getMethods in class Class is used instead. Note that Eclipse 3.3 is so similar to Eclipse 3.2 as far as the settings shown at the above link that we have not created a new version of the screen snapshots for Eclipse 3.3. Methods.java MethodsApplet.java
http://www.ccs.neu.edu/jpt/jpt_2_7/index.htm
CC-MAIN-2015-48
refinedweb
318
63.8
An Introduction to Reasonably Pure Functional Programming Free JavaScript Book! Write powerful, clean and maintainable JavaScript. RRP $11.95 This article was peer reviewed by Panayiotis «pvgr» Velisarakos, Jezen Thomas and Florian Rappl. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! When learning to program you’re first introduced to procedural programming; this is where you control a machine by feeding it a sequential list of commands. After you have an understanding of a few language fundamentals like variables, assignment, functions and objects you can cobble together a program that achieves what you set out for it do – and you feel like an absolute wizard. The process of becoming a better programmer is all about gaining a greater ability to control the programs you write and finding the simplest solution that’s both correct and the most readable. As you become a better programmer you’ll write smaller functions, achieve better re-use of your code, write tests for your code and you’ll gain confidence that the programs you write will continue to do as you intend. No one enjoys finding and fixing bugs in code, so becoming a better programmer is also about avoiding certain things that are error-prone. Learning what to avoid comes through experience or heeding the advice of those more experienced, like Douglas Crockford famously explains in JavaScript: The Good Parts. Functional programming gives us ways to lower the complexity of our programs by reducing them into their simplest forms: functions that behave like pure mathematical functions. Learning the principles of functional programming is a great addition to your skill set and will help you write simpler programs with fewer bugs. The key concepts of functional programming are pure functions, immutable values, composition and taming side-effects. Pure Functions A pure function is a function that, given the same input, will always return the same output and does not have any observable side effect. // pure function add(a, b) { return a + b; } This function is pure. It doesn’t depend on or change any state outside of the function and it will always return the same output value for the same input. // impure var minimum = 21; var checkAge = function(age) { return age >= minimum; // if minimum is changed we're cactus }; This function is impure as it relies on external mutable state outside of the function. If we move this variable inside of the function it becomes pure and we can be certain that our function will correctly check our age every time. // pure var checkAge = function(age) { var minimum = 21; return age >= minimum; }; Pure functions have no side-effects. Here are a few important ones to keep in mind: - Accessing system state outside of the function - Mutating objects passed as arguments - Making a HTTP call - Obtaining user input - Querying the DOM Controlled Mutation You need to be aware of Mutator methods on Arrays and Objects which change the underling objects, an example of this is the difference between Array’s splice and slice methods. // impure, splice mutates the array var firstThree = function(arr) { return arr.splice(0,3); // arr may never be the same again }; // pure, slice returns a new array var firstThree = function(arr) { return arr.slice(0,3); }; If we avoid mutating methods on objects passed to our functions our program becomes easier to reason about, we can reasonably expect our functions not to be switching things out from under us. let items = ['a','b','c']; let newItems = pure(items); // I expect items to be ['a','b','c'] Benefits of Pure Functions Pure functions have a few benefits over their impure counterparts: - More easily testable as their sole responsibility is to map input -> output - Results are cacheable as the same input always yields the same output - Self documenting as the function’s dependencies are explicit - Easier to work with as you don’t need to worry about side-effects Because the results of pure functions are cacheable we can memoize them so expensive operations are only performed the first time the functions are called. For example, memoizing the results of searching a large index would yield big performance improvements on re-runs. Unreasonably Pure Functional Programming Reducing our programs down to pure functions can drastically reduce the complexity of our programs. However, our functional programs can also end up requiring Rain Man’s assistance to comprehend if we push functional abstraction too far. import _ from 'ramda'; import $ from 'jquery'; var Impure = { getJSON: _.curry(function(callback, url) { $.getJSON(url, callback); }), setHtml: _.curry(function(sel, html) { $(sel).html(html); }) }; var img = function (url) { return $('<img />', { src: url }); }; var url = function (t) { return '' + t + '&format=json&jsoncallback=?'; }; var mediaUrl = _.compose(_.prop('m'), _.prop('media')); var mediaToImg = _.compose(img, mediaUrl); var images = _.compose(_.map(mediaToImg), _.prop('items')); var renderImages = _.compose(Impure.setHtml("body"), images); var app = _.compose(Impure.getJSON(renderImages), url); app("cats"); Take a minute to digest the code above. Unless you have a background in functional programming these abstractions (curry, excessive use of compose and prop) are really difficult to follow, as is the flow of execution. The code below is easier to understand and to modify, it also much more clearly describes the program than the purely functional approach above and it’s less code. - The appfunction takes a string of tags - fetches JSON from Flickr - pulls the URLs out the response - builds an array of <img>nodes - inserts them into the document var app = (tags)=> { let url = `{tags}&format=json&jsoncallback=?` $.getJSON(url, (data)=> { let urls = data.items.map((item)=> item.media.m) let images = urls.map((url)=> $('<img />', { src: url }) ) $(document.body).html(images) }) } app("cats") Or, this alternative API using abstractions like fetch and Promise helps us clarify the meaning of our asynchronous actions even further. let flickr = (tags)=> { let url = `{tags}&format=json&jsoncallback=?` return fetch(url) .then((resp)=> resp.json()) .then((data)=> { let urls = data.items.map((item)=> item.media.m ) let images = urls.map((url)=> $('<img />', { src: url }) ) return images }) } flickr("cats").then((images)=> { $(document.body).html(images) }) Note: fetch and Promise are upcoming standards so they require polyfills to use today. Learn PHP for free! Make the leap into server-side programming with a comprehensive cover of PHP & MySQL. Normally RRP $11.95 Yours absolutely free The Ajax request and the DOM operations are never going to be pure but we could make a pure function out of the rest, mapping the response JSON to an array of images – let’s excuse the dependence on jQuery for now. let responseToImages = (resp)=> { let urls = resp.items.map((item)=> item.media.m ) let images = urls.map((url)=> $('<img />', { src: url })) return images } Our function is just doing two things now: - mapping response data-> urls - mapping urls-> images The “functional” way to do this is to create separate functions for those two tasks and we can use compose to pass the response of one function into the other. let urls = (data)=> { return data.items.map((item)=> item.media.m) } let images = (urls)=> { return urls.map((url)=> $('<img />', { src: url })) } let responseToImages = _.compose(images, urls) compose returns a function that is the composition of a list of functions, each consuming the return value of the function that follows. Here’s what compose is doing, passing the response of urls into our images function. let responseToImages = (data)=> { return images(urls(data)) } It helps to read the arguments to compose from right to left to understand the direction of data flow. By reducing our program down to pure functions it gives us a greater ability to reuse them in the future, they are much simpler to test and they are self documenting. The downside is that when used excessively (like in the first example) these functional abstractions can make things more complex which is certainly not what we want. The most important question to ask when refactoring code though is this: Is the code easier to read and understand? Essential Functions Now, I’m not trying to attack functional programming at all. Every developer should make a concerted effort to learn the fundamental functions that let you abstract common patterns in programming into much more concise declarative code, or as Marijn Haverbeke puts it.. A programmer armed with a repertoire of fundamental functions and, more importantly, the knowledge on how to use them, is much more effective than one who starts from scratch. – Eloquent JavaScript, Marijn Haverbeke Here is a list of essential functions that every JavaScript developer should learn and master. It’s also a great way to brush up on your JavaScript skills to write each of these functions from scratch. Arrays Functions Less Is More Let’s look at some practical steps we can take to improve the code below using functional programming concepts. let items = ['a', 'b', 'c']; let upperCaseItems = ()=> { let arr = []; for (let i = 0, ii = items.length; i < ii; i++) { let item = items[i]; arr.push(item.toUpperCase()); } items = arr; } Reduce functions dependence on shared state This may sounds obvious and trivial but I still write functions that access and modify a lot of state outside of themselves, this makes them harder to test and more prone to error. // pure let upperCaseItems = (items)=> { let arr = []; for (let i = 0, ii = items.length; i < ii; i++) { let item = items[0]; arr.push(item.toUpperCase()); } return arr; } Use more readable language abstractions like forEach to iterate let upperCaseItems = (items)=> { let arr = []; items.forEach((item) => { arr.push(item.toUpperCase()); }); return arr; } Use higher level abstractions like map to reduce the amount of code let upperCaseItems = (items)=> { return items.map((item)=> item.toUpperCase()) } Reduce functions to their simplest forms let upperCase = (item)=> item.toUpperCase() let upperCaseItems = (items)=> items.map(upperCase) Delete code until it stops working We don’t need a function at all for such a simple task, the language provides us with sufficient abstractions to write it out verbatim. let items = ['a', 'b', 'c'] let upperCaseItems = items.map((item)=> item.toUpperCase()) Testing Being able to simply test our programs is a key benefit of pure functions, so in this section we’ll set up a test harness for our Flickr module we were looking at earlier. Fire up a terminal and have your text editor poised and ready, we’ll use Mocha as our test runner and Babel for compiling our ES6 code. mkdir test-harness cd test-harness npm init -yes npm install mocha babel-register babel-preset-es2015 --save-dev echo '{ "presets": ["es2015"] }' > .babelrc mkdir test touch test/example.js Mocha has a bunch of handy functions like describe and it for breaking up our tests and hooks such as before and after for setup and teardown tasks. assert is a core node package that can perform simple equality tests, assert and assert.deepEqual are the most useful functions to be aware of. Let’s write our first test in test/example.js import assert from 'assert'; describe('Math', ()=> { describe('.floor', ()=> { it('rounds down to the nearest whole number', ()=> { let value = Math.floor(4.24) assert(value === 4) }) }) }) Open up package.json and amend the "test" script to the following mocha --compilers js:babel-register --recursive Then you should be able run npm test from the command line to confirm everything is working as expected. Math .floor ✓ rounds down to the nearest whole number 1 passing (32ms) Boom. Note: You can also add a -w flag at the end of this command if you want mocha to watch for changes and run the tests automatically, they will run considerably faster on re-runs. mocha --compilers js:babel-register --recursive -w Testing Our Flickr Module Let’s add our module into lib/flickr.js import $ from 'jquery'; import { compose } from 'underscore'; let urls = (data)=> { return data.items.map((item)=> item.media.m) } let images = (urls)=> { return urls.map((url)=> $('<img />', { src: url })[0] ) } let responseToImages = compose(images, urls) let flickr = (tags)=> { let url = `{tags}&format=json&jsoncallback=?` return fetch(url) .then((response)=> response.json()) .then(responseToImages) } export default { _responseToImages: responseToImages, flickr: flickr, } Our module is exposing two methods: flickr to be publicly consumed and a private function _responseToImages so that we can test that in isolation. We have a couple of new dependencies: jquery, underscore and polyfills for fetch and Promise. To test those we can use jsdom to polyfill the DOM objects window and document and we can use the sinon package for stubbing the fetch api. npm install jquery underscore whatwg-fetch es6-promise jsdom sinon --save-dev touch test/_setup.js Open up test/_setup.js and we’ll configure jsdom with our globals that our module depends on. global.document = require('jsdom').jsdom('<html></html>'); global.window = document.defaultView; global.$ = require('jquery')(window); global.fetch = require('whatwg-fetch').fetch; Our tests can sit in test/flickr.js where we’ll make assertions about our functions output given predefined inputs. We “stub” or override the global fetch method to intercept and fake the HTTP request so that we can run our tests without hitting the Flickr API directly. import assert from 'assert'; import Flickr from "../lib/flickr"; import sinon from "sinon"; import { Promise } from 'es6-promise'; import { Response } from 'whatwg-fetch'; let sampleResponse = { items: [{ media: { m: 'lolcat.jpg' } },{ media: { m: 'dancing_pug.gif' } }] } // In a real project we'd shift this test helper into a module let jsonResponse = (obj)=> { let json = JSON.stringify(obj); var response = new Response(json, { status: 200, headers: { 'Content-type': 'application/json' } }); return Promise.resolve(response); } describe('Flickr', ()=> { describe('._responseToImages', ()=> { it("maps response JSON to a NodeList of <img>", ()=> { let images = Flickr._responseToImages(sampleResponse); assert(images.length === 2); assert(images[0].nodeName === 'IMG'); assert(images[0].src === 'lolcat.jpg'); }) }) describe('.flickr', ()=> { // Intercept calls to fetch(url) and return a Promise before(()=> { sinon.stub(global, 'fetch', (url)=> { return jsonResponse(sampleResponse) }) }) // Put that thing back where it came from or so help me! after(()=> { global.fetch.restore(); }) it("returns a Promise that resolves with a NodeList of <img>", (done)=> { Flickr.flickr('cats').then((images)=> { assert(images.length === 2); assert(images[1].nodeName === 'IMG'); assert(images[1].src === 'dancing_pug.gif'); done(); }) }) }) }) Run our tests again with npm test and you should see three assuring green ticks. Math .floor ✓ rounds down to the nearest whole number Flickr ._responseToImages ✓ maps response JSON to a NodeList of <img> .flickr ✓ returns a Promise that resolves with a NodeList of <img> 3 passing (67ms) Phew! We’ve successfully tested our little module and the functions that comprise it, learning about pure functions and how to use functional composition along the way. We’ve separated the pure from the impure, it’s readable, comprised of small functions, and it’s well tested. The code is easier to read, understand, and modify than the unreasonably pure example above and that’s my only aim when refactoring code. Pure functions, use them. Links - Professor Frisby’s Mostly Adequate Guide to Functional Programming – @drboolean – This excellent free book on Functional Programming by Brian Lonsdorf is the best guide to FP I’ve come across. A lot of the ideas and examples in this article have come from this book. - Eloquent Javascript – Functional Programming @marijnjh – Marijn Haverbeke’s book remains one of my all time favorite intros to programming and has a great chapter on functional programming too. - Underscore – Digging into a utility library like Underscore, Lodash or Ramda is an important step in maturing as a developer. Understanding how to use these functions will drastically reduce the amount of code you need to write, and make your programs more declarative. — That’s all for now! Thanks for reading and I hope you have found this a good introduction to functional programming, refactoring and testing in JavaScript. It’s an interesting paradigm that’s making waves at the moment, due largely to the growing popularity of libraries like React, Redux, Elm, Cycle and ReactiveX which encourage or enforce these patterns. Jump in, the water is warm. Learn valuable skills with a practical introduction to Python programming! Give yourself more options and write higher quality CSS with CSS Optimization Basics.
https://www.sitepoint.com/an-introduction-to-reasonably-pure-functional-programming/
CC-MAIN-2020-29
refinedweb
2,687
54.32
Builtin Tokens¶ In the pygments.token module, there is a special object called Token that is used to create token types. You can create a new token type by accessing an attribute of Token: >>> from pygments.token import Token >>> Token.String Token.String >>> Token.String is Token.String True Note that tokens are singletons so you can use the is operator for comparing token types. As of Pygments 0.7 you can also use the in operator to perform set tests: >>> from pygments.token import Comment >>> Comment.Single in Comment True >>> Comment in Comment.Multi False This can be useful in filters and if you write lexers on your own without using the base lexers. You can also split a token type into a hierarchy, and get the parent of it: >>> String.split() [Token, Token.Literal, Token.Literal.String] >>> String.parent Token.Literal In principle, you can create an unlimited number of token types but nobody can guarantee that a style would define style rules for a token type. Because of that, Pygments proposes some global token types defined in the pygments.token.STANDARD_TYPES dict. For some tokens aliases are already defined: >>> from pygments.token import String >>> String Token.Literal.String Inside the pygments.token module the following aliases are defined: The Whitespace token type is new in Pygments 0.8. It is used only by the VisibleWhitespaceFilter currently. Normally you just create token types using the already defined aliases. For each of those token aliases, a number of subtypes exists (excluding the special tokens Token.Text, Token.Error and Token.Other) The is_token_subtype() function in the pygments.token module can be used to test if a token type is a subtype of another (such as Name.Tag and Name). (This is the same as Name.Tag in Name. The overloaded in operator was newly introduced in Pygments 0.7, the function still exists for backwards compatibility.) With Pygments 0.7, it’s also possible to convert strings to token types (for example if you want to supply a token from the command line): >>> from pygments.token import String, string_to_tokentype >>> string_to_tokentype("String") Token.Literal.String >>> string_to_tokentype("Token.Literal.String") Token.Literal.String >>> string_to_tokentype(String) Token.Literal.String Keyword Tokens¶ - Keyword For any kind of keyword (especially if it doesn’t match any of the subtypes of course). - Keyword.Constant For keywords that are constants (e.g. Nonein future Python versions). - Keyword.Declaration For keywords used for variable declaration (e.g. varin some programming languages like JavaScript). - Keyword.Namespace For keywords used for namespace declarations (e.g. importin Python and Java and packagein Java). - Keyword.Pseudo For keywords that aren’t really keywords (e.g. Nonein old Python versions). - Keyword.Reserved For reserved keywords. - Keyword.Type For builtin types that can’t be used as identifiers (e.g. int, charetc. in C). Name Tokens¶ - Name For any name (variable names, function names, classes). - Name.Attribute For all attributes (e.g. in HTML tags). - Name.Builtin Builtin names; names that are available in the global namespace. - Name.Builtin.Pseudo Builtin names that are implicit (e.g. selfin Ruby, thisin Java). - Name.Class Class names. Because no lexer can know if a name is a class or a function or something else this token is meant for class declarations. - Name.Constant Token type for constants. In some languages you can recognise a token by the way it’s defined (the value after a constkeyword for example). In other languages constants are uppercase by definition (Ruby). - Name.Decorator Token type for decorators. Decorators are syntactic elements in the Python language. Similar syntax elements exist in C# and Java. - Name.Entity Token type for special entities. (e.g. in HTML). - Name.Exception Token type for exception names (e.g. RuntimeErrorin Python). Some languages define exceptions in the function signature (Java). You can highlight the name of that exception using this token then. - Name.Function Token type for function names. - Name.Function.Magic same as Name.Function but for special function names that have an implicit use in a language (e.g. __init__method in Python). - Name.Label Token type for label names (e.g. in languages that support goto). - Name.Namespace Token type for namespaces. (e.g. import paths in Java/Python), names following the module/ namespacekeyword in other languages. - Name.Other Other names. Normally unused. - Name.Tag Tag names (in HTML/XML markup or configuration files). - Name.Variable Token type for variables. Some languages have prefixes for variable names (PHP, Ruby, Perl). You can highlight them using this token. - Name.Variable.Class same as Name.Variable but for class variables (also static variables). - Name.Variable.Global same as Name.Variable but for global variables (used in Ruby, for example). - Name.Variable.Instance same as Name.Variable but for instance variables. - Name.Variable.Magic same as Name.Variable but for special variable names that have an implicit use in a language (e.g. __doc__in Python). Literals¶ - Literal For any literal (if not further defined). - Literal.Date for date literals (e.g. 42din Boo). - String For any string literal. - String.Affix Token type for affixes that further specify the type of the string they’re attached to (e.g. the prefixes rand u8in r"foo"and u8"foo"). - String.Backtick Token type for strings enclosed in backticks. - String.Char Token type for single characters (e.g. Java, C). - String.Delimiter Token type for delimiting identifiers in “heredoc”, raw and other similar strings (e.g. the word ENDin Perl code print <<'END';). - String.Doc Token type for documentation strings (for example Python). - String.Double Double quoted strings. - String.Escape Token type for escape sequences in strings. - String.Heredoc Token type for “heredoc” strings (e.g. in Ruby or Perl). - String.Interpol Token type for interpolated parts in strings (e.g. #{foo}in Ruby). - String.Other Token type for any other strings (for example %q{foo}string constructs in Ruby). - String.Regex Token type for regular expression literals (e.g. /foo/in JavaScript). - String.Single Token type for single quoted strings. - String.Symbol Token type for symbols (e.g. :fooin LISP or Ruby). - Number Token type for any number literal. - Number.Bin Token type for binary literals (e.g. 0b101010). - Number.Float Token type for float literals (e.g. 42.0). - Number.Hex Token type for hexadecimal number literals (e.g. 0xdeadbeef). - Number.Integer Token type for integer literals (e.g. 42). - Number.Integer.Long Token type for long integer literals (e.g. 42Lin Python). - Number.Oct Token type for octal literals. Operators¶ - Operator For any punctuation operator (e.g. +, -). - Operator.Word For any operator that is a word (e.g. not). Punctuation¶ New in version 0.7. - Punctuation For any punctuation which is not an operator (e.g. [, (…) Generic Tokens¶ Generic tokens are for special lexers like the DiffLexer that doesn’t really highlight a programming language but a patch file. - Generic A generic, unstyled token. Normally you don’t use this token type. - Generic.Deleted Marks the token value as deleted. - Generic.Emph Marks the token value as emphasized. - Generic.Error Marks the token value as an error message. - Generic.Heading Marks the token value as headline. - Generic.Inserted Marks the token value as inserted. - Generic.Output Marks the token value as program output (e.g. for python cli lexer). - Generic.Prompt Marks the token value as command prompt (e.g. bash lexer). - Generic.Strong Marks the token value as bold (e.g. for rst lexer). - Generic.Subheading Marks the token value as subheadline. - Generic.Traceback Marks the token value as a part of an error traceback. Token type for any comment. #!). Token type for multiline comments. Token type for preprocessor comments (also <?php/ <%constructs). Token type for comments that end at the end of a line (e.g. # foo). Special data in comments. For example code tags, author and license information, etc.
https://pygments.org/docs/tokens/
CC-MAIN-2019-51
refinedweb
1,306
54.49
I solved this problem using both brute-force and kmp, whereas the brute-force took 15ms, kmp 18ms. Is there something wrong with my kmp solution? what is the run time of your kmp code? I would be much appreciated if some one could tell me his/her run time for reference below is my kmp code: public class Solution { public int strStr(String haystack, String needle) { int len = needle.length(); if (haystack == null || needle == null || (haystack.length() == 0&&len != 0)) return -1; if (len == 0) return 0; int[] lsp = new int[len]; lsp[0] = 0; for (int i=1;i<len;i++) { lsp[i] = 0; int j = lsp[i-1]; while (j>0 && needle.charAt(i) != needle.charAt(j)) j = lsp[j-1]; if (needle.charAt(i) == needle.charAt(j)) lsp[i] = j + 1; } for (int i=0, j=0;i<haystack.length();i++) { while (j>0 && haystack.charAt(i) != needle.charAt(j)) j = lsp[j-1]; if (haystack.charAt(i) == needle.charAt(j)) j++; if (j == len) return (i-len+1); } return -1; - }
https://discuss.leetcode.com/topic/89766/why-kmp-takes-more-time-than-brute-force
CC-MAIN-2017-51
refinedweb
175
93.54
1.1 ! jdf 1: # The Domain Name System ! 2: ! 3: Use of the Domain Name System has been discussed in previous chapters, without ! 4: going into detail on the setup of the server providing the service. This chapter ! 5: describes setting up a simple, small domain with one Domain Name System (DNS) ! 6: nameserver on a NetBSD system. It includes a brief explanation and overview of ! 7: the DNS; further information can be obtained from the DNS Resources Directory ! 8: (DNSRD) at [](). ! 9: ! 10: ## DNS Background and Concepts ! 11: ! 12: The DNS is a widely used *naming service* on the Internet and other TCP/IP ! 13: networks. The network protocols, data and file formats, and other aspects of the ! 14: DNS are Internet Standards, specified in a number of RFC documents, and ! 15: described by a number of other reference and tutorial works. The DNS has a ! 16: distributed, client-server architecture. There are reference implementations for ! 17: the server and client, but these are not part of the standard. There are a ! 18: number of additional implementations available for many platforms. ! 19: ! 20: ### Naming Services ! 21: ! 22: Naming services are used to provide a mapping between textual names and ! 23: configuration data of some form. A *nameserver* maintains this mapping, and ! 24: clients request the nameserver to *resolve* a name into its attached data. ! 25: ! 26: The reader should have a good understanding of basic hosts to IP address mapping ! 27: and IP address class specifications, see ! 28: [[Name Service Concepts|guide/net-intro#nsconcepts]]. ! 29: ! 30: In the case of the DNS, the configuration data bound to a name is in the form of ! 31: standard *Resource Records* (RRs). These textual names conform to certain ! 32: structural conventions. ! 33: ! 34: ### The DNS namespace ! 35: ! 36: The DNS presents a hierarchical name space, much like a UNIX filesystem, ! 37: pictured as an inverted tree with the *root* at the top. ! 38: ! 39: TOP-LEVEL .org ! 40: | ! 41: MID-LEVEL .diverge.org ! 42: ______________________|________________________ ! 43: | | | ! 44: BOTTOM-LEVEL strider.diverge.org samwise.diverge.org wormtongue.diverge.org ! 45: ! 46: The system can also be logically divided even further if one wishes at different ! 47: points. The example shown above shows three nodes on the diverge.org domain, but ! 48: we could even divide diverge.org into subdomains such as ! 49: "strider.net1.diverge.org", "samwise.net2.diverge.org" and ! 50: "wormtongue.net2.diverge.org"; in this case, 2 nodes reside in ! 51: "net2.diverge.org" and one in "net1.diverge.org". ! 52: ! 53: There are directories of names, some of which may be sub-directories of further ! 54: names. These directories are sometimes called *zones*. There is provision for ! 55: symbolic links, redirecting requests for information on one name to the records ! 56: bound to another name. Each name recognised by the DNS is called a *Domain ! 57: Name*, whether it represents information about a specific host, or a directory ! 58: of subordinate Domain Names (or both, or something else). ! 59: ! 60: Unlike most filesystem naming schemes, however, Domain Names are written with ! 61: the innermost name on the left, and progressively higher-level domains to the ! 62: right, all the way up to the root directory if necessary. The separator used ! 63: when writing Domain Names is a period, ".". ! 64: ! 65: Like filesystem pathnames, Domain Names can be written in an absolute or ! 66: relative manner, though there are some differences in detail. For instance, ! 67: there is no way to indirectly refer to the parent domain like with the UNIX `..` ! 68: directory. Many (but not all) resolvers offer a search path facility, so that ! 69: partially-specified names can be resolved relative to additional listed ! 70: sub-domains other than the client's own domain. Names that are completely ! 71: specified all the way to the root are called *Fully Qualified Domain Names* or ! 72: *FQDN*s. A defining characteristic of an FQDN is that it is written with a ! 73: terminating period. The same name, without the terminating period, may be ! 74: considered relative to some other sub-domain. It is rare for this to occur ! 75: without malicious intent, but in part because of this possibility, FQDNs are ! 76: required as configuration parameters in some circumstances. ! 77: ! 78: On the Internet, there are some established conventions for the names of the ! 79: first few levels of the tree, at which point the hierarchy reaches the level of ! 80: an individual organisation. This organisation is responsible for establishing ! 81: and maintaining conventions further down the tree, within its own domain. ! 82: ! 83: ### Resource Records ! 84: ! 85: Resource Records for a domain are stored in a standardised format in an ASCII ! 86: text file, often called a *zone file*. The following Resource Records are ! 87: commonly used (a number of others are defined but not often used, or no longer ! 88: used). In some cases, there may be multiple RR types associated with a name, and ! 89: even multiple records of the same type. ! 90: ! 91: #### Common DNS Resource Records ! 92: ! 93: * *A: Address* -- This record contains the numerical IP address associated with ! 94: the name. ! 95: ! 96: * *CNAME: Canonical Name* -- This record contains the Canonical Name (an FQDN ! 97: with an associated A record) of the host name to which this record is bound. ! 98: This record type is used to provide name aliasing, by providing a link to ! 99: another name with which other appropriate RR's are associated. If a name has ! 100: a CNAME record bound to it, it is an alias, and no other RR's are permitted ! 101: to be bound to the same name. ! 102: ! 103: It is common for these records to be used to point to hosts providing a ! 104: particular service, such as an FTP or HTTP server. If the service must be ! 105: moved to another host, the alias can be changed, and the same name will reach ! 106: the new host. ! 107: ! 108: * *PTR: Pointer* -- This record contains a textual name. These records are ! 109: bound to names built in a special way from numerical IP addresses, and are ! 110: used to provide a reverse mapping from an IP address to a textual name. This ! 111: is described in more detail in [[Reverse Resolution|guide/dns#bg-reverse]]. ! 112: ! 113: * *NS: Name Server* -- This record type is used to *delegate* a sub-tree of the ! 114: Domain Name space to another nameserver. The record contains the FQDN of a ! 115: DNS nameserver with information on the sub-domain, and is bound to the name ! 116: of the sub-domain. In this manner, the hierarchical structure of the DNS is ! 117: established. Delegation is described in more detail in ! 118: [[Delegation|guide/dns#bg-delegation]]. ! 119: ! 120: * *MX: Mail eXchange* -- This record contains the FQDN for a host that will ! 121: accept SMTP electronic mail for the named domain, together with a priority ! 122: value used to select an MX host when relaying mail. It is used to indicate ! 123: other servers that are willing to receive and spool mail for the domain if ! 124: the primary MX is unreachable for a time. It is also used to direct email to ! 125: a central server, if desired, rather than to each and every individual ! 126: workstation. ! 127: ! 128: * *HINFO: Host Information* -- Contains two strings, intended for use to ! 129: describe the host hardware and operating system platform. There are defined ! 130: strings to use for some systems, but their use is not enforced. Some sites, ! 131: because of security considerations, do not publicise this information. ! 132: ! 133: * *TXT: Text* -- A free-form text field, sometimes used as a comment field, ! 134: sometimes overlaid with site-specific additional meaning to be interpreted by ! 135: local conventions. ! 136: ! 137: * *SOA: Start of Authority* -- This record is required to appear for each zone ! 138: file. It lists the primary nameserver and the email address of the person ! 139: responsible for the domain, together with default values for a number of ! 140: fields associated with maintaining consistency across multiple servers and ! 141: caching of the results of DNS queries. ! 142: ! 143: ### Delegation ! 144: ! 145: Using NS records, authority for portions of the DNS namespace below a certain ! 146: point in the tree can be delegated, and further sub-parts below that delegated ! 147: again. It is at this point that the distinction between a domain and a zone ! 148: becomes important. Any name in the DNS is called a domain, and the term applies ! 149: to that name and to any subordinate names below that one in the tree. The ! 150: boundaries of a zone are narrower, and are defined by delegations. A zone starts ! 151: with a delegation (or at the root), and encompasses all names in the domain ! 152: below that point, excluding names below any subsequent delegations. ! 153: ! 154: This distinction is important for implementation - a zone is a single ! 155: administrative entity (with a single SOA record), and all data for the zone is ! 156: referred to by a single file, called a *zone file*. A zone file may contain more ! 157: than one period-separated level of the namespace tree, if desired, by including ! 158: periods in the names in that zone file. In order to simplify administration and ! 159: prevent overly-large zone files, it is quite legal for a DNS server to delegate ! 160: to itself, splitting the domain into several zones kept on the same server. ! 161: ! 162: ### Delegation to multiple servers ! 163: ! 164: For redundancy, it is common (and often administratively required) that there be ! 165: more than one nameserver providing information on a zone. It is also common that ! 166: at least one of these servers be located at some distance (in terms of network ! 167: topology) from the others, so that knowledge of that zone does not become ! 168: unavailable in case of connectivity failure. Each nameserver will be listed in ! 169: an NS record bound to the name of the zone, stored in the parent zone on the ! 170: server responsible for the parent domain. In this way, those searching the name ! 171: hierarchy from the top down can contact any one of the servers to continue ! 172: narrowing their search. This is occasionally called *walking the tree*. ! 173: ! 174: There are a number of nameservers on the Internet which are called *root ! 175: nameservers*. These servers provide information on the very top levels of the ! 176: domain namespace tree. These servers are special in that their addresses must be ! 177: pre-configured into nameservers as a place to start finding other servers. ! 178: Isolated networks that cannot access these servers may need to provide their own ! 179: root nameservers. ! 180: ! 181: ### Secondaries, Caching, and the SOA record ! 182: ! 183: In order to maintain consistency between these servers, one is usually ! 184: configured as the *primary* server, and all administrative changes are made on ! 185: this server. The other servers are configured as *secondaries*, and transfer the ! 186: contents of the zone from the primary. This operational model is not required, ! 187: and if external considerations require it, multiple primaries can be used ! 188: instead, but consistency must then be maintained by other means. DNS servers ! 189: that store Resource Records for a zone, whether they be primary or secondary ! 190: servers, are said to be *authoritative* for the zone. A DNS server can be ! 191: authoritative for several zones. ! 192: ! 193: When nameservers receive responses to queries, they can *cache* the results. ! 194: This has a significant beneficial impact on the speed of queries, the query load ! 195: on high-level nameservers, and network utilisation. It is also a major ! 196: contributor to the memory usage of the nameserver process. ! 197: ! 198: There are a number of parameters that are important to maintaining consistency ! 199: amongst the secondaries and caches. The values for these parameters for a ! 200: particular domain zone file are stored in the SOA record. These fields are: ! 201: ! 202: #### Fields of the SOA Record ! 203: ! 204: * *Serial* -- A serial number for the zone file. This should be incremented any ! 205: time the data in the domain is changed. When a secondary wants to check if ! 206: its data is up-to-date, it checks the serial number on the primary's SOA ! 207: record. ! 208: ! 209: * *Refresh* -- A time, in seconds, specifying how often the secondary should ! 210: check the serial number on the primary, and start a new transfer if the ! 211: primary has newer data. ! 212: ! 213: * *Retry* -- If a secondary fails to connect to the primary when the refresh ! 214: time has elapsed (for example, if the host is down), this value specifies, in ! 215: seconds, how often the connection should be retried. ! 216: ! 217: * *Expire* -- If the retries fail to reach the primary within this number of ! 218: seconds, the secondary destroys its copies of the zone data file(s), and ! 219: stops answering requests for the domain. This stops very old and potentially ! 220: inaccurate data from remaining in circulation. ! 221: ! 222: * *TTL* -- This field specifies a time, in seconds, that the resource records ! 223: in this zone should remain valid in the caches of other nameservers. If the ! 224: data is volatile, this value should be short. TTL is a commonly-used acronym, ! 225: that stands for "Time To Live". ! 226: ! 227: ### Name Resolution ! 228: ! 229: DNS clients are configured with the addresses of DNS servers. Usually, these are ! 230: servers which are authoritative for the domain of which they are a member. All ! 231: requests for name resolution start with a request to one of these local servers. ! 232: DNS queries can be of two forms: ! 233: ! 234: * A *recursive* query asks the nameserver to resolve a name completely, and ! 235: return the result. If the request cannot be satisfied directly, the ! 236: nameserver looks in its configuration and caches for a server higher up the ! 237: domain tree which may have more information. In the worst case, this will be ! 238: a list of pre-configured servers for the root domain. These addresses are ! 239: returned in a response called a *referral*. The local nameserver must then ! 240: send its request to one of these servers. ! 241: ! 242: * Normally, this will be an *iterative* query, which asks the second nameserver ! 243: to either respond with an authoritative reply, or with the addresses of ! 244: nameservers (NS records) listed in its tables or caches as authoritative for ! 245: the relevant zone. The local nameserver then makes iterative queries, walking ! 246: the tree downwards until an authoritative answer is found (either positive or ! 247: negative) and returned to the client. ! 248: ! 249: In some configurations, such as when firewalls prevent direct IP communications ! 250: between DNS clients and external nameservers, or when a site is connected to the ! 251: rest of the world via a slow link, a nameserver can be configured with ! 252: information about a *forwarder*. This is an external nameserver to which the ! 253: local nameserver should make requests as a client would, asking the external ! 254: nameserver to perform the full recursive name lookup, and return the result in a ! 255: single query (which can then be cached), rather than reply with referrals. ! 256: ! 257: ### Reverse Resolution ! 258: ! 259: The DNS provides resolution from a textual name to a resource record, such as an ! 260: A record with an IP address. It does not provide a means, other than exhaustive ! 261: search, to match in the opposite direction; there is no mechanism to ask which ! 262: name is bound to a particular RR. ! 263: ! 264: For many RR types, this is of no real consequence, however it is often useful to ! 265: identify by name the host which owns a particular IP address. Rather than ! 266: complicate the design and implementation of the DNS database engine by providing ! 267: matching functions in both directions, the DNS utilises the existing mechanisms ! 268: and creates a special namespace, populated with PTR records, for IP address to ! 269: name resolution. Resolving in this manner is often called *reverse resolution*, ! 270: despite the inaccurate implications of the term. ! 271: ! 272: The manner in which this is achieved is as follows: ! 273: ! 274: * A normal domain name is reserved and defined to be for the purpose of mapping ! 275: IP addresses. The domain name used is `in-addr.arpa.` which shows the ! 276: historical origins of the Internet in the US Government's Defence Advanced ! 277: Research Projects Agency's funding program. ! 278: ! 279: * This domain is then subdivided and delegated according to the structure of IP ! 280: addresses. IP addresses are often written in *decimal dotted quad notation*, ! 281: where each octet of the 4-octet long address is written in decimal, separated ! 282: by dots. IP address ranges are usually delegated with more and more of the ! 283: left-most parts of the address in common as the delegation gets smaller. ! 284: Thus, to allow delegation of the reverse lookup domain to be done easily, ! 285: this is turned around when used with the hierarchical DNS namespace, which ! 286: places higher level domains on the right of the name. ! 287: ! 288: * Each byte of the IP address is written, as an ASCII text representation of ! 289: the number expressed in decimal, with the octets in reverse order, separated ! 290: by dots and appended with the in-addr.arpa. domain name. For example, to ! 291: determine the hostname of a network device with IP address 11.22.33.44, this ! 292: algorithm would produce the string `44.33.22.11.in-addr.arpa.` which is a ! 293: legal, structured Domain Name. A normal nameservice query would then be sent ! 294: to the nameserver asking for a PTR record bound to the generated name. ! 295: ! 296: * The PTR record, if found, will contain the FQDN of a host. ! 297: ! 298: One consequence of this is that it is possible for mismatch to occur. Resolving ! 299: a name into an A record, and then resolving the name built from the address in ! 300: that A record to a PTR record, may not result in a PTR record which contains the ! 301: original name. There is no restriction within the DNS that the "reverse" mapping ! 302: must coincide with the "forward" mapping. This is a useful feature in some ! 303: circumstances, particularly when it is required that more than one name has an A ! 304: record bound to it which contains the same IP address. ! 305: ! 306: While there is no such restriction within the DNS, some application server ! 307: programs or network libraries will reject connections from hosts that do not ! 308: satisfy the following test: ! 309: ! 310: * the state information included with an incoming connection includes the IP ! 311: address of the source of the request. ! 312: ! 313: * a PTR lookup is done to obtain an FQDN of the host making the connection ! 314: ! 315: * an A lookup is then done on the returned name, and the connection rejected if ! 316: the source IP address is not listed amongst the A records that get returned. ! 317: ! 318: This is done as a security precaution, to help detect and prevent malicious ! 319: sites impersonating other sites by configuring their own PTR records to return ! 320: the names of hosts belonging to another organisation. ! 321: ! 322: ## The DNS Files ! 323: ! 324: Now let's look at actually setting up a small DNS enabled network. We will ! 325: continue to use the examples mentioned in [Chapter 24, *Setting up TCP/IP on ! 326: NetBSD in practice*](chap-net-practice.html "Chapter 24. Setting up TCP/IP on ! 327: NetBSD in practice"), i.e. we assume that: ! 328: ! 329: * Our IP networking is working correctly ! 330: * We have IPNAT working correctly ! 331: * Currently all hosts use the ISP for DNS ! 332: ! 333: Our Name Server will be the `strider` host which also runs IPNAT, and our two ! 334: clients use "strider" as a gateway. It is not really relevant as to what type of ! 335: interface is on "strider", but for argument's sake we will say a 56k dial up ! 336: connection. ! 337: ! 338: So, before going any further, let's look at our `/etc/hosts` file on "strider" ! 339: before we have made the alterations to use DNS. ! 340: ! 341: **Example strider's `/etc/hosts` file** ! 342: ! 343: 127.0.0.1 localhost ! 344: 192.168.1.1 strider ! 345: 192.168.1.2 samwise sam ! 346: 192.168.1.3 wormtongue worm ! 347: ! 348: This is not exactly a huge network, but it is worth noting that the same rules ! 349: apply for larger networks as we discuss in the context of this section. ! 350: ! 351: The other assumption we want to make is that the domain we want to set up is ! 352: `diverge.org`, and that the domain is only known on our internal network, and ! 353: not worldwide. Proper registration of the nameserver's IP address as primary ! 354: would be needed in addition to a static IP. These are mostly administrative ! 355: issues which are left out here. ! 356: ! 357: The NetBSD operating system provides a set of config files for you to use for ! 358: setting up DNS. Along with a default `/etc/named.conf`, the following files are ! 359: stored in the `/etc/namedb` directory: ! 360: ! 361: * `localhost` ! 362: * `127` ! 363: * `loopback.v6` ! 364: * `root.cache` ! 365: ! 366: You will see modified versions of these files in this section, and I strongly ! 367: suggest making a backup copy of the original files for reference purposes. ! 368: ! 369: *Note*: The examples in this chapter refer to BIND major version 8, however, it ! 370: should be noted that format of the name database and other config files are ! 371: almost 100% compatible between version. The only difference I noticed was that ! 372: the `$TTL` information was not required. ! 373: ! 374: ### /etc/named.conf ! 375: ! 376: The first file we want to look at is `/etc/named.conf`. This file is the config ! 377: file for bind (hence the catchy name). Setting up system like the one we are ! 378: doing is relatively simple. First, here is what mine looks like: ! 379: ! 380: options { ! 381: directory "/etc/namedb"; ! 382: allow-transfer { 192.168.1.0/24; }; ! 383: allow-query { 192.168.1.0/24; }; ! 384: listen-on port 53 { 192.168.1.1; }; ! 385: }; ! 386: ! 387: zone "localhost" { ! 388: type master; ! 389: notify no; ! 390: file "localhost"; ! 391: }; ! 392: ! 393: zone "127.IN-ADDR.ARPA" { ! 394: type master; ! 395: notify no; ! 396: file "127"; ! 397: }; ! 398: ! 399: zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.int" { ! 400: type master; ! 401: file "loopback.v6"; ! 402: }; ! 403: ! 404: zone "diverge.org" { ! 405: type master; ! 406: notify no; ! 407: file "diverge.org"; ! 408: }; ! 409: ! 410: zone "1.168.192.in-addr.arpa" { ! 411: type master; ! 412: notify no; ! 413: file "1.168.192"; ! 414: }; ! 415: ! 416: zone "." in { ! 417: type hint; ! 418: file "root.cache"; ! 419: }; ! 420: ! 421: Note that in my `named.conf` the root (".") section is last, that is because ! 422: there is another domain called diverge.org on the internet (I happen to own it) ! 423: so I want the resolver to look out on the internet last. This is not normally ! 424: the case on most systems. ! 425: ! 426: Another very important thing to remember here is that if you have an internal ! 427: setup, in other words no live internet connection and/or no need to do root ! 428: server lookups, comment out the root (".") zone. It may cause lookup problems if ! 429: a particular client decides it wants to reference a domain on the internet, ! 430: which our server couldn't resolve itself. ! 431: ! 432: Looks like a pretty big mess, upon closer examination it is revealed that many ! 433: of the lines in each section are somewhat redundant. So we should only have to ! 434: explain them a few times. ! 435: ! 436: Lets go through the sections of `named.conf`: ! 437: ! 438: #### options ! 439: ! 440: This section defines some global parameters, most noticeable is the location of ! 441: the DNS tables, on this particular system, they will be put in `/etc/namedb` as ! 442: indicated by the "directory" option. ! 443: ! 444: Following are the rest of the params: ! 445: ! 446: * `allow-transfer` -- This option lists which remote DNS servers acting as ! 447: secondaries are allowed to do zone transfers, i.e. are allowed to read all ! 448: DNS data at once. For privacy reasons, this should be restricted to secondary ! 449: DNS servers only. ! 450: ! 451: * `allow-query` -- This option defines hosts from what network may query this ! 452: name server at all. Restricting queries only to the local network ! 453: (192.168.1.0/24) prevents queries arriving on the DNS server's external ! 454: interface, and prevent possible privacy issues. ! 455: ! 456: * `listen-on port` -- This option defined the port and associated IP addresses ! 457: this server will run ! 458: [named(8)]() ! 459: on. Again, the "external" interface is not listened here, to prevent queries ! 460: getting received from "outside". ! 461: ! 462: The rest of the `named.conf` file consists of `zone`s. A zone is an area that ! 463: can have items to resolve attached, e.g. a domain can have hostnames attached to ! 464: resolve into IP addresses, and a reverse-zone can have IP addresses attached ! 465: that get resolved back into hostnames. Each zone has a file associated with it, ! 466: and a table within that file for resolving that particular zone. As is readily ! 467: apparent, their format in `named.conf` is strikingly similar, so I will ! 468: highlight just one of their records: ! 469: ! 470: #### zone diverge.org ! 471: ! 472: * `type` -- The type of a zone is usually of type "master" in all cases except ! 473: for the root zone `.` and for zones that a secondary (backup) service is ! 474: provided - the type obviously is "secondary" in the latter case. ! 475: ! 476: * `notify` -- Do you want to send out notifications to secondaries when your ! 477: zone changes? Obviously not in this setup, so this is set to "no". ! 478: ! 479: * `file` -- This option sets the filename in our `/etc/namedb` directory where ! 480: records about this particular zone may be found. For the "diverge.org" zone, ! 481: the file `/etc/namedb/diverge.org` is used. ! 482: ! 483: ### /etc/namedb/localhost ! 484: ! 485: For the most part, the zone files look quite similar, however, each one does ! 486: have some unique properties. Here is what the `localhost` file looks like: ! 487: ! 488: 1|$TTL 3600 ! 489: 2|@ IN SOA strider.diverge.org. root.diverge.org. ( ! 490: 3| 1 ; Serial ! 491: 4| 8H ; Refresh ! 492: 5| 2H ; Retry ! 493: 6| 1W ; Expire ! 494: 7| 1D) ; Minimum TTL ! 495: 8| IN NS localhost. ! 496: 9|localhost. IN A 127.0.0.1 ! 497: 10| IN AAAA ::1 ! 498: ! 499: Line by line: ! 500: ! 501: * *Line 1*: This is the Time To Live for lookups, which defines how long other ! 502: DNS servers will cache that value before discarding it. This value is ! 503: generally the same in all the files. ! 504: ! 505: * *Line 2*: This line is generally the same in all zone files except ! 506: `root.cache`. It defines a so-called "Start Of Authority" (SOA) header, which ! 507: contains some basic information about a zone. Of specific interest on this ! 508: line are "strider.diverge.org." and "root.diverge.org." (note the trailing ! 509: dots!). Obviously one is the name of this server and the other is the contact ! 510: for this DNS server, in most cases root seems a little ambiguous, it is ! 511: preferred that a regular email account be used for the contact information, ! 512: with the "@" replaced by a "." (for example, mine would be ! 513: "jrf.diverge.org."). ! 514: ! 515: * *Line 3*: This line is the serial number identifying the "version" of the ! 516: zone's data set (file). The serial number should be incremented each time ! 517: there is a change to the file, the usual format is to either start with a ! 518: value of "1" and increase it for every change, or use a value of "YYYYMMDDNN" ! 519: to encode year (YYYY), month (MM), day (DD) and change within one day (NN) in ! 520: the serial number. ! 521: ! 522: * *Line 4*: This is the refresh rate of the server, in this file it is set to ! 523: once every 8 hours. ! 524: ! 525: * *Line 5*: The retry rate. ! 526: ! 527: * *Line 6*: Lookup expiry. ! 528: ! 529: * *Line 7*: The minimum Time To Live. ! 530: ! 531: * *Line 8*: This is the Nameserver line, which uses a "NS" resource record to ! 532: show that "localhost" is the only DNS server handing out data for this zone ! 533: (which is "@", which indicates the zone name used in the `named.conf` file, ! 534: i.e. "diverge.org") is, well, "localhost". ! 535: ! 536: * *Line 9*: This is the localhost entry, which uses an "A" resource record to ! 537: indicate that the name "localhost" should be resolved into the IP-address ! 538: 127.0.0.1 for IPv4 queries (which specifically ask for the "A" record). ! 539: ! 540: * *Line 10*: This line is the IPv6 entry, which returns ::1 when someone asks ! 541: for an IPv6-address (by specifically asking for the AAAA record) of ! 542: "localhost.". ! 543: ! 544: ### /etc/namedb/zone.127.0.0 ! 545: ! 546: This is the reverse lookup file (or zone) to resolve the special IP address ! 547: 127.0.0.1 back to "localhost": ! 548: ! 549: 1| $TTL 3600 ! 550: 2| @ IN SOA strider.diverge.org. root.diverge.org. ( ! 551: 3| 1 ; Serial ! 552: 4| 8H ; Refresh ! 553: 5| 2H ; Retry ! 554: 6| 1W ; Expire ! 555: 7| 1D) ; Minimum TTL ! 556: 8| IN NS localhost. ! 557: 9| 1.0.0 IN PTR localhost. ! 558: ! 559: In this file, all of the lines are the same as the localhost zonefile with ! 560: exception of line 9, this is the reverse lookup (PTR) record. The zone used here ! 561: is "@" again, which got set to the value given in `named.conf`, i.e. ! 562: "127.in-addr.arpa". This is a special "domain" which is used to do ! 563: reverse-lookup of IP addresses back into hostnames. For it to work, the four ! 564: bytes of the IPv4 address are reserved, and the domain "in-addr.arpa" attached, ! 565: so to resolve the IP address "127.0.0.1", the PTR record of ! 566: "1.0.0.127.in-addr.arpa" is queried, which is what is defined in that line. ! 567: ! 568: ### /etc/namedb/diverge.org ! 569: ! 570: This zone file is populated by records for all of our hosts. Here is what it ! 571: looks like: ! 572: ! 573: 1| $TTL 3600 ! 574: 2| @ IN SOA strider.diverge.org. root.diverge.org. ( ! 575: 3| 1 ; serial ! 576: 4| 8H ; refresh ! 577: 5| 2H ; retry ! 578: 6| 1W ; expire ! 579: 7| 1D ) ; minimum seconds ! 580: 8| IN NS strider.diverge.org. ! 581: 9| IN MX 10 strider.diverge.org. ; primary mail server ! 582: 10| IN MX 20 samwise.diverge.org. ; secondary mail server ! 583: 11| strider IN A 192.168.1.1 ! 584: 12| samwise IN A 192.168.1.2 ! 585: 13| www IN CNAME samwise.diverge.org. ! 586: 14| worm IN A 192.168.1.3 ! 587: ! 588: There is a lot of new stuff here, so lets just look over each line that is new ! 589: here: ! 590: ! 591: * *Line 9*: This line shows our mail exchanger (MX), in this case it is ! 592: "strider". The number that precedes "strider.diverge.org." is the priority ! 593: number, the lower the number their higher the priority. The way we are setup ! 594: here is if "strider" cannot handle the mail, then "samwise" will. ! 595: ! 596: * *Line 11*: CNAME stands for canonical name, or an alias for an existing ! 597: hostname, which must have an A record. So we have aliased `` ! 598: to `samwise.diverge.org`. ! 599: ! 600: The rest of the records are simply mappings of IP address to a full name (A ! 601: records). ! 602: ! 603: ### /etc/namedb/1.168.192 ! 604: ! 605: This zone file is the reverse file for all of the host records, to map their IP ! 606: numbers we use on our private network back into hostnames. The format is similar ! 607: to that of the "localhost" version with the obvious exception being the ! 608: addresses are different via the different zone given in the `named.conf` file, ! 609: i.e. "0.168.192.in-addr.arpa" here: ! 610: ! 611: 1|$TTL 3600 ! 612: 2|@ IN SOA strider.diverge.org. root.diverge.org. ( ! 613: 3| 1 ; serial ! 614: 4| 8H ; refresh ! 615: 5| 2H ; retry ! 616: 6| 1W ; expire ! 617: 7| 1D ) ; minimum seconds ! 618: 8| IN NS strider.diverge.org. ! 619: 9|1 IN PTR strider.diverge.org. ! 620: 10|2 IN PTR samwise.diverge.org. ! 621: 11|3 IN PTR worm.diverge.org. ! 622: ! 623: ### /etc/namedb/root.cache ! 624: ! 625: This file contains a list of root name servers for your server to query when it ! 626: gets requests outside of its own domain that it cannot answer itself. Here are ! 627: first few lines of a root zone file: ! 628: ! 629: ; ! 630: ; This file holds the information on root name servers needed to ! 631: ; initialize cache of Internet domain name servers ! 632: ; (e.g. reference this file in the "cache . <file>" ! 633: ; configuration file of BIND domain name servers). ! 634: ; ! 635: ; This file is made available by InterNIC ! 636: ; under anonymous FTP as ! 637: ; file /domain/db.cache ! 638: ; on server ! 639: ; -OR- RS.INTERNIC.NET ! 640: ; ! 641: ; last update: Jan 29, 2004 ! 642: ; related version of root zone: 2004012900 ! 643: ; ! 644: ; ! 645: ; formerly NS.INTERNIC.NET ! 646: ; ! 647: . 3600000 IN NS A.ROOT-SERVERS.NET. ! 648: A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4 ! 649: ; ! 650: ; formerly NS1.ISI.EDU ! 651: ; ! 652: . 3600000 NS B.ROOT-SERVERS.NET. ! 653: B.ROOT-SERVERS.NET. 3600000 A 192.228.79.201 ! 654: ; ! 655: ; formerly C.PSI.NET ! 656: ; ! 657: . 3600000 NS C.ROOT-SERVERS.NET. ! 658: C.ROOT-SERVERS.NET. 3600000 A 192.33.4.12 ! 659: ; ! 660: ... ! 661: ! 662: This file can be obtained from ISC at <> and usually comes ! 663: with a distribution of BIND. A `root.cache` file is included in the NetBSD ! 664: operating system's "etc" set. ! 665: ! 666: This section has described the most important files and settings for a DNS ! 667: server. Please see the BIND documentation in `/usr/src/dist/bind/doc/bog` and ! 668: [named.conf(5)]() ! 669: for more information. ! 670: ! 671: ## Using DNS ! 672: ! 673: In this section we will look at how to get DNS going and setup "strider" to use ! 674: its own DNS services. ! 675: ! 676: Setting up named to start automatically is quite simple. In `/etc/rc.conf` ! 677: simply set `named=yes`. Additional options can be specified in `named_flags`, ! 678: for example, I like to use `-g nogroup -u nobody`, so a non-root account runs ! 679: the "named" process. ! 680: ! 681: In addition to being able to startup "named" at boot time, it can also be ! 682: controlled with the `ndc` command. In a nutshell the `ndc` command can stop, ! 683: start or restart the named server process. It can also do a great many other ! 684: things. Before use, it has to be setup to communicate with the "named" process, ! 685: see the [ndc(8)]() ! 686: and ! 687: [named.conf(5)]() ! 688: man pages for more details on setting up communication channels between "ndc" ! 689: and the "named" process. ! 690: ! 691: Next we want to point "strider" to itself for lookups. We have two simple steps, ! 692: first, decide on our resolution order. On a network this small, it is likely ! 693: that each host has a copy of the hosts table, so we can get away with using ! 694: `/etc/hosts` first, and then DNS. However, on larger networks it is much easier ! 695: to use DNS. Either way, the file where order of name services used for ! 696: resolution is determined is `/etc/nsswitch.conf` (see ! 697: [[`nsswitch.conf`|guide/net-practice#ex-nsswitch]]. Here is part of a typical ! 698: `nsswitch.conf`: ! 699: ! 700: ... ! 701: group_compat: nis ! 702: hosts: files dns ! 703: netgroup: files [notfound=return] nis ! 704: ... ! 705: ! 706: The line we are interested in is the "hosts" line. "files" means the system uses ! 707: the `/etc/hosts` file first to determine ip to name translation, and if it can't ! 708: find an entry, it will try DNS. ! 709: ! 710: The next file to look at is `/etc/resolv.conf`, which is used to configure DNS ! 711: lookups ("resolution") on the client side. The format is pretty self explanatory ! 712: but we will go over it anyway: ! 713: ! 714: domain diverge.org ! 715: search diverge.org ! 716: nameserver 192.168.1.1 ! 717: ! 718: In a nutshell this file is telling the resolver that this machine belongs to the ! 719: "diverge.org" domain, which means that lookups that contain only a hostname ! 720: without a "." gets this domain appended to build a FQDN. If that lookup doesn't ! 721: succeed, the domains in the "search" line are tried next. Finally, the ! 722: "nameserver" line gives the IP addresses of one or more DNS servers that should ! 723: be used to resolve DNS queries. ! 724: ! 725: To test our nameserver we can use several commands, for example: ! 726: ! 727: # host sam ! 728: sam.diverge.org has address 192.168.1.2 ! 729: ! 730: As can be seen, the domain was appended automatically here, using the value from ! 731: `/etc/resolv.conf`. Here is another example, the output of running ! 732: `host`: ! 733: ! 734: $ host ! 735: is an alias for. ! 736: has address 68.142.226.38 ! 737: has address 68.142.226.39 ! 738: has address 68.142.226.46 ! 739: has address 68.142.226.50 ! 740: has address 68.142.226.51 ! 741: has address 68.142.226.54 ! 742: has address 68.142.226.55 ! 743: has address 68.142.226.32 ! 744: ! 745: Other commands for debugging DNS besides ! 746: [host(1)]() are ! 747: [nslookup(8)]() ! 748: and ! 749: [dig(1)](). Note ! 750: that ! 751: [ping(8)]() ! 752: is *not* useful for debugging DNS, as it will use whatever is configured in ! 753: `/etc/nsswitch.conf` to do the name-lookup. ! 754: ! 755: At this point the server is configured properly. The procedure for setting up ! 756: the client hosts are easier, you only need to setup `/etc/nsswitch.conf` and ! 757: `/etc/resolv.conf` to the same values as on the server. ! 758: ! 759: ## Setting up a caching only name server ! 760: ! 761: A caching only name server has no local zones; all the queries it receives are ! 762: forwarded to the root servers and the replies are accumulated in the local ! 763: cache. The next time the query is performed the answer will be faster because ! 764: the data is already in the server's cache. Since this type of server doesn't ! 765: handle local zones, to resolve the names of the local hosts it will still be ! 766: necessary to use the already known `/etc/hosts` file. ! 767: ! 768: Since NetBSD supplies defaults for all the files needed by a caching only ! 769: server, it only needs to be enabled and started and is immediately ready for ! 770: use! To enable named, put `named=yes` into `/etc/rc.conf`, and tell the system ! 771: to use it adding the following line to the `/etc/resolv.conf` file: ! 772: ! 773: # cat /etc/resolv.conf ! 774: nameserver 127.0.0.1 ! 775: ! 776: Now we can start named: ! 777: ! 778: # sh /etc/rc.d/named restart ! 779: ! 780: ### Testing the server ! 781: ! 782: Now that the server is running we can test it using the ! 783: [nslookup(8)]() ! 784: program: ! 785: ! 786: $ nslookup ! 787: Default server: localhost ! 788: Address: 127.0.0.1 ! 789: ! 790: > ! 791: ! 792: Let's try to resolve a host name, for example "": ! 793: ! 794: > ! 795: Server: localhost ! 796: Address: 127.0.0.1 ! 797: ! 798: Name: ! 799: Address: 204.152.190.12 ! 800: ! 801: If you repeat the query a second time, the result is slightly different: ! 802: ! 803: > ! 804: Server: localhost ! 805: Address: 127.0.0.1 ! 806: ! 807: Non-authoritative answer: ! 808: Name: ! 809: Address: 204.152.190.12 ! 810: ! 811: As you've probably noticed, the address is the same, but the message ! 812: `Non-authoritative answer` has appeared. This message indicates that the answer ! 813: is not coming from an authoritative server for the domain NetBSD.org but from ! 814: the cache of our own server. ! 815: ! 816: The results of this first test confirm that the server is working correctly. ! 817: ! 818: We can also try the ! 819: [host(1)]() and ! 820: [dig(1)]() commands, ! 821: which give the following result. ! 822: ! 823: $ host ! 824: has address 204.152.190.12 ! 825: $ ! 826: $ dig ! 827: ! 828: ; <<>> DiG 8.3 <<>> ! 829: ;; res options: init recurs defnam dnsrch ! 830: ;; got answer: ! 831: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19409 ! 832: ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0 ! 833: ;; QUERY SECTION: ! 834: ;;, type = A, class = IN ! 835: ! 836: ;; ANSWER SECTION: ! 837:. 23h32m54s IN A 204.152.190.12 ! 838: ! 839: ;; AUTHORITY SECTION: ! 840: NetBSD.org. 23h32m54s IN NS uucp-gw-1.pa.dec.com. ! 841: NetBSD.org. 23h32m54s IN NS uucp-gw-2.pa.dec.com. ! 842: NetBSD.org. 23h32m54s IN NS ns.NetBSD.org. ! 843: NetBSD.org. 23h32m54s IN NS adns1.berkeley.edu. ! 844: NetBSD.org. 23h32m54s IN NS adns2.berkeley.edu. ! 845: ! 846: ;; Total query time: 14 msec ! 847: ;; FROM: miyu to SERVER: 127.0.0.1 ! 848: ;; WHEN: Thu Nov 25 22:59:36 2004 ! 849: ;; MSG SIZE sent: 32 rcvd: 175 ! 850: ! 851: As you can see ! 852: [dig(1)]() gives ! 853: quite a bit of output, the expected answer can be found in the "ANSWER SECTION". ! 854: The other data given may be of interest when debugging DNS problems. ! 855:
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/guide/dns.mdwn?annotate=1.1
CC-MAIN-2015-32
refinedweb
7,081
79.26
Some time ago, I discussed several timestamp formats you might run into. Today we’ll take a logical step from that information and develop a list of special values you might encounter. Note that if you apply time zone adjustments, the actual timestamp may shift by up to a day..) BTW, does anybody know why time_t is signed, yet negative values are considered invalid by functions such as localtime() ? Why do FILETIMEs start in 1601? Leap years follow a 400 year cycle, with 1600, 2000, etc. being the "most significant" special case. If you base your dates on one of these dates plus one (i.e. 1601, 2001, etc.) then the calculation to convert the numeric value to a year is very slightly simplified (you save yourself a single subtraction.) And 1601 was the most recent such date when Windows NT was developed. I don’t actually know if this is the real reason 1601 was chosen, but it seems plausible. The world is going to come to an end on Jan 19, 2038. I believe that the 1601 is related to the gregorian calendar, or something like that. And the world ending on Jan 19, 2038 only happens on Unix systems. NT’s good until 30,000 or so. Yet another reason to switch – dates don’t run out as quickly :) I (jag) appreciate (uppskattar) all references (alla hänsyftningar) to Monty Python (till Monty Python)! Utmärkt! Seeing a zero in a CLR DateTime is not that unusual, as that value is often used in place of null. It’s the value returned by DateTime.MinValue. In my opinion it was a design mistake to make DateTime a value type instead of a ref type. Most likely so that time math is much more easily handled? Totally guessing, though. Untill 1994 the ONLY computer date I ever saw was 1 Jan 1980. Maybe I got to see Jan 2 sometimes, but one needed to reboot between programs. Er… Why does 0x80000000 as a time_t mean 1901, while 0xFFFFFFFF as a time_t mean 2106? Shouldn’t they both be either signed or unsigned? It looks like 0xFFFFFFFF was interpreted as unsigned, while 0x80000000 was interpreted as signed. (Both dates should be before midnight, Jan. 1, 1970, since they’re both negative. 0xFFFFFFFF should be one second before it, while 0x80000000 should probably be exactly what was posted. Alternately, interpreting them both as unsigned would mean that 0x80000000 would be just after Jan. 19, 2038, and 0xFFFFFFFF would probably be what was posted.) Not that I think many Unix-like systems would actually allow you to use negative time_t’s (see Serge’s comment above), but any given Unix-like system should treat them either as unsigned (and allow them), or signed (and possibly return errors). Not both. Er, time_t isn’t a Unix thing — its a C thing. I am sure most Unixes/Linux doesn’t use time_t internally … just like Windows doesn’t. Only signed 32-bit time_t runs out in 2038. Many C environments have a time64_t (which is of course 64 bits) – eg: Bryan: Yes, sometimes the value is signed and sometimes it’s unsigned. The interpretation should be clear from context. "Any given unix-like system should treat them either as unsigned or signed, not both." Right, but you the end user might have to deal multiple machines, some which use the signed interpretation and some which use the unsigned interpretation. So I included both, because the goal of the table was to show everything you might encounter. Writing "The value 0x80000000 as a time_t when interpreted on a system for which time_t is a signed type" would have been more precise but would have cluttered the table with information that really isn’t relevant to the main point: Recognizing sentinel values. Ah, many thanks. I ran into a "30 Dec 1899" and the *only* guess I had come up with was that it was a Saturday. Which might make certain day-of-the-week computations easier. time_t is a signed integer for historical reasons: - dates all the way back to when C didn’t have unsigned integers - unix’s time_t is signed and the year 2038 is plenty enough of time to switch over to 64 bits. Today’s programmers will be either dead or retired in that year, so it’s somebody else’s problem - lots of crappy programmers assume it’s signed and do subtraction on it instead of using difftime and expect it to be a negative value if the dates are backwards (even though this doesn’t handle large distances correctly) - functions return (clock_t)-1 on error and due to C’s (IMHO stupid) way of doing arithmetic conversions on the comparison operators, this can result in the wrong result on some platforms under certain conditions C trivia: time_t/clock_t are allowed to be any arithmetic type including an unsigned integer or floating point type. Yay, so in another 33 years we’ll have another possible Y2K! Ok guys, this time we make it right! We need to scare the crap out of everyday users about the EOW so that they pay us a lot of money for fixing it! We must henceforth use signed time_t for all times and dates in our software, especially any software going into nuclear reactors! Good luck! I’m counting on you guys! Norman: The Gregorian calendar was adopted in most Roman Catholic countries in 1582. Britain and its colonies (including what is now the US), being Protestant by then, were slow to change over, though not as slow as e.g. Russia. In VC8 (Whidbey) which just RTMed (WooHoo!), we widened time_t to 64 bits by default. It is still signed. You can use a #define to switch back to 32 bit time. We also provide specifically named time functions for the 32 bit and 64 bit versions of the type. Martyn Lovell Development Lead Visual C++ Libraries Monday, October 31, 2005 2:07 PM by Ben Hutchings > Norman: The Gregorian calendar was adopted > in most Roman Catholic countries in 1582. Yikes. I was 100 years off. I’d better stop relying on memory before talking about non-computer stuff that way. I even have enough discipline to look up which argument is which in functions like memcpy almost every time I use them (because memcpy and bcopy had them in opposite orders and if I try to remember which is which then I’ll remember wrong). So I really was out of line by not looking up centuries. I am duly chastised. Geez, a century. That might even be enough time for Windows Vista beta 1 checked build to finish installing itself. The time_t typedef is a relatively recent invention, created during the the ANSI C standardization process, IIRC. The original Unix definition of time() and related functions used "long". PingBack from At least we think they’re fake.
https://blogs.msdn.microsoft.com/oldnewthing/20051028-29/?p=33573/
CC-MAIN-2016-22
refinedweb
1,155
72.26
The .NET Stacks #29: More on route-to-code and some Kubernetes news This week, we dig deep on route-to-code and discuss some Kubernetes news. Note: This is the published version of my free, weekly newsletter, The .NET Stacks. It was originally sent to subscribers on December 7, 2020. Subscribe at the bottom of this post to get the content right away! Happy Monday! Here’s what we’re talking about this week: - Digging deeper on “route-to-code” - Kubernetes is deprecating Docker … what? - Last week in the .NET world 🔭 Digging deeper on “route-to-code” Last week, I talked about the future of writing APIs for ASP.NET Core MVC. The gist: there’s a new initiative (Project Houdini) coming to move MVC productivity features to the core of the stack, and part of that is generating imperative APIs for you at compile time using source generation. This leverages a way to write slim APIs in ASP.NET Core without the bloat of the MVC framework: it’s called “route-to-code.” We talked about it in early October. I thought it’d be fun to migrate a simple MVC CRUD API to this model, and I wrote about it this week. As I wrote, this isn’t meant to be an MVC replacement, but a solution for simple JSON APIs. It does not support model binding or validation, content negotiation, or dependency injection from constructors. Most times, though, you’re wanting to separate business logic from your execution context—it’s definitely worth a look. Here’s me using an in-memory Entity Framework Core database to get some bands: endpoints.MapGet("/bands", async context => { var repository = context.RequestServices.GetService<SampleContext>(); var bands = repository.Bands.ToListAsync(); await context.Response.WriteAsJsonAsync(bands); }); There’s no framework here, so instead of using DI to access my EF context, I get a service through the HttpContext. Then, I can use helper methods that let me read from and write to my pipe. Pretty slick. Here’s me getting a record by ID: simplicity comes with a cost: it’s all very manual. I even have to convert the ID to an integer myself (not a big deal, admittedly). How does a POST request work? I can check to see if the request is asking for JSON. With no framework or filters, my error checking is setting a status code and returning early. (I can abstract this out, obviously. It took awhile to get used to not having a framework to lean on.)); }); In the doc, Microsoft will be the first to tell you it’s for the simplest scenarios. It’ll be interesting to see what improvements come: will mimicking DI become easier? I hope so. 🤯 Kubernetes is deprecating Docker … what? I know this is a .NET development newsletter, but these days you probably need at least a passing knowledge of containerization. To that end: this week, you may have heard something along the lines of “Kubernetes is deprecating Docker.” It sounds concerning, but Kubernetes says you probably shouldn’t worry and Docker is saying the same. Still, it’s true: Kubernetes is deprecating Docker as a container runtime after v1.20—currently planned for late 2021. From a high level, I think Google’s Kelsey Hightower summed it up best: Think of it like this – Docker refactored its code base, broke up the monolith, and created a new service, containerd, which both Kubernetes and Docker now use for running containers. Docker isn’t a magical “make me a container” button—it’s an entire tech stack. Inside of that is a container runtime, containerd. It contains a lot of bells and whistles for us when doing development work, but k8s doesn’t need it because it isn’t a person. (If it were, I’d like to have a chat.) For k8s to get through this abstraction layer, it needs to use the Dockershim tool to get to containerd—yet another maintenance headache. Kubelets are removing Dockershims at the end of 2021, which removes Docker support. When this change comes, you just need to change your container runtime from Docker to another supported runtime. Because this addresses a different environment than most folks use with Docker, it shouldn’t matter—the install you’re using in dev is typically different than the runtime in your k8s cluster. This change would largely impact k8s administrators and not developers. Hopefully this clears up some potential confusion. We don’t talk about Docker and Kubernetes often, but this was too important not to discuss. (I could hardly contain myself.) 🌎 Last week in the .NET world 🔥 The Top 3 - Niels Swimberghe makes phone calls from Blazor WebAssembly with Twilio Voice. (And while you’re there, hover over the burger. You’re welcome.) - Steve Smith warns against wrapping DbContext in using, and other gotchas. - Khalid Abuhakmeh writes about understanding the .NET 5 runtime environment. 📢 Announcements - Claire Novotny shines a light on debug-time productivity with Source Link. - .NET Core 2.1, 3.1, and 5.0 updates are coming to Microsoft Update. - Uno Platform 3.1 is released. - Scott Addie recaps what’s new in the ASP.NET Core docs for November 2020. - Bri Achtman writes about ML.NET Model Builder November updates. - Tara Overfield releases the November 2020 cumulative update preview for .NET Framework. 📅 Community and events - Just one community standup this week: Xamarin talks about .NET MAUI. - The .NET Docs Show talks to Dave Brock (yes, that one) about C# 9. 😎 ASP.NET Core / Blazor - Dave Brock writes about simple JSON APIs with ASP.NET Core route-to-code. - David Ramel writes about reported performance degradation when moving from WinForms to Blazor. - Ricardo Peres writes about the pitfalls when working with async file uploads in ASP.NET Core. - Jon Hilton passes arguments to onclick functions in Blazor. - Marinko Spasojevic writes about complex model validation in Blazor apps. - Damien Bowden secures an ASP.NET Core API that uses multiple access tokens. - Michael Shpilt writes about some must-know packages for ASP.NET Core. 🚀 .NET 5 - Jonathan Allen writes more about .NET 5 breaking changes. - Eran Stiller writes about .NET 5 runtime improvements. - Jonathan Allen writes about .NET 5 breaking changes to the BCL. - Antonio Laccardi writes about ASP.NET Core improvements in .NET 5. - Norm Johnson writes about .NET 5 AWS Lambda support with container images. ⛅ The cloud - Frank Boucher configures a secured custom domain on an Azure Function or website. - Paul Michaels handles events inside an Azure Function. - David Ramel writes how Google Cloud Functions supports .NET Core 3.1 but not .NET 5. - Abel Wang and Isaac Levin write about dev productivity with GitHub, VS Code, and Azure. 📔 Languages - Claudio Bernasconi writes about top-level statements in C# 9, and also works through switch expressions in C# 8. - Munib Butt uses the proxy pattern in C#. - Ian Russell introduces partial function application in F#. - Ian Griffiths shows the pitfalls of mechanism over intent with C# 9 patterns. - Matthew Crews writes about object expressions in F#. 🔧 Tools - Sean Killeen gets started with PowerShell Core in Windows Terminal, and also writes about things he’s learned about NUnit. - Andrew Lock uses Quartz.NET with ASP.NET Core and worker services. 📱 Xamarin - James Montemagno warns against using Android in your namespaces, and also gets you writing your first app for iOS and Android with Xamarin and Visual Studio. - Yogeshwaran Mohan creates a marquee control. - Nick Randolph explains the correlation between .NET 5, WinUI, and MAUI (Xamarin.Forms). - Matheus Castello writes about Linux + .NET 5 + VS Code XAML Preview + Hot Reload running on embedded Linux. 👍 Design, architecture and best practices - Kamil Grzybek continues his series on modular monoliths. - Scott Brady reminds us: OAuth is not user authorization. - Peter Vogel shows the advantages of end-to-end testing. - Nathan Bennett compares GraphQL to REST. - Derek Comartin talks about handling duplicate messages. 🎤 Podcasts - The Changelog talks about growing as a software engineer. - The Stack Overflow podcast explains why devs are increasing demanding ethics in tech. - The 6 Figure Developer talks to Rob Richardson about .NET 5, pipelines, and testing. 🎥 Videos - Jeff Fritz works with Entity Framework Core. - The ON.NET Show customizes the Graph SDKs and discusses microcontrollers with the Meadow IoT platform. - Data Exposed gets started with DevOps for Azure SQL. - The Loosely Coupled Show talks about the difficulties of caching.
https://www.daveabrock.com/2020/12/12/dotnet-stacks-29/
CC-MAIN-2021-39
refinedweb
1,396
69.18
Brown12,406 Points Why aren't I receiving the correct output or return? For some reason I keep getting an error on this code. I'm not sure what I'm not doing here. func greeting(person: String) -> (greeting: String, language: String) { let language = "English" let greeting = "Hello \(person)" return (language, greeting) } var result = greeting("Tom") 1 Answer jcorum71,813 Points One small issue: you have greeting and language reversed: return (language, greeting). It should be the other way round. Otherwise, right on! func greeting(person: String) -> (greeting: String, language: String) { let language = "English" let greeting = "Hello \(person)" return (greeting, language) } var result = greeting("Tom") println(result.language)
https://teamtreehouse.com/community/why-arent-i-receiving-the-correct-output-or-return
CC-MAIN-2022-27
refinedweb
108
57.37
29 March 2012 15:31 [Source: ICIS news] LONDON (ICIS)--The European ethylene contract price for April is fully confirmed at €1,345/tonne ($1,793/tonne), up by €40/tonne from March, on the back of firm naphtha costs, market sources said on Thursday. The 3% increase took European contract ethylene to its second record high in as many months. The hike was largely in line with most players’ expectations as most had accepted that with naphtha prices around €30–40/tonne higher than at the time of the March settlement, ethylene producers would again be forced to target higher contract prices in order to recover lost margin. “It’s a settlement in line with what everyone expected," said one major producer. “We hope we get some normal margins, the business has been suffering in ?xml:namespace> Sources said that it was too early to say whether ethylene demand would be impacted by another contract price increase. Ethylene has risen by €265/tonne since December 2011. “Will demand be strong enough to accept the increase?” said a second integrated producer. “I don’t see the increment being of a scale that would have a significant impact on demand,” said a second key producer, adding that it had hoped for “another €10/tonne” for April. “Business is good,” said the first integrated producer. “Even the most militant and vocal customer wants additional tonnes.” However, an integrated consumer said that the increase was larger than it had been expecting. It and another key consumer said that demand was trending lower as a direct result of the higher contract price settlement. Many are concerned about European derivatives' competitiveness given the strength of the European market, particularly compared with A recent deep-sea deal for early April loading was concluded below $1,550/tonne CIF NWE, sources said this week. “It’s interesting to see that while contract goes up, spot is coming down. That is not such a good sign for the market,” the first major producer said. The ethylene contract price
http://www.icis.com/Articles/2012/03/29/9546056/europe-april-ethylene-fully-settled-at-1345tonne-fd-nwe.html
CC-MAIN-2015-11
refinedweb
340
61.77
On Mon, Jan 12, 2009 at 10:39:34AM -0500, Christoph Hellwig wrote: > On Mon, Jan 12, 2009 at 10:33:06AM +1100, Dave Chinner wrote: > > That should probably be namespaced correctly because it won't be > > static on debug builds. i.e. xfs_handle_acceptable() > > Ok. > > > The args in this function are back to front compared to all the > > other functions - the others are (filp, arg), this one is the > > opposite. > > Yeah, I first had all this way and then changed it around to match > the non-handle ioctls more closely but forgot this one. > > Updated patch below: > > --- > > Subject: xfs: fix dentry aliasing issues in open_by_handle > From: Christoph Hellwig <hch@xxxxxx> > > Open by handle just grabs an inode by handle and then creates itself > a dentry for it. While this works for regular files it is horribly > broken for directories, where the VFS locking relies on the fact that > there is only just one single dentry for a given inode, and that > these are always connected to the root of the filesystem so that > it's locking algorithms work (see Documentations/filesystems/Locking) > > Remove all the existing open by handle code and replace it with a small > wrapper around the exportfs code which deals with all these issues. > At the same time we also make the checks for a valid handle strict > enough to reject all not perfectly well formed handles - given that > we never hand out others that's okay and simplifies the code. > > > Signed-off-by: Christoph Hellwig <hch@xxxxxx> Reviewed-by: Dave Chinner <david@xxxxxxxxxxxxx> -- Dave Chinner david@xxxxxxxxxxxxx
http://oss.sgi.com/archives/xfs/2009-01/msg00911.html
CC-MAIN-2015-06
refinedweb
261
53.17
NFSSVC(2) MidnightBSD System Calls Manual NFSSVC(2) NAME nfssvc — NFS services LIBRARY Standard C Library (libc, −lc) SYNOPSIS #include <sys/param.h> #include <sys/mount.h> #include <sys/time.h> #include <nfs/rpcv2.h> #include <nfsserver/nfs.h> #include <unistd.h> int nfssvc(int flags, void *argstructp); DESCRIPTION The nfssvc() system call. On the client side, nfsiod(8) calls nfssvc() with the flags argument set to NFSSVC_BIOD and argstructp set to NULL to enter the kernel as a block I/O server daemon. For NQNFS, mount_nfs(8) calls nfssvc() with the NFSSVC_MNTD flag, optionally or’d with the flags NFSSVC_GOTAUTH and NFSSVC_AUTHINFAIL along with a pointer to a struct nfsd_cargs { }; structure. The initial call has only the NFSSVC_MNTD flag set to specify service for the mount point. If the mount point is using Kerberos, then the mount_nfs(8) utility will return from nfssvc() with errno == ENEEDAUTH whenever the client side requires an ‘‘rcmd’’ authentication ticket for the user. The mount_nfs(8) utility will attempt to get the Kerberos ticket, and if successful will call nfssvc() with the flags NFSSVC_MNTD and NFSSVC_GOTAUTH after filling the ticket into the ncd_authstr field and setting the ncd_authlen and ncd_authtype fields of the nfsd_cargs structure. If mount_nfs(8) failed to get the ticket, nfssvc() will be called with the flags NFSSVC_MNTD, NFSSVC_GOTAUTH and NFSSVC_AUTHINFAIL to denote a failed authentication attempt. On the server side, nfssvc() is called with the flag NFSSVC_NFSD and a pointer to a struct nfsd_srvargs { }; to enter the kernel as an nfsd(8) daemon. Whenever an nfsd(8) daemon receives a Kerberos authentication ticket, it will return from nfssvc() with errno == ENEEDAUTH. The nfsd(8) utility will attempt to authenticate the ticket and generate a set of credentials on the server for the ‘‘user id’’ specified in the field nsd_uid. This is done by first authenticating the Kerberos ticket and then mapping the Kerberos principal to a local name and getting a set of credentials for that user via getpwnam(3) and getgrouplist(3). If successful, the nfsd(8) utility will call nfssvc() with the NFSSVC_NFSD and NFSSVC_AUTHIN flags set to pass the credential mapping failure. The master nfsd(8) server daemon calls nfssvc() with the flag NFSSVC_ADDSOCK and a pointer to a struct nfsd_args { }; to pass a server side NFS socket into the kernel for servicing by the nfsd(8) daemons. RETURN VALUES Normally nfssvc() does not return unless the server is terminated by a signal when a value of 0 is returned. Otherwise, -1 is returned and the global variable errno is set to specify the error. ERRORS [ENEEDAUTH] This special error value is really used for authentication support, particularly Kerberos, as explained above. [EPERM] The caller is not the super-user. SEE ALSO mount_nfs(8), nfsd(8), nfsiod(8) HISTORY The nfssvc() system call first appeared in 4.4BSD. BUGS The nfssvc() system call is designed specifically for the NFS support daemons and as such is specific to their requirements. It should really return values to indicate the need for authentication support, since ENEEDAUTH is not really an error. Several fields of the argument structures are assumed to be valid and sometimes to be unchanged from a previous call, such that nfssvc() must be used with extreme care. MidnightBSD 0.3 June 9, 1993 MidnightBSD 0.3
http://www.midnightbsd.org/documentation/man/nfssvc.2.html
CC-MAIN-2014-52
refinedweb
549
52.9
Closed Bug 1036606 Opened 6 years ago Closed 6 years ago Add options dict and vr Device to moz Request Full Screen Categories (Core :: DOM: Core & HTML, defect) Tracking () mozilla36 People (Reporter: vlad, Assigned: vlad) References Details (Keywords: dev-doc-needed) Attachments (1 file, 2 obsolete files) mozRequestFullScreen currently takes no arguments. This adds an optional dictionary argument for options; it defines a "vrDevice" option that can accept a HMDVRDevice to indicate a) what display the window should be made full screen on; b) that VR rendering/postprocessing is to be done. Attachment #8453295 - Flags: feedback?(bzbarsky) Comment on attachment 8453295 [details] [diff] [review] Add options dict arg to mozRequestFullScreen >+++ b/content/base/public/Element.h >+ void MozRequestFullScreen(const mozilla::dom::RequestFullscreenOptions& aOptions); Should be able to drop the "mozilla::dom" bit there, I expect. >+ Element::MozRequestFullScreen(mozilla::dom::RequestFullscreenOptions()); \ And here. >+++ b/content/base/src/Element.cpp >+ if (aOptions.mVrDisplay.WasPassed() && >+ aOptions.mVrDisplay.Value()) So there's an API design question here. Do we want script to allow explicitly passing { vrDisplay: null } for this options struct and treating that as if there were no vrDisplay passed at all? That's what your code aims for right now, but it's not clear to me that we want that. It might make more sense to throw on explicit null passed. In any case, if you want the behavior you have now, you should probably make the IDL say "HMDVRDevice? vrDisplay = null;" and then here you can just do: if (aOptions.mVrDisplay) { opts.mVRHMDDevice = aOptions.mVrDisplay->GetHMD(); } or some such, without the WasPassed() and Value() bits. If, on the other hand, you want to disallow explicit null, then have the IDL say "HMDVRDevice vrDisplay" and have this code be: if (aOptions.mVrDisplay.WasPassed()) { opts.mVRHMDDevice = aOptions.mVrDisplay.Value()->GetHMD(); } In either case, the curly before the if body goes at end of line, not beginning of line. >+++ b/content/base/src/nsDocument.cpp >+ nsCallRequestFullScreen(Element* aElement, dom::FullScreenOptions& aOptions) Why do you need the "dom::" bit? >+nsDocument::AsyncRequestFullScreen(Element* aElement, >+ mozilla::dom::FullScreenOptions& aOptions) And the mozilla::dom here? >@@ -11022,7 +11033,9 @@ nsresult nsDocument::RemoteFrameFullscreenChanged(nsIDOMElement* aFrameElement, >+ dom::FullScreenOptions opts; And the dom:: here. > nsDocument::RequestFullScreen(Element* aElement, >+ dom::FullScreenOptions& aOptions, And here. >+++ b/dom/base/nsPIDOMWindow.h Should probably change the IID here. >+++ b/dom/webidl/Element.webidl >+dictionary RequestFullscreenOptions { Document that this is non-standard? I assume this stuff isn't supposed to live behind a pref or anything? Attachment #8453295 - Flags: feedback?(bzbarsky) → feedback+ Updated. I can't remove the mozilla::dom:: in the macro definition because it's used outside of the dom namespace in at least one place. This isn't behind a pref, because obtaining the VRHMDDevice object is -- this doesn't really do much without it, so I figure it's safe, right? Attachment #8453295 - Attachment is obsolete: true Attachment #8514413 - Flags: review?(bzbarsky) Comment on attachment 8514413 [details] [diff] [review] Add options dict arg to mozRequestFullScreen (v2) This doesn't address my comments about the Element.cpp code above. Nor some of the nsDocument.cpp ones. r- for that bit. > this doesn't really do much without it, so I figure it's safe, right? Well, maybe. It makes mozRequestFullScreen(5) throw where it didn't use to before.... Probably safe enough, but still a bit worrisome. Attachment #8514413 - Flags: review?(bzbarsky) → review- Now with review comments properly addressed. The remaining mozilla::dom:: namespace references are necessary (they're in headers outside of namespace blocks, or in the case of Element.h, part of a macro that gets used outside of the dom namespace and without a using decl). I also went with having "null" to vrDisplay mean "no vr display/do normal fullscreen". It seemed a bit of a coin toss, and doing it this way means that code can potentially use a variable to indicate whether it's using VR or not, so can write |mozRequestFullScreen({ vrDisplay: vrHMD });| instead of writing if (vrHMD) { el.mozRequestFullScreen({ vrDisplay: vrHMD }); } else { el.mozRequestFullScreen(); } But the latter might be clearer as to what's going on. I just picked one. Attachment #8523103 - Flags: review?(bzbarsky) Comment on attachment 8523103 [details] [diff] [review] Add options dict arg to mozRequestFullScreen (v3) r=me Attachment #8523103 - Flags: review?(bzbarsky) → review+ Status: NEW → RESOLVED Closed: 6 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla36 This change broke a few websites (bug 1106351, bug 1117579). So, we should backout this change in 36 and find a more compatible solution for 37. Valdimir, could you prepare a backout of this patch? Thanks. Flags: needinfo?(vladimir) clearing the needinfo, I don't think we need that anymore... Flags: needinfo?(vladimir) Component: DOM → DOM: Core & HTML
https://bugzilla.mozilla.org/show_bug.cgi?id=1036606
CC-MAIN-2020-45
refinedweb
783
57.87
I am coding an 3D app on Windows 8 and looking for information about DirectX 3D on Windows 8. I used StarterKit sample to load file.fbx, I can change color for 3D mesh but I want to load image as skin of 3D mesh, how can I create textures from image and apply texture for 3D mesh?. Please let me know if you know any DirectX API that can handle this behavior in Windows 8? DirectXTK includes DDSTextureLoader and WICTextureLoader. DDS is supported for Windows Store apps and Windows phone 8 apps using namespace DirectX; using namespace Microsoft::WRL; ComPtr<ID3D11ShaderResourceView> srv; HRESULT hr = CreateDDSTextureFromFile( d3dDevice, L"SEAFLOOR.DDS", nullptr, &srv ) ThrowIfFailed(hr); WIC-based image formats (JPG, BMP, TIFF, GIF, etc.) are supported for Windows Store apps using WIC, which is not available on the Windows phone 8 platform. using namespace DirectX; using namespace Microsoft::WRL; ComPtr<ID3D11ShaderResourceView> srv; HRESULT hr = CreateWICTextureFromFile( d3dDevice, immContext, L"LOGO.BMP", nullptr, &srv ) ThrowIfFailed(hr); Microsoft is conducting an online survey to understand your opinion of the Msdn Web site. If you choose to participate, the online survey will be presented to you when you leave the Msdn Web site. Would you like to participate?
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/43220708-90a7-495c-be57-16967eeff09f/directx-3d-create-texture?forum=wingameswithdirectx
CC-MAIN-2014-10
refinedweb
203
55.34
I am trying to find some guides or information on how to Proceduralay generate stars for my space game. Need to be more specific, please. By stars, are you wanting to use particles, meshes, GUI, planes with textures? Is it a 2D or 3D game, will the stars be traveled to, or just used for background? Give a bit more detail on what you're going for :) I am trying to make as real as possible of a 3d space game. I was looking at using particles but did not know if that would take to many to make a good space scene. If you're using particles, you could attach a particle system to your "ship", and set the particles to "stretched" then give them a velocity scale. This would give a "travel" effect, here's an old space scene I did to show you what I mean. EXAMPLE try using procedural example in unity demos, check out its "Fractal Texture" scene which uses perlin noise based texture, I think you might need to replace its texture with your star sequence sprite, I hope it would be good starting point Answer by Dave-Carlile · Dec 27, 2012 at 02:43 PM You don't need anything too sophisticated for this. The key is Random.onUnitSphere, which gives you a random position on the surface of a unit sphere. You can then multiply that by the desired radius to move the point out further. This script gives an example. Create a Sphere game object with a scale of 0.1, assign that to StarObject in the editor. Once you have it working change StarCount to 5000 and you'll have a decent looking set of stars. Now, using spheres for this is obviously a really bad idea, but I wanted to give a simple example of the procedural part of this. It's just repeatable randomness, and for this particular thing you don't need Perlin. To make this efficient you could use billboards for the stars rather than the spheres. Better yet, use Pro so you can take advantage of batching - should be able to do the whole thing in a single draw call. Another option for efficiency (again, requires pro) is to generate the random sky dome using this method, then create skybox textures using render textures, and use those for rendering a sky box. You'd have procedurally generated skyboxes that way - the best of both worlds. public class StarDome : MonoBehaviour { public GameObject StarObject; public int Seed = 42; public int StarCount = 500; public float MinRadius = 100; public float MaxRadius = 120; void Awake() { // make sure we generate the same positions each time Random.seed = Seed; for (int i = 0; i < StarCount; i++) { // get random position on unit sphere Vector3 p = Random.onUnitSphere; // move it out to the sky sphere distance, with some randomness p *= Random.Range(MinRadius, MaxRadius); // instantiate game object GameObject star = Instantiate(StarObject, p, Quaternion.identity) as GameObject; star.transform.parent = this.transform; } } } Is there anyway I can proceduraly generate a ring for my planet I am new to this hole prodedural generation. Why not? The StarObject could be what you want, f.e. a planet with Convert js to c# 2 Answers Convert JS to C# 1 Answer Realistic Sun 2 Answers Js To C# Problem 0 Answers
https://answers.unity.com/questions/369541/procedural-generation.html?sort=oldest
CC-MAIN-2020-45
refinedweb
553
70.43
5611/how-to-getters-and-setters-work-in-java In Java, getter and setter are two conventional methods that are used for retrieving and updating the value of a variable. The following code is an example of a class with a private variable and few getter/setter methods: public class GetterAndSetter { private int num; public int getNumber() { return this.num; } public void setNumber(int number) { this.num = number; } } super() is used to call immediate parent. super() can be ...READ MORE You can use: new JSONObject(map); READ MORE Hey, @Pooja, Before starting with anything you should ...READ MORE I want to maintain a book library. ...READ MORE In Java getters and setters are completely ...READ MORE You can use Java Runtime.exec() to run python script, ...READ MORE First, find an XPath which will return ...READ MORE See, both are used to retrieve something ...READ MORE We can use Java API to play ...READ MORE keytool does not provide such basic functionality ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/5611/how-to-getters-and-setters-work-in-java
CC-MAIN-2021-10
refinedweb
171
69.28
Let’s say you are work ing with images for your project. A lot of times, when you have to work across multiple platforms, the encoding doesn’t remain the same. In these scenarios, you cannot process images directly by treating them like 2D matrices. One of the most common image formats you will comes across is JPEG. If you are working on the same platform using a library like OpenCV, you can directly read JPEG files into 2D data structures. If not, you will have to read it into a byte array, process it and then encode it back. How do we do that? A novice would do the following and run into an error: // Define file stream object, and open the file std::ifstream file("./sample.jpg", ios::binary); // Prepare iterator pairs to iterate the file content std::istream_iterator<unsigned char> begin(file), end; // Reading the file content using the iterator std::vector<unsigned char> buffer(begin,end); std::copy(buffer.begin(), buffer.end(), std::ostream_iterator<unsigned int>(std::cout, ",")); Why is there an error? The code seems fine, right? Well, here is the deal. On most of the machines, “char” type is signed. A JPEG file typically consists of both positive and negative values. So when you cast a negative number to unsigned int, you get a big garbage value. These big values in the output are actually negative values, but they were converted to garbage values during type casting. As we all know, when char is signed, its value can be -128 to 127, but a byte can be between 0 to 255. So any value greater than 127 would become negative between the range -128 to -1. You need to use unsigned char as given below: unsigned char *s; Or do this: is<<static_cast<unsigned int>(static_cast<unsigned char>(s[i]))<<","; ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cast to unsigned char first ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ then cast to unsigned int Here, we are casting to unsigned char first by using “static_cast<unsigned char>(s[i]”. After that, we are casting the resultant value to unsigned int. Basically, cast char to unsigned char first, and then to unsigned int. Do you see what happened here? Take a moment to think about it and see how it solves our problem. Why did we use static casting here? We used static_cast to instruct the compiler that you know that the conversion will not result into a truncation. For example, if you convert int to a char, the compiler will warn you that not all the values are going to fit inside this datatype. So if you are absolutely sure that none of the values will exceed the range, you can use static_cast and inform the compiler that you are aware of the situation and it’s okay with you. On a more thing to note about C++ is that you should avoid using “new” when it’s not necessary. You can use std::vector as shown below: // Define file stream object, and open the file std::ifstream file("image.jpg", ios::binary); // Prepare iterator pairs to iterate the file content! std::istream_iterator<unsigned char> begin(file), end; // Reading the file content using the iterator! std::vector<unsigned char> buffer(begin,end); The last line reads all the data from the file into buffer. As you can see, this solution doesn’t use “new”, neither does it use any kind of casting! Now you can print it as: std::copy(buffer.begin(), buffer.end(), std::ostream_iterator<unsigned int>(std::cout, ",")); For the sake of completion, you need to include the following headers to make this work: #include <vector> // for vector #include <iterator> // for std::istream_iterator and std::ostream_iterator #include <algorithm> // for std::copy ————————————————————————————————-
https://prateekvjoshi.com/2013/12/28/reading-jpeg-into-a-byte-array/
CC-MAIN-2021-39
refinedweb
615
63.7
On Mon, Dec 24, 2018 at 06:10:49PM +0200, Eli Zaretskii wrote: > > Date: Mon, 24 Dec 2018 04:08:47 +0200 > > From: Khaled Hosny <address@hidden> > > Cc: address@hidden, address@hidden, address@hidden, > > address@hidden, address@hidden > > > > I think we are almost good now. There is only one serious FIXME left: > > > > /* FIXME: guess_segment_properties is BAD BAD BAD. > > * we need to get these properties with the LGSTRING. */ > > #if 1 > > hb_buffer_guess_segment_properties (hb_buffer); > > #else > > hb_buffer_set_direction (hb_buffer, XXX); > > hb_buffer_set_script (hb_buffer, XXX); > > hb_buffer_set_language (hb_buffer, XXX); > > #endif > > > > We need to know, for a given lgstring we are shaping: > > * Its direction (from applying bidi algorithm). Each lgstring we are > > shaping must be of a single direction. > > Communicating this to ftfont_shape_by_hb will need changes in a couple > of interfaces (the existing shaping engines didn't need this > information). I will work on this soon. Great. > > * Its script, possibly after applying something like: > > > > Per previous discussions, we decided to use the Harfbuzz built-in > methods for determining the script, since Emacs doesn't have this > information, and adding it will just do the same as Harfbuzz does, > i.e. find the first character whose script is not Common etc., using > the UCD database. I think it was you who suggested to use the > Harfbuzz built-ins in this case.. > > * Its language, is Emacs allows setting text language (my understand is > > that it doesn’t). Some languages really need this for applying > > language-specfic features (Urdu digits, Serbian alternate glyphs, etc.). > > We don't currently have a language property for chunks of text, we > only have the current global language setting determined from the > locale (and there's a command to change that for Emacs, should the > user want it). This is not really appropriate for multilingual > buffers, but we will have to use that for now, and hope that in the > future, infrastructure will be added to allow more flexible > determination of the language of each run of text. (I see that > Harfbuzz already looks a the locale for its default language, but > since Emacs allows user control of this, however unlikely, I think > it's best to use the value Emacs uses.) I will work on this as well. Yes, better pass that from Emacs to HarfBuzz. Regards, Khaled
https://lists.gnu.org/archive/html/bug-gnu-emacs/2018-12/msg00877.html
CC-MAIN-2021-17
refinedweb
372
60.04
Plugin for Pelican that computes average read time. Project description Plugin for Pelican that computes a piece of content’s read time. It adds a readtime and readtime_string attributes to every article and/or page, with the time estimation for reading the article. Setting Up Adding ‘ReadTime’ to the list of plugins: In pelicanconf.py: PLUGINS = [ ... , 'readtime' ] 1. Words Per Minute Only In your settings you would use assign the READTIME_WPM variable to an integer like so: In pelicanconf.py: READTIME_WPM = 180 Every article’s read time would be calculated using this average words per minute count. (See the Usage section for how to use the calculated read times in templates). This is the simplest read time method. 2. Words Per Minute per language This is the preferred method if you are dealing with multiple languages. Take a look at the following settings In pelicanconf.py: READTIME_WPM = { 'default': { 'wpm': 200, 'min_singular': 'minute', 'min_plural': 'minutes', 'sec_singular': 'second', 'sec_plural': 'seconds' }, 'es': { 'wpm': 220, 'min_singular': 'minuto', 'min_plural': 'minutos', 'sec_singular': 'segundo', 'sec_plural': 'segundos' } } In this example the default reading time for all articles is 200 words per minute. Any articles in spanish will be calculated at 220 wpm. This is useful for information dense languages where the read time varies rapidly. Chances are the average reading time will not vary rapidly from language to language, however using this method also allows you to set plurals which make templating easier in the long run. Usage Four variables are accessible through the read time plugin: readtime, readtime_string, readtime_with_seconds, and readtime_string_with_seconds {% if article.readtime %} This article takes {{article.readtime}} minute(s) to read.{% endif %} // This article takes 4 minute(s) to read. {% if article.readtime_string %} This article takes {{article.readtime_string}} to read.{% endif %} // This article takes 4 minutes to read. {% if article.readtime_with_seconds %} This article takes {{article.read_with_seconds[0]}} minutes(s) and {{article.read_with_seconds[1]}} second(s) to read. {% endif %} // This article takes 4 minutes and 21 second(s) to read. {% if article.readtime_string_with_seconds %} This article takes {{article.readtime_string_with_seconds}} to read.{% endif %} // This article takes 4 minutes, 1 second to read. Links Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pelican-readtime/
CC-MAIN-2022-33
refinedweb
368
51.34
You can subscribe to this list here. Showing 7 results of 7 Hi, I am trying to get Gnome2 installed on my machine (Jaguar, Fink from scratch). I got the ".info" and ".patch" files from Mr. Sekino. I was able to install gnome-desktop (2.0.8-1), gnome-session (2.0.7-1) and control-center (2.0.1.1-1) without much problem. Docbook-dtd-catalog (1.0-1) and scrollkeeper (0.3.11-1) were also installed as dependencies to these packages. I ran into problem with installing gnome-applets (2.0.3-1) and eog (1.0.3-1). Installation of gnome-applets stopped at setting gnome-panel (2.0.9-1). Gnome-panel-dev and -shlib were both installed, however, despite of the problem. Both gnome-panel and eog installation errors occurred was due to scrollkeeper-update (at least that was what I guessed). The errors are shown as below: (for gnome-panel) <snip> > pkg gnome-panel version ### > pkg gnome-panel version 2.0.9-1 > The following package will be installed or updated: > gnome-panel > dpkg -i > /sw/fink/dists/local/main/binary-darwin-powerpc/gnome-panel_2.0.9-1_darwin-pow > erpc.deb > (Reading database ... 24806 files and directories currently installed.) > Preparing to replace gnome-panel 2.0.9-1 (using > .../gnome-panel_2.0.9-1_darwin-powerpc.deb) ... > Unpacking replacement gnome-panel ... > Setting up gnome-panel (2.0.9-1) ... > /sw/var/lib/dpkg/info/gnome-panel.postinst: line 6: 606 Segmentation fault > scrollkeeper-update > dpkg: error processing gnome-panel (--install): > subprocess post-installation script returned error exit status 139 > Errors were encountered while processing: > gnome-panel > ### execution of dpkg failed, exit code 1 > Failed: can't install package gnome-panel-2.0.9-1 > [borcim113:~] chia% (for eog) <snip> > Writing control file... > Writing package script postinst... > Writing package script postrm... > Writing conffiles list... > dpkg-deb -b root-eog-1.0.3-1 /sw/fink/dists/local/main/binary-darwin-powerpc > dpkg-deb: building package `eog' in > `/sw/fink/dists/local/main/binary-darwin-powerpc/eog_1.0.3-1_darwin-powerpc.de > b'. > ln -sf > /sw/fink/dists/local/main/binary-darwin-powerpc/eog_1.0.3-1_darwin-powerpc.deb > /sw/fink/debs/ > rm -rf /sw/src/root-eog-1.0.3-1 > dpkg -i > /sw/fink/dists/local/main/binary-darwin-powerpc/eog_1.0.3-1_darwin-powerpc.deb > Selecting previously deselected package eog. > (Reading database ... 25132 files and directories currently installed.) > Unpacking eog (from .../eog_1.0.3-1_darwin-powerpc.deb) ... > Setting up eog (1.0.3-1) ... > /sw/var/lib/dpkg/info/eog.postinst: line 6: 16106 Segmentation fault > scrollkeeper-update > dpkg: error processing eog (--install): > subprocess post-installation script returned error exit status 139 > Errors were encountered while processing: > eog > ### execution of dpkg failed, exit code 1 > Failed: can't install package eog-1.0.3-1 > [SH-Calvin:~] chia% To make the long story short, I found that in the ".info" files for both gnome-panel and eog, there was a section towards the end as the following: > PostInstScript: << > scrollkeeper-update > export GCONF_CONFIG_SOURCE=`gconftool-2 --get-default-source` > for s in panel-global-config panel-per-panel-config \ > mailcheck pager tasklist clock fish ; do > gconftool-2 --makefile-install-rule %p/etc/gconf/schemas/$s.schemas > >/dev/null > done > << > PostRmScript: << > if [ upgrade != "$1" ]; then > scrollkeeper-update > fi > << In the .info files for control-center and gnome-session (two packages that installed successfully), the line "scrollkeeper-update" and the PostRmScript section were not there. I am not a package developer, but I thought that I would try to modify the ".info" file to see if that would fix the problem. I thought that I would delete out the line "scrollkeeper-update" and the PostRmScript section. Remove the .deb file and re-install these two packages. I have contacted Mr. Sekino about errors in gnome-applets (-panel) installation, but have not heard back from him. I am sure that he is very busy, so I thought that I would try to figure this one out myself. However, I would like someone's opinion before I go ahead and cause a catastrophy on my system. Thanks for your help!! Best regards, Chia Hung _______________________________________________ Chia Hung Postdoctoral Research Fellow Department of Molecular Microbiology Washington University School of Medicine 660 S. Euclid Ave., Box 8230 St. Louis, MO 63110 Tel: (314) 747-3627 The proper place for these reports is the bug tracker. They won't get fixed otherwise. -Ben On Tuesday, October 15, 2002, at 06:44 AM, jfm wrote: > gnupg is not alone to install in wrong places: > > ~# dpkg -S /sw/libexec > octave-atlas, gnupg: /sw/libexec > ~# dpkg -S /sw/man3 > extutils-f77, pgplot-perl: /sw/man3 > ~# dpkg -S /sw/man > cdbkup: /sw/man > ~# dpkg -S /sw/doc > gperf, anjuta: /sw/doc > ~# ls -p1 /sw/bin | grep '/$' | sed -r 's|^|/sw/bin/|g' | sed -r > 's|/$||g' | xargs dpkg -S > calc: /sw/bin/cscript > Finally, files directly in /sw/share : > ~# ( cd /sw/share ; ls -p1 | grep '[A-Za-z0-9]$' | sed -r > 's|^|/sw/share/|g' | xargs dpkg -S | sort | cut -d ':' -f 1 | uniq -c > ) > 1 emacs-w3 > 124 gmt > 2 gnome-games > 19 gnome-libs > 1 gnuplot > 1 gtop > 1 openjade > 1 openssh > 1 units > When a package installs 20 or 120 files in /sw/share, it might seem > preferable to install them in a subdirectory... > > JF Mertens > > > > ------------------------------------------------------- > This sf.net email is sponsored by:ThinkGeek > Welcome to geek heaven. > > _______________________________________________ > Fink-devel mailing list > Fink-devel@... > > At 15:56 Uhr +0200 23.10.2002, Christian Schaffner wrote: ?) For changes to BuildDepends: no. For changes to Depends: yes. Cheers, Max -- ----------------------------------------------- Max Horn Software Developer phone: (+49) 6151-494890?) >>> And is an "=" for the version really required _ wouldn't ">=" be >>> good enough ? >> >> Unfortunately 'neon23 (= 0.23.4-10)' is needed right now, since >> subversion depends on a specific version of neon. It will complain >> otherwise and not build. This should be fixed for a future version of >> svn. > > So, you mean, 0.23.5 will not work once it is released? It should, as > it has the same API - if the API changes, they will go to 0.24.x, and > you make neon24. So I don't see the problem here.. Well, I checked it again: It was like this (unfortunately) with subversion 0.14.2 and below. But 0.14.3 seems to work fine with 0.23.5 (which will be in the cvs unstable tree soon). So I will change the "=" to ">=". Thanks for your help, chris. dear finkers i had a go at porting geal (Electronic design application for GNOME) to darwin. now in version 0.18 pygtk (>= 1.99.8) shows up as a dep. now pygtk-1.99.13 (i called it pygtk2-1.99.13) crashes with the following bus error: [mathias:~] mathias% python -v /sw/src/pygtk2-1.99.13-1/pygtk-1.99.13/examples/simple/hello.py [...] [GCC Apple cpp-precomp 6.14] on darwin Type "help", "copyright", "credits" or "license" for more information. import gtk # directory /sw/lib/python2.2/site-packages/gtk-2.0/gtk # /sw/lib/python2.2/site-packages/gtk-2.0/gtk/__init__.pyc matches /sw/lib/python2.2/site-packages/gtk-2.0/gtk/__init__.py import gtk # precompiled from /sw/lib/python2.2/site-packages/gtk-2.0/gtk/__init__.pyc import gobject # dynamically loaded from /sw/lib/python2.2/site-packages/gtk-2.0/gobjectmodule.so # /sw/lib/python2.2/site-packages/gtk-2.0/gtk/keysyms.pyc matches /sw/lib/python2.2/site-packages/gtk-2.0/gtk/keysyms.py import gtk.keysyms # precompiled from /sw/lib/python2.2/site-packages/gtk-2.0/gtk/keysyms.pyc import encodings # directory /sw/lib/python2.2/encodings # /sw/lib/python2.2/encodings/__init__.pyc matches /sw/lib/python2.2/encodings/__init__.py import encodings # precompiled from /sw/lib/python2.2/encodings/__init__.pyc # /sw/lib/python2.2/codecs.pyc matches /sw/lib/python2.2/codecs.py import codecs # precompiled from /sw/lib/python2.2/codecs.pyc import struct # dynamically loaded from /sw/lib/python2.2/lib-dynload/struct.so import _codecs # dynamically loaded from /sw/lib/python2.2/lib-dynload/_codecs.so # /sw/lib/python2.2/encodings/aliases.pyc matches /sw/lib/python2.2/encodings/aliases.py import encodings.aliases # precompiled from /sw/lib/python2.2/encodings/aliases.pyc # /sw/lib/python2.2/encodings/utf_8.pyc matches /sw/lib/python2.2/encodings/utf_8.py import encodings.utf_8 # precompiled from /sw/lib/python2.2/encodings/utf_8.pyc Bus error [mathias:~] mathias% as i'm no python expert at all i hope anyone from the list can help me out. i have submitted the packages to the submission tracker so if you want to have a go you can get the files there. i have mailed with Jeremy Higgs, the maintainer of pygtk-0.6.9-2 and we agreed that i should submit the package and ask the list. thanks in advance for any help mathias Etienne Beaule wrote: > I'm using Mac OS 10.2.1 and the unstable 10.2 Fink distro. So far, > pretty much everything I tried to install worked. I tried to compile and > install bundle-gnome and libgtop failed to build with the following error: > > config.status: creating macros/Makefile > sed: 42: /tmp/cs4054-8697/subs-2.sed: unescaped newline inside > substitute pattern > config.status: creating config.h > make > make: *** No targets. Stop. > ### execution of make failed, exit code 2 > Failed: compiling libgtop-1.0.13-10 failed > > This may have been reported earlier (in which case, you have my > apologies)... Search features on the mailing list archive are not > excellent... Just use google for searching. "subs-2.sed: unescaped newline inside substitute pattern" gives you quite a number of hits, among them several reports from Pedro Massobrio in August and September about the same package as you. Unfortunately, as far as i can see, there was no definite explanation. The only explanation I have seen in other contexts was a problem with zsh. You should verify with "ll /bin/*sh" that /bin/sh is identical to /bin/bash and not to /bin/zsh as it used to be on 10.1. -- Martin I'm using Mac OS 10.2.1 and the unstable 10.2 Fink distro. So far,=20 pretty much everything I tried to install worked. I tried to compile=20 and install bundle-gnome and libgtop failed to build with the following=20= error: config.status: creating macros/Makefile sed: 42: /tmp/cs4054-8697/subs-2.sed: unescaped newline inside=20 substitute pattern config.status: creating config.h make make: *** No targets. Stop. ### execution of make failed, exit code 2 Failed: compiling libgtop-1.0.13-10 failed This may have been reported earlier (in which case, you have my=20 apologies)... Search features on the mailing list archive are not=20 excellent... =C9tienne=
http://sourceforge.net/p/fink/mailman/fink-devel/?viewmonth=200210&viewday=23
CC-MAIN-2014-52
refinedweb
1,817
53.58
Search: Search took 0.03 seconds. - 5 Dec 2007 8:08 PM - Replies - 1 - Views - 1,954 What is the problem? The Ext PagingToolbar sends parameters start and limit. Just create query using HQL or Criteria and use these parameters with SetFirstResult and SetMaxResults to get the data you... - 5 Dec 2007 12:21 AM - Replies - 23 - Views - 8,204 Yes... Here's what ScottGu says about it: So, there is a hope :) - 4 Dec 2007 2:29 AM - Replies - 23 - Views - 8,204 Visual Studio 2008 seems to be quite nice: It also works in free Visual Studio Express edition. - 28 Nov 2007 12:17 PM - Replies - 0 - Views - 903 Hello Is it possible to return some value with the createInterceptor? I'd like to intercept the original function AND return something to the caller. The current implementation only checks for... - 16 Nov 2007 8:34 AM Well... this fix is for Ext 1.1 Grid. Ext 2 GridPanel is a bit different, but it seems that similar fix should also be needed. However, I'm still stuck with Ext 1.1, so I don't know much about Ext 2... - 16 Nov 2007 7:40 AM Hmm, I don't know... I just looked at your code and, since you are using reconfigure on Grid, thought that this fix might help you. You can put this fix directly into ext-all.js, but it's better to... - 15 Nov 2007 6:31 PM Try this: It's for Ext 1.1, but perhaps it also works in Ext 2 HTH, Tom - 15 Nov 2007 6:21 PM - Replies - 10 - Views - 11,679 Thanks a lot! I didn't notice anything wrong, but I didn't use it too much. I'll change my copy ASAP Bye, Tom - 9 Nov 2007 1:35 PM - Replies - 10 - Views - 11,679 Here's something I just wrote, inspired by the code above, so it's probably buggy. It uses DateField and TimeField (it's from Ext 2, but works with Ext 1.1). Ext.namespace("Ext.ux",... - 6 Nov 2007 1:12 PM - Replies - 4 - Views - 1,641 Did you use reconfigure on the grid? Load mask doesn't work after reconfigure. If that's the case, see here: for a possible fix. - 6 Nov 2007 11:28 AM Hm, perhaps you misunderstood me... I'd like to use both 123.45 and 123,45 as a valid float number. So, I hoped that specifying '.,' as a separator would treat it as a list of valid separator... - 5 Nov 2007 9:02 PM Some of my users have US keyboard, some have Croatian. If you press key '.' on numeric keyboard with US layout, it will emit the '.' character, but on Croatian it will be ','. So, it's easier to... - 5 Nov 2007 3:47 PM The documentation for Numberfield says: decimalSeparator : String Character(s) to allow as the decimal separator (defaults to '.') which means that more than one character can be used as a... - 5 Nov 2007 12:47 PM - Replies - 7 - Views - 3,323 No, I don't think it's a bug in Ext. You are probably attaching the same event handler twice. Try using Firebug, and look at the call stack, and/or try to log calls to your method to see what's going... - 5 Nov 2007 10:28 AM - Replies - 14 - Views - 5,135 It depends... I tried that route a while ago, but in the end it was quite messy and I had to rewrite everything in 'pure' Ext. It didn't feel right, and it was painful to use events between the stuff... - 31 Oct 2007 3:02 AM - Replies - 0 - Views - 1,844 After a call to reconfigure, loadmask is lost. I believe that, instead of Ext.apply, it should use Ext.applyIf. So, instead of this.loadMask = new Ext.LoadMask(this.container,... - 31 Oct 2007 2:59 AM - Replies - 8 - Views - 2,698 Try this and see if it helps HTH, Tom - 31 Oct 2007 2:56 AM - Replies - 2 - Views - 1,976 I believe that Grid.reconfigure should call Ext.applyIf, instead of Ext.apply. Here's a fix that seems to work for me. In Grid.reconfigure: if(this.loadMask) { this.loadMask.destroy(); //... - 23 Oct 2007 1:23 PM Jump to post Thread: New ExtJS 2.0 Theme: "Slate" by FritFrut - Replies - 186 - Views - 188,788 Mmm, great... - 19 Oct 2007 12:48 AM - Replies - 28 - Views - 12,108 You can use 'metachange' event on the Grid (in Ext 1.1, don't know about Ext 2.), send the new grid configuration from the server, and reconfigure grid accordingly. There's an implementation of this... - 18 Oct 2007 7:22 AM - Replies - 21 - Views - 17,796 This is great! Thanks a lot! - 15 Oct 2007 9:30 AM - Replies - 3 - Views - 2,082 It's quite simple... alpha channel is like gray scale channel, where level of 'grayness' represents level of transparency. You can use any tool on the alpha channel, for example you can draw, select,... - 15 Oct 2007 8:35 AM - Replies - 3 - Views - 2,082 Hmm, if I remember correctly, in Photoshop and Gimp you need an alpha channel to have transparent PNGs. - 12 Oct 2007 2:29 PM Jump to post Thread: Combo with first entry "null" by FritFrut - Replies - 25 - Views - 11,519 No, there are many problems with this. Data store should be precisely that, a DATA store. Empty row in this case is not meaningful data, it's here just to add some functionality of the UI. You... - 12 Oct 2007 1:46 PM Jump to post Thread: Ext 2.0 - Ext Designer by FritFrut - Replies - 63 - Views - 41,661 Thanks! This looks very nice and useful. Results 1 to 25 of 80
https://www.sencha.com/forum/search.php?s=9089177b99f70708466a7b1969c29fd7&searchid=13305203
CC-MAIN-2015-48
refinedweb
970
84.57
Screen Scraping What is screen scraping? From what I know it's like getting info from some database(i think). And also how can I screen scrape? Thanks for your answers Screen Scraping is the art and science of: 1) getting all the text from a computer display (terminal, webpage, etc.) and then 2) selecting out only those data fields of interest for storage or further processing. It used to be about getting data from terminal displays but these daze it is mostly about scraping data off of web pages. The Pythonista tools that I prefer for web scraping are requests(for getting all the HTML of a webpage) and beautiful soup 4(selecting out only those data fields of interest). bs4 is complicated but it is supercool once you get the hang of it. Here are two recent examples of web scraping. They follow the model: import bs4, requests def get_beautiful_soup(url): return bs4.BeautifulSoup(requests.get(url).text) soup = get_beautiful_soup('') print(soup.prettify()) # See: for all the things you can do with the soup. As you can see by looking at the output, the harder part is selecting out only those data fields of interest. ;-) If bs4 is too complicated for your purposes, you can do html = requests.get(url).textand then try using str.find()and str.partition()or Python's regular expressions module, reas a poor man's soup. Happy scraping. Cool! Thanks for the response - scraperhunk This post is deleted!last edited by
https://forum.omz-software.com/topic/1513/screen-scraping
CC-MAIN-2021-39
refinedweb
248
67.15
Zim, an application that brings the concept of a wiki to the users’ desktop by helping them store information, link pages and edit with WYSISYG markup, is now at version 0.58. With Zim 0.58, creating a new page is as easy as linking to a non-existing page, as everything is stored in a folder structure that can also have attachments. Highlights of Zim 0.58: • A new plugin has been added for a distraction-free fullscreen mode; • An option has been added to limit the tasklist plugin to certain namespaces; • An option has been added to the tasklist plugin in order to flag non-actionable tasks with a special tag; • A template option to list attachments in export is now available; • Class attributes were added to links in the HTML output. A complete list of changes is available in the official announcement. Download Zim 0.58 right now from Softpedia.
http://linux.softpedia.com/blog/Graphical-Text-Editor-Zim-0-58-Gets-a-New-Fullscreen-Mode-315957.shtml
CC-MAIN-2015-32
refinedweb
153
69.82